Hi,
You can try the search query -> "+information +retrieval"
Meaning the document must contain both keywords. Doc 5 will also be in
the results.
https://lucene.apache.org/solr/guide/8_7/the-standard-query-parser.html#the-boolean-operator
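As a sketch of how that query would be sent (the collection name "docs" and the host are hypothetical placeholders, not from this thread), note that the `+` operator has to be URL-encoded as `%2B`:

```python
from urllib.parse import urlencode

# Both terms carry the required (+) operator, so only documents
# containing BOTH "information" AND "retrieval" match.
params = {
    "q": "+information +retrieval",
    "wt": "json",
}
# Hypothetical collection name "docs"; adjust to your environment.
url = "http://localhost:8983/solr/docs/select?" + urlencode(params)
print(url)
```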
- Mohandoss.
On Wed, Jan 20, 2021 at 1:38 AM gnandre wr
rting and updating if a query is executed. My
> environment is three nodes, 3 shards and 2 replicas.
>
> I noticed there was master slave mode in the old version, but for solr
> cloud, I don’t know whether it is doable.
>
> Derrick
>
> Sent from my iPhone
>
> > On
Hi,
You haven't shared information about your environment: how frequently
you are committing the changes, whether the collection your users search gets
real-time inserts / updates, etc.
But if you are not doing any real-time analysis with the user query
information, you can store the information
any parameter which can reduce the TLOG replication
time.
Have a great week ahead!
Regards,
Mohandoss.
On Fri, Jan 15, 2021 at 9:20 PM Shawn Heisey wrote:
> On 1/15/2021 7:56 AM, Doss wrote:
> > 1. Suppose we have 10 node SOLR Cloud setup, is it possible to dedicate 4
> > nodes
Dear All,
1. Suppose we have 10 node SOLR Cloud setup, is it possible to dedicate 4
nodes for writes and 6 nodes for selects?
2. We have a SOLR cloud setup for our customer facing applications, and we
would like to have two more SOLR nodes for some backend jobs. Is it good
idea to form these node
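On question 1: SolrCloud does not dedicate whole nodes to writes (updates always route through each shard's leader), but since Solr 7.4 the `shards.preference` query parameter can steer search traffic toward particular replica types, which approximates a read/write split. A sketch, with a hypothetical collection name:

```python
from urllib.parse import urlencode

# Prefer PULL replicas for query traffic where available, leaving
# leaders (which handle indexing) less loaded by searches.
# Collection name "products" is a placeholder.
params = {
    "q": "*:*",
    "shards.preference": "replica.type:PULL",
}
url = "http://localhost:8983/solr/products/select?" + urlencode(params)
print(url)
```

This assumes the collection is created with PULL (or TLOG) replicas in addition to NRT ones.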
We have 12 node SOLR cloud with 3 zookeeper ensemble
RAM: 80 CPU:40 Heap:16GB Records: 4 Million
We do real-time updates and deletes (by ID), and we use in-place updates
for 4 fields.
We have one index with 4 shards: 1 shard in 3 nodes
Often we are getting the following errors
1. *2021-01-08 17:
- - [18/Sep/2020:07:54:06 +] "GET
/solr/admin/info/logging?_=1600414902762&since=0&wt=json HTTP/1.1" 200 802
0:0:0:0:0:0:0:1 - - [18/Sep/2020:07:54:17 +] "GET
/solr/admin/info/logging?_=1600414902762&since=0&wt=json HTTP/1.1" 200 802
0:0:0:0:0:0:0:1 - - [18/Sep/2020:07:54:28 +] "GET
/solr/admin/info/logging?_=1600414902762&since=0&wt=json HTTP/1.1" 200 802
Thanks.
Doss.
Hi Dominique,
Our issues are similar to the one discussed here:
https://github.com/eclipse/jetty.project/issues/4105
What are your views on this?
Thanks,
Mohandoss.
On Tue, Aug 11, 2020 at 7:06 AM Doss wrote:
> Hi Dominique,
>
> Thanks for the response.
>
> I don't think I would
information? Please suggest other ways to dig this further.
Did you check the Zookeeper logs?
>> We never looked at the Zookeeper logs, will check and share, is there
any kind of information to watch out for?
Regards,
Doss
On Monday, August 10, 2020, Dominique Bejean
wrote:
> Doss,
>
&
ts*
Easy GC Report says GC health is good, one server's gc report:
https://drive.google.com/file/d/1C2SqEn0iMbUOXnTNlYi46Gq9kF_CmWss/view?usp=sharing
CPU Load Pattern:
https://drive.google.com/file/d/1rjRMWv5ritf5QxgbFxDa0kPzVlXdbySe/view?usp=sharing
Thanks,
Doss.
On Mon, Aug 10, 2020 at
this?
On Mon, Aug 10, 2020 at 12:01 AM Doss wrote:
> Hi,
>
> We are having 3 node SOLR (8.3.1 NRT) + 3 Node Zookeeper Ensemble now and
> then we are facing "Max requests queued per destination 3000 exceeded for
> HttpDestination"
>
> After restart everything
Hi,
We are having 3 node SOLR (8.3.1 NRT) + 3 Node Zookeeper Ensemble now and
then we are facing "Max requests queued per destination 3000 exceeded for
HttpDestination"
After restart everything starts working fine until another problem. Once
a problem occurs we are seeing so many TIMED_WAIT
tQueuedSynchronizer$ConditionObject@1e0205c3
")
Server 3:
*4210* Threads are in TIMED_WAITING
("lock":"java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@5ee792c0
")
Please help in resolving this issue.
Thanks,
Doss
On Wed, Aug 5, 2020 at 11:30 PM D
will help; can we use both? Please suggest.
Thanks,
Doss.
On Sunday, August 2, 2020, Doss wrote:
> Hi All,
>
> We are having SOLR (8.3.1) CLOUD (NRT) with Zookeeper Ensemble , 3 nodes
> each on Centos VMs
>
> SOLR nodes have 66GB RAM, 15GB heap memory, 4 CPUs.
> Record Count:
olr/userinfoindex_6jul20_shard1_replica_n3/ =>
java.io.IOException: java.util.concurrent.RejectedExecutionException: Max
requests queued per destination 3000 exceeded for
HttpDestination[http://172.29.3.23:8983
]@5a9a216b,queue=3000,pool=MultiplexConnectionPool@545dc448
[c=4/4,b=4,m=0,i=0]
Thanks,
Doss.
{
  "command": "full-import",
  "status": "busy",
  "importResponse": "A command is still running...",
  "statusMessages": {
    "Time Elapsed": "298:1:59.986",
    "Total Requests made to DataSource": "1",
    "Total Rows Fetched": "17426",
    "Total Documents Processed": "17425",
    "Total Documents Skipped": "0",
    "Full Dump Started": "2020-01-09 19:10:02"
  }
}
Thanks,
Doss.
22, 2020 at 4:27 PM Doss wrote:
>
> > HI,
> >
> > SOLR version 8.3.1 (10 nodes), zookeeper ensemble (3 nodes)
> >
> > Read somewhere that the score join parser will be faster, but for me it
> > produces no results. I am using string type fields for from and
Hi,
SOLR version 8.3.1 (10 nodes), zookeeper ensemble (3 nodes)
One of our use cases requires joins; we are joining 2 large indexes. As
required by SOLR, one index (2GB) has one shard and 10 replicas and the
other has 10 shards (40GB / shard).
The query takes too much time, sometimes minutes
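For reference, a sketch of how such a cross-collection join is typically expressed with the `{!join}` query parser; the collection and field names here are hypothetical, not taken from this thread. The `fromIndex` side must be the single-shard, fully replicated collection, which matches the 2GB index described above:

```python
from urllib.parse import urlencode

# {!join} with fromIndex joins against another collection. The "from"
# collection ("interests" here) has to be co-located and single-shard;
# all names below are placeholders.
join_q = "{!join from=person_id to=id fromIndex=interests}category:music"
params = {"q": join_q, "wt": "json"}
url = "http://localhost:8983/solr/people/select?" + urlencode(params)
print(url)
```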
Hi,
4 to 5 million documents.
For an NRT index, we need a field to be updated very frequently and to filter
results based on it. Will in-place updates help us?
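One caveat worth noting: Solr only performs a true in-place update when the target field is single-valued, numeric, non-indexed, non-stored, and has docValues enabled; otherwise it silently falls back to a full atomic update of the whole document. A sketch of the update payload (the id and field name are hypothetical):

```python
import json

# Partial-update document using "set" on a docValues-only numeric field.
# If "view_count" were indexed or stored, Solr would rewrite the whole
# document instead of updating in place.
doc = {"id": "profile-123", "view_count": {"set": 42}}
payload = json.dumps([doc])
print(payload)  # body for POST /solr/<collection>/update
```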
Thanks,
Doss.
Hi Experts,
We are migrating our entire search platform from SPHINX to SOLR. We want
to do this without any flaws, so any suggestions would be greatly appreciated.
Thanks!
On Fri, Sep 6, 2019 at 11:13 AM Doss wrote:
> Dear Experts,
>
> For a matchmaking portal, we have one requirem
got
a successful response from that particular shard?
On Thu, Sep 5, 2019 at 4:53 PM Jörn Franke wrote:
> 1 Node zookeeper ensemble does not sound very healthy
>
> > Am 05.09.2019 um 13:07 schrieb Doss :
> >
> > Hi,
> >
> > We are using 3 node SOLR (7.0.1) c
ly not visited profiles.
>
> > Am 06.09.2019 um 07:43 schrieb Doss :
> >
> > Dear Experts,
> >
> > For a matchmaking portal, we have one requirement where in, if a customer
> > viewed complete details of a bride or groom then we have to exclude that
> > profil
Dear Experts,
For a matchmaking portal, we have one requirement wherein, if a customer
has viewed the complete details of a bride or groom, we have to exclude that
profile id from further search results. Currently, along with other details,
we are storing the viewed profile ids in a field (multivalued
Dear Jack,
Thanks for your input. None of our cores were created with autoAddReplicas.
The problem we are facing is that, upon rebooting, the leader tries to sync
the data with the other nodes which are part of the cluster.
Thanks,
Doss.
On Thu, Sep 5, 2019 at 9:46 PM Jack Schlederer
wrote:
> My mistake
Thanks Erick for the explanation. The sum of all our index sizes is about
138 GB; only 2 indexes are > 19 GB, so it is time to scale up :-). Adding new
hardware will require at least a couple of days; until then, is there any
option to control the replication method?
Thanks,
Doss.
On Thu, Sep 5, 2019 at 6
of
> which has a 20g index. You don't need as much memory as your aggregate
> index size, but this system feels severely under provisioned. I suspect
> that's the root of your instability
>
> Best,
> Erick
>
> On Thu, Sep 5, 2019, 07:08 Doss wrote:
>
> >
responsive state.
As soon as a node starts, the replication process is initiated for all 130
cores. Is there any way to control it, like one after the other?
Thanks,
Doss.
running the SOLR instances in VMs. We are maintaining only the last
three days of Zookeeper logs.
Thanks,
Doss.
On Tue, Sep 3, 2019 at 8:21 PM Erick Erickson
wrote:
> The “unable to create new thread” is where I’d focus first. It means
> you’re running out of some system resources and it’s
Can we fix "AlreadyClosedException" without restarting the service?
Thanks,
Doss.
cted uniqueness; am I doing something
wrong? Please guide me.
Thanks,
Doss.
/yourindexname/dataimport?command=full-import&commit=true
Delta Import:
http://node1ip:8983/solr/yourindexname/dataimport?command=delta-import&commit=true
If you want to do the delta import automatically you can setup a cron
(linux) which can call the URL periodically.
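A minimal sketch of a script such a cron job could run (it only builds and prints the delta-import URL; `node1ip` and `yourindexname` are the placeholders already used above):

```python
from urllib.parse import urlencode
from urllib.request import urlopen

# Build the delta-import URL shown above. "node1ip" and
# "yourindexname" are placeholders for your node and index.
params = {"command": "delta-import", "commit": "true"}
url = ("http://node1ip:8983/solr/yourindexname/dataimport?"
       + urlencode(params))
# urlopen(url)  # uncomment when pointed at a real Solr node
print(url)
```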
Best,
Doss.
Hi,
I am assuming you have the same index replicated on all 3 nodes; in that
case, doing a full / delta import using DIH on one node will replicate the
data to the other nodes, so there is no need to do it on all 3 nodes. Hope
this helps!
Best,
Doss.
--
Sent from: http://lucene.472066.n3.nabble.com/Solr
our kind of situation, which replica type can we choose? All NRT, or NRT
with TLOG?
Thanks in advance!
Best,
Doss.
which reads "No registered leader was found
after waiting for 4000ms" followed by this full index.
Thanks,
Doss.
On Sun, Dec 30, 2018 at 8:49 AM Erick Erickson
wrote:
> No. There's a "peer sync" that will try to update from the leader's
> transaction log if
We are using a 3 node SOLR (64GB RAM / 8 CPU / 12GB heap) cloud setup with
version 7.x. We have 3 indexes/collections on each node; the index size is
about 250GB. NRT with 5 sec soft / 10 min hard commit. Sometimes on any one
node we see a full index replication start running. Is there any
configuration
happening after the results are
obtained by the parent query, so it won't create much performance impact, but
we are going to use this functionality for a large and busy index, so before
taking further steps I need an expert opinion.
Thanks,
Doss.
then the result should have the country
value as India.
Please help to crack this.
Thanks,
Doss.
optimizing at daytime will be a problem; on the other side, if the heap
gets full then an OOM exception happens and the cloud crashes.
I read somewhere that G1GC will give better results, but SOLR experts
don't encourage using it.
What else can we do to resolve this issue?
Thanks,
Doss.
Hi,
Is there any network parameter that we need to fine-tune? Is there any
specific tweaking needed to deploy SOLR in virtual machines? We use VMware.
Thanks.
Hi Emir,
Just realised DBQ = Delete by Query, we are not using that, we are deleting
documents using the document id / unique id.
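For completeness, a sketch of what such a delete-by-id request body looks like (as opposed to delete-by-query); the ids are hypothetical:

```python
import json

# Delete-by-id targets specific documents by their unique key and
# avoids the heavier reordering/version bookkeeping that
# delete-by-query can incur in SolrCloud.
payload = json.dumps({"delete": ["doc-1", "doc-2"]})
print(payload)  # body for POST /solr/<collection>/update?commit=true
```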
Thanks,
Mohandoss.
Hi Emir,
We do fire delete queries but that is very very minimal.
Thanks!
Mohandoss
@wunder
Are you sending updates in batches? Are you doing a commit after every
update?
>> We want the system to be near real time, so we are not doing updates in
>> batches and also we are not doing commit after every update.
>> autoSoftCommit once in every minute, and autoCommit once in every
We have a SOLR (7.0.1) cloud of 3 VM Linux instances with 4 CPUs and 90 GB
RAM, with a Zookeeper (3.4.11) ensemble running on the same machines. We have
130 cores with an overall size of 45GB. No sharding; almost all VMs have the
same copy of the data. These nodes are under an LB.
Index Config:
=
300
30
-Dbootstrap_conf=true, which made SOLR
look for core configuration files in the wrong directory; after removing
this it started without any issues.
I also commented out the suggester component, which made SOLR load faster.
Thanks,
Doss.
On Thu, Nov 20, 2014 at 9:47 PM, Erick Erickson
wrote:
> D
*How much memory are you allocating to the JVM?*
5GB for the JVM; total RAM available in the systems is 30 GB.
*Can you restart Tomcat without a problem?*
This problem is occurring in production; I have never tried.
Thanks,
Doss.
On Wed, Nov 19, 2014 at 7:55 PM, Erick Erickson
wrote:
> You'
whole data again. We are using this setup in production; because of this
issue we are having 1 to 1.5 hours of service downtime. Any suggestions
would be greatly appreciated.
Thanks,
Doss.
, 2007, at 3:29 AM, Doss wrote:
> > We are running an appalication built using SOLR, now we are trying
> > to build
> > a tagging system using the existing SOLR indexed field called
> > "tag_keywords", this field has different keywords seperated by
> > comma,
Dear all,
We are running an application built using SOLR. Now we are trying to build
a tagging system using the existing SOLR indexed field called
"tag_keywords". This field has different keywords separated by commas;
please give suggestions on how we can build a tagging system using this field.
Th
Thanks Yonik.
Regards,
Doss.
On 5/25/07, Yonik Seeley <[EMAIL PROTECTED]> wrote:
On 5/24/07, Doss <[EMAIL PROTECTED]> wrote:
> Is it advisable to maintain a large amount of data in synonyms.txt file?
It's read into an in-memory map, so the only real impact is increas
Dear all,
Is it advisable to maintain a large amount of data in synonyms.txt file?
Thanks,
Doss.
Hi Yonik,
Thanks for your quick response. My question is this: can we do incremental
backup/replication in SOLR?
Regards,
Doss.
M. MOHANDOSS Software Engineer Ext: 507 (A BharatMatrimony Enterprise)
- Original Message -
From: "Yonik Seeley" <[EMAIL PROTECTED
e of words?
Thanks,
Doss.