We have duplicate records in two shards and want to delete one set of duplicate
records from one shard.
curl --proxy
'http://host.abc.com:8983/solr/collection1/update?shards=localhost:8983/solr/collection1&commit=true'
-H 'Content-Type: text/xml' --data-binary
Hi,
We upgraded our cluster to Solr 4.10.0 for a couple of days and then reverted
back to 4.8.0. However, the dashboard still shows Solr 4.10.0. Do you know why?
* solr-spec 4.10.0
* solr-impl 4.10.0 1620776
* lucene-spec 4.10.0
* lucene-impl 4.10.0 1620776
We recently added
Thank you for replying.
We added a new shard to the same cluster, where some shards are showing Solr
version 4.10.0 and this new shard is showing Solr version 4.8.0. All shards
source the Solr software from the same location and use the same startup
script. I am surprised how the older shards are still running
are very recent.
Try searching the JIRA for Solr for details.
Best,
Erick
On Tue, Jan 27, 2015 at 1:51 PM, Joshi, Shital shital.jo...@gs.com wrote:
Hello,
We have a SolrCloud cluster (5 shards and 2 replicas) on 10 boxes and three
zookeeper instances. We have noticed that when a leader node
problem has been that the Zookeeper
timeout used to default to 15 seconds, and occasionally a node would be
unresponsive (sometimes due to GC pauses) and exceed the timeout. So upping
the ZK timeout has helped some people avoid this...
FWIW,
Erick
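For reference, the session timeout Erick mentions can be raised when starting each Solr node. A sketch only; the 30-second value and the old-style start command are assumptions, tune the timeout to outlast your longest GC pauses:

```shell
# Sketch: raise zkClientTimeout from the old 15 s default to 30 s
# (value is an assumption, not a recommendation).
java -DzkClientTimeout=30000 -DzkHost=zk1:2181,zk2:2181,zk3:2181 -jar start.jar
```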
On Wed, Jan 28, 2015 at 7:11 AM, Joshi, Shital
Hello,
We have a SolrCloud cluster (5 shards and 2 replicas) on 10 boxes and three
zookeeper instances. We have noticed that when a leader node goes down, the
replica never takes over as leader; the cloud becomes unusable and we have to
bounce the entire cloud for the replica to assume the leader role. Is this
We wrote a script which queries each Solr instance in the cloud
(http://$host/solr/replication?command=details), subtracts the
'replicableVersion' number from the 'indexVersion' number, converts the
difference to minutes, and alerts if it exceeds 20 minutes. We get alerted many
times a day. The soft
commit
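The arithmetic that script performs can be sketched as follows. The two version values below are hypothetical examples; a real check would first fetch them from /replication?command=details with curl. Both values behave like millisecond timestamps, so dividing the difference by 60000 gives minutes:

```shell
# Hypothetical values as pulled from /replication?command=details
indexVersion=1403000000000        # current index version on the node
replicableVersion=1402998800000   # version available for replication

# Difference in minutes; alert when the lag exceeds 20 minutes
lag_min=$(( (indexVersion - replicableVersion) / 60000 ))
if [ "$lag_min" -gt 20 ]; then
  echo "ALERT: replication lag of ${lag_min} minutes"
fi
```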
Hi,
We're updating Solr cloud from a Java process using the UpdateRequest API.
UpdateRequest req = new UpdateRequest();
req.setResponseParser(new XMLResponseParser());
req.setParam("_shard_", shard);
req.add(docs);
We see too many searcher-open errors in the log and are wondering if frequent
updates from
autocommit settings (probably
soft commit) until you no longer see that error message
and see if the problem goes away. If it doesn't, let us know.
Best,
Erick
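For reference, the soft-commit interval Erick refers to lives in solrconfig.xml; a minimal sketch (the 60-second value is an assumption, chosen only to illustrate lengthening the interval):

```xml
<!-- Sketch: open new searchers at most once a minute (value is an assumption) -->
<autoSoftCommit>
  <maxTime>60000</maxTime>
</autoSoftCommit>
```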
On Thu, Aug 28, 2014 at 9:39 AM, Joshi, Shital shital.jo...@gs.com wrote:
Hi Shawn,
Thanks for your reply.
We did some tests enabling
Hi Shawn,
Thanks for your reply.
We did some tests enabling shards.info=true and confirmed that there is no
duplicate copy of our index.
We have one replica, but many times we see three versions on the Admin
GUI/Overview tab. All three have different versions and generations. Is that a
problem?
Master
Hi,
We have a SolrCloud cluster (5 shards and 2 replicas) on 10 boxes. We have
three collections. We recently upgraded from 4.4.0 to 4.8. We have ~850 mil
documents.
We are facing an issue where refreshing a Solr query may give different results
(number of documents returned). This issue is
Hi,
We upgraded from Solr version 4.4 to 4.8. In doing so we also upgraded from JDK
1.6 to 1.7. After a few days of testing, we decided to move back to 4.4. We get
the following error on all nodes and our cloud is not usable. How do we fix it?
Format version is not supported (resource:
Yes that was the problem. Switching back works now. Thanks!
-Original Message-
From: Shawn Heisey [mailto:s...@elyograg.org]
Sent: Tuesday, June 10, 2014 4:48 PM
To: solr-user@lucene.apache.org
Subject: Re: Format version is not supported error
On 6/10/2014 1:17 PM, Joshi, Shital wrote
Hi,
We have a SolrCloud cluster (5 shards and 2 replicas) on 10 boxes. On some of
the boxes we have about 5 million deleted docs, and we have never run an
optimize since the beginning. Does the number of deleted docs affect query
performance? Should we consider optimizing at all
. If you're still having the issue, can you post your
warming query configuration?
Joel Bernstein
Search Engineer at Heliosearch
On Wed, May 7, 2014 at 4:25 PM, Joshi, Shital shital.jo...@gs.com wrote:
Hi,
How many auto-warming queries are supported per collection in Solr 4.4 and
higher? We see
Hi,
What are ways to prevent someone from executing arbitrary delete commands
against Solr? Like:
curl http://solr.com:8983/solr/core/update?commit=true -H 'Content-Type:
text/xml' --data-binary '<delete><query>*:*</query></delete>'
I understand we can do IP-based access control (change /etc/jetty.xml). Is there
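As a sketch of the IP-based approach (the class name and syntax assume the Jetty 8 bundled with Solr 4.x, and the address is hypothetical), the server's handler can be wrapped in an IPAccessHandler in /etc/jetty.xml:

```xml
<!-- Sketch: wrap Jetty's handler in an IPAccessHandler (Jetty 8) so only
     whitelisted client addresses can reach Solr; the address is hypothetical -->
<Set name="handler">
  <New class="org.eclipse.jetty.server.handler.IPAccessHandler">
    <Call name="setWhite">
      <Arg>
        <Array type="java.lang.String">
          <Item>10.20.30.40</Item>
        </Array>
      </Arg>
    </Call>
    <Set name="handler">
      <!-- the existing handler collection goes here -->
    </Set>
  </New>
</Set>
```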
newSearcher queries are fired when a new searcher is opened, i.e. when
a commit (hard with openSearcher=true, or soft) happens.
Let's see your configuration too where you think you're setting up the
queries, maybe you've got an error there.
Best,
Erick
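For reference, newSearcher warming queries are configured as a listener in solrconfig.xml; a minimal sketch (the query values are hypothetical):

```xml
<!-- Sketch: queries fired against every new searcher; values are hypothetical -->
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst><str name="q">solr</str><str name="sort">price asc</str></lst>
    <lst><str name="q">rocks</str></lst>
  </arr>
</listener>
```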
On Mon, May 12, 2014 at 8:27 AM, Joshi
Hi,
How many auto-warming queries are supported per collection in Solr 4.4 and
higher? We see one out of three queries in the log when a new searcher is
created.
Thanks!
Hi,
How many auto-warming queries are supported per collection in Solr 4.4 and
higher? We see one out of three queries in the log when a new searcher is
created. Shouldn't it print all the searcher queries?
Thanks!
We added an id (<str name="xxx">searcher3</str>) to each searcher query, but it
never gets printed in the log file. Does Solr internally massage the searcher
queries?
_
From: Joshi, Shital [Tech]
Sent: Monday, May 12, 2014 11:27 AM
To: 'solr-user@lucene.apache.org
Hi,
We have a 10-node Solr Cloud (5 shards, 2 replicas) with a 30 GB JVM on each
60 GB machine and 40 GB of index.
We're constantly noticing that Solr queries take longer while an update (with
the commit=false setting) is in progress. A query which usually takes 0.5
seconds takes up to 2 minutes while
settings in solrconfig.xml? I'm
guessing you're using SolrJ or similar, but the solrconfig settings
will trip a commit as well.
For that matter, what are all your commit settings in solrconfig.xml,
both hard and soft?
Best,
Erick
On Tue, Apr 8, 2014 at 10:28 AM, Joshi, Shital shital.jo...@gs.com wrote
Hi,
What happens when we use commit=false in the Solr update URL?
http://$solr_url/solr/$solr_core/update/csv?commit=false&separator=|&trim=true&skipLines=2&_shard_=$shardid
1. Does it invalidate all caches? We really need to know this.
2. Nothing happens to the existing searcher, correct?
3.
12:48 PM
To: solr-user@lucene.apache.org
Subject: Re: commit=false in Solr update URL
On 3/28/2014 10:22 AM, Joshi, Shital wrote:
What happens when we use commit=false in the Solr update URL?
http://$solr_url/solr/$solr_core/update/csv?commit=false&separator=|&trim=true&skipLines=2&_shard_=$shardid
1
Thank you!
-Original Message-
From: Shawn Heisey [mailto:s...@elyograg.org]
Sent: Friday, March 28, 2014 3:14 PM
To: solr-user@lucene.apache.org
Subject: Re: commit=false in Solr update URL
On 3/28/2014 1:02 PM, Joshi, Shital wrote:
You mean the default for openSearcher is false, right? So
Hi,
We have SolrCloud cluster (5 shards and 2 replicas) on 10 boxes.
When the GUI fires a query to the Solr URL:
1. Does the node which receives the query send it to each shard in
parallel or in sequence?
2. From the log file, how do we find the total time taken to integrate results
from the different
I see lots of messages like this in the solr4 logs. What are they for?
INFO - 2014-03-14 16:42:47.098; org.apache.solr.core.SolrCore; [collection1]
webapp=/solr path=/select
, Joshi, Shital shital.jo...@gs.com wrote:
Hi Michael,
If page cache is the issue, what is the solution?
Thanks!
-Original Message-
From: Michael Della Bitta [mailto:michael.della.bi...@appinions.com]
Sent: Monday, February 24, 2014 9:54 PM
To: solr-user@lucene.apache.org
Subject: Re
On Mon, Feb 24, 2014 at 5:35 PM, Joshi, Shital shital.jo...@gs.com wrote:
Thanks.
We found some evidence that this could be the issue. We're monitoring
On Fri, Feb 21, 2014 at 5:20 PM, Joshi, Shital shital.jo...@gs.com wrote:
Thanks for your answer
Hello,
We have the following hard commit settings in solrconfig.xml.
<updateHandler class="solr.DirectUpdateHandler2">
  <updateLog>
    <str name="dir">${solr.ulog.dir:}</str>
  </updateLog>
  <autoCommit>
    <maxTime>${solr.autoCommit.maxTime:60}</maxTime>
    <maxDocs>10</maxDocs>
:55 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr4 performance
On 2/18/2014 2:14 PM, Joshi, Shital wrote:
Thanks much for all the suggestions. We're looking into reducing the allocated
heap size of the Solr4 JVM.
We're using NRTCachingDirectoryFactory. Does it use MMapDirectory internally?
Can
Hi,
Thanks much for all the suggestions. We're looking into reducing the allocated
heap size of the Solr4 JVM.
We're using NRTCachingDirectoryFactory. Does it use MMapDirectory internally?
Can someone please confirm?
Would optimization help with performance? We did that in QA (took about 13
hours for
Does Solr4 load the entire index into a memory-mapped file? What is the
eviction policy for this memory-mapped file? Can we control it?
_
From: Joshi, Shital [Tech]
Sent: Wednesday, February 05, 2014 12:00 PM
To: 'solr-user@lucene.apache.org'
Subject: Solr4
Hi,
We have a SolrCloud cluster (5 shards and 2 replicas) on 10 dynamic compute
boxes (cloud). We're using local disk (/local/data) to store the Solr index
files. All hosts have 60GB RAM and the Solr4 JVMs run with a max 30GB heap
size. So far we have 470 million documents. We are using custom
We have a SolrCloud cluster (5 shards and 2 replicas) on 10 boxes with 500
million documents. We're using custom sharding, where we direct all documents
with a specific business date to a specific shard.
With Solr 3.6 we used this command to optimize documents on master and then let
replication take
:15 PM
To: solr-user@lucene.apache.org
Subject: Re: external zookeeper with SolrCloud
On 8/9/2013 11:15 AM, Joshi, Shital wrote:
Same thing happened. It only works with N/2 + 1 zookeeper instances up.
Got it.
An update came in on the issue that I filed. This behavior that you're
seeing
would help diagnose this. Also, did you try to copy/paste
the configuration from your Solr3 to Solr4? I'd start with the
Solr4 config and copy/paste only the parts needed from your Solr3 setup.
Best
Erick
On Mon, Aug 12, 2013 at 11:38 AM, Joshi, Shital shital.jo...@gs.com wrote:
Hi,
We have
Hi,
We have a SolrCloud (4.4.0) cluster (5 shards and 2 replicas) on 10 boxes with
about 450 mil documents (~90 mil per shard). We're loading 1000 or fewer
documents in CSV format every few minutes. In Solr3, with 300 mil documents, it
used to take 30 seconds to load 1000 documents, while in
, Joshi, Shital wrote:
We did quite a bit of testing and we think bug
https://issues.apache.org/jira/browse/SOLR-4899 is not resolved in Solr 4.4
The commit for SOLR-4899 was made to branch_4x on June 10th.
lucene_solr_4_4 code branch was created from branch_4x on July 8th.
The change
Same thing happened. It only works with N/2 + 1 zookeeper instances up.
-Original Message-
From: Shawn Heisey [mailto:s...@elyograg.org]
Sent: Friday, August 09, 2013 11:22 AM
To: solr-user@lucene.apache.org
Subject: Re: external zookeeper with SolrCloud
On 8/9/2013 9:02 AM, Joshi
We did quite a bit of testing and we think bug
https://issues.apache.org/jira/browse/SOLR-4899 is not resolved in Solr 4.4
-Original Message-
From: Joshi, Shital [Tech]
Sent: Wednesday, August 07, 2013 2:48 PM
To: 'solr-user@lucene.apache.org'
Subject: RE: external zookeeper
that the upgrade to 4.4
was carried out on all machines?
Erick
On Tue, Aug 6, 2013 at 5:23 PM, Joshi, Shital shital.jo...@gs.com wrote:
Machines are definitely up. Solr4 node and zookeeper instance share the
machine. We're using -DzkHost=zk1,zk2,zk3,zk4,zk5 to let solr nodes know
about the zk instances
, but the zkHost param
only shows 5 instances... is that correct?
On Tue, Aug 6, 2013 at 11:23 PM, Joshi, Shital shital.jo...@gs.com wrote:
Machines are definitely up. Solr4 node and zookeeper instance share the
machine. We're using -DzkHost=zk1,zk2,zk3,zk4,zk5 to let solr nodes know
about the zk
: Re: external zookeeper with SolrCloud
You said earlier that you had 6 zookeeper instances, but the zkHost param
only shows 5 instances... is that correct?
On Tue, Aug 6, 2013 at 11:23 PM, Joshi, Shital shital.jo...@gs.com wrote:
Machines are definitely up. Solr4 node and zookeeper instance
Message-
From: Mark Miller [mailto:markrmil...@gmail.com]
Sent: Tuesday, June 11, 2013 10:42 AM
To: solr-user@lucene.apache.org
Subject: Re: external zookeeper with SolrCloud
On Jun 11, 2013, at 10:15 AM, Joshi, Shital shital.jo...@gs.com wrote:
Thanks Mark.
Looks like this bug is fixed
nodes at specific ZK
machines
that aren't up when you have this problem? I.e. -zkHost=zk1,zk2,zk3
Best
Erick
On Tue, Aug 6, 2013 at 4:56 PM, Joshi, Shital shital.jo...@gs.com wrote:
Hi,
We have SolrCloud (4.4.0) cluster (5 shards and 2 replicas) on 10 boxes.
We have 6 zookeeper
Thanks for all the answers. We decided to use VisualVM with multiple remote
connections.
-Original Message-
From: Utkarsh Sengar [mailto:utkarsh2...@gmail.com]
Sent: Friday, July 26, 2013 6:19 PM
To: solr-user@lucene.apache.org
Subject: Re: monitor jvm heap size for solrcloud
We have been
We have a SolrCloud (4.3.0) cluster (5 shards and 2 replicas) on 10 boxes. We
have about 450 million documents. We're planning to upgrade to Solr 4.4.0. Do
we need to re-index already-indexed documents?
Thanks!
We have a SolrCloud cluster (5 shards and 2 replicas) on 10 boxes. While
running stress tests, we want to monitor JVM heap size across the 10 nodes. Is
there a utility which would connect to each node's JMX port and display all
bean details for the cloud?
Thanks!
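One common approach, sketched below under assumptions (the port number and the no-auth settings are assumptions; secure them in production): start each node with the standard JVM remote-JMX flags, then attach VisualVM or any JMX client to each host:port.

```shell
# Sketch: expose an unauthenticated JMX port on one node
# (port and no-auth flags are assumptions; do not use as-is in production)
java -Dcom.sun.management.jmxremote \
     -Dcom.sun.management.jmxremote.port=18983 \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -Dcom.sun.management.jmxremote.ssl=false \
     -jar start.jar
```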
...@eolya.fr wrote:
With 6 zookeeper instances you need at least 4 instances running at the same
time. How can you decide to stop 4 instances and have only 2 instances
running? Zookeeper can't work anymore in these conditions.
Dominique
Le 25 juil. 2013 à 00:16, Joshi, Shital shital.jo...@gs.com
We have a SolrCloud cluster (5 shards and 2 replicas) on 10 dynamic compute
boxes (cloud), where the 5 leaders are in datacenter1 and the replicas in
datacenter2. We have 6 zookeeper instances: 4 in datacenter1 and 2 in
datacenter2. The zookeeper instances are on the same hosts as the Solr nodes.
Hi,
We have Solr 3.6 set up with a master and two slaves, each with a 70GB JVM. We
run into java.lang.OutOfMemoryError when we cross 250 million documents. Every
time this happens we purge documents, bring the count below 200 million, and
bounce both slaves. We have facets on 14 fields. We usually
adding the docs.
- Mark
On Jun 27, 2013, at 4:42 PM, Joshi, Shital shital.jo...@gs.com wrote:
Hi,
We finally decided on using custom sharding (implicit document routing) for
our project. We will have ~3 mil documents per shard key. We're maintaining
the shardkey-to-shardid mapping in a database
, 2013, at 3:13 PM, Joshi, Shital shital.jo...@gs.com wrote:
Thanks Mark.
We use commit=true as part of the request to add documents. Something like
this:
echo $data | curl --proxy --silent
http://HOST:9983/solr/collection1/update/csv?commit=true&separator=|&fieldnames=$fieldnames&_shard_
On Fri, Jun 21, 2013 at 6:08 PM, Joshi, Shital shital.jo...@gs.com wrote:
But now Solr stores composite id in the document id
Correct, it's the document id itself that contains everything needed
for the compositeId router to determine the hash.
It would only use it to calculate hash key but while
21, 2013 at 6:08 PM, Joshi, Shital shital.jo...@gs.com wrote:
But now Solr stores composite id in the document id
Correct, it's the document id itself that contains everything needed
for the compositeId router to determine the hash.
It would only use it to calculate hash key but while storing
splitting it. I will restart the
cloud and see if it goes away.
Thanks!
-Original Message-
From: Shawn Heisey [mailto:s...@elyograg.org]
Sent: Friday, June 21, 2013 5:38 PM
To: solr-user@lucene.apache.org
Subject: Re: SPLITSHARD throws error
On 6/21/2013 3:06 PM, Joshi, Shital wrote
Hi,
We have 5 shards with replication factor 2 (10 JVM instances in total). Our
shards are named (shardid) shard1, shard2, shard3, shar4 and shar5, and the
collection name is collection1. When we execute this command:
curl --proxy ''
...@elyograg.org]
Sent: Friday, June 21, 2013 4:45 PM
To: solr-user@lucene.apache.org
Subject: Re: SPLITSHARD throws error
On 6/21/2013 2:26 PM, Joshi, Shital wrote:
Hi,
We have 5 shards with replication factor 2 (total 10 jvm instances). Our
shards are named (shardid) shard1,shard2,shard3
the same shard.
On Mon, Jun 17, 2013 at 9:47 PM, Joshi, Shital shital.jo...@gs.com wrote:
Thanks for the links, they were very useful.
Is there a way to use the implicit router WITH the numShards parameter? We have
5 shards and the business day (Monday-Friday) is our shard key. We want to be
able to say
Hi,
We hard committed (/update/csv?commit=true) about 20,000 documents to
SolrCloud (5 shards, 1 replica = 10 JVM instances). We have commented out both
the autoCommit and autoSoftCommit settings in solrconfig.xml. What we noticed
is that the transaction log size never goes down to 0. We thought
to route your documents to a dedicated shard.
You can use select?q=xyz&shard.keys=uniquekey to focus your search to hit
only the shard that has your shard.key
Thanks,
Rishi.
-Original Message-
From: Joshi, Shital shital.jo...@gs.com
To: 'solr-user@lucene.apache.org' solr-user
Hi,
We are using Solr 4.3.0 SolrCloud (5 shards, 10 replicas). I have a couple of
questions on the shard key.
1. Looking at the admin GUI, how do I know which field is being used
as the shard key?
2. What is the default shard key used?
3. How do I override the default shard key?
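For illustration: with the default compositeId router there is no separate shard-key field; the shard is chosen by hashing the uniqueKey, and a "prefix!" on the id overrides the grouping (the ids below are hypothetical):

```xml
<!-- Hypothetical docs: both ids share the "IBM!" prefix, so the compositeId
     router hashes the prefix and co-locates them on the same shard -->
<add>
  <doc><field name="id">IBM!doc1</field></doc>
  <doc><field name="id">IBM!doc2</field></doc>
</add>
```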
:05 PM
To: solr-user@lucene.apache.org
Subject: Re: external zookeeper with SolrCloud
This might be https://issues.apache.org/jira/browse/SOLR-4899
- Mark
On Jun 10, 2013, at 5:59 PM, Joshi, Shital shital.jo...@gs.com wrote:
Hi,
We're setting up a 5-shard SolrCloud with an external ZooKeeper
Hi,
We're setting up a 5-shard SolrCloud with an external ZooKeeper. When we bring
up Solr nodes while the zookeeper instance is not up and running, we see this
error in the Solr logs.
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native