Hi,
Issue in brief:
I am facing a strange issue where collections that have been deleted in
Solr still have references in ZooKeeper, and because of this, the Solr
cloud console still shows the deleted collections in the down state.
Issue in Detail:
I am using Solr 4.5.1
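One way to confirm that ZooKeeper is still holding state for the deleted collections is to inspect the znodes directly with the zkCli.sh shell that ships with ZooKeeper. This is only a sketch; the host:port and the collection name are assumptions:

```shell
# Connect to one of the ZooKeeper nodes (host:port is a placeholder)
./zkCli.sh -server localhost:2181

# Inside the zkCli shell:
# list the collections SolrCloud knows about
ls /collections

# inspect the cluster state that the cloud console renders
get /clusterstate.json

# If a deleted collection (e.g. "mycollection") still appears, its znode
# was left behind. Re-issuing the Collections API DELETE is the safer fix;
# removing the znode by hand is a last resort:
rmr /collections/mycollection
```

Editing znodes by hand while the cluster is running can confuse the overseer, so it is usually worth retrying the Collections API DELETE action first.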
Thanks for all of your responses - did some more research - and here is an
observation:
I am seeing an inconsistency between the QTime in the SolrQueryResponse
object returned to the client app and the value of QTime printed in
solr.log.
Here is one specific instance:
Value
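For what it is worth, the QTime reported by Solr covers only the server-side processing of the request; it excludes writing out the response and the network transfer back to the client, so some mismatch with client-side timings is expected. A curl sketch (host, port, and collection name are assumptions) showing where the server-side QTime appears:

```shell
# Query a collection; host/port/collection are placeholders
curl 'http://localhost:8983/solr/collection1/select?q=*:*&rows=0&wt=json'

# The responseHeader carries the server-side QTime in milliseconds, e.g.:
# {"responseHeader":{"status":0,"QTime":12},"response":{...}}
```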
I am using Solr 4.5.1. I have two collections:
Collection 1 - 2 shards, 3 replicas (Size of Shard 1 - 115
MB, Size of Shard 2 - 55 MB)
Collection 2 - 2 shards, 3 replicas (Size of Shard 1 - 3.5
GB, Size of Shard 2 - 1 GB)
I have a batch process that performs
Hi,
I am using Solr 4.5.1, in which I have created an index of 114.8 MB. I also
have the following index configuration:
<indexConfig>
  <maxIndexingThreads>8</maxIndexingThreads>
  <ramBufferSizeMB>100</ramBufferSizeMB>
  <mergeFactor>10</mergeFactor>
</indexConfig>
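On the segment-count question in this thread: since Lucene/Solr 3.x the default merge policy is TieredMergePolicy, for which mergeFactor is only a shorthand, and the number of live segments can legitimately exceed it because each size tier may hold up to segmentsPerTier segments. A hedged sketch of the equivalent explicit configuration (class and parameter names as in Solr 4.x; the values mirror mergeFactor=10):

```xml
<indexConfig>
  <!-- mergeFactor=10 is shorthand for these two TieredMergePolicy knobs -->
  <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
    <int name="maxMergeAtOnce">10</int>
    <int name="segmentsPerTier">10</int>
  </mergePolicy>
</indexConfig>
```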
Thanks Shawn, and thanks Chris!!
Shawn, your explanation was very clear and clarified my doubts.
Chris, the video was also very useful.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Segment-Count-of-my-Index-is-greater-than-the-Configured-MergeFactor-tp4142783p4142987.html
Hi,
Brief description of my application:
We have a Java program which reads a flat file and adds documents to Solr
using CloudSolrServer.
We index every 1000 documents (bulk indexing).
The autoCommit setting of my application is:
<autoCommit>
  <maxDocs>10</maxDocs>
</autoCommit>
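A side note on these settings: with 1000-document batches and maxDocs set to 10, a hard commit fires roughly every 10 documents, i.e. on the order of a hundred commits per batch, which is expensive. A hedged sketch of a more typical configuration (the values are illustrative assumptions, not recommendations for this exact setup):

```xml
<autoCommit>
  <!-- commit after 15 s or 10,000 docs, whichever comes first (illustrative) -->
  <maxTime>15000</maxTime>
  <maxDocs>10000</maxDocs>
  <!-- keep hard commits cheap; visibility can come from soft commits -->
  <openSearcher>false</openSearcher>
</autoCommit>
```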
Hi,
I am preparing a Solr query in which I only give the fq parameter; I don't
give any q parameter.
If I execute such a query, with only fq, it does not return any docs; it
returns 0 docs.
So, is it always mandatory to have the q parameter in a Solr query?
If so, then
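For reference, the usual pattern when everything is a filter is to pass a match-all main query and keep the restrictions in fq; without q there is no main query for the results to be drawn from. A curl sketch (host, port, collection, and field are assumptions):

```shell
# q=*:* matches all documents; fq then restricts (and caches) the subset
curl 'http://localhost:8983/solr/collection1/select?q=*:*&fq=companyId:100&wt=json'
```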
Hi,
I am using Solr 4.4 with ZooKeeper 3.3.5. While I was checking the error
conditions of my application, I came across a strange issue. Here is what I
tried: I have three fields defined in my schema:
a) UNIQUE_KEY - of type solr.TrieLong
b) empId - of type solr.TrieLong
c) companyId - of type
Thanks Shawn for your response.
So, from your email, it seems that unique_key validation is handled
differently from other field validation.
But what I am not very clear on is what the unique_key has to do with
finding the live server?
Because if there is any mismatch in the unique_key, it is
Shalin,
It is working for me. As you rightly pointed out, I had defined the
UNIQUE_KEY field in the schema, but forgot to mention this field in the
uniqueKey declaration. After I added this, it started working.
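For readers hitting the same error: the field definition alone is not enough; the field also has to be named in the uniqueKey element of schema.xml. A sketch (the type name "tlong" is an assumption about how the schema names its TrieLong field type):

```xml
<!-- schema.xml: both pieces are needed -->
<field name="UNIQUE_KEY" type="tlong" indexed="true" stored="true" required="true"/>
<uniqueKey>UNIQUE_KEY</uniqueKey>
```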
Another question I have with regard to SPLITSHARD is that we are not able
to control which nodes
Hi,
My setup is:
Zookeeper ensemble - running with 3 nodes
Tomcats - 9 Tomcat instances brought up, registered with ZooKeeper.
Steps:
1) I uploaded the Solr configuration files, such as the db-data-config,
solrconfig, and schema XMLs, into ZooKeeper.
2) Now, I am trying to create a collection with the
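Assuming the configs are already in ZooKeeper, the collection is created through the Collections API; a hedged curl sketch (the Tomcat port, collection name, config name, and counts are assumptions):

```shell
# Create a collection from the config set uploaded to ZooKeeper
curl 'http://localhost:8080/solr/admin/collections?action=CREATE&name=mycollection&numShards=2&replicationFactor=3&collection.configName=myconf'
```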
Hi All,
For POC purposes, I just brought up a Tomcat-Solr cluster with a ZooKeeper
ensemble of 3 nodes.
In one of my collections, I have only one shard, with two replicas. I just
want to split this shard, so that it will be split in two and each
split shard will have two replicas (including the master
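The split itself would be issued through the Collections API SPLITSHARD action (available since Solr 4.3); a sketch with assumed host and names:

```shell
# Split shard1 into two sub-shards (created as shard1_0 and shard1_1);
# the parent shard remains, inactive, until it is cleaned up
curl 'http://localhost:8080/solr/admin/collections?action=SPLITSHARD&collection=mycollection&shard=shard1'
```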
Thanks for the response!!
Yes, I have defined a unique key in the schema... Still it is throwing the
same error.
Is this SPLITSHARD a new feature that is under development in Solr 4.4? Has
anyone been able to split shards using SPLITSHARD successfully?
Hi,
We have a setup where we have 3 shards in a collection, and each shard in
the collection needs to load a different set of data.
That is:
Shard1 - will contain data only for Entity1
Shard2 - will contain data only for Entity2
Shard3 - will contain data only for Entity3
So in this case, the db-data-config.xml
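One hedged way to sketch this with the DataImportHandler is a single db-data-config.xml whose query is parameterized per import request, so each shard's core pulls only its own entity. All table, column, JDBC, and parameter names below are assumptions:

```xml
<dataConfig>
  <!-- JDBC details are placeholders -->
  <dataSource driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost:3306/mydb"
              user="user" password="password"/>
  <document>
    <!-- ${dataimporter.request.entityType} is filled from the import URL,
         e.g. /dataimport?command=full-import&entityType=Entity1 -->
    <entity name="doc"
            query="SELECT id, name FROM records
                   WHERE entity_type = '${dataimporter.request.entityType}'">
      <field column="id" name="UNIQUE_KEY"/>
      <field column="name" name="name"/>
    </entity>
  </document>
</dataConfig>
```

Each shard's core would then be imported with its own entityType value in the request.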