This question is related to out-of-memory errors that I am seeing on my
SolrCloud setup - I am running Solr 4.5.1.
Here is how my setup looks:
1. Have 6 Solr Tomcat nodes distributed across 3 servers - i.e. 2 nodes per
server
2. Each Tomcat node has been allocated 2 GB of RAM - the -Xmx setting
Have two
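For reference, a per-node heap like the 2 GB -Xmx allocation above would typically be set in Tomcat via a setenv.sh fragment along these lines (the path and the matching -Xms value are assumptions, not from the original post):

```shell
# Hypothetical $CATALINA_BASE/bin/setenv.sh fragment giving each
# Tomcat node a fixed 2 GB heap, matching the -Xmx setting described above.
export JAVA_OPTS="$JAVA_OPTS -Xms2g -Xmx2g"
```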
I always get the "Loading" message on the Solr Admin Console if I use IE.
However - the page loads perfectly fine when I use Google Chrome or Mozilla
Firefox.
Could you check if your problem resolves itself if you use a different
browser?
--
View this message in context:
http://lucene.47206
Here is one potential design approach:
1. Create a single collection (instead of two collections).
Let your schema have a "RecordType" field which can take the values of
either "initial" or "follow-up" for documents that are indexed into this
collection.
2. Let there be 30 shards - just like you
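With a single collection holding both record types, searches can be restricted to one type with a filter query on the proposed "RecordType" field. A minimal sketch, assuming a local node and a collection named "mycollection" (both made up for illustration):

```shell
# Build a select URL that restricts results to "follow-up" records
# via a filter query (fq) on the RecordType field proposed above.
BASE="http://localhost:8983/solr/mycollection/select"   # assumed host/collection
QUERY="q=*:*&fq=RecordType:follow-up&wt=json"
echo "${BASE}?${QUERY}"
# Run against a live node with: curl "${BASE}?${QUERY}"
```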
One thing you could do is:
1. If your current index is called A1, you can create a new index called
A2 with the correct schema.xml / solrconfig.xml
2. Index your 18,000 documents into A2 afresh
3. Then delete A1 (the bad index)
4. Then quickly create an alias with the name of A1 pointing to A2 -
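Steps 3 and 4 above map onto two Collections API calls (DELETE and CREATEALIAS). A hedged sketch that just prints the URLs - the host is an assumption, and in practice you would execute each printed URL with curl against a live cluster:

```shell
# Collections API URLs for the delete-then-alias swap described above (Solr 4.x).
SOLR="http://localhost:8983/solr"   # assumed host
# Step 3: delete the bad collection A1
echo "${SOLR}/admin/collections?action=DELETE&name=A1"
# Step 4: point the old name A1 at the rebuilt collection A2
echo "${SOLR}/admin/collections?action=CREATEALIAS&name=A1&collections=A2"
```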
I know you mentioned you have a single machine at play - but do you have
multiple nodes on the machine that talk to one another?
Does your problem recur when the load on the system is low?
Also faced a similar problem wherein the "5 second delay" (described in
detail on my other post) kept happening.
Guess I had the same issue as you. It was resolved:
http://lucene.472066.n3.nabble.com/Slow-QTimes-5-seconds-for-Small-sized-Collections-td4143681.html
The issue was resolved by adding an explicit host mapping entry in /etc/hosts for
inter-node Solr communication - thereby bypassing DNS lookups.
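The fix described amounts to a static hostname-to-IP mapping for each Solr node, so inter-node lookups never touch DNS. An illustrative /etc/hosts fragment (the hostnames and addresses are made up; substitute your own nodes):

```
# /etc/hosts - static mappings for inter-node Solr traffic (illustrative)
10.0.0.1   solr-host-1
10.0.0.2   solr-host-2
10.0.0.3   solr-host-3
```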
This issue was finally resolved. Adding an explicit host - IP address mapping
in the /etc/hosts file seemed to do the trick. The one strange thing is - before
the hosts file entry was made - we were unable to simulate the 5 second delay
from the Linux shell by performing a simple nslookup. In any
case -
So - we do end up with two copies / versions of the same document (uniqueid)
- one in each of the two shards. Is this a BUG or a FEATURE in Solr?
Have a follow-up question - in case one were to attempt to delete the
document - let's say using the CloudSolrServer deleteById() API - would that
atte
Let's say I create a Solr collection with multiple shards (say 2 shards) and
set the value of "router.field" to a field called "CompanyName". Now - we
all know that during indexing Solr would compute a hash on the value indexed
into "CompanyName" and route the document to the appropriate shard.
Let's say I in
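For context, the field-based routing described above is configured when the collection is created. A hypothetical CREATE call (the host, collection name, and shard count are assumptions; only router.field=CompanyName comes from the question):

```shell
# CREATE a 2-shard collection that routes documents by the CompanyName field.
SOLR="http://localhost:8983/solr"   # assumed host
echo "${SOLR}/admin/collections?action=CREATE&name=companies&numShards=2&router.name=compositeId&router.field=CompanyName"
# Execute the printed URL with curl against a live cluster.
```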
We faced similar problems on our side. We found it more reliable to have a
mechanism to extract all data from the database into a flat file - and then
use a Java program to bulk index into Solr from the file via the SolrJ API.
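The post above uses a SolrJ program for the load step. As an alternative sketch of the same flat-file approach, Solr's CSV update support can ingest the extracted file directly; the host, collection, and file name below are assumptions:

```shell
# Bulk load a CSV export through Solr's CSV update support.
URL="http://localhost:8983/solr/mycollection/update?commit=true"   # assumed
FILE="export.csv"                                                  # assumed
# Print the command to run against a live node:
echo "curl '${URL}' -H 'Content-Type: text/csv' --data-binary @${FILE}"
```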
Try indexing your data as follows:
C01,C02,C03,C04,C09,C12,C23,C50
instead of
C1,C2,C3,C4,C9,C12,C23,C50
and the sort order would work correctly.
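The underlying behavior is plain lexicographic (byte-wise) ordering, which a quick shell check makes visible for both encodings (LC_ALL=C forces a deterministic byte-wise sort):

```shell
# Unpadded codes sort lexicographically, so C12 lands before C2:
printf 'C1\nC2\nC9\nC12\nC23\nC50\n' | LC_ALL=C sort
# -> C1 C12 C2 C23 C50 C9
# Zero-padded codes sort in the intended numeric order:
printf 'C01\nC02\nC09\nC12\nC23\nC50\n' | LC_ALL=C sort
# -> C01 C02 C09 C12 C23 C50
```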
BTW, what you are describing as an issue is NOT unique to Solr. The same
happens on regular databases as well. Google how database-type systems
Yes, the Solr Collections API allows you to pass in a set of explicit nodes
(subset of the complete list of nodes in your cluster) to setup your
Collection.
This is the "createNodeSet" input parameter in the CREATE collection API -
described as follows in the documentation:
Allows defining the nodes
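A hypothetical CREATE call using createNodeSet to pin the collection to an explicit subset of nodes (the host and node names are assumptions; node names follow the usual host:port_context form):

```shell
# CREATE restricted to two named nodes via createNodeSet (Solr 4.x).
SOLR="http://localhost:8983/solr"             # assumed host
NODES="server1:8983_solr,server2:8983_solr"   # assumed node names
echo "${SOLR}/admin/collections?action=CREATE&name=mycoll&numShards=2&createNodeSet=${NODES}"
# Execute the printed URL with curl against a live cluster.
```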
I am a colleague of the person who posted the original question. We have done
some more analysis and have more information to provide.
Here are the responses to Toke's questions:
>> * Do they (slow performing queries) occur under heavy network load?
No, they don't. This happens even when there