Are there any performance differences in using Dynamic Fields over Static
Fields?
I read an earlier post on Aug 12, 2008 that suggested there is nothing
significant (See link below). Is this still the case?
http://lucene.472066.n3.nabble.com/Static-Fields-vs-Dynamic-Fields-td487639.html
--
I am pretty sure I have narrowed this issue down to the HttpClient version
SolrJ was using.
The SolrJ 4.9.0 Maven library (org.apache.solr:solr-solrj) has a dependency
on HttpClient (org.apache.httpcomponents:httpclient) version 4.3.1; however,
I had explicitly declared another version.
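If the root cause is a conflicting transitive HttpClient version, one common fix is to pin the version solr-solrj 4.9.0 expects directly in the POM. This is only a sketch: the coordinates and versions are the ones mentioned above, and it relies on Maven's "nearest wins" resolution, where a direct declaration overrides transitive ones.

```xml
<dependency>
  <groupId>org.apache.solr</groupId>
  <artifactId>solr-solrj</artifactId>
  <version>4.9.0</version>
</dependency>
<!-- Pin HttpClient to the version solr-solrj 4.9.0 was built against,
     so another (incompatible) version cannot win dependency resolution. -->
<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpclient</artifactId>
  <version>4.3.1</version>
</dependency>
```

Running mvn dependency:tree is a quick way to confirm which HttpClient version actually ends up on the classpath.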
I frequently seem to get the following exception in my Solr 4.9 logs:
org.apache.solr.common.SolrException: Invalid chunk header. These
exceptions continue to happen even if I throttle my Solr requests.
Does anyone have any suggestions on how to address or work around this issue?
I have
So for all current versions of Solr, rollback will not work for SolrCloud?
Will this change in the future, or will rollback always be unsupported for
SolrCloud?
This did catch me by surprise. Should the SolrJ documentation be updated to
reflect this behavior?
I am wondering if it was possible to achieve SolrJ/Solr Two Phase Commit.
Any examples? Any best practices?
What I know:
* Lucene offers Two-Phase Commit via its IndexWriter (prepareCommit()
followed by either commit() or rollback()).
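The protocol shape behind prepareCommit()/commit()/rollback() can be sketched with plain JDK code. The Participant interface and TwoPhaseCoordinator class below are illustrative names only, not SolrJ or Lucene APIs; Lucene's IndexWriter would play the participant role.

```java
import java.util.List;

// Illustrative two-phase commit: phase 1 asks every participant to prepare,
// and only if all succeed does phase 2 commit; otherwise everything is
// rolled back. Lucene's IndexWriter maps onto this via prepareCommit(),
// commit(), and rollback().
interface Participant {
    void prepare() throws Exception; // phase 1, e.g. IndexWriter.prepareCommit()
    void commit();                   // phase 2, e.g. IndexWriter.commit()
    void rollback();                 // abort,   e.g. IndexWriter.rollback()
}

public class TwoPhaseCoordinator {
    /** Commits only if every participant prepares successfully. */
    public static boolean run(List<? extends Participant> participants) {
        for (Participant p : participants) {
            try {
                p.prepare();
            } catch (Exception e) {
                // Any phase-1 failure aborts the whole transaction.
                participants.forEach(Participant::rollback);
                return false;
            }
        }
        participants.forEach(Participant::commit);
        return true;
    }
}
```

Note this only coordinates writers inside one process; it says nothing about distributed failure modes (coordinator crash between phases), which is where SolrCloud makes things harder.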
What version of Solr are you using? 4.2.0 or 4.2.1?
The following might be of interest to you:
* https://issues.apache.org/jira/browse/SOLR-4605
* https://issues.apache.org/jira/browse/SOLR-4733
By saying commits in Solr are global, do you mean per Solr deployment, per
HttpSolrServer instance, per thread, or something else?
--
View this message in context:
http://lucene.472066.n3.nabble.com/SolrJ-Solr-Two-Phase-Commit-tp4060399p4060584.html
Sent from the Solr - User mailing list
Commits in Solr are about controlling visibility more than
anything, although now with Cloud, they have resource consumption and
lifecycle ramifications as well.
On May 2, 2013 10:01 PM, mark12345 wrote:
By saying commits in Solr are global, do you mean per Solr deployment,
per HttpSolrServer instance
One thing I noticed is that while the HttpSolrServer add(SolrInputDocument
doc) method is atomic (either a bean is added or an exception is thrown),
the HttpSolrServer add(Collection<SolrInputDocument> docs) method is not
atomic.
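Since the batch add is not atomic, one common workaround pattern is to try the whole batch first and, on failure, retry document by document so a single bad document does not take the rest of the batch down with it. The sketch below is generic JDK code, not a SolrJ API; the consumers stand in for the actual SolrJ add calls.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Batch-with-fallback add: fast path sends everything in one call;
// if that fails, the slow path isolates the offending documents.
public class BatchAddFallback {
    /** Returns the documents that could not be added even individually. */
    public static <T> List<T> addAll(List<T> docs,
                                     Consumer<List<T>> batchAdd,
                                     Consumer<T> singleAdd) {
        try {
            batchAdd.accept(docs);            // fast path: one round trip
            return new ArrayList<>();
        } catch (RuntimeException batchFailure) {
            List<T> failed = new ArrayList<>();
            for (T doc : docs) {              // slow path: isolate bad docs
                try {
                    singleAdd.accept(doc);
                } catch (RuntimeException docFailure) {
                    failed.add(doc);
                }
            }
            return failed;
        }
    }
}
```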
Question: Is there a way to commit multiple documents/beans in a
Solr 4.2.1, which claims to have fixed this issue. I have
some logging indicating that the rollback is not broadcast to the other
nodes in Solr. So only one node in the cluster gets the rollback, but not
the others.
Thanks,
Dipti
On 5/2/13 12:09 AM, mark12345 wrote:
What version of Solr
What are the performance characteristics and implications of the SolrServer
class's queryAndStreamResponse method over large result sets (one hundred
thousand records, one million records, etc.)?
Created JIRA issue:
https://issues.apache.org/jira/browse/SOLR-4605
--
This looks similar to the issue I also have:
* http://lucene.472066.n3.nabble.com/Solr-4-1-4-2-SolrException-Error-opening-new-searcher-td4046543.html
I wrote a simple test that reproduces a very similar stack trace to the above
issue, with only some line-number differences.
Any ideas as to why the following happens? Any help would be much
appreciated.
* The test case:
@Test
public void documentCommitAndRollbackTest() throws
Found the exception logs that match the notifications in
http://localhost:8080/solr-app/#/~logging as quoted below:
14:30:00 SEVERE SolrCore org.apache.solr.common.SolrException: Error
opening new searcher
14:30:00 SEVERE SolrDispatchFilter
null:org.apache.solr.common.SolrException: Error
The following relates directly to my question above. Thanks Erick.
Erick Erickson wrote
Don't use the internal Lucene doc ID. It _will_ change, even the
relationship between existing docs will change. When cores are merged, the
Lucene doc IDs are renumbered. Segments are NOT merged in
I am continuing to work on this problem. So will update this thread as I go.
These are the only logs I have through the
http://localhost:8080/solr-app/#/~logging interface. I am using tomcat
to run the solr war. Is there anything I can do to get more descriptive
logs?
I am running into issues where my Solr instance is behaving erratically. After
I get the SolrException Error opening new searcher, my Solr instance fails
to handle even the simplest of update requests.
http://lucene.472066.n3.nabble.com/exceeded-limit-of-maxWarmingSearchers-td494732.html
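For what it's worth, the maxWarmingSearchers thread linked above suggests this class of error often comes from commits arriving faster than new searchers can warm. The relevant solrconfig.xml knob looks like the fragment below (the value shown is only an example; committing less frequently is usually a better fix than raising the limit):

```xml
<!-- Caps the number of searchers warming concurrently. Commits issued
     faster than warming completes can exceed this limit and surface as
     "Error opening new searcher" / maxWarmingSearchers exceptions. -->
<maxWarmingSearchers>2</maxWarmingSearchers>
```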
I have
In Solr, I noticed that I can sort by the internal Lucene _docid_.
- http://wiki.apache.org/solr/CommonQueryParameters
You can sort by index id using sort=_docid_ asc or sort=_docid_ desc
* I have also read the docid is represented by a
A slightly different approach.
* I noticed that I can sort by the internal Lucene _docid_.
- http://wiki.apache.org/solr/CommonQueryParameters
You can sort by index id using sort=_docid_ asc or sort=_docid_ desc
* I have also read the
So I think I took the easiest option by creating an UpdateRequestProcessor
implementation (I was unsure of the performance implications and object
model of ScriptUpdateProcessor). The below
DocumentCreationDetailsProcessorFactory class seems to achieve my aim of
allowing me to sort my Solr
Appending a random value only reduces the chance of a collision (and I need
to guarantee continuous uniqueness) and could hurt how the field is later
sorted. I have not written a custom UpdateRequestProcessor before; is there
a way to incorporate a singleton that ensures one instance across a
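Within a single JVM, the collision and sort-order concerns can be met without randomness by pairing a timestamp with an ever-increasing sequence number. The class below is an illustrative sketch (not a Solr API, and the name is made up); note its guarantee holds within one process only, so across a SolrCloud cluster the ID space would still need partitioning, e.g. a per-node prefix.

```java
import java.util.concurrent.atomic.AtomicLong;

// Process-wide singleton issuing unique, sortable IDs. The fixed-width
// "timestamp-sequence" format makes lexicographic order match creation
// order (assuming the clock does not step backwards), and the atomic
// sequence rules out collisions within the same millisecond.
public final class CreationIdGenerator {
    public static final CreationIdGenerator INSTANCE = new CreationIdGenerator();

    private final AtomicLong sequence = new AtomicLong();

    private CreationIdGenerator() {}

    public String next() {
        return String.format("%013d-%019d",
                System.currentTimeMillis(), sequence.getAndIncrement());
    }
}
```

An UpdateRequestProcessor could then call CreationIdGenerator.INSTANCE.next() per document; since the factory is instantiated once per core, holding the generator in a static field is what makes it a true process-wide singleton.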