This could have multiple solutions:
1) "read" should cover all the paths
2) System properties are a strict no. This can strictly be a property
of the Authentication plugin, so you can use the API to modify the
property.
On Sat, Nov 21, 2015 at 3:57 AM, Jan Høydahl wrote:
>> ideally we should h
Thanks Alex. This is what I was looking for. One more query: how do I set this
from SolrJ while calling add()? Do I have to make a curl call with this
param set?
On Dec 13, 2015 12:53 AM, "Shalin Shekhar Mangar"
wrote:
> Oh yes, I had forgotten about that! Thanks Alexandre!
>
> On Sat, Dec 12, 2015
Oh yes, I had forgotten about that! Thanks Alexandre!
On Sat, Dec 12, 2015 at 11:57 PM, Alexandre Rafalovitch
wrote:
> Does "versions=true" flag match what you are looking for? It is
> described towards the end of:
> https://cwiki.apache.org/confluence/display/solr/Updating+Parts+of+Documents#Upd
Autowarm times will only happen when the commit has openSearcher=true
or on a soft commit. But maybe your log levels aren't at INFO for the right
code...
That said, your autowarm counts of 0 probably mean that you're not seeing
any autowarming at all, so that might be a red herring. Your newSearc
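For context, autowarm counts are set per cache in solrconfig.xml. A minimal sketch of what non-zero autowarming looks like (the sizes and counts here are illustrative, not recommendations):

```
<!-- solrconfig.xml (fragment): illustrative autowarm settings -->
<query>
  <!-- autowarmCount > 0 makes the new searcher replay the most
       recently used entries from the old cache after a commit
       that opens a searcher; this is what shows up as autowarm
       time in the INFO logs -->
  <filterCache class="solr.FastLRUCache" size="512"
               initialSize="512" autowarmCount="64"/>
  <queryResultCache class="solr.LRUCache" size="512"
                    initialSize="512" autowarmCount="32"/>
</query>
```

With autowarmCount="0" (the situation described above), the new searcher starts with cold caches and no autowarm time is logged.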
Right. What's happening is essentially what used to
happen in your custom code, where individual core
reload commands were being sent. Except now it all happens
inside Solr. To wit:
1> the code looks at the collection state
2> for each replica it sends a core admin API reload command
to the
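Put differently, one Collections API call replaces the hand-rolled per-core loop. A rough sketch of the HTTP traffic (collection and core names are hypothetical):

```
# Collections API: one call, Solr does the fan-out itself
GET /solr/admin/collections?action=RELOAD&name=mycollection

# What Solr then issues internally, once per replica (CoreAdmin API):
GET /solr/admin/cores?action=RELOAD&core=mycollection_shard1_replica1
GET /solr/admin/cores?action=RELOAD&core=mycollection_shard2_replica1
```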
+1 to what Shalin said. You've adjusted maxWarmingSearchers up,
probably because you saw warnings in the log files. This is _not_
the solution to the "maxWarmingSearchers exceeded" error. The solution
is, as Shalin says, to decrease your commit frequency.
Commit can be an expensive operation,
see:
ht
Does "versions=true" flag match what you are looking for? It is
described towards the end of:
https://cwiki.apache.org/confluence/display/solr/Updating+Parts+of+Documents#UpdatingPartsofDocuments-OptimisticConcurrency
Regards,
Alex.
Newsletter and resources for Solr beginners and intermedi
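To answer the SolrJ side of the question upthread: client.add(doc) by itself offers no hook for extra request parameters, so one way (a sketch, assuming SolrJ 5.x; the document id below is made up) is to build an explicit UpdateRequest and set versions=true on it:

```java
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class VersionsParamSketch {

    /** Builds an update request that asks Solr to echo back _version_. */
    public static UpdateRequest buildRequest() {
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "doc-1");   // hypothetical unique key

        UpdateRequest req = new UpdateRequest();
        req.add(doc);
        // The versions=true flag from the wiki page above, set as a
        // plain request parameter on the update request.
        req.setParam("versions", "true");
        return req;
    }
}
```

Calling req.process(client) then returns an UpdateResponse whose getResponse() should include a "versions" section mapping each unique key to its new _version_ value, so no separate curl call is needed.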
Yes, that is probably the cause. I think you have very aggressive
commit rates and Solr is not able to keep up. If you are sending
explicit commits, switch to using autoCommit with openSearcher=false
every 5-10 minutes (this depends on your indexing rate) and
autoSoftCommit every 2-5 minutes. Adjus
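The settings described above map onto solrconfig.xml roughly like this (the intervals are the ones suggested in this thread; tune them for your indexing rate):

```
<!-- solrconfig.xml (fragment): let Solr manage commits -->
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- hard commit every 5 minutes, without opening a new searcher -->
  <autoCommit>
    <maxTime>300000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- soft commit every 2 minutes controls document visibility -->
  <autoSoftCommit>
    <maxTime>120000</maxTime>
  </autoSoftCommit>
</updateHandler>
```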
Hi there,
I am using SolrCloud 5.3.0 on a multi-server cluster (3 servers),
where each server has 16 cores and 32 GB of RAM.
I am facing regular errors - Error sending update to http://someip:8983/solr
- "Timeout occured while waiting response from server at server a" ... Caus
I was wondering if it is possible to get the version without making one
extra getById call. Can I get it as part of the update response when I am
updating or adding a new document?
On Dec 12, 2015 3:28 PM, "Shalin Shekhar Mangar"
wrote:
> You will have to request a real-time-get with the un
Thanks a lot for all the clarifications.
Actually, resources are not a big problem; I think the customer can afford 4 GB
RAM Red Hat Linux machines for ZooKeeper. The Solr machines will have 64 or
96 GB of RAM in production, depending on the size of the index.
My primary concern is maintenance of t
You will have to request a real-time get with the unique key of the
document you added/updated. In Solr 5.1+ you can call
client.getById(String id) to get this information.
On Sat, Dec 12, 2015 at 10:19 AM, Debraj Manna wrote:
> Is there a way I can get the version of a document back in response af
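A sketch of that route (assuming SolrJ 5.1+; the helper and id are illustrative): fetch the document via real-time get and read the _version_ field Solr maintains on every document.

```java
import org.apache.solr.common.SolrDocument;

public class VersionLookup {

    /** Reads Solr's internal _version_ field from a fetched document. */
    public static Long versionOf(SolrDocument doc) {
        if (doc == null) {
            return null;                 // id not found in the index
        }
        return (Long) doc.getFieldValue("_version_");
    }

    // Usage against a live cluster (not run here):
    //   SolrDocument doc = client.getById("doc-1");  // real-time get
    //   Long version = versionOf(doc);
}
```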