Thanks Mark,
Good, this is probably good enough to give it a try. My analyzers are
normally fast, so doing duplicate analysis (at each replica) is
probably not going to cost a lot if there is some decent "batching".
Can this be somehow controlled (depth of this buffer / time till flush,
or some such)?
We actually do currently batch updates - we are being somewhat loose when we
say "a document at a time". There is a buffer of updates per replica that
gets flushed depending on the requests coming through and the buffer size.
- Mark Miller
lucidimagination.com
On Feb 28, 2012, at 3:38 AM, eks dev wrote:
SolrCloud is going to be great; the NRT feature is a really huge step
forward, as well as central configuration, elasticity ...
The only thing I do not yet understand is the treatment of cases that were
traditionally covered by a Master/Slave setup: batch updates.
If I get it right (?), updates to replicas are
As I understand it (and I'm just getting into SolrCloud myself), you can
essentially forget about master/slave stuff. If you're using NRT,
the soft commit will make the docs visible; you don't need to do a hard
commit (unlike the master/slave days). Essentially, the update is sent
to each shard leader
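For reference, the soft/hard commit behavior described above is usually driven by the autoCommit and autoSoftCommit settings in solrconfig.xml rather than explicit client commits. A minimal sketch (the time values here are illustrative, not recommendations):

    <!-- solrconfig.xml (illustrative values) -->
    <updateHandler class="solr.DirectUpdateHandler2">
      <!-- hard commit: flushes updates to disk for durability,
           but does not open a new searcher -->
      <autoCommit>
        <maxTime>60000</maxTime>
        <openSearcher>false</openSearcher>
      </autoCommit>
      <!-- soft commit: makes recently added documents visible
           to searchers (NRT), without the cost of a disk flush -->
      <autoSoftCommit>
        <maxTime>1000</maxTime>
      </autoSoftCommit>
    </updateHandler>

With something like this in place, clients just send documents and let Solr handle both visibility (soft commits) and durability (hard commits).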
Hi All,
I am trying to understand the features of SolrCloud regarding commits and
scaling.
- If I am using SolrCloud, do I need to explicitly call commit
(hard commit)? Or is a soft commit okay, and will SolrCloud do the job of
writing to disk?
- Do we still need to use Master