Yes, that's right, there is no "best" setup at all, only one that
best fits your requirements.
And any setup has some disadvantages.
Currently I'm short on time and have to bring our Cloud to production,
but a write-up is in the queue, as already done with other developments.
https://ww
On Tue, 2018-08-28 at 09:37 +0200, Bernd Fehling wrote:
> Yes, I tested many cases.
Erick is absolutely right about the challenge of finding "best" setups.
What we can do is gather observations, as you have done, and hope that
people with similar use cases find them. With that in mind, have you
c
Hi Erick,
I am looking into the rule-based replica placement documentation and am
confused. How can I ensure there is no more than one replica of any shard on
the same host? The example rule shard:*,replica:<2,node:* seems
to serve the purpose, but I am not sure if 'node' refers to a solr ins
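One hedged way to express that constraint, following the rule syntax in the 6.6 ref guide linked below (collection name, shard/replica counts, and host are hypothetical, not from this thread): a rule's 'node' tag matches one Solr instance (host:port), while the implicit snitch also exposes a 'host' tag that counts replicas per physical machine, which is what the question asks for:

```shell
# Sketch only: cap replicas of every shard at fewer than 2 per physical
# host by using the 'host' tag instead of 'node' (names hypothetical)
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=mycoll&numShards=3&replicationFactor=2&rule=shard:*,replica:<2,host:*"
```

Note the literal '<' in the rule value; depending on your client you may need to URL-encode it as %3C.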
Bernd:
If you only knew how many times I've had the conversation "No, I can't
tell you what's best, you have to test with _your_ data on _your_
hardware with _your_ queries" ;)
I suspect, but have no real proof, that GC is the biggest difference;
Solr has what we call "the laggard problem". Since one
Yes, I tested many cases.
As I already mentioned, 3 servers as a 3x3 SolrCloud cluster.
- 12 million data records from our big single index
- always the same queries (SWD, german keyword norm data)
- Apache JMeter 3.1 for the load (separate server)
- HAProxy 1.6.11 with roundrobin (separate server)
- no
There was no real bottleneck.
I just started with 30 QPS and after that simply doubled the QPS.
But as you mentioned I used my specific data and analysis, and also
used SWD (german keyword norm data) dictionary for querying.
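For reference, a round-robin frontend like the one described can be sketched in a few lines of HAProxy configuration (hostnames, ports, and backend names are hypothetical, not Bernd's actual setup):

```
frontend solr_front
    bind *:8080
    default_backend solr_nodes

backend solr_nodes
    balance roundrobin
    server solr1 host1:8983 check
    server solr2 host2:8983 check
    server solr3 host3:8983 check
```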
Regards,
Bernd
On 27.08.2018 at 15:41, Jan Høydahl wrote:
What was your
Thanks Bernd. Do you have preferLocalShards=true in both cases? Do you
notice CPU/memory utilization difference between the two deployments? How
many servers did you use in total? I am curious what's the bottleneck for
the one instance and 3 cores configuration.
Thanks,
Wei
On Mon, Aug 27, 2018
What was your bottleneck when maxing out at 30 QPS on a 3-node cluster?
I expect such tests to vary a lot between use cases, so a good approach is
to do just as you did: benchmark on your specific data and usage.
--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
> 27. aug.
My tests with many combinations (instance, node, core) on a 3-server cluster
with SolrCloud showed that the highest performance comes with multiple Solr
instances, with shards and replicas placed by rules so that you get the
advantage of preferLocalShards=true.
The disadvantage is the handling of the
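For context, preferLocalShards is just a query-time parameter: it asks the node that receives the request to favor replicas hosted on itself for the distributed sub-requests, saving a network hop. A minimal sketch (host and collection name are hypothetical):

```shell
# Sketch only: ask the receiving node to query its own local replicas
# where possible (hypothetical host/collection)
curl "http://localhost:8983/solr/mycoll/select?q=*:*&preferLocalShards=true"
```

If I recall correctly, later Solr versions replace this with the more general shards.preference parameter (e.g. shards.preference=replica.location:local), but for the versions discussed here preferLocalShards applies.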
Hi,
I would start with one instance per host and add more shards to that one. As
long as you stay below a 32G heap this would be a preferred setup.
It is a common mistake to think that you need more JVM heap than necessary. In
fact you should try to minimize your heap and leave more free RAM for O
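As a sketch of that advice, the heap is typically set in solr.in.sh (the value below is hypothetical, pick one based on your own measurements):

```shell
# solr.in.sh sketch (value hypothetical): a modest heap plus plenty of
# free RAM for the OS page cache usually beats an oversized heap.
SOLR_HEAP="8g"
# Staying under roughly 31-32g also keeps compressed ordinary object
# pointers (CompressedOops) enabled in the JVM.
```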
Yes, you can use the "node placement rules", see:
https://lucene.apache.org/solr/guide/6_6/rule-based-replica-placement.html
This is a variant of "rack awareness".
Of course the simplest way, if you're not creating very many collections, is to
create the collection with the special "EMPTY" createNodeS
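That approach can be sketched with the Collections API: create the collection with no replicas at all, then place each replica explicitly on the node you want (all names below are hypothetical):

```shell
# Sketch only: create an empty collection, then add replicas one by one
# on explicitly chosen nodes (hypothetical names throughout)
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=mycoll&numShards=3&createNodeSet=EMPTY"
curl "http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=mycoll&shard=shard1&node=host1:8983_solr"
```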
Thanks Shawn. When using multiple Solr instances per host, is there any way
to prevent SolrCloud from putting multiple replicas of the same shard on the
same host?
I see it makes sense if we can split into multiple instances with
smaller heap sizes. Besides that, do you think multiple instances will
On 8/26/2018 12:00 AM, Wei wrote:
I have a question about the deployment configuration in solr cloud. When
we need to increase the number of shards in solr cloud, there are two
options:
1. Run multiple solr instances per host, each with a different port and
hosting a single core for one shard.
Hi,
I have a question about the deployment configuration in solr cloud. When
we need to increase the number of shards in solr cloud, there are two
options:
1. Run multiple solr instances per host, each with a different port and
hosting a single core for one shard.
2. Run one solr instance per
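For what it's worth, option 1 can be sketched with the bin/solr script; the instances differ only by port and home directory (paths, ports, and the ZooKeeper address are hypothetical):

```shell
# Sketch only: two SolrCloud nodes on one host, same ZK ensemble,
# different ports and solr home dirs (all names hypothetical)
bin/solr start -cloud -p 8983 -s /var/solr/node1 -z zk1:2181
bin/solr start -cloud -p 8984 -s /var/solr/node2 -z zk1:2181
```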