I tried my last proposition: editing clusterstate.json to add a dummy
frontend shard seems to work. I made sure the ranges were not overlapping.
Doesn't this resolve the SolrCloud issue described above?
Would adding a dummy shard, instead of a dummy collection, resolve the
situation? E.g. editing clusterstate.json from a ZooKeeper client and
adding a shard with a 0-range, so no docs are routed to this core. This core
would be on a separate server and act as the collection gateway.
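For illustration, a sketch of what such a dummy shard entry might look like in a 4.x-era clusterstate.json. All collection, core, and host names here are made up, and whether Solr accepts a null range (versus some explicit empty-range string) is exactly the kind of thing that would need testing:

```json
{
  "collection1": {
    "shards": {
      "shard1": {
        "range": "80000000-7fffffff",
        "replicas": { "core_node1": { "state": "active" } }
      },
      "frontend": {
        "range": null,
        "replicas": {
          "core_node2": {
            "state": "active",
            "core": "collection1_frontend",
            "base_url": "http://frontend-host:8983/solr"
          }
        }
      }
    }
  }
}
```

The intent is that the router never maps a document hash into the frontend shard's (empty) range, so the core stays index-less while still appearing in the cluster state.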
What Shawn has described is exactly what we do: a classical distributed,
non-SolrCloud setup. This is why it was possible to implement a custom
frontend Solr instance.
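For context, the classical (non-SolrCloud) distributed setup mentioned here is driven by the `shards` request parameter: the frontend node fans the query out to the listed cores and merges the results itself. A minimal sketch of building such a request (host and core names are hypothetical):

```python
from urllib.parse import urlencode

# Hypothetical index-holding nodes; the frontend itself holds no index.
shards = [
    "solr1:8983/solr/collection1",
    "solr2:8983/solr/collection1",
]

params = {
    "q": "title:foo",
    "shards": ",".join(shards),  # classical distributed search fan-out
    "fl": "id,score",
}
query_string = urlencode(params)
url = "http://frontend:8983/solr/collection1/select?" + query_string
```

The frontend then performs the scatter/gather and result merging that the thread below discusses.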
On Wed, Oct 2, 2013 at 12:42 AM, Shawn Heisey wrote:
> On 10/1/2013 2:35 PM, Isaac Hebsh wrote:
On 10/1/2013 4:04 PM, Isaac Hebsh wrote:
Hi Shawn,
I know that every node operates as a frontend. This is the way our cluster
currently runs.
If I separate the frontend from the nodes which hold the shards, I can give
it a different amount of CPU and RAM (e.g. a large amount of RAM for the
JVM, because this server won't need the OS cache for read
On 10/1/2013 2:35 PM, Isaac Hebsh wrote:
Hi Dmitry,
I'm trying to examine your suggestion to create a frontend node. It sounds
pretty useful.
I saw that every node in a Solr cluster can serve requests for any
collection, even if it does not hold a core of that collection. Because of
that, I thought that adding a new node to the cluster (ak
Manuel,
Whether to have the frontend Solr as an aggregator of shard results depends
on your requirements. To repeat, we found merging from many shards very
inefficient for our use case. It can be the opposite for you (i.e. it
requires testing). There are some limitations with distributed search, see her
Dmitry - currently we don't have such a frontend; this sounds like a good
idea, creating it. And yes, we do query all 36 shards on every query.
Mikhail - I do think 1 minute is enough data, as during this exact minute I
had a single query running (that took a qtime of 1 minute). I wanted to
isolate t
Hi Manuel,
The frontend Solr instance is the one that does not have its own index and
does the merging of the results. Is that the case? If yes, are all 36
shards always queried?
Dmitry
On Mon, Sep 9, 2013 at 10:11 PM, Manuel Le Normand <
manuel.lenorm...@gmail.com> wrote:
> Hi Dmitry,
>
> I h
Hello Manuel,
One minute of sampling brings too little data. Lowering termindex should
help; however, I don't know how the FST really behaves with it. It
definitely helped in 3.x.
Would you mind if I ask which OS you have, and which Directory
implementation is actually used?
On Sun, Sep 8, 2013 at 7:56 PM, Manu
Hi Dmitry,
I have Solr 4.3, and every query is distributed and merged back for ranking
purposes.
What do you mean by a frontend Solr?
On Mon, Sep 9, 2013 at 2:12 PM, Dmitry Kan wrote:
> Are you querying your shards via a frontend Solr? We have noticed that
> querying becomes much faster if resul
Are you querying your shards via a frontend Solr? We have noticed that
querying becomes much faster if results merging can be avoided.
Dmitry
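As a note on how merging can be avoided: sending the request to a single core with `distrib=false` keeps the query local to that core, so no scatter/gather or merge step runs. A sketch (host and core names are hypothetical):

```python
from urllib.parse import urlencode

# distrib=false keeps the query on the core you hit, skipping the
# distributed fan-out and the result-merging step entirely.
params = {
    "q": 'title:"exact phrase"',
    "distrib": "false",
    "rows": 10,
}
direct_url = ("http://solr1:8983/solr/collection1_shard1/select?"
              + urlencode(params))
```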
On Sun, Sep 8, 2013 at 6:56 PM, Manuel Le Normand <
manuel.lenorm...@gmail.com> wrote:
> Hello all
> Looking at the 10% slowest queries, I get very bad
Hello all,
Looking at the 10% slowest queries, I get very bad performance (~60 sec
per query).
These queries have lots of conditions on my main field (more than a
hundred), including phrase queries, and rows=1000. I do return only ids,
though.
I can quite firmly say that this bad performance is due
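To make the query shape concrete, here is a sketch of a request like the one described: on the order of a hundred clauses on one field, including phrase queries, rows=1000, and only ids returned. The field name and clause contents are made up:

```python
from urllib.parse import urlencode

# ~100 OR'ed clauses on a single field, including phrase queries
clauses = ['content:"some phrase %d"' % i for i in range(100)]

params = {
    "q": " OR ".join(clauses),
    "rows": 1000,  # a large page: 1000 hits fetched and merged per query
    "fl": "id",    # return only the id field
}
qs = urlencode(params)
```

Even with fl=id, each of those 1000 rows still has to be scored, collected across shards, and merged, which is where queries like this tend to spend their time.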
On Fri, Dec 23, 2011 at 11:36 AM, Shyam Bhaskaran
wrote:
> Hi,
>
> Can someone advise me on performing Solr profiling?
Have you looked at JMX: http://wiki.apache.org/solr/SolrJmx ?
Regards,
Gora
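For reference, the wiki page linked above boils down to adding the `<jmx/>` element to solrconfig.xml, which publishes Solr's statistics MBeans over JMX:

```xml
<!-- solrconfig.xml: expose Solr's statistics beans via JMX -->
<config>
  <jmx />
</config>
```

To connect remotely (e.g. from JConsole or VisualVM on another machine), the JVM's remote JMX agent also has to be enabled; the port below is an arbitrary example, and disabling SSL/auth like this is only reasonable on a trusted network:

```
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=18983
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
```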
Subject: Profiling Solr
Hi Jean,
I am also looking into profiling Solr and wanted to check with you whether
you were able to use YourKit successfully for Solr profiling, and whether
you were able to find the bottleneck in your situation.
Can you share how you were able to find the performance bottleneck and
fix the
On Thu, Mar 11, 2010 at 1:11 PM, Jean-Sebastien Vachon
wrote:
> Hi,
>
> I'm trying to identify the bottleneck to get acceptable performance of a
> single shard containing 4.7 million documents using my own machine (Mac
> Pro - Quad Core with 8 GB of RAM, 4 GB allocated to the JVM).
>
> I t
Hi,
I'm trying to identify the bottleneck to get acceptable performance of a single
shard containing 4.7 million documents using my own machine (Mac Pro - Quad
Core with 8 GB of RAM, 4 GB allocated to the JVM).
I tried using YourKit but I don't get anything about Solr classes. I'm new to
I usually use YourKit or JProfiler, but there are free ones too, like VisualVM.
Check out:
http://www.lucidimagination.com/blog/2009/09/19/java-garbage-collection-boot-camp-draft/
and
http://www.lucidimagination.com/blog/2009/02/09/investigating-oom-and-other-jvm-issues/
On Dec 22, 2009, at 9
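On the GC side, the usual first step (with the Java 6/7-era JVMs those posts discuss) is to turn on GC logging and read the log alongside whatever profiler you use; the log path is an example:

```
-verbose:gc
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-Xloggc:logs/gc.log
```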
Hi All,
Recently we noticed that some of our heavy-load Solr instances are facing
memory-leak-like situations.
They go into Full GC and, as they are unable to release any memory, broken
pipe and socket errors happen.
(This happens both in Solr 1.3 and 1.4 for us.)
Is there a good tool (prefe
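Two standard command-line tools that ship free with the JDK are often enough to confirm this kind of leak before reaching for a full profiler. `SOLR_PID` below stands for the Solr process id:

```
# GC utilization and collection counts, sampled every 5 seconds
jstat -gcutil $SOLR_PID 5000

# Histogram of live objects (note: this itself forces a full GC)
jmap -histo:live $SOLR_PID

# Binary heap dump for offline analysis in a profiler or MAT
jmap -dump:live,format=b,file=solr-heap.hprof $SOLR_PID
```

If `jstat` shows the old generation pinned near 100% across Full GCs, the heap dump is the next step.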
The NetBeans profiler is also very good - available both in NetBeans and in
VisualVM. And of course Eclipse has a profiler, but it's a little harder
to get that off the ground.
Grant Ingersoll wrote:
> I usually use YourKit, but have also had success w/ JProfiler.
>
>
> On Oct 26, 2009, at 3:31 PM, J
I usually use YourKit, but have also had success w/ JProfiler.
On Oct 26, 2009, at 3:31 PM, Joe Calderon wrote:
As a curiosity, I'd like to use a profiler to see where within Solr
queries spend most of their time; I'm curious what tools, if any, others
use for this type of task.
I'm using Jetty a
As a curiosity, I'd like to use a profiler to see where within Solr
queries spend most of their time; I'm curious what tools, if any, others
use for this type of task.
I'm using Jetty as my servlet container, so ideally I'd like a profiler
that's compatible with it.
--joe