Hi Dominique,
Unfortunately, Solr doesn't expose the metrics you are interested in. You can,
however, run another process that makes JMX queries against the Solr process,
does the required transformation, and stores the data in some kind of data store.
Just make sure you are not DDoSing your Solr instances :-)
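A minimal sketch of such a poller (not Solr-specific code). Here it reads a standard JVM MBean from the local platform MBeanServer; for a remote Solr you would obtain the connection via JMXConnectorFactory.connect(new JMXServiceURL(...)) instead, and the Solr MBean names depend on your solrconfig.xml <jmx/> setup:

```java
import java.lang.management.ManagementFactory;

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;

// Sketch: polling a metric over JMX. This uses the local platform
// MBeanServer and a standard JVM MBean so it runs anywhere; swap in a
// remote JMXConnector and your Solr MBean names for a real poller.
class JmxPoller {
    static long usedHeap() throws Exception {
        MBeanServerConnection conn = ManagementFactory.getPlatformMBeanServer();
        CompositeData heap = (CompositeData) conn.getAttribute(
                new ObjectName("java.lang:type=Memory"), "HeapMemoryUsage");
        return (Long) heap.get("used");
    }
}
```

You would run this on a schedule (cron, a timer thread, etc.) and write the values to whatever store you graph from.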
On 10/10/2016 9:58 AM, Dominique De Vito wrote:
> It looks like the Solr metric "avgTimePerRequest" is computed with
> requests from t0 (startup time).
The percentile metrics (available in 4.1 and later, if memory serves) are
generally far more useful than the average time.
> If so, is there a
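To see why a since-startup average hides a recent surge, here is a small sketch (not Solr code; the class and window size are made up) comparing a cumulative average with an average over only the last N requests:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: cumulative average (what a since-startup avgTimePerRequest
// behaves like) vs. a sliding-window average over the last N requests.
class LatencyStats {
    private long count = 0;
    private double cumulativeSum = 0;
    private final Deque<Double> window = new ArrayDeque<>();
    private final int windowSize;

    LatencyStats(int windowSize) { this.windowSize = windowSize; }

    void record(double millis) {
        count++;
        cumulativeSum += millis;
        window.addLast(millis);
        if (window.size() > windowSize) window.removeFirst();  // keep last N
    }

    double cumulativeAvg() { return cumulativeSum / count; }

    double windowedAvg() {
        return window.stream().mapToDouble(Double::doubleValue)
                .average().orElse(0);
    }
}
```

After 10,000 requests at 10 ms followed by 100 at 500 ms, the cumulative average is still under 15 ms while the windowed average (N=100) reads 500 ms, which is exactly the surge the cumulative number buries.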
Hi,
It looks like the Solr metric "avgTimePerRequest" is computed with requests
from t0 (startup time).
If so, it's quite useless for detecting, for example, a surge in latency
within the last 10 min.
Is my understanding correct?
If so, is there a way
(1) to configure Solr to
not bad advice ;-)
2009/12/20 Walter Underwood wun...@wunderwood.org
Here is an idea. Don't make one core per user. Use a field with a user id.
wunder
On Dec 20, 2009, at 12:38 PM, Matthieu Labour wrote:
Hi
I have a Solr instance in which I created 700 cores, one core per user of my
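Walter's suggestion — one shared core with a user id field — amounts to adding a filter query per search instead of routing to a per-user core. A tiny sketch (endpoint, core, and field names are all illustrative; fq-based filtering itself is standard Solr):

```java
// Sketch: instead of /solr/user_42/select, query one shared core and
// restrict results with a filter query on a user_id field. Filter
// queries are cached independently of the main query, so this stays cheap.
class UserQuery {
    static String url(String userId, String q) {
        return "/solr/shared/select?q=" + q + "&fq=user_id:" + userId;
    }
}
```

In a real client you would URL-encode the parameters, and each document would be indexed with its user_id so the filter has something to match.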
Have you tried loading Solr cores as you need them and unloading those
that are not being used? I wish I could help more; I don't know many
people running that many cores.
didier
On Sun, Dec 20, 2009 at 2:38 PM, Matthieu Labour matth...@strateer.com wrote:
Hi
I have a Solr instance in
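didier's load-on-demand idea maps onto Solr's CoreAdmin HTTP API. A sketch of the request URLs (CREATE and UNLOAD are real CoreAdmin actions; the paths and per-user core naming are illustrative):

```java
// Sketch: create a user's core when it is first needed and unload it when
// idle, via CoreAdmin. Unloading frees the core's searcher and caches
// without deleting the index files on disk.
class CoreAdminUrls {
    static String createUrl(String user) {
        return "/solr/admin/cores?action=CREATE&name=user_" + user
             + "&instanceDir=user_" + user;
    }
    static String unloadUrl(String user) {
        return "/solr/admin/cores?action=UNLOAD&core=user_" + user;
    }
}
```

An LRU of open cores plus these two calls would keep only the active users' cores resident, at the cost of a core-open delay on a cold user.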
Hi
I have a Solr instance in which I created 700 cores, one core per user of my
application.
The total size of the data indexed on disk is 35GB, with Solr cores ranging
from 100KB and a few documents to 1.2GB and 50,000 documents.
Searching seems very slow, and indexing as well.
This is running on an EC2 xtra