Dmitry,

That's beyond the scope of this thread, but Munin essentially runs
"plugins", which are scripts that output graph configuration and
values when polled by the Munin server.  Because the protocol is plain
text, the plugins can be written in any language.  Munin then feeds
this data into RRDtool, which renders the graphs.  There are some
examples[1] of Solr plugins that people have used to scrape the
stats.jsp page.
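For reference, a minimal sketch of such a plugin in Python (the graph
title, field name, and fixed value below are invented for illustration;
a real plugin would scrape them from stats.jsp or a similar handler):

```python
#!/usr/bin/env python
# Hypothetical Munin plugin sketch.  Munin runs the plugin with the
# argument "config" to ask for graph configuration, and with no
# argument to fetch current values, all over a plain-text protocol.
import sys

def config_output():
    # "config" mode: describe the graph to the Munin server.
    return "\n".join([
        "graph_title Solr filterCache hit ratio",
        "graph_vlabel ratio",
        "graph_category solr",
        "hitratio.label hit ratio",
    ])

def value_output(stats):
    # Fetch mode: one "<field>.value <number>" line per stat.
    return "hitratio.value %s" % stats["hitratio"]

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "config":
        print(config_output())
    else:
        # A real plugin would scrape this from Solr here; a fixed
        # value stands in so the sketch is self-contained.
        print(value_output({"hitratio": 0.86}))
```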

Justin

1. http://exchange.munin-monitoring.org/plugins/search?keyword=solr

Dmitry Kan <dmitry....@gmail.com> writes:

> Thanks, Justin. With Zabbix I can gather JMX-exposed stats from Solr; how
> about Munin, what protocol does it use to accumulate stats? It wasn't
> obvious from their online documentation...
>
> On Mon, Dec 12, 2011 at 4:56 PM, Justin Caratzas
> <justin.carat...@gmail.com> wrote:
>
>> Dmitry,
>>
>> The only added stress that munin puts on each box is the 1 request per
>> stat per 5 minutes to our admin stats handler.  Given that we get 25
>> requests per second, this doesn't make much of a difference.  We don't
>> have a sharded index (yet) as our index is only 2-3 GB, but we do have
>> slave servers with replicated
>> indexes that handle the queries, while our master handles
>> updates/commits.
>>
>> Justin
>>
>> Dmitry Kan <dmitry....@gmail.com> writes:
>>
>> > Justin, in terms of the overhead, have you noticed if Munin puts much of it
>> > when used in production? In terms of the solr farm: how big is a shard's
>> > index (given you have a sharded architecture)?
>> >
>> > Dmitry
>> >
>> > On Sun, Dec 11, 2011 at 6:39 PM, Justin Caratzas
>> > <justin.carat...@gmail.com> wrote:
>> >
>> >> At my work, we use Munin and Nagios for monitoring and alerts.  Munin is
>> >> great because writing a plugin for it is so simple, and with Solr's
>> >> statistics handler, we can track almost any solr stat we want.  It also
>> >> comes with included plugins for load, file system stats, processes,
>> >> etc.
>> >>
>> >> http://munin-monitoring.org/
>> >>
>> >> Justin
>> >>
>> >> Paul Libbrecht <p...@hoplahup.net> writes:
>> >>
>> >> > Allow me to chime in and ask a generic question about monitoring tools
>> >> > for people close to developers: are any of the tools mentioned in this
>> >> > thread actually able to show graphs of loads, e.g. cache counts or CPU
>> >> > load, in parallel to a console log or to an http request log??
>> >> >
>> >> > I am working on such a tool currently but I have a bad feeling of
>> >> reinventing the wheel.
>> >> >
>> >> > thanks in advance
>> >> >
>> >> > Paul
>> >> >
>> >> >
>> >> >
>> >> > On Dec 8, 2011, at 08:53, Dmitry Kan wrote:
>> >> >
>> >> >> Otis, Tomás: thanks for the great links!
>> >> >>
>> >> >> 2011/12/7 Tomás Fernández Löbbe <tomasflo...@gmail.com>
>> >> >>
>> >> >>> Hi Dmitry, I pointed to the wiki page to enable JMX; then you can use
>> >> >>> any tool that visualizes JMX data, like Zabbix. See
>> >> >>>
>> >> >>> http://www.lucidimagination.com/blog/2011/10/02/monitoring-apache-solr-and-lucidworks-with-zabbix/
>> >> >>>
>> >> >>> On Wed, Dec 7, 2011 at 11:49 AM, Dmitry Kan <dmitry....@gmail.com>
>> >> wrote:
>> >> >>>
>> >> >>>> The culprit seems to be the merger (frontend) Solr. Talking to one shard
>> >> >>>> directly takes substantially less time (1-2 sec).
>> >> >>>>
>> >> >>>> On Wed, Dec 7, 2011 at 4:10 PM, Dmitry Kan <dmitry....@gmail.com>
>> >> wrote:
>> >> >>>>
>> >> >>>>> Tomás: thanks. The page you gave didn't mention caches specifically;
>> >> >>>>> is there more documentation on this? I have used the solrmeter tool,
>> >> >>>>> which draws the cache diagrams; is there a similar tool that uses
>> >> >>>>> JMX directly and presents cache usage at runtime?
>> >> >>>>>
>> >> >>>>> pravesh:
>> >> >>>>> I have increased the size of the filterCache, but the search hasn't
>> >> >>>>> become any faster, taking almost 9 sec on avg :(
>> >> >>>>>
>> >> >>>>> name: search
>> >> >>>>> class: org.apache.solr.handler.component.SearchHandler
>> >> >>>>> version: $Revision: 1052938 $
>> >> >>>>> description: Search using components:
>> >> >>>>> org.apache.solr.handler.component.QueryComponent,org.apache.solr.handler.component.FacetComponent,org.apache.solr.handler.component.MoreLikeThisComponent,org.apache.solr.handler.component.HighlightComponent,org.apache.solr.handler.component.StatsComponent,org.apache.solr.handler.component.DebugComponent
>> >> >>>>>
>> >> >>>>> stats: handlerStart : 1323255147351
>> >> >>>>> requests : 100
>> >> >>>>> errors : 3
>> >> >>>>> timeouts : 0
>> >> >>>>> totalTime : 885438
>> >> >>>>> avgTimePerRequest : 8854.38
>> >> >>>>> avgRequestsPerSecond : 0.008789442
>> >> >>>>>
>> >> >>>>> the stats (copying fieldValueCache as well here, to show term
>> >> >>>>> statistics):
>> >> >>>>>
>> >> >>>>> name: fieldValueCache
>> >> >>>>> class: org.apache.solr.search.FastLRUCache
>> >> >>>>> version: 1.0
>> >> >>>>> description: Concurrent LRU Cache(maxSize=10000, initialSize=10,
>> >> >>>>> minSize=9000, acceptableSize=9500, cleanupThread=false)
>> >> >>>>> stats: lookups : 79
>> >> >>>>> hits : 77
>> >> >>>>> hitratio : 0.97
>> >> >>>>> inserts : 1
>> >> >>>>> evictions : 0
>> >> >>>>> size : 1
>> >> >>>>> warmupTime : 0
>> >> >>>>> cumulative_lookups : 79
>> >> >>>>> cumulative_hits : 77
>> >> >>>>> cumulative_hitratio : 0.97
>> >> >>>>> cumulative_inserts : 1
>> >> >>>>> cumulative_evictions : 0
>> >> >>>>> item_shingleContent_trigram :
>> >> >>>>> {field=shingleContent_trigram,memSize=326924381,tindexSize=4765394,time=215426,phase1=213868,nTerms=14827061,bigTerms=35,termInstances=114359167,uses=78}
>> >> >>>>> name: filterCache
>> >> >>>>> class: org.apache.solr.search.FastLRUCache
>> >> >>>>> version: 1.0
>> >> >>>>> description: Concurrent LRU Cache(maxSize=153600, initialSize=4096,
>> >> >>>>> minSize=138240, acceptableSize=145920, cleanupThread=false)
>> >> >>>>> stats: lookups : 1082854
>> >> >>>>> hits : 940370
>> >> >>>>> hitratio : 0.86
>> >> >>>>> inserts : 142486
>> >> >>>>> evictions : 0
>> >> >>>>> size : 142486
>> >> >>>>> warmupTime : 0
>> >> >>>>> cumulative_lookups : 1082854
>> >> >>>>> cumulative_hits : 940370
>> >> >>>>> cumulative_hitratio : 0.86
>> >> >>>>> cumulative_inserts : 142486
>> >> >>>>> cumulative_evictions : 0
>> >> >>>>>
>> >> >>>>>
>> >> >>>>> index size: 3.25 GB
>> >> >>>>>
>> >> >>>>> Does anyone have pointers on where to look to optimize query time?
>> >> >>>>>
>> >> >>>>>
>> >> >>>>> 2011/12/7 Tomás Fernández Löbbe <tomasflo...@gmail.com>
>> >> >>>>>
>> >> >>>>>> Hi Dmitry, cache information is exposed via JMX, so you should be
>> >> >>>>>> able to monitor that information with any JMX tool. See
>> >> >>>>>> http://wiki.apache.org/solr/SolrJmx
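Per the SolrJmx wiki page linked above, JMX reporting is enabled by adding a
`<jmx/>` element to solrconfig.xml and starting the JVM with remote JMX
enabled (a sketch; the port number is an arbitrary example, and disabling
auth/SSL is only sensible on a trusted network):

```xml
<!-- solrconfig.xml: register Solr's MBeans with the JVM's MBean server.
     Then start the JVM with remote JMX enabled, e.g.:
       -Dcom.sun.management.jmxremote
       -Dcom.sun.management.jmxremote.port=9999
       -Dcom.sun.management.jmxremote.ssl=false
       -Dcom.sun.management.jmxremote.authenticate=false -->
<config>
  <jmx/>
</config>
```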
>> >> >>>>>>
>> >> >>>>>> On Wed, Dec 7, 2011 at 6:19 AM, Dmitry Kan <dmitry....@gmail.com> wrote:
>> >> >>>>>>
>> >> >>>>>>> Yes, we do require that much.
>> >> >>>>>>> Ok, thanks, I will try increasing the maxsize.
>> >> >>>>>>>
>> >> >>>>>>> On Wed, Dec 7, 2011 at 10:56 AM, pravesh <suyalprav...@yahoo.com> wrote:
>> >> >>>>>>>
>> >> >>>>>>>>>> facet.limit=500000
>> >> >>>>>>>> your facet.limit seems too high. Do you actually require this much?
>> >> >>>>>>>>
>> >> >>>>>>>> Since there are a lot of evictions from the filterCache,
>> >> >>>>>>>> increase the maxsize value to your acceptable limit.
>> >> >>>>>>>>
>> >> >>>>>>>> Regards
>> >> >>>>>>>> Pravesh
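The maxsize Pravesh refers to is configured on the filterCache entry in
solrconfig.xml; a sketch using the sizes quoted earlier in the thread
(the attribute values are examples to tune, not recommendations):

```xml
<!-- solrconfig.xml: filterCache sizing.  Tune size against observed
     evictions; autowarmCount controls how many entries are copied
     into a new searcher's cache on commit. -->
<filterCache class="solr.FastLRUCache"
             size="153600"
             initialSize="4096"
             autowarmCount="0"/>
```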
>> >> >>>>>>>>
>> >> >>>>>>>> --
>> >> >>>>>>>> View this message in context:
>> >> >>>>>>>> http://lucene.472066.n3.nabble.com/cache-monitoring-tools-tp3566645p3566811.html
>> >> >>>>>>>> Sent from the Solr - User mailing list archive at Nabble.com.
>> >> >>>>>>>>
>> >> >>>>>>>
>> >> >>>>>>>
>> >> >>>>>>>
>> >> >>>>>>> --
>> >> >>>>>>> Regards,
>> >> >>>>>>>
>> >> >>>>>>> Dmitry Kan
>> >> >>>>>>>
>> >> >>>>>>
>> >> >>>>>
>> >> >>>>>
>> >> >>>>>
>> >> >>>>> --
>> >> >>>>> Regards,
>> >> >>>>>
>> >> >>>>> Dmitry Kan
>> >> >>>>>
>> >> >>>>
>> >> >>>>
>> >> >>>>
>> >> >>>> --
>> >> >>>> Regards,
>> >> >>>>
>> >> >>>> Dmitry Kan
>> >> >>>>
>> >> >>>
>> >> >>
>> >> >>
>> >> >>
>> >> >> --
>> >> >> Regards,
>> >> >>
>> >> >> Dmitry Kan
>> >>
>>
