Hello,

I'm loading up our SolrCloud cluster with data (from a SolrJ client) and
running into a weird memory issue.  I can reliably reproduce the
problem.

- Using SolrCloud 4.4.0 (also reproduced with 4.6.0)
- 24 Solr nodes (one shard each), spread across 3 physical hosts; each
host has 256G of memory
- Index and tlogs on SSD
- Xmx=7G, G1GC
- Java 1.7.0_25
- schema.xml and solrconfig.xml attached

I'm using composite ID routing to route documents with the same clientId
to the same shard.  After several hours of indexing, I occasionally
see an IndexWriter hit an OutOfMemoryError.  I think that's a symptom
rather than the root cause.  When that happens, indexing continues, but
that node's tlog starts to grow.  When I notice this, I stop indexing
and bounce the problem node.  That's where it gets interesting.
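
Roughly how the routing is done on the client, in case it matters -- a
minimal SolrJ sketch, where the ZooKeeper hosts, collection name, and
field values are made-up placeholders (the compositeId router hashes the
part of the key before the '!'):

    import org.apache.solr.client.solrj.impl.CloudSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class CompositeRouteExample {
        public static void main(String[] args) throws Exception {
            // Placeholder ZooKeeper ensemble and collection name.
            CloudSolrServer server = new CloudSolrServer("zk1:2181,zk2:2181,zk3:2181");
            server.setDefaultCollection("collection1");

            long clientId = 42L;
            long docId = 1001L;

            SolrInputDocument doc = new SolrInputDocument();
            // "clientId!docId" -- everything sharing the clientId prefix is
            // hashed to the same shard, and therefore the same IndexWriter.
            doc.addField("key", clientId + "!" + docId);
            doc.addField("clientId", clientId);
            doc.addField("customerId", 7L);
            doc.addField("docType", "example");
            doc.addField("id", docId);
            server.add(doc);

            server.shutdown();
        }
    }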

Upon bouncing, the tlog replays, and then segments merge.  Once the
merging is complete, the heap is fairly full, and a forced full GC only
helps a little.  But if I then bounce the node again, heap usage goes
way down, and stays low until the next segment merge.  I believe
segment merges are also what cause the original OOM.

More details:

Index on disk for this node is ~13G, tlog is ~2.5G.
See attached mem1.png.  This is a jconsole view of the heap during the
following:

(Solr cloud node started at the left edge of this graph)

A) One CPU core pegged at 100%.  Thread dump shows:
"Lucene Merge Thread #0" daemon prio=10 tid=0x00007f5a3c064800
nid=0x7a74 runnable [0x00007f5a41c5f000]
   java.lang.Thread.State: RUNNABLE
        at org.apache.lucene.util.fst.Builder.add(Builder.java:397)
        at org.apache.lucene.codecs.BlockTreeTermsWriter$TermsWriter.finishTerm(BlockTreeTermsWriter.java:1000)
        at org.apache.lucene.codecs.TermsConsumer.merge(TermsConsumer.java:112)
        at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:72)
        at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:365)
        at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:98)
        at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
        at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
        at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
        at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)

B) One CPU core pegged at 100%.  Manually triggered GC.  Lots of
memory freed.  Thread dump shows:
"Lucene Merge Thread #0" daemon prio=10 tid=0x00007f5a3c064800
nid=0x7a74 runnable [0x00007f5a41c5f000]
   java.lang.Thread.State: RUNNABLE
        at org.apache.lucene.codecs.DocValuesConsumer$1$1.hasNext(DocValuesConsumer.java:127)
        at org.apache.lucene.codecs.lucene42.Lucene42DocValuesConsumer.addNumericField(Lucene42DocValuesConsumer.java:144)
        at org.apache.lucene.codecs.lucene42.Lucene42DocValuesConsumer.addNumericField(Lucene42DocValuesConsumer.java:92)
        at org.apache.lucene.codecs.DocValuesConsumer.mergeNumericField(DocValuesConsumer.java:112)
        at org.apache.lucene.index.SegmentMerger.mergeNorms(SegmentMerger.java:221)
        at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:119)
        at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
        at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
        at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
        at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)

C) One CPU core pegged at 100%.  Manually triggered GC.  No memory
freed.  Thread dump shows:
"Lucene Merge Thread #0" daemon prio=10 tid=0x00007f5a3c064800
nid=0x7a74 runnable [0x00007f5a41c5f000]
   java.lang.Thread.State: RUNNABLE
        at org.apache.lucene.codecs.DocValuesConsumer$1$1.hasNext(DocValuesConsumer.java:127)
        at org.apache.lucene.codecs.lucene42.Lucene42DocValuesConsumer.addNumericField(Lucene42DocValuesConsumer.java:108)
        at org.apache.lucene.codecs.lucene42.Lucene42DocValuesConsumer.addNumericField(Lucene42DocValuesConsumer.java:92)
        at org.apache.lucene.codecs.DocValuesConsumer.mergeNumericField(DocValuesConsumer.java:112)
        at org.apache.lucene.index.SegmentMerger.mergeNorms(SegmentMerger.java:221)
        at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:119)
        at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
        at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
        at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
        at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)

D) One CPU core pegged at 100%.  Thread dump shows:
"Lucene Merge Thread #0" daemon prio=10 tid=0x00007f5a3c064800
nid=0x7a74 runnable [0x00007f5a41c5f000]
   java.lang.Thread.State: RUNNABLE
        at org.apache.lucene.codecs.compressing.CompressingTermVectorsReader.get(CompressingTermVectorsReader.java:322)
        at org.apache.lucene.index.SegmentReader.getTermVectors(SegmentReader.java:169)
        at org.apache.lucene.codecs.compressing.CompressingTermVectorsWriter.merge(CompressingTermVectorsWriter.java:789)
        at org.apache.lucene.index.SegmentMerger.mergeVectors(SegmentMerger.java:312)
        at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:130)
        at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
        at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
        at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
        at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)

E) CPU usage drops to nominal levels.  Thread dump shows no Lucene Merge Thread.

F) Manually triggered full GC.  Some memory freed, but much remains.

G) Restarted Solr.  Very little memory used.


Throughout all of this, there was no indexing or querying happening.
To try to determine what's using up the memory, I took a heap dump at
point (F) and analyzed it in Eclipse MAT (see attached screenshot).
It shows 311 instances of Lucene42DocValuesProducer$3, each holding a
large byte[].  By attaching a remote debugger and re-running, it looks
like there is one of these byte[] arrays for each field in the schema
(we have several of the "dim_*" dynamic fields).  And as far as I know,
I'm not using DocValues at all.
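
For anyone trying to reproduce: a heap dump like the one I analyzed can
also be pulled over JMX without bouncing the node, assuming remote JMX
is reachable (we're already watching the heap with jconsole).  A rough
Java sketch -- host, port, and output path below are placeholders, and
the .hprof is written on the Solr host's filesystem:

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServerConnection;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class HeapDumpExample {
        public static void main(String[] args) throws Exception {
            // Placeholder JMX endpoint for the problem node.
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://solr-host:9999/jmxrmi");
            JMXConnector connector = JMXConnectorFactory.connect(url);
            MBeanServerConnection connection = connector.getMBeanServerConnection();

            HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                connection, "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);

            // 'true' dumps only live objects (forces a full GC first).
            diag.dumpHeap("/tmp/solr-node-heap.hprof", true);
            connector.close();
        }
    }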


Any hints as to what might be going on here would be greatly
appreciated.  It takes me about 10 minutes to reproduce this, so I'm
willing to try things.  I don't know enough about the internals of
Solr's memory usage to proceed much further on my own.

Thank you.

-Greg
<?xml version="1.0" encoding="UTF-8" ?>

<schema name="marin" version="1.5">

  <types>

    <!-- The StrField type is not analyzed, but indexed/stored verbatim. -->
    <fieldType name="string" class="solr.StrField" sortMissingLast="true" />

    <!-- boolean type: "true" or "false" -->
    <fieldType name="boolean" class="solr.BoolField" sortMissingLast="true"/>
    <fieldType name="int" class="solr.TrieIntField" precisionStep="0" positionIncrementGap="0"/>
    <fieldType name="float" class="solr.TrieFloatField" precisionStep="0" positionIncrementGap="0"/>
    <fieldType name="long" class="solr.TrieLongField" precisionStep="0" positionIncrementGap="0"/>
    <fieldType name="double" class="solr.TrieDoubleField" precisionStep="0" positionIncrementGap="0"/>

    <fieldType name="tint" class="solr.TrieIntField" precisionStep="8" positionIncrementGap="0"/>
    <fieldType name="tfloat" class="solr.TrieFloatField" precisionStep="8" positionIncrementGap="0"/>
    <fieldType name="tlong" class="solr.TrieLongField" precisionStep="8" positionIncrementGap="0"/>
    <fieldType name="tdouble" class="solr.TrieDoubleField" precisionStep="8" positionIncrementGap="0"/>

    <fieldType name="date" class="solr.TrieDateField" precisionStep="0" positionIncrementGap="0"/>

    <!-- A Trie based date field for faster date range queries and date faceting. -->
    <fieldType name="tdate" class="solr.TrieDateField" precisionStep="6" positionIncrementGap="0"/>


    <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
      <analyzer type="index">
        <!-- NOTE!  Changes here must also be reflected in SolrTextValue in olap_stitch for phrase queries -->
        <charFilter class="solr.PatternReplaceCharFilterFactory" pattern="[_:|()]" replacement=" "/>
        <!--<charFilter class="solr.PatternReplaceCharFilterFactory" pattern="\+" replacement="PLUS"/>-->
        <charFilter class="solr.PatternReplaceCharFilterFactory" pattern="\+([^ ]+)" replacement="PLUS$1 $1" maxBlockChars="20"/>
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
      <analyzer type="query">
        <charFilter class="solr.PatternReplaceCharFilterFactory" pattern="[_:|()]" replacement=" "/>
        <charFilter class="solr.PatternReplaceCharFilterFactory" pattern="\+" replacement="PLUS"/>
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>


      <!-- TODO: example CJK tokenizer field type.  Experimental for now -->
    <fieldType name="text_cjk" class="solr.TextField">
        <analyzer>
            <tokenizer class="solr.StandardTokenizerFactory"/>
            <filter class="solr.CJKWidthFilterFactory"/>
            <filter class="solr.LowerCaseFilterFactory"/>
            <filter class="solr.CJKBigramFilterFactory" han="true" hiragana="true" katakana="true" hangul="true" outputUnigrams="false" />
        </analyzer>
    </fieldType>


    <!-- This is an example of using the KeywordTokenizer along
         with various TokenFilterFactories to produce a sortable field
         that does not include some properties of the source text
      -->
    <fieldType name="alphaOnlySort" class="solr.TextField" sortMissingLast="true" omitNorms="true">
      <analyzer>
        <tokenizer class="solr.KeywordTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory" />
        <filter class="solr.TrimFilterFactory" />
        <filter class="solr.PatternReplaceFilterFactory"
                pattern="([^a-z])" replacement="" replace="all"
        />
      </analyzer>
    </fieldType>

      <!-- This is a custom field type for indexing text without tokenization, and having spaces replaced by underscores.
           Will be used for "matches exactly".  Could also theoretically be used for arbitrary substring search, but the
           observed performance for that was bad.  -->
      <fieldType name="string_case_insensitive_underscores" class="solr.TextField" sortMissingLast="true" omitNorms="true">
          <analyzer type="index">
              <tokenizer class="solr.KeywordTokenizerFactory" />
              <filter class="solr.LowerCaseFilterFactory" />
              <filter class="solr.TrimFilterFactory" />
              <filter class="solr.PatternReplaceFilterFactory" pattern="\s+" replacement="_" replace="all" />
          </analyzer>
          <analyzer type="query">
              <tokenizer class="solr.KeywordTokenizerFactory" />
              <filter class="solr.LowerCaseFilterFactory" />
              <filter class="solr.TrimFilterFactory" />
              <filter class="solr.PatternReplaceFilterFactory" pattern="\s+" replacement="_" replace="all" />
          </analyzer>
      </fieldType>


      <!-- This is a custom field type for indexing urls.  We will likely tweak this in the future, but for now we want
           to tokenize on common url punctuation.  -->
      <fieldType name="url" class="solr.TextField" sortMissingLast="true" omitNorms="true">
          <analyzer type="index">
              <!-- tokenize on '/', '?', ':', '.', '=', and '&' -->
              <tokenizer class="solr.PatternTokenizerFactory" pattern="[&amp;/\?:\.=]+" />
              <filter class="solr.LowerCaseFilterFactory" />
          </analyzer>
          <analyzer type="query">
              <!-- tokenize on '/', '?', ':', '.', '=', and '&' -->
              <tokenizer class="solr.PatternTokenizerFactory" pattern="[&amp;/\?:\.=]+" />
              <filter class="solr.LowerCaseFilterFactory" />
          </analyzer>
      </fieldType>


 </types>


 <fields>

     <!--compound key field-->
     <field name="key" type="string" indexed="true" stored="true" required="true" />

     <!--common fields-->
     <field name="clientId" type="long" indexed="true" stored="true" required="true" />
     <field name="customerId" type="long" indexed="true" stored="true" required="true" />
     <field name="docType" type="string" indexed="true" stored="true" required="true" />
     <field name="title" type="text_general" indexed="true" stored="true" termVectors="true" termPositions="true" termOffsets="true" />
     <field name="id" type="long" indexed="true" stored="true" required="true" />
     <field name="status" type="string" indexed="true" stored="true" />
     <field name="url" type="url" indexed="true" stored="true" />
     <field name="aId" type="long" indexed="true" stored="true" />
     <field name="aName" type="text_general" indexed="true" stored="true" />
     <field name="bId" type="long" indexed="true" stored="true" />
     <field name="bName" type="text_general" indexed="true" stored="true" />
     <field name="cId" type="long" indexed="true" stored="true" />
     <field name="cName" type="text_general" indexed="true" stored="true" />
     <field name="dId" type="long" indexed="true" stored="true" />
     <field name="dName" type="text_general" indexed="true" stored="true" />
     <field name="eId" type="long" indexed="true" stored="true" />
     <field name="eName" type="text_general" indexed="true" stored="true" />
     <field name="fDate" type="date" indexed="true" stored="true" />
     <field name="gDate" type="date" indexed="true" stored="true" />
     <field name="lastModDate" type="date" indexed="true" stored="true" />
     <field name="hId" type="long" indexed="true" stored="true" /> <!-- what's this? -->
     <field name="url2" type="url" indexed="true" stored="true" />
     <field name="category" type="string" indexed="true" stored="true" />
     <field name="override" type="string" indexed="true" stored="true" />
     <field name="opStatus" type="string" indexed="true" stored="true" />
     <field name="kId" type="long" indexed="true" stored="true" />
     <field name="type" type="string" indexed="true" stored="true" />
     <field name="mId" type="long" indexed="true" stored="true" />
     <field name="pId" type="long" indexed="true" stored="true" />
     
     <!-- dynamic field defs -->
     <dynamicField name="*_sort"  type="alphaOnlySort" indexed="true" stored="false"/>
     <dynamicField name="dim_*"  type="text_general" indexed="true" stored="true"/>
     <dynamicField name="n_d_*"  type="double" indexed="true" stored="true"/>
     <dynamicField name="n_i_*"  type="int" indexed="true" stored="true"/>
     <dynamicField name="*_facet"  type="string" indexed="true" stored="false"/>
     <dynamicField name="md_l_*"  type="long" indexed="true" stored="true"/>
     <dynamicField name="md_t_*"  type="date" indexed="true" stored="true"/>
     <dynamicField name="*_str" type="string_case_insensitive_underscores" indexed="true" stored="false"/>

     <!-- copy sort fields -->
     <copyField source="title" dest="title_sort"/>
     <copyField source="bName" dest="bName_sort"/>
     <copyField source="cName" dest="cName_sort"/>
     <copyField source="dName" dest="dName_sort"/>
     <copyField source="eName" dest="eName_sort"/>
     <copyField source="fName" dest="fName_sort"/>

     <!-- copy to string fields for exact matching -->
     <copyField source="title" dest="title_str"/>
     <copyField source="bName" dest="bName_str"/>
     <copyField source="cName" dest="cName_str"/>
     <copyField source="dName" dest="dName_str"/>
     <copyField source="eName" dest="eName_str"/>
     <copyField source="fName" dest="fName_str"/>

     <field name="_version_" type="long" indexed="true" stored="true" />
   
 </fields>

 <!-- Field to use to determine and enforce document uniqueness. 
      Unless this field is marked with required="false", it will be a required field
   -->
 <uniqueKey>key</uniqueKey>

</schema>
<?xml version="1.0" encoding="UTF-8" ?>

<config>

  <!-- need to keep this in sync with olap_stitch/src/com/marin/olap/query/solr/values/SolrTextValue.java -->
  <luceneMatchVersion>LUCENE_41</luceneMatchVersion>

  <jmx />

  <!-- Set this to 'false' if you want solr to continue working after it has 
       encountered a severe configuration error.  In a production environment, 
       you may want solr to keep working even if one handler is mis-configured.

       You may also set this to false by setting the system property:
         -Dsolr.abortOnConfigurationError=false
     -->
  <abortOnConfigurationError>${solr.abortOnConfigurationError:true}</abortOnConfigurationError>

  <!-- <lib dir="../../../../dist/" regex="apache-solr-dataimporthandler-.*\.jar" /> -->
  <!--<lib dir="../../../dist/" regex="solr-dataimporthandler-\d.*\.jar" /> -->

  <dataDir>../../data/</dataDir>

  <indexConfig>

    <!-- options specific to the main on-disk lucene index -->
    <useCompoundFile>false</useCompoundFile>
    <ramBufferSizeMB>64</ramBufferSizeMB>
    <maxMergeDocs>2147483647</maxMergeDocs>

    <!--
       The maximum number of simultaneous threads that may be
       indexing documents at once in IndexWriter; if more than this
       many threads arrive they will wait for others to finish.
       Default in Solr/Lucene is 8. -->
    <!-- <maxIndexingThreads>8</maxIndexingThreads>  -->

    <!--
         Expert: Merge Policy
         The Merge Policy in Lucene controls how merging of segments is done.
         The default since Solr/Lucene 3.3 is TieredMergePolicy.
         The default since Lucene 2.3 was the LogByteSizeMergePolicy;
         even older versions of Lucene used LogDocMergePolicy.

    -->
    <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
      <int name="maxMergeAtOnce">10</int>
      <int name="segmentsPerTier">10</int>
      <double name="reclaimDeletesWeight">4.0</double>
    </mergePolicy>

    <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler"/>

    <mergedSegmentWarmer class="org.apache.lucene.index.SimpleMergedSegmentWarmer"/>

    <writeLockTimeout>1000</writeLockTimeout>
    <commitLockTimeout>10000</commitLockTimeout>

    <unlockOnStartup>false</unlockOnStartup>

    <lockType>native</lockType>

    <nrtMode>true</nrtMode>

    <infoStream>true</infoStream>

  </indexConfig>

  <!-- the default high-performance update handler -->
  <updateHandler class="solr.DirectUpdateHandler2">

    <updateLog>
      <str name="dir">${solr.ulog.dir:}</str>
    </updateLog>

    <!-- Perform a <commit/> automatically under certain conditions:

         maxDocs - number of updates since last commit is greater than this
         maxTime - oldest uncommitted update (in ms) is this long ago
    -->
    <autoCommit>
      <maxDocs>500000</maxDocs>
      <maxTime>300000</maxTime>  <!-- 5 minutes -->
      <openSearcher>false</openSearcher>
    </autoCommit>
    <!-- softAutoCommit is like autoCommit except it causes a
         'soft' commit which only ensures that changes are visible
         but does not ensure that data is synced to disk.  This is
         faster and more near-realtime friendly than a hard commit.
      -->
    <autoSoftCommit>
      <maxTime>300000</maxTime>  <!-- 5 minutes -->
    </autoSoftCommit>

  </updateHandler>


  <query>
    <!-- Maximum number of clauses in a boolean query... can affect
        range or prefix queries that expand to big boolean
        queries.  An exception is thrown if exceeded.  -->
    <maxBooleanClauses>1024</maxBooleanClauses>

    <filterCache
      class="solr.FastLRUCache"
      size="5120"
      initialSize="2048"
      autowarmCount="256"/>

   <!-- queryResultCache caches results of searches - ordered lists of
         document ids (DocList) based on a query, a sort, and the range
         of documents requested.  -->
    <queryResultCache
      class="solr.LRUCache"
      size="5120"
      initialSize="2048"
      autowarmCount="256"/>

  <!-- documentCache caches Lucene Document objects (the stored fields for each document).
       Since Lucene internal document ids are transient, this cache will not be autowarmed.  -->
    <documentCache
      class="solr.LRUCache"
      size="512"
      initialSize="512"
      autowarmCount="0"/>


    <fieldValueCache
      class="solr.LRUCache"
      size="10000"
      initialSize="10"
      autowarmCount="0"/>

    <!-- If true, stored fields that are not requested will be loaded lazily.

    This can result in a significant speed improvement if the usual case is to
    not load all stored fields, especially if the skipped fields are large compressed
    text fields.
    -->
    <enableLazyFieldLoading>true</enableLazyFieldLoading>


    <queryResultWindowSize>400</queryResultWindowSize>
    
    <!-- Maximum number of documents to cache for any entry in the
         queryResultCache. -->
    <queryResultMaxDocsCached>1000</queryResultMaxDocsCached>


    <!-- a newSearcher event is fired whenever a new searcher is being prepared
         and there is a current searcher handling requests (aka registered). -->
    <!-- QuerySenderListener takes an array of NamedList and executes a
         local query request for each NamedList in sequence. -->
    <listener event="newSearcher" class="solr.QuerySenderListener">
      <arr name="queries">
      </arr>
    </listener>

    <!-- a firstSearcher event is fired whenever a new searcher is being
         prepared but there is no current registered searcher to handle
         requests or to gain autowarming data from. -->
    <listener event="firstSearcher" class="solr.QuerySenderListener">
      <arr name="queries">
      </arr>
    </listener>

    <!-- If a search request comes in and there is no current registered searcher,
         then immediately register the still warming searcher and use it.  If
         "false" then all requests will block until the first searcher is done
         warming. -->
    <useColdSearcher>false</useColdSearcher>

    <!-- Maximum number of searchers that may be warming in the background
      concurrently.  An error is returned if this limit is exceeded. Recommend
      1-2 for read-only slaves, higher for masters w/o cache warming. -->
    <maxWarmingSearchers>2</maxWarmingSearchers>

  </query>

  <!-- 
    Let the dispatch filter handle /select?qt=XXX
    handleSelect=true will use consistent error handling for /select and /update
    handleSelect=false will use solr1.1 style error formatting
    -->
  <requestDispatcher handleSelect="true" >
    <!--Make sure your system has some authentication before enabling remote streaming!  -->
    <requestParsers enableRemoteStreaming="true" multipartUploadLimitInKB="2048" />
        
    <!-- Set HTTP caching related parameters (for proxy caches and clients).
          
         To get the behaviour of Solr 1.2 (ie: no caching related headers)
         use the never304="true" option and do not specify a value for
         <cacheControl>
    -->
    <httpCaching never304="true">
    </httpCaching>
  </requestDispatcher>
  
      
  <!-- requestHandler plugins... incoming queries will be dispatched to the
     correct handler based on the path or the qt (query type) param.
     Names starting with a '/' are accessed with a path equal to the
     registered name.  Names without a leading '/' are accessed with:
      http://host/app/select?qt=name
     If no qt is defined, the requestHandler that declares default="true"
     will be used.
  -->
  <requestHandler name="standard" class="solr.StandardRequestHandler" default="true">
    <!-- default values for query parameters -->
    <lst name="defaults">
        <str name="echoParams">explicit</str>
        <str name="df">title</str>
        <str name="q.op">AND</str>
    </lst>
  </requestHandler>


   <!--<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">-->
    <!--<lst name="defaults">-->
    	<!--<str name="config">data-config.xml</str>-->
    <!--</lst>-->
  <!--</requestHandler>-->

  <requestHandler name="/replication" class="solr.ReplicationHandler">
  </requestHandler>

    <!-- realtime get handler, guaranteed to return the latest stored fields of
         any document, without the need to commit or open a new searcher.  The
         current implementation relies on the updateLog feature being enabled. -->
    <requestHandler name="/get" class="solr.RealTimeGetHandler">
        <lst name="defaults">
            <str name="omitHeader">true</str>
            <str name="wt">json</str>
            <str name="indent">true</str>
        </lst>
    </requestHandler>

    <!--

    Search components are registered to SolrCore and used by Search Handlers

    By default, the following components are available:

    <searchComponent name="query"     class="org.apache.solr.handler.component.QueryComponent" />
    <searchComponent name="facet"     class="org.apache.solr.handler.component.FacetComponent" />
    <searchComponent name="mlt"       class="org.apache.solr.handler.component.MoreLikeThisComponent" />
    <searchComponent name="highlight" class="org.apache.solr.handler.component.HighlightComponent" />
    <searchComponent name="debug"     class="org.apache.solr.handler.component.DebugComponent" />

    If you register a searchComponent to one of the standard names, that will be used instead.

    -->
 
  <requestHandler name="/search" class="org.apache.solr.handler.component.SearchHandler">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
    </lst>
  </requestHandler>
  
  <requestHandler name="/update" class="solr.UpdateRequestHandler" >
    <!--
    <str name="update.processor.class">org.apache.solr.handler.UpdateRequestProcessor</str>
    -->
  </requestHandler>


  <requestHandler name="/analysis/field"
                  startup="lazy"
                  class="solr.FieldAnalysisRequestHandler" />

  <requestHandler name="/analysis/document"
                  class="solr.DocumentAnalysisRequestHandler"
                  startup="lazy" />

  <requestHandler name="/admin/" class="org.apache.solr.handler.admin.AdminHandlers" />

  <!-- ping/healthcheck -->
  <requestHandler name="/admin/ping" class="solr.PingRequestHandler">
      <lst name="invariants">
          <str name="q">solrpingquery</str>
      </lst>
      <lst name="defaults">
          <str name="echoParams">all</str>
      </lst>
      <!-- An optional feature of the PingRequestHandler is to configure the
         handler with a "healthcheckFile" which can be used to enable/disable
         the PingRequestHandler.
         Relative paths are resolved against the data dir
      -->
      <!-- <str name="healthcheckFile">server-enabled.txt</str> -->
  </requestHandler>

  <!-- Echo the request contents back to the client -->
  <requestHandler name="/debug/dump" class="solr.DumpRequestHandler" >
    <lst name="defaults">
     <str name="echoParams">explicit</str> <!-- for all params (including the default etc) use: 'all' -->
     <str name="echoHandler">true</str>
    </lst>
  </requestHandler>
  

  <!-- XSLT response writer transforms the XML output by any xslt file found
       in Solr's conf/xslt directory.  Changes to xslt files are checked for
       every xsltCacheLifetimeSeconds.  
   -->
  <queryResponseWriter name="xslt" class="solr.XSLTResponseWriter">
    <int name="xsltCacheLifetimeSeconds">5</int>
  </queryResponseWriter> 
    
  <!-- config for the admin interface --> 
  <admin>
    <defaultQuery>*:*</defaultQuery>
    
    <!-- configure a healthcheck file for servers behind a loadbalancer
    <healthcheck type="file">server-enabled</healthcheck>
    -->
  </admin>

</config>
