Trying to fit 1,000 cores in 6G of memory is... interesting. That's a
lot of stuff in a small amount of memory. I hope these cores' indexes
are tiny.

The lazy-loading bit for cores has a price: the first user in pays the
warmup penalty for that core while it loads. That may or may not be
noticeable, but be aware of it, and decide whether you want autowarming
in place.
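
Whether warming is worth it depends on your setup. As a sketch (the
cache sizes and the query below are illustrative placeholders, not a
recommendation), autowarming is configured per core in solrconfig.xml:

```xml
<!-- solrconfig.xml sketch: values are illustrative placeholders. -->
<!-- With ~1,000 transient cores, a small (or zero) autowarmCount keeps
     each core load cheap. -->
<filterCache class="solr.FastLRUCache" size="512" initialSize="512"
             autowarmCount="0"/>

<!-- Optionally fire a representative query whenever a new searcher
     opens, so the first real user doesn't pay the full warmup cost. -->
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst><str name="q">*:*</str><str name="rows">10</str></lst>
  </arr>
</listener>
```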

You can also specify how many cores are kept in memory at one time;
they go into an LRU cache and are aged out after they serve their last
outstanding request.
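
That limit is transientCacheSize in solr.xml. A sketch using the legacy
(Solr 4.x) solr.xml format, with illustrative values:

```xml
<!-- solr.xml sketch (legacy 4.x format); values are illustrative. -->
<solr persistent="true">
  <cores adminPath="/admin/cores" transientCacheSize="50">
    <!-- Cores marked transient compete for the 50 LRU slots and are
         only opened when a request arrives for them. -->
    <core name="core1" instanceDir="core1"
          loadOnStartup="false" transient="true"/>
    <!-- ...one <core> entry per core... -->
  </cores>
</solr>
```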

BTW, current Java practice seems to be setting Xmx and Xms to the same
value, 6G in your case.
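
On Windows with Tomcat, that is typically done in bin\setenv.bat (a
sketch; adjust to your installation):

```bat
rem setenv.bat sketch: give Tomcat a fixed 6 GB heap.
set "JAVA_OPTS=%JAVA_OPTS% -Xms6g -Xmx6g"
```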

Good Luck!
Erick

On Thu, Apr 10, 2014 at 12:14 AM, Atanas Atanasov <atanaso...@gmail.com> wrote:
> Thanks for the quick responses,
> I have allocated 1GB min and 6 GB max memory to Java. The cache settings
> are the default ones (maybe this is a good point to start).
> All cores share the same schema and config.
> I'll try setting the
> loadOnStartup=*false* transient=*true* options for each core and see what
> will happen.
>
> Those are the exceptions from the log files:
> SEVERE: Servlet.service() for servlet [default] in context with path
> [/solrt] threw exception
> java.lang.IllegalStateException: Cannot call sendError() after the response
> has been committed
> at org.apache.catalina.connector.ResponseFacade.sendError(ResponseFacade.java:450)
> at org.apache.solr.servlet.SolrDispatchFilter.sendError(SolrDispatchFilter.java:695)
> at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:383)
> at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:158)
> at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
> at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
> at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
> at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
> at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:170)
> at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
> at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
> at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
> at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
> at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1040)
> at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:607)
> at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:315)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
>
> AND
>
> SEVERE: null:ClientAbortException:  java.net.SocketException: Software
> caused connection abort: socket write error
> at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:371)
> at org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:333)
> at org.apache.catalina.connector.CoyoteOutputStream.flush(CoyoteOutputStream.java:101)
> at sun.nio.cs.StreamEncoder.implFlush(Unknown Source)
> at sun.nio.cs.StreamEncoder.flush(Unknown Source)
> at java.io.OutputStreamWriter.flush(Unknown Source)
> at org.apache.solr.util.FastWriter.flush(FastWriter.java:137)
> at org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:648)
> at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:375)
> at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:158)
> at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
> at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
> at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
> at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
> at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:170)
> at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
> at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
> at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
> at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
> at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1040)
> at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:607)
> at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:313)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> Caused by: java.net.SocketException: Software caused connection abort:
> socket write error
> at java.net.SocketOutputStream.socketWrite0(Native Method)
> at java.net.SocketOutputStream.socketWrite(Unknown Source)
> at java.net.SocketOutputStream.write(Unknown Source)
> at org.apache.coyote.http11.InternalOutputBuffer.realWriteBytes(InternalOutputBuffer.java:215)
> at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:480)
> at org.apache.coyote.http11.InternalOutputBuffer.flush(InternalOutputBuffer.java:119)
> at org.apache.coyote.http11.AbstractHttp11Processor.action(AbstractHttp11Processor.java:799)
> at org.apache.coyote.Response.action(Response.java:174)
> at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:366)
> ... 24 more
>
>
>
> On Thu, Apr 10, 2014 at 9:51 AM, Alexandre Rafalovitch
> <arafa...@gmail.com>wrote:
>
>> Are you using all those cores at once? If not, there is a recent
>> setting to let Solr load cores on demand.
>>
>> If you are using them all, perhaps you need to look into splitting
>> them across different machines (horizontal scaling).
>>
>> What about your caches? How many additional structures have you
>> configured for each core? How much memory have you allocated to the
>> Java process? You are probably running out of memory and thrashing
>> with swap. I am not even sure a Java process can access that much
>> memory in one process. You might be better off running multiple
>> Tomcat/Solr instances on the same machine with different subsets of
>> cores.
>>
>> Regards,
>>    Alex.
>> P.s. This is general advice, I don't know the specific issues around
>> that version of Solr/Tomcat.
>> Personal website: http://www.outerthoughts.com/
>> Current project: http://www.solr-start.com/ - Accelerating your Solr
>> proficiency
>>
>>
>> On Thu, Apr 10, 2014 at 1:40 PM, Atanas Atanasov <atanaso...@gmail.com>
>> wrote:
>> > Hi, guys,
>> >
>> > I need some help. After updating to Solr 4.4, the Tomcat process is
>> > consuming about 2 GB of memory and CPU usage sits around 40% for
>> > about 10 minutes after startup. The bigger problem, however, is that
>> > I have about 1,000 cores and it seems a thread is created for each
>> > core. The process has more than 1,000 threads and everything is
>> > extremely slow. Creating or unloading a core, even one without
>> > documents, takes about 20 minutes. Searching is more or less fine,
>> > but storing also takes a long time.
>> > Is there some configuration I missed or got wrong? There aren't many
>> > calls. I use 64-bit Tomcat 7, Solr 4.4, and the latest 64-bit Java.
>> > The machine has 24 GB of RAM and a 16-core CPU, and runs Windows
>> > Server 2008 R2. The index is updated every 30 seconds / 10,000
>> > documents.
>> > I hadn't checked the number of threads before the update because I
>> > didn't have to; it was working just fine. Any suggestion will be
>> > highly appreciated. Thank you in advance.
>> >
>> > Regards,
>> > Atanas
>>