" and need to expect to need to do some heavy duty
trial and error tuning on your own.
-- Jack Krupansky
-----Original Message----- From: Tim Vaillancourt
Sent: Saturday, July 27, 2013 4:21 PM
To: solr-user@lucene.apache.org
Subject: Re: SolrCloud 4.3.1 - "Failure to open existing log file (non
fatal)" errors under high load
Thanks for the reply Erick,

Hard Commit - 15000ms, openSearcher=false
Soft Commit - 1000ms, openSearcher=true

15sec hard commit was sort of a guess; I could try a smaller number.
When you say "getting too large", what limit do you think it would be
hitting: a ulimit (nofiles), disk space, number...
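For reference, the commit intervals quoted above correspond to a solrconfig.xml fragment along these lines (a sketch reconstructed from the values in this thread; the element names are standard Solr update-handler configuration):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- hard commit every 15s: flushes segments and truncates the tlog,
       without opening a new searcher -->
  <autoCommit>
    <maxTime>15000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- soft commit every 1s: opens a new searcher for near-real-time
       visibility, but does not touch the tlog -->
  <autoSoftCommit>
    <maxTime>1000</maxTime>
  </autoSoftCommit>
</updateHandler>
```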
What is your autocommit limit? Is it possible that your transaction
logs are simply getting too large? tlogs are truncated whenever
you do a hard commit (autocommit), with openSearcher either
true or false; it doesn't matter.
FWIW,
Erick
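To check the tlog-size theory above directly, the on-disk transaction logs can be inspected during the load test; the core data path below is a placeholder for illustration:

```shell
# list transaction log files for one core so their count and total size
# can be watched while indexing; adjust the placeholder path to your install
ls -lh /var/solr/data/collection1_shard1_replica1/data/tlog/ 2>/dev/null \
  || echo "no tlog dir at placeholder path"
```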
On Fri, Jul 26, 2013 at 12:56 AM, Tim Vaillancourt wrote:
Thanks Shawn and Yonik!
Yonik: I noticed this error appears to be fairly trivial, but it is not
appearing after a previous crash. Every time I run this high-volume test
that produced my stack trace, I zero out the logs, Solr data and
Zookeeper data and start over from scratch with a brand new...
On 7/25/2013 6:53 PM, Tim Vaillancourt wrote:
> Thanks for the reply Shawn, I can always count on you :).
>
> We are using 10GB heaps and have over 100GB of OS cache free to answer the
> JVM question, Young has about 50% of the heap, all CMS. Our max number of
> processes for the JVM user is 10k,
On Thu, Jul 25, 2013 at 7:44 PM, Tim Vaillancourt wrote:
> "ERROR [2013-07-25 19:34:24.264] [org.apache.solr.common.SolrException]
> Failure to open existing log file (non fatal)
>
That itself isn't necessarily a problem (and why it says "non fatal")
- it just means that most likely the transaction...
Thanks for the reply Shawn, I can always count on you :).
We are using 10GB heaps and have over 100GB of OS cache free to answer the
JVM question, Young has about 50% of the heap, all CMS. Our max number of
processes for the JVM user is 10k, which is where Solr dies when it blows
up with 'cannot c
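The crash message above is truncated, so this is only an assumption, but a JVM that dies once its user's 10k process cap is reached is typically running into the per-user limits below, which can be checked from the Solr user's shell:

```shell
# per-user resource limits relevant to a busy Solr JVM: every Java thread
# counts against the process limit, and every open tlog/index segment file
# counts against the file-descriptor limit
ulimit -u   # max user processes/threads
ulimit -n   # max open file descriptors
```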
On 7/25/2013 5:44 PM, Tim Vaillancourt wrote:
The transaction log error I receive after about 10-30 minutes of load
testing is:
"ERROR [2013-07-25 19:34:24.264] [org.apache.solr.common.SolrException]
Failure to open existing log file (non fatal)
/opt/easw/easw_apps/easo_solr_cloud/solr/xmshd_sha
Stack trace:
http://timvaillancourt.com.s3.amazonaws.com/tmp/solrcloud.nodeC.2013-07-25-16.jstack.gz
Cheers!
Tim
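A thread dump like the one linked above can be captured and compressed with the JDK's jstack tool; the PID lookup below is an assumption about how the node's process is found, not taken from the thread:

```shell
# capture and gzip a thread dump from a running JVM (sketch; assumes a
# java process exists and the JDK's jstack is on PATH)
f="solrcloud.node.$(date +%F-%H).jstack"
pid=$(pgrep -f java | head -n 1)
if [ -n "$pid" ] && command -v jstack >/dev/null 2>&1; then
  jstack -l "$pid" > "$f" 2>/dev/null || true
  gzip -f "$f" 2>/dev/null || true
  echo "dump attempted: $f.gz"
else
  echo "no running JVM or jstack not on PATH"
fi
```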
On 25 July 2013 16:44, Tim Vaillancourt wrote:
> Hey guys,
>
> I am reaching out to the Solr list with a very vague issue: under high
> load against a SolrCloud 4.3.1 cluster of 3
Hey guys,
I am reaching out to the Solr list with a very vague issue: under high load
against a SolrCloud 4.3.1 cluster of 3 instances, 3 shards, 2 replicas (2
cores per instance), I eventually see failure messages related to
transaction logs, and shortly after these stacktraces occur the cluster