Thanks. We will try with more heap.
And we noticed that ZooKeeper (OpenJDK) and Solr (Sun JDK) are running on
different JVMs. Could this really cause the OOM issue?
--
View this message in context:
http://lucene.472066.n3.nabble.com/Invalid-version-expected-2-but-60-or-the-data-in-not-in-javabin-for
Have you tuned your GC?
In the past I had a lot of problems with ZooKeeper as a result of GC pauses
because my heap was too big.
Increase your heap to 20G or more, and use some of the configurations described
on this page: http://wiki.apache.org/solr/ShawnHeisey
The first works fine for me wit
Thanks all for the reply.
We are working on reducing the delete query size.
But after that we faced one more issue: our batch process is able to delete
16k records, but we got an OOM exception on one server (out of the 4 servers
in our SolrCloud cluster). We are using Solr 4.2 and ZooKeeper 3.4.5.
Just an FYI: newer versions of Solr will return a proper error message rather
than that cryptic one.
- Mark
On Jan 3, 2014, at 12:54 AM, Shawn Heisey wrote:
> On 1/2/2014 10:22 PM, gpssolr2020 wrote:
>> Caused by: java.lang.RuntimeException: Invalid version (expected 2, but 60)
>> or the data
On 1/2/2014 10:22 PM, gpssolr2020 wrote:
> Caused by: java.lang.RuntimeException: Invalid version (expected 2, but 60)
> or the data in not in 'javabin' format
> (Account:123+AND+DATE:["2013-11-29T00:00:00Z"+TO+"2013-11-29T23:59:59Z"])+OR+
> (Account:345+AND+DATE:["2013-11-29T00:00:00Z"+TO+"2013
60 in ASCII is '<'. Is it returning XML? Or maybe an error message?
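wunder's observation can be checked directly. A minimal Python sketch (not from the thread) showing why a leading byte of 60 suggests the server answered with XML/HTML, such as an error page, instead of javabin:

```python
# Javabin responses begin with a version byte (expected value: 2).
# Byte 60 decoded as ASCII is '<', so the response body most likely
# starts with XML or HTML, e.g. a container-generated error page.
leading_byte = 60
print(chr(leading_byte))                  # '<'
print(b"<html>"[0] == leading_byte)       # True
```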
wunder
On Jan 2, 2014, at 9:22 PM, gpssolr2020 wrote:
> Hi,
>
> We are getting the below error message while trying to delete 30k records
> from solr.
>
> Error occured while invoking endpoint on Solr:
> org.apache.solr.cli
The process looks like this: each shard returns the top 100K documents
(actually the doc ID and whatever your sort criteria is, often just the
score) _from every shard_, and the node that distributes that request then
takes those 900K items and merges the list to get the 100K that satisfy the
request.
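A toy sketch of that merge step (not Solr's actual code; scores and doc IDs are made up). Each shard contributes a list already sorted by descending score, and the coordinating node does a k-way merge and keeps only the requested number of rows:

```python
import heapq
from itertools import islice

def merge_shard_results(shard_results, rows):
    # Each shard list holds (score, doc_id) pairs, sorted by descending
    # score. heapq.merge does a streaming k-way merge across all shards;
    # islice keeps only the top `rows` overall.
    merged = heapq.merge(*shard_results, key=lambda t: t[0], reverse=True)
    return list(islice(merged, rows))

# Toy example: 3 "shards", top 2 documents overall.
shards = [
    [(0.9, "a1"), (0.5, "a2")],
    [(0.8, "b1"), (0.4, "b2")],
    [(0.7, "c1"), (0.6, "c2")],
]
print(merge_shard_results(shards, 2))  # [(0.9, 'a1'), (0.8, 'b1')]
```

Even with a streaming merge, the coordinator must have all the per-shard top lists available, which is why 9 shards x 100K rows means roughly 900K items in flight on one node.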
Since it works to fetch 10K rows and doesn't work to fetch 100K rows in a
single request, I very strongly suggest that you use the requests that work.
Make ten requests of 10K rows each. Or even better, 100 requests of 1K rows
each.
Large requests make large memory demands.
wunder
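The batching wunder suggests maps onto Solr's start/rows pagination parameters. A minimal sketch (not from the thread) that generates the per-request offsets; note that very deep start offsets carry their own cost in Solr:

```python
def page_params(total_rows, page_size):
    # Yield (start, rows) pairs covering `total_rows` in `page_size`
    # chunks, suitable for Solr's start/rows request parameters.
    for start in range(0, total_rows, page_size):
        yield start, min(page_size, total_rows - start)

print(list(page_params(100_000, 10_000))[:2])   # [(0, 10000), (10000, 10000)]
print(len(list(page_params(100_000, 1_000))))   # 100 requests of 1K rows each
```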
On Jul 5, 20
Oops I actually meant to say that search engines *are not* optimized
for large pages. See https://issues.apache.org/jira/browse/SOLR-1726
Well one of the shards involved in the request is throwing an error.
Check the logs of your shards. You can also add a shards.info=true
param to your search whi
Thanks for your answer,
I can fetch 10K documents without any issue. I don't think we are hitting an
out-of-memory exception, because each Tomcat server in the cluster has 8GB of
memory allocated.
Can you try to fetch a smaller number of documents? Search engines are
optimized for returning large pages. My guess is that one of the
shards is returning an error (maybe an OutOfMemoryError) for this
query.
On Fri, Jul 5, 2013 at 7:56 PM, eakarsu wrote:
> I am using Solr 4.3.1 on solrcloud with
Adding the original message.
Thank you
Sergiu
-Original Message-
From: Sergiu Bivol [mailto:sbi...@blackberry.com]
Sent: Thursday, May 09, 2013 2:50 PM
To: solr-user@lucene.apache.org
Subject: RE: Invalid version (expected 2, but 60) or the data in not in
'javabin' format
I have a similar problem. With 5 shards, querying 500K rows fails, but 400K is
fine.
Querying individual shards for 1.5 million rows works.
All solr instances are v4.2.1 and running on separate Ubuntu VMs.
It is not random; it can always be reproduced by adding &rows=50 to a query
where numFound
This looks like you are using a SolrJ version different from the Solr server
version. Make sure that the server and client are using the same Solr
version.
On Mon, Mar 19, 2012 at 8:02 AM, Markus Jelsma
wrote:
> You probably have a non-char codepoint hanging around somewhere. You can
>
You probably have a non-char codepoint hanging around somewhere. You
can strip them away:
http://unicode.org/cldr/utility/list-unicodeset.jsp?a=[:Noncharacter_Code_Point=True:]
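A minimal Python sketch of the stripping Markus describes (not from the thread). Unicode noncharacters are U+FDD0..U+FDEF plus the last two code points of every plane (U+xxFFFE and U+xxFFFF):

```python
def is_noncharacter(cp):
    # U+FDD0..U+FDEF, or a code point ending in FFFE/FFFF in any plane.
    return 0xFDD0 <= cp <= 0xFDEF or (cp & 0xFFFE) == 0xFFFE

def strip_noncharacters(text):
    # Drop every noncharacter code point before indexing the document.
    return "".join(ch for ch in text if not is_noncharacter(ord(ch)))

print(strip_noncharacters("ok\ufdd0\ufffefine"))  # 'okfine'
```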
On Mon, 19 Mar 2012 10:33:35 +0800, "怪侠" <87863...@qq.com> wrote:
Hi, all.
I want to update the file's index. The fo