Yes, they are both 6.0.
On Apr 25, 2016 1:07 PM, "Anshum Gupta" wrote:
Hi Joe,
Can you confirm that the versions of Solr and SolrJ are in sync?
On Mon, Apr 25, 2016 at 10:05 AM, Joe Lawson <
jlaw...@opensourceconnections.com> wrote:
This appears to be a bug that'll be fixed in 6.1:
https://issues.apache.org/jira/browse/SOLR-7729
On Fri, Apr 22, 2016 at 8:07 PM, Doug Turnbull <
dturnb...@opensourceconnections.com> wrote:
Joe, this might be _version_, as in Solr's optimistic concurrency used in
atomic updates, etc.:
http://yonik.com/solr/optimistic-concurrency/
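For context, a minimal sketch of how _version_ drives optimistic concurrency in SolrJ (my own illustration, not Joe's code; the URL, id, and version value are placeholders):

import java.io.IOException;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class OptimisticUpdate {
    public static void main(String[] args) throws SolrServerException, IOException {
        try (HttpSolrClient client =
                 new HttpSolrClient("http://localhost:8983/solr/techproducts")) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "1");
            doc.addField("price", 9.99);
            // A positive _version_ tells Solr: apply this update only if the
            // stored version matches; otherwise the request fails with a 409.
            doc.addField("_version_", 1234567890123456789L);
            client.add(doc);
            client.commit();
        }
    }
}

Sending a stale _version_ produces the version-conflict error Yonik's post describes.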
On Fri, Apr 22, 2016 at 5:24 PM Joe Lawson <
jlaw...@opensourceconnections.com> wrote:
I'm updating from a basic Solr client to the ConcurrentUpdateSolrClient and
I'm hitting a really strange error. I cannot share the code, but the snippet
is like:

try (ConcurrentUpdateSolrClient solrUpdateClient =
        new ConcurrentUpdateSolrClient("
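A hedged guess at the snippet's full shape (the URL, queue size of 10, and thread count of 4 are placeholders, not Joe's actual values):

import java.io.IOException;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class ConcurrentUpdateExample {
    public static void main(String[] args) throws SolrServerException, IOException {
        // Buffers updates in a queue of 10 and flushes with 4 background threads.
        try (ConcurrentUpdateSolrClient solrUpdateClient =
                 new ConcurrentUpdateSolrClient(
                     "http://localhost:8983/solr/techproducts", 10, 4)) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "1");
            solrUpdateClient.add(doc);
            solrUpdateClient.commit();
        }
    }
}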
Thanks. We will try with more heap.
We also noticed that ZooKeeper (OpenJDK) and Solr (Sun JDK) are running on
different JVMs. Will this really cause this OOM issue?
Thanks all for the reply.
We are working on reducing the delete query size.
But after that we faced one more issue: our batch process is able to delete
16k records, but we got an OOM exception on one server (out of the 4 servers
in the SolrCloud cluster). We are using Solr 4.2 and ZooKeeper
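One common way to shrink a huge delete is to send many small delete-by-id batches instead of one giant delete query. A sketch under my own assumptions (loadIdsToDelete is a hypothetical helper; the URL is a placeholder):

import java.io.IOException;
import java.util.Collections;
import java.util.List;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;

public class BatchedDelete {
    public static void main(String[] args) throws SolrServerException, IOException {
        HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
        List<String> ids = loadIdsToDelete();
        int batchSize = 1000;
        for (int i = 0; i < ids.size(); i += batchSize) {
            // Each request carries at most 1000 ids instead of one giant query.
            server.deleteById(ids.subList(i, Math.min(i + batchSize, ids.size())));
        }
        server.commit();
        server.shutdown();
    }

    // Hypothetical stand-in for however the batch job collects its ids.
    static List<String> loadIdsToDelete() {
        return Collections.emptyList();
    }
}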
Do you have your GC tuned?
In the past I had a lot of problems with ZooKeeper as a result of GC pauses
because my heap was too big.
Increase your heap to 20G or more, and use some of the configurations
described at http://wiki.apache.org/solr/ShawnHeisey
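For what it's worth, the settings in question are JVM startup flags; an illustrative CMS-era example (not copied from Shawn's page; sizes and paths are placeholders):

java -Xms20g -Xmx20g \
     -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
     -XX:CMSInitiatingOccupancyFraction=70 \
     -XX:+CMSParallelRemarkEnabled \
     -verbose:gc -Xloggc:gc.log \
     -jar start.jar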
The first works fine for me
Just an FYI: newer versions of Solr will report a proper error message rather
than that cryptic one.
- Mark
On Jan 3, 2014, at 12:54 AM, Shawn Heisey s...@elyograg.org wrote:
On 1/2/2014 10:22 PM, gpssolr2020 wrote:
Caused by: java.lang.RuntimeException: Invalid version (expected 2, but 60)
Hi,
We are getting the below error message while trying to delete 30k records
from Solr.
Error occurred while invoking endpoint on Solr:
org.apache.solr.client.solrj.SolrServerException: Error executing query
at
60 in ASCII is '<'. Is it returning XML? Or maybe an error message?
wunder
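To make the "60" concrete: a javabin response begins with a version byte of 2, while XML or an HTML error page begins with '<' (ASCII 60). A toy sketch of checking the first byte of a response (my own illustration, not SolrJ code):

import java.io.IOException;
import java.io.InputStream;

public class JavabinProbe {
    static void checkFirstByte(InputStream responseBody) throws IOException {
        int first = responseBody.read();
        if (first == 2) {
            System.out.println("Looks like javabin (version byte 2).");
        } else if (first == '<') { // ASCII 60: XML or an HTML error page
            System.out.println("Got markup instead of javabin - check for HTTP errors.");
        } else {
            System.out.println("Unexpected first byte: " + first);
        }
    }
}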
On 1/2/2014 10:22 PM, gpssolr2020 wrote:
Caused by: java.lang.RuntimeException: Invalid version (expected 2, but 60)
or the data in not in 'javabin' format
snip
(Account:123+AND+DATE:[2013-11-29T00:00:00Z+TO+2013-11-29T23:59:59Z])+OR+
The process looks like this: each shard returns the top 100K documents
(actually the doc ID and whatever your sort criteria is, often just the
score) _from every shard_, and the node that distributes that request then
takes those 900K items and merges the list to get the 100K that satisfy the
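A toy sketch of that coordinator-side merge (my own illustration, not Solr's actual code); the point is that every shard's full top-N list has to reach the coordinator even though only N entries survive:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class ShardMerge {
    static class Hit {
        final String docId;
        final float score;
        Hit(String docId, float score) { this.docId = docId; this.score = score; }
    }

    // perShardTopN: e.g. 9 shards x 100K hits = ~900K entries in memory at once.
    static List<Hit> mergeTopN(List<List<Hit>> perShardTopN, int n) {
        // Min-heap of the best n seen so far, keyed by score.
        PriorityQueue<Hit> best =
            new PriorityQueue<>(Comparator.comparingDouble((Hit h) -> h.score));
        for (List<Hit> shardHits : perShardTopN) {
            for (Hit hit : shardHits) {
                best.offer(hit);
                if (best.size() > n) best.poll(); // evict the current global minimum
            }
        }
        List<Hit> top = new ArrayList<>(best);
        top.sort(Comparator.comparingDouble((Hit h) -> h.score).reversed());
        return top;
    }
}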
I am using Solr 4.3.1 on SolrCloud with 10 nodes.
I added 3 million documents from a CSV file with this command:
curl
Can you try to fetch a smaller number of documents? Search engines are
optimized for returning large pages. My guess is that one of the
shards is returning an error (maybe an OutOfMemoryError) for this
query.
On Fri, Jul 5, 2013 at 7:56 PM, eakarsu eaka...@gmail.com wrote:
I am using Solr 4.3.1
Thanks for your answer,
I can fetch 10K documents without any issue. I don't think we are hitting an
out-of-memory exception, because each Tomcat server in the cluster has 8GB of
memory allocated.
Oops, I actually meant to say that search engines *are not* optimized
for large pages. See https://issues.apache.org/jira/browse/SOLR-1726
Well, one of the shards involved in the request is throwing an error.
Check the logs of your shards. You can also add a shards.info=true
param to your search
Since it works to fetch 10K rows and doesn't work to fetch 100K rows in a
single request, I very strongly suggest that you use the requests that work.
Make ten requests of 10K rows each. Or even better, 100 requests of 1K rows
each.
Large requests make large memory demands.
wunder
On Jul 5,
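In SolrJ terms, Walter's suggestion might look roughly like this (a sketch with placeholder URL and query; note that plain start/rows paging still gets expensive at very deep offsets, as described above):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

public class PagedFetch {
    public static void main(String[] args) throws SolrServerException {
        HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
        SolrQuery query = new SolrQuery("*:*"); // placeholder query
        int pageSize = 1000; // 100 requests of 1K rows instead of one 100K request
        query.setRows(pageSize);
        long fetched = 0;
        long numFound;
        do {
            query.setStart((int) fetched);
            QueryResponse rsp = server.query(query);
            numFound = rsp.getResults().getNumFound();
            if (rsp.getResults().isEmpty()) break; // guard if the result set shrinks
            for (SolrDocument doc : rsp.getResults()) {
                fetched++; // process each document here
            }
        } while (fetched < numFound);
        server.shutdown();
    }
}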
I have a similar problem. With 5 shards, querying 500K rows fails, but 400K
is fine.
Querying individual shards for 1.5 million rows works.
All Solr instances are v4.2.1 and running on separate Ubuntu VMs.
It is not random; it can always be reproduced by adding rows=50 to a query
where numFound
Adding the original message.
Thank you
Sergiu
-----Original Message-----
From: Sergiu Bivol [mailto:sbi...@blackberry.com]
Sent: Thursday, May 09, 2013 2:50 PM
To: solr-user@lucene.apache.org
Subject: RE: Invalid version (expected 2, but 60) or the data in not in
'javabin' format
I have a
Thanks for the prompt reply, Mark.
Just to give you some background, I'm simulating a multi-shard environment by
running more than 200 Solr Cores on a single machine (machine does not seem to
be stressed) and I'm running a distributed facet.
The Solr server is running trunk 1404975 with
Thanks, Otis.
I went through every piece of info that I could lay my hands on.
Most of them are about incompatible SolrJ versions (that's not my case), and
there was one message from Mark Miller saying that Solr may respond with XML
instead of javabin in case there was some kind of HTTP error being
The problem is not necessarily XML - it seems to be anything that is not
valid javabin - I've just most often seen it with 404s that return an HTML
error.
I'm not sure if there is a JIRA issue or not, but this type of thing should
be failing in a more user-friendly way.
As to why your response
Hi,
Have a look at http://search-lucene.com/?q=invalid+version+javabin
Otis
--
Solr Monitoring - http://sematext.com/spm/index.html
Search Analytics - http://sematext.com/search-analytics/index.html
On Wed, Dec 19, 2012 at 11:23 AM, Shahar Davidson shah...@checkpoint.com wrote:
Hi,
I'm encountering this error randomly when running a distributed facet (i.e.
I'm sending the exact same request, yet it does not reproduce consistently).
I have about 180 shards that are being queried.
It seems that when Solr distributes the request to the shards, one, or
perhaps more,
You probably have a non-char codepoint hanging around somewhere. You
can strip such code points away:
http://unicode.org/cldr/utility/list-unicodeset.jsp?a=[:Noncharacter_Code_Point=True:]
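If scrubbing on the client side is easier than fixing the source data, stripping could look like this (my own sketch, using the noncharacter definition from that Unicode page):

public class NoncharStripper {
    // Removes Unicode noncharacter code points: U+FDD0..U+FDEF and any
    // code point whose low 16 bits are FFFE or FFFF.
    static String stripNoncharacters(String in) {
        StringBuilder out = new StringBuilder(in.length());
        int i = 0;
        while (i < in.length()) {
            int cp = in.codePointAt(i);
            boolean nonchar = (cp >= 0xFDD0 && cp <= 0xFDEF) || (cp & 0xFFFE) == 0xFFFE;
            if (!nonchar) {
                out.appendCodePoint(cp);
            }
            i += Character.charCount(cp);
        }
        return out.toString();
    }
}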
This looks like you are using a SolrJ version different from the Solr server
version. Make sure that server and client are using the same Solr version.
Hi, all.
I want to update the file's index. The following is my code:

ContentStreamUpdateRequest up = new ContentStreamUpdateRequest("/update/extract");
up.addFile(file);
up.setParam("uprefix", "attr_");
up.setParam("fmap.content", "attr_content");
up.setParam("literal.id",
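For reference, a hedged completion of that snippet (the server URL, file, and id value are placeholders; CommonsHttpSolrServer is my assumption for the SolrJ 3.x era):

import java.io.File;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;

public class ExtractUpload {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        File file = new File("document.pdf"); // placeholder file

        ContentStreamUpdateRequest up =
            new ContentStreamUpdateRequest("/update/extract");
        up.addFile(file);
        up.setParam("uprefix", "attr_");             // prefix for unknown fields
        up.setParam("fmap.content", "attr_content"); // map the extracted body text
        up.setParam("literal.id", "doc1");           // placeholder id
        up.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
        server.request(up);
    }
}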