On 4/11/2016 7:40 AM, Charles Sanders wrote:
> Multivalued fields are controlled by the schema. You need to define your
> field in the schema file as 'not' a multivalue field. Here are a couple of
> examples of field definitions, one multivalued, the other not.
>
> <field name="fieldA" type="string" indexed="true" stored="true" multiValued="true"/>
> <field name="fieldB" type="string" indexed="true" stored="true"/>
Otherwise Solr will use default
definitions, which are probably storing the field as multivalued.
Charles
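As an aside, the field definition can also be added through the Schema API rather than by editing the schema file by hand. A minimal sketch of building the "add-field" command body that a client could POST to the /schema endpoint; the field name "title" and type "string" are illustrative assumptions, not from this thread:

```java
// Sketch: build the Schema API "add-field" JSON command for a
// single-valued field. Field name/type here are placeholders.
public class AddFieldCommand {
    public static String addFieldJson(String name, String type, boolean multiValued) {
        // The Schema API accepts: {"add-field":{"name":...,"type":...,"multiValued":...}}
        return String.format(
            "{\"add-field\":{\"name\":\"%s\",\"type\":\"%s\",\"stored\":true,\"multiValued\":%b}}",
            name, type, multiValued);
    }

    public static void main(String[] args) {
        // POST this body to http://localhost:8983/solr/<core>/schema
        System.out.println(addFieldJson("title", "string", false));
    }
}
```

Defining the field this way before indexing avoids the schemaless defaults that guess multiValued=true.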
- Original Message -
From: "巩学超"
To: solr-user@lucene.apache.org
Sent: Monday, April 11, 2016 7:58:35 AM
Subject: How to set multivalued false, using SolrJ
Hello,
Can you do me a favour? I use SolrJ to index, but all my fields end up
multivalued. How can I set a field to not be multivalued? Can you tell me how
to set this using SolrJ?
Addendum: Apparently the code works fine with HttpSolrClient, but not with
EmbeddedSolrServer (used in our tests). The most recent version I tested
this was 5.5.0
Georg Sorst wrote on Sun, Apr 10, 2016 at
01:49:
> Hi,
>
> how can you set Config API values from SolrJ? Does anyon
Hi,
how can you set Config API values from SolrJ? Does anyone have an example
for this?
Here's what I'm currently trying:
/* Build the structure for the request */
Map<String, Object> parameters = new HashMap<String, Object>() {{
put("key", "value");
}};
final NamedList req
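The snippet is cut off here; for reference, a sketch of assembling the JSON command body the Config API expects. The "set-user-property" command name comes from the Config API documentation, while the key/value pair is illustrative:

```java
import java.util.Map;

// Sketch: build a Config API "set-user-property" command body.
// The property key/value are placeholders for illustration.
public class ConfigApiBody {
    public static String setUserProperty(Map<String, String> props) {
        StringBuilder sb = new StringBuilder("{\"set-user-property\":{");
        boolean first = true;
        for (Map.Entry<String, String> e : props.entrySet()) {
            if (!first) sb.append(',');
            sb.append('"').append(e.getKey()).append("\":\"").append(e.getValue()).append('"');
            first = false;
        }
        return sb.append("}}").toString();
    }

    public static void main(String[] args) {
        // POST this body to http://localhost:8983/solr/<core>/config
        System.out.println(setUserProperty(Map.of("key", "value")));
    }
}
```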
View this message in context:
http://lucene.472066.n3.nabble.com/SolrJ-Indexing-tp4265506p4266436.html
Sent from the Solr - User mailing list archive at Nabble.com.
On 3/25/2016 2:04 AM, fabigol wrote:
> What I want to do is to create the different links between the entities
> which I'm going to index. Therefore, I have a root entity and child entities
> as shown in the XML file.
>
> But, my main problem is the number of documents. In fact, when I want to
> ind
lem (5 million), if I want to index 6
months of data (10 million) the indexing is not finished after 18 hours.
--
View this message in context:
http://lucene.472066.n3.nabble.com/SolrJ-Indexing-tp4265506p4265998.html
Sent from the Solr - User mailing list archive at Nabble.com.
On 3/24/2016 4:06 AM, fabigol wrote:
> I know how to do that with DIH but with SolrJ I don't know. Must I use
> annotations such as @Field...?
>
> Moreover, I created a new Solr project with the same XML files - copied the conf
> directory - and oddly the indexing is much faster and n
Hi Shawn
thank for your response.
As you can see in my XML file, I have many entities which are linked to each
other.
I know how to do that with DIH but with SolrJ I don't know. Must I use
annotations such as @Field...?
Moreover, I created a new Solr project with the same XML files - copied the conf
directory
On 3/23/2016 2:36 AM, fabigol wrote:
> I want to do indexing with the SolrJ API. So, I believe the indexing will be
> multithreaded.
> But I have 5 root entities.
The config you included is from the dataimport handler. This is *NOT*
indexing with SolrJ. You can use SolrJ to *start* the indexing,
Hi,
I want to do indexing with the SolrJ API. So, I believe the indexing will be
multithreaded.
But I have 5 root entities.
i find this link:
https://lucidworks.com/blog/2012/02/14/indexing-with-solrj/
<https://lucidworks.com/blog/2012/02/14/indexing-with-solrj/>
but i don't talk li
arse the response.
Hope this helps.
On Sat, Mar 19, 2016 at 4:44 PM, Iana Bondarska wrote:
> Hi,
> Could you please tell me, is it possible to create a new collection on a Solr
> server using only SolrJ, without manual creation of the core folder on the server?
> I'm using SolrJ v5.5.0, standalone client.
>
> Thanks,
> Iana
>
--
Anshum Gupta
On 3/19/2016 5:44 PM, Iana Bondarska wrote:
> Could you please tell me, is it possible to create a new collection on a Solr
> server using only SolrJ, without manual creation of the core folder on the server?
> I'm using SolrJ v5.5.0, standalone client.
If the server is running in cloud mode (
Hi,
Could you please tell me, is it possible to create a new collection on a Solr
server using only SolrJ, without manual creation of the core folder on the server?
I'm using SolrJ v5.5.0, standalone client.
Thanks,
Iana
Post the stack trace for the exception.
On Sun, 13 Mar 2016, 15:43 Adel Mohamed Khalifa,
wrote:
> Hello,
>
>
>
> I am facing a problem when I try to connect to the Solr Server
> (HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");)
>
> Note that my platform is Ubuntu and
Hello,
I am facing a problem when I try to connect to the Solr Server
(HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");)
Note that my platform is Ubuntu; I installed the Solr server on it and it works
correctly when I try to log on through the browser, but when I try in netbea
ote:
>
> Hi,
> I am running the following query on an index that has around 123 million
> records, using SolrJ..
> Each record has only 5 fields.
>
> String *qry*="( fieldA:(value1 OR value2 OR value24) AND
> fieldB:(value1 OR value2 OR value3 OR value4 OR value5) )
Hi,
I am running the following query on an index that has around 123 million
records, using SolrJ..
Each record has only 5 fields.
String qry = "( fieldA:(value1 OR value2 OR value24) AND
fieldB:(value1 OR value2 OR value3 OR value4 OR value5) )
(...basically a simple AND of 2 ORs)
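Such a query string can also be assembled programmatically instead of hand-concatenated; a small sketch using the placeholder field and value names from the post:

```java
import java.util.List;

// Sketch: build fieldX:(v1 OR v2 OR ...) groups joined by AND,
// mirroring the "simple AND of 2 ORs" query above.
public class QueryBuilderSketch {
    static String orGroup(String field, List<String> values) {
        return field + ":(" + String.join(" OR ", values) + ")";
    }

    public static void main(String[] args) {
        String qry = "( " + orGroup("fieldA", List.of("value1", "value2", "value24"))
                   + " AND " + orGroup("fieldB", List.of("value1", "value2", "value3", "value4", "value5"))
                   + " )";
        System.out.println(qry);
    }
}
```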
W
That fixed the
problem when using SolrJ 5.5. This is also a good idea for my program
in general, so I'm not really unhappy about needing it.
So, I believe what happens with the binary writer is that when the field
contains a null object, the writer is adding the literal string "NULL"
(in
e hit NPEs
> upgrading
> > to 5.5 too. In my case though, SolrJ talks to a proxy servlets before the
> > request gets routed to Solr, and that servlet didn't handle binary
> content
> > stream well.
> >
> > I had to add another resource method to the servle
On 2/29/2016 9:14 PM, Shai Erera wrote:
> Shawn, not sure if it's the same case as yours, but I've hit NPEs upgrading
> to 5.5 too. In my case though, SolrJ talks to a proxy servlets before the
> request gets routed to Solr, and that servlet didn't handle binary content
>
Shawn, not sure if it's the same case as yours, but I've hit NPEs upgrading
to 5.5 too. In my case though, SolrJ talks to a proxy servlets before the
request gets routed to Solr, and that servlet didn't handle binary content
stream well.
I had to add another resource method to th
On 2/29/2016 7:01 PM, Shawn Heisey wrote:
> On 2/29/2016 6:42 PM, Shawn Heisey wrote:
>> I'm getting this stacktrace after upgrading SolrJ to 5.5.0 in my build
>> client:
>>
>> http://apaste.info/Vpg
>>
>> This is happening with 4.x, 5.3.2-SNAPSHOT, and
On 2/29/2016 6:42 PM, Shawn Heisey wrote:
> I'm getting this stacktrace after upgrading SolrJ to 5.5.0 in my build
> client:
>
> http://apaste.info/Vpg
>
> This is happening with 4.x, 5.3.2-SNAPSHOT, and 5.5.0-SNAPSHOT servers.
I got a little bit closer look at the se
I'm getting this stacktrace after upgrading SolrJ to 5.5.0 in my build
client:
http://apaste.info/Vpg
This is happening with 4.x, 5.3.2-SNAPSHOT, and 5.5.0-SNAPSHOT servers.
If I configure ivy to pull solrj 5.4.1 instead of the latest version,
everything works without changing the code.
Hi,
Any help or pointers in this issue?
Thanks,
On Wed, Feb 24, 2016 at 12:44 PM, Debraj Manna
wrote:
> Hi,
>
> I am using Solrj 5.1 to add & delete docs from Solr. Whenever there
> is some exception while doing addition or deletion, Solr throws a
> SolrServerExcep
Hi list!
Does SolrJ already wrap the new JSON Facet API? I couldn't find any info
about this.
If not, what's the best way for a Java client to build and send requests
when you want to use the JSON Facets?
On a side note, since the JSON Facet API uses POST I will not be able to
see the
Hi,
I am using Solrj 5.1 to add & delete docs from Solr. Whenever there is
some exception while doing addition or deletion, Solr throws a
SolrServerException with the error message in the exception.
I am trying to map each error to an error code. For example if I am getting
an excep
" instead of creating a
> new ArrayList
Will do that, although I am not hunting for nanoseconds, at least not at the moment
;)
-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: Monday, February 22, 2016 15:57
To: solr-user@lucene.apache.org
Subject
On 2/22/2016 1:55 AM, Clemens Wyss DEV wrote:
> SolrClient solrClient = getSolrClient( coreName, true );
> Collection<SolrInputDocument> batch = new ArrayList<SolrInputDocument>();
> while ( elements.hasNext() )
> {
> IIndexableElement elem = elements.next();
> SolrInputDocument doc = createSolrDocForElement( elem, provider, locale
rom SolrJ
Find attached the relevant part of the batch-update:
...
SolrClient solrClient = getSolrClient( coreName, true );
Collection<SolrInputDocument> batch = new ArrayList<SolrInputDocument>(); while
( elements.hasNext() ) {
IIndexableElement elem = elements.next();
SolrInputDocument doc = createSolrDocForElement( elem, provi
;
}
...
IIndexableElement is part of our index/search framework.
[1] creating a single SolrInputDocument
[2] handing the SIDs to SolrJ/SolrClient
[3] creating a new batch, i.e. releasing the SolrInputDocuments
The above code is being executed in an ExecutorService handed in as a lambda.
I.e
On 2/19/2016 3:08 AM, Clemens Wyss DEV wrote:
> The logic is somewhat this:
>
> SolrClient solrClient = new HttpSolrClient( coreUrl );
> while ( got more elements to index )
> {
> batch = create 100 SolrInputDocuments
> solrClient.add( batch )
> }
How much data is going into each of those Sol
Clemens,
What I understand from your emails above is that you are creating
SolrInputDocuments in batches inside a loop, and these get created on the heap.
SolrJ/SolrClient doesn't have any control over removing those objects from the
heap; that is controlled by garbage collection. So your program may end
Thanks Susheel,
but I am having problems in, and am talking about, SolrJ, i.e. the "client-side
of Solr" ...
-Original Message-
From: Susheel Kumar [mailto:susheel2...@gmail.com]
Sent: Friday, February 19, 2016 17:23
To: solr-user@lucene.apache.org
Subject: Re: OutOfM
[mailto:susheel2...@gmail.com]
> Sent: Friday, February 19, 2016 14:42
> To: solr-user@lucene.apache.org
> Subject: Re: OutOfMemory when batchupdating from SolrJ
>
> When you run your SolrJ Client Indexing program, can you increase heap
> size similar below. I guess it may b
4:42
To: solr-user@lucene.apache.org
Subject: Re: OutOfMemory when batchupdating from SolrJ
When you run your SolrJ client indexing program, can you increase the heap size
similar to below? I guess it may be on your client side that you are running into
OOM... or please share the exact error if below doesn't work/is the issue.
And if it is on Solr side, please increase the heap size on Solr side
https://cwiki.apache.org/confluence/display/solr/JVM+Settings
On Fri, Feb 19, 2016 at 8:42 AM, Susheel Kumar
wrote:
> When you run your SolrJ Client Indexing program, can you increase heap
> size similar below. I gu
When you run your SolrJ client indexing program, can you increase the heap size
similar to below? I guess it may be on your client side that you are running into
OOM... or please share the exact error if below doesn't work/is the issue.
java -Xmx4096m
Thanks,
Susheel
On Fri, Feb 19, 2016 at 6:
Guessing on ;) :
must I commit after every "batch", in order to force a flushing of
org.apache.solr.client.solrj.request.RequestWriter$LazyContentStream et al?
OTOH it is advised NOT to "commit" from a (SolrJ) client:
https://lucidworks.com/blog/2013/08/23/understand
-Original Message-
From: Clemens Wyss DEV [mailto:clemens...@mysign.ch]
Sent: Friday, February 19, 2016 09:07
To: solr-user@lucene.apache.org
Subject: OutOfMemory when batchupdating from SolrJ
Environment: Solr 5.4.1
I am facing OOMs when batch updating from SolrJ. I am seeing approx 30'000(!)
Environment: Solr 5.4.1
I am facing OOMs when batch updating from SolrJ. I am seeing approx 30'000(!)
SolrInputDocument instances, although my batch size is 100. I.e. I call
solrClient.add( documents ) for every 100 documents only. So I'd expect to see
at most 100 SolrInputDocuments i
> Sent: Tuesday 16th February 2016 17:08
> To: solr-user@lucene.apache.org
> Subject: RE: Which open-source crawler to use with SolrJ and Postgresql ?
>
> I'm far, far from an expert on this sort of thing, but my personal experience
> 1-year ago was that Nutch-1 was eas
of Computer and Communications Systems,
National Library of Medicine, NIH
-Original Message-
From: Emir Arnautovic [mailto:emir.arnauto...@sematext.com]
Sent: Tuesday, February 16, 2016 10:58 AM
To: solr-user@lucene.apache.org
Subject: Re: Which open-source crawler to use with SolrJ
e.org/jira/browse/NUTCH-2197
Markus
-Original message-
From:Emir Arnautovic
Sent: Tuesday 16th February 2016 16:26
To: solr-user@lucene.apache.org
Subject: Re: Which open-source crawler to use with SolrJ and Postgresql ?
Hi,
It is most common to use Nutch as crawler, but it seems th
it earlier this month.
https://issues.apache.org/jira/browse/NUTCH-2197
Markus
-Original message-
From:Emir Arnautovic
Sent: Tuesday 16th February 2016 16:26
To: solr-user@lucene.apache.org
Subject: Re: Which open-source crawler to use with SolrJ and Postgresql ?
Hi,
It is most common
open-source crawler to use with SolrJ and Postgresql ?
>
> Hi,
> It is most common to use Nutch as crawler, but it seems that it still
> does not have support for SolrCloud (if I am reading this ticket
> correctly https://issues.apache.org/jira/browse/NUTCH-1662). Anyway, I
&
:02, Victor D'agostino wrote:
Hi
I am building a Solr 5 architecture with 3 Solr nodes and 1 zookeeper.
The database backend is postgresql 9 on RHEL 6.
I am looking for a free open-source crawler which uses SolrJ.
What do you guys recommend ?
Best regards
Victor d'Agostino
_
Hi
I am building a Solr 5 architecture with 3 Solr nodes and 1 zookeeper.
The database backend is postgresql 9 on RHEL 6.
I am looking for a free open-source crawler which uses SolrJ.
What do you guys recommend ?
Best regards
Victor d'Agostino
);//
TODO: edit
response.setResponse(client.request(request));
-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: Wednesday, February 10, 2016 11:24 AM
To: solr-user@lucene.apache.org
Subject: Re: Solrj-collection creation
On 2/10/2016 6:55 AM
Yes. Use the CloudSolrClient object in the SolrJ API. It accepts a
zkHost string.
https://cwiki.apache.org/confluence/display/solr/Using+SolrJ
I'm not sure exactly how to do the collection create, but it will
probably involve this object:
https://lucene.apache.org/solr/5_4_1/solr-solrj/org/a
Since you're using SolrJ anyway just use the
CollectionsAdminRequest. You can see
examples of its use in the test cases; take a look
at CollectionsApiSolrJTests.
Best,
Erick
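Under the hood, a collection create boils down to a Collections API CREATE call; a sketch of building the equivalent HTTP URL (the host, collection name, config name, and counts are illustrative assumptions):

```java
// Sketch: the Collections API call that a SolrJ CollectionAdminRequest
// ultimately issues. Parameter names come from the Collections API docs;
// the concrete values are placeholders.
public class CreateCollectionUrl {
    static String createUrl(String base, String name, String configName,
                            int numShards, int replicationFactor) {
        return base + "/admin/collections?action=CREATE"
             + "&name=" + name
             + "&collection.configName=" + configName
             + "&numShards=" + numShards
             + "&replicationFactor=" + replicationFactor;
    }

    public static void main(String[] args) {
        System.out.println(createUrl("http://localhost:8983/solr",
                "mycollection", "myconfig", 2, 2));
    }
}
```

Note this only works against a server running in cloud mode; in standalone mode you are limited to the CoreAdmin API.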
On Wed, Feb 10, 2016 at 5:55 AM, vidya wrote:
> Hi
>
> I want to connect to solrCloud server from
I'm getting an error like
"defected tokens detected". So, I wanted to connect to Solr using the SolrJ API.
For that I need to create a collection initially. Got stuck here.
Can anyone help me?
Thanks in advance
--
View this message in context:
http://lucene.472066.n3.nabble.com/S
at 2:50 AM, deniz wrote:
> I have been trying to export the whole resultset via SolrJ but till now
> everything (Including the tricks here:
>
> http://stackoverflow.com/questions/33540577/how-can-use-the-export-request-handler-via-solrj
> )
> has failed... On curl, it is worki
I have been trying to export the whole resultset via SolrJ but till now
everything (Including the tricks here:
http://stackoverflow.com/questions/33540577/how-can-use-the-export-request-handler-via-solrj)
has failed... On curl, it is working totally fine to query with
server:port/solr/core/export
What versions of the dependent jars do you have in your project? There
might be something leaking in a dependency rather than within SolrJ.
I also set up a test program using SolrJ 5.2.1, with updated
dependencies beyond the versions included with SolrJ, and could not get
that to show a leak either.
Thanks,
Shawn
The JavaDoc needs a lot more information. As I remember it, SolrJ started as a
thin layer over Apache HttpClient, so the authors may have assumed that
programmers were familiar with that library. HttpClient makes a shared object
that manages a pool of connections to the target server
Thanks Walter. Yes, I saw your answer and fixed the issue per your
suggestion.
The JavaDoc needs to make this clear. The fact that there is a close() on this
class and the JavaDoc does not say "your program should have exactly as
many HttpSolrClient objects as there are servers it talks to" is a prime
I already answered this.
Move the creation of the HttpSolrClient outside the loop. Your code will run
much faster, because it will be able to reuse the connections.
Put another way, your program should have exactly as many HttpSolrClient
objects as there are servers it talks to. If there is one S
Thank you all for your feedback.
This is code that I inherited and the example i gave is intended to
demonstrate the memory leak which based on YourKit is
on java/util/LinkedHashMap$Entry. In short, I'm getting core dumps with
"Detail "java/lang/OutOfMemoryError" "Java heap space" received "
Her
Assuming you're not really using code like above and it's a test case
What's your evidence that memory consumption goes up? Are you sure
you're not just seeing uncollected garbage?
When I attached Java Mission Control to this program it looked pretty
scary at first, but the heap allocated aft
Create one HttpSolrClient object for each Solr server you are talking to. Reuse
it for all requests to that Solr server.
It will manage a pool of connections and keep them alive for faster
communication.
I took a look at the JavaDoc and the wiki doc, neither one explains this well.
I don’t thi
Hi Steve,
Can you please elaborate on what error you are getting? I didn't understand
your code above, i.e. why instantiating the Solr client object is in a loop. In
general, creating the client instance should be outside the loop, a one-time
activity during the complete execution of the program.
Thanks,
Sus
Hi folks,
I'm getting memory leak in my code. I narrowed the code to the following
minimal to cause the leak.
while (true) {
HttpSolrClient client = new HttpSolrClient(
"http://192.168.202.129:8983/solr/core1");
client.close();
}
Is this a defect or an issue in the way
Neither, that I'm aware of. Why don't you traverse the NamedList from
QueryResponse.getResponse() and pick up what's necessary?
On Mon, Jan 25, 2016 at 12:32 PM, Sumeet Sharma wrote:
> Hi
>
> Is there any work going on for parsing json facet response in solrj? If yes
> can someon
Hi
Is there any work going on for parsing json facet response in solrj? If yes
can someone link me to the project or documentation?
--
Sumeet R Sharma
The managed schema API is your friend here. There are
> several commercial front-ends that already do this.
>
> The managed schema API is all just HTTP, so there's nothing
> precluding a Java program from interpreting a form and sending
> off the proper HTTP requests to modify the sche
There's nothing
precluding a Java program from interpreting a form and sending
off the proper HTTP requests to modify the schema.
The SolrJ client library has some sugar around this, there's no
reason you can't use that as it's just a jar (and a dependency on
a logging jar).
For SolrClou
e you are
looking at a lot of work for little gain.
Best,
GW
On 7 January 2016 at 21:36, Bob Lawson wrote:
> I want to programmatically make changes to schema.xml using java to do
> it. Should I use Solrj to do this or is there a better way? Can I use
> Solrj to make the rest calls th
LOL. "Schemafull" ought to be a marketing term as well.
-Original Message-
From: Bob Lawson [mailto:bwlawson...@gmail.com]
Sent: Friday, January 08, 2016 8:30 AM
To: solr-user@lucene.apache.org
Subject: Re: Manage schema.xml via Solrj?
Thanks for the replies. The problem I'm trying to s
On 1/8/2016 6:30 AM, Bob Lawson wrote:
> Thanks for the replies. The problem I'm trying to solve is to automate
> whatever steps I can in configuring Solr for our customer. Rather than an
> admin have to edit schema.xml, I thought it would be easier and less
> error-prone to do it programmaticall
That said, there's the Schema API you can use, see:
> https://cwiki.apache.org/confluence/display/solr/Schema+API
>
> You can access it from the SolrJ library, see
> SchemaRequest.java. For examples of using this, see:
> SchemaTest.java
>
> to _get_ the Solr source code to see
Is JSON Facet supported in SolrJ? If not is there a work around?
Thanks
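At the time of this thread, SolrJ had no typed wrapper for JSON Facets; the common workaround is to pass the facet definition as the plain json.facet request parameter. A sketch of building such a parameter value (the facet name "categories" and field "cat" are illustrative assumptions):

```java
// Sketch: build a json.facet value for a simple terms facet.
// Facet name, field, and limit are placeholders.
public class JsonFacetParam {
    static String termsFacet(String facetName, String field, int limit) {
        return String.format("{\"%s\":{\"type\":\"terms\",\"field\":\"%s\",\"limit\":%d}}",
                facetName, field, limit);
    }

    public static void main(String[] args) {
        String jsonFacet = termsFacet("categories", "cat", 10);
        // With SolrJ this would be attached via:  query.add("json.facet", jsonFacet);
        // and the result read back from QueryResponse.getResponse().get("facets").
        System.out.println(jsonFacet);
    }
}
```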
I'd ask first what the high-level problem you're trying to solve is, this
could be an XY problem.
That said, there's the Schema API you can use, see:
https://cwiki.apache.org/confluence/display/solr/Schema+API
You can access it from the SolrJ library, see
SchemaRequest.java.
I am not sure about solrj but you can use any XML parsing library to
achieve this.
Take a look here:
http://www.tutorialspoint.com/java_xml/java_xml_parsers.htm
On Fri, 8 Jan 2016, 08:06 Bob Lawson wrote:
> I want to programmatically make changes to schema.xml using java to do
> it. Sh
I want to programmatically make changes to schema.xml using java to do it.
Should I use Solrj to do this or is there a better way? Can I use Solrj to
make the rest calls that make up the schema API? Whatever the answer, can
anyone point me to an example showing how to do it? Thanks!
hello everyone
Caused by: org.apache.solr.client.solrj.SolrServerException:
java.lang.IllegalStateException: Connection pool shut down
how to solve this problem
Thanks
Regards
soledede_w...@ehsy.com
Caused by: org.apache.solr.client.solrj.SolrServerException:
java.lang.IllegalStateException: Connection pool shut down
org.apache.solr.client.solrj.SolrServerException:
java.util.concurrent.RejectedExecutionException: Task
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$
And the other large benefit of CloudSolrClient is that it
routes documents directly to the correct leader, i.e. does
the routing on the client rather than have the Solr
instances forward docs to the routing. Using CloudSolrClient
should scale more nearly linearly with increasing
shards.
Best,
Eric
On 11/6/2015 7:15 AM, Vincenzo D'Amore wrote:
> I have followed your same path, having a look at java source. I inherited
> an installation with CloudSolrServer (I still had solrcloud 4.8) but I was
> not sure it was the right choice instead of the (apparently) more appealing
> ConcurrentUpdateSolr
2015 at 12:30 PM, Alessandro Benedetti <
> abenede...@apache.org
> > wrote:
>
> > Hi guys,
> > I was taking a look to the implementation details to understand how Solr
> > requests are written by SolrJ APIs.
> > The interesting classes are :
> >
> > *org.a
does a job
better than the standard CloudSolrServer.
Best,
Vincenzo
On Thu, Nov 5, 2015 at 12:30 PM, Alessandro Benedetti wrote:
> Hi guys,
> I was taking a look to the implementation details to understand how Solr
> requests are written by SolrJ APIs.
> The i
Hi guys,
I was taking a look to the implementation details to understand how Solr
requests are written by SolrJ APIs.
The interesting classes are :
*org.apache.solr.client.solrj.request.RequestWriter*
*org.apache.solr.client.solrj.impl.BinaryRequestWriter* ( wrong package ? )
I discovered that
features in SolrJ, particularly regarding the suggestions. Thank
you for confirming this though.
O. O.
Shawn Heisey-2 wrote
> Erick is right. It won't even compile. When the jump to Java 7 was
> made between the 4.7 and 4.8 releases, most of the source code was
> reviewed and certa
On 11/3/2015 3:33 PM, Erick Erickson wrote:
> You're on your own if you try to do this. Solr 4.10 requires Java7. I
> don't believe Solr will even compile under 1.6.
>
> You may get lucky and get SolrJ to compile, but whether it works or
> not is chancy at best.
>
>
Thank you Erick. I'm sorry I did not clarify this in my original message.
I'm compiling Solr (or SolrJ) under Java 7. I'm aware that it requires Java
7 to compile, and that's why I have not changed the "java.source" value in
the common-build.xml file. SolrJ compi
wrote:
> You're on your own if you try to do this. Solr 4.10 requires Java7. I
> don't believe Solr will even compile under 1.6.
>
> You may get lucky and get SolrJ to compile, but whether it works or
> not is chancy at best.
>
> Best,
> Erick
>
> On Tue, No
You're on your own if you try to do this. Solr 4.10 requires Java7. I
don't believe Solr will even compile under 1.6.
You may get lucky and get SolrJ to compile, but whether it works or
not is chancy at best.
Best,
Erick
On Tue, Nov 3, 2015 at 2:13 PM, O. Olson wrote:
> Hi,
>
Hi,
I'm looking to compile SolrJ for Solr 4.10.3 to run on Java 6.
(Due to choices beyond my control, we are on this older version of SolrJ and
Java 6.) I'm looking for any pointers on how I could do it?
I tried downloading the source from SVN (for Solr 4.10.3, not the late
Glad you could solve it one way or the other. I do wonder, though, what's
really going on; the fact that your original case just hung is kind of
disturbing.
50K is still a lot, and Yonik's comment is well taken. I did some benchmarking
(not ConcurrentUpdateSolrServer, HttpSolrClient as I remember) an
input, which it
> clearly doesn't. I changed the code to partition my own input up to 50k
> documents and everything is running fine.
>
> Markus
>
>
>
> -Original message-
> > From:Erick Erickson
> > Sent: Thursday 29th October 2015 22:28
> >
On Thu, Oct 29, 2015 at 5:28 PM, Erick Erickson wrote:
> Try making batches of 1,000 docs and sending them through instead.
The other thing about ConcurrentUpdateSolrClient is that it will
create batches itself while streaming.
For example, if you call add a number of times very quickly, those
w
I changed the code to partition my own input up to 50k documents and
everything is running fine.
Markus
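The partitioning step described above can be sketched generically; the batch size and element type are illustrative (each batch would then be passed to a single client.add call):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: split an input list into fixed-size batches before sending
// each one to the client, instead of one huge add() call.
public class BatchPartitioner {
    static <T> List<List<T>> partition(List<T> input, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < input.size(); i += batchSize) {
            batches.add(input.subList(i, Math.min(i + batchSize, input.size())));
        }
        return batches;
    }
}
```

With batchSize set to 1000 this matches the earlier suggestion in the thread; 50k per batch also worked for the poster, but smaller batches keep each request bounded.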
-Original message-
> From:Erick Erickson
> Sent: Thursday 29th October 2015 22:28
> To: solr-user
> Subject: Re: SolrJ stalls/hangs on client.add(); and doesn't return
&g
You're sending 100K docs in a single packet? It's vaguely possible that you're
getting a timeout although that doesn't square with no docs being indexed...
Hmmm, to check you could do a manual commit. Or watch the Solr log to
see if update
requests ever go there.
Or you're running out of memory o
Hello - we have some processes periodically sending documents to 5.3.0 in local
mode using ConcurrentUpdateSolrClient 5.3.0, it has queueSize 10 and
threadCount 4, just chosen arbitrarily having no idea what is right.
Usually it's a few thousand up to some tens of thousands of rather small
docum
- Why doesn't a solrj UpdateRequest delete return any shard replication
factor data?
- Is there a way to know if/when a solrj UpdateRequest delete has
achieved replication factor > 1?
When executing a UpdateRequest deleteByQuery with route,
the minAchievedReplicationF
> -Original Message-
> From: Zheng Lin Edwin Yeo [mailto:edwinye...@gmail.com]
> Sent: 17 October 2015 00:55
> To: solr-user@lucene.apache.org
> Subject: Re: Recursively scan documents for indexing in a folder in SolrJ
>
> Thanks for your advice. I also found this method
ngenta.com
-Original Message-
From: Zheng Lin Edwin Yeo [mailto:edwinye...@gmail.com]
Sent: 17 October 2015 00:55
To: solr-user@lucene.apache.org
Subject: Re: Recursively scan documents for indexing in a folder in SolrJ
Thanks for your advice. I also found this method which so far has been
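The message is cut off here; a minimal sketch (not the method from the original mail) of recursively collecting all regular files under a folder with java.nio, e.g. to feed each one to an indexing call afterwards:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Sketch: walk a directory tree and collect every regular file,
// skipping directories themselves.
public class FolderScanner {
    static List<Path> listFilesRecursively(Path root) throws IOException {
        try (Stream<Path> stream = Files.walk(root)) {
            return stream.filter(Files::isRegularFile).collect(Collectors.toList());
        }
    }
}
```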