Hi all,
I noticed that in Solr 4.2, when an internal call is made between two nodes,
Solr uses the list of matching document ids to fetch the document details. When
it does, it prints all of the matching document ids as part of the query. Is
there a way to suppress these log statements from bein
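I don't know of a query-side switch for this, but one blunt workaround is to raise the level of the logger that emits those lines. A hedged sketch for log4j.properties, assuming the id lists come out of SolrCore's INFO-level request logging (check the logger name shown in your own log output first):

```
# log4j.properties fragment -- silence INFO-level request logging,
# which includes the internal shard requests carrying the id lists
log4j.logger.org.apache.solr.core.SolrCore=WARN
```

Note this also suppresses normal request logging from that core, so it trades away useful information.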
>seems counterintuitive from the user's perspective,
>but I don't think Solr Cloud currently has any logic to favour a local
>instance over a remote one, I guess that would be a change to CloudSolrServer?
>Alternatively, you can do it in your client, send a non-distributed query, s
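To expand on the non-distributed option: the `distrib=false` request parameter tells the receiving core to answer from its own local index only, instead of fanning the request out to the other shards. A minimal sketch (host, port, and core name are placeholders):

```python
from urllib.parse import urlencode

def build_local_query(core_url, q, **extra):
    """Build a /select URL that skips the distributed search path.

    distrib=false makes the receiving core answer from its local
    index only, rather than forwarding to other shards/replicas.
    """
    params = {"q": q, "distrib": "false"}
    params.update(extra)
    return core_url + "/select?" + urlencode(params)

url = build_local_query("http://localhost:8983/solr/collection1", "*:*")
print(url)
```

The same parameter can of course be set on a SolrJ query object; the URL form just makes the mechanism explicit.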
Hi all,
I'm trying to make sure that I understand under what circumstances a distributed
search is performed against Solr, and whether my general understanding of what
constitutes a distributed search is correct.
I have a Solr collection that was created using the Collections API with the
following p
Thanks Shawn and Mark! That was very helpful.
-Niran
>
> From: Shawn Heisey
>To: solr-user@lucene.apache.org
>Sent: Monday, April 22, 2013 5:30 PM
>Subject: Re: Soft Commit and Document Cache
>
>
>On 4/22/2013 4:16 PM, Niran Fajemisi
Hi all,
A quick (and hopefully simple) question: Does the document cache (or any of the
other caches for that matter), get invalidated after a soft commit has been
performed?
Thanks,
Niran
0 AM, Michael Della Bitta <
>michael.della.bi...@appinions.com> wrote:
>
>> My understanding is that logs stick around for a while just in case they
>> can be used to catch up a shard that rejoins the cluster.
>> On Mar 24, 2013 12:03 PM, "Niran Fajemisin"
Hi all,
We import about 1.5 million documents on a nightly basis using DIH. During this
time, we need to ensure that all documents make it into the index, or roll back
on any errors; DIH takes care of this for us. We also disable
autoCommit in DIH but instruct it to commit at the very end of
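The shape of that request can be sketched as a DIH handler URL (the core name and handler path are the stock ones and may differ in your setup; `command`, `clean`, and `commit` are standard DIH parameters, and DIH issues its own rollback if the import fails):

```python
from urllib.parse import urlencode

# Core name and handler path are placeholders for this sketch.
params = {
    "command": "full-import",
    "clean": "true",   # clear old data first; undone if the import fails
    "commit": "true",  # a single commit at the very end of the run
}
url = "http://localhost:8983/solr/collection1/dataimport?" + urlencode(params)
print(url)
```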
Hi all,
I have noticed the following occur with some consistency: When I execute a long
running query (that spans 15 or more seconds), the Solr node that is servicing
the request starts to perform a full copy from the shard leader. My current
configuration has only one shard with 3 replicas. No
determine the disk
IO utilization between the 3.6 and 4.0 environments.
Hopefully that all makes sense.
Any immediate thoughts on any of this?
Thanks as usual.
-Niran
>
> From: Otis Gospodnetic
>To: solr-user@lucene.apache.org; Niran Fajemisin
>Sent: Thu
Hi all,
I'm currently in the process of doing some performance testing in preparation
for upgrading from Solr 3.6.1 to Solr 4.0. (We're badly in need of NRT
functionality)
Our existing deployment is not a typical deployment for Solr, as we use it to
search and facet on financial data such as
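Since NRT is the driver here, the 4.0 piece that enables it is soft commits. A minimal solrconfig.xml sketch (the intervals are illustrative values, not recommendations):

```xml
<!-- solrconfig.xml: open a new searcher cheaply every second -->
<autoSoftCommit>
  <maxTime>1000</maxTime>
</autoSoftCommit>
<!-- still hard-commit periodically to flush and truncate the tlog -->
<autoCommit>
  <maxTime>60000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
```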
Hi all,
We're thinking of moving forward with Solr 4.0 and we plan to have a master
index server and at least two slave servers. The master server will be used
primarily for indexing and the queries will be load balanced across to the
replicated slave servers. I would like to know if, with the
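For reference, that classic master/slave arrangement is configured through the replication handler in solrconfig.xml; a minimal sketch (the master URL and poll interval are placeholders):

```xml
<!-- master solrconfig.xml -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
  </lst>
</requestHandler>

<!-- slave solrconfig.xml: each slave polls the master for new index versions -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master-host:8983/solr/collection1/replication</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```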
>> Apologies for the terseness of this reply, as I'm on my mobile.
>>
>> To treat the result of a function call as a table in Oracle SQL, use the
>> table() function, like this:
>>
>> select * from table(my_stored_func())
>>
>> HTH,
>>
any alternatives I would greatly appreciate hearing them.
Thanks for the responses as usual.
Cheers.
>
> From: Lance Norskog
>To: solr-user@lucene.apache.org; Niran Fajemisin
>Sent: Thursday, May 31, 2012 3:09 PM
>Subject: Re: Using Data Import H
a table. So possibly you
>could wrap your procedure in a function that returns the cursor, or
>convert the procedure to a function.
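A sketch of that conversion, with entirely hypothetical type, table, and column names: a pipelined function makes the rows selectable through table(), which a plain ref-cursor-returning procedure is not on its own.

```sql
-- All names here are hypothetical; the pipelined-function pattern
-- itself is the point.
CREATE TYPE my_row AS OBJECT (id NUMBER, name VARCHAR2(100));
/
CREATE TYPE my_row_tab AS TABLE OF my_row;
/
CREATE OR REPLACE FUNCTION my_stored_func RETURN my_row_tab PIPELINED AS
BEGIN
  FOR r IN (SELECT id, name FROM some_source_table) LOOP
    PIPE ROW (my_row(r.id, r.name));
  END LOOP;
  RETURN;
END;
/
-- then, as above: SELECT * FROM table(my_stored_func())
```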
>
>Michael Della Bitta
>
>
>Appinions, Inc. -- Where Influence Isn’t a Game.
>http://www.appinion
Hi all,
I've seen a few questions asked around invoking stored procedures from within
Data Import Handler but none of them seem to indicate what type of output
parameters were being used.
I have a stored procedure created in Oracle database that takes a couple input
parameters and has an outpu