Yes, you can try the SignatureUpdateProcessorFactory to hash the content
into a signature field, and then group on that signature field during
your search.
You can find more information here:
https://cwiki.apache.org/confluence/display/solr/De-Duplication
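For reference, the dedup chain from that wiki page looks roughly like this in solrconfig.xml (the signatureField and fields values below are placeholders to adapt to your schema):

```xml
<updateRequestProcessorChain name="dedupe">
  <processor class="solr.processor.SignatureUpdateProcessorFactory">
    <bool name="enabled">true</bool>
    <str name="signatureField">signature</str>
    <!-- keep duplicates in the index so they can be grouped at query time -->
    <bool name="overwriteDupes">false</bool>
    <str name="fields">content</str>
    <str name="signatureClass">solr.processor.Lookup3Signature</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory" />
  <processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>
```

With overwriteDupes=false the duplicates stay in the index, and you can collapse them at query time with group=true&group.field=signature.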
I have been using this method to
Is there a way to do something like
q=hello+world&rq={!rerank reRankQuery=$rqq
reRankDocs=100}&sort={!func}myFunc() desc ?
or even as simple as
1.
http://localhost:8983/solr/0/select?q=edgengram:abc&wt=json&indent=true&debugQuery=true&rq={!rerank
reRankQuery=$rqq reRankDocs=20}&sort=some_field desc
Is there a way I can issue a regular query with q and then apply
functionQuery only on the top n documents of the result from q ?
Applying functionQuery on all documents will be very expensive in my case.
I am not able to find a way to "rerank" only top N documents using Function
Query.
--aj
On
The syntax would be something like this:
q=hello+world&rq={!rerank reRankQuery=$rqq
reRankDocs=100}&rqq={!func}myFunc()
I'm not sure if there is a test case demonstrating this but it should work.
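Spelled out as one full parameter set, the request might look like this (myFunc() is a placeholder; rq, reRankQuery, reRankDocs and the $rqq reference are the standard ReRank parameters):

```
q=hello+world&rq={!rerank reRankQuery=$rqq reRankDocs=100}&rqq={!func}myFunc()
```

The main query q selects and orders the full result set as usual; only its top 100 documents are then re-scored by the function query bound to rqq.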
Joel Bernstein
http://joelsolr.blogspot.com/
On Fri, Sep 18, 2015 at 2:42 PM, Ajinkya Kale
Thanks Joel!
This is exactly what I was looking for. I did not realize rerank was
extensible to your own Function Query. This is good.
--aj
On Fri, Sep 18, 2015 at 12:00 PM Joel Bernstein wrote:
> The syntax would be something like this:
>
> q=hello+world&rq={!rerank
The ReRankQuery re-ranks the Top N documents of the main query based on a
query. Rather than the CustomScoreQuery you may want to look at ReRanking
by a Function Query using the FunctionQParserPlugin. This would allow you
to directly control the ReRankScore for the top N documents.
Writing your
Hi Ashish, are we talking about analysis at query time, index time, or both?
As Erick says, I find it really hard to believe this combination is useful
in a classic search.
Are you trying to provide something special?
The ngram token filter will produce a set of ngrams out of your token:
token
to ok ke en
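As a quick illustration of what an n-gram filter emits (a plain-Python sketch, not Lucene's actual implementation):

```python
def char_ngrams(token, n=2):
    """Return the character n-grams of a token, in order."""
    return [token[i:i + n] for i in range(len(token) - n + 1)]

bigrams = char_ngrams("token")  # ['to', 'ok', 'ke', 'en']
```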
I really can't imagine ngrams followed by a stemmer really
being that useful, but I've been wrong once or twice before.
Well, a lot more than once or twice. But this pair isn't
something I've ever really seen before.
I'd make use of the admin/analysis page for your field to see
why it appears to
It sounds like the classic XY problem; can you explain your problem to us a
little better?
Why do you have such strange field content, and how do you produce it?
Can this be solved with ad hoc analysis for your language?
It sounds to me like a tokenization problem, and you are not going to solve
it
Position increments were considered problematic, especially for
highlighting. Did you get this for the stop filter? There was a Jira for
this - check CHANGES.TXT and the Jira for details.
For some discussion, see:
https://issues.apache.org/jira/browse/SOLR-6468
-- Jack Krupansky
On Thu, Apr 2,
That's my understanding - but use the Solr Admin UI analysis page to
confirm exactly what happens, for both index and query analysis.
-- Jack Krupansky
On Thu, Apr 2, 2015 at 10:04 AM, Aman Tandon amantandon...@gmail.com
wrote:
Hi Jack,
I read that JIRA, and I understand the concern.
So does it mean that no hole will be left when we use the stop filter?
With Regards
Aman Tandon
On Thu, Apr 2, 2015 at 6:01 PM, Jack Krupansky jack.krupan...@gmail.com
wrote:
Position increments were considered problematic,
By default the max connections is set to 128 and max connections per host
is 32. You can configure an HttpClient as per your needs and pass it as a
parameter to CloudSolrServer's constructor.
On Mon, Feb 23, 2015 at 3:49 PM, Manohar Sripada manohar...@gmail.com
wrote:
Thanks for the response. How do I control the number of connections pooled
here in the SolrJ client? Also, what are the default values for maximum
connections?
- Thanks
On Thu, Feb 19, 2015 at 6:09 PM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
No, you should reuse the same CloudSolrServer instance for all requests. It
is a thread safe object. You could also create a static/common HttpClient
instance and pass it to the constructor of CloudSolrServer but even if you
don't, it will create one internally and use it for all requests so that
I think you meant to say that the shard has MORE than 1 replica. If a shard
has only 1 replica, then a query to that shard can only go to that one node.
Also, the leader is by definition a replica as well. So, where you say the
leader or replica, that should be a replica which may happen to be
Comments inline:
On Sun, Feb 15, 2015 at 2:09 AM, jaime spicciati jaime.spicci...@gmail.com
wrote:
All,
This is my current understanding of how SolrCloud load balancing works...
Within SolrCloud, for a cluster with more than 1 shard and at least 1
replica, the Zookeeper aware SolrJ client
The problem is that if you want only docs 200-250, how do you know whether
any particular doc will wind up in positions 0-199? You process a doc and
find its score is X. That has no relation to the score of the _next_ doc
you score, or the previous one for that matter.
So to find the doc in
From memory: there are different methods in SolrIndexSearcher for a reason.
It has to do with paging and sorting. Whenever you sort on a simple field,
you can easily start at a specific offset. The problem with sorting on score
is that the score has to be calculated for all documents matching the query.
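A small sketch of why score sorting can't skip ahead: every matching document must be scored before the top N (and hence any page inside it) is known. This is illustrative Python, not Solr's actual code:

```python
import heapq

def top_n_by_score(doc_ids, score, n):
    """Keep a min-heap of the n best (score, doc) pairs seen so far.
    Every matching doc must be scored: until the last one is seen,
    any doc could still displace an entry in the top n."""
    heap = []
    for d in doc_ids:
        s = score(d)
        if len(heap) < n:
            heapq.heappush(heap, (s, d))
        elif s > heap[0][0]:
            heapq.heapreplace(heap, (s, d))
    return [d for s, d in sorted(heap, reverse=True)]  # best first

# To serve "docs 200-250" you still have to build the full top 250.
score = lambda d: (d * 37) % 1000  # arbitrary stand-in scoring function
page = top_n_by_score(range(1000), score, 250)[200:250]
```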
https://issues.apache.org/jira/browse/SOLR-6234
{!scorejoin}, which is a Solr QParser, brings in Lucene's JoinUtil, for sure.
replying into appropriate list.
On Wed, Dec 10, 2014 at 10:14 PM, Parnit Pooni parni...@gmail.com wrote:
Hi,
I'm running into an issue attempting to sort, here is the
Thanks Shawn,
Can you please re-direct me to any wiki which describes (in detail) the
differences between MMapDirectoryFactory and NRTCachingDirectoryFactory? I
found this blog
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html very
helpful which describes about
On 12/8/2014 2:42 AM, Manohar Sripada wrote:
Can you please re-direct me to any wiki which describes (in detail) the
differences between MMapDirectoryFactory and NRTCachingDirectoryFactory? I
found this blog
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html very
helpful
Hi, Manohar,
1. Does posting-list and term-list of the index reside in the memory? If
not, how to load this to memory. I don't want to load entire data, like
using DocumentCache. Either I want to use RAMDirectoryFactory as the data
will be lost if you restart
If you use MMapDirectory, Lucene
Thanks Michael for the response.
If you use MMapDirectory, Lucene will map the files into memory off heap
and the OS's disk cache will cache the files in memory for you. Don't use
RAMDirectory, it's not better than MMapDirectory for any use I'm aware of.
Will that mean it will cache the
On 12/4/2014 10:06 PM, Manohar Sripada wrote:
If you use MMapDirectory, Lucene will map the files into memory off heap
and the OS's disk cache will cache the files in memory for you. Don't use
RAMDirectory, it's not better than MMapDirectory for any use I'm aware of.
Will that mean it will
Yeah, that behavior is consistent with what I documented in my e-book for
Solr. The dot is kept only if between two digits or two letters.
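That rule can be stated as a tiny predicate (a sketch of the behavior as described above, not the tokenizer's actual code):

```python
def dot_is_kept(prev_ch, next_ch):
    """A '.' survives tokenization only when both neighbours are
    letters, or both are digits, per the rule described above."""
    return (prev_ch.isalpha() and next_ch.isalpha()) or \
           (prev_ch.isdigit() and next_ch.isdigit())

dot_is_kept('a', 'b')  # "a.b" keeps the dot
dot_is_kept('1', '2')  # "1.2" keeps the dot
dot_is_kept('a', '1')  # mixed letter/digit drops it
```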
-- Jack Krupansky
-Original Message-
From: Jorge Luis Betancourt González
Sent: Sunday, November 2, 2014 4:34 PM
To: solr-user@lucene.apache.org
On Fri, Oct 3, 2014 at 3:42 PM, Peter Keegan peterlkee...@gmail.com wrote:
Say I have a boolean field named 'hidden', and less than 1% of the
documents in the index have hidden=true.
Do both these filter queries use the same docset cache size? :
fq=hidden:false
fq=!hidden:true
Nope...
On 10/3/2014 1:57 PM, Yonik Seeley wrote:
On Fri, Oct 3, 2014 at 3:42 PM, Peter Keegan peterlkee...@gmail.com wrote:
Say I have a boolean field named 'hidden', and less than 1% of the
documents in the index have hidden=true.
Do both these filter queries use the same docset cache size? :
On Fri, Oct 3, 2014 at 4:35 PM, Shawn Heisey apa...@elyograg.org wrote:
On 10/3/2014 1:57 PM, Yonik Seeley wrote:
On Fri, Oct 3, 2014 at 3:42 PM, Peter Keegan peterlkee...@gmail.com wrote:
Say I have a boolean field named 'hidden', and less than 1% of the
documents in the index have
it will be cached as hidden:true and then inverted
Inverted at query time, so for best query performance use fq=hidden:false,
right?
On Fri, Oct 3, 2014 at 3:57 PM, Yonik Seeley yo...@heliosearch.com wrote:
On Fri, Oct 3, 2014 at 3:42 PM, Peter Keegan peterlkee...@gmail.com
wrote:
Say I
On Fri, Oct 3, 2014 at 6:38 PM, Peter Keegan peterlkee...@gmail.com wrote:
it will be cached as hidden:true and then inverted
Inverted at query time, so for best query performance use fq=hidden:false,
right?
Yep.
-Yonik
http://heliosearch.org - native code faceting, facet functions,
1. Better to target a max of 100 million docs per node, unless you do a POC
showing that more docs really do work well for you.
2. Sounds like you don't have enough memory, either heap or system memory.
Increase your heap first. Then more system memory.
3. Document examples of a simple query, facet
Vamsee Yarlagadda [vam...@cloudera.com] Wrote:
I filed https://issues.apache.org/jira/browse/SOLR-6314 to track this issue
going forward.
Any ideas around this problem?
Apparently the distributed faceting handling collapsed the duplicate fields,
which the single-node path did not. I guess your test case
I filed https://issues.apache.org/jira/browse/SOLR-6314 to track this issue
going forward.
Any ideas around this problem?
Thanks,
Vamsee
On Tue, Jul 29, 2014 at 4:00 PM, Vamsee Yarlagadda vam...@cloudera.com
wrote:
Hi,
I am trying to work with multi-threaded faceting on SolrCloud and in
I'm having a little trouble understanding the use-case here. Why use
re-ranking?
Isn't this just combining the original query with the second query with an
AND
and using the original sort?
At the end, you have your original list in its original order, with
(potentially) some
documents removed
See http://heliosearch.org/solrs-new-re-ranking-feature/
On Wed, Jul 23, 2014 at 11:27 AM, Erick Erickson erickerick...@gmail.com
wrote:
I'm having a little trouble understanding the use-case here. Why use
re-ranking?
Isn't this just combining the original query with the second query with an
The ReRankingQParserPlugin uses the Lucene QueryRescorer, which only uses
the score from the re-rank query when re-ranking the top N documents.
The ReRankingQParserPlugin is built as a RankQuery plugin so you can swap
in your own implementation. Patches are also welcome for the existing
Blog on the RankQuery API
http://heliosearch.org/solrs-new-rankquery-feature/
Joel Bernstein
Search Engineer at Heliosearch
On Wed, Jul 23, 2014 at 3:27 PM, Joel Bernstein joels...@gmail.com wrote:
The ReRankingQParserPlugin uses the Lucene QueryRescorer, which only uses
the score from the
The ReRankingQParserPlugin uses the Lucene QueryRescorer, which only uses
the score from the re-rank query when re-ranking the top N documents.
Understood, but if the re-rank scores produce new ties, wouldn't you want
to resort them with the FieldSortedHitQueue?
Anyway, I was looking to
I like the FieldSortedHitQueue idea. If you want to work up a patch for
that, it would be great.
Joel Bernstein
Search Engineer at Heliosearch
On Wed, Jul 23, 2014 at 5:17 PM, Peter Keegan peterlkee...@gmail.com
wrote:
The ReRankingQParserPlugin uses the Lucene QueryRescorer, which only
I don’t know offhand about the num docs issue - are you doing NRT?
As far as being able to query the replica, I’m not sure anyone ever got to
making that fail if you directly query a node that is not active. It certainly
came up, but I have no memory of anyone tackling it. Of course in many
No, we're not doing NRT. The search clients aren't using CloudSolrServer
and they are behind an AWS load balancer, which calls the Solr ping handler
(implemented with ClusterStateAwarePingRequestHandler) to determine when
the node is active. This ping handler also responds during the index copy,
Try querying the recovering core with distrib=false; you should get the
count of docs in it.
Most likely, since the replica is recovering, it is forwarding all queries
to the active replica; this can be verified in the core logs.
Aha, you are right wrdrvf! The query is forwarded to any of the active
shards (I saw the query alternate between both of mine). Nice feature.
Also, looking at 'ClusterStateAwarePingRequestHandler' (which I downloaded
from www.manning.com/SolrinAction), it is checking zookeeper to see if the
To: solr-user@lucene.apache.org
Subject: Re: Question about sending solrconfig and schema files with java
Hi Jack, actually I posted on SO first, but got no answer.
Check here :
https://stackoverflow.com/questions/24296014/datastax-dse-search-how-to-post-solrconfig-xml-and-schema-xml-using
On 6/20/2014 5:16 AM, Frederic Esnault wrote:
I know how to send solrconfig.xml and schema.xml files to Solr using curl
commands.
But my problem is that I want to send them with Java, and I can't find a
way to do so.
I used HttpComponents and got HTTP headers before the file begins, which SAX
Hi Shawn,
First, thank you for taking the time to answer me.
Actually I tried looking for a way to use SolrJ to upload my files, but I
cannot find information anywhere about how to create nodes with their
config files using SolrJ.
All websites, blogs and docs I found seem to be based on the
On Fri, Jun 20, 2014 at 9:46 PM, Frederic Esnault fesna...@serenzia.com wrote:
Actually i tried looking for a way to use SolrJ to upload my files, but i
cannot find anywhere informations about how to create nodes with their
config files using SolrJ.
Is this something solvable with configsets?
Hi Alexandre,
Nope, I cannot access the server (well, I can actually, but my users won't
be able to), and I can't rely on an HTTP curl call.
As for the final HTTP call indicated in the link you gave, this is my last
step, but before that I need my solrconfig.xml and schema.xml uploaded via
On 6/20/2014 8:46 AM, Frederic Esnault wrote:
First thank you for taking the time to answer me.
Actually i tried looking for a way to use SolrJ to upload my files, but i
cannot find anywhere informations about how to create nodes with their
config files using SolrJ.
All websites, blogs and
Hi Shawn,
Actually I should say that I'm using DSE Search (i.e. Datastax Enterprise
with Solr enabled).
With cURL, I'm doing it like this:
$ curl http://localhost:8983/solr/resource/nhanes_ks.nhanes/solrconfig.xml
--data-binary @solrconfig.xml -H 'Content-type:text/xml;
charset=utf-8'
$ curl
it
is probably not Solr-related.
Sorry for the inconvenience!
-- Jack Krupansky
-Original Message-
From: Frederic Esnault
Sent: Friday, June 20, 2014 11:50 AM
To: solr-user@lucene.apache.org
Subject: Re: Question about sending solrconfig and schema files with java
Hi Shawn,
Actually i should
- From: Frederic Esnault
Sent: Friday, June 20, 2014 11:50 AM
To: solr-user@lucene.apache.org
Subject: Re: Question about sending solrconfig and schema files with java
Hi Shawn,
Actually i should say that i'm using DSE Search (ie. Datastax Enterprise
with SolR enabled).
With cURL, i'm
Oops! Sorry I missed it. Please post the rest of the info on SO as well.
We'll get to it!
-- Jack Krupansky
-Original Message-
From: Frederic Esnault
Sent: Friday, June 20, 2014 7:03 PM
To: solr-user@lucene.apache.org
Subject: Re: Question about sending solrconfig and schema files
You'll have better luck asking the folks at OpenNLP. This isn't really a
Solr question.
On Fri, May 23, 2014 at 6:38 PM, rashi gandhi gandhirash...@gmail.comwrote:
HI,
I have one running solr core with some data indexed on solr server.
This core is designed to provide OpenNLP
Hi Shamik,
Your assumptions on that are correct.
As far as the explicit '/8' at query time is concerned, that's the only
way the router would get to know that it's a 3-level id and not a
2-level one, e.g.
shard.keys='myapp!'
Hash range to be fetched: first 16 bits of the murmur hash of myapp
to
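The bit composition for the 2-level case can be sketched like this (crc32 below is only a stand-in for the MurmurHash3 that Solr actually uses, and the names are hypothetical):

```python
import zlib

def composite_route_hash(shard_key, doc_id, hash_fn):
    """2-level compositeId sketch: the top 16 bits come from the shard
    key's hash and the bottom 16 bits from the doc id's hash, so all
    docs with the same key land in the same slice of the hash ring."""
    return (hash_fn(shard_key) & 0xFFFF0000) | (hash_fn(doc_id) & 0x0000FFFF)

crc = lambda s: zlib.crc32(s.encode()) & 0xFFFFFFFF  # stand-in hash

a = composite_route_hash("myapp", "doc-a", crc)
b = composite_route_hash("myapp", "doc-b", crc)
# a and b share the same top 16 bits, so they route to the same shard range.
```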
Awesome, thanks a lot Anshum, makes total sense now. Appreciate your help.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Question-on-3-level-composite-id-routing-tp4137044p4137071.html
Sent from the Solr - User mailing list archive at Nabble.com.
Thanks for the information Yonik.
-Original Message-
From: ysee...@gmail.com [mailto:ysee...@gmail.com] On Behalf Of Yonik
Seeley
Sent: May-16-14 8:52 PM
To: solr-user@lucene.apache.org
Subject: Re: Question regarding the lastest version of HeliosSearch
On Thu, May 15, 2014 at 3
On Thu, May 15, 2014 at 3:44 PM, Jean-Sebastien Vachon
jean-sebastien.vac...@wantedanalytics.com wrote:
I spent some time today playing around with subfacets and facet functions
now available in Heliosearch 0.05 and I have some concerns... They look
very promising.
Thanks, glad for the
Sorry for not replying!!!
It was the wrong version of SolrJ that the client was using (as it was
third-party code, we couldn't find out earlier). After fixing the version,
things seem to be working fine.
Thanks for your response!!!
On Sun, Apr 13, 2014 at 7:26 PM, Erick Erickson
Thanks for your feedback. Following are some more details
Version of solr : 4.3.0
Version of solrj : 4.3.0
The way I am returning response to client:
RequestHolder is the object containing the post-processed request from the
client (after renaming a few of the fields, and internal-to-external mapping of
You say I can't change the client. What is the client written in?
What does it expect? Does it use the same version of SolrJ?
Best,
Erick
On Sun, Apr 13, 2014 at 6:40 AM, Prashant Golash
prashant.gol...@gmail.com wrote:
Thanks for your feedback. Following are some more details
Version of solr
Hi;
If you have a chance to change the code on the client side, I would suggest
trying this:
http://lucene.apache.org/solr/4_2_1/solr-solrj/org/apache/solr/client/solrj/impl/HttpSolrServer.html#setParser(org.apache.solr.client.solrj.ResponseParser)
There may be a problem with the character encoding of your
Shalin,
I am running 4.7 and seeing this behavior :(
On Thu, Mar 27, 2014 at 10:36 PM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
Yes, there are known bugs with EdgeNGram filters. I think they are fixed in
4.4
See https://issues.apache.org/jira/browse/LUCENE-3907
On Fri, Mar 28,
Certainly I am not the only user experiencing this?
On Wed, Mar 26, 2014 at 1:11 PM, Software Dev static.void@gmail.com wrote:
Is this a known bug?
On Tue, Mar 25, 2014 at 1:12 PM, Software Dev static.void@gmail.com
wrote:
Same problem here:
Yes, there are known bugs with EdgeNGram filters. I think they are fixed in 4.4
See https://issues.apache.org/jira/browse/LUCENE-3907
On Fri, Mar 28, 2014 at 10:17 AM, Software Dev
static.void@gmail.com wrote:
Certainly I am not the only user experiencing this?
On Wed, Mar 26, 2014 at
Is this a known bug?
On Tue, Mar 25, 2014 at 1:12 PM, Software Dev static.void@gmail.com wrote:
Same problem here:
http://lucene.472066.n3.nabble.com/Solr-4-x-EdgeNGramFilterFactory-and-highlighting-td4114748.html
On Tue, Mar 25, 2014 at 9:39 AM, Software Dev static.void@gmail.com
Bump
On Mon, Mar 24, 2014 at 3:00 PM, Software Dev static.void@gmail.com wrote:
In 3.5.0 we have the following.
<fieldType name="autocomplete" class="solr.TextField"
positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter
Same problem here:
http://lucene.472066.n3.nabble.com/Solr-4-x-EdgeNGramFilterFactory-and-highlighting-td4114748.html
On Tue, Mar 25, 2014 at 9:39 AM, Software Dev static.void@gmail.com wrote:
Bump
On Mon, Mar 24, 2014 at 3:00 PM, Software Dev static.void@gmail.com
wrote:
In 3.5.0
Hmmm, before going there let's be sure you're trying to do
what you think you are.
Solr does _not_ index arbitrary XML. There is a very
specific format of XML that describes solr documents
that _can_ be indexed. But random XML is not
supported. See the documents in example/exampledocs
for the XML
On 2/12/2014 8:21 AM, Eric_Peng wrote:
I was just trying to use the SolrJ client to import XML data to the Solr
server. And I read the SolrJ wiki, which says SolrJ lets you upload content
in XML and binary format.
I realized there is an XML parser in Solr (we can use an update handler in
the Solr default
Thanks a lot, learnt a lot from it
Thank you so much Erick, I will try to write my own XML parser
#UploadingDatawithIndexHandlers-UsingXSLTtoTransformXMLIndexUpdates
-- Jack Krupansky
-Original Message-
From: Eric_Peng
Sent: Wednesday, February 12, 2014 11:42 AM
To: solr-user@lucene.apache.org
Subject: Re: Question about how to upload XML by using SolrJ Client Java
Code
Thanks you so much
The problem is with the admin UI not following the XML include to find the
entities, so it found none. DIH itself does support XML include, as I can
issue the DIH commands via HTTP on the included entities successfully.
Bill
On Mon, Jan 13, 2014 at 8:03 PM, Shawn Heisey s...@elyograg.org wrote:
On
On 1/13/2014 3:31 PM, Bill Au wrote:
But when I use XML include, the Entity pull-down in the Dataimport section
of the Solr admin UI is empty. I know that happens when there is a syntax
error in solr-data-config.xml. Does DIH support XML include? Also I am
not seeing any error message in the
Hi Vulcanoid,
If you want to consider proximity, you need to use the pf (phrase fields)
and ps (phrase slop) parameters. Please see:
http://wiki.apache.org/solr/SolrRelevancyFAQ#How_can_I_search_for_one_term_near_another_term_.28say.2C_.22batman.22_and_.22movie.22.29
P.S. edismax has more fine
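A request using pf/ps might look like the following (the field names are placeholders):

```
q=batman movie&defType=edismax&qf=title description&pf=title&ps=2
```

Here pf boosts documents where the whole query appears as a phrase in the title field, and ps=2 lets the phrase terms sit up to two positions apart.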
Here's another sequence of messages I frequently see where replication
isn't happening with no clearly identified cause:
INFO org.apache.solr.handler.SnapPuller; Starting replication process
INFO org.apache.solr.handler.SnapPuller; Master's generation: 6
INFO
What Solr version?
- Mark
On Dec 20, 2013, at 1:14 PM, Fred Drake fdr...@gmail.com wrote:
Here's another sequence of messages I frequently see where replication
isn't happening with no clearly identified cause:
INFO org.apache.solr.handler.SnapPuller; Starting replication process
INFO
On Fri, Dec 20, 2013 at 1:33 PM, Mark Miller markrmil...@gmail.com wrote:
What Solr version?
I've seen the first problem in the thread with Solr 4.1, and the
second with both 4.1 and 4.6.
-Fred
--
Fred L. Drake, Jr.fred at fdrake.net
A storm broke loose in my mind. --Albert Einstein
I guess you refer to this post?
http://1opensourcelover.wordpress.com/2013/07/02/solr-external-file-fields/
If so .. he already provides at least one possible use case:
*snip*
We use Solr to serve our company’s browse pages. Our browse pages are similar
to how a typical Stackoverflow tag page
On 11/19/2013 6:18 AM, adfel70 wrote:
Hi, we plan to establish an ensemble of Solr with ZooKeeper.
We're going to have 6 Solr servers with 2 instances on each server; we'll
also have 6 shards with replication factor 2, and in addition we'll have 3
ZooKeepers.
You'll want to do one Solr instance per
Regarding data loss, Solr returns an error code to the calling app (either
an HTTP error code, or the equivalent in SolrJ), so if it fails to index for a
known reason, you'll know about it.
There are always edge cases though.
If Solr indexes the document (returns success), that means the document is
I’d recommend you start with the upcoming 4.6 release. Should be out this week
or next.
- Mark
On Nov 19, 2013, at 8:18 AM, adfel70 adfe...@gmail.com wrote:
Hi, we plan to establish an ensemble of solr with zookeeper.
We gonna have 6 solr servers with 2 instances on each server, also we'll
On 11/19/2013 4:10 PM, yriveiro wrote:
After the reading this link about DocValues and be pointed by Mark Miller to
raise the question on the mailing list, I have some questions about the
codec implementation note:
Note that only the default implementation is supported by future version of
Shawn,
This setup has big implications, and I think this problem is not described
properly in either the wiki or the ref guide, nor is the way it can be
overcome (all the process that you describe).
+1 to finding a way to upgrade without reindexing the data; I don't have
enough space to do an optimize of 3T.
Other question,
Can someone confirm that I can upgrade from 4.5.1 to 4.6 in a safe and
clean way (without optimizes and all that stuff)?
--
Yago Riveiro
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)
On Wednesday, November 20, 2013 at 12:16 AM, Yago Riveiro wrote:
Shawn,
This
Thanks so much for the answer, and for JIRA-fying it.
On a related note, ..
In our application, the cores can get moderately large, and since we mostly
use a subset of them on a roughly LRU basis, the dynamic core loading seems
a good fit. We interact with our Solr server via a SolrJ client.
That said, we do require the capability to access older
Just send a query to that core, I think.
Erick
On Fri, Nov 8, 2013 at 11:14 AM, vybe3142 vybe3...@gmail.com wrote:
On a related note, ..
In our application, the cores can get moderately large , and since we
mostly
use a subset of them on a roughly LRU basis, the dynamic core loading
Hmmm, not really, you have to kind of take it on faith I'm afraid.
You can check the Solr logs and you should see messages about
cores unloading, but that's not very satisfactory.
Actually sounds like a JIRA. See SOLR-5430
On Thu, Nov 7, 2013 at 12:43 PM, Vinay B, vybe3...@gmail.com wrote:
Hi Dennis,
I would not expect the index growth to be quite linear as the number of
shapes grows, but nonetheless it may be significant. Indexing non-point
shapes will index more term data than it ideally should: LUCENE-4942. I
need to find the time/priority to do it. Probably within the next
You can't control that if using the compositeIdRouter because the routing
is dependent on the hash function. What you want is custom sharding i.e.
the ability to control the shard to which updates are routed.
You should create a collection using the Collections API with a shards
param specifying
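With the implicit router, a create call naming the shards explicitly might look like this (the collection and config names are placeholders):

```
/admin/collections?action=CREATE&name=mycollection&router.name=implicit&shards=shard1,shard2,shard3&collection.configName=myconf
```

Updates can then be steered to a specific shard with the _route_ request parameter, or by defining a router.field on the collection.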
Can I split shards as with compositeId using this method?
On Wednesday, October 23, 2013, Shalin Shekhar Mangar wrote:
You can't control that if using the compositeIdRouter because the routing
is dependent on the hash function. What you want is custom sharding i.e.
the ability to control the
No, shard splitting does not support collections with implicit router.
On Wed, Oct 23, 2013 at 1:21 PM, Yago Riveiro yago.rive...@gmail.comwrote:
Can I split shards as with compositeId using this method?
On Wednesday, October 23, 2013, Shalin Shekhar Mangar wrote:
You can't control that
I really don't understand the question. What behavior are you seeing
that leads you to ask?
bq: Is it necessary to duplicate the field and set indexed and stored to
false and
If this means setting _both_ indexed and stored to false, then you
effectively throw the field completely away; there's no
Sorry if I don't make myself understood; my English is not too good.
My goal is to remove pressure from the heap; my indexes are too big, the
heap fills up very quickly, and I get an OOM. I read about docValues stored
on disk, but I don't know how to configure them.
I read this link:
Hello Yago,
To my knowledge, in facet calculations docValues take precedence over other
methods. So, even if your field is also stored and indexed, your facets won't
use the inverted index or fieldValueCache when docValues are present.
I think you will still have to store and index to
Hi Gun,
Thanks for the response.
Indeed I only want docValues to do facets.
IMHO a reference to the fact that docValues take precedence over other
methods is needed. It is not always obvious.
--
Yago Riveiro
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)
On Monday,
Issue resolved. Not a Solr issue; a really hard-to-discover missing
library in my installation.
On Thu, Oct 10, 2013 at 7:10 PM, Jack Park jackp...@topicquests.org wrote:
I have an interceptor which grabs SolrDocument instances in the
update handler chain. It feeds those documents as a JSON