1. No, there is only one instance
2. init() is called
3. check these standard search components:
https://github.com/apache/lucene-solr/tree/trunk/solr/core/src/java/org/apache/solr/handler/component
Depending on what you are doing, you can pick the component that's closest to
your purposes.
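To make points 1 and 2 concrete, here is a minimal sketch of the lifecycle. Note that this uses a simplified stand-in class, not the real org.apache.solr.handler.component.SearchComponent API (which requires the solr-core jar); the point is only that a single instance is constructed per core and init() runs once before any requests are processed.

```java
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

// Simplified stand-in for the SearchComponent lifecycle:
// one instance per core, init(args) called once at startup,
// then process() called for every request against the same instance.
public class LifecycleSketch {
    static class MyComponent {
        final AtomicInteger initCalls = new AtomicInteger();
        Map<String, String> args;

        void init(Map<String, String> args) {   // called exactly once
            this.args = args;
            initCalls.incrementAndGet();
        }

        String process(String query) {          // called per request
            return "processed:" + query;
        }
    }

    public static void main(String[] argv) {
        MyComponent component = new MyComponent();      // single instance
        component.init(Map.of("defType", "lucene"));    // one-time init

        // Many requests hit the same instance; no re-init happens.
        System.out.println(component.process("q1"));
        System.out.println(component.process("q2"));
        System.out.println("initCalls=" + component.initCalls.get());
    }
}
```

Because there is only one instance, any mutable state in the component must be thread-safe.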
Well, it's kind of hard to find a person if the requirement is 10 years' experience
with Solr given that Solr was created in 2004.
On Jul 23, 2014, at 12:45 PM, Jack Krupansky j...@basetechnology.com wrote:
I occasionally get pinged by recruiters looking for Solr application
developers...
It wouldn't be too hard to write a Solr plugin that takes a docId param together
with a query and returns the position of that doc within the result list for
that query. You will still need to deal with the performance, though. For
example, if the doc ranks at one millionth, the plugin still
Oh, I see what you are trying to do now; the question was a bit confusing :)
To get the exact position of a particular document in the ranked list, you will
need to loop through the whole list, as that's exactly what Solr has to do to
get to that document.
However, you could do some optimization with the
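The brute-force walk described above can be sketched outside Solr: given results already sorted by score, the rank of a target doc is found only by scanning the list in order. (The doc IDs and scores below are made up for illustration.)

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RankLookup {
    // Returns the 1-based position of docId in the ranked result list,
    // or -1 if absent. This is O(n): there is no shortcut, which is why
    // a doc ranked at the one-millionth position is expensive to locate.
    static int rankOf(Map<String, Float> rankedResults, String docId) {
        int position = 1;
        for (String id : rankedResults.keySet()) {
            if (id.equals(docId)) return position;
            position++;
        }
        return -1;
    }

    public static void main(String[] args) {
        // LinkedHashMap preserves ranking order (highest score first).
        Map<String, Float> ranked = new LinkedHashMap<>();
        ranked.put("doc42", 9.1f);
        ranked.put("doc7", 8.4f);
        ranked.put("doc13", 5.2f);
        System.out.println(rankOf(ranked, "doc13"));
    }
}
```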
Hey Josh, I am not an expert in Java performance, but I would start with dumping the heap and investigating with VisualVM (the free tool that comes with the JDK). In my experience, the most common cause of PermGen exceptions is the app creating too many interned strings. Solr (actually Lucene) interns the
production environment for our heavier users would see in the range of 3200+ user cores created a day. Thanks for the help. Josh. On Mon, Mar 3, 2014 at 11:24 AM, Tri Cao tm...@me.com wrote:
If you just want to see which classes are occupying the most memory in a live JVM, you can do: jmap -permstat pid. I don't think you can dump the contents of PermGen space. Hope this helps, Tri. On Mar 03, 2014, at 11:41 AM, KNitin nitin.t...@gmail.com wrote: Is there a way to dump the contents of permgen and
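As a complement to jmap, the usage of the JVM's memory pools (PermGen on Java 7 and earlier, Metaspace on Java 8+; the exact pool names vary by JVM and collector) can also be read from inside the process via the standard management API:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class PoolUsage {
    public static void main(String[] args) {
        // Lists every memory pool the JVM exposes; look for a name
        // containing "Perm Gen" (Java <= 7) or "Metaspace" (Java 8+).
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-30s used=%d bytes%n",
                    pool.getName(), pool.getUsage().getUsed());
        }
    }
}
```

Polling these pools over time can show whether interned strings (or class loading) are what is steadily filling the space.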
Lucene's main file formats actually don't change a lot in 4.x (or even 5.x), and the newer codecs just delegate to previous versions for most file types. The newer file types don't typically include Lucene's version in file names. For example, the Lucene 4.6 codec basically delegates stored fields and
Taminidi, relevance ranking is tricky and very domain-specific. There are usually multiple ways to do the same thing; each is better at some edge cases and worse at some others :) It looks to me that you are trying to rank the products by: exact match on SKU, then exact match on ManufactureSKU, then
1. Yes, that's the right way to go, well, in theory at least :)
2. Yes, queries are always fanned out to all shards and will be as slow as the slowest shard. When I looked into Solr's distributed querying implementation a few months back, the support for graceful degradation for things like network failures
at Heliosearch. On Wed, Feb 12, 2014 at 2:12 PM, Tri Cao tm...@me.com wrote:
Hi all, I am running a Solr application and I would need to implement a feature that requires faceting and filtering on a large list of IDs. The IDs are stored outside of Solr and are specific to the currently logged-on user. An example of this is the articles/tweets the user has read in the last few
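One common way to filter on an externally stored ID list (not necessarily the approach recommended in this thread) is Solr's terms query parser, available in recent Solr versions, which takes the allowed values as a comma-separated list in a filter query. A small sketch of building such an fq string:

```java
import java.util.List;

public class TermsFilter {
    // Builds a filter query for Solr's {!terms} query parser, which
    // matches any document whose field value is in the supplied list.
    // For very large ID lists, send this as a POST body rather than in
    // the GET URL to avoid URL-length limits.
    static String termsFq(String field, List<String> ids) {
        return "{!terms f=" + field + "}" + String.join(",", ids);
    }

    public static void main(String[] args) {
        System.out.println(termsFq("id", List.of("a1", "b2", "c3")));
        // {!terms f=id}a1,b2,c3
    }
}
```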
I don't think there's a synonym file for this use case. I am not even sure if
synonym is the right way to handle it.
I think the better way to improve recall is to mark up your documents with
a hidden field of the geographic relations. For example, before indexing,
you can add a field to all
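A sketch of this markup-before-indexing idea, with a made-up place hierarchy (the lookup table and method names are illustrative, not from the thread; in practice the containment data would come from a gazetteer):

```java
import java.util.List;
import java.util.Map;

public class GeoEnrichment {
    // Hypothetical lookup: each place maps to the broader regions that
    // contain it.
    static final Map<String, List<String>> CONTAINED_IN = Map.of(
            "San Jose", List.of("Santa Clara County", "California", "USA"),
            "Brooklyn", List.of("New York City", "New York", "USA"));

    // Before indexing, expand a document's place into a hidden
    // multi-valued field, so a query for "California" also recalls
    // documents that only mention "San Jose".
    static List<String> geoRelations(String place) {
        return CONTAINED_IN.getOrDefault(place, List.of());
    }

    public static void main(String[] args) {
        System.out.println(geoRelations("San Jose"));
    }
}
```

The hidden field can then be included in the query-time field list (e.g. via edismax qf) without being shown to users.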