Hi,
We've written a few SearchComponents that make use
of rb.setNeedDocSet(true); the trouble with this is that the resulting DocSet gets
cached in the filterCache, and we think it is purging our more 'useful'
DocSets from the filterCache.
Has anyone else noticed this, and does anyone have a useful remedy?
We are curre
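One possible remedy (a sketch, not a tested fix): register a separate user cache in solrconfig.xml and have the custom SearchComponents store their DocSets there, via SolrIndexSearcher.getCache(name), instead of letting them compete with filter-query entries in the shared filterCache. The cache name and sizes below are assumptions:

```xml
<!-- sketch: a dedicated user cache (name is hypothetical) for
     component-computed DocSets, kept separate from the filterCache -->
<cache name="componentDocSetCache"
       class="solr.LRUCache"
       size="512"
       initialSize="128"
       autowarmCount="0"/>
```

The component would then look the cache up with searcher.getCache("componentDocSetCache") and do its own put/get, leaving filterCache eviction behaviour untouched.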
Hi,
We need to specify a different query analyzer dynamically, depending on input
parameters.
We need this so that we can use different stopword lists at query time.
Would anyone know how I might be able to achieve this in Solr?
I'm aware of the solution to specify different field types,
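For reference, the field-type approach mentioned above can be sketched like this in schema.xml (type names and stopword file names are hypothetical); each type differs only in its query-time stopword list, and the request then selects the matching field via qf or df:

```xml
<!-- sketch: two field types whose query analyzers use different stopword lists -->
<fieldType name="text_stop_a" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords_a.txt"/>
  </analyzer>
</fieldType>
<fieldType name="text_stop_b" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords_b.txt"/>
  </analyzer>
</fieldType>
```

The content would be copyField'ed into one field per type, at the cost of extra index size.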
Hi,
How can I have faceting on a subset of the query docset e.g. with something
akin to:
SimpleFacets.base =
SolrIndexSearcher.getDocSet(
Query mainQuery,
SolrIndexSearcher.getDocSet(Query filter)
)
Is there anything like facet.fq?
Cheers,
Dan
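There is no facet.fq parameter. The closest built-in mechanism works in the opposite direction: tag a filter and exclude it from faceting with LocalParams, e.g. (field names hypothetical):

```text
q=mainQuery&fq={!tag=dt}doctype:pdf&facet=true&facet.field={!ex=dt}doctype
```

For true faceting over mainQuery intersected with an extra filter, without that filter restricting the result set, a custom SearchComponent would need to set the facet base DocSet itself via SolrIndexSearcher.getDocSet(query, filterDocSet), much as the pseudocode above suggests.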
rg/apache/lucene/search/similarities/Similarity.html#coord%28int,%20int%29but
> it requires deep understanding of Lucene internals
>
>
>
> On Tue, Jan 29, 2013 at 2:12 PM, Daniel Rosher wrote:
>
> > Hi,
> >
> > I'm wondering if there exists or if someone has implemented
Hi,
I'm wondering if there exists or if someone has implemented something like
the following as a function query:
overlap(query,field) = number of matching terms in field/number of terms in
field
e.g. with three docs having these tokens (e.g. A, B, C) in a field D:
1: A B B
2: A B
3: A
The overlap woul
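The overlap measure described above can be illustrated outside Solr with plain Java (a sketch only; whether a repeated matching token counts once or per occurrence is an assumption here, counted per occurrence):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;

public class Overlap {
    // overlap(query, field) = matching token occurrences / total tokens in field
    static double overlap(Set<String> queryTerms, List<String> fieldTokens) {
        if (fieldTokens.isEmpty()) return 0.0;
        long matches = fieldTokens.stream().filter(queryTerms::contains).count();
        return (double) matches / fieldTokens.size();
    }

    public static void main(String[] args) {
        Set<String> q = Set.of("A");
        System.out.println(overlap(q, Arrays.asList("A", "B", "B"))); // doc 1: 1/3
        System.out.println(overlap(q, Arrays.asList("A", "B")));      // doc 2: 1/2
        System.out.println(overlap(q, List.of("A")));                 // doc 3: 1/1
    }
}
```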
Hi
The product function query needs a ValueSource, not the pseudo score field.
You probably need something like (with Solr 4.0):
q={!lucene}*:*&sort=product(query($q),2) desc,score
desc&fl=score,_score_:product(query($q),2),[explain]
Cheers,
Dan
On Tue, Nov 20, 2012 at 2:29 AM, Floyd Wu wrote
Hi,
Have a look at DocTransformers
http://wiki.apache.org/solr/DocTransformers and ExplainAugmenterFactory as
an example
Cheers,
Dan
On Tue, Nov 20, 2012 at 3:08 PM, Sebastian Hofmann wrote:
> Hello all,
> We import XML documents to Solr with SolrJ. We use XSL to process the
> "objects" to f
Ah ha .. good thinking ... thanks!
Dan
On Wed, Oct 10, 2012 at 2:39 PM, Ahmet Arslan wrote:
>
> > Token_Input:
> > the fox jumped over the lazy dog
> >
> > Synonym_Map:
> > fox => vulpes
> > dog => canine
> >
> > Token_Output:
> > vulpes canine
> >
> > So remove all tokens, but retain those mat
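The transformation in the quoted example (keep only tokens that have a synonym mapping, emitting the mapped form) boils down to the following; this is a plain-Java sketch of the logic, not an actual Solr TokenFilter:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class SynonymKeep {
    // Keep only tokens present in the synonym map, emitting the mapped form;
    // all unmapped tokens are dropped.
    static List<String> mapAndKeep(List<String> tokens, Map<String, String> synonyms) {
        List<String> out = new ArrayList<>();
        for (String t : tokens) {
            String mapped = synonyms.get(t);
            if (mapped != null) out.add(mapped);
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> syn = Map.of("fox", "vulpes", "dog", "canine");
        List<String> in = List.of("the", "fox", "jumped", "over", "the", "lazy", "dog");
        System.out.println(String.join(" ", mapAndKeep(in, syn))); // vulpes canine
    }
}
```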
Hi,
I'm trying to index some content that has things like 'java/J2EE' but with
solr.WordDelimiterFilterFactory and parameters [generateWordParts="1"
generateNumberParts="0" catenateWords="0" catenateNumbers="0"
catenateAll="0" splitOnCaseChange="0"] this ends up tokenized as
'java','j','2','EE'
Do
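One way to keep a term like 'J2EE' intact (a sketch, assuming the rest of the analyzer stays as above): add splitOnNumerics="0" so the filter stops splitting on letter/digit boundaries, and/or list such terms in a protected-words file (the file name below is hypothetical):

```xml
<filter class="solr.WordDelimiterFilterFactory"
        generateWordParts="1" generateNumberParts="0"
        catenateWords="0" catenateNumbers="0" catenateAll="0"
        splitOnCaseChange="0" splitOnNumerics="0"
        protected="protwords.txt"/>
```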
itution.name only, the rest are copy fields of the same.
>
>
> Any help is appreciated.
>
> Thanks
> Sundar
>
Hi,
I've modified a copy of
./src/test/org/apache/solr/TestDistributedSearch.java for my own build
process. I can compile fine but running the test always logs to STDERR
INFO: Logging to STDERR via org.mortbay.log.StdErrLog
This method appears to be deprecated?
//public JettySolrRunner( String conte
Hi All,
We'd like to restrict older modified documents with a step function, rather
than the suggested method:
recip(rord(creationDate),1,1000,1000).
I'm wondering whether the following might do it, and if anyone else has had
to solve this before?
bf="map(map(modified,0,0,today),0,12monthago,0
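A step function over document age can also be written with ms() and map() (the cutoff and boost values below are assumptions, not a tested formula):

```text
bf=map(ms(NOW,modified),0,31536000000,1,0.1)
```

Here ms(NOW,modified) is the document's age in milliseconds; ages from 0 up to roughly 12 months (31536000000 ms) map to 1, and anything older falls through to the default value 0.1.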