On 09.08.2011 14:58, Ahmet Arslan wrote:
> while searching with debug on I see strange query parsing:
>
> name="rawquerystring">identifier:"ub.uni-bielefeld.de"
> name="querystring">identifier:"ub.uni-bielefeld.de"
>
> +MultiPhraseQuery(identifier:"(ub.uni-bielefeld.de ub) uni
> bielefeld de")
>
> +identifier:"(ub.uni-bielefeld.de ub) uni bielefe
lboutros wrote:
>
> I used Spanish stemming, put the ASCIIFoldingFilterFactory before the
> stemming filter and added it in the query part too.
>
> Ludovic.
>
My experiments with the French stemmer do not yield good results with this
order. Applying the ASCIIFoldingFilterFactory before stemming
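For reference, a minimal analyzer chain with the folding filter placed ahead of the stemmer might look like this sketch in schema.xml (the field type name is illustrative, not from the thread):

```xml
<fieldType name="text_folded" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- fold accented characters to ASCII before stemming -->
    <filter class="solr.ASCIIFoldingFilterFactory"/>
    <filter class="solr.SnowballPorterFilterFactory" language="French"/>
  </analyzer>
</fieldType>
```

As the posts in this thread show, whether folding before stemming helps depends on the stemmer and language, so test both orders with the analysis page.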
(11/07/29 8:52), Chris Hostetter wrote:
: If I got an exception during faceting (e.g. undefined field), Solr doesn't
: return HTTP 400 but 200 with the exception stack trace in
: ... tag. Why is it implemented so? I checked Solr 1.1 and saw the same
: behavior.
super historic, pre-apache, code ... the idea at the time was that some
pa
Correction:
> Except FacetComponent, HighlightComponent for example, if I use a bad regex
> pattern
> for RegexFragmenter, HighlightComponent throws an exception then Solr return
> 400.
Solr returns 500 in this case actually. I think it should be 400 (bad request).
koji
--
Check out "Query Lo
Excellent, thanks for the confirmation Erik. I've started working with
Solr (just getting my feet wet at this point).
-Matt
On 07/20/2011 05:38 PM, Erick Erickson wrote:
Solr would work fine for this; your PDF files would have to be interpreted
by Tika, but see the Data Import Handler, FileListEntityProcessor and
TikaEntityProcessor. I don't quite think Nutch is the tool here.
You'll be wanting to do highlighting and a couple of other things
You'll spend some tim
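A data-config.xml along the lines Erick describes might look like this sketch (directory, file pattern, and field names are placeholders):

```xml
<dataConfig>
  <dataSource type="BinFileDataSource" name="bin"/>
  <document>
    <!-- walk the directory and emit one row per PDF file -->
    <entity name="files" processor="FileListEntityProcessor"
            baseDir="/path/to/pdfs" fileName=".*\.pdf" rootEntity="false">
      <!-- hand each file to Tika for text extraction -->
      <entity name="tika" processor="TikaEntityProcessor" dataSource="bin"
              url="${files.fileAbsolutePath}" format="text">
        <field column="text" name="content"/>
      </entity>
    </entity>
  </document>
</dataConfig>
```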
: I saw this in the Solr wiki : "An un-optimized index is going to be *at
: least* 10% slower for un-cached queries."
: Is this still true? I read somewhere that recent versions of Lucene were
: less sensitive to un-optimized indexes than in the past...
correct. I've removed that specific statem
On Mon, Jul 4, 2011 at 5:47 PM, Engy Morsy wrote:
>
> What is the workflow of solr starting from submitting an xml document to be
> indexed? Is there any default analyzer that is called before the analyzer
> specified in my solr schema for the text field. I have a situation where the
> words of t
: I'm working with Solrj, and I like to use the SolrResponseBase.toString()
: method, as it seems to return JSON. However, the JSON returned is not
many of the toString methods on internal solr objects use {} to show
encapsulation when recursively calling toString() on sub objects, but they
ar
Hello again!
Thank you very much for answering. The problem was the defaultOperator,
which was set to AND. Damn, I was blind :-/
Thank you again.
On Tue, Jun 7, 2011 at 12:34 PM, Luis Cappa Banda wrote:
> *Expression*: A B C D E F G H I
As written, this is equivalent to
*Expression*: A default_field:B default_field:C default_field:D
default_field:E default_field:F default_field:G default_field:H
default_field:I
Try *Expression*:( A B C D
My first guess would be that you are using AND as the default operator. You can
see the generated query by using the parameter debugQuery=true.
On Tue, Jun 7, 2011 at 1:34 PM, Luis Cappa Banda wrote:
> Hello!
>
> My problem is as follows: I've got a field (indexed and stored both set to
> true) tokenize
Ahhh, you're right. I know there's been some discussion in the past about
how to find out the number of terms that matched, but don't remember the
outcome off-hand. You might try searching the mail archive for something like
"number of matching terms" or some such.
Sorry I'm not more help
Erick
O
On 02/06/11 13:32, Erick Erickson wrote:
Say you're trying to match terms A, B, C. Would something like
(A AND B AND C)^1000 OR (A AND B)^100 OR (A AND C)^100 OR (B AND
C)^100 OR A OR B OR C
work? It wouldn't be an absolute ordering, but it would tend to
push the documents where all three terms matched toward
the top.
It would get real
Thanks. I think I can take it from there!
Aaron Chmelik
Web Designer & Programmer
email: aaron.chme...@gmail.com
website: http://webdesign.aaronchmelik.com
phone: 651.757.5979
On Thu, May 26, 2011 at 4:51 PM, Markus Jelsma
wrote:
Optimizing an index forces segments to merge. Usually, segments are merged
automatically based on your mergeFactor setting. During a merge, documents
flagged for deletion are really purged and the number of segments is reduced,
which improves search performance. There are some good pages on mergeF
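The merge behaviour described above is controlled in solrconfig.xml; a sketch (the values shown are the common defaults, not a recommendation):

```xml
<mainIndex>
  <!-- allow up to 10 segments of a given size before merging them into one -->
  <mergeFactor>10</mergeFactor>
  <!-- flush the in-memory buffer to a new segment at this size -->
  <ramBufferSizeMB>32</ramBufferSizeMB>
</mainIndex>
```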
One more question - what does optimization do? Maybe to be a little more
precise - what happens to the index that requires optimization (what is the
problem and how does optimization solve it)?
Aaron Chmelik
Web Designer & Programmer
email: aaron.chme...@gmail.com
website: http://webdesign.aaronchm
Define reindexing. Every new document is indexed, and existing documents are
deleted and reindexed as if they were new documents. Completely reindexing from
scratch is only required if breaking changes are made to the schema or if you
upgrade to a new version that uses another format and isn't able to
14:56
To: solr-user@lucene.apache.org
Subject: Re: Question concerning the updating of my solr index
Greg,
I believe the point of SUSS is that you can just add docs to it one by one, so
that SUSS can asynchronously send them to the backend Solr instead of you
batching the docs.
Otis
-lucene.com/
- Original Message
> From: Greg Georges
> To: "solr-user@lucene.apache.org"
> Sent: Mon, May 2, 2011 2:45:40 PM
> Subject: RE: Question concerning the updating of my solr index
>
> Oops, here is the code
>
> SolrServer server = new
>Stream
ver.commit();
server.optimize();
Greg
-Original Message-
From: Greg Georges [mailto:greg.geor...@biztree.com]
Sent: 2 mai 2011 14:44
To: solr-user@lucene.apache.org
Subject: RE: Question concerning the updating of my solr index
Ok I had seen this in the wiki, performance has go
Gospodnetic [mailto:otis_gospodne...@yahoo.com]
Sent: 2 mai 2011 13:59
To: solr-user@lucene.apache.org
Subject: Re: Question concerning the updating of my solr index
Greg,
You could use StreamingUpdateSolrServer instead of that UpdateRequest class -
http://search-lucene.com/?q=StreamingUpdateSolrServer
Your index won't be locked in the sense that you could have multiple apps or
threads adding docs to the same index simultaneously and that se
Nutch
Lucene ecosystem search :: http://search-lucene.com/
- Original Message
> From: Charles Wardell
> To: solr-user@lucene.apache.org
> Sent: Wed, April 27, 2011 7:51:20 PM
> Subject: Re: Question on Batch process
>
> Thank you for your response. I did not m
Thank you Otis.
Without trying to appear too stupid, when you refer to having the params
matching your # of CPU cores, you are talking about the # of threads I can
spawn with the StreamingUpdateSolrServer object?
Up until now, I have been just utilizing post.sh or post.jar. Are these capable
of t
Charlie,
How's this:
* -Xmx2g
* ramBufferSizeMB 512
* mergeFactor 10 (default, but you could up it to 20, 30, if ulimit -n allows)
* ignore/delete maxBufferedDocs - not used if you run ramBufferSizeMB
* use StreamingUpdateSolrServer (with params matching your number of CPU cores)
or send batches
allows you to set the maximum size of the merged segment:
> https://issues.apache.org/jira/browse/LUCENE-854.
>
>
> Tom Burton-West
> http://www.hathitrust.org/blogs/large-scale-search
>
> -Original Message-
> From: Juan Grande [mailto:juan.gra...@gmail.com]
> Sent
Specifically to the file size support, all the file systems on current releases
of linux (and unixes too) support large files with 64 bit offsets, and I am
pretty sure that java VM supports 64 bit offsets in files, so there is no 2GB
file size limit anymore.
François
On Apr 15, 2011, at 4:31 P
Hi John,
> How can I split the file of the solr index into multiple files?
>
Actually, the index is organized in a set of files called segments. It's not
just a single file, unless you tell Solr to do so.
That's because some "file systems are about to support a maximum
> of space in a single file"
Hi Raj,
I'm guessing your slug field is much shorter and thus a match in that field has
more weight than a match in a much longer story field. If you omit norms for
those fields in the schema (and reindex), I believe you will see File 4 drop to
position #4.
Otis
Sematext :: http://semate
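Omitting norms is a per-field attribute in schema.xml; a sketch using the field names from this thread (the type name is an assumption):

```xml
<field name="slug"  type="text" indexed="true" stored="true" omitNorms="true"/>
<field name="story" type="text" indexed="true" stored="true" omitNorms="true"/>
```

As Otis notes, the change only takes effect after reindexing.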
Thank you so much. I will give this a try. Thanks again everybody for
your help
Raj
-Original Message-
From: lboutros [mailto:boutr...@gmail.com]
Sent: Tuesday, April 05, 2011 2:28 PM
To: solr-user@lucene.apache.org
Subject: RE: question on solr.ASCIIFoldingFilterFactory
This analyzer seems to work:
I used Spanish stemming, put the ASCIIFoldingFilterFactory before the
stemming filter and added it in the que
Your analyzer contains these two filters :
before :
So two things:
The words you are testing are not English words (no?), so the stemming will
behave strangely.
If you really want to remove accents, try putting the
ASCIIFoldingFilterFactory before the two others.
Ludovic.
-
Jo
> generateWordParts="1" generateNumberParts="1" catenateWords="0"
> catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
> protected="protwords.txt"/>
?
---
From: Steven A Rowe [mailto:sar...@syr.edu]
Sent: Tuesday, April 05, 2011 12:28 PM
To: solr-user@lucene.apache.org
Subject: RE: question on
CharTermAttribute termAtt = filter.getAttribute(CharTermAttribute.class);
assertTermEquals("despues", filter, termAtt);
assertTermEquals("Imagenes", filter, termAtt);
}
Steve
> -Original Message-
> From: lboutros [mailto:boutr...@gmail.com]
> Sent: Tuesday, April 05, 2011 12:18 PM
>
Is there any stemming configured for this field in your schema
configuration file?
Ludovic.
2011/4/5 Nemani, Raj [via Lucene] <
ml-node+2780463-48954297-383...@n3.nabble.com>
> All,
>
> I am using solr.ASCIIFoldingFilterFactory to perform accent insensitive
> search. One of the words that g
I can't remember where I read it, but I think MappingCharFilterFactory is
preferred.
There is an example in the example schema.
From this, I get:
org.apache.solr.analysis.MappingCharFilterFactory
{mapping=mapping-ISOLatin1Accent.txt}
|text|despues|
On Tue, Apr 5, 2011 at 5:06 PM, Nemani, Raj
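Wired into a field type, the char filter runs before the tokenizer; a sketch (the type name, tokenizer, and extra filter are illustrative, the mapping file is the one shipped with the Solr example):

```xml
<fieldType name="textAccentFolded" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <!-- maps accented characters to their ASCII equivalents before tokenization -->
    <charFilter class="solr.MappingCharFilterFactory"
                mapping="mapping-ISOLatin1Accent.txt"/>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```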
Thanks Hoss,
Externalizing this part is exactly the path we are exploring now, not
only for this reason.
We already started testing Hadoop SequenceFile as a write-ahead log for
updates/deletes.
SequenceFile supports append now (simply great!). It was a pain to
have to add hadoop into the mix for "
: Is it possible in solr to have multivalued "id"? Or I need to make my
: own "mv_ID" for this? Any ideas how to achieve this efficiently?
This isn't something the SignatureUpdateProcessor is going to be able to
help you with -- it does the deduplication by changing the low level
"update" (impl
That did the trick! thanks!
On Wed, Mar 30, 2011 at 1:31 PM, Steven A Rowe wrote:
> Hi Marcelo,
>
> Try adding the 'method="text"' attribute to your tag, e.g.:
>
>
>
> If that doesn't work, there is another attribute "omit-xml-declaration"
> that might do the trick.
>
> See http://www.w3.org/T
Hi Marcelo,
Try adding the 'method="text"' attribute to your tag, e.g.:
If that doesn't work, there is another attribute "omit-xml-declaration" that
might do the trick.
See http://www.w3.org/TR/xslt#output for more info.
Steve
> -Original Message-
> From: Marcelo Iturbe [mailto:mar
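The attribute Steve mentions goes on the stylesheet's xsl:output element; a minimal sketch:

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- emit plain text and suppress the XML declaration -->
  <xsl:output method="text" omit-xml-declaration="yes"/>
</xsl:stylesheet>
```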
On Mon, Mar 28, 2011 at 3:59 PM, Firdous Ali wrote:
> Hi,
> I'm unable to index data; looks like the datasource is not even read by
> solr. I even created an empty dataimport.properties file at /conf but the
> problem persists.
[...]
Look at the Solr log files, which will probably have an except
You need to reindex.
On Monday 14 March 2011 14:04:00 Ahsan Iqbal wrote:
> Hi All
>
> Is there any way to drop term vectors from already built index file.
>
> Regards
> Ahsan Iqbal
--
Markus Jelsma - CTO - Openindex
http://www.linkedin.com/in/markus17
050-8536620 / 06-50258350
Also _query_ is the right approach when using fq with 2 Boolean values. Just
make sure you double quote the "{!geofilt}" when using that.
Bill Bell
Sent from mobile
On Mar 10, 2011, at 9:33 AM, Jerry Mindek wrote:
> Hi,
>
> I am using rev 1036236 of solr trunk running as a servlet in Tomcat
Can you use 2 fq parameters ? The default op is usually set to AND.
Bill Bell
Sent from mobile
On Mar 10, 2011, at 9:33 AM, Jerry Mindek wrote:
> Hi,
>
> I am using rev 1036236 of solr trunk running as a servlet in Tomcat 7.
> The doc set is sharded over 11 shards.
> Currently, I have all th
I am not 100% sure, but why did you not use the standard config for "text"?
You are using:
-
-
-
Ca
In your first attempt, the crux of your problem was probably that you were
never closing the searcher/reader.
: Or how can I perform a query on the current state of the index from within an
: UpdateProcessor?
If you implement UpdateRequestProcessorFactory, the getInstance method is
given the S
Hi Bill
Any update..
On Thu, Feb 24, 2011 at 8:58 PM, Ahsan Iqbal wrote:
> Hi
>
> schema and document are attached.
>
>
> On Thu, Feb 24, 2011 at 8:24 PM, Bill Bell wrote:
>
>> Send schema and document in XML format and I'll look at it
>>
>> Bill Bell
>> Sent from mobile
>>
>>
>> On Feb 24, 201
I have used ram disks on slaves, since the master is already persisted.
On Sun, Feb 27, 2011 at 7:00 PM, Nick Jenkin wrote:
> You could also try using a ram disk,
> mkdir /var/ramdisk
> mount -t tmpfs none /var/ramdisk -o size=m
>
> Obviously, if you lose power you will lose everything..
>
>
On Mon, Feb 28, 2011 at 11:37 AM, Lance Norskog wrote:
This sounds like a great idea but rarely works out. Garbage collection
has to work around the data stored in memory, and most of the data you
want to hit frequently is in the index and caches. The operating
system is very smart about keeping the popular parts of the index in
memory, and there is
Or how can I perform a query on the current state of the index from
within an UpdateProcessor?
Thanks
On 2/25/11 6:30 AM, Mark wrote:
I am trying to write my own custom UpdateHandler that extends
DirectUpdateHandler2.
I would like to be able to query the current state of the index within
th
How to use this?
Bill Bell
Sent from mobile
On Feb 24, 2011, at 7:19 AM, Koji Sekiguchi wrote:
> (11/02/24 21:38), Andrés Ospina wrote:
>>
>> Hi,
>>
>> My name is Felipe and I want to keep the main index of Solr in RAM memory.
>>
>> How is that possible? I have solr 1.4
>>
>> Thank you!
>>
>>
Send schema and document in XML format and I'll look at it
Bill Bell
Sent from mobile
On Feb 24, 2011, at 7:26 AM, "Ahsan Iqbal" wrote:
> Hi
>
> To narrow down the issue I indexed a single document with one of the sample
> queries (given below) which was giving issue.
>
> *"evaluation of loa
Hi
To narrow down the issue I indexed a single document with one of the sample
queries (given below) which was giving the issue.
*"evaluation of loan and lease portfolios for purposes of assessing the
adequacy of" *
Now when I perform a search query (*TextContents:"evaluation of loan and
lease portf
(11/02/24 21:38), Andrés Ospina wrote:
Hi,
My name is Felipe and I want to keep the main index of Solr in RAM memory.
How is that possible? I have solr 1.4
Thank you!
Felipe
Welcome Felipe!
If I understand your question correctly, you can use RAMDirectoryFact
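If your build ships RAMDirectoryFactory, it is selected in solrconfig.xml; a sketch (note that the index is lost on restart, so this only suits slaves or throwaway indexes):

```xml
<!-- holds the entire index in heap instead of on disk -->
<directoryFactory name="DirectoryFactory" class="solr.RAMDirectoryFactory"/>
```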
Hi
It didn't search (meaning no results found even though results exist). One
observation is that it works well even with long phrases, but when a long
phrase contains stop words and the same stop word occurs two or more times in
the phrase, Solr can't search with the query parsed in this way.
On Wed, Feb
Hi,
What do you mean by "this doesn't work fine"? Does it not work correctly, or is
it slow, or ...
I was going to suggest you look at Surround QP, but it looks like you already
did that. Wouldn't it be better to get Surround QP to work?
Otis
Sematext :: http://sematext.com/ :: Solr - Luc
Hi All
I even tried that (Appending &hl.usePhraseHighlighter=true) but it still
does not work.
Please help
Regards
Ahsan Iqbal
On Fri, Feb 18, 2011 at 12:30 AM, Ahmet Arslan wrote:
> > I had a requirement to implement phrase proximity like ["a
> > b c" w/5 "d e f"] for
> > this i have implemen
Greg,
You need to get stopword lists for your 6 languages. Then you need to create
new field types just like that 'text' type, one for each language. Point them
to the appropriate stopwords files and instead of "English" specify each one of
your languages. You can either index each language
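One such per-language field type might look like this sketch (German shown; the stopword file name is a placeholder):

```xml
<fieldType name="text_de" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- per-language stopword list -->
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords_de.txt"/>
    <filter class="solr.SnowballPorterFilterFactory" language="German"/>
  </analyzer>
</fieldType>
```

Repeat with the appropriate stopword file and Snowball language for each of the six languages.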
Hi,
I'm following your suggestions.
Extract of your last step:
>This would give you three different configurations - you would then edit
>the zookeeper info to point each collection (essentially a SolrCore at
>this point) to the right configuration files:
>
>collections/collection1
> config=con
> I had a requirement to implement phrase proximity like ["a
> b c" w/5 "d e f"] for
> this i have implemented a custom query parser plug in which
> I make use of nested
> span queries to fulfill this requirement. Now it looks that
> documents are
> filtered correctly, but there is an issue in h
Greg,
a few things I noticed while reading your post:
1) you don't need an -assignment for fields where the name does
not change, you can just skip that. - just to name one example
2) TemplateTransformer
(http://wiki.apache.org/solr/DataImportHandler#TemplateTransformer)
has no name-attribute,
OK, I think I found some information: supposedly TemplateTransformer will
return an empty string if the value of a variable is null. Some people say to
use the regex transformer instead; can anyone clarify this? Thanks
-Original Message-
From: Greg Georges [mailto:greg.geor...@biztree.co
Yes, you need to create both a QParserPlugin and a QParser implementation.
Look at Solr's own source code for the LuceneQParserPlugin/LuceneQParser and
build it like that.
Baking the surround query parser into Solr out of the box would be a useful
contribution, so if you care to give it a litt
Anyone?
On Thu, Jan 27, 2011 at 1:27 PM, Ahson Iqbal wrote:
> Hi All
>
> I want to integrate lucene Surround Query Parser with solr 1.4.1, and for
> that I
> am writing Custom Query Parser Plugin, To accomplish this task I should
> write a
> sub class of "org.apache.solr.search.QParserPlugin" an
If this is a one-time cleanup, not something you need to do programmatically,
you could delete the index directory ( /data/index ). In my case I
have to stop Tomcat, delete .\index and restart Tomcat. It is very fast and
starts me out with a fresh, empty, index. Noticed you are multi-core, I'm
not
Use this type of URL to delete all data, followed by a commit:
http://localhost:8983/solr/update/?stream.body=*:*&commit=true
-
Grijesh
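The same thing expressed as update message bodies POSTed to /solr/update (sketch):

```xml
<!-- first request body: delete every document -->
<delete><query>*:*</query></delete>
<!-- second request body: make the deletion visible to searchers -->
<commit/>
```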
Hi Robert,
You can find an example of something similar to this in the examples
that are part of the solr distribution. The tutorial (
http://lucene.apache.org/solr/tutorial.html) describes how to post data
to the solr server via the post.jar
user:~/solr/example/exampledocs$ java -jar post
On Thu, Jan 13, 2011 at 6:08 AM, Wilson, Robert
wrote:
> We are just starting with Solr and have a multi-core implementation and need
> to delete all the rows in the index to clean things up.
>
> When running an update via a url we are using something like the following
> which works fine:
> http
Yes, that's my conclusion as well Grant.
As for the example output:
The symposium of Tg(RX3fg+and) gene studies
Should end up tokenizing to:
symposium tg the rx3fg and gene studi
Assuming I guessed right on the stemming.
Anyhow, thanks for the confirmation guys.
Matt
On 12/4/2010 8:18 PM,
On Fri, Dec 3, 2010 at 1:14 PM, Matthew Hall wrote:
> Oh, and let me add that the WordDelimiterFilter comes really close to what I
> want, but since we are unwilling to promote our solr version to the trunk
> (we are on the 1.4x) version atm, the inability to turn off the automatic
> phrase querie
Could you expand on your example and show the output you want? FWIW, you could
simply write a token filter that does the same thing as the WhitespaceTokenizer.
-Grant
On Dec 3, 2010, at 1:14 PM, Matthew Hall wrote:
> Hey folks, I'm working with a fairly specific set of requirements for our
>
> As mentioned, in the typical case it's important that the field names be
> included in the signature, but i imagine there would be cases where you
> wouldn't want them included (like a simple concat Signature for building
> basic composite keys)
>
> I think the Signature API could definitely
: Why is also the field name (* above) added to the signature
: and not only the content of the field?
:
: By purpose or by accident?
It was definitely deliberate. This way if your signature fields are
"fieldA,fieldB,fieldC" then these two documents...
Doc1:fielda:XXX
Doc1:fie
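For context, the fields list Hoss refers to is configured on the processor in solrconfig.xml; a sketch following the deduplication wiki page (chain name and signature field name are illustrative):

```xml
<updateRequestProcessorChain name="dedupe">
  <processor class="org.apache.solr.update.processor.SignatureUpdateProcessorFactory">
    <bool name="enabled">true</bool>
    <!-- the hash of the listed fields is written here -->
    <str name="signatureField">signature</str>
    <str name="fields">fieldA,fieldB,fieldC</str>
    <str name="signatureClass">org.apache.solr.update.processor.Lookup3Signature</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```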
On 29.11.2010 14:55, Markus Jelsma wrote:
>
>
> On Monday 29 November 2010 14:51:33 Bernd Fehling wrote:
>> Dear list,
>> another suggestion about SignatureUpdateProcessorFactory.
>>
>> Why can I make signatures of several fields and place the
>> result in one field but _not_ make a signature
On Monday 29 November 2010 14:51:33 Bernd Fehling wrote:
> Dear list,
> another suggestion about SignatureUpdateProcessorFactory.
>
> Why can I make signatures of several fields and place the
> result in one field but _not_ make a signature of one field
> and place the result in several fields.
Why do you want to do this? It'd be the same value, just stored in
multiple fields in the document, which seems a waste. What's
the use-case you're addressing?
Best
Erick
On Mon, Nov 29, 2010 at 8:51 AM, Bernd Fehling <
bernd.fehl...@uni-bielefeld.de> wrote:
> Dear list,
> another suggestion abo
Dear list,
another suggestion about SignatureUpdateProcessorFactory.
Why can I make signatures of several fields and place the
result in one field but _not_ make a signature of one field
and place the result in several fields?
Could this be realized without huge programming effort?
Best regards,
Bernd
Am
On 11/22/2010 5:45 PM, Mark wrote:
After I perform a delta-import on my master the slave replicates the
whole index which can be quite time consuming. Is there any way for
the slave to replicate only partials that have changed? Do I need to
change some setting on master not to commit/optimize t
I don't quite understand what you mean by that. Did you mean the TermVector
Component?
Also, I did some more digging and I found some messages on this mailing list
about filtering. From what I understand, using the standard query handler
(solr/select/?q=...) with a qt parameter allows you to filter
Try adding TFV's (term frequency vectors) to the title field as well as
the body.
On Wed, 3 Nov 2010 11:41:35 -0700 (PDT), ahammad
wrote:
> Hello,
>
> I'm trying to implement a "Related Articles" feature within my search
> application using the mlt handler.
>
> To give you a little background
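Adding term frequency vectors as suggested above is a schema.xml field attribute; a sketch (field and type names are assumptions, and a reindex is needed afterwards):

```xml
<field name="title" type="text" indexed="true" stored="true"
       termVectors="true" termPositions="true" termOffsets="true"/>
```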
Why do you want to know? If there is a specific problem you're
trying to solve, perhaps stating the problem itself will get you a
better response.
Best
Erick
On Thu, Oct 28, 2010 at 4:00 AM, Li Li wrote:
> is there anyone could help me?
>
> 2010/10/11 Li Li :
> > hi all,
> >I want to know t
is there anyone could help me?
2010/10/11 Li Li :
> hi all,
> I want to know the details of IndexReader in SolrCore. I read a
> little of the code of SolrCore. Here is my understanding, is it correct?
> Each SolrCore has many SolrIndexSearchers and keeps them in
> _searchers, and _searcher keep t
: I have a question: is it possible to perform a phrase search with wildcards
: in solr/lucene? I have two queries that both have exactly the same results; one is
: +Contents:"change market"
:
: and other is
: +Contents:"chnage* market"
:
: but i think the second should match "chages market" as w
Hi,
Lots of threads on that topic here:
http://search-lucene.com/?q=phrase+query+wildcard&fc_project=Lucene
And if you click that JIRA facet you'll see this as #1
hit: https://issues.apache.org/jira/browse/LUCENE-1486
(note: that's Lucene, not Solr)
Otis
Sematext :: http://sematext.com/ ::
Hi Ahson,
You'll really want to store an additional date field (make it a
TrieDateField type) that has only the date, and in the reverse order
from how you've shown it. You can still keep the one you've got, just
use it only for 'human viewing' rather than sorting.
Something like:
20080205 if you
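A sketch of the extra sortable date field in schema.xml (the type and field names are illustrative):

```xml
<fieldType name="tdate" class="solr.TrieDateField" precisionStep="6"
           omitNorms="true" positionIncrementGap="0"/>
<field name="sort_date" type="tdate" indexed="true" stored="false"/>
```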
On 8/16/10 1:55 AM, Yatir Ben Shlomo wrote:
> Hi!
> I am using solrCloud with tomcat5.5
> in my setup every language has its own index and its own solr filters so
> it needs separate solr configuration files.
>
> in solrCLoud examples posted here : http://wiki.apache.org/solr/SolrCloud
> I n
SOLR-237 Field collapsing
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/
- Original Message
> From: Bharat Jain
> To: solr-user@lucene.apache.org
> Sent: Fri, July 30, 2010 10:40:19 AM
>
Hi,
Thanks a lot for the info and your time. I think field collapsing will work
for us. I looked at https://issues.apache.org/jira/browse/SOLR-236, but
which file should I use for the patch? We use solr-1.3.
Thanks
Bharat Jain
On Fri, Jul 30, 2010 at 12:53 AM, Chris Hostetter
wrote:
>
> : 1. Th
: 1. There are user records of type A, B, C etc. (userId field in index is
: common to all records)
: 2. A user can have any number of A, B, C etc (e.g. think of A being a
: language then user can know many languages like french, english, german etc)
: 3. Records are currently stored as a document