Hi,
Presently OR is the default operator for search in Solr. For example, if I
search for two words separated by a space, such as abc xyz, Solr returns
all the records that contain either abc or xyz, or both. In other words, it
executes the query as abc OR xyz.
But my requirement is that it should return
Having <solrQueryParser defaultOperator="AND"/> in your schema.xml should
address your requirement.
Cheers
Avlesh
On Mon, May 11, 2009 at 12:18 PM, dabboo ag...@sapient.com wrote:
Hi,
Presently OR is the default operator for search in Solr. For example, if I am
searching for these 2 words with a
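For reference, the schema.xml fragment being suggested would look roughly like
this (a sketch based on the stock example schema, with only the operator
changed):

```xml
<!-- schema.xml: make AND the default operator for the standard query parser -->
<solrQueryParser defaultOperator="AND"/>
```

The same behavior can also be requested per-query with the q.op parameter
(e.g. q.op=AND) without touching the schema.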
Sorry, I forgot to mention in the problem description that I am trying to do
this with the dismax request handler. Without dismax it works fine, but with
dismax it does not.
Avlesh Singh wrote:
Having <solrQueryParser defaultOperator="AND"/> in your schema.xml should
address your requirement.
Cheers
Avlesh
I have 10M documents (2.9GB). There is no problem when elevate.xml is not
used, but after adding elevate.xml to SOLR_HOME/data and searching for a
configured keyword, the system becomes very slow: all of the memory (2GB JVM
heap) is used up quickly and the web container hangs.
Shalin Shekhar Mangar wrote:
On Fri, May 8, 2009 at 2:14 AM, Jonathan Mamou ma...@il.ibm.com
wrote:
SpellingQueryConverter always splits words containing special characters. I
think the issue is in the SpellingQueryConverter class, in this pattern:
Pattern.compile("(?:(?!(\\w+:|\\d+)))\\w+");
According to
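The splitting behavior is easy to reproduce outside Solr. Below is a rough
Python port of the pattern, for illustration only: the original is a Java
java.util.regex Pattern, and re.ASCII is used here to approximate Java's
default ASCII-only \w, so the exact behavior is an assumption.

```python
import re

# Approximate Python port of the SpellingQueryConverter pattern.
PATTERN = r"(?:(?!(\w+:|\d+)))\w+"

def convert(query, ascii_only=False):
    """Extract the word tokens the converter would hand to the spellchecker."""
    flags = re.ASCII if ascii_only else 0
    return [m.group(0) for m in re.finditer(PATTERN, query, flags)]

# With ASCII-only \w (similar to Java's default), non-ASCII letters split words:
print(convert("Käse", ascii_only=True))   # ['K', 'se']
# With Unicode-aware \w, the word survives intact:
print(convert("Käse"))                    # ['Käse']
# Field-qualified prefixes like "title:" are skipped by the negative lookahead:
print(convert("title:foo bar"))           # ['foo', 'bar']
```

This matches the complaint in the thread: with an ASCII-only word class, a
query like Käse is broken into fragments before it ever reaches the
spellchecker.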
Hi,
I'm facing a silly problem. Every time I restart Tomcat, all the indexes are
lost. I used all the default configurations. I'm pretty sure there must be
some basic change to fix this. I'd highly appreciate it if someone could
direct me in fixing this.
Thanks,
KK.
Hi
Is it possible to stop a full-import from a dataimport handler and if so, how?
If I stop the import or stop Jetty and restart it whilst the
full-import is taking place, will it delete the indexed data?
Thanks in Advance
Andrew
You can abort a running import with command=abort.
If you kill the Jetty process in between, Lucene would commit the uncommitted docs.
On Mon, May 11, 2009 at 3:13 PM, Andrew McCombe eupe...@gmail.com wrote:
Hi
Is it possible to stop a full-import from a dataimport handler and if so, how?
If I stop
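For completeness, DataImportHandler commands are plain HTTP requests to the
handler endpoint; the host, port, and handler path below are the usual
defaults and may differ in your setup:

```
http://localhost:8983/solr/dataimport?command=abort
http://localhost:8983/solr/dataimport?command=status
```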
Hi
Thanks. Found out the hard way that abort also removes the index :)
Regards
Andrew
2009/5/11 Noble Paul നോബിള് नोब्ळ् noble.p...@corp.aol.com:
you can abort a running import with command=abort
if you kill the jetty in between Lucene would commit the uncommitted docs
On Mon, May 11,
Hey there,
I would like to give a very low boost to the docs that match field_a = 54.
I have tried
<str name="bq">field_a:54^0.1</str>
but it's not working. In the opposite case, I mean giving a high boost
with:
<str name="bq">field_a:54^1</str>
it works perfectly. I suppose it is because I do the
Hi,
Not sure if this is what you want, but would this do what you need?
q=abstract:philosophy
&fq={!tag=p1}publisher_name:publisher1
&fq={!tag=p2}publisher_name:publisher2
&facet=true
&facet.mincount=1
&facet.field={!ex=p1 key=p2_book_title}book_title
&facet.field={!ex=p2 key=p1_book_title}book_title
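As a side note, when building such multi-select faceting URLs
programmatically, the curly-brace local params must be URL-encoded. A minimal
Python sketch (host, port, core path, and field names are just the ones
assumed in the example above):

```python
from urllib.parse import urlencode

# A list of pairs preserves repeated parameters like fq and facet.field.
params = [
    ("q", "abstract:philosophy"),
    ("fq", "{!tag=p1}publisher_name:publisher1"),
    ("fq", "{!tag=p2}publisher_name:publisher2"),
    ("facet", "true"),
    ("facet.mincount", "1"),
    # Each facet.field excludes one tagged filter and renames its output key:
    ("facet.field", "{!ex=p1 key=p2_book_title}book_title"),
    ("facet.field", "{!ex=p2 key=p1_book_title}book_title"),
]
url = "http://localhost:8983/solr/select?" + urlencode(params)
print(url)
```

urlencode escapes the {, !, and = characters of the local-params syntax so
they survive the HTTP request intact.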
With dismax, to get all terms required, set mm (minimum match) to 100%
Erik
On May 11, 2009, at 4:08 AM, dabboo wrote:
Sorry to mention in the problem that I am trying to do this with
dismax
request. Without dismax request, it is working fine but not with
dismax
request.
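The mm suggestion can be passed on the request itself (mm=100%) or baked into
the handler defaults in solrconfig.xml. A rough sketch only; the handler name
and qf fields here are purely illustrative:

```xml
<requestHandler name="dismaxrequest" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">dismax</str>
    <!-- require every query term to match -->
    <str name="mm">100%</str>
    <str name="qf">title description</str>
  </lst>
</requestHandler>
```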
Hi,
I have already done this but still I am not getting any records. However, if
I remove the qt=dismaxrequest parameter, then it works fine.
Erik Hatcher wrote:
With dismax, to get all terms required, set mm (minimum match) to 100%
Erik
On May 11, 2009, at 4:08 AM, dabboo wrote:
Sorry to
Can anybody point me in the direction of resources and/or projects regarding
the following scenario; I have a community of users contributing content to
a Solr index. By default, the user (A) who contributes a document owns it,
and can see the document in their search results. The owner can then
Hi Everyone,
I'm running Solr 1.3 and I was wondering if there's a problem with
running the snapshot scripts concurrently.
For instance, I have a cron job which performs a
snappuller/snapinstaller every minute on my slave servers. Sometimes
(for instance after an optimize), the snappuller can
I have a case where I would like to create a Solr index with the
unique-key option disabled.
I've tried commenting out the uniqueKey option and that just spits out an
error:
SEVERE: org.apache.solr.common.SolrException: QueryElevationComponent
requires the schema to have a uniqueKeyField
I've
Hi !
Is there any primary table in your view with a single unique key
you could use?
J.
2009/5/11 jcott28 jcot...@yahoo.com:
I have a case where I would like a solr index created which disables the
unique-key option.
I've tried commenting out the uniqueKey option and that just spits
Man, I hadn't even thought of that! Now I feel like an idiot! Thanks!
Erik Hatcher wrote:
If you're not using it, remove the QueryElevationComponent from
solrconfig.xml
Erik
On May 11, 2009, at 1:15 PM, jcott28 wrote:
I have a case where I would like a solr index
If you're not using it, remove the QueryElevationComponent from
solrconfig.xml
Erik
On May 11, 2009, at 1:15 PM, jcott28 wrote:
I have a case where I would like a solr index created which disables
the
unique-key option.
I've tried commenting out the uniqueKey option and that
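For reference, in the shipped example solrconfig.xml the elevator is declared
roughly as below; removing (or commenting out) this declaration, plus any
last-components entry that references it, removes the uniqueKey requirement
(names follow the example config and may differ in yours):

```xml
<searchComponent name="elevator" class="org.apache.solr.handler.component.QueryElevationComponent">
  <str name="queryFieldType">string</str>
  <str name="config-file">elevate.xml</str>
</searchComponent>
```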
After spending more time on this, it seems more likely to be a problem with
FunctionQuery. Using boost = log(100) takes 100ms, log(log(100)) adds
another 100ms, log(log(log(100))) adds another 100ms, and so on. The time
goes up almost linearly instead of staying constant. Any ideas?
Thanks,
Hello,
I used Nutch 1.0 to crawl, fetch and index a lot of files. Then I needed to
index a few additional files. I know the keywords for those files and their
locations, and I need to add them manually. I took a look at two tutorials on
the wiki, but did not find any info about this issue.
Is there a
On Mon, May 11, 2009 at 2:46 PM, Michael Ludwig m...@as-guides.com wrote:
Could you give an example of how the spellcheck.q parameter can be
brought into play to (take non-ASCII characters into account, so
that Käse isn't mishandled) given the following example:
You will need to set the
On Mon, May 11, 2009 at 3:58 PM, Andrew McCombe eupe...@gmail.com wrote:
Thanks. Found out the hard way that abort also removes the index :)
I guess you were using 1.3?
In the 1.3 release, abort stops the full-import and does not commit the
data. However, due to Lucene's limitation, the data
Please ignore my posts. Log is quite an expensive operation...
On Mon, May 11, 2009 at 11:45 AM, Guangwei Yuan guy...@gmail.com wrote:
After spending more time on this, it seems more likely a problem from
FunctionQuery. If using boost = log(100) takes 100ms, log(log(100)) adds
another
Shalin,
Here is what I've read on maxMergeDocs:
"While merging segments, Lucene will ensure that no segment with more
than maxMergeDocs is created."
Wouldn't that mean that no index file should contain more than maxMergeDocs
documents? I guess the index files could also just contain the index
information
Hi,
I want to make my system fault tolerant. My system has two shards,
each with one master and two slaves. So if any slave or master fails,
I want the system to continue working.
Are there any known solutions to this? Does Solr provide any such
functionality yet?
Thanks.
Thanx.
Why can't you simply index a field authorized-to with value user-B
and enrich any query you receive from a user with a mandatory query
for that authorization?
paul
On 11 May 2009, at 17:50, Terence Gannon wrote:
Can anybody point me in the direction of resources and/or projects
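Paul's suggestion amounts to storing an ownership/authorization field on each
document at index time and appending a mandatory filter query for the
requesting user at query time. Schematically (field and user names here are
hypothetical):

```
index time:  authorized-to = userA
query time:  q=<user query>&fq=authorized-to:userA
```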
On Mon, May 11, 2009 at 4:55 AM, ant dormant.m...@gmail.com wrote:
I have 10M document, 2.9GB,Not to use the elevate.xml when there is no
problem, adding the elevate.xml in SOLR_HOME/data , to search the
configured key word , the system will be very slow, all of memory be
used(JVM 2GB ) soon
On Tue, May 12, 2009 at 2:30 AM, vivek sar vivex...@gmail.com wrote:
Here is what I've read on maxMergeDocs,
While merging segments, Lucene will ensure that no segment with more
than maxMergeDocs is created.
Wouldn't that mean that no index file should contain more than max
docs? I guess
Fault tolerance is achieved using external load balancing. You can
use an external hardware load balancer, or a simple one like
http://wiki.apache.org/solr/LBHttpSolrServer for Java or
http://code.google.com/p/solr-php-client/ for PHP.
On Tue, May 12, 2009 at 3:38 AM, mirage1987