Are there any known issues that may cause the index sync between the master and
slave to become abnormal?
And is there any API to call to force sync the index between the master and
slave, or force to delete the old index on the slave?
Hi,
I want to do a geo query with LocalSolr. However, it seems it supports only
miles when calculating distances. Is there a quick way to use this search
component with Solr using km instead?
The other thing is that I want it to calculate distances starting from 500
meters up. How could I do this?
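One common workaround, assuming LocalSolr interprets its radius parameter in miles: convert metric distances before building the query. The helper names below are made up for illustration.

```python
# Hypothetical helpers: LocalSolr's radius is assumed to be in miles,
# so convert km / meters on the client side before issuing the query.
MILES_PER_KM = 0.621371

def km_to_miles(km: float) -> float:
    """Convert kilometres to miles for LocalSolr's radius parameter."""
    return km * MILES_PER_KM

def meters_to_miles(m: float) -> float:
    """Convert meters to miles (e.g. the 500 m minimum mentioned above)."""
    return km_to_miles(m / 1000.0)

# A 500 m starting radius becomes roughly 0.31 miles.
radius = meters_to_miles(500)
```

Distances returned by LocalSolr would need the inverse conversion (divide by the same constant) before displaying them in km.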
--
Chho
Data set: About 4,000 log files (will eventually grow to millions). Average
log file is 850k. Largest log file (so far) is about 70MB.
Problem: When I search for common terms, the query time goes from under 2-3
seconds to about 60 seconds. TermVectors etc are enabled. When I disable
highlig
This is a DIH plug-in that lets you search Solr directly in the processing chain.
https://issues.apache.org/jira/browse/SOLR-1499
You can fetch a database record, search Solr, then search the DB again
using the return values.
Lance
On Tue, Jul 20, 2010 at 1:35 PM, Travis Low wrote:
> I have a l
Lots of things. But nobody can guess until you've provided more details.
How big is your index?
How much memory do you give the JVM?
What were you doing when the error occurred?
Are you sorting over many unique terms?
Are you simultaneously updating your index?
etc.
Perhaps reviewing this would be
Well, 1147 is still open and none of the comments indicate it's been
applied, so
no. And there are no Subversion commits...
Is 1.4 nightly stable? I can't answer that. It's stable enough to pass all
the unit tests,
but that's not a strong endorsement...
Patches are applied to the source code, then
"Nomerge" has struck me as somewhat uncontrollable. There is also a
"balanced" merge policy in the trunk, courtesy of LinkedIn.
On Mon, Jul 19, 2010 at 12:43 PM, Burton-West, Tom wrote:
> Hi Ken,
>
> This is all very dependent on your documents, your indexing setup and your
> hardware. Just as a
https://issues.apache.org/jira/browse/LUCENE-2055
On Tue, Jul 20, 2010 at 7:01 PM, Blargy wrote:
>
> Perfect!
>
> Is there an associated JIRA ticket/patch for this so I can patch my 4.1
> build?
Perfect!
Is there an associated JIRA ticket/patch for this so I can patch my 4.1
build?
--
View this message in context:
http://lucene.472066.n3.nabble.com/Stemming-tp982690p982786.html
Sent from the Solr - User mailing list archive at Nabble.com.
http://wiki.apache.org/solr/LanguageAnalysis#solr.StemmerOverrideFilterFactory
On Tue, Jul 20, 2010 at 5:53 PM, Blargy wrote:
>
> I am using the LucidKStemmer and I noticed that it doesn't stem certain
> words... for example "bags". How could I create a list of explicit words to
> stem... i.e. sort
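Following the link above, a minimal sketch of what the analyzer chain might look like in schema.xml. The field type name, dictionary file name, and filter ordering here are assumptions, not confirmed by the thread:

```xml
<fieldType name="text_stem" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- stemdict.txt holds tab-separated "word<TAB>stem" pairs, e.g. "bags	bag";
         terms matched here are stemmed per the dictionary and protected
         from the stemmer that follows -->
    <filter class="solr.StemmerOverrideFilterFactory"
            dictionary="stemdict.txt" ignoreCase="true"/>
    <!-- your existing stemmer (e.g. the LucidKStemmer filter) goes here -->
  </analyzer>
</fieldType>
```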
Hi,
I was wondering if anyone has found any resolution to this email thread?
Thank you,
Siva Kommuri
--
View this message in context:
http://lucene.472066.n3.nabble.com/SolrCore-has-a-large-number-of-SolrIndexSearchers-retained-in-infoRegistry-tp483900p982700.html
Sent from the Solr - User mailing list archive at Nabble.com.
I am using the LucidKStemmer and I noticed that it doesn't stem certain
words... for example "bags". How could I create a list of explicit words to
stem... i.e. sort of the opposite of protected words.
I know this can be accomplished using the synonyms file but I want to know
how to just replace one
Hi,
I was wondering if anyone has found any resolution to this email thread?
Thank you,
Siva Kommuri
On Dec 23, 2009, at 10:15 AM, Jon Poulton wrote:
Hi there,
I'm looking at some problems we are having with some legacy code which uses
Solr (1.3) under the hood. We seem to get repeated OutOfMem
I have a large database table with many document records, and I plan to use
SOLR to improve the searching for the documents.
The twist here is that perhaps 50% of the records will originate from
outside sources, and sometimes those records may be updated versions of
documents we already have. Cur
Hi Andrew,
the whole Tomcat shouldn't fail on restart if only one core fails.
We are using the setup described here:
http://wiki.apache.org/solr/SolrTomcat
With the help of several different Tomcat Context xml files (under
conf/Catalina/localhost/) the cores should be independent webapps:
A diffe
Use SpanFirstQuery
: I need to make sure that documents with the search term occurring
: towards the beginning of the document are ranked higher.
:
: For example,
:
: Search term : ox
: Doc 1: box fox ox
: Doc 2: ox box fox
:
: Result: Doc2 will be ranked higher than Doc1.
:
: The solution I
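SpanFirstQuery is part of the Lucene (Java) API, so the real fix lives in custom query code. As a language-neutral illustration of the effect it produces (this toy sketch is not Lucene itself), favour documents where the term appears closer to the start:

```python
# Toy illustration: score documents higher when the search term occurs
# nearer the beginning, roughly the effect of wrapping the term in a
# SpanFirstQuery with a small "end" position.
def position_score(doc: str, term: str) -> float:
    tokens = doc.split()
    for pos, tok in enumerate(tokens):
        if tok == term:
            return 1.0 / (1 + pos)  # earlier match -> higher score
    return 0.0

docs = {"Doc 1": "box fox ox", "Doc 2": "ox box fox"}
ranked = sorted(docs, key=lambda d: position_score(docs[d], "ox"), reverse=True)
# matches the example in the question: Doc 2 ranks above Doc 1
```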
Sorry for such a late reply...
: I still don't understand why updateHandler is called after
: searchExecutor when updateHandler has the possibility of
: adding/submitting to searchExecutor.
...i believe you are correct, this does look like a race condition bug.
I've opened SOLR-2008 to tra
: I am using a function query to tweak my regular query search score, so
: search query outputs regular query score modified by some function query. Is
: there a way to also obtain a score from regular query?
Not at the moment, but there have been wide and varied discussions in the
past of how t
I'm still having trouble with this. My program will run for a while, then
hang up at the same place. Here is my add/commit process:
I am using StreamingUpdateSolrServer with queue size = 100 and num threads =
3. My indexing process spawns 8 threads to process a subset of RSS feeds
which each th
Setting aside for a moment my opinion that trying to do cutoffs relative to
the max score is a bad idea in general...
1) You're definitely not going to know the "topScore" until all of the
matching docs are collected.
2) Solr really doesn't make it easy to plug in a custom Comparator - there
i
On Jul 20, 2010, at 6:14 AM, Bilgin Ibryam wrote:
So I assume that storing each entity field as a separate index field is
correct, since they will get different scoring.
Just to get the terminology right... to use dismax, *index* each field
separately. Whether a field is *stored* or no
try something like this:
q.alt=*:*&fq=keyphrase:hotel
though if you don't need to query across multiple fields, dismax is
probably not the best choice
On Tue, Jul 20, 2010 at 4:57 AM, olivier sallou
wrote:
> q will search in defaultSearchField if no field name is set, but you can
> specify in you
> Is it possible to use dismax query parser using solrJ,
> since this is how I'm going to access solr?
Sure it is possible. SolrQuery.setQueryType("dismax") is equal to
&defType=dismax.
More permanent way: You can define your dismax parameters (fields and boost
weights) in solrconfig.xml, and
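A sketch of that "more permanent way": dismax defaults declared in solrconfig.xml. The handler name, field names, and boost weights below are made up for illustration; adjust them to your schema.

```xml
<requestHandler name="/search" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">dismax</str>
    <!-- hypothetical fields and boosts -->
    <str name="qf">name^10 category^5 description^1</str>
  </lst>
</requestHandler>
```

With this in place, SolrJ clients only need to set the query string and the handler path.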
Sorry for the lack of details. I'm up and running now - before I was
accidentally using some nightly snapshot.
On Jul 19, 2010, at 10:49 PM, Chris Hostetter
wrote:
: I'm trying to enable clustering in solr 1.4. I'm following these
instructions:
:
: http://wiki.apache.org/solr/Clustering
q will search in defaultSearchField if no field name is set, but you can
specify in your "q" param the fields you want to search into.
Dismax is a handler where you can specify to look in a number of fields for
the input query. In this case, you do not specify the fields and dismax will
look in th
Ok, I have found a big bug in my indexing script. Things are getting
better. I managed to get my parsed_filter_query to:
+coords_lat_lon_0_latLon:[48.694179707855874 TO 49.01213545059667]
+coords_lat_lon_1_latLon:[2.1079512793239767 TO 2.5911832073858765]
For the record, here are the parameter
Thanks for the answers guys.
So I assume that storing each entity field as a separate index field is
correct, since they will get different scoring.
Is it possible to use dismax query parser using solrJ, since this is how I'm
going to access solr?
On Tue, Jul 20, 2010 at 10:46 AM, MitchK w
Here you can find params and their meanings for the dismax-handler.
You may not find anything in the wiki by searching for a parser ;).
Link: http://wiki.apache.org/solr/DisMaxRequestHandler
Kind regards
- Mitch
Erik Hatcher-4 wrote:
>
> Consider using the dismax
Hi
Sorry, it wasn't very clear, was it?
Yes, I use a 'template' core that isn't used and create a copy of this on
the command line. I then edit the newcore/conf/solrconfig.xml and set the
data path, add data-import sections etc and then I edit the
solr.home/solr.xml and add the core name & dir
Hi Andrew,
I didn't correctly understand what you are trying to do with 'copying'?
Just use one core as a template or use it to replicate data?
You can reload only one application via:
http://localhost/manager/html/reload?path=/yourapp
(if you do this often you need to increase the PermGen space)
Hi
We have a few cores set up for separate sites and one of these is in use
constantly. When I add a new core I currently copy one of the other
cores and rename it, change the conf etc., and then reload Solr via the
Tomcat manager. However, if something goes wrong then the other core
If you can, then don't store the price as 0; then by sorting it will come
last (if sortMissingLast=true is used).
Alternatively, you can create a different field for sorting and, for an empty
price, don't set anything in that field.
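A sketch of the second suggestion in schema.xml terms (the names here are illustrative): a separate sort field that is simply left unset when there is no price, with missing values sorting last.

```xml
<!-- in the <types> section: -->
<fieldType name="sfloat" class="solr.SortableFloatField" sortMissingLast="true"/>

<!-- in the <fields> section; leave price_sort unset for docs with no price -->
<field name="price_sort" type="sfloat" indexed="true" stored="false"/>
```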
--
View this message in context:
http://lucene.472066.n3.nabble.com/set-field-with-value
I have applied patch #236 for collapsing in Solr 1.3 and am getting the count
after collapse. My pagination is also working fine.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Handling-Pagination-when-using-the-collapsing-feature-tp980637p980745.html
Sent from the Solr - User mailing list archive at Nabble.com.
Consider using the dismax query parser instead. It has more
sophisticated capability to spread user queries across multiple fields
with different weightings.
Erik
On Jul 20, 2010, at 4:34 AM, Bilgin Ibryam wrote:
Hi all,
I have two simple questions:
I have an Item entity with id
Hi all,
I have two simple questions:
I have an Item entity with id, name, category and description fields. The
main requirements is to be able to search in all the fields with the same
string and different priority per field, so matches in name appear before
category matches, and they appear befo
It sounds like the best solution here, right.
However, I do not want to exclude the possibility of doing things one
*should* do in different cores with different configurations and schema.xml
in one core.
I haven't completely read the lucidimagination article, but I would suggest
you do your wo
On 20/07/2010 04:18, Lance Norskog wrote:
Add the debugQuery=true parameter and it will show you the Lucene
query tree, and how each document is evaluated. This can help with the
more complex queries.
Do you see something wrong?
[debug] => Array
(
[rawquerystring] =>
Hi,
The current collapsing feature gives the count of the hits for a query.
What collapsing actually is supposed to do is return the count of records
returned grouped on a particular field. Pagination with total number of hits
is not possible.
Is there a workaround in collapsing which will do s