Ok, thanks. I'm studying the RAM buffer/MergePolicy nexus as we speak.
I hereby name the function "minimum number of coins and bills needed
to represent a number" as its "change log".
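(For the curious, the pun refers to the classic change-making function; a minimal greedy sketch in Java, assuming a canonical denomination system like US currency, where greedy happens to be optimal:)

```java
public class ChangeLog {
    // Greedy "change log": minimum number of coins/bills needed to
    // represent an amount, given denominations in descending order.
    // Correct for canonical systems (e.g. US currency), not arbitrary ones.
    static int minCoins(int amount, int[] denominations) {
        int count = 0;
        for (int d : denominations) {
            count += amount / d;  // take as many of this denomination as fit
            amount %= d;          // carry the remainder to smaller denominations
        }
        return count;
    }

    public static void main(String[] args) {
        int[] usd = {100, 50, 20, 10, 5, 1};
        System.out.println(minCoins(287, usd)); // 8
    }
}
```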
On Tue, Apr 6, 2010 at 2:08 AM, Michael McCandless wrote:
> Actually this isn't quite right.
>
> Lucene flushes
Is it possible to search with Solr over indexes created directly using Lucene
(not posted through Solr)?
Thanks,
Joe
--
View this message in context:
http://n3.nabble.com/Searching-Lucene-Indexes-with-Solr-tp701882p701882.html
Sent from the Solr - User mailing list archive at Nabble.com.
What would be the best way to do range bucketing on a price field?
I'm sort of taking the example from the Solr 1.4 book and I was thinking
about using a PatternTokenizerFactory with a SynonymFilterFactory.
Is there a better way?
Thanks
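One approach that avoids custom tokenizer/synonym tricks entirely: express each price bucket as a facet.query range, which Solr 1.4 already supports. A sketch of the request parameters, assuming the field is named `price` (field name and bucket boundaries are assumptions):

```
q=*:*&facet=true
&facet.query=price:[* TO 9.99]
&facet.query=price:[10 TO 49.99]
&facet.query=price:[50 TO 99.99]
&facet.query=price:[100 TO *]
```

Each facet.query comes back with its own count, so the buckets can change per request without reindexing, which the analysis-chain approach cannot do.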
Thanks Eric, Chris!
I tried the Query Elevation and it seems to be working fine for me.
Best Rgds,
Mark.
On Mon, Apr 5, 2010 at 7:40 PM, Chris Hostetter wrote:
>
> : If that's the case, you could copy the magic keyword to a different field
> : (say magic_keyword) and boost it right into orbit a
On 06.04.2010 17:49 Alexander Rothenberg wrote:
> On Monday 05 April 2010 20:14:44 Chris Hostetter wrote:
>> define "crashes"? ... presumably you are talking about the client crashing
>> because it can't parse the error response, correct? ... the best suggestion
>> given the current state of Solr is
I am going through some of my DIH verbose output and I noticed that for each
sub-entity it appears to be querying the DB multiple times, and the number of
queries keeps increasing in a linear fashion!
For example:
.
select * from item_categories where item_id=1
...
.
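That per-row pattern is how DIH's nested entities behave by default: the child query runs once per parent row. One documented way around it in Solr 1.4 is CachedSqlEntityProcessor, which runs the child query once and joins in memory. A hedged sketch (entity, table, and column names here are assumptions based on the query shown):

```xml
<entity name="item" query="select * from items">
  <entity name="item_category"
          processor="CachedSqlEntityProcessor"
          query="select * from item_categories"
          where="item_id=item.id"/>
</entity>
```

The trade-off is memory: the whole item_categories result set is cached for the duration of the import.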
Hi,
I am using Solr 1.4.
I have an issue with Solr indexing large PDF files (> 5MB but < 10MB).
I have set the:
properties in solrconfig.xml.
The exception I get is:
SEVERE: org.apache.solr.common.SolrException: org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException fro
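If the property being raised is the upload limit, the usual knob in solrconfig.xml is multipartUploadLimitInKB on the request parsers; a sketch (the 20480 value is an assumption, sized for files up to ~20MB):

```xml
<requestDispatcher handleSelect="true">
  <requestParsers enableRemoteStreaming="true"
                  multipartUploadLimitInKB="20480"/>
</requestDispatcher>
```

Note the caveat: TIKA-198 is a Tika-side parsing failure, so raising the upload limit may only get the file as far as Tika, not past it.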
On Monday 05 April 2010 20:14:44 Chris Hostetter wrote:
> define "crashes"? ... presumably you are talking about the client crashing
> because it can't parse the error response, correct? ... the best suggestion
> given the current state of Solr is to make the client smart enough to not
> attempt pars
Before digging through src ...
Docs say ... "Every component can have an extra attribute enable which can be
set as true/false."
It doesn't seem that listeners are part of PluginInfo scheme though ... for
example is this possible?
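For components that do participate in the PluginInfo scheme, the documented pattern looks like this; whether <listener> honors it is exactly the open question, so this sketch only shows the attribute's usual form (the system-property substitution is an assumption):

```xml
<searchComponent name="elevator"
                 class="org.apache.solr.handler.component.QueryElevationComponent"
                 enable="${elevator.enabled:false}"/>
```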
Mark,
It's confirmed... it's been more than 20 hrs since I completed indexing, but
the JVM has yet to release the memory it used for indexing.
I will try to use jconsole and will provide some detailed output.
Thanks,
Barani
: For example I have product listings and I want to be able to filter out
: mature items by default. To do this I added:
I've never done this before, but you might find something like this more
to your liking than what you've currently got...
_query_:"{!lucene df=mature v=$mature}" ma
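For making the filter on by default (while still overridable per request), an appends filter query on the handler is the usual idiom; a sketch assuming a boolean `mature` field, as in the example above:

```xml
<requestHandler name="standard" class="solr.SearchHandler">
  <lst name="appends">
    <str name="fq">-mature:true</str>
  </lst>
</requestHandler>
```

Entries under "appends" are added to every request to that handler, so clients that should see mature items would need a separate handler (or the parameter-substitution approach sketched above).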
hi,
I found the elevate query working fine with the dismax handler when I added
the searchComponent to my dismax RH.
I couldn't find the desired results when trying with the standard
RequestHandler. I hope it works just like that with the standard RH also.
Thanks and Rgds,
Mark.
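For the standard RequestHandler, the elevator component has to be wired in the same way as for dismax; a sketch using last-components (the component name "elevator" is assumed to match the searchComponent definition in solrconfig.xml):

```xml
<requestHandler name="standard" class="solr.SearchHandler" default="true">
  <arr name="last-components">
    <str>elevator</str>
  </arr>
</requestHandler>
```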
Hi,
I have an existing web application which is using Lucene (v2.1.0 and/or
v2.4.x) and which I'd like to gradually migrate to Solr.
I am already using multiple cores, master/slave replication and SolrJ
to re-implement current functionalities.
One use case I have is: backup/restore indexes.
I a
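For the backup part specifically, Solr 1.4's ReplicationHandler exposes a backup command over HTTP that snapshots the index directory; a sketch (host, port, and core name are assumptions):

```
http://localhost:8983/solr/core0/replication?command=backup
```

This only covers backup; restore in 1.4 is typically a matter of copying the snapshot back into the data directory while the core is down.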
On 4/5/2010 8:43 PM, Mark Miller wrote:
On 04/05/2010 10:12 PM, Chris Hostetter wrote:
: The best you have to work with at the moment is Xincludes:
:
: http://wiki.apache.org/solr/SolrConfigXml#XInclude
:
: and System Property Substitution:
:
: http://wiki.apache.org/solr/SolrConfigXml#System_pr
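A sketch of the XInclude pattern described on that wiki page, pulling a shared fragment into solrconfig.xml (the included filename is an assumption):

```xml
<config xmlns:xi="http://www.w3.org/2001/XInclude">
  <xi:include href="common-handlers.xml"/>
</config>
```

Combined with system-property substitution (e.g. `${solr.data.dir:}`), this is about as much config reuse as the current state of Solr supports.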
Hi, I'm new to using Apache Nutch and Solr... does anyone on the list have
experience indexing Nutch crawls into Solr? The main problem is that, e.g.,
PDF documents crawled by Nutch (along with the other content from the crawled
site) aren't queryable after Solr indexing... e.g.
query in nutch:
bin/nutch
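For reference, the usual Nutch-to-Solr handoff in Nutch 1.x is the solrindex command; a sketch (the Solr URL and crawl directory paths are assumptions):

```
bin/nutch solrindex http://localhost:8983/solr crawl/crawldb crawl/linkdb crawl/segments/*
```

If PDFs aren't queryable afterwards, it's worth checking that Nutch's Solr schema (field names like `content`, `title`, `url`) matches the schema.xml on the Solr side.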
Hello again ;)
I've got new trouble with my import.
A cronjob starts an import of 4 cores, from one table, every 2 hours. Does
that make sense? I don't think so.
We have 2 servers, one playground and one live server. So over the weekend
my imports started and I got an ArrayIndexOutOfBoundsException.
Actually this isn't quite right.
Lucene flushes a new segment whenever RAM is full (not every 5 docs if
mergeFactor is 5).
Whereas mergeFactor decides how many segments of roughly the same size
are merged at once.
So eg if you index 42 docs, unless the docs are immense (or, are not
indexed in a
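In solrconfig.xml terms, the two knobs being distinguished above are ramBufferSizeMB (when a new segment gets flushed) and mergeFactor (how many roughly same-sized segments are merged at once); a sketch with the stock Solr 1.4 defaults:

```xml
<indexDefaults>
  <ramBufferSizeMB>32</ramBufferSizeMB>
  <mergeFactor>10</mergeFactor>
</indexDefaults>
```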
Hi Lance
Thanks for this. The wiki definitely isn't clear about this. I will
test this tonight.
Regards
Andrew
On 5 April 2010 23:04, Lance Norskog wrote:
> The MailEntityProcessor is an "extra" and does not come normally with
> the DataImportHandler. The wiki page should mention this.
>
> In