Perfect!
Is there an associated JIRA ticket/patch for this so I can patch my 4.1
build?
I am using the LucidKStemmer and I noticed that it doesn't stem certain
words... for example "bags". How could I create a list of explicit words to
stem... i.e., sort of the opposite of protected words.
I know this can be accomplished using the synonyms file but I want to know
how to just replace one
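One possible approach, assuming a Lucene/Solr version that ships
StemmerOverrideFilterFactory (it is not in the 1.4-era releases), with a
hypothetical stemdict.txt of tab-separated word/stem pairs such as
"bags<TAB>bag":

  <filter class="solr.StemmerOverrideFilterFactory" dictionary="stemdict.txt"
          ignoreCase="true"/>

Placed before the stemmer in the analyzer chain, it rewrites the listed
words to the stems you supply and marks them so the stemmer leaves them
alone.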
Never mind. Apparently my IDE (NetBeans) was set to "No encoding"... wtf.
Changed it to UTF-8 and recreated the file and all is good now. Thanks!
How can I tell and/or create a UTF-8 synonyms file? Do I have to instruct
Solr that this file is UTF-8?
Thanks for the reply but that didn't help.
Tomcat is accepting foreign characters, but for some reason when it reads the
synonyms file and it encounters the character ñ it doesn't appear correctly
in the Field Analysis admin. It shows up as �. If I query exactly for ñ it
will work but the synonyms
I am trying to add the following synonym while indexing/searching
swimsuit, bañadores, bañador
I tested searching for "bañadores"; however, it didn't return any results.
After further inspection I noticed in the field analysis admin that swimsuit
gets expanded to ba�adores. Not sure if it will sh
Oh... I didn't know about the different signatures to tf. Thanks for that
clarification.
It sounds like all I need to do is actually override tf(float) in the
SweetSpotSimilarity class to delegate to baselineTF just like tf(int) does.
Is this correct?
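A minimal sketch of that (method names from the 2.9-era
SweetSpotSimilarity javadocs; verify against your version):

  import org.apache.lucene.misc.SweetSpotSimilarity;

  public class PhraseSweetSpotSimilarity extends SweetSpotSimilarity {
      @Override
      public float tf(float freq) {
          // sloppy-phrase frequencies arrive via tf(float); send them to
          // the same baseline function that tf(int) already uses
          return baselineTf(freq);
      }
  }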
Thanks
I've asked this question in the past without too much success. I figured I
would try to revive it.
Is there a way I can incorporate boost functions with a MoreLikeThis search?
Can it be accomplished at the MLT request handler level or would I need to
create a custom request handler which in turn
Can someone explain what the createWeight methods should do?
And would someone mind explaining what the hashCode method is doing in this
use case?
public int hashCode() {
    int h = a.hashCode();
    h ^= (h << 13) | (h >>> 20); // shift-xor mixing spreads the bits and
    h += b.hashCode();           // makes the combination order-sensitive,
    h ^= (h << 23) | (h >>> 10); // so equal sub-parts in a different order
    h += n                       // still tend to hash differently
Is there any way to override/change up the default PhraseQuery class that is
used... similar to how you can change out the Similarity class?
Let me explain what I am trying to do. I would like to override how the TF is
calculated... always returning a max of 1 for phraseFreq.
For example:
Query: "fo
Here is a screen shot for our cache from New Relic.
http://s4.postimage.org/mmuji-31d55d69362066630eea17ad7782419c.png
Query cache: 55-65%
Filter cache: 100%
Document cache: 63%
Cache size is 512 for above 3 caches.
How do I interpret this data? What are some optimal configuration changes
give
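For reference, these caches are sized in solrconfig.xml; a sketch with
illustrative values (autowarmCount is a tunable, not a recommendation, and
documentCache cannot be autowarmed):

  <filterCache class="solr.FastLRUCache" size="512" initialSize="512"
               autowarmCount="256"/>
  <queryResultCache class="solr.LRUCache" size="512" initialSize="512"
                    autowarmCount="256"/>
  <documentCache class="solr.LRUCache" size="512" initialSize="512"/>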
iorixxx wrote:
>
> CustomSimilarityFactory that extends
> org.apache.solr.schema.SimilarityFactory should do it. There is an example
> CustomSimilarityFactory.java under src/test/org...
>
This is exactly what I was looking for... this is very similar (no pun
intended ;)) to the updateProcess
iorixxx wrote:
>
> it is in schema.xml:
>
>
>
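(The quoted snippet was stripped by the archive; it was presumably the
global similarity element that goes at the bottom of schema.xml, e.g.:

  <similarity class="com.example.CustomSimilarityFactory"/>

where the class extends Similarity or SimilarityFactory.)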
How would you configure the tfBaselineTfFactors and LengthNormFactors when
configuring via schema.xml? Do I have to create a subclass that hardcodes
these values?
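A sketch of that subclass (the 1.4-era <similarity> element takes no
parameters, so hardcoding appears to be the way; values are illustrative):

  import org.apache.lucene.misc.SweetSpotSimilarity;

  public class TunedSweetSpotSimilarity extends SweetSpotSimilarity {
      public TunedSweetSpotSimilarity() {
          // setter names from the 2.9-era javadocs; setLengthNormFactors
          // gained a discountOverlaps argument at some point, so check
          // the signature in your version
          setBaselineTfFactors(1.5f, 0f);          // base, min
          setLengthNormFactors(2, 10, 0.5f, true); // min, max, steepness,
                                                   // discountOverlaps
      }
  }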
iorixxx wrote:
>
> it is in schema.xml:
>
>
>
Thanks. I'm guessing this is all or nothing... i.e., you can't use one
similarity class for one request handler and another for a separate request
handler. Is that correct?
Would someone mind explaining how this differs from the DefaultSimilarity?
Also how would one replace the use of the DefaultSimilarity class with this
one? I can't seem to find any such configuration in solrconfig.xml.
Thanks
Yonik Seeley-2-2 wrote:
>
> Depends on the larger context of what you are trying to do.
> Do you still want the idf and length norm relevancy factors? If not,
> use a filter, or boost the particular clause with 0.
>
I do want the other relevancy factors... i.e., boost, phrase-boosting etc.,
but I j
Can someone explain how I can override the default behavior of the tf
contributing a higher score for documents with repeated words?
For example:
Query: "foo"
Doc1: "foo bar" score 1.0
Doc2: "foo foo bar" score 1.1
Doc2 contains "foo" twice so it is scored higher. How can I override this
behavi
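A minimal sketch, assuming the 1.4-era Similarity API (Similarity.tf(int)
routes through tf(float), so this caps sloppy-phrase frequencies as well):

  import org.apache.lucene.search.DefaultSimilarity;

  public class CappedTfSimilarity extends DefaultSimilarity {
      @Override
      public float tf(float freq) {
          // any number of occurrences scores like a single occurrence
          return freq > 0f ? 1.0f : 0.0f;
      }
  }

and point schema.xml at it with
<similarity class="com.example.CappedTfSimilarity"/> (class name
illustrative).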
Muneeb Ali wrote:
>
> Hi Blargy,
>
> Nice to hear that I am not alone ;)
>
> Well we have been using Hadoop for other data-intensive services, those
> that can be done in parallel. We have multiple nodes, which are used by
> Hadoop for all our MapReduce jobs. I pe
Muneeb,
Seems like we are in the same boat. Our index consists of 5M records which
is roughly 30 gigs. All in all that's not too bad; however, our
indexing process (we use DIH but I'm now revisiting that idea) takes a
whopping 30+ hours!!!
I just bought the Hadoop In Action early edition b
Huh? I've read through the wiki (see
http://wiki.apache.org/solr/LocalParams) but I still don't understand its
utility.
Can someone explain to me why this would even be used? Any examples to help
clarify? Thanks!
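For what it's worth, the canonical use is switching the query parser or
setting parameters for a single parameter value inline, e.g.
(illustrative):

  q={!dismax qf="title^2 description"}ipod

which parses just that q with dismax instead of the default lucene parser.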
It seems that when importing via DIH the "Total Documents Processed" status
message does not appear when there are two entities for a given document. Is
this by design?
Does anyone know a solution to this problem? I've already tried
autoReconnect=true and it doesn't appear to help. This happened 34 hours
into my full-import... ouch!
org.apache.solr.handler.dataimport.DataImportHandlerException:
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last p
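A sketch of the dataSource line I would try next (parameter names from the
Connector/J documentation; values illustrative):

  <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
      url="jdbc:mysql://host/db?autoReconnect=true&amp;socketTimeout=0&amp;netTimeoutForStreamingResults=3600"
      batchSize="-1" readOnly="true"/>

netTimeoutForStreamingResults matters here because batchSize="-1" puts the
driver in streaming mode, which uses its own network timeout (600 seconds
by default).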
Otis Gospodnetic-2 wrote:
>
> You may want to try the RPM tool, it will show you what inside of that
> QueryComponent is really slow.
>
We are already using it :)
What should I be concentrating on? Transaction traces?
Otis Gospodnetic-2 wrote:
>
> Smaller merge factor will make things worse -
>
- Whoops... I guess I'll change it from 5 to the default 10
> first step is to do an &debugQuery=true and see where the time is
> going on the server-side. If you're doing highlighting of a stored
> field, that can be a biggie. The timings will be in the debug output
> - be sure to look at both sections of the timings.
>
Looks like the majori
> Blargy - Please try to quote the mail you're responding to, at least
> the relevant piece. It's nice to see some context to the discussion.
No problem ;)
Depends - if you optimize the index on the master, then the entire index is
replicated. If you simply commit and let Luc
Is there an alternative for highlighting on a large stored field? I thought
for highlighting you needed the field stored? I really just need the
excerpting feature for highlighting relevant portions of our item
descriptions.
Not sure if this is because of the index size (17.5G) or because of
high
After indexing our item descriptions our index grew from around 3 gigs to
17.5, and I can see our search has deteriorated from sub-50ms searches to
over 500ms now. The sick thing is I'm not even searching across that field
at the moment, but I plan to in the near future as well as include
highlig
Sorry for the repost, but I posted under DismaxRequestHandler when I should
have listed it as DismaxQueryParser... i.e., I'm using defType=dismax.
I have a title field and a description field. I am searching across both
fields but I don't want description matches unless they are within some slop
of each
I have a title field and a description field. I am searching across both
fields but I don't want description matches unless they are within some slop
of each other. How can I query for this? It seems that I'm getting back crazy
results when there are matches that are nowhere near each other
Ok, that makes perfect sense.
"What I did was use a combination of the two running the indexed terms
through " - I initially read this as: you used your current index and used
the terms from that to build up your dictionary.
Thanks for the reply Michael. I'll definitely try that out and let you know
how it goes. Your solution sounds similar to the one I've read here:
http://www.lucidimagination.com/blog/2009/09/08/auto-suggest-from-popular-queries-using-edgengrams/
There are some good comments in there too.
I think
How can I preserve phrases for either autosuggest/autocomplete/spellcheck?
For example, we have a bunch of product listings and I want, if someone types
"louis", for it to come up with "Louis Vuitton"; "World" ... "World Cup".
Would I need n-grams? Shingling? Thanks
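One shingle + edge-n-gram combination, sketched under the assumption that
suggestions are served from a dedicated field (fieldType name and gram
sizes illustrative):

  <fieldType name="autocomplete" class="solr.TextField" positionIncrementGap="100">
    <analyzer type="index">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.ShingleFilterFactory" maxShingleSize="2" outputUnigrams="true"/>
      <filter class="solr.EdgeNGramFilterFactory" minGramSize="1" maxGramSize="25"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.KeywordTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>

Shingling keeps "louis vuitton" together as one term, and the edge n-grams
make every prefix of it ("l", "lo", ... "louis v", ...) match at query time.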
I'm trying to apply this via the command line "patch -p0 < SOLR-1316.patch".
When patching against trunk I get the following errors.
~/workspace $ patch -p0 < SOLR-1316.patch
patching file
dev/trunk/solr/src/java/org/apache/solr/handler/component/SpellCheckComponent.java
Hunk #2 succeeded at 575
Follow-up question.
How can I influence the "scoring" of results that come back, either through
term frequency (if I build off an index) or through # of search results
returned (if using a search log)?
Thanks
Is it generally wiser to build the dictionary from the existing index? Search
log? Other?
For "Did you mean" does one usually just use collate=true and then return
that string?
Should I be using a separate spellchecker handler, or should I just always
include spellcheck=true in my original searc
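For example (parameter names as on the SpellCheckComponent wiki page;
handler and query illustrative):

  /select?q=louis+vuiton&spellcheck=true&spellcheck.collate=true&spellcheck.count=5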
Can someone explain how to register a SolrEventListener?
I am actually interested in using the SpellCheckerListener and it appears
that it would build/rebuild a spellchecker index on commit and/or optimize
but according to the wiki "the only events that can be "listened" for are
firstSearcher a
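For reference, listeners are registered in solrconfig.xml; a sketch (the
class name is illustrative):

  <updateHandler class="solr.DirectUpdateHandler2">
    <listener event="postCommit" class="com.example.MySolrEventListener"/>
  </updateHandler>

firstSearcher/newSearcher listeners go under <query> instead. For the
spellchecker specifically, the 1.4 SpellCheckComponent can also rebuild on
its own via <str name="buildOnCommit">true</str> in its spellchecker
configuration.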
Can someone please explain what the inform method should accomplish? Thanks
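A minimal sketch of the contract (1.4-era API): inform(SolrCore) runs once,
after the core and all other plugins have been initialized, so it is the
safe place to resolve dependencies that don't exist yet at construction
time.

  import org.apache.solr.core.SolrCore;
  import org.apache.solr.util.plugin.SolrCoreAware;

  public class MyPlugin implements SolrCoreAware {
      @Override
      public void inform(SolrCore core) {
          // e.g. look up another registered component (name illustrative)
          Object spellcheck = core.getSearchComponent("spellcheck");
      }
  }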
Got it. Thanks!
: ...you've already got the conceptual model of how to do it, all you need
: now is to implement it as a Component that does the secondary-faceting in
: the same request (which should definitely be more efficient since you can
: reuse the DocSets) instead of issuing secondary requests from your cl
Do I even need to tidy/clean up the HTML if I use the
HTMLStripCharFilterFactory?
Wait... do you mean I should try the HTMLStripCharFilterFactory analyzer at
index time?
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.HTMLStripCharFilterFactory
Does the HTMLStripCharFilter apply at index time or query time? Would it
matter to use one over the other?
As a side question, if I want to perform highlighter summaries against this
field do I need to store the whole field or just index it with
TermVector.WITH_POSITIONS_OFFSETS?
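A sketch of the fieldType in question (a charFilter runs wherever the
analyzer chain declaring it runs, so this one strips tags at index time),
plus the field attributes the 1.4 schema uses for term vectors:

  <fieldType name="html_text" class="solr.TextField">
    <analyzer>
      <charFilter class="solr.HTMLStripCharFilterFactory"/>
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>

  <field name="description" type="html_text" indexed="true" stored="true"
         termVectors="true" termPositions="true" termOffsets="true"/>

Note the highlighter still needs stored="true" to build snippets; term
vectors only speed it up.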
What is the preferred way to index HTML using DIH (my HTML is stored in a
blob field in our database)?
I know there is the built-in HTMLStripTransformer but that doesn't seem to
work well with malformed/incomplete HTML. I've created a custom transformer
to first tidy up the HTML using JTidy then
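A sketch of such a transformer (DIH's Transformer contract plus JTidy's
stream-based parse; the column name "description" is illustrative):

  import java.io.ByteArrayInputStream;
  import java.io.ByteArrayOutputStream;
  import java.util.Map;
  import org.apache.solr.handler.dataimport.Context;
  import org.apache.solr.handler.dataimport.Transformer;
  import org.w3c.tidy.Tidy;

  public class TidyTransformer extends Transformer {
      @Override
      public Object transformRow(Map<String, Object> row, Context context) {
          Object html = row.get("description");
          if (html != null) {
              try {
                  Tidy tidy = new Tidy();
                  tidy.setQuiet(true);
                  tidy.setShowWarnings(false);
                  ByteArrayInputStream in =
                      new ByteArrayInputStream(html.toString().getBytes("UTF-8"));
                  ByteArrayOutputStream out = new ByteArrayOutputStream();
                  tidy.parse(in, out); // normalize the malformed markup
                  row.put("description", out.toString("UTF-8"));
              } catch (Exception e) {
                  // leave the row untouched if tidying fails
              }
          }
          return row;
      }
  }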
I believe I'll need to write some custom code to accomplish what I want
(efficiently that is) but I'm unsure of what would be the best route to
take. Will this require a custom request handler? Search component?
Ok the easiest way to explain is to show you what I want.
http://shop.ebay.com/?_fro
Would dumping the databases to a local file help at all?
Erik Hatcher-4 wrote:
>
> One thing that might help indexing speed - create a *single* SQL query
> to grab all the data you need without using DIH's sub-entities, at
> least the non-cached ones.
>
> Erik
>
> On Jun 2, 2010, at 12:21 PM, Blargy wrote:
>> fields
>> in about 8 mins. I know it's not quite the scale but with batching...
>>
>> David Stuart
>>
>> On 2 Jun 2010, at 17:58, Blargy wrote:
>>
>>>> One thing that might help indexing speed - create
> One thing that might help indexing speed - create a *single* SQL query
> to grab all the data you need without using DIH's sub-entities, at
> least the non-cached ones.
>
Not sure how much that would help. As I mentioned, without the item
description import the full process takes 4 h
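A sketch of the flattened root entity being suggested (table and column
names hypothetical):

  <entity name="item" query="
      SELECT i.id, i.title, d.description
        FROM items i
        LEFT JOIN item_descriptions d ON d.item_id = i.id"/>

i.e., one SQL statement doing the joins itself, instead of DIH firing a
sub-entity query per row.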
As a data point, I routinely see clients index 5M items on normal hardware
in approx. 1 hour (give or take 30 minutes).
Also wanted to add that our main entity (item) consists of 5 sub-entities
(ie, joins). 2 of those 5 are fairly small so I am using
CachedSqlEntityProcessor for them but the ot
Andrzej Bialecki wrote:
>
> On 2010-06-02 12:42, Grant Ingersoll wrote:
>>
>> On Jun 1, 2010, at 9:54 PM, Blargy wrote:
>>
>>>
>>> We have around 5 million items in our index and each item has a
>>> description
>>> located on a
As a data point, I routinely see clients index 5M items on normal
> hardware in approx. 1 hour (give or take 30 minutes).
Our master Solr machine is running 64-bit RHEL 5.4 on dedicated hardware with
4 cores and 16G RAM, so I think we are good there. Our DB is MySQL
version 5.0.67 (exa
We have around 5 million items in our index and each item has a description
located on a separate physical database. These item descriptions vary in
size and for the most part are quite large. Currently we are only indexing
items and not their corresponding description and a full import takes arou
I'll give the deletedEntity "trick" a try... igneous
How would this be any different than simply using the function to alter the
scoring of the final results and then sorting by score?
Yonik Seeley-2-2 wrote:
>
> Lots of other stuff has changed. For example, trunk is now always the
> next *major* version number.
> So the trunk of the combined lucene/solr is 4.0-dev
>
> There is now a branch_3x that is like trunk for all future 3.x releases.
>
> The next version of Solr will
Can someone explain to me what the state of Solr/Lucene is... didn't they
recently combine?
I know I am running version 1.4 but I keep seeing version numbers out there
like 3.0, 4.0??? Can someone explain what that means.
Also, is the state of trunk (1.4 or 4.0??) "good enough" for production
There will never be any need to search the actual HTML (tags, markup, etc) so
as far as functionality goes it seems like the DIH HTMLStripTransformer is
the way to go.
Are there any significant performance differences between the two?
We have user-entered item listings that have a title and contain HTML in
their descriptions. I would like to index the full descriptions (minus the
HTML, which I'm stripping out via the DIH HTMLStripTransformer) so I can
search across it as well as perform highlighting/excerpting.
Can someone
What are the correct settings to get highlighting excerpting working?
Original Text: "The quick brown fox jumps over the lazy dog"
Query: "jump"
Result: " fox jumps over "
Can you do something like the above with the highlighter or can it only
surround matches with pre and post tags?
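Something along these lines should give plain excerpts (parameter names
from the 1.4 highlighting docs; empty pre/post drop the surrounding tags):

  /select?q=jump&hl=true&hl.fl=description&hl.fragsize=40&hl.simple.pre=&hl.simple.post=

One stemming caveat: "jump" only matches "jumps" if the field's analyzer
stems.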
Is it possible to limit the number of snapshots taken by the replication
handler? ...http://localhost:8983/solr/replication?command=backup
Thanks
Forgot to mention, the entity that is causing this is the root entity
Narrowed down the issue to this block in DocBuilder.java in the
collectDelta method. Any ideas?
Set<Map<String, Object>> deletedSet = new HashSet<Map<String, Object>>();
Set<Map<String, Object>> deltaRemoveSet = new HashSet<Map<String, Object>>();
while (true) {
    Map<String, Object> row = entityProcessor.nextDeletedRowKey();
    if (row == null)
        break;
Ok... just read up on Log4J email notification. Sounds like it would be a
good idea; however, can you have separate SMTPAppenders based on which
exception is thrown and/or by searching for a particular string?
i.e., if log level = SEVERE and the message contains "rollback", then use
SMTPAppender foo.
Thanks
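A sketch of that appender (log4j 1.2 classes; host/addresses illustrative —
and note Solr logs through SLF4J, so you'd need the slf4j-log4j12 binding
for this to see Solr's messages):

  <appender name="foo" class="org.apache.log4j.net.SMTPAppender">
    <param name="SMTPHost" value="smtp.example.com"/>
    <param name="From" value="solr@example.com"/>
    <param name="To" value="ops@example.com"/>
    <param name="Subject" value="DIH rollback"/>
    <layout class="org.apache.log4j.PatternLayout">
      <param name="ConversionPattern" value="%d %-5p %c - %m%n"/>
    </layout>
    <filter class="org.apache.log4j.varia.StringMatchFilter">
      <param name="StringToMatch" value="rollback"/>
      <param name="AcceptOnMatch" value="true"/>
    </filter>
    <filter class="org.apache.log4j.varia.DenyAllFilter"/>
  </appender>

The StringMatchFilter/DenyAllFilter pair makes the appender fire only for
events whose message contains "rollback".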
Smiley, I don't follow. Can you explain how one could do this?
I'm guessing Log4J would parse the logs looking for a "ROLLBACK" and then it
would send out a notification? Sorry, but I'm not really familiar with Log4J.
BTW, loved your book. Have you thought about putting out another, more
advanced
Awesome thanks
I am trying to send out email notifications when our full/delta imports fail.
I tried working with the onImportEnd EventListener but that only fires when
the import succeeds.
Can anyone recommend a good way to send out email notifications on import
failures?
Ok, to further explain myself.
Well, first off I was experiencing a StackOverflowError during my
delta-imports after doing a full-import. The strange thing was, it only
happened sometimes. Thread is here:
http://lucene.472066.n3.nabble.com/StackOverflowError-during-Delta-Import-td811053.html#a824780
I am trying to subclass DIH and I am having a hard time trying to get
access to the current Solr Context. How is this possible?
Is there any way to get access to the current DataSource, DataImporter etc?
On a related note... when working with an onImportEnd or onImportStart how
can I get a r
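A sketch based on the 1.4-era DIH Context interface (verify the method
names against your version): the DIH extension points — e.g.
Evaluator.evaluate(String, Context) or an EventListener's onEvent(Context)
— all receive a Context, which exposes what you're after:

  import org.apache.solr.core.SolrCore;
  import org.apache.solr.handler.dataimport.Context;
  import org.apache.solr.handler.dataimport.DataSource;

  public class ContextPeek {
      static void peek(Context context) {
          SolrCore core = context.getSolrCore();   // the current core
          DataSource ds = context.getDataSource(); // current entity's source
          // shared state across callbacks (key/value illustrative):
          context.setSessionAttribute("k", "v", Context.SCOPE_GLOBAL);
      }
  }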
Basically, for some use cases I would like to show duplicates; for others I
want them ignored.
If I have overwriteDupes=false and I just create the dedup hash, how can I
query for only unique hash values... i.e., something like a SQL GROUP BY.
Thanks
Thanks for the info Hoss.
I will probably need to go with one of the more complicated solutions. Is
there any online documentation for this task? Thanks.
What's the best way to get to the instance of DataImportHandler from the
current context?
Thanks
I just found out that if I remove my deletedPkQuery then the import will
work. Is it possible that there is some conflict between my delta indexing
and my delta deleting?
Any suggestions?
Is there any more information I can post so someone can give me a clue on
what's happening?
Maybe I should have phrased it as: "Is this ready to be used with Solr 1.4?"
Also, as Grant asked in the thread, what is the actual status of that patch?
Thanks again!
Andrzej, is this ready for production usage?
"Hopefully in the future we can include user click through rates to boost
those terms/phrases higher"
- This could be huge!
Thanks for your help and especially your analyzer.. probably saved me a
full-import or two :)
"Easiest and oldest is wildcards on facets. "
- Does this allow partial matching or is this only prefix matching?
"It and facets allow limiting the database with searches. Using the spelling
database does not allow this."
- What do you mean?
So there is no generally accepted preferred way to do
What is the preferred way to implement this feature? Using facets or the
terms component (or maybe something entirely different). Thanks in advance!
Lucas.. was there a reason you went with 5.1.10 or was it just the latest
when you started your Solr project?
Also, how many items are in your index and how big is your index size?
Thanks
Shawn, first off thanks for the reply and links!
"As far as the error in the 5.0.8 version, does the import work, or does it
fail when the exception is thrown?"
- The import "works" for about 5-10 minutes, then it fails and everything is
rolled back once the above exception is thrown.
" You might
Which driver is the "best" for use with Solr?
I am currently using mysql-connector-java-5.1.12-bin.jar in my production
setting. However, I recently tried downgrading and did some quick indexing
using mysql-connector-java-5.0.8-bin.jar and I saw close to a 2x improvement
in speed!!! Unfortunately I ke
Can you please share with me your DIH settings and JDBC driver you are using.
I'll start...
jdbc driver = mysql-connector-java-5.1.12-bin
batchSize = "-1"
readOnly = "true"
Would someone mind explaining what "convertType" and "transactionIsolation"
actually do? The wiki doesn't really explain
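As far as I understand them (from the DataImportHandler wiki, so treat this
as hedged): convertType="true" makes JdbcDataSource cast each column to the
type of the destination Solr field instead of handing over the driver's
native type, and transactionIsolation sets the isolation level of the JDBC
connection. In context (values illustrative):

  <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
      url="jdbc:mysql://localhost/products"
      batchSize="-1" readOnly="true"
      convertType="false"
      transactionIsolation="TRANSACTION_READ_COMMITTED"/>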
Does anyone know of any documentation that is more in-depth than the wiki and
the Solr 1.4 book? I'm past the basic usage of Solr and creating simple
support plugins. I really want to know all about the inner workings of Solr
and Lucene. Can someone recommend anything?
Thanks
Is there any way to configure this so it only takes effect if you match more
than one word?
For example, if I search for "foo" it should have no effect on scoring, but
if I search for "foo bar" then it should.
Is this possible? Thanks
Anyone know of any way to accomplish (or at least simulate) this?
Thanks again
Mike,
This only happens when I attempt to do a delta-import without first deleting
the index dir before doing a full-import.
For example, these will work correctly:
1) Delete /home/corename/data
2) Full-Import
3) Delta-Import
However, if I attempt to do the following, it will result in an error:
1)
How can one accomplish a MoreLikeThis search using boost functions?
If it's not possible out of the box, can someone point me in the right
direction on what I would need to create to get this working? Thanks
FYI I am using the mysql-connector-java-5.1.12-bin.jar as my JDBC driver
Posted a few weeks ago about this but no one seemed to respond. Has anyone
seen this before? Why is this happening and, more importantly, how can I fix
it? Thanks in advance!
May 11, 2010 12:05:45 PM org.apache.solr.handler.dataimport.DataImporter
doDeltaImport
SEVERE: Delta Import Failed
java.lang
Thanks for the input Lance.
My use case was actually pretty simple so my solution was relatively simple.
I ended up using the HTTP method. The code is listed here:
http://pastie.org/952040. I would appreciate any comments.
iorixxx, you may find this solution to be of some use to you.
Thanks for the tip Lance. Just for reference, why is it dangerous to use the
HTTP method? I realized that the embedded method is probably not the way to
go (obviously, since I was getting that "SEVERE:
java.util.concurrent.RejectedExecutionException")
Can someone please explain to me the use cases where one would use one over
the other.
All I got from the wiki was (in reference to Embedded): "If you need to use
solr in an embedded application, this is the recommended approach. It allows
you to work with the same interface whether or not you hav
FYI, the code that is causing this exception and an explanation of my
specific use case is all listed in this thread:
http://lucene.472066.n3.nabble.com/Custom-DIH-variables-td777696.html
So I came up with the following class.

public class LatestTimestampEvaluator extends Evaluator {

    private static final Logger logger =
        Logger.getLogger(LatestTimestampEvaluator.class.getName());

    @Override
    public String evaluate(String expression, Context context) {
        List params = Evalu
I am working on creating my own custom DataImportHandler Evaluator class
and I keep running across this error when I am trying to delta-import. It
told me to post this exception to the mailing list so that's what I am doing
;)
[java] SEVERE: java.util.concurrent.RejectedExecutionException
I know one can create custom event listeners for update or query events, but
is it possible to create one for any DIH event (Full-Import, Delta-Import)?
Thanks
Thanks Noble this is exactly what I was looking for.
What is the preferred way to query solr within these sorts of classes?
Should I grab the core from the context that is being passed in? Should I be
using SolrJ?
Can you provide an example and/or provide some tutorials/documentation.
Once aga
Thanks Paul, that will certainly work. I was just hoping there was a way I
could write my own class that would inject this value as needed instead of
precomputing this value and then passing it along in the params.
My specific use case: instead of using dataimporter.last_index_time I want
to us
Can someone please point me in the right direction (classes) on how to create
my own custom DIH variable that can be used in my data-config.xml.
So instead of ${dataimporter.last_index_time} I want to be able to create
${dataimporter.foo}
Thanks
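For the archives: custom evaluators get registered in data-config.xml with
a <function> element and are then referenced through the
dataimporter.functions namespace; a sketch (class, table, and column names
hypothetical):

  <dataConfig>
    <function name="latestTimestamp" class="com.example.LatestTimestampEvaluator"/>
    <document>
      <entity name="item"
              deltaQuery="SELECT id FROM items
                          WHERE updated_at > '${dataimporter.functions.latestTimestamp()}'"/>
    </document>
  </dataConfig>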