Upayavira
On Tue, May 26, 2015, at 12:03 PM, Aman Tandon wrote:
Hi,
We have some field *city* in which the docValues are enabled. We need to
add the synonym in that field so how could we do it?
With Regards
Aman Tandon
the entire content of your large segment, which will
impact performance.
Before Solr 3.6, optimisation was necessary and recommended. At that
point (or a little before) the TieredMergePolicy became the default, and
this made optimisation generally unnecessary.
Upayavira
On Mon, May 25, 2015, at 07:17 AM
to the next version of the library you need. Or explain here what
you are trying to do so folks can help you find another way to achieve the
same.
Upayavira
On Tue, May 26, 2015, at 01:00 PM, Daniel Collins wrote:
I guess this is one reason why the whole WAR approach is being removed!
Solr should
an UpdateProcessor, as these
happen before fields are stored (e.g. RegexReplaceProcessorFactory).
If you are concerned about facet results, then you can do it in an
analysis chain, for example with a RegexpFilterFactory.
Upayavira
I doubt mlt.fl=* will work. Provide it with specific field names that
should be used for the comparison.
Upayavira
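As a sketch of that advice, here is one way to assemble MoreLikeThis parameters with explicit source fields rather than mlt.fl=* (the field names and document id below are hypothetical, and urlencode merely stands in for whatever HTTP client you use):

```python
from urllib.parse import urlencode

def mlt_params(doc_id, fields, min_tf=2, min_df=5):
    # Explicit source fields for MoreLikeThis, instead of mlt.fl=*
    return urlencode({
        "q": "id:%s" % doc_id,
        "mlt.fl": ",".join(fields),
        "mlt.mintf": min_tf,
        "mlt.mindf": min_df,
    })

query = mlt_params("SP2514N", ["name", "features"])
```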
On Tue, May 26, 2015, at 08:17 PM, John Blythe wrote:
hi all,
running a query like this, but am getting no results from the mlt
handler:
http://localhost:8983/solr/parts
If the source document is in your index (i.e. not passed in via
stream.body) then the fields used will either need to be stored or have
term vectors enabled. The latter is more performant.
Upayavira
On Tue, May 26, 2015, at 09:24 PM, John Blythe wrote:
Just checked my schema.xml and think
what's required to upgrade the Guava in Solr.
Upayavira
On Tue, May 26, 2015, at 03:11 PM, Robust Links wrote:
i have a minhash logic that uses guava 18.0 method that is not in guava
14.0.1. This minhash logic is a separate maven project. I'm including it
in
my project via Maven. The code
scripts), or you might consider asking on the Zookeeper
mailing lists directly: https://zookeeper.apache.org/lists.html
Upayavira
On Mon, May 25, 2015, at 10:34 AM, Zheng Lin Edwin Yeo wrote:
I've managed to get the Solr started as a Windows service after
re-configuring the startup script, as I've
server that does the compression for you.
Am I missing something?
Upayavira
On Mon, May 25, 2015, at 03:26 AM, Zheng Lin Edwin Yeo wrote:
Thanks for your reply.
Do we still have to use back the solr.war file in Solr 5.1 in order to
get
the gzip working?
Regards,
Edwin
On 25 May 2015
Well, given you, Shawn, did the hard bit and created the page, I've
taken the liberty to populate it, based upon that link you gave and my
own understanding.
Feel free to edit/replace/whatever.
Upayavira
On Fri, May 22, 2015, at 03:28 PM, Shawn Heisey wrote:
On 5/22/2015 12:46 AM, TK Solr
amongst segments and makes future merges more costly.
Upayavira
changes back to solr/webapp/web from
where I generate patches.
There's not too much more to it than that.
I did review your changes in SOLR-7555 and they seemed very reasonable.
I'm sure there's a lot more you could help with if you are interested.
Upayavira
and then facet on the tags field.
facet=on&facet.field=tags
Upayavira
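A minimal sketch of building that facet request as a query string (the "tags" field name is taken from the mail above; urlencode is just illustrative):

```python
from urllib.parse import urlencode

def facet_params(field, q="*:*"):
    # Enable faceting and facet on a single field, e.g. the "tags" field
    return urlencode({"q": q, "facet": "on", "facet.field": field})

qs = facet_params("tags")
```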
On Thu, May 21, 2015, at 04:34 PM, Erick Erickson wrote:
Have you tried
fq=type:A
Best,
Erick
On Thu, May 21, 2015 at 5:49 AM, Danesh Kuruppu dknkuru...@gmail.com
wrote:
Hi all,
Is it possible to do term
it to Solr over HTTP post will allow you to achieve what you are
aiming for.
Upayavira
On Tue, May 19, 2015, at 08:51 PM, rumford wrote:
I have an entity which extracts records from a MySQL data source. One of
the
fields is meant to be a multi-value field, except, this data source does
not
store
A few things:
Scores aren't confidence metrics, they are relative rankings, in
relation to a single resultset, that's all.
Secondly for edismax, boost does multiplicative boosting (whatever
function you provide, the score is multiplied by that), whereas bf does
additive boosting.
Upayavira
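The arithmetic difference between the two is easy to see in a toy sketch (hypothetical numbers, not real Solr scoring internals):

```python
def apply_boost(base_score, func_value, multiplicative):
    # edismax boost= multiplies the document score by the function value;
    # bf= adds the function value to the score instead
    return base_score * func_value if multiplicative else base_score + func_value

mult_score = apply_boost(2.0, 3.0, multiplicative=True)   # like boost=
add_score = apply_boost(2.0, 3.0, multiplicative=False)   # like bf=
```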
, and you might start to find features that won't
be supported at some point in that context.
HTH
Upayavira
container, and it should work.
Although, for how long...
Upayavira
On Mon, May 11, 2015, at 10:30 AM, nutchsolruser wrote:
I can not set qf in solrconfig.xml file because my qf and boost values
will
be changing frequently. I am reading those values from an external source.
Can we not set qf value from searchComponent? Or is there any other way
to
do
that the QueryComponent is ignoring qf.
What is it that you are trying to do?
Upayavira
On Mon, May 11, 2015, at 09:33 AM, nutchsolruser wrote:
Hi ,
I am trying to add my own query parameters to a Solr query using a search
component. In the example below I am trying to add the qf parameter to the
query
getting your problem solved
without coding first.
Can you not set qf= in the request handler configuration? Make sure you
set defType=edismax if you want qf to have any effect at all.
Upayavira
On Mon, May 11, 2015, at 10:09 AM, nutchsolruser wrote:
Thanks Upayavira,
I tried it by changing
attaching them to each request, then just add qf= as a param to the URL,
easy.
On Mon, May 11, 2015, at 12:17 PM, nutchsolruser wrote:
These boosting parameters will be configured outside Solr and there is
a separate module from which these values get populated. I am reading
those
values from
of Java code using OpenNLP should answer that for you.
Upayavira
On Mon, May 4, 2015, at 05:52 PM, bbarani wrote:
Hi,
Note: I have very basic knowledge on NLP..
I am working on an answer engine prototype where when the user enters a
keyword and searches for it we show them the answer
Why do you want to do this?
Bear in mind that over time, the fact that Solr uses Jetty will become
more and more of an implementation detail, and not something you are
expected to interact with, so it might work, but that doesn’t mean it
will work with all future versions.
Upayavira
On Fri, Apr
What are you trying to do? A search component is not intended for
updating the index, so it really doesn’t surprise me that you aren’t
seeing updates.
I’d suggest you describe the problem you are trying to solve before
proposing solutions.
Upayavira
On Tue, Apr 7, 2015, at 01:32 PM, Ali
, then you are
good. Otherwise, you will want to write your own code to call Tika, then
push the extracted content as a plain document.
Solr is just an HTTP server, so your application can post binary files
for Solr to ingest with Tika, or otherwise.
Upayavira
. When testing it, I had it happily notice a node going
down and redirect traffic to another host within 200ms, and did so
transparently. I will likely be starting to use it in a project in the
next few weeks myself.
Upayavira
On Thu, Apr 2, 2015, at 09:00 PM, Erick Erickson wrote:
See inline
, Solr *will* run on Windows, whether desktop (for development) or
server. However, it is much less tested, and you will find some things,
such as new init scripts, and so on, that maybe have not yet been ported
over to Windows.
Upayavira
will return a document that hasn’t
even been soft-committed yet.
As to which performs better, I’d encourage you to set up a simple
experiment, and try it out.
Upayavira
On Fri, Mar 27, 2015, at 06:56 AM, Aman Tandon wrote:
Hi,
Does ID-based filtering in Solr perform worse than a DB
if that gives you what you are after?
Upayavira
On Thu, Mar 26, 2015, at 03:15 PM, Matt Kuiper wrote:
Erick, Shawn,
Thanks for your responses. I figured this was the case, just wanted to
check to be sure.
I have used Zabbix to configure JMX points to monitor over time, but it
was a bit of work
Given that it is log entries, you might find it works to use a
collection per day, and then use collection aliasing to query over them
all. This way, you can have different aliases that specify certain
ranges (e.g. week is an alias for the last 7 or 8 day's collections).
Upayavira
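The collection-per-day naming that a "week" alias would span can be sketched like this (the "logs" prefix and dates are hypothetical):

```python
from datetime import date, timedelta

def daily_collections(prefix, end_day, days):
    # One collection per day, oldest first,
    # e.g. logs_2015-05-20 ... logs_2015-05-26
    return ["%s_%s" % (prefix, (end_day - timedelta(days=d)).isoformat())
            for d in range(days - 1, -1, -1)]

# Collections a "week" alias would cover for the 7 days ending 2015-05-26
week = daily_collections("logs", date(2015, 5, 26), 7)
```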
On Thu, Feb
between two search requests, but unfortunately, there's
*enough* similarity between requests to make it work, *sometimes*. But
when it doesn't work, people get baffled, and don't accept the truth as
an answer (you can't use scores to compare separate sets of search
results).
Upayavira
On Tue, Feb 3
these changes
into SVN should I actually succeed at producing something useful. Is it
enough just to make a branch called SOLR-5507 and start committing my
changes there?
Periodically, I'll zip up the relevant bits and attach them to the JIRA
ticket.
TIA
Upayavira
Perfect, thanks!
On Tue, Dec 23, 2014, at 07:10 AM, Shalin Shekhar Mangar wrote:
You can make github play well with Apache Infra. See
https://wiki.apache.org/lucene-java/BensonMarguliesGitWorkflow
On Tue, Dec 23, 2014 at 11:52 AM, Upayavira u...@odoko.co.uk wrote:
Hi,
I've (hopefully
of design abstraction.
Upayavira
On Tue, Dec 23, 2014, at 10:09 AM, Alexandre Rafalovitch wrote:
Semi Off Topic, but is AngularJS the best next choice, given the
version 2 being so different from version 1?
Regards,
Alex.
Sign up for my Solr resources newsletter at http://www.solr
in this way?
Thanks,
Upayavira
), so am looking for an alternative way to encapsulate the
scoring algorithm.
Upayavira
On Wed, Nov 26, 2014, at 10:14 PM, Nicholas Ding wrote:
I'm not sure if Solr is the right tool to do this task. You probably need
a
machine learning library like Mahout or Weka.
PS: Lucene doesn't really
Nope, not yet.
Someone did propose a JavascriptRequestHandler or such, which would
allow you to code such things in Javascript (obviously), but I don't
believe that has been accepted or completed yet.
Upayavira
On Thu, Oct 16, 2014, at 03:48 AM, Aaron Lewis wrote:
Hi,
I'm trying
you can move stuff around within your infrastructure without
needing to tell your app, and without needing to mess with load
balancers as that is all handled for you by the SolrJ client deciding
which node to forward your request.
Upayavira
Or consider separating frequently changing data into a different core
from the slow moving data, if you can, reducing the amount of data being
pushed around.
Upayavira
On Mon, Sep 29, 2014, at 09:16 PM, Bryan Bende wrote:
You can try lowering the mergeFactor in solrconfig.xml to cause more
to downloading the Zookeeper distribution.
Thoughts?
Upayavira
Perhaps because du reports disk block usage, not total file size?
Upayavira
On Wed, May 7, 2014, at 04:34 AM, Darrell Burgan wrote:
Hello all, I’m trying to reconcile what I’m seeing in the file system
for a Solr index versus what it is reporting in the UI. Here’s what I
see in the UI
Why would you want to do this? Javabin is used by SolrJ to communicate
with Solr. XML is good enough for communicating from the command
line/curl, as is JSON. Attempting to use javabin just seems to add an
unnecessary complication.
Upayavira
On Thu, Apr 24, 2014, at 10:20 AM, Elran Dvir wrote:
Looks to me that the original query was using the lucene query parser,
whereas bq is a parameter of the edismax query parser. This means that
the bq param is being ignored.
Move the fq clause to the q param, and add ^2000 after the name:123-444
bit.
Upayavira
On Fri, Apr 11, 2014, at 06:03 PM
Tell the user they can't have!
Or, write a small app that reads in their XML in one go, and pushes it
in parts to Solr. Generally, I'd say letting a user hit Solr directly is
a bad thing - especially a user who doesn't know the details of how Solr
works.
Upayavira
On Mon, Mar 31, 2014, at 07:17
be of ongoing use.
Upayavira
with the first.
So, whilst this may be possible, and may give some benefits, I'd reckon
that it would be a rather substantial engineering exercise, rather than
the quick win you seem to be assuming it might be.
Upayavira
*something* is still referring to collection1. Have you tried searching
through your SOLR_HOME dir for any references to collection1?
Upayavira
On Mon, Dec 23, 2013, at 08:44 AM, YouPeng Yang wrote:
Hi users
I get a very weird problem within solr 4.6
I just want to reload a core :
http
there are other similar situations
(quotes, colons, etc) that you may want to handle eventually.
Upayavira
On Sun, Dec 8, 2013, at 11:51 AM, Vulcanoid Developer wrote:
Thanks for your email.
Great, I will look at the WordDelimiterFactory. Just to make clear, I
DON'T
want any other tokenizing
Have you tried a WhitespaceTokenizerFactory followed by the
WordDelimiterFilterFactory? The latter is perhaps more configurable at
what it does. Alternatively, you could use a RegexFilterFactory to
remove extraneous punctuation that wasn't removed by the Whitespace
Tokenizer.
Upayavira
On Sat
using SOLR 4.5.0.
That just doesn't make sense. Search components are read only.
What are you trying to do? What stuff do you need to change? Could you
do it within an UpdateProcessor?
Upayavira
By default it sorts by score. If the score is a consistent one, it will
order docs as they appear in the index, which effectively means an
undefined order.
For example a *:* query doesn't have terms that can be used to score, so
every doc will get a score of 1.
Upayavira
On Tue, Dec 3, 2013
, but using that sort of
thing would get you there. For deletes, think.
There isn't yet an update by query (batch update) feature, one that
would be very useful.
Upayavira
On Wed, Nov 27, 2013, at 08:13 AM, Thomas Scheffler wrote:
Hi,
I am relatively new to SOLR and I am looking for a neat way
in that field.
Am I missing something?
Upayavira
according to how many of your query terms
matched a document, so you shouldn't need to worry about all of this.
Upayavira
[1]
https://cwiki.apache.org/confluence/display/solr/Working+with+External+Files+and+Processes
[2] http://wiki.apache.org/solr/FieldCollapsing
On Sun, Nov 10, 2013, at 05:47
*almost* do it. It probably
wouldn't take much tweaking so you don't need to do it in your own
code...
Upayavira
because the stored field values would include
the pretty units, whilst the indexed values would be pure numbers. You
could use the field for range faceting, but you would get the indexed
value, i.e. without the units, as faceting uses the indexed value not
the stored one.
Upayavira
On Mon, Nov 11
http://wiki.apache.org/solr/HierarchicalFaceting
Upayavira
On Sat, Nov 9, 2013, at 12:09 PM, Nea wrote:
Hi Everybody,
I’m using Solr 4.5.1 and I need to achieve a HierarchicalFaceting for
leveled categories. Someone can explain me how schema.xml and query
should be?
My category path
for all of
us.
Upayavira
On Sat, Nov 9, 2013, at 03:19 PM, Nea wrote:
HierarchicalFaceting documentation does not clearly explain how to index
and query field types “descendent_path” and “ancestor_path” included in
schema.xml.
Any help would be greatly appreciated.
<!--
Example
. If it works for you, then
great.
What I would say though, is that if you have a lot of documents in your
index, consider pre-computing that field at index time, and boost on the
pre-computed value, as that will give you better performance.
Upayavira
Also note that function queries only return numbers (given their origin
in scoring). They cannot be used to create virtual string or text
fields.
Upayavira
On Wed, Oct 30, 2013, at 05:19 PM, Jack Krupansky wrote:
A function query is simply returning a calculated result based on
existing
data
There'd be no point having them the same.
You're likely to include boosts in your pf, so that docs that match the
phrase query as well as the term query score higher than those that just
match the term query.
Such as:
qf=text description&pf=text^2 description^4
Upayavira
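A small sketch of assembling those edismax parameters (field names and boosts copied from the example above; the helper itself is hypothetical):

```python
from urllib.parse import urlencode

def edismax_params(q, qf, pf):
    # qf/pf take space-separated "field^boost" pairs; pf boosts phrase matches
    fmt = lambda pairs: " ".join(
        "%s^%s" % (f, b) if b is not None else f for f, b in pairs)
    return urlencode({"defType": "edismax", "q": q,
                      "qf": fmt(qf), "pf": fmt(pf)})

params = edismax_params("solr cloud",
                        qf=[("text", None), ("description", None)],
                        pf=[("text", 2), ("description", 4)])
```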
On Mon, Oct 28, 2013
When this gets interesting is if we had batch atomic updates. Imagine
you could do indexCount++ for all docs matching the query
category:sport. Could be really useful. /dreaming.
Upayavira
On Thu, Oct 24, 2013, at 05:40 PM, Aloke Ghoshal wrote:
Upayavira - Nice idea pushing in a nominal update
Can you say more about the problem? What did you see that led to that
problem? How did you distribute docs between shards, and how is that
different from your 3.6 setup?
It might be a distributed IDF thing, or it could be something simpler.
Upayavira
On Wed, Oct 23, 2013, at 03:26 AM, dboychuck
Missing a colon before the curly bracket in the fq?
On Wed, Oct 23, 2013, at 09:42 AM, Peter Kirk wrote:
Hi
If I do a search like
/search?q=catid:{123}
I get the results I expect.
But if I do
/search?q=*:*&fq=catid{123}
I get an error from Solr like:
update to cause
the document to be re-indexed. I've never tried it though. I'd be
curious to know if it works.
Upayavira
On Wed, Oct 23, 2013, at 02:25 PM, michael.boom wrote:
Being given
<field name="title" type="string" indexed="false" stored="true"
multiValued="false" />
Changed to
<field name="title"
Not too sure what you're asking. Are you saying that you want to only
return a relevant part of a field in search results - i.e. a contextual
snippet?
If so, then you should look at the highlighting component, which can do
this.
http://wiki.apache.org/solr/HighlightingParameters
Upayavira
Do two searches.
Why do you want to do this though? It seems a bit strange. Presumably
your users want the best matches possible whether exact or fuzzy?
Wouldn't it be best to return both exact and fuzzy matches, but score
the exact ones above the fuzzy ones?
Upayavira
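One way to express "score exact above fuzzy" in a single query, as a sketch (the field name, boost, and edit distance here are illustrative):

```python
def exact_over_fuzzy(field, term, exact_boost=10, max_edits=2):
    # One query matching both forms, with exact matches boosted above fuzzy ones
    return "%s:%s^%d OR %s:%s~%d" % (
        field, term, exact_boost, field, term, max_edits)

q = exact_over_fuzzy("title", "solr")
```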
On Mon, Oct 21, 2013
grouping' should be able to get you there. You group on your field, and
only show one value per group.
Sorry I can't give you more specifics right now, but google with the
above keywords should get you there.
Upayavira
field rounded to the nearest
month, then you will be able to use that field in a pivot facet.
Obviously, this requires index time effort, which is less than ideal.
I guess this is a feature just waiting for someone to implement it.
Upayavira
format expected by Solr. Or, you can add
tr=.xsl to the URL, and use an XSL stylesheet to transform your XML
into Solr's XML format.
The schema defines the fields that are present in the index, not the
format of the XML used.
Upayavira
to solr config.xml, you will find a file
called something like stopwords.txt. Compare these files between your
two systems.
Upayavira
On Thu, Oct 17, 2013, at 07:18 AM, Stavros Delsiavas wrote:
Unfortunately, I don't really know what stopwords are. I would like it to
not ignore any words of my
I would say, if index size is not an issue, there's merit in indexing a
field twice, once with these turned off, once with them turned on. That
gives you the chance to choose at query time without major
re-engineering efforts for your indexer code.
Upayavira
On Tue, Oct 15, 2013, at 07:08 AM
occurrences.
Upayavira
On Mon, Oct 14, 2013, at 08:33 AM, Karan jindal wrote:
Hi all,
I have a general query about fieldNorm
Is it advisable to use fieldNorm (which kinds of gives importance to
shorter length fields).
Is there any set of standard factors on which the decision of turning
by the total number of
terms in the document.
Almost there, but the last leg is not quite.
I don't know whether it is possible to write a fieldlength(text)
function that returns the number of terms in the field.
Upayavira
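Lacking such a function, the usual workaround is to precompute the term count client-side at index time and store it in its own field; a sketch, assuming naive whitespace tokenisation and a hypothetical text_length_i field:

```python
def term_count(text):
    # Naive whitespace tokenisation; a real analysis chain may differ
    return len(text.split())

# Store the count in a separate integer field alongside the text
doc = {"id": "1",
       "text": "the quick brown fox",
       "text_length_i": term_count("the quick brown fox")}
```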
Right - aside from the interesting intellectual exercise, the correct
question to ask is, why?
Why would you want to do this? What's the benefit, and is there a way of
doing it that is more in keeping with how Solr has been designed?
Upayavira
On Thu, Oct 10, 2013, at 01:17 PM, Erick Erickson
Use $solrzip/example/cloud-scripts/zkcli.sh to upload a new set of
configuration files.
Upayavira
On Thu, Oct 10, 2013, at 04:57 PM, maephisto wrote:
On this topic, once you've uploaded you collection's configuration in ZK,
how
can you update it?
Upload the new one with the same config name
with:
mul
mult
multi
multic
multica
multicad
all indexed at the same term position, allowing for any of those to
match. Of course, that will make your index much larger.
As Erick says, use the admin/analysis page to play with your analysis
chains and see what they do to different inputs.
Upayavira
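The prefix list above is what an edge n-gram filter would emit; a sketch of generating those same terms (min_gram=3 is assumed to match the example):

```python
def edge_ngrams(term, min_gram=3):
    # Every prefix from min_gram chars up to the whole term,
    # as an EdgeNGram filter would emit at the same term position
    return [term[:i] for i in range(min_gram, len(term) + 1)]

grams = edge_ngrams("multicad")
```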
=topic:x1&rows=20&sort=timestamp desc
Will get you what you ask for.
The above ticket might just make it a little faster.
Upayavira
, and your
field values will disappear.
Therefore, you need to store a field for it to survive an atomic update.
Whether you index it or not is up to you and the needs of your
application.
Upayavira
nicely.
Upayavira
On Fri, Oct 4, 2013, at 10:41 AM, Jan Høydahl wrote:
Hi,
I have been asked the same question. There are only DELETEALIAS and
CREATEALIAS actions available, so is there a way to achieve uninterrupted
switch of an alias from one index to another? Are we lacking a MOVEALIAS
modify_date:[* TO 2012-07-06T9:23:43Z]
modify_date:[2012-07-06T9:23:43Z TO *]
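Those two complementary range queries can be built generically; a sketch (note both ends are inclusive, so a doc exactly at the pivot matches both):

```python
def split_ranges(field, pivot):
    # Complementary range queries around a pivot timestamp; both are
    # inclusive, so a doc exactly at the pivot matches both
    return ("%s:[* TO %s]" % (field, pivot),
            "%s:[%s TO *]" % (field, pivot))

before, on_or_after = split_ranges("modify_date", "2012-07-06T09:23:43Z")
```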
On Thu, Oct 3, 2013, at 09:18 AM, soumikghosh05 wrote:
I have a date field modify_date and the field value is
2012-08-09T11:23:43Z ,
2011-09-02T12:23:43Z and 2012-07-06T9:23:43Z for 3 docs.
User provided a date
Which query parser are you using? It seems you are mixing them up.
As far as I know, edismax doesn't support quoted phrases, it uses the pf
param to invoke phrase queries. Likewise, the lucene query parser
doesn't support a phrase slop param, it uses a "phrase slop"~2 syntax.
Upayavira
On Tue, Oct 1
that new
field(and skipping/removing the string one if no-longer needed).
Hope this helps.
Upayavira
On Sat, Sep 28, 2013, at 04:38 PM, bengates wrote:
Haha,
Thanks for your reply, that's what I'll do then.
Unfortunately I can speak Java as well as I can speak ancient Chinese in
Sign Language
=+user_id:X +date:[dateX TO dateY] to
find out how many docs, then take the numFound value, if it is above Y,
do a subsequent query to retrieve the docs, either all docs, or those in
the relevant date range.
Don't know if that helps.
Upayavira
On Sun, Sep 29, 2013, at 05:15 PM, Matheus Salvia wrote:
Thanks
). So
really, you need to craft your own analysis chain to fit the kind of
data you are working with.
Upayavira
On Mon, Sep 30, 2013, at 06:50 PM, Van Tassell, Kristian wrote:
I have a search term multi-CAD being issued on tokenized text. The
problem is that you cannot get any search results when
.
Upayavira
On Fri, Sep 27, 2013, at 09:44 PM, Peter Keegan wrote:
Hi Joel,
I tried this patch and it is quite a bit faster. Using the same query on
a
larger index (500K docs), the 'join' QTime was 1500 msec, and the 'hjoin'
QTime was 100 msec! This was for true for large and small result
Mattheus,
Given these mails form part of an archive and are themselves
self-contained, can you please post your actual question here? You're
more likely to get answers that way.
Thanks, Upayavira
On Fri, Sep 27, 2013, at 04:36 PM, Matheus Salvia wrote:
Hello everyone,
I'm having a problem
:[y TO *]
Worst case, if you don't want to mess with your indexing code, I wonder
if you could use a ScriptUpdateProcessor to do this work - not sure if
you can have one add an entirely new, additional, document to the list,
but may be possible.
Upayavira
On Fri, Sep 27, 2013, at 09:50 PM, Matheus
to
review this info alongside a Solr book or two, it is quite complex).
Looking at your example below, I suspect that all of your examples have
the same score, so are sorted randomly.
Upayavira
On Tue, Sep 24, 2013, at 01:38 PM, Viresh Modi wrote:
My query looks like:
start=0&rows=10&hl
q=country:[* TO *] will find all docs that have a value in a field.
However, it seems you have a space, which *is* a value. I think Eric is
right - track down that record and fix the data.
Upayavira
On Wed, Sep 18, 2013, at 09:23 AM, Prasi S wrote:
How to filter them in the query itself
Filter them out in your query, or in your display code.
Upayavira
On Wed, Sep 18, 2013, at 06:36 AM, Prasi S wrote:
Hi ,
I'm using Solr 4.4 for our search. When I query for a keyword, it returns
empty valued facets in the response
<lst name="facet_counts">
<lst name="facet_queries"/>
<lst name
Have you used debugQuery=true, or fl=*,[explain], or those various
functions? It is possible to ask Solr to tell you how it calculated the
score, which will enable you to see what is going on in each case. You
can probably work it out for yourself then I suspect.
Upayavira
On Tue, Sep 17, 2013
If you have two cores, then the core name should be in your URL.
http://host:8983/solr/CORE/select?q=blah
Or you can set a default core in solr.xml.
Upayavira
On Sun, Sep 15, 2013, at 12:16 PM, Nutan wrote:
I get this error: solr/select not available. I am using two cores
document
The simplest thing is to exclude empty values in the query: myfield:[*
TO *]
Upayavira
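A tiny helper makes the intent of that clause explicit; a sketch, with "myfield" as a placeholder name:

```python
def has_value(field):
    # Matches only docs with some value in the field; prefix with "-"
    # to instead keep docs where the field is missing
    return "%s:[* TO *]" % field

fq = has_value("myfield")
missing_fq = "-" + has_value("myfield")
```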
On Thu, Sep 12, 2013, at 03:50 PM, Raheel Hasan wrote:
ok, so I got the idea... I will pull 7 fields instead and remove the
empty
one...
But there must be some setting that can be done in Facet
at Amazon, with a separate EBS volume
per core giving some performance improvement.
Upayavira
On Wed, Sep 11, 2013, at 07:35 PM, Deepak Konidena wrote:
Hi,
I know that SolrCloud allows you to have multiple shards on different
machines (or a single machine). But it requires a zookeeper
Upload changed config files to zookeeper, using the zookeeper cli, which
I think is in example/cloud-scripts. Then use the collections api, over
http, to reload the collection.
Upayavira
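The reload step can be sketched as building the Collections API call (host and collection name here are placeholders):

```python
def reload_url(host, collection):
    # Collections API RELOAD call, issued after new configs reach ZooKeeper
    return ("http://%s/solr/admin/collections?action=RELOAD&name=%s"
            % (host, collection))

url = reload_url("localhost:8983", "mycollection")
```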
On Tue, Sep 10, 2013, at 06:25 AM, Prasi S wrote:
Hi,
I have solrcloud with two collections. I have indexed
.
Upayavira
On Tue, Sep 10, 2013, at 01:43 PM, Christian Köhler - ZFMK wrote:
Hi,
I use the new SpatialRecursivePrefixTreeFieldType field to store geo
coordinates (e.g. 14.021666,51.5433353 ). I can retrieve the coordinates
just fine, so I am sure they are indexed correctly.
However when I
It's a wiki. Can't you correct it?
Upayavira
On Wed, Sep 4, 2013, at 08:25 PM, Dmitri Popov wrote:
Hi,
http://wiki.apache.org/solr/XsltResponseWriter (and reference manual PDF
too) become out of date:
In configuration section
<queryResponseWriter
name="xslt"
class
If you can't do it before the content gets to Solr, which would be best,
then use the ScriptUpdateProcessor and code up some Javascript to remove
it. Or, if the pattern is regular enough, you might be able to use the
RegexpUpdateProcessor.
Upayavira
On Fri, Aug 30, 2013, at 04:24 AM, vincent