1) The Terms query parser (TermsQParser) has nothing to do with the
TermsComponent (the first is for querying many distinct terms, the
latter is for requesting info about low-level terms in your index)
https://cwiki.apache.org/confluence/display/solr/Other+Parsers#OtherParsers-TermsQueryParser
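To illustrate the distinction (field and term values are hypothetical): the TermsQParser matches documents against many distinct terms in one query, e.g.

```text
q={!terms f=category}book,music,video
```

while the TermsComponent is a request-handler component (typically reached at /terms) that reports raw terms from the index, e.g. via terms.fl=category&terms.limit=10.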
: updates? i can't do this because i have delta-import queries which also
: should be able to assign uuid when it is needed
You really need to give us a full and complete picture of what exactly you
are currently doing, what's working, what's not working, and when it's not
working what is
: Can you please explain how having the same field for query and stat can
: cause some issue for my better understanding of this feature?
I don't know if it can, it probably shouldn't, but in terms of trying to
understand the bug and reproduce it, any pertinent facts may be relevant -
: A follow up question. Is the sub-sorting on the lucene internal doc IDs
: ascending or descending order? That is, do the most recently indexed doc
you cannot make any generic assumptions about the order of the internal
Lucene doc IDs -- the secondary sort on the internal IDs is stable (and
: implementation of Solr.
:
: Chris I will try to create sample data and create a jira ticket with
: details.
:
: Regards,
: Modassar
:
:
: On Tue, Aug 18, 2015 at 9:58 PM, Chris Hostetter hossman_luc...@fucit.org
: wrote:
:
:
: : I am getting following exception for the query
https://people.apache.org/~hossman/#threadhijack
Thread Hijacking on Mailing Lists
When starting a new discussion on a mailing list, please do not reply to
an existing message, instead start a fresh email. Even if you change the
subject line of your email, other mail headers still track which
: My current expansion expands from the
:user-query
: to the
:+user-query favouring-query-depending-other-params overall-favoring-query
: (where the overall-favoring-query could be computed as a function).
: With the boost parameter, i'd do:
:(+user-query
: I am getting following exception for the query :
: *q=field:query&stats=true&stats.field={!cardinality=1.0}field*. The
: exception is not seen once the cardinality is set to 0.9 or less.
: The field is *docValues enabled* and *indexed=false*. The same exception
: I tried to reproduce on non
: I have a fresh install of Solr 5.2.1 with about 3 million docs freshly
: indexed (I can also reproduce this issue on 4.10.0). When I use the Solr
: MoreLikeThisHandler with content stream I'm getting different results per
: shard.
I haven't looked at the code recently but i'm 99% certain that
: Has anyone worked with deep pagination using SolrNet? The SolrNet
: version that I am using is v0.4.0.2002. I followed up with this article,
: https://github.com/mausch/SolrNet/blob/master/Documentation/CursorMark.md
: , however the version of SolrNet.dll does not expose the a StartOrCursor
: <meta name="date" content="Unknown" />
: <meta name="dc.date.created" content="Unknown" />
:
: Most documents have a correctly formatted date string and I would like to keep
: that data available for search on the date field.
...
: I realize it is complaining because the date string isn't matching
Thanks to the SortedSetDocValues this is in fact possible -- in fact i
just uploaded a patch for SOLR-2522 that you can take a look at to get an
idea of how to make it work (the main class you're probably going
to want to look at is SortedSetSelector: you're going to want a similar
: I’m getting this error on startup:
:
: solrcloud section of solr.xml contains 1 unknown config parameter(s):
[shareSchema]
Pretty sure that's because it was never a supported property of the
<solrcloud> section -- even in the old format of solr.xml.
it's just a top-level property -- ie:
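A sketch of what that looks like in a new-style solr.xml (the surrounding settings are illustrative):

```xml
<solr>
  <!-- top-level property, NOT inside <solrcloud> -->
  <str name="shareSchema">true</str>
  <solrcloud>
    <!-- cloud/zk settings go here -->
  </solrcloud>
</solr>
```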
: HI All:I need a pagenigation with facet offset.
: There are two or more fields in [facet.pivot], but only one value
: for [facet.offset], eg: facet.offset=10&facet.pivot=field_1,field_2.
: In this condition, field_2 is 10's offset and then field_1 is 10's
: offset.
: ?q=&wt=json&defType=dismax&q.alt=*:*&bq=provider:A^2.0
: My first results have provider A.
: ?q=&wt=json&defType=dismax&q.alt=*:*&bq=provider:B^1.5
: My first results have provider B. Good!
: ?q=&wt=json&defType=dismax&q.alt=*:*&bq=provider:(A^2.0 B^1.5)
: Then my first results have
: Hello - i need to run a thread on a single instance of a cloud so need
: to find out if current node is the overseer. I know we can already
: programmatically find out if this replica is the leader of a shard via
: isLeader(). I have looked everywhere but i cannot find an isOverseer. I
At
To clarify the difference:
- bf is a special param of the dismax parser, which does an *additive*
boost function - that function can be something as simple as a numeric
field
- alternatively, you can use the boost parser in your main query string,
to wrap any parser (dismax, edismax,
: Are there any examples/documentation for IntervalFaceting using dates that
: I could refer to?
You just specify the interval set start/end as properly formatted date
values. This example shows some range faceting and interval faceting on
the same field of the bin/solr -e techproducts
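For example (the field name is from the techproducts example; the interval bounds are illustrative):

```text
q=*:*&rows=0&facet=true
&facet.interval=manufacturedate_dt
&facet.interval.set=[2000-01-01T00:00:00Z,2010-01-01T00:00:00Z)
&facet.interval.set=[2010-01-01T00:00:00Z,*]
```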
the additive boosts of the 'bf' field and
parameterize the current ^boost values you are using, the closest
corollary using the function syntax would be the prod() (ie: 'product')
function...
bf = prod(a,$a1,$fa) prod(sum(b,$b1,$b2),$fb) ...
: On 7/14/2015 2:31 PM, Chris Hostetter wrote:
: To clarify
: Some of the buckets return with a count of ‘0’ in the bucket even though
: the facet.range.min is set to ‘1’. That is not the primary issue
facet.range.min has never been a supported (or documented) param -- you
are most likely trying to use facet.mincount (which can be specified
per
: However, when I try to follow the instructions for loading the examples
: I find that there is a file that I am supposed to have called post.jar
: which I cannot find in the directory specified, exampledocs. There is a
: file called post in another directory but it does not seem to be a
:
: The other option I looked at is writing my own handler for my crawler and
: plugging it into Solr's solrconfig.xml. If I do this, then my crawler will
: run in the same JVM space as Solr, this is something I want to avoid.
If you don't want your crawler to run in the same JVM as solr, then
Jetty is an implementation detail in Solr 5.0 -- modifying the underlying
jetty configs, or directly adding handlers isn't supported by Solr. In
the future, jetty may be ripped out completely and replaced with some
other networking stack w/o advance notice (probably unlikely, but smaller
according to the echoParams output, you aren't specifying a q param.
You seem to be trying to specify your query input using the q.alt param
-- but the q.alt param doesn't use the edismax parser specified by the
defType param -- q.alt is a feature *of* the edismax parser that is used
to
Forgot the relevant documentation...
https://cwiki.apache.org/confluence/display/solr/The+Extended+DisMax+Query+Parser
https://cwiki.apache.org/confluence/display/solr/The+DisMax+Query+Parser
: Date: Tue, 7 Jul 2015 13:57:25 -0700 (MST)
: From: Chris Hostetter hossman_luc...@fucit.org
: Thanks I’ll try that. Is the Thread Dump view in the Solr Admin panel not
reliable for diagnosing thread hangs?
If the JVM is totally hung, you might not be able to connect to solr to
even ask it to generate the thread dump itself -- but jstack may still be
able to.
-Hoss
: Hmm, interesting. That particular bug was fixed by upgrading to Jetty
: 4.1.7 in https://issues.apache.org/jira/browse/SOLR-4031
1st) Typo - Shalin meant 8.1.7 above.
2nd) If you note the details of both issues, no root cause was ever
identified as being fixed -- all that happened was that Per
: Have nothing found in the ref guides, docs, wiki, examples about this
mutually
: exclusive parameters.
:
: Is this a bug or a feature and if it is a feature, where is the sense of
this?
The problem is that if a timeAllowed exceeded situation pops up, you won't
get a nextCursorMark to
I'm not sure i understand your question ...
if you know that you are only ever going to have the 'year' then why not
just index the year as an int?
a TrieDateField isn't really of any use to you, because normal date-type
usage (date math, date ranges) is useless because you don't have any
: For the _version_ field in the schema.xml, do we need to set it be
: docValues=true?
you *can* add docValues, but it is not required.
There is an open discussion about whether we should add docValues to
the _version_ field (or even switch completely to indexed=false) in this
jira...
: Have you tried this syntax ?
:
: facet=true&facet.field={!ex=st key=terms facet.limit=5
: facet.prefix=ap}query_terms&facet.field={!key=terms2
: facet.limit=1}query_terms&rows=0&facet.mincount=1
:
: This seems the proper syntax, I found it here :
yeah, local params are supported for specifying
: You can get raw query (and other debug information) with debug=true
: paramter.
more specifically -- if you are writing a custom SearchComponent, and
want to access the underlying Query object produced by the parsers that
SolrIndexSearcher has executed, you can do so the same way the debug
: Subject: How to use https://issues.apache.org/jira/browse/SOLR-7274
:
: How do you set this up?
Some draft documentation is available in the online ref guide (not yet
ready to be published) ... i just added a link to here from the jira...
: I encounter this peculiar case with solr 4.10.2 where the parsed query
: doesnt seem to be logical.
:
: PHRASE23(reduce workforce) ==
: SpanNearQuery(spanNear([spanNear([Contents:reduceä,
: Contents:workforceä], 1, true)], 23, true))
1) that does not appear to be a parser syntax of any parser
: However, I need to do able to divide certain metrics. I tried including
: functions in the stats.field such as div(sum(bounce_rate), (sum(visits)) but
: it doesn't recognize the functions. Also it seems to ignoring the paging for
: the stats results and returns all groups regardless.
i'm lost
:
: The problem is SlowCompositeReaderWrapper.wrap(searcher.getIndexReader());
: you hardly ever need to to this, at least because Solr already does it.
Specifically you should just use...
searcher.getLeafReader().getSortedSetDocValues(your_field_name)
...instead of doing all this
: So my question is: can I get offset of time if I use NOW/MINUTE and not
NOW/DAY rounding?
i'm sorry, but your question is still too terse, vague, and ambiguous for
me to really make much sense of it; and the example queries you provided
really don't have enough context for me to understand
: The guys was using delta import anyway, so maybe the problem is
: different and not related to the clean.
that's not what the logs say.
Here's what i see...
Log begins with server startup @ Jun 10, 2015 11:14:56 AM
The DeletionPolicy for the shopclue_prod core is initialized at Jun
10,
: So, are you saying that you are expected to store UTC dates in your
: index, but if you happen to know that a user is in a different timezone,
: you can round those dates for them according to their timezone instead
: of UTC?
:
: That's how I'd interpret it, but useful to confirm.
Date
: I'm using Solr 4.10.0.I'm trying to figure out how to use the TZ
: param.I've noticed that i have to use date math in order for this to
: work,also I've got to use rounding when I query Solr in order to use the
: TZ param.
I'm having trouble understanding your question. The TZ param, as
: I was hoping there was a solr server dependency package that I could
: declare against to get solr's standalone server which seems to be the
: direction the team is taking, and I want to stay in those lines for the
: future if that's the direction.
it's hard to understand what exactly your
: passed in as a Properties object to the CD constructor. At the moment,
: you can't refer to a property defined in solrcore.properties within your
: core.properties file.
but if you look at it from a historical context, that doesn't really
matter for the purpose that solrcore.properties was
: I took a quick look at the code and it _looks_ like any string
: starting with t, T or 1 is evaluated as true and everything else
: as false.
correct and documented...
https://cwiki.apache.org/confluence/display/solr/Field+Types+Included+with+Solr
: sortMissingLast determines sort order if
: What about at query time? If I index my Boolean and it has one of the
: variations of t, T or 1, what should my query be to get a hit on
: true? q=MyBoolField:what ? What should the value of what be when I
: want to check if the field has a true and when I need to check if it has
: a false?
:
https://cwiki.apache.org/confluence/display/solr/Common+Query+Parameters#CommonQueryParameters-ThesortParameter
:
: I think we may have an omission from the docs -- docValues can also be
: used for sorting, and may also offer a performance advantage.
I added a note about that.
-Hoss
: i'm not sure i follow what you're saying on #3. let me clarify in case it's
: on my end. i was wanting to *eventually* set a lower bound of -10%size1 and
: an upper of +10%size1. for the sake of experimentation i started with just
lower bound of what ?
write out the math equation you want to
: 2) lame :\
Why do you say that? ... it's a practical limitation -- for each document
a function is computed, and then the result of that function is compared
against the (fixed) upper and lower bounds.
In situations where you want something like the lower bound of the
range comparison
: Expected identifier at pos 29 str='{!frange l=sum(size1, product(size1,
: .10))}size1
:
: pos 29 is the open parenthesis of product(). can i not use a function
: within a function? or is there something else i'm missing in the way i'm
: constructing this?
1) you're confusing the parser by
: Subject: Per field mm parameter
:
: How to specify per field mm parameter in edismax query.
you can't.
the mm param applies to the number of minimum match clauses in the final
query, where each of those clauses is a disjunction over each of the
qf fields.
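A minimal sketch of what mm actually governs (parameter values illustrative):

```text
q=solar panel kit&defType=edismax&qf=title^2 body&mm=2
```

Each of the three terms becomes one clause (a disjunction across title and body), and mm=2 requires any two of those clauses to match; there is no per-field knob.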
this blog might help explain the
the
configset, and then local core-specific properties overriding both.
:
: Do you want to open a JIRA bug, Steve?
:
: Alan Woodward
: www.flax.co.uk
:
:
: On 28 May 2015, at 00:58, Chris Hostetter wrote:
:
: : I am attempting to override some properties in my solrconfig.xml file
: certainly didn't intend to write it like this!). The problem here will
: be that CoreDescriptors are currently built entirely from
: core.properties files, and the CoreLocators that construct them don't
: have any access to zookeeper.
But they do have access to the CoreContainer which is
: I am attempting to override some properties in my solrconfig.xml file by
: specifying properties in a solrcore.properties file which is uploaded in
: Zookeeper's collections/conf directory, though when I go to create a new
: collection those properties are never loaded. One work-around is to
: Subject: Re: Is it possible to search for the empty string?
:
: Not out of the box.
:
: Fields are parsed into tokens and queries search on tokens. An empty
: string has no tokens for that field and a missing field has no tokens
: for that field.
that's a misleading oversimplification of
I suspect you aren't doing anything wrong, i think it's the same as this
bug...
https://issues.apache.org/jira/browse/SOLR-7035
: Date: Thu, 14 May 2015 12:53:34 +0530
: From: Aman Tandon amantandon...@gmail.com
: Reply-To: solr-user@lucene.apache.org
: To: solr-user@lucene.apache.org
: {
: match:true,
: value:655360,
: description:fieldNorm(doc=5316)
: }
...
: This match is in the title field, which has 119669 total terms (which
: isn't such big number) and the total document count in this index is
that smells like a bug -- by
: Sorry for leaving the Solr version out in my previous email, I'm using
: Solr 4.10.3 running on Centos7, with the following JRE: Oracle
: Corporation OpenJDK 64-Bit Server VM (1.7.0_75 24.75-b04)
I can't reproduce using Solr 4.10.3 (or 4.10.4 -- misread your email the
first time)
Are you
: Right now, I specify the boost for my request handler as:
: <requestHandler name="/select" class="solr.SearchHandler">
: .
: <str name="boost">ln(qty)</str>
:
: </requestHandler>
:
: Is there a way to specify this boost in the Solrconfig.xml?
:
: I tried: <str name="boost">(*:*
: DocSet docset1 = Searcher.getDocSet(query1)
: DocSet docset2 = Searcher.getDocSet(query2);
:
: Docset finalDocset = docset1.intersection(docset2);
:
: Is this a valid approach ? Give docset could either be a sortedintdocset or
: a bitdocset. I am facing ArrayIndexOutOfBoundException when
:
: We have implemented a custom scoring function and also need to limit the
: results by score. How could we go about that? Alternatively, can we
: suppress the results early using some kind of custom filter?
in general, limiting by score is a bad idea for all of the reasons
outlined here...
you should be good to go, thanks (in advance) for helping out with your
edits.
: http://www.manning.com/turnbull/. I have already set up an account with
: the username NicoleButterfield. Many thanks in advance for your help
-Hoss
http://www.lucidworks.com/
: On SOLR3.6, I defined a string_ci field like this:
:
: <fieldType name="string_ci" class="solr.TextField"
:     sortMissingLast="true" omitNorms="true">
:   <analyzer>
:     <tokenizer class="solr.KeywordTokenizerFactory"/>
:     <filter class="solr.LowerCaseFilterFactory"/>
:   </analyzer>
: </fieldType>
:
XY-ish problem -- if you are deleting a bunch of documents by id, why have
you switched from using delete-by-id to using delete-by-query? What drove
that decision? Did you try using delete-by-query in your 3.6 setup?
: my f1 field is my key field. It is unique.
...
: On my old solr
: I need to run solr 5.1.0 on port 80 with some basic apache authentication.
: Normally, under earlier versions of solr I would set it up to run under
: tomcat, then connect it to apache web server using mod_jk.
the general gist of what you should look into is running Solr (via
./bin/solr) on
: My Solr documents contain descriptions of products, similar to a
BestBuy or
: a NewEgg catalog. I'm wondering if it were possible to push a product down
: the ranking if it contains a certain word. By this I mean it would still
(cross posted, please confine any replies to general@lucene)
A quick reminder and/or heads up for those who haven't heard yet: this
year's Lucene/Solr Revolution is happening in Austin Texas in October. The
CFP and Early bird registration are currently open. (CFP ends May 8,
Early Bird ends
: There is a possible solution here:
: https://issues.apache.org/jira/browse/LUCENE-2347 (Dump WordNet to SOLR
: Synonym format).
If you have WordNet synonyms you don't need any special code/tools to
convert them -- the current solr.SynonymFilterFactory supports wordnet
files (just specify
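A sketch of the relevant analyzer filter line (the synonyms file name is illustrative):

```xml
<filter class="solr.SynonymFilterFactory" synonyms="wn_s.pl"
        format="wordnet" ignoreCase="true" expand="true"/>
```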
: snippet <queryparser class="ower.impl.MyQParserPlugin" name="myparser" /> to
: vufind/solr/biblio/conf/sorconfig.xml.
the correct syntax should be...
<queryParser class="ower.impl.MyQParserPlugin" name="myparser" />
...note the capital P
If it's loaded properly, you should see mention of MyQParserPlugin in
: After we upgraded Solr from 4.5.1 to 4.10.4, we started seeing the
: following UnsupportedOperationException logged repeatedly. We do not
: have highlighting configured to useFastVectorHighlighter. The logged
: stack trace has given me little to go on. I was hoping this is a
: problem
: We did two SOLR qeries and they supposed to return the same results but
: did not:
the short answer is: if you want those queries to return the same results,
then you need to adjust your query time analyzer for the all_text field to
not split intra-numeric tokens on ,
i don't know *why*
https://issues.apache.org/jira/browse/SOLR-7487
: Date: Wed, 29 Apr 2015 12:23:13 -0400
: From: Scott Dawson sc.e.daw...@gmail.com
: Reply-To: solr-user@lucene.apache.org
: To: solr-user@lucene.apache.org
: Subject: Re: luceneMatchVersion
:
: Thanks Shawn. There's a closed JIRA ticket related
: 1) If the content of indexAnalyzer and queryAnalyzer are exactly the same,
: that's the same as if I have an analyzer only, right?
Effectively yes.
Subtle nuance: if you declare 1 analyzer, there is one Analyzer object in
RAM. If you declare both, then there are 2 Analyzer objects in RAM
: I am thinking to index these companies name in solr since all the
functionality already there?
:
: Do we have support for spark?
https://github.com/LucidWorks/spark-solr
Also of possible interest...
http://lucidworks.com/blog/solr-yarn/
https://github.com/LucidWorks/yarn-proto
: I would still use ConcurrentUpdateSolrServer as it is good for catching up
: when my indexing has fallen behind. I know it swallows exceptions.
I feel like you are missing the point of when/why to use
ConcurrentUpdateSolrServer, compared to your goal of load balancing
updates.
The *only* feature
because of the nature of the CSV format, the order of the fields *has* to
be deterministic and consistent for all documents, so the response writer
sorts them into the appropriate columns.
for JSON & XML formats this consistency isn't required, so instead Solr
writes out the fields of each
: I manage a SolrCloud with 5 shards. Queries go thru an AWS load balancer but
: indexing does not, so my leader1 is getting clobbered. Should my SolrJ app
: be pointing at a load balancer and if so will indexing via the
: ConcurrentUpdateSolrServer class still work?
The Concurrent part
the defaults for a <field/> come from the <fieldType/> specified by the
type attribute.
From that point, the default behavior of a <fieldType/> can vary by the
individual FieldType class implementation (ie: most fields default to
omitTermFreqAndPositions=true but TextField defaults to false) or by
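A schema sketch of that inheritance (names illustrative):

```xml
<!-- the <field/> picks up defaults from its type; explicit attributes override -->
<fieldType name="text_general" class="solr.TextField" positionIncrementGap="100"/>
<field name="body" type="text_general" stored="false"/>
```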
: Another question I have though (which fits the subject even better):
: In the log I see many
: org.apache.solr.common.SolrException: missing content stream
...
: What are possible reasons herfore?
The possible and likely reasons are that you sent an update request w/o
any
: I was under understanding that stopwords are filtered even before being
: parsed by search handler, i do have the filter in collection schema to
: filter stopwords and the analysis shows that this stopword is filtered
Generally speaking, your understanding of the order of operations for
query
: And stopword in user query is being changed to q.op=AND, i am going to
: look more into this
This is an explicitly documented feature of the edismax parser...
https://cwiki.apache.org/confluence/display/solr/The+Extended+DisMax+Query+Parser
* treats "and" and "or" as AND and OR in Lucene syntax
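So, as a sketch (assuming the default lowercase-operators behavior described above), these two queries parse equivalently under edismax:

```text
q=cats and dogs
q=cats AND dogs
```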
: To be clear, here is an example of a type from Solr's schema.xml:
:
: <field name="weight" type="float" indexed="true" stored="true"/>
:
: Here, the type is float. I'm looking for the complete list of
: out-of-the-box types supported.
what you are asking about are just symbolic names that come
different behavior.
:
: Thanks
:
: Steve
:
: On Wed, Apr 22, 2015 at 12:59 PM, Chris Hostetter hossman_luc...@fucit.org
: wrote:
:
:
: : To be clear, here is an example of a type from Solr's schema.xml:
: :
: : <field name="weight" type="float" indexed="true" stored="true"/>
: :
: : Here
1) https://lucidworks.com/blog/why-not-and-or-and-not/
2) use debug=query to understand how your (filter) query is being parsed.
: Date: Wed, 22 Apr 2015 14:56:22 +
: From: Dhutia, Devansh ddhu...@gannett.com
: Reply-To: solr-user@lucene.apache.org
: To: solr-user@lucene.apache.org
to update in my
REST service).
:
: -Original Message-
: From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
: Sent: Thursday, April 16, 2015 5:04 PM
: To: solr-user@lucene.apache.org
: Subject: Re: Spurious _version_ conflict?
:
:
: : I notice that the expected value in the error message
Off the cuff, it sounds like you are making a POST request to the
SearchHandler (ie: /search or /query) and the Content-Type you are sending
is text/xml; charset=UTF-8
In the past SearchHandler might have ignored that Content-Type, but now
that structured queries can be sent as POST data,
: It looks to me that f with qq is doing phrase search, that's not what I
: want. The data in the field title is Apache Solr Release Notes
if you don't want phrase queries then you don't want phrase queries and
that's fine -- but it wasn't clear from any of your original emails
because you
: df and q.op are the ones you are looking for.
: You can define them in defaults section.
specifically...
https://cwiki.apache.org/confluence/display/solr/InitParams+in+SolrConfig
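A sketch of setting those defaults via InitParams (values illustrative):

```xml
<initParams path="/select,/query">
  <lst name="defaults">
    <str name="df">text</str>
    <str name="q.op">AND</str>
  </lst>
</initParams>
```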
:
: Ahmet
:
:
:
: On Friday, April 17, 2015 9:18 PM, Bruno Mannina bmann...@free.fr wrote:
: Dear Solr
: I notice that the expected value in the error message matches both what
: I pass in and the index contents. But the actual value in the error
: message is different only in the last (low order) two digits.
: Consistently.
what does your client code look like? Are you sure you aren't
: The summary of your email is: client's must escape search string to prevent
: Solr from failing.
:
: It would be a nice addition to Solr to provide a new query parameter that
: tells it to treat the query text as literal text. Doing so, means you
: remove the burden placed on clients to
the short answer is that you need something to re-open the searcher -- but
i'm not going to go into specifics on how to do that because...
You are dealing with a VERY low level layer of the lucene/solr code stack
-- w/o more details on why you've written this particular bit of code (and
where
You're going to have to provide a lot more details (solr version, sample
data, full queries, details about configs, etc...) in order for anyone to
offer you meaningful assistance...
https://wiki.apache.org/solr/UsingMailingLists
I attempted to reproduce the steps you describe using Solr 5.1
: In the interests of minimizing round-trips to the database, is there any
: way to get the added/changed _version_ values returned from /update?
: Or do you always have to do a fresh get?
there is a versions=true param you can specify on updates to get the
version# back for each doc added
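A sketch of the request shape (collection and doc ids hypothetical):

```text
POST /solr/mycoll/update?versions=true
[{"id":"doc1"},{"id":"doc2"}]
```

The response then includes an "adds" list pairing each id with its newly assigned _version_, so no follow-up real-time get is needed.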
: we have quite a problem with Solr. We are running it in a config 6x3, and
: suddenly solr started to hang, taking all the available cpu on the nodes.
:
: In the threads dump noticed things like this can eat lot of CPU time
:
:
:- org.apache.solr.search.LRUCache.put(LRUCache.java:116)
:
: Does the Solr admin UIcloud view show the gettingstarted collection?
: The graph view might help. It _sounds_ like somehow you didn't
: actually create the collection.
: [Adnan]- Yes
if you can see the collection in the admin ui, can you please use the
Dump menu option in the Cloud section to
: Chris,
: Please find attached Dump
nothing jumps out at me as looking odd, but i'm not the expert on this
stuff either -- hopefully someone else can take a look.
can you provide us with some more details on what exactly you've done?
you said ...
: : What steps did you follow to create
: Probably a historical artifact.
Yeah, probably. fixing the solr example configs would be fairly trivial
-- the names are just symbolic strings -- but currently they are all
consistent with the lucene packaging names, which would be a more complex
change from a back-compat standpoint -- i've
: A simple query on the collection: ../select?q=*:* works perfectly fine.
:
: But as soon as i add sorting, it crashes the nodes with OOM:
: .../select?q=*:*&sort=unique_id asc&rows=0.
if you don't have docValues=true on your unique_id field, then sorting
requires it to build up a large in-memory
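A schema sketch of the fix (the type name is illustrative):

```xml
<field name="unique_id" type="string" indexed="true" stored="true" docValues="true"/>
```

With docValues, sorting reads the on-disk column instead of un-inverting the indexed field into heap.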
: We are using 3 shard solr cloud with 5 replicas per shard. We use SolrJ to
: execute solr queries. Often times, I cannot explain when, but we see in the
: query, isShard=true and shard.url=ip addresses of all the replicas.
what does see in the query mean? ... see where? what are you looking
Can you open a jira to add docValues support for BoolField? ... i can't
think of any good reason not to directly support that in Solr for
BoolField ... seems like just an oversight that slipped through the
cracks.
For now, your best bet is probably to use an UpdateProcessor ... maybe 2
You should start by checking out the SweetSpotSimilarity .. it was
heavily designed around the idea of dealing with things like excessively
verbose titles, and keyword stuffing in summary text ... so you can
configure your expectation for what a normal length doc is, and they
will be
: If I am finding the values of a long field for a single numeric field, I
: just do:
:
: DocValues.getNumeric(contex.reader(), myField).get(docNumber). This
: returns the value of the field and everything is good.
:
: However, my field is a multi-valued long field. So, I need to do:
:
: