Hi Leonid,
Have you had a look at the edismax query parser[1]? Might it be of any use
for your requirement? I am not sure whether it is exactly what you are
looking for, but your question seemed related to it.
[1] http://wiki.apache.org/solr/ExtendedDisMax#Query_Syntax
Hi Aman,
This error could be because the Solr instance is looking for the
dependent logging jars. You should copy the jar files from the Solr download (
solr/example/lib/ext) into the Tomcat lib directory[1].
[1]
https://wiki.apache.org/solr/SolrLogging#Using_the_example_logging_setup_in_containers_other_than_Jet
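Concretely, for a stock Solr 4.x download and a default Tomcat install, the step in [1] amounts to something like the following (both root paths are placeholders to adapt to your layout):

```shell
# Copy the SLF4J/log4j jars shipped with the Solr example into Tomcat's shared lib
cp $SOLR_DOWNLOAD/example/lib/ext/*.jar $TOMCAT_HOME/lib/

# Copy the example log4j configuration as well, then restart Tomcat
cp $SOLR_DOWNLOAD/example/resources/log4j.properties $TOMCAT_HOME/lib/
```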
Hi Raja,
Could you please list the Solr features that you were/are
using in Solr 1.4? There have been tremendous changes from 1.4 to 4.10.
Also, you may have to explore SolrCloud to resolve the indexing
operation. But what kind of indexing problems are you facing?
You should loo
Hi All,
How do I do a phrase search followed by a term-proximity search using the
edismax query parser?
For example: if the search term is "red apples", products having the exact
phrase "red apples" in their fields should be returned first, followed by
products having "red" and "apples" within a term proximity of n.
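For reference, one way this kind of layered ranking is often approached with edismax is to boost the intact phrase highly via pf (phrase fields, which defaults to slop 0) and add a smaller boost query (bq) for a proximity match. A sketch of the request parameters, assuming a field named product_name and n = 5 (both are illustrative):

```
q=red apples
defType=edismax
qf=product_name
pf=product_name^50
bq=product_name:"red apples"~5^10
```

Exact-phrase matches then score on both boosts, while within-5-positions matches pick up only the smaller one, giving the two tiers described above.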
Thanks.
David
Hi,
Is there a way to clear the logs on the Solr admin interface's logging page?
I understand that we can change the logging level, but what if I just want
to clear the logs, then say reload the collection, and expect to see only
the latest entries and not the past ones?
Is there a manual way, or anywhere that I should clear, so th
Hi,
I am trying to obtain multi-word spellcheck suggestions. For example, I have
a title field with the content "Indefinite and fictitious large numbers", and
a user searched for "larg numberr"; in that case, I want to obtain "large
number" as a suggestion from the spellcheck component. Could you please
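One knob often used for exactly this is collation, which has the spellcheck component reassemble a corrected multi-word query from the per-term suggestions. A sketch of the request parameters (the values are illustrative):

```
spellcheck=true
spellcheck.q=larg numberr
spellcheck.collate=true
spellcheck.maxCollations=5
spellcheck.collateExtendedResults=true
```

With collation enabled, the response carries whole-query corrections such as "large number" rather than only per-term suggestions.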
Hi,
Could you please point me to a link where I can learn about the
theory behind the implementation of the word break spell checker?
We know that Solr's DirectSolrSpellChecker component uses the Levenshtein
distance algorithm; what is the algorithm used behind the word break spell
checker com
, 2014 at 7:21 PM, David Philip
wrote:
> contd..
>
> expectation was that the "ride care" should not have split into two
> tokens.
>
> It should have been as below. Please correct me/point me where I am wrong.
>
>
> Input : ridemakers, ride makers, ridemakerz,
care
o/p
ridemakersrideridemakerzrideridemarkridemakersmakerz
*ride care*
On Wed, Oct 15, 2014 at 7:16 PM, David Philip
wrote:
> Hi All,
>
>I remember using multi-words in synonyms in Solr 3.x version. In case
> of multi words, I was escaping space with back slash[\] and it work as
> intended. Ex: ride\ makers, riders,
Hi All,
I remember using multi-word synonyms in Solr 3.x. In the case of
multi words, I was escaping the space with a backslash [\] and it worked as
intended. Ex: ride\ makers, riders, rider\ guards. Each one mapped to
each other, and so when I searched for ride makers, I obtained the search
r
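For comparison, the escaped multi-word entries described above would sit in synonyms.txt like this (same terms as the example; the escaping syntax is unchanged in 4.x, but whether the mapping still fires depends on the query-side tokenizer splitting the phrase before the SynonymFilterFactory sees it, which is the usual multi-word synonym pitfall):

```
# spaces inside a synonym term are escaped with a backslash
ride\ makers, riders, rider\ guards
```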
Hi,
This question is related to mapping a SolrJ document to a bean. I have an
entity that has another entity within it. Could you please tell me how to
annotate the inner entities? The issue I am facing is that the inner entity's
fields are missing while indexing. In the example below, it is just adding Conte
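As far as I know, SolrJ's DocumentObjectBinder does not descend into nested beans, so a common workaround is to flatten the inner entity's values into @Field-annotated members of the outer bean before calling addBean. A sketch (all class, field, and getter names here are illustrative, not from the original mail):

```java
import org.apache.solr.client.solrj.beans.Field;

// Outer bean with the inner entity's values copied into flat fields,
// since the binder only maps @Field-annotated members of this class.
public class ProductDoc {
    @Field("id")
    String id;

    @Field("content_title")   // flattened from the inner Content entity
    String contentTitle;

    public ProductDoc() {}    // no-arg constructor required by the binder

    public ProductDoc(String id, Content content) {
        this.id = id;
        this.contentTitle = content.getTitle();
    }
}
// usage: server.addBean(new ProductDoc("111", content));
```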
Hi Aman,
I think it is possible.
1. Use the fl parameter.
2. Add all 4 fields to both schemas [the schemas of core 1 and core 2].
3. While querying, use &fl=id,name,type,page.
It will return all the fields. For a document that has no data for one of
these fields, the field will be an empty string.
Ex: {id:111,na
Hi,
I have a query on multi-lingual analysis.
Which one of the approaches below is the best?
1. Develop a translator that translates a/any language into
English and then use the standard English analyzer to analyse: use the
translator both at index time and at search time?
2.
efficient since I believe it uses base-64 encoding under the covers
> though...
>
> Is this an "XY" problem?
>
> Best,
> Erick
>
>
> On Wed, Oct 30, 2013 at 8:06 AM, David Philip
> wrote:
>
> > Hi All,
> >
> > What should be the field ty
Hi All,
What should the field type be if I have to store Solr's OpenBitSet value
within a Solr document object and retrieve it later for search?
OpenBitSet bits = new OpenBitSet();
bits.set(0);
bits.set(1000);
doc.addField("SolrBitSets", bits);
What should be the field type of SolrBit
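As the replies note, there is no built-in field type for an OpenBitSet; one approach is to serialize the set bits to a Base64 string, store that in a plain stored string (or binary) field, and decode it after retrieval. A minimal sketch using the stdlib java.util.BitSet as a stand-in for Lucene's OpenBitSet (the field name is the one from the original mail):

```java
import java.util.Base64;
import java.util.BitSet;

public class BitSetCodec {
    // Encode the bit set's backing bytes as Base64, suitable for a stored
    // string field: doc.addField("SolrBitSets", BitSetCodec.encode(bits));
    static String encode(BitSet bits) {
        return Base64.getEncoder().encodeToString(bits.toByteArray());
    }

    // Decode the stored string back into a BitSet after retrieving the document.
    static BitSet decode(String stored) {
        return BitSet.valueOf(Base64.getDecoder().decode(stored));
    }

    public static void main(String[] args) {
        BitSet bits = new BitSet();
        bits.set(0);
        bits.set(1000);
        BitSet roundTrip = decode(encode(bits));
        System.out.println(roundTrip.equals(bits));  // true: the bits survive the round trip
    }
}
```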
> you might be just fine denormalizing the data.
>>
>> Alternatively, there's the "pseudo join" capability to consider. I'm
>> usually hesitant to recommend that, but Joel is committing some
>> really interesting stuff in the join area which you migh
e - no document for a disease
> if it is not present for that group.
>
> -- Jack Krupansky
>
> -----Original Message----- From: David Philip
> Sent: Saturday, October 12, 2013 9:56 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Storing 2 dimension array in Solr
>
>
document
> with a suitable ID "groupN" in your example?
>
>
> On Sat, Oct 12, 2013 at 2:43 PM, David Philip
> wrote:
>
> > Hi Erick,
> >
> >We have set of groups as represented below. New columns (diseases as
> in
> > below matrix) keep comin
is feels like it may be an XY problem. _Why_ do you
> want to store a 2-dimensional array and what
> do you want to do with it? Maybe there are better
> approaches.
>
> Best
> Erick
>
>
> On Sat, Oct 12, 2013 at 2:07 AM, David Philip
> wrote:
>
> > Hi,
> >
Hi,
I have a 2-dimensional array and want it to be persisted in Solr. How can I
do that?
Sample case:
           disease1     disease2     disease3
group1     exist        slight       not found
group2     slight       not found    exist
group3     slight       exist
(exist - 1, not found - 2)
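The direction suggested in the replies is to denormalize: index one Solr document per group, with one field per disease. With a dynamic field, new disease columns need no schema change (the field naming below is an assumption):

```xml
<!-- schema.xml: any field ending in _status becomes a stored string field -->
<dynamicField name="*_status" type="string" indexed="true" stored="true"/>
```

A group then becomes a document such as id=group1 with disease1_status=exist, disease2_status=slight, disease3_status=not found.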
ents that
> had made it through the select, although how to convey which
> groups the user selected to the post filter is an open
> question.
>
> Best,
> Erick
>
> On Wed, Oct 9, 2013 at 12:23 PM, David Philip
> wrote:
> > Hi All,
> >
> > I have an
Hi All,
I have an issue in handling filters for one of our requirements and
would like to get suggestions on the best approaches.
*Use Case:*
1. We have a list of groups, and the number of groups can increase to over 1
million. Currently we have almost 90 thousand groups in the Solr search
system.
Informative. Useful. Thanks.
On Thu, Mar 14, 2013 at 1:59 PM, Chantal Ackermann <
c.ackerm...@it-agenten.com> wrote:
> Hi all,
>
>
> this is not a question. I just wanted to announce that I've written a blog
> post on how to set up Maven for packaging and automatic testing of a SOLR
> index config
/201008.mbox/%3CAANLkTi=jpph3x5tlkbj_rax5qhex6zrcguiunhqbf...@mail.gmail.com%3E
On Mon, Mar 4, 2013 at 4:08 PM, David Philip wrote:
> Hi Chris,
>
>Thank you for the reply. okay understood about *fieldWeight*.
>
> I am actually curious to know how are the documents sequenc
Hi Chris,
Thank you for the reply. Okay, understood about *fieldWeight*.
I am actually curious to know how the documents are sequenced in this case,
when the product of tf, idf, and fieldNorm is the same for both documents?
AFAIK, at the first step, documents are sequenced based on
fieldWeight(p
500 Raw_text^1
>
> It's not strictly layered, but by playing with the numbers you can achieve
> that effect
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
> Solr Training - www.solrtraining.com
>
> 26. feb. 2013 kl. 14:55 skrev David