You're correct Mukesh, that's the JIRA with pretty much all of that discussion.
On Fri, Aug 8, 2014 at 8:44 PM, Mukesh Jha wrote:
> Looks like https://issues.apache.org/jira/browse/SOLR-5473 is the story :)
Looks like https://issues.apache.org/jira/browse/SOLR-5473 is the story :)
On Fri, Aug 8, 2014 at 9:30 PM, Mukesh Jha wrote:
Hey *Shawn*, *Erik*,
I was wondering if there is a JIRA story for splitting the current
clusterstate.json into collection-specific clusterstate configs that I can
track.
I looked around a bit but couldn't get my hands on anything useful on that.
On Mon, Apr 28, 2014 at 7:43 AM, Shawn Heisey wrote:
The word delimiter filter is actually combining "100-001" into "100001". You
have BOTH catenateNumbers AND catenateAll, so "100-R8989" should generate
THREE tokens: the concatenated numbers "100", the concatenated words "R8989",
and both numbers and words concatenated, "100R8989".
-- Jack Krupansky
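A minimal sketch of a schema.xml fieldType with the catenate options discussed above enabled (the type name and the exact filter chain here are hypothetical, not taken from the thread):

```xml
<!-- Hypothetical fieldType illustrating the catenate options:
     catenateNumbers="1" joins runs of digit parts ("100-001" -> "100001"),
     catenateAll="1" joins all parts ("100-R8989" -> "100R8989"). -->
<fieldType name="text_wdf" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.WordDelimiterFilterFactory"
            generateWordParts="1"
            generateNumberParts="1"
            catenateWords="1"
            catenateNumbers="1"
            catenateAll="1"
            preserveOriginal="1"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

The Analysis page in the admin UI shows exactly which of these options produced each token, which is the quickest way to check a chain like this.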
You haven't really explained what you want to _do_. If you don't
want to split words up, just take WordDelimiterFilterFactory out.
Or do you want to split sometimes but not others?
Best,
Erick
On Fri, Aug 8, 2014 at 12:27 PM, EXTERNAL Taminidi Ravi (ETI,
Automotive-Service-Solutions) wrote:
Solr scales based on the number of documents, not fields or collections. Dozens
of fields or collections are perfectly fine. Hundreds of fields or
collections CAN work, but you have to be extra diligent and use more
powerful hardware. Millions and even billions of DOCUMENTS are fine - that's
the prim
In our application there are many complicated filter conditions, and very often
those conditions are specific to each user (like whether or not a doc is
important to, or already read by, a user). There are two possible ways to
implement those filters in Lucene:
1/ create many fields
2/ create many collections
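For the many-fields approach (option 1), per-user flags are often modeled as a multivalued field of user ids and applied as filter queries; a rough sketch, assuming a hypothetical "read_by" field and default host/collection names:

```shell
# Hypothetical: each doc carries a multivalued "read_by" field of user ids.
# Restrict results to docs that user u123 has NOT read yet:
curl "http://localhost:8983/solr/collection1/select?q=*:*&fq=-read_by:u123&wt=json"
```

Filter queries are cached independently of the main query, which matters when the same per-user condition is applied across many searches.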
Edismax
Field Value (Index) = a & w
Field Value (Query) = a & w restaurant
The last token filter for both the index and query chains is LengthFilter.
So the very bottom of the Analyse Value results looks like:
LF | a&w | a&w
LF | a&w | restaurant
The bolded a&w above indicates a match.
In an a
Hi, I have a situation where I don't want to split the words. I am using the
WordDelimiterFilter, and it mostly works well.
For example, if I send 100-001 to the analyzer, it does not split the keyword,
but if I send 100-R8989 then the word delimiter filter splits it to 100 | R8989; below is
the field analyzer an
On 8/8/2014 8:29 AM, Sören Schneider wrote:
Hi Shawn,
Thanks for your helpful reply. Your code snippet works perfectly, but I
have another question. Do I have to manually move the index files in the
appropriate directories of the Solr cores, that should be swapped?
Could you please post the content of your "updateDirectories()" method?
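For reference, core swapping itself goes through the CoreAdmin API rather than through moving index files by hand; a minimal sketch, assuming the default port and hypothetical core names "live" and "rebuild":

```shell
# Swap the two cores: Solr swaps which index each core name points at,
# so the index files themselves do not need to be moved manually.
curl "http://localhost:8983/solr/admin/cores?action=SWAP&core=live&other=rebuild"
```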
And the Solr Support list is where people register their available
consulting services:
http://wiki.apache.org/solr/Support
-- Jack Krupansky
-----Original Message-----
From: Alexandre Rafalovitch
Sent: Friday, August 8, 2014 9:12 AM
To: solr-user
Subject: Re: Help Required
We don't mediate job offers/positions on this list. We help people to
learn how to build these kinds of things themselves. If you are a
developer, you may find that it would take only several days to get a
strong feel for Solr, especially if you start from tutorials/the right
books.
To find developers,
Abhishek,
The first part of your question is interesting, but the specific
details are probably the wrong level for you to concentrate on. The
issues you will be facing are not about which file does what - that's
more about performance and inner details. I feel you should worry more about
the fields, de
That would be more of a question for the Lucene dev list, but... the
standard answer there would be for you to become familiar with the Lucene
source code and trace through it yourself.
It's a "Lucene directory", not a "Solr directory" - Solr is a server built
on top of the Lucene search libra
Hello,
I am fairly new to Solr. Can someone please help me understand how a
query is processed in Solr? What I want to understand is, from the time
it hits Solr, which files it consults to process the query, i.e., the order in which
.tvx, .tvd and other files are accessed. Basically I would like to
All tokens produced still have the same position as their initial
position, so no.
-Original message-
> From:Johannes Siegert
> Sent: Friday 8th August 2014 11:11
> To: solr-user@lucene.apache.org
> Subject: NGramTokenizer influence to length normalization?
>
> Hi,
>
> does the
It's also intermittent -- I had to refresh 30 times before it would
happen. So far it won't happen on back to back refreshes. The shortest
gap between having it exhibit the odd behavior was 6 refreshes. I spent
about 10 minutes just hitting the refresh button, and I only had it happen
a handful
Dear Sirs,
I wonder if you can help me?
I'm looking for a developer who uses Solr to build for me a faceted search
facility using location. In a nutshell, I need this functionality as in here:
www.citypantry.com
wwwdinein.
Here the vendor, via Google Maps, enters the area/radius they cover, which en
Hi,
I am using Solr 3.6.1 and trying to run a range query on a field which was defined
as integer, but I'm not getting accurate results. Below is my schema.
The input will be as [-1 TO 0] or [2 TO 5].
My query string will be interestlevel:[-1 TO 0] - this is returning only 2
records from Solr.
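If the field uses one of the legacy plain int types, range queries compare the raw indexed terms lexicographically, which breaks for negative numbers; a sketch of a Trie-based definition that handles numeric ranges correctly (the field name matches the question, everything else is an assumption) - note a full reindex is required after changing the type:

```xml
<!-- Trie-based int type: the indexed form sorts numerically, so
     interestlevel:[-1 TO 0] matches as expected. Reindex after changing. -->
<fieldType name="tint" class="solr.TrieIntField" precisionStep="8"
           positionIncrementGap="0"/>
<field name="interestlevel" type="tint" indexed="true" stored="true"/>
```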
Hi,
does the NGramTokenizer have an influence on the length normalization?
Thanks.
Johannes
First of all, thank you very much for the answer, James. It is very complete
and it gives us several alternatives :)
I think we will try the cache approach first, as, after solving this
problem https://issues.apache.org/jira/browse/SOLR-5954, the performance has
been improved, so along with the cach