Hi Juan,

Using facets or actual searches against edge n-gram fields was going to be my 
reply, but you're already aware of that.
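
For anyone else following along, the facet approach can handle the "filter by 
an additional field" requirement, because facet counts respect fq while the 
terms component ignores filters entirely. A rough sketch (field names like 
"suggest" and "category" are made up for illustration):

```
/solr/collection1/select?q=*:*&rows=0
    &fq=category:books        <- the extra filter the terms component can't apply
    &facet=true
    &facet.field=suggest
    &facet.prefix=wor          <- matches indexed (lowercased) terms, so lowercase the user input
    &facet.limit=10
```

The facet values that come back are your suggestions, already constrained to 
documents matching the fq.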

My personal advice would be to just bite the bullet and use copyField, as you 
already know how to do, and let your index size grow.

It's really common to copy fields to multiple places; that's how many fancy 
systems work behind the scenes.

Rationalizations:
1: You seem to already understand how it works
2: It's what a lot of other folks do
3: Relatively speaking, adding disk space is often easier and cheaper than 
other options (though of course there are exceptions)
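
To make the copyField suggestion concrete, here's a rough schema.xml sketch, 
with made-up field and type names (the 6+ source fields would each get their 
own copyField line). The per-token edge n-grams cover matching "wor" inside 
"Hello World", since StandardTokenizer splits the phrase into separate tokens 
first:

```xml
<!-- Sketch only: "text_suggest" and the field names are hypothetical -->
<fieldType name="text_suggest" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- "world" indexes as wo, wor, worl, world -->
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="2" maxGramSize="15"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

<field name="suggest" type="text_suggest" indexed="true" stored="false"
       multiValued="true"/>

<!-- one copyField per source field that needs autosuggest -->
<copyField source="title"  dest="suggest"/>
<copyField source="author" dest="suggest"/>
```

Since "suggest" is a regular indexed field, a plain query against it can be 
combined with fq on any other field, which is the part the terms component 
can't do.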

The other "joke" that I make sometimes, but it's actually really TRUE, is that 
"disk space is cheaper than time". What I mean is that disk space has 
increased by a factor of one million (6 orders of magnitude!) in the past 30 
years, while the work week is still only about 40 hours long. (If time had 
increased like disk space, we would now get 40,000,000 hours of work done each 
week!)

I realize you say that this is an issue for your project, but I just wanted to 
chime in.

Mark

--
Mark Bennett / LucidWorks: Search & Big Data / 
[email protected]<mailto:[email protected]>
Office: 408-898-4201 / Telecommute: 408-733-0387 / Cell: 408-829-6513

On Aug 15, 2014, at 9:55 AM, Juan Pablo Albuja 
<[email protected]<mailto:[email protected]>> wrote:

Hi guys, I have the following needs, and I would really appreciate it if 
someone could give me a status on whether we are going to have, in the future, 
a terms component that can accomplish the following:

I need to implement a Solr autosuggest that supports:
1.       Getting autosuggestions over multivalued fields
2.       Case-insensitivity
3.       Matching content in the middle; for example, I have the value "Hello 
World" indexed, and I need to get that value when the user types "wor"
4.       Filtering by an additional field.

I was using the terms component because with it I can satisfy points 1 to 3, 
but point 4 is not possible. I also looked at faceted searches and 
N-grams/Edge N-Grams, but the problem with those approaches is that I need to 
copy fields over to tokenize them or apply grams, and I don't want to do that 
because I have more than 6 fields that need autosuggest; my index is big (more 
than 400k documents) and I don't want to increase its size.
I was trying to extend the terms component in order to add an additional 
filter, but it uses TermsEnum, which is an iterator over the terms of a 
specific field, and I couldn't figure out how to filter it in a really 
efficient way.

When will we have that functionality in the terms component? Is there a 
workaround in the meantime?

Thanks,



Juan Pablo Albuja
Senior Developer
