Hi Mike,
I am using the whitespace analyzer with a lowercase filter. The test code is
the same as I sent above.
The content I am indexing is
String contents = "•Check for vulnerable ports •Check for old and
vulnerable versions of services on open ports •Transfer a code which";
In that "Ch
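If it helps, here is a plain-Java sketch (not Lucene itself, just a stand-in for what WhitespaceTokenizer plus LowerCaseFilter would emit) showing what a whitespace-only split plus lowercasing does to that input: the bullet character is not whitespace, so the first token comes out as "•check" rather than "check", which would explain an exact lookup of "check" finding nothing.

```java
import java.util.ArrayList;
import java.util.List;

public class BulletTokenDemo {
    // Stand-in for WhitespaceAnalyzer + LowerCaseFilter: split on
    // whitespace only, then lowercase each token.
    static List<String> analyze(String text) {
        List<String> tokens = new ArrayList<>();
        for (String t : text.split("\\s+")) {
            tokens.add(t.toLowerCase());
        }
        return tokens;
    }

    public static void main(String[] args) {
        String contents = "\u2022Check for vulnerable ports \u2022Check for old and "
                + "vulnerable versions of services on open ports \u2022Transfer a code which";
        List<String> tokens = analyze(contents);
        // The bullet is not whitespace, so it stays glued to the word:
        System.out.println(tokens.get(0));            // •check
        System.out.println(tokens.contains("check")); // false
    }
}
```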
Hi Mike,
I found the problem. The term is not indexed properly.
On Thu, Oct 31, 2013 at 7:19 AM, VIGNESH S wrote:
> Hi Mike,
>
> Please find the attached test case G1.java.
>
>
> On Wed, Oct 30, 2013 at 8:41 PM, Michael McCandless <
> luc...@mikemccandless.com> wrote:
>
>> I don't see any java s
Hi Mike,
Thanks for the helpful response. I'll try them both and see if any performance
improvement I get from the more complicated method is worth the extra complexity.
Thanks,
Steve
-----Original Message-----
From: Michael McCandless [mailto:luc...@mikemccandless.com]
Sent: Wednesd
Hi Mike,
Please find the attached test case G1.java.
On Wed, Oct 30, 2013 at 8:41 PM, Michael McCandless <
luc...@mikemccandless.com> wrote:
> I don't see any java sources here?
>
> Make sure "check" is in fact being indexed; can you boil it down to a
> small test case?
>
> Mike McCandless
>
>
Hello,
I'm attempting to set up a master/slave arrangement between two servers where
the master uses a SearcherTaxonomyManager to index and search, and the slave
is read-only, using just an IndexSearcher and TaxonomyReader.
So far I am able to publish new IndexAndTaxonomyRevisions on the master and
I don't see any java sources here?
Make sure "check" is in fact being indexed; can you boil it down to a
small test case?
Mike McCandless
http://blog.mikemccandless.com
On Wed, Oct 30, 2013 at 10:59 AM, VIGNESH S wrote:
> Hi,
>
> I have indexed the below text file "filename.txt" using the tes
Hi,
I have indexed the below text file "filename.txt" using the test code
G1.java..
When I search for "check for old", the trm.seekCeil() method gives "checking"
and "checks" and ignores "check", which is there in the text document.
It works for most cases except a few.
Please kindly help me.
--
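For what it's worth, seekCeil positions the enum on the smallest indexed term greater than or equal to the target. The ceiling behavior can be illustrated with a plain TreeSet (a hypothetical term set, not the actual index contents): if "check" itself was never indexed, seeking "check" lands on the next term, "checking".

```java
import java.util.TreeSet;

public class SeekCeilDemo {
    public static void main(String[] args) {
        // Suppose the index holds these terms but NOT "check"
        // (e.g. because it was indexed as "\u2022check"):
        TreeSet<String> terms = new TreeSet<>();
        terms.add("checking");
        terms.add("checks");
        terms.add("code");
        terms.add("\u2022check");

        // seekCeil-like behavior: smallest term >= target
        System.out.println(terms.ceiling("check"));  // checking
        System.out.println(terms.contains("check")); // false
    }
}
```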
You should try MultiDocValues first; it's trivial to use and may not
be horribly slow.
It must do a binary-search for every docID lookup.
And then if this is too slow, assuming you traverse the docIDs in
order, you can use IndexReader.leaves() to get the sub-readers. The
docIDs are just "appende
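A rough illustration of the per-docID cost being described (plain Java with hypothetical segment sizes, not the Lucene API): resolving a global docID against merged doc values implies a binary search over the leaf doc bases on every lookup, which is why going through the sub-readers directly is cheaper when you visit docIDs in order.

```java
import java.util.Arrays;

public class DocBaseDemo {
    // Hypothetical doc bases for three segments of 10, 5, and 7 docs:
    // segment 0 covers global docIDs [0,10), segment 1 [10,15), segment 2 [15,22).
    static final int[] DOC_BASES = {0, 10, 15};

    /** Map a global docID to its segment index via binary search
     *  over the doc bases, once per lookup. */
    static int segmentFor(int globalDocID) {
        int idx = Arrays.binarySearch(DOC_BASES, globalDocID);
        // binarySearch returns -(insertionPoint) - 1 when the docID
        // is not itself a segment base; recover the preceding segment.
        return idx >= 0 ? idx : -idx - 2;
    }

    public static void main(String[] args) {
        System.out.println(segmentFor(3));   // 0
        System.out.println(segmentFor(10));  // 1
        System.out.println(segmentFor(21));  // 2
    }
}
```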
OK, thanks. For some reason the test of my tokenizer didn't fail, but the
test of my token filter with my tokenizer hit the problem. All fixed.
On Wed, Oct 30, 2013 at 2:23 AM, Uwe Schindler wrote:
> I think this is more a result of the Tokenizer on top not correctly
> implementing end().