Re: Regarding Compression Tool

2013-09-16 Thread Jebarlin Robertson
I am using Apache Lucene on Android. I have around 1 GB of text documents
(logs). When I index these documents with
*new Field(ContentIndex.KEY_TEXTCONTENT, contents, Field.Store.YES,
Field.Index.ANALYZED, TermVector.WITH_POSITIONS_OFFSETS)*, the index
directory grows to 1.59 GB.
But without the field store it is around 0.59 GB. If Lucene
needs this much space to create the index and to store the
original text just to use the highlight feature, that will be a big problem for
mobile devices. So I just want some help: is there any alternative
way to do this, without occupying more space, to use the highlight feature on
Android-powered devices?


On Sun, Sep 15, 2013 at 3:26 AM, Erick Erickson wrote:

> bq: I thought that I can use the CompressionTool to minimize the memory
> size.
>
> This doesn't make a lot of sense. Highlighting needs the raw data to
> figure out what to highlight, so I don't see how the CompressionTool
> will help you there.
>
> And unless you have a huge document and only a very few of them, then
> the memory occupied by the uncompressed data should be trivial
> compared to the various low-level caches. This really is seeming like
> an XY problem. Perhaps if you backed up and explained _why_ this
> seems important to do people could be more helpful.
>
>
> Best,
> Erick
>
>
> > On Sat, Sep 14, 2013 at 12:21 PM, Jebarlin Robertson wrote:
>
> > Thank you very much Erick. Actually I was using the Highlighter tool, which
> > needs the entire data to be stored to get the relevant searched sentence.
> > But when I use that, it consumes more memory (indexed data size plus the
> > Store.YES copy of the entire content) than the actual documents' size.
> > I thought that I could use the CompressionTool to minimize the memory size.
> > It would help if there is any possibility or way to store the entire
> > content and still use the highlighter feature.
> >
> > Thank you
> >
> >
> > On Fri, Sep 13, 2013 at 6:54 PM, Erick Erickson wrote:
> >
> > > Compression is for the _stored_ data, which is not searched. Ignore
> > > the compression and ensure that you index the data.
> > >
> > > The compressing/decompressing for looking at stored
> > > values is, I believe, done at a very low level that you don't
> > > need to care about at all.
> > >
> > > If you index the data in the field, you shouldn't have to do
> > > anything special to search it.
> > >
> > > Best,
> > > Erick
> > >
> > >
> > > On Fri, Sep 13, 2013 at 1:19 AM, Jebarlin Robertson <jebar...@gmail.com> wrote:
> > >
> > > > Hi,
> > > >
> > > > I am trying to store all the Field values using CompressionTool, But
> > > When I
> > > > search for any content, it is not finding any results.
> > > >
> > > > Can you help me, how to create the Field with CompressionTool to add
> to
> > > the
> > > > Document and how to decompress it when searching for any content in
> it.
> > > >
> > > > --
> > > > Thanks & Regards,
> > > > Jebarlin Robertson.R
> > > >
> > >
> >
> >
> >
> > --
> > Thanks & Regards,
> > Jebarlin Robertson.R
> > GSM: 91-9538106181.
> >
>



-- 
Thanks & Regards,
Jebarlin Robertson.R
GSM: 91-9538106181.
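[Editor's note: Lucene's stored-field compression (and the CompressionTools class discussed above) ultimately boils down to java.util.zip. The following self-contained sketch, using only the JDK (class and method names here are illustrative, not Lucene API), shows the trade-off being discussed: repetitive log text compresses well, but decompression is needed every time the raw text is fetched, e.g. for highlighting.]

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Illustrative helper: what compressing stored content (and getting it
// back for highlighting) looks like at the java.util.zip level.
public class StoredContentCompression {

    static byte[] compress(byte[] input) {
        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
        deflater.setInput(input);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        return out.toByteArray();
    }

    static byte[] decompress(byte[] input) throws DataFormatException {
        Inflater inflater = new Inflater();
        inflater.setInput(input);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!inflater.finished()) {
            out.write(buf, 0, inflater.inflate(buf));
        }
        inflater.end();
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        // Log-like text is highly repetitive, so it compresses well.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1000; i++) {
            sb.append("2013-09-16 12:00:0").append(i % 10)
              .append(" INFO request handled\n");
        }
        byte[] raw = sb.toString().getBytes(StandardCharsets.UTF_8);
        byte[] packed = compress(raw);
        String restored = new String(decompress(packed), StandardCharsets.UTF_8);
        System.out.println(packed.length < raw.length / 2); // much smaller
        System.out.println(restored.equals(sb.toString())); // lossless round trip
    }
}
```

As Erick notes, this only shrinks the stored copy; it cannot remove the need to store the raw text if the Highlighter is to re-analyze it.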


Re: possible latency increase from Lucene versions 4.1 to 4.4?

2013-09-16 Thread Adrien Grand
Hi John,

I just had a look at Mike's benchmarks [1][2], which don't show any
performance change over roughly the last year. But they only test a
conjunction of two terms, so it could still be that latency worsened
for more complex queries.

[1] http://people.apache.org/~mikemccand/lucenebench/AndHighMed.html
[2] http://people.apache.org/~mikemccand/lucenebench/AndHighHigh.html

-- 
Adrien

-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org



Multiple field instances and Field.Store.NO

2013-09-16 Thread Alan Burlison
I'm creating multiple instances of a field, some with Field.Store.YES
and some with Field.Store.NO, with Lucene 4.4. If Field.Store.YES is
set then I see multiple instances of the field in the documents in the
resulting index, if I use Field.Store.NO then I only see a single
field. Is that expected or am I doing something dumb?

Thanks,

-- 
Alan Burlison
--

-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org



Re: Multiple field instances and Field.Store.NO

2013-09-16 Thread Ian Lea
Not exactly dumb, and I can't tell you exactly what is happening here,
but Lucene stores some info at the index level rather than the field
level, and things can get confusing if you don't use the same Field
definition consistently for a field.

From the javadocs for org.apache.lucene.document.Field:

NOTE: the field type is an IndexableFieldType. Making changes to the
state of the IndexableFieldType will impact any Field it is used in.
It is strongly recommended that no changes be made after Field
instantiation.

--
Ian.


On Mon, Sep 16, 2013 at 11:33 AM, Alan Burlison  wrote:
> I'm creating multiple instances of a field, some with Field.Store.YES
> and some with Field.Store.NO, with Lucene 4.4. If Field.Store.YES is
> set then I see multiple instances of the field in the documents in the
> resulting index, if I use Field.Store.NO then I only see a single
> field. Is that expected or am I doing something dumb?
>
> Thanks,
>
> --
> Alan Burlison
> --
>
> -
> To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
> For additional commands, e-mail: java-user-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org



Re: Multiple field instances and Field.Store.NO

2013-09-16 Thread Michael McCandless
That is strange.

If you use Field.Store.NO for all fields for a given document then no
field should have been stored.  Can you boil this down to a small test
case?

Mike McCandless

http://blog.mikemccandless.com


On Mon, Sep 16, 2013 at 6:33 AM, Alan Burlison  wrote:
> I'm creating multiple instances of a field, some with Field.Store.YES
> and some with Field.Store.NO, with Lucene 4.4. If Field.Store.YES is
> set then I see multiple instances of the field in the documents in the
> resulting index, if I use Field.Store.NO then I only see a single
> field. Is that expected or am I doing something dumb?
>
> Thanks,
>
> --
> Alan Burlison
> --
>
> -
> To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
> For additional commands, e-mail: java-user-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org



Re: Multiple field instances and Field.Store.NO

2013-09-16 Thread Alan Burlison
On 16 September 2013 11:47, Ian Lea  wrote:

> Not exactly dumb, and I can't tell you exactly what is happening here,
> but lucene stores some info at the index level rather than the field
> level, and things can get confusing if you don't use the same Field
> definition consistently for a field.
>
> From the javadocs for org.apache.lucene.document.Field:
>
> NOTE: the field type is an IndexableFieldType. Making changes to the
> state of the IndexableFieldType will impact any Field it is used in.
> It is strongly recommended that no changes be made after Field
> instantiation.

I'm not changing the field type between instances of a field. I have
several fields in each document, some stored and some unstored.  Any
given field may have multiple instances in a single record, but they
are all created with the same type & flags. Stored fields end up with
multiple instances; unstored ones don't.

-- 
Alan Burlison
--

-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org



Re: Multiple field instances and Field.Store.NO

2013-09-16 Thread Alan Burlison
On 16 September 2013 12:40, Michael McCandless
 wrote:

> If you use Field.Store.NO for all fields for a given document then no
> field should have been stored.  Can you boil this down to a small test
> case?

repeated calls to

doc.add(new TextField("content", c, Field.Store.NO));

result in a single instance of the field showing up in Luke whereas

doc.add(new TextField("content", c, Field.Store.YES));

results in multiple instances

-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org



Re: Multiple field instances and Field.Store.NO

2013-09-16 Thread Michael McCandless
On Mon, Sep 16, 2013 at 9:52 AM, Alan Burlison  wrote:
> On 16 September 2013 12:40, Michael McCandless
>  wrote:
>
>> If you use Field.Store.NO for all fields for a given document then no
>> field should have been stored.  Can you boil this down to a small test
>> case?
>
> repeated calls to
>
> doc.add(new TextField("content", c, Field.Store.NO));
>
> result in a single instance of the field showing up in Luke whereas
>
> doc.add(new TextField("content", c, Field.Store.YES));
>
> results in multiple instances

Is Luke showing you stored fields?  If so, this makes no sense ...
Field.Store.NO (single or multiple calls) should have resulted in no
stored fields.

Mike McCandless

http://blog.mikemccandless.com

-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org
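[Editor's note: a minimal test case along the lines Mike asks for might look like the following. This is a sketch against the Lucene 4.4 API; the expectation in the final comment is the documented behavior, not a verified output.]

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class StoreNoTestCase {
    public static void main(String[] args) throws Exception {
        RAMDirectory dir = new RAMDirectory();
        IndexWriterConfig cfg = new IndexWriterConfig(
                Version.LUCENE_44, new StandardAnalyzer(Version.LUCENE_44));
        IndexWriter writer = new IndexWriter(dir, cfg);

        Document doc = new Document();
        // Two instances of the same field, both unstored.
        doc.add(new TextField("content", "first value", Field.Store.NO));
        doc.add(new TextField("content", "second value", Field.Store.NO));
        writer.addDocument(doc);
        writer.close();

        DirectoryReader reader = DirectoryReader.open(dir);
        // With Store.NO, no stored values should come back at all.
        String[] stored = reader.document(0).getValues("content");
        System.out.println("stored values: " + stored.length); // expect 0
        reader.close();
    }
}
```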



Re: org.apache.lucene.analysis.icu.ICUNormalizer2Filter -- why Token?

2013-09-16 Thread Robert Muir
Mostly because our tokenizers like StandardTokenizer will tokenize the
same way regardless of normalization form or whether it's normalized at
all?

But for other tokenizers, such a charfilter should be useful: there is
a JIRA for it, but it has some unresolved issues

https://issues.apache.org/jira/browse/LUCENE-4072

On Sun, Sep 15, 2013 at 7:05 PM, Benson Margulies  wrote:
> Can anyone shed light as to why this is a token filter and not a char
> filter? I'm wishing for one of these _upstream_ of a tokenizer, so that the
> tokenizer's lookups in its dictionaries are seeing normalized contents.

-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org



Lucene Query Syntax with analyzed and unanalyzed text

2013-09-16 Thread Scott Smith
I want to be sure I understand this correctly.  Suppose I have a search that 
I'm going to run through the query parser that looks like:

body:"some phrase" AND keyword:"my-keyword"

Clearly "body" and "keyword" are field names.  However, the additional 
information is that the "body" field is analyzed and the "keyword" field is not.

I don't believe this will work.  I'm assuming that the query parser can't 
correctly determine which fields are analyzed and which are not.

Is there an easy way to handle this?


Re: Lucene Query Syntax with analyzed and unanalyzed text

2013-09-16 Thread Ian Lea
org.apache.lucene.analysis.miscellaneous.PerFieldAnalyzerWrapper in
analyzers-common is what you need.  There's an example in the
javadocs.  Build and use the wrapper instance in place of
StandardAnalyzer or whatever you are using now.


--
Ian.


On Mon, Sep 16, 2013 at 5:36 PM, Scott Smith  wrote:
> I want to be sure I understand this correctly.  Suppose I have a search that 
> I'm going to run through the query parser that looks like:
>
> body:"some phrase" AND keyword:"my-keyword"
>
> clearly "body" and "keyword" are field names.  However, the additional 
> information is that the "body" field is analyzed and the "keyword" field is 
> not.
>
> I don't believe this will work.  I'm assuming that the query parser can't 
> correctly determine which fields are analyzed and which are not.
>
> Is there an easy way to handle this?

-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org
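[Editor's note: Ian's suggestion, roughly, in code. This is a sketch against the Lucene 4.x API; KeywordAnalyzer stands in here for "not analyzed", which matches the usual way such a keyword field is indexed.]

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.core.KeywordAnalyzer;
import org.apache.lucene.analysis.miscellaneous.PerFieldAnalyzerWrapper;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;

public class PerFieldExample {
    public static void main(String[] args) throws Exception {
        // "keyword" is left whole by KeywordAnalyzer; every other field
        // (including "body") falls through to StandardAnalyzer.
        Map<String, Analyzer> perField = new HashMap<String, Analyzer>();
        perField.put("keyword", new KeywordAnalyzer());
        Analyzer analyzer = new PerFieldAnalyzerWrapper(
                new StandardAnalyzer(Version.LUCENE_44), perField);

        // Use the wrapper everywhere you previously passed a single
        // analyzer: at index time (IndexWriterConfig) and at query time.
        QueryParser parser = new QueryParser(Version.LUCENE_44, "body", analyzer);
        Query q = parser.parse("body:\"some phrase\" AND keyword:\"my-keyword\"");
        System.out.println(q);
    }
}
```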



Re: Regarding Compression Tool

2013-09-16 Thread Mark Miller
Have you considered storing your indexes server-side? I haven't used
compression but usually the trade-off of compression is CPU usage which
will also be a drain on battery life. Or maybe consider how important the
highlighter is to your users - is it worth the trade-off of either disk
space or battery life? If it's more of a nice-to-have then maybe hold off
on the feature for a later release until you've had some feedback and some
more time to figure out the best solution. Of course I don't know much
about your application, so take my advice with a grain of salt.


On Mon, Sep 16, 2013 at 2:22 AM, Jebarlin Robertson wrote:

> I am using Apache Lucene on Android. I have around 1 GB of text documents
> (logs). When I index these documents with
> *new Field(ContentIndex.KEY_TEXTCONTENT, contents, Field.Store.YES,
> Field.Index.ANALYZED, TermVector.WITH_POSITIONS_OFFSETS)*, the index
> directory grows to 1.59 GB.
> But without the field store it is around 0.59 GB. If Lucene
> needs this much space to create the index and to store the
> original text just to use the highlight feature, that will be a big problem
> for mobile devices. So I just want some help: is there any alternative
> way to do this, without occupying more space, to use the highlight feature
> on Android-powered devices?



-- 
Mark J. Miller
Blog: http://www.developmentalmadness.com
LinkedIn: http://www.linkedin.com/in/developmentalmadness


Re: Multiple field instances and Field.Store.NO

2013-09-16 Thread Alan Burlison
> Is Luke showing you stored fields?  If so, this makes no sense ...
> Field.Store.NO (single or multiple calls) should have resulted in no
> stored fields.

It shows the field but shows the content as 

-- 
Alan Burlison
--

-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org



Re: org.apache.lucene.analysis.icu.ICUNormalizer2Filter -- why Token?

2013-09-16 Thread Benson Margulies
Thanks, I might pitch in.


On Mon, Sep 16, 2013 at 12:58 PM, Robert Muir  wrote:

> Mostly because our tokenizers like StandardTokenizer will tokenize the
> same way regardless of normalization form or whether its normalized at
> all?
>
> But for other tokenizers, such a charfilter should be useful: there is
> a JIRA for it, but it has some unresolved issues
>
> https://issues.apache.org/jira/browse/LUCENE-4072
>
> On Sun, Sep 15, 2013 at 7:05 PM, Benson Margulies 
> wrote:
> > Can anyone shed light as to why this is a token filter and not a char
> > filter? I'm wishing for one of these _upstream_ of a tokenizer, so that
> the
> > tokenizer's lookups in its dictionaries are seeing normalized contents.
>
> -
> To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
> For additional commands, e-mail: java-user-h...@lucene.apache.org
>
>


org.apache.lucene.analysis.icu.ICUNormalizer2Filter -- why Token?

2013-09-16 Thread Benson Margulies
Can anyone shed light as to why this is a token filter and not a char
filter? I'm wishing for one of these _upstream_ of a tokenizer, so that the
tokenizer's lookups in its dictionaries are seeing normalized contents.


Re: org.apache.lucene.analysis.icu.ICUNormalizer2Filter -- why Token?

2013-09-16 Thread Robert Muir
That would be great!

On Mon, Sep 16, 2013 at 1:41 PM, Benson Margulies  wrote:
> Thanks, I might pitch in.
>
>
> On Mon, Sep 16, 2013 at 12:58 PM, Robert Muir  wrote:
>
>> Mostly because our tokenizers like StandardTokenizer will tokenize the
>> same way regardless of normalization form or whether its normalized at
>> all?
>>
>> But for other tokenizers, such a charfilter should be useful: there is
>> a JIRA for it, but it has some unresolved issues
>>
>> https://issues.apache.org/jira/browse/LUCENE-4072
>>
>> On Sun, Sep 15, 2013 at 7:05 PM, Benson Margulies 
>> wrote:
>> > Can anyone shed light as to why this is a token filter and not a char
>> > filter? I'm wishing for one of these _upstream_ of a tokenizer, so that
>> the
>> > tokenizer's lookups in its dictionaries are seeing normalized contents.
>>
>> -
>> To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: java-user-h...@lucene.apache.org
>>
>>

-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org



exception while writing to index

2013-09-16 Thread nischal reddy
Hi,

I am getting an exception while indexing files. I tried debugging but
couldn't figure out the problem.

I have a custom analyzer which creates the token stream. I am indexing
around 15k files; some time after indexing starts I get this
exception:


java.lang.IllegalArgumentException: maxValue must be non-negative (got: -1)
at
org.apache.lucene.util.packed.PackedInts.bitsRequired(PackedInts.java:1184)
at
org.apache.lucene.codecs.lucene41.ForUtil.bitsRequired(ForUtil.java:243)
at
org.apache.lucene.codecs.lucene41.ForUtil.writeBlock(ForUtil.java:164)
at
org.apache.lucene.codecs.lucene41.Lucene41PostingsWriter.addPosition(Lucene41PostingsWriter.java:322)
at
org.apache.lucene.index.FreqProxTermsWriterPerField.flush(FreqProxTermsWriterPerField.java:534)
at
org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
at org.apache.lucene.index.TermsHash.flush(TermsHash.java:116)
at org.apache.lucene.index.DocInverter.flush(DocInverter.java:53)
at
org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:81)
at
org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:501)
at
org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:478)
at
org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:615)
at
org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2748)
at
org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2897)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2872)
at
com.progress.openedge.pdt.search.index.OEIndexer.flushWriter(OEIndexer.java:597)
at
com.progress.openedge.pdt.search.index.OEIndexer.access$8(OEIndexer.java:594)
at
com.progress.openedge.pdt.search.index.OEIndexer$3.run(OEIndexer.java:282)
at
com.progress.openedge.pdt.search.index.OEIndexer$IndexJob.run(OEIndexer.java:620)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:54)


I seem to be getting this error when I hit the max RAM buffer size and
Lucene tries to flush it.

When I debugged, I found a couple of negative values in the
offsetStartDeltaBuffer field of "Lucene41PostingsWriter".

Am I doing something wrong with the offsets? I am not sending any negative
offset values.

What could I possibly be doing wrong? I need your help.

Thanks in advance,
Nischal Y
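[Editor's note: one way to narrow this down is to wrap the custom analyzer's chain in a throwaway filter that fails fast on bad offsets. This is a debugging sketch, not Lucene API; the class name is made up, but TokenFilter and OffsetAttribute are standard Lucene 4.x.]

```java
import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;

// Hypothetical debugging filter: throws as soon as a token reports
// negative or backwards-going offsets, which is one way to end up with
// negative deltas inside Lucene41PostingsWriter at flush time.
public final class OffsetCheckingFilter extends TokenFilter {
    private final OffsetAttribute offsetAtt = addAttribute(OffsetAttribute.class);
    private int lastStart = 0;

    public OffsetCheckingFilter(TokenStream input) {
        super(input);
    }

    @Override
    public boolean incrementToken() throws IOException {
        if (!input.incrementToken()) {
            return false;
        }
        int start = offsetAtt.startOffset();
        int end = offsetAtt.endOffset();
        if (start < 0 || end < start || start < lastStart) {
            throw new IllegalStateException("bad offsets: start=" + start
                    + " end=" + end + " lastStart=" + lastStart);
        }
        lastStart = start;
        return true;
    }

    @Override
    public void reset() throws IOException {
        super.reset();
        lastStart = 0;
    }
}
```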


IndexUpgrader (4.4.0) fails when -verbose is not set

2013-09-16 Thread Bruce Karsh
Here it fails because -verbose is not set:

$ java -cp ./lucene-core-4.4-SNAPSHOT.jar
org.apache.lucene.index.IndexUpgrader ./INDEX
Exception in thread "main" java.lang.IllegalArgumentException: printStream
must not be null
 at
org.apache.lucene.index.IndexWriterConfig.setInfoStream(IndexWriterConfig.java:514)
at org.apache.lucene.index.IndexUpgrader.<init>(IndexUpgrader.java:126)
 at org.apache.lucene.index.IndexUpgrader.main(IndexUpgrader.java:109)

Here it works with -verbose set:

$ java -cp ./lucene-core-4.4-SNAPSHOT.jar
org.apache.lucene.index.IndexUpgrader -verbose ./INDEX
IFD 0 [Mon Sep 16 18:25:53 PDT 2013; main]: init: current segments file is
"segments_5";
deletionPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy@42698403

...

IW 0 [Mon Sep 16 18:25:53 PDT 2013; main]: at close: _2(4.4):C4


RE: IndexUpgrader (4.4.0) fails when -verbose is not set

2013-09-16 Thread Uwe Schindler
Hi Bruce,

Thanks for investigating! Can you open a bug report on 
https://issues.apache.org/jira/browse/LUCENE ?

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

> -Original Message-
> From: Bruce Karsh [mailto:bruceka...@gmail.com]
> Sent: Tuesday, September 17, 2013 3:27 AM
> To: java-user@lucene.apache.org
> Subject: IndexUpgrader (4.4.0) fails when -verbose is not set
> 
> Here it fails because -verbose is not set:
> 
> $ java -cp ./lucene-core-4.4-SNAPSHOT.jar
> org.apache.lucene.index.IndexUpgrader ./INDEX Exception in thread "main"
> java.lang.IllegalArgumentException: printStream must not be null  at
> org.apache.lucene.index.IndexWriterConfig.setInfoStream(IndexWriterConf
> ig.java:514)
> at org.apache.lucene.index.IndexUpgrader.<init>(IndexUpgrader.java:126)
>  at org.apache.lucene.index.IndexUpgrader.main(IndexUpgrader.java:109)
> 
> Here it works with -verbose set:
> 
> $ java -cp ./lucene-core-4.4-SNAPSHOT.jar
> org.apache.lucene.index.IndexUpgrader -verbose ./INDEX IFD 0 [Mon Sep 16
> 18:25:53 PDT 2013; main]: init: current segments file is "segments_5";
> deletionPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy
> @42698403
> 
> ...
> 
> IW 0 [Mon Sep 16 18:25:53 PDT 2013; main]: at close: _2(4.4):C4


-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org
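[Editor's note: until the bug is fixed, the command line needs -verbose. Programmatically, the failure can be sidestepped by passing a real PrintStream to the public constructor. This is a sketch against the Lucene 4.4 API (constructor signature as I recall it; not verified here).]

```java
import java.io.File;

import org.apache.lucene.index.IndexUpgrader;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class UpgradeIndex {
    public static void main(String[] args) throws Exception {
        Directory dir = FSDirectory.open(new File("./INDEX"));
        // Passing a non-null PrintStream avoids the
        // "printStream must not be null" failure that the
        // command-line tool hits when -verbose is omitted.
        IndexUpgrader upgrader =
                new IndexUpgrader(dir, Version.LUCENE_44, System.out, false);
        upgrader.upgrade();
        dir.close();
    }
}
```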