[jira] [Resolved] (LUCENE-3403) Term vectors missing after addIndexes + optimize

2011-08-26 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera resolved LUCENE-3403.


Resolution: Fixed

Committed revision 1162300 (3x).
Committed revision 1162301 (trunk -- tests only).

> Term vectors missing after addIndexes + optimize
> 
>
> Key: LUCENE-3403
> URL: https://issues.apache.org/jira/browse/LUCENE-3403
> Project: Lucene - Java
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 3.3
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Blocker
> Fix For: 3.4, 4.0
>
> Attachments: LUCENE-3403.patch
>
>
> I encountered a problem with addIndexes where term vectors disappeared 
> following optimize(). I wrote a simple test case which demonstrates the 
> problem. The bug appears with both addIndexes() versions, but does not appear 
> if addDocument is called twice, committing changes in between.
> I think I tracked the problem down to IndexWriter.mergeMiddle() -- it sets 
> term vectors before merger.merge() was called. In the addDocs case, 
> merger.fieldInfos is already populated, while in the addIndexes case it is 
> empty, hence fieldInfos.hasVectors returns false.
> will post a patch shortly.
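
For readers who want to see the shape of the failure, here is a rough
reproduction sketch against the Lucene 3.3 API (the class name, field name and
sample text are illustrative, not the attached test case):

{noformat}
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class AddIndexesTermVectorsRepro {
  public static void main(String[] args) throws Exception {
    // Source index: one document that stores term vectors.
    Directory source = new RAMDirectory();
    IndexWriter sourceWriter = new IndexWriter(source,
        new IndexWriterConfig(Version.LUCENE_33, new WhitespaceAnalyzer(Version.LUCENE_33)));
    Document doc = new Document();
    doc.add(new Field("f", "keep my term vectors",
        Field.Store.YES, Field.Index.ANALYZED, Field.TermVector.YES));
    sourceWriter.addDocument(doc);
    sourceWriter.close();

    // addIndexes into a fresh writer, then optimize() -- the merge triggered
    // here is where the vectors went missing on affected versions.
    Directory target = new RAMDirectory();
    IndexWriter targetWriter = new IndexWriter(target,
        new IndexWriterConfig(Version.LUCENE_33, new WhitespaceAnalyzer(Version.LUCENE_33)));
    targetWriter.addIndexes(source);
    targetWriter.optimize();
    targetWriter.close();

    // Expected: a non-null vector for doc 0; the bug report says this came back null.
    IndexReader reader = IndexReader.open(target);
    System.out.println("term vector: " + reader.getTermFreqVector(0, "f"));
    reader.close();
  }
}
{noformat}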




trunk test failure (1314424801)

2011-08-26 Thread Charlie Cron
A test failure occurred running the tests.

revert! revert! revert!

You can see the entire build log at 
http://sierranevada.servebeer.com/1314424801.log

Thanks,
Charlie Cron





trunk test failure (1314417182)

2011-08-26 Thread Charlie Cron
A test failure occurred running the tests.

revert! revert! revert!

You can see the entire build log at 
http://sierranevada.servebeer.com/1314417182.log

Thanks,
Charlie Cron





trunk test failure (1314403261)

2011-08-26 Thread Charlie Cron
A test failure occurred running the tests.

revert! revert! revert!

You can see the entire build log at 
http://sierranevada.servebeer.com/1314403261.log

Thanks,
Charlie Cron





Re: trunk test failure (1314339061)

2011-08-26 Thread Grant Ingersoll
Is there something we are supposed to be doing about these?

-Grant

On Aug 25, 2011, at 11:32 PM, Charlie Cron wrote:

> A test failure occurred running the tests.
> 
> revert! revert! revert!
> 
> You can see the entire build log at 
> http://sierranevada.servebeer.com/1314339061.log
> 
> Thanks,
> Charlie Cron
> 
> 





trunk test failure (1314387601)

2011-08-26 Thread Charlie Cron
A test failure occurred running the tests.

revert! revert! revert!

You can see the entire build log at 
http://sierranevada.servebeer.com/1314387601.log

Thanks,
Charlie Cron





trunk test failure (1314384241)

2011-08-26 Thread Charlie Cron
A test failure occurred running the tests.

revert! revert! revert!

You can see the entire build log at 
http://sierranevada.servebeer.com/1314384241.log

Thanks,
Charlie Cron





[jira] [Commented] (SOLR-2716) QueryResultKey hashCode() and equals() is dependent on filter order

2011-08-26 Thread Mike Sokolov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13091948#comment-13091948
 ] 

Mike Sokolov commented on SOLR-2716:


Also, if Set.equals() is slower than List.equals() and it seems worth the 
trouble, one could maybe use a SortedMap with the filter hashCodes as keys.  
That would have the side effect of eliminating dups, though, which could be bad 
in some weird case.  So maybe a Bag?
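
For illustration only (a sketch of the general idea, not the attached patch):
an order-insensitive key can combine the filters with a commutative hash and
compare them as a multiset, which also keeps duplicates instead of silently
dropping them:

{noformat}
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.search.Query;

/** Sketch of an order-insensitive filter list wrapper (illustrative only). */
final class UnorderedFilters {
  private final List<Query> filters;
  private final int hash;

  UnorderedFilters(List<Query> filters) {
    this.filters = filters;
    int h = 0;
    for (Query f : filters) {
      h += f.hashCode(); // addition is commutative, so filter order does not matter
    }
    this.hash = h;
  }

  @Override
  public int hashCode() {
    return hash;
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof UnorderedFilters)) return false;
    UnorderedFilters other = (UnorderedFilters) o;
    if (hash != other.hash || filters.size() != other.filters.size()) return false;
    // Multiset comparison: every filter must be matched exactly once,
    // so duplicates are preserved (unlike a Set-based comparison).
    List<Query> remaining = new ArrayList<Query>(other.filters);
    for (Query f : filters) {
      if (!remaining.remove(f)) return false;
    }
    return true;
  }
}
{noformat}

Sorting by hashCode, as suggested above, would avoid the quadratic comparison,
but it needs a tie-break for filters whose hashCodes collide.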

> QueryResultKey hashCode() and equals() is dependent on filter order
> ---
>
> Key: SOLR-2716
> URL: https://issues.apache.org/jira/browse/SOLR-2716
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Affects Versions: 3.3
>Reporter: Neil Prosser
>Priority: Minor
> Attachments: SOLR-2716.patch
>
>
> The hashCode() and equals() methods of a QueryResultKey are dependent on the 
> order of the filters meaning that potentially identical result sets are 
> missed when cached.
> Query query = new TermQuery(new Term("field1", "value1"));
> Query filter1 = new TermQuery(new Term("field2", "value2"));
> Query filter2 = new TermQuery(new Term("field3", "value3"));
> List filters1 = new ArrayList();
> filters1.add(filter1);
> filters1.add(filter2);
> List filters2 = new ArrayList();
> filters2.add(filter2);
> filters2.add(filter1);
> QueryResultKey key1 = new QueryResultKey(query, filters1, null, 0);
> QueryResultKey key2 = new QueryResultKey(query, filters2, null, 0);
> // Both the following assertions fail
> assert key1.equals(key2);
> assert key1.hashCode() == key2.hashCode();




[jira] [Commented] (LUCENE-3233) HuperDuperSynonymsFilter™

2011-08-26 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13091919#comment-13091919
 ] 

Yonik Seeley commented on LUCENE-3233:
--

I just tested by hand:
I added a line to synonyms.txt "a\,a => b\,b", fired up the example server and 
then executed the following query:
http://localhost:8983/solr/select?q=a,a&debugQuery=true

I then verified that the synonyms were in effect in general, via:
http://localhost:8983/solr/select?q=fooaaa&debugQuery=true

> HuperDuperSynonymsFilter™
> -
>
> Key: LUCENE-3233
> URL: https://issues.apache.org/jira/browse/LUCENE-3233
> Project: Lucene - Java
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 3.4, 4.0
>
> Attachments: LUCENE-3223.patch, LUCENE-3233.patch, LUCENE-3233.patch, 
> LUCENE-3233.patch, LUCENE-3233.patch, LUCENE-3233.patch, LUCENE-3233.patch, 
> LUCENE-3233.patch, LUCENE-3233.patch, LUCENE-3233.patch, LUCENE-3233.patch, 
> LUCENE-3233.patch, LUCENE-3233.patch, LUCENE-3233.patch, LUCENE-3233.patch, 
> LUCENE-3233.patch, LUCENE-3233.patch, synonyms.zip
>
>
> The current synonymsfilter uses a lot of ram and cpu, especially at build 
> time.
> I think yesterday I heard about "huge synonyms files" three times.
> So, I think we should use an FST-based structure, sharing the inputs and 
> outputs.
> And we should be more efficient with the tokenStream api, e.g. using 
> save/restoreState instead of cloneAttributes()




[jira] [Commented] (LUCENE-3384) TestIWExceptions.testRandomExceptionsThreads sometimes fails

2011-08-26 Thread Steven Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13091898#comment-13091898
 ] 

Steven Rowe commented on LUCENE-3384:
-

After updating from svn, then applying the patch, I got 3 runs with at least 
one failure out of 12 100-iter runs.  So it seems to be "less reproducible" now 
- 25% vs. 50% previously...

> TestIWExceptions.testRandomExceptionsThreads sometimes fails
> 
>
> Key: LUCENE-3384
> URL: https://issues.apache.org/jira/browse/LUCENE-3384
> Project: Lucene - Java
>  Issue Type: Bug
>Affects Versions: 3.4
>Reporter: Robert Muir
> Attachments: LUCENE-3384.patch
>
>
> failed in hudson (in test-backwards), with AIOOBE on termvectorswriter.
> the problem with this test method is that seeds never reproduce.
> But I made it fail just by doing this:
> {noformat}
> ant test-core -Dtestcase=TestIndexWriterExceptions 
> -Dtestmethod=testRandomExceptionsThreads -Dtests.iter=100
> {noformat}




trunk test failure (1314375781)

2011-08-26 Thread Charlie Cron
A test failure occurred running the tests.

revert! revert! revert!

You can see the entire build log at 
http://sierranevada.servebeer.com/1314375781.log

Thanks,
Charlie Cron





[jira] [Commented] (LUCENE-3384) TestIWExceptions.testRandomExceptionsThreads sometimes fails

2011-08-26 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13091853#comment-13091853
 ] 

Robert Muir commented on LUCENE-3384:
-

Thanks for testing, Steve... I don't yet know why the patch doesn't fix it.

However, I fixed some reproducibility bugs in this test, so maybe it's "more 
reproducible" now.

> TestIWExceptions.testRandomExceptionsThreads sometimes fails
> 
>
> Key: LUCENE-3384
> URL: https://issues.apache.org/jira/browse/LUCENE-3384
> Project: Lucene - Java
>  Issue Type: Bug
>Affects Versions: 3.4
>Reporter: Robert Muir
> Attachments: LUCENE-3384.patch
>
>
> failed in hudson (in test-backwards), with AIOOBE on termvectorswriter.
> the problem with this test method is that seeds never reproduce.
> But I made it fail just by doing this:
> {noformat}
> ant test-core -Dtestcase=TestIndexWriterExceptions 
> -Dtestmethod=testRandomExceptionsThreads -Dtests.iter=100
> {noformat}




[jira] [Commented] (LUCENE-3233) HuperDuperSynonymsFilter™

2011-08-26 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13091843#comment-13091843
 ] 

Robert Muir commented on LUCENE-3233:
-

Do you have a test?

Because we have tests for this:
{noformat}
String testFile = 
  "a\\=>a => b\\=>b\n" +
  "a\\,a => b\\,b";
...
assertAnalyzesTo(analyzer, "a=>a",
new String[] { "b=>b" },
new int[] { 1 });

assertAnalyzesTo(analyzer, "a,a",
new String[] { "b,b" },
new int[] { 1 });
{noformat}

> HuperDuperSynonymsFilter™
> -
>
> Key: LUCENE-3233
> URL: https://issues.apache.org/jira/browse/LUCENE-3233
> Project: Lucene - Java
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 3.4, 4.0
>
> Attachments: LUCENE-3223.patch, LUCENE-3233.patch, LUCENE-3233.patch, 
> LUCENE-3233.patch, LUCENE-3233.patch, LUCENE-3233.patch, LUCENE-3233.patch, 
> LUCENE-3233.patch, LUCENE-3233.patch, LUCENE-3233.patch, LUCENE-3233.patch, 
> LUCENE-3233.patch, LUCENE-3233.patch, LUCENE-3233.patch, LUCENE-3233.patch, 
> LUCENE-3233.patch, LUCENE-3233.patch, synonyms.zip
>
>
> The current synonymsfilter uses a lot of ram and cpu, especially at build 
> time.
> I think yesterday I heard about "huge synonyms files" three times.
> So, I think we should use an FST-based structure, sharing the inputs and 
> outputs.
> And we should be more efficient with the tokenStream api, e.g. using 
> save/restoreState instead of cloneAttributes()




trunk test failure (1314373561)

2011-08-26 Thread Charlie Cron
A test failure occurred running the tests.

revert! revert! revert!

You can see the entire build log at 
http://sierranevada.servebeer.com/1314373561.log

Thanks,
Charlie Cron





[jira] [Reopened] (LUCENE-3233) HuperDuperSynonymsFilter™

2011-08-26 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley reopened LUCENE-3233:
--

  Assignee: (was: Robert Muir)

It looks like when Solr's synonym parsing was moved to the analysis module, it 
was also rewritten, introducing escaping bugs.

Examples:
a\,a is no longer treated as a single token
a\=>a is no longer treated as a single token
a\ta is treated as "ata" instead of containing a tab character

I didn't do a full review, so I'm not sure if there are other differences in 
behavior.
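
As a concrete illustration of the comma case only (a sketch, not the
analysis-module parser), escape-aware splitting has to treat a backslash as
protecting the character that follows it:

{noformat}
import java.util.ArrayList;
import java.util.List;

/** Sketch of escape-aware splitting for synonym rules (illustrative only). */
final class SynonymRuleSplitter {

  /** Splits on an unescaped separator; "a\,a" stays one token when splitting on ",". */
  static List<String> split(String rule, char separator) {
    List<String> parts = new ArrayList<String>();
    StringBuilder current = new StringBuilder();
    for (int i = 0; i < rule.length(); i++) {
      char c = rule.charAt(i);
      if (c == '\\' && i + 1 < rule.length()) {
        char next = rule.charAt(++i);
        if (next == 't') {
          current.append('\t');      // "a\ta" should contain a real tab, not "ata"
        } else {
          current.append(next);      // "\," and "\=" keep the literal character
        }
      } else if (c == separator) {
        parts.add(current.toString());
        current.setLength(0);
      } else {
        current.append(c);
      }
    }
    parts.add(current.toString());
    return parts;
  }

  public static void main(String[] args) {
    System.out.println(split("a\\,a,b\\,b", ','));  // two tokens, each keeping its comma
    System.out.println(split("a\\ta", ','));        // one token containing a real tab
  }
}
{noformat}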

> HuperDuperSynonymsFilter™
> -
>
> Key: LUCENE-3233
> URL: https://issues.apache.org/jira/browse/LUCENE-3233
> Project: Lucene - Java
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 3.4, 4.0
>
> Attachments: LUCENE-3223.patch, LUCENE-3233.patch, LUCENE-3233.patch, 
> LUCENE-3233.patch, LUCENE-3233.patch, LUCENE-3233.patch, LUCENE-3233.patch, 
> LUCENE-3233.patch, LUCENE-3233.patch, LUCENE-3233.patch, LUCENE-3233.patch, 
> LUCENE-3233.patch, LUCENE-3233.patch, LUCENE-3233.patch, LUCENE-3233.patch, 
> LUCENE-3233.patch, LUCENE-3233.patch, synonyms.zip
>
>
> The current synonymsfilter uses a lot of ram and cpu, especially at build 
> time.
> I think yesterday I heard about "huge synonyms files" three times.
> So, I think we should use an FST-based structure, sharing the inputs and 
> outputs.
> And we should be more efficient with the tokenStream api, e.g. using 
> save/restoreState instead of cloneAttributes()




[jira] [Updated] (LUCENE-3396) Make TokenStream Reuse Mandatory for Analyzers

2011-08-26 Thread Chris Male (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Male updated LUCENE-3396:
---

Attachment: LUCENE-3396-rab.patch

Absolutely, I missed that fieldname usage.

Updated patch.  Still all green.

> Make TokenStream Reuse Mandatory for Analyzers
> --
>
> Key: LUCENE-3396
> URL: https://issues.apache.org/jira/browse/LUCENE-3396
> Project: Lucene - Java
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Chris Male
> Attachments: LUCENE-3396-rab.patch, LUCENE-3396-rab.patch, 
> LUCENE-3396-rab.patch, LUCENE-3396-rab.patch
>
>
> In LUCENE-2309 it became clear that we'd benefit a lot from Analyzer having 
> to return reusable TokenStreams.  This is a big chunk of work, but its time 
> to bite the bullet.
> I plan to attack this in the following way:
> - Collapse the logic of ReusableAnalyzerBase into Analyzer
> - Add a ReuseStrategy abstraction to Analyzer which controls whether the 
> TokenStreamComponents are reused globally (as they are today) or per-field.
> - Convert all Analyzers over to using TokenStreamComponents.  I've already 
> seen that some of the TokenStreams created in tests need some work to be 
> reusable (even if they aren't reused).
> - Remove Analyzer.reusableTokenStream and convert everything over to using 
> .tokenStream (which will now be returning reusable TokenStreams).




[jira] [Commented] (LUCENE-3402) LuceneTestCase shouldn't go crazy if a test fails in an @AfterClass annotated method

2011-08-26 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13091819#comment-13091819
 ] 

Uwe Schindler commented on LUCENE-3402:
---

Go for it.

> LuceneTestCase shouldn't go crazy if a test fails in an @AfterClass annotated 
> method
> 
>
> Key: LUCENE-3402
> URL: https://issues.apache.org/jira/browse/LUCENE-3402
> Project: Lucene - Java
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-3402.patch, LUCENE-3402.patch
>
>
> An example can be seen here: http://sierranevada.servebeer.com/1314308641.log
> The general problem is this: the assertions and cleanups in lucenetestcase's 
> afterclass should be reordered, and have better error handling.
> In this particular case these were the steps that happened:
> # AutoCommitTest didn't close its searchers, so SolrTestCaseJ4 threw an 
> assertion exception in its @AfterClass method.
> # Because the searcher wasn't closed, LuceneTestCase threw an assertion 
> exception about unclosed directories/file handles in its afterClass. Even 
> though the test had already "failed" it ran this assertion because 
> testsFailed is false, since our TestWatchMan isnt aware of failures that 
> happen in @AfterClass methods :(
> # Because it threw this exception, it never made it to the part where it 
> resets the random, so the next test blew up in its BeforeClass.
> To add insult to injury, all this happened but we didnt get a random seed 
> printed, so we cant even hope to reproduce the situation.
> After discussion with hossman, we came up with some ideas on how to improve 
> this, and I'm adding some i just thought of, too:
> # try to divide up these assertions and cleanups in LuceneTestCase: we could 
> use multiple @AfterClass-annotated methods but then i'm not sure we can 
> control the order, which is scary. But one safe thing to do is to put these 
> pieces of code in little methods and afterclass can handle this stuff with 
> try/finally.
> # think about exposing the testsFailed variable for subclasses that do 
> assertions in their @AfterClasses. otherwise you might not get a random seed, 
> which is bad.
> # think about upgrading junit, because I know from experimentation that the 
> TestWatchMan (or whatever its replacement is) can "see more" of the test 
> lifecycle and this would probably make a lot of this much cleaner.
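
The first idea above, splitting the cleanup into small methods that afterClass
chains with try/finally, might look roughly like this sketch (the method names
are made up, not LuceneTestCase's actual helpers):

{noformat}
import org.junit.AfterClass;
import org.junit.Test;

public class OrderedAfterClassExample {

  @Test
  public void smoke() {
    // A placeholder test so the class is runnable under JUnit.
  }

  @AfterClass
  public static void afterClass() throws Exception {
    // Nested try/finally guarantees the seed is reported and the random is
    // reset even when an earlier assertion (e.g. unclosed directories) fails.
    try {
      assertNoUnclosedResources();
    } finally {
      try {
        reportRandomSeed();
      } finally {
        resetRandom();
      }
    }
  }

  private static void assertNoUnclosedResources() { /* check tracked directories are closed */ }
  private static void reportRandomSeed() { /* print the seed so failures can be reproduced */ }
  private static void resetRandom() { /* restore the shared Random for the next test class */ }
}
{noformat}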




[jira] [Commented] (LUCENE-3396) Make TokenStream Reuse Mandatory for Analyzers

2011-08-26 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13091818#comment-13091818
 ] 

Robert Muir commented on LUCENE-3396:
-

Just a quick glance: shouldn't the MockAnalyzer reuse per-field?

This way we test the case where payloads are enabled for one field but not for 
another.
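
For illustration, per-field reuse boils down to keying the cached components by
field name instead of keeping one global instance. A minimal sketch against the
3.x analysis API (the class name is made up, and this is not the attached
patch):

{noformat}
import java.io.IOException;
import java.io.Reader;
import java.util.HashMap;
import java.util.Map;

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.WhitespaceTokenizer;
import org.apache.lucene.util.Version;

final class PerFieldReuseSketch {
  private final Map<String, Tokenizer> byField = new HashMap<String, Tokenizer>();

  Tokenizer tokenStream(String fieldName, Reader reader) throws IOException {
    Tokenizer tokenizer = byField.get(fieldName);
    if (tokenizer == null) {
      // First use of this field: create components configured for it
      // (e.g. payloads on or off) and cache them under the field name.
      tokenizer = new WhitespaceTokenizer(Version.LUCENE_33, reader);
      byField.put(fieldName, tokenizer);
    } else {
      tokenizer.reset(reader); // reuse the cached instance with the new input
    }
    return tokenizer;
  }
}
{noformat}

With something like that, a test can enable payloads for one field and not
another and still exercise reuse.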

> Make TokenStream Reuse Mandatory for Analyzers
> --
>
> Key: LUCENE-3396
> URL: https://issues.apache.org/jira/browse/LUCENE-3396
> Project: Lucene - Java
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Chris Male
> Attachments: LUCENE-3396-rab.patch, LUCENE-3396-rab.patch, 
> LUCENE-3396-rab.patch
>
>
> In LUCENE-2309 it became clear that we'd benefit a lot from Analyzer having 
> to return reusable TokenStreams.  This is a big chunk of work, but its time 
> to bite the bullet.
> I plan to attack this in the following way:
> - Collapse the logic of ReusableAnalyzerBase into Analyzer
> - Add a ReuseStrategy abstraction to Analyzer which controls whether the 
> TokenStreamComponents are reused globally (as they are today) or per-field.
> - Convert all Analyzers over to using TokenStreamComponents.  I've already 
> seen that some of the TokenStreams created in tests need some work to be 
> reusable (even if they aren't reused).
> - Remove Analyzer.reusableTokenStream and convert everything over to using 
> .tokenStream (which will now be returning reusable TokenStreams).




[jira] [Commented] (LUCENE-3402) LuceneTestCase shouldn't go crazy if a test fails in an @AfterClass annotated method

2011-08-26 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13091817#comment-13091817
 ] 

Robert Muir commented on LUCENE-3402:
-

I'm gonna give this patch a try; if things go crazy I'll back it out.


> LuceneTestCase shouldn't go crazy if a test fails in an @AfterClass annotated 
> method
> 
>
> Key: LUCENE-3402
> URL: https://issues.apache.org/jira/browse/LUCENE-3402
> Project: Lucene - Java
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-3402.patch, LUCENE-3402.patch
>
>
> An example can be seen here: http://sierranevada.servebeer.com/1314308641.log
> The general problem is this: the assertions and cleanups in lucenetestcase's 
> afterclass should be reordered, and have better error handling.
> In this particular case these were the steps that happened:
> # AutoCommitTest didn't close its searchers, so SolrTestCaseJ4 threw an 
> assertion exception in its @AfterClass method.
> # Because the searcher wasn't closed, LuceneTestCase threw an assertion 
> exception about unclosed directories/file handles in its afterClass. Even 
> though the test had already "failed" it ran this assertion because 
> testsFailed is false, since our TestWatchMan isnt aware of failures that 
> happen in @AfterClass methods :(
> # Because it threw this exception, it never made it to the part where it 
> resets the random, so the next test blew up in its BeforeClass.
> To add insult to injury, all this happened but we didnt get a random seed 
> printed, so we cant even hope to reproduce the situation.
> After discussion with hossman, we came up with some ideas on how to improve 
> this, and I'm adding some i just thought of, too:
> # try to divide up these assertions and cleanups in LuceneTestCase: we could 
> use multiple @AfterClass-annotated methods but then i'm not sure we can 
> control the order, which is scary. But one safe thing to do is to put these 
> pieces of code in little methods and afterclass can handle this stuff with 
> try/finally.
> # think about exposing the testsFailed variable for subclasses that do 
> assertions in their @AfterClasses. otherwise you might not get a random seed, 
> which is bad.
> # think about upgrading junit, because I know from experimentation that the 
> TestWatchMan (or whatever its replacement is) can "see more" of the test 
> lifecycle and this would probably make a lot of this much cleaner.




[jira] [Resolved] (LUCENE-3401) need to ensure that sims that use collection-level stats (e.g. sumTotalTermFreq) handle non-existent field

2011-08-26 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-3401.
-

   Resolution: Fixed
Fix Version/s: flexscoring branch

> need to ensure that sims that use collection-level stats (e.g. 
> sumTotalTermFreq) handle non-existent field
> --
>
> Key: LUCENE-3401
> URL: https://issues.apache.org/jira/browse/LUCENE-3401
> Project: Lucene - Java
>  Issue Type: Bug
>Affects Versions: flexscoring branch
>Reporter: Robert Muir
> Fix For: flexscoring branch
>
> Attachments: LUCENE-3401.patch, LUCENE-3401.patch
>
>
> Because of things like queryNorm, unfortunately similarities have to handle 
> the case where they are asked to computeStats() for a term, where the field 
> does not exist at all.
> (Note they will never have to actually score anything, but unless we break 
> how queryNorm works for TFIDF, we have to deal with this case).
> I noticed this while doing some benchmarking, so i created a test to test 
> some cases like this across all the sims.




[jira] [Commented] (LUCENE-3403) Term vectors missing after addIndexes + optimize

2011-08-26 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13091739#comment-13091739
 ] 

Michael McCandless commented on LUCENE-3403:


Phew nice catch Shai!


> Term vectors missing after addIndexes + optimize
> 
>
> Key: LUCENE-3403
> URL: https://issues.apache.org/jira/browse/LUCENE-3403
> Project: Lucene - Java
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 3.3
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Blocker
> Fix For: 3.4, 4.0
>
> Attachments: LUCENE-3403.patch
>
>
> I encountered a problem with addIndexes where term vectors disappeared 
> following optimize(). I wrote a simple test case which demonstrates the 
> problem. The bug appears with both addIndexes() versions, but does not appear 
> if addDocument is called twice, committing changes in between.
> I think I tracked the problem down to IndexWriter.mergeMiddle() -- it sets 
> term vectors before merger.merge() was called. In the addDocs case, 
> merger.fieldInfos is already populated, while in the addIndexes case it is 
> empty, hence fieldInfos.hasVectors returns false.
> will post a patch shortly.




[jira] [Commented] (SOLR-2731) CSVResponseWriter should optionally return numfound

2011-08-26 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13091735#comment-13091735
 ] 

Erik Hatcher commented on SOLR-2731:


Perhaps we could have an Excel response writer that could create a multi-sheet 
spreadsheet file?

> CSVResponseWriter should optionally return numfound
> ---
>
> Key: SOLR-2731
> URL: https://issues.apache.org/jira/browse/SOLR-2731
> Project: Solr
>  Issue Type: Improvement
>  Components: Response Writers
>Affects Versions: 3.1, 3.3, 4.0
>Reporter: Jon Hoffman
>  Labels: patch
> Fix For: 3.1.1, 3.3, 4.0
>
> Attachments: SOLR-2731.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> an optional parameter "csv.numfound=true" can be added to the request which 
> causes the first line of the response to be the numfound.  This would have no 
> impact on existing behavior, and those who are interested in that value can 
> simply read off the first line before sending to their usual csv parser.
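
For what it's worth, consuming that format on the client side would be a
one-line step before handing the stream to the usual CSV parser. A hedged
sketch (csv.numfound is the proposal above, not an existing parameter):

{noformat}
import java.io.BufferedReader;
import java.io.IOException;

final class NumFoundCsvClient {
  /** Reads the numFound line that csv.numfound=true would prepend, leaving plain CSV behind. */
  static long readNumFound(BufferedReader response) throws IOException {
    String firstLine = response.readLine();
    return Long.parseLong(firstLine.trim());
  }
}
{noformat}

Everything still buffered in the reader after that call is ordinary CSV for
whatever parser the client already uses.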




[jira] [Commented] (SOLR-2731) CSVResponseWriter should optionally return numfound

2011-08-26 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13091734#comment-13091734
 ] 

Erik Hatcher commented on SOLR-2731:


I'm mostly with Lance here, actually.  I want *pure* CSV.  So long as there 
is always an option (which should be the default) to keep the output pure CSV, 
then I'm ok with whatever extras folks want to add as options.

We really should get the response writer framework able to return custom HTTP 
headers though.

> CSVResponseWriter should optionally return numfound
> ---
>
> Key: SOLR-2731
> URL: https://issues.apache.org/jira/browse/SOLR-2731
> Project: Solr
>  Issue Type: Improvement
>  Components: Response Writers
>Affects Versions: 3.1, 3.3, 4.0
>Reporter: Jon Hoffman
>  Labels: patch
> Fix For: 3.1.1, 3.3, 4.0
>
> Attachments: SOLR-2731.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> an optional parameter "csv.numfound=true" can be added to the request which 
> causes the first line of the response to be the numfound.  This would have no 
> impact on existing behavior, and those who are interested in that value can 
> simply read off the first line before sending to their usual csv parser.




[jira] [Commented] (SOLR-2726) NullPointerException when using spellcheck.q

2011-08-26 Thread Bernd Fehling (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13091725#comment-13091725
 ] 

Bernd Fehling commented on SOLR-2726:
-

The NPE comes from SpellCheckComponent.process --> 
SpellCheckComponent.getTokens: no analyzer is defined, but when 
"spellcheck.q" is used an analyzer is required.
This issue is manifold:
- there is no default analyzer defined (like the WhitespaceAnalyzer used when 
only the "q" parameter is given for suggest)
- there is currently no way to define an analyzer for this at all, because:
  -- no analyzer or spellcheck.analyzer parameter is read from solrconfig.xml 
for this
  -- the class SolrSpellChecker has only getQueryAnalyzer() but no 
setQueryAnalyzer() to set one

How should we fix this?
- add a default analyzer?
- add setQueryAnalyzer() to SolrSpellChecker?
- set the analyzer in Suggester.init, SolrSpellChecker.init, or 
SpellCheckComponent.prepare?

Any opinions?
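
To make the first option concrete, a defensive fallback could look like this
sketch (illustrative only, not a patch; WhitespaceAnalyzer mirrors what the
plain "q" suggest path effectively gets):

{noformat}
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.util.Version;

final class SpellcheckQueryAnalyzerFallback {
  static Analyzer queryAnalyzerOrDefault(Analyzer configured) {
    if (configured != null) {
      return configured;
    }
    // Fall back to simple whitespace tokenization instead of letting
    // getTokens() hit a null analyzer and throw an NPE.
    return new WhitespaceAnalyzer(Version.LUCENE_33);
  }
}
{noformat}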


> NullPointerException when using spellcheck.q
> 
>
> Key: SOLR-2726
> URL: https://issues.apache.org/jira/browse/SOLR-2726
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 3.3, 4.0
> Environment: ubuntu
>Reporter: valentin
>  Labels: nullpointerexception, spellcheck
>
> When I use spellcheck.q in my query to define what will be "spellchecked", I 
> always have this error, for every configuration I try :
> java.lang.NullPointerException
> at 
> org.apache.solr.handler.component.SpellCheckComponent.getTokens(SpellCheckComponent.java:476)
> at 
> org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:131)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:202)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1368)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:356)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:252)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
> at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
> at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
> at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
> at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
> at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
> at 
> org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
> at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
> at org.mortbay.jetty.Server.handle(Server.java:326)
> at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
> at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
> at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
> at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
> at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
> at 
> org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
> at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> All my other functions work great; this is the only thing which doesn't work 
> at all, just when I add "&spellcheck.q=my%20sentence" to the query...
> Example of a query : 
> http://localhost:8983/solr/db/suggest_full?q=american%20israel&spellcheck.q=american%20israel
> In solrconfig.xml :
> <searchComponent class="solr.SpellCheckComponent" name="suggest_full">
>   <str name="queryAnalyzerFieldType">suggestTextFull</str>
>   <lst name="spellchecker">
>     <str name="name">suggest_full</str>
>     <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
>     <str name="lookupImpl">org.apache.solr.spelling.suggest.tst.TSTLookup</str>
>     <str name="field">text_suggest_full</str>
>     <str name="fieldType">suggestTextFull</str>
>   </lst>
> </searchComponent>
> <requestHandler name="/suggest_full" class="org.apache.solr.handler.component.SearchHandler">
>   <lst name="defaults">
>     <str name="spellcheck">true</str>
>     <str name="spellcheck.dictionary">suggest_full</str>
>     <str name="spellcheck.count">10</str>
>     <str name="spellcheck.collate">true</str>
>   </lst>
>   <arr name="components">
>     <str>suggest_full</str>
>   </arr>
> </requestHandler>
> I'm using Solr 3.3, and I tried it on Solr 4.0 too.




[jira] [Commented] (LUCENE-3397) Cleanup Test TokenStreams so they are reusable

2011-08-26 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13091714#comment-13091714
 ] 

Simon Willnauer commented on LUCENE-3397:
-

Thanks for all this cleanup work, Chris! This makes our codebase much cleaner 
and makes our tests exercise real examples! Very, very welcome work!

> Cleanup Test TokenStreams so they are reusable
> --
>
> Key: LUCENE-3397
> URL: https://issues.apache.org/jira/browse/LUCENE-3397
> Project: Lucene - Java
>  Issue Type: Sub-task
>  Components: modules/analysis
>Reporter: Chris Male
>Assignee: Chris Male
> Fix For: 4.0
>
> Attachments: LUCENE-3397-highlighter.patch, LUCENE-3397-more.patch, 
> LUCENE-3397.patch, LUCENE-3397.patch
>
>
> Many TokenStreams created in tests are not reusable.  Some do some really 
> messy things which prevent their reuse so we may have to change the tests 
> themselves.
> We'll target back porting this to 3x.




[jira] [Commented] (LUCENE-3403) Term vectors missing after addIndexes + optimize

2011-08-26 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13091712#comment-13091712
 ] 

Simon Willnauer commented on LUCENE-3403:
-

bq.You're right, it does not happen on trunk. I still want to commit the test 
cases to trunk too, so that we've got that covered there as well. Therefore I 
think I should keep the 4.0 fix version?

Don't get me wrong, I was just double checking because 4.0 was not in the 
affected versions. I don't wanna miss such a trap. :)

bq. The problem is that SegmentMerger receives its FieldInfos from 
DocumentsWriter, and it knows whether to set hasVector according to what it 
receives. When you addDoc, DW has FieldInfos, but when you only addIndexes, DW 
doesn't.

Maybe we should adopt what trunk does, checking all the FieldInfos to see if 
one of them stores vectors, unless the FieldInfos are read-only?
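
For illustration, the trunk-style check amounts to scanning the merged field
infos instead of trusting a flag that was set before the merge. A
self-contained sketch with stand-in types (not Lucene's actual FieldInfos API):

{noformat}
import java.util.List;

final class HasVectorsSketch {
  static final class FieldInfoStub {
    final String name;
    final boolean storeTermVector;
    FieldInfoStub(String name, boolean storeTermVector) {
      this.name = name;
      this.storeTermVector = storeTermVector;
    }
  }

  /** True if any field in the merged infos stores term vectors. */
  static boolean hasVectors(List<FieldInfoStub> fieldInfos) {
    for (FieldInfoStub fi : fieldInfos) {
      if (fi.storeTermVector) {
        return true;
      }
    }
    return false;
  }
}
{noformat}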

bq. If it's ok, I'll commit the fix to 3x and the tests-only to trunk.

+1 tests are great!



> Term vectors missing after addIndexes + optimize
> 
>
> Key: LUCENE-3403
> URL: https://issues.apache.org/jira/browse/LUCENE-3403
> Project: Lucene - Java
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 3.3
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Blocker
> Fix For: 3.4, 4.0
>
> Attachments: LUCENE-3403.patch
>
>
> I encountered a problem with addIndexes where term vectors disappeared 
> following optimize(). I wrote a simple test case which demonstrates the 
> problem. The bug appears with both addIndexes() versions, but does not appear 
> if addDocument is called twice, committing changes in between.
> I think I tracked the problem down to IndexWriter.mergeMiddle() -- it sets 
> term vectors before merger.merge() was called. In the addDocs case, 
> merger.fieldInfos is already populated, while in the addIndexes case it is 
> empty, hence fieldInfos.hasVectors returns false.
> will post a patch shortly.




[jira] [Commented] (LUCENE-3403) Term vectors missing after addIndexes + optimize

2011-08-26 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13091709#comment-13091709
 ] 

Shai Erera commented on LUCENE-3403:


You're right, it does not happen on trunk. I still want to commit the test 
cases to trunk too, so that we've got that covered there as well. Therefore I 
think I should keep the 4.0 fix version?

The problem is that SegmentMerger receives its FieldInfos from DocumentsWriter, 
and it knows whether to set hasVector according to what it receives. When you 
addDoc, DW has FieldInfos, but when you only addIndexes, DW doesn't.

In fact, the field infos are read only once, when the IW is opened ... so even 
if I addIndexes(), commit(), addIndexes(), the field infos would still be 
missing. A workaround I see for now is to addIndexes(), close(), open a new IW, 
and continue with addIndexes() or optimize(). It's ugly, but it's a workaround 
until we release a new version. I'll try that.
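
A rough sketch of that workaround against the 3.3 API (the analyzer and
directories are placeholders, and this is only a stopgap until the fix is
released):

{noformat}
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.util.Version;

final class AddIndexesWorkaround {
  static void addIndexesThenOptimize(Directory target, Directory source) throws Exception {
    IndexWriter writer = new IndexWriter(target,
        new IndexWriterConfig(Version.LUCENE_33, new WhitespaceAnalyzer(Version.LUCENE_33)));
    writer.addIndexes(source);
    writer.close(); // commit the added segments and let go of the stale FieldInfos

    // Re-opening makes the new writer read FieldInfos from the committed
    // segments, so the merge triggered by optimize() sees hasVectors.
    writer = new IndexWriter(target,
        new IndexWriterConfig(Version.LUCENE_33, new WhitespaceAnalyzer(Version.LUCENE_33)));
    writer.optimize();
    writer.close();
  }
}
{noformat}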

If it's ok, I'll commit the fix to 3x and the tests-only to trunk.

> Term vectors missing after addIndexes + optimize
> 
>
> Key: LUCENE-3403
> URL: https://issues.apache.org/jira/browse/LUCENE-3403
> Project: Lucene - Java
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 3.3
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Blocker
> Fix For: 3.4, 4.0
>
> Attachments: LUCENE-3403.patch
>
>
> I encountered a problem with addIndexes where term vectors disappeared 
> following optimize(). I wrote a simple test case which demonstrates the 
> problem. The bug appears with both addIndexes() versions, but does not appear 
> if addDocument is called twice, committing changes in between.
> I think I tracked the problem down to IndexWriter.mergeMiddle() -- it sets 
> term vectors before merger.merge() was called. In the addDocs case, 
> merger.fieldInfos is already populated, while in the addIndexes case it is 
> empty, hence fieldInfos.hasVectors returns false.
> will post a patch shortly.




[jira] [Commented] (SOLR-2731) CSVResponseWriter should optionally return numfound

2011-08-26 Thread Lance Norskog (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13091697#comment-13091697
 ] 

Lance Norskog commented on SOLR-2731:
-

-1

* When you do the same query twice, the second time it usually takes 0ms. If it 
doesn't, turn on query caching.
* You can code these variations with Velocity. I would stick with keeping the 
very simplest CSV output and then coding any additions yourself.




> CSVResponseWriter should optionally return numfound
> ---
>
> Key: SOLR-2731
> URL: https://issues.apache.org/jira/browse/SOLR-2731
> Project: Solr
>  Issue Type: Improvement
>  Components: Response Writers
>Affects Versions: 3.1, 3.3, 4.0
>Reporter: Jon Hoffman
>  Labels: patch
> Fix For: 3.1.1, 3.3, 4.0
>
> Attachments: SOLR-2731.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> an optional parameter "csv.numfound=true" can be added to the request which 
> causes the first line of the response to be the numfound.  This would have no 
> impact on existing behavior, and those who are interested in that value can 
> simply read off the first line before sending to their usual csv parser.




trunk test failure (1314350401)

2011-08-26 Thread Charlie Cron
A test failure occurred running the tests.

revert! revert! revert!

You can see the entire build log at 
http://sierranevada.servebeer.com/1314350401.log

Thanks,
Charlie Cron





trunk test failure (1314346501)

2011-08-26 Thread Charlie Cron
A test failure occurred running the tests.

revert! revert! revert!

You can see the entire build log at 
http://sierranevada.servebeer.com/1314346501.log

Thanks,
Charlie Cron





[jira] [Commented] (LUCENE-3286) Move XML QueryParser to queryparser module

2011-08-26 Thread Chris Male (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13091675#comment-13091675
 ] 

Chris Male commented on LUCENE-3286:


Okay, there doesn't seem to be any way to support the luke target going 
forward.  So I'm going to comment it out in the build.xml, and when the 
incompatibility issue is addressed, it can be added back in.

I'll wait for some of the GSoC merges to be completed and then I'll commit this.


> Move XML QueryParser to queryparser module
> --
>
> Key: LUCENE-3286
> URL: https://issues.apache.org/jira/browse/LUCENE-3286
> Project: Lucene - Java
>  Issue Type: Sub-task
>  Components: modules/queryparser
>Reporter: Chris Male
> Attachments: LUCENE-3286-core.patch, LUCENE-3286-core.patch, 
> LUCENE-3286-core.patch, LUCENE-3286-core.patch, LUCENE-3286.patch
>
>
> The XML QueryParser will be ported across to queryparser module.
> As part of this work, we'll move the QP's demo into the demo module.




trunk test failure (1314342961)

2011-08-26 Thread Charlie Cron
A test failure occurred running the tests.

revert! revert! revert!

You can see the entire build log at 
http://sierranevada.servebeer.com/1314342961.log

Thanks,
Charlie Cron





[jira] [Commented] (LUCENE-3403) Term vectors missing after addIndexes + optimize

2011-08-26 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13091660#comment-13091660
 ] 

Simon Willnauer commented on LUCENE-3403:
-

Good catch Shai. Does this happen on 4.0 too? I don't think we have 
setHasVectors there anymore. I am just wondering since you put 4.0 as a fix 
version.

> Term vectors missing after addIndexes + optimize
> 
>
> Key: LUCENE-3403
> URL: https://issues.apache.org/jira/browse/LUCENE-3403
> Project: Lucene - Java
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 3.3
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Blocker
> Fix For: 3.4, 4.0
>
> Attachments: LUCENE-3403.patch
>
>
> I encountered a problem with addIndexes where term vectors disappeared 
> following optimize(). I wrote a simple test case which demonstrates the 
> problem. The bug appears with both addIndexes() versions, but does not appear 
> if addDocument is called twice, committing changes in between.
> I think I tracked the problem down to IndexWriter.mergeMiddle() -- it sets 
> term vectors before merger.merge() was called. In the addDocs case, 
> merger.fieldInfos is already populated, while in the addIndexes case it is 
> empty, hence fieldInfos.hasVectors returns false.
> will post a patch shortly.
