[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5801 - Failure

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5801/

2 tests failed.
FAILED:  
org.apache.solr.update.DirectUpdateHandlerOptimizeTest.testWatchChildren

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at java.lang.Thread.run(Thread.java:636)


FAILED:  TEST-org.apache.solr.cloud.ZkSolrClientTest.xml.init

Error Message:


Stack Trace:
Test report file 
/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/solr/build/test-results/TEST-org.apache.solr.cloud.ZkSolrClientTest.xml
 was length 0



Build Log (for compile errors):
[...truncated 8461 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[HUDSON] Solr-trunk - Build # 1439 - Still Failing

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Solr-trunk/1439/

All tests passed

Build Log (for compile errors):
[...truncated 11479 lines...]






[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5802 - Still Failing

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5802/

2 tests failed.
FAILED:  
org.apache.solr.update.DirectUpdateHandlerOptimizeTest.testWatchChildren

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at java.lang.Thread.run(Thread.java:636)


FAILED:  TEST-org.apache.solr.cloud.ZkSolrClientTest.xml.init

Error Message:


Stack Trace:
Test report file 
/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/solr/build/test-results/TEST-org.apache.solr.cloud.ZkSolrClientTest.xml
 was length 0



Build Log (for compile errors):
[...truncated 8466 lines...]






[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5803 - Still Failing

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5803/

2 tests failed.
FAILED:  
org.apache.solr.update.DirectUpdateHandlerOptimizeTest.testWatchChildren

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at java.lang.Thread.run(Thread.java:636)


FAILED:  TEST-org.apache.solr.cloud.ZkSolrClientTest.xml.init

Error Message:


Stack Trace:
Test report file 
/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/solr/build/test-results/TEST-org.apache.solr.cloud.ZkSolrClientTest.xml
 was length 0



Build Log (for compile errors):
[...truncated 8466 lines...]






[jira] Commented: (LUCENE-2959) [GSoC] Implementing State of the Art Ranking for Lucene

2011-03-11 Thread David Mark Nemeskey (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13005575#comment-13005575
 ] 

David Mark Nemeskey commented on LUCENE-2959:
-

Andrzej: thanks! Indeed, I have read that paper, but have only skimmed 
through the code. I am also aware of at least one BM25 implementation for 
Lucene, which may or may not be what issue LUCENE-2091 is about. I need to 
take a look into it.

 [GSoC] Implementing State of the Art Ranking for Lucene
 ---

 Key: LUCENE-2959
 URL: https://issues.apache.org/jira/browse/LUCENE-2959
 Project: Lucene - Java
  Issue Type: New Feature
  Components: Examples, Javadocs, Query/Scoring
Reporter: David Mark Nemeskey
  Labels: gsoc2011, lucene-gsoc-11
 Attachments: implementation_plan.pdf, proposal.pdf


 Lucene employs the Vector Space Model (VSM) to rank documents, which compares
 unfavorably to state of the art algorithms, such as BM25. Moreover, the
 architecture is tailored specifically to VSM, which makes the addition of new
 ranking functions a non-trivial task.
 This project aims to bring state of the art ranking methods to Lucene and to
 implement a query architecture with pluggable ranking functions.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5806 - Failure

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5806/

2 tests failed.
FAILED:  
org.apache.solr.update.DirectUpdateHandlerOptimizeTest.testWatchChildren

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at java.lang.Thread.run(Thread.java:636)


FAILED:  TEST-org.apache.solr.cloud.ZkSolrClientTest.xml.init

Error Message:


Stack Trace:
Test report file 
/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/solr/build/test-results/TEST-org.apache.solr.cloud.ZkSolrClientTest.xml
 was length 0



Build Log (for compile errors):
[...truncated 8454 lines...]






[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5807 - Still Failing

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5807/

2 tests failed.
FAILED:  
org.apache.solr.update.DirectUpdateHandlerOptimizeTest.testWatchChildren

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at java.lang.Thread.run(Thread.java:636)


FAILED:  TEST-org.apache.solr.cloud.ZkSolrClientTest.xml.init

Error Message:


Stack Trace:
Test report file 
/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/solr/build/test-results/TEST-org.apache.solr.cloud.ZkSolrClientTest.xml
 was length 0



Build Log (for compile errors):
[...truncated 8470 lines...]






[JENKINS-MAVEN] Lucene-Solr-Maven-3.x #55: POMs out of sync

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-Maven-3.x/55/

All tests passed

Build Log (for compile errors):
[...truncated 17599 lines...]






[jira] Commented: (LUCENE-2958) WriteLineDocTask improvements

2011-03-11 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13005612#comment-13005612
 ] 

Shai Erera commented on LUCENE-2958:


bq. Really, it would be better if LineDocSource could directly set Field values.

That will break the separation we have today -- ContentSource returns DocData 
which is not a Lucene Document, and DocMaker creates a Document out of it. 
Remember that we were in this design before -- DocMaker was responsible for 
both parsing the content and creating a Document out of it. The current design 
is much more flexible.

bq. until then we should just pass the full String line to eg a processLine 
method

I agree. Either processLine or getDocData or whatever, but something which 
receives a line and returns DocData.
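The ContentSource/DocData/DocMaker separation Shai describes, plus the proposed processLine hook, can be sketched roughly like this (the class shapes, the processLine name, and the tab-separated line format below are simplified stand-ins for the real contrib/benchmark APIs, not their actual signatures):

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for benchmark's DocData: plain parsed content, not a Lucene Document.
class DocData {
    String name;
    String body;
    Map<String, String> props = new HashMap<>();
}

// Stand-in for LineDocSource: only knows how to turn a raw line into DocData.
class LineDocSource {
    // Hypothetical processLine hook: subclasses override how a line is
    // parsed, without ever touching Lucene Documents.
    protected DocData processLine(String line) {
        DocData d = new DocData();
        String[] parts = line.split("\t", 2);
        d.name = parts[0];
        d.body = parts.length > 1 ? parts[1] : "";
        return d;
    }
}

// Stand-in for DocMaker: the only place DocData becomes a (here: stringified) Document.
class DocMaker {
    static String makeDocument(DocData d) {
        return "name=" + d.name + " body=" + d.body;
    }
}
```

This keeps content parsing and Document construction in separate classes, which is the flexibility the current design provides.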

 WriteLineDocTask improvements
 -

 Key: LUCENE-2958
 URL: https://issues.apache.org/jira/browse/LUCENE-2958
 Project: Lucene - Java
  Issue Type: Improvement
  Components: contrib/benchmark
Reporter: Doron Cohen
Assignee: Doron Cohen
Priority: Minor
 Fix For: 3.2, 4.0

 Attachments: LUCENE-2958.patch, LUCENE-2958.patch


 Make WriteLineDocTask and LineDocSource more flexible/extendable:
 * allow to emit lines also for empty docs (keep current behavior as default)
 * allow more/less/other fields




Re: IndexWriter#setRAMBufferSizeMB removed in trunk

2011-03-11 Thread Earwin Burrfoot
Is it really that hard to recreate IndexWriter if you have to change
the settings??

Yeah, yeah, you lose all your precious reused buffers, and maybe
there's a small indexing latency spike, when switching from old IW to
new one, but people aren't changing their IW configs several times a
second?

I suggest banning as many runtime-mutable settings as humanly
possible, and asking people to recreate objects for reconfiguration, be
it IW, IR, Analyzers, whatnot.
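The recreate-instead-of-mutate policy Earwin advocates can be sketched without any Lucene types; Config and Writer below are illustrative stand-ins for IWC and IW, not the real classes:

```java
// Config is immutable, so the only way to change a setting is to build a
// new Writer -- the policy being proposed here.
final class Config {
    final double ramBufferMB;
    Config(double ramBufferMB) { this.ramBufferMB = ramBufferMB; }
    // "Setters" return a new Config instead of mutating this one.
    Config withRamBufferMB(double mb) { return new Config(mb); }
}

final class Writer implements AutoCloseable {
    final Config config;
    Writer(Config config) { this.config = config; }
    @Override public void close() { /* flush, release reused buffers */ }
}

class Reconfigure {
    static Writer reconfigure(Writer old, Config newConfig) {
        old.close();                   // one-time latency cost on switch...
        return new Writer(newConfig);  // ...then a consistent new instance
    }
}
```

The cost is exactly the "small indexing latency spike" mentioned above, paid once per reconfiguration.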

On Thu, Mar 10, 2011 at 23:07, Michael McCandless
luc...@mikemccandless.com wrote:
 On Thu, Mar 10, 2011 at 7:28 AM, Robert Muir rcm...@gmail.com wrote:

 This should block the release: if IndexWriterConfig is a broken design
 then we need to revert this now before its released, not make users
 switch over and then undeprecate/revert in a future release.

 +1

 I think we have to sort this out, one way or another, before releasing 3.1.

 I really don't like splitting setters across IWC vs IW.  That'll just
 cause confusion, and noise over time as we change our minds about
 where things belong.

 Looking through IWC, it seems that most setters can be done live.
 In fact, setRAMBufferSizeMB is *almost* live: all places in IW that
 use this pull it from the config, except for DocumentsWriter.  We
 could just push the config down to DW and have it pull live too...

 Other settings are not pulled live but for no good reason, eg
 termsIndexInterval is copied to a private field in IW but could just
 as easily be pulled when it's time to write a new segment...

 Maybe we should simply document which settings are live vs only take
 effect at init time?

 Mike

 --
 Mike

 http://blog.mikemccandless.com






-- 
Kirill Zakharenko/Кирилл Захаренко
E-Mail/Jabber: ear...@gmail.com
Phone: +7 (495) 683-567-4
ICQ: 104465785




[jira] Commented: (LUCENE-2960) Allow (or bring back) the ability to setRAMBufferSizeMB on an open IndexWriter

2011-03-11 Thread Earwin Burrfoot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13005617#comment-13005617
 ] 

Earwin Burrfoot commented on LUCENE-2960:
-

As I said on the list - if one needs to change IW config, he can always 
recreate IW with new settings.
Such changes cannot happen often enough for recreation to affect indexing 
performance.

The fact that you can change IW's behaviour post-construction by modifying an 
unrelated IWC instance is frightening. IW should either make a private copy of 
the IWC when constructing, or IWC should be made immutable.
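The aliasing hazard described here, and the private-copy fix, can be sketched as follows (MutableConfig and the two writer classes are illustrative stand-ins, not Lucene's actual IWC/IW):

```java
// Stand-in for a mutable IWC.
class MutableConfig implements Cloneable {
    double ramBufferMB = 16.0;
    @Override public MutableConfig clone() {
        try { return (MutableConfig) super.clone(); }
        catch (CloneNotSupportedException e) { throw new AssertionError(e); }
    }
}

// Aliases the caller's config: later external mutation changes its behaviour.
class UnsafeWriter {
    final MutableConfig config;
    UnsafeWriter(MutableConfig c) { this.config = c; }
}

// Takes a private copy at construction, so external mutation has no effect.
class SafeWriter {
    final MutableConfig config;
    SafeWriter(MutableConfig c) { this.config = c.clone(); }
}
```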

 Allow (or bring back) the ability to setRAMBufferSizeMB on an open IndexWriter
 --

 Key: LUCENE-2960
 URL: https://issues.apache.org/jira/browse/LUCENE-2960
 Project: Lucene - Java
  Issue Type: Improvement
  Components: Index
Reporter: Shay Banon
Priority: Blocker
 Fix For: 3.1, 4.0


 In 3.1 the ability to setRAMBufferSizeMB is deprecated, and removed in trunk. 
 It would be great to be able to control that on a live IndexWriter. Other 
 possible two methods that would be great to bring back are 
 setTermIndexInterval and setReaderTermsIndexDivisor. Most of the other 
 setters can actually be set on the MergePolicy itself, so no need for setters 
 for those (I think).




Re: IndexWriter#setRAMBufferSizeMB removed in trunk

2011-03-11 Thread Shai Erera
I agree. After IWC, the only setter left in IW is setInfoStream, which makes
sense. But the rest ... assuming these config changes don't happen very
often, recreating IW doesn't sound like a big deal to me. The alternative
of complicating IWC to support runtime changes -- we need to be absolutely
sure it's worth it.

Also, if the solution is to allow changing IWC (runtime) settings, then I
don't think this issue should block 3.1? We can anyway add other runtime
settings following 3.1, and we won't undeprecate anything. So maybe mark
that issue as a non-blocker?

Shai

On Fri, Mar 11, 2011 at 2:20 PM, Earwin Burrfoot ear...@gmail.com wrote:

 Is it really that hard to recreate IndexWriter if you have to change
 the settings??

 Yeah, yeah, you lose all your precious reused buffers, and maybe
 there's a small indexing latency spike, when switching from old IW to
 new one, but people aren't changing their IW configs several times a
 second?

 I suggest banning as much runtime-mutable settings as humanely
 possible, and ask people to recreate objects for reconfiguration, be
 it IW, IR, Analyzers, whatnot.

 On Thu, Mar 10, 2011 at 23:07, Michael McCandless
 luc...@mikemccandless.com wrote:
  On Thu, Mar 10, 2011 at 7:28 AM, Robert Muir rcm...@gmail.com wrote:
 
  This should block the release: if IndexWriterConfig is a broken design
  then we need to revert this now before its released, not make users
  switch over and then undeprecate/revert in a future release.
 
  +1
 
  I think we have to sort this out, one way or another, before releasing
 3.1.
 
  I really don't like splitting setters across IWC vs IW.  That'll just
  cause confusion, and noise over time as we change our minds about
  where things belong.
 
  Looking through IWC, it seems that most setters can be done live.
  In fact, setRAMBufferSizeMB is *almost* live: all places in IW that
  use this pull it from the config, except for DocumentsWriter.  We
  could just push the config down to DW and have it pull live too...
 
  Other settings are not pulled live but for no good reason, eg
  termsIndexInterval is copied to a private field in IW but could just
  as easily be pulled when it's time to write a new segment...
 
  Maybe we should simply document which settings are live vs only take
  effect at init time?
 
  Mike
 
  --
  Mike
 
  http://blog.mikemccandless.com
 
 
 



 --
 Kirill Zakharenko/Кирилл Захаренко
 E-Mail/Jabber: ear...@gmail.com
 Phone: +7 (495) 683-567-4
 ICQ: 104465785





[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5809 - Still Failing

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5809/

2 tests failed.
FAILED:  
org.apache.solr.update.DirectUpdateHandlerOptimizeTest.testWatchChildren

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at java.lang.Thread.run(Thread.java:636)


FAILED:  TEST-org.apache.solr.cloud.ZkSolrClientTest.xml.init

Error Message:


Stack Trace:
Test report file 
/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/solr/build/test-results/TEST-org.apache.solr.cloud.ZkSolrClientTest.xml
 was length 0



Build Log (for compile errors):
[...truncated 8461 lines...]






Re: IndexWriter#setRAMBufferSizeMB removed in trunk

2011-03-11 Thread Earwin Burrfoot
Thanks for your support, but I don't think setInfoStream makes any
sense either : )

Do we /change/ infoStreams for IW @runtime? Why can't we pass it as
constructor argument/IWC field?
Ok, just maybe, I can imagine a case, where a certain app runs
happily, then misbehaves, and then you, with some clever trickery
supply it a fresh infoStream, to capture the problem live, without
restarting.
So, just maybe, we should leave setInfoStream as-is.

2011/3/11 Shai Erera ser...@gmail.com:
 I agree. After IWC, the only setter left in IW is setInfoStream which makes
 sense. But the rest ... assuming these config change don't happen very
 often, recreating IW doesn't sound like a big thing to me. The alternative
 of complicating IWC to support runtime changes -- we need to be absolutely
 sure it's worth it.

 Also, if the solution is to allow changing IWC (runtime) settings, then I
 don't think this issue should block 3.1? We can anyway add other runtime
 settings following 3.1, and we won't undeprecate anything. So maybe mark
 that issue as a non-blocker?

 Shai

 On Fri, Mar 11, 2011 at 2:20 PM, Earwin Burrfoot ear...@gmail.com wrote:

 Is it really that hard to recreate IndexWriter if you have to change
 the settings??

 Yeah, yeah, you lose all your precious reused buffers, and maybe
 there's a small indexing latency spike, when switching from old IW to
 new one, but people aren't changing their IW configs several times a
 second?

 I suggest banning as much runtime-mutable settings as humanely
 possible, and ask people to recreate objects for reconfiguration, be
 it IW, IR, Analyzers, whatnot.

 On Thu, Mar 10, 2011 at 23:07, Michael McCandless
 luc...@mikemccandless.com wrote:
  On Thu, Mar 10, 2011 at 7:28 AM, Robert Muir rcm...@gmail.com wrote:
 
  This should block the release: if IndexWriterConfig is a broken design
  then we need to revert this now before its released, not make users
  switch over and then undeprecate/revert in a future release.
 
  +1
 
  I think we have to sort this out, one way or another, before releasing
  3.1.
 
  I really don't like splitting setters across IWC vs IW.  That'll just
  cause confusion, and noise over time as we change our minds about
  where things belong.
 
  Looking through IWC, it seems that most setters can be done live.
  In fact, setRAMBufferSizeMB is *almost* live: all places in IW that
  use this pull it from the config, except for DocumentsWriter.  We
  could just push the config down to DW and have it pull live too...
 
  Other settings are not pulled live but for no good reason, eg
  termsIndexInterval is copied to a private field in IW but could just
  as easily be pulled when it's time to write a new segment...
 
  Maybe we should simply document which settings are live vs only take
  effect at init time?
 
  Mike
 
  --
  Mike
 
  http://blog.mikemccandless.com
 
 
 



 --
 Kirill Zakharenko/Кирилл Захаренко
 E-Mail/Jabber: ear...@gmail.com
 Phone: +7 (495) 683-567-4
 ICQ: 104465785







-- 
Kirill Zakharenko/Кирилл Захаренко
E-Mail/Jabber: ear...@gmail.com
Phone: +7 (495) 683-567-4
ICQ: 104465785




Re: [VOTE] Lucene and Solr 3.1 release candidate

2011-03-11 Thread Yonik Seeley
On Thu, Mar 10, 2011 at 6:49 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:
 The number of lucene jars included in the release is also odd -- they
 are embedded in the solr.war obviously, but not included anywhere else.
 so people wanting to do something like use apache-solr-core-3.1.0.jar to
 embed solr in their app still need to get the lucene jars from a distinct
 release ... except that there does seem to be 3 lucene jars included in
 ./contrib/analysis-extras/lucene-libs (i suspect this was a mistake in an
 intentional exclusion of those jars)

I was just going for including jars needed to run (and the lucene jars
are in the war, but the ones from analysis-extras are not).
Thinking about it again... we should either leave out lib (which
should also be in the war) or include lucene_lib.
Doing the latter should make it possible to compile plugins against a
binary release w/o exploding the war... but I don't know how important
that is.

-Yonik
http://lucidimagination.com




[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5810 - Still Failing

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5810/

2 tests failed.
FAILED:  
org.apache.solr.update.DirectUpdateHandlerOptimizeTest.testWatchChildren

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at java.lang.Thread.run(Thread.java:636)


FAILED:  TEST-org.apache.solr.cloud.ZkSolrClientTest.xml.init

Error Message:


Stack Trace:
Test report file 
/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/solr/build/test-results/TEST-org.apache.solr.cloud.ZkSolrClientTest.xml
 was length 0



Build Log (for compile errors):
[...truncated 8475 lines...]






[jira] Commented: (LUCENE-2324) Per thread DocumentsWriters that write their own private segments

2011-03-11 Thread Jason Rutherglen (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13005631#comment-13005631
 ] 

Jason Rutherglen commented on LUCENE-2324:
--

bq. I think making a different data structure to hold low-DF terms would 
actually be a big boost in RAM efficiency. The RAM-per-unique-term is fairly 
high...

However we're not sure why a largish 1+ GB RAM buffer seems to slow down?  If 
we're round-robin indexing against the DWPTs I think they'll have a similar 
number of unique terms as today, even though each DWPT will be smaller in 
total size from each containing 1/Nth of the docs.

 Per thread DocumentsWriters that write their own private segments
 -

 Key: LUCENE-2324
 URL: https://issues.apache.org/jira/browse/LUCENE-2324
 Project: Lucene - Java
  Issue Type: Improvement
  Components: Index
Reporter: Michael Busch
Assignee: Michael Busch
Priority: Minor
 Fix For: Realtime Branch

 Attachments: LUCENE-2324-SMALL.patch, LUCENE-2324-SMALL.patch, 
 LUCENE-2324-SMALL.patch, LUCENE-2324-SMALL.patch, LUCENE-2324-SMALL.patch, 
 LUCENE-2324.patch, LUCENE-2324.patch, LUCENE-2324.patch, LUCENE-2324.patch, 
 lucene-2324.patch, lucene-2324.patch, test.out, test.out, test.out, test.out


 See LUCENE-2293 for motivation and more details.
 I'm copying here Mike's summary he posted on 2293:
 Change the approach for how we buffer in RAM to a more isolated
 approach, whereby IW has N fully independent RAM segments
 in-process and when a doc needs to be indexed it's added to one of
 them. Each segment would also write its own doc stores and
 normal segment merging (not the inefficient merge we now do on
 flush) would merge them. This should be a good simplification in
 the chain (eg maybe we can remove the *PerThread classes). The
 segments can flush independently, letting us make much better
 concurrent use of IO  CPU.




Problem of Replication Reservation Duration

2011-03-11 Thread Li Li
hi all,
    The replication handler in solr 1.4 which we use seems to be a
little problematic in some extreme situations.
    The default reserve duration is 10s and can't be modified by any method:
      private Integer reserveCommitDuration =
SnapPuller.readInterval("00:00:10");
    The current implementation is: the slave sends an http
request (CMD_GET_FILE_LIST) asking the master to list the current index files.
    In the master's response code, it will reserve this commit for 10s:
      // reserve the indexcommit for sometime
      core.getDeletionPolicy().setReserveDuration(version,
reserveCommitDuration);
    If the master's index is changed within 10s, the old version
will not be deleted. Otherwise, the old version will be deleted.
    The slave then gets the files in the list one by one.
    Consider the following situation.
    Every midnight we optimize the whole index into one single
index, and every 15 minutes we add new segments to it.
    E.g. when the slave copies the large optimized index, it takes
more than 15 minutes, so it will fail to copy all the files and
retry 5 minutes later. But each time it re-copies all the files
into a new tmp directory, so it will fail again and again as long as
we keep updating the index within 15 minutes.
    We can tackle this problem by setting reserveCommitDuration to 20
minutes. But then, because we update a small number of
documents very frequently, many useless index versions will be reserved,
which is a waste of disk space.
    Has anyone confronted this problem before, and is there any solution for it?
    We came up with an ugly solution like this: the slave fetches files using
multiple threads, one thread per file. Thus the master will keep open all the
files that the slave needs. Even if the master then wants to delete those
files, their data stays on disk because the inode reference count is larger
than 0. But because having the master read too many files at once will degrade
its performance, we want to use some synchronization mechanism to allow only
1 or 2 ReplicationHandler threads to run the CMD_GET_FILE command at a time.
    Is that solution feasible?
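The throttling mechanism suggested at the end could be sketched with a plain java.util.concurrent.Semaphore; the class and method names below (FileFetchThrottle, serveFile, MAX_CONCURRENT_FETCHES) are hypothetical, not Solr's actual ReplicationHandler API:

```java
import java.util.concurrent.Semaphore;

// Caps how many replication file-fetch requests the master serves at once.
class FileFetchThrottle {
    private static final int MAX_CONCURRENT_FETCHES = 2;
    private final Semaphore permits = new Semaphore(MAX_CONCURRENT_FETCHES, true);

    String serveFile(String fileName) {
        permits.acquireUninterruptibly(); // blocks while 2 fetches are in flight
        try {
            return "contents-of-" + fileName; // stand-in for streaming the file
        } finally {
            permits.release();
        }
    }

    int availablePermits() { return permits.availablePermits(); }
}
```

The fair (FIFO) semaphore keeps slaves from starving each other while bounding the master's open-file load.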




RE: Problem of Replication Reservation Duration

2011-03-11 Thread Steven A Rowe
Hi Li Li,

Please do not use the solr-dev mailing list - Solr and Lucene development both 
use the dev at lucene.apache.org list.

Steve

 -Original Message-
 From: Li Li [mailto:fancye...@gmail.com]
 Sent: Friday, March 11, 2011 8:41 AM
 To: solr-...@lucene.apache.org
 Subject: Problem of Replication Reservation Duration
 
 hi all,
     The replication handler in solr 1.4 which we used seems to be a
 little problematic in some extreme situation.
     The default reserve duration is 10s and can't modified by any method.
       private Integer reserveCommitDuration =
 SnapPuller.readInterval(00:00:10);
     The current implementation is: slave send a http
 request(CMD_GET_FILE_LIST) to ask server list current index files.
     In the response codes of master, it will reserve this commit for 10s.
       // reserve the indexcommit for sometime
       core.getDeletionPolicy().setReserveDuration(version,
 reserveCommitDuration);
    If the master's indexes are changed within 10s, the old version
 will not be deleted. Otherwise, the old version will be deleted.
     slave then get the files in the list one by one.
     considering the following situation.
     Every mid-night we optimize the whole indexes into one single
 index, and every 15 minutes, we add new segments to it.
     e.g. when the slave copy the large optimized indexes, it will cost
 more than 15 minutes. So it will fail to copy all files and
 retry 5 minutes later. But each time it will re-copy all the files
 into a new tmp directory. it will fail again and again as long as
 we update indexes within 15 minutes.
     we can tack this problem by setting reserveCommitDuration to 20
 minutes. But then because we update small number of
 documents very frequently, many useless indexes will be reserved and
 it's a waste of disk space.
     Any one confronted the problem before and is there any solution for
 it?
     We comes up a ugly solution like this: slave fetches files using
 multithreads. each file a thread. Thus master will open all the
 files that slave needs. As long as the file is opened. when master
 want to delete them, these files will be deleted. But the inode
 reference count is larger than 0.  Because reading too many files by
 master will decrease the ability of master. we want to use
 some synchronization mechanism to allow only 1 or 2 ReplicationHandler
 threads are doing CMD_GET_FILE command.
     Is that solution feasible?
 



[jira] Created: (LUCENE-2965) Faster GeoHashUtils

2011-03-11 Thread JIRA
Faster GeoHashUtils
---

 Key: LUCENE-2965
 URL: https://issues.apache.org/jira/browse/LUCENE-2965
 Project: Lucene - Java
  Issue Type: Improvement
  Components: contrib/spatial
Affects Versions: 3.0.3, 3.0.2, 3.0.1, 3.0, 2.9.4, 2.9.2
Reporter: 朱文彬


I found the current implementation of 
org.apache.lucene.spatial.geohash.GeoHashUtils.encode and decode to be slow, and 
this is my improvement (400% faster):

/**
 * Encodes the given latitude and longitude into a geohash
 *
 * @param latitude Latitude to encode
 * @param longitude Longitude to encode
 * @return Geohash encoding of the longitude and latitude
 */
public static String encode(double latitude, double longitude) {
    double latL = -90, latH = 90;
    double lngL = -180, lngH = 180;
    double mid;
    // assert PRECISION % 2 == 0;
    final char[] geohash = new char[PRECISION];
    int len = 0;
    int ch = 0;
    while (len < PRECISION) {
        if (longitude > (mid = (lngL + lngH) * 0.5)) {
            ch |= 16;
            lngL = mid;
        } else
            lngH = mid;

        if (longitude > (mid = (lngL + lngH) * 0.5)) {
            ch |= 4;
            lngL = mid;
        } else
            lngH = mid;

        if (longitude > (mid = (lngL + lngH) * 0.5)) {
            ch |= 1;
            lngL = mid;
        } else {
            lngH = mid;
        }

        if (latitude > (mid = (latL + latH) * 0.5)) {
            ch |= 8;
            latL = mid;
        } else {
            latH = mid;
        }
        if (latitude > (mid = (latL + latH) * 0.5)) {
            ch |= 2;
            latL = mid;
        } else {
            latH = mid;
        }

        geohash[len++] = BASE_32[ch];
        ch = 0;

        if (latitude > (mid = (latL + latH) * 0.5)) {
            ch |= 16;
            latL = mid;
        } else
            latH = mid;

        if (longitude > (mid = (lngL + lngH) * 0.5)) {
            ch |= 8;
            lngL = mid;
        } else
            lngH = mid;

        if (latitude > (mid = (latL + latH) * 0.5)) {
            ch |= 4;
            latL = mid;
        } else
            latH = mid;

        if (longitude > (mid = (lngL + lngH) * 0.5)) {
            ch |= 2;
            lngL = mid;
        } else
            lngH = mid;

        if (latitude > (mid = (latL + latH) * 0.5)) {
            ch |= 1;
            latL = mid;
        } else
            latH = mid;

        geohash[len++] = BASE_32[ch];
        ch = 0;
    }

    return new String(geohash);
}

/**
 * Decodes the given geohash into a latitude and longitude
 *
 * @param geohash Geohash to deocde
 * @return Array with the latitude at index 0, and longitude at index 1
 */
public static double[] decode(String geohash) {
    double latL = -90.0, latH = 90.0;
    double lngL = -180.0, lngH = 180.0;
    double gap;
    int len = geohash.length();
    for (int i = 0; i < len; ) {
        switch (geohash.charAt(i++)) {
        case '0':
            latH -= (latH - latL) * 0.75;
            lngH -= (lngH - lngL) * 0.875;
            break;
        case '1':
            latH -= (latH - latL) * 0.75;
            gap = lngH - lngL;
            lngL += gap * 0.125;
            lngH -= gap * 0.75;
            break;
        case '2':
            gap = latH - latL;
            latL += gap * 0.25;
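The unrolled patch above can be cross-checked against a compact, non-unrolled version of the same bit-interleaving scheme. This is a minimal sketch, not the patch itself: the class name is hypothetical, and BASE_32, PRECISION, and the `>` comparison direction mirror what the patch appears to assume.

```java
// Classic geohash encoding: alternate longitude/latitude bisections,
// emitting one base-32 character per 5 bits. The patch above unrolls
// this loop so the two coordinate intervals are bisected in groups.
public class GeoHashSketch {
    private static final char[] BASE_32 =
            "0123456789bcdefghjkmnpqrstuvwxyz".toCharArray();
    private static final int PRECISION = 12; // assumed, matching the patch

    public static String encode(double latitude, double longitude) {
        double latL = -90, latH = 90, lngL = -180, lngH = 180;
        StringBuilder sb = new StringBuilder(PRECISION);
        boolean evenBit = true; // even bits refine longitude, odd bits latitude
        int ch = 0, bit = 0;
        while (sb.length() < PRECISION) {
            if (evenBit) {
                double mid = (lngL + lngH) / 2;
                if (longitude > mid) { ch = (ch << 1) | 1; lngL = mid; }
                else                 { ch <<= 1;           lngH = mid; }
            } else {
                double mid = (latL + latH) / 2;
                if (latitude > mid) { ch = (ch << 1) | 1; latL = mid; }
                else                { ch <<= 1;           latH = mid; }
            }
            evenBit = !evenBit;
            if (++bit == 5) { // 5 bits form one base-32 character
                sb.append(BASE_32[ch]);
                bit = 0;
                ch = 0;
            }
        }
        return sb.toString();
    }
}
```

Running both against known points (e.g. the classic 42.6, -5.6 -> "ezs42...") is a quick way to verify the unrolled version preserves the bit order.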

[jira] Commented: (LUCENE-2960) Allow (or bring back) the ability to setRAMBufferSizeMB on an open IndexWriter

2011-03-11 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13005641#comment-13005641
 ] 

Michael McCandless commented on LUCENE-2960:



bq. As I said on the list - if one needs to change IW config, he can always 
recreate IW with new settings.

That's not really true, in general.  If you have a large merge running
then closing the IW can take an unpredictable amount of time.  You
could abort the merges on close, but that's obviously not great.

Furthermore, closing the IW also forces you to commit, and I don't
like tying changing of configuration to forcing a commit.

In fact, it doesn't make sense to me to arbitrarily prevent settings
from being live, just because we've factored out IWC as a separate
class.  Many of these settings were naturally live before the IWC
cutover, and have no particular reason not to be (besides this API
change).

We could also rollback the IWC change.  I'm not saying that's a great
option, but, it should be on the table.

InfoStream, for example, should remain live: eg, maybe I'm having
trouble w/ optimize, so, I turn on infoStream and then call optimize.

The flushing params (maxBufferedDocs/Deletes/RAM) should also remain
live, since we have a very real user/data point (Shay) relying on
this.

But take MergedSegmentWarmer (used to be live but is now unchangeable).
This is a setting that obviously can easily remain live; there's no
technical reason for it not to be.  So why should we force it to be
unchangeable?  That can only remove freedom, freedom that is perhaps
valuable to an app somewhere.

 Allow (or bring back) the ability to setRAMBufferSizeMB on an open IndexWriter
 --

 Key: LUCENE-2960
 URL: https://issues.apache.org/jira/browse/LUCENE-2960
 Project: Lucene - Java
  Issue Type: Improvement
  Components: Index
Reporter: Shay Banon
Priority: Blocker
 Fix For: 3.1, 4.0


 In 3.1 the ability to setRAMBufferSizeMB is deprecated, and removed in trunk. 
 It would be great to be able to control that on a live IndexWriter. Other 
 possible two methods that would be great to bring back are 
 setTermIndexInterval and setReaderTermsIndexDivisor. Most of the other 
 setters can actually be set on the MergePolicy itself, so no need for setters 
 for those (I think).

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (LUCENE-2324) Per thread DocumentsWriters that write their own private segments

2011-03-11 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13005642#comment-13005642
 ] 

Michael McCandless commented on LUCENE-2324:


The slowdown could have been due to the merge sort by docID that we do today on 
flush.

Ie, if a given term X occurs in 6 DWPTs (today) then we merge-sort the docIDs 
from the postings of that term, which is costly.  (The normal merge that will 
merge these DWPTs after this issue lands just appends by docID.)

So maybe after this lands we'll see only faster performance the larger the RAM 
buffer :)  That would be nice!
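The cost difference Mike describes can be sketched in isolation (illustrative code, not Lucene internals): flushing N in-RAM buffers as one segment requires a per-term merge-sort of docID-sorted postings, while independent DWPT segments can later be combined by a plain append with a per-segment docID base.

```java
import java.util.ArrayList;
import java.util.List;

// Postings here are just sorted docID lists for a single term.
public class PostingsMergeSketch {
    // Merge-sort k docID-sorted lists into one: the costly per-term
    // work done today when several in-RAM buffers flush together.
    static List<Integer> mergeSort(List<List<Integer>> postings) {
        List<Integer> out = new ArrayList<>();
        int[] pos = new int[postings.size()];
        while (true) {
            int best = -1, bestDoc = Integer.MAX_VALUE;
            for (int i = 0; i < postings.size(); i++) {
                if (pos[i] < postings.get(i).size()
                        && postings.get(i).get(pos[i]) < bestDoc) {
                    bestDoc = postings.get(i).get(pos[i]);
                    best = i;
                }
            }
            if (best == -1) return out; // all lists exhausted
            out.add(bestDoc);
            pos[best]++;
        }
    }

    // Append with a base-docID shift: what a normal segment merge does
    // once each DWPT writes its own private segment.
    static List<Integer> append(List<List<Integer>> postings, int[] docBase) {
        List<Integer> out = new ArrayList<>();
        for (int i = 0; i < postings.size(); i++)
            for (int doc : postings.get(i))
                out.add(docBase[i] + doc);
        return out;
    }
}
```

The merge-sort scans all k cursors per emitted docID, while the append is a single linear pass, which is why the per-DWPT approach avoids the flush-time slowdown.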

 Per thread DocumentsWriters that write their own private segments
 -

 Key: LUCENE-2324
 URL: https://issues.apache.org/jira/browse/LUCENE-2324
 Project: Lucene - Java
  Issue Type: Improvement
  Components: Index
Reporter: Michael Busch
Assignee: Michael Busch
Priority: Minor
 Fix For: Realtime Branch

 Attachments: LUCENE-2324-SMALL.patch, LUCENE-2324-SMALL.patch, 
 LUCENE-2324-SMALL.patch, LUCENE-2324-SMALL.patch, LUCENE-2324-SMALL.patch, 
 LUCENE-2324.patch, LUCENE-2324.patch, LUCENE-2324.patch, LUCENE-2324.patch, 
 lucene-2324.patch, lucene-2324.patch, test.out, test.out, test.out, test.out


 See LUCENE-2293 for motivation and more details.
 I'm copying here Mike's summary he posted on 2293:
 Change the approach for how we buffer in RAM to a more isolated
 approach, whereby IW has N fully independent RAM segments
 in-process and when a doc needs to be indexed it's added to one of
 them. Each segment would also write its own doc stores and
 normal segment merging (not the inefficient merge we now do on
 flush) would merge them. This should be a good simplification in
 the chain (eg maybe we can remove the *PerThread classes). The
 segments can flush independently, letting us make much better
 concurrent use of IO & CPU.




[jira] Commented: (LUCENE-2960) Allow (or bring back) the ability to setRAMBufferSizeMB on an open IndexWriter

2011-03-11 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13005646#comment-13005646
 ] 

Yonik Seeley commented on LUCENE-2960:
--

bq. InfoStream, for example, should remain live

Agree - it's logging.

bq. But take MergedSegmentWarmer (used to be live but is now unchangeable). 
This is a setting that obviously can easily remain live; there's no technical 
reason for it not to be.

Anyone's implementation can be live (i.e. the impl could change its behavior 
over time for whatever reason).


 Allow (or bring back) the ability to setRAMBufferSizeMB on an open IndexWriter
 --

 Key: LUCENE-2960
 URL: https://issues.apache.org/jira/browse/LUCENE-2960
 Project: Lucene - Java
  Issue Type: Improvement
  Components: Index
Reporter: Shay Banon
Priority: Blocker
 Fix For: 3.1, 4.0


 In 3.1 the ability to setRAMBufferSizeMB is deprecated, and removed in trunk. 
 It would be great to be able to control that on a live IndexWriter. Other 
 possible two methods that would be great to bring back are 
 setTermIndexInterval and setReaderTermsIndexDivisor. Most of the other 
 setters can actually be set on the MergePolicy itself, so no need for setters 
 for those (I think).




[jira] Commented: (LUCENE-2958) WriteLineDocTask improvements

2011-03-11 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13005647#comment-13005647
 ] 

Michael McCandless commented on LUCENE-2958:


So the separation we have today of DocData from DocMaker allows what 
flexibility?  Is it just so that we can pull multiple docs from a single 
DocData?  EG the line file could have massive docs, but we want to index tiny 
docs, so DocMaker can split them up?

I agree that's useful... but it does result in somewhat synthetic docs.  EG 20 
docs in a row will have the same title and date (and any other properties).  If 
you are eval'ing a standard corpus, presumably you don't do this doc splitting, 
right?

The flexibility can only cost us performance (though maybe it's not so much of 
a hit).

 WriteLineDocTask improvements
 -

 Key: LUCENE-2958
 URL: https://issues.apache.org/jira/browse/LUCENE-2958
 Project: Lucene - Java
  Issue Type: Improvement
  Components: contrib/benchmark
Reporter: Doron Cohen
Assignee: Doron Cohen
Priority: Minor
 Fix For: 3.2, 4.0

 Attachments: LUCENE-2958.patch, LUCENE-2958.patch


 Make WriteLineDocTask and LineDocSource more flexible/extendable:
 * allow to emit lines also for empty docs (keep current behavior as default)
 * allow more/less/other fields




[jira] Commented: (LUCENE-2324) Per thread DocumentsWriters that write their own private segments

2011-03-11 Thread Jason Rutherglen (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13005651#comment-13005651
 ] 

Jason Rutherglen commented on LUCENE-2324:
--

{quote}Ie, if a given term X occurs in 6 DWPTs (today) then we merge-sort the 
docIDs from the postings of that term, which is costly. (The normal merge 
that will merge these DWPTs after this issue lands just appends by 
docID.){quote}

Right, this is the same principal motivation behind implementing DWPTs for use 
with realtime search, eg, the doc-id interleaving is too expensive to be 
performed at query time.

 Per thread DocumentsWriters that write their own private segments
 -

 Key: LUCENE-2324
 URL: https://issues.apache.org/jira/browse/LUCENE-2324
 Project: Lucene - Java
  Issue Type: Improvement
  Components: Index
Reporter: Michael Busch
Assignee: Michael Busch
Priority: Minor
 Fix For: Realtime Branch

 Attachments: LUCENE-2324-SMALL.patch, LUCENE-2324-SMALL.patch, 
 LUCENE-2324-SMALL.patch, LUCENE-2324-SMALL.patch, LUCENE-2324-SMALL.patch, 
 LUCENE-2324.patch, LUCENE-2324.patch, LUCENE-2324.patch, LUCENE-2324.patch, 
 lucene-2324.patch, lucene-2324.patch, test.out, test.out, test.out, test.out


 See LUCENE-2293 for motivation and more details.
 I'm copying here Mike's summary he posted on 2293:
 Change the approach for how we buffer in RAM to a more isolated
 approach, whereby IW has N fully independent RAM segments
 in-process and when a doc needs to be indexed it's added to one of
 them. Each segment would also write its own doc stores and
 normal segment merging (not the inefficient merge we now do on
 flush) would merge them. This should be a good simplification in
 the chain (eg maybe we can remove the *PerThread classes). The
 segments can flush independently, letting us make much better
 concurrent use of IO & CPU.




[jira] Commented: (LUCENE-2958) WriteLineDocTask improvements

2011-03-11 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13005653#comment-13005653
 ] 

Shai Erera commented on LUCENE-2958:


No, the flexibility is in the ability to have a TrecContentSource emitting the 
TREC documents, and multiple DocMakers that consume them and build Lucene 
documents out of them.

For example, one DocMaker can decide to split each doc into N tiny docs. 
Another can choose to add facets to it. Yet another can do complex analysis on 
it and produce richer documents.

Before that, you'd have to write a DocMaker for every such combination. E.g., 
if you wanted to add facets, you'd need to write a DocMaker per source of data 
with the same impl.

DocData as an intermediary object is not expensive, considering it's only a bin 
over some already allocated Strings. And we always reuse it, so you don't even 
allocate it more than once ...

I would hate to lose that flexibility.

 WriteLineDocTask improvements
 -

 Key: LUCENE-2958
 URL: https://issues.apache.org/jira/browse/LUCENE-2958
 Project: Lucene - Java
  Issue Type: Improvement
  Components: contrib/benchmark
Reporter: Doron Cohen
Assignee: Doron Cohen
Priority: Minor
 Fix For: 3.2, 4.0

 Attachments: LUCENE-2958.patch, LUCENE-2958.patch


 Make WriteLineDocTask and LineDocSource more flexible/extendable:
 * allow to emit lines also for empty docs (keep current behavior as default)
 * allow more/less/other fields




Fwd: Problem of Replication Reservation Duration

2011-03-11 Thread Li Li
-- Forwarded message --
From: Steven A Rowe sar...@syr.edu
Date: 2011/3/11
Subject: RE: Problem of Replication Reservation Duration
To: solr-...@lucene.apache.org solr-...@lucene.apache.org


Hi Li Li,

Please do not use the solr-dev mailing list - Solr and Lucene
development both use the dev at lucene.apache.org list.

Steve

 -Original Message-
 From: Li Li [mailto:fancye...@gmail.com]
 Sent: Friday, March 11, 2011 8:41 AM
 To: solr-...@lucene.apache.org
 Subject: Problem of Replication Reservation Duration

 hi all,
     The replication handler in solr 1.4, which we use, seems to be a
 little problematic in some extreme situations.
     The default reserve duration is 10s and can't be modified by any method:
       private Integer reserveCommitDuration =
 SnapPuller.readInterval("00:00:10");
     The current implementation is: the slave sends an HTTP
 request (CMD_GET_FILE_LIST) asking the master to list its current index files.
     In the master's response code, the commit is reserved for 10s:
       // reserve the indexcommit for sometime
       core.getDeletionPolicy().setReserveDuration(version,
 reserveCommitDuration);
    If the master's indexes change within 10s, the old version
 will not be deleted; otherwise it will be.
     The slave then gets the files in the list one by one.
     Consider the following situation:
     Every midnight we optimize the whole index into one single
 segment, and every 15 minutes we add new segments to it.
     When the slave copies the large optimized index, the copy takes
 more than 15 minutes, so it fails to fetch all the files and
 retries 5 minutes later. But each time it re-copies all the files
 into a new tmp directory, so it fails again and again as long as
 we keep updating the index every 15 minutes.
     We could tackle this by setting reserveCommitDuration to 20
 minutes, but because we also update small numbers of
 documents very frequently, many useless index versions would be
 reserved, wasting disk space.
     Has anyone hit this problem before, and is there any solution for it?
     We came up with an ugly solution: the slave fetches files using
 multiple threads, one thread per file. The master then opens every
 file the slave needs, so even if it deletes them while they are still
 open, the data survives because the inode reference count stays above 0.
 But since reading too many files at once degrades the master, we want
 some synchronization mechanism to allow only 1 or 2 ReplicationHandler
 threads to execute the CMD_GET_FILE command at a time.
     Is that solution feasible?

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




Fwd: Problem of Replication Reservation Duration

2011-03-11 Thread Li Li
-- Forwarded message --
From: Li Li fancye...@gmail.com
Date: 2011/3/11
Subject: Problem of Replication Reservation Duration
To: solr-...@lucene.apache.org


hi all,
    The replication handler in solr 1.4, which we use, seems to be a
little problematic in some extreme situations.
    The default reserve duration is 10s and can't be modified by any method:
      private Integer reserveCommitDuration =
SnapPuller.readInterval("00:00:10");
    The current implementation is: the slave sends an HTTP
request (CMD_GET_FILE_LIST) asking the master to list its current index files.
    In the master's response code, the commit is reserved for 10s:
      // reserve the indexcommit for sometime
      core.getDeletionPolicy().setReserveDuration(version,
reserveCommitDuration);
   If the master's indexes change within 10s, the old version
will not be deleted; otherwise it will be.
    The slave then gets the files in the list one by one.
    Consider the following situation:
    Every midnight we optimize the whole index into one single
segment, and every 15 minutes we add new segments to it.
    When the slave copies the large optimized index, the copy takes
more than 15 minutes, so it fails to fetch all the files and
retries 5 minutes later. But each time it re-copies all the files
into a new tmp directory, so it fails again and again as long as
we keep updating the index every 15 minutes.
    We could tackle this by setting reserveCommitDuration to 20
minutes, but because we also update small numbers of
documents very frequently, many useless index versions would be
reserved, wasting disk space.
    Has anyone hit this problem before, and is there any solution for it?
    We came up with an ugly solution: the slave fetches files using
multiple threads, one thread per file. The master then opens every
file the slave needs, so even if it deletes them while they are still
open, the data survives because the inode reference count stays above 0.
But since reading too many files at once degrades the master, we want
some synchronization mechanism to allow only 1 or 2 ReplicationHandler
threads to execute the CMD_GET_FILE command at a time.
    Is that solution feasible?
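The throttling idea in the last paragraph can be sketched with a plain java.util.concurrent.Semaphore. This is a hypothetical illustration, not actual ReplicationHandler code; the class and method names are invented.

```java
import java.util.concurrent.Semaphore;

// Cap the number of concurrent CMD_GET_FILE transfers on the master
// with a counting semaphore, as the author proposes (1 or 2 threads).
public class FileFetchThrottle {
    private final Semaphore permits;

    public FileFetchThrottle(int maxConcurrent) {
        // fair = true so waiting slaves are served in arrival order
        this.permits = new Semaphore(maxConcurrent, true);
    }

    public byte[] getFile(String name) {
        permits.acquireUninterruptibly(); // block while maxConcurrent in flight
        try {
            return readFile(name);
        } finally {
            permits.release(); // always free the slot, even on error
        }
    }

    private byte[] readFile(String name) {
        return new byte[0]; // placeholder for the actual file read
    }
}
```

With maxConcurrent = 2, a third concurrent getFile call simply waits, keeping the files open (and their inodes alive) without letting transfers overwhelm the master's I/O.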




[jira] Commented: (LUCENE-2965) Faster GeoHashUtils

2011-03-11 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13005655#comment-13005655
 ] 

David Smiley commented on LUCENE-2965:
--

The proper way to propose a code improvement is to post a patch file.  Please 
do so and then we'll discuss.
FYI, I already made some performance tweaks to encode/decode as part of SOLR-2155.

 Faster GeoHashUtils
 ---

 Key: LUCENE-2965
 URL: https://issues.apache.org/jira/browse/LUCENE-2965
 Project: Lucene - Java
  Issue Type: Improvement
  Components: contrib/spatial
Affects Versions: 2.9.2, 2.9.4, 3.0, 3.0.1, 3.0.2, 3.0.3
Reporter: 朱文彬

 I found that the current implementation of 
 org.apache.lucene.spatial.geohash.GeoHashUtils.encode and decode is slow, and 
 this is my improvement (400% faster).
 

[jira] Updated: (LUCENE-2952) Make license checking/maintenance easier/automated

2011-03-11 Thread Grant Ingersoll (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Ingersoll updated LUCENE-2952:


Attachment: LUCENE-2952.patch

Here's some real progress on this.  Works in standalone mode, but is not hooked 
into the build process yet.

 Make license checking/maintenance easier/automated
 --

 Key: LUCENE-2952
 URL: https://issues.apache.org/jira/browse/LUCENE-2952
 Project: Lucene - Java
  Issue Type: Improvement
Reporter: Grant Ingersoll
Priority: Minor
 Attachments: LUCENE-2952.patch, LUCENE-2952.patch


 Instead of waiting until release to check licenses are valid, we should make 
 it a part of our build process to ensure that all dependencies have proper 
 licenses, etc.




[jira] Commented: (LUCENE-2952) Make license checking/maintenance easier/automated

2011-03-11 Thread Grant Ingersoll (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13005666#comment-13005666
 ] 

Grant Ingersoll commented on LUCENE-2952:
-

Should note, I've only hooked it up for lucene/lib and solr/lib and not any of 
the modules or contrib.

 Make license checking/maintenance easier/automated
 --

 Key: LUCENE-2952
 URL: https://issues.apache.org/jira/browse/LUCENE-2952
 Project: Lucene - Java
  Issue Type: Improvement
Reporter: Grant Ingersoll
Priority: Minor
 Attachments: LUCENE-2952.patch, LUCENE-2952.patch


 Instead of waiting until release to check licenses are valid, we should make 
 it a part of our build process to ensure that all dependencies have proper 
 licenses, etc.




[jira] Commented: (LUCENE-2308) Separately specify a field's type

2011-03-11 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13005668#comment-13005668
 ] 

David Smiley commented on LUCENE-2308:
--

I'm surprised to barely even see a mention of Solr here, which of course 
already has a FieldType.  Might it be ported?

 Separately specify a field's type
 -

 Key: LUCENE-2308
 URL: https://issues.apache.org/jira/browse/LUCENE-2308
 Project: Lucene - Java
  Issue Type: Improvement
  Components: Index
Reporter: Michael McCandless
  Labels: gsoc2011, lucene-gsoc-11
 Fix For: 4.0


 This came up from discussions on IRC.  I'm summarizing here...
 Today when you make a Field to add to a document you can set things
 index or not, stored or not, analyzed or not, details like omitTfAP,
 omitNorms, index term vectors (separately controlling
 offsets/positions), etc.
 I think we should factor these out into a new class (FieldType?).
 Then you could re-use this FieldType instance across multiple fields.
 The Field instance would still hold the actual value.
 We could then do per-field analyzers by adding a setAnalyzer on the
 FieldType, instead of the separate PerFieldAnalzyerWrapper (likewise
 for per-field codecs (with flex), where we now have
 PerFieldCodecWrapper).
 This would NOT be a schema!  It's just refactoring what we already
 specify today.  EG it's not serialized into the index.
 This has been discussed before, and I know Michael Busch opened a more
 ambitious (I think?) issue.  I think this is a good first baby step.  We could
 consider a hierarchy of FieldType (NumericFieldType, etc.) but maybe hold
 off on that for starters...
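A minimal sketch of the refactoring being proposed (hypothetical API, not the final Lucene design): the per-field indexing options move into a reusable FieldType, and Field keeps only the name, the value, and a reference to its type.

```java
// Illustrative only: option names follow the ones listed in the issue.
public class FieldTypeSketch {
    // Reusable bundle of indexing options, shared across many fields.
    public static class FieldType {
        public boolean indexed;
        public boolean stored;
        public boolean analyzed;
        public boolean omitNorms;
        public boolean storeTermVectors;
    }

    // The Field keeps only its name and value; the options live in the type.
    public static class Field {
        public final String name;
        public final String value;
        public final FieldType type; // one FieldType instance, many Fields

        public Field(String name, String value, FieldType type) {
            this.name = name;
            this.value = value;
            this.type = type;
        }
    }
}
```

The point of the refactoring is the sharing: two fields built from the same FieldType hold the same options object, so per-field analyzers or codecs could later hang off the type rather than off wrappers like PerFieldAnalyzerWrapper.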




Re: [VOTE] Lucene and Solr 3.1 release candidate

2011-03-11 Thread Yonik Seeley
On Fri, Mar 11, 2011 at 8:14 AM, Yonik Seeley
yo...@lucidimagination.com wrote:
 On Thu, Mar 10, 2011 at 6:49 PM, Chris Hostetter
 hossman_luc...@fucit.org wrote:
 The number of lucene jars included in the release is also odd -- they
 are embedded in the solr.war obviously, but not included anywhere else.
 so people wanting to do something like use apache-solr-core-3.1.0.jar to
 embed solr in their app still need to get the lucene jars from a distinct
 release ... except that there does seem to be 3 lucene jars included in
 ./contrib/analysis-extras/lucene-libs (i suspect this was a mistake in an
 intentional exclusion of those jars)

 I was just going for including jars needed to run (and the lucene jars
 are in the war, but the ones from analysis-extras are not).
 Thinking about it again... we should either leave out lib (which
 should also be in the war) or include lucene_lib.

Oops, my mistake... I already did exclude lib as well.

-Yonik
http://lucidimagination.com




[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5812 - Failure

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5812/

2 tests failed.
FAILED:  
org.apache.solr.update.DirectUpdateHandlerOptimizeTest.testWatchChildren

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at java.lang.Thread.run(Thread.java:636)


FAILED:  TEST-org.apache.solr.cloud.ZkSolrClientTest.xml.init

Error Message:


Stack Trace:
Test report file 
/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/solr/build/test-results/TEST-org.apache.solr.cloud.ZkSolrClientTest.xml
 was length 0



Build Log (for compile errors):
[...truncated 8485 lines...]






[jira] Commented: (LUCENE-2091) Add BM25 Scoring to Lucene

2011-03-11 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13005688#comment-13005688
 ] 

Robert Muir commented on LUCENE-2091:
-

{quote}
your attachment (BM25SimilarityProvider) seems to rely on some other code 
(Stats.DocFieldStats)  AggregatesProvider .. which I guess is part of your DFR 
patch.. can you provide a pointer to that.
{quote}

Yeah this is from LUCENE-2392. Unfortunately it won't work with the most recent 
patch there, but both patches are just really exploration to see how we can 
divide into subtasks.

For an update, the JIRA issues aren't well linked but we have actually made 
pretty good progress on some major portions (imo these are the most 
interesting):
* Collection term stats: LUCENE-2862
* per-field similarity: LUCENE-2236
* termstate, to avoid redundant i/o for stats: LUCENE-2694
* norms cleanup: LUCENE-2771, LUCENE-2846

The next big step is to separate scoring from matching (see the latest patch on 
LUCENE-2392) so that similarity has full responsibility for all calculations, 
and so we get full integration with all queries, etc.

This isn't that complicated: however, in order to do this, we need to first 
refactor Explanations, so that a Similarity has the capability (and 
responsibility!) to fully explain its calculations. So I think this is the next 
issue to resolve before going any further.


 Add BM25 Scoring to Lucene
 --

 Key: LUCENE-2091
 URL: https://issues.apache.org/jira/browse/LUCENE-2091
 Project: Lucene - Java
  Issue Type: New Feature
  Components: contrib/*
Reporter: Yuval Feinstein
Priority: Minor
 Fix For: 4.0

 Attachments: BM25SimilarityProvider.java, LUCENE-2091.patch, 
 persianlucene.jpg

   Original Estimate: 48h
  Remaining Estimate: 48h

 http://nlp.uned.es/~jperezi/Lucene-BM25/ describes an implementation of 
 Okapi-BM25 scoring in the Lucene framework,
 as an alternative to the standard Lucene scoring (which is a version of mixed 
 boolean/TFIDF).
 I have refactored this a bit, added unit tests and improved the runtime 
 somewhat.
 I would like to contribute the code to Lucene under contrib. 




Re: custom ValueSource for decoding geohash into lat lon

2011-03-11 Thread Smiley, David W.

On Mar 10, 2011, at 6:21 PM, William Bell wrote:

 OK. But I am concerned that you are trying to bite off more than can
 be done easily. The sample call is:
 
 http://localhost:8983/solr/select?q=*:*&fq={!geofilt}&sfieldmulti=storemv&pt=43.17614,-90.57341&d=100&sfield=store&sort=geomultidist%28%29%20asc&sfieldmultidir=asc
 
 Notice that geomultidist() needs another field called storemv right
 now that is bar delimited. I tried to pull out the lat,long from
 geohash, but Dave stores the geohash values in Ngram for the purpose
 of filtering (I believe).

yep.  The field cache loader would have to filter out the grams not at full 
length.  Pretty easy.

 Here are the issues as I see them:
 
 1. ValueSources does not support MultiValue fields.
...
Technically it does.  A ValueSource's job seems to simply be to give access to 
abstract DocValues.java, which has methods like double doubleVal(int doc) but 
also void doubleVal(int doc, double[] vals), vals being an output parameter. 
Current use-cases assume a fixed number of values per document, not a variable 
number which is what I want. But I suppose there's nothing stopping me from 
using it for variable length values.  Of course the caller would have to know 
that.  It's a bit unfortunate that the signature of these methods don't return 
the array either since the caller doesn't know how big to make the array if 
it's variable length.  And again, I suppose there's nothing stopping me from 
adding a different method that works the way I want to. The only consumer of 
this Values/DocValues would be a special function query of my design so it's 
safe.

 2. Using ValueSource with one value is fast, and splitting it this way
 might be a lot slower to calculate distances. It is convenient, but
 could be slow. It might be better to just have solr.GeoHashField
 append to the internal field so that it can use ValueSource directly.
 
 Use an internal field that uses bars internally:
 
 store_lat_long_bar =  39.90923,-86.19389|42.37577,-72.50858
 
 For each lat,long value
- Calculate geohash and Ngram store
- Append to the internal field store_lat_long_bar based on the field name
 
 Option 2 is easier and makes it supportable now without waiting for
 redesign of ValueSource.
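As a rough illustration of option 2's storage format, the bar-delimited value could be decoded like this. The class and method names are hypothetical helpers for this sketch, not Solr code:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical decoder for the bar-delimited multi-point format described
// above, e.g. "39.90923,-86.19389|42.37577,-72.50858". Not actual Solr code.
public class BarDelimitedPoints {
    public static List<double[]> parse(String stored) {
        List<double[]> points = new ArrayList<double[]>();
        for (String pair : stored.split("\\|")) {   // one "lat,lon" per segment
            String[] latLon = pair.split(",");
            points.add(new double[] {
                Double.parseDouble(latLon[0]),      // latitude
                Double.parseDouble(latLon[1])       // longitude
            });
        }
        return points;
    }
}
```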

As I suggest above, I'm not sure I really need to wait for some redesign. I 
could just add the methods I want in my DocValues subclass for use by my 
spatial function query.
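A minimal sketch of that idea follows. The class and method names here are illustrative only; this is not the actual Lucene ValueSource/DocValues API, just the shape of an extra accessor a variable-length-aware consumer could call:

```java
// Hypothetical sketch of a DocValues-like class exposing a *variable*
// number of values per document, as discussed above. Names are illustrative;
// the real Lucene API differs.
public class MultiValueSketch {
    // Per-document value arrays, loaded by a field-cache-style loader.
    private final double[][] perDocValues;

    public MultiValueSketch(double[][] perDocValues) {
        this.perDocValues = perDocValues;
    }

    // Extra accessor returning the array itself, so a caller that knows about
    // variable-length values (e.g. a custom spatial function query) can use
    // it without guessing the array size up front.
    public double[] doubleVals(int doc) {
        return perDocValues[doc];
    }
}
```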

~ David Smiley
Author: http://www.packtpub.com/solr-1-4-enterprise-search-server/





-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5813 - Still Failing

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5813/

2 tests failed.
FAILED:  
org.apache.solr.update.DirectUpdateHandlerOptimizeTest.testWatchChildren

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at java.lang.Thread.run(Thread.java:636)


FAILED:  TEST-org.apache.solr.cloud.ZkSolrClientTest.xml.init

Error Message:


Stack Trace:
Test report file 
/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/solr/build/test-results/TEST-org.apache.solr.cloud.ZkSolrClientTest.xml
 was length 0



Build Log (for compile errors):
[...truncated 8479 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (LUCENE-2960) Allow (or bring back) the ability to setRAMBufferSizeMB on an open IndexWriter

2011-03-11 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13005696#comment-13005696
 ] 

Michael McCandless commented on LUCENE-2960:


bq. Anyone's implementation can be live (i.e. the impl could change it's 
behavior over time for whatever reason).

Well, that's really cheating.  I mean, yes, technically it's an out, so
it's certainly possible that an app can do the switching inside its
class... but that's not nice :)

EG if an app has LoadsAllDocsWarmer and VisitsAllPostingsWarmer (say)
and they want to switch between them (for some reason)... they'd likely
have to make a SegmentWarmerSwitcher class or something.

Seems silly because IW couldn't care less if you switch up your warmer.
It just needs to get the current warmer every time it goes and warms
a segment...
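The "switcher" workaround described above could look roughly like this. The Warmer interface here is a stand-in for Lucene's warmer abstraction (the real signature differs); the point is only that the app flips the delegate while IndexWriter keeps holding one object:

```java
import java.util.concurrent.atomic.AtomicReference;

// 'Warmer' is a stand-in for Lucene's warmer abstraction; the real API differs.
interface Warmer {
    void warm(String segmentName);
}

// Rough sketch of the SegmentWarmerSwitcher idea from the comment above.
public class SegmentWarmerSwitcher implements Warmer {
    private final AtomicReference<Warmer> delegate =
        new AtomicReference<Warmer>();

    public SegmentWarmerSwitcher(Warmer initial) {
        delegate.set(initial);
    }

    // The app flips this at runtime; the writer keeps holding the switcher.
    public void setDelegate(Warmer warmer) {
        delegate.set(warmer);
    }

    @Override
    public void warm(String segmentName) {
        delegate.get().warm(segmentName); // always use the current warmer
    }
}
```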



 Allow (or bring back) the ability to setRAMBufferSizeMB on an open IndexWriter
 --

 Key: LUCENE-2960
 URL: https://issues.apache.org/jira/browse/LUCENE-2960
 Project: Lucene - Java
  Issue Type: Improvement
  Components: Index
Reporter: Shay Banon
Priority: Blocker
 Fix For: 3.1, 4.0


 In 3.1 the ability to setRAMBufferSizeMB is deprecated, and removed in trunk. 
 It would be great to be able to control that on a live IndexWriter. Other 
 possible two methods that would be great to bring back are 
 setTermIndexInterval and setReaderTermsIndexDivisor. Most of the other 
 setters can actually be set on the MergePolicy itself, so no need for setters 
 for those (I think).

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Lucene and Solr 3.1 release candidate

2011-03-11 Thread Yonik Seeley
I'm trying to fix the solr javadoc targets.
I just noticed that it looks like we have a double-copy of the solr
javadoc too - I'll
try and fix that while I'm in there.

Overall I think things are looking pretty good - if anyone wants to review/fix
things, please run ant package and check the resulting output.  Many of
the items Hoss mentioned had already been fixed.

-Yonik
http://lucidimagination.com

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GSoC] Apache Lucene @ Google Summer of Code 2011 [STUDENTS READ THIS]

2011-03-11 Thread Simon Willnauer
Hey folks,

Google Summer of Code 2011 is very close and the Project Applications
Period has started recently. Now it's time to get some excited students
on board for this year's GSoC.

I encourage students to submit an application to the Google Summer of Code
web-application. Lucene & Solr are amazing projects and GSoC is an
incredible opportunity to join the community and push the project
forward.

If you are a student and you are interested in spending some time on a
great open source project while getting paid for it, you should submit
your application from March 28 - April 8, 2011. There are only 3
weeks until this process starts!

Quote from the GSoC website: "We hear almost universally from our
mentoring organizations that the best applications they receive are
from students who took the time to interact and discuss their ideas
before submitting an application, so make sure to check out each
organization's Ideas list to get to know a particular open source
organization better."

So if you have any ideas about what Lucene & Solr should have, or if you
find any of the GSoC pre-selected projects [1] interesting, please
join us on dev@lucene.apache.org [2].  Since you as a student must
apply for a certain project via the GSoC website [3], it's a good idea
to work on it ahead of time and include the community and possible
mentors as soon as possible.

Open source development here at the Apache Software
Foundation happens almost exclusively in the public and I encourage you to
follow this. Don't mail folks privately; please use the mailing list to
get the best possible visibility and attract interested community
members and push your idea forward. As always, it's the idea that
counts not the person!

That said, please do not underestimate the complexity of even small
GSoC projects. Don't try to rewrite Lucene or Solr!  A project
usually gains more from a smaller, well discussed and carefully
crafted & tested feature than from a half-baked monster change that's
too large to work with.

Once your proposal has been accepted and you begin work, you should
give the community the opportunity to iterate with you.  We prefer
progress over perfection so don't hesitate to describe your overall
vision, but when the rubber meets the road let's take it in small
steps.  A code patch of 20 KB is likely to be reviewed very quickly so
you get fast feedback, while a patch even 60 KB in size can take very
long. So try to break up your vision and the community will work with
you to get things done!

On behalf of the Lucene & Solr community,

Go! Join the mailing list and apply for GSoC 2011,

Simon

[1] 
https://issues.apache.org/jira/secure/IssueNavigator.jspa?reset=truejqlQuery=labels+%3D+lucene-gsoc-11
[2] http://lucene.apache.org/java/docs/mailinglists.html
[3] http://www.google-melange.com

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Reopened: (LUCENE-2957) generate-maven-artifacts target should include all non-Mavenized Lucene & Solr dependencies

2011-03-11 Thread Steven Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Rowe reopened LUCENE-2957:
-


I forgot to handle carrot2-core in branch_3x and lucene_solr_3_1.

 generate-maven-artifacts target should include all non-Mavenized Lucene & 
 Solr dependencies
 ---

 Key: LUCENE-2957
 URL: https://issues.apache.org/jira/browse/LUCENE-2957
 Project: Lucene - Java
  Issue Type: Improvement
  Components: Build
Affects Versions: 3.1, 3.2, 4.0
Reporter: Steven Rowe
Assignee: Steven Rowe
Priority: Minor
 Fix For: 3.1, 3.2, 4.0

 Attachments: LUCENE-2957-part2.patch, LUCENE-2957.patch


 Currently, in addition to deploying artifacts for all of the Lucene and Solr 
 modules to a repository (by default local), the {{generate-maven-artifacts}} 
 target also deploys artifacts for the following non-Mavenized Solr 
 dependencies (lucene_solr_3_1 version given here):
 # {{solr/lib/commons-csv-1.0-SNAPSHOT-r966014.jar}} as 
 org.apache.solr:solr-commons-csv:3.1
 # {{solr/lib/apache-solr-noggit-r944541.jar}} as 
 org.apache.solr:solr-noggit:3.1
 \\ \\
 The following {{.jar}}'s should be added to the above list (lucene_solr_3_1 
 version given here):
 \\ \\
 # {{lucene/contrib/icu/lib/icu4j-4_6.jar}}
 # 
 {{lucene/contrib/benchmark/lib/xercesImpl-2.9.1-patched-XERCESJ}}{{-1257.jar}}
 # {{solr/contrib/clustering/lib/carrot2-core-3.4.2.jar}}**
 # {{solr/contrib/uima/lib/uima-an-alchemy.jar}}
 # {{solr/contrib/uima/lib/uima-an-calais.jar}}
 # {{solr/contrib/uima/lib/uima-an-tagger.jar}}
 # {{solr/contrib/uima/lib/uima-an-wst.jar}}
 # {{solr/contrib/uima/lib/uima-core.jar}}
 \\ \\
 I think it makes sense to follow the same model as the current non-Mavenized 
 dependencies:
 \\ \\
 * {{groupId}} = {{org.apache.solr/.lucene}}
 * {{artifactId}} = {{solr-/lucene-}}original-name,
 * {{version}} = lucene-solr-release-version.
 **The carrot2-core jar doesn't need to be included in trunk's release 
 artifacts, since there already is a Mavenized Java6-compiled jar.  branch_3x 
 and lucene_solr_3_1 will need this Solr-specific Java5-compiled maven 
 artifact, though.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Updated: (LUCENE-2957) generate-maven-artifacts target should include all non-Mavenized Lucene & Solr dependencies

2011-03-11 Thread Steven Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Rowe updated LUCENE-2957:


Attachment: LUCENE-2923-part3.patch

Patch that includes carrot2-core jar in generate-maven-artifacts.

Committing shortly.

 generate-maven-artifacts target should include all non-Mavenized Lucene & 
 Solr dependencies
 ---

 Key: LUCENE-2957
 URL: https://issues.apache.org/jira/browse/LUCENE-2957
 Project: Lucene - Java
  Issue Type: Improvement
  Components: Build
Affects Versions: 3.1, 3.2, 4.0
Reporter: Steven Rowe
Assignee: Steven Rowe
Priority: Minor
 Fix For: 3.1, 3.2, 4.0

 Attachments: LUCENE-2923-part3.patch, LUCENE-2957-part2.patch, 
 LUCENE-2957.patch


 Currently, in addition to deploying artifacts for all of the Lucene and Solr 
 modules to a repository (by default local), the {{generate-maven-artifacts}} 
 target also deploys artifacts for the following non-Mavenized Solr 
 dependencies (lucene_solr_3_1 version given here):
 # {{solr/lib/commons-csv-1.0-SNAPSHOT-r966014.jar}} as 
 org.apache.solr:solr-commons-csv:3.1
 # {{solr/lib/apache-solr-noggit-r944541.jar}} as 
 org.apache.solr:solr-noggit:3.1
 \\ \\
 The following {{.jar}}'s should be added to the above list (lucene_solr_3_1 
 version given here):
 \\ \\
 # {{lucene/contrib/icu/lib/icu4j-4_6.jar}}
 # 
 {{lucene/contrib/benchmark/lib/xercesImpl-2.9.1-patched-XERCESJ}}{{-1257.jar}}
 # {{solr/contrib/clustering/lib/carrot2-core-3.4.2.jar}}**
 # {{solr/contrib/uima/lib/uima-an-alchemy.jar}}
 # {{solr/contrib/uima/lib/uima-an-calais.jar}}
 # {{solr/contrib/uima/lib/uima-an-tagger.jar}}
 # {{solr/contrib/uima/lib/uima-an-wst.jar}}
 # {{solr/contrib/uima/lib/uima-core.jar}}
 \\ \\
 I think it makes sense to follow the same model as the current non-Mavenized 
 dependencies:
 \\ \\
 * {{groupId}} = {{org.apache.solr/.lucene}}
 * {{artifactId}} = {{solr-/lucene-}}original-name,
 * {{version}} = lucene-solr-release-version.
 **The carrot2-core jar doesn't need to be included in trunk's release 
 artifacts, since there already is a Mavenized Java6-compiled jar.  branch_3x 
 and lucene_solr_3_1 will need this Solr-specific Java5-compiled maven 
 artifact, though.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Resolved: (LUCENE-2957) generate-maven-artifacts target should include all non-Mavenized Lucene & Solr dependencies

2011-03-11 Thread Steven Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Rowe resolved LUCENE-2957.
-

Resolution: Fixed

Committed carrot2-core fixes:
- branch_3x revision 1080646
- lucene_solr_3_1 revision 1080648


 generate-maven-artifacts target should include all non-Mavenized Lucene & 
 Solr dependencies
 ---

 Key: LUCENE-2957
 URL: https://issues.apache.org/jira/browse/LUCENE-2957
 Project: Lucene - Java
  Issue Type: Improvement
  Components: Build
Affects Versions: 3.1, 3.2, 4.0
Reporter: Steven Rowe
Assignee: Steven Rowe
Priority: Minor
 Fix For: 3.1, 3.2, 4.0

 Attachments: LUCENE-2923-part3.patch, LUCENE-2957-part2.patch, 
 LUCENE-2957.patch


 Currently, in addition to deploying artifacts for all of the Lucene and Solr 
 modules to a repository (by default local), the {{generate-maven-artifacts}} 
 target also deploys artifacts for the following non-Mavenized Solr 
 dependencies (lucene_solr_3_1 version given here):
 # {{solr/lib/commons-csv-1.0-SNAPSHOT-r966014.jar}} as 
 org.apache.solr:solr-commons-csv:3.1
 # {{solr/lib/apache-solr-noggit-r944541.jar}} as 
 org.apache.solr:solr-noggit:3.1
 \\ \\
 The following {{.jar}}'s should be added to the above list (lucene_solr_3_1 
 version given here):
 \\ \\
 # {{lucene/contrib/icu/lib/icu4j-4_6.jar}}
 # 
 {{lucene/contrib/benchmark/lib/xercesImpl-2.9.1-patched-XERCESJ}}{{-1257.jar}}
 # {{solr/contrib/clustering/lib/carrot2-core-3.4.2.jar}}**
 # {{solr/contrib/uima/lib/uima-an-alchemy.jar}}
 # {{solr/contrib/uima/lib/uima-an-calais.jar}}
 # {{solr/contrib/uima/lib/uima-an-tagger.jar}}
 # {{solr/contrib/uima/lib/uima-an-wst.jar}}
 # {{solr/contrib/uima/lib/uima-core.jar}}
 \\ \\
 I think it makes sense to follow the same model as the current non-Mavenized 
 dependencies:
 \\ \\
 * {{groupId}} = {{org.apache.solr/.lucene}}
 * {{artifactId}} = {{solr-/lucene-}}original-name,
 * {{version}} = lucene-solr-release-version.
 **The carrot2-core jar doesn't need to be included in trunk's release 
 artifacts, since there already is a Mavenized Java6-compiled jar.  branch_3x 
 and lucene_solr_3_1 will need this Solr-specific Java5-compiled maven 
 artifact, though.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (LUCENE-2957) generate-maven-artifacts target should include all non-Mavenized Lucene & Solr dependencies

2011-03-11 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13005710#comment-13005710
 ] 

Dawid Weiss commented on LUCENE-2957:
-

Steven,

I don't think Maven Central will allow me to upload another (classified) 
artifact if an existing POM and artifacts are already in Maven Central. At 
least the Sonatype staging process won't allow it, I'm sure. I'll see if I can 
prepare a JDK1.5 release for the next Solr (3.2).

 generate-maven-artifacts target should include all non-Mavenized Lucene & 
 Solr dependencies
 ---

 Key: LUCENE-2957
 URL: https://issues.apache.org/jira/browse/LUCENE-2957
 Project: Lucene - Java
  Issue Type: Improvement
  Components: Build
Affects Versions: 3.1, 3.2, 4.0
Reporter: Steven Rowe
Assignee: Steven Rowe
Priority: Minor
 Fix For: 3.1, 3.2, 4.0

 Attachments: LUCENE-2923-part3.patch, LUCENE-2957-part2.patch, 
 LUCENE-2957.patch


 Currently, in addition to deploying artifacts for all of the Lucene and Solr 
 modules to a repository (by default local), the {{generate-maven-artifacts}} 
 target also deploys artifacts for the following non-Mavenized Solr 
 dependencies (lucene_solr_3_1 version given here):
 # {{solr/lib/commons-csv-1.0-SNAPSHOT-r966014.jar}} as 
 org.apache.solr:solr-commons-csv:3.1
 # {{solr/lib/apache-solr-noggit-r944541.jar}} as 
 org.apache.solr:solr-noggit:3.1
 \\ \\
 The following {{.jar}}'s should be added to the above list (lucene_solr_3_1 
 version given here):
 \\ \\
 # {{lucene/contrib/icu/lib/icu4j-4_6.jar}}
 # 
 {{lucene/contrib/benchmark/lib/xercesImpl-2.9.1-patched-XERCESJ}}{{-1257.jar}}
 # {{solr/contrib/clustering/lib/carrot2-core-3.4.2.jar}}**
 # {{solr/contrib/uima/lib/uima-an-alchemy.jar}}
 # {{solr/contrib/uima/lib/uima-an-calais.jar}}
 # {{solr/contrib/uima/lib/uima-an-tagger.jar}}
 # {{solr/contrib/uima/lib/uima-an-wst.jar}}
 # {{solr/contrib/uima/lib/uima-core.jar}}
 \\ \\
 I think it makes sense to follow the same model as the current non-Mavenized 
 dependencies:
 \\ \\
 * {{groupId}} = {{org.apache.solr/.lucene}}
 * {{artifactId}} = {{solr-/lucene-}}original-name,
 * {{version}} = lucene-solr-release-version.
 **The carrot2-core jar doesn't need to be included in trunk's release 
 artifacts, since there already is a Mavenized Java6-compiled jar.  branch_3x 
 and lucene_solr_3_1 will need this Solr-specific Java5-compiled maven 
 artifact, though.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5814 - Still Failing

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5814/

3 tests failed.
REGRESSION:  org.apache.solr.client.solrj.TestLBHttpSolrServer.testSimple

Error Message:
expected:<3> but was:<2>

Stack Trace:
junit.framework.AssertionFailedError: expected:<3> but was:<2>
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1213)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1145)
at 
org.apache.solr.client.solrj.TestLBHttpSolrServer.testSimple(TestLBHttpSolrServer.java:127)


FAILED:  
org.apache.solr.update.DirectUpdateHandlerOptimizeTest.testWatchChildren

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at java.lang.Thread.run(Thread.java:636)


FAILED:  TEST-org.apache.solr.cloud.ZkSolrClientTest.xml.init

Error Message:


Stack Trace:
Test report file 
/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/solr/build/test-results/TEST-org.apache.solr.cloud.ZkSolrClientTest.xml
 was length 0



Build Log (for compile errors):
[...truncated 8493 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-3.x #57: POMs out of sync

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-Maven-3.x/57/

No tests ran.

Build Log (for compile errors):
[...truncated 12994 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5815 - Still Failing

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5815/

2 tests failed.
FAILED:  
org.apache.solr.update.DirectUpdateHandlerOptimizeTest.testWatchChildren

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at java.lang.Thread.run(Thread.java:636)


FAILED:  TEST-org.apache.solr.cloud.ZkSolrClientTest.xml.init

Error Message:


Stack Trace:
Test report file 
/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/solr/build/test-results/TEST-org.apache.solr.cloud.ZkSolrClientTest.xml
 was length 0



Build Log (for compile errors):
[...truncated 8489 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5816 - Still Failing

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5816/

2 tests failed.
FAILED:  
org.apache.solr.update.DirectUpdateHandlerOptimizeTest.testWatchChildren

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at java.lang.Thread.run(Thread.java:636)


FAILED:  TEST-org.apache.solr.cloud.ZkSolrClientTest.xml.init

Error Message:


Stack Trace:
Test report file 
/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/solr/build/test-results/TEST-org.apache.solr.cloud.ZkSolrClientTest.xml
 was length 0



Build Log (for compile errors):
[...truncated 8493 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5817 - Still Failing

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5817/

2 tests failed.
FAILED:  
org.apache.solr.update.DirectUpdateHandlerOptimizeTest.testWatchChildren

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at java.lang.Thread.run(Thread.java:636)


FAILED:  TEST-org.apache.solr.cloud.ZkSolrClientTest.xml.init

Error Message:


Stack Trace:
Test report file 
/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/solr/build/test-results/TEST-org.apache.solr.cloud.ZkSolrClientTest.xml
 was length 0



Build Log (for compile errors):
[...truncated 8466 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5818 - Still Failing

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5818/

2 tests failed.
FAILED:  
org.apache.solr.update.DirectUpdateHandlerOptimizeTest.testWatchChildren

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at java.lang.Thread.run(Thread.java:636)


FAILED:  TEST-org.apache.solr.cloud.ZkSolrClientTest.xml.init

Error Message:


Stack Trace:
Test report file 
/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/solr/build/test-results/TEST-org.apache.solr.cloud.ZkSolrClientTest.xml
 was length 0



Build Log (for compile errors):
[...truncated 8461 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5819 - Still Failing

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5819/

2 tests failed.
FAILED:  
org.apache.solr.update.DirectUpdateHandlerOptimizeTest.testWatchChildren

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at java.lang.Thread.run(Thread.java:636)


FAILED:  TEST-org.apache.solr.cloud.ZkSolrClientTest.xml.init

Error Message:


Stack Trace:
Test report file 
/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/solr/build/test-results/TEST-org.apache.solr.cloud.ZkSolrClientTest.xml
 was length 0



Build Log (for compile errors):
[...truncated 8464 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5820 - Still Failing

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5820/

2 tests failed.
FAILED:  
org.apache.solr.update.DirectUpdateHandlerOptimizeTest.testWatchChildren

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at java.lang.Thread.run(Thread.java:636)


FAILED:  TEST-org.apache.solr.cloud.ZkSolrClientTest.xml.init

Error Message:


Stack Trace:
Test report file 
/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/solr/build/test-results/TEST-org.apache.solr.cloud.ZkSolrClientTest.xml
 was length 0



Build Log (for compile errors):
[...truncated 8463 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5821 - Still Failing

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5821/

2 tests failed.
FAILED:  
org.apache.solr.update.DirectUpdateHandlerOptimizeTest.testWatchChildren

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at java.lang.Thread.run(Thread.java:636)


FAILED:  TEST-org.apache.solr.cloud.ZkSolrClientTest.xml.init

Error Message:


Stack Trace:
Test report file 
/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/solr/build/test-results/TEST-org.apache.solr.cloud.ZkSolrClientTest.xml
 was length 0



Build Log (for compile errors):
[...truncated 8393 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Behavior of Solr Cell changed in 3.1?

2011-03-11 Thread Eric Pugh
Hi all,

I was playing around with the capture attributes stuff in Solr cell, and I 
could not get the example to work with the 3.1 code:

The query:

 curl "http://localhost:8983/solr/update/extract?literal.id=doc2&captureAttr=true&defaultField=text&fmap.div=foo_t&capture=div" \
  -F tutorial=@tutorial.pdf

from the docs seems to fail.  I had to make the _t dynamic field multivalued 
as a first step, but even then, the result was:

   foo_t:[
  page,
  page,
  page,
  page,
  page,
  page,
  page,
  page,
SolrtutorialTableofcontents1Overvie  ALL THE TEXT IN THE DOC...

Is this a documentation error (at least the need for _t to be multivalued) or a 
bug?
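For reference, the schema.xml change described above (making the *_t dynamic field multivalued) would look roughly like this. The type name and attributes are taken from Solr's example schema and may differ in your setup:

```xml
<!-- Hypothetical schema.xml tweak matching the step described above:
     making the *_t dynamic field multiValued so captured divs can map
     onto it via fmap.div=foo_t. -->
<dynamicField name="*_t" type="text" indexed="true" stored="true"
              multiValued="true"/>
```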

Eric




-
Eric Pugh | Principal | OpenSource Connections, LLC | 434.466.1467 | 
http://www.opensourceconnections.com
Co-Author: Solr 1.4 Enterprise Search Server available from 
http://www.packtpub.com/solr-1-4-enterprise-search-server
This e-mail and all contents, including attachments, is considered to be 
Company Confidential unless explicitly stated otherwise, regardless of whether 
attachments are marked as such.










-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5822 - Still Failing

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5822/

2 tests failed.
FAILED:  org.apache.solr.update.DirectUpdateHandlerOptimizeTest.testReconnect

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at java.lang.Thread.run(Thread.java:636)


FAILED:  TEST-org.apache.solr.cloud.ZkSolrClientTest.xml.init

Error Message:


Stack Trace:
Test report file 
/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/solr/build/test-results/TEST-org.apache.solr.cloud.ZkSolrClientTest.xml
 was length 0



Build Log (for compile errors):
[...truncated 8474 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5823 - Still Failing

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5823/

2 tests failed.
FAILED:  
org.apache.solr.update.DirectUpdateHandlerOptimizeTest.testWatchChildren

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at java.lang.Thread.run(Thread.java:636)


FAILED:  TEST-org.apache.solr.cloud.ZkSolrClientTest.xml.init

Error Message:


Stack Trace:
Test report file 
/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/solr/build/test-results/TEST-org.apache.solr.cloud.ZkSolrClientTest.xml
 was length 0



Build Log (for compile errors):
[...truncated 8479 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (LUCENE-2957) generate-maven-artifacts target should include all non-Mavenized Lucene & Solr dependencies

2011-03-11 Thread Steven Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13005842#comment-13005842
 ] 

Steven Rowe commented on LUCENE-2957:
-

bq. I'll see if I can prepare a JDK1.5 release for the next Solr (3.2).

Thanks Dawid!

 generate-maven-artifacts target should include all non-Mavenized Lucene & 
 Solr dependencies
 ---

 Key: LUCENE-2957
 URL: https://issues.apache.org/jira/browse/LUCENE-2957
 Project: Lucene - Java
  Issue Type: Improvement
  Components: Build
Affects Versions: 3.1, 3.2, 4.0
Reporter: Steven Rowe
Assignee: Steven Rowe
Priority: Minor
 Fix For: 3.1, 3.2, 4.0

 Attachments: LUCENE-2923-part3.patch, LUCENE-2957-part2.patch, 
 LUCENE-2957.patch


 Currently, in addition to deploying artifacts for all of the Lucene and Solr 
 modules to a repository (by default local), the {{generate-maven-artifacts}} 
 target also deploys artifacts for the following non-Mavenized Solr 
 dependencies (lucene_solr_3_1 version given here):
 # {{solr/lib/commons-csv-1.0-SNAPSHOT-r966014.jar}} as 
 org.apache.solr:solr-commons-csv:3.1
 # {{solr/lib/apache-solr-noggit-r944541.jar}} as 
 org.apache.solr:solr-noggit:3.1
 \\ \\
 The following {{.jar}}'s should be added to the above list (lucene_solr_3_1 
 version given here):
 \\ \\
 # {{lucene/contrib/icu/lib/icu4j-4_6.jar}}
 # 
 {{lucene/contrib/benchmark/lib/xercesImpl-2.9.1-patched-XERCESJ-1257.jar}}
 # {{solr/contrib/clustering/lib/carrot2-core-3.4.2.jar}}**
 # {{solr/contrib/uima/lib/uima-an-alchemy.jar}}
 # {{solr/contrib/uima/lib/uima-an-calais.jar}}
 # {{solr/contrib/uima/lib/uima-an-tagger.jar}}
 # {{solr/contrib/uima/lib/uima-an-wst.jar}}
 # {{solr/contrib/uima/lib/uima-core.jar}}
 \\ \\
 I think it makes sense to follow the same model as the current non-Mavenized 
 dependencies:
 \\ \\
 * {{groupId}} = {{org.apache.solr/.lucene}}
 * {{artifactId}} = {{solr-/lucene-}}original-name,
 * {{version}} = lucene-solr-release-version.
 **The carrot2-core jar doesn't need to be included in trunk's release 
 artifacts, since there already is a Mavenized Java6-compiled jar.  branch_3x 
 and lucene_solr_3_1 will need this Solr-specific Java5-compiled maven 
 artifact, though.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5824 - Still Failing

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5824/

2 tests failed.
FAILED:  
org.apache.solr.update.DirectUpdateHandlerOptimizeTest.testWatchChildren

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at java.lang.Thread.run(Thread.java:636)


FAILED:  TEST-org.apache.solr.cloud.ZkSolrClientTest.xml.init

Error Message:


Stack Trace:
Test report file 
/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/solr/build/test-results/TEST-org.apache.solr.cloud.ZkSolrClientTest.xml
 was length 0



Build Log (for compile errors):
[...truncated 8477 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5825 - Still Failing

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5825/

2 tests failed.
FAILED:  
org.apache.solr.update.DirectUpdateHandlerOptimizeTest.testWatchChildren

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at java.lang.Thread.run(Thread.java:636)


FAILED:  TEST-org.apache.solr.cloud.ZkSolrClientTest.xml.init

Error Message:


Stack Trace:
Test report file 
/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/solr/build/test-results/TEST-org.apache.solr.cloud.ZkSolrClientTest.xml
 was length 0



Build Log (for compile errors):
[...truncated 8477 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: custom ValueSource for decoding geohash into lat lon

2011-03-11 Thread Smiley, David W.
On Mar 10, 2011, at 6:21 PM, William Bell wrote:

 1. ValueSources does not support MultiValue fields. 

I think the problem isn't ValueSources, it's the FieldCache.  The FieldCache is 
fundamentally limited to one indexed primitive value per document. I took a 
look at UninvertedField, but that appears to be tied to faceting and isn't 
sufficiently flexible anyway. I think I need to do as UninvertedField does and 
create a cache registered in solrconfig.xml.  The other tricky bit is somehow 
accessing it.  I think I figured it out: in my field type's 
getValueSource(SchemaField field, QParser parser), the parser is a 
FunctionQParser implementation, which has access to SolrQueryRequest, which has 
access to SolrIndexSearcher, which allows me to look up the cache by the name I 
choose.  That's quite a chain of indirection that took time to track down; I 
nearly gave up :-).
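The indirection chain described above can be modeled in plain Java. The stub classes below merely mirror the shape of Solr's types (parser to request to searcher to named cache); they are illustrative, not the real API.

```java
// Stub model of the accessor chain: FunctionQParser -> SolrQueryRequest
// -> SolrIndexSearcher -> named cache registered in solrconfig.xml.
class StubSearcher {
    Object getCache(String name) { return name + "-cache"; }
}

class StubRequest {
    StubSearcher getSearcher() { return new StubSearcher(); }
}

class StubFunctionQParser {
    StubRequest getReq() { return new StubRequest(); }
}

public class CacheLookupDemo {
    // Inside a field type's getValueSource(field, parser), the parser can be
    // walked back to the searcher that owns the custom cache.
    static Object lookupCache(StubFunctionQParser parser, String cacheName) {
        return parser.getReq().getSearcher().getCache(cacheName);
    }

    public static void main(String[] args) {
        // prints "geohash-cache": the value fetched through the full chain
        System.out.println(lookupCache(new StubFunctionQParser(), "geohash"));
    }
}
```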

~ David
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5826 - Still Failing

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5826/

2 tests failed.
FAILED:  
org.apache.solr.update.DirectUpdateHandlerOptimizeTest.testWatchChildren

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at java.lang.Thread.run(Thread.java:636)


FAILED:  TEST-org.apache.solr.cloud.ZkSolrClientTest.xml.init

Error Message:


Stack Trace:
Test report file 
/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/solr/build/test-results/TEST-org.apache.solr.cloud.ZkSolrClientTest.xml
 was length 0



Build Log (for compile errors):
[...truncated 8495 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[HUDSON] Lucene-3.x - Build # 311 - Failure

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-3.x/311/

No tests ran.

Build Log (for compile errors):
[...truncated 9289 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (LUCENE-2960) Allow (or bring back) the ability to setRAMBufferSizeMB on an open IndexWriter

2011-03-11 Thread Earwin Burrfoot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13005891#comment-13005891
 ] 

Earwin Burrfoot commented on LUCENE-2960:
-

bq. Furthermore, closing the IW also forces you to commit, and I don't like 
tying changing of configuration to forcing a commit.
Like I said, one isn't going to change his configuration five times a second. 
It's ok to commit from time to time?

bq. So why should we force it to be unchangeable? That can only remove freedom, 
freedom that is perhaps valuable to an app somewhere.
Each and every live reconfigurable setting adds to complexity.
At the very least it requires proper synchronization. Take your SegmentWarmer 
example - you should make the field volatile.
While it's possible to chicken out on primitive fields ([except 
long/double|http://java.sun.com/docs/books/jls/third_edition/html/memory.html#17.7]),
 as Yonik mentioned earlier, making nonvolatile mutable references introduces 
you to a world of hard-to-catch unsafe publication bugs (yes, infoStream is 
currently broken!).
For more complex cases, certain on-change logic is required. And then you have 
to support this logic across all possible code rewrites and refactorings.
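The safe-publication point can be sketched in plain Java (this is not Lucene code; the setting name is illustrative): a live-reconfigurable setting must at least be volatile so a change made by one thread is guaranteed visible to others.

```java
// Sketch only: a live-reconfigurable setting published via a volatile field.
// Without volatile, a reader thread may observe a stale value indefinitely.
public class LiveConfig {
    // volatile establishes a happens-before edge between the write and
    // subsequent reads from other threads.
    private volatile int ramBufferSizeMB = 16;

    public void setRamBufferSizeMB(int mb) {
        ramBufferSizeMB = mb;
    }

    public int getRamBufferSizeMB() {
        return ramBufferSizeMB;
    }

    public static void main(String[] args) {
        LiveConfig cfg = new LiveConfig();
        cfg.setRamBufferSizeMB(64);
        System.out.println(cfg.getRamBufferSizeMB()); // prints 64
    }
}
```

Note that volatile alone only covers visibility of a single reference or (non-long/double) primitive; compound updates or on-change logic still need real synchronization, which is the complexity cost being argued here.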

 Allow (or bring back) the ability to setRAMBufferSizeMB on an open IndexWriter
 --

 Key: LUCENE-2960
 URL: https://issues.apache.org/jira/browse/LUCENE-2960
 Project: Lucene - Java
  Issue Type: Improvement
  Components: Index
Reporter: Shay Banon
Priority: Blocker
 Fix For: 3.1, 4.0


 In 3.1 the ability to setRAMBufferSizeMB is deprecated, and removed in trunk. 
 It would be great to be able to control that on a live IndexWriter. Other 
 possible two methods that would be great to bring back are 
 setTermIndexInterval and setReaderTermsIndexDivisor. Most of the other 
 setters can actually be set on the MergePolicy itself, so no need for setters 
 for those (I think).

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[HUDSON] Lucene-Solr-tests-only-3.x - Build # 5814 - Failure

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-3.x/5814/

1 tests failed.
FAILED:  org.apache.lucene.util.TestSortedVIntList.testRun

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at java.lang.Thread.run(Thread.java:679)




Build Log (for compile errors):
[...truncated 4449 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5828 - Failure

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5828/

2 tests failed.
FAILED:  
org.apache.solr.update.DirectUpdateHandlerOptimizeTest.testWatchChildren

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at java.lang.Thread.run(Thread.java:636)


FAILED:  TEST-org.apache.solr.cloud.ZkSolrClientTest.xml.init

Error Message:


Stack Trace:
Test report file 
/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/solr/build/test-results/TEST-org.apache.solr.cloud.ZkSolrClientTest.xml
 was length 0



Build Log (for compile errors):
[...truncated 8484 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5829 - Still Failing

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5829/

2 tests failed.
FAILED:  
org.apache.solr.update.DirectUpdateHandlerOptimizeTest.testWatchChildren

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at java.lang.Thread.run(Thread.java:636)


FAILED:  TEST-org.apache.solr.cloud.ZkSolrClientTest.xml.init

Error Message:


Stack Trace:
Test report file 
/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/solr/build/test-results/TEST-org.apache.solr.cloud.ZkSolrClientTest.xml
 was length 0



Build Log (for compile errors):
[...truncated 8460 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Updated: (LUCENE-2958) WriteLineDocTask improvements

2011-03-11 Thread Doron Cohen (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doron Cohen updated LUCENE-2958:


Attachment: LUCENE-2958.patch

Hi, thanks Mike and Shai for the review and great comments.

Attaching an updated patch.

Now WriteLineDocTask writes the field names as a header line to the result file. 

It always does this; perhaps a property to disable the header would be useful 
for preserving the previous behavior (no header).

There are quite a few involved changes to LineDocSource:

- replaced line.split(SEP) by original recurring search for SEP.
- Method fillDocData(doc,fields[]) was changed to take a line String instead of 
the array of fields.
- That method was wrapped in a new interface: DocDataFiller for which there are 
now two implementations: 
-- SimpleDocDataFiller is used when there is no header line in the input file. 
It implements the original pre-change logic, so existing line-doc files that 
have no header line can still be used.
-- HeaderDocDataFiller is used when there exists a header line in the input 
file. Its implementation populates both fixed fields and flexible properties of 
DocData:
--- At construction of the filler, a mapping is created from each field's 
position in the header line to a setter method of DocData. That mapping uses 
neither reflection nor a HashMap - simply an int[] posToM: if posToM[3] = 1, 
then later, when handling field no. 3 of a line, the method fillDate3() will 
be called, which in turn calls docData.setDate() through a switch 
statement. If a field has no mapping to a DocData setter, the DocData 
properties object is populated instead. So this is quite general, with some 
performance overhead, though less than reflection I think (I did not measure this).
- An extension point for overriding the filler creation is through two new 
methods:
-- createDocDataFiller() for the case of no header line
-- createDocDataFiller(String[] header) when a header line is found in the input
- Note that filler creation is done once, when reading the first line of the 
input file. 
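The position-to-setter dispatch described above can be sketched roughly as follows (class, field, and method names here are illustrative, not the actual patch):

```java
// Sketch: map header-line field positions to DocData setters via an int[]
// and a switch, avoiding per-field reflection or HashMap lookups.
public class HeaderMapping {
    static final int PROPS = 0, DATE = 1, TITLE = 2;
    final int[] posToM; // built once, when the header line is read

    HeaderMapping(String[] header) {
        posToM = new int[header.length];
        for (int i = 0; i < header.length; i++) {
            if ("date".equals(header[i])) posToM[i] = DATE;
            else if ("title".equals(header[i])) posToM[i] = TITLE;
            else posToM[i] = PROPS; // unmapped fields go to the properties object
        }
    }

    // Called once per field of each parsed line; returns which setter
    // (or the properties fallback) the field at this position feeds.
    String target(int pos) {
        switch (posToM[pos]) {
            case DATE:  return "setDate";
            case TITLE: return "setTitle";
            default:    return "properties";
        }
    }

    public static void main(String[] args) {
        HeaderMapping m = new HeaderMapping(new String[] {"title", "date", "body"});
        System.out.println(m.target(0)); // prints setTitle
        System.out.println(m.target(2)); // prints properties
    }
}
```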

Some tests were fixed to account for the existence (or absence) of a header 
line.

I think more tests are required, but you can get the idea how this code will 
work.

Bottom line, LineDocSource is more general now, but the code became more 
complex.

I have mixed feelings about this - preferring simple code, but the added 
functionality is appealing.

 WriteLineDocTask improvements
 -

 Key: LUCENE-2958
 URL: https://issues.apache.org/jira/browse/LUCENE-2958
 Project: Lucene - Java
  Issue Type: Improvement
  Components: contrib/benchmark
Reporter: Doron Cohen
Assignee: Doron Cohen
Priority: Minor
 Fix For: 3.2, 4.0

 Attachments: LUCENE-2958.patch, LUCENE-2958.patch, LUCENE-2958.patch


 Make WriteLineDocTask and LineDocSource more flexible/extendable:
 * allow to emit lines also for empty docs (keep current behavior as default)
 * allow more/less/other fields

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5830 - Still Failing

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5830/

2 tests failed.
FAILED:  
org.apache.solr.update.DirectUpdateHandlerOptimizeTest.testWatchChildren

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at java.lang.Thread.run(Thread.java:636)


FAILED:  TEST-org.apache.solr.cloud.ZkSolrClientTest.xml.init

Error Message:


Stack Trace:
Test report file 
/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/solr/build/test-results/TEST-org.apache.solr.cloud.ZkSolrClientTest.xml
 was length 0



Build Log (for compile errors):
[...truncated 8455 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[HUDSON] Lucene-Solr-tests-only-3.x - Build # 5816 - Failure

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-3.x/5816/

1 tests failed.
REGRESSION:  
org.apache.solr.handler.clustering.DistributedClusteringComponentTest.testDistribSearch

Error Message:
java.net.SocketException: Bad address

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: java.net.SocketException: Bad 
address
at 
org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:484)
at 
org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:245)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:105)
at 
org.apache.solr.client.solrj.SolrServer.deleteByQuery(SolrServer.java:110)
at 
org.apache.solr.BaseDistributedSearchTestCase.del(BaseDistributedSearchTestCase.java:252)
at 
org.apache.solr.handler.clustering.DistributedClusteringComponentTest.doTest(DistributedClusteringComponentTest.java:34)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:540)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1075)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1007)
Caused by: java.net.SocketException: Bad address
at java.net.PlainSocketImpl.socketGetOption(Native Method)
at 
java.net.AbstractPlainSocketImpl.getOption(AbstractPlainSocketImpl.java:299)
at java.net.Socket.getSendBufferSize(Socket.java:1107)
at 
org.apache.commons.httpclient.HttpConnection.open(HttpConnection.java:737)
at 
org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$HttpConnectionAdapter.open(MultiThreadedHttpConnectionManager.java:1361)
at 
org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:387)
at 
org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
at 
org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
at 
org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
at 
org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:428)




Build Log (for compile errors):
[...truncated 10801 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5831 - Still Failing

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5831/

No tests ran.

Build Log (for compile errors):
[...truncated 1660 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [HUDSON] Lucene-Solr-tests-only-3.x - Build # 5816 - Failure

2011-03-11 Thread Robert Muir
Sorry, was trying to upgrade openjdk (from b20 to b22) to hopefully
get past our jvm crashes in tests.

The problem is that we cannot do this without having procfs mounted,
as b22 now requires this for the native freebsd version.
(http://www.freebsd.org/cgi/cvsweb.cgi/ports/java/openjdk6/pkg-message?rev=1.2)

Furthermore, both the linux 1.5 and 1.6 versions require linprocfs, so
we can't use those either.

In other words, we are stuck on ancient JVMs for the indefinite
future; I don't see any way around this except getting
procfs/linprocfs in our jail, which Uwe says Apache will not do?

On Fri, Mar 11, 2011 at 7:27 PM, Apache Hudson Server
hud...@hudson.apache.org wrote:
 Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-3.x/5816/

 1 tests failed.
 REGRESSION:  
 org.apache.solr.handler.clustering.DistributedClusteringComponentTest.testDistribSearch

 Error Message:
 java.net.SocketException: Bad address

 Stack Trace:
 org.apache.solr.client.solrj.SolrServerException: java.net.SocketException: 
 Bad address
        at 
 org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:484)
        at 
 org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:245)
        at 
 org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:105)
        at 
 org.apache.solr.client.solrj.SolrServer.deleteByQuery(SolrServer.java:110)
        at 
 org.apache.solr.BaseDistributedSearchTestCase.del(BaseDistributedSearchTestCase.java:252)
        at 
 org.apache.solr.handler.clustering.DistributedClusteringComponentTest.doTest(DistributedClusteringComponentTest.java:34)
        at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:540)
        at 
 org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1075)
        at 
 org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1007)
 Caused by: java.net.SocketException: Bad address
        at java.net.PlainSocketImpl.socketGetOption(Native Method)
        at 
 java.net.AbstractPlainSocketImpl.getOption(AbstractPlainSocketImpl.java:299)
        at java.net.Socket.getSendBufferSize(Socket.java:1107)
        at 
 org.apache.commons.httpclient.HttpConnection.open(HttpConnection.java:737)
        at 
 org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$HttpConnectionAdapter.open(MultiThreadedHttpConnectionManager.java:1361)
        at 
 org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:387)
        at 
 org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
        at 
 org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
        at 
 org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
        at 
 org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:428)




 Build Log (for compile errors):
 [...truncated 10801 lines...]



 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5832 - Still Failing

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5832/

2 tests failed.
FAILED:  
org.apache.solr.update.DirectUpdateHandlerOptimizeTest.testWatchChildren

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at java.lang.Thread.run(Thread.java:636)


FAILED:  TEST-org.apache.solr.cloud.ZkSolrClientTest.xml.init

Error Message:


Stack Trace:
Test report file 
/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/solr/build/test-results/TEST-org.apache.solr.cloud.ZkSolrClientTest.xml
 was length 0



Build Log (for compile errors):
[...truncated 8466 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5833 - Still Failing

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5833/

2 tests failed.
FAILED:  
org.apache.solr.update.DirectUpdateHandlerOptimizeTest.testWatchChildren

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at java.lang.Thread.run(Thread.java:636)


FAILED:  TEST-org.apache.solr.cloud.ZkSolrClientTest.xml.init

Error Message:


Stack Trace:
Test report file 
/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/solr/build/test-results/TEST-org.apache.solr.cloud.ZkSolrClientTest.xml
 was length 0



Build Log (for compile errors):
[...truncated 8481 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5834 - Still Failing

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5834/

1 tests failed.
FAILED:  org.apache.solr.cloud.ZkSolrClientTest.testConnect

Error Message:
Could not connect to ZooKeeper 127.0.0.1:30762/solr within 30000 ms

Stack Trace:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:30762/solr within 30000 ms
at 
org.apache.solr.common.cloud.ConnectionManager.waitForConnected(ConnectionManager.java:124)
at 
org.apache.solr.common.cloud.SolrZkClient.init(SolrZkClient.java:121)
at 
org.apache.solr.common.cloud.SolrZkClient.init(SolrZkClient.java:84)
at 
org.apache.solr.common.cloud.SolrZkClient.init(SolrZkClient.java:65)
at 
org.apache.solr.cloud.ZkSolrClientTest.testConnect(ZkSolrClientTest.java:43)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1214)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1146)




Build Log (for compile errors):
[...truncated 8578 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (SOLR-2422) Improve reliability of ZkSolrClientTest

2011-03-11 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13005975#comment-13005975
 ] 

Robert Muir commented on SOLR-2422:
---

The first problem I found here is in testConnect: it has a timeout of 100ms.

Our lucene slave is pretty busy (lots of cores, so lots of tests going on at 
once in parallel).

By changing this timeout to AbstractZkTestCase.TIMEOUT (10000 ms), I found the 
test to be significantly more reliable. This is consistent with the other test 
cases, which seem to use this timeout.

I tested this on hudson and it seems a big improvement, so I committed the 
trivial change in r1080852 (sorry for the heavy-commit, I know we are all sick 
of the hudson instability).

 Improve reliability of ZkSolrClientTest
 ---

 Key: SOLR-2422
 URL: https://issues.apache.org/jira/browse/SOLR-2422
 Project: Solr
  Issue Type: Bug
  Components: Build
Affects Versions: 4.0
Reporter: Robert Muir

 The ZKSolrClient test is pretty unreliable, it seems to fail a significant 
 portion of the time on hudson (often on my local as well).
 Additionally it seems to somehow sometimes (maybe depending upon retry loop?) 
 leave a lot of zookeeper threads running.
 I ran into these issues when i discovered that trying to interrupt() these 
 threads after the test completed was triggering a JRE bug, but by working 
 through it I saw how unreliable the test is.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (SOLR-2422) Improve reliability of ZkSolrClientTest

2011-03-11 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13005976#comment-13005976
 ] 

Robert Muir commented on SOLR-2422:
---

There's definitely something up with the exception message here, but maybe 
something else going on with reconnection or similar?

Because I noticed when the test failed before with the 100ms timeout, it would 
print:
{noformat}
1 tests failed.
FAILED:  org.apache.solr.cloud.ZkSolrClientTest.testConnect

Error Message:
Could not connect to ZooKeeper 127.0.0.1:30762/solr within 30000 ms
{noformat}

I think this is what made it hard to debug: the message made me think it was 
actually waiting 30000 ms, when in fact it was only waiting 100 ms.

Looking at ConnectionManager, I think this might indicate a bug (at least in 
the exception message, but the use of two different timeouts seems wrong)... I 
added my comments to the source snippet:

{noformat}
try {
  // zkClientTimeout = 100ms
  connectionStrategy.reconnect(zkServerAddress, zkClientTimeout, this,
      new ZkClientConnectionStrategy.ZkUpdate() {
        @Override
        public void update(SolrZooKeeper keeper) throws InterruptedException,
            TimeoutException, IOException {
          // DEFAULT_CLIENT_CONNECT_TIMEOUT = 30000ms
          waitForConnected(SolrZkClient.DEFAULT_CLIENT_CONNECT_TIMEOUT);
          client.updateKeeper(keeper);
          if (onReconnect != null) {
            onReconnect.command();
          }
          ConnectionManager.this.connected = true;
        }
      });
} catch (Exception e) {
  log.error("", e); // fails after 100ms, but says it waited 30000ms?!
}
{noformat}

 Improve reliability of ZkSolrClientTest
 ---

 Key: SOLR-2422
 URL: https://issues.apache.org/jira/browse/SOLR-2422
 Project: Solr
  Issue Type: Bug
  Components: Build
Affects Versions: 4.0
Reporter: Robert Muir

 The ZKSolrClient test is pretty unreliable, it seems to fail a significant 
 portion of the time on hudson (often on my local as well).
 Additionally it seems to somehow sometimes (maybe depending upon retry loop?) 
 leave a lot of zookeeper threads running.
 I ran into these issues when i discovered that trying to interrupt() these 
 threads after the test completed was triggering a JRE bug, but by working 
 through it I saw how unreliable the test is.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: custom ValueSource for decoding geohash into lat lon

2011-03-11 Thread Bill Bell
Cool.

I am definitely looking forward to that!!



On 3/11/11 3:25 PM, Smiley, David W. dsmi...@mitre.org wrote:

On Mar 10, 2011, at 6:21 PM, William Bell wrote:

 1. ValueSources does not support MultiValue fields.

I think the problem isn't ValueSources, it's the FieldCache.  The
FieldCache is fundamentally limited to one indexed primitive value
per document. I took a look at UninvertedField, but that appears to be
tied to faceting and isn't sufficiently flexible anyway. I think I
need to do as UninvertedField does and create a cache registered in
solrconfig.xml.  The other tricky bit is accessing it, and I think I
figured it out: in my field type's getValueSource(SchemaField field,
QParser parser), the parser is a FunctionQParser implementation, which
has access to SolrQueryRequest, which has access to SolrIndexSearcher,
which lets me look up the cache by the name I choose.  That's quite a
chain of indirection that took time to track down; I nearly gave up :-).

~ David



[HUDSON] Lucene-Solr-tests-only-trunk - Build # 5836 - Failure

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/5836/

3 tests failed.
REGRESSION:  
org.apache.solr.client.solrj.embedded.SolrExampleStreamingTest.testCommitWithin

Error Message:
expected:<1> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1> but was:<0>
at 
org.apache.solr.client.solrj.SolrExampleTests.testCommitWithin(SolrExampleTests.java:344)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1214)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1146)


REGRESSION:  org.apache.solr.cloud.BasicDistributedZkTest.testDistribSearch

Error Message:
Severe errors in solr configuration.  Check your log files for more detailed 
information on what may be wrong.  
- 
org.apache.solr.common.cloud.ZooKeeperException:   at 
org.apache.solr.core.CoreContainer.register(CoreContainer.java:517)  at 
org.apache.solr.core.CoreContainer.load(CoreContainer.java:406)  at 
org.apache.solr.core.CoreContainer.load(CoreContainer.java:290)  at 
org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:239)
  at 
org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:93)  at 
org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)  at 
org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)  at 
org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)  
at 
org.mortbay.jetty.servlet.ServletHandler.updateMappings(ServletHandler.java:1104)
  at 
org.mortbay.jetty.servlet.ServletHandler.setFilterMappings(ServletHandler.java:1140)
  at 
org.mortbay.jetty.servlet.ServletHandler.addFilterWithMapping(ServletHandler.java:940)
  at 
org.mortbay.jetty.servlet.ServletHandler.addFilterWithMapping(ServletHandler.java:895)
  at org.mortbay.jetty.servlet.Context.addFilter(Context.java:207)  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$1.lifeCycleStarted(JettySolrRunner.java:98)
  at 
org.mortbay.component.AbstractLifeCycle.setStarted(AbstractLifeCycle.java:140)  
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:52)  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:123)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:118)
  at 
org.apache.solr.BaseDistributedSearchTestCase.createJetty(BaseDistributedSearchTestCase.java:245)
  at 
org.apache.solr.BaseDistributedSearchTestCase.createJetty(BaseDistributedSearchTestCase.java:236)
  at 
org.apache.solr.cloud.AbstractDistributedZkTestCase.createServers(AbstractDistributedZkTestCase.java:64)
  at org.apache.solr.BaseDistributedSearch  Severe errors in solr 
configuration.  Check your log files for more detailed information on what may 
be wrong.  - 
org.apache.solr.common.cloud.ZooKeeperException:   at 
org.apache.solr.core.CoreContainer.register(CoreContainer.java:517)  at 
org.apache.solr.core.CoreContainer.load(CoreContainer.java:406)  at 
org.apache.solr.core.CoreContainer.load(CoreContainer.java:290)  at 
org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:239)
  at 
org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:93)  at 
org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)  at 
org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)  at 
org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)  
at 
org.mortbay.jetty.servlet.ServletHandler.updateMappings(ServletHandler.java:1104)
  at 
org.mortbay.jetty.servlet.ServletHandler.setFilterMappings(ServletHandler.java:1140)
  at 
org.mortbay.jetty.servlet.ServletHandler.addFilterWithMapping(ServletHandler.java:940)
  at 
org.mortbay.jetty.servlet.ServletHandler.addFilterWithMapping(ServletHandler.java:895)
  at org.mortbay.jetty.servlet.Context.addFilter(Context.java:207)  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$1.lifeCycleStarted(JettySolrRunner.java:98)
  at 
org.mortbay.component.AbstractLifeCycle.setStarted(AbstractLifeCycle.java:140)  
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:52)  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:123)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:118)
  at 
org.apache.solr.BaseDistributedSearchTestCase.createJetty(BaseDistributedSearchTestCase.java:245)
  at 
org.apache.solr.BaseDistributedSearchTestCase.createJetty(BaseDistributedSearchTestCase.java:236)
  at 
org.apache.solr.cloud.AbstractDistributedZkTestCase.createServers(AbstractDistributedZkTestCase.java:64)
  at org.apache.solr.BaseDistributedSearch  request: 
http://localhost:11236/solr/update?wt=javabin&version=2

Stack Trace:


request: 

[HUDSON] Solr-3.x - Build # 291 - Still Failing

2011-03-11 Thread Apache Hudson Server
Build: https://hudson.apache.org/hudson/job/Solr-3.x/291/

All tests passed

Build Log (for compile errors):
[...truncated 11891 lines...]






Re: custom ValueSource for decoding geohash into lat lon

2011-03-11 Thread Ryan McKinley
Rather than using the FieldCache, you may consider a
WeakHashMap<IndexReader, YourObject> -- Solr uses this, and the internals
of FieldCache are implemented like this.  Long term, I want to see the
FieldCache moved to a map directly on the IndexReader (LUCENE-2665, but
that has a ways to go).
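A minimal sketch of the per-reader cache suggested above, using only the JDK; `IndexReader` and `GeohashValues` here are empty hypothetical stand-ins for the real classes:

```java
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

// Stand-in for org.apache.lucene.index.IndexReader
class IndexReader {}
// Stand-in for whatever per-reader data you want to cache
class GeohashValues {}

public class PerReaderCache {
    // Keyed on the reader itself: once a reader is closed and garbage-collected,
    // its cached values fall away with it.
    private final Map<IndexReader, GeohashValues> cache =
        Collections.synchronizedMap(new WeakHashMap<IndexReader, GeohashValues>());

    GeohashValues get(IndexReader reader) {
        GeohashValues v = cache.get(reader);
        if (v == null) {
            v = new GeohashValues(); // in real code: uninvert/load values for this reader
            cache.put(reader, v);
        }
        return v;
    }

    public static void main(String[] args) {
        PerReaderCache c = new PerReaderCache();
        IndexReader r = new IndexReader();
        // The same reader yields the same cached object on repeated lookups
        System.out.println(c.get(r) == c.get(r));
    }
}
```

The weak keys are what make this attractive: no explicit eviction hook is needed when a reader goes away.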



On Fri, Mar 11, 2011 at 5:25 PM, Smiley, David W. dsmi...@mitre.org wrote:
 On Mar 10, 2011, at 6:21 PM, William Bell wrote:

 1. ValueSources does not support MultiValue fields.

 I think the problem isn't ValueSources, it's the FieldCache.  The FieldCache 
 is fundamentally limited to one indexed primitive value per document. I took 
 a look at UninvertedField, but that appears to be tied to faceting and isn't 
 sufficiently flexible anyway. I think I need to do as UninvertedField does 
 and create a cache registered in solrconfig.xml.  The other tricky bit is 
 accessing it, and I think I figured it out: in my field type's 
 getValueSource(SchemaField field, QParser parser), the parser is a 
 FunctionQParser implementation, which has access to SolrQueryRequest, which 
 has access to SolrIndexSearcher, which lets me look up the cache by the name 
 I choose.  That's quite a chain of indirection that took time to track 
 down; I nearly gave up :-).

 ~ David



I want to take part in Google Summer Code 2011

2011-03-11 Thread anurag . it . jolly
I know Lucene, Solr, and Nutch, and I am also involved in such a project. 
Please guide me through any obstacles.




[jira] Created: (SOLR-2423) FieldType can take Object rather than String

2011-03-11 Thread Ryan McKinley (JIRA)
FieldType can take Object rather than String


 Key: SOLR-2423
 URL: https://issues.apache.org/jira/browse/SOLR-2423
 Project: Solr
  Issue Type: Improvement
Reporter: Ryan McKinley
 Fix For: 4.0


Currently, FieldType takes a String value and converts it to whatever it uses 
internally -- it would be great to be able to pass an Object in directly.  For 
embedded Solr, and for UpdateRequestProcessors that wish to reuse objects in 
various fields, this avoids converting to and from String.

This is a major API change, so it should only apply to 4.0.




[jira] Assigned: (SOLR-2423) FieldType can take Object rather than String

2011-03-11 Thread Ryan McKinley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McKinley reassigned SOLR-2423:
---

Assignee: Ryan McKinley

 FieldType can take Object rather than String
 

 Key: SOLR-2423
 URL: https://issues.apache.org/jira/browse/SOLR-2423
 Project: Solr
  Issue Type: Improvement
Reporter: Ryan McKinley
Assignee: Ryan McKinley
 Fix For: 4.0

 Attachments: SOLR-2423-field-object.patch


 Currently, FieldType takes a String value and converts it to whatever it 
 uses internally -- it would be great to be able to pass an Object in 
 directly.  For embedded Solr, and for UpdateRequestProcessors that wish to 
 reuse objects in various fields, this avoids converting to and from 
 String.
 This is a major API change, so it should only apply to 4.0.




[jira] Updated: (SOLR-2423) FieldType can take Object rather than String

2011-03-11 Thread Ryan McKinley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McKinley updated SOLR-2423:


Attachment: SOLR-2423-field-object.patch

This patch changes the FieldType (and SchemaField) method signatures to take 
Object:

{code:java}
-  public Fieldable createField(SchemaField field, String externalVal, float 
boost) {
+  public Fieldable createField(SchemaField field, Object value, float boost) {
{code}

It also changes DocumentBuilder to use the Object directly
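To make the benefit concrete, here is a hedged sketch of what an Object-accepting field type can do that a String-only one cannot; `SimpleDateFieldType` and its methods are illustrative stand-ins, not Solr's actual code:

```java
import java.util.Date;

public class SimpleDateFieldType {
    // Old style: only String input, so callers had to format Dates as text first
    Date parse(String externalVal) {
        return new Date(Long.parseLong(externalVal)); // assume millis-as-string
    }

    // New style: Object input; reuse the Date when the caller already has one
    Date toInternal(Object value) {
        if (value instanceof Date) {
            return (Date) value;        // no conversion needed
        }
        return parse(value.toString()); // fall back to the string path
    }

    public static void main(String[] args) {
        SimpleDateFieldType ft = new SimpleDateFieldType();
        Date d = new Date(1299888000000L);
        // The same Date instance passes straight through...
        System.out.println(ft.toInternal(d) == d);
        // ...while string input still round-trips to an equal Date
        System.out.println(ft.toInternal("1299888000000").equals(d));
    }
}
```

With the String-only signature, the first case would have to format the Date and re-parse it, which is exactly the round trip the patch avoids.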

 FieldType can take Object rather than String
 

 Key: SOLR-2423
 URL: https://issues.apache.org/jira/browse/SOLR-2423
 Project: Solr
  Issue Type: Improvement
Reporter: Ryan McKinley
 Fix For: 4.0

 Attachments: SOLR-2423-field-object.patch


 Currently, FieldType takes a String value and converts it to whatever it 
 uses internally -- it would be great to be able to pass an Object in 
 directly.  For embedded Solr, and for UpdateRequestProcessors that wish to 
 reuse objects in various fields, this avoids converting to and from 
 String.
 This is a major API change, so it should only apply to 4.0.




[jira] Commented: (SOLR-2423) FieldType can take Object rather than String

2011-03-11 Thread Ryan McKinley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13006001#comment-13006001
 ] 

Ryan McKinley commented on SOLR-2423:
-

FYI, I ran into this because I am experimenting with different ways to index 
spatial data and want to index the same data with multiple representations.  
Converting to/from String each time is wasteful, and with this change 
CopyField just works.

 FieldType can take Object rather than String
 

 Key: SOLR-2423
 URL: https://issues.apache.org/jira/browse/SOLR-2423
 Project: Solr
  Issue Type: Improvement
Reporter: Ryan McKinley
Assignee: Ryan McKinley
 Fix For: 4.0

 Attachments: SOLR-2423-field-object.patch


 Currently, FieldType takes a String value and converts it to whatever it 
 uses internally -- it would be great to be able to pass an Object in 
 directly.  For embedded Solr, and for UpdateRequestProcessors that wish to 
 reuse objects in various fields, this avoids converting to and from 
 String.
 This is a major API change, so it should only apply to 4.0.




FieldType API change proposal -- SOLR-2423

2011-03-11 Thread Ryan McKinley
I think FieldType should take an Object input rather than a String --
this gives FieldTypes the option of using (and reusing) explicit types
in addition to String.  For embedded apps that fill SolrInputDocuments
with real objects, the fields can use the objects directly -- this means
a Date does not have to be converted to a String and then back to
a Date.

This is a major API change, but I think the value is worth the trouble.

Thoughts?

ryan




Re: I want to take part in Google Summer Code 2011

2011-03-11 Thread Simon Willnauer
Hey there,

On Fri, Mar 11, 2011 at 10:02 PM,  anurag.it.jo...@gmail.com wrote:
 I know Lucene, Solr, and Nutch, and I am also involved in such a project. 
 Please guide me through any obstacles.

This is great! Did you read the GSoC WikiPage here:
http://wiki.apache.org/lucene-java/SummerOfCode2011

If so, do you have any particular project in mind that you want to spend time on?

Simon

