[jira] [Commented] (SOLR-4969) Enable ChaosMonkeyNothingIsSafeTest

2013-06-28 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695297#comment-13695297
 ] 

Uwe Schindler commented on SOLR-4969:
-

Unstable as before. Can you revert and disable again?

 Enable ChaosMonkeyNothingIsSafeTest
 ---

 Key: SOLR-4969
 URL: https://issues.apache.org/jira/browse/SOLR-4969
 Project: Solr
  Issue Type: Task
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.4


 This test is currently marked as a @badapple - I run it on my local jenkins 
 though, and it's behaved much better since some changes that were made today. 
 I'd like to play around with removing @badapple from this test.




[jira] [Reopened] (SOLR-4969) Enable ChaosMonkeyNothingIsSafeTest

2013-06-28 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler reopened SOLR-4969:
-


 Enable ChaosMonkeyNothingIsSafeTest
 ---

 Key: SOLR-4969
 URL: https://issues.apache.org/jira/browse/SOLR-4969
 Project: Solr
  Issue Type: Task
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.4


 This test is currently marked as a @badapple - I run it on my local jenkins 
 though, and it's behaved much better since some changes that were made today. 
 I'd like to play around with removing @badapple from this test.




[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #893: POMs out of sync

2013-06-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/893/

2 tests failed.
FAILED:  
org.apache.solr.cloud.BasicDistributedZkTest.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=3605, 
name=recoveryCmdExecutor-1942-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391) at 
java.net.Socket.connect(Socket.java:579) at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
 at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
 at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
 at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
 at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
 at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:297) 
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:722)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.BasicDistributedZkTest: 
   1) Thread[id=3605, name=recoveryCmdExecutor-1942-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
at java.net.Socket.connect(Socket.java:579)
at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:297)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
at __randomizedtesting.SeedInfo.seed([7B8A7BC96FDE9313]:0)


FAILED:  
org.apache.solr.cloud.BasicDistributedZkTest.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=3605, name=recoveryCmdExecutor-1942-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at 

[jira] [Commented] (SOLR-4969) Enable ChaosMonkeyNothingIsSafeTest

2013-06-28 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695374#comment-13695374
 ] 

Mark Miller commented on SOLR-4969:
---

No, it's not like before...

And no, I'm not ready to disable it again yet.

 Enable ChaosMonkeyNothingIsSafeTest
 ---

 Key: SOLR-4969
 URL: https://issues.apache.org/jira/browse/SOLR-4969
 Project: Solr
  Issue Type: Task
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.4


 This test is currently marked as a @badapple - I run it on my local jenkins 
 though, and it's behaved much better since some changes that were made today. 
 I'd like to play around with removing @badapple from this test.




[jira] [Commented] (SOLR-4967) Frequent test fails in org.apache.solr.cloud.SyncSliceTest

2013-06-28 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695393#comment-13695393
 ] 

Mark Miller commented on SOLR-4967:
---

Okay, I've got some decent logging in place - another hard one to parse though. 
Luckily, it looks like some seeds are easily repeatable fails.

 Frequent test fails in org.apache.solr.cloud.SyncSliceTest
 --

 Key: SOLR-4967
 URL: https://issues.apache.org/jira/browse/SOLR-4967
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.4


 It looks like this started with either the recent distrib commit fix or the 
 fix to wait to reconnect to zk forever, not 30 seconds. If that turns out to 
 be the case, this is probably exposing an existing issue rather than anything 
 new.




Re: Test failure locally, something about solrconfig.snippet.randomindexconfig

2013-06-28 Thread Erick Erickson
OK, found it. Two problems:

1. OpenCloseCoreStressTest constructs various sub-directories for cores,
bypassing the place where the included XML snippet is copied to the same
dir as solrconfig-minimal.xml. Easy fix.

2. The test swallows the exception rather than failing; I can fix that too.

Sorry for the noise.



On Thu, Jun 27, 2013 at 12:52 PM, Erick Erickson erickerick...@gmail.com wrote:

 Seeing this on trunk, checked out this morning. I was off on vacation, so I
 haven't a clue when it started; it could have been any time in the last week.
 Partial trace below.

 Getting this when running OpenCloseCoreStressTest, but since I don't see this
 from Jenkins it must be something local? Note it's using a different
 solrconfig than the default.

 Thanks,
 Erick

 Caused by: org.apache.solr.common.SolrException: Unable to create core:
 00011_core
 [junit4:junit4]   2 at
 org.apache.solr.core.CoreContainer.recordAndThrow(CoreContainer.java:1258)
 [junit4:junit4]   2 at
 org.apache.solr.core.CoreContainer.create(CoreContainer.java:777)
 [junit4:junit4]   2 at
 org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:989)
 [junit4:junit4]   2 ... 27 more
 [junit4:junit4]   2 Caused by: org.apache.solr.common.SolrException:
 Could not load config for solrconfig-minimal.xml
 [junit4:junit4]   2 at
 org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:703)
 [junit4:junit4]   2 at
 org.apache.solr.core.CoreContainer.create(CoreContainer.java:768)
 [junit4:junit4]   2 ... 28 more
 *[junit4:junit4]   2 Caused by: org.xml.sax.SAXParseException; systemId:
 solrres:/solrconfig-minimal.xml; lineNumber: 31; columnNumber: 107; An
 include with href 'solrconfig.snippet.randomindexconfig.xml' failed, and no
 fallback element was found.*
 [junit4:junit4]   2 at
 com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:198)
 [junit4:junit4]   2 at
 com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:177)
 [junit4:junit4]   2 at
 com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:441)
 [junit4:junit4]   2 at
 com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:368)
 [junit4:junit4]   2 at
 com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:325)
 [junit4:junit4]   2 at
 com.sun.org.apache.xerces.internal.xinclude.XIncludeHandler.reportError(XIncludeHandler.java:2326)
 [junit4:junit4]   2 at
 com.sun.org.apache.xerces.internal.xinclude.XIncludeHandler.reportFatalError(XIncludeHandler.java:2321)
 [junit4:junit4]   2 at
 com.sun.org.apache.xerces.internal.xinclude.XIncludeHandler.emptyElement(XIncludeHandler.java:948)
 [junit4:junit4]   2 at
 com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:353)
 [junit4:junit4]   2 at
 com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2717)
 [junit4:junit4]   2 at
 com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:607)
 [junit4:junit4]   2 at
 com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:116)
 [junit4:junit4]   2 at
 com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:489)
 [junit4:junit4]   2 at
 com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:835)
 [junit4:junit4]   2 at
 com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:764)
 [junit4:junit4]   2 at
 com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:123)
 [junit4:junit4]   2 at
 com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:237)
 [junit4:junit4]   2 at
 com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:300)
 [junit4:junit4]   2 at org.apache.solr.core.Config.init(Config.java:133)
 [junit4:junit4]   2 at org.apache.solr.core.Config.init(Config.java:85)
 [junit4:junit4]   2 at
 org.apache.solr.core.SolrConfig.init(SolrConfig.java:120)
 [junit4:junit4]   2 at
 org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:700)
 [junit4:junit4]   2 ... 29 more
 [junit4:junit4]   2



[jira] [Commented] (SOLR-4960) race condition in CoreContainer.shutdown leads to double closes on cores

2013-06-28 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695494#comment-13695494
 ] 

Erick Erickson commented on SOLR-4960:
--

[~yo...@apache.org] What's the state of all this? These patches look like 
they're on trunk but not 4x. When I looked at them, I realized that the 
transient and pending lists could be handled the same way (actually in a single 
list), which simplifies things.

I'll open up a new JIRA for my additions, but we need to merge these changes 
into 4x before I deal with the next patch.

 race condition in CoreContainer.shutdown leads to double closes on cores
 

 Key: SOLR-4960
 URL: https://issues.apache.org/jira/browse/SOLR-4960
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.3
Reporter: Yonik Seeley
Assignee: Yonik Seeley
 Fix For: 4.4

 Attachments: SOLR-4960_getCore.patch, SOLR-4960.patch


 CoreContainer.shutdown has a race condition that can lead to a closed (or 
 closing) core being handed out to an incoming request.  This can further lead 
 to SolrCore.close() logic being executed again when the request is finished.
 This bug was introduced in SOLR-4196 r1451797




[jira] [Created] (SOLR-4974) Consolidate and simplify closing cores in container shutdown.

2013-06-28 Thread Erick Erickson (JIRA)
Erick Erickson created SOLR-4974:


 Summary: Consolidate and simplify closing cores in container 
shutdown.
 Key: SOLR-4974
 URL: https://issues.apache.org/jira/browse/SOLR-4974
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.3, 5.0
Reporter: Erick Erickson
Assignee: Erick Erickson
Priority: Minor


Looking at Yonik's changes for SOLR-4690 suggested similar potential problems 
with transient and pending cores.




[jira] [Created] (SOLR-4975) SOLR CONSOLE DISPLAYING STATUS NOT CURRENT AFTER DELTA IMPORT

2013-06-28 Thread mohammad (JIRA)
mohammad created SOLR-4975:
--

 Summary: SOLR CONSOLE DISPLAYING STATUS NOT CURRENT AFTER DELTA 
IMPORT
 Key: SOLR-4975
 URL: https://issues.apache.org/jira/browse/SOLR-4975
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.1
Reporter: mohammad


I've recently done a delta-import with commit:true in Solr on multiple cores. 
The indexing went fine without any issue, and the document count shows the 
correct total number of documents on the admin console for all cores. However, 
a few lines below that total, the console is supposed to show whether the 
indexing status is current or not; mine is showing "not current" for only one 
core, while the other cores show "current". I'm using the same data import 
configuration for all cores.

I just need to understand if this is just a display issue or something else 
that I'm missing here.




[jira] [Closed] (SOLR-4975) SOLR CONSOLE DISPLAYING STATUS NOT CURRENT AFTER DELTA IMPORT

2013-06-28 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson closed SOLR-4975.


Resolution: Invalid

Please raise this question on the Solr user's list rather than raising a JIRA, 
until you're reasonably sure you're encountering a bug rather than just a setup 
issue.

 SOLR CONSOLE DISPLAYING STATUS NOT CURRENT AFTER DELTA IMPORT
 -

 Key: SOLR-4975
 URL: https://issues.apache.org/jira/browse/SOLR-4975
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.1
Reporter: mohammad

 I've recently done a delta-import with commit:true in Solr on multiple cores. 
 The indexing went fine without any issue, and the document count shows the 
 correct total number of documents on the admin console for all cores. However, 
 a few lines below that total, the console is supposed to show whether the 
 indexing status is current or not; mine is showing "not current" for only one 
 core, while the other cores show "current". I'm using the same data import 
 configuration for all cores.
 I just need to understand if this is just a display issue or something else 
 that I'm missing here.




Re: The commit-tag-bot...

2013-06-28 Thread Mark Miller
Well JIRA locked out the new user pretty right away…sweet…

Also cannot log him in even if I solve the required captcha - that or I just 
can't solve captchas for the life of me.

Will see if I can reset it by email or not.

- Mark

On Jun 27, 2013, at 8:45 PM, Steve Rowe sar...@gmail.com wrote:

 Hey Mark,
 
 Thanks for fixing!
 
 One (trivial) thing I noticed about recent (non-commit-bot) commit 
 notification emails: the included subversion revision link is 
 shorter/eleganter, e.g. http://svn.apache.org/r1497608, than the format the 
 (heavy) commit tag bot uses, e.g. 
 http://svn.apache.org/viewvc?view=revision&revision=1497563.
 
 Steve
 
 On Jun 27, 2013, at 3:13 PM, Mark Miller markrmil...@gmail.com wrote:
 
 The commit-tag-bot has been offline since sometime Saturday.
 
 The JIRA account being used is in a bad state - it wants the user to solve a 
 captcha to log in, but won't accept any answers.
 
 To try and resolve this, I opened 
 https://issues.apache.org/jira/browse/INFRA-6469
 
 Unfortunately, the email address for that account is … odd. To my knowledge 
 it goes nowhere. So unless the infra guys take pity on me, that account is 
 in limbo.
 
 To work around this and bring the commit-tag-bot back up now, I've made a 
 new JIRA account for it - The Heavy Commit Tag Bot 
 
 I apologize if any filters need adjusting.
 
 - Mark
 





[jira] [Resolved] (SOLR-4967) Frequent test fails in org.apache.solr.cloud.SyncSliceTest

2013-06-28 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-4967.
---

Resolution: Fixed

This should be fixed now. It was a test issue - the distrib update processor 
would not pass on multiple values for a param, and one of the params that it 
does pass on is a test param that can be multi-valued.
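
As a rough illustration of the multi-valued param point (using SolrJ's ModifiableSolrParams; the param name and the "forwarding" code below are made up for the example and are not the actual DistributedUpdateProcessor change):

import org.apache.solr.common.params.ModifiableSolrParams;

public class MultiValuedParamSketch {
  public static void main(String[] args) {
    // A request parameter can carry several values.
    ModifiableSolrParams original = new ModifiableSolrParams();
    original.add("my.test.param", "http://host1/solr", "http://host2/solr");

    // Lossy forwarding: get() only returns the first value.
    ModifiableSolrParams lossy = new ModifiableSolrParams();
    lossy.set("my.test.param", original.get("my.test.param"));

    // Faithful forwarding: getParams() returns every value.
    ModifiableSolrParams full = new ModifiableSolrParams();
    full.set("my.test.param", original.getParams("my.test.param"));

    System.out.println("lossy carries " + lossy.getParams("my.test.param").length
        + " value(s), full carries " + full.getParams("my.test.param").length + " value(s)");
  }
}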

 Frequent test fails in org.apache.solr.cloud.SyncSliceTest
 --

 Key: SOLR-4967
 URL: https://issues.apache.org/jira/browse/SOLR-4967
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.4


 It looks like this started with either the recent distrib commit fix or the 
 fix to wait to reconnect to zk forever, not 30 seconds. If that turns out to 
 be the case, this is probably exposing an existing issue rather than anything 
 new.




Re: The commit-tag-bot...

2013-06-28 Thread Mark Miller
Ah well, no go. If I try and send a pw reset link, it says it's sent it and I 
never get it. If I try and send the username to the registered email address I 
used, it says that address does not exist. If I try and login as the user and 
solve the captcha, it doesn't work. If I make a new user, I'm sure something 
has changed with JIRA so that he will get locked out again anyway. If I bring 
it up with infra, they tell me to email root@ and that this is no matter for 
JIRA. 

I give up.

- Mark

On Jun 28, 2013, at 11:30 AM, Mark Miller markrmil...@gmail.com wrote:

 Well JIRA locked out the new user pretty right away…sweet…
 
 Also cannot log him in even if I solve the required captcha - that or I just 
 can't solve captchas for the life of me.
 
 Will see if I can reset it by email or not.
 
 - Mark
 
 On Jun 27, 2013, at 8:45 PM, Steve Rowe sar...@gmail.com wrote:
 
 Hey Mark,
 
 Thanks for fixing!
 
 One (trivial) thing I noticed about recent (non-commit-bot) commit 
 notification emails: the included subversion revision link is 
 shorter/eleganter, e.g. http://svn.apache.org/r1497608, than the format 
 the (heavy) commit tag bot uses, e.g. 
  http://svn.apache.org/viewvc?view=revision&revision=1497563.
 
 Steve
 
 On Jun 27, 2013, at 3:13 PM, Mark Miller markrmil...@gmail.com wrote:
 
 The commit-tag-bot has been offline since sometime Saturday.
 
 The JIRA account being used is in a bad state - it wants the user to solve 
 a captcha to log in, but won't accept any answers.
 
 To try and resolve this, I opened 
 https://issues.apache.org/jira/browse/INFRA-6469
 
 Unfortunately, the email address for that account is … odd. To my knowledge 
 it goes nowhere. So unless the infra guys take pity on me, that account is 
 in limbo.
 
 To work around this and bring the commit-tag-bot back up now, I've made a 
 new JIRA account for it - The Heavy Commit Tag Bot 
 
 I apologize if any filters need adjusting.
 
 - Mark
 
 





[jira] [Commented] (SOLR-4975) SOLR CONSOLE DISPLAYING STATUS NOT CURRENT AFTER DELTA IMPORT

2013-06-28 Thread mohammad (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695525#comment-13695525
 ] 

mohammad commented on SOLR-4975:


Thanks for your quick response ... will do that.

 SOLR CONSOLE DISPLAYING STATUS NOT CURRENT AFTER DELTA IMPORT
 -

 Key: SOLR-4975
 URL: https://issues.apache.org/jira/browse/SOLR-4975
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.1
Reporter: mohammad

 I've recently done a delta-import with commit:true in Solr on multiple cores. 
 The indexing went fine without any issue, and the document count shows the 
 correct total number of documents on the admin console for all cores. However, 
 a few lines below that total, the console is supposed to show whether the 
 indexing status is current or not; mine is showing "not current" for only one 
 core, while the other cores show "current". I'm using the same data import 
 configuration for all cores.
 I just need to understand if this is just a display issue or something else 
 that I'm missing here.




[jira] [Created] (LUCENE-5081) Compress doc ID sets

2013-06-28 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-5081:


 Summary: Compress doc ID sets
 Key: LUCENE-5081
 URL: https://issues.apache.org/jira/browse/LUCENE-5081
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor


Our filters use bit sets a lot to store document IDs. However, it is likely 
that most of them are sparse hence easily compressible. Having efficient 
compressed sets would allow for caching more data.




[jira] [Updated] (LUCENE-5014) ANTLR Lucene query parser

2013-06-28 Thread Roman Chyla (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Chyla updated LUCENE-5014:


Attachment: LUCENE-5014.txt

The patch that *actually* contains the extended parser with NEAR operator 
support

 ANTLR Lucene query parser
 -

 Key: LUCENE-5014
 URL: https://issues.apache.org/jira/browse/LUCENE-5014
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/queryparser, modules/queryparser
Affects Versions: 4.3
 Environment: all
Reporter: Roman Chyla
  Labels: antlr, query, queryparser
 Attachments: LUCENE-5014.txt, LUCENE-5014.txt, LUCENE-5014.txt, 
 LUCENE-5014.txt


 I would like to propose a new way of building query parsers for Lucene.  
 Currently, most Lucene parsers are hard to extend because they are either 
 written in Java (ie. the SOLR query parser, or edismax) or the parsing logic 
 is 'married' with the query building logic (i.e. the standard lucene parser, 
 generated by JavaCC) - which makes any extension really hard.
 A few years back, Lucene got the contrib/modern query parser (later renamed to 
 'flexible'), yet that parser didn't become a star (it must be very confusing 
 for many users). However, that parsing framework is very powerful! And it is 
 a real pity that there aren't more parsers already using it - because it 
 allows us to add/extend/change almost any aspect of the query parsing. 
 So, if we combine ANTLR + queryparser.flexible, we can get a very powerful 
 framework for building almost any query language one can think of. And I hope 
 this extension can become useful.
 The details:
  - every new query syntax is written in EBNF, it lives in separate files (and 
 can be tested/developed independently - using 'gunit')
  - ANTLR parser generates parsing code (and it can generate parsers in 
 several languages, the main target is Java, but it can also do Python - which 
 may be interesting for pylucene)
  - the parser generates AST (abstract syntax tree) which is consumed by a  
 'pipeline' of processors, users can easily modify this pipeline to add a 
 desired functionality
  - the new parser contains a few (very important) debugging functions; it can 
 print results of every stage of the build, generate AST's as graphical 
 charts; ant targets help to build/test/debug grammars
  - I've tried to reuse the existing queryparser.flexible components as much 
 as possible, only adding new processors when necessary
 Assumptions about the grammar:
  - every grammar must have one top parse rule called 'mainQ'
  - parsers must generate AST (Abstract Syntax Tree)
 The structure of the AST is left open; there are components which make 
 assumptions about the shape of the AST (i.e. that a MODIFIER is a parent of a 
 FIELD), however users are free to choose/write different processors with 
 different assumptions about the AST shape.
 More documentation on how to use the parser can be seen here:
 http://29min.wordpress.com/category/antlrqueryparser/
 The parser was created more than a year back and is used in production 
 (http://labs.adsabs.harvard.edu/adsabs/). Different dialects of query 
 languages (with proximity operators, functions, special logic, etc.) can be 
 seen here: 
 https://github.com/romanchyla/montysolr/tree/master/contrib/adsabs
 https://github.com/romanchyla/montysolr/tree/master/contrib/invenio




Re: The commit-tag-bot...

2013-06-28 Thread Steve Rowe
Thanks for the effort Mark, sorry it was such a pain.

I've filed a JIRA to switch to using svngit2jira for LUCENE and SOLR issues 
(following http://www.apache.org/dev/svngit2jira): 
https://issues.apache.org/jira/browse/INFRA-6474.

Steve

On Jun 28, 2013, at 11:45 AM, Mark Miller markrmil...@gmail.com wrote:

 Ah well, no go. If I try and send a pw reset link, it says it's sent it and I 
 never get it. If I try and send the username to the registered email address 
 I used, it says that address does not exist. If I try and login as the user 
 and solve the captcha, it doesn't work. If I make a new user, I'm sure 
 something has changed with JIRA so that he will get locked out again anyway. 
 If I bring it up with infra, they tell me to email root@ and that this is no 
 matter for JIRA. 
 
 I give up.
 
 - Mark
 
 On Jun 28, 2013, at 11:30 AM, Mark Miller markrmil...@gmail.com wrote:
 
 Well JIRA locked out the new user pretty right away…sweet…
 
 Also cannot log him in even if I solve the required captcha - that or I just 
 can't solve captchas for the life of me.
 
 Will see if I can reset it by email or not.
 
 - Mark
 
 On Jun 27, 2013, at 8:45 PM, Steve Rowe sar...@gmail.com wrote:
 
 Hey Mark,
 
 Thanks for fixing!
 
 One (trivial) thing I noticed about recent (non-commit-bot) commit 
 notification emails: the included subversion revision link is 
 shorter/eleganter, e.g. http://svn.apache.org/r1497608, than the format 
 the (heavy) commit tag bot uses, e.g. 
  http://svn.apache.org/viewvc?view=revision&revision=1497563.
 
 Steve
 
 On Jun 27, 2013, at 3:13 PM, Mark Miller markrmil...@gmail.com wrote:
 
 The commit-tag-bot has been offline since sometime Saturday.
 
 The JIRA account being used is in a bad state - it wants the user to solve 
 a captcha to log in, but won't accept any answers.
 
 To try and resolve this, I opened 
 https://issues.apache.org/jira/browse/INFRA-6469
 
 Unfortunately, the email address for that account is … odd. To my 
 knowledge it goes nowhere. So unless the infra guys take pity on me, that 
 account is in limbo.
 
 To work around this and bring the commit-tag-bot back up now, I've made a 
 new JIRA account for it - The Heavy Commit Tag Bot 
 
 I apologize if any filters need adjusting.
 
 - Mark
 





[jira] [Updated] (SOLR-4968) The collection alias api should have a list cmd.

2013-06-28 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4968:
--

Attachment: SOLR-4968.patch

First patch. Response needs some tweaking, test needs some asserts.

 The collection alias api should have a list cmd.
 

 Key: SOLR-4968
 URL: https://issues.apache.org/jira/browse/SOLR-4968
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.4

 Attachments: SOLR-4968.patch







[jira] [Updated] (LUCENE-5081) Compress doc ID sets

2013-06-28 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-5081:
-

Attachment: LUCENE-5081.patch

Here is an implementation of a compressed doc ID set. Although it is immutable, 
supports only sequential access, and requires doc IDs to be provided in order at 
building time, it supports fast iteration (nextDoc), skipping (advance), union 
and intersection. The detailed format is a bit complex (see the javadocs), but 
the rough idea is to store large sequences of null bytes as a VInt (just their 
length) and sequences of non-null bytes as-is. This implementation can compress 
data as soon as it can find more than 2 consecutive null bytes. Moreover, even 
incompressible sets only require a few bytes more than FixedBitSet.
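
For intuition, a stripped-down Java sketch of that "runs of null bytes stored as a VInt" idea follows; the class and method names are invented, and the actual patch's format (and its DocIdSet integration) is more elaborate than this.

import java.io.ByteArrayOutputStream;

final class SparseByteRLE {

  // Encodes repeated [zeroRunLength VInt][literalLength VInt][literal bytes] blocks.
  static byte[] encode(byte[] bits) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    int i = 0;
    while (i < bits.length) {
      int zeroStart = i;
      while (i < bits.length && bits[i] == 0) i++;      // run of null bytes
      int zeroLen = i - zeroStart;
      int litStart = i;
      while (i < bits.length && bits[i] != 0) i++;      // run of non-null bytes
      int litLen = i - litStart;
      writeVInt(out, zeroLen);                          // the whole zero run costs one VInt
      writeVInt(out, litLen);
      out.write(bits, litStart, litLen);                // non-null bytes stored as-is
    }
    return out.toByteArray();
  }

  // Standard unsigned VInt: 7 data bits per byte, high bit set means "more follows".
  static void writeVInt(ByteArrayOutputStream out, int value) {
    while ((value & ~0x7F) != 0) {
      out.write((value & 0x7F) | 0x80);
      value >>>= 7;
    }
    out.write(value);
  }

  public static void main(String[] args) {
    byte[] sparse = new byte[1 << 20];                  // 2^23 bits, mostly clear
    sparse[3] = 0x40;                                   // a couple of set bits
    sparse[900000] = 0x01;
    System.out.println("raw: " + sparse.length + " bytes, encoded: "
        + encode(sparse).length + " bytes");
  }
}

On a mostly-empty 1 MB byte[] this encodes down to a handful of bytes, which is the effect the benchmark below measures on real doc ID sets.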

I ran a quick benchmark to measure the size (as reported by RamUsageEstimator) 
depending on the percentage of bits set on a bit set containing 2^23 
elements (FixedBitSet requires 1MB), as well as the time required to iterate 
over all document IDs compared to FixedBitSet:
||Percentage of bits set||Size||Iteration time/FixedBitSet iteration time||
|0.001%|360 bytes|0.01|
|0.01%|2.8KB|0.10|
|0.1%|23.8 KB|0.38|
|1%|187.7 KB|0.80|
|10%|864 KB|1.3|
|50%|1 MB|2.5|
|90%|1 MB|2.3|
|100%|1 MB|1.7|

Even in the worst case, memory usage exceeds the memory usage of FixedBitSet by 
a few bytes, and iteration is 2.5 times slower.

The patch includes the set implementation but it is not used anywhere yet. I 
was thinking about using it automatically instead of FixedBitSet in 
CachingWrapperFilter but it looks like ToParentBlockJoinQuery expects to get a 
FixedBitSet from the cache.

 Compress doc ID sets
 

 Key: LUCENE-5081
 URL: https://issues.apache.org/jira/browse/LUCENE-5081
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-5081.patch


 Our filters use bit sets a lot to store document IDs. However, it is likely 
 that most of them are sparse hence easily compressible. Having efficient 
 compressed sets would allow for caching more data.




[jira] [Commented] (SOLR-4945) Japanese Autocomplete and Highlighter broken

2013-06-28 Thread Shruthi Khatawkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695587#comment-13695587
 ] 

Shruthi Khatawkar commented on SOLR-4945:
-

Thanks All.

Regards,
Shruthi

 Japanese Autocomplete and Highlighter broken
 

 Key: SOLR-4945
 URL: https://issues.apache.org/jira/browse/SOLR-4945
 Project: Solr
  Issue Type: Bug
  Components: highlighter
Affects Versions: 1.4.1
Reporter: Shruthi Khatawkar

 Autocomplete is implemented with Highlighter functionality. This works fine 
 for most of the languages but breaks for Japanese.
 multivalued,termVector,termPositions and termOffset are set to true.
 Here is an example:
 Query: product classic.
 Result:
 Actual : 
 この商品の互換性の機種にproduct 1 やclassic Touch2 が記載が有りません。 USB接続ケーブルをproduct 1 やclassic 
 Touch2に付属の物を使えば利用出来ると思いますが 間違っていますか?
 With Highlighter (<em></em> tags being used):
 この商品の互換性の機種<em>にproduct</em> 1 <em>やclassic</em> Touch2 が記載が有りません。 
 USB接続ケーブルをproduct 1 やclassic Touch2に付属の物を使えば利用出来ると思いますが 間違っていますか?
 Though the query terms product classic are repeated twice, highlighting is 
 happening only on the first instance, as shown above.
 Solr returns only the first instance's offset; the second instance is ignored.
 It is also observed that the highlighter repeats the first letter of the token 
 if there is a numeric.
 For e.g., query: product, and we have product1; the highlighter returns it as 
 p<em>product</em>1.




[jira] [Created] (SOLR-4976) info stream doesn't work with merged segment warmer

2013-06-28 Thread Ryan Ernst (JIRA)
Ryan Ernst created SOLR-4976:


 Summary: info stream doesn't work with merged segment warmer
 Key: SOLR-4976
 URL: https://issues.apache.org/jira/browse/SOLR-4976
 Project: Solr
  Issue Type: Bug
Reporter: Ryan Ernst


In SolrIndexConfig, constructing the merged segment warmer takes an InfoStream, 
but InfoStream.NO_OUTPUT is hardcoded.  Instead, the info stream should be 
constructed in SolrIndexConfig, instead of SolrIndexWriter where it is now, so 
that it can be used for the warmer.
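
For context, here is a minimal Lucene-level sketch (against the Lucene 4.x API of the time, not the proposed Solr change itself) of what sharing one InfoStream between the writer config and the merged segment warmer looks like; PrintStreamInfoStream and SimpleMergedSegmentWarmer are existing Lucene classes, while the class name below is made up.

import java.io.IOException;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.SimpleMergedSegmentWarmer;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.PrintStreamInfoStream;
import org.apache.lucene.util.Version;

public class InfoStreamWarmerSketch {
  public static void main(String[] args) throws IOException {
    // One InfoStream instance...
    PrintStreamInfoStream infoStream = new PrintStreamInfoStream(System.out);

    IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_43,
        new StandardAnalyzer(Version.LUCENE_43));
    // ...used both for the writer's own logging...
    iwc.setInfoStream(infoStream);
    // ...and for the merged segment warmer, so warming shows up in the same log.
    iwc.setMergedSegmentWarmer(new SimpleMergedSegmentWarmer(infoStream));

    IndexWriter writer = new IndexWriter(new RAMDirectory(), iwc);
    // indexing and merging here would be reported through infoStream
    writer.close();
  }
}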




[jira] [Resolved] (SOLR-4539) Consistently failing seed for SyncSliceTest

2013-06-28 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-4539.
---

   Resolution: Fixed
Fix Version/s: 5.0

 Consistently failing seed for SyncSliceTest
 ---

 Key: SOLR-4539
 URL: https://issues.apache.org/jira/browse/SOLR-4539
 Project: Solr
  Issue Type: Bug
Reporter: Shawn Heisey
Assignee: Mark Miller
 Fix For: 5.0, 4.4


 http://mail-archives.us.apache.org/mod_mbox/lucene-dev/201303.mbox/%3c513933dd.5000...@elyograg.org%3E
 {quote}
 [junit4:junit4]   2 NOTE: reproduce with: ant test  -Dtestcase=SyncSliceTest 
 -Dtests.method=testDistribSearch -Dtests.seed=1D1206F80A77FE6F 
 -Dtests.nightly=true -Dtests.weekly=true -Dtests.slow=true 
 -Dtests.locale=ar_LY -Dtests.timezone=BET -Dtests.file.encoding=UTF-8
 [junit4:junit4] FAILURE  109s | SyncSliceTest.testDistribSearch 
 [junit4:junit4] Throwable #1: java.lang.AssertionError: shard1 is not 
 consistent.  Got 305 from http://127.0.0.1:44083/collection1lastClient and 
 got 5 from http://127.0.0.1:43445/collection1
 [junit4:junit4]at 
 __randomizedtesting.SeedInfo.seed([1D1206F80A77FE6F:9CF488E07D289E53]:0)
 [junit4:junit4]at org.junit.Assert.fail(Assert.java:93)
 [junit4:junit4]at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:963)
 [junit4:junit4]at 
 org.apache.solr.cloud.SyncSliceTest.doTest(SyncSliceTest.java:234)
 [junit4:junit4]at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:806)
 {quote}
 (issue filed by Hoss on Shawn's behalf so we don't lose track of it)




[jira] [Commented] (SOLR-4565) Extend NorwegianMinimalStemFilter to handle nynorsk

2013-06-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695709#comment-13695709
 ] 

Jan Høydahl commented on SOLR-4565:
---

Thanks Robert!

 Extend NorwegianMinimalStemFilter to handle nynorsk
 -

 Key: SOLR-4565
 URL: https://issues.apache.org/jira/browse/SOLR-4565
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Reporter: Jan Høydahl
Assignee: Robert Muir
 Fix For: 5.0, 4.4

 Attachments: SOLR-4565.patch, SOLR-4565.patch, SOLR-4565.patch


 Norway has two official languages, both called Norwegian, namely Bokmål 
 (nb_NO) and Nynorsk (nn_NO).
 The NorwegianMinimalStemFilter and NorwegianLightStemFilter today only works 
 with the largest of the two, namely Bokmål.
 Propose to incorporate nn support through a new variant config option:
 * variant=nb or not configured - Bokmål as today
 * variant=nn - Nynorsk only
 * variant=no - Remove stems for both nb and nn




[jira] [Commented] (SOLR-4976) info stream doesn't work with merged segment warmer

2013-06-28 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695710#comment-13695710
 ] 

Robert Muir commented on SOLR-4976:
---

+1... I TODO'd this instead of cleaning it up properly, so that you get logging 
for the segment warming.

 info stream doesn't work with merged segment warmer
 ---

 Key: SOLR-4976
 URL: https://issues.apache.org/jira/browse/SOLR-4976
 Project: Solr
  Issue Type: Bug
Reporter: Ryan Ernst

 In SolrIndexConfig, constructing the merged segment warmer takes an 
 InfoStream, but InfoStream.NO_OUTPUT is hardcoded.  Instead, the info stream 
 should be constructed in SolrIndexConfig, instead of SolrIndexWriter where it 
 is now, so that it can be used for the warmer.




[jira] [Commented] (SOLR-4221) Custom sharding

2013-06-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-4221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695723#comment-13695723
 ] 

Jan Høydahl commented on SOLR-4221:
---

bq. If you wish to create a new collection/shard w/o enough nodes we can just 
use something like forceCreate=true so that the collection and shards are 
created empty + inactive. If forceCreate=false, let us just fail if nodes are 
insufficient.

+1

When adding a new empty node, it should only get auto-assigned replicas if any 
of the existing collections have not filled up their numShards and 
replicationFactor yet. Otherwise it should be left empty and then be utilized by 
future create collection requests or rebalance/split requests.

 Custom sharding
 ---

 Key: SOLR-4221
 URL: https://issues.apache.org/jira/browse/SOLR-4221
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Assignee: Noble Paul
 Attachments: SOLR-4221.patch


 Features to let users control everything about sharding/routing.




[JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 51118 - Failure!

2013-06-28 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/51118/

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.search.TestMultiTermConstantScore

Error Message:
count should be <= maxMergeCount (= 1)

Stack Trace:
java.lang.IllegalArgumentException: count should be <= maxMergeCount (= 1)
at __randomizedtesting.SeedInfo.seed([491FB3F847D4B1F7]:0)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.setMaxThreadCount(ConcurrentMergeScheduler.java:91)
at org.apache.lucene.util._TestUtil.reduceOpenFiles(_TestUtil.java:773)
at 
org.apache.lucene.search.BaseTestRangeFilter.build(BaseTestRangeFilter.java:140)
at 
org.apache.lucene.search.BaseTestRangeFilter.beforeClassBaseTestRangeFilter(BaseTestRangeFilter.java:102)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:677)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:722)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.search.TestMultiTermConstantScore

Error Message:


Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([491FB3F847D4B1F7]:0)
at 
org.apache.lucene.search.TestMultiTermConstantScore.afterClass(TestMultiTermConstantScore.java:81)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:700)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 

Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 51118 - Failure!

2013-06-28 Thread Robert Muir
I'll look into this... it's https://issues.apache.org/jira/browse/LUCENE-5080 :)

On Fri, Jun 28, 2013 at 4:01 PM, buil...@flonkings.com wrote:

 Build:
 builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/51118/

 3 tests failed.
 FAILED:
  junit.framework.TestSuite.org.apache.lucene.search.TestMultiTermConstantScore

 Error Message:
 count should be <= maxMergeCount (= 1)

 Stack Trace:
 java.lang.IllegalArgumentException: count should be <= maxMergeCount (= 1)
 at __randomizedtesting.SeedInfo.seed([491FB3F847D4B1F7]:0)
 at
 org.apache.lucene.index.ConcurrentMergeScheduler.setMaxThreadCount(ConcurrentMergeScheduler.java:91)
 at
 org.apache.lucene.util._TestUtil.reduceOpenFiles(_TestUtil.java:773)
 at
 org.apache.lucene.search.BaseTestRangeFilter.build(BaseTestRangeFilter.java:140)
 at
 org.apache.lucene.search.BaseTestRangeFilter.beforeClassBaseTestRangeFilter(BaseTestRangeFilter.java:102)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:677)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
 at
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at
 org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
 at
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
 at
 org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
 at
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
 at java.lang.Thread.run(Thread.java:722)


 FAILED:
  junit.framework.TestSuite.org.apache.lucene.search.TestMultiTermConstantScore

 Error Message:


 Stack Trace:
 java.lang.NullPointerException
 at __randomizedtesting.SeedInfo.seed([491FB3F847D4B1F7]:0)
 at
 org.apache.lucene.search.TestMultiTermConstantScore.afterClass(TestMultiTermConstantScore.java:81)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:700)
 at
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at
 org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
 at
 

Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 51118 - Failure!

2013-06-28 Thread Robert Muir
I committed a fix

On Fri, Jun 28, 2013 at 4:08 PM, Robert Muir rcm...@gmail.com wrote:

 ill look into this... its
 https://issues.apache.org/jira/browse/LUCENE-5080 :)

 On Fri, Jun 28, 2013 at 4:01 PM, buil...@flonkings.com wrote:

 Build:
 builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/51118/

 3 tests failed.
 FAILED:
  
 junit.framework.TestSuite.org.apache.lucene.search.TestMultiTermConstantScore

 Error Message:
 count should be <= maxMergeCount (= 1)

 Stack Trace:
 java.lang.IllegalArgumentException: count should be <= maxMergeCount (= 1)
 at __randomizedtesting.SeedInfo.seed([491FB3F847D4B1F7]:0)
 at
 org.apache.lucene.index.ConcurrentMergeScheduler.setMaxThreadCount(ConcurrentMergeScheduler.java:91)
 at
 org.apache.lucene.util._TestUtil.reduceOpenFiles(_TestUtil.java:773)
 at
 org.apache.lucene.search.BaseTestRangeFilter.build(BaseTestRangeFilter.java:140)
 at
 org.apache.lucene.search.BaseTestRangeFilter.beforeClassBaseTestRangeFilter(BaseTestRangeFilter.java:102)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:677)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
 at
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at
 org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
 at
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
 at
 org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
 at
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
 at java.lang.Thread.run(Thread.java:722)


 FAILED:
  
 junit.framework.TestSuite.org.apache.lucene.search.TestMultiTermConstantScore

 Error Message:


 Stack Trace:
 java.lang.NullPointerException
 at __randomizedtesting.SeedInfo.seed([491FB3F847D4B1F7]:0)
 at
 org.apache.lucene.search.TestMultiTermConstantScore.afterClass(TestMultiTermConstantScore.java:81)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:700)
 at
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at
 

[jira] [Commented] (SOLR-4221) Custom sharding

2013-06-28 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13695754#comment-13695754
 ] 

Yonik Seeley commented on SOLR-4221:


bq. Do we really need to have so many configuration params and complicate this? 

Eh, you replaced one parameter (a minimum replication factor to achieve) with 
another (a boolean meaning 0 or replicationFactor).
This seems akin to facet.includeZeros vs facet.minCount... the latter 
incorporates the former and is more powerful (hence fewer parameters in the 
long run), and the former would have never existed if the latter had been 
thought of first.

Now, I don't like the name createReplicationFactor (maybe minReplicas or 
something like that would be better), but the functionality is a superset of 
forceCreate and is more descriptive (after all, there are many ways to force 
things).  It also seems useful to be able to say something like "create this 
collection and make sure it's usable (at least one replica for every shard), 
but don't worry about trying to satisfy the replicationFactor just yet".  That 
would be minReplicas=1.
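
To make the semantics concrete, a create request under this proposal might look 
like the sketch below; minReplicas is only a name being floated in this 
discussion, not an existing Collections API parameter, and the other values are 
made up.

{code}
// Illustration only: "minReplicas" is a proposed name in this discussion, not an
// existing Collections API parameter; the collection name and counts are made up.
// minReplicas=1 would mean: succeed as long as every shard gets at least one
// replica, even if replicationFactor=3 cannot be satisfied yet.
public class CreateCollectionExample {
  public static void main(String[] args) {
    String createUrl = "http://localhost:8983/solr/admin/collections"
        + "?action=CREATE&name=mycollection&numShards=4"
        + "&replicationFactor=3&minReplicas=1";
    System.out.println(createUrl);
  }
}
{code}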

 Custom sharding
 ---

 Key: SOLR-4221
 URL: https://issues.apache.org/jira/browse/SOLR-4221
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Assignee: Noble Paul
 Attachments: SOLR-4221.patch


 Features to let users control everything about sharding/routing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4976) info stream doesn't work with merged segment warmer

2013-06-28 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst updated SOLR-4976:
-

Attachment: SOLR-4976.patch

First try at a patch, with a simple test.

 info stream doesn't work with merged segment warmer
 ---

 Key: SOLR-4976
 URL: https://issues.apache.org/jira/browse/SOLR-4976
 Project: Solr
  Issue Type: Bug
Reporter: Ryan Ernst
 Attachments: SOLR-4976.patch


 In SolrIndexConfig, constructing the merged segment warmer takes an 
 InfoStream, but InfoStream.NO_OUTPUT is hardcoded.  Instead, the info stream 
 should be constructed in SolrIndexConfig, instead of SolrIndexWriter where it 
 is now, so that it can be used for the warmer.
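
The attached patch isn't inlined here; as a rough sketch of the wiring the 
description calls for, using only public Lucene 4.x APIs (PrintStreamInfoStream 
stands in for whatever destination solrconfig.xml actually configures):

{code}
// Rough sketch of the wiring described above (not the attached patch): share one
// InfoStream between the IndexWriterConfig and the merged segment warmer instead
// of hardcoding InfoStream.NO_OUTPUT for the warmer.
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.SimpleMergedSegmentWarmer;
import org.apache.lucene.util.InfoStream;
import org.apache.lucene.util.PrintStreamInfoStream;
import org.apache.lucene.util.Version;

public class InfoStreamWiringSketch {
  public static IndexWriterConfig newConfig() {
    InfoStream infoStream = new PrintStreamInfoStream(System.out); // stand-in for the configured stream
    IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_44,
        new StandardAnalyzer(Version.LUCENE_44));
    iwc.setInfoStream(infoStream);                                        // writer logging
    iwc.setMergedSegmentWarmer(new SimpleMergedSegmentWarmer(infoStream)); // same stream for the warmer
    return iwc;
  }
}
{code}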

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4976) info stream doesn't work with merged segment warmer

2013-06-28 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13695777#comment-13695777
 ] 

Mark Miller commented on SOLR-4976:
---

Nice, +1.

 info stream doesn't work with merged segment warmer
 ---

 Key: SOLR-4976
 URL: https://issues.apache.org/jira/browse/SOLR-4976
 Project: Solr
  Issue Type: Bug
Reporter: Ryan Ernst
 Attachments: SOLR-4976.patch


 In SolrIndexConfig, constructing the merged segment warmer takes an 
 InfoStream, but InfoStream.NO_OUTPUT is hardcoded.  Instead, the info stream 
 should be constructed in SolrIndexConfig, instead of SolrIndexWriter where it 
 is now, so that it can be used for the warmer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4787) Join Contrib

2013-06-28 Thread Kranti Parisa (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13695802#comment-13695802
 ] 

Kranti Parisa commented on SOLR-4787:
-

Joel,

Yes, the sort implementation would be costly; I did review the code. 
I think having the option for MIN/MAX would help in a few cases, and if we pass 
that as null then we behave the same as the current implementation.

Having a LongLongOpenHashMap would really help.

-
Kranti
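
For reference, a sketch of the kind of primitive long-to-long map the comment 
refers to (HPPC's LongLongOpenHashMap); the wrapper class and method names below 
are made up, and this is not code from the patch.

{code}
// Hedged sketch only: illustrates why a primitive long->long map is attractive
// for the vjoin key/value structures.  LongLongOpenHashMap is from the HPPC
// library as of 2013; the surrounding class and field names are invented.
import com.carrotsearch.hppc.LongLongOpenHashMap;

public class JoinValueCache {
  private final LongLongOpenHashMap fromKeyToVal = new LongLongOpenHashMap();

  // store one (fromKey, fromVal) pair without boxing to Long
  public void put(long fromKey, long fromVal) {
    fromKeyToVal.put(fromKey, fromVal);
  }

  // look up the joined value for a document's toKey; 'missing' is returned
  // when the key was not present in the fromCore results
  public long valueFor(long toKey, long missing) {
    return fromKeyToVal.containsKey(toKey) ? fromKeyToVal.get(toKey) : missing;
  }
}
{code}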

 Join Contrib
 

 Key: SOLR-4787
 URL: https://issues.apache.org/jira/browse/SOLR-4787
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 4.2.1
Reporter: Joel Bernstein
Priority: Minor
 Fix For: 4.4

 Attachments: SOLR-4787-deadlock-fix.patch, SOLR-4787.patch, 
 SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
 SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
 SOLR-4787.patch


 This contrib provides a place where different join implementations can be 
 contributed to Solr. This contrib currently includes 2 join implementations. 
 The initial patch was generated from the Solr 4.3 tag. Because of changes in 
 the FieldCache API this patch will only build with Solr 4.2 or above.
 *PostFilterJoinQParserPlugin aka pjoin*
 The pjoin provides a join implementation that filters results in one core 
 based on the results of a search in another core. This is similar in 
 functionality to the JoinQParserPlugin but the implementation differs in a 
 couple of important ways.
 The first way is that the pjoin is designed to work with integer join keys 
 only. So, in order to use pjoin, integer join keys must be included in both 
 the "to" and "from" cores.
 The second difference is that the pjoin builds memory structures that are 
 used to quickly connect the join keys. It also uses a custom SolrCache named 
 "join" to hold intermediate DocSets which are needed to build the join memory 
 structures. So, the pjoin will need more memory than the JoinQParserPlugin to 
 perform the join.
 The main advantage of the pjoin is that it can scale to join millions of keys 
 between cores.
 Because it's a PostFilter, it only needs to join records that match the main 
 query.
 The syntax of the pjoin is the same as the JoinQParserPlugin except that the 
 plugin is referenced by the string "pjoin" rather than "join".
 fq=\{!pjoin fromCore=collection2 from=id_i to=id_i\}user:customer1
 The example filter query above will search the fromCore (collection2) for 
 user:customer1. This query will generate a list of values from the "from" 
 field that will be used to filter the main query. Only records from the main 
 query, where the "to" field is present in the "from" list, will be included in 
 the results.
 The solrconfig.xml in the main query core must contain the reference to the 
 pjoin.
 <queryParser name="pjoin" 
 class="org.apache.solr.joins.PostFilterJoinQParserPlugin"/>
 And the join contrib jars must be registered in the solrconfig.xml.
 <lib dir="../../../dist/" regex="solr-joins-\d.*\.jar" />
 The solrconfig.xml in the fromCore must have the "join" SolrCache configured.
  <cache name="join"
   class="solr.LRUCache"
   size="4096"
   initialSize="1024"
   />
 *ValueSourceJoinParserPlugin aka vjoin*
 The second implementation is the ValueSourceJoinParserPlugin aka vjoin. 
 This implements a ValueSource function query that can return a value from a 
 second core based on join keys and a limiting query. The limiting query can be 
 used to select a specific subset of data from the join core. This allows 
 customer-specific relevance data to be stored in a separate core and then 
 joined in the main query.
 The vjoin is called using the vjoin function query. For example:
 bf=vjoin(fromCore, fromKey, fromVal, toKey, query)
 This example shows vjoin being called by the edismax boost function 
 parameter. This example will return the fromVal from the fromCore. The 
 fromKey and toKey are used to link the records from the main query to the 
 records in the fromCore. The query is used to select a specific set of 
 records to join with in the fromCore.
 Currently the fromKey and toKey must be longs but this will change in future 
 versions. Like the pjoin, the "join" SolrCache is used to hold the join 
 memory structures.
 To configure the vjoin you must register the ValueSource plugin in the 
 solrconfig.xml as follows:
 <valueSourceParser name="vjoin" 
 class="org.apache.solr.joins.ValueSourceJoinParserPlugin" />
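
As a usage illustration only (not from the patch): issuing the pjoin filter 
query from SolrJ, assuming the parser and cache are registered as shown above; 
the core names, fields and main query are made up.

{code}
// Sketch of calling the pjoin filter from SolrJ 4.x; core names, fields and the
// main query are illustrative and assume the registration shown above.
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class PjoinExample {
  public static void main(String[] args) throws Exception {
    HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");
    SolrQuery q = new SolrQuery("product_type:book");          // main query on collection1
    q.addFilterQuery("{!pjoin fromCore=collection2 from=id_i to=id_i}user:customer1");
    QueryResponse rsp = solr.query(q);                         // only docs whose id_i appears
    System.out.println(rsp.getResults().getNumFound());        // in the collection2 results
    solr.shutdown();
  }
}
{code}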

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: 

[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b94) - Build # 6357 - Failure!

2013-06-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/6357/
Java: 64bit/jdk1.8.0-ea-b94 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
REGRESSION:  
org.apache.lucene.index.TestIndexWriterReader.testAddIndexesAndDoDeletesThreads

Error Message:
count should be >= maxThreadCount (= 4)

Stack Trace:
java.lang.IllegalArgumentException: count should be >= maxThreadCount (= 4)
at 
__randomizedtesting.SeedInfo.seed([FD7396C1CBF296C1:2740D6E5A06B96FD]:0)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.setMaxMergeCount(ConcurrentMergeScheduler.java:114)
at org.apache.lucene.util._TestUtil.reduceOpenFiles(_TestUtil.java:774)
at 
org.apache.lucene.index.TestIndexWriterReader$AddDirectoriesThreads.init(TestIndexWriterReader.java:411)
at 
org.apache.lucene.index.TestIndexWriterReader.testAddIndexesAndDoDeletesThreads(TestIndexWriterReader.java:370)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:491)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:724)




Build Log:
[...truncated 536 lines...]
[junit4:junit4] Suite: org.apache.lucene.index.TestIndexWriterReader
[junit4:junit4]   2 NOTE: reproduce with: ant test  
-Dtestcase=TestIndexWriterReader 

Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b94) - Build # 6357 - Failure!

2013-06-28 Thread Robert Muir
im just gonna go fix https://issues.apache.org/jira/browse/LUCENE-5080

On Fri, Jun 28, 2013 at 4:41 PM, Policeman Jenkins Server 
jenk...@thetaphi.de wrote:

 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/6357/
 Java: 64bit/jdk1.8.0-ea-b94 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

 1 tests failed.
 REGRESSION:
  
 org.apache.lucene.index.TestIndexWriterReader.testAddIndexesAndDoDeletesThreads

 Error Message:
 count should be >= maxThreadCount (= 4)

 Stack Trace:
 java.lang.IllegalArgumentException: count should be >= maxThreadCount (= 4)
 at
 __randomizedtesting.SeedInfo.seed([FD7396C1CBF296C1:2740D6E5A06B96FD]:0)
 at
 org.apache.lucene.index.ConcurrentMergeScheduler.setMaxMergeCount(ConcurrentMergeScheduler.java:114)
 at
 org.apache.lucene.util._TestUtil.reduceOpenFiles(_TestUtil.java:774)
 at
 org.apache.lucene.index.TestIndexWriterReader$AddDirectoriesThreads.init(TestIndexWriterReader.java:411)
 at
 org.apache.lucene.index.TestIndexWriterReader.testAddIndexesAndDoDeletesThreads(TestIndexWriterReader.java:370)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:491)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
 at
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at
 org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
 at
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
 at
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
 at
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
 at
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
 at
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at
 org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
 at
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
 at
 org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
 at
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
 at java.lang.Thread.run(Thread.java:724)



[JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 51144 - Failure!

2013-06-28 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/51144/

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.search.TestFieldCacheRangeFilter

Error Message:
count should be >= maxThreadCount (= 4)

Stack Trace:
java.lang.IllegalArgumentException: count should be >= maxThreadCount (= 4)
at __randomizedtesting.SeedInfo.seed([30B27FEC9ADCB1D1]:0)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.setMaxMergeCount(ConcurrentMergeScheduler.java:114)
at org.apache.lucene.util._TestUtil.reduceOpenFiles(_TestUtil.java:774)
at 
org.apache.lucene.search.BaseTestRangeFilter.build(BaseTestRangeFilter.java:140)
at 
org.apache.lucene.search.BaseTestRangeFilter.beforeClassBaseTestRangeFilter(BaseTestRangeFilter.java:102)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:677)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:722)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.search.TestFieldCacheRangeFilter

Error Message:


Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([30B27FEC9ADCB1D1]:0)
at 
org.apache.lucene.search.BaseTestRangeFilter.afterClassBaseTestRangeFilter(BaseTestRangeFilter.java:108)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:700)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 

[jira] [Created] (SOLR-4977) info stream in solrconfig should have option for writing to the solr log

2013-06-28 Thread Ryan Ernst (JIRA)
Ryan Ernst created SOLR-4977:


 Summary: info stream in solrconfig should have option for writing 
to the solr log
 Key: SOLR-4977
 URL: https://issues.apache.org/jira/browse/SOLR-4977
 Project: Solr
  Issue Type: Improvement
Reporter: Ryan Ernst


Having a separate file is annoying, plus the print stream option doesn't 
rollover on size or date, doesn't have custom formatting options, etc.  Exactly 
what the logging lib is meant to handle.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5080) CMS setters cannot work unless you setMaxMergeCount before you setMaxThreadCount

2013-06-28 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13695939#comment-13695939
 ] 

Robert Muir commented on LUCENE-5080:
-

{quote}
Maybe instead we can have CMS.reset(int,int) to avoid that case?
{quote}

Fine by me. If we decide we don't like it later, we can just remove it.

 CMS setters cannot work unless you setMaxMergeCount before you 
 setMaxThreadCount
 

 Key: LUCENE-5080
 URL: https://issues.apache.org/jira/browse/LUCENE-5080
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir

 {code}
   public void setMaxThreadCount(int count) {
 ...
 if (count > maxMergeCount) {
   throw new IllegalArgumentException("count should be <= maxMergeCount (= "
  + maxMergeCount + ")");
 }
 {code}
 but:
 {code}
 public void setMaxMergeCount(int count) {
 ...
 if (count < maxThreadCount) {
   throw new IllegalArgumentException("count should be >= maxThreadCount "
 + "(= " + maxThreadCount + ")");
 }
 {code}
 So you must call them in a magical order. I think we should nuke these 
 setters and just have a CMS(int,int)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5082) For SolrQueryParserBase.getPrefixQuery(String field, String termStr), use BytesRef from calling FieldType.readableToIndexed for Term

2013-06-28 Thread Jessica Cheng (JIRA)
Jessica Cheng created LUCENE-5082:
-

 Summary: For SolrQueryParserBase.getPrefixQuery(String field, 
String termStr), use BytesRef from calling FieldType.readableToIndexed for Term
 Key: LUCENE-5082
 URL: https://issues.apache.org/jira/browse/LUCENE-5082
 Project: Lucene - Core
  Issue Type: Wish
  Components: core/queryparser
Affects Versions: 4.3
Reporter: Jessica Cheng
Priority: Minor


Current, SolrQueryParserBase.getFieldQuery calls FieldType.getFieldQuery, which 
in turn calls readableToIndexed to get a type-specific BytesRef to pass to Term 
constructor. I would like SolrQueryParserBase.getPrefixQuery to do the same 
thing so that I can implement my own indexed binary field and allow my field to 
process my base64 encoded string query.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5082) For SolrQueryParserBase.getPrefixQuery(String field, String termStr), use BytesRef from calling FieldType.readableToIndexed for Term

2013-06-28 Thread Jessica Cheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jessica Cheng updated LUCENE-5082:
--

Description: Currently, SolrQueryParserBase.getFieldQuery calls 
FieldType.getFieldQuery, which in turn calls readableToIndexed to get a 
type-specific BytesRef to pass to Term constructor. I would like 
SolrQueryParserBase.getPrefixQuery to do the same thing so that I can implement 
my own indexed binary field and allow my field to process my base64 encoded 
string query.  (was: Current, SolrQueryParserBase.getFieldQuery calls 
FieldType.getFieldQuery, which in turn calls readableToIndexed to get a 
type-specific BytesRef to pass to Term constructor. I would like 
SolrQueryParserBase.getPrefixQuery to do the same thing so that I can implement 
my own indexed binary field and allow my field to process my base64 encoded 
string query.)

 For SolrQueryParserBase.getPrefixQuery(String field, String termStr), use 
 BytesRef from calling FieldType.readableToIndexed for Term
 

 Key: LUCENE-5082
 URL: https://issues.apache.org/jira/browse/LUCENE-5082
 Project: Lucene - Core
  Issue Type: Wish
  Components: core/queryparser
Affects Versions: 4.3
Reporter: Jessica Cheng
Priority: Minor
  Labels: parser, prefix

 Currently, SolrQueryParserBase.getFieldQuery calls FieldType.getFieldQuery, 
 which in turn calls readableToIndexed to get a type-specific BytesRef to pass 
 to Term constructor. I would like SolrQueryParserBase.getPrefixQuery to do 
 the same thing so that I can implement my own indexed binary field and allow 
 my field to process my base64 encoded string query.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4977) info stream in solrconfig should have option for writing to the solr log

2013-06-28 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13695956#comment-13695956
 ] 

Mark Miller commented on SOLR-4977:
---

+1, this would be nice.

 info stream in solrconfig should have option for writing to the solr log
 

 Key: SOLR-4977
 URL: https://issues.apache.org/jira/browse/SOLR-4977
 Project: Solr
  Issue Type: Improvement
Reporter: Ryan Ernst

 Having a separate file is annoying, plus the print stream option doesn't 
 rollover on size or date, doesn't have custom formatting options, etc.  
 Exactly what the logging lib is meant to handle.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5080) CMS setters cannot work unless you setMaxMergeCount before you setMaxThreadCount

2013-06-28 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-5080:


Attachment: LUCENE-5080.patch

Here's my patch. To address Shai's concern, I just combined the two setters 
into one and didn't add any ctors (that would sort of be redundant with the 
combined setter, and I don't like the name "reset").

Additionally, I made the defaults public constants.


 CMS setters cannot work unless you setMaxMergeCount before you 
 setMaxThreadCount
 

 Key: LUCENE-5080
 URL: https://issues.apache.org/jira/browse/LUCENE-5080
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5080.patch


 {code}
   public void setMaxThreadCount(int count) {
 ...
 if (count > maxMergeCount) {
   throw new IllegalArgumentException("count should be <= maxMergeCount (= "
  + maxMergeCount + ")");
 }
 {code}
 but:
 {code}
 public void setMaxMergeCount(int count) {
 ...
 if (count < maxThreadCount) {
   throw new IllegalArgumentException("count should be >= maxThreadCount "
 + "(= " + maxThreadCount + ")");
 }
 {code}
 So you must call them in a magical order. I think we should nuke these 
 setters and just have a CMS(int,int)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #894: POMs out of sync

2013-06-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/894/

2 tests failed.
FAILED:  
org.apache.solr.cloud.BasicDistributedZkTest.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=5465, 
name=recoveryCmdExecutor-2899-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391) at 
java.net.Socket.connect(Socket.java:579) at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
 at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
 at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
 at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
 at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
 at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:297) 
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:722)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.BasicDistributedZkTest: 
   1) Thread[id=5465, name=recoveryCmdExecutor-2899-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
at java.net.Socket.connect(Socket.java:579)
at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:297)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
at __randomizedtesting.SeedInfo.seed([19353F683D6F4113]:0)


FAILED:  
org.apache.solr.cloud.BasicDistributedZkTest.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=5465, name=recoveryCmdExecutor-2899-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at 

[jira] [Updated] (SOLR-4976) info stream doesn't work with merged segment warmer

2013-06-28 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst updated SOLR-4976:
-

Attachment: SOLR-4976.patch

New patch removing the null check on close(), since IndexWriterConfig does not 
allow null for InfoStream.

 info stream doesn't work with merged segment warmer
 ---

 Key: SOLR-4976
 URL: https://issues.apache.org/jira/browse/SOLR-4976
 Project: Solr
  Issue Type: Bug
Reporter: Ryan Ernst
 Attachments: SOLR-4976.patch, SOLR-4976.patch


 In SolrIndexConfig, constructing the merged segment warmer takes an 
 InfoStream, but InfoStream.NO_OUTPUT is hardcoded.  Instead, the info stream 
 should be constructed in SolrIndexConfig, instead of SolrIndexWriter where it 
 is now, so that it can be used for the warmer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5080) CMS setters cannot work unless you setMaxMergeCount before you setMaxThreadCount

2013-06-28 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-5080:


Attachment: LUCENE-5080.patch

Updated patch: some Solr tests depended on the previous setters via 
reflection.

 CMS setters cannot work unless you setMaxMergeCount before you 
 setMaxThreadCount
 

 Key: LUCENE-5080
 URL: https://issues.apache.org/jira/browse/LUCENE-5080
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5080.patch, LUCENE-5080.patch


 {code}
   public void setMaxThreadCount(int count) {
 ...
 if (count > maxMergeCount) {
   throw new IllegalArgumentException("count should be <= maxMergeCount (= "
  + maxMergeCount + ")");
 }
 {code}
 but:
 {code}
 public void setMaxMergeCount(int count) {
 ...
 if (count < maxThreadCount) {
   throw new IllegalArgumentException("count should be >= maxThreadCount "
 + "(= " + maxThreadCount + ")");
 }
 {code}
 So you must call them in a magical order. I think we should nuke these 
 setters and just have a CMS(int,int)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5080) CMS setters cannot work unless you setMaxMergeCount before you setMaxThreadCount

2013-06-28 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-5080.
-

   Resolution: Fixed
Fix Version/s: 4.4
   5.0

I committed the new setter. If someone is unhappy about the name, I won't be 
offended; we can change it.

At least jenkins will have a nice weekend in the meantime.
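
For reference, a small sketch of the ordering pitfall and of the combined 
setter; the method name setMaxMergesAndThreads is assumed here, since the 
committed patch isn't inlined in this thread.

{code}
// Sketch of the ordering pitfall and of the combined setter; the method name
// setMaxMergesAndThreads is an assumption, since the patch isn't inlined above.
import org.apache.lucene.index.ConcurrentMergeScheduler;

public class CmsSetterExample {
  public static void main(String[] args) {
    ConcurrentMergeScheduler cms = new ConcurrentMergeScheduler();

    // Old API: raising the merge cap before the thread cap works...
    //   cms.setMaxMergeCount(6);
    //   cms.setMaxThreadCount(4);
    // ...but the reverse order can throw IllegalArgumentException when the new
    // thread count exceeds the still-default merge count, which is the bug above.

    // Combined setter: both values are validated together, so no magic order.
    cms.setMaxMergesAndThreads(6, 4); // maxMergeCount = 6, maxThreadCount = 4
  }
}
{code}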

 CMS setters cannot work unless you setMaxMergeCount before you 
 setMaxThreadCount
 

 Key: LUCENE-5080
 URL: https://issues.apache.org/jira/browse/LUCENE-5080
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Fix For: 5.0, 4.4

 Attachments: LUCENE-5080.patch, LUCENE-5080.patch


 {code}
   public void setMaxThreadCount(int count) {
 ...
 if (count > maxMergeCount) {
   throw new IllegalArgumentException("count should be <= maxMergeCount (= "
  + maxMergeCount + ")");
 }
 {code}
 but:
 {code}
 public void setMaxMergeCount(int count) {
 ...
 if (count < maxThreadCount) {
   throw new IllegalArgumentException("count should be >= maxThreadCount "
 + "(= " + maxThreadCount + ")");
 }
 {code}
 So you must call them in a magical order. I think we should nuke these 
 setters and just have a CMS(int,int)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Linux (32bit/ibm-j9-jdk6) - Build # 6285 - Failure!

2013-06-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6285/
Java: 32bit/ibm-j9-jdk6 
-Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;}

1 tests failed.
REGRESSION:  org.apache.lucene.index.TestNRTThreads.testNRTThreads

Error Message:
count should be >= maxThreadCount (= 4)

Stack Trace:
java.lang.IllegalArgumentException: count should be >= maxThreadCount (= 4)
at 
__randomizedtesting.SeedInfo.seed([C07D06DDD813EF5A:5BA412C699E8F931]:0)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.setMaxMergeCount(ConcurrentMergeScheduler.java:114)
at org.apache.lucene.util._TestUtil.reduceOpenFiles(_TestUtil.java:764)
at 
org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase.runTest(ThreadedIndexingAndSearchingTestCase.java:482)
at 
org.apache.lucene.index.TestNRTThreads.testNRTThreads(TestNRTThreads.java:128)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
at java.lang.reflect.Method.invoke(Method.java:611)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:738)




Build Log:
[...truncated 938 lines...]
[junit4:junit4] Suite: org.apache.lucene.index.TestNRTThreads
[junit4:junit4]   2 NOTE: reproduce with: ant test  -Dtestcase=TestNRTThreads 
-Dtests.method=testNRTThreads 

[jira] [Commented] (SOLR-4977) info stream in solrconfig should have option for writing to the solr log

2013-06-28 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13696007#comment-13696007
 ] 

Ryan Ernst commented on SOLR-4977:
--

This patch adds a LoggingInfoStream in Solr which just passes the message to 
slf4j.  It is at INFO level for now; adding support for other log levels can 
happen later in a separate JIRA.
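
The attached patch isn't inlined in this thread; a minimal sketch of the idea, 
with the class name and placement assumed, looks like this:

{code}
// Minimal sketch of the idea, not the attached patch; class name and package
// placement are assumptions.  Forwards InfoStream messages to slf4j at INFO
// level and skips the "TP" (test point) component, which floods the output
// when assertions are enabled.
import org.apache.lucene.util.InfoStream;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingInfoStream extends InfoStream {
  private static final Logger log = LoggerFactory.getLogger(LoggingInfoStream.class);

  @Override
  public void message(String component, String message) {
    log.info("[" + component + "][" + Thread.currentThread().getName() + "]: " + message);
  }

  @Override
  public boolean isEnabled(String component) {
    return !"TP".equals(component) && log.isInfoEnabled();
  }

  @Override
  public void close() {}
}
{code}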

 info stream in solrconfig should have option for writing to the solr log
 

 Key: SOLR-4977
 URL: https://issues.apache.org/jira/browse/SOLR-4977
 Project: Solr
  Issue Type: Improvement
Reporter: Ryan Ernst
 Attachments: SOLR-4977.patch


 Having a separate file is annoying, plus the print stream option doesn't 
 rollover on size or date, doesn't have custom formatting options, etc.  
 Exactly what the logging lib is meant to handle.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4977) info stream in solrconfig should have option for writing to the solr log

2013-06-28 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst updated SOLR-4977:
-

Attachment: SOLR-4977.patch

 info stream in solrconfig should have option for writing to the solr log
 

 Key: SOLR-4977
 URL: https://issues.apache.org/jira/browse/SOLR-4977
 Project: Solr
  Issue Type: Improvement
Reporter: Ryan Ernst
 Attachments: SOLR-4977.patch


 Having a separate file is annoying, plus the print stream option doesn't 
 rollover on size or date, doesn't have custom formatting options, etc.  
 Exactly what the logging lib is meant to handle.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4978) Time is stripped from datetime column when imported into Solr date field

2013-06-28 Thread Bill Au (JIRA)
Bill Au created SOLR-4978:
-

 Summary: Time is stripped from datetime column when imported into 
Solr date field
 Key: SOLR-4978
 URL: https://issues.apache.org/jira/browse/SOLR-4978
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Reporter: Bill Au


I discovered that all dates I imported into a Solr date field from a MySQL 
datetime column have the time stripped (ie time portion is always 00:00:00).

After double checking my DIH config and trying different things, I decided to 
take a look at the DIH code.

When I looked at the source code of the DIH JdbcDataSource class, I discovered 
that it is using java.sql.ResultSet and its getDate() method to handle date 
fields. The getDate() method returns java.sql.Date. The Java API doc for 
java.sql.Date

http://docs.oracle.com/javase/6/docs/api/java/sql/Date.html

states that:

"To conform with the definition of SQL DATE, the millisecond values wrapped by 
a java.sql.Date instance must be 'normalized' by setting the hours, minutes, 
seconds, and milliseconds to zero in the particular time zone with which the 
instance is associated."

I am so surprised by my finding that I think I may not be right.  What am I 
doing wrong here?  This is such a big hole in DIH; how could it be possible 
that no one has noticed this until now?

Has anyone successfully imported a datetime column into a Solr date field using 
DIH?
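
For reference, a minimal sketch of the JDBC behavior being described, and of 
reading the column with getTimestamp() instead, which preserves the time 
portion; the connection string, table and column names are made up.

{code}
// Sketch of the behavior described above; connection, table and column names are
// illustrative.  ResultSet.getDate() yields java.sql.Date, which normalizes the
// time-of-day to 00:00:00, while getTimestamp() preserves it.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DatetimeCheck {
  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection("jdbc:mysql://localhost/test", "user", "pass");
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery("SELECT modified_at FROM docs LIMIT 1")) {
      if (rs.next()) {
        System.out.println(rs.getDate("modified_at"));      // e.g. 2013-06-28 (time dropped)
        System.out.println(rs.getTimestamp("modified_at")); // e.g. 2013-06-28 16:41:07.0
      }
    }
  }
}
{code}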

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4977) info stream in solrconfig should have option for writing to the solr log

2013-06-28 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13696014#comment-13696014
 ] 

Robert Muir commented on SOLR-4977:
---

Looks pretty good; two comments:
* This might actually be useful for developers to use in tests: the problem is 
that during testing (when assertions are enabled), it can get flooded with 
test points. So what I did in Lucene was to exclude component=TP (meaning test 
point). I think it would be good to do that here too.
* Do you think we should deprecate the /infostream/file method? We could 
issue a warning if someone uses it, because really they could configure this 
stream to go to its own file (without rolling or whatever) via their logging 
configuration instead.


 info stream in solrconfig should have option for writing to the solr log
 

 Key: SOLR-4977
 URL: https://issues.apache.org/jira/browse/SOLR-4977
 Project: Solr
  Issue Type: Improvement
Reporter: Ryan Ernst
 Attachments: SOLR-4977.patch


 Having a separate file is annoying, plus the print stream option doesn't 
 rollover on size or date, doesn't have custom formatting options, etc.  
 Exactly what the logging lib is meant to handle.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4894) Add a new update processor factory that will dynamically add fields to the schema if an input document contains unknown fields

2013-06-28 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-4894:
-

Attachment: SOLR-4894.patch

Patch, I think it's ready to go.

 Add a new update processor factory that will dynamically add fields to the 
 schema if an input document contains unknown fields
 --

 Key: SOLR-4894
 URL: https://issues.apache.org/jira/browse/SOLR-4894
 Project: Solr
  Issue Type: New Feature
  Components: update
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Minor
 Attachments: SOLR-4894.patch


 Previous {{ParseFooUpdateProcessorFactory}}-s (see SOLR-4892) in the same 
 chain will detect, parse and convert unknown fields’ {{String}}-typed values 
 to the appropriate Java object type.
 This factory will take as configuration a set of mappings from Java object 
 type to schema field type.
 {{ManagedIndexSchema.addFields()}} adds new fields to the schema.
 If schema addition fails for any field, addition is re-attempted only for 
 those that don’t match any schema field.  This process is repeated, either 
 until all new fields are successfully added, or until there are no new fields 
 (because the fields that were new when this update chain started its work 
 were subsequently added by a different update request, possibly on a 
 different node).
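
As a rough illustration of the retry loop described above (this is only a 
sketch: the Schema interface and its methods here are placeholders, not the 
real ManagedIndexSchema API):

{noformat}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hedged sketch of the retry loop; "Schema" and its methods are stand-ins.
class AddUnknownFieldsSketch {

  interface Schema {
    Schema addFields(List<String> fieldNames) throws Exception; // fails if any field already exists
    Schema latest();                                            // re-reads the current schema
    boolean hasField(String fieldName);
  }

  static Schema addUnknownFields(Schema schema, List<String> unknownFields) throws Exception {
    List<String> remaining = new ArrayList<String>(unknownFields);
    while (!remaining.isEmpty()) {
      try {
        return schema.addFields(remaining);       // all still-unknown fields added in one shot
      } catch (Exception concurrentUpdate) {
        // Another update request (possibly on a different node) may have added some of
        // these fields in the meantime: re-read the schema and drop the ones it now has.
        schema = schema.latest();
        for (Iterator<String> it = remaining.iterator(); it.hasNext(); ) {
          if (schema.hasField(it.next())) {
            it.remove();
          }
        }
      }
    }
    return schema; // every originally-unknown field was added concurrently elsewhere
  }
}
{noformat}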

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-4976) info stream doesn't work with merged segment warmer

2013-06-28 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved SOLR-4976.
---

   Resolution: Fixed
Fix Version/s: 4.4
   5.0

Thanks Ryan!

 info stream doesn't work with merged segment warmer
 ---

 Key: SOLR-4976
 URL: https://issues.apache.org/jira/browse/SOLR-4976
 Project: Solr
  Issue Type: Bug
Reporter: Ryan Ernst
 Fix For: 5.0, 4.4

 Attachments: SOLR-4976.patch, SOLR-4976.patch


 In SolrIndexConfig, constructing the merged segment warmer takes an 
 InfoStream, but InfoStream.NO_OUTPUT is hardcoded.  The info stream should 
 instead be constructed in SolrIndexConfig, rather than in SolrIndexWriter 
 where it is now, so that it can also be used for the warmer.
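
A hedged sketch of the shape of the fix (not the actual patch; 
SimpleMergedSegmentWarmer is used purely for illustration):

{noformat}
// Hedged sketch: build the InfoStream where the IndexWriterConfig is assembled,
// so the merged segment warmer can share it instead of InfoStream.NO_OUTPUT.
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.SimpleMergedSegmentWarmer;
import org.apache.lucene.util.InfoStream;
import org.apache.lucene.util.Version;

class SolrIndexConfigSketch {
  static IndexWriterConfig buildConfig(Version matchVersion, Analyzer analyzer,
                                       InfoStream infoStream) {
    IndexWriterConfig iwc = new IndexWriterConfig(matchVersion, analyzer);
    iwc.setInfoStream(infoStream);                                         // used by the IndexWriter
    iwc.setMergedSegmentWarmer(new SimpleMergedSegmentWarmer(infoStream)); // and by segment warming
    return iwc;
  }
}
{noformat}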

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4977) info stream in solrconfig should have option for writing to the solr log

2013-06-28 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst updated SOLR-4977:
-

Attachment: SOLR-4977.patch

New patch addressing Robert's comments.

 info stream in solrconfig should have option for writing to the solr log
 

 Key: SOLR-4977
 URL: https://issues.apache.org/jira/browse/SOLR-4977
 Project: Solr
  Issue Type: Improvement
Reporter: Ryan Ernst
 Attachments: SOLR-4977.patch, SOLR-4977.patch


 Having a separate file is annoying, plus the print stream option doesn't 
 roll over on size or date, doesn't have custom formatting options, etc.  
 That is exactly what the logging library is meant to handle.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4977) info stream in solrconfig should have option for writing to the solr log

2013-06-28 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13696029#comment-13696029
 ] 

Robert Muir commented on SOLR-4977:
---

My only other comment is that I think it should go to slf4j with 
LoggerFactory.getLogger instead of log4j? 
(Yes, it's confusing that log4j is in the classpath...)
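
For illustration, the slf4j version of the logger acquisition would look 
roughly like this. The class name matches the sample output quoted later in 
this thread, but the body here is only a sketch, not the attached patch.

{noformat}
// Hedged sketch: obtain the logger through the slf4j facade rather than
// binding to log4j directly; the backend is whatever binding is on the classpath.
import org.apache.lucene.util.InfoStream;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingInfoStream extends InfoStream {
  private static final Logger log = LoggerFactory.getLogger(LoggingInfoStream.class);

  @Override
  public void message(String component, String message) {
    log.info("[{}] {}", component, message);
  }

  @Override
  public boolean isEnabled(String component) {
    return log.isInfoEnabled();
  }

  @Override
  public void close() {}
}
{noformat}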

 info stream in solrconfig should have option for writing to the solr log
 

 Key: SOLR-4977
 URL: https://issues.apache.org/jira/browse/SOLR-4977
 Project: Solr
  Issue Type: Improvement
Reporter: Ryan Ernst
 Attachments: SOLR-4977.patch, SOLR-4977.patch


 Having a separate file is annoying, plus the print stream option doesn't 
 roll over on size or date, doesn't have custom formatting options, etc.  
 That is exactly what the logging library is meant to handle.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4977) info stream in solrconfig should have option for writing to the solr log

2013-06-28 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13696030#comment-13696030
 ] 

Robert Muir commented on SOLR-4977:
---

And we probably want to fix the verbiage/examples under 
example/**/solrconfig.xml to make it obvious that it goes to logging, and 
remove the deprecated file example...

 info stream in solrconfig should have option for writing to the solr log
 

 Key: SOLR-4977
 URL: https://issues.apache.org/jira/browse/SOLR-4977
 Project: Solr
  Issue Type: Improvement
Reporter: Ryan Ernst
 Attachments: SOLR-4977.patch, SOLR-4977.patch


 Having a separate file is annoying, plus the print stream option doesn't 
 roll over on size or date, doesn't have custom formatting options, etc.  
 That is exactly what the logging library is meant to handle.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4977) info stream in solrconfig should have option for writing to the solr log

2013-06-28 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst updated SOLR-4977:
-

Attachment: SOLR-4977.patch

{quote}
my only other comment is i think it should go to slf4j with 
LoggerFactory.getLogger instead of log4j? 
{quote}

Doh! I blame autocomplete.  Fixed.

{quote}
And we probably want to fix the verbage/examples under example/**/solrconfig.xml
{quote}

Good point.  Done.

New patch addresses both of these.

Also, here is an example of the output (using junit's log format):
{noformat}
14690 T10 C0 oasu.LoggingInfoStream.message [IW][SUITE-TestIndexingPerformance-seed#[F536639DD826197C]-worker]: now flush at close waitForMerges=true
14711 T10 C0 oasu.LoggingInfoStream.message [IW][SUITE-TestIndexingPerformance-seed#[F536639DD826197C]-worker]:   start flush: applyAllDeletes=true
14712 T10 C0 oasu.LoggingInfoStream.message [IW][SUITE-TestIndexingPerformance-seed#[F536639DD826197C]-worker]:   index before flush
14712 T10 C0 oasu.LoggingInfoStream.message [DW][SUITE-TestIndexingPerformance-seed#[F536639DD826197C]-worker]: SUITE-TestIndexingPerformance-seed#[F536639DD826197C]-worker startFullFlush
14712 T10 C0 oasu.LoggingInfoStream.message [DW][SUITE-TestIndexingPerformance-seed#[F536639DD826197C]-worker]: anyChanges? numDocsInRam=0 deletes=false hasTickets:false pendingChangesInFullFlush: false
{noformat}

 info stream in solrconfig should have option for writing to the solr log
 

 Key: SOLR-4977
 URL: https://issues.apache.org/jira/browse/SOLR-4977
 Project: Solr
  Issue Type: Improvement
Reporter: Ryan Ernst
 Attachments: SOLR-4977.patch, SOLR-4977.patch, SOLR-4977.patch


 Having a separate file is annoying, plus the print stream option doesn't 
 roll over on size or date, doesn't have custom formatting options, etc.  
 That is exactly what the logging library is meant to handle.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



change verbose [junit4:junit4] to just [junit4] and save horizontal space?

2013-06-28 Thread Robert Muir
I feel like this is a no-brainer, but there might be some regexps for
Jenkins that could break, or something else I'm not aware of:

Index: lucene/common-build.xml
===================================================================
--- lucene/common-build.xml (revision 1497957)
+++ lucene/common-build.xml (working copy)
@@ -869,6 +869,7 @@
       <mkdir dir="${tests.cachedir}/${name}" />
 
       <junit4:junit4
+            taskName="junit4"
             dir="@{workDir}"
             tempdir="@{workDir}/temp"
             maxmemory="${tests.heapsize}"


[jira] [Updated] (SOLR-4977) info stream in solrconfig should have option for writing to the solr log

2013-06-28 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst updated SOLR-4977:
-

Attachment: SOLR-4977.patch

Attempting to upload the patch again to get the newer version...

 info stream in solrconfig should have option for writing to the solr log
 

 Key: SOLR-4977
 URL: https://issues.apache.org/jira/browse/SOLR-4977
 Project: Solr
  Issue Type: Improvement
Reporter: Ryan Ernst
 Attachments: SOLR-4977.patch, SOLR-4977.patch, SOLR-4977.patch, 
 SOLR-4977.patch


 Having a separate file is annoying, plus the print stream option doesn't 
 roll over on size or date, doesn't have custom formatting options, etc.  
 That is exactly what the logging library is meant to handle.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4977) info stream in solrconfig should have option for writing to the solr log

2013-06-28 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst updated SOLR-4977:
-

Attachment: SOLR-4977.patch
SOLR-4977.patch

And one more with the hdfs example config file fixed.

 info stream in solrconfig should have option for writing to the solr log
 

 Key: SOLR-4977
 URL: https://issues.apache.org/jira/browse/SOLR-4977
 Project: Solr
  Issue Type: Improvement
Reporter: Ryan Ernst
 Attachments: SOLR-4977.patch, SOLR-4977.patch, SOLR-4977.patch, 
 SOLR-4977.patch, SOLR-4977.patch, SOLR-4977.patch


 Having a separate file is annoying, plus the print stream option doesn't 
 roll over on size or date, doesn't have custom formatting options, etc.  
 That is exactly what the logging library is meant to handle.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: change verbose [junit4:junit4] to just [junit4] and save horizontal space?

2013-06-28 Thread Steve Rowe
+1 - I think the Jenkins regexps can handle either (and they can certainly be 
changed if not).

On Jun 28, 2013, at 11:55 PM, Robert Muir rcm...@gmail.com wrote:

 I feel like this is a no-brainer, but there might be some regexps for Jenkins 
 that could break, or something else I'm not aware of:
 
 Index: lucene/common-build.xml
 ===================================================================
 --- lucene/common-build.xml (revision 1497957)
 +++ lucene/common-build.xml (working copy)
 @@ -869,6 +869,7 @@
        <mkdir dir="${tests.cachedir}/${name}" />
  
        <junit4:junit4
 +            taskName="junit4"
              dir="@{workDir}"
              tempdir="@{workDir}/temp"
              maxmemory="${tests.heapsize}"
 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org