Jenkins build is back to normal : slow-io-beasting #4910

2012-10-26 Thread Charlie Cron
See 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Build failed in Jenkins: slow-io-beasting #4909

2012-10-26 Thread Charlie Cron
See 

Changes:

[yonik] SOLR-3998: Atomic update on uniqueKey field itself causes duplicate 
document

--
[...truncated 18123 lines...]
[junit4:junit4]   2> 77051 T460 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[361]} 0 1
[junit4:junit4]   2> 77054 T457 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[362]} 0 1
[junit4:junit4]   2> 77056 T459 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[363]} 0 0
[junit4:junit4]   2> 77058 T451 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[364]} 0 0
[junit4:junit4]   2> 77059 T458 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[365]} 0 0
[junit4:junit4]   2> 77060 T455 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[366]} 0 0
[junit4:junit4]   2> 77062 T453 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[367]} 0 0
[junit4:junit4]   2> 77094 T460 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[368]} 0 0
[junit4:junit4]   2> 77110 T457 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[369]} 0 0
[junit4:junit4]   2> 77125 T459 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[370]} 0 0
[junit4:junit4]   2> 77141 T447 oash.SnapPuller.fetchLatestIndex SEVERE Master 
at: http://127.0.0.1:54179/solr is not available. Index fetch failed. 
Exception: org.apache.solr.client.solrj.SolrServerException: Server refused 
connection at: http://127.0.0.1:54179/solr
[junit4:junit4]   2> 77141 T451 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[371]} 0 0
[junit4:junit4]   2> 77156 T458 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[372]} 0 0
[junit4:junit4]   2> 77172 T455 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[373]} 0 0
[junit4:junit4]   2> 77188 T453 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[374]} 0 0
[junit4:junit4]   2> 77203 T460 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[375]} 0 0
[junit4:junit4]   2> 77219 T457 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[376]} 0 0
[junit4:junit4]   2> 77234 T459 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[377]} 0 0
[junit4:junit4]   2> 77250 T451 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[378]} 0 0
[junit4:junit4]   2> 77266 T458 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[379]} 0 0
[junit4:junit4]   2> 77281 T455 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[380]} 0 0
[junit4:junit4]   2> 77297 T453 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[381]} 0 0
[junit4:junit4]   2> 77312 T460 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[382]} 0 0
[junit4:junit4]   2> 77328 T459 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[383]} 0 0
[junit4:junit4]   2> 77344 T451 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[384]} 0 0
[junit4:junit4]   2> 77359 T458 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[385]} 0 0
[junit4:junit4]   2> 77375 T455 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[386]} 0 0
[junit4:junit4]   2> 77390 T453 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[387]} 0 0
[junit4:junit4]   2> 77406 T460 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[388]} 0 0
[junit4:junit4]   2> 77422 T457 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[389]} 0 0
[junit4:junit4]   2> 77437 T459 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[390]} 0 0
[junit4:junit4]   2> 77453 T451 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[391]} 0 0
[junit4:junit4]   2> 77468 T458 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[392]} 0 0
[junit4:junit4]   2> 77500 T455 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[393]} 0 0
[junit4:junit4]   2> 77515 T453 C35 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[394]} 0 0
[junit4:junit4]   2> 77531 T460 C35 UPDATE [collection1] w

Build failed in Jenkins: slow-io-beasting #4908

2012-10-26 Thread Charlie Cron
See 

--
[...truncated 15281 lines...]
[junit4:junit4]   2>at 
org.apache.solr.cloud.ZkController.joinElection(ZkController.java:733)
[junit4:junit4]   2>at 
org.apache.solr.cloud.ZkController.register(ZkController.java:566)
[junit4:junit4]   2>... 46 more
[junit4:junit4]   2>
[junit4:junit4]   2> 7221357 T72 oaz.ClientCnxn$EventThread.run EventThread 
shut down
[junit4:junit4]   2> 7221357 T22 oaz.ZooKeeper.<init> Initiating client 
connection, connectString=127.0.0.1:50639/solr sessionTimeout=1 
watcher=org.apache.solr.common.cloud.ConnectionManager@18c14b3
[junit4:junit4]   2> 7221357 T22 oasc.CoreContainer.shutdown Shutting down 
CoreContainer instance=14952429
[junit4:junit4]   2> 7221357 T22 oasc.SolrCore.close [collection1]  CLOSING 
SolrCore org.apache.solr.core.SolrCore@c20893
[junit4:junit4]   2> 7221357 T22 oasu.DirectUpdateHandler2.close closing 
DirectUpdateHandler2{commits=0,autocommits=0,soft 
autocommits=0,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,adds=0,deletesById=0,deletesByQuery=0,errors=0,cumulative_adds=0,cumulative_deletesById=0,cumulative_deletesByQuery=0,cumulative_errors=0}
[junit4:junit4]   2> 7221357 T22 oasc.SolrCore.decrefSolrCoreState Closing 
SolrCoreState
[junit4:junit4]   2> 7221357 T22 oasu.DefaultSolrCoreState.closeIndexWriter 
SolrCoreState ref count has reached 0 - closing IndexWriter
[junit4:junit4]   2> 7221357 T22 oasu.DefaultSolrCoreState.closeIndexWriter 
closing IndexWriter with IndexWriterCloser
[junit4:junit4]   2> 7221357 T22 oasc.SolrCore.closeSearcher [collection1] 
Closing main searcher on request.
[junit4:junit4]   2> 7221357 T63 oazs.NIOServerCnxn.doIO WARNING Exception 
causing close of session 0x13aa06f29530004 due to 
java.nio.channels.ClosedByInterruptException
[junit4:junit4]   2> 7221357 T22 oaz.ZooKeeper.close Session: 0x13aa06f29530004 
closed
[junit4:junit4]   2> 7221357 T63 oazs.NIOServerCnxn.closeSock Closed socket 
connection for client /127.0.0.1:50708 which had sessionid 0x13aa06f29530004
[junit4:junit4]   2> 7221357 T22 oasc.CoreContainer.shutdown Shutting down 
CoreContainer instance=470748
[junit4:junit4]   2> 7221357 T22 oasc.SolrCore.close [collection1]  CLOSING 
SolrCore org.apache.solr.core.SolrCore@1dc4cd9
[junit4:junit4]   2> 7221358 T22 oasu.DirectUpdateHandler2.close closing 
DirectUpdateHandler2{commits=0,autocommits=0,soft 
autocommits=0,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,adds=0,deletesById=0,deletesByQuery=0,errors=0,cumulative_adds=0,cumulative_deletesById=0,cumulative_deletesByQuery=0,cumulative_errors=0}
[junit4:junit4]   2> 7221358 T22 oasc.SolrCore.decrefSolrCoreState Closing 
SolrCoreState
[junit4:junit4]   2> 7221358 T22 oasu.DefaultSolrCoreState.closeIndexWriter 
SolrCoreState ref count has reached 0 - closing IndexWriter
[junit4:junit4]   2> 7221358 T22 oasu.DefaultSolrCoreState.closeIndexWriter 
closing IndexWriter with IndexWriterCloser
[junit4:junit4]   2> 7221358 T22 oasc.SolrCore.closeSearcher [collection1] 
Closing main searcher on request.
[junit4:junit4]   2> 7221358 T63 oazs.NIOServerCnxn.doIO WARNING Exception 
causing close of session 0x13aa06f29530005 due to 
java.nio.channels.ClosedByInterruptException
[junit4:junit4]   2> 7221358 T22 oaz.ZooKeeper.close Session: 0x13aa06f29530005 
closed
[junit4:junit4]   2> 7221358 T63 oazs.NIOServerCnxn.closeSock Closed socket 
connection for client /127.0.0.1:50715 which had sessionid 0x13aa06f29530005
[junit4:junit4]   2> 7221358 T22 oasc.CoreContainer.shutdown Shutting down 
CoreContainer instance=11438836
[junit4:junit4]   2> 7221358 T22 oasc.SolrCore.close [collection1]  CLOSING 
SolrCore org.apache.solr.core.SolrCore@1be3bb2
[junit4:junit4]   2> 7221358 T22 oasu.DirectUpdateHandler2.close closing 
DirectUpdateHandler2{commits=0,autocommits=0,soft 
autocommits=0,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,adds=0,deletesById=0,deletesByQuery=0,errors=0,cumulative_adds=0,cumulative_deletesById=0,cumulative_deletesByQuery=0,cumulative_errors=0}
[junit4:junit4]   2> 7221358 T22 oasc.SolrCore.decrefSolrCoreState Closing 
SolrCoreState
[junit4:junit4]   2> 7221358 T22 oasu.DefaultSolrCoreState.closeIndexWriter 
SolrCoreState ref count has reached 0 - closing IndexWriter
[junit4:junit4]   2> 7221358 T22 oasu.DefaultSolrCoreState.closeIndexWriter 
closing IndexWriter with IndexWriterCloser
[junit4:junit4]   2> 7221358 T22 oasc.SolrCore.closeSearcher [collection1] 
Closing main searcher on request.
[junit4:junit4]   2> 7221938 T76 oaz.ClientCnxn$SendThread.run WARNING Session 
0x13aa06f29530003 for server 127.0.0.1/127.0.0.1:50639, unexpected error, 
closing socket connection and attempting reconnect 
java.nio.channels.ClosedByInterruptException
[junit4:junit4]   2>at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
[junit4:junit4]   2>   

[JENKINS] Lucene-Solr-SmokeRelease-4.x - Build # 19 - Failure

2012-10-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-4.x/19/

No tests ran.

Build Log:
[...truncated 30506 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/lucene/build/fakeRelease
 [copy] Copying 381 files to 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/lucene/build/fakeRelease/lucene
 [copy] Copying 4 files to 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/lucene/build/fakeRelease/lucene/changes
  [get] Getting: http://people.apache.org/keys/group/lucene.asc
  [get] To: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/lucene/build/fakeRelease/lucene/KEYS
 [copy] Copying 189 files to 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/lucene/build/fakeRelease/solr
 [copy] Copying 1 file to 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/lucene/build/fakeRelease/solr
 [copy] Copying 4 files to 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/lucene/build/fakeRelease/solr/changes
 [exec] NOTE: output encoding is US-ASCII
 [exec] 
 [exec] Load release URL 
"file:/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/lucene/build/fakeRelease/"...
 [exec] 
 [exec] Test Lucene...
 [exec]   test basics...
 [exec]   get KEYS
 [exec] 0.1 MB
 [exec]   check changes HTML...
 [exec]   download lucene-4.1.0-src.tgz...
 [exec] 26.5 MB
 [exec] verify md5/sha1 digests
 [exec]   download lucene-4.1.0.tgz...
 [exec] 47.6 MB
 [exec] verify md5/sha1 digests
 [exec]   download lucene-4.1.0.zip...
 [exec] 57.0 MB
 [exec] verify md5/sha1 digests
 [exec]   unpack lucene-4.1.0.tgz...
 [exec] verify JAR/WAR metadata...
 [exec] test demo with 1.6...
 [exec]   got 5339 hits for query "lucene"
 [exec] test demo with 1.7...
 [exec]   got 5339 hits for query "lucene"
 [exec] check Lucene's javadoc JAR
 [exec]   unpack lucene-4.1.0.zip...
 [exec] verify JAR/WAR metadata...
 [exec] test demo with 1.6...
 [exec]   got 5339 hits for query "lucene"
 [exec] test demo with 1.7...
 [exec]   got 5339 hits for query "lucene"
 [exec] check Lucene's javadoc JAR
 [exec]   unpack lucene-4.1.0-src.tgz...
 [exec] make sure no JARs/WARs in src dist...
 [exec] run "ant validate"
 [exec] run tests w/ Java 6...
 [exec] test demo with 1.6...
 [exec]   got 215 hits for query "lucene"
 [exec] generate javadocs w/ Java 6...
 [exec] run tests w/ Java 7...
 [exec] test demo with 1.7...
 [exec]   got 215 hits for query "lucene"
 [exec] generate javadocs w/ Java 7...
 [exec] Traceback (most recent call last):
 [exec]   File 
"/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/dev-tools/scripts/smokeTestRelease.py",
 line 1342, in <module>
 [exec] main()
 [exec]   File 
"/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/dev-tools/scripts/smokeTestRelease.py",
 line 1288, in main
 [exec] smokeTest(baseURL, version, tmpDir, isSigned)
 [exec]   File 
"/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/dev-tools/scripts/smokeTestRelease.py",
 line 1324, in smokeTest
 [exec] unpackAndVerify('lucene', tmpDir, 'lucene-%s-src.tgz' % 
version, version)
 [exec]   File 
"/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/dev-tools/scripts/smokeTestRelease.py",
 line 563, in unpackAndVerify
 [exec] verifyUnpacked(project, artifact, unpackPath, version, tmpDir)
 [exec]   File 
"/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/dev-tools/scripts/smokeTestRelease.py",
 line 673, in verifyUnpacked
 [exec] checkJavadocpathFull('%s/build/docs' % unpackPath)
 [exec]   File 
"/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/dev-tools/scripts/smokeTestRelease.py",
 line 854, in checkJavadocpathFull
 [exec] if checkJavadocLinks.checkAll(path):
 [exec]   File 
"/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/dev-tools/scripts/checkJavadocLinks.py",
 line 160, in checkAll
 [exec] allFiles[fullPath] = parse(fullPath, open('%s/%s' % (root, f), 
encoding='UTF-8').read())
 [exec]   File 
"/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/dev-tools/scripts/checkJavadocLinks.py",
 line 110, in parse
 [exec] parser.feed(html)
 [exec]   File "/usr/local/lib/python3.2/html/parser.py", line 142, in feed
 [exec] self.goahead(0)
 [exec]   File "/usr/local/lib/python3.2/html/parser.py", line 188, in 
goahead
 [exec] k = self.parse_endtag(i)
 [exec]   File "/usr/local/lib/python3.2/html/parser.

[jira] [Resolved] (SOLR-3998) Atomic update on uniqueKey field itself causes duplicate document

2012-10-26 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-3998.


   Resolution: Fixed
Fix Version/s: 4.1

Fix committed to trunk & 4x.

> Atomic update on uniqueKey field itself causes duplicate document
> -
>
> Key: SOLR-3998
> URL: https://issues.apache.org/jira/browse/SOLR-3998
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.0
> Environment: Windows XP and RH Linux
>Reporter: Eric Spencer
>Assignee: Yonik Seeley
> Fix For: 4.1
>
> Attachments: solr_atomic_update_unique_key_bug_t.java
>
>
> Issuing an atomic update that includes the uniqueKey field itself will cause 
> Solr to insert a second document with the same uniqueKey, thereby violating 
> uniqueness. A non-atomic update will "correct" the document. Attached is a 
> JUnit test case that demonstrates the issue against the collection1 core in 
> the standard Solr download.
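
For illustration only (not part of the report), the failing case looks like the 
following JSON atomic update posted to /update, assuming a schema whose uniqueKey 
is "id" and a hypothetical stored field "price":

```json
[{"id": {"set": "doc1"}, "price": {"set": 42}}]
```

Because the uniqueKey itself uses the atomic {"set": ...} syntax here, affected 
4.0 servers added a second document with id "doc1" instead of updating the 
existing one; per the report, sending the key as a plain value ("id": "doc1") 
behaves correctly.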

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira




[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.6.0_35) - Build # 2022 - Still Failing!

2012-10-26 Thread Policeman Jenkins Server
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Linux/2022/
Java: 64bit/jdk1.6.0_35 -XX:+UseParallelGC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.BasicDistributedZk2Test.testDistribSearch

Error Message:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test-files/solr/collection1/conf/stopwords.txt.bak
 (No such file or directory)

Stack Trace:
java.io.FileNotFoundException: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test-files/solr/collection1/conf/stopwords.txt.bak
 (No such file or directory)
at 
__randomizedtesting.SeedInfo.seed([E6EA817E95985083:670C0F66E2C730BF]:0)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:120)
at org.apache.commons.io.FileUtils.doCopyFile(FileUtils.java:935)
at org.apache.commons.io.FileUtils.doCopyDirectory(FileUtils.java:1225)
at org.apache.commons.io.FileUtils.doCopyDirectory(FileUtils.java:1223)
at org.apache.commons.io.FileUtils.doCopyDirectory(FileUtils.java:1223)
at org.apache.commons.io.FileUtils.copyDirectory(FileUtils.java:1186)
at org.apache.commons.io.FileUtils.copyDirectory(FileUtils.java:1058)
at org.apache.commons.io.FileUtils.copyDirectory(FileUtils.java:1027)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:233)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:693)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
   

[jira] [Updated] (SOLR-3938) prepareCommit command omits commitData

2012-10-26 Thread Lance Norskog (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lance Norskog updated SOLR-3938:


Attachment: SOLR-3938-unit.patch

Adds a unit test to TestReplicationHandler. This requires SolrJ support for 
prepareCommit, so the patch includes that as well.

> prepareCommit command omits commitData
> --
>
> Key: SOLR-3938
> URL: https://issues.apache.org/jira/browse/SOLR-3938
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.0
>Reporter: Yonik Seeley
>  Labels: 4.0.1_Candidate
> Fix For: 4.1
>
> Attachments: SOLR-3938.patch, SOLR-3938-unit.patch
>
>
> Solr's prepareCommit doesn't set any commitData, and by the time the commit 
> itself is done, it's too late to add it.
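
The ordering problem can be sketched with a toy two-phase commit (hypothetical 
classes, not the Solr/Lucene API): any metadata that should survive must be 
captured when the commit point is prepared, because commit() only publishes the 
already-prepared state.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical two-phase commit log, for illustration only.
class TwoPhaseLog {
    private Map<String, String> preparedData;   // state frozen at prepareCommit
    private Map<String, String> committedData;  // state visible after commit

    void prepareCommit(Map<String, String> commitData) {
        // Everything that should survive the commit must be captured here.
        preparedData = new HashMap<>(commitData);
    }

    void commit() {
        // Too late to add metadata now: commit only publishes what was prepared.
        committedData = preparedData;
    }

    Map<String, String> getCommittedData() {
        return committedData;
    }
}

public class PrepareCommitSketch {
    public static void main(String[] args) {
        TwoPhaseLog log = new TwoPhaseLog();

        Map<String, String> data = new HashMap<>();
        data.put("replicableVersion", "42");   // hypothetical key
        log.prepareCommit(data);

        // Mutating 'data' after prepare has no effect on the commit point.
        data.put("addedTooLate", "ignored");
        log.commit();

        System.out.println(log.getCommittedData().containsKey("replicableVersion")); // true
        System.out.println(log.getCommittedData().containsKey("addedTooLate"));      // false
    }
}
```

This is why a prepareCommit that ignores commitData loses it for good: the 
subsequent commit has no prepared copy of the metadata to publish.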





Jenkins build is back to normal : slow-io-beasting #4903

2012-10-26 Thread Charlie Cron
See 





[jira] [Resolved] (SOLR-4000) autowarmCount missing from cache stats in gui

2012-10-26 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey resolved SOLR-4000.


   Resolution: Not A Problem
Fix Version/s: (was: 4.1)

Turns out the information IS there -- if you have defined it on that cache.  
When I gathered the information from my own system, I happened to pick the 
first core, which turns out to have autowarmCount="0" in it.  Just now I tried 
another core, and it had all the same information that 3.x does, including 
autowarmCount and regenerator.

> autowarmCount missing from cache stats in gui
> -
>
> Key: SOLR-4000
> URL: https://issues.apache.org/jira/browse/SOLR-4000
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Affects Versions: 4.0
>Reporter: Shawn Heisey
>Priority: Minor
>
> In 3.x, the admin gui page for cache statistics includes autowarmCount.  In 
> 4.0, this particular number is missing.  Unaware of this, I asked someone on 
> the user list for their cache statistics, thinking it would be included.  The 
> regenerator (available in 3.x) is also not included, but that's probably 
> mostly unnecessary.
> The information *is* in the SolrInfoMBeanHandler (/admin/mbeans?stats=true) 
> output.
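
For context (illustrative values, not from the report), autowarmCount is 
configured per cache in solrconfig.xml, so it only appears in stats for caches 
that define it:

```xml
<!-- solrconfig.xml, inside <query>; values are illustrative -->
<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="128"/>
<queryResultCache class="solr.LRUCache"
                  size="512"
                  initialSize="512"
                  autowarmCount="32"/>
```

As the resolution comment above notes, a core whose caches leave autowarmCount 
at "0" shows nothing useful for it in the GUI.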





Build failed in Jenkins: slow-io-beasting #4902

2012-10-26 Thread Charlie Cron
See 

--
[...truncated 13813 lines...]
[junit4:junit4]   2> 29576 T10 oasc.SolrCore.<init> [collection1] Opening new 
SolrCore at 
.\org.apache.solr.client.solrj.TestLBHttpSolrServer$SolrInstance-1351303506606\solr\collection12\collection1\,
 
dataDir=.\org.apache.solr.client.solrj.TestLBHttpSolrServer$SolrInstance-1351303506606\solr\collection12\collection1\data\
[junit4:junit4]   2> 29576 T10 oasc.SolrCore.<init> JMX monitoring not detected 
for core: collection1
[junit4:junit4]   2> 29576 T10 oasc.SolrCore.getNewIndexDir New index directory 
detected: old=null 
new=.\org.apache.solr.client.solrj.TestLBHttpSolrServer$SolrInstance-1351303506606\solr\collection12\collection1\data\index/
[junit4:junit4]   2> 29576 T10 oasc.SolrCore.initIndex WARNING [collection1] 
Solr index directory 
'.\org.apache.solr.client.solrj.TestLBHttpSolrServer$SolrInstance-1351303506606\solr\collection12\collection1\data\index'
 doesn't exist. Creating new index...
[junit4:junit4]   2> 29576 T10 oasc.CachingDirectoryFactory.get return new 
directory for 

 forceNew:false
[junit4:junit4]   2> 29607 T10 oasc.SolrDeletionPolicy.onCommit 
SolrDeletionPolicy.onCommit: commits:num=1
[junit4:junit4]   2>
commit{dir=MockDirWrapper(org.apache.lucene.store.NIOFSDirectory@
 
lockFactory=org.apache.lucene.store.NativeFSLockFactory@6dd108),segFN=segments_1,generation=1,filenames=[segments_1]
[junit4:junit4]   2> 29607 T10 oasc.SolrDeletionPolicy.updateCommits newest 
commit = 1
[junit4:junit4]   2> 29607 T10 oasc.RequestHandlers.initHandlersFromConfig 
created standard: solr.StandardRequestHandler
[junit4:junit4]   2> 29607 T10 oasc.RequestHandlers.initHandlersFromConfig 
created defaults: solr.StandardRequestHandler
[junit4:junit4]   2> 29607 T10 oasc.RequestHandlers.initHandlersFromConfig 
adding lazy requestHandler: solr.StandardRequestHandler
[junit4:junit4]   2> 29607 T10 oasc.RequestHandlers.initHandlersFromConfig 
created lazy: solr.StandardRequestHandler
[junit4:junit4]   2> 29607 T10 oasc.RequestHandlers.initHandlersFromConfig 
created /update: solr.UpdateRequestHandler
[junit4:junit4]   2> 29607 T10 oasc.RequestHandlers.initHandlersFromConfig 
created /replication: solr.ReplicationHandler
[junit4:junit4]   2> 29622 T10 oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
[junit4:junit4]   2> 29654 T10 oass.SolrIndexSearcher.<init> Opening 
Searcher@ec366a main
[junit4:junit4]   2> 29654 T10 oass.SolrIndexSearcher.getIndexDir WARNING 
WARNING: Directory impl does not support setting indexDir: 
org.apache.lucene.store.MockDirectoryWrapper
[junit4:junit4]   2> 29654 T10 oasu.CommitTracker.<init> Hard AutoCommit: 
disabled
[junit4:junit4]   2> 29654 T10 oasu.CommitTracker.<init> Soft AutoCommit: 
disabled
[junit4:junit4]   2> 29654 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting socketTimeout to: 0
[junit4:junit4]   2> 29654 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting urlScheme to: http://
[junit4:junit4]   2> 29654 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting connTimeout to: 0
[junit4:junit4]   2> 29654 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting maxConnectionsPerHost to: 20
[junit4:junit4]   2> 29654 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting corePoolSize to: 0
[junit4:junit4]   2> 29654 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting maximumPoolSize to: 2147483647
[junit4:junit4]   2> 29654 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting maxThreadIdleTime to: 5
[junit4:junit4]   2> 29654 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting sizeOfQueue to: -1
[junit4:junit4]   2> 29654 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting fairnessPolicy to: false
[junit4:junit4]   2> 29654 T10 oascsi.HttpClientUtil.createClient Creating new 
http client, 
config:maxConnectionsPerHost=20&maxConnections=1&socketTimeout=0&connTimeout=0&retry=false
[junit4:junit4]   2> 29685 T10 oash.ReplicationHandler.inform Commits will be 
reserved for  1
[junit4:junit4]   2> 29685 T10 oasc.CoreContainer.register registering core: 
collection1
[junit4:junit4]   2> 29685 T10 oass.SolrDispatchFilter.init 
user.dir=
[junit4:junit4]   2> 29685 T10 oass.SolrDispatchFilter.init 
SolrDispatchFilter.init() done
[junit4:junit4]   2> 29685 T113 oasc.SolrCore.registerSearcher [collection1] 
Registered new searcher Searcher@ec366a 
main{StandardDirectoryReader(segments_1:1)}
[junit4:juni

Jenkins build is back to normal : slow-io-beasting #4901

2012-10-26 Thread Charlie Cron
See 





[jira] [Updated] (SOLR-4000) autowarmCount missing from cache stats in gui

2012-10-26 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-4000:
---

Description: 
In 3.x, the admin gui page for cache statistics includes autowarmCount.  In 
4.0, this particular number is missing.  Unaware of this, I asked someone on 
the user list for their cache statistics, thinking it would be included.  The 
regenerator (available in 3.x) is also not included, but that's probably mostly 
unnecessary.

The information *is* in the SolrInfoMBeanHandler (/admin/mbeans?stats=true) 
output.


  was:
In 3.x, the admin gui page for statistics includes autowarmCount.  In 4.0, this 
particular number is missing.  Unaware of this, I asked someone on the user 
list for their cache statistics, thinking it would be included.  The 
regenerator (available in 3.x) is also not included, but that's probably mostly 
unnecessary.

The information *is* in the SolrInfoMBeanHandler (/admin/mbeans?stats=true) 
output.



> autowarmCount missing from cache stats in gui
> -
>
> Key: SOLR-4000
> URL: https://issues.apache.org/jira/browse/SOLR-4000
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Affects Versions: 4.0
>Reporter: Shawn Heisey
>Priority: Minor
> Fix For: 4.1
>
>
> In 3.x, the admin gui page for cache statistics includes autowarmCount.  In 
> 4.0, this particular number is missing.  Unaware of this, I asked someone on 
> the user list for their cache statistics, thinking it would be included.  The 
> regenerator (available in 3.x) is also not included, but that's probably 
> mostly unnecessary.
> The information *is* in the SolrInfoMBeanHandler (/admin/mbeans?stats=true) 
> output.





Build failed in Jenkins: slow-io-beasting #4900

2012-10-26 Thread Charlie Cron
See 

--
[...truncated 13811 lines...]
[junit4:junit4]   2> 30785 T10 oasc.RequestHandlers.initHandlersFromConfig 
created standard: solr.StandardRequestHandler
[junit4:junit4]   2> 30785 T10 oasc.RequestHandlers.initHandlersFromConfig 
created defaults: solr.StandardRequestHandler
[junit4:junit4]   2> 30785 T10 oasc.RequestHandlers.initHandlersFromConfig 
adding lazy requestHandler: solr.StandardRequestHandler
[junit4:junit4]   2> 30785 T10 oasc.RequestHandlers.initHandlersFromConfig 
created lazy: solr.StandardRequestHandler
[junit4:junit4]   2> 30785 T10 oasc.RequestHandlers.initHandlersFromConfig 
created /update: solr.UpdateRequestHandler
[junit4:junit4]   2> 30785 T10 oasc.RequestHandlers.initHandlersFromConfig 
created /replication: solr.ReplicationHandler
[junit4:junit4]   2> 30800 T10 oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
[junit4:junit4]   2> 30800 T10 oass.SolrIndexSearcher.<init> Opening 
Searcher@198a654 main
[junit4:junit4]   2> 30800 T10 oass.SolrIndexSearcher.getIndexDir WARNING 
WARNING: Directory impl does not support setting indexDir: 
org.apache.lucene.store.MockDirectoryWrapper
[junit4:junit4]   2> 30816 T10 oasu.CommitTracker.<init> Hard AutoCommit: 
disabled
[junit4:junit4]   2> 30816 T10 oasu.CommitTracker. Soft AutoCommit: 
disabled
[junit4:junit4]   2> 30816 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting socketTimeout to: 0
[junit4:junit4]   2> 30816 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting urlScheme to: http://
[junit4:junit4]   2> 30816 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting connTimeout to: 0
[junit4:junit4]   2> 30816 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting maxConnectionsPerHost to: 20
[junit4:junit4]   2> 30816 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting corePoolSize to: 0
[junit4:junit4]   2> 30816 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting maximumPoolSize to: 2147483647
[junit4:junit4]   2> 30816 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting maxThreadIdleTime to: 5
[junit4:junit4]   2> 30816 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting sizeOfQueue to: -1
[junit4:junit4]   2> 30816 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting fairnessPolicy to: false
[junit4:junit4]   2> 30816 T10 oascsi.HttpClientUtil.createClient Creating new 
http client, 
config:maxConnectionsPerHost=20&maxConnections=1&socketTimeout=0&connTimeout=0&retry=false
[junit4:junit4]   2> 30816 T10 oash.ReplicationHandler.inform Commits will be 
reserved for  1
[junit4:junit4]   2> 30816 T10 oasc.CoreContainer.register registering core: 
collection1
[junit4:junit4]   2> 30816 T113 oasc.SolrCore.registerSearcher [collection1] 
Registered new searcher Searcher@198a654 
main{StandardDirectoryReader(segments_1:1)}
[junit4:junit4]   2> 30816 T10 oass.SolrDispatchFilter.init 
user.dir=
[junit4:junit4]   2> 30816 T10 oass.SolrDispatchFilter.init 
SolrDispatchFilter.init() done
[junit4:junit4]   2> ASYNC  NEW_CORE C9 name=collection1 
org.apache.solr.core.SolrCore@baa573
[junit4:junit4]   2> 30832 T107 C9 oasc.SolrDeletionPolicy.onInit 
SolrDeletionPolicy.onInit: commits:num=1
[junit4:junit4]   2>
commit{dir=MockDirWrapper(org.apache.lucene.store.SimpleFSDirectory@
 
lockFactory=org.apache.lucene.store.NativeFSLockFactory@176484e),segFN=segments_1,generation=1,filenames=[segments_1]
[junit4:junit4]   2> 30832 T107 C9 oasc.SolrDeletionPolicy.updateCommits newest 
commit = 1
[junit4:junit4]   2> 30878 T107 C9 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]} 
0 46
[junit4:junit4]   2> 30894 T108 C9 oasu.DirectUpdateHandler2.commit start 
commit{flags=0,_version_=0,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false}
[junit4:junit4]   2> 30894 T108 C9 oasc.SolrDeletionPolicy.onCommit 
SolrDeletionPolicy.onCommit: commits:num=2
[junit4:junit4]   2>
commit{dir=MockDirWrapper(org.apache.lucene.store.SimpleFSDirectory@
 
lockFactory=org.apache.lucene.store.NativeFSLockFactory@176484e),segFN=segments_1,generation=1,filenames=[segments_1]
[junit4:junit4]   2>
commit{dir=MockDirWrapper(org.apache.lucene.store.SimpleFSDirectory@

[jira] [Created] (SOLR-4000) autowarmCount missing from cache stats in gui

2012-10-26 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-4000:
--

 Summary: autowarmCount missing from cache stats in gui
 Key: SOLR-4000
 URL: https://issues.apache.org/jira/browse/SOLR-4000
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.0
Reporter: Shawn Heisey
Priority: Minor
 Fix For: 4.1


In 3.x, the admin gui page for statistics includes autowarmCount.  In 4.0, this 
particular number is missing.  Unaware of this, I asked someone on the user 
list for their cache statistics, thinking it would be included.  The 
regenerator (available in 3.x) is also not included, but that's probably mostly 
unnecessary.

The information *is* in the SolrInfoMBeanHandler (/admin/mbeans?stats=true) 
output.
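For anyone hitting the same gap, the number can be pulled out of the mbeans output instead of the cache stats page. A minimal Python sketch of doing that; the JSON shape below is an illustrative assumption based on the /admin/mbeans?stats=true response, not a documented schema, and key names may differ between versions:

```python
# Hypothetical sketch: extract autowarmCount from a /admin/mbeans?stats=true
# style response. The "solr-mbeans" list is assumed to alternate category
# names and category dicts; that shape is an assumption, not a spec.

def autowarm_counts(mbeans_response):
    """Return {cache_name: autowarmCount} for every CACHE entry."""
    items = mbeans_response.get("solr-mbeans", [])
    # Pair up [category, payload, category, payload, ...]
    categories = dict(zip(items[0::2], items[1::2]))
    caches = categories.get("CACHE", {})
    return {
        name: bean["stats"]["autowarmCount"]
        for name, bean in caches.items()
        if "autowarmCount" in bean.get("stats", {})
    }

# Example response fragment (invented for illustration):
sample = {
    "solr-mbeans": [
        "CACHE",
        {
            "queryResultCache": {"stats": {"autowarmCount": 32, "hits": 100}},
            "documentCache": {"stats": {"hits": 50}},  # no autowarm configured
        },
    ]
}

print(autowarm_counts(sample))  # {'queryResultCache': 32}
```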





Build failed in Jenkins: slow-io-beasting #4899

2012-10-26 Thread Charlie Cron
See 

--
[...truncated 17997 lines...]
[junit4:junit4]   2> 90335 T336 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[345]} 0 0
[junit4:junit4]   2> 90337 T334 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[346]} 0 0
[junit4:junit4]   2> 90342 T330 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[347]} 0 0
[junit4:junit4]   2> 90345 T331 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[348]} 0 0
[junit4:junit4]   2> 90349 T328 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[349]} 0 0
[junit4:junit4]   2> 90351 T332 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[350]} 0 0
[junit4:junit4]   2> 90352 T327 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[351]} 0 0
[junit4:junit4]   2> 90353 T336 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[352]} 0 0
[junit4:junit4]   2> 90354 T334 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[353]} 0 0
[junit4:junit4]   2> 90357 T330 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[354]} 0 0
[junit4:junit4]   2> 90364 T331 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[355]} 0 0
[junit4:junit4]   2> 90379 T328 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[356]} 0 0
[junit4:junit4]   2> 90382 T332 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[357]} 0 0
[junit4:junit4]   2> 90383 T336 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[358]} 0 0
[junit4:junit4]   2> 90384 T334 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[359]} 0 0
[junit4:junit4]   2> 90387 T330 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[360]} 0 1
[junit4:junit4]   2> 90390 T331 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[361]} 0 1
[junit4:junit4]   2> 90391 T328 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[362]} 0 0
[junit4:junit4]   2> 90393 T327 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[363]} 0 0
[junit4:junit4]   2> 90396 T332 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[364]} 0 0
[junit4:junit4]   2> 90399 T336 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[365]} 0 0
[junit4:junit4]   2> 90401 T334 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[366]} 0 0
[junit4:junit4]   2> 90403 T330 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[367]} 0 0
[junit4:junit4]   2> 90404 T331 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[368]} 0 0
[junit4:junit4]   2> 90411 T328 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[369]} 0 0
[junit4:junit4]   2> 90417 T327 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[370]} 0 2
[junit4:junit4]   2> 90424 T332 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[371]} 0 0
[junit4:junit4]   2> 90426 T336 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[372]} 0 0
[junit4:junit4]   2> 90429 T334 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[373]} 0 0
[junit4:junit4]   2> 90431 T330 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[374]} 0 0
[junit4:junit4]   2> 90442 T331 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[375]} 0 0
[junit4:junit4]   2> 90449 T328 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[376]} 0 0
[junit4:junit4]   2> 90451 T327 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[377]} 0 0
[junit4:junit4]   2> 90452 T332 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[378]} 0 0
[junit4:junit4]   2> 90455 T336 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[379]} 0 1
[junit4:junit4]   2> 90459 T334 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[380]} 0 0
[junit4:junit4]   2> 90462 T330 C24 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[381]} 0 0
[junit4:junit4]   2> 90464 T331 C24 UPDATE 

Jenkins build is back to normal : slow-io-beasting #4898

2012-10-26 Thread Charlie Cron
See 





Build failed in Jenkins: slow-io-beasting #4897

2012-10-26 Thread Charlie Cron
See 

--
[...truncated 14168 lines...]
[junit4:junit4]> at 
org.eclipse.jetty.util.thread.QueuedThreadPool.access$600(QueuedThreadPool.java:39)
[junit4:junit4]> at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:563)
[junit4:junit4]> at java.lang.Thread.run(Thread.java:662)
[junit4:junit4]>   10) Thread[id=254, name=qtp31680683-254, 
state=TIMED_WAITING, group=TGRP-FullSolrCloudDistribCmdsTest]
[junit4:junit4]> at sun.misc.Unsafe.park(Native Method)
[junit4:junit4]> at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
[junit4:junit4]> at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
[junit4:junit4]> at 
org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:337)
[junit4:junit4]> at 
org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:517)
[junit4:junit4]> at 
org.eclipse.jetty.util.thread.QueuedThreadPool.access$600(QueuedThreadPool.java:39)
[junit4:junit4]> at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:563)
[junit4:junit4]> at java.lang.Thread.run(Thread.java:662)
[junit4:junit4]>   11) Thread[id=249, name=qtp31680683-249, 
state=TIMED_WAITING, group=TGRP-FullSolrCloudDistribCmdsTest]
[junit4:junit4]> at sun.misc.Unsafe.park(Native Method)
[junit4:junit4]> at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
[junit4:junit4]> at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
[junit4:junit4]> at 
org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:337)
[junit4:junit4]> at 
org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:517)
[junit4:junit4]> at 
org.eclipse.jetty.util.thread.QueuedThreadPool.access$600(QueuedThreadPool.java:39)
[junit4:junit4]> at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:563)
[junit4:junit4]> at java.lang.Thread.run(Thread.java:662)
[junit4:junit4]>   12) Thread[id=263, name=qtp30884152-263, 
state=TIMED_WAITING, group=TGRP-FullSolrCloudDistribCmdsTest]
[junit4:junit4]> at sun.misc.Unsafe.park(Native Method)
[junit4:junit4]> at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
[junit4:junit4]> at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
[junit4:junit4]> at 
org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:337)
[junit4:junit4]> at 
org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:517)
[junit4:junit4]> at 
org.eclipse.jetty.util.thread.QueuedThreadPool.access$600(QueuedThreadPool.java:39)
[junit4:junit4]> at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:563)
[junit4:junit4]> at java.lang.Thread.run(Thread.java:662)
[junit4:junit4]>   13) Thread[id=270, 
name=TEST-FullSolrCloudDistribCmdsTest.testDistribSearch-seed#[899D7DC90149C710]-EventThread,
 state=WAITING, group=TGRP-FullSolrCloudDistribCmdsTest]
[junit4:junit4]> at sun.misc.Unsafe.park(Native Method)
[junit4:junit4]> at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
[junit4:junit4]> at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
[junit4:junit4]> at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:399)
[junit4:junit4]> at 
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
[junit4:junit4]>   14) Thread[id=262, name=qtp30884152-262, 
state=TIMED_WAITING, group=TGRP-FullSolrCloudDistribCmdsTest]
[junit4:junit4]> at sun.misc.Unsafe.park(Native Method)
[junit4:junit4]> at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
[junit4:junit4]> at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
[junit4:junit4]> at 
org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:337)
[junit4:junit4]> at 
org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:517)
[junit4:junit4]> at 
org.eclipse.jetty.util.thread.QueuedThreadPool.access$600(QueuedThreadPool.java:39)
[junit4:junit4]> at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPoo

Jenkins build is back to normal : slow-io-beasting #4896

2012-10-26 Thread Charlie Cron
See 





Build failed in Jenkins: slow-io-beasting #4895

2012-10-26 Thread Charlie Cron
See 

--
[...truncated 13872 lines...]
[junit4:junit4]   2> 26820 T10 oasu.CommitTracker. Hard AutoCommit: 
disabled
[junit4:junit4]   2> 26820 T10 oasu.CommitTracker. Soft AutoCommit: 
disabled
[junit4:junit4]   2> 26820 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting socketTimeout to: 0
[junit4:junit4]   2> 26820 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting urlScheme to: http://
[junit4:junit4]   2> 26820 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting connTimeout to: 0
[junit4:junit4]   2> 26820 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting maxConnectionsPerHost to: 20
[junit4:junit4]   2> 26820 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting corePoolSize to: 0
[junit4:junit4]   2> 26820 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting maximumPoolSize to: 2147483647
[junit4:junit4]   2> 26820 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting maxThreadIdleTime to: 5
[junit4:junit4]   2> 26820 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting sizeOfQueue to: -1
[junit4:junit4]   2> 26820 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting fairnessPolicy to: false
[junit4:junit4]   2> 26835 T10 oascsi.HttpClientUtil.createClient Creating new 
http client, 
config:maxConnectionsPerHost=20&maxConnections=1&socketTimeout=0&connTimeout=0&retry=false
[junit4:junit4]   2> 26835 T10 oash.ReplicationHandler.inform Commits will be 
reserved for  1
[junit4:junit4]   2> 26835 T10 oasc.CoreContainer.register registering core: 
collection1
[junit4:junit4]   2> 26835 T10 oass.SolrDispatchFilter.init 
user.dir=
[junit4:junit4]   2> 26835 T116 oasc.SolrCore.registerSearcher [collection1] 
Registered new searcher Searcher@17a7adf 
main{StandardDirectoryReader(segments_1:1)}
[junit4:junit4]   2> 26835 T10 oass.SolrDispatchFilter.init 
SolrDispatchFilter.init() done
[junit4:junit4]   2> ASYNC  NEW_CORE C9 name=collection1 
org.apache.solr.core.SolrCore@369fdc
[junit4:junit4]   2> 26898 T113 C9 oasc.SolrDeletionPolicy.onInit 
SolrDeletionPolicy.onInit: commits:num=1
[junit4:junit4]   2>
commit{dir=MockDirWrapper(org.apache.lucene.store.SimpleFSDirectory@
 
lockFactory=org.apache.lucene.store.NativeFSLockFactory@15b5783),segFN=segments_1,generation=1,filenames=[segments_1]
[junit4:junit4]   2> 26898 T113 C9 oasc.SolrDeletionPolicy.updateCommits newest 
commit = 1
[junit4:junit4]   2> 26945 T113 C9 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]} 
0 94
[junit4:junit4]   2> 26960 T111 C9 oasu.DirectUpdateHandler2.commit start 
commit{flags=0,_version_=0,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false}
[junit4:junit4]   2> 27132 T111 C9 oasc.SolrDeletionPolicy.onCommit 
SolrDeletionPolicy.onCommit: commits:num=2
[junit4:junit4]   2>
commit{dir=MockDirWrapper(org.apache.lucene.store.SimpleFSDirectory@
 
lockFactory=org.apache.lucene.store.NativeFSLockFactory@15b5783),segFN=segments_1,generation=1,filenames=[segments_1]
[junit4:junit4]   2>
commit{dir=MockDirWrapper(org.apache.lucene.store.SimpleFSDirectory@
 
lockFactory=org.apache.lucene.store.NativeFSLockFactory@15b5783),segFN=segments_2,generation=2,filenames=[_0_1.len,
 _0_0.len, segments_2, _0.inf, _0.si, _0.pst, _0.fld]
[junit4:junit4]   2> 27132 T111 C9 oasc.SolrDeletionPolicy.updateCommits newest 
commit = 2
[junit4:junit4]   2> 27163 T111 C9 oass.SolrIndexSearcher. Opening 
Searcher@1b2591c main
[junit4:junit4]   2> 27163 T111 C9 oass.SolrIndexSearcher.getIndexDir WARNING 
WARNING: Directory impl does not support setting indexDir: 
org.apache.lucene.store.MockDirectoryWrapper
[junit4:junit4]   2> 27163 T111 C9 oasu.DirectUpdateHandler2.commit 
end_commit_flush
[junit4:junit4]   2> 27163 T116 oasc.SolrCore.registerSearcher [collection1] 
Registered new searcher Searcher@1b2591c 
main{StandardDirectoryReader(segments_2:3 _0(4.1):C10)}
[junit4:junit4]   2> 27163 T111 C9 UPDATE [collection1] webapp=/solr 
path=/update 
params={waitSearcher=true&wt=javabin&commit=true&softCommit=false&version=2} 
{commit=} 0 203
[junit4:junit4]   2> 27179 T1

[jira] [Commented] (SOLR-3996) Solr 4.0.0: problems with character '/' in fields names

2012-10-26 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13485286#comment-13485286
 ] 

Jack Krupansky commented on SOLR-3996:
--

A couple of comments:

1. Slash introduces a regex query term in 4.0.

2. Escaping is not supported in parameters such as "qf".

3. Solr 4.0 supports "pseudo fields" in the "fl" parameter. The items in the 
list can be full function queries (and some other things, including "glob" 
names). This syntax presumes (to some degree) that field names follow the rules 
of Java identifiers.

See:
http://wiki.apache.org/solr/CommonQueryParameters#fl
https://issues.apache.org/jira/browse/SOLR-2444

4. Although the Solr schema technically does accept arbitrary names, including 
white space and punctuation characters, users would be STRONGLY ADVISED to 
stick with Java identifiers. Strict Java-like field names are not required in 
all circumstances, but as you note, some situations may either require 
escaping or simply prohibit field names that are not strict Java identifiers.

In short, Solr gives you a lot of rope to play with, but don't blame Solr if 
that rope is not universally accepted for everything everywhere - and for all 
time.

That said, feel free to propose specific enhancements for escaping, etc.
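To make point 1 concrete, a client can backslash-escape the slash (and the documented special characters) before building the query string. A hedged Python sketch; the character set below is assembled from the query-parser docs referenced above plus '/', which gained meaning with the 4.0 regex syntax, and is not an official API:

```python
import re

# Classic Lucene query-parser special characters, plus '/' (introduces a
# regex term in 4.0). This set is an assumption from the linked docs.
SPECIAL = r'+-&|!(){}[]^"~*?:\/'

def escape_query_chars(term: str) -> str:
    """Backslash-escape every query-parser special character in term."""
    return re.sub(r'([{}])'.format(re.escape(SPECIAL)), r'\\\1', term)

print(escape_query_chars("a/b/title"))  # a\/b\/title
```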



> Solr 4.0.0: problems with character '/' in fields names
> ---
>
> Key: SOLR-3996
> URL: https://issues.apache.org/jira/browse/SOLR-3996
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.0
> Environment: Solr cloud configuration
> OS Linux
> Apache tomcat 6.0.29
> solr-spec 4.0.0.2012.10.06.03.04.33
> solr-impl 4.0.0 1394950 - rmuir - 2012-10-06 03:04:33
> jvm OpenJDK 64-Bit Server VM (14.0-b16)
>Reporter: Federico Grillini
>
> Good morning,
> we have a document management system and we use solr for fulltext searches.
> Our documents have fields with the character '/' in their names - they come 
> from XML documents and we use simple XPaths as field names.
> We have noticed some problems in queries:
> 1. the character '/' must be escaped even though it is not listed as a special 
> character 
> (http://lucene.apache.org/core/3_6_0/queryparsersyntax.html#Escaping%20Special%20Characters);
> 2. that character must not be escaped if specified in sorting rules or in the 
> required fields list.
> 3. in a fields list (parameter 'fl' in a query) a field with '/' in its name 
> must not be the first field if you don't want to get an error.
> I think there is a bug in how this character is handled.
> Thanks for your help.




[jira] [Updated] (SOLR-1972) Need additional query stats in admin interface - median, 95th and 99th percentile

2012-10-26 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-1972:
---

Attachment: solr1972-metricsregistry-branch4x-failure.log

Full putty log showing actions taken right after checkout of branch_4x.


> Need additional query stats in admin interface - median, 95th and 99th 
> percentile
> -
>
> Key: SOLR-1972
> URL: https://issues.apache.org/jira/browse/SOLR-1972
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Affects Versions: 1.4
>Reporter: Shawn Heisey
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 4.1
>
> Attachments: elyograg-1972-3.2.patch, elyograg-1972-3.2.patch, 
> elyograg-1972-trunk.patch, elyograg-1972-trunk.patch, 
> SOLR-1972-branch3x-url_pattern.patch, SOLR-1972-branch4x.patch, 
> SOLR-1972-branch4x.patch, SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, 
> SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, 
> SOLR-1972_metrics.patch, solr1972-metricsregistry-branch4x-failure.log, 
> SOLR-1972.patch, SOLR-1972.patch, SOLR-1972.patch, SOLR-1972.patch, 
> SOLR-1972-url_pattern.patch
>
>
> I would like to see more detailed query statistics from the admin GUI.  This 
> is what you can get now:
> requests : 809
> errors : 0
> timeouts : 0
> totalTime : 70053
> avgTimePerRequest : 86.59209
> avgRequestsPerSecond : 0.8148785 
> I'd like to see more data on the time per request - median, 95th percentile, 
> 99th percentile, and any other statistical function that makes sense to 
> include.  In my environment, the first bunch of queries after startup tend to 
> take several seconds each.  I find that the average value tends to be useless 
> until it has several thousand queries under its belt and the caches are 
> thoroughly warmed.  The statistical functions I have mentioned would quickly 
> eliminate the influence of those initial slow queries.
> The system will have to store individual data about each query.  I don't know 
> if this is something Solr does already.  It would be nice to have a 
> configurable count of how many of the most recent data points are kept, to 
> control the amount of memory the feature uses.  The default value could be 
> something like 1024 or 4096.
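The bounded-buffer idea in the request above can be sketched as follows. This is purely illustrative Python; the real patches on this issue use the Metrics library, not anything like this:

```python
from collections import deque

class QueryTimeStats:
    """Keep the N most recent request times and report percentiles.

    Bounding the buffer (e.g. 1024 or 4096 entries) caps memory use and
    lets slow warm-up queries age out, unlike a lifetime average.
    """

    def __init__(self, size: int = 1024):
        self.times = deque(maxlen=size)  # old entries drop off automatically

    def record(self, millis: float) -> None:
        self.times.append(millis)

    def percentile(self, pct: int) -> float:
        ordered = sorted(self.times)
        # nearest-rank style index; integer math, fine for monitoring
        idx = min(len(ordered) - 1, pct * len(ordered) // 100)
        return ordered[idx]

stats = QueryTimeStats(size=4096)
# A few slow warm-up queries followed by many fast ones:
for t in [5000, 4000] + [50] * 98:
    stats.record(t)

print(stats.percentile(50))  # 50 (median is unaffected by warm-up spikes)
print(stats.percentile(99))  # 5000
```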




[jira] [Commented] (SOLR-1972) Need additional query stats in admin interface - median, 95th and 99th percentile

2012-10-26 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13485280#comment-13485280
 ] 

Shawn Heisey commented on SOLR-1972:


I ran into some trouble with tests failing, so I checked out a fresh branch_4x, 
applied this patch, and tried running solr tests.  There are major explosions.  
I'll be attaching the full putty log for review.  It appears that the first 
exception is an out of memory error:

[junit4:junit4]   2> 7800 T177 
ccr.RandomizedRunner$QueueUncaughtExceptionsHandler.uncaughtException WARNING 
Uncaught exception in thread: 
Thread[metrics-meter-tick-thread-1,5,TGRP-TestRandomFaceting] 
java.lang.OutOfMemoryError: unable to create new native thread
[junit4:junit4]   2>at 
__randomizedtesting.SeedInfo.seed([2191B18B87EDCD66]:0)
[junit4:junit4]   2>at java.lang.Thread.start0(Native Method)
[junit4:junit4]   2>at java.lang.Thread.start(Thread.java:691)
[junit4:junit4]   2>at 
java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:943)
[junit4:junit4]   2>at 
java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:992)
[junit4:junit4]   2>at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
[junit4:junit4]   2>at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
[junit4:junit4]   2>at java.lang.Thread.run(Thread.java:722)
[junit4:junit4]   2>

If I edit out the MetricsRegistry and go back to creating the objects directly 
from Metrics and using this.toString() as the scope, the tests all pass.

The Metrics documentation talks about MetricsRegistry as being something you 
create on a per-application basis.  That suggests that it's a very heavy 
object, and even a barebones Solr install probably has at least a dozen 
requestHandlers defined.  I don't know how many are defined in the JVMs used 
for testing.

On my test branch_4x installation, I see 29 handlers between QUERYHANDLER and 
UPDATEHANDLER in the stats visible in the gui, and that's just on one core.  
I've got 16 cores defined.
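The per-application point can be illustrated with a shared-registry pattern. This is a Python sketch with invented names, not the actual Metrics API: each handler asks one application-wide registry for its scoped timer instead of constructing a registry, and whatever background machinery comes with it, per handler:

```python
import threading

class MetricsRegistry:
    """Stand-in for a heavyweight per-application registry.

    Imagine construction spawns background tick threads; creating one per
    request handler (29 handlers x 16 cores in the report above) multiplies
    that cost, while a single shared instance keeps it constant.
    """

    _instance = None
    _lock = threading.Lock()

    def __init__(self):
        self.timers = {}

    @classmethod
    def shared(cls):
        # Double-checked creation of the single application-wide registry.
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:
                    cls._instance = cls()
        return cls._instance

    def timer(self, scope: str):
        # One timer per scope (e.g. the handler name), reused on re-request.
        return self.timers.setdefault(scope, {"count": 0, "total_ms": 0.0})

# Every handler gets the same registry and a scoped timer:
a = MetricsRegistry.shared().timer("/select")
b = MetricsRegistry.shared().timer("/select")
print(a is b)  # True: no second registry, no second timer
```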


> Need additional query stats in admin interface - median, 95th and 99th 
> percentile
> -
>
> Key: SOLR-1972
> URL: https://issues.apache.org/jira/browse/SOLR-1972
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Affects Versions: 1.4
>Reporter: Shawn Heisey
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 4.1
>
> Attachments: elyograg-1972-3.2.patch, elyograg-1972-3.2.patch, 
> elyograg-1972-trunk.patch, elyograg-1972-trunk.patch, 
> SOLR-1972-branch3x-url_pattern.patch, SOLR-1972-branch4x.patch, 
> SOLR-1972-branch4x.patch, SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, 
> SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, 
> SOLR-1972_metrics.patch, SOLR-1972.patch, SOLR-1972.patch, SOLR-1972.patch, 
> SOLR-1972.patch, SOLR-1972-url_pattern.patch




Build failed in Jenkins: slow-io-beasting #4894

2012-10-26 Thread Charlie Cron
See 

--
[...truncated 21152 lines...]
[junit4:junit4]   2> 7220739 T508 oaz.ZooKeeper. Initiating client 
connection, connectString=127.0.0.1:64755/solr sessionTimeout=1 
watcher=org.apache.solr.common.cloud.ConnectionManager@6c8343
[junit4:junit4]   2> 7220739 T545 oazs.PrepRequestProcessor.run 
PrepRequestProcessor exited loop!
[junit4:junit4]   2> 7220739 T556 oaz.ClientCnxn$EventThread.run EventThread 
shut down
[junit4:junit4]   2> 7220739 T559 oaz.ClientCnxn$EventThread.run EventThread 
shut down
[junit4:junit4]   2> 7220739 T552 oasc.Overseer$ClusterStateUpdater.amILeader 
According to ZK I (id=88557932611502082-127.0.0.1:7000_solr-n_00) am no 
longer a leader.
[junit4:junit4]   2> 7220739 T562 oaz.ClientCnxn$EventThread.run SEVERE Event 
thread exiting due to interruption java.lang.InterruptedException
[junit4:junit4]   2>at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:1961)
[junit4:junit4]   2>at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1996)
[junit4:junit4]   2>at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:399)
[junit4:junit4]   2>at 
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
[junit4:junit4]   2> 
[junit4:junit4]   2> 7220739 T542 oazs.NIOServerCnxn.doIO WARNING Exception 
causing close of session 0x13a9ef8735a0002 due to 
java.nio.channels.ClosedByInterruptException
[junit4:junit4]   2> 7220739 T550 oaz.ClientCnxn$SendThread.run WARNING Session 
0x13a9ef8735a0002 for server 127.0.0.1/127.0.0.1:64755, unexpected error, 
closing socket connection and attempting reconnect 
java.nio.channels.ClosedByInterruptException
[junit4:junit4]   2>at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
[junit4:junit4]   2>at 
sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
[junit4:junit4]   2>at 
org.apache.zookeeper.ClientCnxn$SendThread.doIO(ClientCnxn.java:890)
[junit4:junit4]   2>at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1154)
[junit4:junit4]   2> 
[junit4:junit4]   2> 7220739 T543 oazs.SessionTrackerImpl.run 
SessionTrackerImpl exited loop!
[junit4:junit4]   2> 7220739 T562 oaz.ClientCnxn$EventThread.run EventThread 
shut down
[junit4:junit4]   2> 7220739 T542 oazs.NIOServerCnxn.closeSock Closed socket 
connection for client /127.0.0.1:64764 which had sessionid 0x13a9ef8735a0002
[junit4:junit4]   2> 7220739 T508 oasc.CoreContainer.shutdown Shutting down 
CoreContainer instance=23469803
[junit4:junit4]   2> 7220739 T508 oasc.SolrCore.close [collection1]  CLOSING 
SolrCore org.apache.solr.core.SolrCore@14b5d3f
[junit4:junit4]   2> 7220748 T508 oasu.DirectUpdateHandler2.close closing 
DirectUpdateHandler2{commits=0,autocommits=0,soft 
autocommits=0,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,adds=0,deletesById=0,deletesByQuery=0,errors=0,cumulative_adds=0,cumulative_deletesById=0,cumulative_deletesByQuery=0,cumulative_errors=0}
[junit4:junit4]   2> 7220749 T508 oasc.SolrCore.decrefSolrCoreState Closing 
SolrCoreState
[junit4:junit4]   2> 7220749 T508 oasu.DefaultSolrCoreState.closeIndexWriter 
SolrCoreState ref count has reached 0 - closing IndexWriter
[junit4:junit4]   2> 7220749 T568 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@192e652 name:ZooKeeperConnection 
Watcher:127.0.0.1:64755/solr got event WatchedEvent state:Disconnected 
type:None path:null path:null type:None
[junit4:junit4]   2> 7220749 T508 oasu.DefaultSolrCoreState.closeIndexWriter 
closing IndexWriter with IndexWriterCloser
[junit4:junit4]   2> 7220750 T568 oascc.ConnectionManager.process zkClient has 
disconnected
[junit4:junit4]   2> 7220750 T508 oasc.SolrCore.closeSearcher [collection1] 
Closing main searcher on request.
[junit4:junit4]   2> 7220753 T542 oazs.NIOServerCnxn.doIO WARNING Exception 
causing close of session 0x13a9ef8735a0004 due to 
java.nio.channels.ClosedByInterruptException
[junit4:junit4]   2> 7220751 T568 oaz.ClientCnxn$EventThread.run SEVERE Event 
thread exiting due to interruption java.lang.InterruptedException
[junit4:junit4]   2>at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1199)
[junit4:junit4]   2>at 
java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:312)
[junit4:junit4]   2>at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:396)
[junit4:junit4]   2>at 
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
[junit4:junit4]   2> 
[junit4:junit4]   2> 7220753 T542 oazs.NIOServerCnxn.closeSock Closed socket 
connection for client /127.0.0.1:64781 which had sessionid 0x13a9ef8735a0004

Re: Source Control

2012-10-26 Thread Robert Muir
On Fri, Oct 26, 2012 at 7:02 PM, Mark Miller  wrote:
> So, it's not everyone's favorite tool, but it sure seems to be the most 
> popular tool.
>

My main question is: is it really git that's popular, or GitHub?

If git would really bring in more contributions, we should do it. But
would it do that without GitHub, or just make things more complex?

Its crappy command line is the "price" for using it, and if that buys us
nothing, the switch would be a waste of effort :)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3998) Atomic update on uniqueKey field itself causes duplicate document

2012-10-26 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13485257#comment-13485257
 ] 

Yonik Seeley commented on SOLR-3998:


Thanks Eric - I've changed UpdateTest to include this scenario and replicated 
what you see.
I think the right thing to do here is probably throw an exception (rather than 
ignore the modifiers on the id field).
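
A guard along these lines could look like the following minimal sketch (hypothetical helper, not Solr's actual code). In an atomic update request a modifier arrives as a Map value such as {"set": "newValue"}; applying one to the uniqueKey field is what produced the duplicate document, so the sketch throws instead of silently ignoring the modifier:

```java
import java.util.Map;

public class AtomicUpdateGuard {
    // Hypothetical check, not Solr's code: reject any atomic-update modifier
    // map on the uniqueKey field rather than ignore it.
    static void checkUniqueKey(String uniqueKeyField, Map<String, ?> fields) {
        if (fields.get(uniqueKeyField) instanceof Map) {
            throw new IllegalArgumentException(
                "atomic update modifier not allowed on uniqueKey field: "
                    + uniqueKeyField);
        }
    }

    // Convenience wrapper so callers can probe the guard without try/catch.
    static boolean rejects(String uniqueKeyField, Map<String, ?> fields) {
        try {
            checkUniqueKey(uniqueKeyField, fields);
            return false;
        } catch (IllegalArgumentException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        // {"id": {"set": "42"}} carries a modifier on the uniqueKey field.
        System.out.println(rejects("id", Map.of("id", Map.of("set", "42"))));
    }
}
```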


> Atomic update on uniqueKey field itself causes duplicate document
> -
>
> Key: SOLR-3998
> URL: https://issues.apache.org/jira/browse/SOLR-3998
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.0
> Environment: Windows XP and RH Linux
>Reporter: Eric Spencer
>Assignee: Yonik Seeley
> Attachments: solr_atomic_update_unique_key_bug_t.java
>
>
> Issuing an atomic update which includes the uniqueKey field itself will cause 
> Solr to insert a second document with the same uniqueKey thereby violating 
> uniqueness. A non-atomic update will "correct" the document. Attached is a 
> JUnit test case that demonstrates the issue against the collection1 core in 
> the standard Solr download.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira




Source Control

2012-10-26 Thread Mark Miller
So, it's not everyone's favorite tool, but it sure seems to be the most popular 
tool.

What are people's thoughts about moving to git?

Distributed version control is where it's at :)

I know some prefer mercurial, but git and github clearly are taking over the 
world.

Also, the command line for git is a little eccentric - I use a GUI client 
called SmartGit. Some very clever Germans make it.

A few Apache projects are already using git. 

I'd like to hear what people feel about this idea.

- Mark



[jira] [Commented] (SOLR-2216) Highlighter query exceeds maxBooleanClause limit due to range query

2012-10-26 Thread Lance Norskog (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13485241#comment-13485241
 ] 

Lance Norskog commented on SOLR-2216:
-

Is this still a problem in 3.6, 4.0 or the trunk?

> Highlighter query exceeds maxBooleanClause limit due to range query
> ---
>
> Key: SOLR-2216
> URL: https://issues.apache.org/jira/browse/SOLR-2216
> Project: Solr
>  Issue Type: Bug
>  Components: highlighter
>Affects Versions: 1.4.1
> Environment: Linux solr-2.bizjournals.int 2.6.18-194.3.1.el5 #1 SMP 
> Thu May 13 13:08:30 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
> java version "1.6.0_21"
> Java(TM) SE Runtime Environment (build 1.6.0_21-b06)
> Java HotSpot(TM) 64-Bit Server VM (build 17.0-b16, mixed mode)
> JAVA_OPTS="-client -Dcom.sun.management.jmxremote=true 
> -Dcom.sun.management.jmxremote.port= 
> -Dcom.sun.management.jmxremote.authenticate=true 
> -Dcom.sun.management.jmxremote.access.file=/root/.jmxaccess 
> -Dcom.sun.management.jmxremote.password.file=/root/.jmxpasswd 
> -Dcom.sun.management.jmxremote.ssl=false -XX:+UseCompressedOops 
> -XX:MaxPermSize=512M -Xms10240M -Xmx15360M -XX:+UseParallelGC 
> -XX:+AggressiveOpts -XX:NewRatio=5"
> top - 11:38:49 up 124 days, 22:37,  1 user,  load average: 5.20, 4.35, 3.90
> Tasks: 220 total,   1 running, 219 sleeping,   0 stopped,   0 zombie
> Cpu(s): 47.5%us,  2.9%sy,  0.0%ni, 49.5%id,  0.1%wa,  0.0%hi,  0.0%si,  0.0%st
> Mem:  24679008k total, 18179980k used,  6499028k free,   125424k buffers
> Swap: 26738680k total,29276k used, 26709404k free,  8187444k cached
>Reporter: Ken Stanley
>
> For a full detail of the issue, please see the mailing list: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201011.mbox/%3CAANLkTimE8z8yOni+u0Nsbgct1=ef7e+su0_waku2c...@mail.gmail.com%3E
> The nutshell version of the issue is that when I have a query that contains 
> ranges on a specific (non-highlighted) field, the highlighter component is 
> attempting to create a query that exceeds the value of maxBooleanClauses set 
> from solrconfig.xml. This is despite my explicit setting of hl.field, 
> hl.requireFieldMatch, and various other hightlight options in the query. 
> As suggested by Koji in the follow-up response, I removed the range queries 
> from my main query, and SOLR and highlighting were happy to fulfill my 
> request. It was suggested that if removing the range queries worked that this 
> might potentially be a bug, hence my filing this JIRA ticket. For what it is 
> worth, if I move my range queries into an fq, I do not get the exception 
> about exceeding maxBooleanClauses, and I get the effect that I was looking 
> for. 





[jira] [Commented] (SOLR-3997) Have solr config and index files on HDFS.

2012-10-26 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13485234#comment-13485234
 ] 

Otis Gospodnetic commented on SOLR-3997:


James - try asking on the mailing list, not in JIRA.

> Have solr config and index files on HDFS.
> -
>
> Key: SOLR-3997
> URL: https://issues.apache.org/jira/browse/SOLR-3997
> Project: Solr
>  Issue Type: Wish
>  Components: SearchComponents - other
>Affects Versions: 4.0
>Reporter: James Ji
>
> We are currently working on having Solr read its config and index files from 
> HDFS. We extended some of the classes so as to avoid modifying the original 
> Solr code and keep it compatible with future releases. So here comes the 
> question: I found that QueryElevationComponent contains a piece of code 
> checking whether elevate.xml exists on the local file system. Is there a way 
> to bypass this?
> QueryElevationComponent.inform() {
>
>   File fC = new File(core.getResourceLoader().getConfigDir(), f);
>   File fD = new File(core.getDataDir(), f);
>   if (fC.exists() == fD.exists()) {
>     throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
>         "QueryElevationComponent missing config file: '" + f + "'\n"
>         + "either: " + fC.getAbsolutePath() + " or "
>         + fD.getAbsolutePath() + " must exist, but not both.");
>   }
>   if (fC.exists()) {
>     exists = true;
>     log.info("Loading QueryElevation from: " + fC.getAbsolutePath());
>     Config cfg = new Config(core.getResourceLoader(), f);
>     elevationCache.put(null, loadElevationMap(cfg));
>   }
>
> }





[jira] [Updated] (LUCENE-4258) Incremental Field Updates through Stacked Segments

2012-10-26 Thread Sivan Yogev (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sivan Yogev updated LUCENE-4258:


Attachment: LUCENE-4258.r1402630.patch

New patch: a naive test of adding updates to a single-document segment, before 
or after an update, is working. Working on more complex tests with multiple 
segments, documents and updates.

> Incremental Field Updates through Stacked Segments
> --
>
> Key: LUCENE-4258
> URL: https://issues.apache.org/jira/browse/LUCENE-4258
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Reporter: Sivan Yogev
> Attachments: IncrementalFieldUpdates.odp, 
> LUCENE-4258-API-changes.patch, LUCENE-4258-inner-changes.patch, 
> LUCENE-4258.r1402630.patch
>
>   Original Estimate: 2,520h
>  Remaining Estimate: 2,520h
>
> Shai and I would like to start working on the proposal to Incremental Field 
> Updates outlined here (http://markmail.org/message/zhrdxxpfk6qvdaex).





[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_07) - Build # 2015 - Still Failing!

2012-10-26 Thread Policeman Jenkins Server
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Linux/2015/
Java: 32bit/jdk1.7.0_07 -client -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 28972 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:294: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/extra-targets.xml:117: The 
following files are missing svn:eol-style (or binary svn:mime-type):
* solr/core/src/java/org/apache/solr/core/EphemeralDirectoryFactory.java

Total time: 38 minutes 20 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Description set: Java: 32bit/jdk1.7.0_07 -client -XX:+UseConcMarkSweepGC
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Commented] (LUCENE-4509) Make CompressingStoredFieldsFormat the new default StoredFieldsFormat impl

2012-10-26 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13485218#comment-13485218
 ] 

Robert Muir commented on LUCENE-4509:
-

No, I'm referring to the second packed ints structure (start offset within a 
block)

> Make CompressingStoredFieldsFormat the new default StoredFieldsFormat impl
> --
>
> Key: LUCENE-4509
> URL: https://issues.apache.org/jira/browse/LUCENE-4509
> Project: Lucene - Core
>  Issue Type: Wish
>  Components: core/store
>Reporter: Adrien Grand
>Priority: Minor
>
> What would you think of making CompressingStoredFieldsFormat the new default 
> StoredFieldsFormat?
> Stored fields compression has many benefits :
>  - it makes the I/O cache work for us,
>  - file-based index replication/backup becomes cheaper.
> Things to know:
>  - even with incompressible data, there is less than 0.5% overhead with LZ4,
>  - LZ4 compression requires ~ 16kB of memory and LZ4 HC compression requires 
> ~ 256kB,
>  - LZ4 uncompression has almost no memory overhead,
>  - on my low-end laptop, the LZ4 impl in Lucene uncompresses at ~ 300mB/s.
> I think we could use the same default parameters as in CompressingCodec :
>  - LZ4 compression,
>  - in-memory stored fields index that is very memory-efficient (less than 12 
> bytes per block of compressed docs) and uses binary search to locate 
> documents in the fields data file,
>  - 16 kB blocks (small enough so that there is no major slow down when the 
> whole index would fit into the I/O cache anyway, and large enough to provide 
> interesting compression ratios ; for example Robert got a 0.35 compression 
> ratio with the geonames.org database).
> Any concerns?





[jira] [Commented] (LUCENE-4509) Make CompressingStoredFieldsFormat the new default StoredFieldsFormat impl

2012-10-26 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13485211#comment-13485211
 ] 

Adrien Grand commented on LUCENE-4509:
--

bq. Well you say you use a separate packed ints structure for the offsets 
right? so these would all be zero?

These are absolute offsets in the fields data file. For example, when looking 
up a document, it first performs a binary search in the first array (the one 
that contains the first document IDs of every chunk). The resulting index is 
used to find the start offset of the chunk of compressed documents thanks to 
the second array. When you read data starting at this offset in the fields data 
file, there is first a packed ints array that stores the uncompressed length of 
every document in the chunk, and then the compressed data. I'll add file 
formats docs soon...
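
The lookup described above can be sketched in a self-contained way (array and method names are illustrative, not Lucene's actual fields): binary-search the array of first document IDs to find the chunk, then use the resulting chunk index in the offsets array to find where the compressed chunk starts in the fields data file.

```java
import java.util.Arrays;

public class ChunkIndex {
    // firstDocIds[i] = docID of the first document stored in chunk i (ascending).
    // Returns the index of the chunk that contains docId.
    static int chunkOf(int[] firstDocIds, int docId) {
        int idx = Arrays.binarySearch(firstDocIds, docId);
        // On a miss, binarySearch returns -(insertionPoint) - 1; the doc then
        // belongs to the chunk just before the insertion point.
        return idx >= 0 ? idx : -idx - 2;
    }

    // startOffsets[i] = absolute offset of chunk i in the fields data file.
    static long startOffset(long[] startOffsets, int chunk) {
        return startOffsets[chunk];
    }

    public static void main(String[] args) {
        int[] firstDocIds = {0, 4, 9};            // chunks start at docs 0, 4, 9
        long[] startOffsets = {0L, 1820L, 3512L}; // hypothetical file offsets
        int chunk = chunkOf(firstDocIds, 6);      // doc 6 lives in chunk 1
        System.out.println(chunk + " @ " + startOffset(startOffsets, chunk));
    }
}
```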






[jira] [Created] (SOLR-3999) No serialVersionUID in SolrInputDocument and SolrInputField

2012-10-26 Thread yuanyun.cn (JIRA)
yuanyun.cn created SOLR-3999:


 Summary: No serialVersionUID in SolrInputDocument and 
SolrInputField
 Key: SOLR-3999
 URL: https://issues.apache.org/jira/browse/SOLR-3999
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.0, 3.6
Reporter: yuanyun.cn
 Fix For: 4.1


In my Solr server, one field is binarydoc, which stores a serialized 
SolrInputDocument containing some essential fields. We do this to improve 
transfer performance and for other design considerations: when copying the 
index of one document to another Solr server, we only need to copy the 
binarydoc field.

But when the client (3.6) and server (3.1) are not at the same version, this 
fails with:
java.io.InvalidClassException: org.apache.solr.common.SolrInputDocument; local 
class incompatible: stream classdesc serialVersionUID = -933324331332362318, 
local class serialVersionUID = 7513968456965125205

The root cause is that SolrInputDocument doesn't declare an explicit 
serialVersionUID, so different versions end up with different auto-generated 
values.

When the client is 4.0 and the server is 3.1, it reports another error:
java.io.InvalidClassException: org.apache.solr.common.SolrInputField; local 
class incompatible: stream classdesc serialVersionUID = -837387721983293523, 
local class serialVersionUID = 6061428683263691882

I temporarily fixed the error by adding the same serialVersionUID value to 
both the client and the server code.

Please fix this :)
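
The fix being asked for is the standard Java serialization idiom: declare serialVersionUID explicitly so independently compiled versions stay stream-compatible as long as the field layout is compatible. A minimal, self-contained sketch (VersionedDoc is a stand-in class, not Solr's SolrInputDocument):

```java
import java.io.*;

public class VersionedDoc implements Serializable {
    // Pinning serialVersionUID keeps the stream compatible across recompiles;
    // without it, the JVM derives a value that changes with the class shape.
    private static final long serialVersionUID = 1L;

    final String id;
    VersionedDoc(String id) { this.id = id; }

    // Serialize and deserialize through a byte[] - what crossing the wire does.
    static VersionedDoc roundTrip(VersionedDoc d) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(d);
            }
            try (ObjectInputStream ois =
                     new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
                return (VersionedDoc) ois.readObject();
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new AssertionError(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip(new VersionedDoc("doc-1")).id);
    }
}
```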





[jira] [Commented] (LUCENE-4509) Make CompressingStoredFieldsFormat the new default StoredFieldsFormat impl

2012-10-26 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13485205#comment-13485205
 ] 

Adrien Grand commented on LUCENE-4509:
--

bq. should we s/uncompression/decompression/ across the board?

If decompression sounds better, let's do this!

bq. here is some scary stuff (literal decompressions etc) uncovered by the 
clover report. We should make sure any special cases are tested.

I can work on it next week.






[jira] [Commented] (LUCENE-4510) when a test's heart beats it should also throw up (dump stack of all threads)

2012-10-26 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13485204#comment-13485204
 ] 

Robert Muir commented on LUCENE-4510:
-

+1, this would eliminate operator error!

> when a test's heart beats it should also throw up (dump stack of all threads)
> -
>
> Key: LUCENE-4510
> URL: https://issues.apache.org/jira/browse/LUCENE-4510
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>
> We've had numerous cases where tests were hung but the "operator" of that 
> particular Jenkins instance struggles to properly get a stack dump for all 
> threads and eg accidentally kills the process instead (rather awful that the 
> same powerful tool "kill" can be used to get stack traces and to destroy the 
> process...).
> Is there some way the test infra could do this for us, eg when it prints the 
> HEARTBEAT message?





[jira] [Commented] (LUCENE-4509) Make CompressingStoredFieldsFormat the new default StoredFieldsFormat impl

2012-10-26 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13485203#comment-13485203
 ] 

Robert Muir commented on LUCENE-4509:
-

Well you say you use a separate packed ints structure for the offsets right? so 
these would all be zero?






Build failed in Jenkins: slow-io-beasting #4893

2012-10-26 Thread Charlie Cron
See 

--
[...truncated 14743 lines...]
[junit4:junit4]   2>at sun.nio.ch.SocketChannelImpl.checkConnect(Native 
Method)
[junit4:junit4]   2>at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
[junit4:junit4]   2>at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1143)
[junit4:junit4]   2> 
[junit4:junit4]   2> 871392 T792 oaz.ClientCnxn$EventThread.run EventThread 
shut down
[junit4:junit4]   2> 871392 T734 oaz.ZooKeeper.close Session: 0x13a9edd4ca10005 
closed
[junit4:junit4]   2> 871402 T734 oejsh.ContextHandler.doStop stopped 
o.e.j.s.ServletContextHandler{/solr,null}
[junit4:junit4]   2> 871454 T734 oasc.ChaosMonkey.monkeyLog monkey: stop shard! 
63015
[junit4:junit4]   2> 871454 T734 oasc.CoreContainer.shutdown Shutting down 
CoreContainer instance=7699504
[junit4:junit4]   2> 871454 T734 oasc.SolrCore.close [collection1]  CLOSING 
SolrCore org.apache.solr.core.SolrCore@11cb8ac
[junit4:junit4]   2> 871460 T734 oasu.DirectUpdateHandler2.close closing 
DirectUpdateHandler2{commits=1,autocommits=0,soft 
autocommits=0,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,adds=0,deletesById=0,deletesByQuery=0,errors=0,cumulative_adds=0,cumulative_deletesById=0,cumulative_deletesByQuery=0,cumulative_errors=0}
[junit4:junit4]   2> 871460 T734 oasc.SolrCore.decrefSolrCoreState Closing 
SolrCoreState
[junit4:junit4]   2> 871460 T734 oasu.DefaultSolrCoreState.closeIndexWriter 
SolrCoreState ref count has reached 0 - closing IndexWriter
[junit4:junit4]   2> 871460 T734 oasu.DefaultSolrCoreState.closeIndexWriter 
closing IndexWriter with IndexWriterCloser
[junit4:junit4]   2> 871462 T734 oasc.SolrCore.closeSearcher [collection1] 
Closing main searcher on request.
[junit4:junit4]   2> 872990 T805 oaz.ClientCnxn$SendThread.startConnect Opening 
socket connection to server 127.0.0.1/127.0.0.1:62905
[junit4:junit4]   2> 874097 T806 oaz.ClientCnxn$EventThread.run EventThread 
shut down
[junit4:junit4]   2> 874097 T734 oaz.ZooKeeper.close Session: 0x13a9edd4ca10006 
closed
[junit4:junit4]   2> 874113 T734 oejsh.ContextHandler.doStop stopped 
o.e.j.s.ServletContextHandler{/solr,null}
[junit4:junit4]   2> 874445 T734 oas.SolrTestCaseJ4.tearDown ###Ending 
testDistribSearch
[junit4:junit4]   1>"core":"collection1",
[junit4:junit4]   1>"node_name":"127.0.0.1:63015_solr",
[junit4:junit4]   1>"base_url":"http://127.0.0.1:63015/solr"}
[junit4:junit4]   1>/solr/collections/control_collection (3)
[junit4:junit4]   1>DATA:
[junit4:junit4]   1>{"configName":"conf1"}
[junit4:junit4]   1> /solr/collections/control_collection/shards (0)
[junit4:junit4]   1> /solr/collections/control_collection/leader_elect (1)
[junit4:junit4]   1>  
/solr/collections/control_collection/leader_elect/control_shard (1)
[junit4:junit4]   1>   
/solr/collections/control_collection/leader_elect/control_shard/election (1)
[junit4:junit4]   1>
/solr/collections/control_collection/leader_elect/control_shard/election/88557815997726722-127.0.0.1:62915_solr_collection1-n_00
 (0)
[junit4:junit4]   1> /solr/collections/control_collection/leaders (1)
[junit4:junit4]   1>  
/solr/collections/control_collection/leaders/control_shard (0)
[junit4:junit4]   1>  DATA:
[junit4:junit4]   1>  {
[junit4:junit4]   1>"core":"collection1",
[junit4:junit4]   1>"node_name":"127.0.0.1:62915_solr",
[junit4:junit4]   1>"base_url":"http://127.0.0.1:62915/solr"}
[junit4:junit4]   1>   /solr/clusterstate.json (0)
[junit4:junit4]   1>   DATA:
[junit4:junit4]   1>   {
[junit4:junit4]   1> "collection1":{
[junit4:junit4]   1>   "shard1":{
[junit4:junit4]   1> "range":"8000-",
[junit4:junit4]   1> "replicas":{
[junit4:junit4]   1>   "127.0.0.1:62930_solr_collection1":{
[junit4:junit4]   1> "shard":"shard1",
[junit4:junit4]   1> "roles":null,
[junit4:junit4]   1> "state":"active",
[junit4:junit4]   1> "core":"collection1",
[junit4:junit4]   1> "collection":"collection1",
[junit4:junit4]   1> "node_name":"127.0.0.1:62930_solr",
[junit4:junit4]   1> "base_url":"http://127.0.0.1:62930/solr";,
[junit4:junit4]   1> "leader":"true"},
[junit4:junit4]   1>   "127.0.0.1:63007_solr_collection1":{
[junit4:junit4]   1> "shard":"shard1",
[junit4:junit4]   1> "roles":null,
[junit4:junit4]   1> "state":"active",
[junit4:junit4]   1> "core":"collection1",
[junit4:junit4]   1> "collection":"collection1",
[junit4:junit4]   1> "node_name":"127.0.0.1:63007_solr",
[junit4:junit4]   1> 
"base_url":"http:

[jira] [Commented] (LUCENE-4509) Make CompressingStoredFieldsFormat the new default StoredFieldsFormat impl

2012-10-26 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13485201#comment-13485201
 ] 

Adrien Grand commented on LUCENE-4509:
--

But if we worry about this worst-case (numDocs == numChunks), maybe we should 
just increase the chunk size (for example, ElasticSearch uses 65 kB by default).

(Another option would be to change the compress+flush trigger to something 
like : chunk size >= 16 kB AND number of documents in the chunk >= 4.)
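
The proposed trigger reduces to a two-threshold predicate; a hypothetical sketch (constants and names are illustrative, not Lucene's code). Requiring both conditions bounds the numDocs == numChunks worst case while keeping chunks near the target size:

```java
public class FlushTrigger {
    static final int CHUNK_BYTES = 16 * 1024; // target chunk size
    static final int MIN_DOCS = 4;            // minimum docs per chunk

    // Flush the pending chunk only once both thresholds are met.
    static boolean shouldFlush(int bufferedBytes, int bufferedDocs) {
        return bufferedBytes >= CHUNK_BYTES && bufferedDocs >= MIN_DOCS;
    }

    public static void main(String[] args) {
        // Two large docs alone are not enough; the chunk keeps buffering.
        System.out.println(shouldFlush(20_000, 2));
    }
}
```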






[jira] [Created] (LUCENE-4510) when a test's heart beats it should also throw up (dump stack of all threads)

2012-10-26 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-4510:
--

 Summary: when a test's heart beats it should also throw up (dump 
stack of all threads)
 Key: LUCENE-4510
 URL: https://issues.apache.org/jira/browse/LUCENE-4510
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless


We've had numerous cases where tests were hung but the "operator" of that 
particular Jenkins instance struggles to properly get a stack dump of all 
threads and e.g. accidentally kills the process instead (rather awful that the 
same powerful tool "kill" can be used both to get stack traces and to destroy 
the process...).

Is there some way the test infra could do this for us, e.g. when it prints the 
HEARTBEAT message?
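
The test infra here is Java (where the runner could use Thread.getAllStackTraces()); as a language-neutral sketch of the idea, here is the Python analogue of printing all live threads' stacks alongside the heartbeat:

```python
import sys
import threading
import traceback

def heartbeat_with_stacks() -> str:
    """Build a HEARTBEAT message that also dumps the stack of every
    live thread (an analogy sketch, not the actual test-runner code)."""
    lines = ["HEARTBEAT"]
    frames = sys._current_frames()  # maps thread id -> topmost frame
    for thread in threading.enumerate():
        lines.append("--- thread %s ---" % thread.name)
        frame = frames.get(thread.ident)
        if frame is not None:
            lines.extend(entry.rstrip() for entry in traceback.format_stack(frame))
    return "\n".join(lines)

print(heartbeat_with_stacks().splitlines()[0])  # HEARTBEAT
```

The point is that the dump is taken from inside the process, so the operator never has to race `kill` against the hung test.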

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-1972) Need additional query stats in admin interface - median, 95th and 99th percentile

2012-10-26 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated SOLR-1972:


Attachment: SOLR-1972_metrics.patch

This patch declares a MetricsRegistry in the RequestHandlerBase constructor, 
and uses that to ensure that the various metrics are held per-handler.

There doesn't seem to be any way of setting the metric names after 
construction, so I don't think we'll be able to write the handler path into the 
metrics, unfortunately.  But you'll be able to tell the various handlers apart 
by accessing the data through the existing SOLR JMX beans.

> Need additional query stats in admin interface - median, 95th and 99th 
> percentile
> -
>
> Key: SOLR-1972
> URL: https://issues.apache.org/jira/browse/SOLR-1972
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Affects Versions: 1.4
>Reporter: Shawn Heisey
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 4.1
>
> Attachments: elyograg-1972-3.2.patch, elyograg-1972-3.2.patch, 
> elyograg-1972-trunk.patch, elyograg-1972-trunk.patch, 
> SOLR-1972-branch3x-url_pattern.patch, SOLR-1972-branch4x.patch, 
> SOLR-1972-branch4x.patch, SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, 
> SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, 
> SOLR-1972_metrics.patch, SOLR-1972.patch, SOLR-1972.patch, SOLR-1972.patch, 
> SOLR-1972.patch, SOLR-1972-url_pattern.patch
>
>
> I would like to see more detailed query statistics from the admin GUI.  This 
> is what you can get now:
> requests : 809
> errors : 0
> timeouts : 0
> totalTime : 70053
> avgTimePerRequest : 86.59209
> avgRequestsPerSecond : 0.8148785 
> I'd like to see more data on the time per request - median, 95th percentile, 
> 99th percentile, and any other statistical function that makes sense to 
> include.  In my environment, the first bunch of queries after startup tend to 
> take several seconds each.  I find that the average value tends to be useless 
> until it has several thousand queries under its belt and the caches are 
> thoroughly warmed.  The statistical functions I have mentioned would quickly 
> eliminate the influence of those initial slow queries.
> The system will have to store individual data about each query.  I don't know 
> if this is something Solr does already.  It would be nice to have a 
> configurable count of how many of the most recent data points are kept, to 
> control the amount of memory the feature uses.  The default value could be 
> something like 1024 or 4096.
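
The bounded-buffer percentile idea above can be sketched as follows (hypothetical class and method names; Alan's actual patch uses the Metrics library's MetricsRegistry per handler instead):

```python
import math
from collections import deque

class QueryStats:
    """Keep only the N most recent request times and report percentiles,
    so slow startup queries age out of the window."""

    def __init__(self, max_samples: int = 1024):
        self.samples = deque(maxlen=max_samples)  # bounded memory

    def record(self, millis: float) -> None:
        self.samples.append(millis)

    def percentile(self, p: float) -> float:
        ordered = sorted(self.samples)
        rank = max(1, math.ceil(p / 100 * len(ordered)))  # nearest-rank method
        return ordered[rank - 1]

stats = QueryStats()
for millis in [5000, 5000, 10, 12, 11, 13, 9, 10, 12, 11]:  # two slow warm-ups
    stats.record(millis)
print(stats.percentile(50))  # 11: the median ignores the warm-up spikes
print(stats.percentile(99))  # 5000: the tail still surfaces them
```

This illustrates the motivation in the description: the median is immune to the initial slow queries that dominate a plain average, while the memory cost stays fixed at `max_samples` entries.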

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4509) Make CompressingStoredFieldsFormat the new default StoredFieldsFormat impl

2012-10-26 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13485187#comment-13485187
 ] 

Adrien Grand commented on LUCENE-4509:
--

bq. How would this work with large documents that might be > 16KB in size?

Actually 16 kB is the minimum size of an uncompressed chunk of documents. 
CompressingStoredFieldsWriter fills a buffer with documents until its size is 
>= 16 kB, compresses it and then flushes to disk. If all documents are greater 
than 16 kB then all chunks will contain exactly one document.

It also means you could end up having a chunk that is made of 15 documents of 
1 kB and 1 document of 256 kB. (And in this case there is no performance problem 
for the first 15 documents, given that uncompression stops as soon as enough 
data has been uncompressed.)
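
A minimal simulation of that chunking rule (a simplification: it assumes document sizes are known up front, whereas the real writer works on a byte buffer):

```python
def chunk_docs(doc_sizes, trigger=16 * 1024):
    """Group documents into chunks: buffer until the total size reaches
    the trigger, then flush. Returns a list of chunks (lists of sizes)."""
    chunks, current, current_bytes = [], [], 0
    for size in doc_sizes:
        current.append(size)
        current_bytes += size
        if current_bytes >= trigger:
            chunks.append(current)
            current, current_bytes = [], 0
    if current:  # flush any trailing partial chunk
        chunks.append(current)
    return chunks

# Every doc larger than 16 kB -> one-document chunks:
print(len(chunk_docs([20 * 1024] * 3)))             # 3 chunks
# 15 docs of 1 kB followed by one 256 kB doc -> one mixed chunk:
print(len(chunk_docs([1024] * 15 + [256 * 1024])))  # 1 chunk
```

The second call reproduces the mixed-chunk case from the comment: the 15 small documents do not reach 16 kB on their own, so the 256 kB document lands in the same chunk.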

bq. Does this mean with the default CompressingStoredFieldsIndex setting that 
now he pays 12 bytes/doc in RAM (because docsize > blocksize)? If so, let's 
think of ways to optimize that case.

Probably less than 12. The default CompressingStoredFieldsIndex impl uses two 
packed ints arrays of size numChunks (the number of chunks, <= numDocs). The 
first array stores the doc ID of the first document of the chunk while the 
second array stores the start offset of the chunk of documents in the fields 
data file.

So if your fields data file is fdtBytes bytes, the actual memory usage is ~ 
{{numChunks * (ceil(log2(numDocs)) + ceil(log2(fdtBytes))) / 8}}.

For example, if there are 10M documents of 16kB (fdtBytes ~= 160GB), we'll have 
numChunks == numDocs and a memory usage per document of (24 + 38) / 8 = 7.75 => 
~ 77.5 MB of memory overall.
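
That arithmetic can be checked directly (same formula as above; the function name and the exact constants are mine, chosen to match the example):

```python
import math

def stored_fields_index_bytes(num_chunks: int, num_docs: int, fdt_bytes: int) -> float:
    """Estimate for the two packed-ints arrays: one first-doc-ID and one
    file offset per chunk, at ceil(log2(...)) bits per value."""
    bits_per_chunk = math.ceil(math.log2(num_docs)) + math.ceil(math.log2(fdt_bytes))
    return num_chunks * bits_per_chunk / 8

num_docs = 10_000_000        # 10M docs of ~16 kB each
fdt_bytes = 160 * 1024 ** 3  # fields data file of ~160 GB
mem = stored_fields_index_bytes(num_docs, num_docs, fdt_bytes)
print(mem / num_docs)  # 7.75 bytes per document: (24 + 38) / 8
print(mem / 1e6)       # 77.5 MB overall
```

Here ceil(log2(10M)) = 24 bits for the doc IDs and ceil(log2(160 GB)) = 38 bits for the offsets, reproducing the 7.75 bytes/doc figure.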

bq. 100GB of compressed stored fields == 6.25M index entries == 75MB RAM

Thanks for the figures, Yonik! Did you use RamUsageEstimator to compute the 
amount of used memory?


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2878) Allow Scorer to expose positions and payloads aka. nuke spans

2012-10-26 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13485183#comment-13485183
 ] 

Alan Woodward commented on LUCENE-2878:
---

I've committed a whole bunch more javadocs, and a package.html.

There's still a big nocommit in SloppyPhraseScorer, but other than that we're 
looking good.  We could probably do with more test coverage, but then that's 
never not the case, so...

> Allow Scorer to expose positions and payloads aka. nuke spans 
> --
>
> Key: LUCENE-2878
> URL: https://issues.apache.org/jira/browse/LUCENE-2878
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: Positions Branch
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
>  Labels: gsoc2011, gsoc2012, lucene-gsoc-11, lucene-gsoc-12, 
> mentor
> Fix For: Positions Branch
>
> Attachments: LUCENE-2878-OR.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878_trunk.patch, LUCENE-2878_trunk.patch, 
> PosHighlighter.patch, PosHighlighter.patch
>
>
> Currently we have two somewhat separate types of queries: the ones which can 
> make use of positions (mainly spans) and payloads (spans). Yet Span*Query 
> doesn't really do scoring comparable to what other queries do, and at the end 
> of the day they duplicate a lot of code all over Lucene. Span*Queries are 
> also limited to other Span*Query instances, such that you can not use a 
> TermQuery or a BooleanQuery with SpanNear or anything like that. 
> Besides the Span*Query limitation, other queries lack a quite interesting 
> feature: they can not score based on term proximity, since scorers don't 
> expose any positional information. All those problems bugged me for a while, 
> so I started working on that using the bulkpostings API. I would have done 
> that first cut on trunk, but TermScorer works on a BlockReader that does not 
> expose positions, while the one in this branch does. I started adding a new 
> Positions class which users can pull from a scorer; to prevent unnecessary 
> positions enums I added ScorerContext#needsPositions and eventually 
> Scorer#needsPayloads to create the corresponding enum on demand. Yet, 
> currently only TermQuery / TermScorer implements this API and others simply 
> return null instead. 
> To show that the API really works, and that our BulkPostings work fine with 
> positions too, I cut over TermSpanQuery to use a TermScorer under the hood and 
> nuked TermSpans entirely. A nice side effect of this was that the Position 
> BulkReading implementation got some exercise, and it now :) works with 
> positions, while Payloads for bulk reading are kind of experimental in the 
> patch and only work with the Standard codec. 
> So all spans now work on top of TermScorer (I truly hate spans since today), 
> including the ones that need Payloads (StandardCodec ONLY)!!  I didn't bother 
> to implement the other codecs yet since I want to get feedback on the API and 
> on this first cut before I go on with it. I will upload the corresponding 
> patch in a minute. 
> I also had to cut over SpanQuery.getSpans(IR) to 
> SpanQuery.getSpans(AtomicReaderContext) which I should probably do on trunk 
> first but after that pain today I need a break first :).
> The patch passes all core tests 
> (org.apache.lucene.search.highlight.HighlighterTest still fails but I didn't 
> look into the MemoryIndex BulkPostings API yet)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4509) Make CompressingStoredFieldsFormat the new default StoredFieldsFormat impl

2012-10-26 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13485182#comment-13485182
 ] 

Robert Muir commented on LUCENE-4509:
-

I'd say to make progress for the default we want to look at:
* make a concrete impl of CompressingStoredFieldsFormat called Lucene41, 
hardwired to the defaults, and add file format docs? This way, we don't have 
to support all of the Compression options/layouts in the default codec (if 
someone wants that, encourage them to make their own codec with the Compressed 
settings they like). Back compat is much less costly as the parameters are 
fixed. File format docs are easier :)
* should we s/uncompression/decompression/ across the board?
* tests already look pretty good. I can try to work on some additional ones to 
try to break it like we did with BlockPF.
* there is some scary stuff (literal decompressions etc) uncovered by the 
clover report: 
https://builds.apache.org/job/Lucene-Solr-Clover-4.x/49/clover-report/org/apache/lucene/codecs/compressing/CompressionMode.html
We should make sure any special cases are tested.



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4509) Make CompressingStoredFieldsFormat the new default StoredFieldsFormat impl

2012-10-26 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13485163#comment-13485163
 ] 

Robert Muir commented on LUCENE-4509:
-

I think it's ok too. I just didn't know if we could do something trivial like 
store the offsets-within-the-blocks as packed ints, so that it optimizes for 
this case anyway (offset=0) and only takes 8 bytes + 1 bit instead of 12 bytes.

But I don't have a real understanding of what this thing does when docsize > 
blocksize; I haven't dug in that much.

In any case I think it should be the default: it's fast and also works for tiny 
documents with lots of fields. I think people expect the index to be compressed 
in some way, and the stored fields are really wasteful today.
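
The saving from the "8 bytes + 1 bit instead of 12 bytes" idea can be quantified with back-of-the-envelope arithmetic (reusing the 6.25M-entry figure quoted elsewhere in this thread; a sketch, not Lucene code):

```python
NUM_DOCS = 6_250_000  # one index entry per doc, e.g. 100 GB / 16 kB chunks

naive_bytes = NUM_DOCS * 12                  # ~12 bytes per index entry
packed_bytes = NUM_DOCS * 8 + NUM_DOCS // 8  # 8-byte offset + 1-bit flag

print(naive_bytes / 1e6)   # 75.0 MB
print(packed_bytes / 1e6)  # 50.78125 MB
```

So the packed layout would shave roughly a third off the RAM cost in the one-chunk-per-document case.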


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-3998) Atomic update on uniqueKey field itself causes duplicate document

2012-10-26 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley reassigned SOLR-3998:
--

Assignee: Yonik Seeley

> Atomic update on uniqueKey field itself causes duplicate document
> -
>
> Key: SOLR-3998
> URL: https://issues.apache.org/jira/browse/SOLR-3998
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.0
> Environment: Windows XP and RH Linux
>Reporter: Eric Spencer
>Assignee: Yonik Seeley
> Attachments: solr_atomic_update_unique_key_bug_t.java
>
>
> Issuing an atomic update which includes the uniqueKey field itself will cause 
> Solr to insert a second document with the same uniqueKey thereby violating 
> uniqueness. A non-atomic update will "correct" the document. Attached is a 
> JUnit test case that demonstrates the issue against the collection1 core in 
> the standard Solr download.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-4476) maven deployment scripts dont work (except from the machine you made the RC from)

2012-10-26 Thread Steven Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Rowe resolved LUCENE-4476.
-

   Resolution: Fixed
Fix Version/s: 5.0
   4.1
Lucene Fields: New,Patch Available  (was: New)

I committed Robert's patch:

- trunk: [r1402630|http://svn.apache.org/viewvc?rev=1402630&view=rev]
- branch_4x: [r1402637|http://svn.apache.org/viewvc?rev=1402637&view=rev]

> maven deployment scripts dont work (except from the machine you made the RC 
> from)
> -
>
> Key: LUCENE-4476
> URL: https://issues.apache.org/jira/browse/LUCENE-4476
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Assignee: Steven Rowe
> Fix For: 4.1, 5.0
>
> Attachments: LUCENE-4476.patch, LUCENE-4476.patch, LUCENE-4476.patch
>
>
> Currently the maven process described in 
> http://wiki.apache.org/lucene-java/PublishMavenArtifacts does not work (on 
> mac)
> It worked fine for the 4.0-alpha and 4.0-beta releases.
> NOTE: This appears to be working on Linux so I am going with that. But it 
> seems strange that it doesn't work on Mac.
>  
> {noformat}
> artifact:install-provider] Installing provider: 
> org.apache.maven.wagon:wagon-ssh:jar:1.0-beta-7:runtime
> [artifact:pom] Downloading: 
> org/apache/lucene/lucene-parent/4.0.0/lucene-parent-4.0.0.pom from repository 
> sonatype.releases at http://oss.sonatype.org/content/repositories/releases
> [artifact:pom] Unable to locate resource in repository
> [artifact:pom] [INFO] Unable to find resource 
> 'org.apache.lucene:lucene-parent:pom:4.0.0' in repository sonatype.releases 
> (http://oss.sonatype.org/content/repositories/releases)
> [artifact:pom] Downloading: 
> org/apache/lucene/lucene-parent/4.0.0/lucene-parent-4.0.0.pom from repository 
> central at http://repo1.maven.org/maven2
> [artifact:pom] Unable to locate resource in repository
> [artifact:pom] [INFO] Unable to find resource 
> 'org.apache.lucene:lucene-parent:pom:4.0.0' in repository central 
> (http://repo1.maven.org/maven2)
> [artifact:pom] An error has occurred while processing the Maven artifact 
> tasks.
> [artifact:pom]  Diagnosis:
> [artifact:pom] 
> [artifact:pom] Unable to initialize POM lucene-test-framework-4.0.0.pom: 
> Cannot find parent: org.apache.lucene:lucene-parent for project: 
> org.apache.lucene:lucene-test-framework:jar:null for project 
> org.apache.lucene:lucene-test-framework:jar:null
> [artifact:pom] Unable to download the artifact from any repository
> BUILD FAILED
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4509) Make CompressingStoredFieldsFormat the new default StoredFieldsFormat impl

2012-10-26 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13485154#comment-13485154
 ] 

Yonik Seeley commented on LUCENE-4509:
--

Nice timing Adrien... I was just going to ask how we could most easily enable 
this in Solr (or if it should in fact be the default).

One data point: 100GB of compressed stored fields == 6.25M index entries == 
75MB RAM
That seems decent for a default.
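
That data point is straightforward to reproduce (assuming ~16 kB chunks and the ~12 bytes per index entry quoted in the issue description):

```python
fields_data_bytes = 100e9  # 100 GB of compressed stored fields
chunk_bytes = 16e3         # one index entry per ~16 kB chunk
bytes_per_entry = 12       # upper bound from the issue description

entries = fields_data_bytes / chunk_bytes
ram_bytes = entries * bytes_per_entry
print(entries / 1e6)    # 6.25 million index entries
print(ram_bytes / 1e6)  # 75.0 MB
```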



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: fyi LeaderIntegrationTest sometimes hangs

2012-10-26 Thread Robert Muir
I restarted Jenkins; I'll get it next time (this happened about an hour ago).

I was fumbling around in Windows here and accidentally killed the wrong
process (the Jenkins one instead).

On Fri, Oct 26, 2012 at 3:11 PM, Mark Miller  wrote:
> jconsole, visualvm, jstack from the cmd line.
>
> A stack trace would be super helpful.
>
> Also, a lot recently went in (depending on what time you mean by
> last night) that might relate to this.
>
> - Mark
>
> On Fri, Oct 26, 2012 at 3:07 PM, Robert Muir  wrote:
>> I spun up an additional Jenkins last night (no crazy blackholes etc,
>> just running 'ant test' in a loop):
>>
>> LeaderIntegrationTest hung for about an hour before I killed it
>> (unfortunately I don't know how to get stack traces on Windows; there is
>> no kill -3).
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>
>
>
> --
> - Mark
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: fyi LeaderIntegrationTest sometimes hangs

2012-10-26 Thread Mark Miller
jconsole, visualvm, jstack from the cmd line.

A stack trace would be super helpful.

Also, a lot recently went in (depending on what time you mean by
last night) that might relate to this.

- Mark

On Fri, Oct 26, 2012 at 3:07 PM, Robert Muir  wrote:
> I spun up an additional Jenkins last night (no crazy blackholes etc,
> just running 'ant test' in a loop):
>
> LeaderIntegrationTest hung for about an hour before I killed it
> (unfortunately I don't know how to get stack traces on Windows; there is
> no kill -3).
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>



-- 
- Mark

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



fyi LeaderIntegrationTest sometimes hangs

2012-10-26 Thread Robert Muir
I spun up an additional Jenkins last night (no crazy blackholes etc,
just running 'ant test' in a loop):

LeaderIntegrationTest hung for about an hour before I killed it
(unfortunately I don't know how to get stack traces on Windows; there is
no kill -3).

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4509) Make CompressingStoredFieldsFormat the new default StoredFieldsFormat impl

2012-10-26 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13485115#comment-13485115
 ] 

Robert Muir commented on LUCENE-4509:
-

I am a strong +1 for this idea.

I only have one concern, about the defaults. How would this work with large 
documents (e.g. those massive HathiTrust book-documents) that might be > 16KB 
in size?

Does this mean with the default CompressingStoredFieldsIndex setting that now 
he pays 12 bytes/doc in RAM (because docsize > blocksize)?
If so, let's think of ways to optimize that case.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-4509) Make CompressingStoredFieldsFormat the new default StoredFieldsFormat impl

2012-10-26 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-4509:


 Summary: Make CompressingStoredFieldsFormat the new default 
StoredFieldsFormat impl
 Key: LUCENE-4509
 URL: https://issues.apache.org/jira/browse/LUCENE-4509
 Project: Lucene - Core
  Issue Type: Wish
  Components: core/store
Reporter: Adrien Grand
Priority: Minor


What would you think of making CompressingStoredFieldsFormat the new default 
StoredFieldsFormat?

Stored fields compression has many benefits :
 - it makes the I/O cache work for us,
 - file-based index replication/backup becomes cheaper.

Things to know:
 - even with incompressible data, there is less than 0.5% overhead with LZ4,
 - LZ4 compression requires ~ 16kB of memory and LZ4 HC compression requires ~ 
256kB,
 - LZ4 uncompression has almost no memory overhead,
 - on my low-end laptop, the LZ4 impl in Lucene uncompresses at ~ 300 MB/s.

I think we could use the same default parameters as in CompressingCodec :
 - LZ4 compression,
 - in-memory stored fields index that is very memory-efficient (less than 12 
bytes per block of compressed docs) and uses binary search to locate documents 
in the fields data file,
 - 16 kB blocks (small enough so that there is no major slow down when the 
whole index would fit into the I/O cache anyway, and large enough to provide 
interesting compression ratios ; for example Robert got a 0.35 compression 
ratio with the geonames.org database).

Any concerns?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3998) Atomic update on uniqueKey field itself causes duplicate document

2012-10-26 Thread Eric Spencer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13485069#comment-13485069
 ] 

Eric Spencer commented on SOLR-3998:


Obviously, the workaround is to make sure that you don't do an atomic update on 
the uniqueKey field itself (which I was sort of doing by accident anyway). I 
just don't think anything should be allowed to violate the uniqueKey constraint.

> Atomic update on uniqueKey field itself causes duplicate document
> -
>
> Key: SOLR-3998
> URL: https://issues.apache.org/jira/browse/SOLR-3998
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.0
> Environment: Windows XP and RH Linux
>Reporter: Eric Spencer
> Attachments: solr_atomic_update_unique_key_bug_t.java
>
>
> Issuing an atomic update which includes the uniqueKey field itself will cause 
> Solr to insert a second document with the same uniqueKey thereby violating 
> uniqueness. A non-atomic update will "correct" the document. Attached is a 
> JUnit test case that demonstrates the issue against the collection1 core in 
> the standard Solr download.
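The following is a speculative illustration of how such a duplicate can arise; Solr's real update path differs, and all names here are hypothetical. If the delete term used for overwrite-by-id were built from the raw field value before the atomic "set" operation is resolved, it would match no existing document, so the old document would never be replaced:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of overwrite-by-uniqueKey: adding a doc first deletes any doc
// whose "id" equals the delete term, then appends the new doc.
public class AtomicUpdateSketch {
    static List<Map<String, Object>> index = new ArrayList<>();

    static void add(Map<String, Object> doc, Object deleteTerm) {
        index.removeIf(d -> d.get("id").equals(deleteTerm));
        index.add(doc);
    }

    public static void main(String[] args) {
        add(new HashMap<>(Map.of("id", "A", "v", 1)), "A");

        // Atomic update {"id": {"set": "A"}, "v": {"set": 2}}: the resolved
        // doc carries id "A", but if the delete term is the *unresolved*
        // value {set=A}, it matches nothing and the old doc survives.
        Object unresolvedId = Map.of("set", "A");
        add(new HashMap<>(Map.of("id", "A", "v", 2)), unresolvedId);

        System.out.println(index.size()); // prints 2: duplicate id "A"
    }
}
```

Whatever the exact internal cause, the symptom matches the report: two documents sharing one uniqueKey until a non-atomic update "corrects" it.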




[jira] [Updated] (SOLR-3998) Atomic update on uniqueKey field itself causes duplicate document

2012-10-26 Thread Eric Spencer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Spencer updated SOLR-3998:
---

Attachment: solr_atomic_update_unique_key_bug_t.java

JUnit 4 test case demonstrating the problem.

> Atomic update on uniqueKey field itself causes duplicate document
> -
>
> Key: SOLR-3998
> URL: https://issues.apache.org/jira/browse/SOLR-3998
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.0
> Environment: Windows XP and RH Linux
>Reporter: Eric Spencer
> Attachments: solr_atomic_update_unique_key_bug_t.java
>
>
> Issuing an atomic update which includes the uniqueKey field itself will cause 
> Solr to insert a second document with the same uniqueKey thereby violating 
> uniqueness. A non-atomic update will "correct" the document. Attached is a 
> JUnit test case that demonstrates the issue against the collection1 core in 
> the standard Solr download.




[jira] [Updated] (SOLR-3998) Atomic update on uniqueKey field itself causes duplicate document

2012-10-26 Thread Eric Spencer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Spencer updated SOLR-3998:
---

Description: Issuing an atomic update which includes the uniqueKey field 
itself will cause Solr to insert a second document with the same uniqueKey 
thereby violating uniqueness. A non-atomic update will "correct" the document. 
Attached is a JUnit test case that demonstrates the issue against the 
collection1 core in the standard Solr download.  (was: Issuing an atomic update 
which includes the uniqueKey field itself will cause Solr to insert a second 
document with the same uniqueKey thereby violating uniqueness. A non-atomic 
update will "correct" the document.)

> Atomic update on uniqueKey field itself causes duplicate document
> -
>
> Key: SOLR-3998
> URL: https://issues.apache.org/jira/browse/SOLR-3998
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.0
> Environment: Windows XP and RH Linux
>Reporter: Eric Spencer
>
> Issuing an atomic update which includes the uniqueKey field itself will cause 
> Solr to insert a second document with the same uniqueKey thereby violating 
> uniqueness. A non-atomic update will "correct" the document. Attached is a 
> JUnit test case that demonstrates the issue against the collection1 core in 
> the standard Solr download.




[jira] [Created] (SOLR-3998) Atomic update on uniqueKey field itself causes duplicate document

2012-10-26 Thread Eric Spencer (JIRA)
Eric Spencer created SOLR-3998:
--

 Summary: Atomic update on uniqueKey field itself causes duplicate 
document
 Key: SOLR-3998
 URL: https://issues.apache.org/jira/browse/SOLR-3998
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.0
 Environment: Windows XP and RH Linux
Reporter: Eric Spencer


Issuing an atomic update which includes the uniqueKey field itself will cause 
Solr to insert a second document with the same uniqueKey thereby violating 
uniqueness. A non-atomic update will "correct" the document.




[jira] [Assigned] (LUCENE-4476) maven deployment scripts dont work (except from the machine you made the RC from)

2012-10-26 Thread Steven Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Rowe reassigned LUCENE-4476:
---

Assignee: Steven Rowe

> maven deployment scripts dont work (except from the machine you made the RC 
> from)
> -
>
> Key: LUCENE-4476
> URL: https://issues.apache.org/jira/browse/LUCENE-4476
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Assignee: Steven Rowe
> Attachments: LUCENE-4476.patch, LUCENE-4476.patch, LUCENE-4476.patch
>
>
> Currently the maven process described in 
> http://wiki.apache.org/lucene-java/PublishMavenArtifacts does not work (on 
> mac)
> It worked fine for the 4.0-alpha and 4.0-beta releases.
> NOTE: This appears to be working on Linux, so I am going with that. But it 
> seems strange that it doesn't work on Mac.
>  
> {noformat}
> [artifact:install-provider] Installing provider: 
> org.apache.maven.wagon:wagon-ssh:jar:1.0-beta-7:runtime
> [artifact:pom] Downloading: 
> org/apache/lucene/lucene-parent/4.0.0/lucene-parent-4.0.0.pom from repository 
> sonatype.releases at http://oss.sonatype.org/content/repositories/releases
> [artifact:pom] Unable to locate resource in repository
> [artifact:pom] [INFO] Unable to find resource 
> 'org.apache.lucene:lucene-parent:pom:4.0.0' in repository sonatype.releases 
> (http://oss.sonatype.org/content/repositories/releases)
> [artifact:pom] Downloading: 
> org/apache/lucene/lucene-parent/4.0.0/lucene-parent-4.0.0.pom from repository 
> central at http://repo1.maven.org/maven2
> [artifact:pom] Unable to locate resource in repository
> [artifact:pom] [INFO] Unable to find resource 
> 'org.apache.lucene:lucene-parent:pom:4.0.0' in repository central 
> (http://repo1.maven.org/maven2)
> [artifact:pom] An error has occurred while processing the Maven artifact 
> tasks.
> [artifact:pom]  Diagnosis:
> [artifact:pom] 
> [artifact:pom] Unable to initialize POM lucene-test-framework-4.0.0.pom: 
> Cannot find parent: org.apache.lucene:lucene-parent for project: 
> org.apache.lucene:lucene-test-framework:jar:null for project 
> org.apache.lucene:lucene-test-framework:jar:null
> [artifact:pom] Unable to download the artifact from any repository
> BUILD FAILED
> {noformat}




Jenkins build is back to normal : slow-io-beasting #4883

2012-10-26 Thread Charlie Cron
See 





Build failed in Jenkins: slow-io-beasting #4882

2012-10-26 Thread Charlie Cron
See 

--
[...truncated 13995 lines...]
[junit4:junit4]   2> 690920 T20 oazs.SessionTrackerImpl.run SessionTrackerImpl 
exited loop!
[junit4:junit4]   2> 693166 T19 oazs.NIOServerCnxn$Factory.run WARNING Ignoring 
unexpected runtime exception java.nio.channels.CancelledKeyException
[junit4:junit4]   2>at 
sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:55)
[junit4:junit4]   2>at 
sun.nio.ch.SelectionKeyImpl.readyOps(SelectionKeyImpl.java:69)
[junit4:junit4]   2>at 
org.apache.zookeeper.server.NIOServerCnxn$Factory.run(NIOServerCnxn.java:241)
[junit4:junit4]   2> 
[junit4:junit4]   2> 693166 T17 oazs.NIOServerCnxn.closeSock Closed socket 
connection for client /127.0.0.1:49875 which had sessionid 0x13a9dfc31710003
[junit4:junit4]   2> 693166 T51 oaz.ClientCnxn$SendThread.run Unable to read 
additional data from server sessionid 0x13a9dfc31710003, likely server has 
closed socket, closing socket connection and attempting reconnect
[junit4:junit4]   2> 693166 T17 oazs.NIOServerCnxn.closeSock Closed socket 
connection for client /127.0.0.1:49887 which had sessionid 0x13a9dfc31710004
[junit4:junit4]   2> 693166 T63 oaz.ClientCnxn$SendThread.run Unable to read 
additional data from server sessionid 0x13a9dfc31710004, likely server has 
closed socket, closing socket connection and attempting reconnect
[junit4:junit4]   2> 693166 T17 oazs.NIOServerCnxn.closeSock Closed socket 
connection for client /127.0.0.1:49869 which had sessionid 0x13a9dfc31710002
[junit4:junit4]   2> 693166 T37 oaz.ClientCnxn$SendThread.run Unable to read 
additional data from server sessionid 0x13a9dfc31710002, likely server has 
closed socket, closing socket connection and attempting reconnect
[junit4:junit4]   2> 693166 T17 oazs.NIOServerCnxn.closeSock Closed socket 
connection for client /127.0.0.1:49905 which had sessionid 0x13a9dfc31710005
[junit4:junit4]   2> 693166 T75 oaz.ClientCnxn$SendThread.run Unable to read 
additional data from server sessionid 0x13a9dfc31710005, likely server has 
closed socket, closing socket connection and attempting reconnect
[junit4:junit4]   2> 693166 T17 oazs.NIOServerCnxn.closeSock Closed socket 
connection for client /127.0.0.1:50590 which had sessionid 0x13a9dfc31710006
[junit4:junit4]   2> 693166 T19 oazs.NIOServerCnxn$Factory.run NIOServerCnxn 
factory exited run method
[junit4:junit4]   2> 693166 T87 oaz.ClientCnxn$SendThread.run Unable to read 
additional data from server sessionid 0x13a9dfc31710006, likely server has 
closed socket, closing socket connection and attempting reconnect
[junit4:junit4]   2> 693166 T17 oazs.FinalRequestProcessor.shutdown shutdown of 
request processor complete
[junit4:junit4]   2> 693166 T17 oasc.ChaosMonkey.monkeyLog monkey: stop shard! 
49864
[junit4:junit4]   2> 693166 T17 oasc.CoreContainer.shutdown Shutting down 
CoreContainer instance=12689343
[junit4:junit4]   2> 693166 T17 oasc.SolrCore.close [collection1]  CLOSING 
SolrCore org.apache.solr.core.SolrCore@343bb6
[junit4:junit4]   2> 693182 T17 oasu.DirectUpdateHandler2.close closing 
DirectUpdateHandler2{commits=0,autocommits=0,soft 
autocommits=0,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,adds=0,deletesById=0,deletesByQuery=0,errors=0,cumulative_adds=0,cumulative_deletesById=0,cumulative_deletesByQuery=0,cumulative_errors=0}
[junit4:junit4]   2> 693182 T17 oasc.SolrCore.decrefSolrCoreState Closing 
SolrCoreState
[junit4:junit4]   2> 693182 T17 oasu.DefaultSolrCoreState.closeIndexWriter 
SolrCoreState ref count has reached 0 - closing IndexWriter
[junit4:junit4]   2> 693182 T17 oasu.DefaultSolrCoreState.closeIndexWriter 
closing IndexWriter with IndexWriterCloser
[junit4:junit4]   2> 693182 T17 oasc.SolrCore.closeSearcher [collection1] 
Closing main searcher on request.
[junit4:junit4]   2> 693182 T39 oasc.Overseer$ClusterStateUpdater.amILeader 
According to ZK I (id=88556849173954562-127.0.0.1:49864_solr-n_00) am 
no longer a leader.
[junit4:junit4]   2> 693276 T76 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@1b9e7fc name:ZooKeeperConnection 
Watcher:127.0.0.1:49846/solr got event WatchedEvent state:Disconnected 
type:None path:null path:null type:None
[junit4:junit4]   2> 693276 T64 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@18bbc98 name:ZooKeeperConnection 
Watcher:127.0.0.1:49846/solr got event WatchedEvent state:Disconnected 
type:None path:null path:null type:None
[junit4:junit4]   2> 693276 T76 oascc.ConnectionManager.process zkClient has 
disconnected
[junit4:junit4]   2> 693276 T64 oascc.ConnectionManager.process zkClient has 
disconnected
[junit4:junit4]   2> 693276 T88 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@18b4ccb name:ZooKeeperConnection 
Watcher:127.0.0.1:49846/solr got event WatchedEvent state:Disconnected 
type

[JENKINS] Solr-Artifacts-trunk - Build # 2008 - Failure

2012-10-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-Artifacts-trunk/2008/

No tests ran.

Build Log:
[...truncated 7538 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Solr-Artifacts-trunk/solr/build.xml:373:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Solr-Artifacts-trunk/lucene/common-build.xml:1958:
 Can't get https://issues.apache.org/jira/rest/api/2/project/SOLR to 
/usr/home/hudson/hudson-slave/workspace/Solr-Artifacts-trunk/solr/build/solr/svn-export/solr/docs/changes/jiraVersionList.json

Total time: 1 minute 41 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Publishing Javadoc
Email was triggered for: Failure
Sending email for trigger: Failure





[jira] [Commented] (LUCENE-4508) Stored fields should require as little resources as possible when no field is stored

2012-10-26 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13485037#comment-13485037
 ] 

Adrien Grand commented on LUCENE-4508:
--

oh right, sorry!



> Stored fields should require as little resources as possible when no field is 
> stored
> 
>
> Key: LUCENE-4508
> URL: https://issues.apache.org/jira/browse/LUCENE-4508
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 4.1
>
>
> Currently, stored fields may require a non-negligible amount of memory and/or 
> disk space even if no field is actually stored. We should find a way to 
> reduce these requirements.




[jira] [Commented] (LUCENE-4508) Stored fields should require as little resources as possible when no field is stored

2012-10-26 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13485029#comment-13485029
 ] 

Robert Muir commented on LUCENE-4508:
-

actually we can just throw an exception when startDoc(n) and n > 0... I wrote 
this up on 2025 :)

> Stored fields should require as little resources as possible when no field is 
> stored
> 
>
> Key: LUCENE-4508
> URL: https://issues.apache.org/jira/browse/LUCENE-4508
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 4.1
>
>
> Currently, stored fields may require a non-negligible amount of memory and/or 
> disk space even if no field is actually stored. We should find a way to 
> reduce these requirements.




[jira] [Commented] (LUCENE-4508) Stored fields should require as little resources as possible when no field is stored

2012-10-26 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13485026#comment-13485026
 ] 

Adrien Grand commented on LUCENE-4508:
--

bq. Well I still think we could offer the option as a start

How do you imagine it? A new StoredFieldsFormat impl where 
StoredFieldsReader.visitDocument has an empty body and 
StoredFieldsWriter.writeField throws UnsupportedOperationException?
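The shape being discussed can be sketched in standalone Java. This is not Lucene's actual StoredFieldsFormat API, just a minimal analogue of the idea: when no field is stored, reads are no-ops and any attempt to write fails fast.

```java
// Hypothetical minimal analogue of a "no stored fields" implementation:
// the reader's visit is an empty body, the writer rejects every field.
interface FieldsWriter { void writeField(String name, String value); }
interface FieldsReader { void visitDocument(int docId); }

class NoStoredFieldsWriter implements FieldsWriter {
    @Override public void writeField(String name, String value) {
        throw new UnsupportedOperationException("no stored fields configured");
    }
}

class NoStoredFieldsReader implements FieldsReader {
    @Override public void visitDocument(int docId) {
        // intentionally empty: there is nothing to read back
    }
}

public class NoFieldsSketch {
    public static void main(String[] args) {
        new NoStoredFieldsReader().visitDocument(0); // costs nothing
        try {
            new NoStoredFieldsWriter().writeField("title", "x");
        } catch (UnsupportedOperationException e) {
            System.out.println("write rejected");
        }
    }
}
```

The design question then is only where the misuse should surface: at writeField, or (as Robert suggests) as early as startDoc.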


> Stored fields should require as little resources as possible when no field is 
> stored
> 
>
> Key: LUCENE-4508
> URL: https://issues.apache.org/jira/browse/LUCENE-4508
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 4.1
>
>
> Currently, stored fields may require a non-negligible amount of memory and/or 
> disk space even if no field is actually stored. We should find a way to 
> reduce these requirements.




[jira] [Created] (SOLR-3997) Have solr config and index files on HDFS.

2012-10-26 Thread James Ji (JIRA)
James Ji created SOLR-3997:
--

 Summary: Have solr config and index files on HDFS.
 Key: SOLR-3997
 URL: https://issues.apache.org/jira/browse/SOLR-3997
 Project: Solr
  Issue Type: Wish
  Components: SearchComponents - other
Affects Versions: 4.0
Reporter: James Ji


We are currently working on having Solr files read from HDFS. We extended some 
of the classes so as to avoid modifying the original Solr code and keep it 
compatible with future releases. So here comes the question: I found that in 
QueryElevationComponent there is a piece of code checking whether elevate.xml 
exists on the local file system. I am wondering if there is a way to bypass this?
QueryElevationComponent.inform() {
  File fC = new File(core.getResourceLoader().getConfigDir(), f);
  File fD = new File(core.getDataDir(), f);
  if (fC.exists() == fD.exists()) {
    throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
        "QueryElevationComponent missing config file: '" + f + "\n"
        + "either: " + fC.getAbsolutePath() + " or "
        + fD.getAbsolutePath() + " must exist, but not both.");
  }
  if (fC.exists()) {
    exists = true;
    log.info("Loading QueryElevation from: " + fC.getAbsolutePath());
    Config cfg = new Config(core.getResourceLoader(), f);
    elevationCache.put(null, loadElevationMap(cfg));
  }
}




[jira] [Commented] (LUCENE-2878) Allow Scorer to expose positions and payloads aka. nuke spans

2012-10-26 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13485017#comment-13485017
 ] 

Simon Willnauer commented on LUCENE-2878:
-

Alan, FYI - I committed some refactorings (renamed Scorer#positions to 
Scorer#intervals) etc., so you should update. I also committed your latest 
patch.

> Allow Scorer to expose positions and payloads aka. nuke spans 
> --
>
> Key: LUCENE-2878
> URL: https://issues.apache.org/jira/browse/LUCENE-2878
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: Positions Branch
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
>  Labels: gsoc2011, gsoc2012, lucene-gsoc-11, lucene-gsoc-12, 
> mentor
> Fix For: Positions Branch
>
> Attachments: LUCENE-2878-OR.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878_trunk.patch, LUCENE-2878_trunk.patch, 
> PosHighlighter.patch, PosHighlighter.patch
>
>
> Currently we have two somewhat separate types of queries: those which can 
> make use of positions (mainly spans) and payloads (spans). Yet Span*Query 
> doesn't really do scoring comparable to what other queries do, and at the end 
> of the day they duplicate a lot of code all over Lucene. Span*Queries are 
> also limited to other Span*Query instances, such that you cannot use a 
> TermQuery or a BooleanQuery with SpanNear or anything like that. 
> Beside the Span*Query limitation, other queries lack a quite interesting 
> feature: they cannot score based on term proximity, since scorers don't 
> expose any positional information. All those problems bugged me for a while 
> now, so I started working on this using the bulkpostings API. I would have 
> done that first cut on trunk, but TermScorer there works on a BlockReader 
> that does not expose positions, while the one in this branch does. I started 
> adding a new Positions class which users can pull from a scorer; to prevent 
> unnecessary positions enums I added ScorerContext#needsPositions and 
> eventually Scorer#needsPayloads to create the corresponding enum on demand. 
> Yet, currently only TermQuery / TermScorer implements this API and others 
> simply return null instead. 
> To show that the API really works, and that our BulkPostings work fine with 
> positions too, I cut over TermSpanQuery to use a TermScorer under the hood 
> and nuked TermSpans entirely. A nice side effect of this was that the 
> Position BulkReading implementation got some exercise, and it now :) works 
> entirely with positions, while payloads for bulk reading are kind of 
> experimental in the patch and only work with the Standard codec. 
> So all spans now work on top of TermScorer (I truly hate spans as of today), 
> including the ones that need payloads (StandardCodec ONLY)!! I didn't bother 
> to implement the other codecs yet since I want to get feedback on the API 
> and on this first cut before I go on with it. I will upload the 
> corresponding patch in a minute. 
> I also had to cut over SpanQuery.getSpans(IR) to 
> SpanQuery.getSpans(AtomicReaderContext), which I should probably do on trunk 
> first, but after that pain today I need a break first :).
> The patch passes all core tests 
> (org.apache.lucene.search.highlight.HighlighterTest still fails, but I 
> didn't look into the MemoryIndex BulkPostings API yet).




Re: Lucene/Solr 4.01 / 4.1

2012-10-26 Thread Erick Erickson
So, like, I have to keep track of all the revisions? I mean like you think
I'm organized or something ...

Might need some coaching when it's time to merge, but that seems like
the best idea

Thanks,
Erick

On Fri, Oct 26, 2012 at 11:14 AM, Robert Muir  wrote:
> On Fri, Oct 26, 2012 at 11:07 AM, Erick Erickson
>  wrote:
>> I've got some stuff I'm interested in having bake some more,
>> I'm playing fast and loose with core loading (SOLR-1293 and associated).
>> Particularly what's up with the interaction, if any, between this and
>> SolrCloud (I expect that this is just not supported in SolrCloud, but...)
>>
>> It's going to get a bit awkward to manage soon (yeah, I know that's my
>> problem). So far I haven't checked any of this in.
>
> I would recommend committing to trunk when you are comfortable. Then
> give it time in jenkins or whatever.
>
> After you feel like its baked, then merge the relevant revisions to 4.x?
>




[jira] [Resolved] (SOLR-3612) Race condition when starting an embedded zk ensemble.

2012-10-26 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-3612.
---

Resolution: Fixed

> Race condition when starting an embedded zk ensemble.
> -
>
> Key: SOLR-3612
> URL: https://issues.apache.org/jira/browse/SOLR-3612
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 4.0
>
>
> This affects example 3 from the wiki. It seems there is a race here, and at 
> one time the config just happened to upload fast enough? Now, sometimes other 
> instances in the ensemble try to load the config faster than it is uploaded 
> and cannot find key files (solrconfig, schema). Not sure I have a great 
> simple solution right now, but at minimum I can add in a wait of 10 seconds 
> or so - and add some buffer time. A better solution could come later.




[jira] [Updated] (SOLR-3612) Race condition when starting an embedded zk ensemble.

2012-10-26 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-3612:
--

Fix Version/s: (was: 4.1)
   4.0

I added a longer wait for 4.0.

> Race condition when starting an embedded zk ensemble.
> -
>
> Key: SOLR-3612
> URL: https://issues.apache.org/jira/browse/SOLR-3612
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 4.0
>
>
> This affects example 3 from the wiki. It seems there is a race here, and at 
> one time the config just happened to upload fast enough? Now, sometimes other 
> instances in the ensemble try to load the config faster than it is uploaded 
> and cannot find key files (solrconfig, schema). Not sure I have a great 
> simple solution right now, but at minimum I can add in a wait of 10 seconds 
> or so - and add some buffer time. A better solution could come later.




[jira] [Resolved] (SOLR-3621) Fix concurrency race around newIndexWriter

2012-10-26 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-3621.
---

   Resolution: Fixed
Fix Version/s: 5.0

Have not seen any reports of problems here in a while, and all this has 
hardened a fair amount by now.

> Fix concurrency race around newIndexWriter 
> ---
>
> Key: SOLR-3621
> URL: https://issues.apache.org/jira/browse/SOLR-3621
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 4.1, 5.0
>
> Attachments: SOLR--3621.patch
>
>
> When I did the first big refactor on the update handler, I was trying to never 
> close the index writer - I had to give in on this goal due to the replication 
> handler - it requires rebooting the IndexWriter. At the time, I settled for 
> allowing a little race that didn't show up as an issue in tests - this IW 
> reboot was always a bit of a hack in the past anyhow.
> Now that the dust has settled, we should make this airtight though. I'd like 
> to make opening a new IndexWriter a first-class citizen rather than a hacky 
> method only used minimally for replication to reboot things. It should be a 
> solid API that is valid for any uses down the road.
> For some IW config changes, we may want to do it in 'some' cases on reload.
> To do this, we have to start ref counting IW use - so that we only actually 
> open a new one and close the old one when it's not in use at all.
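The ref-counting idea in the quoted description can be sketched in plain Java. This is a simplified, single-lock sketch with hypothetical names, not Solr's actual SolrCoreState code: callers acquire the writer before use and release it after, and a reopen only swaps in a fresh writer once no one holds the old one.

```java
// Hypothetical sketch of ref-counted writer replacement: reopen() blocks
// until all acquired references are released, then closes the old writer
// and installs a new one.
public class RefCountedWriter {
    static class Writer {
        final int id;
        boolean closed;
        Writer(int id) { this.id = id; }
    }

    private Writer current = new Writer(1);
    private int refs = 0;

    public synchronized Writer acquire() {
        refs++;
        return current;
    }

    public synchronized void release() {
        refs--;
        notifyAll(); // wake a pending reopen()
    }

    /** Blocks until the current writer is unused, then swaps in a new one. */
    public synchronized Writer reopen() throws InterruptedException {
        while (refs > 0) {
            wait();
        }
        current.closed = true;            // safe: nobody is using it
        current = new Writer(current.id + 1);
        return current;
    }

    public static void main(String[] args) throws InterruptedException {
        RefCountedWriter pool = new RefCountedWriter();
        Writer w = pool.acquire();
        pool.release();
        Writer w2 = pool.reopen();
        System.out.println(w2.id + " " + w.closed); // prints: 2 true
    }
}
```

The point of the design is exactly what the issue asks for: "rebooting" the writer becomes a first-class, race-free operation instead of an ad-hoc close/open used only by replication.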




[jira] [Resolved] (SOLR-3708) Add hashcode to ClusterState so that structures built based on the ClusterState can be easily cached.

2012-10-26 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-3708.
---

   Resolution: Fixed
Fix Version/s: 5.0

> Add hashcode to ClusterState so that structures built based on the 
> ClusterState can be easily cached.
> -
>
> Key: SOLR-3708
> URL: https://issues.apache.org/jira/browse/SOLR-3708
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 4.1, 5.0
>
> Attachments: SOLR-3708.patch
>
>





[jira] [Commented] (SOLR-3041) Solrs using SolrCloud feature for having shared config in ZK, might not all start successfully when started for the first time simultaneously

2012-10-26 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13485005#comment-13485005
 ] 

Mark Miller commented on SOLR-3041:
---

The overall issue is pretty much solved by the new zkCli tool I think - you can 
just use that to upload your config before starting up.

In terms of being more robust when not using that tool, I guess that is still 
something to consider here.

> Solrs using SolrCloud feature for having shared config in ZK, might not all 
> start successfully when started for the first time simultaneously
> -
>
> Key: SOLR-3041
> URL: https://issues.apache.org/jira/browse/SOLR-3041
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.0-ALPHA
> Environment: Exact version: 
> https://builds.apache.org/job/Solr-trunk/1718/artifact/artifacts/apache-solr-4.0-2011-12-28_08-33-55.tgz
>Reporter: Per Steffensen
>Assignee: Mark Miller
> Fix For: 4.1
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> Starting Solr like this
> java -DzkHost= -Dbootstrap_confdir=./myproject/conf 
> -Dcollection.configName=myproject_conf -Dsolr.solr.home=./myproject -jar 
> start.jar
> When not already there (starting solr for the first time) the content of 
> ./myproject/conf will be copied by Solr into ZK. That process does not work 
> very well in parallel, so if the content is not there and I start several 
> Solrs simultaneously, one or more of them might not start successfully.
> I see exceptions like the ones shown below, and the Solrs throwing them will 
> not work correctly afterwards.
> I know that there could be different workarounds, like making sure to always 
> start one Solr and wait for a while before starting the rest of them, but I 
> think we should really be more robust in these cases.
> Regards, Per Steffensen
>  exception example 1 (the znode causing the problem can be different than 
> /configs/myproject_conf/protwords.txt) 
> org.apache.solr.common.cloud.ZooKeeperException: 
>   at 
> org.apache.solr.core.CoreContainer.initZooKeeper(CoreContainer.java:193)
>   at org.apache.solr.core.CoreContainer.load(CoreContainer.java:337)
>   at org.apache.solr.core.CoreContainer.load(CoreContainer.java:294)
>   at 
> org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:240)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:93)
>   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
>   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
>   at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>   at org.mortbay.jetty.Server.doStart(Server.java:224)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at org.mortbay.xml.XmlConfiguration.main(XmlConfiguration.java:985)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.mortbay.start.Main.invokeMain(Main.java:194)
>   at org.mortbay.start.Main.start(Main.java:534)
>   at org.mortbay.start.Main.start(Main.java:441)
>   at org.mortbay.start.Main.main(Main.java:119)
> Caused by: org.apache.zookeeper.KeeperException$NodeExistsException: 
> KeeperErrorCode = NodeExists for /configs/myproject_conf/protwords.txt
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:1

[jira] [Resolved] (SOLR-3995) Recovery may never finish on SolrCore shutdown if the last reference to a SolrCore is closed by the recovery process.

2012-10-26 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-3995.
---

Resolution: Fixed

> Recovery may never finish on SolrCore shutdown if the last reference to a 
> SolrCore is closed by the recovery process.
> -
>
> Key: SOLR-3995
> URL: https://issues.apache.org/jira/browse/SOLR-3995
> Project: Solr
>  Issue Type: Task
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
>  Labels: 4.0.1_Candidate
> Fix For: 4.1, 5.0
>
>





[jira] [Updated] (SOLR-3995) Recovery may never finish on SolrCore shutdown if the last reference to a SolrCore is closed by the recovery process.

2012-10-26 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-3995:
--

Labels: 4.0.1_Candidate  (was: )

> Recovery may never finish on SolrCore shutdown if the last reference to a 
> SolrCore is closed by the recovery process.
> -
>
> Key: SOLR-3995
> URL: https://issues.apache.org/jira/browse/SOLR-3995
> Project: Solr
>  Issue Type: Task
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
>  Labels: 4.0.1_Candidate
> Fix For: 4.1, 5.0
>
>





[jira] [Resolved] (SOLR-3939) An empty or just replicated index cannot become the leader of a shard after a leader goes down.

2012-10-26 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-3939.
---

Resolution: Fixed

> An empty or just replicated index cannot become the leader of a shard after a 
> leader goes down.
> ---
>
> Key: SOLR-3939
> URL: https://issues.apache.org/jira/browse/SOLR-3939
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.0-BETA, 4.0
>Reporter: Joel Bernstein
>Assignee: Mark Miller
>Priority: Critical
>  Labels: 4.0.1_Candidate
> Fix For: 4.1, 5.0
>
> Attachments: cloud2.log, cloud.log, SOLR-3939.patch, SOLR-3939.patch
>
>
> When a leader core is unloaded using the core admin api, the followers in the 
> shard go into recovery but do not come out. Leader election doesn't take 
> place and the shard goes down.
> This affects the ability to move a micro-shard from one Solr instance to 
> another Solr instance.
> The problem does not occur 100% of the time, but a large % of the time. 
> To set up a test, start up SolrCloud with a single shard. Add cores to that 
> shard as replicas using core admin. Then unload the leader core using core 
> admin. 
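The reproduction steps above map onto CoreAdmin HTTP calls roughly like the following. The host, core, and collection names are illustrative, and the sketch only builds the request URLs rather than sending them:

```java
public class CoreAdminRepro {
    static final String BASE = "http://127.0.0.1:8983/solr/admin/cores";

    // Add a replica core to an existing collection.
    static String createReplica(String name, String collection) {
        return BASE + "?action=CREATE&name=" + name + "&collection=" + collection;
    }

    // Unload a core; unloading the leader should trigger leader election.
    static String unloadCore(String name) {
        return BASE + "?action=UNLOAD&core=" + name;
    }

    public static void main(String[] args) {
        // 1. Start SolrCloud with a single shard (outside this sketch).
        // 2. Add replica cores to the shard:
        System.out.println(createReplica("collection1_replica2", "collection1"));
        // 3. Unload the leader core and watch whether a new leader is elected:
        System.out.println(unloadCore("collection1"));
    }
}
```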




[jira] [Commented] (SOLR-3939) An empty or just replicated index cannot become the leader of a shard after a leader goes down.

2012-10-26 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13485000#comment-13485000
 ] 

Mark Miller commented on SOLR-3939:
---

Okay, I'm going to resolve this - we can make a new issue for the case where a 
replica comes up and is ahead somehow.

> An empty or just replicated index cannot become the leader of a shard after a 
> leader goes down.
> ---
>
> Key: SOLR-3939
> URL: https://issues.apache.org/jira/browse/SOLR-3939
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.0-BETA, 4.0
>Reporter: Joel Bernstein
>Assignee: Mark Miller
>Priority: Critical
>  Labels: 4.0.1_Candidate
> Fix For: 4.1, 5.0
>
> Attachments: cloud2.log, cloud.log, SOLR-3939.patch, SOLR-3939.patch
>
>
> When a leader core is unloaded using the core admin api, the followers in the 
> shard go into recovery but do not come out. Leader election doesn't take 
> place and the shard goes down.
> This affects the ability to move a micro-shard from one Solr instance to 
> another Solr instance.
> The problem does not occur 100% of the time, but a large % of the time. 
> To set up a test, start up SolrCloud with a single shard. Add cores to that 
> shard as replicas using core admin. Then unload the leader core using core 
> admin. 




[jira] [Commented] (LUCENE-2878) Allow Scorer to expose positions and payloads aka. nuke spans

2012-10-26 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484997#comment-13484997
 ] 

Alan Woodward commented on LUCENE-2878:
---

OK!  I think we're nearly there...

> Allow Scorer to expose positions and payloads aka. nuke spans 
> --
>
> Key: LUCENE-2878
> URL: https://issues.apache.org/jira/browse/LUCENE-2878
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: Positions Branch
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
>  Labels: gsoc2011, gsoc2012, lucene-gsoc-11, lucene-gsoc-12, 
> mentor
> Fix For: Positions Branch
>
> Attachments: LUCENE-2878-OR.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878_trunk.patch, LUCENE-2878_trunk.patch, 
> PosHighlighter.patch, PosHighlighter.patch
>
>
> Currently we have two somewhat separate types of queries, the one which can 
> make use of positions (mainly spans) and payloads (spans). Yet Span*Query 
> doesn't really do scoring comparable to what other queries do and at the end 
> of the day they are duplicating a lot of code all over Lucene. Span*Queries are 
> also limited to other Span*Query instances, so that you cannot use a 
> TermQuery or a BooleanQuery with SpanNear or anything like that. 
> Besides the Span*Query limitation, other queries lack a quite interesting 
> feature: they cannot score based on term proximity, since scorers don't 
> expose any positional information. All those problems bugged me for a while, 
> so I started working on that using the bulkpostings API. I would have done 
> that first cut on trunk but TermScorer is working on BlockReader that do not 
> expose positions while the one in this branch does. I started adding a new 
> Positions class which users can pull from a scorer, to prevent unnecessary 
> positions enums I added ScorerContext#needsPositions and eventually 
> Scorer#needsPayloads to create the corresponding enum on demand. Yet, 
> currently only TermQuery / TermScorer implements this API and others simply 
> return null instead. 
> To show that the API really works and our BulkPostings work fine too with 
> positions I cut over TermSpanQuery to use a TermScorer under the hood and 
> nuked TermSpans entirely. A nice side effect of this was that the Position 
> BulkReading implementation got some exercise and now :) works entirely with 
> positions, while Payloads for bulk reading are kind of experimental in the 
> patch and only work with the Standard codec. 
> So all spans now work on top of TermScorer ( I truly hate spans since today ) 
> including the ones that need Payloads (StandardCodec ONLY)!!  I didn't bother 
> to implement the other codecs yet since I want to get feedback on the API and 
> on this first cut before I go on with it. I will upload the corresponding 
> patch in a minute. 
> I also had to cut over SpanQuery.getSpans(IR) to 
> SpanQuery.getSpans(AtomicReaderContext) which I should probably do on trunk 
> first but after that pain today I need a break first :).
> The patch passes all core tests 
> (org.apache.lucene.search.highlight.HighlighterTest still fails but I didn't 
> look into the MemoryIndex BulkPostings API yet)
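The API shape described above — a positions enum pulled from a scorer only when the consumer asks for it — can be sketched with stand-in types. None of the classes below are Lucene's real API; they just illustrate the on-demand pattern:

```java
import java.util.*;

// Stand-in postings for one term in one document: sorted positions.
final class TermPositions {
    private final int[] positions;
    private int idx = -1;
    TermPositions(int... positions) { this.positions = positions; }
    boolean next() { return ++idx < positions.length; }
    int position() { return positions[idx]; }
}

// Stand-in scorer: creates the positions enum on demand, so callers
// that only need scores never pay for positional data.
final class TermScorer {
    private final Map<Integer, int[]> postings; // doc -> positions
    private final boolean needsPositions;       // cf. ScorerContext#needsPositions
    TermScorer(Map<Integer, int[]> postings, boolean needsPositions) {
        this.postings = postings;
        this.needsPositions = needsPositions;
    }
    TermPositions positions(int doc) {
        if (!needsPositions) return null; // mirrors "others simply return null"
        return new TermPositions(postings.getOrDefault(doc, new int[0]));
    }
}

public class PositionsDemo {
    public static void main(String[] args) {
        TermScorer scorer = new TermScorer(Map.of(7, new int[] {3, 14, 27}), true);
        TermPositions pos = scorer.positions(7);
        List<Integer> seen = new ArrayList<>();
        while (pos.next()) seen.add(pos.position());
        System.out.println(seen); // prints [3, 14, 27]
    }
}
```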




[jira] [Commented] (LUCENE-2878) Allow Scorer to expose positions and payloads aka. nuke spans

2012-10-26 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484989#comment-13484989
 ] 

Simon Willnauer commented on LUCENE-2878:
-

Alan, +1 to the patch. BooleanIntervalIterator is a relic. I will go ahead and 
commit it.

bq. Other than writing javadocs, we need to replace PayloadTermQuery and 
PayloadNearQuery, I think. I'll work on that next.
Honestly, fuck it! PayloadTermQuery and PayloadNearQuery are so exotic I'd 
leave them out, move them into a separate issue, and maybe add them once we are 
on trunk. We can still just convert them to position iterators eventually. For 
now that is not important; we should focus on getting this on trunk. 

> Allow Scorer to expose positions and payloads aka. nuke spans 
> --
>
> Key: LUCENE-2878
> URL: https://issues.apache.org/jira/browse/LUCENE-2878
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: Positions Branch
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
>  Labels: gsoc2011, gsoc2012, lucene-gsoc-11, lucene-gsoc-12, 
> mentor
> Fix For: Positions Branch
>
> Attachments: LUCENE-2878-OR.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878_trunk.patch, LUCENE-2878_trunk.patch, 
> PosHighlighter.patch, PosHighlighter.patch
>
>
> Currently we have two somewhat separate types of queries, the one which can 
> make use of positions (mainly spans) and payloads (spans). Yet Span*Query 
> doesn't really do scoring comparable to what other queries do and at the end 
> of the day they are duplicating a lot of code all over Lucene. Span*Queries are 
> also limited to other Span*Query instances, so that you cannot use a 
> TermQuery or a BooleanQuery with SpanNear or anything like that. 
> Besides the Span*Query limitation, other queries lack a quite interesting 
> feature: they cannot score based on term proximity, since scorers don't 
> expose any positional information. All those problems bugged me for a while, 
> so I started working on that using the bulkpostings API. I would have done 
> that first cut on trunk but TermScorer is working on BlockReader that do not 
> expose positions while the one in this branch does. I started adding a new 
> Positions class which users can pull from a scorer, to prevent unnecessary 
> positions enums I added ScorerContext#needsPositions and eventually 
> Scorer#needsPayloads to create the corresponding enum on demand. Yet, 
> currently only TermQuery / TermScorer implements this API and others simply 
> return null instead. 
> To show that the API really works and our BulkPostings work fine too with 
> positions I cut over TermSpanQuery to use a TermScorer under the hood and 
> nuked TermSpans entirely. A nice side effect of this was that the Position 
> BulkReading implementation got some exercise and now :) works entirely with 
> positions, while Payloads for bulk reading are kind of experimental in the 
> patch and only work with the Standard codec. 
> So all spans now work on top of TermScorer ( I truly hate spans since today ) 
> including the ones that need Payloads (StandardCodec ONLY)!!  I didn't bother 
> to implement the other codecs yet since I want to get feedback on the API and 
> on this first cut before I go on with it. I will upload the corresponding 
> patch in a minute. 
> I also had to cut over SpanQuery.getSpans(IR) to 
> SpanQuery.getSpans(AtomicReaderContext) which I should probably do on trunk 
> first but after that pain today I need a break first :).
> The patch passes all core tests 
> (org.apache.lucene.search.highlight.HighlighterTest still fails but I didn't 
> look into the MemoryIndex BulkPostings API yet)




[jira] [Commented] (SOLR-3971) A collection that is created with numShards=1 turns into a numShards=2 collection after starting up a second core and not specifying numShards.

2012-10-26 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484991#comment-13484991
 ] 

Mark Miller commented on SOLR-3971:
---

Just to clarify when this is an issue - it's when you are creating the 
SolrCores in the same Solr instance - not if they are in separate instances.

> A collection that is created with numShards=1 turns into a numShards=2 
> collection after starting up a second core and not specifying numShards.
> ---
>
> Key: SOLR-3971
> URL: https://issues.apache.org/jira/browse/SOLR-3971
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.0
>Reporter: Mark Miller
>Assignee: Mark Miller
>  Labels: 4.0.1_Candidate
> Fix For: 4.1, 5.0
>
> Attachments: SOLR-3971.patch
>
>
> Showing up while I'm working on a different test.




Re: Lucene/Solr 4.01 / 4.1

2012-10-26 Thread Robert Muir
On Fri, Oct 26, 2012 at 11:07 AM, Erick Erickson
 wrote:
> I've got some stuff I'm interested in having bake some more,
> I'm playing fast and loose with core loading (SOLR-1293 and associated).
> Particularly what's up with the interaction, if any, between this and
> SolrCloud (I expect that this is just not supported in SolrCloud, but...)
>
> It's going to get a bit awkward to manage soon (yeah, I know that's my
> problem). So far I haven't checked any of this in.

I would recommend committing to trunk when you are comfortable. Then
give it time in jenkins or whatever.

After you feel like its baked, then merge the relevant revisions to 4.x?




Jenkins build is back to normal : slow-io-beasting #4876

2012-10-26 Thread Charlie Cron
See 





Re: Lucene/Solr 4.01 / 4.1

2012-10-26 Thread Erick Erickson
I've got some stuff I'm interested in having bake some more,
I'm playing fast and loose with core loading (SOLR-1293 and associated).
Particularly what's up with the interaction, if any, between this and
SolrCloud (I expect that this is just not supported in SolrCloud, but...)

It's going to get a bit awkward to manage soon (yeah, I know that's my
problem). So far I haven't checked any of this in.

So, should I
1> just accumulate them all into one really big patch? I really don't
want to do this; some of this is iffy enough that I think it'd be clearer
to track down if there were incremental patches applied. I know, write
it right the first time.
2> maintain my separate patches and commit after 4.1 is labeled?
3> ???

Note that the changes I'm working on should NOT change current
behavior, but there's always the chance of spillover. This applies to
either the 4.0.1 or 4.1 I guess.



On Thu, Oct 25, 2012 at 10:08 PM, Robert Muir  wrote:
> On Thu, Oct 25, 2012 at 9:47 PM, Mark Miller  wrote:
>> In my case, all the important bug fixes were only just recently fixed or I'm 
>> still fixing them - so for my stuff, I see a larger negative with 4.1 vs 
>> 4.0.1. They won't bake long in either version - but they should go out soon 
>> regardless.
>>
>
> This can be easily mitigated: just commit to trunk and spin up an
> extra jenkins against it. But 4.1 is already stable on the lucene side
> and I don't think we should go backwards.
>
> There is just a lot of little shit, like javadocs fixes, improvements
> to the build, etc that would make it a higher quality release. We also
> have enough features already to make it a real release
> (http://wiki.apache.org/lucene-java/ReleaseNote41).
>
> I'm not really worried about playing tricks trying to convince users
> to upgrade, I think we should just focus on quality releases and that
> comes naturally.
>




[jira] [Commented] (LUCENE-4508) Stored fields should require as little resources as possible when no field is stored

2012-10-26 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484979#comment-13484979
 ] 

Robert Muir commented on LUCENE-4508:
-

Well, I still think we could offer the option as a start (either this issue, or 
on LUCENE-2025)?

Maybe one day we figure out how to do this cleanly and safely in the 
default impl, but for now it would be a nice step to offer the option?

> Stored fields should require as little resources as possible when no field is 
> stored
> 
>
> Key: LUCENE-4508
> URL: https://issues.apache.org/jira/browse/LUCENE-4508
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 4.1
>
>
> Currently, stored fields may require a non-negligible amount of memory and/or 
> disk space even if no field is actually stored. We should find a way to 
> reduce these requirements.




[jira] [Resolved] (LUCENE-4508) Stored fields should require as little resources as possible when no field is stored

2012-10-26 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-4508.
--

Resolution: Not A Problem

You convinced me. :-)

> Stored fields should require as little resources as possible when no field is 
> stored
> 
>
> Key: LUCENE-4508
> URL: https://issues.apache.org/jira/browse/LUCENE-4508
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 4.1
>
>
> Currently, stored fields may require a non-negligible amount of memory and/or 
> disk space even if no field is actually stored. We should find a way to 
> reduce these requirements.




[jira] [Commented] (LUCENE-4508) Stored fields should require as little resources as possible when no field is stored

2012-10-26 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484973#comment-13484973
 ] 

Robert Muir commented on LUCENE-4508:
-

That may be, but I think that's still atypical. 

Why not just add this "null" or "no" implementation to give these people the 
choice? 

This is a great example of where flexible indexing wins out; it's very tricky to 
incorporate this kinda stuff into a one-size-fits-all format.
Instead we can just provide alternatives.

Think about bulk merge and only having one doc with stored fields, then that 
doc getting deleted, and all those kinda cases.

> Stored fields should require as little resources as possible when no field is 
> stored
> 
>
> Key: LUCENE-4508
> URL: https://issues.apache.org/jira/browse/LUCENE-4508
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 4.1
>
>
> Currently, stored fields may require a non-negligible amount of memory and/or 
> disk space even if no field is actually stored. We should find a way to 
> reduce these requirements.




Build failed in Jenkins: slow-io-beasting #4875

2012-10-26 Thread Charlie Cron
See 

--
[...truncated 13095 lines...]
[junit4:junit4]   2> 12547 T548 oasc.RequestHandlers.initHandlersFromConfig 
created defaults: solr.StandardRequestHandler
[junit4:junit4]   2> 12547 T548 oasc.RequestHandlers.initHandlersFromConfig 
adding lazy requestHandler: solr.StandardRequestHandler
[junit4:junit4]   2> 12547 T548 oasc.RequestHandlers.initHandlersFromConfig 
created lazy: solr.StandardRequestHandler
[junit4:junit4]   2> 12547 T548 oasc.RequestHandlers.initHandlersFromConfig 
created /update: solr.UpdateRequestHandler
[junit4:junit4]   2> 12547 T548 oasc.RequestHandlers.initHandlersFromConfig 
created /terms: org.apache.solr.handler.component.SearchHandler
[junit4:junit4]   2> 12547 T548 oasc.RequestHandlers.initHandlersFromConfig 
created spellCheckCompRH: org.apache.solr.handler.component.SearchHandler
[junit4:junit4]   2> 12547 T548 oasc.RequestHandlers.initHandlersFromConfig 
created spellCheckCompRH_Direct: org.apache.solr.handler.component.SearchHandler
[junit4:junit4]   2> 12547 T548 oasc.RequestHandlers.initHandlersFromConfig 
created spellCheckWithWordbreak: org.apache.solr.handler.component.SearchHandler
[junit4:junit4]   2> 12547 T548 oasc.RequestHandlers.initHandlersFromConfig 
created spellCheckWithWordbreak_Direct: 
org.apache.solr.handler.component.SearchHandler
[junit4:junit4]   2> 12547 T548 oasc.RequestHandlers.initHandlersFromConfig 
created spellCheckCompRH1: org.apache.solr.handler.component.SearchHandler
[junit4:junit4]   2> 12547 T548 oasc.RequestHandlers.initHandlersFromConfig 
created tvrh: org.apache.solr.handler.component.SearchHandler
[junit4:junit4]   2> 12547 T548 oasc.RequestHandlers.initHandlersFromConfig 
created /mlt: solr.MoreLikeThisHandler
[junit4:junit4]   2> 12547 T548 oasc.RequestHandlers.initHandlersFromConfig 
created /debug/dump: solr.DumpRequestHandler
[junit4:junit4]   2> 12547 T548 oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
[junit4:junit4]   2> 12563 T548 oasc.SolrCore.initDeprecatedSupport WARNING 
solrconfig.xml uses deprecated , Please update your config 
to use the ShowFileRequestHandler.
[junit4:junit4]   2> 12563 T548 oasc.SolrCore.initDeprecatedSupport WARNING 
adding ShowFileRequestHandler with hidden files: [SCHEMA.XML, OLD_SYNONYMS.TXT, 
STOPWORDS.TXT, PROTWORDS.TXT, OPEN-EXCHANGE-RATES.JSON, SYNONYMS.TXT, 
CURRENCY.XML, MAPPING-ISOLATIN1ACCENT.TXT]
[junit4:junit4]   2> 12563 T548 oass.SolrIndexSearcher. Opening 
Searcher@1eaee49 main
[junit4:junit4]   2> 12563 T548 oass.SolrIndexSearcher.getIndexDir WARNING 
WARNING: Directory impl does not support setting indexDir: 
org.apache.lucene.store.MockDirectoryWrapper
[junit4:junit4]   2> 12563 T548 oasu.CommitTracker. Hard AutoCommit: 
disabled
[junit4:junit4]   2> 12563 T548 oasu.CommitTracker. Soft AutoCommit: 
disabled
[junit4:junit4]   2> 12563 T548 oashc.SpellCheckComponent.inform Initializing 
spell checkers
[junit4:junit4]   2> 12563 T548 oass.DirectSolrSpellChecker.init init: 
{name=direct,classname=DirectSolrSpellChecker,field=lowerfilt,minQueryLength=3}
[junit4:junit4]   2> 12594 T574 oasc.SolrCore.registerSearcher [collection1] 
Registered new searcher Searcher@1eaee49 
main{StandardDirectoryReader(segments_1:1)}
[junit4:junit4]   2> 12594 T548 oasc.ZkController.publish numShards not found 
on descriptor - reading it from system property
[junit4:junit4]   2> 12953 T560 oascc.ZkStateReader.updateClusterState Updating 
cloud state from ZooKeeper... 
[junit4:junit4]   2> 12953 T560 oasc.Overseer$ClusterStateUpdater.updateState 
Update state numShards=null message={
[junit4:junit4]   2>  "operation":"state",
[junit4:junit4]   2>  "numShards":null,
[junit4:junit4]   2>  "shard":null,
[junit4:junit4]   2>  "roles":null,
[junit4:junit4]   2>  "state":"down",
[junit4:junit4]   2>  "core":"collection1",
[junit4:junit4]   2>  "collection":"collection1",
[junit4:junit4]   2>  "node_name":"BIGBOY:8983_solr",
[junit4:junit4]   2>  "base_url":"http://BIGBOY:8983/solr"}
[junit4:junit4]   2> 12968 T559 oascc.ZkStateReader$2.process A cluster state 
change has occurred - updating... (3)
[junit4:junit4]   2> 14965 T551 oazs.ZooKeeperServer.expire Expiring session 
0x13a9d8c71e30003, timeout of 3000ms exceeded
[junit4:junit4]   2> 14965 T553 oazs.PrepRequestProcessor.pRequest Processed 
session termination for sessionid: 0x13a9d8c71e30003
[junit4:junit4]   2> 14965 T559 oascc.ZkStateReader$3.process Updating live 
nodes... (2)
[junit4:junit4]   2> 14981 T573 oascc.ZkStateReader$3.process Updating live 
nodes... (2)
[junit4:junit4]   2> 74310 T548 oasc.SolrException.log SEVERE 
null:org.apache.solr.common.SolrException: Could not get shard_id for core: 
collection1
[junit4:junit4]   2>at 
org.apache.solr.cloud.ZkController.doGetShardIdProcess(ZkController.java:996)
[junit4:junit4]   2>at 
org.apache.solr.cloud.ZkController.preRegister(

[jira] [Commented] (LUCENE-4508) Stored fields should require as little resources as possible when no field is stored

2012-10-26 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484971#comment-13484971
 ] 

Adrien Grand commented on LUCENE-4508:
--

Thanks for the link, I didn't know about this issue.

I am not sure the use case is that atypical. For example, I think it would make 
sense for someone who just needs to store an ID and a few scoring factors to 
use only doc values?

> Stored fields should require as little resources as possible when no field is 
> stored
> 
>
> Key: LUCENE-4508
> URL: https://issues.apache.org/jira/browse/LUCENE-4508
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 4.1
>
>
> Currently, stored fields may require a non-negligible amount of memory and/or 
> disk space even if no field is actually stored. We should find a way to 
> reduce these requirements.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4508) Stored fields should require as little resources as possible when no field is stored

2012-10-26 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484967#comment-13484967
 ] 

Robert Muir commented on LUCENE-4508:
-

I think the way to do this is to just add a "NoStoredFieldsFormat".

I looked at doing this as part of the general-purpose codecs and it gets really 
hairy, and the use case is very atypical.

LUCENE-2025
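The suggestion above - a stored-fields format that simply stores nothing and so costs no disk or memory - can be sketched in a few lines. The following is an illustrative Python sketch of the idea only, not Lucene's actual Java StoredFieldsFormat API; every class and method name here is hypothetical:

```python
# Illustrative sketch (not Lucene's Java API): a stored-fields "format"
# whose writer discards everything and whose reader reports no fields.

class NoStoredFieldsWriter:
    """Accepts stored-field writes and discards them."""

    def start_document(self):
        pass  # nothing recorded per document

    def write_field(self, name, value):
        pass  # field is dropped; no bytes are buffered or flushed

    def finish(self):
        return 0  # total bytes written to disk


class NoStoredFieldsReader:
    """Always reports that a document has no stored fields."""

    def visit_document(self, doc_id):
        return {}  # no stored fields to return


writer = NoStoredFieldsWriter()
writer.start_document()
writer.write_field("title", "ignored")
assert writer.finish() == 0

reader = NoStoredFieldsReader()
assert reader.visit_document(0) == {}
```

Because the writer holds no state and flushes nothing, the per-segment footprint is zero by construction, which is the whole point of the wish.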

> Stored fields should require as little resources as possible when no field is 
> stored
> 
>
> Key: LUCENE-4508
> URL: https://issues.apache.org/jira/browse/LUCENE-4508
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 4.1
>
>
> Currently, stored fields may require a non-negligible amount of memory and/or 
> disk space even if no field is actually stored. We should find a way to 
> reduce these requirements.




[jira] [Commented] (SOLR-1972) Need additional query stats in admin interface - median, 95th and 99th percentile

2012-10-26 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484964#comment-13484964
 ] 

Alan Woodward commented on SOLR-1972:
-

Yeah, the toString hack is quite ... hacky.  The problem with using the request 
handler path as the scope (which I agree would be logical) is that this isn't 
available until init() is called, and both getStatistics() and handleRequest() 
can get called before init().

Having one MetricsRegistry per request handler looks a lot more sensible 
though.  I'll put a patch together.  Thanks!

> Need additional query stats in admin interface - median, 95th and 99th 
> percentile
> -
>
> Key: SOLR-1972
> URL: https://issues.apache.org/jira/browse/SOLR-1972
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Affects Versions: 1.4
>Reporter: Shawn Heisey
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 4.1
>
> Attachments: elyograg-1972-3.2.patch, elyograg-1972-3.2.patch, 
> elyograg-1972-trunk.patch, elyograg-1972-trunk.patch, 
> SOLR-1972-branch3x-url_pattern.patch, SOLR-1972-branch4x.patch, 
> SOLR-1972-branch4x.patch, SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, 
> SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, 
> SOLR-1972.patch, SOLR-1972.patch, SOLR-1972.patch, SOLR-1972.patch, 
> SOLR-1972-url_pattern.patch
>
>
> I would like to see more detailed query statistics from the admin GUI.  This 
> is what you can get now:
> requests : 809
> errors : 0
> timeouts : 0
> totalTime : 70053
> avgTimePerRequest : 86.59209
> avgRequestsPerSecond : 0.8148785 
> I'd like to see more data on the time per request - median, 95th percentile, 
> 99th percentile, and any other statistical function that makes sense to 
> include.  In my environment, the first bunch of queries after startup tend to 
> take several seconds each.  I find that the average value tends to be useless 
> until it has several thousand queries under its belt and the caches are 
> thoroughly warmed.  The statistical functions I have mentioned would quickly 
> eliminate the influence of those initial slow queries.
> The system will have to store individual data about each query.  I don't know 
> if this is something Solr does already.  It would be nice to have a 
> configurable count of how many of the most recent data points are kept, to 
> control the amount of memory the feature uses.  The default value could be 
> something like 1024 or 4096.
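The proposal above - keep only the most recent N request times and derive the median and high percentiles from that window, so early slow queries age out - can be sketched as follows. This is an illustrative Python sketch using nearest-rank percentiles over a bounded buffer, not Solr's implementation; the class name and default size are made up:

```python
from collections import deque

class RequestTimeStats:
    """Keeps the most recent `size` request times (ms) and derives
    percentiles from them, so startup outliers eventually age out."""

    def __init__(self, size=1024):
        self.times = deque(maxlen=size)  # oldest entries are evicted

    def record(self, millis):
        self.times.append(millis)

    def percentile(self, p):
        """Nearest-rank percentile, p in (0, 100]."""
        if not self.times:
            return None
        ordered = sorted(self.times)
        rank = -(-len(ordered) * p // 100)  # ceil without math.ceil
        return ordered[max(1, rank) - 1]

stats = RequestTimeStats(size=1024)
# Two slow warm-up queries, then 98 fast ones:
for t in [5000, 4800] + [10] * 98:
    stats.record(t)
print(stats.percentile(50))   # median: 10
print(stats.percentile(99))   # 99th percentile: 4800
```

Note how the median is unaffected by the two warm-up outliers, whereas the plain average over the same data would not be; that is the advantage over avgTimePerRequest described above.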




[jira] [Created] (LUCENE-4508) Stored fields should require as little resources as possible when no field is stored

2012-10-26 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-4508:


 Summary: Stored fields should require as little resources as 
possible when no field is stored
 Key: LUCENE-4508
 URL: https://issues.apache.org/jira/browse/LUCENE-4508
 Project: Lucene - Core
  Issue Type: Wish
Reporter: Adrien Grand
Priority: Minor
 Fix For: 4.1


Currently, stored fields may require a non-negligible amount of memory and/or 
disk space even if no field is actually stored. We should find a way to reduce 
these requirements.




[jira] [Resolved] (SOLR-3971) A collection that is created with numShards=1 turns into a numShards=2 collection after starting up a second core and not specifying numShards.

2012-10-26 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-3971.
---

Resolution: Fixed

> A collection that is created with numShards=1 turns into a numShards=2 
> collection after starting up a second core and not specifying numShards.
> ---
>
> Key: SOLR-3971
> URL: https://issues.apache.org/jira/browse/SOLR-3971
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.0
>Reporter: Mark Miller
>Assignee: Mark Miller
>  Labels: 4.0.1_Candidate
> Fix For: 4.1, 5.0
>
> Attachments: SOLR-3971.patch
>
>
> Showing up while I'm working on a different test.




[jira] [Comment Edited] (SOLR-3939) An empty or just replicated index cannot become the leader of a shard after a leader goes down.

2012-10-26 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484683#comment-13484683
 ] 

Yonik Seeley edited comment on SOLR-3939 at 10/26/12 2:32 PM:
--

bq. Isn't that what capturing the starting versions is all about?

For a node starting up, yeah.  For a leader syncing to someone else - I don't 
think it should matter.
edit: OK - I think I get what you're saying now - if the new node coming up did 
have an extra doc, then the only way to guarantee the leader picks it up would 
be if not too many updates came in for either.  We could require that a sync 
from the leader to the replica have the lists of recent versions overlap enough 
(else the replica would be forced to replicate), but as you say... if updates 
are coming in fast enough (and that rate is probably pretty slow) you're going 
to force a replication anyway.

bq. but if you want to peer sync from the leader to a replica that is coming 
back up, if updates are coming in, you are going to force a replication anyway. 

If updates were coming in fast enough during the "bounce"... I guess so.

  was (Author: ysee...@gmail.com):
bq. Isn't that what capturing the starting versions is all about?

For a node starting up, yeah.  For a leader syncing to someone else - I don't 
think it should matter.

bq. but if you want to peer sync from the leader to a replica that is coming 
back up, if updates are coming in, you are going to force a replication anyway. 

If updates were coming in fast enough during the "bounce"... I guess so.
  
> An empty or just replicated index cannot become the leader of a shard after a 
> leader goes down.
> ---
>
> Key: SOLR-3939
> URL: https://issues.apache.org/jira/browse/SOLR-3939
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.0-BETA, 4.0
>Reporter: Joel Bernstein
>Assignee: Mark Miller
>Priority: Critical
>  Labels: 4.0.1_Candidate
> Fix For: 4.1, 5.0
>
> Attachments: cloud2.log, cloud.log, SOLR-3939.patch, SOLR-3939.patch
>
>
> When a leader core is unloaded using the core admin api, the followers in the 
> shard go into recovery but do not come out. Leader election doesn't take 
> place and the shard goes down.
> This effects the ability to move a micro-shard from one Solr instance to 
> another Solr instance.
> The problem does not occur 100% of the time but a large % of the time. 
> To setup a test, startup Solr Cloud with a single shard. Add cores to that 
> shard as replicas using core admin. Then unload the leader core using core 
> admin. 
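The overlap condition discussed in the comments above - peer sync can only succeed if the leader's and replica's lists of recent update versions overlap enough, otherwise the replica must fall back to full index replication - can be sketched as a decision function. This is an illustrative Python sketch with a made-up threshold, not Solr's actual PeerSync logic:

```python
def can_peer_sync(leader_versions, replica_versions, overlap=0.5):
    """Decide whether a replica can catch up via peer sync or must fall
    back to full index replication.  Each list holds the most recent
    update versions that node has seen.  Illustrative policy only -
    the overlap threshold here is invented for the example."""
    if not replica_versions:
        return False  # empty index: nothing to sync against, must replicate
    shared = set(leader_versions) & set(replica_versions)
    # Require enough overlap that the updates the replica is missing are
    # still recoverable from the leader's recent-update window.
    return len(shared) >= overlap * len(leader_versions)

# Slow update rate: the replica's window still overlaps the leader's.
assert can_peer_sync([101, 102, 103, 104], [101, 102, 103]) is True
# Fast update rate during the "bounce": no overlap left, force replication.
assert can_peer_sync([201, 202, 203, 204], [101, 102, 103]) is False
```

This matches the intuition above: if updates arrive fast enough while a node is down, the windows stop overlapping and a full replication is forced regardless.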




[jira] [Commented] (SOLR-1972) Need additional query stats in admin interface - median, 95th and 99th percentile

2012-10-26 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13484945#comment-13484945
 ] 

Adrien Grand commented on SOLR-1972:


Hi Alan,

It is great to have all these statistics available through JMX; this would be 
really helpful for Solr monitoring! Instead of relying on 
{{RequestHandlerBase.toString}} to return a distinct value per request 
handler, maybe each request handler should have its own {{MetricsRegistry}} (or 
maybe use the request handler path as the scope, but this looks harder to do)?

I have no objection to adding the metrics jar to the dependencies of solr-core.
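The per-handler registry idea can be sketched like this. Illustrative Python only - MetricsRegistry here is a toy stand-in, not the actual Metrics library API, and RequestHandler is a hypothetical simplification of Solr's request handlers:

```python
class MetricsRegistry:
    """Toy registry: named counters scoped to a single owner."""

    def __init__(self):
        self.counters = {}

    def inc(self, name):
        self.counters[name] = self.counters.get(name, 0) + 1


class RequestHandler:
    """Each handler instance owns its registry, so two instances of the
    same handler class never share or clobber each other's metrics -
    no reliance on toString() returning a distinct value per instance."""

    def __init__(self):
        self.metrics = MetricsRegistry()

    def handle_request(self):
        self.metrics.inc("requests")


a, b = RequestHandler(), RequestHandler()
a.handle_request()
a.handle_request()
b.handle_request()
print(a.metrics.counters["requests"])  # 2 - isolated from b's count of 1
```

The key property is that the registry's identity comes from object ownership, not from a string key derived from the handler, which sidesteps the init()-ordering problem mentioned above.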

> Need additional query stats in admin interface - median, 95th and 99th 
> percentile
> -
>
> Key: SOLR-1972
> URL: https://issues.apache.org/jira/browse/SOLR-1972
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Affects Versions: 1.4
>Reporter: Shawn Heisey
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 4.1
>
> Attachments: elyograg-1972-3.2.patch, elyograg-1972-3.2.patch, 
> elyograg-1972-trunk.patch, elyograg-1972-trunk.patch, 
> SOLR-1972-branch3x-url_pattern.patch, SOLR-1972-branch4x.patch, 
> SOLR-1972-branch4x.patch, SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, 
> SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, SOLR-1972_metrics.patch, 
> SOLR-1972.patch, SOLR-1972.patch, SOLR-1972.patch, SOLR-1972.patch, 
> SOLR-1972-url_pattern.patch
>
>
> I would like to see more detailed query statistics from the admin GUI.  This 
> is what you can get now:
> requests : 809
> errors : 0
> timeouts : 0
> totalTime : 70053
> avgTimePerRequest : 86.59209
> avgRequestsPerSecond : 0.8148785 
> I'd like to see more data on the time per request - median, 95th percentile, 
> 99th percentile, and any other statistical function that makes sense to 
> include.  In my environment, the first bunch of queries after startup tend to 
> take several seconds each.  I find that the average value tends to be useless 
> until it has several thousand queries under its belt and the caches are 
> thoroughly warmed.  The statistical functions I have mentioned would quickly 
> eliminate the influence of those initial slow queries.
> The system will have to store individual data about each query.  I don't know 
> if this is something Solr does already.  It would be nice to have a 
> configurable count of how many of the most recent data points are kept, to 
> control the amount of memory the feature uses.  The default value could be 
> something like 1024 or 4096.




Jenkins build is back to normal : slow-io-beasting #4872

2012-10-26 Thread Charlie Cron
See 





Build failed in Jenkins: slow-io-beasting #4871

2012-10-26 Thread Charlie Cron
See 

--
[...truncated 17899 lines...]
[junit4:junit4]   2> 77866 T379 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[359]} 0 0
[junit4:junit4]   2> 77882 T377 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[360]} 0 0
[junit4:junit4]   2> 77898 T378 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[361]} 0 0
[junit4:junit4]   2> 77913 T370 oash.SnapPuller.fetchLatestIndex SEVERE Master 
at: http://127.0.0.1:56093/solr is not available. Index fetch failed. 
Exception: org.apache.solr.client.solrj.SolrServerException: Server refused 
connection at: http://127.0.0.1:56093/solr
[junit4:junit4]   2> 77913 T373 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[362]} 0 0
[junit4:junit4]   2> 77929 T375 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[363]} 0 0
[junit4:junit4]   2> 77944 T376 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[364]} 0 0
[junit4:junit4]   2> 77960 T379 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[365]} 0 0
[junit4:junit4]   2> 77976 T377 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[366]} 0 0
[junit4:junit4]   2> 77991 T378 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[367]} 0 0
[junit4:junit4]   2> 78007 T373 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[368]} 0 0
[junit4:junit4]   2> 78022 T375 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[369]} 0 0
[junit4:junit4]   2> 78038 T379 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[370]} 0 0
[junit4:junit4]   2> 78054 T377 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[371]} 0 0
[junit4:junit4]   2> 78069 T378 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[372]} 0 0
[junit4:junit4]   2> 78085 T373 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[373]} 0 0
[junit4:junit4]   2> 78100 T375 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[374]} 0 0
[junit4:junit4]   2> 78116 T376 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[375]} 0 0
[junit4:junit4]   2> 78132 T379 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[376]} 0 0
[junit4:junit4]   2> 78147 T377 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[377]} 0 0
[junit4:junit4]   2> 78163 T378 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[378]} 0 0
[junit4:junit4]   2> 78178 T373 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[379]} 0 0
[junit4:junit4]   2> 78194 T375 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[380]} 0 0
[junit4:junit4]   2> 78210 T376 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[381]} 0 0
[junit4:junit4]   2> 78225 T379 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[382]} 0 0
[junit4:junit4]   2> 78241 T377 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[383]} 0 0
[junit4:junit4]   2> 78256 T378 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[384]} 0 0
[junit4:junit4]   2> 78272 T373 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[385]} 0 0
[junit4:junit4]   2> 78288 T375 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[386]} 0 0
[junit4:junit4]   2> 78303 T376 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[387]} 0 0
[junit4:junit4]   2> 78319 T379 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[388]} 0 0
[junit4:junit4]   2> 78334 T377 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[389]} 0 0
[junit4:junit4]   2> 78350 T378 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[390]} 0 0
[junit4:junit4]   2> 78366 T373 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[391]} 0 0
[junit4:junit4]   2> 78381 T375 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[392]} 0 0
[junit4:junit4]   2> 78397 T376 C49 UPDATE [collection1] webapp=/solr 
path=/update params={wt=javabin&version=2} {add=[393]} 0 0
[junit4:junit4]   2> 78412 T379 

[jira] [Updated] (LUCENE-2878) Allow Scorer to expose positions and payloads aka. nuke spans

2012-10-26 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-2878:
--

Attachment: LUCENE-2878.patch

This patch removes the abstract BooleanIntervalIterator, as it doesn't seem to 
gain us anything.

Other than writing javadocs, we need to replace PayloadTermQuery and 
PayloadNearQuery, I think.  I'll work on that next.

> Allow Scorer to expose positions and payloads aka. nuke spans 
> --
>
> Key: LUCENE-2878
> URL: https://issues.apache.org/jira/browse/LUCENE-2878
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: Positions Branch
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
>  Labels: gsoc2011, gsoc2012, lucene-gsoc-11, lucene-gsoc-12, 
> mentor
> Fix For: Positions Branch
>
> Attachments: LUCENE-2878-OR.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, 
> LUCENE-2878.patch, LUCENE-2878_trunk.patch, LUCENE-2878_trunk.patch, 
> PosHighlighter.patch, PosHighlighter.patch
>
>
> Currently we have two somewhat separate types of queries: those that can 
> make use of positions (mainly spans) and payloads (spans). Yet Span*Query 
> doesn't really do scoring comparable to what other queries do, and at the end 
> of the day they duplicate a lot of code all over Lucene. Span*Queries are 
> also limited to other Span*Query instances, such that you cannot use a 
> TermQuery or a BooleanQuery with SpanNear or anything like that. 
> Besides the Span*Query limitation, other queries lack a quite interesting 
> feature: they cannot score based on term proximity, since scorers don't 
> expose any positional information. All those problems bugged me for a while, 
> so I started working on this using the bulkpostings API. I would have done 
> the first cut on trunk, but TermScorer there works on a BlockReader that does 
> not expose positions, while the one in this branch does. I started adding a 
> new Positions class which users can pull from a scorer; to prevent 
> unnecessary positions enums I added ScorerContext#needsPositions and 
> eventually Scorer#needsPayloads to create the corresponding enum on demand. 
> Yet, currently only TermQuery / TermScorer implements this API and others 
> simply return null instead. 
> To show that the API really works, and that our BulkPostings work fine with 
> positions too, I cut TermSpanQuery over to use a TermScorer under the hood 
> and nuked TermSpans entirely. A nice side effect of this was that the 
> Position BulkReading implementation got some exercise, which now all works 
> with positions :) while payloads for bulk reading are kind of experimental in 
> the patch and only work with the Standard codec. 
> So all spans now work on top of TermScorer (I truly hate spans since today), 
> including the ones that need payloads (StandardCodec ONLY)!!  I didn't bother 
> to implement the other codecs yet since I want to get feedback on the API and 
> on this first cut before I go on with it. I will upload the corresponding 
> patch in a minute. 
> I also had to cut SpanQuery.getSpans(IR) over to 
> SpanQuery.getSpans(AtomicReaderContext), which I should probably do on trunk 
> first, but after the pain today I need a break first :).
> The patch passes all core tests 
> (org.apache.lucene.search.highlight.HighlighterTest still fails, but I didn't 
> look into the MemoryIndex BulkPostings API yet).
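The kind of proximity signal a positions-aware scorer could expose can be sketched with a classic two-pointer minimum-span computation over two terms' position lists. Illustrative Python only - this is not the patch's actual API, just the underlying idea of scoring by term proximity:

```python
def min_span(positions_a, positions_b):
    """Smallest window (in token positions) covering one occurrence of
    each of two terms in a document - the kind of signal a scorer that
    exposes positions could use for proximity-based scoring.
    Both input lists must be sorted ascending."""
    best = None
    i = j = 0
    while i < len(positions_a) and j < len(positions_b):
        lo, hi = sorted((positions_a[i], positions_b[j]))
        span = hi - lo
        if best is None or span < best:
            best = span
        # Advance the pointer at the smaller position to try to
        # shrink the window further.
        if positions_a[i] < positions_b[j]:
            i += 1
        else:
            j += 1
    return best

# "quick" at positions 3 and 17, "fox" at positions 5 and 40:
assert min_span([3, 17], [5, 40]) == 2   # tightest window is 3..5
# A proximity boost could then reward tighter spans, e.g.:
score_boost = 1.0 / (1 + min_span([3, 17], [5, 40]))
```

The point of the patch is that this information would come from the scorer itself rather than from a parallel Span*Query hierarchy duplicating the matching logic.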




[jira] [Resolved] (LUCENE-4507) das

2012-10-26 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-4507.


Resolution: Incomplete

Nothing here.

> das
> ---
>
> Key: LUCENE-4507
> URL: https://issues.apache.org/jira/browse/LUCENE-4507
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Tyheem Backer
>Priority: Trivial
>





Build failed in Jenkins: slow-io-beasting #4870

2012-10-26 Thread Charlie Cron
See 

--
[...truncated 14303 lines...]
[junit4:junit4]   2> 872363 T20 oazs.NIOServerCnxn.closeSock Closed socket 
connection for client /127.0.0.1:52224 which had sessionid 0x13a9d32d6480002
[junit4:junit4]   2> 872363 T40 oaz.ClientCnxn$SendThread.run Unable to read 
additional data from server sessionid 0x13a9d32d6480002, likely server has 
closed socket, closing socket connection and attempting reconnect
[junit4:junit4]   2> 872363 T20 oazs.NIOServerCnxn.closeSock Closed socket 
connection for client /127.0.0.1:52266 which had sessionid 0x13a9d32d6480003
[junit4:junit4]   2> 872363 T22 oazs.NIOServerCnxn$Factory.run NIOServerCnxn 
factory exited run method
[junit4:junit4]   2> 872363 T54 oaz.ClientCnxn$SendThread.run Unable to read 
additional data from server sessionid 0x13a9d32d6480003, likely server has 
closed socket, closing socket connection and attempting reconnect
[junit4:junit4]   2> 872363 T20 oazs.FinalRequestProcessor.shutdown shutdown of 
request processor complete
[junit4:junit4]   2> 872363 T20 oasc.ChaosMonkey.monkeyLog monkey: stop shard! 
52219
[junit4:junit4]   2> 872363 T20 oasc.CoreContainer.shutdown Shutting down 
CoreContainer instance=26799663
[junit4:junit4]   2> 872363 T20 oasc.SolrCore.close [collection1]  CLOSING 
SolrCore org.apache.solr.core.SolrCore@3ba4f1
[junit4:junit4]   2> 872363 T20 oasu.DirectUpdateHandler2.close closing 
DirectUpdateHandler2{commits=0,autocommits=0,soft 
autocommits=0,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,adds=0,deletesById=0,deletesByQuery=0,errors=0,cumulative_adds=0,cumulative_deletesById=0,cumulative_deletesByQuery=0,cumulative_errors=0}
[junit4:junit4]   2> 872363 T20 oasc.SolrCore.decrefSolrCoreState Closing 
SolrCoreState
[junit4:junit4]   2> 872363 T20 oasu.DefaultSolrCoreState.closeIndexWriter 
SolrCoreState ref count has reached 0 - closing IndexWriter
[junit4:junit4]   2> 872363 T20 oasu.DefaultSolrCoreState.closeIndexWriter 
closing IndexWriter with IndexWriterCloser
[junit4:junit4]   2> 872363 T20 oasc.SolrCore.closeSearcher [collection1] 
Closing main searcher on request.
[junit4:junit4]   2> 872363 T42 oasc.Overseer$ClusterStateUpdater.amILeader 
According to ZK I (id=88555984356114434-127.0.0.1:52219_solr-n_00) am 
no longer a leader.
[junit4:junit4]   2> 872472 T93 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@13d0fea name:ZooKeeperConnection 
Watcher:127.0.0.1:52195/solr got event WatchedEvent state:Disconnected 
type:None path:null path:null type:None
[junit4:junit4]   2> 872472 T55 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@138ec91 name:ZooKeeperConnection 
Watcher:127.0.0.1:52195/solr got event WatchedEvent state:Disconnected 
type:None path:null path:null type:None
[junit4:junit4]   2> 872472 T93 oascc.ConnectionManager.process zkClient has 
disconnected
[junit4:junit4]   2> 872472 T55 oascc.ConnectionManager.process zkClient has 
disconnected
[junit4:junit4]   2> 872472 T67 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@1e3c2c6 name:ZooKeeperConnection 
Watcher:127.0.0.1:52195/solr got event WatchedEvent state:Disconnected 
type:None path:null path:null type:None
[junit4:junit4]   2> 872472 T79 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@14c02d4 name:ZooKeeperConnection 
Watcher:127.0.0.1:52195/solr got event WatchedEvent state:Disconnected 
type:None path:null path:null type:None
[junit4:junit4]   2> 872472 T20 oaz.ZooKeeper.close Session: 0x13a9d32d6480002 
closed
[junit4:junit4]   2> 872472 T41 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@17a906e name:ZooKeeperConnection 
Watcher:127.0.0.1:52195/solr got event WatchedEvent state:Disconnected 
type:None path:null path:null type:None
[junit4:junit4]   2> 872472 T41 oascc.ConnectionManager.process 
Client->ZooKeeper status change trigger but we are already closed
[junit4:junit4]   2> 872472 T67 oascc.ConnectionManager.process zkClient has 
disconnected
[junit4:junit4]   2> 872472 T79 oascc.ConnectionManager.process zkClient has 
disconnected
[junit4:junit4]   2> 872472 T41 oaz.ClientCnxn$EventThread.run EventThread shut 
down
[junit4:junit4]   2> 872473 T20 oejsh.ContextHandler.doStop stopped 
o.e.j.s.ServletContextHandler{/solr,null}
[junit4:junit4]   2> 872526 T20 oasc.ChaosMonkey.monkeyLog monkey: stop shard! 
52256
[junit4:junit4]   2> 872526 T20 oasc.CoreContainer.shutdown Shutting down 
CoreContainer instance=29120938
[junit4:junit4]   2> 872528 T20 oasc.SolrCore.close [collection1]  CLOSING 
SolrCore org.apache.solr.core.SolrCore@f6852d
[junit4:junit4]   2> 872532 T20 oasu.DirectUpdateHandler2.close closing 
DirectUpdateHandler2{commits=1,autocommits=0,soft 
autocommits=0,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,ad

Build failed in Jenkins: slow-io-beasting #4869

2012-10-26 Thread Charlie Cron
See 

--
[...truncated 18864 lines...]
[junit4:junit4]   2> 13178 T583 oasc.RequestHandlers.initHandlersFromConfig 
created defaults: solr.StandardRequestHandler
[junit4:junit4]   2> 13178 T583 oasc.RequestHandlers.initHandlersFromConfig 
adding lazy requestHandler: solr.StandardRequestHandler
[junit4:junit4]   2> 13178 T583 oasc.RequestHandlers.initHandlersFromConfig 
created lazy: solr.StandardRequestHandler
[junit4:junit4]   2> 13178 T583 oasc.RequestHandlers.initHandlersFromConfig 
created /update: solr.UpdateRequestHandler
[junit4:junit4]   2> 13178 T583 oasc.RequestHandlers.initHandlersFromConfig 
created /terms: org.apache.solr.handler.component.SearchHandler
[junit4:junit4]   2> 13178 T583 oasc.RequestHandlers.initHandlersFromConfig 
created spellCheckCompRH: org.apache.solr.handler.component.SearchHandler
[junit4:junit4]   2> 13178 T583 oasc.RequestHandlers.initHandlersFromConfig 
created spellCheckCompRH_Direct: org.apache.solr.handler.component.SearchHandler
[junit4:junit4]   2> 13178 T583 oasc.RequestHandlers.initHandlersFromConfig 
created spellCheckWithWordbreak: org.apache.solr.handler.component.SearchHandler
[junit4:junit4]   2> 13178 T583 oasc.RequestHandlers.initHandlersFromConfig 
created spellCheckWithWordbreak_Direct: 
org.apache.solr.handler.component.SearchHandler
[junit4:junit4]   2> 13178 T583 oasc.RequestHandlers.initHandlersFromConfig 
created spellCheckCompRH1: org.apache.solr.handler.component.SearchHandler
[junit4:junit4]   2> 13178 T583 oasc.RequestHandlers.initHandlersFromConfig 
created tvrh: org.apache.solr.handler.component.SearchHandler
[junit4:junit4]   2> 13178 T583 oasc.RequestHandlers.initHandlersFromConfig 
created /mlt: solr.MoreLikeThisHandler
[junit4:junit4]   2> 13178 T583 oasc.RequestHandlers.initHandlersFromConfig 
created /debug/dump: solr.DumpRequestHandler
[junit4:junit4]   2> 13225 T583 oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
[junit4:junit4]   2> 13256 T583 oasc.SolrCore.initDeprecatedSupport WARNING 
solrconfig.xml uses deprecated , Please update your config 
to use the ShowFileRequestHandler.
[junit4:junit4]   2> 13256 T583 oasc.SolrCore.initDeprecatedSupport WARNING 
adding ShowFileRequestHandler with hidden files: [SCHEMA.XML, OLD_SYNONYMS.TXT, 
STOPWORDS.TXT, PROTWORDS.TXT, OPEN-EXCHANGE-RATES.JSON, SYNONYMS.TXT, 
CURRENCY.XML, MAPPING-ISOLATIN1ACCENT.TXT]
[junit4:junit4]   2> 13271 T583 oass.SolrIndexSearcher. Opening 
Searcher@1914fe9 main
[junit4:junit4]   2> 13271 T583 oass.SolrIndexSearcher.getIndexDir WARNING 
WARNING: Directory impl does not support setting indexDir: 
org.apache.lucene.store.MockDirectoryWrapper
[junit4:junit4]   2> 13271 T583 oasu.CommitTracker. Hard AutoCommit: 
disabled
[junit4:junit4]   2> 13271 T583 oasu.CommitTracker. Soft AutoCommit: 
disabled
[junit4:junit4]   2> 13271 T583 oashc.SpellCheckComponent.inform Initializing 
spell checkers
[junit4:junit4]   2> 13271 T583 oass.DirectSolrSpellChecker.init init: 
{name=direct,classname=DirectSolrSpellChecker,field=lowerfilt,minQueryLength=3}
[junit4:junit4]   2> 13287 T609 oasc.SolrCore.registerSearcher [collection1] 
Registered new searcher Searcher@1914fe9 
main{StandardDirectoryReader(segments_1:1)}
[junit4:junit4]   2> 13287 T583 oasc.ZkController.publish numShards not found 
on descriptor - reading it from system property
[junit4:junit4]   2> 13583 T595 oascc.ZkStateReader.updateClusterState Updating 
cloud state from ZooKeeper... 
[junit4:junit4]   2> 13583 T595 oasc.Overseer$ClusterStateUpdater.updateState 
Update state numShards=null message={
[junit4:junit4]   2>  "operation":"state",
[junit4:junit4]   2>  "numShards":null,
[junit4:junit4]   2>  "shard":null,
[junit4:junit4]   2>  "roles":null,
[junit4:junit4]   2>  "state":"down",
[junit4:junit4]   2>  "core":"collection1",
[junit4:junit4]   2>  "collection":"collection1",
[junit4:junit4]   2>  "node_name":"BIGBOY:8983_solr",
[junit4:junit4]   2>  "base_url":"http://BIGBOY:8983/solr"}
[junit4:junit4]   2> 13583 T594 oascc.ZkStateReader$2.process A cluster state 
change has occurred - updating... (3)
[junit4:junit4]   2> 14753 T586 oazs.ZooKeeperServer.expire Expiring session 
0x13a9d2773470003, timeout of 3000ms exceeded
[junit4:junit4]   2> 14753 T588 oazs.PrepRequestProcessor.pRequest Processed 
session termination for sessionid: 0x13a9d2773470003
[junit4:junit4]   2> 14769 T594 oascc.ZkStateReader$3.process Updating live 
nodes... (2)
[junit4:junit4]   2> 14769 T608 oascc.ZkStateReader$3.process Updating live 
nodes... (2)
[junit4:junit4]   2> 75019 T583 oasc.SolrException.log SEVERE 
null:org.apache.solr.common.SolrException: Could not get shard_id for core: 
collection1
[junit4:junit4]   2>at 
org.apache.solr.cloud.ZkController.doGetShardIdProcess(ZkController.java:996)
[junit4:junit4]   2>at 
org.apache.solr.cloud.ZkController.preRegister(

Build failed in Jenkins: slow-io-beasting #4868

2012-10-26 Thread Charlie Cron
See 

--
[...truncated 13814 lines...]
[junit4:junit4]   2> 28670 T10 oasc.SolrCore.getNewIndexDir New index directory 
detected: old=null 
new=.\org.apache.solr.client.solrj.TestLBHttpSolrServer$SolrInstance-1351255527121\solr\collection12\collection1\data\index/
[junit4:junit4]   2> 28670 T10 oasc.SolrCore.initIndex WARNING [collection1] 
Solr index directory 
'.\org.apache.solr.client.solrj.TestLBHttpSolrServer$SolrInstance-1351255527121\solr\collection12\collection1\data\index'
 doesn't exist. Creating new index...
[junit4:junit4]   2> 28670 T10 oasc.CachingDirectoryFactory.get return new 
directory for 

 forceNew:false
[junit4:junit4]   2> 28670 T10 oasc.SolrDeletionPolicy.onCommit 
SolrDeletionPolicy.onCommit: commits:num=1
[junit4:junit4]   2>
commit{dir=BaseDirectoryWrapper(org.apache.lucene.store.MMapDirectory@
 
lockFactory=org.apache.lucene.store.NativeFSLockFactory@1a86488),segFN=segments_1,generation=1,filenames=[segments_1]
[junit4:junit4]   2> 28670 T10 oasc.SolrDeletionPolicy.updateCommits newest 
commit = 1
[junit4:junit4]   2> 28686 T10 oasc.RequestHandlers.initHandlersFromConfig 
created standard: solr.StandardRequestHandler
[junit4:junit4]   2> 28686 T10 oasc.RequestHandlers.initHandlersFromConfig 
created defaults: solr.StandardRequestHandler
[junit4:junit4]   2> 28686 T10 oasc.RequestHandlers.initHandlersFromConfig 
adding lazy requestHandler: solr.StandardRequestHandler
[junit4:junit4]   2> 28686 T10 oasc.RequestHandlers.initHandlersFromConfig 
created lazy: solr.StandardRequestHandler
[junit4:junit4]   2> 28686 T10 oasc.RequestHandlers.initHandlersFromConfig 
created /update: solr.UpdateRequestHandler
[junit4:junit4]   2> 28686 T10 oasc.RequestHandlers.initHandlersFromConfig 
created /replication: solr.ReplicationHandler
[junit4:junit4]   2> 28686 T10 oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
[junit4:junit4]   2> 28701 T10 oass.SolrIndexSearcher.<init> Opening 
Searcher@46136 main
[junit4:junit4]   2> 28701 T10 oass.SolrIndexSearcher.getIndexDir WARNING 
WARNING: Directory impl does not support setting indexDir: 
org.apache.lucene.store.BaseDirectoryWrapper
[junit4:junit4]   2> 28701 T10 oasu.CommitTracker.<init> Hard AutoCommit: 
disabled
[junit4:junit4]   2> 28701 T10 oasu.CommitTracker.<init> Soft AutoCommit: 
disabled
[junit4:junit4]   2> 28701 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting socketTimeout to: 0
[junit4:junit4]   2> 28701 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting urlScheme to: http://
[junit4:junit4]   2> 28701 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting connTimeout to: 0
[junit4:junit4]   2> 28701 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting maxConnectionsPerHost to: 20
[junit4:junit4]   2> 28701 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting corePoolSize to: 0
[junit4:junit4]   2> 28701 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting maximumPoolSize to: 2147483647
[junit4:junit4]   2> 28701 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting maxThreadIdleTime to: 5
[junit4:junit4]   2> 28701 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting sizeOfQueue to: -1
[junit4:junit4]   2> 28701 T10 oashc.HttpShardHandlerFactory.getParameter 
Setting fairnessPolicy to: false
[junit4:junit4]   2> 28701 T10 oascsi.HttpClientUtil.createClient Creating new 
http client, 
config:maxConnectionsPerHost=20&maxConnections=1&socketTimeout=0&connTimeout=0&retry=false
[junit4:junit4]   2> 28717 T10 oash.ReplicationHandler.inform Commits will be 
reserved for  1
[junit4:junit4]   2> 28717 T10 oasc.CoreContainer.register registering core: 
collection1
[junit4:junit4]   2> 28717 T10 oass.SolrDispatchFilter.init 
user.dir=
[junit4:junit4]   2> 28717 T10 oass.SolrDispatchFilter.init 
SolrDispatchFilter.init() done
[junit4:junit4]   2> 28717 T117 oasc.SolrCore.registerSearcher [collection1] 
Registered new searcher Searcher@46136 
main{StandardDirectoryReader(segments_1:1)}
[junit4:junit4]   2> ASYNC  NEW_CORE C9 name=collection1 
org.apache.solr.core.SolrCore@84f665
[junit4:junit4]   2> 28733 T112 C9 oasc.SolrDeletionPolicy.onInit 
SolrDeletionPolicy.onInit: commits:num=1
[junit4:junit4]   2>
commit{dir=BaseDirectoryWrapper(org.apache.lucene.store.MMapDirectory@

[jira] [Created] (LUCENE-4507) das

2012-10-26 Thread Tyheem Backer (JIRA)
Tyheem Backer created LUCENE-4507:
-

 Summary: das
 Key: LUCENE-4507
 URL: https://issues.apache.org/jira/browse/LUCENE-4507
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Tyheem Backer
Priority: Trivial




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


