Lucene Benchmark

2014-09-24 Thread John Wang
Hi guys:

 Can you guys point me to some details on the Lucene Benchmark module?
Specifically the grammar/syntax for the Algorithm files?

Thanks

-John


[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.7.0_67) - Build # 4231 - Still Failing!

2014-09-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4231/
Java: 32bit/jdk1.7.0_67 -client -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.DeleteReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this request:[https://127.0.0.1:63512, https://127.0.0.1:63527, https://127.0.0.1:63545, https://127.0.0.1:63554, https://127.0.0.1:63536]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[https://127.0.0.1:63512, https://127.0.0.1:63527, https://127.0.0.1:63545, https://127.0.0.1:63554, https://127.0.0.1:63536]
at __randomizedtesting.SeedInfo.seed([27DDBFA5F12DB87D:A63B31BD8672D841]:0)
at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:322)
at org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:880)
at org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:658)
at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:601)
at org.apache.solr.cloud.DeleteReplicaTest.removeAndWaitForReplicaGone(DeleteReplicaTest.java:171)
at org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:144)
at org.apache.solr.cloud.DeleteReplicaTest.doTest(DeleteReplicaTest.java:88)
at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor78.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-6476) Create a bulk mode for schema API

2014-09-24 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6476:
-
Attachment: SOLR-6476.patch

 Create a bulk mode for schema API
 -

 Key: SOLR-6476
 URL: https://issues.apache.org/jira/browse/SOLR-6476
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Reporter: Noble Paul
Assignee: Noble Paul
  Labels: managedResource
 Attachments: SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, 
 SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch


 The current schema API performs one operation at a time, while the normal use 
 case is that users add multiple fields/fieldTypes/copyFields etc. in one shot.
 example 
 {code:javascript}
 curl http://localhost:8983/solr/collection1/schema -H 'Content-type:application/json' -d '{
   add-field: {
     name: sell-by,
     type: tdate,
     stored: true
   },
   add-field: {
     name: catchall,
     type: text_general,
     stored: false
   }
 }'
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4879 - Failure

2014-09-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4879/

1 tests failed.
REGRESSION:  org.apache.solr.client.solrj.TestLBHttpSolrServer.testReliability

Error Message:
No live SolrServers available to handle this request

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
at __randomizedtesting.SeedInfo.seed([25609EFCD35821FB:E4A843BA723EF052]:0)
at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:528)
at org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at org.apache.solr.client.solrj.TestLBHttpSolrServer.testReliability(TestLBHttpSolrServer.java:223)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-6453) Stop throwing an error message from Overseer on Solr exit

2014-09-24 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6453:
-
Attachment: SOLR-6453.patch

I guess this is the right fix

 Stop throwing an error message from Overseer on Solr exit
 -

 Key: SOLR-6453
 URL: https://issues.apache.org/jira/browse/SOLR-6453
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Ramkumar Aiyengar
Assignee: Noble Paul
Priority: Minor
 Attachments: SOLR-6453.patch


 SOLR-5859 adds a leadership check every time Overseer exits loop. This 
 however gets triggered even when Solr really is exiting, causing a spurious 
 error. Here's a one-liner to stop that from happening.






Re: Lucene Benchmark

2014-09-24 Thread Mikhail Khludnev
Hi John,

The algorithm-file syntax is covered in the package documentation:
http://lucene.apache.org/core/4_8_0/benchmark/org/apache/lucene/benchmark/byTask/package-summary.html
It's also described in LUA. I only recently got into it and worked out how to
use it. Feel free to ask if you face any difficulties.

Beware that Lucene devs use
https://code.google.com/a/apache-extras.org/p/luceneutil/
http://blog.mikemccandless.com/2011/04/catching-slowdowns-in-lucene.html
I haven't dug into it; I just know that it reports the fancy tables you can
find in performance-optimization JIRAs.
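
For anyone else landing on this thread: an algorithm file is a mix of
name=value properties followed by a task sequence. A minimal sketch, written
from memory of the 4.x docs (the exact property names should be double-checked
against the package summary above):

```
# properties -- class names here follow the 4.x benchmark module layout
analyzer=org.apache.lucene.analysis.standard.StandardAnalyzer
directory=RAMDirectory
content.source=org.apache.lucene.benchmark.byTask.feeds.ReutersContentSource

# task sequence: index 1000 docs, then run 500 searches over one reader
ResetSystemErase
CreateIndex
{ "AddDocs" AddDoc } : 1000
CloseIndex

OpenReader
{ "SearchSameRdr" Search } : 500
CloseReader

# print a summary report grouped by task name
RepSumByName
```

Sequences in braces repeat `: N` times, and the quoted string names the
sequence in the final report.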


On Wed, Sep 24, 2014 at 10:45 AM, John Wang john.w...@gmail.com wrote:

 Hi guys:

  Can you guys point me to some details on the Lucene Benchmark module?
 Specifically the grammar/syntax for the Algorithm files?

 Thanks

 -John




-- 
Sincerely yours
Mikhail Khludnev
Principal Engineer,
Grid Dynamics

http://www.griddynamics.com
mkhlud...@griddynamics.com


[jira] [Created] (SOLR-6556) User from trusted kerberos realm can't access admin console

2014-09-24 Thread Andrejs Dubovskis (JIRA)
Andrejs Dubovskis created SOLR-6556:
---

 Summary: User from trusted kerberos realm can't access admin 
console 
 Key: SOLR-6556
 URL: https://issues.apache.org/jira/browse/SOLR-6556
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.4
 Environment: CDH5.1.2 + Kerberos + Sentry
Reporter: Andrejs Dubovskis
Priority: Minor


Solr security was configured according to [this 
document|http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH5/latest/CDH5-Security-Guide/cdh5sg_search_security.html].

A user from the primary realm (used by the Hadoop cluster itself) can access the 
console, but a user from the trusted realm can't.
{code}
Sep 24, 2014 9:30:13 AM org.apache.catalina.core.StandardWrapperValve invoke
SEVERE: Servlet.service() for servlet LoadAdminUI threw exception
org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: No rules applied to admin@TRUSTED.REALM
at org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:389)
at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:359)
at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:329)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:329)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:349)
at org.apache.solr.servlet.SolrHadoopAuthenticationFilter.doFilter(SolrHadoopAuthenticationFilter.java:148)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.solr.servlet.HostnameFilter.doFilter(HostnameFilter.java:86)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:861)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:606)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:745)

{code}

The required Kerberos auth_to_local rules are defined in the hadoop/core-site.xml 
file and were added to /etc/krb5.conf as well.

Other CDH components (for example, Impala) use these rules and allow access for 
users from the trusted realm.
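
For context, a mapping rule of the kind being described would look roughly like
this; the realm names, KDC host, and regex are illustrative placeholders, not
taken from the original report:

```
# illustrative /etc/krb5.conf fragment (placeholder realms and pattern)
[libdefaults]
    default_realm = PRIMARY.REALM

[realms]
    PRIMARY.REALM = {
        kdc = kdc.primary.realm
        # map principals from the trusted realm to local short names,
        # e.g. admin@TRUSTED.REALM -> admin
        auth_to_local = RULE:[1:$1@$0](.*@TRUSTED\.REALM)s/@.*//
        auth_to_local = DEFAULT
    }
```

The NoMatchingRule error above is what Hadoop's KerberosName raises when no
such rule matches an incoming principal.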






[jira] [Commented] (LUCENE-5975) Lucene can't read 3.0-3.3 deleted documents

2014-09-24 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146049#comment-14146049
 ] 

Uwe Schindler commented on LUCENE-5975:
---

Thanks for figuring that out!

Nice test!

 Lucene can't read 3.0-3.3 deleted documents
 ---

 Key: LUCENE-5975
 URL: https://issues.apache.org/jira/browse/LUCENE-5975
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Priority: Blocker
 Fix For: 4.10.1

 Attachments: LUCENE-5975.patch, LUCENE-5975.patch


 BitVector before Lucene 3.4 had many bugs, particularly that it wrote extra 
 bogus trailing crap at the end.
 But since Lucene 4.8, we check that we read all the bytes... this check can 
 fail for 3.0-3.3 indexes due to the previous bugs in those indexes; instead, 
 users will get an exception on open like this: CorruptIndexException(did not 
 read all bytes from file: read 5000 vs 5001)






[jira] [Commented] (LUCENE-5569) Rename AtomicReader to LeafReader

2014-09-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146075#comment-14146075
 ] 

ASF subversion and git services commented on LUCENE-5569:
-

Commit 1627258 from [~rjernst] in branch 'dev/trunk'
[ https://svn.apache.org/r1627258 ]

LUCENE-5569: Rename more locations in test classes and comments

 Rename AtomicReader to LeafReader
 -

 Key: LUCENE-5569
 URL: https://issues.apache.org/jira/browse/LUCENE-5569
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Priority: Blocker
 Fix For: Trunk

 Attachments: LUCENE-5569.patch, LUCENE-5569.patch


 See LUCENE-5527 for more context: several of us seem to prefer {{Leaf}} to 
 {{Atomic}}.
 Talking from my experience, I was a bit confused in the beginning that this 
 thing is named {{AtomicReader}}, since {{Atomic}} is otherwise used in Java 
 in the context of concurrency. So maybe renaming it to {{Leaf}} would help 
 remove this confusion and also carry the information that these readers are 
 used as leaves of top-level readers?






[JENKINS] Lucene-Solr-SmokeRelease-5.x - Build # 196 - Still Failing

2014-09-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.x/196/

No tests ran.

Build Log:
[...truncated 51701 lines...]
prepare-release-no-sign:
[mkdir] Created dir: /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 446 files to /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 254 files to /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 JAVA_HOME=/home/jenkins/tools/java/latest1.7
   [smoker] NOTE: output encoding is US-ASCII
   [smoker] 
   [smoker] Load release URL file:/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.01 sec (14.2 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-5.0.0-src.tgz...
   [smoker] 27.6 MB in 0.04 sec (676.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.0.0.tgz...
   [smoker] 61.1 MB in 0.09 sec (669.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.0.0.zip...
   [smoker] 70.5 MB in 0.13 sec (524.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-5.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 5561 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 5561 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run ant validate
   [smoker] run tests w/ Java 7 and testArgs='-Dtests.jettyConnector=Socket -Dtests.disableHdfs=true -Dtests.multiplier=1 -Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 223 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.01 sec (11.3 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-5.0.0-src.tgz...
   [smoker] 33.8 MB in 0.08 sec (443.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.0.0.tgz...
   [smoker] 143.2 MB in 0.60 sec (237.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.0.0.zip...
   [smoker] 149.3 MB in 0.74 sec (201.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-5.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-5.0.0.tgz...
   [smoker]   **WARNING**: skipping check of /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar: it has javax.* classes
   [smoker]   **WARNING**: skipping check of /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar: it has javax.* classes
   [smoker] verify WAR metadata/contained JAR identity/no javax.* or java.* classes...
   [smoker] unpack lucene-5.0.0.tgz...
   [smoker] copying unpacked distribution for Java 7 ...
   [smoker] test solr example w/ Java 7...
   [smoker]   start Solr instance (log=/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/solr-example.log)...
   [smoker]   startup done
   [smoker]   test utf8...
   [smoker]   index example docs...
   [smoker]   run query...
   [smoker]   stop server (SIGINT)...
   [smoker]   unpack solr-5.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-5.0.0.tgz...
   [smoker]   **WARNING**: skipping check of /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar: it has 

[jira] [Commented] (SOLR-6453) Stop throwing an error message from Overseer on Solr exit

2014-09-24 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146085#comment-14146085
 ] 

Ramkumar Aiyengar commented on SOLR-6453:
-

Thanks for picking this up, Noble! Just so that I understand, could you explain 
the difference?

I see that the controller closes the {{Overseer}} and then the client, but 
aren't we coupling the two implementations tightly by assuming that? In 
hindsight, it doesn't make sense to use the {{zkClient}} when it is closed, so 
I should have probably used {{if (zkClient.isClosed())}} instead of just 
{{isClosed}}. Checking just {{zkController}} does make sense for the final 
operation of rejoining election (which is what we are checking in the 
{{finally}} block anyway), but checking just that assumes that the state reader 
passed to us is from the same controller. Maybe we should check both, i.e. that 
the controller is active (if not, we can't rejoin anyway) and that the client is 
active.
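
A toy sketch of the combined guard being proposed; the names here are
illustrative stand-ins, not the actual Overseer fields or methods:

```java
// Illustrative only: containerShutdown stands in for
// corecontainer.isShutdown(), zkClientClosed for zkClient.isClosed().
public class OverseerExitGuard {

    static boolean safeToCleanUpAndRejoin(boolean containerShutdown,
                                          boolean zkClientClosed) {
        // clean up / rejoin election only if the node is still running
        // AND the ZooKeeper client is still usable
        return !containerShutdown && !zkClientClosed;
    }

    public static void main(String[] args) {
        // QUIT received while node and ZK client are alive: proceed
        System.out.println(safeToCleanUpAndRejoin(false, false));
        // real shutdown, or a dead client: skip, avoiding the spurious error
        System.out.println(safeToCleanUpAndRejoin(true, false));
        System.out.println(safeToCleanUpAndRejoin(false, true));
    }
}
```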

 Stop throwing an error message from Overseer on Solr exit
 -

 Key: SOLR-6453
 URL: https://issues.apache.org/jira/browse/SOLR-6453
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Ramkumar Aiyengar
Assignee: Noble Paul
Priority: Minor
 Attachments: SOLR-6453.patch


 SOLR-5859 adds a leadership check every time Overseer exits loop. This 
 however gets triggered even when Solr really is exiting, causing a spurious 
 error. Here's a one-liner to stop that from happening.






[jira] [Commented] (SOLR-6453) Stop throwing an error message from Overseer on Solr exit

2014-09-24 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146101#comment-14146101
 ] 

Noble Paul commented on SOLR-6453:
--

The objective of that method is to handle the overseer QUIT command. So, when a 
QUIT command is received, the node is still running ({{corecontainer.isShutdown()}} 
returns false and {{isClosed()}} returns true), and the current overseer should 
do the cleanup and exit gracefully.

 Stop throwing an error message from Overseer on Solr exit
 -

 Key: SOLR-6453
 URL: https://issues.apache.org/jira/browse/SOLR-6453
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Ramkumar Aiyengar
Assignee: Noble Paul
Priority: Minor
 Attachments: SOLR-6453.patch


 SOLR-5859 adds a leadership check every time Overseer exits loop. This 
 however gets triggered even when Solr really is exiting, causing a spurious 
 error. Here's a one-liner to stop that from happening.






[jira] [Commented] (SOLR-6453) Stop throwing an error message from Overseer on Solr exit

2014-09-24 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146112#comment-14146112
 ] 

Ramkumar Aiyengar commented on SOLR-6453:
-

Ah, okay, makes sense.. Could you add the {{zkClient.isClosed()}} check though 
as per my comment above?

 Stop throwing an error message from Overseer on Solr exit
 -

 Key: SOLR-6453
 URL: https://issues.apache.org/jira/browse/SOLR-6453
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Ramkumar Aiyengar
Assignee: Noble Paul
Priority: Minor
 Attachments: SOLR-6453.patch


 SOLR-5859 adds a leadership check every time Overseer exits loop. This 
 however gets triggered even when Solr really is exiting, causing a spurious 
 error. Here's a one-liner to stop that from happening.






[jira] [Comment Edited] (SOLR-6453) Stop throwing an error message from Overseer on Solr exit

2014-09-24 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146112#comment-14146112
 ] 

Ramkumar Aiyengar edited comment on SOLR-6453 at 9/24/14 9:19 AM:
--

Ah, okay, makes sense.. Could you add the {{zkClient.isClosed()}} check though 
as per my comment above?

Also, {{OverseerTest}} passed for me with this change, probably shouldn't have?


was (Author: andyetitmoves):
Ah, okay, makes sense.. Could you add the {{zkClient.isClosed()}} check though 
as per my comment above?

 Stop throwing an error message from Overseer on Solr exit
 -

 Key: SOLR-6453
 URL: https://issues.apache.org/jira/browse/SOLR-6453
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Ramkumar Aiyengar
Assignee: Noble Paul
Priority: Minor
 Attachments: SOLR-6453.patch


 SOLR-5859 adds a leadership check every time Overseer exits loop. This 
 however gets triggered even when Solr really is exiting, causing a spurious 
 error. Here's a one-liner to stop that from happening.






[jira] [Commented] (LUCENE-5569) Rename AtomicReader to LeafReader

2014-09-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146123#comment-14146123
 ] 

ASF subversion and git services commented on LUCENE-5569:
-

Commit 1627266 from [~rjernst] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1627266 ]

LUCENE-5569: Rename more locations in test classes and comments (merged 1627258)

 Rename AtomicReader to LeafReader
 -

 Key: LUCENE-5569
 URL: https://issues.apache.org/jira/browse/LUCENE-5569
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Priority: Blocker
 Fix For: 5.0

 Attachments: LUCENE-5569.patch, LUCENE-5569.patch


 See LUCENE-5527 for more context: several of us seem to prefer {{Leaf}} to 
 {{Atomic}}.
 Talking from my experience, I was a bit confused in the beginning that this 
 thing is named {{AtomicReader}}, since {{Atomic}} is otherwise used in Java 
 in the context of concurrency. So maybe renaming it to {{Leaf}} would help 
 remove this confusion and also carry the information that these readers are 
 used as leaves of top-level readers?






[jira] [Resolved] (LUCENE-5569) Rename AtomicReader to LeafReader

2014-09-24 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst resolved LUCENE-5569.

   Resolution: Fixed
Fix Version/s: (was: Trunk)
   5.0
 Assignee: Ryan Ernst

 Rename AtomicReader to LeafReader
 -

 Key: LUCENE-5569
 URL: https://issues.apache.org/jira/browse/LUCENE-5569
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Ryan Ernst
Priority: Blocker
 Fix For: 5.0

 Attachments: LUCENE-5569.patch, LUCENE-5569.patch


 See LUCENE-5527 for more context: several of us seem to prefer {{Leaf}} to 
 {{Atomic}}.
 Speaking from my experience, I was initially confused that this thing is 
 named {{AtomicReader}}, since {{Atomic}} is otherwise used in Java in the 
 context of concurrency. So maybe renaming it to {{Leaf}} would remove this 
 confusion and also convey that these readers are used as the leaves of 
 top-level readers?






[jira] [Commented] (SOLR-6453) Stop throwing an error message from Overseer on Solr exit

2014-09-24 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14146125#comment-14146125
 ] 

Noble Paul commented on SOLR-6453:
--


bq. Also, OverseerTest passed for me with this change, probably shouldn't have?

The cleanup is usually not required, so the tests should normally see no 
impact. Under load it might behave differently.

 Stop throwing an error message from Overseer on Solr exit
 ---------------------------------------------------------

 Key: SOLR-6453
 URL: https://issues.apache.org/jira/browse/SOLR-6453
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Ramkumar Aiyengar
Assignee: Noble Paul
Priority: Minor
 Attachments: SOLR-6453.patch


 SOLR-5859 adds a leadership check every time Overseer exits loop. This 
 however gets triggered even when Solr really is exiting, causing a spurious 
 error. Here's a one-liner to stop that from happening.






[jira] [Created] (SOLR-6557) bandwidth cap for large file replication

2014-09-24 Thread Kenji Kikuchi (JIRA)
Kenji Kikuchi created SOLR-6557:
-------------------------------

 Summary: bandwidth cap for large file replication
 Key: SOLR-6557
 URL: https://issues.apache.org/jira/browse/SOLR-6557
 Project: Solr
  Issue Type: Improvement
  Components: replication (java)
Affects Versions: 5.0, Trunk
Reporter: Kenji Kikuchi
Priority: Minor
 Fix For: 5.0, Trunk


Sometimes I need to set up a slave server in a rack where a master server
does not exist. In this case, our rack-to-rack bandwidth is often saturated
during large file transfers, such as initial replication, large index file
merges, and optimization. This impairs our other services. So I think a
bandwidth cap for large file replication would be helpful for large web
service providers and would add flexibility to our Solr slave server setups.

Currently I am limiting replication bandwidth by using the tc command on
the master servers. But to use tc, I need to log in to an on-service master
server and add tc-related settings for each new slave server, because tc
only shapes outbound traffic. So a feature for setting the desired
replication bandwidth cap with just one line in a new slave's configuration
file would reduce our Solr operations and keep the on-service master
servers secure by avoiding the need to log in.

Parsing the bandwidth setting in the slave's solrconfig.xml in 'bits per
second' is preferable for me, because most of our site operators use 'bits
per second', not 'bytes per second', in our network monitoring metrics.
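For illustration, the kind of cap described above can also be enforced in application code by throttling the copy loop itself. Below is a minimal, self-contained sketch in plain Java (the class name and byte-per-second unit are hypothetical, not Solr's actual replication code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

/** Copies a stream while capping average throughput at maxBytesPerSec. */
public class ThrottledCopy {
    public static long copy(InputStream in, OutputStream out,
                            long maxBytesPerSec) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        long start = System.nanoTime();
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            total += n;
            // Sleep just long enough that total / elapsed <= maxBytesPerSec.
            long expectedNanos = total * 1_000_000_000L / maxBytesPerSec;
            long sleepMs = (expectedNanos - (System.nanoTime() - start)) / 1_000_000L;
            if (sleepMs > 0) {
                try {
                    Thread.sleep(sleepMs);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    throw new IOException("copy interrupted", e);
                }
            }
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // Copy 64 KB at a 32 KB/s cap: should take roughly two seconds.
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long t0 = System.nanoTime();
        long copied = copy(new ByteArrayInputStream(new byte[64 * 1024]), sink, 32 * 1024);
        System.out.println(copied + " bytes in " + (System.nanoTime() - t0) / 1e9 + "s");
    }
}
```

Note that the issue asks for bits per second, so a real implementation would convert units, and would likely use a smoother token-bucket scheme rather than sleeping after each buffer.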







[jira] [Commented] (SOLR-6557) bandwidth cap for large file replication

2014-09-24 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14146131#comment-14146131
 ] 

Ramkumar Aiyengar commented on SOLR-6557:
-----------------------------------------

Does SOLR-6485 do what you are looking for?

 bandwidth cap for large file replication
 

 Key: SOLR-6557
 URL: https://issues.apache.org/jira/browse/SOLR-6557
 Project: Solr
  Issue Type: Improvement
  Components: replication (java)
Affects Versions: 5.0, Trunk
Reporter: Kenji Kikuchi
Priority: Minor
 Fix For: 5.0, Trunk


 Sometimes I need to set up a slave server in a rack where a master server
 does not exist. In this case, our rack-to-rack bandwidth is often saturated
 during large file transfers, such as initial replication, large index file
 merges, and optimization. This impairs our other services. So I think a
 bandwidth cap for large file replication would be helpful for large web
 service providers and would add flexibility to our Solr slave server setups.
 Currently I am limiting replication bandwidth by using the tc command on
 the master servers. But to use tc, I need to log in to an on-service master
 server and add tc-related settings for each new slave server, because tc
 only shapes outbound traffic. So a feature for setting the desired
 replication bandwidth cap with just one line in a new slave's configuration
 file would reduce our Solr operations and keep the on-service master
 servers secure by avoiding the need to log in.
 Parsing the bandwidth setting in the slave's solrconfig.xml in 'bits per
 second' is preferable for me, because most of our site operators use 'bits
 per second', not 'bytes per second', in our network monitoring metrics.






[jira] [Updated] (SOLR-6557) bandwidth cap for large file replication

2014-09-24 Thread Kenji Kikuchi (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenji Kikuchi updated SOLR-6557:

Attachment: SOLR-replication_bandwidth.patch

Please review the attached patch.
If there are any problems with it, please let me know.

 bandwidth cap for large file replication
 

 Key: SOLR-6557
 URL: https://issues.apache.org/jira/browse/SOLR-6557
 Project: Solr
  Issue Type: Improvement
  Components: replication (java)
Affects Versions: 5.0, Trunk
Reporter: Kenji Kikuchi
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-replication_bandwidth.patch


 Sometimes I need to set up a slave server in a rack where a master server
 does not exist. In this case, our rack-to-rack bandwidth is often saturated
 during large file transfers, such as initial replication, large index file
 merges, and optimization. This impairs our other services. So I think a
 bandwidth cap for large file replication would be helpful for large web
 service providers and would add flexibility to our Solr slave server setups.
 Currently I am limiting replication bandwidth by using the tc command on
 the master servers. But to use tc, I need to log in to an on-service master
 server and add tc-related settings for each new slave server, because tc
 only shapes outbound traffic. So a feature for setting the desired
 replication bandwidth cap with just one line in a new slave's configuration
 file would reduce our Solr operations and keep the on-service master
 servers secure by avoiding the need to log in.
 Parsing the bandwidth setting in the slave's solrconfig.xml in 'bits per
 second' is preferable for me, because most of our site operators use 'bits
 per second', not 'bytes per second', in our network monitoring metrics.






[jira] [Resolved] (LUCENE-5863) Generate backwards compatibility indexes for all 4.x releases

2014-09-24 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-5863.

Resolution: Fixed

This is done, right?  We now have the smoke tester checking this...

 Generate backwards compatibility indexes for all 4.x releases
 -------------------------------------------------------------

 Key: LUCENE-5863
 URL: https://issues.apache.org/jira/browse/LUCENE-5863
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Robert Muir
 Fix For: 4.10.1, 5.0, Trunk

 Attachments: testTheBWCTester.py


 Currently the versioning here is a total mess, and it's inconsistent across 
 bugfix releases.
 We should just generate back compat indexes for every release: regardless of 
 whether the index format changed, even for bugfix releases. This ensures at 
 least we try to test that the back compat is working.






Re: [JENKINS] Lucene-Solr-SmokeRelease-5.x - Build # 196 - Still Failing

2014-09-24 Thread Robert Muir
I am extremely close to disabling this analytics module, as I
mentioned last week.

Guys, the solr build is still fucking broken.

On Wed, Sep 24, 2014 at 4:46 AM, Apache Jenkins Server
jenk...@builds.apache.org wrote:
 Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.x/196/

 No tests ran.

 Build Log:
 [...truncated 51701 lines...]
 prepare-release-no-sign:
 [mkdir] Created dir: 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist
  [copy] Copying 446 files to 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/lucene
  [copy] Copying 254 files to 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/solr
[smoker] Java 1.7 JAVA_HOME=/home/jenkins/tools/java/latest1.7
[smoker] NOTE: output encoding is US-ASCII
[smoker]
[smoker] Load release URL 
 file:/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/...
[smoker]
[smoker] Test Lucene...
[smoker]   test basics...
[smoker]   get KEYS
[smoker] 0.1 MB in 0.01 sec (14.2 MB/sec)
[smoker]   check changes HTML...
[smoker]   download lucene-5.0.0-src.tgz...
[smoker] 27.6 MB in 0.04 sec (676.0 MB/sec)
[smoker] verify md5/sha1 digests
[smoker]   download lucene-5.0.0.tgz...
[smoker] 61.1 MB in 0.09 sec (669.8 MB/sec)
[smoker] verify md5/sha1 digests
[smoker]   download lucene-5.0.0.zip...
[smoker] 70.5 MB in 0.13 sec (524.8 MB/sec)
[smoker] verify md5/sha1 digests
[smoker]   unpack lucene-5.0.0.tgz...
[smoker] verify JAR metadata/identity/no javax.* or java.* classes...
[smoker] test demo with 1.7...
[smoker]   got 5561 hits for query lucene
[smoker] checkindex with 1.7...
[smoker] check Lucene's javadoc JAR
[smoker]   unpack lucene-5.0.0.zip...
[smoker] verify JAR metadata/identity/no javax.* or java.* classes...
[smoker] test demo with 1.7...
[smoker]   got 5561 hits for query lucene
[smoker] checkindex with 1.7...
[smoker] check Lucene's javadoc JAR
[smoker]   unpack lucene-5.0.0-src.tgz...
[smoker] make sure no JARs/WARs in src dist...
[smoker] run ant validate
[smoker] run tests w/ Java 7 and 
 testArgs='-Dtests.jettyConnector=Socket -Dtests.disableHdfs=true 
 -Dtests.multiplier=1 -Dtests.slow=false'...
[smoker] test demo with 1.7...
[smoker]   got 223 hits for query lucene
[smoker] checkindex with 1.7...
[smoker] generate javadocs w/ Java 7...
[smoker]
[smoker] Crawl/parse...
[smoker]
[smoker] Verify...
[smoker]   confirm all releases have coverage in TestBackwardsCompatibility
[smoker] find all past Lucene releases...
[smoker] run TestBackwardsCompatibility..
[smoker] success!
[smoker]
[smoker] Test Solr...
[smoker]   test basics...
[smoker]   get KEYS
[smoker] 0.1 MB in 0.01 sec (11.3 MB/sec)
[smoker]   check changes HTML...
[smoker]   download solr-5.0.0-src.tgz...
[smoker] 33.8 MB in 0.08 sec (443.6 MB/sec)
[smoker] verify md5/sha1 digests
[smoker]   download solr-5.0.0.tgz...
[smoker] 143.2 MB in 0.60 sec (237.8 MB/sec)
[smoker] verify md5/sha1 digests
[smoker]   download solr-5.0.0.zip...
[smoker] 149.3 MB in 0.74 sec (201.7 MB/sec)
[smoker] verify md5/sha1 digests
[smoker]   unpack solr-5.0.0.tgz...
[smoker] verify JAR metadata/identity/no javax.* or java.* classes...
[smoker] unpack lucene-5.0.0.tgz...
[smoker]   **WARNING**: skipping check of 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
  it has javax.* classes
[smoker]   **WARNING**: skipping check of 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
  it has javax.* classes
[smoker] verify WAR metadata/contained JAR identity/no javax.* or 
 java.* classes...
[smoker] unpack lucene-5.0.0.tgz...
[smoker] copying unpacked distribution for Java 7 ...
[smoker] test solr example w/ Java 7...
[smoker]   start Solr instance 
 (log=/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/solr-example.log)...
[smoker]   startup done
[smoker]   test utf8...
[smoker]   index example docs...
[smoker]   run query...
[smoker]   stop server (SIGINT)...
[smoker]   unpack solr-5.0.0.zip...
[smoker] verify JAR metadata/identity/no 

[jira] [Commented] (SOLR-6460) Keep transaction logs around longer

2014-09-24 Thread Renaud Delbru (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14146174#comment-14146174
 ] 

Renaud Delbru commented on SOLR-6460:
-------------------------------------

Hi,

here is an initial analysis and a proposal for the modifications of the
UpdateLog for the CDCR scenario.
Most of the original workflow of the UpdateLog can be left untouched.
However, it is necessary to keep the concept of a maximum number of records
to retain (except for the cleaning of old transaction logs) so as not to
interfere with the normal workflow.

h4. Cleaning of Old Transaction Logs

The logic to remove old tlog files should be modified so that it relies on
pointers instead of a limit defined by the maximum number of records to keep.
The UpdateLog should be the one in charge of keeping the list of pointers and
of managing their life-cycle (or it may delegate this to the LogReader, which
is presented next). Such a pointer, denoted LogPointer, should be composed of
a tlog file and an associated file pointer.

h4. Log Reader

The UpdateLog must provide a log reader, denoted LogReader, that will be used 
by the CDC Replicator to search, scan and read the update logs. The LogReader 
will wrap a LogPointer and hide its management (e.g., instantiation, increment, 
release).

The operations that must be provided by the LogReader are:
* Scan: move LogPointer to next entry
* Read: read a log entry specified by the LogPointer
* Lookup: lookup a version number - this will be performed during the 
initialisation of the CDC Replicator / election of a new leader, therefore 
rarely.

The LogReader must read not only old tlog files but also the new tlog file
(i.e., the transaction log being written). This requires specific logic, since
a LogReader can be exhausted at a time t1 and have new entries available at a
time t2.
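The scan/read/lookup operations listed above could be captured in an interface roughly like the following sketch (hypothetical names and signatures, not the actual Solr classes; a trivial in-memory implementation is included for illustration):

```java
import java.io.Closeable;
import java.io.IOException;

/** A position in the update logs: a tlog file id plus an offset (sketch). */
class LogPointer {
    final long tlogId;
    final long filePointer;
    LogPointer(long tlogId, long filePointer) {
        this.tlogId = tlogId;
        this.filePointer = filePointer;
    }
}

/** Reader over the update logs; wraps and manages a LogPointer (sketch). */
interface LogReader extends Closeable {
    /** Scan: advance the pointer to the next entry; false when exhausted. */
    boolean next() throws IOException;

    /** Read: return the log entry at the current pointer position. */
    Object read() throws IOException;

    /** Lookup: seek to the entry with the given version; false if absent. */
    boolean seek(long version) throws IOException;
}

/** Trivial in-memory implementation, for illustration only. */
class InMemoryLogReader implements LogReader {
    private final long[] versions;
    private int pos = -1; // positioned before the first entry

    InMemoryLogReader(long[] versions) { this.versions = versions; }

    public boolean next() { return ++pos < versions.length; }

    public Object read() { return versions[pos]; }

    public boolean seek(long version) {
        for (int i = 0; i < versions.length; i++) {
            if (versions[i] == version) { pos = i; return true; }
        }
        return false;
    }

    public void close() {}
}
```

A real reader would additionally have to handle the "exhausted at t1, new entries at t2" case by re-checking the tail of the current tlog on each call to next().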

h4. Log Index

In order to support efficient lookup of version numbers across a large number
of tlog files, we need a pre-computed index of version numbers across tlog
files.
The index could be designed as a list of tlog files, each associated with its
lower and upper bound in terms of version numbers. A search will then read
this index to quickly find the tlog files containing a given version number,
then read those tlog files to find the associated entry.
However, a single tlog file can be large in certain scenarios. Therefore, we
could add a secondary index per tlog file, containing a list of (version,
pointer) pairs. This will allow the LogReader to quickly find an entry without
having to scan the full tlog file. This index will be created and managed by
the TransactionLog.
This secondary index however duplicates the version number for each log entry.
A possible optimisation is to modify the format of the transaction log so that
the version number is not stored as part of the log entry.
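The two-level index described above might be sketched as follows (hypothetical names, not the actual Solr code): a per-file [min, max] version bound for pruning, plus a sorted (version, pointer) map per tlog file.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

/** Two-level version index over tlog files (illustrative sketch). */
class LogIndex {
    /** Secondary index for one tlog file: version -> file pointer. */
    static class TlogIndex {
        final String tlogName;
        final TreeMap<Long, Long> versionToPointer = new TreeMap<>();
        TlogIndex(String tlogName) { this.tlogName = tlogName; }
        long min() { return versionToPointer.firstKey(); }
        long max() { return versionToPointer.lastKey(); }
    }

    private final List<TlogIndex> files = new ArrayList<>();

    void add(TlogIndex idx) { files.add(idx); }

    /**
     * Find the (tlogName, pointer) for a version: first prune by each file's
     * [min, max] bounds, then look inside the matching file's sorted map.
     */
    String[] lookup(long version) {
        for (TlogIndex f : files) {
            if (version < f.min() || version > f.max()) continue; // prune
            Long ptr = f.versionToPointer.get(version);
            if (ptr != null) return new String[] { f.tlogName, ptr.toString() };
        }
        return null;
    }
}
```

The TreeMap stands in for the on-disk sorted list of pairs; binary search over a flat array would give the same O(log n) per-file lookup without the per-entry object overhead.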

h4. Transaction Log

The TransactionLog class opens the tlog file in its constructor. This could
be problematic with a large number of tlog files, as it will exhaust the
file descriptors. One possible solution is to create a subclass for read-only
mode that does not open the file in the constructor. Instead, the file will be
opened and closed on demand by the TransactionLog#LogReader.
The CDCR Update Logs will take care of converting old transaction log objects
into a read-only version.
This has, however, indirect consequences on the initialisation of the
UpdateLog, more precisely in the recovery phase (#recoverFromLog), as the
UpdateLog might write a commit (line 1418) at the end of an old tlog during
replaying.

h4. Integration within the UpdateHandler

We will have to extend the UpdateHandler constructor so that the UpdateLog
implementation can be switched based on configuration keys in the
solrconfig.xml file.


 Keep transaction logs around longer
 -----------------------------------

 Key: SOLR-6460
 URL: https://issues.apache.org/jira/browse/SOLR-6460
 Project: Solr
  Issue Type: Sub-task
Reporter: Yonik Seeley

 Transaction logs are currently deleted relatively quickly... but we need to 
 keep them around much longer to be used as a source for cross-datacenter 
 recovery.  This will also be useful in the future for enabling peer-sync to 
 use more historical updates before falling back to replication.






Re: How to traverse automaton with new api?

2014-09-24 Thread Dmitry Kan
Thanks a lot, Mike! I should have checked it first thing!

On 23 September 2014 17:47, Michael McCandless luc...@mikemccandless.com
wrote:

 Try looking at the sources for Automaton.toDot?  It does a similar
 traversal...

 Mike McCandless

 http://blog.mikemccandless.com


 On Tue, Sep 23, 2014 at 9:50 AM, Dmitry Kan dmitry.luc...@gmail.com
 wrote:
  o.a.l.u.automaton.Automaton api has changed in lucene 4.10
  (
 https://issues.apache.org/jira/secure/attachment/12651171/LUCENE-5752.patch
 ).
 
  Method getNumberedStates() got dropped. class State does not exist
 anymore.
 
  In the Automaton api before 4.10 the traversal could be achieved like
 this:
 
  // Automaton a;
  State[] states = a.getNumberedStates();
  for (State s : states) {
StringBuilder msg = new StringBuilder();
msg.append(String.valueOf(s.getNumber()));
if (a.getInitialState() == s) {
  msg.append( INITIAL);
}
msg.append(s.isAccept() ?  [accept] :  [reject]);
msg.append(,  + s.numTransitions +  transitions);
for (Transition t : s.getTransitions()) {
  // do something with transitions
}
log.info(msg);
  }
 
  Can anybody help on how to traverse an existing Automaton object with new
  api?
 
  Thanks,
  Dmitry
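For reference, in the post-LUCENE-5752 API states are plain ints and transitions are enumerated through a reusable Transition cursor (via methods such as getNumStates, isAccept, initTransition and getNextTransition, if memory serves; Automaton.toDot shows the canonical usage). The sketch below is a self-contained mimic of that traversal pattern in plain Java, not the actual Lucene classes:

```java
import java.util.ArrayList;
import java.util.List;

/** Self-contained mimic of the int-state / reusable-cursor traversal style. */
class MiniAutomaton {
    /** Reusable cursor, analogous in spirit to Lucene's Transition. */
    static class Transition {
        int source, dest, min, max;
        int upto; // index of the next transition to fetch
    }

    private final List<int[]>[] transitions; // per state: {dest, min, max}
    private final boolean[] accept;

    @SuppressWarnings("unchecked")
    MiniAutomaton(int numStates) {
        transitions = new List[numStates];
        accept = new boolean[numStates];
        for (int i = 0; i < numStates; i++) transitions[i] = new ArrayList<>();
    }

    void addTransition(int source, int dest, int min, int max) {
        transitions[source].add(new int[] { dest, min, max });
    }

    void setAccept(int state, boolean v) { accept[state] = v; }
    int getNumStates() { return transitions.length; }
    boolean isAccept(int state) { return accept[state]; }

    /** Point the cursor at state's first transition; return the count. */
    int initTransition(int state, Transition t) {
        t.source = state;
        t.upto = 0;
        return transitions[state].size();
    }

    /** Load the next transition of t.source into the cursor. */
    void getNextTransition(Transition t) {
        int[] tr = transitions[t.source].get(t.upto++);
        t.dest = tr[0]; t.min = tr[1]; t.max = tr[2];
    }

    /** Walk every state and transition, similar to what toDot does. */
    String describe(int initialState) {
        StringBuilder sb = new StringBuilder();
        Transition t = new Transition();
        for (int s = 0; s < getNumStates(); s++) {
            sb.append(s);
            if (s == initialState) sb.append(" INITIAL");
            sb.append(isAccept(s) ? " [accept]" : " [reject]");
            int count = initTransition(s, t);
            sb.append(", ").append(count).append(" transitions\n");
            for (int i = 0; i < count; i++) {
                getNextTransition(t);
                sb.append("  -> ").append(t.dest).append(" [")
                  .append((char) t.min).append('-').append((char) t.max).append("]\n");
            }
        }
        return sb.toString();
    }
}
```

Note that in the real API state 0 is always the initial state, so the old getInitialState() comparison disappears entirely.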





[jira] [Commented] (SOLR-6557) bandwidth cap for large file replication

2014-09-24 Thread Kenji Kikuchi (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14146201#comment-14146201
 ] 

Kenji Kikuchi commented on SOLR-6557:
-------------------------------------

Thank you for pointing me to SOLR-6485.
I read the patch; it helps my operations.

I think it would be even more helpful if maxWriteMBPerSec in SOLR-6485 could
be requested from the slave servers. This is because when I add a slave server
in the rack where a master server exists, I can use the full
server-to-rack-switch bandwidth; but when I add a slave server in a rack where
no master server exists, I need to add a bandwidth cap.


 bandwidth cap for large file replication
 

 Key: SOLR-6557
 URL: https://issues.apache.org/jira/browse/SOLR-6557
 Project: Solr
  Issue Type: Improvement
  Components: replication (java)
Affects Versions: 5.0, Trunk
Reporter: Kenji Kikuchi
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-replication_bandwidth.patch


 Sometimes I need to set up a slave server in the rack where a master
 server does not exist. In this case, our rack to rack bandwidth is often
 saturated during large file transfer, such as initial replication, large
 index file merge and optimization. This impairs our other services. So I
 think a bandwidth cap for large file replication is helpful for large web 
 service providers and adds flexibility to our Solr slave server setups.
 Currently I am limiting replication bandwidth by using a tc command on
 the master servers. But to use a tc command, I need to login to an
 on-service master server and add tc related settings to add a new slave
 server because tc command only shapes outbound traffics. So the feature
 of setting up a desired replication bandwidth cap with just one line in
 a new slave configuration file reduces our Solr operations and secures
 the on-service master servers by avoiding the need to login.
 Parsing bandwidth setting in slave solrconfig.xml in ‘bits per
 second' is preferable for me. This is because most of our site operators
 use ‘bits per second' not ‘bytes per second’ in our network monitoring
 metrics.






Re: Lucene Benchmark

2014-09-24 Thread david.w.smi...@gmail.com
I use the benchmark module for spatial and I intend to for highlighting
performance next month.

On Wednesday, September 24, 2014, Mikhail Khludnev 
mkhlud...@griddynamics.com wrote:

 Hi John,

 The obvious starting point is the package documentation:
 http://lucene.apache.org/core/4_8_0/benchmark/org/apache/lucene/benchmark/byTask/package-summary.html
 It's also described in LUA. I just got into it recently and figured out how
 to use it. Feel free to ask if you face any difficulties.

 Beware that Lucene devs use
 https://code.google.com/a/apache-extras.org/p/luceneutil/
 http://blog.mikemccandless.com/2011/04/catching-slowdowns-in-lucene.html
 I haven't got into it; I just know that it reports the fancy tables you can
 see in performance-optimization jiras.


 On Wed, Sep 24, 2014 at 10:45 AM, John Wang john.w...@gmail.com
 javascript:_e(%7B%7D,'cvml','john.w...@gmail.com'); wrote:

 Hi guys:

  Can you guys point me to some details on the Lucene Benchmark
 module? Specifically the grammar/syntax for the Algorithm files?

 Thanks

 -John




 --
 Sincerely yours
 Mikhail Khludnev
 Principal Engineer,
 Grid Dynamics

 http://www.griddynamics.com
 javascript:_e(%7B%7D,'cvml','mkhlud...@griddynamics.com');



-- 
Sent from Gmail Mobile
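For a first flavor of the algorithm-file syntax asked about in this thread, a minimal .alg file looks roughly like this (reconstructed from memory, so treat the details as approximate and check the byTask package documentation): property assignments at the top, then a task sequence in which { } groups tasks, : N repeats a group, a quoted string names a group for the reports, and closing a group with > instead of } suppresses per-task reporting.

```
# properties
analyzer=org.apache.lucene.analysis.standard.StandardAnalyzer
directory=FSDirectory
doc.stored=true

# algorithm
ResetSystemErase
CreateIndex
{ "AddDocs" AddDoc > : 1000
ForceMerge(1)
CloseIndex
OpenReader
{ "Search" Search > : 100
CloseReader
RepSumByName
```

The shipped conf/*.alg files in the benchmark module are the best reference for the full grammar (parallel groups, rates, content sources, and so on).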


[jira] [Commented] (SOLR-5961) Solr gets crazy on /overseer/queue state change

2014-09-24 Thread Ugo Matrangolo (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14146217#comment-14146217
 ] 

Ugo Matrangolo commented on SOLR-5961:
--------------------------------------

Happened again :/

After routine network maintenance caused a ~30-second connectivity hiccup,
the SOLR cluster started to spam /overseer/queue with more than 47k events.

{code}
[zk: zookeeper4:2181(CONNECTED) 26] get /gilt/config/solr/overseer/queue
null
cZxid = 0x290008df29
ctime = Fri Aug 29 02:06:47 GMT+00:00 2014
mZxid = 0x290008df29
mtime = Fri Aug 29 02:06:47 GMT+00:00 2014
pZxid = 0x290023cedd
cversion = 60632
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 0
numChildren = 47822
[zk: zookeeper4:2181(CONNECTED) 27]
{code}

This time we tried to wait for it to heal itself, and we watched the
numChildren count go down but then back up again: it was not going to fix
itself.

As usual, we had to shut down the whole cluster, rmr /overseer/queue, and
restart.

Annoying :/

 Solr gets crazy on /overseer/queue state change
 -----------------------------------------------

 Key: SOLR-5961
 URL: https://issues.apache.org/jira/browse/SOLR-5961
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.7.1
 Environment: CentOS, 1 shard - 3 replicas, ZK cluster with 3 nodes 
 (separate machines)
Reporter: Maxim Novikov
Priority: Critical

 No idea how to reproduce it, but sometimes Solr starts littering the log with 
 the following messages:
 419158 [localhost-startStop-1-EventThread] INFO  
 org.apache.solr.cloud.DistributedQueue  - LatchChildWatcher fired on path: 
 /overseer/queue state: SyncConnected type NodeChildrenChanged
 419190 [Thread-3] INFO  org.apache.solr.cloud.Overseer  - Update state 
 numShards=1 message={
   "operation":"state",
   "state":"recovering",
   "base_url":"http://${IP_ADDRESS}/solr",
   "core":"${CORE_NAME}",
   "roles":null,
   "node_name":"${NODE_NAME}_solr",
   "shard":"shard1",
   "collection":"${COLLECTION_NAME}",
   "numShards":"1",
   "core_node_name":"core_node2"}
 It continues spamming these messages with no delay, and restarting all 
 the nodes does not help. I have even tried to stop all the nodes in the 
 cluster first, but then when I start one, the behavior doesn't change; it 
 goes nuts with this /overseer/queue state again.
 PS The only way to handle this was to stop everything, manually clean up all 
 the data in ZooKeeper related to Solr, and then rebuild everything from 
 scratch. As you should understand, this is kind of unbearable in a production 
 environment.






[jira] [Comment Edited] (SOLR-5961) Solr gets crazy on /overseer/queue state change

2014-09-24 Thread Ugo Matrangolo (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14146217#comment-14146217
 ] 

Ugo Matrangolo edited comment on SOLR-5961 at 9/24/14 11:23 AM:


Happened again :/

After routine network maintenance caused a ~30-second connectivity hiccup,
the SOLR cluster started to spam /overseer/queue with more than 47k
events.

{code}
[zk: zookeeper4:2181(CONNECTED) 26] get /gilt/config/solr/overseer/queue
null
cZxid = 0x290008df29
ctime = Fri Aug 29 02:06:47 GMT+00:00 2014
mZxid = 0x290008df29
mtime = Fri Aug 29 02:06:47 GMT+00:00 2014
pZxid = 0x290023cedd
cversion = 60632
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 0
numChildren = 47822
[zk: zookeeper4:2181(CONNECTED) 27]
{code}

This time we tried to wait for it to heal itself, and we watched the
numChildren count go down but then back up again: it was not going to fix
itself.

As usual, we had to shut down the whole cluster, rmr /overseer/queue, and
restart.

Annoying :/


was (Author: ugo.matrangolo):
Happened again :/

After a routine maintenance of our network causing a 30 secs connectivity 
hiccup the SOLR cluster started to spam overseer/queue with more than 47k+ 
events.

{code}
[zk: zookeeper4:2181(CONNECTED) 26] get /gilt/config/solr/overseer/queue
null
cZxid = 0x290008df29
ctime = Fri Aug 29 02:06:47 GMT+00:00 2014
mZxid = 0x290008df29
mtime = Fri Aug 29 02:06:47 GMT+00:00 2014
pZxid = 0x290023cedd
cversion = 60632
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 0
numChildren = 47822
[zk: zookeeper4:2181(CONNECTED) 27]
{code}

This time we tried to wait for it to heal itself and we watched the numChildren 
count go down but then up again: no way it was going to fix alone.

As usual we had to shutdown all the cluster, rmr /overseer/queue and restart.

Annoying :/

 Solr gets crazy on /overseer/queue state change
 -----------------------------------------------

 Key: SOLR-5961
 URL: https://issues.apache.org/jira/browse/SOLR-5961
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.7.1
 Environment: CentOS, 1 shard - 3 replicas, ZK cluster with 3 nodes 
 (separate machines)
Reporter: Maxim Novikov
Priority: Critical

 No idea how to reproduce it, but sometimes Solr starts littering the log with 
 the following messages:
 419158 [localhost-startStop-1-EventThread] INFO  
 org.apache.solr.cloud.DistributedQueue  - LatchChildWatcher fired on path: 
 /overseer/queue state: SyncConnected type NodeChildrenChanged
 419190 [Thread-3] INFO  org.apache.solr.cloud.Overseer  - Update state 
 numShards=1 message={
   "operation":"state",
   "state":"recovering",
   "base_url":"http://${IP_ADDRESS}/solr",
   "core":"${CORE_NAME}",
   "roles":null,
   "node_name":"${NODE_NAME}_solr",
   "shard":"shard1",
   "collection":"${COLLECTION_NAME}",
   "numShards":"1",
   "core_node_name":"core_node2"}
 It continues spamming these messages with no delay, and restarting all 
 the nodes does not help. I have even tried to stop all the nodes in the 
 cluster first, but then when I start one, the behavior doesn't change; it 
 goes nuts with this /overseer/queue state again.
 PS The only way to handle this was to stop everything, manually clean up all 
 the data in ZooKeeper related to Solr, and then rebuild everything from 
 scratch. As you should understand, this is kind of unbearable in a production 
 environment.






Re: Lucene Benchmark

2014-09-24 Thread Shalin Shekhar Mangar
There's a ticket for Solr too but it is kind of clunky. I plan to spend a
few weeks next month on the Solr benchmark suite.

On Wed, Sep 24, 2014 at 4:43 PM, david.w.smi...@gmail.com 
david.w.smi...@gmail.com wrote:

 I use the benchmark module for spatial and I intend to for highlighting
 performance next month.


 On Wednesday, September 24, 2014, Mikhail Khludnev 
 mkhlud...@griddynamics.com wrote:

 Hi John,

 The obvious starting point is the package documentation:
 http://lucene.apache.org/core/4_8_0/benchmark/org/apache/lucene/benchmark/byTask/package-summary.html
 It's also described in LUA. I just got into it recently and figured out how
 to use it. Feel free to ask if you face any difficulties.

 Beware that Lucene devs use
 https://code.google.com/a/apache-extras.org/p/luceneutil/
 http://blog.mikemccandless.com/2011/04/catching-slowdowns-in-lucene.html
 I haven't got into it; I just know that it reports the fancy tables you can
 see in performance-optimization jiras.


 On Wed, Sep 24, 2014 at 10:45 AM, John Wang john.w...@gmail.com wrote:

 Hi guys:

  Can you guys point me to some details on the Lucene Benchmark
 module? Specifically the grammar/syntax for the Algorithm files?

 Thanks

 -John




 --
 Sincerely yours
 Mikhail Khludnev
 Principal Engineer,
 Grid Dynamics

 http://www.griddynamics.com



 --
 Sent from Gmail Mobile




-- 
Regards,
Shalin Shekhar Mangar.


[jira] [Created] (SOLR-6558) solr does not insert the first line in the csv file

2014-09-24 Thread fatih (JIRA)
fatih created SOLR-6558:
-----------------------

 Summary: solr does not insert the first line in the csv file
 Key: SOLR-6558
 URL: https://issues.apache.org/jira/browse/SOLR-6558
 Project: Solr
  Issue Type: Bug
  Components: Build, clients - java, contrib - DataImportHandler
Affects Versions: 4.7.2
 Environment: 4.7.2 solr , windows 7 and  java version is 1.7.0_25
Reporter: fatih
 Fix For: 4.7.2


Link to the Stack Overflow question as well:
http://stackoverflow.com/questions/26000623/solr-does-not-insert-the-first-line-in-the-csv-file

When a CSV file is uploaded via a curl command as below

C:\> curl 
"http://localhost:8983/solr/update/csv?commit=true&stream.file=C:\dev\tools\solr-4.7.2\data.txt&stream.contentType=text/csv&header=false&fieldnames=id,cat,pubyear_i,title,author,series_s,sequence_i&skipLines=0"


and the content of data.txt is as below:

book1,fantasy,2000,A Storm of Swords,George R.R. Martin,A Song of Ice and 
Fire,3
book2,fantasy,2005,A Feast for Crows,George R.R. Martin,A Song of Ice and 
Fire,4
book3,fantasy,2011,A Dance with Dragons,George R.R. Martin,A Song of Ice 
and Fire,5
book4,sci-fi,1987,Consider Phlebas,Iain M. Banks,The Culture,1
book5,sci-fi,1988,The Player of Games,Iain M. Banks,The Culture,2
book6,sci-fi,1990,Use of Weapons,Iain M. Banks,The Culture,3
book7,fantasy,1984,Shadows Linger,Glen Cook,The Black Company,2
book8,fantasy,1984,The White Rose,Glen Cook,The Black Company,3
book9,fantasy,1989,Shadow Games,Glen Cook,The Black Company,4
book10,sci-fi,2001,Gridlinked,Neal Asher,Ian Cormac,1
book11,sci-fi,2003,The Line of Polity,Neal Asher,Ian Cormac,2
book12,sci-fi,2005,Brass Man,Neal Asher,Ian Cormac,3

The first record in the data.txt file, whose id is book1, is not being 
inserted into Solr. Can someone please tell me why?

http://localhost:8983/solr/query?q=id:book1
{
  "responseHeader":{
    "status":0,
    "QTime":1,
    "params":{
      "q":"id:book1"}},
  "response":{"numFound":0,"start":0,"docs":[]
  }}

The Solr log nevertheless shows that book1 is being added:

15440876 [searcherExecutor-5-thread-1] INFO  org.apache.solr.core.SolrCore  – [collection1] Registered new searcher Searcher@177fcdf1[collection1] main{StandardDirectoryReader(segments_1g:124:nrt _z(4.7):C12)}
15440877 [qtp84034882-11] INFO  org.apache.solr.update.processor.LogUpdateProcessor  – [collection1] webapp=/solr path=/update params={fieldnames=id,cat,pubyear_i,title,author,series_s,sequence_i&skipLines=0&commit=true&stream.contentType=text/csv&header=false&stream.file=C:\dev\tools\solr-4.7.2\data.txt} {add=[?book1 (1480070032327180288), book2 (1480070032332423168), book3 (1480070032335568896), book4 (1480070032337666048), book5 (1480070032339763200), book6 (1480070032341860352), book7 (1480070032343957504), book8 (1480070032347103232), book9 (1480070032349200384), book10 (1480070032351297536), ... (12 adds)],commit=} 0 92

If I query for all of the data, as below, you can also see that book1 is still missing:


http://localhost:8983/solr/query?q=id:book*&sort=pubyear_i+desc&fl=id,title,pubyear_i&rows=15
{
  "responseHeader":{
    "status":0,
    "QTime":1,
    "params":{
      "fl":"id,title,pubyear_i",
      "sort":"pubyear_i desc",
      "q":"id:book*",
      "rows":"15"}},
  "response":{"numFound":11,"start":0,"docs":[
      {
        "id":"book3",
        "pubyear_i":2011,
        "title":["A Dance with Dragons"]},
      {
        "id":"book2",
        "pubyear_i":2005,
        "title":["A Feast for Crows"]},
      {
        "id":"book12",
        "pubyear_i":2005,
        "title":["Brass Man"]},
      {
        "id":"book11",
        "pubyear_i":2003,
        "title":["The Line of Polity"]},
      {
        "id":"book10",
        "pubyear_i":2001,
        "title":["Gridlinked"]},
      {
        "id":"book6",
        "pubyear_i":1990,
        "title":["Use of Weapons"]},
      {
        "id":"book9",
        "pubyear_i":1989,
        "title":["Shadow Games"]},
      {
        "id":"book5",
        "pubyear_i":1988,
        "title":["The Player of Games"]},
      {
        "id":"book4",
        "pubyear_i":1987,
        "title":["Consider Phlebas"]},
      {
        "id":"book7",
        "pubyear_i":1984,
        "title":["Shadows Linger"]},
      {
        "id":"book8",
        "pubyear_i":1984,
        "title":["The White Rose"]}]
  }}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 637 - Failure

2014-09-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/637/

3 tests failed.
REGRESSION:  org.apache.solr.DistributedIntervalFacetingTest.testDistribSearch

Error Message:
Captured an uncaught exception in thread: Thread[id=58297, name=Thread-8747, 
state=RUNNABLE, group=TGRP-DistributedIntervalFacetingTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=58297, name=Thread-8747, state=RUNNABLE, 
group=TGRP-DistributedIntervalFacetingTest]
at 
__randomizedtesting.SeedInfo.seed([2B6B51549F05F06F:AA8DDF4CE85A9053]:0)
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:56646
at __randomizedtesting.SeedInfo.seed([2B6B51549F05F06F]:0)
at 
org.apache.solr.BaseDistributedSearchTestCase$5.run(BaseDistributedSearchTestCase.java:580)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured 
while waiting response from server at: https://127.0.0.1:56646
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:562)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at 
org.apache.solr.BaseDistributedSearchTestCase$5.run(BaseDistributedSearchTestCase.java:575)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:442)
at sun.security.ssl.InputRecord.read(InputRecord.java:480)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:927)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:884)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:102)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:271)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:682)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:486)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:448)
... 5 more


REGRESSION:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create 
core [halfcollection_shard1_replica1] Caused by: Could not get shard id for 
core: halfcollection_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error 
CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core 
[halfcollection_shard1_replica1] Caused by: Could not get shard id for core: 
halfcollection_shard1_replica1
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:552)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at 

[jira] [Commented] (SOLR-6558) solr does not insert the first line in the csv file

2014-09-24 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146303#comment-14146303
 ] 

Yonik Seeley commented on SOLR-6558:


Answered here: http://stackoverflow.com/a/26017883/654209
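For the archives: the update log above records the first add as ?book1, i.e.
the first document's id carries an invisible leading character. The usual
culprit for this symptom is a UTF-8 byte order mark at the start of data.txt,
so the stored id is not literally "book1" and q=id:book1 misses it. A minimal,
illustrative Python check/fix (the file path is hypothetical):

```python
import codecs

def strip_utf8_bom(path):
    """Remove a leading UTF-8 BOM from a file, if present.

    Returns True if a BOM was found and stripped, False otherwise.
    """
    with open(path, "rb") as f:
        data = f.read()
    if data.startswith(codecs.BOM_UTF8):
        # Rewrite the file without the 3-byte BOM prefix.
        with open(path, "wb") as f:
            f.write(data[len(codecs.BOM_UTF8):])
        return True
    return False
```

Stripping the BOM (or re-saving the file without one) and re-indexing should
make id:book1 match.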

 solr does not insert the first line in the csv file
 ---

 Key: SOLR-6558
 URL: https://issues.apache.org/jira/browse/SOLR-6558
 Project: Solr
  Issue Type: Bug
  Components: Build, clients - java, contrib - DataImportHandler
Affects Versions: 4.7.2
 Environment: 4.7.2 solr , windows 7 and  java version is 1.7.0_25
Reporter: fatih
  Labels: features
 Fix For: 4.7.2

   Original Estimate: 24h
  Remaining Estimate: 24h

 [quoted issue description elided; see the original report above]

[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4880 - Still Failing

2014-09-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4880/

1 tests failed.
REGRESSION:  org.apache.solr.cloud.BasicDistributedZk2Test.testDistribSearch

Error Message:
There are still nodes recoverying - waited for 45 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 45 
seconds
at 
__randomizedtesting.SeedInfo.seed([6B4F7FA250D34619:EAA9F1BA278C2625]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:178)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:755)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1374)
at 
org.apache.solr.cloud.BasicDistributedZk2Test.brindDownShardIndexSomeDocsAndRecover(BasicDistributedZk2Test.java:405)
at 
org.apache.solr.cloud.BasicDistributedZk2Test.doTest(BasicDistributedZk2Test.java:110)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[VOTE] Release 4.10.1 RC0

2014-09-24 Thread Michael McCandless
Artifacts here:
http://people.apache.org/~mikemccand/staging_area/lucene-solr-4.10.1-RC0-rev1627268

Smoke tester: python3 -u dev-tools/scripts/smokeTestRelease.py
http://people.apache.org/~mikemccand/staging_area/lucene-solr-4.10.1-RC0-rev1627268
1627268 4.10.1 /tmp/smoke4101 True

 SUCCESS! [0:29:15.587659]

Here's my +1

Mike McCandless

http://blog.mikemccandless.com




[JENKINS] Lucene-Solr-4.10-Linux (32bit/jdk1.8.0_20) - Build # 4 - Failure!

2014-09-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.10-Linux/4/
Java: 32bit/jdk1.8.0_20 -client -XX:+UseParallelGC

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
createcollection the collection error [Watcher fired on path: null state: 
SyncConnected type None]

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: 
createcollection the collection error [Watcher fired on path: null state: 
SyncConnected type None]
at 
__randomizedtesting.SeedInfo.seed([31BD59E01A6274C1:B05BD7F86D3D14FD]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:550)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.deletePartiallyCreatedCollection(CollectionsAPIDistributedZkTest.java:366)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:207)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:871)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

Re: [JENKINS] Lucene-Solr-SmokeRelease-5.x - Build # 196 - Still Failing

2014-09-24 Thread Steve Rowe
Thanks for your patience Robert.

I’ll look at it today.

Steve

On Sep 24, 2014, at 6:15 AM, Robert Muir rcm...@gmail.com wrote:

 I am extremely close to disabling this analytics module, as I
 mentioned last week.
 
 Guys the solr build is still fucking broken.
 
 On Wed, Sep 24, 2014 at 4:46 AM, Apache Jenkins Server
 jenk...@builds.apache.org wrote:
 Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.x/196/
 
 No tests ran.
 
 Build Log:
 [...truncated 51701 lines...]
 prepare-release-no-sign:
[mkdir] Created dir: 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 446 files to 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 254 files to 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 JAVA_HOME=/home/jenkins/tools/java/latest1.7
   [smoker] NOTE: output encoding is US-ASCII
   [smoker]
   [smoker] Load release URL 
 file:/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/...
   [smoker]
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.01 sec (14.2 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-5.0.0-src.tgz...
   [smoker] 27.6 MB in 0.04 sec (676.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.0.0.tgz...
   [smoker] 61.1 MB in 0.09 sec (669.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.0.0.zip...
   [smoker] 70.5 MB in 0.13 sec (524.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-5.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 5561 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 5561 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run ant validate
   [smoker] run tests w/ Java 7 and 
 testArgs='-Dtests.jettyConnector=Socket -Dtests.disableHdfs=true 
 -Dtests.multiplier=1 -Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 223 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker]
   [smoker] Crawl/parse...
   [smoker]
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker]
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.01 sec (11.3 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-5.0.0-src.tgz...
   [smoker] 33.8 MB in 0.08 sec (443.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.0.0.tgz...
   [smoker] 143.2 MB in 0.60 sec (237.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.0.0.zip...
   [smoker] 149.3 MB in 0.74 sec (201.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-5.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-5.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
  it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
  it has javax.* classes
   [smoker] verify WAR metadata/contained JAR identity/no javax.* or 
 java.* classes...
   [smoker] unpack lucene-5.0.0.tgz...
   [smoker] copying unpacked distribution for Java 7 ...
   [smoker] test solr example w/ Java 7...
   [smoker]   start Solr instance 
 (log=/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/solr-example.log)...
   [smoker]   startup done
   [smoker]   test utf8...
   [smoker]   index example docs...
   [smoker]   run query...
   [smoker]   stop server (SIGINT)...
   [smoker]   unpack 

Re: [JENKINS] Lucene-Solr-SmokeRelease-5.x - Build # 196 - Still Failing

2014-09-24 Thread Steve Rowe
Ah, looks like Uwe fixed the immediate problem.

I’ll run the smoker locally to see if there’s anything else wrong.

Steve

On Sep 24, 2014, at 10:09 AM, Steve Rowe sar...@gmail.com wrote:

 Thanks for your patience Robert.
 
 I’ll look at it today.
 
 Steve
 
 On Sep 24, 2014, at 6:15 AM, Robert Muir rcm...@gmail.com wrote:
 
 I am extremely close from disabling this analytics module, as i
 mentioned last week.
 
 Guys the solr build is still fucking broken.
 
 On Wed, Sep 24, 2014 at 4:46 AM, Apache Jenkins Server
 jenk...@builds.apache.org wrote:
 Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.x/196/
 
  [quoted Jenkins build log elided; see the build output quoted above]

Re: [JENKINS] Lucene-Solr-SmokeRelease-5.x - Build # 196 - Still Failing

2014-09-24 Thread Uwe Schindler
I fixed this already. But maybe more failures will happen. 

Uwe

On 24 September 2014 at 16:09:06 CEST, Steve Rowe sar...@gmail.com wrote:
Thanks for your patience Robert.

I’ll look at it today.

Steve

On Sep 24, 2014, at 6:15 AM, Robert Muir rcm...@gmail.com wrote:

 I am extremely close from disabling this analytics module, as i
 mentioned last week.
 
 Guys the solr build is still fucking broken.
 
 On Wed, Sep 24, 2014 at 4:46 AM, Apache Jenkins Server
 jenk...@builds.apache.org wrote:
 Build:
https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.x/196/
 
 No tests ran.
 
 Build Log:
 [...truncated 51701 lines...]
 prepare-release-no-sign:
[mkdir] Created dir:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 446 files to
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 254 files to
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 JAVA_HOME=/home/jenkins/tools/java/latest1.7
   [smoker] NOTE: output encoding is US-ASCII
   [smoker]
   [smoker] Load release URL
file:/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/...
   [smoker]
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.01 sec (14.2 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-5.0.0-src.tgz...
   [smoker] 27.6 MB in 0.04 sec (676.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.0.0.tgz...
   [smoker] 61.1 MB in 0.09 sec (669.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.0.0.zip...
   [smoker] 70.5 MB in 0.13 sec (524.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-5.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.*
classes...
   [smoker] test demo with 1.7...
   [smoker]   got 5561 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.*
classes...
   [smoker] test demo with 1.7...
   [smoker]   got 5561 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run ant validate
   [smoker] run tests w/ Java 7 and
testArgs='-Dtests.jettyConnector=Socket -Dtests.disableHdfs=true
-Dtests.multiplier=1 -Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 223 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker]
   [smoker] Crawl/parse...
   [smoker]
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in
TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker]
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.01 sec (11.3 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-5.0.0-src.tgz...
   [smoker] 33.8 MB in 0.08 sec (443.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.0.0.tgz...
   [smoker] 143.2 MB in 0.60 sec (237.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.0.0.zip...
   [smoker] 149.3 MB in 0.74 sec (201.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-5.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.*
classes...
   [smoker] unpack lucene-5.0.0.tgz...
   [smoker]   **WARNING**: skipping check of
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
it has javax.* classes
   [smoker]   **WARNING**: skipping check of
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
it has javax.* classes
   [smoker] verify WAR metadata/contained JAR identity/no javax.*
or java.* classes...
   [smoker] unpack lucene-5.0.0.tgz...
   [smoker] copying unpacked distribution for Java 7 ...
   [smoker] test solr example w/ Java 7...
   [smoker]   start Solr instance
(log=/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/solr-example.log)...
   [smoker]   startup done
   [smoker]   test utf8...
   [smoker]   index 

[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2131 - Failure

2014-09-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2131/

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this request:[https://127.0.0.1:17080, 
https://127.0.0.1:17085, https://127.0.0.1:17107]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[https://127.0.0.1:17080, https://127.0.0.1:17085, 
https://127.0.0.1:17107]
at 
__randomizedtesting.SeedInfo.seed([9AE17DA2DE9D5982:1B07F3BAA9C239BE]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:322)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:880)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:658)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:601)
at 
org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.removeAndWaitForLastReplicaGone(DeleteLastCustomShardedReplicaTest.java:117)
at 
org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.doTest(DeleteLastCustomShardedReplicaTest.java:107)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-6485) ReplicationHandler should have an option to throttle the speed of replication

2014-09-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14146385#comment-14146385
 ] 

ASF subversion and git services commented on SOLR-6485:
---

Commit 1627340 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1627340 ]

SOLR-6485

 ReplicationHandler should have an option to throttle the speed of replication
 -

 Key: SOLR-6485
 URL: https://issues.apache.org/jira/browse/SOLR-6485
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
Assignee: Noble Paul
 Attachments: SOLR-6485.patch, SOLR-6485.patch, SOLR-6485.patch, 
 SOLR-6485.patch, SOLR-6485.patch, SOLR-6485.patch, SOLR-6485.patch


 The ReplicationHandler should have an option to throttle the speed of 
 replication.
 It is useful for people who want to bring up nodes in their SolrCloud cluster or 
 who have a backup-restore API and don't want to eat up all their network bandwidth 
 while replicating.
 I am writing a test case and will attach a patch shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6485) ReplicationHandler should have an option to throttle the speed of replication

2014-09-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14146389#comment-14146389
 ] 

ASF subversion and git services commented on SOLR-6485:
---

Commit 1627341 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1627341 ]

SOLR-6485

 ReplicationHandler should have an option to throttle the speed of replication
 -

 Key: SOLR-6485
 URL: https://issues.apache.org/jira/browse/SOLR-6485
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
Assignee: Noble Paul
 Attachments: SOLR-6485.patch, SOLR-6485.patch, SOLR-6485.patch, 
 SOLR-6485.patch, SOLR-6485.patch, SOLR-6485.patch, SOLR-6485.patch


 The ReplicationHandler should have an option to throttle the speed of 
 replication.
 It is useful for people who want to bring up nodes in their SolrCloud cluster or 
 who have a backup-restore API and don't want to eat up all their network bandwidth 
 while replicating.
 I am writing a test case and will attach a patch shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6241) HttpPartitionTest.testRf3WithLeaderFailover fails sometimes

2014-09-24 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter reassigned SOLR-6241:


Assignee: Timothy Potter  (was: Shalin Shekhar Mangar)

 HttpPartitionTest.testRf3WithLeaderFailover fails sometimes
 ---

 Key: SOLR-6241
 URL: https://issues.apache.org/jira/browse/SOLR-6241
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, Tests
Reporter: Shalin Shekhar Mangar
Assignee: Timothy Potter
Priority: Minor
 Fix For: 4.10


 This test fails sometimes locally as well as on jenkins.
 {code}
 Expected 2 of 3 replicas to be active but only found 1
 at org.junit.Assert.fail(Assert.java:93)
 at org.junit.Assert.assertTrue(Assert.java:43)
 at 
 org.apache.solr.cloud.HttpPartitionTest.testRf3WithLeaderFailover(HttpPartitionTest.java:367)
 at 
 org.apache.solr.cloud.HttpPartitionTest.doTest(HttpPartitionTest.java:148)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:863)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6241) HttpPartitionTest.testRf3WithLeaderFailover fails sometimes

2014-09-24 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14146394#comment-14146394
 ] 

Timothy Potter commented on SOLR-6241:
--

I'm doing some refactoring as part of SOLR-6511, and it looks to have fixed this 
issue.

 HttpPartitionTest.testRf3WithLeaderFailover fails sometimes
 ---

 Key: SOLR-6241
 URL: https://issues.apache.org/jira/browse/SOLR-6241
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, Tests
Reporter: Shalin Shekhar Mangar
Assignee: Timothy Potter
Priority: Minor
 Fix For: 4.10


 This test fails sometimes locally as well as on jenkins.
 {code}
 Expected 2 of 3 replicas to be active but only found 1
 at org.junit.Assert.fail(Assert.java:93)
 at org.junit.Assert.assertTrue(Assert.java:43)
 at 
 org.apache.solr.cloud.HttpPartitionTest.testRf3WithLeaderFailover(HttpPartitionTest.java:367)
 at 
 org.apache.solr.cloud.HttpPartitionTest.doTest(HttpPartitionTest.java:148)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:863)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6485) ReplicationHandler should have an option to throttle the speed of replication

2014-09-24 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-6485.
--
Resolution: Fixed

 ReplicationHandler should have an option to throttle the speed of replication
 -

 Key: SOLR-6485
 URL: https://issues.apache.org/jira/browse/SOLR-6485
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
Assignee: Noble Paul
 Attachments: SOLR-6485.patch, SOLR-6485.patch, SOLR-6485.patch, 
 SOLR-6485.patch, SOLR-6485.patch, SOLR-6485.patch, SOLR-6485.patch


 The ReplicationHandler should have an option to throttle the speed of 
 replication.
 It is useful for people who want to bring up nodes in their SolrCloud cluster or 
 who have a backup-restore API and don't want to eat up all their network bandwidth 
 while replicating.
 I am writing a test case and will attach a patch shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6485) ReplicationHandler should have an option to throttle the speed of replication

2014-09-24 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6485:
-
Fix Version/s: Trunk
   5.0

 ReplicationHandler should have an option to throttle the speed of replication
 -

 Key: SOLR-6485
 URL: https://issues.apache.org/jira/browse/SOLR-6485
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
Assignee: Noble Paul
 Fix For: 5.0, Trunk

 Attachments: SOLR-6485.patch, SOLR-6485.patch, SOLR-6485.patch, 
 SOLR-6485.patch, SOLR-6485.patch, SOLR-6485.patch, SOLR-6485.patch


 The ReplicationHandler should have an option to throttle the speed of 
 replication.
 It is useful for people who want to bring up nodes in their SolrCloud cluster or 
 who have a backup-restore API and don't want to eat up all their network bandwidth 
 while replicating.
 I am writing a test case and will attach a patch shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6453) Stop throwing an error message from Overseer on Solr exit

2014-09-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14146408#comment-14146408
 ] 

ASF subversion and git services commented on SOLR-6453:
---

Commit 1627343 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1627343 ]

SOLR-6453

 Stop throwing an error message from Overseer on Solr exit
 -

 Key: SOLR-6453
 URL: https://issues.apache.org/jira/browse/SOLR-6453
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Ramkumar Aiyengar
Assignee: Noble Paul
Priority: Minor
 Attachments: SOLR-6453.patch


 SOLR-5859 adds a leadership check every time the Overseer exits its loop. This, 
 however, gets triggered even when Solr really is exiting, causing a spurious 
 error. Here's a one-liner to stop that from happening.
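The shape of such a guard can be shown with a self-contained sketch. This is not the actual Overseer code from the patch; the class and method names below are made up for illustration of the pattern "only report leadership loss when we are not already shutting down":

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the guard pattern behind the one-liner: treat exiting the
// loop as an error only when the node is not shutting down.
// Illustrative only; not the actual Overseer code.
public class LoopGuardDemo {
    private final AtomicBoolean isClosed = new AtomicBoolean(false);

    /** Called when Solr really is exiting. */
    void close() {
        isClosed.set(true);
    }

    /** Returns true if losing the loop should be reported as an error. */
    boolean shouldReportLeadershipLoss() {
        return !isClosed.get();
    }
}
```

With the guard in place, a normal shutdown flips the flag first, so the spurious error is never logged.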



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6453) Stop throwing an error message from Overseer on Solr exit

2014-09-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14146412#comment-14146412
 ] 

ASF subversion and git services commented on SOLR-6453:
---

Commit 1627344 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1627344 ]

SOLR-6453

 Stop throwing an error message from Overseer on Solr exit
 -

 Key: SOLR-6453
 URL: https://issues.apache.org/jira/browse/SOLR-6453
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Ramkumar Aiyengar
Assignee: Noble Paul
Priority: Minor
 Attachments: SOLR-6453.patch


 SOLR-5859 adds a leadership check every time the Overseer exits its loop. This, 
 however, gets triggered even when Solr really is exiting, causing a spurious 
 error. Here's a one-liner to stop that from happening.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6453) Stop throwing an error message from Overseer on Solr exit

2014-09-24 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-6453.
--
   Resolution: Fixed
Fix Version/s: Trunk
   5.0

 Stop throwing an error message from Overseer on Solr exit
 -

 Key: SOLR-6453
 URL: https://issues.apache.org/jira/browse/SOLR-6453
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Ramkumar Aiyengar
Assignee: Noble Paul
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6453.patch


 SOLR-5859 adds a leadership check every time the Overseer exits its loop. This, 
 however, gets triggered even when Solr really is exiting, causing a spurious 
 error. Here's a one-liner to stop that from happening.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6511) Fencepost error in LeaderInitiatedRecoveryThread

2014-09-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14146420#comment-14146420
 ] 

ASF subversion and git services commented on SOLR-6511:
---

Commit 1627347 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1627347 ]

SOLR-6511: Fencepost error in LeaderInitiatedRecoveryThread; refactor 
HttpPartitionTest to resolve jenkins failures.

 Fencepost error in LeaderInitiatedRecoveryThread
 

 Key: SOLR-6511
 URL: https://issues.apache.org/jira/browse/SOLR-6511
 Project: Solr
  Issue Type: Bug
Reporter: Alan Woodward
Assignee: Timothy Potter
 Attachments: SOLR-6511.patch, SOLR-6511.patch


 At line 106:
 {code}
 while (continueTrying && ++tries < maxTries) {
 {code}
 should be
 {code}
 while (continueTrying && ++tries <= maxTries) {
 {code}
 This is only a problem when called from DistributedUpdateProcessor, as it can 
 have maxTries set to 1, which means the loop is never actually run.
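The effect of the fencepost is easy to reproduce in isolation. The sketch below is not the actual LeaderInitiatedRecoveryThread code; it just counts how many loop iterations run for each comparison operator:

```java
// Minimal sketch of the fencepost: counts how many retry iterations run
// for a given maxTries. Not the actual LeaderInitiatedRecoveryThread code.
public class FencepostDemo {
    static int iterations(int maxTries, boolean inclusive) {
        int tries = 0;
        int count = 0;
        boolean continueTrying = true;
        while (continueTrying && (inclusive ? ++tries <= maxTries
                                            : ++tries < maxTries)) {
            count++; // one (simulated) recovery attempt
        }
        return count;
    }

    public static void main(String[] args) {
        // Buggy form (<): with maxTries = 1 the loop body never runs.
        System.out.println(iterations(1, false)); // prints 0
        // Fixed form (<=): exactly one attempt is made.
        System.out.println(iterations(1, true));  // prints 1
    }
}
```

With maxTries = 1, `++tries < maxTries` evaluates 1 < 1 on the first pass and the loop is skipped entirely, which is exactly the DistributedUpdateProcessor case described above.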



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6559) Create an endpoint /update/xml/docs endpoint to do custom xml indexing

2014-09-24 Thread Noble Paul (JIRA)
Noble Paul created SOLR-6559:


 Summary: Create an endpoint /update/xml/docs endpoint to do custom 
xml indexing
 Key: SOLR-6559
 URL: https://issues.apache.org/jira/browse/SOLR-6559
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul


Just the way we have a JSON endpoint, create an XML endpoint too. Use the 
XPathRecordReader in DIH to do the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6559) Create an endpoint /update/xml/docs endpoint to do custom xml indexing

2014-09-24 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-6559:


Assignee: Noble Paul

 Create an endpoint /update/xml/docs endpoint to do custom xml indexing
 --

 Key: SOLR-6559
 URL: https://issues.apache.org/jira/browse/SOLR-6559
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul

 Just the way we have a JSON endpoint, create an XML endpoint too. Use the 
 XPathRecordReader in DIH to do the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_40-ea-b04) - Build # 11320 - Failure!

2014-09-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11320/
Java: 64bit/jdk1.8.0_40-ea-b04 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.admin.CoreAdminRequestStatusTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.handler.admin.CoreAdminRequestStatusTest: 1) 
Thread[id=6654, name=parallelCoreAdminExecutor-4171-thread-1, state=WAITING, 
group=TGRP-CoreAdminRequestStatusTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.admin.CoreAdminRequestStatusTest: 
   1) Thread[id=6654, name=parallelCoreAdminExecutor-4171-thread-1, 
state=WAITING, group=TGRP-CoreAdminRequestStatusTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([995D358BA6F1D5DA]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.admin.CoreAdminRequestStatusTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=6654, name=parallelCoreAdminExecutor-4171-thread-1, state=WAITING, 
group=TGRP-CoreAdminRequestStatusTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=6654, name=parallelCoreAdminExecutor-4171-thread-1, 
state=WAITING, group=TGRP-CoreAdminRequestStatusTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([995D358BA6F1D5DA]:0)


REGRESSION:  
org.apache.solr.handler.admin.CoreAdminRequestStatusTest.testCoreAdminRequestStatus

Error Message:
The status of request was expected to be completed expected:[completed] but 
was:[running]

Stack Trace:
org.junit.ComparisonFailure: The status of request was expected to be completed 
expected:[completed] but was:[running]
at 
__randomizedtesting.SeedInfo.seed([995D358BA6F1D5DA:BF431B8F18676160]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at 
org.apache.solr.handler.admin.CoreAdminRequestStatusTest.testCoreAdminRequestStatus(CoreAdminRequestStatusTest.java:84)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
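Thread leaks like the one above usually mean an ExecutorService was never shut down, so an idle worker stays parked in LinkedBlockingQueue.take() after the test finishes. The usual remedy, sketched generically (this is not the actual CoreAdminHandler code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

// Generic sketch of orderly executor shutdown, the standard fix for
// "thread leaked ... parked in LinkedBlockingQueue.take()" failures.
// Not the actual Solr CoreAdminHandler code.
public class ExecutorShutdownDemo {
    public static boolean shutdownAndAwait(ExecutorService pool,
                                           long timeoutSecs)
            throws InterruptedException {
        pool.shutdown(); // stop accepting new tasks, let queued ones finish
        if (!pool.awaitTermination(timeoutSecs, TimeUnit.SECONDS)) {
            pool.shutdownNow(); // interrupt parked and running workers
            return pool.awaitTermination(timeoutSecs, TimeUnit.SECONDS);
        }
        return true;
    }
}
```

Calling this from the component's close/shutdown path ensures no pool worker outlives the test suite.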

[jira] [Commented] (SOLR-6557) bandwidth cap for large file replication

2014-09-24 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14146454#comment-14146454
 ] 

Varun Thacker commented on SOLR-6557:
-

You can pass maxWriteMBPerSec from your slave server, so you could try out 
something like this after applying the patch: 

http://slave_host:port/solr/replication?command=fetchindex&maxWriteMBPerSec=100
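Under the hood, a write cap like this amounts to a rate-limited copy loop. A rough stdlib-only sketch of the technique follows; it is not the SOLR-6485 patch itself, and the class and method names are made up for illustration:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.InterruptedIOException;
import java.io.OutputStream;

// Rough sketch of rate-limited copying, the idea behind a
// maxWriteMBPerSec-style throttle. Not the actual SOLR-6485 code.
public class ThrottledCopy {
    public static long copy(InputStream in, OutputStream out,
                            double maxBytesPerSec) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        long start = System.nanoTime();
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            total += n;
            // Sleep just long enough that the average rate stays at the cap.
            double expectedSecs = total / maxBytesPerSec;
            double elapsedSecs = (System.nanoTime() - start) / 1e9;
            long sleepMs = (long) ((expectedSecs - elapsedSecs) * 1000);
            if (sleepMs > 0) {
                try {
                    Thread.sleep(sleepMs);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    throw new InterruptedIOException("copy interrupted");
                }
            }
        }
        return total;
    }
}
```

The sleep-to-average approach keeps the long-run transfer rate at or below the cap without needing precise per-chunk pacing.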

 bandwidth cap for large file replication
 

 Key: SOLR-6557
 URL: https://issues.apache.org/jira/browse/SOLR-6557
 Project: Solr
  Issue Type: Improvement
  Components: replication (java)
Affects Versions: 5.0, Trunk
Reporter: Kenji Kikuchi
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-replication_bandwidth.patch


 Sometimes I need to set up a slave server in the rack where a master
 server does not exist. In this case, our rack to rack bandwidth is often
 saturated during large file transfer, such as initial replication, large
 index file merge and optimization. This impairs our other services. So I
 think a bandwidth cap for large file replication is helpful for large web 
 service providers and adds flexibility to our Solr slave server setups.
 Currently I am limiting replication bandwidth by using the tc command on
 the master servers. But to use the tc command, I need to log in to an
 on-service master server and add tc-related settings when adding a new slave
 server, because the tc command only shapes outbound traffic. So the feature
 of setting up a desired replication bandwidth cap with just one line in
 a new slave configuration file reduces our Solr operations and secures
 the on-service master servers by avoiding the need to log in.
 Parsing the bandwidth setting in the slave solrconfig.xml in 'bits per
 second' is preferable for me. This is because most of our site operators
 use 'bits per second', not 'bytes per second', in our network monitoring
 metrics.
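The 'bits per second' versus 'bytes per second' distinction matters by a factor of eight, e.g. a 100 Mbit/s cap corresponds to 12.5 MB/s of index transfer. A tiny conversion helper (illustrative; the setting names are hypothetical):

```java
// Converts a cap expressed in megabits per second (the unit common in
// network monitoring) to megabytes per second (the unit used by a
// maxWriteMBPerSec-style setting): 8 bits = 1 byte.
public class BandwidthUnits {
    static double mbitsToMBytesPerSec(double mbitsPerSec) {
        return mbitsPerSec / 8.0;
    }

    public static void main(String[] args) {
        System.out.println(mbitsToMBytesPerSec(100.0)); // prints 12.5
    }
}
```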



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release 4.10.1 RC0

2014-09-24 Thread Mark Miller
+1

SUCCESS! [0:46:29.195055]

-- 
- Mark

http://about.me/markrmiller

On Wed, Sep 24, 2014 at 9:42 AM, Michael McCandless 
luc...@mikemccandless.com wrote:

 Artifacts here:

 http://people.apache.org/~mikemccand/staging_area/lucene-solr-4.10.1-RC0-rev1627268

 Smoke tester: python3 -u dev-tools/scripts/smokeTestRelease.py

 http://people.apache.org/~mikemccand/staging_area/lucene-solr-4.10.1-RC0-rev1627268
 1627268 4.10.1 /tmp/smoke4101 True

  SUCCESS! [0:29:15.587659]

 Here's my +1

 Mike McCandless

 http://blog.mikemccandless.com

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6460) Keep transaction logs around longer

2014-09-24 Thread Renaud Delbru (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Renaud Delbru updated SOLR-6460:

Attachment: SOLR-6460.patch

Here is a first patch with an initial implementation of the CdcrUpdateLog, which 
includes:
* cleaning of the old logs based on log pointers
* a log reader that reads both the old and the new tlog files.
Many nocommits and todos remain, but this might provide enough material for discussion.
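The pointer-based cleaning can be summarized as: a tlog file may be removed only once every reader's pointer has moved past it. A simplified stand-alone model of that rule (not the actual CdcrUpdateLog code; names are illustrative):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.SortedSet;

// Simplified model of pointer-based transaction-log cleaning: each
// reader holds a pointer (the oldest tlog id it still needs), and a
// tlog file is removable only when every pointer is past it.
// Illustrative only; not the actual CdcrUpdateLog implementation.
public class TlogCleanerDemo {
    /** Returns the tlog ids that are safe to delete. */
    static List<Long> removable(SortedSet<Long> tlogIds,
                                Collection<Long> readerPointers) {
        // The minimum pointer pins everything at or after it. With no
        // readers, nothing is pinned (a real implementation would likely
        // keep a bounded history instead).
        long min = readerPointers.stream()
                .mapToLong(Long::longValue)
                .min()
                .orElse(Long.MAX_VALUE);
        List<Long> out = new ArrayList<>();
        for (long id : tlogIds) {
            if (id < min) out.add(id); // no reader needs this file anymore
        }
        return out;
    }
}
```

For example, with tlogs {1..5} and readers positioned at 3 and 4, only tlogs 1 and 2 are removable; a reader still at 1 pins the entire history.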

 Keep transaction logs around longer
 ---

 Key: SOLR-6460
 URL: https://issues.apache.org/jira/browse/SOLR-6460
 Project: Solr
  Issue Type: Sub-task
Reporter: Yonik Seeley
 Attachments: SOLR-6460.patch


 Transaction logs are currently deleted relatively quickly... but we need to 
 keep them around much longer to be used as a source for cross-datacenter 
 recovery.  This will also be useful in the future for enabling peer-sync to 
 use more historical updates before falling back to replication.






[jira] [Created] (SOLR-6560) Solr example file has outdated termIndexInterval entry

2014-09-24 Thread Tom Burton-West (JIRA)
Tom Burton-West created SOLR-6560:
-

 Summary: Solr example file has outdated termIndexInterval entry
 Key: SOLR-6560
 URL: https://issues.apache.org/jira/browse/SOLR-6560
 Project: Solr
  Issue Type: Bug
  Components: documentation
Affects Versions: 4.10
Reporter: Tom Burton-West
Priority: Minor


The termIndexInterval comment and example settings in the example 
solrconfig.xml file are left over from Solr 3.x versions.  They do not apply to 
the default Solr 4.x installation, and their presence in the example is 
confusing.  

According to the JavaDocs for IndexWriterConfig, the Lucene-level
implementations of setTermIndexInterval and setReaderTermsIndexDivisor do 
not apply to the default Solr 4 PostingsFormat implementation.  

From 
(http://lucene.apache.org/core/4_10_0/core/org/apache/lucene/index/IndexWriterConfig.html#setTermIndexInterval%28int%29
 )
This parameter does not apply to all PostingsFormat implementations, including 
the default one in this release. It only makes sense for term indexes that are 
implemented as a fixed gap between terms. For example, Lucene41PostingsFormat 
implements the term index instead based upon how terms share prefixes. To 
configure its parameters (the minimum and maximum size for a block), you would 
instead use Lucene41PostingsFormat.Lucene41PostingsFormat(int, int), which can 
also be configured on a per-field basis:

The (soon to be) attached patch just removes the outdated example. 
Documentation on the wiki and in the Solr ref guide should also be updated.

If the latest Solr default postings format can be configured from Solr, perhaps 
someone with knowledge of the use case and experience configuring it could 
provide a suitable example.  Since the Solr 4 default postings format is so 
much more efficient than Solr 3.x's, there might no longer be a use case for 
messing with the parameters.




[jira] [Updated] (SOLR-6560) Solr example file has outdated termIndexInterval entry

2014-09-24 Thread Tom Burton-West (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Burton-West updated SOLR-6560:
--
Attachment: SOLR-6560.patch

Patch removes offending lines in example solrconfig.xml

 Solr example file has outdated termIndexInterval entry
 --

 Key: SOLR-6560
 URL: https://issues.apache.org/jira/browse/SOLR-6560
 Project: Solr
  Issue Type: Bug
  Components: documentation
Affects Versions: 4.10
Reporter: Tom Burton-West
Priority: Minor
 Attachments: SOLR-6560.patch


 The termIndexInterval comment and example settings in the example 
 solrconfig.xml file is left over from Solr 3.x versions.  It does not apply 
 to the default Solr  4.x installation and its presence in the example is 
 confusing.  
 According to the JavaDocs for IndexWriterConfig, the Lucene level
 implementations of setTermIndexInterval and setReaderTermsIndexDivisor these 
 do not apply to the default Solr4 PostingsFormat implementation.  
 From 
 (http://lucene.apache.org/core/4_10_0/core/org/apache/lucene/index/IndexWriterConfig.html#setTermIndexInterval%28int%29
  )
 This parameter does not apply to all PostingsFormat implementations, 
 including the default one in this release. It only makes sense for term 
 indexes that are implemented as a fixed gap between terms. For example, 
 Lucene41PostingsFormat implements the term index instead based upon how terms 
 share prefixes. To configure its parameters (the minimum and maximum size for 
 a block), you would instead use 
 Lucene41PostingsFormat.Lucene41PostingsFormat(int, int). which can also be 
 configured on a per-field basis:
 The (soon to be ) attached patch just removes the outdated example. 
 Documentation on the wiki and Solr ref guide should also be updated.
 If the latest Solr default postings format can be configured from Solr, 
 perhaps someone with knowledge of the use case and experience configuring it 
 could provide a suitable example.   Since the Solr 4 default postingsformat 
 is so much more efficient than Solr 3.x, there might no longer be a use case 
 for messing with the parameters.






Re: Lucene Benchmark

2014-09-24 Thread John Wang
Thank you Mikhail! Exactly what I was looking for!

-John

On Wed, Sep 24, 2014 at 12:59 AM, Mikhail Khludnev 
mkhlud...@griddynamics.com wrote:

 Hi John,

 The syntax is documented at
 http://lucene.apache.org/core/4_8_0/benchmark/org/apache/lucene/benchmark/byTask/package-summary.html
 It's also described in LUA. I just read through it to understand how to use
 it. Feel free to ask if you face any difficulties.

 Beware that Lucene devs use
 https://code.google.com/a/apache-extras.org/p/luceneutil/
 http://blog.mikemccandless.com/2011/04/catching-slowdowns-in-lucene.html
 I haven't gotten into it; I just know that it reports fancy tables, which you
 can see in performance-optimization JIRAs.
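 For a quick flavor of the syntax the package summary documents, a minimal
 algorithm file might look like the following (illustrative only; exact
 property names vary between Lucene versions):

```text
# ---- content source and index settings
analyzer=org.apache.lucene.analysis.standard.StandardAnalyzer
content.source=org.apache.lucene.benchmark.byTask.feeds.ReutersContentSource
directory=FSDirectory
work.dir=work

# ---- build an index with 10,000 docs, timing the AddDoc task
ResetSystemErase
CreateIndex
{ "AddDocs" AddDoc } : 10000
CloseIndex

# ---- run 1,000 searches against it
OpenReader
{ "SearchSameRdr" Search } : 1000
CloseReader

# ---- report timings grouped by task name
RepSumByName
```

 Sequences in braces repeat `: N` times, and the quoted name labels the
 round in the report.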


 On Wed, Sep 24, 2014 at 10:45 AM, John Wang john.w...@gmail.com wrote:

 Hi guys:

  Can you guys point me to some details on the Lucene Benchmark
 module? Specifically the grammar/syntax for the Algorithm files?

 Thanks

 -John




 --
 Sincerely yours
 Mikhail Khludnev
 Principal Engineer,
 Grid Dynamics

 http://www.griddynamics.com
 mkhlud...@griddynamics.com



[jira] [Commented] (SOLR-6551) ConcurrentModificationException in UpdateLog

2014-09-24 Thread Mark Bennett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14146625#comment-14146625
 ] 

Mark Bennett commented on SOLR-6551:


This is a comment Noble made to me about this via email:
I looked into the code and it's not threadsafe.

 ConcurrentModificationException in UpdateLog
 

 Key: SOLR-6551
 URL: https://issues.apache.org/jira/browse/SOLR-6551
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul

 {code}
 null:java.util.ConcurrentModificationException
at java.util.LinkedList$ListItr.checkForComodification(Unknown Source)
at java.util.LinkedList$ListItr.next(Unknown Source)
at 
 org.apache.solr.update.UpdateLog.getTotalLogsSize(UpdateLog.java:199)
at 
 org.apache.solr.update.DirectUpdateHandler2.getStatistics(DirectUpdateHandler2.java:871)
at 
 org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:159)
 {code}






[jira] [Commented] (SOLR-6551) ConcurrentModificationException in UpdateLog

2014-09-24 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14146633#comment-14146633
 ] 

Yonik Seeley commented on SOLR-6551:


I assume this bug was introduced by SOLR-5441?

 ConcurrentModificationException in UpdateLog
 

 Key: SOLR-6551
 URL: https://issues.apache.org/jira/browse/SOLR-6551
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul

 {code}
 null:java.util.ConcurrentModificationException
at java.util.LinkedList$ListItr.checkForComodification(Unknown Source)
at java.util.LinkedList$ListItr.next(Unknown Source)
at 
 org.apache.solr.update.UpdateLog.getTotalLogsSize(UpdateLog.java:199)
at 
 org.apache.solr.update.DirectUpdateHandler2.getStatistics(DirectUpdateHandler2.java:871)
at 
 org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:159)
 {code}






Re: How to openIfChanged the most recent merge?

2014-09-24 Thread Michael McCandless
I don't understand what's actually happening / going wrong here.

Maybe you can make a test case / give more details?

What assertions are broken?  Why is it bad if SMS does a merge before
you reopen?  Why are you using SMS :)

Mike McCandless

http://blog.mikemccandless.com

On Mon, Sep 22, 2014 at 6:00 PM, Mikhail Khludnev
mkhlud...@griddynamics.com wrote:
 Hello!
 I'm in trouble with Lucene IndexWriter. I'm benchmarking an algorithm
 that might look like an NRT case, but I'm not sure I particularly need
 NRT. The overall problem is writing a join index (a column holding
 docnums) by updating binary docvalues after commit, i.e.:
  - update docs
  - commit
  - read docs (openIfChanged() before)
  - updateDocVals
  - commit

 It's clunky but it works, until guess what happens... merge. Oh my.

 Once a time I have segments
 segments_ec:2090 _7c(5.0):C117/8:delGen=8:
 _7j(5.0):C1:fieldInfosGen=1:dvGen=1 _7k(5.0):C1)

 I apply one update and trigger commit, as a result I have:
 segments_ee:2102 _7c(5.0):C117/9:delGen=9:..
 _7k(5.0):C1:fieldInfosGen=1:dvGen=1 _7l(5.0):C1)

 However, somewhere inside this commit call, SerialMergeScheduler bakes a
 single solid segment
 _7m(5.0):C117
 which hasn't been exposed via any segments file so far.

 And now I get into trouble:
 if I call DR.openIfChanged(segments_ec) (even after IW.waitMerges()), I get
 segments_ee; that's fairly reasonable, keeping it incremental and fast.
 But if I use that IndexWriter, it applies new updates on top of the merged
 segment (_7m(5.0):C117), not on segments_ee, and that breaks my assertions.
 I instead need to open a reader on that merged _7m(5.0):C117, which IW keeps
 somewhere internally, and it would be better to do so incrementally. If you
 can point me at how NRT can solve this, I'd be happy to switch to it.

 Incredibly thank you for your time!!!

 --
 Sincerely yours
 Mikhail Khludnev
 Principal Engineer,
 Grid Dynamics






queryResultMaxDocsCached vs. queryResultWindowSize

2014-09-24 Thread Tom Burton-West
Hello,

No response on the Solr user list so I thought I would try the dev list.


queryResultWindowSize sets the number of documents to cache for each query
in the queryResultCache. So if you normally output 10 results per page,
and users don't go beyond page 3 of results, you could set
queryResultWindowSize to 30, and the second- and third-page requests will
read from cache, not from disk.  This is well documented in both the Solr
example solrconfig.xml file and the Solr documentation.

However, the example in solrconfig.xml and the documentation in the
reference manual for Solr 4.10 say that queryResultMaxDocsCached :

sets the maximum number of documents to cache for any entry in the
queryResultCache.

Looking at the code, it appears that the queryResultMaxDocsCached parameter
actually tells Solr not to cache any result list whose size is over
queryResultMaxDocsCached:

From:  SolrIndexSearcher.getDocListC
// lastly, put the superset in the cache if the size is less than or equal
// to queryResultMaxDocsCached
if (key != null && superset.size() <= queryResultMaxDocsCached &&
    !qr.isPartialResults()) {
  queryResultCache.put(key, superset);
}

Deciding whether or not to cache a DocList when its size is over N (where N =
queryResultMaxDocsCached) is very different from caching only N items from
the DocList, which is what the current documentation (and the variable name)
implies.
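To make the distinction concrete, here is a JDK-only model of the two settings
as I read getDocListC (a sketch under my reading of the code, not Solr code):
the requested range is rounded up to a multiple of queryResultWindowSize, and
the resulting superset is cached only if it fits under queryResultMaxDocsCached.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Model of queryResultWindowSize and queryResultMaxDocsCached semantics
// (sketch only; names mirror the solrconfig.xml settings).
class QueryResultCacheModel {
    final int windowSize;     // queryResultWindowSize
    final int maxDocsCached;  // queryResultMaxDocsCached
    final Map<String, List<Integer>> cache = new HashMap<>();

    QueryResultCacheModel(int windowSize, int maxDocsCached) {
        this.windowSize = windowSize;
        this.maxDocsCached = maxDocsCached;
    }

    // Round (start + rows) up to the next multiple of windowSize.
    int supersetSize(int start, int rows) {
        int needed = start + rows;
        return ((needed + windowSize - 1) / windowSize) * windowSize;
    }

    // A gate, not a truncation: an oversized result list is simply
    // not cached at all.
    boolean maybeCache(String key, List<Integer> superset) {
        if (superset.size() <= maxDocsCached) {
            cache.put(key, superset);
            return true;
        }
        return false; // nothing cached, not even the first N docs
    }
}
```

The key point is the last method: the documented behavior ("maximum number of
documents to cache for any entry") would truncate, but the code gates.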

Looking at the JIRA issue https://issues.apache.org/jira/browse/SOLR-291,
the original intent was to control memory use, and the variable name
originally suggested was noCacheIfLarger.

Can someone please let me know whether it is true that the
queryResultMaxDocsCached parameter actually tells Solr not to cache any
result list that contains more than queryResultMaxDocsCached documents?

If so, I will add a comment to the Cwiki doc and open a JIRA and submit a
patch to the example file.

I tried to find a test case that exercises SolrIndexSearcher.getDocListC
so I could see how queryResultWindowSize or queryResultMaxDocsCached
actually work in the debugger, but could not find one.  Could
someone please point me to a good test case that either exercises
SolrIndexSearcher.getDocListC or would be a good starting point for writing
one?


Tom



---

http://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_4_10/solr/example/solr/collection1/conf/solrconfig.xml?revision=1624269view=markup

635 <!-- Maximum number of documents to cache for any entry in the
636      queryResultCache.
637   -->
638 <queryResultMaxDocsCached>200</queryResultMaxDocsCached>


[jira] [Commented] (SOLR-6551) ConcurrentModificationException in UpdateLog

2014-09-24 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14146653#comment-14146653
 ] 

Yonik Seeley commented on SOLR-6551:


BTW, we have some very good transaction log stress tests.
Step #1 here should probably be adding calls to the new methods introduced 
(like getTotalLogsSize) and verifying that these tests can be made to fail.  In 
general, see the tests that extend TestRTGBase.

 ConcurrentModificationException in UpdateLog
 

 Key: SOLR-6551
 URL: https://issues.apache.org/jira/browse/SOLR-6551
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul

 {code}
 null:java.util.ConcurrentModificationException
at java.util.LinkedList$ListItr.checkForComodification(Unknown Source)
at java.util.LinkedList$ListItr.next(Unknown Source)
at 
 org.apache.solr.update.UpdateLog.getTotalLogsSize(UpdateLog.java:199)
at 
 org.apache.solr.update.DirectUpdateHandler2.getStatistics(DirectUpdateHandler2.java:871)
at 
 org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:159)
 {code}






[JENKINS-MAVEN] Lucene-Solr-Maven-5.x #713: POMs out of sync

2014-09-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-5.x/713/

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create 
core [halfcollection_shard1_replica1] Caused by: Could not get shard id for 
core: halfcollection_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error 
CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core 
[halfcollection_shard1_replica1] Caused by: Could not get shard id for core: 
halfcollection_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([822350BD4F4A893A:3C5DEA53815E906]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:550)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205)




Build Log:
[...truncated 53539 lines...]
BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:514: 
The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:198: 
The following error occurred while executing this line:
: Java returned: 1

Total time: 232 minutes 43 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Commented] (SOLR-6551) ConcurrentModificationException in UpdateLog

2014-09-24 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14146727#comment-14146727
 ] 

Yonik Seeley commented on SOLR-6551:


After a really quick look, it looks like logs is guarded by 
synchronized(this) (the UpdateLog monitor), and getTotalLogsSize() uses 
synchronized(logs).  So it should be an easy fix, but as mentioned above, we 
should make some tests fail w/o the fix first...
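For what it's worth, the locking mismatch described above can be reproduced in
a self-contained sketch (JDK only, hypothetical names; this is not the
UpdateLog code): the writer mutates a LinkedList under one monitor while the
reader iterates it under a different one, so the iteration is effectively
unsynchronized and can throw ConcurrentModificationException.

```java
import java.util.LinkedList;
import java.util.List;

// Minimal reproduction of the inconsistent-locking pattern (sketch only).
class LockMismatchDemo {
    private final List<Integer> logs = new LinkedList<>();

    synchronized void addLog(int size) {      // guarded by 'this'
        logs.add(size);
    }

    long totalSizeBroken() {                  // guarded by 'logs': wrong monitor,
        synchronized (logs) {                 // does not exclude addLog()
            long total = 0;
            for (int s : logs) total += s;    // may throw CME under concurrency
            return total;
        }
    }

    synchronized long totalSizeFixed() {      // same monitor as the writer
        long total = 0;
        for (int s : logs) total += s;
        return total;
    }
}
```

Single-threaded, both methods return the same answer; the broken variant only
fails under concurrent mutation, which is why a stress test is needed to catch
it.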


 ConcurrentModificationException in UpdateLog
 

 Key: SOLR-6551
 URL: https://issues.apache.org/jira/browse/SOLR-6551
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul

 {code}
 null:java.util.ConcurrentModificationException
at java.util.LinkedList$ListItr.checkForComodification(Unknown Source)
at java.util.LinkedList$ListItr.next(Unknown Source)
at 
 org.apache.solr.update.UpdateLog.getTotalLogsSize(UpdateLog.java:199)
at 
 org.apache.solr.update.DirectUpdateHandler2.getStatistics(DirectUpdateHandler2.java:871)
at 
 org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:159)
 {code}






[jira] [Commented] (SOLR-3015) QParserPlugins can not be SolrCoreAware

2014-09-24 Thread Ryan Josal (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14146798#comment-14146798
 ] 

Ryan Josal commented on SOLR-3015:
--

I ran into a problem with this today.  I have a QParserPlugin that would like 
to get some info from the index at startup, but without it being 
SolrCoreAware, I can't call getSolrConfig().getDataDir().  Having run into the 
circular-init problem in the past, I understand the motive to reduce the risk 
of problems.  On a related note, I have a SearchComponent that wants to 
(but isn't allowed to) be ResourceLoaderAware so it can load a data file from 
the config dir.  Interestingly, QParserPlugin CAN be ResourceLoaderAware.  At 
least with that one I can put the instanceDir in my config, because that 
property is automatically populated, but I can't work around the data.dir 
problem that way because it is left blank.  My personal feeling is that it 
would be great if these artificial limitations could be safely lifted somehow.

 QParserPlugins can not be SolrCoreAware
 ---

 Key: SOLR-3015
 URL: https://issues.apache.org/jira/browse/SOLR-3015
 Project: Solr
  Issue Type: New Feature
Reporter: Karl Wright
  Labels: closehook, qparserplugin, solrcoreaware
 Fix For: 3.6, 4.0-ALPHA

 Attachments: SOLR-3015.patch


 QParserPlugin cannot be made SolrCoreAware:
 {code}
 [junit] org.apache.solr.common.SolrException: Invalid 'Aware' object: 
 org.apache.solr.mcf.ManifoldCFQParserPlugin@18941f7 -- 
 org.apache.solr.util.plugin.SolrCoreAware must be an instance of: 
 [org.apache.solr.request.SolrRequestHandler]
 [org.apache.solr.response.QueryResponseWriter] 
 [org.apache.solr.handler.component.SearchComponent] 
 [org.apache.solr.update.processor.UpdateRequestProcessorFactory] 
 [org.apache.solr.handler.component.ShardHandlerFactory]
 {code}






Re: svn commit: r1627355 - in /lucene/dev/branches/branch_5x: ./ lucene/ lucene/build.xml lucene/site/ lucene/site/xsl/index.xsl

2014-09-24 Thread Ryan Ernst
This change somehow caused a newline in the middle of the latest codec
link, which broke checkJavadocLinks.py.  I don't know how to fix the XSL,
but HTML allows whitespace in the middle of links and browsers just remove
it (tested), so I fixed checkJavadocLinks.py to remove inner whitespace.
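(For illustration only, in Java rather than the actual Python script: the
normalization amounts to stripping whitespace inside each href value before
checking the URL, since a line wrap inside an anchor is still a valid link.)

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of the whitespace normalization a link checker can apply:
// collapse whitespace (including newlines) inside href values.
class LinkWhitespace {
    private static final Pattern HREF = Pattern.compile("href=\"([^\"]*)\"");

    static String normalizeHrefs(String html) {
        Matcher m = HREF.matcher(html);
        StringBuilder sb = new StringBuilder();
        while (m.find()) {
            // remove whitespace a line wrap may have introduced in the URL
            String url = m.group(1).replaceAll("\\s+", "");
            m.appendReplacement(sb,
                Matcher.quoteReplacement("href=\"" + url + "\""));
        }
        m.appendTail(sb);
        return sb.toString();
    }
}
```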

On Wed, Sep 24, 2014 at 9:12 AM, uschind...@apache.org wrote:

 Author: uschindler
 Date: Wed Sep 24 16:12:05 2014
 New Revision: 1627355

 URL: http://svn.apache.org/r1627355
 Log:
 Merged revision(s) 1627353 from lucene/dev/trunk:
 Fix encoding issue with source file, remove groovy script and do the
 defaultCodec transformation natively in ANT, hack lowercasing in XSL

 Modified:
 lucene/dev/branches/branch_5x/   (props changed)
 lucene/dev/branches/branch_5x/lucene/   (props changed)
 lucene/dev/branches/branch_5x/lucene/build.xml   (contents, props
 changed)
 lucene/dev/branches/branch_5x/lucene/site/   (props changed)
 lucene/dev/branches/branch_5x/lucene/site/xsl/index.xsl

 Modified: lucene/dev/branches/branch_5x/lucene/build.xml
 URL:
 http://svn.apache.org/viewvc/lucene/dev/branches/branch_5x/lucene/build.xml?rev=1627355r1=1627354r2=1627355view=diff

 ==
 --- lucene/dev/branches/branch_5x/lucene/build.xml (original)
 +++ lucene/dev/branches/branch_5x/lucene/build.xml Wed Sep 24 16:12:05 2014
 @@ -198,16 +198,10 @@
        <fileset dir="." includes="**/build.xml"
            excludes="build.xml,analysis/*,build/**,tools/**,site/**"/>
      </makeurl>
      <property name="Codec.java"
          location="core/src/java/org/apache/lucene/codecs/Codec.java"/>
 -    <loadfile srcfile="${Codec.java}" property="defaultCodecPackage"
 +    <loadfile srcfile="${Codec.java}" property="defaultCodec"
      encoding="UTF-8">
        <filterchain>
 -        <tokenfilter>
 -          <filetokenizer/>
 -          <scriptfilter language="groovy"
      classpathref="groovy.classpath"><![CDATA[
 -            //   private static Codec defaultCodec   =
      Codec.   forName(   "LuceneXXX"   )   ;
 -            def defaultCodecMatcher = self.getToken() =~
      /defaultCodec\s*=\s*Codec\s*\.\s*forName\s*\(\s*"([^"]+)"\s*\)\s*;/
 -
      self.setToken(defaultCodecMatcher[0][1].toLowerCase(Locale.ROOT));
 -          ]]></scriptfilter>
 -        </tokenfilter>
 +        <!--  private static Codec defaultCodec   =   Codec.
      forName(   "LuceneXXX" )   ; -->
 +        <containsregex
      pattern="^.*defaultCodec\s*=\s*Codec\s*\.\s*forName\s*\(\s*&quot;([^&quot;]+)&quot;\s*\)\s*;.*$"
      replace="\1"/>
        </filterchain>
      </loadfile>

 @@ -223,7 +217,7 @@
        <outputproperty name="indent" value="yes"/>
        <param name="buildfiles"
            expression="${process-webpages.buildfiles}"/>
        <param name="version" expression="${version}"/>
 -      <param name="defaultCodecPackage"
            expression="${defaultCodecPackage}"/>
 +      <param name="defaultCodec" expression="${defaultCodec}"/>
      </xslt>

      <pegdown todir="${javadoc.dir}">
 @@ -232,7 +226,7 @@
      </pegdown>

      <copy todir="${javadoc.dir}">
 -      <fileset dir="site/html" includes="**/*"/>
 +      <fileset dir="site/html"/>
      </copy>
    </target>


 Modified: lucene/dev/branches/branch_5x/lucene/site/xsl/index.xsl
 URL:
 http://svn.apache.org/viewvc/lucene/dev/branches/branch_5x/lucene/site/xsl/index.xsl?rev=1627355r1=1627354r2=1627355view=diff

 ==
 --- lucene/dev/branches/branch_5x/lucene/site/xsl/index.xsl (original)
 +++ lucene/dev/branches/branch_5x/lucene/site/xsl/index.xsl Wed Sep 24
 16:12:05 2014
 @@ -22,7 +22,10 @@
  
    <xsl:param name="buildfiles"/>
    <xsl:param name="version"/>
 -  <xsl:param name="defaultCodecPackage"/>
 +  <xsl:param name="defaultCodec"/>
 +
 +  <!-- ANT cannot lowercase a property, so we hack this here: -->
 +  <xsl:variable name="defaultCodecPackage"
      select="translate($defaultCodec,'ABCDEFGHIJKLMNOPQRSTUVWXYZ','abcdefghijklmnopqrstuvwxyz')"/>
  
    <!--
      NOTE: This template matches the root element of any given input XML
  document!





[jira] [Created] (LUCENE-5976) Index upgrader should have option to do multiple segments instead of one

2014-09-24 Thread Ryan Ernst (JIRA)
Ryan Ernst created LUCENE-5976:
--

 Summary: Index upgrader should have option to do multiple segments 
instead of one
 Key: LUCENE-5976
 URL: https://issues.apache.org/jira/browse/LUCENE-5976
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Ryan Ernst


Right now, IndexUpgrader produces one gigantic segment.  This can take a long 
time, consume more memory than normal merges, and even sidestep the max segment 
size of the delegated MergePolicy.

It would be nice to have a simpler option: create one upgraded segment for every 
existing segment.  If there are deletes to be merged away, the regular MP 
can take over after the upgrade is complete (or even partially complete) to 
merge them away.






[jira] [Commented] (LUCENE-5971) Separate backcompat creation script from adding version

2014-09-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14146862#comment-14146862
 ] 

ASF subversion and git services commented on LUCENE-5971:
-

Commit 1627419 from [~rjernst] in branch 'dev/trunk'
[ https://svn.apache.org/r1627419 ]

LUCENE-5971: Create addBackcompatIndexes.py script to build and add backcompat 
test indexes

 Separate backcompat creation script from adding version
 ---

 Key: LUCENE-5971
 URL: https://issues.apache.org/jira/browse/LUCENE-5971
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Ryan Ernst
 Attachments: LUCENE-5971.patch


 The recently created {{bumpVersion.py}} attempts to create a new backcompat 
 index if the default codec has changed.  However, we now want to create a 
 backcompat index for every released version, instead of just when there is a 
 change to the default codec.
 We should have a separate script which creates the backcompat indexes.  It 
 can even work directly on the released artifacts (by pulling down from 
 mirrors once released), so that there is no possibility for generating the 
 index from an incorrect svn/git checkout.






[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.8.0_20) - Build # 4232 - Still Failing!

2014-09-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4232/
Java: 64bit/jdk1.8.0_20 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
REGRESSION:  org.apache.solr.cloud.AliasIntegrationTest.testDistribSearch

Error Message:
KeeperErrorCode = Session expired for 
/overseer/collection-queue-work/qnr-16

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: 
KeeperErrorCode = Session expired for 
/overseer/collection-queue-work/qnr-16
at 
__randomizedtesting.SeedInfo.seed([1088C48062C82FFA:916E4A9815974FC6]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:550)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at 
org.apache.solr.cloud.AliasIntegrationTest.deleteAlias(AliasIntegrationTest.java:288)
at 
org.apache.solr.cloud.AliasIntegrationTest.doTest(AliasIntegrationTest.java:245)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-09-24 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14146872#comment-14146872
 ] 

Uwe Schindler commented on LUCENE-5969:
---

Thanks Robert. Unfortunately I was not able to verify the full patch. But the 
changes with supressed exceptions looked fine.

 Add Lucene50Codec
 -

 Key: LUCENE-5969
 URL: https://issues.apache.org/jira/browse/LUCENE-5969
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
 Fix For: 5.0, 6.0

 Attachments: LUCENE-5969.patch


 Spinoff from LUCENE-5952:
   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
 read time.
   * Lucene42TermVectorsFormat should not use the same codecName as 
 Lucene41StoredFieldsFormat
 It would also be nice if we had a bumpCodecVersion script so rolling a new 
 codec is not so daunting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: queryResultMaxDocsCached vs. queryResultWindowSize

2014-09-24 Thread Tomás Fernández Löbbe
I think you are right. I think it’s named this way because it considers a
series of queries paging through a result. The first X pages are going to be
cached, but once the limit is reached, no further pages are cached and the
last superset that fit remains in cache. At least that’s my understanding.
After a quick look, I couldn’t find a test case for this either.

Tomás

On Wed, Sep 24, 2014 at 11:10 AM, Tom Burton-West tburt...@umich.edu
wrote:

 Hello,

 No response on the Solr user list so I thought I would try the dev list.


 queryResultWindowSize sets the number of documents to cache for each
 query in the queryResultCache. So if you normally output 10 results per
 page, and users don't go beyond page 3 of results, you could set
 queryResultWindowSize to 30, and the second and third page requests will
 be read from cache, not from disk.  This is well documented in both the Solr
 example solrconfig.xml file and the Solr documentation.

 However, the example in solrconfig.xml and the documentation in the
 reference manual for Solr 4.10 say that queryResultMaxDocsCached:

 sets the maximum number of documents to cache for any entry in the
 queryResultCache.

 Looking at the code, it appears that the queryResultMaxDocsCached
 parameter actually tells Solr not to cache any results list whose size is
 over queryResultMaxDocsCached.

 From: SolrIndexSearcher.getDocListC
 // lastly, put the superset in the cache if the size is less than or equal
 // to queryResultMaxDocsCached
 if (key != null && superset.size() <= queryResultMaxDocsCached &&
     !qr.isPartialResults()) {
   queryResultCache.put(key, superset);
 }

 Deciding whether or not to cache a DocList if its size is over N (where N
 = queryResultMaxDocsCached) is very different from caching only N items
 from the DocList, which is what the current documentation (and the variable
 name) implies.

 Looking at the JIRA issue https://issues.apache.org/jira/browse/SOLR-291
 the original intent was to control memory use, and the variable name
 originally suggested was noCacheIfLarger.

 Can someone please let me know if it is true that the
 queryResultMaxDocsCached parameter actually tells Solr not to cache any
 results list that contains more than queryResultMaxDocsCached documents?

 If so, I will add a comment to the Cwiki doc and open a JIRA and submit a
 patch to the example file.

 I tried to find a test case that exercises SolrIndexSearcher.getDocListC
 so I could see how queryResultWindowSize or queryResultMaxDocsCached
 actually work in the debugger, but could not find one.  Could
 someone please point me to a good test case that either exercises
 SolrIndexSearcher.getDocListC or would be a good starting point for writing
 one?


 Tom



 ---


 http://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_4_10/solr/example/solr/collection1/conf/solrconfig.xml?revision=1624269view=markup

 <!-- Maximum number of documents to cache for any entry in the
      queryResultCache.
 -->
 <queryResultMaxDocsCached>200</queryResultMaxDocsCached>



[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 631 - Still Failing

2014-09-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/631/

1 tests failed.
REGRESSION:  
org.apache.solr.handler.component.DistributedSpellCheckComponentTest.testDistribSearch

Error Message:
Captured an uncaught exception in thread: Thread[id=2263, name=Thread-1076, 
state=RUNNABLE, group=TGRP-DistributedSpellCheckComponentTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=2263, name=Thread-1076, state=RUNNABLE, 
group=TGRP-DistributedSpellCheckComponentTest]
at 
__randomizedtesting.SeedInfo.seed([4382992141BC9E4E:C264173936E3FE72]:0)
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:44787
at __randomizedtesting.SeedInfo.seed([4382992141BC9E4E]:0)
at 
org.apache.solr.BaseDistributedSearchTestCase$5.run(BaseDistributedSearchTestCase.java:580)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured 
while waiting response from server at: https://127.0.0.1:44787
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:558)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at 
org.apache.solr.BaseDistributedSearchTestCase$5.run(BaseDistributedSearchTestCase.java:575)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:442)
at sun.security.ssl.InputRecord.read(InputRecord.java:480)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:927)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:884)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:102)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:271)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:682)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:486)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:448)
... 5 more




Build Log:
[...truncated 12088 lines...]
   [junit4] Suite: 
org.apache.solr.handler.component.DistributedSpellCheckComponentTest
   [junit4]   2 Creating dataDir: 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/build/solr-core/test/J0/temp/solr.handler.component.DistributedSpellCheckComponentTest-4382992141BC9E4E-001/init-core-data-001
   [junit4]   2 261144 T500 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(true) and clientAuth (true)
   [junit4]   2 261144 T500 oas.BaseDistributedSearchTestCase.initHostContext 
Setting hostContext system property: /
   [junit4]   2 261148 T500 oas.SolrTestCaseJ4.setUp ###Starting 
testDistribSearch
   [junit4]   2 261151 T500 oejs.Server.doStart jetty-8.1.10.v20130312
   

[jira] [Created] (SOLR-6561) LBHttpSolrServer's aliveCheckExecutor is leaking connection when ResponseParser is null

2014-09-24 Thread Sudhan Moghe (JIRA)
Sudhan Moghe created SOLR-6561:
--

 Summary: LBHttpSolrServer's aliveCheckExecutor is leaking 
connection when ResponseParser is null
 Key: SOLR-6561
 URL: https://issues.apache.org/jira/browse/SOLR-6561
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 4.10
Reporter: Sudhan Moghe


LBHttpSolrServer's aliveCheckExecutor is leaking a connection when the 
ResponseParser is null.
We are providing search as a service, and our Solr setup is not directly exposed 
to clients. We set the parser to null and then pass the InputStream, received 
from the Solr server, on to our clients as-is.
LBHttpSolrServer.checkAZombieServer() is not closing the connection in this case.

I think something like the following needs to be there (not the exact code):
if (zombieServer.solrServer.getParser() == null) {
  InputStream is = (InputStream) resp.getResponse().get("stream");
  is.close();
}

This is a blocker for us. I will test this out locally and update this bug 
report, but we can't deploy that in production until we get an official fix.
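Independent of the exact SolrJ fix, the close-on-all-paths concern can be
illustrated with plain java.io (a self-contained sketch; fetchRawStream is a
hypothetical stand-in for the raw "stream" entry of the response, not a SolrJ
API):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamCloseSketch {
    // Hypothetical stand-in for the raw response stream in the bug report.
    static InputStream fetchRawStream() {
        return new ByteArrayInputStream("response body".getBytes());
    }

    public static void main(String[] args) throws IOException {
        // try-with-resources closes the stream even if reading throws,
        // which is exactly the guarantee the zombie-check path is missing.
        try (InputStream is = fetchRawStream()) {
            while (is.read() != -1) {
                // drain the body so the underlying connection can be reused
            }
        }
        System.out.println("stream closed");
    }
}
```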



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: queryResultMaxDocsCached vs. queryResultWindowSize

2014-09-24 Thread Yonik Seeley
On Wed, Sep 24, 2014 at 5:27 PM, Tomás Fernández Löbbe
tomasflo...@gmail.com wrote:
 I think you are right. I think it’s named this way because it considers a
 series of queries paging through a result. The first X pages are going to be
 cached, but once the limit is reached, no further pages are cached and the
 last superset that fit remains in cache.

I was confused about the confusion ;-)  But your summary seems correct.

queryResultWindowSize rounds up to a multiple of the window size for
caching purposes.
So if you ask for the top 10, and queryResultWindowSize is 20, then
the top 20 will be cached (so if a user hits "next" to get to the next
10, it will still result in a cache hit).

queryResultMaxDocsCached sets a limit beyond which the resulting docs
aren't cached (so if a user asks for docs 1 through 10010, we skip
caching logic).
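The two rules described above can be sketched as a tiny model (illustrative
only, not Solr's actual SolrIndexSearcher code; the class and method names
here are hypothetical):

```java
public class QueryResultCachePolicy {
    private final int queryResultWindowSize;    // e.g. 20
    private final int queryResultMaxDocsCached; // e.g. 200

    public QueryResultCachePolicy(int windowSize, int maxDocsCached) {
        this.queryResultWindowSize = windowSize;
        this.queryResultMaxDocsCached = maxDocsCached;
    }

    /** Round the requested doc count up to a multiple of the window size. */
    public int supersetSize(int maxDocRequested) {
        return ((maxDocRequested + queryResultWindowSize - 1)
                / queryResultWindowSize) * queryResultWindowSize;
    }

    /** Cache the whole superset only if it is small enough; never truncate it. */
    public boolean shouldCache(int supersetSize, boolean partialResults) {
        return supersetSize <= queryResultMaxDocsCached && !partialResults;
    }

    public static void main(String[] args) {
        QueryResultCachePolicy p = new QueryResultCachePolicy(20, 200);
        System.out.println(p.supersetSize(10));          // 20: top-10 request caches the top 20
        System.out.println(p.shouldCache(20, false));    // true
        System.out.println(p.shouldCache(10010, false)); // false: too large, skip caching entirely
    }
}
```

This matches the behavior Tom observed: the superset is either cached whole
or not cached at all; it is never trimmed to queryResultMaxDocsCached entries.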

-Yonik
http://heliosearch.org - native code faceting, facet functions,
sub-facets, off-heap data

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b28) - Build # 11324 - Failure!

2014-09-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11324/
Java: 64bit/jdk1.9.0-ea-b28 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 41406 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:524: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:90: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build.xml:96: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/custom-tasks.xml:104:
 Lib versions check failed. Check the logs.

Total time: 106 minutes 31 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 64bit/jdk1.9.0-ea-b28 
-XX:-UseCompressedOops -XX:+UseG1GC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (LUCENE-5971) Separate backcompat creation script from adding version

2014-09-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14147133#comment-14147133
 ] 

ASF subversion and git services commented on LUCENE-5971:
-

Commit 1627438 from [~rjernst] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1627438 ]

LUCENE-5971: Create addBackcompatIndexes.py script to build and add backcompat 
test indexes (merged 1627419)

 Separate backcompat creation script from adding version
 ---

 Key: LUCENE-5971
 URL: https://issues.apache.org/jira/browse/LUCENE-5971
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Ryan Ernst
 Attachments: LUCENE-5971.patch


 The recently created {{bumpVersion.py}} attempts to create a new backcompat 
 index if the default codec has changed.  However, we now want to create a 
 backcompat index for every released version, instead of just when there is a 
 change to the default codec.
 We should have a separate script which creates the backcompat indexes.  It 
 can even work directly on the released artifacts (by pulling them down from 
 mirrors once released), so that there is no possibility of generating the 
 index from an incorrect svn/git checkout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5902) Add bumpVersion script to increment version after release branch creation

2014-09-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14147136#comment-14147136
 ] 

ASF subversion and git services commented on LUCENE-5902:
-

Commit 1627439 from [~rjernst] in branch 'dev/trunk'
[ https://svn.apache.org/r1627439 ]

LUCENE-5902: rename to addVersion.py

 Add bumpVersion script to increment version after release branch creation
 -

 Key: LUCENE-5902
 URL: https://issues.apache.org/jira/browse/LUCENE-5902
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Ryan Ernst
Assignee: Ryan Ernst
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5902.patch, LUCENE-5902.patch


 Thanks to LUCENE-5898 there are many fewer places to increment the version.  
 However, I still think this script can be useful in automating the entire 
 process (minus the commit).  This would:
 * Add new sections to {{lucene/CHANGES.txt}} and {{solr/CHANGES.txt}}
 * Add new version constant
 * Change {{LATEST}} value
 * Change {{version.base}} in {{lucene/version.properties}}
 * Change version used in solr example configs
 * Create a BWC index and test if necessary



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5902) Add bumpVersion script to increment version after release branch creation

2014-09-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14147137#comment-14147137
 ] 

ASF subversion and git services commented on LUCENE-5902:
-

Commit 1627440 from [~rjernst] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1627440 ]

LUCENE-5902: rename to addVersion.py

 Add bumpVersion script to increment version after release branch creation
 -

 Key: LUCENE-5902
 URL: https://issues.apache.org/jira/browse/LUCENE-5902
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Ryan Ernst
Assignee: Ryan Ernst
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5902.patch, LUCENE-5902.patch


 Thanks to LUCENE-5898 there are many fewer places to increment the version.  
 However, I still think this script can be useful in automating the entire 
 process (minus the commit).  This would:
 * Add new sections to {{lucene/CHANGES.txt}} and {{solr/CHANGES.txt}}
 * Add new version constant
 * Change {{LATEST}} value
 * Change {{version.base}} in {{lucene/version.properties}}
 * Change version used in solr example configs
 * Create a BWC index and test if necessary



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5902) Add bumpVersion script to increment version after release branch creation

2014-09-24 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst resolved LUCENE-5902.

Resolution: Fixed

 Add bumpVersion script to increment version after release branch creation
 -

 Key: LUCENE-5902
 URL: https://issues.apache.org/jira/browse/LUCENE-5902
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Ryan Ernst
Assignee: Ryan Ernst
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5902.patch, LUCENE-5902.patch


 Thanks to LUCENE-5898 there are many fewer places to increment the version.  
 However, I still think this script can be useful in automating the entire 
 process (minus the commit).  This would:
 * Add new sections to {{lucene/CHANGES.txt}} and {{solr/CHANGES.txt}}
 * Add new version constant
 * Change {{LATEST}} value
 * Change {{version.base}} in {{lucene/version.properties}}
 * Change version used in solr example configs
 * Create a BWC index and test if necessary



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 1848 - Failure!

2014-09-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1848/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 41267 lines...]
BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:491: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:89: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build.xml:96: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/tools/custom-tasks.xml:104:
 Lib versions check failed. Check the logs.

Total time: 180 minutes 5 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 64bit/jdk1.8.0 
-XX:+UseCompressedOops -XX:+UseSerialGC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1218: POMs out of sync

2014-09-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1218/

2 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandlerBackup.org.apache.solr.handler.TestReplicationHandlerBackup

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.handler.TestReplicationHandlerBackup: 
   1) Thread[id=16228, name=Thread-5704, state=RUNNABLE, 
group=TGRP-TestReplicationHandlerBackup]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652)
at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
at java.net.URL.openStream(URL.java:1037)
at 
org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:318)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.TestReplicationHandlerBackup: 
   1) Thread[id=16228, name=Thread-5704, state=RUNNABLE, 
group=TGRP-TestReplicationHandlerBackup]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652)
at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
at java.net.URL.openStream(URL.java:1037)
at 
org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:318)
at __randomizedtesting.SeedInfo.seed([CEE1D420BFBCA9BF]:0)


FAILED:  
org.apache.solr.handler.TestReplicationHandlerBackup.org.apache.solr.handler.TestReplicationHandlerBackup

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=16228, name=Thread-5704, state=RUNNABLE, 
group=TGRP-TestReplicationHandlerBackup]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652)
at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
at java.net.URL.openStream(URL.java:1037)
at 
org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:318)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=16228, name=Thread-5704, state=RUNNABLE, 
group=TGRP-TestReplicationHandlerBackup]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at 

[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4881 - Still Failing

2014-09-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4881/

1 tests failed.
REGRESSION:  org.apache.solr.cloud.DeleteReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:10821/_hc/j, https://127.0.0.1:17541/_hc/j, 
https://127.0.0.1:55682/_hc/j, https://127.0.0.1:58503/_hc/j, 
https://127.0.0.1:11651/_hc/j]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[https://127.0.0.1:10821/_hc/j, 
https://127.0.0.1:17541/_hc/j, https://127.0.0.1:55682/_hc/j, 
https://127.0.0.1:58503/_hc/j, https://127.0.0.1:11651/_hc/j]
at 
__randomizedtesting.SeedInfo.seed([11AA45D50E717210:904CCBCD792E122C]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:322)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:880)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:658)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:601)
at 
org.apache.solr.cloud.DeleteReplicaTest.removeAndWaitForReplicaGone(DeleteReplicaTest.java:171)
at 
org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:144)
at 
org.apache.solr.cloud.DeleteReplicaTest.doTest(DeleteReplicaTest.java:88)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 1809 - Failure!

2014-09-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1809/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
REGRESSION:  
org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this request:[http://127.0.0.1:56952, 
http://127.0.0.1:56947, http://127.0.0.1:56955]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:56952, http://127.0.0.1:56947, 
http://127.0.0.1:56955]
at 
__randomizedtesting.SeedInfo.seed([DAC576184A0EDBBB:5B23F8003D51BB87]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:322)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:880)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:658)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:601)
at 
org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.removeAndWaitForLastReplicaGone(DeleteLastCustomShardedReplicaTest.java:117)
at 
org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.doTest(DeleteLastCustomShardedReplicaTest.java:107)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor49.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b28) - Build # 11324 - Failure!

2014-09-24 Thread Steve Rowe
From the build log:

-
[libversions] VERSION CONFLICT: transitive dependency in module(s) langid:
[libversions] /com.cybozu.labs/langdetect=1.1-20120112
[libversions] +-- /net.arnx/jsonic=1.2.7
[libversions] +-- /com.google.inject/guice=4.0-beta5  Conflict (direct=3.0, latest=4.0-beta5)
[libversions] ... and 1 more
[libversions] Checked that ivy-versions.properties and ivy-ignore-conflicts.properties have lexically sorted '/org/name' keys and no duplicates or orphans.
[libversions] Scanned 46 ivy.xml files for rev=${/org/name} format.
[libversions] Found 1 indirect dependency version conflicts.
-


On Wed, Sep 24, 2014 at 7:41 PM, Policeman Jenkins Server jenk...@thetaphi.de wrote:

 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11324/
 Java: 64bit/jdk1.9.0-ea-b28 -XX:-UseCompressedOops -XX:+UseG1GC

 All tests passed

 Build Log:
 [...truncated 41406 lines...]
 BUILD FAILED
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:524: The
 following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:90: The
 following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build.xml:96:
 The following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/custom-tasks.xml:104:
 Lib versions check failed. Check the logs.

 Total time: 106 minutes 31 seconds
 Build step 'Invoke Ant' marked build as failure
 [description-setter] Description set: Java: 64bit/jdk1.9.0-ea-b28
 -XX:-UseCompressedOops -XX:+UseG1GC
 Archiving artifacts
 Recording test results
 Email was triggered for: Failure - Any
 Sending email for trigger: Failure - Any




 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b28) - Build # 11324 - Failure!

2014-09-24 Thread Steve Rowe
I don't understand why this suddenly started getting flagged - AFAICT the
most recent changes to the langid module's ivy.xml file, and to the version
of the dependencies in question in ivy-versions.properties, occurred in 2013.

Nevertheless, I told check-lib-versions not to complain when it sees this,
on trunk only.

For some reason branch_5x, without the exception, does not fail the build,
so I haven't backported the change there.
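(For anyone following along, and strictly as a guess at the mechanics Steve describes: the "exception" would presumably be an entry in lucene/tools' ivy-ignore-conflicts.properties, using the same '/org/name' key convention the check reports - the exact key and comment below are illustrative, not a quote of the actual commit:)

```properties
# Hypothetical entry: tolerate the newer guice rev that langdetect pulls in
# transitively via jsonic, while the direct dependency stays pinned at 3.0.
/com.google.inject/guice = 4.0-beta5
```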

On Wed, Sep 24, 2014 at 11:33 PM, Steve Rowe sar...@gmail.com wrote:

 From the build log:

 -
 [libversions] VERSION CONFLICT: transitive dependency in module(s) langid:
 [libversions] /com.cybozu.labs/langdetect=1.1-20120112
 [libversions] +-- /net.arnx/jsonic=1.2.7
 [libversions] +-- /com.google.inject/guice=4.0-beta5  Conflict (direct=3.0, latest=4.0-beta5)
 [libversions] ... and 1 more
 [libversions] Checked that ivy-versions.properties and ivy-ignore-conflicts.properties have lexically sorted '/org/name' keys and no duplicates or orphans.
 [libversions] Scanned 46 ivy.xml files for rev=${/org/name} format.
 [libversions] Found 1 indirect dependency version conflicts.
 -


 On Wed, Sep 24, 2014 at 7:41 PM, Policeman Jenkins Server jenk...@thetaphi.de wrote:

 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11324/
 Java: 64bit/jdk1.9.0-ea-b28 -XX:-UseCompressedOops -XX:+UseG1GC

 All tests passed

 Build Log:
 [...truncated 41406 lines...]
 BUILD FAILED
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:524: The
 following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:90: The
 following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build.xml:96:
 The following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/custom-tasks.xml:104:
 Lib versions check failed. Check the logs.

 Total time: 106 minutes 31 seconds
 Build step 'Invoke Ant' marked build as failure
 [description-setter] Description set: Java: 64bit/jdk1.9.0-ea-b28
 -XX:-UseCompressedOops -XX:+UseG1GC
 Archiving artifacts
 Recording test results
 Email was triggered for: Failure - Any
 Sending email for trigger: Failure - Any




 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org





[JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 205 - Failure

2014-09-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-trunk/205/

No tests ran.

Build Log:
[...truncated 51530 lines...]
prepare-release-no-sign:
[mkdir] Created dir: /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist
 [copy] Copying 446 files to /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 JAVA_HOME=/home/jenkins/tools/java/latest1.7
   [smoker] NOTE: output encoding is US-ASCII
   [smoker] 
   [smoker] Load release URL file:/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist/...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.01 sec (14.4 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-6.0.0-src.tgz...
   [smoker] 27.6 MB in 0.04 sec (640.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.0.0.tgz...
   [smoker] 61.0 MB in 0.16 sec (373.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.0.0.zip...
   [smoker] 70.4 MB in 0.09 sec (800.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-6.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 5557 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 5557 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run ant validate
   [smoker] run tests w/ Java 7 and testArgs='-Dtests.jettyConnector=Socket -Dtests.disableHdfs=true -Dtests.multiplier=1 -Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 225 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] Traceback (most recent call last):
   [smoker]   File "/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py", line 1507, in <module>
   [smoker]     main()
   [smoker]   File "/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py", line 1452, in main
   [smoker]     smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, c.is_signed, ' '.join(c.test_args))
   [smoker]   File "/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py", line 1490, in smokeTest
   [smoker]     unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' % version, svnRevision, version, testArgs, baseURL)
   [smoker]   File "/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py", line 616, in unpackAndVerify
   [smoker]     verifyUnpacked(java, project, artifact, unpackPath, svnRevision, version, testArgs, tmpDir, baseURL)
   [smoker]   File "/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py", line 801, in verifyUnpacked
   [smoker]     confirmAllReleasesAreTestedForBackCompat(unpackPath)
   [smoker]   File "/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py", line 1412, in confirmAllReleasesAreTestedForBackCompat
   [smoker]     tup = int(name[0]), int(name[1]), int(name[2])
   [smoker] ValueError: invalid literal for int() with base 10: '.'

BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/build.xml:409:
 exec returned: 1

Total time: 50 minutes 55 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
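The ValueError at the bottom of the smoker traceback comes from converting pieces of a release name to integers. A minimal standalone reproduction (a hypothetical simplification - in the real script `name` is derived from scanning past-release names, which may not match this exact shape): indexing a version *string* character by character hits the '.' separator, while splitting on '.' first yields the intended tuple.

```python
# Hypothetical simplification of the failing line in smokeTestRelease.py:
#     tup = int(name[0]), int(name[1]), int(name[2])
name = "4.0.0"

try:
    # Indexing a string grabs single characters, so name[1] is the
    # '.' separator and int() raises the error seen in the build log.
    tup = int(name[0]), int(name[1]), int(name[2])
except ValueError as e:
    print(e)  # invalid literal for int() with base 10: '.'

# Splitting on '.' first gives the intended (major, minor, bugfix) tuple.
major, minor, bugfix = (int(p) for p in name.split('.'))
print((major, minor, bugfix))  # (4, 0, 0)
```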

[jira] [Commented] (SOLR-6561) LBHttpSolrServer's aliveCheckExecutor is leaking connection when ResponseParser is null

2014-09-24 Thread Sudhan Moghe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14147375#comment-14147375
 ] 

Sudhan Moghe commented on SOLR-6561:


We have changed our handling and are no longer setting the parser to null. We 
are using a ResponseParser along the lines of NoOpResponseParser, so this bug 
is not affecting us anymore.

 LBHttpSolrServer's aliveCheckExecutor is leaking connection when 
 ResponseParser is null
 ---

 Key: SOLR-6561
 URL: https://issues.apache.org/jira/browse/SOLR-6561
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 4.10
Reporter: Sudhan Moghe

 LBHttpSolrServer's aliveCheckExecutor is leaking connection when 
 ResponseParser is null.
 We are providing search as a service and our Solr setup is not directly 
 exposed to clients. We are setting parser to null and then passing on the 
 InputStream, received from Solr server, as it is to our clients.
 LBHttpSolrServer.checkAZombieServer() is not closing the connection in this 
 case.
 I think something like the following needs to be there (not the exact code):
 if (zombieServer.solrServer.getParser() == null) {
   InputStream is = (InputStream) resp.getResponse().get("stream");
   is.close();
 }
 This is a blocker for us. I will test this out locally and update this bug 
 report, but we can't deploy that in production until we get an official fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org