[jira] [Commented] (SOLR-6666) Dynamic copy fields are considering all dynamic fields, causing a significant performance impact on indexing documents

2014-11-26 Thread Liram Vardi (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14225926#comment-14225926
 ] 

Liram Vardi commented on SOLR-6666:
---

Hi all,
Did anyone have a chance to take a look at this?
Thanks

 Dynamic copy fields are considering all dynamic fields, causing a significant 
 performance impact on indexing documents
 --

 Key: SOLR-6666
 URL: https://issues.apache.org/jira/browse/SOLR-6666
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis, update
 Environment: Linux, Solr 4.8, Schema with 70 fields and more than 500 
 specific CopyFields for dynamic fields, but without wildcards (the fields are 
 dynamic, the copy directive is not)
Reporter: Liram Vardi
Assignee: Erick Erickson
 Attachments: SOLR-6666.patch


 Result:
 After applying a fix for this issue, tests which we conducted show a more than 
 40 percent improvement in our insertion performance.
 Explanation:
 Using a JVM profiler, we found a CPU bottleneck during the Solr indexing process. 
 This bottleneck is in org.apache.solr.schema.IndexSchema, in the 
 following method, getCopyFieldsList():
 {code:title=getCopyFieldsList() |borderStyle=solid}
 final List<CopyField> result = new ArrayList<>();
 for (DynamicCopy dynamicCopy : dynamicCopyFields) {
   if (dynamicCopy.matches(sourceField)) {
     result.add(new CopyField(getField(sourceField), 
         dynamicCopy.getTargetField(sourceField), dynamicCopy.maxChars));
   }
 }
 List<CopyField> fixedCopyFields = copyFieldsMap.get(sourceField);
 if (null != fixedCopyFields) {
   result.addAll(fixedCopyFields);
 }
 {code}
 This function tries to find, for an input source field, all of its copyFields 
 (all the destinations to which Solr needs to copy this field). 
 As you can probably note, the first part of the procedure is its most 
 "expensive" step: it takes O(N) time, where N is the size of the 
 dynamicCopyFields group.
 The next part is just a simple hash lookup, which takes O(1) time. 
 Our schema contains more than 500 copyFields, but only 70 of them are 
 indexed fields. 
 We also have one dynamic field with a wildcard ( * ), which catches the 
 rest of the document fields. 
 As you can conclude, we have more than 400 copyFields that are based on this 
 dynamicField, but all of them, except one, are fixed (i.e. they do not contain 
 any wildcard).
 For some reason, the copyFields registration procedure defines those 400 
 fields as DynamicCopyField and then stores them in the "dynamicCopyFields" 
 array.
 This step makes getCopyFieldsList() very expensive (in CPU terms) without any 
 justification: none of those 400 copyFields is a glob, so none of them needs 
 any complex pattern matching against the input field. They can all be stored 
 in fixedCopyFields.
 Only copyFields with asterisks need this special treatment, and they are 
 (at least in our case) pretty rare.
 Therefore, we created a patch which fixes this problem by changing the 
 registerCopyField() procedure.
 Tests which we conducted show that there is no change in the indexing results. 
 Moreover, the fix still successfully passes the class unit tests (i.e. 
 IndexSchemaTest.java).
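 The sketch below is a standalone illustration of that registration idea (it is 
 not the attached patch, and the class and method names are invented for the 
 example): a copy directive whose source and destination contain no wildcard is 
 resolved once at registration time and stored in a hash map for the O(1) lookup 
 path, and only genuine glob directives go into the list that has to be scanned 
 and pattern-matched for every input field.
 {code}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Standalone illustration of the registration split; the names are not Solr's.
class CopyFieldRegistry {
  // Fixed directives: resolved once, O(1) lookup per source field.
  private final Map<String, List<String>> fixedCopyFields = new HashMap<>();
  // Wildcard directives: must be pattern-matched against every source field.
  private final List<String[]> dynamicCopyFields = new ArrayList<>(); // {sourceGlob, dest}

  void register(String source, String dest) {
    if (source.indexOf('*') >= 0 || dest.indexOf('*') >= 0) {
      dynamicCopyFields.add(new String[] {source, dest});
    } else {
      fixedCopyFields.computeIfAbsent(source, k -> new ArrayList<>()).add(dest);
    }
  }

  List<String> destinationsFor(String sourceField) {
    List<String> result = new ArrayList<>();
    for (String[] dyn : dynamicCopyFields) {      // O(|dynamic|), now only true globs
      if (globMatches(dyn[0], sourceField)) {
        result.add(dyn[1]);                       // real code also expands the target
      }
    }
    List<String> fixed = fixedCopyFields.get(sourceField); // O(1)
    if (fixed != null) {
      result.addAll(fixed);
    }
    return result;
  }

  private static boolean globMatches(String glob, String field) {
    // Trailing-wildcard match only (e.g. "attr_*"); enough for this illustration.
    return glob.endsWith("*") ? field.startsWith(glob.substring(0, glob.length() - 1))
                              : glob.equals(field);
  }
}
 {code}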




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Where is the SVN repository only for Lucene project ?

2014-11-26 Thread Yosuke Yamatani
Dear sir/madam

Hello, I’m Yosuke Yamatani.
I’m a graduate student at Wakayama University, Japan.
I study software evolution in OSS projects through the analysis of SVN
repositories.
I found the entire ASF repository, but I would like to mirror the SVN
repository only for your project.
Could you let me know how to get your repository ?

Sincerely yours.
Yosuke



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Where is the SVN repository only for Lucene project ?

2014-11-26 Thread Mikhail Khludnev
Yosuke,

Welcome aboard!
pls find svn url at
http://lucene.apache.org/core/developer.html
however, I found https://github.com/apache/lucene-solr/ rather convenient.

On Wed, Nov 26, 2014 at 12:19 PM, Yosuke Yamatani 
s151...@center.wakayama-u.ac.jp wrote:

 Dear sir/madam

 Hello, I’m Yosuke Yamatani.
 I’m a graduate student at Wakayama University, Japan.
 I study software evolution in OSS projects through the analysis of SVN
 repositories.
 I found the entire ASF repository, but I would like to mirror the SVN
 repository only for your project.
 Could you let me know how to get your repository ?

 Sincerely yours.
 Yosuke



 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




-- 
Sincerely yours
Mikhail Khludnev
Principal Engineer,
Grid Dynamics

http://www.griddynamics.com
mkhlud...@griddynamics.com


RE: Where is the SVN repository only for Lucene project ?

2014-11-26 Thread Uwe Schindler
There is only the ASF repository; Lucene does not have its own. Where is the 
problem in only mirroring a specific subdirectory?

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Yosuke Yamatani [mailto:s151...@center.wakayama-u.ac.jp]
 Sent: Wednesday, November 26, 2014 10:19 AM
 To: dev@lucene.apache.org
 Subject: Where is the SVN repository only for Lucene project ?
 
 Dear sir/madam
 
 Hello, I’m Yosuke Yamatani.
 I’m a graduate student at Wakayama University, Japan.
 I study software evolution in OSS projects through the analysis of SVN
 repositories.
 I found the entire ASF repository, but I would like to mirror the SVN
 repository only for your project.
 Could you let me know how to get your repository ?
 
 Sincerely yours.
 Yosuke
 
 
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
 commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Where is the SVN repository only for Lucene project ?

2014-11-26 Thread Yosuke Yamatani

Hi Uwe and Mikhail,

Thank you for your quick response!!

I would like to do an SVN sync to get your project's repository, since a 
mirror copy of the SVN repository is needed to run my tool, which 
analyzes software evolution over time.


I can do it through the URL you told me, but it is too slow to sync 
because the sync command tries to get the entire Apache repository.


Sincerely yours,

Yosuke

(2014/11/26 18:47), Uwe Schindler wrote:

There is only the ASF repository; Lucene does not have its own. Where is the 
problem in only mirroring a specific subdirectory?

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de



-Original Message-
From: Yosuke Yamatani [mailto:s151...@center.wakayama-u.ac.jp]
Sent: Wednesday, November 26, 2014 10:19 AM
To: dev@lucene.apache.org
Subject: Where is the SVN repository only for Lucene project ?

Dear sir/madam

Hello, I’m Yosuke Yamatani.
I’m a graduate student at Wakayama University, Japan.
I study software evolution in OSS projects through the analysis of SVN
repositories.
I found the entire ASF repository, but I would like to mirror the SVN
repository only for your project.
Could you let me know how to get your repository ?

Sincerely yours.
Yosuke



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6796) distrib.singlePass does not return correct set of fields for multi-fl-parameter requests

2014-11-26 Thread Per Steffensen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Per Steffensen updated SOLR-6796:
-
Attachment: fix.patch

I believe this fix will make {{distrib.singlePass=true}} work for any query. 
That is, you can turn on distrib.singlePass whenever you like, and have a 
response similar to a non-distributed request to a single core/shard containing 
the same data.
We have a version of Solr based on 4.4.0, with a lot of our own changes, and 
with SOLR-5768, SOLR-1880 and parts of SOLR-5399 merged. With this fix, the 
entire 4.4.0 test-suite is green when we make all queries issued across the 
test-suite run with {{distrib.singlePass=true}}.
At least it fixes the particular problem that this SOLR-6796 is about.

 distrib.singlePass does not return correct set of fields for 
 multi-fl-parameter requests
 

 Key: SOLR-6796
 URL: https://issues.apache.org/jira/browse/SOLR-6796
 Project: Solr
  Issue Type: Bug
  Components: multicore, search
Affects Versions: 5.0
Reporter: Per Steffensen
Assignee: Shalin Shekhar Mangar
  Labels: distributed_search, search
 Attachments: fix.patch, fix.patch, fix.patch, 
 test_that_reveals_the_problem.patch


 If I pass distrib.singlePass in a request that also has two fl-parameters, in 
 some cases, I will not get the expected set of fields back for the returned 
 documents



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6796) distrib.singlePass does not return correct set of fields for multi-fl-parameter requests

2014-11-26 Thread Per Steffensen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14225979#comment-14225979
 ] 

Per Steffensen edited comment on SOLR-6796 at 11/26/14 10:11 AM:
-

I believe this fix will make {{distrib.singlePass=true}} work for any query 
(together with SOLR-6795). That is, you can turn on distrib.singlePass whenever 
you like, and have a response similar to a non-distributed request to a single 
core/shard containing the same data.
We have a version of Solr based on 4.4.0, with a lot of our own changes, and 
with SOLR-5768, SOLR-1880 and parts of SOLR-5399 merged. With this fix, the 
entire 4.4.0 test-suite is green when we make all queries issued across the 
test-suite run with {{distrib.singlePass=true}}.
At least it fixes the particular problem that this SOLR-6796 is about.


was (Author: steff1193):
I believe this fix will make {{distrib.singlePass=true}} work for any query. 
That is, you can turn on distrib.singlePass whenever you like, and have a 
response similar to a non-distributed request to a single core/shard containing 
the same data.
We have a version of Solr based on 4.4.0, with a lot of our own changes, and 
with SOLR-5768, SOLR-1880 and parts of SOLR-5399 merged. With this fix, the 
entire 4.4.0 test-suite is green when we make all queries issued across the 
test-suite run with {{distrib.singlePass=true}}.
At least it fixes the particular problem that this SOLR-6796 is about.

 distrib.singlePass does not return correct set of fields for 
 multi-fl-parameter requests
 

 Key: SOLR-6796
 URL: https://issues.apache.org/jira/browse/SOLR-6796
 Project: Solr
  Issue Type: Bug
  Components: multicore, search
Affects Versions: 5.0
Reporter: Per Steffensen
Assignee: Shalin Shekhar Mangar
  Labels: distributed_search, search
 Attachments: fix.patch, fix.patch, fix.patch, 
 test_that_reveals_the_problem.patch


 If I pass distrib.singlePass in a request that also has two fl-parameters, in 
 some cases, I will not get the expected set of fields back for the returned 
 documents



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6792) cleanup solrconfig.xml files by removing implicit plugins

2014-11-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14225998#comment-14225998
 ] 

ASF subversion and git services commented on SOLR-6792:
---

Commit 1641790 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1641790 ]

SOLR-6792 deprecate AdminHandlers, Clean up solrconfig.xml of unnecessary 
plugin definitions, implicit registration of /replication, /get and /admin/* 
handlers

 cleanup solrconfig.xml files by removing implicit plugins
 -

 Key: SOLR-6792
 URL: https://issues.apache.org/jira/browse/SOLR-6792
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-6792.patch


  /replication , /get , /update, /admin/ are registered implicitly for each 
 core. No need to specify them from solrconfig.xml if nothing custom needs to 
 be added



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6792) cleanup solrconfig.xml files by removing implicit plugins

2014-11-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226002#comment-14226002
 ] 

ASF subversion and git services commented on SOLR-6792:
---

Commit 1641792 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1641792 ]

SOLR-6792 adding deprecation details

 cleanup solrconfig.xml files by removing implicit plugins
 -

 Key: SOLR-6792
 URL: https://issues.apache.org/jira/browse/SOLR-6792
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-6792.patch


  /replication , /get , /update, /admin/ are registered implicitly for each 
 core. No need to specify them from solrconfig.xml if nothing custom needs to 
 be added



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 1915 - Still Failing!

2014-11-26 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1915/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseG1GC (asserts: false)

1 tests failed.
FAILED:  org.apache.solr.core.TestNonNRTOpen.testReaderIsNotNRT

Error Message:
SOLR-5815? : wrong maxDoc: core=org.apache.solr.core.SolrCore@71235ce5 
searcher=Searcher@2196788a[collection1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_4(5.0.0):c1)
 Uninverting(_5(5.0.0):c1)))} expected:3 but was:2

Stack Trace:
java.lang.AssertionError: SOLR-5815? : wrong maxDoc: 
core=org.apache.solr.core.SolrCore@71235ce5 
searcher=Searcher@2196788a[collection1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_4(5.0.0):c1)
 Uninverting(_5(5.0.0):c1)))} expected:3 but was:2
at 
__randomizedtesting.SeedInfo.seed([C20445D5EAC42D50:7782245255059FA4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.core.TestNonNRTOpen.assertNotNRT(TestNonNRTOpen.java:142)
at 
org.apache.solr.core.TestNonNRTOpen.testReaderIsNotNRT(TestNonNRTOpen.java:100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 

[jira] [Commented] (SOLR-6554) Speed up overseer operations for collections with stateFormat > 1

2014-11-26 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226005#comment-14226005
 ] 

Noble Paul commented on SOLR-6554:
--

This is a much needed refactoring for Overseer. Good job. I have not gotten 
around to doing a full review, but I have seen enough of it and it looks good. Do 
we have batching of operations in stateFormat=2 now?

 Speed up overseer operations for collections with stateFormat > 1
 -

 Key: SOLR-6554
 URL: https://issues.apache.org/jira/browse/SOLR-6554
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 5.0, Trunk
Reporter: Shalin Shekhar Mangar
 Attachments: SOLR-6554.patch, SOLR-6554.patch, SOLR-6554.patch, 
 SOLR-6554.patch, SOLR-6554.patch


 Right now (after SOLR-5473 was committed), a node watches a collection only 
 if stateFormat=1 or if that node hosts at least one core belonging to that 
 collection.
 This means that a node which is the overseer operates on all collections but 
 watches only a few. So any read goes directly to zookeeper which slows down 
 overseer operations.
 Let's have the overseer node watch all collections always and never remove 
 those watches (except when the collection itself is deleted).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Where is the SVN repository only for Lucene project ?

2014-11-26 Thread Steve Davids
http://lucene.apache.org/core/developer.html

Sent from my iPhone

 On Nov 26, 2014, at 4:19 AM, Yosuke Yamatani 
 s151...@center.wakayama-u.ac.jp wrote:
 
 Dear sir/madam
 
 Hello, I’m Yosuke Yamatani.
 I’m a graduate student at Wakayama University, Japan.
 I study software evolution in OSS projects through the analysis of SVN
 repositories.
 I found the entire ASF repository, but I would like to mirror the SVN
 repository only for your project.
 Could you let me know how to get your repository ?
 
 Sincerely yours.
 Yosuke
 
 
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org
 


[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_20) - Build # 4453 - Still Failing!

2014-11-26 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4453/
Java: 64bit/jdk1.8.0_20 -XX:-UseCompressedOops -XX:+UseParallelGC (asserts: 
false)

3 tests failed.
FAILED:  
org.apache.lucene.mockfile.TestMockFilesystems.testDeleteIfExistsOpenFile

Error Message:
should have gotten exception

Stack Trace:
java.lang.AssertionError: should have gotten exception
at 
__randomizedtesting.SeedInfo.seed([8273B5CE58C73484:3098DAA623DAF603]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.lucene.mockfile.TestMockFilesystems.testDeleteIfExistsOpenFile(TestMockFilesystems.java:154)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.lucene.mockfile.TestMockFilesystems.testRenameOpenFile

Error Message:
should have gotten exception

Stack Trace:
java.lang.AssertionError: should have gotten exception
at 
__randomizedtesting.SeedInfo.seed([8273B5CE58C73484:F1A48368AD0DA5A4]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.lucene.mockfile.TestMockFilesystems.testRenameOpenFile(TestMockFilesystems.java:172)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

Re: [JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_40-ea-b09) - Build # 4452 - Failure!

2014-11-26 Thread Michael McCandless
Doesn't repro for me on Linux with 1.8.0_40, but the test failed on
Windows ... can someone with a Windows box try to repro?

It's odd, as if WindowsFS started up buggy in this JVM, because all
3 test cases for WindowsFS failed...

Mike McCandless

http://blog.mikemccandless.com

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-SmokeRelease-5.x - Build # 215 - Still Failing

2014-11-26 Thread Michael McCandless
Is anyone looking into why the smoke tester can't run Solr's example?
This has been failing for quite a while, and I thought I saw a commit
to smoke tester to try to fix it?

Should we stop trying to test the solr example from the smoke tester?

Mike McCandless

http://blog.mikemccandless.com


On Tue, Nov 25, 2014 at 10:06 PM, Apache Jenkins Server
jenk...@builds.apache.org wrote:
 Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.x/215/

 No tests ran.

 Build Log:
 [...truncated 51672 lines...]
 prepare-release-no-sign:
 [mkdir] Created dir: 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist
  [copy] Copying 446 files to 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/lucene
  [copy] Copying 254 files to 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/solr
[smoker] Java 1.7 JAVA_HOME=/home/jenkins/tools/java/latest1.7
[smoker] NOTE: output encoding is US-ASCII
[smoker]
[smoker] Load release URL 
 file:/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/...
[smoker]
[smoker] Test Lucene...
[smoker]   test basics...
[smoker]   get KEYS
[smoker] 0.1 MB in 0.01 sec (13.0 MB/sec)
[smoker]   check changes HTML...
[smoker]   download lucene-5.0.0-src.tgz...
[smoker] 27.8 MB in 0.04 sec (681.1 MB/sec)
[smoker] verify md5/sha1 digests
[smoker]   download lucene-5.0.0.tgz...
[smoker] 63.8 MB in 0.09 sec (694.6 MB/sec)
[smoker] verify md5/sha1 digests
[smoker]   download lucene-5.0.0.zip...
[smoker] 73.2 MB in 0.14 sec (526.5 MB/sec)
[smoker] verify md5/sha1 digests
[smoker]   unpack lucene-5.0.0.tgz...
[smoker] verify JAR metadata/identity/no javax.* or java.* classes...
[smoker] test demo with 1.7...
[smoker]   got 5569 hits for query lucene
[smoker] checkindex with 1.7...
[smoker] check Lucene's javadoc JAR
[smoker]   unpack lucene-5.0.0.zip...
[smoker] verify JAR metadata/identity/no javax.* or java.* classes...
[smoker] test demo with 1.7...
[smoker]   got 5569 hits for query lucene
[smoker] checkindex with 1.7...
[smoker] check Lucene's javadoc JAR
[smoker]   unpack lucene-5.0.0-src.tgz...
[smoker] make sure no JARs/WARs in src dist...
[smoker] run ant validate
[smoker] run tests w/ Java 7 and 
 testArgs='-Dtests.jettyConnector=Socket -Dtests.multiplier=1 
 -Dtests.slow=false'...
[smoker] test demo with 1.7...
[smoker]   got 207 hits for query lucene
[smoker] checkindex with 1.7...
[smoker] generate javadocs w/ Java 7...
[smoker]
[smoker] Crawl/parse...
[smoker]
[smoker] Verify...
[smoker]   confirm all releases have coverage in TestBackwardsCompatibility
[smoker] find all past Lucene releases...
[smoker] run TestBackwardsCompatibility..
[smoker] success!
[smoker]
[smoker] Test Solr...
[smoker]   test basics...
[smoker]   get KEYS
[smoker] 0.1 MB in 0.00 sec (86.1 MB/sec)
[smoker]   check changes HTML...
[smoker]   download solr-5.0.0-src.tgz...
[smoker] 34.1 MB in 0.04 sec (768.8 MB/sec)
[smoker] verify md5/sha1 digests
[smoker]   download solr-5.0.0.tgz...
[smoker] 146.5 MB in 0.48 sec (302.1 MB/sec)
[smoker] verify md5/sha1 digests
[smoker]   download solr-5.0.0.zip...
[smoker] 152.6 MB in 0.26 sec (598.2 MB/sec)
[smoker] verify md5/sha1 digests
[smoker]   unpack solr-5.0.0.tgz...
[smoker] verify JAR metadata/identity/no javax.* or java.* classes...
[smoker] unpack lucene-5.0.0.tgz...
[smoker]   **WARNING**: skipping check of 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
  it has javax.* classes
[smoker]   **WARNING**: skipping check of 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
  it has javax.* classes
[smoker] verify WAR metadata/contained JAR identity/no javax.* or 
 java.* classes...
[smoker] unpack lucene-5.0.0.tgz...
[smoker] copying unpacked distribution for Java 7 ...
[smoker] test solr example w/ Java 7...
[smoker]   start Solr instance 
 (log=/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/solr-example.log)...
[smoker] No process found for Solr node running on port 8983
[smoker]   starting Solr on port 8983 from 
 

[jira] [Commented] (SOLR-6658) SearchHandler should accept POST requests with JSON data in content stream for customized plug-in components

2014-11-26 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226142#comment-14226142
 ] 

Noble Paul commented on SOLR-6658:
--

No, there should be no constraint on the content type. It should be
possible to use xml or csv or whatever you wish



 SearchHandler should accept POST requests with JSON data in content stream 
 for customized plug-in components
 

 Key: SOLR-6658
 URL: https://issues.apache.org/jira/browse/SOLR-6658
 Project: Solr
  Issue Type: Improvement
  Components: search, SearchComponents - other
Affects Versions: 4.7, 4.7.1, 4.7.2, 4.8, 4.8.1, 4.9, 4.9.1, 4.10, 4.10.1
Reporter: Mark Peng
Assignee: Noble Paul
 Attachments: SOLR-6658.patch, SOLR-6658.patch


 This issue relates to the following one:
 *Return HTTP error on POST requests with no Content-Type*
 [https://issues.apache.org/jira/browse/SOLR-5517]
 The original consideration of the above is to make sure that incoming POST 
 requests to SearchHandler have a corresponding content-type specified. That is 
 quite reasonable; however, the following lines in the patch cause it to reject 
 all POST requests with content stream data, which is not necessary for that 
 issue:
 {code}
 Index: solr/core/src/java/org/apache/solr/handler/component/SearchHandler.java
 ===
 --- solr/core/src/java/org/apache/solr/handler/component/SearchHandler.java   
 (revision 1546817)
 +++ solr/core/src/java/org/apache/solr/handler/component/SearchHandler.java   
 (working copy)
 @@ -22,9 +22,11 @@
  import java.util.List;
  
  import org.apache.solr.common.SolrException;
 +import org.apache.solr.common.SolrException.ErrorCode;
  import org.apache.solr.common.params.CommonParams;
  import org.apache.solr.common.params.ModifiableSolrParams;
  import org.apache.solr.common.params.ShardParams;
 +import org.apache.solr.common.util.ContentStream;
  import org.apache.solr.core.CloseHook;
  import org.apache.solr.core.PluginInfo;
  import org.apache.solr.core.SolrCore;
 @@ -165,6 +167,10 @@
{
  // int sleep = req.getParams().getInt("sleep",0);
  // if (sleep > 0) {log.error("SLEEPING for " + sleep);  
 Thread.sleep(sleep);}
 +if (req.getContentStreams() != null && 
 req.getContentStreams().iterator().hasNext()) {
 +  throw new SolrException(ErrorCode.BAD_REQUEST, "Search requests cannot 
 accept content streams");
 +}
 +
 +
  ResponseBuilder rb = new ResponseBuilder(req, rsp, components);
  if (rb.requestInfo != null) {
rb.requestInfo.setResponseBuilder(rb);
 {code}
 We are using Solr 4.5.1 in our production services and considering upgrading 
 to 4.9/5.0 to get more features. But due to this issue, we have no 
 chance to upgrade, because we have some important customized SearchComponent 
 plug-ins that need to get POST data from SearchHandler to do further 
 processing.
 Therefore, we are asking whether it is possible to remove the content stream 
 constraint shown above and to let SearchHandler accept POST requests with 
 *Content-Type: application/json*, so that further components can get the data.
 Thank you.
 Best regards,
 Mark Peng
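 For illustration only, here is a minimal sketch of the kind of plug-in component 
 the reporter describes: a custom SearchComponent that reads the POSTed body from 
 the request's content streams. The class name and the parsing step are 
 hypothetical; getContentStreams() and ContentStream.getReader() are the standard 
 Solr APIs for request bodies.
 {code}
import java.io.IOException;
import java.io.Reader;

import org.apache.solr.common.util.ContentStream;
import org.apache.solr.handler.component.ResponseBuilder;
import org.apache.solr.handler.component.SearchComponent;

// Hypothetical custom component that consumes a JSON POST body.
public class JsonBodyComponent extends SearchComponent {

  @Override
  public void prepare(ResponseBuilder rb) throws IOException {
    Iterable<ContentStream> streams = rb.req.getContentStreams();
    if (streams == null) {
      return; // nothing was POSTed
    }
    for (ContentStream cs : streams) {
      try (Reader reader = cs.getReader()) {
        // Parse the JSON body here and stash the result for later components,
        // e.g. rb.req.getContext().put("jsonBody", parsed);
      }
    }
  }

  @Override
  public void process(ResponseBuilder rb) throws IOException {
    // Use the parsed body to influence the search, if desired.
  }

  @Override
  public String getDescription() {
    return "Reads a JSON request body for downstream components";
  }

  @Override
  public String getSource() {
    return null;
  }
}
 {code}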



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6547) CloudSolrServer query getqtime Exception

2014-11-26 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6547:
-
Attachment: SOLR-6547.patch

no need to check for string etc.

Is it possible to get a testcase for this?

 CloudSolrServer query getqtime Exception
 

 Key: SOLR-6547
 URL: https://issues.apache.org/jira/browse/SOLR-6547
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Affects Versions: 4.10
Reporter: kevin
Assignee: Noble Paul
 Attachments: SOLR-6547.patch, SOLR-6547.patch


 We are using CloudSolrServer to query, but SolrJ throws an exception:
 java.lang.ClassCastException: java.lang.Long cannot be cast to 
 java.lang.Integer  at 
 org.apache.solr.client.solrj.response.SolrResponseBase.getQTime(SolrResponseBase.java:76)
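 For context, a small sketch of the kind of change involved: read QTime from the 
 response header as a Number instead of assuming it is an Integer. This is an 
 illustration only (the class and method names are invented), not the attached 
 patch.
 {code}
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.util.NamedList;

// Illustrative client-side workaround: tolerate QTime arriving as a Long.
final class QTimeUtil {
  static int qtimeOf(QueryResponse rsp) {
    NamedList<?> header = rsp.getResponseHeader();
    Object qtime = (header == null) ? null : header.get("QTime");
    return (qtime instanceof Number) ? ((Number) qtime).intValue() : -1;
  }
}
 {code}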



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6469) Solr search with multicore + grouping + highlighting cause NPE

2014-11-26 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226171#comment-14226171
 ] 

Noble Paul commented on SOLR-6469:
--

I don't think this bug is valid anymore or the fix is wrong
replacing 
{code}
rb.resultIds = new HashMap();
{code}
 with
{code}
 if(rb.resultIds == null) {
rb.resultIds = new HashMap();
  }
{code}

cannot fix an NPE


 Solr search with multicore + grouping + highlighting cause NPE
 --

 Key: SOLR-6469
 URL: https://issues.apache.org/jira/browse/SOLR-6469
 Project: Solr
  Issue Type: Bug
  Components: highlighter, multicore, SearchComponents - other
Affects Versions: 4.8.1
 Environment: Windows 7, Intellij
Reporter: Shay Sofer
Assignee: Noble Paul
  Labels: patch
 Attachments: SOLR-6469.patch


 Integration of Grouping + shards + highlighting causes a NullPointerException.
 Query: 
 localhost:8983/solr/Global_A/select?q=%2Btext%3A%28shay*%29+rows=100fl=id%2CobjId%2Cnullshards=http%3A%2F%2F127.0.0.1%3A8983%2Fsolr%2F0_A%2Chttp%3A%2F%2F127.0.0.1%3A8983%2Fsolr%2FGlobal_Agroup=truegroup.query=name__s%3Ashaysort=name__s_sort+aschl=true
 results:
 java.lang.NullPointerException
  at 
 org.apache.solr.handler.component.HighlightComponent.finishStage(HighlightComponent.java:189)
  at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:330)
  at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:1952)
  at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:774)
  at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
  at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
  at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
  at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
  at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
  at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
  at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
  at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
  at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
  at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
  at 
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
  at 
 org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
  at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
  at org.eclipse.jetty.server.Server.handle(Server.java:368)
  at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
  at 
 org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
  at 
 org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
  at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
  at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
  at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
  at 
 org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
  at 
 org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
  at 
 org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
  at 
 org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
  at java.lang.Thread.run(Thread.java:722)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6469) Solr search with multicore + grouping + highlighting cause NPE

2014-11-26 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226171#comment-14226171
 ] 

Noble Paul edited comment on SOLR-6469 at 11/26/14 1:25 PM:


I don't think this bug is valid anymore or the fix is wrong
replacing 
{code}
rb.resultIds = new HashMap();
{code}
 with
{code}
 if(rb.resultIds == null) {
rb.resultIds = new HashMap();
  }

{code}

cannot fix an NPE

Is there a way to reproduce this ?


was (Author: noble.paul):
I don't think this bug is valid anymore or the fix is wrong
replacing 
{code}
rb.resultIds = new HashMap();
{code}
 with
{code}
 if(rb.resultIds == null) {
rb.resultIds = new HashMap();
  }
{code}

cannot fix an NPE


 Solr search with multicore + grouping + highlighting cause NPE
 --

 Key: SOLR-6469
 URL: https://issues.apache.org/jira/browse/SOLR-6469
 Project: Solr
  Issue Type: Bug
  Components: highlighter, multicore, SearchComponents - other
Affects Versions: 4.8.1
 Environment: Windows 7, Intellij
Reporter: Shay Sofer
Assignee: Noble Paul
  Labels: patch
 Attachments: SOLR-6469.patch


 Integration of Grouping + shards + highlighting causes a NullPointerException.
 Query: 
 localhost:8983/solr/Global_A/select?q=%2Btext%3A%28shay*%29+rows=100fl=id%2CobjId%2Cnullshards=http%3A%2F%2F127.0.0.1%3A8983%2Fsolr%2F0_A%2Chttp%3A%2F%2F127.0.0.1%3A8983%2Fsolr%2FGlobal_Agroup=truegroup.query=name__s%3Ashaysort=name__s_sort+aschl=true
 results:
 java.lang.NullPointerException
  at 
 org.apache.solr.handler.component.HighlightComponent.finishStage(HighlightComponent.java:189)
  at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:330)
  at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:1952)
  at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:774)
  at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
  at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
  at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
  at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
  at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
  at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
  at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
  at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
  at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
  at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
  at 
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
  at 
 org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
  at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
  at org.eclipse.jetty.server.Server.handle(Server.java:368)
  at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
  at 
 org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
  at 
 org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
  at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
  at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
  at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
  at 
 org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
  at 
 org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
  at 
 org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
  at 
 org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
  at java.lang.Thread.run(Thread.java:722)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Where is the SVN repository only for Lucene project ?

2014-11-26 Thread david.w.smi...@gmail.com
GitHub offers SVN access:
svn checkout https://github.com/apache/lucene-solr

~ David Smiley
Freelance Apache Lucene/Solr Search Consultant/Developer
http://www.linkedin.com/in/davidwsmiley

On Wed, Nov 26, 2014 at 4:19 AM, Yosuke Yamatani 
s151...@center.wakayama-u.ac.jp wrote:

 Dear sir/madam

 Hello, I’m Yosuke Yamatani.
 I’m a graduate student at Wakayama University, Japan.
 I study software evolution in OSS projects through the analysis of SVN
 repositories.
 I found the entire ASF repository, but I would like to mirror the SVN
 repository only for your project.
 Could you let me know how to get your repository ?

 Sincerely yours.
 Yosuke



 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Commented] (SOLR-4799) SQLEntityProcessor for zipper join

2014-11-26 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226174#comment-14226174
 ] 

Noble Paul commented on SOLR-4799:
--

Hi, can you post a patch updated to the trunk, please? If I'm not wrong, the 
code kicks in when the entity attribute join is present, so it is a low-risk 
feature anyway

 SQLEntityProcessor for zipper join
 --

 Key: SOLR-4799
 URL: https://issues.apache.org/jira/browse/SOLR-4799
 Project: Solr
  Issue Type: New Feature
  Components: contrib - DataImportHandler
Reporter: Mikhail Khludnev
Priority: Minor
  Labels: DIH, dataimportHandler, dih
 Attachments: SOLR-4799.patch


 DIH is mostly considered a playground tool, and real usages end up with 
 SolrJ. I want to contribute a few improvements targeting DIH performance.
 This one provides a performant approach for joining SQL entities with minimal 
 memory usage, in contrast to 
 http://wiki.apache.org/solr/DataImportHandler#CachedSqlEntityProcessor  
 The idea is:
 * the parent table is explicitly ordered by its PK in SQL
 * the children table is explicitly ordered by the parent_id FK in SQL
 * the children entity processor joins the ordered resultsets by a 'zipper' algorithm (see the sketch below).
 Do you think it's worth contributing it into DIH?
 cc: [~goksron] [~jdyer]
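 A minimal standalone sketch of the 'zipper' merge described above (illustration 
 only, not the attached patch): both result sets arrive sorted by the join key, so 
 children can be attached to their parents in a single pass without caching either 
 side in memory. Rows are modeled here as plain Maps with illustrative key names.
 {code}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

// Standalone illustration of a zipper join over two key-ordered iterators.
final class ZipperJoin {
  static void join(Iterator<Map<String, Object>> parents,
                   Iterator<Map<String, Object>> children) {
    Map<String, Object> child = children.hasNext() ? children.next() : null;
    while (parents.hasNext()) {
      Map<String, Object> parent = parents.next();
      long parentId = ((Number) parent.get("id")).longValue();
      List<Map<String, Object>> matched = new ArrayList<>();
      // Advance the child cursor while it still belongs to this parent (or to
      // an earlier, now-skipped parent); both streams are ordered by the key.
      while (child != null && ((Number) child.get("parent_id")).longValue() <= parentId) {
        if (((Number) child.get("parent_id")).longValue() == parentId) {
          matched.add(child);
        }
        child = children.hasNext() ? children.next() : null;
      }
      // Emit 'parent' together with 'matched', e.g. as one Solr document.
      System.out.println(parent + " -> " + matched);
    }
  }
}
 {code}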



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6075) SimpleRateLimiter cast overflow results in Thread.sleep exception

2014-11-26 Thread Boaz Leskes (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226188#comment-14226188
 ] 

Boaz Leskes commented on LUCENE-6075:
-

Thank you for fixing it.

I ran into it in code that ran in a VM and I suspect (though can't be sure) it 
had something to do with virtualized time. I wonder if it makes sense to have a 
sanity-check upper bound on rate limiting, as sleeping for 25 days is most 
likely not the intended behaviour.

 SimpleRateLimiter cast overflow results in Thread.sleep exception
 -

 Key: LUCENE-6075
 URL: https://issues.apache.org/jira/browse/LUCENE-6075
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/store
Reporter: Boaz Leskes
Assignee: Michael McCandless
 Fix For: Trunk, 5.x

 Attachments: LUCENE-6075.patch


 SimpleRateLimiter.pause() uses an unchecked cast of longs to ints:
 Thread.sleep((int) (pauseNS/100), (int) (pauseNS % 100));
 Although we check that pauseNS is positive, if it's large enough the 
 cast to int produces a negative value, causing Thread.sleep to throw an 
 exception.
 We should protect against it.
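 A minimal sketch of the kind of guard being asked for (not necessarily what the 
 committed patch does): convert the nanosecond pause with TimeUnit instead of 
 casting the millisecond quotient to int, so a very large pause can never turn 
 into a negative sleep argument.
 {code}
import java.util.concurrent.TimeUnit;

// Illustrative safe pause: no int cast of the millisecond quotient.
final class SafePause {
  static void pause(long pauseNS) throws InterruptedException {
    if (pauseNS <= 0) {
      return;
    }
    long millis = TimeUnit.NANOSECONDS.toMillis(pauseNS); // stays a long, never negative
    int nanos = (int) (pauseNS % 1_000_000L);             // always in [0, 999999]
    Thread.sleep(millis, nanos);
  }
}
 {code}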



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: Where is the SVN repository only for Lucene project ?

2014-11-26 Thread Uwe Schindler
Cool idea :)

 

Uwe

 

-

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

http://www.thetaphi.de

eMail: u...@thetaphi.de

 

From: david.w.smi...@gmail.com [mailto:david.w.smi...@gmail.com] 
Sent: Wednesday, November 26, 2014 2:26 PM
To: dev@lucene.apache.org
Subject: Re: Where is the SVN repository only for Lucene project ?

 

GitHub offers SVN access:

svn checkout https://github.com/apache/lucene-solr




~ David Smiley

Freelance Apache Lucene/Solr Search Consultant/Developer

http://www.linkedin.com/in/davidwsmiley

 

On Wed, Nov 26, 2014 at 4:19 AM, Yosuke Yamatani 
s151...@center.wakayama-u.ac.jp wrote:

Dear sir/madam

Hello, I’m Yosuke Yamatani.
I’m a graduate student at Wakayama University, Japan.
I study software evolution in OSS projects through the analysis of SVN
repositories.
I found the entire ASF repository, but I would like to mirror the SVN
repository only for your project.
Could you let me know how to get your repository ?

Sincerely yours.
Yosuke



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

 



Re: Where is the SVN repository only for Lucene project ?

2014-11-26 Thread Alexandre Rafalovitch
With Git and GitHub it is possible to do a shallow fetch which will
only get the files without much history. Maybe with SVN as well, but I
haven't tried.

Regards,
   Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853


On 26 November 2014 at 08:26, david.w.smi...@gmail.com
david.w.smi...@gmail.com wrote:
 GitHub offers SVN access:
 svn checkout https://github.com/apache/lucene-solr

 ~ David Smiley
 Freelance Apache Lucene/Solr Search Consultant/Developer
 http://www.linkedin.com/in/davidwsmiley

 On Wed, Nov 26, 2014 at 4:19 AM, Yosuke Yamatani
 s151...@center.wakayama-u.ac.jp wrote:

 Dear sir/madam

 Hello, I’m Yosuke Yamatani.
 I’m a graduate student at Wakayama University, Japan.
 I study software evolution in OSS projects through the analysis of SVN
 repositories.
 I found the entire ASF repository, but I would like to mirror the SVN
 repository only for your project.
 Could you let me know how to get your repository ?

 Sincerely yours.
 Yosuke



 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6077) Add a filter cache

2014-11-26 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-6077:


 Summary: Add a filter cache
 Key: LUCENE-6077
 URL: https://issues.apache.org/jira/browse/LUCENE-6077
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0


Lucene already has filter caching abilities through CachingWrapperFilter, but 
CachingWrapperFilter requires you to know which filters you want to cache 
up-front.

Caching filters is not trivial. If you cache too aggressively, then you slow 
things down since you need to iterate over all documents that match the filter 
in order to load it into an in-memory cacheable DocIdSet. On the other hand, if 
you don't cache at all, you are potentially missing interesting speed-ups on 
frequently-used filters.

Something that would be nice would be to have a generic filter cache that would 
track usage for individual filters and decide whether or not to cache a 
filter on a given segment based on usage statistics and various heuristics, 
such as:
 - the overhead to cache the filter (for instance some filters produce 
DocIdSets that are already cacheable)
 - the cost to build the DocIdSet (the getDocIdSet method is very expensive on 
some filters such as MultiTermQueryWrapperFilter that potentially need to merge 
lots of postings lists)
 - the segment we are searching on (flush segments will likely be merged right 
away so it's probably not worth building a cache on such segments)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6077) Add a filter cache

2014-11-26 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6077:
-
Attachment: LUCENE-6077.patch

Here is a patch. It divides the work into 2 pieces:
 - FilterCache, whose responsibility is to act as a per-segment cache for 
filters but which doesn't make any decision about which filters should be cached
 - FilterCachingPolicy, whose responsibility is to decide whether a 
filter is worth caching given the filter itself, the current segment and the 
produced (uncached) DocIdSet.

FilterCache has an implementation called LRUFilterCache that accepts a maximum 
size (number of cached filters) and a maximum RAM usage, and evicts 
least-recently-used filters first. It has some protected methods that allow 
configuring which impl should be used to cache DocIdSets (RoaringDocIdSet by 
default) and how to measure the RAM usage of filters (the default impl uses 
Accountable#ramBytesUsed if the filter implements Accountable, and falls back 
to an arbitrary constant (1024) otherwise).

FilterCachingPolicy has an implementation called 
UsageTrackingFilterCachingPolicy that tries to provide sensible defaults:
 - it tracks the 256 most recently used filters (through their hash codes) 
globally (not per segment)
 - it only caches on segments whose source is a merge or addIndexes (not 
flushes)
 - it uses some heuristics to decide how many times a filter should appear in 
the history of 256 filters in order to be cached.

The filter caching policy can be configured on a per-filter basis, so that even 
if there are filters that you want to cache more aggressively than others, it 
is possible to cache them all in a single FilterCache instance.
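To make the decision logic concrete, here is a small standalone sketch of a 
usage-tracking policy in the spirit described above. It is not the attached 
patch and the names are invented: it remembers the hash codes of recently seen 
filters and agrees to cache a filter once it has been seen often enough, while 
skipping segments that look like fresh flushes.
{code}
import java.util.ArrayDeque;
import java.util.Deque;

// Standalone sketch of a usage-tracking caching decision; not Lucene's API.
final class UsageTrackingPolicySketch {
  private static final int HISTORY_SIZE = 256;
  private static final int MIN_FREQUENCY = 5; // heuristic threshold

  private final Deque<Integer> recentFilterHashes = new ArrayDeque<>(HISTORY_SIZE);

  synchronized void onUse(Object filter) {
    if (recentFilterHashes.size() == HISTORY_SIZE) {
      recentFilterHashes.removeFirst(); // drop the oldest entry
    }
    recentFilterHashes.addLast(filter.hashCode());
  }

  synchronized boolean shouldCache(Object filter, boolean segmentFromMergeOrAddIndexes) {
    if (!segmentFromMergeOrAddIndexes) {
      return false; // flush segments are likely to be merged away soon
    }
    int hash = filter.hashCode();
    int seen = 0;
    for (int h : recentFilterHashes) {
      if (h == hash) {
        seen++;
      }
    }
    return seen >= MIN_FREQUENCY;
  }
}
{code}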

 Add a filter cache
 --

 Key: LUCENE-6077
 URL: https://issues.apache.org/jira/browse/LUCENE-6077
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-6077.patch


 Lucene already has filter caching abilities through CachingWrapperFilter, but 
 CachingWrapperFilter requires you to know which filters you want to cache 
 up-front.
 Caching filters is not trivial. If you cache too aggressively, then you slow 
 things down since you need to iterate over all documents that match the 
 filter in order to load it into an in-memory cacheable DocIdSet. On the other 
 hand, if you don't cache at all, you are potentially missing interesting 
 speed-ups on frequently-used filters.
 Something that would be nice would be to have a generic filter cache that 
 would track usage for individual filters and decide whether or not to cache 
 a filter on a given segment based on usage statistics and various 
 heuristics, such as:
  - the overhead to cache the filter (for instance some filters produce 
 DocIdSets that are already cacheable)
  - the cost to build the DocIdSet (the getDocIdSet method is very expensive 
 on some filters such as MultiTermQueryWrapperFilter that potentially need to 
 merge lots of postings lists)
  - the segment we are searching on (flush segments will likely be merged 
 right away so it's probably not worth building a cache on such segments)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5864) Remove previous SolrCore as parameter on reload

2014-11-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226210#comment-14226210
 ] 

ASF subversion and git services commented on SOLR-5864:
---

Commit 1641819 from [~tomasflobbe] in branch 'dev/trunk'
[ https://svn.apache.org/r1641819 ]

SOLR-5864: Remove previous SolrCore as parameter on reload

 Remove previous SolrCore as parameter on reload
 ---

 Key: SOLR-5864
 URL: https://issues.apache.org/jira/browse/SOLR-5864
 Project: Solr
  Issue Type: Improvement
Affects Versions: Trunk
Reporter: Tomás Fernández Löbbe
Priority: Trivial
 Attachments: SOLR-5864.patch, SOLR-5864.patch, SOLR-5864.patch, 
 SOLR-5864.patch


 Currently the reload method is reload(SolrResourceLoader resourceLoader, 
 SolrCore prev), but it is always called with “prev” being the same as 
 “this”:
 core.reload(resourceLoader, core). 
 Frankly, I don’t think it even makes sense to call it in any other way (it 
 would just be to create the first reader with a different core than the one 
 that’s being reloaded?)
 I think we should just remove the SolrCore parameter and let the reload 
 method always reload the core on which it's being called. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-5072) Create a field type for multi-value numeric/time durations

2014-11-26 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley closed SOLR-5072.
--
Resolution: Duplicate

Closed as duplicate of SOLR-6103 (DateRangeField).  

DateRangeField uses a single-dimensional spatial prefix tree.  
Performance-wise, it would be very interesting to compare the 1D 
SpatialPrefixTree-with-range-data approach to a 2D SpatialPrefixTree with point 
data.  Someday maybe.

 Create a field type for multi-value numeric/time durations
 --

 Key: SOLR-5072
 URL: https://issues.apache.org/jira/browse/SOLR-5072
 Project: Solr
  Issue Type: New Feature
  Components: spatial
Reporter: David Smiley

 It would be great if there was a field type that implemented the technique 
 described here: http://wiki.apache.org/solr/SpatialForTimeDurations   It can 
 be tricky to implement properly.
 Eventually, once there's a new prefixTree implementation (which I'm slowly 
 working on), the internal implementation can be much better.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6072) Use mock filesystem in tests

2014-11-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226220#comment-14226220
 ] 

ASF subversion and git services commented on LUCENE-6072:
-

Commit 1641821 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1641821 ]

LUCENE-6072: add a way to test for too many open files

 Use mock filesystem in tests
 

 Key: LUCENE-6072
 URL: https://issues.apache.org/jira/browse/LUCENE-6072
 Project: Lucene - Core
  Issue Type: Test
Reporter: Robert Muir
 Attachments: LUCENE-6072.patch, LUCENE-6072.patch, LUCENE-6072.patch


 We went through the trouble to convert to NIO.2, but we don't take advantage 
 of it in tests...
 Since everything boils down to LuceneTestCase's temp dir (which is just 
 Path), we can wrap the filesystem with useful stuff:
 * detect file handle leaks (better than mockdir: not just index files)
 * act like windows (don't delete open files, case-insensitivity, etc)
 * verbosity (add what is going on to infostream for debugging)
 I prototyped some of this in a patch. Currently it makes a chain like this:
 {code}
 private FileSystem initializeFileSystem() {
   FileSystem fs = FileSystems.getDefault();
   if (LuceneTestCase.VERBOSE) {
     fs = new VerboseFS(fs, new PrintStreamInfoStream(System.out)).getFileSystem(null);
   }
   fs = new LeakFS(fs).getFileSystem(null);
   fs = new WindowsFS(fs).getFileSystem(null);
   return fs.provider().getFileSystem(URI.create("file:///"));
 }
 {code}
 Some things to figure out:
 * I don't think we want to wrap all the time (worry about hiding bugs)
 * it's currently a bit lenient (e.g. these filesystems allow calling toFile, 
 which can escape and allow you to do broken things). But only 2 or 3 tests 
 really need File, so we could fix that.
 * it's currently complicated and messy (I blame the JDK API here, but maybe we 
 can simplify it)
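
As a hypothetical usage example (assuming the initializeFileSystem() sketch above 
is what ends up backing LuceneTestCase's temp dirs), any Path resolved against 
the wrapped FileSystem is then checked by the whole chain:

{code}
import java.io.IOException;
import java.nio.file.FileSystem;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical usage of the chain sketched above; the names here are illustrative only.
class MockFsUsageSketch {
  static Path newTempDir(FileSystem fs) throws IOException {
    Path tempDir = fs.getPath("/tmp", "lucene-test");
    // All operations on this Path go through VerboseFS -> LeakFS -> WindowsFS.
    return Files.createDirectories(tempDir);
  }
}
{code}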



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6076) CachingWrapperFilter.getChildResources locks on the wrong object

2014-11-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226226#comment-14226226
 ] 

ASF subversion and git services commented on LUCENE-6076:
-

Commit 1641822 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1641822 ]

LUCENE-6076: Fix locking in CachingWrapperFilter.getChildResources.

 CachingWrapperFilter.getChildResources locks on the wrong object
 

 Key: LUCENE-6076
 URL: https://issues.apache.org/jira/browse/LUCENE-6076
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-6076.patch


 CachingWrapperFilter.getChildResources caches on the CachingWrapperFilter 
 instance instead of the wrapped cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6076) CachingWrapperFilter.getChildResources locks on the wrong object

2014-11-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226228#comment-14226228
 ] 

ASF subversion and git services commented on LUCENE-6076:
-

Commit 1641823 from [~jpountz] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1641823 ]

LUCENE-6076: Fix locking in CachingWrapperFilter.getChildResources.

 CachingWrapperFilter.getChildResources locks on the wrong object
 

 Key: LUCENE-6076
 URL: https://issues.apache.org/jira/browse/LUCENE-6076
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-6076.patch


 CachingWrapperFilter.getChildResources caches on the CachingWrapperFilter 
 instance instead of the wrapped cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: svn commit: r1641822 - /lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/CachingWrapperFilter.java

2014-11-26 Thread Robert Muir
Accountables* methods are always a snapshot in time, so we don't need
to make a copy here.
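
For readers following along, a minimal sketch of the simplification being 
suggested here (an illustration of the idea, not the follow-up commit itself; 
it assumes the same cache field and lock as in the quoted diff below):

{code}
import java.util.Map;
import java.util.WeakHashMap;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.util.Accountable;
import org.apache.lucene.util.Accountables;

// Illustration only: a stripped-down stand-in for CachingWrapperFilter, not the real class.
class SnapshotSketch {
  private final Map<Object, DocIdSet> cache = new WeakHashMap<>();

  public Iterable<? extends Accountable> getChildResources() {
    // Accountables.namedAccountables takes a point-in-time snapshot itself,
    // so no defensive copy of the map is needed; just read it under its lock.
    synchronized (cache) {
      return Accountables.namedAccountables("segment", cache);
    }
  }
}
{code}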

On Wed, Nov 26, 2014 at 9:26 AM,  jpou...@apache.org wrote:
 Author: jpountz
 Date: Wed Nov 26 14:26:43 2014
 New Revision: 1641822

 URL: http://svn.apache.org/r1641822
 Log:
 LUCENE-6076: Fix locking in CachingWrapperFilter.getChildResources.

 Modified:
 
 lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/CachingWrapperFilter.java

 Modified: 
 lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/CachingWrapperFilter.java
 URL: 
 http://svn.apache.org/viewvc/lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/CachingWrapperFilter.java?rev=1641822r1=1641821r2=1641822view=diff
 ==
 --- 
 lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/CachingWrapperFilter.java
  (original)
 +++ 
 lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/CachingWrapperFilter.java
  Wed Nov 26 14:26:43 2014
 @@ -22,6 +22,7 @@ import static org.apache.lucene.search.D
  import java.io.IOException;
  import java.util.ArrayList;
  import java.util.Collections;
 +import java.util.HashMap;
  import java.util.List;
  import java.util.Map;
  import java.util.WeakHashMap;
 @@ -146,8 +147,12 @@ public class CachingWrapperFilter extend
}

@Override
 -  public synchronized Iterable<? extends Accountable> getChildResources() {
 -    // Sync only to pull the current set of values:
 -    return Accountables.namedAccountables("segment", cache);
 +  public Iterable<? extends Accountable> getChildResources() {
 +    // Sync to pull the current set of values:
 +    final Map<Object, DocIdSet> copy;
 +    synchronized (cache) {
 +      copy = new HashMap<>(cache);
 +    }
 +    return Accountables.namedAccountables("segment", copy);
}
  }



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6076) CachingWrapperFilter.getChildResources locks on the wrong object

2014-11-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226252#comment-14226252
 ] 

ASF subversion and git services commented on LUCENE-6076:
-

Commit 1641824 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1641824 ]

LUCENE-6076: Remove unnecessary copy (thanks Robert for pointing this out).

 CachingWrapperFilter.getChildResources locks on the wrong object
 

 Key: LUCENE-6076
 URL: https://issues.apache.org/jira/browse/LUCENE-6076
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-6076.patch


 CachingWrapperFilter.getChildResources caches on the CachingWrapperFilter 
 instance instead of the wrapped cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6076) CachingWrapperFilter.getChildResources locks on the wrong object

2014-11-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226258#comment-14226258
 ] 

ASF subversion and git services commented on LUCENE-6076:
-

Commit 1641825 from [~jpountz] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1641825 ]

LUCENE-6076: Remove unnecessary copy.

 CachingWrapperFilter.getChildResources locks on the wrong object
 

 Key: LUCENE-6076
 URL: https://issues.apache.org/jira/browse/LUCENE-6076
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-6076.patch


 CachingWrapperFilter.getChildResources caches on the CachingWrapperFilter 
 instance instead of the wrapped cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: svn commit: r1641822 - /lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/CachingWrapperFilter.java

2014-11-26 Thread Adrien Grand
I just committed a fix. Thanks for the note!

On Wed, Nov 26, 2014 at 3:32 PM, Robert Muir rcm...@gmail.com wrote:
 Accountables* methods are always a snapshot in time, so we don't need
 to make a copy here.

 On Wed, Nov 26, 2014 at 9:26 AM,  jpou...@apache.org wrote:
 Author: jpountz
 Date: Wed Nov 26 14:26:43 2014
 New Revision: 1641822

 URL: http://svn.apache.org/r1641822
 Log:
 LUCENE-6076: Fix locking in CachingWrapperFilter.getChildResources.

 Modified:
 
 lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/CachingWrapperFilter.java

 Modified: 
 lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/CachingWrapperFilter.java
 URL: 
 http://svn.apache.org/viewvc/lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/CachingWrapperFilter.java?rev=1641822r1=1641821r2=1641822view=diff
 ==
 --- 
 lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/CachingWrapperFilter.java
  (original)
 +++ 
 lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/CachingWrapperFilter.java
  Wed Nov 26 14:26:43 2014
 @@ -22,6 +22,7 @@ import static org.apache.lucene.search.D
  import java.io.IOException;
  import java.util.ArrayList;
  import java.util.Collections;
 +import java.util.HashMap;
  import java.util.List;
  import java.util.Map;
  import java.util.WeakHashMap;
 @@ -146,8 +147,12 @@ public class CachingWrapperFilter extend
}

@Override
 -  public synchronized Iterable<? extends Accountable> getChildResources() {
 -    // Sync only to pull the current set of values:
 -    return Accountables.namedAccountables("segment", cache);
 +  public Iterable<? extends Accountable> getChildResources() {
 +    // Sync to pull the current set of values:
 +    final Map<Object, DocIdSet> copy;
 +    synchronized (cache) {
 +      copy = new HashMap<>(cache);
 +    }
 +    return Accountables.namedAccountables("segment", copy);
}
  }



 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




-- 
Adrien

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6076) CachingWrapperFilter.getChildResources locks on the wrong object

2014-11-26 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-6076.
--
Resolution: Fixed

 CachingWrapperFilter.getChildResources locks on the wrong object
 

 Key: LUCENE-6076
 URL: https://issues.apache.org/jira/browse/LUCENE-6076
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-6076.patch


 CachingWrapperFilter.getChildResources caches on the CachingWrapperFilter 
 instance instead of the wrapped cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5864) Remove previous SolrCore as parameter on reload

2014-11-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226263#comment-14226263
 ] 

ASF subversion and git services commented on SOLR-5864:
---

Commit 1641826 from [~tomasflobbe] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1641826 ]

SOLR-5864: Remove previous SolrCore as parameter on reload

 Remove previous SolrCore as parameter on reload
 ---

 Key: SOLR-5864
 URL: https://issues.apache.org/jira/browse/SOLR-5864
 Project: Solr
  Issue Type: Improvement
Affects Versions: Trunk
Reporter: Tomás Fernández Löbbe
Priority: Trivial
 Attachments: SOLR-5864.patch, SOLR-5864.patch, SOLR-5864.patch, 
 SOLR-5864.patch


 Currently the reload method is reload(SolrResourceLoader resourceLoader, 
 SolrCore prev), but it is always called with “prev” being the same as 
 “this”:
 core.reload(resourceLoader, core). 
 Frankly, I don’t think it even makes sense to call it in any other way (it 
 would just be to create the first reader with a different core than the one 
 that’s being reloaded?)
 I think we should just remove the SolrCore parameter and let the reload 
 method always reload the core on which it's being called. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6797) Add score=degrees|kilometers|miles for AbstractSpatialFieldType

2014-11-26 Thread David Smiley (JIRA)
David Smiley created SOLR-6797:
--

 Summary: Add score=degrees|kilometers|miles for 
AbstractSpatialFieldType
 Key: SOLR-6797
 URL: https://issues.apache.org/jira/browse/SOLR-6797
 Project: Solr
  Issue Type: Improvement
  Components: spatial
Reporter: David Smiley


Annoyingly, the units=degrees attribute is required for fields extending 
AbstractSpatialFieldType (e.g. RPT, BBox).  And it doesn't really have any 
effect.  I propose the following:

* Simply drop the attribute; ignore it if someone sets it to degrees (for 
back-compat).
* When using score=distance, or score=area or area2D (as seen in BBoxField), 
use kilometers if geo=true, otherwise degrees.
* Add support for score=degrees|kilometers|miles



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6797) Add score=degrees|kilometers|miles for AbstractSpatialFieldType

2014-11-26 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226267#comment-14226267
 ] 

David Smiley commented on SOLR-6797:


The reason I originally added the units attribute was for the coordinate values, 
which are in degrees.  And for consistency, I returned distances in degrees as 
well.

 Add score=degrees|kilometers|miles for AbstractSpatialFieldType
 ---

 Key: SOLR-6797
 URL: https://issues.apache.org/jira/browse/SOLR-6797
 Project: Solr
  Issue Type: Improvement
  Components: spatial
Reporter: David Smiley

 Annoyingly, the units=degrees attribute is required for fields extending 
 AbstractSpatialFieldType (e.g. RPT, BBox).  And it doesn't really have any 
 effect.  I propose the following:
 * Simply drop the attribute; ignore it if someone sets it to degrees (for 
 back-compat).
 * When using score=distance, or score=area or area2D (as seen in BBoxField), 
 use kilometers if geo=true, otherwise degrees.
 * Add support for score=degrees|kilometers|miles



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6077) Add a filter cache

2014-11-26 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226284#comment-14226284
 ] 

Robert Muir commented on LUCENE-6077:
-

This looks great!

Do we really need to default CachingWrapperFilter to a stupid policy?
Is there a better name for the FilterCache.cache() method? It can be a noun or a 
verb, so it's kind of confusing. Maybe doCache would be better?
CachingWrapperFilter's new ctor: can we fix the typo?
FilterCachingPolicy.onCache, can we correct the param name?

 Add a filter cache
 --

 Key: LUCENE-6077
 URL: https://issues.apache.org/jira/browse/LUCENE-6077
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-6077.patch


 Lucene already has filter caching abilities through CachingWrapperFilter, but 
 CachingWrapperFilter requires you to know which filters you want to cache 
 up-front.
 Caching filters is not trivial. If you cache too aggressively, then you slow 
 things down since you need to iterate over all documents that match the 
 filter in order to load it into an in-memory cacheable DocIdSet. On the other 
 hand, if you don't cache at all, you are potentially missing interesting 
 speed-ups on frequently-used filters.
 Something that would be nice would be to have a generic filter cache that 
 would track usage for individual filters and make the decision to cache or 
 not a filter on a given segment based on usage statistics and various 
 heuristics, such as:
  - the overhead to cache the filter (for instance some filters produce 
 DocIdSets that are already cacheable)
  - the cost to build the DocIdSet (the getDocIdSet method is very expensive 
 on some filters such as MultiTermQueryWrapperFilter that potentially need to 
 merge lots of postings lists)
  - the segment we are searching on (flush segments will likely be merged 
 right away so it's probably not worth building a cache on such segments)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6072) Use mock filesystem in tests

2014-11-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226323#comment-14226323
 ] 

ASF subversion and git services commented on LUCENE-6072:
-

Commit 1641833 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1641833 ]

LUCENE-6072: use mockfilesystem in tests

 Use mock filesystem in tests
 

 Key: LUCENE-6072
 URL: https://issues.apache.org/jira/browse/LUCENE-6072
 Project: Lucene - Core
  Issue Type: Test
Reporter: Robert Muir
 Attachments: LUCENE-6072.patch, LUCENE-6072.patch, LUCENE-6072.patch


 We went through the trouble to convert to NIO.2, but we don't take advantage 
 of it in tests...
 Since everything boils down to LuceneTestCase's temp dir (which is just 
 Path), we can wrap the filesystem with useful stuff:
 * detect file handle leaks (better than mockdir: not just index files)
 * act like windows (don't delete open files, case-insensitivity, etc)
 * verbosity (add what is going on to infostream for debugging)
 I prototyped some of this in a patch. Currently it makes a chain like this:
 {code}
 private FileSystem initializeFileSystem() {
   FileSystem fs = FileSystems.getDefault();
   if (LuceneTestCase.VERBOSE) {
     fs = new VerboseFS(fs, new PrintStreamInfoStream(System.out)).getFileSystem(null);
   }
   fs = new LeakFS(fs).getFileSystem(null);
   fs = new WindowsFS(fs).getFileSystem(null);
   return fs.provider().getFileSystem(URI.create("file:///"));
 }
 {code}
 Some things to figure out:
 * I don't think we want to wrap all the time (worry about hiding bugs)
 * it's currently a bit lenient (e.g. these filesystems allow calling toFile, 
 which can escape and allow you to do broken things). But only 2 or 3 tests 
 really need File, so we could fix that.
 * it's currently complicated and messy (I blame the JDK API here, but maybe we 
 can simplify it)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Cosmetic: Getting rid of an extra \n in TFIDFSimilarity.explainScore output

2014-11-26 Thread Vanlerberghe, Luc
TFIDFSimilarity.explainScore currently outputs an annoying (but harmless of 
course) extra \n.

It occurs because the freq argument is included as is in the description of the 
top Explain node,
whereas freq.getValue() is sufficient. The full freq Explain node is included 
as a detail further on anyway...

I attached a patch generated with git, but it's just:
-result.setDescription("score(doc="+doc+",freq="+freq+"), product of:");
+result.setDescription("score(doc="+doc+",freq="+freq.getValue()+"), product of:");

Output like this:

  <lst name="explain">
    <str name="0-764629">
5.5484066 = (MATCH) max of:
  5.5484066 = (MATCH) weight(titreSearch:camus in 4158) [DefaultSimilarity], result of:
    5.5484066 = score(doc=4158,freq=1.0 = termFreq=1.0
), product of:
      0.60149205 = queryWeight, product of:
        9.224405 = idf(docFreq=450, maxDocs=1682636)
        0.065206595 = queryNorm
      9.224405 = fieldWeight in 4158, product of:
        1.0 = tf(freq=1.0), with freq of:
          1.0 = termFreq=1.0
        9.224405 = idf(docFreq=450, maxDocs=1682636)
        1.0 = fieldNorm(doc=4158)
    </str>
  </lst>

becomes:

  <lst name="explain">
    <str name="0-764629">
5.5484066 = (MATCH) max of:
  5.5484066 = (MATCH) weight(titreSearch:camus in 4158) [DefaultSimilarity], result of:
    5.5484066 = score(doc=4158,freq=1.0), product of:
      0.60149205 = queryWeight, product of:
        9.224405 = idf(docFreq=450, maxDocs=1682636)
        0.065206595 = queryNorm
      9.224405 = fieldWeight in 4158, product of:
        1.0 = tf(freq=1.0), with freq of:
          1.0 = termFreq=1.0
        9.224405 = idf(docFreq=450, maxDocs=1682636)
        1.0 = fieldNorm(doc=4158)
    </str>
  </lst>
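
The stray newline comes from concatenating the freq Explanation object itself 
into the description: its toString() is multi-line. A minimal sketch of the 
difference (purely illustrative, assuming Explanation's (float, String) 
constructor):

{code}
import org.apache.lucene.search.Explanation;

// Illustrative only: shows why embedding the Explanation object (instead of its value)
// drags its multi-line toString() into the description string.
class ExplainNewlineSketch {
  static void demo(int doc) {
    Explanation freq = new Explanation(1.0f, "termFreq=1.0");
    String before = "score(doc=" + doc + ",freq=" + freq + "), product of:";            // contains a newline
    String after  = "score(doc=" + doc + ",freq=" + freq.getValue() + "), product of:"; // single line
    System.out.println(before);
    System.out.println(after);
  }
}
{code}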



0001-Cleanup-of-TFIDFSimilarity.explainScore.patch
Description: 0001-Cleanup-of-TFIDFSimilarity.explainScore.patch

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-6666) Dynamic copy fields are considering all dynamic fields, causing a significant performance impact on indexing documents

2014-11-26 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226342#comment-14226342
 ] 

Erick Erickson commented on SOLR-:
--

Liram:

Sorry, it's been on my list for a bit but I haven't gotten to it yet. Over 
Thanksgiving, I _promise_.

Erick

 Dynamic copy fields are considering all dynamic fields, causing a significant 
 performance impact on indexing documents
 --

 Key: SOLR-
 URL: https://issues.apache.org/jira/browse/SOLR-
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis, update
 Environment: Linux, Solr 4.8, Schema with 70 fields and more than 500 
 specific CopyFields for dynamic fields, but without wildcards (the fields are 
 dynamic, the copy directive is not)
Reporter: Liram Vardi
Assignee: Erick Erickson
 Attachments: SOLR-.patch


 Result:
 After applying a fix for this issue, tests which we conducted show more than 
 40 percent improvement on our insertion performance.
 Explanation:
 Using JVM profiler, we found a CPU bottleneck during Solr indexing process. 
 This bottleneck can be found at org.apache.solr.schema.IndexSchema, in the 
 following method, getCopyFieldsList():
 {code:title=getCopyFieldsList() |borderStyle=solid}
 final ListCopyField result = new ArrayList();
 for (DynamicCopy dynamicCopy : dynamicCopyFields) {
   if (dynamicCopy.matches(sourceField)) {
 result.add(new CopyField(getField(sourceField), 
 dynamicCopy.getTargetField(sourceField), dynamicCopy.maxChars));
   }
 }
 ListCopyField fixedCopyFields = copyFieldsMap.get(sourceField);
 if (null != fixedCopyFields) {
   result.addAll(fixedCopyFields);
 }
 {code}
 This function tries to find for an input source field all its copyFields (All 
 its destinations which Solr need to move this field). 
 As you can probably note, the first part of the procedure is the procedure 
 most “expensive” step (takes O( n ) time while N is the size of the 
 dynamicCopyFields group).
 The next part is just a simple hash extraction, which takes O(1) time. 
 Our schema contains over then 500 copyFields but only 70 of then are 
 indexed fields. 
 We also have one dynamic field with  a wildcard ( * ), which catches the 
 rest of the document fields. 
 As you can conclude, we have more than 400 copyFields that are based on this 
 dynamicField but all, except one, are fixed (i.e. does not contain any 
 wildcard).
 From some reason, the copyFields registration procedure defines those 400 
 fields as DynamicCopyField  and then store them in the “dynamicCopyFields” 
 array, 
 This step makes getCopyFieldsList() very expensive (in CPU terms) without any 
 justification: All of those 400 copyFields are not glob and therefore do not 
 need any complex pattern matching to the input field. They all can be store 
 at the fixedCopyFields.
 Only copyFields with asterisks need this special treatment and they are 
 (especially on our case) pretty rare.  
 Therefore, we created a patch which fix this problem by changing the 
 registerCopyField() procedure.
 Test which we conducted show that there is no change in the Indexing results. 
 Moreover, the fix still successfully passes the class unit tests (i.e. 
 IndexSchemaTest.java).




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Cosmetic: Getting rid of an extra \n in TFIDFSimilarity.explainScore output

2014-11-26 Thread Michael McCandless
Thank you for the patch!  I agree that is annoying.

It makes me a little nervous, losing possibly important explanation
about how that freq itself was computed?

E.g. a PhraseQuery will have phraseFreq=X as the explanation for
that freq, telling you this wasn't just a simple term freq ... I
wonder whether other queries want to explain an interesting freq?

Mike McCandless

http://blog.mikemccandless.com


On Wed, Nov 26, 2014 at 10:33 AM, Vanlerberghe, Luc
luc.vanlerber...@bvdinfo.com wrote:
 TFIDFSimilarity.explainScore currently outputs an annoying (but harmless of 
 course) extra \n.

 It occurs because the freq argument is included as is in the description of 
 the top Explain node,
 whereas freq.getValue() is sufficient. The full freq Explain node is included 
 as a detail further on anyway...

 I attached a patch generated with git, but it's just:
 -result.setDescription("score(doc="+doc+",freq="+freq+"), product of:");
 +result.setDescription("score(doc="+doc+",freq="+freq.getValue()+"), product of:");

 Output like this:

   lst name=explain
 str name=0-764629
 5.5484066 = (MATCH) max of:
   5.5484066 = (MATCH) weight(titreSearch:camus in 4158) [DefaultSimilarity], 
 result of:
 5.5484066 = score(doc=4158,freq=1.0 = termFreq=1.0
 ), product of:
   0.60149205 = queryWeight, product of:
 9.224405 = idf(docFreq=450, maxDocs=1682636)
 0.065206595 = queryNorm
   9.224405 = fieldWeight in 4158, product of:
 1.0 = tf(freq=1.0), with freq of:
   1.0 = termFreq=1.0
 9.224405 = idf(docFreq=450, maxDocs=1682636)
 1.0 = fieldNorm(doc=4158)
 /str
   /lst

 becomes:

   lst name=explain
 str name=0-764629
 5.5484066 = (MATCH) max of:
   5.5484066 = (MATCH) weight(titreSearch:camus in 4158) [DefaultSimilarity], 
 result of:
 5.5484066 = score(doc=4158,freq=1.0), product of:
   0.60149205 = queryWeight, product of:
 9.224405 = idf(docFreq=450, maxDocs=1682636)
 0.065206595 = queryNorm
   9.224405 = fieldWeight in 4158, product of:
 1.0 = tf(freq=1.0), with freq of:
   1.0 = termFreq=1.0
 9.224405 = idf(docFreq=450, maxDocs=1682636)
 1.0 = fieldNorm(doc=4158)
 /str
   /lst



 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: Cosmetic: Getting rid of an extra \n in TFIDFSimilarity.explainScore output

2014-11-26 Thread Vanlerberghe, Luc
The freq explanation itself is still included as detail a bit lower in the 
code (line 798 in my version)
so no information gets lost!

See:
   1.0 = termFreq=1.0

Luc

-Original Message-
From: Michael McCandless [mailto:luc...@mikemccandless.com] 
Sent: woensdag 26 november 2014 16:59
To: Lucene/Solr dev; Vanlerberghe, Luc
Subject: Re: Cosmetic: Getting rid of an extra \n in 
TFIDFSimilarity.explainScore output

Thank you for the patch!  I agree that is annoying.

It makes me a little nervous, losing possibly important explanation
about how that freq itself was computed?

E.g. a PhraseQuery will have phraseFreq=X as the explanation for
that freq, telling you this wasn't just a simple term freq ... I
wonder whether other queries want to explain an interesting freq?

Mike McCandless

http://blog.mikemccandless.com


On Wed, Nov 26, 2014 at 10:33 AM, Vanlerberghe, Luc
luc.vanlerber...@bvdinfo.com wrote:
 TFIDFSimilarity.explainScore currently outputs an annoying (but harmless of 
 course) extra \n.

 It occurs because the freq argument is included as is in the description of 
 the top Explain node,
 whereas freq.getValue() is sufficient. The full freq Explain node is included 
 as a detail further on anyway...

 I attached a patch generated with git, but it's just:
 -result.setDescription("score(doc="+doc+",freq="+freq+"), product of:");
 +result.setDescription("score(doc="+doc+",freq="+freq.getValue()+"), product of:");

 Output like this:

   lst name=explain
 str name=0-764629
 5.5484066 = (MATCH) max of:
   5.5484066 = (MATCH) weight(titreSearch:camus in 4158) [DefaultSimilarity], 
 result of:
 5.5484066 = score(doc=4158,freq=1.0 = termFreq=1.0
 ), product of:
   0.60149205 = queryWeight, product of:
 9.224405 = idf(docFreq=450, maxDocs=1682636)
 0.065206595 = queryNorm
   9.224405 = fieldWeight in 4158, product of:
 1.0 = tf(freq=1.0), with freq of:
   1.0 = termFreq=1.0
 9.224405 = idf(docFreq=450, maxDocs=1682636)
 1.0 = fieldNorm(doc=4158)
 /str
   /lst

 becomes:

   lst name=explain
 str name=0-764629
 5.5484066 = (MATCH) max of:
   5.5484066 = (MATCH) weight(titreSearch:camus in 4158) [DefaultSimilarity], 
 result of:
 5.5484066 = score(doc=4158,freq=1.0), product of:
   0.60149205 = queryWeight, product of:
 9.224405 = idf(docFreq=450, maxDocs=1682636)
 0.065206595 = queryNorm
   9.224405 = fieldWeight in 4158, product of:
 1.0 = tf(freq=1.0), with freq of:
   1.0 = termFreq=1.0
 9.224405 = idf(docFreq=450, maxDocs=1682636)
 1.0 = fieldNorm(doc=4158)
 /str
   /lst



 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


[jira] [Commented] (SOLR-5864) Remove previous SolrCore as parameter on reload

2014-11-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226391#comment-14226391
 ] 

ASF subversion and git services commented on SOLR-5864:
---

Commit 1641844 from [~tomasflobbe] in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1641844 ]

SOLR-5864: Deprecated SolrCore.reload(ConfigSet, SolrCore) and added 
SolrCore.reload(ConfigSet)

 Remove previous SolrCore as parameter on reload
 ---

 Key: SOLR-5864
 URL: https://issues.apache.org/jira/browse/SOLR-5864
 Project: Solr
  Issue Type: Improvement
Affects Versions: Trunk
Reporter: Tomás Fernández Löbbe
Priority: Trivial
 Attachments: SOLR-5864.patch, SOLR-5864.patch, SOLR-5864.patch, 
 SOLR-5864.patch


 Currently the reload method is reload(SolrResourceLoader resourceLoader, 
 SolrCore prev), but it is always called with “prev” being the same as 
 “this”:
 core.reload(resourceLoader, core). 
 Frankly, I don’t think it even makes sense to call it in any other way (it 
 would just be to create the first reader with a different core than the one 
 that’s being reloaded?)
 I think we should just remove the SolrCore parameter and let the reload 
 method always reload the core on which it's being called. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5864) Remove previous SolrCore as parameter on reload

2014-11-26 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-5864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe resolved SOLR-5864.
-
   Resolution: Fixed
Fix Version/s: Trunk
   5.0
 Assignee: Tomás Fernández Löbbe

 Remove previous SolrCore as parameter on reload
 ---

 Key: SOLR-5864
 URL: https://issues.apache.org/jira/browse/SOLR-5864
 Project: Solr
  Issue Type: Improvement
Affects Versions: Trunk
Reporter: Tomás Fernández Löbbe
Assignee: Tomás Fernández Löbbe
Priority: Trivial
 Fix For: 5.0, Trunk

 Attachments: SOLR-5864.patch, SOLR-5864.patch, SOLR-5864.patch, 
 SOLR-5864.patch


 Currently the reload method is reload(SolrResourceLoader resourceLoader, 
 SolrCore prev), but it is always called with “prev” being the same as 
 “this”:
 core.reload(resourceLoader, core). 
 Frankly, I don’t think it even makes sense to call it in any other way (it 
 would just be to create the first reader with a different core than the one 
 that’s being reloaded?)
 I think we should just remove the SolrCore parameter and let the reload 
 method always reload the core on which it's being called. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Cosmetic: Getting rid of an extra \n in TFIDFSimilarity.explainScore output

2014-11-26 Thread Michael McCandless
Aha, excellent!  I will commit.  Thank you.

Mike McCandless

http://blog.mikemccandless.com


On Wed, Nov 26, 2014 at 11:04 AM, Vanlerberghe, Luc
luc.vanlerber...@bvdinfo.com wrote:
 The freq explanation itself is still included as detail a bit lower in the 
 code (line 798 in my version)
 so no information gets lost!

 See:
   1.0 = termFreq=1.0

 Luc

 -Original Message-
 From: Michael McCandless [mailto:luc...@mikemccandless.com]
 Sent: woensdag 26 november 2014 16:59
 To: Lucene/Solr dev; Vanlerberghe, Luc
 Subject: Re: Cosmetic: Getting rid of an extra \n in 
 TFIDFSimilarity.explainScore output

 Thank you for the patch!  I agree that is annoying.

 It makes me a little nervous, losing possibly important explanation
 about how that freq itself was computed?

 E.g. a PhraseQuery will have phraseFreq=X as the explanation for
 that freq, telling you this wasn't just a simple term freq ... I
 wonder whether other queries want to explain an interesting freq?

 Mike McCandless

 http://blog.mikemccandless.com


 On Wed, Nov 26, 2014 at 10:33 AM, Vanlerberghe, Luc
 luc.vanlerber...@bvdinfo.com wrote:
 TFIDFSimilarity.explainScore currently outputs an annoying (but harmless of 
 course) extra \n.

 It occurs because the freq argument is included as is in the description of 
 the top Explain node,
 whereas freq.getValue() is sufficient. The full freq Explain node is 
 included as a detail further on anyway...

 I attached a patch generated with git, but it's just:
 -result.setDescription("score(doc="+doc+",freq="+freq+"), product of:");
 +result.setDescription("score(doc="+doc+",freq="+freq.getValue()+"), product of:");

 Output like this:

   lst name=explain
 str name=0-764629
 5.5484066 = (MATCH) max of:
   5.5484066 = (MATCH) weight(titreSearch:camus in 4158) [DefaultSimilarity], 
 result of:
 5.5484066 = score(doc=4158,freq=1.0 = termFreq=1.0
 ), product of:
   0.60149205 = queryWeight, product of:
 9.224405 = idf(docFreq=450, maxDocs=1682636)
 0.065206595 = queryNorm
   9.224405 = fieldWeight in 4158, product of:
 1.0 = tf(freq=1.0), with freq of:
   1.0 = termFreq=1.0
 9.224405 = idf(docFreq=450, maxDocs=1682636)
 1.0 = fieldNorm(doc=4158)
 /str
   /lst

 becomes:

   lst name=explain
 str name=0-764629
 5.5484066 = (MATCH) max of:
   5.5484066 = (MATCH) weight(titreSearch:camus in 4158) [DefaultSimilarity], 
 result of:
 5.5484066 = score(doc=4158,freq=1.0), product of:
   0.60149205 = queryWeight, product of:
 9.224405 = idf(docFreq=450, maxDocs=1682636)
 0.065206595 = queryNorm
   9.224405 = fieldWeight in 4158, product of:
 1.0 = tf(freq=1.0), with freq of:
   1.0 = termFreq=1.0
 9.224405 = idf(docFreq=450, maxDocs=1682636)
 1.0 = fieldNorm(doc=4158)
 /str
   /lst



 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-SmokeRelease-5.x - Build # 215 - Still Failing

2014-11-26 Thread Timothy Potter
I'm working on it ... the problem right now is that ps waux doesn't work the 
same on FreeBSD, so the script isn't getting the info it needs ... ps auxww 
seems to do the trick

On Wed, Nov 26, 2014 at 4:12 AM, Michael McCandless 
luc...@mikemccandless.com wrote:

 Is anyone looking into why the smoke tester can't run Solr's example?
 This has been failing for quite a while, and I thought I saw a commit
 to smoke tester to try to fix it?

 Should we stop trying to test the solr example from the smoke tester?

 Mike McCandless

 http://blog.mikemccandless.com


 On Tue, Nov 25, 2014 at 10:06 PM, Apache Jenkins Server
 jenk...@builds.apache.org wrote:
  Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.x/215/
 
  No tests ran.
 
  Build Log:
  [...truncated 51672 lines...]
  prepare-release-no-sign:
  [mkdir] Created dir:
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist
   [copy] Copying 446 files to
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/lucene
   [copy] Copying 254 files to
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/solr
 [smoker] Java 1.7 JAVA_HOME=/home/jenkins/tools/java/latest1.7
 [smoker] NOTE: output encoding is US-ASCII
 [smoker]
 [smoker] Load release URL
 file:/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/...
 [smoker]
 [smoker] Test Lucene...
 [smoker]   test basics...
 [smoker]   get KEYS
 [smoker] 0.1 MB in 0.01 sec (13.0 MB/sec)
 [smoker]   check changes HTML...
 [smoker]   download lucene-5.0.0-src.tgz...
 [smoker] 27.8 MB in 0.04 sec (681.1 MB/sec)
 [smoker] verify md5/sha1 digests
 [smoker]   download lucene-5.0.0.tgz...
 [smoker] 63.8 MB in 0.09 sec (694.6 MB/sec)
 [smoker] verify md5/sha1 digests
 [smoker]   download lucene-5.0.0.zip...
 [smoker] 73.2 MB in 0.14 sec (526.5 MB/sec)
 [smoker] verify md5/sha1 digests
 [smoker]   unpack lucene-5.0.0.tgz...
 [smoker] verify JAR metadata/identity/no javax.* or java.*
 classes...
 [smoker] test demo with 1.7...
 [smoker]   got 5569 hits for query lucene
 [smoker] checkindex with 1.7...
 [smoker] check Lucene's javadoc JAR
 [smoker]   unpack lucene-5.0.0.zip...
 [smoker] verify JAR metadata/identity/no javax.* or java.*
 classes...
 [smoker] test demo with 1.7...
 [smoker]   got 5569 hits for query lucene
 [smoker] checkindex with 1.7...
 [smoker] check Lucene's javadoc JAR
 [smoker]   unpack lucene-5.0.0-src.tgz...
 [smoker] make sure no JARs/WARs in src dist...
 [smoker] run ant validate
 [smoker] run tests w/ Java 7 and
 testArgs='-Dtests.jettyConnector=Socket -Dtests.multiplier=1
 -Dtests.slow=false'...
 [smoker] test demo with 1.7...
 [smoker]   got 207 hits for query lucene
 [smoker] checkindex with 1.7...
 [smoker] generate javadocs w/ Java 7...
 [smoker]
 [smoker] Crawl/parse...
 [smoker]
 [smoker] Verify...
 [smoker]   confirm all releases have coverage in
 TestBackwardsCompatibility
 [smoker] find all past Lucene releases...
 [smoker] run TestBackwardsCompatibility..
 [smoker] success!
 [smoker]
 [smoker] Test Solr...
 [smoker]   test basics...
 [smoker]   get KEYS
 [smoker] 0.1 MB in 0.00 sec (86.1 MB/sec)
 [smoker]   check changes HTML...
 [smoker]   download solr-5.0.0-src.tgz...
 [smoker] 34.1 MB in 0.04 sec (768.8 MB/sec)
 [smoker] verify md5/sha1 digests
 [smoker]   download solr-5.0.0.tgz...
 [smoker] 146.5 MB in 0.48 sec (302.1 MB/sec)
 [smoker] verify md5/sha1 digests
 [smoker]   download solr-5.0.0.zip...
 [smoker] 152.6 MB in 0.26 sec (598.2 MB/sec)
 [smoker] verify md5/sha1 digests
 [smoker]   unpack solr-5.0.0.tgz...
 [smoker] verify JAR metadata/identity/no javax.* or java.*
 classes...
 [smoker] unpack lucene-5.0.0.tgz...
 [smoker]   **WARNING**: skipping check of
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
 [smoker]   **WARNING**: skipping check of
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
 [smoker] verify WAR metadata/contained JAR identity/no javax.* or
 java.* classes...
 [smoker] unpack lucene-5.0.0.tgz...
 [smoker] copying unpacked distribution for Java 7 ...
 [smoker] test solr example w/ 

[jira] [Commented] (SOLR-6708) Smoke tester couldn't communicate with Solr started using 'bin/solr start'

2014-11-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226426#comment-14226426
 ] 

ASF subversion and git services commented on SOLR-6708:
---

Commit 1641853 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1641853 ]

SOLR-6708: Use ps auxww instead of waux for finding Solr processes to work on 
FreeBSD

 Smoke tester couldn't communicate with Solr started using 'bin/solr start'
 --

 Key: SOLR-6708
 URL: https://issues.apache.org/jira/browse/SOLR-6708
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Steve Rowe
Assignee: Timothy Potter
 Attachments: solr-example.log


 The nightly-smoke target failed on ASF Jenkins 
 [https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.x/208/]: 
 {noformat}
[smoker]   unpack solr-5.0.0.tgz...
[smoker] verify JAR metadata/identity/no javax.* or java.* classes...
[smoker] unpack lucene-5.0.0.tgz...
[smoker]   **WARNING**: skipping check of 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
  it has javax.* classes
[smoker]   **WARNING**: skipping check of 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
  it has javax.* classes
[smoker] verify WAR metadata/contained JAR identity/no javax.* or 
 java.* classes...
[smoker] unpack lucene-5.0.0.tgz...
[smoker] copying unpacked distribution for Java 7 ...
[smoker] test solr example w/ Java 7...
[smoker]   start Solr instance 
 (log=/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/solr-example.log)...
[smoker]   startup done
[smoker] Failed to determine the port of a local Solr instance, cannot 
 create core!
[smoker]   test utf8...
[smoker] 
[smoker] command sh ./exampledocs/test_utf8.sh 
 http://localhost:8983/solr/techproducts; failed:
[smoker] ERROR: Could not curl to Solr - is curl installed? Is Solr not 
 running?
[smoker] 
[smoker] 
[smoker]   stop server using: bin/solr stop -p 8983
[smoker] No process found for Solr node running on port 8983
[smoker] ***WARNING***: Solr instance didn't respond to SIGINT; using 
 SIGKILL now...
[smoker] ***WARNING***: Solr instance didn't respond to SIGKILL; 
 ignoring...
[smoker] Traceback (most recent call last):
[smoker]   File 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py,
  line 1526, in module
[smoker] main()
[smoker]   File 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py,
  line 1471, in main
[smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
 c.is_signed, ' '.join(c.test_args))
[smoker]   File 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py,
  line 1515, in smokeTest
[smoker] unpackAndVerify(java, 'solr', tmpDir, artifact, svnRevision, 
 version, testArgs, baseURL)
[smoker]   File 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py,
  line 616, in unpackAndVerify
[smoker] verifyUnpacked(java, project, artifact, unpackPath, 
 svnRevision, version, testArgs, tmpDir, baseURL)
[smoker]   File 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py,
  line 783, in verifyUnpacked
[smoker] testSolrExample(java7UnpackPath, java.java7_home, False)
[smoker]   File 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py,
  line 888, in testSolrExample
[smoker] run('sh ./exampledocs/test_utf8.sh 
 http://localhost:8983/solr/techproducts', 'utf8.log')
[smoker]   File 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py,
  line 541, in run
[smoker] raise RuntimeError('command %s failed; see log file %s' % 
 (command, logPath))
[smoker] RuntimeError: command sh ./exampledocs/test_utf8.sh 
 http://localhost:8983/solr/techproducts; failed; see log file 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/example/utf8.log
 BUILD FAILED
 

[jira] [Updated] (LUCENE-6077) Add a filter cache

2014-11-26 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6077:
-
Attachment: LUCENE-6077.patch

Updated patch:
 - CachingWrapperFilter now uses a policy that only caches on merged segments 
by default (instead of all segments)
 - applied other suggestions about typos/naming

 Add a filter cache
 --

 Key: LUCENE-6077
 URL: https://issues.apache.org/jira/browse/LUCENE-6077
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-6077.patch, LUCENE-6077.patch


 Lucene already has filter caching abilities through CachingWrapperFilter, but 
 CachingWrapperFilter requires you to know which filters you want to cache 
 up-front.
 Caching filters is not trivial. If you cache too aggressively, then you slow 
 things down since you need to iterate over all documents that match the 
 filter in order to load it into an in-memory cacheable DocIdSet. On the other 
 hand, if you don't cache at all, you are potentially missing interesting 
 speed-ups on frequently-used filters.
 Something that would be nice would be to have a generic filter cache that 
 would track usage for individual filters and make the decision to cache or 
 not a filter on a given segment based on usage statistics and various 
 heuristics, such as:
  - the overhead to cache the filter (for instance some filters produce 
 DocIdSets that are already cacheable)
  - the cost to build the DocIdSet (the getDocIdSet method is very expensive 
 on some filters such as MultiTermQueryWrapperFilter that potentially need to 
 merge lots of postings lists)
  - the segment we are searching on (flush segments will likely be merged 
 right away so it's probably not worth building a cache on such segments)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-5.x #769: POMs out of sync

2014-11-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-5.x/769/

3 tests failed.
FAILED:  
org.apache.solr.hadoop.MorphlineMapperTest.org.apache.solr.hadoop.MorphlineMapperTest

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
at __randomizedtesting.SeedInfo.seed([D026250C5C4AF22E]:0)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.before(TestRuleTemporaryFilesCleanup.java:92)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.before(TestRuleAdapter.java:26)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:35)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.hadoop.MorphlineBasicMiniMRTest.testPathParts

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([B974EB1D1C3AC2]:0)


FAILED:  
org.apache.solr.hadoop.MorphlineBasicMiniMRTest.org.apache.solr.hadoop.MorphlineBasicMiniMRTest

Error Message:
Suite timeout exceeded (= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (= 720 msec).
at __randomizedtesting.SeedInfo.seed([B974EB1D1C3AC2]:0)




Build Log:
[...truncated 53957 lines...]
BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:548: 
The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:200: 
The following error occurred while executing this line:
: Java returned: 1

Total time: 399 minutes 30 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2239 - Failure

2014-11-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2239/

No tests ran.

Build Log:
[...truncated 150 lines...]
[javac] Compiling 146 source files to 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/build/test-framework/classes/java
[javac] 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/test-framework/src/java/org/apache/lucene/mockfile/FilterFileSystem.java:104:
 error: anonymous org.apache.lucene.mockfile.FilterFileSystem$1$1 is not 
abstract and does not override abstract method remove() in Iterator
[javac] return new Iterator<Path>() {
[javac] ^
[javac] 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/test-framework/src/java/org/apache/lucene/mockfile/FilterFileSystem.java:126:
 error: anonymous org.apache.lucene.mockfile.FilterFileSystem$2$1 is not 
abstract and does not override abstract method remove() in Iterator
[javac] return new Iterator<FileStore>() {
[javac]  ^
[javac] 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/test-framework/src/java/org/apache/lucene/mockfile/FilterPath.java:230:
 error: anonymous org.apache.lucene.mockfile.FilterPath$1 is not abstract and 
does not override abstract method remove() in Iterator
[javac] return new Iterator<Path>() {
[javac] ^
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] 3 errors
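
These three errors are the usual Java 7 constraint: Iterator.remove() only became 
a default method in Java 8, so every anonymous Iterator has to implement it 
explicitly. A hedged sketch of the kind of override the test-framework classes 
need (not the actual FilterFileSystem/FilterPath code):

{code}
import java.util.Iterator;
import java.util.NoSuchElementException;

// Sketch only: under Java 7, remove() must be spelled out even for read-only iterators.
class EmptyNamesIterator implements Iterator<String> {
  @Override public boolean hasNext() { return false; }
  @Override public String next() { throw new NoSuchElementException(); }
  @Override public void remove() { throw new UnsupportedOperationException("read-only iterator"); }
}
{code}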

BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:525:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:473:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:61:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/extra-targets.xml:39:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/build.xml:49:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/common-build.xml:765:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/common-build.xml:514:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/common-build.xml:1870:
 Compile failed; see the compiler error output for details.

Total time: 18 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Sending artifact delta relative to Lucene-Solr-Tests-5.x-Java7 #2238
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 464 bytes
Compression is 0.0%
Took 60 ms
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 685 - Still Failing

2014-11-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/685/

No tests ran.

Build Log:
[...truncated 260 lines...]
[javac] Compiling 146 source files to 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/build/test-framework/classes/java
[javac] 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/test-framework/src/java/org/apache/lucene/mockfile/FilterFileSystem.java:104:
 error: anonymous org.apache.lucene.mockfile.FilterFileSystem$1$1 is not 
abstract and does not override abstract method remove() in Iterator
[javac] return new Iterator<Path>() {
[javac] ^
[javac] 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/test-framework/src/java/org/apache/lucene/mockfile/FilterFileSystem.java:126:
 error: anonymous org.apache.lucene.mockfile.FilterFileSystem$2$1 is not 
abstract and does not override abstract method remove() in Iterator
[javac] return new Iterator<FileStore>() {
[javac]  ^
[javac] 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/test-framework/src/java/org/apache/lucene/mockfile/FilterPath.java:230:
 error: anonymous org.apache.lucene.mockfile.FilterPath$1 is not abstract and 
does not override abstract method remove() in Iterator
[javac] return new Iterator<Path>() {
[javac] ^
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] 3 errors

BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/build.xml:532:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/build.xml:473:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/build.xml:61:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/extra-targets.xml:39:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/build.xml:49:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/common-build.xml:765:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/common-build.xml:514:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/common-build.xml:1870:
 Compile failed; see the compiler error output for details.

Total time: 14 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Sending artifact delta relative to Lucene-Solr-NightlyTests-5.x #680
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 464 bytes
Compression is 0.0%
Took 75 ms
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: [JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2239 - Failure

2014-11-26 Thread Robert Muir
I'll take care of this... I suppose it has default method impls in Java 8, so
you have to run with Java 7 to catch this.

Too bad the compiler can't catch it; that's garbage.
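
For reference, a hedged sketch of the kind of change the eventual fix ("fix delegation of remove method") implies: on Java 7 an Iterator implementation has to supply remove() itself, and for filter classes the natural choice is to delegate it rather than rely on the Java 8 default. The class below is a made-up stand-in, not the actual FilterFileSystem/FilterPath code.

{code}
import java.nio.file.Path;
import java.util.Iterator;

// Hypothetical stand-in for the anonymous Iterator<Path> wrappers in the mockfile classes.
final class DelegatingPathIterator implements Iterator<Path> {
  private final Iterator<Path> delegate;

  DelegatingPathIterator(Iterator<Path> delegate) {
    this.delegate = delegate;
  }

  @Override
  public boolean hasNext() {
    return delegate.hasNext();
  }

  @Override
  public Path next() {
    return delegate.next();
  }

  @Override
  public void remove() {
    // Must be implemented explicitly on Java 7; delegating keeps the filter transparent.
    delegate.remove();
  }
}
{code}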

On Wed, Nov 26, 2014 at 12:03 PM, Apache Jenkins Server
jenk...@builds.apache.org wrote:
 Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2239/

 No tests ran.

 Build Log:
 [...truncated 150 lines...]
 [javac] Compiling 146 source files to 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/build/test-framework/classes/java
 [javac] 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/test-framework/src/java/org/apache/lucene/mockfile/FilterFileSystem.java:104:
  error: anonymous org.apache.lucene.mockfile.FilterFileSystem$1$1 is not 
 abstract and does not override abstract method remove() in Iterator
 [javac] return new Iterator<Path>() {
 [javac] ^
 [javac] 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/test-framework/src/java/org/apache/lucene/mockfile/FilterFileSystem.java:126:
  error: anonymous org.apache.lucene.mockfile.FilterFileSystem$2$1 is not 
 abstract and does not override abstract method remove() in Iterator
 [javac] return new Iterator<FileStore>() {
 [javac]  ^
 [javac] 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/test-framework/src/java/org/apache/lucene/mockfile/FilterPath.java:230:
  error: anonymous org.apache.lucene.mockfile.FilterPath$1 is not abstract 
 and does not override abstract method remove() in Iterator
 [javac] return new Iterator<Path>() {
 [javac] ^
 [javac] Note: Some input files use or override a deprecated API.
 [javac] Note: Recompile with -Xlint:deprecation for details.
 [javac] 3 errors

 BUILD FAILED
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:525:
  The following error occurred while executing this line:
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:473:
  The following error occurred while executing this line:
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:61:
  The following error occurred while executing this line:
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/extra-targets.xml:39:
  The following error occurred while executing this line:
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/build.xml:49:
  The following error occurred while executing this line:
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/common-build.xml:765:
  The following error occurred while executing this line:
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/common-build.xml:514:
  The following error occurred while executing this line:
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/common-build.xml:1870:
  Compile failed; see the compiler error output for details.

 Total time: 18 seconds
 Build step 'Invoke Ant' marked build as failure
 Archiving artifacts
 Sending artifact delta relative to Lucene-Solr-Tests-5.x-Java7 #2238
 Archived 1 artifacts
 Archive block size is 32768
 Received 0 blocks and 464 bytes
 Compression is 0.0%
 Took 60 ms
 Recording test results
 Email was triggered for: Failure
 Sending email for trigger: Failure




 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6072) Use mock filesystem in tests

2014-11-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226479#comment-14226479
 ] 

ASF subversion and git services commented on LUCENE-6072:
-

Commit 1641861 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1641861 ]

LUCENE-6072: fix delegation of remove method

 Use mock filesystem in tests
 

 Key: LUCENE-6072
 URL: https://issues.apache.org/jira/browse/LUCENE-6072
 Project: Lucene - Core
  Issue Type: Test
Reporter: Robert Muir
 Attachments: LUCENE-6072.patch, LUCENE-6072.patch, LUCENE-6072.patch


 We went through the trouble to convert to NIO.2, but we don't take advantage 
 of it in tests...
 Since everything boils down to LuceneTestCase's temp dir (which is just 
 Path), we can wrap the filesystem with useful stuff:
 * detect file handle leaks (better than mockdir: not just index files)
 * act like windows (don't delete open files, case-insensitivity, etc)
 * verbosity (add what is going on to infostream for debugging)
 I prototyped some of this in a patch. Currently it makes a chain like this:
 {code}
 private FileSystem initializeFileSystem() {
   FileSystem fs = FileSystems.getDefault();
   if (LuceneTestCase.VERBOSE) {
     fs = new VerboseFS(fs, new PrintStreamInfoStream(System.out)).getFileSystem(null);
   }
   fs = new LeakFS(fs).getFileSystem(null);
   fs = new WindowsFS(fs).getFileSystem(null);
   return fs.provider().getFileSystem(URI.create("file:///"));
 }
 {code}
 Some things to figure out:
 * I don't think we want to wrap all the time (worry about hiding bugs)
 * it's currently a bit lenient (e.g. these filesystems allow calling toFile, 
 which can escape and allow you to do broken things). But only 2 or 3 tests 
 really need File, so we could fix that.
 * it's currently complicated and messy (I blame the JDK API here, but maybe we 
 can simplify it)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6072) Use mock filesystem in tests

2014-11-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226491#comment-14226491
 ] 

ASF subversion and git services commented on LUCENE-6072:
-

Commit 1641862 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1641862 ]

LUCENE-6072: fix delegation of remove method

 Use mock filesystem in tests
 

 Key: LUCENE-6072
 URL: https://issues.apache.org/jira/browse/LUCENE-6072
 Project: Lucene - Core
  Issue Type: Test
Reporter: Robert Muir
 Attachments: LUCENE-6072.patch, LUCENE-6072.patch, LUCENE-6072.patch


 We went through the trouble to convert to NIO.2, but we don't take advantage 
 of it in tests...
 Since everything boils down to LuceneTestCase's temp dir (which is just 
 Path), we can wrap the filesystem with useful stuff:
 * detect file handle leaks (better than mockdir: not just index files)
 * act like windows (don't delete open files, case-insensitivity, etc)
 * verbosity (add what is going on to infostream for debugging)
 I prototyped some of this in a patch. Currently it makes a chain like this:
 {code}
 private FileSystem initializeFileSystem() {
   FileSystem fs = FileSystems.getDefault();
   if (LuceneTestCase.VERBOSE) {
     fs = new VerboseFS(fs, new PrintStreamInfoStream(System.out)).getFileSystem(null);
   }
   fs = new LeakFS(fs).getFileSystem(null);
   fs = new WindowsFS(fs).getFileSystem(null);
   return fs.provider().getFileSystem(URI.create("file:///"));
 }
 {code}
 Some things to figure out:
 * I don't think we want to wrap all the time (worry about hiding bugs)
 * it's currently a bit lenient (e.g. these filesystems allow calling toFile, 
 which can escape and allow you to do broken things). But only 2 or 3 tests 
 really need File, so we could fix that.
 * it's currently complicated and messy (I blame the JDK API here, but maybe we 
 can simplify it)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: [JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2239 - Failure

2014-11-26 Thread Uwe Schindler
Yeah, they changed it to a default method to make implementations easier.

They also added another default method to Iterator, so you can now use it with
a closure, very nice:

iterator.forEachRemaining(item -> System.out.println(item));

The same applies to Iterable: iterable.forEach(item -> System.out.println(item));
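
A self-contained sketch of those two calls, using only plain JDK 8 APIs (nothing Lucene-specific assumed):

{code}
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class ForEachDemo {
  public static void main(String[] args) {
    List<String> items = Arrays.asList("a", "b", "c");

    // Iterator.forEachRemaining is a Java 8 default method, so it works on any Iterator.
    Iterator<String> it = items.iterator();
    it.forEachRemaining(item -> System.out.println(item));

    // The Iterable counterpart.
    items.forEach(item -> System.out.println(item));
  }
}
{code}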

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Robert Muir [mailto:rcm...@gmail.com]
 Sent: Wednesday, November 26, 2014 6:06 PM
 To: dev@lucene.apache.org
 Subject: Re: [JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2239 - Failure
 
 I'll take care of this... I suppose it has default method impls in Java 8, so
 you have to run with Java 7 to catch this.
 
 Too bad the compiler can't catch it; that's garbage.
 
 On Wed, Nov 26, 2014 at 12:03 PM, Apache Jenkins Server
 jenk...@builds.apache.org wrote:
  Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2239/
 
  No tests ran.
 
  Build Log:
  [...truncated 150 lines...]
  [javac] Compiling 146 source files to /usr/home/jenkins/jenkins-
 slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/build/test-
 framework/classes/java
  [javac] /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-
 5.x-Java7/lucene/test-
 framework/src/java/org/apache/lucene/mockfile/FilterFileSystem.java:104:
 error: anonymous org.apache.lucene.mockfile.FilterFileSystem$1$1 is not
 abstract and does not override abstract method remove() in Iterator
  [javac] return new Iterator<Path>() {
  [javac] ^
  [javac] /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-
 5.x-Java7/lucene/test-
 framework/src/java/org/apache/lucene/mockfile/FilterFileSystem.java:126:
 error: anonymous org.apache.lucene.mockfile.FilterFileSystem$2$1 is not
 abstract and does not override abstract method remove() in Iterator
  [javac] return new Iterator<FileStore>() {
  [javac]  ^
  [javac] /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-
 5.x-Java7/lucene/test-
 framework/src/java/org/apache/lucene/mockfile/FilterPath.java:230: error:
 anonymous org.apache.lucene.mockfile.FilterPath$1 is not abstract and
 does not override abstract method remove() in Iterator
  [javac] return new Iterator<Path>() {
  [javac] ^
  [javac] Note: Some input files use or override a deprecated API.
  [javac] Note: Recompile with -Xlint:deprecation for details.
  [javac] 3 errors
 
  BUILD FAILED
  /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-
 Java7/build.xml:525: The following error occurred while executing this line:
  /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-
 Java7/build.xml:473: The following error occurred while executing this line:
  /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-
 Java7/build.xml:61: The following error occurred while executing this line:
  /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-
 Java7/extra-targets.xml:39: The following error occurred while executing this
 line:
  /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-
 Java7/lucene/build.xml:49: The following error occurred while executing this
 line:
  /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-
 Java7/lucene/common-build.xml:765: The following error occurred while
 executing this line:
  /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-
 Java7/lucene/common-build.xml:514: The following error occurred while
 executing this line:
  /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-
 Java7/lucene/common-build.xml:1870: Compile failed; see the compiler error
 output for details.
 
  Total time: 18 seconds
  Build step 'Invoke Ant' marked build as failure Archiving artifacts
  Sending artifact delta relative to Lucene-Solr-Tests-5.x-Java7 #2238
  Archived 1 artifacts Archive block size is 32768 Received 0 blocks and
  464 bytes Compression is 0.0% Took 60 ms Recording test results Email
  was triggered for: Failure Sending email for trigger: Failure
 
 
 
 
  -
  To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For
  additional commands, e-mail: dev-h...@lucene.apache.org
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
 commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6078) testClosingNRTReaderDoesNotCorruptYourIndex fail

2014-11-26 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-6078:
---

 Summary: testClosingNRTReaderDoesNotCorruptYourIndex fail
 Key: LUCENE-6078
 URL: https://issues.apache.org/jira/browse/LUCENE-6078
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir


I haven't had time to dig yet, don't want to lose the seed.

{noformat}
 [junit4] Suite: org.apache.lucene.index.TestIndexWriter
  [junit4]  2 NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testClosingNRTReaderDoesNotCorruptYourIndex 
-Dtests.seed=96987E2DC40CF59 -Dtests.directory=NIOFSDirectory 
-Dtests.locale=ar_YE -Dtests.timezone=Europe/Lisbon -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
  [junit4] ERROR  0.02s J1 | 
TestIndexWriter.testClosingNRTReaderDoesNotCorruptYourIndex 
  [junit4] Throwable #1: java.io.IOException: access denied: 
/home/rmuir/workspace/trunk/lucene/build/core/test/J1/temp/lucene.index.TestIndexWriter-96987E2DC40CF59-001/index-NIOFSDirectory-066/_1.nvd
  [junit4]at 
__randomizedtesting.SeedInfo.seed([96987E2DC40CF59:7EB89313A45B3566]:0)
  [junit4]at 
org.apache.lucene.mockfile.WindowsFS.checkDeleteAccess(WindowsFS.java:106)
  [junit4]at 
org.apache.lucene.mockfile.WindowsFS.delete(WindowsFS.java:114)
  [junit4]at java.nio.file.Files.delete(Files.java:1126)
  [junit4]at 
org.apache.lucene.store.FSDirectory.deleteFile(FSDirectory.java:210)
  [junit4]at 
org.apache.lucene.store.MockDirectoryWrapper.deleteFile(MockDirectoryWrapper.java:530)
  [junit4]at 
org.apache.lucene.store.MockDirectoryWrapper.deleteFile(MockDirectoryWrapper.java:475)
  [junit4]at 
org.apache.lucene.index.TestIndexWriter.testClosingNRTReaderDoesNotCorruptYourIndex(TestIndexWriter.java:2651)
  [junit4]at java.lang.Thread.run(Thread.java:745)
{noformat}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Where is the SVN repository only for Lucene project ?

2014-11-26 Thread Shawn Heisey
On 11/26/2014 2:19 AM, Yosuke Yamatani wrote:
 Hello, I’m Yosuke Yamatani.
 I’m a graduate student at Wakayama University, Japan.
 I study software evolution in OSS projects through the analysis of SVN
 repositories.
 I found the entire ASF repository, but I would like to mirror the SVN
 repository only for your project.
 Could you let me know how to get your repository?

Checking out a branch for a single Apache project is easy -- we do it
all the time when working with the source for the lucene/solr project,
and the procedure for that has been given to you in other replies.

If you need the actual repository, that's probably different.  Is that
*really* what you need, or can you look at the history on svn.apache.org
and various branch checkouts?

I don't know a lot about subversion repos, but I've seen some things on
the infrastructure mailing list and just now I've done some googling,
and come up with the following.  It may not be helpful, but it's what
I've got:

If you simply use svnsync to get all of what you want, you'll find your
IP address banned automatically by scripts that watch for abuse.  You
can only get your IP address un-banned if you promise to not repeat the
actions that got it banned in the first place, and repeated bans will
not be lifted.

Mirroring the whole repo needs to be done by starting with a dump, or
you'll get banned:

http://www.apache.org/dev/version-control.html#mirror

According to what I found at the following URL, it may be possible to
mirror only a subsection of the repo, but it's not very straightforward:

http://svn.haxx.se/users/archive-2011-08/0136.shtml

The last answer on this SO question also talks about only syncing part
of a repo ... but I'm fairly sure that you must still start with the
full dump, or risk getting banned:

http://stackoverflow.com/questions/4303697/copy-parts-of-an-svn-repo-to-another

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6078) testClosingNRTReaderDoesNotCorruptYourIndex fail

2014-11-26 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226662#comment-14226662
 ] 

Michael McCandless commented on LUCENE-6078:


I'll fix: the test already does this:
{noformat}
assumeFalse("this test can't run on Windows", Constants.WINDOWS);
{noformat}

so we also must assume it doesn't get WindowsFS :)
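
A hedged sketch of that kind of guard, using only JUnit's Assume plus a made-up provider-name check (this is not the actual LUCENE-6078 commit, and the heuristic is not a real test-framework helper):

{code}
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;

import org.junit.Assume;
import org.junit.Test;

public class SkipOnWindowsFSExample {

  @Test
  public void skipsWhenWindowsFSIsInTheChain() {
    FileSystem fs = FileSystems.getDefault();
    // Hypothetical heuristic: bail out when a WindowsFS-style mock provider wraps the
    // filesystem, because it refuses to delete files that are still held open.
    Assume.assumeFalse("WindowsFS emulates Windows delete semantics",
        fs.provider().getClass().getSimpleName().contains("WindowsFS"));
    // With the assumption satisfied, the NRT-reader close/delete scenario could run here.
  }
}
{code}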

 testClosingNRTReaderDoesNotCorruptYourIndex fail
 

 Key: LUCENE-6078
 URL: https://issues.apache.org/jira/browse/LUCENE-6078
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir

 I haven't had time to dig yet, don't want to lose the seed.
 {noformat}
  [junit4] Suite: org.apache.lucene.index.TestIndexWriter
   [junit4]  2 NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
 -Dtests.method=testClosingNRTReaderDoesNotCorruptYourIndex 
 -Dtests.seed=96987E2DC40CF59 -Dtests.directory=NIOFSDirectory 
 -Dtests.locale=ar_YE -Dtests.timezone=Europe/Lisbon -Dtests.asserts=true 
 -Dtests.file.encoding=UTF-8
   [junit4] ERROR  0.02s J1 | 
 TestIndexWriter.testClosingNRTReaderDoesNotCorruptYourIndex 
   [junit4] Throwable #1: java.io.IOException: access denied: 
 /home/rmuir/workspace/trunk/lucene/build/core/test/J1/temp/lucene.index.TestIndexWriter-96987E2DC40CF59-001/index-NIOFSDirectory-066/_1.nvd
   [junit4]at 
 __randomizedtesting.SeedInfo.seed([96987E2DC40CF59:7EB89313A45B3566]:0)
   [junit4]at 
 org.apache.lucene.mockfile.WindowsFS.checkDeleteAccess(WindowsFS.java:106)
   [junit4]at 
 org.apache.lucene.mockfile.WindowsFS.delete(WindowsFS.java:114)
   [junit4]at java.nio.file.Files.delete(Files.java:1126)
   [junit4]at 
 org.apache.lucene.store.FSDirectory.deleteFile(FSDirectory.java:210)
   [junit4]at 
 org.apache.lucene.store.MockDirectoryWrapper.deleteFile(MockDirectoryWrapper.java:530)
   [junit4]at 
 org.apache.lucene.store.MockDirectoryWrapper.deleteFile(MockDirectoryWrapper.java:475)
   [junit4]at 
 org.apache.lucene.index.TestIndexWriter.testClosingNRTReaderDoesNotCorruptYourIndex(TestIndexWriter.java:2651)
   [junit4]at java.lang.Thread.run(Thread.java:745)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6078) testClosingNRTReaderDoesNotCorruptYourIndex fail

2014-11-26 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-6078.

   Resolution: Fixed
Fix Version/s: Trunk
   5.0

 testClosingNRTReaderDoesNotCorruptYourIndex fail
 

 Key: LUCENE-6078
 URL: https://issues.apache.org/jira/browse/LUCENE-6078
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Fix For: 5.0, Trunk


 I haven't had time to dig yet, don't want to lose the seed.
 {noformat}
  [junit4] Suite: org.apache.lucene.index.TestIndexWriter
   [junit4]  2 NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
 -Dtests.method=testClosingNRTReaderDoesNotCorruptYourIndex 
 -Dtests.seed=96987E2DC40CF59 -Dtests.directory=NIOFSDirectory 
 -Dtests.locale=ar_YE -Dtests.timezone=Europe/Lisbon -Dtests.asserts=true 
 -Dtests.file.encoding=UTF-8
   [junit4] ERROR  0.02s J1 | 
 TestIndexWriter.testClosingNRTReaderDoesNotCorruptYourIndex 
   [junit4] Throwable #1: java.io.IOException: access denied: 
 /home/rmuir/workspace/trunk/lucene/build/core/test/J1/temp/lucene.index.TestIndexWriter-96987E2DC40CF59-001/index-NIOFSDirectory-066/_1.nvd
   [junit4]at 
 __randomizedtesting.SeedInfo.seed([96987E2DC40CF59:7EB89313A45B3566]:0)
   [junit4]at 
 org.apache.lucene.mockfile.WindowsFS.checkDeleteAccess(WindowsFS.java:106)
   [junit4]at 
 org.apache.lucene.mockfile.WindowsFS.delete(WindowsFS.java:114)
   [junit4]at java.nio.file.Files.delete(Files.java:1126)
   [junit4]at 
 org.apache.lucene.store.FSDirectory.deleteFile(FSDirectory.java:210)
   [junit4]at 
 org.apache.lucene.store.MockDirectoryWrapper.deleteFile(MockDirectoryWrapper.java:530)
   [junit4]at 
 org.apache.lucene.store.MockDirectoryWrapper.deleteFile(MockDirectoryWrapper.java:475)
   [junit4]at 
 org.apache.lucene.index.TestIndexWriter.testClosingNRTReaderDoesNotCorruptYourIndex(TestIndexWriter.java:2651)
   [junit4]at java.lang.Thread.run(Thread.java:745)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6078) testClosingNRTReaderDoesNotCorruptYourIndex fail

2014-11-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226721#comment-14226721
 ] 

ASF subversion and git services commented on LUCENE-6078:
-

Commit 1641902 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1641902 ]

LUCENE-6078: disable this test if get WindowsFS

 testClosingNRTReaderDoesNotCorruptYourIndex fail
 

 Key: LUCENE-6078
 URL: https://issues.apache.org/jira/browse/LUCENE-6078
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Fix For: 5.0, Trunk


 I haven't had time to dig yet, don't want to lose the seed.
 {noformat}
  [junit4] Suite: org.apache.lucene.index.TestIndexWriter
   [junit4]  2 NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
 -Dtests.method=testClosingNRTReaderDoesNotCorruptYourIndex 
 -Dtests.seed=96987E2DC40CF59 -Dtests.directory=NIOFSDirectory 
 -Dtests.locale=ar_YE -Dtests.timezone=Europe/Lisbon -Dtests.asserts=true 
 -Dtests.file.encoding=UTF-8
   [junit4] ERROR  0.02s J1 | 
 TestIndexWriter.testClosingNRTReaderDoesNotCorruptYourIndex 
   [junit4] Throwable #1: java.io.IOException: access denied: 
 /home/rmuir/workspace/trunk/lucene/build/core/test/J1/temp/lucene.index.TestIndexWriter-96987E2DC40CF59-001/index-NIOFSDirectory-066/_1.nvd
   [junit4]at 
 __randomizedtesting.SeedInfo.seed([96987E2DC40CF59:7EB89313A45B3566]:0)
   [junit4]at 
 org.apache.lucene.mockfile.WindowsFS.checkDeleteAccess(WindowsFS.java:106)
   [junit4]at 
 org.apache.lucene.mockfile.WindowsFS.delete(WindowsFS.java:114)
   [junit4]at java.nio.file.Files.delete(Files.java:1126)
   [junit4]at 
 org.apache.lucene.store.FSDirectory.deleteFile(FSDirectory.java:210)
   [junit4]at 
 org.apache.lucene.store.MockDirectoryWrapper.deleteFile(MockDirectoryWrapper.java:530)
   [junit4]at 
 org.apache.lucene.store.MockDirectoryWrapper.deleteFile(MockDirectoryWrapper.java:475)
   [junit4]at 
 org.apache.lucene.index.TestIndexWriter.testClosingNRTReaderDoesNotCorruptYourIndex(TestIndexWriter.java:2651)
   [junit4]at java.lang.Thread.run(Thread.java:745)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6078) testClosingNRTReaderDoesNotCorruptYourIndex fail

2014-11-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226723#comment-14226723
 ] 

ASF subversion and git services commented on LUCENE-6078:
-

Commit 1641903 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1641903 ]

LUCENE-6078: disable this test if get WindowsFS

 testClosingNRTReaderDoesNotCorruptYourIndex fail
 

 Key: LUCENE-6078
 URL: https://issues.apache.org/jira/browse/LUCENE-6078
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Fix For: 5.0, Trunk


 I haven't had time to dig yet, don't want to lose the seed.
 {noformat}
  [junit4] Suite: org.apache.lucene.index.TestIndexWriter
   [junit4]  2 NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
 -Dtests.method=testClosingNRTReaderDoesNotCorruptYourIndex 
 -Dtests.seed=96987E2DC40CF59 -Dtests.directory=NIOFSDirectory 
 -Dtests.locale=ar_YE -Dtests.timezone=Europe/Lisbon -Dtests.asserts=true 
 -Dtests.file.encoding=UTF-8
   [junit4] ERROR  0.02s J1 | 
 TestIndexWriter.testClosingNRTReaderDoesNotCorruptYourIndex 
   [junit4] Throwable #1: java.io.IOException: access denied: 
 /home/rmuir/workspace/trunk/lucene/build/core/test/J1/temp/lucene.index.TestIndexWriter-96987E2DC40CF59-001/index-NIOFSDirectory-066/_1.nvd
   [junit4]at 
 __randomizedtesting.SeedInfo.seed([96987E2DC40CF59:7EB89313A45B3566]:0)
   [junit4]at 
 org.apache.lucene.mockfile.WindowsFS.checkDeleteAccess(WindowsFS.java:106)
   [junit4]at 
 org.apache.lucene.mockfile.WindowsFS.delete(WindowsFS.java:114)
   [junit4]at java.nio.file.Files.delete(Files.java:1126)
   [junit4]at 
 org.apache.lucene.store.FSDirectory.deleteFile(FSDirectory.java:210)
   [junit4]at 
 org.apache.lucene.store.MockDirectoryWrapper.deleteFile(MockDirectoryWrapper.java:530)
   [junit4]at 
 org.apache.lucene.store.MockDirectoryWrapper.deleteFile(MockDirectoryWrapper.java:475)
   [junit4]at 
 org.apache.lucene.index.TestIndexWriter.testClosingNRTReaderDoesNotCorruptYourIndex(TestIndexWriter.java:2651)
   [junit4]at java.lang.Thread.run(Thread.java:745)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5524) Elias-Fano sequence also on BytesRef

2014-11-26 Thread Paul Elschot (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Elschot updated LUCENE-5524:
-
Attachment: LUCENE-5524-20141126.patch

Update to trunk of today

 Elias-Fano sequence also on BytesRef
 

 Key: LUCENE-5524
 URL: https://issues.apache.org/jira/browse/LUCENE-5524
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/other
Reporter: Paul Elschot
Priority: Minor
 Attachments: LUCENE-5524-20141126.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5627) Positional joins

2014-11-26 Thread Paul Elschot (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Elschot updated LUCENE-5627:
-
Attachment: LUCENE-5627-20141126.patch

Update to trunk of today, depends on LUCENE-5524 of today

 Positional joins
 

 Key: LUCENE-5627
 URL: https://issues.apache.org/jira/browse/LUCENE-5627
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Paul Elschot
Priority: Minor
 Attachments: LUCENE-5627-20141126.patch


 Prototype of analysis and search for labeled fragments



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: IntelliJ build

2014-11-26 Thread Ramkumar R. Aiyengar
Another thing I found out after a bit of digging is that the language
level setting for the Project SDK had to be set to 8; otherwise compilation
fails, because we are already making use of the default-methods feature.
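
A minimal illustration (not from the Lucene/Solr sources) of why that setting matters: an interface with a default method only compiles when the language level is 8 or higher.

{code}
// Compiles only with -source 8 / IntelliJ language level 8; under 7 the
// "default" keyword in an interface body is a syntax error.
interface Greeter {
  default String greet(String name) {
    return "Hello, " + name;
  }
}
{code}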
On 25 Nov 2014 03:38, david.w.smi...@gmail.com david.w.smi...@gmail.com
wrote:

 On trunk I cleaned and re-created my IntelliJ based build (ant clean-idea,
 idea).  IntelliJ didn’t get the memo about Java 8 so I changed that
 (locally).  Then I found that the Solr velocity contrib couldn’t resolve a
 ResourceLoader class in analysis-common.  So I simply checked the “export”
 checkbox on analysis-common from the Solr-core module, and Solr-core is a
 dependency of velocity, and thus it can resolve it.  Export is synonymous
 with transitive resolution.  Now it compiles locally.  It seems like an odd
 thing to go wrong.  Java 8 I expected.

 So if any IntelliJ user has run into issues lately, maybe sharing my
 experience will help.  I should commit the changes but I’ll wait for a
 reply.

 I think the “Export” (transitive resolution) feature could allow us to
 simplify some of the dependency management quite a bit within IntelliJ so
 that it may need less maintenance.

 ~ David Smiley
 Freelance Apache Lucene/Solr Search Consultant/Developer
 http://www.linkedin.com/in/davidwsmiley



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2240 - Still Failing

2014-11-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2240/

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandler

Error Message:
file handle leaks: 
[SeekableByteChannel(/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/test/J2/temp/solr.handler.TestReplicationHandler-3809307807E7D9EF-001/index-SimpleFSDirectory-049/replication.properties),
 
SeekableByteChannel(/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/test/J2/temp/solr.handler.TestReplicationHandler-3809307807E7D9EF-001/index-SimpleFSDirectory-028/replication.properties),
 
SeekableByteChannel(/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/test/J2/temp/solr.handler.TestReplicationHandler-3809307807E7D9EF-001/index-SimpleFSDirectory-007/replication.properties),
 
SeekableByteChannel(/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/test/J2/temp/solr.handler.TestReplicationHandler-3809307807E7D9EF-001/index-SimpleFSDirectory-034/replication.properties),
 
SeekableByteChannel(/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/test/J2/temp/solr.handler.TestReplicationHandler-3809307807E7D9EF-001/index-SimpleFSDirectory-031/replication.properties),
 
SeekableByteChannel(/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/test/J2/temp/solr.handler.TestReplicationHandler-3809307807E7D9EF-001/index-SimpleFSDirectory-037/replication.properties),
 
SeekableByteChannel(/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/test/J2/temp/solr.handler.TestReplicationHandler-3809307807E7D9EF-001/index-SimpleFSDirectory-004/replication.properties),
 
SeekableByteChannel(/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/test/J2/temp/solr.handler.TestReplicationHandler-3809307807E7D9EF-001/index-SimpleFSDirectory-059/replication.properties),
 
SeekableByteChannel(/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/test/J2/temp/solr.handler.TestReplicationHandler-3809307807E7D9EF-001/index-SimpleFSDirectory-021/replication.properties)]

Stack Trace:
java.lang.RuntimeException: file handle leaks: 
[SeekableByteChannel(/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/test/J2/temp/solr.handler.TestReplicationHandler-3809307807E7D9EF-001/index-SimpleFSDirectory-049/replication.properties),
 
SeekableByteChannel(/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/test/J2/temp/solr.handler.TestReplicationHandler-3809307807E7D9EF-001/index-SimpleFSDirectory-028/replication.properties),
 
SeekableByteChannel(/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/test/J2/temp/solr.handler.TestReplicationHandler-3809307807E7D9EF-001/index-SimpleFSDirectory-007/replication.properties),
 
SeekableByteChannel(/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/test/J2/temp/solr.handler.TestReplicationHandler-3809307807E7D9EF-001/index-SimpleFSDirectory-034/replication.properties),
 
SeekableByteChannel(/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/test/J2/temp/solr.handler.TestReplicationHandler-3809307807E7D9EF-001/index-SimpleFSDirectory-031/replication.properties),
 
SeekableByteChannel(/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/test/J2/temp/solr.handler.TestReplicationHandler-3809307807E7D9EF-001/index-SimpleFSDirectory-037/replication.properties),
 
SeekableByteChannel(/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/test/J2/temp/solr.handler.TestReplicationHandler-3809307807E7D9EF-001/index-SimpleFSDirectory-004/replication.properties),
 
SeekableByteChannel(/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/test/J2/temp/solr.handler.TestReplicationHandler-3809307807E7D9EF-001/index-SimpleFSDirectory-059/replication.properties),
 
SeekableByteChannel(/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/test/J2/temp/solr.handler.TestReplicationHandler-3809307807E7D9EF-001/index-SimpleFSDirectory-021/replication.properties)]
at __randomizedtesting.SeedInfo.seed([3809307807E7D9EF]:0)
at org.apache.lucene.mockfile.LeakFS.onClose(LeakFS.java:64)
at 
org.apache.lucene.mockfile.FilterFileSystem.close(FilterFileSystem.java:77)
at 
org.apache.lucene.mockfile.FilterFileSystem.close(FilterFileSystem.java:78)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:179)
at 

[jira] [Commented] (SOLR-4509) Disable HttpClient stale check for performance and fewer spurious connection errors.

2014-11-26 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226842#comment-14226842
 ] 

Mark Miller commented on SOLR-4509:
---

I've got a patch coming that tries to extend this to all our http client usage
- that ended up being a fairly long thread to pull on. I'll post a progress
patch soon, but there are still some things to address.

 Disable HttpClient stale check for performance and fewer spurious connection 
 errors.
 

 Key: SOLR-4509
 URL: https://issues.apache.org/jira/browse/SOLR-4509
 Project: Solr
  Issue Type: Improvement
  Components: search
 Environment: 5 node SmartOS cluster (all nodes living in same global 
 zone - i.e. same physical machine)
Reporter: Ryan Zezeski
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: IsStaleTime.java, SOLR-4509-4_4_0.patch, 
 SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
 baremetal-stale-nostale-med-latency.dat, 
 baremetal-stale-nostale-med-latency.svg, 
 baremetal-stale-nostale-throughput.dat, baremetal-stale-nostale-throughput.svg


 By disabling the Apache HTTP Client stale check I've witnessed a 2-4x 
 increase in throughput and reduction of over 100ms.  This patch was made in 
 the context of a project I'm leading, called Yokozuna, which relies on 
 distributed search.
 Here's the patch on Yokozuna: https://github.com/rzezeski/yokozuna/pull/26
 Here's a write-up I did on my findings: 
 http://www.zinascii.com/2013/solr-distributed-search-and-the-stale-check.html
 I'm happy to answer any questions or make changes to the patch to make it 
 acceptable.
 ReviewBoard: https://reviews.apache.org/r/28393/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6234) Scoring modes for query time join

2014-11-26 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-6234:
---
Fix Version/s: (was: 4.10.3)
   (was: 5.0)

 Scoring modes for query time join 
 --

 Key: SOLR-6234
 URL: https://issues.apache.org/jira/browse/SOLR-6234
 Project: Solr
  Issue Type: New Feature
  Components: query parsers
Affects Versions: 4.10.3, Trunk
Reporter: Mikhail Khludnev
  Labels: features, patch, test
 Attachments: SOLR-6234.patch, lucene-join-solr-query-parser-0.0.2.zip


 it adds {{scorejoin}} query parser which calls Lucene's JoinUtil underneath. 
 It supports:
 - {{score=none|avg|max|total}} local param (passed as ScoreMode to JoinUtil), 
 also 
 - supports {{b=100}} param to pass {{Query.setBoost()}}.
 So far
 - -it always passes {{multipleValuesPerDocument=true}}- 
 {{multiVals=true|false}} is introduced 
 - -it doesn't cover cross core join case,- it covers cross-core join, but 
 rather opportunistically. I just can't find a multicore test case in the Solr 
 tests; I'd appreciate it if you could point me to one. 
 - -I attach standalone plugin project, let me know if somebody interested, I 
 convert it into the proper Solr codebase patch. Also please mention the 
 blockers!- done. thanks for your attitude!
 - so far it joins string and multivalue string fields (Sorted, SortedSet, 
 Binary), but not Numerics DVs. follow-up LUCENE-5868  
 Note: the development of this patch was sponsored by an anonymous contributor 
 and approved for release under Apache License.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4509) Disable HttpClient stale check for performance and fewer spurious connection errors.

2014-11-26 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226858#comment-14226858
 ] 

Mark Miller commented on SOLR-4509:
---

bq. given that any check is imperfect

Yeah, I was more referring to the fact that the whole thing kind of seems like
a bad bug to me - except that in pure HTTP, a retry is often fine, and I guess that
keeps it from being a flat-out buggy situation. If you turn off retries, it
does seem like a buggy implementation, though a tough one to solve generally
for HttpClient I guess. Even with Solr, I'm not super happy that you have to line up
the server and client idle settings reasonably, but it appears to be the best we
can do.

I guess in my mind, a non-imperfect implementation would not have this built-in
'random race condition fail'.
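
For context, a minimal sketch of disabling the stale check via the HttpClient 4.3-style RequestConfig API; this only illustrates the trade-off being discussed, it is not Solr's actual HttpClientUtil wiring:

{code}
import org.apache.http.client.config.RequestConfig;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class NoStaleCheckClientFactory {
  public static CloseableHttpClient create() {
    // Skip the per-request stale connection check (the source of the latency hit).
    RequestConfig config = RequestConfig.custom()
        .setStaleConnectionCheckEnabled(false)
        .build();
    // With the check off, the client's idle/keep-alive handling has to be lined up
    // with the server's idle timeout, or requests can land on half-closed connections.
    return HttpClients.custom()
        .setDefaultRequestConfig(config)
        .build();
  }
}
{code}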

 Disable HttpClient stale check for performance and fewer spurious connection 
 errors.
 

 Key: SOLR-4509
 URL: https://issues.apache.org/jira/browse/SOLR-4509
 Project: Solr
  Issue Type: Improvement
  Components: search
 Environment: 5 node SmartOS cluster (all nodes living in same global 
 zone - i.e. same physical machine)
Reporter: Ryan Zezeski
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: IsStaleTime.java, SOLR-4509-4_4_0.patch, 
 SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
 baremetal-stale-nostale-med-latency.dat, 
 baremetal-stale-nostale-med-latency.svg, 
 baremetal-stale-nostale-throughput.dat, baremetal-stale-nostale-throughput.svg


 By disabling the Apache HTTP Client stale check I've witnessed a 2-4x 
 increase in throughput and reduction of over 100ms.  This patch was made in 
 the context of a project I'm leading, called Yokozuna, which relies on 
 distributed search.
 Here's the patch on Yokozuna: https://github.com/rzezeski/yokozuna/pull/26
 Here's a write-up I did on my findings: 
 http://www.zinascii.com/2013/solr-distributed-search-and-the-stale-check.html
 I'm happy to answer any questions or make changes to the patch to make it 
 acceptable.
 ReviewBoard: https://reviews.apache.org/r/28393/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: IntelliJ build

2014-11-26 Thread Steve Rowe
David,

I’d rather not go down the transitive route, because it would introduce 
misalignments with the Ant build, and because unwanted transitive deps could 
improperly influence the IntelliJ build.  But if you feel strongly about it, go 
ahead: -0.

Thanks for working on it.

Steve

 On Nov 24, 2014, at 10:37 PM, david.w.smi...@gmail.com wrote:
 
 On trunk I cleaned and re-created my IntelliJ based build (ant clean-idea, 
 idea).  IntelliJ didn’t get the memo about Java 8 so I changed that 
 (locally).  Then I found that the Solr velocity contrib couldn’t resolve a 
 ResourceLoader class in analysis-common.  So I simply checked the “export” 
 checkbox on analysis-comon from the Solr-core module, and Solr-core is a 
 dependency of velocity, and this it can resolve it.  Export is synonymous 
 with transitive resolution.  Now it compiles locally.  It seems like an odd 
 thing to go wrong.  Java 8 I expected.
 
 So if any IntelliJ user has run into issues lately, maybe sharing my 
 experience will help.  I should commit the changes but I’ll wait for a reply.
 
 I think the “Export” (transitive resolution) feature could allow us to 
 simplify some of the dependency management quite a bit within IntelliJ so 
 that it may need less maintenance.
 
 ~ David Smiley
 Freelance Apache Lucene/Solr Search Consultant/Developer
 http://www.linkedin.com/in/davidwsmiley


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6795) distrib.singlePass returns score even though not asked for

2014-11-26 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reassigned SOLR-6795:
---

Assignee: Shalin Shekhar Mangar

 distrib.singlePass returns score even though not asked for
 --

 Key: SOLR-6795
 URL: https://issues.apache.org/jira/browse/SOLR-6795
 Project: Solr
  Issue Type: Bug
  Components: multicore, search
Affects Versions: 5.0
Reporter: Per Steffensen
Assignee: Shalin Shekhar Mangar
  Labels: distributed_search, search
 Attachments: fix.patch, test_that_reveals_the_problem.patch


 If I pass distrib.singlePass in a request and do not ask for score back (fl 
 does not include score) it will return the score back anyway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 1916 - Still Failing!

2014-11-26 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1916/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseParallelGC (asserts: true)

1 tests failed.
FAILED:  
org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.testDistribSearch

Error Message:
There are still nodes recoverying - waited for 30 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 30 
seconds
at 
__randomizedtesting.SeedInfo.seed([E5DE0983470F5CE5:6438879B30503CD9]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:178)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:840)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1459)
at 
org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.doTest(DistribDocExpirationUpdateProcessorTest.java:79)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[jira] [Commented] (SOLR-6796) distrib.singlePass does not return correct set of fields for multi-fl-parameter requests

2014-11-26 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226909#comment-14226909
 ] 

Shalin Shekhar Mangar commented on SOLR-6796:
-

Thanks Per. I've assigned the other issue to myself too. I'm travelling today 
and tomorrow but I will review and commit the fixes by the weekend.

 distrib.singlePass does not return correct set of fields for 
 multi-fl-parameter requests
 

 Key: SOLR-6796
 URL: https://issues.apache.org/jira/browse/SOLR-6796
 Project: Solr
  Issue Type: Bug
  Components: multicore, search
Affects Versions: 5.0
Reporter: Per Steffensen
Assignee: Shalin Shekhar Mangar
  Labels: distributed_search, search
 Attachments: fix.patch, fix.patch, fix.patch, 
 test_that_reveals_the_problem.patch


 If I pass distrib.singlePass in a request that also has two fl-parameters, in 
 some cases, I will not get the expected set of fields back for the returned 
 documents



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6796) distrib.singlePass does not return correct set of fields for multi-fl-parameter requests

2014-11-26 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226930#comment-14226930
 ] 

Yonik Seeley commented on SOLR-6796:


Both the original code and the patches seem more complex than needed...
why is it necessary to rebuild the fl parameter rather than just append another
(it's multi-valued)?
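
A hedged sketch of the simpler approach being suggested, using SolrJ's ModifiableSolrParams (illustrative only, not the SOLR-6796 patch itself):

{code}
import java.util.Arrays;

import org.apache.solr.common.params.ModifiableSolrParams;

public class AppendFlExample {
  public static void main(String[] args) {
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.add("fl", "id,title");    // first user-supplied fl parameter
    params.add("fl", "popularity");  // second user-supplied fl parameter

    // fl is multi-valued, so an internally required field can simply be appended
    // instead of parsing and rebuilding the existing values.
    params.add("fl", "score");

    System.out.println(Arrays.toString(params.getParams("fl")));
  }
}
{code}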

 distrib.singlePass does not return correct set of fields for 
 multi-fl-parameter requests
 

 Key: SOLR-6796
 URL: https://issues.apache.org/jira/browse/SOLR-6796
 Project: Solr
  Issue Type: Bug
  Components: multicore, search
Affects Versions: 5.0
Reporter: Per Steffensen
Assignee: Shalin Shekhar Mangar
  Labels: distributed_search, search
 Attachments: fix.patch, fix.patch, fix.patch, 
 test_that_reveals_the_problem.patch


 If I pass distrib.singlePass in a request that also has two fl-parameters, in 
 some cases, I will not get the expected set of fields back for the returned 
 documents



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6793) ReplicationHandler does not destroy all of it's created SnapPullers

2014-11-26 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226971#comment-14226971
 ] 

Ramkumar Aiyengar commented on SOLR-6793:
-

+1 for the current patch, it does fix stuff, but in {{doFetch}}, shouldn't the 
{{finally}} block have the same cleanup?

FWIW, this entire {{(temp)SnapPuller}} stuff is pretty messy, took me quite a 
bit of time when I first tried to get my head around it. Should the scheduling 
functionality in {{SnapPuller}} really move to {{ReplicationHandler}}?

 ReplicationHandler does not destroy all of it's created SnapPullers
 ---

 Key: SOLR-6793
 URL: https://issues.apache.org/jira/browse/SOLR-6793
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6793.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-11-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226980#comment-14226980
 ] 

ASF subversion and git services commented on SOLR-3619:
---

Commit 1641961 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1641961 ]

SOLR-3619: clone configsets when starting examples or creating cores instead of 
using configsets directly; selectively clone the server directory when creating 
node1 for the cloud example; fix script error reporting

 Rename 'example' dir to 'server' and pull examples into an 'examples' 
 directory
 ---

 Key: SOLR-3619
 URL: https://issues.apache.org/jira/browse/SOLR-3619
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Timothy Potter
 Fix For: 5.0, Trunk

 Attachments: SOLR-3619.patch, SOLR-3619.patch, SOLR-3619.patch, 
 managed-schema, server-name-layout.png, solrconfig.xml






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6708) Smoke tester couldn't communicate with Solr started using 'bin/solr start'

2014-11-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226982#comment-14226982
 ] 

ASF subversion and git services commented on SOLR-6708:
---

Commit 1641963 from [~thelabdude] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1641963 ]

SOLR-6708: Use ps auxww instead of waux for finding Solr processes to work on 
FreeBSD

 Smoke tester couldn't communicate with Solr started using 'bin/solr start'
 --

 Key: SOLR-6708
 URL: https://issues.apache.org/jira/browse/SOLR-6708
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Steve Rowe
Assignee: Timothy Potter
 Attachments: solr-example.log


 The nightly-smoke target failed on ASF Jenkins 
 [https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.x/208/]: 
 {noformat}
[smoker]   unpack solr-5.0.0.tgz...
[smoker] verify JAR metadata/identity/no javax.* or java.* classes...
[smoker] unpack lucene-5.0.0.tgz...
[smoker]   **WARNING**: skipping check of 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
  it has javax.* classes
[smoker]   **WARNING**: skipping check of 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
  it has javax.* classes
[smoker] verify WAR metadata/contained JAR identity/no javax.* or 
 java.* classes...
[smoker] unpack lucene-5.0.0.tgz...
[smoker] copying unpacked distribution for Java 7 ...
[smoker] test solr example w/ Java 7...
[smoker]   start Solr instance 
 (log=/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/solr-example.log)...
[smoker]   startup done
[smoker] Failed to determine the port of a local Solr instance, cannot 
 create core!
[smoker]   test utf8...
[smoker] 
[smoker] command sh ./exampledocs/test_utf8.sh 
 http://localhost:8983/solr/techproducts; failed:
[smoker] ERROR: Could not curl to Solr - is curl installed? Is Solr not 
 running?
[smoker] 
[smoker] 
[smoker]   stop server using: bin/solr stop -p 8983
[smoker] No process found for Solr node running on port 8983
[smoker] ***WARNING***: Solr instance didn't respond to SIGINT; using 
 SIGKILL now...
[smoker] ***WARNING***: Solr instance didn't respond to SIGKILL; 
 ignoring...
[smoker] Traceback (most recent call last):
[smoker]   File 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py,
  line 1526, in module
[smoker] main()
[smoker]   File 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py,
  line 1471, in main
[smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
 c.is_signed, ' '.join(c.test_args))
[smoker]   File 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py,
  line 1515, in smokeTest
[smoker] unpackAndVerify(java, 'solr', tmpDir, artifact, svnRevision, 
 version, testArgs, baseURL)
[smoker]   File 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py,
  line 616, in unpackAndVerify
[smoker] verifyUnpacked(java, project, artifact, unpackPath, 
 svnRevision, version, testArgs, tmpDir, baseURL)
[smoker]   File 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py,
  line 783, in verifyUnpacked
[smoker] testSolrExample(java7UnpackPath, java.java7_home, False)
[smoker]   File 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py,
  line 888, in testSolrExample
[smoker] run('sh ./exampledocs/test_utf8.sh 
 http://localhost:8983/solr/techproducts', 'utf8.log')
[smoker]   File 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py,
  line 541, in run
[smoker] raise RuntimeError('command %s failed; see log file %s' % 
 (command, logPath))
[smoker] RuntimeError: command sh ./exampledocs/test_utf8.sh 
 http://localhost:8983/solr/techproducts; failed; see log file 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/example/utf8.log
 BUILD FAILED
 

[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-11-26 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226995#comment-14226995
 ] 

Timothy Potter commented on SOLR-3619:
--

I've addressed most of [~arafalov]'s concerns in the latest commit. However, I 
didn't create the example cores in a solr home under example (as suggested) 
because what happens if the user does the following?

{code}
bin/solr -e schemaless
bin/solr create_core -n my_core
{code}

Now {{my_core}} lives under {{example/solr/my_core}} vs. 
{{server/solr/my_core}}, which is exactly where we were before this ticket - 
setting up the user to create non-example stuff under examples! In other words, 
we can't assume the user isn't going to create their own cores after starting 
up an example so it doesn't make sense to me to create things under a different 
solr home (other than the default server/solr). Moreover, once we start using a 
different solr home, then the user would have to know to restart the server 
with the -s parameter, i.e. {{bin/solr restart -s example/solr}}.

Next, if you re-run an example, such as:

{code}
bin/solr -e techproducts
bin/solr stop -all
bin/solr -e techproducts
{code}

Then the script does the correct thing (fires up Solr, tries to re-create the 
existing core (which fails), and then re-indexes the docs but that's harmless 
IMO).

Also, I went with the selective cloning of the server directory when running 
the cloud example. Agreed it's a bit of a maintenance headache, but Solr needs 
a default solr.solr.home set in order to initialize, so at some point, there 
may be cores in the {{server/solr}} directory. In other words, if the user does:

{code}
bin/solr start -p 8983
bin/solr create_core -n foo
bin/solr stop -p 8983
bin/solr -e cloud
{code}

Then the foo instanceDir is in {{server/solr}} and we don't want to pull that 
over when creating node1. At least now with the latest changes, it's safe to 
run the -e cloud example after doing other things that affect the server/solr 
directory.

In short, I think we should only take this immutable master concept so far; at 
some point, there's either going to be a dirty solr home directory somewhere, OR 
the user is going to have to be burdened with passing the correct -s param, 
which is bad for getting started.

 Rename 'example' dir to 'server' and pull examples into an 'examples' 
 directory
 ---

 Key: SOLR-3619
 URL: https://issues.apache.org/jira/browse/SOLR-3619
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Timothy Potter
 Fix For: 5.0, Trunk

 Attachments: SOLR-3619.patch, SOLR-3619.patch, SOLR-3619.patch, 
 managed-schema, server-name-layout.png, solrconfig.xml






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-11-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14226998#comment-14226998
 ] 

ASF subversion and git services commented on SOLR-3619:
---

Commit 1641965 from [~thelabdude] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1641965 ]

SOLR-3619: clone configsets when starting examples or creating cores instead of 
using configsets directly; selectively clone the server directory when creating 
node1 for the cloud example; fix script error reporting

 Rename 'example' dir to 'server' and pull examples into an 'examples' 
 directory
 ---

 Key: SOLR-3619
 URL: https://issues.apache.org/jira/browse/SOLR-3619
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Timothy Potter
 Fix For: 5.0, Trunk

 Attachments: SOLR-3619.patch, SOLR-3619.patch, SOLR-3619.patch, 
 managed-schema, server-name-layout.png, solrconfig.xml






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-SmokeRelease-5.x - Build # 215 - Still Failing

2014-11-26 Thread Timothy Potter
Ok - fix committed ... I'll keep an eye on it but think this should do it
this time.

On Wed, Nov 26, 2014 at 9:27 AM, Timothy Potter thelabd...@gmail.com
wrote:

 I'm working on it ... problem right now is ps waux doesn't work the same
 on FreeBSD so the script isn't getting the info it needs ... ps auxww seems
 to do the trick

 On Wed, Nov 26, 2014 at 4:12 AM, Michael McCandless 
 luc...@mikemccandless.com wrote:

 Is anyone looking into why the smoke tester can't run Solr's example?
 This has been failing for quite a while, and I thought I saw a commit
 to smoke tester to try to fix it?

 Should we stop trying to test the solr example from the smoke tester?

 Mike McCandless

 http://blog.mikemccandless.com


 On Tue, Nov 25, 2014 at 10:06 PM, Apache Jenkins Server
 jenk...@builds.apache.org wrote:
  Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.x/215/
 
  No tests ran.
 
  Build Log:
  [...truncated 51672 lines...]
  prepare-release-no-sign:
  [mkdir] Created dir:
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist
   [copy] Copying 446 files to
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/lucene
   [copy] Copying 254 files to
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/solr
 [smoker] Java 1.7 JAVA_HOME=/home/jenkins/tools/java/latest1.7
 [smoker] NOTE: output encoding is US-ASCII
 [smoker]
 [smoker] Load release URL
 file:/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/...
 [smoker]
 [smoker] Test Lucene...
 [smoker]   test basics...
 [smoker]   get KEYS
 [smoker] 0.1 MB in 0.01 sec (13.0 MB/sec)
 [smoker]   check changes HTML...
 [smoker]   download lucene-5.0.0-src.tgz...
 [smoker] 27.8 MB in 0.04 sec (681.1 MB/sec)
 [smoker] verify md5/sha1 digests
 [smoker]   download lucene-5.0.0.tgz...
 [smoker] 63.8 MB in 0.09 sec (694.6 MB/sec)
 [smoker] verify md5/sha1 digests
 [smoker]   download lucene-5.0.0.zip...
 [smoker] 73.2 MB in 0.14 sec (526.5 MB/sec)
 [smoker] verify md5/sha1 digests
 [smoker]   unpack lucene-5.0.0.tgz...
 [smoker] verify JAR metadata/identity/no javax.* or java.*
 classes...
 [smoker] test demo with 1.7...
 [smoker]   got 5569 hits for query lucene
 [smoker] checkindex with 1.7...
 [smoker] check Lucene's javadoc JAR
 [smoker]   unpack lucene-5.0.0.zip...
 [smoker] verify JAR metadata/identity/no javax.* or java.*
 classes...
 [smoker] test demo with 1.7...
 [smoker]   got 5569 hits for query lucene
 [smoker] checkindex with 1.7...
 [smoker] check Lucene's javadoc JAR
 [smoker]   unpack lucene-5.0.0-src.tgz...
 [smoker] make sure no JARs/WARs in src dist...
 [smoker] run ant validate
 [smoker] run tests w/ Java 7 and
 testArgs='-Dtests.jettyConnector=Socket -Dtests.multiplier=1
 -Dtests.slow=false'...
 [smoker] test demo with 1.7...
 [smoker]   got 207 hits for query lucene
 [smoker] checkindex with 1.7...
 [smoker] generate javadocs w/ Java 7...
 [smoker]
 [smoker] Crawl/parse...
 [smoker]
 [smoker] Verify...
 [smoker]   confirm all releases have coverage in
 TestBackwardsCompatibility
 [smoker] find all past Lucene releases...
 [smoker] run TestBackwardsCompatibility..
 [smoker] success!
 [smoker]
 [smoker] Test Solr...
 [smoker]   test basics...
 [smoker]   get KEYS
 [smoker] 0.1 MB in 0.00 sec (86.1 MB/sec)
 [smoker]   check changes HTML...
 [smoker]   download solr-5.0.0-src.tgz...
 [smoker] 34.1 MB in 0.04 sec (768.8 MB/sec)
 [smoker] verify md5/sha1 digests
 [smoker]   download solr-5.0.0.tgz...
 [smoker] 146.5 MB in 0.48 sec (302.1 MB/sec)
 [smoker] verify md5/sha1 digests
 [smoker]   download solr-5.0.0.zip...
 [smoker] 152.6 MB in 0.26 sec (598.2 MB/sec)
 [smoker] verify md5/sha1 digests
 [smoker]   unpack solr-5.0.0.tgz...
 [smoker] verify JAR metadata/identity/no javax.* or java.*
 classes...
 [smoker] unpack lucene-5.0.0.tgz...
 [smoker]   **WARNING**: skipping check of
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
 [smoker]   **WARNING**: skipping check of
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
 [smoker] verify WAR metadata/contained JAR identity/no javax.*
 

[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-11-26 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14227012#comment-14227012
 ] 

Alexandre Rafalovitch commented on SOLR-3619:
-

Why can't we have 
{quote}
/server
/logs
/homes (whatever name, if not examples)
/homes/default
/homes/cloud
/exampledocs
{quote}

Then, the /server is immutable, can be copied around and so on. And the users 
do not have to grep the file system to look for where the files changed.

The problem is not when the user creates something (though you bet they will 
freak out over the error message). The problem is when they come back the next 
day and something goes wrong. And they have to figure out what happened behind 
the scenes. Or when they want to delete the examples they created all over the 
place and want to start a fresh setup of their own. 

At the moment, if they create schemaless and techproducts, they go together in 
a very deep directory under server. If they create cloud example, we get two 
directory copies (with logs, not just those examples I complained about) in a 
random (current directory) location. 

Another way to look at it is _what do I need to delete to get back to the 
original setup_. At the moment, it's something in /bin (pid files), something - 
not everything - in some deep directories (/server/solr/X,Y,Z), something else 
in another deep directory (/server/logs), and something in whatever location I 
was in when I ran the cloud example. Even I just end up deleting the whole setup 
and re-untarring the build archive.

 Rename 'example' dir to 'server' and pull examples into an 'examples' 
 directory
 ---

 Key: SOLR-3619
 URL: https://issues.apache.org/jira/browse/SOLR-3619
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Timothy Potter
 Fix For: 5.0, Trunk

 Attachments: SOLR-3619.patch, SOLR-3619.patch, SOLR-3619.patch, 
 managed-schema, server-name-layout.png, solrconfig.xml






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6793) ReplicationHandler does not destroy all of it's created SnapPullers

2014-11-26 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14227044#comment-14227044
 ] 

Ramkumar Aiyengar commented on SOLR-6793:
-

I started taking a stab at this with 
https://github.com/bloomberg/lucene-solr/commit/50a198e518d66380e4bfef81baddd7fd27ffa198
 A couple of tests fail though; I need to take a look. With this refactoring 
in place, it looks like {{tempSnapPuller}} can go away, but maybe I am missing 
something.


 ReplicationHandler does not destroy all of it's created SnapPullers
 ---

 Key: SOLR-6793
 URL: https://issues.apache.org/jira/browse/SOLR-6793
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6793.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6776) Data lost when use SoftCommit and TLog

2014-11-26 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14227062#comment-14227062
 ] 

Mark Miller commented on SOLR-6776:
---

By default, the tlog doesn't fsync; it just flushes and leans on replicas. You 
can configure the sync level in solrconfig.xml.
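
For reference, a sketch of roughly what that looks like in solrconfig.xml, 
assuming the {{syncLevel}} init argument on the update log (with values along 
the lines of NONE / FLUSH / FSYNC); check the stock solrconfig.xml for your 
version for the exact names:

{code:xml}
<updateHandler class="solr.DirectUpdateHandler2">
  <updateLog>
    <str name="dir">${solr.ulog.dir:}</str>
    <!-- assumption: FSYNC forces an fsync per tlog write; the default only flushes -->
    <str name="syncLevel">FSYNC</str>
  </updateLog>
</updateHandler>
{code}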

 Data lost when use SoftCommit and TLog
 --

 Key: SOLR-6776
 URL: https://issues.apache.org/jira/browse/SOLR-6776
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10
Reporter: yuanyun.cn
  Labels: softCommit, updateLog
 Fix For: 4.10.3


 We enabled update log and change autoCommit to some bigger value 10 mins.
 After restart, we push one doc with softCommit=true
 http://localhost:8983/solr/update?stream.body=<add><doc><field name="id">id1</field></doc></add>&softCommit=true
 Then we kill the java process after a min. 
 After restart, Tlog failed to replay with following exception, and there is 
 no data in solr.
 6245 [coreLoadExecutor-5-thread-1] ERROR org.apache.solr.update.UpdateLog - 
 Failure to open existing log file (non fatal) 
 E:\jeffery\src\apache\solr\4.10.2\solr-4.10.2\example\solr\collection1\data\t
 log\tlog.000:org.apache.solr.common.SolrException: 
 java.io.EOFException
 at 
 org.apache.solr.update.TransactionLog.init(TransactionLog.java:181)
 at org.apache.solr.update.UpdateLog.init(UpdateLog.java:261)
 at org.apache.solr.update.UpdateHandler.init(UpdateHandler.java:134)
 at org.apache.solr.update.UpdateHandler.init(UpdateHandler.java:94)
 at 
 org.apache.solr.update.DirectUpdateHandler2.init(DirectUpdateHandler2.java:100)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown 
 Source)
 at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown 
 Source)
 at java.lang.reflect.Constructor.newInstance(Unknown Source)
 at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:550)
 at 
 org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:620)
 at org.apache.solr.core.SolrCore.init(SolrCore.java:835)
 at org.apache.solr.core.SolrCore.init(SolrCore.java:646)
 at org.apache.solr.core.CoreContainer.create(CoreContainer.java:491)
 at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:255)
 at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:249)
 at java.util.concurrent.FutureTask.run(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 Caused by: java.io.EOFException
 at 
 org.apache.solr.common.util.FastInputStream.readUnsignedByte(FastInputStream.java:73)
 at 
 org.apache.solr.common.util.FastInputStream.readInt(FastInputStream.java:216)
 at 
 org.apache.solr.update.TransactionLog.readHeader(TransactionLog.java:268)
 at 
 org.apache.solr.update.TransactionLog.init(TransactionLog.java:159)
 ... 19 more
 Check the code: this seems to be related to 
 org.apache.solr.update.processor.RunUpdateProcessor. In processCommit, it 
 sets changesSinceCommit=false (even when we are using softCommit),
 so in finish, updateLog.finish will not be called.
   public void finish() throws IOException {
 if (changesSinceCommit && updateHandler.getUpdateLog() != null) {
   updateHandler.getUpdateLog().finish(null);
 }
 super.finish();
   }
 To fix this issue: I have to change RunUpdateProcessor.processCommit like 
 below:
 if (!cmd.softCommit) {
   changesSinceCommit = false;
 }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6793) ReplicationHandler does not destroy all of it's created SnapPullers

2014-11-26 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14227068#comment-14227068
 ] 

Mark Miller commented on SOLR-6793:
---

Thanks for the review.

bq. this entire (temp)SnapPuller stuff is pretty messy

Agreed.

 ReplicationHandler does not destroy all of it's created SnapPullers
 ---

 Key: SOLR-6793
 URL: https://issues.apache.org/jira/browse/SOLR-6793
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6793.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6793) ReplicationHandler does not destroy all of it's created SnapPullers

2014-11-26 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14227069#comment-14227069
 ] 

Mark Miller commented on SOLR-6793:
---

Side Note from looking at your patch: We should rename SnapPuller to something 
more meaningful, like IndexFetcher.

 ReplicationHandler does not destroy all of it's created SnapPullers
 ---

 Key: SOLR-6793
 URL: https://issues.apache.org/jira/browse/SOLR-6793
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6793.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Review Request 28393: SOLR-4509: Disable HttpClient stale check for performance and fewer spurious connection errors.

2014-11-26 Thread Mark Miller


 On Nov. 25, 2014, 7:34 p.m., Hrishikesh Gadre wrote:
  trunk/solr/core/src/java/org/apache/solr/core/ConfigSolrXml.java, line 55
  https://reviews.apache.org/r/28393/diff/1/?file=774307#file774307line55
 
  This configuration property needs to correlate with the 
  connection_timeout configuration specified in the servlet container hosting 
  Solr. If Solr provides a default value, the user may not realize this (i.e. 
  they may have a different connection_timeout configuration and if they use 
  default value for this property, then they may experience *more* connection 
  reset errors since the staleness check would be disabled). Is it possible 
  to make this a required property?
 
 Gregory Chanan wrote:
 Maybe it makes sense to do this with SOLR-4792 where we control 
 everything, i.e. we don't have to worry about another container's settings?
 
 Hrishikesh Gadre wrote:
 In that case can we keep the previous settings in case this new parameter 
 is not specified (i.e. if this parameter is not specified, don't disable the 
 staleness check)?
 
 The advantage is that the staleness check would keep the failure window fairly 
 small (I have found it difficult to reproduce in a live cluster). With this 
 change, depending upon the difference between default_value (40) and the 
 actual connection_timeout configured, users may observe more failures (the 
 larger the difference, the more failures).
 
 Hrishikesh Gadre wrote:
 With this change, depending upon the difference between default_value 
 (40) and the actual connection_timeout configured, users may observe more 
 failures (the larger the difference, the more failures).
 
 With the caveat that default_value > connection_timeout.

We will ship with a working default, and there is a comment at the spot where 
you would change it alerting you to this. I don't think it's a good idea to 
make it a required property. It is the weakest part of this approach, but I 
don't see a better option yet.
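
For context, the generic HttpClient 4.x idle-connection eviction pattern 
(presumably what the patch's IdleConnectionMonitorRunnable is built around; the 
class and field names below are hypothetical and this is only a sketch) -- the 
eviction threshold is the value that has to stay in step with the container's 
connection timeout:

{code:java}
import java.util.concurrent.TimeUnit;
import org.apache.http.impl.conn.PoolingClientConnectionManager;

// Periodically closes expired and long-idle pooled connections so that a
// client with the stale check disabled never reuses a connection the
// servlet container has already timed out.
public class IdleConnectionEvictor implements Runnable {
  private final PoolingClientConnectionManager connMgr;
  private final long idleTimeoutMillis; // keep this below the container's connection timeout

  public IdleConnectionEvictor(PoolingClientConnectionManager connMgr, long idleTimeoutMillis) {
    this.connMgr = connMgr;
    this.idleTimeoutMillis = idleTimeoutMillis;
  }

  @Override
  public void run() {
    while (!Thread.currentThread().isInterrupted()) {
      try {
        Thread.sleep(5000);
        connMgr.closeExpiredConnections();
        connMgr.closeIdleConnections(idleTimeoutMillis, TimeUnit.MILLISECONDS);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    }
  }
}
{code}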


- Mark


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28393/#review63024
---


On Nov. 24, 2014, 3:42 p.m., Mark Miller wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/28393/
 ---
 
 (Updated Nov. 24, 2014, 3:42 p.m.)
 
 
 Review request for lucene.
 
 
 Repository: lucene
 
 
 Description
 ---
 
 https://issues.apache.org/jira/browse/SOLR-4509
 
 
 Diffs
 -
 
   trunk/solr/core/src/java/org/apache/solr/core/ConfigSolr.java 1641405 
   trunk/solr/core/src/java/org/apache/solr/core/ConfigSolrXml.java 1641405 
   trunk/solr/core/src/java/org/apache/solr/core/ConfigSolrXmlOld.java 1641405 
   trunk/solr/core/src/java/org/apache/solr/core/CoreContainer.java 1641405 
   trunk/solr/core/src/java/org/apache/solr/core/PluginInfo.java 1641405 
   
 trunk/solr/core/src/java/org/apache/solr/handler/component/HttpShardHandlerFactory.java
  1641405 
   trunk/solr/core/src/java/org/apache/solr/update/UpdateShardHandler.java 
 1641405 
   
 trunk/solr/core/src/java/org/apache/solr/util/IdleConnectionMonitorRunnable.java
  PRE-CREATION 
   trunk/solr/core/src/test/org/apache/solr/cloud/OverseerTest.java 1641405 
   
 trunk/solr/core/src/test/org/apache/solr/cloud/TestLeaderElectionZkExpiry.java
  1641405 
   trunk/solr/core/src/test/org/apache/solr/cloud/TestZkChroot.java 1641405 
   trunk/solr/core/src/test/org/apache/solr/cloud/ZkControllerTest.java 
 1641405 
   trunk/solr/core/src/test/org/apache/solr/core/TestCoreDiscovery.java 
 1641405 
   
 trunk/solr/core/src/test/org/apache/solr/core/TestImplicitCoreProperties.java 
 1641405 
   trunk/solr/server/etc/jetty.xml 1641405 
   
 trunk/solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpClientUtil.java
  1641405 
 
 Diff: https://reviews.apache.org/r/28393/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Mark Miller
 




[jira] [Commented] (SOLR-6658) SearchHandler should accept POST requests with JSON data in content stream for customized plug-in components

2014-11-26 Thread Mark Peng (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14227087#comment-14227087
 ] 

Mark Peng commented on SOLR-6658:
-

Hi [~noble.paul],

Agreed. If the committee reaches an agreement to open POST for search requests, 
that would be very welcome. :)
Thank you.

Best regards,
Mark

 SearchHandler should accept POST requests with JSON data in content stream 
 for customized plug-in components
 

 Key: SOLR-6658
 URL: https://issues.apache.org/jira/browse/SOLR-6658
 Project: Solr
  Issue Type: Improvement
  Components: search, SearchComponents - other
Affects Versions: 4.7, 4.7.1, 4.7.2, 4.8, 4.8.1, 4.9, 4.9.1, 4.10, 4.10.1
Reporter: Mark Peng
Assignee: Noble Paul
 Attachments: SOLR-6658.patch, SOLR-6658.patch


 This issue relates to the following one:
 *Return HTTP error on POST requests with no Content-Type*
 [https://issues.apache.org/jira/browse/SOLR-5517]
 The original consideration of the above is to make sure that incoming POST 
 requests to SearchHandler have corresponding content-type specified. That is 
 quite reasonable, however, the following lines in the patch cause to reject 
 all POST requests with content stream data, which is not necessary to that 
 issue:
 {code}
 Index: solr/core/src/java/org/apache/solr/handler/component/SearchHandler.java
 ===
 --- solr/core/src/java/org/apache/solr/handler/component/SearchHandler.java   
 (revision 1546817)
 +++ solr/core/src/java/org/apache/solr/handler/component/SearchHandler.java   
 (working copy)
 @@ -22,9 +22,11 @@
  import java.util.List;
  
  import org.apache.solr.common.SolrException;
 +import org.apache.solr.common.SolrException.ErrorCode;
  import org.apache.solr.common.params.CommonParams;
  import org.apache.solr.common.params.ModifiableSolrParams;
  import org.apache.solr.common.params.ShardParams;
 +import org.apache.solr.common.util.ContentStream;
  import org.apache.solr.core.CloseHook;
  import org.apache.solr.core.PluginInfo;
  import org.apache.solr.core.SolrCore;
 @@ -165,6 +167,10 @@
{
  // int sleep = req.getParams().getInt("sleep",0);
  // if (sleep > 0) {log.error("SLEEPING for " + sleep);  
 Thread.sleep(sleep);}
 +if (req.getContentStreams() != null && 
 req.getContentStreams().iterator().hasNext()) {
 +  throw new SolrException(ErrorCode.BAD_REQUEST, "Search requests cannot 
 accept content streams");
 +}
 +
  ResponseBuilder rb = new ResponseBuilder(req, rsp, components);
  if (rb.requestInfo != null) {
rb.requestInfo.setResponseBuilder(rb);
 {code}
 We are using Solr 4.5.1 in our production services and considering to upgrade 
 to 4.9/5.0 to support more features. But due to this issue, we cannot have a 
 chance to upgrade because we have some important customized SearchComponent 
 plug-ins that need to get POST data from SearchHandler to do further 
 processing.
 Therefore, we are requesting if it is possible to remove the content stream 
 constraint shown above and to let SearchHandler accept POST requests with 
 *Content-Type: application/json* to allow further components to get the data.
 Thank you.
 Best regards,
 Mark Peng



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4509) Disable HttpClient stale check for performance and fewer spurious connection errors.

2014-11-26 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4509:
--
Attachment: SOLR-4509.patch

Here is the latest in-progress patch. It moves configuration to system 
properties and away from solr.xml so that we can try and use the new 
stale-connection approach with more of our HttpClient usage.

ReviewBoard doesn't seem to like a move that I did of IOUtils in the patch, so 
moving back to the JIRA issue.

 Disable HttpClient stale check for performance and fewer spurious connection 
 errors.
 

 Key: SOLR-4509
 URL: https://issues.apache.org/jira/browse/SOLR-4509
 Project: Solr
  Issue Type: Improvement
  Components: search
 Environment: 5 node SmartOS cluster (all nodes living in same global 
 zone - i.e. same physical machine)
Reporter: Ryan Zezeski
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: IsStaleTime.java, SOLR-4509-4_4_0.patch, 
 SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
 SOLR-4509.patch, baremetal-stale-nostale-med-latency.dat, 
 baremetal-stale-nostale-med-latency.svg, 
 baremetal-stale-nostale-throughput.dat, baremetal-stale-nostale-throughput.svg


 By disabling the Apache HTTP Client stale check I've witnessed a 2-4x 
 increase in throughput and reduction of over 100ms.  This patch was made in 
 the context of a project I'm leading, called Yokozuna, which relies on 
 distributed search.
 Here's the patch on Yokozuna: https://github.com/rzezeski/yokozuna/pull/26
 Here's a write-up I did on my findings: 
 http://www.zinascii.com/2013/solr-distributed-search-and-the-stale-check.html
 I'm happy to answer any questions or make changes to the patch to make it 
 acceptable.
 ReviewBoard: https://reviews.apache.org/r/28393/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Review Request 28393: SOLR-4509: Disable HttpClient stale check for performance and fewer spurious connection errors.

2014-11-26 Thread Mark Miller


 On Nov. 25, 2014, 7:34 p.m., Hrishikesh Gadre wrote:
  trunk/solr/core/src/java/org/apache/solr/core/ConfigSolrXml.java, line 55
  https://reviews.apache.org/r/28393/diff/1/?file=774307#file774307line55
 
  This configuration property needs to correlate with the 
  connection_timeout configuration specified in the servlet container hosting 
  Solr. If Solr provides a default value, the user may not realize this (i.e. 
  they may have a different connection_timeout configuration and if they use 
  default value for this property, then they may experience *more* connection 
  reset errors since the staleness check would be disabled). Is it possible 
  to make this a required property?
 
 Gregory Chanan wrote:
 Maybe it makes sense to do this with SOLR-4792 where we control 
 everything, i.e. we don't have to worry about another container's settings?
 
 Hrishikesh Gadre wrote:
 In that case can we keep the previous settings in case this new parameter 
 is not specified (i.e. if this parameter is not specified, don't disable the 
 staleness check)?
 
 The advantage is that the staleness check would keep the failure window fairly 
 small (I have found it difficult to reproduce in a live cluster). With this 
 change, depending upon the difference between default_value (40) and the 
 actual connection_timeout configured, users may observe more failures (the 
 larger the difference, the more failures).
 
 Hrishikesh Gadre wrote:
 With this change, depending upon the difference between default_value 
 (40) and the actual connection_timeout configured, users may observe more 
 failures (the larger the difference, the more failures).
 
 With the caveat that default_value > connection_timeout.
 
 Mark Miller wrote:
 We will ship with a working default, and there is a comment at the spot 
 where you would change it alerting you to this. I don't think it's a good 
 idea to make it a required property. It is the weakest part of this approach, 
 but I don't see a better option yet.

bq. Maybe it makes sense to do this with SOLR-4792 

We are, this is 5.0 development work.


- Mark


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28393/#review63024
---


On Nov. 24, 2014, 3:42 p.m., Mark Miller wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/28393/
 ---
 
 (Updated Nov. 24, 2014, 3:42 p.m.)
 
 
 Review request for lucene.
 
 
 Repository: lucene
 
 
 Description
 ---
 
 https://issues.apache.org/jira/browse/SOLR-4509
 
 
 Diffs
 -
 
   trunk/solr/core/src/java/org/apache/solr/core/ConfigSolr.java 1641405 
   trunk/solr/core/src/java/org/apache/solr/core/ConfigSolrXml.java 1641405 
   trunk/solr/core/src/java/org/apache/solr/core/ConfigSolrXmlOld.java 1641405 
   trunk/solr/core/src/java/org/apache/solr/core/CoreContainer.java 1641405 
   trunk/solr/core/src/java/org/apache/solr/core/PluginInfo.java 1641405 
   
 trunk/solr/core/src/java/org/apache/solr/handler/component/HttpShardHandlerFactory.java
  1641405 
   trunk/solr/core/src/java/org/apache/solr/update/UpdateShardHandler.java 
 1641405 
   
 trunk/solr/core/src/java/org/apache/solr/util/IdleConnectionMonitorRunnable.java
  PRE-CREATION 
   trunk/solr/core/src/test/org/apache/solr/cloud/OverseerTest.java 1641405 
   
 trunk/solr/core/src/test/org/apache/solr/cloud/TestLeaderElectionZkExpiry.java
  1641405 
   trunk/solr/core/src/test/org/apache/solr/cloud/TestZkChroot.java 1641405 
   trunk/solr/core/src/test/org/apache/solr/cloud/ZkControllerTest.java 
 1641405 
   trunk/solr/core/src/test/org/apache/solr/core/TestCoreDiscovery.java 
 1641405 
   
 trunk/solr/core/src/test/org/apache/solr/core/TestImplicitCoreProperties.java 
 1641405 
   trunk/solr/server/etc/jetty.xml 1641405 
   
 trunk/solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpClientUtil.java
  1641405 
 
 Diff: https://reviews.apache.org/r/28393/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Mark Miller
 




[jira] [Commented] (SOLR-5517) Return HTTP error on POST requests with no Content-Type

2014-11-26 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14227093#comment-14227093
 ] 

Noble Paul commented on SOLR-5517:
--

[~thetaphi] If a user wants to provide his own handler extending 
SearchHandler, he needs to completely rewrite the handleRequestBody() method. 
This check is not really helpful to anyone. Let's remove this. 
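
To illustrate the pain point, a rough sketch of the kind of subclass a user 
would write (a hypothetical class, assuming the stock SearchHandler API); with 
the check in place, the delegation to super is rejected as soon as a POST body 
is present:

{code:java}
import org.apache.solr.handler.component.SearchHandler;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.SolrQueryResponse;

// Hypothetical custom handler: inspect the POSTed content stream, then
// delegate to the normal search flow. Today the call to
// super.handleRequestBody() throws "Search requests cannot accept content
// streams", so users end up copying the whole method instead of delegating.
public class MyJsonSearchHandler extends SearchHandler {
  @Override
  public void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp) throws Exception {
    if (req.getContentStreams() != null) {
      // read the JSON body here and stash whatever the custom components need
    }
    super.handleRequestBody(req, rsp);
  }
}
{code}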

 Return HTTP error on POST requests with no Content-Type
 ---

 Key: SOLR-5517
 URL: https://issues.apache.org/jira/browse/SOLR-5517
 Project: Solr
  Issue Type: Improvement
Reporter: Ryan Ernst
Assignee: Ryan Ernst
 Fix For: 4.7, Trunk

 Attachments: SOLR-5517.patch, SOLR-5517.patch, SOLR-5517.patch, 
 SOLR-5517.patch, SOLR-5517.patch


 While the http spec states requests without a content-type should be treated 
 as application/octet-stream, the html spec says instead that post requests 
 without a content-type should be treated as a form 
 (http://www.w3.org/MarkUp/html-spec/html-spec_8.html#SEC8.2.1).  It would be 
 nice to allow large search requests from html forms, and not have to rely on 
 the browser to set the content type (since the spec says it doesn't have to).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6780) Merging request parameters with defaults produce duplicate entries

2014-11-26 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-6780:
---
Attachment: SOLR-6780.patch

What an evil freaking bug.

I audited all of the usages of {{getParameterNamesIterator()}} to try and track 
down how severe the impacts of this issue may be -- most of them are either 
semi-benign (like the echoParams case) or not affected by the redundancy (ie: 
code that iterates over all params looking for certain things, and then adds the 
associated names/values to a Set so they get deduped anyway).

There were 4 main areas I found where this bug could result in problematic 
behavior:

* ExtractingRequestHandler
** literal.\* params will be duplicated if overridden by 
defaults/invariants/appends - this will result in redundant literal field=value 
params being added to the document.
** impact: multiple values in literal fields when not expected/desired
* FacetComponent
** facet.\* params will be duplicated if overridden by 
defaults/invariants/appends - this can result in redundant computation and 
identical facet.field, facet.query, or facet.range blocks in the response
** impact: wasted computation & increased response size
* SpellCheckComponent
** when custom params (ie: spellcheck.\[dictionary name\].X=Y) are used in 
defaults, appends, or invariants, it can cause redundant X=Y params to be used.
** when spellcheck.collateParam.X=Y type params are used in defaults, 
appends, or invariants, it can cause redundant X=Y params to exist in the 
collation verification queries.
** impact: unclear to me at first glance, probably just wasted computation & 
increased response size
* AnalyticsComponent
** olap.\* params will be duplicated if overridden by 
defaults/invariants/appends - this can result in redundant computation
** impact: unclear to me at first glance, probably just wasted computation & 
increased response size

...in addition to fixing the bug & adding explicit unit tests for it, the 
attached patch also includes some sanity check testing for the FacetComponent & 
ExtractingRequestHandler situations above, to try and protect us from similar 
redundancy bugs in the future.
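
As a small illustration of the underlying mechanics (a sketch against the 
SolrParams API described in the issue, not code from the patch): wrapping 
request params around handler defaults and then walking 
{{getParameterNamesIterator()}} yields the overridden name once per underlying 
source, which is exactly what downstream components trip over.

{code:java}
import java.util.Iterator;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.params.SolrParams;

// A request param that overrides a handler default...
ModifiableSolrParams request = new ModifiableSolrParams();
request.set("echoParams", "all");
ModifiableSolrParams defaults = new ModifiableSolrParams();
defaults.set("echoParams", "explicit");

// ...and the merged view reports the name from both sources, so naive
// iteration (as in toNamedList()) sees "echoParams" twice.
SolrParams merged = SolrParams.wrapDefaults(request, defaults);
Iterator<String> names = merged.getParameterNamesIterator();
while (names.hasNext()) {
  String name = names.next();
  // prints "echoParams = all" twice: the override wins both times
  System.out.println(name + " = " + merged.get(name));
}
{code}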



 Merging request parameters with defaults produce duplicate entries
 --

 Key: SOLR-6780
 URL: https://issues.apache.org/jira/browse/SOLR-6780
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.1, 5.0, Trunk
Reporter: Alexandre Rafalovitch
  Labels: parameters
 Attachments: SOLR-6780.patch


 When a parameter (e.g. echoParams) is specified and overrides the default on 
 the handler, it actually generates two entries for that key with the same 
 value. 
 Most of the time it is just a confusion and not an issue, however, some 
 components will do the work twice. For example faceting component as 
 described in http://search-lucene.com/m/QTPaSlFUQ1/duplicate
 It may also be connected to SOLR-6369
 The cause seems to be the interplay between 
 *DefaultSolrParams#getParameterNamesIterator()* which just returns param 
 names in sequence and *SolrParams#toNamedList()* which uses the first 
 (override then default) value for each key, without deduplication.
 It's easily reproducible in trunk against schemaless example with 
 bq. curl 
 "http://localhost:8983/solr/schemaless/select?indent=true&echoParams=all"
 I've also spot checked it and it seems to be reproducible back to Solr 4.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6780) Merging request parameters with defaults produce duplicate entries

2014-11-26 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reassigned SOLR-6780:
--

Assignee: Hoss Man

 Merging request parameters with defaults produce duplicate entries
 --

 Key: SOLR-6780
 URL: https://issues.apache.org/jira/browse/SOLR-6780
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.1, 5.0, Trunk
Reporter: Alexandre Rafalovitch
Assignee: Hoss Man
  Labels: parameters
 Attachments: SOLR-6780.patch


 When a parameter (e.g. echoParams) is specified and overrides the default on 
 the handler, it actually generates two entries for that key with the same 
 value. 
 Most of the time it is just a confusion and not an issue, however, some 
 components will do the work twice. For example faceting component as 
 described in http://search-lucene.com/m/QTPaSlFUQ1/duplicate
 It may also be connected to SOLR-6369
 The cause seems to be the interplay between 
 *DefaultSolrParams#getParameterNamesIterator()* which just returns param 
 names in sequence and *SolrParams#toNamedList()* which uses the first 
 (override then default) value for each key, without deduplication.
 It's easily reproducible in trunk against schemaless example with 
 bq. curl 
 "http://localhost:8983/solr/schemaless/select?indent=true&echoParams=all"
 I've also spot checked it and it seems to be reproducible back to Solr 4.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4509) Disable HttpClient stale check for performance and fewer spurious connection errors.

2014-11-26 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14227105#comment-14227105
 ] 

Yonik Seeley commented on SOLR-4509:


I thought I had killed stale connection checks long ago.  Maybe it was long 
enough ago that it was back in the CNET days or something...

Can we tell when this happens (and that we are certain that the server did not 
receive the request) so that we can do a retry?



 Disable HttpClient stale check for performance and fewer spurious connection 
 errors.
 

 Key: SOLR-4509
 URL: https://issues.apache.org/jira/browse/SOLR-4509
 Project: Solr
  Issue Type: Improvement
  Components: search
 Environment: 5 node SmartOS cluster (all nodes living in same global 
 zone - i.e. same physical machine)
Reporter: Ryan Zezeski
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: IsStaleTime.java, SOLR-4509-4_4_0.patch, 
 SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
 SOLR-4509.patch, baremetal-stale-nostale-med-latency.dat, 
 baremetal-stale-nostale-med-latency.svg, 
 baremetal-stale-nostale-throughput.dat, baremetal-stale-nostale-throughput.svg


 By disabling the Apache HTTP Client stale check I've witnessed a 2-4x 
 increase in throughput and reduction of over 100ms.  This patch was made in 
 the context of a project I'm leading, called Yokozuna, which relies on 
 distributed search.
 Here's the patch on Yokozuna: https://github.com/rzezeski/yokozuna/pull/26
 Here's a write-up I did on my findings: 
 http://www.zinascii.com/2013/solr-distributed-search-and-the-stale-check.html
 I'm happy to answer any questions or make changes to the patch to make it 
 acceptable.
 ReviewBoard: https://reviews.apache.org/r/28393/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-SmokeRelease-5.x - Build # 215 - Still Failing

2014-11-26 Thread Mark Miller
   [smoker] RuntimeError: command sh ./exampledocs/test_utf8.sh
http://localhost:8983/solr/techproducts; failed; see log file

- Mark

On Wed Nov 26 2014 at 6:43:30 PM Timothy Potter thelabd...@gmail.com
wrote:

 Ok - fix committed ... I'll keep an eye on it but think this should do it
 this time.

 On Wed, Nov 26, 2014 at 9:27 AM, Timothy Potter thelabd...@gmail.com
 wrote:

 I'm working on it ... problem right now is ps waux doesn't work the same
 on FreeBSD so the script isn't getting the info it needs ... ps auxww seems
 to do the trick

 On Wed, Nov 26, 2014 at 4:12 AM, Michael McCandless 
 luc...@mikemccandless.com wrote:

 Is anyone looking into why the smoke tester can't run Solr's example?
 This has been failing for quite a while, and I thought I saw a commit
 to smoke tester to try to fix it?

 Should we stop trying to test the solr example from the smoke tester?

 Mike McCandless

 http://blog.mikemccandless.com


 On Tue, Nov 25, 2014 at 10:06 PM, Apache Jenkins Server
 jenk...@builds.apache.org wrote:
  Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.x/215/
 
  No tests ran.
 
  Build Log:
  [...truncated 51672 lines...]
  prepare-release-no-sign:
  [mkdir] Created dir:
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist
   [copy] Copying 446 files to
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/lucene
   [copy] Copying 254 files to
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/solr
 [smoker] Java 1.7 JAVA_HOME=/home/jenkins/tools/java/latest1.7
 [smoker] NOTE: output encoding is US-ASCII
 [smoker]
 [smoker] Load release URL
 file:/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/...
 [smoker]
 [smoker] Test Lucene...
 [smoker]   test basics...
 [smoker]   get KEYS
 [smoker] 0.1 MB in 0.01 sec (13.0 MB/sec)
 [smoker]   check changes HTML...
 [smoker]   download lucene-5.0.0-src.tgz...
 [smoker] 27.8 MB in 0.04 sec (681.1 MB/sec)
 [smoker] verify md5/sha1 digests
 [smoker]   download lucene-5.0.0.tgz...
 [smoker] 63.8 MB in 0.09 sec (694.6 MB/sec)
 [smoker] verify md5/sha1 digests
 [smoker]   download lucene-5.0.0.zip...
 [smoker] 73.2 MB in 0.14 sec (526.5 MB/sec)
 [smoker] verify md5/sha1 digests
 [smoker]   unpack lucene-5.0.0.tgz...
 [smoker] verify JAR metadata/identity/no javax.* or java.*
 classes...
 [smoker] test demo with 1.7...
 [smoker]   got 5569 hits for query lucene
 [smoker] checkindex with 1.7...
 [smoker] check Lucene's javadoc JAR
 [smoker]   unpack lucene-5.0.0.zip...
 [smoker] verify JAR metadata/identity/no javax.* or java.*
 classes...
 [smoker] test demo with 1.7...
 [smoker]   got 5569 hits for query lucene
 [smoker] checkindex with 1.7...
 [smoker] check Lucene's javadoc JAR
 [smoker]   unpack lucene-5.0.0-src.tgz...
 [smoker] make sure no JARs/WARs in src dist...
 [smoker] run ant validate
 [smoker] run tests w/ Java 7 and
 testArgs='-Dtests.jettyConnector=Socket -Dtests.multiplier=1
 -Dtests.slow=false'...
 [smoker] test demo with 1.7...
 [smoker]   got 207 hits for query lucene
 [smoker] checkindex with 1.7...
 [smoker] generate javadocs w/ Java 7...
 [smoker]
 [smoker] Crawl/parse...
 [smoker]
 [smoker] Verify...
 [smoker]   confirm all releases have coverage in
 TestBackwardsCompatibility
 [smoker] find all past Lucene releases...
 [smoker] run TestBackwardsCompatibility..
 [smoker] success!
 [smoker]
 [smoker] Test Solr...
 [smoker]   test basics...
 [smoker]   get KEYS
 [smoker] 0.1 MB in 0.00 sec (86.1 MB/sec)
 [smoker]   check changes HTML...
 [smoker]   download solr-5.0.0-src.tgz...
 [smoker] 34.1 MB in 0.04 sec (768.8 MB/sec)
 [smoker] verify md5/sha1 digests
 [smoker]   download solr-5.0.0.tgz...
 [smoker] 146.5 MB in 0.48 sec (302.1 MB/sec)
 [smoker] verify md5/sha1 digests
 [smoker]   download solr-5.0.0.zip...
 [smoker] 152.6 MB in 0.26 sec (598.2 MB/sec)
 [smoker] verify md5/sha1 digests
 [smoker]   unpack solr-5.0.0.tgz...
 [smoker] verify JAR metadata/identity/no javax.* or java.*
 classes...
 [smoker] unpack lucene-5.0.0.tgz...
 [smoker]   **WARNING**: skipping check of
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
 [smoker]   **WARNING**: skipping check of
 

[jira] [Commented] (SOLR-4792) stop shipping a war in 5.0

2014-11-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14227110#comment-14227110
 ] 

ASF subversion and git services commented on SOLR-4792:
---

Commit 1641990 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1641990 ]

SOLR-4792: Stop shipping a .war.

 stop shipping a war in 5.0
 --

 Key: SOLR-4792
 URL: https://issues.apache.org/jira/browse/SOLR-4792
 Project: Solr
  Issue Type: Task
  Components: Build
Reporter: Robert Muir
Assignee: Mark Miller
 Fix For: 5.0, Trunk

 Attachments: SOLR-4792.patch


 see the vote on the developer list.
 This is the first step: if we stop shipping a war then we are free to do 
 anything we want. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2241 - Still Failing

2014-11-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2241/

1 tests failed.
REGRESSION:  org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers

Error Message:
exception w/handler: '/admin/system'

Stack Trace:
java.lang.RuntimeException: exception w/handler: '/admin/system'
at 
__randomizedtesting.SeedInfo.seed([796D6677923004CE:9473DD4D193D054A]:0)
at 
org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers(MinimalSchemaTest.java:124)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Exception during query
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:729)
at 
org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers(MinimalSchemaTest.java:115)
... 40 more
Caused by: 

[jira] [Commented] (SOLR-4792) stop shipping a war in 5.0

2014-11-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14227114#comment-14227114
 ] 

ASF subversion and git services commented on SOLR-4792:
---

Commit 1641993 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1641993 ]

SOLR-4792: Move CHANGES entry from 6 to 5.

 stop shipping a war in 5.0
 --

 Key: SOLR-4792
 URL: https://issues.apache.org/jira/browse/SOLR-4792
 Project: Solr
  Issue Type: Task
  Components: Build
Reporter: Robert Muir
Assignee: Mark Miller
 Fix For: 5.0, Trunk

 Attachments: SOLR-4792.patch


 see the vote on the developer list.
 This is the first step: if we stop shipping a war then we are free to do 
 anything we want. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-4792) stop shipping a war in 5.0

2014-11-26 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-4792.
---
Resolution: Fixed

Thanks Ram!

 stop shipping a war in 5.0
 --

 Key: SOLR-4792
 URL: https://issues.apache.org/jira/browse/SOLR-4792
 Project: Solr
  Issue Type: Task
  Components: Build
Reporter: Robert Muir
Assignee: Mark Miller
 Fix For: 5.0, Trunk

 Attachments: SOLR-4792.patch


 see the vote on the developer list.
 This is the first step: if we stop shipping a war then we are free to do 
 anything we want. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


