[jira] [Commented] (SOLR-4872) Allow schema analysis object factories to be cleaned up properly when the core shuts down

2014-11-21 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220645#comment-14220645
 ] 

Mikhail Khludnev commented on SOLR-4872:


fwiw, if someone else needs to -shoot himself in the leg- and make a TokenizerFactory or 
TokenFilterFactory SolrCoreAware, i.e. violate the restriction described at 
https://wiki.apache.org/solr/SolrPlugins#SolrCoreAware, just make your class 
implement QueryResponseWriter and supply it with empty method impls. It will 
work until the assertion is improved.
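
A rough sketch of that workaround (not recommended practice; the class and filter names below are invented, and only the trick of adding empty QueryResponseWriter methods to slip past the SolrCoreAware check comes from the comment above):

{code:java}
import java.io.IOException;
import java.io.Writer;
import java.util.Map;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.util.TokenFilterFactory;
import org.apache.solr.common.util.NamedList;
import org.apache.solr.core.SolrCore;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.QueryResponseWriter;
import org.apache.solr.response.SolrQueryResponse;
import org.apache.solr.util.plugin.SolrCoreAware;

public class MyCoreAwareFilterFactory extends TokenFilterFactory
    implements SolrCoreAware, QueryResponseWriter {

  public MyCoreAwareFilterFactory(Map<String, String> args) {
    super(args);
  }

  // Called once the core is ready; shared state could be registered for
  // cleanup here, e.g. via core.addCloseHook(...).
  @Override
  public void inform(SolrCore core) {
  }

  @Override
  public TokenStream create(TokenStream input) {
    return input; // real filtering would go here
  }

  // Empty QueryResponseWriter methods, present only to satisfy the check.
  public void init(NamedList args) {
  }

  @Override
  public void write(Writer writer, SolrQueryRequest request, SolrQueryResponse response)
      throws IOException {
  }

  @Override
  public String getContentType(SolrQueryRequest request, SolrQueryResponse response) {
    return null;
  }
}
{code}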

 Allow schema analysis object factories to be cleaned up properly when the 
 core shuts down
 -

 Key: SOLR-4872
 URL: https://issues.apache.org/jira/browse/SOLR-4872
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.3
Reporter: Benson Margulies
 Attachments: solr-4872.patch, solr-4872.patch


 I have a need, in a TokenizerFactory or TokenFilterFactory, to have a shared 
 cache that is cleaned up when the core is torn down. 
 There is no 'close' protocol on these things, and Solr rejects analysis 
 components that are SolrCoreAware. 
 Possible solutions:
 # add a close protocol to these factories and make sure it gets called at 
 core shutdown.
 # allow these items to be 'core-aware'.
 # invent some notion of 'schema-lifecycle-aware'.
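
 As a purely hypothetical illustration of option 1 above (nothing like this exists in Solr today; the interface shape and the close() hook are invented):

{code:java}
import java.io.Closeable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.util.TokenFilterFactory;

// Hypothetical "close protocol": the core (or IndexSchema) would call close()
// on such factories once at shutdown, letting them release shared state.
public abstract class CloseableTokenFilterFactory extends TokenFilterFactory
    implements Closeable {

  // shared cache that must not outlive the core
  protected final Map<String, Object> cache = new ConcurrentHashMap<>();

  protected CloseableTokenFilterFactory(Map<String, String> args) {
    super(args);
  }

  @Override
  public void close() {
    cache.clear();
  }
}
{code}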



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5950) Move to Java 8 in trunk

2014-11-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220652#comment-14220652
 ] 

ASF subversion and git services commented on LUCENE-5950:
-

Commit 1640874 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1640874 ]

LUCENE-5950: These 2 hacks are still needed to make the whole thing to compile 
in Eclipse's compiler (IDE)

 Move to Java 8 in trunk
 ---

 Key: LUCENE-5950
 URL: https://issues.apache.org/jira/browse/LUCENE-5950
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Ryan Ernst
 Attachments: LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch, 
 LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch


 The dev list thread [VOTE] Move trunk to Java 8 passed.
 http://markmail.org/thread/zcddxioz2yvsdqkc
 This issue is to actually move trunk to java 8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_20) - Build # 11644 - Failure!

2014-11-21 Thread Uwe Schindler
Hi,

Two of the hacks (the diamond operator problem and the generics/rawtypes problem) for 
the Eclipse compiler were still needed. The only ones that were fixed recently were 
the null analysis hacks.
I committed a fix.

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de]
 Sent: Friday, November 21, 2014 7:11 AM
 To: u...@thetaphi.de; rjer...@apache.org; dev@lucene.apache.org
 Subject: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_20) - Build #
 11644 - Failure!
 
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11644/
 Java: 32bit/jdk1.8.0_20 -server -XX:+UseG1GC (asserts: false)
 
 All tests passed
 
 Build Log:
 [...truncated 43731 lines...]
 BUILD FAILED
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:515: The
 following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:79: The
 following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build.xml:188:
 The following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-
 build.xml:1893: The following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-
 build.xml:1921: Compile failed; see the compiler error output for details.
 
 Total time: 105 minutes 57 seconds
 Build step 'Invoke Ant' marked build as failure [description-setter]
 Description set: Java: 32bit/jdk1.8.0_20 -server -XX:+UseG1GC (asserts:
 false) Archiving artifacts Recording test results Email was triggered for: 
 Failure
 - Any Sending email for trigger: Failure - Any
 



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5950) Move to Java 8 in trunk

2014-11-21 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5950:
--
Labels: Java8  (was: )

 Move to Java 8 in trunk
 ---

 Key: LUCENE-5950
 URL: https://issues.apache.org/jira/browse/LUCENE-5950
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Ryan Ernst
  Labels: Java8
 Fix For: Trunk

 Attachments: LUCENE-5950-javadocpatcher.patch, LUCENE-5950.patch, 
 LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch, 
 LUCENE-5950.patch


 The dev list thread [VOTE] Move trunk to Java 8 passed.
 http://markmail.org/thread/zcddxioz2yvsdqkc
 This issue is to actually move trunk to java 8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5950) Move to Java 8 in trunk

2014-11-21 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5950:
--
Attachment: LUCENE-5950-javadocpatcher.patch

Attached is a patch that removes the Javadocs Frame Injection patcher 
(LUCENE-5072, http://www.kb.cert.org/vuls/id/225657), because Java 8 is no 
longer affected by this bug. So it's impossible to generate vulnerable Javadocs 
for Lucene once we require Java 8 as the minimum.
I will commit this in a moment.

 Move to Java 8 in trunk
 ---

 Key: LUCENE-5950
 URL: https://issues.apache.org/jira/browse/LUCENE-5950
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Ryan Ernst
  Labels: Java8
 Fix For: Trunk

 Attachments: LUCENE-5950-javadocpatcher.patch, LUCENE-5950.patch, 
 LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch, 
 LUCENE-5950.patch


 The dev list thread [VOTE] Move trunk to Java 8 passed.
 http://markmail.org/thread/zcddxioz2yvsdqkc
 This issue is to actually move trunk to java 8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5950) Move to Java 8 in trunk

2014-11-21 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5950:
--
Fix Version/s: Trunk

 Move to Java 8 in trunk
 ---

 Key: LUCENE-5950
 URL: https://issues.apache.org/jira/browse/LUCENE-5950
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Ryan Ernst
  Labels: Java8
 Fix For: Trunk

 Attachments: LUCENE-5950-javadocpatcher.patch, LUCENE-5950.patch, 
 LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch, 
 LUCENE-5950.patch


 The dev list thread [VOTE] Move trunk to Java 8 passed.
 http://markmail.org/thread/zcddxioz2yvsdqkc
 This issue is to actually move trunk to java 8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5950) Move to Java 8 in trunk

2014-11-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220668#comment-14220668
 ] 

ASF subversion and git services commented on LUCENE-5950:
-

Commit 1640876 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1640876 ]

LUCENE-5950: Remove Javadocs patcher (no longer needed with Java 8 minimum)

 Move to Java 8 in trunk
 ---

 Key: LUCENE-5950
 URL: https://issues.apache.org/jira/browse/LUCENE-5950
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Ryan Ernst
  Labels: Java8
 Fix For: Trunk

 Attachments: LUCENE-5950-javadocpatcher.patch, LUCENE-5950.patch, 
 LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch, 
 LUCENE-5950.patch


 The dev list thread [VOTE] Move trunk to Java 8 passed.
 http://markmail.org/thread/zcddxioz2yvsdqkc
 This issue is to actually move trunk to java 8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 1907 - Still Failing!

2014-11-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1907/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseG1GC (asserts: false)

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.testDistribSearch

Error Message:
There are still nodes recoverying - waited for 30 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 30 
seconds
at 
__randomizedtesting.SeedInfo.seed([90E4788EA6B697FD:1102F696D1E9F7C1]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:178)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:840)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1459)
at 
org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.doTest(DistribDocExpirationUpdateProcessorTest.java:79)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[jira] [Commented] (SOLR-2931) Statistics/aggregated values per group in a grouped response

2014-11-21 Thread Ansarul Islam Laskar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220717#comment-14220717
 ] 

Ansarul Islam Laskar commented on SOLR-2931:


I need full statistics for the grouped field, meaning every group should have 
its own statistics. Can someone come forward and look into this?

 Statistics/aggregated values per group in a grouped response
 

 Key: SOLR-2931
 URL: https://issues.apache.org/jira/browse/SOLR-2931
 Project: Solr
  Issue Type: New Feature
Reporter: Morten Lied Johansen

 We need to get minimum and maximum values for a field, within a group in a 
 grouped search-result.
 I'll flesh out our use-case a little to make our needs clearer:
 We have a number of documents, indexed with a price, date and a hotel. For 
 each hotel, there are a number of documents, each representing a price/date 
 combination. We then group our search result on hotel. We want to show the 
 minimum and maximum price for each hotel.
 Other use-cases could be to calculate an average or a sum within a group.
 We plan to work on this in the coming weeks, and will be supplying patches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6774) Stats per group

2014-11-21 Thread Ansarul Islam Laskar (JIRA)
Ansarul Islam Laskar created SOLR-6774:
--

 Summary: Stats per group
 Key: SOLR-6774
 URL: https://issues.apache.org/jira/browse/SOLR-6774
 Project: Solr
  Issue Type: New Feature
  Components: Build
Affects Versions: 4.10.2
Reporter: Ansarul Islam Laskar


There should be functionality for getting stats per group for a particular 
field or list of fields. Currently the stats component produces stats for the whole result.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5833) Suggestor Version 2 doesn't support multiValued fields

2014-11-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220747#comment-14220747
 ] 

ASF subversion and git services commented on LUCENE-5833:
-

Commit 1640886 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1640886 ]

LUCENE-5833: allow suggesters to build off of each value from multi-valued 
fields
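
A minimal sketch of what this enables (the field names "title" and "weight" are assumed, with "weight" being a numeric field used to rank suggestions): every value of a stored multiValued field now contributes a suggestion, not just the first one.

{code:java}
import java.io.IOException;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.suggest.DocumentDictionary;
import org.apache.lucene.search.suggest.analyzing.AnalyzingSuggester;

class MultiValuedSuggestSketch {
  // Builds a suggester fed by every value of the multiValued "title" field.
  static AnalyzingSuggester buildTitleSuggester(IndexReader reader, Analyzer analyzer)
      throws IOException {
    AnalyzingSuggester suggester = new AnalyzingSuggester(analyzer);
    suggester.build(new DocumentDictionary(reader, "title", "weight"));
    return suggester;
  }
}
{code}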

 Suggestor Version 2 doesn't support multiValued fields
 --

 Key: LUCENE-5833
 URL: https://issues.apache.org/jira/browse/LUCENE-5833
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/other
Affects Versions: 4.8.1
Reporter: Greg Harris
Assignee: Steve Rowe
 Attachments: LUCENE-5833.patch, LUCENE-5833.patch, LUCENE-5833.patch, 
 SOLR-6210.patch


 So if you use a multiValued field in the new suggestor it will not pick up 
 terms for any term after the first one. So it treats the first term as the 
 only term it will make its dictionary from. 
 This is the suggestor I'm talking about:
 https://issues.apache.org/jira/browse/SOLR-5378



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5833) Suggestor Version 2 doesn't support multiValued fields

2014-11-21 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-5833.

   Resolution: Fixed
Fix Version/s: Trunk
   5.0

Thanks Varun!

 Suggestor Version 2 doesn't support multiValued fields
 --

 Key: LUCENE-5833
 URL: https://issues.apache.org/jira/browse/LUCENE-5833
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/other
Affects Versions: 4.8.1
Reporter: Greg Harris
Assignee: Steve Rowe
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5833.patch, LUCENE-5833.patch, LUCENE-5833.patch, 
 SOLR-6210.patch


 So if you use a multiValued field in the new suggestor it will not pick up 
 terms for any term after the first one. So it treats the first term as the 
 only term it will make its dictionary from. 
 This is the suggestor I'm talking about:
 https://issues.apache.org/jira/browse/SOLR-5378



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5833) Suggestor Version 2 doesn't support multiValued fields

2014-11-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220749#comment-14220749
 ] 

ASF subversion and git services commented on LUCENE-5833:
-

Commit 1640887 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1640887 ]

LUCENE-5833: allow suggesters to build off of each value from multi-valued 
fields

 Suggestor Version 2 doesn't support multiValued fields
 --

 Key: LUCENE-5833
 URL: https://issues.apache.org/jira/browse/LUCENE-5833
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/other
Affects Versions: 4.8.1
Reporter: Greg Harris
Assignee: Steve Rowe
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5833.patch, LUCENE-5833.patch, LUCENE-5833.patch, 
 SOLR-6210.patch


 So if you use a multiValued field in the new suggestor it will not pick up 
 terms for any term after the first one. So it treats the first term as the 
 only term it will make its dictionary from. 
 This is the suggestor I'm talking about:
 https://issues.apache.org/jira/browse/SOLR-5378



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6763) Shard leader election thread can persist across connection loss

2014-11-21 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated SOLR-6763:

Attachment: SOLR-6763.patch

Patch with the better fix.

 Shard leader election thread can persist across connection loss
 ---

 Key: SOLR-6763
 URL: https://issues.apache.org/jira/browse/SOLR-6763
 Project: Solr
  Issue Type: Bug
Reporter: Alan Woodward
 Attachments: SOLR-6763.patch, SOLR-6763.patch


 A ZK connection loss during a call to 
 ElectionContext.waitForReplicasToComeUp() will result in two leader election 
 processes for the shard running within a single node - the initial election 
 that was waiting, and another spawned by the ReconnectStrategy.  After the 
 function returns, the first election will create an ephemeral leader node.  
 The second election will then also attempt to create this node, fail, and try 
 to put itself into recovery.  It will also set the 'isLeader' value in its 
 CloudDescriptor to false.
 The first election, meanwhile, is happily maintaining the ephemeral leader 
 node.  But any updates that are sent to the shard will cause an exception due 
 to the mismatch between the cloudstate (where this node is the leader) and 
 the local CloudDescriptor leader state.
 I think the fix is straightforward - the call to zkClient.getChildren() in 
 waitForReplicasToComeUp should be called with 'retryOnReconnect=false', 
 rather than 'true' as it is currently, because once the connection has 
 dropped we're going to launch a new election process anyway.
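
 Purely as an illustration of that one-flag change (this is not the attached patch; the surrounding method and variable names are invented, and the real SolrZkClient signature may differ slightly):

{code:java}
import java.util.List;

import org.apache.solr.common.cloud.SolrZkClient;

class WaitForReplicasSketch {
  // Inside the wait loop, ask ZK for the current election nodes without
  // retrying across a reconnect: once the connection drops, a brand new
  // election is started, so this waiting one should simply give up.
  static int countElectionNodes(SolrZkClient zkClient, String electionPath) throws Exception {
    List<String> children = zkClient.getChildren(electionPath, null, false /* retryOnReconnect */);
    return children.size();
  }
}
{code}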



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5205) SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser

2014-11-21 Thread Tim Allison (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Allison updated LUCENE-5205:

Summary: SpanQueryParser with recursion, analysis and syntax very similar 
to classic QueryParser  (was: [PATCH] SpanQueryParser with recursion, analysis 
and syntax very similar to classic QueryParser)

 SpanQueryParser with recursion, analysis and syntax very similar to classic 
 QueryParser
 ---

 Key: LUCENE-5205
 URL: https://issues.apache.org/jira/browse/LUCENE-5205
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/queryparser
Reporter: Tim Allison
  Labels: patch
 Attachments: LUCENE-5205-cleanup-tests.patch, 
 LUCENE-5205-date-pkg-prvt.patch, LUCENE-5205.patch.gz, LUCENE-5205.patch.gz, 
 LUCENE-5205_dateTestReInitPkgPrvt.patch, 
 LUCENE-5205_improve_stop_word_handling.patch, 
 LUCENE-5205_smallTestMods.patch, LUCENE_5205.patch, 
 SpanQueryParser_v1.patch.gz, patch.txt


 This parser extends QueryParserBase and includes functionality from:
 * Classic QueryParser: most of its syntax
 * SurroundQueryParser: recursive parsing for near and not clauses.
 * ComplexPhraseQueryParser: can handle near queries that include multiterms 
 (wildcard, fuzzy, regex, prefix),
 * AnalyzingQueryParser: has an option to analyze multiterms.
 At a high level, there's a first pass BooleanQuery/field parser and then a 
 span query parser handles all terminal nodes and phrases.
 Same as classic syntax:
 * term: test 
 * fuzzy: roam~0.8, roam~2
 * wildcard: te?t, test*, t*st
 * regex: /\[mb\]oat/
 * phrase: jakarta apache
 * phrase with slop: jakarta apache~3
 * default or clause: jakarta apache
 * grouping or clause: (jakarta apache)
 * boolean and +/-: (lucene OR apache) NOT jakarta; +lucene +apache -jakarta
 * multiple fields: title:lucene author:hatcher
  
 Main additions in SpanQueryParser syntax vs. classic syntax:
 * Can require in order for phrases with slop with the \~ operator: 
 jakarta apache\~3
 * Can specify not near: fever bieber!\~3,10 ::
 find fever but not if bieber appears within 3 words before or 10 
 words after it.
 * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta 
 apache\]~3 lucene\]\~4 :: 
 find jakarta within 3 words of apache, and that hit has to be within 
 four words before lucene
 * Can also use \[\] for single level phrasal queries instead of double quotes, as in: 
 \[jakarta apache\]
 * Can use or grouping clauses in phrasal queries: apache (lucene solr)\~3 
 :: find apache and then either lucene or solr within three words.
 * Can use multiterms in phrasal queries: jakarta\~1 ap*che\~2
 * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ 
 /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like jakarta within two 
 words of ap*che and that hit has to be within ten words of something like 
 solr or that lucene regex.
 * Can require at least x number of hits at boolean level: apache AND (lucene 
 solr tika)~2
 * Can use negative only query: -jakarta :: Find all docs that don't contain 
 jakarta
 * Can use an edit distance > 2 for fuzzy query via SlowFuzzyQuery (beware of 
 potential performance issues!).
 Trivial additions:
 * Can specify prefix length in fuzzy queries: jakarta~1,2 (edit distance =1, 
 prefix =2)
 * Can specify Optimal String Alignment (OSA) vs Levenshtein for distance 
 <=2: (jakarta~1 (OSA) vs jakarta~1(Levenshtein)
 This parser can be very useful for concordance tasks (see also LUCENE-5317 
 and LUCENE-5318) and for analytical search.  
 Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery.
 Most of the documentation is in the javadoc for SpanQueryParser.
 Any and all feedback is welcome.  Thank you.
 Until this is added to the Lucene project, I've added a standalone 
 lucene-addons repo (with jars compiled for the latest stable build of Lucene) 
  on [github|https://github.com/tballison/lucene-addons].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5205) SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser

2014-11-21 Thread Tim Allison (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220762#comment-14220762
 ] 

Tim Allison commented on LUCENE-5205:
-

[~modassar], no problem at all.  Thank you for your feedback.

Note that there is a distinction between single and double quotes.  Double 
quotes and square brackets should be used for phrasal/near searches.  Single 
quotes are used for tokens that should not be parsed.

For example, if you wanted to search for a path '/the/quick/brown/fox.txt', the 
single quotes tell the parser not to try to parse a regex within that term between the 
first / and the last /.  To escape a single quote within a single quoted term, double it: 
{noformat}
'bob''s' 
{noformat}

is parsed as {noformat}bob's{noformat}

Thank you, again. 

 SpanQueryParser with recursion, analysis and syntax very similar to classic 
 QueryParser
 ---

 Key: LUCENE-5205
 URL: https://issues.apache.org/jira/browse/LUCENE-5205
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/queryparser
Reporter: Tim Allison
  Labels: patch
 Attachments: LUCENE-5205-cleanup-tests.patch, 
 LUCENE-5205-date-pkg-prvt.patch, LUCENE-5205.patch.gz, LUCENE-5205.patch.gz, 
 LUCENE-5205_dateTestReInitPkgPrvt.patch, 
 LUCENE-5205_improve_stop_word_handling.patch, 
 LUCENE-5205_smallTestMods.patch, LUCENE_5205.patch, 
 SpanQueryParser_v1.patch.gz, patch.txt


 This parser extends QueryParserBase and includes functionality from:
 * Classic QueryParser: most of its syntax
 * SurroundQueryParser: recursive parsing for near and not clauses.
 * ComplexPhraseQueryParser: can handle near queries that include multiterms 
 (wildcard, fuzzy, regex, prefix),
 * AnalyzingQueryParser: has an option to analyze multiterms.
 At a high level, there's a first pass BooleanQuery/field parser and then a 
 span query parser handles all terminal nodes and phrases.
 Same as classic syntax:
 * term: test 
 * fuzzy: roam~0.8, roam~2
 * wildcard: te?t, test*, t*st
 * regex: /\[mb\]oat/
 * phrase: jakarta apache
 * phrase with slop: jakarta apache~3
 * default or clause: jakarta apache
 * grouping or clause: (jakarta apache)
 * boolean and +/-: (lucene OR apache) NOT jakarta; +lucene +apache -jakarta
 * multiple fields: title:lucene author:hatcher
  
 Main additions in SpanQueryParser syntax vs. classic syntax:
 * Can require in order for phrases with slop with the \~ operator: 
 jakarta apache\~3
 * Can specify not near: fever bieber!\~3,10 ::
 find fever but not if bieber appears within 3 words before or 10 
 words after it.
 * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta 
 apache\]~3 lucene\]\~4 :: 
 find jakarta within 3 words of apache, and that hit has to be within 
 four words before lucene
 * Can also use \[\] for single level phrasal queries instead of double quotes, as in: 
 \[jakarta apache\]
 * Can use or grouping clauses in phrasal queries: apache (lucene solr)\~3 
 :: find apache and then either lucene or solr within three words.
 * Can use multiterms in phrasal queries: jakarta\~1 ap*che\~2
 * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ 
 /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like jakarta within two 
 words of ap*che and that hit has to be within ten words of something like 
 solr or that lucene regex.
 * Can require at least x number of hits at boolean level: apache AND (lucene 
 solr tika)~2
 * Can use negative only query: -jakarta :: Find all docs that don't contain 
 jakarta
 * Can use an edit distance > 2 for fuzzy query via SlowFuzzyQuery (beware of 
 potential performance issues!).
 Trivial additions:
 * Can specify prefix length in fuzzy queries: jakarta~1,2 (edit distance =1, 
 prefix =2)
 * Can specify Optimal String Alignment (OSA) vs Levenshtein for distance 
 <=2: (jakarta~1 (OSA) vs jakarta~1(Levenshtein)
 This parser can be very useful for concordance tasks (see also LUCENE-5317 
 and LUCENE-5318) and for analytical search.  
 Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery.
 Most of the documentation is in the javadoc for SpanQueryParser.
 Any and all feedback is welcome.  Thank you.
 Until this is added to the Lucene project, I've added a standalone 
 lucene-addons repo (with jars compiled for the latest stable build of Lucene) 
  on [github|https://github.com/tballison/lucene-addons].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b34) - Build # 11645 - Still Failing!

2014-11-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11645/
Java: 32bit/jdk1.9.0-ea-b34 -server -XX:+UseG1GC (asserts: true)

1 tests failed.
REGRESSION:  
org.apache.solr.handler.TestSolrConfigHandlerConcurrent.testDistribSearch

Error Message:
[[size property not set, expected = 11, actual null, initialSize property not 
set, expected = 12, actual null, autowarmCount property not set, expected = 13, 
actual null], [], [], []]

Stack Trace:
java.lang.AssertionError: [[size property not set, expected = 11, actual null, 
initialSize property not set, expected = 12, actual null, autowarmCount 
property not set, expected = 13, actual null], [], [], []]
at 
__randomizedtesting.SeedInfo.seed([FE0B351C9DE2623F:7FEDBB04EABD0203]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.handler.TestSolrConfigHandlerConcurrent.doTest(TestSolrConfigHandlerConcurrent.java:112)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor51.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 

[jira] [Commented] (SOLR-6533) Support editing common solrconfig.xml values

2014-11-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220862#comment-14220862
 ] 

ASF subversion and git services commented on SOLR-6533:
---

Commit 1640909 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1640909 ]

SOLR-6533 fixing test failures 
http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11645

 Support editing common solrconfig.xml values
 

 Key: SOLR-6533
 URL: https://issues.apache.org/jira/browse/SOLR-6533
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
 Attachments: SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch, 
 SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch, 
 SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch


 There are a bunch of properties in solrconfig.xml which users want to edit. 
 We will attack those first.
 These properties will be persisted to a separate file called config.json (or 
 whatever file). Instead of saving them in the same format, we will have well-known 
 properties which users can directly edit:
 {code}
 updateHandler.autoCommit.maxDocs
 query.filterCache.initialSize
 {code}   
 The api will be modeled around the bulk schema API
 {code:javascript}
 curl http://localhost:8983/solr/collection1/config -H 
 'Content-type:application/json'  -d '{
 "set-property" : {"updateHandler.autoCommit.maxDocs":5},
 "unset-property": "updateHandler.autoCommit.maxDocs"
 }'
 {code}
 {code:javascript}
 // or use this to set ${mypropname} values
 curl http://localhost:8983/solr/collection1/config -H 
 'Content-type:application/json'  -d '{
 "set-user-property" : {"mypropname":"my_prop_val"},
 "unset-user-property": "mypropname"
 }'
 {code}
 The values stored in the config.json will always take precedence and will be 
 applied after loading solrconfig.xml. 
 An HTTP GET on the /config path will give the real config that is applied. 
 An HTTP GET of /config/overlay gives the content of the configOverlay.json.
 /config/component-name gives only the child of the same name from /config.
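
 For example, the read side described above could be exercised like this (host and collection name assumed):

 {code}
 # full effective config (solrconfig.xml with the overlay applied)
 curl http://localhost:8983/solr/collection1/config
 # only the overlay, i.e. the contents of configOverlay.json
 curl http://localhost:8983/solr/collection1/config/overlay
 # a single component, e.g. the updateHandler section
 curl http://localhost:8983/solr/collection1/config/updateHandler
 {code}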



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6067) Change Accountable.getChildResources to return empty list by default

2014-11-21 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-6067:
---

 Summary: Change Accountable.getChildResources to return empty list 
by default
 Key: LUCENE-6067
 URL: https://issues.apache.org/jira/browse/LUCENE-6067
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir
 Fix For: Trunk


This is the typical case, and defaulting to it makes this accounting api much 
less invasive on the codebase.
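
 A minimal sketch of the idea (assuming the Accountable interface of this era; the actual change is in the attached patch): with Java 8 now the minimum, the method can get a default implementation so the common case needs no override.

{code:java}
import java.util.Collection;
import java.util.Collections;

public interface Accountable {
  /** Memory usage of this object in bytes. */
  long ramBytesUsed();

  /** Nested resources; most implementations have none, so default to empty. */
  default Collection<Accountable> getChildResources() {
    return Collections.emptyList();
  }
}
{code}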




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6067) Change Accountable.getChildResources to return empty list by default

2014-11-21 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-6067:

Attachment: LUCENE-6067.patch

attached is a patch.

 Change Accountable.getChildResources to return empty list by default
 

 Key: LUCENE-6067
 URL: https://issues.apache.org/jira/browse/LUCENE-6067
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir
 Fix For: Trunk

 Attachments: LUCENE-6067.patch


 This is the typical case, and defaulting to it makes this accounting api much 
 less invasive on the codebase.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6067) Change Accountable.getChildResources to return empty list by default

2014-11-21 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220999#comment-14220999
 ] 

Adrien Grand commented on LUCENE-6067:
--

+1 this is just great

 Change Accountable.getChildResources to return empty list by default
 

 Key: LUCENE-6067
 URL: https://issues.apache.org/jira/browse/LUCENE-6067
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir
 Fix For: Trunk

 Attachments: LUCENE-6067.patch


 This is the typical case, and defaulting to it makes this accounting api much 
 less invasive on the codebase.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6775) Creating backup snapshot null pointer exception

2014-11-21 Thread Ryan Hesson (JIRA)
Ryan Hesson created SOLR-6775:
-

 Summary: Creating backup snapshot null pointer exception
 Key: SOLR-6775
 URL: https://issues.apache.org/jira/browse/SOLR-6775
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 4.10
 Environment: Linux Server, Java version 1.7.0_21, Solr version 4.10.0
Reporter: Ryan Hesson


I set up Solr Replication. I have one master on a server, one slave on another 
server. The replication of data appears functioning correctly. The issue is 
when the master SOLR tries to create a snapshot backup it gets a null pointer 
exception. 

org.apache.solr.handler.SnapShooter createSnapshot method calls 
org.apache.solr.handler.SnapPuller.delTree(snapShotDir); at line 162 and the 
exception happens within  org.apache.solr.handler.SnapPuller at line 1026 
because snapShotDir is null. 
Here is the actual log output:

58319963 [qtp12610551-16] INFO  org.apache.solr.core.SolrCore  - newest commit 
generation = 349
58319983 [Thread-19] INFO  org.apache.solr.handler.SnapShooter  - Creating 
backup snapshot...
Exception in thread Thread-19 java.lang.NullPointerException
at org.apache.solr.handler.SnapPuller.delTree(SnapPuller.java:1026)
at 
org.apache.solr.handler.SnapShooter.createSnapshot(SnapShooter.java:162)
at org.apache.solr.handler.SnapShooter$1.run(SnapShooter.java:91)

I may have missed how to set the directory in the documentation but I've looked 
around without much luck. I thought the process was to use the same directory 
as the index data for the snapshots. Is this a known issue with this release or 
am I missing how to set the value? If someone could tell me how to set 
snapshotdir or confirm that it is an issue and a different way of backing up 
the index is needed it would be much appreciated. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6766) Switch o.a.s.store.blockcache.Metrics to use JMX

2014-11-21 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14221011#comment-14221011
 ] 

Mark Miller commented on SOLR-6766:
---

There should normally be just one block cache now that the global block cache is 
enabled by default. The per-directory version should be a special case at best. 
Perhaps the HdfsDirectoryFactory could simply track all the Metrics objects and 
spit out the stats for the caches (normally 1) that it tracks. If everything is 
keyed on HdfsDirectoryFactory as the info bean, nothing else has to be changed. 
Just spitballing though.

 Switch o.a.s.store.blockcache.Metrics to use JMX
 

 Key: SOLR-6766
 URL: https://issues.apache.org/jira/browse/SOLR-6766
 Project: Solr
  Issue Type: Bug
Reporter: Mike Drob
  Labels: metrics

 The Metrics class currently reports to hadoop metrics, but it would be better 
 to report to JMX.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6683) Need a configurable parameter to control the doc number between peersync and the snapshot pull recovery

2014-11-21 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14221022#comment-14221022
 ] 

Ramkumar Aiyengar commented on SOLR-6683:
-

Does the patch on SOLR-6359 suffice for you?

 Need a configurable parameter to control the doc number between peersync and 
 the snapshot pull recovery
 ---

 Key: SOLR-6683
 URL: https://issues.apache.org/jira/browse/SOLR-6683
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 4.7
 Environment: Redhat Linux 64bit
Reporter: Forest Soup
Priority: Critical
  Labels: performance

 If there are 100 docs gap between the recovering node and the good node, the 
 solr will do snap pull recovery instead of peersync.
 Can the 100 docs be configurable? For example, there can be 1, 1000, or 
 10 docs gap between the good node and the node to recover.
 For a 100 doc gap, a regular restart of a solr node will trigger a full recovery, 
 which has a huge impact on the performance of the running systems.
 Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.7.0_67) - Build # 4339 - Failure!

2014-11-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4339/
Java: 32bit/jdk1.7.0_67 -client -XX:+UseSerialGC (asserts: true)

No tests ran.

Build Log:
[...truncated 11729 lines...]
FATAL: java.io.IOException: Unexpected termination of the channel
hudson.remoting.RequestAbortedException: java.io.IOException: Unexpected 
termination of the channel
at hudson.remoting.Request.abort(Request.java:295)
at hudson.remoting.Channel.terminate(Channel.java:814)
at 
hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:69)
at ..remote call to Windows VBOX(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1356)
at hudson.remoting.Request.call(Request.java:171)
at hudson.remoting.Channel.call(Channel.java:751)
at 
hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:173)
at com.sun.proxy.$Proxy71.join(Unknown Source)
at hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:979)
at hudson.Launcher$ProcStarter.join(Launcher.java:388)
at hudson.tasks.Ant.perform(Ant.java:217)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:770)
at hudson.model.Build$BuildExecution.build(Build.java:199)
at hudson.model.Build$BuildExecution.doRun(Build.java:160)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:533)
at hudson.model.Run.execute(Run.java:1759)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:89)
at hudson.model.Executor.run(Executor.java:240)
Caused by: java.io.IOException: Unexpected termination of the channel
at 
hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:50)
Caused by: java.io.EOFException
at 
java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2325)
at 
java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2794)
at 
java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:801)
at java.io.ObjectInputStream.init(ObjectInputStream.java:299)
at 
hudson.remoting.ObjectInputStreamEx.init(ObjectInputStreamEx.java:40)
at 
hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34)
at 
hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:48)



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-6775) Creating backup snapshot null pointer exception

2014-11-21 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14221100#comment-14221100
 ] 

Varun Thacker commented on SOLR-6775:
-

Hi Ryan,

Can you copy-paste your replication handler settings and the API call that you 
made? I will try to reproduce it.

 Creating backup snapshot null pointer exception
 ---

 Key: SOLR-6775
 URL: https://issues.apache.org/jira/browse/SOLR-6775
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 4.10
 Environment: Linux Server, Java version 1.7.0_21, Solr version 
 4.10.0
Reporter: Ryan Hesson
  Labels: snapshot, solr

 I set up Solr Replication. I have one master on a server, one slave on 
 another server. The replication of data appears functioning correctly. The 
 issue is when the master SOLR tries to create a snapshot backup it gets a 
 null pointer exception. 
 org.apache.solr.handler.SnapShooter createSnapshot method calls 
 org.apache.solr.handler.SnapPuller.delTree(snapShotDir); at line 162 and the 
 exception happens within  org.apache.solr.handler.SnapPuller at line 1026 
 because snapShotDir is null. 
 Here is the actual log output:
 58319963 [qtp12610551-16] INFO  org.apache.solr.core.SolrCore  - newest 
 commit generation = 349
 58319983 [Thread-19] INFO  org.apache.solr.handler.SnapShooter  - Creating 
 backup snapshot...
 Exception in thread Thread-19 java.lang.NullPointerException
 at org.apache.solr.handler.SnapPuller.delTree(SnapPuller.java:1026)
 at 
 org.apache.solr.handler.SnapShooter.createSnapshot(SnapShooter.java:162)
 at org.apache.solr.handler.SnapShooter$1.run(SnapShooter.java:91)
 I may have missed how to set the directory in the documentation but I've 
 looked around without much luck. I thought the process was to use the same 
 directory as the index data for the snapshots. Is this a known issue with 
 this release or am I missing how to set the value? If someone could tell me 
 how to set snapshotdir or confirm that it is an issue and a different way of 
 backing up the index is needed it would be much appreciated. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6766) Switch o.a.s.store.blockcache.Metrics to use JMX

2014-11-21 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-6766:

Attachment: SOLR-6766.patch

This patch depends on the one on SOLR-6752 being applied first.

Make HdfsDirectoryFactory implement SolrInfoMBean and expose metrics that way. I 
think it works for both the global cache and local cache options because it 
uses the same Metrics object for everything.

 Switch o.a.s.store.blockcache.Metrics to use JMX
 

 Key: SOLR-6766
 URL: https://issues.apache.org/jira/browse/SOLR-6766
 Project: Solr
  Issue Type: Bug
Reporter: Mike Drob
  Labels: metrics
 Attachments: SOLR-6766.patch


 The Metrics class currently reports to hadoop metrics, but it would be better 
 to report to JMX.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6766) Switch o.a.s.store.blockcache.Metrics to use JMX

2014-11-21 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14221115#comment-14221115
 ] 

Mark Miller commented on SOLR-6766:
---

bq.  I think it works for both the global cache and local cache options because 
it uses the same Metrics object for everything.

Oh cool, well that simplifies things. I'll take a look at this shortly.

 Switch o.a.s.store.blockcache.Metrics to use JMX
 

 Key: SOLR-6766
 URL: https://issues.apache.org/jira/browse/SOLR-6766
 Project: Solr
  Issue Type: Bug
Reporter: Mike Drob
  Labels: metrics
 Attachments: SOLR-6766.patch


 The Metrics class currently reports to hadoop metrics, but it would be better 
 to report to JMX.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6766) Switch o.a.s.store.blockcache.Metrics to use JMX

2014-11-21 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14221124#comment-14221124
 ] 

Mike Drob commented on SOLR-6766:
-

I didn't rename any of the metrics yet because that should be a fairly easy 
change to make. Trying to get the structure of the changeset correct, first.

 Switch o.a.s.store.blockcache.Metrics to use JMX
 

 Key: SOLR-6766
 URL: https://issues.apache.org/jira/browse/SOLR-6766
 Project: Solr
  Issue Type: Bug
Reporter: Mike Drob
  Labels: metrics
 Attachments: SOLR-6766.patch


 The Metrics class currently reports to hadoop metrics, but it would be better 
 to report to JMX.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5950) Move to Java 8 in trunk

2014-11-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14221129#comment-14221129
 ] 

ASF subversion and git services commented on LUCENE-5950:
-

Commit 1640958 from [~rjernst] in branch 'dev/trunk'
[ https://svn.apache.org/r1640958 ]

LUCENE-5950: Update changes entries

 Move to Java 8 in trunk
 ---

 Key: LUCENE-5950
 URL: https://issues.apache.org/jira/browse/LUCENE-5950
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Ryan Ernst
  Labels: Java8
 Fix For: Trunk

 Attachments: LUCENE-5950-javadocpatcher.patch, LUCENE-5950.patch, 
 LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch, 
 LUCENE-5950.patch


 The dev list thread [VOTE] Move trunk to Java 8 passed.
 http://markmail.org/thread/zcddxioz2yvsdqkc
 This issue is to actually move trunk to java 8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6761) Ability to ignore commit and optimize requests from clients when running in SolrCloud mode.

2014-11-21 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14221132#comment-14221132
 ] 

Mark Miller commented on SOLR-6761:
---

+1, good idea Tim. I think it can make sense to use commitWithin from the 
client exclusively with SolrCloud, but only when a knowledgeable/expert 
person/team owns the service. That is very often not the case due to a variety 
of reasons in my experience. Solr is often deployed in situations where an 
administrator needs to protect the service from a variety of users with varying 
expertise.

I agree with Ram though - I think it makes more sense to make sure the client 
knows it cannot call commit and adjusts behavior. We just need a useful error 
message.

 Ability to ignore commit and optimize requests from clients when running in 
 SolrCloud mode.
 ---

 Key: SOLR-6761
 URL: https://issues.apache.org/jira/browse/SOLR-6761
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud, SolrJ
Reporter: Timothy Potter

 In most SolrCloud environments, it's advisable to only rely on auto-commits 
 (soft and hard) configured in solrconfig.xml and not send explicit commit 
 requests from client applications. In fact, I've seen cases where improperly 
 coded client applications can send commit requests too frequently, which can 
 lead to harming the cluster's health. 
 As a system administrator, I'd like the ability to disallow commit requests 
 from client applications. Ideally, I could configure the updateHandler to 
 ignore the requests and return an HTTP response code of my choosing as I may 
 not want to break existing client applications by returning an error. In 
 other words, I may want to just return 200 vs. 405. The same goes for 
 optimize requests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6775) Creating backup snapshot null pointer exception

2014-11-21 Thread Ryan Hesson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14221134#comment-14221134
 ] 

Ryan Hesson commented on SOLR-6775:
---

Sure, I really appreciate the help. Please let me know if I can share any more 
information. 

Thank you for your time,

Ryan

This is my setting on the master Solr index: 

  <requestHandler name="/replication" class="solr.ReplicationHandler">
    <lst name="master">
      <str name="replicateAfter">commit</str>
      <str name="replicateAfter">startup</str>
      <str name="backupAfter">optimize</str>
      <str name="confFiles">schema.xml,stopwords.txt</str>
    </lst>
    <int name="maxNumberOfBackups">2</int>
  </requestHandler>

This is my setting on the slave Solr index:

  <requestHandler name="/replication" class="solr.ReplicationHandler">
    <lst name="slave">
      <str name="masterUrl">[Server URL]:8983/solr/[Core Name]</str>
      <str name="pollInterval">01:00:00</str>
    </lst>
  </requestHandler>

As far as the API calls go, I'm adding/updating documents to the core and then issuing a commit. Here's additional log info if it helps:

58316473 [qtp12610551-16] INFO  
org.apache.solr.update.processor.LogUpdateProcessor  – [Core Name] webapp=/solr 
path=/update params={wt=javabinversion=2} {add=[559991 (1485389611244978176)]} 
0 14
58316500 [qtp12610551-16] INFO  
org.apache.solr.update.processor.LogUpdateProcessor  – [Core Name] webapp=/solr 
path=/update params={wt=javabinversion=2} {add=[560030 (1485389611278532608)]} 
0 9
58316507 [qtp12610551-16] INFO  
org.apache.solr.update.processor.LogUpdateProcessor  – [Core Name] webapp=/solr 
path=/update params={wt=javabinversion=2} {add=[539417 (1485389611289018368)]} 
0 4
58316523 [qtp12610551-16] INFO  
org.apache.solr.update.processor.LogUpdateProcessor  – [Core Name] webapp=/solr 
path=/update params={wt=javabinversion=2} {add=[568646 (1485389611304747008)]} 
0 5
58316537 [qtp12610551-16] INFO  
org.apache.solr.update.processor.LogUpdateProcessor  – [Core Name] webapp=/solr 
path=/update params={wt=javabinversion=2} {add=[635394 (1485389611318378496)]} 
0 6
58316677 [qtp12610551-16] INFO  org.apache.solr.update.UpdateHandler  – start 
commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
58316714 [qtp12610551-16] INFO  org.apache.solr.core.SolrCore  – 
SolrDeletionPolicy.onCommit: commits: num=2

commit{dir=NRTCachingDirectory(NIOFSDirectory@/index/solr-4.10.0/[Company 
Name]/[Core Name]/data/index 
lockFactory=NativeFSLockFactory@/index/solr-4.10.0/[Company Name]/[Core 
Name]/data/index; maxCacheMB=48.0 
maxMergeSizeMB=4.0),segFN=segments_9n,generation=347}

commit{dir=NRTCachingDirectory(NIOFSDirectory@/index/solr-4.10.0/[Company 
Name]/[Core Name]/data/index 
lockFactory=NativeFSLockFactory@/index/solr-4.10.0/[Company Name]/[Core 
Name]/data/index; maxCacheMB=48.0 
maxMergeSizeMB=4.0),segFN=segments_9o,generation=348}
58316714 [qtp12610551-16] INFO  org.apache.solr.core.SolrCore  – newest commit 
generation = 348
58316717 [qtp12610551-16] INFO  org.apache.solr.search.SolrIndexSearcher  – 
Opening Searcher@1ff79df[Core Name] main
58316721 [qtp12610551-16] INFO  org.apache.solr.update.UpdateHandler  – 
end_commit_flush
58316721 [searcherExecutor-6-thread-1] INFO  org.apache.solr.core.SolrCore  – 
QuerySenderListener sending requests to Searcher@1ff79df[Core Name] 
main{StandardDirectoryReader(segments_9o:1720:nrt _ik(4.10.0):C90144/5:delGen=1 
_il(4.10.0):C5)}
58316721 [searcherExecutor-6-thread-1] INFO  org.apache.solr.core.SolrCore  – 
QuerySenderListener done.
58316722 [searcherExecutor-6-thread-1] INFO  org.apache.solr.core.SolrCore  – 
[Core Name] Registered new searcher Searcher@1ff79df[Core Name] 
main{StandardDirectoryReader(segments_9o:1720:nrt _ik(4.10.0):C90144/5:delGen=1 
_il(4.10.0):C5)}
58316735 [qtp12610551-16] INFO  
org.apache.solr.update.processor.LogUpdateProcessor  – [Core Name] webapp=/solr 
path=/update 
params={waitSearcher=truecommit=truewt=javabinversion=2softCommit=false} 
{commit=} 0 58
58316737 [qtp12610551-16] INFO  org.apache.solr.update.UpdateHandler  – start 
commit{,optimize=true,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
58319963 [qtp12610551-16] INFO  org.apache.solr.core.SolrCore  – 
SolrDeletionPolicy.onCommit: commits: num=2

commit{dir=NRTCachingDirectory(NIOFSDirectory@/index/solr-4.10.0/[Company 
Name]/[Core Name]/data/index 
lockFactory=NativeFSLockFactory@/index/solr-4.10.0/[Company Name]/[Core 
Name]/data/index; maxCacheMB=48.0 
maxMergeSizeMB=4.0),segFN=segments_9o,generation=348}

commit{dir=NRTCachingDirectory(NIOFSDirectory@/index/solr-4.10.0/[Company 
Name]/[Core Name]/data/index 
lockFactory=NativeFSLockFactory@/index/solr-4.10.0/[Company Name]/[Core 
Name]/data/index; maxCacheMB=48.0 
maxMergeSizeMB=4.0),segFN=segments_9p,generation=349}
58319963 

[jira] [Commented] (SOLR-6761) Ability to ignore commit and optimize requests from clients when running in SolrCloud mode.

2014-11-21 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14221142#comment-14221142
 ] 

Mark Miller commented on SOLR-6761:
---

I don't see why silent fail couldn't be a config option though. There probably 
are Solr administrators who would like to try to address this without breaking 
all of their clients. It's fairly dangerous if any clients were counting on that 
behavior though. I think it should come with a big fat warning at least.

 Ability to ignore commit and optimize requests from clients when running in 
 SolrCloud mode.
 ---

 Key: SOLR-6761
 URL: https://issues.apache.org/jira/browse/SOLR-6761
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud, SolrJ
Reporter: Timothy Potter

 In most SolrCloud environments, it's advisable to only rely on auto-commits 
 (soft and hard) configured in solrconfig.xml and not send explicit commit 
 requests from client applications. In fact, I've seen cases where improperly 
 coded client applications can send commit requests too frequently, which can 
 lead to harming the cluster's health. 
 As a system administrator, I'd like the ability to disallow commit requests 
 from client applications. Ideally, I could configure the updateHandler to 
 ignore the requests and return an HTTP response code of my choosing as I may 
 not want to break existing client applications by returning an error. In 
 other words, I may want to just return 200 vs. 405. The same goes for 
 optimize requests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-5.x #764: POMs out of sync

2014-11-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-5.x/764/

3 tests failed.
FAILED:  
org.apache.solr.hadoop.MorphlineMapperTest.org.apache.solr.hadoop.MorphlineMapperTest

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
at __randomizedtesting.SeedInfo.seed([CB2E5E4C7D12691B]:0)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.before(TestRuleTemporaryFilesCleanup.java:92)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.before(TestRuleAdapter.java:26)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:35)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.hadoop.MorphlineBasicMiniMRTest.testPathParts

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([1B1EAF446F632EA5]:0)


FAILED:  
org.apache.solr.hadoop.MorphlineBasicMiniMRTest.org.apache.solr.hadoop.MorphlineBasicMiniMRTest

Error Message:
Suite timeout exceeded (= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (= 720 msec).
at __randomizedtesting.SeedInfo.seed([1B1EAF446F632EA5]:0)




Build Log:
[...truncated 53921 lines...]
BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:548: 
The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:200: 
The following error occurred while executing this line:
: Java returned: 1

Total time: 400 minutes 49 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-6761) Ability to ignore commit and optimize requests from clients when running in SolrCloud mode.

2014-11-21 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14221144#comment-14221144
 ] 

Hoss Man commented on SOLR-6761:


This would be fairly easy to implement as an UpdateProcessor, which would also 
give you an easy way to enable/configure it (I thought we already had an open 
issue for that, but I may just be thinking of the issue about killing Optimize).
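
For illustration only, here is a minimal sketch of what such an update processor could look like (the class name is made up, and the "silently swallow" policy is exactly the open question in this thread, not an agreed design):

  import java.io.IOException;

  import org.apache.solr.request.SolrQueryRequest;
  import org.apache.solr.response.SolrQueryResponse;
  import org.apache.solr.update.CommitUpdateCommand;
  import org.apache.solr.update.processor.UpdateRequestProcessor;
  import org.apache.solr.update.processor.UpdateRequestProcessorFactory;

  /**
   * Hypothetical sketch: drops client-issued commits (and optimizes, which
   * arrive as a commit command with optimize=true) so that only the
   * auto-commit settings in solrconfig.xml trigger commits.
   */
  public class IgnoreCommitUpdateProcessorFactory extends UpdateRequestProcessorFactory {
    @Override
    public UpdateRequestProcessor getInstance(SolrQueryRequest req, SolrQueryResponse rsp,
                                              UpdateRequestProcessor next) {
      return new UpdateRequestProcessor(next) {
        @Override
        public void processCommit(CommitUpdateCommand cmd) throws IOException {
          // Intentionally do not call super.processCommit(cmd); the command is
          // dropped here and never reaches the rest of the chain.
        }
      };
    }
  }

It would then be wired into an updateRequestProcessorChain in solrconfig.xml, which also gives the enable/configure switch mentioned above.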

 Ability to ignore commit and optimize requests from clients when running in 
 SolrCloud mode.
 ---

 Key: SOLR-6761
 URL: https://issues.apache.org/jira/browse/SOLR-6761
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud, SolrJ
Reporter: Timothy Potter

 In most SolrCloud environments, it's advisable to only rely on auto-commits 
 (soft and hard) configured in solrconfig.xml and not send explicit commit 
 requests from client applications. In fact, I've seen cases where improperly 
 coded client applications can send commit requests too frequently, which can 
 lead to harming the cluster's health. 
 As a system administrator, I'd like the ability to disallow commit requests 
 from client applications. Ideally, I could configure the updateHandler to 
 ignore the requests and return an HTTP response code of my choosing as I may 
 not want to break existing client applications by returning an error. In 
 other words, I may want to just return 200 vs. 405. The same goes for 
 optimize requests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b34) - Build # 11646 - Still Failing!

2014-11-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11646/
Java: 32bit/jdk1.9.0-ea-b34 -server -XX:+UseG1GC (asserts: true)

All tests passed

Build Log:
[...truncated 26522 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:515: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:86: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build.xml:101: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:2293:
 Parsing signatures failed: No method found with following signature: 
java.util.jar.Pack200$Packer#addPropertyChangeListener(java.beans.PropertyChangeListener)

Total time: 115 minutes 31 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 32bit/jdk1.9.0-ea-b34 -server 
-XX:+UseG1GC (asserts: true)
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 1950 - Still Failing!

2014-11-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1950/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC (asserts: false)

1 tests failed.
REGRESSION:  org.apache.solr.cloud.ShardSplitTest.testDistribSearch

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:55387/a/sm

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:55387/a/sm
at 
__randomizedtesting.SeedInfo.seed([416F849DD0FA95C2:C0890A85A7A5F5FE]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:581)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.cloud.ShardSplitTest.splitShard(ShardSplitTest.java:532)
at 
org.apache.solr.cloud.ShardSplitTest.incompleteOrOverlappingCustomRangeTest(ShardSplitTest.java:138)
at org.apache.solr.cloud.ShardSplitTest.doTest(ShardSplitTest.java:103)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor109.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[jira] [Resolved] (LUCENE-5950) Move to Java 8 in trunk

2014-11-21 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst resolved LUCENE-5950.

Resolution: Fixed
  Assignee: Ryan Ernst

 Move to Java 8 in trunk
 ---

 Key: LUCENE-5950
 URL: https://issues.apache.org/jira/browse/LUCENE-5950
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Ryan Ernst
Assignee: Ryan Ernst
  Labels: Java8
 Fix For: Trunk

 Attachments: LUCENE-5950-javadocpatcher.patch, LUCENE-5950.patch, 
 LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch, 
 LUCENE-5950.patch


 The dev list thread [VOTE] Move trunk to Java 8 passed.
 http://markmail.org/thread/zcddxioz2yvsdqkc
 This issue is to actually move trunk to java 8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 5 - Still Failing

2014-11-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/5/

2 tests failed.
REGRESSION:  org.apache.solr.TestDistributedGrouping.testDistribSearch

Error Message:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[https://[ff01::114]:2, https://[ff01::083]:2, 
https://127.0.0.1:37975, https://[ff01::213]:2]

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: 
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[https://[ff01::114]:2, https://[ff01::083]:2, 
https://127.0.0.1:37975, https://[ff01::213]:2]
at 
__randomizedtesting.SeedInfo.seed([103A566D0461F871:91DCD875733E984D]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:569)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at 
org.apache.solr.BaseDistributedSearchTestCase.queryServer(BaseDistributedSearchTestCase.java:512)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:560)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:542)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:521)
at 
org.apache.solr.TestDistributedGrouping.doTest(TestDistributedGrouping.java:206)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:875)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 

RE: [JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 5 - Still Failing

2014-11-21 Thread Uwe Schindler
Hi,

you might have noticed, Lucene trunk is now on Java 8. Policeman Jenkins was 
already changed to handle that, builds are succeeding perfectly.

Unfortunately, FreeBSD Jenkins has a problem: OpenJDK 8, namely the FreeBSD 
port, always crashes while running Solr tests. The reason seems to be a 
problem in the networking layer, which is heavily modified from default OpenJDK. 
So there seems to be a bug in the patch that FreeBSD applies to OpenJDK.

I already updated to OpenJDK 8u25, but this did not help. I will open an issue 
at the FreeBSD website, but the problem is: the crash always looks different 
(sometimes SIGBUS, sometimes SIGSEGV, sometimes crashing inside free()) - I 
have also seen a failed malloc() call [caused by a corrupted heap].

For now I disabled the test builds (nightly-smoke, nightly-tests, and 
hourly-tests). The artifact builds are still enabled, so people can download 
nightly trunk artifacts. The maven build that publishes to ASF snapshot repo 
was modified to have -DskipTests passed to ANT/Maven. So it runs and deploys 
the artifacts.

I am not sure if we can solve the FreeBSD issues soon, so I would suggest that 
we ask ASF INFRA if they could provide us with an Ubuntu VM so we can 
migrate our builds there. I have to check with them to get enough CPU power and 
free disk space, so we can run the heavy nightly tests. The other good thing 
would be: no more blackhole :-)

Comments?

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Apache Jenkins Server [mailto:jenk...@builds.apache.org]
 Sent: Friday, November 21, 2014 8:02 PM
 To: dev@lucene.apache.org
 Subject: [JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 5 - Still Failing
 
 Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/5/
 
 2 tests failed.
 REGRESSION:  org.apache.solr.TestDistributedGrouping.testDistribSearch
 
 Error Message:
 org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
 available
 to handle this request:[https://[ff01::114]:2, https://[ff01::083]:2,
 https://127.0.0.1:37975, https://[ff01::213]:2]
 
 Stack Trace:
 org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:
 org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
 available
 to handle this request:[https://[ff01::114]:2, https://[ff01::083]:2,
 https://127.0.0.1:37975, https://[ff01::213]:2]
   at
 __randomizedtesting.SeedInfo.seed([103A566D0461F871:91DCD875733E984
 D]:0)
   at
 org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrSer
 ver.java:569)
   at
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:
 215)
   at
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:
 211)
   at
 org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.ja
 va:91)
   at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
   at
 org.apache.solr.BaseDistributedSearchTestCase.queryServer(BaseDistribute
 dSearchTestCase.java:512)
   at
 org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearc
 hTestCase.java:560)
   at
 org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearc
 hTestCase.java:542)
   at
 org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearc
 hTestCase.java:521)
   at
 org.apache.solr.TestDistributedGrouping.doTest(TestDistributedGrouping.ja
 va:206)
   at
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistri
 butedSearchTestCase.java:875)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.j
 ava:62)
   at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
 sorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:483)
   at
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(Randomize
 dRunner.java:1618)
   at
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(Rando
 mizedRunner.java:827)
   at
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(Rando
 mizedRunner.java:863)
   at
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(Rando
 mizedRunner.java:877)
   at
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.
 evaluate(SystemPropertiesRestoreRule.java:53)
   at
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRule
 SetupTeardownChained.java:50)
   at
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeA
 fterRule.java:46)
   at
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1
 .evaluate(SystemPropertiesInvariantRule.java:55)
   at
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleTh
 readAndTestName.java:49)
   

Lucene ancient greek normalization

2014-11-21 Thread paolo anghileri
For development purposes I need the ability in Lucene to normalize 
ancient Greek characters for all the cases of grammatical details such as 
accents, diacritics and so on.


My need is to retrieve ancient Greek words with accents and other 
grammatical details from an input string without accents.


For example, the input of οργανον (organon) should also retrieve Ὄργανον.


I am not a Lucene committer and I am new to this, so my question is about 
the best practice to implement this in Lucene, and possibly to submit a 
commit proposal to the Lucene project management committee.


I have made some searches and found this file in Lucene-Solr:


It contains normalization for some chars.
My thought would be to add extra normalization here, including all 
Unicode ancient Greek chars with all grammatical details.
I already have all the Unicode values for those chars, so it should not be 
difficult for me to include them.


If my understanding is correct, this should add to Lucene the features 
described above.



As I am new to this, my needs are:

1. To be sure that this is the correct place in Lucene for doing normalization
2. How to post a commit proposal


Any help appreciated

Kind regards

Paolo



Re: Lucene ancient greek normalization

2014-11-21 Thread paolo anghileri

Sorry, I forgot to add the link to the Lucene file:

https://github.com/apache/lucene-solr/blob/trunk/lucene/analysis/common/src/java/org/apache/lucene/analysis/el/GreekLowerCaseFilter.java

On 21/11/2014 20:14, paolo anghileri wrote:
For development purposes I need the ability in lucene to normalize 
ancient greek characters for al the cases of grammatical details such 
as accents, diacritics and so on.


My need is to retrieve ancient greek words with accents and other 
grammatical details by the input of the string without accents.


For example the input of οργανον (organon) should to retrieve also 
Ὄργανον,



I am not a lucene commiter and I a new to this so my question is about 
the best practice to implement this in Lucene, and possibile submit a 
commit proposal to Lucene A project management committee.


I have made some searches and found this file in Lucene-soir:


It contains normalization for some chars.
My thought would be to add extra normalization here, including all 
unicode ancient greek chars with all grammatical details.
I already have all the unicode values for that chars so It should not 
be difficult for me to include them


If my understanding is correct, this should add to lucene the features 
described above.



As I am new to this, my needs are:

 1.  To be sure that this is the correct place in Lucene for doing
normalization
 2. How to post commit proposal


Any help appreciated

Kind regards

Paolo





[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1265: POMs out of sync

2014-11-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1265/

No tests ran.

Build Log:
[...truncated 40102 lines...]


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

RE: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1265: POMs out of sync

2014-11-21 Thread Uwe Schindler
I fixed the problem (Jenkins was complaining about no tests).

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Apache Jenkins Server [mailto:jenk...@builds.apache.org]
 Sent: Friday, November 21, 2014 8:34 PM
 To: dev@lucene.apache.org
 Subject: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1265: POMs out of
 sync
 
 Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1265/
 
 No tests ran.
 
 Build Log:
 [...truncated 40102 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Trunk now requires Java 8

2014-11-21 Thread Ryan Ernst
Just an FYI for those not following along closely.  See LUCENE-5950 for
details.


Re: Lucene ancient greek normalization

2014-11-21 Thread Alexandre Rafalovitch
Are you sure that's not something that's already addressed by the ICU
Filter? 
http://www.solr-start.com/javadoc/solr-lucene/org/apache/lucene/analysis/icu/ICUTransformFilterFactory.html

If you follow the links to what's possible, the page talks about
Greek, though not ancient:
http://userguide.icu-project.org/transforms/general#TOC-Greek

There was also some discussion on:
https://issues.apache.org/jira/browse/LUCENE-1343

Regards,
   Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853


On 21 November 2014 14:14, paolo anghileri
paolo.anghil...@codegeneration.it wrote:
 For development purposes I need the ability in lucene to normalize ancient
 greek characters for al the cases of grammatical details such as accents,
 diacritics and so on.

 My need is to retrieve ancient greek words with accents and other
 grammatical details by the input of the string without accents.

 For example the input of οργανον (organon) should to retrieve also  Ὄργανον,


 I am not a lucene commiter and I a new to this so my question is about the
 best practice to implement this in Lucene, and possibile submit a commit
 proposal to Lucene A project management committee.

 I have made some searches and found this file in Lucene-soir:


 It contains normalization for some chars.
 My thought would be to add extra normalization here, including all unicode
 ancient greek chars with all grammatical details.
 I already have all the unicode values for that chars so It should not be
 difficult for me to include them

 If my understanding is correct, this should add to lucene the features
 described above.


 As I am new to this, my needs are:

  To be sure that this is the correct place in Lucene for doing normalization
 How to post commit proposal


 Any help appreciated

 Kind regards

 Paolo

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: Lucene ancient greek normalization

2014-11-21 Thread Allison, Timothy B.
ICU looks promising:

Μῆνιν ἄειδε, θεὰ, Πηληϊάδεω Ἀχιλλῆος -

1.μηνιν
2.αειδε
3.θεα
4.πηληιαδεω
5.αχιλληοσ
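
For what it's worth, that kind of folding can also be sketched with plain ICU4J; the transform chain below is one common choice (the class name and rule string are my assumptions, not necessarily what produced the list above):

  import com.ibm.icu.text.Transliterator;

  public class GreekFoldSketch {
    public static void main(String[] args) {
      // Decompose, drop combining marks (accents, breathings, iota subscript),
      // recompose, then lowercase.
      Transliterator fold =
          Transliterator.getInstance("NFD; [:Nonspacing Mark:] Remove; NFC; Lower");
      System.out.println(fold.transliterate("Ὄργανον"));          // οργανον
      System.out.println(fold.transliterate("Μῆνιν ἄειδε, θεὰ")); // μηνιν αειδε, θεα
    }
  }

The same kind of transform can presumably be wired into an analysis chain via Lucene's ICUTransformFilter, or approximated with ICU folding.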

-Original Message-
From: Alexandre Rafalovitch [mailto:arafa...@gmail.com] 
Sent: Friday, November 21, 2014 3:08 PM
To: dev@lucene.apache.org
Subject: Re: Lucene ancient greek normalization

Are you sure that's not something that's already addressed by the ICU
Filter? 
http://www.solr-start.com/javadoc/solr-lucene/org/apache/lucene/analysis/icu/ICUTransformFilterFactory.html

If you follow the links to what's possible, the page talks about
Greek, though not ancient:
http://userguide.icu-project.org/transforms/general#TOC-Greek

There was also some discussion on:
https://issues.apache.org/jira/browse/LUCENE-1343

Regards,
   Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853


On 21 November 2014 14:14, paolo anghileri
paolo.anghil...@codegeneration.it wrote:
 For development purposes I need the ability in lucene to normalize ancient
 greek characters for al the cases of grammatical details such as accents,
 diacritics and so on.

 My need is to retrieve ancient greek words with accents and other
 grammatical details by the input of the string without accents.

 For example the input of οργανον (organon) should to retrieve also  Ὄργανον,


 I am not a lucene commiter and I a new to this so my question is about the
 best practice to implement this in Lucene, and possibile submit a commit
 proposal to Lucene A project management committee.

 I have made some searches and found this file in Lucene-soir:


 It contains normalization for some chars.
 My thought would be to add extra normalization here, including all unicode
 ancient greek chars with all grammatical details.
 I already have all the unicode values for that chars so It should not be
 difficult for me to include them

 If my understanding is correct, this should add to lucene the features
 described above.


 As I am new to this, my needs are:

  To be sure that this is the correct place in Lucene for doing normalization
 How to post commit proposal


 Any help appreciated

 Kind regards

 Paolo

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Testing Solr 5

2014-11-21 Thread Alexandre Rafalovitch
Hi,

I am writing something that - will - depend on Solr 5. As I usually
work with released versions, I am not entirely sure of the correct
workflow.

I can check out branch_5x and do my research against that. I assume
that's the correct source for what will land in version 5.

But if I find an issue, do I then report it against 5? Or do I need to
retest that against trunk and report against trunk? I don't know how
in-sync these two are at this point.

I've checked https://wiki.apache.org/solr/HackingSolr but it is not
specific enough (and is actually out of date regarding version 5).

Regards,
   Alex.

Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Testing Solr 5

2014-11-21 Thread Shawn Heisey
On 11/21/2014 1:14 PM, Alexandre Rafalovitch wrote:
 I am writing something that - will - depend on Solr 5. As I usually
 work with released versions, I am not entirely sure of the correct
 workflow.

 I can check out branch_5x and do my research against that. I assume
 that's the correct source for what will land in version 5.

 But if I find an issue, do I then report it against 5? Or do I need to
 retest that against trunk and report against trunk? I don't know how
 in-sync these two are at this point.

 I've checked https://wiki.apache.org/solr/HackingSolr but it is not
 specific enough (and is actually out of date regarding version 5).

Any testing we do for branch_5x probably isn't enough, so please do test
with it.  That is the branch where development will happen for all 5.x
releases.  If you do find a problem and *can* try again with trunk,
please do, but don't make that a prerequisite for filing an issue or
creating a patch.  A patch against trunk is preferred, but any patch is
better than none.

We'd like to know which branch the patch applies to, and if there are
any problems we may need to know the SVN revision number, so it's better
to include that info up front if you have it.  If you are using one of
the git mirrors, the SVN revision number may not be available.  The git
commit hash may be useful instead.

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6759) ExpandComponent does not call finish() on DelegatingCollectors

2014-11-21 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-6759:


Assignee: Joel Bernstein

 ExpandComponent does not call finish() on DelegatingCollectors
 --

 Key: SOLR-6759
 URL: https://issues.apache.org/jira/browse/SOLR-6759
 Project: Solr
  Issue Type: Bug
Reporter: Simon Endele
Assignee: Joel Bernstein
 Attachments: ExpandComponent.java.patch


 We have a PostFilter for ACL filtering in action that has a similar structure 
 as CollapsingQParserPlugin, i.e. its DelegatingCollector gathers all 
 documents and finally calls delegate.collect() for all docs in its finish() 
 method.
 In contrast to CollapsingQParserPlugin, our PostFilter is also called by the 
 ExpandComponent (on purpose).
 But as the finish method is never called by the ExpandComponent, the expand 
 section in the result is always empty.
 Tested with Solr 4.10.2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6759) ExpandComponent does not call finish() on DelegatingCollectors

2014-11-21 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14221402#comment-14221402
 ] 

Joel Bernstein commented on SOLR-6759:
--

Simon, thanks for the patch. It's pretty close to what needs to be done here. 
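
For context, a rough sketch of the contract involved (the helper and variable names are invented; API shapes roughly as in the 4.10 line): whoever drives a post-filter collector chain has to end with finish(), which is the step the report says ExpandComponent skips.

  import java.io.IOException;

  import org.apache.lucene.search.Collector;
  import org.apache.lucene.search.IndexSearcher;
  import org.apache.lucene.search.Query;
  import org.apache.solr.search.DelegatingCollector;
  import org.apache.solr.search.PostFilter;

  // Hypothetical helper: post filters that buffer docs (like the ACL filter in
  // this issue) only forward them to the delegate inside finish(), so skipping
  // that call silently drops every buffered document.
  class PostFilterChainSketch {
    static void search(IndexSearcher searcher, Query q, PostFilter acl, Collector main)
        throws IOException {
      DelegatingCollector chain = acl.getFilterCollector(searcher);
      chain.setLastDelegate(main);   // ACL collector delegates to the main collector
      searcher.search(q, chain);
      chain.finish();                // without this, the buffered docs never arrive
    }
  }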

 ExpandComponent does not call finish() on DelegatingCollectors
 --

 Key: SOLR-6759
 URL: https://issues.apache.org/jira/browse/SOLR-6759
 Project: Solr
  Issue Type: Bug
Reporter: Simon Endele
Assignee: Joel Bernstein
 Attachments: ExpandComponent.java.patch


 We have a PostFilter for ACL filtering in action that has a similar structure 
 as CollapsingQParserPlugin, i.e. its DelegatingCollector gathers all 
 documents and finally calls delegate.collect() for all docs in its finish() 
 method.
 In contrast to CollapsingQParserPlugin, our PostFilter is also called by the 
 ExpandComponent (on purpose).
 But as the finish method is never called by the ExpandComponent, the expand 
 section in the result is always empty.
 Tested with Solr 4.10.2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6759) ExpandComponent does not call finish() on DelegatingCollectors

2014-11-21 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6759:
-
Fix Version/s: 5.0
   4.10.3

 ExpandComponent does not call finish() on DelegatingCollectors
 --

 Key: SOLR-6759
 URL: https://issues.apache.org/jira/browse/SOLR-6759
 Project: Solr
  Issue Type: Bug
Reporter: Simon Endele
Assignee: Joel Bernstein
 Fix For: 4.10.3, 5.0

 Attachments: ExpandComponent.java.patch


 We have a PostFilter for ACL filtering in action that has a similar structure 
 as CollapsingQParserPlugin, i.e. its DelegatingCollector gathers all 
 documents and finally calls delegate.collect() for all docs in its finish() 
 method.
 In contrast to CollapsingQParserPlugin, our PostFilter is also called by the 
 ExpandComponent (on purpose).
 But as the finish method is never called by the ExpandComponent, the expand 
 section in the result is always empty.
 Tested with Solr 4.10.2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6067) Change Accountable.getChildResources to return empty list by default

2014-11-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14221429#comment-14221429
 ] 

ASF subversion and git services commented on LUCENE-6067:
-

Commit 1641002 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1641002 ]

LUCENE-6067: Accountable.getChildResources returns empty list by default

 Change Accountable.getChildResources to return empty list by default
 

 Key: LUCENE-6067
 URL: https://issues.apache.org/jira/browse/LUCENE-6067
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir
 Fix For: Trunk

 Attachments: LUCENE-6067.patch


 This is the typical case, and defaulting to it makes this accounting api much 
 less invasive on the codebase.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 1908 - Still Failing!

2014-11-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1908/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC (asserts: 
true)

4 tests failed.
REGRESSION:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
some core start times did not change on reload

Stack Trace:
java.lang.AssertionError: some core start times did not change on reload
at 
__randomizedtesting.SeedInfo.seed([65D59E8A59989082:E43310922EC7F0BE]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:884)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:203)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor98.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Resolved] (LUCENE-6067) Change Accountable.getChildResources to return empty list by default

2014-11-21 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-6067.
-
Resolution: Fixed

 Change Accountable.getChildResources to return empty list by default
 

 Key: LUCENE-6067
 URL: https://issues.apache.org/jira/browse/LUCENE-6067
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir
 Fix For: Trunk

 Attachments: LUCENE-6067.patch


 This is the typical case, and defaulting to it makes this accounting api much 
 less invasive on the codebase.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene ancient greek normalization

2014-11-21 Thread paolo anghileri

Many thanks Alex,

For clarity, I'll try to explain a bit what I would like to do:
I'd like to use MediaWiki as a base for this project.
The need is being able to search with simple strings without grammatical 
details and retrieve data with grammatical details.

For that, I am evaluating a MediaWiki extension called CirrusSearch.
CirrusSearch depends on Elasticsearch, while Elasticsearch depends on 
Lucene.


CirrusSearch (and its dependencies) is used, for instance, by the modern 
Greek Wiktionary, and works correctly for modern Greek grammatical details.


In this case, if you input αλφα it will also retrieve άλφα,

but in the case of ancient Greek, οργανον will not retrieve Ὄργανον, 
since its grammatical details are specific to ancient Greek and do not 
appear to be supported.


Since this kind of Wikipedia search is ultimately based on Lucene, adding 
this feature to Lucene will potentially make it available to MediaWiki 
as well.


As Tim remarks in the following message, it seems that ICU is able to 
support this.


I have to investigate this a little more, and check whether CirrusSearch 
implements ICU.


About the third link you provided:

https://issues.apache.org/jira/browse/LUCENE-1343

It seems that the first one I indicated:

https://github.com/apache/lucene-solr/blob/trunk/lucene/analysis/common/src/java/org/apache/lucene/analysis/el/GreekLowerCaseFilter.java

does something similar but specialized for Greek. This source also converts 
some diacritics, but is lacking many other chars.

My initial idea was to add extra normalization here.

I'll do some more research next week, both in the Lucene and CirrusSearch 
docs, and I'll let you know.



Thanks to you and Tim for taking the time on this.

Regards

Paolo

On 21/11/2014 21:07, Alexandre Rafalovitch wrote:

Are you sure that's not something that's already addressed by the ICU
Filter? 
http://www.solr-start.com/javadoc/solr-lucene/org/apache/lucene/analysis/icu/ICUTransformFilterFactory.html

If you follow the links to what's possible, the page talks about
Greek, though not ancient:
http://userguide.icu-project.org/transforms/general#TOC-Greek

There was also some discussion on:
https://issues.apache.org/jira/browse/LUCENE-1343

Regards,
Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853


On 21 November 2014 14:14, paolo anghileri
paolo.anghil...@codegeneration.it wrote:

For development purposes I need the ability in lucene to normalize ancient
greek characters for al the cases of grammatical details such as accents,
diacritics and so on.

My need is to retrieve ancient greek words with accents and other
grammatical details by the input of the string without accents.

For example the input of οργανον (organon) should to retrieve also  Ὄργανον,


I am not a lucene commiter and I a new to this so my question is about the
best practice to implement this in Lucene, and possibile submit a commit
proposal to Lucene A project management committee.

I have made some searches and found this file in Lucene-soir:


It contains normalization for some chars.
My thought would be to add extra normalization here, including all unicode
ancient greek chars with all grammatical details.
I already have all the unicode values for that chars so It should not be
difficult for me to include them

If my understanding is correct, this should add to lucene the features
described above.


As I am new to this, my needs are:

  To be sure that this is the correct place in Lucene for doing normalization
How to post commit proposal


Any help appreciated

Kind regards

Paolo

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Testing Solr 5

2014-11-21 Thread Jack Krupansky

My understanding from what I have heard over the past year:

1. Reporting of issues (bugs) can be on any release or branch or trunk. 
Testing on the other releases, branches, or trunk is encouraged, but not 
mandatory. The main thing is to capture a repro scenario.
2. Any bug fixes should be on: 1) trunk, 2) branch_5x, and 3) branch_4x - to 
all that the bug applies to.
3. Any new features should be first on trunk and given time to bake (i.e., 
fix any Jenkins errors), then backported to branch_5x as the feature is 
completed and debugged. A determined effort must be made to keep the stable 
branch as stable as possible.


-- Jack Krupansky

-Original Message- 
From: Shawn Heisey

Sent: Friday, November 21, 2014 3:36 PM
To: dev@lucene.apache.org
Subject: Re: Testing Solr 5

On 11/21/2014 1:14 PM, Alexandre Rafalovitch wrote:

I am writing something that - will - depend on Solr 5. As I usually
work with released versions, I am not entirely sure of the correct
workflow.

I can check out branch_5x and do my research against that. I assume
that's the correct source for what will land in version 5.

But if I find an issue, do I then report it against 5? Or do I need to
retest that against trunk and report against trunk? I don't know how
in-sync these two are at this point.

I've checked https://wiki.apache.org/solr/HackingSolr but it is not
specific enough (and is actually out of date regarding version 5).


Any testing we do for branch_5x probably isn't enough, so please do test
with it.  That is the branch where development will happen for all 5.x
releases.  If you do find a problem and *can* try again with trunk,
please do, but don't make that a prerequisite for filing an issue or
creating a patch.  A patch against trunk is preferred, but any patch is
better than none.

We'd like to know which branch the patch applies to, and if there are
any problems we may need to know the SVN revision number, so it's better
to include that info up front if you have it.  If you are using one of
the git mirrors, the SVN revision number may not be available.  The git
commit hash may be useful instead.

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org 



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene ancient greek normalization

2014-11-21 Thread Alexandre Rafalovitch
On 21 November 2014 16:10, paolo anghileri
paolo.anghil...@codegeneration.it wrote:
 The need is being able to search with simple strings without grammatical
 details and retrieve data with grammatical details.

I am pretty sure that this is what I did for a Thai demo. Actually, I
went another two steps and converted Thai to English transliteration
and then broadened phonetically. With Solr, in my case:
https://github.com/arafalov/solr-thai-test/blob/master/collection1/conf/schema.xml#L35

So to me, the specific question would be whether Ancient Greek -
specifically - is present in the Unicode mapping tables, not the rest
of it.

Regards,
   Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Testing Solr 5

2014-11-21 Thread Alexandre Rafalovitch
Well, my concern was mostly about not reporting something against
branch_5x that may have already been fixed in trunk (to avoid
JIRA-spam). I am not sure I am yet at the patch skill-level.

But I got my answer and the extra knowledge is extra power, so I am
not complaining. :-)

Regards,
   Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853


On 21 November 2014 16:13, Jack Krupansky j...@basetechnology.com wrote:
 My understanding from what I have heard over the past year:

 1. Reporting of issues (bugs) can be on any release or branch or trunk.
 Testing on the other releases, branches, or trunk is encouraged, but not
 mandatory. The main thing is to capture a repro scenario.
 2. Any bug fixes should be on: 1) trunk, 2) branch_5x, and 3) branch_4x - to
 all branches that the bug applies to.
 3. Any new features should be first on trunk and given time to bake (i.e.,
 fix any Jenkins errors), then backported to branch_5x as the feature is
 completed and debugged. A determined effort must be made to keep the stable
 branch as stable as possible.

 -- Jack Krupansky

 -Original Message- From: Shawn Heisey
 Sent: Friday, November 21, 2014 3:36 PM
 To: dev@lucene.apache.org
 Subject: Re: Testing Solr 5


 On 11/21/2014 1:14 PM, Alexandre Rafalovitch wrote:

 I am writing something that - will - depend on Solr 5. As I usually
 work with released versions, I am not entirely sure of the correct
 workflow.

 I can check out branch_5x and do my research against that. I assume
 that's the correct source for what will land in version 5.

 But if I find an issue, do I then report it against 5? Or do I need to
 retest that against trunk and report against trunk? I don't know how
 in-sync these two are at this point.

 I've checked https://wiki.apache.org/solr/HackingSolr but it is not
 specific enough (and is actually out of date regarding version 5).


 Any testing we do for branch_5x probably isn't enough, so please do test
 with it.  That is the branch where development will happen for all 5.x
 releases.  If you do find a problem and *can* try again with trunk,
 please do, but don't make that a prerequisite for filing an issue or
 creating a patch.  A patch against trunk is preferred, but any patch is
 better than none.

 We'd like to know which branch the patch applies to, and if there are
 any problems we may need to know the SVN revision number, so it's better
 to include that info up front if you have it.  If you are using one of
 the git mirrors, the SVN revision number may not be available.  The git
 commit hash may be useful instead.

 Thanks,
 Shawn


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6776) Data lost when use SoftCommit and TLog

2014-11-21 Thread yuanyun.cn (JIRA)
yuanyun.cn created SOLR-6776:


 Summary: Data lost when use SoftCommit and TLog
 Key: SOLR-6776
 URL: https://issues.apache.org/jira/browse/SOLR-6776
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10
Reporter: yuanyun.cn
 Fix For: 4.10.3


We enabled the update log and changed autoCommit to a bigger value, 10 mins.

After restart, we push one doc with softCommit=true:
http://localhost:8983/solr/update?stream.body=<add><doc><field name="id">id1</field></doc></add>&softCommit=true

Then we kill the java process after a min. 

After restart, Tlog failed to replay with the following exception, and there is no 
data in Solr.
6245 [coreLoadExecutor-5-thread-1] ERROR org.apache.solr.update.UpdateLog – 
Failure to open existing log file (non fatal) 
E:\jeffery\src\apache\solr\4.10.2\solr-4.10.2\example\solr\collection1\data\t
log\tlog.000:org.apache.solr.common.SolrException: 
java.io.EOFException
at org.apache.solr.update.TransactionLog.<init>(TransactionLog.java:181)
at org.apache.solr.update.UpdateLog.<init>(UpdateLog.java:261)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:134)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)
at 
org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:100)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown 
Source)
at java.lang.reflect.Constructor.newInstance(Unknown Source)
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:550)
at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:620)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:835)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:646)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:491)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:255)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:249)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.io.EOFException
at 
org.apache.solr.common.util.FastInputStream.readUnsignedByte(FastInputStream.java:73)
at 
org.apache.solr.common.util.FastInputStream.readInt(FastInputStream.java:216)
at 
org.apache.solr.update.TransactionLog.readHeader(TransactionLog.java:268)
at org.apache.solr.update.TransactionLog.<init>(TransactionLog.java:159)
... 19 more

Checking the code, this seems to be related to
org.apache.solr.update.processor.RunUpdateProcessor: in processCommit, it sets
changesSinceCommit=false (even when we are using softCommit).

So in finish, updateLog.finish will not be called:
  public void finish() throws IOException {
    if (changesSinceCommit && updateHandler.getUpdateLog() != null) {
      updateHandler.getUpdateLog().finish(null);
    }
    super.finish();
  }

To fix this issue, I have to change RunUpdateProcessor.processCommit like below:
if (!cmd.softCommit) {
  changesSinceCommit = false;
}
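
For context, the change in place would look roughly like this (a sketch only:
the surrounding method body is assumed, not copied from the Solr 4.10 source):

@Override
public void processCommit(CommitUpdateCommand cmd) throws IOException {
  updateHandler.commit(cmd);
  super.processCommit(cmd);
  // Only treat pending changes as flushed for a hard commit, so that
  // updateLog.finish() still runs in finish() after a soft commit.
  if (!cmd.softCommit) {
    changesSinceCommit = false;
  }
}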




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6066) New remove method in PriorityQueue

2014-11-21 Thread Stefan Pohl (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14221500#comment-14221500
 ] 

Stefan Pohl commented on LUCENE-6066:
-

I also needed this missing functionality and hence implemented it in the 
specialized heap used within the MinShouldMatchSumScorer (LUCENE-4571). I also 
went for the O\(n\) solution not to have the position-update overhead, but in 
my scoring use-case the removed item is often to be found in the beginning of 
the heap array.

Regarding your patch, please note that it's not enough to sift down the 
inserted last element; you first have to try bubbling it up in order to deal 
with all possible heap states. A more randomized test case would generate 
plenty of example cases where the heap property otherwise won't hold.

It would be nice if there were only one implementation of heap 
functionality, so that it could be tested thoroughly in one central place, but it 
still seems that specialized versions can outperform generic ones (e.g. 
LUCENE-6028)...
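
To make the point concrete, here is a minimal generic min-heap sketch
(deliberately not Lucene's PriorityQueue API) in which removing an arbitrary
element swaps in the tail element and then tries both directions; at most one
of the two sift calls actually moves it. Lucene's PriorityQueue is array-backed
and 1-based, so the same idea applies with shifted index arithmetic.

{code}
import java.util.ArrayList;
import java.util.List;

// Minimal binary min-heap sketch, for illustration only.
class MinHeap {
  private final List<Integer> a = new ArrayList<>();

  void add(int v) { a.add(v); siftUp(a.size() - 1); }

  boolean remove(int v) {
    int i = a.indexOf(v);               // O(n) linear scan, as in the proposal
    if (i < 0) return false;
    int last = a.remove(a.size() - 1);  // detach the tail element
    if (i == a.size()) return true;     // the removed element was the tail itself
    a.set(i, last);
    siftUp(i);                          // the replacement may need to move up...
    siftDown(i);                        // ...or down; at most one call moves it
    return true;
  }

  private void siftUp(int i) {
    while (i > 0) {
      int p = (i - 1) / 2;
      if (a.get(i) >= a.get(p)) break;
      swap(i, p);
      i = p;
    }
  }

  private void siftDown(int i) {
    int n = a.size();
    while (true) {
      int l = 2 * i + 1, r = l + 1, m = i;
      if (l < n && a.get(l) < a.get(m)) m = l;
      if (r < n && a.get(r) < a.get(m)) m = r;
      if (m == i) break;
      swap(i, m);
      i = m;
    }
  }

  private void swap(int i, int j) {
    int t = a.get(i);
    a.set(i, a.get(j));
    a.set(j, t);
  }
}
{code}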

 New remove method in PriorityQueue
 

 Key: LUCENE-6066
 URL: https://issues.apache.org/jira/browse/LUCENE-6066
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/query/scoring
Reporter: Mark Harwood
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-PQRemoveV1.patch


 It would be useful to be able to remove existing elements from a 
 PriorityQueue. 
 The proposal is that a linear scan is performed to find the element being 
 removed and then the end element in heap[size] is swapped into this position 
 to perform the delete. The method downHeap() is then called to shuffle the 
 replacement element back down the array but the existing downHeap method must 
 be modified to allow picking up an entry from any point in the array rather 
 than always assuming the first element (which is its only current mode of 
 operation).
 A working javascript model of the proposal with animation is available here: 
 http://jsfiddle.net/grcmquf2/22/ 
 In tests the modified version of downHeap produces the same results as the 
 existing impl but adds the ability to push down from any point.
 An example use case that requires remove is where a client doesn't want more 
 than N matches for any given key (e.g. no more than 5 products from any one 
 retailer in a marketplace). In these circumstances a document that was 
 previously thought of as competitive has to be removed from the final PQ and 
 replaced with another doc (eg a retailer who already has 5 matches in the PQ 
 receives a 6th match which is better than his previous ones). This particular 
 process is managed by a special DiversifyingPriorityQueue which wraps the 
 main PriorityQueue and could be contributed as part of another issue if there 
 is interest in that. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-380) There's no way to convert search results into page-level hits of a structured document.

2014-11-21 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14221595#comment-14221595
 ] 

Alexandre Rafalovitch commented on SOLR-380:


I think this one is probably dead/safe-to-close. Some of the functionality 
seems to be in the folding/parent-child code in the latest Solr.

And if so, we can probably delete the Wiki page about *Plugins* that contains 
only this plugin so far: https://wiki.apache.org/solr/SolrPluginRepository (and 
the link to that from https://wiki.apache.org/solr/FrontPage under Tips)

 There's no way to convert search results into page-level hits of a 
 structured document.
 -

 Key: SOLR-380
 URL: https://issues.apache.org/jira/browse/SOLR-380
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Tricia Jenkins
Priority: Minor
 Fix For: 4.9, Trunk

 Attachments: SOLR-380-XmlPayload.patch, SOLR-380-XmlPayload.patch, 
 xmlpayload-example.zip, xmlpayload-src.jar, xmlpayload.jar


 Paged-Text FieldType for Solr
 A chance to dig into the guts of Solr. The problem: If we index a monograph 
 in Solr, there's no way to convert search results into page-level hits. The 
 solution: have a paged-text fieldtype which keeps track of page divisions 
 as it indexes, and reports page-level hits in the search results.
 The input would contain page milestones: <page id="234"/>. As Solr processed 
 the tokens (using its standard tokenizers and filters), it would concurrently 
 build a structural map of the item, indicating which term position marked the 
 beginning of which page: <page id="234" firstterm="14324"/>. This map would 
 be stored in an unindexed field in some efficient format.
 At search time, Solr would retrieve term positions for all hits that are 
 returned in the current request, and use the stored map to determine page ids 
 for each term position. The results would imitate the results for 
 highlighting, something like:
 <lst name="pages">
   <lst name="doc1">
     <int name="pageid">234</int>
     <int name="pageid">236</int>
   </lst>
   <lst name="doc2">
     <int name="pageid">19</int>
   </lst>
 </lst>
 <lst name="hitpos">
   <lst name="doc1">
     <lst name="234">
       <int name="pos">14325</int>
     </lst>
   </lst>
   ...
 </lst>
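
 For illustration, a minimal sketch (not code from the attached patch) of the page 
 lookup described above, assuming the stored map is kept as parallel arrays of 
 firstterm offsets and page ids:

 import java.util.Arrays;

 class PageMap {
   private final int[] firstTerms; // firstterm offsets, sorted ascending
   private final int[] pageIds;    // parallel array of page ids

   PageMap(int[] firstTerms, int[] pageIds) {
     this.firstTerms = firstTerms;
     this.pageIds = pageIds;
   }

   // A term position belongs to the page whose milestone is the last one at or before it.
   int pageForPosition(int termPosition) {
     int idx = Arrays.binarySearch(firstTerms, termPosition);
     if (idx < 0) idx = -idx - 2; // insertion point minus one
     return idx < 0 ? -1 : pageIds[idx];
   }
 }

 For example, new PageMap(new int[]{0, 14324}, new int[]{233, 234}).pageForPosition(14325) 
 would return page 234.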



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-380) There's no way to convert search results into page-level hits of a structured document.

2014-11-21 Thread Tricia Jenkins (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14221606#comment-14221606
 ] 

Tricia Jenkins commented on SOLR-380:
-

No problem here.
On 21 Nov 2014 16:40, Alexandre Rafalovitch (JIRA) j...@apache.org



 There's no way to convert search results into page-level hits of a 
 structured document.
 -

 Key: SOLR-380
 URL: https://issues.apache.org/jira/browse/SOLR-380
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Tricia Jenkins
Priority: Minor
 Fix For: 4.9, Trunk

 Attachments: SOLR-380-XmlPayload.patch, SOLR-380-XmlPayload.patch, 
 xmlpayload-example.zip, xmlpayload-src.jar, xmlpayload.jar


 Paged-Text FieldType for Solr
 A chance to dig into the guts of Solr. The problem: If we index a monograph 
 in Solr, there's no way to convert search results into page-level hits. The 
 solution: have a paged-text fieldtype which keeps track of page divisions 
 as it indexes, and reports page-level hits in the search results.
 The input would contain page milestones: <page id="234"/>. As Solr processed 
 the tokens (using its standard tokenizers and filters), it would concurrently 
 build a structural map of the item, indicating which term position marked the 
 beginning of which page: <page id="234" firstterm="14324"/>. This map would 
 be stored in an unindexed field in some efficient format.
 At search time, Solr would retrieve term positions for all hits that are 
 returned in the current request, and use the stored map to determine page ids 
 for each term position. The results would imitate the results for 
 highlighting, something like:
 <lst name="pages">
   <lst name="doc1">
     <int name="pageid">234</int>
     <int name="pageid">236</int>
   </lst>
   <lst name="doc2">
     <int name="pageid">19</int>
   </lst>
 </lst>
 <lst name="hitpos">
   <lst name="doc1">
     <lst name="234">
       <int name="pos">14325</int>
     </lst>
   </lst>
   ...
 </lst>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_20) - Build # 4445 - Failure!

2014-11-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4445/
Java: 64bit/jdk1.8.0_20 -XX:-UseCompressedOops -XX:+UseParallelGC (asserts: 
true)

1 tests failed.
REGRESSION:  
org.apache.solr.handler.TestSolrConfigHandlerConcurrent.testDistribSearch

Error Message:
[[], [size property not set, expected = 11, actual null, initialSize property 
not set, expected = 12, actual null, autowarmCount property not set, expected = 
13, actual null], [], []]

Stack Trace:
java.lang.AssertionError: [[], [size property not set, expected = 11, actual 
null, initialSize property not set, expected = 12, actual null, autowarmCount 
property not set, expected = 13, actual null], [], []]
at 
__randomizedtesting.SeedInfo.seed([EC559DFE50C274A9:6DB313E6279D1495]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.handler.TestSolrConfigHandlerConcurrent.doTest(TestSolrConfigHandlerConcurrent.java:112)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor68.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 

[jira] [Updated] (SOLR-6559) Create an endpoint /update/xml/docs endpoint to do custom xml indexing

2014-11-21 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6559:
-
Description: Just the way we have an json end point create an xml end point 
too. use the XPathRecordReader in DIH to do the same . The syntax would require 
slight tweaking to match the params of /update/json/docs  (was: Just the way we 
have an json end point create an xml end point too. use the XPathRecordReader 
in DIH to do the same)

 Create an endpoint /update/xml/docs endpoint to do custom xml indexing
 --

 Key: SOLR-6559
 URL: https://issues.apache.org/jira/browse/SOLR-6559
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul

 Just the way we have an json end point create an xml end point too. use the 
 XPathRecordReader in DIH to do the same . The syntax would require slight 
 tweaking to match the params of /update/json/docs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6533) Support editing common solrconfig.xml values

2014-11-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14221688#comment-14221688
 ] 

ASF subversion and git services commented on SOLR-6533:
---

Commit 1641020 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1641020 ]

SOLR-6533 fixing test failures 
http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4445

 Support editing common solrconfig.xml values
 

 Key: SOLR-6533
 URL: https://issues.apache.org/jira/browse/SOLR-6533
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
 Attachments: SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch, 
 SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch, 
 SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch


 There are a bunch of properties in solrconfig.xml which users want to edit. 
 We will attack them first
 These properties will be persisted to a separate file called config.json (or 
 whatever file). Instead of saving in the same format we will have well known 
 properties which users can directly edit
 {code}
 updateHandler.autoCommit.maxDocs
 query.filterCache.initialSize
 {code}   
 The api will be modeled around the bulk schema API
 {code:javascript}
 curl http://localhost:8983/solr/collection1/config -H 
 'Content-type:application/json'  -d '{
 set-property : {updateHandler.autoCommit.maxDocs:5},
 unset-property: updateHandler.autoCommit.maxDocs
 }'
 {code}
 {code:javascript}
 //or use this to set ${mypropname} values
 curl http://localhost:8983/solr/collection1/config -H 
 'Content-type:application/json'  -d '{
 set-user-property : {mypropname:my_prop_val},
 unset-user-property:{mypropname}
 }'
 {code}
 The values stored in the config.json will always take precedence and will be 
 applied after loading solrconfig.xml. 
 An http GET on the /config path will give the real config that is applied. 
 An http GET of /config/overlay gives out the content of the configOverlay.json.
 /config/component-name gives only the child of the same name from /config.
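 For example (a usage sketch based on the paths described above, not captured output):
 {code}
 # effective configuration after the overlay is applied
 curl http://localhost:8983/solr/collection1/config
 # only the stored overlay (the contents of configOverlay.json)
 curl http://localhost:8983/solr/collection1/config/overlay
 {code}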



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4509) Disable Stale Check - Distributed Search (Performance)

2014-11-21 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14221753#comment-14221753
 ] 

Mark Miller commented on SOLR-4509:
---

My biggest hesitation on this patch is:

{code}
+// NOTE: The sweeper task is assuming hard-coded Jetty max-idle of 50s.
+final Runnable sweeper = new Runnable() {
+public void run() {
+mgr.closeIdleConnections(40, TimeUnit.SECONDS);
+}
+};
+final ScheduledExecutorService stp = Executors.newScheduledThreadPool(1);
+stp.scheduleWithFixedDelay(sweeper, 5, 5, TimeUnit.SECONDS);
{code}

 Disable Stale Check - Distributed Search (Performance)
 --

 Key: SOLR-4509
 URL: https://issues.apache.org/jira/browse/SOLR-4509
 Project: Solr
  Issue Type: Improvement
  Components: search
 Environment: 5 node SmartOS cluster (all nodes living in same global 
 zone - i.e. same physical machine)
Reporter: Ryan Zezeski
Priority: Minor
 Attachments: IsStaleTime.java, SOLR-4509-4_4_0.patch, 
 SOLR-4509.patch, baremetal-stale-nostale-med-latency.dat, 
 baremetal-stale-nostale-med-latency.svg, 
 baremetal-stale-nostale-throughput.dat, baremetal-stale-nostale-throughput.svg


 By disabling the Apache HTTP Client stale check I've witnessed a 2-4x 
 increase in throughput and reduction of over 100ms.  This patch was made in 
 the context of a project I'm leading, called Yokozuna, which relies on 
 distributed search.
 Here's the patch on Yokozuna: https://github.com/rzezeski/yokozuna/pull/26
 Here's a write-up I did on my findings: 
 http://www.zinascii.com/2013/solr-distributed-search-and-the-stale-check.html
 I'm happy to answer any questions or make changes to the patch to make it 
 acceptable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.7.0_67) - Build # 11485 - Failure!

2014-11-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11485/
Java: 32bit/jdk1.7.0_67 -server -XX:+UseParallelGC (asserts: false)

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeySafeLeaderTest

Error Message:
Suite timeout exceeded (= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (= 720 msec).
at __randomizedtesting.SeedInfo.seed([616F35D8C26556CE]:0)


REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([616F35D8C26556CE]:0)




Build Log:
[...truncated 12516 lines...]
   [junit4] Suite: org.apache.solr.cloud.ChaosMonkeySafeLeaderTest
   [junit4]   2 Creating dataDir: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeySafeLeaderTest-616F35D8C26556CE-001/init-core-data-001
   [junit4]   2 1449027 T4795 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(true) and clientAuth (true)
   [junit4]   2 1449028 T4795 
oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system 
property: /_k/up
   [junit4]   2 1449043 T4795 oas.SolrTestCaseJ4.setUp ###Starting 
testDistribSearch
   [junit4]   2 1449044 T4795 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   1 client port:0.0.0.0/0.0.0.0:0
   [junit4]   2 1449045 T4796 oasc.ZkTestServer$ZKServerMain.runFromConfig 
Starting server
   [junit4]   2 1449144 T4795 oasc.ZkTestServer.run start zk server on 
port:42136
   [junit4]   2 1449145 T4795 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2 1449146 T4795 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2 1449148 T4803 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@1a6818a name:ZooKeeperConnection 
Watcher:127.0.0.1:42136 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2 1449148 T4795 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2 1449148 T4795 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2 1449149 T4795 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2 1449152 T4795 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2 1449152 T4795 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2 1449153 T4806 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@cc7f03 name:ZooKeeperConnection 
Watcher:127.0.0.1:42136/solr got event WatchedEvent state:SyncConnected 
type:None path:null path:null type:None
   [junit4]   2 1449153 T4795 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2 1449154 T4795 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2 1449154 T4795 oascc.SolrZkClient.makePath makePath: 
/collections/collection1
   [junit4]   2 1449156 T4795 oascc.SolrZkClient.makePath makePath: 
/collections/collection1/shards
   [junit4]   2 1449157 T4795 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection
   [junit4]   2 1449158 T4795 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection/shards
   [junit4]   2 1449159 T4795 oasc.AbstractZkTestCase.putConfig put 
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2 1449160 T4795 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.xml
   [junit4]   2 1449162 T4795 oasc.AbstractZkTestCase.putConfig put 
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/schema15.xml
 to /configs/conf1/schema.xml
   [junit4]   2 1449162 T4795 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/schema.xml
   [junit4]   2 1449164 T4795 oasc.AbstractZkTestCase.putConfig put 
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml
 to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2 1449165 T4795 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2 1449166 T4795 oasc.AbstractZkTestCase.putConfig put 
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/stopwords.txt
 to /configs/conf1/stopwords.txt
   [junit4]   2 1449166 T4795 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/stopwords.txt
   [junit4]   2 1449167 T4795 oasc.AbstractZkTestCase.putConfig put 

[JENKINS] Lucene-Solr-SmokeRelease-5.x - Build # 213 - Still Failing

2014-11-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.x/213/

No tests ran.

Build Log:
[...truncated 51656 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 446 files to 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 254 files to 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 JAVA_HOME=/home/jenkins/tools/java/latest1.7
   [smoker] NOTE: output encoding is US-ASCII
   [smoker] 
   [smoker] Load release URL 
file:/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.01 sec (8.6 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-5.0.0-src.tgz...
   [smoker] 27.8 MB in 0.05 sec (596.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.0.0.tgz...
   [smoker] 63.8 MB in 0.09 sec (677.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.0.0.zip...
   [smoker] 73.2 MB in 0.10 sec (744.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-5.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 5569 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 5569 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run ant validate
   [smoker] run tests w/ Java 7 and testArgs='-Dtests.jettyConnector=Socket 
-Dtests.multiplier=1 -Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 206 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.00 sec (87.3 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-5.0.0-src.tgz...
   [smoker] 34.1 MB in 0.05 sec (638.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.0.0.tgz...
   [smoker] 146.4 MB in 0.50 sec (294.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.0.0.zip...
   [smoker] 152.5 MB in 0.21 sec (726.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-5.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-5.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker] verify WAR metadata/contained JAR identity/no javax.* or java.* 
classes...
   [smoker] unpack lucene-5.0.0.tgz...
   [smoker] copying unpacked distribution for Java 7 ...
   [smoker] test solr example w/ Java 7...
   [smoker]   start Solr instance 
(log=/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   starting Solr on port 8983 from 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7
   [smoker] Startup failed; see log 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/solr-example.log
   [smoker] 
   [smoker] Starting Solr on port 8983 from 

[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-11-21 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14221801#comment-14221801
 ] 

Alexandre Rafalovitch commented on SOLR-3619:
-

Is there a specific reason the collections are created in the server directory? 
I read the discussion before, but don't remember anything jumping out. The 
*delete extra stuff* strategy feels quite fragile to me. 

 Rename 'example' dir to 'server' and pull examples into an 'examples' 
 directory
 ---

 Key: SOLR-3619
 URL: https://issues.apache.org/jira/browse/SOLR-3619
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Timothy Potter
 Fix For: 5.0, Trunk

 Attachments: SOLR-3619.patch, SOLR-3619.patch, SOLR-3619.patch, 
 managed-schema, server-name-layout.png, solrconfig.xml






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 1951 - Still Failing!

2014-11-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1951/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC (asserts: true)

1 tests failed.
REGRESSION:  org.apache.solr.cloud.BasicDistributedZkTest.testDistribSearch

Error Message:
commitWithin did not work on node: http://127.0.0.1:51216/collection1 
expected:68 but was:67

Stack Trace:
java.lang.AssertionError: commitWithin did not work on node: 
http://127.0.0.1:51216/collection1 expected:68 but was:67
at 
__randomizedtesting.SeedInfo.seed([FA1FFC092382FF19:7BF9721154DD9F25]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.BasicDistributedZkTest.doTest(BasicDistributedZkTest.java:345)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor63.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[JENKINS] Lucene-Solr-4.10-Linux (64bit/jdk1.8.0_20) - Build # 110 - Failure!

2014-11-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.10-Linux/110/
Java: 64bit/jdk1.8.0_20 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 
(asserts: true)

1 tests failed.
REGRESSION:  org.apache.lucene.index.TestIndexWriter.testCorruptFirstCommit

Error Message:
segments in dir=RAMDirectory@fda35a8 
lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@177705a0

Stack Trace:
java.nio.file.NoSuchFileException: segments in dir=RAMDirectory@fda35a8 
lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@177705a0
at 
__randomizedtesting.SeedInfo.seed([E70D8668746F2F4D:CDE6A3F23A096215]:0)
at 
org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:603)
at 
org.apache.lucene.store.Directory.openChecksumInput(Directory.java:113)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:341)
at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:454)
at 
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:906)
at 
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:752)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:457)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:414)
at org.apache.lucene.util.TestUtil.checkIndex(TestUtil.java:207)
at 
org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:724)
at 
org.apache.lucene.index.TestIndexWriter.testCorruptFirstCommit(TestIndexWriter.java:2520)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at