[jira] [Comment Edited] (SOLR-9515) Update to Hadoop 3

2019-02-07 Thread Steve Rowe (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16762779#comment-16762779
 ] 

Steve Rowe edited comment on SOLR-9515 at 2/7/19 3:44 PM:
--

Maven build usage instructions are at {{dev-tools/maven/README.maven}}. FYI, 
the Maven Jenkins jobs currently don't run tests under Maven because nobody 
maintains that capability: the Maven test runner (Surefire) is different from 
RandomizedRunner, the one Ant uses (see LUCENE-4045). The Jenkins jobs just 
build the Maven artifacts and populate the Apache Maven snapshot repo.


was (Author: steve_rowe):
Maven build usage instructions are at {{dev-tools/maven/README. maven}}.  FYI 
currently the Maven Jenkins jobs don't run tests under Maven because nobody 
maintains that capability - the Maven test runner (surefire) is different from 
RandomizedRunner, the Ant one (see LUCENE-4045); the Jenkins jobs just build 
the Maven artifacts and populate the Apache Maven snapshot repo.

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public (Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, 8.x, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch
>
>  Time Spent: 7h 40m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16757773#comment-16757773
 ] 

Kevin Risden edited comment on SOLR-9515 at 1/31/19 10:40 PM:
--

I had tested with just JDK8. This does reproduce locally with JDK11. I checked 
the failure locally and don't see any symlinks in the path going down. I can 
revert the master commit since I don't have this working on JDK9+ yet. Sigh.

I was going to try a few configs from here: 
[https://docs.oracle.com/javase/10/security/permissions-jdk1.htm#JSSEC-GUID-83063225-0ACB-4909-9BAB-7F7D4E3749E2]


was (Author: risdenk):
I had tested with just JDK8. This does reproduce locally though with JDK11. I 
checked the failure locally and don't see any symlinks in the path going down. 
I can revert the master commit since I don't have this working on JDK9+ yet. 
Sigh.

I was going to try a few configs from here: 
https://docs.oracle.com/javase/10/security/permissions-jdk1.htm#JSSEC-GUID-83063225-0ACB-4909-9BAB-7F7D4E3749E2

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public (Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.






[jira] [Comment Edited] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16757746#comment-16757746
 ] 

Kevin Risden edited comment on SOLR-9515 at 1/31/19 10:08 PM:
--

So this caused failures on JDK9+. Not sure how the failure below is possible, 
since solr-tests.policy 
([https://github.com/apache/lucene-solr/blob/master/lucene/tools/junit4/solr-tests.policy#L27]) 
allows read access to that path.

[https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23605]
{code:java}
[junit4]   2> java.io.IOException: Failed to start sub tasks to add replica in 
replica map :java.security.AccessControlException: access denied 
("java.io.FilePermission" 
"/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.index.hdfs.CheckHdfsIndexTest_B3EBC148FC827CD8-001/tempDir-001/hdfsBaseDir/data/data3/current/BP-669531916-88.99.242.108-1548970895105/current/finalized"
 "read")
   [junit4]   2>at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.getVolumeMap(BlockPoolSlice.java:439)
 ~[hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2>at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getVolumeMap(FsVolumeImpl.java:1003)
 ~[hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2>at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$1.run(FsVolumeList.java:201)
 ~[hadoop-hdfs-3.2.0.jar:?]
{code}
Snippet from solr-tests.policy:
{code:java}
permission java.io.FilePermission "<<ALL FILES>>", "read,execute";
permission java.io.FilePermission "${junit4.childvm.cwd}", "read,execute";
permission java.io.FilePermission "${junit4.childvm.cwd}${/}temp", 
"read,execute,write,delete";
permission java.io.FilePermission "${junit4.childvm.cwd}${/}temp${/}-", 
"read,execute,write,delete";
permission java.io.FilePermission "${junit4.childvm.cwd}${/}jacoco.db", "write";
permission java.io.FilePermission "${junit4.tempDir}${/}*", 
"read,execute,write,delete";
permission java.io.FilePermission "${clover.db.dir}${/}-", 
"read,execute,write,delete";
permission java.io.FilePermission "${tests.linedocsfile}", "read";
permission java.nio.file.LinkPermission "hard";
{code}
Variables from run:
{code:java}
junit4.childvm.cwd=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0
junit.tempDir=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/temp
java.security.manager=org.apache.lucene.util.TestSecurityManager
java.security.policy=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/tools/junit4/solr-tests.policy{code}
So we should have read on the paths that HDFS is trying to use?
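An editor's aside on that question (a sketch with made-up paths, not from the thread): a {{java.io.FilePermission}} grant ending in {{/-}} is recursive, so when the path strings match literally it implies read on arbitrarily deep children. If the same holds for the real Jenkins paths, the denial points less at the grant itself and more at how JDK 9+ compares FilePermission paths (literal, non-canonicalized comparison since JDK-8164705).

```java
import java.io.FilePermission;

// Sanity check: does a recursive "/-" grant imply a deep child path?
// The paths below are hypothetical stand-ins for
// ${junit4.childvm.cwd}${/}temp${/}- and the HDFS data dir from the trace.
public class PolicyImpliesCheck {
    public static void main(String[] args) {
        FilePermission granted = new FilePermission(
                "/work/J0/temp/-", "read,execute,write,delete");
        FilePermission needed = new FilePermission(
                "/work/J0/temp/hdfsBaseDir/data/data3/current/finalized", "read");
        // "/-" matches any depth when the prefix matches literally.
        System.out.println(granted.implies(needed)); // prints "true"
    }
}
```

If this also prints true with the literal paths from the Jenkins run substituted in, the policy grant does cover the path, and the JDK 9+ path-comparison change becomes the more likely suspect.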


was (Author: risdenk):
So this caused failures on JDK9+ not sure how the below is possible 
currently since solr-tests.policy 
([https://github.com/apache/lucene-solr/blob/master/lucene/tools/junit4/solr-tests.policy#L27)]
 allows read access to that path.

[https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23605]
{code:java}
[junit4]   2> java.io.IOException: Failed to start sub tasks to add replica in 
replica map :java.security.AccessControlException: access denied 
("java.io.FilePermission" 
"/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.index.hdfs.CheckHdfsIndexTest_B3EBC148FC827CD8-001/tempDir-001/hdfsBaseDir/data/data3/current/BP-669531916-88.99.242.108-1548970895105/current/finalized"
 "read")
   [junit4]   2>at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.getVolumeMap(BlockPoolSlice.java:439)
 ~[hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2>at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getVolumeMap(FsVolumeImpl.java:1003)
 ~[hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2>at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$1.run(FsVolumeList.java:201)
 ~[hadoop-hdfs-3.2.0.jar:?]
{code}
Snippet from solr-tests.policy:
{code:java}
permission java.io.FilePermission "<<ALL FILES>>", "read,execute";
permission java.io.FilePermission "${junit4.childvm.cwd}", "read,execute";
permission java.io.FilePermission "${junit4.childvm.cwd}${/}temp", 
"read,execute,write,delete";
permission java.io.FilePermission "${junit4.childvm.cwd}${/}temp${/}-", 
"read,execute,write,delete";
permission java.io.FilePermission "${junit4.childvm.cwd}${/}jacoco.db", "write";
permission java.io.FilePermission "${junit4.tempDir}${/}*", 
"read,execute,write,delete";
permission java.io.FilePermission "${clover.db.dir}${/}-", 
"read,execute,write,delete";
permission java.io.FilePermission "${tests.linedocsfile}", "read";
permission java.nio.file.LinkPermission "hard";
{code}
Variables from run:
{code:java}
junit4.childvm.cwd=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0
junit.tempDir=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/temp{code}
So we should have read on the paths that HDFS is trying to use?

[jira] [Comment Edited] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16757746#comment-16757746
 ] 

Kevin Risden edited comment on SOLR-9515 at 1/31/19 9:48 PM:
-

So this caused failures on JDK9+. Not sure how the failure below is possible, 
since solr-tests.policy 
([https://github.com/apache/lucene-solr/blob/master/lucene/tools/junit4/solr-tests.policy#L27]) 
allows read access to that path.

[https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23605]
{code:java}
[junit4]   2> java.io.IOException: Failed to start sub tasks to add replica in 
replica map :java.security.AccessControlException: access denied 
("java.io.FilePermission" 
"/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.index.hdfs.CheckHdfsIndexTest_B3EBC148FC827CD8-001/tempDir-001/hdfsBaseDir/data/data3/current/BP-669531916-88.99.242.108-1548970895105/current/finalized"
 "read")
   [junit4]   2>at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.getVolumeMap(BlockPoolSlice.java:439)
 ~[hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2>at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getVolumeMap(FsVolumeImpl.java:1003)
 ~[hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2>at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$1.run(FsVolumeList.java:201)
 ~[hadoop-hdfs-3.2.0.jar:?]
{code}
Snippet from solr-tests.policy:
{code:java}
permission java.io.FilePermission "<<ALL FILES>>", "read,execute";
permission java.io.FilePermission "${junit4.childvm.cwd}", "read,execute";
permission java.io.FilePermission "${junit4.childvm.cwd}${/}temp", 
"read,execute,write,delete";
permission java.io.FilePermission "${junit4.childvm.cwd}${/}temp${/}-", 
"read,execute,write,delete";
permission java.io.FilePermission "${junit4.childvm.cwd}${/}jacoco.db", "write";
permission java.io.FilePermission "${junit4.tempDir}${/}*", 
"read,execute,write,delete";
permission java.io.FilePermission "${clover.db.dir}${/}-", 
"read,execute,write,delete";
permission java.io.FilePermission "${tests.linedocsfile}", "read";
permission java.nio.file.LinkPermission "hard";
{code}
Variables from run:
{code:java}
junit4.childvm.cwd=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0
junit.tempDir=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/temp{code}
So we should have read on the paths that HDFS is trying to use?


was (Author: risdenk):
So this caused failures on JDK9+ not sure how the below is possible 
currently since solr-tests.policy allows read access to that path.

https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23605
{code:java}
[junit4]   2> java.io.IOException: Failed to start sub tasks to add replica in 
replica map :java.security.AccessControlException: access denied 
("java.io.FilePermission" 
"/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.index.hdfs.CheckHdfsIndexTest_B3EBC148FC827CD8-001/tempDir-001/hdfsBaseDir/data/data3/current/BP-669531916-88.99.242.108-1548970895105/current/finalized"
 "read")
   [junit4]   2>at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.getVolumeMap(BlockPoolSlice.java:439)
 ~[hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2>at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getVolumeMap(FsVolumeImpl.java:1003)
 ~[hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2>at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$1.run(FsVolumeList.java:201)
 ~[hadoop-hdfs-3.2.0.jar:?]
{code}

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public (Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.






[jira] [Comment Edited] (SOLR-9515) Update to Hadoop 3

2019-01-30 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756401#comment-16756401
 ] 

Kevin Risden edited comment on SOLR-9515 at 1/30/19 9:13 PM:
-

The precommit failures above for Hadoop are real, but they are an issue with 
commons-lang3. I just sent a [message to the commons-lang3 mailing 
list|http://mail-archives.apache.org/mod_mbox/commons-user/201901.mbox/%3CCAJU9nmhqgzh7VcxyhJNfb4czC2SvJzZd4o6ARcuD4msof1U2Zw%40mail.gmail.com%3E]. 
I have seen these errors sporadically as well. The stacktrace that keeps 
recurring is:
{code:java}
java.lang.ArrayIndexOutOfBoundsException: 4
   [junit4]   2>at 
org.apache.commons.lang3.time.FastDatePrinter$TextField.appendTo(FastDatePrinter.java:901)
 ~[commons-lang3-3.7.jar:3.7]
   [junit4]   2>at 
org.apache.commons.lang3.time.FastDatePrinter.applyRules(FastDatePrinter.java:573)
 ~[commons-lang3-3.7.jar:3.7]
   [junit4]   2>at 
org.apache.commons.lang3.time.FastDatePrinter.applyRulesToString(FastDatePrinter.java:455)
 ~[commons-lang3-3.7.jar:3.7]
   [junit4]   2>at 
org.apache.commons.lang3.time.FastDatePrinter.format(FastDatePrinter.java:446) 
~[commons-lang3-3.7.jar:3.7]
   [junit4]   2>at 
org.apache.commons.lang3.time.FastDateFormat.format(FastDateFormat.java:428) 
~[commons-lang3-3.7.jar:3.7]
   [junit4]   2>at 
org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.start(DirectoryScanner.java:281)
 ~[hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2>at 
org.apache.hadoop.hdfs.server.datanode.DataNode.initDirectoryScanner(DataNode.java:1090)
 ~[hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2>at 
org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1686)
 ~[hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2>at 
org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:390)
 ~[hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2>at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
 ~[hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2>at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:819)
 [hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2>at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
{code}
This happens when certain default locales are set. I haven't nailed down which 
locales all fail, but a simple reproducing test case without Hadoop or 
Lucene/Solr is here:
{code:java}
long timestamp = System.currentTimeMillis();
Locale.setDefault(Locale.forLanguageTag("ja-JP-u-ca-japanese-x-lvariant-JP"));
Assert.assertEquals(SimpleDateFormat.getInstance().format(timestamp),
FastDateFormat.getInstance().format(timestamp));
{code}
This is with commons-lang3 3.8.1, the latest release.
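An editor's aside (my reading, not claimed in the thread): the {{ArrayIndexOutOfBoundsException: 4}} in {{FastDatePrinter$TextField.appendTo}} is consistent with the Japanese imperial calendar, whose ERA value can reach 4 (Heisei) while the Gregorian era symbol array a formatter typically caches has only two entries (BC/AD). A stdlib-only check that the {{ca-japanese}} tag really selects the imperial calendar:

```java
import java.util.Calendar;
import java.util.Locale;

public class JapaneseCalendarCheck {
    public static void main(String[] args) {
        Locale ja = Locale.forLanguageTag("ja-JP-u-ca-japanese");
        Calendar cal = Calendar.getInstance(ja);
        // The "ca-japanese" Unicode calendar extension selects
        // JapaneseImperialCalendar, whose ERA values run past the
        // two-element BC/AD symbol arrays used for Gregorian formatting.
        System.out.println(cal.getClass().getSimpleName()); // JapaneseImperialCalendar
        System.out.println("ERA = " + cal.get(Calendar.ERA));
    }
}
```

A formatter that looks up era text by index in a Gregorian-sized array, but receives this calendar's ERA value, would walk off the end of the array, matching the stacktrace above.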


was (Author: risdenk):
The precommit failures from above for Hadoop are real but they are an issue 
with commons-lang3. I just sent a message to the commons-lang3 mailing list. I 
have seen these errors sporadically as well. The stacktrace that is similar is:
{code:java}
java.lang.ArrayIndexOutOfBoundsException: 4
   [junit4]   2>at 
org.apache.commons.lang3.time.FastDatePrinter$TextField.appendTo(FastDatePrinter.java:901)
 ~[commons-lang3-3.7.jar:3.7]
   [junit4]   2>at 
org.apache.commons.lang3.time.FastDatePrinter.applyRules(FastDatePrinter.java:573)
 ~[commons-lang3-3.7.jar:3.7]
   [junit4]   2>at 
org.apache.commons.lang3.time.FastDatePrinter.applyRulesToString(FastDatePrinter.java:455)
 ~[commons-lang3-3.7.jar:3.7]
   [junit4]   2>at 
org.apache.commons.lang3.time.FastDatePrinter.format(FastDatePrinter.java:446) 
~[commons-lang3-3.7.jar:3.7]
   [junit4]   2>at 
org.apache.commons.lang3.time.FastDateFormat.format(FastDateFormat.java:428) 
~[commons-lang3-3.7.jar:3.7]
   [junit4]   2>at 
org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.start(DirectoryScanner.java:281)
 ~[hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2>at 
org.apache.hadoop.hdfs.server.datanode.DataNode.initDirectoryScanner(DataNode.java:1090)
 ~[hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2>at 
org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1686)
 ~[hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2>at 
org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:390)
 ~[hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2>at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
 ~[hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2>at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:819)
 [hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2>at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
{code}
This happens when the locale is set. I haven't nailed down which locales all 
fail, but a simple reproducing test case without Hadoop or Lucene/Solr is 
shown above.

[jira] [Comment Edited] (SOLR-9515) Update to Hadoop 3

2019-01-30 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756430#comment-16756430
 ] 

Kevin Risden edited comment on SOLR-9515 at 1/30/19 7:03 PM:
-

[~gerlowskija] there are some more details on the PR in comments about 
HttpServer2. Near the bottom here: 
[https://github.com/apache/lucene-solr/pull/553]

 

Gist: the Lucene/Solr integration tests spin up a Hadoop cluster, which uses 
HttpServer2 (internal to Hadoop). HttpServer2 only works with Jetty 9.3, but we 
are on Jetty 9.4, so we copied/patched HttpServer2 to make the integration 
tests work. 


was (Author: risdenk):
[~gerlowskija] there are some more details on the PR in comments about 
HttpServer2. Near the bottom here: 
[https://github.com/apache/lucene-solr/pull/553]

 

Gist: Lucene in integration tests spins up a Hadoop cluster which HttpServer2 
(from Hadoop internally). It only works on Jetty 9.3 but we have Jetty 9.4. So 
copied/patched HttpServer2 to make the integration tests work. 

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public (Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.


