Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/465/

No changes

[Error replacing 'FILE' - Workspace is not accessible]

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-16626) S3A ITestRestrictedReadAccess fails
[ https://issues.apache.org/jira/browse/HADOOP-16626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-16626.
-------------------------------------
    Fix Version/s: 3.3.0
       Resolution: Fixed

merged to trunk. Will review other tests which try to unset config options to see if they are also exposed to this "quirk" of the Configuration class

> S3A ITestRestrictedReadAccess fails
> -----------------------------------
>
>                 Key: HADOOP-16626
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16626
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>            Reporter: Siddharth Seth
>            Assignee: Steve Loughran
>            Priority: Major
>             Fix For: 3.3.0
>
> Just tried running the S3A test suite. Consistently seeing the following. Command used:
> {code}
> mvn -T 1C verify -Dparallel-tests -DtestsThreadCount=12 -Ds3guard -Dauth -Ddynamo -Dtest=moo -Dit.test=ITestRestrictedReadAccess
> {code}
> cc [~ste...@apache.org]
> {code}
> -------------------------------------------------------------------------------
> Test set: org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess
> -------------------------------------------------------------------------------
> Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 5.335 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess
> testNoReadAccess[raw](org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess)  Time elapsed: 2.841 s  <<< ERROR!
> java.nio.file.AccessDeniedException: test/testNoReadAccess-raw/noReadDir/emptyDir/: getFileStatus on test/testNoReadAccess-raw/noReadDir/emptyDir/: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: FE8B4D6F25648BCD; S3 Extended Request ID: hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=), S3 Extended Request ID: hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=:403 Forbidden
> 	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:244)
> 	at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2777)
> 	at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2705)
> 	at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2589)
> 	at org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:2377)
> 	at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$10(S3AFileSystem.java:2356)
> 	at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:110)
> 	at org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:2356)
> 	at org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.checkBasicFileOperations(ITestRestrictedReadAccess.java:360)
> 	at org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.testNoReadAccess(ITestRestrictedReadAccess.java:282)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> 	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 	at java.lang.Thread.run(Thread.java:748)
> Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: FE8B4D6F25648BCD; S3 Extended Request ID: hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=), S3 Extended Request ID: hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=
> 	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1
[jira] [Reopened] (HADOOP-16579) Upgrade to Apache Curator 4.2.0 in Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-16579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang reopened HADOOP-16579:
--------------------------------------

It's causing unit test failures. I'm going to revert this change.

> Upgrade to Apache Curator 4.2.0 in Hadoop
> -----------------------------------------
>
>                 Key: HADOOP-16579
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16579
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Mate Szalay-Beko
>            Assignee: Norbert Kalmár
>            Priority: Major
>             Fix For: 3.3.0
>
> Currently in Hadoop we are using [ZooKeeper version 3.4.13|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L90]. ZooKeeper 3.5.5 is the latest stable Apache ZooKeeper release. It contains many new features (including SSL-related improvements which can be very important for production use; see [the release notes|https://zookeeper.apache.org/doc/r3.5.5/releasenotes.html]).
> Apache Curator is a high-level ZooKeeper client library that makes it easier to use the low-level ZooKeeper API. Currently [in Hadoop we are using Curator 2.13.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L91] and [in Ozone we use Curator 2.12.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/pom.ozone.xml#L146]. Curator 2.x supports only the ZooKeeper 3.4.x releases, while Curator 3.x is compatible only with the new ZooKeeper 3.5.x releases. Fortunately, the latest Curator 4.x versions are compatible with both ZooKeeper 3.4.x and 3.5.x (see [the relevant Curator page|https://curator.apache.org/zk-compatibility.html]). Many Apache projects have already migrated to Curator 4 (like HBase, Phoenix, Druid, etc.); other components are doing it right now (e.g. Hive).
> *The aims of this task are* to:
> - change the Curator version in Hadoop to the latest stable 4.x version (currently 4.2.0)
> - also make sure we don't have multiple ZooKeeper versions on the classpath, to avoid runtime problems (it is [recommended|https://curator.apache.org/zk-compatibility.html] to exclude the ZooKeeper which comes with Curator, so that only a single ZooKeeper version is used at runtime in Hadoop)
> In this ticket we still don't want to change the default ZooKeeper version in Hadoop; we only want to make it possible for the community to build / use Hadoop with the new ZooKeeper (e.g. if they need to secure the ZooKeeper communication with SSL, which is only supported in the new ZooKeeper version). Upgrading to Curator 4.x should keep Hadoop compatible with both ZooKeeper 3.4 and 3.5.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
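The exclusion step described in the ticket could look roughly like the following hadoop-project/pom.xml fragment. This is only a sketch of the approach, not the committed patch: the choice of artifact (curator-framework) and the exact exclusion shape are assumptions based on the ticket text and the Curator compatibility page.

```xml
<!-- Hypothetical Maven dependency entry: pull in Curator 4.2.0 but
     exclude its transitive ZooKeeper, so that the ZooKeeper version on
     the classpath stays the one declared by Hadoop itself. -->
<dependency>
  <groupId>org.apache.curator</groupId>
  <artifactId>curator-framework</artifactId>
  <version>4.2.0</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

With the exclusion in place, `mvn dependency:tree` should show only a single `org.apache.zookeeper:zookeeper` entry, which is the "single ZooKeeper version used at runtime" condition the ticket asks for.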
[jira] [Created] (HADOOP-16636) No rule to make target PROTOBUF_PROTOC_EXECUTABLE-NOTFOUND when compiling trunk
Jonathan Hung created HADOOP-16636:
--------------------------------------

             Summary: No rule to make target PROTOBUF_PROTOC_EXECUTABLE-NOTFOUND when compiling trunk
                 Key: HADOOP-16636
                 URL: https://issues.apache.org/jira/browse/HADOOP-16636
             Project: Hadoop Common
          Issue Type: Bug
            Reporter: Jonathan Hung

{noformat}
[WARNING] make[1]: Leaving directory '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target'
[WARNING] Makefile:127: recipe for target 'all' failed
[WARNING] make[2]: *** No rule to make target '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/proto/PROTOBUF_PROTOC_EXECUTABLE-NOTFOUND', needed by 'main/native/libhdfspp/lib/proto/ClientNamenodeProtocol.hrpc.inl'. Stop.
[WARNING] make[1]: *** [main/native/libhdfspp/lib/proto/CMakeFiles/proto_obj.dir/all] Error 2
[WARNING] make[1]: *** Waiting for unfinished jobs
[WARNING] make: *** [all] Error 2
{noformat}

e.g. here: [https://builds.apache.org/job/PreCommit-YARN-Build/24911/artifact/out/patch-compile-root.txt]

Not sure exactly what changed here, but some online resources suggest installing protobuf-compiler.
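The `PROTOBUF_PROTOC_EXECUTABLE-NOTFOUND` path in the make error is the tell-tale `-NOTFOUND` value CMake substitutes when `find_package(Protobuf)` cannot locate the protoc binary. A quick pre-build check along these lines can confirm that diagnosis; this is an illustrative sketch, and `protobuf-compiler` is the Debian/Ubuntu package name mentioned in the report.

```shell
# If protoc is missing from PATH, CMake's Protobuf detection leaves the
# compiler variable set to "...-NOTFOUND", which make then treats as a
# literal (and unbuildable) file path.
if command -v protoc >/dev/null 2>&1; then
  echo "protoc found: $(protoc --version)"
else
  echo "protoc missing; on Debian/Ubuntu try: sudo apt-get install protobuf-compiler"
fi
```

After installing the compiler, re-running the native build from a clean `target/` directory forces CMake to re-detect protoc rather than reuse the cached `-NOTFOUND` value.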