[jira] [Created] (MAPREDUCE-7221) _SUCCESS file should be created with permissions of parent directory
Kevin created MAPREDUCE-7221:

    Summary: _SUCCESS file should be created with permissions of parent directory
    Key: MAPREDUCE-7221
    URL: https://issues.apache.org/jira/browse/MAPREDUCE-7221
    Project: Hadoop Map/Reduce
    Issue Type: Bug
    Components: client
    Affects Versions: 3.1.2
    Reporter: Kevin

The _SUCCESS file is created with the default user permissions ... often read-only to the current user. This can prevent a second user from performing operations in the directory that they would otherwise be able to perform. The permissions of the directory are a much better representation of the intended access, and the _SUCCESS file's permissions should not interfere with that intent.

The specific case where I ran into this being a problem: we have an "analytic pipeline" that can be used by many different users. At the end of the pipeline, statistics are appended to a Hive table. The _SUCCESS file currently has to be deleted with elevated privileges before the statistics can be appended at the end of the job.
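For context, a minimal sketch of the behaviour requested above, assuming the standard org.apache.hadoop.fs client API. createSuccessMarker is a hypothetical helper written for illustration only, not the existing FileOutputCommitter code:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class SuccessMarkerPermissions {

  // Hypothetical helper: create the empty _SUCCESS marker, then copy the
  // enclosing directory's permission bits onto it instead of leaving the
  // umask-derived default permissions.
  static void createSuccessMarker(FileSystem fs, Path outputDir) throws IOException {
    Path marker = new Path(outputDir, "_SUCCESS");

    // Create the empty marker file.
    fs.create(marker, true).close();

    // Apply the parent directory's permissions so a second user who can work
    // in the directory can also read, overwrite, or delete the marker.
    FsPermission dirPermission = fs.getFileStatus(outputDir).getPermission();
    fs.setPermission(marker, dirPermission);
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    createSuccessMarker(fs, new Path(args[0]));
  }
}

Copying the directory's permission bits is the most direct reading of the request; an alternative would be to relax only the group/other write bits on the marker, depending on how strictly "permissions of parent directory" is meant.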
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1179/

[Jun 25, 2019 2:07:22 AM] (aengineer) HDFS-14598. Findbugs warning caused by HDFS-12487. Contributed by He
[Jun 25, 2019 8:47:37 AM] (tasanuma) HADOOP-16390. escape javadoc in S3AUtils public methods
[Jun 25, 2019 10:11:09 AM] (aajisaka) HDFS-14590. [SBN Read] Add the document link to the top page.
[Jun 25, 2019 3:07:39 PM] (xkrogen) HDFS-12345 Add Dynamometer to hadoop-tools, a tool for scale testing the
[Jun 25, 2019 3:48:03 PM] (aengineer) HDDS-1723. Create new OzoneManagerLock class. (#1006)
[Jun 25, 2019 8:57:01 PM] (aengineer) HDDS-1709. TestScmSafeNode is flaky. Contributed by Elek, Marton.
[Jun 26, 2019 12:12:45 AM] (github) HDDS-1727. Use generation of resourceName for locks in OzoneManagerLock.

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    FindBugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore
        Unread field:TimelineEventSubDoc.java:[line 56]
        Unread field:TimelineMetricSubDoc.java:[line 44]

    FindBugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
        Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:[lines 39-346]
        Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
        org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]

    FindBugs : module:hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-blockgen
        Self assignment of field BlockInfo.replication in new org.apache.hadoop.tools.dynamometer.blockgenerator.BlockInfo(BlockInfo) At BlockInfo.java:[line 78]

    FindBugs : module:hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra
        org.apache.hadoop.tools.dynamometer.Client.addFileToZipRecursively(File, File, ZipOutputStream) may fail to clean up java.io.InputStream on checked exception; obligation to clean up resource created at Client.java:[line 859] is not discharged
        Exceptional return value of java.io.File.mkdirs() ignored in org.apache.hadoop.tools.dynamometer.DynoInfraUtils.fetchHadoopTarball(File, String, Configuration, Logger) At DynoInfraUtils.java:[line 138]
        Found reliance on default encoding in org.apache.hadoop.tools.dynamometer.SimulatedDataNodes.run(String[]): new java.io.InputStreamReader(InputStream) At SimulatedDataNodes.java:[line 149]
        org.apache.hadoop.tools.dynamometer.SimulatedDataNodes.run(String[]) invokes System.exit(...), which shuts down the entire virtual machine At SimulatedDataNodes.java:[line 123]
        org.apache.hadoop.tools.dynamometer.SimulatedDataNodes.run(String[]) may fail to close stream At SimulatedDataNodes.java:[line 149]

    Failed junit tests:
        hadoop.hdfs.server.diskbalancer.TestDiskBalancer
        hadoop.hdfs.web.TestWebHdfsTimeouts
        hadoop.hdfs.server.datanode.TestDirectoryScanner
        hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup
        hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken
        hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption
        hadoop.yarn.client.api.impl.TestAMRMClient
        hadoop.ozone.client.rpc.TestReadRetries
        hadoop.ozone.client.rpc.TestOzoneAtRestEncryption
        hadoop.ozone.client.rpc.TestOzoneRpcClient
        hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion
        hadoop.ozone.client.rpc.TestSecureOzoneRpcClient
        hadoop.ozone.om.TestScmSafeMode
        hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis

    cc: https://builds.apache.org/job/hadoop-qbt-trunk-j
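As an aside on the two WorkerId warnings listed in the report above (equals assumes the argument type and does not check for null): a minimal illustration of the flagged shape and the null-safe, type-checked form FindBugs expects. WorkerId here is a stand-in class with a single field, not the actual MaWo implementation:

public class WorkerId {
  private final String id;

  public WorkerId(String id) {
    this.id = id;
  }

  // Flagged shape (for reference): casts the argument without checking its
  // type or whether it is null.
  //   public boolean equals(Object o) { return id.equals(((WorkerId) o).id); }

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (!(o instanceof WorkerId)) {  // also rejects null
      return false;
    }
    return id.equals(((WorkerId) o).id);
  }

  @Override
  public int hashCode() {
    return id.hashCode();
  }
}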
Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/364/

[Jun 25, 2019 2:04:33 AM] (inigoiri) HDFS-14541. When evictableMmapped or evictable size is zero, do not
[Jun 25, 2019 4:17:38 AM] (weichiu) HDFS-14247. Repeat adding node description into network topology.

-1 overall

The following subsystems voted -1:
    asflicense compile findbugs hadolint mvnsite pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    XML : Parsing Error(s):
        hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
        hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

    FindBugs : module:hadoop-common-project/hadoop-common
        Class org.apache.hadoop.fs.GlobalStorageStatistics defines non-transient non-serializable instance field map In GlobalStorageStatistics.java

    FindBugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
        Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:[line 335]

    Failed junit tests:
        hadoop.hdfs.server.datanode.TestDirectoryScanner
        hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
        hadoop.hdfs.TestDFSClientRetries
        hadoop.hdfs.web.TestWebHdfsTimeouts
        hadoop.registry.secure.TestSecureLogins
        hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2
        hadoop.yarn.client.api.impl.TestNMClient
        hadoop.yarn.client.api.impl.TestAMRMProxy

    cc: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/364/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt [4.0K]
    javac: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/364/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt [328K]
    compile: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/364/artifact/out/patch-compile-root-jdk1.8.0_212.txt [616K]
    cc: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/364/artifact/out/patch-compile-root-jdk1.8.0_212.txt [616K]
    javac: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/364/artifact/out/patch-compile-root-jdk1.8.0_212.txt [616K]
    checkstyle: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/364/artifact/out//testptch/patchprocess/maven-patch-checkstyle-root.txt []
    hadolint: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/364/artifact/out/diff-patch-hadolint.txt [4.0K]
    mvnsite: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/364/artifact/out/patch-mvnsite-root.txt [88K]
    pathlen: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/364/artifact/out/pathlen.txt [12K]
    pylint: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/364/artifact/out/diff-patch-pylint.txt [24K]
    shellcheck: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/364/artifact/out/diff-patch-shellcheck.txt [72K]
    shelldocs: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/364/artifact/out/diff-patch-shelldocs.txt [48K]
    whitespace: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/364/artifact/out/whitespace-eol.txt [12M]
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/364/artifact/out/whitespace-tabs.txt [1.2M]
    xml: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/364/artifact/out/xml.txt [12K]
    findbugs: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/364/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html [8.0K]
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/364/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings
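For readers triaging the ColumnRWHelper warning above ("Boxed value is unboxed and then immediately reboxed"): a small self-contained illustration of that pattern, using made-up variable names rather than the real readResultsWithTimestamps code:

public class ReboxExample {
  public static void main(String[] args) {
    Object raw = Long.valueOf(42L);

    // Flagged shape: the (long) cast unboxes the value, and assigning the
    // result back to a Long variable immediately reboxes it.
    Long roundTripped = (long) (Long) raw;

    // Preferred shape: keep the boxed value as-is, no unbox/rebox round trip.
    Long direct = (Long) raw;

    System.out.println(roundTripped + " " + direct);
  }
}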