Re: [VOTE] Release Apache Hadoop 3.4.0 (RC3)
Thanks Ayush for highlighting this information. Absolutely true, we should
only count the RM's vote once there is an explicit +1 here.

Best Regards,
- He Xiaoqiao

On Thu, Mar 14, 2024 at 3:55 AM Ayush Saxena wrote:

> > The count should include your own vote, which would make the current
> > summary 5 +1 binding and 1 +1 non-binding. Let's re-count at the deadline.
>
> Just on the process: the release manager needs to "explicitly" vote like
> anyone else before counting their own vote. There has been a lot of
> discussion around that in multiple places & the official Apache doc has
> been updated as well [1]; the last paragraph reads:
>
> "Note that there is no implicit +1 from the release manager, or from
> anyone in any ASF vote. Only explicit votes are valid. The release manager
> is encouraged to vote on releases, like any reviewer would do."
>
> So, do put an explicit +1 before you count yourself. Good luck!!!
>
> -Ayush
>
> [1] https://www.apache.org/foundation/voting.html#ReleaseVotes
>
> On Tue, 12 Mar 2024 at 17:27, Steve Loughran wrote:
>
>> followup: overnight work happy too.
>>
>> one interesting pain point is that on a Raspberry Pi 64-bit OS,
>> checknative complains that libcrypto is missing:
>>
>> > bin/hadoop checknative
>>
>> 2024-03-12 11:50:24,359 INFO bzip2.Bzip2Factory: Successfully loaded &
>> initialized native-bzip2 library system-native
>> 2024-03-12 11:50:24,363 INFO zlib.ZlibFactory: Successfully loaded &
>> initialized native-zlib library
>> 2024-03-12 11:50:24,370 WARN erasurecode.ErasureCodeNative: ISA-L support
>> is not available in your platform... using builtin-java codec where
>> applicable
>> 2024-03-12 11:50:24,429 INFO nativeio.NativeIO: The native code was built
>> without PMDK support.
>> 2024-03-12 11:50:24,431 WARN crypto.OpensslCipher: Failed to load OpenSSL
>> Cipher.
>> java.lang.UnsatisfiedLinkError: Cannot load libcrypto.so (libcrypto.so:
>> cannot open shared object file: No such file or directory)!
>>         at org.apache.hadoop.crypto.OpensslCipher.initIDs(Native Method)
>>         at org.apache.hadoop.crypto.OpensslCipher.<clinit>(OpensslCipher.java:90)
>>         at org.apache.hadoop.util.NativeLibraryChecker.main(NativeLibraryChecker.java:111)
>> Native library checking:
>> hadoop:  true /home/stevel/Projects/hadoop-release-support/target/arm-untar/hadoop-3.4.0/lib/native/libhadoop.so.1.0.0
>> zlib:    true /lib/aarch64-linux-gnu/libz.so.1
>> zstd:    true /lib/aarch64-linux-gnu/libzstd.so.1
>> bzip2:   true /lib/aarch64-linux-gnu/libbz2.so.1
>> openssl: false Cannot load libcrypto.so (libcrypto.so: cannot open shared
>> object file: No such file or directory)!
>> ISA-L:   false libhadoop was built without ISA-L support
>> PMDK:    false The native code was built without PMDK support.
>>
>> which happens because it's not in /lib/aarch64-linux-gnu but instead in
>> /usr/lib/aarch64-linux-gnu/:
>>
>> ls -l /usr/lib/aarch64-linux-gnu/libcrypto*
>> -rw-r--r-- 1 root root 2739952 Sep 19 13:09 /usr/lib/aarch64-linux-gnu/libcrypto.so.1.1
>> -rw-r--r-- 1 root root 4466856 Oct 27 13:40 /usr/lib/aarch64-linux-gnu/libcrypto.so.3
>>
>> Has anyone got any insights on how I should set up this (Debian-based)
>> OS here? I know it's only a small box, but with arm64 VMs becoming
>> available in cloud infras, it'd be good to know if they are similar.
>>
>> Note: checknative itself is happy, but checknative -a will fail because
>> of this - though it's an OS setup issue, nothing related to the hadoop
>> binaries.
>>
>> steve
>>
>> On Tue, 12 Mar 2024 at 02:26, Xiaoqiao He wrote:
>>
>> > Hi Shilun, the count should include your own vote, which would make
>> > the current summary 5 +1 binding and 1 +1 non-binding. Let's re-count
>> > at the deadline. Thanks again.
>> >
>> > Best Regards,
>> > - He Xiaoqiao
>> >
>> > On Tue, Mar 12, 2024 at 9:00 AM slfan1989 wrote:
>> >
>> > > As of now, we have collected 5 affirmative votes, with 4 votes
>> > > binding and 1 vote non-binding.
>> > >
>> > > Thank you very much for voting and verifying!
>> > >
>> > > This voting will continue until March 15th, this Friday.
>> > >
>> > > Best Regards,
>> > > Shilun Fan.
>> > >
>> > > On Tue, Mar 12, 2024 at 4:29 AM Steve Loughran wrote:
>> > >
>> > > > +1 binding
>> > > >
>> > > > (sorry, this had ended up in the yarn-dev folder, otherwise I'd
>> > > > have seen it earlier. been testing it this afternoon:
>> > > >
>> > > > pulled the latest version of
>> > > > https://github.com/apache/hadoop-release-support
>> > > > (note, this module is commit-then-review; whoever is working
>> > > > on/validating a release can commit as they go along. This is not
>> > > > production code...)
>> > > >
>> > > > * went through the "validating a release" step, validating maven
>> > > >   artifacts
>> > > > * building the same downstream modules which built for me last
>> > > >   time (avro too complex; hboss not aws v2 in apache yet)
>> > > >
>> > > > spark build is still ongoing, but I'm not going to wait. It is
>> > > > building, which is key.
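For readers hitting the same checknative failure: the unversioned
libcrypto.so name that the native code loads is, on Debian-family systems,
normally only shipped by the OpenSSL development package. A hedged setup
sketch, assuming Debian arm64 paths as in the listing above; verify that
your libhadoop build is compatible with OpenSSL 3.x before relying on the
symlink:

{code:bash}
# The unversioned libcrypto.so symlink ships in the -dev package:
sudo apt-get install libssl-dev

# Alternatively, add the symlink by hand against whichever versioned
# library is present (here the OpenSSL 3 one):
sudo ln -s /usr/lib/aarch64-linux-gnu/libcrypto.so.3 \
           /usr/lib/aarch64-linux-gnu/libcrypto.so

# Re-run the full check; openssl should now report true:
bin/hadoop checknative -a
{code}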
Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/644/

[Mar 12, 2024, 4:25:33 AM] (github) HDFS-17422. Enhance the stability of the unit test TestDFSAdmin (#6621). Contributed by lei w and Hualong Zhang.
[Mar 12, 2024, 10:36:43 AM] (github) HDFS-17391. Adjust the checkpoint io buffer size to the chunk size (#6594). Contributed by lei w.
[Mar 12, 2024, 6:49:06 PM] (github) HADOOP-19066. S3A: AWS SDK V2 - Enabling FIPS should be allowed with central endpoint (#6539)
[Mar 12, 2024, 8:16:47 PM] (github) HADOOP-19088. Use jersey-json 1.22.0 (#6585)

-1 overall

The following subsystems voted -1:
    blanks hadolint mvnsite pathlen spotbugs unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    XML : Parsing Error(s):
        hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

    spotbugs : module:hadoop-common-project/hadoop-common
        Possible null pointer dereference in org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) due to return value of called method. Dereferenced at ValueQueue.java:[line 332]

    spotbugs : module:hadoop-common-project
        Possible null pointer dereference in org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) due to return value of called method. Dereferenced at ValueQueue.java:[line 332]

    spotbugs : module:hadoop-hdfs-project/hadoop-hdfs-client
        Redundant nullcheck of sockStreamList, which is known to be non-null in org.apache.hadoop.hdfs.PeerCache.getInternal(DatanodeID, boolean). Redundant null check at PeerCache.java:[line 158]

    spotbugs : module:hadoop-hdfs-project/hadoop-hdfs
        Redundant nullcheck of oldLock, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory). Redundant null check at DataStorage.java:[line 695]
        Redundant nullcheck of metaChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long, FileInputStream, FileChannel, String). Redundant null check at MappableBlockLoader.java:[line 138]
        Redundant nullcheck of blockChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId). Redundant null check at MemoryMappableBlockLoader.java:[line 75]
        Redundant nullcheck of blockChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(lon
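As background on the recurring ValueQueue finding above: it is the standard
spotbugs shape of dereferencing a possibly-null return value without a
guard. A minimal illustrative sketch in Java, with hypothetical names; this
is not the actual ValueQueue implementation:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

class KeyQueueSizes {
    private final Map<String, LinkedBlockingQueue<byte[]>> keyQueues =
            new ConcurrentHashMap<>();

    // The flagged shape: Map.get() may return null, and the result is
    // dereferenced without a check.
    int getSizeUnsafe(String keyName) {
        return keyQueues.get(keyName).size(); // possible NullPointerException
    }

    // Guarded form that satisfies the checker: treat an absent queue as empty.
    int getSize(String keyName) {
        LinkedBlockingQueue<byte[]> q = keyQueues.get(keyName);
        return (q == null) ? 0 : q.size();
    }
}
{code}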
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1527/

[Mar 12, 2024, 4:25:33 AM] (github) HDFS-17422. Enhance the stability of the unit test TestDFSAdmin (#6621). Contributed by lei w and Hualong Zhang.
[Mar 12, 2024, 10:36:43 AM] (github) HDFS-17391. Adjust the checkpoint io buffer size to the chunk size (#6594). Contributed by lei w.
[Mar 12, 2024, 6:49:06 PM] (github) HADOOP-19066. S3A: AWS SDK V2 - Enabling FIPS should be allowed with central endpoint (#6539)
[Mar 12, 2024, 8:16:47 PM] (github) HADOOP-19088. Use jersey-json 1.22.0 (#6585)

-1 overall

The following subsystems voted -1:
    blanks hadolint pathlen spotbugs xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    XML : Parsing Error(s):
        hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

    spotbugs : module:hadoop-common-project/hadoop-common
        Possible null pointer dereference in org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) due to return value of called method. Dereferenced at ValueQueue.java:[line 332]

    spotbugs : module:hadoop-common-project
        Possible null pointer dereference in org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) due to return value of called method. Dereferenced at ValueQueue.java:[line 332]

    spotbugs : module:hadoop-hdfs-project/hadoop-hdfs-client
        Redundant nullcheck of sockStreamList, which is known to be non-null in org.apache.hadoop.hdfs.PeerCache.getInternal(DatanodeID, boolean). Redundant null check at PeerCache.java:[line 158]

    spotbugs : module:hadoop-hdfs-project/hadoop-hdfs-httpfs
        Redundant nullcheck of xAttrs, which is known to be non-null in org.apache.hadoop.fs.http.client.HttpFSFileSystem.getXAttr(Path, String). Redundant null check at HttpFSFileSystem.java:[line 1373]

    spotbugs : module:hadoop-yarn-project/hadoop-yarn
        org.apache.hadoop.yarn.service.ServiceScheduler$1.load(ConfigFile) may return null, but is declared @Nonnull. At ServiceScheduler.java:[line 555]

    spotbugs : module:hadoop-hdfs-project/hadoop-hdfs-rbf
        Redundant nullcheck of dns, which is known to be non-null in org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getCachedDatanodeReport(HdfsConstants$DatanodeReportType). Redundant null check at RouterRpcServer.java:[line 1093]

    spotbugs : module:hadoop-hdfs-project
        Redundant nullcheck of xAttrs, which is known to be non-null in org.apache.hadoop.fs.http.client.HttpFSFileSystem.getXAttr(Path, String). Redundant null check at HttpFSFileSystem.java:[line 1373]
        Redundant nullcheck of sockStreamList, which is known to be non-null in org.apache.hadoop.hdfs.PeerCache.getInternal(DatanodeID, boolean). Redundant nu
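The ServiceScheduler finding above is a different spotbugs shape: a method
declared @Nonnull whose implementation can still return null. A hypothetical
reduction, assuming the JSR-305 javax.annotation.Nonnull annotation is on
the classpath; names are illustrative, not the actual ServiceScheduler code:

{code:java}
import java.util.Map;
import javax.annotation.Nonnull;

class ConfigFileLoader {
    private final Map<String, String> cache = Map.of();

    // Flagged: declared @Nonnull, but Map.get() may return null.
    @Nonnull
    String loadUnsafe(String name) {
        return cache.get(name);
    }

    // Compliant: fail loudly rather than return null from a @Nonnull method.
    @Nonnull
    String load(String name) {
        String v = cache.get(name);
        if (v == null) {
            throw new IllegalStateException("no config file named " + name);
        }
        return v;
    }
}
{code}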
Re: [VOTE] Release Apache Hadoop 3.4.0 (RC3)
> The count should include your own vote, which would make the current
> summary 5 +1 binding and 1 +1 non-binding. Let's re-count at the deadline.

Just on the process: the release manager needs to "explicitly" vote like
anyone else before counting their own vote. There has been a lot of
discussion around that in multiple places & the official Apache doc has
been updated as well [1]; the last paragraph reads:

"Note that there is no implicit +1 from the release manager, or from
anyone in any ASF vote. Only explicit votes are valid. The release manager
is encouraged to vote on releases, like any reviewer would do."

So, do put an explicit +1 before you count yourself. Good luck!!!

-Ayush

[1] https://www.apache.org/foundation/voting.html#ReleaseVotes

On Tue, 12 Mar 2024 at 17:27, Steve Loughran wrote:

> followup: overnight work happy too.
>
> one interesting pain point is that on a Raspberry Pi 64-bit OS,
> checknative complains that libcrypto is missing:
>
> > bin/hadoop checknative
>
> 2024-03-12 11:50:24,359 INFO bzip2.Bzip2Factory: Successfully loaded &
> initialized native-bzip2 library system-native
> 2024-03-12 11:50:24,363 INFO zlib.ZlibFactory: Successfully loaded &
> initialized native-zlib library
> 2024-03-12 11:50:24,370 WARN erasurecode.ErasureCodeNative: ISA-L support
> is not available in your platform... using builtin-java codec where
> applicable
> 2024-03-12 11:50:24,429 INFO nativeio.NativeIO: The native code was built
> without PMDK support.
> 2024-03-12 11:50:24,431 WARN crypto.OpensslCipher: Failed to load OpenSSL
> Cipher.
> java.lang.UnsatisfiedLinkError: Cannot load libcrypto.so (libcrypto.so:
> cannot open shared object file: No such file or directory)!
>         at org.apache.hadoop.crypto.OpensslCipher.initIDs(Native Method)
>         at org.apache.hadoop.crypto.OpensslCipher.<clinit>(OpensslCipher.java:90)
>         at org.apache.hadoop.util.NativeLibraryChecker.main(NativeLibraryChecker.java:111)
> Native library checking:
> hadoop:  true /home/stevel/Projects/hadoop-release-support/target/arm-untar/hadoop-3.4.0/lib/native/libhadoop.so.1.0.0
> zlib:    true /lib/aarch64-linux-gnu/libz.so.1
> zstd:    true /lib/aarch64-linux-gnu/libzstd.so.1
> bzip2:   true /lib/aarch64-linux-gnu/libbz2.so.1
> openssl: false Cannot load libcrypto.so (libcrypto.so: cannot open shared
> object file: No such file or directory)!
> ISA-L:   false libhadoop was built without ISA-L support
> PMDK:    false The native code was built without PMDK support.
>
> which happens because it's not in /lib/aarch64-linux-gnu but instead in
> /usr/lib/aarch64-linux-gnu/:
>
> ls -l /usr/lib/aarch64-linux-gnu/libcrypto*
> -rw-r--r-- 1 root root 2739952 Sep 19 13:09 /usr/lib/aarch64-linux-gnu/libcrypto.so.1.1
> -rw-r--r-- 1 root root 4466856 Oct 27 13:40 /usr/lib/aarch64-linux-gnu/libcrypto.so.3
>
> Has anyone got any insights on how I should set up this (Debian-based)
> OS here? I know it's only a small box, but with arm64 VMs becoming
> available in cloud infras, it'd be good to know if they are similar.
>
> Note: checknative itself is happy, but checknative -a will fail because
> of this - though it's an OS setup issue, nothing related to the hadoop
> binaries.
>
> steve
>
> On Tue, 12 Mar 2024 at 02:26, Xiaoqiao He wrote:
>
> > Hi Shilun, the count should include your own vote, which would make
> > the current summary 5 +1 binding and 1 +1 non-binding. Let's re-count
> > at the deadline. Thanks again.
> >
> > Best Regards,
> > - He Xiaoqiao
> >
> > On Tue, Mar 12, 2024 at 9:00 AM slfan1989 wrote:
> >
> > > As of now, we have collected 5 affirmative votes, with 4 votes
> > > binding and 1 vote non-binding.
> > >
> > > Thank you very much for voting and verifying!
> > >
> > > This voting will continue until March 15th, this Friday.
> > >
> > > Best Regards,
> > > Shilun Fan.
> > >
> > > On Tue, Mar 12, 2024 at 4:29 AM Steve Loughran wrote:
> > >
> > > > +1 binding
> > > >
> > > > (sorry, this had ended up in the yarn-dev folder, otherwise I'd
> > > > have seen it earlier. been testing it this afternoon:
> > > >
> > > > pulled the latest version of
> > > > https://github.com/apache/hadoop-release-support
> > > > (note, this module is commit-then-review; whoever is working
> > > > on/validating a release can commit as they go along. This is not
> > > > production code...)
> > > >
> > > > * went through the "validating a release" step, validating maven
> > > >   artifacts
> > > > * building the same downstream modules which built for me last
> > > >   time (avro too complex; hboss not aws v2 in apache yet)
> > > >
> > > > spark build is still ongoing, but I'm not going to wait. It is
> > > > building, which is key.
> > > >
> > > > The core changes I needed in are at the dependency level and I've
> > > > verified they are good.
> > > >
> > > > Oh, and I've also got my Raspberry Pi 5 doing the download of the
> > > > arm stuff for its checknative; not expecting problems.
> > > >
> > > > So: I've got some stuff still ongo
[jira] [Created] (HADOOP-19110) ITestExponentialRetryPolicy failing in branch-3.4
Mukund Thakur created HADOOP-19110:
--------------------------------------

             Summary: ITestExponentialRetryPolicy failing in branch-3.4
                 Key: HADOOP-19110
                 URL: https://issues.apache.org/jira/browse/HADOOP-19110
             Project: Hadoop Common
          Issue Type: Bug
          Components: fs/azure
    Affects Versions: 3.4.0
            Reporter: Mukund Thakur
            Assignee: Anuj Modi

{code:java}
[ERROR] Tests run: 6, Failures: 0, Errors: 1, Skipped: 2, Time elapsed: 91.416 s <<< FAILURE! - in org.apache.hadoop.fs.azurebfs.services.ITestExponentialRetryPolicy
[ERROR] testThrottlingIntercept(org.apache.hadoop.fs.azurebfs.services.ITestExponentialRetryPolicy)  Time elapsed: 0.622 s  <<< ERROR!
Failure to initialize configuration for dummy.dfs.core.windows.net key ="null": Invalid configuration value detected for fs.azure.account.key
	at org.apache.hadoop.fs.azurebfs.services.SimpleKeyProvider.getStorageAccountKey(SimpleKeyProvider.java:53)
	at org.apache.hadoop.fs.azurebfs.AbfsConfiguration.getStorageAccountKey(AbfsConfiguration.java:646)
	at org.apache.hadoop.fs.azurebfs.services.ITestAbfsClient.createTestClientFromCurrentContext(ITestAbfsClient.java:339)
	at org.apache.hadoop.fs.azurebfs.services.ITestExponentialRetryPolicy.testThrottlingIntercept(ITestExponentialRetryPolicy.java:106)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
{code}
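For anyone reproducing this locally: the stack shows the test aborting
because no storage account key can be resolved for the test account. When
the hadoop-azure integration tests are run against a real account, the key
is normally supplied through an auth-keys.xml test resource. A sketch with
placeholder account name and key; both values here are hypothetical:

{code:xml}
<!-- src/test/resources/auth-keys.xml (never committed); placeholder values -->
<configuration>
  <property>
    <name>fs.azure.abfs.account.name</name>
    <value>YOURACCOUNT.dfs.core.windows.net</value>
  </property>
  <property>
    <name>fs.azure.account.key.YOURACCOUNT.dfs.core.windows.net</name>
    <value>base64-encoded-storage-account-access-key</value>
  </property>
</configuration>
{code}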
[jira] [Resolved] (HADOOP-19066) AWS SDK V2 - Enabling FIPS should be allowed with central endpoint
[ https://issues.apache.org/jira/browse/HADOOP-19066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-19066.
-------------------------------------
    Fix Version/s: 3.4.1
       Resolution: Fixed

> AWS SDK V2 - Enabling FIPS should be allowed with central endpoint
> ------------------------------------------------------------------
>
>                 Key: HADOOP-19066
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19066
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.5.0, 3.4.1
>            Reporter: Viraj Jasani
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.5.0, 3.4.1
>
> FIPS support can be enabled by setting "fs.s3a.endpoint.fips". Since the
> SDK considers overriding the endpoint and enabling FIPS as mutually
> exclusive, we fail fast if fs.s3a.endpoint is set together with FIPS
> support (details in HADOOP-18975).
> Now, we no longer override the SDK endpoint for the central endpoint,
> since we enable cross-region access (details in HADOOP-19044), but we
> would still fail fast if the endpoint is central and FIPS is enabled.
> Changes proposed:
> * S3A to fail fast only if FIPS is enabled and a non-central endpoint is
> configured.
> * Tests to ensure the S3 bucket is accessible with default region
> us-east-2 with cross-region access (expected with the central endpoint).
> * Document FIPS support with the central endpoint in connecting.html.
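To make the fixed behavior concrete, a hedged S3A configuration sketch
follows. After HADOOP-19066, the FIPS flag with the central endpoint left
in place should initialize; combining FIPS with an explicit non-central
endpoint override should still fail fast, per HADOOP-18975. Property names
are the documented S3A ones; the regional endpoint is just an example:

{code:xml}
<configuration>
  <!-- Allowed after HADOOP-19066: FIPS together with the central endpoint.
       Leaving fs.s3a.endpoint unset (or set to s3.amazonaws.com) counts as
       central; cross-region access resolves the actual bucket region. -->
  <property>
    <name>fs.s3a.endpoint.fips</name>
    <value>true</value>
  </property>

  <!-- Still rejected at filesystem initialization: FIPS plus an explicit
       non-central endpoint override, e.g.
       fs.s3a.endpoint = s3.eu-west-1.amazonaws.com -->
</configuration>
{code}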
[jira] [Created] (HADOOP-19109) checkPermission should not ignore original AccessControlException
Xiaobao Wu created HADOOP-19109:
-----------------------------------

             Summary: checkPermission should not ignore original AccessControlException
                 Key: HADOOP-19109
                 URL: https://issues.apache.org/jira/browse/HADOOP-19109
             Project: Hadoop Common
          Issue Type: Improvement
          Components: hdfs-client
    Affects Versions: 3.3.0
            Reporter: Xiaobao Wu

In an environment where the *Ranger-HDFS* plugin is enabled, I looked at the
log information for an *AccessControlException* caused by a *du* call, and
found that the printed log information is not accurate: checkPermission
discards the original AccessControlException, which makes it hard to judge
the real cause of the exception. At least part of the original log
information should be printed.

AccessControlException information currently printed:
{code:java}
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=test, access=READ_EXECUTE, inode="/warehouse/tablespace/managed/hive/test.db/stu/dt=2024-01-17":hive:hadoop:drwxrwx---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:226)
{code}

The original AccessControlException information:
{code:java}
org.apache.hadoop.security.AccessControlException: Permission denied: user=test, access=READ_EXECUTE, inode="dt=2024-01-17":hive:hadoop:drwxrwx---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
{code}

Comparing the two, the inode information and the exception stack printed in
the log are not accurate. The *inode* information in the original
AccessControlException made me realize that the Ranger-HDFS plugin in the
current environment does not incorporate RANGER-2297, so I think it is
necessary to surface this part of the log information.
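The requested improvement amounts to ordinary cause-chaining: when a caller
such as checkPermission replaces the plugin's exception, the original should
be attached as the cause so its inode detail and stack trace survive into
the log. A minimal sketch of the pattern, with hypothetical method names;
this is not the actual FSPermissionChecker code:

{code:java}
import org.apache.hadoop.security.AccessControlException;

class PermissionCheckExample {

    // Hypothetical stand-in for the external (e.g. Ranger) permission check.
    static void externalCheck(String inode) throws AccessControlException {
        throw new AccessControlException(
                "Permission denied: user=test, access=READ_EXECUTE, inode=\"" + inode + "\"");
    }

    static void checkPermission(String fullPath, String component)
            throws AccessControlException {
        try {
            externalCheck(component);
        } catch (AccessControlException original) {
            // Re-throw with the full path for context, but keep the original
            // exception (including its stack trace) as the cause instead of
            // discarding it.
            AccessControlException wrapped = new AccessControlException(
                    "Permission denied: inode=\"" + fullPath + "\"");
            wrapped.initCause(original);
            throw wrapped;
        }
    }
}
{code}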
Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1330/

No changes

-1 overall

The following subsystems voted -1:
    asflicense hadolint mvnsite pathlen unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    Failed junit tests:
        hadoop.fs.TestFileUtil
        hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
        hadoop.hdfs.TestLeaseRecovery2
        hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain
        hadoop.hdfs.TestFileLengthOnClusterRestart
        hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
        hadoop.hdfs.TestDFSInotifyEventInputStream
        hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap
        hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
        hadoop.fs.viewfs.TestViewFileSystemHdfs
        hadoop.hdfs.server.federation.router.TestRouterQuota
        hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat
        hadoop.hdfs.server.federation.resolver.order.TestLocalResolver
        hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver
        hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
        hadoop.mapreduce.lib.input.TestLineRecordReader
        hadoop.mapred.TestLineRecordReader
        hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter
        hadoop.resourceestimator.service.TestResourceEstimatorService
        hadoop.resourceestimator.solver.impl.TestLpSolver
        hadoop.yarn.sls.appmaster.TestAMSimulator
        hadoop.yarn.sls.TestSLSRunner
        hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceAllocator
        hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceHandlerImpl
        hadoop.yarn.server.resourcemanager.TestClientRMService
        hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore
        hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker

    cc:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1330/artifact/out/diff-compile-cc-root.txt [4.0K]

    javac:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1330/artifact/out/diff-compile-javac-root.txt [488K]

    checkstyle:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1330/artifact/out/diff-checkstyle-root.txt [14M]

    hadolint:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1330/artifact/out/diff-patch-hadolint.txt [4.0K]

    mvnsite:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1330/artifact/out/patch-mvnsite-root.txt [568K]

    pathlen:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1330/artifact/out/pathlen.txt [12K]

    pylint:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1330/artifact/out/diff-patch-pylint.txt [20K]

    shellcheck:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1330/artifact/out/diff-patch-shellcheck.txt [72K]

    whitespace:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1330/artifact/out/whitespace-eol.txt [12M]
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1330/artifact/out/whitespace-tabs.txt [1.3M]

    javadoc:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1330/artifact/out/patch-javadoc-root.txt [36K]

    unit:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1330/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [220K]
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1330/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [1.8M]
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1330/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [36K]
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1330/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt [16K]
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1330/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt [104K]
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1330/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt [20K]
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1330/artifact/out/patch-unit-hadoop-tools_hadoop-resourceestimator.txt [16K]