Re: [VOTE] Release Apache Hadoop 3.3.5 (RC2)

2023-03-07 Thread Steve Loughran
thanks.

now looking at a critical kerby CVE (
https://github.com/apache/hadoop/pull/5458) and revisiting one for netty
from last week.

I am never a fan of last-minute jar updates, but if we don't ship with them
we will be fielding jiras of "update kerby/netty on 3.3.5" for the next 18
months.

On Mon, 6 Mar 2023 at 23:29, Erik Krogen  wrote:

> > OK. Could you have a go with a (locally built) patch release?
>
> Just validated the same on the latest HEAD of branch-3.3.5, which includes
> the two HDFS Jiras I mentioned plus one additional one:
>
> * 143fe8095d4 (HEAD -> branch-3.3.5) 2023-03-06 HDFS-16934.
>   TestDFSAdmin.testAllDatanodesReconfig regression (#5434)
>   [slfan1989 <55643692+slfan1...@users.noreply.github.com>]
> * d4ea9687a8e 2023-03-03 HDFS-16923. [SBN read] getlisting RPC to observer
>   will throw NPE if path does not exist (#5400)
>   [ZanderXu <zande...@apache.org>]
> * 44bf8aadedf 2023-03-03 HDFS-16832. [SBN READ] Follow-on to HDFS-16732.
>   Fix NPE when check the block location of empty directory (#5099)
>   [zhengchenyu ]
> * 72f8c2a4888 (tag: release-3.3.5-RC2) 2023-02-25 HADOOP-18641. Cloud
>   connector dependency and LICENSE fixup. (#5429)
>   [Steve Loughran <ste...@cloudera.com>]
>
> On Mon, Mar 6, 2023 at 2:17 AM Steve Loughran  wrote:
>
> > I looked at that test and wondered if it was just being brittle to
> > time. I'm not a fan of those (there's one in abfs which is particularly
> > bad for me); maybe we could see if the test can be cut, as it is quite
> > a slow one.
> >
> > On Sat, 4 Mar 2023 at 18:28, Viraj Jasani  wrote:
> >
> > > A minor update on ITestS3AConcurrentOps#testParallelRename:
> > >
> > > I was previously connected to a VPN, which was throttling bandwidth.
> > > I ran the test again today without the VPN and had no issues (earlier,
> > > only 40% of the overall putObject calls completed within the timeout).
> > >
> > >
> > > On Sat, Mar 4, 2023 at 4:29 AM Steve Loughran  wrote:
> > >
> > > > On Sat, 4 Mar 2023 at 01:47, Erik Krogen  wrote:
> > > >
> > > > > Thanks Steve. I see now that the branch cut was way back in
> > > > > October, so I definitely understand your frustration here!
> > > > >
> > > > > This made me realize that HDFS-16832, which resolves a very
> > > > > similar issue to the aforementioned HDFS-16923, is also missing
> > > > > from the RC. I erroneously marked it with a fix version of 3.3.5;
> > > > > that was before the initial 3.3.5 RC was made and I didn't notice
> > > > > the branch was cut. My apologies for that. I've pushed both
> > > > > HDFS-16832 and HDFS-16923 to branch-3.3.5, so they are ready
> > > > > if/when an RC3 is cut.
> > > > >
> > > >
> > > > thanks.
> > > >
> > > > >
> > > > > In the meantime, I tested for RC2 that a local cluster of NN +
> > > > > standby + observer + QJM works as expected for some basic HDFS
> > > > > commands.
> > > > >
> > > >
> > > > OK. Could you have a go with a (locally built) patch release?
> > > >
> > > > >
> > > > > On Fri, Mar 3, 2023 at 2:52 AM Steve Loughran  wrote:
> > > > >
> > > > >> shipping broken hdfs isn't something we'd want to do, but if we
> > > > >> can be confident that all other issues can be addressed in RC3
> > > > >> then I'd be happy.
> > > > >>
> > > > >> On Fri, 3 Mar 2023 at 05:09, Ayush Saxena  wrote:
> > > > >>
> > > > >> >> I will highlight that I am completely fed up with doing this
> > > > >> >> release and really want to get it out of the way, for which I
> > > > >> >> depend on support from as many other developers as possible.
> > > > >> >
> > > > >> > hmm, I can feel the pain. I tried to find a config or workaround
> > > > >> > which could dodge this HDFS issue, but unfortunately couldn't
> > > > >> > find any. If someone does a getListing with needLocation and the
> > > > >> > file doesn't exist at the Observer, they are gonna get an NPE
> > > > >> > rather than an FNF. It isn't just the exception: AFAIK, Observer
> > > > >> > reads have some logic around handling FNF specifically, where
> > > > >> > they retry against the Active NN or something like that in such
> > > > >> > cases, so that will be broken as well for this use case.
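> > > > >> >
> > > > >> > To make that concrete, a minimal client-side sketch (my
> > > > >> > illustration, not code from the fix; it assumes an HA cluster
> > > > >> > with the client pointed at ObserverReadProxyProvider):
> > > > >> >
> > > > >> >   import java.io.FileNotFoundException;
> > > > >> >   import org.apache.hadoop.conf.Configuration;
> > > > >> >   import org.apache.hadoop.fs.FileSystem;
> > > > >> >   import org.apache.hadoop.fs.Path;
> > > > >> >
> > > > >> >   public class ObserverListingCheck {
> > > > >> >     public static void main(String[] args) throws Exception {
> > > > >> >       Configuration conf = new Configuration();
> > > > >> >       try (FileSystem fs = FileSystem.get(conf)) {
> > > > >> >         try {
> > > > >> >           // issues a getListing RPC with needLocation=true
> > > > >> >           fs.listLocatedStatus(new Path("/no/such/path")).hasNext();
> > > > >> >         } catch (FileNotFoundException expected) {
> > > > >> >           // correct result; with the bug, an observer surfaces
> > > > >> >           // an NPE here instead of FNF
> > > > >> >           System.out.println("got expected FNF");
> > > > >> >         }
> > > > >> >       }
> > > > >> >     }
> > > > >> >   }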
> > > > >> >
> > > > >> > Now, there is no denying the fact that there is an issue on the
> > > > >> > HDFS side, and it has already been too much work on your side,
> > > > >> > so you can argue that it might not be a very frequent use case.
> > > > >> > It's your call.
> > > > >> >
> > > > >> > Just sharing, with no intention of saying you should do that.
> > > > >> > As an RM, "nobody" can force you into a new iteration of an RC;
> > > > >> > it is going to be your call and discretion. As far as I know, a
> > > > >> > release cannot be vetoed by

[jira] [Created] (HDFS-16942) Send error to datanode if FBR is rejected due to bad lease

2023-03-07 Thread Stephen O'Donnell (Jira)
Stephen O'Donnell created HDFS-16942:


 Summary: Send error to datanode if FBR is rejected due to bad lease
 Key: HDFS-16942
 URL: https://issues.apache.org/jira/browse/HDFS-16942
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, namenode
Reporter: Stephen O'Donnell
Assignee: Stephen O'Donnell


When a datanode sends a FBR to the namenode, it requires a lease to send it. On
a couple of busy clusters, we have seen an issue where the DN is somehow
delayed in sending the FBR after requesting the lease. The NN then rejects the
FBR and logs a message to that effect, but from the Datanode's point of view
the report was successful, so it does not try to send another report until
the 6 hour default interval has passed.

If this happens to a few DNs, there can be missing and under-replicated blocks,
further adding to the cluster load. Even worse, I have seen DNs join the
cluster with zero blocks, so it is not obvious that the under-replication is
caused by a lost FBR, as all DNs appear to be up and running.

I believe we should propagate an error back to the DN if the FBR is rejected;
that way, the DN can request a new lease and try again.
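
A minimal, self-contained sketch of the proposed handshake (illustrative
only: the exception name, method names, and retry hook below are my
assumptions, not the actual patch):

    import java.util.concurrent.ThreadLocalRandom;

    public class FbrLeaseSketch {
      static class InvalidBlockReportLeaseException extends Exception {
        InvalidBlockReportLeaseException(String m) { super(m); }
      }

      // Stand-in for the NN's lease check when it processes a full block
      // report (FBR).
      static void namenodeProcessFbr(long leaseId, long validLeaseId)
          throws InvalidBlockReportLeaseException {
        if (leaseId != validLeaseId) {
          // Proposed change: surface the rejection to the DN instead of
          // only logging it on the NN side.
          throw new InvalidBlockReportLeaseException(
              "FBR rejected: lease " + leaseId + " is invalid or expired");
        }
        System.out.println("FBR accepted under lease " + leaseId);
      }

      public static void main(String[] args) {
        long granted = ThreadLocalRandom.current().nextLong(1, 1_000_000);
        long stale = granted - 1;  // simulate a lease that went bad
        try {
          namenodeProcessFbr(stale, granted);
        } catch (InvalidBlockReportLeaseException e) {
          // DN side: on rejection, request a fresh lease and retry now,
          // instead of assuming success and waiting out the 6 hour default
          // interval (dfs.blockreport.intervalMsec).
          System.out.println(e.getMessage() + "; re-requesting lease");
          try {
            namenodeProcessFbr(granted, granted);
          } catch (InvalidBlockReportLeaseException unexpected) {
            throw new AssertionError(unexpected);
          }
        }
      }
    }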





Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2023-03-07 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1158/

[Mar 6, 2023, 12:10:31 PM] (github) HDFS-16939. Fix the thread safety bug in 
LowRedundancyBlocks. (#5450). Contributed by Shuyan Zhang.
[Mar 6, 2023, 3:26:53 PM] (github) HDFS-16934. 
TestDFSAdmin.testAllDatanodesReconfig regression (#5434)




-1 overall


The following subsystems voted -1:
blanks hadolint pathlen spotbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

spotbugs :

   module:hadoop-mapreduce-project/hadoop-mapreduce-client 
   Write to static field 
org.apache.hadoop.mapreduce.task.reduce.Fetcher.nextId from instance method new 
org.apache.hadoop.mapreduce.task.reduce.Fetcher(JobConf, TaskAttemptID, 
ShuffleSchedulerImpl, MergeManager, Reporter, ShuffleClientMetrics, 
ExceptionReporter, SecretKey) At Fetcher.java:from instance method new 
org.apache.hadoop.mapreduce.task.reduce.Fetcher(JobConf, TaskAttemptID, 
ShuffleSchedulerImpl, MergeManager, Reporter, ShuffleClientMetrics, 
ExceptionReporter, SecretKey) At Fetcher.java:[line 120] 

spotbugs :

   
module:hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core
 
   Write to static field 
org.apache.hadoop.mapreduce.task.reduce.Fetcher.nextId from instance method new 
org.apache.hadoop.mapreduce.task.reduce.Fetcher(JobConf, TaskAttemptID, 
ShuffleSchedulerImpl, MergeManager, Reporter, ShuffleClientMetrics, 
ExceptionReporter, SecretKey) At Fetcher.java:from instance method new 
org.apache.hadoop.mapreduce.task.reduce.Fetcher(JobConf, TaskAttemptID, 
ShuffleSchedulerImpl, MergeManager, Reporter, ShuffleClientMetrics, 
ExceptionReporter, SecretKey) At Fetcher.java:[line 120] 

spotbugs :

   module:hadoop-mapreduce-project 
   Write to static field 
org.apache.hadoop.mapreduce.task.reduce.Fetcher.nextId from instance method new 
org.apache.hadoop.mapreduce.task.reduce.Fetcher(JobConf, TaskAttemptID, 
ShuffleSchedulerImpl, MergeManager, Reporter, ShuffleClientMetrics, 
ExceptionReporter, SecretKey) At Fetcher.java:from instance method new 
org.apache.hadoop.mapreduce.task.reduce.Fetcher(JobConf, TaskAttemptID, 
ShuffleSchedulerImpl, MergeManager, Reporter, ShuffleClientMetrics, 
ExceptionReporter, SecretKey) At Fetcher.java:[line 120] 

spotbugs :

   module:root 
   Write to static field 
org.apache.hadoop.mapreduce.task.reduce.Fetcher.nextId from instance method new 
org.apache.hadoop.mapreduce.task.reduce.Fetcher(JobConf, TaskAttemptID, 
ShuffleSchedulerImpl, MergeManager, Reporter, ShuffleClientMetrics, 
ExceptionReporter, SecretKey) At Fetcher.java:from instance method new 
org.apache.hadoop.mapreduce.task.reduce.Fetcher(JobConf, TaskAttemptID, 
ShuffleSchedulerImpl, MergeManager, Reporter, ShuffleClientMetrics, 
ExceptionReporter, SecretKey) At Fetcher.java:[line 120] 

Failed junit tests :

   hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy 
   
hadoop.hdfs.server.federation.router.TestRouterRPCMultipleDestinationMountTableResolver
 
  

   cc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1158/artifact/out/results-compile-cc-root.txt
 [96K]

   javac:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1158/artifact/out/results-compile-javac-root.txt
 [528K]

   blanks:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1158/artifact/out/blanks-eol.txt
 [14M]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1158/artifact/out/blanks-tabs.txt
 [2.0M]

   checkstyle:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1158/artifact/out/results-checkstyle-root.txt
 [13M]

   hadolint:

  
https://ci-hadoop.apache.org/jo

Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2023-03-07 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/959/

No changes


ERROR: File 'out/email-report.txt' does not exist


Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64

2023-03-07 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/454/

[Mar 5, 2023, 3:55:16 PM] (Varun Saxena) YARN-11383. Workflow priority mappings 
is case sensitive (#5171)
[Mar 6, 2023, 12:10:31 PM] (github) HDFS-16939. Fix the thread safety bug in 
LowRedundancyBlocks. (#5450). Contributed by Shuyan Zhang.
[Mar 6, 2023, 3:26:53 PM] (github) HDFS-16934. 
TestDFSAdmin.testAllDatanodesReconfig regression (#5434)




-1 overall


The following subsystems voted -1:
blanks hadolint mvnsite pathlen spotbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Redundant nullcheck of oldLock, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory))
 Redundant null check at DataStorage.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory))
 Redundant null check at DataStorage.java:[line 695] 
   Redundant nullcheck of metaChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long,
 FileInputStream, FileChannel, String) Redundant null check at 
MappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long,
 FileInputStream, FileChannel, String) Redundant null check at 
MappableBlockLoader.java:[line 138] 
   Redundant nullcheck of blockChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at MemoryMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at MemoryMappableBlockLoader.java:[line 75] 
   Redundant nullcheck of blockChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at NativePmemMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at NativePmemMappableBlockLoader.java:[line 85] 
   Redundant nullcheck of metaChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$$PmemMappedRegion,,
 long, FileInputStream, FileChannel, String) Redundant null check at 
NativePmemMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$$PmemMappedRegion,,
 long, FileInputStream, FileChannel, String) Redundant null check at 
NativePmemMappableBlockLoader.java:[line 130] 
   
org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager$UserCounts
  doesn't override java.util.ArrayList.equals(Object) At 
RollingWindowManager.java:At RollingWindowManager.java:[line 1] 

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   Redundant nullcheck of it, which is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState)) Redundant null chec

[jira] [Created] (HDFS-16943) RBF: Implement MySQL based StateStoreDriver

2023-03-07 Thread Simbarashe Dzinamarira (Jira)
Simbarashe Dzinamarira created HDFS-16943:
-

 Summary: RBF: Implement MySQL based StateStoreDriver
 Key: HDFS-16943
 URL: https://issues.apache.org/jira/browse/HDFS-16943
 Project: Hadoop HDFS
  Issue Type: Task
  Components: hdfs, rbf
Reporter: Simbarashe Dzinamarira


RBF supports two types of StateStoreDrivers:
 1. StateStoreFileImpl
 2. StateStoreZooKeeperImpl

I propose implementing a third driver that is backed by MySQL.

HADOOP-18535 implemented a MySQL token store. When tokens are stored in MySQL, 
using MySQL for the StateStore as well reduces the number of external 
dependencies for routers.
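
As a rough illustration, a hedged JDBC sketch of the single-table record
store such a driver might build on (the table name, schema, and method
names are assumptions, not the eventual implementation):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class MySQLRecordStoreSketch implements AutoCloseable {
      private final Connection conn;

      public MySQLRecordStoreSketch(String jdbcUrl, String user, String pass)
          throws SQLException {
        conn = DriverManager.getConnection(jdbcUrl, user, pass);
        try (PreparedStatement st = conn.prepareStatement(
            "CREATE TABLE IF NOT EXISTS statestore_records ("
                + " record_type  VARCHAR(128) NOT NULL,"
                + " record_key   VARCHAR(512) NOT NULL,"
                + " record_value TEXT NOT NULL,"
                + " PRIMARY KEY (record_type, record_key))")) {
          st.executeUpdate();
        }
      }

      // Upsert one serialized record (MySQL-specific REPLACE INTO).
      public void put(String type, String key, String value)
          throws SQLException {
        try (PreparedStatement st = conn.prepareStatement(
            "REPLACE INTO statestore_records VALUES (?, ?, ?)")) {
          st.setString(1, type);
          st.setString(2, key);
          st.setString(3, value);
          st.executeUpdate();
        }
      }

      public String get(String type, String key) throws SQLException {
        try (PreparedStatement st = conn.prepareStatement(
            "SELECT record_value FROM statestore_records"
                + " WHERE record_type = ? AND record_key = ?")) {
          st.setString(1, type);
          st.setString(2, key);
          try (ResultSet rs = st.executeQuery()) {
            return rs.next() ? rs.getString(1) : null;
          }
        }
      }

      @Override
      public void close() throws SQLException { conn.close(); }
    }

A real driver would still need the serialization, listing, and locking
semantics that the existing file and ZooKeeper drivers provide.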


