[jira] [Resolved] (HDFS-17099) Fix Null Pointer Exception when stop namesystem in HDFS

2024-05-13 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He resolved HDFS-17099.

Fix Version/s: 3.5.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Fix Null Pointer Exception when stop namesystem in HDFS
> ---
>
> Key: HDFS-17099
> URL: https://issues.apache.org/jira/browse/HDFS-17099
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ConfX
>Assignee: ConfX
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.5.0
>
> Attachments: reproduce.sh
>
>
> h2. What happened:
> Got a NullPointerException when stopping the namesystem in HDFS.
> h2. Buggy code:
>  
> {code:java}
>   void stopActiveServices() {
>     ...
>     if (dir != null && getFSImage() != null) {
>       if (getFSImage().editLog != null) {  // <--- check whether editLog is null
>         getFSImage().editLog.close();
>       }
>       // Update the fsimage with the last txid that we wrote
>       // so that the tailer starts from the right spot.
>       getFSImage().updateLastAppliedTxIdFromWritten();  // <--- BUG: still executed even if editLog is null, causing the NullPointerException
>     }
>     ...
>   }
>
>   public void updateLastAppliedTxIdFromWritten() {
>     this.lastAppliedTxId = editLog.getLastWrittenTxId();  // <--- throws NullPointerException if editLog is null
>   } {code}
> h2. StackTrace:
>  
> {code:java}
> java.lang.NullPointerException
>         at org.apache.hadoop.hdfs.server.namenode.FSImage.updateLastAppliedTxIdFromWritten(FSImage.java:1553)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1463)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.close(FSNamesystem.java:1815)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:1017)
>         at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:248)
>         at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:194)
>         at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:181)
>  {code}
> h2. How to reproduce:
> (1) Set {{dfs.namenode.top.windows.minutes}} to {{37914516,32,0}}; or set
> {{dfs.namenode.top.window.num.buckets}} to {{244111242}}.
> (2) Run the test:
> {{org.apache.hadoop.hdfs.server.namenode.TestNameNodeHttpServerXFrame#testSecondaryNameNodeXFrame}}
> h2. What's more:
> I'm still investigating how the parameter 
> {{dfs.namenode.top.windows.minutes}} triggered the buggy code.
>  
> For an easy reproduction, run the attached reproduce.sh.
> We are happy to provide a patch if this issue is confirmed.
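The shape of the fix is to move the last-applied-txid update under the same null check as the close. A minimal sketch with simplified stand-in classes (not the committed patch; these are not the real Hadoop classes):

```java
// Minimal stand-ins for the edit log and FSImage, illustrating only the guard.
class EditLog {
    private final long lastWrittenTxId;
    EditLog(long txId) { this.lastWrittenTxId = txId; }
    long getLastWrittenTxId() { return lastWrittenTxId; }
    void close() { /* flush and close output streams */ }
}

class FSImageSketch {
    EditLog editLog;           // may still be null if startup failed early
    long lastAppliedTxId = -1;

    void stopActiveServices() {
        // Guard both the close() and the txid update: if editLog is null,
        // nothing was written, so there is nothing to record either.
        if (editLog != null) {
            editLog.close();
            lastAppliedTxId = editLog.getLastWrittenTxId();
        }
    }
}
```

With the guard in place, stopping a namesystem whose edit log was never initialized becomes a no-op instead of an NPE.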



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-17526) getMetadataInputStream should use getShareDeleteFileInputStream for windows

2024-05-13 Thread Danny Becker (Jira)
Danny Becker created HDFS-17526:
---

 Summary: getMetadataInputStream should use 
getShareDeleteFileInputStream for windows
 Key: HDFS-17526
 URL: https://issues.apache.org/jira/browse/HDFS-17526
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.3.4
Reporter: Danny Becker


In HDFS-10636, the getDataInputStream method uses getShareDeleteFileInputStream 
on Windows, but getMetadataInputStream does not. The following error can occur 
when a DataNode tries to update the genstamp on a block on Windows.

DataNode Logs:
{{Caused by: java.io.IOException: Failed to rename 
G:\data\hdfs\data\current\BP-1\current\finalized\subdir5\subdir16\blk_1_1.meta 
to 
G:\data\hdfs\data\current\BP-1\current\finalized\subdir5\subdir16\blk_1_2.meta 
due to failure in native rename. 32: The process cannot access the file because 
it is being used by another process.}}
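The distinction matters because a plain FileInputStream on Windows opens the file without FILE_SHARE_DELETE, so a concurrent rename fails with the error above, while a share-delete open allows it. The following is a platform-neutral sketch of the rename-while-open pattern the metadata path needs to survive, using plain java.nio as an approximation rather than Hadoop's actual getShareDeleteFileInputStream:

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class RenameWhileOpen {
    public static Path demo() throws Exception {
        Path dir = Files.createTempDirectory("blk");
        Path meta = dir.resolve("blk_1_1.meta");
        Files.write(meta, new byte[]{1, 2, 3});

        Path renamed = dir.resolve("blk_1_2.meta");
        // Open via NIO; on Windows this requests FILE_SHARE_DELETE, so the
        // rename below can succeed while the stream is still open.
        try (InputStream in = Files.newInputStream(meta)) {
            Files.move(meta, renamed);  // genstamp update renames the meta file
            in.read();                  // reader keeps working on the open handle
        }
        return renamed;
    }
}
```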






[jira] [Resolved] (HDFS-17522) JournalNode web interfaces lack configs for X-FRAME-OPTIONS protection

2024-05-13 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He resolved HDFS-17522.

Fix Version/s: 3.5.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> JournalNode web interfaces lack configs for X-FRAME-OPTIONS protection
> --
>
> Key: HDFS-17522
> URL: https://issues.apache.org/jira/browse/HDFS-17522
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node
>Affects Versions: 3.0.0-alpha1, 3.5.0
>Reporter: wangzhihui
>Assignee: wangzhihui
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> [HDFS-10579|https://issues.apache.org/jira/browse/HDFS-10579] added this 
> protection for the NameNode and DataNode, but the JournalNode web interfaces 
> are still unprotected.
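For context, the settings HDFS-10579 introduced for the NameNode/DataNode HTTP servers look like the following in hdfs-site.xml (key names as I recall them from hdfs-default.xml; the JournalNode change wires the same options into its web server):

```xml
<!-- Enable the X-FRAME-OPTIONS response header on HDFS web UIs. -->
<property>
  <name>dfs.xframe.enabled</name>
  <value>true</value>
</property>
<!-- Allowed values: DENY, SAMEORIGIN, ALLOW-FROM. -->
<property>
  <name>dfs.xframe.value</name>
  <value>SAMEORIGIN</value>
</property>
```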






[jira] [Created] (HDFS-17525) Router web interfaces missing X-FRAME-OPTIONS security configurations

2024-05-13 Thread Hualong Zhang (Jira)
Hualong Zhang created HDFS-17525:


 Summary: Router web interfaces missing X-FRAME-OPTIONS security 
configurations
 Key: HDFS-17525
 URL: https://issues.apache.org/jira/browse/HDFS-17525
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: router
Affects Versions: 3.4.0
Reporter: Hualong Zhang
Assignee: Hualong Zhang


Router web interfaces are missing X-FRAME-OPTIONS security configurations; we 
should add them.






Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2024-05-13 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1391/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint mvnsite pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.TestFileUtil 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion 
   hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation 
   hadoop.hdfs.TestFileLengthOnClusterRestart 
   hadoop.hdfs.server.namenode.ha.TestPipelinesFailover 
   hadoop.hdfs.TestDFSInotifyEventInputStream 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.mapreduce.v2.app.TestRuntimeEstimators 
   hadoop.mapreduce.lib.input.TestLineRecordReader 
   hadoop.mapred.TestLineRecordReader 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
   hadoop.yarn.sls.appmaster.TestAMSimulator 
   hadoop.yarn.sls.TestSLSRunner 
   
hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceAllocator
 
   
hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceHandlerImpl
 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore 
   
hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1391/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1391/artifact/out/diff-compile-javac-root.txt
  [488K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1391/artifact/out/diff-checkstyle-root.txt
  [14M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1391/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1391/artifact/out/patch-mvnsite-root.txt
  [572K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1391/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1391/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1391/artifact/out/diff-patch-shellcheck.txt
  [72K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1391/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1391/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1391/artifact/out/patch-javadoc-root.txt
  [36K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1391/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [220K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1391/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [536K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1391/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [36K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1391/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [16K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1391/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt
  [44K]
   

[jira] [Created] (HDFS-17524) OIV: add Transformed processor which reconstructs an fsimage from another fsimage file

2024-05-13 Thread Xiaobao Wu (Jira)
Xiaobao Wu created HDFS-17524:
-

 Summary: OIV: add Transformed processor which reconstructs an 
fsimage from another fsimage file
 Key: HDFS-17524
 URL: https://issues.apache.org/jira/browse/HDFS-17524
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Affects Versions: 3.3.4, 3.2.0
Reporter: Xiaobao Wu


*Background:*

Fsimage files generated by the existing Hadoop 3.3.4 version are not forward 
compatible: older releases cannot load them. This issue proposes an fsimage 
conversion tool in newer HDFS versions that produces forward-compatible 
fsimage files to support downgrade operations.

{*}Description{*}:

The structure and loading logic of some sections differ between newer and 
older fsimage files, especially the StringTable section. This makes it 
impossible to downgrade from a newer version of HDFS (e.g., 3.3.4) to an 
older one (e.g., 3.1.1): when the older version loads an fsimage file 
generated by the newer version, it throws an ArrayIndexOutOfBoundsException.

 

The code differences are as follows:
{code:java}
// 3.3.4
static SerialNumberManager.StringTable loadStringTable(InputStream in)
    throws IOException {
  ··· ···
  SerialNumberManager.StringTable stringTable =
      SerialNumberManager.newStringTable(s.getNumEntry(), s.getMaskBits());
  for (int i = 0; i < s.getNumEntry(); ++i) {
    FsImageProto.StringTableSection.Entry e = FsImageProto
        .StringTableSection.Entry.parseDelimitedFrom(in);
    stringTable.put(e.getId(), e.getStr());
  }
  return stringTable;
}

// 3.1.1
static String[] loadStringTable(InputStream in) throws IOException {
  ··· ···
  String[] stringTable = new String[s.getNumEntry() + 1];
  for (int i = 0; i < s.getNumEntry(); ++i) {
    FsImageProto.StringTableSection.Entry e = FsImageProto
        .StringTableSection.Entry.parseDelimitedFrom(in);
    // ArrayIndexOutOfBoundsException is triggered here when loading a
    // newer version's fsimage file.
    stringTable[e.getId()] = e.getStr();
  }
  return stringTable;
}{code}
{*}Solution{*}:
The solution approach follows HDFS-17463.
!http://www.kdocs.cn/api/v3/office/copy/Mm0rd3BzNEx2Y29zaUdsQkczVnRUV2JwR2RvVWNVdk9aT3dRc2czUXRYdit1ekZ4UmN3UWFLN0hwOTZidnJ1L2ZxaW5PaUNHRmU1bGNyS3lRUGZRbE1vR2I4MlQvS0ppOUZxbVRnQ2o2SUNJZGFoeVNzMUFjR2tKTStsTjZpUTFwanpmcTRML0JFTDJHcXV4aGpESVFXS1RTeEkyZk5sb25LOEEyT0lHbDJydVlIZEJ2dXlyYVozM2pkZGdacEtWQnR3SUQ0MXUwV1RINTMyaDluV2FRTWNjS2p5Nm0rZngzbGNGdEd4cFpLdjFpWUtWK2UyMDZhVVFYUWVHZXlwZEQ0c25MWU93NFY0PQ==/attach/object/K3TLVNAYAAQFQ?|width=693!
From the figure, it can be seen that the Id arrangement of the StringTable in 
the fsimage file has changed from a compact layout to a dispersed one: USER, 
GROUP, and XATTR entries are no longer mixed together but are split into 
separate storage areas, each arranged independently.
 * With the sub-sections feature introduced in HDFS-14617, Protobuf can read 
the sections compatibly.
 * When saving fsimage files, the main difference between newer and older 
versions is the arrangement of entries (e.g., USER, GROUP, and XATTR) in the 
StringTable.
 * We will add a conversion tool that converts the StringTable Id arrangement 
of a newer fsimage file back to the compact layout, so that older versions 
can load the converted file.
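The core of the conversion can be sketched as an id-compaction pass. This is a hypothetical simplification: real ids are mask-partitioned by SerialNumberManager, and a real tool must also rewrite every serialized reference to the remapped ids elsewhere in the image.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.SortedMap;

// Sketch of the compaction step only: rewrite a sparse id -> string table
// (the newer, per-area layout) as the dense 1..N array that the 3.1.x
// loader indexes into, recording the id remapping so callers can rewrite
// references elsewhere in the image.
class StringTableCompactor {
    static Map<Integer, Integer> remap = new HashMap<>();

    static String[] compact(SortedMap<Integer, String> sparse) {
        remap.clear();
        String[] dense = new String[sparse.size() + 1]; // slot 0 unused, as in 3.1.1
        int next = 1;
        for (Map.Entry<Integer, String> e : sparse.entrySet()) {
            remap.put(e.getKey(), next);  // old sparse id -> new compact id
            dense[next++] = e.getValue();
        }
        return dense;
    }
}
```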

 

 

 


