[ https://issues.apache.org/jira/browse/HDFS-16564?focusedWorklogId=764363&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-764363 ]
ASF GitHub Bot logged work on HDFS-16564:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 29/Apr/22 16:59
            Start Date: 29/Apr/22 16:59
    Worklog Time Spent: 10m

Work Description: GauthamBanasandra commented on code in PR #4245:
URL: https://github.com/apache/hadoop/pull/4245#discussion_r861986392


##########
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfspp_mini_dfs_smoke.cc:
##########

@@ -34,7 +34,6 @@ TEST_F(HdfsMiniDfsSmokeTest, SmokeTest) {
   EXPECT_NE(nullptr, connection.handle());
 }
-

Review Comment:
   I need to touch at least one test file. Otherwise, `test4tests` will fail while building for the first platform in the pipeline (CentOS 7) and the pipeline won't proceed to build for the rest of the platforms.


Issue Time Tracking
-------------------

    Worklog Id:     (was: 764363)
    Time Spent: 1h  (was: 50m)

> Use uint32_t for hdfs_find
> --------------------------
>
>                 Key: HDFS-16564
>                 URL: https://issues.apache.org/jira/browse/HDFS-16564
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Components: libhdfs++
>    Affects Versions: 3.4.0
>              Reporter: Gautham Banasandra
>              Assignee: Gautham Banasandra
>              Priority: Major
>                Labels: libhdfscpp, pull-request-available
>            Time Spent: 1h
>    Remaining Estimate: 0h
>
> *hdfs_find* uses the *u_int32_t* type for storing the value of the *max-depth*
> command-line argument -
> https://github.com/apache/hadoop/blob/a631f45a99c7abf8c9a2dcfb10afb668c8ff6b09/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tools/hdfs-find/hdfs-find.cc#L43.
> The type u_int32_t isn't standard, isn't available on Windows, and thus breaks
> cross-platform compatibility. We need to replace it with *uint32_t*, which is
> available on all platforms since it's part of the C++ standard.
--
This message was sent by Atlassian Jira
(v8.20.7#820007)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org