[jira] [Created] (HDFS-17636) Don't add declspec for Windows

2024-10-02 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-17636:
-

 Summary: Don't add declspec for Windows
 Key: HDFS-17636
 URL: https://issues.apache.org/jira/browse/HDFS-17636
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs
Affects Versions: 3.5.0
 Environment: Windows 10/11
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


* Windows doesn't allow the macro _JNI_IMPORT_OR_EXPORT_ to appear on the
  function definition. Compilation fails with the error "definition of
  dllimport function not allowed".
* However, Linux needs it. Hence, we're going to add this macro based on
  the OS.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-17246) Fix shaded client for building Hadoop on Windows

2023-11-01 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-17246:
-

 Summary: Fix shaded client for building Hadoop on Windows
 Key: HDFS-17246
 URL: https://issues.apache.org/jira/browse/HDFS-17246
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 3.4.0
 Environment: Windows 10
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra
 Fix For: 3.4.0


Currently, the *shaded client* Yetus personality in Hadoop fails to build on 
Windows - 
https://github.com/apache/hadoop/blob/4c04a6768c0cb3d5081cfa5d84ffb389d92f5805/dev-support/bin/hadoop.sh#L541-L615.

This happens due to the integration test failures in Hadoop client modules - 
https://github.com/apache/hadoop/tree/4c04a6768c0cb3d5081cfa5d84ffb389d92f5805/hadoop-client-modules/hadoop-client-integration-tests.

There are several issues that need to be addressed in order to get the 
integration tests working -
# Set HADOOP_HOME, which is needed by the Mini DFS and YARN clusters spawned by 
the integration tests.
# Add the Hadoop binaries to PATH so that winutils.exe can be located.
# Create a new user with the Symlink privilege in the Docker image. This is 
needed for the proper working of the Mini YARN cluster spawned by the 
integration tests.
# Fix a bug in DFSUtilClient.java that prevents a colon (:) in the path. The 
colon is used as a delimiter for the PATH variable when specifying multiple 
paths. However, on Windows the colon isn't a delimiter (it appears in drive 
specifiers such as C:\) and must be handled appropriately.






[jira] [Created] (HDFS-16736) Link to Boost library in libhdfspp

2022-08-21 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16736:
-

 Summary: Link to Boost library in libhdfspp
 Key: HDFS-16736
 URL: https://issues.apache.org/jira/browse/HDFS-16736
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.4.0
 Environment: Windows 10
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


The compilation of HDFS Native Client fails on Windows 10 due to the following 
error -

{code}
[exec] 
"H:\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\main\native\libhdfspp\tests\logging_test.vcxproj"
 (default target) (105) ->
 [exec]   rpc.lib(rpc_engine.obj) : error LNK2019: unresolved external 
symbol "__declspec(dllimport) public: __cdecl 
boost::gregorian::greg_month::greg_month(unsigned short)" 
(__imp_??0greg_month@gregorian@boost@@QEAA@G@Z) referenced in function 
"private: static class boost::posix_time::ptime __cdecl 
boost::date_time::microsec_clock<class boost::posix_time::ptime>::create_time(struct tm * (__cdecl*)(__int64 const 
*,struct tm *))" 
(?create_time@?$microsec_clock@Vptime@posix_time@boost@@@date_time@boost@@CA?AVptime@posix_time@3@P6APEAUtm@@PEB_JPEAU6@@Z@Z)
 
[H:\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\main\native\libhdfspp\tests\logging_test.vcxproj]
 [exec]   rpc.lib(request.obj) : error LNK2001: unresolved external symbol 
"__declspec(dllimport) public: __cdecl 
boost::gregorian::greg_month::greg_month(unsigned short)" 
(__imp_??0greg_month@gregorian@boost@@QEAA@G@Z) 
[H:\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\main\native\libhdfspp\tests\logging_test.vcxproj]
 [exec]   
H:\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\main\native\libhdfspp\tests\RelWithDebInfo\logging_test.exe
 : fatal error LNK1120: 1 unresolved externals 
[H:\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\main\native\libhdfspp\tests\logging_test.vcxproj]
{code}

Thus, we need to link against the Boost date_time library to resolve this error.
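A minimal CMake sketch of the fix (the target name `rpc` is taken from the error output; the exact set of targets needing the link is an assumption):

```cmake
# Hypothetical sketch: the unresolved greg_month symbol comes from Boost's
# date_time component, so link it into the library that references it.
find_package(Boost REQUIRED COMPONENTS date_time)
target_link_libraries(rpc PRIVATE Boost::date_time)
```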






[jira] [Resolved] (HDFS-16681) Do not pass GCC flags for MSVC in libhdfspp

2022-07-25 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16681.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged PR https://github.com/apache/hadoop/pull/4615 to trunk.

> Do not pass GCC flags for MSVC in libhdfspp
> ---
>
> Key: HDFS-16681
> URL: https://issues.apache.org/jira/browse/HDFS-16681
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The tests in the HDFS native client use the *-Wno-missing-field-initializers* 
> flag to ignore warnings about uninitialized members - 
> https://github.com/apache/hadoop/blob/8f83d9f56d775c73af6e3fa1d6a9aa3e64eebc37/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/CMakeLists.txt#L27-L28
> {code}
> set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-missing-field-initializers")
> set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wno-missing-field-initializers")
> {code}
> This leads to the following error on Visual C++.
> {code}
> [exec] 
> "E:\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\ALL_BUILD.vcxproj"
>  (default target) (1) ->
> [exec] 
> "E:\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\main\native\libhdfspp\tests\x-platform\x_platform_dirent_test_obj.vcxproj"
>  (default target) (24) ->
> [exec]   cl : command line error D8021: invalid numeric argument 
> '/Wno-missing-field-initializers' 
> [E:\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\main\native\libhdfspp\tests\x-platform\x_platform_dirent_test_obj.vcxproj]
> {code}
> Thus, we need to pass this flag only when the compiler isn't Visual C++.






[jira] [Created] (HDFS-16681) Do not pass GCC flags for MSVC

2022-07-23 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16681:
-

 Summary: Do not pass GCC flags for MSVC
 Key: HDFS-16681
 URL: https://issues.apache.org/jira/browse/HDFS-16681
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.4.0
 Environment: Windows 10
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


The tests in the HDFS native client use the *-Wno-missing-field-initializers* flag 
to ignore warnings about uninitialized members - 
https://github.com/apache/hadoop/blob/8f83d9f56d775c73af6e3fa1d6a9aa3e64eebc37/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/CMakeLists.txt#L27-L28
{code}
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-missing-field-initializers")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wno-missing-field-initializers")
{code}

This leads to the following error on Visual C++.
{code}
 [exec] 
"E:\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\ALL_BUILD.vcxproj"
 (default target) (1) ->
 [exec] 
"E:\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\main\native\libhdfspp\tests\x-platform\x_platform_dirent_test_obj.vcxproj"
 (default target) (24) ->
 [exec]   cl : command line error D8021: invalid numeric argument 
'/Wno-missing-field-initializers' 
[E:\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\main\native\libhdfspp\tests\x-platform\x_platform_dirent_test_obj.vcxproj]
{code}

Thus, we need to pass these flags only when the compiler isn't Visual C++.
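A minimal CMake sketch of the guard, mirroring the flag-setting lines quoted above (placement in the tests' CMakeLists.txt is an assumption):

```cmake
# Hypothetical sketch: MSVC rejects GCC-style warning flags with error D8021,
# so only append -Wno-missing-field-initializers for non-MSVC compilers.
if(NOT MSVC)
  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-missing-field-initializers")
  set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wno-missing-field-initializers")
endif()
```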






[jira] [Resolved] (HDFS-16680) Skip libhdfspp Valgrind tests on Windows

2022-07-23 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16680.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged PR https://github.com/apache/hadoop/pull/4611 to trunk.

> Skip libhdfspp Valgrind tests on Windows
> 
>
> Key: HDFS-16680
> URL: https://issues.apache.org/jira/browse/HDFS-16680
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The CMake test libhdfs_mini_stress_valgrind requires Valgrind - 
> https://github.com/apache/hadoop/blob/221eb2d68d5b52e4394fd36cb30d5ee9ffeea7f0/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/CMakeLists.txt#L172-L175
> {code}
> build_libhdfs_test(libhdfs_mini_stress_valgrind hdfspp_test_static expect.c 
> test_libhdfs_mini_stress.c ${OS_DIR}/thread.c)
> link_libhdfs_test(libhdfs_mini_stress_valgrind hdfspp_test_static fs reader 
> rpc proto common connection ${PROTOBUF_LIBRARIES} ${OPENSSL_LIBRARIES} 
> native_mini_dfs ${JAVA_JVM_LIBRARY} ${SASL_LIBRARIES})
> add_memcheck_test(libhdfs_mini_stress_valgrind_hdfspp_test_static 
> libhdfs_mini_stress_valgrind_hdfspp_test_static)
> set_target_properties(libhdfs_mini_stress_valgrind_hdfspp_test_static 
> PROPERTIES COMPILE_DEFINITIONS "VALGRIND")
> {code}
> We need to skip this test on Windows since Valgrind isn't available there.






[jira] [Resolved] (HDFS-16467) Ensure Protobuf generated headers are included first

2022-07-23 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16467.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged PR [https://github.com/apache/hadoop/pull/4601] to trunk.

> Ensure Protobuf generated headers are included first
> 
>
> Key: HDFS-16467
> URL: https://issues.apache.org/jira/browse/HDFS-16467
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We need to ensure that the Protobuf generated headers ([such as 
> ClientNamenodeProtocol.pb.h|https://github.com/apache/hadoop/blob/cce5a6f6094cefd2e23b73d202cc173cf4fc2cc5/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/connection/datanodeconnection.h#L23])
>  are included at the top. In other words, *.pb.h* should be the first header 
> file included in any of the .c/.cc/.h files. Otherwise, it causes 
> symbol redefinition errors during compilation. Also, we need to ensure that 
> Protobuf generated header files are the first ones to be included even in the 
> case of *transitive inclusion* of header files.
> This issue seems to be specific to Windows only. Not sure why the other 
> platforms aren't running into this.






[jira] [Created] (HDFS-16680) Skip libhdfspp Valgrind tests on Windows

2022-07-22 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16680:
-

 Summary: Skip libhdfspp Valgrind tests on Windows
 Key: HDFS-16680
 URL: https://issues.apache.org/jira/browse/HDFS-16680
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.4.0
 Environment: Windows 10
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


The CMake test libhdfs_mini_stress_valgrind requires Valgrind - 
https://github.com/apache/hadoop/blob/221eb2d68d5b52e4394fd36cb30d5ee9ffeea7f0/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/CMakeLists.txt#L172-L175
{code}
build_libhdfs_test(libhdfs_mini_stress_valgrind hdfspp_test_static expect.c 
test_libhdfs_mini_stress.c ${OS_DIR}/thread.c)
link_libhdfs_test(libhdfs_mini_stress_valgrind hdfspp_test_static fs reader rpc 
proto common connection ${PROTOBUF_LIBRARIES} ${OPENSSL_LIBRARIES} 
native_mini_dfs ${JAVA_JVM_LIBRARY} ${SASL_LIBRARIES})
add_memcheck_test(libhdfs_mini_stress_valgrind_hdfspp_test_static 
libhdfs_mini_stress_valgrind_hdfspp_test_static)
set_target_properties(libhdfs_mini_stress_valgrind_hdfspp_test_static 
PROPERTIES COMPILE_DEFINITIONS "VALGRIND")
{code}

We need to skip this test on Windows since Valgrind isn't available there.
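One possible shape of the skip, guarding the registration quoted above (a sketch; `add_memcheck_test` is the project's own CMake helper):

```cmake
# Hypothetical sketch: only register the Valgrind memcheck test on platforms
# where Valgrind exists, i.e. everywhere except Windows.
if(NOT WIN32)
  add_memcheck_test(libhdfs_mini_stress_valgrind_hdfspp_test_static
      libhdfs_mini_stress_valgrind_hdfspp_test_static)
endif()
```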






[jira] [Resolved] (HDFS-16667) Use malloc for buffer allocation in uriparser2

2022-07-20 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16667.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged PR https://github.com/apache/hadoop/pull/4576 to trunk.

> Use malloc for buffer allocation in uriparser2
> --
>
> Key: HDFS-16667
> URL: https://issues.apache.org/jira/browse/HDFS-16667
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Currently, a variable is used to specify the array size in *uriparser2* -
> https://github.com/apache/hadoop/blob/34e548cb62ed21c5bba7a82f5f1489ca6bdfb8c4/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/uriparser2/uriparser2/uriparser2.c#L71-L79
> {code:cpp}
> static int parse_int(const char *first, const char *after_last) {
>   const int size = after_last - first;
>   if (size) {
>   char buffer[size + 1];
>   memcpyz(buffer, first, size);
>   return atoi(buffer);
>   }
>   return 0;
> }
> {code}
> This results in the following error on Windows -
> {code}
> H:\hadoop-hdfs-project\hadoop-hdfs-native-client\src\out\build\x64-Debug\cl : 
> command line warning D9025: overriding '/W4' with '/w' 
> uriparser2.c
> H:\hadoop-hdfs-project\hadoop-hdfs-native-client\src\main\native\libhdfspp\third_party\uriparser2\uriparser2\uriparser2.c(74,23):
>  error C2057: expected constant expression 
> H:\hadoop-hdfs-project\hadoop-hdfs-native-client\src\main\native\libhdfspp\third_party\uriparser2\uriparser2\uriparser2.c(74,23):
>  error C2466: cannot allocate an array of constant size 0 
> H:\hadoop-hdfs-project\hadoop-hdfs-native-client\src\main\native\libhdfspp\third_party\uriparser2\uriparser2\uriparser2.c(74,24):
>  error C2133: 'buffer': unknown size 
> {code}
> Thus, we need to use malloc to fix this.






[jira] [Resolved] (HDFS-16665) Fix duplicate sources for hdfspp_test_shim_static

2022-07-19 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16665.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged PR https://github.com/apache/hadoop/pull/4573 to trunk.

> Fix duplicate sources for hdfspp_test_shim_static
> -
>
> Key: HDFS-16665
> URL: https://issues.apache.org/jira/browse/HDFS-16665
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Critical
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
> Attachments: hdfs_shim duplicate symbols.log
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The library target *hdfspp_test_shim_static* is built using the following 
> sources, which causes duplicate symbols to be defined -
> 1. hdfs_shim.c
> 2. ${LIBHDFSPP_BINDING_C}/hdfs.cc
> https://github.com/apache/hadoop/blob/8774f178686487007dcf8c418c989b785a529000/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/CMakeLists.txt#L153
> And fails the compilation -
> {code}
> H:\hadoop-hdfs-project\hadoop-hdfs-native-client\src\out\build\x64-Debug\hdfspp_test_shim_static.lib(hdfs_shim.obj)
>  : error LNK2005: hdfsAllowSnapshot already defined in 
> hdfspp_test_shim_static.lib(hdfs.obj) 
> H:\hadoop-hdfs-project\hadoop-hdfs-native-client\src\out\build\x64-Debug\hdfspp_test_shim_static.lib(hdfs_shim.obj)
>  : error LNK2005: hdfsAvailable already defined in 
> hdfspp_test_shim_static.lib(hdfs.obj) 
> H:\hadoop-hdfs-project\hadoop-hdfs-native-client\src\out\build\x64-Debug\hdfspp_test_shim_static.lib(hdfs_shim.obj)
>  : error LNK2005: hdfsBuilderConfSetStr already defined in 
> hdfspp_test_shim_static.lib(hdfs.obj) 
> H:\hadoop-hdfs-project\hadoop-hdfs-native-client\src\out\build\x64-Debug\hdfspp_test_shim_static.lib(hdfs_shim.obj)
>  : error LNK2005: hdfsBuilderConnect already defined in 
> hdfspp_test_shim_static.lib(hdfs.obj)
> {code}
> Duplicate symbols defined by hdfs_shim.c - 
> https://github.com/apache/hadoop/blob/440f4c2b28515d2007b81ac00b549bbf14fa9f64/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_shim.c#L583-L585.
> Adding the source *${LIBHDFSPP_BINDING_C}/hdfs.cc* is redundant here since 
> this file is transitively included in hdfs_shim.c through 
> *libhdfspp_wrapper.h* - 
> https://github.com/apache/hadoop/blob/440f4c2b28515d2007b81ac00b549bbf14fa9f64/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_shim.c#L20.
>  Thus, we need to exclude *${LIBHDFSPP_BINDING_C}/hdfs.cc* to fix this issue.






[jira] [Resolved] (HDFS-16464) Create only libhdfspp static libraries for Windows

2022-07-19 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16464.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged PR https://github.com/apache/hadoop/pull/4571 to trunk.

> Create only libhdfspp static libraries for Windows
> --
>
> Key: HDFS-16464
> URL: https://issues.apache.org/jira/browse/HDFS-16464
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> While building dynamic libraries (.dll) on Windows, there's a constraint that 
> all the dependent libraries be linked dynamically. This poses an issue since 
> Protobuf (which is an HDFS native client dependency) runs into build issues 
> when linked dynamically. There are a few [warning 
> notes|https://github.com/protocolbuffers/protobuf/blob/9ebb31726cef11e4e940b50ec751df4e863e3d2a/cmake/README.md#dlls-vs-static-linking]
>  on the Protobuf repository's build instructions page as well.
> Thus, to keep things simple, we can resort to static linking only and 
> thereby only produce statically linked libraries on Windows. In summary, 
> we'll be providing only Hadoop .lib files initially. We can aim to produce 
> Hadoop .dll on Windows eventually once we're able to resolve Protobuf's .dll 
> issues.
> In Hadoop CMake files, we've a function 
> [hadoop_add_dual_library|https://github.com/apache/hadoop/blob/f0241ec2161f6eccdb9bdaf1cbcbee55be379217/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMakeLists.txt#L289-L298]
>  that creates both the static and dynamic library targets. We need to replace 
> all these to get only static libraries for Windows.
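A minimal sketch of the replacement (target and source names below are placeholders, not the actual libhdfspp targets):

```cmake
# Hypothetical sketch: produce only a static library on Windows; keep the
# dual static/shared targets on other platforms.
if(WIN32)
  add_library(example_lib STATIC ${EXAMPLE_SOURCES})
else()
  hadoop_add_dual_library(example_lib ${EXAMPLE_SOURCES})
endif()
```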






[jira] [Resolved] (HDFS-16666) Pass CMake args for Windows in pom.xml

2022-07-18 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16666.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged PR https://github.com/apache/hadoop/pull/4574 to trunk.

> Pass CMake args for Windows in pom.xml
> --
>
> Key: HDFS-16666
> URL: https://issues.apache.org/jira/browse/HDFS-16666
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We need to pass the required HDFS CMake-related arguments for building on 
> Windows. Currently, these are passed - 
> https://github.com/apache/hadoop/blob/34e548cb62ed21c5bba7a82f5f1489ca6bdfb8c4/hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml#L150.
>  We need to pass these arguments as well for Windows - 
> https://github.com/apache/hadoop/blob/34e548cb62ed21c5bba7a82f5f1489ca6bdfb8c4/hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml#L219-L223






[jira] [Created] (HDFS-16667) Use malloc for buffer allocation in uriparser2

2022-07-17 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16667:
-

 Summary: Use malloc for buffer allocation in uriparser2
 Key: HDFS-16667
 URL: https://issues.apache.org/jira/browse/HDFS-16667
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.4.0
 Environment: Windows 10
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


Currently, a variable is used to specify the array size in *uriparser2* -
{code:cpp}
static int parse_int(const char *first, const char *after_last) {
const int size = after_last - first;
if (size) {
char buffer[size + 1];
memcpyz(buffer, first, size);
return atoi(buffer);
}
return 0;
}
{code}
https://github.com/apache/hadoop/blob/34e548cb62ed21c5bba7a82f5f1489ca6bdfb8c4/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/uriparser2/uriparser2/uriparser2.c#L71-L79

This results in the following error on Windows -
{code}
H:\hadoop-hdfs-project\hadoop-hdfs-native-client\src\out\build\x64-Debug\cl : 
command line warning D9025: overriding '/W4' with '/w' 
uriparser2.c
H:\hadoop-hdfs-project\hadoop-hdfs-native-client\src\main\native\libhdfspp\third_party\uriparser2\uriparser2\uriparser2.c(74,23):
 error C2057: expected constant expression 
H:\hadoop-hdfs-project\hadoop-hdfs-native-client\src\main\native\libhdfspp\third_party\uriparser2\uriparser2\uriparser2.c(74,23):
 error C2466: cannot allocate an array of constant size 0 
H:\hadoop-hdfs-project\hadoop-hdfs-native-client\src\main\native\libhdfspp\third_party\uriparser2\uriparser2\uriparser2.c(74,24):
 error C2133: 'buffer': unknown size 
{code}

Thus, we need to use malloc to fix this.






[jira] [Created] (HDFS-16666) Parameterize CMake args for Windows

2022-07-17 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16666:
-

 Summary: Parameterize CMake args for Windows
 Key: HDFS-16666
 URL: https://issues.apache.org/jira/browse/HDFS-16666
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs++
Affects Versions: 3.4.0
 Environment: Windows 10
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


We need to pass the required HDFS CMake-related arguments for building on Windows. 
Currently, these are passed - 
https://github.com/apache/hadoop/blob/34e548cb62ed21c5bba7a82f5f1489ca6bdfb8c4/hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml#L150.
 We need to pass these arguments as well for Windows - 
https://github.com/apache/hadoop/blob/34e548cb62ed21c5bba7a82f5f1489ca6bdfb8c4/hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml#L219-L223






[jira] [Created] (HDFS-16665) Fix duplicate sources for hdfspp_test_shim_static

2022-07-17 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16665:
-

 Summary: Fix duplicate sources for hdfspp_test_shim_static
 Key: HDFS-16665
 URL: https://issues.apache.org/jira/browse/HDFS-16665
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.4.0
 Environment: Windows 10
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


The library target *hdfspp_test_shim_static* is built using the following 
sources, which causes duplicate symbols to be defined -
1. hdfs_shim.c
2. ${LIBHDFSPP_BINDING_C}/hdfs.cc
https://github.com/apache/hadoop/blob/8774f178686487007dcf8c418c989b785a529000/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/CMakeLists.txt#L153

And fails the compilation -
{code}
H:\hadoop-hdfs-project\hadoop-hdfs-native-client\src\out\build\x64-Debug\hdfspp_test_shim_static.lib(hdfs_shim.obj)
 : error LNK2005: hdfsAllowSnapshot already defined in 
hdfspp_test_shim_static.lib(hdfs.obj) 
H:\hadoop-hdfs-project\hadoop-hdfs-native-client\src\out\build\x64-Debug\hdfspp_test_shim_static.lib(hdfs_shim.obj)
 : error LNK2005: hdfsAvailable already defined in 
hdfspp_test_shim_static.lib(hdfs.obj) 
H:\hadoop-hdfs-project\hadoop-hdfs-native-client\src\out\build\x64-Debug\hdfspp_test_shim_static.lib(hdfs_shim.obj)
 : error LNK2005: hdfsBuilderConfSetStr already defined in 
hdfspp_test_shim_static.lib(hdfs.obj) 
H:\hadoop-hdfs-project\hadoop-hdfs-native-client\src\out\build\x64-Debug\hdfspp_test_shim_static.lib(hdfs_shim.obj)
 : error LNK2005: hdfsBuilderConnect already defined in 
hdfspp_test_shim_static.lib(hdfs.obj)
{code}

Duplicate symbols defined by hdfs_shim.c - 
https://github.com/apache/hadoop/blob/440f4c2b28515d2007b81ac00b549bbf14fa9f64/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_shim.c#L583-L585.

Adding the source *${LIBHDFSPP_BINDING_C}/hdfs.cc* is redundant here since this 
file is transitively included in hdfs_shim.c through *libhdfspp_wrapper.h* - 
https://github.com/apache/hadoop/blob/440f4c2b28515d2007b81ac00b549bbf14fa9f64/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_shim.c#L20.
 Thus, we need to exclude *${LIBHDFSPP_BINDING_C}/hdfs.cc* to fix this issue.
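The fix then reduces to listing hdfs_shim.c alone (a sketch; the real target also links against other libraries):

```cmake
# Hypothetical sketch: hdfs.cc is already pulled into hdfs_shim.c's
# translation unit via libhdfspp_wrapper.h, so list only the shim source.
add_library(hdfspp_test_shim_static STATIC hdfs_shim.c)
```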






[jira] [Resolved] (HDFS-16654) Link OpenSSL lib for CMake deps check

2022-07-17 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16654.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged PR https://github.com/apache/hadoop/pull/4538 to trunk.

> Link OpenSSL lib for CMake deps check
> -
>
> Key: HDFS-16654
> URL: https://issues.apache.org/jira/browse/HDFS-16654
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> CMake checks whether the required components of OpenSSL are available prior 
> to building HDFS native client - 
> https://github.com/apache/hadoop/blob/fac895828f714b5587b57900d588acac69880c1e/hadoop-hdfs-project/hadoop-hdfs-native-client/src/CMakeLists.txt#L130
> {code}
> check_c_source_compiles("#include 
> \"${OPENSSL_INCLUDE_DIR}/openssl/evp.h\"\nint main(int argc, char **argv) { 
> return !EVP_aes_256_ctr; }" HAS_NEW_ENOUGH_OPENSSL)
> {code}
> This check compiles but fails while linking on Windows -
> {code}
> src.obj : error LNK2019: unresolved external symbol EVP_aes_256_ctr 
> referenced in function main 
> [H:\hadoop-hdfs-project\hadoop-hdfs-native-client\src\out\build\x64-Debug\CMakeFiles\CMakeTmp\cmTC_e391b.vcxproj]
> H:\hadoop-hdfs-project\hadoop-hdfs-native-client\src\out\build\x64-Debug\CMakeFiles\CMakeTmp\Debug\cmTC_e391b.exe
>  : fatal error LNK1120: 1 unresolved externals 
> [H:\hadoop-hdfs-project\hadoop-hdfs-native-client\src\out\build\x64-Debug\CMakeFiles\CMakeTmp\cmTC_e391b.vcxproj]
> Source file was:
> #include "${OPENSSL_INCLUDE_DIR}/openssl/evp.h"
> int main(int argc, char **argv) { return !EVP_aes_256_ctr; }
> {code}
> Thus, we need to link to the OpenSSL library prior to running this check. 
> Please note that this check doesn't fail on Linux since CMake is able to pick 
> it up from the standard location where libs are installed.






[jira] [Created] (HDFS-16654) Link OpenSSL lib for CMake deps pre-check

2022-07-08 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16654:
-

 Summary: Link OpenSSL lib for CMake deps pre-check
 Key: HDFS-16654
 URL: https://issues.apache.org/jira/browse/HDFS-16654
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.4.0
 Environment: Windows 10
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


CMake checks whether the required components of OpenSSL are available prior to 
building HDFS native client - 
https://github.com/apache/hadoop/blob/fac895828f714b5587b57900d588acac69880c1e/hadoop-hdfs-project/hadoop-hdfs-native-client/src/CMakeLists.txt#L130

{code}
check_c_source_compiles("#include \"${OPENSSL_INCLUDE_DIR}/openssl/evp.h\"\nint 
main(int argc, char **argv) { return !EVP_aes_256_ctr; }" 
HAS_NEW_ENOUGH_OPENSSL)
{code}

This check compiles but fails while linking on Windows -
{code}
src.obj : error LNK2019: unresolved external symbol EVP_aes_256_ctr referenced 
in function main 
[H:\hadoop-hdfs-project\hadoop-hdfs-native-client\src\out\build\x64-Debug\CMakeFiles\CMakeTmp\cmTC_e391b.vcxproj]

H:\hadoop-hdfs-project\hadoop-hdfs-native-client\src\out\build\x64-Debug\CMakeFiles\CMakeTmp\Debug\cmTC_e391b.exe
 : fatal error LNK1120: 1 unresolved externals 
[H:\hadoop-hdfs-project\hadoop-hdfs-native-client\src\out\build\x64-Debug\CMakeFiles\CMakeTmp\cmTC_e391b.vcxproj]
{code}

Thus, we need to link to the OpenSSL library prior to running this check. 
Please note that this check doesn't fail on Linux since CMake is able to pick 
it up from the standard location where libs are installed.
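A sketch of one way to do this (not necessarily the exact Hadoop patch): the CheckCSourceCompiles module honours the `CMAKE_REQUIRED_LIBRARIES` and `CMAKE_REQUIRED_INCLUDES` variables, and `OPENSSL_LIBRARIES` is the usual output of CMake's FindOpenSSL module, so the pre-check can be made to link against OpenSSL like this:

```cmake
# Make check_c_source_compiles link against OpenSSL so that the
# reference to EVP_aes_256_ctr resolves on Windows as well.
set(CMAKE_REQUIRED_INCLUDES ${OPENSSL_INCLUDE_DIR})
set(CMAKE_REQUIRED_LIBRARIES ${OPENSSL_LIBRARIES})
check_c_source_compiles("#include \"${OPENSSL_INCLUDE_DIR}/openssl/evp.h\"\nint main(int argc, char **argv) { return !EVP_aes_256_ctr; }" HAS_NEW_ENOUGH_OPENSSL)
# Restore the variables so later configure checks aren't affected.
unset(CMAKE_REQUIRED_INCLUDES)
unset(CMAKE_REQUIRED_LIBRARIES)
```

On Linux the check happens to pass without this because the toolchain finds OpenSSL in its default library search path, which is why the breakage only shows up on Windows.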






[jira] [Resolved] (HDFS-16466) Implement Linux permission flags on Windows

2022-07-07 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16466.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged PR https://github.com/apache/hadoop/pull/4526 to trunk.

> Implement Linux permission flags on Windows
> ---
>
> Key: HDFS-16466
> URL: https://issues.apache.org/jira/browse/HDFS-16466
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> [statinfo.cc|https://github.com/apache/hadoop/blob/869317be0a1fdff23be5fc500dcd9ae4ecd7bc29/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/statinfo.cc#L41-L49]
>  uses POSIX permission flags. These flags aren't available for Windows. We 
> need to implement the equivalent flags on Windows to make this cross platform 
> compatible.






[jira] [Resolved] (HDFS-16469) Locate protoc-gen-hrpc across platforms

2022-06-15 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16469.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged PR https://github.com/apache/hadoop/pull/4434 to trunk.

> Locate protoc-gen-hrpc across platforms
> ---
>
> Key: HDFS-16469
> URL: https://issues.apache.org/jira/browse/HDFS-16469
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> protoc-gen-hrpc.exe is supposed to be found at 
> [${CMAKE_CURRENT_BINARY_DIR}/protoc-gen-hrpc|https://github.com/apache/hadoop/blob/652b257478f723a9e119e5e9181f3c7450ac92b5/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/proto/CMakeLists.txt#L70].
>  This works so long as we're building the Release build. Since we can only 
> build RelWithDebInfo on Windows, the protoc-gen-hrpc binary will be placed at 
> {*}${CMAKE_CURRENT_BINARY_DIR}/RelWithDebInfo/protoc-gen-hrpc.exe{*}. Hadoop 
> would need to locate this binary in order to generate the Protobuf headers.






[jira] [Resolved] (HDFS-16463) Make dirent cross platform compatible

2022-06-10 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16463.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged PR https://github.com/apache/hadoop/pull/4370 to trunk.

> Make dirent cross platform compatible
> -
>
> Key: HDFS-16463
> URL: https://issues.apache.org/jira/browse/HDFS-16463
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> [jnihelper.c|https://github.com/apache/hadoop/blob/1fed18bb2d8ac3dbaecc3feddded30bed918d556/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c#L28]
>  in HDFS native client uses *dirent.h*. This header file isn't available on 
> Windows. Thus, we need to replace this with a cross platform compatible 
> implementation for dirent.






[jira] [Created] (HDFS-16630) Simplify extern wrapping for XPlatform dirent

2022-06-10 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16630:
-

 Summary: Simplify extern wrapping for XPlatform dirent
 Key: HDFS-16630
 URL: https://issues.apache.org/jira/browse/HDFS-16630
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


Need to simplify the wrapping of the [extern "C" 
block|https://github.com/apache/hadoop/blob/7f5a34dfaa7e6fcb08af75ab40f67e50fe4d78ef/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/x-platform/c-api/extern/dirent.h#L25-L33]
 as described here - 
https://github.com/apache/hadoop/pull/4370#discussion_r892836982.






[jira] [Resolved] (HDFS-16602) Use "defined" directive along with #if

2022-06-03 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16602.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged PR https://github.com/apache/hadoop/pull/4371 to trunk.

> Use "defined" directive along with #if
> --
>
> Key: HDFS-16602
> URL: https://issues.apache.org/jira/browse/HDFS-16602
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The #if preprocessor directive expects an integer constant expression, and an 
> undefined macro silently evaluates to 0 there. Thus, we need to use the 
> "defined" operator to explicitly check whether the macro has been defined.






[jira] [Created] (HDFS-16604) Upgrade GoogleTest to 1.11.0

2022-05-29 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16604:
-

 Summary: Upgrade GoogleTest to 1.11.0
 Key: HDFS-16604
 URL: https://issues.apache.org/jira/browse/HDFS-16604
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


CMake is unable to check out the *release-1.10.0* version of GoogleTest -
{code}
[WARNING] -- Build files have been written to: 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4370/centos-7/src/hadoop-hdfs-project/hadoop-hdfs-native-client/target/main/native/libhdfspp/googletest-download
[WARNING] Scanning dependencies of target googletest
[WARNING] [ 11%] Creating directories for 'googletest'
[WARNING] [ 22%] Performing download step (git clone) for 'googletest'
[WARNING] Cloning into 'googletest-src'...
[WARNING] fatal: invalid reference: release-1.10.0
[WARNING] CMake Error at 
googletest-download/googletest-prefix/tmp/googletest-gitclone.cmake:40 
(message):
[WARNING]   Failed to checkout tag: 'release-1.10.0'
[WARNING] 
[WARNING] 
[WARNING] gmake[2]: *** [CMakeFiles/googletest.dir/build.make:111: 
googletest-prefix/src/googletest-stamp/googletest-download] Error 1
[WARNING] gmake[1]: *** [CMakeFiles/Makefile2:95: 
CMakeFiles/googletest.dir/all] Error 2
[WARNING] gmake: *** [Makefile:103: all] Error 2
[WARNING] CMake Error at main/native/libhdfspp/CMakeLists.txt:68 (message):
[WARNING]   Build step for googletest failed: 2
{code}

Jenkins run - 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4370/6/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt

Hence, we're bumping up the version to resolve the issue.






[jira] [Created] (HDFS-16602) Use "defined" directive along with #if

2022-05-28 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16602:
-

 Summary: Use "defined" directive along with #if
 Key: HDFS-16602
 URL: https://issues.apache.org/jira/browse/HDFS-16602
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


The #if preprocessor directive expects an integer constant expression, and an 
undefined macro silently evaluates to 0 there. Thus, we need to use the 
"defined" operator to explicitly check whether the macro has been defined.






[jira] [Resolved] (HDFS-16561) Handle error returned by strtol

2022-05-25 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16561.
---
Resolution: Fixed

Merged PR https://github.com/apache/hadoop/pull/4287 to trunk. Thank you 
[~__rishuu__] for your contribution.

> Handle error returned by strtol
> ---
>
> Key: HDFS-16561
> URL: https://issues.apache.org/jira/browse/HDFS-16561
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> *strtol* is used in 
> [hdfs-chmod.cc|https://github.com/apache/hadoop/blob/6dddbd42edd57cc26279c678756386a47c040af5/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tools/hdfs-chmod/hdfs-chmod.cc#L144].
>  The call to strtol could error out when an invalid input is provided. Need 
> to handle the error given out by strtol.
> Tasks to do -
> 1. Detect the error returned by strtol. The [strtol documentation 
> |https://en.cppreference.com/w/cpp/string/byte/strtol]explains how to do so.
> 2. Return false to the caller if the error is detected.
> 3. Extend 
> [this|https://github.com/apache/hadoop/blob/6dddbd42edd57cc26279c678756386a47c040af5/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/tools/hdfs-chmod-mock.cc]
>  unit test and add a case which exercises this by passing an invalid input. 
> Please refer to this PR to get more context on how this unit test is written 
> - https://github.com/apache/hadoop/pull/3588.






[jira] [Resolved] (HDFS-16465) Remove redundant strings.h inclusions

2022-05-11 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16465.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged PR https://github.com/apache/hadoop/pull/4279 to trunk.

> Remove redundant strings.h inclusions
> -
>
> Key: HDFS-16465
> URL: https://issues.apache.org/jira/browse/HDFS-16465
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> *strings.h* is included in a number of C/C++ files where it is redundant. 
> Moreover, strings.h is not available on Windows and thus isn't 
> cross-platform compatible. These inclusions of strings.h must therefore be 
> removed.






[jira] [Resolved] (HDFS-16572) Fix typo in readme of hadoop-project-dist

2022-05-10 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16572.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged PR https://github.com/apache/hadoop/pull/4288 to trunk.

> Fix typo in readme of hadoop-project-dist
> -
>
> Key: HDFS-16572
> URL: https://issues.apache.org/jira/browse/HDFS-16572
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Change *not* to *no*.






[jira] [Created] (HDFS-16572) Fix typo in readme of hadoop-project-dist

2022-05-08 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16572:
-

 Summary: Fix typo in readme of hadoop-project-dist
 Key: HDFS-16572
 URL: https://issues.apache.org/jira/browse/HDFS-16572
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


Change *not* to *no*.






[jira] [Resolved] (HDFS-16468) Define ssize_t for Windows

2022-05-04 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16468.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged PR https://github.com/apache/hadoop/pull/4228 to trunk.

> Define ssize_t for Windows
> --
>
> Key: HDFS-16468
> URL: https://issues.apache.org/jira/browse/HDFS-16468
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Some C/C++ files use the *ssize_t* data type. It isn't available on Windows, 
> so we need to define an alias for it and set it to *long long* to make the 
> code cross-platform compatible.






[jira] [Resolved] (HDFS-16564) Use uint32_t for hdfs_find

2022-05-04 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16564.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged PR https://github.com/apache/hadoop/pull/4245 to trunk.

> Use uint32_t for hdfs_find
> --
>
> Key: HDFS-16564
> URL: https://issues.apache.org/jira/browse/HDFS-16564
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> *hdfs_find* uses *u_int32_t* type for storing the value for the *max-depth* 
> command line argument - 
> https://github.com/apache/hadoop/blob/a631f45a99c7abf8c9a2dcfb10afb668c8ff6b09/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tools/hdfs-find/hdfs-find.cc#L43.
> The type u_int32_t isn't standard, isn't available on Windows and thus breaks 
> cross-platform compatibility. We need to replace this with *uint32_t* which 
> is available on all platforms since it's part of the C++ standard.






[jira] [Created] (HDFS-16564) Use uint32_t for hdfs_find

2022-04-28 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16564:
-

 Summary: Use uint32_t for hdfs_find
 Key: HDFS-16564
 URL: https://issues.apache.org/jira/browse/HDFS-16564
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


*hdfs_find* uses *u_int32_t* type for storing the value for the *max-depth* 
command line argument - 
https://github.com/apache/hadoop/blob/a631f45a99c7abf8c9a2dcfb10afb668c8ff6b09/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tools/hdfs-find/hdfs-find.cc#L43.
The type u_int32_t isn't standard, isn't available on Windows and thus breaks 
cross-platform compatibility. We need to replace this with *uint32_t* which is 
available on all platforms.






[jira] [Created] (HDFS-16561) Handle error returned by strtol

2022-04-25 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16561:
-

 Summary: Handle error returned by strtol
 Key: HDFS-16561
 URL: https://issues.apache.org/jira/browse/HDFS-16561
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


*strtol* is used in 
[hdfs-chmod.cc|https://github.com/apache/hadoop/blob/6dddbd42edd57cc26279c678756386a47c040af5/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tools/hdfs-chmod/hdfs-chmod.cc#L144].
 The call to strtol could error out when an invalid input is provided. Need to 
handle the error given out by strtol.

Tasks to do -
1. Detect the error returned by strtol. The [strtol documentation 
|https://en.cppreference.com/w/cpp/string/byte/strtol]explains how to do so.
2. Return false to the caller if the error is detected.
3. Extend 
[this|https://github.com/apache/hadoop/blob/6dddbd42edd57cc26279c678756386a47c040af5/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/tools/hdfs-chmod-mock.cc]
 unit test and add a case which exercises this by passing an invalid input. 
Please refer to this PR to get more context on how this unit test is written - 
https://github.com/apache/hadoop/pull/3588.






[jira] [Resolved] (HDFS-16474) Make HDFS tail tool cross platform

2022-04-12 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16474.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged PR https://github.com/apache/hadoop/pull/4157 to trunk.

> Make HDFS tail tool cross platform
> --
>
> Key: HDFS-16474
> URL: https://issues.apache.org/jira/browse/HDFS-16474
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs++, tools
>Affects Versions: 3.4.0
> Environment: Centos 7, Centos 8, Debian 10, Ubuntu Focal
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The source files for *hdfs_tail* use *getopt* for parsing the command line 
> arguments. getopt is available only on Linux and thus isn't cross platform. 
> We need to replace getopt with *boost::program_options* to make this tool 
> cross platform.






[jira] [Resolved] (HDFS-16473) Make HDFS stat tool cross platform

2022-04-08 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16473.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged PR https://github.com/apache/hadoop/pull/4145 to trunk.

> Make HDFS stat tool cross platform
> --
>
> Key: HDFS-16473
> URL: https://issues.apache.org/jira/browse/HDFS-16473
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs++, tools
>Affects Versions: 3.4.0
> Environment: Centos 7, Centos 8, Debian 10, Ubuntu Focal
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The source files for *hdfs_stat* use *getopt* for parsing the command line 
> arguments. getopt is available only on Linux and thus isn't cross platform. 
> We need to replace getopt with *boost::program_options* to make this tool 
> cross platform.






[jira] [Resolved] (HDFS-16472) Make HDFS setrep tool cross platform

2022-04-05 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16472.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged PR https://github.com/apache/hadoop/pull/4130 to trunk.

> Make HDFS setrep tool cross platform
> 
>
> Key: HDFS-16472
> URL: https://issues.apache.org/jira/browse/HDFS-16472
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs++, tools
>Affects Versions: 3.4.0
> Environment: Centos 7, Centos 8, Debian 10, Ubuntu Focal
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The source files for *hdfs_setrep* use *getopt* for parsing the command line 
> arguments. getopt is available only on Linux and thus isn't cross platform. 
> We need to replace getopt with *boost::program_options* to make this tool 
> cross platform.






[jira] [Resolved] (HDFS-16471) Make HDFS ls tool cross platform

2022-03-22 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16471.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged PR https://github.com/apache/hadoop/pull/4086 to trunk.

> Make HDFS ls tool cross platform
> 
>
> Key: HDFS-16471
> URL: https://issues.apache.org/jira/browse/HDFS-16471
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs++, tools
>Affects Versions: 3.4.0
> Environment: Centos 7, Centos 8, Debian 10, Ubuntu Focal
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The source files for *hdfs_ls* use *getopt* for parsing the command line 
> arguments. getopt is available only on Linux and thus isn't cross platform. 
> We need to replace getopt with *boost::program_options* to make this tool 
> cross platform.






[jira] [Resolved] (HDFS-16470) Make HDFS find tool cross platform

2022-03-18 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16470.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged PR https://github.com/apache/hadoop/pull/4076 to trunk.

> Make HDFS find tool cross platform
> --
>
> Key: HDFS-16470
> URL: https://issues.apache.org/jira/browse/HDFS-16470
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs++, tools
>Affects Versions: 3.4.0
> Environment: Centos 7, Centos 8, Debian 10, Ubuntu Focal
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The source files for *hdfs_find* use *getopt* for parsing the command line 
> arguments. getopt is available only on Linux and thus isn't cross platform. 
> We need to replace getopt with *boost::program_options* to make this tool 
> cross platform.






[jira] [Resolved] (HDFS-16462) Make HDFS get tool cross platform

2022-03-05 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16462.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged PR https://github.com/apache/hadoop/pull/4003 to trunk.

> Make HDFS get tool cross platform
> -
>
> Key: HDFS-16462
> URL: https://issues.apache.org/jira/browse/HDFS-16462
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs++, tools
>Affects Versions: 3.4.0
> Environment: Centos 7, Centos 8, Debian 10, Ubuntu Focal
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> The source files for *hdfs_get* use *getopt* for parsing the command line 
> arguments. getopt is available only on Linux and thus isn't cross platform. 
> We need to replace getopt with *boost::program_options* to make this tool 
> cross platform.






[jira] [Created] (HDFS-16491) HDFS native client tests timing out

2022-03-02 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16491:
-

 Summary: HDFS native client tests timing out
 Key: HDFS-16491
 URL: https://issues.apache.org/jira/browse/HDFS-16491
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


There's been a regression in the HDFS native client tests. These tests are 
failing -

{code}
test_test_libhdfs_ops_hdfs_static
test_test_libhdfs_threaded_hdfs_static
test_test_libhdfs_zerocopy_hdfs_static
test_test_native_mini_dfs
test_libhdfs_threaded_hdfspp_test_shim_static
test_hdfspp_mini_dfs_smoke_hdfspp_test_shim_static
libhdfs_mini_stress_valgrind_hdfspp_test_static
memcheck_libhdfs_mini_stress_valgrind_hdfspp_test_static
test_libhdfs_mini_stress_hdfspp_test_shim_static
test_hdfs_ext_hdfspp_test_shim_static
{code}

Because of the following exception -
{code}
SocketTimeoutException: 6 millis timeout while waiting for channel to be 
ready for read. ch : java.nio.channels.SocketChannel[connected 
local=/127.0.0.1:59874 
remote=localhost/127.0.0.1:33567]java.net.SocketTimeoutException: Call From 
cd9d74e12c88/172.17.0.3 to localhost:33567 failed on socket timeout exception: 
java.net.SocketTimeoutException: 6 millis timeout while waiting for channel 
to be ready for read. ch : java.nio.channels.SocketChannel[connected 
local=/127.0.0.1:59874 remote=localhost/127.0.0.1:33567]; For more details see: 
 http://wiki.apache.org/hadoop/SocketTimeout
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
...
{code}






[jira] [Created] (HDFS-16474) Make HDFS tail tool cross platform

2022-02-20 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16474:
-

 Summary: Make HDFS tail tool cross platform
 Key: HDFS-16474
 URL: https://issues.apache.org/jira/browse/HDFS-16474
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, libhdfs++, tools
Affects Versions: 3.4.0
 Environment: Centos 7, Centos 8, Debian 10, Ubuntu Focal
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


The source files for *hdfs_tail* use *getopt* for parsing the command line 
arguments. getopt is available only on Linux and thus isn't cross platform. We 
need to replace getopt with *boost::program_options* to make this tool cross 
platform.






[jira] [Created] (HDFS-16473) Make HDFS stat tool cross platform

2022-02-20 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16473:
-

 Summary: Make HDFS stat tool cross platform
 Key: HDFS-16473
 URL: https://issues.apache.org/jira/browse/HDFS-16473
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, libhdfs++, tools
Affects Versions: 3.4.0
 Environment: Centos 7, Centos 8, Debian 10, Ubuntu Focal
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


The source files for *hdfs_stat* use *getopt* for parsing the command line 
arguments. getopt is available only on Linux and thus isn't cross platform. We 
need to replace getopt with *boost::program_options* to make this tool cross 
platform.






[jira] [Created] (HDFS-16472) Make HDFS setrep tool cross platform

2022-02-20 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16472:
-

 Summary: Make HDFS setrep tool cross platform
 Key: HDFS-16472
 URL: https://issues.apache.org/jira/browse/HDFS-16472
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, libhdfs++, tools
Affects Versions: 3.4.0
 Environment: Centos 7, Centos 8, Debian 10, Ubuntu Focal
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


The source files for *hdfs_setrep* use *getopt* for parsing the command line 
arguments. getopt is available only on Linux and thus isn't cross platform. We 
need to replace getopt with *boost::program_options* to make this tool cross 
platform.






[jira] [Created] (HDFS-16471) Make HDFS ls tool cross platform

2022-02-20 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16471:
-

 Summary: Make HDFS ls tool cross platform
 Key: HDFS-16471
 URL: https://issues.apache.org/jira/browse/HDFS-16471
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, libhdfs++, tools
Affects Versions: 3.4.0
 Environment: Centos 7, Centos 8, Debian 10, Ubuntu Focal
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


The source files for *hdfs_ls* use *getopt* for parsing the command line 
arguments. getopt is available only on Linux and thus isn't cross platform. We 
need to replace getopt with *boost::program_options* to make this tool cross 
platform.






[jira] [Created] (HDFS-16470) Make HDFS find tool cross platform

2022-02-20 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16470:
-

 Summary: Make HDFS find tool cross platform
 Key: HDFS-16470
 URL: https://issues.apache.org/jira/browse/HDFS-16470
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, libhdfs++, tools
Affects Versions: 3.4.0
 Environment: Centos 7, Centos 8, Debian 10, Ubuntu Focal
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


The source files for *hdfs_find* use *getopt* for parsing the command line 
arguments. getopt is available only on Linux and thus isn't cross platform. We 
need to replace getopt with *boost::program_options* to make this tool cross 
platform.






[jira] [Created] (HDFS-16469) Locate protoc-gen-hrpc.exe on Windows

2022-02-20 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16469:
-

 Summary: Locate protoc-gen-hrpc.exe on Windows
 Key: HDFS-16469
 URL: https://issues.apache.org/jira/browse/HDFS-16469
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs++
Affects Versions: 3.4.0
 Environment: Windows 10
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


protoc-gen-hrpc.exe is supposed to be found at 
${CMAKE_CURRENT_BINARY_DIR}/protoc-gen-hrpc. This works as long as we're 
building the Release configuration. Since we can only build RelWithDebInfo, the 
protoc-gen-hrpc binary will instead be placed at 
${CMAKE_CURRENT_BINARY_DIR}/RelWithDebInfo/protoc-gen-hrpc.exe. Hadoop needs 
to locate this binary in order to generate the Protobuf headers.






[jira] [Created] (HDFS-16468) Define ssize_t for Windows

2022-02-20 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16468:
-

 Summary: Define ssize_t for Windows
 Key: HDFS-16468
 URL: https://issues.apache.org/jira/browse/HDFS-16468
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs++
Affects Versions: 3.4.0
 Environment: Windows 10
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


Some C/C++ files use the *ssize_t* data type. It isn't available on Windows, 
so we need to define an alias for it and set it to *long long* to make the 
code cross platform compatible.






[jira] [Created] (HDFS-16467) Ensure Protobuf generated headers are included first

2022-02-20 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16467:
-

 Summary: Ensure Protobuf generated headers are included first
 Key: HDFS-16467
 URL: https://issues.apache.org/jira/browse/HDFS-16467
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs++
Affects Versions: 3.4.0
 Environment: Windows 10
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


We need to ensure that the Protobuf generated headers ([such as 
ClientNamenodeProtocol.pb.h|https://github.com/apache/hadoop/blob/cce5a6f6094cefd2e23b73d202cc173cf4fc2cc5/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/connection/datanodeconnection.h#L23])
 are included at the top. In other words, *.pb.h should be the first header 
files to be included in any of the .c/.cc/.h files. Otherwise, it causes symbol 
redefinition errors during compilation. Also, we need to ensure that Protobuf 
generated header files are the first ones to be included even in the case of 
transitive inclusion of header files.

This issue seems to be specific to Windows; it's not clear why the other 
platforms aren't running into it.






[jira] [Created] (HDFS-16466) Implement Linux permission flags on Windows

2022-02-20 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16466:
-

 Summary: Implement Linux permission flags on Windows
 Key: HDFS-16466
 URL: https://issues.apache.org/jira/browse/HDFS-16466
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs++
Affects Versions: 3.4.0
 Environment: Windows 10
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


[statinfo.cc|https://github.com/apache/hadoop/blob/869317be0a1fdff23be5fc500dcd9ae4ecd7bc29/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/statinfo.cc#L41-L49]
 uses POSIX permission flags. These flags aren't available for Windows. We need 
to implement the equivalent flags on Windows to make this cross platform 
compatible.






[jira] [Created] (HDFS-16465) Make usage of strings.h cross platform compatible

2022-02-20 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16465:
-

 Summary: Make usage of strings.h cross platform compatible
 Key: HDFS-16465
 URL: https://issues.apache.org/jira/browse/HDFS-16465
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs++
Affects Versions: 3.4.0
 Environment: Windows 10
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


[configuration.cc|https://github.com/apache/hadoop/blob/3148791da42b48db0ecb611db85eda39a7f7d452/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/configuration.cc#L38]
 in HDFS native client uses strings.h, which is a POSIX header file and isn't 
available on Windows. We need to replace this with a Windows equivalent 
implementation to make it cross platform compatible.






[jira] [Created] (HDFS-16464) Create only libhdfspp static libraries for Windows

2022-02-20 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16464:
-

 Summary: Create only libhdfspp static libraries for Windows
 Key: HDFS-16464
 URL: https://issues.apache.org/jira/browse/HDFS-16464
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs++
Affects Versions: 3.4.0
 Environment: Windows 10
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


While building dynamic libraries on Windows, there's a constraint that all the 
dependent libraries be linked dynamically. This poses an issue since Protobuf 
(which is an HDFS native client dependency) runs into build issues when linked 
dynamically. There are a few [warning 
notes|https://github.com/protocolbuffers/protobuf/blob/9ebb31726cef11e4e940b50ec751df4e863e3d2a/cmake/README.md#dlls-vs-static-linking]
 on the Protobuf repo's build instructions page as well.

Thus, to keep things simple, we can resort to static linking only and thereby 
produce only statically linked libraries on Windows. In summary, we'll be 
providing only Hadoop .lib files initially. We can aim to produce Hadoop .dll 
files on Windows eventually, once we're able to resolve Protobuf's .dll issues.

In Hadoop's CMake files, we have the function 
[hadoop_add_dual_library|https://github.com/apache/hadoop/blob/f0241ec2161f6eccdb9bdaf1cbcbee55be379217/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMakeLists.txt#L289-L298]
 that creates both the static and dynamic library targets. We need to replace 
all these calls to get only static libraries on Windows.
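A hedged CMake sketch of the idea: hadoop_add_dual_library is the existing helper quoted above, while the MSVC branch and the target/variable names are illustrative only, not the actual patch.

```cmake
# Illustrative only: build a static-only target on Windows and keep the
# dual static+shared targets elsewhere. Target and variable names are
# assumptions, not the actual libhdfspp CMake code.
if(MSVC)
    # Protobuf can't be linked dynamically here, so produce .lib archives only.
    add_library(hdfspp_static STATIC ${LIBHDFSPP_ALL_OBJECTS})
else()
    hadoop_add_dual_library(hdfspp ${LIBHDFSPP_ALL_OBJECTS})
endif()
```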






[jira] [Created] (HDFS-16463) Make dirent.h cross platform compatible

2022-02-19 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16463:
-

 Summary: Make dirent.h cross platform compatible
 Key: HDFS-16463
 URL: https://issues.apache.org/jira/browse/HDFS-16463
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


[jnihelper.c|https://github.com/apache/hadoop/blob/1fed18bb2d8ac3dbaecc3feddded30bed918d556/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c#L28]
 in HDFS native client uses *dirent.h*. This header file isn't available on 
Windows. Thus, we need to replace this with a cross platform compatible 
implementation for dirent.






[jira] [Created] (HDFS-16462) Make HDFS get tool cross platform

2022-02-19 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16462:
-

 Summary: Make HDFS get tool cross platform
 Key: HDFS-16462
 URL: https://issues.apache.org/jira/browse/HDFS-16462
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, libhdfs++, tools
Affects Versions: 3.4.0
 Environment: Centos 7, Centos 8, Debian 10, Ubuntu Focal
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra
 Fix For: 3.4.0


The source files for *hdfs_get* use *getopt* for parsing the command line 
arguments. getopt is available only on Linux and thus isn't cross platform. We 
need to replace getopt with *boost::program_options* to make this tool cross 
platform.






[jira] [Resolved] (HDFS-16445) Make HDFS count, mkdir, rm cross platform

2022-02-01 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16445.
---
   Fix Version/s: 3.4.0
Target Version/s: 3.4.0
  Resolution: Fixed

Merged PR https://github.com/apache/hadoop/pull/3945 to trunk.

> Make HDFS count, mkdir, rm cross platform
> -
>
> Key: HDFS-16445
> URL: https://issues.apache.org/jira/browse/HDFS-16445
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs++, tools
>Affects Versions: 3.4.0
> Environment: Centos 7, Centos 8, Debian 10, Ubuntu Focal
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The source files for *hdfs_count*, *hdfs_mkdir* and *hdfs_rm* use *getopt* 
> for parsing the command line arguments. getopt is available only on Linux and 
> thus, isn't cross platform. We need to replace getopt with 
> *boost::program_options* to make these tools cross platform.






[jira] [Created] (HDFS-16445) Make HDFS count, mkdir, rm cross platform

2022-01-29 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16445:
-

 Summary: Make HDFS count, mkdir, rm cross platform
 Key: HDFS-16445
 URL: https://issues.apache.org/jira/browse/HDFS-16445
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, libhdfs++, tools
Affects Versions: 3.4.0
 Environment: Centos 7, Centos 8, Debian 10, Ubuntu Focal
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


The source files for *hdfs_count*, *hdfs_mkdir* and *hdfs_rm* use *getopt* 
for parsing the command line arguments. getopt is available only on Linux and 
thus isn't cross platform. We need to replace getopt with 
*boost::program_options* to make these tools cross platform.






[jira] [Resolved] (HDFS-16419) Make HDFS data transfer tools cross platform

2022-01-12 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16419.
---
Resolution: Fixed

Merged https://github.com/apache/hadoop/pull/3873 to trunk.

> Make HDFS data transfer tools cross platform
> 
>
> Key: HDFS-16419
> URL: https://issues.apache.org/jira/browse/HDFS-16419
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs++, tools
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The source files for *hdfs_copyToLocal* and *hdfs_moveToLocal* uses getopt 
> for parsing the command line arguments. getopt is available only on Linux and 
> thus, isn't cross platform. We need to replace getopt with 
> boost::program_options to make these tools cross platform.






[jira] [Created] (HDFS-16419) Make HDFS data transfer tools cross platform

2022-01-09 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16419:
-

 Summary: Make HDFS data transfer tools cross platform
 Key: HDFS-16419
 URL: https://issues.apache.org/jira/browse/HDFS-16419
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, libhdfs++, tools
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra
 Fix For: 3.4.0


The source files for *hdfs_copyToLocal* and *hdfs_moveToLocal* use getopt for 
parsing the command line arguments. getopt is available only on Linux and thus 
isn't cross platform. We need to replace getopt with boost::program_options to 
make these tools cross platform.






[jira] [Resolved] (HDFS-16407) Make hdfs_du tool cross platform

2022-01-04 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16407.
---
Resolution: Fixed

Merged [HDFS-16407. Make hdfs_du tool cross platform by GauthamBanasandra · 
Pull Request #3848 · apache/hadoop 
(github.com)|https://github.com/apache/hadoop/pull/3848] to trunk.

> Make hdfs_du tool cross platform
> 
>
> Key: HDFS-16407
> URL: https://issues.apache.org/jira/browse/HDFS-16407
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs++, tools
>Affects Versions: 3.4.0
> Environment: Centos 7, Centos 8, Debian 10, Ubuntu Focal
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The source files for hdfs_du uses *getopt* for parsing the command line 
> arguments. getopt is available only on Linux and thus, isn't cross platform. 
> We need to replace getopt with *boost::program_options* to make this cross 
> platform.






[jira] [Created] (HDFS-16407) Make hdfs_du tool cross platform

2022-01-01 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16407:
-

 Summary: Make hdfs_du tool cross platform
 Key: HDFS-16407
 URL: https://issues.apache.org/jira/browse/HDFS-16407
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, libhdfs++, tools
Affects Versions: 3.4.0
 Environment: Centos 7, Centos 8, Debian 10, Ubuntu Focal
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra
 Fix For: 3.4.0


The source files for hdfs_du use *getopt* for parsing the command line 
arguments. getopt is available only on Linux and thus isn't cross platform. We 
need to replace getopt with *boost::program_options* to make this cross 
platform.






[jira] [Resolved] (HDFS-16285) Make HDFS ownership tools cross platform

2021-12-08 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16285.
---
Resolution: Fixed

Merged [HDFS-16285. Make HDFS ownership tools cross platform by 
GauthamBanasandra · Pull Request #3588 · apache/hadoop 
(github.com)|https://github.com/apache/hadoop/pull/3588] to trunk.

> Make HDFS ownership tools cross platform
> 
>
> Key: HDFS-16285
> URL: https://issues.apache.org/jira/browse/HDFS-16285
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs++, tools
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> The source files for *hdfs_chown*, *hdfs_chmod* and *hdfs_chgrp* uses getopt 
> for parsing the command line arguments. getopt is available only on Linux and 
> thus, isn't cross platform. We need to replace getopt with 
> boost::program_options to make these tools cross platform.






[jira] [Created] (HDFS-16300) Use libcrypto in Windows for libhdfspp

2021-11-04 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16300:
-

 Summary: Use libcrypto in Windows for libhdfspp
 Key: HDFS-16300
 URL: https://issues.apache.org/jira/browse/HDFS-16300
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


Currently, eay32 is the library that's used in libhdfspp for Windows, whereas 
we use libcrypto on the rest of the platforms. As per the following mail 
thread, the OpenSSL library was renamed from eay32 to libcrypto from OpenSSL 
version 1.1.0 onwards - 
https://mta.openssl.org/pipermail/openssl-dev/2016-August/008351.html.

Thus, we need to use libcrypto on Windows as well to ensure that we standardize 
the version of the OpenSSL libraries used across platforms.
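A hedged CMake sketch of the standardization: with OpenSSL 1.1.0+, CMake's FindOpenSSL module exposes the renamed library through the OpenSSL::Crypto imported target on every platform. The hdfspp_static target name below is an assumption, not the actual libhdfspp code.

```cmake
# Illustrative only: let CMake resolve the renamed OpenSSL library on every
# platform. The hdfspp_static target name is an assumption.
find_package(OpenSSL REQUIRED)
# With OpenSSL >= 1.1.0 this imported target points at libcrypto on Linux
# and libcrypto.lib on Windows, removing the eay32 special case.
target_link_libraries(hdfspp_static PRIVATE OpenSSL::Crypto)
```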






[jira] [Created] (HDFS-16285) Make HDFS ownership tools cross platform

2021-10-26 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16285:
-

 Summary: Make HDFS ownership tools cross platform
 Key: HDFS-16285
 URL: https://issues.apache.org/jira/browse/HDFS-16285
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, libhdfs++, tools
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra
 Fix For: 3.4.0


The source files for *hdfs_chown*, *hdfs_chmod* and *hdfs_chgrp* use getopt 
for parsing the command line arguments. getopt is available only on Linux and 
thus isn't cross platform. We need to replace getopt with 
boost::program_options to make these tools cross platform.






[jira] [Created] (HDFS-16278) Make HDFS snapshot tools cross platform

2021-10-17 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16278:
-

 Summary: Make HDFS snapshot tools cross platform
 Key: HDFS-16278
 URL: https://issues.apache.org/jira/browse/HDFS-16278
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, libhdfs++, tools
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


The source files for *hdfs_createSnapshot*, *hdfs_disallowSnapshot* and 
*hdfs_renameSnapshot* use getopt for parsing the command line arguments. 
getopt is available only on Linux and thus isn't cross platform. We need to 
replace getopt with boost::program_options to make these tools cross platform.






[jira] [Created] (HDFS-16267) Make hdfs_df tool cross platform

2021-10-11 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16267:
-

 Summary: Make hdfs_df tool cross platform
 Key: HDFS-16267
 URL: https://issues.apache.org/jira/browse/HDFS-16267
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, libhdfs++, tools
Affects Versions: 3.4.0
 Environment: Centos 7, Centos 8, Debian 10, Ubuntu Focal
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


The source files for hdfs_df use *getopt* for parsing the command line 
arguments. getopt is available only on Linux and thus isn't cross platform. We 
need to replace getopt with *boost::program_options* to make this cross 
platform.






[jira] [Created] (HDFS-16265) Refactor HDFS tool tests for better reuse

2021-10-08 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16265:
-

 Summary: Refactor HDFS tool tests for better reuse
 Key: HDFS-16265
 URL: https://issues.apache.org/jira/browse/HDFS-16265
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, libhdfs++, tools
Affects Versions: 3.4.0
 Environment: Centos 7, Centos 8, Debian 10, Ubuntu Focal
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


Currently, the test cases written in hdfs-tool-test.h aren't easy to reuse, 
primarily because the expectations are different for each HDFS tool. I 
realized this while creating the PR for HDFS-16260. For instance, passing more 
than one argument is erroneous for hdfs_allowSnapshot, while it's the only 
valid scenario for hdfs_deleteSnapshot.

Thus, it won't be possible to reuse the test cases without decoupling the 
expectations from the test case definitions. The solution here is to move the 
expectations to the corresponding mock classes and invoke the call to set them 
up in the test cases after the creation of mock instances.






[jira] [Created] (HDFS-16263) Add CMakeLists for hdfs_allowSnapshot

2021-10-07 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16263:
-

 Summary: Add CMakeLists for hdfs_allowSnapshot
 Key: HDFS-16263
 URL: https://issues.apache.org/jira/browse/HDFS-16263
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, libhdfs++, tools
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


Currently, hdfs_allowSnapshot is built in its [parent directory's 
CMakeLists.txt|https://github.com/apache/hadoop/blob/95b537ee6a9ff3082c9ad9bc773f86fd4be04e50/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tools/CMakeLists.txt#L83-L89].
 We need to move this into a separate CMakeLists.txt file under 
hdfs-allow-snapshot so that it's more modular.






[jira] [Created] (HDFS-16260) Make hdfs_deleteSnapshot tool cross platform

2021-10-06 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16260:
-

 Summary: Make hdfs_deleteSnapshot tool cross platform
 Key: HDFS-16260
 URL: https://issues.apache.org/jira/browse/HDFS-16260
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, libhdfs++, tools
Affects Versions: 3.4.0
 Environment: Centos 7, Centos 8, Debian 10, Ubuntu Focal
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


The source files for hdfs_deleteSnapshot use *getopt* for parsing the command 
line arguments. getopt is available only on Linux and thus, isn't cross 
platform. We need to replace getopt with *boost::program_options* to make this 
cross platform.






[jira] [Created] (HDFS-16254) Cleanup protobuf on exit of hdfs_allowSnapshot

2021-10-04 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16254:
-

 Summary: Cleanup protobuf on exit of hdfs_allowSnapshot
 Key: HDFS-16254
 URL: https://issues.apache.org/jira/browse/HDFS-16254
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++, tools
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


We need to move the call to google::protobuf::ShutdownProtobufLibrary() from 
[AllowSnapshot::HandlePath|https://github.com/apache/hadoop/blob/35a8d48872a13438d4c4199b6ef5b902105e2eb2/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tools/hdfs-allow-snapshot/hdfs-allow-snapshot.cc#L116-L117]
 to the main method, since we want the clean-up tasks to run only when the 
program exits. The current implementation doesn't cause any issues because 
AllowSnapshot::HandlePath is called only once.






[jira] [Created] (HDFS-16251) Make hdfs_cat tool cross platform

2021-10-03 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16251:
-

 Summary: Make hdfs_cat tool cross platform
 Key: HDFS-16251
 URL: https://issues.apache.org/jira/browse/HDFS-16251
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs++, tools
Affects Versions: 3.4.0
 Environment: Centos 7, Centos 8, Debian 10, Ubuntu Focal
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


The source files for hdfs_cat use *getopt* for parsing the command line 
arguments. getopt is available only on Linux and thus, isn't cross platform. We 
need to replace getopt with *boost::program_options* to make this cross 
platform.






[jira] [Created] (HDFS-16250) Use GMock for AllowSnapshotMock

2021-10-03 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16250:
-

 Summary: Use GMock for AllowSnapshotMock
 Key: HDFS-16250
 URL: https://issues.apache.org/jira/browse/HDFS-16250
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


Currently, the mock 
[implementation|https://github.com/apache/hadoop/blob/35a8d48872a13438d4c4199b6ef5b902105e2eb2/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/tools/hdfs-allow-snapshot-mock.cc]
 of AllowSnapshotMock is quite basic. We need to replace it with GMock so that 
we can tap into the benefits GMock offers.






[jira] [Created] (HDFS-16205) Make hdfs_allowSnapshot tool cross platform

2021-09-01 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16205:
-

 Summary: Make hdfs_allowSnapshot tool cross platform
 Key: HDFS-16205
 URL: https://issues.apache.org/jira/browse/HDFS-16205
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++, tools
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


The source files for hdfs_allowSnapshot use *getopt* for parsing the command 
line arguments. getopt is available only on Linux and thus isn't cross 
platform. We need to replace getopt with *boost::program_options* to make this 
cross platform.






[jira] [Created] (HDFS-16178) Make recursive delete cross platform in libhdfs++

2021-08-18 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16178:
-

 Summary: Make recursive delete cross platform in libhdfs++
 Key: HDFS-16178
 URL: https://issues.apache.org/jira/browse/HDFS-16178
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


The *TempDir* class in libhdfs++ currently uses the nftw API provided by 
*ftw.h*, which is present only on Linux and not on Windows. We need to use the 
APIs from C++17 *std::filesystem* to make this cross platform.






[jira] [Created] (HDFS-16174) Implement temp file and dir in cc files

2021-08-13 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16174:
-

 Summary: Implement temp file and dir in cc files
 Key: HDFS-16174
 URL: https://issues.apache.org/jira/browse/HDFS-16174
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


In C++, we generally put declarations in header files and the corresponding 
implementations in .cc files. Here, the implementations of TempFile and TempDir 
are in configuration_test.h itself. This offers no benefit, and the compilation 
of the TempFile and TempDir classes is duplicated for every #include of the 
configuration_test.h header. Thus, we need to move the implementations into 
separate .cc files to avoid this.






[jira] [Resolved] (HDFS-16026) Restore cross platform mkstemp

2021-05-23 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16026.
---
Resolution: Abandoned

This fix will be tracked as part of HDFS-15971.

> Restore cross platform mkstemp
> --
>
> Key: HDFS-16026
> URL: https://issues.apache.org/jira/browse/HDFS-16026
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>







[jira] [Created] (HDFS-16026) Restore cross platform mkstemp

2021-05-15 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16026:
-

 Summary: Restore cross platform mkstemp
 Key: HDFS-16026
 URL: https://issues.apache.org/jira/browse/HDFS-16026
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra









[jira] [Created] (HDFS-15976) Make mkdtemp cross platform

2021-04-14 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15976:
-

 Summary: Make mkdtemp cross platform
 Key: HDFS-15976
 URL: https://issues.apache.org/jira/browse/HDFS-15976
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


mkdtemp is used for creating a temporary directory whose name adheres to the 
given template. It's not available in Visual C++. We need to make this cross 
platform.






[jira] [Created] (HDFS-15971) Make mkstemp cross platform

2021-04-12 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15971:
-

 Summary: Make mkstemp cross platform
 Key: HDFS-15971
 URL: https://issues.apache.org/jira/browse/HDFS-15971
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


mkstemp isn't available in Visual C++. We need to make it cross platform.






[jira] [Created] (HDFS-15962) Make strcasecmp cross platform

2021-04-09 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15962:
-

 Summary: Make strcasecmp cross platform
 Key: HDFS-15962
 URL: https://issues.apache.org/jira/browse/HDFS-15962
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


strcasecmp isn't available in Visual C++. We need to make this cross platform.






[jira] [Created] (HDFS-15955) Make explicit_bzero cross platform

2021-04-07 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15955:
-

 Summary: Make explicit_bzero cross platform
 Key: HDFS-15955
 URL: https://issues.apache.org/jira/browse/HDFS-15955
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


The function explicit_bzero isn't available in Visual C++. We need to make this 
cross platform.






[jira] [Created] (HDFS-15954) Pass correct function for SASL callback

2021-04-06 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15954:
-

 Summary: Pass correct function for SASL callback
 Key: HDFS-15954
 URL: https://issues.apache.org/jira/browse/HDFS-15954
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


It seems like an incorrect function type is passed as the callback for SASL. We 
get the following warnings during compilation -
{code}
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/rpc/cyrus_sasl_engine.cc:
 In constructor ‘hdfs::CySaslEngine::CySaslEngine()’:
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/rpc/cyrus_sasl_engine.cc:124:46:
 warning: cast between incompatible function types from ‘int (*)(void*, int, 
const char**, unsigned int*)’ to ‘hdfs::sasl_callback_ft’ {aka ‘int (*)()’} 
[-Wcast-function-type]
  124 | { SASL_CB_USER, (sasl_callback_ft) & get_name, this}, // userid 
for authZ
  |  ^~~~
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/rpc/cyrus_sasl_engine.cc:125:46:
 warning: cast between incompatible function types from ‘int (*)(void*, int, 
const char**, unsigned int*)’ to ‘hdfs::sasl_callback_ft’ {aka ‘int (*)()’} 
[-Wcast-function-type]
  125 | { SASL_CB_AUTHNAME, (sasl_callback_ft) & get_name, this}, // authid 
for authT
  |  ^~~~
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/rpc/cyrus_sasl_engine.cc:126:46:
 warning: cast between incompatible function types from ‘int (*)(void*, int, 
const char**, const char**)’ to ‘hdfs::sasl_callback_ft’ {aka ‘int (*)()’} 
[-Wcast-function-type]
  126 | { SASL_CB_GETREALM, (sasl_callback_ft) & getrealm, this}, // 
krb/gssapi realm
  |  ^~~~
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/rpc/cyrus_sasl_engine.cc:
 At global scope:
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/rpc/cyrus_sasl_engine.cc:423:41:
 warning: cast between incompatible function types from ‘int (*)(void*, int, 
const char*)’ to ‘hdfs::sasl_callback_ft’ {aka ‘int (*)()’} 
[-Wcast-function-type]
  423 | { SASL_CB_LOG, (sasl_callback_ft) & sasl_my_log, NULL},
  | ^~~
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/rpc/cyrus_sasl_engine.cc:424:44:
 warning: cast between incompatible function types from ‘int (*)(void*, const 
char*, const char*, const char**, unsigned int*)’ to ‘hdfs::sasl_callback_ft’ 
{aka ‘int (*)()’} [-Wcast-function-type]
  424 | { SASL_CB_GETOPT, (sasl_callback_ft) & sasl_getopt, NULL},
  |^~~
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/rpc/cyrus_sasl_engine.cc:425:45:
 warning: cast between incompatible function types from ‘int (*)(void*, const 
char**)’ to ‘hdfs::sasl_callback_ft’ {aka ‘int (*)()’} [-Wcast-function-type]
  425 | { SASL_CB_GETPATH, (sasl_callback_ft) & get_path, NULL}, // to find 
th mechanisms
  | ^~~~
{code}






[jira] [Created] (HDFS-15950) Remove unused hdfs.proto import

2021-04-03 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15950:
-

 Summary: Remove unused hdfs.proto import
 Key: HDFS-15950
 URL: https://issues.apache.org/jira/browse/HDFS-15950
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


hdfs.proto is imported in inotify.proto and is unused. This causes the 
following warning to be generated -

{code}
inotify.proto:35:1: warning: Import hdfs.proto is unused.
{code}






[jira] [Created] (HDFS-15949) Fix integer overflow

2021-04-02 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15949:
-

 Summary: Fix integer overflow
 Key: HDFS-15949
 URL: https://issues.apache.org/jira/browse/HDFS-15949
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


There are some instances where integer overflow warnings are reported. Need to 
fix them.

{code}
[ 63%] Building CXX object 
main/native/libhdfspp/tests/CMakeFiles/hdfs_ext_hdfspp_test_shim_static.dir/hdfs_ext_test.cc.o
In file included from 
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googletest/include/gtest/gtest.h:375,
 from 
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googlemock/include/gmock/internal/gmock-internal-utils.h:47,
 from 
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googlemock/include/gmock/gmock-actions.h:51,
 from 
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googlemock/include/gmock/gmock.h:59,
 from 
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfspp_mini_dfs.h:24,
 from 
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_ext_test.cc:19:
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_ext_test.cc:
 In member function ‘virtual void hdfs::HdfsExtTest_TestHosts_Test::TestBody()’:
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_ext_test.cc:456:95:
 warning: integer overflow in expression of type ‘long int’ results in 
‘-9223372036854775808’ [-Woverflow]
  456 |   EXPECT_EQ(nullptr, hdfsGetHosts(fs, filename.c_str(), 0, 
std::numeric_limits::max()+1));
  |
~~~^~
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_ext_test.cc:460:92:
 warning: integer overflow in expression of type ‘long int’ results in 
‘-9223372036854775808’ [-Woverflow]
  460 |   EXPECT_EQ(nullptr, hdfsGetHosts(fs, filename.c_str(), 
std::numeric_limits::max()+1, std::numeric_limits::max()));
  | 
~~~^~
{code}






[jira] [Created] (HDFS-15948) Fix test4tests for libhdfspp

2021-04-02 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15948:
-

 Summary: Fix test4tests for libhdfspp
 Key: HDFS-15948
 URL: https://issues.apache.org/jira/browse/HDFS-15948
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build, libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


test4tests seems to be broken for libhdfs++. Even if I modify the tests 
accordingly, the Jenkins run still reports -1 against test4tests, saying that 
the tests weren't added/modified. It seems some configuration is missing that 
conveys to Yetus how to discover the tests in libhdfs++.






[jira] [Created] (HDFS-15947) Replace deprecated protobuf APIs

2021-04-02 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15947:
-

 Summary: Replace deprecated protobuf APIs
 Key: HDFS-15947
 URL: https://issues.apache.org/jira/browse/HDFS-15947
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


Some protobuf APIs are soon going to be deprecated and must be replaced with 
newer ones. One of the warnings reported due to this issue is as follows -
{code}
[ 48%] Building CXX object 
main/native/libhdfspp/tests/CMakeFiles/rpc_engine_test.dir/rpc_engine_test.cc.o
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/rpc_engine_test.cc:
 In function ‘std::pair > RpcResponse(const 
hadoop::common::RpcResponseHeaderProto&, const string&, const 
boost::system::error_code&)’:
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/rpc_engine_test.cc:92:56:
 warning: ‘int google::protobuf::MessageLite::ByteSize() const’ is deprecated: 
Please use ByteSizeLong() instead [-Wdeprecated-declarations]
   92 |   pbio::CodedOutputStream::VarintSize32(h.ByteSize()) +
  |^
In file included from 
/usr/local/include/google/protobuf/generated_enum_util.h:36,
 from /usr/local/include/google/protobuf/map.h:49,
 from 
/usr/local/include/google/protobuf/generated_message_table_driven.h:34,
 from 
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/tests/test.pb.h:26,
 from 
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/rpc_engine_test.cc:22:
/usr/local/include/google/protobuf/message_lite.h:408:7: note: declared here
  408 |   int ByteSize() const { return internal::ToIntSize(ByteSizeLong()); }
  |   ^~~~
{code}






[jira] [Created] (HDFS-15944) Prevent truncation with snprintf

2021-04-02 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15944:
-

 Summary: Prevent truncation with snprintf
 Key: HDFS-15944
 URL: https://issues.apache.org/jira/browse/HDFS-15944
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fuse-dfs, libhdfs
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


There are some areas of code in the libhdfs and fuse-dfs components where the 
destination buffer is smaller than the source being written into it, which 
would cause truncation. Thus we need to ensure that the data being written 
doesn't exceed the destination buffer size.

The following warnings are reported for this issue -
{code}
/mnt/c/Users/Gautham/projects/apache/wsl/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c:
 In function ‘doTestHdfsOperations.isra.0’:
/mnt/c/Users/Gautham/projects/apache/wsl/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c:226:39:
 warning: ‘/many_files_’ directive output may be truncated writing 12 bytes 
into a region of size between 1 and 4096 [-Wformat-truncation=]
  226 |   snprintf(filename, PATH_MAX, "%s/many_files_%d", listDirTest, 
nFile);
  |   ^~~~
/mnt/c/Users/Gautham/projects/apache/wsl/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c:226:36:
 note: directive argument in the range [0, ]
  226 |   snprintf(filename, PATH_MAX, "%s/many_files_%d", listDirTest, 
nFile);
  |^~
In file included from /usr/include/stdio.h:867,
 from 
/mnt/c/Users/Gautham/projects/apache/wsl/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/expect.h:23,
 from 
/mnt/c/Users/Gautham/projects/apache/wsl/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c:19:
/usr/include/x86_64-linux-gnu/bits/stdio2.h:67:10: note: 
‘__builtin___snprintf_chk’ output between 14 and 4112 bytes into a destination 
of size 4096
   67 |   return __builtin___snprintf_chk (__s, __n, __USE_FORTIFY_LEVEL - 1,
  |  ^~~~
   68 |__bos (__s), __fmt, __va_arg_pack ());
  |~
{code}

{code}
/mnt/c/Users/Gautham/projects/apache/wsl/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/test/fuse_workload.c:255:33:
 warning: ‘/a’ directive output may be truncated writing 2 bytes into a region 
of size between 1 and 4096 [-Wformat-truncation=]
  255 |   snprintf(tmp, sizeof(tmp), "%s/a", base);
  | ^~
In file included from /usr/include/stdio.h:867,
 from 
/mnt/c/Users/Gautham/projects/apache/wsl/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/expect.h:23,
 from 
/mnt/c/Users/Gautham/projects/apache/wsl/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/test/fuse_workload.c:22:
/usr/include/x86_64-linux-gnu/bits/stdio2.h:67:10: note: 
‘__builtin___snprintf_chk’ output between 3 and 4098 bytes into a destination 
of size 4096
   67 |   return __builtin___snprintf_chk (__s, __n, __USE_FORTIFY_LEVEL - 1,
  |  ^~~~
   68 |__bos (__s), __fmt, __va_arg_pack ());
  |~
/mnt/c/Users/Gautham/projects/apache/wsl/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/test/fuse_workload.c:263:33:
 warning: ‘/b’ directive output may be truncated writing 2 bytes into a region 
of size between 1 and 4096 [-Wformat-truncation=]
  263 |   snprintf(tmp, sizeof(tmp), "%s/b", base);
  | ^~
In file included from /usr/include/stdio.h:867,
 from 
/mnt/c/Users/Gautham/projects/apache/wsl/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/expect.h:23,
 from 
/mnt/c/Users/Gautham/projects/apache/wsl/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/test/fuse_workload.c:22:
/usr/include/x86_64-linux-gnu/bits/stdio2.h:67:10: note: 
‘__builtin___snprintf_chk’ output between 3 and 4098 bytes into a destination 
of size 4096
   67 |   return __builtin___snprintf_chk (__s, __n, __USE_FORTIFY_LEVEL - 1,
  |  ^~~~
   68 |__bos (__s), __fmt, __va_arg_pack ());
  |~
/mnt/c/Users/Gautham/projects/apache/wsl/hadoop/hadoop-hdfs-project/hadoop-hdfs-nativ

[jira] [Created] (HDFS-15935) Use memcpy for copying non-null terminated string

2021-03-29 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15935:
-

 Summary: Use memcpy for copying non-null terminated string
 Key: HDFS-15935
 URL: https://issues.apache.org/jira/browse/HDFS-15935
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


We currently get a warning while compiling HDFS native client -
{code}
[WARNING] /usr/include/x86_64-linux-gnu/bits/string_fortified.h:106:10: 
warning: '__builtin_strncpy' output truncated before terminating nul copying as 
many bytes from a string as its length [-Wstringop-truncation]
{code}

The scenario here is that the copied string is deliberately not null 
terminated, so that we can append a custom character afterwards. The warning 
reported for strncpy is valid, but not applicable in this scenario. Thus, we 
need to use memcpy, which doesn't care whether the string is null terminated.






[jira] [Created] (HDFS-15929) Replace RAND_pseudo_bytes in util.cc

2021-03-28 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15929:
-

 Summary: Replace RAND_pseudo_bytes in util.cc
 Key: HDFS-15929
 URL: https://issues.apache.org/jira/browse/HDFS-15929
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


RAND_pseudo_bytes was deprecated in OpenSSL 1.1.1. We get the following warning 
during compilation that it's deprecated -

{code}
[WARNING] 
/home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-2792/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/util.cc:
 warning: 'int RAND_pseudo_bytes(unsigned char*, int)' is deprecated 
[-Wdeprecated-declarations]
[WARNING]  from 
/home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-2792/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/util.cc
[WARNING] /usr/include/openssl/rand.h:44:1: note: declared here
{code}






[jira] [Created] (HDFS-15928) Replace RAND_pseudo_bytes in rpc_engine.cc

2021-03-27 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15928:
-

 Summary: Replace RAND_pseudo_bytes in rpc_engine.cc
 Key: HDFS-15928
 URL: https://issues.apache.org/jira/browse/HDFS-15928
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


RAND_pseudo_bytes was deprecated in OpenSSL 1.1.1. We get the following warning 
during compilation that it's deprecated -
{code}
[WARNING] 
/home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-2792/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/rpc/rpc_engine.cc:124:40:
 warning: 'int RAND_pseudo_bytes(unsigned char*, int)' is deprecated 
[-Wdeprecated-declarations]
[WARNING]  from 
/home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-2792/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/rpc/rpc_engine.cc:18:
[WARNING] /usr/include/openssl/rand.h:44:1: note: declared here
{code}






[jira] [Created] (HDFS-15927) Catch polymorphic type by reference

2021-03-27 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15927:
-

 Summary: Catch polymorphic type by reference
 Key: HDFS-15927
 URL: https://issues.apache.org/jira/browse/HDFS-15927
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


Need to catch polymorphic exception types by reference so that virtual dispatch 
works on the caught object and the derived part isn't sliced away.

Also, the following warning gets reported since it's currently caught by value -
{code}
[WARNING] 
/home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-2792/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/hdfs_configuration.cc:138:22:
 warning: catching polymorphic type 'const class hdfs::uri_parse_error' by 
value [-Wcatch-value=]
[WARNING] 
/home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-2792/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/hdfs_configuration.cc:151:27:
 warning: catching polymorphic type 'struct hdfs::ha_parse_error' by value 
[-Wcatch-value=]
{code}






[jira] [Created] (HDFS-15922) Use memcpy for copying non-null terminated string in jni_helper.c

2021-03-25 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15922:
-

 Summary: Use memcpy for copying non-null terminated string in 
jni_helper.c
 Key: HDFS-15922
 URL: https://issues.apache.org/jira/browse/HDFS-15922
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


We currently get a warning while compiling HDFS native client -
{code}
[WARNING] inlined from 'wildcard_expandPath' at 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2792/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c:427:21,
[WARNING] /usr/include/x86_64-linux-gnu/bits/string_fortified.h:106:10: 
warning: '__builtin_strncpy' output truncated before terminating nul copying as 
many bytes from a string as its length [-Wstringop-truncation]
[WARNING] 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2792/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c:402:43:
 note: length computed here
{code}

The scenario here is that the copied string is deliberately not null 
terminated, since we want to insert a PATH_SEPARATOR ourselves. The warning 
reported for strncpy is valid, but not applicable in this scenario.






[jira] [Created] (HDFS-15918) Replace RAND_pseudo_bytes in sasl_digest_md5.cc

2021-03-24 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15918:
-

 Summary: Replace RAND_pseudo_bytes in sasl_digest_md5.cc
 Key: HDFS-15918
 URL: https://issues.apache.org/jira/browse/HDFS-15918
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


RAND_pseudo_bytes was deprecated in OpenSSL 1.1.1. We get the following warning 
during compilation that it's deprecated -
{code}
[WARNING] 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2792/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/sasl_digest_md5.cc:97:74:
 warning: 'int RAND_pseudo_bytes(unsigned char*, int)' is deprecated 
[-Wdeprecated-declarations]
[WARNING]  from 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2792/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/sasl_digest_md5.cc:20:
[WARNING] /usr/include/openssl/rand.h:44:1: note: declared here
{code}






[jira] [Created] (HDFS-15917) Make HDFS native client more secure

2021-03-24 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15917:
-

 Summary: Make HDFS native client more secure
 Key: HDFS-15917
 URL: https://issues.apache.org/jira/browse/HDFS-15917
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


There's a lot of legacy code in the HDFS native client. With the recent C++17, 
CMake, Boost and other dependent library upgrades, we're noticing compilation 
warnings that some of these functions are on the path to deprecation.

We need to prioritize replacing the security related function calls, as that's 
the most important functionality.






[jira] [Created] (HDFS-15910) Replace bzero with explicit_bzero for better safety

2021-03-21 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15910:
-

 Summary: Replace bzero with explicit_bzero for better safety
 Key: HDFS-15910
 URL: https://issues.apache.org/jira/browse/HDFS-15910
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.2.2
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


It is better to always use explicit_bzero since it guarantees that the buffer 
will be cleared irrespective of the compiler optimizations.






[jira] [Created] (HDFS-15909) Make fnmatch cross platform

2021-03-20 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15909:
-

 Summary: Make fnmatch cross platform
 Key: HDFS-15909
 URL: https://issues.apache.org/jira/browse/HDFS-15909
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs++
Affects Versions: 3.2.2
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


The function fnmatch isn't available in Visual C++. Need to make this cross 
platform.






[jira] [Created] (HDFS-15903) Refactor X-Platform library

2021-03-17 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15903:
-

 Summary: Refactor X-Platform library
 Key: HDFS-15903
 URL: https://issues.apache.org/jira/browse/HDFS-15903
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs++
Affects Versions: 3.2.2
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


X-Platform started out as a utility to help write cross platform code in 
Hadoop. As its scope expands to cover more scenarios, it is necessary to 
refactor it at this early stage to ensure the proper organization and growth of 
the X-Platform library.






[jira] [Created] (HDFS-15843) Make write cross platform

2021-02-20 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15843:
-

 Summary: Make write cross platform
 Key: HDFS-15843
 URL: https://issues.apache.org/jira/browse/HDFS-15843
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs++
Affects Versions: 3.2.2
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


We're currently using the *write* function from unistd.h, which isn't 
cross-platform. We need to replace it with *std::cout.write* instead.






[jira] [Created] (HDFS-15740) Make basename cross-platform

2020-12-19 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15740:
-

 Summary: Make basename cross-platform
 Key: HDFS-15740
 URL: https://issues.apache.org/jira/browse/HDFS-15740
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


The *basename* function isn't available in the Visual Studio 2019 compiler. We 
need to make it cross-platform.






[jira] [Created] (HDFS-15712) Upgrade googletest to 1.10.0

2020-12-05 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15712:
-

 Summary: Upgrade googletest to 1.10.0
 Key: HDFS-15712
 URL: https://issues.apache.org/jira/browse/HDFS-15712
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: build, libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra
 Fix For: 3.4.0


The Google Test library used in the *libhdfspp* module of the *Hadoop HDFS 
Native Client* project is quite old (about 7 years at the time of this 
writing). Moreover, even though it's third-party code, the entire library is 
checked in as part of the Hadoop codebase.






[jira] [Created] (HDFS-15699) lz4 sources missing for native Visual Studio project

2020-11-28 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15699:
-

 Summary: lz4 sources missing for native Visual Studio project
 Key: HDFS-15699
 URL: https://issues.apache.org/jira/browse/HDFS-15699
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: native
Affects Versions: 3.3.0
 Environment: Windows
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra
 Fix For: 3.3.0


The lz4 sources are missing from the *native.vcxproj* Visual Studio project for 
Windows. This is causing compilation failures on Windows.






[jira] [Created] (HDFS-15385) Upgrade boost library

2020-06-03 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15385:
-

 Summary: Upgrade boost library
 Key: HDFS-15385
 URL: https://issues.apache.org/jira/browse/HDFS-15385
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs++
Reporter: Gautham Banasandra
 Attachments: image-2020-06-03-21-41-49-397.png

The version of the boost library that's currently being used in HDFS is 1.10.2. 
It is VERY old, from a time when perhaps the name "boost" hadn't even been 
conceived. Going by the name of the library, it was probably just called 
"asio".

From the [https://www.boost.org/users/history/] website, the earliest available 
version, 1.10.3, is more than two decades old, as it was released in 1999 -

!image-2020-06-03-21-41-49-397.png!

This poses a big hurdle when upgrading to newer compiler versions, as some of 
the programming constructs used in asio-1.10.2, which were mere warnings back 
then, get flagged as outright errors by modern compilers. (I tried to compile 
Hadoop with Visual Studio 2019 and saw plenty of such errors.)

In the interest of keeping the Hadoop project modern and alive, I would like to 
propose upgrading the boost library to the latest version.


