[jira] [Created] (HADOOP-9160) Change management protocol to JMX

2012-12-20 Thread Luke Lu (JIRA)
Luke Lu created HADOOP-9160:
---

 Summary: Change management protocol to JMX
 Key: HADOOP-9160
 URL: https://issues.apache.org/jira/browse/HADOOP-9160
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Luke Lu


Currently we use Hadoop RPC (and some HTTP, notably fsck) for admin protocols. 
We should consider moving all admin protocols to JMX, as it's the industry 
standard for Java server management with wide client support.

Having an alternative/redundant RPC mechanism is very desirable for admin 
protocols. I've seen multiple cases in the past where NN and/or JT RPC was 
locked up solid due to various bugs and/or RPC thread pool exhaustion, while 
HTTP and/or JMX worked just fine.

Other desirable benefits include admin protocol backward compatibility and 
introspectability, which is convenient for a centralized management system 
managing multiple Hadoop clusters of different versions. Another notable 
benefit is that it's much easier to implement new admin commands in JMX 
(especially with MXBean) than in Hadoop RPC, particularly in trunk (and 
0.23+, 2.x).

Since Hadoop RPC doesn't guarantee backward compatibility (probably not ever 
for branch-1), there are no external management tools depending on it. We can 
maintain practical backward compatibility by keeping the admin script/command 
line interface unchanged.
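
As a rough sketch of how cheap an MXBean-based admin command can be (a 
hypothetical interface, not actual Hadoop code):

{code:java}
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class AdminJmxSketch {
  // Hypothetical admin interface; the MXBean suffix gives open-type
  // (introspectable, version-tolerant) mappings for free.
  public interface NameNodeAdminMXBean {
    boolean enterSafeMode();  // exposed as a JMX operation
    String getVersion();      // exposed as a read-only attribute
  }

  public static class NameNodeAdmin implements NameNodeAdminMXBean {
    public boolean enterSafeMode() { return true; }
    public String getVersion() { return "sketch"; }
  }

  public static void main(String[] args) throws Exception {
    MBeanServer server = ManagementFactory.getPlatformMBeanServer();
    // Once registered, any JMX client (jconsole, a remote connector)
    // can invoke the operation and read the attribute.
    server.registerMBean(new NameNodeAdmin(),
        new ObjectName("Hadoop:service=NameNode,name=Admin"));
  }
}
{code}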

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-5469) Exposing Hadoop metrics via HTTP

2012-12-20 Thread Yevgen Yampolskiy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-5469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13536899#comment-13536899
 ] 

Yevgen Yampolskiy commented on HADOOP-5469:
---

Is it available with hadoop-metrics2? It looks like the /metrics page is 
missing in hadoop-1.0.4.

 Exposing Hadoop metrics via HTTP
 

 Key: HADOOP-5469
 URL: https://issues.apache.org/jira/browse/HADOOP-5469
 Project: Hadoop Common
  Issue Type: New Feature
  Components: metrics
Reporter: Philip Zeyliger
Assignee: Philip Zeyliger
 Fix For: 0.21.0

 Attachments: HADOOP-5469.patch, HADOOP-5469.patch

  Time Spent: 2h
  Remaining Estimate: 1.5h

 Implement a /metrics URL on the HTTP server of Hadoop daemons, to expose 
 metrics data to users via their web browsers, in plain-text and JSON.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9153) Support createNonRecursive in ViewFileSystem

2012-12-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13536957#comment-13536957
 ] 

Hudson commented on HADOOP-9153:


Integrated in Hadoop-Yarn-trunk #71 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/71/])
HADOOP-9153. Support createNonRecursive in ViewFileSystem. Contributed by 
Sandy Ryza. (Revision 1423824)

 Result = SUCCESS
tomwhite : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1423824
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java


 Support createNonRecursive in ViewFileSystem
 

 Key: HADOOP-9153
 URL: https://issues.apache.org/jira/browse/HADOOP-9153
 Project: Hadoop Common
  Issue Type: Improvement
  Components: viewfs
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9153-1.patch, HADOOP-9153.patch


 Implement createNonRecursive in ViewFileSystem.  Currently an 
 Unsupported... exception is thrown when it's called.
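
The fix is essentially delegation: the view/chroot wrapper resolves the path 
against its mount point and forwards the call to the underlying filesystem, 
instead of throwing. A simplified sketch with hypothetical types (the real 
override takes Hadoop's full createNonRecursive parameter list):

{code:java}
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical, simplified filesystem shape.
interface Fs {
  OutputStream createNonRecursive(String path) throws IOException;
}

// The wrapper stops throwing "unsupported" by resolving the view path
// and forwarding to the filesystem it wraps.
class ChRootedFsSketch implements Fs {
  private final Fs underlying;
  private final String root;  // mount-point prefix

  ChRootedFsSketch(Fs underlying, String root) {
    this.underlying = underlying;
    this.root = root;
  }

  @Override
  public OutputStream createNonRecursive(String path) throws IOException {
    return underlying.createNonRecursive(root + "/" + path);
  }
}
{code}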

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8427) Convert Forrest docs to APT, incremental

2012-12-20 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8427:
---

Summary: Convert Forrest docs to APT, incremental  (was: Convert Forrest 
docs to APT)

 Convert Forrest docs to APT, incremental
 

 Key: HADOOP-8427
 URL: https://issues.apache.org/jira/browse/HADOOP-8427
 Project: Hadoop Common
  Issue Type: Task
  Components: documentation
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Andy Isaacson
  Labels: newbie
 Attachments: hadoop8427-1.txt, hadoop8427-3.txt, hadoop8427-4.txt, 
 hadoop8427-5.txt, HADOOP-8427.sh, hadoop8427.txt


 Some of the Forrest docs content in src/docs/src/documentation/content/xdocs 
 has not yet been converted to APT and moved to src/site/apt. Let's convert 
 the Forrest docs that haven't been converted yet to new APT content in 
 hadoop-common/src/site/apt (and link the new content into 
 hadoop-project/src/site/apt/index.apt.vm) and remove all Forrest dependencies.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8427) Convert Forrest docs to APT, incremental

2012-12-20 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537018#comment-13537018
 ] 

Alejandro Abdelnur commented on HADOOP-8427:


+1

 Convert Forrest docs to APT, incremental
 

 Key: HADOOP-8427
 URL: https://issues.apache.org/jira/browse/HADOOP-8427
 Project: Hadoop Common
  Issue Type: Task
  Components: documentation
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Andy Isaacson
  Labels: newbie
 Attachments: hadoop8427-1.txt, hadoop8427-3.txt, hadoop8427-4.txt, 
 hadoop8427-5.txt, HADOOP-8427.sh, hadoop8427.txt


 Some of the Forrest docs content in src/docs/src/documentation/content/xdocs 
 has not yet been converted to APT and moved to src/site/apt. Let's convert 
 the Forrest docs that haven't been converted yet to new APT content in 
 hadoop-common/src/site/apt (and link the new content into 
 hadoop-project/src/site/apt/index.apt.vm) and remove all Forrest dependencies.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-8427) Convert Forrest docs to APT, incremental

2012-12-20 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur resolved HADOOP-8427.


   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed

Thanks Andy. I've committed this to trunk. I have not merged it to branch-2: 
site generation fails in the gridmix module, and I think we should complete 
the migration to APT in trunk first.

 Convert Forrest docs to APT, incremental
 

 Key: HADOOP-8427
 URL: https://issues.apache.org/jira/browse/HADOOP-8427
 Project: Hadoop Common
  Issue Type: Task
  Components: documentation
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Andy Isaacson
  Labels: newbie
 Fix For: 3.0.0

 Attachments: hadoop8427-1.txt, hadoop8427-3.txt, hadoop8427-4.txt, 
 hadoop8427-5.txt, HADOOP-8427.sh, hadoop8427.txt


 Some of the Forrest docs content in src/docs/src/documentation/content/xdocs 
 has not yet been converted to APT and moved to src/site/apt. Let's convert 
 the Forrest docs that haven't been converted yet to new APT content in 
 hadoop-common/src/site/apt (and link the new content into 
 hadoop-project/src/site/apt/index.apt.vm) and remove all Forrest dependencies.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9105) FsShell -moveFromLocal erroneously fails

2012-12-20 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537093#comment-13537093
 ] 

Robert Joseph Evans commented on HADOOP-9105:
-

The changes look fine to me. I just don't totally understand why it was 
failing before. Is it because there is a bug in FileSystem.moveFromLocalFile? 
If so, do we need to fix it too?

 FsShell -moveFromLocal erroneously fails
 

 Key: HADOOP-9105
 URL: https://issues.apache.org/jira/browse/HADOOP-9105
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-9105.branch-0.23.patch, HADOOP-9105.patch


 The move successfully completes, but then reports an error when trying to 
 delete the local source directory, even though the move succeeded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7975) Add entry to XML defaults for new LZ4 codec

2012-12-20 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-7975:


Target Version/s:   (was: 0.23.1, 0.24.0)

 Add entry to XML defaults for new LZ4 codec
 ---

 Key: HADOOP-7975
 URL: https://issues.apache.org/jira/browse/HADOOP-7975
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.1
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Fix For: 0.23.1

 Attachments: HADOOP-7975.patch


 HADOOP-7657 added a new LZ4 codec, but failed to extend the 
 io.compression.codecs list which MR etc. use to load codecs.
 We should add an entry to the core-default XML for this new codec, just as we 
 did with Snappy.
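
The entry would presumably mirror the Snappy one, something like the following 
(the exact codec list is whatever core-default.xml already carries):

{code:xml}
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.SnappyCodec,org.apache.hadoop.io.compress.Lz4Codec</value>
</property>
{code}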

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9161) FileSystem.moveFromLocalFile fails to remove source

2012-12-20 Thread Daryn Sharp (JIRA)
Daryn Sharp created HADOOP-9161:
---

 Summary: FileSystem.moveFromLocalFile fails to remove source
 Key: HADOOP-9161
 URL: https://issues.apache.org/jira/browse/HADOOP-9161
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha, 0.23.0, 3.0.0
Reporter: Daryn Sharp


FileSystem.moveFromLocalFile fails with "cannot remove file:/path" after 
copying the files. It appears to be trying to remove a file URI as a relative 
path.
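
The failure mode (treating a file: URI string as a relative path) can be 
reproduced in miniature with java.io.File; this is a sketch of the symptom, 
not the actual Hadoop code path:

{code:java}
import java.io.File;

public class UriAsPathSketch {
  public static void main(String[] args) throws Exception {
    File src = File.createTempFile("movesrc", ".txt");
    String uri = src.toURI().toString();  // e.g. "file:/tmp/movesrc123.txt"
    // Deleting by the URI string treats "file:/tmp/..." as a relative
    // path, so it fails even though the file exists.
    System.out.println(new File(uri).delete());            // false
    System.out.println(new File(src.getPath()).delete());  // true
  }
}
{code}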

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9105) FsShell -moveFromLocal erroneously fails

2012-12-20 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537112#comment-13537112
 ] 

Daryn Sharp commented on HADOOP-9105:
-

Yes, I filed HADOOP-9161.  This patch fixes the shell and makes it behave more 
consistently with other shell commands.

 FsShell -moveFromLocal erroneously fails
 

 Key: HADOOP-9105
 URL: https://issues.apache.org/jira/browse/HADOOP-9105
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-9105.branch-0.23.patch, HADOOP-9105.patch


 The move successfully completes, but then reports an error when trying to 
 delete the local source directory, even though the move succeeded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9105) FsShell -moveFromLocal erroneously fails

2012-12-20 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537120#comment-13537120
 ] 

Robert Joseph Evans commented on HADOOP-9105:
-

OK, I am +1.  Even if HADOOP-9161 were in, it would not give the same detailed 
error messages on failure that this patch does, and it would not be as 
consistent with other FsShell commands.  I'll check it in.

 FsShell -moveFromLocal erroneously fails
 

 Key: HADOOP-9105
 URL: https://issues.apache.org/jira/browse/HADOOP-9105
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-9105.branch-0.23.patch, HADOOP-9105.patch


 The move successfully completes, but then reports an error when trying to 
 delete the local source directory, even though the move succeeded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8427) Convert Forrest docs to APT, incremental

2012-12-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537128#comment-13537128
 ] 

Hudson commented on HADOOP-8427:


Integrated in Hadoop-trunk-Commit #3146 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3146/])
HADOOP-8427. Convert Forrest docs to APT, incremental. (adi2 via tucu) 
(Revision 1424459)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1424459
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/HttpAuthentication.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/cluster_setup.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/commands_manual.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/file_system_shell.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/ClusterSetup.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/CommandsManual.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/HttpAuthentication.apt.vm
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_design.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/resources/images/hdfs-logo.jpg
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/resources/images/hdfsarchitecture.gif
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/resources/images/hdfsarchitecture.odg
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/resources/images/hdfsarchitecture.png
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/resources/images/hdfsdatanodes.gif
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/resources/images/hdfsdatanodes.odg
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/resources/images/hdfsdatanodes.png
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/resources/images/hdfsproxy-forward.jpg
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/resources/images/hdfsproxy-overview.jpg
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/resources/images/hdfsproxy-server.jpg
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HdfsDesign.apt.vm
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/hdfs-logo.jpg
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/hdfsarchitecture.gif
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/hdfsarchitecture.odg
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/hdfsarchitecture.png
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/hdfsdatanodes.gif
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/hdfsdatanodes.odg
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/hdfsdatanodes.png
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/hdfsproxy-forward.jpg
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/hdfsproxy-overview.jpg
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/hdfsproxy-server.jpg
* /hadoop/common/trunk/hadoop-project/src/site/site.xml


 Convert Forrest docs to APT, incremental
 

 Key: HADOOP-8427
 URL: https://issues.apache.org/jira/browse/HADOOP-8427
 Project: Hadoop Common
  Issue Type: Task
  Components: documentation
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Andy Isaacson
  Labels: newbie
 Fix For: 3.0.0

 Attachments: hadoop8427-1.txt, hadoop8427-3.txt, hadoop8427-4.txt, 
 hadoop8427-5.txt, HADOOP-8427.sh, hadoop8427.txt


 Some of the Forrest docs content in src/docs/src/documentation/content/xdocs 
 has not yet been converted to APT and moved to src/site/apt. Let's convert 
 the Forrest docs that haven't been converted yet to new APT content in 
 hadoop-common/src/site/apt (and link the new content into 
 hadoop-project/src/site/apt/index.apt.vm) and remove all Forrest dependencies.

[jira] [Commented] (HADOOP-9105) FsShell -moveFromLocal erroneously fails

2012-12-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537129#comment-13537129
 ] 

Hudson commented on HADOOP-9105:


Integrated in Hadoop-trunk-Commit #3146 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3146/])
HADOOP-9105. FsShell -moveFromLocal erroneously fails (daryn via bobby) 
(Revision 1424566)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1424566
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Command.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/MoveCommands.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShellCopy.java


 FsShell -moveFromLocal erroneously fails
 

 Key: HADOOP-9105
 URL: https://issues.apache.org/jira/browse/HADOOP-9105
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-9105.branch-0.23.patch, HADOOP-9105.patch


 The move successfully completes, but then reports an error when trying to 
 delete the local source directory, even though the move succeeded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9162) Add utility to check native library availability

2012-12-20 Thread Binglin Chang (JIRA)
Binglin Chang created HADOOP-9162:
-

 Summary: Add utility to check native library availability
 Key: HADOOP-9162
 URL: https://issues.apache.org/jira/browse/HADOOP-9162
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Minor


Many times after deploying Hadoop, or when troubleshooting, we need to check 
whether the native library (along with the native compression libraries) works 
properly, and I just want to use one command to check that, like this:

hadoop org.apache.hadoop.util.NativeCodeLoader

and it shows:

Native library loading test:
hadoop: false
zlib:   false
snappy: false
lz4:    false
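
A minimal sketch of such a checker, assuming the loader checks that trunk 
already exposes (NativeCodeLoader, ZlibFactory, SnappyCodec, Lz4Codec); the 
committed class name and output format are up to the patch:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.Lz4Codec;
import org.apache.hadoop.io.compress.SnappyCodec;
import org.apache.hadoop.io.compress.zlib.ZlibFactory;
import org.apache.hadoop.util.NativeCodeLoader;

public class NativeLibraryCheckSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    boolean hadoop = NativeCodeLoader.isNativeCodeLoaded();
    System.out.println("Native library loading test:");
    System.out.println("hadoop: " + hadoop);
    // Codec checks are only meaningful once libhadoop itself has loaded.
    System.out.println("zlib:   " + (hadoop && ZlibFactory.isNativeZlibLoaded(conf)));
    System.out.println("snappy: " + (hadoop && SnappyCodec.isNativeCodeLoaded()));
    System.out.println("lz4:    " + (hadoop && Lz4Codec.isNativeCodeLoaded()));
  }
}
{code}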


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9162) Add utility to check native library availability

2012-12-20 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HADOOP-9162:
--

Attachment: HADOOP-9162.patch

 Add utility to check native library availability
 

 Key: HADOOP-9162
 URL: https://issues.apache.org/jira/browse/HADOOP-9162
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Minor
 Attachments: HADOOP-9162.patch


 Many times after deploying Hadoop, or when troubleshooting, we need to check 
 whether the native library (along with the native compression libraries) 
 works properly, and I just want to use one command to check that, like this:
 hadoop org.apache.hadoop.util.NativeCodeLoader
 and it shows:
 Native library loading test:
 hadoop: false
 zlib:   false
 snappy: false
 lz4:    false

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9162) Add utility to check native library availability

2012-12-20 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HADOOP-9162:
--

Status: Patch Available  (was: Open)

 Add utility to check native library availability
 

 Key: HADOOP-9162
 URL: https://issues.apache.org/jira/browse/HADOOP-9162
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Minor
 Attachments: HADOOP-9162.patch


 Many times after deploying Hadoop, or when troubleshooting, we need to check 
 whether the native library (along with the native compression libraries) 
 works properly, and I just want to use one command to check that, like this:
 hadoop org.apache.hadoop.util.NativeCodeLoader
 and it shows:
 Native library loading test:
 hadoop: false
 zlib:   false
 snappy: false
 lz4:    false

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9162) Add utility to check native library availability

2012-12-20 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HADOOP-9162:
--

  Component/s: native
 Target Version/s: 2.0.3-alpha
Affects Version/s: 2.0.3-alpha
   3.0.0

 Add utility to check native library availability
 

 Key: HADOOP-9162
 URL: https://issues.apache.org/jira/browse/HADOOP-9162
 Project: Hadoop Common
  Issue Type: Improvement
  Components: native
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Minor
 Attachments: HADOOP-9162.patch


 Many times after deploying Hadoop, or when troubleshooting, we need to check 
 whether the native library (along with the native compression libraries) 
 works properly, and I just want to use one command to check that, like this:
 hadoop org.apache.hadoop.util.NativeCodeLoader
 and it shows:
 Native library loading test:
 hadoop: false
 zlib:   false
 snappy: false
 lz4:    false

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9105) FsShell -moveFromLocal erroneously fails

2012-12-20 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9105:


   Resolution: Fixed
Fix Version/s: 0.23.6
   2.0.3-alpha
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Daryn,

I put this in trunk, branch-2, and branch-0.23.

 FsShell -moveFromLocal erroneously fails
 

 Key: HADOOP-9105
 URL: https://issues.apache.org/jira/browse/HADOOP-9105
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.0.3-alpha, 0.23.6

 Attachments: HADOOP-9105.branch-0.23.patch, HADOOP-9105.patch


 The move successfully completes, but then reports error upon trying to delete 
 the local source directory even though it succeeded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9162) Add utility to check native library availability

2012-12-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537152#comment-13537152
 ] 

Hadoop QA commented on HADOOP-9162:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12561917/HADOOP-9162.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1914//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1914//console

This message is automatically generated.

 Add utility to check native library availability
 

 Key: HADOOP-9162
 URL: https://issues.apache.org/jira/browse/HADOOP-9162
 Project: Hadoop Common
  Issue Type: Improvement
  Components: native
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Minor
 Attachments: HADOOP-9162.patch


 Many times after deploying Hadoop, or when troubleshooting, we need to check 
 whether the native library (along with the native compression libraries) 
 works properly, and I just want to use one command to check that, like this:
 hadoop org.apache.hadoop.util.NativeCodeLoader
 and it shows:
 Native library loading test:
 hadoop: false
 zlib:   false
 snappy: false
 lz4:    false

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9124) SortedMapWritable violates contract of Map interface for equals() and hashCode()

2012-12-20 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537202#comment-13537202
 ] 

Karthik Kambatla commented on HADOOP-9124:
--

Thanks Suren for the updated patch. It looks mostly good, but for a few nits.

# It is nicer to import only the used methods from Assert rather than Assert.*
# Tests should have timeouts
# The failure reasons for equals and hashCode should be different - should a 
test fail, the message should be clear enough.
# assertFalse(condition) can be used in place of assertTrue(!condition)
# Though basic null checks might not be required, if we are checking that mapA 
is not null, we should also check that mapB is not null



 SortedMapWritable violates contract of Map interface for equals() and 
 hashCode()
 

 Key: HADOOP-9124
 URL: https://issues.apache.org/jira/browse/HADOOP-9124
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.0.2-alpha
Reporter: Patrick Hunt
Priority: Minor
 Attachments: HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch


 This issue is similar to HADOOP-7153. It was found when using MRUnit - see 
 MRUNIT-158, specifically 
 https://issues.apache.org/jira/browse/MRUNIT-158?focusedCommentId=13501985&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13501985
 --
 o.a.h.io.SortedMapWritable implements the java.util.Map interface, however it 
 does not define an implementation of the equals() or hashCode() methods; 
 instead the default implementations in java.lang.Object are used.
 This violates the contract of the Map interface which defines different 
 behaviour for equals() and hashCode() than Object does. More information 
 here: 
 http://download.oracle.com/javase/6/docs/api/java/util/Map.html#equals(java.lang.Object)
 The practical consequence is that SortedMapWritables containing equal entries 
 cannot be compared properly. We were bitten by this when trying to write an 
 MRUnit test for a Mapper that outputs MapWritables; the MRUnit driver cannot 
 test the equality of the expected and actual MapWritable objects.
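
The usual fix for a Map wrapper that inherits Object.equals/hashCode is to 
delegate both to the backing map. A minimal sketch (illustrative names, not 
SortedMapWritable's actual code):

{code:java}
import java.util.TreeMap;

public class SortedMapWrapperSketch {
  private final TreeMap<String, String> instance = new TreeMap<String, String>();

  @Override
  public boolean equals(Object obj) {
    if (this == obj) return true;
    if (!(obj instanceof SortedMapWrapperSketch)) return false;
    // Map contract: equal iff the entry sets are equal.
    return instance.equals(((SortedMapWrapperSketch) obj).instance);
  }

  @Override
  public int hashCode() {
    // Map contract: sum of the entries' hash codes.
    return instance.hashCode();
  }
}
{code}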

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9151) Include RPC error info in RpcResponseHeader instead of sending it separately

2012-12-20 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537225#comment-13537225
 ] 

Suresh Srinivas commented on HADOOP-9151:
-

bq.  I don't think it's a good idea to break compatibility for what amounts to 
a surface-level cleanup...
bq. Despite the alpha label, there are a lot of people using it, and breaking 
the compat should require a really good reason.
I agree that if possible we should avoid breaking compatibility. But when a 
cleanup can make the implementation a lot clearer, I prefer making that 
change. That is the reason why we use alpha, and anyone who uses that release 
should be ready to adopt the changes.

bq. Plus that the overhead of creating new proto objects for every rpc may 
already take a lot more resource than buffer copy
Not sure I understand this comment. Isn't the current byte[] coming from a 
protobuf message already?

bq. And another concern is this protocol may be bad for async non-blocking io
Can you please explain how the current proposal prevents this?

 Include RPC error info in RpcResponseHeader instead of sending it separately
 

 Key: HADOOP-9151
 URL: https://issues.apache.org/jira/browse/HADOOP-9151
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Attachments: HADOOP-9151.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8957) AbstractFileSystem#IsValidName should be overridden for embedded file systems like ViewFs

2012-12-20 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537226#comment-13537226
 ] 

Thomas Graves commented on HADOOP-8957:
---

Can you please update the fix versions on this?

 AbstractFileSystem#IsValidName should be overridden for embedded file systems 
 like ViewFs
 -

 Key: HADOOP-8957
 URL: https://issues.apache.org/jira/browse/HADOOP-8957
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-8957-branch-trunk-win.2.patch, 
 HADOOP-8957-branch-trunk-win.3.patch, HADOOP-8957-branch-trunk-win.4.patch, 
 HADOOP-8957.patch, HADOOP-8957.patch, HADOOP-8957-trunk.4.patch


 This appears to be a problem with parsing a Windows-specific path, ultimately 
 throwing InvocationTargetException from AbstractFileSystem.newInstance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9124) SortedMapWritable violates contract of Map interface for equals() and hashCode()

2012-12-20 Thread Surenkumar Nihalani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surenkumar Nihalani updated HADOOP-9124:


Attachment: HADOOP-9124.patch

Can you clarify the fifth point? I seem to have checked for (not null) on both 
maps. This patch should address the rest of the points.

 SortedMapWritable violates contract of Map interface for equals() and 
 hashCode()
 

 Key: HADOOP-9124
 URL: https://issues.apache.org/jira/browse/HADOOP-9124
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.0.2-alpha
Reporter: Patrick Hunt
Priority: Minor
 Attachments: HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, 
 HADOOP-9124.patch


 This issue is similar to HADOOP-7153. It was found when using MRUnit - see 
 MRUNIT-158, specifically 
 https://issues.apache.org/jira/browse/MRUNIT-158?focusedCommentId=13501985&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13501985
 --
 o.a.h.io.SortedMapWritable implements the java.util.Map interface, however it 
 does not define an implementation of the equals() or hashCode() methods; 
 instead the default implementations in java.lang.Object are used.
 This violates the contract of the Map interface which defines different 
 behaviour for equals() and hashCode() than Object does. More information 
 here: 
 http://download.oracle.com/javase/6/docs/api/java/util/Map.html#equals(java.lang.Object)
 The practical consequence is that SortedMapWritables containing equal entries 
 cannot be compared properly. We were bitten by this when trying to write an 
 MRUnit test for a Mapper that outputs MapWritables; the MRUnit driver cannot 
 test the equality of the expected and actual MapWritable objects.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9151) Include RPC error info in RpcResponseHeader instead of sending it separately

2012-12-20 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537284#comment-13537284
 ] 

Suresh Srinivas commented on HADOOP-9151:
-

Minor nit - for better readability, you may want to use a conditional 
expression (?:) for setting the exception class name and stack trace.
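
For example (hypothetical names, purely to illustrate the suggestion):

{code:java}
public class TernaryExample {
  // Instead of an if/else for each field:
  static String exceptionClassName(Throwable t) {
    return t == null ? "" : t.getClass().getName();
  }

  public static void main(String[] args) {
    System.out.println(exceptionClassName(null));                      // ""
    System.out.println(exceptionClassName(new RuntimeException("x"))); // java.lang.RuntimeException
  }
}
{code}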


 Include RPC error info in RpcResponseHeader instead of sending it separately
 

 Key: HADOOP-9151
 URL: https://issues.apache.org/jira/browse/HADOOP-9151
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Attachments: HADOOP-9151.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8957) AbstractFileSystem#IsValidName should be overridden for embedded file systems like ViewFs

2012-12-20 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8957:


Target Version/s:   (was: 3.0.0, trunk-win)
   Fix Version/s: 3.0.0

 AbstractFileSystem#IsValidName should be overridden for embedded file systems 
 like ViewFs
 -

 Key: HADOOP-8957
 URL: https://issues.apache.org/jira/browse/HADOOP-8957
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 3.0.0

 Attachments: HADOOP-8957-branch-trunk-win.2.patch, 
 HADOOP-8957-branch-trunk-win.3.patch, HADOOP-8957-branch-trunk-win.4.patch, 
 HADOOP-8957.patch, HADOOP-8957.patch, HADOOP-8957-trunk.4.patch


 This appears to be a problem with parsing a Windows-specific path, ultimately 
 throwing InvocationTargetException from AbstractFileSystem.newInstance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9162) Add utility to check native library availability

2012-12-20 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537299#comment-13537299
 ] 

Suresh Srinivas commented on HADOOP-9162:
-

This is a useful utility. Comments:
# Please add this functionality in a separate class instead of NativeCodeLoader.
# Given that you are providing the availability status as output, an exit code 
of 1 only when native code is not available seems strange. Perhaps you should 
just always return 0.
# Please add this as a command in src/main/bin/hadoop

 Add utility to check native library availability
 

 Key: HADOOP-9162
 URL: https://issues.apache.org/jira/browse/HADOOP-9162
 Project: Hadoop Common
  Issue Type: Improvement
  Components: native
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Minor
 Attachments: HADOOP-9162.patch


 Many times after deploying Hadoop, or when troubleshooting, we need to check 
 whether the native library (along with the native compression libraries) 
 works properly, and I just want to use one command to check that, like this:
 hadoop org.apache.hadoop.util.NativeCodeLoader
 and it shows:
 Native library loading test:
 hadoop: false
 zlib:   false
 snappy: false
 lz4:    false

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9151) Include RPC error info in RpcResponseHeader instead of sending it separately

2012-12-20 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537307#comment-13537307
 ] 

Todd Lipcon commented on HADOOP-9151:
-

bq. I agree that if possible we should avoid breaking the compatibility. But 
when a cleanup can make the implementation lot more clear, I prefer making that 
change. That is the reason why we use alpha and any one who uses that release 
should be ready to adopt the changes.

The implementation is slightly clearer, I agree. We should have done it this 
way to begin with. But IMO it's really not worth it.

There are now downstream projects building against Hadoop 2 releases, such as 
HBase. If we break compat between 2.0.2 and a later release, then HBase users 
will have the messy "make sure you replace your HDFS client jar inside HBase's 
lib directory with the exact right version or else get weird error messages" 
nonsense come back again.

I'd be +1 on this for 3.0 but not for branch-2, unless you can figure out some 
way of maintaining a compatibility path. For example, we could do something 
like this in 2.0:
- add an optional flag to the request protobuf indicating that the new style 
exception response is supported
- mark the embedded exception 
- new server checks the flag: if the flag is there, it sends the new-style 
(in-PB) exception response. If it is not there, it uses the old style response 
for compatibility
- when the new client gets an ERROR response, but exceptionClassName is not 
found in the response PB, then it knows it is an old style server, and reads 
the writables following the response (like the old protocol)

This would be compatible as follows:
- old client -> old server: obviously compatible
- old client -> new server: doesn't send flag, so new server responds with old 
response
- new client -> old server: sends flag, but the old server ignores it (since 
it's part of a PB, this is automatic). Server responds with old style response.
- new client -> new server: sends flag, so new server responds with new style 
response.

Obviously, all this makes the code more complex. So, I think it would be better 
to just leave branch-2 alone, and make the improvement in branch-3/trunk.
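
A sketch of the negotiation above with plain Java stand-ins (hypothetical 
names; no actual Hadoop protobuf fields):

{code:java}
public class CompatNegotiationSketch {
  // Stand-in for an optional flag on the request protobuf.
  static class Request { boolean supportsPbException; }

  static class Server {
    final boolean isNew;
    Server(boolean isNew) { this.isNew = isNew; }
    String respond(Request req) {
      // Old servers never see the flag (unknown optional fields are
      // skipped); new servers branch on it.
      return (isNew && req.supportsPbException)
          ? "new-style (in-PB) exception"
          : "old-style (writable) exception";
    }
  }

  public static void main(String[] args) {
    Request oldClient = new Request();  // flag unset
    Request newClient = new Request();
    newClient.supportsPbException = true;
    System.out.println(new Server(false).respond(oldClient));  // old-style
    System.out.println(new Server(true).respond(oldClient));   // old-style
    System.out.println(new Server(false).respond(newClient));  // old-style
    System.out.println(new Server(true).respond(newClient));   // new-style
  }
}
{code}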

 Include RPC error info in RpcResponseHeader instead of sending it separately
 

 Key: HADOOP-9151
 URL: https://issues.apache.org/jira/browse/HADOOP-9151
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Attachments: HADOOP-9151.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9124) SortedMapWritable violates contract of Map interface for equals() and hashCode()

2012-12-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537310#comment-13537310
 ] 

Hadoop QA commented on HADOOP-9124:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12561963/HADOOP-9124.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1915//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1915//console

This message is automatically generated.

 SortedMapWritable violates contract of Map interface for equals() and 
 hashCode()
 

 Key: HADOOP-9124
 URL: https://issues.apache.org/jira/browse/HADOOP-9124
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.0.2-alpha
Reporter: Patrick Hunt
Priority: Minor
 Attachments: HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, 
 HADOOP-9124.patch


 This issue is similar to HADOOP-7153. It was found when using MRUnit - see 
 MRUNIT-158, specifically 
 https://issues.apache.org/jira/browse/MRUNIT-158?focusedCommentId=13501985&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13501985
 --
 o.a.h.io.SortedMapWritable implements the java.util.Map interface, however it 
 does not define an implementation of the equals() or hashCode() methods; 
 instead the default implementations in java.lang.Object are used.
 This violates the contract of the Map interface which defines different 
 behaviour for equals() and hashCode() than Object does. More information 
 here: 
 http://download.oracle.com/javase/6/docs/api/java/util/Map.html#equals(java.lang.Object)
 The practical consequence is that SortedMapWritables containing equal entries 
 cannot be compared properly. We were bitten by this when trying to write an 
 MRUnit test for a Mapper that outputs MapWritables; the MRUnit driver cannot 
 test the equality of the expected and actual MapWritable objects.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9151) Include RPC error info in RpcResponseHeader instead of sending it separately

2012-12-20 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537337#comment-13537337
 ] 

Sanjay Radia commented on HADOOP-9151:
--

bq. @todd I've implemented a simple client for it in C++ 
Any plans to check this into Hadoop? This would be useful to the wider 
community, and we could speed up libhdfs.

 Include RPC error info in RpcResponseHeader instead of sending it separately
 

 Key: HADOOP-9151
 URL: https://issues.apache.org/jira/browse/HADOOP-9151
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Attachments: HADOOP-9151.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8561) Introduce HADOOP_PROXY_USER for secure impersonation in child hadoop client processes

2012-12-20 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-8561:
--

Fix Version/s: 0.23.6

 Introduce HADOOP_PROXY_USER for secure impersonation in child hadoop client 
 processes
 -

 Key: HADOOP-8561
 URL: https://issues.apache.org/jira/browse/HADOOP-8561
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Luke Lu
Assignee: Yu Gao
 Fix For: 1.2.0, 3.0.0, 2.0.3-alpha, 0.23.6, 1.1.2

 Attachments: hadoop-8561-branch-1.patch, hadoop-8561-branch-2.patch, 
 hadoop-8561.patch, hadoop-8561-v2.patch


 To let an authenticated user type hadoop shell commands in a web console, we 
 can introduce a HADOOP_PROXY_USER environment variable to allow proper 
 impersonation in the child hadoop client processes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8427) Convert Forrest docs to APT, incremental

2012-12-20 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537375#comment-13537375
 ] 

Suresh Srinivas commented on HADOOP-8427:
-

[~adi2] Thanks for doing this work.

 Convert Forrest docs to APT, incremental
 

 Key: HADOOP-8427
 URL: https://issues.apache.org/jira/browse/HADOOP-8427
 Project: Hadoop Common
  Issue Type: Task
  Components: documentation
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Andy Isaacson
  Labels: newbie
 Fix For: 3.0.0

 Attachments: hadoop8427-1.txt, hadoop8427-3.txt, hadoop8427-4.txt, 
 hadoop8427-5.txt, HADOOP-8427.sh, hadoop8427.txt


 Some of the Forrest docs content in src/docs/src/documentation/content/xdocs 
 has not yet been converted to APT and moved to src/site/apt. Let's convert 
 the Forrest docs that haven't been converted yet to new APT content in 
 hadoop-common/src/site/apt (and link the new content into 
 hadoop-project/src/site/apt/index.apt.vm) and remove all Forrest dependencies.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-5469) Exposing Hadoop metrics via HTTP

2012-12-20 Thread Luke Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-5469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537388#comment-13537388
 ] 

Luke Lu commented on HADOOP-5469:
-

Everything is available at /jmx now (including generic JVM properties and 
Hadoop metrics), as all metrics in metrics2 are published to JMX.
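
For example, pulling the full JSON dump over HTTP (host and port are 
placeholders for your daemon's web UI address; the ?qry= parameter narrows 
the output to one bean):

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class JmxFetchSketch {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://namenode.example.com:50070/jmx");
    BufferedReader in = new BufferedReader(
        new InputStreamReader(url.openStream(), "UTF-8"));
    for (String line; (line = in.readLine()) != null; ) {
      System.out.println(line);  // JSON dump of all registered MBeans
    }
    in.close();
  }
}
{code}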

 Exposing Hadoop metrics via HTTP
 

 Key: HADOOP-5469
 URL: https://issues.apache.org/jira/browse/HADOOP-5469
 Project: Hadoop Common
  Issue Type: New Feature
  Components: metrics
Reporter: Philip Zeyliger
Assignee: Philip Zeyliger
 Fix For: 0.21.0

 Attachments: HADOOP-5469.patch, HADOOP-5469.patch

  Time Spent: 2h
  Remaining Estimate: 1.5h

 Implement a /metrics URL on the HTTP server of Hadoop daemons, to expose 
 metrics data to users via their web browsers, in plain-text and JSON.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9162) Add utility to check native library availability

2012-12-20 Thread Luke Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537399#comment-13537399
 ] 

Luke Lu commented on HADOOP-9162:
-

It'll be more useful if it can print the versions of the native libs as well. 
I've seen it pick up the wrong libs in the past.

 Add utility to check native library availability
 

 Key: HADOOP-9162
 URL: https://issues.apache.org/jira/browse/HADOOP-9162
 Project: Hadoop Common
  Issue Type: Improvement
  Components: native
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Minor
 Attachments: HADOOP-9162.patch


 Many times after deploying Hadoop, or when troubleshooting, we need to check 
 whether the native library (along with the native compression libraries) 
 works properly, and I just want to use one command to check that, like this:
 hadoop org.apache.hadoop.util.NativeCodeLoader
 and it shows:
 Native library loading test:
 hadoop: false
 zlib:   false
 snappy: false
 lz4:    false

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9124) SortedMapWritable violates contract of Map interface for equals() and hashCode()

2012-12-20 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537415#comment-13537415
 ] 

Karthik Kambatla commented on HADOOP-9124:
--

Thanks Suren - the fifth point seems to have already been addressed.

A few more nits (hopefully the last :)):
# We can be a little conservative on the test timeouts - they shouldn't time 
out just because we run them on a slow/loaded machine. Do you think 200 ms is 
conservative enough? If not, we can maybe bump it up to 1s.
# The indentation seems to be off on a few lines.
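
With JUnit 4 that's just the timeout parameter on the annotation, e.g.:

{code:java}
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class TimeoutExampleTest {
  // Timeout in milliseconds; 1s is generous even for a loaded machine.
  @Test(timeout = 1000)
  public void testEqualsMatchesMapContract() {
    assertEquals(2, 1 + 1);  // placeholder assertion
  }
}
{code}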

 SortedMapWritable violates contract of Map interface for equals() and 
 hashCode()
 

 Key: HADOOP-9124
 URL: https://issues.apache.org/jira/browse/HADOOP-9124
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.0.2-alpha
Reporter: Patrick Hunt
Priority: Minor
 Attachments: HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, 
 HADOOP-9124.patch


 This issue is similar to HADOOP-7153. It was found when using MRUnit - see 
 MRUNIT-158, specifically 
 https://issues.apache.org/jira/browse/MRUNIT-158?focusedCommentId=13501985&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13501985
 --
 o.a.h.io.SortedMapWritable implements the java.util.Map interface, however it 
 does not define an implementation of the equals() or hashCode() methods; 
 instead the default implementations in java.lang.Object are used.
 This violates the contract of the Map interface which defines different 
 behaviour for equals() and hashCode() than Object does. More information 
 here: 
 http://download.oracle.com/javase/6/docs/api/java/util/Map.html#equals(java.lang.Object)
 The practical consequence is that SortedMapWritables containing equal entries 
 cannot be compared properly. We were bitten by this when trying to write an 
 MRUnit test for a Mapper that outputs MapWritables; the MRUnit driver cannot 
 test the equality of the expected and actual MapWritable objects.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9160) Change management protocol to JMX

2012-12-20 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537451#comment-13537451
 ] 

Todd Lipcon commented on HADOOP-9160:
-

I have mixed feelings on this. On one hand, it would have been nice had we 
started with JMX. But changing everything around now has some downsides:

First, I disagree with your assertion that there are no management tools 
depending on the existing RPCs. Our products at Cloudera definitely do depend 
on them, and there are also various APIs inside DistributedFileSystem which 
would probably be classified as admin.

In terms of handling those APIs, we'd end up having to use a remote JMX client 
in order to access the cluster. This is problematic in that it would use an 
entirely separate authentication system from Hadoop RPC -- I don't know a ton 
about JMX, but it appears to be user/password based? This begs the question of 
how it should be configured, how one deals with the case where multiple users 
should be able to admin the cluster, how to centrally manage it, etc. We've 
already solved these problems (at least partially) with the existing auth 
schemes and RPC, and we'd have to solve them again for JMX.

As for cross-version compatibility, I think pulling in a forward-compatible 
protobuf implementation into branch-1 would actually be a similar (or less) 
amount of work compared to overhauling all of the admin functionality to JMX, 
and would have the other major advantage that we could potentially get some 
basic client compatibility between hadoop 1 and 2, if someone were to do the 
infrastructure work.

Lastly, though JMX is a standard, in my experience there isn't wide client 
support. Most of the tooling I've found around JMX is pretty bad, to be 
honest, especially if you have management tools in languages other than Java. 
For example, googling "python jmx client", the only methods I can find involve 
putting a JMX-HTTP gateway on the server side, which means we're not really 
getting a ton of mileage anyway.
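
For context, here is a rough sketch of the MXBean style of admin interface 
under discussion - all names below are hypothetical illustrations, not Hadoop's 
actual beans (two files shown in one block):
{code}
// ClusterAdminMXBean.java - by convention the interface name must end in
// "MXBean", and it must be public so the MBean server can introspect it.
public interface ClusterAdminMXBean {
  boolean isInSafeMode();
  void refreshNodes();
}

// ClusterAdmin.java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ClusterAdmin implements ClusterAdminMXBean {
  @Override
  public boolean isInSafeMode() { return false; }

  @Override
  public void refreshNodes() { /* e.g. re-read include/exclude lists */ }

  public static void main(String[] args) throws Exception {
    MBeanServer server = ManagementFactory.getPlatformMBeanServer();
    server.registerMBean(new ClusterAdmin(),
        new ObjectName("Hadoop:service=Demo,name=ClusterAdmin"));
    // jconsole, jmxterm, or any other JMX client can now read attributes and
    // invoke refreshNodes() over a path independent of Hadoop RPC.
    Thread.sleep(Long.MAX_VALUE);
  }
}
{code}
Note that remote access to such a bean goes through the JMX connector, whose 
authentication is configured separately from Hadoop RPC - which is exactly the 
concern about a second auth scheme raised above.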

 Change management protocol to JMX
 -

 Key: HADOOP-9160
 URL: https://issues.apache.org/jira/browse/HADOOP-9160
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Luke Lu

 Currently we use Hadoop RPC (and some HTTP, notably fsck) for admin 
 protocols. We should consider moving all admin protocols to JMX, as it's the 
 industry standard for java server management with wide client support.
 Having an alternative/redundant RPC mechanism is very desirable for admin 
 protocols. I've seen in the past in multiple cases, where NN and/or JT RPC 
 were locked up solid due to various bugs and/or RPC thread pool exhaustion, 
 while HTTP and/or JMX worked just fine.
 Other desirable benefits include admin protocol backward compatibility and 
 introspectability, which is convenient for a centralized management system to 
 manage multiple Hadoop clusters of different versions. Another notable 
 benefit is that it's much easier to implement new admin commands in JMX 
 (especially with MXBean) than Hadoop RPC, especially in trunk (and 0.23+, 
 2.x).
 Since Hadoop RPC doesn't guarantee backward compatibility (probably not ever 
 for branch-1), there are no external management tools depending on it. We can 
 maintain a practical backward compatibility by keeping the admin 
 script/command line interface unchanged.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9141) Add support for createNonRecursive to ViewFileSystem

2012-12-20 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza resolved HADOOP-9141.


Resolution: Duplicate

Duplicate of HADOOP-9153.  My bad.

 Add support for createNonRecursive to ViewFileSystem
 

 Key: HADOOP-9141
 URL: https://issues.apache.org/jira/browse/HADOOP-9141
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza

 Currently this is thrown:
 java.io.IOException: createNonRecursive unsupported for this filesystem class 
 org.apache.hadoop.fs.viewfs.ViewFileSystem
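
For context, the call that hits this exception looks roughly like the following 
when fs.defaultFS points at a viewfs:// mount (a hypothetical repro; paths and 
parameters are illustrative):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CreateNonRecursiveRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);  // a ViewFileSystem under viewfs://
    // Before HADOOP-9153, ViewFileSystem threw the IOException here instead
    // of delegating to the mounted filesystem.
    FSDataOutputStream out = fs.createNonRecursive(
        new Path("/data/part-00000"), true /* overwrite */,
        4096, (short) 3, 128 * 1024 * 1024, null /* progress */);
    out.close();
  }
}
{code}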

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9151) Include RPC error info in RpcResponseHeader instead of sending it separately

2012-12-20 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537477#comment-13537477
 ] 

Todd Lipcon commented on HADOOP-9151:
-

bq. Not sure if I follow. When HBase is installed on top of a Hadoop cluster, 
doesn't it pick up the hadoop jars from the installed hadoop?

Not always. If you download the HBase tarball, for example, and build it with 
the Hadoop 2 profile, it will end up with its own copies of the hadoop jars.

bq. Any plans to check this into Hadoop? This would be useful to the wider 
community and we can speed up libhdfs.

The client is just a simple RPC client - not a full HDFS client, and I didn't 
get it to a really usable production quality state - just enough to do a 
listFiles on a running NN. I'll try to throw it up on github at some point. 
My point, though, was just to say that, even though this area of the protocol 
is slightly messy, it's far from the biggest obstacle to writing 
foreign-language implementations of RPC.

bq. I understand that it could be messy for downstream projects. But that is 
what an alpha release tag is meant to indicate. BTW since 2.0.0-alpha we have 
had many incompatible changes as well

As far as I know we have not broken RPC or API compatibility at all since 
2.0.0, and I would be against any case where we do (not just this one). As for 
the labeling of alpha, I have been arguing against calling it alpha for several 
months, but there are no clear bylaws on how the labeling changes, etc. So, given 
that I can't change the labeling, I'll just represent my feelings on individual 
code changes - and this is one that I can't legitimately support on branch-2.

In the spirit of full disclosure: wearing my Cloudera hat, we have a 
distribution based on the Hadoop 2 code line. We are not going to break wire 
compatibility within minor updates of this distribution. So, if branch-2 breaks 
compatibility, then our distro will become incompatible with branch-2, which is 
no good.

Wearing my Apache hat: it would be nice if Cloudera, Hortonworks, Apache, and 
anyone else distributing Hadoop 2 can remain fully wire- and API-compatible. 
If this change goes in, it will serve to fracture the ecosystem more, and make 
it harder for the community to move freely between different versions. I 
imagine other distributors would much prefer that all distros are 
wire-compatible, since it makes it easier, for example, for one of our 
customers to go and work with another vendor without rebuilding all their code 
with a new client jar (one of the main points of value in open source!).

So, to be clear, -1 on this change for branch-2 unless there is a compatibility 
path.

 Include RPC error info in RpcResponseHeader instead of sending it separately
 

 Key: HADOOP-9151
 URL: https://issues.apache.org/jira/browse/HADOOP-9151
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Attachments: HADOOP-9151.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9151) Include RPC error info in RpcResponseHeader instead of sending it separately

2012-12-20 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537518#comment-13537518
 ] 

Suresh Srinivas commented on HADOOP-9151:
-

bq. As far as I know we have not broken RPC or API compatibility at all since 
2.0.0, and I would be against any case where we do (not just this one).
We have not broken API compatibility in a long time. Even though RPC 
compatibility has not been broken, there have been many changes marked as 
incompatible (I counted 6 of them). Just because incompatible changes have not 
been made in RPC does not mean they cannot be made.

bq. As for the labeling of alpha, I have been arguing against calling it alpha 
for several months
The reason for having the alpha tag is so that we do not have to provide the 
stricter guarantees of a GA release. I am glad that we have retained it so far, 
so that these kinds of changes can happen.
 
bq. In the spirit of full disclosure: wearing my Cloudera hat, we have a 
distribution based on the Hadoop 2 code line. We are not going to break wire 
compatibility within minor updates of this distribution. So, if branch-2 breaks 
compatibility, then our distro will become incompatible with branch-2, which is 
no good.

We are talking about the Apache 2.0.x-alpha release here. How CDH manages its 
distribution and backward compatibility does not guide how Apache releases are 
done or what goes into them. The fact that you chose to include content that is 
not in trunk, or decided to tag a release in some other way, should not put 
constraints on Apache releases. Wearing my Apache hat, if this is the main 
reason for the -1, then it has no merit. 

If there were no issue for the CDH distribution, would you have objected to 
this change?

I would like others to comment on why a vendor's distribution, or compatibility 
with it, should put artificial constraints on Apache.

 Include RPC error info in RpcResponseHeader instead of sending it separately
 

 Key: HADOOP-9151
 URL: https://issues.apache.org/jira/browse/HADOOP-9151
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Attachments: HADOOP-9151.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9163) The rpc msg in ProtobufRpcEngine.proto should be moved out to avoid an extra copy

2012-12-20 Thread Sanjay Radia (JIRA)
Sanjay Radia created HADOOP-9163:


 Summary: The rpc msg in  ProtobufRpcEngine.proto should be moved 
out to avoid an extra copy
 Key: HADOOP-9163
 URL: https://issues.apache.org/jira/browse/HADOOP-9163
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sanjay Radia




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9163) The rpc msg in ProtobufRpcEngine.proto should be moved out to avoid an extra copy

2012-12-20 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537563#comment-13537563
 ] 

Sanjay Radia commented on HADOOP-9163:
--

{code}
message RequestProto {
  /** Name of the RPC method */
  required string methodName = 1;

  /** Bytes corresponding to the client protobuf request */
  optional bytes request = 2;
.
}
{code}
The request should be moved out as a separate message to avoid copying the 
request.
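
To make the copy concrete: embedding an already-built client message as a bytes 
field means serializing it into an intermediate ByteString, whose contents are 
then written out a second time with the envelope. A hypothetical sketch against 
the class protoc would generate from the RequestProto message above (the method 
name is illustrative):
{code}
import com.google.protobuf.ByteString;
import com.google.protobuf.Message;

final class ExtraCopyDemo {
  static byte[] wrap(Message clientRequest) {
    // copy #1: serialize the client message into a ByteString
    ByteString payload = clientRequest.toByteString();
    return RequestProto.newBuilder()
        .setMethodName("getFileInfo")   // illustrative method name
        .setRequest(payload)
        .build()
        .toByteArray();                 // copy #2: payload is written out again
  }
}
{code}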


 The rpc msg in  ProtobufRpcEngine.proto should be moved out to avoid an extra 
 copy
 --

 Key: HADOOP-9163
 URL: https://issues.apache.org/jira/browse/HADOOP-9163
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sanjay Radia



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9151) Include RPC error info in RpcResponseHeader instead of sending it separately

2012-12-20 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537565#comment-13537565
 ] 

Todd Lipcon commented on HADOOP-9151:
-

{quote}
We are talking about the Apache 2.0.x-alpha release here. How CDH manages its 
distribution and backward compatibility does not guide how Apache releases are 
done or what goes into them. The fact that you chose to include content that is 
not in trunk, or decided to tag a release in some other way, should not put 
constraints on Apache releases. Wearing my Apache hat, if this is the main 
reason for the -1, then it has no merit.
If there were no issue for the CDH distribution, would you have objected to 
this change?
I would like others to comment on why a vendor's distribution, or compatibility 
with it, should put artificial constraints on Apache.
{quote}

Regardless of the existence of CDH, I would have argued that HDFS and Common 
should have been labeled Stable months ago. For people who don't care to run 
MR2, I've found HDFS2 to be far more stable than HDFS1, in addition to offering 
many other benefits. But we've already beaten that horse to death on the 
mailing list.

So, given that I already consider HDFS to be stable, and know people running 
this branch in production scenarios where a rolling upgrade is required, I 
would make the same argument that we should not break compatibility.

bq. I would like others to comment on why a vendor's distribution or 
compatibility to it should put artificial constraints in Apache.

There are lots of people out there using this distro. If we break 
compatibility, people will have a harder time moving from the distro _back_ to 
Apache, if that's the angle you want to look at it from.



If this issue were a case of a big performance improvement, or a security 
issue, or even a feature improvement, I would weigh it stronger. But given that 
it's a code cleanup that brings no improvement at all to users, and only a very 
slight improvement for the 3 or 4 people in the world who are trying to 
implement Hadoop-compatible RPC in another language (one of whom happens to be 
me!), I think it needs much more motivation.

 Include RPC error info in RpcResponseHeader instead of sending it separately
 

 Key: HADOOP-9151
 URL: https://issues.apache.org/jira/browse/HADOOP-9151
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Attachments: HADOOP-9151.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9163) The rpc msg in ProtobufRpcEngine.proto should be moved out to avoid an extra copy

2012-12-20 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537566#comment-13537566
 ] 

Todd Lipcon commented on HADOOP-9163:
-

You can already avoid copying without changing the RPC request protobuf, by 
manually serializing the request proto using CodedOutputStream. See HBASE-5945 
for a patch in which I did this for HBase.
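
A rough sketch of that technique, assuming a protobuf-2-era Java API (for the 
simple case, Message#writeDelimitedTo does the same thing):
{code}
import java.io.IOException;
import java.io.OutputStream;

import com.google.protobuf.CodedOutputStream;
import com.google.protobuf.Message;

final class DelimitedWriter {
  // Write a varint length prefix followed by the message body straight to the
  // stream, avoiding an intermediate byte[]/ByteString copy of the message.
  static void writeDelimited(Message msg, OutputStream out) throws IOException {
    CodedOutputStream cos = CodedOutputStream.newInstance(out);
    cos.writeRawVarint32(msg.getSerializedSize());
    msg.writeTo(cos);
    cos.flush();
  }
}
{code}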

 The rpc msg in  ProtobufRpcEngine.proto should be moved out to avoid an extra 
 copy
 --

 Key: HADOOP-9163
 URL: https://issues.apache.org/jira/browse/HADOOP-9163
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sanjay Radia



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9124) SortedMapWritable violates contract of Map interface for equals() and hashCode()

2012-12-20 Thread Surenkumar Nihalani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surenkumar Nihalani updated HADOOP-9124:


Attachment: HADOOP-9124.patch

 SortedMapWritable violates contract of Map interface for equals() and 
 hashCode()
 

 Key: HADOOP-9124
 URL: https://issues.apache.org/jira/browse/HADOOP-9124
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.0.2-alpha
Reporter: Patrick Hunt
Priority: Minor
 Attachments: HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, 
 HADOOP-9124.patch, HADOOP-9124.patch


 This issue is similar to HADOOP-7153. It was found when using MRUnit - see 
 MRUNIT-158, specifically 
 https://issues.apache.org/jira/browse/MRUNIT-158?focusedCommentId=13501985&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13501985
 --
 o.a.h.io.SortedMapWritable implements the java.util.Map interface, however it 
 does not define an implementation of the equals() or hashCode() methods; 
 instead the default implementations in java.lang.Object are used.
 This violates the contract of the Map interface which defines different 
 behaviour for equals() and hashCode() than Object does. More information 
 here: 
 http://download.oracle.com/javase/6/docs/api/java/util/Map.html#equals(java.lang.Object)
 The practical consequence is that SortedMapWritables containing equal entries 
 cannot be compared properly. We were bitten by this when trying to write an 
 MRUnit test for a Mapper that outputs MapWritables; the MRUnit driver cannot 
 test the equality of the expected and actual MapWritable objects.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9124) SortedMapWritable violates contract of Map interface for equals() and hashCode()

2012-12-20 Thread Surenkumar Nihalani (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537578#comment-13537578
 ] 

Surenkumar Nihalani commented on HADOOP-9124:
-

So, the sole purpose of the timeout is to act as a sanity check? There is not 
much logic behind the specific timeout value, right? 
It just needs to be enough for the method to complete?

 SortedMapWritable violates contract of Map interface for equals() and 
 hashCode()
 

 Key: HADOOP-9124
 URL: https://issues.apache.org/jira/browse/HADOOP-9124
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.0.2-alpha
Reporter: Patrick Hunt
Priority: Minor
 Attachments: HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, 
 HADOOP-9124.patch, HADOOP-9124.patch


 This issue is similar to HADOOP-7153. It was found when using MRUnit - see 
 MRUNIT-158, specifically 
 https://issues.apache.org/jira/browse/MRUNIT-158?focusedCommentId=13501985&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13501985
 --
 o.a.h.io.SortedMapWritable implements the java.util.Map interface, however it 
 does not define an implementation of the equals() or hashCode() methods; 
 instead the default implementations in java.lang.Object are used.
 This violates the contract of the Map interface which defines different 
 behaviour for equals() and hashCode() than Object does. More information 
 here: 
 http://download.oracle.com/javase/6/docs/api/java/util/Map.html#equals(java.lang.Object)
 The practical consequence is that SortedMapWritables containing equal entries 
 cannot be compared properly. We were bitten by this when trying to write an 
 MRUnit test for a Mapper that outputs MapWritables; the MRUnit driver cannot 
 test the equality of the expected and actual MapWritable objects.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9124) SortedMapWritable violates contract of Map interface for equals() and hashCode()

2012-12-20 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537595#comment-13537595
 ] 

Karthik Kambatla commented on HADOOP-9124:
--

Yes. The timeout should be high enough that we don't hit it on expected runs of 
the test, even on slow systems, and low enough to bail out early if there is 
something terribly wrong going on with the test.

 SortedMapWritable violates contract of Map interface for equals() and 
 hashCode()
 

 Key: HADOOP-9124
 URL: https://issues.apache.org/jira/browse/HADOOP-9124
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.0.2-alpha
Reporter: Patrick Hunt
Priority: Minor
 Attachments: HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, 
 HADOOP-9124.patch, HADOOP-9124.patch


 This issue is similar to HADOOP-7153. It was found when using MRUnit - see 
 MRUNIT-158, specifically 
 https://issues.apache.org/jira/browse/MRUNIT-158?focusedCommentId=13501985&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13501985
 --
 o.a.h.io.SortedMapWritable implements the java.util.Map interface, however it 
 does not define an implementation of the equals() or hashCode() methods; 
 instead the default implementations in java.lang.Object are used.
 This violates the contract of the Map interface which defines different 
 behaviour for equals() and hashCode() than Object does. More information 
 here: 
 http://download.oracle.com/javase/6/docs/api/java/util/Map.html#equals(java.lang.Object)
 The practical consequence is that SortedMapWritables containing equal entries 
 cannot be compared properly. We were bitten by this when trying to write an 
 MRUnit test for a Mapper that outputs MapWritables; the MRUnit driver cannot 
 test the equality of the expected and actual MapWritable objects.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9124) SortedMapWritable violates contract of Map interface for equals() and hashCode()

2012-12-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537606#comment-13537606
 ] 

Hadoop QA commented on HADOOP-9124:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12562018/HADOOP-9124.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1916//console

This message is automatically generated.

 SortedMapWritable violates contract of Map interface for equals() and 
 hashCode()
 

 Key: HADOOP-9124
 URL: https://issues.apache.org/jira/browse/HADOOP-9124
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.0.2-alpha
Reporter: Patrick Hunt
Priority: Minor
 Attachments: HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, 
 HADOOP-9124.patch, HADOOP-9124.patch


 This issue is similar to HADOOP-7153. It was found when using MRUnit - see 
 MRUNIT-158, specifically 
 https://issues.apache.org/jira/browse/MRUNIT-158?focusedCommentId=13501985&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13501985
 --
 o.a.h.io.SortedMapWritable implements the java.util.Map interface, however it 
 does not define an implementation of the equals() or hashCode() methods; 
 instead the default implementations in java.lang.Object are used.
 This violates the contract of the Map interface which defines different 
 behaviour for equals() and hashCode() than Object does. More information 
 here: 
 http://download.oracle.com/javase/6/docs/api/java/util/Map.html#equals(java.lang.Object)
 The practical consequence is that SortedMapWritables containing equal entries 
 cannot be compared properly. We were bitten by this when trying to write an 
 MRUnit test for a Mapper that outputs MapWritables; the MRUnit driver cannot 
 test the equality of the expected and actual MapWritable objects.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9151) Include RPC error info in RpcResponseHeader instead of sending it separately

2012-12-20 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537613#comment-13537613
 ] 

Suresh Srinivas commented on HADOOP-9151:
-

bq. So, given that I already consider HDFS to be stable, and know people 
running this branch in production scenarios where a rolling upgrade is 
required, I would make the same argument that we should not break compatibility.

You may consider it to be stable. But the release is called 2.0.2-alpha. 
Before calling it GA, I was planning to run through a checklist to make sure 
our wire compatibility story is rock solid. That is what is happening with some 
of the jiras being filed now.

bq. If this issue were a case of a big performance improvement, or a security 
issue, or even a feature improvement, I would weigh it stronger
While these are compelling reasons for you, getting rid of Writable constructs 
in the serialized rpc request and response is equally important. Sanjay and I 
have spent quite a lot of time on this. Seeing it completed before GA is a good 
goal.

 Include RPC error info in RpcResponseHeader instead of sending it separately
 

 Key: HADOOP-9151
 URL: https://issues.apache.org/jira/browse/HADOOP-9151
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Attachments: HADOOP-9151.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9151) Include RPC error info in RpcResponseHeader instead of sending it separately

2012-12-20 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537620#comment-13537620
 ] 

Todd Lipcon commented on HADOOP-9151:
-

bq. getting rid of writable in serialized rpc request and response is equally 
important

It's not a custom writable. It's a 4-byte length-prefixed string (which happens 
to be implemented by a class called WritableUtils). This is super easy to 
serialize in any language. In fact, anyone implementing RPC already needs to 
implement length-prefixed strings in order to send the protos themselves.

I won't dispute that it's a wart. It's just not worth the pain to fix it in 
branch-2 when it doesn't buy us any real advantages.

bq. I and Sanjay have spent quite a lot of time on this

It's a 5kb patch...?

 Include RPC error info in RpcResponseHeader instead of sending it separately
 

 Key: HADOOP-9151
 URL: https://issues.apache.org/jira/browse/HADOOP-9151
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Attachments: HADOOP-9151.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9151) Include RPC error info in RpcResponseHeader instead of sending it separately

2012-12-20 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537623#comment-13537623
 ] 

Suresh Srinivas commented on HADOOP-9151:
-

bq. I won't debate it's a wart. It's just not worth the pain to fix it in 
branch-2 when it doesn't buy us any real advantages.
I have to disagree. I also want to make sure we avoid these kinds of 
discussions and -1s for incompatible changes just because a distribution 
decided to call it stable. We avoid incompatible changes once an Apache release 
makes it stable.

bq. It's a 5kb patch...?
Obviously not on this patch - on cleaning up the RPC layer.



 Include RPC error info in RpcResponseHeader instead of sending it separately
 

 Key: HADOOP-9151
 URL: https://issues.apache.org/jira/browse/HADOOP-9151
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Attachments: HADOOP-9151.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9151) Include RPC error info in RpcResponseHeader instead of sending it separately

2012-12-20 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537625#comment-13537625
 ] 

Todd Lipcon commented on HADOOP-9151:
-

Well, how do we resolve it? The two options I see are:

1) We follow the compatibility path I outlined above in a comment, or:
2) Since I've vetoed the patch for branch-2, bring it to the larger development 
community to discuss and perhaps vote.

 Include RPC error info in RpcResponseHeader instead of sending it separately
 

 Key: HADOOP-9151
 URL: https://issues.apache.org/jira/browse/HADOOP-9151
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Attachments: HADOOP-9151.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9151) Include RPC error info in RpcResponseHeader instead of sending it separately

2012-12-20 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537676#comment-13537676
 ] 

Binglin Chang commented on HADOOP-9151:
---

bq. Not sure I understand this comment. Isn't the current byte[] coming from a 
protobuf message already?
I mean that for every rpc there are already many new proto objects created 
(C++ can reuse proto objects, but Java can't...), so the overhead may already 
be much larger than the buffer copies; avoiding a buffer copy may not help much. 
bq. Can you please explain how does the current proposal prevent this?
There are 2 problems:
1. As you know, the protobuf library doesn't provide a non-blocking style 
method for reading a varint length prefix, so in order to handle rpc in a 
non-blocking style I had to write my own version of non-blocking varint reading 
(see the sketch below); in this scenario a 4-byte fixed-length prefix is more 
suitable for non-blocking io. 
2. With every length-prefixed packet, I need 2 more callbacks to read the 
length and the packet body; if each rpc call has only one request packet and 
one response packet, the code needs far fewer callbacks.
Since it's already a fact, I just want to keep the number of length prefixes as 
small as possible. As I have already finished the rpc code using the more 
complex approach, I'm honestly OK with either keeping compatibility or breaking 
it. 
But if we do want to break it, please consider the proposal more thoroughly and 
consider potential usage, to make it a high-quality protocol.

@Todd
When I wrote the async rpc client, the prefixed string (WritableUtils.readString) 
uses -1 (or MAX_UINT) to represent a null string; this is a little annoying, 
requiring some special-case code to handle writable strings.
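
As an illustration of point 1 above, an incremental varint32 reader for 
non-blocking io could look like this sketch (my own illustration, not Hadoop or 
protobuf API):
{code}
// Feed bytes one at a time as they arrive on a non-blocking channel;
// little-endian base-128 encoding, as used by protobuf length prefixes.
class Varint32Decoder {
  private int value = 0;
  private int shift = 0;

  /** Returns true once the varint is complete; the result is then in get(). */
  boolean feed(byte b) {
    value |= (b & 0x7f) << shift;
    shift += 7;
    return (b & 0x80) == 0;  // high bit clear marks the last byte
  }

  int get() {
    return value;
  }
}
{code}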


 Include RPC error info in RpcResponseHeader instead of sending it separately
 

 Key: HADOOP-9151
 URL: https://issues.apache.org/jira/browse/HADOOP-9151
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Attachments: HADOOP-9151.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9162) Add utility to check native library availability

2012-12-20 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537702#comment-13537702
 ] 

Binglin Chang commented on HADOOP-9162:
---

bq. Please add this functionality in a separate class instead of 
NativeCodeLoader.
bq. Please add this a command into src/main/bin/hadoop
OK, I will put it in a new class and add the command to bin/hadoop.

bq. Given that you are providing the availability status as output, exitcode of 
1 only when native code is not available seems strange. 
I'm assuming people can use this in deploy scripts (tools) to check native 
library availability; returning -1 is just a convenience for scripts. I can add 
an argument -a to check all native libraries (return 0 if all are available), 
with the default just checking libhadoop (return 0 if libhadoop is available).

bq. Because the snappy.jar file may not even be on the classpath, depending on 
the artifacts installed you need to
I was not aware that I imported org.xerial.snappy.Snappy; it must have been 
Eclipse that mistakenly added it. It is actually not useful at all, so I will 
remove it.

bq. It'll be more useful, if it can print the versions of the native libs as 
well. I've seen in the past it can pick up the wrong libs.
I don't know how to achieve that; it seems to be a problem to introspect 
System.loadLibrary("hadoop") to find out which file it actually loaded.
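
A minimal sketch of the basic mechanism such a checker could use (hypothetical; 
the real utility in the attached patch is NativeLibraryChecker, and the 
compression libraries are checked through their codec classes rather than a 
plain loadLibrary):
{code}
public class NativeLibraryCheckDemo {
  private static boolean canLoad(String name) {
    try {
      System.loadLibrary(name);   // e.g. libhadoop.so on Linux
      return true;
    } catch (UnsatisfiedLinkError e) {
      return false;
    }
  }

  public static void main(String[] args) {
    boolean hadoop = canLoad("hadoop");
    System.out.println("Native library checking:");
    System.out.println("hadoop: " + hadoop);
    // Non-zero exit code so deploy scripts can simply test $?.
    System.exit(hadoop ? 0 : 1);
  }
}
{code}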

 Add utility to check native library availability
 

 Key: HADOOP-9162
 URL: https://issues.apache.org/jira/browse/HADOOP-9162
 Project: Hadoop Common
  Issue Type: Improvement
  Components: native
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Minor
 Attachments: HADOOP-9162.patch


 Many times, after deploying hadoop or when troubleshooting, we need to check 
 whether the native library (along with the native compression libraries) can 
 work properly, and I just want to use one command to check that, like this:
 hadoop org.apache.hadoop.util.NativeCodeLoader
 and it shows:
 Native library loading test:
 hadoop: false
 zlib:   false
 snappy: false
 lz4:false

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9162) Add utility to check native library availability

2012-12-20 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HADOOP-9162:
--

Attachment: HADOOP-9162.v2.patch

New version addressing my previous comments.
As a single main-class utility, it's hard to add a unit test. But I have done 
some simple manual testing; here is the result.
{code}
decster:~/projects/hadoop-trunk/hadoop-dist/target/hadoop-3.0.0-SNAPSHOT 
bin/hadoop 
Usage: hadoop [--config confdir] COMMAND
   where COMMAND is one of:
  fs   run a generic filesystem user client
  version  print the version
  jar jarrun a jar file
  checknative [-a|-h]  check native hadoop and compression libraries 
availability
  distcp <srcurl> <desturl> copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop 
archive
  classpathprints the class path needed to get the
   Hadoop jar and the required libraries
  daemonlogget/set the log level for each daemon
 or
  CLASSNAMErun the class named CLASSNAME

Most commands print help when invoked w/o parameters.
decster:~/projects/hadoop-trunk/hadoop-dist/target/hadoop-3.0.0-SNAPSHOT 
bin/hadoop checknative -h
NativeLibraryChecker [-a|-h]
decster:~/projects/hadoop-trunk/hadoop-dist/target/hadoop-3.0.0-SNAPSHOT 
bin/hadoop checknative
12/12/21 15:07:33 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Native library checking:
hadoop: false
zlib:   false
snappy: false
lz4:false
decster:~/projects/hadoop-trunk/hadoop-dist/target/hadoop-3.0.0-SNAPSHOT echo 
$?
1
decster:~/projects/hadoop-trunk/hadoop-dist/target/hadoop-3.0.0-SNAPSHOT 
bin/hadoop checknative -a
12/12/21 15:07:58 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Native library checking:
hadoop: false
zlib:   false
snappy: false
lz4:false
decster:~/projects/hadoop-trunk/hadoop-dist/target/hadoop-3.0.0-SNAPSHOT echo 
$?
1
{code}

 Add utility to check native library availability
 

 Key: HADOOP-9162
 URL: https://issues.apache.org/jira/browse/HADOOP-9162
 Project: Hadoop Common
  Issue Type: Improvement
  Components: native
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Minor
 Attachments: HADOOP-9162.patch, HADOOP-9162.v2.patch


 Many times, after deploying hadoop or when troubleshooting, we need to check 
 whether the native library (along with the native compression libraries) can 
 work properly, and I just want to use one command to check that, like this:
 hadoop org.apache.hadoop.util.NativeCodeLoader
 and it shows:
 Native library loading test:
 hadoop: false
 zlib:   false
 snappy: false
 lz4:false

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9151) Include RPC error info in RpcResponseHeader instead of sending it separately

2012-12-20 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537734#comment-13537734
 ] 

Konstantin Boudnik commented on HADOOP-9151:


Wearing my hat of an application developer on top of Hadoop, I'd rather see the 
wrinkles ironed out as early as possible. The reason is that it is enough of a 
PITA to deal with all the changes in the serialization layer in the course of 
the transition to HDFS2. Coming back to it in 3.x, when incompatible changes 
will be introduced again - I don't think this is the message the Hadoop 
community needs to be sending around.

Wearing my Apache hat: it won't be really fair to punish Hadoop consumers just 
because Cloudera was too eager to come out with a CDH version based on an alpha 
version of ASF Hadoop. 

 Include RPC error info in RpcResponseHeader instead of sending it separately
 

 Key: HADOOP-9151
 URL: https://issues.apache.org/jira/browse/HADOOP-9151
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Attachments: HADOOP-9151.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9162) Add utility to check native library availability

2012-12-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537749#comment-13537749
 ] 

Hadoop QA commented on HADOOP-9162:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12562050/HADOOP-9162.v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1917//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1917//console

This message is automatically generated.

 Add utility to check native library availability
 

 Key: HADOOP-9162
 URL: https://issues.apache.org/jira/browse/HADOOP-9162
 Project: Hadoop Common
  Issue Type: Improvement
  Components: native
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Minor
 Attachments: HADOOP-9162.patch, HADOOP-9162.v2.patch


 Many times, after deploying hadoop or when troubleshooting, we need to check 
 whether the native library (along with the native compression libraries) can 
 work properly, and I just want to use one command to check that, like this:
 hadoop org.apache.hadoop.util.NativeCodeLoader
 and it shows:
 Native library loading test:
 hadoop: false
 zlib:   false
 snappy: false
 lz4:false

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira