[jira] [Commented] (HDFS-9004) Add upgrade domain to DatanodeInfo

2015-09-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14746554#comment-14746554
 ] 

Hadoop QA commented on HDFS-9004:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  20m 10s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   7m 50s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 13s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 34s | The applied patch generated  5 
new checkstyle issues (total was 125, now 128). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 36s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 24s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 12s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  84m 26s | Tests failed in hadoop-hdfs. |
| {color:red}-1{color} | hdfs tests |   0m 32s | Tests failed in 
hadoop-hdfs-client. |
| | | 136m  2s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestWriteRead |
|   | hadoop.hdfs.server.namenode.snapshot.TestCheckpointsWithSnapshots |
|   | hadoop.hdfs.server.namenode.ha.TestDNFencing |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyIsHot |
|   | hadoop.hdfs.server.namenode.TestListCorruptFileBlocks |
|   | hadoop.hdfs.TestHFlush |
|   | hadoop.security.TestPermission |
|   | hadoop.hdfs.TestParallelRead |
|   | hadoop.fs.viewfs.TestViewFsHdfs |
|   | hadoop.hdfs.TestBlockReaderLocalLegacy |
|   | hadoop.hdfs.server.namenode.TestAuditLogger |
|   | hadoop.hdfs.server.namenode.TestFSDirectory |
|   | hadoop.hdfs.server.namenode.TestXAttrConfigFlag |
|   | hadoop.hdfs.TestPread |
|   | hadoop.hdfs.TestMiniDFSCluster |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestWriteConfigurationToDFS |
|   | hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage |
|   | hadoop.hdfs.TestDatanodeRegistration |
|   | hadoop.hdfs.web.TestWebHDFSXAttr |
|   | hadoop.hdfs.server.datanode.TestStorageReport |
|   | hadoop.hdfs.TestDFSRollback |
|   | hadoop.hdfs.TestDataTransferKeepalive |
|   | hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
|   | hadoop.hdfs.TestDatanodeConfig |
|   | hadoop.hdfs.TestDFSFinalize |
|   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
|   | hadoop.fs.TestWebHdfsFileContextMainOperations |
|   | hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication |
|   | hadoop.hdfs.server.namenode.TestAllowFormat |
|   | hadoop.hdfs.server.namenode.TestMetadataVersionOutput |
|   | hadoop.hdfs.server.namenode.metrics.TestNNMetricFilesInGetListingOps |
|   | hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork |
|   | hadoop.fs.TestGlobPaths |
|   | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
|   | hadoop.hdfs.server.namenode.ha.TestHASafeMode |
|   | hadoop.hdfs.TestDFSShell |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.server.datanode.TestFsDatasetCacheRevocation |
|   | hadoop.hdfs.TestSeekBug |
|   | hadoop.fs.loadGenerator.TestLoadGenerator |
|   | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.fs.contract.hdfs.TestHDFSContractMkdir |
|   | hadoop.hdfs.server.namenode.TestStorageRestore |
|   | hadoop.hdfs.server.namenode.TestMetaSave |
|   | hadoop.hdfs.TestAbandonBlock |
|   | hadoop.hdfs.server.datanode.TestDnRespectsBlockReportSplitThreshold |
|   | hadoop.hdfs.TestGetFileChecksum |
|   | hadoop.hdfs.server.namenode.TestBlockUnderConstruction |
|   | 
hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForContentSummary |
|   | hadoop.hdfs.TestFileCreationDelete |
|   | hadoop.hdfs.TestReadWhileWriting |
|   | hadoop.fs.viewfs.TestViewFileSystemWithAcls |
|   | hadoop.fs.contract.hdfs.TestHDFSContractConcat |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.security.TestDelegationToken |
|   | hadoop.fs.TestSymlinkHdfsDisable |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
|   | hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory |
|   | hadoop.hdfs.server.mover.TestStorageMover |
|   | hadoop.hdfs.TestMissingBlocksAlert |

[jira] [Commented] (HDFS-9086) Rename dfs.datanode.stripedread.threshold.millis to dfs.datanode.stripedread.timeout.millis

2015-09-15 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14746600#comment-14746600
 ] 

Andrew Wang commented on HDFS-9086:
---

This one relates to HDFS-9088 as well, since we need to change the config key 
in the docs too if this is committed.
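
For reference, here is a hedged sketch of how such a rename is usually kept 
backward compatible, using the standard {{Configuration}} deprecation 
mechanism; this is only an illustration, not part of the patch:

{code}
import org.apache.hadoop.conf.Configuration;

// Hedged sketch, not the actual patch: register the old key as deprecated so
// configs that still use the "threshold" name resolve to the "timeout" name.
public class StripedReadKeyRename {
  public static void main(String[] args) {
    Configuration.addDeprecation(
        "dfs.datanode.stripedread.threshold.millis",  // old key
        "dfs.datanode.stripedread.timeout.millis");   // new key
    Configuration conf = new Configuration();
    conf.set("dfs.datanode.stripedread.threshold.millis", "5000");
    // The deprecated name transparently resolves to the new one.
    System.out.println(conf.get("dfs.datanode.stripedread.timeout.millis"));
  }
}
{code}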

> Rename dfs.datanode.stripedread.threshold.millis to 
> dfs.datanode.stripedread.timeout.millis
> ---
>
> Key: HDFS-9086
> URL: https://issues.apache.org/jira/browse/HDFS-9086
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-7285
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Trivial
>
> This config key is used to control the timeout for ECWorker reads; let's name 
> it with the standard term "timeout" rather than "threshold".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7779) Improve the HDFS Web UI browser to allow chowning / chgrp and setting replication

2015-09-15 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HDFS-7779:
---
Attachment: HDFS-7779.03.patch

Here's a rebased patch after HDFS-7986 (delete)

> Improve the HDFS Web UI browser to allow chowning / chgrp and setting 
> replication
> -
>
> Key: HDFS-7779
> URL: https://issues.apache.org/jira/browse/HDFS-7779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: Chmod.png, Chown.png, HDFS-7779.01.patch, 
> HDFS-7779.02.patch, HDFS-7779.03.patch
>
>
> This JIRA converts the owner, group and replication fields into 
> contenteditable fields which can be modified by the user from the browser 
> itself. It, too, uses WebHDFS to effect these changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7037) Using distcp to copy data from insecure to secure cluster via hftp doesn't work (branch-2 only)

2015-09-15 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14746406#comment-14746406
 ] 

Aaron T. Myers commented on HDFS-7037:
--

[~wheat9] - with regard to your comment that "the security concerns remain 
unaddressed," could you please respond to this point specifically:

bq. adding this capability to HFTP does not change the security semantics of 
Hadoop at all, since RPC and other interfaces used for remote access already 
support allowing configurable insecure fallback. This is not a security 
vulnerability. If it were, we should be removing the ability to configure 
insecure fallback at all in Hadoop. We're not doing that, because it was a 
deliberate choice to add that feature.

i.e., this change _is not changing the security level of Hadoop_, so I don't 
understand what security concerns you have with this change. This change is 
proposing to expand the fallback capability that already exists in other RPC 
interfaces to the HFTP interface.
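
For context, a hedged illustration of the existing RPC-side fallback knob being 
referred to; enabling it is an explicit, opt-in administrator choice, and the 
proposal here is to extend the same opt-in idea to HFTP:

{code}
import org.apache.hadoop.conf.Configuration;

// Hedged illustration: the RPC client already supports configurable, opt-in
// insecure fallback through this key.
public class FallbackExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setBoolean("ipc.client.fallback-to-simple-auth-allowed", true);
    System.out.println(
        conf.getBoolean("ipc.client.fallback-to-simple-auth-allowed", false));
  }
}
{code}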

> Using distcp to copy data from insecure to secure cluster via hftp doesn't 
> work  (branch-2 only)
> 
>
> Key: HDFS-7037
> URL: https://issues.apache.org/jira/browse/HDFS-7037
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security, tools
>Affects Versions: 2.6.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7037.001.patch
>
>
> This is a branch-2 only issue since hftp is only supported there. 
> Issuing "distcp hftp:// hdfs://" gave the 
> following failure exception:
> {code}
> 14/09/13 22:07:40 INFO tools.DelegationTokenFetcher: Error when dealing 
> remote token:
> java.io.IOException: Error when dealing remote token: Internal Server Error
>   at 
> org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.run(DelegationTokenFetcher.java:375)
>   at 
> org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.getDTfromRemote(DelegationTokenFetcher.java:238)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:252)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:247)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.getDelegationToken(HftpFileSystem.java:247)
>   at 
> org.apache.hadoop.hdfs.web.TokenAspect.ensureTokenInitialized(TokenAspect.java:140)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.addDelegationTokenParam(HftpFileSystem.java:337)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.openConnection(HftpFileSystem.java:324)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$LsParser.fetchList(HftpFileSystem.java:457)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$LsParser.getFileStatus(HftpFileSystem.java:472)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.getFileStatus(HftpFileSystem.java:501)
>   at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:248)
>   at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1623)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:81)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:342)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:154)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:121)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:390)
> 14/09/13 22:07:40 WARN security.UserGroupInformation: 
> PriviledgedActionException as:hadoopu...@xyz.com (auth:KERBEROS) 
> cause:java.io.IOException: Unable to obtain remote token
> 14/09/13 22:07:40 ERROR tools.DistCp: Exception encountered 
> java.io.IOException: Unable to obtain remote token
>   at 
> org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.getDTfromRemote(DelegationTokenFetcher.java:249)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:252)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:247)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.getDelegationToken(HftpFileSystem.java:247)
>   at 
> 

[jira] [Commented] (HDFS-9040) Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests to Coordinator)

2015-09-15 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14746493#comment-14746493
 ] 

Zhe Zhang commented on HDFS-9040:
-

bq. Maybe we can do the refactoring after merging EC feature into trunk? Before 
the merging we may want to minimize the changes related to the original writing 
pipeline.
Actually I have moved all write-pipeline-error-handling JIRAs, including this 
one, to follow-on tasks. Let me know if you think some should be moved back. My 
feeling is that this is an advanced topic rather than part of the basic feature.

> Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests 
> to Coordinator)
> ---
>
> Key: HDFS-9040
> URL: https://issues.apache.org/jira/browse/HDFS-9040
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
> Attachments: HDFS-9040-HDFS-7285.002.patch, 
> HDFS-9040-HDFS-7285.003.patch, HDFS-9040.00.patch, HDFS-9040.001.wip.patch, 
> HDFS-9040.02.bgstreamer.patch
>
>
> The general idea is to simplify error handling logic.
> Proposal 1:
> A BlockGroupDataStreamer to communicate with the NN to allocate/update blocks, 
> and StripedDataStreamers only have to stream blocks to DNs.
> Proposal 2:
> See below the 
> [comment|https://issues.apache.org/jira/browse/HDFS-9040?focusedCommentId=14741388&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14741388]
>  from [~jingzhao].
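
As a hedged reading of Proposal 1 above, the split might look like the sketch 
below; all names here are hypothetical and not taken from any attached patch:

{code}
import java.io.IOException;
import java.nio.ByteBuffer;

// Hedged sketch of Proposal 1; both interface names are hypothetical.
interface BlockGroupCoordinator {
  // Only this component talks to the NameNode (allocate/update block groups).
  void allocateBlockGroup() throws IOException;
  void updateBlockGroup() throws IOException;
}

interface StripedStreamerRole {
  // Streamers stay on the data path: they only push striped cells to DNs.
  void streamCell(ByteBuffer cell) throws IOException;
}
{code}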



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9085) Show renewer information in DelegationTokenIdentifier#toString

2015-09-15 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated HDFS-9085:

Attachment: HDFS-9085.001.patch

> Show renewer information in DelegationTokenIdentifier#toString
> --
>
> Key: HDFS-9085
> URL: https://issues.apache.org/jira/browse/HDFS-9085
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhihai xu
>Assignee: zhihai xu
>Priority: Trivial
> Attachments: HDFS-9085.001.patch
>
>
> Show renewer information in {{DelegationTokenIdentifier#toString}}. Currently 
> {{org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier}}
> does not show the renewer information. It would be very useful to have the 
> renewer information when debugging security-related issues.
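
A hedged sketch of the shape such a change could take; this is not the attached 
patch, and it assumes the accessors inherited from 
{{AbstractDelegationTokenIdentifier}}:

{code}
// Hedged sketch, not HDFS-9085.001.patch: surface the renewer in toString(),
// using accessors inherited from AbstractDelegationTokenIdentifier.
@Override
public String toString() {
  return getKind() + " token " + getSequenceNumber()
      + " for " + getUser().getShortUserName()
      + " with renewer " + getRenewer();
}
{code}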



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7986) Allow files / directories to be deleted from the NameNode UI

2015-09-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14746722#comment-14746722
 ] 

Hudson commented on HDFS-7986:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #393 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/393/])
HDFS-7986. Allow files / directories to be deleted from the NameNode UI. 
Contributed by Ravi Prakash. (wheat9: rev 
6c52be78a0c6d6d86444933c6b0734dfc2477c32)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/hadoop.css


> Allow files / directories to be deleted from the NameNode UI
> 
>
> Key: HDFS-7986
> URL: https://issues.apache.org/jira/browse/HDFS-7986
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Fix For: 2.8.0
>
> Attachments: HDFS-7986.01.patch, HDFS-7986.02.patch
>
>
> Users should be able to delete files or directories using the Namenode UI.
> I'm thinking there ought to be a confirmation dialog. For directories, 
> recursive should be set to true. Initially there should be no option to 
> skipTrash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8968) New benchmark throughput tool for striping erasure coding

2015-09-15 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14746747#comment-14746747
 ] 

Kai Zheng commented on HDFS-8968:
-

Thanks Rui for the update. LGTM. +1

[~zhz], I'm going to commit this to the HDFS-7285 branch today. It's entirely 
new code. Please let me know if this interrupts the merging work you're doing 
too much. Thanks.

> New benchmark throughput tool for striping erasure coding
> -
>
> Key: HDFS-8968
> URL: https://issues.apache.org/jira/browse/HDFS-8968
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Rui Li
> Attachments: HDFS-8968-HDFS-7285.1.patch, HDFS-8968-HDFS-7285.2.patch
>
>
> We need a new benchmark tool to measure the throughput of client writing and 
> reading, considering the following cases and factors:
> * 3-replica or striping;
> * write or read, stateful read or positional read;
> * which erasure coder;
> * striping cell size;
> * concurrent readers/writers using processes or threads.
> The tool should be easy to use and should avoid unnecessary impact from the 
> local environment, such as local disk I/O.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9087) Add some jitter to DataNode.checkDiskErrorThread

2015-09-15 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HDFS-9087:

Attachment: HDFS-9087-v0.patch

Add 5 seconds of jitter. This has the added benefit of adding more time 
between disk checker runs. Disks are almost never going to fail sequentially, 
one every five seconds.
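
A minimal sketch of the idea, with hypothetical names rather than the contents 
of HDFS-9087-v0.patch:

{code}
import java.util.concurrent.ThreadLocalRandom;

// Hedged sketch: randomize the gap between disk checks so DataNodes started
// together across a cluster don't all scan at the same instant.
public class DiskCheckJitter {
  static long nextDelayMs(long baseMs, long maxJitterMs) {
    return baseMs + ThreadLocalRandom.current().nextLong(maxJitterMs + 1);
  }

  public static void main(String[] args) {
    // e.g. a 60-second base interval with up to 5 seconds of jitter
    System.out.println(nextDelayMs(60_000L, 5_000L));
  }
}
{code}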

> Add some jitter to DataNode.checkDiskErrorThread
> 
>
> Key: HDFS-9087
> URL: https://issues.apache.org/jira/browse/HDFS-9087
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HDFS-9087-v0.patch
>
>
> If all datanodes are started across a cluster at the same time (or errors in 
> the network cause IOExceptions), there can be storms where lots of datanodes 
> check their disks at exactly the same time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7995) Implement chmod in the HDFS Web UI

2015-09-15 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14746806#comment-14746806
 ] 

Haohui Mai commented on HDFS-7995:
--

There is no need to give every single checkbox an id, and a modal dialog can be 
annoying to use considering that (1) there is not much information to show, and 
(2) the amount of mouse movement required can be huge.

I uploaded a patch to simplify the code and replace the modal dialog with a 
popover.

> Implement chmod in the HDFS Web UI
> --
>
> Key: HDFS-7995
> URL: https://issues.apache.org/jira/browse/HDFS-7995
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7995.005.patch, HDFS-7995.01.patch, 
> HDFS-7995.02.patch, HDFS-7995.03.patch, HDFS-7995.04.patch
>
>
> We should let users change the permissions of files and directories using the 
> HDFS Web UI



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8953) DataNode Metrics logging

2015-09-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14746536#comment-14746536
 ] 

Hudson commented on HDFS-8953:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2314 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2314/])
HDFS-8953. DataNode Metrics logging (Contributed by Kanaka Kumar Avvaru) 
(vinayakumarb: rev ce69c9b54c642cfbe789fc661cfc7dcbb07b4ac5)
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMetricsLogger.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/MetricsLoggerTask.java
* hadoop-common-project/hadoop-common/src/main/conf/log4j.properties
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java


> DataNode Metrics logging
> 
>
> Key: HDFS-8953
> URL: https://issues.apache.org/jira/browse/HDFS-8953
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kanaka Kumar Avvaru
>Assignee: Kanaka Kumar Avvaru
> Fix For: 2.8.0
>
> Attachments: HDFS-8953-01.patch, HDFS-8953-02.patch, 
> HDFS-8953-03.patch, HDFS-8953-04.patch, HDFS-8953-05.patch, HDFS-8953-06.patch
>
>
> HDFS-8880 added metrics logging at the NameNode. Similarly, this JIRA is to 
> add a separate logger for metrics at the DataNode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9082) Change the log level in WebHdfsFileSystem.initialize() from INFO to DEBUG

2015-09-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14746655#comment-14746655
 ] 

Hudson commented on HDFS-9082:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2315 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2315/])
HDFS-9082. Change the log level in WebHdfsFileSystem.initialize() from INFO to 
DEBUG. Contributed by Santhosh Nayak. (cnauroth: rev 
559c09dc0eba28666c4b16435512cc2d35e31683)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java


> Change the log level in WebHdfsFileSystem.initialize() from INFO to DEBUG
> -
>
> Key: HDFS-9082
> URL: https://issues.apache.org/jira/browse/HDFS-9082
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9082.1.patch
>
>
> The log info statements shown below show up in the stdout of {{FileSystem}} 
> operations on {{WebHdfsFileSystem}}. So, the proposal is to change the log 
> level from INFO to DEBUG.
>  
> {code}
>  if(isOAuth) {
>   LOG.info("Enabling OAuth2 in WebHDFS");
>   connectionFactory = URLConnectionFactory
>   .newOAuth2URLConnectionFactory(conf);
> } else {
>   LOG.info("Not enabling OAuth2 in WebHDFS");
>   connectionFactory = URLConnectionFactory
>   .newDefaultURLConnectionFactory(conf);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7986) Allow files / directories to be deleted from the NameNode UI

2015-09-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14746654#comment-14746654
 ] 

Hudson commented on HDFS-7986:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2315 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2315/])
HDFS-7986. Allow files / directories to be deleted from the NameNode UI. 
Contributed by Ravi Prakash. (wheat9: rev 
6c52be78a0c6d6d86444933c6b0734dfc2477c32)
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/hadoop.css
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js


> Allow files / directories to be deleted from the NameNode UI
> 
>
> Key: HDFS-7986
> URL: https://issues.apache.org/jira/browse/HDFS-7986
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Fix For: 2.8.0
>
> Attachments: HDFS-7986.01.patch, HDFS-7986.02.patch
>
>
> Users should be able to delete files or directories using the Namenode UI.
> I'm thinking there ought to be a confirmation dialog. For directories, 
> recursive should be set to true. Initially there should be no option to 
> skipTrash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9082) Change the log level in WebHdfsFileSystem.initialize() from INFO to DEBUG

2015-09-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14746657#comment-14746657
 ] 

Hudson commented on HDFS-9082:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2339 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2339/])
HDFS-9082. Change the log level in WebHdfsFileSystem.initialize() from INFO to 
DEBUG. Contributed by Santhosh Nayak. (cnauroth: rev 
559c09dc0eba28666c4b16435512cc2d35e31683)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java


> Change the log level in WebHdfsFileSystem.initialize() from INFO to DEBUG
> -
>
> Key: HDFS-9082
> URL: https://issues.apache.org/jira/browse/HDFS-9082
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9082.1.patch
>
>
> The log info statements shown below show up in the stdout of {{FileSystem}} 
> operations on {{WebHdfsFileSystem}}. So, the proposal is to change the log 
> level from INFO to DEBUG.
>  
> {code}
>  if(isOAuth) {
>   LOG.info("Enabling OAuth2 in WebHDFS");
>   connectionFactory = URLConnectionFactory
>   .newOAuth2URLConnectionFactory(conf);
> } else {
>   LOG.info("Not enabling OAuth2 in WebHDFS");
>   connectionFactory = URLConnectionFactory
>   .newDefaultURLConnectionFactory(conf);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8632) Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes

2015-09-15 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14746691#comment-14746691
 ] 

Rakesh R commented on HDFS-8632:


Thanks [~zhz]. I've rebased the patch on the latest branch. Please take a look 
at it.

The following are the {{Public}} interfaces; all others are considered 
{{Private}}:
1) ECSchema
2) ErasureCodingPolicy


> Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes
> --
>
> Key: HDFS-8632
> URL: https://issues.apache.org/jira/browse/HDFS-8632
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8632-HDFS-7285-00.patch, 
> HDFS-8632-HDFS-7285-01.patch, HDFS-8632-HDFS-7285-02.patch
>
>
> I've noticed that some of the erasure coding classes are missing the 
> {{@InterfaceAudience}} annotation. It would be good to identify these classes 
> and add the proper annotation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7995) Implement chmod in the HDFS Web UI

2015-09-15 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-7995:
-
Attachment: HDFS-7995.005.patch

> Implement chmod in the HDFS Web UI
> --
>
> Key: HDFS-7995
> URL: https://issues.apache.org/jira/browse/HDFS-7995
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7995.005.patch, HDFS-7995.01.patch, 
> HDFS-7995.02.patch, HDFS-7995.03.patch, HDFS-7995.04.patch
>
>
> We should let users change the permissions of files and directories using the 
> HDFS Web UI



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9040) Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests to Coordinator)

2015-09-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14746779#comment-14746779
 ] 

Hadoop QA commented on HDFS-9040:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  15m 53s | Findbugs (version ) appears to 
be broken on HDFS-7285. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 9 new or modified test files. |
| {color:green}+1{color} | javac |   7m 42s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 54s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 16s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 59s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m 32s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   4m 36s | The patch appears to introduce 8 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m  8s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 134m 57s | Tests failed in hadoop-hdfs. |
| {color:red}-1{color} | hdfs tests |   0m 19s | Tests failed in 
hadoop-hdfs-client. |
| | | 180m 32s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| FindBugs | module:hadoop-hdfs-client |
| Failed unit tests | hadoop.hdfs.TestFileAppend4 |
|   | hadoop.hdfs.TestRead |
|   | hadoop.hdfs.server.datanode.TestRefreshNamenodes |
|   | hadoop.hdfs.TestHdfsAdmin |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.TestBlockHasMultipleReplicasOnSameDN |
|   | hadoop.hdfs.TestClientReportBadBlock |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.TestAppendSnapshotTruncate |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles |
|   | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement |
|   | hadoop.hdfs.TestSafeModeWithStripedFile |
|   | hadoop.hdfs.TestFileAppendRestart |
|   | hadoop.cli.TestErasureCodingCLI |
|   | hadoop.hdfs.protocol.TestBlockListAsLongs |
|   | hadoop.hdfs.TestFileStatusWithECPolicy |
|   | hadoop.TestRefreshCallQueue |
|   | hadoop.hdfs.TestListFilesInDFS |
|   | hadoop.hdfs.server.datanode.TestDnRespectsBlockReportSplitThreshold |
|   | hadoop.hdfs.TestMiniDFSCluster |
|   | hadoop.hdfs.server.mover.TestMover |
|   | hadoop.security.TestPermissionSymlinks |
|   | hadoop.hdfs.TestDFSRollback |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica |
|   | hadoop.hdfs.TestFileConcurrentReader |
|   | hadoop.hdfs.TestFileAppend2 |
|   | hadoop.hdfs.server.datanode.TestDataNodeExit |
|   | hadoop.hdfs.server.blockmanagement.TestSequentialBlockGroupId |
|   | hadoop.hdfs.TestGetFileChecksum |
|   | hadoop.security.TestRefreshUserMappings |
|   | hadoop.hdfs.TestRecoverStripedFile |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
|   | hadoop.hdfs.crypto.TestHdfsCryptoStreams |
|   | hadoop.hdfs.server.blockmanagement.TestAvailableSpaceBlockPlacementPolicy 
|
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestDatanodeRestart |
|   | hadoop.hdfs.server.datanode.TestTriggerBlockReport |
|   | hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks |
|   | hadoop.hdfs.server.datanode.TestIncrementalBlockReports |
|   | hadoop.hdfs.TestBlockStoragePolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.datanode.TestReadOnlySharedStorage |
|   | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.TestDFSUpgrade |
|   | hadoop.hdfs.server.blockmanagement.TestBlockReportRateLimiting |
|   | hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages |
|   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
|   | hadoop.hdfs.server.datanode.TestIncrementalBrVariations |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.cli.TestXAttrCLI |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.TestDFSShell |
|   | hadoop.hdfs.client.impl.TestLeaseRenewer |
|   | hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer |

[jira] [Updated] (HDFS-6955) DN should reserve disk space for a full block when creating tmp files

2015-09-15 Thread Kanaka Kumar Avvaru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kanaka Kumar Avvaru updated HDFS-6955:
--
Status: Patch Available  (was: Open)

> DN should reserve disk space for a full block when creating tmp files
> -
>
> Key: HDFS-6955
> URL: https://issues.apache.org/jira/browse/HDFS-6955
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.5.0
>Reporter: Arpit Agarwal
>Assignee: Kanaka Kumar Avvaru
> Attachments: HDFS-6955-01.patch, HDFS-6955-02.patch, 
> HDFS-6955-03.patch, HDFS-6955-04.patch, HDFS-6955-05.patch, 
> HDFS-6955-06.patch, HDFS-6955-07.patch, HDFS-6955-08.patch, HDFS-6955-09.patch
>
>
> HDFS-6898 is introducing disk space reservation for RBW files to avoid 
> running out of disk space midway through block creation.
> This JIRA is to introduce a similar reservation for tmp files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6955) DN should reserve disk space for a full block when creating tmp files

2015-09-15 Thread Kanaka Kumar Avvaru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kanaka Kumar Avvaru updated HDFS-6955:
--
Status: Open  (was: Patch Available)

> DN should reserve disk space for a full block when creating tmp files
> -
>
> Key: HDFS-6955
> URL: https://issues.apache.org/jira/browse/HDFS-6955
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.5.0
>Reporter: Arpit Agarwal
>Assignee: Kanaka Kumar Avvaru
> Attachments: HDFS-6955-01.patch, HDFS-6955-02.patch, 
> HDFS-6955-03.patch, HDFS-6955-04.patch, HDFS-6955-05.patch, 
> HDFS-6955-06.patch, HDFS-6955-07.patch, HDFS-6955-08.patch, HDFS-6955-09.patch
>
>
> HDFS-6898 is introducing disk space reservation for RBW files to avoid 
> running out of disk space midway through block creation.
> This JIRA is to introduce a similar reservation for tmp files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8953) DataNode Metrics logging

2015-09-15 Thread Kanaka Kumar Avvaru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14746923#comment-14746923
 ] 

Kanaka Kumar Avvaru commented on HDFS-8953:
---

Thanks [~arpitagarwal], [~steve_l] for reviews. 
Thanks [~vinayrpet] for reviews and commit.

> DataNode Metrics logging
> 
>
> Key: HDFS-8953
> URL: https://issues.apache.org/jira/browse/HDFS-8953
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kanaka Kumar Avvaru
>Assignee: Kanaka Kumar Avvaru
> Fix For: 2.8.0
>
> Attachments: HDFS-8953-01.patch, HDFS-8953-02.patch, 
> HDFS-8953-03.patch, HDFS-8953-04.patch, HDFS-8953-05.patch, HDFS-8953-06.patch
>
>
> HDFS-8880 added metrics logging at the NameNode. Similarly, this JIRA is to 
> add a separate logger for metrics at the DataNode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7986) Allow files / directories to be deleted from the NameNode UI

2015-09-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14746827#comment-14746827
 ] 

Hudson commented on HDFS-7986:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2340 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2340/])
HDFS-7986. Allow files / directories to be deleted from the NameNode UI. 
Contributed by Ravi Prakash. (wheat9: rev 
6c52be78a0c6d6d86444933c6b0734dfc2477c32)
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/hadoop.css


> Allow files / directories to be deleted from the NameNode UI
> 
>
> Key: HDFS-7986
> URL: https://issues.apache.org/jira/browse/HDFS-7986
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Fix For: 2.8.0
>
> Attachments: HDFS-7986.01.patch, HDFS-7986.02.patch
>
>
> Users should be able to delete files or directories using the Namenode UI.
> I'm thinking there ought to be a confirmation dialog. For directories, 
> recursive should be set to true. Initially there should be no option to 
> skipTrash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-9083) Replication violates block placement policy.

2015-09-15 Thread Jagadesh Kiran N (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jagadesh Kiran N reassigned HDFS-9083:
--

Assignee: Jagadesh Kiran N

> Replication violates block placement policy.
> 
>
> Key: HDFS-9083
> URL: https://issues.apache.org/jira/browse/HDFS-9083
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS, namenode
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Jagadesh Kiran N
>
> Recently we have been noticing many cases in which all the replicas of a 
> block reside on the same rack.
> During block creation, the block placement policy was honored.
> But after node failure events in some specific sequence, the block ends up in 
> such a state.
> On investigating further, I found that BlockManager#blockHasEnoughRacks 
> depends on the config (net.topology.script.file.name)
> {noformat}
>  if (!this.shouldCheckForEnoughRacks) {
>   return true;
> }
> {noformat}
> We specify a DNSToSwitchMapping implementation (our own custom implementation) 
> via net.topology.node.switch.mapping.impl and no longer use the 
> net.topology.script.file.name config.
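
For illustration, a hedged sketch of the configuration shape described above; 
com.example.MyCustomMapping is a hypothetical class name:

{code}
import org.apache.hadoop.conf.Configuration;

// Hedged sketch: rack resolution is supplied by a mapping class, so
// net.topology.script.file.name stays unset and the script-based
// shouldCheckForEnoughRacks condition is effectively switched off.
public class TopologyConfigExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("net.topology.node.switch.mapping.impl",
        "com.example.MyCustomMapping");  // hypothetical implementation
    // net.topology.script.file.name is intentionally left unset
    System.out.println(conf.get("net.topology.node.switch.mapping.impl"));
  }
}
{code}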



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-4224) The dncp_block_verification log can be compressed

2015-09-15 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HDFS-4224.
---
Resolution: Invalid

Invalid after HDFS-7430

> The dncp_block_verification log can be compressed
> -
>
> Key: HDFS-4224
> URL: https://issues.apache.org/jira/browse/HDFS-4224
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.0.0-alpha
>Reporter: Harsh J
>Priority: Minor
>
> On some systems, I noticed that when the scanner runs, the 
> dncp_block_verification.log.curr file under the block pool gets quite large 
> (several GBs). Although this is rolled away, we could also configure 
> compression for it (a codec that works without native libraries would be a 
> good default) and save on I/O and space.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8968) New benchmark throughput tool for striping erasure coding

2015-09-15 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14746787#comment-14746787
 ] 

Kai Zheng commented on HDFS-8968:
-

Yes, the purpose of this new tool is to benchmark the throughput of HDFS client 
reads and writes, for both replica and striping modes. Considering we may make 
the tool more erasure-coding specific in the future, I thought it's OK not to 
use a general name. In that case, it's better to use 
{{ErasureCodingBenchmarkThroughput}} for consistency.

> New benchmark throughput tool for striping erasure coding
> -
>
> Key: HDFS-8968
> URL: https://issues.apache.org/jira/browse/HDFS-8968
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Rui Li
> Attachments: HDFS-8968-HDFS-7285.1.patch, HDFS-8968-HDFS-7285.2.patch
>
>
> We need a new benchmark tool to measure the throughput of client writing and 
> reading, considering the following cases and factors:
> * 3-replica or striping;
> * write or read, stateful read or positional read;
> * which erasure coder;
> * striping cell size;
> * concurrent readers/writers using processes or threads.
> The tool should be easy to use and should avoid unnecessary impact from the 
> local environment, such as local disk I/O.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9087) Add some jitter to DataNode.checkDiskErrorThread

2015-09-15 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HDFS-9087:

Status: Patch Available  (was: Open)

> Add some jitter to DataNode.checkDiskErrorThread
> 
>
> Key: HDFS-9087
> URL: https://issues.apache.org/jira/browse/HDFS-9087
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HDFS-9087-v0.patch
>
>
> If all datanodes are started across a cluster at the same time (or errors in 
> the network cause IOExceptions), there can be storms where lots of datanodes 
> check their disks at exactly the same time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7995) Implement chmod in the HDFS Web UI

2015-09-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14746862#comment-14746862
 ] 

Hadoop QA commented on HDFS-7995:
-

\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   0m  0s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | release audit |   0m 15s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| | |   0m 19s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12756155/HDFS-7995.005.patch |
| Optional Tests |  |
| git revision | trunk / 2ffe2db |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12469/console |


This message was automatically generated.

> Implement chmod in the HDFS Web UI
> --
>
> Key: HDFS-7995
> URL: https://issues.apache.org/jira/browse/HDFS-7995
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7995.005.patch, HDFS-7995.01.patch, 
> HDFS-7995.02.patch, HDFS-7995.03.patch, HDFS-7995.04.patch
>
>
> We should let users change the permissions of files and directories using the 
> HDFS Web UI



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8920) Erasure Coding: when recovering lost blocks, logs can be too verbose and hurt performance

2015-09-15 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14746906#comment-14746906
 ] 

Rui Li commented on HDFS-8920:
--

I tried several of the failed tests locally and cannot reproduce the failures. 
Also, both {{TestDFSInputStream}} and {{TestDFSStripedInputStream}} pass on my 
side, so I suppose the failures here are not related.

Since the patch only changes how we print logs, I didn't add new tests for it. 
I did manually perform some tests on a cluster, as I mentioned above.

> Erasure Coding: when recovering lost blocks, logs can be too verbose and hurt 
> performance
> -
>
> Key: HDFS-8920
> URL: https://issues.apache.org/jira/browse/HDFS-8920
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HDFS-8920-HDFS-7285.1.patch
>
>
> When we test reading data with datanodes killed, 
> {{DFSInputStream::getBestNodeDNAddrPair}} becomes a hot spot method and 
> effectively blocks the client JVM. This log seems too verbose:
> {code}
> if (chosenNode == null) {
>   DFSClient.LOG.warn("No live nodes contain block " + block.getBlock() +
>   " after checking nodes = " + Arrays.toString(nodes) +
>   ", ignoredNodes = " + ignoredNodes);
>   return null;
> }
> {code}
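
A hedged sketch of the usual remedy, not necessarily what the attached patch 
does: demote the message and build it only when the level is enabled, so the 
string concatenation and Arrays.toString() stop dominating the read path:

{code}
// Hedged sketch: guard the verbose message so its arguments are only built
// when the log level is actually enabled.
if (chosenNode == null) {
  if (DFSClient.LOG.isDebugEnabled()) {
    DFSClient.LOG.debug("No live nodes contain block " + block.getBlock()
        + " after checking nodes = " + Arrays.toString(nodes)
        + ", ignoredNodes = " + ignoredNodes);
  }
  return null;
}
{code}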



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9082) Change the log level in WebHdfsFileSystem.initialize() from INFO to DEBUG

2015-09-15 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-9082:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

No tests are required, because the patch only changes the logging level of some 
log statements. I have committed this to trunk and branch-2. [~snayak], thank 
you for contributing the patch.

> Change the log level in WebHdfsFileSystem.initialize() from INFO to DEBUG
> -
>
> Key: HDFS-9082
> URL: https://issues.apache.org/jira/browse/HDFS-9082
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9082.1.patch
>
>
> The log info statements shown below show up in the stdout of {{FileSystem}} 
> operations on {{WebHdfsFileSystem}}. So, the proposal is to change the log 
> level from INFO to DEBUG.
>  
> {code}
>  if(isOAuth) {
>   LOG.info("Enabling OAuth2 in WebHDFS");
>   connectionFactory = URLConnectionFactory
>   .newOAuth2URLConnectionFactory(conf);
> } else {
>   LOG.info("Not enabling OAuth2 in WebHDFS");
>   connectionFactory = URLConnectionFactory
>   .newDefaultURLConnectionFactory(conf);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9086) Rename dfs.datanode.stripedread.threshold.millis to dfs.datanode.stripedread.timeout.millis

2015-09-15 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-9086:
-

 Summary: Rename dfs.datanode.stripedread.threshold.millis to 
dfs.datanode.stripedread.timeout.millis
 Key: HDFS-9086
 URL: https://issues.apache.org/jira/browse/HDFS-9086
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-7285
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Trivial


This config key is used to control the timeout for ECWorker reads; let's name 
it with the standard term "timeout" rather than "threshold".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9052) deleteSnapshot runs into AssertionError

2015-09-15 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14746534#comment-14746534
 ] 

Jing Zhao commented on HDFS-9052:
-

Hi Alex, so the issue here is not about {{computeDiffBetweenSnapshots}} or 
deleting a snapshot. These are just possible cases that can expose the 
corrupted snapshot diff list. Let me try to provide more context about 
snapshot diff lists. In our current snapshot implementation, we record newly 
created files in the created list and deleted files in the deleted list. So 
let's suppose we take a snapshot s1 and then delete the file 
"useraction.log.crypto". Since the file existed before snapshot s1 was 
created, we have:
{noformat}
s1: deleted list: [INodeFile_1(useraction.log.crypto)]
{noformat}
Now we take another snapshot s2, and then create a new log file with the same 
name. s2's diff list looks like:
{noformat}
s2: created list: [INodeFile_2(useraction.log.crypto)]
{noformat}
We then take snapshot s3, and delete the log file. Now we have:
{noformat}
s1: created list:[], deleted list: [INodeFile_1(useraction.log.crypto)]
s2: created list: [INodeFile_2(useraction.log.crypto)], deleted list: []
s3: created list: [], deleted list: [INodeFile_2(useraction.log.crypto)]
{noformat}
Let's say we now delete s3. The diff lists of s2 and s3 should be combined, and 
because INodeFile_2(useraction.log.crypto) was created after taking s2, the 
correct diff lists should look like:
{noformat}
s1: created list: [], deleted list: [INodeFile_1(useraction.log.crypto)]
s2: created list: [], deleted list: []
{noformat}
But before HDFS-6908 we had a bug which caused 
INodeFile_2(useraction.log.crypto) to stay in s2's deleted list. Then we 
have:
{noformat}
s1: deleted list: [INodeFile_1(useraction.log.crypto)]
s2: deleted list: [INodeFile_2(useraction.log.crypto)]
{noformat}
Now we have a corrupted diff list state. Whether we compute the snapshot diff 
between s1 and the current state or delete snapshot s2, any case where we 
have to combine s1 and s2 will hit the AssertionError.

Because the corruption has been persisted in your fsimage, to fix the issue you 
may have to use a patched jar that removes INodeFile_2(useraction.log.crypto) 
from s2's deleted list when loading the fsimage.
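
To make the combine step concrete, here is a minimal self-contained model of 
the correct behavior; the names are hypothetical and this is not the HDFS 
implementation:

{code}
import java.util.LinkedHashSet;
import java.util.Set;

// Hedged model: merging the diff of a deleted snapshot into the previous
// snapshot's diff. A file created after the earlier snapshot and deleted
// later cancels out; it must NOT land in the earlier deleted list (which is
// exactly what the pre-HDFS-6908 bug got wrong).
public class SnapshotDiffModel {
  static class Diff {
    final Set<String> created = new LinkedHashSet<String>();
    final Set<String> deleted = new LinkedHashSet<String>();
  }

  static void combine(Diff earlier, Diff later) {
    earlier.created.addAll(later.created);
    for (String name : later.deleted) {
      if (!earlier.created.remove(name)) {  // not a cancelled create/delete pair
        earlier.deleted.add(name);
      }
    }
  }

  public static void main(String[] args) {
    Diff s2 = new Diff();
    Diff s3 = new Diff();
    s2.created.add("INodeFile_2(useraction.log.crypto)");
    s3.deleted.add("INodeFile_2(useraction.log.crypto)");
    combine(s2, s3);
    // Correct result: both lists empty, the create/delete pair cancels out.
    System.out.println("created=" + s2.created + " deleted=" + s2.deleted);
  }
}
{code}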


> deleteSnapshot runs into AssertionError
> ---
>
> Key: HDFS-9052
> URL: https://issues.apache.org/jira/browse/HDFS-9052
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Alex Ivanov
>
> CDH 5.0.5 upgraded from CDH 5.0.0 (Hadoop 2.3)
> Upon deleting a snapshot, we run into the following assertion error. The 
> scenario is as follows:
> 1. We have a program that deletes snapshots in reverse chronological order.
> 2. The program deletes a couple of hundred snapshots successfully but runs 
> into the following exception:
> java.lang.AssertionError: Element already exists: 
> element=useraction.log.crypto, DELETED=[useraction.log.crypto]
> 3. There seems to be an issue with that snapshot, which causes a file that 
> normally gets overwritten in every snapshot to be added to the SnapshotDiff 
> delete queue twice.
> 4. Once the deleteSnapshot is run on the problematic snapshot, if the 
> Namenode is restarted, it cannot be started again until the transaction is 
> removed from the EditLog.
> 5. Sometimes the bad snapshot can be deleted but the prior snapshot seems to 
> "inherit" the same issue.
> 6. The error below is from Namenode starting when the DELETE_SNAPSHOT 
> transaction is replayed from the EditLog.
> 2015-09-01 22:59:59,140 INFO  [IPC Server handler 0 on 8022] BlockStateChange 
> (BlockManager.java:logAddStoredBlock(2342)) - BLOCK* addStoredBlock: blockMap 
> updated: 10.52.209.77:1004 is added to 
> blk_1080833995_7093259{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-16de62e5-f6e2-4ea7-aad9-f8567bded7d7:NORMAL|FINALIZED]]}
>  size 0
> 2015-09-01 22:59:59,140 INFO  [IPC Server handler 0 on 8022] BlockStateChange 
> (BlockManager.java:logAddStoredBlock(2342)) - BLOCK* addStoredBlock: blockMap 
> updated: 10.52.209.77:1004 is added to 
> blk_1080833996_7093260{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-1def2b07-d87f-49dd-b14f-ef230342088d:NORMAL|FINALIZED]]}
>  size 0
> 2015-09-01 22:59:59,141 ERROR [IPC Server handler 0 on 8022] 
> namenode.FSEditLogLoader (FSEditLogLoader.java:loadEditRecords(232)) - 
> Encountered exception on operation DeleteSnapshotOp 
> [snapshotRoot=/data/tenants/pdx-svt.baseline84/wddata, 
> snapshotName=s2015022614_maintainer_soft_del, 
> RpcClientId=7942c957-a7cf-44c1-880d-6eea690e1b19, RpcCallId=1]
> 2015-09-01 22:59:59,141 ERROR [IPC Server handler 0 on 8022] 
> namenode.FSEditLogLoader (FSEditLogLoader.java:loadEditRecords(232)) - 
> Encountered exception 

[jira] [Resolved] (HDFS-7492) If multiple threads call FsVolumeList#checkDirs at the same time, we should only do checkDirs once and give the results to all waiting threads

2015-09-15 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark resolved HDFS-7492.
-
Resolution: Duplicate

Fixed in HDFS-7531. Since there are no more locks on FsVolumeList, there is no 
contention.

> If multiple threads call FsVolumeList#checkDirs at the same time, we should 
> only do checkDirs once and give the results to all waiting threads
> --
>
> Key: HDFS-7492
> URL: https://issues.apache.org/jira/browse/HDFS-7492
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Colin Patrick McCabe
>Assignee: Elliott Clark
>Priority: Minor
>
> checkDirs is called when we encounter certain I/O errors.  It's rare to get 
> just a single I/O error... normally you start getting many errors when a disk 
> is going bad.  For this reason, we shouldn't start a new checkDirs scan for 
> each error.  Instead, if multiple threads call FsVolumeList#checkDirs at 
> around the same time, we should only do checkDirs once and give the results 
> to all the waiting threads.
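
A hedged sketch of the coalescing idea described above, with hypothetical names 
(per the comment at the top, HDFS-7531 instead removed the locking entirely):

{code}
import java.util.Collections;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Hedged sketch: the first caller runs the scan; callers that arrive while it
// is running block on the lock and then reuse the freshly finished result
// instead of starting another scan.
public class CoalescingChecker {
  private final ReentrantLock lock = new ReentrantLock();
  private List<String> lastResult;     // guarded by lock
  private boolean hasResult = false;   // guarded by lock
  private long lastFinishedNanos;      // guarded by lock

  public List<String> checkDirs() {
    final long requested = System.nanoTime();
    lock.lock();
    try {
      // A scan that completed after this request was made already covers it.
      if (hasResult && lastFinishedNanos - requested > 0) {
        return lastResult;
      }
      lastResult = doScan();
      lastFinishedNanos = System.nanoTime();
      hasResult = true;
      return lastResult;
    } finally {
      lock.unlock();
    }
  }

  private List<String> doScan() {
    return Collections.emptyList();  // placeholder for the real volume scan
  }
}
{code}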



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9082) Change the log level in WebHdfsFileSystem.initialize() from INFO to DEBUG

2015-09-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14746391#comment-14746391
 ] 

Hudson commented on HDFS-9082:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8460 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8460/])
HDFS-9082. Change the log level in WebHdfsFileSystem.initialize() from INFO to 
DEBUG. Contributed by Santhosh Nayak. (cnauroth: rev 
559c09dc0eba28666c4b16435512cc2d35e31683)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Change the log level in WebHdfsFileSystem.initialize() from INFO to DEBUG
> -
>
> Key: HDFS-9082
> URL: https://issues.apache.org/jira/browse/HDFS-9082
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9082.1.patch
>
>
> The log info statements shown below show up in the stdout of {{FileSystem}} 
> operations on {{WebHdfsFileSystem}}. So, the proposal is to change the log 
> level from INFO to DEBUG.
>  
> {code}
>  if(isOAuth) {
>   LOG.info("Enabling OAuth2 in WebHDFS");
>   connectionFactory = URLConnectionFactory
>   .newOAuth2URLConnectionFactory(conf);
> } else {
>   LOG.info("Not enabling OAuth2 in WebHDFS");
>   connectionFactory = URLConnectionFactory
>   .newDefaultURLConnectionFactory(conf);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9040) Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests to Coordinator)

2015-09-15 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14746443#comment-14746443
 ] 

Jing Zhao commented on HDFS-9040:
-

Thanks for the great review, Walter and Zhe!

bq. Speaking of blockToken, it reminds me of another severe issue.

Yes, this can be an issue and we should fix it. But at this stage it may not be 
that severe: the default block token lifetime (600 min) should be long enough 
to cover the normal writing scenario. Also, slow writers may not be our main 
use case in phase I, especially considering we do not support hflush/hsync yet, 
so HBase cannot use EC files. Creating streams before having real data can be a 
good idea. Maybe we should create a JIRA for this?

bq. Since we have agreed to move the locateFollowingBlock logic to OutputStream 
level, we should limit the lifespan of a StripedDataStreamer to a single block.

This is a good point. In my current patch only the failed streamers are 
replaced when writing a new block. Replacing all the streamers could be even 
simpler. My only concern is the overhead of creating new threads.

bq. We can also consider refactoring the base DataStreamer class into 
BlockDataStreamer

Maybe we can do the refactoring after merging the EC feature into trunk? Before 
the merge we may want to minimize the changes related to the original write 
pipeline.

I will upload a new patch soon to fix the race conditions pointed out by Walter.

> Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests 
> to Coordinator)
> ---
>
> Key: HDFS-9040
> URL: https://issues.apache.org/jira/browse/HDFS-9040
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
> Attachments: HDFS-9040-HDFS-7285.002.patch, HDFS-9040.00.patch, 
> HDFS-9040.001.wip.patch, HDFS-9040.02.bgstreamer.patch
>
>
> The general idea is to simplify the error handling logic.
> Proposal 1:
> A BlockGroupDataStreamer to communicate with the NN to allocate/update blocks, 
> so that StripedDataStreamers only have to stream blocks to DNs.
> Proposal 2:
> See the 
> [comment|https://issues.apache.org/jira/browse/HDFS-9040?focusedCommentId=14741388=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14741388]
>  below from [~jingzhao].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7986) Allow files / directories to be deleted from the NameNode UI

2015-09-15 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-7986:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~raviprak] for the 
contribution.

> Allow files / directories to be deleted from the NameNode UI
> 
>
> Key: HDFS-7986
> URL: https://issues.apache.org/jira/browse/HDFS-7986
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Fix For: 2.8.0
>
> Attachments: HDFS-7986.01.patch, HDFS-7986.02.patch
>
>
> Users should be able to delete files or directories using the NameNode UI.
> I'm thinking there ought to be a confirmation dialog. For directories, 
> recursive should be set to true. Initially there should be no option to 
> skipTrash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7779) Improve the HDFS Web UI browser to allow chowning / chgrp and setting replication

2015-09-15 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HDFS-7779:
---
Status: Open  (was: Patch Available)

> Improve the HDFS Web UI browser to allow chowning / chgrp and setting 
> replication
> -
>
> Key: HDFS-7779
> URL: https://issues.apache.org/jira/browse/HDFS-7779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: Chmod.png, Chown.png, HDFS-7779.01.patch, 
> HDFS-7779.02.patch
>
>
> This JIRA converts the owner, group, and replication fields into 
> contenteditable fields which can be modified by the user from the browser 
> itself. It, too, uses WebHDFS to effect these changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9082) Change the log level in WebHdfsFileSystem.initialize() from INFO to DEBUG

2015-09-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746543#comment-14746543
 ] 

Hudson commented on HDFS-9082:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #398 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/398/])
HDFS-9082. Change the log level in WebHdfsFileSystem.initialize() from INFO to 
DEBUG. Contributed by Santhosh Nayak. (cnauroth: rev 
559c09dc0eba28666c4b16435512cc2d35e31683)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java


> Change the log level in WebHdfsFileSystem.initialize() from INFO to DEBUG
> -
>
> Key: HDFS-9082
> URL: https://issues.apache.org/jira/browse/HDFS-9082
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9082.1.patch
>
>
> The INFO-level log statements shown below show up in the stdout of 
> {{FileSystem}} operations on {{WebHdfsFileSystem}}. So, the proposal is to 
> change the log level from INFO to DEBUG.
>  
> {code}
>  if(isOAuth) {
>   LOG.info("Enabling OAuth2 in WebHDFS");
>   connectionFactory = URLConnectionFactory
>   .newOAuth2URLConnectionFactory(conf);
> } else {
>   LOG.info("Not enabling OAuth2 in WebHDFS");
>   connectionFactory = URLConnectionFactory
>   .newDefaultURLConnectionFactory(conf);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9040) Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests to Coordinator)

2015-09-15 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746541#comment-14746541
 ] 

Jing Zhao commented on HDFS-9040:
-

I agree. Let's see if we can be confident about this fix before the merge. 
For now let's keep it in HDFS-8031. Thanks Zhe!

> Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests 
> to Coordinator)
> ---
>
> Key: HDFS-9040
> URL: https://issues.apache.org/jira/browse/HDFS-9040
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
> Attachments: HDFS-9040-HDFS-7285.002.patch, 
> HDFS-9040-HDFS-7285.003.patch, HDFS-9040.00.patch, HDFS-9040.001.wip.patch, 
> HDFS-9040.02.bgstreamer.patch
>
>
> The general idea is to simplify the error handling logic.
> Proposal 1:
> A BlockGroupDataStreamer to communicate with the NN to allocate/update blocks, 
> so that StripedDataStreamers only have to stream blocks to DNs.
> Proposal 2:
> See the 
> [comment|https://issues.apache.org/jira/browse/HDFS-9040?focusedCommentId=14741388=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14741388]
>  below from [~jingzhao].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9040) Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests to Coordinator)

2015-09-15 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9040:

Attachment: (was: HDFS-9040-HDFS-7285.003.patch)

> Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests 
> to Coordinator)
> ---
>
> Key: HDFS-9040
> URL: https://issues.apache.org/jira/browse/HDFS-9040
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
> Attachments: HDFS-9040-HDFS-7285.002.patch, HDFS-9040.00.patch, 
> HDFS-9040.001.wip.patch, HDFS-9040.02.bgstreamer.patch
>
>
> The general idea is to simplify the error handling logic.
> Proposal 1:
> A BlockGroupDataStreamer to communicate with the NN to allocate/update blocks, 
> so that StripedDataStreamers only have to stream blocks to DNs.
> Proposal 2:
> See the 
> [comment|https://issues.apache.org/jira/browse/HDFS-9040?focusedCommentId=14741388=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14741388]
>  below from [~jingzhao].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9085) Show renewer information in DelegationTokenIdentifier#toString

2015-09-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746579#comment-14746579
 ] 

Hadoop QA commented on HDFS-9085:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 33s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 58s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 18s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 36s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m  0s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 15s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests |   0m 29s | Tests passed in 
hadoop-hdfs-client. |
| | |  45m 45s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12756118/HDFS-9085.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 559c09d |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12462/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12462/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12462/console |


This message was automatically generated.

> Show renewer information in DelegationTokenIdentifier#toString
> --
>
> Key: HDFS-9085
> URL: https://issues.apache.org/jira/browse/HDFS-9085
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhihai xu
>Assignee: zhihai xu
>Priority: Trivial
> Attachments: HDFS-9085.001.patch
>
>
> Show renewer information in {{DelegationTokenIdentifier#toString}}. Currently 
> {{org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier}} 
> does not show the renewer information. It would be very useful to have the 
> renewer information when debugging security-related issues. Because the 
> renewer will be filtered by "hadoop.security.auth_to_local", it will be 
> helpful to show the real renewer info after applying the rules.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9087) Add some jitter to DataNode.checkDiskErrorThread

2015-09-15 Thread Elliott Clark (JIRA)
Elliott Clark created HDFS-9087:
---

 Summary: Add some jitter to DataNode.checkDiskErrorThread
 Key: HDFS-9087
 URL: https://issues.apache.org/jira/browse/HDFS-9087
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
Reporter: Elliott Clark
Assignee: Elliott Clark


If all datanodes across a cluster are started at the same time (or network 
errors cause IOExceptions), there can be storms where lots of datanodes check 
their disks at the exact same time.
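A minimal sketch of what adding jitter could look like, assuming a fixed base 
check interval. The class, method names, and interval here are illustrative, 
not the actual DataNode code:

{code}
import java.util.concurrent.ThreadLocalRandom;

class DiskCheckLoop {
  private static final long BASE_INTERVAL_MS = 5_000L; // assumed base interval

  void run() throws InterruptedException {
    while (true) {
      // Sleep the base interval plus up to 50% random jitter, so nodes that
      // hit errors at the same moment do not all scan their disks at once.
      long jitter = ThreadLocalRandom.current().nextLong(BASE_INTERVAL_MS / 2);
      Thread.sleep(BASE_INTERVAL_MS + jitter);
      checkDiskError();
    }
  }

  private void checkDiskError() {
    // Placeholder for the real disk check.
  }
}
{code}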



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7986) Allow files / directories to be deleted from the NameNode UI

2015-09-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746595#comment-14746595
 ] 

Hudson commented on HDFS-7986:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1133 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1133/])
HDFS-7986. Allow files / directories to be deleted from the NameNode UI. 
Contributed by Ravi Prakash. (wheat9: rev 
6c52be78a0c6d6d86444933c6b0734dfc2477c32)
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/hadoop.css
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js


> Allow files / directories to be deleted from the NameNode UI
> 
>
> Key: HDFS-7986
> URL: https://issues.apache.org/jira/browse/HDFS-7986
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Fix For: 2.8.0
>
> Attachments: HDFS-7986.01.patch, HDFS-7986.02.patch
>
>
> Users should be able to delete files or directories using the NameNode UI.
> I'm thinking there ought to be a confirmation dialog. For directories, 
> recursive should be set to true. Initially there should be no option to 
> skipTrash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8632) Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes

2015-09-15 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8632:
---
Attachment: HDFS-8632-HDFS-7285-02.patch

> Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes
> --
>
> Key: HDFS-8632
> URL: https://issues.apache.org/jira/browse/HDFS-8632
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8632-HDFS-7285-00.patch, 
> HDFS-8632-HDFS-7285-01.patch, HDFS-8632-HDFS-7285-02.patch
>
>
> I've noticed that some of the erasure coding classes are missing the 
> {{@InterfaceAudience}} annotation. It would be good to identify those classes 
> and add the proper annotations.
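For context, the annotation in question looks like this; the class name below 
is only an example, not a class the patch necessarily touches:

{code}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

// Marks the class as internal to Hadoop, not part of the public API surface.
@InterfaceAudience.Private
@InterfaceStability.Unstable
public class SomeErasureCodingHelper {
  // implementation details...
}
{code}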



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8968) New benchmark throughput tool for striping erasure coding

2015-09-15 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746750#comment-14746750
 ] 

Kai Zheng commented on HDFS-8968:
-

One thing to note is that the new tool is general enough that it is not coupled 
with erasure coding. I suggest the name be changed to something like 
{{ClientBenchmarkThroughput}}. 

> New benchmark throughput tool for striping erasure coding
> -
>
> Key: HDFS-8968
> URL: https://issues.apache.org/jira/browse/HDFS-8968
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Rui Li
> Attachments: HDFS-8968-HDFS-7285.1.patch, HDFS-8968-HDFS-7285.2.patch
>
>
> We need a new benchmark tool to measure the throughput of client writing and 
> reading, considering the following cases and factors:
> * 3-replica or striping;
> * write or read, stateful read or positional read;
> * which erasure coder;
> * striping cell size;
> * concurrent readers/writers using processes or threads.
> The tool should be easy to use and should avoid unnecessary local environment 
> impact, such as local disk I/O.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7986) Allow files / directories to be deleted from the NameNode UI

2015-09-15 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746523#comment-14746523
 ] 

Ravi Prakash commented on HDFS-7986:


Thanks a lot Haohui! I'll update the other patches.

> Allow files / directories to be deleted from the NameNode UI
> 
>
> Key: HDFS-7986
> URL: https://issues.apache.org/jira/browse/HDFS-7986
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Fix For: 2.8.0
>
> Attachments: HDFS-7986.01.patch, HDFS-7986.02.patch
>
>
> Users should be able to delete files or directories using the NameNode UI.
> I'm thinking there ought to be a confirmation dialog. For directories, 
> recursive should be set to true. Initially there should be no option to 
> skipTrash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9082) Change the log level in WebHdfsFileSystem.initialize() from INFO to DEBUG

2015-09-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746545#comment-14746545
 ] 

Hudson commented on HDFS-9082:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1132 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1132/])
HDFS-9082. Change the log level in WebHdfsFileSystem.initialize() from INFO to 
DEBUG. Contributed by Santhosh Nayak. (cnauroth: rev 
559c09dc0eba28666c4b16435512cc2d35e31683)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java


> Change the log level in WebHdfsFileSystem.initialize() from INFO to DEBUG
> -
>
> Key: HDFS-9082
> URL: https://issues.apache.org/jira/browse/HDFS-9082
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9082.1.patch
>
>
> The INFO-level log statements shown below show up in the stdout of 
> {{FileSystem}} operations on {{WebHdfsFileSystem}}. So, the proposal is to 
> change the log level from INFO to DEBUG.
>  
> {code}
>  if(isOAuth) {
>   LOG.info("Enabling OAuth2 in WebHDFS");
>   connectionFactory = URLConnectionFactory
>   .newOAuth2URLConnectionFactory(conf);
> } else {
>   LOG.info("Not enabling OAuth2 in WebHDFS");
>   connectionFactory = URLConnectionFactory
>   .newDefaultURLConnectionFactory(conf);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9040) Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests to Coordinator)

2015-09-15 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9040:

Attachment: HDFS-9040-HDFS-7285.003.patch

> Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests 
> to Coordinator)
> ---
>
> Key: HDFS-9040
> URL: https://issues.apache.org/jira/browse/HDFS-9040
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
> Attachments: HDFS-9040-HDFS-7285.002.patch, 
> HDFS-9040-HDFS-7285.003.patch, HDFS-9040.00.patch, HDFS-9040.001.wip.patch, 
> HDFS-9040.02.bgstreamer.patch
>
>
> The general idea is to simplify the error handling logic.
> Proposal 1:
> A BlockGroupDataStreamer to communicate with the NN to allocate/update blocks, 
> so that StripedDataStreamers only have to stream blocks to DNs.
> Proposal 2:
> See the 
> [comment|https://issues.apache.org/jira/browse/HDFS-9040?focusedCommentId=14741388=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14741388]
>  below from [~jingzhao].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9082) Change the log level in WebHdfsFileSystem.initialize() from INFO to DEBUG

2015-09-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746628#comment-14746628
 ] 

Hudson commented on HDFS-9082:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #392 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/392/])
HDFS-9082. Change the log level in WebHdfsFileSystem.initialize() from INFO to 
DEBUG. Contributed by Santhosh Nayak. (cnauroth: rev 
559c09dc0eba28666c4b16435512cc2d35e31683)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java


> Change the log level in WebHdfsFileSystem.initialize() from INFO to DEBUG
> -
>
> Key: HDFS-9082
> URL: https://issues.apache.org/jira/browse/HDFS-9082
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9082.1.patch
>
>
> The INFO-level log statements shown below show up in the stdout of 
> {{FileSystem}} operations on {{WebHdfsFileSystem}}. So, the proposal is to 
> change the log level from INFO to DEBUG.
>  
> {code}
>  if(isOAuth) {
>   LOG.info("Enabling OAuth2 in WebHDFS");
>   connectionFactory = URLConnectionFactory
>   .newOAuth2URLConnectionFactory(conf);
> } else {
>   LOG.info("Not enabling OAuth2 in WebHDFS");
>   connectionFactory = URLConnectionFactory
>   .newDefaultURLConnectionFactory(conf);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7492) If multiple threads call FsVolumeList#checkDirs at the same time, we should only do checkDirs once and give the results to all waiting threads

2015-09-15 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746491#comment-14746491
 ] 

Elliott Clark commented on HDFS-7492:
-

I'm going to grab this one. We're seeing this in production.

There's an unrelated issue with one datanode locking up (still heartbeating 
to the NN but not able to make progress on anything that hits disks). So all 
datanodes talking to the bad node throw a bunch of IOExceptions. This causes a 
significant portion of the cluster to checkDiskError while the network issue is 
going on. FsDatasetImpl.checkDirs holds a lock, so all new xceivers are blocked 
by the checkDiskError. This causes more timeouts and basically serializes all 
reading and writing of blocks until everything on the cluster settles down.
{code}
"DataXceiver for client unix:/mnt/d2/hdfs-socket/dn.50010 [Passing file 
descriptors for block 
BP-1735829752-10.210.49.21-1437433901380:blk_1121816087_48310306]" #85474 
daemon prio=5 os_prio=0 tid=0x7f10910b2800 nid=0x5d44f waiting for monitor 
entry [0x7f1072c06000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockFileNoExistsCheck(FsDatasetImpl.java:606)
- waiting to lock <0x0007015a3fe8> (a 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:618)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.requestShortCircuitFdsForRead(DataNode.java:1524)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.requestShortCircuitFds(DataXceiver.java:287)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opRequestShortCircuitFds(Receiver.java:185)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:89)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
at java.lang.Thread.run(Thread.java:745)

"DataXceiver for client DFSClient_NONMAPREDUCE_-1067692187_1 at 
/10.210.65.21:33560 [Receiving block 
BP-1735829752-10.210.49.21-1437433901380:blk_1121839247_48333595]" #85463 
daemon prio=5 os_prio=0 tid=0x7f108933d800 nid=0x5d28e waiting for monitor 
entry [0x7f1072904000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.getNextVolume(FsVolumeList.java:63)
- waiting to lock <0x0007015a4030> (a 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:1084)
- locked <0x0007015a3fe8> (a 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:114)
at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:183)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
at java.lang.Thread.run(Thread.java:745)

"Thread-13149" #13302 daemon prio=5 os_prio=0 tid=0x7f10884a9000 nid=0xe9e7 
runnable [0x7f1076e6]
   java.lang.Thread.State: RUNNABLE
at java.io.UnixFileSystem.createDirectory(Native Method)
at java.io.File.mkdir(File.java:1316)
at 
org.apache.hadoop.util.DiskChecker.mkdirsWithExistsCheck(DiskChecker.java:67)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:104)
at org.apache.hadoop.util.DiskChecker.checkDirs(DiskChecker.java:88)
at org.apache.hadoop.util.DiskChecker.checkDirs(DiskChecker.java:91)
at org.apache.hadoop.util.DiskChecker.checkDirs(DiskChecker.java:91)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.checkDirs(BlockPoolSlice.java:300)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.checkDirs(FsVolumeImpl.java:307)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.checkDirs(FsVolumeList.java:183)
- locked <0x0007015a4030> (a 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.checkDataDir(FsDatasetImpl.java:1743)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskError(DataNode.java:3002)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.access$800(DataNode.java:240)
at ...
{code}
[jira] [Assigned] (HDFS-7492) If multiple threads call FsVolumeList#checkDirs at the same time, we should only do checkDirs once and give the results to all waiting threads

2015-09-15 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark reassigned HDFS-7492:
---

Assignee: Elliott Clark

> If multiple threads call FsVolumeList#checkDirs at the same time, we should 
> only do checkDirs once and give the results to all waiting threads
> --
>
> Key: HDFS-7492
> URL: https://issues.apache.org/jira/browse/HDFS-7492
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Colin Patrick McCabe
>Assignee: Elliott Clark
>Priority: Minor
>
> checkDirs is called when we encounter certain I/O errors.  It's rare to get 
> just a single I/O error... normally you start getting many errors when a disk 
> is going bad.  For this reason, we shouldn't start a new checkDirs scan for 
> each error.  Instead, if multiple threads call FsVolumeList#checkDirs at 
> around the same time, we should only do checkDirs once and give the results 
> to all the waiting threads.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9085) Show renewer information in DelegationTokenIdentifier#toString

2015-09-15 Thread zhihai xu (JIRA)
zhihai xu created HDFS-9085:
---

 Summary: Show renewer information in 
DelegationTokenIdentifier#toString
 Key: HDFS-9085
 URL: https://issues.apache.org/jira/browse/HDFS-9085
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: zhihai xu
Assignee: zhihai xu
Priority: Trivial


Show renewer information in {{DelegationTokenIdentifier#toString}}. Currently 
{{org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier}} 
does not show the renewer information. It would be very useful to have the 
renewer information when debugging security-related issues.
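A minimal sketch of what the extended {{toString()}} might look like inside 
DelegationTokenIdentifier. The accessors come from 
AbstractDelegationTokenIdentifier, but the output format here is illustrative, 
not the actual patch:

{code}
@Override
public String toString() {
  // Include the renewer alongside the existing kind/sequence/owner info.
  return getKind() + " token " + getSequenceNumber()
      + " for " + getUser().getShortUserName()
      + " with renewer " + getRenewer();
}
{code}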



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9088) Cleanup erasure coding documentation

2015-09-15 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-9088:
-

 Summary: Cleanup erasure coding documentation
 Key: HDFS-9088
 URL: https://issues.apache.org/jira/browse/HDFS-9088
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: documentation
Affects Versions: HDFS-7285
Reporter: Andrew Wang
Assignee: Andrew Wang


The documentation could use a pass to clean up typos, unify formatting, and 
also make it more user-oriented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9088) Cleanup erasure coding documentation

2015-09-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-9088:
--
Attachment: hdfs-9088.001.patch

Patch attached, [~zhz] mind reviewing?

> Cleanup erasure coding documentation
> 
>
> Key: HDFS-9088
> URL: https://issues.apache.org/jira/browse/HDFS-9088
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: HDFS-7285
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hdfs-9088.001.patch
>
>
> The documentation could use a pass to clean up typos, unify formatting, and 
> also make it more user-oriented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7986) Allow files / directories to be deleted from the NameNode UI

2015-09-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746611#comment-14746611
 ] 

Hudson commented on HDFS-7986:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #399 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/399/])
HDFS-7986. Allow files / directories to be deleted from the NameNode UI. 
Contributed by Ravi Prakash. (wheat9: rev 
6c52be78a0c6d6d86444933c6b0734dfc2477c32)
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/hadoop.css
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html


> Allow files / directories to be deleted from the NameNode UI
> 
>
> Key: HDFS-7986
> URL: https://issues.apache.org/jira/browse/HDFS-7986
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Fix For: 2.8.0
>
> Attachments: HDFS-7986.01.patch, HDFS-7986.02.patch
>
>
> Users should be able to delete files or directories using the NameNode UI.
> I'm thinking there ought to be a confirmation dialog. For directories, 
> recursive should be set to true. Initially there should be no option to 
> skipTrash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7995) Implement chmod in the HDFS Web UI

2015-09-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746668#comment-14746668
 ] 

Hadoop QA commented on HDFS-7995:
-

\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   0m  0s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| | |   0m 27s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12756137/HDFS-7995.04.patch |
| Optional Tests |  |
| git revision | trunk / 77aaf4c |
| Java | 1.7.0_55 |
| uname | Linux asf908.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12464/console |


This message was automatically generated.

> Implement chmod in the HDFS Web UI
> --
>
> Key: HDFS-7995
> URL: https://issues.apache.org/jira/browse/HDFS-7995
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7995.01.patch, HDFS-7995.02.patch, 
> HDFS-7995.03.patch, HDFS-7995.04.patch
>
>
> We should let users change the permissions of files and directories using the 
> HDFS Web UI



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8941) DistributedFileSystem listCorruptFileBlocks API should resolve relative path

2015-09-15 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8941:
---
Attachment: HDFS-8941-02.patch

> DistributedFileSystem listCorruptFileBlocks API should resolve relative path
> 
>
> Key: HDFS-8941
> URL: https://issues.apache.org/jira/browse/HDFS-8941
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8941-00.patch, HDFS-8941-01.patch, 
> HDFS-8941-02.patch
>
>
> Presently the {{DFS#listCorruptFileBlocks(path)}} API does not resolve the 
> given path relative to the workingDir. This jira is to discuss and implement 
> that resolution.
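A sketch of the likely shape of the fix, using the {{fixRelativePart}} helper 
that other FileSystem methods use to resolve paths against the working 
directory. This is illustrative, not the attached patch:

{code}
// Sketch of the idea inside DistributedFileSystem: make a relative path
// absolute before handing it to the underlying DFSClient.
@Override
public RemoteIterator<Path> listCorruptFileBlocks(Path path)
    throws IOException {
  Path absF = fixRelativePart(path); // resolve against the working directory
  return new CorruptFileBlockIterator(dfs, absF);
}
{code}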



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8941) DistributedFileSystem listCorruptFileBlocks API should resolve relative path

2015-09-15 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746730#comment-14746730
 ] 

Rakesh R commented on HDFS-8941:


Attached another patch based on the latest trunk code base.

> DistributedFileSystem listCorruptFileBlocks API should resolve relative path
> 
>
> Key: HDFS-8941
> URL: https://issues.apache.org/jira/browse/HDFS-8941
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8941-00.patch, HDFS-8941-01.patch, 
> HDFS-8941-02.patch
>
>
> Presently the {{DFS#listCorruptFileBlocks(path)}} API does not resolve the 
> given path relative to the workingDir. This jira is to discuss and implement 
> that resolution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8953) DataNode Metrics logging

2015-09-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746478#comment-14746478
 ] 

Hudson commented on HDFS-8953:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2338 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2338/])
HDFS-8953. DataNode Metrics logging (Contributed by Kanaka Kumar Avvaru) 
(vinayakumarb: rev ce69c9b54c642cfbe789fc661cfc7dcbb07b4ac5)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/MetricsLoggerTask.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties
* hadoop-common-project/hadoop-common/src/main/conf/log4j.properties
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMetricsLogger.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java


> DataNode Metrics logging
> 
>
> Key: HDFS-8953
> URL: https://issues.apache.org/jira/browse/HDFS-8953
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kanaka Kumar Avvaru
>Assignee: Kanaka Kumar Avvaru
> Fix For: 2.8.0
>
> Attachments: HDFS-8953-01.patch, HDFS-8953-02.patch, 
> HDFS-8953-03.patch, HDFS-8953-04.patch, HDFS-8953-05.patch, HDFS-8953-06.patch
>
>
> HDFS-8880 added metrics logging at the NameNode. Similarly, this JIRA is to 
> add a separate logger for metrics at the DN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9040) Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests to Coordinator)

2015-09-15 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9040:

Attachment: HDFS-9040-HDFS-7285.003.patch

Uploaded a patch trying to fix the race conditions. I still need to fix the 
issue where a failure happens during the last stripe of a block.

# For {{waitCreatingNewStreams}}, instead of only counting the 
updateStreamerMap's size, the new patch also checks for data streamers that 
failed before taking the updated block from the queue.
# For {{allocateNewBlock}}, the new patch also keeps checking whether the 
streamer is still healthy.
# For {{setExternalError}}, the new patch sets the external error only if the 
error state is not an internal error.

> Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests 
> to Coordinator)
> ---
>
> Key: HDFS-9040
> URL: https://issues.apache.org/jira/browse/HDFS-9040
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
> Attachments: HDFS-9040-HDFS-7285.002.patch, 
> HDFS-9040-HDFS-7285.003.patch, HDFS-9040.00.patch, HDFS-9040.001.wip.patch, 
> HDFS-9040.02.bgstreamer.patch
>
>
> The general idea is to simplify the error handling logic.
> Proposal 1:
> A BlockGroupDataStreamer to communicate with the NN to allocate/update blocks, 
> so that StripedDataStreamers only have to stream blocks to DNs.
> Proposal 2:
> See the 
> [comment|https://issues.apache.org/jira/browse/HDFS-9040?focusedCommentId=14741388=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14741388]
>  below from [~jingzhao].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7986) Allow files / directories to be deleted from the NameNode UI

2015-09-15 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746510#comment-14746510
 ] 

Haohui Mai commented on HDFS-7986:
--

+1. I'll commit it shortly.

> Allow files / directories to be deleted from the NameNode UI
> 
>
> Key: HDFS-7986
> URL: https://issues.apache.org/jira/browse/HDFS-7986
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: HDFS-7986.01.patch, HDFS-7986.02.patch
>
>
> Users should be able to delete files or directories using the NameNode UI.
> I'm thinking there ought to be a confirmation dialog. For directories, 
> recursive should be set to true. Initially there should be no option to 
> skipTrash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9085) Show renewer information in DelegationTokenIdentifier#toString

2015-09-15 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated HDFS-9085:

Description: 
Show renewer information in {{DelegationTokenIdentifier#toString}}. Currently 
{{org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier}} 
does not show the renewer information. It would be very useful to have the 
renewer information when debugging security-related issues. Because the renewer 
will be filtered by "hadoop.security.auth_to_local", it will be helpful to show 
the real renewer after applying the rules.

  was:
Show renewer information in {{DelegationTokenIdentifier#toString}}. Currently 
{{org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier}} 
does not show the renewer information. It would be very useful to have the 
renewer information when debugging security-related issues.


> Show renewer information in DelegationTokenIdentifier#toString
> --
>
> Key: HDFS-9085
> URL: https://issues.apache.org/jira/browse/HDFS-9085
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhihai xu
>Assignee: zhihai xu
>Priority: Trivial
> Attachments: HDFS-9085.001.patch
>
>
> Show renewer information in {{DelegationTokenIdentifier#toString}}. Currently 
> {{org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier}} 
> does not show the renewer information. It would be very useful to have the 
> renewer information when debugging security-related issues. Because the 
> renewer will be filtered by "hadoop.security.auth_to_local", it will be 
> helpful to show the real renewer info after applying the rules.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9085) Show renewer information in DelegationTokenIdentifier#toString

2015-09-15 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated HDFS-9085:

Status: Patch Available  (was: Open)

> Show renewer information in DelegationTokenIdentifier#toString
> --
>
> Key: HDFS-9085
> URL: https://issues.apache.org/jira/browse/HDFS-9085
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhihai xu
>Assignee: zhihai xu
>Priority: Trivial
> Attachments: HDFS-9085.001.patch
>
>
> Show renewer information in {{DelegationTokenIdentifier#toString}}. Currently 
> {{org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier}} 
> does not show the renewer information. It would be very useful to have the 
> renewer information when debugging security-related issues. Because the 
> renewer will be filtered by "hadoop.security.auth_to_local", it will be 
> helpful to show the real renewer info after applying the rules.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9085) Show renewer information in DelegationTokenIdentifier#toString

2015-09-15 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated HDFS-9085:

Description: 
Show renewer information in {{DelegationTokenIdentifier#toString}}. Currently 
{{org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier}} 
does not show the renewer information. It would be very useful to have the 
renewer information when debugging security-related issues. Because the renewer 
will be filtered by "hadoop.security.auth_to_local", it will be helpful to show 
the real renewer info after applying the rules.

  was:
Show renewer information in {{DelegationTokenIdentifier#toString}}. Currently 
{{org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier}} 
does not show the renewer information. It would be very useful to have the 
renewer information when debugging security-related issues. Because the renewer 
will be filtered by "hadoop.security.auth_to_local", it will be helpful to show 
the real renewer after applying the rules.


> Show renewer information in DelegationTokenIdentifier#toString
> --
>
> Key: HDFS-9085
> URL: https://issues.apache.org/jira/browse/HDFS-9085
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhihai xu
>Assignee: zhihai xu
>Priority: Trivial
> Attachments: HDFS-9085.001.patch
>
>
> Show renewer information in {{DelegationTokenIdentifier#toString}}. Currently 
> {{org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier}} 
> does not show the renewer information. It would be very useful to have the 
> renewer information when debugging security-related issues. Because the 
> renewer will be filtered by "hadoop.security.auth_to_local", it will be 
> helpful to show the real renewer info after applying the rules.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9001) DFSUtil.getNsServiceRpcUris() can return too many entries in a non-HA, non-federated cluster

2015-09-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746577#comment-14746577
 ] 

Hadoop QA commented on HDFS-9001:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 58s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 51s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 16s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 23s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 30s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 32s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 18s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 136m 46s | Tests failed in hadoop-hdfs. |
| | | 183m 39s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory |
|   | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement |
| Timed out tests | org.apache.hadoop.hdfs.server.namenode.TestHostsFiles |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12756086/HDFS-9001.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 8c1cdb1 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12458/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12458/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12458/console |


This message was automatically generated.

> DFSUtil.getNsServiceRpcUris() can return too many entries in a non-HA, 
> non-federated cluster
> 
>
> Key: HDFS-9001
> URL: https://issues.apache.org/jira/browse/HDFS-9001
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.6.0, 2.7.0, 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HDFS-9001.001.patch
>
>
> If defaultFS differs from rpc-address, then DFSUtil.getNsServiceRpcUris() 
> will return two entries: one for the [service] RPC address and one for the 
> default FS. This violates the expected behavior stated in the JavaDoc 
> header.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7986) Allow files / directories to be deleted from the NameNode UI

2015-09-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746715#comment-14746715
 ] 

Hudson commented on HDFS-7986:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #376 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/376/])
HDFS-7986. Allow files / directories to be deleted from the NameNode UI. 
Contributed by Ravi Prakash. (wheat9: rev 
6c52be78a0c6d6d86444933c6b0734dfc2477c32)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/hadoop.css
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html


> Allow files / directories to be deleted from the NameNode UI
> 
>
> Key: HDFS-7986
> URL: https://issues.apache.org/jira/browse/HDFS-7986
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Fix For: 2.8.0
>
> Attachments: HDFS-7986.01.patch, HDFS-7986.02.patch
>
>
> Users should be able to delete files or directories using the Namenode UI.
> I'm thinking there ought to be a confirmation dialog. For directories 
> recursive should be set to true. Initially there should be no option to 
> skipTrash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9082) Change the log level in WebHdfsFileSystem.initialize() from INFO to DEBUG

2015-09-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746716#comment-14746716
 ] 

Hudson commented on HDFS-9082:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #376 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/376/])
HDFS-9082. Change the log level in WebHdfsFileSystem.initialize() from INFO to 
DEBUG. Contributed by Santhosh Nayak. (cnauroth: rev 
559c09dc0eba28666c4b16435512cc2d35e31683)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Change the log level in WebHdfsFileSystem.initialize() from INFO to DEBUG
> -
>
> Key: HDFS-9082
> URL: https://issues.apache.org/jira/browse/HDFS-9082
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9082.1.patch
>
>
> The INFO-level log statements shown below show up in the stdout of 
> {{FileSystem}} operations on {{WebHdfsFileSystem}}. So, the proposal is to 
> change the log level from INFO to DEBUG.
>  
> {code}
>  if(isOAuth) {
>   LOG.info("Enabling OAuth2 in WebHDFS");
>   connectionFactory = URLConnectionFactory
>   .newOAuth2URLConnectionFactory(conf);
> } else {
>   LOG.info("Not enabling OAuth2 in WebHDFS");
>   connectionFactory = URLConnectionFactory
>   .newDefaultURLConnectionFactory(conf);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7779) Improve the HDFS Web UI browser to allow chowning / chgrp and setting replication

2015-09-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746718#comment-14746718
 ] 

Hadoop QA commented on HDFS-7779:
-

\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   0m  0s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| | |   0m 27s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12756139/HDFS-7779.03.patch |
| Optional Tests |  |
| git revision | trunk / 77aaf4c |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12466/console |


This message was automatically generated.

> Improve the HDFS Web UI browser to allow chowning / chgrp and setting 
> replication
> -
>
> Key: HDFS-7779
> URL: https://issues.apache.org/jira/browse/HDFS-7779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: Chmod.png, Chown.png, HDFS-7779.01.patch, 
> HDFS-7779.02.patch, HDFS-7779.03.patch
>
>
> This JIRA converts the owner, group, and replication fields into 
> contenteditable fields which can be modified by the user from the browser 
> itself. It, too, uses WebHDFS to effect these changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7986) Allow files / directories to be deleted from the NameNode UI

2015-09-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746521#comment-14746521
 ] 

Hudson commented on HDFS-7986:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8461 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8461/])
HDFS-7986. Allow files / directories to be deleted from the NameNode UI. 
Contributed by Ravi Prakash. (wheat9: rev 
6c52be78a0c6d6d86444933c6b0734dfc2477c32)
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/hadoop.css
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Allow files / directories to be deleted from the NameNode UI
> 
>
> Key: HDFS-7986
> URL: https://issues.apache.org/jira/browse/HDFS-7986
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Fix For: 2.8.0
>
> Attachments: HDFS-7986.01.patch, HDFS-7986.02.patch
>
>
> Users should be able to delete files or directories using the NameNode UI.
> I'm thinking there ought to be a confirmation dialog. For directories, 
> recursive should be set to true. Initially there should be no option to 
> skipTrash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8246) Get HDFS file name based on block pool id and block id

2015-09-15 Thread feng xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

feng xu updated HDFS-8246:
--
Resolution: Auto Closed
Status: Resolved  (was: Patch Available)

We accomplished the same functionality with a different approach.

> Get HDFS file name based on block pool id and block id
> --
>
> Key: HDFS-8246
> URL: https://issues.apache.org/jira/browse/HDFS-8246
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: HDFS, hdfs-client, namenode
>Reporter: feng xu
>Assignee: feng xu
>  Labels: BB2015-05-TBR
> Attachments: HDFS-8246.0.patch
>
>
> This feature provides HDFS shell command and C/Java API to retrieve HDFS file 
> name based on block pool id and block id.
> 1. The Java API in class DistributedFileSystem
> public String getFileName(String poolId, long blockId) throws IOException
> 2. The C API in hdfs.c
> char* hdfsGetFileName(hdfsFS fs, const char* poolId, int64_t blockId)
> 3. The HDFS shell command 
>  hdfs dfs [generic options] -fn  
> This feature is useful if you have HDFS block file name in local file system 
> and want to  find out the related HDFS file name in HDFS name space 
> (http://stackoverflow.com/questions/10881449/how-to-find-file-from-blockname-in-hdfs-hadoop).
>   Each HDFS block file name in local file system contains both block pool id 
> and block id, for sample HDFS block file name 
> /hdfs/1/hadoop/hdfs/data/current/BP-97622798-10.3.11.84-1428081035160/current/finalized/subdir0/subdir0/blk_1073741825,
>   the block pool id is BP-97622798-10.3.11.84-1428081035160 and the block id 
> is 1073741825. The block  pool id is uniquely related to a HDFS name 
> node/name space,  and the block id is uniquely related to a HDFS file within 
> a HDFS name node/name space, so the combination of block pool id and a block 
> id is uniquely related a HDFS file name. 
> The shell command and C/Java API do not map the block pool id to name node, 
> so it’s user’s responsibility to talk to the correct name node in federation 
> environment that has multiple name nodes. The block pool id is used by name 
> node to check if the user is talking with the correct name node.
> The implementation is straightforward. The client request to get the HDFS 
> file name reaches the new method String getFileName(String poolId, long 
> blockId) in FSNamesystem in the name node through RPC, and the new method 
> does the following:
> (1)   Validate the block pool id.
> (2)   Create Block  based on the block id.
> (3)   Get BlockInfoContiguous from Block.
> (4)   Get BlockCollection from BlockInfoContiguous.
> (5)   Get file name from BlockCollection.
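
As an illustration of the workflow this enables, here is a sketch that parses a local block file path into its (block pool id, block id) pair and then calls the proposed API; {{getFileName}} is the method proposed in this patch, not part of stock HDFS:

{code}
public class BlockFileNameParser {
  public static void main(String[] args) {
    String local = "/hdfs/1/hadoop/hdfs/data/current/"
        + "BP-97622798-10.3.11.84-1428081035160/current/finalized/"
        + "subdir0/subdir0/blk_1073741825";
    // The path component starting with "BP-" is the block pool id.
    String poolId = null;
    for (String part : local.split("/")) {
      if (part.startsWith("BP-")) {
        poolId = part;
      }
    }
    // The file name is blk_<blockId>.
    String name = local.substring(local.lastIndexOf('/') + 1);
    long blockId = Long.parseLong(name.substring("blk_".length()));
    System.out.println(poolId + " / " + blockId);
    // With the proposed API (hypothetical until this patch lands):
    // DistributedFileSystem dfs = ...;
    // String fileName = dfs.getFileName(poolId, blockId);
  }
}
{code}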



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9057) allow/disallow snapshots via webhdfs

2015-09-15 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746606#comment-14746606
 ] 

Brahma Reddy Battula commented on HDFS-9057:


{quote}-1   hdfs tests  0m 18s  Tests failed in hadoop-hdfs.{quote}
I didn't see any test failures in the report. I ran the tests locally and all of them passed.

> allow/disallow snapshots via webhdfs
> 
>
> Key: HDFS-9057
> URL: https://issues.apache.org/jira/browse/HDFS-9057
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9057.patch
>
>
> We should be able to allow and disallow directories for snapshotting via 
> WebHDFS.
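
For reference, the Java-side operations that such a WebHDFS endpoint would mirror already exist on {{DistributedFileSystem}}. A minimal sketch (assumes fs.defaultFS points at an HDFS cluster; the path is a placeholder):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class SnapshotAdminExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    DistributedFileSystem dfs =
        (DistributedFileSystem) new Path("/").getFileSystem(conf);
    Path dir = new Path("/data/reports");  // placeholder directory
    dfs.allowSnapshot(dir);                // enable snapshots on the directory
    dfs.disallowSnapshot(dir);             // and disable them again
  }
}
{code}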



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7995) Implement chmod in the HDFS Web UI

2015-09-15 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HDFS-7995:
---
Attachment: HDFS-7995.04.patch

Rebased patch after HDFS-7986 (delete)

> Implement chmod in the HDFS Web UI
> --
>
> Key: HDFS-7995
> URL: https://issues.apache.org/jira/browse/HDFS-7995
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7995.01.patch, HDFS-7995.02.patch, 
> HDFS-7995.03.patch, HDFS-7995.04.patch
>
>
> We should let users change the permissions of files and directories using the 
> HDFS Web UI



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6264) Provide FileSystem#create() variant which throws exception if parent directory doesn't exist

2015-09-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-6264:
-
Description: 
FileSystem#createNonRecursive() is deprecated.
However, there is no DistributedFileSystem#create() implementation which throws 
an exception if the parent directory doesn't exist.
This limits clients' migration away from the deprecated method.

For HBase, IO fencing relies on the behavior of FileSystem#createNonRecursive().
A variant of the create() method should be added which throws an exception if 
the parent directory doesn't exist.

  was:
FileSystem#createNonRecursive() is deprecated.
However, there is no DistributedFileSystem#create() implementation which throws 
exception if parent directory doesn't exist.
This limits clients' migration away from the deprecated method.

For HBase, IO fencing relies on the behavior of FileSystem#createNonRecursive().

Variant of create() method should be added which throws exception if parent 
directory doesn't exist.


> Provide FileSystem#create() variant which throws exception if parent 
> directory doesn't exist
> 
>
> Key: HDFS-6264
> URL: https://issues.apache.org/jira/browse/HDFS-6264
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Ted Yu
>  Labels: hbase
> Attachments: hdfs-6264-v1.txt
>
>
> FileSystem#createNonRecursive() is deprecated.
> However, there is no DistributedFileSystem#create() implementation which 
> throws an exception if the parent directory doesn't exist.
> This limits clients' migration away from the deprecated method.
> For HBase, IO fencing relies on the behavior of 
> FileSystem#createNonRecursive().
> A variant of the create() method should be added which throws an exception if 
> the parent directory doesn't exist.
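
To make the fencing dependency concrete, here is a minimal sketch of the pattern HBase relies on, using the deprecated API this JIRA wants a supported replacement for (paths and parameters are placeholders):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FencingExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path file = new Path("/hbase/WALs/server1/wal.1");
    // createNonRecursive fails if /hbase/WALs/server1 has been deleted by a
    // fencing process; a plain create() would silently recreate the parent,
    // which is exactly the behavior the proposed variant must avoid.
    FSDataOutputStream out = fs.createNonRecursive(
        file, /*overwrite=*/false, /*bufferSize=*/4096,
        /*replication=*/(short) 3, /*blockSize=*/128L * 1024 * 1024,
        /*progress=*/null);
    out.close();
  }
}
{code}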



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9075) Multiple datacenter replication inside one HDFS cluster

2015-09-15 Thread He Tianyi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746675#comment-14746675
 ] 

He Tianyi commented on HDFS-9075:
-

Thanks for pointing that out, [~cnauroth].

Prior discussions mentioned the global namespace model, which I think is the 
most valuable direction to work on.

There are consistency choices for the namespace:
1. a strongly consistent namespace, which perhaps requires either a global 
quorum to ensure consistency or namespace segmentation (a bit like federation, 
with only a local block pool);
2. an eventually consistent namespace, which can be achieved via snapshots.

Besides, there are choices for the data replication fashion:
1. sync replication, adding remote nodes to the pipeline during a write;
2. async replication.

IMHO a strongly consistent namespace is a must; otherwise it is hard to make 
global operations transparent.
For example, what happens if append operations on different files (or the same 
file) in the same directory take place simultaneously in two datacenters?
(Of course a global lease manager would do the trick, but that requires remote 
communication.)
If we go the strongly consistent way, performance suffers anyway (reads and 
writes need global communication). Then it does no harm to simply use one 
central active NameNode, but with JournalNodes and a standby NameNode deployed 
globally.

As for replication, I think performance will not be an issue as long as 
latency is tolerable and bandwidth is sufficient (see HDFS-8829). We can 
certainly let the user decide.

We have a real scenario in which communication between two datacenters has a 
latency of nearly 3 ms while bandwidth is sufficient. In this case, we have 
seen no performance drop so far.

But with high latency, I think that will not hold. Perhaps we need some fresh 
ideas.

> Multiple datacenter replication inside one HDFS cluster
> ---
>
> Key: HDFS-9075
> URL: https://issues.apache.org/jira/browse/HDFS-9075
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Reporter: He Tianyi
>Assignee: He Tianyi
>
> It is a common scenario to deploy multiple datacenters for scaling and 
> disaster tolerance. 
> In this case we certainly want data to be shared transparently (to the user) 
> across datacenters.
> For example, say we have a raw user action log stored daily; different 
> computations may take place with the log as input. As scale grows, we may 
> want to schedule various kinds of computations in more than one datacenter.
> As far as I know, the current solution is to deploy multiple independent 
> clusters corresponding to the datacenters, using {{distcp}} to sync data 
> files between them.
> But in this case, the user needs to know exactly where data is stored, and 
> mistakes may be made during human-intervened operations. After all, it is 
> basically a job for a computer.
> Based on these facts, it is obvious that a multiple-datacenter replication 
> solution could address this scenario.
> I am working on a prototype that works with 2 datacenters. The goal is to 
> provide data replication between datacenters transparently and to minimize 
> inter-DC bandwidth usage. The basic idea is to replicate blocks to both DCs 
> and determine the number of replicas from historical statistics of access 
> behavior for that part of the namespace.
> I will post a design document soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7779) Improve the HDFS Web UI browser to allow chowning / chgrp and setting replication

2015-09-15 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HDFS-7779:
---
Status: Patch Available  (was: Open)

> Improve the HDFS Web UI browser to allow chowning / chgrp and setting 
> replication
> -
>
> Key: HDFS-7779
> URL: https://issues.apache.org/jira/browse/HDFS-7779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: Chmod.png, Chown.png, HDFS-7779.01.patch, 
> HDFS-7779.02.patch, HDFS-7779.03.patch
>
>
> This JIRA converts the owner, group and replication fields into 
> contenteditable fields which can be modified by the user from the browser 
> itself. It, too, uses WebHDFS to effect these changes.
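
For reference, the underlying filesystem calls behind these UI edits are straightforward. A minimal Java sketch over WebHDFS (host, port, path, and principals are placeholders; owner changes normally require superuser privileges):

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OwnerReplicationExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(
        URI.create("webhdfs://namenode:50070"), new Configuration());
    Path p = new Path("/user/alice/data.txt");
    fs.setOwner(p, "alice", "analysts");  // chown / chgrp
    fs.setReplication(p, (short) 2);      // setrep
  }
}
{code}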



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9052) deleteSnapshot runs into AssertionError

2015-09-15 Thread Alex Ivanov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746697#comment-14746697
 ] 

Alex Ivanov commented on HDFS-9052:
---

Thank you for the detailed explanation, Jing. I had not seen the following 
change in _cleanDirectory_ method in 
[HDFS-6908|https://issues.apache.org/jira/browse/HDFS-6908], which threw me off:
{code}
+  counts.add(currentINode.cleanSubtreeRecursively(snapshot, prior,
+  collectedBlocks, removedINodes, priorDeleted, countDiffChange));
+
   // check priorDiff again since it may be created during the diff deletion
   if (prior != Snapshot.NO_SNAPSHOT_ID) {
 DirectoryDiff priorDiff = this.getDiffs().getDiffById(prior);
{code}

I will follow your suggestion to fix the fsimage. Should I link this Jira to 
[HDFS-6908|https://issues.apache.org/jira/browse/HDFS-6908] and resolve it?

> deleteSnapshot runs into AssertionError
> ---
>
> Key: HDFS-9052
> URL: https://issues.apache.org/jira/browse/HDFS-9052
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Alex Ivanov
>
> CDH 5.0.5 upgraded from CDH 5.0.0 (Hadoop 2.3)
> Upon deleting a snapshot, we run into the following assertion error. The 
> scenario is as follows:
> 1. We have a program that deletes snapshots in reverse chronological order.
> 2. The program deletes a couple of hundred snapshots successfully but runs 
> into the following exception:
> java.lang.AssertionError: Element already exists: 
> element=useraction.log.crypto, DELETED=[useraction.log.crypto]
> 3. There seems to be an issue with that snapshot, which causes a file that 
> normally gets overwritten in every snapshot to be added to the SnapshotDiff 
> delete queue twice.
> 4. Once the deleteSnapshot is run on the problematic snapshot, if the 
> Namenode is restarted, it cannot be started again until the transaction is 
> removed from the EditLog.
> 5. Sometimes the bad snapshot can be deleted but the prior snapshot seems to 
> "inherit" the same issue.
> 6. The error below is from Namenode starting when the DELETE_SNAPSHOT 
> transaction is replayed from the EditLog.
> 2015-09-01 22:59:59,140 INFO  [IPC Server handler 0 on 8022] BlockStateChange 
> (BlockManager.java:logAddStoredBlock(2342)) - BLOCK* addStoredBlock: blockMap 
> updated: 10.52.209.77:1004 is added to 
> blk_1080833995_7093259{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-16de62e5-f6e2-4ea7-aad9-f8567bded7d7:NORMAL|FINALIZED]]}
>  size 0
> 2015-09-01 22:59:59,140 INFO  [IPC Server handler 0 on 8022] BlockStateChange 
> (BlockManager.java:logAddStoredBlock(2342)) - BLOCK* addStoredBlock: blockMap 
> updated: 10.52.209.77:1004 is added to 
> blk_1080833996_7093260{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-1def2b07-d87f-49dd-b14f-ef230342088d:NORMAL|FINALIZED]]}
>  size 0
> 2015-09-01 22:59:59,141 ERROR [IPC Server handler 0 on 8022] 
> namenode.FSEditLogLoader (FSEditLogLoader.java:loadEditRecords(232)) - 
> Encountered exception on operation DeleteSnapshotOp 
> [snapshotRoot=/data/tenants/pdx-svt.baseline84/wddata, 
> snapshotName=s2015022614_maintainer_soft_del, 
> RpcClientId=7942c957-a7cf-44c1-880d-6eea690e1b19, RpcCallId=1]
> 2015-09-01 22:59:59,141 ERROR [IPC Server handler 0 on 8022] 
> namenode.FSEditLogLoader (FSEditLogLoader.java:loadEditRecords(232)) - 
> Encountered exception on operation DeleteSnapshotOp 
> [snapshotRoot=/data/tenants/pdx-svt.baseline84/wddata, 
> snapshotName=s2015022614_maintainer_soft_del, 
> RpcClientId=7942c957-a7cf-44c1-880d-6eea690e1b19, RpcCallId=1]
> java.lang.AssertionError: Element already exists: 
> element=useraction.log.crypto, DELETED=[useraction.log.crypto]
> at org.apache.hadoop.hdfs.util.Diff.insert(Diff.java:193)
> at org.apache.hadoop.hdfs.util.Diff.delete(Diff.java:239)
> at org.apache.hadoop.hdfs.util.Diff.combinePosterior(Diff.java:462)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature$DirectoryDiff$2.initChildren(DirectoryWithSnapshotFeature.java:293)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature$DirectoryDiff$2.iterator(DirectoryWithSnapshotFeature.java:303)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature.cleanDeletedINode(DirectoryWithSnapshotFeature.java:531)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature.cleanDirectory(DirectoryWithSnapshotFeature.java:823)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.cleanSubtree(INodeDirectory.java:714)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.cleanSubtreeRecursively(INodeDirectory.java:684)
> at 
> 

[jira] [Commented] (HDFS-8968) New benchmark throughput tool for striping erasure coding

2015-09-15 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746771#comment-14746771
 ] 

Rui Li commented on HDFS-8968:
--

Hi [~drankye], the tool is intended to test and compare the throughput of EC 
and replication modes. It does require some EC APIs to run, e.g. 
{{DFSClient::setErasureCodingPolicy}}, so I didn't give it a general name. Let 
me know if you think otherwise.

> New benchmark throughput tool for striping erasure coding
> -
>
> Key: HDFS-8968
> URL: https://issues.apache.org/jira/browse/HDFS-8968
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Rui Li
> Attachments: HDFS-8968-HDFS-7285.1.patch, HDFS-8968-HDFS-7285.2.patch
>
>
> We need a new benchmark tool to measure the throughput of client writes and 
> reads, considering the following cases and factors:
> * 3-replica or striping;
> * write or read, stateful read or positional read;
> * which erasure coder;
> * striping cell size;
> * concurrent readers/writers using processes or threads.
> The tool should be easy to use and should preferably avoid unnecessary local 
> environment impact, such as local disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9004) Add upgrade domain to DatanodeInfo

2015-09-15 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-9004:
--
Attachment: HDFS-9004-2.patch

Thanks [~shahrs87]! Here is the updated patch to address your comment.

> Add upgrade domain to DatanodeInfo
> --
>
> Key: HDFS-9004
> URL: https://issues.apache.org/jira/browse/HDFS-9004
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-9004-2.patch, HDFS-9004.patch
>
>
> As part of upgrade domain feature, we first need to add upgrade domain string 
> to {{DatanodeInfo}}. It includes things like:
> * Add a new field to DatanodeInfo.
> * Modify protobuf for DatanodeInfo.
> * Update DatanodeInfo.getDatanodeReport to include upgrade domain.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7037) Using distcp to copy data from insecure to secure cluster via hftp doesn't work (branch-2 only)

2015-09-15 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746325#comment-14746325
 ] 

Aaron T. Myers commented on HDFS-7037:
--

[~wheat9] - it's been 5 months and I've received no response from you on this 
matter, and there's been no progress made on HADOOP-11701. As I said back in 
April, I don't think that fixing this bug in HFTP should be gated on 
implementing that new feature. Would you please consider changing your -1 to a 
-0, so that we can fix this issue for users who are encountering this problem?

> Using distcp to copy data from insecure to secure cluster via hftp doesn't 
> work  (branch-2 only)
> 
>
> Key: HDFS-7037
> URL: https://issues.apache.org/jira/browse/HDFS-7037
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security, tools
>Affects Versions: 2.6.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7037.001.patch
>
>
> This is a branch-2 only issue since hftp is only supported there. 
> Issuing "distcp hftp:// hdfs://" gave the 
> following failure exception:
> {code}
> 14/09/13 22:07:40 INFO tools.DelegationTokenFetcher: Error when dealing 
> remote token:
> java.io.IOException: Error when dealing remote token: Internal Server Error
>   at 
> org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.run(DelegationTokenFetcher.java:375)
>   at 
> org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.getDTfromRemote(DelegationTokenFetcher.java:238)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:252)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:247)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.getDelegationToken(HftpFileSystem.java:247)
>   at 
> org.apache.hadoop.hdfs.web.TokenAspect.ensureTokenInitialized(TokenAspect.java:140)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.addDelegationTokenParam(HftpFileSystem.java:337)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.openConnection(HftpFileSystem.java:324)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$LsParser.fetchList(HftpFileSystem.java:457)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$LsParser.getFileStatus(HftpFileSystem.java:472)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.getFileStatus(HftpFileSystem.java:501)
>   at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:248)
>   at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1623)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:81)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:342)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:154)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:121)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:390)
> 14/09/13 22:07:40 WARN security.UserGroupInformation: 
> PriviledgedActionException as:hadoopu...@xyz.com (auth:KERBEROS) 
> cause:java.io.IOException: Unable to obtain remote token
> 14/09/13 22:07:40 ERROR tools.DistCp: Exception encountered 
> java.io.IOException: Unable to obtain remote token
>   at 
> org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.getDTfromRemote(DelegationTokenFetcher.java:249)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:252)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:247)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.getDelegationToken(HftpFileSystem.java:247)
>   at 
> org.apache.hadoop.hdfs.web.TokenAspect.ensureTokenInitialized(TokenAspect.java:140)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.addDelegationTokenParam(HftpFileSystem.java:337)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.openConnection(HftpFileSystem.java:324)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$LsParser.fetchList(HftpFileSystem.java:457)
>   at 
> 

[jira] [Commented] (HDFS-9052) deleteSnapshot runs into AssertionError

2015-09-15 Thread Alex Ivanov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746329#comment-14746329
 ] 

Alex Ivanov commented on HDFS-9052:
---

[~jingzhao], please let me know if you have any additional comments on this 
since we're trying to figure out how to work around this problem in our 
production clusters.

> deleteSnapshot runs into AssertionError
> ---
>
> Key: HDFS-9052
> URL: https://issues.apache.org/jira/browse/HDFS-9052
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Alex Ivanov
>
> CDH 5.0.5 upgraded from CDH 5.0.0 (Hadoop 2.3)
> Upon deleting a snapshot, we run into the following assertion error. The 
> scenario is as follows:
> 1. We have a program that deletes snapshots in reverse chronological order.
> 2. The program deletes a couple of hundred snapshots successfully but runs 
> into the following exception:
> java.lang.AssertionError: Element already exists: 
> element=useraction.log.crypto, DELETED=[useraction.log.crypto]
> 3. There seems to be an issue with that snapshot, which causes a file that 
> normally gets overwritten in every snapshot to be added to the SnapshotDiff 
> delete queue twice.
> 4. Once the deleteSnapshot is run on the problematic snapshot, if the 
> Namenode is restarted, it cannot be started again until the transaction is 
> removed from the EditLog.
> 5. Sometimes the bad snapshot can be deleted but the prior snapshot seems to 
> "inherit" the same issue.
> 6. The error below is from Namenode starting when the DELETE_SNAPSHOT 
> transaction is replayed from the EditLog.
> 2015-09-01 22:59:59,140 INFO  [IPC Server handler 0 on 8022] BlockStateChange 
> (BlockManager.java:logAddStoredBlock(2342)) - BLOCK* addStoredBlock: blockMap 
> updated: 10.52.209.77:1004 is added to 
> blk_1080833995_7093259{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-16de62e5-f6e2-4ea7-aad9-f8567bded7d7:NORMAL|FINALIZED]]}
>  size 0
> 2015-09-01 22:59:59,140 INFO  [IPC Server handler 0 on 8022] BlockStateChange 
> (BlockManager.java:logAddStoredBlock(2342)) - BLOCK* addStoredBlock: blockMap 
> updated: 10.52.209.77:1004 is added to 
> blk_1080833996_7093260{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-1def2b07-d87f-49dd-b14f-ef230342088d:NORMAL|FINALIZED]]}
>  size 0
> 2015-09-01 22:59:59,141 ERROR [IPC Server handler 0 on 8022] 
> namenode.FSEditLogLoader (FSEditLogLoader.java:loadEditRecords(232)) - 
> Encountered exception on operation DeleteSnapshotOp 
> [snapshotRoot=/data/tenants/pdx-svt.baseline84/wddata, 
> snapshotName=s2015022614_maintainer_soft_del, 
> RpcClientId=7942c957-a7cf-44c1-880d-6eea690e1b19, RpcCallId=1]
> 2015-09-01 22:59:59,141 ERROR [IPC Server handler 0 on 8022] 
> namenode.FSEditLogLoader (FSEditLogLoader.java:loadEditRecords(232)) - 
> Encountered exception on operation DeleteSnapshotOp 
> [snapshotRoot=/data/tenants/pdx-svt.baseline84/wddata, 
> snapshotName=s2015022614_maintainer_soft_del, 
> RpcClientId=7942c957-a7cf-44c1-880d-6eea690e1b19, RpcCallId=1]
> java.lang.AssertionError: Element already exists: 
> element=useraction.log.crypto, DELETED=[useraction.log.crypto]
> at org.apache.hadoop.hdfs.util.Diff.insert(Diff.java:193)
> at org.apache.hadoop.hdfs.util.Diff.delete(Diff.java:239)
> at org.apache.hadoop.hdfs.util.Diff.combinePosterior(Diff.java:462)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature$DirectoryDiff$2.initChildren(DirectoryWithSnapshotFeature.java:293)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature$DirectoryDiff$2.iterator(DirectoryWithSnapshotFeature.java:303)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature.cleanDeletedINode(DirectoryWithSnapshotFeature.java:531)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature.cleanDirectory(DirectoryWithSnapshotFeature.java:823)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.cleanSubtree(INodeDirectory.java:714)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.cleanSubtreeRecursively(INodeDirectory.java:684)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature.cleanDirectory(DirectoryWithSnapshotFeature.java:830)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.cleanSubtree(INodeDirectory.java:714)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectorySnapshottable.removeSnapshot(INodeDirectorySnapshottable.java:341)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:238)
> at 
> 

[jira] [Commented] (HDFS-6407) Add sorting and pagination in the datanode tab of the NN Web UI

2015-09-15 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746342#comment-14746342
 ] 

Ravi Prakash commented on HDFS-6407:


Hi Benoy! 
https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery.dataTables.min.js#L1
 - the first line of the minified file contains the version. If you feel that's 
not adequate, please leave a comment on HDFS-9084 and I can make the change 
there.

> Add sorting and pagination in the datanode tab of the NN Web UI
> ---
>
> Key: HDFS-6407
> URL: https://issues.apache.org/jira/browse/HDFS-6407
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Nathan Roberts
>Assignee: Haohui Mai
>Priority: Critical
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: 002-datanodes-sorted-capacityUsed.png, 
> 002-datanodes.png, 002-filebrowser.png, 002-snapshots.png, 
> HDFS-6407-002.patch, HDFS-6407-003.patch, HDFS-6407.008.patch, 
> HDFS-6407.009.patch, HDFS-6407.010.patch, HDFS-6407.011.patch, 
> HDFS-6407.4.patch, HDFS-6407.5.patch, HDFS-6407.6.patch, HDFS-6407.7.patch, 
> HDFS-6407.patch, browse_directory.png, datanodes.png, snapshots.png, sorting 
> 2.png, sorting table.png
>
>
> The old UI supported clicking on a column header to sort on that column. The 
> new UI seems to have dropped this very useful feature.
> There are a few tables in the Namenode UI that display datanode information, 
> directory listings and snapshots.
> When there are many items in the tables, it is useful to have the ability to 
> sort on the different columns.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7037) Using distcp to copy data from insecure to secure cluster via hftp doesn't work (branch-2 only)

2015-09-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746344#comment-14746344
 ] 

Hadoop QA commented on HDFS-7037:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12668640/HDFS-7037.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 34ef1a0 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12459/console |


This message was automatically generated.

> Using distcp to copy data from insecure to secure cluster via hftp doesn't 
> work  (branch-2 only)
> 
>
> Key: HDFS-7037
> URL: https://issues.apache.org/jira/browse/HDFS-7037
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security, tools
>Affects Versions: 2.6.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7037.001.patch
>
>
> This is a branch-2 only issue since hftp is only supported there. 
> Issuing "distcp hftp:// hdfs://" gave the 
> following failure exception:
> {code}
> 14/09/13 22:07:40 INFO tools.DelegationTokenFetcher: Error when dealing 
> remote token:
> java.io.IOException: Error when dealing remote token: Internal Server Error
>   at 
> org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.run(DelegationTokenFetcher.java:375)
>   at 
> org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.getDTfromRemote(DelegationTokenFetcher.java:238)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:252)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:247)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.getDelegationToken(HftpFileSystem.java:247)
>   at 
> org.apache.hadoop.hdfs.web.TokenAspect.ensureTokenInitialized(TokenAspect.java:140)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.addDelegationTokenParam(HftpFileSystem.java:337)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.openConnection(HftpFileSystem.java:324)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$LsParser.fetchList(HftpFileSystem.java:457)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$LsParser.getFileStatus(HftpFileSystem.java:472)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.getFileStatus(HftpFileSystem.java:501)
>   at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:248)
>   at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1623)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:81)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:342)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:154)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:121)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:390)
> 14/09/13 22:07:40 WARN security.UserGroupInformation: 
> PriviledgedActionException as:hadoopu...@xyz.com (auth:KERBEROS) 
> cause:java.io.IOException: Unable to obtain remote token
> 14/09/13 22:07:40 ERROR tools.DistCp: Exception encountered 
> java.io.IOException: Unable to obtain remote token
>   at 
> org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.getDTfromRemote(DelegationTokenFetcher.java:249)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:252)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:247)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.getDelegationToken(HftpFileSystem.java:247)
>   at 
> org.apache.hadoop.hdfs.web.TokenAspect.ensureTokenInitialized(TokenAspect.java:140)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.addDelegationTokenParam(HftpFileSystem.java:337)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.openConnection(HftpFileSystem.java:324)
>   at 
> 

[jira] [Commented] (HDFS-7037) Using distcp to copy data from insecure to secure cluster via hftp doesn't work (branch-2 only)

2015-09-15 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746350#comment-14746350
 ] 

Haohui Mai commented on HDFS-7037:
--

It looks like nothing has changed so far. The security concerns remain 
unaddressed, so I think my -1 still holds. To echo my previous comments: I'm 
willing to change it to a -0 if there are solutions like HADOOP-11701 to 
limit the impact of such a configuration. I suggest doing something along the 
lines of HADOOP-11701 in this patch to wrap up this jira.

> Using distcp to copy data from insecure to secure cluster via hftp doesn't 
> work  (branch-2 only)
> 
>
> Key: HDFS-7037
> URL: https://issues.apache.org/jira/browse/HDFS-7037
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security, tools
>Affects Versions: 2.6.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7037.001.patch
>
>
> This is a branch-2 only issue since hftp is only supported there. 
> Issuing "distcp hftp:// hdfs://" gave the 
> following failure exception:
> {code}
> 14/09/13 22:07:40 INFO tools.DelegationTokenFetcher: Error when dealing 
> remote token:
> java.io.IOException: Error when dealing remote token: Internal Server Error
>   at 
> org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.run(DelegationTokenFetcher.java:375)
>   at 
> org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.getDTfromRemote(DelegationTokenFetcher.java:238)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:252)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:247)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.getDelegationToken(HftpFileSystem.java:247)
>   at 
> org.apache.hadoop.hdfs.web.TokenAspect.ensureTokenInitialized(TokenAspect.java:140)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.addDelegationTokenParam(HftpFileSystem.java:337)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.openConnection(HftpFileSystem.java:324)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$LsParser.fetchList(HftpFileSystem.java:457)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$LsParser.getFileStatus(HftpFileSystem.java:472)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.getFileStatus(HftpFileSystem.java:501)
>   at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:248)
>   at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1623)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:81)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:342)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:154)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:121)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:390)
> 14/09/13 22:07:40 WARN security.UserGroupInformation: 
> PriviledgedActionException as:hadoopu...@xyz.com (auth:KERBEROS) 
> cause:java.io.IOException: Unable to obtain remote token
> 14/09/13 22:07:40 ERROR tools.DistCp: Exception encountered 
> java.io.IOException: Unable to obtain remote token
>   at 
> org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.getDTfromRemote(DelegationTokenFetcher.java:249)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:252)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:247)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.getDelegationToken(HftpFileSystem.java:247)
>   at 
> org.apache.hadoop.hdfs.web.TokenAspect.ensureTokenInitialized(TokenAspect.java:140)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.addDelegationTokenParam(HftpFileSystem.java:337)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem.openConnection(HftpFileSystem.java:324)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$LsParser.fetchList(HftpFileSystem.java:457)
>   at 
> 

[jira] [Commented] (HDFS-8953) DataNode Metrics logging

2015-09-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746360#comment-14746360
 ] 

Hudson commented on HDFS-8953:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #374 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/374/])
HDFS-8953. DataNode Metrics logging (Contributed by Kanaka Kumar Avvaru) 
(vinayakumarb: rev ce69c9b54c642cfbe789fc661cfc7dcbb07b4ac5)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMetricsLogger.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* hadoop-common-project/hadoop-common/src/main/conf/log4j.properties
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/MetricsLoggerTask.java


> DataNode Metrics logging
> 
>
> Key: HDFS-8953
> URL: https://issues.apache.org/jira/browse/HDFS-8953
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kanaka Kumar Avvaru
>Assignee: Kanaka Kumar Avvaru
> Fix For: 2.8.0
>
> Attachments: HDFS-8953-01.patch, HDFS-8953-02.patch, 
> HDFS-8953-03.patch, HDFS-8953-04.patch, HDFS-8953-05.patch, HDFS-8953-06.patch
>
>
> HDFS-8880 added metrics logging at the NameNode. Similarly, this JIRA is to 
> add a separate logger for metrics at the DN



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8594) Erasure Coding: cache ErasureCodingZone

2015-09-15 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su resolved HDFS-8594.
-
Resolution: Not A Problem

> Erasure Coding: cache ErasureCodingZone
> ---
>
> Key: HDFS-8594
> URL: https://issues.apache.org/jira/browse/HDFS-8594
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Walter Su
>
> scenario 1:
> We have 100M files in an EC zone. Every time we open a file, we need to get 
> the ECSchema (which requires getting the EC zone first), so getting the EC 
> zone is frequent.
> scenario 2:
> We have an EC zone "/d1" and a file at "/d1/d2/d3/.../dN". We have to search 
> the xAttrs of dN, dN-1, ..., d3, d2, d1 until we find the EC zone in d1's 
> xAttr.
> It would be better to cache the EC zones, like 
> EncryptionZoneManager#encryptionZones
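
A minimal sketch of the caching idea, modeled on how EncryptionZoneManager keeps {{encryptionZones}} in a map keyed by the zone root's inode id (all names and types here are hypothetical, not from an actual patch):

{code}
import java.util.TreeMap;

class ECZoneCache {
  /** Zone-root inode id -> schema name (stand-in for the real ECSchema). */
  private final TreeMap<Long, String> ecZones = new TreeMap<>();

  synchronized void addZone(long inodeId, String schemaName) {
    ecZones.put(inodeId, schemaName);
  }

  /** Resolve a file by walking its ancestor inode ids from leaf to root. */
  synchronized String getZoneSchema(long[] ancestorInodeIds) {
    for (long id : ancestorInodeIds) {  // dN, dN-1, ..., d1
      String schema = ecZones.get(id);
      if (schema != null) {
        return schema;                  // found the enclosing EC zone
      }
    }
    return null;                        // not inside any EC zone
  }
}
{code}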



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9065) Include commas on # of files, blocks, total filesystem objects in NN Web UI

2015-09-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14744950#comment-14744950
 ] 

Hudson commented on HDFS-9065:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #393 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/393/])
HDFS-9065. Include commas on # of files, blocks, total filesystem objects in NN 
Web UI. Contributed by Daniel Templeton. (wheat9: rev 
d57d21c15942275bff6bb98876637950d73f)
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dfs-dust.js
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Include commas on # of files, blocks, total filesystem objects in NN Web UI
> ---
>
> Key: HDFS-9065
> URL: https://issues.apache.org/jira/browse/HDFS-9065
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9065.001.patch, HDFS-9065.002.patch, 
> HDFS-9065.003.patch, HDFS-9065.003.patch
>
>
> Include commas on the number of files, blocks, and total filesystem objects 
> in the NN Web UI (please see example below) to make the numbers easier to 
> read.
> Current format:
> 3236 files and directories, 1409 blocks = 4645 total filesystem object(s).
> Proposed format:
> 3,236 files and directories, 1,409 blocks = 4,645 total filesystem object(s).
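
For reference, the formatting itself is trivial; the patch does the equivalent on the client side in dfs-dust.js, but a Java one-liner illustrates the intent:

{code}
public class CommaFormat {
  public static void main(String[] args) {
    long files = 3236, blocks = 1409;
    // The ',' flag of java.util.Formatter inserts locale-specific
    // grouping separators.
    System.out.println(String.format(
        "%,d files and directories, %,d blocks = %,d total filesystem object(s).",
        files, blocks, files + blocks));
  }
}
{code}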



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9040) Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests to Coordinator)

2015-09-15 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14745021#comment-14745021
 ] 

Walter Su commented on HDFS-9040:
-

Oh, I see you flush all data before calling checkStreamerFailures. Please 
ignore my previous comment.

> Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests 
> to Coordinator)
> ---
>
> Key: HDFS-9040
> URL: https://issues.apache.org/jira/browse/HDFS-9040
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
> Attachments: HDFS-9040-HDFS-7285.002.patch, HDFS-9040.00.patch, 
> HDFS-9040.001.wip.patch, HDFS-9040.02.bgstreamer.patch
>
>
> The general idea is to simplify error handling logic.
> Proposal 1:
> A BlockGroupDataStreamer communicates with the NN to allocate/update blocks, 
> and StripedDataStreamers only have to stream blocks to DNs.
> Proposal 2:
> See below the 
> [comment|https://issues.apache.org/jira/browse/HDFS-9040?focusedCommentId=14741388=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14741388]
>  from [~jingzhao].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9065) Include commas on # of files, blocks, total filesystem objects in NN Web UI

2015-09-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14744964#comment-14744964
 ] 

Hudson commented on HDFS-9065:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1126 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1126/])
HDFS-9065. Include commas on # of files, blocks, total filesystem objects in NN 
Web UI. Contributed by Daniel Templeton. (wheat9: rev 
d57d21c15942275bff6bb98876637950d73f)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dfs-dust.js
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js


> Include commas on # of files, blocks, total filesystem objects in NN Web UI
> ---
>
> Key: HDFS-9065
> URL: https://issues.apache.org/jira/browse/HDFS-9065
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9065.001.patch, HDFS-9065.002.patch, 
> HDFS-9065.003.patch, HDFS-9065.003.patch
>
>
> Include commas on the number of files, blocks, and total filesystem objects 
> in the NN Web UI (please see example below) to make the numbers easier to 
> read.
> Current format:
> 3236 files and directories, 1409 blocks = 4645 total filesystem object(s).
> Proposed format:
> 3,236 files and directories, 1,409 blocks = 4,645 total filesystem object(s).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8799) Erasure Coding: add tests for namenode processing corrupt striped blocks

2015-09-15 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8799:

Attachment: HDFS-8799-HDFS-7285.02.patch

Thanks [~tasanuma0829], [~zhz]. Updated the patch.

> Erasure Coding: add tests for namenode processing corrupt striped blocks
> 
>
> Key: HDFS-8799
> URL: https://issues.apache.org/jira/browse/HDFS-8799
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Minor
> Attachments: HDFS-8799-HDFS-7285.01.patch, 
> HDFS-8799-HDFS-7285.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9065) Include commas on # of files, blocks, total filesystem objects in NN Web UI

2015-09-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14744994#comment-14744994
 ] 

Hudson commented on HDFS-9065:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #371 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/371/])
HDFS-9065. Include commas on # of files, blocks, total filesystem objects in NN 
Web UI. Contributed by Daniel Templeton. (wheat9: rev 
d57d21c15942275bff6bb98876637950d73f)
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dfs-dust.js
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html


> Include commas on # of files, blocks, total filesystem objects in NN Web UI
> ---
>
> Key: HDFS-9065
> URL: https://issues.apache.org/jira/browse/HDFS-9065
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9065.001.patch, HDFS-9065.002.patch, 
> HDFS-9065.003.patch, HDFS-9065.003.patch
>
>
> Include commas on the number of files, blocks, and total filesystem objects 
> in the NN Web UI (please see example below) to make the numbers easier to 
> read.
> Current format:
> 3236 files and directories, 1409 blocks = 4645 total filesystem object(s).
> Proposed format:
> 3,236 files and directories, 1,409 blocks = 4,645 total filesystem object(s).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9010) Replace NameNode.DEFAULT_PORT with HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key

2015-09-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14744995#comment-14744995
 ] 

Hudson commented on HDFS-9010:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #371 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/371/])
HDFS-9010. Replace NameNode.DEFAULT_PORT with 
HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key. Contributed by 
Mingliang Liu. (wheat9: rev 76957a485b526468498f93e443544131a88b5684)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAppendSnapshotTruncate.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDefaultNameNodePort.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithHANameNodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientFailover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/NNHAServiceTarget.java


> Replace NameNode.DEFAULT_PORT with 
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key
> 
>
> Key: HDFS-9010
> URL: https://issues.apache.org/jira/browse/HDFS-9010
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9010.000.patch, HDFS-9010.001.patch, 
> HDFS-9010.002.patch, HDFS-9010.003.patch, HDFS-9010.004.patch, 
> HDFS-9010.005.patch
>
>
> The {{NameNode.DEFAULT_PORT}} static attribute is stale, as we use the 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}} config value.
> This jira tracks the effort of replacing {{NameNode.DEFAULT_PORT}} with 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}}. Meanwhile, we mark 
> the {{NameNode.DEFAULT_PORT}} as _@Deprecated_ before removing it entirely.
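
A sketch of the transitional step described above (illustrative; the actual patch may differ):

{code}
// Keep the old constant as a deprecated alias of the config-key default so
// existing callers still compile while they migrate to the config key.
public class NameNodePortAlias {
  @Deprecated
  public static final int DEFAULT_PORT =
      org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT;
}
{code}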



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9040) Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests to Coordinator)

2015-09-15 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14745016#comment-14745016
 ] 

Walter Su commented on HDFS-9040:
-

1. toClose is not safe. If we write a small file and close quickly, we don't 
{{sleep}}, but some streamers could still be in the {{PIPELINE_SETUP_CREATE}} stage. 
{code}
if (newFailed.size() > 0 && !toClose) {
  // for healthy streamers, wait till all of them have fetched the new block
  while (!readyToHandleFailure()) {
sleep(100, "wait for all the streamers to pick the new block");
  }
}
{code}
How about doing this:
{code}
  private boolean readyToHandleFailure(boolean toClose) {
    for (int i = 0; i < numAllBlocks; i++) {
      final StripedDataStreamer streamer = getStripedDataStreamer(i);
      if (!streamer.isHealthy()) {
        continue;
      } else if (streamer.getStage() == BlockConstructionStage.DATA_STREAMING) {
        continue;
      } else if (isEmptyStreamer(i, toClose)) {
        continue;
      }
      return false;
    }
    return true;
  }

  /**
   * Return true if the blockGroup is too small and the streamer never gets
   * the chance to open a blockStream.
   */
  private boolean isEmptyStreamer(int i, boolean toClose) {
    if (!toClose) {
      // there could be more data coming
      return false;
    }
    long numBytes = currentBlockGroup.getNumBytes();
    if (i >= numDataBlocks && numBytes > 0) {
      return false;
    }
    if (numBytes >= numDataBlocks * cellSize) {
      // blockGroup larger than a full stripe, so no empty streamer
      return false;
    } else if (numBytes == 0) {
      return true;
    } else {
      // no data for this streamer; it'll always be PIPELINE_SETUP_CREATE
      return i > (numBytes - 1) / cellSize;
    }
  }

  private void checkStreamerFailures(boolean toClose) throws IOException {
    List<StripedDataStreamer> newFailed = checkStreamers();
    if (newFailed.size() > 0) {
      // for healthy streamers, wait till all of them have fetched the new block
      while (!readyToHandleFailure(toClose)) {
        sleep(100, "wait for all the streamers to pick the new block");
      }
    }
  }
{code}

> Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests 
> to Coordinator)
> ---
>
> Key: HDFS-9040
> URL: https://issues.apache.org/jira/browse/HDFS-9040
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
> Attachments: HDFS-9040-HDFS-7285.002.patch, HDFS-9040.00.patch, 
> HDFS-9040.001.wip.patch, HDFS-9040.02.bgstreamer.patch
>
>
> The general idea is to simplify error handling logic.
> Proposal 1:
> A BlockGroupDataStreamer communicates with the NN to allocate/update blocks, 
> and StripedDataStreamers only have to stream blocks to DNs.
> Proposal 2:
> See below the 
> [comment|https://issues.apache.org/jira/browse/HDFS-9040?focusedCommentId=14741388=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14741388]
>  from [~jingzhao].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8373) Ec files can't be deleted into Trash because of that Trash isn't EC zone.

2015-09-15 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang resolved HDFS-8373.
-
Resolution: Not A Problem

With HDFS-8833 we should be able to delete EC files into Trash.

> Ec files can't be deleted into Trash because of that Trash isn't EC zone.
> -
>
> Key: HDFS-8373
> URL: https://issues.apache.org/jira/browse/HDFS-8373
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: GAO Rui
>Assignee: Brahma Reddy Battula
>  Labels: EC
>
> When EC files are deleted, they are moved into the {{Trash}} directory. 
> But EC files can only be placed under an EC zone, so EC files which have 
> been deleted cannot be moved to the {{Trash}} directory.
> The problem could be solved by creating an EC zone (folder) inside {{Trash}} 
> to contain deleted EC files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9055) WebHDFS REST v2

2015-09-15 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14745872#comment-14745872
 ] 

Colin Patrick McCabe commented on HDFS-9055:


[~aw], can you explain what you find "severely lacking" about webHDFS?  If the 
issue is that we don't have commands for taking snapshots, truncating files, or 
setting quotas, those features seem easy to add.

> WebHDFS REST v2
> ---
>
> Key: HDFS-9055
> URL: https://issues.apache.org/jira/browse/HDFS-9055
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>
> There are starting to be enough changes fixing and adding missing 
> functionality to webhdfs that we should probably update to REST v2. This 
> also gives us an opportunity to deal with some incompatibility issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8953) DataNode Metrics logging

2015-09-15 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8953:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.
Thanks [~kanaka] for the contribution.
Thanks [~arpitagarwal] and [~ste...@apache.org].

> DataNode Metrics logging
> 
>
> Key: HDFS-8953
> URL: https://issues.apache.org/jira/browse/HDFS-8953
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kanaka Kumar Avvaru
>Assignee: Kanaka Kumar Avvaru
> Fix For: 2.8.0
>
> Attachments: HDFS-8953-01.patch, HDFS-8953-02.patch, 
> HDFS-8953-03.patch, HDFS-8953-04.patch, HDFS-8953-05.patch, HDFS-8953-06.patch
>
>
> HDFS-8880 added metrics logging at the NameNode. Similarly, this JIRA is to 
> add a separate logger for metrics at the DN



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9083) Replication violates block placement policy.

2015-09-15 Thread Rushabh S Shah (JIRA)
Rushabh S Shah created HDFS-9083:


 Summary: Replication violates block placement policy.
 Key: HDFS-9083
 URL: https://issues.apache.org/jira/browse/HDFS-9083
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS, namenode
Affects Versions: 2.6.0
Reporter: Rushabh S Shah


Recently we have been noticing many cases in which all the replicas of a block 
reside on the same rack.
During block creation, the block placement policy was honored.
But after node failure events in some specific sequences, the block ends up in 
such a state.

On investigating further I found out that BlockManager#blockHasEnoughRacks 
depends on the config (net.topology.script.file.name)
{noformat}
 if (!this.shouldCheckForEnoughRacks) {
  return true;
}
{noformat}
We specify a DNSToSwitchMapping implementation (our own custom implementation) 
via net.topology.node.switch.mapping.impl and no longer use the 
net.topology.script.file.name config.
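
A simplified illustration of the suspected logic (paraphrased from BlockManager; exact field and key handling may differ across versions):

{code}
import org.apache.hadoop.conf.Configuration;

public class RackCheckIllustration {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // The rack check is enabled only when the script-based mapping is set,
    // so a cluster that configures topology via
    // net.topology.node.switch.mapping.impl silently disables the check and
    // blockHasEnoughRacks() always returns true.
    boolean shouldCheckForEnoughRacks =
        conf.get("net.topology.script.file.name") != null;
    System.out.println("rack check enabled: " + shouldCheckForEnoughRacks);
  }
}
{code}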




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7351) Document the HDFS Erasure Coding feature

2015-09-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-7351:
--
   Resolution: Fixed
Fix Version/s: HDFS-7285
   Status: Resolved  (was: Patch Available)

LGTM, committed! Thanks to Uma for working on this and to Zhe for updating it.

I saw a few typos but I'll just fix them in a follow-on patch.

> Document the HDFS Erasure Coding feature
> 
>
> Key: HDFS-7351
> URL: https://issues.apache.org/jira/browse/HDFS-7351
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Fix For: HDFS-7285
>
> Attachments: HDFS-7351-HDFS-7285-01.patch, 
> HDFS-7351-HDFS-7285-02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9067) o.a.h.hdfs.server.datanode.fsdataset.impl.TestLazyWriter is failing in trunk

2015-09-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14745785#comment-14745785
 ] 

Hudson commented on HDFS-9067:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #394 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/394/])
HDFS-9067. o.a.h.hdfs.server.datanode.fsdataset.impl.TestLazyWriter is failing 
in trunk (Contributed by Surendra Singh Lilhore) (vinayakumarb: rev 
a4405674919d14be89bc4da22db2f417b5ae6ac3)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsConfig.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyWriter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyPersistTestCase.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/hadoop-metrics2.properties


> o.a.h.hdfs.server.datanode.fsdataset.impl.TestLazyWriter is failing in trunk
> 
>
> Key: HDFS-9067
> URL: https://issues.apache.org/jira/browse/HDFS-9067
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Surendra Singh Lilhore
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HDFS-9067-001.patch, HDFS-9067-002.patch, 
> HDFS-9067-003.patch, HDFS-9067.patch
>
>
> The test TestLazyWriter is consistently failing in trunk. For example:
> https://builds.apache.org/job/PreCommit-HDFS-Build/12407/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HDFS-9057) allow/disallow snapshots via webhdfs

2015-09-15 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-9057 started by Brahma Reddy Battula.
--
> allow/disallow snapshots via webhdfs
> 
>
> Key: HDFS-9057
> URL: https://issues.apache.org/jira/browse/HDFS-9057
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9057.patch
>
>
> We should be able to allow and disallow directories for snapshotting via 
> WebHDFS.
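
To illustrate what the call could look like, here is a sketch assuming the 
patch follows existing WebHDFS conventions (an HTTP PUT with an op parameter); 
the host, path, and ALLOWSNAPSHOT op name are assumptions until the patch lands:
{noformat}
import java.net.HttpURLConnection;
import java.net.URL;

public class AllowSnapshotSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical endpoint: PUT .../webhdfs/v1/<path>?op=ALLOWSNAPSHOT
    URL url = new URL("http://namenode.example.com:50070/webhdfs/v1/data/dir1"
        + "?op=ALLOWSNAPSHOT&user.name=hdfs");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    System.out.println(conn.getResponseCode()); // expect 200 on success
    conn.disconnect();
  }
}
{noformat}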



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9047) deprecate libwebhdfs in branch-2; remove from trunk

2015-09-15 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14745854#comment-14745854
 ] 

Colin Patrick McCabe commented on HDFS-9047:


[~aw], if you think "some random code on github" is better, then please see if 
you can integrate that (assuming its license is compatible) and get rid of 
libwebhdfs.  I'm just saying that we shouldn't remove it without a replacement.

> deprecate libwebhdfs in branch-2; remove from trunk
> ---
>
> Key: HDFS-9047
> URL: https://issues.apache.org/jira/browse/HDFS-9047
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Allen Wittenauer
>
> This library is basically a mess:
> * It's not part of the mvn package
> * It's missing functionality and barely maintained
> * It's not in the precommit runs, so it doesn't get exercised regularly
> * It's not part of the unit tests (at least, as far as I can see)
> * It isn't documented in any official documentation
> But most importantly:
> * It fails at its primary mission of being pure C (HDFS-3917 is STILL open)
> Let's cut our losses and just remove it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9055) WebHDFS REST v2

2015-09-15 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14745880#comment-14745880
 ] 

Haohui Mai commented on HDFS-9055:
--

I agree with [~cmccabe] that the main focus of WebHDFS is compatibility.

IMO, to ensure compatibility, it makes a lot of sense to carefully think 
through which features need to be added, so that it is possible to verify 
compatibility across different Hadoop versions.

For command-and-control purposes, a web-friendly interface is definitely 
useful, but I don't think it needs to reside within the scope of WebHDFS.

> WebHDFS REST v2
> ---
>
> Key: HDFS-9055
> URL: https://issues.apache.org/jira/browse/HDFS-9055
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>
> There are starting to be enough changes to fix and add missing functionality 
> to webhdfs that we should probably update to a REST v2. This also gives us 
> an opportunity to deal with some incompatibility issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9008) Balancer#Parameters class could use a builder pattern

2015-09-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14745909#comment-14745909
 ] 

Hudson commented on HDFS-9008:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8455 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8455/])
HDFS-9008. Balancer#Parameters class could use a builder pattern. (Chris Trezzo 
via mingma) (mingma: rev 083b44c136ea5aba660fcd1dddbb2d21513b4456)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithNodeGroup.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithHANameNodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/BalancerParameters.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithMultipleNameNodes.java


> Balancer#Parameters class could use a builder pattern
> -
>
> Key: HDFS-9008
> URL: https://issues.apache.org/jira/browse/HDFS-9008
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9008-trunk-v1.patch, HDFS-9008-trunk-v2.patch, 
> HDFS-9008-trunk-v3.patch, HDFS-9008-trunk-v4.patch, HDFS-9008-trunk-v5.patch
>
>
> The Balancer#Parameters class violates a few checkstyle rules.
> # Instance variables are not privately scoped and do not have accessor 
> methods.
> # The Balancer#Parameters constructor has too many arguments (according to 
> checkstyle).
> Changing this class to use the builder pattern could fix both of these style 
> issues; a sketch follows below.
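
For reference, here is a minimal sketch of the proposed shape. The field 
names are illustrative only, not the actual BalancerParameters fields:
{noformat}
// Immutable parameters object built via a fluent builder.
public final class BalancerParametersSketch {
  private final double threshold;        // privately scoped, with accessors
  private final int maxIdleIterations;

  private BalancerParametersSketch(Builder b) {
    this.threshold = b.threshold;
    this.maxIdleIterations = b.maxIdleIterations;
  }

  public double getThreshold() { return threshold; }
  public int getMaxIdleIterations() { return maxIdleIterations; }

  public static class Builder {
    private double threshold = 10.0;     // defaults live in the builder
    private int maxIdleIterations = 5;

    public Builder setThreshold(double t) { threshold = t; return this; }
    public Builder setMaxIdleIterations(int n) { maxIdleIterations = n; return this; }
    public BalancerParametersSketch build() { return new BalancerParametersSketch(this); }
  }
}
{noformat}
A caller then writes {{new BalancerParametersSketch.Builder().setThreshold(5.0).build()}}, 
which avoids the long constructor and keeps every field private.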



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

