[jira] [Updated] (HDFS-8251) Move the synthetic load generator into its own package

2015-04-25 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-8251:
---
Affects Version/s: 3.0.0

 Move the synthetic load generator into its own package
 --

 Key: HDFS-8251
 URL: https://issues.apache.org/jira/browse/HDFS-8251
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: J.Andreina

 It doesn't really make sense for the HDFS load generator to be a part of the 
 (extremely large) mapreduce jobclient package. It should be pulled out and 
 put in its own package, probably in hadoop-tools.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8254) In StripedDataStreamer, it is hard to tolerate datanode failure in the leading streamer

2015-04-25 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDFS-8254:
-

 Summary: In StripedDataStreamer, it is hard to tolerate datanode 
failure in the leading streamer
 Key: HDFS-8254
 URL: https://issues.apache.org/jira/browse/HDFS-8254
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


StripedDataStreamer javadoc is shown below.
{code}
 * The StripedDataStreamer class is used by {@link DFSStripedOutputStream}.
 * There are two kinds of StripedDataStreamer, leading streamer and ordinary
 * stream. Leading streamer requests a block group from NameNode, unwraps
 * it to located blocks and transfers each located block to its corresponding
 * ordinary streamer via a blocking queue.
{code}
The leading streamer is the streamer with index 0.  When the datanode of the 
leading streamer fails, the other streamers cannot continue since no one will 
request a block group from NameNode anymore.
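As an illustration of why the whole group stalls, here is a minimal, self-contained sketch of the blocking-queue handoff described in the javadoc; the class and variable names are invented for the sketch and are not the real StripedDataStreamer code.
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Only the leading streamer (index 0) requests block groups and feeds the
// queues, so the ordinary streamers block forever if its datanode fails.
public class StreamerHandoffSketch {
  public static void main(String[] args) throws InterruptedException {
    final int numStreamers = 3;
    // One queue per ordinary streamer, fed exclusively by the leading streamer.
    final List<BlockingQueue<String>> queues = new ArrayList<>();
    for (int i = 0; i < numStreamers; i++) {
      queues.add(new LinkedBlockingQueue<String>());
    }
    // Ordinary streamers block until the leader hands them a "located block".
    for (int i = 1; i < numStreamers; i++) {
      final int idx = i;
      new Thread(new Runnable() {
        public void run() {
          try {
            // Blocks forever if the leading streamer never produces anything.
            String block = queues.get(idx).take();
            System.out.println("ordinary streamer " + idx + " got " + block);
          } catch (InterruptedException ignored) {
          }
        }
      }).start();
    }
    // The leading streamer's role: if its datanode fails before this point,
    // nothing is ever enqueued and the take() calls above never return.
    for (int i = 1; i < numStreamers; i++) {
      queues.get(i).put("locatedBlock-" + i);
    }
  }
}
{code}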



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6888) Remove audit logging of getFileInfo()

2015-04-25 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512817#comment-14512817
 ] 

Allen Wittenauer commented on HDFS-6888:


One of the key points of the HDFS audit log is to show accesses to files, 
including for security purposes. If a user can legitimately use getFileInfo(), 
then it needs to get logged.

 Remove audit logging of getFileInfo()
 -

 Key: HDFS-6888
 URL: https://issues.apache.org/jira/browse/HDFS-6888
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Kihwal Lee
Assignee: Chen He
  Labels: log
 Attachments: HDFS-6888-2.patch, HDFS-6888-3.patch, HDFS-6888-4.patch, 
 HDFS-6888-5.patch, HDFS-6888-6.patch, HDFS-6888.patch


 The audit logging of getFileInfo() was added in HDFS-3733.  Since this is 
 one of the most frequently called methods, users have noticed that the audit 
 log is now filled with it.  Since we now have HTTP request logging, this 
 seems unnecessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8174) Update replication count to live rep count in fsck report

2015-04-25 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-8174:
-
Status: Patch Available  (was: Open)

 Update replication count to live rep count in fsck report
 -

 Key: HDFS-8174
 URL: https://issues.apache.org/jira/browse/HDFS-8174
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: J.Andreina
Assignee: J.Andreina
Priority: Minor
 Attachments: HDFS-8174.1.patch


 When one of the replicas is decommissioned, the fsck report shows a repl 
 count that is one less than the number of replicas displayed. 
 {noformat}
 blk_x len=y repl=3 [dn1, dn2, dn3, dn4]
 {noformat}
 Update the description from repl to Live_repl
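 For illustration, after the relabeling the same report line would read roughly 
 as follows (the block name and values are placeholders, not output produced by 
 the attached patch):
 {noformat}
 blk_x len=y Live_repl=3 [dn1, dn2, dn3, dn4]
 {noformat}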



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8232) Missing datanode counters when using Metrics2 sink interface

2015-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512703#comment-14512703
 ] 

Hadoop QA commented on HDFS-8232:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 36s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | javac |   7m 31s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 38s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   4m  1s | The applied patch generated  2 
 additional checkstyle issues. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  3s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 13s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  97m 30s | Tests failed in hadoop-hdfs. |
| | | 142m  1s | |
\\
\\
|| Reason || Tests ||
| Timed out tests | org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12728071/hdfs-8232.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f83c55a |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10393/artifact/patchprocess/checkstyle-result-diff.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10393/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10393/testReport/ |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10393/console |


This message was automatically generated.

 Missing datanode counters when using Metrics2 sink interface
 

 Key: HDFS-8232
 URL: https://issues.apache.org/jira/browse/HDFS-8232
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.7.1
Reporter: Anu Engineer
Assignee: Anu Engineer
 Attachments: hdfs-8232.001.patch


 When using the Metrics2 Sink interface, none of the counters declared under 
 Datanode:FSDataSetBean are visible. They are visible if you use JMX or if 
 you do http://host:port/jmx. 
 Expected behavior is that they be part of the Sink interface and accessible 
 in the putMetrics callback.
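 For context, a minimal sketch of a Metrics2 sink built on the standard 
 {{org.apache.hadoop.metrics2.MetricsSink}} interface; the class name and the 
 println reporting are illustrative assumptions, not the patch. Every metric 
 the datanode publishes is expected to arrive through putMetrics, which is 
 where the FSDataSetBean counters are reported missing.
 {code}
 import org.apache.commons.configuration.SubsetConfiguration;
 import org.apache.hadoop.metrics2.AbstractMetric;
 import org.apache.hadoop.metrics2.MetricsRecord;
 import org.apache.hadoop.metrics2.MetricsSink;

 // Illustrative sink: prints every metric it receives from the metrics system.
 public class PrintingSink implements MetricsSink {
   @Override
   public void init(SubsetConfiguration conf) {
     // No configuration needed for this sketch.
   }

   @Override
   public void putMetrics(MetricsRecord record) {
     for (AbstractMetric metric : record.metrics()) {
       System.out.println(record.context() + "." + record.name()
           + "." + metric.name() + " = " + metric.value());
     }
   }

   @Override
   public void flush() {
     // Nothing is buffered in this sketch.
   }
 }
 {code}
 Such a sink would be wired up through hadoop-metrics2.properties (for example 
 a *.sink.<instance>.class entry); that wiring is mentioned only as background 
 and is not part of this issue.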
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8209) Support different number of datanode directories in MiniDFSCluster.

2015-04-25 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512713#comment-14512713
 ] 

Tsz Wo Nicholas Sze commented on HDFS-8209:
---

{code}
 assert storageTypes == null || storageTypes.length == storagesPerDatanode;
 for (int j = 0; j < storagesPerDatanode; ++j) {
+  if ((storageTypes != null) && (j >= storageTypes.length)) {
+break;
+  }
{code}
With the assert storageTypes.length == storagesPerDatanode and j < 
storagesPerDatanode, the condition j >= storageTypes.length is always false.  
So I guess the patch won't work.
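To make the comment concrete, here is a standalone sketch (simplified names, not the MiniDFSCluster source) showing that the added guard is dead code under the assert, plus one possible alternative bound; the alternative is only an assumption for illustration, not the committed fix, and would also require relaxing the assert.
{code}
public class StorageTypeLoopSketch {
  public static void main(String[] args) {
    String[] storageTypes = {"DISK", "ARCHIVE"};
    int storagesPerDatanode = 2;

    // The posted patch: because the assert pins storageTypes.length to
    // storagesPerDatanode, j < storagesPerDatanode already implies
    // j < storageTypes.length, so the new guard can never fire.
    assert storageTypes == null || storageTypes.length == storagesPerDatanode;
    for (int j = 0; j < storagesPerDatanode; ++j) {
      if (storageTypes != null && j >= storageTypes.length) {
        break;  // dead code under the assert
      }
      System.out.println("storage dir " + j + " -> " + storageTypes[j]);
    }

    // One possible alternative (an illustrative assumption, not the committed
    // fix): bound the loop by the per-datanode storageTypes length. The assert
    // above would also have to be relaxed for this to make a difference.
    int limit = (storageTypes == null)
        ? storagesPerDatanode
        : Math.min(storagesPerDatanode, storageTypes.length);
    for (int j = 0; j < limit; ++j) {
      System.out.println("storage dir " + j + " -> " + storageTypes[j]);
    }
  }
}
{code}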

 Support different number of datanode directories in MiniDFSCluster.
 ---

 Key: HDFS-8209
 URL: https://issues.apache.org/jira/browse/HDFS-8209
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Affects Versions: 2.6.0
Reporter: surendra singh lilhore
Assignee: surendra singh lilhore
Priority: Minor
 Attachments: HDFS-8209.patch


 I want to create a MiniDFSCluster with 2 datanodes and set a different number 
 of StorageTypes for each datanode, but in this case I am getting an 
 ArrayIndexOutOfBoundsException.
 My cluster schema is like this.
 {code}
 final MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
   .numDataNodes(2)
   .storageTypes(new StorageType[][] {{ 
 StorageType.DISK, StorageType.ARCHIVE },{ StorageType.DISK } })
   .build();
 {code}
 *Exception* :
 {code}
 java.lang.ArrayIndexOutOfBoundsException: 1
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster.makeDataNodeDirs(MiniDFSCluster.java:1218)
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1402)
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:832)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7673) synthetic load generator docs give incorrect/incomplete commands

2015-04-25 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512598#comment-14512598
 ] 

Brahma Reddy Battula commented on HDFS-7673:


[~aw], thanks for reviewing and committing this issue.

 synthetic load generator docs give incorrect/incomplete commands
 

 Key: HDFS-7673
 URL: https://issues.apache.org/jira/browse/HDFS-7673
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula
 Fix For: 3.0.0

 Attachments: HDFS-7673.patch


 The synthetic load generator guide gives this helpful command to start it:
 {code}
 java LoadGenerator [options]
 {code}
 This, of course, won't work.  What's the class path?  What jar is it in?  Is 
 this really the command?  Isn't there a shell script wrapping this?
 This atrocity against normal users is committed three more times after this 
 one with equally incomplete commands for other parts of the system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8174) Update replication count to live rep count in fsck report

2015-04-25 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-8174:
-
Attachment: HDFS-8174.1.patch

Attached an initial patch.
Please review.

 Update replication count to live rep count in fsck report
 -

 Key: HDFS-8174
 URL: https://issues.apache.org/jira/browse/HDFS-8174
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: J.Andreina
Assignee: J.Andreina
Priority: Minor
 Attachments: HDFS-8174.1.patch


 When one of the replicas is decommissioned, the fsck report shows a repl 
 count that is one less than the number of replicas displayed. 
 {noformat}
 blk_x len=y repl=3 [dn1, dn2, dn3, dn4]
 {noformat}
 Update the description from repl to Live_repl



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8187) Remove usage of -setStoragePolicy and -getStoragePolicy using dfsadmin cmd (as it is not supported)

2015-04-25 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-8187:
-
Status: Patch Available  (was: Open)

 Remove usage of -setStoragePolicy and -getStoragePolicy using dfsadmin 
 cmd (as it is not supported)
 

 Key: HDFS-8187
 URL: https://issues.apache.org/jira/browse/HDFS-8187
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: J.Andreina
Assignee: J.Andreina
 Attachments: HDFS-8187.1.patch


 Remove usage of -setStoragePolicy and -getStoragePolicy using dfsadmin 
 cmd (as it is not supported)
 Incorrect Usage in Document:
 {noformat}
 The storage policy can be specified using the [`dfsadmin 
 -setStoragePolicy`](#Set_Storage_Policy) command. 
 .
 .
 The effective storage policy can be retrieved by the [`dfsadmin 
 -getStoragePolicy`](#Get_Storage_Policy) command.
 {noformat}
 Correct Commands:
 {noformat}
 hdfs storagepolicies -getStoragePolicy -path <path>
 hdfs storagepolicies -setStoragePolicy -path <path> -policy <policy>
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8175) Provide information on snapshotDiff for supporting the comparison between snapshot and current status

2015-04-25 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-8175:
-
Attachment: HDFS-8175.1.patch

Attached an initial patch.
Please review.

 Provide information on snapshotDiff for supporting the comparison between 
 snapshot and current status
 -

 Key: HDFS-8175
 URL: https://issues.apache.org/jira/browse/HDFS-8175
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: J.Andreina
Assignee: J.Andreina
 Attachments: HDFS-8175.1.patch


 SnapshotDiff can be used to find the difference
 1. Between two snapshots (documented)
 2. Between a snapshot and the current status of a directory *(which is not 
 documented)*.
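 For illustration, the undocumented usage looks roughly like this, where "." 
 stands for the current status of the directory; the path and snapshot names 
 are placeholders:
 {noformat}
 # between two snapshots (already documented)
 hdfs snapshotDiff /foo s1 s2
 # between snapshot s1 and the current status of /foo
 hdfs snapshotDiff /foo s1 .
 {noformat}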



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8252) Fix test case failure in org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate.testAppendOverTypeQuota

2015-04-25 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512643#comment-14512643
 ] 

Brahma Reddy Battula commented on HDFS-8252:


I think HDFS-8231 broke this test case.

 Fix test case failure in 
 org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate.testAppendOverTypeQuota
 

 Key: HDFS-8252
 URL: https://issues.apache.org/jira/browse/HDFS-8252
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula

 Quota by storage type : SSD on path : /TestAppendOverTypeQuota is exceeded. 
 quota = 1 B but space consumed = 1 KB
  at 
 org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuotaByStorageType(DirectoryWithQuotaFeature.java:227)
  at 
 org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:240)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:874)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.verifyQuotaForUCBlock(FSNamesystem.java:2765)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.prepareFileForAppend(FSNamesystem.java:2713)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2686)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2968)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2939)
  at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:659)
  at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:418)
  at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
  at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:415)
  at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8232) Missing datanode counters when using Metrics2 sink interface

2015-04-25 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-8232:

 Component/s: (was: HDFS)
  datanode
Target Version/s: 2.8.0  (was: 2.7.1)

 Missing datanode counters when using Metrics2 sink interface
 

 Key: HDFS-8232
 URL: https://issues.apache.org/jira/browse/HDFS-8232
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.7.1
Reporter: Anu Engineer
Assignee: Anu Engineer
 Attachments: hdfs-8232.001.patch


 When using the Metrics2 Sink interface, none of the counters declared under 
 Datanode:FSDataSetBean are visible. They are visible if you use JMX or if 
 you do http://host:port/jmx. 
 Expected behavior is that they be part of the Sink interface and accessible 
 in the putMetrics callback.
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8252) Fix test case failure in org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate.testAppendOverTypeQuota

2015-04-25 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-8252:
--

 Summary: Fix test case failure in 
org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate.testAppendOverTypeQuota
 Key: HDFS-8252
 URL: https://issues.apache.org/jira/browse/HDFS-8252
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


Quota by storage type : SSD on path : /TestAppendOverTypeQuota is exceeded. 
quota = 1 B but space consumed = 1 KB
 at 
org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuotaByStorageType(DirectoryWithQuotaFeature.java:227)
 at 
org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:240)
 at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:874)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.verifyQuotaForUCBlock(FSNamesystem.java:2765)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.prepareFileForAppend(FSNamesystem.java:2713)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2686)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2968)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2939)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:659)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:418)
 at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8247) TestDiskspaceQuotaUpdate#testAppendOverTypeQuota is failing

2015-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512656#comment-14512656
 ] 

Hudson commented on HDFS-8247:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7676 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7676/])
HDFS-8247. TestDiskspaceQuotaUpdate#testAppendOverTypeQuota is failing. 
Contributed by Xiaoyu Yao. (cnauroth: rev 
a00e001a1a9fa2c6287b2f078e425e9bb157e5ca)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDiskspaceQuotaUpdate.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 TestDiskspaceQuotaUpdate#testAppendOverTypeQuota is failing
 ---

 Key: HDFS-8247
 URL: https://issues.apache.org/jira/browse/HDFS-8247
 Project: Hadoop HDFS
  Issue Type: Test
  Components: HDFS
Affects Versions: 2.7.1
Reporter: Anu Engineer
Assignee: Xiaoyu Yao
 Fix For: 2.8.0

 Attachments: HDFS-8247.00.patch


 Running org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate is 
 failing with the following error:
 Running org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate
 Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 19.828 sec 
 <<< FAILURE! - in 
 org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate
 testAppendOverTypeQuota(org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate)
   Time elapsed: 0.962 sec  <<< ERROR!
 org.apache.hadoop.hdfs.protocol.QuotaByStorageTypeExceededException: Quota by 
 storage type : SSD on path : /TestAppendOverTypeQuota is exceeded. quota = 1 
 B but space consumed = 1 KB
   at 
 org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuotaByStorageType(DirectoryWithQuotaFeature.java:227)
   at 
 org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:240)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:874)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.verifyQuotaForUCBlock(FSNamesystem.java:2765)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.prepareFileForAppend(FSNamesystem.java:2713)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2686)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2968)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2939)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:659)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:418)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)
   at org.apache.hadoop.ipc.Client.call(Client.java:1492)
   at org.apache.hadoop.ipc.Client.call(Client.java:1423)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
   at com.sun.proxy.$Proxy19.append(Unknown Source)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:328)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
   at com.sun.proxy.$Proxy20.append(Unknown Source)
   at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1460)
   at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1524)
   at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1494)
   at 
 

[jira] [Commented] (HDFS-8251) Move the synthetic load generator into its own package

2015-04-25 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512596#comment-14512596
 ] 

Brahma Reddy Battula commented on HDFS-8251:


Thanks for your interest. It was found while checking HDFS-7673, and I had 
already started working on this.

 Move the synthetic load generator into its own package
 --

 Key: HDFS-8251
 URL: https://issues.apache.org/jira/browse/HDFS-8251
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula

 It doesn't really make sense for the HDFS load generator to be a part of the 
 (extremely large) mapreduce jobclient package. It should be pulled out and 
 put in its own package, probably in hadoop-tools.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8187) Remove usage of -setStoragePolicy and -getStoragePolicy using dfsadmin cmd (as it is not supported)

2015-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512632#comment-14512632
 ] 

Hadoop QA commented on HDFS-8187:
-

\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   2m 53s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | release audit |   0m 20s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   2m 56s | Site still builds. |
| | |   6m 13s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12728162/HDFS-8187.1.patch |
| Optional Tests | site |
| git revision | trunk / f83c55a |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10390/console |


This message was automatically generated.

 Remove usage of -setStoragePolicy and -getStoragePolicy using dfsadmin 
 cmd (as it is not supported)
 

 Key: HDFS-8187
 URL: https://issues.apache.org/jira/browse/HDFS-8187
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: J.Andreina
Assignee: J.Andreina
 Attachments: HDFS-8187.1.patch


 Remove usage of -setStoragePolicy and -getStoragePolicy using dfsadmin 
 cmd (as it is not supported)
 Incorrect Usage in Document:
 {noformat}
 The storage policy can be specified using the [`dfsadmin 
 -setStoragePolicy`](#Set_Storage_Policy) command. 
 .
 .
 The effective storage policy can be retrieved by the [`dfsadmin 
 -getStoragePolicy`](#Get_Storage_Policy) command.
 {noformat}
 Correct Commands:
 {noformat}
 hdfs storagepolicies -getStoragePolicy -path <path>
 hdfs storagepolicies -setStoragePolicy -path <path> -policy <policy>
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8204) Mover/Balancer should not schedule two replicas to the same DN

2015-04-25 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512675#comment-14512675
 ] 

Tsz Wo Nicholas Sze commented on HDFS-8204:
---

 ... may causes 2 replicas ends in same node after running balance.

Indeed, it won't happen.  The target datanode will refuse to receive the 
replica since it already has it.  So this bug only makes the scheduling 
inefficient; it won't cause 2 replicas to be moved to the same node.

Try your new test without the Dispatcher change.  It still passes and you will 
find ReplicaAlreadyExistsException in the log.

 Mover/Balancer should not schedule two replicas to the same DN
 --

 Key: HDFS-8204
 URL: https://issues.apache.org/jira/browse/HDFS-8204
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer & mover
Reporter: Walter Su
Assignee: Walter Su
 Attachments: HDFS-8204.001.patch, HDFS-8204.002.patch


 Balancer moves blocks between Datanodes (Ver. < 2.6).
 Balancer moves blocks between StorageGroups (introduced by HDFS-6584) in 
 the new version (Ver. >= 2.6).
 The function
 {code}
 class DBlock extends Locations<StorageGroup>
 DBlock.isLocatedOn(StorageGroup loc)
 {code}
 is flawed and may cause 2 replicas to end up on the same node after running the balancer.
 For example:
 We have 2 nodes. Each node has two storages.
 We have (DN0, SSD), (DN0, DISK), (DN1, SSD), (DN1, DISK).
 We have a block with ONE_SSD storage policy.
 The block has 2 replicas. They are in (DN0,SSD) and (DN1,DISK).
 Replica in (DN0,SSD) should not be moved to (DN1,SSD) after running Balancer.
 Otherwise DN1 has 2 replicas.
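 To make the flaw concrete, here is a simplified, self-contained sketch; every 
 type below is a stand-in for the real DBlock/StorageGroup/Dispatcher classes, 
 and the datanode-level comparison is shown only as the kind of check implied 
 above, not as the actual patch.
 {code}
 import java.util.Arrays;
 import java.util.List;

 // Simplified stand-ins for the Balancer/Mover types.
 public class SameNodeCheckSketch {

   static class StorageGroup {
     final String datanode;
     final String storageType;
     StorageGroup(String datanode, String storageType) {
       this.datanode = datanode;
       this.storageType = storageType;
     }
   }

   // Replica locations of the example block: (DN0, SSD) and (DN1, DISK).
   static final List<StorageGroup> LOCATIONS = Arrays.asList(
       new StorageGroup("DN0", "SSD"),
       new StorageGroup("DN1", "DISK"));

   // Comparing whole storage groups misses the conflict: (DN1, SSD) is not in
   // the list even though DN1 already holds a replica.
   static boolean isLocatedOnGroup(StorageGroup target) {
     for (StorageGroup g : LOCATIONS) {
       if (g.datanode.equals(target.datanode)
           && g.storageType.equals(target.storageType)) {
         return true;
       }
     }
     return false;
   }

   // Comparing datanodes alone catches it.
   static boolean isLocatedOnDatanode(StorageGroup target) {
     for (StorageGroup g : LOCATIONS) {
       if (g.datanode.equals(target.datanode)) {
         return true;
       }
     }
     return false;
   }

   public static void main(String[] args) {
     StorageGroup proposedTarget = new StorageGroup("DN1", "SSD");
     System.out.println(isLocatedOnGroup(proposedTarget));     // false: move scheduled
     System.out.println(isLocatedOnDatanode(proposedTarget));  // true: move should be rejected
   }
 }
 {code}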



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8249) Separate HdfsConstants into the client and the server side class

2015-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512705#comment-14512705
 ] 

Hadoop QA commented on HDFS-8249:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 35s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 37 new or modified test files. |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. |
| {color:green}+1{color} | javac |   7m 27s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 36s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 19s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   7m 50s | The applied patch generated  
22  additional checkstyle issues. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   5m  0s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 162m 33s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 16s | Tests passed in 
hadoop-hdfs-client. |
| {color:green}+1{color} | hdfs tests |   1m 41s | Tests passed in 
hadoop-hdfs-nfs. |
| {color:green}+1{color} | hdfs tests |   3m 58s | Tests passed in bkjournal. |
| | | 218m 42s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestAppendSnapshotTruncate |
|   | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate |
|   | hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12728101/HDFS-8249.000.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f83c55a |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10389/artifact/patchprocess/whitespace.txt
 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10389/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10389/artifact/patchprocess/checkstyle-result-diff.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10389/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10389/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| hadoop-hdfs-nfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10389/artifact/patchprocess/testrun_hadoop-hdfs-nfs.txt
 |
| bkjournal test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10389/artifact/patchprocess/testrun_bkjournal.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10389/testReport/ |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10389/console |


This message was automatically generated.

 Separate HdfsConstants into the client and the server side class
 

 Key: HDFS-8249
 URL: https://issues.apache.org/jira/browse/HDFS-8249
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-8249.000.patch


 The constants in {{HdfsConstants}} are used by both the client side and the 
 server side. There are three types of constants in the class:
 1. Constants that are used internally by the servers or are not part of the 
 APIs. These constants are free to evolve without breaking compatibility. For 
 example, {{MAX_PATH_LENGTH}} is used by the NN to enforce that the length of 
 a path does not grow too long. Developers are free to change the name of the 
 constants and to move them around if necessary.
 1. Constants that are used by the clients, but are not part of the APIs. For 
 example, {{QUOTA_DONT_SET}} represents an unlimited quota. The value is part 
 of the wire protocol but the name is not. Developers are free to rename the 
 constants but are not allowed to change the value of the constants.
 1. Constants that are part of the APIs. For example, {{SafeModeAction}} is 
 used in {{DistributedFileSystem}}. Changing the name / value of the constant 
 will break binary compatibility, but not source code compatibility.
 This jira proposes to separate the above three types of constants 

[jira] [Commented] (HDFS-8241) Remove unused Namenode startup option FINALIZE

2015-04-25 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512716#comment-14512716
 ] 

Konstantin Shvachko commented on HDFS-8241:
---

 I'm astounded as to how many people were watching that JIRA and still thought 
 it was OK

Well, not everybody, but Aaron seemed quite confident about changing startup 
options and their semantics.

 Remove unused Namenode startup option  FINALIZE
 -

 Key: HDFS-8241
 URL: https://issues.apache.org/jira/browse/HDFS-8241
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula

 Command : hdfs namenode -finalize
 15/04/24 22:26:23 INFO namenode.NameNode: createNameNode [-finalize]
  *Use of the argument 'FINALIZE' is no longer supported.*  To finalize an 
 upgrade, start the NN  and then run `hdfs dfsadmin -finalizeUpgrade'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-2009) test-patch comment doesn't show names of failed FI tests

2015-04-25 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-2009.

Resolution: Won't Fix

stale.

 test-patch comment doesn't show names of failed FI tests
 

 Key: HDFS-2009
 URL: https://issues.apache.org/jira/browse/HDFS-2009
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build, test
Reporter: Todd Lipcon

 Looks like test-patch.sh only looks at the build/test/*xml test results, but 
 it should also look at build-fi/test/*xml I think



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8175) Provide information on snapshotDiff for supporting the comparison between snapshot and current status

2015-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512640#comment-14512640
 ] 

Hadoop QA commented on HDFS-8175:
-

\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   2m 50s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | release audit |   0m 20s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   2m 55s | Site still builds. |
| | |   6m  8s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12728163/HDFS-8175.1.patch |
| Optional Tests | site |
| git revision | trunk / f83c55a |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10391/console |


This message was automatically generated.

 Provide information on snapshotDiff for supporting the comparison between 
 snapshot and current status
 -

 Key: HDFS-8175
 URL: https://issues.apache.org/jira/browse/HDFS-8175
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: J.Andreina
Assignee: J.Andreina
 Attachments: HDFS-8175.1.patch


 SnapshotDiff can be used to find the difference
 1. Between two snapshots (documented)
 2. Between a snapshot and the current status of a directory *(which is not 
 documented)*.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8252) Fix test case failure in org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate.testAppendOverTypeQuota

2015-04-25 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-8252.
-
Resolution: Duplicate

 Fix test case failure in 
 org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate.testAppendOverTypeQuota
 

 Key: HDFS-8252
 URL: https://issues.apache.org/jira/browse/HDFS-8252
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula

 Quota by storage type : SSD on path : /TestAppendOverTypeQuota is exceeded. 
 quota = 1 B but space consumed = 1 KB
  at 
 org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuotaByStorageType(DirectoryWithQuotaFeature.java:227)
  at 
 org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:240)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:874)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.verifyQuotaForUCBlock(FSNamesystem.java:2765)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.prepareFileForAppend(FSNamesystem.java:2713)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2686)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2968)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2939)
  at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:659)
  at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:418)
  at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
  at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:415)
  at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8209) Support different number of datanode directories in MiniDFSCluster.

2015-04-25 Thread surendra singh lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512678#comment-14512678
 ] 

surendra singh lilhore commented on HDFS-8209:
--

Thanks [~szetszwo]...  Attached patch, please review.

 Support different number of datanode directories in MiniDFSCluster.
 ---

 Key: HDFS-8209
 URL: https://issues.apache.org/jira/browse/HDFS-8209
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Affects Versions: 2.6.0
Reporter: surendra singh lilhore
Assignee: surendra singh lilhore
Priority: Minor
 Attachments: HDFS-8209.patch


 I want to create a MiniDFSCluster with 2 datanodes and set a different number 
 of StorageTypes for each datanode, but in this case I am getting an 
 ArrayIndexOutOfBoundsException.
 My cluster schema is like this.
 {code}
 final MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
   .numDataNodes(2)
   .storageTypes(new StorageType[][] {{ 
 StorageType.DISK, StorageType.ARCHIVE },{ StorageType.DISK } })
   .build();
 {code}
 *Exception* :
 {code}
 java.lang.ArrayIndexOutOfBoundsException: 1
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster.makeDataNodeDirs(MiniDFSCluster.java:1218)
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1402)
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:832)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8251) Move the synthetic load generator into its own package

2015-04-25 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula reassigned HDFS-8251:
--

Assignee: Brahma Reddy Battula  (was: J.Andreina)

 Move the synthetic load generator into its own package
 --

 Key: HDFS-8251
 URL: https://issues.apache.org/jira/browse/HDFS-8251
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula

 It doesn't really make sense for the HDFS load generator to be a part of the 
 (extremely large) mapreduce jobclient package. It should be pulled out and 
 put in its own package, probably in hadoop-tools.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8187) Remove usage of -setStoragePolicy and -getStoragePolicy using dfsadmin cmd (as it is not supported)

2015-04-25 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-8187:
-
Attachment: HDFS-8187.1.patch

Attached an initial patch.
Please review.

 Remove usage of -setStoragePolicy and -getStoragePolicy using dfsadmin 
 cmd (as it is not supported)
 

 Key: HDFS-8187
 URL: https://issues.apache.org/jira/browse/HDFS-8187
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: J.Andreina
Assignee: J.Andreina
 Attachments: HDFS-8187.1.patch


 Remove usage of -setStoragePolicy and -getStoragePolicy using dfsadmin 
 cmd (as it is not supported)
 Incorrect Usage in Document:
 {noformat}
 The storage policy can be specified using the [`dfsadmin 
 -setStoragePolicy`](#Set_Storage_Policy) command. 
 .
 .
 The effective storage policy can be retrieved by the [`dfsadmin 
 -getStoragePolicy`](#Get_Storage_Policy) command.
 {noformat}
 Correct Commands:
 {noformat}
 hdfs storagepolicies -getStoragePolicy -path <path>
 hdfs storagepolicies -setStoragePolicy -path <path> -policy <policy>
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8251) Move the synthetic load generator into its own package

2015-04-25 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-8251:
---
Hadoop Flags: Incompatible change

 Move the synthetic load generator into its own package
 --

 Key: HDFS-8251
 URL: https://issues.apache.org/jira/browse/HDFS-8251
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: J.Andreina

 It doesn't really make sense for the HDFS load generator to be a part of the 
 (extremely large) mapreduce jobclient package. It should be pulled out and 
 put in its own package, probably in hadoop-tools.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8251) Move the synthetic load generator into its own package

2015-04-25 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-8251:
---
Assignee: J.Andreina  (was: Brahma Reddy Battula)

 Move the synthetic load generator into its own package
 --

 Key: HDFS-8251
 URL: https://issues.apache.org/jira/browse/HDFS-8251
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: J.Andreina

 It doesn't really make sense for the HDFS load generator to be a part of the 
 (extremely large) mapreduce jobclient package. It should be pulled out and 
 put in its own package, probably in hadoop-tools.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8251) Move the synthetic load generator into its own package

2015-04-25 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512626#comment-14512626
 ] 

Allen Wittenauer commented on HDFS-8251:


[~brahmareddy], that's not cool, especially considering how many other issues 
you've got assigned to you that you haven't done anything with...

Re-assigning to [~andreina].

 Move the synthetic load generator into its own package
 --

 Key: HDFS-8251
 URL: https://issues.apache.org/jira/browse/HDFS-8251
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula

 It doesn't really make sense for the HDFS load generator to be a part of the 
 (extremely large) mapreduce jobclient package. It should be pulled out and 
 put in its own package, probably in hadoop-tools.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8250) Create HDFS bindings for java.nio.file.FileSystem

2015-04-25 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-8250:

 Component/s: hdfs-client
Target Version/s: 2.8.0
  Issue Type: New Feature  (was: Improvement)

Oleg, thanks for filing the issue.  +1 for the proposal.  This could enable 
some interesting new integrations with HDFS.

Oleg and I already had some conversation about this, so I'd like to recap for 
the community.  He has a working preliminary patch that created a binding of 
{{org.apache.hadoop.hdfs.DistributedFileSystem}} to 
{{java.nio.file.FileSystem}}.  We'll iterate on that and get it to a 
state where it can be contributed.

While working on it, I expect we'll identify some code to refactor up to Hadoop 
Common.  This would enable easy development of {{java.nio.file.FileSystem}} 
bindings for alternative file systems (S3, Azure, etc.).  As an end goal, I'd 
like to aim for any {{org.apache.hadoop.fs.FileSystem}} to be accessible as a 
{{java.nio.file.FileSystem}} too.  This is likely to happen in phases with 
distinct pieces of work tracked in separate issues.

We'll need to document any pieces of the {{java.nio.file.FileSystem}} interface 
that we can't satisfy because of HDFS semantics.  I believe it is considered 
acceptable for an implementor to provide partial functionality.  For example, I 
know the JDK's own local file system implementation of 
{{java.nio.file.FileSystem}} supports setting POSIX permissions on a Linux file 
system, but those methods throw unchecked exceptions on Windows.  There may be 
some overlap with the work on Hadoop Compatible File System specification and 
contract tests.
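
As a rough illustration of the end goal (this is not the patch; the {{hdfs}} scheme, namenode URI, and paths below are assumptions), client code written against the standard java.nio.file API could eventually look like this once an HDFS {{java.nio.file.spi.FileSystemProvider}} is on the classpath:
{code}
import java.io.IOException;
import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Collections;
import java.util.List;

public class NioHdfsSketch {
  public static void main(String[] args) throws IOException {
    // Assumes a provider registered for the "hdfs" scheme -- exactly what this
    // issue proposes to build.
    URI uri = URI.create("hdfs://namenode:8020/");
    try (FileSystem hdfs = FileSystems.newFileSystem(uri,
        Collections.<String, Object>emptyMap())) {
      Path file = hdfs.getPath("/user/example/hello.txt");
      Files.write(file, "hello".getBytes(StandardCharsets.UTF_8));
      List<String> lines = Files.readAllLines(file, StandardCharsets.UTF_8);
      System.out.println(lines);
    }
  }
}
{code}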

Thanks again, Oleg.  I'm happy to help with this effort.

 Create HDFS bindings for java.nio.file.FileSystem
 -

 Key: HDFS-8250
 URL: https://issues.apache.org/jira/browse/HDFS-8250
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Oleg Zhurakousky
Assignee: Oleg Zhurakousky

 It's a nice-to-have feature as it would allow developers to have a unified 
 programming model while dealing with various file systems, even though this 
 particular issue only addresses HDFS.
 It has already been done in an unrelated project, so I just need to extract 
 the code and provide a patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8231) StackTrace displayed at client while QuotaByStorageType exceeds

2015-04-25 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512647#comment-14512647
 ] 

Chris Nauroth commented on HDFS-8231:
-

For some reason, this patch didn't get a Jenkins run, and none of us caught it 
before the commit.  HDFS-8247 tracks a test failure that was introduced.

 StackTrace displayed at client while QuotaByStorageType exceeds
 ---

 Key: HDFS-8231
 URL: https://issues.apache.org/jira/browse/HDFS-8231
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: J.Andreina
Assignee: J.Andreina
 Fix For: 2.8.0

 Attachments: HDFS-8231.00.patch, HDFS-8231.1.patch, HDFS-8231.2.patch


 A stack trace is displayed at the client when QuotaByStorageType is exceeded.
 With reference to HDFS-2360, it would be better to fix this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8253) DFSStripedOutputStream.closeThreads releases cellBuffers multiple times

2015-04-25 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDFS-8253:
-

 Summary: DFSStripedOutputStream.closeThreads releases cellBuffers 
multiple times
 Key: HDFS-8253
 URL: https://issues.apache.org/jira/browse/HDFS-8253
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


In closeThreads, setClosed is called once for each streamer.  setClosed releases 
all the cellBuffers.  As a result, all cellBuffers are released multiple times.
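For illustration only, a generic sketch of the double-release pattern and an idempotent guard; the names are made up and this is not the actual DFSStripedOutputStream fix.
{code}
import java.util.concurrent.atomic.AtomicBoolean;

// Generic illustration: "release shared buffers once per streamer" vs.
// "release them exactly once for the whole stream".
public class ReleaseOnceSketch {
  private final AtomicBoolean cellBuffersReleased = new AtomicBoolean(false);

  // Called once per streamer, mirroring closeThreads() calling setClosed()
  // for every streamer.
  void setClosed() {
    releaseCellBuffers();  // without a guard this would run once per streamer
  }

  void releaseCellBuffers() {
    // compareAndSet makes the release idempotent: only the first call wins.
    if (cellBuffersReleased.compareAndSet(false, true)) {
      System.out.println("cell buffers released");
    }
  }

  public static void main(String[] args) {
    ReleaseOnceSketch s = new ReleaseOnceSketch();
    for (int i = 0; i < 9; i++) {
      s.setClosed();  // prints only once because of the guard
    }
  }
}
{code}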



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8204) Mover/Balancer should not schedule two replicas to the same DN

2015-04-25 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-8204:
--
Hadoop Flags: Reviewed
  Status: Patch Available  (was: Reopened)

+1 patch looks good.  Pending Jenkins.

 Mover/Balancer should not schedule two replicas to the same DN
 --

 Key: HDFS-8204
 URL: https://issues.apache.org/jira/browse/HDFS-8204
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer & mover
Reporter: Walter Su
Assignee: Walter Su
 Attachments: HDFS-8204.001.patch, HDFS-8204.002.patch


 Balancer moves blocks between Datanodes (Ver. < 2.6).
 Balancer moves blocks between StorageGroups (introduced by HDFS-6584) in 
 the new version (Ver. >= 2.6).
 The function
 {code}
 class DBlock extends Locations<StorageGroup>
 DBlock.isLocatedOn(StorageGroup loc)
 {code}
 is flawed and may cause 2 replicas to end up on the same node after running the balancer.
 For example:
 We have 2 nodes. Each node has two storages.
 We have (DN0, SSD), (DN0, DISK), (DN1, SSD), (DN1, DISK).
 We have a block with ONE_SSD storage policy.
 The block has 2 replicas. They are in (DN0,SSD) and (DN1,DISK).
 Replica in (DN0,SSD) should not be moved to (DN1,SSD) after running Balancer.
 Otherwise DN1 has 2 replicas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8209) Support different number of datanode directories in MiniDFSCluster.

2015-04-25 Thread surendra singh lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

surendra singh lilhore updated HDFS-8209:
-
Attachment: HDFS-8209.patch

 Support different number of datanode directories in MiniDFSCluster.
 ---

 Key: HDFS-8209
 URL: https://issues.apache.org/jira/browse/HDFS-8209
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Affects Versions: 2.6.0
Reporter: surendra singh lilhore
Assignee: surendra singh lilhore
Priority: Minor
 Attachments: HDFS-8209.patch


 I want to create a MiniDFSCluster with 2 datanodes and set a different number 
 of StorageTypes for each datanode, but in this case I am getting an 
 ArrayIndexOutOfBoundsException.
 My cluster schema is like this.
 {code}
 final MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
   .numDataNodes(2)
   .storageTypes(new StorageType[][] {{ 
 StorageType.DISK, StorageType.ARCHIVE },{ StorageType.DISK } })
   .build();
 {code}
 *Exception* :
 {code}
 java.lang.ArrayIndexOutOfBoundsException: 1
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster.makeDataNodeDirs(MiniDFSCluster.java:1218)
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1402)
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:832)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8204) Mover/Balancer should not schedule two replicas to the same DN

2015-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512761#comment-14512761
 ] 

Hadoop QA commented on HDFS-8204:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 37s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | javac |   7m 30s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 37s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   5m 20s | The applied patch generated  1 
 additional checkstyle issues. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  4s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 13s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 162m 27s | Tests failed in hadoop-hdfs. |
| | | 208m 16s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.mover.TestMover |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12728130/HDFS-8204.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / a00e001 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10394/artifact/patchprocess/checkstyle-result-diff.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10394/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10394/testReport/ |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10394/console |


This message was automatically generated.

 Mover/Balancer should not schedule two replicas to the same DN
 --

 Key: HDFS-8204
 URL: https://issues.apache.org/jira/browse/HDFS-8204
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer & mover
Reporter: Walter Su
Assignee: Walter Su
 Attachments: HDFS-8204.001.patch, HDFS-8204.002.patch


  Balancer moves blocks between Datanodes (Ver. < 2.6).
  Balancer moves blocks between StorageGroups (introduced by HDFS-6584) in 
  the new version (Ver. >= 2.6).
  The function
  {code}
  class DBlock extends Locations<StorageGroup>
  DBlock.isLocatedOn(StorageGroup loc)
  {code}
  is flawed and may leave 2 replicas on the same node after running the balancer.
 For example:
 We have 2 nodes. Each node has two storages.
 We have (DN0, SSD), (DN0, DISK), (DN1, SSD), (DN1, DISK).
 We have a block with ONE_SSD storage policy.
 The block has 2 replicas. They are in (DN0,SSD) and (DN1,DISK).
 Replica in (DN0,SSD) should not be moved to (DN1,SSD) after running Balancer.
 Otherwise DN1 has 2 replicas.
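  A hedged sketch of the fix idea (not the attached patch): the placement check 
  should compare datanodes rather than storage groups, so that (DN1, SSD) and 
  (DN1, DISK) count as the same node. The accessor names below (getLocations, 
  getDatanodeInfo) are assumptions about the Dispatcher classes.
  {code}
  // Sketch only: return true if any existing replica location of the block
  // sits on the same datanode as the proposed target storage group.
  boolean isLocatedOnDatanode(DBlock block, StorageGroup target) {
    for (StorageGroup loc : block.getLocations()) {
      if (loc.getDatanodeInfo().equals(target.getDatanodeInfo())) {
        return true;   // same node already holds a replica, even on another storage
      }
    }
    return false;
  }
  {code}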



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8175) Provide information on snapshotDiff for supporting the comparison between snapshot and current status

2015-04-25 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-8175:
-
Status: Patch Available  (was: Open)

 Provide information on snapshotDiff for supporting the comparison between 
 snapshot and current status
 -

 Key: HDFS-8175
 URL: https://issues.apache.org/jira/browse/HDFS-8175
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: J.Andreina
Assignee: J.Andreina
 Attachments: HDFS-8175.1.patch


 SnapshotDiff  can be used to find difference
 1. Between two Snapshot ( Documented)
 2. Between a snapshot and current status of directory *(which is not been 
 documented)*.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8252) Fix test case failure in org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate.testAppendOverTypeQuota

2015-04-25 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512646#comment-14512646
 ] 

Chris Nauroth commented on HDFS-8252:
-

HDFS-8247 was filed yesterday to track this failure, so I'm resolving HDFS-8252 
as a duplicate.

 Fix test case failure in 
 org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate.testAppendOverTypeQuota
 

 Key: HDFS-8252
 URL: https://issues.apache.org/jira/browse/HDFS-8252
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula

 Quota by storage type : SSD on path : /TestAppendOverTypeQuota is exceeded. 
 quota = 1 B but space consumed = 1 KB
  at 
 org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuotaByStorageType(DirectoryWithQuotaFeature.java:227)
  at 
 org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:240)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:874)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.verifyQuotaForUCBlock(FSNamesystem.java:2765)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.prepareFileForAppend(FSNamesystem.java:2713)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2686)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2968)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2939)
  at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:659)
  at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:418)
  at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
  at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:415)
  at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8232) Missing datanode counters when using Metrics2 sink interface

2015-04-25 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512644#comment-14512644
 ] 

Chris Nauroth commented on HDFS-8232:
-

I've submitted a new Jenkins run.

https://builds.apache.org/job/PreCommit-HDFS-Build/10393/


 Missing datanode counters when using Metrics2 sink interface
 

 Key: HDFS-8232
 URL: https://issues.apache.org/jira/browse/HDFS-8232
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.7.1
Reporter: Anu Engineer
Assignee: Anu Engineer
 Attachments: hdfs-8232.001.patch


  When using the Metrics2 Sink interface, none of the counters declared under 
  Datanode:FSDataSetBean are visible. They are visible if you use JMX or if 
  you go to http://host:port/jmx. 
  The expected behavior is that they are part of the Sink interface and 
  accessible in the putMetrics callback.
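  For context, a minimal sketch of a Metrics2 sink (using the standard 
  MetricsSink init/putMetrics/flush callbacks); registering such a sink via 
  hadoop-metrics2.properties is how one would observe which datanode counters 
  actually arrive.
  {code}
  import org.apache.commons.configuration.SubsetConfiguration;
  import org.apache.hadoop.metrics2.AbstractMetric;
  import org.apache.hadoop.metrics2.MetricsRecord;
  import org.apache.hadoop.metrics2.MetricsSink;

  // Minimal sink used only to observe which records and counters actually
  // arrive through putMetrics(); names are printed so missing counters are
  // easy to spot.
  public class LoggingSink implements MetricsSink {
    @Override
    public void init(SubsetConfiguration conf) {
      // no configuration needed for this illustration
    }

    @Override
    public void putMetrics(MetricsRecord record) {
      for (AbstractMetric metric : record.metrics()) {
        System.out.println(record.name() + "." + metric.name() + " = " + metric.value());
      }
    }

    @Override
    public void flush() {
      // nothing buffered
    }
  }
  {code}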
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8116) RollingWindowManager#getTopUserForMetric should check if LOG.isDebugEnabled() before LOG.debug()

2015-04-25 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512645#comment-14512645
 ] 

Brahma Reddy Battula commented on HDFS-8116:


Testcase failures are unrelated to this patch...

 RollingWindowManager#getTopUserForMetric should check if LOG.isDebugEnabled() 
 before LOG.debug() 
 -

 Key: HDFS-8116
 URL: https://issues.apache.org/jira/browse/HDFS-8116
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.7.0
Reporter: Xiaoyu Yao
Assignee: Brahma Reddy Battula
Priority: Trivial
 Attachments: HDFS-8116.patch


 RollingWindowManager#getTopUserForMetric should check if LOG.isDebugEnabled() 
 before LOG.debug() 
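  The requested pattern, as a small self-contained sketch (the class and metric 
  names are placeholders, not the actual RollingWindowManager code):
  {code}
  import org.apache.commons.logging.Log;
  import org.apache.commons.logging.LogFactory;

  // Placeholder class: skip building the debug string unless debug is enabled.
  public class TopMetricLoggingExample {
    private static final Log LOG = LogFactory.getLog(TopMetricLoggingExample.class);

    void logTopUsers(String metricName, java.util.List<String> topUsers) {
      if (LOG.isDebugEnabled()) {
        // string concatenation only happens when debug logging is actually on
        LOG.debug("Top users for metric " + metricName + ": " + topUsers);
      }
    }
  }
  {code}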



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8247) TestDiskspaceQuotaUpdate#testAppendOverTypeQuota is failing

2015-04-25 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-8247:

  Resolution: Fixed
   Fix Version/s: 2.8.0
Target Version/s: 2.8.0  (was: 2.7.1)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

+1 for the patch.  The test failure in the last Jenkins run was unrelated to 
the patch.  It looks like we hit a bind conflict on a port.  I verified that 
the test passes locally.

I have committed this to trunk and branch-2.  Anu, thank you for finding and 
reporting the issue.  Xiaoyu, thank you for the patch.

 TestDiskspaceQuotaUpdate#testAppendOverTypeQuota is failing
 ---

 Key: HDFS-8247
 URL: https://issues.apache.org/jira/browse/HDFS-8247
 Project: Hadoop HDFS
  Issue Type: Test
  Components: HDFS
Affects Versions: 2.7.1
Reporter: Anu Engineer
Assignee: Xiaoyu Yao
 Fix For: 2.8.0

 Attachments: HDFS-8247.00.patch


 Running org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate 
 failing with the following error
 Running org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate
 Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 19.828 sec 
  FAILURE! - in 
 org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate
 testAppendOverTypeQuota(org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate)
   Time elapsed: 0.962 sec   ERROR!
 org.apache.hadoop.hdfs.protocol.QuotaByStorageTypeExceededException: Quota by 
 storage type : SSD on path : /TestAppendOverTypeQuota is exceeded. quota = 1 
 B but space consumed = 1 KB
   at 
 org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuotaByStorageType(DirectoryWithQuotaFeature.java:227)
   at 
 org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:240)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:874)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.verifyQuotaForUCBlock(FSNamesystem.java:2765)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.prepareFileForAppend(FSNamesystem.java:2713)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2686)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2968)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2939)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:659)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:418)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)
   at org.apache.hadoop.ipc.Client.call(Client.java:1492)
   at org.apache.hadoop.ipc.Client.call(Client.java:1423)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
   at com.sun.proxy.$Proxy19.append(Unknown Source)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:328)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
   at com.sun.proxy.$Proxy20.append(Unknown Source)
   at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1460)
   at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1524)
   at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1494)
   at 
 

[jira] [Commented] (HDFS-8174) Update replication count to live rep count in fsck report

2015-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512729#comment-14512729
 ] 

Hadoop QA commented on HDFS-8174:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 40s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | javac |   7m 27s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 35s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   7m  7s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  3s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 15s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 175m 20s | Tests failed in hadoop-hdfs. |
| | | 223m  1s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate |
|   | hadoop.hdfs.TestEncryptionZonesWithKMS |
|   | hadoop.hdfs.web.TestWebHDFSXAttr |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12728161/HDFS-8174.1.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f83c55a |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10392/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10392/testReport/ |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10392/console |


This message was automatically generated.

 Update replication count to live rep count in fsck report
 -

 Key: HDFS-8174
 URL: https://issues.apache.org/jira/browse/HDFS-8174
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: J.Andreina
Assignee: J.Andreina
Priority: Minor
 Attachments: HDFS-8174.1.patch


  When one of the replicas is decommissioned, the fsck report shows a repl 
  count that is one less than the number of replica locations displayed. 
 {noformat}
 blk_x len=y repl=3 [dn1, dn2, dn3, dn4]
 {noformat}
 Update the description from rep to Live_rep



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8206) Fix the typos in hadoop-hdfs-httpfs

2015-04-25 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8206:
-
Fix Version/s: (was: 3.0.0)
   2.8.0

 Fix the typos in hadoop-hdfs-httpfs
 ---

 Key: HDFS-8206
 URL: https://issues.apache.org/jira/browse/HDFS-8206
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Fix For: 2.8.0

 Attachments: HDFS-8206-002.patch, HDFS-8206-branch2-002.patch, 
 HDFS-8206-branch2.patch, HDFS-8206.patch


 Actual :
  @throws IOException thrown if an IO error occurrs.
 Expected :
  @throws IOException thrown if an IO error occurs..
 There are 12 occurrences in this project... 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8206) Fix the typos in hadoop-hdfs-httpfs

2015-04-25 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8206:
-
Target Version/s: 2.8.0  (was: 3.0.0)

 Fix the typos in hadoop-hdfs-httpfs
 ---

 Key: HDFS-8206
 URL: https://issues.apache.org/jira/browse/HDFS-8206
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Fix For: 2.8.0

 Attachments: HDFS-8206-002.patch, HDFS-8206-branch2-002.patch, 
 HDFS-8206-branch2.patch, HDFS-8206.patch


 Actual :
  @throws IOException thrown if an IO error occurrs.
 Expected :
  @throws IOException thrown if an IO error occurs..
 There are 12 occurrences in this project... 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8206) Fix the typos in hadoop-hdfs-httpfs

2015-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512883#comment-14512883
 ] 

Hudson commented on HDFS-8206:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7678 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7678/])
Updated CHANGES.TXT for correct version of HDFS-8206 (xyao: rev 
22b70e7c5a005b553610820d866763d8096aeca5)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Fix the typos in hadoop-hdfs-httpfs
 ---

 Key: HDFS-8206
 URL: https://issues.apache.org/jira/browse/HDFS-8206
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Fix For: 2.8.0

 Attachments: HDFS-8206-002.patch, HDFS-8206-branch2-002.patch, 
 HDFS-8206-branch2.patch, HDFS-8206.patch


 Actual :
  @throws IOException thrown if an IO error occurrs.
 Expected :
  @throws IOException thrown if an IO error occurs..
 There are 12 occurrences in this project... 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8205) CommandFormat#parse() should not parse option as value of option

2015-04-25 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8205:
-
Affects Version/s: (was: 2.7.1)
   2.8.0

 CommandFormat#parse() should not parse option as value of option
 

 Key: HDFS-8205
 URL: https://issues.apache.org/jira/browse/HDFS-8205
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Peter Shi
Assignee: Peter Shi
Priority: Blocker
 Attachments: HDFS-8205.01.patch, HDFS-8205.02.patch, HDFS-8205.patch


  {code}
  ./hadoop fs -count -q -t -h -v /
         QUOTA            REM_QUOTA  SPACE_QUOTA  REM_SPACE_QUOTA  DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME
  15/04/21 15:20:19 INFO hdfs.DFSClient: Sets dfs.client.block.write.replace-datanode-on-failure.replication to 0
  9223372036854775807  9223372036854775763         none              inf         31          13          1230  /
  {code}
  This blocks querying quota by storage type and clearing quota by storage type.
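  A hedged sketch of the parsing rule being asked for (not the actual 
  CommandFormat code): an option that expects a value must not silently consume 
  a following token that is itself an option, so -t should not swallow -h.
  {code}
  import java.util.Deque;

  // Sketch only: read the value for an option, refusing to treat the next
  // option flag (a token starting with "-") as that value.
  class OptionValueSketch {
    static String takeOptionValue(Deque<String> args) {
      String next = args.peek();
      if (next == null || next.startsWith("-")) {
        return null;          // no explicit value was supplied for this option
      }
      return args.poll();     // consume and return the real value
    }
  }
  {code}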



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8222) Remove usage of dfsadmin -upgradeProgress from document which is no longer supported

2015-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512902#comment-14512902
 ] 

Hadoop QA commented on HDFS-8222:
-

\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   2m 56s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | release audit |   0m 21s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   2m 56s | Site still builds. |
| | |   6m 17s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12728207/HDFS-8222.1.patch |
| Optional Tests | site |
| git revision | trunk / 22b70e7 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10397/console |


This message was automatically generated.

 Remove usage of dfsadmin -upgradeProgress  from document which  is no 
 longer supported 
 -

 Key: HDFS-8222
 URL: https://issues.apache.org/jira/browse/HDFS-8222
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: J.Andreina
Assignee: J.Andreina
 Attachments: HDFS-8222.1.patch


  Usage of dfsadmin -upgradeProgress has been removed as part of HDFS-2686.
  Information on -upgradeProgress has to be removed from the documentation as well.
 {noformat}
 Before upgrading Hadoop software, finalize if there an existing backup. 
 dfsadmin -upgradeProgress status can tell if the cluster needs to be 
 finalized.
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8206) Fix the typos in hadoop-hdfs-httpfs

2015-04-25 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8206:
-
  Resolution: Fixed
Target Version/s: 3.0.0
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I've committed this to trunk. Thanks [~brahmareddy] for the contribution. 

 Fix the typos in hadoop-hdfs-httpfs
 ---

 Key: HDFS-8206
 URL: https://issues.apache.org/jira/browse/HDFS-8206
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Fix For: 3.0.0

 Attachments: HDFS-8206-002.patch, HDFS-8206-branch2-002.patch, 
 HDFS-8206-branch2.patch, HDFS-8206.patch


 Actual :
  @throws IOException thrown if an IO error occurrs.
 Expected :
  @throws IOException thrown if an IO error occurs..
 There are 12 occurrences in this project... 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6888) Remove audit logging of getFIleInfo()

2015-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512878#comment-14512878
 ] 

Hadoop QA commented on HDFS-6888:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 35s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | javac |   7m 28s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 34s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   7m 54s | The applied patch generated  2 
 additional checkstyle issues. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  3s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 11s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 163m 59s | Tests failed in hadoop-hdfs. |
| | | 212m 16s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12676436/HDFS-6888-6.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / a00e001 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10395/artifact/patchprocess/checkstyle-result-diff.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10395/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10395/testReport/ |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10395/console |


This message was automatically generated.

 Remove audit logging of getFIleInfo()
 -

 Key: HDFS-6888
 URL: https://issues.apache.org/jira/browse/HDFS-6888
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Kihwal Lee
Assignee: Chen He
  Labels: log
 Attachments: HDFS-6888-2.patch, HDFS-6888-3.patch, HDFS-6888-4.patch, 
 HDFS-6888-5.patch, HDFS-6888-6.patch, HDFS-6888.patch


  The audit logging of getFileInfo() was added in HDFS-3733.  Since this is 
  one of the most frequently called methods, users have noticed that the audit 
  log is now filled with these entries.  Since we now have HTTP request logging, 
  this seems unnecessary.
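  Purely as an illustration of the trade-off (not what the attached patch does), 
  the entry could be gated behind a configuration flag; the class, method and 
  key names below are hypothetical.
  {code}
  import org.apache.commons.logging.Log;
  import org.apache.commons.logging.LogFactory;

  // Hypothetical sketch: drop only the high-volume getfileinfo audit entries
  // when a flag is off, and keep every other audit event untouched.
  class AuditExample {
    private static final Log AUDIT_LOG = LogFactory.getLog("ExampleAuditLogger");

    void logAuditEvent(boolean logGetFileInfo, String user, String cmd, String src) {
      if ("getfileinfo".equals(cmd) && !logGetFileInfo) {
        return;   // skip this command only when the (hypothetical) flag is disabled
      }
      AUDIT_LOG.info("ugi=" + user + "\tcmd=" + cmd + "\tsrc=" + src);
    }
  }
  {code}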



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8255) Rename getBlockReplication to getPreferredBlockStorageNum

2015-04-25 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8255:

Description: 
We should rename {{BlockCollection#getBlockReplication}} to 
{{getPreferredBlockStorageNum}} for 2 reasons:
# Currently, this method actually returns the _preferred_ block replication 
factor instead of the _actual_ number of replicas. The current name is a little 
ambiguous. {{getPreferredBlockStorageNum}} is also consistent with 
{{getPreferredBlockSize}}
# With the erasure coding feature, the name doesn't apply to striped blocks. 

  was:
We should rename {{BlockCollection#getBlockReplication}} to 
{{getPreferredBlockStorageNum}} for 2 reasons:
# Currently, this method actually returns the _preferred_ block replication 
factor instead of the _actual_ number of replicas. The current name is a little 
ambiguous. 
# With the erasure coding feature, the name doesn't apply to striped blocks. 


 Rename getBlockReplication to getPreferredBlockStorageNum
 -

 Key: HDFS-8255
 URL: https://issues.apache.org/jira/browse/HDFS-8255
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Zhe Zhang
Assignee: Zhe Zhang

 We should rename {{BlockCollection#getBlockReplication}} to 
 {{getPreferredBlockStorageNum}} for 2 reasons:
 # Currently, this method actually returns the _preferred_ block replication 
 factor instead of the _actual_ number of replicas. The current name is a 
 little ambiguous. {{getPreferredBlockStorageNum}} is also consistent with 
 {{getPreferredBlockSize}}
 # With the erasure coding feature, the name doesn't apply to striped blocks. 
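  To make the naming argument concrete, a sketch of how the renamed accessor 
  would read next to the existing one (the javadoc wording is an assumption, 
  not the patch):
  {code}
  // Sketch only: illustrates the proposed name alongside getPreferredBlockSize.
  interface BlockCollectionNamingSketch {
    /** Preferred number of storages (the replication factor for contiguous
     *  blocks), not the number of replicas currently stored. */
    short getPreferredBlockStorageNum();

    /** Existing accessor whose naming the proposal follows. */
    long getPreferredBlockSize();
  }
  {code}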



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8108) Fsck should provide the info on mandatory option to be used along with -blocks , -locations and -racks

2015-04-25 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-8108:
-
Status: Patch Available  (was: Open)

 Fsck should provide the info on mandatory option to be used along with 
 -blocks , -locations and -racks
 

 Key: HDFS-8108
 URL: https://issues.apache.org/jira/browse/HDFS-8108
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: J.Andreina
Assignee: J.Andreina
Priority: Trivial
 Attachments: HDFS-8108.1.patch


  The fsck usage message should state which options must be passed along with 
  -blocks, -locations and -racks, so that it stays in sync with the documentation.
 For example :
 To get information on:
 1.  Blocks (-blocks),  option  -files should also be used.
 2.  Rack information (-racks),  option  -files and -blocks should also be 
 used.
 {noformat}
 ./hdfs fsck -files -blocks
 ./hdfs fsck -files -blocks -racks
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8108) Fsck should provide the info on mandatory option to be used along with -blocks , -locations and -racks

2015-04-25 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-8108:
-
Attachment: HDFS-8108.1.patch

Provided an initial patch. 
Please review.

 Fsck should provide the info on mandatory option to be used along with 
 -blocks , -locations and -racks
 

 Key: HDFS-8108
 URL: https://issues.apache.org/jira/browse/HDFS-8108
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: J.Andreina
Assignee: J.Andreina
Priority: Trivial
 Attachments: HDFS-8108.1.patch


  The fsck usage message should state which options must be passed along with 
  -blocks, -locations and -racks, so that it stays in sync with the documentation.
 For example :
 To get information on:
 1.  Blocks (-blocks),  option  -files should also be used.
 2.  Rack information (-racks),  option  -files and -blocks should also be 
 used.
 {noformat}
 ./hdfs fsck -files -blocks
 ./hdfs fsck -files -blocks -racks
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8252) Fix test case failure in org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate.testAppendOverTypeQuota

2015-04-25 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512842#comment-14512842
 ] 

Brahma Reddy Battula commented on HDFS-8252:


OK, thanks.

 Fix test case failure in 
 org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate.testAppendOverTypeQuota
 

 Key: HDFS-8252
 URL: https://issues.apache.org/jira/browse/HDFS-8252
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula

 Quota by storage type : SSD on path : /TestAppendOverTypeQuota is exceeded. 
 quota = 1 B but space consumed = 1 KB
  at 
 org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuotaByStorageType(DirectoryWithQuotaFeature.java:227)
  at 
 org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:240)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:874)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.verifyQuotaForUCBlock(FSNamesystem.java:2765)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.prepareFileForAppend(FSNamesystem.java:2713)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2686)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2968)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2939)
  at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:659)
  at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:418)
  at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
  at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:415)
  at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8255) Rename getBlockReplication to getPreferredBlockStorageNum

2015-04-25 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8255:

Status: Patch Available  (was: Open)

The patch is just a simple refactor plus reformatting several lines to make 
them shorter than 80 chars.

 Rename getBlockReplication to getPreferredBlockStorageNum
 -

 Key: HDFS-8255
 URL: https://issues.apache.org/jira/browse/HDFS-8255
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8255.000.patch


 We should rename {{BlockCollection#getBlockReplication}} to 
 {{getPreferredBlockStorageNum}} for 2 reasons:
 # Currently, this method actually returns the _preferred_ block replication 
 factor instead of the _actual_ number of replicas. The current name is a 
 little ambiguous. {{getPreferredBlockStorageNum}} is also consistent with 
 {{getPreferredBlockSize}}
 # With the erasure coding feature, the name doesn't apply to striped blocks. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8108) Fsck should provide the info on mandatory option to be used along with -blocks , -locations and -racks

2015-04-25 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-8108:
-
Description: 
The fsck usage message should state which options must be passed along with 
-blocks, -locations and -racks, so that it stays in sync with the documentation.

For example :
To get information on:
1.  Blocks (-blocks),  option  -files should also be used.
2.  Rack information (-racks),  option  -files and -blocks should also be 
used.
{noformat}
./hdfs fsck -files -blocks
./hdfs fsck -files -blocks -racks
{noformat}



  was:
The fsck usage message should state which options must be passed along with 
-blocks, -locations and -racks.

For example :
To get information on:
1.  Blocks (-blocks),  option  -files should also be used.
2.  Rack information (-racks),  option  -files and -blocks should also be 
used.
{noformat}
./hdfs fsck -files -blocks
./hdfs fsck -files -blocks -racks
{noformat}



   Priority: Trivial  (was: Minor)
 Issue Type: Improvement  (was: Bug)

 Fsck should provide the info on mandatory option to be used along with 
 -blocks , -locations and -racks
 

 Key: HDFS-8108
 URL: https://issues.apache.org/jira/browse/HDFS-8108
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: J.Andreina
Assignee: J.Andreina
Priority: Trivial

  The fsck usage message should state which options must be passed along with 
  -blocks, -locations and -racks, so that it stays in sync with the documentation.
 For example :
 To get information on:
 1.  Blocks (-blocks),  option  -files should also be used.
 2.  Rack information (-racks),  option  -files and -blocks should also be 
 used.
 {noformat}
 ./hdfs fsck -files -blocks
 ./hdfs fsck -files -blocks -racks
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8255) Rename getBlockReplication to getPreferredBlockStorageNum

2015-04-25 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8255:

Attachment: HDFS-8255.000.patch

 Rename getBlockReplication to getPreferredBlockStorageNum
 -

 Key: HDFS-8255
 URL: https://issues.apache.org/jira/browse/HDFS-8255
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8255.000.patch


 We should rename {{BlockCollection#getBlockReplication}} to 
 {{getPreferredBlockStorageNum}} for 2 reasons:
 # Currently, this method actually returns the _preferred_ block replication 
 factor instead of the _actual_ number of replicas. The current name is a 
 little ambiguous. {{getPreferredBlockStorageNum}} is also consistent with 
 {{getPreferredBlockSize}}
 # With the erasure coding feature, the name doesn't apply to striped blocks. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8010) Erasure coding: extend UnderReplicatedBlocks to accurately handle striped blocks

2015-04-25 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512893#comment-14512893
 ] 

Zhe Zhang commented on HDFS-8010:
-

Thanks for the reviews [~jingzhao] and [~rakeshr]! And sorry for getting to 
this late.

I just submitted a patch for the rename under HDFS-8255. Will update the patch 
for the {{getPriority}} part soon.

 Erasure coding: extend UnderReplicatedBlocks to accurately handle striped 
 blocks
 

 Key: HDFS-8010
 URL: https://issues.apache.org/jira/browse/HDFS-8010
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8010-000.patch


  This JIRA tracks efforts to accurately assess the _risk level_ of striped 
  block groups with missing blocks when they are added to {{UnderReplicatedBlocks}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8206) Fix the typos in hadoop-hdfs-httpfs

2015-04-25 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8206:
-
Fix Version/s: 3.0.0

 Fix the typos in hadoop-hdfs-httpfs
 ---

 Key: HDFS-8206
 URL: https://issues.apache.org/jira/browse/HDFS-8206
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Fix For: 3.0.0

 Attachments: HDFS-8206-002.patch, HDFS-8206-branch2-002.patch, 
 HDFS-8206-branch2.patch, HDFS-8206.patch


 Actual :
  @throws IOException thrown if an IO error occurrs.
 Expected :
  @throws IOException thrown if an IO error occurs..
 There are 12 occurrences in this project... 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8206) Fix the typos in hadoop-hdfs-httpfs

2015-04-25 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512864#comment-14512864
 ] 

Xiaoyu Yao commented on HDFS-8206:
--

+1 for v002 patch. I will commit it shortly.

 Fix the typos in hadoop-hdfs-httpfs
 ---

 Key: HDFS-8206
 URL: https://issues.apache.org/jira/browse/HDFS-8206
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HDFS-8206-002.patch, HDFS-8206-branch2-002.patch, 
 HDFS-8206-branch2.patch, HDFS-8206.patch


 Actual :
  @throws IOException thrown if an IO error occurrs.
 Expected :
  @throws IOException thrown if an IO error occurs..
 There are 12 occurrences in this project... 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8205) CommandFormat#parse() should not parse option as value of option

2015-04-25 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8205:
-
Status: Patch Available  (was: Open)

 CommandFormat#parse() should not parse option as value of option
 

 Key: HDFS-8205
 URL: https://issues.apache.org/jira/browse/HDFS-8205
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.1
Reporter: Peter Shi
Assignee: Peter Shi
Priority: Blocker
 Attachments: HDFS-8205.01.patch, HDFS-8205.02.patch, HDFS-8205.patch


  {code}
  ./hadoop fs -count -q -t -h -v /
         QUOTA            REM_QUOTA  SPACE_QUOTA  REM_SPACE_QUOTA  DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME
  15/04/21 15:20:19 INFO hdfs.DFSClient: Sets dfs.client.block.write.replace-datanode-on-failure.replication to 0
  9223372036854775807  9223372036854775763         none              inf         31          13          1230  /
  {code}
  This blocks querying quota by storage type and clearing quota by storage type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8205) CommandFormat#parse() should not parse option as value of option

2015-04-25 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8205:
-
Affects Version/s: (was: 3.0.0)
   2.7.1
   Status: Open  (was: Patch Available)

 CommandFormat#parse() should not parse option as value of option
 

 Key: HDFS-8205
 URL: https://issues.apache.org/jira/browse/HDFS-8205
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.1
Reporter: Peter Shi
Assignee: Peter Shi
Priority: Blocker
 Attachments: HDFS-8205.01.patch, HDFS-8205.02.patch, HDFS-8205.patch


  {code}
  ./hadoop fs -count -q -t -h -v /
         QUOTA            REM_QUOTA  SPACE_QUOTA  REM_SPACE_QUOTA  DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME
  15/04/21 15:20:19 INFO hdfs.DFSClient: Sets dfs.client.block.write.replace-datanode-on-failure.replication to 0
  9223372036854775807  9223372036854775763         none              inf         31          13          1230  /
  {code}
  This blocks querying quota by storage type and clearing quota by storage type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8222) Remove usage of dfsadmin -upgradeProgress from document which is no longer supported

2015-04-25 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-8222:
-
Status: Patch Available  (was: Open)

 Remove usage of dfsadmin -upgradeProgress  from document which  is no 
 longer supported 
 -

 Key: HDFS-8222
 URL: https://issues.apache.org/jira/browse/HDFS-8222
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: J.Andreina
Assignee: J.Andreina
 Attachments: HDFS-8222.1.patch


  Usage of dfsadmin -upgradeProgress has been removed as part of HDFS-2686.
  Information on -upgradeProgress has to be removed from the documentation as well.
 {noformat}
 Before upgrading Hadoop software, finalize if there an existing backup. 
 dfsadmin -upgradeProgress status can tell if the cluster needs to be 
 finalized.
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8222) Remove usage of dfsadmin -upgradeProgress from document which is no longer supported

2015-04-25 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-8222:
-
Attachment: HDFS-8222.1.patch

Attached an initial Patch.
Please Review

 Remove usage of dfsadmin -upgradeProgress  from document which  is no 
 longer supported 
 -

 Key: HDFS-8222
 URL: https://issues.apache.org/jira/browse/HDFS-8222
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: J.Andreina
Assignee: J.Andreina
 Attachments: HDFS-8222.1.patch


  Usage of dfsadmin -upgradeProgress has been removed as part of HDFS-2686.
  Information on -upgradeProgress has to be removed from the documentation as well.
 {noformat}
 Before upgrading Hadoop software, finalize if there an existing backup. 
 dfsadmin -upgradeProgress status can tell if the cluster needs to be 
 finalized.
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8256) -storagepolicies , -blockId ,-replicaDetails options are missed out in usage and from documentation

2015-04-25 Thread J.Andreina (JIRA)
J.Andreina created HDFS-8256:


 Summary: -storagepolicies , -blockId ,-replicaDetails  options 
are missed out in usage and from documentation
 Key: HDFS-8256
 URL: https://issues.apache.org/jira/browse/HDFS-8256
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Reporter: J.Andreina
Assignee: J.Andreina


-storagepolicies, -blockId and -replicaDetails options are missing from the 
usage message and from the documentation.

{noformat}
Usage: hdfs fsck <path> [-list-corruptfileblocks | [-move | -delete | 
-openforwrite] [-files [-blocks [-locations | -racks]]]] [-includeSnapshots] 
[-showprogress]
{noformat}

Found as part of HDFS-8108.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8206) Fix the typos in hadoop-hdfs-httpfs

2015-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512874#comment-14512874
 ] 

Hudson commented on HDFS-8206:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7677 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7677/])
HDFS-8206. Fix the typos in hadoop-hdfs-httpfs. (Brahma Reddy Battula via xyao) 
(xyao: rev 8f3946cd4013eaeaafbaf7d038f3920f74c8457e)
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/servlet/MDCFilter.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/servlet/HostnameFilter.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/CheckUploadContentTypeFilter.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/servlet/FileSystemReleaseFilter.java


 Fix the typos in hadoop-hdfs-httpfs
 ---

 Key: HDFS-8206
 URL: https://issues.apache.org/jira/browse/HDFS-8206
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Fix For: 3.0.0

 Attachments: HDFS-8206-002.patch, HDFS-8206-branch2-002.patch, 
 HDFS-8206-branch2.patch, HDFS-8206.patch


 Actual :
  @throws IOException thrown if an IO error occurrs.
 Expected :
  @throws IOException thrown if an IO error occurs..
 There are 12 occurrences in this project... 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8235) Create DFSStripedInputStream in DFSClient#open

2015-04-25 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HDFS-8235:
-
Attachment: HDFS-8235.2.patch

 Create DFSStripedInputStream in DFSClient#open
 --

 Key: HDFS-8235
 URL: https://issues.apache.org/jira/browse/HDFS-8235
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Kai Sasaki
 Attachments: HDFS-8235.1.patch, HDFS-8235.2.patch


 Currently DFSClient#open can only create a DFSInputStream object. It should 
 also support DFSStripedInputStream.
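  A hedged sketch of the dispatch being proposed; the striping check and the 
  constructor arguments are assumptions about the in-progress erasure coding 
  branch, not the final API.
  {code}
  // Sketch only: choose the striped or the contiguous input stream when opening.
  DFSInputStream open(String src, int buffersize, boolean verifyChecksum)
      throws IOException {
    if (getErasureCodingInfo(src) != null) {     // assumed EC lookup
      // striped layout: read across the block group
      return new DFSStripedInputStream(this, src, verifyChecksum);
    }
    // contiguous layout: existing behavior
    return new DFSInputStream(this, src, verifyChecksum);
  }
  {code}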



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8255) Rename getBlockReplication to getPreferredBlockStorageNum

2015-04-25 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-8255:
---

 Summary: Rename getBlockReplication to getPreferredBlockStorageNum
 Key: HDFS-8255
 URL: https://issues.apache.org/jira/browse/HDFS-8255
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Zhe Zhang
Assignee: Zhe Zhang


We should rename {{BlockCollection#getBlockReplication}} to 
{{getPreferredBlockStorageNum}} for 2 reasons:
# Currently, this method actually returns the _preferred_ block replication 
factor instead of the _actual_ number of replicas. The current name is a little 
ambiguous. 
# With the erasure coding feature, the name doesn't apply to striped blocks. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8206) Fix the typos in hadoop-hdfs-httpfs

2015-04-25 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512886#comment-14512886
 ] 

Xiaoyu Yao commented on HDFS-8206:
--

Committed to both trunk and branch-2 with fix version 2.8.0. Thanks also to 
[~cnauroth] for helping figure out the appropriate fix version.

 Fix the typos in hadoop-hdfs-httpfs
 ---

 Key: HDFS-8206
 URL: https://issues.apache.org/jira/browse/HDFS-8206
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Fix For: 2.8.0

 Attachments: HDFS-8206-002.patch, HDFS-8206-branch2-002.patch, 
 HDFS-8206-branch2.patch, HDFS-8206.patch


 Actual :
  @throws IOException thrown if an IO error occurrs.
 Expected :
  @throws IOException thrown if an IO error occurs..
 There are 12 occurrences in this project... 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8116) RollingWindowManager#getTopUserForMetric should check if LOG.isDebugEnabled() before LOG.debug()

2015-04-25 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512293#comment-14512293
 ] 

Brahma Reddy Battula commented on HDFS-8116:


Kindly review the attached patch!!!

 RollingWindowManager#getTopUserForMetric should check if LOG.isDebugEnabled() 
 before LOG.debug() 
 -

 Key: HDFS-8116
 URL: https://issues.apache.org/jira/browse/HDFS-8116
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.7.0
Reporter: Xiaoyu Yao
Assignee: Brahma Reddy Battula
Priority: Trivial
 Attachments: HDFS-8116.patch


 RollingWindowManager#getTopUserForMetric should check if LOG.isDebugEnabled() 
 before LOG.debug() 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8116) RollingWindowManager#getTopUserForMetric should check if LOG.isDebugEnabled() before LOG.debug()

2015-04-25 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-8116:
---
Attachment: HDFS-8116.patch

 RollingWindowManager#getTopUserForMetric should check if LOG.isDebugEnabled() 
 before LOG.debug() 
 -

 Key: HDFS-8116
 URL: https://issues.apache.org/jira/browse/HDFS-8116
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.7.0
Reporter: Xiaoyu Yao
Assignee: Brahma Reddy Battula
Priority: Trivial
 Attachments: HDFS-8116.patch


 RollingWindowManager#getTopUserForMetric should check if LOG.isDebugEnabled() 
 before LOG.debug() 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8116) RollingWindowManager#getTopUserForMetric should check if LOG.isDebugEnabled() before LOG.debug()

2015-04-25 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-8116:
---
Affects Version/s: 2.7.0

 RollingWindowManager#getTopUserForMetric should check if LOG.isDebugEnabled() 
 before LOG.debug() 
 -

 Key: HDFS-8116
 URL: https://issues.apache.org/jira/browse/HDFS-8116
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.7.0
Reporter: Xiaoyu Yao
Assignee: Brahma Reddy Battula
Priority: Trivial
 Attachments: HDFS-8116.patch


 RollingWindowManager#getTopUserForMetric should check if LOG.isDebugEnabled() 
 before LOG.debug() 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8116) RollingWindowManager#getTopUserForMetric should check if LOG.isDebugEnabled() before LOG.debug()

2015-04-25 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-8116:
---
Status: Patch Available  (was: Open)

 RollingWindowManager#getTopUserForMetric should check if LOG.isDebugEnabled() 
 before LOG.debug() 
 -

 Key: HDFS-8116
 URL: https://issues.apache.org/jira/browse/HDFS-8116
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Xiaoyu Yao
Assignee: Brahma Reddy Battula
Priority: Trivial
 Attachments: HDFS-8116.patch


 RollingWindowManager#getTopUserForMetric should check if LOG.isDebugEnabled() 
 before LOG.debug() 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8204) Mover/Balancer should not schedule two replicas to the same DN

2015-04-25 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8204:

Description: 
Balancer moves blocks between Datanodes (Ver. < 2.6).
Balancer moves blocks between StorageGroups (introduced by HDFS-6584) in the 
new version (Ver. >= 2.6).
The function
{code}
class DBlock extends Locations<StorageGroup>
DBlock.isLocatedOn(StorageGroup loc)
{code}
is flawed and may leave 2 replicas on the same node after running the balancer.

For example:
We have 2 nodes. Each node has two storages.
We have (DN0, SSD), (DN0, DISK), (DN1, SSD), (DN1, DISK).
We have a block with ONE_SSD storage policy.
The block has 2 replicas. They are in (DN0,SSD) and (DN1,DISK).
Replica in (DN0,SSD) should not be moved to (DN1,SSD) after running Balancer.
Otherwise DN1 has 2 replicas.

  was:
Balancer moves blocks between Datanodes (Ver. < 2.6).
Balancer moves blocks between StorageGroups (introduced by HDFS-6584) in the 
new version (Ver. >= 2.6).
The function
{code}
class DBlock extends Locations<StorageGroup>
DBlock.isLocatedOn(StorageGroup loc)
{code}
is flawed and may leave 2 replicas on the same node after running the balancer.


 Mover/Balancer should not schedule two replicas to the same DN
 --

 Key: HDFS-8204
 URL: https://issues.apache.org/jira/browse/HDFS-8204
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer & mover
Reporter: Walter Su
Assignee: Walter Su
 Attachments: HDFS-8204.001.patch, HDFS-8204.002.patch


  Balancer moves blocks between Datanodes (Ver. < 2.6).
  Balancer moves blocks between StorageGroups (introduced by HDFS-6584) in 
  the new version (Ver. >= 2.6).
  The function
  {code}
  class DBlock extends Locations<StorageGroup>
  DBlock.isLocatedOn(StorageGroup loc)
  {code}
  is flawed and may leave 2 replicas on the same node after running the balancer.
 For example:
 We have 2 nodes. Each node has two storages.
 We have (DN0, SSD), (DN0, DISK), (DN1, SSD), (DN1, DISK).
 We have a block with ONE_SSD storage policy.
 The block has 2 replicas. They are in (DN0,SSD) and (DN1,DISK).
 Replica in (DN0,SSD) should not be moved to (DN1,SSD) after running Balancer.
 Otherwise DN1 has 2 replicas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8204) Mover/Balancer should not schedule two replicas to the same DN

2015-04-25 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512283#comment-14512283
 ] 

Walter Su commented on HDFS-8204:
-

Uploaded the 002 patch, please review.
Sorry the description was not clear; I added an example to it.

 Mover/Balancer should not schedule two replicas to the same DN
 --

 Key: HDFS-8204
 URL: https://issues.apache.org/jira/browse/HDFS-8204
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer & mover
Reporter: Walter Su
Assignee: Walter Su
 Attachments: HDFS-8204.001.patch, HDFS-8204.002.patch


  Balancer moves blocks between Datanodes (Ver. < 2.6).
  Balancer moves blocks between StorageGroups (introduced by HDFS-6584) in 
  the new version (Ver. >= 2.6).
  The function
  {code}
  class DBlock extends Locations<StorageGroup>
  DBlock.isLocatedOn(StorageGroup loc)
  {code}
  is flawed and may leave 2 replicas on the same node after running the balancer.
 For example:
 We have 2 nodes. Each node has two storages.
 We have (DN0, SSD), (DN0, DISK), (DN1, SSD), (DN1, DISK).
 We have a block with ONE_SSD storage policy.
 The block has 2 replicas. They are in (DN0,SSD) and (DN1,DISK).
 Replica in (DN0,SSD) should not be moved to (DN1,SSD) after running Balancer.
 Otherwise DN1 has 2 replicas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7673) synthetic load generator docs give incorrect/incomplete commands

2015-04-25 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-7673:
---
Status: Patch Available  (was: Open)

 synthetic load generator docs give incorrect/incomplete commands
 

 Key: HDFS-7673
 URL: https://issues.apache.org/jira/browse/HDFS-7673
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula
 Attachments: HDFS-7673.patch


 The synthetic load generator guide gives this helpful command to start it:
 {code}
 java LoadGenerator [options]
 {code}
 This, of course, won't work.  What's the class path?  What jar is it in?  Is 
 this really the command?  Isn't there a shell script wrapping this?
 This atrocity against normal users is committed three more times after this 
 one with equally incomplete commands for other parts of the system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7673) synthetic load generator docs give incorrect/incomplete commands

2015-04-25 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512341#comment-14512341
 ] 

Brahma Reddy Battula commented on HDFS-7673:


Thanks [~aw] for reporting this jira. Attached a patch. Kindly review.

 synthetic load generator docs give incorrect/incomplete commands
 

 Key: HDFS-7673
 URL: https://issues.apache.org/jira/browse/HDFS-7673
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula
 Attachments: HDFS-7673.patch


 The synthetic load generator guide gives this helpful command to start it:
 {code}
 java LoadGenerator [options]
 {code}
 This, of course, won't work.  What's the class path?  What jar is it in?  Is 
 this really the command?  Isn't there a shell script wrapping this?
 This atrocity against normal users is committed three more times after this 
 one with equally incomplete commands for other parts of the system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7673) synthetic load generator docs give incorrect/incomplete commands

2015-04-25 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-7673:
---
Attachment: HDFS-7673.patch

 synthetic load generator docs give incorrect/incomplete commands
 

 Key: HDFS-7673
 URL: https://issues.apache.org/jira/browse/HDFS-7673
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula
 Attachments: HDFS-7673.patch


 The synthetic load generator guide gives this helpful command to start it:
 {code}
 java LoadGenerator [options]
 {code}
 This, of course, won't work.  What's the class path?  What jar is it in?  Is 
 this really the command?  Isn't there a shell script wrapping this?
 This atrocity against normal users is committed three more times after this 
 one with equally incomplete commands for other parts of the system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7673) synthetic load generator docs give incorrect/incomplete commands

2015-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512350#comment-14512350
 ] 

Hadoop QA commented on HDFS-7673:
-

\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   4m  7s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | release audit |   0m 21s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   3m  0s | Site still builds. |
| | |   7m 31s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12728146/HDFS-7673.patch |
| Optional Tests | site |
| git revision | trunk / 78c6b46 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10388/console |


This message was automatically generated.

 synthetic load generator docs give incorrect/incomplete commands
 

 Key: HDFS-7673
 URL: https://issues.apache.org/jira/browse/HDFS-7673
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula
 Attachments: HDFS-7673.patch


 The synthetic load generator guide gives this helpful command to start it:
 {code}
 java LoadGenerator [options]
 {code}
 This, of course, won't work.  What's the class path?  What jar is it in?  Is 
 this really the command?  Isn't there a shell script wrapping this?
 This atrocity against normal users is committed three more times after this 
 one with equally incomplete commands for other parts of the system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7674) Adding metrics for Erasure Coding

2015-04-25 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512365#comment-14512365
 ] 

Walter Su commented on HDFS-7674:
-

*TODO*: Separate the EC need-to-recover block group count from the under-replicated 
block count.
Description:
HDFS-7912 includes EC recovery block groups in {{UnderReplicatedBlocks}}.
When a user runs
{code}
# dfsadmin -report
Configured Capacity: 786329370624 (732.33 GB)
Present Capacity: 706469978112 (657.95 GB)
DFS Remaining: 706207547392 (657.71 GB)
DFS Used: 262430720 (250.27 MB)
DFS Used%: 0.04%
Under replicated blocks: 3
...
{code}
we should separate the EC need-to-recover block group count from the under-replicated 
block count.
The same applies to the NN WebUI.


 Adding metrics for Erasure Coding
 -

 Key: HDFS-7674
 URL: https://issues.apache.org/jira/browse/HDFS-7674
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Wei Zhou

 As the design (in HDFS-7285) indicates, erasure coding involves non-trivial 
 impact and workload for NameNode, DataNode and client; it also allows 
 configurable and pluggable erasure codec and schema with flexible tradeoff 
 options (see HDFS-7337). To support necessary analysis and adjustment, we'd 
 better have various meaningful metrics for the EC support, like 
 encoding/decoding tasks, recovered blocks, read/transferred data size, 
 computation time and etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8116) RollingWindowManager#getTopUserForMetric should check if LOG.isDebugEnabled() before LOG.debug()

2015-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512376#comment-14512376
 ] 

Hadoop QA commented on HDFS-8116:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 32s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | javac |   7m 28s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 34s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   7m 40s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  3s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 13s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 164m 49s | Tests failed in hadoop-hdfs. |
| | | 212m 53s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12728137/HDFS-8116.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 78c6b46 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10387/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10387/testReport/ |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10387/console |


This message was automatically generated.

 RollingWindowManager#getTopUserForMetric should check if LOG.isDebugEnabled() 
 before LOG.debug() 
 -

 Key: HDFS-8116
 URL: https://issues.apache.org/jira/browse/HDFS-8116
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.7.0
Reporter: Xiaoyu Yao
Assignee: Brahma Reddy Battula
Priority: Trivial
 Attachments: HDFS-8116.patch


 RollingWindowManager#getTopUserForMetric should check if LOG.isDebugEnabled() 
 before LOG.debug() 
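 The change being asked for is the standard guard pattern for potentially expensive 
 log messages; a minimal sketch (the message text and variable names below are made 
 up for illustration):
 {code}
 if (LOG.isDebugEnabled()) {
   // String concatenation only happens when debug logging is actually enabled.
   LOG.debug("Top " + topUsers.size() + " users for metric " + metricName
       + ": " + topUsers);
 }
 {code}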



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8191) Fix byte to integer casting in SimulatedFSDataset#simulatedByte

2015-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512418#comment-14512418
 ] 

Hudson commented on HDFS-8191:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #908 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/908/])
HDFS-8191. Fix byte to integer casting in SimulatedFSDataset#simulatedByte. 
Contributed by Zhe Zhang. (wang: rev c7d9ad68e34c7f8b9efada6cfbf7d5474cbeff11)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
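For context, the issue title points at the usual byte-to-int sign-extension pitfall; a generic illustration of that pattern (not the actual patch):
{code}
// Casting a negative byte straight to int sign-extends it, so comparisons
// against the intended unsigned value fail unless the low 8 bits are masked.
byte b = (byte) 0xAB;   // stored as -85
int wrong = b;          // -85 after sign extension
int right = b & 0xFF;   // 171, the unsigned value
assert wrong != 0xAB;
assert right == 0xAB;
{code}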


 Fix byte to integer casting in SimulatedFSDataset#simulatedByte
 ---

 Key: HDFS-8191
 URL: https://issues.apache.org/jira/browse/HDFS-8191
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Zhe Zhang
Assignee: Zhe Zhang
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8191.000.patch, HDFS-8191.001.patch, 
 HDFS-8191.002.patch, HDFS-8191.003.patch, HDFS-8191.003.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8211) DataNode UUID is always null in the JMX counter

2015-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512405#comment-14512405
 ] 

Hudson commented on HDFS-8211:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #908 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/908/])
HDFS-8211. DataNode UUID is always null in the JMX counter. (Contributed by Anu 
Engineer) (arp: rev dcc5455e07be75ca44eb6a33d4e706eec11b9905)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeUUID.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 DataNode UUID is always null in the JMX counter
 ---

 Key: HDFS-8211
 URL: https://issues.apache.org/jira/browse/HDFS-8211
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: Anu Engineer
Assignee: Anu Engineer
Priority: Minor
 Fix For: 2.8.0

 Attachments: hdfs-8211.001.patch, hdfs-8211.002.patch


 The DataNode JMX counters are tagged with DataNode UUID, but it always gets a 
 null value instead of the UUID.
 {code}
 Hadoop:service=DataNode,name=FSDatasetState*-null*.
 {code}
 This null is supposed to be the datanode UUID.
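 A self-contained JMX illustration of how the literal string null ends up in an 
 ObjectName when the tag field has not been initialized before registration 
 (a generic demo, not the HDFS-8211 patch):
 {code}
 import java.lang.management.ManagementFactory;
 import javax.management.MBeanServer;
 import javax.management.ObjectName;

 public class MBeanNameDemo {
   public interface DemoMBean { }
   public static class Demo implements DemoMBean { }

   public static void main(String[] args) throws Exception {
     String uuid = null;  // tag value not yet initialized, as in the report
     MBeanServer server = ManagementFactory.getPlatformMBeanServer();
     ObjectName name =
         new ObjectName("Hadoop:service=DataNode,name=FSDatasetState-" + uuid);
     server.registerMBean(new Demo(), name);
     // The registered name contains "...FSDatasetState-null".
     System.out.println(name);
   }
 }
 {code}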



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8176) Record from/to snapshots in audit log for snapshot diff report

2015-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512416#comment-14512416
 ] 

Hudson commented on HDFS-8176:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #908 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/908/])
HDFS-8176. Record from/to snapshots in audit log for snapshot diff report. 
Contributed by J. Andreina. (jing9: rev 
cf6c8a1b4ee70dd45c2e42ac61999e61a05db035)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Record from/to snapshots in audit log for snapshot diff report
 --

 Key: HDFS-8176
 URL: https://issues.apache.org/jira/browse/HDFS-8176
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Reporter: J.Andreina
Assignee: J.Andreina
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8176.1.patch, HDFS-8176.2.patch


 Provide information about the snapshots compared in the audit log. 
 In the current code, the value null is being passed: 
 {code}
 logAuditEvent(diffs != null, computeSnapshotDiff, null, null, null);
 {code}
 {noformat}
 2015-04-15 09:56:49,328 INFO FSNamesystem.audit: allowed=true   ugi=Rex 
 (auth:SIMPLE)   ip=/X   cmd=computeSnapshotDiff   src=null
 dst=null   perm=null   proto=rpc
 {noformat}
 {noformat}
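 A hypothetical sketch of passing the snapshot roots instead of nulls (not the 
 committed patch; snapshotRoot, fromSnapshot and toSnapshot are assumed local 
 variables):
 {code}
 // Assumed variables: snapshotRoot is the snapshottable directory,
 // fromSnapshot/toSnapshot are the snapshot names being compared.
 String fromPath = snapshotRoot + "/.snapshot/" + fromSnapshot;
 String toPath = snapshotRoot + "/.snapshot/" + toSnapshot;
 logAuditEvent(diffs != null, "computeSnapshotDiff", fromPath, toPath, null);
 {code}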



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8110) Remove unsupported 'hdfs namenode -rollingUpgrade downgrade' from document

2015-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512419#comment-14512419
 ] 

Hudson commented on HDFS-8110:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #908 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/908/])
HDFS-8110. Remove unsupported 'hdfs namenode -rollingUpgrade downgrade' from 
document. Contributed by J.Andreina. (aajisaka: rev 
91b97c21c9271629dae7515a6a58c35d13b777ff)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/xdoc/HdfsRollingUpgrade.xml


 Remove unsupported 'hdfs namenode -rollingUpgrade downgrade' from document
 --

 Key: HDFS-8110
 URL: https://issues.apache.org/jira/browse/HDFS-8110
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: J.Andreina
Assignee: J.Andreina
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-8110.1.patch


 Support for -rollingUpgrade downgrade has been removed as part of HDFS-7302.
 The corresponding information should be removed from the document as well.
 {noformat}
 Downgrade with Downtime
 Administrator may choose to first shutdown the cluster and then downgrade it. 
 The following are the steps:
 Shutdown all NNs and DNs.
 Restore the pre-upgrade release in all machines.
 Start NNs with the -rollingUpgrade downgrade option.
 Start DNs normally.
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8191) Fix byte to integer casting in SimulatedFSDataset#simulatedByte

2015-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512400#comment-14512400
 ] 

Hudson commented on HDFS-8191:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #174 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/174/])
HDFS-8191. Fix byte to integer casting in SimulatedFSDataset#simulatedByte. 
Contributed by Zhe Zhang. (wang: rev c7d9ad68e34c7f8b9efada6cfbf7d5474cbeff11)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java


 Fix byte to integer casting in SimulatedFSDataset#simulatedByte
 ---

 Key: HDFS-8191
 URL: https://issues.apache.org/jira/browse/HDFS-8191
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Zhe Zhang
Assignee: Zhe Zhang
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8191.000.patch, HDFS-8191.001.patch, 
 HDFS-8191.002.patch, HDFS-8191.003.patch, HDFS-8191.003.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8110) Remove unsupported 'hdfs namenode -rollingUpgrade downgrade' from document

2015-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512401#comment-14512401
 ] 

Hudson commented on HDFS-8110:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #174 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/174/])
HDFS-8110. Remove unsupported 'hdfs namenode -rollingUpgrade downgrade' from 
document. Contributed by J.Andreina. (aajisaka: rev 
91b97c21c9271629dae7515a6a58c35d13b777ff)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/xdoc/HdfsRollingUpgrade.xml


 Remove unsupported 'hdfs namenode -rollingUpgrade downgrade' from document
 --

 Key: HDFS-8110
 URL: https://issues.apache.org/jira/browse/HDFS-8110
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: J.Andreina
Assignee: J.Andreina
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-8110.1.patch


 Support for -rollingUpgrade downgrade has been removed as part of HDFS-7302.
 The corresponding information should be removed from the document as well.
 {noformat}
 Downgrade with Downtime
 Administrator may choose to first shutdown the cluster and then downgrade it. 
 The following are the steps:
 Shutdown all NNs and DNs.
 Restore the pre-upgrade release in all machines.
 Start NNs with the -rollingUpgrade downgrade option.
 Start DNs normally.
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8211) DataNode UUID is always null in the JMX counter

2015-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512387#comment-14512387
 ] 

Hudson commented on HDFS-8211:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #174 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/174/])
HDFS-8211. DataNode UUID is always null in the JMX counter. (Contributed by Anu 
Engineer) (arp: rev dcc5455e07be75ca44eb6a33d4e706eec11b9905)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeUUID.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java


 DataNode UUID is always null in the JMX counter
 ---

 Key: HDFS-8211
 URL: https://issues.apache.org/jira/browse/HDFS-8211
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: Anu Engineer
Assignee: Anu Engineer
Priority: Minor
 Fix For: 2.8.0

 Attachments: hdfs-8211.001.patch, hdfs-8211.002.patch


 The DataNode JMX counters are tagged with DataNode UUID, but it always gets a 
 null value instead of the UUID.
 {code}
 Hadoop:service=DataNode,name=FSDatasetState*-null*.
 {code}
 This null is supposed to be the datanode UUID.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8176) Record from/to snapshots in audit log for snapshot diff report

2015-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512398#comment-14512398
 ] 

Hudson commented on HDFS-8176:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #174 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/174/])
HDFS-8176. Record from/to snapshots in audit log for snapshot diff report. 
Contributed by J. Andreina. (jing9: rev 
cf6c8a1b4ee70dd45c2e42ac61999e61a05db035)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


 Record from/to snapshots in audit log for snapshot diff report
 --

 Key: HDFS-8176
 URL: https://issues.apache.org/jira/browse/HDFS-8176
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Reporter: J.Andreina
Assignee: J.Andreina
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8176.1.patch, HDFS-8176.2.patch


 Provide information about the snapshots compared in the audit log. 
 In the current code, the value null is being passed: 
 {code}
 logAuditEvent(diffs != null, computeSnapshotDiff, null, null, null);
 {code}
 {noformat}
 2015-04-15 09:56:49,328 INFO FSNamesystem.audit: allowed=true   ugi=Rex 
 (auth:SIMPLE)   ip=/X   cmd=computeSnapshotDiff   src=null
 dst=null   perm=null   proto=rpc
 {noformat}
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8250) Create HDFS bindings for java.nio.file.FileSystem

2015-04-25 Thread Oleg Zhurakousky (JIRA)
Oleg Zhurakousky created HDFS-8250:
--

 Summary: Create HDFS bindings for java.nio.file.FileSystem
 Key: HDFS-8250
 URL: https://issues.apache.org/jira/browse/HDFS-8250
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Oleg Zhurakousky
Assignee: Oleg Zhurakousky


It's a nice-to-have feature, as it would give developers a unified 
programming model for dealing with various file systems, even though this 
particular issue only addresses HDFS.

It has already been done in an unrelated project, so I just need to extract 
the code and provide a patch.
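For illustration, the kind of client code such a binding would enable through the standard java.nio.file SPI (the hdfs provider is the proposed feature; the URI and paths are made up):
{code}
import java.net.URI;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Collections;
import java.util.stream.Stream;

public class NioHdfsDemo {
  public static void main(String[] args) throws Exception {
    // Resolves the provider registered for the "hdfs" scheme via the SPI.
    URI uri = URI.create("hdfs://namenode.example.com:8020/");
    try (FileSystem fs =
             FileSystems.newFileSystem(uri, Collections.<String, Object>emptyMap())) {
      Path dir = fs.getPath("/user/demo");
      try (Stream<Path> entries = Files.list(dir)) {
        entries.forEach(System.out::println);
      }
    }
  }
}
{code}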



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8191) Fix byte to integer casting in SimulatedFSDataset#simulatedByte

2015-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512465#comment-14512465
 ] 

Hudson commented on HDFS-8191:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #165 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/165/])
HDFS-8191. Fix byte to integer casting in SimulatedFSDataset#simulatedByte. 
Contributed by Zhe Zhang. (wang: rev c7d9ad68e34c7f8b9efada6cfbf7d5474cbeff11)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java


 Fix byte to integer casting in SimulatedFSDataset#simulatedByte
 ---

 Key: HDFS-8191
 URL: https://issues.apache.org/jira/browse/HDFS-8191
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Zhe Zhang
Assignee: Zhe Zhang
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8191.000.patch, HDFS-8191.001.patch, 
 HDFS-8191.002.patch, HDFS-8191.003.patch, HDFS-8191.003.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8251) Move the synthetic load generator into its own package

2015-04-25 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-8251:
--

 Summary: Move the synthetic load generator into its own package
 Key: HDFS-8251
 URL: https://issues.apache.org/jira/browse/HDFS-8251
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Allen Wittenauer


It doesn't really make sense for the HDFS load generator to be a part of the 
(extremely large) mapreduce jobclient package. It should be pulled out and put in 
its own package, probably in hadoop-tools.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7281) Missing block is marked as corrupted block

2015-04-25 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512508#comment-14512508
 ] 

Allen Wittenauer commented on HDFS-7281:


It should be noted that if I had to wager, the fsck output is probably the 
second most automatically processed output for operations people.  (With hdfs 
audit log being first.)  Making changes in the code here is definitely 
disruptive.

 Missing block is marked as corrupted block
 --

 Key: HDFS-7281
 URL: https://issues.apache.org/jira/browse/HDFS-7281
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ming Ma
Assignee: Ming Ma
  Labels: supportability
 Attachments: HDFS-7281-2.patch, HDFS-7281-3.patch, HDFS-7281-4.patch, 
 HDFS-7281.patch


 In the situation where a block has lost all its replicas, fsck shows the block 
 is missing as well as corrupted. Perhaps it is better not to mark the block 
 corrupted in this case. The reason it is marked as corrupted is that 
 numCorruptNodes == numNodes == 0 in the following code.
 {noformat}
 BlockManager
 final boolean isCorrupt = numCorruptNodes == numNodes;
 {noformat}
 Would like to clarify whether it is the intent to mark a missing block as corrupted 
 or it is just a bug.
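 One possible direction, sketched only (not the attached patches): treat zero 
 replicas as missing rather than corrupt.
 {code}
 // Hypothetical variant of the quoted check in BlockManager:
 final boolean isMissing = numNodes == 0;
 final boolean isCorrupt = !isMissing && numCorruptNodes == numNodes;
 {code}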



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8176) Record from/to snapshots in audit log for snapshot diff report

2015-04-25 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512510#comment-14512510
 ] 

Allen Wittenauer commented on HDFS-8176:


Err, what's the value of doing this?  Does doing a snapshot compare actually 
change the data on HDFS?  If not, then this shouldn't be in the audit log.

 Record from/to snapshots in audit log for snapshot diff report
 --

 Key: HDFS-8176
 URL: https://issues.apache.org/jira/browse/HDFS-8176
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Reporter: J.Andreina
Assignee: J.Andreina
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8176.1.patch, HDFS-8176.2.patch


 Provide information about the snapshots compared in the audit log. 
 In the current code, the value null is being passed: 
 {code}
 logAuditEvent(diffs != null, computeSnapshotDiff, null, null, null);
 {code}
 {noformat}
 2015-04-15 09:56:49,328 INFO FSNamesystem.audit: allowed=true   ugi=Rex 
 (auth:SIMPLE)   ip=/X   cmd=computeSnapshotDiff   src=null
 dst=null   perm=null   proto=rpc
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7281) Missing block is marked as corrupted block

2015-04-25 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512518#comment-14512518
 ] 

Yongjun Zhang commented on HDFS-7281:
-

Thanks [~aw], that's very informative.


 Missing block is marked as corrupted block
 --

 Key: HDFS-7281
 URL: https://issues.apache.org/jira/browse/HDFS-7281
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ming Ma
Assignee: Ming Ma
  Labels: supportability
 Attachments: HDFS-7281-2.patch, HDFS-7281-3.patch, HDFS-7281-4.patch, 
 HDFS-7281.patch


 In the situation where a block has lost all its replicas, fsck shows the block 
 is missing as well as corrupted. Perhaps it is better not to mark the block 
 corrupted in this case. The reason it is marked as corrupted is that 
 numCorruptNodes == numNodes == 0 in the following code.
 {noformat}
 BlockManager
 final boolean isCorrupt = numCorruptNodes == numNodes;
 {noformat}
 Would like to clarify whether it is the intent to mark a missing block as corrupted 
 or it is just a bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8251) Move the synthetic load generator into its own package

2015-04-25 Thread J.Andreina (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512524#comment-14512524
 ] 

J.Andreina commented on HDFS-8251:
--

I would like to work on this, [~aw]. Feel free to assign it to yourself if you have 
already started working on it.

 Move the synthetic load generator into its own package
 --

 Key: HDFS-8251
 URL: https://issues.apache.org/jira/browse/HDFS-8251
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Allen Wittenauer
Assignee: J.Andreina

 It doesn't really make sense for the HDFS load generator to be a part of the 
 (extremely large) mapreduce jobclient package. It should be pulled out and 
 put in its own package, probably in hadoop-tools.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8251) Move the synthetic load generator into its own package

2015-04-25 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina reassigned HDFS-8251:


Assignee: J.Andreina

 Move the synthetic load generator into its own package
 --

 Key: HDFS-8251
 URL: https://issues.apache.org/jira/browse/HDFS-8251
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Allen Wittenauer
Assignee: J.Andreina

 It doesn't really make sense for the HDFS load generator to be a part of the 
 (extremely large) mapreduce jobclient package. It should be pulled out and 
 put in its own package, probably in hadoop-tools.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8211) DataNode UUID is always null in the JMX counter

2015-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512529#comment-14512529
 ] 

Hudson commented on HDFS-8211:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #175 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/175/])
HDFS-8211. DataNode UUID is always null in the JMX counter. (Contributed by Anu 
Engineer) (arp: rev dcc5455e07be75ca44eb6a33d4e706eec11b9905)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeUUID.java


 DataNode UUID is always null in the JMX counter
 ---

 Key: HDFS-8211
 URL: https://issues.apache.org/jira/browse/HDFS-8211
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: Anu Engineer
Assignee: Anu Engineer
Priority: Minor
 Fix For: 2.8.0

 Attachments: hdfs-8211.001.patch, hdfs-8211.002.patch


 The DataNode JMX counters are tagged with DataNode UUID, but it always gets a 
 null value instead of the UUID.
 {code}
 Hadoop:service=DataNode,name=FSDatasetState*-null*.
 {code}
 This null is supposed to be the datanode UUID.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8176) Record from/to snapshots in audit log for snapshot diff report

2015-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512540#comment-14512540
 ] 

Hudson commented on HDFS-8176:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #175 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/175/])
HDFS-8176. Record from/to snapshots in audit log for snapshot diff report. 
Contributed by J. Andreina. (jing9: rev 
cf6c8a1b4ee70dd45c2e42ac61999e61a05db035)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


 Record from/to snapshots in audit log for snapshot diff report
 --

 Key: HDFS-8176
 URL: https://issues.apache.org/jira/browse/HDFS-8176
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Reporter: J.Andreina
Assignee: J.Andreina
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8176.1.patch, HDFS-8176.2.patch


 Provide information about the snapshots compared in the audit log. 
 In the current code, the value null is being passed: 
 {code}
 logAuditEvent(diffs != null, computeSnapshotDiff, null, null, null);
 {code}
 {noformat}
 2015-04-15 09:56:49,328 INFO FSNamesystem.audit: allowed=true   ugi=Rex 
 (auth:SIMPLE)   ip=/X   cmd=computeSnapshotDiff   src=null
 dst=null   perm=null   proto=rpc
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8191) Fix byte to integer casting in SimulatedFSDataset#simulatedByte

2015-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512542#comment-14512542
 ] 

Hudson commented on HDFS-8191:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #175 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/175/])
HDFS-8191. Fix byte to integer casting in SimulatedFSDataset#simulatedByte. 
Contributed by Zhe Zhang. (wang: rev c7d9ad68e34c7f8b9efada6cfbf7d5474cbeff11)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java


 Fix byte to integer casting in SimulatedFSDataset#simulatedByte
 ---

 Key: HDFS-8191
 URL: https://issues.apache.org/jira/browse/HDFS-8191
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Zhe Zhang
Assignee: Zhe Zhang
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8191.000.patch, HDFS-8191.001.patch, 
 HDFS-8191.002.patch, HDFS-8191.003.patch, HDFS-8191.003.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7673) synthetic load generator docs give incorrect/incomplete commands

2015-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512504#comment-14512504
 ] 

Hudson commented on HDFS-7673:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7675 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7675/])
HDFS-7673. synthetic load generator docs give incorrect/incomplete commands 
(Brahma Reddy Battula via aw) (aw: rev f83c55a6be4d6482d05613446be6322a5bce8add)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/SLGUserGuide.md


 synthetic load generator docs give incorrect/incomplete commands
 

 Key: HDFS-7673
 URL: https://issues.apache.org/jira/browse/HDFS-7673
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula
 Fix For: 3.0.0

 Attachments: HDFS-7673.patch


 The synthetic load generator guide gives this helpful command to start it:
 {code}
 java LoadGenerator [options]
 {code}
 This, of course, won't work.  What's the class path?  What jar is it in?  Is 
 this really the command?  Isn't there a shell script wrapping this?
 This atrocity against normal users is committed three more times after this 
 one with equally incomplete commands for other parts of the system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8176) Record from/to snapshots in audit log for snapshot diff report

2015-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512463#comment-14512463
 ] 

Hudson commented on HDFS-8176:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #165 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/165/])
HDFS-8176. Record from/to snapshots in audit log for snapshot diff report. 
Contributed by J. Andreina. (jing9: rev 
cf6c8a1b4ee70dd45c2e42ac61999e61a05db035)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Record from/to snapshots in audit log for snapshot diff report
 --

 Key: HDFS-8176
 URL: https://issues.apache.org/jira/browse/HDFS-8176
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Reporter: J.Andreina
Assignee: J.Andreina
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8176.1.patch, HDFS-8176.2.patch


 Provide information about the snapshots compared in the audit log. 
 In the current code, the value null is being passed: 
 {code}
 logAuditEvent(diffs != null, computeSnapshotDiff, null, null, null);
 {code}
 {noformat}
 2015-04-15 09:56:49,328 INFO FSNamesystem.audit: allowed=true   ugi=Rex 
 (auth:SIMPLE)   ip=/X   cmd=computeSnapshotDiff   src=null
 dst=null   perm=null   proto=rpc
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8110) Remove unsupported 'hdfs namenode -rollingUpgrade downgrade' from document

2015-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14512448#comment-14512448
 ] 

Hudson commented on HDFS-8110:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2106 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2106/])
HDFS-8110. Remove unsupported 'hdfs namenode -rollingUpgrade downgrade' from 
document. Contributed by J.Andreina. (aajisaka: rev 
91b97c21c9271629dae7515a6a58c35d13b777ff)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/xdoc/HdfsRollingUpgrade.xml


 Remove unsupported 'hdfs namenode -rollingUpgrade downgrade' from document
 --

 Key: HDFS-8110
 URL: https://issues.apache.org/jira/browse/HDFS-8110
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: J.Andreina
Assignee: J.Andreina
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-8110.1.patch


 Support for -rollingUpgrade downgrade has been removed as part of HDFS-7302.
 The corresponding information should be removed from the document as well.
 {noformat}
 Downgrade with Downtime
 Administrator may choose to first shutdown the cluster and then downgrade it. 
 The following are the steps:
 Shutdown all NNs and DNs.
 Restore the pre-upgrade release in all machines.
 Start NNs with the -rollingUpgrade downgrade option.
 Start DNs normally.
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

