[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-03-01 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Affects Version/s: 2.6.0

> Operations(e.g. balance) failed due to deficient configuration parsing
> --
>
> Key: HDFS-7601
> URL: https://issues.apache.org/jira/browse/HDFS-7601
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 2.3.0, 2.6.0
>Reporter: Doris Gu
>Priority: Minor
>
> Some operations, for example balance, parse the configuration (from 
> core-site.xml, hdfs-site.xml) to get the NameService URIs to connect to.
> The current method treats URIs that end with "/" and those without as two 
> different URIs, so the following operation may fail.
> bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
> URIs, while they are actually the same.
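For illustration, a minimal standalone sketch of the pitfall with plain java.net.URI (the normalize() helper is an assumption for this example, not the committed fix):

{noformat}
import java.net.URI;

public class NameServiceUriDemo {
  // Strip a single trailing slash so hdfs://haCluster and hdfs://haCluster/
  // compare equal. A sketch of one possible normalization, not the patch.
  static URI normalize(URI uri) {
    String s = uri.toString();
    return s.endsWith("/") ? URI.create(s.substring(0, s.length() - 1)) : uri;
  }

  public static void main(String[] args) {
    URI a = URI.create("hdfs://haCluster");
    URI b = URI.create("hdfs://haCluster/");
    System.out.println(a.equals(b));                       // false: "" vs "/" path
    System.out.println(normalize(a).equals(normalize(b))); // true after normalizing
  }
}
{noformat}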



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7439) Add BlockOpResponseProto's message to DFSClient's exception message

2015-03-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342865#comment-14342865
 ] 

Hudson commented on HDFS-7439:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7234 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7234/])
HDFS-7439. Add BlockOpResponseProto's message to the exception messages.  
Contributed by Takanobu Asanuma (szetszwo: rev 
67ed59348d638d56e6752ba2c71fdcd69567546d)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtoUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader2.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java


> Add BlockOpResponseProto's message to DFSClient's exception message
> ---
>
> Key: HDFS-7439
> URL: https://issues.apache.org/jira/browse/HDFS-7439
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover, datanode, hdfs-client
>Reporter: Ming Ma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HDFS-7439.1.patch, HDFS-7439.2.patch, HDFS-7439.3.patch
>
>
> When (BlockOpResponseProto#getStatus() != SUCCESS), it helps with debugging 
> if DFSClient can add BlockOpResponseProto's message to the exception message 
> applications will get. For example, instead of
> {noformat}
> throw new IOException("Got error for OP_READ_BLOCK, self="
> + peer.getLocalAddressString() + ", remote="
> + peer.getRemoteAddressString() + ", for file " + file
> + ", for pool " + block.getBlockPoolId() + " block " 
> + block.getBlockId() + "_" + block.getGenerationStamp());
> {noformat}
> It could be,
> {noformat}
> throw new IOException("Got error for OP_READ_BLOCK, self="
> + peer.getLocalAddressString() + ", remote="
> + peer.getRemoteAddressString() + ", for file " + file
> + ", for pool " + block.getBlockPoolId() + " block " 
> + block.getBlockId() + "_" + block.getGenerationStamp()
> + ", status message " + status.getMessage());
> {noformat}
> We might want to check out all the references to BlockOpResponseProto in 
> DFSClient.
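For context, a hedged sketch of the pattern described above: check the response status once and surface the server-side message (BlockOpResponseProto and Status are the real datatransfer protobuf types; the helper's shape is assumed here, not copied from the committed patch):

{noformat}
import java.io.IOException;

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.BlockOpResponseProto;
import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.Status;

public class BlockOpStatusCheck {
  // Sketch: include the server-provided message instead of dropping it.
  static void checkBlockOpStatus(BlockOpResponseProto response, String logInfo)
      throws IOException {
    if (response.getStatus() != Status.SUCCESS) {
      throw new IOException("Got error, status message "
          + response.getMessage() + ", " + logInfo);
    }
  }
}
{noformat}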



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7439) Add BlockOpResponseProto's message to DFSClient's exception message

2015-03-01 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-7439:
--
   Resolution: Fixed
Fix Version/s: 2.7.0
   Status: Resolved  (was: Patch Available)

I have committed this.  Thanks, Takanobu!

> Add BlockOpResponseProto's message to DFSClient's exception message
> ---
>
> Key: HDFS-7439
> URL: https://issues.apache.org/jira/browse/HDFS-7439
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover, datanode, hdfs-client
>Reporter: Ming Ma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HDFS-7439.1.patch, HDFS-7439.2.patch, HDFS-7439.3.patch
>
>
> When (BlockOpResponseProto#getStatus() != SUCCESS), it helps with debugging 
> if DFSClient can add BlockOpResponseProto's message to the exception message 
> applications will get. For example, instead of
> {noformat}
> throw new IOException("Got error for OP_READ_BLOCK, self="
> + peer.getLocalAddressString() + ", remote="
> + peer.getRemoteAddressString() + ", for file " + file
> + ", for pool " + block.getBlockPoolId() + " block " 
> + block.getBlockId() + "_" + block.getGenerationStamp());
> {noformat}
> It could be,
> {noformat}
> throw new IOException("Got error for OP_READ_BLOCK, self="
> + peer.getLocalAddressString() + ", remote="
> + peer.getRemoteAddressString() + ", for file " + file
> + ", for pool " + block.getBlockPoolId() + " block " 
> + block.getBlockId() + "_" + block.getGenerationStamp()
> + ", status message " + status.getMessage());
> {noformat}
> We might want to check out all the references to BlockOpResponseProto in 
> DFSClient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7439) Add BlockOpResponseProto's message to DFSClient's exception message

2015-03-01 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342851#comment-14342851
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7439:
---

The failed test is not related to this.  I filed HDFS-7865.

> Add BlockOpResponseProto's message to DFSClient's exception message
> ---
>
> Key: HDFS-7439
> URL: https://issues.apache.org/jira/browse/HDFS-7439
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover, datanode, hdfs-client
>Reporter: Ming Ma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HDFS-7439.1.patch, HDFS-7439.2.patch, HDFS-7439.3.patch
>
>
> When (BlockOpResponseProto#getStatus() != SUCCESS), it helps with debugging 
> if DFSClient can add BlockOpResponseProto's message to the exception message 
> applications will get. For example, instead of
> {noformat}
> throw new IOException("Got error for OP_READ_BLOCK, self="
> + peer.getLocalAddressString() + ", remote="
> + peer.getRemoteAddressString() + ", for file " + file
> + ", for pool " + block.getBlockPoolId() + " block " 
> + block.getBlockId() + "_" + block.getGenerationStamp());
> {noformat}
> It could be,
> {noformat}
> throw new IOException("Got error for OP_READ_BLOCK, self="
> + peer.getLocalAddressString() + ", remote="
> + peer.getRemoteAddressString() + ", for file " + file
> + ", for pool " + block.getBlockPoolId() + " block " 
> + block.getBlockId() + "_" + block.getGenerationStamp()
> + ", status message " + status.getMessage());
> {noformat}
> We might want to check out all the references to BlockOpResponseProto in 
> DFSClient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7865) NullPointerException in SimulatedFSDataset

2015-03-01 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDFS-7865:
-

 Summary: NullPointerException in SimulatedFSDataset
 Key: HDFS-7865
 URL: https://issues.apache.org/jira/browse/HDFS-7865
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Tsz Wo Nicholas Sze
Priority: Minor


https://builds.apache.org/job/PreCommit-HDFS-Build/9690//testReport/org.apache.hadoop.hdfs.server.balancer/TestBalancer/testUnknownDatanode/
{noformat}
java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.server.datanode.SimulatedFSDataset$BInfo.access$400(SimulatedFSDataset.java:126)
at 
org.apache.hadoop.hdfs.server.datanode.SimulatedFSDataset.getPinning(SimulatedFSDataset.java:1319)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.copyBlock(DataXceiver.java:969)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opCopyBlock(Receiver.java:244)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:80)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:249)
at java.lang.Thread.run(Thread.java:745)
{noformat}
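The trace points at a lookup returning null before the BInfo is dereferenced. A standalone sketch of the kind of guard a fix might add (the map and names here are hypothetical, not the actual SimulatedFSDataset code):

{noformat}
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class PinningGuardDemo {
  static final Map<String, Boolean> PIN_MAP = new HashMap<>();

  // Hypothetical guard: an unknown block yields a descriptive IOException
  // rather than the NullPointerException in the stack trace above.
  static boolean getPinning(String blockId) throws IOException {
    Boolean pinned = PIN_MAP.get(blockId);
    if (pinned == null) {
      throw new IOException("Block " + blockId + " is not valid");
    }
    return pinned;
  }

  public static void main(String[] args) throws IOException {
    PIN_MAP.put("blk_1", true);
    System.out.println(getPinning("blk_1")); // true
    System.out.println(getPinning("blk_2")); // throws IOException, not NPE
  }
}
{noformat}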




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7439) Add BlockOpResponseProto's message to DFSClient's exception message

2015-03-01 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-7439:
--
 Component/s: hdfs-client
  datanode
  balancer & mover
Hadoop Flags: Reviewed

+1 patch looks good.

> Add BlockOpResponseProto's message to DFSClient's exception message
> ---
>
> Key: HDFS-7439
> URL: https://issues.apache.org/jira/browse/HDFS-7439
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover, datanode, hdfs-client
>Reporter: Ming Ma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HDFS-7439.1.patch, HDFS-7439.2.patch, HDFS-7439.3.patch
>
>
> When (BlockOpResponseProto#getStatus() != SUCCESS), it helps with debugging 
> if DFSClient can add BlockOpResponseProto's message to the exception message 
> applications will get. For example, instead of
> {noformat}
> throw new IOException("Got error for OP_READ_BLOCK, self="
> + peer.getLocalAddressString() + ", remote="
> + peer.getRemoteAddressString() + ", for file " + file
> + ", for pool " + block.getBlockPoolId() + " block " 
> + block.getBlockId() + "_" + block.getGenerationStamp());
> {noformat}
> It could be,
> {noformat}
> throw new IOException("Got error for OP_READ_BLOCK, self="
> + peer.getLocalAddressString() + ", remote="
> + peer.getRemoteAddressString() + ", for file " + file
> + ", for pool " + block.getBlockPoolId() + " block " 
> + block.getBlockId() + "_" + block.getGenerationStamp()
> + ", status message " + status.getMessage());
> {noformat}
> We might want to check out all the references to BlockOpResponseProto in 
> DFSClient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4681) TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails using IBM java

2015-03-01 Thread Ayappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342844#comment-14342844
 ] 

Ayappan commented on HDFS-4681:
---

Thanks Allen

> TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails 
> using IBM java
> -
>
> Key: HDFS-4681
> URL: https://issues.apache.org/jira/browse/HDFS-4681
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.5.2
> Environment: PowerPC Big Endian architecture
>Reporter: Tian Hong Wang
>Assignee: Ayappan
> Fix For: 3.0.0
>
> Attachments: HDFS-4681-v1.patch, HDFS-4681-v2.patch, HDFS-4681.patch
>
>
> TestBlocksWithNotEnoughRacks unit test fails with the following error message:
> 
> testCorruptBlockRereplicatedAcrossRacks(org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks)
>   Time elapsed: 8997 sec  <<< FAILURE!
> org.junit.ComparisonFailure: Corrupt replica 
> expected:<...��^EI�u�[�{���[$�\hF�[�R{O�L^S��g�#�O��׼��Wv��6u4Hd)FaŔ��^W�0��H/�^ZU^@�6�<02>���":)$�{|�^@�-���|GvW��7g
>  �/M��[U!eF�>^N^?�4pR�d��|��Ŵ7j^O^Sh�^@�nu�(�^C^Y�;I�Q�K^O"c���   
> oKtE�*�^\3u��]Ē:mŭ^^y�^H��_^T�^ZS4�7�C�^G�_���\|^W�vo���zgU�lmJ)_vq~�+^Mo^G^O�W}�.�4
> ��6b�S�&G�^?��m4FW#^@
> D5��}�^Z�^]���mfR^G#T-�N��̋�p���`�~��`�^F;�^C]> but 
> was:<...��^EI�u�[�{���[$�\hF�[R{O�L^S��g�#�O��׼��Wv��6u4Hd)FaŔ��^W�0��H/�^ZU^@�6�<02>�":)$�{|�^@�-���|GvW��7g
>  �/M�[U!eF�>^N^?�4pR�d��|��Ŵ7j^O^Sh�^@�nu�(�^C^Y�;I�Q�K^O"c���  
> oKtE�*�^\3u��]Ē:mŭ^^y���^H��_^T�^ZS���4�7�C�^G�_���\|^W�vo���zgU�lmJ)_vq~�+^Mo^G^O�W}�.�4
>��6b�S�&G�^?��m4FW#^@
> D5��}�^Z�^]���mfR^G#T-�N�̋�p���`�~��`�^F;�]>
> at org.junit.Assert.assertEquals(Assert.java:123)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks.testCorruptBlockRereplicatedAcrossRacks(TestBlocksWithNotEnoughRacks.java:229)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7855) Separate class Packet from DFSOutputStream

2015-03-01 Thread Li Bo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342779#comment-14342779
 ] 

Li Bo commented on HDFS-7855:
-

I checked the test results and console output carefully and didn't find any 
build or test error. It's really confusing. Could anyone tell me the reason?

> Separate class Packet from DFSOutputStream
> --
>
> Key: HDFS-7855
> URL: https://issues.apache.org/jira/browse/HDFS-7855
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-7855-001.patch, HDFS-7855-002.patch, 
> HDFS-7855-003.patch
>
>
> Class Packet is an inner class in DFSOutputStream and also used by 
> DataStreamer. This sub task separates Packet out of DFSOutputStream to aid 
> the separation in HDFS-7854.
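A minimal skeleton of the refactoring direction (the standalone class name follows the rename to DFSPacket mentioned later in this thread; the fields are assumptions for illustration, not the actual patch):

{noformat}
// Before: Packet is a private inner class of DFSOutputStream, so DataStreamer
// has to reach into DFSOutputStream to use it. After: a standalone class that
// both can share.
class DFSPacket {
  private final long seqno;   // packet sequence number (assumed field)
  private final byte[] buf;   // packet payload buffer (assumed field)

  DFSPacket(long seqno, int bufSize) {
    this.seqno = seqno;
    this.buf = new byte[bufSize];
  }

  long getSeqno() { return seqno; }
  int capacity() { return buf.length; }
}
{noformat}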



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7855) Separate class Packet from DFSOutputStream

2015-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342751#comment-14342751
 ] 

Hadoop QA commented on HDFS-7855:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12701763/HDFS-7855-003.patch
  against trunk revision e9ac88a.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The test build failed in 
hadoop-hdfs-project/hadoop-hdfs 

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9691//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9691//console

This message is automatically generated.

> Separate class Packet from DFSOutputStream
> --
>
> Key: HDFS-7855
> URL: https://issues.apache.org/jira/browse/HDFS-7855
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-7855-001.patch, HDFS-7855-002.patch, 
> HDFS-7855-003.patch
>
>
> Class Packet is an inner class in DFSOutputStream and also used by 
> DataStreamer. This sub task separates Packet out of DFSOutputStream to aid 
> the separation in HDFS-7854.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5292) clean up output of `dfs -du -s`

2015-03-01 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342735#comment-14342735
 ] 

Akira AJISAKA commented on HDFS-5292:
-

Thanks [~aw] for the review. 
bq. but for trunk only since it changes the output in an incompatible way.
Makes sense to me. Regarding the output of du, I think HADOOP-6857 is also 
incompatible. I'll mark this jira.

> clean up output of `dfs -du -s`
> ---
>
> Key: HDFS-5292
> URL: https://issues.apache.org/jira/browse/HDFS-5292
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.1.1-beta
>Reporter: Nick Dimiduk
>Assignee: Akira AJISAKA
>Priority: Minor
> Attachments: HDFS-5292-002.patch, HDFS-5292.patch
>
>
> This could be formatted a little nicer:
> {noformat}
> $ hdfs dfs -du -s /apps/hbase/data/data/default/*
> 22604541341  /apps/hbase/data/data/default/IntegrationTestBulkLoad
> 896656491  /apps/hbase/data/data/default/IntegrationTestIngest
> 33776145312  /apps/hbase/data/data/default/IntegrationTestLoadAndVerify
> 83512463  /apps/hbase/data/data/default/SendTracesTable
> 532898  /apps/hbase/data/data/default/TestAcidGuarantees
> 27294  /apps/hbase/data/data/default/demo_table
> 1410  /apps/hbase/data/data/default/example
> 2531532801  /apps/hbase/data/data/default/loadtest_d1
> 901  /apps/hbase/data/data/default/table_qho71mpvj8
> 1433  /apps/hbase/data/data/default/tcreatetbl
> 1690  /apps/hbase/data/data/default/tdelrowtbl
> 360  /apps/hbase/data/data/default/testtbl1
> 360  /apps/hbase/data/data/default/testtbl2
> 360  /apps/hbase/data/data/default/testtbl3
> 1515  /apps/hbase/data/data/default/tquerytbl
> 1513  /apps/hbase/data/data/default/tscantbl
> {noformat}
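One way to tidy the alignment, as a hedged sketch (right-justify the byte counts into a fixed-width column; illustrative only, not the committed change):

{noformat}
public class DuFormatDemo {
  public static void main(String[] args) {
    long[] sizes = { 22604541341L, 896656491L, 1410L };
    String[] paths = {
        "/apps/hbase/data/data/default/IntegrationTestBulkLoad",
        "/apps/hbase/data/data/default/IntegrationTestIngest",
        "/apps/hbase/data/data/default/example" };
    for (int i = 0; i < sizes.length; i++) {
      // %13d right-justifies the size so all paths start in the same column
      System.out.printf("%13d  %s%n", sizes[i], paths[i]);
    }
  }
}
{noformat}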



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-7864) Erasure Coding: Update safemode calculation for striped blocks

2015-03-01 Thread GAO Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GAO Rui reassigned HDFS-7864:
-

Assignee: GAO Rui  (was: Jing Zhao)

> Erasure Coding: Update safemode calculation for striped blocks
> --
>
> Key: HDFS-7864
> URL: https://issues.apache.org/jira/browse/HDFS-7864
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: GAO Rui
>
> We need to update the safemode calculation for striped blocks. Specifically, 
> each striped block now consists of multiple data/parity blocks stored in 
> corresponding DataNodes. The current code's calculation is thus inconsistent: 
> each striped block is only counted as 1 expected block, while each of its 
> member blocks may increase the number of received blocks by 1.
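A toy illustration of the inconsistency, assuming an RS(6,3) schema purely for the arithmetic:

{noformat}
public class StripedSafemodeDemo {
  public static void main(String[] args) {
    int dataBlocks = 6, parityBlocks = 3;  // RS(6,3), for illustration
    int stripedGroups = 10;

    // The accounting described above: each group counted once on the
    // "expected" side, but once per member block on the "received" side.
    int expected = stripedGroups;
    int received = stripedGroups * (dataBlocks + parityBlocks);

    // received/expected = 9.0 here, so the safe-block ratio is meaningless
    // until both sides count groups (or members) consistently.
    System.out.println("expected=" + expected + ", received=" + received);
  }
}
{noformat}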



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7439) Add BlockOpResponseProto's message to DFSClient's exception message

2015-03-01 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-7439:
---
Status: Patch Available  (was: Open)

> Add BlockOpResponseProto's message to DFSClient's exception message
> ---
>
> Key: HDFS-7439
> URL: https://issues.apache.org/jira/browse/HDFS-7439
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HDFS-7439.1.patch, HDFS-7439.2.patch, HDFS-7439.3.patch
>
>
> When (BlockOpResponseProto#getStatus() != SUCCESS), it helps with debugging 
> if DFSClient can add BlockOpResponseProto's message to the exception message 
> applications will get. For example, instead of
> {noformat}
> throw new IOException("Got error for OP_READ_BLOCK, self="
> + peer.getLocalAddressString() + ", remote="
> + peer.getRemoteAddressString() + ", for file " + file
> + ", for pool " + block.getBlockPoolId() + " block " 
> + block.getBlockId() + "_" + block.getGenerationStamp());
> {noformat}
> It could be,
> {noformat}
> throw new IOException("Got error for OP_READ_BLOCK, self="
> + peer.getLocalAddressString() + ", remote="
> + peer.getRemoteAddressString() + ", for file " + file
> + ", for pool " + block.getBlockPoolId() + " block " 
> + block.getBlockId() + "_" + block.getGenerationStamp()
> + ", status message " + status.getMessage());
> {noformat}
> We might want to check out all the references to BlockOpResponseProto in 
> DFSClient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7439) Add BlockOpResponseProto's message to DFSClient's exception message

2015-03-01 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-7439:
---
Status: Open  (was: Patch Available)

> Add BlockOpResponseProto's message to DFSClient's exception message
> ---
>
> Key: HDFS-7439
> URL: https://issues.apache.org/jira/browse/HDFS-7439
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HDFS-7439.1.patch, HDFS-7439.2.patch, HDFS-7439.3.patch
>
>
> When (BlockOpResponseProto#getStatus() != SUCCESS), it helps with debugging 
> if DFSClient can add BlockOpResponseProto's message to the exception message 
> applications will get. For example, instead of
> {noformat}
> throw new IOException("Got error for OP_READ_BLOCK, self="
> + peer.getLocalAddressString() + ", remote="
> + peer.getRemoteAddressString() + ", for file " + file
> + ", for pool " + block.getBlockPoolId() + " block " 
> + block.getBlockId() + "_" + block.getGenerationStamp());
> {noformat}
> It could be,
> {noformat}
> throw new IOException("Got error for OP_READ_BLOCK, self="
> + peer.getLocalAddressString() + ", remote="
> + peer.getRemoteAddressString() + ", for file " + file
> + ", for pool " + block.getBlockPoolId() + " block " 
> + block.getBlockId() + "_" + block.getGenerationStamp()
> + ", status message " + status.getMessage());
> {noformat}
> We might want to check out all the references to BlockOpResponseProto in 
> DFSClient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7820) Client Write fails after rolling upgrade rollback with " already exist in finalized state"

2015-03-01 Thread J.Andreina (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342696#comment-14342696
 ] 

J.Andreina commented on HDFS-7820:
--

Hi Arpit Agarwal, thanks for looking at this issue. 

bq.One thing I did not understand - the finalized block does not belong to any 
file after rollback. Hence it should never be added to the BlockInfo list and 
should be marked for deletion on the DN immediately.

The block would be marked for deletion only on the second block report (which 
would take 6 hrs, the default value of dfs.blockreport.intervalMsec). So 
within this time after rollback, any client write operation will fail since a 
block with the same id already exists at the DN.
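For reference, a minimal sketch of reading the interval in question (key name as above; 21600000 ms, i.e. 6 hours, is the hdfs-default.xml default):

{noformat}
import org.apache.hadoop.conf.Configuration;

public class BlockReportIntervalDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Until the next full block report fires, the DN keeps the stale
    // finalized replica described above.
    long intervalMs = conf.getLong("dfs.blockreport.intervalMsec", 21600000L);
    System.out.println("block report interval ms = " + intervalMs);
  }
}
{noformat}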

To avoid a duplicate block id being assigned after rollback, I gave an 
initial patch assuming there could be 10 million blocks written in the worst 
case after upgrade and before rollback, and hence incremented the block id by 
10 million after rollback.

Please correct me if I'm wrong.

> Client Write fails after rolling upgrade rollback with " already 
> exist in finalized state"
> 
>
> Key: HDFS-7820
> URL: https://issues.apache.org/jira/browse/HDFS-7820
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: J.Andreina
>Assignee: J.Andreina
> Attachments: HDFS-7820.1.patch
>
>
> Steps to Reproduce:
> ===
> Step 1:  Prepare rolling upgrade using "hdfs dfsadmin -rollingUpgrade prepare"
> Step 2:  Shutdown SNN and NN
> Step 3:  Start NN with the "hdfs namenode -rollingUpgrade started" option.
> Step 4:  Executed "hdfs dfsadmin -shutdownDatanode  
> upgrade" and restarted Datanode
> Step 5:  Write 3 files to hdfs ( block id assigned are : blk_1073741831_1007, 
> blk_1073741832_1008,blk_1073741833_1009 )
> Step 6:  Shutdown both NN and DN
> Step 7:  Start NNs with the "hdfs namenode -rollingUpgrade rollback" option.
>  Start DNs with the "-rollback" option.
> Step 8:  Write 2 files to hdfs.
> Issue:
> ===
> Client write failed with below exception
> {noformat}
> 2015-02-23 16:00:12,896 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Receiving BP-1837556285-XXX-1423130389269:blk_1073741832_1008 src: 
> /XXX:48545 dest: /XXX:50010
> 2015-02-23 16:00:12,897 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> opWriteBlock BP-1837556285-XXX-1423130389269:blk_1073741832_1008 
> received exception 
> org.apache.hadoop.hdfs.server.datanode.ReplicaAlreadyExistsException: Block 
> BP-1837556285-XXX-1423130389269:blk_1073741832_1008 already exists in 
> state FINALIZED and thus cannot be created.
> {noformat}
> Observations:
> =
> 1. At the Namenode side, block invalidation has been sent for only 2 blocks.
> {noformat}
> 15/02/23 14:59:56 INFO BlockStateChange: BLOCK* InvalidateBlocks: add 
> blk_1073741833_1009 to XXX:50010
> 15/02/23 14:59:56 INFO BlockStateChange: BLOCK* InvalidateBlocks: add 
> blk_1073741831_1007 to XXX:50010
> {noformat}
> 2. fsck report does not show information on blk_1073741832_1008
> {noformat}
> FSCK started by Rex (auth:SIMPLE) from /XXX for path / at Mon Feb 23 
> 16:17:57 CST 2015
> /File1:  Under replicated 
> BP-1837556285-XXX-1423130389269:blk_1073741825_1001. Target Replicas 
> is 3 but found 1 replica(s).
> /File11:  Under replicated 
> BP-1837556285-XXX-1423130389269:blk_1073741827_1003. Target Replicas 
> is 3 but found 1 replica(s).
> /File2:  Under replicated 
> BP-1837556285-XXX-1423130389269:blk_1073741826_1002. Target Replicas 
> is 3 but found 1 replica(s).
> /AfterRollback_2:  Under replicated 
> BP-1837556285-XXX-1423130389269:blk_1073741831_1007. Target Replicas 
> is 3 but found 1 replica(s).
> /Test1:  Under replicated 
> BP-1837556285-XXX-1423130389269:blk_1073741828_1004. Target Replicas 
> is 3 but found 1 replica(s).
> Status: HEALTHY
>  Total size:31620 B
>  Total dirs:7
>  Total files:   6
>  Total symlinks:0
>  Total blocks (validated):  5 (avg. block size 6324 B)
>  Minimally replicated blocks:   5 (100.0 %)
>  Over-replicated blocks:0 (0.0 %)
>  Under-replicated blocks:   5 (100.0 %)
>  Mis-replicated blocks: 0 (0.0 %)
>  Default replication factor:3
>  Average block replication: 1.0
>  Corrupt blocks:0
>  Missing replicas:  10 (66.64 %)
>  Number of data-nodes:  1
>  Number of racks:   1
> FSCK ended at Mon Feb 23 16:17:57 CST 2015 in 3 milliseconds
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7439) Add BlockOpResponseProto's message to DFSClient's exception message

2015-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342678#comment-14342678
 ] 

Hadoop QA commented on HDFS-7439:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12701749/HDFS-7439.3.patch
  against trunk revision e9ac88a.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.balancer.TestBalancer

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9690//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9690//console

This message is automatically generated.

> Add BlockOpResponseProto's message to DFSClient's exception message
> ---
>
> Key: HDFS-7439
> URL: https://issues.apache.org/jira/browse/HDFS-7439
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HDFS-7439.1.patch, HDFS-7439.2.patch, HDFS-7439.3.patch
>
>
> When (BlockOpResponseProto#getStatus() != SUCCESS), it helps with debugging 
> if DFSClient can add BlockOpResponseProto's message to the exception message 
> applications will get. For example, instead of
> {noformat}
> throw new IOException("Got error for OP_READ_BLOCK, self="
> + peer.getLocalAddressString() + ", remote="
> + peer.getRemoteAddressString() + ", for file " + file
> + ", for pool " + block.getBlockPoolId() + " block " 
> + block.getBlockId() + "_" + block.getGenerationStamp());
> {noformat}
> It could be,
> {noformat}
> throw new IOException("Got error for OP_READ_BLOCK, self="
> + peer.getLocalAddressString() + ", remote="
> + peer.getRemoteAddressString() + ", for file " + file
> + ", for pool " + block.getBlockPoolId() + " block " 
> + block.getBlockId() + "_" + block.getGenerationStamp()
> + ", status message " + status.getMessage());
> {noformat}
> We might want to check out all the references to BlockOpResponseProto in 
> DFSClient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7864) Erasure Coding: Update safemode calculation for striped blocks

2015-03-01 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-7864:
---

 Summary: Erasure Coding: Update safemode calculation for striped 
blocks
 Key: HDFS-7864
 URL: https://issues.apache.org/jira/browse/HDFS-7864
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Jing Zhao


We need to update the safemode calculation for striped blocks. Specifically, 
each striped block now consists of multiple data/parity blocks stored in 
corresponding DataNodes. The current code's calculation is thus inconsistent: 
each striped block is only counted as 1 expected block, while each of its 
member blocks may increase the number of received blocks by 1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7863) Missing description of parameter fsd in javadoc

2015-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342671#comment-14342671
 ] 

Hadoop QA commented on HDFS-7863:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12701745/HDFS-7863.patch
  against trunk revision e9ac88a.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9689//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9689//console

This message is automatically generated.

> Missing description of parameter fsd in javadoc 
> 
>
> Key: HDFS-7863
> URL: https://issues.apache.org/jira/browse/HDFS-7863
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Yongjun Zhang
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Attachments: HDFS-7863.patch
>
>
> HDFS-7573 refactored the delete() code. A new parameter {{FSDirectory fsd}} 
> was added to the resulting methods, but the javadoc was not updated.
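A hedged sketch of the missing tag (the signature is illustrative; only the {{@param fsd}} line is the point):

{noformat}
  /**
   * Delete the target file/directory and collect the blocks under it.
   *
   * @param fsd the FSDirectory instance to operate on   <-- the missing tag
   * @param src path of the target being deleted
   */
{noformat}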



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6200) Create a separate jar for hdfs-client

2015-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342659#comment-14342659
 ] 

Hadoop QA commented on HDFS-6200:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12651333/HDFS-6200.007.patch
  against trunk revision e9ac88a.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9692//console

This message is automatically generated.

> Create a separate jar for hdfs-client
> -
>
> Key: HDFS-6200
> URL: https://issues.apache.org/jira/browse/HDFS-6200
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-6200.000.patch, HDFS-6200.001.patch, 
> HDFS-6200.002.patch, HDFS-6200.003.patch, HDFS-6200.004.patch, 
> HDFS-6200.005.patch, HDFS-6200.006.patch, HDFS-6200.007.patch
>
>
> Currently the hadoop-hdfs jar contain both the hdfs server and the hdfs 
> client. As discussed in the hdfs-dev mailing list 
> (http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201404.mbox/browser),
>  downstream projects are forced to bring in additional dependency in order to 
> access hdfs. The additional dependency sometimes can be difficult to manage 
> for projects like Apache Falcon and Apache Oozie.
> This jira proposes to create a new project, hadoop-hdfs-client, which 
> contains the client side of the hdfs code. Downstream projects can use this 
> jar instead of the hadoop-hdfs to avoid unnecessary dependency.
> Note that it does not break the compatibility of downstream projects. This is 
> because old downstream projects implicitly depend on hadoop-hdfs-client 
> through the hadoop-hdfs jar.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-5420) Not need to launch Secondary namenode for NN HA mode?

2015-03-01 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-5420.

Resolution: Duplicate

 I'm going to merge this into HADOOP-11590.  Closing as a dupe.  

> Not need to launch Secondary namenode for NN HA mode?
> -
>
> Key: HDFS-5420
> URL: https://issues.apache.org/jira/browse/HDFS-5420
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Raymond Liu
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HDFS-5420-00.patch
>
>
> For Hadoop 2, when deploying with NN HA, the wiki says that it is an error to 
> start a secondary namenode, yet sbin/start-dfs.sh still launches a 
> secondary namenode even when nothing related to the secondary namenode is 
> configured. Should this be fixed, or do people just not use this script to 
> start HA HDFS?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7759) Provide existence-of-a-second-file implementation for pinning blocks on Datanode

2015-03-01 Thread zhaoyunjiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342638#comment-14342638
 ] 

zhaoyunjiong commented on HDFS-7759:


The test failures are not related.

> Provide existence-of-a-second-file implementation for pinning blocks on 
> Datanode
> 
>
> Key: HDFS-7759
> URL: https://issues.apache.org/jira/browse/HDFS-7759
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: zhaoyunjiong
>Assignee: zhaoyunjiong
> Attachments: HDFS-7759.patch
>
>
> Provide an existence-of-a-second-file implementation for pinning blocks on 
> the Datanode, and let the admin choose the mechanism (sticky bit or 
> existence-of-a-second-file) for pinning blocks on a favored Datanode.
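A hedged sketch of the existence-of-a-second-file idea (the ".pin" sibling-file convention is an assumption for illustration, not the patch's actual naming):

{noformat}
import java.io.File;
import java.io.IOException;

public class PinFileDemo {
  // Hypothetical convention: a sibling ".pin" file marks a replica as
  // pinned, as an alternative to setting the sticky bit on the block file.
  static File pinFileFor(File blockFile) {
    return new File(blockFile.getParentFile(), blockFile.getName() + ".pin");
  }

  static boolean isPinned(File blockFile) {
    return pinFileFor(blockFile).exists();
  }

  static void pin(File blockFile) throws IOException {
    // createNewFile() returns false if the marker already exists, i.e.
    // the replica is already pinned; both outcomes leave it pinned.
    pinFileFor(blockFile).createNewFile();
  }
}
{noformat}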



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7855) Separate class Packet from DFSOutputStream

2015-03-01 Thread Li Bo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Bo updated HDFS-7855:

Status: Patch Available  (was: In Progress)

> Separate class Packet from DFSOutputStream
> --
>
> Key: HDFS-7855
> URL: https://issues.apache.org/jira/browse/HDFS-7855
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-7855-001.patch, HDFS-7855-002.patch, 
> HDFS-7855-003.patch
>
>
> Class Packet is an inner class in DFSOutputStream and also used by 
> DataStreamer. This sub task separates Packet out of DFSOutputStream to aid 
> the separation in HDFS-7854.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7855) Separate class Packet from DFSOutputStream

2015-03-01 Thread Li Bo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342626#comment-14342626
 ] 

Li Bo commented on HDFS-7855:
-

Hi, Kai
Patch-003 renames class Packet to DFSPacket and removes the inner class Packet 
from DFSOutputStream.

> Separate class Packet from DFSOutputStream
> --
>
> Key: HDFS-7855
> URL: https://issues.apache.org/jira/browse/HDFS-7855
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-7855-001.patch, HDFS-7855-002.patch, 
> HDFS-7855-003.patch
>
>
> Class Packet is an inner class in DFSOutputStream and also used by 
> DataStreamer. This sub task separates Packet out of DFSOutputStream to aid 
> the separation in HDFS-7854.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7855) Separate class Packet from DFSOutputStream

2015-03-01 Thread Li Bo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Bo updated HDFS-7855:

Attachment: HDFS-7855-003.patch

> Separate class Packet from DFSOutputStream
> --
>
> Key: HDFS-7855
> URL: https://issues.apache.org/jira/browse/HDFS-7855
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-7855-001.patch, HDFS-7855-002.patch, 
> HDFS-7855-003.patch
>
>
> Class Packet is an inner class in DFSOutputStream and also used by 
> DataStreamer. This sub task separates Packet out of DFSOutputStream to aid 
> the separation in HDFS-7854.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7855) Separate class Packet from DFSOutputStream

2015-03-01 Thread Li Bo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Bo updated HDFS-7855:

Status: In Progress  (was: Patch Available)

> Separate class Packet from DFSOutputStream
> --
>
> Key: HDFS-7855
> URL: https://issues.apache.org/jira/browse/HDFS-7855
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-7855-001.patch, HDFS-7855-002.patch, 
> HDFS-7855-003.patch
>
>
> Class Packet is an inner class in DFSOutputStream and also used by 
> DataStreamer. This sub task separates Packet out of DFSOutputStream to aid 
> the separation in HDFS-7854.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-7671) hdfs user guide should point to the common rack awareness doc

2015-03-01 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki reassigned HDFS-7671:


Assignee: Kai Sasaki

> hdfs user guide should point to the common rack awareness doc
> -
>
> Key: HDFS-7671
> URL: https://issues.apache.org/jira/browse/HDFS-7671
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Allen Wittenauer
>Assignee: Kai Sasaki
>
> HDFS user guide has a section on rack awareness that should really just be a 
> pointer to the common doc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7439) Add BlockOpResponseProto's message to DFSClient's exception message

2015-03-01 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342563#comment-14342563
 ] 

Takanobu Asanuma commented on HDFS-7439:


Thank you very much for the review. I corrected the patch and resubmitted.

> Add BlockOpResponseProto's message to DFSClient's exception message
> ---
>
> Key: HDFS-7439
> URL: https://issues.apache.org/jira/browse/HDFS-7439
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HDFS-7439.1.patch, HDFS-7439.2.patch, HDFS-7439.3.patch
>
>
> When (BlockOpResponseProto#getStatus() != SUCCESS), it helps with debugging 
> if DFSClient can add BlockOpResponseProto's message to the exception message 
> applications will get. For example, instead of
> {noformat}
> throw new IOException("Got error for OP_READ_BLOCK, self="
> + peer.getLocalAddressString() + ", remote="
> + peer.getRemoteAddressString() + ", for file " + file
> + ", for pool " + block.getBlockPoolId() + " block " 
> + block.getBlockId() + "_" + block.getGenerationStamp());
> {noformat}
> It could be,
> {noformat}
> throw new IOException("Got error for OP_READ_BLOCK, self="
> + peer.getLocalAddressString() + ", remote="
> + peer.getRemoteAddressString() + ", for file " + file
> + ", for pool " + block.getBlockPoolId() + " block " 
> + block.getBlockId() + "_" + block.getGenerationStamp()
> + ", status message " + status.getMessage());
> {noformat}
> We might want to check out all the references to BlockOpResponseProto in 
> DFSClient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7439) Add BlockOpResponseProto's message to DFSClient's exception message

2015-03-01 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-7439:
---
Attachment: HDFS-7439.3.patch

> Add BlockOpResponseProto's message to DFSClient's exception message
> ---
>
> Key: HDFS-7439
> URL: https://issues.apache.org/jira/browse/HDFS-7439
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HDFS-7439.1.patch, HDFS-7439.2.patch, HDFS-7439.3.patch
>
>
> When (BlockOpResponseProto#getStatus() != SUCCESS), it helps with debugging 
> if DFSClient can add BlockOpResponseProto's message to the exception message 
> applications will get. For example, instead of
> {noformat}
> throw new IOException("Got error for OP_READ_BLOCK, self="
> + peer.getLocalAddressString() + ", remote="
> + peer.getRemoteAddressString() + ", for file " + file
> + ", for pool " + block.getBlockPoolId() + " block " 
> + block.getBlockId() + "_" + block.getGenerationStamp());
> {noformat}
> It could be,
> {noformat}
> throw new IOException("Got error for OP_READ_BLOCK, self="
> + peer.getLocalAddressString() + ", remote="
> + peer.getRemoteAddressString() + ", for file " + file
> + ", for pool " + block.getBlockPoolId() + " block " 
> + block.getBlockId() + "_" + block.getGenerationStamp()
> + ", status message " + status.getMessage());
> {noformat}
> We might want to check out all the references to BlockOpResponseProto in 
> DFSClient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7863) Missing description of parameter fsd in javadoc

2015-03-01 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-7863:
---
Status: Patch Available  (was: Open)

> Missing description of parameter fsd in javadoc 
> 
>
> Key: HDFS-7863
> URL: https://issues.apache.org/jira/browse/HDFS-7863
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Yongjun Zhang
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Attachments: HDFS-7863.patch
>
>
> HDFS-7573 refactored the delete() code. A new parameter {{FSDirectory fsd}} 
> was added to the resulting methods, but the javadoc was not updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7826) Erasure Coding: Update INodeFile quota computation for striped blocks

2015-03-01 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342554#comment-14342554
 ] 

Kai Sasaki commented on HDFS-7826:
--

[~jingzhao] I have a question about computing quota usage on INodeFile. The 
methods used in INodeFile seem to have already been updated to compute for 
striped blocks. I think the namespace quota is the same as the contiguous 
case, and the storage space quota can be calculated by 
`storagespaceConsumedNoReplication` due to [this 
commit](https://github.com/apache/hadoop/commit/edb29268884642dfeed9315f4dd8c4bb2979e414#diff-14f4a294c57a9aa32512d13ec2f120d3L420).
Is this assumption correct? I would be glad to hear your advice. Thank you.
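A toy version of the storage-space arithmetic in question, assuming an RS(6,3) schema purely for illustration:

{noformat}
public class StripedQuotaDemo {
  public static void main(String[] args) {
    int dataBlocks = 6, parityBlocks = 3;      // RS(6,3), for illustration
    long fileBytes = 6L * 128 * 1024 * 1024;   // six full 128 MB cells

    // A contiguous file with replication 3 consumes bytes * 3.
    long contiguous = fileBytes * 3;

    // A striped file consumes the data bytes plus proportional parity,
    // roughly bytes * (data + parity) / data for full stripes.
    long striped = fileBytes * (dataBlocks + parityBlocks) / dataBlocks;

    System.out.println("contiguous=" + contiguous + ", striped=" + striped);
  }
}
{noformat}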

> Erasure Coding: Update INodeFile quota computation for striped blocks
> -
>
> Key: HDFS-7826
> URL: https://issues.apache.org/jira/browse/HDFS-7826
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: Kai Sasaki
>
> Currently INodeFile's quota computation only considers contiguous blocks 
> (i.e., {{INodeFile#blocks}}). We need to update it to support striped blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7863) Missing description of parameter fsd in javadoc

2015-03-01 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-7863:
---
Attachment: HDFS-7863.patch

> Missing description of parameter fsd in javadoc 
> 
>
> Key: HDFS-7863
> URL: https://issues.apache.org/jira/browse/HDFS-7863
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Yongjun Zhang
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Attachments: HDFS-7863.patch
>
>
> HDFS-7573 refactored the delete() code. A new parameter {{FSDirectory fsd}} 
> was added to the resulting methods, but the javadoc was not updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7312) Update DistCp v1 to optionally not use tmp location (branch-1 only)

2015-03-01 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342537#comment-14342537
 ] 

Allen Wittenauer commented on HDFS-7312:


Then, honestly, this should just be closed as won't fix, given how long it's 
been since a v1 release.

> Update DistCp v1 to optionally not use tmp location (branch-1 only)
> ---
>
> Key: HDFS-7312
> URL: https://issues.apache.org/jira/browse/HDFS-7312
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 1.2.1
>Reporter: Joseph Prosser
>Assignee: Joseph Prosser
>Priority: Minor
> Attachments: HDFS-7312.001.patch, HDFS-7312.002.patch, 
> HDFS-7312.003.patch, HDFS-7312.004.patch, HDFS-7312.005.patch, 
> HDFS-7312.006.patch, HDFS-7312.007.patch, HDFS-7312.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> DistCp v1 currently copies files to a tmp location and then renames that to 
> the specified destination.  This can cause performance issues on filesystems 
> such as S3.  A -skiptmp flag will be added to bypass this step and copy 
> directly to the destination.  This feature mirrors a similar one added to 
> HBase ExportSnapshot 
> [HBASE-9|https://issues.apache.org/jira/browse/HBASE-9]
> NOTE: This is a branch-1 change only.
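A hedged sketch of the two copy strategies with plain java.nio (paths and helper names are illustrative, not DistCp's actual code):

{noformat}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class SkipTmpDemo {
  // Default behavior: copy to a tmp name, then rename into place. The
  // rename is cheap on HDFS but is itself a full copy on stores like S3.
  static void copyViaTmp(Path src, Path dst) throws IOException {
    Path tmp = dst.resolveSibling("_distcp_tmp_" + dst.getFileName());
    Files.copy(src, tmp, StandardCopyOption.REPLACE_EXISTING);
    Files.move(tmp, dst, StandardCopyOption.REPLACE_EXISTING);
  }

  // With -skiptmp: write the destination directly and skip the rename.
  static void copyDirect(Path src, Path dst) throws IOException {
    Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
  }
}
{noformat}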



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5796) The file system browser in the namenode UI requires SPNEGO.

2015-03-01 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342536#comment-14342536
 ] 

Allen Wittenauer commented on HDFS-5796:


All:

I have set this as a blocker for 2.7.0 on the basis that http-auth-based 
plug-ins (especially those that require additional configuration) don't work. 
This is a pretty big regression from previous versions of Hadoop, made fatal 
now that the old HDFS browse code has been removed.  

If we feel that this particular JIRA has forked from that issue, then let's 
create a new one.

Thanks.

> The file system browser in the namenode UI requires SPNEGO.
> ---
>
> Key: HDFS-5796
> URL: https://issues.apache.org/jira/browse/HDFS-5796
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Kihwal Lee
>Assignee: Arun Suresh
>Priority: Blocker
> Attachments: HDFS-5796.1.patch, HDFS-5796.1.patch, HDFS-5796.2.patch, 
> HDFS-5796.3.patch, HDFS-5796.3.patch
>
>
> After HDFS-5382, the browser makes webhdfs REST calls directly, requiring 
> SPNEGO to work between user's browser and namenode.  This won't work if the 
> cluster's security infrastructure is isolated from the regular network.  
> Moreover, SPNEGO is not supposed to be required for user-facing web pages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-5796) The file system browser in the namenode UI requires SPNEGO.

2015-03-01 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-5796:
---
Target Version/s: 2.7.0  (was: 2.6.0)

> The file system browser in the namenode UI requires SPNEGO.
> ---
>
> Key: HDFS-5796
> URL: https://issues.apache.org/jira/browse/HDFS-5796
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Kihwal Lee
>Assignee: Arun Suresh
>Priority: Blocker
> Attachments: HDFS-5796.1.patch, HDFS-5796.1.patch, HDFS-5796.2.patch, 
> HDFS-5796.3.patch, HDFS-5796.3.patch
>
>
> After HDFS-5382, the browser makes webhdfs REST calls directly, requiring 
> SPNEGO to work between user's browser and namenode.  This won't work if the 
> cluster's security infrastructure is isolated from the regular network.  
> Moreover, SPNEGO is not supposed to be required for user-facing web pages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-5796) The file system browser in the namenode UI requires SPNEGO.

2015-03-01 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-5796:
---
Priority: Blocker  (was: Major)

> The file system browser in the namenode UI requires SPNEGO.
> ---
>
> Key: HDFS-5796
> URL: https://issues.apache.org/jira/browse/HDFS-5796
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Kihwal Lee
>Assignee: Arun Suresh
>Priority: Blocker
> Attachments: HDFS-5796.1.patch, HDFS-5796.1.patch, HDFS-5796.2.patch, 
> HDFS-5796.3.patch, HDFS-5796.3.patch
>
>
> After HDFS-5382, the browser makes webhdfs REST calls directly, requiring 
> SPNEGO to work between user's browser and namenode.  This won't work if the 
> cluster's security infrastructure is isolated from the regular network.  
> Moreover, SPNEGO is not supposed to be required for user-facing web pages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-1783) Ability for HDFS client to write replicas in parallel

2015-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342488#comment-14342488
 ] 

Hadoop QA commented on HDFS-1783:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12531578/HDFS-1783-trunk-v5.patch
  against trunk revision e9ac88a.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9688//console

This message is automatically generated.

> Ability for HDFS client to write replicas in parallel
> -
>
> Key: HDFS-1783
> URL: https://issues.apache.org/jira/browse/HDFS-1783
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: dhruba borthakur
>Assignee: Rosa Ali
> Attachments: HDFS-1783-trunk-v2.patch, HDFS-1783-trunk-v3.patch, 
> HDFS-1783-trunk-v4.patch, HDFS-1783-trunk-v5.patch, HDFS-1783-trunk.patch
>
>
> The current implementation of HDFS pipelines the writes to the three 
> replicas. This introduces some latency for realtime latency sensitive 
> applications. An alternate implementation that allows the client to write all 
> replicas in parallel gives much better response times to these applications. 
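A minimal illustration of the fan-out idea with plain Java threads (not the HDFS pipeline code):

{noformat}
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelReplicaWriteDemo {
  // Pipelined: client -> DN1 -> DN2 -> DN3, so latency accumulates per hop.
  // Parallel: the client writes every replica at once; latency is roughly
  // that of the slowest replica.
  static void writeParallel(byte[] data, List<OutputStream> replicas)
      throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(replicas.size());
    try {
      List<Future<?>> futures = new ArrayList<>();
      for (OutputStream out : replicas) {
        futures.add(pool.submit(() -> { out.write(data); return null; }));
      }
      for (Future<?> f : futures) {
        f.get();  // propagate any replica write failure
      }
    } finally {
      pool.shutdown();
    }
  }

  public static void main(String[] args) throws Exception {
    List<OutputStream> sinks = List.of(new ByteArrayOutputStream(),
        new ByteArrayOutputStream(), new ByteArrayOutputStream());
    writeParallel("hello".getBytes(), sinks);
    System.out.println("wrote to " + sinks.size() + " replicas in parallel");
  }
}
{noformat}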



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-1783) Ability for HDFS client to write replicas in parallel

2015-03-01 Thread Rosa Ali (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rosa Ali reassigned HDFS-1783:
--

Assignee: Rosa Ali

> Ability for HDFS client to write replicas in parallel
> -
>
> Key: HDFS-1783
> URL: https://issues.apache.org/jira/browse/HDFS-1783
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: dhruba borthakur
>Assignee: Rosa Ali
> Attachments: HDFS-1783-trunk-v2.patch, HDFS-1783-trunk-v3.patch, 
> HDFS-1783-trunk-v4.patch, HDFS-1783-trunk-v5.patch, HDFS-1783-trunk.patch
>
>
> The current implementation of HDFS pipelines the writes to the three 
> replicas. This introduces some latency for realtime latency sensitive 
> applications. An alternate implementation that allows the client to write all 
> replicas in parallel gives much better response times to these applications. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7312) Update DistCp v1 to optionally not use tmp location (branch-1 only)

2015-03-01 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-7312:
-
Target Version/s: 1.3.0  (was: 2.5.1)

> Update DistCp v1 to optionally not use tmp location (branch-1 only)
> ---
>
> Key: HDFS-7312
> URL: https://issues.apache.org/jira/browse/HDFS-7312
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 1.2.1
>Reporter: Joseph Prosser
>Assignee: Joseph Prosser
>Priority: Minor
> Attachments: HDFS-7312.001.patch, HDFS-7312.002.patch, 
> HDFS-7312.003.patch, HDFS-7312.004.patch, HDFS-7312.005.patch, 
> HDFS-7312.006.patch, HDFS-7312.007.patch, HDFS-7312.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> DistCp v1 currently copies files to a tmp location and then renames that to 
> the specified destination.  This can cause performance issues on filesystems 
> such as S3.  A -skiptmp flag will be added to bypass this step and copy 
> directly to the destination.  This feature mirrors a similar one added to 
> HBase ExportSnapshot 
> [HBASE-9|https://issues.apache.org/jira/browse/HBASE-9]
> NOTE: This is a branch-1 change only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7312) Update DistCp v1 to optionally not use tmp location (branch-1 only)

2015-03-01 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-7312:
-
Affects Version/s: (was: 2.5.1)
   1.2.1

> Update DistCp v1 to optionally not use tmp location (branch-1 only)
> ---
>
> Key: HDFS-7312
> URL: https://issues.apache.org/jira/browse/HDFS-7312
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 1.2.1
>Reporter: Joseph Prosser
>Assignee: Joseph Prosser
>Priority: Minor
> Attachments: HDFS-7312.001.patch, HDFS-7312.002.patch, 
> HDFS-7312.003.patch, HDFS-7312.004.patch, HDFS-7312.005.patch, 
> HDFS-7312.006.patch, HDFS-7312.007.patch, HDFS-7312.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> DistCp v1 currently copies files to a tmp location and then renames that to 
> the specified destination.  This can cause performance issues on filesystems 
> such as S3.  A -skiptmp flag will be added to bypass this step and copy 
> directly to the destination.  This feature mirrors a similar one added to 
> HBase ExportSnapshot 
> [HBASE-9|https://issues.apache.org/jira/browse/HBASE-9]
> NOTE: This is a branch-1 change only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5853) Add "hadoop.user.group.metrics.percentiles.intervals" to hdfs-default.xml

2015-03-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342284#comment-14342284
 ] 

Hudson commented on HDFS-5853:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2069 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2069/])
HDFS-5853. Add "hadoop.user.group.metrics.percentiles.intervals" to 
hdfs-default.xml (aajisaka) (aajisaka: rev 
aa55fd3096442f186aebc5a767d7e271b7224b51)
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Add "hadoop.user.group.metrics.percentiles.intervals" to hdfs-default.xml
> -
>
> Key: HDFS-5853
> URL: https://issues.apache.org/jira/browse/HDFS-5853
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, namenode
>Affects Versions: 2.3.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HDFS-5853.patch
>
>
> "hadoop.user.group.metrics.percentiles.intervals" was added in HDFS-5220, but 
> the parameter is not written in hdfs-default.xml.
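
For reference, consumers of such an intervals property typically read it as a comma-separated list of window lengths in seconds. A minimal sketch, assuming the standard Configuration.getInts accessor (the real consumer lives in the metrics setup):

{code:java}
import org.apache.hadoop.conf.Configuration;

/** Sketch: how a percentile-intervals property is typically consumed. */
public class IntervalsSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // e.g. set to "60,300,3600" for 1-minute, 5-minute and 1-hour windows
    int[] intervals =
        conf.getInts("hadoop.user.group.metrics.percentiles.intervals");
    for (int seconds : intervals) {
      System.out.println("percentile window: " + seconds + "s");
    }
  }
}
{code}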



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4681) TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails using IBM java

2015-03-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342288#comment-14342288
 ] 

Hudson commented on HDFS-4681:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2069 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2069/])
HDFS-4681. TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks 
fails using IBM java (Ayappan via aw) (aw: rev 
dbc9b6433e9276057181cf4927cedf321acd354e)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlocksWithNotEnoughRacks.java


> TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails 
> using IBM java
> -
>
> Key: HDFS-4681
> URL: https://issues.apache.org/jira/browse/HDFS-4681
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.5.2
> Environment: PowerPC Big Endian architecture
>Reporter: Tian Hong Wang
>Assignee: Ayappan
> Fix For: 3.0.0
>
> Attachments: HDFS-4681-v1.patch, HDFS-4681-v2.patch, HDFS-4681.patch
>
>
> TestBlocksWithNotEnoughRacks unit test fails with the following error message:
> 
> testCorruptBlockRereplicatedAcrossRacks(org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks)
>   Time elapsed: 8997 sec  <<< FAILURE!
> org.junit.ComparisonFailure: Corrupt replica 
> expected:<...��^EI�u�[�{���[$�\hF�[�R{O�L^S��g�#�O��׼��Wv��6u4Hd)FaŔ��^W�0��H/�^ZU^@�6�<02>���":)$�{|�^@�-���|GvW��7g
>  �/M��[U!eF�>^N^?�4pR�d��|��Ŵ7j^O^Sh�^@�nu�(�^C^Y�;I�Q�K^O"c���   
> oKtE�*�^\3u��]Ē:mŭ^^y�^H��_^T�^ZS4�7�C�^G�_���\|^W�vo���zgU�lmJ)_vq~�+^Mo^G^O�W}�.�4
> ��6b�S�&G�^?��m4FW#^@
> D5��}�^Z�^]���mfR^G#T-�N��̋�p���`�~��`�^F;�^C]> but 
> was:<...��^EI�u�[�{���[$�\hF�[R{O�L^S��g�#�O��׼��Wv��6u4Hd)FaŔ��^W�0��H/�^ZU^@�6�<02>�":)$�{|�^@�-���|GvW��7g
>  �/M�[U!eF�>^N^?�4pR�d��|��Ŵ7j^O^Sh�^@�nu�(�^C^Y�;I�Q�K^O"c���  
> oKtE�*�^\3u��]Ē:mŭ^^y���^H��_^T�^ZS���4�7�C�^G�_���\|^W�vo���zgU�lmJ)_vq~�+^Mo^G^O�W}�.�4
>��6b�S�&G�^?��m4FW#^@
> D5��}�^Z�^]���mfR^G#T-�N�̋�p���`�~��`�^F;�]>
> at org.junit.Assert.assertEquals(Assert.java:123)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks.testCorruptBlockRereplicatedAcrossRacks(TestBlocksWithNotEnoughRacks.java:229)
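
One plausible reading of the failure, an assumption on my part: round-tripping raw replica bytes through a platform-default-charset String is lossy on some JVMs, so the test's corruption and comparison should stay at the byte level. A minimal sketch of charset-independent corruption:

{code:java}
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

/** Sketch: corrupt a replica file at the byte level, never via Strings,
 *  so the result is independent of the JVM's default charset. */
class ByteLevelCorruption {
  static void flipFirstByte(File replica) throws IOException {
    RandomAccessFile raf = new RandomAccessFile(replica, "rw");
    try {
      int b = raf.read();      // read a raw byte, no charset decoding
      raf.seek(0);
      raf.write(b ^ 0xff);     // deterministic, reversible corruption
    } finally {
      raf.close();
    }
  }
}
{code}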



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7312) Update DistCp v1 to optionally not use tmp location (branch-1 only)

2015-03-01 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-7312:

Release Note: added -skiptmp to DistCp v1 (branch-1 only)  (was: added 
-skiptmp to DistCp v1)
 Summary: Update DistCp v1 to optionally not use tmp location (branch-1 
only)  (was: Update DistCp v1 to optionally not use tmp location)

> Update DistCp v1 to optionally not use tmp location (branch-1 only)
> ---
>
> Key: HDFS-7312
> URL: https://issues.apache.org/jira/browse/HDFS-7312
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 2.5.1
>Reporter: Joseph Prosser
>Assignee: Joseph Prosser
>Priority: Minor
> Attachments: HDFS-7312.001.patch, HDFS-7312.002.patch, 
> HDFS-7312.003.patch, HDFS-7312.004.patch, HDFS-7312.005.patch, 
> HDFS-7312.006.patch, HDFS-7312.007.patch, HDFS-7312.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> DistCp v1 currently copies files to a tmp location and then renames that to 
> the specified destination.  This can cause performance issues on filesystems 
> such as S3.  A -skiptmp flag will be added to bypass this step and copy 
> directly to the destination.  This feature mirrors a similar one added to 
> HBase ExportSnapshot 
> [HBASE-9|https://issues.apache.org/jira/browse/HBASE-9]
> NOTE: This is a branch-1 change only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7312) Update DistCp v1 to optionally not use tmp location

2015-03-01 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-7312:

Status: Patch Available  (was: Open)

Thanks Allen, this is a branch-1-only change, so the patch is not applicable to 
trunk. I changed the description section to make a note of it. Thanks,




> Update DistCp v1 to optionally not use tmp location
> ---
>
> Key: HDFS-7312
> URL: https://issues.apache.org/jira/browse/HDFS-7312
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 2.5.1
>Reporter: Joseph Prosser
>Assignee: Joseph Prosser
>Priority: Minor
> Attachments: HDFS-7312.001.patch, HDFS-7312.002.patch, 
> HDFS-7312.003.patch, HDFS-7312.004.patch, HDFS-7312.005.patch, 
> HDFS-7312.006.patch, HDFS-7312.007.patch, HDFS-7312.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> DistCp v1 currently copies files to a tmp location and then renames that to 
> the specified destination.  This can cause performance issues on filesystems 
> such as S3.  A -skiptmp flag will be added to bypass this step and copy 
> directly to the destination.  This feature mirrors a similar one added to 
> HBase ExportSnapshot 
> [HBASE-9|https://issues.apache.org/jira/browse/HBASE-9]
> NOTE: This is a branch-1 change only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4681) TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails using IBM java

2015-03-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342275#comment-14342275
 ] 

Hudson commented on HDFS-4681:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #119 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/119/])
HDFS-4681. TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks 
fails using IBM java (Ayappan via aw) (aw: rev 
dbc9b6433e9276057181cf4927cedf321acd354e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlocksWithNotEnoughRacks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails 
> using IBM java
> -
>
> Key: HDFS-4681
> URL: https://issues.apache.org/jira/browse/HDFS-4681
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.5.2
> Environment: PowerPC Big Endian architecture
>Reporter: Tian Hong Wang
>Assignee: Ayappan
> Fix For: 3.0.0
>
> Attachments: HDFS-4681-v1.patch, HDFS-4681-v2.patch, HDFS-4681.patch
>
>
> TestBlocksWithNotEnoughRacks unit test fails with the following error message:
> 
> testCorruptBlockRereplicatedAcrossRacks(org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks)
>   Time elapsed: 8997 sec  <<< FAILURE!
> org.junit.ComparisonFailure: Corrupt replica 
> expected:<...��^EI�u�[�{���[$�\hF�[�R{O�L^S��g�#�O��׼��Wv��6u4Hd)FaŔ��^W�0��H/�^ZU^@�6�<02>���":)$�{|�^@�-���|GvW��7g
>  �/M��[U!eF�>^N^?�4pR�d��|��Ŵ7j^O^Sh�^@�nu�(�^C^Y�;I�Q�K^O"c���   
> oKtE�*�^\3u��]Ē:mŭ^^y�^H��_^T�^ZS4�7�C�^G�_���\|^W�vo���zgU�lmJ)_vq~�+^Mo^G^O�W}�.�4
> ��6b�S�&G�^?��m4FW#^@
> D5��}�^Z�^]���mfR^G#T-�N��̋�p���`�~��`�^F;�^C]> but 
> was:<...��^EI�u�[�{���[$�\hF�[R{O�L^S��g�#�O��׼��Wv��6u4Hd)FaŔ��^W�0��H/�^ZU^@�6�<02>�":)$�{|�^@�-���|GvW��7g
>  �/M�[U!eF�>^N^?�4pR�d��|��Ŵ7j^O^Sh�^@�nu�(�^C^Y�;I�Q�K^O"c���  
> oKtE�*�^\3u��]Ē:mŭ^^y���^H��_^T�^ZS���4�7�C�^G�_���\|^W�vo���zgU�lmJ)_vq~�+^Mo^G^O�W}�.�4
>��6b�S�&G�^?��m4FW#^@
> D5��}�^Z�^]���mfR^G#T-�N�̋�p���`�~��`�^F;�]>
> at org.junit.Assert.assertEquals(Assert.java:123)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks.testCorruptBlockRereplicatedAcrossRacks(TestBlocksWithNotEnoughRacks.java:229)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5853) Add "hadoop.user.group.metrics.percentiles.intervals" to hdfs-default.xml

2015-03-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342271#comment-14342271
 ] 

Hudson commented on HDFS-5853:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #119 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/119/])
HDFS-5853. Add "hadoop.user.group.metrics.percentiles.intervals" to 
hdfs-default.xml (aajisaka) (aajisaka: rev 
aa55fd3096442f186aebc5a767d7e271b7224b51)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


> Add "hadoop.user.group.metrics.percentiles.intervals" to hdfs-default.xml
> -
>
> Key: HDFS-5853
> URL: https://issues.apache.org/jira/browse/HDFS-5853
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, namenode
>Affects Versions: 2.3.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HDFS-5853.patch
>
>
> "hadoop.user.group.metrics.percentiles.intervals" was added in HDFS-5220, but 
> the parameter is not written in hdfs-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7312) Update DistCp v1 to optionally not use tmp location

2015-03-01 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-7312:

Description: 
DistCp v1 currently copies files to a tmp location and then renames that to the 
specified destination.  This can cause performance issues on filesystems such 
as S3.  A -skiptmp flag will be added to bypass this step and copy directly to 
the destination.  This feature mirrors a similar one added to HBase 
ExportSnapshot [HBASE-9|https://issues.apache.org/jira/browse/HBASE-9]

NOTE: This is a branch-1 change only.

  was:DistCp v1 currently copies files to a tmp location and then renames that 
to the specified destination.  This can cause performance issues on filesystems 
such as S3.  A -skiptmp flag will be added to bypass this step and copy 
directly to the destination.  This feature mirrors a similar one added to HBase 
ExportSnapshot [HBASE-9|https://issues.apache.org/jira/browse/HBASE-9]


> Update DistCp v1 to optionally not use tmp location
> ---
>
> Key: HDFS-7312
> URL: https://issues.apache.org/jira/browse/HDFS-7312
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 2.5.1
>Reporter: Joseph Prosser
>Assignee: Joseph Prosser
>Priority: Minor
> Attachments: HDFS-7312.001.patch, HDFS-7312.002.patch, 
> HDFS-7312.003.patch, HDFS-7312.004.patch, HDFS-7312.005.patch, 
> HDFS-7312.006.patch, HDFS-7312.007.patch, HDFS-7312.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> DistCp v1 currently copies files to a tmp location and then renames that to 
> the specified destination.  This can cause performance issues on filesystems 
> such as S3.  A -skiptmp flag will be added to bypass this step and copy 
> directly to the destination.  This feature mirrors a similar one added to 
> HBase ExportSnapshot 
> [HBASE-9|https://issues.apache.org/jira/browse/HBASE-9]
> NOTE: This is a branch-1 change only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5853) Add "hadoop.user.group.metrics.percentiles.intervals" to hdfs-default.xml

2015-03-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342252#comment-14342252
 ] 

Hudson commented on HDFS-5853:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #110 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/110/])
HDFS-5853. Add "hadoop.user.group.metrics.percentiles.intervals" to 
hdfs-default.xml (aajisaka) (aajisaka: rev 
aa55fd3096442f186aebc5a767d7e271b7224b51)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


> Add "hadoop.user.group.metrics.percentiles.intervals" to hdfs-default.xml
> -
>
> Key: HDFS-5853
> URL: https://issues.apache.org/jira/browse/HDFS-5853
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, namenode
>Affects Versions: 2.3.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HDFS-5853.patch
>
>
> "hadoop.user.group.metrics.percentiles.intervals" was added in HDFS-5220, but 
> the parameter is not written in hdfs-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4681) TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails using IBM java

2015-03-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342256#comment-14342256
 ] 

Hudson commented on HDFS-4681:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #110 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/110/])
HDFS-4681. TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks 
fails using IBM java (Ayappan via aw) (aw: rev 
dbc9b6433e9276057181cf4927cedf321acd354e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlocksWithNotEnoughRacks.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java


> TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails 
> using IBM java
> -
>
> Key: HDFS-4681
> URL: https://issues.apache.org/jira/browse/HDFS-4681
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.5.2
> Environment: PowerPC Big Endian architecture
>Reporter: Tian Hong Wang
>Assignee: Ayappan
> Fix For: 3.0.0
>
> Attachments: HDFS-4681-v1.patch, HDFS-4681-v2.patch, HDFS-4681.patch
>
>
> TestBlocksWithNotEnoughRacks unit test fails with the following error message:
> 
> testCorruptBlockRereplicatedAcrossRacks(org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks)
>   Time elapsed: 8997 sec  <<< FAILURE!
> org.junit.ComparisonFailure: Corrupt replica 
> expected:<...��^EI�u�[�{���[$�\hF�[�R{O�L^S��g�#�O��׼��Wv��6u4Hd)FaŔ��^W�0��H/�^ZU^@�6�<02>���":)$�{|�^@�-���|GvW��7g
>  �/M��[U!eF�>^N^?�4pR�d��|��Ŵ7j^O^Sh�^@�nu�(�^C^Y�;I�Q�K^O"c���   
> oKtE�*�^\3u��]Ē:mŭ^^y�^H��_^T�^ZS4�7�C�^G�_���\|^W�vo���zgU�lmJ)_vq~�+^Mo^G^O�W}�.�4
> ��6b�S�&G�^?��m4FW#^@
> D5��}�^Z�^]���mfR^G#T-�N��̋�p���`�~��`�^F;�^C]> but 
> was:<...��^EI�u�[�{���[$�\hF�[R{O�L^S��g�#�O��׼��Wv��6u4Hd)FaŔ��^W�0��H/�^ZU^@�6�<02>�":)$�{|�^@�-���|GvW��7g
>  �/M�[U!eF�>^N^?�4pR�d��|��Ŵ7j^O^Sh�^@�nu�(�^C^Y�;I�Q�K^O"c���  
> oKtE�*�^\3u��]Ē:mŭ^^y���^H��_^T�^ZS���4�7�C�^G�_���\|^W�vo���zgU�lmJ)_vq~�+^Mo^G^O�W}�.�4
>��6b�S�&G�^?��m4FW#^@
> D5��}�^Z�^]���mfR^G#T-�N�̋�p���`�~��`�^F;�]>
> at org.junit.Assert.assertEquals(Assert.java:123)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks.testCorruptBlockRereplicatedAcrossRacks(TestBlocksWithNotEnoughRacks.java:229)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5853) Add "hadoop.user.group.metrics.percentiles.intervals" to hdfs-default.xml

2015-03-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342244#comment-14342244
 ] 

Hudson commented on HDFS-5853:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2051 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2051/])
HDFS-5853. Add "hadoop.user.group.metrics.percentiles.intervals" to 
hdfs-default.xml (aajisaka) (aajisaka: rev 
aa55fd3096442f186aebc5a767d7e271b7224b51)
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Add "hadoop.user.group.metrics.percentiles.intervals" to hdfs-default.xml
> -
>
> Key: HDFS-5853
> URL: https://issues.apache.org/jira/browse/HDFS-5853
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, namenode
>Affects Versions: 2.3.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HDFS-5853.patch
>
>
> "hadoop.user.group.metrics.percentiles.intervals" was added in HDFS-5220, but 
> the parameter is not written in hdfs-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4681) TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails using IBM java

2015-03-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342248#comment-14342248
 ] 

Hudson commented on HDFS-4681:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2051 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2051/])
HDFS-4681. TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks 
fails using IBM java (Ayappan via aw) (aw: rev 
dbc9b6433e9276057181cf4927cedf321acd354e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlocksWithNotEnoughRacks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails 
> using IBM java
> -
>
> Key: HDFS-4681
> URL: https://issues.apache.org/jira/browse/HDFS-4681
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.5.2
> Environment: PowerPC Big Endian architecture
>Reporter: Tian Hong Wang
>Assignee: Ayappan
> Fix For: 3.0.0
>
> Attachments: HDFS-4681-v1.patch, HDFS-4681-v2.patch, HDFS-4681.patch
>
>
> TestBlocksWithNotEnoughRacks unit test fails with the following error message:
> 
> testCorruptBlockRereplicatedAcrossRacks(org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks)
>   Time elapsed: 8997 sec  <<< FAILURE!
> org.junit.ComparisonFailure: Corrupt replica 
> expected:<...��^EI�u�[�{���[$�\hF�[�R{O�L^S��g�#�O��׼��Wv��6u4Hd)FaŔ��^W�0��H/�^ZU^@�6�<02>���":)$�{|�^@�-���|GvW��7g
>  �/M��[U!eF�>^N^?�4pR�d��|��Ŵ7j^O^Sh�^@�nu�(�^C^Y�;I�Q�K^O"c���   
> oKtE�*�^\3u��]Ē:mŭ^^y�^H��_^T�^ZS4�7�C�^G�_���\|^W�vo���zgU�lmJ)_vq~�+^Mo^G^O�W}�.�4
> ��6b�S�&G�^?��m4FW#^@
> D5��}�^Z�^]���mfR^G#T-�N��̋�p���`�~��`�^F;�^C]> but 
> was:<...��^EI�u�[�{���[$�\hF�[R{O�L^S��g�#�O��׼��Wv��6u4Hd)FaŔ��^W�0��H/�^ZU^@�6�<02>�":)$�{|�^@�-���|GvW��7g
>  �/M�[U!eF�>^N^?�4pR�d��|��Ŵ7j^O^Sh�^@�nu�(�^C^Y�;I�Q�K^O"c���  
> oKtE�*�^\3u��]Ē:mŭ^^y���^H��_^T�^ZS���4�7�C�^G�_���\|^W�vo���zgU�lmJ)_vq~�+^Mo^G^O�W}�.�4
>��6b�S�&G�^?��m4FW#^@
> D5��}�^Z�^]���mfR^G#T-�N�̋�p���`�~��`�^F;�]>
> at org.junit.Assert.assertEquals(Assert.java:123)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks.testCorruptBlockRereplicatedAcrossRacks(TestBlocksWithNotEnoughRacks.java:229)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4681) TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails using IBM java

2015-03-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342174#comment-14342174
 ] 

Hudson commented on HDFS-4681:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #853 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/853/])
HDFS-4681. TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks 
fails using IBM java (Ayappan via aw) (aw: rev 
dbc9b6433e9276057181cf4927cedf321acd354e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlocksWithNotEnoughRacks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails 
> using IBM java
> -
>
> Key: HDFS-4681
> URL: https://issues.apache.org/jira/browse/HDFS-4681
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.5.2
> Environment: PowerPC Big Endian architecture
>Reporter: Tian Hong Wang
>Assignee: Ayappan
> Fix For: 3.0.0
>
> Attachments: HDFS-4681-v1.patch, HDFS-4681-v2.patch, HDFS-4681.patch
>
>
> TestBlocksWithNotEnoughRacks unit test fails with the following error message:
> 
> testCorruptBlockRereplicatedAcrossRacks(org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks)
>   Time elapsed: 8997 sec  <<< FAILURE!
> org.junit.ComparisonFailure: Corrupt replica 
> expected:<...��^EI�u�[�{���[$�\hF�[�R{O�L^S��g�#�O��׼��Wv��6u4Hd)FaŔ��^W�0��H/�^ZU^@�6�<02>���":)$�{|�^@�-���|GvW��7g
>  �/M��[U!eF�>^N^?�4pR�d��|��Ŵ7j^O^Sh�^@�nu�(�^C^Y�;I�Q�K^O"c���   
> oKtE�*�^\3u��]Ē:mŭ^^y�^H��_^T�^ZS4�7�C�^G�_���\|^W�vo���zgU�lmJ)_vq~�+^Mo^G^O�W}�.�4
> ��6b�S�&G�^?��m4FW#^@
> D5��}�^Z�^]���mfR^G#T-�N��̋�p���`�~��`�^F;�^C]> but 
> was:<...��^EI�u�[�{���[$�\hF�[R{O�L^S��g�#�O��׼��Wv��6u4Hd)FaŔ��^W�0��H/�^ZU^@�6�<02>�":)$�{|�^@�-���|GvW��7g
>  �/M�[U!eF�>^N^?�4pR�d��|��Ŵ7j^O^Sh�^@�nu�(�^C^Y�;I�Q�K^O"c���  
> oKtE�*�^\3u��]Ē:mŭ^^y���^H��_^T�^ZS���4�7�C�^G�_���\|^W�vo���zgU�lmJ)_vq~�+^Mo^G^O�W}�.�4
>��6b�S�&G�^?��m4FW#^@
> D5��}�^Z�^]���mfR^G#T-�N�̋�p���`�~��`�^F;�]>
> at org.junit.Assert.assertEquals(Assert.java:123)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks.testCorruptBlockRereplicatedAcrossRacks(TestBlocksWithNotEnoughRacks.java:229)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5853) Add "hadoop.user.group.metrics.percentiles.intervals" to hdfs-default.xml

2015-03-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342170#comment-14342170
 ] 

Hudson commented on HDFS-5853:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #853 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/853/])
HDFS-5853. Add "hadoop.user.group.metrics.percentiles.intervals" to 
hdfs-default.xml (aajisaka) (aajisaka: rev 
aa55fd3096442f186aebc5a767d7e271b7224b51)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


> Add "hadoop.user.group.metrics.percentiles.intervals" to hdfs-default.xml
> -
>
> Key: HDFS-5853
> URL: https://issues.apache.org/jira/browse/HDFS-5853
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, namenode
>Affects Versions: 2.3.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HDFS-5853.patch
>
>
> "hadoop.user.group.metrics.percentiles.intervals" was added in HDFS-5220, but 
> the parameter is not written in hdfs-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4681) TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails using IBM java

2015-03-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342151#comment-14342151
 ] 

Hudson commented on HDFS-4681:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #119 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/119/])
HDFS-4681. TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks 
fails using IBM java (Ayappan via aw) (aw: rev 
dbc9b6433e9276057181cf4927cedf321acd354e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlocksWithNotEnoughRacks.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java


> TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails 
> using IBM java
> -
>
> Key: HDFS-4681
> URL: https://issues.apache.org/jira/browse/HDFS-4681
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.5.2
> Environment: PowerPC Big Endian architecture
>Reporter: Tian Hong Wang
>Assignee: Ayappan
> Fix For: 3.0.0
>
> Attachments: HDFS-4681-v1.patch, HDFS-4681-v2.patch, HDFS-4681.patch
>
>
> TestBlocksWithNotEnoughRacks unit test fails with the following error message:
> 
> testCorruptBlockRereplicatedAcrossRacks(org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks)
>   Time elapsed: 8997 sec  <<< FAILURE!
> org.junit.ComparisonFailure: Corrupt replica 
> expected:<...��^EI�u�[�{���[$�\hF�[�R{O�L^S��g�#�O��׼��Wv��6u4Hd)FaŔ��^W�0��H/�^ZU^@�6�<02>���":)$�{|�^@�-���|GvW��7g
>  �/M��[U!eF�>^N^?�4pR�d��|��Ŵ7j^O^Sh�^@�nu�(�^C^Y�;I�Q�K^O"c���   
> oKtE�*�^\3u��]Ē:mŭ^^y�^H��_^T�^ZS4�7�C�^G�_���\|^W�vo���zgU�lmJ)_vq~�+^Mo^G^O�W}�.�4
> ��6b�S�&G�^?��m4FW#^@
> D5��}�^Z�^]���mfR^G#T-�N��̋�p���`�~��`�^F;�^C]> but 
> was:<...��^EI�u�[�{���[$�\hF�[R{O�L^S��g�#�O��׼��Wv��6u4Hd)FaŔ��^W�0��H/�^ZU^@�6�<02>�":)$�{|�^@�-���|GvW��7g
>  �/M�[U!eF�>^N^?�4pR�d��|��Ŵ7j^O^Sh�^@�nu�(�^C^Y�;I�Q�K^O"c���  
> oKtE�*�^\3u��]Ē:mŭ^^y���^H��_^T�^ZS���4�7�C�^G�_���\|^W�vo���zgU�lmJ)_vq~�+^Mo^G^O�W}�.�4
>��6b�S�&G�^?��m4FW#^@
> D5��}�^Z�^]���mfR^G#T-�N�̋�p���`�~��`�^F;�]>
> at org.junit.Assert.assertEquals(Assert.java:123)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks.testCorruptBlockRereplicatedAcrossRacks(TestBlocksWithNotEnoughRacks.java:229)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5853) Add "hadoop.user.group.metrics.percentiles.intervals" to hdfs-default.xml

2015-03-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342147#comment-14342147
 ] 

Hudson commented on HDFS-5853:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #119 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/119/])
HDFS-5853. Add "hadoop.user.group.metrics.percentiles.intervals" to 
hdfs-default.xml (aajisaka) (aajisaka: rev 
aa55fd3096442f186aebc5a767d7e271b7224b51)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


> Add "hadoop.user.group.metrics.percentiles.intervals" to hdfs-default.xml
> -
>
> Key: HDFS-5853
> URL: https://issues.apache.org/jira/browse/HDFS-5853
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, namenode
>Affects Versions: 2.3.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HDFS-5853.patch
>
>
> "hadoop.user.group.metrics.percentiles.intervals" was added in HDFS-5220, but 
> the parameter is not written in hdfs-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5853) Add "hadoop.user.group.metrics.percentiles.intervals" to hdfs-default.xml

2015-03-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342089#comment-14342089
 ] 

Hudson commented on HDFS-5853:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7230 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7230/])
HDFS-5853. Add "hadoop.user.group.metrics.percentiles.intervals" to 
hdfs-default.xml (aajisaka) (aajisaka: rev 
aa55fd3096442f186aebc5a767d7e271b7224b51)
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Add "hadoop.user.group.metrics.percentiles.intervals" to hdfs-default.xml
> -
>
> Key: HDFS-5853
> URL: https://issues.apache.org/jira/browse/HDFS-5853
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, namenode
>Affects Versions: 2.3.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HDFS-5853.patch
>
>
> "hadoop.user.group.metrics.percentiles.intervals" was added in HDFS-5220, but 
> the parameter is not written in hdfs-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-5853) Add "hadoop.user.group.metrics.percentiles.intervals" to hdfs-default.xml

2015-03-01 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-5853:

   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2. Thanks [~aw] for the review!

> Add "hadoop.user.group.metrics.percentiles.intervals" to hdfs-default.xml
> -
>
> Key: HDFS-5853
> URL: https://issues.apache.org/jira/browse/HDFS-5853
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, namenode
>Affects Versions: 2.3.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HDFS-5853.patch
>
>
> "hadoop.user.group.metrics.percentiles.intervals" was added in HDFS-5220, but 
> the parameter is not written in hdfs-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-5853) Add "hadoop.user.group.metrics.percentiles.intervals" to hdfs-default.xml

2015-03-01 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-5853:

Issue Type: Improvement  (was: Bug)

> Add "hadoop.user.group.metrics.percentiles.intervals" to hdfs-default.xml
> -
>
> Key: HDFS-5853
> URL: https://issues.apache.org/jira/browse/HDFS-5853
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, namenode
>Affects Versions: 2.3.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
> Attachments: HDFS-5853.patch
>
>
> "hadoop.user.group.metrics.percentiles.intervals" was added in HDFS-5220, but 
> the parameter is not written in hdfs-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7312) Update DistCp v1 to optionally not use tmp location

2015-03-01 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-7312:
---
Status: Open  (was: Patch Available)

Cancelling patch as it no longer applies.

> Update DistCp v1 to optionally not use tmp location
> ---
>
> Key: HDFS-7312
> URL: https://issues.apache.org/jira/browse/HDFS-7312
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 2.5.1
>Reporter: Joseph Prosser
>Assignee: Joseph Prosser
>Priority: Minor
> Attachments: HDFS-7312.001.patch, HDFS-7312.002.patch, 
> HDFS-7312.003.patch, HDFS-7312.004.patch, HDFS-7312.005.patch, 
> HDFS-7312.006.patch, HDFS-7312.007.patch, HDFS-7312.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> DistCp v1 currently copies files to a tmp location and then renames that to 
> the specified destination.  This can cause performance issues on filesystems 
> such as S3.  A -skiptmp flag will be added to bypass this step and copy 
> directly to the destination.  This feature mirrors a similar one added to 
> HBase ExportSnapshot 
> [HBASE-9|https://issues.apache.org/jira/browse/HBASE-9]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-5420) Not need to launch Secondary namenode for NN HA mode?

2015-03-01 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-5420:
---
Status: Open  (was: Patch Available)

> Not need to launch Secondary namenode for NN HA mode?
> -
>
> Key: HDFS-5420
> URL: https://issues.apache.org/jira/browse/HDFS-5420
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Raymond Liu
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HDFS-5420-00.patch
>
>
> For Hadoop 2, when deploying with NN HA, the wiki says that it is an error to 
> start a secondary namenode, yet sbin/start-dfs.sh still launches a secondary 
> namenode even when nothing related to the secondary namenode is configured. 
> Should this be fixed? Or do people just not use these scripts to start an HA 
> HDFS?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-5420) Not need to launch Secondary namenode for NN HA mode?

2015-03-01 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reassigned HDFS-5420:
--

Assignee: Allen Wittenauer

> Not need to launch Secondary namenode for NN HA mode?
> -
>
> Key: HDFS-5420
> URL: https://issues.apache.org/jira/browse/HDFS-5420
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Raymond Liu
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HDFS-5420-00.patch
>
>
> For Hadoop 2, when deploying with NN HA, the wiki says that it is an error to 
> start a secondary namenode, yet sbin/start-dfs.sh still launches a secondary 
> namenode even when nothing related to the secondary namenode is configured. 
> Should this be fixed? Or do people just not use these scripts to start an HA 
> HDFS?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-1348) Improve NameNode responsiveness while it is checking if datanode decommissions are complete

2015-03-01 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-1348.

Resolution: Unresolved

Cancelling the patch and closing this issue as stale given the amount of time 
that has passed and the refactoring of the code involved.


> Improve NameNode responsiveness while it is checking if datanode decommissions 
> are complete
> --
>
> Key: HDFS-1348
> URL: https://issues.apache.org/jira/browse/HDFS-1348
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Hairong Kuang
>Assignee: Hairong Kuang
> Attachments: decomissionImp1.patch, decomissionImp2.patch, 
> decommission.patch, decommission1.patch
>
>
> NameNode is normally busy all the time; its log is full of activity every 
> second. But once in a while, NameNode seems to pause for more than 10 
> seconds without doing anything, leaving a gap in its log even though no 
> garbage collection is happening. All other requests to NameNode are blocked 
> while this is happening.
> One culprit is DecommissionManager. Its monitor holds the FSNamesystem lock 
> for the whole process of checking whether decommissioning DataNodes have 
> finished, during which it checks every block of up to a default of 5 datanodes.
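
A generic sketch of the usual remedy for this pattern (hypothetical names, not the attached patches): release and reacquire the big lock between bounded batches of checks so that other requests can interleave with the scan.

{code:java}
import java.util.Iterator;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

/** Sketch: scan a long collection in bounded batches, dropping the big
 *  lock between batches so other operations can make progress. */
class BatchedScan<T> {
  // Stands in for the FSNamesystem lock in this sketch.
  private final Lock nsLock = new ReentrantLock();

  void scan(Iterator<T> items, int batchSize) {
    while (items.hasNext()) {
      nsLock.lock();
      try {
        for (int i = 0; i < batchSize && items.hasNext(); i++) {
          check(items.next());   // e.g. "are this node's blocks replicated?"
        }
      } finally {
        nsLock.unlock();         // let blocked requests run before the next batch
      }
    }
  }

  void check(T item) { /* placeholder for the per-block check */ }
}
{code}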



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-1348) Improve NameNode responsiveness while it is checking if datanode decommissions are complete

2015-03-01 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-1348:
---
Status: Open  (was: Patch Available)

> Improve NameNode responsiveness while it is checking if datanode decommissions 
> are complete
> --
>
> Key: HDFS-1348
> URL: https://issues.apache.org/jira/browse/HDFS-1348
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Hairong Kuang
>Assignee: Hairong Kuang
> Attachments: decomissionImp1.patch, decomissionImp2.patch, 
> decommission.patch, decommission1.patch
>
>
> NameNode is normally busy all the time; its log is full of activity every 
> second. But once in a while, NameNode seems to pause for more than 10 
> seconds without doing anything, leaving a gap in its log even though no 
> garbage collection is happening. All other requests to NameNode are blocked 
> while this is happening.
> One culprit is DecommissionManager. Its monitor holds the FSNamesystem lock 
> for the whole process of checking whether decommissioning DataNodes have 
> finished, during which it checks every block of up to a default of 5 datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-1497) Write pipeline sequence numbers should be sequential with no skips or duplicates

2015-03-01 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-1497.

Resolution: Unresolved

Cancelling the patch and closing this issue as stale given the amount of time 
that has passed and the refactoring of the code involved.

> Write pipeline sequence numbers should be sequential with no skips or 
> duplicates
> 
>
> Key: HDFS-1497
> URL: https://issues.apache.org/jira/browse/HDFS-1497
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 0.20-append, 0.22.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hdfs-1497.txt, hdfs-1497.txt, hdfs-1497.txt, 
> hdfs-1497.txt, hdfs-1497.txt
>
>
> In HDFS-895 we discovered that multiple hflush() calls in a row without 
> intervening writes could cause a skip in sequence number. This doesn't seem 
> to have any direct consequences, but we should maintain and assert the 
> invariant that sequence numbers have no gaps or duplicates.
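
The invariant itself is easy to state in code; a minimal sketch of the kind of bookkeeping and assertion involved (names are hypothetical, not the attached patches):

{code:java}
/** Sketch: enforce that packet sequence numbers are consecutive,
 *  with no skips and no duplicates. */
class SeqnoTracker {
  private long lastAssigned = -1;
  private long lastAcked = -1;

  synchronized long nextSeqno() {
    return ++lastAssigned;       // every packet gets exactly lastAssigned + 1
  }

  synchronized void ackSeqno(long seqno) {
    // Gaps or duplicates violate the invariant and fail fast here.
    assert seqno == lastAcked + 1
        : "gap or duplicate: expected " + (lastAcked + 1) + ", got " + seqno;
    lastAcked = seqno;
  }
}
{code}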



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-2084) Sometimes backup node/secondary name node stops with exception

2015-03-01 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-2084:
---
Resolution: Unresolved
Status: Resolved  (was: Patch Available)

Closing this issue as stale: the patch no longer applies, and given the number 
of changes to the NN in the past four years, there is a good chance the issue 
has already been dealt with. If not, please re-open. 

Thanks.

> Sometimes backup node/secondary name node stops with exception
> --
>
> Key: HDFS-2084
> URL: https://issues.apache.org/jira/browse/HDFS-2084
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.21.0
> Environment: FreeBSD
>Reporter: Vitalii Tymchyshyn
> Attachments: patch.diff
>
>
> 2011-06-17 11:43:23,096 ERROR 
> org.apache.hadoop.hdfs.server.namenode.Checkpointer: Throwable Exception in 
> doCheckpoint: 
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedSetTimes(FSDirectory.java:1765)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedSetTimes(FSDirectory.java:1753)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.loadEditRecords(FSEditLog.java:708)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.loadFSEdits(FSEditLog.java:411)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.loadFSEdits(FSEditLog.java:378)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:1209)
> at 
> org.apache.hadoop.hdfs.server.namenode.BackupStorage.loadCheckpoint(BackupStorage.java:158)
> at 
> org.apache.hadoop.hdfs.server.namenode.Checkpointer.doCheckpoint(Checkpointer.java:243)
> at 
> org.apache.hadoop.hdfs.server.namenode.Checkpointer.run(Checkpointer.java:141)
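
Without having read patch.diff, the trace suggests unprotectedSetTimes dereferences an inode that is missing while edits are replayed on the checkpointing node. A generic sketch of a null guard for that shape of bug (hypothetical, simplified namespace):

{code:java}
import java.util.HashMap;
import java.util.Map;

/** Sketch: null-guard an "apply edit" step so replaying an edit whose
 *  target is missing logs and skips instead of throwing NPE. */
class EditReplaySketch {
  static class Inode { long mtime, atime; }
  private final Map<String, Inode> namespace = new HashMap<String, Inode>();

  void applySetTimes(String path, long mtime, long atime) {
    Inode inode = namespace.get(path);   // may be null mid-replay
    if (inode == null) {
      System.err.println("setTimes on missing path " + path + "; skipping");
      return;                            // skip the edit rather than NPE
    }
    inode.mtime = mtime;
    inode.atime = atime;
  }
}
{code}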



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-1497) Write pipeline sequence numbers should be sequential with no skips or duplicates

2015-03-01 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-1497:
---
Status: Open  (was: Patch Available)

> Write pipeline sequence numbers should be sequential with no skips or 
> duplicates
> 
>
> Key: HDFS-1497
> URL: https://issues.apache.org/jira/browse/HDFS-1497
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 0.22.0, 0.20-append
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hdfs-1497.txt, hdfs-1497.txt, hdfs-1497.txt, 
> hdfs-1497.txt, hdfs-1497.txt
>
>
> In HDFS-895 we discovered that multiple hflush() calls in a row without 
> intervening writes could cause a skip in sequence number. This doesn't seem 
> to have any direct consequences, but we should maintain and assert the 
> invariant that sequence numbers have no gaps or duplicates.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-2306) NameNode web UI should show information about recent checkpoints

2015-03-01 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-2306:
---
Status: Open  (was: Patch Available)

Cancelling patch since it no longer applies.

> NameNode web UI should show information about recent checkpoints
> 
>
> Key: HDFS-2306
> URL: https://issues.apache.org/jira/browse/HDFS-2306
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 0.24.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: checkpoint-history.1.png, checkpoint-history.png, 
> hdfs-2306.0.patch, hdfs-2306.1.patch
>
>
> It would be nice if the NN web UI showed the 2NN address, timestamp, number 
> of edits, etc. of the last few checkpoints.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-2306) NameNode web UI should show information about recent checkpoints

2015-03-01 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-2306:
---
Fix Version/s: (was: 0.24.0)

> NameNode web UI should show information about recent checkpoints
> 
>
> Key: HDFS-2306
> URL: https://issues.apache.org/jira/browse/HDFS-2306
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 0.24.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: checkpoint-history.1.png, checkpoint-history.png, 
> hdfs-2306.0.patch, hdfs-2306.1.patch
>
>
> It would be nice if the NN web UI showed the 2NN address, timestamp, number 
> of edits, etc. of the last few checkpoints.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)