[jira] [Resolved] (HDFS-7117) Not all datanodes are displayed on the namenode http tab

2015-01-21 Thread Vinayakumar B (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-7117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinayakumar B resolved HDFS-7117.
---------------------------------
   Resolution: Invalid
Fix Version/s: (was: 2.4.0)

This issue is not present in the current trunk code. Resolving as Invalid.

Feel free to re-open if required.

> Not all datanodes are displayed on the namenode http tab
> ---------------------------------------------------------
>
> Key: HDFS-7117
> URL: https://issues.apache.org/jira/browse/HDFS-7117
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Jean-Baptiste Onofré
>
> On a single machine, I have three "fake nodes" (each node uses a different 
> dfs.datanode.address, dfs.datanode.ipc.address, and dfs.datanode.http.address)
> - node1 starts the namenode and a datanode
> - node2 starts a datanode
> - node3 starts a datanode
> In the namenode http console, on the overview, I can see 3 live nodes:
> {code}
> http://localhost:50070/dfshealth.html#tab-overview
> {code}
> but, when clicking on the "Live Nodes":
> {code}
> http://localhost:50070/dfshealth.html#tab-datanode
> {code}
> I can see only one node row.
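
For cross-checking, the live-datanode list can also be queried directly from the NameNode over RPC instead of through the web UI. A minimal sketch (the hdfs://localhost:8020 URI is an assumption for this single-machine setup):

{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;

public class LiveNodeCheck {
  public static void main(String[] args) throws Exception {
    // Assumption: the NameNode of the three-"node" setup listens on localhost:8020.
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(
        URI.create("hdfs://localhost:8020"), new Configuration());
    // Ask the NameNode directly for its live datanodes.
    DatanodeInfo[] live = dfs.getDataNodeStats(DatanodeReportType.LIVE);
    System.out.println("Live datanodes reported by NN: " + live.length);
    for (DatanodeInfo dn : live) {
      System.out.println("  " + dn.getXferAddr());
    }
  }
}
{code}

If this prints three distinct transfer addresses while the datanode tab shows a single row, the problem is in the UI rendering rather than in datanode registration.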



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7655) Expose truncate API for Web HDFS

2015-01-21 Thread Yi Liu (JIRA)
Yi Liu created HDFS-7655:


 Summary: Expose truncate API for Web HDFS
 Key: HDFS-7655
 URL: https://issues.apache.org/jira/browse/HDFS-7655
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Yi Liu
Assignee: Yi Liu


This JIRA is to expose the truncate API for WebHDFS.
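
Truncate is already available on the FileSystem API as truncate(Path, long) (HDFS-3107). Assuming the WebHDFS client ends up implementing that same contract once this sub-task lands (an assumption, not the committed design), client-side usage might look like:

{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WebHdfsTruncateSketch {
  public static void main(String[] args) throws Exception {
    // Assumption: WebHDFS endpoint on the default NameNode HTTP port.
    FileSystem fs = FileSystem.get(
        URI.create("webhdfs://localhost:50070"), new Configuration());
    // Truncate /tmp/data down to 1024 bytes. truncate() returns true if the
    // file is immediately usable, false if block recovery is still in progress.
    boolean done = fs.truncate(new Path("/tmp/data"), 1024L);
    System.out.println("truncate finished without recovery: " + done);
  }
}
{code}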



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7656) Expose truncate API for HDFS httpfs

2015-01-21 Thread Yi Liu (JIRA)
Yi Liu created HDFS-7656:


 Summary: Expose truncate API for HDFS httpfs
 Key: HDFS-7656
 URL: https://issues.apache.org/jira/browse/HDFS-7656
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Yi Liu
Assignee: Yi Liu


This JIRA is to expose the truncate API for HDFS httpfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7654) TestFileTruncate#testTruncateEditLogLoad fails intermittently

2015-01-21 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-7654:
--------------------------------------

 Summary: TestFileTruncate#testTruncateEditLogLoad fails 
intermittently
 Key: HDFS-7654
 URL: https://issues.apache.org/jira/browse/HDFS-7654
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Colin Patrick McCabe


TestFileTruncate#testTruncateEditLogLoad fails intermittently with an error 
message like this: 
{code}
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
at org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1194)
at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1819)
at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1780)
at org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateEditLogLoad(TestFileTruncate.java:500)
{code}

Also, FSNamesystem ERROR logs appear in the test run log even when the test 
passes.  Example:
{code}
2015-01-21 18:52:36,474 ERROR namenode.NameNode 
(DirectoryWithQuotaFeature.java:checkDiskspace(82)) - BUG: Inconsistent 
diskspace for directory /test. Cached = 48 != Computed = 54
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7653) Block Readers and Writers used in both client side and datanode side

2015-01-21 Thread Li Bo (JIRA)
Li Bo created HDFS-7653:
------------------------

 Summary: Block Readers and Writers used in both client side and 
datanode side
 Key: HDFS-7653
 URL: https://issues.apache.org/jira/browse/HDFS-7653
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo


There are a lot of block read/write operations in HDFS-EC. For example, when a 
client writes a file in the striping layout, it has to write several blocks to 
several different datanodes; when a datanode performs an encoding/decoding 
task, it has to read several blocks from itself and other datanodes, and write 
one or more blocks to itself or other datanodes.
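
As a rough illustration of that access pattern (all names below are hypothetical, not the eventual HDFS-EC classes): a striped writer fans the cells of each stripe out to one block writer per datanode.

{code}
import java.io.Closeable;
import java.io.IOException;

// Hypothetical interface, for illustration only: one writer per target datanode.
interface BlockWriter extends Closeable {
  void write(byte[] cell) throws IOException;
}

class StripedGroupWriter {
  /**
   * Writes one stripe: cell i goes to datanode i. Parity cells would be
   * produced by the erasure encoder and written through the same writers.
   */
  static void writeStripe(byte[][] cells, BlockWriter[] writers)
      throws IOException {
    for (int i = 0; i < writers.length; i++) {
      writers[i].write(cells[i]);
    }
  }
}
{code}

A datanode running a decoding task would hold the mirror image: several reader handles (local and remote) feeding the decoder, plus one or more writers for the recovered blocks.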



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-3458) Convert Forrest docs to APT

2015-01-21 Thread Allen Wittenauer (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-3458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer resolved HDFS-3458.

  Resolution: Fixed
   Fix Version/s: 2.0.0-alpha
Target Version/s:   (was: )

Closing as fixed, since this was done eons ago.

> Convert Forrest docs to APT
> ---------------------------
>
> Key: HDFS-3458
> URL: https://issues.apache.org/jira/browse/HDFS-3458
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: documentation
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>  Labels: newbie
> Fix For: 2.0.0-alpha
>
>
> HDFS side of HADOOP-8427. The src/main/docs/src/documentation/content/xdocs 
> contents need to be converted to APT and removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7652) Process block reports for erasure coded blocks

2015-01-21 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-7652:
----------------------------

 Summary: Process block reports for erasure coded blocks
 Key: HDFS-7652
 URL: https://issues.apache.org/jira/browse/HDFS-7652
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang


HDFS-7339 adds support in NameNode for persisting block groups. For memory 
efficiency, erasure coded blocks under the striping layout are not stored in 
{{BlockManager#blocksMap}}. Instead, entire block groups are stored in 
{{BlockGroupManager#blockGroups}}. When a block report arrives from the 
DataNode, it should be processed under the block group that it belongs to. The 
following naming protocol is used to calculate the group of a given block:
{code}
 * HDFS-EC introduces a hierarchical protocol to name blocks and groups:
 * Contiguous: {reserved block IDs | flag | block ID}
 * Striped: {reserved block IDs | flag | block group ID | index in group}
 *
 * Following the n bits of reserved block IDs, the (n+1)th bit in an ID
 * distinguishes contiguous (0) and striped (1) blocks. For a striped block,
 * bits (n+2) to (64-m) represent the ID of its block group, while the last m
 * bits represent its index in the group. The value m is determined by the
 * maximum number of blocks in a group (MAX_BLOCKS_IN_GROUP).
{code}
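
A small sketch of the ID arithmetic this implies (the constant below is an assumption for illustration; with MAX_BLOCKS_IN_GROUP = 2^m, a striped block's group ID is its ID with the low m index bits cleared):

{code}
class BlockGroupIdSketch {
  // Assumption for illustration: at most 16 blocks per group, i.e. m = 4.
  static final int MAX_BLOCKS_IN_GROUP = 16;
  static final long INDEX_MASK = MAX_BLOCKS_IN_GROUP - 1; // low m bits

  /** Block group ID: the reported block ID with its index bits cleared. */
  static long groupIdOf(long blockId) {
    return blockId & ~INDEX_MASK;
  }

  /** Index of the block within its group: the low m bits. */
  static int indexInGroup(long blockId) {
    return (int) (blockId & INDEX_MASK);
  }
}
{code}

This is what lets a block report entry be mapped back to its group without a per-block entry in blocksMap.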



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-5183) Combine ReplicaPlacementPolicy with VolumeChoosingPolicy together to have a global view in choosing DN storage for replica.

2015-01-21 Thread Arpit Agarwal (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-5183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arpit Agarwal resolved HDFS-5183.
---------------------------------
  Resolution: Implemented
Hadoop Flags:   (was: Incompatible change)

Resolving this as Implemented.

As part of HDFS-6584, the first of your two approaches was chosen.
bq. 1. Client specifies the required storage type when calling addBlock(..) to 
NN. BlockPlacementPolicy in NN chooses a set of datanodes accounting for the 
storage type. Then, client passes the required storage type to the datanode set 
and each datanode chooses a particular storage using a VolumeChoosingPolicy.


> Combine ReplicaPlacementPolicy with VolumeChoosingPolicy together to have a 
> global view in choosing DN storage for replica.
> ---------------------------------------------------------------------------
>
> Key: HDFS-5183
> URL: https://issues.apache.org/jira/browse/HDFS-5183
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode, performance
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Junping Du
>
> Per discussion in HDFS-5157, there are two different ways to handle 
> BlockPlacementPolicy and ReplicaChoosingPolicy in case of multiple storage 
> types:
>  1. Client specifies the required storage type when calling addBlock(..) to 
> NN. BlockPlacementPolicy in NN chooses a set of datanodes accounting for the 
> storage type. Then, client passes the required storage type to the datanode 
> set and each datanode chooses a particular storage using a 
> VolumeChoosingPolicy.
>  2. Same as before, client specifies the required storage type when calling 
> addBlock(..) to NN. Now, BlockPlacementPolicy in NN chooses a set of storages 
> (instead of datanodes). Then, client writes to the corresponding storages. 
> VolumeChoosingPolicy is no longer needed and it should be removed.
> We think #2 is more powerful, as it brings a global view to volume choosing 
> and takes storage status into consideration in replica choosing, so we propose 
> to combine the two policies.
> One concern here is that it may increase the load on the NameNode, since volume 
> choosing was previously decided by the DN. We may verify this later (that's why 
> I put performance in the component list).
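
As a sketch of the datanode-side half of approach 1 (types and names below are assumed for illustration, not the actual VolumeChoosingPolicy API): filter the local volumes by the requested storage type, then pick one with room for the block.

{code}
import java.io.IOException;
import java.util.List;

// Hypothetical volume abstraction, for illustration only.
interface Volume {
  String storageType(); // e.g. "DISK", "SSD"
  long available();     // free bytes
}

class StorageTypeAwareChooser {
  /** Picks the first volume of the required type with room for the block. */
  static Volume choose(List<Volume> volumes, String requiredType,
      long blockSize) throws IOException {
    for (Volume v : volumes) {
      if (v.storageType().equals(requiredType) && v.available() >= blockSize) {
        return v;
      }
    }
    throw new IOException("no " + requiredType + " volume with "
        + blockSize + " bytes free");
  }
}
{code}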



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-5423) Verify initializations of LocatedBlock/RecoveringBlock

2015-01-21 Thread Arpit Agarwal (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-5423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arpit Agarwal resolved HDFS-5423.
---------------------------------
Resolution: Invalid

Resolving as Invalid. I don't believe any work remains here.

> Verify initializations of LocatedBlock/RecoveringBlock
> ------------------------------------------------------
>
> Key: HDFS-5423
> URL: https://issues.apache.org/jira/browse/HDFS-5423
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> Tracking JIRA to make sure we verify initialization of LocatedBlock and 
> RecoveringBlock, and possibly reorganize the constructors to make missing 
> initialization of StorageIDs less likely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #77

2015-01-21 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/77/

Changes:

[arp] HDFS-7634. Disallow truncation of Lazy persist files. (Contributed by Yi 
Liu)

[arp] HDFS-7634. Fix CHANGES.txt

[wangda] YARN-2731. Fixed RegisterApplicationMasterResponsePBImpl to properly 
invoke maybeInitBuilder. (Contributed by Carlo Curino)

[yliu] HDFS-7623. Add htrace configuration properties to core-default.xml and 
update user doc about how to enable htrace. (yliu)

[yliu] HDFS-7641. Update archival storage user doc for list/set/get block 
storage policies. (yliu)

[cmccabe] HDFS-7496. Fix FsVolume removal race conditions on the DataNode by 
reference-counting the volume instances (lei via cmccabe)

[cmccabe] HDFS-7496: add to CHANGES.txt

[cmccabe] HDFS-7610. Fix removal of dynamically added DN volumes (Lei (Eddy) Xu 
via Colin P. McCabe)

[cmccabe] HDFS-7610. Add CHANGES.txt

[arp] HDFS-7643. Test case to ensure lazy persist files cannot be truncated. 
(Contributed by Yi Liu)

------------------------------------------
[...truncated 6519 lines...]
Running org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.573 sec - in 
org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.96 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.067 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDecommission
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 165.747 sec - 
in org.apache.hadoop.hdfs.TestDecommission
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.266 sec - 
in org.apache.hadoop.hdfs.TestDFSUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestGetBlocks
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.67 sec - in 
org.apache.hadoop.hdfs.TestGetBlocks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.146 sec - in 
org.apache.hadoop.hdfs.TestMultiThreadedHflush
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.071 sec - in 
org.apache.hadoop.hdfs.util.TestCyclicIteration
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.206 sec - in 
org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestDiff
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.454 sec - in 
org.apache.hadoop.hdfs.util.TestDiff
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestByteArrayManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.817 sec - in 
org.apache.hadoop.hdfs.util.TestByteArrayManager
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestXMLUtils
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.074 sec - in 
org.apache.hadoop.hdfs.util.TestXMLUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.175 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.258 sec - in 
org.apache.hadoop.hdfs.util.TestMD5FileUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPerm

Hadoop-Hdfs-trunk-Java8 - Build # 77 - Still Failing

2015-01-21 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/77/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
###################################################################################
[...truncated 6712 lines...]

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:52 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  1.789 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:52 h
[INFO] Finished at: 2015-01-21T14:27:15+00:00
[INFO] Final Memory: 53M/238M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating YARN-2731
Updating HDFS-7643
Updating HDFS-7496
Updating HDFS-7641
Updating HDFS-7634
Updating HDFS-7610
Updating HDFS-7623
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ###############################
###################################################################################
1 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.balancer.TestBalancer.testUnknownDatanode

Error Message:
expected:<0> but was:<-3>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<-3>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testUnknownDatanode(TestBalancer.java:806)




[jira] [Created] (HDFS-7651) [NN Bench] nnbench can't accept variables via "-D" because class NNBench does not declare "extends Configured implements Tool"

2015-01-21 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-7651:
---------------------------------------

 Summary: [NN Bench] nnbench can't accept variables via "-D" because 
class NNBench does not declare "extends Configured implements Tool"
 Key: HDFS-7651
 URL: https://issues.apache.org/jira/browse/HDFS-7651
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


{code}
public class NNBench {
  private static final Log LOG = LogFactory.getLog(
  "org.apache.hadoop.hdfs.NNBench");
{code}
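
A minimal sketch of the usual fix (an illustration, not the committed patch): extend Configured and implement Tool, so that ToolRunner applies generic options such as -D key=value to the Configuration before run() executes.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class NNBench extends Configured implements Tool {
  @Override
  public int run(String[] args) throws Exception {
    // -D key=value pairs from the command line have already been applied
    // to this Configuration by ToolRunner/GenericOptionsParser.
    Configuration conf = getConf();
    // ... existing benchmark logic, reading its settings from conf ...
    return 0;
  }

  public static void main(String[] args) throws Exception {
    System.exit(ToolRunner.run(new Configuration(), new NNBench(), args));
  }
}
{code}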



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)