[jira] [Resolved] (HDFS-8768) Erasure Coding: block group ID displayed in WebUI is not consistent with fsck

2015-07-28 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang resolved HDFS-8768.
-
Resolution: Duplicate

> Erasure Coding: block group ID displayed in WebUI is not consistent with fsck
> -
>
> Key: HDFS-8768
> URL: https://issues.apache.org/jira/browse/HDFS-8768
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: GAO Rui
> Attachments: Screen Shot 2015-07-14 at 15.33.08.png, 
> screen-shot-with-HDFS-8779-patch.PNG
>
>
> This is duplicated by [HDFS-8779].
> For example, in the WebUI (usually on namenode port 50070), an erasure-coded 
> file with one block group was displayed as in the attached screenshot [^Screen 
> Shot 2015-07-14 at 15.33.08.png]. But with the fsck command, the block group of 
> the same file was displayed as: {{0. 
> BP-1130999596-172.23.38.10-1433791629728:blk_-9223372036854740160_3384 
> len=6438256640}}
> After checking the block file names on the datanodes, we believe the WebUI may 
> have a problem displaying erasure-coded block groups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8499) Refactor BlockInfo class hierarchy with static helper class

2015-07-28 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang resolved HDFS-8499.
-
  Resolution: Fixed
Hadoop Flags: Reviewed

[~andrew.wang] tried reverting the patch and found some conflicts. Instead of 
reverting, we can take the chance to set up the {{BlockInfoUnderConstruction}} 
interface, which should be done in trunk anyway. I created HDFS-8835 to make 
the necessary changes. Resolving this JIRA again.

> Refactor BlockInfo class hierarchy with static helper class
> ---
>
> Key: HDFS-8499
> URL: https://issues.apache.org/jira/browse/HDFS-8499
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0
>
> Attachments: HDFS-8499.00.patch, HDFS-8499.01.patch, 
> HDFS-8499.02.patch, HDFS-8499.03.patch, HDFS-8499.04.patch, 
> HDFS-8499.05.patch, HDFS-8499.06.patch, HDFS-8499.07.patch, 
> HDFS-8499.UCFeature.patch, HDFS-bistriped.patch
>
>
> In the HDFS-7285 branch, the {{BlockInfoUnderConstruction}} interface provides a 
> common abstraction for striped and contiguous UC blocks. This JIRA aims to 
> merge it into trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8835) Convert BlockInfoUnderConstruction as an interface

2015-07-28 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-8835:
---

 Summary: Convert BlockInfoUnderConstruction as an interface
 Key: HDFS-8835
 URL: https://issues.apache.org/jira/browse/HDFS-8835
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.7.1
Reporter: Zhe Zhang
Assignee: Zhe Zhang


Per discussion under HDFS-8499, this JIRA aims to convert 
{{BlockInfoUnderConstruction}} into an interface, with 
{{BlockInfoContiguousUnderConstruction}} as its implementation. The HDFS-7285 
branch will add {{BlockInfoStripedUnderConstruction}} as another implementation.
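
For illustration, a trimmed-down and purely hypothetical sketch of the proposed 
shape; the real classes live in 
{{org.apache.hadoop.hdfs.server.blockmanagement}} and carry far more state:
{code}
// Hypothetical sketch only; the actual interface will expose whatever
// the UC call sites need (replica tracking, recovery state, commit).
interface BlockInfoUnderConstruction {
  long getBlockRecoveryId();
  void initializeBlockRecovery(long recoveryId);
}

// Trunk implementation for contiguous (replicated) blocks.
class BlockInfoContiguousUnderConstruction
    implements BlockInfoUnderConstruction {
  private long blockRecoveryId;
  @Override public long getBlockRecoveryId() { return blockRecoveryId; }
  @Override public void initializeBlockRecovery(long recoveryId) {
    this.blockRecoveryId = recoveryId; // plus expected-replica bookkeeping
  }
}

// To be added by the HDFS-7285 branch as the second implementation.
class BlockInfoStripedUnderConstruction
    implements BlockInfoUnderConstruction {
  private long blockRecoveryId;
  @Override public long getBlockRecoveryId() { return blockRecoveryId; }
  @Override public void initializeBlockRecovery(long recoveryId) {
    this.blockRecoveryId = recoveryId; // striped variant tracks cell replicas
  }
}
{code}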



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8834) TestReplication#testReplicationWhenBlockCorruption is not valid after HDFS-6482

2015-07-28 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HDFS-8834:
---

 Summary: TestReplication#testReplicationWhenBlockCorruption is not 
valid after HDFS-6482
 Key: HDFS-8834
 URL: https://issues.apache.org/jira/browse/HDFS-8834
 Project: Hadoop HDFS
  Issue Type: Test
  Components: datanode
Affects Versions: 2.7.1
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor


{{TestReplication#testReplicationWhenBlockCorruption}} assumes the DN has a 
single level of block directories:
{code}
File[] listFiles = participatedNodeDirs.listFiles();
{code}

However, HDFS-6482 changed the layout of block directories to two levels, 
which makes the following code invalid (it never runs):

{code}
for (File file : listFiles) {
  if (file.getName().startsWith(Block.BLOCK_FILE_PREFIX)
      && !file.getName().endsWith("meta")) {
    blockFile = file.getName();
    for (File file1 : nonParticipatedNodeDirs) {
      file1.mkdirs();
      new File(file1, blockFile).createNewFile();
      new File(file1, blockFile + "_1000.meta").createNewFile();
    }
    break;
  }
}
{code}
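
One way to make the test layout-agnostic (a sketch only, not the actual patch) 
is to search the DN data directory recursively instead of assuming the block 
files sit at depth one:
{code}
// Sketch: recurse through subdir0/subdir1/... so the lookup works with
// both the old flat layout and the HDFS-6482 two-level layout.
private static File findBlockFile(File dir) {
  File[] entries = dir.listFiles();
  if (entries == null) {
    return null;
  }
  for (File entry : entries) {
    if (entry.isDirectory()) {
      File found = findBlockFile(entry);
      if (found != null) {
        return found;
      }
    } else if (entry.getName().startsWith(Block.BLOCK_FILE_PREFIX)
        && !entry.getName().endsWith("meta")) {
      return entry; // first finalized block file found
    }
  }
  return null;
}
{code}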



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8728) Erasure coding: revisit and simplify BlockInfoStriped and INodeFile

2015-07-28 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang resolved HDFS-8728.
-
Resolution: Later

Since HDFS-8499 is reopened, closing this one. We should revisit it after 
finalizing the HDFS-8499 discussion.

> Erasure coding: revisit and simplify BlockInfoStriped and INodeFile
> ---
>
> Key: HDFS-8728
> URL: https://issues.apache.org/jira/browse/HDFS-8728
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8728-HDFS-7285.00.patch, 
> HDFS-8728-HDFS-7285.01.patch, HDFS-8728-HDFS-7285.02.patch, 
> HDFS-8728-HDFS-7285.03.patch, HDFS-8728.00.patch, HDFS-8728.01.patch, 
> HDFS-8728.02.patch, Merge-1-codec.patch, Merge-2-ecZones.patch, 
> Merge-3-blockInfo.patch, Merge-4-blockmanagement.patch, 
> Merge-5-blockPlacementPolicies.patch, Merge-6-locatedStripedBlock.patch, 
> Merge-7-replicationMonitor.patch, Merge-8-inodeFile.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8833) Erasure coding: store EC schema and cell size with INodeFile and eliminate EC zones

2015-07-28 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-8833:
---

 Summary: Erasure coding: store EC schema and cell size with 
INodeFile and eliminate EC zones
 Key: HDFS-8833
 URL: https://issues.apache.org/jira/browse/HDFS-8833
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: HDFS-7285
Reporter: Zhe Zhang
Assignee: Zhe Zhang


We have [discussed | 
https://issues.apache.org/jira/browse/HDFS-7285?focusedCommentId=14357754&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14357754]
 storing EC schema with files instead of EC zones and recently revisited the 
discussion under HDFS-8059.

As a recap, the _zone_ concept has severe limitations, including restrictions on 
renaming and nested configuration. Those limitations are justified in encryption 
for security reasons, but it doesn't make sense to carry them over to EC.

This JIRA aims to store the EC schema and cell size at the {{INodeFile}} level. 
For simplicity, we should first implement it as an xattr and consider memory 
optimizations (such as moving it into the file header) as a follow-on. We should 
also disallow changing the EC policy on a non-empty file or dir in the first phase.
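
To make the first phase concrete, a hypothetical sketch of tagging a file via 
the existing xattr API (the xattr name and encoding below are assumptions; the 
patch itself would set a system xattr inside the NameNode rather than from the 
client):
{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class EcPolicyXAttrSketch {
  // Assumed name, for illustration only.
  static final String EC_XATTR = "user.hdfs.ec.policy";

  static void tagEcPolicy(FileSystem fs, Path file, String schemaName,
      int cellSize) throws IOException {
    byte[] value = (schemaName + ":" + cellSize)
        .getBytes(StandardCharsets.UTF_8);
    // Stored with the INode, so unlike a zone it survives renames.
    fs.setXAttr(file, EC_XATTR, value);
  }
}
{code}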



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8832) Document hdfs crypto cli changes

2015-07-28 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-8832:


 Summary: Document hdfs crypto cli changes
 Key: HDFS-8832
 URL: https://issues.apache.org/jira/browse/HDFS-8832
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8831) Support "Soft Delete" for files under HDFS encryption zone

2015-07-28 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-8831:


 Summary: Support "Soft Delete" for files under HDFS encryption zone
 Key: HDFS-8831
 URL: https://issues.apache.org/jira/browse/HDFS-8831
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


Currently, "Soft Delete" is only supported if the whole encryption zone is 
deleted. If you delete files whinin the zone with trash feature enabled, you 
will get error similar to the following 

{code}
rm: Failed to move to trash: hdfs://HW11217.local:9000/z1_1/startnn.sh: 
/z1_1/startnn.sh can't be moved from an encryption zone.
{code}

With HDFS-8830, we can support "Soft Delete" by placing the .Trash folder for 
the file being deleted inside the same encryption zone. 
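
A minimal sketch of the idea (paths and layout assumed for illustration): the 
trash location mirrors the normal per-user trash but is rooted inside the zone, 
so the rename never crosses the zone boundary.
{code}
import org.apache.hadoop.fs.Path;

final class InZoneTrashSketch {
  // e.g. zoneRoot=/z1_1, user=bob, deletedFile=/z1_1/startnn.sh
  //  ->  /z1_1/.Trash/bob/Current/z1_1/startnn.sh
  static Path trashPathFor(Path zoneRoot, String user, Path deletedFile) {
    return new Path(zoneRoot,
        ".Trash/" + user + "/Current" + deletedFile.toUri().getPath());
  }
}
{code}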



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8830) Support add/remove directories to an existing encryption zone

2015-07-28 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-8830:


 Summary: Support add/remove directories to an existing encryption 
zone
 Key: HDFS-8830
 URL: https://issues.apache.org/jira/browse/HDFS-8830
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This is the first step toward better "Scratch space" and "Soft Delete" support. 
We remove the assumption that an HDFS directory and an encryption zone are 
mapped 1:1 and that the mapping can't be changed once created.

The encryption zone creation part is kept as-is from Hadoop 2.4. We generalize 
the mapping between an encryption zone and its directories from 1:1 to 1:many. 
This way, other directories such as scratch space can be added to or removed 
from an encryption zone as needed. Later on, files in these directories can be 
renamed within the same encryption zone efficiently. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Request for installing ISA-L library in build machines

2015-07-28 Thread Vinayakumar B
Thank you, Allen, for the pointer.
On Jul 28, 2015 11:30 PM, "Allen Wittenauer"  wrote:

>
> On Jul 28, 2015, at 3:15 AM, Vinayakumar B 
> wrote:
>
> > Have you tried package available at
> >
> https://01.org/intel%C2%AE-storage-acceleration-library-open-source-version
> > ?
> >
> > I think this will work for Ubuntu also.
>
>
> Don’t forget to add it to the Dockerfile, otherwise those that use it
> won’t be able to work with this functionality out of the box.


Re: Request for installing ISA-L library in build machines

2015-07-28 Thread Allen Wittenauer

On Jul 28, 2015, at 3:15 AM, Vinayakumar B  wrote:

> Have you tried package available at
> https://01.org/intel%C2%AE-storage-acceleration-library-open-source-version
> ?
> 
> I think this will work for Ubuntu also.


Don’t forget to add it to the Dockerfile, otherwise those that use it won’t be 
able to work with this functionality out of the box.

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #259

2015-07-28 Thread Apache Jenkins Server
See 

Changes:

[xyao] HDFS-8785. TestDistributedFileSystem is failing in trunk. Contributed by 
Xiaoyu Yao.

[vvasudev] YARN-3852. Add docker container support to container-executor. 
Contributed by Abin Shahab.

[vvasudev] YARN-3853. Add docker container runtime support to 
LinuxContainterExecutor. Contributed by Sidharta Seethana.

[jianhe] YARN-3846. RM Web UI queue filter is not working for sub queue. 
Contributed by Mohammad Shahid Khan

[aajisaka] HADOOP-12245. References to misspelled REMAINING_QUATA in 
FileSystemShell.md. Contributed by Gabor Liptak.

[Arun Suresh] HDFS-7858. Improve HA Namenode Failover detection on the client. 
(asuresh)

[xgong] YARN-3982. container-executor parsing of container-executor.cfg broken

--
[...truncated 7836 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestByteArrayManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.777 sec - in 
org.apache.hadoop.hdfs.util.TestByteArrayManager
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestXMLUtils
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.075 sec - in 
org.apache.hadoop.hdfs.util.TestXMLUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.185 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.278 sec - in 
org.apache.hadoop.hdfs.util.TestMD5FileUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.221 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.233 sec - in 
org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.08 sec - in 
org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.44 sec - in 
org.apache.hadoop.hdfs.TestLease
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.455 sec - in 
org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.4 sec - in 
org.apache.hadoop.hdfs.TestHFlush
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.607 sec - in 
org.apache.hadoop.hdfs.TestRemoteBlockReader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.136 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.062 sec - 
in org.apache.hadoop.hdfs.TestDistributedFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.523 sec - in 
org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running

Hadoop-Hdfs-trunk-Java8 - Build # 259 - Still Failing

2015-07-28 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/259/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 8029 lines...]
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [03:06 min]
[INFO] Apache Hadoop HDFS  FAILURE [  02:49 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.053 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:52 h
[INFO] Finished at: 2015-07-28T14:27:18+00:00
[INFO] Final Memory: 53M/673M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 4319350 bytes
Compression is 0.0%
Took 32 sec
Recording test results
Updating HDFS-7858
Updating YARN-3852
Updating YARN-3853
Updating HADOOP-12245
Updating YARN-3846
Updating HDFS-8785
Updating YARN-3982
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
3 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes.testBalancer

Error Message:
File /tmp.txt could only be replicated to 0 nodes instead of minReplication 
(=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this 
operation.
 at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1607)
 at 
org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:281)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2405)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:719)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:489)
 at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1667)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2170)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: File /tmp.txt could only be replicated 
to 0 nodes instead of minRep

[jira] [Created] (HDFS-8829) DataNode sets SO_RCVBUF explicitly is disabling tcp auto-tuning

2015-07-28 Thread He Tianyi (JIRA)
He Tianyi created HDFS-8829:
---

 Summary: DataNode sets SO_RCVBUF explicitly is disabling tcp 
auto-tuning
 Key: HDFS-8829
 URL: https://issues.apache.org/jira/browse/HDFS-8829
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.6.0, 2.3.0
Reporter: He Tianyi


{code}
private void initDataXceiver(Configuration conf) throws IOException {
  // find free port or use privileged port provided
  TcpPeerServer tcpPeerServer;
  if (secureResources != null) {
    tcpPeerServer = new TcpPeerServer(secureResources);
  } else {
    tcpPeerServer = new TcpPeerServer(dnConf.socketWriteTimeout,
        DataNode.getStreamingAddr(conf));
  }
  tcpPeerServer.setReceiveBufferSize(HdfsConstants.DEFAULT_DATA_SOCKET_SIZE);
{code}

The last line sets SO_RCVBUF explicitly, thus disabling TCP auto-tuning on some 
systems.

Shall we make this behavior configurable?
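
One possible shape (a sketch only; the config key name is an assumption): set 
the buffer only when a positive size is configured, and otherwise leave the OS 
default so auto-tuning stays active.
{code}
// Sketch only -- the key name is assumed, not a committed API.
int recvBufSize = conf.getInt(
    "dfs.datanode.transfer.socket.recv.buffer.size",
    HdfsConstants.DEFAULT_DATA_SOCKET_SIZE);
if (recvBufSize > 0) {
  tcpPeerServer.setReceiveBufferSize(recvBufSize);
}
// recvBufSize <= 0: skip the call, keeping TCP auto-tuning in effect.
{code}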



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Hdfs-trunk #2197

2015-07-28 Thread Apache Jenkins Server
See 

Changes:

[xyao] HDFS-8785. TestDistributedFileSystem is failing in trunk. Contributed by 
Xiaoyu Yao.

[vvasudev] YARN-3852. Add docker container support to container-executor. 
Contributed by Abin Shahab.

[vvasudev] YARN-3853. Add docker container runtime support to 
LinuxContainterExecutor. Contributed by Sidharta Seethana.

[jianhe] YARN-3846. RM Web UI queue filter is not working for sub queue. 
Contributed by Mohammad Shahid Khan

[aajisaka] HADOOP-12245. References to misspelled REMAINING_QUATA in 
FileSystemShell.md. Contributed by Gabor Liptak.

[Arun Suresh] HDFS-7858. Improve HA Namenode Failover detection on the client. 
(asuresh)

[xgong] YARN-3982. container-executor parsing of container-executor.cfg broken

--
[...truncated 6747 lines...]
Running org.apache.hadoop.hdfs.TestDFSRename
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.737 sec - in 
org.apache.hadoop.hdfs.TestDFSRename
Running org.apache.hadoop.hdfs.TestFileConcurrentReader
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.945 sec - in 
org.apache.hadoop.hdfs.TestFileConcurrentReader
Running org.apache.hadoop.hdfs.TestDecommission
Tests run: 14, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 121.568 sec - 
in org.apache.hadoop.hdfs.TestDecommission
Running org.apache.hadoop.hdfs.server.common.TestJspHelper
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.837 sec - in 
org.apache.hadoop.hdfs.server.common.TestJspHelper
Running org.apache.hadoop.hdfs.server.common.TestGetUriFromString
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.197 sec - in 
org.apache.hadoop.hdfs.server.common.TestGetUriFromString
Running org.apache.hadoop.hdfs.server.blockmanagement.TestSequentialBlockId
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.519 sec - in 
org.apache.hadoop.hdfs.server.blockmanagement.TestSequentialBlockId
Running 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.869 sec - in 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks
Running 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockInfoUnderConstruction
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.401 sec - in 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockInfoUnderConstruction
Running org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.635 sec - in 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS
Running org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.338 sec - in 
org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount
Running 
org.apache.hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.604 sec - in 
org.apache.hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup
Running org.apache.hadoop.hdfs.server.blockmanagement.TestReplicationPolicy
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.195 sec - in 
org.apache.hadoop.hdfs.server.blockmanagement.TestReplicationPolicy
Running org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.685 sec - in 
org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
Running org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.214 sec - in 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager
Running org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.247 sec - in 
org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock
Running org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeDescriptor
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.319 sec - in 
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeDescriptor
Running 
org.apache.hadoop.hdfs.server.blockmanagement.TestPendingDataNodeMessages
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.299 sec - in 
org.apache.hadoop.hdfs.server.blockmanagement.TestPendingDataNodeMessages
Running org.apache.hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.602 sec - in 
org.apache.hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork
Running org.apache.hadoop.hdfs.server.blockmanagement.TestCachedBlocksList
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.328 sec - in 
org.apache.hadoop.hdfs.server.blockmanagement.TestCachedBlocks

Hadoop-Hdfs-trunk - Build # 2197 - Still Failing

2015-07-28 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2197/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6940 lines...]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [03:11 min]
[INFO] Apache Hadoop HDFS  FAILURE [  01:11 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.068 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:14 h
[INFO] Finished at: 2015-07-28T12:49:31+00:00
[INFO] Final Memory: 78M/1158M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked 
VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs
 && /home/jenkins/tools/java/jdk1.7.0_55/jre/bin/java -Xmx4096m 
-XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter6802411360727227202.jar
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire8587369955657093524tmp
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_406173368118099378688tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2181
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 3842285 bytes
Compression is 0.0%
Took 13 sec
Recording test results
Updating HDFS-7858
Updating YARN-3852
Updating YARN-3853
Updating HADOOP-12245
Updating YARN-3846
Updating HDFS-8785
Updating YARN-3982
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
1 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.tools.TestDFSAdmin.testGetReconfigurationStatus

Error Message:

Expected: is <4>
 but: was <8>

Stack Trace:
java.lang.AssertionError: 
Expected: is <4>
 but: was <8>
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.junit.Assert.assertThat(Assert.java:865)
at org.junit.Assert.assertThat(Assert.java:832)
at 
org.apache.hadoop.hdfs.tools.TestDFSAdmin.testGetReconfigurationStatus(TestDFSAdmin.java:145)
at 
org.apache.hadoop.hdfs.tools.TestDFSAdmin.testGetReconfigurationStatus(TestDFSAdmin.java:183)




Re: Request for installing ISA-L library in build machines

2015-07-28 Thread Vinayakumar B
Have you tried package available at
https://01.org/intel%C2%AE-storage-acceleration-library-open-source-version
 ?

I think this will work for Ubuntu also.

Regards,
Vinay

On Tue, Jul 28, 2015 at 2:44 PM, Andrew Bayer 
wrote:

> Can you provide us with packages for Ubuntu 14.04?
>
> A.
>
> On Tue, Jul 28, 2015 at 10:55 AM, Vinayakumar B 
> wrote:
>
> > Hi,
> >
> >   Request to install *Intel® Intelligent Storage
> > Acceleration Library (Intel® ISA-L)* on all Hadoop build machines. This is
> > required for testing the recent work on Erasure Coding in Hadoop HDFS.
> >
> >   ISA-L available @
> >
> https://01.org/intel%C2%AE-storage-acceleration-library-open-source-version
> >
> > Thanks and Regards,
> > Vinay
> >
>


Re: Request for installing ISA-L library in build machines

2015-07-28 Thread Andrew Bayer
Can you provide us with packages for Ubuntu 14.04?

A.

On Tue, Jul 28, 2015 at 10:55 AM, Vinayakumar B 
wrote:

> Hi,
>
>   Request to install *Intel® Intelligent Storage
> Acceleration Library (Intel® ISA-L)* on all Hadoop build machines. This is
> required for testing the recent work on Erasure Coding in Hadoop HDFS.
>
>   ISA-L available @
> https://01.org/intel%C2%AE-storage-acceleration-library-open-source-version
>
> Thanks and Regards,
> Vinay
>


Request for installing ISA-L library in build machines

2015-07-28 Thread Vinayakumar B
Hi,

  Request to install *Intel® Intelligent Storage
Acceleration Library (Intel® ISA-L)* on all Hadoop build machines. This is
required for testing the recent work on Erasure Coding in Hadoop HDFS.

  ISA-L available @
https://01.org/intel%C2%AE-storage-acceleration-library-open-source-version

Thanks and Regards,
Vinay


Re: Questions on Namespace/namenode

2015-07-28 Thread Shani Ranasinghe
Hi Dmitry,

Thank you for your feedback. What I am trying to achieve is multi-tenancy
(storage-level isolation and resource sharing between tenants). So what I
have in mind is to have the choice of creating tenants in the same namenode,
with the namenode keeping separate namespaces that manage their block
pools individually. I am just trying to avoid the hard rule that two
namespaces require two namenodes, and instead serve two namespaces from
one namenode.

WDYT?

On Wed, Jul 22, 2015 at 1:38 PM, Dmitry Salychev 
wrote:

> Hi, Shani.
>
> NameNode is an HDFS bottleneck, so running multiple namespaces within only
> one NN is risky. What do you want to achieve?
>
> It might help you a bit - HDFS Federation <
> https://hadoop.apache.org/docs/r2.4.1/hadoop-project-dist/hadoop-hdfs/Federation.html
> >.
>
>
> On 07/21/2015 06:01 AM, Shani Ranasinghe wrote:
>
>> Hi,
>>
>> I am Shani. I am currently doing research on how to create multiple
>> namespaces within the same namenode.
>>
>> I was looking at the code, and found some terms that I would like to
>> have a better understanding of.
>>
>>
>> 1) What is a Namesystem?
>> 2)  If I could have guidance on where to look for namespace creation, it
>> would greatly help.
>>
>> Any help is appreciated.
>>
>> Regards,
>> Shani.
>>
>>
>


[jira] [Created] (HDFS-8828) Utilize Snapshot diff report to build copy list in distcp

2015-07-28 Thread Yufei Gu (JIRA)
Yufei Gu created HDFS-8828:
--

 Summary: Utilize Snapshot diff report to build copy list in distcp
 Key: HDFS-8828
 URL: https://issues.apache.org/jira/browse/HDFS-8828
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yufei Gu
Assignee: Yufei Gu


Some users reported a huge time cost to build the file copy list in distcp (30 
hours with 1.6M files). We can leverage the snapshot diff report to build a copy 
list containing only the files/dirs that changed between two snapshots (or 
between a snapshot and a normal dir). It speeds up the process in two ways: 
1. less copy-list building time; 2. fewer file-copy MR jobs.

The HDFS snapshot diff report provides information about file/directory 
creation, deletion, rename and modification between two snapshots or between a 
snapshot and a normal directory. HDFS-7535 synchronizes deletion and rename, 
then falls back to the default distcp. So it still relies on the default distcp 
to build the copy list, which traverses all files under the source dir. This 
patch will build the copy list based on the snapshot diff report. 
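
For reference, a minimal sketch against the existing 
{{DistributedFileSystem#getSnapshotDiffReport}} API (the class and filtering 
below are illustrative, not the patch):
{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSUtil;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport.DiffReportEntry;
import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport.DiffType;

final class SnapshotDiffCopyListSketch {
  // Collect only created/modified paths; deletions and renames are
  // synchronized separately (HDFS-7535) before the copy runs.
  static List<Path> buildCopyList(DistributedFileSystem dfs, Path sourceDir,
      String fromSnapshot, String toSnapshot) throws IOException {
    SnapshotDiffReport report =
        dfs.getSnapshotDiffReport(sourceDir, fromSnapshot, toSnapshot);
    List<Path> toCopy = new ArrayList<>();
    for (DiffReportEntry entry : report.getDiffList()) {
      if (entry.getType() == DiffType.CREATE
          || entry.getType() == DiffType.MODIFY) {
        toCopy.add(new Path(sourceDir,
            DFSUtil.bytes2String(entry.getSourcePath())));
      }
    }
    return toCopy;
  }
}
{code}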



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)