[ 
https://issues.apache.org/jira/browse/OOZIE-2791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15859789#comment-15859789
 ] 

Attila Sasvari commented on OOZIE-2791:
---------------------------------------

[~abhishekbafna] thanks for the additional info. We ran into this issue on a 
4-node cluster with Hadoop 2.6 (multiple components were talking to HDFS).

When I tried to reproduce the problem with {{-concurrency 150}} on my 
single-node pseudo-distributed Hadoop, I noticed that the sharelib was only 
partially installed (the failed copy tasks logged exceptions). At the end of 
the execution, several 0-byte files had been created in HDFS.

Now I have a working solution that I tested on a Mac with pseudo-distributed 
Hadoop 2.6.0. Briefly, I record each failed copy task in a concurrent hash 
set, then retry uploading the missed files on a single thread (with 
{{copyFromLocalFile}}). Before re-uploading, we wait 1000 ms; if a file fails 
again, we double the delay and decrement the retry count (currently hardcoded 
to 5 attempts).
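
A minimal sketch of that retry logic, assuming Hadoop's {{FileSystem}} API; 
the class and method names ({{FailedCopyRetryHelper}}, {{recordFailure}}, 
{{retryFailedCopies}}) are illustrative and not taken from the actual patch:

{code}
// Illustrative sketch only -- names are hypothetical, not from the
// OOZIE-2791 patch itself.
import java.io.IOException;
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FailedCopyRetryHelper {

    // Copy tasks running on the thread pool add the source path of
    // every failed upload to this concurrent hash set.
    private final Set<Path> failedCopies =
        Collections.newSetFromMap(new ConcurrentHashMap<Path, Boolean>());

    public void recordFailure(Path localFile) {
        failedCopies.add(localFile);
    }

    // Single-threaded retry pass: wait 1000 ms, re-upload the missed
    // files with copyFromLocalFile, double the delay after each failed
    // round, and give up after 5 attempts (hardcoded, as described above).
    public void retryFailedCopies(FileSystem fs, Path destDir)
            throws InterruptedException {
        long delayMs = 1000L;
        int retriesLeft = 5;
        while (!failedCopies.isEmpty() && retriesLeft > 0) {
            Thread.sleep(delayMs);
            for (Path src : failedCopies.toArray(new Path[0])) {
                try {
                    fs.copyFromLocalFile(src, new Path(destDir, src.getName()));
                    failedCopies.remove(src);
                } catch (IOException e) {
                    // still failing; keep it in the set for the next round
                }
            }
            delayMs *= 2;
            retriesLeft--;
        }
    }
}
{code}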

> ShareLib installation may fail on busy Hadoop clusters
> ------------------------------------------------------
>
>                 Key: OOZIE-2791
>                 URL: https://issues.apache.org/jira/browse/OOZIE-2791
>             Project: Oozie
>          Issue Type: Bug
>            Reporter: Attila Sasvari
>            Assignee: Attila Sasvari
>         Attachments: OOZIE-2791-01.patch
>
>
> On a busy Hadoop cluster it can happen that users cannot properly install  
> the Oozie ShareLib.
> Example on a Hadoop 2.4.0 pseudo cluster: sharelib installation with the  
> concurrency set high (to simulate a busy cluster):
> {code}
> oozie-setup.sh sharelib create -fs hdfs://localhost:9000 -locallib 
> oozie-sharelib-*.tar.gz -concurrency 150
> {code}
> You can see a lot of errors (failed copy tasks) in the output:
> {code}
> Running 464 copy tasks on 150 threads
> Error: Copy task failed with exception
> Stack trace for the error was (for debug purposes):
> --------------------------------------
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
> /user/asasvari/share/lib/lib_20170207105926/distcp/hadoop-distcp-2.4.0.jar 
> could only be replicated to 0 nodes instead of minReplication (=1).  There 
> are 1 datanode(s) running and no node(s) are excluded in this operation.
>       at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1430)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2684)
>       at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:584)
>       at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:440)
>       at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>       at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:422)
>       at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
>       at org.apache.hadoop.ipc.Client.call(Client.java:1410)
>       at org.apache.hadoop.ipc.Client.call(Client.java:1363)
>       at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>       at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>       at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:498)
>       at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
>       at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
>       at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
>       at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:361)
>       at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1439)
>       at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1261)
>       at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:525)
> --------------------------------------
> ...
> {code}
> You can see the file is created but its size is 0.
> {code}
> -rw-r--r--   3 asasvari supergroup          0 2017-02-07 10:59 
> share/lib/lib_20170207105926/distcp/hadoop-distcp-2.4.0.jar
> {code}
> This behaviour is clearly wrong. 
> In case of such an exception, we should retry the copy or roll back the 
> changes. We should also consider throttling HDFS requests.


