[ https://issues.apache.org/jira/browse/HDFS-13936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16625759#comment-16625759 ]

Steve Loughran commented on HDFS-13936:
---------------------------------------

Stack trace:
{code:java}
2018-09-24 13:36:07,259 [Thread-220] ERROR 
contract.AbstractContractMultipartUploaderTest 
(TestHDFSContractMultipartUploader.java:testUploadEmptyBlock(77)) - Empty 
uploads are not supported
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.HadoopIllegalArgumentException):
 concat: source file /testtestUploadEmptyBlock_multipart/1.part is invalid or 
empty or underConstruction
        at 
org.apache.hadoop.hdfs.server.namenode.FSDirConcatOp.verifySrcFiles(FSDirConcatOp.java:159)
        at 
org.apache.hadoop.hdfs.server.namenode.FSDirConcatOp.concat(FSDirConcatOp.java:67)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.concat(FSNamesystem.java:2062)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.concat(NameNodeRpcServer.java:1031)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.concat(ClientNamenodeProtocolServerSideTranslatorPB.java:640)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)

        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1511)
        at org.apache.hadoop.ipc.Client.call(Client.java:1457)
        at org.apache.hadoop.ipc.Client.call(Client.java:1367)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
        at com.sun.proxy.$Proxy24.concat(Unknown Source)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.concat(ClientNamenodeProtocolTranslatorPB.java:625)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
        at com.sun.proxy.$Proxy28.concat(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.concat(DFSClient.java:1532)
        at 
org.apache.hadoop.hdfs.DistributedFileSystem.concat(DistributedFileSystem.java:835)
        at 
org.apache.hadoop.fs.FileSystemMultipartUploader.complete(FileSystemMultipartUploader.java:147)
        at 
org.apache.hadoop.fs.contract.AbstractContractMultipartUploaderTest.complete(AbstractContractMultipartUploaderTest.java:345)
        at 
org.apache.hadoop.fs.contract.AbstractContractMultipartUploaderTest.completeUpload(AbstractContractMultipartUploaderTest.java:301)
        at 
org.apache.hadoop.fs.contract.AbstractContractMultipartUploaderTest.testUploadEmptyBlock(AbstractContractMultipartUploaderTest.java:391)
        at 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader.lambda$testUploadEmptyBlock$0(TestHDFSContractMultipartUploader.java:76)
        at 
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:401)
        at 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader.testUploadEmptyBlock(TestHDFSContractMultipartUploader.java:75)
 {code}
Of course. One solution is for HDFS to recognise that a concat of empty
blocks is a special case.
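A minimal sketch of that special case, in plain Java with local files standing in for the Hadoop FileSystem API (this is a hypothetical illustration, not the actual FileSystemMultipartUploader code): when the combined size of the uploaded parts is zero, skip concat entirely and just create a 0-byte destination file, since HDFS concat() rejects empty source blocks.

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

public class ZeroByteMpuSketch {

    // Hypothetical complete() step of a multipart upload.
    // dest and parts are local paths here, standing in for HDFS paths.
    static void complete(Path dest, List<Path> parts) throws IOException {
        long total = 0;
        for (Path p : parts) {
            total += Files.size(p);
        }
        if (total == 0) {
            // Special case: HDFS concat() fails on empty or
            // under-construction sources, so create a 0-byte
            // destination file directly instead of concatenating.
            Files.deleteIfExists(dest);
            Files.createFile(dest);
            return;
        }
        // Normal path: merge the parts into dest
        // (stand-in for DistributedFileSystem.concat).
        Files.deleteIfExists(dest);
        Files.createFile(dest);
        for (Path p : parts) {
            Files.write(dest, Files.readAllBytes(p),
                    StandardOpenOption.APPEND);
        }
    }
}
{code}

The point of the sketch is only the branch: the zero-total check happens before any concat-like operation is attempted, so the NameNode never sees an empty source file.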

> multipart upload to HDFS to support 0 byte upload
> -------------------------------------------------
>
>                 Key: HDFS-13936
>                 URL: https://issues.apache.org/jira/browse/HDFS-13936
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: fs, hdfs
>    Affects Versions: 3.2.0
>            Reporter: Steve Loughran
>            Assignee: Ewan Higgs
>            Priority: Major
>
> MPUs to HDFS fail because you can't concat an empty block.
> Whatever uploads to HDFS needs to recognise the specific "0-byte file" case
> and, rather than trying to concat anything, just create a 0-byte file there.
> Without this, you can't use MPU as a replacement for distcp or alternative
> commit protocols.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)