[ 
https://issues.apache.org/jira/browse/HDFS-14062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Khare updated HDFS-14062:
-------------------------------
    Description: 
*PROBLEM STATEMENT:*

If we try to re-upload a file with the same name to HDFS using the WebHDFS API, the request is rejected with the error:
{code:java}
"exception":"FileAlreadyExistsException","javaClassName":"org.apache.hadoop.fs.FileAlreadyExistsException"{code}
But from the HDFS command line we can force-upload (overwrite) a file with the same name:
{code}
hdfs dfs -put -f /tmp/file1.txt /user/ambari-test
{code}

 

Can we enable this feature via WebHDFS APIs also?
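
(For reference, the WebHDFS REST API documentation describes an optional {{overwrite}} parameter for {{op=CREATE}}, defaulting to false. If the deployed Hadoop version honours it, the request URL could be built as in the sketch below. The host, port, path and user are placeholders, not values from this cluster, and the behaviour should be verified against the running version.)

```python
from urllib.parse import urlencode

def webhdfs_create_url(host, port, hdfs_path, user, overwrite=True):
    # 'overwrite=true' asks the NameNode to replace an existing file
    # instead of failing with FileAlreadyExistsException, per the
    # WebHDFS REST API documentation of op=CREATE.
    query = urlencode({
        "op": "CREATE",
        "user.name": user,
        "overwrite": str(overwrite).lower(),
    })
    return "http://%s:%s/webhdfs/v1%s?%s" % (host, port, hdfs_path, query)

# Placeholder host/port; the resulting URL would be passed to
# curl -iL -X PUT -T <localfile> "<url>"
print(webhdfs_create_url("namenode.example.com", 9870,
                         "/user/admin/Test/file1.txt", "admin"))
```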

 

*STEPS TO REPRODUCE:*

1. Create a directory in HDFS using the WebHDFS API:
{code}
curl -iL -X PUT "http://<NAMENODE_IP>:<PORT>/webhdfs/v1/user/admin/Test?op=MKDIRS&user.name=admin"
{code}

2. Upload a file called /tmp/file1.txt:
{code}
curl -iL -X PUT -T "/tmp/file1.txt" "http://<NAMENODE_IP>:<PORT>/webhdfs/v1/user/admin/Test/file1.txt?op=CREATE&user.name=admin"
{code}

3. Edit the local file, then try uploading it again:
{code}
curl -iL -X PUT -T "/tmp/file1.txt" "http://<NAMENODE_IP>:<PORT>/webhdfs/v1/user/admin/Test/file1.txt?op=CREATE&user.name=admin"
{code}

4. We get the following error:

{code}
HTTP/1.1 100 Continue

HTTP/1.1 403 Forbidden
Content-Type: application/json; charset=utf-8
Content-Length: 1465
Connection: close

{"RemoteException":{"exception":"FileAlreadyExistsException","javaClassName":"org.apache.hadoop.fs.FileAlreadyExistsException","message":"/user/admin/Test/file1.txt for client 172.26.123.95 already exists\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2815)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2702)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2586)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:736)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:409)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)\n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:422)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)\n"}}
{code}
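
In the meantime, a client-side workaround is to delete the existing file before re-creating it (WebHDFS supports {{op=DELETE}}). The sketch below only assembles the two request URLs; the NameNode address and user are placeholders, and the actual curl calls are commented out since they need a live cluster:

```shell
# Workaround sketch: delete-then-create. Placeholder NameNode address.
NAMENODE="namenode.example.com:9870"
HDFS_FILE="/user/admin/Test/file1.txt"
DELETE_URL="http://${NAMENODE}/webhdfs/v1${HDFS_FILE}?op=DELETE&user.name=admin"
CREATE_URL="http://${NAMENODE}/webhdfs/v1${HDFS_FILE}?op=CREATE&user.name=admin"
# Against a live cluster one would run:
#   curl -iL -X DELETE "$DELETE_URL"
#   curl -iL -X PUT -T /tmp/file1.txt "$CREATE_URL"
echo "$DELETE_URL"
echo "$CREATE_URL"
```

Note this is not atomic: another client could re-create the file between the two calls.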

 

> WebHDFS: Uploading a file again with the same naming convention
> ---------------------------------------------------------------
>
>                 Key: HDFS-14062
>                 URL: https://issues.apache.org/jira/browse/HDFS-14062
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: webhdfs
>    Affects Versions: 3.1.1
>            Reporter: Arpit Khare
>            Priority: Major
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
