[jira] [Created] (HDFS-14063) Support noredirect param for CREATE in HttpFS

2018-11-09 Thread Íñigo Goiri (JIRA)
Íñigo Goiri created HDFS-14063:
--

 Summary: Support noredirect param for CREATE in HttpFS
 Key: HDFS-14063
 URL: https://issues.apache.org/jira/browse/HDFS-14063
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Íñigo Goiri
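For context, a sketch of what the parameter does (the host, port, and path below are hypothetical placeholders): with noredirect=true, the first step of a two-step WebHDFS/HttpFS CREATE is expected to return the upload location in a JSON body instead of a 307 redirect, which helps clients that cannot follow redirects.

```python
# Sketch: build the first-step CREATE URL for WebHDFS/HttpFS.
# All host/port/path values here are illustrative placeholders.
from urllib.parse import urlencode

def create_url(host, port, path, user, noredirect=True):
    """Return the CREATE URL; with noredirect=true the server is expected to
    answer 200 plus a JSON body {"Location": ...} instead of a 307 redirect."""
    params = {"op": "CREATE", "user.name": user}
    if noredirect:
        params["noredirect"] = "true"
    return "http://{}:{}/webhdfs/v1{}?{}".format(host, port, path, urlencode(params))

print(create_url("httpfs.example.com", 14000, "/user/admin/file1.txt", "admin"))
```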






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-826) Update Ratis to 0.3.0-6f3419a-SNAPSHOT

2018-11-09 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDDS-826:


 Summary: Update Ratis to 0.3.0-6f3419a-SNAPSHOT
 Key: HDDS-826
 URL: https://issues.apache.org/jira/browse/HDDS-826
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


RATIS-404 fixed a deadlock bug.  We should update Ratis here.
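A sketch of the change, assuming the Ratis version is managed through a `ratis.version` property in the project pom (verify the property name against the actual pom.xml):

```xml
<properties>
  <!-- Pick up the RATIS-404 deadlock fix. -->
  <ratis.version>0.3.0-6f3419a-SNAPSHOT</ratis.version>
</properties>
```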






[jira] [Created] (HDFS-14062) WebHDFS: Uploading a file again with the same naming convention

2018-11-09 Thread Arpit Khare (JIRA)
Arpit Khare created HDFS-14062:
--

 Summary: WebHDFS: Uploading a file again with the same naming 
convention
 Key: HDFS-14062
 URL: https://issues.apache.org/jira/browse/HDFS-14062
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: webhdfs
Affects Versions: 3.1.1
Reporter: Arpit Khare


*PROBLEM STATEMENT:*

If we want to re-upload a file with the same name to HDFS using the WebHDFS API, 
the API does not allow it and fails with:

{code}"exception":"FileAlreadyExistsException","javaClassName":"org.apache.hadoop.fs.FileAlreadyExistsException"{code}

But from the HDFS command line we can force-upload (overwrite) a file with the 
same name:

{code}hdfs dfs -put -f /tmp/file1.txt /user/ambari-test{code}

 

Can we enable this feature via WebHDFS APIs also?
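For reference, the WebHDFS CREATE operation documents an optional overwrite parameter (default false); if the version in use honors it, re-uploading could look like the sketch below. NN_HOST and NN_PORT are hypothetical placeholders for the host:port elided in the report.

```shell
# Sketch only: placeholders stand in for the elided host:port.
# overwrite=true asks CREATE to replace an existing file instead of failing.
NN_HOST="namenode.example.com"
NN_PORT="50070"
URL="http://${NN_HOST}:${NN_PORT}/webhdfs/v1/user/admin/Test/file1.txt?op=CREATE&user.name=admin&overwrite=true"
echo "$URL"
# curl -iL -X PUT -T /tmp/file1.txt "$URL"   # actual upload, not run here
```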

 

*STEPS TO REPRODUCE:*

1. Create a directory in HDFS using WebHDFS API:

# curl -iL -X PUT 
"http://:/webhdfs/v1/user/admin/Test?op=MKDIRS&user.name=admin"

2. Upload a file called /tmp/file1.txt:
# curl -iL -X PUT -T "/tmp/file1.txt" 
"http://:/webhdfs/v1/user/admin/Test/file1.txt?op=CREATE&user.name=admin"

3. Now edit this file and then try uploading it back:
# curl -iL -X PUT -T "/tmp/file1.txt" 
"http://:/webhdfs/v1/user/admin/Test/file1.txt?op=CREATE&user.name=admin"

4. We get the following error:

{code}
HTTP/1.1 100 Continue

HTTP/1.1 403 Forbidden
Content-Type: application/json; charset=utf-8
Content-Length: 1465
Connection: close

{"RemoteException":{"exception":"FileAlreadyExistsException","javaClassName":"org.apache.hadoop.fs.FileAlreadyExistsException","message":"/user/admin/Test/file1.txt for client 172.26.123.95 already exists\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2815)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2702)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2586)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:736)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:409)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)\n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:422)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)\n"}}
{code}

 






[jira] [Created] (HDFS-14061) Check if the cluster topology supports the EC policy before setting, enabling or adding it

2018-11-09 Thread Kitti Nanasi (JIRA)
Kitti Nanasi created HDFS-14061:
---

 Summary: Check if the cluster topology supports the EC policy 
before setting, enabling or adding it
 Key: HDFS-14061
 URL: https://issues.apache.org/jira/browse/HDFS-14061
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: erasure-coding, hdfs
Affects Versions: 3.1.1
Reporter: Kitti Nanasi
Assignee: Kitti Nanasi


HDFS-12946 introduced a command for verifying that there are enough racks and 
datanodes for the enabled erasure coding policies.
This verification could also be executed before setting, enabling, or adding an 
erasure coding policy: a warning could be logged if the verification fails, or 
the operation could even be rejected in that case.
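A minimal sketch of what such a pre-check could look like (the function name and the simplified rack rule below are my own illustration, not the HDFS-12946 implementation, which applies a more involved placement calculation):

```python
# Simplified sketch: an RS(d, p) policy writes block groups of d + p blocks,
# so it needs at least d + p datanodes, and (in this simplified rule) that
# many racks for full rack fault tolerance.
def check_topology(num_datanodes, num_racks, data_units, parity_units):
    needed = data_units + parity_units
    if num_datanodes < needed:
        return f"WARNING: policy needs {needed} datanodes, cluster has {num_datanodes}"
    if num_racks < needed:
        return f"WARNING: policy needs {needed} racks, cluster has {num_racks}"
    return "OK"

# RS-6-3 on a 9-node, 3-rack cluster: enough nodes, too few racks.
print(check_topology(9, 3, 6, 3))
```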






[jira] [Created] (HDFS-14060) HDFS fetchdt command to return error codes on success/failure

2018-11-09 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-14060:
-

 Summary: HDFS fetchdt command to return error codes on 
success/failure
 Key: HDFS-14060
 URL: https://issues.apache.org/jira/browse/HDFS-14060
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Affects Versions: 3.3.0
Reporter: Steve Loughran


The {{hdfs fetchdt}} command always returns 0, even when there has been an error 
(no token issued, no file to load, a usage error, etc.). This makes it not very 
useful as a command-line tool for testing or in scripts.

Proposed: exit non-zero on errors; reuse LauncherExitCodes for the values.
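A sketch of how scripts could then branch on the exit status (the mock function and the specific code 44 are illustrative stand-ins, not the real command or its final exit-code mapping):

```shell
# Illustrative mock of the proposed behaviour: fetchdt_mock stands in for
# `hdfs fetchdt`, returning a LauncherExitCodes-style non-zero value when
# no token file argument is supplied.
fetchdt_mock() {
  [ -n "$1" ] || return 44   # e.g. a "not found"-style code for "no token issued"
  return 0
}

if fetchdt_mock ""; then
  status="ok"
else
  status="failed rc=$?"
fi
echo "$status"
```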






Re: HDFS/HDDS unit tests failing in Jenkins precommit builds

2018-11-09 Thread Akira Ajisaka
+ common-dev, mapreduce-dev, yarn-dev

This issue is probably caused by
https://issues.apache.org/jira/browse/SUREFIRE-1588 and is fixed in Maven
Surefire plugin 3.0.0-M1. I filed a JIRA and uploaded a patch to upgrade
the plugin version. Please check
https://issues.apache.org/jira/browse/HADOOP-15916
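For anyone wanting to try it locally before the patch lands, the change is essentially a version bump in the project pom (the property name below follows a common Maven convention; verify it against Hadoop's actual pom.xml):

```xml
<properties>
  <!-- Pin the Surefire release that contains the SUREFIRE-1588 fix. -->
  <maven-surefire-plugin.version>3.0.0-M1</maven-surefire-plugin.version>
</properties>
```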

Regards,
Akira
Fri, Nov 9, 2018, 6:10 Lin,Yiqun(vip.com):
>
> Hi developers,
>
> Recently, I found the following error frequently appearing in HDFS/HDDS Jenkins 
> builds.
> The link: 
> https://builds.apache.org/job/PreCommit-HDDS-Build/1632/artifact/out/patch-unit-hadoop-ozone_ozone-manager.txt
>
> [ERROR] ExecutionException The forked VM terminated without properly saying 
> goodbye. VM crash or System.exit called?
> [ERROR] Command was /bin/sh -c cd /testptch/hadoop/hadoop-ozone/ozone-manager 
> && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xmx2048m 
> -XX:+HeapDumpOnOutOfMemoryError -DminiClusterDedicatedDirs=true -jar 
> /testptch/hadoop/hadoop-ozone/ozone-manager/target/surefire/surefirebooter6481080145571841952.jar
>  /testptch/hadoop/hadoop-ozone/ozone-manager/target/surefire 
> 2018-11-07T22-53-35_334-jvmRun1 surefire2897373403289443808tmp 
> surefire_42678601136131093095tmp
>
> This error means the unit tests are not actually run in the Jenkins precommit 
> builds. Does anyone know the root cause of this?
>
> Thanks
> Yiqun

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org