[jira] [Comment Edited] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-07-27 Thread Fei Hui (JIRA)


[ https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16560006#comment-16560006 ]

Fei Hui edited comment on HADOOP-15633 at 7/27/18 5:16 PM:
---

[~jzhuge] Thanks, got it.
Uploaded the new patch v002; the changes are below. We need to build a new baseTrashPath & 
trashPath with a timestamp appended.
{code:java}
baseTrashPath = new Path(baseTrashPath.toString().replace(
    existsFilePath.toString(), existsFilePath.toString() + Time.now()));
trashPath = new Path(baseTrashPath, trashPath.getName());
fs.mkdirs(baseTrashPath, PERMISSION);
{code}
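For illustration only (the timestamp value below is just an example, borrowed from the listing later in this thread), the string replace turns the conflicting trash path into a timestamped one:
{code:java}
// Illustrative values only -- existsFilePath is the conflicting *file* already in trash.
String existsFilePath = "/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb";
String baseTrashPath = "/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb";
long now = 1532625817927L;   // stands in for Time.now()
String newBase = baseTrashPath.replace(existsFilePath, existsFilePath + now);
// newBase == "/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb1532625817927"
// the trash target for "ccc" then becomes newBase + "/ccc"
{code}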



was (Author: ferhui):
[~jzhuge] Thanks, got it.
Uploaded the new patch; the changes are below. We need to build a new baseTrashPath & 
trashPath with a timestamp appended.
{code:java}
baseTrashPath = new Path(baseTrashPath.toString().replace(
    existsFilePath.toString(), existsFilePath.toString() + Time.now()));
trashPath = new Path(baseTrashPath, trashPath.getName());
fs.mkdirs(baseTrashPath, PERMISSION);
{code}


> fs.TrashPolicyDefault: Can't create trash directory
> ---
>
> Key: HADOOP-15633
> URL: https://issues.apache.org/jira/browse/HADOOP-15633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.1.0, 3.0.3, 2.7.7
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-15633.001.patch, HADOOP-15633.002.patch
>
>
> Reproduce it as follows:
> {code:java}
> hadoop fs -mkdir /user/hadoop/aaa
> hadoop fs -touchz /user/hadoop/aaa/bbb
> hadoop fs -rm /user/hadoop/aaa/bbb
> hadoop fs -mkdir /user/hadoop/aaa/bbb
> hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
> hadoop fs -rm /user/hadoop/aaa/bbb/ccc
> {code}
> Then we get these errors:
> {code:java}
> 18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
> hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
> /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
> at 
> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
> at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
> at 
> org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:136)
> at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:114)
> at org.apache.hadoop.fs.Trash.moveT

[jira] [Comment Edited] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-07-26 Thread Fei Hui (JIRA)


[ https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16559130#comment-16559130 ]

Fei Hui edited comment on HADOOP-15633 at 7/27/18 1:46 AM:
---

[~jzhuge] Thanks for your reply
{quote}
If you run the following command after your test steps:
hadoop fs -rm -r /user/hadoop/aaa/bbb
You will see the name conflict resolution, "bbb" -> "bbb1532625817927":
{quote}
Yes. The base trash dir */user/hadoop/.Trash/Current/user/hadoop/aaa* exists, so 
*fs.mkdirs(baseTrashPath, PERMISSION)* will succeed. The original code below 
resolves the name conflict:

{code:java}
  try {
// if the target path in Trash already exists, then append with 
// a current time in millisecs.
String orig = trashPath.toString();

while(fs.exists(trashPath)) {
  trashPath = new Path(orig + Time.now());
}

if (fs.rename(path, trashPath))   // move to current trash
  return true;
  } catch (IOException e) {
cause = e;
  }
{code}

{quote}
/user/hadoop/.Trash
drwx------ - hadoop hadoop 0 2018-07-26 17:21 /user/hadoop/.Trash/Current
drwx------ - hadoop hadoop 0 2018-07-26 17:21 /user/hadoop/.Trash/Current/user
drwx------ - hadoop hadoop 0 2018-07-26 17:21 /user/hadoop/.Trash/Current/user/hadoop
drwx------ - hadoop hadoop 0 2018-07-26 17:23 /user/hadoop/.Trash/Current/user/hadoop/aaa
-rw-r--r-- 3 hadoop hadoop 0 2018-07-26 17:21 /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
drwxr-xr-x - hadoop hadoop 0 2018-07-26 17:23 /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb1532625817927
-rw-r--r-- 3 hadoop hadoop 0 2018-07-26 17:23 /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb1532625817927/ccc
Looks like name conflict resolution is not done for a parent dir on the path.
{quote}
The command *hadoop fs -rm -r /user/hadoop/aaa/bbb* ran successfully, which means 
the name conflict does not exist, right?


was (Author: ferhui):
[~jzhuge] Thanks for your reply
{quote}
If you run the following command after your test steps:
hadoop fs -rm -r /user/hadoop/aaa/bbb
You will see the name conflict resolution, "bbb" -> "bbb1532625817927":
{quote}
Yes. The base trash dir */user/hadoop/.Trash/Current/user/hadoop/aaa* exists, so 
*fs.mkdirs(baseTrashPath, PERMISSION)* will succeed. The original code below 
resolves the name conflict:

{code:java}
  try {
// if the target path in Trash already exists, then append with 
// a current time in millisecs.
String orig = trashPath.toString();

while(fs.exists(trashPath)) {
  trashPath = new Path(orig + Time.now());
}

if (fs.rename(path, trashPath))   // move to current trash
  return true;
  } catch (IOException e) {
cause = e;
  }
{code}

{quote}
/user/hadoop/.Trash
drwx------ - hadoop hadoop 0 2018-07-26 17:21 /user/hadoop/.Trash/Current
drwx------ - hadoop hadoop 0 2018-07-26 17:21 /user/hadoop/.Trash/Current/user
drwx------ - hadoop hadoop 0 2018-07-26 17:21 /user/hadoop/.Trash/Current/user/hadoop
drwx------ - hadoop hadoop 0 2018-07-26 17:23 /user/hadoop/.Trash/Current/user/hadoop/aaa
-rw-r--r-- 3 hadoop hadoop 0 2018-07-26 17:21 /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
drwxr-xr-x - hadoop hadoop 0 2018-07-26 17:23 /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb1532625817927
-rw-r--r-- 3 hadoop hadoop 0 2018-07-26 17:23 /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb1532625817927/ccc
Looks like name conflict resolution is not done for a parent dir on the path.
{quote}
The command *hadoop fs -rm -r /user/hadoop/aaa/bbb* ran successfully, which means 
the name conflict does not exist, right?


[jira] [Comment Edited] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-07-26 Thread John Zhuge (JIRA)


[ https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558647#comment-16558647 ]

John Zhuge edited comment on HADOOP-15633 at 7/26/18 5:29 PM:
--

If you run the following command after your test steps:
{noformat}
hadoop fs -rm -r /user/hadoop/aaa/bbb
{noformat}
You will see the name conflict resolution, "bbb" -> "bbb1532625817927":
{noformat}
# hadoop fs -ls -R /user/hadoop/.Trash
drwx-- - hadoop hadoop 0 2018-07-26 17:21 /user/hadoop/.Trash/Current
drwx-- - hadoop hadoop 0 2018-07-26 17:21 /user/hadoop/.Trash/Current/user
drwx-- - hadoop hadoop 0 2018-07-26 17:21 
/user/hadoop/.Trash/Current/user/hadoop
drwx-- - hadoop hadoop 0 2018-07-26 17:23 
/user/hadoop/.Trash/Current/user/hadoop/aaa
-rw-r--r-- 3 hadoop hadoop 0 2018-07-26 17:21 
/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
drwxr-xr-x - hadoop hadoop 0 2018-07-26 17:23 
/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb1532625817927
-rw-r--r-- 3 hadoop hadoop 0 2018-07-26 17:23 
/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb1532625817927/ccc{noformat}
Looks like name conflict resolution is not done for a parent dir on the path.
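A minimal sketch (not from this issue's attachments) of the failing call: on the second delete, baseTrashPath is the ".../aaa/bbb" entry that already exists in trash as a file, so the NameNode rejects the mkdirs:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal sketch, assuming fs.defaultFS points at the test cluster.
public class TrashParentConflictSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // ".../aaa/bbb" is already a *file* in trash from the first "-rm",
    // so creating a directory with the same name cannot succeed:
    Path baseTrashPath =
        new Path("/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb");
    fs.mkdirs(baseTrashPath);   // fails with FileAlreadyExistsException on HDFS
  }
}
{code}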

 


was (Author: jzhuge):
If you run the following command after your test steps:
{noformat}
hadoop fs -rm -r /user/hadoop/aaa/bbb
{noformat}
You will see the name conflict resolution, "bbb" -> "bbb1532625817927":

 
{noformat}
# hadoop fs -ls -R /user/hadoop/.Trash
drwx-- - hadoop hadoop 0 2018-07-26 17:21 /user/hadoop/.Trash/Current
drwx-- - hadoop hadoop 0 2018-07-26 17:21 /user/hadoop/.Trash/Current/user
drwx-- - hadoop hadoop 0 2018-07-26 17:21 
/user/hadoop/.Trash/Current/user/hadoop
drwx-- - hadoop hadoop 0 2018-07-26 17:23 
/user/hadoop/.Trash/Current/user/hadoop/aaa
-rw-r--r-- 3 hadoop hadoop 0 2018-07-26 17:21 
/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
drwxr-xr-x - hadoop hadoop 0 2018-07-26 17:23 
/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb1532625817927
-rw-r--r-- 3 hadoop hadoop 0 2018-07-26 17:23 
/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb1532625817927/ccc{noformat}
Looks like name conflict resolution is not done for a parent dir on the path.

 


[jira] [Comment Edited] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-07-26 Thread Fei Hui (JIRA)


[ https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558152#comment-16558152 ]

Fei Hui edited comment on HADOOP-15633 at 7/26/18 11:01 AM:


Uploaded the v1 patch.

{code:java}
  public boolean moveToTrash(Path path) throws IOException {
    if (!isEnabled())
      return false;

    if (!path.isAbsolute())                       // make path absolute
      path = new Path(fs.getWorkingDirectory(), path);

    // check that path exists
    fs.getFileStatus(path);
    String qpath = fs.makeQualified(path).toString();

    Path trashRoot = fs.getTrashRoot(path);
    Path trashCurrent = new Path(trashRoot, CURRENT);
    if (qpath.startsWith(trashRoot.toString())) {
      return false;                               // already in trash
    }

    if (trashRoot.getParent().toString().startsWith(qpath)) {
      throw new IOException("Cannot move \"" + path +
          "\" to the trash, as it contains the trash");
    }

    Path trashPath = makeTrashRelativePath(trashCurrent, path);
    Path baseTrashPath = makeTrashRelativePath(trashCurrent, path.getParent());

    IOException cause = null;

    // try twice, in case checkpoint between the mkdirs() & rename()
    for (int i = 0; i < 2; i++) {
      try {
        if (!fs.mkdirs(baseTrashPath, PERMISSION)) {      // create current
          LOG.warn("Can't create(mkdir) trash directory: " + baseTrashPath);
          return false;
        }
      } catch (FileAlreadyExistsException e) {
        // here we should catch FileAlreadyExistsException, then handle it
      } catch (IOException e) {
        LOG.warn("Can't create trash directory: " + baseTrashPath, e);
        cause = e;
        break;
      }
      ...
    }
{code}

In the moveToTrash function, catch FileAlreadyExistsException, find the existing 
file path, rename it, and finally mkdir the base trash path.
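A minimal sketch of how such a handler could locate the conflicting entry before applying the v002 renaming shown earlier in this thread (the helper below is hypothetical, not code from the attached patches):
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper: walk up from the baseTrashPath whose mkdirs failed with
// FileAlreadyExistsException until an existing entry is found -- that entry is
// the file in trash that conflicts with the directory we need to create.
final class TrashConflictHelper {
  static Path findExistingConflict(FileSystem fs, Path baseTrashPath)
      throws IOException {
    Path p = baseTrashPath;
    while (p != null && !fs.exists(p)) {
      p = p.getParent();
    }
    return p;   // e.g. /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb (a file)
  }
}
{code}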




was (Author: ferhui):
Uploaded the v1 patch.

{code:java}
  public boolean moveToTrash(Path path) throws IOException {
    if (!isEnabled())
      return false;

    if (!path.isAbsolute())                       // make path absolute
      path = new Path(fs.getWorkingDirectory(), path);

    // check that path exists
    fs.getFileStatus(path);
    String qpath = fs.makeQualified(path).toString();

    Path trashRoot = fs.getTrashRoot(path);
    Path trashCurrent = new Path(trashRoot, CURRENT);
    if (qpath.startsWith(trashRoot.toString())) {
      return false;                               // already in trash
    }

    if (trashRoot.getParent().toString().startsWith(qpath)) {
      throw new IOException("Cannot move \"" + path +
          "\" to the trash, as it contains the trash");
    }

    Path trashPath = makeTrashRelativePath(trashCurrent, path);
    Path baseTrashPath = makeTrashRelativePath(trashCurrent, path.getParent());

    IOException cause = null;

    // try twice, in case checkpoint between the mkdirs() & rename()
    for (int i = 0; i < 2; i++) {
      try {
        if (!fs.mkdirs(baseTrashPath, PERMISSION)) {      // create current
          LOG.warn("Can't create(mkdir) trash directory: " + baseTrashPath);
          return false;
        }
      } catch (FileAlreadyExistsException e) {
        // here we should catch FileAlreadyExistsException, then handle it
      } catch (IOException e) {
        LOG.warn("Can't create trash directory: " + baseTrashPath, e);
        cause = e;
        break;
      }
{code}

In the moveToTrash function, catch FileAlreadyExistsException, find the existing 
file path, rename it, and finally mkdir the base trash path.




[jira] [Comment Edited] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-07-26 Thread Fei Hui (JIRA)


[ https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558152#comment-16558152 ]

Fei Hui edited comment on HADOOP-15633 at 7/26/18 11:00 AM:


Uploaded the v1 patch.

{code:java}
  public boolean moveToTrash(Path path) throws IOException {
    if (!isEnabled())
      return false;

    if (!path.isAbsolute())                       // make path absolute
      path = new Path(fs.getWorkingDirectory(), path);

    // check that path exists
    fs.getFileStatus(path);
    String qpath = fs.makeQualified(path).toString();

    Path trashRoot = fs.getTrashRoot(path);
    Path trashCurrent = new Path(trashRoot, CURRENT);
    if (qpath.startsWith(trashRoot.toString())) {
      return false;                               // already in trash
    }

    if (trashRoot.getParent().toString().startsWith(qpath)) {
      throw new IOException("Cannot move \"" + path +
          "\" to the trash, as it contains the trash");
    }

    Path trashPath = makeTrashRelativePath(trashCurrent, path);
    Path baseTrashPath = makeTrashRelativePath(trashCurrent, path.getParent());

    IOException cause = null;

    // try twice, in case checkpoint between the mkdirs() & rename()
    for (int i = 0; i < 2; i++) {
      try {
        if (!fs.mkdirs(baseTrashPath, PERMISSION)) {      // create current
          LOG.warn("Can't create(mkdir) trash directory: " + baseTrashPath);
          return false;
        }
      } catch (FileAlreadyExistsException e) {
        // here we should catch FileAlreadyExistsException, then handle it
      } catch (IOException e) {
        LOG.warn("Can't create trash directory: " + baseTrashPath, e);
        cause = e;
        break;
      }
{code}

In the moveToTrash function, catch FileAlreadyExistsException, find the existing 
file path, rename it, and finally mkdir the base trash path.




was (Author: ferhui):
Uploaded the v1 patch.

{code:java}
  public boolean moveToTrash(Path path) throws IOException {
    if (!isEnabled())
      return false;

    if (!path.isAbsolute())                       // make path absolute
      path = new Path(fs.getWorkingDirectory(), path);

    // check that path exists
    fs.getFileStatus(path);
    String qpath = fs.makeQualified(path).toString();

    Path trashRoot = fs.getTrashRoot(path);
    Path trashCurrent = new Path(trashRoot, CURRENT);
    if (qpath.startsWith(trashRoot.toString())) {
      return false;                               // already in trash
    }

    if (trashRoot.getParent().toString().startsWith(qpath)) {
      throw new IOException("Cannot move \"" + path +
          "\" to the trash, as it contains the trash");
    }

    Path trashPath = makeTrashRelativePath(trashCurrent, path);
    Path baseTrashPath = makeTrashRelativePath(trashCurrent, path.getParent());

    IOException cause = null;

    // try twice, in case checkpoint between the mkdirs() & rename()
    for (int i = 0; i < 2; i++) {
      try {
        if (!fs.mkdirs(baseTrashPath, PERMISSION)) {      // create current
          LOG.warn("Can't create(mkdir) trash directory: " + baseTrashPath);
          return false;
        }
{code}

In the moveToTrash function, catch FileAlreadyExistsException, find the existing 
file path, rename it, and finally mkdir the base trash path.


