[jira] [Created] (HADOOP-17277) Correct spelling errors for separator

2020-09-20 Thread Fei Hui (Jira)
Fei Hui created HADOOP-17277:


 Summary: Correct spelling errors for separator
 Key: HADOOP-17277
 URL: https://issues.apache.org/jira/browse/HADOOP-17277
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 3.3.0
Reporter: Fei Hui
Assignee: Fei Hui
 Attachments: HADOOP-17277.001.patch

There are many spelling errors for "separator"; correct them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Created] (HADOOP-17276) Extend CallerContext to make it include many items

2020-09-20 Thread Fei Hui (Jira)
Fei Hui created HADOOP-17276:


 Summary: Extend CallerContext to make it include many items
 Key: HADOOP-17276
 URL: https://issues.apache.org/jira/browse/HADOOP-17276
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Fei Hui
Assignee: Fei Hui


Currently the context is a plain string. We need to extend CallerContext because the 
context may contain many items (see the sketch below).
Items include:
* router ip
* MR or CLI
* etc.
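
A minimal sketch, assuming the existing CallerContext.Builder(String) constructor, of how multiple items can only be packed into the single context string today; the key names and the comma separator are hypothetical, not part of the current API:

{code:java}
import org.apache.hadoop.ipc.CallerContext;

public class MultiItemCallerContext {
  public static void main(String[] args) {
    // Hypothetical key/value encoding of the items listed above.
    String context = String.join(",",
        "routerIp:192.168.1.1",
        "clientType:CLI");
    // Build and install the caller context for the current thread.
    CallerContext.setCurrent(new CallerContext.Builder(context).build());
  }
}
{code}

Extending CallerContext itself would avoid every caller inventing an ad-hoc encoding like this.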



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Created] (HADOOP-17235) Erasure Coding: Remove dead code from common side

2020-08-30 Thread Fei Hui (Jira)
Fei Hui created HADOOP-17235:


 Summary: Erasure Coding: Remove dead code from common side
 Key: HADOOP-17235
 URL: https://issues.apache.org/jira/browse/HADOOP-17235
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.3.0
Reporter: Fei Hui
Assignee: Fei Hui


This code is unused, so remove it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Created] (HADOOP-17232) Erasure Coding: Typo in document

2020-08-27 Thread Fei Hui (Jira)
Fei Hui created HADOOP-17232:


 Summary: Erasure Coding: Typo in document
 Key: HADOOP-17232
 URL: https://issues.apache.org/jira/browse/HADOOP-17232
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 3.3.0
 Environment: While reviewing the EC documentation and code, I found this typo:
change "a erasure code" to "an erasure code".
Reporter: Fei Hui
Assignee: Fei Hui






--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Created] (HADOOP-16814) Add dropped connections metric for Server

2020-01-19 Thread Fei Hui (Jira)
Fei Hui created HADOOP-16814:


 Summary: Add dropped connections metric for Server
 Key: HADOOP-16814
 URL: https://issues.apache.org/jira/browse/HADOOP-16814
 Project: Hadoop Common
  Issue Type: Test
  Components: common
Affects Versions: 3.3.0
Reporter: Fei Hui
Assignee: Fei Hui


With this metric we can see the number of handled RPCs whose responses were never 
sent back to clients because the connection was dropped.
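
A hedged sketch of how such a counter might be exposed through the metrics2 API; the class name, metric name, and description below are illustrative assumptions, not the actual patch:

{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

// Illustrative only: a counter bumped whenever a handled call's response
// cannot be delivered because the client connection was dropped.
@Metrics(about = "Example RPC server metrics", context = "rpc")
public class ExampleServerMetrics {
  @Metric("Connections dropped before the response was sent")
  MutableCounterLong droppedConnections;

  // Registering the annotated source lets the metrics system instantiate the field.
  public static ExampleServerMetrics create() {
    return DefaultMetricsSystem.instance()
        .register("ExampleServerMetrics", "Example RPC server metrics",
            new ExampleServerMetrics());
  }

  void connectionDropped() {
    droppedConnections.incr();
  }
}
{code}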



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Created] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-07-26 Thread Fei Hui (JIRA)
Fei Hui created HADOOP-15633:


 Summary: fs.TrashPolicyDefault: Can't create trash directory
 Key: HADOOP-15633
 URL: https://issues.apache.org/jira/browse/HADOOP-15633
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 2.7.7, 3.0.3, 3.1.0, 2.8.3
Reporter: Fei Hui
Assignee: Fei Hui


Reproduce it as follows:

{code:shell}
hadoop fs -touchz /user/hadoop/aaa
hadoop fs -rm /user/hadoop/aaa
hadoop fs -mkdir -p /user/hadoop/aaa/bbb
hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
hadoop fs -rm /user/hadoop/aaa/bbb/ccc
{code}

Then we get the following error: the first rm moved /user/hadoop/aaa into the trash as a file, so the mkdirs for the second rm's trash directory /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb fails because its parent path already exists as a file.

{code:java}
18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: /user/hadoop/.Trash/Current/user/hadoop/aaa
    at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
    at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
    at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
    at org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:136)
    at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:114)
    at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
    at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:118)
    at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:105)
    at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
    at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
    at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
    at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
    at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
    at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
    at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.FileAlreadyExistsException): Path is not a directory: /user/hadoop/.Trash/Current/user/hadoop/aaa
    at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
{code}

[jira] [Resolved] (HADOOP-14189) add distcp-site.xml for distcp on branch-2

2017-03-16 Thread Fei Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui resolved HADOOP-14189.
--
Resolution: Duplicate

> add distcp-site.xml for distcp on branch-2
> --
>
> Key: HADOOP-14189
> URL: https://issues.apache.org/jira/browse/HADOOP-14189
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14189-branch-2.001.patch
>
>
> On Hadoop 2.x, we cannot configure Hadoop parameters for distcp; it only
> uses distcp-default.xml.
> We should add distcp-site.xml to override Hadoop parameters.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)




[jira] [Created] (HADOOP-14189) add distcp-site.xml for distcp on branch-2

2017-03-16 Thread Fei Hui (JIRA)
Fei Hui created HADOOP-14189:


 Summary: add distcp-site.xml for distcp on branch-2
 Key: HADOOP-14189
 URL: https://issues.apache.org/jira/browse/HADOOP-14189
 Project: Hadoop Common
  Issue Type: Task
  Components: tools/distcp
Reporter: Fei Hui


On Hadoop 2.x, we cannot configure Hadoop parameters for distcp; it only uses 
distcp-default.xml.
We should add distcp-site.xml to override Hadoop parameters.
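
A minimal sketch of the intended loading order, assuming a distcp-site.xml placed on the classpath next to the existing distcp-default.xml (the site file is the proposal here, not something that ships today):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class DistCpConfExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.addResource("distcp-default.xml"); // shipped defaults (existing)
    conf.addResource("distcp-site.xml");    // proposed per-site overrides
    // Resources added later override earlier ones, so site values would win.
    System.out.println(conf.get("mapreduce.map.memory.mb"));
  }
}
{code}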



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)




[jira] [Created] (HADOOP-14176) distcp reports beyond physical memory limits on branch-2

2017-03-12 Thread Fei Hui (JIRA)
Fei Hui created HADOOP-14176:


 Summary: distcp reports beyond physical memory limits on branch-2
 Key: HADOOP-14176
 URL: https://issues.apache.org/jira/browse/HADOOP-14176
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools/distcp
Affects Versions: 2.9.0
Reporter: Fei Hui


When I run distcp, I get errors like the following:
{quote}
17/02/21 15:31:18 INFO mapreduce.Job: Task Id : 
attempt_1487645941615_0037_m_03_0, Status : FAILED
Container [pid=24661,containerID=container_1487645941615_0037_01_05] is 
running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical 
memory used; 4.0 GB of 5 GB virtual memory used. Killing container.
Dump of the process-tree for container_1487645941615_0037_01_05 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 24661 24659 24661 24661 (bash) 0 0 108650496 301 /bin/bash -c 
/usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
-Dhadoop.metrics.log.level=WARN  -Xmx2120m 
-Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
 -Dlog4j.configuration=container-log4j.properties 
-Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
-Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
44048 attempt_1487645941615_0037_m_03_0 5 
1>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stdout
 
2>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stderr
|- 24665 24661 24661 24661 (java) 1766 336 4235558912 280699 
/usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
-Dhadoop.metrics.log.level=WARN -Xmx2120m 
-Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
 -Dlog4j.configuration=container-log4j.properties 
-Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
-Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
44048 attempt_1487645941615_0037_m_03_0 5
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
{quote}
Digging into the code, I find that this happens because the distcp configuration 
overrides mapred-site.xml:
{code}
<property>
  <name>mapred.job.map.memory.mb</name>
  <value>1024</value>
</property>

<property>
  <name>mapred.job.reduce.memory.mb</name>
  <value>1024</value>
</property>
{code}

When mapreduce.map.java.opts and mapreduce.map.memory.mb are set in 
mapred-site.xml with values larger than those in distcp-default.xml, 
this error may occur.
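
For illustration, a hedged sketch of why the distcp default wins: mapred.job.map.memory.mb is a deprecated alias of mapreduce.map.memory.mb, so the 1024 from distcp-default.xml replaces whatever the cluster configuration sets (the 3072 below is only an example value):

{code:java}
import org.apache.hadoop.mapred.JobConf;

public class DistcpMemoryOverrideExample {
  public static void main(String[] args) {
    JobConf conf = new JobConf(false);             // loading JobConf registers the MR key deprecations
    conf.set("mapreduce.map.memory.mb", "3072");   // example cluster setting
    conf.set("mapred.job.map.memory.mb", "1024");  // value from distcp-default.xml
    // The deprecated key resolves to the new one, so this prints 1024.
    System.out.println(conf.get("mapreduce.map.memory.mb"));
  }
}
{code}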

We should remove these two configurations from distcp-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)




[jira] [Created] (HADOOP-14069) AliyunOSS: listStatus returns wrong file info

2017-02-09 Thread Fei Hui (JIRA)
Fei Hui created HADOOP-14069:


 Summary: AliyunOSS: listStatus returns wrong file info
 Key: HADOOP-14069
 URL: https://issues.apache.org/jira/browse/HADOOP-14069
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/oss
Affects Versions: 3.0.0-alpha2
Reporter: Fei Hui


When I use the command 'hadoop fs -ls oss://oss-for-hadoop-sh/', I find that the 
listing info is wrong:

{quote}
$bin/hadoop fs -ls oss://oss-for-hadoop-sh/ 
Found 1 items
drwxrwxrwx   -  0 1970-01-01 08:00 oss://oss-for-hadoop-sh/test00
{quote}

The modification time is wrong; it should not be 1970-01-01 08:00.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)




[jira] [Created] (HADOOP-14065) oss directory filestatus should use meta time

2017-02-07 Thread Fei Hui (JIRA)
Fei Hui created HADOOP-14065:


 Summary: oss directory filestatus should use meta time
 Key: HADOOP-14065
 URL: https://issues.apache.org/jira/browse/HADOOP-14065
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/oss
Affects Versions: 3.0.0-alpha2
Reporter: Fei Hui
Assignee: Fei Hui


The code in the getFileStatus function is:
else if (objectRepresentsDirectory(key, meta.getContentLength())) {
  return new FileStatus(0, true, 1, 0, 0, qualifiedPath);
}

We should set the correct modification time.
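
A hedged sketch of the proposed change, assuming meta is the OSS SDK's ObjectMetadata (whose getLastModified() returns a java.util.Date) and keeping the other names from the snippet above:

{code:java}
// Sketch only: use the directory object's metadata time instead of 0.
else if (objectRepresentsDirectory(key, meta.getContentLength())) {
  return new FileStatus(0, true, 1, 0,
      meta.getLastModified().getTime(),  // modification time from OSS metadata
      qualifiedPath);
}
{code}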



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)




[jira] [Created] (HADOOP-13898) should set HADOOP_JOB_HISTORYSERVER_HEAPSIZE only if it's empty

2016-12-12 Thread Fei Hui (JIRA)
Fei Hui created HADOOP-13898:


 Summary: should set HADOOP_JOB_HISTORYSERVER_HEAPSIZE only if it's 
empty
 Key: HADOOP-13898
 URL: https://issues.apache.org/jira/browse/HADOOP-13898
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.9.0
Reporter: Fei Hui


In mapred-env.sh, HADOOP_JOB_HISTORYSERVER_HEAPSIZE is set to 1000 unconditionally, 
which is incorrect.
We should set it to 1000 by default only if it is empty, because if you run 
'HADOOP_JOB_HISTORYSERVER_HEAPSIZE=512 $HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver', 
HADOOP_JOB_HISTORYSERVER_HEAPSIZE will be set to 1000 rather than 512.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Created] (HADOOP-13869) using HADOOP_USER_CLASSPATH_FIRST inconsistently

2016-12-06 Thread Fei Hui (JIRA)
Fei Hui created HADOOP-13869:


 Summary: using HADOOP_USER_CLASSPATH_FIRST inconsistently
 Key: HADOOP-13869
 URL: https://issues.apache.org/jira/browse/HADOOP-13869
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Fei Hui


I find that HADOOP_USER_CLASSPATH_FIRST is used inconsistently.
I know it doesn't matter much, because it affects the classpath whenever 
HADOOP_USER_CLASSPATH_FIRST is non-empty, but it would be better to use 
HADOOP_USER_CLASSPATH_FIRST uniformly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Created] (HADOOP-13865) add tools to classpath by default

2016-12-04 Thread Fei Hui (JIRA)
Fei Hui created HADOOP-13865:


 Summary: add tools to classpath by default
 Key: HADOOP-13865
 URL: https://issues.apache.org/jira/browse/HADOOP-13865
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.7.3, 2.8.0
Reporter: Fei Hui






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Created] (HADOOP-13773) wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2

2016-10-31 Thread Fei Hui (JIRA)
Fei Hui created HADOOP-13773:


 Summary: wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2
 Key: HADOOP-13773
 URL: https://issues.apache.org/jira/browse/HADOOP-13773
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.7.3, 2.6.1
Reporter: Fei Hui
 Fix For: 2.8.0, 2.9.0, 2.7.4






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
