[jira] [Created] (HADOOP-12441) Fix kill command execution under Ubuntu 12

2015-09-25 Thread Wangda Tan (JIRA)
Wangda Tan created HADOOP-12441:
---

 Summary: Fix kill command execution under Ubuntu 12
 Key: HADOOP-12441
 URL: https://issues.apache.org/jira/browse/HADOOP-12441
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wangda Tan
Priority: Blocker


After HADOOP-12317, kill command execution fails under Ubuntu 12. 
After the NM restarts, it cannot determine via a container's pid whether the 
process is still alive, and it cannot kill the process correctly when the 
RM/AM tells the NM to kill a container.

Logs from NM (customized logs):
{code}
2015-09-25 21:58:59,348 INFO  nodemanager.DefaultContainerExecutor 
(DefaultContainerExecutor.java:containerIsAlive(431)) -  == 
check alive cmd:[[Ljava.lang.String;@496e442d]
2015-09-25 21:58:59,349 INFO  nodemanager.NMAuditLogger 
(NMAuditLogger.java:logSuccess(89)) - USER=hrt_qa   IP=10.0.1.14
OPERATION=Stop Container RequestTARGET=ContainerManageImpl  
RESULT=SUCCESS  APPID=application_1443218269460_0001
CONTAINERID=container_1443218269460_0001_01_01
2015-09-25 21:58:59,363 INFO  nodemanager.DefaultContainerExecutor 
(DefaultContainerExecutor.java:containerIsAlive(438)) -  
===
ExitCodeException exitCode=1: ERROR: garbage process ID "--".
Usage:
  kill pid ...              Send SIGTERM to every process listed.
  kill signal pid ...       Send a signal to every process listed.
  kill -s signal pid ...    Send a signal to every process listed.
  kill -l                   List all signal names.
  kill -L                   List all signal names in a nice table.
  kill -l signal            Convert between signal numbers and names.

at org.apache.hadoop.util.Shell.runCommand(Shell.java:550)
at org.apache.hadoop.util.Shell.run(Shell.java:461)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:727)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.containerIsAlive(DefaultContainerExecutor.java:432)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.signalContainer(DefaultContainerExecutor.java:401)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.cleanupContainer(ContainerLaunch.java:419)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncher.handle(ContainersLauncher.java:139)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncher.handle(ContainersLauncher.java:55)
at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:175)
at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:108)
at java.lang.Thread.run(Thread.java:745)
{code}
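A side note on the customized log above: "check alive cmd:[[Ljava.lang.String;@496e442d]" prints the String[] via Object.toString(), which hides the actual command. A minimal sketch of the difference, using an illustrative argv rather than the exact command the NM builds:

```java
import java.util.Arrays;

public class ArrayLogDemo {
    public static void main(String[] args) {
        // Hypothetical kill command array, similar in shape to what an executor runs.
        String[] cmd = {"kill", "-0", "--", "-12345"};

        // Concatenating an array invokes Object.toString(): type + identity hash.
        System.out.println("check alive cmd:[" + cmd + "]");

        // Arrays.toString prints the actual argv, which is what a debug log needs.
        System.out.println("check alive cmd:" + Arrays.toString(cmd));
    }
}
```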



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-12356) CPU usage statistics on Windows

2015-12-17 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan resolved HADOOP-12356.
-
Resolution: Fixed

> CPU usage statistics on Windows
> ---
>
> Key: HADOOP-12356
> URL: https://issues.apache.org/jira/browse/HADOOP-12356
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
> Environment: CPU: Intel Xeon
> OS: Windows server
>Reporter: Yunqi Zhang
>Assignee: Yunqi Zhang
>  Labels: easyfix, newbie, patch
> Attachments: 0001-Correct-the-CPU-usage-calcualtion.patch, 
> 0001-Correct-the-CPU-usage-calcualtion.patch, HADOOP-12356-v3.patch, 
> HADOOP-12356-v4.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> The CPU usage information on Windows is computed incorrectly. The proposed 
> patch fixes the issue and unifies the interface with Linux.





[jira] [Reopened] (HADOOP-12356) CPU usage statistics on Windows

2015-12-17 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan reopened HADOOP-12356:
-
  Assignee: Inigo Goiri  (was: Yunqi Zhang)

> CPU usage statistics on Windows
> ---
>
> Key: HADOOP-12356
> URL: https://issues.apache.org/jira/browse/HADOOP-12356
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
> Environment: CPU: Intel Xeon
> OS: Windows server
>Reporter: Yunqi Zhang
>Assignee: Inigo Goiri
>  Labels: easyfix, newbie, patch
> Attachments: 0001-Correct-the-CPU-usage-calcualtion.patch, 
> 0001-Correct-the-CPU-usage-calcualtion.patch, HADOOP-12356-v3.patch, 
> HADOOP-12356-v4.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> The CPU usage information on Windows is computed incorrectly. The proposed 
> patch fixes the issue and unifies the interface with Linux.





[jira] [Created] (HADOOP-12907) Support specifying resources for task containers in SLS

2016-03-08 Thread Wangda Tan (JIRA)
Wangda Tan created HADOOP-12907:
---

 Summary: Support specifying resources for task containers in SLS
 Key: HADOOP-12907
 URL: https://issues.apache.org/jira/browse/HADOOP-12907
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Wangda Tan
Assignee: Wangda Tan


Currently, SLS doesn't support specifying resources for task containers; it 
uses a global default value for all containers.

Instead, we should be able to specify different resources for task containers 
in sls-job.conf.





[jira] [Created] (HADOOP-13156) create-release.sh doesn't work for branch-2.8

2016-05-16 Thread Wangda Tan (JIRA)
Wangda Tan created HADOOP-13156:
---

 Summary: create-release.sh doesn't work for branch-2.8
 Key: HADOOP-13156
 URL: https://issues.apache.org/jira/browse/HADOOP-13156
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Wangda Tan
Priority: Blocker


A couple of issues found while trying to run dev-support/create-release.sh:

1) Missing files like release-notes.html and CHANGE.txt
2) After removing the lines that copy release-notes.html/CHANGE.txt, still saw 
some issues, for example:

{code}
usage: cp [-R [-H | -L | -P]] [-fi | -n] [-apvX] source_file target_file
   cp [-R [-H | -L | -P]] [-fi | -n] [-apvX] source_file ... 
target_directory

Failed! running cp -r ../../target/r2.8.0-SNAPSHOT/api 
../../target/r2.8.0-SNAPSHOT/css 
../../target/r2.8.0-SNAPSHOT/dependency-analysis.html 
../../target/r2.8.0-SNAPSHOT/hadoop-annotations 
../../target/r2.8.0-SNAPSHOT/hadoop-ant 
../../target/r2.8.0-SNAPSHOT/hadoop-archive-logs 
../../target/r2.8.0-SNAPSHOT/hadoop-archives 
../../target/r2.8.0-SNAPSHOT/hadoop-assemblies 
../../target/r2.8.0-SNAPSHOT/hadoop-auth 
../../target/r2.8.0-SNAPSHOT/hadoop-auth-examples 
../../target/r2.8.0-SNAPSHOT/hadoop-aws 
../../target/r2.8.0-SNAPSHOT/hadoop-azure 
../../target/r2.8.0-SNAPSHOT/hadoop-common-project 
../../target/r2.8.0-SNAPSHOT/hadoop-datajoin 
../../target/r2.8.0-SNAPSHOT/hadoop-dist 
../../target/r2.8.0-SNAPSHOT/hadoop-distcp 
../../target/r2.8.0-SNAPSHOT/hadoop-extras 
../../target/r2.8.0-SNAPSHOT/hadoop-gridmix 
../../target/r2.8.0-SNAPSHOT/hadoop-hdfs-bkjournal 
../../target/r2.8.0-SNAPSHOT/hadoop-hdfs-httpfs 
../../target/r2.8.0-SNAPSHOT/hadoop-hdfs-nfs 
../../target/r2.8.0-SNAPSHOT/hadoop-hdfs-project 
../../target/r2.8.0-SNAPSHOT/hadoop-kms 
../../target/r2.8.0-SNAPSHOT/hadoop-mapreduce 
../../target/r2.8.0-SNAPSHOT/hadoop-mapreduce-client 
../../target/r2.8.0-SNAPSHOT/hadoop-mapreduce-examples 
../../target/r2.8.0-SNAPSHOT/hadoop-maven-plugins 
../../target/r2.8.0-SNAPSHOT/hadoop-minicluster 
../../target/r2.8.0-SNAPSHOT/hadoop-minikdc 
../../target/r2.8.0-SNAPSHOT/hadoop-nfs 
../../target/r2.8.0-SNAPSHOT/hadoop-openstack 
../../target/r2.8.0-SNAPSHOT/hadoop-pipes 
../../target/r2.8.0-SNAPSHOT/hadoop-project-dist 
../../target/r2.8.0-SNAPSHOT/hadoop-rumen 
../../target/r2.8.0-SNAPSHOT/hadoop-sls 
../../target/r2.8.0-SNAPSHOT/hadoop-streaming 
../../target/r2.8.0-SNAPSHOT/hadoop-tools 
../../target/r2.8.0-SNAPSHOT/hadoop-yarn 
../../target/r2.8.0-SNAPSHOT/hadoop-yarn-project 
../../target/r2.8.0-SNAPSHOT/images ../../target/r2.8.0-SNAPSHOT/index.html 
../../target/r2.8.0-SNAPSHOT/project-reports.html 
hadoop-2.8.0-SNAPSHOT/share/doc/hadoop/ in 
/Users/wtan/sandbox/hadoop-erie-copy/hadoop-dist/target
{code}




-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13167) "javac: invalid target release: 1.8" failures happen on YARN precommit jobs

2016-05-17 Thread Wangda Tan (JIRA)
Wangda Tan created HADOOP-13167:
---

 Summary: "javac: invalid target release: 1.8" failures happen on 
YARN precommit jobs
 Key: HADOOP-13167
 URL: https://issues.apache.org/jira/browse/HADOOP-13167
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wangda Tan
Priority: Blocker


Tons of failures happen on YARN precommit runs, for example:
https://issues.apache.org/jira/browse/YARN-4957?focusedCommentId=15285836&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15285836.






[jira] [Resolved] (HADOOP-13156) create-release.sh doesn't work for branch-2.8

2016-05-18 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan resolved HADOOP-13156.
-
Resolution: Duplicate

This is a duplicate of HADOOP-12892.

> create-release.sh doesn't work for branch-2.8
> -
>
> Key: HADOOP-13156
> URL: https://issues.apache.org/jira/browse/HADOOP-13156
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Wangda Tan
>Priority: Blocker
>
> A couple of issues found while trying to run dev-support/create-release.sh:
> 1) Missing files like release-notes.html and CHANGE.txt
> 2) After removing the lines that copy release-notes.html/CHANGE.txt, still saw 
> some issues, for example:
> {code}
> usage: cp [-R [-H | -L | -P]] [-fi | -n] [-apvX] source_file target_file
>cp [-R [-H | -L | -P]] [-fi | -n] [-apvX] source_file ... 
> target_directory
> Failed! running cp -r ../../target/r2.8.0-SNAPSHOT/api 
> ../../target/r2.8.0-SNAPSHOT/css 
> ../../target/r2.8.0-SNAPSHOT/dependency-analysis.html 
> ../../target/r2.8.0-SNAPSHOT/hadoop-annotations 
> ../../target/r2.8.0-SNAPSHOT/hadoop-ant 
> ../../target/r2.8.0-SNAPSHOT/hadoop-archive-logs 
> ../../target/r2.8.0-SNAPSHOT/hadoop-archives 
> ../../target/r2.8.0-SNAPSHOT/hadoop-assemblies 
> ../../target/r2.8.0-SNAPSHOT/hadoop-auth 
> ../../target/r2.8.0-SNAPSHOT/hadoop-auth-examples 
> ../../target/r2.8.0-SNAPSHOT/hadoop-aws 
> ../../target/r2.8.0-SNAPSHOT/hadoop-azure 
> ../../target/r2.8.0-SNAPSHOT/hadoop-common-project 
> ../../target/r2.8.0-SNAPSHOT/hadoop-datajoin 
> ../../target/r2.8.0-SNAPSHOT/hadoop-dist 
> ../../target/r2.8.0-SNAPSHOT/hadoop-distcp 
> ../../target/r2.8.0-SNAPSHOT/hadoop-extras 
> ../../target/r2.8.0-SNAPSHOT/hadoop-gridmix 
> ../../target/r2.8.0-SNAPSHOT/hadoop-hdfs-bkjournal 
> ../../target/r2.8.0-SNAPSHOT/hadoop-hdfs-httpfs 
> ../../target/r2.8.0-SNAPSHOT/hadoop-hdfs-nfs 
> ../../target/r2.8.0-SNAPSHOT/hadoop-hdfs-project 
> ../../target/r2.8.0-SNAPSHOT/hadoop-kms 
> ../../target/r2.8.0-SNAPSHOT/hadoop-mapreduce 
> ../../target/r2.8.0-SNAPSHOT/hadoop-mapreduce-client 
> ../../target/r2.8.0-SNAPSHOT/hadoop-mapreduce-examples 
> ../../target/r2.8.0-SNAPSHOT/hadoop-maven-plugins 
> ../../target/r2.8.0-SNAPSHOT/hadoop-minicluster 
> ../../target/r2.8.0-SNAPSHOT/hadoop-minikdc 
> ../../target/r2.8.0-SNAPSHOT/hadoop-nfs 
> ../../target/r2.8.0-SNAPSHOT/hadoop-openstack 
> ../../target/r2.8.0-SNAPSHOT/hadoop-pipes 
> ../../target/r2.8.0-SNAPSHOT/hadoop-project-dist 
> ../../target/r2.8.0-SNAPSHOT/hadoop-rumen 
> ../../target/r2.8.0-SNAPSHOT/hadoop-sls 
> ../../target/r2.8.0-SNAPSHOT/hadoop-streaming 
> ../../target/r2.8.0-SNAPSHOT/hadoop-tools 
> ../../target/r2.8.0-SNAPSHOT/hadoop-yarn 
> ../../target/r2.8.0-SNAPSHOT/hadoop-yarn-project 
> ../../target/r2.8.0-SNAPSHOT/images ../../target/r2.8.0-SNAPSHOT/index.html 
> ../../target/r2.8.0-SNAPSHOT/project-reports.html 
> hadoop-2.8.0-SNAPSHOT/share/doc/hadoop/ in 
> /Users/wtan/sandbox/hadoop-erie-copy/hadoop-dist/target
> {code}






[jira] [Created] (HADOOP-10483) AbstractDelegationTokenSecretManager.ExpiredTokenRemover should use adaptable unit to print time

2014-04-08 Thread Wangda Tan (JIRA)
Wangda Tan created HADOOP-10483:
---

 Summary: AbstractDelegationTokenSecretManager.ExpiredTokenRemover 
should use adaptable unit to print time
 Key: HADOOP-10483
 URL: https://issues.apache.org/jira/browse/HADOOP-10483
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Wangda Tan
Priority: Trivial


Currently, ExpiredTokenRemover uses minute(s) as the unit when printing 
tokenRemoverScanInterval, so a user-entered value of less than 1 minute shows 
up as 0 in the output log. It would be better to print time in a format like 
0d:1h:3m:2s:3ms.
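A minimal sketch of such an adaptive formatter; the format helper is hypothetical, not existing Hadoop code:

```java
import java.util.concurrent.TimeUnit;

public class AdaptiveTimeFormat {
    // Hypothetical helper producing the 0d:1h:3m:2s:3ms style suggested above.
    static String format(long millis) {
        long d = TimeUnit.MILLISECONDS.toDays(millis);
        long h = TimeUnit.MILLISECONDS.toHours(millis) % 24;
        long m = TimeUnit.MILLISECONDS.toMinutes(millis) % 60;
        long s = TimeUnit.MILLISECONDS.toSeconds(millis) % 60;
        long ms = millis % 1000;
        return d + "d:" + h + "h:" + m + "m:" + s + "s:" + ms + "ms";
    }

    public static void main(String[] args) {
        // A 30-second scan interval no longer collapses to "0 minute(s)":
        System.out.println(format(30_000));  // 0d:0h:0m:30s:0ms
    }
}
```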





[jira] [Created] (HADOOP-10625) Configuration: names should be trimmed when putting/getting to properties

2014-05-21 Thread Wangda Tan (JIRA)
Wangda Tan created HADOOP-10625:
---

 Summary: Configuration: names should be trimmed when 
putting/getting to properties
 Key: HADOOP-10625
 URL: https://issues.apache.org/jira/browse/HADOOP-10625
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.4.0
Reporter: Wangda Tan


Currently, Hadoop does not trim the name when putting a k/v pair into 
properties, but when loading configuration from a file, names are trimmed:
(In Configuration.java)
{code}
  if ("name".equals(field.getTagName()) && field.hasChildNodes())
attr = StringInterner.weakIntern(
((Text)field.getFirstChild()).getData().trim());
  if ("value".equals(field.getTagName()) && field.hasChildNodes())
value = StringInterner.weakIntern(
((Text)field.getFirstChild()).getData());
{code}
With this behavior, the following steps are problematic:
1. User incorrectly sets " hadoop.key=value" (with a space before hadoop.key)
2. User tries to get "hadoop.key" and cannot get "value"
3. Configuration is serialized/deserialized (as is done in MR)
4. User tries to get "hadoop.key" and now gets "value", which creates an 
inconsistency.
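The inconsistency can be sketched with plain java.util.Properties standing in for Configuration's internal storage; the key name and steps are illustrative:

```java
import java.util.Properties;

public class TrimMismatchDemo {
    public static void main(String[] args) {
        // Plain Properties stands in for Configuration's internal storage.
        Properties props = new Properties();

        // Step 1: a name set programmatically keeps its leading space.
        props.setProperty(" hadoop.key", "value");

        // Step 2: looking up the trimmed name misses the entry.
        System.out.println(props.getProperty("hadoop.key"));  // null

        // Steps 3-4: a round trip through a loader that trims names on read
        // (as Configuration does for XML) re-keys the entry, so the same
        // lookup now succeeds; that is the inconsistency described above.
        props.setProperty(" hadoop.key".trim(), props.getProperty(" hadoop.key"));
        System.out.println(props.getProperty("hadoop.key"));  // value
    }
}
```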





[jira] [Reopened] (HADOOP-15007) Stabilize and document Configuration <tag> element

2018-02-27 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan reopened HADOOP-15007:
-

> Stabilize and document Configuration <tag> element
> --
>
> Key: HADOOP-15007
> URL: https://issues.apache.org/jira/browse/HADOOP-15007
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Blocker
> Fix For: 3.2.0
>
> Attachments: HADOOP-15007.000.patch, HADOOP-15007.001.patch, 
> HADOOP-15007.002.patch, HADOOP-15007.003.patch
>
>
> HDFS-12350 (moved to HADOOP-15005) adds the ability to tag properties with a 
> <tag> value.
> We need to make sure that this feature is backwards compatible & usable in 
> production. That's docs, testing, marshalling etc.






[jira] [Reopened] (HADOOP-15637) LocalFs#listLocatedStatus does not filter out hidden .crc files

2018-07-30 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan reopened HADOOP-15637:
-

[~xkrogen], [~vagarychen], could you help check the comments from 
[~bibinchundatt] below?

Also updated the fix version to 3.1.2, given this doesn't exist in branch-3.1.1.

> LocalFs#listLocatedStatus does not filter out hidden .crc files
> ---
>
> Key: HADOOP-15637
> URL: https://issues.apache.org/jira/browse/HADOOP-15637
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Fix For: 3.2.0, 2.9.2, 2.8.5, 3.0.4, 3.1.2
>
> Attachments: HADOOP-15637.000.patch
>
>
> After HADOOP-7165, {{LocalFs#listLocatedStatus}} incorrectly returns the 
> hidden {{.crc}} files used to store checksum information. This is because 
> HADOOP-7165 implemented {{listLocatedStatus}} on {{FilterFs}}, so the default 
> implementation is no longer used, and {{FilterFs}} directly calls the raw FS 
> since {{listLocatedStatus}} is not defined in {{ChecksumFs}}.
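The expected filtering can be sketched as follows; isChecksumFile mirrors the ".<name>.crc" naming convention and is illustrative, not the actual ChecksumFs code:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class CrcFilterDemo {
    // Mirrors the ".<name>.crc" checksum naming convention; illustrative only.
    static boolean isChecksumFile(String name) {
        return name.startsWith(".") && name.endsWith(".crc");
    }

    public static void main(String[] args) {
        List<String> rawListing = Arrays.asList("part-00000", ".part-00000.crc", "_SUCCESS");
        // A correct listLocatedStatus should hide the checksum entries:
        List<String> visible = rawListing.stream()
                .filter(name -> !isChecksumFile(name))
                .collect(Collectors.toList());
        System.out.println(visible);  // [part-00000, _SUCCESS]
    }
}
```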






[jira] [Created] (HADOOP-13423) Run JDiff on trunk for Hadoop-Common and analyze results

2016-07-25 Thread Wangda Tan (JIRA)
Wangda Tan created HADOOP-13423:
---

 Summary: Run JDiff on trunk for Hadoop-Common and analyze results
 Key: HADOOP-13423
 URL: https://issues.apache.org/jira/browse/HADOOP-13423
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wangda Tan
Assignee: Wangda Tan
Priority: Blocker









[jira] [Created] (HADOOP-13428) Fix hadoop-common to generate jdiff

2016-07-26 Thread Wangda Tan (JIRA)
Wangda Tan created HADOOP-13428:
---

 Summary: Fix hadoop-common to generate jdiff
 Key: HADOOP-13428
 URL: https://issues.apache.org/jira/browse/HADOOP-13428
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wangda Tan
Assignee: Wangda Tan


Hadoop-common failed to generate JDiff. We need to fix that.






[jira] [Resolved] (HADOOP-12516) jdiff fails with error 'duplicate comment id' about MetricsSystem.register_changed

2016-08-24 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan resolved HADOOP-12516.
-
Resolution: Duplicate

This is already fixed by HADOOP-13428, closing as dup.

> jdiff fails with error 'duplicate comment id' about 
> MetricsSystem.register_changed
> --
>
> Key: HADOOP-12516
> URL: https://issues.apache.org/jira/browse/HADOOP-12516
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>
> "mvn package -Pdist,docs -DskipTests" fails with the following error. It looks 
> like a jdiff problem, as Li Lu mentioned on HADOOP-11776.
> {quote}
>   [javadoc] ExcludePrivateAnnotationsJDiffDoclet
>   [javadoc] JDiff: doclet started ...
>   [javadoc] JDiff: reading the old API in from file 
> '/home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/dev-support/jdiff/Apache_Hadoop_Common_2.6.0.xml'...Warning:
>  API identifier in the XML file (hadoop-core 2.6.0) differs from the name of 
> the file 'Apache_Hadoop_Common_2.6.0.xml'
>   ...
>   [javadoc] JDiff: reading the new API in from file 
> '/home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/target/site/jdiff/xml/Apache_Hadoop_Common_2.8.0-SNAPSHOT.xml'...Warning:
>  incorrectly formatted @link in text: Options to be used by the \{@link 
> Find\} command and its \{@link Expression\}s.
>   
>   [javadoc] Error: duplicate comment id: 
> org.apache.hadoop.metrics2.MetricsSystem.register_changed(java.lang.String, 
> java.lang.String, T)
> {quote}
> A link to the comment by Li Lu is [here| 
> https://issues.apache.org/jira/browse/HADOOP-11776?focusedCommentId=14391868&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14391868].






[jira] [Reopened] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common

2017-08-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan reopened HADOOP-13835:
-

> Move Google Test Framework code from mapreduce to hadoop-common
> ---
>
> Key: HADOOP-13835
> URL: https://issues.apache.org/jira/browse/HADOOP-13835
> Project: Hadoop Common
>  Issue Type: Task
>  Components: test
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, 
> HADOOP-13835.003.patch, HADOOP-13835.004.patch, HADOOP-13835.005.patch, 
> HADOOP-13835.006.patch, HADOOP-13835.007.patch, 
> HADOOP-13835.branch-2.007.patch
>
>
> The mapreduce project has Google Test Framework code to allow testing of 
> native libraries. This should be moved to hadoop-common so that other 
> projects can use it as well.






[jira] [Reopened] (HADOOP-14670) Increase minimum cmake version for all platforms

2017-09-15 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan reopened HADOOP-14670:
-

> Increase minimum cmake version for all platforms
> 
>
> Key: HADOOP-14670
> URL: https://issues.apache.org/jira/browse/HADOOP-14670
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14670.00.patch, HADOOP-14670.01.patch, 
> HADOOP-14670.02.patch, HADOOP-14670.03.patch
>
>
> cmake 2.6 is old at this point and I'd be greatly surprised if anyone is 
> actually using it or testing against it.  It's probably time to upgrade to 
> something approaching modern.  Plus:
> * Mac OS X already requires 3.0
> * If HADOOP-14667 gets committed, Windows bumps to 3.1
> * There is special handling in at least one CMakeLists.txt for versions less 
> than 3.1
> Given the last two points, I'd propose making the minimum 3.1, if not 
> something higher due to support for newer compilers across all 
> platforms.






[jira] [Created] (HADOOP-11199) Configuration should be able to set empty value for property

2014-10-13 Thread Wangda Tan (JIRA)
Wangda Tan created HADOOP-11199:
---

 Summary: Configuration should be able to set empty value for 
property
 Key: HADOOP-11199
 URL: https://issues.apache.org/jira/browse/HADOOP-11199
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Reporter: Wangda Tan


Currently in hadoop.common.conf.Configuration, when you specify an XML like this:

{code}
<configuration>
  <property>
    <name>conf.name</name>
    <value></value>
  </property>
</configuration>
{code}

When you try to get conf.name, the returned value is null instead of an 
empty string.

Test code for this:
{code}
import java.io.ByteArrayInputStream;

import org.apache.hadoop.conf.Configuration;

public class HadoopConfigurationEmptyTest {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    ByteArrayInputStream bais =
        new ByteArrayInputStream(("<configuration><property>"
            + "<name>conf.name</name><value></value>"
            + "</property></configuration>").getBytes());
    conf.addResource(bais);
    // Prints "null" rather than an empty string.
    System.out.println(conf.get("conf.name"));
  }
}
{code}

Is this intentional, or is it a behavior that should be fixed?





[jira] [Created] (HADOOP-12386) RetryPolicies.RETRY_FOREVER should be able to specify a retry interval

2015-09-08 Thread Wangda Tan (JIRA)
Wangda Tan created HADOOP-12386:
---

 Summary: RetryPolicies.RETRY_FOREVER should be able to specify a 
retry interval
 Key: HADOOP-12386
 URL: https://issues.apache.org/jira/browse/HADOOP-12386
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wangda Tan


As mentioned in YARN-4113, we should be able to specify a retry interval in 
RetryPolicies.RETRY_FOREVER.
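A minimal sketch of what a retry-forever policy with a configurable interval could look like; the Action interface and retryForever helper are hypothetical, not the actual RetryPolicies API:

```java
// Illustrative sketch only: names here are hypothetical, not Hadoop's API.
public class RetryForeverDemo {
    interface Action {
        boolean attempt();  // true on success
    }

    // Retry forever, sleeping a caller-supplied interval between attempts.
    static void retryForever(Action action, long sleepMillis) throws InterruptedException {
        while (!action.attempt()) {
            Thread.sleep(sleepMillis);  // the configurable interval requested above
        }
    }

    public static void main(String[] args) throws InterruptedException {
        int[] calls = {0};
        retryForever(() -> ++calls[0] >= 3, 10);  // succeeds on the third attempt
        System.out.println("attempts=" + calls[0]);  // attempts=3
    }
}
```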


