Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-06-24 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/355/

[Jun 23, 2017 1:39:58 PM] (stevel) HADOOP-14568. GenericTestUtils#waitFor 
missing parameter verification.
[Jun 23, 2017 2:56:28 PM] (brahma) HDFS-12024. Fix typo's in 
FsDatasetImpl.java. Contributed by Yasen liu.
[Jun 23, 2017 8:26:03 PM] (yufei) YARN-5876. 
TestResourceTrackerService#testGracefulDecommissionWithApp
[Jun 23, 2017 8:38:41 PM] (stevel) HADOOP-14547. [WASB] the configured retry 
policy is not used for all
[Jun 23, 2017 11:50:47 PM] (arp) HADOOP-14543. ZKFC should use getAversion() 
while setting the zkacl.




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl 
   hadoop.hdfs.TestRollingUpgrade 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector 
   hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.TestDiskFailures 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.client.api.impl.TestAMRMClient 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 
   hadoop.mapreduce.TestMRJobClient 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
   org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands 
   org.apache.hadoop.yarn.server.resourcemanager.TestReservationSystemWithRMHA 
   org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA 
   org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
  

   mvninstall:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/355/artifact/out/patch-mvninstall-root.txt [504K]

   compile:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/355/artifact/out/patch-compile-root.txt [20K]

   cc:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/355/artifact/out/patch-compile-root.txt [20K]

   javac:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/355/artifact/out/patch-compile-root.txt [20K]

   unit:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/355/artifact/out/patch-unit-hadoop-assemblies.txt [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/355/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [220K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/355/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [56K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/355/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt [52K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/355/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [80K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/355/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [324K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/355/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt [12K]
   

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-06-24 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/444/

[Jun 23, 2017 6:50:57 AM] (sunilg) YARN-5892. Support user-specific minimum 
user limit percentage in
[Jun 23, 2017 1:39:58 PM] (stevel) HADOOP-14568. GenericTestUtils#waitFor 
missing parameter verification.
[Jun 23, 2017 2:56:28 PM] (brahma) HDFS-12024. Fix typo's in 
FsDatasetImpl.java. Contributed by Yasen liu.
[Jun 23, 2017 8:26:03 PM] (yufei) YARN-5876. 
TestResourceTrackerService#testGracefulDecommissionWithApp
[Jun 23, 2017 8:38:41 PM] (stevel) HADOOP-14547. [WASB] the configured retry 
policy is not used for all
[Jun 23, 2017 11:50:47 PM] (arp) HADOOP-14543. ZKFC should use getAversion() 
while setting the zkacl.




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager

   Useless object stored in variable removedNullContainers of method org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List) At NodeStatusUpdaterImpl.java:[line 642]
   org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache() makes inefficient use of keySet iterator instead of entrySet iterator At NodeStatusUpdaterImpl.java:[line 719]
   Hard coded reference to an absolute pathname in org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext) At DockerLinuxContainerRuntime.java:[line 455]
   org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus() makes inefficient use of keySet iterator instead of entrySet iterator At ContainerLocalizer.java:[line 334]
   org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics is a mutable collection which should be package protected At ContainerMetrics.java:[line 134]

Failed junit tests :

   hadoop.security.TestRaceWhenRelogin 
   hadoop.hdfs.TestDistributedFileSystem 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl 
   hadoop.hdfs.server.namenode.ha.TestPipelinesFailover 
   hadoop.hdfs.TestMaintenanceState 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestDiskFailures 
   hadoop.yarn.client.api.impl.TestNMClient 
   hadoop.yarn.client.api.impl.TestAMRMClient 

Timed out junit tests :

   org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands 
   org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA 
  

   cc:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/444/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/444/artifact/out/diff-compile-javac-root.txt [192K]

   checkstyle:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/444/artifact/out/diff-checkstyle-root.txt [17M]

   pylint:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/444/artifact/out/diff-patch-pylint.txt [20K]

   shellcheck:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/444/artifact/out/diff-patch-shellcheck.txt [20K]

   shelldocs:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/444/artifact/out/diff-patch-shelldocs.txt [12K]

   whitespace:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/444/artifact/out/whitespace-eol.txt [12M]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/444/artifact/out/whitespace-tabs.txt [1.2M]

   findbugs:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/444/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html [12K]

   javadoc:


[jira] [Created] (HDFS-12035) Ozone: listKey doesn't work from ozone commandline

2017-06-24 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-12035:
--

 Summary: Ozone: listKey doesn't work from ozone commandline
 Key: HDFS-12035
 URL: https://issues.apache.org/jira/browse/HDFS-12035
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Weiwei Yang


HDFS-11782 implements the listKey operation on the KSM server side, but the 
command line doesn't work right now:
{code}
./bin/hdfs oz -listKey http://ozone1.fyre.ibm.com:9864/volume-wwei-0/bucket1/
{code}

gives the following output:

{noformat}
Command Failed : 
{"httpCode":400,"shortMessage":"invalidBucketName","resource":"wwei","message":"Illegal
 max number of keys specified, the value must be in range (0, 1024], actual : 
0.","requestID":"d1a33851-6bfa-48d2-9afc-9dd7b06dfb0e","hostName":"ozone1.fyre.ibm.com"}
{noformat}

I think the following things are missing:

# ListKeyHandler doesn't support the common listing arguments: start, length 
and prefix.
# The HTTP request to {{Bucket#listBucket}} uses 0 as the default value; I 
think that's why we get the "Illegal max number of keys specified" error from 
the command line.
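A minimal sketch of the default handling described in item 2, written in Python for illustration only (the real handler is Java on the KSM side, and the function and constant names here are hypothetical): when the client omits max-keys, the server should substitute a positive default rather than 0, and reject values outside (0, 1024].

{code}
# Hypothetical sketch of max-keys normalization; names are illustrative,
# not the actual Ozone API.
MAX_LISTING_SIZE = 1024  # assumed upper bound, matching the (0, 1024] range in the error


def normalize_max_keys(raw):
    """Return a validated max-keys value; None (parameter absent) means 'use the default'."""
    if raw is None:
        return MAX_LISTING_SIZE
    value = int(raw)
    if not (0 < value <= MAX_LISTING_SIZE):
        raise ValueError(
            "Illegal max number of keys specified, the value must be in "
            "range (0, %d], actual : %d" % (MAX_LISTING_SIZE, value))
    return value
{code}

With this shape of logic, an omitted parameter never reaches the range check as 0, which is what appears to trigger the error above.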



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-12034) Ozone: Web interface for KSM

2017-06-24 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-12034:
---

 Summary: Ozone: Web interface for KSM
 Key: HDFS-12034
 URL: https://issues.apache.org/jira/browse/HDFS-12034
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Elek, Marton
Assignee: Elek, Marton


This is a proposal about how a web interface could be implemented for SCM (and 
later for KSM), similar to the namenode ui.

1. JS framework

There are three main options here.

A.) Use a full-featured web framework with all the webpack/npm minify/uglify 
magic. At build time the webpack/npm scripts would run, and the result would 
be added to the jar file.

B.) This could be simplified if the generated minified/uglified JS files were 
added to the project at commit time. That requires an additional step for 
every new patch (generating the new minified JavaScript) but doesn't require 
additional JS build tools during the build.

C.) The third option is to make it as simple as possible, similar to the 
current namenode ui, which uses JavaScript but commits every dependency 
(without JS minify/uglify or other preprocessing).

I prefer the third one because:

 * I have seen a lot of problems during frequent builds of older tez-ui 
versions (bower version mismatches, npm version mismatches, npm transitive 
dependency problems, proxy problems with older versions). All of them could 
be fixed, but that requires additional JS/NPM knowledge. Without an 
additional npm build step, the HDFS build can be kept simpler.

 * The complexity of the planned SCM/KSM ui (hopefully it will remain simple) 
doesn't require a more sophisticated model (e.g. we don't need JS require, as 
we need only a few controllers).

 * HDFS developers are mostly backend developers, not JS developers.

2. Frameworks 

The big advantage of a more modern JS framework is the simplified programming 
model (for example, two-way data binding). I suggest using a more modern 
framework (not just jQuery) that supports plain JS (not only 
ECMA2015/2016/TypeScript) and including just the required JS files in the 
project (similar to the included Bootstrap, or the way the existing namenode 
ui works).
 
  * React could be a good candidate, but it requires more libraries since 
it's just a UI framework; even the REST calls need a separate library. It 
could be used with plain JavaScript instead of JSX and classes, but that is 
not straightforward and is more verbose.
 
  * Ember is used in yarnui2, but the main strength of Ember is its CLI, 
which couldn't easily be used for the simplified approach. I think Ember fits 
best with option A.)

  * Angular 1 is a good candidate (though not so fancy). With Angular 1 the 
component-based approach should be used (so that it would later be easier to 
migrate to Angular 2 or React).

  * The mainstream side of Angular 2 uses TypeScript. It could work with 
plain JS, but that would require additional knowledge; most of the tutorials 
and documentation show the TypeScript approach.

I suggest using Angular 1 or React. Angular may be easier to use, as we 
wouldn't need to emulate JSX with function calls; simple HTML templates could 
be used.

3. Backend

I would prefer the approach of the existing namenode ui, where the backend is 
just the JMX endpoint. To keep it as simple as possible, I suggest avoiding a 
dedicated REST backend if possible. Later we can use the REST APIs of SCM/KSM 
once they are implemented.
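To illustrate the jmx-only backend idea: Hadoop daemons already serve their MBeans as JSON from the /jmx HTTP endpoint, so a UI only needs to fetch that one document and filter the beans it cares about. The sketch below is in Python rather than browser JS purely to keep it concrete and testable; the helper names are hypothetical, and the optional qry parameter (which narrows the result server-side) is assumed to behave as in the namenode's /jmx servlet.

{code}
# Hypothetical sketch of a jmx-only backend client; helper names are
# illustrative, not an existing API.
import json
from urllib.request import urlopen


def fetch_jmx(base_url, qry=None):
    """Fetch the daemon's /jmx JSON document; 'qry' narrows it server-side."""
    url = base_url.rstrip("/") + "/jmx"
    if qry:
        url += "?qry=" + qry
    with urlopen(url) as resp:
        return json.load(resp)


def pick_beans(doc, name_prefix):
    """Filter the 'beans' array by bean-name prefix, client-side."""
    return [b for b in doc.get("beans", [])
            if b.get("name", "").startswith(name_prefix)]
{code}

A page in the SCM/KSM ui could then render pick_beans(fetch_jmx(...), "Hadoop:") directly, with no dedicated REST layer in between.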



