[jira] [Created] (HDFS-12034) Ozone: Web interface for KSM

2017-06-23 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-12034:
---

 Summary: Ozone: Web interface for KSM
 Key: HDFS-12034
 URL: https://issues.apache.org/jira/browse/HDFS-12034
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Elek, Marton
Assignee: Elek, Marton


This is a proposal about how a web interface could be implemented for SCM (and 
later for KSM), similar to the namenode UI.

1. JS framework

There are three big options here. 

A.) One is to use a full-featured web framework with all the webpack/npm 
minify/uglify magic. At build time the webpack/npm scripts would be run and the 
result added to the jar file. 

B.) It could be simplified if the generated minified/uglified JS files are 
added to the project at commit time. This requires an additional step for every 
new patch (generating the new minified JavaScript) but doesn't require 
additional JS build tools during the build.

C.) The third option is to make it as simple as possible, similar to the current 
namenode UI, which uses JavaScript but has every dependency committed (without 
JS minify/uglify or other preprocessing).

I prefer the third one because:

 * I have seen a lot of problems during frequent builds of older tez-ui 
versions (bower version mismatches, npm version mismatches, npm transitive 
dependency problems, proxy problems with older versions). All of them could be 
fixed, but that requires additional JS/npm magic/knowledge. Without an 
additional npm build step the HDFS project's build can be kept simpler.

 * The complexity of the planned SCM/KSM UI (hopefully it will remain simple) 
doesn't require a more sophisticated model. (E.g. we don't need JS require, as 
we need only a few controllers.)

 * HDFS developers are mostly backend developers, not JS developers.

2. Frameworks 

The big advantage of a more modern JS framework is the simplified programming 
model (for example, two-way data binding). I suggest using a more modern 
framework (not just jQuery) which supports plain JS (not only 
ECMA2015/2016/TypeScript) and just including the required JS files in the 
project (similar to the included Bootstrap, or the way the existing namenode UI 
works). 
 
  * React could be a good candidate, but it requires more libraries, as it's 
just a UI framework; even the REST calls need a separate library. It could be 
used with plain JavaScript instead of JSX and classes, but that's not 
straightforward, and it's more verbose.
 
  * Ember is used in yarnui2, but the main strength of Ember is the CLI, which 
couldn't easily be used with the simplified approach. I think Ember fits the 
A.) option best.

  * Angular 1 is a good candidate (but not so fancy). In the case of Angular 1 
the component-based approach should be used (that way it would be easier later 
to migrate to Angular 2 or React).

  * The mainstream side of Angular 2 uses TypeScript. It could work with plain 
JS, but that would require additional knowledge; most of the tutorials and 
documentation show the TypeScript approach.

I suggest using Angular 1 or React. Maybe Angular is easier to use, as we don't 
need to emulate JSX with function calls; simple HTML templates could be used.

3. Backend

I would prefer the approach of the existing namenode UI, where the backend is 
just the JMX endpoint. To keep it as simple as possible, I suggest trying to 
avoid a dedicated REST backend if possible. Later we can use the REST APIs of 
SCM/KSM if they are implemented. 
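
To make this concrete, here is a minimal sketch of what the KSM side could look 
like, assuming a hypothetical {{KSMMXBean}} interface and metric names (none of 
this exists yet). Anything registered this way is served as JSON by the stock 
/jmx servlet, so the UI needs no dedicated backend:

{code}
// Sketch only: KSMMXBean and its getters are hypothetical names.
interface KSMMXBean {
  long getNumVolumes();
  long getNumBuckets();
}

public class KeySpaceManager implements KSMMXBean {
  private volatile long numVolumes;
  private volatile long numBuckets;

  private void registerMXBean() {
    // Existing Hadoop helper; publishes the bean under
    // Hadoop:service=KSM,name=KSMInfo on the /jmx servlet.
    org.apache.hadoop.metrics2.util.MBeans.register("KSM", "KSMInfo", this);
  }

  @Override
  public long getNumVolumes() { return numVolumes; }

  @Override
  public long getNumBuckets() { return numBuckets; }
}
{code}

The UI would then just fetch {{/jmx?qry=Hadoop:service=KSM,name=KSMInfo}}, the 
same way the namenode UI reads its beans.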







[jira] [Created] (HDFS-12033) DatanodeManager picking EC recovery tasks should also consider the number of regular replication tasks.

2017-06-23 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HDFS-12033:


 Summary: DatanodeManager picking EC recovery tasks should also 
consider the number of regular replication tasks.
 Key: HDFS-12033
 URL: https://issues.apache.org/jira/browse/HDFS-12033
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Affects Versions: 3.0.0-alpha3
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu


In {{DatanodeManager#handleHeartbeat}}, both the pending replication list and 
the pending EC reconstruction list are filled with up to {{maxTransfers}} items 
each, so a DataNode can receive up to twice that many tasks.

It should only send {{maxTransfers}} tasks combined to the DN.
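
A sketch of one possible fix (the method names follow the current 
DatanodeManager/DatanodeDescriptor code, but this is an illustration, not the 
actual patch): give EC reconstruction only the budget left after the replication 
tasks are drawn, so the combined total never exceeds {{maxTransfers}}.

{code}
final int maxTransfers = blockManager.getMaxReplicationStreams()
    - xmitsInProgress;
// Draw regular replication tasks first ...
final List<BlockTargetPair> pendingList =
    nodeinfo.getReplicationCommand(maxTransfers);
final int numReplicationTasks =
    pendingList == null ? 0 : pendingList.size();
// ... then cap the EC reconstruction tasks by the remaining budget.
final List<BlockECReconstructionInfo> pendingECList =
    nodeinfo.getErasureCodeCommand(maxTransfers - numReplicationTasks);
{code}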






[jira] [Created] (HDFS-12032) Inaccurate comment on DatanodeDescriptor#getNumberOfBlocksToBeErasureCoded

2017-06-23 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-12032:
--

 Summary: Inaccurate comment on 
DatanodeDescriptor#getNumberOfBlocksToBeErasureCoded
 Key: HDFS-12032
 URL: https://issues.apache.org/jira/browse/HDFS-12032
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0-alpha3
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Trivial


I saw that this comment is an inaccurate copy-paste:

{noformat}
  /**
   * The number of work items that are pending to be replicated
   */
  @VisibleForTesting
  public int getNumberOfBlocksToBeErasureCoded() {
    return erasurecodeBlocks.size();
  }
{noformat}
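
The fix is a one-line comment change, e.g.:

{noformat}
  /**
   * The number of work items that are pending to be erasure coded
   */
  @VisibleForTesting
  public int getNumberOfBlocksToBeErasureCoded() {
    return erasurecodeBlocks.size();
  }
{noformat}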






[jira] [Created] (HDFS-12031) Ozone: Rename OzoneClient to OzoneRestClient

2017-06-23 Thread Nandakumar (JIRA)
Nandakumar created HDFS-12031:
-

 Summary: Ozone: Rename OzoneClient to OzoneRestClient
 Key: HDFS-12031
 URL: https://issues.apache.org/jira/browse/HDFS-12031
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Nandakumar
Assignee: Nandakumar


This JIRA is to rename the existing 
{{org.apache.hadoop.ozone.web.client.OzoneClient}} to 
{{org.apache.hadoop.ozone.web.client.OzoneRestClient}} so that we can build an 
OzoneClient Java API.






[jira] [Created] (HDFS-12030) Ozone: CLI: support infoKey command

2017-06-23 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-12030:
-

 Summary: Ozone: CLI: support infoKey command
 Key: HDFS-12030
 URL: https://issues.apache.org/jira/browse/HDFS-12030
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Xiaoyu Yao


{code}
HW11717:ozone xyao$ hdfs oz -infoKey http://localhost:9864/vol-2/bucket-1/key-1 
-user xyao 
Command Failed : {"httpCode":0,"shortMessage":"Not supported 
yet","resource":null,"message":"Not supported 
yet","requestID":null,"hostName":null}
{code}






[jira] [Created] (HDFS-12029) Data node process crashes after kernel upgrade

2017-06-23 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-12029:
---

 Summary:  Data node process crashes after kernel upgrade
 Key: HDFS-12029
 URL: https://issues.apache.org/jira/browse/HDFS-12029
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Anu Engineer
Priority: Critical


 We have seen that when the Linux kernel is upgraded to address a specific CVE 
(https://access.redhat.com/security/vulnerabilities/stackguard), it might 
cause a datanode crash.

We have observed this issue while upgrading from 3.10.0-514.6.2 to 
3.10.0-514.21.2 versions of the kernel.

The original kernel fix is here: 
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1be7107fbe18eed3e319a6c3e83c78254b693acb

The datanode fails with the following stack trace: 

{noformat}

# 
# A fatal error has been detected by the Java Runtime Environment: 
# 
# SIGBUS (0x7) at pc=0x7f458d078b7c, pid=13214, tid=139936990349120 
# 
# JRE version: (8.0_40-b25) (build ) 
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.40-b25 mixed mode linux-amd64 
compressed oops) 
# Problematic frame: 
# j java.lang.Object.()V+0 
# 
# Failed to write core dump. Core dumps have been disabled. To enable core 
dumping, try "ulimit -c unlimited" before starting Java again 
# 
# An error report file with more information is saved as: 
# /tmp/hs_err_pid13214.log 
# 
# If you would like to submit a bug report, please visit: 
# http://bugreport.java.com/bugreport/crash.jsp 
# 
{noformat}

The root cause is a failure in jsvc. If we pass a value greater than 1 MB as 
the stack size argument, this can be mitigated. Something like:

{code}
exec "$JSVC" \
-Xss2m \
org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter "$@"
{code}

This JIRA tracks potential fixes for this problem. We don't have data on how 
this impacts other applications that run on the datanode, as the larger stack 
size might increase the datanodes' memory usage.








[jira] [Created] (HDFS-12028) Ozone: CLI: remove noisy slf4j binding output from hdfs oz command.

2017-06-23 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-12028:
-

 Summary: Ozone: CLI: remove noisy slf4j binding output from hdfs 
oz command.
 Key: HDFS-12028
 URL: https://issues.apache.org/jira/browse/HDFS-12028
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Xiaoyu Yao


Currently, when you run the CLI "hdfs oz ...", there is always noisy SLF4J 
binding output. This ticket is opened to remove it. 

{code}
xyao$ hdfs oz
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/Users/xyao/deploy/ozone/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/Users/xyao/deploy/ozone/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs/lib/logback-classic-1.0.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
{code}






Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-06-23 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/354/

[Jun 22, 2017 6:27:13 PM] (Arun Suresh) YARN-6127. Add support for work 
preserving NM restart when AMRMProxy is
[Jun 22, 2017 8:35:56 PM] (arp) HDFS-11789. Maintain Short-Circuit Read 
Statistics. Contributed by
[Jun 22, 2017 10:42:50 PM] (arp) HDFS-12010. TestCopyPreserveFlag fails 
consistently because of mismatch
[Jun 23, 2017 1:28:58 AM] (aajisaka) HADOOP-12940. Fix warnings from Spotbugs 
in hadoop-common.
[Jun 23, 2017 2:22:41 AM] (naganarasimha_gr) YARN-5006. ResourceManager quit 
due to ApplicationStateData exceed the
[Jun 23, 2017 2:57:54 AM] (xiao) HDFS-12009. Accept human-friendly units in 
dfsadmin
[Jun 23, 2017 6:50:57 AM] (sunilg) YARN-5892. Support user-specific minimum 
user limit percentage in




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 
   hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector 
   hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.TestDiskFailures 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
   org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands 
   
org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore 
   
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA 
   
org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
  

   mvninstall:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/354/artifact/out/patch-mvninstall-root.txt
  [504K]

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/354/artifact/out/patch-compile-root.txt
  [20K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/354/artifact/out/patch-compile-root.txt
  [20K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/354/artifact/out/patch-compile-root.txt
  [20K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/354/artifact/out/patch-unit-hadoop-assemblies.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/354/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [344K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/354/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/354/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/354/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [76K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/354/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/354/artifact/out/patch-uni

[jira] [Resolved] (HDFS-11844) Ozone: Recover SCM state when SCM is restarted

2017-06-23 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDFS-11844.
---
Resolution: Duplicate

> Ozone: Recover SCM state when SCM is restarted
> --
>
> Key: HDFS-11844
> URL: https://issues.apache.org/jira/browse/HDFS-11844
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Reporter: Weiwei Yang
>Assignee: Anu Engineer
>
> SCM loses its state once it is restarted. This issue can be found by a 
> simple test with the following steps
> # Start NN, DN, SCM
> # Create several containers via SCM CLI
> # Restart DN
> # Get existing container info via SCM CLI; this step will fail with a 
> "container doesn't exist" error.
> {{ContainerManagerImpl}} maintains a cache of the container mapping, 
> {{containerMap}}; if the DN is restarted, this information is lost. We need a 
> way to restore the state from the DB in a background thread.






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-06-23 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/443/

[Jun 22, 2017 8:42:59 AM] (aajisaka) HADOOP-14542. Add 
IOUtils.cleanupWithLogger that accepts slf4j logger
[Jun 22, 2017 12:07:08 PM] (vinayakumarb) HDFS-11067. 
DFS#listStatusIterator(..) should throw
[Jun 22, 2017 6:27:13 PM] (Arun Suresh) YARN-6127. Add support for work 
preserving NM restart when AMRMProxy is
[Jun 22, 2017 8:35:56 PM] (arp) HDFS-11789. Maintain Short-Circuit Read 
Statistics. Contributed by
[Jun 22, 2017 10:42:50 PM] (arp) HDFS-12010. TestCopyPreserveFlag fails 
consistently because of mismatch
[Jun 23, 2017 1:28:58 AM] (aajisaka) HADOOP-12940. Fix warnings from Spotbugs 
in hadoop-common.
[Jun 23, 2017 2:22:41 AM] (naganarasimha_gr) YARN-5006. ResourceManager quit 
due to ApplicationStateData exceed the
[Jun 23, 2017 2:57:54 AM] (xiao) HDFS-12009. Accept human-friendly units in 
dfsadmin




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
   Useless object stored in variable removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:[line 642] 
   
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:[line 719] 
   Hard coded reference to an absolute pathname in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
 At DockerLinuxContainerRuntime.java:absolute pathname in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
 At DockerLinuxContainerRuntime.java:[line 455] 
   
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
ContainerLocalizer.java:keySet iterator instead of entrySet iterator At 
ContainerLocalizer.java:[line 334] 
   
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
 is a mutable collection which should be package protected At 
ContainerMetrics.java:which should be package protected At 
ContainerMetrics.java:[line 134] 

Failed junit tests :

   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.ha.TestZKFailoverController 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.TestErasureCodeBenchmarkThroughput 
   hadoop.hdfs.server.balancer.TestBalancerRPCDelay 
   hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   hadoop.hdfs.TestNNBench 
   hadoop.mapred.TestMRTimelineEventHandling 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/443/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/443/artifact/out/diff-compile-javac-root.txt
  [192K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/443/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/443/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/443/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/443/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/443/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/443/artifact/out/whitespace-tabs.txt
  [1.2M]

   findbugs:

   
https://builds.apache.org/job

[jira] [Created] (HDFS-12027) Add KMS API to get service version and health check

2017-06-23 Thread patrick white (JIRA)
patrick white created HDFS-12027:


 Summary: Add KMS API to get service version and health check
 Key: HDFS-12027
 URL: https://issues.apache.org/jira/browse/HDFS-12027
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: kms
Reporter: patrick white
Priority: Minor


Enhancement request: add an API to the Key Management Server which can be used 
for health monitoring as well as programmatic version checks, such as returning 
the service version identifier. Suggestion:

GET http://HOST:PORT/kms/v1/key/kms_version

This API would be useful for production monitoring tools to quickly do KMS 
instance reporting (dashboards) and basic health checks, as part of overall 
monitoring of a Hadoop stack installation.

Such an API would also be useful for debugging the initial bring-up of a 
service instance, such as validating the KMS webserver and its interaction with 
ZK before the key manager(s) are necessarily working. Currently, I believe a 
valid key needs to be set up and available before calls can return success.
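
For illustration, a minimal sketch of such an endpoint as a plain servlet (the 
path and JSON fields are only the ones suggested above; the real KMS REST layer 
may want to implement this differently):

{code}
// Sketch only: the response shape is hypothetical.
public class KMSVersionServlet extends javax.servlet.http.HttpServlet {
  @Override
  protected void doGet(javax.servlet.http.HttpServletRequest req,
                       javax.servlet.http.HttpServletResponse resp)
      throws java.io.IOException {
    resp.setContentType("application/json");
    // VersionInfo is Hadoop's existing build-info helper.
    String version = org.apache.hadoop.util.VersionInfo.getVersion();
    resp.getWriter().write(
        "{\"name\":\"kms\",\"version\":\"" + version + "\"}");
  }
}
{code}

Returning a constant payload like this would also give monitoring tools a 
health check that doesn't depend on any key being set up.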








[jira] [Created] (HDFS-12026) libhdfspp: Fix compilation errors and warnings when compiling with Clang

2017-06-23 Thread Anatoli Shein (JIRA)
Anatoli Shein created HDFS-12026:


 Summary: libhdfspp: Fix compilation errors and warnings when 
compiling with Clang 
 Key: HDFS-12026
 URL: https://issues.apache.org/jira/browse/HDFS-12026
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Anatoli Shein
Assignee: Anatoli Shein


Currently, multiple errors and warnings prevent libhdfspp from being compiled 
with Clang. It should compile cleanly using the flags:
-std=c++11 -stdlib=libc++

and also warning flags:
-Weverything -Wno-c++98-compat -Wno-missing-prototypes 
-Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
-Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror






[jira] [Created] (HDFS-12025) Modify help instructions of Plan command and Execute command about DiskBalancer

2017-06-23 Thread steven-wugang (JIRA)
steven-wugang created HDFS-12025:


 Summary: Modify help instructions of Plan command and Execute 
command about DiskBalancer
 Key: HDFS-12025
 URL: https://issues.apache.org/jira/browse/HDFS-12025
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: steven-wugang


There are some inaccurate descriptions in the help text of DiskBalancer's 
PlanCommand and ExecuteCommand.
For example, in the ExecuteCommand help text, "Execute command runs a 
submits a plan for execution on the given data node" should be 
modified to "Execute command runs a submitted plan for execution on the given 
data node".






[jira] [Resolved] (HDFS-11918) Ozone: Encapsulate KSM metadata key for better (de)serialization

2017-06-23 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang resolved HDFS-11918.

Resolution: Later

> Ozone: Encapsulate KSM metadata key for better (de)serialization
> 
>
> Key: HDFS-11918
> URL: https://issues.apache.org/jira/browse/HDFS-11918
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: HDFS-11918-HDFS-7240.001.patch
>
>
> There are multiple types of keys stored in the KSM database:
> # Volume Key
> # Bucket Key
> # Object Key
> # User Key
> Currently they are represented as plain strings with some conventions, such as
> # /volume
> > # /volume/bucket
> # /volume/bucket/key
> # $user
> This approach makes it difficult to parse volumes/buckets/keys from the KSM 
> database. I propose to encapsulate these types of keys in protobuf messages, 
> and take advantage of protobuf to serialize/deserialize classes to byte 
> arrays (and vice versa).






[jira] [Created] (HDFS-12024) spell error in FsDatasetImpl.java

2017-06-23 Thread Yasen Liu (JIRA)
Yasen Liu created HDFS-12024:


 Summary: spell error in FsDatasetImpl.java
 Key: HDFS-12024
 URL: https://issues.apache.org/jira/browse/HDFS-12024
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yasen Liu


The log statement {{LOG.warn("Failed to repot bad block " + corruptBlock, e)}} 
misspells "repot"; it should be "report".

Also found a javadoc parameter error:

{code}
  /**
   * Removes a set of volumes from FsDataset.
   * @param storageLocationsToRemove a set of
   * {@link StorageLocation}s for each volume.
   * @param clearFailure set true to clear failure information.
   */
  @Override
  public void removeVolumes(
      final Collection<StorageLocation> storageLocsToRemove,
      boolean clearFailure) {
{code}

"storageLocationsToRemove" in the @param documentation should be 
"storageLocsToRemove" to match the actual parameter name.






[jira] [Created] (HDFS-12023) Ozone: test if all the configuration keys documented in ozone-defaults.xml

2017-06-23 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-12023:
---

 Summary: Ozone: test if all the configuration keys documented in 
ozone-defaults.xml
 Key: HDFS-12023
 URL: https://issues.apache.org/jira/browse/HDFS-12023
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Elek, Marton
Assignee: Elek, Marton


HDFS-11990 added the missing configuration entries to ozone-defaults.xml.

This patch contains a unit test which checks that all the configuration keys 
are still documented.

(Constant fields of the specific configuration classes which end with _KEY 
should be part of the defaults XML.) 
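
A sketch of such a test, using reflection over the config class and the loaded 
defaults resource (the class and resource names follow the description above 
and are assumptions, as is the single-class scope):

{code}
public class TestOzoneConfigurationFields {

  @org.junit.Test
  public void ozoneConfigKeysAreDocumented() throws Exception {
    org.apache.hadoop.conf.Configuration conf =
        new org.apache.hadoop.conf.Configuration(false);
    // defaults file name as given in the description
    conf.addResource("ozone-default.xml");

    for (java.lang.reflect.Field field :
        org.apache.hadoop.ozone.OzoneConfigKeys.class.getFields()) {
      // Only the String *_KEY constants name configuration keys.
      if (!field.getName().endsWith("_KEY")
          || field.getType() != String.class) {
        continue;
      }
      String key = (String) field.get(null);
      org.junit.Assert.assertNotNull(
          "Key is not documented in the defaults file: " + key,
          conf.getRaw(key));
    }
  }
}
{code}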




