[jira] [Created] (HDDS-583) SCM returns zero as the return code, even when invalid options are passed

2018-10-05 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-583:
-

 Summary: SCM returns zero as the return code, even when invalid 
options are passed
 Key: HDDS-583
 URL: https://issues.apache.org/jira/browse/HDDS-583
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


While testing for HDDS-564, I found that SCM returns zero as the return code even when invalid options are passed. In StorageContainerManager.java, see the code below:
{code:java}
private static StartupOption parseArguments(String[] args) {
  int argsLen = (args == null) ? 0 : args.length;
  StartupOption startOpt = StartupOption.HELP;
{code}
Here, startOpt is initialized to HELP, so even when wrong options are passed, 
parseArguments falls through and returns HELP by default. This causes the exit 
code to be 0.

Ideally, startOpt should be initialized to null, which would let SCM return a 
non-zero exit code when the options are invalid:
{code:java}
StartupOption startOpt = null;
{code}
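For illustration, a minimal sketch of how the caller could then turn a null startOpt into a non-zero exit code; printUsage and terminate are hypothetical stand-ins, not the actual StorageContainerManager code:
{code:java}
StartupOption startOpt = parseArguments(args);
if (startOpt == null) {
  // Invalid or unrecognized options: report usage and exit non-zero
  // instead of silently falling back to HELP (which exits with 0).
  printUsage(System.err);  // hypothetical usage printer
  terminate(1);            // hypothetical, e.g. ExitUtil.terminate(1)
}
{code}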



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13965) hadoop.security.kerberos.ticket.cache.path setting is not honored when KMS encryption is enabled.

2018-10-05 Thread LOKESKUMAR VIJAYAKUMAR (JIRA)
LOKESKUMAR VIJAYAKUMAR created HDFS-13965:
-

 Summary: hadoop.security.kerberos.ticket.cache.path setting is not 
honored when KMS encryption is enabled.
 Key: HDFS-13965
 URL: https://issues.apache.org/jira/browse/HDFS-13965
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client, kms
Affects Versions: 2.7.7, 2.7.3
Reporter: LOKESKUMAR VIJAYAKUMAR


We use the *hadoop.security.kerberos.ticket.cache.path* setting to provide a 
custom Kerberos cache path for all Hadoop operations to be run as a specified 
user. But this setting is not honored when KMS encryption is enabled.

The program below, which reads a file, works when KMS encryption is not 
enabled but fails when it is enabled.

It looks like the *hadoop.security.kerberos.ticket.cache.path* setting is not 
honored by *createConnection* in *KMSClientProvider.java*.

HadoopTest.java (CLASSPATH needs to be set to compile and run):

{code:java}
import java.io.InputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HadoopTest {

    public static int runRead(String[] args) throws Exception {
        if (args.length < 3) {
            System.err.println("HadoopTest hadoop_file_path hadoop_user kerberos_cache");
            return 1;
        }
        Path inputPath = new Path(args[0]);
        Configuration conf = new Configuration();
        URI defaultURI = FileSystem.getDefaultUri(conf);
        // Point all Hadoop operations at the custom Kerberos ticket cache.
        conf.set("hadoop.security.kerberos.ticket.cache.path", args[2]);
        FileSystem fs = FileSystem.newInstance(defaultURI, conf, args[1]);
        InputStream is = fs.open(inputPath);
        byte[] buffer = new byte[4096];
        int nr = is.read(buffer);
        while (nr != -1) {
            System.out.write(buffer, 0, nr);
            nr = is.read(buffer);
        }
        return 0;
    }

    public static void main(String[] args) throws Exception {
        int returnCode = HadoopTest.runRead(args);
        System.exit(returnCode);
    }
}
{code}

{code}
[root@lstrost3 testhadoop]# pwd
/testhadoop

[root@lstrost3 testhadoop]# ls
HadoopTest.java

[root@lstrost3 testhadoop]# export CLASSPATH=`hadoop classpath --glob`:.

[root@lstrost3 testhadoop]# javac HadoopTest.java

[root@lstrost3 testhadoop]# java HadoopTest
HadoopTest  hadoop_file_path  hadoop_user  kerberos_cache

[root@lstrost3 testhadoop]# java HadoopTest /loki/loki.file loki /tmp/krb5cc_1006
18/09/27 23:23:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/09/27 23:23:21 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
Exception in thread "main" java.io.IOException: org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
    at org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:551)
    at org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:831)
    at org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
    at org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1393)
    at org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:1463)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:333)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:327)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:340)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:786)
    at HadoopTest.runRead(HadoopTest.java:18)
    at HadoopTest.main(HadoopTest.java:29)
Caused by: org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
    at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:333)
    at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:203)
    at
{code}

[jira] [Created] (HDDS-582) Remove ChunkOutputStreamEntry class from ChunkGroupOutputStream

2018-10-05 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-582:


 Summary: Remove ChunkOutputStreamEntry class from 
ChunkGroupOutputStream
 Key: HDDS-582
 URL: https://issues.apache.org/jira/browse/HDDS-582
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13964) TestRouterWebHDFSContractAppend fails with No Active Namenode under nameservice

2018-10-05 Thread Ayush Saxena (JIRA)
Ayush Saxena created HDFS-13964:
---

 Summary: TestRouterWebHDFSContractAppend fails with No Active 
Namenode under nameservice
 Key: HDFS-13964
 URL: https://issues.apache.org/jira/browse/HDFS-13964
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ayush Saxena
Assignee: Ayush Saxena


Reference

https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/916/testReport/junit/org.apache.hadoop.fs.contract.router.web/TestRouterWebHDFSContractAppend/testRenameFileBeingAppended/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-581) Bootstrap DN with private/public key pair

2018-10-05 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-581:
---

 Summary: Bootstrap DN with private/public key pair
 Key: HDDS-581
 URL: https://issues.apache.org/jira/browse/HDDS-581
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao


This will create a public/private key pair for the HDDS datanode if there isn't 
one available during secure DN startup.
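A minimal sketch of that startup check, assuming an RSA key pair via the JDK's KeyPairGenerator; the file layout and key parameters are assumptions, and the real implementation would presumably reuse the HDDS-100 key generator:
{code:java}
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.KeyPair;
import java.security.KeyPairGenerator;

public class DnKeyBootstrap {
  /** Generate a key pair at secure DN startup if none exists yet. */
  public static KeyPair bootstrapKeyPair(Path keyDir) throws Exception {
    if (Files.exists(keyDir.resolve("private.pem"))) {
      return null; // a key pair is already available, nothing to do
    }
    KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
    gen.initialize(2048);
    return gen.generateKeyPair(); // caller persists the pair under keyDir
  }
}
{code}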



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-580) Bootstrap OM/SCM with private/public key pair

2018-10-05 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-580:
---

 Summary: Bootstrap OM/SCM with private/public key pair
 Key: HDDS-580
 URL: https://issues.apache.org/jira/browse/HDDS-580
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao


We will need to add an API that leverages the key generator from HDDS-100 to 
generate a public/private key pair for OM/SCM. It will be called by the SCM/OM 
admin CLI with the "-init" command.
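A rough sketch of the CLI wiring; every name below is hypothetical until the HDDS-100 key generator API lands:
{code:java}
// Inside the hypothetical SCM/OM admin CLI entry point:
if ("-init".equals(cmd)) {
  HDDSKeyGenerator keyGen = new HDDSKeyGenerator(conf); // hypothetical, from HDDS-100
  KeyPair keyPair = keyGen.generateKey();               // hypothetical API
  persistKeyPair(keyPair, conf);                        // hypothetical helper
}
{code}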



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[OZONE] Community calls

2018-10-05 Thread Elek, Marton



Hi everybody,


We are starting a new community call series about Apache Hadoop Ozone. It's an 
informal discussion about current items, short-term plans, directions, and 
contribution possibilities.


Please join if you are interested or have questions about Ozone.

For more details, please check:

https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Community+Calls

Marton

ps: As noted in the wiki, this is not a replacement for the mailing lists. All 
main proposals/decisions will be published to the mailing list/wiki to 
generate further discussion.


-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-10-05 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/917/

[Oct 4, 2018 2:30:30 AM] (yqlin) HADOOP-15817. Reuse Object Mapper in 
KMSJSONReader. Contributed by
[Oct 4, 2018 5:31:33 PM] (wangda) YARN-8758. Support getting PreemptionMessage 
when using AMRMClientAsyn.
[Oct 4, 2018 5:53:39 PM] (wangda) YARN-8844. TestNMProxy unit test is failing. 
(Eric Yang via wangda)
[Oct 4, 2018 7:47:31 PM] (haibochen) YARN-8732. Add unit tests of min/max 
allocation for custom resource
[Oct 4, 2018 8:00:31 PM] (haibochen) YARN-8750. Refactor TestQueueMetrics. 
(Contributed by Szilard Nemeth)
[Oct 4, 2018 10:17:47 PM] (weichiu) HDFS-13877. HttpFS: Implement 
GETSNAPSHOTDIFF. Contributed by Siyao
[Oct 4, 2018 10:22:44 PM] (weichiu) HDFS-13950. ACL documentation update to 
indicate that ACL entries are




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine
 
   
org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceUtils.getComponentArrayJson(String, int, String) concatenates strings using + in a loop (YarnServiceUtils.java:[line 123]) 
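   (For context, the pattern FindBugs flags here and its usual fix, shown as a generic illustration rather than the actual YarnServiceUtils code:)

{code:java}
// Flagged pattern: each + allocates a new String, O(n^2) copying overall.
String json = "";
for (String component : components) {
  json = json + component + ",";
}

// Usual fix: accumulate in a StringBuilder.
StringBuilder sb = new StringBuilder();
for (String component : components) {
  sb.append(component).append(',');
}
String fixed = sb.toString();
{code}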

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency 
   hadoop.hdfs.TestLeaseRecovery2 
   hadoop.yarn.server.nodemanager.containermanager.TestContainerManager 
   hadoop.yarn.server.nodemanager.TestNodeManagerResync 
   
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/917/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/917/artifact/out/diff-compile-javac-root.txt
  [300K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/917/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/917/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/917/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/917/artifact/out/diff-patch-pylint.txt
  [40K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/917/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/917/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/917/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/917/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/917/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/917/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-submarine-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/917/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/917/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/917/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/917/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/917/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/917/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/917/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [4.0K]
   

[jira] [Resolved] (HDDS-508) Add robot framework to the apache/hadoop-runner baseimage

2018-10-05 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia resolved HDDS-508.

   Resolution: Fixed
Fix Version/s: 0.3.0

> Add robot framework to the apache/hadoop-runner baseimage
> -
>
> Key: HDDS-508
> URL: https://issues.apache.org/jira/browse/HDDS-508
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Elek, Marton
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2, newbie
> Fix For: 0.3.0
>
>
> In HDDS-352 we moved the acceptance tests to the dist folder. Currently the 
> framework is not part of the base image, so we need to install it every time.
> See the following lines in the 
> [test.sh|https://github.com/apache/hadoop/blob/trunk/hadoop-dist/src/main/smoketest/test.sh]:
> {code}
> docker-compose -f "$COMPOSE_FILE" exec datanode sudo apt-get update
> docker-compose -f "$COMPOSE_FILE" exec datanode sudo apt-get install -y 
> python-pip
> docker-compose -f "$COMPOSE_FILE" exec datanode sudo pip install 
> robotframework
> {code}
> This could be removed after we add these lines to the [docker 
> file|https://github.com/apache/hadoop/blob/docker-hadoop-runner/Dockerfile]:
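For illustration, the Dockerfile addition would presumably mirror the test.sh commands quoted above; a sketch, assuming the same apt-based image:
{code}
RUN apt-get update && \
    apt-get install -y python-pip && \
    pip install robotframework
{code}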



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-579) ContainerStateMachine should track last successful applied transaction index per container and fail subsequent transactions in case one fails

2018-10-05 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-579:


 Summary: ContainerStateMachine should track last successful 
applied transaction index per container and fail subsequent transactions in 
case one fails
 Key: HDDS-579
 URL: https://issues.apache.org/jira/browse/HDDS-579
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee


ContainerStateMachine will keep track of the last successfully applied 
transaction index and, on restart, inform Ratis of that index so that 
subsequent transactions can be reapplied from there.

Moreover, in case one transaction fails, all subsequent transactions on that 
container should fail in the ContainerStateMachine.
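A minimal sketch of the intended bookkeeping; all names here are assumptions, not the actual ContainerStateMachine code:
{code:java}
// Highest successfully applied index per container, plus failed containers.
private final Map<Long, Long> lastAppliedIndex = new ConcurrentHashMap<>();
private final Set<Long> failedContainers = ConcurrentHashMap.newKeySet();

void applyTransaction(long containerId, long index) throws IOException {
  if (failedContainers.contains(containerId)) {
    // A previous transaction on this container failed: fail fast.
    throw new IOException("Container " + containerId + " is marked failed");
  }
  try {
    // ... apply the actual write-chunk/put-block transaction here ...
    lastAppliedIndex.put(containerId, index); // reported to Ratis on restart
  } catch (IOException e) {
    failedContainers.add(containerId);
    throw e;
  }
}
{code}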



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13963) NN UI is broken with IE11

2018-10-05 Thread Daisuke Kobayashi (JIRA)
Daisuke Kobayashi created HDFS-13963:


 Summary: NN UI is broken with IE11
 Key: HDFS-13963
 URL: https://issues.apache.org/jira/browse/HDFS-13963
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode, ui
Affects Versions: 3.1.1
Reporter: Daisuke Kobayashi


Internet Explorer 11 cannot correctly display the Namenode web UI, although 
the NN itself starts successfully. I have confirmed on 3.1.1 (latest release) 
and 3.3.0-SNAPSHOT (current trunk) that the following message is shown.

{code}
Failed to retrieve data from /jmx?qry=java.lang:type=Memory, cause: 
SyntaxError: Invalid character
{code}

Apparently, this is because {{dfshealth.html}} runs in IE9 mode by default:

{code}
<meta http-equiv="X-UA-Compatible" content="IE=9" />
{code}

Once the compatibility mode is changed to IE11 through the developer tools, 
the UI is rendered correctly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13962) Add null check for add-replica pool to avoid lock acquiring

2018-10-05 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-13962:


 Summary: Add null check for add-replica pool to avoid lock 
acquiring
 Key: HDFS-13962
 URL: https://issues.apache.org/jira/browse/HDFS-13962
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.1.1
Reporter: Yiqun Lin


This is follow-up work for HDFS-13768. Mainly two places need to be updated 
(see the sketch after this list):

* Add a null check for the add-replica pool to avoid acquiring the lock.
* In {{ReplicaMap#addAndGet}}, it would be better to use 
{{FoldedTreeSet#addOrReplace}} instead of {{FoldedTreeSet#add}} for adding 
replica info.
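A rough sketch of both changes, with the surrounding ReplicaMap details assumed rather than quoted from trunk:
{code:java}
// 1) Look up the per-pool replica set first and return early on null,
//    so the lock is never acquired for an unknown block pool.
FoldedTreeSet<ReplicaInfo> set = map.get(bpid);
if (set == null) {
  return null;
}
synchronized (mutex) {
  // 2) addOrReplace updates an existing entry with the newer ReplicaInfo,
  //    where add would leave a stale entry in place.
  set.addOrReplace(replicaInfo);
  return replicaInfo;
}
{code}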



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-578) om-audit-log4j2.properties must be packaged in ozone-dist

2018-10-05 Thread Dinesh Chitlangia (JIRA)
Dinesh Chitlangia created HDDS-578:
--

 Summary: om-audit-log4j2.properties must be packaged in ozone-dist 
 Key: HDDS-578
 URL: https://issues.apache.org/jira/browse/HDDS-578
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia


After HDDS-447, it appears the om-audit-log4j2.properties file is not available 
in etc/hadoop.

This Jira aims to fix that, so that the audit logging configuration is 
available and audit logs are generated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org