[jira] [Created] (HDDS-653) TestMetadataStore#testIterator fails in Windows

2018-10-12 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDDS-653:
--

 Summary: TestMetadataStore#testIterator fails in Windows
 Key: HDDS-653
 URL: https://issues.apache.org/jira/browse/HDDS-653
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Affects Versions: 0.2.1
Reporter: Yiqun Lin
Assignee: Yiqun Lin


While running the unit tests for the hdds-common module, I found one failing unit test on Windows.
{noformat}
java.io.IOException: Unable to delete file: 
target\test\data\KbmK7CPN1M\MANIFEST-02
at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2381)
at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1679)
at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1575)
at 
org.apache.hadoop.utils.TestMetadataStore.testIterator(TestMetadataStore.java:166)
{noformat}

Looking into this, we forgot to close the DB store, which makes the file 
deletion fail on Windows because the OS keeps open files locked.
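
A minimal sketch of the fix, assuming MetadataStore exposes close() (the helper names here are illustrative, not the actual test code):

{code:java}
// Close the store before cleaning the directory: an open LevelDB/RocksDB
// instance keeps MANIFEST-* locked on Windows, so forceDelete fails.
MetadataStore dbStore = createStore(dbDir);  // illustrative helper
try {
  // ... exercise the iterator ...
} finally {
  dbStore.close();                   // release the file handles first
  FileUtils.deleteDirectory(dbDir);  // now deletion succeeds on Windows
}
{code}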






[jira] [Created] (HDDS-652) Properties in ozone-site.xml do not work well with IP

2018-10-12 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-652:
-

 Summary: Properties in ozone-site.xml do not work well with IP
 Key: HDDS-652
 URL: https://issues.apache.org/jira/browse/HDDS-652
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Namit Maheshwari


There have been cases where properties in ozone-site.xml do not work well 
with IP addresses.

If properties such as ozone.om.address are changed to use hostnames, they 
work well.

Ideally, this should work with both IP addresses and hostnames.
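
For illustration, a hostname-based setting of the kind that works (host and port are placeholders, not values taken from this report):

{code:xml}
<property>
  <name>ozone.om.address</name>
  <!-- Hostname form works reliably; the equivalent IP form,
       e.g. 10.0.0.5:9862, is what this issue reports as problematic. -->
  <value>om-host.example.com:9862</value>
</property>
{code}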






[jira] [Created] (HDDS-651) Rename o3 to o3fs for Filesystem

2018-10-12 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-651:
-

 Summary: Rename o3 to o3fs for Filesystem
 Key: HDDS-651
 URL: https://issues.apache.org/jira/browse/HDDS-651
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Namit Maheshwari


I propose that we rename o3 to o3fs for the filesystem scheme.

Using the same name o3 for different purposes creates a lot of confusion.
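
Under this proposal a filesystem URI would change as follows (bucket, volume, and key names are placeholders):

{noformat}
before: o3://bucket2.volume2/key
after:  o3fs://bucket2.volume2/key
{noformat}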






[jira] [Created] (HDDS-650) Spark job is not able to pick up Ozone configuration

2018-10-12 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-650:
-

 Summary: Spark job is not able to pick up Ozone configuration
 Key: HDDS-650
 URL: https://issues.apache.org/jira/browse/HDDS-650
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


Spark job is not able to pick up Ozone configuration.
{code:java}
-bash-4.2$ spark-shell --master yarn-client --jars 
/usr/hdp/current/hadoop-client/lib/hadoop-lzo-0.6.0.3.0.3.0-63.jar,/tmp/ozone-0.3.0-SNAPSHOT/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin-0.3.0-SNAPSHOT.jar,/tmp/ozone-0.3.0-SNAPSHOT/share/hadoop/ozonefs/hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar
Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" 
with specified deploy mode instead.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use 
setLogLevel(newLevel).
Spark context Web UI available at 
http://ctr-e138-1518143905142-510793-01-02.hwx.site:4040
Spark context available as 'sc' (master = yarn, app id = 
application_1539295307098_0011).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.2.3.0.3.0-63
      /_/

Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_181)
Type in expressions to have them evaluated.
Type :help for more information.

scala>

scala> val input = sc.textFile("o3://bucket2.volume2/passwd");
input: org.apache.spark.rdd.RDD[String] = o3://bucket2.volume2/passwd 
MapPartitionsRDD[1] at textFile at <console>:24

scala> val count = input.flatMap(line => line.split(" ")).map(word => (word, 
1)).reduceByKey(_+_);
count: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[4] at reduceByKey 
at <console>:25

scala> count.cache()
res0: count.type = ShuffledRDD[4] at reduceByKey at <console>:25

scala> count.saveAsTextFile("o3://bucket2.volume2/sparkout3");
[Stage 0:> (0 + 2) / 2]18/10/12 22:16:44 WARN TaskSetManager: Lost task 1.0 in 
stage 0.0 (TID 1, ctr-e138-1518143905142-510793-01-11.hwx.site, executor 
1): java.io.IOException: Couldn't create protocol class 
org.apache.hadoop.ozone.client.rpc.RpcClient
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:299)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169)
at 
org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:119)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:108)
at 
org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.spark.rdd.HadoopRDD$$anon$1.liftedTree1$1(HadoopRDD.scala:257)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:256)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:214)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:94)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalArgumentException: ozone.om.address must be 
defined. See https://wiki.apache.org/hadoop/Ozone#Configuration for details on 
configuring Ozone.
at org.apache.hadoop.ozone.OmUtils.getOmAddressForClients(OmUtils.java:70)
at org.apache.hadoop.ozone.client.rpc.RpcClient.<init>(RpcClient.java:114)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at ...
{code}
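
The root cause is that the executors never see ozone-site.xml, so ozone.om.address is undefined on their classpath. A hedged workaround sketch (the OM host and port are placeholders, not values from this cluster): set the key on the job's Hadoop configuration before the first o3 access, since HadoopRDD ships that configuration to the executors with the task:

{code:scala}
// In the same spark-shell session; Spark serializes sc.hadoopConfiguration
// to the executors, so the OM address travels with the job.
sc.hadoopConfiguration.set("ozone.om.address", "om-host.example.com:9862")
val input = sc.textFile("o3://bucket2.volume2/passwd")
{code}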

[jira] [Created] (HDDS-649) Parallel test execution is broken

2018-10-12 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-649:
--

 Summary: Parallel test execution is broken
 Key: HDDS-649
 URL: https://issues.apache.org/jira/browse/HDDS-649
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Arpit Agarwal


Parallel tests (with mvn test -Pparallel-tests) give unpredictable results, 
likely because surefire is parallelizing test methods within a class.

It looks like surefire has options to parallelize at the class level instead.
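
For reference, a sketch of the Surefire setting that restricts parallelism to whole classes (the exact profile wiring in the Hadoop poms may differ):

{code:xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- Run distinct test classes in parallel, but keep all methods of a
         class on one thread so intra-class state is not racy. -->
    <parallel>classes</parallel>
    <threadCount>4</threadCount>
  </configuration>
</plugin>
{code}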






[jira] [Created] (HDDS-648) hadoop-hdds and its sub modules have undefined hadoop component

2018-10-12 Thread Dinesh Chitlangia (JIRA)
Dinesh Chitlangia created HDDS-648:
--

 Summary: hadoop-hdds and its sub modules have undefined hadoop 
component
 Key: HDDS-648
 URL: https://issues.apache.org/jira/browse/HDDS-648
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia


Similar to HDDS-409, hadoop-hdds and its submodules have an undefined hadoop 
component.

When building the package, this creates an UNDEF hadoop component in the share 
folder:
 * 
./hadoop-hdds/sub-module/target/sub-module-X.Y.Z-SNAPSHOT/share/hadoop/UNDEF/lib
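
A sketch of the likely fix, assuming the same approach as HDDS-409: declare the hadoop.component property in the affected pom.xml files so the dist layout stops falling back to UNDEF:

{code:xml}
<properties>
  <!-- Consumed by the dist/assembly layout; without it the module's jars
       land under share/hadoop/UNDEF. -->
  <hadoop.component>hdds</hadoop.component>
</properties>
{code}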






[jira] [Created] (HDDS-647) TestOzoneConfigurationFields is failing for dfs.container.ratis.num.container.op.executors

2018-10-12 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-647:


 Summary: TestOzoneConfigurationFields is failing for 
dfs.container.ratis.num.container.op.executors
 Key: HDDS-647
 URL: https://issues.apache.org/jira/browse/HDDS-647
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Nanda kumar
Assignee: Nanda kumar


{noformat}
testCompareXmlAgainstConfigurationClass(org.apache.hadoop.ozone.TestOzoneConfigurationFields)  Time elapsed: 0.155 s  <<< FAILURE!
java.lang.AssertionError: ozone-default.xml has 1 properties missing in
  class org.apache.hadoop.ozone.OzoneConfigKeys
  class org.apache.hadoop.hdds.scm.ScmConfigKeys
  class org.apache.hadoop.ozone.om.OMConfigKeys
  class org.apache.hadoop.hdds.HddsConfigKeys
  class org.apache.hadoop.ozone.s3.S3GatewayConfigKeys
Entries: dfs.container.ratis.num.container.op.executors expected:<0> but was:<1>
{noformat}






[jira] [Created] (HDDS-646) TestChunkStreams.testErrorReadGroupInputStream fails

2018-10-12 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-646:


 Summary: TestChunkStreams.testErrorReadGroupInputStream fails
 Key: HDDS-646
 URL: https://issues.apache.org/jira/browse/HDDS-646
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Nanda kumar
Assignee: Nanda kumar


After HDDS-639, TestChunkStreams.testErrorReadGroupInputStream fails.






[jira] [Created] (HDFS-13991) Review of DiskBalancerCluster.java

2018-10-12 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HDFS-13991:
--

 Summary: Review of DiskBalancerCluster.java
 Key: HDFS-13991
 URL: https://issues.apache.org/jira/browse/HDFS-13991
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: diskbalancer
Affects Versions: 3.2.0
Reporter: BELUGA BEHR
 Attachments: HDFS-13991.1.patch

* Use {{ArrayList}} instead of {{LinkedList}} (see the sketch below)
* Simplify the code
* Use {{File}} object's file-system-agnostic features (see the sketch below)
* Re-order import statements
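
A hedged illustration of the two referenced points (paths and names are invented for the example, not taken from the patch):

{code:java}
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// ArrayList gives O(1) indexed access and better memory locality than
// LinkedList for the read-mostly lists used in this class.
List<String> nodeNames = new ArrayList<>();

// File(parent, child) is filesystem-agnostic: no hand-built "/" or "\\"
// separators concatenated into path strings.
File planFile = new File(new File("plans"), "diskbalancer.plan.json");
{code}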






[jira] [Created] (HDDS-645) Enable OzoneFS contract test by default

2018-10-12 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-645:
--

 Summary: Enable OzoneFS contract test by default
 Key: HDDS-645
 URL: https://issues.apache.org/jira/browse/HDDS-645
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: test
Reporter: Arpit Agarwal


[~msingh] pointed out that OzoneFS contract tests are not running by default 
and must be run manually. Let's fix that.






[jira] [Created] (HDFS-13990) Synchronization Issue With HashResolver.java

2018-10-12 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HDFS-13990:
--

 Summary: Synchronization Issue With HashResolver.java
 Key: HDFS-13990
 URL: https://issues.apache.org/jira/browse/HDFS-13990
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: federation
Affects Versions: 3.2.0
Reporter: BELUGA BEHR
 Attachments: HDFS-13990.1.patch

{code:java|title=HashResolver.java}
  private ConsistentHashRing getHashResolver(final Set namespaces) {
int hash = namespaces.hashCode();
ConsistentHashRing resolver = this.hashResolverMap.get(hash);
if (resolver == null) {
  resolver = new ConsistentHashRing(namespaces);
  this.hashResolverMap.put(hash, resolver);
}
return resolver;
  }
{code}

The {{hashResolverMap}} is a {{ConcurrentHashMap}}, so presumably there is 
concern here about concurrency.  However, there is no synchronization around 
this method, so two threads could call {{get(hash)}}, both see a null value, 
and then both put an entry into the {{Map}}.  Add synchronization here.
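
One way to close the race without an explicit synchronized block is the map's own atomic primitive (a sketch; the Set's element type is assumed, since the declaration above elides it):

{code:java}
private ConsistentHashRing getHashResolver(final Set<String> namespaces) {
  // computeIfAbsent is atomic on ConcurrentHashMap: at most one thread
  // constructs the ring for a given hash, and all callers see that instance.
  return this.hashResolverMap.computeIfAbsent(
      namespaces.hashCode(), hash -> new ConsistentHashRing(namespaces));
}
{code}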






[jira] [Resolved] (HDFS-13988) Allow for Creating Temporary Files

2018-10-12 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR resolved HDFS-13988.

Resolution: Duplicate

> Allow for Creating Temporary Files
> --
>
> Key: HDFS-13988
> URL: https://issues.apache.org/jira/browse/HDFS-13988
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, hdfs-client
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Priority: Major
>
> Allow clients to create temporary files in HDFS.
> {code:java|title=FileSystem.java}
> public FSDataOutputStreamBuilder createFile(Path path);
> public FSDataOutputStreamBuilder appendFile(Path path);
> // New feature
> public FSDataOutputStreamBuilder createTempFile(Path path, TimeUnit unit, 
> long ttl);
> {code}
> This would register the file with the NameNode to have a TTL.  The NameNode 
> would be responsible for deleting the files after some custom TTL.  This 
> would be helpful in the case that a MapReduce (or Spark) job fails during 
> execution and does not properly clean up after itself.






[jira] [Created] (HDDS-644) Rename dfs.container.ratis.num.container.op.threads

2018-10-12 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-644:
---

 Summary: Rename dfs.container.ratis.num.container.op.threads
 Key: HDDS-644
 URL: https://issues.apache.org/jira/browse/HDDS-644
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham


{code:java}
public static final String DFS_CONTAINER_RATIS_NUM_CONTAINER_OP_EXECUTORS_KEY
    = "dfs.container.ratis.num.container.op.threads";
{code}

This value should be changed to dfs.container.ratis.num.container.op.executors 
so that it matches the constant name.

HDDS-550 added this key in OzoneConfigKeys.java, but it is named differently 
in ozone-default.xml and ScmConfigKeys.java.






[jira] [Created] (HDDS-643) Parse Authorization header in a separate filter

2018-10-12 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-643:
---

 Summary: Parse Authorization header in a separate filter
 Key: HDDS-643
 URL: https://issues.apache.org/jira/browse/HDDS-643
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham


This Jira was created from an HDDS-522 review comment by [~elek]:
 # I think the authorization headers could be parsed in a separate filter, 
similar to the request IDs. But it could be implemented later. This is more 
like a prototype.
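
For illustration, a minimal JAX-RS filter of the kind suggested (the class and property names are invented; the S3 gateway's actual wiring may differ):

{code:java}
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.HttpHeaders;
import javax.ws.rs.ext.Provider;

@Provider
public class AuthorizationHeaderFilter implements ContainerRequestFilter {
  @Override
  public void filter(ContainerRequestContext ctx) {
    // Parse once per request and stash the result, so endpoint code reads
    // the parsed value instead of re-parsing the raw header everywhere.
    String header = ctx.getHeaderString(HttpHeaders.AUTHORIZATION);
    if (header != null) {
      ctx.setProperty("parsed.authorization", header.trim());
    }
  }
}
{code}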






[jira] [Created] (HDFS-13989) RBF: Add FSCK to the Router

2018-10-12 Thread Íñigo Goiri (JIRA)
Íñigo Goiri created HDFS-13989:
--

 Summary: RBF: Add FSCK to the Router
 Key: HDFS-13989
 URL: https://issues.apache.org/jira/browse/HDFS-13989
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Íñigo Goiri


The NameNode supports FSCK.
The Router should be able to forward FSCK requests to the right NameNode and 
aggregate the results.






[jira] [Created] (HDDS-642) Add chill mode exit condition for pipeline availability

2018-10-12 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-642:
--

 Summary: Add chill mode exit condition for pipeline availability
 Key: HDDS-642
 URL: https://issues.apache.org/jira/browse/HDDS-642
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Reporter: Arpit Agarwal


SCM should not exit chill mode until at least one write pipeline is available; 
otherwise the smoke tests are unreliable.

This is not an issue for real clusters.






[jira] [Created] (HDDS-641) Fix ozone filesystem robot test

2018-10-12 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-641:
--

 Summary: Fix ozone filesystem robot test
 Key: HDDS-641
 URL: https://issues.apache.org/jira/browse/HDDS-641
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Filesystem
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh


The Ozone filesystem tests are currently failing with a "no active pipeline" error.






[jira] [Created] (HDFS-13988) Allow for Creating Temporary Files

2018-10-12 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HDFS-13988:
--

 Summary: Allow for Creating Temporary Files
 Key: HDFS-13988
 URL: https://issues.apache.org/jira/browse/HDFS-13988
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs, hdfs-client
Affects Versions: 3.2.0
Reporter: BELUGA BEHR


Allow clients to create temporary files in HDFS.

{code:java|title=FileSystem.java}
public FSDataOutputStreamBuilder createFile(Path path);
public FSDataOutputStreamBuilder appendFile(Path path);

// New feature
public FSDataOutputStreamBuilder createTempFile(Path path, TimeUnit unit, long 
ttl);
{code}

This would register the file with the NameNode to have a TTL.  The NameNode 
would be responsible for deleting the files after some custom TTL.  This would 
be helpful in the case that a MapReduce (or Spark) job fails during execution 
and does not properly clean up after itself.
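
A hypothetical usage of the proposed builder (the TTL behavior shown is the feature being requested, not existing API):

{code:java}
// Assumes the proposed createTempFile() on FileSystem; the NameNode would
// delete the file automatically once the 24-hour TTL expires.
try (FSDataOutputStream out =
    fs.createTempFile(new Path("/tmp/scratch/part-0"), TimeUnit.HOURS, 24)
      .build()) {
  out.writeBytes("intermediate job output");
}
{code}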






[jira] [Created] (HDFS-13987) Review of RandomResolver Class

2018-10-12 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HDFS-13987:
--

 Summary: Review of RandomResolver Class
 Key: HDFS-13987
 URL: https://issues.apache.org/jira/browse/HDFS-13987
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: federation
Affects Versions: 3.2.0
Reporter: BELUGA BEHR
 Attachments: HDFS-13987.1.patch

* Use {{ThreadLocalRandom}} (see the sketch below)
* Do not create and copy a list just to pick a random index (see the sketch below)
* An early return in the method means that the 'ERROR' logging is skipped in 
some cases
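
A hedged sketch of the first two points (the method name is invented; this is not the attached patch):

{code:java}
import java.util.Iterator;
import java.util.Set;
import java.util.concurrent.ThreadLocalRandom;

// Pick a uniformly random element without copying the Set into a List:
// draw an index with the thread-local generator, then walk the iterator.
// Callers must ensure 'items' is non-empty.
static <T> T pickRandom(Set<T> items) {
  int target = ThreadLocalRandom.current().nextInt(items.size());
  Iterator<T> it = items.iterator();
  for (int i = 0; i < target; i++) {
    it.next();
  }
  return it.next();
}
{code}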






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-10-12 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/924/

[Oct 11, 2018 4:51:51 AM] (tasanuma) HADOOP-15785. [JDK10] Javadoc build fails 
on JDK 10 in hadoop-common.
[Oct 11, 2018 10:35:53 AM] (sunilg) YARN-8666. [UI2] Remove application tab 
from YARN Queue Page.
[Oct 11, 2018 10:45:25 AM] (sunilg) YARN-8753. [UI2] Lost nodes representation 
missing from Nodemanagers
[Oct 11, 2018 1:57:38 PM] (stevel) HADOOP-15837. DynamoDB table Update can fail 
S3A FS init. Contributed by
[Oct 11, 2018 3:54:57 PM] (jlowe) YARN-8861. executorLock is misleading in 
ContainerLaunch. Contributed by
[Oct 11, 2018 3:59:17 PM] (billie) YARN-8710. Service AM should set a finite 
limit on NM container max
[Oct 11, 2018 4:20:11 PM] (arp) HDDS-630. Rename KSM to OM in Hdds.md. 
Contributed by Takanobu Asanuma.
[Oct 11, 2018 4:25:21 PM] (billie) YARN-8777. Container Executor C binary 
change to execute interactive
[Oct 11, 2018 7:01:42 PM] (stevel) MAPREDUCE-7149. Javadocs for FileInputFormat 
and OutputFormat to mention
[Oct 11, 2018 7:03:56 PM] (stevel) HDFS-13951. HDFS DelegationTokenFetcher 
can't print non-HDFS tokens in a
[Oct 11, 2018 8:20:32 PM] (hanishakoneru) HDDS-601. On restart, SCM throws 'No 
such datanode' exception.
[Oct 11, 2018 8:49:39 PM] (inigoiri) HDFS-13968. BlockReceiver Array-Based 
Queue. Contributed by BELUGA BEHR.
[Oct 11, 2018 10:01:50 PM] (weichiu) HDFS-13878. HttpFS: Implement 
GETSNAPSHOTTABLEDIRECTORYLIST. Contributed
[Oct 11, 2018 10:08:22 PM] (xyao) HDDS-627. OzoneFS read from an MR Job throws
[Oct 11, 2018 10:12:36 PM] (xiao) HADOOP-15676. Cleanup TestSSLHttpServer. 
Contributed by Szilard Nemeth.
[Oct 11, 2018 10:35:44 PM] (rkanter) HADOOP-15717. TGT renewal thread does not 
log IOException (snemeth via
[Oct 11, 2018 11:26:07 PM] (vrushali) YARN-5742 Serve aggregated logs of 
historical apps from timeline
[Oct 12, 2018 12:12:47 AM] (jitendra) HDDS-550. Serialize ApplyTransaction 
calls per Container in
[Oct 12, 2018 12:21:27 AM] (arp) HDDS-636. Turn off 
ozone.metastore.rocksdb.statistics for now.
[Oct 12, 2018 12:25:25 AM] (aengineer) HDDS-634. OzoneFS: support basic user 
group and permission for file/dir.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.util.TestReadWriteDiskValidator 
   hadoop.util.TestBasicDiskValidator 
   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes 
   hadoop.hdfs.TestLeaseRecovery2 
   hadoop.hdfs.TestPread 
   hadoop.yarn.server.nodemanager.containermanager.TestContainerManager 
   hadoop.yarn.client.api.impl.TestNMClient 
   
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
 
   hadoop.utils.TestRocksDBStoreMBean 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/924/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/924/artifact/out/diff-compile-javac-root.txt
  [300K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/924/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/924/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/924/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/924/artifact/out/diff-patch-pylint.txt
  [40K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/924/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/924/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/924/artifact/out/whitespace-eol.txt
  [9.3M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/924/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/924/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/924/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [4.0K]
   

[jira] [Created] (HDDS-640) Fix Failing Unit Test cases

2018-10-12 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-640:


 Summary: Fix Failing Unit Test cases
 Key: HDDS-640
 URL: https://issues.apache.org/jira/browse/HDDS-640
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: OM, Ozone Client, SCM
Reporter: Shashikant Banerjee


The following tests seem to be failing consistently:

1. TestKeys#testPutAndGetKey

2. TestRocksDBStoreMBean#testJmxBeans

3. TestNodeFailure#testPipelineFail

4. TestNodeFailure#testPipelineFail

Test report for reference:

https://builds.apache.org/job/PreCommit-HDDS-Build/1381/testReport/






[jira] [Created] (HDDS-639) ChunkGroupInputStream gets into infinite loop after reading a block

2018-10-12 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-639:


 Summary: ChunkGroupInputStream gets into infinite loop after 
reading a block
 Key: HDDS-639
 URL: https://issues.apache.org/jira/browse/HDDS-639
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Reporter: Nanda kumar
Assignee: Nanda kumar


{{ChunkGroupInputStream}} doesn't exit the while loop even after reading all 
the chunks of the corresponding block.
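
For context, a hedged sketch of the usual guard in a multi-block read loop (field names are illustrative, not the actual ChunkGroupInputStream code): the loop must advance past an exhausted stream, or stop, instead of retrying it forever.

{code:java}
public int read(byte[] b, int off, int len) throws IOException {
  if (len == 0) {
    return 0;
  }
  int total = 0;
  while (len > 0 && currentIndex < streams.size()) {
    int n = streams.get(currentIndex).read(b, off, len);
    if (n <= 0) {
      currentIndex++;   // current block exhausted: advance, do not retry
      continue;
    }
    total += n;
    off += n;
    len -= n;
  }
  return total == 0 ? -1 : total;  // report EOF once nothing was read
}
{code}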






[jira] [Created] (HDDS-638) enable ratis snapshots for HDDS datanodes

2018-10-12 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-638:
--

 Summary: enable ratis snapshots for HDDS datanodes
 Key: HDDS-638
 URL: https://issues.apache.org/jira/browse/HDDS-638
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Affects Versions: 0.3.0
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh


Currently, on restart, an HDDS datanode starts applying log entries from the 
start of the Ratis log.
This can be avoided by taking a Ratis snapshot that persists the last stable 
state; on restart, the datanode then applies the log only from that index.
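
A hedged sketch of the hook involved (Ratis's StateMachine exposes takeSnapshot(); the class name and the persistence step here are illustrative):

{code:java}
import java.io.IOException;
import org.apache.ratis.server.protocol.TermIndex;
import org.apache.ratis.statemachine.impl.BaseStateMachine;

public class ContainerStateMachineSketch extends BaseStateMachine {
  @Override
  public long takeSnapshot() throws IOException {
    TermIndex last = getLastAppliedTermIndex();
    // Persist container state up to 'last' here (implementation-specific).
    // Returning the index tells Ratis which log entries may be skipped on
    // restart, so replay begins after this point instead of from zero.
    return last.getIndex();
  }
}
{code}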


