[jira] [Updated] (HDFS-15489) Documentation link is broken for Apache Hadoop

2020-07-22 Thread Namit Maheshwari (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDFS-15489:

Attachment: Screen Shot 2020-07-22 at 5.24.29 PM.png

> Documentation link is broken for Apache Hadoop
> --
>
> Key: HDFS-15489
> URL: https://issues.apache.org/jira/browse/HDFS-15489
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Namit Maheshwari
>Priority: Major
> Attachments: DocumentLinkBroken.mov, Screen Shot 2020-07-22 at 
> 5.24.22 PM.png, Screen Shot 2020-07-22 at 5.24.29 PM.png
>
>
>  
> Please see the attached video and screenshots:
> [^DocumentLinkBroken.mov]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15489) Documentation link is broken for Apache Hadoop

2020-07-22 Thread Namit Maheshwari (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDFS-15489:

Attachment: Screen Shot 2020-07-22 at 5.24.22 PM.png

> Documentation link is broken for Apache Hadoop
> --
>
> Key: HDFS-15489
> URL: https://issues.apache.org/jira/browse/HDFS-15489
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Namit Maheshwari
>Priority: Major
> Attachments: DocumentLinkBroken.mov, Screen Shot 2020-07-22 at 
> 5.24.22 PM.png, Screen Shot 2020-07-22 at 5.24.29 PM.png
>
>
>  
> Please see the attached video and screenshots:
> [^DocumentLinkBroken.mov]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15489) Documentation link is broken for Apache Hadoop

2020-07-22 Thread Namit Maheshwari (Jira)
Namit Maheshwari created HDFS-15489:
---

 Summary: Documentation link is broken for Apache Hadoop
 Key: HDFS-15489
 URL: https://issues.apache.org/jira/browse/HDFS-15489
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Namit Maheshwari
 Attachments: DocumentLinkBroken.mov

 

Please see the attached video and screenshots:

[^DocumentLinkBroken.mov]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2229) ozonefs paths need host and port information for non HA environment

2019-10-02 Thread Namit Maheshwari (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943104#comment-16943104
 ] 

Namit Maheshwari commented on HDDS-2229:


Discussed this with [~smeng]

{code}
-bash-4.2$ kinit -kt hadoopqa/keytabs/hdfs.headless.keytab hdfs
-bash-4.2$ hdfs dfs -ls o3fs://bucket1.volume1/
19/10/02 19:37:34 INFO ipc.Client: Retrying connect to server: 
0.0.0.0/0.0.0.0:9862. Already tried 0 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
19/10/02 19:37:35 INFO ipc.Client: Retrying connect to server: 
0.0.0.0/0.0.0.0:9862. Already tried 1 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
19/10/02 19:37:36 INFO ipc.Client: Retrying connect to server: 
0.0.0.0/0.0.0.0:9862. Already tried 2 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
^C
{code}

It does not work without host and port information, as seen above.
After specifying the info, it works fine:
{code}
-bash-4.2$ hdfs dfs -ls o3fs://bucket1.volume1.xxx-xjhgyv-4.xxx-xjhgyv.root.xxx.site:9862/
-bash-4.2$
{code}

> ozonefs paths need host and port information for non HA environment
> ---
>
> Key: HDDS-2229
> URL: https://issues.apache.org/jira/browse/HDDS-2229
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Priority: Major
>
>  
> For a non-HA environment, the ozonefs path needs to have host and port info, 
> like below:
> o3fs://bucket.volume.om-host:port/key
> Whereas, for HA environments, the path will change to use a nameservice, like 
> below:
> o3fs://bucket.volume.ns1/key
> This creates ambiguity. The user experience should be the same irrespective 
> of the usage. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2229) ozonefs paths need host and port information for non HA environment

2019-10-01 Thread Namit Maheshwari (Jira)
Namit Maheshwari created HDDS-2229:
--

 Summary: ozonefs paths need host and port information for non HA 
environment
 Key: HDDS-2229
 URL: https://issues.apache.org/jira/browse/HDDS-2229
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


 
For a non-HA environment, the ozonefs path needs to have host and port info, like below:

o3fs://bucket.volume.om-host:port/key

Whereas, for HA environments, the path will change to use a nameservice, like below:

o3fs://bucket.volume.ns1/key

This creates ambiguity. The user experience should be the same irrespective of 
the usage; for concreteness, see the sketch below.
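
The two invocations would look like this (a sketch; the non-HA command mirrors the one in the comment on this issue, and the hostname and nameservice id are illustrative):

{code}
# Non-HA: the OM host and port must be embedded in the path
hdfs dfs -ls o3fs://bucket1.volume1.om-host.example.com:9862/

# HA: the path would use the nameservice id instead
hdfs dfs -ls o3fs://bucket1.volume1.ns1/
{code}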




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-688) Hive Query hangs, if DN's are restarted before the query is submitted

2018-10-31 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari resolved HDDS-688.
---
Resolution: Fixed

This is fixed with the recent changes. Resolving it.

> Hive Query hangs, if DN's are restarted before the query is submitted
> -
>
> Key: HDDS-688
> URL: https://issues.apache.org/jira/browse/HDDS-688
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Mukul Kumar Singh
>Priority: Major
>
> Run a Hive insert query. It runs fine, as below:
> {code:java}
> 0: jdbc:hive2://ctr-e138-1518143905142-510793> insert into testo3 values(1, 
> "aa", 3.0);
> INFO : Compiling 
> command(queryId=hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607): 
> insert into testo3 values(1, "aa", 3.0)
> INFO : Semantic Analysis Completed (retrial = false)
> INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:_col0, 
> type:int, comment:null), FieldSchema(name:_col1, type:string, comment:null), 
> FieldSchema(name:_col2, type:float, comment:null)], properties:null)
> INFO : Completed compiling 
> command(queryId=hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607); 
> Time taken: 0.52 seconds
> INFO : Executing 
> command(queryId=hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607): 
> insert into testo3 values(1, "aa", 3.0)
> INFO : Query ID = hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607
> INFO : Total jobs = 1
> INFO : Launching Job 1 out of 1
> INFO : Starting task [Stage-1:MAPRED] in serial mode
> INFO : Subscribed to counters: [] for queryId: 
> hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607
> INFO : Session is already open
> INFO : Dag name: insert into testo3 values(1, "aa", 3.0) (Stage-1)
> INFO : Status: Running (Executing on YARN cluster with App id 
> application_1539383731490_0073)
> --
> VERTICES MODE STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
> --
> Map 1 .. container SUCCEEDED 1 1 0 0 0 0
> Reducer 2 .. container SUCCEEDED 1 1 0 0 0 0
> --
> VERTICES: 02/02 [==>>] 100% ELAPSED TIME: 11.95 s
> --
> INFO : Status: DAG finished successfully in 10.68 seconds
> INFO :
> INFO : Query Execution Summary
> INFO : 
> --
> INFO : OPERATION DURATION
> INFO : 
> --
> INFO : Compile Query 0.52s
> INFO : Prepare Plan 0.23s
> INFO : Get Query Coordinator (AM) 0.00s
> INFO : Submit Plan 0.11s
> INFO : Start DAG 0.57s
> INFO : Run DAG 10.68s
> INFO : 
> --
> INFO :
> INFO : Task Execution Summary
> INFO : 
> --
> INFO : VERTICES DURATION(ms) CPU_TIME(ms) GC_TIME(ms) INPUT_RECORDS 
> OUTPUT_RECORDS
> INFO : 
> --
> INFO : Map 1 7074.00 11,280 276 3 1
> INFO : Reducer 2 1074.00 2,040 0 1 0
> INFO : 
> --
> INFO :
> INFO : org.apache.tez.common.counters.DAGCounter:
> INFO : NUM_SUCCEEDED_TASKS: 2
> INFO : TOTAL_LAUNCHED_TASKS: 2
> INFO : AM_CPU_MILLISECONDS: 1390
> INFO : AM_GC_TIME_MILLIS: 0
> INFO : File System Counters:
> INFO : FILE_BYTES_READ: 135
> INFO : FILE_BYTES_WRITTEN: 135
> INFO : HDFS_BYTES_WRITTEN: 199
> INFO : HDFS_READ_OPS: 3
> INFO : HDFS_WRITE_OPS: 2
> INFO : HDFS_OP_CREATE: 1
> INFO : HDFS_OP_GET_FILE_STATUS: 3
> INFO : HDFS_OP_RENAME: 1
> INFO : org.apache.tez.common.counters.TaskCounter:
> INFO : SPILLED_RECORDS: 0
> INFO : NUM_SHUFFLED_INPUTS: 1
> INFO : NUM_FAILED_SHUFFLE_INPUTS: 0
> INFO : GC_TIME_MILLIS: 276
> INFO : TASK_DURATION_MILLIS: 8474
> INFO : CPU_MILLISECONDS: 13320
> INFO : PHYSICAL_MEMORY_BYTES: 4294967296
> INFO : VIRTUAL_MEMORY_BYTES: 11205029888
> INFO : COMMITTED_HEAP_BYTES: 4294967296
> INFO : INPUT_RECORDS_PROCESSED: 5
> INFO : INPUT_SPLIT_LENGTH_BYTES: 1
> INFO : OUTPUT_RECORDS: 1
> INFO : OUTPUT_LARGE_RECORDS: 0
> INFO : OUTPUT_BYTES: 94
> INFO : OUTPUT_BYTES_WITH_OVERHEAD: 102
> INFO : OUTPUT_BYTES_PHYSICAL: 127
> INFO : ADDITIONAL_SPILLS_BYTES_WRITTEN: 0
> INFO : 

[jira] [Comment Edited] (HDDS-672) Spark shell throws OzoneFileSystem not found

2018-10-18 Thread Namit Maheshwari (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16652535#comment-16652535
 ] 

Namit Maheshwari edited comment on HDDS-672 at 10/18/18 3:33 PM:
-

{code:java}
-bash-4.2$ spark-shell --master yarn-client
Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" 
with specified deploy mode instead.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use 
setLogLevel(newLevel).
Spark context Web UI available at 
http://ctr-e138-1518143905142-510793-01-02.hwx.site:4040
Spark context available as 'sc' (master = yarn, app id = 
application_1539383731490_0051).
Spark session available as 'spark'.
Welcome to
 __
/ __/__ ___ _/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 2.3.2.3.0.3.0-63
/_/

Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_181)
Type in expressions to have them evaluated.
Type :help for more information.

scala> val input = sc.textFile("o3://bucket2.volume2/passwd");
input: org.apache.spark.rdd.RDD[String] = o3://bucket2.volume2/passwd 
MapPartitionsRDD[1] at textFile at <console>:24

scala> val count = input.flatMap(line => line.split(" ")).map(word => (word, 
1)).reduceByKey(_+_);
java.lang.RuntimeException: java.lang.ClassNotFoundException: Class 
org.apache.hadoop.fs.ozone.OzoneFileSystem not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2596)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3320)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3352)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
at 
org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:268)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:239)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:325)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:200)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at 
org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:46)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at 
org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:46)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at 
org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:46)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at org.apache.spark.Partitioner$$anonfun$4.apply(Partitioner.scala:78)
at org.apache.spark.Partitioner$$anonfun$4.apply(Partitioner.scala:78)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.immutable.List.map(List.scala:285)
at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:78)
at 
org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:326)
at 
org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:326)
at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:325)
... 49 elided
Caused by: java.lang.ClassNotFoundException: Class 
org.apache.hadoop.fs.ozone.OzoneFileSystem not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2500)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2594)
... 93 more

scala>
{code}
 

It works fine if the --jars option is specified, as below:
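
(The working command is cut off in this digest. A hypothetical sketch, using the Ozone filesystem jar named in HDDS-650; the jar location is an assumption:)

{code}
# Hypothetical sketch: ship the Ozone filesystem jar to the driver and
# executors so o3:// paths resolve. The jar path is an assumption.
spark-shell --master yarn \
  --jars /usr/hdp/3.0.3.0-63/hadoop/lib/hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar
{code}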

[jira] [Commented] (HDDS-650) Spark job is not able to pick up Ozone configuration

2018-10-18 Thread Namit Maheshwari (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655438#comment-16655438
 ] 

Namit Maheshwari commented on HDDS-650:
---

Yes, we can just use hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar and remove the 
hadoop-ozone-datanode-plugin-0.3.0-SNAPSHOT.jar from the classpath. Thanks [~elek]
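
If editing core-site.xml is not desirable, the same two properties from the description below can also be passed per job through Spark's spark.hadoop.* configuration passthrough. A sketch, in which the hostnames and the SCM client port are assumptions (9862 is the OM port seen elsewhere in these reports):

{code}
# Sketch: forward the Ozone endpoints into the Hadoop configuration of
# the Spark session. Hostnames and the SCM port 9860 are assumptions.
spark-shell --master yarn \
  --conf spark.hadoop.ozone.om.address=om-host.example.com:9862 \
  --conf spark.hadoop.ozone.scm.client.address=scm-host.example.com:9860
{code}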

> Spark job is not able to pick up Ozone configuration
> 
>
> Key: HDDS-650
> URL: https://issues.apache.org/jira/browse/HDDS-650
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Priority: Major
>  Labels: app-compat
>
> Spark job is not able to pick up the Ozone configuration.
>  Tried copying ozone-site.xml to the /etc/spark2/conf directory as well; it 
> does not work.
> This works fine, however, when the following are specified in core-site.xml:
>  # {{ozone.om.address}}
>  # {{ozone.scm.client.address}}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-688) Hive Query hangs, if DN's are restarted before the query is submitted

2018-10-17 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-688:
--
Target Version/s: 0.3.0

> Hive Query hangs, if DN's are restarted before the query is submitted
> -
>
> Key: HDDS-688
> URL: https://issues.apache.org/jira/browse/HDDS-688
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Priority: Major
>
> Run a Hive insert query. It runs fine, as below:
> {code:java}
> 0: jdbc:hive2://ctr-e138-1518143905142-510793> insert into testo3 values(1, 
> "aa", 3.0);
> INFO : Compiling 
> command(queryId=hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607): 
> insert into testo3 values(1, "aa", 3.0)
> INFO : Semantic Analysis Completed (retrial = false)
> INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:_col0, 
> type:int, comment:null), FieldSchema(name:_col1, type:string, comment:null), 
> FieldSchema(name:_col2, type:float, comment:null)], properties:null)
> INFO : Completed compiling 
> command(queryId=hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607); 
> Time taken: 0.52 seconds
> INFO : Executing 
> command(queryId=hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607): 
> insert into testo3 values(1, "aa", 3.0)
> INFO : Query ID = hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607
> INFO : Total jobs = 1
> INFO : Launching Job 1 out of 1
> INFO : Starting task [Stage-1:MAPRED] in serial mode
> INFO : Subscribed to counters: [] for queryId: 
> hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607
> INFO : Session is already open
> INFO : Dag name: insert into testo3 values(1, "aa", 3.0) (Stage-1)
> INFO : Status: Running (Executing on YARN cluster with App id 
> application_1539383731490_0073)
> --
> VERTICES MODE STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
> --
> Map 1 .. container SUCCEEDED 1 1 0 0 0 0
> Reducer 2 .. container SUCCEEDED 1 1 0 0 0 0
> --
> VERTICES: 02/02 [==>>] 100% ELAPSED TIME: 11.95 s
> --
> INFO : Status: DAG finished successfully in 10.68 seconds
> INFO :
> INFO : Query Execution Summary
> INFO : 
> --
> INFO : OPERATION DURATION
> INFO : 
> --
> INFO : Compile Query 0.52s
> INFO : Prepare Plan 0.23s
> INFO : Get Query Coordinator (AM) 0.00s
> INFO : Submit Plan 0.11s
> INFO : Start DAG 0.57s
> INFO : Run DAG 10.68s
> INFO : 
> --
> INFO :
> INFO : Task Execution Summary
> INFO : 
> --
> INFO : VERTICES DURATION(ms) CPU_TIME(ms) GC_TIME(ms) INPUT_RECORDS 
> OUTPUT_RECORDS
> INFO : 
> --
> INFO : Map 1 7074.00 11,280 276 3 1
> INFO : Reducer 2 1074.00 2,040 0 1 0
> INFO : 
> --
> INFO :
> INFO : org.apache.tez.common.counters.DAGCounter:
> INFO : NUM_SUCCEEDED_TASKS: 2
> INFO : TOTAL_LAUNCHED_TASKS: 2
> INFO : AM_CPU_MILLISECONDS: 1390
> INFO : AM_GC_TIME_MILLIS: 0
> INFO : File System Counters:
> INFO : FILE_BYTES_READ: 135
> INFO : FILE_BYTES_WRITTEN: 135
> INFO : HDFS_BYTES_WRITTEN: 199
> INFO : HDFS_READ_OPS: 3
> INFO : HDFS_WRITE_OPS: 2
> INFO : HDFS_OP_CREATE: 1
> INFO : HDFS_OP_GET_FILE_STATUS: 3
> INFO : HDFS_OP_RENAME: 1
> INFO : org.apache.tez.common.counters.TaskCounter:
> INFO : SPILLED_RECORDS: 0
> INFO : NUM_SHUFFLED_INPUTS: 1
> INFO : NUM_FAILED_SHUFFLE_INPUTS: 0
> INFO : GC_TIME_MILLIS: 276
> INFO : TASK_DURATION_MILLIS: 8474
> INFO : CPU_MILLISECONDS: 13320
> INFO : PHYSICAL_MEMORY_BYTES: 4294967296
> INFO : VIRTUAL_MEMORY_BYTES: 11205029888
> INFO : COMMITTED_HEAP_BYTES: 4294967296
> INFO : INPUT_RECORDS_PROCESSED: 5
> INFO : INPUT_SPLIT_LENGTH_BYTES: 1
> INFO : OUTPUT_RECORDS: 1
> INFO : OUTPUT_LARGE_RECORDS: 0
> INFO : OUTPUT_BYTES: 94
> INFO : OUTPUT_BYTES_WITH_OVERHEAD: 102
> INFO : OUTPUT_BYTES_PHYSICAL: 127
> INFO : ADDITIONAL_SPILLS_BYTES_WRITTEN: 0
> INFO : ADDITIONAL_SPILLS_BYTES_READ: 0
> INFO : ADDITIONAL_SPILL_COUNT: 0
> INFO : SHUFFLE_BYTES: 103
> INFO 

[jira] [Updated] (HDDS-689) Datanode shuts down on restart

2018-10-17 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-689:
--
Target Version/s: 0.3.0

> Datanode shuts down on restart
> --
>
> Key: HDDS-689
> URL: https://issues.apache.org/jira/browse/HDDS-689
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Priority: Major
>
> Restart all the 8 DNs in the cluster. 2 of them shut down, as below:
> {code:java}
> 2018-10-18 01:10:57,102 ERROR impl.StateMachineUpdater 
> (ExitUtils.java:terminate(86)) - Terminating with exit status 2: 
> StateMachineUpdater-69d15283-4e2e-4c30-a028-f2bad0f83cc1: the 
> StateMachineUpdater hits Throwable
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.CompletableFuture$AsyncSupply@7ee4907b rejected from 
> java.util.concurrent.ThreadPoolExecutor@702371d7[Terminated, pool size = 0, 
> active threads = 0, queued tasks = 0, completed tasks = 0]
> at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
> at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
> at 
> java.util.concurrent.Executors$DelegatedExecutorService.execute(Executors.java:668)
> at 
> java.util.concurrent.CompletableFuture.asyncSupplyStage(CompletableFuture.java:1604)
> at 
> java.util.concurrent.CompletableFuture.supplyAsync(CompletableFuture.java:1830)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.applyTransaction(ContainerStateMachine.java:433)
> at 
> org.apache.ratis.server.impl.RaftServerImpl.applyLogToStateMachine(RaftServerImpl.java:1093)
> at 
> org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:148)
> at java.lang.Thread.run(Thread.java:748)
> 2018-10-18 01:10:57,107 WARN fs.CachingGetSpaceUsed 
> (CachingGetSpaceUsed.java:run(183)) - Thread Interrupted waiting to refresh 
> disk information: sleep interrupted
> 2018-10-18 01:10:57,108 INFO datanode.DataNode (LogAdapter.java:info(51)) - 
> SHUTDOWN_MSG:
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-689) Datanode shuts down on restart

2018-10-17 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-689:
-

 Summary: Datanode shuts down on restart
 Key: HDDS-689
 URL: https://issues.apache.org/jira/browse/HDDS-689
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


Restart all the 8 DNs in the cluster. 2 of them shut down, as below:
{code:java}
2018-10-18 01:10:57,102 ERROR impl.StateMachineUpdater 
(ExitUtils.java:terminate(86)) - Terminating with exit status 2: 
StateMachineUpdater-69d15283-4e2e-4c30-a028-f2bad0f83cc1: the 
StateMachineUpdater hits Throwable
java.util.concurrent.RejectedExecutionException: Task 
java.util.concurrent.CompletableFuture$AsyncSupply@7ee4907b rejected from 
java.util.concurrent.ThreadPoolExecutor@702371d7[Terminated, pool size = 0, 
active threads = 0, queued tasks = 0, completed tasks = 0]
at 
java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
at 
java.util.concurrent.Executors$DelegatedExecutorService.execute(Executors.java:668)
at 
java.util.concurrent.CompletableFuture.asyncSupplyStage(CompletableFuture.java:1604)
at 
java.util.concurrent.CompletableFuture.supplyAsync(CompletableFuture.java:1830)
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.applyTransaction(ContainerStateMachine.java:433)
at 
org.apache.ratis.server.impl.RaftServerImpl.applyLogToStateMachine(RaftServerImpl.java:1093)
at 
org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:148)
at java.lang.Thread.run(Thread.java:748)
2018-10-18 01:10:57,107 WARN fs.CachingGetSpaceUsed 
(CachingGetSpaceUsed.java:run(183)) - Thread Interrupted waiting to refresh 
disk information: sleep interrupted
2018-10-18 01:10:57,108 INFO datanode.DataNode (LogAdapter.java:info(51)) - 
SHUTDOWN_MSG:
{code}
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-688) Hive Query hangs, if DN's are restarted before the query is submitted

2018-10-17 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-688:
-

 Summary: Hive Query hangs, if DN's are restarted before the query 
is submitted
 Key: HDDS-688
 URL: https://issues.apache.org/jira/browse/HDDS-688
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


Run a Hive insert query. It runs fine, as below:
{code:java}
0: jdbc:hive2://ctr-e138-1518143905142-510793> insert into testo3 values(1, 
"aa", 3.0);
INFO : Compiling 
command(queryId=hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607): 
insert into testo3 values(1, "aa", 3.0)
INFO : Semantic Analysis Completed (retrial = false)
INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:_col0, 
type:int, comment:null), FieldSchema(name:_col1, type:string, comment:null), 
FieldSchema(name:_col2, type:float, comment:null)], properties:null)
INFO : Completed compiling 
command(queryId=hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607); Time 
taken: 0.52 seconds
INFO : Executing 
command(queryId=hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607): 
insert into testo3 values(1, "aa", 3.0)
INFO : Query ID = hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607
INFO : Total jobs = 1
INFO : Launching Job 1 out of 1
INFO : Starting task [Stage-1:MAPRED] in serial mode
INFO : Subscribed to counters: [] for queryId: 
hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607
INFO : Session is already open
INFO : Dag name: insert into testo3 values(1, "aa", 3.0) (Stage-1)
INFO : Status: Running (Executing on YARN cluster with App id 
application_1539383731490_0073)

--
VERTICES MODE STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
--
Map 1 .. container SUCCEEDED 1 1 0 0 0 0
Reducer 2 .. container SUCCEEDED 1 1 0 0 0 0
--
VERTICES: 02/02 [==>>] 100% ELAPSED TIME: 11.95 s
--
INFO : Status: DAG finished successfully in 10.68 seconds
INFO :
INFO : Query Execution Summary
INFO : 
--
INFO : OPERATION DURATION
INFO : 
--
INFO : Compile Query 0.52s
INFO : Prepare Plan 0.23s
INFO : Get Query Coordinator (AM) 0.00s
INFO : Submit Plan 0.11s
INFO : Start DAG 0.57s
INFO : Run DAG 10.68s
INFO : 
--
INFO :
INFO : Task Execution Summary
INFO : 
--
INFO : VERTICES DURATION(ms) CPU_TIME(ms) GC_TIME(ms) INPUT_RECORDS 
OUTPUT_RECORDS
INFO : 
--
INFO : Map 1 7074.00 11,280 276 3 1
INFO : Reducer 2 1074.00 2,040 0 1 0
INFO : 
--
INFO :
INFO : org.apache.tez.common.counters.DAGCounter:
INFO : NUM_SUCCEEDED_TASKS: 2
INFO : TOTAL_LAUNCHED_TASKS: 2
INFO : AM_CPU_MILLISECONDS: 1390
INFO : AM_GC_TIME_MILLIS: 0
INFO : File System Counters:
INFO : FILE_BYTES_READ: 135
INFO : FILE_BYTES_WRITTEN: 135
INFO : HDFS_BYTES_WRITTEN: 199
INFO : HDFS_READ_OPS: 3
INFO : HDFS_WRITE_OPS: 2
INFO : HDFS_OP_CREATE: 1
INFO : HDFS_OP_GET_FILE_STATUS: 3
INFO : HDFS_OP_RENAME: 1
INFO : org.apache.tez.common.counters.TaskCounter:
INFO : SPILLED_RECORDS: 0
INFO : NUM_SHUFFLED_INPUTS: 1
INFO : NUM_FAILED_SHUFFLE_INPUTS: 0
INFO : GC_TIME_MILLIS: 276
INFO : TASK_DURATION_MILLIS: 8474
INFO : CPU_MILLISECONDS: 13320
INFO : PHYSICAL_MEMORY_BYTES: 4294967296
INFO : VIRTUAL_MEMORY_BYTES: 11205029888
INFO : COMMITTED_HEAP_BYTES: 4294967296
INFO : INPUT_RECORDS_PROCESSED: 5
INFO : INPUT_SPLIT_LENGTH_BYTES: 1
INFO : OUTPUT_RECORDS: 1
INFO : OUTPUT_LARGE_RECORDS: 0
INFO : OUTPUT_BYTES: 94
INFO : OUTPUT_BYTES_WITH_OVERHEAD: 102
INFO : OUTPUT_BYTES_PHYSICAL: 127
INFO : ADDITIONAL_SPILLS_BYTES_WRITTEN: 0
INFO : ADDITIONAL_SPILLS_BYTES_READ: 0
INFO : ADDITIONAL_SPILL_COUNT: 0
INFO : SHUFFLE_BYTES: 103
INFO : SHUFFLE_BYTES_DECOMPRESSED: 102
INFO : SHUFFLE_BYTES_TO_MEM: 0
INFO : SHUFFLE_BYTES_TO_DISK: 0
INFO : SHUFFLE_BYTES_DISK_DIRECT: 103
INFO : SHUFFLE_PHASE_TIME: 154
INFO : FIRST_EVENT_RECEIVED: 108
INFO : LAST_EVENT_RECEIVED: 108
INFO : HIVE:
INFO : CREATED_FILES: 2
INFO : DESERIALIZE_ERRORS: 0
INFO : RECORDS_IN_Map_1: 3
INFO : RECORDS_OUT_0: 1
INFO : RECORDS_OUT_1_default.testo3: 1
INFO : RECORDS_OUT_INTERMEDIATE_Map_1: 1
INFO : 

[jira] [Commented] (HDDS-687) SCM is not able to restart

2018-10-17 Thread Namit Maheshwari (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16654418#comment-16654418
 ] 

Namit Maheshwari commented on HDDS-687:
---

{code:java}
2018-10-17 23:43:07,534 INFO org.apache.hadoop.hdds.scm.node.SCMNodeManager: 
Registered Data node : cd87320e-f0d1-4ecd-bb0f-502dedb0fdc5{ip: 172.27.87.64, 
host: ctr-e138-1518143905142-510793-01-05.hwx.site}
2018-10-17 23:43:08,639 INFO org.apache.hadoop.hdds.scm.node.SCMNodeManager: 
Registered Data node : 9c3b9b1c-22d1-4be4-84f6-e480421fb372{ip: 172.27.52.130, 
host: ctr-e138-1518143905142-510793-01-07.hwx.site}
2018-10-17 23:43:08,737 INFO org.apache.hadoop.hdds.scm.node.SCMNodeManager: 
Registered Data node : 07163755-db5b-48a6-920e-411164277129{ip: 172.27.79.197, 
host: ctr-e138-1518143905142-510793-01-04.hwx.site}
2018-10-17 23:43:11,045 INFO org.apache.hadoop.hdds.scm.node.SCMNodeManager: 
Registered Data node : 284d4bbe-3eb9-43a5-b972-931ebc5cd68e{ip: 172.27.16.145, 
host: ctr-e138-1518143905142-510793-01-11.hwx.site}
2018-10-17 23:43:13,663 INFO org.apache.hadoop.hdds.scm.node.SCMNodeManager: 
Registered Data node : c24d7f8e-8ff2-41a1-95cf-c5810e9e68cf{ip: 172.27.56.9, 
host: ctr-e138-1518143905142-510793-01-02.hwx.site}
2018-10-17 23:43:15,891 INFO org.apache.hadoop.hdds.scm.node.SCMNodeManager: 
Registered Data node : 90ecb353-8df8-4212-87f4-8caeff7fca65{ip: 172.27.68.128, 
host: ctr-e138-1518143905142-510793-01-08.hwx.site}
2018-10-17 23:53:00,001 INFO 
org.apache.hadoop.hdds.scm.pipelines.ratis.RatisManagerImpl: Allocating a new 
ratis pipeline of size: 3 id: pipelineId=7d29d891-bcce-4d77-a693-9c302d9be658
2018-10-17 23:53:01,917 INFO 
org.apache.hadoop.hdds.scm.pipelines.ratis.RatisManagerImpl: Allocating a new 
ratis pipeline of size: 3 id: pipelineId=9ad250a5-3fa9-4727-928e-f996264db5d9
2018-10-17 23:53:05,775 INFO 
org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer: Object type 
container id 2 op create new stage complete
2018-10-17 23:53:06,350 INFO 
org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer: Object type 
container id 3 op create new stage complete
2018-10-17 23:53:22,857 INFO 
org.apache.hadoop.hdds.scm.server.SCMBlockProtocolServer: SCM is informed by OM 
to delete 2 blocks
2018-10-17 23:53:22,858 INFO org.apache.hadoop.hdds.scm.block.BlockManagerImpl: 
Deleting blocks 
org.apache.hadoop.hdds.client.BlockID@71fa87de[containerID=3,localID=100913668837605377]
2018-10-17 23:53:22,867 INFO org.apache.hadoop.hdds.scm.block.BlockManagerImpl: 
Deleting blocks 
org.apache.hadoop.hdds.client.BlockID@7abf3ea[containerID=2,localID=100913668607246336]
2018-10-18 00:02:33,340 INFO 
org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer: Object type 
container id 4 op create new stage complete
2018-10-18 00:02:35,269 INFO 
org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer: Object type 
container id 5 op create new stage complete
2018-10-18 00:03:22,922 INFO 
org.apache.hadoop.hdds.scm.server.SCMBlockProtocolServer: SCM is informed by OM 
to delete 1 blocks
2018-10-18 00:03:22,923 INFO org.apache.hadoop.hdds.scm.block.BlockManagerImpl: 
Deleting blocks 
org.apache.hadoop.hdds.client.BlockID@6da5adf6[containerID=5,localID=100913706073063427]
2018-10-18 00:04:37,666 INFO 
org.apache.hadoop.hdds.scm.pipelines.PipelineActionEventHandler: Closing 
pipeline pipelineId=9ad250a5-3fa9-4727-928e-f996264db5d9 for 
reason:9da59935-3b86-457b-a00b-452c4c144b89 has not seen follower/s 
9c3b9b1c-22d1-4be4-84f6-e480421fb372 for 120031ms
2018-10-18 00:04:37,667 INFO 
org.apache.hadoop.hdds.scm.pipelines.PipelineSelector: Finalizing pipeline. 
pipelineID: pipelineId=9ad250a5-3fa9-4727-928e-f996264db5d9
2018-10-18 00:04:37,668 INFO 
org.apache.hadoop.hdds.scm.container.CloseContainerEventHandler: Close 
container Event triggered for container : 5
2018-10-18 00:04:37,672 INFO 
org.apache.hadoop.hdds.scm.container.CloseContainerEventHandler: Close 
container Event triggered for container : 7
2018-10-18 00:04:37,672 INFO 
org.apache.hadoop.hdds.scm.container.CloseContainerEventHandler: Close 
container Event triggered for container : 9
2018-10-18 00:04:37,673 INFO 
org.apache.hadoop.hdds.scm.container.CloseContainerEventHandler: Close 
container Event triggered for container : 11
2018-10-18 00:04:37,673 INFO 
org.apache.hadoop.hdds.scm.container.CloseContainerEventHandler: Close 
container Event triggered for container : 13
2018-10-18 00:04:37,674 INFO 
org.apache.hadoop.hdds.scm.container.CloseContainerEventHandler: Close 
container Event triggered for container : 15
2018-10-18 00:04:37,675 INFO 
org.apache.hadoop.hdds.scm.container.CloseContainerEventHandler: Close 
container Event triggered for container : 17
2018-10-18 00:04:37,675 INFO 
org.apache.hadoop.hdds.scm.container.CloseContainerEventHandler: Close 
container Event triggered for container : 2
2018-10-18 00:04:37,676 INFO 

[jira] [Created] (HDDS-687) SCM is not able to restart

2018-10-17 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-687:
-

 Summary: SCM is not able to restart
 Key: HDDS-687
 URL: https://issues.apache.org/jira/browse/HDDS-687
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


SCM is not able to come up on restart.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-663) Lot of "Removed undeclared tags" logger while running commands

2018-10-17 Thread Namit Maheshwari (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16654023#comment-16654023
 ] 

Namit Maheshwari commented on HDDS-663:
---

[~hanishakoneru] - I am using Hadoop 3.1.1.

Which version of Hadoop has the fix?
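
In the meantime, the noise can be silenced per command with Hadoop's standard root-logger override (a workaround sketch, not a fix for the undeclared-tags logging itself):

{code}
# Sketch: raise the client-side log level so the INFO lines from
# conf.Configuration are not printed for this one command.
HADOOP_ROOT_LOGGER=WARN,console hdfs dfs -ls o3://bucket2.volume2/mr_jobEE
{code}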

> Lot of "Removed undeclared tags" logger while running commands
> --
>
> Key: HDDS-663
> URL: https://issues.apache.org/jira/browse/HDDS-663
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Namit Maheshwari
>Priority: Major
>  Labels: newbie
>
> While running commands against OzoneFs, I see a lot of log lines like below:
> {code:java}
> -bash-4.2$ hdfs dfs -ls o3://bucket2.volume2/mr_jobEE
> 18/10/15 20:29:17 INFO conf.Configuration: Removed undeclared tags:
> 18/10/15 20:29:18 INFO conf.Configuration: Removed undeclared tags:
> Found 2 items
> rw-rw-rw 1 hdfs hdfs 0 2018-10-15 20:28 o3://bucket2.volume2/mr_jobEE/_SUCCESS
> rw-rw-rw 1 hdfs hdfs 5017 1970-07-23 04:33 
> o3://bucket2.volume2/mr_jobEE/part-r-0
> 18/10/15 20:29:19 INFO conf.Configuration: Removed undeclared tags:
> -bash-4.2$ {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-674) Not able to get key after distcp job passes

2018-10-16 Thread Namit Maheshwari (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16652729#comment-16652729
 ] 

Namit Maheshwari commented on HDDS-674:
---

{code:java}

-bash-4.2$ hadoop fs -ls /tmp/mr_jobs/input/
Found 1 items
-rw-r--r-- 3 root hdfs 215755 2018-10-09 06:37 
/tmp/mr_jobs/input/wordcount_input_1.txt
-bash-4.2$ hadoop distcp /tmp/mr_jobs/input/ o3://bucket2.volume2/distcp
ERROR: Tools helper /usr/hdp/3.0.3.0-63/hadoop/libexec/tools/hadoop-distcp.sh 
was not found.
18/10/17 00:21:12 INFO conf.Configuration: Removed undeclared tags:
18/10/17 00:21:13 INFO conf.Configuration: Removed undeclared tags:
18/10/17 00:21:13 INFO tools.DistCp: Input Options: 
DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
useRdiff=false, fromSnapshot=null, toSnapshot=null, skipCRC=false, 
blocking=true, numListstatusThreads=0, maxMaps=20, mapBandwidth=0.0, 
copyStrategy='uniformsize', preserveStatus=[BLOCKSIZE], atomicWorkPath=null, 
logPath=null, sourceFileListing=null, sourcePaths=[/tmp/mr_jobs/input], 
targetPath=o3://bucket2.volume2/distcp, filtersFile='null', blocksPerChunk=0, 
copyBufferSize=8192, verboseLog=false}, sourcePaths=[/tmp/mr_jobs/input], 
targetPathExists=false, preserveRawXattrs=false
18/10/17 00:21:13 INFO conf.Configuration: Removed undeclared tags:
18/10/17 00:21:14 INFO conf.Configuration: Removed undeclared tags:
18/10/17 00:21:16 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 2; 
dirCnt = 1
18/10/17 00:21:16 INFO tools.SimpleCopyListing: Build file listing completed.
18/10/17 00:21:16 INFO conf.Configuration: Removed undeclared tags:
18/10/17 00:21:16 INFO tools.DistCp: Number of paths in the copy list: 2
18/10/17 00:21:16 INFO tools.DistCp: Number of paths in the copy list: 2
18/10/17 00:21:16 INFO conf.Configuration: Removed undeclared tags:
18/10/17 00:21:16 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
to rm2
18/10/17 00:21:17 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding 
for path: /user/hdfs/.staging/job_1539383731490_0053
18/10/17 00:21:17 INFO mapreduce.JobSubmitter: number of splits:2
18/10/17 00:21:18 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
job_1539383731490_0053
18/10/17 00:21:18 INFO mapreduce.JobSubmitter: Executing with tokens: []
18/10/17 00:21:18 INFO conf.Configuration: Removed undeclared tags:
18/10/17 00:21:18 INFO conf.Configuration: found resource resource-types.xml at 
file:/etc/hadoop/3.0.3.0-63/0/resource-types.xml
18/10/17 00:21:18 INFO conf.Configuration: Removed undeclared tags:
18/10/17 00:21:18 INFO impl.YarnClientImpl: Submitted application 
application_1539383731490_0053
18/10/17 00:21:19 INFO mapreduce.Job: The url to track the job: 
http://ctr-e138-1518143905142-510793-01-05.hwx.site:8088/proxy/application_1539383731490_0053/
18/10/17 00:21:19 INFO tools.DistCp: DistCp job-id: job_1539383731490_0053
18/10/17 00:21:19 INFO mapreduce.Job: Running job: job_1539383731490_0053
18/10/17 00:21:28 INFO mapreduce.Job: Job job_1539383731490_0053 running in 
uber mode : false
18/10/17 00:21:28 INFO mapreduce.Job: map 0% reduce 0%
18/10/17 00:21:38 INFO mapreduce.Job: map 50% reduce 0%
18/10/17 00:21:40 INFO mapreduce.Job: map 100% reduce 0%
18/10/17 00:21:41 INFO mapreduce.Job: Job job_1539383731490_0053 completed 
successfully
18/10/17 00:21:41 INFO conf.Configuration: Removed undeclared tags:
18/10/17 00:21:41 INFO mapreduce.Job: Counters: 42
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=526882
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=216683
HDFS: Number of bytes written=0
HDFS: Number of read operations=21
HDFS: Number of large read operations=0
HDFS: Number of write operations=4
O3: Number of bytes read=0
O3: Number of bytes written=0
O3: Number of read operations=0
O3: Number of large read operations=0
O3: Number of write operations=0
Job Counters
Launched map tasks=2
Other local map tasks=2
Total time spent by all maps in occupied slots (ms)=68796
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=17199
Total vcore-milliseconds taken by all map tasks=17199
Total megabyte-milliseconds taken by all map tasks=70447104
Map-Reduce Framework
Map input records=2
Map output records=0
Input split bytes=232
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=629
CPU time spent (ms)=9380
Physical memory (bytes) snapshot=795975680
Virtual memory (bytes) snapshot=10857562112
Total committed heap usage (bytes)=1411383296
Peak Map Physical memory (bytes)=430125056
Peak Map Virtual memory (bytes)=5458845696
File Input Format Counters
Bytes Read=696
File Output Format Counters
Bytes Written=0
DistCp Counters
Bandwidth in Btyes=107877
Bytes Copied=215755

[jira] [Created] (HDDS-674) Not able to get key after distcp job passes

2018-10-16 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-674:
-

 Summary: Not able to get key after distcp job passes
 Key: HDDS-674
 URL: https://issues.apache.org/jira/browse/HDDS-674
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Namit Maheshwari


It fails with 
{code:java}
-bash-4.2$ ozone sh key get /volume2/bucket2/distcp/wordcount_input_1.txt 
/tmp/wordcountDistcp.txt
2018-10-17 00:25:07,904 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
Lookup key failed, error:KEY_NOT_FOUND{code}
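
A first debugging step would be to list the bucket and compare the key names that distcp actually created (a sketch; the "key list" syntax is assumed to mirror the "key get" call above):

{code}
# Sketch: enumerate the keys under the bucket, then retry 'key get' with
# the exact name shown. The 'key list' invocation is an assumption.
ozone sh key list /volume2/bucket2/
{code}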



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-672) Spark shell throws OzoneFileSystem not found

2018-10-16 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-672:
--
Target Version/s: 0.3.0

> Spark shell throws OzoneFileSystem not found
> 
>
> Key: HDDS-672
> URL: https://issues.apache.org/jira/browse/HDDS-672
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Namit Maheshwari
>Priority: Major
>  Labels: app-compat
>
> Spark shell throws OzoneFileSystem not found if the ozone jars are not 
> specified in the --jars option.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-672) Spark shell throws OzoneFileSystem not found

2018-10-16 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-672:
--
Labels: app-compat  (was: )

> Spark shell throws OzoneFileSystem not found
> 
>
> Key: HDDS-672
> URL: https://issues.apache.org/jira/browse/HDDS-672
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Namit Maheshwari
>Priority: Major
>  Labels: app-compat
>
> Spark shell throws OzoneFileSystem not found if the ozone jars are not 
> specified in the --jars option.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-672) Spark shell throws OzoneFileSystem not found

2018-10-16 Thread Namit Maheshwari (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16652535#comment-16652535
 ] 

Namit Maheshwari edited comment on HDDS-672 at 10/16/18 10:11 PM:
--

{code:java}
-bash-4.2$ spark-shell --master yarn-client
Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" 
with specified deploy mode instead.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use 
setLogLevel(newLevel).
Spark context Web UI available at 
http://ctr-e138-1518143905142-510793-01-02.hwx.site:4040
Spark context available as 'sc' (master = yarn, app id = 
application_1539383731490_0051).
Spark session available as 'spark'.
Welcome to
 __
/ __/__ ___ _/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 2.3.2.3.0.3.0-63
/_/

Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_181)
Type in expressions to have them evaluated.
Type :help for more information.

scala> val input = sc.textFile("o3://bucket2.volume2/passwd");
input: org.apache.spark.rdd.RDD[String] = o3://bucket2.volume2/passwd 
MapPartitionsRDD[1] at textFile at <console>:24

scala> val count = input.flatMap(line => line.split(" ")).map(word => (word, 
1)).reduceByKey(_+_);
java.lang.RuntimeException: java.lang.ClassNotFoundException: Class 
org.apache.hadoop.fs.ozone.OzoneFileSystem not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2596)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3320)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3352)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
at 
org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:268)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:239)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:325)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:200)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at 
org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:46)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at 
org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:46)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at 
org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:46)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at org.apache.spark.Partitioner$$anonfun$4.apply(Partitioner.scala:78)
at org.apache.spark.Partitioner$$anonfun$4.apply(Partitioner.scala:78)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.immutable.List.map(List.scala:285)
at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:78)
at 
org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:326)
at 
org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:326)
at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:325)
... 49 elided
Caused by: java.lang.ClassNotFoundException: Class 
org.apache.hadoop.fs.ozone.OzoneFileSystem not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2500)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2594)
... 93 more

scala>
{code}
 

It works fine if the --jars option is specified, as below:

[jira] [Commented] (HDDS-672) Spark shell throws OzoneFileSystem not found

2018-10-16 Thread Namit Maheshwari (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16652535#comment-16652535
 ] 

Namit Maheshwari commented on HDDS-672:
---

{code:java}
-bash-4.2$ spark-shell --master yarn-client
Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" 
with specified deploy mode instead.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use 
setLogLevel(newLevel).
Spark context Web UI available at 
http://ctr-e138-1518143905142-510793-01-02.hwx.site:4040
Spark context available as 'sc' (master = yarn, app id = 
application_1539383731490_0051).
Spark session available as 'spark'.
Welcome to
 __
/ __/__ ___ _/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 2.3.2.3.0.3.0-63
/_/

Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_181)
Type in expressions to have them evaluated.
Type :help for more information.

scala> val input = sc.textFile("o3://bucket2.volume2/passwd");
input: org.apache.spark.rdd.RDD[String] = o3://bucket2.volume2/passwd 
MapPartitionsRDD[1] at textFile at <console>:24

scala> val count = input.flatMap(line => line.split(" ")).map(word => (word, 
1)).reduceByKey(_+_);
java.lang.RuntimeException: java.lang.ClassNotFoundException: Class 
org.apache.hadoop.fs.ozone.OzoneFileSystem not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2596)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3320)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3352)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
at 
org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:268)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:239)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:325)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:200)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at 
org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:46)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at 
org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:46)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at 
org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:46)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at org.apache.spark.Partitioner$$anonfun$4.apply(Partitioner.scala:78)
at org.apache.spark.Partitioner$$anonfun$4.apply(Partitioner.scala:78)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.immutable.List.map(List.scala:285)
at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:78)
at 
org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:326)
at 
org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:326)
at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:325)
... 49 elided
Caused by: java.lang.ClassNotFoundException: Class 
org.apache.hadoop.fs.ozone.OzoneFileSystem not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2500)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2594)
... 93 more

scala>
{code}

> Spark shell throws OzoneFileSystem not found
> 
>
>

[jira] [Created] (HDDS-672) Spark shell throws OzoneFileSystem not found

2018-10-16 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-672:
-

 Summary: Spark shell throws OzoneFileSystem not found
 Key: HDDS-672
 URL: https://issues.apache.org/jira/browse/HDDS-672
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Namit Maheshwari


Spark shell throws OzoneFileSystem not found if the ozone jars are not 
specified in the --jars option.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-671) Hive HSI insert tries to create data in Hdfs for Ozone external table

2018-10-16 Thread Namit Maheshwari (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16652475#comment-16652475
 ] 

Namit Maheshwari commented on HDDS-671:
---

{code:java}
-bash-4.2$ beeline
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/usr/hdp/3.0.3.0-63/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/usr/hdp/3.0.3.0-63/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to 
jdbc:hive2://ctr-e138-1518143905142-510793-01-11.hwx.site:2181,ctr-e138-1518143905142-510793-01-06.hwx.site:2181,ctr-e138-1518143905142-510793-01-08.hwx.site:2181,ctr-e138-1518143905142-510793-01-10.hwx.site:2181,ctr-e138-1518143905142-510793-01-07.hwx.site:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Enter username for 
jdbc:hive2://ctr-e138-1518143905142-510793-01-11.hwx.site:2181,ctr-e138-1518143905142-510793-01-06.hwx.site:2181,ctr-e138-1518143905142-510793-01-08.hwx.site:2181,ctr-e138-1518143905142-510793-01-10.hwx.site:2181,ctr-e138-1518143905142-510793-01-07.hwx.site:2181/default:
Enter password for 
jdbc:hive2://ctr-e138-1518143905142-510793-01-11.hwx.site:2181,ctr-e138-1518143905142-510793-01-06.hwx.site:2181,ctr-e138-1518143905142-510793-01-08.hwx.site:2181,ctr-e138-1518143905142-510793-01-10.hwx.site:2181,ctr-e138-1518143905142-510793-01-07.hwx.site:2181/default:
18/10/16 21:09:32 [main]: INFO jdbc.HiveConnection: Connected to 
ctr-e138-1518143905142-510793-01-04.hwx.site:1
Connected to: Apache Hive (version 3.1.0.3.0.3.0-63)
Driver: Hive JDBC (version 3.1.0.3.0.3.0-63)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 3.1.0.3.0.3.0-63 by Apache Hive
0: jdbc:hive2://ctr-e138-1518143905142-510793> describe formatted testo3;
INFO : Compiling 
command(queryId=hive_20181016210256_3400c0cb-a1d3-4384-8af4-7b95678030e4): 
describe formatted testo3
INFO : Semantic Analysis Completed (retrial = false)
INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:col_name, 
type:string, comment:from deserializer), FieldSchema(name:data_type, 
type:string, comment:from deserializer), FieldSchema(name:comment, type:string, 
comment:from deserializer)], properties:null)
INFO : Completed compiling 
command(queryId=hive_20181016210256_3400c0cb-a1d3-4384-8af4-7b95678030e4); Time 
taken: 1.616 seconds
INFO : Executing 
command(queryId=hive_20181016210256_3400c0cb-a1d3-4384-8af4-7b95678030e4): 
describe formatted testo3
INFO : Starting task [Stage-0:DDL] in serial mode
INFO : Completed executing 
command(queryId=hive_20181016210256_3400c0cb-a1d3-4384-8af4-7b95678030e4); Time 
taken: 0.294 seconds
INFO : OK
+---++---+
| col_name | data_type | comment |
+---++---+
| # col_name | data_type | comment |
| i | int | |
| s | string | |
| d | float | |
| | NULL | NULL |
| # Detailed Table Information | NULL | NULL |
| Database: | default | NULL |
| OwnerType: | USER | NULL |
| Owner: | anonymous | NULL |
| CreateTime: | Mon Oct 15 22:25:33 UTC 2018 | NULL |
| LastAccessTime: | UNKNOWN | NULL |
| Retention: | 0 | NULL |
| Location: | o3://bucket2.volume2/testo3 | NULL |
| Table Type: | EXTERNAL_TABLE | NULL |
| Table Parameters: | NULL | NULL |
| | EXTERNAL | TRUE |
| | bucketing_version | 2 |
| | transient_lastDdlTime | 1539642333 |
| | NULL | NULL |
| # Storage Information | NULL | NULL |
| SerDe Library: | org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe | NULL |
| InputFormat: | org.apache.hadoop.mapred.TextInputFormat | NULL |
| OutputFormat: | org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat | 
NULL |
| Compressed: | No | NULL |
| Num Buckets: | -1 | NULL |
| Bucket Columns: | [] | NULL |
| Sort Columns: | [] | NULL |
| Storage Desc Params: | NULL | NULL |
| | serialization.format | 1 |
+---++---+
29 rows selected (2.65 seconds)
0: jdbc:hive2://ctr-e138-1518143905142-510793> insert into testo3 values(1, 
"aa", 3.0);
INFO : Compiling 
command(queryId=hive_20181016212028_c9c9dd7d-0f14-4d72-80cf-2177cc468167): 
insert into testo3 values(1, "aa", 3.0)
INFO : Semantic Analysis Completed (retrial = false)
INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:_col0, 
type:int, comment:null), FieldSchema(name:_col1, type:string, comment:null), 
FieldSchema(name:_col2, type:float, comment:null)], properties:null)
INFO : Completed 

[jira] [Created] (HDDS-671) Hive HSI insert tries to create data in Hdfs for Ozone external table

2018-10-16 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-671:
-

 Summary: Hive HSI insert tries to create data in Hdfs for Ozone 
external table
 Key: HDDS-671
 URL: https://issues.apache.org/jira/browse/HDDS-671
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Namit Maheshwari


Hive HSI insert tries to create data in HDFS for an Ozone external table when 
"hive.server2.enable.doAs" is set to true.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-670) Hive insert fails against Ozone external table

2018-10-16 Thread Namit Maheshwari (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16652460#comment-16652460
 ] 

Namit Maheshwari commented on HDDS-670:
---

{code:java}
-bash-4.2$ beeline
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/usr/hdp/3.0.3.0-63/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/usr/hdp/3.0.3.0-63/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to 
jdbc:hive2://ctr-e138-1518143905142-510793-01-11.hwx.site:2181,ctr-e138-1518143905142-510793-01-06.hwx.site:2181,ctr-e138-1518143905142-510793-01-08.hwx.site:2181,ctr-e138-1518143905142-510793-01-10.hwx.site:2181,ctr-e138-1518143905142-510793-01-07.hwx.site:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Enter username for 
jdbc:hive2://ctr-e138-1518143905142-510793-01-11.hwx.site:2181,ctr-e138-1518143905142-510793-01-06.hwx.site:2181,ctr-e138-1518143905142-510793-01-08.hwx.site:2181,ctr-e138-1518143905142-510793-01-10.hwx.site:2181,ctr-e138-1518143905142-510793-01-07.hwx.site:2181/default:
Enter password for 
jdbc:hive2://ctr-e138-1518143905142-510793-01-11.hwx.site:2181,ctr-e138-1518143905142-510793-01-06.hwx.site:2181,ctr-e138-1518143905142-510793-01-08.hwx.site:2181,ctr-e138-1518143905142-510793-01-10.hwx.site:2181,ctr-e138-1518143905142-510793-01-07.hwx.site:2181/default:
18/10/16 21:09:32 [main]: INFO jdbc.HiveConnection: Connected to 
ctr-e138-1518143905142-510793-01-04.hwx.site:1
Connected to: Apache Hive (version 3.1.0.3.0.3.0-63)
Driver: Hive JDBC (version 3.1.0.3.0.3.0-63)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 3.1.0.3.0.3.0-63 by Apache Hive
0: jdbc:hive2://ctr-e138-1518143905142-510793> describe formatted testo3;
INFO : Compiling 
command(queryId=hive_20181016210256_3400c0cb-a1d3-4384-8af4-7b95678030e4): 
describe formatted testo3
INFO : Semantic Analysis Completed (retrial = false)
INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:col_name, 
type:string, comment:from deserializer), FieldSchema(name:data_type, 
type:string, comment:from deserializer), FieldSchema(name:comment, type:string, 
comment:from deserializer)], properties:null)
INFO : Completed compiling 
command(queryId=hive_20181016210256_3400c0cb-a1d3-4384-8af4-7b95678030e4); Time 
taken: 1.616 seconds
INFO : Executing 
command(queryId=hive_20181016210256_3400c0cb-a1d3-4384-8af4-7b95678030e4): 
describe formatted testo3
INFO : Starting task [Stage-0:DDL] in serial mode
INFO : Completed executing 
command(queryId=hive_20181016210256_3400c0cb-a1d3-4384-8af4-7b95678030e4); Time 
taken: 0.294 seconds
INFO : OK
+---++---+
| col_name | data_type | comment |
+---++---+
| # col_name | data_type | comment |
| i | int | |
| s | string | |
| d | float | |
| | NULL | NULL |
| # Detailed Table Information | NULL | NULL |
| Database: | default | NULL |
| OwnerType: | USER | NULL |
| Owner: | anonymous | NULL |
| CreateTime: | Mon Oct 15 22:25:33 UTC 2018 | NULL |
| LastAccessTime: | UNKNOWN | NULL |
| Retention: | 0 | NULL |
| Location: | o3://bucket2.volume2/testo3 | NULL |
| Table Type: | EXTERNAL_TABLE | NULL |
| Table Parameters: | NULL | NULL |
| | EXTERNAL | TRUE |
| | bucketing_version | 2 |
| | transient_lastDdlTime | 1539642333 |
| | NULL | NULL |
| # Storage Information | NULL | NULL |
| SerDe Library: | org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe | NULL |
| InputFormat: | org.apache.hadoop.mapred.TextInputFormat | NULL |
| OutputFormat: | org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat | 
NULL |
| Compressed: | No | NULL |
| Num Buckets: | -1 | NULL |
| Bucket Columns: | [] | NULL |
| Sort Columns: | [] | NULL |
| Storage Desc Params: | NULL | NULL |
| | serialization.format | 1 |
+---++---+
29 rows selected (2.65 seconds)
0: jdbc:hive2://ctr-e138-1518143905142-510793> insert into testo3 values(1, 
"aa", 3.0);
INFO : Compiling 
command(queryId=hive_20181016210935_cbe26097-44f0--b70d-8a6555f461a0): 
insert into testo3 values(1, "aa", 3.0)
INFO : Semantic Analysis Completed (retrial = false)
INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:_col0, 
type:int, comment:null), FieldSchema(name:_col1, type:string, comment:null), 
FieldSchema(name:_col2, type:float, comment:null)], properties:null)
INFO : Completed 

[jira] [Created] (HDDS-670) Hive insert fails against Ozone external table

2018-10-16 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-670:
-

 Summary: Hive insert fails against Ozone external table
 Key: HDDS-670
 URL: https://issues.apache.org/jira/browse/HDDS-670
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Namit Maheshwari


It fails with 
{code:java}
ERROR : Job Commit failed with exception 
'org.apache.hadoop.hive.ql.metadata.HiveException(Unable to move: 
o3://bucket2.volume2/testo3/.hive-staging_hive_2018-10-16_21-09-35_130_1001829123585250245-1/_tmp.-ext-1
 to: 
o3://bucket2.volume2/testo3/.hive-staging_hive_2018-10-16_21-09-35_130_1001829123585250245-1/_tmp.-ext-1.moved)'
org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move: 
o3://bucket2.volume2/testo3/.hive-staging_hive_2018-10-16_21-09-35_130_1001829123585250245-1/_tmp.-ext-1
 to: 
o3://bucket2.volume2/testo3/.hive-staging_hive_2018-10-16_21-09-35_130_1001829123585250245-1/_tmp.-ext-1.moved
{code}
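Since the failure is in the commit-time rename, a minimal sketch to exercise rename on the o3 filesystem directly (paths are illustrative):

{code:bash}
# sketch: test whether a plain rename works inside the same bucket
hdfs dfs -touchz o3://bucket2.volume2/rename_src
hdfs dfs -mv o3://bucket2.volume2/rename_src o3://bucket2.volume2/rename_dst
hdfs dfs -ls o3://bucket2.volume2/
{code}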
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-664) Creating hive table on Ozone fails

2018-10-16 Thread Namit Maheshwari (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16652226#comment-16652226
 ] 

Namit Maheshwari commented on HDDS-664:
---

Able to create the table after adding the properties below to core-site.xml:
{code:java}
<property>
  <name>hadoop.proxyuser.hive.users</name>
  <value>*</value>
</property>

<property>
  <name>hadoop.proxyuser.hive.hosts</name>
  <value>*</value>
</property>

<property>
  <name>hadoop.proxyuser.hive.groups</name>
  <value>*</value>
</property>
{code}
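Note that proxyuser changes may need a refresh or restart to take effect; one possible way on the HDFS side (a sketch, assuming superuser credentials):

{code:bash}
# sketch: reload proxyuser (superuser group) settings without a full restart
hdfs dfsadmin -refreshSuperUserGroupsConfiguration
{code}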

> Creating hive table on Ozone fails
> --
>
> Key: HDDS-664
> URL: https://issues.apache.org/jira/browse/HDDS-664
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Namit Maheshwari
>Assignee: Hanisha Koneru
>Priority: Major
>
> Modified HIVE_AUX_JARS_PATH to include Ozone jars. Tried creating Hive 
> external table on Ozone. It fails with "Error: Error while compiling 
> statement: FAILED: HiveAuthzPluginException Error getting permissions for 
> o3://bucket2.volume2/testo3: User: hive is not allowed to impersonate 
> anonymous (state=42000,code=4)"
> {code:java}
> -bash-4.2$ beeline
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.3.0-63/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.3.0-63/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> Connecting to 
> jdbc:hive2://ctr-e138-1518143905142-510793-01-11.hwx.site:2181,ctr-e138-1518143905142-510793-01-06.hwx.site:2181,ctr-e138-1518143905142-510793-01-08.hwx.site:2181,ctr-e138-1518143905142-510793-01-10.hwx.site:2181,ctr-e138-1518143905142-510793-01-07.hwx.site:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
> Enter username for 
> jdbc:hive2://ctr-e138-1518143905142-510793-01-11.hwx.site:2181,ctr-e138-1518143905142-510793-01-06.hwx.site:2181,ctr-e138-1518143905142-510793-01-08.hwx.site:2181,ctr-e138-1518143905142-510793-01-10.hwx.site:2181,ctr-e138-1518143905142-510793-01-07.hwx.site:2181/default:
> Enter password for 
> jdbc:hive2://ctr-e138-1518143905142-510793-01-11.hwx.site:2181,ctr-e138-1518143905142-510793-01-06.hwx.site:2181,ctr-e138-1518143905142-510793-01-08.hwx.site:2181,ctr-e138-1518143905142-510793-01-10.hwx.site:2181,ctr-e138-1518143905142-510793-01-07.hwx.site:2181/default:
> 18/10/15 21:36:55 [main]: INFO jdbc.HiveConnection: Connected to 
> ctr-e138-1518143905142-510793-01-04.hwx.site:1
> Connected to: Apache Hive (version 3.1.0.3.0.3.0-63)
> Driver: Hive JDBC (version 3.1.0.3.0.3.0-63)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Beeline version 3.1.0.3.0.3.0-63 by Apache Hive
> 0: jdbc:hive2://ctr-e138-1518143905142-510793> create external table testo3 ( 
> i int, s string, d float) location "o3://bucket2.volume2/testo3";
> Error: Error while compiling statement: FAILED: HiveAuthzPluginException 
> Error getting permissions for o3://bucket2.volume2/testo3: User: hive is not 
> allowed to impersonate anonymous (state=42000,code=4)
> 0: jdbc:hive2://ctr-e138-1518143905142-510793> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-664) Creating hive table on Ozone fails

2018-10-15 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-664:
-

 Summary: Creating hive table on Ozone fails
 Key: HDDS-664
 URL: https://issues.apache.org/jira/browse/HDDS-664
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Namit Maheshwari


Modified HIVE_AUX_JARS_PATH to include Ozone jars. Tried creating Hive external 
table on Ozone. It fails with "Error: Error while compiling statement: FAILED: 
HiveAuthzPluginException Error getting permissions for 
o3://bucket2.volume2/testo3: User: hive is not allowed to impersonate anonymous 
(state=42000,code=4)"
{code:java}
-bash-4.2$ beeline
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/usr/hdp/3.0.3.0-63/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/usr/hdp/3.0.3.0-63/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to 
jdbc:hive2://ctr-e138-1518143905142-510793-01-11.hwx.site:2181,ctr-e138-1518143905142-510793-01-06.hwx.site:2181,ctr-e138-1518143905142-510793-01-08.hwx.site:2181,ctr-e138-1518143905142-510793-01-10.hwx.site:2181,ctr-e138-1518143905142-510793-01-07.hwx.site:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Enter username for 
jdbc:hive2://ctr-e138-1518143905142-510793-01-11.hwx.site:2181,ctr-e138-1518143905142-510793-01-06.hwx.site:2181,ctr-e138-1518143905142-510793-01-08.hwx.site:2181,ctr-e138-1518143905142-510793-01-10.hwx.site:2181,ctr-e138-1518143905142-510793-01-07.hwx.site:2181/default:
Enter password for 
jdbc:hive2://ctr-e138-1518143905142-510793-01-11.hwx.site:2181,ctr-e138-1518143905142-510793-01-06.hwx.site:2181,ctr-e138-1518143905142-510793-01-08.hwx.site:2181,ctr-e138-1518143905142-510793-01-10.hwx.site:2181,ctr-e138-1518143905142-510793-01-07.hwx.site:2181/default:
18/10/15 21:36:55 [main]: INFO jdbc.HiveConnection: Connected to 
ctr-e138-1518143905142-510793-01-04.hwx.site:1
Connected to: Apache Hive (version 3.1.0.3.0.3.0-63)
Driver: Hive JDBC (version 3.1.0.3.0.3.0-63)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 3.1.0.3.0.3.0-63 by Apache Hive
0: jdbc:hive2://ctr-e138-1518143905142-510793> create external table testo3 ( i 
int, s string, d float) location "o3://bucket2.volume2/testo3";
Error: Error while compiling statement: FAILED: HiveAuthzPluginException Error 
getting permissions for o3://bucket2.volume2/testo3: User: hive is not allowed 
to impersonate anonymous (state=42000,code=4)
0: jdbc:hive2://ctr-e138-1518143905142-510793> {code}
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-637) Not able to access the part-r-00000 file after the MR job succeeds

2018-10-15 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari resolved HDDS-637.
---
Resolution: Cannot Reproduce

The issue seems to go away with the latest code. 
Will reopen if seen again. Thanks [~xyao]
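For reference, a sketch of the access that could no longer be reproduced as failing (output path as in the original report):

{code:bash}
# sketch: read the reducer output directly from the o3 bucket
hdfs dfs -cat o3://bucket2.volume2/mr_jobDD/part-r-00000
{code}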

> Not able to access the part-r-00000 file after the MR job succeeds
> --
>
> Key: HDDS-637
> URL: https://issues.apache.org/jira/browse/HDDS-637
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.3.0
>Reporter: Namit Maheshwari
>Assignee: Xiaoyu Yao
>Priority: Major
>
> Run a MR job
> {code:java}
> -bash-4.2$ /usr/hdp/current/hadoop-client/bin/hadoop jar 
> /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
> wordcount /tmp/mr_jobs/input/ o3://bucket2.volume2/mr_jobDD
> 18/10/12 01:00:23 INFO conf.Configuration: Removed undeclared tags:
> 18/10/12 01:00:24 INFO conf.Configuration: Removed undeclared tags:
> 18/10/12 01:00:24 INFO conf.Configuration: Removed undeclared tags:
> 18/10/12 01:00:25 INFO conf.Configuration: Removed undeclared tags:
> 18/10/12 01:00:25 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
> to rm2
> 18/10/12 01:00:25 INFO mapreduce.JobResourceUploader: Disabling Erasure 
> Coding for path: /user/hdfs/.staging/job_1539295307098_0003
> 18/10/12 01:00:27 INFO input.FileInputFormat: Total input files to process : 1
> 18/10/12 01:00:27 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
> 18/10/12 01:00:27 INFO lzo.LzoCodec: Successfully loaded & initialized 
> native-lzo library [hadoop-lzo rev 5d6248d8d690f8456469979213ab2e9993bfa2e9]
> 18/10/12 01:00:27 INFO mapreduce.JobSubmitter: number of splits:1
> 18/10/12 01:00:28 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
> job_1539295307098_0003
> 18/10/12 01:00:28 INFO mapreduce.JobSubmitter: Executing with tokens: []
> 18/10/12 01:00:28 INFO conf.Configuration: Removed undeclared tags:
> 18/10/12 01:00:28 INFO conf.Configuration: found resource resource-types.xml 
> at file:/etc/hadoop/3.0.3.0-63/0/resource-types.xml
> 18/10/12 01:00:28 INFO conf.Configuration: Removed undeclared tags:
> 18/10/12 01:00:28 INFO impl.YarnClientImpl: Submitted application 
> application_1539295307098_0003
> 18/10/12 01:00:29 INFO mapreduce.Job: The url to track the job: 
> http://ctr-e138-1518143905142-510793-01-05.hwx.site:8088/proxy/application_1539295307098_0003/
> 18/10/12 01:00:29 INFO mapreduce.Job: Running job: job_1539295307098_0003
> 18/10/12 01:00:35 INFO mapreduce.Job: Job job_1539295307098_0003 running in 
> uber mode : false
> 18/10/12 01:00:35 INFO mapreduce.Job: map 0% reduce 0%
> 18/10/12 01:00:44 INFO mapreduce.Job: map 100% reduce 0%
> 18/10/12 01:00:57 INFO mapreduce.Job: map 100% reduce 67%
> 18/10/12 01:00:59 INFO mapreduce.Job: map 100% reduce 100%
> 18/10/12 01:00:59 INFO mapreduce.Job: Job job_1539295307098_0003 completed 
> successfully
> 18/10/12 01:00:59 INFO conf.Configuration: Removed undeclared tags:
> 18/10/12 01:00:59 INFO mapreduce.Job: Counters: 58
> File System Counters
> FILE: Number of bytes read=6332
> FILE: Number of bytes written=532585
> FILE: Number of read operations=0
> FILE: Number of large read operations=0
> FILE: Number of write operations=0
> HDFS: Number of bytes read=215876
> HDFS: Number of bytes written=0
> HDFS: Number of read operations=2
> HDFS: Number of large read operations=0
> HDFS: Number of write operations=0
> O3: Number of bytes read=0
> O3: Number of bytes written=0
> O3: Number of read operations=0
> O3: Number of large read operations=0
> O3: Number of write operations=0
> Job Counters
> Launched map tasks=1
> Launched reduce tasks=1
> Rack-local map tasks=1
> Total time spent by all maps in occupied slots (ms)=25392
> Total time spent by all reduces in occupied slots (ms)=103584
> Total time spent by all map tasks (ms)=6348
> Total time spent by all reduce tasks (ms)=12948
> Total vcore-milliseconds taken by all map tasks=6348
> Total vcore-milliseconds taken by all reduce tasks=12948
> Total megabyte-milliseconds taken by all map tasks=26001408
> Total megabyte-milliseconds taken by all reduce tasks=106070016
> Map-Reduce Framework
> Map input records=716
> Map output records=32019
> Map output bytes=343475
> Map output materialized bytes=6332
> Input split bytes=121
> Combine input records=32019
> Combine output records=461
> Reduce input groups=461
> Reduce shuffle bytes=6332
> Reduce input records=461
> Reduce output records=461
> Spilled Records=922
> Shuffled Maps =1
> Failed Shuffles=0
> Merged Map outputs=1
> GC time elapsed (ms)=359
> CPU time spent (ms)=11800
> Physical memory (bytes) snapshot=3018502144
> Virtual memory (bytes) snapshot=14470242304
> Total committed heap usage (bytes)=3521642496
> Peak Map Physical memory 

[jira] [Updated] (HDDS-663) Lot of "Removed undeclared tags" logger while running commands

2018-10-15 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-663:
--
Description: 
While running commands against OzoneFs, lots of log lines like the ones below appear:
{code:java}
-bash-4.2$ hdfs dfs -ls o3://bucket2.volume2/mr_jobEE
18/10/15 20:29:17 INFO conf.Configuration: Removed undeclared tags:
18/10/15 20:29:18 INFO conf.Configuration: Removed undeclared tags:
Found 2 items
-rw-rw-rw- 1 hdfs hdfs 0 2018-10-15 20:28 o3://bucket2.volume2/mr_jobEE/_SUCCESS
-rw-rw-rw- 1 hdfs hdfs 5017 1970-07-23 04:33 
o3://bucket2.volume2/mr_jobEE/part-r-00000
18/10/15 20:29:19 INFO conf.Configuration: Removed undeclared tags:
-bash-4.2$ {code}
 

  was:
While running commands against OzoneFs, lots of log lines like the ones below appear:
{code:java}
-bash-4.2$ hdfs dfs -ls o3://bucket2.volume2/mr_jobEE
18/10/15 20:29:17 INFO conf.Configuration: Removed undeclared tags:
18/10/15 20:29:18 INFO conf.Configuration: Removed undeclared tags:
Found 2 items
-rw-rw-rw- 1 hdfs hdfs 0 2018-10-15 20:28 o3://bucket2.volume2/mr_jobEE/_SUCCESS
-rw-rw-rw- 1 hdfs hdfs 5017 1970-07-23 04:33 
o3://bucket2.volume2/mr_jobEE/part-r-00000
18/10/15 20:29:19 INFO conf.Configuration: Removed undeclared tags:
-bash-4.2${code}
 


> Lot of "Removed undeclared tags" logger while running commands
> --
>
> Key: HDDS-663
> URL: https://issues.apache.org/jira/browse/HDDS-663
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Namit Maheshwari
>Priority: Major
> Fix For: 0.3.0
>
>
> While running commands against OzoneFs, lots of log lines like the ones below appear:
> {code:java}
> -bash-4.2$ hdfs dfs -ls o3://bucket2.volume2/mr_jobEE
> 18/10/15 20:29:17 INFO conf.Configuration: Removed undeclared tags:
> 18/10/15 20:29:18 INFO conf.Configuration: Removed undeclared tags:
> Found 2 items
> -rw-rw-rw- 1 hdfs hdfs 0 2018-10-15 20:28 o3://bucket2.volume2/mr_jobEE/_SUCCESS
> -rw-rw-rw- 1 hdfs hdfs 5017 1970-07-23 04:33 
> o3://bucket2.volume2/mr_jobEE/part-r-00000
> 18/10/15 20:29:19 INFO conf.Configuration: Removed undeclared tags:
> -bash-4.2$ {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-663) Lot of "Removed undeclared tags" logger while running commands

2018-10-15 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-663:
--
Description: 
While running commands against OzoneFs, lots of log lines like the ones below appear:
{code:java}
-bash-4.2$ hdfs dfs -ls o3://bucket2.volume2/mr_jobEE
18/10/15 20:29:17 INFO conf.Configuration: Removed undeclared tags:
18/10/15 20:29:18 INFO conf.Configuration: Removed undeclared tags:
Found 2 items
-rw-rw-rw- 1 hdfs hdfs 0 2018-10-15 20:28 o3://bucket2.volume2/mr_jobEE/_SUCCESS
-rw-rw-rw- 1 hdfs hdfs 5017 1970-07-23 04:33 
o3://bucket2.volume2/mr_jobEE/part-r-00000
18/10/15 20:29:19 INFO conf.Configuration: Removed undeclared tags:
-bash-4.2${code}
 

  was:
While running commands against OzoneFs, lots of log lines like the ones below appear:
{code:java}
-bash-4.2$ hdfs dfs -ls o3://bucket2.volume2/mr_jobEE
18/10/15 20:29:17 INFO conf.Configuration: Removed undeclared tags:
18/10/15 20:29:18 INFO conf.Configuration: Removed undeclared tags:
Found 2 items
-rw-rw-rw- 1 hdfs hdfs 0 2018-10-15 20:28 o3://bucket2.volume2/mr_jobEE/_SUCCESS
-rw-rw-rw- 1 hdfs hdfs 5017 1970-07-23 04:33 
o3://bucket2.volume2/mr_jobEE/part-r-00000
18/10/15 20:29:19 INFO conf.Configuration: Removed undeclared tags:
-bash-4.2${code}


> Lot of "Removed undeclared tags" logger while running commands
> --
>
> Key: HDDS-663
> URL: https://issues.apache.org/jira/browse/HDDS-663
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Namit Maheshwari
>Priority: Major
> Fix For: 0.3.0
>
>
> While running commands against OzoneFs, lots of log lines like the ones below appear:
> {code:java}
> -bash-4.2$ hdfs dfs -ls o3://bucket2.volume2/mr_jobEE
> 18/10/15 20:29:17 INFO conf.Configuration: Removed undeclared tags:
> 18/10/15 20:29:18 INFO conf.Configuration: Removed undeclared tags:
> Found 2 items
> -rw-rw-rw- 1 hdfs hdfs 0 2018-10-15 20:28 
> o3://bucket2.volume2/mr_jobEE/_SUCCESS
> -rw-rw-rw- 1 hdfs hdfs 5017 1970-07-23 04:33 
> o3://bucket2.volume2/mr_jobEE/part-r-00000
> 18/10/15 20:29:19 INFO conf.Configuration: Removed undeclared tags:
> -bash-4.2${code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-663) Lot of "Removed undeclared tags" logger while running commands

2018-10-15 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-663:
--
Target Version/s: 0.3.0

> Lot of "Removed undeclared tags" logger while running commands
> --
>
> Key: HDDS-663
> URL: https://issues.apache.org/jira/browse/HDDS-663
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Namit Maheshwari
>Priority: Major
> Fix For: 0.3.0
>
>
> While running commands against OzoneFs, lots of log lines like the ones below appear:
> {code:java}
> -bash-4.2$ hdfs dfs -ls o3://bucket2.volume2/mr_jobEE
> 18/10/15 20:29:17 INFO conf.Configuration: Removed undeclared tags:
> 18/10/15 20:29:18 INFO conf.Configuration: Removed undeclared tags:
> Found 2 items
> -rw-rw-rw- 1 hdfs hdfs 0 2018-10-15 20:28 
> o3://bucket2.volume2/mr_jobEE/_SUCCESS
> -rw-rw-rw- 1 hdfs hdfs 5017 1970-07-23 04:33 
> o3://bucket2.volume2/mr_jobEE/part-r-00000
> 18/10/15 20:29:19 INFO conf.Configuration: Removed undeclared tags:
> -bash-4.2${code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-663) Lot of "Removed undeclared tags" logger while running commands

2018-10-15 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-663:
--
Fix Version/s: 0.3.0

> Lot of "Removed undeclared tags" logger while running commands
> --
>
> Key: HDDS-663
> URL: https://issues.apache.org/jira/browse/HDDS-663
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Namit Maheshwari
>Priority: Major
> Fix For: 0.3.0
>
>
> While running commands against OzoneFs, lots of log lines like the ones below appear:
> {code:java}
> -bash-4.2$ hdfs dfs -ls o3://bucket2.volume2/mr_jobEE
> 18/10/15 20:29:17 INFO conf.Configuration: Removed undeclared tags:
> 18/10/15 20:29:18 INFO conf.Configuration: Removed undeclared tags:
> Found 2 items
> -rw-rw-rw- 1 hdfs hdfs 0 2018-10-15 20:28 
> o3://bucket2.volume2/mr_jobEE/_SUCCESS
> -rw-rw-rw- 1 hdfs hdfs 5017 1970-07-23 04:33 
> o3://bucket2.volume2/mr_jobEE/part-r-00000
> 18/10/15 20:29:19 INFO conf.Configuration: Removed undeclared tags:
> -bash-4.2${code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-663) Lot of "Removed undeclared tags" logger while running commands

2018-10-15 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-663:
-

 Summary: Lot of "Removed undeclared tags" logger while running 
commands
 Key: HDDS-663
 URL: https://issues.apache.org/jira/browse/HDDS-663
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Namit Maheshwari


While running commands against OzoneFs, lots of log lines like the ones below appear:
{code:java}
-bash-4.2$ hdfs dfs -ls o3://bucket2.volume2/mr_jobEE
18/10/15 20:29:17 INFO conf.Configuration: Removed undeclared tags:
18/10/15 20:29:18 INFO conf.Configuration: Removed undeclared tags:
Found 2 items
-rw-rw-rw- 1 hdfs hdfs 0 2018-10-15 20:28 o3://bucket2.volume2/mr_jobEE/_SUCCESS
-rw-rw-rw- 1 hdfs hdfs 5017 1970-07-23 04:33 
o3://bucket2.volume2/mr_jobEE/part-r-00000
18/10/15 20:29:19 INFO conf.Configuration: Removed undeclared tags:
-bash-4.2${code}
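Until the logger itself is quieted, a possible client-side mute (a sketch, assuming the client picks up the standard log4j.properties):

{code}
# sketch: raise the level of the emitting logger in log4j.properties
log4j.logger.org.apache.hadoop.conf.Configuration=WARN
{code}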



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-652) Properties in ozone-site.xml does not work well with IP

2018-10-12 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-652:
-

 Summary: Properties in ozone-site.xml does not work well with IP 
 Key: HDDS-652
 URL: https://issues.apache.org/jira/browse/HDDS-652
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Namit Maheshwari


There have been cases where properties in ozone-site.xml do not work well 
with IP addresses. 

If properties like ozone.om.address are changed to use hostnames, they work 
well.

 

Ideally this should work fine with both IP addresses and hostnames.
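One way to see the address form the client actually resolves (a sketch using the getozoneconf tool, as elsewhere in these runs):

{code:bash}
# sketch: print the effective OM address as seen by the client
ozone getozoneconf -confKey ozone.om.address
{code}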



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-652) Properties in ozone-site.xml does not work well with IP

2018-10-12 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-652:
--
Target Version/s: 0.3.0
   Fix Version/s: 0.3.0

> Properties in ozone-site.xml does not work well with IP 
> 
>
> Key: HDDS-652
> URL: https://issues.apache.org/jira/browse/HDDS-652
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Namit Maheshwari
>Priority: Major
> Fix For: 0.3.0
>
>
> There have been cases where properties in ozone-site.xml do not work well 
> with IP addresses. 
> If properties like ozone.om.address are changed to use hostnames, they 
> work well.
>  
> Ideally this should work fine with both IP addresses and hostnames.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-651) Rename o3 to o3fs for Filesystem

2018-10-12 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-651:
--
Target Version/s: 0.3.0

> Rename o3 to o3fs for Filesystem
> 
>
> Key: HDDS-651
> URL: https://issues.apache.org/jira/browse/HDDS-651
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Namit Maheshwari
>Priority: Major
> Fix For: 0.3.0
>
>
> I propose that we rename o3 to o3fs for Filesystem.
> Using the same name, o3, for different purposes creates a lot of confusion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-651) Rename o3 to o3fs for Filesystem

2018-10-12 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-651:
-

 Summary: Rename o3 to o3fs for Filesystem
 Key: HDDS-651
 URL: https://issues.apache.org/jira/browse/HDDS-651
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Namit Maheshwari


I propose that we rename o3 to o3fs for Filesystem.

Using the same name, o3, for different purposes creates a lot of confusion.
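For illustration, the rename would only change the URI scheme (paths are examples):

{code:bash}
# today: the filesystem shares the o3 scheme
hdfs dfs -ls o3://bucket2.volume2/
# proposed: a distinct o3fs scheme for the filesystem
hdfs dfs -ls o3fs://bucket2.volume2/
{code}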



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-651) Rename o3 to o3fs for Filesystem

2018-10-12 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-651:
--
Fix Version/s: 0.3.0

> Rename o3 to o3fs for Filesystem
> 
>
> Key: HDDS-651
> URL: https://issues.apache.org/jira/browse/HDDS-651
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Namit Maheshwari
>Priority: Major
> Fix For: 0.3.0
>
>
> I propose that we rename o3 to o3fs for Filesystem.
> Using the same name, o3, for different purposes creates a lot of confusion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-650) Spark job is not able to pick up Ozone configuration

2018-10-12 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-650:
--
Target Version/s: 0.3.0

> Spark job is not able to pick up Ozone configuration
> 
>
> Key: HDDS-650
> URL: https://issues.apache.org/jira/browse/HDDS-650
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Priority: Major
> Fix For: 0.3.0
>
>
> Spark job is not able to pick up Ozone configuration.
> {code:java}
> -bash-4.2$ spark-shell --master yarn-client --jars 
> /usr/hdp/current/hadoop-client/lib/hadoop-lzo-0.6.0.3.0.3.0-63.jar,/tmp/ozone-0.3.0-SNAPSHOT/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin-0.3.0-SNAPSHOT.jar,/tmp/ozone-0.3.0-SNAPSHOT/share/hadoop/ozonefs/hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar
> Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" 
> with specified deploy mode instead.
> Setting default log level to "WARN".
> To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use 
> setLogLevel(newLevel).
> Spark context Web UI available at 
> http://ctr-e138-1518143905142-510793-01-02.hwx.site:4040
> Spark context available as 'sc' (master = yarn, app id = 
> application_1539295307098_0011).
> Spark session available as 'spark'.
> Welcome to
>  __
> / __/__ ___ _/ /__
> _\ \/ _ \/ _ `/ __/ '_/
> /___/ .__/\_,_/_/ /_/\_\ version 2.3.2.3.0.3.0-63
> /_/
> Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_181)
> Type in expressions to have them evaluated.
> Type :help for more information.
> scala>
> scala> val input = sc.textFile("o3://bucket2.volume2/passwd");
> input: org.apache.spark.rdd.RDD[String] = o3://bucket2.volume2/passwd 
> MapPartitionsRDD[1] at textFile at :24
> scala> val count = input.flatMap(line => line.split(" ")).map(word => (word, 
> 1)).reduceByKey(_+_);
> count: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[4] at 
> reduceByKey at :25
> scala> count.cache()
> res0: count.type = ShuffledRDD[4] at reduceByKey at :25
> scala> count.saveAsTextFile("o3://bucket2.volume2/sparkout3");
> [Stage 0:> (0 + 2) / 2]18/10/12 22:16:44 WARN TaskSetManager: Lost task 1.0 
> in stage 0.0 (TID 1, ctr-e138-1518143905142-510793-01-11.hwx.site, 
> executor 1): java.io.IOException: Couldn't create protocol class 
> org.apache.hadoop.ozone.client.rpc.RpcClient
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:299)
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169)
> at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:119)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
> at org.apache.hadoop.mapred.LineRecordReader.(LineRecordReader.java:108)
> at 
> org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
> at org.apache.spark.rdd.HadoopRDD$$anon$1.liftedTree1$1(HadoopRDD.scala:257)
> at org.apache.spark.rdd.HadoopRDD$$anon$1.(HadoopRDD.scala:256)
> at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:214)
> at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:94)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
> at org.apache.spark.scheduler.Task.run(Task.scala:109)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: 

[jira] [Updated] (HDDS-650) Spark job is not able to pick up Ozone configuration

2018-10-12 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-650:
--
Fix Version/s: 0.3.0

> Spark job is not able to pick up Ozone configuration
> 
>
> Key: HDDS-650
> URL: https://issues.apache.org/jira/browse/HDDS-650
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Priority: Major
> Fix For: 0.3.0
>
>
> Spark job is not able to pick up Ozone configuration.
> {code:java}
> -bash-4.2$ spark-shell --master yarn-client --jars 
> /usr/hdp/current/hadoop-client/lib/hadoop-lzo-0.6.0.3.0.3.0-63.jar,/tmp/ozone-0.3.0-SNAPSHOT/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin-0.3.0-SNAPSHOT.jar,/tmp/ozone-0.3.0-SNAPSHOT/share/hadoop/ozonefs/hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar
> Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" 
> with specified deploy mode instead.
> Setting default log level to "WARN".
> To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use 
> setLogLevel(newLevel).
> Spark context Web UI available at 
> http://ctr-e138-1518143905142-510793-01-02.hwx.site:4040
> Spark context available as 'sc' (master = yarn, app id = 
> application_1539295307098_0011).
> Spark session available as 'spark'.
> Welcome to
>  __
> / __/__ ___ _/ /__
> _\ \/ _ \/ _ `/ __/ '_/
> /___/ .__/\_,_/_/ /_/\_\ version 2.3.2.3.0.3.0-63
> /_/
> Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_181)
> Type in expressions to have them evaluated.
> Type :help for more information.
> scala>
> scala> val input = sc.textFile("o3://bucket2.volume2/passwd");
> input: org.apache.spark.rdd.RDD[String] = o3://bucket2.volume2/passwd 
> MapPartitionsRDD[1] at textFile at :24
> scala> val count = input.flatMap(line => line.split(" ")).map(word => (word, 
> 1)).reduceByKey(_+_);
> count: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[4] at 
> reduceByKey at :25
> scala> count.cache()
> res0: count.type = ShuffledRDD[4] at reduceByKey at :25
> scala> count.saveAsTextFile("o3://bucket2.volume2/sparkout3");
> [Stage 0:> (0 + 2) / 2]18/10/12 22:16:44 WARN TaskSetManager: Lost task 1.0 
> in stage 0.0 (TID 1, ctr-e138-1518143905142-510793-01-11.hwx.site, 
> executor 1): java.io.IOException: Couldn't create protocol class 
> org.apache.hadoop.ozone.client.rpc.RpcClient
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:299)
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169)
> at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:119)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
> at org.apache.hadoop.mapred.LineRecordReader.(LineRecordReader.java:108)
> at 
> org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
> at org.apache.spark.rdd.HadoopRDD$$anon$1.liftedTree1$1(HadoopRDD.scala:257)
> at org.apache.spark.rdd.HadoopRDD$$anon$1.(HadoopRDD.scala:256)
> at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:214)
> at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:94)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
> at org.apache.spark.scheduler.Task.run(Task.scala:109)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: 

[jira] [Created] (HDDS-650) Spark job is not able to pick up Ozone configuration

2018-10-12 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-650:
-

 Summary: Spark job is not able to pick up Ozone configuration
 Key: HDDS-650
 URL: https://issues.apache.org/jira/browse/HDDS-650
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


Spark job is not able to pick up Ozone configuration.
{code:java}
-bash-4.2$ spark-shell --master yarn-client --jars 
/usr/hdp/current/hadoop-client/lib/hadoop-lzo-0.6.0.3.0.3.0-63.jar,/tmp/ozone-0.3.0-SNAPSHOT/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin-0.3.0-SNAPSHOT.jar,/tmp/ozone-0.3.0-SNAPSHOT/share/hadoop/ozonefs/hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar
Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" 
with specified deploy mode instead.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use 
setLogLevel(newLevel).
Spark context Web UI available at 
http://ctr-e138-1518143905142-510793-01-02.hwx.site:4040
Spark context available as 'sc' (master = yarn, app id = 
application_1539295307098_0011).
Spark session available as 'spark'.
Welcome to
 __
/ __/__ ___ _/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 2.3.2.3.0.3.0-63
/_/

Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_181)
Type in expressions to have them evaluated.
Type :help for more information.

scala>

scala> val input = sc.textFile("o3://bucket2.volume2/passwd");
input: org.apache.spark.rdd.RDD[String] = o3://bucket2.volume2/passwd 
MapPartitionsRDD[1] at textFile at :24

scala> val count = input.flatMap(line => line.split(" ")).map(word => (word, 
1)).reduceByKey(_+_);
count: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[4] at reduceByKey 
at :25

scala> count.cache()
res0: count.type = ShuffledRDD[4] at reduceByKey at :25

scala> count.saveAsTextFile("o3://bucket2.volume2/sparkout3");
[Stage 0:> (0 + 2) / 2]18/10/12 22:16:44 WARN TaskSetManager: Lost task 1.0 in 
stage 0.0 (TID 1, ctr-e138-1518143905142-510793-01-11.hwx.site, executor 
1): java.io.IOException: Couldn't create protocol class 
org.apache.hadoop.ozone.client.rpc.RpcClient
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:299)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169)
at 
org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:119)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
at org.apache.hadoop.mapred.LineRecordReader.(LineRecordReader.java:108)
at 
org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.spark.rdd.HadoopRDD$$anon$1.liftedTree1$1(HadoopRDD.scala:257)
at org.apache.spark.rdd.HadoopRDD$$anon$1.(HadoopRDD.scala:256)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:214)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:94)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalArgumentException: ozone.om.address must be 
defined. See https://wiki.apache.org/hadoop/Ozone#Configuration for details on 
configuring Ozone.
at org.apache.hadoop.ozone.OmUtils.getOmAddressForClients(OmUtils.java:70)
at org.apache.hadoop.ozone.client.rpc.RpcClient.(RpcClient.java:114)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
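Since the root cause is a missing ozone.om.address on the executors, a workaround sketch is to forward it through Spark's Hadoop-conf prefix (<om-host>:<om-port> is a placeholder):

{code:bash}
# workaround sketch: spark.hadoop.* keys are copied into the Hadoop conf
# on the driver and executors (the address value is a placeholder)
spark-shell --master yarn \
  --conf spark.hadoop.ozone.om.address=<om-host>:<om-port> \
  --jars /tmp/ozone-0.3.0-SNAPSHOT/share/hadoop/ozonefs/hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar
{code}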

[jira] [Updated] (HDDS-600) Mapreduce example fails with java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported character

2018-10-12 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-600:
--
Target Version/s: 0.3.0

> Mapreduce example fails with java.lang.IllegalArgumentException: Bucket or 
> Volume name has an unsupported character
> ---
>
> Key: HDDS-600
> URL: https://issues.apache.org/jira/browse/HDDS-600
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Hanisha Koneru
>Priority: Blocker
>
> Set up a hadoop cluster where ozone is also installed. Ozone can be 
> referenced via o3://xx.xx.xx.xx:9889
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-02 ~]# ozone sh bucket list 
> o3://xx.xx.xx.xx:9889/volume1/
> 2018-10-09 07:21:24,624 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> [ {
> "volumeName" : "volume1",
> "bucketName" : "bucket1",
> "createdOn" : "Tue, 09 Oct 2018 06:48:02 GMT",
> "acls" : [ {
> "type" : "USER",
> "name" : "root",
> "rights" : "READ_WRITE"
> }, {
> "type" : "GROUP",
> "name" : "root",
> "rights" : "READ_WRITE"
> } ],
> "versioning" : "DISABLED",
> "storageType" : "DISK"
> } ]
> [root@ctr-e138-1518143905142-510793-01-02 ~]# ozone sh key list 
> o3://xx.xx.xx.xx:9889/volume1/bucket1
> 2018-10-09 07:21:54,500 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> [ {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Tue, 09 Oct 2018 06:58:32 GMT",
> "modifiedOn" : "Tue, 09 Oct 2018 06:58:32 GMT",
> "size" : 0,
> "keyName" : "mr_job_dir"
> } ]
> [root@ctr-e138-1518143905142-510793-01-02 ~]#{code}
> Hdfs is also set fine as below
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-02 ~]# hdfs dfs -ls 
> /tmp/mr_jobs/input/
> Found 1 items
> -rw-r--r-- 3 root hdfs 215755 2018-10-09 06:37 
> /tmp/mr_jobs/input/wordcount_input_1.txt
> [root@ctr-e138-1518143905142-510793-01-02 ~]#{code}
> Now try to run Mapreduce example job against ozone o3:
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-02 ~]# 
> /usr/hdp/current/hadoop-client/bin/hadoop jar 
> /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
> wordcount /tmp/mr_jobs/input/ 
> o3://xx.xx.xx.xx:9889/volume1/bucket1/mr_job_dir/output
> 18/10/09 07:15:38 INFO conf.Configuration: Removed undeclared tags:
> java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported 
> character : :
> at 
> org.apache.hadoop.hdds.scm.client.HddsClientUtils.verifyResourceName(HddsClientUtils.java:143)
> at 
> org.apache.hadoop.ozone.client.rpc.RpcClient.getVolumeDetails(RpcClient.java:231)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
> at com.sun.proxy.$Proxy16.getVolumeDetails(Unknown Source)
> at org.apache.hadoop.ozone.client.ObjectStore.getVolume(ObjectStore.java:92)
> at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:121)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
> at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.setOutputPath(FileOutputFormat.java:178)
> at org.apache.hadoop.examples.WordCount.main(WordCount.java:85)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
> at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
> at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> 
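A workaround sketch, given that the host:port authority trips the name check: use the bucket.volume authority form seen elsewhere in these runs and let the client resolve the OM from ozone-site.xml (paths are illustrative):

{code:bash}
# workaround sketch: avoid host:port in the o3 URI authority
/usr/hdp/current/hadoop-client/bin/hadoop jar \
  /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar \
  wordcount /tmp/mr_jobs/input/ o3://bucket1.volume1/mr_job_dir/output
{code}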

[jira] [Updated] (HDDS-623) On SCM UI, Node Manager info is empty

2018-10-12 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-623:
--
Target Version/s: 0.3.0

> On SCM UI, Node Manager info is empty
> -
>
> Key: HDDS-623
> URL: https://issues.apache.org/jira/browse/HDDS-623
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Priority: Major
> Attachments: Screen Shot 2018-10-10 at 4.19.59 PM.png
>
>
> The following fields are empty:
> Node Manager: Minimum chill mode nodes 
> Node Manager: Out-of-node chill mode 
> Node Manager: Chill mode status 
> Node Manager: Manual chill mode
> Please see attached screenshot !Screen Shot 2018-10-10 at 4.19.59 PM.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-611) SCM UI is not reflecting the changes done in ozone-site.xml

2018-10-12 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-611:
--
Target Version/s: 0.3.0

> SCM UI is not reflecting the changes done in ozone-site.xml
> ---
>
> Key: HDDS-611
> URL: https://issues.apache.org/jira/browse/HDDS-611
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Priority: Major
> Attachments: Screen Shot 2018-10-09 at 4.49.58 PM.png
>
>
> ozone-site.xml was updated to change hdds.scm.chillmode.enabled to false. 
> This is reflected properly as below:
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-04 bin]# ./ozone getozoneconf 
> -confKey hdds.scm.chillmode.enabled
> 2018-10-09 23:52:12,621 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> false
> {code}
> But the SCM UI does not reflect this change and it still shows the old value 
> of true. Please see attached screenshot. !Screen Shot 2018-10-09 at 4.49.58 
> PM.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-612) Even after setting hdds.scm.chillmode.enabled to false, SCM allocateblock fails with ChillModePrecheck exception

2018-10-12 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-612:
--
Target Version/s: 0.3.0

> Even after setting hdds.scm.chillmode.enabled to false, SCM allocateblock 
> fails with ChillModePrecheck exception
> 
>
> Key: HDDS-612
> URL: https://issues.apache.org/jira/browse/HDDS-612
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-612.001.patch
>
>
> {code:java}
> 2018-10-09 23:11:58,047 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 0 on 9863, call Call#70 Retry#0 
> org.apache.hadoop.ozone.protocol.ScmBlockLocationProtocol.allocateScmBlock 
> from 172.27.56.9:53442
> org.apache.hadoop.hdds.scm.exceptions.SCMException: ChillModePrecheck failed 
> for allocateBlock
> at 
> org.apache.hadoop.hdds.scm.server.ChillModePrecheck.check(ChillModePrecheck.java:38)
> at 
> org.apache.hadoop.hdds.scm.server.ChillModePrecheck.check(ChillModePrecheck.java:30)
> at org.apache.hadoop.hdds.scm.ScmUtils.preCheck(ScmUtils.java:42)
> at 
> org.apache.hadoop.hdds.scm.block.BlockManagerImpl.allocateBlock(BlockManagerImpl.java:191)
> at 
> org.apache.hadoop.hdds.scm.server.SCMBlockProtocolServer.allocateBlock(SCMBlockProtocolServer.java:143)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.allocateScmBlock(ScmBlockLocationProtocolServerSideTranslatorPB.java:74)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:6255)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {code}
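
A minimal sketch of the behavior this issue asks for: an allocateBlock precheck that honors the disable flag. Class and method names here are illustrative, not the actual SCM code; only the config key comes from this report:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Illustrative only: a chill-mode gate that consults the config flag
// before failing the call, instead of throwing unconditionally.
public class ChillModeGate {
  private final boolean chillModeEnabled;
  private volatile boolean inChillMode = true;

  public ChillModeGate(Configuration conf) {
    // the same key the reporter verified with "ozone getozoneconf"
    this.chillModeEnabled =
        conf.getBoolean("hdds.scm.chillmode.enabled", true);
  }

  public void exitChillMode() {
    inChillMode = false;
  }

  // Throws only when chill mode is both enabled and still active.
  public void check(String op) {
    if (chillModeEnabled && inChillMode) {
      throw new IllegalStateException("ChillModePrecheck failed for " + op);
    }
  }
}
{code}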



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-637) Not able to access the part-r-00000 file after the MR job succeeds

2018-10-12 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-637:
--
Target Version/s: 0.3.0

> Not able to access the part-r-00000 file after the MR job succeeds
> --
>
> Key: HDDS-637
> URL: https://issues.apache.org/jira/browse/HDDS-637
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.3.0
>Reporter: Namit Maheshwari
>Priority: Major
>
> Run a MR job
> {code:java}
> -bash-4.2$ /usr/hdp/current/hadoop-client/bin/hadoop jar 
> /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
> wordcount /tmp/mr_jobs/input/ o3://bucket2.volume2/mr_jobDD
> 18/10/12 01:00:23 INFO conf.Configuration: Removed undeclared tags:
> 18/10/12 01:00:24 INFO conf.Configuration: Removed undeclared tags:
> 18/10/12 01:00:24 INFO conf.Configuration: Removed undeclared tags:
> 18/10/12 01:00:25 INFO conf.Configuration: Removed undeclared tags:
> 18/10/12 01:00:25 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
> to rm2
> 18/10/12 01:00:25 INFO mapreduce.JobResourceUploader: Disabling Erasure 
> Coding for path: /user/hdfs/.staging/job_1539295307098_0003
> 18/10/12 01:00:27 INFO input.FileInputFormat: Total input files to process : 1
> 18/10/12 01:00:27 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
> 18/10/12 01:00:27 INFO lzo.LzoCodec: Successfully loaded & initialized 
> native-lzo library [hadoop-lzo rev 5d6248d8d690f8456469979213ab2e9993bfa2e9]
> 18/10/12 01:00:27 INFO mapreduce.JobSubmitter: number of splits:1
> 18/10/12 01:00:28 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
> job_1539295307098_0003
> 18/10/12 01:00:28 INFO mapreduce.JobSubmitter: Executing with tokens: []
> 18/10/12 01:00:28 INFO conf.Configuration: Removed undeclared tags:
> 18/10/12 01:00:28 INFO conf.Configuration: found resource resource-types.xml 
> at file:/etc/hadoop/3.0.3.0-63/0/resource-types.xml
> 18/10/12 01:00:28 INFO conf.Configuration: Removed undeclared tags:
> 18/10/12 01:00:28 INFO impl.YarnClientImpl: Submitted application 
> application_1539295307098_0003
> 18/10/12 01:00:29 INFO mapreduce.Job: The url to track the job: 
> http://ctr-e138-1518143905142-510793-01-05.hwx.site:8088/proxy/application_1539295307098_0003/
> 18/10/12 01:00:29 INFO mapreduce.Job: Running job: job_1539295307098_0003
> 18/10/12 01:00:35 INFO mapreduce.Job: Job job_1539295307098_0003 running in 
> uber mode : false
> 18/10/12 01:00:35 INFO mapreduce.Job: map 0% reduce 0%
> 18/10/12 01:00:44 INFO mapreduce.Job: map 100% reduce 0%
> 18/10/12 01:00:57 INFO mapreduce.Job: map 100% reduce 67%
> 18/10/12 01:00:59 INFO mapreduce.Job: map 100% reduce 100%
> 18/10/12 01:00:59 INFO mapreduce.Job: Job job_1539295307098_0003 completed 
> successfully
> 18/10/12 01:00:59 INFO conf.Configuration: Removed undeclared tags:
> 18/10/12 01:00:59 INFO mapreduce.Job: Counters: 58
> File System Counters
> FILE: Number of bytes read=6332
> FILE: Number of bytes written=532585
> FILE: Number of read operations=0
> FILE: Number of large read operations=0
> FILE: Number of write operations=0
> HDFS: Number of bytes read=215876
> HDFS: Number of bytes written=0
> HDFS: Number of read operations=2
> HDFS: Number of large read operations=0
> HDFS: Number of write operations=0
> O3: Number of bytes read=0
> O3: Number of bytes written=0
> O3: Number of read operations=0
> O3: Number of large read operations=0
> O3: Number of write operations=0
> Job Counters
> Launched map tasks=1
> Launched reduce tasks=1
> Rack-local map tasks=1
> Total time spent by all maps in occupied slots (ms)=25392
> Total time spent by all reduces in occupied slots (ms)=103584
> Total time spent by all map tasks (ms)=6348
> Total time spent by all reduce tasks (ms)=12948
> Total vcore-milliseconds taken by all map tasks=6348
> Total vcore-milliseconds taken by all reduce tasks=12948
> Total megabyte-milliseconds taken by all map tasks=26001408
> Total megabyte-milliseconds taken by all reduce tasks=106070016
> Map-Reduce Framework
> Map input records=716
> Map output records=32019
> Map output bytes=343475
> Map output materialized bytes=6332
> Input split bytes=121
> Combine input records=32019
> Combine output records=461
> Reduce input groups=461
> Reduce shuffle bytes=6332
> Reduce input records=461
> Reduce output records=461
> Spilled Records=922
> Shuffled Maps =1
> Failed Shuffles=0
> Merged Map outputs=1
> GC time elapsed (ms)=359
> CPU time spent (ms)=11800
> Physical memory (bytes) snapshot=3018502144
> Virtual memory (bytes) snapshot=14470242304
> Total committed heap usage (bytes)=3521642496
> Peak Map Physical memory (bytes)=2518896640
> Peak Map Virtual memory (bytes)=5397549056
> Peak Reduce Physical memory (bytes)=499605504
> Peak Reduce Virtual 

[jira] [Updated] (HDDS-637) Not able to access the part-r-00000 file after the MR job succeeds

2018-10-11 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-637:
--
Description: 
Run a MR job
{code:java}
-bash-4.2$ /usr/hdp/current/hadoop-client/bin/hadoop jar 
/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
wordcount /tmp/mr_jobs/input/ o3://bucket2.volume2/mr_jobDD
18/10/12 01:00:23 INFO conf.Configuration: Removed undeclared tags:
18/10/12 01:00:24 INFO conf.Configuration: Removed undeclared tags:
18/10/12 01:00:24 INFO conf.Configuration: Removed undeclared tags:
18/10/12 01:00:25 INFO conf.Configuration: Removed undeclared tags:
18/10/12 01:00:25 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
to rm2
18/10/12 01:00:25 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding 
for path: /user/hdfs/.staging/job_1539295307098_0003
18/10/12 01:00:27 INFO input.FileInputFormat: Total input files to process : 1
18/10/12 01:00:27 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
18/10/12 01:00:27 INFO lzo.LzoCodec: Successfully loaded & initialized 
native-lzo library [hadoop-lzo rev 5d6248d8d690f8456469979213ab2e9993bfa2e9]
18/10/12 01:00:27 INFO mapreduce.JobSubmitter: number of splits:1
18/10/12 01:00:28 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
job_1539295307098_0003
18/10/12 01:00:28 INFO mapreduce.JobSubmitter: Executing with tokens: []
18/10/12 01:00:28 INFO conf.Configuration: Removed undeclared tags:
18/10/12 01:00:28 INFO conf.Configuration: found resource resource-types.xml at 
file:/etc/hadoop/3.0.3.0-63/0/resource-types.xml
18/10/12 01:00:28 INFO conf.Configuration: Removed undeclared tags:
18/10/12 01:00:28 INFO impl.YarnClientImpl: Submitted application 
application_1539295307098_0003
18/10/12 01:00:29 INFO mapreduce.Job: The url to track the job: 
http://ctr-e138-1518143905142-510793-01-05.hwx.site:8088/proxy/application_1539295307098_0003/
18/10/12 01:00:29 INFO mapreduce.Job: Running job: job_1539295307098_0003
18/10/12 01:00:35 INFO mapreduce.Job: Job job_1539295307098_0003 running in 
uber mode : false
18/10/12 01:00:35 INFO mapreduce.Job: map 0% reduce 0%
18/10/12 01:00:44 INFO mapreduce.Job: map 100% reduce 0%
18/10/12 01:00:57 INFO mapreduce.Job: map 100% reduce 67%
18/10/12 01:00:59 INFO mapreduce.Job: map 100% reduce 100%
18/10/12 01:00:59 INFO mapreduce.Job: Job job_1539295307098_0003 completed 
successfully
18/10/12 01:00:59 INFO conf.Configuration: Removed undeclared tags:
18/10/12 01:00:59 INFO mapreduce.Job: Counters: 58
File System Counters
FILE: Number of bytes read=6332
FILE: Number of bytes written=532585
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=215876
HDFS: Number of bytes written=0
HDFS: Number of read operations=2
HDFS: Number of large read operations=0
HDFS: Number of write operations=0
O3: Number of bytes read=0
O3: Number of bytes written=0
O3: Number of read operations=0
O3: Number of large read operations=0
O3: Number of write operations=0
Job Counters
Launched map tasks=1
Launched reduce tasks=1
Rack-local map tasks=1
Total time spent by all maps in occupied slots (ms)=25392
Total time spent by all reduces in occupied slots (ms)=103584
Total time spent by all map tasks (ms)=6348
Total time spent by all reduce tasks (ms)=12948
Total vcore-milliseconds taken by all map tasks=6348
Total vcore-milliseconds taken by all reduce tasks=12948
Total megabyte-milliseconds taken by all map tasks=26001408
Total megabyte-milliseconds taken by all reduce tasks=106070016
Map-Reduce Framework
Map input records=716
Map output records=32019
Map output bytes=343475
Map output materialized bytes=6332
Input split bytes=121
Combine input records=32019
Combine output records=461
Reduce input groups=461
Reduce shuffle bytes=6332
Reduce input records=461
Reduce output records=461
Spilled Records=922
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=359
CPU time spent (ms)=11800
Physical memory (bytes) snapshot=3018502144
Virtual memory (bytes) snapshot=14470242304
Total committed heap usage (bytes)=3521642496
Peak Map Physical memory (bytes)=2518896640
Peak Map Virtual memory (bytes)=5397549056
Peak Reduce Physical memory (bytes)=499605504
Peak Reduce Virtual memory (bytes)=9072693248
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=215755
File Output Format Counters
Bytes Written=0
18/10/12 01:00:59 INFO conf.Configuration: Removed undeclared tags:
-bash-4.2$
{code}
The exception below is seen in the SCM logs:
{code:java}
2018-10-12 01:00:51,142 INFO 
org.apache.hadoop.hdds.scm.pipelines.ratis.RatisManagerImpl: Allocating a new 
ratis pipeline of size: 3 id: pipelineId=85cfe1ff-a47c-4ee7-aadb-9c7a42d80192
2018-10-12 01:00:51,988 INFO 
org.apache.hadoop.hdds.scm.pipelines.PipelineSelector: closing 

[jira] [Created] (HDDS-637) Not able to access the part-r-00000 file after the MR job succeeds

2018-10-11 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-637:
-

 Summary: Not able to access the part-r-00000 file after the MR job 
succeeds
 Key: HDDS-637
 URL: https://issues.apache.org/jira/browse/HDDS-637
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Affects Versions: 0.3.0
Reporter: Namit Maheshwari


Run a MR job
{code:java}
-bash-4.2$ /usr/hdp/current/hadoop-client/bin/hadoop jar 
/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
wordcount /tmp/mr_jobs/input/ o3://bucket2.volume2/mr_jobDD
18/10/12 01:00:23 INFO conf.Configuration: Removed undeclared tags:
18/10/12 01:00:24 INFO conf.Configuration: Removed undeclared tags:
18/10/12 01:00:24 INFO conf.Configuration: Removed undeclared tags:
18/10/12 01:00:25 INFO conf.Configuration: Removed undeclared tags:
18/10/12 01:00:25 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
to rm2
18/10/12 01:00:25 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding 
for path: /user/hdfs/.staging/job_1539295307098_0003
18/10/12 01:00:27 INFO input.FileInputFormat: Total input files to process : 1
18/10/12 01:00:27 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
18/10/12 01:00:27 INFO lzo.LzoCodec: Successfully loaded & initialized 
native-lzo library [hadoop-lzo rev 5d6248d8d690f8456469979213ab2e9993bfa2e9]
18/10/12 01:00:27 INFO mapreduce.JobSubmitter: number of splits:1
18/10/12 01:00:28 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
job_1539295307098_0003
18/10/12 01:00:28 INFO mapreduce.JobSubmitter: Executing with tokens: []
18/10/12 01:00:28 INFO conf.Configuration: Removed undeclared tags:
18/10/12 01:00:28 INFO conf.Configuration: found resource resource-types.xml at 
file:/etc/hadoop/3.0.3.0-63/0/resource-types.xml
18/10/12 01:00:28 INFO conf.Configuration: Removed undeclared tags:
18/10/12 01:00:28 INFO impl.YarnClientImpl: Submitted application 
application_1539295307098_0003
18/10/12 01:00:29 INFO mapreduce.Job: The url to track the job: 
http://ctr-e138-1518143905142-510793-01-05.hwx.site:8088/proxy/application_1539295307098_0003/
18/10/12 01:00:29 INFO mapreduce.Job: Running job: job_1539295307098_0003
18/10/12 01:00:35 INFO mapreduce.Job: Job job_1539295307098_0003 running in 
uber mode : false
18/10/12 01:00:35 INFO mapreduce.Job: map 0% reduce 0%
18/10/12 01:00:44 INFO mapreduce.Job: map 100% reduce 0%
18/10/12 01:00:57 INFO mapreduce.Job: map 100% reduce 67%
18/10/12 01:00:59 INFO mapreduce.Job: map 100% reduce 100%
18/10/12 01:00:59 INFO mapreduce.Job: Job job_1539295307098_0003 completed 
successfully
18/10/12 01:00:59 INFO conf.Configuration: Removed undeclared tags:
18/10/12 01:00:59 INFO mapreduce.Job: Counters: 58
File System Counters
FILE: Number of bytes read=6332
FILE: Number of bytes written=532585
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=215876
HDFS: Number of bytes written=0
HDFS: Number of read operations=2
HDFS: Number of large read operations=0
HDFS: Number of write operations=0
O3: Number of bytes read=0
O3: Number of bytes written=0
O3: Number of read operations=0
O3: Number of large read operations=0
O3: Number of write operations=0
Job Counters
Launched map tasks=1
Launched reduce tasks=1
Rack-local map tasks=1
Total time spent by all maps in occupied slots (ms)=25392
Total time spent by all reduces in occupied slots (ms)=103584
Total time spent by all map tasks (ms)=6348
Total time spent by all reduce tasks (ms)=12948
Total vcore-milliseconds taken by all map tasks=6348
Total vcore-milliseconds taken by all reduce tasks=12948
Total megabyte-milliseconds taken by all map tasks=26001408
Total megabyte-milliseconds taken by all reduce tasks=106070016
Map-Reduce Framework
Map input records=716
Map output records=32019
Map output bytes=343475
Map output materialized bytes=6332
Input split bytes=121
Combine input records=32019
Combine output records=461
Reduce input groups=461
Reduce shuffle bytes=6332
Reduce input records=461
Reduce output records=461
Spilled Records=922
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=359
CPU time spent (ms)=11800
Physical memory (bytes) snapshot=3018502144
Virtual memory (bytes) snapshot=14470242304
Total committed heap usage (bytes)=3521642496
Peak Map Physical memory (bytes)=2518896640
Peak Map Virtual memory (bytes)=5397549056
Peak Reduce Physical memory (bytes)=499605504
Peak Reduce Virtual memory (bytes)=9072693248
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=215755
File Output Format Counters
Bytes Written=0
18/10/12 01:00:59 INFO conf.Configuration: Removed undeclared tags:
-bash-4.2$
{code}
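Before the SCM logs, a quick way to check what actually landed under the output key; a hedged sketch reusing the CLI forms that appear elsewhere in this thread, with <om-host> standing in for the OM address:

{code}
# List the keys the job wrote, via the Ozone shell
ozone sh key list o3://<om-host>:9889/volume2/bucket2

# ...or via the filesystem view the job itself used
hadoop fs -ls o3://bucket2.volume2/mr_jobDD
{code}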
The exception below is seen in the SCM logs:
{code:java}

2018-10-12 01:00:51,142 INFO 

[jira] [Commented] (HDDS-600) Mapreduce example fails with java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported character

2018-10-11 Thread Namit Maheshwari (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646991#comment-16646991
 ] 

Namit Maheshwari commented on HDDS-600:
---

I think we can close this issue, but we might want to keep track of the changes 
required to mapred-site.xml and yarn-site.xml in some documentation.

> Mapreduce example fails with java.lang.IllegalArgumentException: Bucket or 
> Volume name has an unsupported character
> ---
>
> Key: HDDS-600
> URL: https://issues.apache.org/jira/browse/HDDS-600
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Hanisha Koneru
>Priority: Blocker
>
> Set up a hadoop cluster where ozone is also installed. Ozone can be 
> referenced via o3://xx.xx.xx.xx:9889
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-02 ~]# ozone sh bucket list 
> o3://xx.xx.xx.xx:9889/volume1/
> 2018-10-09 07:21:24,624 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> [ {
> "volumeName" : "volume1",
> "bucketName" : "bucket1",
> "createdOn" : "Tue, 09 Oct 2018 06:48:02 GMT",
> "acls" : [ {
> "type" : "USER",
> "name" : "root",
> "rights" : "READ_WRITE"
> }, {
> "type" : "GROUP",
> "name" : "root",
> "rights" : "READ_WRITE"
> } ],
> "versioning" : "DISABLED",
> "storageType" : "DISK"
> } ]
> [root@ctr-e138-1518143905142-510793-01-02 ~]# ozone sh key list 
> o3://xx.xx.xx.xx:9889/volume1/bucket1
> 2018-10-09 07:21:54,500 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> [ {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Tue, 09 Oct 2018 06:58:32 GMT",
> "modifiedOn" : "Tue, 09 Oct 2018 06:58:32 GMT",
> "size" : 0,
> "keyName" : "mr_job_dir"
> } ]
> [root@ctr-e138-1518143905142-510793-01-02 ~]#{code}
> Hdfs is also set fine as below
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-02 ~]# hdfs dfs -ls 
> /tmp/mr_jobs/input/
> Found 1 items
> -rw-r--r-- 3 root hdfs 215755 2018-10-09 06:37 
> /tmp/mr_jobs/input/wordcount_input_1.txt
> [root@ctr-e138-1518143905142-510793-01-02 ~]#{code}
> Now try to run Mapreduce example job against ozone o3:
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-02 ~]# 
> /usr/hdp/current/hadoop-client/bin/hadoop jar 
> /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
> wordcount /tmp/mr_jobs/input/ 
> o3://xx.xx.xx.xx:9889/volume1/bucket1/mr_job_dir/output
> 18/10/09 07:15:38 INFO conf.Configuration: Removed undeclared tags:
> java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported 
> character : :
> at 
> org.apache.hadoop.hdds.scm.client.HddsClientUtils.verifyResourceName(HddsClientUtils.java:143)
> at 
> org.apache.hadoop.ozone.client.rpc.RpcClient.getVolumeDetails(RpcClient.java:231)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
> at com.sun.proxy.$Proxy16.getVolumeDetails(Unknown Source)
> at org.apache.hadoop.ozone.client.ObjectStore.getVolume(ObjectStore.java:92)
> at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:121)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
> at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.setOutputPath(FileOutputFormat.java:178)
> at org.apache.hadoop.examples.WordCount.main(WordCount.java:85)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
> at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
> at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> 

[jira] [Created] (HDDS-624) PutBlock fails with Unexpected Storage Container Exception

2018-10-10 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-624:
-

 Summary: PutBlock fails with Unexpected Storage Container Exception
 Key: HDDS-624
 URL: https://issues.apache.org/jira/browse/HDDS-624
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


As per HDDS-622, Datanodes were shutting down while running MR jobs due to an 
issue in RocksDBStore. To avoid that failure, set the property 
_ozone.metastore.rocksdb.statistics_ to _OFF_ in ozone-site.xml, as sketched 
below.
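For concreteness, that workaround as an ozone-site.xml snippet; the property name and value are taken verbatim from the sentence above:

{code:xml}
<property>
  <name>ozone.metastore.rocksdb.statistics</name>
  <value>OFF</value>
</property>
{code}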

Now running Mapreduce job fails with below error
{code:java}
-bash-4.2$ /usr/hdp/current/hadoop-client/bin/hadoop jar 
/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
wordcount /tmp/mr_jobs/input/ o3://bucket2.volume2/mr_jobAA
18/10/11 00:14:41 INFO conf.Configuration: Removed undeclared tags:
18/10/11 00:14:42 INFO conf.Configuration: Removed undeclared tags:
18/10/11 00:14:42 INFO conf.Configuration: Removed undeclared tags:
18/10/11 00:14:43 INFO conf.Configuration: Removed undeclared tags:
18/10/11 00:14:43 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
to rm2
18/10/11 00:14:43 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding 
for path: /user/hdfs/.staging/job_1539208750583_0005
18/10/11 00:14:43 INFO input.FileInputFormat: Total input files to process : 1
18/10/11 00:14:43 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
18/10/11 00:14:43 INFO lzo.LzoCodec: Successfully loaded & initialized 
native-lzo library [hadoop-lzo rev 5d6248d8d690f8456469979213ab2e9993bfa2e9]
18/10/11 00:14:44 INFO mapreduce.JobSubmitter: number of splits:1
18/10/11 00:14:44 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
job_1539208750583_0005
18/10/11 00:14:44 INFO mapreduce.JobSubmitter: Executing with tokens: []
18/10/11 00:14:44 INFO conf.Configuration: Removed undeclared tags:
18/10/11 00:14:44 INFO conf.Configuration: found resource resource-types.xml at 
file:/etc/hadoop/3.0.3.0-63/0/resource-types.xml
18/10/11 00:14:44 INFO conf.Configuration: Removed undeclared tags:
18/10/11 00:14:44 INFO impl.YarnClientImpl: Submitted application 
application_1539208750583_0005
18/10/11 00:14:45 INFO mapreduce.Job: The url to track the job: 
http://ctr-e138-1518143905142-510793-01-05.hwx.site:8088/proxy/application_1539208750583_0005/
18/10/11 00:14:45 INFO mapreduce.Job: Running job: job_1539208750583_0005
18/10/11 00:14:53 INFO mapreduce.Job: Job job_1539208750583_0005 running in 
uber mode : false
18/10/11 00:14:53 INFO mapreduce.Job: map 0% reduce 0%
18/10/11 00:15:00 INFO mapreduce.Job: map 100% reduce 0%
18/10/11 00:15:10 INFO mapreduce.Job: map 100% reduce 67%
18/10/11 00:15:11 INFO mapreduce.Job: Task Id : 
attempt_1539208750583_0005_r_000000_0, Status : FAILED
Error: java.io.IOException: Unexpected Storage Container Exception: 
java.io.IOException: Failed to command cmdType: PutBlock
traceID: "df0ed956-fa4d-40ef-a7f2-ec0b6160b41b"
containerID: 2
datanodeUuid: "96f8fa78-413e-4350-a8ff-6cbdaa16ba7f"
putBlock {
blockData {
blockID {
containerID: 2
localID: 100874119214399488
}
metadata {
key: "TYPE"
value: "KEY"
}
chunks {
chunkName: 
"f24fa36171bda3113584cb01dc12a871_stream_84157b3a-654d-4e3d-8455-fbf85321a306_chunk_1"
offset: 0
len: 5017
}
}
}

at 
org.apache.hadoop.hdds.scm.storage.ChunkOutputStream.close(ChunkOutputStream.java:171)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream$ChunkOutputStreamEntry.close(ChunkGroupOutputStream.java:699)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.handleFlushOrClose(ChunkGroupOutputStream.java:502)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.close(ChunkGroupOutputStream.java:531)
at 
org.apache.hadoop.fs.ozone.OzoneFSOutputStream.close(OzoneFSOutputStream.java:57)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
at 
org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:106)
at 
org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.close(ReduceTask.java:551)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:630)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:390)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
Caused by: java.io.IOException: Failed to command cmdType: PutBlock
traceID: "df0ed956-fa4d-40ef-a7f2-ec0b6160b41b"
containerID: 2
datanodeUuid: "96f8fa78-413e-4350-a8ff-6cbdaa16ba7f"
putBlock {
blockData {
blockID {
containerID: 2
localID: 100874119214399488
}
metadata {
key: "TYPE"
value: "KEY"
}
chunks {
chunkName: 

[jira] [Commented] (HDDS-600) Mapreduce example fails with java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported character

2018-10-10 Thread Namit Maheshwari (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645715#comment-16645715
 ] 

Namit Maheshwari commented on HDDS-600:
---

So, the hadoop classpath had the ozone jars as above, but the Mapreduce jobs 
were still failing to pick them up.

In order to proceed further, added the ozone plugin and ozone filesystem jars 
to:
 # mapred-site.xml -> the mapreduce.application.classpath property
 # yarn-site.xml -> the yarn.application.classpath property

{code:java}
/tmp/ozone-0.4.0-SNAPSHOT/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin-0.4.0-SNAPSHOT.jar,/tmp/ozone-0.4.0-SNAPSHOT/share/hadoop/ozonefs/hadoop-ozone-filesystem-0.4.0-SNAPSHOT.jar{code}
After this it was able to pick up the ozone filesystem jar; the resulting entries are sketched below.
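
A sketch of those entries, with the surrounding classpath values elided; the jar paths are the ones listed above:

{code:xml}
<!-- mapred-site.xml -->
<property>
  <name>mapreduce.application.classpath</name>
  <value>...,/tmp/ozone-0.4.0-SNAPSHOT/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin-0.4.0-SNAPSHOT.jar,/tmp/ozone-0.4.0-SNAPSHOT/share/hadoop/ozonefs/hadoop-ozone-filesystem-0.4.0-SNAPSHOT.jar</value>
</property>

<!-- yarn-site.xml -->
<property>
  <name>yarn.application.classpath</name>
  <value>...,/tmp/ozone-0.4.0-SNAPSHOT/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin-0.4.0-SNAPSHOT.jar,/tmp/ozone-0.4.0-SNAPSHOT/share/hadoop/ozonefs/hadoop-ozone-filesystem-0.4.0-SNAPSHOT.jar</value>
</property>
{code}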

> Mapreduce example fails with java.lang.IllegalArgumentException: Bucket or 
> Volume name has an unsupported character
> ---
>
> Key: HDDS-600
> URL: https://issues.apache.org/jira/browse/HDDS-600
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Hanisha Koneru
>Priority: Blocker
>
> Set up a hadoop cluster where ozone is also installed. Ozone can be 
> referenced via o3://xx.xx.xx.xx:9889
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-02 ~]# ozone sh bucket list 
> o3://xx.xx.xx.xx:9889/volume1/
> 2018-10-09 07:21:24,624 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> [ {
> "volumeName" : "volume1",
> "bucketName" : "bucket1",
> "createdOn" : "Tue, 09 Oct 2018 06:48:02 GMT",
> "acls" : [ {
> "type" : "USER",
> "name" : "root",
> "rights" : "READ_WRITE"
> }, {
> "type" : "GROUP",
> "name" : "root",
> "rights" : "READ_WRITE"
> } ],
> "versioning" : "DISABLED",
> "storageType" : "DISK"
> } ]
> [root@ctr-e138-1518143905142-510793-01-02 ~]# ozone sh key list 
> o3://xx.xx.xx.xx:9889/volume1/bucket1
> 2018-10-09 07:21:54,500 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> [ {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Tue, 09 Oct 2018 06:58:32 GMT",
> "modifiedOn" : "Tue, 09 Oct 2018 06:58:32 GMT",
> "size" : 0,
> "keyName" : "mr_job_dir"
> } ]
> [root@ctr-e138-1518143905142-510793-01-02 ~]#{code}
> Hdfs is also set fine as below
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-02 ~]# hdfs dfs -ls 
> /tmp/mr_jobs/input/
> Found 1 items
> -rw-r--r-- 3 root hdfs 215755 2018-10-09 06:37 
> /tmp/mr_jobs/input/wordcount_input_1.txt
> [root@ctr-e138-1518143905142-510793-01-02 ~]#{code}
> Now try to run Mapreduce example job against ozone o3:
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-02 ~]# 
> /usr/hdp/current/hadoop-client/bin/hadoop jar 
> /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
> wordcount /tmp/mr_jobs/input/ 
> o3://xx.xx.xx.xx:9889/volume1/bucket1/mr_job_dir/output
> 18/10/09 07:15:38 INFO conf.Configuration: Removed undeclared tags:
> java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported 
> character : :
> at 
> org.apache.hadoop.hdds.scm.client.HddsClientUtils.verifyResourceName(HddsClientUtils.java:143)
> at 
> org.apache.hadoop.ozone.client.rpc.RpcClient.getVolumeDetails(RpcClient.java:231)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
> at com.sun.proxy.$Proxy16.getVolumeDetails(Unknown Source)
> at org.apache.hadoop.ozone.client.ObjectStore.getVolume(ObjectStore.java:92)
> at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:121)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
> at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.setOutputPath(FileOutputFormat.java:178)
> at org.apache.hadoop.examples.WordCount.main(WordCount.java:85)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> 

[jira] [Created] (HDDS-623) On SCM UI, Node Manager info is empty

2018-10-10 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-623:
-

 Summary: On SCM UI, Node Manager info is empty
 Key: HDDS-623
 URL: https://issues.apache.org/jira/browse/HDDS-623
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari
 Attachments: Screen Shot 2018-10-10 at 4.19.59 PM.png

The following fields are empty:

Node Manager: Minimum chill mode nodes 
Node Manager: Out-of-node chill mode 
Node Manager: Chill mode status 
Node Manager: Manual chill mode

Please see attached screenshot !Screen Shot 2018-10-10 at 4.19.59 PM.png!
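
The SCM web UI typically renders these fields from the JMX beans of the running process, so a direct JMX query can tell whether the bean itself is empty or only the page render is broken. A hedged sketch, assuming the default SCM HTTP port 9876; the grep pattern is a guess:

{code}
curl -s 'http://<scm-host>:9876/jmx' | grep -i chill
{code}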



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-622) Datanode shuts down with RocksDBStore java.lang.NoSuchMethodError

2018-10-10 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-622:
-

 Summary: Datanode shuts down with RocksDBStore 
java.lang.NoSuchMethodError
 Key: HDDS-622
 URL: https://issues.apache.org/jira/browse/HDDS-622
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


Datanodes are registered fine on a Hadoop + Ozone cluster.

While running jobs against Ozone, Datanode shuts down as below:
{code:java}
2018-10-10 21:50:42,708 INFO storage.RaftLogWorker 
(RaftLogWorker.java:rollLogSegment(263)) - Rolling 
segment:7c1a32b5-34ed-4a2a-aa07-ac75d25858b6-RaftLogWorker index to:2
2018-10-10 21:50:42,714 INFO impl.RaftServerImpl 
(ServerState.java:setRaftConf(319)) - 7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: set 
configuration 2: [7c1a32b5-34ed-4a2a-aa07-ac75d25858b6:172.27.56.9:9858, 
e20b6291-d898-46de-8cb2-861523aed1a3:172.27.87.64:9858, 
b7fbd501-27ae-4304-8c42-a612915094c6:172.27.17.133:9858], old=null at 2
2018-10-10 21:50:42,729 WARN impl.LogAppender (LogUtils.java:warn(135)) - 
7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: Failed appendEntries to 
e20b6291-d898-46de-8cb2-861523aed1a3:172.27.87.64:9858: 
org.apache.ratis.shaded.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
2018-10-10 21:50:43,245 WARN impl.LogAppender (LogUtils.java:warn(135)) - 
7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: Failed appendEntries to 
e20b6291-d898-46de-8cb2-861523aed1a3:172.27.87.64:9858: 
org.apache.ratis.shaded.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
2018-10-10 21:50:43,310 ERROR impl.RaftServerImpl 
(RaftServerImpl.java:applyLogToStateMachine(1153)) - 
7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: applyTransaction failed for index:1 
proto:(t:2, i:1)SMLOGENTRY, client-894EC0846FDF, cid=0
2018-10-10 21:50:43,313 ERROR impl.StateMachineUpdater 
(ExitUtils.java:terminate(86)) - Terminating with exit status 2: 
StateMachineUpdater-7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: the 
StateMachineUpdater hits Throwable
java.lang.NoSuchMethodError: 
org.apache.hadoop.metrics2.util.MBeans.register(Ljava/lang/String;Ljava/lang/String;Ljava/util/Map;Ljava/lang/Object;)Ljavax/management/ObjectName;
at org.apache.hadoop.utils.RocksDBStore.<init>(RocksDBStore.java:74)
at 
org.apache.hadoop.utils.MetadataStoreBuilder.build(MetadataStoreBuilder.java:142)
at 
org.apache.hadoop.ozone.container.keyvalue.helpers.KeyValueContainerUtil.createContainerMetaData(KeyValueContainerUtil.java:78)
at 
org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.create(KeyValueContainer.java:133)
at 
org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handleCreateContainer(KeyValueHandler.java:256)
at 
org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handle(KeyValueHandler.java:179)
at 
org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:142)
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:223)
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.runCommand(ContainerStateMachine.java:229)
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.access$300(ContainerStateMachine.java:115)
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine$StateMachineHelper.handleCreateContainer(ContainerStateMachine.java:618)
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine$StateMachineHelper.executeContainerCommand(ContainerStateMachine.java:642)
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.applyTransaction(ContainerStateMachine.java:396)
at 
org.apache.ratis.server.impl.RaftServerImpl.applyLogToStateMachine(RaftServerImpl.java:1150)
at 
org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:148)
at java.lang.Thread.run(Thread.java:748)
2018-10-10 21:50:43,320 INFO datanode.DataNode (LogAdapter.java:info(51)) - 
SHUTDOWN_MSG:
/
SHUTDOWN_MSG: Shutting down DataNode at 
ctr-e138-1518143905142-510793-01-02.hwx.site/172.27.56.9
/
{code}
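 
The NoSuchMethodError usually points at a hadoop-common version mismatch on the datanode classpath: the four-argument MBeans.register overload in the trace is a newer addition, so an older hadoop-common jar leaking in would produce exactly this failure. A small diagnostic sketch, not part of any fix, to print which jar actually supplies the class at runtime:

{code:java}
// Diagnostic only: prints the jar that provides MBeans on this JVM's
// classpath, to spot a stale hadoop-common version leaking in.
public class WhichMBeans {
  public static void main(String[] args) {
    Class<?> c = org.apache.hadoop.metrics2.util.MBeans.class;
    // may print null for bootstrap classes; hadoop-common comes from a jar
    System.out.println(c.getProtectionDomain().getCodeSource().getLocation());
  }
}
{code}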
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-612) Even after setting hdds.scm.chillmode.enabled to false, SCM allocateblock fails with ChillModePrecheck exception

2018-10-09 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-612:
-

 Summary: Even after setting hdds.scm.chillmode.enabled to false, 
SCM allocateblock fails with ChillModePrecheck exception
 Key: HDDS-612
 URL: https://issues.apache.org/jira/browse/HDDS-612
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


{code:java}
2018-10-09 23:11:58,047 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 
on 9863, call Call#70 Retry#0 
org.apache.hadoop.ozone.protocol.ScmBlockLocationProtocol.allocateScmBlock from 
172.27.56.9:53442
org.apache.hadoop.hdds.scm.exceptions.SCMException: ChillModePrecheck failed 
for allocateBlock
at 
org.apache.hadoop.hdds.scm.server.ChillModePrecheck.check(ChillModePrecheck.java:38)
at 
org.apache.hadoop.hdds.scm.server.ChillModePrecheck.check(ChillModePrecheck.java:30)
at org.apache.hadoop.hdds.scm.ScmUtils.preCheck(ScmUtils.java:42)
at 
org.apache.hadoop.hdds.scm.block.BlockManagerImpl.allocateBlock(BlockManagerImpl.java:191)
at 
org.apache.hadoop.hdds.scm.server.SCMBlockProtocolServer.allocateBlock(SCMBlockProtocolServer.java:143)
at 
org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.allocateScmBlock(ScmBlockLocationProtocolServerSideTranslatorPB.java:74)
at 
org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:6255)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-611) SCM UI is not reflecting the changes done in ozone-site.xml

2018-10-09 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-611:
-

 Summary: SCM UI is not reflecting the changes done in 
ozone-site.xml
 Key: HDDS-611
 URL: https://issues.apache.org/jira/browse/HDDS-611
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari
 Attachments: Screen Shot 2018-10-09 at 4.49.58 PM.png

ozone-site.xml was updated to change hdds.scm.chillmode.enabled to false. This 
is reflected properly as below:
{code:java}
[root@ctr-e138-1518143905142-510793-01-04 bin]# ./ozone getozoneconf 
-confKey hdds.scm.chillmode.enabled
2018-10-09 23:52:12,621 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
false
{code}
But the SCM UI does not reflect this change and it still shows the old value of 
true. Please see attached screenshot. !Screen Shot 2018-10-09 at 4.49.58 PM.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-610) On restart of SCM it fails to register DataNodes

2018-10-09 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-610:
-

 Summary: On restart of SCM it fails to register DataNodes
 Key: HDDS-610
 URL: https://issues.apache.org/jira/browse/HDDS-610
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


{code:java}
2018-10-09 23:34:11,105 INFO 
org.apache.hadoop.hdds.scm.server.StorageContainerManager: STARTUP_MSG:
/
STARTUP_MSG: Starting StorageContainerManager
STARTUP_MSG: host = 
ctr-e138-1518143905142-510793-01-04.hwx.site/172.27.79.197
STARTUP_MSG: args = []
STARTUP_MSG: version = 3.3.0-SNAPSHOT
STARTUP_MSG: classpath = 

[jira] [Created] (HDDS-609) Mapreduce example fails with Allocate block failed, error:INTERNAL_ERROR

2018-10-09 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-609:
-

 Summary: Mapreduce example fails with Allocate block failed, 
error:INTERNAL_ERROR
 Key: HDDS-609
 URL: https://issues.apache.org/jira/browse/HDDS-609
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


{code:java}
-bash-4.2$ /usr/hdp/current/hadoop-client/bin/hadoop jar 
/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
wordcount /tmp/mr_jobs/input/ o3://bucket2.volume2/mr_job5
18/10/09 23:37:07 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:37:08 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:37:08 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:37:08 INFO client.AHSProxy: Connecting to Application History 
server at ctr-e138-1518143905142-510793-01-04.hwx.site/172.27.79.197:10200
18/10/09 23:37:08 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
to rm2
18/10/09 23:37:09 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding 
for path: /user/hdfs/.staging/job_1539125785626_0007
18/10/09 23:37:09 INFO input.FileInputFormat: Total input files to process : 1
18/10/09 23:37:09 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
18/10/09 23:37:09 INFO lzo.LzoCodec: Successfully loaded & initialized 
native-lzo library [hadoop-lzo rev 5d6248d8d690f8456469979213ab2e9993bfa2e9]
18/10/09 23:37:09 INFO mapreduce.JobSubmitter: number of splits:1
18/10/09 23:37:09 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
job_1539125785626_0007
18/10/09 23:37:09 INFO mapreduce.JobSubmitter: Executing with tokens: []
18/10/09 23:37:10 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:37:10 INFO conf.Configuration: found resource resource-types.xml at 
file:/etc/hadoop/3.0.3.0-63/0/resource-types.xml
18/10/09 23:37:10 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:37:10 INFO impl.YarnClientImpl: Submitted application 
application_1539125785626_0007
18/10/09 23:37:10 INFO mapreduce.Job: The url to track the job: 
http://ctr-e138-1518143905142-510793-01-05.hwx.site:8088/proxy/application_1539125785626_0007/
18/10/09 23:37:10 INFO mapreduce.Job: Running job: job_1539125785626_0007
18/10/09 23:37:17 INFO mapreduce.Job: Job job_1539125785626_0007 running in 
uber mode : false
18/10/09 23:37:17 INFO mapreduce.Job: map 0% reduce 0%
18/10/09 23:37:24 INFO mapreduce.Job: map 100% reduce 0%
18/10/09 23:37:29 INFO mapreduce.Job: Task Id : 
attempt_1539125785626_0007_r_000000_0, Status : FAILED
Error: java.io.IOException: Allocate block failed, error:INTERNAL_ERROR
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.allocateBlock(OzoneManagerProtocolClientSideTranslatorPB.java:576)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.allocateNewBlock(ChunkGroupOutputStream.java:475)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.handleWrite(ChunkGroupOutputStream.java:271)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.write(ChunkGroupOutputStream.java:250)
at 
org.apache.hadoop.fs.ozone.OzoneFSOutputStream.write(OzoneFSOutputStream.java:47)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at 
org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.writeObject(TextOutputFormat.java:78)
at 
org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.write(TextOutputFormat.java:93)
at 
org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:559)
at 
org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at 
org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105)
at org.apache.hadoop.examples.WordCount$IntSumReducer.reduce(WordCount.java:64)
at org.apache.hadoop.examples.WordCount$IntSumReducer.reduce(WordCount.java:52)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:628)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:390)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)

18/10/09 23:37:35 INFO mapreduce.Job: Task Id : 
attempt_1539125785626_0007_r_000000_1, Status : FAILED
Error: java.io.IOException: Allocate block failed, error:INTERNAL_ERROR
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.allocateBlock(OzoneManagerProtocolClientSideTranslatorPB.java:576)
at 

[jira] [Created] (HDDS-608) Mapreduce example fails with Access denied for user hdfs. Superuser privilege is required

2018-10-09 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-608:
-

 Summary: Mapreduce example fails with Access denied for user hdfs. 
Superuser privilege is required
 Key: HDDS-608
 URL: https://issues.apache.org/jira/browse/HDDS-608
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


Right now only the administrators can submit an MR job. All other users, 
including hdfs, will fail with the below error (a possible configuration 
workaround is sketched after the trace):
{code:java}
-bash-4.2$ ./ozone sh bucket create /volume2/bucket2
2018-10-09 23:03:46,399 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
2018-10-09 23:03:47,473 INFO rpc.RpcClient: Creating Bucket: volume2/bucket2, 
with Versioning false and Storage Type set to DISK
-bash-4.2$ /usr/hdp/current/hadoop-client/bin/hadoop jar 
/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
wordcount /tmp/mr_jobs/input/ o3://bucket2.volume2/mr_job
18/10/09 23:04:08 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:04:10 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:04:10 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:04:10 INFO client.AHSProxy: Connecting to Application History 
server at ctr-e138-1518143905142-510793-01-04.hwx.site/172.27.79.197:10200
18/10/09 23:04:10 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
to rm2
18/10/09 23:04:10 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding 
for path: /user/hdfs/.staging/job_1539125785626_0003
18/10/09 23:04:11 INFO input.FileInputFormat: Total input files to process : 1
18/10/09 23:04:11 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
18/10/09 23:04:11 INFO lzo.LzoCodec: Successfully loaded & initialized 
native-lzo library [hadoop-lzo rev 5d6248d8d690f8456469979213ab2e9993bfa2e9]
18/10/09 23:04:11 INFO mapreduce.JobSubmitter: number of splits:1
18/10/09 23:04:12 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
job_1539125785626_0003
18/10/09 23:04:12 INFO mapreduce.JobSubmitter: Executing with tokens: []
18/10/09 23:04:12 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:04:12 INFO conf.Configuration: found resource resource-types.xml at 
file:/etc/hadoop/3.0.3.0-63/0/resource-types.xml
18/10/09 23:04:12 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:04:12 INFO impl.YarnClientImpl: Submitted application 
application_1539125785626_0003
18/10/09 23:04:12 INFO mapreduce.Job: The url to track the job: 
http://ctr-e138-1518143905142-510793-01-05.hwx.site:8088/proxy/application_1539125785626_0003/
18/10/09 23:04:12 INFO mapreduce.Job: Running job: job_1539125785626_0003
18/10/09 23:04:22 INFO mapreduce.Job: Job job_1539125785626_0003 running in 
uber mode : false
18/10/09 23:04:22 INFO mapreduce.Job: map 0% reduce 0%
18/10/09 23:04:30 INFO mapreduce.Job: map 100% reduce 0%
18/10/09 23:04:36 INFO mapreduce.Job: Task Id : 
attempt_1539125785626_0003_r_000000_0, Status : FAILED
Error: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Access 
denied for user hdfs. Superuser privilege is required.
at 
org.apache.hadoop.hdds.scm.server.StorageContainerManager.checkAdminAccess(StorageContainerManager.java:830)
at 
org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(SCMClientProtocolServer.java:190)
at 
org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolServerSideTranslatorPB.java:128)
at 
org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos$StorageContainerLocationProtocolService$2.callBlockingMethod(StorageContainerLocationProtocolProtos.java:12392)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)

at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1497)
at org.apache.hadoop.ipc.Client.call(Client.java:1443)
at org.apache.hadoop.ipc.Client.call(Client.java:1353)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy19.getContainerWithPipeline(Unknown Source)
at 
org.apache.hadoop.hdds.scm.protocolPB.StorageContainerLocationProtocolClientSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolClientSideTranslatorPB.java:156)
at 

[jira] [Commented] (HDDS-490) Improve om and scm start up options

2018-10-09 Thread Namit Maheshwari (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644185#comment-16644185
 ] 

Namit Maheshwari commented on HDDS-490:
---

Hi [~arpitagarwal] - I have tested both of the above scenarios.

Both work fine. Thanks

> Improve om and scm start up options 
> 
>
> Key: HDDS-490
> URL: https://issues.apache.org/jira/browse/HDDS-490
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
>  Labels: alpha2, incompatible
> Attachments: HDDS-490.001.patch, HDDS-490.002.patch
>
>
> I propose the following changes:
>  # Rename createObjectStore to format
>  # Change the flag to use --createObjectStore instead of using 
> -createObjectStore. This also applies to the other scm and om startup options.
>  # Fail to format existing object store. If a user runs:
> {code:java}
> ozone om -createObjectStore{code}
> And there is already an object store, it should give a warning message and 
> exit the process.
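
A minimal sketch of the guard proposed in item 3: refuse to format when an object store already exists. The on-disk name "om.db" is a placeholder, not the real metadata layout:

{code:java}
import java.io.File;

// Illustrative guard for "fail to format an existing object store".
public class FormatGuard {
  public static void main(String[] args) {
    File store = new File(args[0], "om.db"); // placeholder layout
    if (store.exists()) {
      System.err.println("Object store already exists at " + store
          + "; refusing to format.");
      System.exit(1);
    }
    // ... proceed with formatting a fresh object store ...
  }
}
{code}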



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-09 Thread Namit Maheshwari (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644124#comment-16644124
 ] 

Namit Maheshwari commented on HDDS-564:
---

[~arpitagarwal] - I am also unable to change it from a sub-task to an issue.

> Update docker-hadoop-runner branch to reflect changes done in HDDS-490
> --
>
> Key: HDDS-564
> URL: https://issues.apache.org/jira/browse/HDDS-564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-564-docker-hadoop-runner.001.patch, 
> HDDS-564-docker-hadoop-runner.002.patch
>
>
> starter.sh needs to be modified to reflect the changes done in HDDS-490
> For compatibility, starter.sh should support both the old and new style 
> options.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-600) Mapreduce example fails with java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported character

2018-10-09 Thread Namit Maheshwari (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644123#comment-16644123
 ] 

Namit Maheshwari commented on HDDS-600:
---

Thanks [~hanishakoneru]. With the correct URL, the MapReduce job fails as below:
{code:java}
[root@ctr-e138-1518143905142-510793-01-02 ~]# su - hdfs
Last login: Tue Oct 9 07:11:08 UTC 2018
-bash-4.2$ /usr/hdp/current/hadoop-client/bin/hadoop jar 
/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
wordcount /tmp/mr_jobs/input/ o3://bucket1.volume1/mr_job_dir/output
18/10/09 20:24:07 INFO conf.Configuration: Removed undeclared tags:
18/10/09 20:24:08 INFO conf.Configuration: Removed undeclared tags:
18/10/09 20:24:08 INFO conf.Configuration: Removed undeclared tags:
18/10/09 20:24:08 INFO client.AHSProxy: Connecting to Application History 
server at ctr-e138-1518143905142-510793-01-04.hwx.site/172.27.79.197:10200
18/10/09 20:24:09 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
to rm2
18/10/09 20:24:09 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding 
for path: /user/hdfs/.staging/job_1539069219098_0001
18/10/09 20:24:09 INFO input.FileInputFormat: Total input files to process : 1
18/10/09 20:24:09 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
18/10/09 20:24:09 INFO lzo.LzoCodec: Successfully loaded & initialized 
native-lzo library [hadoop-lzo rev 5d6248d8d690f8456469979213ab2e9993bfa2e9]
18/10/09 20:24:10 INFO mapreduce.JobSubmitter: number of splits:1
18/10/09 20:24:10 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
job_1539069219098_0001
18/10/09 20:24:10 INFO mapreduce.JobSubmitter: Executing with tokens: []
18/10/09 20:24:11 INFO conf.Configuration: Removed undeclared tags:
18/10/09 20:24:11 INFO conf.Configuration: found resource resource-types.xml at 
file:/etc/hadoop/3.0.3.0-63/0/resource-types.xml
18/10/09 20:24:11 INFO conf.Configuration: Removed undeclared tags:
18/10/09 20:24:11 INFO impl.YarnClientImpl: Submitted application 
application_1539069219098_0001
18/10/09 20:24:11 INFO mapreduce.Job: The url to track the job: 
http://ctr-e138-1518143905142-510793-01-05.hwx.site:8088/proxy/application_1539069219098_0001/
18/10/09 20:24:11 INFO mapreduce.Job: Running job: job_1539069219098_0001
18/10/09 20:25:04 INFO mapreduce.Job: Job job_1539069219098_0001 running in 
uber mode : false
18/10/09 20:25:04 INFO mapreduce.Job: map 0% reduce 0%
18/10/09 20:25:04 INFO mapreduce.Job: Job job_1539069219098_0001 failed with 
state FAILED due to: Application application_1539069219098_0001 failed 20 times 
due to AM Container for appattempt_1539069219098_0001_20 exited with 
exitCode: 1
Failing this attempt.Diagnostics: [2018-10-09 20:25:04.763]Exception from 
container-launch.
Container id: container_e03_1539069219098_0001_20_03
Exit code: 1

[2018-10-09 20:25:04.765]Container exited with a non-zero exit code 1. Error 
file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
log4j:WARN No appenders could be found for logger 
(org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
info.


[2018-10-09 20:25:04.765]Container exited with a non-zero exit code 1. Error 
file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
log4j:WARN No appenders could be found for logger 
(org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
info.


For more detailed output, check the application tracking page: 
http://ctr-e138-1518143905142-510793-01-05.hwx.site:8088/cluster/app/application_1539069219098_0001
 Then click on links to logs of each attempt.
. Failing the application.
18/10/09 20:25:05 INFO mapreduce.Job: Counters: 0
18/10/09 20:25:05 INFO conf.Configuration: Removed undeclared tags:
{code}
Yarn container logs:
{code:java}
Application
Tools
Configuration
Local logs
Server stacks
Server metrics
Log Type: directory.info

Log Upload Time: Tue Oct 09 20:25:06 + 2018

Log Length: 20398

Showing 4096 bytes of 20398 total. Click here for the full log.

06:50 ./mr-framework/hadoop/lib/native/libsnappy.so.1
8651115 3324 -r-xr-xr-x   2 yarn hadoop3402313 Oct  8 06:38 
./mr-framework/hadoop/lib/native/libnativetask.so
86510584 drwxr-xr-x   3 yarn hadoop   4096 Oct  8 06:32 
./mr-framework/hadoop/sbin
86510914 -r-xr-xr-x   1 yarn hadoop   3898 Oct  8 06:33 
./mr-framework/hadoop/sbin/stop-dfs.sh
86510844 -r-xr-xr-x   1 yarn hadoop   1756 Oct  8 06:33 
./mr-framework/hadoop/sbin/stop-secure-dns.sh
86510814 -r-xr-xr-x   1 yarn hadoop   2166 Oct  8 06:32 
./mr-framework/hadoop/sbin/stop-all.sh
86510964 -r-xr-xr-x   1 yarn hadoop   1779 

[jira] [Updated] (HDDS-600) Mapreduce example fails with java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported character

2018-10-09 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-600:
--
Priority: Blocker  (was: Major)

> Mapreduce example fails with java.lang.IllegalArgumentException: Bucket or 
> Volume name has an unsupported character
> ---
>
> Key: HDDS-600
> URL: https://issues.apache.org/jira/browse/HDDS-600
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Hanisha Koneru
>Priority: Blocker
>
> Set up a hadoop cluster where ozone is also installed. Ozone can be 
> referenced via o3://xx.xx.xx.xx:9889
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-02 ~]# ozone sh bucket list 
> o3://xx.xx.xx.xx:9889/volume1/
> 2018-10-09 07:21:24,624 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> [ {
> "volumeName" : "volume1",
> "bucketName" : "bucket1",
> "createdOn" : "Tue, 09 Oct 2018 06:48:02 GMT",
> "acls" : [ {
> "type" : "USER",
> "name" : "root",
> "rights" : "READ_WRITE"
> }, {
> "type" : "GROUP",
> "name" : "root",
> "rights" : "READ_WRITE"
> } ],
> "versioning" : "DISABLED",
> "storageType" : "DISK"
> } ]
> [root@ctr-e138-1518143905142-510793-01-02 ~]# ozone sh key list 
> o3://xx.xx.xx.xx:9889/volume1/bucket1
> 2018-10-09 07:21:54,500 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> [ {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Tue, 09 Oct 2018 06:58:32 GMT",
> "modifiedOn" : "Tue, 09 Oct 2018 06:58:32 GMT",
> "size" : 0,
> "keyName" : "mr_job_dir"
> } ]
> [root@ctr-e138-1518143905142-510793-01-02 ~]#{code}
> Hdfs is also set fine as below
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-02 ~]# hdfs dfs -ls 
> /tmp/mr_jobs/input/
> Found 1 items
> -rw-r--r-- 3 root hdfs 215755 2018-10-09 06:37 
> /tmp/mr_jobs/input/wordcount_input_1.txt
> [root@ctr-e138-1518143905142-510793-01-02 ~]#{code}
> Now try to run Mapreduce example job against ozone o3:
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-02 ~]# 
> /usr/hdp/current/hadoop-client/bin/hadoop jar 
> /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
> wordcount /tmp/mr_jobs/input/ 
> o3://xx.xx.xx.xx:9889/volume1/bucket1/mr_job_dir/output
> 18/10/09 07:15:38 INFO conf.Configuration: Removed undeclared tags:
> java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported 
> character : :
> at 
> org.apache.hadoop.hdds.scm.client.HddsClientUtils.verifyResourceName(HddsClientUtils.java:143)
> at 
> org.apache.hadoop.ozone.client.rpc.RpcClient.getVolumeDetails(RpcClient.java:231)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
> at com.sun.proxy.$Proxy16.getVolumeDetails(Unknown Source)
> at org.apache.hadoop.ozone.client.ObjectStore.getVolume(ObjectStore.java:92)
> at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:121)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
> at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.setOutputPath(FileOutputFormat.java:178)
> at org.apache.hadoop.examples.WordCount.main(WordCount.java:85)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
> at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
> at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> 

[jira] [Assigned] (HDDS-139) Output of createVolume can be improved

2018-10-09 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari reassigned HDDS-139:
-

Assignee: Namit Maheshwari

> Output of createVolume can be improved
> --
>
> Key: HDDS-139
> URL: https://issues.apache.org/jira/browse/HDDS-139
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.2.1
>Reporter: Arpit Agarwal
>Assignee: Namit Maheshwari
>Priority: Major
>  Labels: newbie, usability
>
> The output of {{createVolume}} includes a huge number (1 Exabyte) when the 
> quota is not specified. This number can either be specified in a friendly 
> format or omitted when the user did not use the \{{-quota}} option.
> {code:java}
>     2018-05-31 20:35:56 INFO  RpcClient:210 - Creating Volume: vol2, with 
> hadoop as owner and quota set to 1152921504606846976 bytes.{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-600) Mapreduce example fails with java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported character

2018-10-09 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-600:
-

 Summary: Mapreduce example fails with 
java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported 
character
 Key: HDDS-600
 URL: https://issues.apache.org/jira/browse/HDDS-600
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


Set up a hadoop cluster where ozone is also installed. Ozone can be referenced 
via o3://xx.xx.xx.xx:9889
{code:java}
[root@ctr-e138-1518143905142-510793-01-02 ~]# ozone sh bucket list 
o3://xx.xx.xx.xx:9889/volume1/
2018-10-09 07:21:24,624 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
[ {
"volumeName" : "volume1",
"bucketName" : "bucket1",
"createdOn" : "Tue, 09 Oct 2018 06:48:02 GMT",
"acls" : [ {
"type" : "USER",
"name" : "root",
"rights" : "READ_WRITE"
}, {
"type" : "GROUP",
"name" : "root",
"rights" : "READ_WRITE"
} ],
"versioning" : "DISABLED",
"storageType" : "DISK"
} ]
[root@ctr-e138-1518143905142-510793-01-02 ~]# ozone sh key list 
o3://xx.xx.xx.xx:9889/volume1/bucket1
2018-10-09 07:21:54,500 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
[ {
"version" : 0,
"md5hash" : null,
"createdOn" : "Tue, 09 Oct 2018 06:58:32 GMT",
"modifiedOn" : "Tue, 09 Oct 2018 06:58:32 GMT",
"size" : 0,
"keyName" : "mr_job_dir"
} ]
[root@ctr-e138-1518143905142-510793-01-02 ~]#{code}
Hdfs is also set fine as below
{code:java}
[root@ctr-e138-1518143905142-510793-01-02 ~]# hdfs dfs -ls 
/tmp/mr_jobs/input/
Found 1 items
-rw-r--r-- 3 root hdfs 215755 2018-10-09 06:37 
/tmp/mr_jobs/input/wordcount_input_1.txt
[root@ctr-e138-1518143905142-510793-01-02 ~]#{code}
Now try to run Mapreduce example job against ozone o3:
{code:java}
[root@ctr-e138-1518143905142-510793-01-02 ~]# 
/usr/hdp/current/hadoop-client/bin/hadoop jar 
/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
wordcount /tmp/mr_jobs/input/ 
o3://xx.xx.xx.xx:9889/volume1/bucket1/mr_job_dir/output
18/10/09 07:15:38 INFO conf.Configuration: Removed undeclared tags:
java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported 
character : :
at 
org.apache.hadoop.hdds.scm.client.HddsClientUtils.verifyResourceName(HddsClientUtils.java:143)
at 
org.apache.hadoop.ozone.client.rpc.RpcClient.getVolumeDetails(RpcClient.java:231)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
at com.sun.proxy.$Proxy16.getVolumeDetails(Unknown Source)
at org.apache.hadoop.ozone.client.ObjectStore.getVolume(ObjectStore.java:92)
at 
org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:121)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
at 
org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.setOutputPath(FileOutputFormat.java:178)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:85)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
18/10/09 07:15:39 INFO conf.Configuration: Removed undeclared tags:
[root@ctr-e138-1518143905142-510793-01-02 ~]#
{code}
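The failure above comes from name validation: with this URL form, the o3:// authority (host:port) ends up being checked as a volume/bucket name, and ':' is not an allowed character. A toy sketch of that kind of whitelist check, assuming a simplified allowed-character set (the real HddsClientUtils.verifyResourceName enforces more rules):
{code:java}
// Minimal sketch, not the actual HddsClientUtils code: a character
// whitelist that rejects ':'. When the host:port authority leaks into
// the resource name, the ':' between host and port trips this check.
public final class ResourceNameCheck {
  public static void verify(String name) {
    for (char c : name.toCharArray()) {
      boolean ok = (c >= 'a' && c <= 'z') || (c >= '0' && c <= '9')
          || c == '-' || c == '.';
      if (!ok) {
        throw new IllegalArgumentException(
            "Bucket or Volume name has an unsupported character : " + c);
      }
    }
  }

  public static void main(String[] args) {
    verify("volume1");           // passes
    verify("xx.xx.xx.xx:9889");  // throws: ':' is not an allowed character
  }
}
{code}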



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org

[jira] [Updated] (HDDS-590) Add unit test for HDDS-583

2018-10-08 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-590:
--
Attachment: HDDS-590..002.patch
Status: Patch Available  (was: Open)

> Add unit test for HDDS-583
> --
>
> Key: HDDS-590
> URL: https://issues.apache.org/jira/browse/HDDS-590
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-590..001.patch, HDDS-590..002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-590) Add unit test for HDDS-583

2018-10-08 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-590:
--
Status: Open  (was: Patch Available)

> Add unit test for HDDS-583
> --
>
> Key: HDDS-590
> URL: https://issues.apache.org/jira/browse/HDDS-590
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-590..001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-583) SCM returns zero as the return code, even when invalid options are passed

2018-10-08 Thread Namit Maheshwari (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642273#comment-16642273
 ] 

Namit Maheshwari commented on HDDS-583:
---

[~elek] - Yes, the problem does not exist with OM.

[~arpitagarwal] - I have created HDDS-590 and attached the JUnit patch there, 
since this one is already resolved.

> SCM returns zero as the return code, even when invalid options are passed
> -
>
> Key: HDDS-583
> URL: https://issues.apache.org/jira/browse/HDDS-583
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-583.001.patch
>
>
> While testing HDDS-564, I found that SCM returns zero as the return 
> code even when invalid options are passed. In StorageContainerManager.java, 
> please see the code below:
> {code:java}
> private static StartupOption parseArguments(String[] args) {
>   int argsLen = (args == null) ? 0 : args.length;
>   StartupOption startOpt = StartupOption.HELP;
> {code}
> Here, startOpt is initialized to HELP, so even if invalid options are 
> passed, the parseArguments method returns HELP by default. This causes the 
> exit code to be 0.
> Ideally, startOpt should be initialized to null, which allows it to return a 
> non-zero exit code when the options are invalid:
> {code:java}
> StartupOption startOpt = null;{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-590) Add unit test for HDDS-583

2018-10-08 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-590:
--
Attachment: HDDS-590..001.patch
Status: Patch Available  (was: In Progress)

> Add unit test for HDDS-583
> --
>
> Key: HDDS-590
> URL: https://issues.apache.org/jira/browse/HDDS-590
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-590..001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-590) Add unit test for HDDS-583

2018-10-08 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-590 started by Namit Maheshwari.
-
> Add unit test for HDDS-583
> --
>
> Key: HDDS-590
> URL: https://issues.apache.org/jira/browse/HDDS-590
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-590) Add unit test for HDDS-583

2018-10-08 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari reassigned HDDS-590:
-

Assignee: Namit Maheshwari

> Add unit test for HDDS-583
> --
>
> Key: HDDS-590
> URL: https://issues.apache.org/jira/browse/HDDS-590
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-590) Add unit test for HDDS-583

2018-10-08 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-590:
-

 Summary: Add unit test for HDDS-583
 Key: HDDS-590
 URL: https://issues.apache.org/jira/browse/HDDS-590
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-583) SCM returns zero as the return code, even when invalid options are passed

2018-10-05 Thread Namit Maheshwari (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16640373#comment-16640373
 ] 

Namit Maheshwari commented on HDDS-583:
---

[~arpitagarwal] - Yes, this way it will print the help text and the return code 
will be non-zero. Please see the code below from StorageContainerManager.java. 
With the above change, parseArguments will return null, so main will print the 
usage and exit with code 1.
{code:java}
StartupOption startOpt = parseArguments(argv);
if (startOpt == null) {
  printUsage(System.err);
  terminate(1);
  return null;
{code}
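A minimal sketch of the unit test tracked in HDDS-590, assuming parseArguments were made reachable from the test code (the actual patch may structure this differently): an invalid option should yield null rather than silently mapping to HELP.
{code:java}
import static org.junit.Assert.assertNull;

import org.junit.Test;

public class TestScmStartupOptions {
  // Assumes parseArguments is visible to the test (e.g. package-private);
  // the real HDDS-590 patch may differ.
  @Test
  public void invalidOptionShouldNotDefaultToHelp() {
    assertNull(StorageContainerManager.parseArguments(
        new String[] {"-badOption"}));
  }
}
{code}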

> SCM returns zero as the return code, even when invalid options are passed
> -
>
> Key: HDDS-583
> URL: https://issues.apache.org/jira/browse/HDDS-583
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-583.001.patch
>
>
> While testing HDDS-564, I found that SCM returns zero as the return 
> code even when invalid options are passed. In StorageContainerManager.java, 
> please see the code below:
> {code:java}
> private static StartupOption parseArguments(String[] args) {
>   int argsLen = (args == null) ? 0 : args.length;
>   StartupOption startOpt = StartupOption.HELP;
> {code}
> Here, startOpt is initialized to HELP, so even if invalid options are 
> passed, the parseArguments method returns HELP by default. This causes the 
> exit code to be 0.
> Ideally, startOpt should be initialized to null, which allows it to return a 
> non-zero exit code when the options are invalid:
> {code:java}
> StartupOption startOpt = null;{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-05 Thread Namit Maheshwari (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16640360#comment-16640360
 ] 

Namit Maheshwari commented on HDDS-564:
---

Thanks [~elek] and [~arpitagarwal] for the reviews.

I have updated the latest patch with the changes suggested by [~elek].

[~arpitagarwal] - I have also completed the testing, and raised HDDS-583 in the 
process. Thanks.

> Update docker-hadoop-runner branch to reflect changes done in HDDS-490
> --
>
> Key: HDDS-564
> URL: https://issues.apache.org/jira/browse/HDDS-564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-564-docker-hadoop-runner.001.patch, 
> HDDS-564-docker-hadoop-runner.002.patch
>
>
> starter.sh needs to be modified to reflect the changes done in HDDS-490
> For compatibility, starter.sh should support both the old and new style 
> options.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-05 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-564:
--
Attachment: HDDS-564-docker-hadoop-runner.002.patch
Status: Patch Available  (was: Open)

> Update docker-hadoop-runner branch to reflect changes done in HDDS-490
> --
>
> Key: HDDS-564
> URL: https://issues.apache.org/jira/browse/HDDS-564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-564-docker-hadoop-runner.001.patch, 
> HDDS-564-docker-hadoop-runner.002.patch
>
>
> starter.sh needs to be modified to reflect the changes done in HDDS-490
> For compatibility, starter.sh should support both the old and new style 
> options.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-05 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-564:
--
Status: Open  (was: Patch Available)

> Update docker-hadoop-runner branch to reflect changes done in HDDS-490
> --
>
> Key: HDDS-564
> URL: https://issues.apache.org/jira/browse/HDDS-564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-564-docker-hadoop-runner.001.patch
>
>
> starter.sh needs to be modified to reflect the changes done in HDDS-490
> For compatibility, starter.sh should support both the old and new style 
> options.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-583) SCM returns zero as the return code, even when invalid options are passed

2018-10-05 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-583 started by Namit Maheshwari.
-
> SCM returns zero as the return code, even when invalid options are passed
> -
>
> Key: HDDS-583
> URL: https://issues.apache.org/jira/browse/HDDS-583
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-583.001.patch
>
>
> While testing HDDS-564, I found that SCM returns zero as the return 
> code even when invalid options are passed. In StorageContainerManager.java, 
> please see the code below:
> {code:java}
> private static StartupOption parseArguments(String[] args) {
>   int argsLen = (args == null) ? 0 : args.length;
>   StartupOption startOpt = StartupOption.HELP;
> {code}
> Here, startOpt is initialized to HELP, so even if invalid options are 
> passed, the parseArguments method returns HELP by default. This causes the 
> exit code to be 0.
> Ideally, startOpt should be initialized to null, which allows it to return a 
> non-zero exit code when the options are invalid:
> {code:java}
> StartupOption startOpt = null;{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-583) SCM returns zero as the return code, even when invalid options are passed

2018-10-05 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-583:
--
Attachment: HDDS-583.001.patch
Status: Patch Available  (was: In Progress)

> SCM returns zero as the return code, even when invalid options are passed
> -
>
> Key: HDDS-583
> URL: https://issues.apache.org/jira/browse/HDDS-583
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-583.001.patch
>
>
> While testing HDDS-564, I found that SCM returns zero as the return 
> code even when invalid options are passed. In StorageContainerManager.java, 
> please see the code below:
> {code:java}
> private static StartupOption parseArguments(String[] args) {
>   int argsLen = (args == null) ? 0 : args.length;
>   StartupOption startOpt = StartupOption.HELP;
> {code}
> Here, startOpt is initialized to HELP, so even if invalid options are 
> passed, the parseArguments method returns HELP by default. This causes the 
> exit code to be 0.
> Ideally, startOpt should be initialized to null, which allows it to return a 
> non-zero exit code when the options are invalid:
> {code:java}
> StartupOption startOpt = null;{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-583) SCM returns zero as the return code, even when invalid options are passed

2018-10-05 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-583:
-

 Summary: SCM returns zero as the return code, even when invalid 
options are passed
 Key: HDDS-583
 URL: https://issues.apache.org/jira/browse/HDDS-583
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


While testing HDDS-564, I found that SCM returns zero as the return code even 
when invalid options are passed. In StorageContainerManager.java, please see 
the code below:
{code:java}
private static StartupOption parseArguments(String[] args) {
  int argsLen = (args == null) ? 0 : args.length;
  StartupOption startOpt = StartupOption.HELP;
{code}
Here, startOpt is initialized to HELP, so even if invalid options are passed, 
the parseArguments method returns HELP by default. This causes the exit code 
to be 0.

Ideally, startOpt should be initialized to null, which allows it to return a 
non-zero exit code when the options are invalid:
{code:java}
StartupOption startOpt = null;{code}
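An illustrative, self-contained sketch of the proposed change, with a reduced StartupOption enum (the real one has more values and option names): defaulting to null makes an unrecognized option surface as null, so the caller can print usage and terminate with exit code 1.
{code:java}
public class StartupParseSketch {
  enum StartupOption {
    INIT("-init"), HELP("-help");
    private final String name;
    StartupOption(String name) { this.name = name; }
    String getName() { return name; }
  }

  static StartupOption parseArguments(String[] args) {
    int argsLen = (args == null) ? 0 : args.length;
    StartupOption startOpt = null;          // was StartupOption.HELP
    for (int i = 0; i < argsLen; i++) {
      if (StartupOption.INIT.getName().equalsIgnoreCase(args[i])) {
        startOpt = StartupOption.INIT;
      } else if (StartupOption.HELP.getName().equalsIgnoreCase(args[i])) {
        startOpt = StartupOption.HELP;
      } else {
        return null;                        // invalid option
      }
    }
    return startOpt;
  }

  public static void main(String[] args) {
    // An unknown flag now yields null, so the process can exit non-zero.
    System.exit(parseArguments(new String[] {"-bogus"}) == null ? 1 : 0);
  }
}
{code}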



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-583) SCM returns zero as the return code, even when invalid options are passed

2018-10-05 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari reassigned HDDS-583:
-

Assignee: Namit Maheshwari

> SCM returns zero as the return code, even when invalid options are passed
> -
>
> Key: HDDS-583
> URL: https://issues.apache.org/jira/browse/HDDS-583
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
>
> While testing HDDS-564, I found that SCM returns zero as the return 
> code even when invalid options are passed. In StorageContainerManager.java, 
> please see the code below:
> {code:java}
> private static StartupOption parseArguments(String[] args) {
>   int argsLen = (args == null) ? 0 : args.length;
>   StartupOption startOpt = StartupOption.HELP;
> {code}
> Here, startOpt is initialized to HELP, so even if invalid options are 
> passed, the parseArguments method returns HELP by default. This causes the 
> exit code to be 0.
> Ideally, startOpt should be initialized to null, which allows it to return a 
> non-zero exit code when the options are invalid:
> {code:java}
> StartupOption startOpt = null;{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-490) Improve om and scm start up options

2018-10-02 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-490:
--
Attachment: HDDS-490.002.patch

> Improve om and scm start up options 
> 
>
> Key: HDDS-490
> URL: https://issues.apache.org/jira/browse/HDDS-490
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
>  Labels: alpha2, incompatible
> Attachments: HDDS-490.001.patch, HDDS-490.002.patch
>
>
> I propose the following changes:
>  # Rename createObjectStore to format.
>  # Change the flag to use --createObjectStore instead of 
> -createObjectStore. This also applies to the other scm and om startup options.
>  # Fail to format an existing object store. If a user runs:
> {code:java}
> ozone om -createObjectStore{code}
> and there is already an object store, it should print a warning message and 
> exit the process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-01 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-564 started by Namit Maheshwari.
-
> Update docker-hadoop-runner branch to reflect changes done in HDDS-490
> --
>
> Key: HDDS-564
> URL: https://issues.apache.org/jira/browse/HDDS-564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-564-docker-hadoop-runner.001.patch
>
>
> starter.sh needs to be modified to reflect the changes done in HDDS-490
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work stopped] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-01 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-564 stopped by Namit Maheshwari.
-
> Update docker-hadoop-runner branch to reflect changes done in HDDS-490
> --
>
> Key: HDDS-564
> URL: https://issues.apache.org/jira/browse/HDDS-564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-564-docker-hadoop-runner.001.patch
>
>
> starter.sh needs to be modified to reflect the changes done in HDDS-490
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-01 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-564:
--
Attachment: (was: HDDS-564..001.patch)

> Update docker-hadoop-runner branch to reflect changes done in HDDS-490
> --
>
> Key: HDDS-564
> URL: https://issues.apache.org/jira/browse/HDDS-564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-564-docker-hadoop-runner.001.patch
>
>
> starter.sh needs to be modified to reflect the changes done in HDDS-490
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-01 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-564 started by Namit Maheshwari.
-
> Update docker-hadoop-runner branch to reflect changes done in HDDS-490
> --
>
> Key: HDDS-564
> URL: https://issues.apache.org/jira/browse/HDDS-564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-564-docker-hadoop-runner.001.patch
>
>
> starter.sh needs to be modified to reflect the changes done in HDDS-490
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-01 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-564:
--
Attachment: HDDS-564-docker-hadoop-runner.001.patch
Status: Patch Available  (was: In Progress)

> Update docker-hadoop-runner branch to reflect changes done in HDDS-490
> --
>
> Key: HDDS-564
> URL: https://issues.apache.org/jira/browse/HDDS-564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-564-docker-hadoop-runner.001.patch
>
>
> starter.sh needs to be modified to reflect the changes done in HDDS-490
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-01 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-564:
--
Status: Open  (was: Patch Available)

> Update docker-hadoop-runner branch to reflect changes done in HDDS-490
> --
>
> Key: HDDS-564
> URL: https://issues.apache.org/jira/browse/HDDS-564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-564-docker-hadoop-runner.001.patch
>
>
> starter.sh needs to be modified to reflect the changes done in HDDS-490
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-558) When creating keys, the creationTime and modificationTime should ideally be the same

2018-10-01 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-558:
--
Resolution: Not A Problem
Status: Resolved  (was: Patch Available)

This is actually the correct behavior.

In KeyManagerImpl.java, the openKey method is called first, and commitKey is 
called later, once the data has been written.

The creation time and the modification time can therefore legitimately differ; 
see the sketch below.
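A toy sketch of that two-phase flow, with simplified method bodies (this is not the real KeyManagerImpl): the creation time is stamped at openKey, the modification time at commitKey, so the gap between the two is the time spent writing the key data.
{code:java}
public class TwoPhaseKeySketch {
  long creationTime;
  long modificationTime;

  void openKey()   { creationTime = System.currentTimeMillis(); }

  void commitKey() { modificationTime = System.currentTimeMillis(); }

  public static void main(String[] args) throws InterruptedException {
    TwoPhaseKeySketch key = new TwoPhaseKeySketch();
    key.openKey();
    Thread.sleep(2000);   // stands in for streaming the key data
    key.commitKey();
    System.out.println(
        (key.modificationTime - key.creationTime) + " ms apart");
  }
}
{code}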

> When creating keys, the creationTime and modificationTime should ideally be 
> the same
> 
>
> Key: HDDS-558
> URL: https://issues.apache.org/jira/browse/HDDS-558
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Namit Maheshwari
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-558.001.patch
>
>
> Steps to replicate:
>  # Start ozone
>  # Create Volume and Bucket or use existing ones
>  # Create Key
>  # List Keys for that bucket or just get key info
> We will see that the creationTime and modificationTime have a minor difference.
>  
> {noformat}
> hadoop@fdaf56d9e9d8:~$ ./bin/ozone sh key put /rvol/rbucket/rkey sample.orc
> hadoop@fdaf56d9e9d8:~$ ./bin/ozone sh key list /rvol/rbucket
> [ {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Wed, 26 Sep 2018 20:29:10 GMT",
> "modifiedOn" : "Wed, 26 Sep 2018 20:29:12 GMT",
> "size" : 2262690,
> "keyName" : "rkey"
> } ]{noformat}
> Potential fix area : KeyManagerImpl#commitKey
> {code:java}
> keyInfo = new OmKeyInfo.Builder()
> .setVolumeName(args.getVolumeName())
> .setBucketName(args.getBucketName())
> .setKeyName(args.getKeyName())
> .setOmKeyLocationInfos(Collections.singletonList(
> new OmKeyLocationInfoGroup(0, locations)))
> .setCreationTime(Time.now())
> .setModificationTime(Time.now())
> .setDataSize(size)
> .setReplicationType(type)
> .setReplicationFactor(factor)
> .build();
> {code}
> For setting both these values, we are getting the current time twice, and 
> thus the minor difference.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-558) When creating keys, the creationTime and modificationTime should ideally be the same

2018-10-01 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-558:
--
Attachment: HDDS-558.001.patch
Status: Patch Available  (was: In Progress)

> When creating keys, the creationTime and modificationTime should ideally be 
> the same
> 
>
> Key: HDDS-558
> URL: https://issues.apache.org/jira/browse/HDDS-558
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Namit Maheshwari
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-558.001.patch
>
>
> Steps to replicate:
>  # Start ozone
>  # Create Volume and Bucket or use existing ones
>  # Create Key
>  # List Keys for that bucket or just get key info
> We will see that the creationTime and modificationTime have a minor difference.
>  
> {noformat}
> hadoop@fdaf56d9e9d8:~$ ./bin/ozone sh key put /rvol/rbucket/rkey sample.orc
> hadoop@fdaf56d9e9d8:~$ ./bin/ozone sh key list /rvol/rbucket
> [ {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Wed, 26 Sep 2018 20:29:10 GMT",
> "modifiedOn" : "Wed, 26 Sep 2018 20:29:12 GMT",
> "size" : 2262690,
> "keyName" : "rkey"
> } ]{noformat}
> Potential fix area : KeyManagerImpl#commitKey
> {code:java}
> keyInfo = new OmKeyInfo.Builder()
> .setVolumeName(args.getVolumeName())
> .setBucketName(args.getBucketName())
> .setKeyName(args.getKeyName())
> .setOmKeyLocationInfos(Collections.singletonList(
> new OmKeyLocationInfoGroup(0, locations)))
> .setCreationTime(Time.now())
> .setModificationTime(Time.now())
> .setDataSize(size)
> .setReplicationType(type)
> .setReplicationFactor(factor)
> .build();
> {code}
> For setting both these values, we are getting the current time twice, and 
> thus the minor difference.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-558) When creating keys, the creationTime and modificationTime should ideally be the same

2018-10-01 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-558 started by Namit Maheshwari.
-
> When creating keys, the creationTime and modificationTime should ideally be 
> the same
> 
>
> Key: HDDS-558
> URL: https://issues.apache.org/jira/browse/HDDS-558
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Namit Maheshwari
>Priority: Major
>  Labels: newbie
>
> Steps to replicate:
>  # Start ozone
>  # Create Volume and Bucket or use existing ones
>  # Create Key
>  # List Keys for that bucket or just get key info
> We will see that the creationTime and modificationTime have a minor difference.
>  
> {noformat}
> hadoop@fdaf56d9e9d8:~$ ./bin/ozone sh key put /rvol/rbucket/rkey sample.orc
> hadoop@fdaf56d9e9d8:~$ ./bin/ozone sh key list /rvol/rbucket
> [ {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Wed, 26 Sep 2018 20:29:10 GMT",
> "modifiedOn" : "Wed, 26 Sep 2018 20:29:12 GMT",
> "size" : 2262690,
> "keyName" : "rkey"
> } ]{noformat}
> Potential fix area : KeyManagerImpl#commitKey
> {code:java}
> keyInfo = new OmKeyInfo.Builder()
> .setVolumeName(args.getVolumeName())
> .setBucketName(args.getBucketName())
> .setKeyName(args.getKeyName())
> .setOmKeyLocationInfos(Collections.singletonList(
> new OmKeyLocationInfoGroup(0, locations)))
> .setCreationTime(Time.now())
> .setModificationTime(Time.now())
> .setDataSize(size)
> .setReplicationType(type)
> .setReplicationFactor(factor)
> .build();
> {code}
> For setting both these values, we are getting the current time twice, and 
> thus the minor difference.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-01 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-564:
--
Attachment: HDDS-564..001.patch
Status: Patch Available  (was: In Progress)

> Update docker-hadoop-runner branch to reflect changes done in HDDS-490
> --
>
> Key: HDDS-564
> URL: https://issues.apache.org/jira/browse/HDDS-564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-564..001.patch
>
>
> starter.sh needs to be modified to reflect the changes done in HDDS-490
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-01 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-564 started by Namit Maheshwari.
-
> Update docker-hadoop-runner branch to reflect changes done in HDDS-490
> --
>
> Key: HDDS-564
> URL: https://issues.apache.org/jira/browse/HDDS-564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
>
> starter.sh needs to be modified to reflect the changes done in HDDS-490
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-01 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari reassigned HDDS-564:
-

Assignee: Namit Maheshwari

> Update docker-hadoop-runner branch to reflect changes done in HDDS-490
> --
>
> Key: HDDS-564
> URL: https://issues.apache.org/jira/browse/HDDS-564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
>
> starter.sh needs to be modified to reflect the changes done in HDDS-490
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-01 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-564:
-

 Summary: Update docker-hadoop-runner branch to reflect changes 
done in HDDS-490
 Key: HDDS-564
 URL: https://issues.apache.org/jira/browse/HDDS-564
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Namit Maheshwari


starter.sh needs to be modified to reflect the changes done in HDDS-490

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-558) When creating keys, the creationTime and modificationTime should ideally be the same

2018-09-26 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari reassigned HDDS-558:
-

Assignee: Namit Maheshwari

> When creating keys, the creationTime and modificationTime should ideally be 
> the same
> 
>
> Key: HDDS-558
> URL: https://issues.apache.org/jira/browse/HDDS-558
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Namit Maheshwari
>Priority: Major
>  Labels: newbie
>
> Steps to replicate:
>  # Start ozone
>  # Create Volume and Bucket or use existing ones
>  # Create Key
>  # List Keys for that bucket or just get key info
> We will see that the creationTime and modificationTime have a minor difference.
>  
> {noformat}
> hadoop@fdaf56d9e9d8:~$ ./bin/ozone sh key put /rvol/rbucket/rkey sample.orc
> hadoop@fdaf56d9e9d8:~$ ./bin/ozone sh key list /rvol/rbucket
> [ {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Wed, 26 Sep 2018 20:29:10 GMT",
> "modifiedOn" : "Wed, 26 Sep 2018 20:29:12 GMT",
> "size" : 2262690,
> "keyName" : "rkey"
> } ]{noformat}
> Potential fix area : KeyManagerImpl#commitKey
> {code:java}
> keyInfo = new OmKeyInfo.Builder()
> .setVolumeName(args.getVolumeName())
> .setBucketName(args.getBucketName())
> .setKeyName(args.getKeyName())
> .setOmKeyLocationInfos(Collections.singletonList(
> new OmKeyLocationInfoGroup(0, locations)))
> .setCreationTime(Time.now())
> .setModificationTime(Time.now())
> .setDataSize(size)
> .setReplicationType(type)
> .setReplicationFactor(factor)
> .build();
> {code}
> For setting both these values, we are getting the current time twice, and 
> thus the minor difference.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-549) Documentation for key rename is missing in keycommands.md

2018-09-25 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-549:
-

 Summary: Documentation for key rename is missing in keycommands.md
 Key: HDDS-549
 URL: https://issues.apache.org/jira/browse/HDDS-549
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-538) ozone scmcli is broken

2018-09-24 Thread Namit Maheshwari (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626488#comment-16626488
 ] 

Namit Maheshwari commented on HDDS-538:
---

Thanks [~hanishakoneru], yes, it is working fine with:
{code:java}
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone scmcli list -s=1
{
"state" : "ALLOCATED",
"replicationFactor" : "THREE",
"replicationType" : "RATIS",
"allocatedBytes" : 0,
"usedBytes" : 0,
"numberOfKeys" : 0,
"lastUsed" : 7017303112,
"stateEnterTime" : 3731649177,
"owner" : "e4d4ed93-d6ec-480b-ab6c-6683e2e1ee7c",
"containerID" : 1,
"deleteTransactionId" : 0,
"containerOpen" : true
}
{
"state" : "OPEN",
"replicationFactor" : "THREE",
"replicationType" : "RATIS",
"allocatedBytes" : 134217728,
"usedBytes" : 134217728,
"numberOfKeys" : 1,
"lastUsed" : 7017303112,
"stateEnterTime" : 3731650367,
"owner" : "e4d4ed93-d6ec-480b-ab6c-6683e2e1ee7c",
"containerID" : 2,
"deleteTransactionId" : 0,
"containerOpen" : true
}{code}
However, the scmcli info command does not return anything for container ID 1, 
which is present in the listing above:
{code:java}
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone scmcli info 1
ContainerID 1 does not exist
{code}
Does scmcli info only work for closed containers?
 

> ozone scmcli is broken
> --
>
> Key: HDDS-538
> URL: https://issues.apache.org/jira/browse/HDDS-538
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Priority: Major
>
> None of the below commands work for scmcli
> {code:java}
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone scmcli list
> Missing required option '--start='
> Usage: ozone scmcli list [-hV] [-c=] -s=
> List containers
> -c, --count= Maximum number of containers to list
> Default: 20
> -h, --help Show this help message and exit.
> -s, --start= Container id to start the iteration
> -V, --version Print version information and exit.
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone scmcli list -c=1
> Missing required option '--start='
> Usage: ozone scmcli list [-hV] [-c=] -s=
> List containers
> -c, --count= Maximum number of containers to list
> Default: 20
> -h, --help Show this help message and exit.
> -s, --start= Container id to start the iteration
> -V, --version Print version information and exit.
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone scmcli list -c=1 
> -s
> Missing required parameter for option '--start' ()
> Usage: ozone scmcli list [-hV] [-c=] -s=
> List containers
> -c, --count= Maximum number of containers to list
> Default: 20
> -h, --help Show this help message and exit.
> -s, --start= Container id to start the iteration
> -V, --version Print version information and exit.
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone scmcli -s
> Unknown option: -s
> Possible solutions: --scm, --set
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone scmcli --start
> Unknown option: --start
> Usage: ozone scmcli [-hV] [--verbose] [--scm=] [-D=]...
> [COMMAND]
> Developer tools to handle SCM specific operations.
> --scm= The destination scm (host:port)
> --verbose More verbose output. Show the stack trace of the errors.
> -D, --set=
> -h, --help Show this help message and exit.
> -V, --version Print version information and exit.
> Commands:
> list List containers
> info Show information about a specific container
> delete Delete container
> create Create container
> close close container
> [root@ctr-e138-1518143905142-481027-01-02 bin]#
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-536) ozone sh throws exception and show on command line for invalid input

2018-09-24 Thread Namit Maheshwari (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626481#comment-16626481
 ] 

Namit Maheshwari commented on HDDS-536:
---

Yes, the point of this bug is that the entire stack trace should not be 
printed; only the error message should be logged.
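
A minimal sketch of that behavior, assuming a generic top-level handler in the 
shell entry point (hypothetical structure, not the actual GenericCli code); 
the existing --verbose flag would keep the full trace available on demand:
{code:java}
// Hypothetical top-level handler -- not the actual Ozone shell code.
public final class ShellErrorHandling {
  public static void runAndReport(Runnable command, boolean verbose) {
    try {
      command.run();
    } catch (Exception ex) {
      if (verbose) {
        ex.printStackTrace();                 // full trace only with --verbose
      } else {
        System.err.println(ex.getMessage());  // just the error message
      }
      System.exit(1);
    }
  }
}
{code}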

 

> ozone sh throws exception and show on command line for invalid input
> 
>
> Key: HDDS-536
> URL: https://issues.apache.org/jira/browse/HDDS-536
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Priority: Major
>
> {code:java}
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh vol info 
> o3://as
> 2018-09-22 00:06:03,123 [main] ERROR - Couldn't create protocol class 
> org.apache.hadoop.ozone.client.rpc.RpcClient exception:
> java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:291)
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169)
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:153)
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:109)
> at org.apache.hadoop.ozone.web.ozShell.Handler.verifyURI(Handler.java:100)
> at 
> org.apache.hadoop.ozone.web.ozShell.volume.InfoVolumeHandler.call(InfoVolumeHandler.java:49)
> at 
> org.apache.hadoop.ozone.web.ozShell.volume.InfoVolumeHandler.call(InfoVolumeHandler.java:36)
> at picocli.CommandLine.execute(CommandLine.java:919)
> at picocli.CommandLine.access$700(CommandLine.java:104)
> at picocli.CommandLine$RunLast.handle(CommandLine.java:1083)
> at picocli.CommandLine$RunLast.handle(CommandLine.java:1051)
> at 
> picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:959)
> at picocli.CommandLine.parseWithHandlers(CommandLine.java:1242)
> at picocli.CommandLine.parseWithHandler(CommandLine.java:1181)
> at org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:61)
> at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:52)
> at org.apache.hadoop.ozone.web.ozShell.Shell.main(Shell.java:77)
> Caused by: java.net.UnknownHostException: Invalid host name: local host is: 
> (unknown); destination host is: "as":9889; java.net.UnknownHostException; For 
> more details see: http://wiki.apache.org/hadoop/UnknownHost
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:768)
> at org.apache.hadoop.ipc.Client$Connection.(Client.java:449)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1552)
> at org.apache.hadoop.ipc.Client.call(Client.java:1403)
> at org.apache.hadoop.ipc.Client.call(Client.java:1367)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy10.getServiceList(Unknown Source)
> at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.getServiceList(OzoneManagerProtocolClientSideTranslatorPB.java:751)
> at 
> org.apache.hadoop.ozone.client.rpc.RpcClient.getScmAddressForClient(RpcClient.java:154)
> at org.apache.hadoop.ozone.client.rpc.RpcClient.(RpcClient.java:126)
> ... 21 more
> Caused by: java.net.UnknownHostException
> at org.apache.hadoop.ipc.Client$Connection.(Client.java:450)
> ... 30 more
> Invalid host name: local host is: (unknown); destination host is: "as":9889; 
> java.net.UnknownHostException; For more details see: 
> http://wiki.apache.org/hadoop/UnknownHost
> [root@ctr-e138-1518143905142-481027-01-02 bin]#
> {code}
> Ideally, it should just throw error like hdfs below:
> {code:java}
> [hrt_qa@ctr-e138-1518143905142-483670-01-02 hadoopqe]$ hdfs dfs -ls 
> s3a://namit54/
> 18/09/22 00:31:53 INFO impl.MetricsConfig: Loaded properties from 
> hadoop-metrics2.properties
> 18/09/22 00:31:53 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot 
> period at 10 

[jira] [Created] (HDDS-541) ozone volume quota is not honored

2018-09-22 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-541:
-

 Summary: ozone volume quota is not honored
 Key: HDDS-541
 URL: https://issues.apache.org/jira/browse/HDDS-541
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


Create a volume with just 1 MB as quota
{code:java}
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh volume create 
--quota=1MB --user=root /hive
2018-09-23 02:10:11,283 [main] INFO - Creating Volume: hive, with root as owner 
and quota set to 1048576 bytes.
{code}
Now create a bucket and put a big key greater than 1MB in the bucket
{code:java}
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh bucket create 
/hive/bucket1
2018-09-23 02:10:38,003 [main] INFO - Creating Bucket: hive/bucket1, with 
Versioning false and Storage Type set to DISK
[root@ctr-e138-1518143905142-481027-01-02 bin]# ls -l 
../../ozone-0.3.0-SNAPSHOT.tar.gz
-rw-r--r-- 1 root root 165903437 Sep 21 13:16 ../../ozone-0.3.0-SNAPSHOT.tar.gz
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key put 
/hive/ozone-0.3.0-SNAPSHOT.tar.gz ../../ozone-0.3.0-SNAPSHOT.tar.gz
volume/bucket/key name required in putKey
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key put 
/hive/bucket1/ozone-0.3.0-SNAPSHOT.tar.gz ../../ozone-0.3.0-SNAPSHOT.tar.gz
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key info 
/hive/bucket1/ozone-0.3.0-SNAPSHOT.tar.gz
{
"version" : 0,
"md5hash" : null,
"createdOn" : "Sun, 23 Sep 2018 02:13:02 GMT",
"modifiedOn" : "Sun, 23 Sep 2018 02:13:08 GMT",
"size" : 165903437,
"keyName" : "ozone-0.3.0-SNAPSHOT.tar.gz",
"keyLocations" : [ {
"containerID" : 2,
"localID" : 100772661343420416,
"length" : 134217728,
"offset" : 0
}, {
"containerID" : 3,
"localID" : 100772661661007873,
"length" : 31685709,
"offset" : 0
} ]
}{code}
It was able to put a 165 MB file on a volume with just 1MB quota.
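
For reference, a hypothetical sketch of the missing enforcement (the 
volumeArgs, getQuotaInBytes, and getUsedBytes names are assumptions, not 
verified OM APIs): before allocating blocks for a key, the volume's current 
usage plus the new key's size would be checked against the quota.
{code:java}
// Hypothetical enforcement sketch -- names are assumed, not actual OM APIs.
long quotaInBytes = volumeArgs.getQuotaInBytes();
long usedInBytes = volumeArgs.getUsedBytes();
long requestedBytes = args.getDataSize();

if (quotaInBytes > 0 && usedInBytes + requestedBytes > quotaInBytes) {
  throw new IOException("Quota exceeded for volume " + args.getVolumeName()
      + ": quota=" + quotaInBytes + ", used=" + usedInBytes
      + ", requested=" + requestedBytes + " bytes");
}
{code}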



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


