[jira] [Created] (HDFS-15489) Documentation link is broken for Apache Hadoop

2020-07-22 Thread Namit Maheshwari (Jira)
Namit Maheshwari created HDFS-15489:
---

 Summary: Documentation link is broken for Apache Hadoop
 Key: HDFS-15489
 URL: https://issues.apache.org/jira/browse/HDFS-15489
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Namit Maheshwari
 Attachments: DocumentLinkBroken.mov

 

Please see the attached video and screenshots.

[^DocumentLinkBroken.mov]






[jira] [Created] (HDDS-2229) ozonefs paths need host and port information for non HA environment

2019-10-01 Thread Namit Maheshwari (Jira)
Namit Maheshwari created HDDS-2229:
--

 Summary: ozonefs paths need host and port information for non HA 
environment
 Key: HDDS-2229
 URL: https://issues.apache.org/jira/browse/HDDS-2229
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


 
For a non-HA environment, ozonefs paths need to include host and port info, like below:

o3fs://bucket.volume.om-host:port/key

Whereas for HA environments, the path will change to use a nameservice, like below:

o3fs://bucket.volume.ns1/key

This will create ambiguity. The user experience should be the same irrespective of 
the deployment. 
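For illustration, a minimal sketch of opening both path forms through the Hadoop 
FileSystem API; the bucket, volume, host, port, and nameservice values are 
hypothetical placeholders, not values from this issue:
{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class O3fsPathExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Non-HA: the OM host and port are embedded in the path authority.
    FileSystem nonHa = FileSystem.get(
        URI.create("o3fs://bucket.volume.om-host:9862/"), conf);
    System.out.println(nonHa.exists(new Path("/key")));

    // HA: the authority carries a nameservice id instead of host:port,
    // so the same client code must handle a differently shaped path.
    FileSystem ha = FileSystem.get(
        URI.create("o3fs://bucket.volume.ns1/"), conf);
    System.out.println(ha.exists(new Path("/key")));
  }
}
{code}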







[jira] [Resolved] (HDDS-688) Hive Query hangs, if DN's are restarted before the query is submitted

2018-10-31 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari resolved HDDS-688.
---
Resolution: Fixed

This is fixed with the recent changes. Resolving it.

> Hive Query hangs, if DN's are restarted before the query is submitted
> -
>
> Key: HDDS-688
> URL: https://issues.apache.org/jira/browse/HDDS-688
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Mukul Kumar Singh
>Priority: Major
>
> Run a Hive Insert Query. It runs fine as below:
> {code:java}
> 0: jdbc:hive2://ctr-e138-1518143905142-510793> insert into testo3 values(1, 
> "aa", 3.0);
> INFO : Compiling 
> command(queryId=hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607): 
> insert into testo3 values(1, "aa", 3.0)
> INFO : Semantic Analysis Completed (retrial = false)
> INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:_col0, 
> type:int, comment:null), FieldSchema(name:_col1, type:string, comment:null), 
> FieldSchema(name:_col2, type:float, comment:null)], properties:null)
> INFO : Completed compiling 
> command(queryId=hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607); 
> Time taken: 0.52 seconds
> INFO : Executing 
> command(queryId=hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607): 
> insert into testo3 values(1, "aa", 3.0)
> INFO : Query ID = hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607
> INFO : Total jobs = 1
> INFO : Launching Job 1 out of 1
> INFO : Starting task [Stage-1:MAPRED] in serial mode
> INFO : Subscribed to counters: [] for queryId: 
> hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607
> INFO : Session is already open
> INFO : Dag name: insert into testo3 values(1, "aa", 3.0) (Stage-1)
> INFO : Status: Running (Executing on YARN cluster with App id 
> application_1539383731490_0073)
> ----------------------------------------------------------------------------------------------
> VERTICES MODE STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
> ----------------------------------------------------------------------------------------------
> Map 1 .......... container SUCCEEDED 1 1 0 0 0 0
> Reducer 2 ...... container SUCCEEDED 1 1 0 0 0 0
> ----------------------------------------------------------------------------------------------
> VERTICES: 02/02 [==========================>>] 100% ELAPSED TIME: 11.95 s
> ----------------------------------------------------------------------------------------------
> INFO : Status: DAG finished successfully in 10.68 seconds
> INFO :
> INFO : Query Execution Summary
> INFO : ----------------------------------------------------------------------------------------------
> INFO : OPERATION DURATION
> INFO : ----------------------------------------------------------------------------------------------
> INFO : Compile Query 0.52s
> INFO : Prepare Plan 0.23s
> INFO : Get Query Coordinator (AM) 0.00s
> INFO : Submit Plan 0.11s
> INFO : Start DAG 0.57s
> INFO : Run DAG 10.68s
> INFO : ----------------------------------------------------------------------------------------------
> INFO :
> INFO : Task Execution Summary
> INFO : ----------------------------------------------------------------------------------------------
> INFO : VERTICES DURATION(ms) CPU_TIME(ms) GC_TIME(ms) INPUT_RECORDS OUTPUT_RECORDS
> INFO : ----------------------------------------------------------------------------------------------
> INFO : Map 1 7074.00 11,280 276 3 1
> INFO : Reducer 2 1074.00 2,040 0 1 0
> INFO : ----------------------------------------------------------------------------------------------
> INFO :
> INFO : org.apache.tez.common.counters.DAGCounter:
> INFO : NUM_SUCCEEDED_TASKS: 2
> INFO : TOTAL_LAUNCHED_TASKS: 2
> INFO : AM_CPU_MILLISECONDS: 1390
> INFO : AM_GC_TIME_MILLIS: 0
> INFO : File System Counters:
> INFO : FILE_BYTES_READ: 135
> INFO : FILE_BYTES_WRITTEN: 135
> INFO : HDFS_BYTES_WRITTEN: 199
> INFO : HDFS_READ_OPS: 3
> INFO : HDFS_WRITE_OPS: 2
> INFO : HDFS_OP_CREATE: 1
> INFO : HDFS_OP_GET_FILE_STATUS: 3
> INFO : HDFS_OP_RENAME: 1
> INFO : org.apache.tez.common.counters.TaskCounter:
> INFO : SPILLED_RECORDS: 0
> INFO : NUM_SHUFFLED_INPUTS: 1
> INFO : NUM_FAILED_SHUFFLE_INPUTS: 0
> INFO : GC_TIME_MILLIS: 276
> INFO : TASK_DURATION_MILLIS: 8474
> INFO : CPU_MILLISECONDS: 13320
> INFO : PHYSICAL_MEMORY_BYTES: 4294967296
> INFO : VIRTUAL_MEMORY_BYTES: 11205029888
> INFO : COMMITTED_HEAP_BYTES: 4294967296
> INFO : INPUT_RECORDS_PROCESSED: 5
> INFO : INPUT_SPLIT_LENGTH_BYTES: 1
> INFO : OUTPUT_RECORDS: 1
> INFO : OUTPUT_LARGE_RECORDS: 0
> INFO : OUTPUT_BYTES: 94
> INFO : OUTPUT_BYTES_WITH_OVERHEAD: 102
> INFO : OUTPUT_BYTES_PHYSICAL: 127
> INFO : ADDITIONAL_SPILLS_BYTES_WRITTEN: 0
> INFO : 

[jira] [Created] (HDDS-689) Datanode shuts down on restart

2018-10-17 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-689:
-

 Summary: Datanode shuts down on restart
 Key: HDDS-689
 URL: https://issues.apache.org/jira/browse/HDDS-689
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


Restarted all 8 DNs in the cluster. Two of them shut down as below:
{code:java}
2018-10-18 01:10:57,102 ERROR impl.StateMachineUpdater 
(ExitUtils.java:terminate(86)) - Terminating with exit status 2: 
StateMachineUpdater-69d15283-4e2e-4c30-a028-f2bad0f83cc1: the 
StateMachineUpdater hits Throwable
java.util.concurrent.RejectedExecutionException: Task 
java.util.concurrent.CompletableFuture$AsyncSupply@7ee4907b rejected from 
java.util.concurrent.ThreadPoolExecutor@702371d7[Terminated, pool size = 0, 
active threads = 0, queued tasks = 0, completed tasks = 0]
at 
java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
at 
java.util.concurrent.Executors$DelegatedExecutorService.execute(Executors.java:668)
at 
java.util.concurrent.CompletableFuture.asyncSupplyStage(CompletableFuture.java:1604)
at 
java.util.concurrent.CompletableFuture.supplyAsync(CompletableFuture.java:1830)
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.applyTransaction(ContainerStateMachine.java:433)
at 
org.apache.ratis.server.impl.RaftServerImpl.applyLogToStateMachine(RaftServerImpl.java:1093)
at 
org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:148)
at java.lang.Thread.run(Thread.java:748)
2018-10-18 01:10:57,107 WARN fs.CachingGetSpaceUsed 
(CachingGetSpaceUsed.java:run(183)) - Thread Interrupted waiting to refresh 
disk information: sleep interrupted
2018-10-18 01:10:57,108 INFO datanode.DataNode (LogAdapter.java:info(51)) - 
SHUTDOWN_MSG:
{code}
 






[jira] [Created] (HDDS-688) Hive Query hangs, if DN's are restarted before the query is submitted

2018-10-17 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-688:
-

 Summary: Hive Query hangs, if DN's are restarted before the query 
is submitted
 Key: HDDS-688
 URL: https://issues.apache.org/jira/browse/HDDS-688
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


Run a Hive Insert Query. It runs fine as below:
{code:java}
0: jdbc:hive2://ctr-e138-1518143905142-510793> insert into testo3 values(1, 
"aa", 3.0);
INFO : Compiling 
command(queryId=hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607): 
insert into testo3 values(1, "aa", 3.0)
INFO : Semantic Analysis Completed (retrial = false)
INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:_col0, 
type:int, comment:null), FieldSchema(name:_col1, type:string, comment:null), 
FieldSchema(name:_col2, type:float, comment:null)], properties:null)
INFO : Completed compiling 
command(queryId=hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607); Time 
taken: 0.52 seconds
INFO : Executing 
command(queryId=hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607): 
insert into testo3 values(1, "aa", 3.0)
INFO : Query ID = hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607
INFO : Total jobs = 1
INFO : Launching Job 1 out of 1
INFO : Starting task [Stage-1:MAPRED] in serial mode
INFO : Subscribed to counters: [] for queryId: 
hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607
INFO : Session is already open
INFO : Dag name: insert into testo3 values(1, "aa", 3.0) (Stage-1)
INFO : Status: Running (Executing on YARN cluster with App id 
application_1539383731490_0073)

----------------------------------------------------------------------------------------------
VERTICES MODE STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
----------------------------------------------------------------------------------------------
Map 1 .......... container SUCCEEDED 1 1 0 0 0 0
Reducer 2 ...... container SUCCEEDED 1 1 0 0 0 0
----------------------------------------------------------------------------------------------
VERTICES: 02/02 [==========================>>] 100% ELAPSED TIME: 11.95 s
----------------------------------------------------------------------------------------------
INFO : Status: DAG finished successfully in 10.68 seconds
INFO :
INFO : Query Execution Summary
INFO : ----------------------------------------------------------------------------------------------
INFO : OPERATION DURATION
INFO : ----------------------------------------------------------------------------------------------
INFO : Compile Query 0.52s
INFO : Prepare Plan 0.23s
INFO : Get Query Coordinator (AM) 0.00s
INFO : Submit Plan 0.11s
INFO : Start DAG 0.57s
INFO : Run DAG 10.68s
INFO : ----------------------------------------------------------------------------------------------
INFO :
INFO : Task Execution Summary
INFO : ----------------------------------------------------------------------------------------------
INFO : VERTICES DURATION(ms) CPU_TIME(ms) GC_TIME(ms) INPUT_RECORDS OUTPUT_RECORDS
INFO : ----------------------------------------------------------------------------------------------
INFO : Map 1 7074.00 11,280 276 3 1
INFO : Reducer 2 1074.00 2,040 0 1 0
INFO : ----------------------------------------------------------------------------------------------
INFO :
INFO : org.apache.tez.common.counters.DAGCounter:
INFO : NUM_SUCCEEDED_TASKS: 2
INFO : TOTAL_LAUNCHED_TASKS: 2
INFO : AM_CPU_MILLISECONDS: 1390
INFO : AM_GC_TIME_MILLIS: 0
INFO : File System Counters:
INFO : FILE_BYTES_READ: 135
INFO : FILE_BYTES_WRITTEN: 135
INFO : HDFS_BYTES_WRITTEN: 199
INFO : HDFS_READ_OPS: 3
INFO : HDFS_WRITE_OPS: 2
INFO : HDFS_OP_CREATE: 1
INFO : HDFS_OP_GET_FILE_STATUS: 3
INFO : HDFS_OP_RENAME: 1
INFO : org.apache.tez.common.counters.TaskCounter:
INFO : SPILLED_RECORDS: 0
INFO : NUM_SHUFFLED_INPUTS: 1
INFO : NUM_FAILED_SHUFFLE_INPUTS: 0
INFO : GC_TIME_MILLIS: 276
INFO : TASK_DURATION_MILLIS: 8474
INFO : CPU_MILLISECONDS: 13320
INFO : PHYSICAL_MEMORY_BYTES: 4294967296
INFO : VIRTUAL_MEMORY_BYTES: 11205029888
INFO : COMMITTED_HEAP_BYTES: 4294967296
INFO : INPUT_RECORDS_PROCESSED: 5
INFO : INPUT_SPLIT_LENGTH_BYTES: 1
INFO : OUTPUT_RECORDS: 1
INFO : OUTPUT_LARGE_RECORDS: 0
INFO : OUTPUT_BYTES: 94
INFO : OUTPUT_BYTES_WITH_OVERHEAD: 102
INFO : OUTPUT_BYTES_PHYSICAL: 127
INFO : ADDITIONAL_SPILLS_BYTES_WRITTEN: 0
INFO : ADDITIONAL_SPILLS_BYTES_READ: 0
INFO : ADDITIONAL_SPILL_COUNT: 0
INFO : SHUFFLE_BYTES: 103
INFO : SHUFFLE_BYTES_DECOMPRESSED: 102
INFO : SHUFFLE_BYTES_TO_MEM: 0
INFO : SHUFFLE_BYTES_TO_DISK: 0
INFO : SHUFFLE_BYTES_DISK_DIRECT: 103
INFO : SHUFFLE_PHASE_TIME: 154
INFO : FIRST_EVENT_RECEIVED: 108
INFO : LAST_EVENT_RECEIVED: 108
INFO : HIVE:
INFO : CREATED_FILES: 2
INFO : DESERIALIZE_ERRORS: 0
INFO : RECORDS_IN_Map_1: 3
INFO : RECORDS_OUT_0: 1
INFO : RECORDS_OUT_1_default.testo3: 1
INFO : RECORDS_OUT_INTERMEDIATE_Map_1: 1
INFO : 

[jira] [Created] (HDDS-687) SCM is not able to restart

2018-10-17 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-687:
-

 Summary: SCM is not able to restart
 Key: HDDS-687
 URL: https://issues.apache.org/jira/browse/HDDS-687
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


SCM is not able to come up on restart.






[jira] [Created] (HDDS-674) Not able to get key after distcp job passes

2018-10-16 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-674:
-

 Summary: Not able to get key after distcp job passes
 Key: HDDS-674
 URL: https://issues.apache.org/jira/browse/HDDS-674
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Namit Maheshwari


It fails with 
{code:java}
-bash-4.2$ ozone sh key get /volume2/bucket2/distcp/wordcount_input_1.txt 
/tmp/wordcountDistcp.txt
2018-10-17 00:25:07,904 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
Lookup key failed, error:KEY_NOT_FOUND{code}






[jira] [Created] (HDDS-672) Spark shell throws OzoneFileSystem not found

2018-10-16 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-672:
-

 Summary: Spark shell throws OzoneFileSystem not found
 Key: HDDS-672
 URL: https://issues.apache.org/jira/browse/HDDS-672
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Namit Maheshwari


Spark shell throws OzoneFileSystem not found if the Ozone jars are not 
specified in the --jars option.
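As a hedged sketch of why the jars matter: Hadoop resolves the implementation 
class for a filesystem scheme from the fs.<scheme>.impl property, and that class 
must be on the classpath. The property key and class name below follow the stack 
traces elsewhere in this thread and are assumptions for this Ozone version:
{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class OzoneFsResolution {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Map the o3 scheme to its implementation class (normally done in
    // core-site.xml). FileSystem.get fails with "OzoneFileSystem not found"
    // when this class is missing from the classpath, e.g. when --jars is
    // omitted in spark-shell.
    conf.set("fs.o3.impl", "org.apache.hadoop.fs.ozone.OzoneFileSystem");
    FileSystem fs = FileSystem.get(URI.create("o3://bucket2.volume2/"), conf);
    System.out.println(fs.getUri());
  }
}
{code}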






[jira] [Created] (HDDS-671) Hive HSI insert tries to create data in Hdfs for Ozone external table

2018-10-16 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-671:
-

 Summary: Hive HSI insert tries to create data in Hdfs for Ozone 
external table
 Key: HDDS-671
 URL: https://issues.apache.org/jira/browse/HDDS-671
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Namit Maheshwari


Hive HSI insert tries to create data in HDFS for an Ozone external table when 
"hive.server2.enable.doAs" is set to true. 






[jira] [Created] (HDDS-670) Hive insert fails against Ozone external table

2018-10-16 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-670:
-

 Summary: Hive insert fails against Ozone external table
 Key: HDDS-670
 URL: https://issues.apache.org/jira/browse/HDDS-670
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Namit Maheshwari


It fails with 
{code:java}
ERROR : Job Commit failed with exception 
'org.apache.hadoop.hive.ql.metadata.HiveException(Unable to move: 
o3://bucket2.volume2/testo3/.hive-staging_hive_2018-10-16_21-09-35_130_1001829123585250245-1/_tmp.-ext-1
 to: 
o3://bucket2.volume2/testo3/.hive-staging_hive_2018-10-16_21-09-35_130_1001829123585250245-1/_tmp.-ext-1.moved)'
org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move: 
o3://bucket2.volume2/testo3/.hive-staging_hive_2018-10-16_21-09-35_130_1001829123585250245-1/_tmp.-ext-1
 to: 
o3://bucket2.volume2/testo3/.hive-staging_hive_2018-10-16_21-09-35_130_1001829123585250245-1/_tmp.-ext-1.moved
{code}
 






[jira] [Created] (HDDS-664) Creating hive table on Ozone fails

2018-10-15 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-664:
-

 Summary: Creating hive table on Ozone fails
 Key: HDDS-664
 URL: https://issues.apache.org/jira/browse/HDDS-664
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Namit Maheshwari


Modified HIVE_AUX_JARS_PATH to include the Ozone jars and tried creating a Hive 
external table on Ozone. It fails with "Error: Error while compiling statement: 
FAILED: HiveAuthzPluginException Error getting permissions for 
o3://bucket2.volume2/testo3: User: hive is not allowed to impersonate anonymous 
(state=42000,code=4)"
{code:java}
-bash-4.2$ beeline
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/usr/hdp/3.0.3.0-63/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/usr/hdp/3.0.3.0-63/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to 
jdbc:hive2://ctr-e138-1518143905142-510793-01-11.hwx.site:2181,ctr-e138-1518143905142-510793-01-06.hwx.site:2181,ctr-e138-1518143905142-510793-01-08.hwx.site:2181,ctr-e138-1518143905142-510793-01-10.hwx.site:2181,ctr-e138-1518143905142-510793-01-07.hwx.site:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Enter username for 
jdbc:hive2://ctr-e138-1518143905142-510793-01-11.hwx.site:2181,ctr-e138-1518143905142-510793-01-06.hwx.site:2181,ctr-e138-1518143905142-510793-01-08.hwx.site:2181,ctr-e138-1518143905142-510793-01-10.hwx.site:2181,ctr-e138-1518143905142-510793-01-07.hwx.site:2181/default:
Enter password for 
jdbc:hive2://ctr-e138-1518143905142-510793-01-11.hwx.site:2181,ctr-e138-1518143905142-510793-01-06.hwx.site:2181,ctr-e138-1518143905142-510793-01-08.hwx.site:2181,ctr-e138-1518143905142-510793-01-10.hwx.site:2181,ctr-e138-1518143905142-510793-01-07.hwx.site:2181/default:
18/10/15 21:36:55 [main]: INFO jdbc.HiveConnection: Connected to 
ctr-e138-1518143905142-510793-01-04.hwx.site:1
Connected to: Apache Hive (version 3.1.0.3.0.3.0-63)
Driver: Hive JDBC (version 3.1.0.3.0.3.0-63)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 3.1.0.3.0.3.0-63 by Apache Hive
0: jdbc:hive2://ctr-e138-1518143905142-510793> create external table testo3 ( i 
int, s string, d float) location "o3://bucket2.volume2/testo3";
Error: Error while compiling statement: FAILED: HiveAuthzPluginException Error 
getting permissions for o3://bucket2.volume2/testo3: User: hive is not allowed 
to impersonate anonymous (state=42000,code=4)
0: jdbc:hive2://ctr-e138-1518143905142-510793> {code}
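The "User: hive is not allowed to impersonate" error is typically governed by 
Hadoop's proxyuser settings. As a hedged sketch (these entries normally belong in 
core-site.xml on the cluster, and whether they resolve this particular failure is 
an assumption), the check can be exercised programmatically:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.authorize.ProxyUsers;

public class ProxyUserCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Equivalent of the core-site.xml entries that allow the hive service
    // user to impersonate end users; '*' is a deliberately permissive
    // placeholder, not a recommended production value.
    conf.set("hadoop.proxyuser.hive.hosts", "*");
    conf.set("hadoop.proxyuser.hive.groups", "*");
    ProxyUsers.refreshSuperUserGroupsConfiguration(conf);

    // Simulate hive proxying the "anonymous" end user from a given host,
    // the way HiveServer2 does; throws AuthorizationException if denied.
    UserGroupInformation hive = UserGroupInformation.createRemoteUser("hive");
    UserGroupInformation proxy =
        UserGroupInformation.createProxyUser("anonymous", hive);
    ProxyUsers.authorize(proxy, "172.27.79.197");
    System.out.println("impersonation allowed");
  }
}
{code}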
 






[jira] [Resolved] (HDDS-637) Not able to access the part-r-00000 file after the MR job succeeds

2018-10-15 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari resolved HDDS-637.
---
Resolution: Cannot Reproduce

The issue seems to go away with the latest code. 
Will reopen if seen again. Thanks [~xyao]

> Not able to access the part-r-00000 file after the MR job succeeds
> --
>
> Key: HDDS-637
> URL: https://issues.apache.org/jira/browse/HDDS-637
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.3.0
>Reporter: Namit Maheshwari
>Assignee: Xiaoyu Yao
>Priority: Major
>
> Run a MR job
> {code:java}
> -bash-4.2$ /usr/hdp/current/hadoop-client/bin/hadoop jar 
> /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
> wordcount /tmp/mr_jobs/input/ o3://bucket2.volume2/mr_jobDD
> 18/10/12 01:00:23 INFO conf.Configuration: Removed undeclared tags:
> 18/10/12 01:00:24 INFO conf.Configuration: Removed undeclared tags:
> 18/10/12 01:00:24 INFO conf.Configuration: Removed undeclared tags:
> 18/10/12 01:00:25 INFO conf.Configuration: Removed undeclared tags:
> 18/10/12 01:00:25 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
> to rm2
> 18/10/12 01:00:25 INFO mapreduce.JobResourceUploader: Disabling Erasure 
> Coding for path: /user/hdfs/.staging/job_1539295307098_0003
> 18/10/12 01:00:27 INFO input.FileInputFormat: Total input files to process : 1
> 18/10/12 01:00:27 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
> 18/10/12 01:00:27 INFO lzo.LzoCodec: Successfully loaded & initialized 
> native-lzo library [hadoop-lzo rev 5d6248d8d690f8456469979213ab2e9993bfa2e9]
> 18/10/12 01:00:27 INFO mapreduce.JobSubmitter: number of splits:1
> 18/10/12 01:00:28 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
> job_1539295307098_0003
> 18/10/12 01:00:28 INFO mapreduce.JobSubmitter: Executing with tokens: []
> 18/10/12 01:00:28 INFO conf.Configuration: Removed undeclared tags:
> 18/10/12 01:00:28 INFO conf.Configuration: found resource resource-types.xml 
> at file:/etc/hadoop/3.0.3.0-63/0/resource-types.xml
> 18/10/12 01:00:28 INFO conf.Configuration: Removed undeclared tags:
> 18/10/12 01:00:28 INFO impl.YarnClientImpl: Submitted application 
> application_1539295307098_0003
> 18/10/12 01:00:29 INFO mapreduce.Job: The url to track the job: 
> http://ctr-e138-1518143905142-510793-01-05.hwx.site:8088/proxy/application_1539295307098_0003/
> 18/10/12 01:00:29 INFO mapreduce.Job: Running job: job_1539295307098_0003
> 18/10/12 01:00:35 INFO mapreduce.Job: Job job_1539295307098_0003 running in 
> uber mode : false
> 18/10/12 01:00:35 INFO mapreduce.Job: map 0% reduce 0%
> 18/10/12 01:00:44 INFO mapreduce.Job: map 100% reduce 0%
> 18/10/12 01:00:57 INFO mapreduce.Job: map 100% reduce 67%
> 18/10/12 01:00:59 INFO mapreduce.Job: map 100% reduce 100%
> 18/10/12 01:00:59 INFO mapreduce.Job: Job job_1539295307098_0003 completed 
> successfully
> 18/10/12 01:00:59 INFO conf.Configuration: Removed undeclared tags:
> 18/10/12 01:00:59 INFO mapreduce.Job: Counters: 58
> File System Counters
> FILE: Number of bytes read=6332
> FILE: Number of bytes written=532585
> FILE: Number of read operations=0
> FILE: Number of large read operations=0
> FILE: Number of write operations=0
> HDFS: Number of bytes read=215876
> HDFS: Number of bytes written=0
> HDFS: Number of read operations=2
> HDFS: Number of large read operations=0
> HDFS: Number of write operations=0
> O3: Number of bytes read=0
> O3: Number of bytes written=0
> O3: Number of read operations=0
> O3: Number of large read operations=0
> O3: Number of write operations=0
> Job Counters
> Launched map tasks=1
> Launched reduce tasks=1
> Rack-local map tasks=1
> Total time spent by all maps in occupied slots (ms)=25392
> Total time spent by all reduces in occupied slots (ms)=103584
> Total time spent by all map tasks (ms)=6348
> Total time spent by all reduce tasks (ms)=12948
> Total vcore-milliseconds taken by all map tasks=6348
> Total vcore-milliseconds taken by all reduce tasks=12948
> Total megabyte-milliseconds taken by all map tasks=26001408
> Total megabyte-milliseconds taken by all reduce tasks=106070016
> Map-Reduce Framework
> Map input records=716
> Map output records=32019
> Map output bytes=343475
> Map output materialized bytes=6332
> Input split bytes=121
> Combine input records=32019
> Combine output records=461
> Reduce input groups=461
> Reduce shuffle bytes=6332
> Reduce input records=461
> Reduce output records=461
> Spilled Records=922
> Shuffled Maps =1
> Failed Shuffles=0
> Merged Map outputs=1
> GC time elapsed (ms)=359
> CPU time spent (ms)=11800
> Physical memory (bytes) snapshot=3018502144
> Virtual memory (bytes) snapshot=14470242304
> Total committed heap usage (bytes)=3521642496
> Peak Map Physical memory 

[jira] [Created] (HDDS-663) Lot of "Removed undeclared tags" logger while running commands

2018-10-15 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-663:
-

 Summary: Lot of "Removed undeclared tags" logger while running 
commands
 Key: HDDS-663
 URL: https://issues.apache.org/jira/browse/HDDS-663
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Namit Maheshwari


While running commands against OzoneFs, we see a lot of log messages like below:
{code:java}
-bash-4.2$ hdfs dfs -ls o3://bucket2.volume2/mr_jobEE
18/10/15 20:29:17 INFO conf.Configuration: Removed undeclared tags:
18/10/15 20:29:18 INFO conf.Configuration: Removed undeclared tags:
Found 2 items
-rw-rw-rw- 1 hdfs hdfs 0 2018-10-15 20:28 o3://bucket2.volume2/mr_jobEE/_SUCCESS
-rw-rw-rw- 1 hdfs hdfs 5017 1970-07-23 04:33 
o3://bucket2.volume2/mr_jobEE/part-r-00000
18/10/15 20:29:19 INFO conf.Configuration: Removed undeclared tags:
-bash-4.2${code}






[jira] [Created] (HDDS-652) Properties in ozone-site.xml does not work well with IP

2018-10-12 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-652:
-

 Summary: Properties in ozone-site.xml does not work well with IP 
 Key: HDDS-652
 URL: https://issues.apache.org/jira/browse/HDDS-652
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Namit Maheshwari


There have been cases where properties in ozone-site.xml do not work well 
with IPs. 

If those properties, like ozone.om.address, are changed to use hostnames, it works 
well.

Ideally this should work fine with both IPs and hostnames.
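As a minimal sketch of the two forms, using the ozone.om.address key named above; 
the hostname, IP, and port are hypothetical placeholders:
{code:java}
import org.apache.hadoop.hdds.conf.OzoneConfiguration;

public class OmAddressForms {
  public static void main(String[] args) {
    OzoneConfiguration conf = new OzoneConfiguration();

    // Hostname form: reported to work well.
    conf.set("ozone.om.address", "om-host.example.com:9862");

    // IP form: reported not to work well in some cases, per this issue.
    // conf.set("ozone.om.address", "172.27.79.197:9862");

    System.out.println(conf.get("ozone.om.address"));
  }
}
{code}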






[jira] [Created] (HDDS-651) Rename o3 to o3fs for Filesystem

2018-10-12 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-651:
-

 Summary: Rename o3 to o3fs for Filesystem
 Key: HDDS-651
 URL: https://issues.apache.org/jira/browse/HDDS-651
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Namit Maheshwari


I propose that we rename o3 to o3fs for the filesystem.

Using the same name o3 for different purposes creates a lot of 
confusion.






[jira] [Created] (HDDS-650) Spark job is not able to pick up Ozone configuration

2018-10-12 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-650:
-

 Summary: Spark job is not able to pick up Ozone configuration
 Key: HDDS-650
 URL: https://issues.apache.org/jira/browse/HDDS-650
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


Spark job is not able to pick up Ozone configuration.
{code:java}
-bash-4.2$ spark-shell --master yarn-client --jars 
/usr/hdp/current/hadoop-client/lib/hadoop-lzo-0.6.0.3.0.3.0-63.jar,/tmp/ozone-0.3.0-SNAPSHOT/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin-0.3.0-SNAPSHOT.jar,/tmp/ozone-0.3.0-SNAPSHOT/share/hadoop/ozonefs/hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar
Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" 
with specified deploy mode instead.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use 
setLogLevel(newLevel).
Spark context Web UI available at 
http://ctr-e138-1518143905142-510793-01-02.hwx.site:4040
Spark context available as 'sc' (master = yarn, app id = 
application_1539295307098_0011).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.2.3.0.3.0-63
      /_/

Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_181)
Type in expressions to have them evaluated.
Type :help for more information.

scala>

scala> val input = sc.textFile("o3://bucket2.volume2/passwd");
input: org.apache.spark.rdd.RDD[String] = o3://bucket2.volume2/passwd 
MapPartitionsRDD[1] at textFile at <console>:24

scala> val count = input.flatMap(line => line.split(" ")).map(word => (word, 
1)).reduceByKey(_+_);
count: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[4] at reduceByKey 
at <console>:25

scala> count.cache()
res0: count.type = ShuffledRDD[4] at reduceByKey at <console>:25

scala> count.saveAsTextFile("o3://bucket2.volume2/sparkout3");
[Stage 0:> (0 + 2) / 2]18/10/12 22:16:44 WARN TaskSetManager: Lost task 1.0 in 
stage 0.0 (TID 1, ctr-e138-1518143905142-510793-01-11.hwx.site, executor 
1): java.io.IOException: Couldn't create protocol class 
org.apache.hadoop.ozone.client.rpc.RpcClient
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:299)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169)
at 
org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:119)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:108)
at 
org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.spark.rdd.HadoopRDD$$anon$1.liftedTree1$1(HadoopRDD.scala:257)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:256)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:214)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:94)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalArgumentException: ozone.om.address must be 
defined. See https://wiki.apache.org/hadoop/Ozone#Configuration for details on 
configuring Ozone.
at org.apache.hadoop.ozone.OmUtils.getOmAddressForClients(OmUtils.java:70)
at org.apache.hadoop.ozone.client.rpc.RpcClient.<init>(RpcClient.java:114)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 

[jira] [Created] (HDDS-637) Not able to access the part-r-00000 file after the MR job succeeds

2018-10-11 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-637:
-

 Summary: Not able to access the part-r-00000 file after the MR job 
succeeds
 Key: HDDS-637
 URL: https://issues.apache.org/jira/browse/HDDS-637
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Affects Versions: 0.3.0
Reporter: Namit Maheshwari


Run a MR job
{code:java}
-bash-4.2$ /usr/hdp/current/hadoop-client/bin/hadoop jar 
/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
wordcount /tmp/mr_jobs/input/ o3://bucket2.volume2/mr_jobDD
18/10/12 01:00:23 INFO conf.Configuration: Removed undeclared tags:
18/10/12 01:00:24 INFO conf.Configuration: Removed undeclared tags:
18/10/12 01:00:24 INFO conf.Configuration: Removed undeclared tags:
18/10/12 01:00:25 INFO conf.Configuration: Removed undeclared tags:
18/10/12 01:00:25 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
to rm2
18/10/12 01:00:25 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding 
for path: /user/hdfs/.staging/job_1539295307098_0003
18/10/12 01:00:27 INFO input.FileInputFormat: Total input files to process : 1
18/10/12 01:00:27 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
18/10/12 01:00:27 INFO lzo.LzoCodec: Successfully loaded & initialized 
native-lzo library [hadoop-lzo rev 5d6248d8d690f8456469979213ab2e9993bfa2e9]
18/10/12 01:00:27 INFO mapreduce.JobSubmitter: number of splits:1
18/10/12 01:00:28 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
job_1539295307098_0003
18/10/12 01:00:28 INFO mapreduce.JobSubmitter: Executing with tokens: []
18/10/12 01:00:28 INFO conf.Configuration: Removed undeclared tags:
18/10/12 01:00:28 INFO conf.Configuration: found resource resource-types.xml at 
file:/etc/hadoop/3.0.3.0-63/0/resource-types.xml
18/10/12 01:00:28 INFO conf.Configuration: Removed undeclared tags:
18/10/12 01:00:28 INFO impl.YarnClientImpl: Submitted application 
application_1539295307098_0003
18/10/12 01:00:29 INFO mapreduce.Job: The url to track the job: 
http://ctr-e138-1518143905142-510793-01-05.hwx.site:8088/proxy/application_1539295307098_0003/
18/10/12 01:00:29 INFO mapreduce.Job: Running job: job_1539295307098_0003
18/10/12 01:00:35 INFO mapreduce.Job: Job job_1539295307098_0003 running in 
uber mode : false
18/10/12 01:00:35 INFO mapreduce.Job: map 0% reduce 0%
18/10/12 01:00:44 INFO mapreduce.Job: map 100% reduce 0%
18/10/12 01:00:57 INFO mapreduce.Job: map 100% reduce 67%
18/10/12 01:00:59 INFO mapreduce.Job: map 100% reduce 100%
18/10/12 01:00:59 INFO mapreduce.Job: Job job_1539295307098_0003 completed 
successfully
18/10/12 01:00:59 INFO conf.Configuration: Removed undeclared tags:
18/10/12 01:00:59 INFO mapreduce.Job: Counters: 58
File System Counters
FILE: Number of bytes read=6332
FILE: Number of bytes written=532585
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=215876
HDFS: Number of bytes written=0
HDFS: Number of read operations=2
HDFS: Number of large read operations=0
HDFS: Number of write operations=0
O3: Number of bytes read=0
O3: Number of bytes written=0
O3: Number of read operations=0
O3: Number of large read operations=0
O3: Number of write operations=0
Job Counters
Launched map tasks=1
Launched reduce tasks=1
Rack-local map tasks=1
Total time spent by all maps in occupied slots (ms)=25392
Total time spent by all reduces in occupied slots (ms)=103584
Total time spent by all map tasks (ms)=6348
Total time spent by all reduce tasks (ms)=12948
Total vcore-milliseconds taken by all map tasks=6348
Total vcore-milliseconds taken by all reduce tasks=12948
Total megabyte-milliseconds taken by all map tasks=26001408
Total megabyte-milliseconds taken by all reduce tasks=106070016
Map-Reduce Framework
Map input records=716
Map output records=32019
Map output bytes=343475
Map output materialized bytes=6332
Input split bytes=121
Combine input records=32019
Combine output records=461
Reduce input groups=461
Reduce shuffle bytes=6332
Reduce input records=461
Reduce output records=461
Spilled Records=922
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=359
CPU time spent (ms)=11800
Physical memory (bytes) snapshot=3018502144
Virtual memory (bytes) snapshot=14470242304
Total committed heap usage (bytes)=3521642496
Peak Map Physical memory (bytes)=2518896640
Peak Map Virtual memory (bytes)=5397549056
Peak Reduce Physical memory (bytes)=499605504
Peak Reduce Virtual memory (bytes)=9072693248
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=215755
File Output Format Counters
Bytes Written=0
18/10/12 01:00:59 INFO conf.Configuration: Removed undeclared tags:
-bash-4.2$
{code}
Below exception is seen in SCM logs
{code:java}

2018-10-12 01:00:51,142 INFO 

[jira] [Created] (HDDS-624) PutBlock fails with Unexpected Storage Container Exception

2018-10-10 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-624:
-

 Summary: PutBlock fails with Unexpected Storage Container Exception
 Key: HDDS-624
 URL: https://issues.apache.org/jira/browse/HDDS-624
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


As per HDDS-622, Datanodes were shutting down while running MR jobs due to an 
issue in RocksDBStore. To avoid that failure, set the property 
_ozone.metastore.rocksdb.statistics_ to _OFF_ in ozone-site.xml.

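For clarity, the same workaround rendered programmatically (a sketch only; in 
practice the property goes into ozone-site.xml as stated above):
{code:java}
import org.apache.hadoop.hdds.conf.OzoneConfiguration;

public class RocksDbStatsWorkaround {
  public static void main(String[] args) {
    OzoneConfiguration conf = new OzoneConfiguration();
    // Same effect as the ozone-site.xml entry described above: disable
    // RocksDB statistics collection to avoid the HDDS-622 Datanode failure.
    conf.set("ozone.metastore.rocksdb.statistics", "OFF");
    System.out.println(conf.get("ozone.metastore.rocksdb.statistics"));
  }
}
{code}
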
Now running a MapReduce job fails with the below error:
{code:java}
-bash-4.2$ /usr/hdp/current/hadoop-client/bin/hadoop jar 
/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
wordcount /tmp/mr_jobs/input/ o3://bucket2.volume2/mr_jobAA
18/10/11 00:14:41 INFO conf.Configuration: Removed undeclared tags:
18/10/11 00:14:42 INFO conf.Configuration: Removed undeclared tags:
18/10/11 00:14:42 INFO conf.Configuration: Removed undeclared tags:
18/10/11 00:14:43 INFO conf.Configuration: Removed undeclared tags:
18/10/11 00:14:43 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
to rm2
18/10/11 00:14:43 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding 
for path: /user/hdfs/.staging/job_1539208750583_0005
18/10/11 00:14:43 INFO input.FileInputFormat: Total input files to process : 1
18/10/11 00:14:43 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
18/10/11 00:14:43 INFO lzo.LzoCodec: Successfully loaded & initialized 
native-lzo library [hadoop-lzo rev 5d6248d8d690f8456469979213ab2e9993bfa2e9]
18/10/11 00:14:44 INFO mapreduce.JobSubmitter: number of splits:1
18/10/11 00:14:44 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
job_1539208750583_0005
18/10/11 00:14:44 INFO mapreduce.JobSubmitter: Executing with tokens: []
18/10/11 00:14:44 INFO conf.Configuration: Removed undeclared tags:
18/10/11 00:14:44 INFO conf.Configuration: found resource resource-types.xml at 
file:/etc/hadoop/3.0.3.0-63/0/resource-types.xml
18/10/11 00:14:44 INFO conf.Configuration: Removed undeclared tags:
18/10/11 00:14:44 INFO impl.YarnClientImpl: Submitted application 
application_1539208750583_0005
18/10/11 00:14:45 INFO mapreduce.Job: The url to track the job: 
http://ctr-e138-1518143905142-510793-01-05.hwx.site:8088/proxy/application_1539208750583_0005/
18/10/11 00:14:45 INFO mapreduce.Job: Running job: job_1539208750583_0005
18/10/11 00:14:53 INFO mapreduce.Job: Job job_1539208750583_0005 running in 
uber mode : false
18/10/11 00:14:53 INFO mapreduce.Job: map 0% reduce 0%
18/10/11 00:15:00 INFO mapreduce.Job: map 100% reduce 0%
18/10/11 00:15:10 INFO mapreduce.Job: map 100% reduce 67%
18/10/11 00:15:11 INFO mapreduce.Job: Task Id : 
attempt_1539208750583_0005_r_000000_0, Status : FAILED
Error: java.io.IOException: Unexpected Storage Container Exception: 
java.io.IOException: Failed to command cmdType: PutBlock
traceID: "df0ed956-fa4d-40ef-a7f2-ec0b6160b41b"
containerID: 2
datanodeUuid: "96f8fa78-413e-4350-a8ff-6cbdaa16ba7f"
putBlock {
blockData {
blockID {
containerID: 2
localID: 100874119214399488
}
metadata {
key: "TYPE"
value: "KEY"
}
chunks {
chunkName: 
"f24fa36171bda3113584cb01dc12a871_stream_84157b3a-654d-4e3d-8455-fbf85321a306_chunk_1"
offset: 0
len: 5017
}
}
}

at 
org.apache.hadoop.hdds.scm.storage.ChunkOutputStream.close(ChunkOutputStream.java:171)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream$ChunkOutputStreamEntry.close(ChunkGroupOutputStream.java:699)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.handleFlushOrClose(ChunkGroupOutputStream.java:502)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.close(ChunkGroupOutputStream.java:531)
at 
org.apache.hadoop.fs.ozone.OzoneFSOutputStream.close(OzoneFSOutputStream.java:57)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
at 
org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:106)
at 
org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.close(ReduceTask.java:551)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:630)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:390)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
Caused by: java.io.IOException: Failed to command cmdType: PutBlock
traceID: "df0ed956-fa4d-40ef-a7f2-ec0b6160b41b"
containerID: 2
datanodeUuid: "96f8fa78-413e-4350-a8ff-6cbdaa16ba7f"
putBlock {
blockData {
blockID {
containerID: 2
localID: 100874119214399488
}
metadata {
key: "TYPE"
value: "KEY"
}
chunks {
chunkName: 

[jira] [Created] (HDDS-623) On SCM UI, Node Manager info is empty

2018-10-10 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-623:
-

 Summary: On SCM UI, Node Manager info is empty
 Key: HDDS-623
 URL: https://issues.apache.org/jira/browse/HDDS-623
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari
 Attachments: Screen Shot 2018-10-10 at 4.19.59 PM.png

The following fields are empty:

Node Manager: Minimum chill mode nodes 
Node Manager: Out-of-node chill mode 
Node Manager: Chill mode status 
Node Manager: Manual chill mode

Please see the attached screenshot: !Screen Shot 2018-10-10 at 4.19.59 PM.png!






[jira] [Created] (HDDS-622) Datanode shuts down with RocksDBStore java.lang.NoSuchMethodError

2018-10-10 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-622:
-

 Summary: Datanode shuts down with RocksDBStore 
java.lang.NoSuchMethodError
 Key: HDDS-622
 URL: https://issues.apache.org/jira/browse/HDDS-622
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


Datanodes are registered fine on a Hadoop + Ozone cluster.

While running jobs against Ozone, a Datanode shuts down as below:
{code:java}
2018-10-10 21:50:42,708 INFO storage.RaftLogWorker 
(RaftLogWorker.java:rollLogSegment(263)) - Rolling 
segment:7c1a32b5-34ed-4a2a-aa07-ac75d25858b6-RaftLogWorker index to:2
2018-10-10 21:50:42,714 INFO impl.RaftServerImpl 
(ServerState.java:setRaftConf(319)) - 7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: set 
configuration 2: [7c1a32b5-34ed-4a2a-aa07-ac75d25858b6:172.27.56.9:9858, 
e20b6291-d898-46de-8cb2-861523aed1a3:172.27.87.64:9858, 
b7fbd501-27ae-4304-8c42-a612915094c6:172.27.17.133:9858], old=null at 2
2018-10-10 21:50:42,729 WARN impl.LogAppender (LogUtils.java:warn(135)) - 
7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: Failed appendEntries to 
e20b6291-d898-46de-8cb2-861523aed1a3:172.27.87.64:9858: 
org.apache.ratis.shaded.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
2018-10-10 21:50:43,245 WARN impl.LogAppender (LogUtils.java:warn(135)) - 
7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: Failed appendEntries to 
e20b6291-d898-46de-8cb2-861523aed1a3:172.27.87.64:9858: 
org.apache.ratis.shaded.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
2018-10-10 21:50:43,310 ERROR impl.RaftServerImpl 
(RaftServerImpl.java:applyLogToStateMachine(1153)) - 
7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: applyTransaction failed for index:1 
proto:(t:2, i:1)SMLOGENTRY, 
client-894EC0846FDF, cid=0
2018-10-10 21:50:43,313 ERROR impl.StateMachineUpdater 
(ExitUtils.java:terminate(86)) - Terminating with exit status 2: 
StateMachineUpdater-7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: the 
StateMachineUpdater hits Throwable
java.lang.NoSuchMethodError: 
org.apache.hadoop.metrics2.util.MBeans.register(Ljava/lang/String;Ljava/lang/String;Ljava/util/Map;Ljava/lang/Object;)Ljavax/management/ObjectName;
at org.apache.hadoop.utils.RocksDBStore.<init>(RocksDBStore.java:74)
at 
org.apache.hadoop.utils.MetadataStoreBuilder.build(MetadataStoreBuilder.java:142)
at 
org.apache.hadoop.ozone.container.keyvalue.helpers.KeyValueContainerUtil.createContainerMetaData(KeyValueContainerUtil.java:78)
at 
org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.create(KeyValueContainer.java:133)
at 
org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handleCreateContainer(KeyValueHandler.java:256)
at 
org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handle(KeyValueHandler.java:179)
at 
org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:142)
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:223)
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.runCommand(ContainerStateMachine.java:229)
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.access$300(ContainerStateMachine.java:115)
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine$StateMachineHelper.handleCreateContainer(ContainerStateMachine.java:618)
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine$StateMachineHelper.executeContainerCommand(ContainerStateMachine.java:642)
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.applyTransaction(ContainerStateMachine.java:396)
at 
org.apache.ratis.server.impl.RaftServerImpl.applyLogToStateMachine(RaftServerImpl.java:1150)
at 
org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:148)
at java.lang.Thread.run(Thread.java:748)
2018-10-10 21:50:43,320 INFO datanode.DataNode (LogAdapter.java:info(51)) - 
SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at 
ctr-e138-1518143905142-510793-01-02.hwx.site/172.27.56.9
************************************************************/
{code}
 






[jira] [Created] (HDDS-612) Even after setting hdds.scm.chillmode.enabled to false, SCM allocateblock fails with ChillModePrecheck exception

2018-10-09 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-612:
-

 Summary: Even after setting hdds.scm.chillmode.enabled to false, 
SCM allocateblock fails with ChillModePrecheck exception
 Key: HDDS-612
 URL: https://issues.apache.org/jira/browse/HDDS-612
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


{code:java}
2018-10-09 23:11:58,047 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 
on 9863, call Call#70 Retry#0 
org.apache.hadoop.ozone.protocol.ScmBlockLocationProtocol.allocateScmBlock from 
172.27.56.9:53442
org.apache.hadoop.hdds.scm.exceptions.SCMException: ChillModePrecheck failed 
for allocateBlock
at 
org.apache.hadoop.hdds.scm.server.ChillModePrecheck.check(ChillModePrecheck.java:38)
at 
org.apache.hadoop.hdds.scm.server.ChillModePrecheck.check(ChillModePrecheck.java:30)
at org.apache.hadoop.hdds.scm.ScmUtils.preCheck(ScmUtils.java:42)
at 
org.apache.hadoop.hdds.scm.block.BlockManagerImpl.allocateBlock(BlockManagerImpl.java:191)
at 
org.apache.hadoop.hdds.scm.server.SCMBlockProtocolServer.allocateBlock(SCMBlockProtocolServer.java:143)
at 
org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.allocateScmBlock(ScmBlockLocationProtocolServerSideTranslatorPB.java:74)
at 
org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:6255)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
{code}






[jira] [Created] (HDDS-611) SCM UI is not reflecting the changes done in ozone-site.xml

2018-10-09 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-611:
-

 Summary: SCM UI is not reflecting the changes done in 
ozone-site.xml
 Key: HDDS-611
 URL: https://issues.apache.org/jira/browse/HDDS-611
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari
 Attachments: Screen Shot 2018-10-09 at 4.49.58 PM.png

ozone-site.xml was updated to change hdds.scm.chillmode.enabled to false. This 
is reflected properly as below:
{code:java}
[root@ctr-e138-1518143905142-510793-01-04 bin]# ./ozone getozoneconf 
-confKey hdds.scm.chillmode.enabled
2018-10-09 23:52:12,621 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
false
{code}
But the SCM UI does not reflect this change; it still shows the old value of 
true. Please see the attached screenshot: !Screen Shot 2018-10-09 at 4.49.58 PM.png!






[jira] [Created] (HDDS-610) On restart of SCM it fails to register DataNodes

2018-10-09 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-610:
-

 Summary: On restart of SCM it fails to register DataNodes
 Key: HDDS-610
 URL: https://issues.apache.org/jira/browse/HDDS-610
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


{code:java}
2018-10-09 23:34:11,105 INFO 
org.apache.hadoop.hdds.scm.server.StorageContainerManager: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting StorageContainerManager
STARTUP_MSG: host = 
ctr-e138-1518143905142-510793-01-04.hwx.site/172.27.79.197
STARTUP_MSG: args = []
STARTUP_MSG: version = 3.3.0-SNAPSHOT
STARTUP_MSG: classpath = 

[jira] [Created] (HDDS-609) Mapreduce example fails with Allocate block failed, error:INTERNAL_ERROR

2018-10-09 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-609:
-

 Summary: Mapreduce example fails with Allocate block failed, 
error:INTERNAL_ERROR
 Key: HDDS-609
 URL: https://issues.apache.org/jira/browse/HDDS-609
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


{code:java}
-bash-4.2$ /usr/hdp/current/hadoop-client/bin/hadoop jar 
/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
wordcount /tmp/mr_jobs/input/ o3://bucket2.volume2/mr_job5
18/10/09 23:37:07 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:37:08 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:37:08 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:37:08 INFO client.AHSProxy: Connecting to Application History 
server at ctr-e138-1518143905142-510793-01-04.hwx.site/172.27.79.197:10200
18/10/09 23:37:08 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
to rm2
18/10/09 23:37:09 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding 
for path: /user/hdfs/.staging/job_1539125785626_0007
18/10/09 23:37:09 INFO input.FileInputFormat: Total input files to process : 1
18/10/09 23:37:09 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
18/10/09 23:37:09 INFO lzo.LzoCodec: Successfully loaded & initialized 
native-lzo library [hadoop-lzo rev 5d6248d8d690f8456469979213ab2e9993bfa2e9]
18/10/09 23:37:09 INFO mapreduce.JobSubmitter: number of splits:1
18/10/09 23:37:09 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
job_1539125785626_0007
18/10/09 23:37:09 INFO mapreduce.JobSubmitter: Executing with tokens: []
18/10/09 23:37:10 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:37:10 INFO conf.Configuration: found resource resource-types.xml at 
file:/etc/hadoop/3.0.3.0-63/0/resource-types.xml
18/10/09 23:37:10 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:37:10 INFO impl.YarnClientImpl: Submitted application 
application_1539125785626_0007
18/10/09 23:37:10 INFO mapreduce.Job: The url to track the job: 
http://ctr-e138-1518143905142-510793-01-05.hwx.site:8088/proxy/application_1539125785626_0007/
18/10/09 23:37:10 INFO mapreduce.Job: Running job: job_1539125785626_0007
18/10/09 23:37:17 INFO mapreduce.Job: Job job_1539125785626_0007 running in 
uber mode : false
18/10/09 23:37:17 INFO mapreduce.Job: map 0% reduce 0%
18/10/09 23:37:24 INFO mapreduce.Job: map 100% reduce 0%
18/10/09 23:37:29 INFO mapreduce.Job: Task Id : 
attempt_1539125785626_0007_r_000000_0, Status : FAILED
Error: java.io.IOException: Allocate block failed, error:INTERNAL_ERROR
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.allocateBlock(OzoneManagerProtocolClientSideTranslatorPB.java:576)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.allocateNewBlock(ChunkGroupOutputStream.java:475)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.handleWrite(ChunkGroupOutputStream.java:271)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.write(ChunkGroupOutputStream.java:250)
at 
org.apache.hadoop.fs.ozone.OzoneFSOutputStream.write(OzoneFSOutputStream.java:47)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at 
org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.writeObject(TextOutputFormat.java:78)
at 
org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.write(TextOutputFormat.java:93)
at 
org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:559)
at 
org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at 
org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105)
at org.apache.hadoop.examples.WordCount$IntSumReducer.reduce(WordCount.java:64)
at org.apache.hadoop.examples.WordCount$IntSumReducer.reduce(WordCount.java:52)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:628)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:390)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)

18/10/09 23:37:35 INFO mapreduce.Job: Task Id : 
attempt_1539125785626_0007_r_000000_1, Status : FAILED
Error: java.io.IOException: Allocate block failed, error:INTERNAL_ERROR
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.allocateBlock(OzoneManagerProtocolClientSideTranslatorPB.java:576)
at 

[jira] [Created] (HDDS-608) Mapreduce example fails with Access denied for user hdfs. Superuser privilege is required

2018-10-09 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-608:
-

 Summary: Mapreduce example fails with Access denied for user hdfs. 
Superuser privilege is required
 Key: HDDS-608
 URL: https://issues.apache.org/jira/browse/HDDS-608
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


Right now only administrators can submit an MR job. All other users, 
including hdfs, will fail with the below error:
{code:java}
-bash-4.2$ ./ozone sh bucket create /volume2/bucket2
2018-10-09 23:03:46,399 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
2018-10-09 23:03:47,473 INFO rpc.RpcClient: Creating Bucket: volume2/bucket2, 
with Versioning false and Storage Type set to DISK
-bash-4.2$ /usr/hdp/current/hadoop-client/bin/hadoop jar 
/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
wordcount /tmp/mr_jobs/input/ o3://bucket2.volume2/mr_job
18/10/09 23:04:08 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:04:10 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:04:10 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:04:10 INFO client.AHSProxy: Connecting to Application History 
server at ctr-e138-1518143905142-510793-01-04.hwx.site/172.27.79.197:10200
18/10/09 23:04:10 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
to rm2
18/10/09 23:04:10 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding 
for path: /user/hdfs/.staging/job_1539125785626_0003
18/10/09 23:04:11 INFO input.FileInputFormat: Total input files to process : 1
18/10/09 23:04:11 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
18/10/09 23:04:11 INFO lzo.LzoCodec: Successfully loaded & initialized 
native-lzo library [hadoop-lzo rev 5d6248d8d690f8456469979213ab2e9993bfa2e9]
18/10/09 23:04:11 INFO mapreduce.JobSubmitter: number of splits:1
18/10/09 23:04:12 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
job_1539125785626_0003
18/10/09 23:04:12 INFO mapreduce.JobSubmitter: Executing with tokens: []
18/10/09 23:04:12 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:04:12 INFO conf.Configuration: found resource resource-types.xml at 
file:/etc/hadoop/3.0.3.0-63/0/resource-types.xml
18/10/09 23:04:12 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:04:12 INFO impl.YarnClientImpl: Submitted application 
application_1539125785626_0003
18/10/09 23:04:12 INFO mapreduce.Job: The url to track the job: 
http://ctr-e138-1518143905142-510793-01-05.hwx.site:8088/proxy/application_1539125785626_0003/
18/10/09 23:04:12 INFO mapreduce.Job: Running job: job_1539125785626_0003
18/10/09 23:04:22 INFO mapreduce.Job: Job job_1539125785626_0003 running in 
uber mode : false
18/10/09 23:04:22 INFO mapreduce.Job: map 0% reduce 0%
18/10/09 23:04:30 INFO mapreduce.Job: map 100% reduce 0%
18/10/09 23:04:36 INFO mapreduce.Job: Task Id : 
attempt_1539125785626_0003_r_00_0, Status : FAILED
Error: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Access 
denied for user hdfs. Superuser privilege is required.
at 
org.apache.hadoop.hdds.scm.server.StorageContainerManager.checkAdminAccess(StorageContainerManager.java:830)
at 
org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(SCMClientProtocolServer.java:190)
at 
org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolServerSideTranslatorPB.java:128)
at 
org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos$StorageContainerLocationProtocolService$2.callBlockingMethod(StorageContainerLocationProtocolProtos.java:12392)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)

at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1497)
at org.apache.hadoop.ipc.Client.call(Client.java:1443)
at org.apache.hadoop.ipc.Client.call(Client.java:1353)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy19.getContainerWithPipeline(Unknown Source)
at 
org.apache.hadoop.hdds.scm.protocolPB.StorageContainerLocationProtocolClientSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolClientSideTranslatorPB.java:156)
at 

[jira] [Created] (HDDS-600) Mapreduce example fails with java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported character

2018-10-09 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-600:
-

 Summary: Mapreduce example fails with 
java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported 
character
 Key: HDDS-600
 URL: https://issues.apache.org/jira/browse/HDDS-600
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


Set up a Hadoop cluster where Ozone is also installed. Ozone can be referenced 
via o3://xx.xx.xx.xx:9889
{code:java}
[root@ctr-e138-1518143905142-510793-01-02 ~]# ozone sh bucket list 
o3://xx.xx.xx.xx:9889/volume1/
2018-10-09 07:21:24,624 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
[ {
"volumeName" : "volume1",
"bucketName" : "bucket1",
"createdOn" : "Tue, 09 Oct 2018 06:48:02 GMT",
"acls" : [ {
"type" : "USER",
"name" : "root",
"rights" : "READ_WRITE"
}, {
"type" : "GROUP",
"name" : "root",
"rights" : "READ_WRITE"
} ],
"versioning" : "DISABLED",
"storageType" : "DISK"
} ]
[root@ctr-e138-1518143905142-510793-01-02 ~]# ozone sh key list 
o3://xx.xx.xx.xx:9889/volume1/bucket1
2018-10-09 07:21:54,500 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
[ {
"version" : 0,
"md5hash" : null,
"createdOn" : "Tue, 09 Oct 2018 06:58:32 GMT",
"modifiedOn" : "Tue, 09 Oct 2018 06:58:32 GMT",
"size" : 0,
"keyName" : "mr_job_dir"
} ]
[root@ctr-e138-1518143905142-510793-01-02 ~]#{code}
HDFS is also set up fine, as below:
{code:java}
[root@ctr-e138-1518143905142-510793-01-02 ~]# hdfs dfs -ls 
/tmp/mr_jobs/input/
Found 1 items
-rw-r--r-- 3 root hdfs 215755 2018-10-09 06:37 
/tmp/mr_jobs/input/wordcount_input_1.txt
[root@ctr-e138-1518143905142-510793-01-02 ~]#{code}
Now try to run the MapReduce example job against Ozone (o3):
{code:java}
[root@ctr-e138-1518143905142-510793-01-02 ~]# 
/usr/hdp/current/hadoop-client/bin/hadoop jar 
/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
wordcount /tmp/mr_jobs/input/ 
o3://xx.xx.xx.xx:9889/volume1/bucket1/mr_job_dir/output
18/10/09 07:15:38 INFO conf.Configuration: Removed undeclared tags:
java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported 
character : :
at 
org.apache.hadoop.hdds.scm.client.HddsClientUtils.verifyResourceName(HddsClientUtils.java:143)
at 
org.apache.hadoop.ozone.client.rpc.RpcClient.getVolumeDetails(RpcClient.java:231)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
at com.sun.proxy.$Proxy16.getVolumeDetails(Unknown Source)
at org.apache.hadoop.ozone.client.ObjectStore.getVolume(ObjectStore.java:92)
at 
org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:121)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
at 
org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.setOutputPath(FileOutputFormat.java:178)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:85)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
18/10/09 07:15:39 INFO conf.Configuration: Removed undeclared tags:
[root@ctr-e138-1518143905142-510793-01-02 ~]#
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org

[jira] [Created] (HDDS-590) Add unit test for HDDS-583

2018-10-08 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-590:
-

 Summary: Add unit test for HDDS-583
 Key: HDDS-590
 URL: https://issues.apache.org/jira/browse/HDDS-590
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-583) SCM returns zero as the return code, even when invalid options are passed

2018-10-05 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-583:
-

 Summary: SCM returns zero as the return code, even when invalid 
options are passed
 Key: HDDS-583
 URL: https://issues.apache.org/jira/browse/HDDS-583
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


While testing HDDS-564, I found that SCM returns zero as the exit code even 
when invalid options are passed. In StorageContainerManager.java, see the 
code below:
{code:java}
private static StartupOption parseArguments(String[] args) {
  int argsLen = (args == null) ? 0 : args.length;
  StartupOption startOpt = StartupOption.HELP;
{code}
Here startOpt is initialized to HELP, so even when invalid options are passed, 
parseArguments returns HELP by default. This causes the exit code to be 0.

Ideally, startOpt should be initialized to null, which would let the command 
return a non-zero exit code when the options are invalid:
{code:java}
StartupOption startOpt = null;{code}
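
For illustration, a minimal sketch of how a null startOpt can translate into a 
non-zero exit code (the main-method shape is an assumption for illustration, 
not the actual StorageContainerManager source):
{code:java}
import java.util.Arrays;

// Hypothetical sketch (not the actual source): a null result from
// parseArguments is turned into a non-zero process exit code.
public static void main(String[] args) {
  StartupOption startOpt = parseArguments(args);
  if (startOpt == null) {
    System.err.println("Invalid startup option: " + Arrays.toString(args));
    System.exit(1); // non-zero exit code for invalid options
  }
  // ... continue with normal startup ...
}
{code}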



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-01 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-564:
-

 Summary: Update docker-hadoop-runner branch to reflect changes 
done in HDDS-490
 Key: HDDS-564
 URL: https://issues.apache.org/jira/browse/HDDS-564
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Namit Maheshwari


starter.sh needs to be modified to reflect the changes done in HDDS-490

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-549) Documentation for key rename is missing in keycommands.md

2018-09-25 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-549:
-

 Summary: Documentation for key rename is missing in keycommands.md
 Key: HDDS-549
 URL: https://issues.apache.org/jira/browse/HDDS-549
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-541) ozone volume quota is not honored

2018-09-22 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-541:
-

 Summary: ozone volume quota is not honored
 Key: HDDS-541
 URL: https://issues.apache.org/jira/browse/HDDS-541
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


Create a volume with just 1 MB as quota
{code:java}
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh volume create 
--quota=1MB --user=root /hive
2018-09-23 02:10:11,283 [main] INFO - Creating Volume: hive, with root as owner 
and quota set to 1048576 bytes.
{code}
Now create a bucket and put a key bigger than 1 MB into the bucket
{code:java}
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh bucket create 
/hive/bucket1
2018-09-23 02:10:38,003 [main] INFO - Creating Bucket: hive/bucket1, with 
Versioning false and Storage Type set to DISK
[root@ctr-e138-1518143905142-481027-01-02 bin]# ls -l 
../../ozone-0.3.0-SNAPSHOT.tar.gz
-rw-r--r-- 1 root root 165903437 Sep 21 13:16 ../../ozone-0.3.0-SNAPSHOT.tar.gz
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key put 
/hive/ozone-0.3.0-SNAPSHOT.tar.gz ../../ozone-0.3.0-SNAPSHOT.tar.gz
volume/bucket/key name required in putKey
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key put 
/hive/bucket1/ozone-0.3.0-SNAPSHOT.tar.gz ../../ozone-0.3.0-SNAPSHOT.tar.gz
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key info 
/hive/bucket1/ozone-0.3.0-SNAPSHOT.tar.gz
{
"version" : 0,
"md5hash" : null,
"createdOn" : "Sun, 23 Sep 2018 02:13:02 GMT",
"modifiedOn" : "Sun, 23 Sep 2018 02:13:08 GMT",
"size" : 165903437,
"keyName" : "ozone-0.3.0-SNAPSHOT.tar.gz",
"keyLocations" : [ {
"containerID" : 2,
"localID" : 100772661343420416,
"length" : 134217728,
"offset" : 0
}, {
"containerID" : 3,
"localID" : 100772661661007873,
"length" : 31685709,
"offset" : 0
} ]
}{code}
It was able to put a 165 MB file on a volume with just a 1 MB quota.
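
For context, a minimal sketch of the kind of quota check that appears to be 
missing from the write path (all names here are assumptions for illustration, 
not the actual Ozone source):
{code:java}
import java.io.IOException;

// Hypothetical volume-quota check on key allocation (illustrative only).
static void checkVolumeQuota(long quotaBytes, long usedBytes,
    long requestedBytes) throws IOException {
  if (usedBytes + requestedBytes > quotaBytes) {
    throw new IOException("Quota exceeded: volume quota is " + quotaBytes
        + " bytes, " + usedBytes + " bytes used, " + requestedBytes
        + " bytes requested");
  }
}
{code}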



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-539) ozone datanode ignores the invalid options

2018-09-21 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-539:
-

 Summary: ozone datanode ignores the invalid options
 Key: HDDS-539
 URL: https://issues.apache.org/jira/browse/HDDS-539
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


The ozone datanode command starts the datanode and silently ignores invalid 
options; only help is handled:
{code:java}
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone datanode -help
Starts HDDS Datanode
{code}
For all other invalid options, it simply ignores them and starts the DN, as below:
{code:java}
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone datanode -ABC
2018-09-22 00:59:34,462 [main] INFO - STARTUP_MSG:
/
STARTUP_MSG: Starting HddsDatanodeService
STARTUP_MSG: host = 
ctr-e138-1518143905142-481027-01-02.hwx.site/172.27.54.20
STARTUP_MSG: args = [-ABC]
STARTUP_MSG: version = 3.2.0-SNAPSHOT
STARTUP_MSG: classpath = 

[jira] [Created] (HDDS-538) ozone scmcli is broken

2018-09-21 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-538:
-

 Summary: ozone scmcli is broken
 Key: HDDS-538
 URL: https://issues.apache.org/jira/browse/HDDS-538
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


None of the commands below work for scmcli:
{code:java}
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone scmcli list
Missing required option '--start='
Usage: ozone scmcli list [-hV] [-c=] -s=
List containers
-c, --count= Maximum number of containers to list
Default: 20
-h, --help Show this help message and exit.
-s, --start= Container id to start the iteration
-V, --version Print version information and exit.
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone scmcli list -c=1
Missing required option '--start='
Usage: ozone scmcli list [-hV] [-c=] -s=
List containers
-c, --count= Maximum number of containers to list
Default: 20
-h, --help Show this help message and exit.
-s, --start= Container id to start the iteration
-V, --version Print version information and exit.
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone scmcli list -c=1 -s
Missing required parameter for option '--start' ()
Usage: ozone scmcli list [-hV] [-c=] -s=
List containers
-c, --count= Maximum number of containers to list
Default: 20
-h, --help Show this help message and exit.
-s, --start= Container id to start the iteration
-V, --version Print version information and exit.
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone scmcli -s
Unknown option: -s
Possible solutions: --scm, --set
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone scmcli --start
Unknown option: --start
Usage: ozone scmcli [-hV] [--verbose] [--scm=] [-D=]...
[COMMAND]
Developer tools to handle SCM specific operations.
--scm= The destination scm (host:port)
--verbose More verbose output. Show the stack trace of the errors.
-D, --set=

-h, --help Show this help message and exit.
-V, --version Print version information and exit.
Commands:
list List containers
info Show information about a specific container
delete Delete container
create Create container
close close container
[root@ctr-e138-1518143905142-481027-01-02 bin]#
{code}
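
The error messages above follow picocli conventions (the Ozone shell uses 
picocli, as the HDDS-536 stack trace below shows). One possible fix is to make 
--start optional; a hedged sketch of such an option declaration (the field 
name is an assumption):
{code:java}
import picocli.CommandLine.Option;

// Hypothetical option declaration (illustrative only): dropping the
// "required" constraint and giving --start a default would let
// "ozone scmcli list" run without arguments.
@Option(names = {"-s", "--start"},
    description = "Container id to start the iteration")
private long startId = 0; // optional; defaults to the first container
{code}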



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-537) ozone sh vol complains for valid length even when the length is valid for 3 characters

2018-09-21 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-537:
-

 Summary: ozone sh vol complains for valid length even when the 
length is valid for 3 characters
 Key: HDDS-537
 URL: https://issues.apache.org/jira/browse/HDDS-537
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


{code:java}
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh vol info abc
Bucket or Volume length is illegal, valid length is 3-63 characters
{code}
Here the name is already 3 characters long, yet it still throws an error 
saying the valid length is 3-63 characters.
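
The symptom is consistent with an off-by-one in the length check. A hedged 
sketch of what such a bug could look like (assumed names; not the actual 
HddsClientUtils source):
{code:java}
// Hypothetical validation with an off-by-one bug (illustrative only):
// "length() <= 3" wrongly rejects valid 3-character names; the correct
// comparison would be "length() < 3".
static void verifyNameLength(String name) {
  if (name.length() <= 3 || name.length() > 63) { // bug: should be < 3
    throw new IllegalArgumentException(
        "Bucket or Volume length is illegal, valid length is 3-63 characters");
  }
}
{code}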



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-536) ozone sh throws exception and show on command line for invalid input

2018-09-21 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-536:
-

 Summary: ozone sh throws exception and show on command line for 
invalid input
 Key: HDDS-536
 URL: https://issues.apache.org/jira/browse/HDDS-536
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


{code:java}
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh vol info o3://as
2018-09-22 00:06:03,123 [main] ERROR - Couldn't create protocol class 
org.apache.hadoop.ozone.client.rpc.RpcClient exception:
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:291)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:153)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:109)
at org.apache.hadoop.ozone.web.ozShell.Handler.verifyURI(Handler.java:100)
at 
org.apache.hadoop.ozone.web.ozShell.volume.InfoVolumeHandler.call(InfoVolumeHandler.java:49)
at 
org.apache.hadoop.ozone.web.ozShell.volume.InfoVolumeHandler.call(InfoVolumeHandler.java:36)
at picocli.CommandLine.execute(CommandLine.java:919)
at picocli.CommandLine.access$700(CommandLine.java:104)
at picocli.CommandLine$RunLast.handle(CommandLine.java:1083)
at picocli.CommandLine$RunLast.handle(CommandLine.java:1051)
at 
picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:959)
at picocli.CommandLine.parseWithHandlers(CommandLine.java:1242)
at picocli.CommandLine.parseWithHandler(CommandLine.java:1181)
at org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:61)
at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:52)
at org.apache.hadoop.ozone.web.ozShell.Shell.main(Shell.java:77)
Caused by: java.net.UnknownHostException: Invalid host name: local host is: 
(unknown); destination host is: "as":9889; java.net.UnknownHostException; For 
more details see: http://wiki.apache.org/hadoop/UnknownHost
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:768)
at org.apache.hadoop.ipc.Client$Connection.(Client.java:449)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1552)
at org.apache.hadoop.ipc.Client.call(Client.java:1403)
at org.apache.hadoop.ipc.Client.call(Client.java:1367)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy10.getServiceList(Unknown Source)
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.getServiceList(OzoneManagerProtocolClientSideTranslatorPB.java:751)
at 
org.apache.hadoop.ozone.client.rpc.RpcClient.getScmAddressForClient(RpcClient.java:154)
at org.apache.hadoop.ozone.client.rpc.RpcClient.(RpcClient.java:126)
... 21 more
Caused by: java.net.UnknownHostException
at org.apache.hadoop.ipc.Client$Connection.(Client.java:450)
... 30 more
Invalid host name: local host is: (unknown); destination host is: "as":9889; 
java.net.UnknownHostException; For more details see: 
http://wiki.apache.org/hadoop/UnknownHost
[root@ctr-e138-1518143905142-481027-01-02 bin]#
{code}
Ideally, it should just print a concise error, like HDFS does below:
{code:java}
[hrt_qa@ctr-e138-1518143905142-483670-01-02 hadoopqe]$ hdfs dfs -ls 
s3a://namit54/
18/09/22 00:31:53 INFO impl.MetricsConfig: Loaded properties from 
hadoop-metrics2.properties
18/09/22 00:31:53 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period 
at 10 second(s).
18/09/22 00:31:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
started
ls: Bucket namit54 does not exist
[hrt_qa@ctr-e138-1518143905142-483670-01-02 hadoopqe]$
{code}
and not print the entire stack trace. A sketch of such a handler follows.
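
A minimal sketch of a top-level handler that would give the concise behavior 
shown above (runCommand is a hypothetical dispatcher, not the actual ozone sh 
entry point):
{code:java}
// Hypothetical top-level handler (illustrative only): print a one-line
// message instead of dumping the full stack trace to the terminal.
public static void main(String[] args) {
  try {
    runCommand(args); // assumed dispatcher for the sh subcommands
  } catch (Exception e) {
    System.err.println("Error: " + e.getMessage());
    System.exit(1);
  }
}
{code}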



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-535) Ozone Manager tries to start on non OM host

2018-09-21 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-535:
-

 Summary: Ozone Manager tries to start on non OM host
 Key: HDDS-535
 URL: https://issues.apache.org/jira/browse/HDDS-535
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


Steps:
 # Start up an Ozone multi-node cluster
 # Set the values for ozone.om.address, ozone.scm.names etc. in ozone-site.xml
 # Make sure they point to different nodes
 # Start all the services on all hosts - OM, SCM, Datanodes
 # Now try to run the ozone om command on the SCM host. It tries to start OM 
and fails as below:
{code:java}
[root@ctr-e138-1518143905142-481027-01-04 bin]# ./ozone om
2018-09-21 23:57:13,612 [main] INFO - STARTUP_MSG:
/
STARTUP_MSG: Starting OzoneManager
STARTUP_MSG: host = 
ctr-e138-1518143905142-481027-01-04.hwx.site/172.27.28.66
STARTUP_MSG: args = []
STARTUP_MSG: version = 3.2.0-SNAPSHOT
STARTUP_MSG: classpath = 

[jira] [Created] (HDDS-534) ozone jmxget does not work

2018-09-21 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-534:
-

 Summary: ozone jmxget does not work
 Key: HDDS-534
 URL: https://issues.apache.org/jira/browse/HDDS-534
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


{code:java}
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone jmxget
ERROR: jmxget is not COMMAND nor fully qualified CLASSNAME.
Usage: ozone [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]

OPTIONS is none or any of:

--buildpaths attempt to add class files from build tree
--config dir Hadoop config directory
--daemon (start|status|stop) operate on a daemon
--debug turn on shell script debug mode
--help usage information
--hostnames list[,of,host,names] hosts to use in worker mode
--hosts filename list of hosts to use in worker mode
--loglevel level set the log4j level for this command
--workers turn on worker mode

SUBCOMMAND is one of:


Admin Commands:

jmxget get JMX exported values from NameNode or DataNode.

Client Commands:

classpath prints the class path needed to get the hadoop jar and the required 
libraries
envvars display computed Hadoop environment variables
freon runs an ozone data generator
fs run a filesystem command on Ozone file system. Equivalent to 'hadoop fs'
genconf generate minimally required ozone configs and output to ozone-site.xml 
in specified path
genesis runs a collection of ozone benchmarks to help with tuning.
getozoneconf get ozone config values from configuration
noz ozone debug tool, convert ozone metadata into relational data
scmcli run the CLI of the Storage Container Manager
sh command line interface for object store operations
version print the version

Daemon Commands:

datanode run a HDDS datanode
om Ozone Manager
scm run the Storage Container Manager service

SUBCOMMAND may print help when invoked w/o parameters or with -h.
{code}
As the output above shows, jmxget is listed as an admin command, but it does not work.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-533) includeFile and excludeFile options does not work for ozone getozoneconf CLI

2018-09-21 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-533:
-

 Summary: includeFile and excludeFile options does not work for 
ozone getozoneconf CLI
 Key: HDDS-533
 URL: https://issues.apache.org/jira/browse/HDDS-533
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


{code:java}
ctr-e138-1518143905142-481027-01-04.hwx.site
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone getozoneconf 
-excludeFile
ozone getconf is utility for getting configuration information from the config 
file.

ozone getconf
[-includeFile]  gets the include file path that defines the datanodes that can 
join the cluster.
[-excludeFile]  gets the exclude file path that defines the datanodes that need 
to decommissioned.
[-ozonemanagers]gets list of Ozone Manager nodes in the cluster
[-storagecontainermanagers] gets list of ozone storage container manager 
nodes in the cluster
[-confKey [key]]gets a specific key from the configuration

[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone getozoneconf 
-includeFile
ozone getconf is utility for getting configuration information from the config 
file.

ozone getconf
[-includeFile]  gets the include file path that defines the datanodes that can 
join the cluster.
[-excludeFile]  gets the exclude file path that defines the datanodes that need 
to decommissioned.
[-ozonemanagers]gets list of Ozone Manager nodes in the cluster
[-storagecontainermanagers] gets list of ozone storage container manager 
nodes in the cluster
[-confKey [key]]gets a specific key from the configuration

{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-532) ozone getozoneconf help shows the name as getconf

2018-09-21 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-532:
-

 Summary: ozone getozoneconf help shows the name as getconf
 Key: HDDS-532
 URL: https://issues.apache.org/jira/browse/HDDS-532
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


{code:java}
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone getozoneconf -help
ozone getconf is utility for getting configuration information from the config 
file.

ozone getconf
[-includeFile]  gets the include file path that defines the datanodes that can 
join the cluster.
[-excludeFile]  gets the exclude file path that defines the datanodes that need 
to decommissioned.
[-ozonemanagers]gets list of Ozone Manager nodes in the cluster
[-storagecontainermanagers] gets list of ozone storage container manager 
nodes in the cluster
[-confKey [key]]gets a specific key from the configuration


Generic options supported are:
-conf  specify an application configuration file
-D  define a value for a given property
-fs  specify default filesystem URL to use, 
overrides 'fs.defaultFS' property from configurations.
-jt  specify a ResourceManager
-files  specify a comma-separated list of files to be copied to the 
map reduce cluster
-libjars  specify a comma-separated list of jar files to be included 
in the classpath
-archives  specify a comma-separated list of archives to be 
unarchived on the compute machines

The general command line syntax is:
command [genericOptions] [commandOptions]
{code}
The utility name is "getozoneconf", whereas the help shows the name as "getconf".
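
A hedged sketch of the fix (the constant name is an assumption; the point is 
only that the usage banner should name getozoneconf):
{code:java}
// Hypothetical usage banner fix (illustrative only): use the actual
// command name in the help text.
private static final String USAGE_PREFIX =
    "ozone getozoneconf is utility for getting configuration information"
    + " from the config file.\n";
{code}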

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-531) ozone version command prints some information twice

2018-09-21 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-531:
-

 Summary: ozone version command prints some information twice
 Key: HDDS-531
 URL: https://issues.apache.org/jira/browse/HDDS-531
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


{code:java}
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone version
[garbled Ozone ASCII-art banner]
0.3.0-SNAPSHOT(Arches)

Source code repository g...@github.com:apache/hadoop.git -r 
a2752779ac1545f5e0a52fce3cff02a7007e95fb
Compiled by nmaheshwari on 2018-09-21T10:52Z
Compiled with protoc 2.5.0
From source with checksum 11d8e28a7fa8994c5c73a39cfce5a87

Using HDDS 0.3.0-SNAPSHOT
Source code repository g...@github.com:apache/hadoop.git -r 
a2752779ac1545f5e0a52fce3cff02a7007e95fb
Compiled by nmaheshwari on 2018-09-21T10:52Z
Compiled with protoc 2.5.0
From source with checksum 468f30b1a9935c6bcbf73aabdc7e2aca
{code}
The following lines are printed twice:
{code:java}
Source code repository g...@github.com:apache/hadoop.git -r 
a2752779ac1545f5e0a52fce3cff02a7007e95fb
Compiled by nmaheshwari on 2018-09-21T10:52Z
Compiled with protoc 2.5.0{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-492) Add more unit tests to ozonefs robot framework

2018-09-17 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-492:
-

 Summary: Add more unit tests to ozonefs robot framework
 Key: HDDS-492
 URL: https://issues.apache.org/jira/browse/HDDS-492
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


Currently there are only a couple of tests inside ozonefs.robot.

We should add more unit tests to it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-490) Improve om and scm start up options

2018-09-17 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-490:
-

 Summary: Improve om and scm start up options 
 Key: HDDS-490
 URL: https://issues.apache.org/jira/browse/HDDS-490
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


I propose the following changes:
 # Rename createObjectStore to format.
 # Change the flag to use --createObjectStore instead of -createObjectStore. 
The same applies to the other scm and om startup options.
 # Refuse to format an existing object store. If a user runs:
{code:java}
ozone om -createObjectStore{code}
and there is already an object store, it should print a warning message and 
exit the process (see the sketch below).
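
A hedged sketch of the guard in item 3 (the directory name and method shape 
are assumptions, not the actual OM code):
{code:java}
import java.io.File;

// Hypothetical guard (illustrative only): refuse to format when an
// object store already exists on disk.
static void checkNotAlreadyFormatted(File omMetadataDir) {
  String[] contents = omMetadataDir.list();
  if (omMetadataDir.exists() && contents != null && contents.length > 0) {
    System.err.println("Object store already exists at " + omMetadataDir
        + "; refusing to format.");
    System.exit(1);
  }
}
{code}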



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-486) Update Ozone Documentation to mention that “scm -init” and “om -createObjectStore” should be executed only once

2018-09-17 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari resolved HDDS-486.
---
Resolution: Fixed

Duplicate of HDDS-483

> Update Ozone Documentation to mention that “scm -init” and “om 
> -createObjectStore” should be executed only once
> ---
>
> Key: HDDS-486
> URL: https://issues.apache.org/jira/browse/HDDS-486
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
>
> Update Ozone Documentation to mention that “scm -init” and “om 
> -createObjectStore” should be executed only once, and only if they have not 
> been run before; otherwise data will be lost.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-485) Update Ozone Java API documentation to fix complete example

2018-09-17 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari resolved HDDS-485.
---
Resolution: Fixed

Duplicate of HDDS-483

> Update Ozone Java API documentation to fix complete example
> ---
>
> Key: HDDS-485
> URL: https://issues.apache.org/jira/browse/HDDS-485
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-485.001.patch
>
>
> The complete example misses a line to get the ObjectStore from the client:
> {code:java}
> ObjectStore objectStore = ozClient.getObjectStore();{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-484) Update ozone File system documentation to not overwrite HADOOP_CLASSPATH

2018-09-17 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari resolved HDDS-484.
---
Resolution: Duplicate

Duplicate of HDDS-483

> Update ozone File system documentation to not overwrite HADOOP_CLASSPATH
> 
>
> Key: HDDS-484
> URL: https://issues.apache.org/jira/browse/HDDS-484
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-484.001.patch
>
>
> Update ozone File system documentation to not overwrite HADOOP_CLASSPATH



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-486) Update Ozone Documentation to mention that “scm -init” and “om -createObjectStore” should be executed only once

2018-09-17 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-486:
-

 Summary: Update Ozone Documentation to mention that “scm -init” 
and “om -createObjectStore” should be executed only once
 Key: HDDS-486
 URL: https://issues.apache.org/jira/browse/HDDS-486
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


Update Ozone Documentation to mention that “scm -init” and “om 
-createObjectStore” should be executed only once.

They should be run only if they have not been run before; otherwise data will 
be lost.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-485) Update Ozone Java API documentation to fix complete example

2018-09-17 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-485:
-

 Summary: Update Ozone Java API documentation to fix complete 
example
 Key: HDDS-485
 URL: https://issues.apache.org/jira/browse/HDDS-485
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


The complete example misses a line to get the ObjectStore from the client:
{code:java}
ObjectStore objectStore = ozClient.getObjectStore();{code}
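
For context, the corrected flow would look roughly like this (a sketch based 
on the public Ozone client API; configuration details omitted):
{code:java}
// Sketch of the corrected "complete example" flow (illustrative).
OzoneConfiguration conf = new OzoneConfiguration();
OzoneClient ozClient = OzoneClientFactory.getRpcClient(conf);
ObjectStore objectStore = ozClient.getObjectStore(); // the missing line
objectStore.createVolume("volume1");
ozClient.close();
{code}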



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-484) Update ozone File system documentation to not overwrite HADOOP_CLASSPATH

2018-09-17 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-484:
-

 Summary: Update ozone File system documentation to not overwrite 
HADOOP_CLASSPATH
 Key: HDDS-484
 URL: https://issues.apache.org/jira/browse/HDDS-484
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


Update ozone File system documentation to not overwrite HADOOP_CLASSPATH



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-483) Update ozone File system documentation to use 'sh' instead of 'oz'

2018-09-17 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-483:
-

 Summary: Update ozone File system documentation to use 'sh' 
instead of 'oz'
 Key: HDDS-483
 URL: https://issues.apache.org/jira/browse/HDDS-483
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-457) ozone om -help, scm -help commands cant run unless the service is stopped

2018-09-13 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-457:
-

 Summary: ozone om -help, scm -help commands cant run unless the 
service is stopped 
 Key: HDDS-457
 URL: https://issues.apache.org/jira/browse/HDDS-457
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


{code:java}
➜ ozone-0.3.0-SNAPSHOT bin/ozone om -help
om is running as process 89242. Stop it first.

➜ ozone-0.3.0-SNAPSHOT bin/ozone scm -help
scm is running as process 73361. Stop it first.
{code}
It runs fine once the service is stopped
{code:java}
➜ ozone-0.3.0-SNAPSHOT bin/ozone --daemon stop scm
➜ ozone-0.3.0-SNAPSHOT bin/ozone scm -help
Usage:
ozone scm [genericOptions] [ -init [ -clusterid  ] ]
ozone scm [genericOptions] [ -genclusterid ]
ozone scm [ -help ]


Generic options supported are:
-conf  specify an application configuration file
-D  define a value for a given property
-fs  specify default filesystem URL to use, 
overrides 'fs.defaultFS' property from configurations.
-jt  specify a ResourceManager
-files  specify a comma-separated list of files to be copied to the 
map reduce cluster
-libjars  specify a comma-separated list of jar files to be included 
in the classpath
-archives  specify a comma-separated list of archives to be 
unarchived on the compute machines

The general command line syntax is:
command [genericOptions] [commandOptions]

{code}
 

Ideally, the help command should run fine without the service having to be stopped.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-415) bin/ozone om with incorrect argument first logs all the STARTUP_MSG

2018-09-07 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-415:
-

 Summary:  bin/ozone om with incorrect argument first logs all the 
STARTUP_MSG
 Key: HDDS-415
 URL: https://issues.apache.org/jira/browse/HDDS-415
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


bin/ozone om with an incorrect argument first logs the full STARTUP_MSG:
{code:java}

➜ ozone-0.2.1-SNAPSHOT bin/ozone om -hgfj
2018-09-07 12:56:12,391 [main] INFO - STARTUP_MSG:
/
STARTUP_MSG: Starting OzoneManager
STARTUP_MSG: host = HW11469.local/10.22.16.67
STARTUP_MSG: args = [-hgfj]
STARTUP_MSG: version = 3.2.0-SNAPSHOT
STARTUP_MSG: classpath = 

[jira] [Created] (HDDS-414) sbin/stop-all.sh does not stop Ozone daemons

2018-09-07 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-414:
-

 Summary: sbin/stop-all.sh does not stop Ozone daemons
 Key: HDDS-414
 URL: https://issues.apache.org/jira/browse/HDDS-414
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


sbin/stop-all.sh does not stop Ozone daemons.

Please see below:
{code:java}

➜ ozone-0.2.1-SNAPSHOT jps
8896 Jps
8224 HddsDatanodeService
8162 OzoneManager
7701 StorageContainerManager
➜ ozone-0.2.1-SNAPSHOT pwd
/tmp/ozone-0.2.1-SNAPSHOT
➜ ozone-0.2.1-SNAPSHOT sbin/stop-all.sh
WARNING: Stopping all Apache Hadoop daemons as nmaheshwari in 10 seconds.
WARNING: Use CTRL-C to abort.
Stopping namenodes on [localhost]
localhost: ssh: connect to host localhost port 22: Connection refused
Stopping datanodes
localhost: ssh: connect to host localhost port 22: Connection refused
Stopping secondary namenodes [HW11469.local]
HW11469.local: ssh: connect to host hw11469.local port 22: Connection refused
2018-09-07 12:38:49,044 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
➜ ozone-0.2.1-SNAPSHOT jps
8224 HddsDatanodeService
8162 OzoneManager
7701 StorageContainerManager
9150 Jps
➜ ozone-0.2.1-SNAPSHOT
{code}
The Ozone daemon processes are still running even after sbin/stop-all.sh 
finishes executing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-413) Ozone freon help needs the Scm and OM running

2018-09-07 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-413:
-

 Summary: Ozone freon help needs the Scm and OM running
 Key: HDDS-413
 URL: https://issues.apache.org/jira/browse/HDDS-413
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


Ozone freon help needs SCM and OM to be running:
{code:java}
./ozone freon --help
2018-09-07 12:23:28,983 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
2018-09-07 12:23:30,203 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9862. Already tried 0 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2018-09-07 12:23:31,204 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9862. Already tried 1 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
^C⏎ 
HW11767 ~/t/o/bin> jps
52445
86095 Jps{code}

If SCM and OM are running, freon help works fine:
{code:java}
HW11767 ~/t/o/bin> /ozone freon --help
2018-09-07 12:30:18,535 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
Options supported are:
-numOfThreads    number of threads to be launched for the run.
-validateWrites do random validation of data written into 
ozone, only subset of data is validated.
-jsonDirdirectory where json is created.
-mode [online | offline]specifies the mode in which Freon should run.
-source    specifies the URL of s3 commoncrawl warc file 
to be used when the mode is online.
-numOfVolumes    specifies number of Volumes to be created in 
offline mode
-numOfBuckets    specifies number of Buckets to be created per 
Volume in offline mode
-numOfKeys   specifies number of Keys to be created per 
Bucket in offline mode
-keySize specifies the size of Key in bytes to be 
created in offline mode
-help   prints usage.{code}
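
A minimal sketch of the kind of fix this suggests: handle -help before 
creating any RPC client, so printing usage never needs a running SCM/OM (the 
entry-point shape and printUsage are assumptions, not the actual Freon code):
{code:java}
// Hypothetical entry point (illustrative only): short-circuit -help
// before any cluster connection is attempted.
public static void main(String[] args) throws Exception {
  for (String arg : args) {
    if ("-help".equals(arg) || "--help".equals(arg)) {
      printUsage(); // assumed local usage printer; no OM/SCM connection
      return;
    }
  }
  // ... only now create the client connection and run the data generator ...
}
{code}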
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13887) Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test

2018-08-30 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDFS-13887:
---

 Summary: Remove hadoop-ozone-filesystem dependency on 
hadoop-ozone-integration-test
 Key: HDFS-13887
 URL: https://issues.apache.org/jira/browse/HDFS-13887
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Namit Maheshwari


hadoop-ozone-filesystem has a dependency on hadoop-ozone-integration-test.

Ideally, filesystem modules should not depend on test modules.

This also causes problems when developing unit tests that need to instantiate 
an OzoneFileSystem object inside hadoop-ozone-integration-test, as that would 
create a circular dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-290) putKey is failing with KEY_ALLOCATION_ERROR

2018-07-24 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-290:
-

 Summary: putKey is failing with KEY_ALLOCATION_ERROR
 Key: HDDS-290
 URL: https://issues.apache.org/jira/browse/HDDS-290
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


1. List the buckets in Volume /namit
{code}
hadoop@288c0999be17:~$ ozone oz -listBucket /namit
2018-07-24 18:53:26 WARN  NativeCodeLoader:60 - Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
[ {
  "volumeName" : "namit",
  "bucketName" : "abc",
  "createdOn" : "Fri, 29 Jul +50529 22:02:39 GMT",
  "acls" : [ {
"type" : "USER",
"name" : "hadoop",
"rights" : "READ_WRITE"
  }, {
"type" : "GROUP",
"name" : "users",
"rights" : "READ_WRITE"
  } ],
  "versioning" : "DISABLED",
  "storageType" : "DISK"
}, {
  "volumeName" : "namit",
  "bucketName" : "hjk",
  "createdOn" : "Sat, 30 Jul +50529 10:37:24 GMT",
  "acls" : [ {
"type" : "USER",
"name" : "hadoop",
"rights" : "READ_WRITE"
  }, {
"type" : "GROUP",
"name" : "users",
"rights" : "READ_WRITE"
  } ],
  "versioning" : "DISABLED",
  "storageType" : "DISK"
} ]
{code}

2. Now list the keys in bucket /namit/abc
{code}
hadoop@288c0999be17:~$ ozone oz -listKey /namit/abc
2018-07-24 18:53:56 WARN  NativeCodeLoader:60 - Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
[ ]
{code}

3. Now try to put a key into the bucket. It fails as below:
{code}
hadoop@288c0999be17:~$ cat aa
hgfhjljkjhf
hadoop@288c0999be17:~$ ozone oz -putKey /namit/abc/aa -file aa
2018-07-24 18:54:19 WARN  NativeCodeLoader:60 - Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Command Failed : Create key failed, error:KEY_ALLOCATION_ERROR
hadoop@288c0999be17:~$
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-289) While creating bucket everything after '/' is ignored without any warning

2018-07-24 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-289:
-

 Summary: While creating bucket everything after '/' is ignored 
without any warning
 Key: HDDS-289
 URL: https://issues.apache.org/jira/browse/HDDS-289
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Affects Versions: 0.2.1
Reporter: Namit Maheshwari


Please see the example below. Here the user issues a command to create a 
bucket, where /namit is the volume.
{code}
hadoop@288c0999be17:~$ ozone oz -createBucket /namit/hjk/fgh
2018-07-24 00:30:52 WARN  NativeCodeLoader:60 - Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
2018-07-24 00:30:52 INFO  RpcClient:337 - Creating Bucket: namit/hjk, with 
Versioning false and Storage Type set to DISK
{code}
As seen above, it just ignored '/fgh'. There should be a warning or error 
message instead of silently ignoring everything after the second '/'; a 
validation sketch follows.
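
A hedged sketch of client-side validation that would reject, rather than 
silently drop, the extra path component (names are assumptions, not the 
actual Ozone client code):
{code:java}
// Hypothetical path validation (illustrative only): a bucket path must
// have exactly two components, /volume/bucket.
static void validateBucketPath(String path) {
  String[] parts = path.replaceFirst("^/", "").split("/");
  if (parts.length != 2 || parts[0].isEmpty() || parts[1].isEmpty()) {
    throw new IllegalArgumentException(
        "Expected /volume/bucket, got: " + path);
  }
}
{code}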




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13715) diskbalancer does not work if one of the blockpools are empty on a Federated cluster

2018-07-02 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDFS-13715:
---

 Summary: diskbalancer does not work if one of the blockpools are 
empty on a Federated cluster
 Key: HDFS-13715
 URL: https://issues.apache.org/jira/browse/HDFS-13715
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: diskbalancer
Reporter: Namit Maheshwari


Try to run diskbalancer when one of the blockpools is empty on a federated 
cluster.

The diskbalancer process runs and completes successfully within seconds, but 
actual disk balancing does not happen.

cc - [~bharatviswa], [~anu]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-10964) Add more unit tests for ACLs

2016-10-05 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDFS-10964:
---

 Summary: Add more unit tests for ACLs
 Key: HDFS-10964
 URL: https://issues.apache.org/jira/browse/HDFS-10964
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Namit Maheshwari
Assignee: Namit Maheshwari


This Jira proposes to add more unit tests to validate ACLs.
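
For illustration, one self-contained shape such a test could take (this 
exercises the public AclEntry spec parser; the test class and case are 
assumptions, not committed code):
{code:java}
import java.util.List;
import org.apache.hadoop.fs.permission.AclEntry;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical example of an added ACL unit test (illustrative only).
public class TestAclSpecParsing {
  @Test
  public void parsesUserEntry() {
    // Parse a user ACL spec string and check the principal name.
    List<AclEntry> entries = AclEntry.parseAclSpec("user:alice:rwx", true);
    assertEquals(1, entries.size());
    assertEquals("alice", entries.get(0).getName());
  }
}
{code}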



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org