[jira] [Updated] (AMBARI-22578) hive2 queries fails after adding any service to the cluster

2017-12-02 Thread Jaimin Jetly (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaimin Jetly updated AMBARI-22578:
--
Attachment: AMBARI-22578.patch

> hive2 queries fails after adding any service to the cluster
> ---
>
> Key: AMBARI-22578
> URL: https://issues.apache.org/jira/browse/AMBARI-22578
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Santhosh B Gowda
>Assignee: Jaimin Jetly
>Priority: Blocker
> Fix For: 2.6.1
>
> Attachments: AMBARI-22578.patch
>
>
> After adding the beacon component, hive2 queries fail with the exception below:
> {code}
>  select t_hour,count(t_hour) from time_dim group by t_hour;
> INFO  : Compiling 
> command(queryId=hive_20171117151724_678c0fad-bb02-4a07-9b5b-2f8c6ad34013): 
> select t_hour,count(t_hour) from time_dim group by t_hour
> INFO  : We are setting the hadoop caller context from 
> HIVE_SSN_ID:3d33e44f-5f12-40d1-a88c-8cb8e7841885 to 
> hive_20171117151724_678c0fad-bb02-4a07-9b5b-2f8c6ad34013
> INFO  : Semantic Analysis Completed
> INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:t_hour, 
> type:int, comment:null), FieldSchema(name:_c1, type:bigint, comment:null)], 
> properties:null)
> INFO  : Completed compiling 
> command(queryId=hive_20171117151724_678c0fad-bb02-4a07-9b5b-2f8c6ad34013); 
> Time taken: 0.131 seconds
> INFO  : We are resetting the hadoop caller context to 
> HIVE_SSN_ID:3d33e44f-5f12-40d1-a88c-8cb8e7841885
> INFO  : Setting caller context to query id 
> hive_20171117151724_678c0fad-bb02-4a07-9b5b-2f8c6ad34013
> INFO  : Executing 
> command(queryId=hive_20171117151724_678c0fad-bb02-4a07-9b5b-2f8c6ad34013): 
> select t_hour,count(t_hour) from time_dim group by t_hour
> INFO  : Query ID = hive_20171117151724_678c0fad-bb02-4a07-9b5b-2f8c6ad34013
> INFO  : Total jobs = 1
> INFO  : Launching Job 1 out of 1
> INFO  : Starting task [Stage-1:MAPRED] in serial mode
> INFO  : Tez session hasn't been created yet. Opening session
> INFO  : Dag name: select t_hour,count(t_hour) from ti...t_hour(Stage-1)
> INFO  : Dag submit failed due to Invalid TaskLaunchCmdOpts defined for Vertex 
> Map 1 : Invalid/conflicting GC options found, cmdOpts="-server 
> -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.3.0-222 -XX:+PrintGCDetails 
> -verbose:gc -XX:+PrintGCTimeStamps -XX:+UseNUMA -XX:+UseG1GC -XX:+ResizeTLAB 
> -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/grid/0/dumps/hive -server 
> -Xmx11469m -Djava.net.preferIPv4Stack=true -XX:NewRatio=8 -XX:+UseNUMA 
> -XX:+UseParallelGC -XX:+PrintGCDetails -verbose:gc -XX:+PrintGCTimeStamps 
> -Dlog4j.configuratorClass=org.apache.tez.common.TezLog4jConfigurator 
> -Dlog4j.configuration=tez-container-log4j.properties 
> -Dyarn.app.container.log.dir= -Dtez.root.logger=INFO,CLA " stack 
> trace: [org.apache.tez.dag.api.DAG.createDag(DAG.java:1009), 
> org.apache.tez.client.TezClientUtils.prepareAndCreateDAGPlan(TezClientUtils.java:720),
>  org.apache.tez.client.TezClient.submitDAGSession(TezClient.java:555), 
> org.apache.tez.client.TezClient.submitDAG(TezClient.java:522), 
> org.apache.hadoop.hive.ql.exec.tez.TezTask.submit(TezTask.java:548), 
> org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:198), 
> org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199), 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100), 
> org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1987), 
> org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1667), 
> org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1414), 
> org.apache.hadoop.hive.ql.Driver.run(Driver.java:1211), 
> org.apache.hadoop.hive.ql.Driver.run(Driver.java:1204), 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:242),
>  
> org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91),
>  
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:336),
>  java.security.AccessController.doPrivileged(Native Method), 
> javax.security.auth.Subject.doAs(Subject.java:422), 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866),
>  
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:350),
>  java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511), 
> java.util.concurrent.FutureTask.run(FutureTask.java:266), 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511), 
> java.util.concurrent.FutureTask.run(FutureTask.java:266), 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142),
>  
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617),
>  java.lang.Thread.run(Thread.java:745)] retrying...
> ERROR : Failed to execute tez graph.
> org.apache.tez.dag.api.TezUncheckedException: Invalid TaskLaunchCmdOpts defined for Vertex Map 1
> {code}
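The failure above is a JVM options conflict: the cmdOpts string concatenates the Hive-side Tez opts (which select -XX:+UseG1GC) with the cluster-default task opts (which select -XX:+UseParallelGC), and Tez rejects a task command line naming two garbage collectors. A minimal illustrative sketch of that kind of validation, in Python (the flag set and helper name are assumptions for illustration, not the actual Tez code):

{code}
# Illustrative sketch only: detect conflicting GC collector flags in a
# merged JVM opts string, analogous to the TaskLaunchCmdOpts check that
# fails above. The flag list and helper name are assumptions, not the
# real Tez implementation.
GC_COLLECTOR_FLAGS = {
    "-XX:+UseG1GC",
    "-XX:+UseParallelGC",
    "-XX:+UseConcMarkSweepGC",
    "-XX:+UseSerialGC",
}

def find_gc_collectors(cmd_opts):
    """Return every GC collector flag present in cmd_opts."""
    return sorted(tok for tok in cmd_opts.split() if tok in GC_COLLECTOR_FLAGS)

# The two halves below mirror the concatenation visible in the log:
# hive's tez opts followed by the cluster-default task launch opts.
opts = ("-server -XX:+UseG1GC -XX:+ResizeTLAB "
        "-Xmx11469m -XX:+UseParallelGC -XX:NewRatio=8")
collectors = find_gc_collectors(opts)
if len(collectors) > 1:
    # Tez fails DAG submission here with "Invalid/conflicting GC options"
    print("Conflicting GC options: %s" % ", ".join(collectors))
{code}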

[jira] [Updated] (AMBARI-22578) hive2 queries fails after adding any service to the cluster

2017-12-02 Thread Jaimin Jetly (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaimin Jetly updated AMBARI-22578:
--
Status: Patch Available  (was: Open)


[jira] [Updated] (AMBARI-22578) hive2 queries fails after adding any service

2017-12-02 Thread Jaimin Jetly (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaimin Jetly updated AMBARI-22578:
--
Affects Version/s: 2.6.0  (was: 2.6.1)


[jira] [Updated] (AMBARI-22578) hive2 queries fails after adding any service to the cluster

2017-12-02 Thread Jaimin Jetly (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaimin Jetly updated AMBARI-22578:
--
Summary: hive2 queries fails after adding any service to the cluster  (was: 
hive2 queries fails after adding any service)


[jira] [Updated] (AMBARI-22578) hive2 queries fails after adding any service

2017-12-02 Thread Jaimin Jetly (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaimin Jetly updated AMBARI-22578:
--
Fix Version/s: 2.6.1


[jira] [Updated] (AMBARI-22578) hive2 queries fails after adding any service

2017-12-02 Thread Jaimin Jetly (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaimin Jetly updated AMBARI-22578:
--
Summary: hive2 queries fails after adding any service  (was: hive2 queries 
fails after adding beacon component)

> hive2 queries fails after adding any service
> 
>
> Key: AMBARI-22578
> URL: https://issues.apache.org/jira/browse/AMBARI-22578
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.6.1
>Reporter: Santhosh B Gowda
>Assignee: Jaimin Jetly
>Priority: Blocker
> Fix For: 2.6.1
>
>
> After adding beacon component, hive2 queries fail with below exception
> {code}
>  select t_hour,count(t_hour) from time_dim group by t_hour;
> INFO  : Compiling 
> command(queryId=hive_20171117151724_678c0fad-bb02-4a07-9b5b-2f8c6ad34013): 
> select t_hour,count(t_hour) from time_dim group by t_hour
> INFO  : We are setting the hadoop caller context from 
> HIVE_SSN_ID:3d33e44f-5f12-40d1-a88c-8cb8e7841885 to 
> hive_20171117151724_678c0fad-bb02-4a07-9b5b-2f8c6ad34013
> INFO  : Semantic Analysis Completed
> INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:t_hour, 
> type:int, comment:null), FieldSchema(name:_c1, type:bigint, comment:null)], 
> properties:null)
> INFO  : Completed compiling 
> command(queryId=hive_20171117151724_678c0fad-bb02-4a07-9b5b-2f8c6ad34013); 
> Time taken: 0.131 seconds
> INFO  : We are resetting the hadoop caller context to 
> HIVE_SSN_ID:3d33e44f-5f12-40d1-a88c-8cb8e7841885
> INFO  : Setting caller context to query id 
> hive_20171117151724_678c0fad-bb02-4a07-9b5b-2f8c6ad34013
> INFO  : Executing 
> command(queryId=hive_20171117151724_678c0fad-bb02-4a07-9b5b-2f8c6ad34013): 
> select t_hour,count(t_hour) from time_dim group by t_hour
> INFO  : Query ID = hive_20171117151724_678c0fad-bb02-4a07-9b5b-2f8c6ad34013
> INFO  : Total jobs = 1
> INFO  : Launching Job 1 out of 1
> INFO  : Starting task [Stage-1:MAPRED] in serial mode
> INFO  : Tez session hasn't been created yet. Opening session
> INFO  : Dag name: select t_hour,count(t_hour) from ti...t_hour(Stage-1)
> INFO  : Dag submit failed due to Invalid TaskLaunchCmdOpts defined for Vertex 
> Map 1 : Invalid/conflicting GC options found, cmdOpts="-server 
> -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.3.0-222 -XX:+PrintGCDetails 
> -verbose:gc -XX:+PrintGCTimeStamps -XX:+UseNUMA -XX:+UseG1GC -XX:+ResizeTLAB 
> -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/grid/0/dumps/hive -server 
> -Xmx11469m -Djava.net.preferIPv4Stack=true -XX:NewRatio=8 -XX:+UseNUMA 
> -XX:+UseParallelGC -XX:+PrintGCDetails -verbose:gc -XX:+PrintGCTimeStamps 
> -Dlog4j.configuratorClass=org.apache.tez.common.TezLog4jConfigurator 
> -Dlog4j.configuration=tez-container-log4j.properties 
> -Dyarn.app.container.log.dir= -Dtez.root.logger=INFO,CLA " stack 
> trace: [org.apache.tez.dag.api.DAG.createDag(DAG.java:1009), 
> org.apache.tez.client.TezClientUtils.prepareAndCreateDAGPlan(TezClientUtils.java:720),
>  org.apache.tez.client.TezClient.submitDAGSession(TezClient.java:555), 
> org.apache.tez.client.TezClient.submitDAG(TezClient.java:522), 
> org.apache.hadoop.hive.ql.exec.tez.TezTask.submit(TezTask.java:548), 
> org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:198), 
> org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199), 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100), 
> org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1987), 
> org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1667), 
> org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1414), 
> org.apache.hadoop.hive.ql.Driver.run(Driver.java:1211), 
> org.apache.hadoop.hive.ql.Driver.run(Driver.java:1204), 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:242),
>  
> org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91),
>  
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:336),
>  java.security.AccessController.doPrivileged(Native Method), 
> javax.security.auth.Subject.doAs(Subject.java:422), 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866),
>  
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:350),
>  java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511), 
> java.util.concurrent.FutureTask.run(FutureTask.java:266), 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511), 
> java.util.concurrent.FutureTask.run(FutureTask.java:266), 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142),
>  
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617),
>  java.lang.Thread.run(Thread.java:745)] retrying...

[jira] [Assigned] (AMBARI-22578) hive2 queries fails after adding beacon component

2017-12-02 Thread Jaimin Jetly (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaimin Jetly reassigned AMBARI-22578:
-

Assignee: Jaimin Jetly


[jira] [Resolved] (AMBARI-22560) Remove obsolete hack to set KDC admin credentials via Cluster session API

2017-12-02 Thread Sandor Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandor Molnar resolved AMBARI-22560.

Resolution: Fixed

> Remove obsolete hack to set KDC admin credentials via Cluster session API
> -
>
> Key: AMBARI-22560
> URL: https://issues.apache.org/jira/browse/AMBARI-22560
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.2.0
>Reporter: Sandor Molnar
>Assignee: Sandor Molnar
>Priority: Minor
>  Labels: kdc_credentials, kerberos
> Fix For: trunk
>
> Attachments: AMBARI_22560_patch, AMBARI_22560_trunk_02.patch
>
>
> Remove the hack that sets the KDC admin credential by storing session
> attributes via the Cluster resource API.
> Near 
> *org/apache/ambari/server/controller/AmbariManagementControllerImpl.java:1469*
> {code:java}
>   // TODO: Once the UI uses the Credential Resource API, remove this block to _clean_ the
>   // TODO: session attributes and store any KDC administrator credentials in the secure
>   // TODO: credential provider facility.
>   // For now, to keep things backwards compatible, get and remove the KDC administrator
>   // credentials from the session attributes and store them in the CredentialsProvider.
>   // The KDC administrator credentials are prefixed with kdc_admin/. The following
>   // attributes are expected, if setting the KDC administrator credentials:
>   //   kerberos_admin/principal
>   //   kerberos_admin/password
>   if ((sessionAttributes != null) && !sessionAttributes.isEmpty()) {
>     Map<String, Object> cleanedSessionAttributes = new HashMap<>();
>     String principal = null;
>     char[] password = null;
>     for (Map.Entry<String, Object> entry : sessionAttributes.entrySet()) {
>       String name = entry.getKey();
>       Object value = entry.getValue();
>       if ("kerberos_admin/principal".equals(name)) {
>         if (value instanceof String) {
>           principal = (String) value;
>         }
>       } else if ("kerberos_admin/password".equals(name)) {
>         if (value instanceof String) {
>           password = ((String) value).toCharArray();
>         }
>       } else {
>         cleanedSessionAttributes.put(name, value);
>       }
>     }
>     if (principal != null) {
>       // The KDC admin principal exists... set the credentials in the credentials store
>       credentialStoreService.setCredential(cluster.getClusterName(),
>           KerberosHelper.KDC_ADMINISTRATOR_CREDENTIAL_ALIAS,
>           new PrincipalKeyCredential(principal, password), CredentialStoreType.TEMPORARY);
>     }
>     sessionAttributes = cleanedSessionAttributes;
>   }
>   // TODO: END
> {code}
> This is no longer needed once the UI uses the new Credential Resource REST 
> API - see  AMBARI-13292
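A hedged sketch of the replacement path the TODO describes: storing the KDC admin credential through the Credential Resource REST API instead of Cluster session attributes. The endpoint path, the kdc.admin.credential alias, and the payload shape follow Ambari's documented credential API, but treat them as assumptions to verify against the Ambari version in use:

{code}
# Sketch under assumptions: the endpoint path, credential alias, and
# payload keys are taken from Ambari's documented credential REST API
# and may differ by version.
import json
import requests

def store_kdc_credential(ambari_url, cluster, principal, password,
                         auth=("admin", "admin")):
    url = "%s/api/v1/clusters/%s/credentials/kdc.admin.credential" % (
        ambari_url, cluster)
    payload = {"Credential": {"principal": principal,
                              "key": password,
                              "type": "temporary"}}
    # Ambari requires the X-Requested-By header on mutating requests
    resp = requests.post(url, data=json.dumps(payload), auth=auth,
                         headers={"X-Requested-By": "ambari"})
    resp.raise_for_status()

# Example (hypothetical host and credentials):
# store_kdc_credential("http://ambari.example.com:8080", "c1",
#                      "admin/admin@EXAMPLE.COM", "secret")
{code}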





[jira] [Commented] (AMBARI-22560) Remove obsolete hack to set KDC admin credentials via Cluster session API

2017-12-02 Thread Sandor Molnar (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-22560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16275554#comment-16275554
 ] 

Sandor Molnar commented on AMBARI-22560:


Jenkins failed due to another issue:
{noformat}
[INFO] Results:
[INFO] 
[ERROR] Errors: 
[ERROR]   ServicePropertiesTest.validatePropertySchemaOfServiceXMLs:49 » Ambari File /ho...
{noformat}


This is not related to my change either. Closing this JIRA.



[jira] [Commented] (AMBARI-22575) Broken Python Unit Tests in branch-2.6

2017-12-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-22575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16275514#comment-16275514
 ] 

Hudson commented on AMBARI-22575:
-

FAILURE: Integrated in Jenkins build Ambari-branch-2.6 #523 (See 
[https://builds.apache.org/job/Ambari-branch-2.6/523/])
AMBARI-22575. Broken Python Unit Tests in branch-2.6 (aonishuk) (aonishuk: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=51e26b86f8fef18f533a238f224d43d76a08bbaa])
* (edit) ambari-server/src/test/python/stacks/2.0.6/OOZIE/test_oozie_server.py
* (edit) ambari-server/src/test/python/stacks/2.2/configs/oozie-upgrade.json


> Broken Python Unit Tests in branch-2.6
> --
>
> Key: AMBARI-22575
> URL: https://issues.apache.org/jira/browse/AMBARI-22575
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.6.1
>Reporter: Jonathan Hurley
>Assignee: Andrew Onischuk
>Priority: Critical
> Fix For: 2.6.1
>
> Attachments: AMBARI-22575.patch
>
>
> AMBARI-22561 seems to have broken 4 unit tests...
> {code}
> ERROR: test_upgrade_23_with_type (test_oozie_server.TestOozieServer)
> --
> Traceback (most recent call last):
>   File 
> "/Users/jhurley/src/apache/ambari/ambari-common/src/test/python/mock/mock.py",
>  line 1199, in patched
> return func(*args, **keywargs)
>   File 
> "/Users/jhurley/src/apache/ambari/ambari-server/src/test/python/stacks/2.0.6/OOZIE/test_oozie_server.py",
>  line 1343, in test_upgrade_23_with_type
> mocks_dict = mocks_dict)
>   File 
> "/Users/jhurley/src/apache/ambari/ambari-server/src/test/python/stacks/utils/RMFTestCase.py",
>  line 159, in executeScript
> method(RMFTestCase.env, *command_args)
>   File 
> "/Users/jhurley/src/apache/ambari/ambari-server/src/test/python/stacks/utils/../../../../main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_server.py",
>  line 123, in pre_upgrade_restart
> OozieUpgrade.prepare_libext_directory(upgrade_type=upgrade_type)
>   File 
> "/Users/jhurley/src/apache/ambari/ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_server_upgrade.py",
>  line 72, in prepare_libext_directory
> lzo_utils.install_lzo_if_needed()
>   File 
> "/Users/jhurley/src/apache/ambari/ambari-common/src/main/python/resource_management/libraries/functions/lzo_utils.py",
>  line 85, in install_lzo_if_needed
> Script.repository_util.create_repo_files()
>   File 
> "/Users/jhurley/src/apache/ambari/ambari-common/src/main/python/resource_management/libraries/functions/repository_util.py",
>  line 53, in create_repo_files
> if self.command_repository.version_id is None:
> AttributeError: RepositoryUtil instance has no attribute 'command_repository'
> --
> Total run:1200
> Total errors:4
> Total failures:0
> {code}
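The traceback bottoms out in an AttributeError: RepositoryUtil reaches self.command_repository at use time even though nothing ever set it. A minimal sketch of that failure mode and the defensive shape of a fix (illustrative only; the actual AMBARI-22575 commit adjusted the OOZIE test and its upgrade config rather than this class):

{code}
# Illustrative sketch, not the Ambari fix: an attribute that __init__
# does not set unconditionally raises AttributeError at first use,
# exactly as in the traceback above.
class RepositoryUtil:
    def __init__(self, command_repository=None):
        # Setting the attribute unconditionally avoids the AttributeError;
        # skipping this when no repository is in the command reproduces it.
        self.command_repository = command_repository

    def create_repo_files(self):
        if self.command_repository is None:
            return {}  # nothing to generate without repository metadata
        return {self.command_repository.version_id: "repo file contents"}

print(RepositoryUtil().create_repo_files())  # -> {} instead of a crash
{code}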





[jira] [Updated] (AMBARI-22575) Broken Python Unit Tests in branch-2.6

2017-12-02 Thread Andrew Onischuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Onischuk updated AMBARI-22575:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to branch-2.6



[jira] [Commented] (AMBARI-22575) Broken Python Unit Tests in branch-2.6

2017-12-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-22575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16275505#comment-16275505
 ] 

Hadoop QA commented on AMBARI-22575:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12900338/AMBARI-22575.patch
  against trunk revision .

{color:red}-1 patch{color}.  Top-level [trunk 
compilation|https://builds.apache.org/job/Ambari-trunk-test-patch/12788//artifact/patch-work/trunkJavacWarnings.txt]
 may be broken.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/12788//console

This message is automatically generated.



[jira] [Updated] (AMBARI-22575) Broken Python Unit Tests in branch-2.6

2017-12-02 Thread Andrew Onischuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Onischuk updated AMBARI-22575:
-
Attachment: AMBARI-22575.patch



[jira] [Updated] (AMBARI-22575) Broken Python Unit Tests in branch-2.6

2017-12-02 Thread Andrew Onischuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Onischuk updated AMBARI-22575:
-
Status: Patch Available  (was: Open)



[jira] [Resolved] (AMBARI-22563) Packages Cannot Be Installed When Yum Transactions Fail

2017-12-02 Thread Dmytro Grinenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmytro Grinenko resolved AMBARI-22563.
--
Resolution: Fixed

Committed to trunk and branch-2.6

> Packages Cannot Be Installed When Yum Transactions Fail
> ---
>
> Key: AMBARI-22563
> URL: https://issues.apache.org/jira/browse/AMBARI-22563
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.1
>Reporter: Dmytro Grinenko
>Assignee: Dmytro Grinenko
>Priority: Blocker
> Fix For: 2.6.1
>
>
> Many installations are running into an issue with installing new bits in 
> preparation for an upgrade. Consider the following stack trace:
> {code}
> 2017-11-01 18:52:40,456 - No package found for 
> storm_${stack_version}(storm_(\d|_)+$) 
> 2017-11-01 18:52:40,457 - Package['None'] {'retry_on_repo_unavailability': False, 'retry_count': 5, 'action': ['upgrade']}
> 2017-11-01 18:52:40,457 - Installing package None ('/usr/bin/yum -d 0 -e 0 -y 
> install ''') 
> 2017-11-01 18:52:41,308 - Execution of '/usr/bin/yum -d 0 -e 0 -y install ''' 
> returned 1. Error: Nothing to do
> {code}
> Ambari attempts to determine the correct package to install by first doing a 
> scoped search by a specific repository. Once it has found a match, it then 
> proceeds with the yum install. 
> The problem in the above system is that Storm appears to have already been 
> partially installed:
> {code}
> [root@c6401 ~]# yum list installed | grep storm
> storm_2_6_0_0_334.x86_64        0.7.0.2.6.0.0-334.el6        installed
> {code}
> *Notice that the repository listed for storm shows {{installed}}, instead of 
> a real repository. The actual repository it should be coming from is called 
> {{HDP-2.6-repo-1}}.*
> The odd part here is that this appears to be our first install of storm. If 
> we can't find the package, then how did it even get into this state? It seems 
> like there might be something going on in the agent in terms of killing yum 
> and automatically retrying it. We can see the following:
> {code}
> [root@c6401 ~]# yum-complete-transaction
> Loaded plugins: fastestmirror
> Loading mirror speeds from cached hostfile
>  * base: mirror.solarvps.com
>  * extras: mirrors.mit.edu
>  * updates: mirror.5ninesolutions.com
> There are 1 outstanding transactions to complete. Finishing the most recent 
> one
> The remaining transaction had 1 elements left to run
> --> Running transaction check
> ---> Package storm_2_6_0_0_334.x86_64 0:1.1.0.2.6.0.0-334.el6 will be 
> installed
> --> Finished Dependency Resolution
> {code}
> It's actually pretty easy to duplicate this problem - during a {{yum 
> install}}, just kill yum. It will leave the package in this quasi-installed 
> state where its repo is listed as 'installed' even though it is not.
> I think what's happening here is that the agent receives the command to 
> install storm. During the course of the installation - the yum command gets 
> killed (possibly by the agent itself) and the agent retries to install the 
> package quietly. It then is unable to find the package anymore since it no 
> longer is listed as "available" and is now "installed" with the {{installed}} 
> repository association.
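A hedged sketch of the guard the description implies: before installing, finish any interrupted yum transaction so a killed {{yum install}} cannot leave the package stranded with the {{installed}} repo association. Helper names are illustrative; Ambari's real handling lives in the agent's package providers:

{code}
# Sketch under assumptions: yum leaves transaction-all.* journal files in
# /var/lib/yum when killed mid-transaction, and yum-complete-transaction
# replays them. Helper names here are illustrative, not Ambari agent code.
import glob
import subprocess

def has_unfinished_yum_transaction():
    return bool(glob.glob("/var/lib/yum/transaction-all.*"))

def safe_yum_install(package):
    if has_unfinished_yum_transaction():
        # Finish the interrupted transaction first so the package does not
        # stay quasi-installed with its repo reported as "installed".
        subprocess.check_call(["yum-complete-transaction", "-y"])
    subprocess.check_call(["/usr/bin/yum", "-d", "0", "-e", "0", "-y",
                           "install", package])

# Example (requires root and the package's repo configured):
# safe_yum_install("storm_2_6_0_0_334")
{code}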



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)