[jira] [Resolved] (AMBARI-25189) Enable livy.server.access-control.enabled by default

2019-03-18 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-25189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi resolved AMBARI-25189.

Resolution: Fixed

> Enable livy.server.access-control.enabled by default
> 
>
> Key: AMBARI-25189
> URL: https://issues.apache.org/jira/browse/AMBARI-25189
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-agent, ambari-server
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> ...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-25189) Enable livy.server.access-control.enabled by default

2019-03-11 Thread Vitaly Brodetskyi (JIRA)
Vitaly Brodetskyi created AMBARI-25189:
--

 Summary: Enable livy.server.access-control.enabled by default
 Key: AMBARI-25189
 URL: https://issues.apache.org/jira/browse/AMBARI-25189
 Project: Ambari
  Issue Type: Bug
  Components: ambari-agent, ambari-server
Reporter: Vitaly Brodetskyi
Assignee: Vitaly Brodetskyi


...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (AMBARI-24532) Service Check for Spark2 fails when ssl enabled in spark

2018-10-26 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi resolved AMBARI-24532.

Resolution: Fixed

> Service Check for Spark2 fails when ssl enabled in spark
> 
>
> Key: AMBARI-24532
> URL: https://issues.apache.org/jira/browse/AMBARI-24532
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.2
>Reporter: Ravi Bhardwaj
>Assignee: Vitaly Brodetskyi
>Priority: Major
>
> Service check for Spark2 fails when the History Server is running on HTTPS 
> with a custom port set via spark.ssl.historyServer.port. By default, when 
> enabling SSL on the Spark2 History Server, the instance starts on the port 
> configured with 'spark.history.ui.port' (the HTTP port) + 400. However, since 
> https://issues.apache.org/jira/browse/SPARK-17874 it is possible to specify 
> the HTTPS port using the property 'spark.ssl.historyServer.port'. Ambari, 
> however, does not read the spark.ssl.historyServer.port property to find 
> the correct SSL port while performing the service check.
>  
> {code:java}
> /var/lib/ambari-server/resources/common-services/SPARK2/2.0.0/package/scripts/params.py:
> # spark-defaults params
> ui_ssl_enabled = default("configurations/spark2-defaults/spark.ssl.enabled", False)
> spark_yarn_historyServer_address = default(spark_history_server_host, "localhost")
> spark_history_scheme = "http"
> spark_history_ui_port = config['configurations']['spark2-defaults']['spark.history.ui.port']
> if ui_ssl_enabled:
>   spark_history_ui_port = str(int(spark_history_ui_port) + 400)
>   spark_history_scheme = "https"{code}
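
One possible shape of the fix, sketched against the params.py snippet quoted above 
(default() and config are the helpers already used there); the exact property lookup 
and fallback are assumptions, not the committed change:

{code:python}
# Sketch only: prefer spark.ssl.historyServer.port when SSL is enabled,
# falling back to the historical "HTTP port + 400" behaviour otherwise.
ui_ssl_enabled = default("configurations/spark2-defaults/spark.ssl.enabled", False)
spark_history_scheme = "http"
spark_history_ui_port = config['configurations']['spark2-defaults']['spark.history.ui.port']

if ui_ssl_enabled:
  spark_history_scheme = "https"
  # assumed lookup; the service check would then probe https://<host>:<port>
  ssl_port = default("configurations/spark2-defaults/spark.ssl.historyServer.port", None)
  spark_history_ui_port = str(ssl_port) if ssl_port else str(int(spark_history_ui_port) + 400)
{code}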



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24800) service adviser changes for cluster specific configs

2018-10-22 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-24800:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> service adviser changes for cluster specific configs
> 
>
> Key: AMBARI-24800
> URL: https://issues.apache.org/jira/browse/AMBARI-24800
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 2.8.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Env: HDC Spark Data science (m4x4xlarge 16 CPU/64 GB)
> Spark defaults aren't changed in Ambari; spark.executor.memory is left at 1 GB.
>  (Should this be 60-70% of the YARN minimum container size? 
> spark.yarn.executor.memoryOverhead also needs to be considered.)
> Add logic for "spark.shuffle.io.numConnectionsPerPeer": it should be configured 
> dynamically based on cluster size. The recommendation was to set it to 10 if the 
> number of nodes is < 10 and to remove it (so that the default value is used) on 
> larger clusters.
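
A minimal sketch of that recommendation in the service-advisor style used elsewhere 
in this thread; the method name and the putProperty/putPropertyAttribute helpers 
follow the usual Ambari advisor pattern, but the surrounding code is an assumption, 
not the committed change:

{code:python}
# Sketch only: tune spark.shuffle.io.numConnectionsPerPeer from cluster size.
def recommendSpark2Configurations(self, configurations, clusterData, services, hosts):
  putSparkProperty = self.putProperty(configurations, "spark2-defaults", services)
  putSparkAttribute = self.putPropertyAttribute(configurations, "spark2-defaults")

  node_count = len(hosts.get("items", []))
  if node_count < 10:
    # small cluster: raise the number of connections per peer
    putSparkProperty("spark.shuffle.io.numConnectionsPerPeer", "10")
  else:
    # larger cluster: drop the override so Spark's built-in default applies
    putSparkAttribute("spark.shuffle.io.numConnectionsPerPeer", "delete", "true")
{code}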



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (AMBARI-24532) Service Check for Spark2 fails when ssl enabled in spark

2018-10-18 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi reassigned AMBARI-24532:
--

Assignee: Vitaly Brodetskyi

> Service Check for Spark2 fails when ssl enabled in spark
> 
>
> Key: AMBARI-24532
> URL: https://issues.apache.org/jira/browse/AMBARI-24532
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.2
>Reporter: Ravi Bhardwaj
>Assignee: Vitaly Brodetskyi
>Priority: Major
>
> Service check for Spark2 fails when the History Server is running on HTTPS 
> with a custom port set via spark.ssl.historyServer.port. By default, when 
> enabling SSL on the Spark2 History Server, the instance starts on the port 
> configured with 'spark.history.ui.port' (the HTTP port) + 400. However, since 
> https://issues.apache.org/jira/browse/SPARK-17874 it is possible to specify 
> the HTTPS port using the property 'spark.ssl.historyServer.port'. Ambari, 
> however, does not read the spark.ssl.historyServer.port property to find 
> the correct SSL port while performing the service check.
>  
> {code:java}
> /var/lib/ambari-server/resources/common-services/SPARK2/2.0.0/package/scripts/params.py:
> # spark-defaults params
> ui_ssl_enabled = default("configurations/spark2-defaults/spark.ssl.enabled", False)
> spark_yarn_historyServer_address = default(spark_history_server_host, "localhost")
> spark_history_scheme = "http"
> spark_history_ui_port = config['configurations']['spark2-defaults']['spark.history.ui.port']
> if ui_ssl_enabled:
>   spark_history_ui_port = str(int(spark_history_ui_port) + 400)
>   spark_history_scheme = "https"{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (AMBARI-24544) Spark2 Job History Server Quick links Hard Coded to http_only

2018-10-18 Thread Vitaly Brodetskyi (JIRA)


[ 
https://issues.apache.org/jira/browse/AMBARI-24544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655981#comment-16655981
 ] 

Vitaly Brodetskyi commented on AMBARI-24544:


Duplicate of 
[AMBARI-24532|https://hortonworks.jira.com/issues/?jql=project+in+%2810320%2C+11620%2C+11320%2C+10520%29+AND+cf%5B11018%5D+%3D+AMBARI-24532]

> Spark2 Job History Server Quick links Hard Coded to http_only
> -
>
> Key: AMBARI-24544
> URL: https://issues.apache.org/jira/browse/AMBARI-24544
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.7.0
>Reporter: Sandeep Nemuri
>Assignee: Vitaly Brodetskyi
>Priority: Major
> Fix For: 2.8.0
>
> Attachments: AMBARI-24544.001.patch
>
>
> Spark2 Job History Server Quick links Hard Coded to http_only
> {code:java}
> {
> name: "default",
> description: "default quick links configuration",
> configuration: 
> {
> protocol: 
> {
> type: "HTTP_ONLY"
> },
> links: 
> [
> {
> name: "spark2_history_server_ui",
> label: "Spark2 History Server UI",
> component_name: "SPARK2_JOBHISTORYSERVER",
> requires_user_name: "false",
> url: "%@://%@:%@",
> port: 
> {
> http_property: "spark.history.ui.port",
> http_default_port: "18081",
> https_property: "spark.history.ui.port",
> https_default_port: "18081",
> regex: "^(\d+)$",
> site: "spark2-defaults"
> }
> }
> ]
> }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (AMBARI-24544) Spark2 Job History Server Quick links Hard Coded to http_only

2018-10-18 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi resolved AMBARI-24544.

Resolution: Duplicate

> Spark2 Job History Server Quick links Hard Coded to http_only
> -
>
> Key: AMBARI-24544
> URL: https://issues.apache.org/jira/browse/AMBARI-24544
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.7.0
>Reporter: Sandeep Nemuri
>Assignee: Vitaly Brodetskyi
>Priority: Major
> Fix For: 2.8.0
>
> Attachments: AMBARI-24544.001.patch
>
>
> Spark2 Job History Server Quick links Hard Coded to http_only
> {code:java}
> {
> name: "default",
> description: "default quick links configuration",
> configuration: 
> {
> protocol: 
> {
> type: "HTTP_ONLY"
> },
> links: 
> [
> {
> name: "spark2_history_server_ui",
> label: "Spark2 History Server UI",
> component_name: "SPARK2_JOBHISTORYSERVER",
> requires_user_name: "false",
> url: "%@://%@:%@",
> port: 
> {
> http_property: "spark.history.ui.port",
> http_default_port: "18081",
> https_property: "spark.history.ui.port",
> https_default_port: "18081",
> regex: "^(\d+)$",
> site: "spark2-defaults"
> }
> }
> ]
> }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24544) Spark2 Job History Server Quick links Hard Coded to http_only

2018-10-18 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-24544:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Spark2 Job History Server Quick links Hard Coded to http_only
> -
>
> Key: AMBARI-24544
> URL: https://issues.apache.org/jira/browse/AMBARI-24544
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.7.0
>Reporter: Sandeep Nemuri
>Assignee: Vitaly Brodetskyi
>Priority: Major
> Fix For: 2.8.0
>
> Attachments: AMBARI-24544.001.patch
>
>
> Spark2 Job History Server Quick links Hard Coded to http_only
> {code:java}
> {
> name: "default",
> description: "default quick links configuration",
> configuration: 
> {
> protocol: 
> {
> type: "HTTP_ONLY"
> },
> links: 
> [
> {
> name: "spark2_history_server_ui",
> label: "Spark2 History Server UI",
> component_name: "SPARK2_JOBHISTORYSERVER",
> requires_user_name: "false",
> url: "%@://%@:%@",
> port: 
> {
> http_property: "spark.history.ui.port",
> http_default_port: "18081",
> https_property: "spark.history.ui.port",
> https_default_port: "18081",
> regex: "^(\d+)$",
> site: "spark2-defaults"
> }
> }
> ]
> }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (AMBARI-24544) Spark2 Job History Server Quick links Hard Coded to http_only

2018-10-18 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi reopened AMBARI-24544:


> Spark2 Job History Server Quick links Hard Coded to http_only
> -
>
> Key: AMBARI-24544
> URL: https://issues.apache.org/jira/browse/AMBARI-24544
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.7.0
>Reporter: Sandeep Nemuri
>Assignee: Vitaly Brodetskyi
>Priority: Major
> Fix For: 2.8.0
>
> Attachments: AMBARI-24544.001.patch
>
>
> Spark2 Job History Server Quick links Hard Coded to http_only
> {code:java}
> {
> name: "default",
> description: "default quick links configuration",
> configuration: 
> {
> protocol: 
> {
> type: "HTTP_ONLY"
> },
> links: 
> [
> {
> name: "spark2_history_server_ui",
> label: "Spark2 History Server UI",
> component_name: "SPARK2_JOBHISTORYSERVER",
> requires_user_name: "false",
> url: "%@://%@:%@",
> port: 
> {
> http_property: "spark.history.ui.port",
> http_default_port: "18081",
> https_property: "spark.history.ui.port",
> https_default_port: "18081",
> regex: "^(\d+)$",
> site: "spark2-defaults"
> }
> }
> ]
> }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (AMBARI-24544) Spark2 Job History Server Quick links Hard Coded to http_only

2018-10-18 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi reassigned AMBARI-24544:
--

Assignee: Vitaly Brodetskyi

> Spark2 Job History Server Quick links Hard Coded to http_only
> -
>
> Key: AMBARI-24544
> URL: https://issues.apache.org/jira/browse/AMBARI-24544
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.7.0
>Reporter: Sandeep Nemuri
>Assignee: Vitaly Brodetskyi
>Priority: Major
> Fix For: 2.8.0
>
> Attachments: AMBARI-24544.001.patch
>
>
> Spark2 Job History Server Quick links Hard Coded to http_only
> {code:java}
> {
> name: "default",
> description: "default quick links configuration",
> configuration: 
> {
> protocol: 
> {
> type: "HTTP_ONLY"
> },
> links: 
> [
> {
> name: "spark2_history_server_ui",
> label: "Spark2 History Server UI",
> component_name: "SPARK2_JOBHISTORYSERVER",
> requires_user_name: "false",
> url: "%@://%@:%@",
> port: 
> {
> http_property: "spark.history.ui.port",
> http_default_port: "18081",
> https_property: "spark.history.ui.port",
> https_default_port: "18081",
> regex: "^(\d+)$",
> site: "spark2-defaults"
> }
> }
> ]
> }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24544) Spark2 Job History Server Quick links Hard Coded to http_only

2018-10-18 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-24544:
---
Fix Version/s: 2.8.0

> Spark2 Job History Server Quick links Hard Coded to http_only
> -
>
> Key: AMBARI-24544
> URL: https://issues.apache.org/jira/browse/AMBARI-24544
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.7.0
>Reporter: Sandeep Nemuri
>Assignee: Vitaly Brodetskyi
>Priority: Major
> Fix For: 2.8.0
>
> Attachments: AMBARI-24544.001.patch
>
>
> Spark2 Job History Server Quick links Hard Coded to http_only
> {code:java}
> {
> name: "default",
> description: "default quick links configuration",
> configuration: 
> {
> protocol: 
> {
> type: "HTTP_ONLY"
> },
> links: 
> [
> {
> name: "spark2_history_server_ui",
> label: "Spark2 History Server UI",
> component_name: "SPARK2_JOBHISTORYSERVER",
> requires_user_name: "false",
> url: "%@://%@:%@",
> port: 
> {
> http_property: "spark.history.ui.port",
> http_default_port: "18081",
> https_property: "spark.history.ui.port",
> https_default_port: "18081",
> regex: "^(\d+)$",
> site: "spark2-defaults"
> }
> }
> ]
> }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24800) service adviser changes for cluster specific configs

2018-10-17 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-24800:
---
Status: Patch Available  (was: Open)

> service adviser changes for cluster specific configs
> 
>
> Key: AMBARI-24800
> URL: https://issues.apache.org/jira/browse/AMBARI-24800
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 2.8.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Env: HDC Spark Data science (m4x4xlarge 16 CPU/64 GB)
> Spark defaults aren't changed in Ambari; spark.executor.memory is left at 1 GB.
>  (Should this be 60-70% of the YARN minimum container size? 
> spark.yarn.executor.memoryOverhead also needs to be considered.)
> Add logic for "spark.shuffle.io.numConnectionsPerPeer": it should be configured 
> dynamically based on cluster size. The recommendation was to set it to 10 if the 
> number of nodes is < 10 and to remove it (so that the default value is used) on 
> larger clusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-24800) service adviser changes for cluster specific configs

2018-10-17 Thread Vitaly Brodetskyi (JIRA)
Vitaly Brodetskyi created AMBARI-24800:
--

 Summary: service adviser changes for cluster specific configs
 Key: AMBARI-24800
 URL: https://issues.apache.org/jira/browse/AMBARI-24800
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Reporter: Vitaly Brodetskyi
Assignee: Vitaly Brodetskyi
 Fix For: 2.8.0


Env: HDC Spark Data science (m4x4xlarge 16 CPU/64 GB)

Spark defaults aren't changed in Ambari; spark.executor.memory is left at 1 GB.

 (Should this be 60-70% of the YARN minimum container size? 
spark.yarn.executor.memoryOverhead also needs to be considered.)

Add logic for "spark.shuffle.io.numConnectionsPerPeer": it should be configured 
dynamically based on cluster size. The recommendation was to set it to 10 if the 
number of nodes is < 10 and to remove it (so that the default value is used) on 
larger clusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (AMBARI-24749) Stackadvisor error while enabling HSI

2018-10-09 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi resolved AMBARI-24749.

Resolution: Duplicate

> Stackadvisor error while enabling HSI
> -
>
> Key: AMBARI-24749
> URL: https://issues.apache.org/jira/browse/AMBARI-24749
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.3
>Reporter: Vivek Rathod
>Priority: Major
> Fix For: 2.7.3
>
>
> Stackadvisor error while enabling HSI
> STR:
> On a deployed cluster, enable HSI from the Hive service page. Stackadvisor throws an 
> error.
>  
> {code}
> Traceback (most recent call last):
>  File "/var/lib/ambari-server/resources/scripts/stack_advisor.py", line 184, 
> in 
>  main(sys.argv)
>  File "/var/lib/ambari-server/resources/scripts/stack_advisor.py", line 138, 
> in main
>  result = stackAdvisor.validateConfigurations(services, hosts)
>  File "/var/lib/ambari-server/resources/scripts/../stacks/stack_advisor.py", 
> line 1079, in validateConfigurations
>  validationItems = self.getConfigurationsValidationItems(services, hosts)
>  File "/var/lib/ambari-server/resources/scripts/../stacks/stack_advisor.py", 
> line 1463, in getConfigurationsValidationItems
>  recommendations = self.recommendConfigurations(services, hosts)
>  File "/var/lib/ambari-server/resources/scripts/../stacks/stack_advisor.py", 
> line 1639, in recommendConfigurations
>  serviceAdvisor.getServiceConfigurationRecommendations(configurations, 
> clusterSummary, services, hosts)
>  File 
> "/var/lib/ambari-server/resources/stacks/HDP/3.0/services/ZEPPELIN/service_advisor.py",
>  line 124, in getServiceConfigurationRecommendations
>  recommender.recommendZeppelinConfigurationsFromHDP25(configurations, 
> clusterData, services, hosts)
>  File 
> "/var/lib/ambari-server/resources/stacks/HDP/3.0/services/ZEPPELIN/service_advisor.py",
>  line 190, in recommendZeppelinConfigurationsFromHDP25
>  shiro_ini_content = zeppelin_shiro_ini['shiro_ini_content']
> TypeError: 'NoneType' object has no attribute '__getitem__'
> {code}
> As a result, HSI start fails because of the misconfiguration:
> {code}
> Failed: Cache size (0B) has to be smaller than the container sizing (0B)
> java.lang.IllegalArgumentException: Cache size (0B) has to be smaller than 
> the container sizing (0B)
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:122)
>  at 
> org.apache.hadoop.hive.llap.cli.LlapServiceDriver.run(LlapServiceDriver.java:248)
>  at 
> org.apache.hadoop.hive.llap.cli.LlapServiceDriver.main(LlapServiceDriver.java:120)
> {code}
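
A minimal sketch of a defensive guard for the stack advisor traceback above: skip the 
shiro-based Zeppelin recommendations when the zeppelin-shiro-ini config type is absent. 
getServicesSiteProperties is the helper the service advisors already use; the surrounding 
names are assumptions, and since this ticket was closed as a duplicate the sketch is 
illustrative only:

{code:python}
# Sketch only: avoid indexing into a missing zeppelin-shiro-ini config.
zeppelin_shiro_ini = self.getServicesSiteProperties(services, "zeppelin-shiro-ini")
if zeppelin_shiro_ini and "shiro_ini_content" in zeppelin_shiro_ini:
  shiro_ini_content = zeppelin_shiro_ini["shiro_ini_content"]
  # ... continue with the shiro-based recommendations ...
else:
  # config type not present (e.g. while HSI is being enabled): skip quietly
  shiro_ini_content = None
{code}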



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24718) STS fails after start, after stack upgrade from 3.0.1 to 3.0.3

2018-10-01 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-24718:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> STS fails after start, after stack upgrade from 3.0.1 to 3.0.3
> --
>
> Key: AMBARI-24718
> URL: https://issues.apache.org/jira/browse/AMBARI-24718
> Project: Ambari
>  Issue Type: Bug
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 2.7.3
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> See this exception in SHS log:
> {code:java}
> 
> Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" 
> with specified deploy mode instead.
> Exception in thread "main" java.lang.IllegalArgumentException: requirement 
> failed: Keytab file: none does not exist
>  at scala.Predef$.require(Predef.scala:224)
>  at 
> org.apache.spark.deploy.SparkSubmit$.doPrepareSubmitEnvironment(SparkSubmit.scala:390)
>  at 
> org.apache.spark.deploy.SparkSubmit$.prepareSubmitEnvironment(SparkSubmit.scala:250)
>  at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:171)
>  at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
>  at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> {code}
> After I removed the spark.yarn.keytab/principal properties, it started working 
> fine. Note that this cluster is NOT kerberized, so it is strange that SHS 
> tries to use these properties. At the same time the properties 
> spark.history.kerberos.keytab/principal are also present but cause no 
> issues. As to why spark.yarn.keytab/principal were added during the 
> stack upgrade on a non-kerberized cluster, here is the answer:
> {code:java}
>  from-key="spark.history.kerberos.keytab" to-key="spark.yarn.keytab" 
> default-value="" if-type="spark2-thrift-sparkconf" if-key="spark.yarn.keytab" 
> if-key-state="absent"/>
>   from-key="spark.history.kerberos.principal" to-key="spark.yarn.principal" 
> default-value="" if-type="spark2-thrift-sparkconf" 
> if-key="spark.yarn.principal" if-key-state="absent"/>
> {code}
> I assumed that if "spark.history.kerberos.keytab/principal" is present on a 
> non-kerberized cluster, then "spark.yarn.keytab/principal" could be added too; 
> we have the same logic for many other components in Ambari. So the question is: 
> should this be fixed on the Ambari side (add spark.yarn.keytab/principal only if 
> Kerberos is enabled), or should a condition be modified/added on the SPARK side, 
> so it is not used when Kerberos is disabled or the value is empty/none?
>  
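
A minimal sketch of the Ambari-side option discussed above: only carry the keytab and 
principal over when the cluster is actually kerberized. security_enabled is read from 
cluster-env as in other Ambari scripts; thrift_sparkconf and the history_kerberos_* 
values are hypothetical placeholders, not the committed fix:

{code:python}
# Sketch only: add spark.yarn.keytab/principal just on kerberized clusters.
security_enabled = config['configurations']['cluster-env']['security_enabled']

if security_enabled:
  thrift_sparkconf['spark.yarn.keytab'] = history_kerberos_keytab
  thrift_sparkconf['spark.yarn.principal'] = history_kerberos_principal
# else: leave the properties out entirely, so SparkSubmit never sees "none"
{code}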



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24718) STS fails after start, after stack upgrade from 3.0.1 to 3.0.3

2018-10-01 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-24718:
---
Status: Patch Available  (was: Open)

> STS fails after start, after stack upgrade from 3.0.1 to 3.0.3
> --
>
> Key: AMBARI-24718
> URL: https://issues.apache.org/jira/browse/AMBARI-24718
> Project: Ambari
>  Issue Type: Bug
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 2.7.3
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> See this exception in SHS log:
> {code:java}
> 
> Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" 
> with specified deploy mode instead.
> Exception in thread "main" java.lang.IllegalArgumentException: requirement 
> failed: Keytab file: none does not exist
>  at scala.Predef$.require(Predef.scala:224)
>  at 
> org.apache.spark.deploy.SparkSubmit$.doPrepareSubmitEnvironment(SparkSubmit.scala:390)
>  at 
> org.apache.spark.deploy.SparkSubmit$.prepareSubmitEnvironment(SparkSubmit.scala:250)
>  at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:171)
>  at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
>  at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> {code}
> After I removed the spark.yarn.keytab/principal properties, it started working 
> fine. Note that this cluster is NOT kerberized, so it is strange that SHS 
> tries to use these properties. At the same time the properties 
> spark.history.kerberos.keytab/principal are also present but cause no 
> issues. As to why spark.yarn.keytab/principal were added during the 
> stack upgrade on a non-kerberized cluster, here is the answer:
> {code:java}
>  from-key="spark.history.kerberos.keytab" to-key="spark.yarn.keytab" 
> default-value="" if-type="spark2-thrift-sparkconf" if-key="spark.yarn.keytab" 
> if-key-state="absent"/>
>   from-key="spark.history.kerberos.principal" to-key="spark.yarn.principal" 
> default-value="" if-type="spark2-thrift-sparkconf" 
> if-key="spark.yarn.principal" if-key-state="absent"/>
> {code}
> I assumed that if "spark.history.kerberos.keytab/principal" is present on a 
> non-kerberized cluster, then "spark.yarn.keytab/principal" could be added too; 
> we have the same logic for many other components in Ambari. So the question is: 
> should this be fixed on the Ambari side (add spark.yarn.keytab/principal only if 
> Kerberos is enabled), or should a condition be modified/added on the SPARK side, 
> so it is not used when Kerberos is disabled or the value is empty/none?
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-24718) STS fails after start, after stack upgrade from 3.0.1 to 3.0.3

2018-10-01 Thread Vitaly Brodetskyi (JIRA)
Vitaly Brodetskyi created AMBARI-24718:
--

 Summary: STS fails after start, after stack upgrade from 3.0.1 to 
3.0.3
 Key: AMBARI-24718
 URL: https://issues.apache.org/jira/browse/AMBARI-24718
 Project: Ambari
  Issue Type: Bug
Reporter: Vitaly Brodetskyi
Assignee: Vitaly Brodetskyi
 Fix For: 2.7.3


See this exception in SHS log:
{code}

Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" 
with specified deploy mode instead.
Exception in thread "main" java.lang.IllegalArgumentException: requirement 
failed: Keytab file: none does not exist
 at scala.Predef$.require(Predef.scala:224)
 at 
org.apache.spark.deploy.SparkSubmit$.doPrepareSubmitEnvironment(SparkSubmit.scala:390)
 at 
org.apache.spark.deploy.SparkSubmit$.prepareSubmitEnvironment(SparkSubmit.scala:250)
 at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:171)
 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
{code}
After I removed the spark.yarn.keytab/principal properties, it started working fine. 
Note that this cluster is NOT kerberized, so it is strange that SHS is trying to 
use these properties. At the same time the properties 
spark.history.kerberos.keytab/principal are also present but cause no issues. 
As to why spark.yarn.keytab/principal were added during the stack 
upgrade on a non-kerberized cluster, here is the answer:
{code}

 
{code}
I assumed that if "spark.history.kerberos.keytab/principal" is present on a 
non-kerberized cluster, then "spark.yarn.keytab/principal" could be added too; we 
have the same logic for many other components in Ambari. So the question is: should 
this be fixed on the Ambari side (add spark.yarn.keytab/principal only if 
Kerberos is enabled), or should a condition be modified/added on the SPARK side, so 
it is not used when Kerberos is disabled or the value is empty/none?

Cluster with repro: http://104.196.75.237:8080 (GCE)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24718) STS fails after start, after stack upgrade from 3.0.1 to 3.0.3

2018-10-01 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-24718:
---
Description: 
See this exception in SHS log:
{code:java}

Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" 
with specified deploy mode instead.
Exception in thread "main" java.lang.IllegalArgumentException: requirement 
failed: Keytab file: none does not exist
 at scala.Predef$.require(Predef.scala:224)
 at 
org.apache.spark.deploy.SparkSubmit$.doPrepareSubmitEnvironment(SparkSubmit.scala:390)
 at 
org.apache.spark.deploy.SparkSubmit$.prepareSubmitEnvironment(SparkSubmit.scala:250)
 at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:171)
 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
{code}
After I removed the spark.yarn.keytab/principal properties, it started working fine. 
Note that this cluster is NOT kerberized, so it is strange that SHS is trying to 
use these properties. At the same time the properties 
spark.history.kerberos.keytab/principal are also present but cause no issues. 
As to why spark.yarn.keytab/principal were added during the stack 
upgrade on a non-kerberized cluster, here is the answer:
{code:java}

 
{code}
I assumed that if "spark.history.kerberos.keytab/principal" is present on a 
non-kerberized cluster, then "spark.yarn.keytab/principal" could be added too; we 
have the same logic for many other components in Ambari. So the question is: should 
this be fixed on the Ambari side (add spark.yarn.keytab/principal only if 
Kerberos is enabled), or should a condition be modified/added on the SPARK side, so 
it is not used when Kerberos is disabled or the value is empty/none?

 

  was:
See this exception in SHS log:
{code}

Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" 
with specified deploy mode instead.
Exception in thread "main" java.lang.IllegalArgumentException: requirement 
failed: Keytab file: none does not exist
 at scala.Predef$.require(Predef.scala:224)
 at 
org.apache.spark.deploy.SparkSubmit$.doPrepareSubmitEnvironment(SparkSubmit.scala:390)
 at 
org.apache.spark.deploy.SparkSubmit$.prepareSubmitEnvironment(SparkSubmit.scala:250)
 at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:171)
 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
{code}
After I removed the spark.yarn.keytab/principal properties, it started working fine. 
Note that this cluster is NOT kerberized, so it is strange that SHS is trying to 
use these properties. At the same time the properties 
spark.history.kerberos.keytab/principal are also present but cause no issues. 
As to why spark.yarn.keytab/principal were added during the stack 
upgrade on a non-kerberized cluster, here is the answer:
{code}

 
{code}
I assumed that if "spark.history.kerberos.keytab/principal" is present on a 
non-kerberized cluster, then "spark.yarn.keytab/principal" could be added too; we 
have the same logic for many other components in Ambari. So the question is: should 
this be fixed on the Ambari side (add spark.yarn.keytab/principal only if 
Kerberos is enabled), or should a condition be modified/added on the SPARK side, so 
it is not used when Kerberos is disabled or the value is empty/none?

Cluster with repro: http://104.196.75.237:8080 (GCE)


> STS fails after start, after stack upgrade from 3.0.1 to 3.0.3
> --
>
> Key: AMBARI-24718
> URL: https://issues.apache.org/jira/browse/AMBARI-24718
> Project: Ambari
>  Issue Type: Bug
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
> Fix For: 2.7.3
>
>
> See this exception in SHS log:
> {code:java}
> 
> Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" 
> with specified deploy mode instead.
> Exception in thread "main" java.lang.IllegalArgumentException: requirement 
> failed: Keytab file: none does not exist
>  at scala.Predef$.require(Predef.scala:224)
>  at 
> org.apache.spark.deploy.SparkSubmit$.doPrepareSubmitEnvironment(SparkSubmit.scala:390)
>  at 
> org.apache.spark.deploy.SparkSubmit$.prepareSubmitEnvironment(SparkSubmit.scala:250)
>  at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:171)
>  at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
>  at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> {code}
> After I removed the spark.yarn.keytab/principal properties, it started working 
> fine. Note that this cluster is NOT kerberized, so it is strange that SHS is 
> trying to use these properties. At the same time prope

[jira] [Resolved] (AMBARI-24365) Add livy-client.conf setting to Ambari spark stack

2018-08-17 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi resolved AMBARI-24365.

Resolution: Fixed

> Add livy-client.conf setting to Ambari spark stack
> --
>
> Key: AMBARI-24365
> URL: https://issues.apache.org/jira/browse/AMBARI-24365
> Project: Ambari
>  Issue Type: Improvement
>  Components: stacks
>Reporter: Tao Li
>Assignee: Vitaly Brodetskyi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.7.1
>
> Attachments: AMBARI-24365.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We have run into a need to add livy-client.conf to Ambari's capability. For 
> example we were trying to configure "livy.rsc.launcher.address" in this 
> setting file.
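
One way such a file could be rendered by the Livy scripts, sketched with the standard 
PropertiesFile resource from resource_management; the livy2-client-conf config type and 
the params fields are assumptions, not the shipped implementation:

{code:python}
# Sketch only: write livy-client.conf from a hypothetical livy2-client-conf config type.
import os

from resource_management.libraries.resources.properties_file import PropertiesFile
import params

PropertiesFile(os.path.join(params.livy2_conf_dir, "livy-client.conf"),
               properties=params.config['configurations']['livy2-client-conf'],
               owner=params.livy2_user,
               group=params.user_group)
{code}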



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24365) Add livy-client.conf setting to Ambari spark stack

2018-08-17 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-24365:
---
Fix Version/s: 2.7.1

> Add livy-client.conf setting to Ambari spark stack
> --
>
> Key: AMBARI-24365
> URL: https://issues.apache.org/jira/browse/AMBARI-24365
> Project: Ambari
>  Issue Type: Improvement
>  Components: stacks
>Reporter: Tao Li
>Assignee: Vitaly Brodetskyi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.7.1
>
> Attachments: AMBARI-24365.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We have run into a need to add livy-client.conf to Ambari's capability. For 
> example we were trying to configure "livy.rsc.launcher.address" in this 
> setting file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (AMBARI-24366) Honor zeppelin.livy.url setting as an interpreter setting if specified

2018-08-17 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi resolved AMBARI-24366.

Resolution: Fixed

> Honor zeppelin.livy.url setting as an interpreter setting if specified
> --
>
> Key: AMBARI-24366
> URL: https://issues.apache.org/jira/browse/AMBARI-24366
> Project: Ambari
>  Issue Type: Improvement
>  Components: stacks
>Reporter: Tao Li
>Assignee: Vitaly Brodetskyi
>Priority: Major
>  Labels: pull-request-available
> Attachments: AMBARI-24366.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The resolved value for this setting from Ambari auto update is not good 
> enough in certain scenarios. It just uses the first host where Livy server is 
> running. We want to have the flexibility to honor an explicit configuration 
> for this setting.
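
A minimal sketch of the behaviour being requested, in the style of the Python scripts 
that update Zeppelin's interpreter settings; the interpreter dict layout, the default 
URL value, and the livy_hosts/livy_port variables are assumptions, not the actual code:

{code:python}
# Sketch only: honor an explicitly configured zeppelin.livy.url, otherwise fall back
# to the first host that runs the Livy server (the current auto-update behaviour).
configured_url = interpreter['properties'].get('zeppelin.livy.url')
if configured_url and configured_url != 'http://localhost:8998':
  livy_url = configured_url
else:
  livy_url = 'http://{0}:{1}'.format(livy_hosts[0], livy_port)
interpreter['properties']['zeppelin.livy.url'] = livy_url
{code}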



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24366) Honor zeppelin.livy.url setting as an interpreter setting if specified

2018-08-17 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-24366:
---
Fix Version/s: 2.7.1

> Honor zeppelin.livy.url setting as an interpreter setting if specified
> --
>
> Key: AMBARI-24366
> URL: https://issues.apache.org/jira/browse/AMBARI-24366
> Project: Ambari
>  Issue Type: Improvement
>  Components: stacks
>Reporter: Tao Li
>Assignee: Vitaly Brodetskyi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.7.1
>
> Attachments: AMBARI-24366.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The resolved value for this setting from Ambari auto update is not good 
> enough in certain scenarios. It just uses the first host where Livy server is 
> running. We want to have the flexibility to honor an explicit configuration 
> for this setting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (AMBARI-24365) Add livy-client.conf setting to Ambari spark stack

2018-08-14 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi reassigned AMBARI-24365:
--

Assignee: Vitaly Brodetskyi

> Add livy-client.conf setting to Ambari spark stack
> --
>
> Key: AMBARI-24365
> URL: https://issues.apache.org/jira/browse/AMBARI-24365
> Project: Ambari
>  Issue Type: Improvement
>  Components: stacks
>Reporter: Tao Li
>Assignee: Vitaly Brodetskyi
>Priority: Major
>  Labels: pull-request-available
> Attachments: AMBARI-24365.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We have run into a need to add livy-client.conf to Ambari's capability. For 
> example we were trying to configure "livy.rsc.launcher.address" in this 
> setting file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (AMBARI-24366) Honor zeppelin.livy.url setting as an interpreter setting if specified

2018-08-14 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi reassigned AMBARI-24366:
--

Assignee: Vitaly Brodetskyi

> Honor zeppelin.livy.url setting as an interpreter setting if specified
> --
>
> Key: AMBARI-24366
> URL: https://issues.apache.org/jira/browse/AMBARI-24366
> Project: Ambari
>  Issue Type: Improvement
>  Components: stacks
>Reporter: Tao Li
>Assignee: Vitaly Brodetskyi
>Priority: Major
>  Labels: pull-request-available
> Attachments: AMBARI-24366.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The resolved value for this setting from Ambari auto update is not good 
> enough in certain scenarios. It just uses the first host where Livy server is 
> running. We want to have the flexibility to honor an explicit configuration 
> for this setting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24365) Add livy-client.conf setting to Ambari spark stack

2018-08-10 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-24365:
---
Attachment: AMBARI-24365.patch

> Add livy-client.conf setting to Ambari spark stack
> --
>
> Key: AMBARI-24365
> URL: https://issues.apache.org/jira/browse/AMBARI-24365
> Project: Ambari
>  Issue Type: Improvement
>  Components: stacks
>Reporter: Tao Li
>Priority: Major
>  Labels: pull-request-available
> Attachments: AMBARI-24365.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We have run into a need to add livy-client.conf to Ambari's capability. For 
> example we were trying to configure "livy.rsc.launcher.address" in this 
> setting file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24366) Honor zeppelin.livy.url setting as an interpreter setting if specified

2018-08-10 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-24366:
---
Attachment: AMBARI-24366.patch

> Honor zeppelin.livy.url setting as an interpreter setting if specified
> --
>
> Key: AMBARI-24366
> URL: https://issues.apache.org/jira/browse/AMBARI-24366
> Project: Ambari
>  Issue Type: Improvement
>  Components: stacks
>Reporter: Tao Li
>Priority: Major
>  Labels: pull-request-available
> Attachments: AMBARI-24366.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The resolved value for this setting from Ambari auto update is not good 
> enough in certain scenarios. It just uses the first host where Livy server is 
> running. We want to have the flexibility to honor an explicit configuration 
> for this setting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24327) Zeppelin server is shown as started even if it fails during start up

2018-07-23 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-24327:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Zeppelin server is shown as started even if it fails during start up
> 
>
> Key: AMBARI-24327
> URL: https://issues.apache.org/jira/browse/AMBARI-24327
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 2.7.1
>
> Attachments: AMBARI-24327.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Steps to reproduce:
> 1) Set incorrect values in shiro.ini. For example set the value of 
> ldapRealm.hadoopSecurityCredentialPath to incorrect path
> 2) Restart zeppelin.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24327) Zeppelin server is shown as started even if it fails during start up

2018-07-20 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-24327:
---
Status: Patch Available  (was: Open)

> Zeppelin server is shown as started even if it fails during start up
> 
>
> Key: AMBARI-24327
> URL: https://issues.apache.org/jira/browse/AMBARI-24327
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 2.7.1
>
> Attachments: AMBARI-24327.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Steps to reproduce:
> 1) Set incorrect values in shiro.ini. For example set the value of 
> ldapRealm.hadoopSecurityCredentialPath to incorrect path
> 2) Restart zeppelin.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24327) Zeppelin server is shown as started even if it fails during start up

2018-07-20 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-24327:
---
Attachment: AMBARI-24327.patch

> Zeppelin server is shown as started even if it fails during start up
> 
>
> Key: AMBARI-24327
> URL: https://issues.apache.org/jira/browse/AMBARI-24327
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
> Fix For: 2.7.1
>
> Attachments: AMBARI-24327.patch
>
>
> Steps to reproduce:
> 1) Set incorrect values in shiro.ini. For example set the value of 
> ldapRealm.hadoopSecurityCredentialPath to incorrect path
> 2) Restart zeppelin.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-24327) Zeppelin server is shown as started even if it fails during start up

2018-07-20 Thread Vitaly Brodetskyi (JIRA)
Vitaly Brodetskyi created AMBARI-24327:
--

 Summary: Zeppelin server is shown as started even if it fails 
during start up
 Key: AMBARI-24327
 URL: https://issues.apache.org/jira/browse/AMBARI-24327
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Reporter: Vitaly Brodetskyi
Assignee: Vitaly Brodetskyi
 Fix For: 2.7.1


Steps to reproduce:
1) Set incorrect values in shiro.ini. For example set the value of 
ldapRealm.hadoopSecurityCredentialPath to incorrect path
2) Restart zeppelin.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24271) Spark thrift server is not starting on Upgraded cluster

2018-07-09 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-24271:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Spark thrift server is not starting on Upgraded cluster
> ---
>
> Key: AMBARI-24271
> URL: https://issues.apache.org/jira/browse/AMBARI-24271
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 2.7.0
>
> Attachments: AMBARI-24271_part1.patch, AMBARI-24271_part2.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Steps to reproduce :
> 1) Deploy cluster with Ambari-2.6.2.0 + HDP-2.6.5.0
> 2) Upgrade to Ambari-2.7.0.0-876
> 3) Perform Express Upgrade to HDP-3.0.0.0-1621
> After the upgrade, STS fails with the error below:
> {code}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.YarnException):
>  org.apache.hadoop.security.AccessControlException: User spark does not have 
> permission to submit application_1531132300904_0038 to queue default
>  at org.apache.hadoop.yarn.ipc.RPCUtil.getRemoteException(RPCUtil.java:38)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:435)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:320)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:645)
>  at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:277)
>  at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:563)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
> {code}
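
The AccessControlException above means the YARN queue ACLs are denying the spark user 
submit access to the default queue. A rough, self-contained illustration of that check 
(ignoring group membership and ancestor-queue ACLs); it explains the error rather than 
reproducing the actual AMBARI-24271 fix:

{code:python}
# Sketch only: does the capacity-scheduler ACL let a user submit to root.default?
ACL_KEY = "yarn.scheduler.capacity.root.default.acl_submit_applications"

def can_submit_to_default(capacity_scheduler_props, user="spark"):
    # CapacityScheduler treats a missing ACL as "*" (everyone may submit)
    acl = capacity_scheduler_props.get(ACL_KEY, "*").strip()
    if acl == "*":
        return True
    allowed_users = acl.split(" ")[0].split(",") if acl else []
    return user in allowed_users

print(can_submit_to_default({ACL_KEY: "yarn"}))   # False -> submission is rejected
print(can_submit_to_default({}))                  # True  -> open default ACL
{code}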



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24271) Spark thrift server is not starting on Upgraded cluster

2018-07-09 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-24271:
---
Status: Patch Available  (was: Open)

> Spark thrift server is not starting on Upgraded cluster
> ---
>
> Key: AMBARI-24271
> URL: https://issues.apache.org/jira/browse/AMBARI-24271
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
> Fix For: 2.7.0
>
> Attachments: AMBARI-24271_part1.patch, AMBARI-24271_part2.patch
>
>
> Steps to reproduce :
> 1) Deploy cluster with Ambari-2.6.2.0 + HDP-2.6.5.0
> 2) Upgrade to Ambari-2.7.0.0-876
> 3) Perform Express Upgrade to HDP-3.0.0.0-1621
> After the upgrade, STS fails with the error below:
> {code}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.YarnException):
>  org.apache.hadoop.security.AccessControlException: User spark does not have 
> permission to submit application_1531132300904_0038 to queue default
>  at org.apache.hadoop.yarn.ipc.RPCUtil.getRemoteException(RPCUtil.java:38)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:435)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:320)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:645)
>  at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:277)
>  at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:563)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24271) Spark thrift server is not starting on Upgraded cluster

2018-07-09 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-24271:
---
Attachment: AMBARI-24271_part1.patch
AMBARI-24271_part2.patch

> Spark thrift server is not starting on Upgraded cluster
> ---
>
> Key: AMBARI-24271
> URL: https://issues.apache.org/jira/browse/AMBARI-24271
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
> Fix For: 2.7.0
>
> Attachments: AMBARI-24271_part1.patch, AMBARI-24271_part2.patch
>
>
> Steps to reproduce :
> 1) Deploy cluster with Ambari-2.6.2.0 + HDP-2.6.5.0
> 2) Upgrade to Ambari-2.7.0.0-876
> 3) Perform Express Upgrade to HDP-3.0.0.0-1621
> After the upgrade, STS fails with the error below:
> {code}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.YarnException):
>  org.apache.hadoop.security.AccessControlException: User spark does not have 
> permission to submit application_1531132300904_0038 to queue default
>  at org.apache.hadoop.yarn.ipc.RPCUtil.getRemoteException(RPCUtil.java:38)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:435)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:320)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:645)
>  at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:277)
>  at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:563)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-24271) Spark thrift server is not starting on Upgraded cluster

2018-07-09 Thread Vitaly Brodetskyi (JIRA)
Vitaly Brodetskyi created AMBARI-24271:
--

 Summary: Spark thrift server is not starting on Upgraded cluster
 Key: AMBARI-24271
 URL: https://issues.apache.org/jira/browse/AMBARI-24271
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Affects Versions: 2.7.0
Reporter: Vitaly Brodetskyi
Assignee: Vitaly Brodetskyi
 Fix For: 2.7.0


Steps to reproduce :
1) Deploy cluster with Ambari-2.6.2.0 + HDP-2.6.5.0
2) Upgrade to Ambari-2.7.0.0-876
3) Perform Express Upgrade to HDP-3.0.0.0-1621

After the upgrade, STS fails with the error below:

{code}
Caused by: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.YarnException):
 org.apache.hadoop.security.AccessControlException: User spark does not have 
permission to submit application_1531132300904_0038 to queue default
 at org.apache.hadoop.yarn.ipc.RPCUtil.getRemoteException(RPCUtil.java:38)
 at 
org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:435)
 at 
org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:320)
 at 
org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:645)
 at 
org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:277)
 at 
org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:563)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (AMBARI-24023) Make `/hdp/apps/3.0.0.0-X/spark2/spark2-hdp-hive-archive.tar.gz`

2018-06-04 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi resolved AMBARI-24023.

Resolution: Fixed

> Make `/hdp/apps/3.0.0.0-X/spark2/spark2-hdp-hive-archive.tar.gz`
> 
>
> Key: AMBARI-24023
> URL: https://issues.apache.org/jira/browse/AMBARI-24023
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 2.7.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> To optimize YARN cluster applications, Ambari builds an additional file 
> `spark2-hdp-yarn-archive.tar.gz` and uploads it to HDFS. We need to do the 
> same thing for `spark2-hdp-hive-archive.tar.gz` like the following.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-24023) Make `/hdp/apps/3.0.0.0-X/spark2/spark2-hdp-hive-archive.tar.gz`

2018-06-04 Thread Vitaly Brodetskyi (JIRA)
Vitaly Brodetskyi created AMBARI-24023:
--

 Summary: Make 
`/hdp/apps/3.0.0.0-X/spark2/spark2-hdp-hive-archive.tar.gz`
 Key: AMBARI-24023
 URL: https://issues.apache.org/jira/browse/AMBARI-24023
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Affects Versions: 2.7.0
Reporter: Vitaly Brodetskyi
Assignee: Vitaly Brodetskyi
 Fix For: 2.7.0


To optimize YARN cluster applications, Ambari builds an additional file 
`spark2-hdp-yarn-archive.tar.gz` and uploads it to HDFS. We need to do the same 
thing for `spark2-hdp-hive-archive.tar.gz` like the following.
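
The example the description refers to is not included above, so here is a minimal 
sketch of the idea under stated assumptions: bundle the Spark2 client jars into 
spark2-hdp-hive-archive.tar.gz and publish it to /hdp/apps/<version>/spark2/ on HDFS, 
mirroring spark2-hdp-yarn-archive.tar.gz. The paths and the plain `hdfs dfs` upload 
are illustrative; the real Ambari change would go through its own resource helpers:

{code:python}
# Sketch only: build and publish the hive archive the same way as the yarn archive.
import subprocess
import tarfile

version = "3.0.0.0-X"                                   # stack version placeholder
jar_dir = "/usr/hdp/current/spark2-client/jars"         # assumed source of the jars
archive = "/tmp/spark2-hdp-hive-archive.tar.gz"
hdfs_dir = "/hdp/apps/{0}/spark2".format(version)

with tarfile.open(archive, "w:gz") as tar:
    tar.add(jar_dir, arcname=".")

subprocess.check_call(["hdfs", "dfs", "-mkdir", "-p", hdfs_dir])
subprocess.check_call(["hdfs", "dfs", "-put", "-f", archive, hdfs_dir])
{code}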



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23966) Upgrade Spark/Zeppelin/Livy from HDP 2.6 to HDP 3.0

2018-05-30 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-23966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23966:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Upgrade Spark/Zeppelin/Livy from HDP 2.6 to HDP 3.0
> ---
>
> Key: AMBARI-23966
> URL: https://issues.apache.org/jira/browse/AMBARI-23966
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-sever
>Affects Versions: 2.7.0
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 2.7.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Upgrade Spark/Zeppelin/Livy from HDP 2.6 to HDP 3.0
> Configuration changes/ any data migration changes needed to upgrade using 
> express from HDP 2.6 to HDP 3.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23966) Upgrade Spark/Zeppelin/Livy from HDP 2.6 to HDP 3.0

2018-05-29 Thread Vitaly Brodetskyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-23966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23966:
---
Status: Patch Available  (was: Open)

> Upgrade Spark/Zeppelin/Livy from HDP 2.6 to HDP 3.0
> ---
>
> Key: AMBARI-23966
> URL: https://issues.apache.org/jira/browse/AMBARI-23966
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-sever
>Affects Versions: 2.7.0
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 2.7.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Upgrade Spark/Zeppelin/Livy from HDP 2.6 to HDP 3.0
> Configuration changes/ any data migration changes needed to upgrade using 
> express from HDP 2.6 to HDP 3.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-23966) Upgrade Spark/Zeppelin/Livy from HDP 2.6 to HDP 3.0

2018-05-29 Thread Vitaly Brodetskyi (JIRA)
Vitaly Brodetskyi created AMBARI-23966:
--

 Summary: Upgrade Spark/Zeppelin/Livy from HDP 2.6 to HDP 3.0
 Key: AMBARI-23966
 URL: https://issues.apache.org/jira/browse/AMBARI-23966
 Project: Ambari
  Issue Type: Bug
  Components: ambari-sever
Affects Versions: 2.7.0
Reporter: Vitaly Brodetskyi
Assignee: Vitaly Brodetskyi
 Fix For: 2.7.0


Upgrade Spark/Zeppelin/Livy from HDP 2.6 to HDP 3.0

Configuration changes/ any data migration changes needed to upgrade using 
express from HDP 2.6 to HDP 3.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23948) Proxy user settings are missing for livy in unsecured clusters

2018-05-25 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23948:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Proxy user settings are missing for livy in unsecured clusters
> --
>
> Key: AMBARI-23948
> URL: https://issues.apache.org/jira/browse/AMBARI-23948
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 2.7.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The following Hadoop proxy-user settings are missing for livy in the HDFS core-site.xml:
> hadoop.proxyuser.livy.groups=*
> hadoop.proxyuser.livy.hosts=*



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23948) Proxy user settings are missing for livy in unsecured clusters

2018-05-24 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23948:
---
Status: Patch Available  (was: Open)

> Proxy user settings are missing for livy in unsecured clusters
> --
>
> Key: AMBARI-23948
> URL: https://issues.apache.org/jira/browse/AMBARI-23948
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 2.7.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The following Hadoop proxy-user settings are missing for livy in the HDFS core-site.xml:
> hadoop.proxyuser.livy.groups=*
> hadoop.proxyuser.livy.hosts=*



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-23948) Proxy user settings are missing for livy in unsecured clusters

2018-05-24 Thread Vitaly Brodetskyi (JIRA)
Vitaly Brodetskyi created AMBARI-23948:
--

 Summary: Proxy user settings are missing for livy in unsecured 
clusters
 Key: AMBARI-23948
 URL: https://issues.apache.org/jira/browse/AMBARI-23948
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Affects Versions: 2.7.0
Reporter: Vitaly Brodetskyi
Assignee: Vitaly Brodetskyi
 Fix For: 2.7.0


The following Hadoop proxy-user settings are missing for livy in the HDFS core-site.xml:
hadoop.proxyuser.livy.groups=*
hadoop.proxyuser.livy.hosts=*
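For reference, a standalone sketch (not Ambari's own configuration mechanism) that 
adds the two properties to a core-site.xml if they are missing; the file path is 
an assumption.
{code:python}
# Append the Livy proxy-user properties to core-site.xml when absent.
import xml.etree.ElementTree as ET

CORE_SITE = "/etc/hadoop/conf/core-site.xml"  # assumed location
REQUIRED = {
    "hadoop.proxyuser.livy.groups": "*",
    "hadoop.proxyuser.livy.hosts": "*",
}

tree = ET.parse(CORE_SITE)
root = tree.getroot()
present = {prop.findtext("name") for prop in root.findall("property")}

for name, value in REQUIRED.items():
    if name not in present:
        prop = ET.SubElement(root, "property")
        ET.SubElement(prop, "name").text = name
        ET.SubElement(prop, "value").text = value

tree.write(CORE_SITE)
{code}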



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (AMBARI-23659) Add a configuration "metastore.catalog.default=spark" in hive-site.xml for Spark

2018-04-23 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi resolved AMBARI-23659.

Resolution: Fixed

> Add a configuration "metastore.catalog.default=spark" in hive-site.xml for 
> Spark
> 
>
> Key: AMBARI-23659
> URL: https://issues.apache.org/jira/browse/AMBARI-23659
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Major
> Fix For: 2.7.0
>
>
> {{metastore.catalog.default}} is a configuration in Hive 3.x, and its default 
> value is "hive". Recently we added the new Hive metastore 3.0.0 support for 
> Spark, so {{metastore.catalog.default}} should be set to "spark" in 
> hive-site.xml for Spark.
> *metastore.catalog.default=spark*



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-23659) Add a configuration "metastore.catalog.default=spark" in hive-site.xml for Spark

2018-04-23 Thread Vitaly Brodetskyi (JIRA)
Vitaly Brodetskyi created AMBARI-23659:
--

 Summary: Add a configuration "metastore.catalog.default=spark" in 
hive-site.xml for Spark
 Key: AMBARI-23659
 URL: https://issues.apache.org/jira/browse/AMBARI-23659
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Reporter: Vitaly Brodetskyi
Assignee: Vitaly Brodetskyi
 Fix For: 2.7.0


{{metastore.catalog.default}} is a configuration in Hive 3.x, and its default 
value is "hive". Recently we added the new Hive metastore 3.0.0 support for 
Spark, so {{metastore.catalog.default}} should be set to "spark" in 
hive-site.xml for Spark.

*metastore.catalog.default=spark*



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (AMBARI-22728) Ambari is monitoring the wrong Spark Thrift Server and Spark2 Thrift Server ports when using HTTP transport.

2018-04-20 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi resolved AMBARI-22728.

Resolution: Fixed

> Ambari is monitoring the wrong Spark Thrift Server and Spark2 Thrift Server 
> ports when using HTTP transport.
> 
>
> Key: AMBARI-22728
> URL: https://issues.apache.org/jira/browse/AMBARI-22728
> Project: Ambari
>  Issue Type: Bug
>Reporter: Mingjie Tang
>Assignee: Vitaly Brodetskyi
>Priority: Major
> Attachments: AMBARI-22728.diff
>
>
> This happens when we are using HTTP transport for the Spark Thrift Server and 
> Spark2 Thrift Server.
> Looking at the 
> /var/lib/ambari-server/resources/common-services/SPARK2/2.0.0/package/scripts/alerts/alert_spark2_thrift_port.py
> and 
> /var/lib/ambari-server/resources/common-services/SPARK/1.2.1/package/scripts/alerts/alert_spark_thrift_port.py
> scripts, they ignore what the cluster administrator has configured in 
> hive.server2.thrift.port when HTTP transport is set, and completely ignore 
> what is set in hive.server2.thrift.http.port.
> This causes a false alarm unless you change your HTTP port to what the 
> scripts default to (10002 in Spark2 and 10001 in Spark).
> This is not an ideal solution, especially in cases where the Spark Thrift 
> Server is going to be co-located on the same host as a HiveServer2, because 
> that is likely to result in a port conflict.
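A rough sketch of the port selection the alert scripts could perform instead; the 
helper is illustrative, only the two thrift port properties come from this report, 
and hive.server2.transport.mode is the standard Hive switch for HTTP vs binary 
transport.
{code:python}
# Honour the administrator's port settings instead of a hard-coded default.
# The 10002 fallback mirrors the value the current Spark2 script falls back to.
def resolve_thrift_port(hive_site, fallback=10002):
    transport = hive_site.get("hive.server2.transport.mode", "binary")
    if transport == "http":
        return int(hive_site.get("hive.server2.thrift.http.port", fallback))
    return int(hive_site.get("hive.server2.thrift.port", fallback))

# Example: an HTTP-mode config should yield the administrator's HTTP port.
conf = {"hive.server2.transport.mode": "http",
        "hive.server2.thrift.http.port": "10010"}
assert resolve_thrift_port(conf) == 10010
{code}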



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (AMBARI-22728) Ambari is monitoring the wrong Spark Thrift Server and Spark2 Thrift Server ports when using HTTP transport.

2018-04-20 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi reassigned AMBARI-22728:
--

Assignee: Vitaly Brodetskyi  (was: Mingjie Tang)

> Ambari is monitoring the wrong Spark Thrift Server and Spark2 Thrift Server 
> ports when using HTTP transport.
> 
>
> Key: AMBARI-22728
> URL: https://issues.apache.org/jira/browse/AMBARI-22728
> Project: Ambari
>  Issue Type: Bug
>Reporter: Mingjie Tang
>Assignee: Vitaly Brodetskyi
>Priority: Major
> Attachments: AMBARI-22728.diff
>
>
> This happens when we are using HTTP transport for the Spark Thrift Server and 
> Spark2 Thrift Server.
> Looking at the 
> /var/lib/ambari-server/resources/common-services/SPARK2/2.0.0/package/scripts/alerts/alert_spark2_thrift_port.py
> and 
> /var/lib/ambari-server/resources/common-services/SPARK/1.2.1/package/scripts/alerts/alert_spark_thrift_port.py
> scripts, they ignore what the cluster administrator has configured in 
> hive.server2.thrift.port when HTTP transport is set, and completely ignore 
> what is set in hive.server2.thrift.http.port.
> This causes a false alarm unless you change your HTTP port to what the 
> scripts default to (10002 in Spark2 and 10001 in Spark).
> This is not an ideal solution, especially in cases where the Spark Thrift 
> Server is going to be co-located on the same host as a HiveServer2, because 
> that is likely to result in a port conflict.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23589) Enable Security and ACLs in History Server

2018-04-20 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23589:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Enable Security and ACLs in History Server
> --
>
> Key: AMBARI-23589
> URL: https://issues.apache.org/jira/browse/AMBARI-23589
> Project: Ambari
>  Issue Type: New Feature
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Critical
> Fix For: 2.7.0
>
> Attachments: AMBARI-23589.patch
>
>
> *Usecase*
> The Spark History Server should have authentication enabled out of the box. 
> By default only authenticated Ambari Admin user should have access to SHS UI. 
> Ambari Admin should be able to see history of all jobs. For others, the job 
> submitter should see history of their own jobs and not anyone else's.
> *TestCase*
> * Verify that SHS enables Authentication OOB
> * Enable Kerberos in the Cluster with Ambari, verify that SHS is still 
> enabled for authentication
> * Verify that Admin can see history of all jobs
> * Verify a job submitter only sees history of their own jobs
> Spark's current Web UI only has SSL encryption and doesn't have mutual 
> authentication. From my understanding it is necessary and valuable to add 
> this support, both for the live UI and the history UI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23617) Change cardinality for Zeppelin

2018-04-19 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23617:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Change cardinality for Zeppelin
> ---
>
> Key: AMBARI-23617
> URL: https://issues.apache.org/jira/browse/AMBARI-23617
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
> Fix For: 2.7.0
>
>
> User should be able to add more than 1 Zeppelin and Livy servers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23616) Change cardinality for SHS

2018-04-19 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23616:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Change cardinality for SHS
> --
>
> Key: AMBARI-23616
> URL: https://issues.apache.org/jira/browse/AMBARI-23616
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
> Fix For: 2.7.0
>
>
> User should be able to add more than 1 SHS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23617) Change cardinality for Zeppelin

2018-04-18 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23617:
---
Status: Patch Available  (was: Open)

> Change cardinality for Zeppelin
> ---
>
> Key: AMBARI-23617
> URL: https://issues.apache.org/jira/browse/AMBARI-23617
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
> Fix For: 2.7.0
>
>
> User should be able to add more than 1 Zeppelin and Livy servers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-23617) Change cardinality for Zeppelin

2018-04-18 Thread Vitaly Brodetskyi (JIRA)
Vitaly Brodetskyi created AMBARI-23617:
--

 Summary: Change cardinality for Zeppelin
 Key: AMBARI-23617
 URL: https://issues.apache.org/jira/browse/AMBARI-23617
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Reporter: Vitaly Brodetskyi
Assignee: Vitaly Brodetskyi
 Fix For: 2.7.0


User should be able to add more than 1 Zeppelin and Livy servers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23616) Change cardinality for SHS

2018-04-18 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23616:
---
Status: Patch Available  (was: Open)

> Change cardinality for SHS
> --
>
> Key: AMBARI-23616
> URL: https://issues.apache.org/jira/browse/AMBARI-23616
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
> Fix For: 2.7.0
>
>
> User should be able to add more than 1 SHS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-23616) Change cardinality for SHS

2018-04-18 Thread Vitaly Brodetskyi (JIRA)
Vitaly Brodetskyi created AMBARI-23616:
--

 Summary: Change cardinality for SHS
 Key: AMBARI-23616
 URL: https://issues.apache.org/jira/browse/AMBARI-23616
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Reporter: Vitaly Brodetskyi
Assignee: Vitaly Brodetskyi
 Fix For: 2.7.0


User should be able to add more than 1 SHS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (AMBARI-23545) Remove livy2.pyspark3 interpreter

2018-04-17 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi resolved AMBARI-23545.

Resolution: Fixed

> Remove livy2.pyspark3 interpreter
> -
>
> Key: AMBARI-23545
> URL: https://issues.apache.org/jira/browse/AMBARI-23545
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
> Fix For: 2.7.0
>
>
> Need to remove livy2.pyspark3 interpreter from Zeppelin



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23589) Enable Security and ACLs in History Server

2018-04-16 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23589:
---
Status: Patch Available  (was: Open)

> Enable Security and ACLs in History Server
> --
>
> Key: AMBARI-23589
> URL: https://issues.apache.org/jira/browse/AMBARI-23589
> Project: Ambari
>  Issue Type: New Feature
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Critical
> Fix For: 2.7.0
>
> Attachments: AMBARI-23589.patch
>
>
> *Usecase*
> The Spark History Server should have authentication enabled out of the box. 
> By default only authenticated Ambari Admin user should have access to SHS UI. 
> Ambari Admin should be able to see history of all jobs. For others, the job 
> submitter should see history of their own jobs and not anyone else's.
> *TestCase*
> * Verify that SHS enables Authentication OOB
> * Enable Kerberos in the Cluster with Ambari, verify that SHS is still 
> enabled for authentication
> * Verify that Admin can see history of all jobs
> * Verify a job submitter only sees history of their own jobs
> Spark's current Web UI only has SSL encryption and doesn't have mutual 
> authentication. From my understanding it is necessary and valuable to add 
> this support, both for the live UI and the history UI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23589) Enable Security and ACLs in History Server

2018-04-16 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23589:
---
Attachment: AMBARI-23589.patch

> Enable Security and ACLs in History Server
> --
>
> Key: AMBARI-23589
> URL: https://issues.apache.org/jira/browse/AMBARI-23589
> Project: Ambari
>  Issue Type: New Feature
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Critical
> Fix For: 2.7.0
>
> Attachments: AMBARI-23589.patch
>
>
> *Usecase*
> The Spark History Server should have authentication enabled out of the box. 
> By default only authenticated Ambari Admin user should have access to SHS UI. 
> Ambari Admin should be able to see history of all jobs. For others, the job 
> submitter should see history of their own jobs and not anyone else's.
> *TestCase*
> * Verify that SHS enables Authentication OOB
> * Enable Kerberos in the Cluster with Ambari, verify that SHS is still 
> enabled for authentication
> * Verify that Admin can see history of all jobs
> * Verify a job submitter only sees history of their own jobs
> Spark's current Web UI only has SSL encryption and doesn't have mutual 
> authentication. From my understanding it is necessary and valuable to add 
> this support, both for the live UI and the history UI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-23589) Enable Security and ACLs in History Server

2018-04-16 Thread Vitaly Brodetskyi (JIRA)
Vitaly Brodetskyi created AMBARI-23589:
--

 Summary: Enable Security and ACLs in History Server
 Key: AMBARI-23589
 URL: https://issues.apache.org/jira/browse/AMBARI-23589
 Project: Ambari
  Issue Type: New Feature
  Components: ambari-server
Affects Versions: 2.7.0
Reporter: Vitaly Brodetskyi
Assignee: Vitaly Brodetskyi
 Fix For: 2.7.0


*Usecase*
The Spark History Server should have authentication enabled out of the box. By 
default only authenticated Ambari Admin user should have access to SHS UI. 
Ambari Admin should be able to see history of all jobs. For others, the job 
submitter should see history of their own jobs and not anyone else's.

*TestCase*
* Verify that SHS enables Authentication OOB
* Enable Kerberos in the Cluster with Ambari, verify that SHS is still enabled 
for authentication
* Verify that Admin can see history of all jobs
* Verify a job submitter only sees history of their own jobs

Spark's current Web UI only has SSL encryption and doesn't have mutual 
authentication. From my understanding it is necessary and valuable to add this 
support, both for the live UI and the history UI.
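A minimal sketch of what switching this on in spark2-defaults could look like, 
assuming the standard Spark ACL properties; the admin principal listed is an 
example only, not the default Ambari would ship.
{code:python}
# Illustrative spark2-defaults additions for SHS authentication/ACLs.
# Property names are standard Spark settings; the values are examples.
shs_acl_defaults = {
    "spark.acls.enable": "true",             # enforce view/modify ACL checks
    "spark.history.ui.acls.enable": "true",  # apply per-application ACLs in the SHS UI
    "spark.admin.acls": "ambari-admin",      # example admin allowed to see all jobs
}

with open("spark-defaults.conf", "a") as conf:
    for key, value in shs_acl_defaults.items():
        conf.write("{0} {1}\n".format(key, value))
{code}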



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23570) Ambari fails to install Zeppelin on Zuul test

2018-04-13 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23570:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Ambari fails to install Zeppelin on Zuul test
> -
>
> Key: AMBARI-23570
> URL: https://issues.apache.org/jira/browse/AMBARI-23570
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
> Fix For: 2.7.0
>
> Attachments: AMBARI-23570.patch
>
>
> 61ZEPPELIN_MASTER INSTALL : 2018-04-08 18:47:48,913 - The 'zeppelin-server' 
> component did not advertise a version. This may indicate a problem with the 
> component packaging. However, the stack-select tool was able to report a 
> single version installed (3.0.0.0-1161). This is the version that will be 
> reported.
> (most recent call last):
> File 
> "/usr/lib/ambari-agent/lib/resource_management/core/providers/package/__init__.py",
>  line 283, in _call_with_retries
> code, out = func(cmd, **kwargs)
> File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, 
> in inner
> result = function(command, **kwargs)
> File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, 
> in checked_call
> tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
> File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, 
> in _call_wrapper
> result = _call(command, **kwargs_copy)
> File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 303, 
> in _call
> raise ExecutionFailed(err_msg, code, out, err)
> ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install zeppelin' 
> returned 1. Error: Nothing to do
> The above exception was the cause of the following exception:
> 2018-04-08 18:48:36,896 - The 'zeppelin-server' component did not advertise a 
> version. This may indicate a problem with the component packaging. However, 
> the stack-select tool was able to report a single version installed 
> (3.0.0.0-1161). This is the version that will be reported.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23570) Ambari fails to install Zeppelin on Zuul test

2018-04-13 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23570:
---
Status: Patch Available  (was: Open)

> Ambari fails to install Zeppelin on Zuul test
> -
>
> Key: AMBARI-23570
> URL: https://issues.apache.org/jira/browse/AMBARI-23570
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
> Fix For: 2.7.0
>
> Attachments: AMBARI-23570.patch
>
>
> 61ZEPPELIN_MASTER INSTALL : 2018-04-08 18:47:48,913 - The 'zeppelin-server' 
> component did not advertise a version. This may indicate a problem with the 
> component packaging. However, the stack-select tool was able to report a 
> single version installed (3.0.0.0-1161). This is the version that will be 
> reported.
> (most recent call last):
> File 
> "/usr/lib/ambari-agent/lib/resource_management/core/providers/package/__init__.py",
>  line 283, in _call_with_retries
> code, out = func(cmd, **kwargs)
> File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, 
> in inner
> result = function(command, **kwargs)
> File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, 
> in checked_call
> tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
> File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, 
> in _call_wrapper
> result = _call(command, **kwargs_copy)
> File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 303, 
> in _call
> raise ExecutionFailed(err_msg, code, out, err)
> ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install zeppelin' 
> returned 1. Error: Nothing to do
> The above exception was the cause of the following exception:
> 2018-04-08 18:48:36,896 - The 'zeppelin-server' component did not advertise a 
> version. This may indicate a problem with the component packaging. However, 
> the stack-select tool was able to report a single version installed 
> (3.0.0.0-1161). This is the version that will be reported.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23570) Ambari fails to install Zeppelin on Zuul test

2018-04-12 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23570:
---
Attachment: AMBARI-23570.patch

> Ambari fails to install Zeppelin on Zuul test
> -
>
> Key: AMBARI-23570
> URL: https://issues.apache.org/jira/browse/AMBARI-23570
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
> Fix For: 2.7.0
>
> Attachments: AMBARI-23570.patch
>
>
> 61ZEPPELIN_MASTER INSTALL : 2018-04-08 18:47:48,913 - The 'zeppelin-server' 
> component did not advertise a version. This may indicate a problem with the 
> component packaging. However, the stack-select tool was able to report a 
> single version installed (3.0.0.0-1161). This is the version that will be 
> reported.
> (most recent call last):
> File 
> "/usr/lib/ambari-agent/lib/resource_management/core/providers/package/__init__.py",
>  line 283, in _call_with_retries
> code, out = func(cmd, **kwargs)
> File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, 
> in inner
> result = function(command, **kwargs)
> File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, 
> in checked_call
> tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
> File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, 
> in _call_wrapper
> result = _call(command, **kwargs_copy)
> File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 303, 
> in _call
> raise ExecutionFailed(err_msg, code, out, err)
> ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install zeppelin' 
> returned 1. Error: Nothing to do
> The above exception was the cause of the following exception:
> 2018-04-08 18:48:36,896 - The 'zeppelin-server' component did not advertise a 
> version. This may indicate a problem with the component packaging. However, 
> the stack-select tool was able to report a single version installed 
> (3.0.0.0-1161). This is the version that will be reported.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-23570) Ambari fails to install Zeppelin on Zuul test

2018-04-12 Thread Vitaly Brodetskyi (JIRA)
Vitaly Brodetskyi created AMBARI-23570:
--

 Summary: Ambari fails to install Zeppelin on Zuul test
 Key: AMBARI-23570
 URL: https://issues.apache.org/jira/browse/AMBARI-23570
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Affects Versions: 2.7.0
Reporter: Vitaly Brodetskyi
Assignee: Vitaly Brodetskyi
 Fix For: 2.7.0


61ZEPPELIN_MASTER INSTALL : 2018-04-08 18:47:48,913 - The 'zeppelin-server' 
component did not advertise a version. This may indicate a problem with the 
component packaging. However, the stack-select tool was able to report a single 
version installed (3.0.0.0-1161). This is the version that will be reported.
Traceback (most recent call last):
  File "/usr/lib/ambari-agent/lib/resource_management/core/providers/package/__init__.py", line 283, in _call_with_retries
    code, out = func(cmd, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 303, in _call
    raise ExecutionFailed(err_msg, code, out, err)
ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install zeppelin' returned 1. Error: Nothing to do
The above exception was the cause of the following exception:

2018-04-08 18:48:36,896 - The 'zeppelin-server' component did not advertise a 
version. This may indicate a problem with the component packaging. However, the 
stack-select tool was able to report a single version installed (3.0.0.0-1161). 
This is the version that will be reported.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-23545) Remove livy2.pyspark3 interpreter

2018-04-11 Thread Vitaly Brodetskyi (JIRA)
Vitaly Brodetskyi created AMBARI-23545:
--

 Summary: Remove livy2.pyspark3 interpreter
 Key: AMBARI-23545
 URL: https://issues.apache.org/jira/browse/AMBARI-23545
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Reporter: Vitaly Brodetskyi
Assignee: Vitaly Brodetskyi
 Fix For: 2.7.0


Need to remove livy2.pyspark3 interpreter from Zeppelin



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23500) Fix master.py to work with new property format in interpreters

2018-04-11 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23500:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Fix master.py to work with new property format in interpreters
> --
>
> Key: AMBARI-23500
> URL: https://issues.apache.org/jira/browse/AMBARI-23500
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Critical
> Fix For: 2.7.0
>
> Attachments: AMBARI-23500.patch
>
>
> As of now property format in interpreters was changed so we should update 
> python code to work correctly with new format.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (AMBARI-23532) Set the default value of spark2.driver to "org.apache.spark-project.org.apache.hive.jdbc.HiveDriver"

2018-04-11 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi resolved AMBARI-23532.

Resolution: Fixed

> Set the default value of spark2.driver to 
> "org.apache.spark-project.org.apache.hive.jdbc.HiveDriver"
> 
>
> Key: AMBARI-23532
> URL: https://issues.apache.org/jira/browse/AMBARI-23532
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
> Fix For: 2.7.0
>
> Attachments: AMBARI-23532.patch
>
>
> In HDP 3.x, Hive JDBC protocol has been updated to version 11 (hive-jdbc 
> 3.0), but Spark is still using version 8 (hive-jdbc 1.2). We are building a 
> shaded jar of hive-jdbc 1.2 for Spark.
> In this BUG, we need to set the default value of Spark2 jdbc driver to 
> "org.apache.spark-project.org.apache.hive.jdbc.HiveDriver", which is in the 
> hive-jdbc-1.2 shaded jar.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23532) Set the default value of spark2.driver to "org.apache.spark-project.org.apache.hive.jdbc.HiveDriver"

2018-04-10 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23532:
---
Attachment: AMBARI-23532.patch

> Set the default value of spark2.driver to 
> "org.apache.spark-project.org.apache.hive.jdbc.HiveDriver"
> 
>
> Key: AMBARI-23532
> URL: https://issues.apache.org/jira/browse/AMBARI-23532
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
> Fix For: 2.7.0
>
> Attachments: AMBARI-23532.patch
>
>
> In HDP 3.x, Hive JDBC protocol has been updated to version 11 (hive-jdbc 
> 3.0), but Spark is still using version 8 (hive-jdbc 1.2). We are building a 
> shaded jar of hive-jdbc 1.2 for Spark.
> In this BUG, we need to set the default value of Spark2 jdbc driver to 
> "org.apache.spark-project.org.apache.hive.jdbc.HiveDriver", which is in the 
> hive-jdbc-1.2 shaded jar.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-23532) Set the default value of spark2.driver to "org.apache.spark-project.org.apache.hive.jdbc.HiveDriver"

2018-04-10 Thread Vitaly Brodetskyi (JIRA)
Vitaly Brodetskyi created AMBARI-23532:
--

 Summary: Set the default value of spark2.driver to 
"org.apache.spark-project.org.apache.hive.jdbc.HiveDriver"
 Key: AMBARI-23532
 URL: https://issues.apache.org/jira/browse/AMBARI-23532
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Affects Versions: 2.7.0
Reporter: Vitaly Brodetskyi
Assignee: Vitaly Brodetskyi
 Fix For: 2.7.0


In HDP 3.x, Hive JDBC protocol has been updated to version 11 (hive-jdbc 3.0), 
but Spark is still using version 8 (hive-jdbc 1.2). We are building a shaded 
jar of hive-jdbc 1.2 for Spark.

In this BUG, we need to set the default value of Spark2 jdbc driver to 
"org.apache.spark-project.org.apache.hive.jdbc.HiveDriver", which is in the 
hive-jdbc-1.2 shaded jar.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23500) Fix master.py to work with new property format in interpreters

2018-04-06 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23500:
---
Attachment: AMBARI-23500.patch

> Fix master.py to work with new property format in interpreters
> --
>
> Key: AMBARI-23500
> URL: https://issues.apache.org/jira/browse/AMBARI-23500
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Critical
> Fix For: 2.7.0
>
> Attachments: AMBARI-23500.patch
>
>
> As of now property format in interpreters was changed so we should update 
> python code to work correctly with new format.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23500) Fix master.py to work with new property format in interpreters

2018-04-06 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23500:
---
Status: Patch Available  (was: Open)

> Fix master.py to work with new property format in interpreters
> --
>
> Key: AMBARI-23500
> URL: https://issues.apache.org/jira/browse/AMBARI-23500
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Critical
> Fix For: 2.7.0
>
> Attachments: AMBARI-23500.patch
>
>
> As of now property format in interpreters was changed so we should update 
> python code to work correctly with new format.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-23500) Fix master.py to work with new property format in interpreters

2018-04-06 Thread Vitaly Brodetskyi (JIRA)
Vitaly Brodetskyi created AMBARI-23500:
--

 Summary: Fix master.py to work with new property format in 
interpreters
 Key: AMBARI-23500
 URL: https://issues.apache.org/jira/browse/AMBARI-23500
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Reporter: Vitaly Brodetskyi
Assignee: Vitaly Brodetskyi
 Fix For: 2.7.0


As of now the property format in the interpreter settings has changed, so we should 
update the Python code to work correctly with the new format.
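As a rough illustration of the kind of change master.py needs, a helper that 
tolerates both a bare value and a structured property object; the 
{"name", "value", "type"} shape is an assumption about the newer Zeppelin layout, 
not something stated in this report.
{code:python}
# Read an interpreter property value regardless of which format it uses.
def property_value(prop):
    # Newer format (assumed): {"name": ..., "value": ..., "type": ...}
    if isinstance(prop, dict):
        return prop.get("value")
    # Older format: the bare value itself
    return prop

assert property_value("yarn-cluster") == "yarn-cluster"
assert property_value({"name": "master", "value": "yarn-cluster", "type": "string"}) == "yarn-cluster"
{code}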



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23430) Interpreter configs are not retained after zeppelin restart

2018-04-03 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23430:
---
Attachment: AMBARI-23430.patch

> Interpreter configs are not retained after zeppelin restart
> ---
>
> Key: AMBARI-23430
> URL: https://issues.apache.org/jira/browse/AMBARI-23430
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
> Fix For: 2.7.0
>
> Attachments: AMBARI-23430.patch
>
>
> Interpreter configs are not retained after zeppelin restart. This issue is 
> seen for below interpreters :
> {code}
> jdbc
> md
> sh
> spark2
> {code}
> Steps to repro:
> 1) Add or edit new property to any of the above mentioned interpreters
> 2) Restart Zeppelin
> 3) See that the changes are not retained



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23430) Interpreter configs are not retained after zeppelin restart

2018-04-03 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23430:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Interpreter configs are not retained after zeppelin restart
> ---
>
> Key: AMBARI-23430
> URL: https://issues.apache.org/jira/browse/AMBARI-23430
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
> Fix For: 2.7.0
>
> Attachments: AMBARI-23430.patch
>
>
> Interpreter configs are not retained after zeppelin restart. This issue is 
> seen for below interpreters :
> {code}
> jdbc
> md
> sh
> spark2
> {code}
> Steps to repro:
> 1) Add or edit new property to any of the above mentioned interpreters
> 2) Restart Zeppelin
> 3) See that the changes are not retained



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23430) Interpreter configs are not retained after zeppelin restart

2018-04-03 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23430:
---
Status: Patch Available  (was: Open)

> Interpreter configs are not retained after zeppelin restart
> ---
>
> Key: AMBARI-23430
> URL: https://issues.apache.org/jira/browse/AMBARI-23430
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
> Fix For: 2.7.0
>
> Attachments: AMBARI-23430.patch
>
>
> Interpreter configs are not retained after zeppelin restart. This issue is 
> seen for below interpreters :
> {code}
> jdbc
> md
> sh
> spark2
> {code}
> Steps to repro:
> 1) Add or edit new property to any of the above mentioned interpreters
> 2) Restart Zeppelin
> 3) See that the changes are not retained



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-23430) Interpreter configs are not retained after zeppelin restart

2018-04-03 Thread Vitaly Brodetskyi (JIRA)
Vitaly Brodetskyi created AMBARI-23430:
--

 Summary: Interpreter configs are not retained after zeppelin 
restart
 Key: AMBARI-23430
 URL: https://issues.apache.org/jira/browse/AMBARI-23430
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Reporter: Vitaly Brodetskyi
Assignee: Vitaly Brodetskyi
 Fix For: 2.7.0


Interpreter configs are not retained after a Zeppelin restart. This issue is seen 
for the following interpreters:
{code}
jdbc
md
sh
spark2
{code}

Steps to reproduce:
1) Add or edit a property in any of the above-mentioned interpreters
2) Restart Zeppelin
3) See that the changes are not retained



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23366) Add service advisor for SPARK2 HDP 3.0

2018-03-28 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23366:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Add service advisor for SPARK2 HDP 3.0
> --
>
> Key: AMBARI-23366
> URL: https://issues.apache.org/jira/browse/AMBARI-23366
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Critical
> Fix For: 2.7.0
>
>
> Create and collect all recommendations/validations in service advisor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23366) Add service advisor for SPARK2 HDP 3.0

2018-03-26 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23366:
---
Status: Patch Available  (was: Open)

> Add service advisor for SPARK2 HDP 3.0
> --
>
> Key: AMBARI-23366
> URL: https://issues.apache.org/jira/browse/AMBARI-23366
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Critical
> Fix For: 2.7.0
>
>
> Create and collect all recommendations/validations in service advisor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-23366) Add service advisor for SPARK2 HDP 3.0

2018-03-26 Thread Vitaly Brodetskyi (JIRA)
Vitaly Brodetskyi created AMBARI-23366:
--

 Summary: Add service advisor for SPARK2 HDP 3.0
 Key: AMBARI-23366
 URL: https://issues.apache.org/jira/browse/AMBARI-23366
 Project: Ambari
  Issue Type: Task
  Components: ambari-server
Affects Versions: 2.7.0
Reporter: Vitaly Brodetskyi
Assignee: Vitaly Brodetskyi
 Fix For: 2.7.0


Create and collect all recommendations/validations in service advisor.
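A sketch of the kind of logic such a service advisor collects, written as plain 
functions over config dicts rather than the real Ambari base-class API; the 
property names and values below are illustrative assumptions, not the stack 
defaults this task actually ships.
{code:python}
# Illustrative recommendation/validation logic for SPARK2 configs.
def recommend_spark2_defaults(configurations):
    spark = configurations.setdefault("spark2-defaults", {}).setdefault("properties", {})
    spark.setdefault("spark.eventLog.enabled", "true")
    spark.setdefault("spark.eventLog.dir", "hdfs:///spark2-history/")
    return configurations

def validate_spark2_defaults(configurations):
    items = []
    spark = configurations.get("spark2-defaults", {}).get("properties", {})
    if spark.get("spark.eventLog.enabled") != "true":
        items.append({"config-type": "spark2-defaults",
                      "level": "WARN",
                      "message": "spark.eventLog.enabled should be true so the "
                                 "Spark2 History Server can list completed applications."})
    return items
{code}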



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23324) Fix Zeppelin service dependency in HDP 3.0

2018-03-23 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23324:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Fix Zeppelin service dependency in HDP 3.0
> --
>
> Key: AMBARI-23324
> URL: https://issues.apache.org/jira/browse/AMBARI-23324
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Critical
> Fix For: 2.7.0
>
>
> As of now, in HDP 3.0 Zeppelin contains such a dependency:
> {code}
> <dependency>
>   <name>SPARK/SPARK_CLIENT</name>
>   <scope>host</scope>
>   <auto-deploy>
>     <enabled>true</enabled>
>   </auto-deploy>
> </dependency>
> {code}
> but the HDP 3.0 stack doesn't have SPARK, only SPARK2 and SPARK2_CLIENT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23324) Fix Zeppelin service dependency in HDP 3.0

2018-03-22 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23324:
---
Status: Patch Available  (was: Open)

> Fix Zeppelin service dependency in HDP 3.0
> --
>
> Key: AMBARI-23324
> URL: https://issues.apache.org/jira/browse/AMBARI-23324
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Critical
> Fix For: 2.7.0
>
>
> As of now, in HDP 3.0 Zeppelin contains such a dependency:
> {code}
> <dependency>
>   <name>SPARK/SPARK_CLIENT</name>
>   <scope>host</scope>
>   <auto-deploy>
>     <enabled>true</enabled>
>   </auto-deploy>
> </dependency>
> {code}
> but the HDP 3.0 stack doesn't have SPARK, only SPARK2 and SPARK2_CLIENT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-23324) Fix Zeppelin service dependency in HDP 3.0

2018-03-22 Thread Vitaly Brodetskyi (JIRA)
Vitaly Brodetskyi created AMBARI-23324:
--

 Summary: Fix Zeppelin service dependency in HDP 3.0
 Key: AMBARI-23324
 URL: https://issues.apache.org/jira/browse/AMBARI-23324
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Affects Versions: 2.7.0
Reporter: Vitaly Brodetskyi
Assignee: Vitaly Brodetskyi
 Fix For: 2.7.0


As of now, in HDP 3.0 Zeppelin contains such a dependency:
{code}
<dependency>
  <name>SPARK/SPARK_CLIENT</name>
  <scope>host</scope>
  <auto-deploy>
    <enabled>true</enabled>
  </auto-deploy>
</dependency>
{code}
but the HDP 3.0 stack doesn't have SPARK, only SPARK2 and SPARK2_CLIENT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23292) livy.superusers is not getting configured correctly when Spark2 and Zeppelin are added as a service later on

2018-03-20 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23292:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> livy.superusers is not getting configured correctly when Spark2 and Zeppelin 
> are added as a service later on
> 
>
> Key: AMBARI-23292
> URL: https://issues.apache.org/jira/browse/AMBARI-23292
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 2.7.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This bug was found during a bug bash. I had a cluster where Spark2 and Zeppelin 
> were not present. I added both of these services via the Add Service wizard in 
> Ambari. 
> The string '${zeppelin-env/zeppelin_user}${principal_suffix}' is not getting 
> replaced by the correct Zeppelin principal value, and hence the %livy2 
> paragraphs are failing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23292) livy.superusers is not getting configured correctly when Spark2 and Zeppelin are added as a service later on

2018-03-19 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23292:
---
Status: Patch Available  (was: Open)

> livy.superusers is not getting configured correctly when Spark2 and Zeppelin 
> are added as a service later on
> 
>
> Key: AMBARI-23292
> URL: https://issues.apache.org/jira/browse/AMBARI-23292
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 2.7.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This bug was found during a bug bash. I had a cluster where Spark2 and Zeppelin 
> were not present. I added both of these services via the Add Service wizard in 
> Ambari. 
> The string '${zeppelin-env/zeppelin_user}${principal_suffix}' is not getting 
> replaced by the correct Zeppelin principal value, and hence the %livy2 
> paragraphs are failing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-23292) livy.superusers is not getting configured correctly when Spark2 and Zeppelin are added as a service later on

2018-03-19 Thread Vitaly Brodetskyi (JIRA)
Vitaly Brodetskyi created AMBARI-23292:
--

 Summary: livy.superusers is not getting configured correctly when 
Spark2 and Zeppelin are added as a service later on
 Key: AMBARI-23292
 URL: https://issues.apache.org/jira/browse/AMBARI-23292
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Reporter: Vitaly Brodetskyi
Assignee: Vitaly Brodetskyi
 Fix For: 2.7.0


This bug was found during a bug bash. I had a cluster where Spark2 and Zeppelin 
were not present. I added both of these services via the Add Service wizard in Ambari. 

The string '${zeppelin-env/zeppelin_user}${principal_suffix}' is not getting 
replaced by the correct Zeppelin principal value, and hence the %livy2 paragraphs 
are failing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (AMBARI-23280) Explicitly specify zeppelin.config.storage.class to org.apache.zeppelin.storage.FileSystemConfigStorage

2018-03-19 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi resolved AMBARI-23280.

Resolution: Fixed

> Explicitly specify zeppelin.config.storage.class to 
> org.apache.zeppelin.storage.FileSystemConfigStorage
> ---
>
> Key: AMBARI-23280
> URL: https://issues.apache.org/jira/browse/AMBARI-23280
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
> Fix For: 2.7.0
>
>
> In Zeppelin 0.8, the default zeppelin.config.storage.class is 
> org.apache.zeppelin.storage.LocalConfigStorage, so we need to explicitly set it 
> to org.apache.zeppelin.storage.FileSystemConfigStorage to enable remote 
> config storage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-23280) Explicitly specify zeppelin.config.storage.class to org.apache.zeppelin.storage.FileSystemConfigStorage

2018-03-19 Thread Vitaly Brodetskyi (JIRA)
Vitaly Brodetskyi created AMBARI-23280:
--

 Summary: Explicitly specify zeppelin.config.storage.class to 
org.apache.zeppelin.storage.FileSystemConfigStorage
 Key: AMBARI-23280
 URL: https://issues.apache.org/jira/browse/AMBARI-23280
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Affects Versions: 2.7.0
Reporter: Vitaly Brodetskyi
Assignee: Vitaly Brodetskyi
 Fix For: 2.7.0


In Zeppelin 0.8, the default zeppelin.config.storage.class is 
org.apache.zeppelin.storage.LocalConfigStorage, so we need to explicitly set it to 
org.apache.zeppelin.storage.FileSystemConfigStorage to enable remote config 
storage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23245) Invalid value for zeppelin.config.fs.dir property

2018-03-15 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23245:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Invalid value for zeppelin.config.fs.dir property
> -
>
> Key: AMBARI-23245
> URL: https://issues.apache.org/jira/browse/AMBARI-23245
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 2.6.2
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The zeppelin.config.fs.dir property should not be present for Zeppelin if it 
> was installed with a stack < HDP-2.6.3. It should be added with the value 
> "conf" if Zeppelin is installed with stack HDP-2.6.3 or higher.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23245) Invalid value for zeppelin.config.fs.dir property

2018-03-15 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23245:
---
Status: Patch Available  (was: Open)

> Invalid value for zeppelin.config.fs.dir property
> -
>
> Key: AMBARI-23245
> URL: https://issues.apache.org/jira/browse/AMBARI-23245
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 2.6.2
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The zeppelin.config.fs.dir property should not be present for Zeppelin if it 
> was installed with a stack < HDP-2.6.3. It should be added with the value 
> "conf" if Zeppelin is installed with stack HDP-2.6.3 or higher.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (AMBARI-23245) Invalid value for zeppelin.config.fs.dir property

2018-03-15 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi reassigned AMBARI-23245:
--

Assignee: Vitaly Brodetskyi

> Invalid value for zeppelin.config.fs.dir property
> -
>
> Key: AMBARI-23245
> URL: https://issues.apache.org/jira/browse/AMBARI-23245
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 2.6.2
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The zeppelin.config.fs.dir property should not be present for Zeppelin if it 
> was installed with a stack < HDP-2.6.3. It should be added with the value 
> "conf" if Zeppelin is installed with stack HDP-2.6.3 or higher.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-23245) Invalid value for zeppelin.config.fs.dir property

2018-03-15 Thread Vitaly Brodetskyi (JIRA)
Vitaly Brodetskyi created AMBARI-23245:
--

 Summary: Invalid value for zeppelin.config.fs.dir property
 Key: AMBARI-23245
 URL: https://issues.apache.org/jira/browse/AMBARI-23245
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Reporter: Vitaly Brodetskyi
 Fix For: 2.6.2


The zeppelin.config.fs.dir property should not be present for Zeppelin if it was 
installed with a stack < HDP-2.6.3. It should be added with the value "conf" if 
Zeppelin is installed with stack HDP-2.6.3 or higher.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (AMBARI-23161) Invalid storage property value for zeppelin in stack HDP 2.6

2018-03-13 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi resolved AMBARI-23161.

Resolution: Fixed

> Invalid storage property value for zeppelin in stack HDP 2.6
> 
>
> Key: AMBARI-23161
> URL: https://issues.apache.org/jira/browse/AMBARI-23161
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 2.6.2
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> For stacks lower than HDP 2.6.3, property {{zeppelin.notebook.storage}} 
> should have value {{org.apache.zeppelin.notebook.repo.VFSNotebookRepo}}. For 
> stacks HDP 2.6.3 and higher this property should have value 
> {{org.apache.zeppelin.notebook.repo.FileSystemNotebookRepo}}. As of now, for 
> all HDP 2.6.x stacks the storage property defaults to FileSystemNotebookRepo, 
> which is not correct, so it should be fixed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-23161) Invalid storage property value for zeppelin in stack HDP 2.6

2018-03-06 Thread Vitaly Brodetskyi (JIRA)
Vitaly Brodetskyi created AMBARI-23161:
--

 Summary: Invalid storage property value for zeppelin in stack HDP 
2.6
 Key: AMBARI-23161
 URL: https://issues.apache.org/jira/browse/AMBARI-23161
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Reporter: Vitaly Brodetskyi
Assignee: Vitaly Brodetskyi
 Fix For: 2.6.2


For stacks lower than HDP 2.6.3, the {{zeppelin.notebook.storage}} property should 
have the value {{org.apache.zeppelin.notebook.repo.VFSNotebookRepo}}. For stacks 
HDP 2.6.3 and higher it should have the value 
{{org.apache.zeppelin.notebook.repo.FileSystemNotebookRepo}}. Currently the 
property defaults to FileSystemNotebookRepo for all HDP 2.6.x stacks, which is 
incorrect and should be fixed.
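
A hedged sketch of the expected default selection (illustrative only, not the
actual service-advisor code); the stack version strings are assumptions:

{code:python}
# Sketch: choose the Zeppelin notebook storage class from the stack version.
def notebook_storage_class(stack_version):
    # Compare only the major.minor.maintenance part, e.g. "2.6.3.0-235" -> (2, 6, 3).
    parts = tuple(int(p) for p in stack_version.split("-")[0].split(".")[:3])
    if parts >= (2, 6, 3):
        return "org.apache.zeppelin.notebook.repo.FileSystemNotebookRepo"
    return "org.apache.zeppelin.notebook.repo.VFSNotebookRepo"

print(notebook_storage_class("2.6.1.0-129"))  # VFSNotebookRepo
print(notebook_storage_class("2.6.3.0-235"))  # FileSystemNotebookRepo
{code}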



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-23091) Zeppelin Notebook SSL credentials in Ambari UI are in plain text rather than being hidden

2018-02-27 Thread Vitaly Brodetskyi (JIRA)
Vitaly Brodetskyi created AMBARI-23091:
--

 Summary: Zeppelin Notebook SSL credentials in Ambari UI are in 
plain text rather than being hidden
 Key: AMBARI-23091
 URL: https://issues.apache.org/jira/browse/AMBARI-23091
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Reporter: Vitaly Brodetskyi
Assignee: Vitaly Brodetskyi
 Fix For: 2.6.2


The Zeppelin Notebook keystore and truststore passwords appear in plain text in 
the Ambari UI rather than being masked.
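
A hedged sketch of the expected masking behaviour; the property names are the
usual zeppelin-site SSL password keys and are listed here only for illustration:

{code:python}
# Sketch: mask password-typed Zeppelin properties before they are displayed.
PASSWORD_PROPERTIES = {
    "zeppelin.ssl.keystore.password",
    "zeppelin.ssl.key.manager.password",
    "zeppelin.ssl.truststore.password",
}

def mask_for_display(properties):
    # Replace password values with a placeholder; leave everything else as-is.
    return {key: ("*****" if key in PASSWORD_PROPERTIES else value)
            for key, value in properties.items()}

print(mask_for_display({"zeppelin.ssl.keystore.password": "secret",
                        "zeppelin.server.port": "9995"}))
{code}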



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (AMBARI-23043) 'Table or view not found error' with livy/livy2 interpreter on upgraded cluster

2018-02-26 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi resolved AMBARI-23043.

Resolution: Fixed

> 'Table or view not found error' with livy/livy2 interpreter on upgraded 
> cluster
> ---
>
> Key: AMBARI-23043
> URL: https://issues.apache.org/jira/browse/AMBARI-23043
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 2.6.2
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The test has been performed as below:
>  CentOS6 + Ambari-2.5.1 + HDP-2.6.1 -> AU to Ambari-2.6.2 -> Full EU to 
> HDP-2.6.5.0-74  -> Run stack tests
> With the livy2 interpreter, whenever a temporary view or table is registered, 
> the subsequent query against it fails with a 'Table or view not found' error:
> {code:java}
> org.apache.spark.sql.AnalysisException: Table or view not found: word_counts; 
> line 2 pos 24
>  at 
> org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
>  at 
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveRelations$$lookupTableFromCatalog(Analyzer.scala:649)
>  at 
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.resolveRelation(Analyzer.scala:601)
>  at 
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:631)
>  at 
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:624)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:62)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:62)
>  at 
> org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:61)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
>  at 
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
>  at 
> org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
>  at 
> org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:59)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
>  at 
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
>  at 
> org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
>  at 
> org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:59)
>  at 
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:624)
>  at 
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:570)
>  at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:85)
>  at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:82)
>  at 
> scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
>  at scala.collection.immutable.List.foldLeft(List.scala:84)
>  at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:82)
>  at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:74)
>  at scala.collection.immutable.List.foreach(List.scala:381)
>  at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:74)
>  at 
> org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:69)
>  at 
> org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:67)
>  at 
> org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:50)
>  at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:67)
>  at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:637)
>  ... 50 elided
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#760

[jira] [Updated] (AMBARI-23043) 'Table or view not found error' with livy/livy2 interpreter on upgraded cluster to Fenton-M30

2018-02-26 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23043:
---
Description: 
The test has been performed as below:
 CentOS6 + Ambari-2.5.1 + HDP-2.6.1 -> AU to Ambari-2.6.2 -> Full EU to 
HDP-2.6.5.0-74  -> Run stack tests

With the livy2 interpreter, whenever a temporary view or table is registered, 
the subsequent query against it fails with a 'Table or view not found' error:
{code:java}
org.apache.spark.sql.AnalysisException: Table or view not found: word_counts; 
line 2 pos 24
 at 
org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
 at 
org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveRelations$$lookupTableFromCatalog(Analyzer.scala:649)
 at 
org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.resolveRelation(Analyzer.scala:601)
 at 
org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:631)
 at 
org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:624)
 at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:62)
 at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:62)
 at 
org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
 at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:61)
 at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
 at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
 at 
org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
 at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
 at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
 at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:59)
 at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
 at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
 at 
org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
 at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
 at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
 at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:59)
 at 
org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:624)
 at 
org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:570)
 at 
org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:85)
 at 
org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:82)
 at 
scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
 at scala.collection.immutable.List.foldLeft(List.scala:84)
 at 
org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:82)
 at 
org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:74)
 at scala.collection.immutable.List.foreach(List.scala:381)
 at 
org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:74)
 at 
org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:69)
 at 
org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:67)
 at 
org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:50)
 at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:67)
 at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:637)
 ... 50 elided
{code}

  was:
The test has been performed as below:
CentOS6 + Ambari-2.5.1 + HDP-2.6.1 -> AU to Ambari-2.6.2 -> Full EU to 
HDP-2.6.5.0-74 (Fenton-M30) -> Run stack tests

I see that with livy2 interpreter, anytime we register a temporary view or 
table - the corresponding query on that table will fail with 'Table or view not 
found error'

{code:java}
org.apache.spark.sql.AnalysisException: Table or view not found: word_counts; 
line 2 pos 24
 at 
org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
 at 
org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveRelations$$lookupTableFromCatalog(Analyzer.scala:649)
 at 
org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.resolveRelation(Analyzer.scala:601)
 at 
org.apache.s

[jira] [Updated] (AMBARI-23043) 'Table or view not found error' with livy/livy2 interpreter on upgraded cluster

2018-02-26 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23043:
---
Summary: 'Table or view not found error' with livy/livy2 interpreter on 
upgraded cluster  (was: 'Table or view not found error' with livy/livy2 
interpreter on upgraded cluster to Fenton-M30)

> 'Table or view not found error' with livy/livy2 interpreter on upgraded 
> cluster
> ---
>
> Key: AMBARI-23043
> URL: https://issues.apache.org/jira/browse/AMBARI-23043
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 2.6.2
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The test has been performed as below:
>  CentOS6 + Ambari-2.5.1 + HDP-2.6.1 -> AU to Ambari-2.6.2 -> Full EU to 
> HDP-2.6.5.0-74  -> Run stack tests
> With the livy2 interpreter, whenever a temporary view or table is registered, 
> the subsequent query against it fails with a 'Table or view not found' error:
> {code:java}
> org.apache.spark.sql.AnalysisException: Table or view not found: word_counts; 
> line 2 pos 24
>  at 
> org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
>  at 
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveRelations$$lookupTableFromCatalog(Analyzer.scala:649)
>  at 
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.resolveRelation(Analyzer.scala:601)
>  at 
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:631)
>  at 
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:624)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:62)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:62)
>  at 
> org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:61)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
>  at 
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
>  at 
> org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
>  at 
> org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:59)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
>  at 
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
>  at 
> org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
>  at 
> org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:59)
>  at 
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:624)
>  at 
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:570)
>  at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:85)
>  at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:82)
>  at 
> scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
>  at scala.collection.immutable.List.foldLeft(List.scala:84)
>  at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:82)
>  at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:74)
>  at scala.collection.immutable.List.foreach(List.scala:381)
>  at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:74)
>  at 
> org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:69)
>  at 
> org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:67)
>  at 
> org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:50)
>  at org.apache.spark.sql.Dataset$.ofR

[jira] [Updated] (AMBARI-23043) 'Table or view not found error' with livy/livy2 interpreter on upgraded cluster to Fenton-M30

2018-02-21 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23043:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.6

> 'Table or view not found error' with livy/livy2 interpreter on upgraded 
> cluster to Fenton-M30
> -
>
> Key: AMBARI-23043
> URL: https://issues.apache.org/jira/browse/AMBARI-23043
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 2.6.2
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The test has been performed as below:
> CentOS6 + Ambari-2.5.1 + HDP-2.6.1 -> AU to Ambari-2.6.2 -> Full EU to 
> HDP-2.6.5.0-74 (Fenton-M30) -> Run stack tests
> With the livy2 interpreter, whenever a temporary view or table is registered, 
> the subsequent query against it fails with a 'Table or view not found' error:
> {code:java}
> org.apache.spark.sql.AnalysisException: Table or view not found: word_counts; 
> line 2 pos 24
>  at 
> org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
>  at 
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveRelations$$lookupTableFromCatalog(Analyzer.scala:649)
>  at 
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.resolveRelation(Analyzer.scala:601)
>  at 
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:631)
>  at 
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:624)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:62)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:62)
>  at 
> org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:61)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
>  at 
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
>  at 
> org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
>  at 
> org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:59)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
>  at 
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
>  at 
> org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
>  at 
> org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:59)
>  at 
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:624)
>  at 
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:570)
>  at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:85)
>  at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:82)
>  at 
> scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
>  at scala.collection.immutable.List.foldLeft(List.scala:84)
>  at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:82)
>  at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:74)
>  at scala.collection.immutable.List.foreach(List.scala:381)
>  at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:74)
>  at 
> org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:69)
>  at 
> org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:67)
>  at 
> org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:50)
>  at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:67)
>  at org.apache.spark.sql.SparkS

[jira] [Updated] (AMBARI-23043) 'Table or view not found error' with livy/livy2 interpreter on upgraded cluster to Fenton-M30

2018-02-21 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-23043:
---
Status: Patch Available  (was: Open)

> 'Table or view not found error' with livy/livy2 interpreter on upgraded 
> cluster to Fenton-M30
> -
>
> Key: AMBARI-23043
> URL: https://issues.apache.org/jira/browse/AMBARI-23043
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 2.6.2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The test has been performed as below:
> CentOS6 + Ambari-2.5.1 + HDP-2.6.1 -> AU to Ambari-2.6.2 -> Full EU to 
> HDP-2.6.5.0-74 (Fenton-M30) -> Run stack tests
> With the livy2 interpreter, whenever a temporary view or table is registered, 
> the subsequent query against it fails with a 'Table or view not found' error:
> {code:java}
> org.apache.spark.sql.AnalysisException: Table or view not found: word_counts; 
> line 2 pos 24
>  at 
> org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
>  at 
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveRelations$$lookupTableFromCatalog(Analyzer.scala:649)
>  at 
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.resolveRelation(Analyzer.scala:601)
>  at 
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:631)
>  at 
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:624)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:62)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:62)
>  at 
> org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:61)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
>  at 
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
>  at 
> org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
>  at 
> org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:59)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
>  at 
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
>  at 
> org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
>  at 
> org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
>  at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:59)
>  at 
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:624)
>  at 
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:570)
>  at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:85)
>  at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:82)
>  at 
> scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
>  at scala.collection.immutable.List.foldLeft(List.scala:84)
>  at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:82)
>  at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:74)
>  at scala.collection.immutable.List.foreach(List.scala:381)
>  at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:74)
>  at 
> org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:69)
>  at 
> org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:67)
>  at 
> org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:50)
>  at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:67)
>  at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:637)
>  ... 50 elided
> {code}



-

[jira] [Created] (AMBARI-23043) 'Table or view not found error' with livy/livy2 interpreter on upgraded cluster to Fenton-M30

2018-02-21 Thread Vitaly Brodetskyi (JIRA)
Vitaly Brodetskyi created AMBARI-23043:
--

 Summary: 'Table or view not found error' with livy/livy2 
interpreter on upgraded cluster to Fenton-M30
 Key: AMBARI-23043
 URL: https://issues.apache.org/jira/browse/AMBARI-23043
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Reporter: Vitaly Brodetskyi
Assignee: Vitaly Brodetskyi
 Fix For: 2.6.2


The test has been performed as below:
CentOS6 + Ambari-2.5.1 + HDP-2.6.1 -> AU to Ambari-2.6.2 -> Full EU to 
HDP-2.6.5.0-74 (Fenton-M30) -> Run stack tests

With the livy2 interpreter, whenever a temporary view or table is registered, 
the subsequent query against it fails with a 'Table or view not found' error 
(a minimal reproducer sketch follows the stack trace):

{code:java}
org.apache.spark.sql.AnalysisException: Table or view not found: word_counts; 
line 2 pos 24
 at 
org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
 at 
org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveRelations$$lookupTableFromCatalog(Analyzer.scala:649)
 at 
org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.resolveRelation(Analyzer.scala:601)
 at 
org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:631)
 at 
org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:624)
 at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:62)
 at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:62)
 at 
org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
 at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:61)
 at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
 at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
 at 
org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
 at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
 at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
 at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:59)
 at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
 at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
 at 
org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
 at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
 at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
 at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:59)
 at 
org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:624)
 at 
org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:570)
 at 
org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:85)
 at 
org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:82)
 at 
scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
 at scala.collection.immutable.List.foldLeft(List.scala:84)
 at 
org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:82)
 at 
org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:74)
 at scala.collection.immutable.List.foreach(List.scala:381)
 at 
org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:74)
 at 
org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:69)
 at 
org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:67)
 at 
org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:50)
 at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:67)
 at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:637)
 ... 50 elided
{code}
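
A minimal reproducer sketch of the failing pattern; in Zeppelin the session is
provided by the %livy2.pyspark interpreter, and the application name, data and
table name below are illustrative:

{code:python}
from pyspark.sql import SparkSession

# Sketch: stand-alone equivalent of the failing Zeppelin %livy2 paragraph.
spark = SparkSession.builder.appName("word_counts_repro").getOrCreate()

df = spark.createDataFrame([("ambari", 3), ("zeppelin", 5)], ["word", "cnt"])
df.createOrReplaceTempView("word_counts")

# On the affected clusters the equivalent second step fails with
# "Table or view not found: word_counts".
spark.sql("SELECT word, cnt FROM word_counts ORDER BY cnt DESC").show()
{code}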



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-22898) spark2_shuffle is not present inside yarn.nodemanager.aux-services

2018-02-02 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-22898:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to AMBARI-2.7.0.0

> spark2_shuffle is not present inside yarn.nodemanager.aux-services
> --
>
> Key: AMBARI-22898
> URL: https://issues.apache.org/jira/browse/AMBARI-22898
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
> Fix For: 2.7.0
>
> Attachments: AMBARI-22898.patch
>
>
> A couple of tests failed. In these tests, 'spark.dynamicAllocation.enabled' and 
> 'spark.shuffle.service.enabled' were both set to 'true'.
> The YARN application logs show the following error while launching containers 
> (a configuration sketch follows the log excerpt):
> {code:java}
> 18/01/12 22:03:52 INFO YarnAllocator: Received 3 containers from YARN, 
> launching executors on 3 of them.
> 18/01/12 22:03:52 ERROR YarnAllocator: Failed to launch executor 3 on 
> container container_e04_1515724290515_0078_08_04
> org.apache.spark.SparkException: Exception while starting container 
> container_e04_1515724290515_0078_08_04 on host 
> ctr-e137-1514896590304-9678-01-06.hwx.site
>  at 
> org.apache.spark.deploy.yarn.ExecutorRunnable.startContainer(ExecutorRunnable.scala:125)
>  at 
> org.apache.spark.deploy.yarn.ExecutorRunnable.run(ExecutorRunnable.scala:65)
>  at 
> org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$runAllocatedContainers$1$$anon$1.run(YarnAllocator.scala:533)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The 
> auxService:spark2_shuffle does not exist
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  at 
> org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateExceptionImpl(SerializedExceptionPBImpl.java:171)
>  at 
> org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:182)
>  at 
> org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
>  at 
> org.apache.hadoop.yarn.client.api.impl.NMClientImpl.startContainer(NMClientImpl.java:211)
>  at 
> org.apache.spark.deploy.yarn.ExecutorRunnable.startContainer(ExecutorRunnable.scala:122)
>  ... 5 more
> {code}
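
The missing piece is the spark2_shuffle entry in yarn.nodemanager.aux-services
(together with its companion class property, normally
org.apache.spark.network.yarn.YarnShuffleService). A hedged sketch of the
adjustment, with an illustrative starting value:

{code:python}
# Sketch: make sure spark2_shuffle is listed in yarn.nodemanager.aux-services.
def with_spark2_shuffle(aux_services):
    services = [s.strip() for s in aux_services.split(",") if s.strip()]
    if "spark2_shuffle" not in services:
        services.append("spark2_shuffle")
    return ",".join(services)

# Illustrative current value on the affected clusters:
print(with_spark2_shuffle("mapreduce_shuffle,spark_shuffle"))
# -> mapreduce_shuffle,spark_shuffle,spark2_shuffle
{code}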



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-22892) Ambari 2.99.99 lists services that are not present in Atlantic-Beta1

2018-02-02 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-22892:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to AMBARI-2.7.0.0

> Ambari 2.99.99 lists services that are not present in Atlantic-Beta1
> 
>
> Key: AMBARI-22892
> URL: https://issues.apache.org/jira/browse/AMBARI-22892
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
> Fix For: 2.7.0
>
> Attachments: AMBARI-22892.patch, AMBARI-22892_part2.diff
>
>
> Remove these services from the list:
>  - Sqoop
>  - Oozie
>  - Storm
>  - Slider



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-22898) spark2_shuffle is not present inside yarn.nodemanager.aux-services

2018-02-01 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-22898:
---
Status: Patch Available  (was: Open)

> spark2_shuffle is not present inside yarn.nodemanager.aux-services
> --
>
> Key: AMBARI-22898
> URL: https://issues.apache.org/jira/browse/AMBARI-22898
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
> Fix For: 2.7.0
>
> Attachments: AMBARI-22898.patch
>
>
> A couple of tests failed. In these tests, 'spark.dynamicAllocation.enabled' and 
> 'spark.shuffle.service.enabled' were both set to 'true'.
> The YARN application logs show the following error while launching containers:
> {code:java}
> 18/01/12 22:03:52 INFO YarnAllocator: Received 3 containers from YARN, 
> launching executors on 3 of them.
> 18/01/12 22:03:52 ERROR YarnAllocator: Failed to launch executor 3 on 
> container container_e04_1515724290515_0078_08_04
> org.apache.spark.SparkException: Exception while starting container 
> container_e04_1515724290515_0078_08_04 on host 
> ctr-e137-1514896590304-9678-01-06.hwx.site
>  at 
> org.apache.spark.deploy.yarn.ExecutorRunnable.startContainer(ExecutorRunnable.scala:125)
>  at 
> org.apache.spark.deploy.yarn.ExecutorRunnable.run(ExecutorRunnable.scala:65)
>  at 
> org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$runAllocatedContainers$1$$anon$1.run(YarnAllocator.scala:533)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The 
> auxService:spark2_shuffle does not exist
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  at 
> org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateExceptionImpl(SerializedExceptionPBImpl.java:171)
>  at 
> org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:182)
>  at 
> org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
>  at 
> org.apache.hadoop.yarn.client.api.impl.NMClientImpl.startContainer(NMClientImpl.java:211)
>  at 
> org.apache.spark.deploy.yarn.ExecutorRunnable.startContainer(ExecutorRunnable.scala:122)
>  ... 5 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

