ambari git commit: AMBARI-21685. Component fails to install with error, 'The stack packages are not defined on the command. Unable to load packages for the stack-select tool' (alejandro)

2017-08-08 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/trunk 64cae8e0e -> ea2c432fc


AMBARI-21685. Component fails to install with error, 'The stack packages are 
not defined on the command. Unable to load packages for the stack-select tool' 
(alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/ea2c432f
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/ea2c432f
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/ea2c432f

Branch: refs/heads/trunk
Commit: ea2c432fcff5807b3cffc5cb94e0764cb8f94aa7
Parents: 64cae8e
Author: Alejandro Fernandez 
Authored: Tue Aug 8 13:41:28 2017 -0700
Committer: Alejandro Fernandez 
Committed: Tue Aug 8 13:41:28 2017 -0700

--
 .../resources/stacks/HDP/3.0/configuration/cluster-env.xml | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/ea2c432f/ambari-server/src/main/resources/stacks/HDP/3.0/configuration/cluster-env.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/HDP/3.0/configuration/cluster-env.xml 
b/ambari-server/src/main/resources/stacks/HDP/3.0/configuration/cluster-env.xml
index 2fa33bd..1b903b1 100644
--- 
a/ambari-server/src/main/resources/stacks/HDP/3.0/configuration/cluster-env.xml
+++ 
b/ambari-server/src/main/resources/stacks/HDP/3.0/configuration/cluster-env.xml
@@ -262,14 +262,14 @@ gpgcheck=0
     </value-attributes>
     <on-ambari-upgrade add="true"/>
   </property>
-  <!-- Define stack_select_packages property in the base stack. DO NOT override this property for each stack version -->
+  <!-- Define stack_packages property in the base stack. DO NOT override this property for each stack version -->
   <property>
-    <name>stack_select_packages</name>
+    <name>stack_packages</name>
     <value/>
     <description>Associations between component and stack-select tools.</description>
     <property-type>VALUE_FROM_PROPERTY_FILE</property-type>
     <value-attributes>
-      <property-file-name>stack_select_packages.json</property-file-name>
+      <property-file-name>stack_packages.json</property-file-name>
       <property-file-type>json</property-file-type>
       <read-only>true</read-only>
       <hidden>false</hidden>

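The fix is a pure rename: the cluster-env property and its backing JSON file become stack_packages, matching the key the agent-side stack-select helper reads off the command. A minimal sketch of that lookup, assuming a hypothetical JSON shape and helper name (load_stack_select_packages is illustrative, not the real Ambari API):

import json

def load_stack_select_packages(command_params, stack_name, component):
  # Reproduce the failure mode from the subject line when the key is absent.
  blob = command_params.get("stack_packages")  # formerly "stack_select_packages"
  if blob is None:
    raise ValueError("The stack packages are not defined on the command. "
                     "Unable to load packages for the stack-select tool")
  return json.loads(blob)[stack_name]["stack-select"][component]

# Illustrative payload only:
params = {"stack_packages": json.dumps(
  {"HDP": {"stack-select": {"OOZIE_SERVER": ["oozie-server"]}}})}
print(load_stack_select_packages(params, "HDP", "OOZIE_SERVER"))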


ambari git commit: AMBARI-21673. Deploy fails with alert_state constraint violation (Myroslav Papirkovskyy via alejandro)

2017-08-07 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/trunk 7a90a2604 -> e88b61e6c


AMBARI-21673. Deploy fails with alert_state constraint violation (Myroslav 
Papirkovskyy via alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/e88b61e6
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/e88b61e6
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/e88b61e6

Branch: refs/heads/trunk
Commit: e88b61e6c7831a569876dcbac524fbe2b465c967
Parents: 7a90a26
Author: Alejandro Fernandez 
Authored: Mon Aug 7 17:10:05 2017 -0700
Committer: Alejandro Fernandez 
Committed: Mon Aug 7 17:10:05 2017 -0700

--
 .../stacks/HDP/3.0/configuration/cluster-env.xml| 12 
 1 file changed, 12 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/e88b61e6/ambari-server/src/main/resources/stacks/HDP/3.0/configuration/cluster-env.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/HDP/3.0/configuration/cluster-env.xml 
b/ambari-server/src/main/resources/stacks/HDP/3.0/configuration/cluster-env.xml
index c2e2971..2fa33bd 100644
--- 
a/ambari-server/src/main/resources/stacks/HDP/3.0/configuration/cluster-env.xml
+++ 
b/ambari-server/src/main/resources/stacks/HDP/3.0/configuration/cluster-env.xml
@@ -220,6 +220,18 @@ gpgcheck=0
     </value-attributes>
     <on-ambari-upgrade add="true"/>
   </property>
+  <!-- Define stack_name property in the base stack. DO NOT override this property for each stack version -->
+  <property>
+    <name>stack_name</name>
+    <value>HDP</value>
+    <description>The name of the stack.</description>
+    <value-attributes>
+      <read-only>true</read-only>
+      <overridable>false</overridable>
+      <visible>false</visible>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
   <!-- Define stack_tools property in the base stack. DO NOT override this property for each stack version -->
   <property>
     <name>stack_tools</name>

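Agent scripts elsewhere in this digest read the new property with the default() helper, e.g. stack_name = default("/hostLevelParams/stack_name", None). A self-contained sketch of that lookup-with-fallback pattern, mimicking resource_management.libraries.functions.default without the Ambari imports (the real helper reads the global command config rather than taking it as an argument):

def default(path, config, default_value=None):
  # Walk a '/'-separated path through nested dicts; fall back if any hop is missing.
  node = config
  for key in path.strip("/").split("/"):
    if not isinstance(node, dict) or key not in node:
      return default_value
    node = node[key]
  return node

command = {"hostLevelParams": {"stack_name": "HDP"}}
print(default("/hostLevelParams/stack_name", command))           # HDP
print(default("/hostLevelParams/missing", command, "fallback"))  # fallback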


ambari git commit: AMBARI-21627. ADDENDUM. Cross-stack upgrade from IOP to HDP, ranger audit properties need to be deleted (alejandro)

2017-08-07 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/branch-2.5 f5e9d170e -> 253364400


AMBARI-21627. ADDENDUM. Cross-stack upgrade from IOP to HDP, ranger audit 
properties need to be deleted (alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/25336440
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/25336440
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/25336440

Branch: refs/heads/branch-2.5
Commit: 2533644003a58fa3a6909d10ed184eb9d126fe1c
Parents: f5e9d17
Author: Alejandro Fernandez 
Authored: Fri Aug 4 11:14:38 2017 -0700
Committer: Alejandro Fernandez 
Committed: Mon Aug 7 14:37:51 2017 -0700

--
 .../stacks/BigInsights/4.2.5/upgrades/config-upgrade.xml | 11 +++
 .../4.2.5/upgrades/nonrolling-upgrade-to-hdp-2.6.xml |  4 
 .../stacks/BigInsights/4.2/upgrades/config-upgrade.xml   | 10 ++
 .../4.2/upgrades/nonrolling-upgrade-to-hdp-2.6.xml   |  4 
 4 files changed, 29 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/25336440/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/config-upgrade.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/config-upgrade.xml
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/config-upgrade.xml
index f6a21ff..6ed6a11 100644
--- 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/config-upgrade.xml
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/config-upgrade.xml
@@ -56,6 +56,17 @@
 core-site
 
   
+
+  
+ranger-knox-audit
+
+
+
+
+
+
+
+  
 
   
 

http://git-wip-us.apache.org/repos/asf/ambari/blob/25336440/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
index 361fb56..e5f3690 100644
--- 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
@@ -234,6 +234,10 @@
 
   
 
+  
+
+  
+
   
   
 

http://git-wip-us.apache.org/repos/asf/ambari/blob/25336440/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/config-upgrade.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/config-upgrade.xml
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/config-upgrade.xml
index 566bcd6..42fdca5 100644
--- 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/config-upgrade.xml
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/config-upgrade.xml
@@ -145,6 +145,16 @@
 
   
 
+  
+ranger-knox-audit
+
+
+
+
+
+
+
+  
 
   
 

http://git-wip-us.apache.org/repos/asf/ambari/blob/25336440/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
index bfed19d..b8c23bb 100644
--- 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
@@ -223,6 +223,10 @@
 
   
 
+  
+
+  
+
   
   
 

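The definition bodies above were stripped by the archive, but their shape is a config-upgrade transfer with operation="delete": remove the obsolete Ranger DB-audit keys from each audit config type during the upgrade. A sketch of that delete semantics in Python; the key names below are illustrative, since the exact list is not visible in this rendering:

def apply_delete_transfers(config, delete_keys):
  # Drop each named key if present, as a <transfer operation="delete"/> does.
  for key in delete_keys:
    config.pop(key, None)
  return config

ranger_knox_audit = {"xasecure.audit.destination.db": "false",
                     "xasecure.audit.destination.hdfs": "true"}
apply_delete_transfers(ranger_knox_audit, ["xasecure.audit.destination.db"])
print(ranger_knox_audit)  # only the hdfs key survives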


ambari git commit: AMBARI-21627. Cross-stack upgrade from IOP to HDP, ranger audit properties need to be deleted (alejandro)

2017-08-02 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/branch-2.5 f7a51bf0a -> 266deaf1d


AMBARI-21627. Cross-stack upgrade from IOP to HDP, ranger audit properties need 
to be deleted (alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/266deaf1
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/266deaf1
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/266deaf1

Branch: refs/heads/branch-2.5
Commit: 266deaf1dc0129ee447e9255ed253a16535e6bf5
Parents: f7a51bf
Author: Alejandro Fernandez 
Authored: Tue Aug 1 15:30:28 2017 -0700
Committer: Alejandro Fernandez 
Committed: Wed Aug 2 18:14:15 2017 -0700

--
 .../4.2.5/upgrades/config-upgrade.xml   | 50 +
 .../upgrades/nonrolling-upgrade-to-hdp-2.6.xml  | 18 +++
 .../BigInsights/4.2/upgrades/config-upgrade.xml | 56 +++-
 .../upgrades/nonrolling-upgrade-to-hdp-2.6.xml  | 20 ++-
 4 files changed, 141 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/266deaf1/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/config-upgrade.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/config-upgrade.xml
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/config-upgrade.xml
index d40d4d6..f6a21ff 100644
--- 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/config-upgrade.xml
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/config-upgrade.xml
@@ -71,6 +71,17 @@
 yarn-env
 
   
+
+  
+ranger-yarn-audit
+
+
+
+
+
+
+
+  
 
   
 
@@ -112,6 +123,17 @@
 hbase-env
 
   
+
+  
+ranger-hbase-audit
+
+
+
+
+
+
+
+  
 
   
 
@@ -249,6 +271,17 @@
 
 
   
+
+  
+ranger-hive-audit
+
+
+
+
+
+
+
+  
 
   
   
@@ -301,5 +334,22 @@
 
   
 
+
+
+  
+
+  
+ranger-knox-audit
+
+
+
+
+
+
+
+  
+
+  
+
   
 

http://git-wip-us.apache.org/repos/asf/ambari/blob/266deaf1/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
index 609a4fe..361fb56 100644
--- 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
@@ -247,6 +247,10 @@
 
   
 
+  
+
+  
+
   
   
 
@@ -277,6 +281,10 @@
 
   
 
+  
+
+  
+
   
   
 
@@ -334,6 +342,11 @@
   
 
   
+
+  
+
+  
+
   
 
   
@@ -356,6 +369,11 @@
   
 
   
+
+  
+  
+
+  
 
 
 

http://git-wip-us.apache.org/repos/asf/ambari/blob/266deaf1/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/config-upgrade.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/config-upgrade.xml
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/config-upgrade.xml
index 9c9737f..f544e10 100644
--- 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/config-upgrade.xml
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/config-upgrade.xml
@@ -78,7 +78,7 @@
 
   
 
-  
+  
 ranger-kms-audit
 
 
@@ -88,6 +88,7 @@
 
 
   
+
   
 kms-log4j
 
@@ -120,6 +121,7 @@
 
 
   
+
 
   
 
@@ -148,6 +150,7 @@
 core-site
 
   
+
 
   
 
@@ -164,6 +167,17 @@
 yarn-env
 
   
+
+  
+ran

ambari git commit: AMBARI-21463. FIXING MERGE ERROR. Cross-stack upgrade, Oozie restart fails with ext-2.2.zip missing error, stack_tools.py is missing get_stack_name in __all__, disable BigInsights in UI (alejandro)

2017-07-27 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/trunk ae1369742 -> 8d3570f8a


AMBARI-21463. FIXING MERGE ERROR. Cross-stack upgrade, Oozie restart fails with 
ext-2.2.zip missing error, stack_tools.py is missing get_stack_name in __all__, 
disable BigInsights in UI (alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/8d3570f8
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/8d3570f8
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/8d3570f8

Branch: refs/heads/trunk
Commit: 8d3570f8aba559741d4e45da425eda3613c8d405
Parents: ae13697
Author: Alejandro Fernandez 
Authored: Thu Jul 27 15:16:25 2017 -0700
Committer: Alejandro Fernandez 
Committed: Thu Jul 27 15:16:25 2017 -0700

--
 .../common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py | 8 
 .../OOZIE/4.0.0.2.0/package/scripts/oozie_server_upgrade.py  | 2 +-
 2 files changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/8d3570f8/ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py
--
diff --git 
a/ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py
 
b/ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py
index 3467ed2..695395a 100644
--- 
a/ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py
+++ 
b/ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py
@@ -52,7 +52,7 @@ from ambari_commons.inet_utils import download_file
 from resource_management.core import Logger
 
 @OsFamilyFuncImpl(os_family=OSConst.WINSRV_FAMILY)
-def oozie(is_server=False):
+def oozie(is_server=False, upgrade_type=None):
   import params
 
   from status_params import oozie_server_win_service_name
@@ -99,7 +99,7 @@ def oozie(is_server=False):
 
 # TODO: see if see can remove this
 @OsFamilyFuncImpl(os_family=OsFamilyImpl.DEFAULT)
-def oozie(is_server=False):
+def oozie(is_server=False, upgrade_type=None):
   import params
 
   if is_server:
@@ -189,8 +189,8 @@ def oozie(is_server=False):
 
   oozie_ownership()
   
-  if is_server:  
-oozie_server_specific()
+  if is_server:
+oozie_server_specific(upgrade_type)
   
 def oozie_ownership():
   import params

http://git-wip-us.apache.org/repos/asf/ambari/blob/8d3570f8/ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_server_upgrade.py
--
diff --git 
a/ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_server_upgrade.py
 
b/ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_server_upgrade.py
index 3edb042..23b39ef 100644
--- 
a/ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_server_upgrade.py
+++ 
b/ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_server_upgrade.py
@@ -41,7 +41,7 @@ BACKUP_CONF_ARCHIVE = "oozie-conf-backup.tar"
 class OozieUpgrade(Script):
 
   @staticmethod
-  def prepare_libext_directory():
+  def prepare_libext_directory(upgrade_type=None):
 """
 Performs the following actions on libext:
   - creates /current/oozie/libext and recursively


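The merge fix threads upgrade_type from both OS-family oozie() entry points down to oozie_server_specific(), and gives prepare_libext_directory() a default so pre-existing call sites keep working. A stripped-down replay of that call chain (the bodies are stand-ins for the real logic):

def oozie_server_specific(upgrade_type):
  # upgrade_type is None outside a stack upgrade, so plain restarts
  # take the default path.
  return "upgrade" if upgrade_type is not None else "normal"

def oozie(is_server=False, upgrade_type=None):
  if is_server:
    return oozie_server_specific(upgrade_type)

print(oozie(is_server=True))                              # normal
print(oozie(is_server=True, upgrade_type="NON_ROLLING"))  # upgrade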

ambari git commit: AMBARI-21326. Implement CodeCache related configuration change for HBase daemons (Ted Yu via alejandro)

2017-07-27 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/trunk eb2725158 -> ae1369742


AMBARI-21326. Implement CodeCache related configuration change for HBase 
daemons (Ted Yu via alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/ae136974
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/ae136974
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/ae136974

Branch: refs/heads/trunk
Commit: ae1369742081df3b99cf3ef678b24afc20b062fb
Parents: eb27251
Author: Alejandro Fernandez 
Authored: Thu Jul 27 15:06:38 2017 -0700
Committer: Alejandro Fernandez 
Committed: Thu Jul 27 15:06:38 2017 -0700

--
 .../common-services/HBASE/0.96.0.2.0/configuration/hbase-env.xml | 4 ++--
 .../common-services/HBASE/2.0.0.3.0/configuration/hbase-env.xml  | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/ae136974/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/configuration/hbase-env.xml
--
diff --git 
a/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/configuration/hbase-env.xml
 
b/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/configuration/hbase-env.xml
index 3ff67d4..8876141 100644
--- 
a/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/configuration/hbase-env.xml
+++ 
b/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/configuration/hbase-env.xml
@@ -220,11 +220,11 @@ export HBASE_MANAGES_ZK=false
 {% if security_enabled %}
 export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC 
-XX:ErrorFile={{log_dir}}/hs_err_pid%p.log 
-Djava.security.auth.login.config={{client_jaas_config_file}} 
-Djava.io.tmpdir={{java_io_tmpdir}}"
 export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xmx{{master_heapsize}} 
-Djava.security.auth.login.config={{master_jaas_config_file}}"
-export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS 
-Xmn{{regionserver_xmn_size}} -XX:CMSInitiatingOccupancyFraction=70  
-Xms{{regionserver_heapsize}} -Xmx{{regionserver_heapsize}} 
-Djava.security.auth.login.config={{regionserver_jaas_config_file}}"
+export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS 
-Xmn{{regionserver_xmn_size}} -XX:CMSInitiatingOccupancyFraction=70 
-XX:ReservedCodeCacheSize=256m -Xms{{regionserver_heapsize}} 
-Xmx{{regionserver_heapsize}} 
-Djava.security.auth.login.config={{regionserver_jaas_config_file}}"
 {% else %}
 export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC 
-XX:ErrorFile={{log_dir}}/hs_err_pid%p.log -Djava.io.tmpdir={{java_io_tmpdir}}"
 export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xmx{{master_heapsize}}"
-export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS 
-Xmn{{regionserver_xmn_size}} -XX:CMSInitiatingOccupancyFraction=70  
-Xms{{regionserver_heapsize}} -Xmx{{regionserver_heapsize}}"
+export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS 
-Xmn{{regionserver_xmn_size}} -XX:CMSInitiatingOccupancyFraction=70 
-XX:ReservedCodeCacheSize=256m -Xms{{regionserver_heapsize}} 
-Xmx{{regionserver_heapsize}}"
 {% endif %}
 
 

http://git-wip-us.apache.org/repos/asf/ambari/blob/ae136974/ambari-server/src/main/resources/common-services/HBASE/2.0.0.3.0/configuration/hbase-env.xml
--
diff --git 
a/ambari-server/src/main/resources/common-services/HBASE/2.0.0.3.0/configuration/hbase-env.xml
 
b/ambari-server/src/main/resources/common-services/HBASE/2.0.0.3.0/configuration/hbase-env.xml
index cb30b63..733ca8b 100644
--- 
a/ambari-server/src/main/resources/common-services/HBASE/2.0.0.3.0/configuration/hbase-env.xml
+++ 
b/ambari-server/src/main/resources/common-services/HBASE/2.0.0.3.0/configuration/hbase-env.xml
@@ -226,12 +226,12 @@ JDK_DEPENDED_OPTS="-XX:PermSize=128m -XX:MaxPermSize=128m"
 {% if security_enabled %}
 export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC 
-XX:ErrorFile={{log_dir}}/hs_err_pid%p.log 
-Djava.security.auth.login.config={{client_jaas_config_file}} 
-Djava.io.tmpdir={{java_io_tmpdir}}"
 export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xmx{{master_heapsize}} 
-Djava.security.auth.login.config={{master_jaas_config_file}} 
-Djavax.security.auth.useSubjectCredsOnly=false $JDK_DEPENDED_OPTS"
-export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS 
-Xmn{{regionserver_xmn_size}} -XX:CMSInitiatingOccupancyFraction=70  
-Xms{{regionserver_heapsize}} -Xmx{{regionserver_heapsize}} 
-Djava.security.auth.login.config={{regionserver_jaas_config_file}} 
-Djavax.security.auth.useSubjectCredsOnly=false $JDK_DEPENDED_OPTS"
+export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS 

ambari git commit: AMBARI-21326. Implement CodeCache related configuration change for HBase daemons (Ted Yu via alejandro)

2017-07-27 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/branch-2.5 d4fad25e0 -> 0fa6e2091


AMBARI-21326. Implement CodeCache related configuration change for HBase 
daemons (Ted Yu via alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/0fa6e209
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/0fa6e209
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/0fa6e209

Branch: refs/heads/branch-2.5
Commit: 0fa6e2091cbe7e1f2741568fdfb2f244398ba59c
Parents: d4fad25
Author: Alejandro Fernandez 
Authored: Thu Jul 27 15:05:24 2017 -0700
Committer: Alejandro Fernandez 
Committed: Thu Jul 27 15:05:24 2017 -0700

--
 .../common-services/HBASE/0.96.0.2.0/configuration/hbase-env.xml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/0fa6e209/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/configuration/hbase-env.xml
--
diff --git 
a/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/configuration/hbase-env.xml
 
b/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/configuration/hbase-env.xml
index 3ff67d4..8876141 100644
--- 
a/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/configuration/hbase-env.xml
+++ 
b/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/configuration/hbase-env.xml
@@ -220,11 +220,11 @@ export HBASE_MANAGES_ZK=false
 {% if security_enabled %}
 export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC 
-XX:ErrorFile={{log_dir}}/hs_err_pid%p.log 
-Djava.security.auth.login.config={{client_jaas_config_file}} 
-Djava.io.tmpdir={{java_io_tmpdir}}"
 export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xmx{{master_heapsize}} 
-Djava.security.auth.login.config={{master_jaas_config_file}}"
-export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS 
-Xmn{{regionserver_xmn_size}} -XX:CMSInitiatingOccupancyFraction=70  
-Xms{{regionserver_heapsize}} -Xmx{{regionserver_heapsize}} 
-Djava.security.auth.login.config={{regionserver_jaas_config_file}}"
+export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS 
-Xmn{{regionserver_xmn_size}} -XX:CMSInitiatingOccupancyFraction=70 
-XX:ReservedCodeCacheSize=256m -Xms{{regionserver_heapsize}} 
-Xmx{{regionserver_heapsize}} 
-Djava.security.auth.login.config={{regionserver_jaas_config_file}}"
 {% else %}
 export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC 
-XX:ErrorFile={{log_dir}}/hs_err_pid%p.log -Djava.io.tmpdir={{java_io_tmpdir}}"
 export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xmx{{master_heapsize}}"
-export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS 
-Xmn{{regionserver_xmn_size}} -XX:CMSInitiatingOccupancyFraction=70  
-Xms{{regionserver_heapsize}} -Xmx{{regionserver_heapsize}}"
+export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS 
-Xmn{{regionserver_xmn_size}} -XX:CMSInitiatingOccupancyFraction=70 
-XX:ReservedCodeCacheSize=256m -Xms{{regionserver_heapsize}} 
-Xmx{{regionserver_heapsize}}"
 {% endif %}
 
 

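The hbase-env content is a Jinja template that Ambari renders per cluster, and this change inserts -XX:ReservedCodeCacheSize=256m into HBASE_REGIONSERVER_OPTS on both the secure and non-secure branches. A minimal rendering sketch with the jinja2 package; the variable values here are made up:

from jinja2 import Template

tmpl = Template(
  'export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS '
  '-Xmn{{regionserver_xmn_size}} -XX:CMSInitiatingOccupancyFraction=70 '
  '-XX:ReservedCodeCacheSize=256m -Xms{{regionserver_heapsize}} '
  '-Xmx{{regionserver_heapsize}}"')

print(tmpl.render(regionserver_xmn_size="512m", regionserver_heapsize="4096m"))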


ambari git commit: AMBARI-21573. Kafka service failed to start during regenerate keytab after upgrade from BigInsights 4.2.5, 4.2.0 to HDP 2.6.2 (alejandro)

2017-07-25 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/branch-2.5 b38901bd9 -> ab05c5fec


AMBARI-21573. Kafka service failed to start during regenerate keytab after 
upgrade from BigInsights 4.2.5, 4.2.0 to HDP 2.6.2 (alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/ab05c5fe
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/ab05c5fe
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/ab05c5fe

Branch: refs/heads/branch-2.5
Commit: ab05c5fec861bce64234400ea6bb665dc42b7c87
Parents: b38901b
Author: Alejandro Fernandez 
Authored: Tue Jul 25 15:54:52 2017 -0700
Committer: Alejandro Fernandez 
Committed: Tue Jul 25 17:44:24 2017 -0700

--
 .../BigInsights/4.2.5/upgrades/config-upgrade.xml | 14 ++
 .../4.2.5/upgrades/nonrolling-upgrade-to-hdp-2.6.xml  | 10 ++
 .../BigInsights/4.2/upgrades/config-upgrade.xml   | 14 ++
 .../4.2/upgrades/nonrolling-upgrade-to-hdp-2.6.xml| 10 ++
 4 files changed, 48 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/ab05c5fe/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/config-upgrade.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/config-upgrade.xml
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/config-upgrade.xml
index 87a2aef..2e9bd65 100644
--- 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/config-upgrade.xml
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/config-upgrade.xml
@@ -18,6 +18,20 @@
 
 <upgrade-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:noNamespaceSchemaLocation="upgrade-config.xsd">
   
+
+  
+
+  
+  
+kafka-broker
+
+
+  
+
+  
+
+
 
   
 

http://git-wip-us.apache.org/repos/asf/ambari/blob/ab05c5fe/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
index a7ddd5c..684acfa 100644
--- 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
@@ -195,6 +195,16 @@
   true   
   false
 
+  
+  
+  
+
+
+  Apply Kerberos config changes for Kafka
+
+  
+
   
   
 

http://git-wip-us.apache.org/repos/asf/ambari/blob/ab05c5fe/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/config-upgrade.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/config-upgrade.xml
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/config-upgrade.xml
index 6d00a90..f79272f 100644
--- 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/config-upgrade.xml
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/config-upgrade.xml
@@ -110,6 +110,20 @@
   
 
 
+
+  
+
+  
+  
+kafka-broker
+
+
+  
+
+  
+
+
 
   
 

http://git-wip-us.apache.org/repos/asf/ambari/blob/ab05c5fe/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
index cedc90f..484e459 100644
--- 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
@@ -190,6 +190,16 @@
   true   
   false
 
+  
+  
+  
+
+
+  Apply Kerberos config changes for Kafka
+
+  
+
   
   
 

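The stripped definitions above gate a kafka-broker change on the cluster being Kerberized, and the execute-stage applies it during the upgrade. A sketch of that guard-then-set shape; the property name and value below are placeholders, not the keys from the lost XML:

def apply_if(config, security_enabled, key, value):
  # Apply a config change only when the guard holds, like a <definition>
  # gated by a Kerberos condition in config-upgrade.xml.
  if security_enabled:
    config[key] = value
  return config

kafka_broker = {"listeners": "PLAINTEXT://localhost:6667"}
apply_if(kafka_broker, True, "security.inter.broker.protocol", "PLAINTEXTSASL")
print(kafka_broker)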


ambari git commit: AMBARI-21528. Zookeeper server has incorrect memory setting, missing m in Xmx value (alejandro)

2017-07-20 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/trunk e87a3e31a -> 2a298a3f7


AMBARI-21528. Zookeeper server has incorrect memory setting, missing m in Xmx 
value (alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/2a298a3f
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/2a298a3f
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/2a298a3f

Branch: refs/heads/trunk
Commit: 2a298a3f707c4a3702d0f70e927946540661c916
Parents: e87a3e3
Author: Alejandro Fernandez 
Authored: Thu Jul 20 14:24:18 2017 -0700
Committer: Alejandro Fernandez 
Committed: Thu Jul 20 14:24:18 2017 -0700

--
 .../ZOOKEEPER/3.4.5/package/scripts/params_linux.py | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/2a298a3f/ambari-server/src/main/resources/common-services/ZOOKEEPER/3.4.5/package/scripts/params_linux.py
--
diff --git 
a/ambari-server/src/main/resources/common-services/ZOOKEEPER/3.4.5/package/scripts/params_linux.py
 
b/ambari-server/src/main/resources/common-services/ZOOKEEPER/3.4.5/package/scripts/params_linux.py
index 0780d2e..b8e8f78 100644
--- 
a/ambari-server/src/main/resources/common-services/ZOOKEEPER/3.4.5/package/scripts/params_linux.py
+++ 
b/ambari-server/src/main/resources/common-services/ZOOKEEPER/3.4.5/package/scripts/params_linux.py
@@ -68,7 +68,10 @@ zk_log_dir = 
config['configurations']['zookeeper-env']['zk_log_dir']
 zk_data_dir = config['configurations']['zoo.cfg']['dataDir']
 zk_pid_dir = status_params.zk_pid_dir
 zk_pid_file = status_params.zk_pid_file
-zk_server_heapsize_value = 
default('configurations/zookeeper-env/zk_server_heapsize', "1024m")
+zk_server_heapsize_value = 
str(default('configurations/zookeeper-env/zk_server_heapsize', "1024"))
+zk_server_heapsize_value = zk_server_heapsize_value.strip()
+if len(zk_server_heapsize_value) > 0 and 
zk_server_heapsize_value[-1].isdigit():
+  zk_server_heapsize_value = zk_server_heapsize_value + "m"
 zk_server_heapsize = format("-Xmx{zk_server_heapsize_value}")
 
 client_port = default('/configurations/zoo.cfg/clientPort', None)


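The patched logic normalizes zk_server_heapsize whether the stored value is "1024" or "1024m": the unit is appended only when the value still ends in a digit, then the result is formatted into -Xmx. The same logic, exercised in isolation:

def normalize_heapsize(value, fallback="1024"):
  value = str(value if value is not None else fallback).strip()
  if len(value) > 0 and value[-1].isdigit():
    value += "m"
  return "-Xmx" + value

print(normalize_heapsize("1024"))    # -Xmx1024m
print(normalize_heapsize("2048m"))   # -Xmx2048m (left alone)
print(normalize_heapsize(" 512 "))   # -Xmx512m (whitespace stripped first)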

ambari git commit: AMBARI-21528. Zookeeper server has incorrect memory setting, missing m in Xmx value (alejandro)

2017-07-20 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/branch-2.5 212ee1cb0 -> d4244f520


AMBARI-21528. Zookeeper server has incorrect memory setting, missing m in Xmx 
value (alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/d4244f52
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/d4244f52
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/d4244f52

Branch: refs/heads/branch-2.5
Commit: d4244f5206feca1bb6001eea6d550494f69e8762
Parents: 212ee1c
Author: Alejandro Fernandez 
Authored: Wed Jul 19 16:01:42 2017 -0700
Committer: Alejandro Fernandez 
Committed: Thu Jul 20 14:16:42 2017 -0700

--
 .../ZOOKEEPER/3.4.5/package/scripts/params_linux.py | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/d4244f52/ambari-server/src/main/resources/common-services/ZOOKEEPER/3.4.5/package/scripts/params_linux.py
--
diff --git 
a/ambari-server/src/main/resources/common-services/ZOOKEEPER/3.4.5/package/scripts/params_linux.py
 
b/ambari-server/src/main/resources/common-services/ZOOKEEPER/3.4.5/package/scripts/params_linux.py
index 0780d2e..b8e8f78 100644
--- 
a/ambari-server/src/main/resources/common-services/ZOOKEEPER/3.4.5/package/scripts/params_linux.py
+++ 
b/ambari-server/src/main/resources/common-services/ZOOKEEPER/3.4.5/package/scripts/params_linux.py
@@ -68,7 +68,10 @@ zk_log_dir = 
config['configurations']['zookeeper-env']['zk_log_dir']
 zk_data_dir = config['configurations']['zoo.cfg']['dataDir']
 zk_pid_dir = status_params.zk_pid_dir
 zk_pid_file = status_params.zk_pid_file
-zk_server_heapsize_value = 
default('configurations/zookeeper-env/zk_server_heapsize', "1024m")
+zk_server_heapsize_value = 
str(default('configurations/zookeeper-env/zk_server_heapsize', "1024"))
+zk_server_heapsize_value = zk_server_heapsize_value.strip()
+if len(zk_server_heapsize_value) > 0 and 
zk_server_heapsize_value[-1].isdigit():
+  zk_server_heapsize_value = zk_server_heapsize_value + "m"
 zk_server_heapsize = format("-Xmx{zk_server_heapsize_value}")
 
 client_port = default('/configurations/zoo.cfg/clientPort', None)



ambari git commit: AMBARI-21502. Cross-stack migration from BigInsights to HDP, EU needs to set hive-site custom.hive.warehouse.mode to 0770 (alejandro)

2017-07-17 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/branch-2.5 9cd7fbe0a -> d8a5bad1b


AMBARI-21502. Cross-stack migration from BigInsights to HDP, EU needs to set 
hive-site custom.hive.warehouse.mode to 0770 (alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/d8a5bad1
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/d8a5bad1
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/d8a5bad1

Branch: refs/heads/branch-2.5
Commit: d8a5bad1b4b67367818646e7b65a2419021bb420
Parents: 9cd7fbe
Author: Alejandro Fernandez 
Authored: Mon Jul 17 12:35:27 2017 -0700
Committer: Alejandro Fernandez 
Committed: Mon Jul 17 16:42:56 2017 -0700

--
 .../stacks/BigInsights/4.2.5/upgrades/config-upgrade.xml| 9 +++--
 .../4.2.5/upgrades/nonrolling-upgrade-to-hdp-2.6.xml| 7 +--
 .../stacks/BigInsights/4.2/upgrades/config-upgrade.xml  | 9 +++--
 .../4.2/upgrades/nonrolling-upgrade-to-hdp-2.6.xml  | 7 +--
 4 files changed, 24 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/d8a5bad1/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/config-upgrade.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/config-upgrade.xml
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/config-upgrade.xml
index 8c009a7..e476d57 100644
--- 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/config-upgrade.xml
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/config-upgrade.xml
@@ -134,7 +134,7 @@
 
   
 
-  
+  
 hive-site
 
 
@@ -167,10 +167,15 @@
 
   
   
-  
+  
 hive-env
 
   
+
+  
+hive-site
+
+  
 
   
   

http://git-wip-us.apache.org/repos/asf/ambari/blob/d8a5bad1/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
index 7c1a9ce..cbd0550 100644
--- 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
@@ -255,10 +255,13 @@
 
   
   
-
+
+  
+  
+
   
   
-
+
 
   
 

http://git-wip-us.apache.org/repos/asf/ambari/blob/d8a5bad1/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/config-upgrade.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/config-upgrade.xml
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/config-upgrade.xml
index 310e504..dada6e2 100644
--- 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/config-upgrade.xml
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/config-upgrade.xml
@@ -158,7 +158,7 @@
 
   
 
-  
+  
 hive-site
 
 
@@ -191,10 +191,15 @@
 
   
   
-  
+  
 hive-env
 
   
+
+  
+hive-site
+
+  
 
   
   

http://git-wip-us.apache.org/repos/asf/ambari/blob/d8a5bad1/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
index 5b8f8d9..3ea20ed 100644
--- 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
@@ -212,10 +212,13 @@
 
   
   
-
+
+  
+  
+
   
   
-
+
 
   
 

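The XML bodies above were stripped in this rendering, but per the subject the net change is to pin the Hive warehouse directory mode during express upgrade. The effect on the config type can be sketched as:

def apply_warehouse_mode(hive_site):
  # AMBARI-21502: force 0770 on the warehouse mode key in hive-site.
  hive_site["custom.hive.warehouse.mode"] = "0770"
  return hive_site

print(apply_warehouse_mode({}))  # {'custom.hive.warehouse.mode': '0770'}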


ambari git commit: AMBARI-21481. Upgrading IOP cluster with Spark2 to Ambari 2.5.2 fails on start because config mapping spark2-javaopts-properties is never selected (alejandro)

2017-07-17 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/branch-2.5 13bcea0b0 -> 0ed09cd53


AMBARI-21481. Upgrading IOP cluster with Spark2 to Ambari 2.5.2 fails on start 
because config mapping spark2-javaopts-properties is never selected (alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/0ed09cd5
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/0ed09cd5
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/0ed09cd5

Branch: refs/heads/branch-2.5
Commit: 0ed09cd5342cfc4cac0d6061a7b7b9a3cef127c1
Parents: 13bcea0
Author: Alejandro Fernandez 
Authored: Fri Jul 14 16:15:07 2017 -0700
Committer: Alejandro Fernandez 
Committed: Mon Jul 17 14:29:36 2017 -0700

--
 .../server/upgrade/UpgradeCatalog252.java   | 99 
 .../configuration/spark-javaopts-properties.xml |  3 +
 .../spark2-javaopts-properties.xml  |  5 +-
 .../4.2.5/services/SPARK2/metainfo.xml  |  2 +-
 .../configuration/spark-javaopts-properties.xml |  3 +
 5 files changed, 110 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/0ed09cd5/ambari-server/src/main/java/org/apache/ambari/server/upgrade/UpgradeCatalog252.java
--
diff --git 
a/ambari-server/src/main/java/org/apache/ambari/server/upgrade/UpgradeCatalog252.java
 
b/ambari-server/src/main/java/org/apache/ambari/server/upgrade/UpgradeCatalog252.java
index 3c8686c..ea1b034 100644
--- 
a/ambari-server/src/main/java/org/apache/ambari/server/upgrade/UpgradeCatalog252.java
+++ 
b/ambari-server/src/main/java/org/apache/ambari/server/upgrade/UpgradeCatalog252.java
@@ -18,12 +18,19 @@
 package org.apache.ambari.server.upgrade;
 
 import java.sql.SQLException;
+import java.util.Arrays;
+import java.util.Collection;
 import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
 import java.util.Map;
 import java.util.Set;
 
 import org.apache.ambari.server.AmbariException;
 import org.apache.ambari.server.orm.DBAccessor.DBColumnInfo;
+import org.apache.ambari.server.orm.dao.ClusterDAO;
+import org.apache.ambari.server.orm.entities.ClusterConfigMappingEntity;
+import org.apache.ambari.server.orm.entities.ClusterEntity;
 import org.apache.ambari.server.state.Cluster;
 import org.apache.ambari.server.state.Clusters;
 import org.apache.ambari.server.state.Config;
@@ -34,6 +41,8 @@ import org.apache.commons.lang.StringUtils;
 import com.google.common.collect.Sets;
 import com.google.inject.Inject;
 import com.google.inject.Injector;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * The {@link org.apache.ambari.server.upgrade.UpgradeCatalog252} upgrades 
Ambari from 2.5.1 to 2.5.2.
@@ -54,6 +63,13 @@ public class UpgradeCatalog252 extends 
AbstractUpgradeCatalog {
 
   private static final String CLUSTER_ENV = "cluster-env";
 
+  private static final List<String> configTypesToEnsureSelected = 
Arrays.asList("spark2-javaopts-properties");
+  
+  /**
+   * Logger.
+   */
+  private static final Logger LOG = 
LoggerFactory.getLogger(UpgradeCatalog252.class);
+
   /**
* Constructor.
*
@@ -102,6 +118,7 @@ public class UpgradeCatalog252 extends 
AbstractUpgradeCatalog {
   @Override
   protected void executeDMLUpdates() throws AmbariException, SQLException {
 resetStackToolsAndFeatures();
+ensureConfigTypesHaveAtLeastOneVersionSelected();
   }
 
   /**
@@ -197,4 +214,86 @@ public class UpgradeCatalog252 extends 
AbstractUpgradeCatalog {
   updateConfigurationPropertiesForCluster(cluster, CLUSTER_ENV, 
newStackProperties, true, false);
 }
   }
+
+  /**
+   * When doing a cross-stack upgrade, we found that one config type 
(spark2-javaopts-properties)
+   * did not have any mappings that were selected, so it caused Ambari Server 
start to fail on the DB Consistency Checker.
+   * To fix this, iterate over all config types and ensure that at least one 
is selected.
+   * If none are selected, then pick the one with the greatest time stamp; 
this should be safe since we are only adding
+   * more data to use as opposed to removing.
+   */
+  private void ensureConfigTypesHaveAtLeastOneVersionSelected() {
+ClusterDAO clusterDAO = injector.getInstance(ClusterDAO.class);
+List<ClusterEntity> clusters = clusterDAO.findAll();
+
+if (null == clusters) {
+  return;
+}
+
+for (ClusterEntity clusterEntity : clusters) {
+  LOG.info("Ensuring all config types have at least one selected config 
for cluster {}", clusterEntity.getClusterName());
+
+  boolean atLeastOneChanged = false;
+  Collection<ClusterConfigMappingEntity> configMappingEntities = 
clusterEntity.getConfigMappingEntities();
+
+  if (configMappingEntities != null) {
+Set<String> configTypesNotSelecte

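The method body is truncated above, but its Javadoc spells out the algorithm: group the cluster's config mappings by type, and when no mapping of a type is selected, select the one with the greatest timestamp. A compact sketch of that selection rule, with plain dicts standing in for ClusterConfigMappingEntity:

from collections import defaultdict

def ensure_one_selected(mappings):
  # mappings: dicts with 'type', 'selected' (0/1) and 'created' (timestamp).
  by_type = defaultdict(list)
  for m in mappings:
    by_type[m["type"]].append(m)
  for entries in by_type.values():
    if not any(e["selected"] for e in entries):
      max(entries, key=lambda e: e["created"])["selected"] = 1

rows = [{"type": "spark2-javaopts-properties", "selected": 0, "created": 10},
        {"type": "spark2-javaopts-properties", "selected": 0, "created": 20}]
ensure_one_selected(rows)
print([r["selected"] for r in rows])  # [0, 1] -- newest mapping now selected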
ambari git commit: AMBARI-21463. Cross-stack upgrade, Oozie restart fails with ext-2.2.zip missing error, stack_tools.py is missing get_stack_name in __all__, disable BigInsights in UI (alejandro)

2017-07-13 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/branch-2.5 113b381ec -> d2c6d53f7


AMBARI-21463. Cross-stack upgrade, Oozie restart fails with ext-2.2.zip missing 
error, stack_tools.py is missing get_stack_name in __all__, disable BigInsights 
in UI (alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/d2c6d53f
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/d2c6d53f
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/d2c6d53f

Branch: refs/heads/branch-2.5
Commit: d2c6d53f70bcaa6aee789e6d026cc06990acd16c
Parents: 113b381
Author: Alejandro Fernandez 
Authored: Wed Jul 12 16:53:12 2017 -0700
Committer: Alejandro Fernandez 
Committed: Thu Jul 13 16:10:26 2017 -0700

--
 .../libraries/functions/stack_tools.py  |  2 +-
 .../OOZIE/4.0.0.2.0/package/scripts/oozie.py| 26 
 .../4.0.0.2.0/package/scripts/oozie_server.py   |  4 +--
 .../package/scripts/oozie_server_upgrade.py | 15 +--
 .../4.0.0.2.0/package/scripts/params_linux.py   | 15 ++-
 .../stacks/BigInsights/4.2.5/metainfo.xml   |  2 +-
 6 files changed, 47 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/d2c6d53f/ambari-common/src/main/python/resource_management/libraries/functions/stack_tools.py
--
diff --git 
a/ambari-common/src/main/python/resource_management/libraries/functions/stack_tools.py
 
b/ambari-common/src/main/python/resource_management/libraries/functions/stack_tools.py
index 830598b..de58021 100644
--- 
a/ambari-common/src/main/python/resource_management/libraries/functions/stack_tools.py
+++ 
b/ambari-common/src/main/python/resource_management/libraries/functions/stack_tools.py
@@ -19,7 +19,7 @@ limitations under the License.
 '''
 
 __all__ = ["get_stack_tool", "get_stack_tool_name", "get_stack_tool_path",
-   "get_stack_tool_package", "STACK_SELECTOR_NAME", 
"CONF_SELECTOR_NAME"]
+   "get_stack_tool_package", "get_stack_name", "STACK_SELECTOR_NAME", 
"CONF_SELECTOR_NAME"]
 
 # simplejson is much faster comparing to Python 2.6 json module and has the 
same functions set.
 import ambari_simplejson as json

http://git-wip-us.apache.org/repos/asf/ambari/blob/d2c6d53f/ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py
--
diff --git 
a/ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py
 
b/ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py
index 0c38b0b..142e962 100644
--- 
a/ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py
+++ 
b/ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py
@@ -52,7 +52,7 @@ from ambari_commons.inet_utils import download_file
 from resource_management.core import Logger
 
 @OsFamilyFuncImpl(os_family=OSConst.WINSRV_FAMILY)
-def oozie(is_server=False):
+def oozie(is_server=False, upgrade_type=None):
   import params
 
   from status_params import oozie_server_win_service_name
@@ -99,7 +99,7 @@ def oozie(is_server=False):
 
 # TODO: see if see can remove this
 @OsFamilyFuncImpl(os_family=OsFamilyImpl.DEFAULT)
-def oozie(is_server=False):
+def oozie(is_server=False, upgrade_type=None):
   import params
 
   if is_server:
@@ -190,7 +190,7 @@ def oozie(is_server=False):
   oozie_ownership()
   
   if is_server:  
-oozie_server_specific()
+oozie_server_specific(upgrade_type)
   
 def oozie_ownership():
   import params
@@ -215,7 +215,20 @@ def oozie_ownership():
 group = params.user_group
   )
 
-def oozie_server_specific():
+def get_oozie_ext_zip_source_path(upgrade_type, params):
+  """
+  Get the Oozie ext zip file path from the source stack.
+  :param upgrade_type:  Upgrade type will be None if not in the middle of a 
stack upgrade.
+  :param params: Expected to contain fields for ext_js_path, 
upgrade_direction, source_stack_name, and ext_js_file
+  :return: Source path to use for Oozie extension zip file
+  """
+  # Default to /usr/share/$TARGETSTACK-oozie/ext-2.2.zip
+  source_ext_js_path = params.ext_js_path
+  if upgrade_type is not None and params.upgrade_direction == 
Direction.UPGRADE:
+source_ext_js_path = "/usr/share/" + params.source_stack_name.upper() + 
"-oozie/" + params.ext_js_file
+  return source_ext_js_path
+
+def oozie_server_specific(upgrade_type):
   import params
   
   no_op_test = as_user(format("ls {pid_file} >/dev/null 2>&am

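get_oozie_ext_zip_source_path() appears in full above, so its behavior can be replayed directly: outside an upgrade it returns params.ext_js_path, and on the upgrade direction of a cross-stack move it points at the source stack's share directory. A stand-alone version with Direction and the params object stubbed out:

class Direction:
  UPGRADE = "upgrade"

class Params:
  ext_js_path = "/usr/share/HDP-oozie/ext-2.2.zip"
  upgrade_direction = Direction.UPGRADE
  source_stack_name = "BigInsights"
  ext_js_file = "ext-2.2.zip"

def get_oozie_ext_zip_source_path(upgrade_type, params):
  source_ext_js_path = params.ext_js_path
  if upgrade_type is not None and params.upgrade_direction == Direction.UPGRADE:
    source_ext_js_path = "/usr/share/" + params.source_stack_name.upper() + "-oozie/" + params.ext_js_file
  return source_ext_js_path

print(get_oozie_ext_zip_source_path(None, Params))           # default target-stack path
print(get_oozie_ext_zip_source_path("NON_ROLLING", Params))  # /usr/share/BIGINSIGHTS-oozie/ext-2.2.zip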
[4/4] ambari git commit: AMBARI-21462. Readd TITAN, R4ML, SYSTEMML, JNBG to BigInsights and fix HBase backup during EU and imports (alejandro)

2017-07-12 Thread alejandro
AMBARI-21462. Readd TITAN, R4ML, SYSTEMML, JNBG to BigInsights and fix HBase 
backup during EU and imports (alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/69e492f2
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/69e492f2
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/69e492f2

Branch: refs/heads/branch-2.5
Commit: 69e492f288340e797cce62bfd42e677bec958158
Parents: 1f54c6e
Author: Alejandro Fernandez 
Authored: Wed Jul 12 15:14:30 2017 -0700
Committer: Alejandro Fernandez 
Committed: Wed Jul 12 16:17:07 2017 -0700

--
 .../0.96.0.2.0/package/scripts/hbase_master.py  |  10 +-
 .../0.96.0.2.0/package/scripts/hbase_service.py |  37 ++--
 .../common-services/JNBG/0.2.0/alerts.json  |  32 +++
 .../JNBG/0.2.0/configuration/jnbg-env.xml   | 208 +++
 .../common-services/JNBG/0.2.0/kerberos.json|  59 ++
 .../common-services/JNBG/0.2.0/metainfo.xml | 108 ++
 .../JNBG/0.2.0/package/files/jkg_install.sh | 169 +++
 .../JNBG/0.2.0/package/files/jkg_start.sh   |  84 
 .../JNBG/0.2.0/package/files/log4j_setup.sh |  79 +++
 .../0.2.0/package/files/pyspark_configure.sh| 104 ++
 .../JNBG/0.2.0/package/files/pythonenv_setup.sh | 138 
 .../JNBG/0.2.0/package/files/toree_configure.sh | 151 ++
 .../JNBG/0.2.0/package/files/toree_install.sh   | 176 
 .../JNBG/0.2.0/package/scripts/jkg_toree.py | 134 
 .../0.2.0/package/scripts/jkg_toree_params.py   | 177 
 .../JNBG/0.2.0/package/scripts/jnbg_helpers.py  |  81 
 .../JNBG/0.2.0/package/scripts/jnbg_params.py   |  66 ++
 .../JNBG/0.2.0/package/scripts/py_client.py |  63 ++
 .../0.2.0/package/scripts/py_client_params.py   |  39 
 .../JNBG/0.2.0/package/scripts/service_check.py |  44 
 .../JNBG/0.2.0/package/scripts/status_params.py |  26 +++
 .../R4ML/0.8.0/configuration/r4ml-env.xml   |  48 +
 .../common-services/R4ML/0.8.0/metainfo.xml |  92 
 .../R4ML/0.8.0/package/files/Install.R  |  25 +++
 .../R4ML/0.8.0/package/files/ServiceCheck.R |  28 +++
 .../R4ML/0.8.0/package/files/localr.repo|  22 ++
 .../R4ML/0.8.0/package/scripts/__init__.py  |  19 ++
 .../R4ML/0.8.0/package/scripts/params.py|  80 +++
 .../R4ML/0.8.0/package/scripts/r4ml_client.py   | 201 ++
 .../R4ML/0.8.0/package/scripts/service_check.py |  45 
 .../SYSTEMML/0.10.0/metainfo.xml|  77 +++
 .../SYSTEMML/0.10.0/package/scripts/__init__.py |  19 ++
 .../SYSTEMML/0.10.0/package/scripts/params.py   |  40 
 .../0.10.0/package/scripts/service_check.py |  43 
 .../0.10.0/package/scripts/systemml_client.py   |  49 +
 .../common-services/TITAN/1.0.0/alerts.json |  33 +++
 .../1.0.0/configuration/gremlin-server.xml  |  85 
 .../TITAN/1.0.0/configuration/hadoop-gryo.xml   |  94 +
 .../1.0.0/configuration/hadoop-hbase-read.xml   | 102 +
 .../TITAN/1.0.0/configuration/titan-env.xml | 157 ++
 .../1.0.0/configuration/titan-hbase-solr.xml|  69 ++
 .../TITAN/1.0.0/configuration/titan-log4j.xml   |  65 ++
 .../common-services/TITAN/1.0.0/kerberos.json   |  52 +
 .../common-services/TITAN/1.0.0/metainfo.xml| 124 +++
 .../package/alerts/alert_check_titan_server.py  |  65 ++
 .../package/files/gremlin-server-script.sh  |  86 
 .../package/files/tinkergraph-empty.properties  |  18 ++
 .../TITAN/1.0.0/package/files/titanSmoke.groovy |  20 ++
 .../TITAN/1.0.0/package/scripts/params.py   | 202 ++
 .../1.0.0/package/scripts/params_server.py  |  37 
 .../1.0.0/package/scripts/service_check.py  |  88 
 .../TITAN/1.0.0/package/scripts/titan.py| 143 +
 .../TITAN/1.0.0/package/scripts/titan_client.py |  61 ++
 .../TITAN/1.0.0/package/scripts/titan_server.py |  67 ++
 .../1.0.0/package/scripts/titan_service.py  | 150 +
 .../templates/titan_solr_client_jaas.conf.j2|  23 ++
 .../package/templates/titan_solr_jaas.conf.j2   |  26 +++
 .../BigInsights/4.2.5/role_command_order.json   |  12 +-
 .../4.2.5/services/JNBG/metainfo.xml|  26 +++
 .../4.2.5/services/R4ML/metainfo.xml|  37 
 .../4.2.5/services/SYSTEMML/metainfo.xml|  37 
 .../4.2.5/services/TITAN/metainfo.xml   |  40 
 .../BigInsights/4.2.5/services/stack_advisor.py |  53 +
 .../upgrades/nonrolling-upgrade-to-hdp-2.6.xml  |   2 +-
 .../BigInsights/4.2/role_command_order.json |   3 +-
 .../4.2/services/SYSTEMML/metainfo.xml  |  77 +++
 .../SYSTEMML/package/scripts/__init__.py|  19 ++
 .../services/SYSTEMML/package/scripts/params.py |  40 
 .../SYSTEMML/package/scripts

[2/4] ambari git commit: AMBARI-21462. Readd TITAN, R4ML, SYSTEMML, JNBG to BigInsights and fix HBase backup during EU and imports (alejandro)

2017-07-12 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/69e492f2/ambari-server/src/main/resources/common-services/TITAN/1.0.0/package/files/titanSmoke.groovy
--
diff --git 
a/ambari-server/src/main/resources/common-services/TITAN/1.0.0/package/files/titanSmoke.groovy
 
b/ambari-server/src/main/resources/common-services/TITAN/1.0.0/package/files/titanSmoke.groovy
new file mode 100755
index 000..79438be
--- /dev/null
+++ 
b/ambari-server/src/main/resources/common-services/TITAN/1.0.0/package/files/titanSmoke.groovy
@@ -0,0 +1,20 @@
+/*Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License */
+
+import com.thinkaurelius.titan.core.TitanFactory;
+
+graph = TitanFactory.open("/etc/titan/conf/titan-hbase-solr.properties")
+g = graph.traversal()
+l = g.V().values('name').toList()

http://git-wip-us.apache.org/repos/asf/ambari/blob/69e492f2/ambari-server/src/main/resources/common-services/TITAN/1.0.0/package/scripts/params.py
--
diff --git 
a/ambari-server/src/main/resources/common-services/TITAN/1.0.0/package/scripts/params.py
 
b/ambari-server/src/main/resources/common-services/TITAN/1.0.0/package/scripts/params.py
new file mode 100755
index 000..8019748
--- /dev/null
+++ 
b/ambari-server/src/main/resources/common-services/TITAN/1.0.0/package/scripts/params.py
@@ -0,0 +1,202 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+Ambari Agent
+
+"""
+
+from resource_management.libraries.script.script import Script
+from resource_management.libraries.resources.hdfs_resource import HdfsResource
+from resource_management.libraries.functions import conf_select
+from resource_management.libraries.functions import stack_select
+from resource_management.libraries.functions.format import format
+from resource_management.libraries.functions.version import 
format_stack_version
+from resource_management.libraries.functions.default import default
+from resource_management.libraries.functions import get_kinit_path
+from resource_management.libraries.functions.get_stack_version import 
get_stack_version
+
+config = Script.get_config()
+tmp_dir = Script.get_tmp_dir()
+stack_root= Script.get_stack_root()
+
+stack_name = default("/hostLevelParams/stack_name", None)
+
+stack_version_unformatted = str(config['hostLevelParams']['stack_version'])
+stack_version_formatted = format_stack_version(stack_version_unformatted)
+full_stack_version = get_stack_version('titan-client')
+
+# New Cluster Stack Version that is defined during the RESTART of a Rolling 
Upgrade
+version = default("/commandParams/version", None)
+
+titan_user = config['configurations']['titan-env']['titan_user']
+user_group = config['configurations']['cluster-env']['user_group']
+titan_log_dir = config['configurations']['titan-env']['titan_log_dir']
+titan_server_port = config['configurations']['titan-env']['titan_server_port']
+titan_hdfs_home_dir = 
config['configurations']['titan-env']['titan_hdfs_home_dir']
+titan_log_file = format("{titan_log_dir}/titan-{titan_server_port}.log")
+titan_err_file = format("{titan_log_dir}/titan-{titan_server_port}.err")
+
+smokeuser = config['configurations']['cluster-env']['smokeuser']
+smokeuser_principal = 
config['configurations']['cluster-env']['smokeuser_principal_name']
+
+security_enabled = config['configurations']['cluster-env']['security_enabled']
+smoke_user_keytab = config['configurations']['cluster-env']['smokeuser_keytab']
+kinit_path_local = 
get_kinit_path(

[1/4] ambari git commit: AMBARI-21462. Readd TITAN, R4ML, SYSTEMML, JNBG to BigInsights and fix HBase backup during EU and imports (alejandro)

2017-07-12 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/branch-2.5 1f54c6e27 -> 69e492f28


http://git-wip-us.apache.org/repos/asf/ambari/blob/69e492f2/ambari-server/src/main/resources/stacks/BigInsights/4.2/services/TITAN/package/scripts/params.py
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2/services/TITAN/package/scripts/params.py
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2/services/TITAN/package/scripts/params.py
new file mode 100755
index 000..3cb7aef
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2/services/TITAN/package/scripts/params.py
@@ -0,0 +1,128 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+Ambari Agent
+
+"""
+
+from resource_management.libraries.script.script import Script
+from resource_management.libraries.resources.hdfs_resource import HdfsResource
+from resource_management.libraries.functions import conf_select
+from resource_management.libraries.functions import stack_select
+from resource_management.libraries.functions.format import format
+from resource_management.libraries.functions.version import 
format_stack_version
+from resource_management.libraries.functions.default import default
+from resource_management.libraries.functions import get_kinit_path
+
+# server configurations
+config = Script.get_config()
+tmp_dir = Script.get_tmp_dir()
+
+stack_name = default("/hostLevelParams/stack_name", None)
+
+stack_version_unformatted = str(config['hostLevelParams']['stack_version'])
+iop_stack_version = format_stack_version(stack_version_unformatted)
+
+# New Cluster Stack Version that is defined during the RESTART of a Rolling 
Upgrade
+version = default("/commandParams/version", None)
+
+titan_user = config['configurations']['titan-env']['titan_user']
+user_group = config['configurations']['cluster-env']['user_group']
+titan_bin_dir = '/usr/iop/current/titan-client/bin'
+
+
+smokeuser = config['configurations']['cluster-env']['smokeuser']
+smokeuser_principal = config['configurations']['cluster-env']['smokeuser_principal_name']
+
+security_enabled = config['configurations']['cluster-env']['security_enabled']
+smoke_user_keytab = config['configurations']['cluster-env']['smokeuser_keytab']
+kinit_path_local = get_kinit_path(default('/configurations/kerberos-env/executable_search_paths', None))
+
+# titan configurations
+titan_conf_dir = "/usr/iop/current/titan-client/conf"
+titan_hbase_solr_props = config['configurations']['titan-hbase-solr']['content']
+titan_env_props = config['configurations']['titan-env']['content']
+log4j_console_props = config['configurations']['titan-log4j']['content']
+
+# not supporting 32 bit jdk.
+java64_home = config['hostLevelParams']['java_home']
+hadoop_config_dir = '/etc/hadoop/conf'
+hbase_config_dir = '/etc/hbase/conf'
+
+# Titan requires 'storage.hostname', which is the HBase cluster in IOP 4.2.
+# The host name should be the ZooKeeper quorum.
+storage_hosts = config['clusterHostInfo']['zookeeper_hosts']
+storage_host_list = []
+for hostname in storage_hosts:
+  storage_host_list.append(hostname)
+storage_host = ",".join(storage_host_list)
+hbase_zookeeper_parent = config['configurations']['hbase-site']['zookeeper.znode.parent']
+
+# Solr cloud host
+solr_hosts = config['clusterHostInfo']['solr_hosts']
+solr_host_list = []
+for hostname in solr_hosts:
+  solr_host_list.append(hostname)
+solr_host = ",".join(solr_host_list)
+solr_server_host = solr_hosts[0]
+
+# The Titan client does not work right now because there is no 'titan_host' in 'clusterHostInfo';
+# looking it up would fail with "Configuration parameter 'titan_host' was not found in configurations dictionary!"
+# This is a known issue (task 118900); for now Titan and Solr are installed on the same node.
+# titan_host = config['clusterHostInfo']['titan_host']
+titan_host = solr_server_host
+
+# The conf directory and jar should be copied to the Solr site.
+titan_dir = format('/usr/iop/current/titan-client')
+titan_ext_dir = format('/usr/iop/current/titan-client/ext')
+titan_solr_conf_dir = format('/usr/iop/current/titan-client/conf/solr')
+titan_solr_jar_file = format('/usr/iop/current/titan-client/lib/jts-1.13.jar')
+
+titan_solr_hdfs_dir = "/apps/titan"
+titan_solr_hdfs_
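The params module above leans on two resource_management helpers: default() for optional config keys and format() for scope-aware string interpolation. A minimal standalone sketch of the host-list and log-path patterns it uses (host names and paths are hypothetical; plain str.format keeps the sketch dependency-free):

# Sketch of the host-list pattern from params.py above (hypothetical hosts).
storage_hosts = ["zk1.example.com", "zk2.example.com", "zk3.example.com"]

# The append loop in params.py is equivalent to joining the list directly.
storage_host = ",".join(storage_hosts)

# resource_management's format() pulls names from the caller's scope;
# str.format() stands in for it here.
titan_log_dir = "/var/log/titan"
titan_server_port = "8184"
titan_log_file = "{0}/titan-{1}.log".format(titan_log_dir, titan_server_port)

print(storage_host)    # zk1.example.com,zk2.example.com,zk3.example.com
print(titan_log_file)  # /var/log/titan/titan-8184.log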

[3/4] ambari git commit: AMBARI-21462. Readd TITAN, R4ML, SYSTEMML, JNBG to BigInsights and fix HBase backup during EU and imports (alejandro)

2017-07-12 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/69e492f2/ambari-server/src/main/resources/common-services/JNBG/0.2.0/package/scripts/py_client_params.py
--
diff --git a/ambari-server/src/main/resources/common-services/JNBG/0.2.0/package/scripts/py_client_params.py b/ambari-server/src/main/resources/common-services/JNBG/0.2.0/package/scripts/py_client_params.py
new file mode 100755
index 000..5dcc8e4
--- /dev/null
+++ b/ambari-server/src/main/resources/common-services/JNBG/0.2.0/package/scripts/py_client_params.py
@@ -0,0 +1,39 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+from resource_management.libraries.script.script import Script
+from resource_management.libraries.functions.format import format
+from jkg_toree_params import py_executable, py_venv_pathprefix, py_venv_restrictive, venv_owner, ambarisudo
+import jnbg_helpers as helpers
+
+# Server configurations
+config = Script.get_config()
+stack_root = Script.get_stack_root()
+
+package_dir = helpers.package_dir()
+cmd_file_name = "pythonenv_setup.sh"
+cmd_file_path = format("{package_dir}files/{cmd_file_name}")
+
+# Sequence of commands executed in py_client.py
+commands = [ambarisudo + ' ' +
+            cmd_file_path + ' ' +
+            py_executable + ' ' +
+            py_venv_pathprefix + ' ' +
+            venv_owner]
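To make the command assembly above concrete, here is a sketch with hypothetical stand-in values for the names imported from jkg_toree_params (the real values come from that module and from helpers.package_dir()):

# Hypothetical values; the real ones are resolved by Ambari at runtime.
ambarisudo = "/var/lib/ambari-agent/ambari-sudo.sh"
cmd_file_path = "/var/lib/ambari-agent/cache/common-services/JNBG/0.2.0/package/files/pythonenv_setup.sh"
py_executable = "/usr/bin/python2.7"
py_venv_pathprefix = "/opt/jnbg/venv"
venv_owner = "notebook"

commands = [ambarisudo + ' ' +
            cmd_file_path + ' ' +
            py_executable + ' ' +
            py_venv_pathprefix + ' ' +
            venv_owner]

# py_client.py hands this single shell line to Execute():
print(commands[0])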

http://git-wip-us.apache.org/repos/asf/ambari/blob/69e492f2/ambari-server/src/main/resources/common-services/JNBG/0.2.0/package/scripts/service_check.py
--
diff --git a/ambari-server/src/main/resources/common-services/JNBG/0.2.0/package/scripts/service_check.py b/ambari-server/src/main/resources/common-services/JNBG/0.2.0/package/scripts/service_check.py
new file mode 100755
index 000..d4d5f42
--- /dev/null
+++ b/ambari-server/src/main/resources/common-services/JNBG/0.2.0/package/scripts/service_check.py
@@ -0,0 +1,44 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+"""
+
+from resource_management.libraries.script.script import Script
+from resource_management.libraries.functions.format import format
+from resource_management.core.resources.system import Execute
+
+class JupyterKernelGatewayServiceCheck(Script):
+  def service_check(self, env):
+    import jkg_toree_params as params
+    env.set_params(params)
+
+    if params.security_enabled:
+      jnbg_kinit_cmd = format("{kinit_path_local} -kt {jnbg_kerberos_keytab} {jnbg_kerberos_principal}; ")
+      Execute(jnbg_kinit_cmd, user=params.user)
+
+    scheme = "https" if params.ui_ssl_enabled else "http"
+    Execute(format("curl -s -o /dev/null -w'%{{http_code}}' --negotiate -u: -k {scheme}://{jkg_host}:{jkg_port}/api/kernelspecs | grep 200"),
+            tries=10,
+            try_sleep=3,
+            logoutput=True)
+    Execute(format("curl -s --negotiate -u: -k {scheme}://{jkg_host}:{jkg_port}/api/kernelspecs | grep Scala"),
+            tries=10,
+            try_sleep=3,
+            logoutput=True)
+
+if __name__ == "__main__":
+  JupyterKernelGatewayServiceCheck().execute()
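The check above shells out to curl and lets Execute() handle retries. A rough dependency-free sketch of the same probe, assuming an unsecured endpoint (the committed check additionally runs kinit and uses curl --negotiate for Kerberos):

# Python 2 sketch of the kernelspecs probe (endpoint is hypothetical).
import time
import urllib2

def wait_for_kernelspecs(url, tries=10, sleep=3):
    for _ in range(tries):
        try:
            body = urllib2.urlopen(url).read()
            if "Scala" in body:  # mirrors the 'grep Scala' check above
                return True
        except Exception:
            pass
        time.sleep(sleep)
    return False

print(wait_for_kernelspecs("http://jkg-host.example.com:8888/api/kernelspecs"))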

http://git-wip-us.apache.org/repos/asf/ambari/blob/69e492f2/ambari-server/src/main/resources/common-services/JNBG/0.2.0/package/scripts/status_params.py
--
diff --git a/ambari-server/src/main/resources/common-service

ambari git commit: AMBARI-21455. Remove unnecessary services from BigInsights stack (alejandro)

2017-07-12 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/branch-2.5 4bbdd0e55 -> 08f48c1eb


AMBARI-21455. Remove unnecessary services from BigInsights stack (alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/08f48c1e
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/08f48c1e
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/08f48c1e

Branch: refs/heads/branch-2.5
Commit: 08f48c1eb85a3763891584b835977809936f3a19
Parents: 4bbdd0e
Author: Alejandro Fernandez 
Authored: Wed Jul 12 10:22:27 2017 -0700
Committer: Alejandro Fernandez 
Committed: Wed Jul 12 11:31:17 2017 -0700

--
 .../BigInsights/4.2.5/role_command_order.json   |  12 +-
 .../BigInsights/4.2.5/services/stack_advisor.py |  53 
 .../BigInsights/4.2/role_command_order.json |   3 +-
 .../4.2/services/SYSTEMML/metainfo.xml  |  77 ---
 .../SYSTEMML/package/scripts/__init__.py|  19 ---
 .../services/SYSTEMML/package/scripts/params.py |  40 --
 .../SYSTEMML/package/scripts/service_check.py   |  43 ---
 .../SYSTEMML/package/scripts/systemml_client.py |  49 ---
 .../services/TITAN/configuration/titan-env.xml  |  48 ---
 .../TITAN/configuration/titan-hbase-solr.xml|  67 --
 .../TITAN/configuration/titan-log4j.xml |  66 --
 .../4.2/services/TITAN/kerberos.json|  17 ---
 .../BigInsights/4.2/services/TITAN/metainfo.xml |  88 -
 .../TITAN/package/files/titanSmoke.groovy   |  20 ---
 .../services/TITAN/package/scripts/params.py| 128 ---
 .../TITAN/package/scripts/service_check.py  |  64 --
 .../4.2/services/TITAN/package/scripts/titan.py |  70 --
 .../TITAN/package/scripts/titan_client.py   |  58 -
 18 files changed, 3 insertions(+), 919 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/08f48c1e/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/role_command_order.json
--
diff --git a/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/role_command_order.json b/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/role_command_order.json
index dc4811b..35fc0d8 100755
--- a/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/role_command_order.json
+++ b/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/role_command_order.json
@@ -4,22 +4,14 @@
   "general_deps" : {
 "_comment" : "dependencies for all cases",
 "HIVE_SERVER_INTERACTIVE-START": ["RESOURCEMANAGER-START", 
"NODEMANAGER-START", "MYSQL_SERVER-START"],
-"RESOURCEMANAGER-STOP": ["HIVE_SERVER_INTERACTIVE-STOP", 
"SPARK2_THRIFTSERVER-STOP", "KERNEL_GATEWAY-STOP" ],
+"RESOURCEMANAGER-STOP": ["HIVE_SERVER_INTERACTIVE-STOP", 
"SPARK2_THRIFTSERVER-STOP"],
 "NODEMANAGER-STOP": ["HIVE_SERVER_INTERACTIVE-STOP", "KERNEL_GATEWAY-STOP" 
],
 "NAMENODE-STOP": ["HIVE_SERVER_INTERACTIVE-STOP"],
 "HIVE_SERVER_INTERACTIVE-RESTART": ["NODEMANAGER-RESTART", 
"MYSQL_SERVER-RESTART"],
 "HIVE_SERVICE_CHECK-SERVICE_CHECK": ["HIVE_SERVER-START", 
"HIVE_METASTORE-START", "WEBHCAT_SERVER-START", 
"HIVE_SERVER_INTERACTIVE-START"],
 "RANGER_ADMIN-START": ["ZOOKEEPER_SERVER-START", "INFRA_SOLR-START"],
 "SPARK2_SERVICE_CHECK-SERVICE_CHECK" : ["SPARK2_JOBHISTORYSERVER-START", 
"APP_TIMELINE_SERVER-START"],
-"HBASE_REST_SERVER-START": ["HBASE_MASTER-START"],
-"TITAN_SERVER-START" : ["HBASE_SERVICE_CHECK-SERVICE_CHECK", "SOLR-START"],
-"TITAN_SERVICE_CHECK-SERVICE_CHECK": ["TITAN_SERVER-START"],
-"KERNEL_GATEWAY-INSTALL": ["SPARK2_CLIENT-INSTALL"],
-"PYTHON_CLIENT-INSTALL": ["KERNEL_GATEWAY-INSTALL"],
-"KERNEL_GATEWAY-START": ["NAMENODE-START", "DATANODE-START", 
"RESOURCEMANAGER-START", "NODEMANAGER-START", "SPARK2_JOBHISTORYSERVER-START"],
-"JNBG_SERVICE_CHECK-SERVICE_CHECK": ["KERNEL_GATEWAY-START"],
-"R4ML-INSTALL": ["SPARK2_CLIENT-INSTALL", "SYSTEMML-INSTALL"],
-"R4ML_SERVICE_CHECK-SERVICE_CHECK": ["NAMENODE-START", "DATANODE-START", 
"SPARK2_JOBHISTORYSERVER-START"]
+"HBASE_REST_SERVER-START": ["HBASE_MASTER-START"]
   },
   "_comment" : &

ambari git commit: AMBARI-21411. Backend - Run EU/RU PreChecks during a cross-stack upgrade (alejandro)

2017-07-06 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/branch-feature-AMBARI-21348 fb53ed5dd -> 4e668ca68


AMBARI-21411. Backend - Run EU/RU PreChecks during a cross-stack upgrade 
(alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/4e668ca6
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/4e668ca6
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/4e668ca6

Branch: refs/heads/branch-feature-AMBARI-21348
Commit: 4e668ca683e4aacd35aed9274662a749b930fa06
Parents: fb53ed5
Author: Alejandro Fernandez 
Authored: Thu Jun 29 16:33:00 2017 -0700
Committer: Alejandro Fernandez 
Committed: Thu Jul 6 10:43:55 2017 -0700

--
 .../PreUpgradeCheckResourceProvider.java|  9 +++-
 .../internal/UpgradeResourceProvider.java   |  6 +--
 .../ambari/server/state/UpgradeHelper.java  | 29 ++-
 .../4.2.5/services/FLUME/metainfo.xml   |  2 +-
 .../4.2.5/services/JNBG/metainfo.xml| 26 -
 .../4.2.5/services/OOZIE/metainfo.xml   |  2 +-
 .../4.2.5/services/R4ML/metainfo.xml| 37 -
 .../4.2.5/services/SOLR/metainfo.xml| 55 
 .../4.2.5/services/SYSTEMML/metainfo.xml| 37 -
 .../4.2.5/services/TITAN/metainfo.xml   | 40 --
 .../PreUpgradeCheckResourceProviderTest.java| 12 +++--
 .../ambari/server/state/UpgradeHelperTest.java  |  6 ++-
 12 files changed, 40 insertions(+), 221 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/4e668ca6/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/PreUpgradeCheckResourceProvider.java
--
diff --git a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/PreUpgradeCheckResourceProvider.java b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/PreUpgradeCheckResourceProvider.java
index 7ccafb7..689942d 100644
--- a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/PreUpgradeCheckResourceProvider.java
+++ b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/PreUpgradeCheckResourceProvider.java
@@ -47,6 +47,7 @@ import org.apache.ambari.server.state.CheckHelper;
 import org.apache.ambari.server.state.Cluster;
 import org.apache.ambari.server.state.Clusters;
 import org.apache.ambari.server.state.ServiceInfo;
+import org.apache.ambari.server.state.StackId;
 import org.apache.ambari.server.state.UpgradeHelper;
 import org.apache.ambari.server.state.stack.PrerequisiteCheck;
 import org.apache.ambari.server.state.stack.UpgradePack;
@@ -78,6 +79,7 @@ public class PreUpgradeCheckResourceProvider extends ReadOnlyResourceProvider {
   public static final String UPGRADE_CHECK_CHECK_TYPE_PROPERTY_ID = PropertyHelper.getPropertyId("UpgradeChecks", "check_type");
   public static final String UPGRADE_CHECK_CLUSTER_NAME_PROPERTY_ID   = PropertyHelper.getPropertyId("UpgradeChecks", "cluster_name");
   public static final String UPGRADE_CHECK_UPGRADE_TYPE_PROPERTY_ID   = PropertyHelper.getPropertyId("UpgradeChecks", "upgrade_type");
+  public static final String UPGRADE_CHECK_TARGET_STACK_ID= PropertyHelper.getPropertyId("UpgradeChecks", "target_stack");
   /**
* Optional parameter to specify the preferred Upgrade Pack to use.
*/
@@ -114,6 +116,7 @@ public class PreUpgradeCheckResourceProvider extends ReadOnlyResourceProvider {
   UPGRADE_CHECK_CHECK_TYPE_PROPERTY_ID,
   UPGRADE_CHECK_CLUSTER_NAME_PROPERTY_ID,
   UPGRADE_CHECK_UPGRADE_TYPE_PROPERTY_ID,
+  UPGRADE_CHECK_TARGET_STACK_ID,
   UPGRADE_CHECK_UPGRADE_PACK_PROPERTY_ID,
   UPGRADE_CHECK_REPOSITORY_VERSION_PROPERTY_ID);
 
@@ -144,6 +147,7 @@ public class PreUpgradeCheckResourceProvider extends ReadOnlyResourceProvider {
 
 for (Map propertyMap: propertyMaps) {
   final String clusterName = propertyMap.get(UPGRADE_CHECK_CLUSTER_NAME_PROPERTY_ID).toString();
+  StackId targetStack = new StackId(propertyMap.get(UPGRADE_CHECK_TARGET_STACK_ID).toString());
 
   UpgradeType upgradeType = UpgradeType.ROLLING;
   if (propertyMap.containsKey(UPGRADE_CHECK_UPGRADE_TYPE_PROPERTY_ID)) {
@@ -170,7 +174,7 @@ public class PreUpgradeCheckResourceProvider extends ReadOnlyResourceProvider {
 
   if (propertyMap.containsKey(UPGRADE_CHECK_REPOSITORY_VERSION_PROPERTY_ID)) {
 String repositoryVersionId = propertyMap.get(UPGRADE_CHECK_REPOSITORY_VERSION_PROPERTY_ID).toString();
-RepositoryVersionEntity repositoryVersionEntity = repositoryVersionDAO.findByStackNameAndVersion(stackN
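One detail worth noting in the hunk above: targetStack is built from propertyMap.get(UPGRADE_CHECK_TARGET_STACK_ID).toString() with no containsKey guard, unlike the neighboring optional properties. A sketch of the guarded lookup, written in Python for brevity (property values hypothetical):

# propertyMap analogue carrying the new UpgradeChecks/target_stack key.
property_map = {
    "UpgradeChecks/cluster_name": "c1",
    "UpgradeChecks/target_stack": "HDP-2.6",  # hypothetical value
}

target_stack = None
if "UpgradeChecks/target_stack" in property_map:
    # the Java code would parse this string into a StackId
    target_stack = str(property_map["UpgradeChecks/target_stack"])

print(target_stack)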

[15/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/YARN/YARN_widgets.json
--
diff --git a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/YARN/YARN_widgets.json b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/YARN/YARN_widgets.json
new file mode 100755
index 000..fedee4d
--- /dev/null
+++ b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/YARN/YARN_widgets.json
@@ -0,0 +1,617 @@
+{
+  "layouts": [
+{
+  "layout_name": "default_yarn_dashboard",
+  "display_name": "Standard YARN Dashboard",
+  "section_name": "YARN_SUMMARY",
+  "widgetLayoutInfo": [
+{
+  "widget_name": "Memory Utilization",
+  "description": "Percentage of total memory allocated to containers 
running in the cluster.",
+  "widget_type": "GRAPH",
+  "is_visible": true,
+  "metrics": [
+{
+  "name": "yarn.QueueMetrics.Queue=root.AllocatedMB",
+  "metric_path": "metrics/yarn/Queue/root/AllocatedMB",
+  "service_name": "YARN",
+  "component_name": "RESOURCEMANAGER",
+  "host_component_criteria": 
"host_components/HostRoles/ha_state=ACTIVE"
+},
+{
+  "name": "yarn.QueueMetrics.Queue=root.AvailableMB",
+  "metric_path": "metrics/yarn/Queue/root/AvailableMB",
+  "service_name": "YARN",
+  "component_name": "RESOURCEMANAGER",
+  "host_component_criteria": 
"host_components/HostRoles/ha_state=ACTIVE"
+}
+  ],
+  "values": [
+{
+  "name": "Memory Utilization",
+  "value": "${(yarn.QueueMetrics.Queue=root.AllocatedMB / 
(yarn.QueueMetrics.Queue=root.AllocatedMB + 
yarn.QueueMetrics.Queue=root.AvailableMB)) * 100}"
+}
+  ],
+  "properties": {
+"display_unit": "%",
+"graph_type": "LINE",
+"time_range": "1"
+  }
+},
+{
+  "widget_name": "CPU Utilization",
+  "description": "Percentage of total virtual cores allocated to 
containers running in the cluster.",
+  "widget_type": "GRAPH",
+  "is_visible": true,
+  "metrics": [
+{
+  "name": "yarn.QueueMetrics.Queue=root.AllocatedVCores",
+  "metric_path": "metrics/yarn/Queue/root/AllocatedVCores",
+  "service_name": "YARN",
+  "component_name": "RESOURCEMANAGER",
+  "host_component_criteria": 
"host_components/HostRoles/ha_state=ACTIVE"
+},
+{
+  "name": "yarn.QueueMetrics.Queue=root.AvailableVCores",
+  "metric_path": "metrics/yarn/Queue/root/AvailableVCores",
+  "service_name": "YARN",
+  "component_name": "RESOURCEMANAGER",
+  "host_component_criteria": 
"host_components/HostRoles/ha_state=ACTIVE"
+}
+  ],
+  "values": [
+{
+  "name": "Total Allocatable CPU Utilized across NodeManager",
+  "value": "${(yarn.QueueMetrics.Queue=root.AllocatedVCores / 
(yarn.QueueMetrics.Queue=root.AllocatedVCores + 
yarn.QueueMetrics.Queue=root.AvailableVCores)) * 100}"
+}
+  ],
+  "properties": {
+"display_unit": "%",
+"graph_type": "LINE",
+"time_range": "1"
+  }
+},
+{
+  "widget_name": "Container Failures",
+  "description": "Percentage of all containers failing in the 
cluster.",
+  "widget_type": "GRAPH",
+  "is_visible": true,
+  "metrics": [
+{
+  "name": "yarn.NodeManagerMetrics.ContainersFailed._sum",
+  "metric_path": "metrics/yarn/ContainersFailed._sum",
+  "service_name": "YARN",
+  "component_name": "NODEMANAGER"
+},
+{
+  "name": "yarn.NodeManagerMetrics.ContainersCompleted._sum",
+  "metric_path": "metrics/yarn/ContainersCompleted._sum",
+  "service_name": "YARN",
+  "component_name": "NODEMANAGER"
+},
+{
+  "name": "yarn.NodeManagerMetrics.ContainersLaunched._sum",
+  "metric_path": "metrics/yarn/ContainersLaunched._sum",
+  "service_name": "YARN",
+  "component_name": "NODEMANAGER"
+},
+{
+  "name": "yarn.NodeManagerMetrics.ContainersIniting._sum",
+  "metric_path": "metrics/yarn/ContainersIniting._sum",
+  "service_name": "YARN",
+  "component_name": "NODEMANAGER"
+},
+{
+  "name": "yarn.NodeManagerMetrics.ContainersKilled._sum",
+  "metric_

[31/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/etc/hive-schema-0.13.0.oracle.sql
--
diff --git a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/etc/hive-schema-0.13.0.oracle.sql b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/etc/hive-schema-0.13.0.oracle.sql
new file mode 100755
index 000..1654750
--- /dev/null
+++ b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/etc/hive-schema-0.13.0.oracle.sql
@@ -0,0 +1,834 @@
+-- Table SEQUENCE_TABLE is an internal table required by DataNucleus.
+-- NOTE: Some versions of SchemaTool do not automatically generate this table.
+-- See http://www.datanucleus.org/servlet/jira/browse/NUCRDBMS-416
+CREATE TABLE SEQUENCE_TABLE
+(
+   SEQUENCE_NAME VARCHAR2(255) NOT NULL,
+   NEXT_VAL NUMBER NOT NULL
+);
+
+ALTER TABLE SEQUENCE_TABLE ADD CONSTRAINT PART_TABLE_PK PRIMARY KEY (SEQUENCE_NAME);
+
+-- Table NUCLEUS_TABLES is an internal table required by DataNucleus.
+-- This table is required if datanucleus.autoStartMechanism=SchemaTable
+-- NOTE: Some versions of SchemaTool do not automatically generate this table.
+-- See http://www.datanucleus.org/servlet/jira/browse/NUCRDBMS-416
+CREATE TABLE NUCLEUS_TABLES
+(
+   CLASS_NAME VARCHAR2(128) NOT NULL,
+   TABLE_NAME VARCHAR2(128) NOT NULL,
+   TYPE VARCHAR2(4) NOT NULL,
+   OWNER VARCHAR2(2) NOT NULL,
+   VERSION VARCHAR2(20) NOT NULL,
+   INTERFACE_NAME VARCHAR2(255) NULL
+);
+
+ALTER TABLE NUCLEUS_TABLES ADD CONSTRAINT NUCLEUS_TABLES_PK PRIMARY KEY (CLASS_NAME);
+
+-- Table PART_COL_PRIVS for classes [org.apache.hadoop.hive.metastore.model.MPartitionColumnPrivilege]
+CREATE TABLE PART_COL_PRIVS
+(
+PART_COLUMN_GRANT_ID NUMBER NOT NULL,
+"COLUMN_NAME" VARCHAR2(128) NULL,
+CREATE_TIME NUMBER (10) NOT NULL,
+GRANT_OPTION NUMBER (5) NOT NULL,
+GRANTOR VARCHAR2(128) NULL,
+GRANTOR_TYPE VARCHAR2(128) NULL,
+PART_ID NUMBER NULL,
+PRINCIPAL_NAME VARCHAR2(128) NULL,
+PRINCIPAL_TYPE VARCHAR2(128) NULL,
+PART_COL_PRIV VARCHAR2(128) NULL
+);
+
+ALTER TABLE PART_COL_PRIVS ADD CONSTRAINT PART_COL_PRIVS_PK PRIMARY KEY (PART_COLUMN_GRANT_ID);
+
+-- Table CDS.
+CREATE TABLE CDS
+(
+CD_ID NUMBER NOT NULL
+);
+
+ALTER TABLE CDS ADD CONSTRAINT CDS_PK PRIMARY KEY (CD_ID);
+
+-- Table COLUMNS_V2 for join relationship
+CREATE TABLE COLUMNS_V2
+(
+CD_ID NUMBER NOT NULL,
+"COMMENT" VARCHAR2(256) NULL,
+"COLUMN_NAME" VARCHAR2(128) NOT NULL,
+TYPE_NAME VARCHAR2(4000) NOT NULL,
+INTEGER_IDX NUMBER(10) NOT NULL
+);
+
+ALTER TABLE COLUMNS_V2 ADD CONSTRAINT COLUMNS_V2_PK PRIMARY KEY (CD_ID,"COLUMN_NAME");
+
+-- Table PARTITION_KEY_VALS for join relationship
+CREATE TABLE PARTITION_KEY_VALS
+(
+PART_ID NUMBER NOT NULL,
+PART_KEY_VAL VARCHAR2(256) NULL,
+INTEGER_IDX NUMBER(10) NOT NULL
+);
+
+ALTER TABLE PARTITION_KEY_VALS ADD CONSTRAINT PARTITION_KEY_VALS_PK PRIMARY KEY (PART_ID,INTEGER_IDX);
+
+-- Table DBS for classes [org.apache.hadoop.hive.metastore.model.MDatabase]
+CREATE TABLE DBS
+(
+DB_ID NUMBER NOT NULL,
+"DESC" VARCHAR2(4000) NULL,
+DB_LOCATION_URI VARCHAR2(4000) NOT NULL,
+"NAME" VARCHAR2(128) NULL,
+OWNER_NAME VARCHAR2(128) NULL,
+OWNER_TYPE VARCHAR2(10) NULL
+);
+
+ALTER TABLE DBS ADD CONSTRAINT DBS_PK PRIMARY KEY (DB_ID);
+
+-- Table PARTITION_PARAMS for join relationship
+CREATE TABLE PARTITION_PARAMS
+(
+PART_ID NUMBER NOT NULL,
+PARAM_KEY VARCHAR2(256) NOT NULL,
+PARAM_VALUE VARCHAR2(4000) NULL
+);
+
+ALTER TABLE PARTITION_PARAMS ADD CONSTRAINT PARTITION_PARAMS_PK PRIMARY KEY (PART_ID,PARAM_KEY);
+
+-- Table SERDES for classes [org.apache.hadoop.hive.metastore.model.MSerDeInfo]
+CREATE TABLE SERDES
+(
+SERDE_ID NUMBER NOT NULL,
+"NAME" VARCHAR2(128) NULL,
+SLIB VARCHAR2(4000) NULL
+);
+
+ALTER TABLE SERDES ADD CONSTRAINT SERDES_PK PRIMARY KEY (SERDE_ID);
+
+-- Table TYPES for classes [org.apache.hadoop.hive.metastore.model.MType]
+CREATE TABLE TYPES
+(
+TYPES_ID NUMBER NOT NULL,
+TYPE_NAME VARCHAR2(128) NULL,
+TYPE1 VARCHAR2(767) NULL,
+TYPE2 VARCHAR2(767) NULL
+);
+
+ALTER TABLE TYPES ADD CONSTRAINT TYPES_PK PRIMARY KEY (TYPES_ID);
+
+-- Table PARTITION_KEYS for join relationship
+CREATE TABLE PARTITION_KEYS
+(
+TBL_ID NUMBER NOT NULL,
+PKEY_COMMENT VARCHAR2(4000) NULL,
+PKEY_NAME VARCHAR2(128) NOT NULL,
+PKEY_TYPE VARCHAR2(767) NOT NULL,
+INTEGER_IDX NUMBER(10) NOT NULL
+);
+
+ALTER TABLE PARTITION_KEYS ADD CONSTRAINT PARTITION_KEY_PK PRIMARY KEY (TBL_ID,PKEY_NAME);
+
+-- Table ROLES for classes [org.apache.hadoop.hive.metastore.model.MRole]
+CREATE TABLE ROLES
+(
+ROLE_ID NUMBER NOT NULL,
+CREATE_TIME NUMBER (10) NOT NULL,
+OWNER_NAME VARCHAR2(128) NULL,
+ROLE_NAME VARCHAR2(128) NULL
+);
+
+ALTER TABLE ROLES ADD CONSTRAINT ROLES_PK 

[24/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/KAFKA/package/scripts/params.py
--
diff --git a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/KAFKA/package/scripts/params.py b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/KAFKA/package/scripts/params.py
new file mode 100755
index 000..bc19704
--- /dev/null
+++ b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/KAFKA/package/scripts/params.py
@@ -0,0 +1,115 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+from resource_management.libraries.functions import format
+from resource_management.libraries.script.script import Script
+from resource_management.libraries.functions.version import format_stack_version, compare_versions
+from resource_management.libraries.functions.default import default
+from utils import get_bare_principal
+
+from resource_management.libraries.functions.get_stack_version import get_stack_version
+from resource_management.libraries.functions.is_empty import is_empty
+from resource_management.core.logger import Logger
+from resource_management.libraries.resources.hdfs_resource import HdfsResource
+from resource_management.libraries.functions import stack_select
+from resource_management.libraries.functions import conf_select
+from resource_management.libraries.functions import get_kinit_path
+
+import status_params
+
+
+# server configurations
+config = Script.get_config()
+tmp_dir = Script.get_tmp_dir()
+stack_name = default("/hostLevelParams/stack_name", None)
+
+version = default("/commandParams/version", None)
+# Version that is CURRENT.
+current_version = default("/hostLevelParams/current_version", None)
+
+host_sys_prepped = default("/hostLevelParams/host_sys_prepped", False)
+
+stack_version_unformatted = str(config['hostLevelParams']['stack_version'])
+iop_stack_version = format_stack_version(stack_version_unformatted)
+upgrade_direction = default("/commandParams/upgrade_direction", None)
+
+# When downgrading, 'version' and 'current_version' both point to the downgrade-target version;
+# downgrade_from_version provides the source version the downgrade is happening from.
+downgrade_from_version = default("/commandParams/downgrade_from_version", None)
+
+# default kafka parameters
+kafka_home = '/usr/iop/current/kafka-broker'
+kafka_bin = kafka_home+'/bin/kafka'
+conf_dir = "/usr/iop/current/kafka-broker/config"
+
+kafka_user = config['configurations']['kafka-env']['kafka_user']
+kafka_log_dir = config['configurations']['kafka-env']['kafka_log_dir']
+kafka_pid_dir = status_params.kafka_pid_dir
+kafka_pid_file = kafka_pid_dir+"/kafka.pid"
+# This is hardcoded in the Kafka bash process lifecycle, over which we have no control.
+kafka_managed_pid_dir = "/var/run/kafka"
+kafka_managed_log_dir = "/var/log/kafka"
+hostname = config['hostname']
+user_group = config['configurations']['cluster-env']['user_group']
+java64_home = config['hostLevelParams']['java_home']
+kafka_env_sh_template = config['configurations']['kafka-env']['content']
+kafka_hosts = config['clusterHostInfo']['kafka_broker_hosts']
+kafka_hosts.sort()
+
+zookeeper_hosts = config['clusterHostInfo']['zookeeper_hosts']
+zookeeper_hosts.sort()
+
+if ('kafka-log4j' in config['configurations']) and ('content' in config['configurations']['kafka-log4j']):
+  log4j_props = config['configurations']['kafka-log4j']['content']
+else:
+  log4j_props = None
+
+kafka_metrics_reporters=""
+metric_collector_host = ""
+metric_collector_port = ""
+
+ams_collector_hosts = default("/clusterHostInfo/metrics_collector_hosts", [])
+has_metric_collector = not len(ams_collector_hosts) == 0
+
+if has_metric_collector:
+  metric_collector_host = ams_collector_hosts[0]
+  metric_collector_port = default("/configurations/ams-site/timeline.metrics.service.webapp.address", "0.0.0.0:6188")
+  if metric_collector_port and metric_collector_port.find(':') != -1:
+    metric_collector_port = metric_collector_port.split(':')[1]
+
+  if not len(kafka_metrics_reporters) == 0:
+    kafka_metrics_reporters = kafka_metrics_reporters + ','
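The collector-address handling above is a plain host:port split. A standalone sketch using the default value from the snippet:

# Parse the port out of timeline.metrics.service.webapp.address.
metric_collector_port = "0.0.0.0:6188"
if metric_collector_port and metric_collector_port.find(':') != -1:
    metric_collector_port = metric_collector_port.split(':')[1]
print(metric_collector_port)  # 6188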

[38/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HDFS/metrics.json
--
diff --git a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HDFS/metrics.json b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HDFS/metrics.json
new file mode 100755
index 000..b1e95fe
--- /dev/null
+++ b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HDFS/metrics.json
@@ -0,0 +1,7769 @@
+{
+  "NAMENODE": {
+"Component": [
+  {
+"type": "ganglia",
+"metrics": {
+  "default": {
+"metrics/cpu/cpu_idle":{
+  "metric":"cpu_idle",
+  "pointInTime":true,
+  "temporal":true,
+  "amsHostMetric":true
+},
+"metrics/cpu/cpu_nice":{
+  "metric":"cpu_nice",
+  "pointInTime":true,
+  "temporal":true,
+  "amsHostMetric":true
+},
+"metrics/cpu/cpu_system":{
+  "metric":"cpu_system",
+  "pointInTime":true,
+  "temporal":true,
+  "amsHostMetric":true
+},
+"metrics/cpu/cpu_user":{
+  "metric":"cpu_user",
+  "pointInTime":true,
+  "temporal":true,
+  "amsHostMetric":true
+},
+"metrics/cpu/cpu_wio":{
+  "metric":"cpu_wio",
+  "pointInTime":true,
+  "temporal":true,
+  "amsHostMetric":true
+},
+"metrics/disk/disk_free":{
+  "metric":"disk_free",
+  "pointInTime":true,
+  "temporal":true,
+  "amsHostMetric":true
+},
+"metrics/disk/disk_total":{
+  "metric":"disk_total",
+  "pointInTime":true,
+  "temporal":true,
+  "amsHostMetric":true
+},
+"metrics/load/load_fifteen":{
+  "metric":"load_fifteen",
+  "pointInTime":true,
+  "temporal":true,
+  "amsHostMetric":true
+},
+"metrics/load/load_five":{
+  "metric":"load_five",
+  "pointInTime":true,
+  "temporal":true,
+  "amsHostMetric":true
+},
+"metrics/load/load_one":{
+  "metric":"load_one",
+  "pointInTime":true,
+  "temporal":true,
+  "amsHostMetric":true
+},
+"metrics/memory/mem_buffers":{
+  "metric":"mem_buffers",
+  "pointInTime":true,
+  "temporal":true,
+  "amsHostMetric":true
+},
+"metrics/memory/mem_cached":{
+  "metric":"mem_cached",
+  "pointInTime":true,
+  "temporal":true,
+  "amsHostMetric":true
+},
+"metrics/memory/mem_free":{
+  "metric":"mem_free",
+  "pointInTime":true,
+  "temporal":true,
+  "amsHostMetric":true
+},
+"metrics/memory/mem_shared":{
+  "metric":"mem_shared",
+  "pointInTime":true,
+  "temporal":true,
+  "amsHostMetric":true
+},
+"metrics/memory/mem_total":{
+  "metric":"mem_total",
+  "pointInTime":true,
+  "temporal":true,
+  "amsHostMetric":true
+},
+"metrics/memory/swap_free":{
+  "metric":"swap_free",
+  "pointInTime":true,
+  "temporal":true,
+  "amsHostMetric":true
+},
+"metrics/memory/swap_total":{
+  "metric":"swap_total",
+  "pointInTime":true,
+  "temporal":true,
+  "amsHostMetric":true
+},
+"metrics/network/bytes_in":{
+  "metric":"bytes_in",
+  "pointInTime":true,
+  "temporal":true,
+  "amsHostMetric":true
+},
+"metrics/network/bytes_out":{
+  "metric":"bytes_out",
+  "pointInTime":true,
+  "temporal":true,
+  "amsHostMetric":true
+},
+"metrics/network/pkts_in":{
+  "metric":"pkts_in",
+  "pointInTime":true,
+  "temporal":true,
+  "amsHostMetric":true
+},
+"metrics/network/pkts_out":{
+  "metric":"pkts_out",
+  "pointInTime":true,
+  "temporal":true,
+  "amsHostMetric":true
+},
+"metrics/process/proc_run":{
+  "metric":"proc_run",
+  "pointInTime":true,
+  "temporal":true,
+  "amsHostMetric":

[26/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/package/scripts/hive_client.py
--
diff --git a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/package/scripts/hive_client.py b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/package/scripts/hive_client.py
new file mode 100755
index 000..8d586da
--- /dev/null
+++ b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/package/scripts/hive_client.py
@@ -0,0 +1,81 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+import sys
+from resource_management import *
+from resource_management.libraries.functions import conf_select
+from resource_management.libraries.functions import stack_select
+from hive import hive
+from ambari_commons.os_family_impl import OsFamilyImpl
+from ambari_commons import OSConst
+
+class HiveClient(Script):
+  def install(self, env):
+    import params
+    self.install_packages(env)
+    self.configure(env)
+
+  def status(self, env):
+    raise ClientComponentHasNoStatus()
+
+  def configure(self, env):
+    import params
+    env.set_params(params)
+    hive(name='client')
+
+
+@OsFamilyImpl(os_family=OsFamilyImpl.DEFAULT)
+class HiveClientDefault(HiveClient):
+  def get_component_name(self):
+    return "hadoop-client"
+
+  def pre_upgrade_restart(self, env, upgrade_type=None):
+    import params
+    env.set_params(params)
+
+    if params.version and compare_versions(format_stack_version(params.version), '4.1.0.0') >= 0:
+      conf_select.select(params.stack_name, "hive", params.version)
+      conf_select.select(params.stack_name, "hadoop", params.version)
+      stack_select.select("hadoop-client", params.version)
+
+  def pre_upgrade_restart(self, env, upgrade_type=None):
+    """
+    Execute hdp-select before reconfiguring this client to the new HDP version.
+
+    :param env:
+    :param upgrade_type:
+    :return:
+    """
+    Logger.info("Executing Hive HCat Client Stack Upgrade pre-restart")
+
+    import params
+    env.set_params(params)
+
+    # this function should not execute if the version can't be determined or
+    # is not at least HDP 2.2.0.0
+    if not params.version or compare_versions(params.version, "2.2", format=True) < 0:
+      return
+
+    # HCat client doesn't have a first-class entry in hdp-select. Since clients always
+    # update after daemons, this ensures that the hcat directories are correct on hosts
+    # which do not include the WebHCat daemon.
+    stack_select.select("hive-webhcat", params.version)
+
+if __name__ == "__main__":
+  HiveClient().execute()
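Note that HiveClientDefault above defines pre_upgrade_restart twice; in Python the second definition silently replaces the first, so only the hive-webhcat select actually runs. A merged version would look like the sketch below (a sketch, not the committed code):

def pre_upgrade_restart(self, env, upgrade_type=None):
    import params
    env.set_params(params)

    if params.version and compare_versions(format_stack_version(params.version), '4.1.0.0') >= 0:
        conf_select.select(params.stack_name, "hive", params.version)
        conf_select.select(params.stack_name, "hadoop", params.version)
        stack_select.select("hadoop-client", params.version)

    # HCat has no first-class stack-select entry; keep its directories current too.
    if params.version:
        stack_select.select("hive-webhcat", params.version)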

http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/package/scripts/hive_metastore.py
--
diff --git a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/package/scripts/hive_metastore.py b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/package/scripts/hive_metastore.py
new file mode 100755
index 000..0a9a066
--- /dev/null
+++ b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/package/scripts/hive_metastore.py
@@ -0,0 +1,199 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+import os
+
+from resource_management.core.logge

[36/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HDFS/package/scripts/namenode.py
--
diff --git a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HDFS/package/scripts/namenode.py b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HDFS/package/scripts/namenode.py
new file mode 100755
index 000..46bd926
--- /dev/null
+++ b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HDFS/package/scripts/namenode.py
@@ -0,0 +1,319 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import sys
+import os
+import json
+import tempfile
+from resource_management import *
+from resource_management.libraries.functions import conf_select
+from resource_management.libraries.functions import stack_select
+from resource_management.libraries.functions.security_commons import build_expectations, \
+  cached_kinit_executor, get_params_from_filesystem, validate_security_config_properties, \
+  FILE_TYPE_XML
+from resource_management.libraries.functions.version import compare_versions, \
+  format_stack_version
+from resource_management.libraries.functions.format import format
+from resource_management.libraries.functions.check_process_status import check_process_status
+from resource_management.core.exceptions import Fail
+from resource_management.libraries.functions import get_klist_path
+
+import namenode_upgrade
+from hdfs_namenode import namenode, wait_for_safemode_off
+from hdfs import hdfs
+import hdfs_rebalance
+from utils import failover_namenode, get_dfsadmin_base_command
+from ambari_commons.os_family_impl import OsFamilyImpl
+from ambari_commons import OSConst
+
+# hashlib is supplied as of Python 2.5 as the replacement interface for md5
+# and other secure hashes.  In 2.6, md5 is deprecated.  Import hashlib if
+# available, avoiding a deprecation warning under 2.6.  Import md5 otherwise,
+# preserving 2.4 compatibility.
+try:
+  import hashlib
+  _md5 = hashlib.md5
+except ImportError:
+  import md5
+  _md5 = md5.new
+
+class NameNode(Script):
+
+  def get_component_name(self):
+    return "hadoop-hdfs-namenode"
+
+  def install(self, env):
+    import params
+
+    self.install_packages(env)
+    env.set_params(params)
+    #TODO we need this for HA because of manual steps
+    self.configure(env)
+
+  def prepare_rolling_upgrade(self, env):
+    namenode_upgrade.prepare_rolling_upgrade()
+
+  def finalize_rolling_upgrade(self, env):
+    namenode_upgrade.finalize_rolling_upgrade()
+
+  def wait_for_safemode_off(self, env):
+    wait_for_safemode_off(30, True)
+
+  def finalize_non_rolling_upgrade(self, env):
+    namenode_upgrade.finalize_upgrade("nonrolling")
+
+  def finalize_rolling_upgrade(self, env):
+    namenode_upgrade.finalize_upgrade("rolling")
+
+  def pre_upgrade_restart(self, env, upgrade_type=None):
+    Logger.info("Executing Stack Upgrade pre-restart")
+    import params
+    env.set_params(params)
+
+    if params.version and compare_versions(format_stack_version(params.version), '4.0.0.0') >= 0:
+      conf_select.select(params.stack_name, "hadoop", params.version)
+      stack_select.select("hadoop-hdfs-namenode", params.version)
+      #Execute(format("stack-select set hadoop-hdfs-namenode {version}"))
+
+  def start(self, env, upgrade_type=None):
+    import params
+
+    env.set_params(params)
+    self.configure(env)
+    namenode(action="start", upgrade_type=upgrade_type, env=env)
+
+  def post_upgrade_restart(self, env, upgrade_type=None):
+    Logger.info("Executing Stack Upgrade post-restart")
+    import params
+    env.set_params(params)
+
+    dfsadmin_base_command = get_dfsadmin_base_command('hdfs')
+    dfsadmin_cmd = dfsadmin_base_command + " -report -live"
+    Execute(dfsadmin_cmd,
+            user=params.hdfs_user,
+            tries=60,
+            try_sleep=10
+    )
+
+  def stop(self, env, upgrade_type=None):
+    import params
+    env.set_params(params)
+
+    if upgrade_type == "rolling" and params.dfs_ha_enabled:
+      if params.dfs_ha_automatic_failover_enabled:
+        failover_namenode()
+      else:
+        raise Fail("Rolling Upgrade - dfs.ha.

[35/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HDFS/package/scripts/zkfc_slave.py
--
diff --git a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HDFS/package/scripts/zkfc_slave.py b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HDFS/package/scripts/zkfc_slave.py
new file mode 100755
index 000..29ad4d9
--- /dev/null
+++ b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HDFS/package/scripts/zkfc_slave.py
@@ -0,0 +1,148 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+from resource_management import *
+from resource_management.libraries.functions.check_process_status import check_process_status
+from resource_management.libraries.functions.security_commons import build_expectations, \
+  cached_kinit_executor, get_params_from_filesystem, validate_security_config_properties, \
+  FILE_TYPE_XML
+import utils  # this is needed to avoid a circular dependency since utils.py calls this class
+from hdfs import hdfs
+
+
+class ZkfcSlave(Script):
+  def install(self, env):
+    import params
+
+    self.install_packages(env)
+    env.set_params(params)
+
+  def start(self, env, upgrade_type=False):
+    import params
+
+    env.set_params(params)
+    self.configure(env)
+    Directory(params.hadoop_pid_dir_prefix,
+              mode=0755,
+              owner=params.hdfs_user,
+              group=params.user_group
+    )
+
+    # format the znode for this HA setup
+    # only run this format command if the active namenode hostname is set
+    # The Ambari UI HA Wizard prompts the user to run this command
+    # manually, so this guarantees it is only run in the Blueprints case
+    if params.dfs_ha_enabled and \
+       params.dfs_ha_namenode_active is not None:
+      success = initialize_ha_zookeeper(params)
+      if not success:
+        raise Fail("Could not initialize HA state in zookeeper")
+
+    utils.service(
+      action="start", name="zkfc", user=params.hdfs_user, create_pid_dir=True,
+      create_log_dir=True
+    )
+
+  def stop(self, env, upgrade_type=None):
+    import params
+
+    env.set_params(params)
+    utils.service(
+      action="stop", name="zkfc", user=params.hdfs_user, create_pid_dir=True,
+      create_log_dir=True
+    )
+
+  def configure(self, env):
+    hdfs()
+    pass
+
+  def status(self, env):
+    import status_params
+
+    env.set_params(status_params)
+
+    check_process_status(status_params.zkfc_pid_file)
+
+  def security_status(self, env):
+    import status_params
+
+    env.set_params(status_params)
+
+    props_value_check = {"hadoop.security.authentication": "kerberos",
+                         "hadoop.security.authorization": "true"}
+    props_empty_check = ["hadoop.security.auth_to_local"]
+    props_read_check = None
+    core_site_expectations = build_expectations('core-site', props_value_check, props_empty_check,
+                                                props_read_check)
+    hdfs_expectations = {}
+    hdfs_expectations.update(core_site_expectations)
+
+    security_params = get_params_from_filesystem(status_params.hadoop_conf_dir,
+                                                 {'core-site.xml': FILE_TYPE_XML})
+    result_issues = validate_security_config_properties(security_params, hdfs_expectations)
+    if 'core-site' in security_params and 'hadoop.security.authentication' in security_params['core-site'] and \
+        security_params['core-site']['hadoop.security.authentication'].lower() == 'kerberos':
+      if not result_issues:  # If all validations passed successfully
+        if status_params.hdfs_user_principal or status_params.hdfs_user_keytab:
+          try:
+            cached_kinit_executor(status_params.kinit_path_local,
+                                  status_params.hdfs_user,
+                                  status_params.hdfs_user_keytab,
+                                  status_params.hdfs_user_principal,
+                                  status_params.hostname,
+                                  status_params.tmp_dir)
+            self.put_structured_out({"securityState": "SEC
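Condensed, the security_status flow above is: build expectations, read core-site from disk, validate, and only then attempt a cached kinit. A dependency-free sketch of the validation step (values hypothetical):

core_site = {"hadoop.security.authentication": "kerberos",
             "hadoop.security.authorization": "true",
             "hadoop.security.auth_to_local": "DEFAULT"}

expected = {"hadoop.security.authentication": "kerberos",
            "hadoop.security.authorization": "true"}
must_not_be_empty = ["hadoop.security.auth_to_local"]

issues = [k for k, v in expected.items() if core_site.get(k) != v]
issues += [k for k in must_not_be_empty if not core_site.get(k)]

secured = core_site.get("hadoop.security.authentication", "").lower() == "kerberos"
# kinit (and a SECURED status) would follow only when secured and not issues.
print("secured=%s issues=%s" % (secured, issues))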

[34/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/configuration/hive-site.xml
--
diff --git a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/configuration/hive-site.xml b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/configuration/hive-site.xml
new file mode 100755
index 000..5f2bc18
--- /dev/null
+++ b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/configuration/hive-site.xml
@@ -0,0 +1, @@
+
+
+
+
+
+
+  
+hive.cbo.enable
+true
+Flag to control enabling Cost Based Optimizations using Calcite framework.
+  
+
+  
+hive.zookeeper.quorum
+localhost:2181
+
+  List of ZooKeeper servers to talk to. This is needed for: 1.
+  Read/write locks - when hive.lock.manager is set to
+  org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager,
+  2. When HiveServer2 supports service discovery via Zookeeper.
+
+
+  multiLine
+  true
+
+  
+
+  
+hive.metastore.connect.retries
+24
+Number of retries while opening a connection to metastore
+  
+
+  
+hive.metastore.failure.retries
+24
+Number of retries upon failure of Thrift metastore calls
+  
+
+  
+hive.metastore.client.connect.retry.delay
+5s
+
+  Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
+  Number of seconds for the client to wait between consecutive connection attempts
+
+  
+
+  
+hive.metastore.client.socket.timeout
+60
+
+  Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
+  MetaStore Client socket timeout in seconds
+
+  
+
+  
+hive.mapjoin.bucket.cache.size
+1
+
+  Size per reducer. The default is 1G, i.e. if the input size is 10G, it
+  will use 10 reducers.
+
+  
+
+  
+hive.security.authorization.manager
+org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdConfOnlyAuthorizerFactory
+
+  The Hive client authorization manager class name. The user defined authorization class should implement
+  interface org.apache.hadoop.hive.ql.security.authorization.HiveAuthorizationProvider.
+
+  
+
+  
+hive.cluster.delegation.token.store.class
+org.apache.hadoop.hive.thrift.ZooKeeperTokenStore
+The delegation token store implementation.
+  Set to org.apache.hadoop.hive.thrift.ZooKeeperTokenStore for load-balanced cluster.
+  
+
+  
+hive.cluster.delegation.token.store.zookeeper.connectString
+localhost:2181
+The ZooKeeper token store connect string.
+  
+
+  
+hive.server2.support.dynamic.service.discovery
+true
+Whether HiveServer2 supports dynamic service discovery for its clients.
+  To support this, each instance of HiveServer2 currently uses ZooKeeper to register itself,
+  when it is brought up. JDBC/ODBC clients should use the ZooKeeper ensemble: hive.zookeeper.quorum
+  in their connection string.
+
+
+  boolean
+
+  
+
+  
+fs.hdfs.impl.disable.cache
+true
+Disable HDFS filesystem cache.
+  
+
+  
+fs.file.impl.disable.cache
+true
+Disable local filesystem cache.
+  
+
+  
+hive.exec.scratchdir
+/tmp/hive
+HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/ is created, with ${hive.scratch.dir.permission}.
+  
+
+  
+hive.exec.submitviachild
+false
+
+  
+
+  
+hive.exec.submit.local.task.via.child
+true
+
+  Determines whether local tasks (typically mapjoin hashtable generation phase) runs in
+  separate JVM (true recommended) or not.
+  Avoids the overhead of spawning new JVM, but can lead to out-of-memory 
issues.
+
+  
+
+  
+hive.exec.compress.output
+false
+
+  This controls whether the final outputs of a query (to a local/HDFS file or a Hive table) is compressed.
+  The compression codec and other options are determined from Hadoop config variables mapred.output.compress*
+
+  
+
+  
+hive.exec.compress.intermediate
+false
+
+  This controls whether intermediate files produced by Hive between multiple map-reduce jobs are compressed.
+  The compression codec and other options are determined from Hadoop config variables mapred.output.compress*
+
+  
+
+  
+hive.exec.reducers.bytes.per.reducer
+67108864
+size per reducer. The default is 256Mb, i.e. if the input size is 1G, it will use 4 reducers.
+  
+
+  
+hive.exec.reducers.max
+1009
+
+  max number of reducers will be used. If the one specified in the 
configuration parameter

[43/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/FLUME/package/templates/log4j.properties.j2
--
diff --git a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/FLUME/package/templates/log4j.properties.j2 b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/FLUME/package/templates/log4j.properties.j2
new file mode 100755
index 000..3b34db8
--- /dev/null
+++ b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/FLUME/package/templates/log4j.properties.j2
@@ -0,0 +1,67 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+# Define some default values that can be overridden by system properties.
+#
+# For testing, it may also be convenient to specify
+# -Dflume.root.logger=DEBUG,console when launching flume.
+
+#flume.root.logger=DEBUG,console
+flume.root.logger=INFO,LOGFILE
+flume.log.dir={{flume_log_dir}}
+flume.log.file=flume-{{agent_name}}.log
+
+log4j.logger.org.apache.flume.lifecycle = INFO
+log4j.logger.org.jboss = WARN
+log4j.logger.org.mortbay = INFO
+log4j.logger.org.apache.avro.ipc.NettyTransceiver = WARN
+log4j.logger.org.apache.hadoop = INFO
+
+# Define the root logger to the system property "flume.root.logger".
+log4j.rootLogger=${flume.root.logger}
+
+
+# Stock log4j rolling file appender
+# Default log rotation configuration
+log4j.appender.LOGFILE=org.apache.log4j.RollingFileAppender
+log4j.appender.LOGFILE.MaxFileSize=100MB
+log4j.appender.LOGFILE.MaxBackupIndex=10
+log4j.appender.LOGFILE.File=${flume.log.dir}/${flume.log.file}
+log4j.appender.LOGFILE.layout=org.apache.log4j.PatternLayout
+log4j.appender.LOGFILE.layout.ConversionPattern=%d{dd MMM yyyy HH:mm:ss,SSS} %-5p [%t] (%C.%M:%L) %x - %m%n
+
+
+# Warning: If you enable the following appender it will fill up your disk if you don't have a cleanup job!
+# This uses the updated rolling file appender from log4j-extras that supports a reliable time-based rolling policy.
+# See http://logging.apache.org/log4j/companions/extras/apidocs/org/apache/log4j/rolling/TimeBasedRollingPolicy.html
+# Add "DAILY" to flume.root.logger above if you want to use this
+log4j.appender.DAILY=org.apache.log4j.rolling.RollingFileAppender
+log4j.appender.DAILY.rollingPolicy=org.apache.log4j.rolling.TimeBasedRollingPolicy
+log4j.appender.DAILY.rollingPolicy.ActiveFileName=${flume.log.dir}/${flume.log.file}
+log4j.appender.DAILY.rollingPolicy.FileNamePattern=${flume.log.dir}/${flume.log.file}.%d{yyyy-MM-dd}
+log4j.appender.DAILY.layout=org.apache.log4j.PatternLayout
+log4j.appender.DAILY.layout.ConversionPattern=%d{dd MMM yyyy HH:mm:ss,SSS} %-5p [%t] (%C.%M:%L) %x - %m%n
+
+
+# console
+# Add "console" to flume.root.logger above if you want to use this
+log4j.appender.console=org.apache.log4j.ConsoleAppender
+log4j.appender.console.target=System.err
+log4j.appender.console.layout=org.apache.log4j.PatternLayout
+log4j.appender.console.layout.ConversionPattern=%d (%t) [%p - %l] %m%n
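The {{...}} placeholders above are filled in when Ambari renders this template. A minimal sketch of that substitution using the jinja2 library directly (values hypothetical; Ambari's own template machinery does the equivalent):

from jinja2 import Template

tmpl = Template("flume.log.dir={{flume_log_dir}}\nflume.log.file=flume-{{agent_name}}.log")
print(tmpl.render(flume_log_dir="/var/log/flume", agent_name="a1"))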

http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HBASE/alerts.json
--
diff --git a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HBASE/alerts.json b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HBASE/alerts.json
new file mode 100755
index 000..12846ca
--- /dev/null
+++ b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HBASE/alerts.json
@@ -0,0 +1,157 @@
+{
+  "HBASE": {
+"service": [
+  {
+"name": "hbase_regionserver_process_percent",
+"label": "Percent RegionServers Available",
+"description": "This service-level alert is triggered if the 
configured percentage of RegionServer processes cannot be determined to be up 
and listening on the network for the configured warning and critical 
thresholds. It aggregates the results of RegionServer process down checks.",
+"interval": 1,
+"scope": "SERVICE",
+"enabled": true,
+"source": {
+  "type": "AGGREGATE",
+  "alert_name": 

[22/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/OOZIE/configuration/oozie-env.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/OOZIE/configuration/oozie-env.xml
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/OOZIE/configuration/oozie-env.xml
new file mode 100755
index 0000000..f4b161e
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/OOZIE/configuration/oozie-env.xml
@@ -0,0 +1,189 @@
+
+
+
+
+
+  
+oozie_user
+Oozie User
+oozie
+USER
+Oozie User.
+  
+  
+oozie_admin_users
+{oozie_user}, oozie-admin
+Oozie admin users.
+  
+  
+oozie_database
+Oozie Database
+New Derby Database
+Oozie Server Database.
+
+  false
+
+  
+  
+oozie_derby_database
+Derby
+Oozie Derby Database
+  
+  
+oozie_data_dir
+Oozie Data Dir
+/hadoop/oozie/data
+Data directory in which the Oozie DB exists
+
+  directory
+  true
+  false
+
+  
+  
+oozie_log_dir
+Oozie Log Dir
+/var/log/oozie
+Directory for oozie logs
+
+  directory
+  true
+  false
+
+  
+  
+oozie_pid_dir
+Oozie PID Dir
+/var/run/oozie
+Directory in which the pid files for oozie 
reside.
+
+  directory
+  true
+  false
+
+  
+  
+oozie_admin_port
+Oozie Server Admin Port
+11001
+The admin port Oozie server runs.
+
+  false
+  int
+
+  
+  
+oozie_initial_heapsize
+1024
+Oozie initial heap size.
+  
+  
+oozie_heapsize
+2048
+Oozie heap size.
+  
+  
+oozie_permsize
+256
+Oozie permanent generation size.
+  
+
+
+  
+  
+content
+This is the jinja template for oozie-env.sh file
+
+#!/bin/bash
+
+if [ -d "/usr/lib/bigtop-tomcat" ]; then
+  export OOZIE_CONFIG=${OOZIE_CONFIG:-{{conf_dir}}}
+  export CATALINA_BASE={{oozie_server_dir}}
+  export CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
+  export OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
+fi
+
+#Set JAVA HOME
+export JAVA_HOME={{java_home}}
+
+export JRE_HOME=${JAVA_HOME}
+
+# Set Oozie specific environment variables here.
+
+# Settings for the Embedded Tomcat that runs Oozie
+# Java System properties for Oozie should be specified in this variable
+# This is needed so that Oozie does not run into OOM or GC Overhead limit
+# exceeded exceptions. If the oozie server is handling large number of
+# workflows/coordinator jobs, the memory settings may need to be revised
+
+{% if java_version < 8 %}
+export CATALINA_OPTS="$CATALINA_OPTS -Xms{{oozie_initial_heapsize}} 
-Xmx{{oozie_heapsize}} -XX:MaxPermSize={{oozie_permsize}}"
+{% else %}
+export CATALINA_OPTS="$CATALINA_OPTS -Xms{{oozie_initial_heapsize}} 
-Xmx{{oozie_heapsize}} -XX:MaxMetaspaceSize={{oozie_permsize}}"
+{% endif %}
+# Oozie configuration file to load from Oozie configuration directory
+#
+# export OOZIE_CONFIG_FILE=oozie-site.xml
+
+# Oozie logs directory
+#
+export OOZIE_LOG={{oozie_log_dir}}
+
+# Oozie pid directory
+#
+export CATALINA_PID={{pid_file}}
+
+#Location of the data for oozie
+export OOZIE_DATA={{oozie_data_dir}}
+
+# Oozie Log4J configuration file to load from Oozie configuration directory
+#
+# export OOZIE_LOG4J_FILE=oozie-log4j.properties
+
+# Reload interval of the Log4J configuration file, in seconds
+#
+# export OOZIE_LOG4J_RELOAD=10
+
+# The port Oozie server runs
+#
+export OOZIE_HTTP_PORT={{oozie_server_port}}
+
+# The admin port Oozie server runs
+#
+export OOZIE_ADMIN_PORT={{oozie_server_admin_port}}
+
+# The host name Oozie server runs on
+#
+# export OOZIE_HTTP_HOSTNAME=`hostname -f`
+
+# The base URL for callback URLs to Oozie
+#
+# export 
OOZIE_BASE_URL="http://${OOZIE_HTTP_HOSTNAME}:${OOZIE_HTTP_PORT}/oozie"
+export JAVA_LIBRARY_PATH={{hadoop_lib_home}}/native
+
+# At least 1 minute of retry time to account for server downtime during
+# upgrade/downgrade
+export OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} 
-Doozie.connection.retry.count=5 "
+
+
+  
+
+
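
The java_version branch in the template above picks -XX:MaxPermSize for JDK 7 and -XX:MaxMetaspaceSize for JDK 8+, since the permanent generation was removed in Java 8. A minimal sketch of how such a conditional renders (requires the jinja2 package; the values are illustrative, not from this commit):

    from jinja2 import Template

    tpl = Template(
        '{% if java_version < 8 %}'
        '-Xms{{oozie_initial_heapsize}} -Xmx{{oozie_heapsize}} -XX:MaxPermSize={{oozie_permsize}}'
        '{% else %}'
        '-Xms{{oozie_initial_heapsize}} -Xmx{{oozie_heapsize}} -XX:MaxMetaspaceSize={{oozie_permsize}}'
        '{% endif %}')

    print(tpl.render(java_version=8, oozie_initial_heapsize="1024m",
                     oozie_heapsize="2048m", oozie_permsize="256m"))
    # -Xms1024m -Xmx2048m -XX:MaxMetaspaceSize=256m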

http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/OOZIE/configuration/oozie-log4j.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/OOZIE/configuration/oozie-log4j.xml
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/OOZIE/configuration/oozie-log4j.xml
new file mode 100755
index 0000000..2ca87c3
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/OOZIE/configuration/oozie-log4j.xml
@@ -0,0 +1,146 @@
+
+
+
+
+
+
+  
+content
+Custom log4j.properties
+
+  #
+  # Licensed to the Apache Software Foundation (ASF) under one
+  # or more

[18/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/SPARK/package/scripts/spark.py
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/SPARK/package/scripts/spark.py
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/SPARK/package/scripts/spark.py
new file mode 100755
index 0000000..86c738f
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/SPARK/package/scripts/spark.py
@@ -0,0 +1,351 @@
+#!/usr/bin/python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import sys
+import fileinput
+import shutil
+import os
+from resource_management import *
+from resource_management.core.exceptions import ComponentIsNotRunning
+from resource_management.core.logger import Logger
+from resource_management.core import shell
+
+
+def spark(env, type=None, action=None):
+  import params
+
+  env.set_params(params)
+
+  Directory(params.spark_conf,
+owner=params.spark_user,
+create_parents=True,
+group=params.user_group
+  )
+
+  Directory([params.spark_pid_dir, params.spark_log_dir],
+owner=params.spark_user,
+group=params.user_group,
+mode=0775,
+create_parents=True
+  )
+  if type == 'server':
+if action == 'start' or action == 'config':
+  params.HdfsResource(params.spark_hdfs_user_dir,
+ type="directory",
+ action="create_on_execute",
+ owner=params.spark_user,
+ mode=0775
+  )
+  params.HdfsResource(None, action="execute")
+
+  #file_path = params.spark_conf + '/spark-defaults.conf'
+  #create_file(file_path)
+
+  #write_properties_to_file(file_path, 
params.config['configurations']['spark-defaults'])
+  configFile("spark-defaults.conf", template_name="spark-defaults.conf.j2")
+
+  # create spark-env.sh in conf dir
+  File(os.path.join(params.spark_conf, 'spark-env.sh'),
+   owner=params.spark_user,
+   group=params.user_group,
+   content=InlineTemplate(params.spark_env_sh)
+  )
+
+  # create log4j.properties in conf dir
+  File(os.path.join(params.spark_conf, 'spark-log4j.properties'),
+   owner=params.spark_user,
+   group=params.user_group,
+   content=InlineTemplate(params.spark_log4j)
+  )
+
+  #create metrics.properties in conf dir
+#  File(os.path.join(params.spark_conf, 'metrics.properties'),
+#   owner=params.spark_user,
+#   group=params.spark_group,
+#   content=InlineTemplate(params.spark_metrics_properties)
+#  )
+
+  # create java-opts in etc/spark/conf dir for iop.version
+  File(os.path.join(params.spark_conf, 'java-opts'),
+   owner=params.spark_user,
+   group=params.user_group,
+   content=params.spark_javaopts_properties
+  )
+
+  if params.is_hive_installed:
+hive_config = get_hive_config()
+XmlConfig("hive-site.xml",
+  conf_dir=params.spark_conf,
+  configurations=hive_config,
+  owner=params.spark_user,
+  group=params.user_group,
+  mode=0640)
+def get_hive_config():
+  import params
+  # MUST CONVERT BOOLEANS TO LOWERCASE STRINGS
+  hive_conf_dict = dict()
+  hive_conf_dict['hive.metastore.uris'] = 
params.config['configurations']['hive-site']['hive.metastore.uris']
+  hive_conf_dict['ambari.hive.db.schema.name'] = 
params.config['configurations']['hive-site']['ambari.hive.db.schema.name']
+  hive_conf_dict['datanucleus.cache.level2.type'] = 
params.config['configurations']['hive-site']['datanucleus.cache.level2.type']
+  hive_conf_dict['fs.file.impl.disable.cache'] = 
str(params.config['configurations']['hive-site']['fs.file.impl.disable.cache']).lower()
+  hive_conf_dict['fs.hdfs.impl.disable.cache'] = 
str(params.config['configurations']['hive-site']['fs.hdfs.impl.disable.cache']).lower()
+  hive_conf_dict['hive.auto.convert.join'] = 
str(params.config['configurations']['hive-site']['hive.auto.convert.join']).lower()
+  hive_conf_dict['hive.auto.convert.join.noconditionaltask'] = 
str(params.config['configurations']['hive-site']['hive.auto.convert.join.noconditional
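
get_hive_config() above copies selected hive-site values and lower-cases the boolean ones, because the generated hive-site.xml expects the strings "true"/"false" rather than Python's True/False. The conversion in isolation (keys and values are illustrative samples):

    def to_hive_value(v):
        # XmlConfig writes values verbatim, so Python booleans must become
        # the lowercase strings "true"/"false".
        return str(v).lower() if isinstance(v, bool) else str(v)

    hive_site = {"hive.auto.convert.join": True,
                 "hive.metastore.uris": "thrift://metastore:9083"}  # samples
    hive_conf_dict = {k: to_hive_value(v) for k, v in hive_site.items()}
    # {'hive.auto.convert.join': 'true',
    #  'hive.metastore.uris': 'thrift://metastore:9083'}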

[04/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.1/services/YARN/metainfo.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.1/services/YARN/metainfo.xml
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.1/services/YARN/metainfo.xml
new file mode 100755
index 0000000..ea4a822
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.1/services/YARN/metainfo.xml
@@ -0,0 +1,82 @@
+
+
+
+  2.0
+  
+
+  YARN
+  2.7.1
+  
+  
+
+  any
+  
+
+  hadoop_4_1_*-yarn
+
+
+  hadoop_4_1_*-mapreduce
+
+
+  hadoop_4_1_*-hdfs
+
+  
+
+  
+  
+  
+
+   theme.json
+   true
+
+  
+  
+
+
+
+  MAPREDUCE2
+  MapReduce2
+  2.7.1
+  configuration-mapred
+  
+  
+
+  any
+  
+
+  hadoop_4_1_*-mapreduce
+
+  
+
+  
+  
+  themes-mapred
+  
+
+  theme.json
+  true
+
+  
+  
+
+
+  
+
+
+
+

http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.1/services/YARN/themes-mapred/theme.json
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.1/services/YARN/themes-mapred/theme.json
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.1/services/YARN/themes-mapred/theme.json
new file mode 100755
index 0000000..5019447
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.1/services/YARN/themes-mapred/theme.json
@@ -0,0 +1,132 @@
+{
+  "name": "default",
+  "description": "Default theme for MAPREDUCE service",
+  "configuration": {
+"layouts": [
+  {
+"name": "default",
+"tabs": [
+  {
+"name": "settings",
+"display-name": "Settings",
+"layout": {
+  "tab-columns": "1",
+  "tab-rows": "1",
+  "sections": [
+{
+  "name": "section-mr-scheduler",
+  "display-name": "MapReduce",
+  "row-index": "0",
+  "column-index": "0",
+  "row-span": "1",
+  "column-span": "1",
+  "section-columns": "3",
+  "section-rows": "1",
+  "subsections": [
+{
+  "name": "subsection-mr-scheduler-row1-col1",
+  "display-name": "MapReduce Framework",
+  "row-index": "0",
+  "column-index": "0",
+  "row-span": "1",
+  "column-span": "1"
+},
+{
+  "name": "subsection-mr-scheduler-row1-col2",
+  "row-index": "0",
+  "column-index": "1",
+  "row-span": "1",
+  "column-span": "1"
+},
+{
+  "name": "subsection-mr-scheduler-row1-col3",
+  "row-index": "0",
+  "column-index": "2",
+  "row-span": "1",
+  "column-span": "1"
+},
+{
+  "name": "subsection-mr-scheduler-row2-col1",
+  "display-name": "MapReduce AppMaster",
+  "row-index": "1",
+  "column-index": "0",
+  "row-span": "1",
+  "column-span": "3"
+}
+  ]
+}
+  ]
+}
+  }
+]
+  }
+],
+"placement": {
+  "configuration-layout": "default",
+  "configs": [
+{
+  "config": "mapred-site/mapreduce.map.memory.mb",
+  "subsection-name": "subsection-mr-scheduler-row1-col1"
+},
+{
+  "config": "mapred-site/mapreduce.reduce.memory.mb",
+  "subsection-name": "subsection-mr-scheduler-row1-col2"
+},
+{
+  "config": "mapred-site/yarn.app.mapreduce.am.resource.mb",
+  "subsection-name": "subsection-mr-scheduler-row2-col1"
+},
+{
+  "config": "mapred-site/mapreduce.task.io.sort.mb",
+  "subsection-name": "subsection-mr-scheduler-row1-col3"
+}
+  ]
+},
+"widgets": [
+  {
+"config": "mapred-site/mapreduce.map.memory.mb",
+"widget": {
+  "type": "slider",
+  "units": [
+{
+

[14/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/YARN/configuration/yarn-site.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/YARN/configuration/yarn-site.xml
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/YARN/configuration/yarn-site.xml
new file mode 100755
index 0000000..397f96f
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/YARN/configuration/yarn-site.xml
@@ -0,0 +1,748 @@
+
+
+
+
+
+
+http://www.w3.org/2001/XInclude
+
+  
+
+  
+yarn.resourcemanager.hostname
+localhost
+The hostname of the RM.
+  
+
+  
+yarn.resourcemanager.resource-tracker.address
+localhost:8025
+ The address of ResourceManager. 
+  
+
+  
+yarn.resourcemanager.scheduler.address
+localhost:8030
+The address of the scheduler interface.
+  
+
+  
+yarn.resourcemanager.address
+localhost:8050
+
+  The address of the applications manager interface in the
+  RM.
+
+  
+
+  
+yarn.resourcemanager.admin.address
+localhost:8141
+The address of the RM admin interface.
+  
+
+  
+yarn.resourcemanager.scheduler.class
+
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
+The class to use as the resource scheduler.
+  
+
+  
+yarn.scheduler.minimum-allocation-mb
+512
+
+  The minimum allocation for every container request at the RM,
+  in MBs. Memory requests lower than this won't take effect,
+  and the specified value will get allocated at minimum.
+
+Minimum Container Size (Memory)
+
+  int
+  0
+  5120
+  MB
+  250
+
+
+  
+yarn-site
+yarn.nodemanager.resource.memory-mb
+  
+
+  
+
+  
+yarn.scheduler.maximum-allocation-mb
+2048
+
+  The maximum allocation for every container request at the RM,
+  in MBs. Memory requests higher than this won't take effect,
+  and will get capped to this value.
+
+Maximum Container Size (Memory)
+
+  int
+  0
+  5120
+  MB
+  256
+
+
+  
+yarn-site
+yarn.nodemanager.resource.memory-mb
+  
+
+  
+
+  
+yarn.acl.enable
+false
+ Are acls enabled. 
+
+  boolean
+
+  
+
+  
+yarn.admin.acl
+
+ ACL of who can be admin of the YARN cluster. 
+
+  true
+
+  
+
+  
+
+  
+yarn.nodemanager.address
+0.0.0.0:45454
+The address of the container manager in the NM.
+  
+
+  
+yarn.nodemanager.resource.memory-mb
+5120
+Amount of physical memory, in MB, that can be allocated
+  for containers.
+Memory allocated for all YARN containers on a 
node
+
+  int
+  0
+  268435456
+  MB
+  250
+
+  
+
+  
+yarn.application.classpath
+
/etc/hadoop/conf,/usr/iop/current/hadoop-client/*,/usr/iop/current/hadoop-client/lib/*,/usr/iop/current/hadoop-hdfs-client/*,/usr/iop/current/hadoop-hdfs-client/lib/*,/usr/iop/current/hadoop-yarn-client/*,/usr/iop/current/hadoop-yarn-client/lib/*
+Classpath for typical applications.
+  
+
+  
+yarn.nodemanager.vmem-pmem-ratio
+5
+Ratio between virtual memory to physical memory when
+  setting memory limits for containers. Container allocations are
+  expressed in terms of physical memory, and virtual memory usage
+  is allowed to exceed this allocation by this ratio.
+
+Virtual Memory Ratio
+
+  float
+  0.1
+  5.0
+  0.1
+
+  
+
+  
+yarn.nodemanager.container-executor.class
+
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor
+ContainerExecutor for launching containers
+
+  
+yarn-env
+yarn_cgroups_enabled
+  
+
+  
+
+  
+yarn.nodemanager.linux-container-executor.group
+hadoop
+Unix group of the NodeManager
+
+  
+yarn-env
+yarn_cgroups_enabled
+  
+
+  
+
+  
+yarn.nodemanager.aux-services
+mapreduce_shuffle
+Auxiliary services of NodeManager. A valid service name 
should only contain a-zA-Z0-9_ and can
+  not start with numbers
+  
+
+  
+yarn.nodemanager.aux-services.mapreduce_shuffle.class
+org.apache.hadoop.mapred.ShuffleHandler
+The auxiliary service class to use 
+  
+
+  
+yarn.nodemanager.log-dirs
+/hadoop/yarn/log
+
+  Where to store container logs. An application's localized log directory
+  will be found in ${yarn.nodemanager.log-dirs}/application_${appid}.
+  Individual containers' log directories will be below this, in directories
+  named container_{$contid}. Each container directory will contain the 
files
+  stderr, stdin, and syslog generated by that container.
+
+
+  directories
+
+  
+
+  
+
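
The memory properties above interact: every container request is rounded into the range [yarn.scheduler.minimum-allocation-mb, yarn.scheduler.maximum-allocation-mb], and the total across containers on a node is bounded by yarn.nodemanager.resource.memory-mb. A rough sketch of the rounding the scheduler applies, using the defaults from this file (a simplified model, not the actual CapacityScheduler code):

    def normalize_container_mb(requested, minimum=512, maximum=2048):
        # Round the request up to a multiple of the minimum allocation,
        # then clamp to the configured maximum (simplified).
        rounded = ((requested + minimum - 1) // minimum) * minimum
        return min(max(rounded, minimum), maximum)

    print(normalize_container_mb(700))   # 1024
    print(normalize_container_mb(4096))  # 2048 (capped at the maximum)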

[23/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/KERBEROS/package/templates/kdc_conf.j2
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/KERBEROS/package/templates/kdc_conf.j2
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/KERBEROS/package/templates/kdc_conf.j2
new file mode 100755
index 0000000..f78adc7
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/KERBEROS/package/templates/kdc_conf.j2
@@ -0,0 +1,30 @@
+{#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#}
+[kdcdefaults]
+  kdc_ports = {{kdcdefaults_kdc_ports}}
+  kdc_tcp_ports = {{kdcdefaults_kdc_tcp_ports}}
+
+[realms]
+  {{realm}} = {
+acl_file = {{kadm5_acl_path}}
+dict_file = /usr/share/dict/words
+admin_keytab = {{kadm5_acl_dir}}/kadm5.keytab
+supported_enctypes = {{encryption_types}}
+  }
+
+{# Additional realm declarations should be placed below #}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/KERBEROS/package/templates/krb5_conf.j2
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/KERBEROS/package/templates/krb5_conf.j2
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/KERBEROS/package/templates/krb5_conf.j2
new file mode 100755
index 0000000..733d38a
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/KERBEROS/package/templates/krb5_conf.j2
@@ -0,0 +1,55 @@
+{#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#}
+[libdefaults]
+  renew_lifetime = 7d
+  forwardable = true
+  default_realm = {{realm|upper()}}
+  ticket_lifetime = 24h
+  dns_lookup_realm = false
+  dns_lookup_kdc = false
+  #default_tgs_enctypes = {{encryption_types}}
+  #default_tkt_enctypes = {{encryption_types}}
+
+{% if domains %}
+[domain_realm]
+{% for domain in domains.split(',') %}
+  {{domain}} = {{realm|upper()}}
+{% endfor %}
+{% endif %}
+
+[logging]
+  default = FILE:/var/log/krb5kdc.log
+  admin_server = FILE:/var/log/kadmind.log
+  kdc = FILE:/var/log/krb5kdc.log
+
+[realms]
+  {{realm}} = {
+{%- if kdc_hosts|length > 0 -%}
+{%- set kdc_host_list = kdc_hosts.split(',')  -%}
+{%- if kdc_host_list and kdc_host_list|length > 0 %}
+admin_server = {{admin_server_host|default(kdc_host_list[0]|trim(), True)}}
+{%- if kdc_host_list -%}
+{% for kdc_host in kdc_host_list %}
+kdc = {{kdc_host|trim()}}
+{%- endfor -%}
+{% endif %}
+{%- endif %}
+{%- endif %}
+  }
+
+{# Append additional realm declarations below #}
\ No newline at end of file
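
The [realms] block above splits the comma-separated kdc_hosts value, emits one kdc line per host, and defaults admin_server to the first host when admin_server_host is unset. The same logic in plain Python (hostnames are hypothetical examples):

    def render_realm(realm, kdc_hosts, admin_server_host=None):
        hosts = [h.strip() for h in kdc_hosts.split(",") if h.strip()]
        lines = ["  %s = {" % realm]
        if hosts:
            lines.append("    admin_server = %s" % (admin_server_host or hosts[0]))
            lines.extend("    kdc = %s" % h for h in hosts)
        lines.append("  }")
        return "\n".join(lines)

    print(render_realm("EXAMPLE.COM", "kdc1.example.com, kdc2.example.com"))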

http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/KNOX/alerts.json
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/KNOX/alerts.json
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/KNOX/alerts.json
new file mode 100755
index 0000000..4986e04
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/KNOX/alerts.json
@@ -0,0 +1,32 @@
+{
+  "KNOX": {
+"service": [],
+"KNOX_GATEWAY": [
+  {
+"name": "knox_gateway_proces

[17/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/YARN/MAPREDUCE2_metrics.json
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/YARN/MAPREDUCE2_metrics.json
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/YARN/MAPREDUCE2_metrics.json
new file mode 100755
index 0000000..f44e3b2
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/YARN/MAPREDUCE2_metrics.json
@@ -0,0 +1,2596 @@
+{
+  "HISTORYSERVER": {
+"Component": [
+  {
+"type": "ganglia",
+"metrics": {
+  "default": {
+"metrics/jvm/memHeapCommittedM": {
+  "metric": "jvm.JvmMetrics.MemHeapCommittedM",
+  "pointInTime": false,
+  "temporal": true
+},
+"metrics/jvm/threadsRunnable": {
+  "metric": "jvm.JvmMetrics.ThreadsRunnable",
+  "pointInTime": false,
+  "temporal": true
+},
+"metrics/jvm/threadsNew": {
+  "metric": "jvm.JvmMetrics.ThreadsNew",
+  "pointInTime": false,
+  "temporal": true
+},
+"metrics/rpc/rpcAuthorizationFailures": {
+  "metric": "rpc.metrics.RpcAuthorizationFailures",
+  "pointInTime": false,
+  "temporal": true
+},
+"metrics/ugi/loginSuccess_avg_time": {
+  "metric": "ugi.ugi.LoginSuccessAvgTime",
+  "pointInTime": false,
+  "temporal": true
+},
+"metrics/rpc/RpcQueueTime_avg_time": {
+  "metric": "rpc.rpc.RpcQueueTimeAvgTime",
+  "pointInTime": false,
+  "temporal": true
+},
+"metrics/rpc/SentBytes": {
+  "metric": "rpc.rpc.SentBytes",
+  "pointInTime": false,
+  "temporal": true
+},
+"metrics/jvm/memNonHeapUsedM": {
+  "metric": "jvm.JvmMetrics.MemNonHeapUsedM",
+  "pointInTime": false,
+  "temporal": true
+},
+"metrics/jvm/logWarn": {
+  "metric": "jvm.JvmMetrics.LogWarn",
+  "pointInTime": false,
+  "temporal": true
+},
+"metrics/jvm/threadsTimedWaiting": {
+  "metric": "jvm.JvmMetrics.ThreadsTimedWaiting",
+  "pointInTime": false,
+  "temporal": true
+},
+"metrics/process/proc_run": {
+  "metric": "proc_run",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/jvm/gcCount": {
+  "metric": "jvm.JvmMetrics.GcCount",
+  "pointInTime": false,
+  "temporal": true
+},
+"metrics/rpc/ReceivedBytes": {
+  "metric": "rpc.rpc.ReceivedBytes",
+  "pointInTime": false,
+  "temporal": true
+},
+"metrics/memory/swap_total": {
+  "metric": "swap_total",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/jvm/threadsBlocked": {
+  "metric": "jvm.JvmMetrics.ThreadsBlocked",
+  "pointInTime": false,
+  "temporal": true
+},
+"metrics/rpc/RpcQueueTime_num_ops": {
+  "metric": "rpc.rpc.RpcQueueTimeNumOps",
+  "pointInTime": false,
+  "temporal": true
+},
+"metrics/process/proc_total": {
+  "metric": "proc_total",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/disk/part_max_used": {
+  "metric": "part_max_used",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/rpc/NumOpenConnections": {
+  "metric": "rpc.rpc.NumOpenConnections",
+  "pointInTime": false,
+  "temporal": true
+},
+"metrics/jvm/memHeapUsedM": {
+  "metric": "jvm.JvmMetrics.MemHeapUsedM",
+  "pointInTime": false,
+  "temporal": true
+},
+"metrics/jvm/threadsWaiting": {
+  "metric": "jvm.JvmMetrics.ThreadsWaiting",
+  "pointInTime": false,
+  "temporal": true
+},
+"metrics/memory/mem_buffers": {
+  "metric": "mem_buffers",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/ugi/loginSuccess_num_ops": {
+  "metric": "ugi.ugi.LoginSuccessNumOps",
+  "pointInTime": false,
+  "temporal": true
+},
+"metrics/jvm/gcTimeMillis": {
+   

[42/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HBASE/metrics.json
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HBASE/metrics.json
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HBASE/metrics.json
new file mode 100755
index 0000000..a309ec7
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HBASE/metrics.json
@@ -0,0 +1,9410 @@
+{
+  "HBASE_REGIONSERVER": {
+"Component": [
+  {
+"type": "ganglia",
+"metrics": {
+  "default": {
+"metrics/cpu/cpu_idle":{
+  "metric":"cpu_idle",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/cpu/cpu_nice":{
+  "metric":"cpu_nice",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/cpu/cpu_system":{
+  "metric":"cpu_system",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/cpu/cpu_user":{
+  "metric":"cpu_user",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/cpu/cpu_wio":{
+  "metric":"cpu_wio",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/disk/disk_free":{
+  "metric":"disk_free",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/disk/disk_total":{
+  "metric":"disk_total",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/disk/read_bps":{
+  "metric":"read_bps",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/disk/write_bps":{
+  "metric":"write_bps",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/load/load_fifteen":{
+  "metric":"load_fifteen",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/load/load_five":{
+  "metric":"load_five",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/load/load_one":{
+  "metric":"load_one",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/memory/mem_buffers":{
+  "metric":"mem_buffers",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/memory/mem_cached":{
+  "metric":"mem_cached",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/memory/mem_free":{
+  "metric":"mem_free",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/memory/mem_shared":{
+  "metric":"mem_shared",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/memory/mem_total":{
+  "metric":"mem_total",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/memory/swap_free":{
+  "metric":"swap_free",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/memory/swap_total":{
+  "metric":"swap_total",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/network/bytes_in":{
+  "metric":"bytes_in",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/network/bytes_out":{
+  "metric":"bytes_out",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/network/pkts_in":{
+  "metric":"pkts_in",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/network/pkts_out":{
+  "metric":"pkts_out",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/process/proc_run":{
+  "metric":"proc_run",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/process/proc_total":{
+  "metric":"proc_total",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/disk/read_count":{
+  "metric":"read_count",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/disk/write_count":{
+  "metric":"write_count",
+  "pointInTime":true,
+  "tempor

[19/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/SOLR/package/scripts/solr_server.py
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/SOLR/package/scripts/solr_server.py
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/SOLR/package/scripts/solr_server.py
new file mode 100755
index 0000000..5cefc73
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/SOLR/package/scripts/solr_server.py
@@ -0,0 +1,118 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import sys
+from resource_management import *
+from resource_management.libraries.functions import conf_select
+from resource_management.libraries.functions import Direction
+from resource_management.libraries.functions import stack_select
+from solr_service import solr_service
+from solr import solr
+import os
+
+class SolrServer(Script):
+  def install(self, env):
+self.install_packages(env)
+
+  def configure(self, env, upgrade_type=None):
+import params
+env.set_params(params)
+if not os.path.isfile("/usr/iop/4.1.0.0/solr/conf/solr.in.sh"):
+  solr(type='4103', upgrade_type=upgrade_type)
+solr(type='server', upgrade_type=upgrade_type)
+
+  def pre_upgrade_restart(self, env, upgrade_type=None):
+Logger.info("Executing Stack Upgrade pre-restart")
+import params
+env.set_params(params)
+if params.version and 
compare_versions(format_stack_version(params.version), '4.1.0.0') >= 0:
+  stack_select.select("solr-server", params.version)
+
+  call_conf_select = True
+  conf_dir = '/usr/iop/4.1.0.0/solr/conf'
+  if params.upgrade_direction is not None and params.upgrade_direction == 
Direction.DOWNGRADE and not os.path.islink(conf_dir):
+call_conf_select = False
+
+  if call_conf_select:
+conf_select.select(params.stack_name, "solr", params.version)
+
+  def start(self, env, upgrade_type=None):
+import params
+env.set_params(params)
+self.configure(env)
+solr_service(action = 'start')
+
+  def stop(self, env, upgrade_type=None):
+import params
+env.set_params(params)
+solr_service(action = 'stop')
+
+  def status(self, env):
+import status_params
+env.set_params(status_params)
+check_process_status(status_params.solr_pid_file)
+
+  def security_status(self, env):
+import status_params
+env.set_params(status_params)
+if status_params.security_enabled:
+  props_value_check = {"solr.hdfs.security.kerberos.enabled":"true"}
+  props_empty_check = ["solr.hdfs.security.kerberos.keytabfile",
+   "solr.hdfs.security.kerberos.principal"]
+  props_read_check = ["solr.hdfs.security.kerberos.keytabfile"]
+  solr_site_props = build_expectations('solr-site', props_value_check, 
props_empty_check, props_read_check)
+
+  solr_expectations = {}
+  solr_expectations.update(solr_site_props)
+
+  security_params = get_params_from_filesystem(status_params.solr_conf_dir,
+   {'solr-site.xml': 
FILE_TYPE_XML})
+  result_issues = 
validate_security_config_properties(security_params,solr_expectations)
+
+  if not result_issues: # If all validations passed successfully
+try:
+  if 'solr-site' not in security_params \
+or 'solr.hdfs.security.kerberos.keytabfile' not in 
security_params['solr-site'] \
+or 'solr.hdfs.security.kerberos.principal' not in 
security_params['solr-site']:
+self.put_structured_out({"securityState": "UNSECURED"})
+self.put_structured_out({"securityIssuesFound": "Keytab file or 
principal are not set properly."})
+return
+  cached_kinit_executor(status_params.kinit_path_local,
+status_params.solr_user,
+
security_params['solr-site']['solr.hdfs.security.kerberos.keytabfile'],
+
security_params['solr-site']['solr.hdfs.security.kerberos.principal'],
+status_param
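
security_status() above builds three check lists for solr-site: properties that must equal a value, properties that must be non-empty, and keytab paths that must be readable, and compares them against the solr-site.xml found on disk. The same validation idea in plain Python, independent of the resource_management helpers (property values are illustrative):

    import os

    def validate(props, value_check, empty_check, read_check):
        issues = []
        for key, expected in value_check.items():
            if props.get(key) != expected:
                issues.append("%s != %s" % (key, expected))
        for key in empty_check:
            if not props.get(key):
                issues.append("%s is empty" % key)
        for key in read_check:
            path = props.get(key, "")
            # On a host without the keytab this reports it as unreadable.
            if not (path and os.access(path, os.R_OK)):
                issues.append("%s is not readable" % key)
        return issues

    site = {"solr.hdfs.security.kerberos.enabled": "true",
            "solr.hdfs.security.kerberos.principal": "solr/host@EXAMPLE.COM",
            "solr.hdfs.security.kerberos.keytabfile":
                "/etc/security/keytabs/solr.keytab"}
    print(validate(site,
                   {"solr.hdfs.security.kerberos.enabled": "true"},
                   ["solr.hdfs.security.kerberos.keytabfile",
                    "solr.hdfs.security.kerberos.principal"],
                   ["solr.hdfs.security.kerberos.keytabfile"]))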

[20/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/PIG/package/scripts/params_linux.py
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/PIG/package/scripts/params_linux.py
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/PIG/package/scripts/params_linux.py
new file mode 100755
index 0000000..8280a1f
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/PIG/package/scripts/params_linux.py
@@ -0,0 +1,88 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+Ambari Agent
+
+"""
+
+from resource_management.libraries.script.script import Script
+from resource_management.libraries.resources.hdfs_resource import HdfsResource
+from resource_management.libraries.functions import conf_select
+from resource_management.libraries.functions import stack_select
+from resource_management.libraries.functions.version import 
format_stack_version
+from resource_management.libraries.functions.default import default
+from resource_management.libraries.functions import get_kinit_path
+
+# server configurations
+config = Script.get_config()
+tmp_dir = Script.get_tmp_dir()
+
+stack_name = default("/hostLevelParams/stack_name", None)
+
+stack_version_unformatted = str(config['hostLevelParams']['stack_version'])
+iop_stack_version = format_stack_version(stack_version_unformatted)
+
+# New Cluster Stack Version that is defined during the RESTART of a Rolling 
Upgrade
+version = default("/commandParams/version", None)
+
+# hadoop default parameters
+hadoop_conf_dir = conf_select.get_hadoop_conf_dir()
+hadoop_bin_dir = stack_select.get_hadoop_dir("bin")
+
+# hadoop parameters for 2.2+
+pig_conf_dir = "/usr/iop/current/pig-client/conf"
+hadoop_home = stack_select.get_hadoop_dir("home")
+pig_bin_dir = '/usr/iop/current/pig-client/bin'
+
+hdfs_user = config['configurations']['hadoop-env']['hdfs_user']
+hdfs_principal_name = 
config['configurations']['hadoop-env']['hdfs_principal_name']
+hdfs_user_keytab = config['configurations']['hadoop-env']['hdfs_user_keytab']
+smokeuser = config['configurations']['cluster-env']['smokeuser']
+smokeuser_principal = 
config['configurations']['cluster-env']['smokeuser_principal_name']
+user_group = config['configurations']['cluster-env']['user_group']
+security_enabled = config['configurations']['cluster-env']['security_enabled']
+smoke_user_keytab = config['configurations']['cluster-env']['smokeuser_keytab']
+kinit_path_local = 
get_kinit_path(default('/configurations/kerberos-env/executable_search_paths', 
None))
+pig_env_sh_template = config['configurations']['pig-env']['content']
+
+# not supporting 32 bit jdk.
+java64_home = config['hostLevelParams']['java_home']
+
+pig_properties = config['configurations']['pig-properties']['content']
+
+log4j_props = config['configurations']['pig-log4j']['content']
+
+
+
+hdfs_site = config['configurations']['hdfs-site']
+default_fs = config['configurations']['core-site']['fs.defaultFS']
+
+import functools
+#create partial functions with common arguments for every HdfsResource call
+#to create hdfs directory we need to call params.HdfsResource in code
+HdfsResource = functools.partial(
+  HdfsResource,
+  user=hdfs_user,
+  security_enabled = security_enabled,
+  keytab = hdfs_user_keytab,
+  kinit_path_local = kinit_path_local,
+  hadoop_bin_dir = hadoop_bin_dir,
+  hadoop_conf_dir = hadoop_conf_dir,
+  principal_name = hdfs_principal_name,
+  hdfs_site = hdfs_site,
+  default_fs = default_fs
+ )
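
The functools.partial call above freezes the arguments shared by every HdfsResource invocation (user, keytabs, Hadoop dirs), so call sites only pass what varies, such as the path, type, and owner. A self-contained illustration of the pattern with a stand-in function:

    import functools

    def hdfs_resource(path, user, keytab, action="create_on_execute", **kwargs):
        print("HdfsResource(%s, user=%s, keytab=%s, action=%s, %s)"
              % (path, user, keytab, action, kwargs))

    # Bind the arguments common to every call once...
    HdfsResource = functools.partial(hdfs_resource, user="hdfs",
                                     keytab="/etc/security/keytabs/hdfs.keytab")
    # ...then call sites supply only what differs.
    HdfsResource("/user/pig", owner="pig", mode=0o775)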

http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/PIG/package/scripts/pig.py
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/PIG/package/scripts/pig.py
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/PIG/package/scripts/pig.py
new file mode 100755
index 0000000..ea1e205
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/PIG/package/scripts/pig.py
@@ -0,0 +1,61 @@
+"""
+Licensed to the Apache Software Foun

[50/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/hooks/after-INSTALL/scripts/hook.py
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/hooks/after-INSTALL/scripts/hook.py
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/hooks/after-INSTALL/scripts/hook.py
new file mode 100755
index 0000000..ad7144c
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/hooks/after-INSTALL/scripts/hook.py
@@ -0,0 +1,38 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+from resource_management.libraries.script.hook import Hook
+from shared_initialization import link_configs
+from shared_initialization import setup_config
+from shared_initialization import setup_iop_install_directory
+from resource_management.libraries.script import Script
+
+class AfterInstallHook(Hook):
+
+  def hook(self, env):
+import params
+
+env.set_params(params)
+setup_iop_install_directory()
+setup_config()
+
+link_configs(self.stroutfile)
+
+if __name__ == "__main__":
+  AfterInstallHook().execute()

http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/hooks/after-INSTALL/scripts/params.py
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/hooks/after-INSTALL/scripts/params.py
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/hooks/after-INSTALL/scripts/params.py
new file mode 100755
index 0000000..d3332db
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/hooks/after-INSTALL/scripts/params.py
@@ -0,0 +1,88 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+from ambari_commons.constants import AMBARI_SUDO_BINARY
+from resource_management.libraries.script import Script
+from resource_management.libraries.functions import default
+from resource_management.libraries.functions import conf_select
+from resource_management.libraries.functions import stack_select
+from resource_management.libraries.functions import format_jvm_option
+from resource_management.libraries.functions.version import 
format_stack_version
+
+from resource_management.core.system import System
+from ambari_commons.os_check import OSCheck
+
+
+config = Script.get_config()
+sudo = AMBARI_SUDO_BINARY
+
+stack_version_unformatted = str(config['hostLevelParams']['stack_version'])
+iop_stack_version = format_stack_version(stack_version_unformatted)
+
+# default hadoop params
+mapreduce_libs_path = "/usr/lib/hadoop-mapreduce/*"
+hadoop_libexec_dir = stack_select.get_hadoop_dir("libexec")
+hadoop_conf_empty_dir = "/etc/hadoop/conf.empty"
+
+# IOP 4.0+ params
+if Script.is_stack_greater_or_equal("4.0"):
+  mapreduce_libs_path = "/usr/iop/current/hadoop-mapreduce-client/*"
+  # not supported in IOP 4.0+
+  hadoop_conf_empty_dir = None
+
+versioned_iop_root = '/usr/iop/current'
+
+#security params
+security_enabled = config['configurations']['cluster-env']['security_enabled']
+
+#java params
+java_home = config['hostLevelParams']['java_home']
+
+#hadoop params
+hdfs_log_dir_prefix = 
config['configurations']['hadoop-env']['hdfs_log_dir_prefix']
+hadoop_pid_dir_prefix = 
config['configurations']['hadoop-env']['hadoop_pid_dir_prefix']
+hadoop_root_logger = 
config['configurations']['hadoop-env']['hadoop_root_logger']
+
+jsvc_path = "/usr/lib/bigtop-utils

[06/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/widgets.json
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/widgets.json 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/widgets.json
new file mode 100755
index 0000000..3176354
--- /dev/null
+++ b/ambari-server/src/main/resources/stacks/BigInsights/4.0/widgets.json
@@ -0,0 +1,95 @@
+{
+  "layouts": [
+{
+  "layout_name": "default_system_heatmap",
+  "display_name": "Heatmaps",
+  "section_name": "SYSTEM_HEATMAPS",
+  "widgetLayoutInfo": [
+{
+  "widget_name": "Host Disk Space Used %",
+  "description": "",
+  "widget_type": "HEATMAP",
+  "is_visible": true,
+  "metrics": [
+{
+  "name": "disk_free",
+  "metric_path": "metrics/disk/disk_free",
+  "service_name": "STACK"
+},
+{
+  "name": "disk_total",
+  "metric_path": "metrics/disk/disk_total",
+  "service_name": "STACK"
+}
+  ],
+  "values": [
+{
+  "name": "Host Disk Space Used %",
+  "value": "${((disk_total-disk_free)/disk_total)*100}"
+}
+  ],
+  "properties": {
+"display_unit": "%",
+"max_limit": "100"
+  }
+},
+{
+  "widget_name": "Host Memory Used %",
+  "description": "",
+  "widget_type": "HEATMAP",
+  "is_visible": false,
+  "metrics": [
+{
+  "name": "mem_total",
+  "metric_path": "metrics/memory/mem_total",
+  "service_name": "STACK"
+},
+{
+  "name": "mem_free",
+  "metric_path": "metrics/memory/mem_free",
+  "service_name": "STACK"
+},
+{
+  "name": "mem_cached",
+  "metric_path": "metrics/memory/mem_cached",
+  "service_name": "STACK"
+}
+  ],
+  "values": [
+{
+  "name": "Host Memory Used %",
+  "value": "${((mem_total-mem_free-mem_cached)/mem_total)*100}"
+}
+  ],
+  "properties": {
+"display_unit": "%",
+"max_limit": "100"
+  }
+},
+{
+  "widget_name": "Host CPU Wait IO %",
+  "description": "",
+  "widget_type": "HEATMAP",
+  "is_visible": false,
+  "metrics": [
+{
+  "name": "cpu_wio",
+  "metric_path": "metrics/cpu/cpu_wio",
+  "service_name": "STACK"
+}
+  ],
+  "values": [
+{
+  "name": "Host Memory Used %",
+  "value": "${cpu_wio*100}"
+}
+  ],
+  "properties": {
+"display_unit": "%",
+"max_limit": "100"
+  }
+}
+  ]
+}
+  ]
+}
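
Widget values such as ${((disk_total-disk_free)/disk_total)*100} are plain arithmetic over the metrics declared for the widget. Evaluated by hand with sample numbers (illustrative values, not real cluster data):

    disk_total, disk_free = 500.0, 125.0          # sample metric values
    print(((disk_total - disk_free) / disk_total) * 100)            # 75.0

    mem_total, mem_free, mem_cached = 16384.0, 4096.0, 2048.0
    print(((mem_total - mem_free - mem_cached) / mem_total) * 100)  # 62.5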

http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.1/kerberos.json
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.1/kerberos.json 
b/ambari-server/src/main/resources/stacks/BigInsights/4.1/kerberos.json
new file mode 100755
index 0000000..03198dc
--- /dev/null
+++ b/ambari-server/src/main/resources/stacks/BigInsights/4.1/kerberos.json
@@ -0,0 +1,47 @@
+{
+  "properties": {
+"realm": "${kerberos-env/realm}",
+"keytab_dir": "/etc/security/keytabs"
+  },
+  "identities": [
+{
+  "name": "spnego",
+  "principal": {
+"value": "HTTP/_HOST@${realm}",
+"type" : "service"
+  },
+  "keytab": {
+"file": "${keytab_dir}/spnego.service.keytab",
+"owner": {
+  "name": "root",
+  "access": "r"
+},
+"group": {
+  "name": "${cluster-env/user_group}",
+  "access": "r"
+}
+  }
+},
+{
+  "name": "smokeuser",
+  "principal": {
+"value": "${cluster-env/smokeuser}-${cluster_name}@${realm}",
+"type" : "user",
+"configuration": "cluster-env/smokeuser_principal_name",
+"local_username" : "${cluster-env/smokeuser}"
+  },
+  "keytab": {
+"file": "${keytab_dir}/smokeuser.headless.keytab",
+"owner": {
+  "name": "${cluster-env/smokeuser}",
+  "access": "r"
+},
+"group": {
+  "name": "${cluster-env/user_group}",
+  "access": "r"
+},
+"configuration": "cluster-env/smokeuser_keytab"
+  }
+}
+  ]
+}
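
The identity descriptors above reference other configuration values through ${config-type/property} placeholders, plus special variables such as ${realm} and ${cluster_name}. A toy resolver for that substitution syntax (sample values; the real resolution is performed by Ambari's Kerberos descriptor processing):

    import re

    def resolve(template, variables):
        # Replace each ${name} with its value from the variables map.
        return re.sub(r"\$\{([^}]+)\}",
                      lambda m: variables[m.group(1)], template)

    variables = {"cluster-env/smokeuser": "ambari-qa",   # sample values
                 "cluster_name": "c1",
                 "realm": "EXAMPLE.COM"}
    print(resolve("${cluster-env/smokeuser}-${cluster_name}@${realm}", variables))
    # ambari-qa-c1@EXAMPLE.COM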

http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.1/metainfo.xml
---

[03/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/hooks/before-START/scripts/custom_extensions.py
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/hooks/before-START/scripts/custom_extensions.py
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/hooks/before-START/scripts/custom_extensions.py
new file mode 100755
index 0000000..5563d54
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/hooks/before-START/scripts/custom_extensions.py
@@ -0,0 +1,168 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import os
+
+from resource_management.core.resources import Directory
+from resource_management.core.resources import Execute
+from resource_management.libraries.functions import default
+from resource_management.libraries.script.script import Script
+
+
+def setup_extensions():
+  import params
+
+  # Hadoop Custom extensions
+  hadoop_custom_extensions_enabled = 
default("/configurations/core-site/hadoop.custom-extensions.enabled", False)
+  hadoop_custom_extensions_services = 
default("/configurations/core-site/hadoop.custom-extensions.services", "")
+  hadoop_custom_extensions_owner = 
default("/configurations/core-site/hadoop.custom-extensions.owner", 
params.hdfs_user)
+  hadoop_custom_extensions_services = [ service.strip().upper() for service in 
hadoop_custom_extensions_services.split(",") ]
+  hadoop_custom_extensions_services.append("YARN")
+  hadoop_custom_extensions_hdfs_dir = 
"/iop/ext/{0}/hadoop".format(params.stack_version_formatted)
+  hadoop_custom_extensions_local_dir = 
"{0}/current/ext/hadoop".format(Script.get_stack_root())
+
+  if params.current_service in hadoop_custom_extensions_services:
+clean_extensions(hadoop_custom_extensions_local_dir)
+if hadoop_custom_extensions_enabled:
+  download_extensions(hadoop_custom_extensions_owner, params.user_group,
+  hadoop_custom_extensions_hdfs_dir,
+  hadoop_custom_extensions_local_dir)
+
+  setup_extensions_hive()
+
+  hbase_custom_extensions_services = []
+  hbase_custom_extensions_services.append("HBASE")
+  if params.current_service in hbase_custom_extensions_services:
+setup_hbase_extensions()
+
+def setup_extensions_hive():
+  import params
+
+  hive_custom_extensions_enabled = 
default("/configurations/hive-site/hive.custom-extensions.enabled", False)
+  hive_custom_extensions_owner = 
default("/configurations/hive-site/hive.custom-extensions.owner", 
params.hdfs_user)
+  hive_custom_extensions_hdfs_dir = 
"/iop/ext/{0}/hive".format(params.stack_version_formatted)
+  hive_custom_extensions_local_dir = 
"{0}/current/ext/hive".format(Script.get_stack_root())
+
+  impacted_components = ['HIVE_SERVER', 'HIVE_CLIENT']
+  role = params.config.get('role','')
+
+  # Run copying for HIVE_SERVER and HIVE_CLIENT
+  if params.current_service == 'HIVE' and role in impacted_components:
+clean_extensions(hive_custom_extensions_local_dir)
+if hive_custom_extensions_enabled:
+  download_extensions(hive_custom_extensions_owner, params.user_group,
+  hive_custom_extensions_hdfs_dir,
+  hive_custom_extensions_local_dir)
+
+def download_extensions(owner_user, owner_group, hdfs_source_dir, 
local_target_dir):
+  """
+  :param owner_user: user owner of the HDFS directory
+  :param owner_group: group owner of the HDFS directory
+  :param hdfs_source_dir: the HDFS directory from where the files are being 
pull
+  :param local_target_dir: the location of where to download the files
+  :return: Will return True if successful, otherwise, False.
+  """
+  import params
+
+  if not os.path.isdir(local_target_dir):
+import tempfile
+
+#Create a secure random temp directory
+tmp_dir=tempfile.mkdtemp()
+cmd = ('chown', '-R', params.hdfs_user, tmp_dir)
+Execute(cmd, sudo=True)
+cmd = ('chmod', '755', tmp_dir)
+Execute(cmd, sudo=True)
+
+Directory(os.path.dirname(local_target_dir),
+  owner="root",
+  mode=0755,
+  group="root",
+  create_parents=True)

[48/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/configuration/ams-site.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/configuration/ams-site.xml
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/configuration/ams-site.xml
new file mode 100755
index 0000000..cc9c27a
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/configuration/ams-site.xml
@@ -0,0 +1,527 @@
+
+
+
+
+  
+timeline.metrics.service.operation.mode
+embedded
+Metrics Service operation mode
+
+  Service Operation modes:
+  1) embedded: Metrics stored on local FS, HBase in Standalone mode
+  2) distributed: HBase daemons writing to HDFS
+
+  
+  
+timeline.metrics.service.webapp.address
+0.0.0.0:6188
+
+  The address of the metrics service web application.
+
+  
+  
+timeline.metrics.service.rpc.address
+0.0.0.0:60200
+
+  The address of the metrics service rpc listeners.
+
+  
+  
+timeline.metrics.aggregator.checkpoint.dir
+/var/lib/ambari-metrics-collector/checkpoint
+Aggregator checkpoint directory
+
+  Directory to store aggregator checkpoints. Change to a permanent
+  location so that checkpoints are not lost.
+
+
+  directory
+
+  
+  
+timeline.metrics.host.aggregator.minute.interval
+300
+Minute host aggregator interval
+
+  Time in seconds to sleep for the minute resolution host based
+  aggregator. Default resolution is 5 minutes.
+
+
+  int
+
+  
+  
+timeline.metrics.host.aggregator.hourly.interval
+3600
+Hourly host aggregator interval
+
+  Time in seconds to sleep for the hourly resolution host based
+  aggregator. Default resolution is 1 hour.
+
+
+  int
+
+  
+  
+timeline.metrics.daily.aggregator.minute.interval
+86400
+
+  Time in seconds to sleep for the day resolution host based
+  aggregator. Default resolution is 24 hours.
+
+  
+  
+timeline.metrics.cluster.aggregator.hourly.interval
+3600
+Hourly cluster aggregator Interval
+
+  Time in seconds to sleep for the hourly resolution cluster wide
+  aggregator. Default is 1 hour.
+
+
+  int
+
+  
+  
+timeline.metrics.cluster.aggregator.daily.interval
+86400
+
+  Time in seconds to sleep for the day resolution cluster wide
+  aggregator. Default is 24 hours.
+
+  
+  
+timeline.metrics.cluster.aggregator.minute.interval
+300
+Minute cluster aggregator interval
+
+  Time in seconds to sleep for the minute resolution cluster wide
+  aggregator. Default resolution is 5 minutes.
+
+
+  int
+
+  
+  
+timeline.metrics.cluster.aggregator.second.interval
+120
+Second cluster aggregator interval
+
+  Time in seconds to sleep for the second resolution cluster wide
+  aggregator. Default resolution is 2 minutes.
+
+
+  int
+
+  
+  
+
timeline.metrics.host.aggregator.daily.checkpointCutOffMultiplier
+1
+
+  Multiplier value * interval = Max allowed checkpoint lag. Effectively
+  if aggregator checkpoint is greater than max allowed checkpoint delay,
+  the checkpoint will be discarded by the aggregator.
+
+  
+  
+
timeline.metrics.host.aggregator.hourly.checkpointCutOffMultiplier
+2
+Hourly host aggregator checkpoint cutOff 
multiplier
+
+  Multiplier value * interval = Max allowed checkpoint lag. Effectively
+  if aggregator checkpoint is greater than max allowed checkpoint delay,
+  the checkpoint will be discarded by the aggregator.
+
+
+  int
+
+  
+  
+
timeline.metrics.host.aggregator.minute.checkpointCutOffMultiplier
+2
+Minute host aggregator checkpoint cutOff 
multiplier
+
+  Multiplier value * interval = Max allowed checkpoint lag. Effectively
+  if aggregator checkpoint is greater than max allowed checkpoint delay,
+  the checkpoint will be discarded by the aggregator.
+
+
+  int
+
+  
+  
+
timeline.metrics.cluster.aggregator.hourly.checkpointCutOffMultiplier
+2
+Hourly cluster aggregator checkpoint cutOff 
multiplier
+
+  Multiplier value * interval = Max allowed checkpoint lag. Effectively
+  if aggregator checkpoint is greater than max allowed checkpoint delay,
+  the checkpoint will be discarded by the aggregator.
+
+
+  int
+
+  
+  
+
timeline.metrics.cluster.aggregator.second.checkpointCutOffMultiplier
+2
+Second cluster aggregator checkpoint cutOff 
multiplier
+
+  Multiplier value * interval = Max allowed checkpoint lag. Effectively
+  if ag

[32/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/etc/hive-schema-0.12.0.postgres.sql
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/etc/hive-schema-0.12.0.postgres.sql
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/etc/hive-schema-0.12.0.postgres.sql
new file mode 100755
index 000..61769f6
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/etc/hive-schema-0.12.0.postgres.sql
@@ -0,0 +1,1405 @@
+--
+-- PostgreSQL database dump
+--
+
+SET statement_timeout = 0;
+SET client_encoding = 'UTF8';
+SET standard_conforming_strings = off;
+SET check_function_bodies = false;
+SET client_min_messages = warning;
+SET escape_string_warning = off;
+
+SET search_path = public, pg_catalog;
+
+SET default_tablespace = '';
+
+SET default_with_oids = false;
+
+--
+-- Name: BUCKETING_COLS; Type: TABLE; Schema: public; Owner: hiveuser; 
Tablespace:
+--
+
+CREATE TABLE "BUCKETING_COLS" (
+"SD_ID" bigint NOT NULL,
+"BUCKET_COL_NAME" character varying(256) DEFAULT NULL::character varying,
+"INTEGER_IDX" bigint NOT NULL
+);
+
+
+--
+-- Name: CDS; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "CDS" (
+"CD_ID" bigint NOT NULL
+);
+
+
+--
+-- Name: COLUMNS_OLD; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "COLUMNS_OLD" (
+"SD_ID" bigint NOT NULL,
+"COMMENT" character varying(256) DEFAULT NULL::character varying,
+"COLUMN_NAME" character varying(128) NOT NULL,
+"TYPE_NAME" character varying(4000) NOT NULL,
+"INTEGER_IDX" bigint NOT NULL
+);
+
+
+--
+-- Name: COLUMNS_V2; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "COLUMNS_V2" (
+"CD_ID" bigint NOT NULL,
+"COMMENT" character varying(4000),
+"COLUMN_NAME" character varying(128) NOT NULL,
+"TYPE_NAME" character varying(4000),
+"INTEGER_IDX" integer NOT NULL
+);
+
+
+--
+-- Name: DATABASE_PARAMS; Type: TABLE; Schema: public; Owner: hiveuser; 
Tablespace:
+--
+
+CREATE TABLE "DATABASE_PARAMS" (
+"DB_ID" bigint NOT NULL,
+"PARAM_KEY" character varying(180) NOT NULL,
+"PARAM_VALUE" character varying(4000) DEFAULT NULL::character varying
+);
+
+
+--
+-- Name: DBS; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "DBS" (
+"DB_ID" bigint NOT NULL,
+"DESC" character varying(4000) DEFAULT NULL::character varying,
+"DB_LOCATION_URI" character varying(4000) NOT NULL,
+"NAME" character varying(128) DEFAULT NULL::character varying
+);
+
+
+--
+-- Name: DB_PRIVS; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "DB_PRIVS" (
+"DB_GRANT_ID" bigint NOT NULL,
+"CREATE_TIME" bigint NOT NULL,
+"DB_ID" bigint,
+"GRANT_OPTION" smallint NOT NULL,
+"GRANTOR" character varying(128) DEFAULT NULL::character varying,
+"GRANTOR_TYPE" character varying(128) DEFAULT NULL::character varying,
+"PRINCIPAL_NAME" character varying(128) DEFAULT NULL::character varying,
+"PRINCIPAL_TYPE" character varying(128) DEFAULT NULL::character varying,
+"DB_PRIV" character varying(128) DEFAULT NULL::character varying
+);
+
+
+--
+-- Name: GLOBAL_PRIVS; Type: TABLE; Schema: public; Owner: hiveuser; 
Tablespace:
+--
+
+CREATE TABLE "GLOBAL_PRIVS" (
+"USER_GRANT_ID" bigint NOT NULL,
+"CREATE_TIME" bigint NOT NULL,
+"GRANT_OPTION" smallint NOT NULL,
+"GRANTOR" character varying(128) DEFAULT NULL::character varying,
+"GRANTOR_TYPE" character varying(128) DEFAULT NULL::character varying,
+"PRINCIPAL_NAME" character varying(128) DEFAULT NULL::character varying,
+"PRINCIPAL_TYPE" character varying(128) DEFAULT NULL::character varying,
+"USER_PRIV" character varying(128) DEFAULT NULL::character varying
+);
+
+
+--
+-- Name: IDXS; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "IDXS" (
+"INDEX_ID" bigint NOT NULL,
+"CREATE_TIME" bigint NOT NULL,
+"DEFERRED_REBUILD" boolean NOT NULL,
+"INDEX_HANDLER_CLASS" character varying(4000) DEFAULT NULL::character 
varying,
+"INDEX_NAME" character varying(128) DEFAULT NULL::character varying,
+"INDEX_TBL_ID" bigint,
+"LAST_ACCESS_TIME" bigint NOT NULL,
+"ORIG_TBL_ID" bigint,
+"SD_ID" bigint
+);
+
+
+--
+-- Name: INDEX_PARAMS; Type: TABLE; Schema: public; Owner: hiveuser; 
Tablespace:
+--
+
+CREATE TABLE "INDEX_PARAMS" (
+"INDEX_ID" bigint NOT NULL,
+"PARAM_KEY" character varying(256) NOT NULL,
+"PARAM_VALUE" character varying(4000) DEFAULT NULL::character varying
+);
+
+
+--
+-- Name: NUCLEUS_TABLES; Type: TABLE; Schema: public; Owner: hiveuser; 
Tablespace:
+--
+
+CREATE TABLE "NUCLEUS_TABLES" (
+"CLASS_NAME" character varying(128) NOT NULL,
+"TABLE_NAME" charact

[46/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/package/alerts/alert_ambari_metrics_monitor.py
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/package/alerts/alert_ambari_metrics_monitor.py
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/package/alerts/alert_ambari_metrics_monitor.py
new file mode 100755
index 000..fa44a7f
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/package/alerts/alert_ambari_metrics_monitor.py
@@ -0,0 +1,104 @@
+#!/usr/bin/env python
+
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+"""
+
+import os
+import socket
+
+from resource_management.libraries.functions.check_process_status import 
check_process_status
+from resource_management.core.exceptions import ComponentIsNotRunning
+from ambari_commons import OSCheck, OSConst
+from ambari_commons.os_family_impl import OsFamilyFuncImpl, OsFamilyImpl
+
+if OSCheck.is_windows_family():
+  from resource_management.libraries.functions.windows_service_utils import 
check_windows_service_status
+RESULT_CODE_OK = 'OK'
+RESULT_CODE_CRITICAL = 'CRITICAL'
+RESULT_CODE_UNKNOWN = 'UNKNOWN'
+
+AMS_MONITOR_PID_DIR = '{{ams-env/metrics_monitor_pid_dir}}'
+
+def get_tokens():
+  """
+  Returns a tuple of tokens in the format {{site/property}} that will be used
+  to build the dictionary passed into execute
+  """
+  return (AMS_MONITOR_PID_DIR,)
+
+@OsFamilyFuncImpl(OSConst.WINSRV_FAMILY)
+def is_monitor_process_live(pid_file=None):
+  """
+  Gets whether the Metrics Monitor Service is running.
+  :param pid_file: ignored
+  :return: True if the monitor is running, False otherwise
+  """
+  try:
+check_windows_service_status("AmbariMetricsHostMonitoring")
+ams_monitor_process_running = True
+  except:
+ams_monitor_process_running = False
+  return ams_monitor_process_running
+
+@OsFamilyFuncImpl(OsFamilyImpl.DEFAULT)
+def is_monitor_process_live(pid_file):
+  """
+  Gets whether the Metrics Monitor represented by the specified file is 
running.
+  :param pid_file: the PID file of the monitor to check
+  :return: True if the monitor is running, False otherwise
+  """
+  live = False
+
+  try:
+check_process_status(pid_file)
+live = True
+  except ComponentIsNotRunning:
+pass
+
+  return live
+
+
+def execute(configurations={}, parameters={}, host_name=None):
+  """
+  Returns a tuple containing the result code and a pre-formatted result label
+
+  Keyword arguments:
+  configurations (dictionary): a mapping of configuration key to value
+  parameters (dictionary): a mapping of script parameter key to value
+  host_name (string): the name of this host where the alert is running
+  """
+
+  if configurations is None:
+return (RESULT_CODE_UNKNOWN, ['There were no configurations supplied to 
the script.'])
+
+  if set([AMS_MONITOR_PID_DIR]).issubset(configurations):
+AMS_MONITOR_PID_PATH = os.path.join(configurations[AMS_MONITOR_PID_DIR], 
'ambari-metrics-monitor.pid')
+  else:
+return (RESULT_CODE_UNKNOWN, ['The ams_monitor_pid_dir is a required 
parameter.'])
+
+  if host_name is None:
+host_name = socket.getfqdn()
+
+  ams_monitor_process_running = is_monitor_process_live(AMS_MONITOR_PID_PATH)
+
+  alert_state = RESULT_CODE_OK if ams_monitor_process_running else 
RESULT_CODE_CRITICAL
+
+  alert_label = 'Ambari Monitor is running on {0}' if 
ams_monitor_process_running else 'Ambari Monitor is NOT running on {0}'
+  alert_label = alert_label.format(host_name)
+
+  return (alert_state, [alert_label])
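
A sketch of how the alert framework is expected to drive this script: the
dictionary passed to execute() is keyed by the raw {{site/property}} token
returned from get_tokens(), mapped to the cluster's resolved value (the pid
directory and host name below are hypothetical):

configurations = {
    '{{ams-env/metrics_monitor_pid_dir}}': '/var/run/ambari-metrics-monitor'
}
state, labels = execute(configurations=configurations,
                        host_name='c6401.ambari.apache.org')
# state is 'OK', 'CRITICAL' or 'UNKNOWN'; labels[0] carries the message
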

http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/package/files/hbaseSmokeVerify.sh
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/package/files/hbaseSmokeVerify.sh
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/package/files/hbaseSmokeVerify.sh
new file mode 100755
ind

[25/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/package/scripts/webhcat_server.py
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/package/scripts/webhcat_server.py
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/package/scripts/webhcat_server.py
new file mode 100755
index 000..7496649
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/package/scripts/webhcat_server.py
@@ -0,0 +1,146 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+Ambari Agent
+
+"""
+from resource_management import *
+from resource_management.libraries.functions import conf_select
+from resource_management.libraries.functions import stack_select
+from resource_management.libraries.functions.security_commons import 
build_expectations, \
+  cached_kinit_executor, get_params_from_filesystem, 
validate_security_config_properties, \
+  FILE_TYPE_XML
+from webhcat import webhcat
+from webhcat_service import webhcat_service
+from ambari_commons import OSConst
+from ambari_commons.os_family_impl import OsFamilyImpl
+
+
+class WebHCatServer(Script):
+  def install(self, env):
+import params
+self.install_packages(env)
+
+  def start(self, env, upgrade_type=None):
+import params
+env.set_params(params)
+self.configure(env) # FOR SECURITY
+webhcat_service(action='start', upgrade_type=upgrade_type)
+
+  def stop(self, env, upgrade_type=None):
+import params
+env.set_params(params)
+webhcat_service(action='stop')
+
+  def configure(self, env):
+import params
+env.set_params(params)
+webhcat()
+
+
+@OsFamilyImpl(os_family=OsFamilyImpl.DEFAULT)
+class WebHCatServerDefault(WebHCatServer):
+  def get_component_name(self):
+return "hive-webhcat"
+
+  def status(self, env):
+import status_params
+env.set_params(status_params)
+check_process_status(status_params.webhcat_pid_file)
+
+  def pre_upgrade_restart(self, env, upgrade_type=None):
+Logger.info("Executing WebHCat Stack Upgrade pre-restart")
+import params
+env.set_params(params)
+
+if params.version and 
compare_versions(format_stack_version(params.version), '4.1.0.0') >= 0:
+  # webhcat has no conf, but uses hadoop home, so verify that regular 
hadoop conf is set
+  conf_select.select(params.stack_name, "hadoop", params.version)
+  stack_select.select("hive-webhcat", params.version)
+
+  def security_status(self, env):
+import status_params
+env.set_params(status_params)
+
+if status_params.security_enabled:
+  expectations ={}
+  expectations.update(
+build_expectations(
+  'webhcat-site',
+  {
+"templeton.kerberos.secret": "secret"
+  },
+  [
+"templeton.kerberos.keytab",
+"templeton.kerberos.principal"
+  ],
+  [
+"templeton.kerberos.keytab"
+  ]
+)
+  )
+  expectations.update(
+build_expectations(
+  'hive-site',
+  {
+"hive.server2.authentication": "KERBEROS",
+"hive.metastore.sasl.enabled": "true",
+"hive.security.authorization.enabled": "true"
+  },
+  None,
+  None
+)
+  )
+
+  security_params = {}
+  
security_params.update(get_params_from_filesystem(status_params.hive_conf_dir,
+{'hive-site.xml': 
FILE_TYPE_XML}))
+  
security_params.update(get_params_from_filesystem(status_params.webhcat_conf_dir,
+{'webhcat-site.xml': 
FILE_TYPE_XML}))
+  result_issues = validate_security_config_properties(security_params, 
expectations)
+  if not result_issues: # If all validations passed successfully
+try:
+  # Double check the dict before calling execute
+  if 'webhcat-site' not in security_params \
+or 'templeton.kerberos.keytab' not in 
security_params['webhcat-site'] \
+or 'templeton.kerberos.principal' not in 
security

[45/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/package/scripts/ams.py
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/package/scripts/ams.py
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/package/scripts/ams.py
new file mode 100755
index 000..e72ad82
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/package/scripts/ams.py
@@ -0,0 +1,388 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+from resource_management import *
+from ambari_commons import OSConst
+from ambari_commons.os_family_impl import OsFamilyFuncImpl, OsFamilyImpl
+from ambari_commons.str_utils import compress_backslashes
+import glob
+import os
+
+@OsFamilyFuncImpl(os_family=OSConst.WINSRV_FAMILY)
+def ams(name=None):
+  import params
+  if name == 'collector':
+if not check_windows_service_exists(params.ams_collector_win_service_name):
+  Execute(format("cmd /C cd {ams_collector_home_dir} & 
ambari-metrics-collector.cmd setup"))
+
+Directory(params.ams_collector_conf_dir,
+  owner=params.ams_user,
+  create_parents=True
+)
+
+Directory(params.ams_checkpoint_dir,
+  owner=params.ams_user,
+  create_parents=True
+)
+
+XmlConfig("ams-site.xml",
+  conf_dir=params.ams_collector_conf_dir,
+  configurations=params.config['configurations']['ams-site'],
+  
configuration_attributes=params.config['configuration_attributes']['ams-site'],
+  owner=params.ams_user,
+)
+
+merged_ams_hbase_site = {}
+
merged_ams_hbase_site.update(params.config['configurations']['ams-hbase-site'])
+if params.security_enabled:
+  
merged_ams_hbase_site.update(params.config['configurations']['ams-hbase-security-site'])
+
+XmlConfig( "hbase-site.xml",
+   conf_dir = params.ams_collector_conf_dir,
+   configurations = merged_ams_hbase_site,
+   
configuration_attributes=params.config['configuration_attributes']['ams-hbase-site'],
+   owner = params.ams_user,
+)
+
+if (params.log4j_props != None):
+  File(os.path.join(params.ams_collector_conf_dir, "log4j.properties"),
+   owner=params.ams_user,
+   content=params.log4j_props
+  )
+
+File(os.path.join(params.ams_collector_conf_dir, "ams-env.cmd"),
+ owner=params.ams_user,
+ content=InlineTemplate(params.ams_env_sh_template)
+)
+
+ServiceConfig(params.ams_collector_win_service_name,
+  action="change_user",
+  username = params.ams_user,
+  password = Script.get_password(params.ams_user))
+
+if not params.is_local_fs_rootdir:
+  # Configuration needed to support NN HA
+  XmlConfig("hdfs-site.xml",
+conf_dir=params.ams_collector_conf_dir,
+configurations=params.config['configurations']['hdfs-site'],
+
configuration_attributes=params.config['configuration_attributes']['hdfs-site'],
+owner=params.ams_user,
+group=params.user_group,
+mode=0644
+  )
+
+  XmlConfig("hdfs-site.xml",
+conf_dir=params.hbase_conf_dir,
+configurations=params.config['configurations']['hdfs-site'],
+
configuration_attributes=params.config['configuration_attributes']['hdfs-site'],
+owner=params.ams_user,
+group=params.user_group,
+mode=0644
+  )
+
+  XmlConfig("core-site.xml",
+conf_dir=params.ams_collector_conf_dir,
+configurations=params.config['configurations']['core-site'],
+
configuration_attributes=params.config['configuration_attributes']['core-site'],
+owner=params.ams_user,
+group=params.user_group,
+mode=0644
+  )
+
+  XmlConfig("core-site.xml",
+conf_dir=params.hbase_conf_dir,
+configurations=params.confi

[44/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/package/scripts/status_params.py
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/package/scripts/status_params.py
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/package/scripts/status_params.py
new file mode 100755
index 000..d446baa
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/package/scripts/status_params.py
@@ -0,0 +1,39 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+from resource_management import *
+from ambari_commons import OSCheck
+
+if OSCheck.is_windows_family():
+  from params_windows import *
+else:
+  from params_linux import *
+
+hbase_pid_dir = config['configurations']['ams-hbase-env']['hbase_pid_dir']
+hbase_user = ams_user
+ams_collector_pid_dir = 
config['configurations']['ams-env']['metrics_collector_pid_dir']
+ams_monitor_pid_dir = 
config['configurations']['ams-env']['metrics_monitor_pid_dir']
+
+security_enabled = config['configurations']['cluster-env']['security_enabled']
+ams_hbase_conf_dir = format("{hbase_conf_dir}")
+
+kinit_path_local = 
functions.get_kinit_path(default('/configurations/kerberos-env/executable_search_paths',
 None))
+hostname = config['hostname']
+tmp_dir = Script.get_tmp_dir()

http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/package/templates/ams.conf.j2
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/package/templates/ams.conf.j2
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/package/templates/ams.conf.j2
new file mode 100755
index 000..c5fbc9b
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/package/templates/ams.conf.j2
@@ -0,0 +1,35 @@
+{#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#}
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+{{ams_user}}   - nofile {{max_open_files_limit}}
+{{ams_user}}   - nproc  65536
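
A minimal rendering check for this template, assuming jinja2 is available and
using hypothetical values for the two variables:

from jinja2 import Template
line = Template("{{ams_user}}   - nofile {{max_open_files_limit}}")
print(line.render(ams_user='ams', max_open_files_limit=32768))
# -> ams   - nofile 32768
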

http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/package/templates/ams_collector_jaas.conf.j2
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/package/templates/ams_collector_

[51/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/1863c3b9
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/1863c3b9
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/1863c3b9

Branch: refs/heads/branch-feature-AMBARI-21348
Commit: 1863c3b90b6bfa45b7ecfef15d2eee08b7539e91
Parents: 7ad307c
Author: Alejandro Fernandez 
Authored: Tue Jun 27 15:41:36 2017 -0700
Committer: Alejandro Fernandez 
Committed: Tue Jun 27 17:21:03 2017 -0700

--
 .../4.0/blueprints/multinode-default.json   |  182 +
 .../4.0/blueprints/singlenode-default.json  |  133 +
 .../4.0/configuration/cluster-env.xml   |  304 +
 .../4.0/hooks/after-INSTALL/scripts/hook.py |   38 +
 .../4.0/hooks/after-INSTALL/scripts/params.py   |   88 +
 .../scripts/shared_initialization.py|   89 +
 .../hooks/before-ANY/files/changeToSecureUid.sh |   63 +
 .../4.0/hooks/before-ANY/scripts/hook.py|   36 +
 .../4.0/hooks/before-ANY/scripts/params.py  |  226 +
 .../before-ANY/scripts/shared_initialization.py |  242 +
 .../4.0/hooks/before-INSTALL/scripts/hook.py|   37 +
 .../4.0/hooks/before-INSTALL/scripts/params.py  |  111 +
 .../scripts/repo_initialization.py  |   90 +
 .../scripts/shared_initialization.py|   34 +
 .../4.0/hooks/before-RESTART/scripts/hook.py|   29 +
 .../hooks/before-START/files/checkForFormat.sh  |   65 +
 .../before-START/files/fast-hdfs-resource.jar   |  Bin 0 -> 28296598 bytes
 .../before-START/files/task-log4j.properties|  134 +
 .../hooks/before-START/files/topology_script.py |   66 +
 .../4.0/hooks/before-START/scripts/hook.py  |   40 +
 .../4.0/hooks/before-START/scripts/params.py|  211 +
 .../before-START/scripts/rack_awareness.py  |   71 +
 .../scripts/shared_initialization.py|  152 +
 .../templates/commons-logging.properties.j2 |   43 +
 .../templates/exclude_hosts_list.j2 |   21 +
 .../templates/hadoop-metrics2.properties.j2 |   88 +
 .../before-START/templates/health_check.j2  |   81 +
 .../templates/include_hosts_list.j2 |   21 +
 .../templates/topology_mappings.data.j2 |   24 +
 .../stacks/BigInsights/4.0/kerberos.json|   68 +
 .../stacks/BigInsights/4.0/metainfo.xml |   22 +
 .../4.0/properties/stack_features.json  |  212 +
 .../BigInsights/4.0/properties/stack_tools.json |4 +
 .../stacks/BigInsights/4.0/repos/repoinfo.xml   |   35 +
 .../BigInsights/4.0/role_command_order.json |   70 +
 .../4.0/services/AMBARI_METRICS/alerts.json |  183 +
 .../AMBARI_METRICS/configuration/ams-env.xml|  107 +
 .../configuration/ams-hbase-env.xml |  234 +
 .../configuration/ams-hbase-log4j.xml   |  146 +
 .../configuration/ams-hbase-policy.xml  |   53 +
 .../configuration/ams-hbase-security-site.xml   |  149 +
 .../configuration/ams-hbase-site.xml|  384 +
 .../AMBARI_METRICS/configuration/ams-log4j.xml  |   65 +
 .../AMBARI_METRICS/configuration/ams-site.xml   |  527 +
 .../4.0/services/AMBARI_METRICS/kerberos.json   |  122 +
 .../4.0/services/AMBARI_METRICS/metainfo.xml|  147 +
 .../4.0/services/AMBARI_METRICS/metrics.json| 2472 +
 .../alerts/alert_ambari_metrics_monitor.py  |  104 +
 .../package/files/hbaseSmokeVerify.sh   |   34 +
 .../files/service-metrics/AMBARI_METRICS.txt|  245 +
 .../package/files/service-metrics/FLUME.txt |   17 +
 .../package/files/service-metrics/HBASE.txt |  588 ++
 .../package/files/service-metrics/HDFS.txt  |  277 +
 .../package/files/service-metrics/HOST.txt  |   37 +
 .../package/files/service-metrics/KAFKA.txt |  190 +
 .../package/files/service-metrics/STORM.txt |7 +
 .../package/files/service-metrics/YARN.txt  |  178 +
 .../AMBARI_METRICS/package/scripts/__init__.py  |   19 +
 .../AMBARI_METRICS/package/scripts/ams.py   |  388 +
 .../package/scripts/ams_service.py  |  103 +
 .../AMBARI_METRICS/package/scripts/functions.py |   51 +
 .../AMBARI_METRICS/package/scripts/hbase.py |  267 +
 .../package/scripts/hbase_master.py |   70 +
 .../package/scripts/hbase_regionserver.py   |   66 +
 .../package/scripts/hbase_service.py|   53 +
 .../package/scripts/metrics_collector.py|  133 +
 .../package/scripts/metrics_monitor.py  |   58 +
 .../AMBARI_METRICS/package/scripts/params.py|  254 +
 .../package/scripts/params_linux.py |   50 +
 .../package/scripts/params_windows.py   |   53 +
 .../package/scripts/service_check.py|  165 +
 .../package/scripts/service_mapping.py  |   22 +
 .../package/scripts/split_points.py |  236 +
 .../AMBARI_METRICS/package/scripts/status.py|   46 +
 .../pack

[49/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/hooks/before-START/templates/hadoop-metrics2.properties.j2
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/hooks/before-START/templates/hadoop-metrics2.properties.j2
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/hooks/before-START/templates/hadoop-metrics2.properties.j2
new file mode 100755
index 000..31bf1c6
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/hooks/before-START/templates/hadoop-metrics2.properties.j2
@@ -0,0 +1,88 @@
+{#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#}
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# syntax: [prefix].[source|sink|jmx].[instance].[options]
+# See package.html for org.apache.hadoop.metrics2 for details
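
An illustrative parse of one of the sink lines below against that syntax
(pure Python, for explanation only):

prefix, rest = "namenode.sink.ganglia.servers".split(".", 1)
kind, instance, option = rest.split(".")  # -> "sink", "ganglia", "servers"
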
+
+{% if has_ganglia_server %}
+*.period=60
+
+*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
+*.sink.ganglia.period=10
+
+# default for supportsparse is false
+*.sink.ganglia.supportsparse=true
+
+.sink.ganglia.slope=jvm.metrics.gcCount=zero,jvm.metrics.memHeapUsedM=both
+.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40
+
+# Hook up to the server
+namenode.sink.ganglia.servers={{ganglia_server_host}}:8661
+datanode.sink.ganglia.servers={{ganglia_server_host}}:8659
+jobtracker.sink.ganglia.servers={{ganglia_server_host}}:8662
+tasktracker.sink.ganglia.servers={{ganglia_server_host}}:8658
+maptask.sink.ganglia.servers={{ganglia_server_host}}:8660
+reducetask.sink.ganglia.servers={{ganglia_server_host}}:8660
+resourcemanager.sink.ganglia.servers={{ganglia_server_host}}:8664
+nodemanager.sink.ganglia.servers={{ganglia_server_host}}:8657
+historyserver.sink.ganglia.servers={{ganglia_server_host}}:8666
+journalnode.sink.ganglia.servers={{ganglia_server_host}}:8654
+nimbus.sink.ganglia.servers={{ganglia_server_host}}:8649
+supervisor.sink.ganglia.servers={{ganglia_server_host}}:8650
+
+resourcemanager.sink.ganglia.tagsForPrefix.yarn=Queue
+
+{% endif %}
+{% if has_metric_collector %}
+
+*.period={{metrics_collection_period}}
+*.sink.timeline.plugin.urls=file:///usr/lib/ambari-metrics-hadoop-sink/ambari-metrics-hadoop-sink.jar
+*.sink.timeline.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
+*.sink.timeline.period={{metrics_collection_period}}
+*.sink.timeline.sendInterval={{metrics_report_interval}}000
+*.sink.timeline.slave.host.name = {{hostname}}
+
+datanode.sink.timeline.collector={{metric_collector_host}}:{{metric_collector_port}}
+namenode.sink.timeline.collector={{metric_collector_host}}:{{metric_collector_port}}
+resourcemanager.sink.timeline.collector={{metric_collector_host}}:{{metric_collector_port}}
+nodemanager.sink.timeline.collector={{metric_collector_host}}:{{metric_collector_port}}
+historyserver.sink.timeline.collector={{metric_collector_host}}:{{metric_collector_port}}
+journalnode.sink.timeline.collector={{metric_collector_host}}:{{metric_collector_port}}
+nimbus.sink.timeline.collector={{metric_collector_host}}:{{metric_collector_port}}
+supervisor.sink.timeline.collector={{metric_collector_host}}:{{metric_collector_port}}
+maptask.sink.timeline.collector={{metric_collector_host}}:{{metric_collector_port}}
+reducetask.sink.timeline.collector={{metric_collector_host}}:{{metric_collector_port}}
+
+resourcemanager.sink.timeline.tagsForPrefix.yarn=Queu
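
Note that the sendInterval line above appends "000" to a value expressed in
seconds; an illustrative sketch of the rendered result (the interval value is
hypothetical):

metrics_report_interval = 60                       # hypothetical cluster value
send_interval = "%s000" % metrics_report_interval  # renders as "60000" (ms)
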

[47/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/metrics.json
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/metrics.json
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/metrics.json
new file mode 100755
index 000..c12e09a
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/AMBARI_METRICS/metrics.json
@@ -0,0 +1,2472 @@
+{
+  "METRICS_COLLECTOR": {
+"Component": [
+  {
+"type": "ganglia",
+"metrics": {
+  "default": {
+"metrics/hbase/ipc/ProcessCallTime_75th_percentile": {
+  "metric": "ipc.IPC.ProcessCallTime_75th_percentile",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/hbase/ipc/ProcessCallTime_95th_percentile": {
+  "metric": "ipc.IPC.ProcessCallTime_95th_percentile",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/hbase/ipc/ProcessCallTime_99th_percentile": {
+  "metric": "ipc.IPC.ProcessCallTime_99th_percentile",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/hbase/ipc/ProcessCallTime_max": {
+  "metric": "ipc.IPC.ProcessCallTime_max",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/hbase/ipc/ProcessCallTime_mean": {
+  "metric": "ipc.IPC.ProcessCallTime_mean",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/hbase/ipc/ProcessCallTime_median": {
+  "metric": "ipc.IPC.ProcessCallTime_median",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/hbase/ipc/ProcessCallTime_min": {
+  "metric": "ipc.IPC.ProcessCallTime_min",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/hbase/ipc/ProcessCallTime_num_ops": {
+  "metric": "ipc.IPC.ProcessCallTime_num_ops",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/hbase/ipc/QueueCallTime_75th_percentile": {
+  "metric": "ipc.IPC.QueueCallTime_75th_percentile",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/hbase/ipc/QueueCallTime_95th_percentile": {
+  "metric": "ipc.IPC.QueueCallTime_95th_percentile",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/hbase/ipc/QueueCallTime_99th_percentile": {
+  "metric": "ipc.IPC.QueueCallTime_99th_percentile",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/hbase/ipc/QueueCallTime_max": {
+  "metric": "ipc.IPC.QueueCallTime_max",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/hbase/ipc/QueueCallTime_mean": {
+  "metric": "ipc.IPC.QueueCallTime_mean",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/hbase/ipc/QueueCallTime_median": {
+  "metric": "ipc.IPC.QueueCallTime_median",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/hbase/ipc/QueueCallTime_min": {
+  "metric": "ipc.IPC.QueueCallTime_min",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/hbase/ipc/QueueCallTime_num_ops": {
+  "metric": "ipc.IPC.QueueCallTime_num_ops",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/hbase/ipc/authenticationFailures": {
+  "metric": "ipc.IPC.authenticationFailures",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/hbase/ipc/authenticationSuccesses": {
+  "metric": "ipc.IPC.authenticationSuccesses",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/hbase/ipc/authorizationFailures": {
+  "metric": "ipc.IPC.authorizationFailures",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/hbase/ipc/authorizationSuccesses": {
+  "metric": "ipc.IPC.authorizationSuccesses",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/hbase/ipc/numActiveHandler": {
+  "metric": "ipc.IPC.numActiveHandler",
+  "pointInTime": true,
+  "temporal": true

[39/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HDFS/configuration/hdfs-site.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HDFS/configuration/hdfs-site.xml
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HDFS/configuration/hdfs-site.xml
new file mode 100755
index 000..fc510fa
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HDFS/configuration/hdfs-site.xml
@@ -0,0 +1,606 @@
+
+
+
+
+
+
+
+
+
+  
+
+  
+dfs.namenode.name.dir
+
+/hadoop/hdfs/namenode
+Determines where on the local filesystem the DFS name node
+  should store the name table.  If this is a comma-delimited list
+  of directories then the name table is replicated in all of the
+  directories, for redundancy. 
+NameNode directories
+true
+
+  directories
+  false
+
+  
+
+  
+dfs.support.append
+true
+to enable dfs append
+true
+  
+
+  
+dfs.webhdfs.enabled
+true
+WebHDFS enabled
+Whether to enable WebHDFS feature
+true
+
+  boolean
+  false
+
+  
+
+  
+dfs.datanode.failed.volumes.tolerated
+0
+ Number of failed disks a DataNode would tolerate before it 
stops offering service
+true
+DataNode failed disk tolerance
+
+  int
+  0
+  2
+  1
+
+
+  
+hdfs-site
+dfs.datanode.data.dir
+  
+
+  
+
+  
+dfs.datanode.data.dir
+/hadoop/hdfs/data
+DataNode directories
+Determines where on the local filesystem a DFS data node
+  should store its blocks.  If this is a comma-delimited
+  list of directories, then data will be stored in all named
+  directories, typically on different devices.
+  Directories that do not exist are ignored.
+
+true
+
+  directories
+
+  
+
+  
+dfs.hosts.exclude
+/etc/hadoop/conf/dfs.exclude
+Names a file that contains a list of hosts that are
+  not permitted to connect to the namenode.  The full pathname of the
+  file must be specified.  If the value is empty, no hosts are
+  excluded.
+  
+
+  
+
+  
+dfs.namenode.checkpoint.dir
+/hadoop/hdfs/namesecondary
+SecondaryNameNode Checkpoint directories
+Determines where on the local filesystem the DFS secondary
+  name node should store the temporary images to merge.
+  If this is a comma-delimited list of directories then the image is
+  replicated in all of the directories for redundancy.
+
+
+  directories
+  false
+
+  
+
+  
+dfs.namenode.checkpoint.edits.dir
+${dfs.namenode.checkpoint.dir}
+Determines where on the local filesystem the DFS secondary
+  name node should store the temporary edits to merge.
+  If this is a comma-delimited list of directories then the edits are
+  replicated in all of the directories for redundancy.
+  Default value is the same as dfs.namenode.checkpoint.dir
+
+  
+
+
+  
+dfs.namenode.checkpoint.period
+21600
+HDFS Maximum Checkpoint Delay
+The number of seconds between two periodic 
checkpoints.
+
+  int
+  seconds
+
+  
+
+  
+dfs.namenode.checkpoint.txns
+100
+The Secondary NameNode or CheckpointNode will create a 
checkpoint
+  of the namespace every 'dfs.namenode.checkpoint.txns' transactions,
+  regardless of whether 'dfs.namenode.checkpoint.period' has expired.
+
+  
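
Taken together with dfs.namenode.checkpoint.period above, the checkpoint
trigger is an OR of the two thresholds; a minimal sketch of that logic
(variable names are illustrative):

def should_checkpoint(secs_since_last, txns_since_last, period, txns):
    # checkpoint when either the time budget or the transaction budget is spent
    return secs_since_last >= period or txns_since_last >= txns
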
+
+  
+dfs.replication.max
+50
+Maximal block replication.
+
+  
+
+  
+dfs.replication
+3
+Default block replication.
+Block replication
+
+  int
+
+  
+
+  
+dfs.heartbeat.interval
+3
+Determines datanode heartbeat interval in 
seconds.
+  
+
+  
+dfs.namenode.safemode.threshold-pct
+0.999
+
+  Specifies the percentage of blocks that should satisfy
+  the minimal replication requirement defined by 
dfs.namenode.replication.min.
+  Values less than or equal to 0 mean not to start in safe mode.
+  Values greater than 1 will make safe mode permanent.
+
+Minimum replicated blocks %
+
+  float
+  0.990
+  1.000
+  0.001
+
+  
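
A worked reading of the threshold semantics described above (names are
illustrative): with the default 0.999 and 1000 total blocks, at least 999
blocks must meet dfs.namenode.replication.min before safe mode can be left.

def can_leave_safemode(satisfied_blocks, total_blocks, threshold_pct=0.999):
    if threshold_pct <= 0:    # never start in safe mode
        return True
    if threshold_pct > 1:     # safe mode becomes permanent
        return False
    return satisfied_blocks >= threshold_pct * total_blocks
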
+
+  
+dfs.datanode.balance.bandwidthPerSec
+625
+
+  Specifies the maximum amount of bandwidth that each datanode
+  can utilize for the balancing purpose in terms of
+  the number of bytes per second.
+
+  
+
+  
+dfs.https.port
+50470
+
+  This property is used by HftpFileSystem.
+
+  
+
+  
+dfs.datanode.address
+0.0.0.0:50010
+
+  The datanode server address and port for data transfer.
+
+  
+
+  
+dfs.datanode.http.address
+0.0.0.0

[33/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/etc/hive-schema-0.12.0.mysql.sql
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/etc/hive-schema-0.12.0.mysql.sql
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/etc/hive-schema-0.12.0.mysql.sql
new file mode 100755
index 000..bacee9e
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/etc/hive-schema-0.12.0.mysql.sql
@@ -0,0 +1,777 @@
+-- MySQL dump 10.13  Distrib 5.5.25, for osx10.6 (i386)
+--
+-- Host: localhost    Database: test
+-- ------------------------------------------------------
+-- Server version  5.5.25
+
+/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
+/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
+/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
+/*!40101 SET NAMES utf8 */;
+/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
+/*!40103 SET TIME_ZONE='+00:00' */;
+/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
+/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, 
FOREIGN_KEY_CHECKS=0 */;
+/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
+/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;
+
+--
+-- Table structure for table `BUCKETING_COLS`
+--
+
+/*!40101 SET @saved_cs_client = @@character_set_client */;
+/*!40101 SET character_set_client = utf8 */;
+CREATE TABLE IF NOT EXISTS `BUCKETING_COLS` (
+  `SD_ID` bigint(20) NOT NULL,
+  `BUCKET_COL_NAME` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin 
DEFAULT NULL,
+  `INTEGER_IDX` int(11) NOT NULL,
+  PRIMARY KEY (`SD_ID`,`INTEGER_IDX`),
+  KEY `BUCKETING_COLS_N49` (`SD_ID`),
+  CONSTRAINT `BUCKETING_COLS_FK1` FOREIGN KEY (`SD_ID`) REFERENCES `SDS` 
(`SD_ID`)
+) ENGINE=InnoDB DEFAULT CHARSET=latin1;
+/*!40101 SET character_set_client = @saved_cs_client */;
+
+--
+-- Table structure for table `CDS`
+--
+
+/*!40101 SET @saved_cs_client = @@character_set_client */;
+/*!40101 SET character_set_client = utf8 */;
+CREATE TABLE IF NOT EXISTS `CDS` (
+  `CD_ID` bigint(20) NOT NULL,
+  PRIMARY KEY (`CD_ID`)
+) ENGINE=InnoDB DEFAULT CHARSET=latin1;
+/*!40101 SET character_set_client = @saved_cs_client */;
+
+--
+-- Table structure for table `COLUMNS_V2`
+--
+
+/*!40101 SET @saved_cs_client = @@character_set_client */;
+/*!40101 SET character_set_client = utf8 */;
+CREATE TABLE IF NOT EXISTS `COLUMNS_V2` (
+  `CD_ID` bigint(20) NOT NULL,
+  `COMMENT` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
+  `COLUMN_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
+  `TYPE_NAME` varchar(4000) DEFAULT NULL,
+  `INTEGER_IDX` int(11) NOT NULL,
+  PRIMARY KEY (`CD_ID`,`COLUMN_NAME`),
+  KEY `COLUMNS_V2_N49` (`CD_ID`),
+  CONSTRAINT `COLUMNS_V2_FK1` FOREIGN KEY (`CD_ID`) REFERENCES `CDS` (`CD_ID`)
+) ENGINE=InnoDB DEFAULT CHARSET=latin1;
+/*!40101 SET character_set_client = @saved_cs_client */;
+
+--
+-- Table structure for table `DATABASE_PARAMS`
+--
+
+/*!40101 SET @saved_cs_client = @@character_set_client */;
+/*!40101 SET character_set_client = utf8 */;
+CREATE TABLE IF NOT EXISTS `DATABASE_PARAMS` (
+  `DB_ID` bigint(20) NOT NULL,
+  `PARAM_KEY` varchar(180) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
+  `PARAM_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT 
NULL,
+  PRIMARY KEY (`DB_ID`,`PARAM_KEY`),
+  KEY `DATABASE_PARAMS_N49` (`DB_ID`),
+  CONSTRAINT `DATABASE_PARAMS_FK1` FOREIGN KEY (`DB_ID`) REFERENCES `DBS` 
(`DB_ID`)
+) ENGINE=InnoDB DEFAULT CHARSET=latin1;
+/*!40101 SET character_set_client = @saved_cs_client */;
+
+--
+-- Table structure for table `DBS`
+--
+
+/*!40101 SET @saved_cs_client = @@character_set_client */;
+/*!40101 SET character_set_client = utf8 */;
+CREATE TABLE IF NOT EXISTS `DBS` (
+  `DB_ID` bigint(20) NOT NULL,
+  `DESC` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
+  `DB_LOCATION_URI` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin NOT 
NULL,
+  `NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
+  PRIMARY KEY (`DB_ID`),
+  UNIQUE KEY `UNIQUE_DATABASE` (`NAME`)
+) ENGINE=InnoDB DEFAULT CHARSET=latin1;
+/*!40101 SET character_set_client = @saved_cs_client */;
+
+--
+-- Table structure for table `DB_PRIVS`
+--
+
+/*!40101 SET @saved_cs_client = @@character_set_client */;
+/*!40101 SET character_set_client = utf8 */;
+CREATE TABLE IF NOT EXISTS `DB_PRIVS` (
+  `DB_GRANT_ID` bigint(20) NOT NULL,
+  `CREATE_TIME` int(11) NOT NULL,
+  `DB_ID` bigint(20) DEFAULT NULL,
+  `GRANT_OPTION` smallint(6) NOT NULL,
+  `GRANTOR` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
+  `GRANTOR_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT 
NULL,
+  `PRINCIPAL

[28/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/package/alerts/alert_webhcat_server.py
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/package/alerts/alert_webhcat_server.py
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/package/alerts/alert_webhcat_server.py
new file mode 100755
index 000..15627a5
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/package/alerts/alert_webhcat_server.py
@@ -0,0 +1,242 @@
+#!/usr/bin/env python
+
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+"""
+
+import ambari_simplejson as json # simplejson is much faster compared to the
Python 2.6 json module and has the same function set.
+import subprocess
+import socket
+import time
+import urllib2
+
+from resource_management.core.environment import Environment
+from resource_management.core.resources import Execute
+from resource_management.core import shell
+from resource_management.libraries.functions import format
+from resource_management.libraries.functions import get_kinit_path
+from resource_management.libraries.functions import get_klist_path
+from os import getpid, sep
+
+RESULT_CODE_OK = "OK"
+RESULT_CODE_CRITICAL = "CRITICAL"
+RESULT_CODE_UNKNOWN = "UNKNOWN"
+
+OK_MESSAGE = "WebHCat status was OK ({0:.3f}s response from {1})"
+CRITICAL_CONNECTION_MESSAGE = "Connection failed to {0}"
+CRITICAL_HTTP_MESSAGE = "HTTP {0} response from {1}"
+CRITICAL_WEBHCAT_STATUS_MESSAGE = 'WebHCat returned an unexpected status of 
"{0}"'
+CRITICAL_WEBHCAT_UNKNOWN_JSON_MESSAGE = "Unable to determine WebHCat health 
from unexpected JSON response"
+
+TEMPLETON_PORT_KEY = '{{webhcat-site/templeton.port}}'
+SECURITY_ENABLED_KEY = '{{cluster-env/security_enabled}}'
+WEBHCAT_PRINCIPAL_KEY = '{{webhcat-site/templeton.kerberos.principal}}'
+WEBHCAT_KEYTAB_KEY = '{{webhcat-site/templeton.kerberos.keytab}}'
+SMOKEUSER_KEY = '{{cluster-env/smokeuser}}'
+
+# The configured Kerberos executable search paths, if any
+KERBEROS_EXECUTABLE_SEARCH_PATHS_KEY = 
'{{kerberos-env/executable_search_paths}}'
+
+WEBHCAT_OK_RESPONSE = 'ok'
+WEBHCAT_PORT_DEFAULT = 50111
+
+CONNECTION_TIMEOUT_KEY = 'connection.timeout'
+CONNECTION_TIMEOUT_DEFAULT = 5.0
+CURL_CONNECTION_TIMEOUT_DEFAULT = str(int(CONNECTION_TIMEOUT_DEFAULT))
+
+# default smoke user
+SMOKEUSER_SCRIPT_PARAM_KEY = 'default.smoke.user'
+SMOKEUSER_DEFAULT = 'ambari-qa'
+
+def get_tokens():
+  """
+  Returns a tuple of tokens in the format {{site/property}} that will be used
+  to build the dictionary passed into execute
+  """
+  return (TEMPLETON_PORT_KEY, SECURITY_ENABLED_KEY, WEBHCAT_KEYTAB_KEY,
+WEBHCAT_PRINCIPAL_KEY, KERBEROS_EXECUTABLE_SEARCH_PATHS_KEY, SMOKEUSER_KEY)
+
+
+def execute(configurations={}, parameters={}, host_name=None):
+  """
+  Returns a tuple containing the result code and a pre-formatted result label
+
+  Keyword arguments:
+  configurations (dictionary): a mapping of configuration key to value
+  parameters (dictionary): a mapping of script parameter key to value
+  host_name (string): the name of this host where the alert is running
+  """
+
+  result_code = RESULT_CODE_UNKNOWN
+
+  if configurations is None:
+return (result_code, ['There were no configurations supplied to the 
script.'])
+
+  webhcat_port = WEBHCAT_PORT_DEFAULT
+  if TEMPLETON_PORT_KEY in configurations:
+webhcat_port = int(configurations[TEMPLETON_PORT_KEY])
+
+  security_enabled = False
+  if SECURITY_ENABLED_KEY in configurations:
+security_enabled = configurations[SECURITY_ENABLED_KEY].lower() == 'true'
+
+  # parse script arguments
+  connection_timeout = CONNECTION_TIMEOUT_DEFAULT
+  curl_connection_timeout = CURL_CONNECTION_TIMEOUT_DEFAULT
+  if CONNECTION_TIMEOUT_KEY in parameters:
+connection_timeout = float(parameters[CONNECTION_TIMEOUT_KEY])
+curl_connection_timeout = str(int(connection_timeout))
+
+
+  # the alert will always run on the webhcat host
+  if host_name is None:
+host_name = socket.getfqdn()
+
+  smokeuser = SMOKEUSER_DEFAULT
+
+  if SMOKEUSER_KEY in configurations:
+smokeuser = configurati

[29/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/etc/hive-schema-0.14.0.postgres.sql
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/etc/hive-schema-0.14.0.postgres.sql
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/etc/hive-schema-0.14.0.postgres.sql
new file mode 100755
index 000..5e238b2
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/etc/hive-schema-0.14.0.postgres.sql
@@ -0,0 +1,1541 @@
+--
+-- PostgreSQL database dump
+--
+
+SET statement_timeout = 0;
+SET client_encoding = 'UTF8';
+SET standard_conforming_strings = off;
+SET check_function_bodies = false;
+SET client_min_messages = warning;
+SET escape_string_warning = off;
+
+SET search_path = public, pg_catalog;
+
+SET default_tablespace = '';
+
+SET default_with_oids = false;
+
+--
+-- Name: BUCKETING_COLS; Type: TABLE; Schema: public; Owner: hiveuser; 
Tablespace:
+--
+
+CREATE TABLE "BUCKETING_COLS" (
+"SD_ID" bigint NOT NULL,
+"BUCKET_COL_NAME" character varying(256) DEFAULT NULL::character varying,
+"INTEGER_IDX" bigint NOT NULL
+);
+
+
+--
+-- Name: CDS; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "CDS" (
+"CD_ID" bigint NOT NULL
+);
+
+
+--
+-- Name: COLUMNS_OLD; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "COLUMNS_OLD" (
+"SD_ID" bigint NOT NULL,
+"COMMENT" character varying(256) DEFAULT NULL::character varying,
+"COLUMN_NAME" character varying(128) NOT NULL,
+"TYPE_NAME" character varying(4000) NOT NULL,
+"INTEGER_IDX" bigint NOT NULL
+);
+
+
+--
+-- Name: COLUMNS_V2; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "COLUMNS_V2" (
+"CD_ID" bigint NOT NULL,
+"COMMENT" character varying(4000),
+"COLUMN_NAME" character varying(128) NOT NULL,
+"TYPE_NAME" character varying(4000),
+"INTEGER_IDX" integer NOT NULL
+);
+
+
+--
+-- Name: DATABASE_PARAMS; Type: TABLE; Schema: public; Owner: hiveuser; 
Tablespace:
+--
+
+CREATE TABLE "DATABASE_PARAMS" (
+"DB_ID" bigint NOT NULL,
+"PARAM_KEY" character varying(180) NOT NULL,
+"PARAM_VALUE" character varying(4000) DEFAULT NULL::character varying
+);
+
+
+--
+-- Name: DBS; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "DBS" (
+"DB_ID" bigint NOT NULL,
+"DESC" character varying(4000) DEFAULT NULL::character varying,
+"DB_LOCATION_URI" character varying(4000) NOT NULL,
+"NAME" character varying(128) DEFAULT NULL::character varying,
+"OWNER_NAME" character varying(128) DEFAULT NULL::character varying,
+"OWNER_TYPE" character varying(10) DEFAULT NULL::character varying
+);
+
+
+--
+-- Name: DB_PRIVS; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "DB_PRIVS" (
+"DB_GRANT_ID" bigint NOT NULL,
+"CREATE_TIME" bigint NOT NULL,
+"DB_ID" bigint,
+"GRANT_OPTION" smallint NOT NULL,
+"GRANTOR" character varying(128) DEFAULT NULL::character varying,
+"GRANTOR_TYPE" character varying(128) DEFAULT NULL::character varying,
+"PRINCIPAL_NAME" character varying(128) DEFAULT NULL::character varying,
+"PRINCIPAL_TYPE" character varying(128) DEFAULT NULL::character varying,
+"DB_PRIV" character varying(128) DEFAULT NULL::character varying
+);
+
+
+--
+-- Name: GLOBAL_PRIVS; Type: TABLE; Schema: public; Owner: hiveuser; 
Tablespace:
+--
+
+CREATE TABLE "GLOBAL_PRIVS" (
+"USER_GRANT_ID" bigint NOT NULL,
+"CREATE_TIME" bigint NOT NULL,
+"GRANT_OPTION" smallint NOT NULL,
+"GRANTOR" character varying(128) DEFAULT NULL::character varying,
+"GRANTOR_TYPE" character varying(128) DEFAULT NULL::character varying,
+"PRINCIPAL_NAME" character varying(128) DEFAULT NULL::character varying,
+"PRINCIPAL_TYPE" character varying(128) DEFAULT NULL::character varying,
+"USER_PRIV" character varying(128) DEFAULT NULL::character varying
+);
+
+
+--
+-- Name: IDXS; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "IDXS" (
+"INDEX_ID" bigint NOT NULL,
+"CREATE_TIME" bigint NOT NULL,
+"DEFERRED_REBUILD" boolean NOT NULL,
+"INDEX_HANDLER_CLASS" character varying(4000) DEFAULT NULL::character 
varying,
+"INDEX_NAME" character varying(128) DEFAULT NULL::character varying,
+"INDEX_TBL_ID" bigint,
+"LAST_ACCESS_TIME" bigint NOT NULL,
+"ORIG_TBL_ID" bigint,
+"SD_ID" bigint
+);
+
+
+--
+-- Name: INDEX_PARAMS; Type: TABLE; Schema: public; Owner: hiveuser; 
Tablespace:
+--
+
+CREATE TABLE "INDEX_PARAMS" (
+"INDEX_ID" bigint NOT NULL,
+"PARAM_KEY" character varying(256) NOT NULL,
+"PARAM_VALUE" character varying(4000) DEFAULT NULL::character varying
+);
+
+
+--
+-- Name: NUCLEUS_TABLES; Type: TABLE; Schema: public

[11/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/stack-advisor/stack_advisor_206.py
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/stack-advisor/stack_advisor_206.py
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/stack-advisor/stack_advisor_206.py
new file mode 100755
index 000..8798628
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/stack-advisor/stack_advisor_206.py
@@ -0,0 +1,2006 @@
+#!/usr/bin/env ambari-python-wrap
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+"""
+
+import re
+import os
+import sys
+import socket
+
+from math import ceil, floor
+
+from resource_management.core.logger import Logger
+from resource_management.libraries.functions.mounted_dirs_helper import 
get_mounts_with_multiple_data_dirs
+
+from stack_advisor import DefaultStackAdvisor
+
+
+class HDP206StackAdvisor(DefaultStackAdvisor):
+
+  def __init__(self):
+super(HDP206StackAdvisor, self).__init__()
+Logger.initialize_logger()
+
+  def getComponentLayoutValidations(self, services, hosts):
+"""Returns array of Validation objects about issues with hostnames 
components assigned to"""
+items = super(HDP206StackAdvisor, 
self).getComponentLayoutValidations(services, hosts)
+
+# Validating NAMENODE and SECONDARY_NAMENODE are on different hosts if 
possible
+# Use a set for fast lookup
+hostsSet =  set(super(HDP206StackAdvisor, 
self).getActiveHosts([host["Hosts"] for host in hosts["items"]]))  
#[host["Hosts"]["host_name"] for host in hosts["items"]]
+hostsCount = len(hostsSet)
+
+componentsListList = [service["components"] for service in 
services["services"]]
+componentsList = [item for sublist in componentsListList for item in 
sublist]
+nameNodeHosts = [component["StackServiceComponents"]["hostnames"] for 
component in componentsList if 
component["StackServiceComponents"]["component_name"] == "NAMENODE"]
+secondaryNameNodeHosts = [component["StackServiceComponents"]["hostnames"] 
for component in componentsList if 
component["StackServiceComponents"]["component_name"] == "SECONDARY_NAMENODE"]
+
+# Validating cardinality
+for component in componentsList:
+  if component["StackServiceComponents"]["cardinality"] is not None:
+ componentName = component["StackServiceComponents"]["component_name"]
+ componentDisplayName = component["StackServiceComponents"]["display_name"]
+ componentHosts = []
+ if component["StackServiceComponents"]["hostnames"] is not None:
+   componentHosts = [componentHost for componentHost in component["StackServiceComponents"]["hostnames"] if componentHost in hostsSet]
+ componentHostsCount = len(componentHosts)
+ cardinality = str(component["StackServiceComponents"]["cardinality"])
+ # cardinality types: null, 1+, 1-2, 1, ALL
+ message = None
+ if "+" in cardinality:
+   hostsMin = int(cardinality[:-1])
+   if componentHostsCount < hostsMin:
+ message = "At least {0} {1} components should be installed in cluster.".format(hostsMin, componentDisplayName)
+ elif "-" in cardinality:
+   nums = cardinality.split("-")
+   hostsMin = int(nums[0])
+   hostsMax = int(nums[1])
+   if componentHostsCount > hostsMax or componentHostsCount < hostsMin:
+ message = "Between {0} and {1} {2} components should be installed in cluster.".format(hostsMin, hostsMax, componentDisplayName)
+ elif "ALL" == cardinality:
+   if componentHostsCount != hostsCount:
+ message = "{0} component should be installed on all hosts in cluster.".format(componentDisplayName)
+ else:
+   if componentHostsCount != int(cardinality):
+ message = "Exactly {0} {1} components should be installed in cluster.".format(int(cardinality), componentDisplayName)
+
+ if message is not None:
+   items.append({"type": 'host-component', "level": 'ERROR', "message": message, "component-name": componentName})
+
+# Validating host-usage
+usedHosts
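
For illustration only (not part of this patch), the cardinality dispatch above reduces to one standalone function; the helper name and the sample inputs below are hypothetical:

    def validate_cardinality(cardinality, component_hosts_count, total_hosts, display_name):
        """Return an error message, or None when the constraint is satisfied."""
        if "+" in cardinality:
            hosts_min = int(cardinality[:-1])                 # "1+" -> at least 1
            if component_hosts_count < hosts_min:
                return "At least {0} {1} components should be installed in cluster.".format(hosts_min, display_name)
        elif "-" in cardinality:
            hosts_min, hosts_max = (int(n) for n in cardinality.split("-"))   # "1-2"
            if not hosts_min <= component_hosts_count <= hosts_max:
                return "Between {0} and {1} {2} components should be installed in cluster.".format(hosts_min, hosts_max, display_name)
        elif cardinality == "ALL":
            if component_hosts_count != total_hosts:
                return "{0} component should be installed on all hosts in cluster.".format(display_name)
        elif component_hosts_count != int(cardinality):
            return "Exactly {0} {1} components should be installed in cluster.".format(int(cardinality), display_name)
        return None

    assert validate_cardinality("1+", 0, 3, "NameNode") is not None
    assert validate_cardinality("1-2", 2, 3, "ZooKeeper Server") is None
    assert validate_cardinality("ALL", 3, 3, "Client") is None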

[09/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/stack-advisor/stack_advisor_22.py
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/stack-advisor/stack_advisor_22.py
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/stack-advisor/stack_advisor_22.py
new file mode 100755
index 000..6848635
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/stack-advisor/stack_advisor_22.py
@@ -0,0 +1,1713 @@
+#!/usr/bin/env ambari-python-wrap
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+"""
+
+import math
+from math import floor
+from urlparse import urlparse
+import os
+import fnmatch
+import socket
+import re
+import xml.etree.ElementTree as ET
+
+from resource_management.core.logger import Logger
+
+try:
+  from stack_advisor_21 import *
+except ImportError:
+  #Ignore ImportError
+  print("stack_advisor_21 not found")
+
+class HDP22StackAdvisor(HDP21StackAdvisor):
+
+  def getServiceConfigurationRecommenderDict(self):
+parentRecommendConfDict = super(HDP22StackAdvisor, self).getServiceConfigurationRecommenderDict()
+childRecommendConfDict = {
+  "HDFS": self.recommendHDFSConfigurations,
+  "HIVE": self.recommendHIVEConfigurations,
+  "HBASE": self.recommendHBASEConfigurations,
+  "MAPREDUCE2": self.recommendMapReduce2Configurations,
+  "TEZ": self.recommendTezConfigurations,
+  "AMBARI_METRICS": self.recommendAmsConfigurations,
+  "YARN": self.recommendYARNConfigurations,
+  "STORM": self.recommendStormConfigurations,
+  "KNOX": self.recommendKnoxConfigurations,
+  "RANGER": self.recommendRangerConfigurations,
+  "LOGSEARCH" : self.recommendLogsearchConfigurations,
+  "SPARK": self.recommendSparkConfigurations,
+}
+parentRecommendConfDict.update(childRecommendConfDict)
+return parentRecommendConfDict
+
+
+  def recommendSparkConfigurations(self, configurations, clusterData, services, hosts):
+"""
+:type configurations dict
+:type clusterData dict
+:type services dict
+:type hosts dict
+"""
+putSparkProperty = self.putProperty(configurations, "spark-defaults", services)
+
+spark_queue = self.recommendYarnQueue(services, "spark-defaults", "spark.yarn.queue")
+if spark_queue is not None:
+  putSparkProperty("spark.yarn.queue", spark_queue)
+
+# add only if spark supports this config
+if "configurations" in services and "spark-thrift-sparkconf" in 
services["configurations"]:
+  putSparkThriftSparkConf = self.putProperty(configurations, 
"spark-thrift-sparkconf", services)
+  recommended_spark_queue = self.recommendYarnQueue(services, 
"spark-thrift-sparkconf", "spark.yarn.queue")
+  if recommended_spark_queue is not None:
+putSparkThriftSparkConf("spark.yarn.queue", recommended_spark_queue)
+
+
+  def recommendYARNConfigurations(self, configurations, clusterData, services, hosts):
+super(HDP22StackAdvisor, self).recommendYARNConfigurations(configurations, clusterData, services, hosts)
+putYarnProperty = self.putProperty(configurations, "yarn-site", services)
+putYarnProperty('yarn.nodemanager.resource.cpu-vcores', clusterData['cpu'])
+putYarnProperty('yarn.scheduler.minimum-allocation-vcores', 1)
+putYarnProperty('yarn.scheduler.maximum-allocation-vcores', configurations["yarn-site"]["properties"]["yarn.nodemanager.resource.cpu-vcores"])
+# Property Attributes
+putYarnPropertyAttribute = self.putPropertyAttribute(configurations, "yarn-site")
+nodeManagerHost = self.getHostWithComponent("YARN", "NODEMANAGER", services, hosts)
+if (nodeManagerHost is not None):
+  cpuPercentageLimit = 0.8
+  if "yarn.nodemanager.resource.percentage-physical-cpu-limit" in configurations["yarn-site"]["properties"]:
+cpuPercentageLimit = float(configurations["yarn-site"]["properties"]["yarn.nodemanager.resource.percentage-physical-cpu-limit"])
+  cpuLimit = max(1, int(floor(nodeManagerHost["Hosts"]["cpu_count"] * cpuPercentageLimit)))
+  putYarnProperty('yarn.nodemanager.resource.cpu-vcores', str(cpuLimit))
+  putYarnProper
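
A minimal sketch of the vcores arithmetic above, detached from the advisor plumbing; the host values below are assumed for illustration:

    from math import floor

    def recommended_vcores(cpu_count, percentage_physical_cpu_limit=0.8):
        # never recommend fewer than one vcore, even on tiny hosts
        return max(1, int(floor(cpu_count * percentage_physical_cpu_limit)))

    print(recommended_vcores(16))      # 12 of 16 cores at the 0.8 default limit
    print(recommended_vcores(1, 0.5))  # floor gives 0, clamped to 1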

[37/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HDFS/package/alerts/alert_checkpoint_time.py
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HDFS/package/alerts/alert_checkpoint_time.py
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HDFS/package/alerts/alert_checkpoint_time.py
new file mode 100755
index 000..7fbb9a2
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HDFS/package/alerts/alert_checkpoint_time.py
@@ -0,0 +1,146 @@
+#!/usr/bin/env python
+
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+"""
+
+import time
+import urllib2
+import json
+
+LABEL = 'Last Checkpoint: [{h} hours, {m} minutes, {tx} transactions]'
+
+NN_HTTP_ADDRESS_KEY = '{{hdfs-site/dfs.namenode.http-address}}'
+NN_HTTPS_ADDRESS_KEY = '{{hdfs-site/dfs.namenode.https-address}}'
+NN_HTTP_POLICY_KEY = '{{hdfs-site/dfs.http.policy}}'
+NN_CHECKPOINT_TX_KEY = '{{hdfs-site/dfs.namenode.checkpoint.txns}}'
+NN_CHECKPOINT_PERIOD_KEY = '{{hdfs-site/dfs.namenode.checkpoint.period}}'
+
+PERCENT_WARNING = 200
+PERCENT_CRITICAL = 200
+
+CHECKPOINT_TX_DEFAULT = 100
+CHECKPOINT_PERIOD_DEFAULT = 21600
+
+def get_tokens():
+  """
+  Returns a tuple of tokens in the format {{site/property}} that will be used
+  to build the dictionary passed into execute
+  """
+  return (NN_HTTP_ADDRESS_KEY, NN_HTTPS_ADDRESS_KEY, NN_HTTP_POLICY_KEY,
+  NN_CHECKPOINT_TX_KEY, NN_CHECKPOINT_PERIOD_KEY)
+
+
+def execute(parameters=None, host_name=None):
+  """
+  Returns a tuple containing the result code and a pre-formatted result label
+
+  Keyword arguments:
+  parameters (dictionary): a mapping of parameter key to value
+  host_name (string): the name of this host where the alert is running
+  """
+
+  if parameters is None:
+return (('UNKNOWN', ['There were no parameters supplied to the script.']))
+
+  uri = None
+  scheme = 'http'
+  http_uri = None
+  https_uri = None
+  http_policy = 'HTTP_ONLY'
+  percent_warning = PERCENT_WARNING
+  percent_critical = PERCENT_CRITICAL
+  checkpoint_tx = CHECKPOINT_TX_DEFAULT
+  checkpoint_period = CHECKPOINT_PERIOD_DEFAULT
+
+  if NN_HTTP_ADDRESS_KEY in parameters:
+http_uri = parameters[NN_HTTP_ADDRESS_KEY]
+
+  if NN_HTTPS_ADDRESS_KEY in parameters:
+https_uri = parameters[NN_HTTPS_ADDRESS_KEY]
+
+  if NN_HTTP_POLICY_KEY in parameters:
+http_policy = parameters[NN_HTTP_POLICY_KEY]
+
+  if NN_CHECKPOINT_TX_KEY in parameters:
+checkpoint_tx = parameters[NN_CHECKPOINT_TX_KEY]
+
+  if NN_CHECKPOINT_PERIOD_KEY in parameters:
+checkpoint_period = parameters[NN_CHECKPOINT_PERIOD_KEY]
+
+  # determine the right URI and whether to use SSL
+  uri = http_uri
+  if http_policy == 'HTTPS_ONLY':
+scheme = 'https'
+
+if https_uri is not None:
+  uri = https_uri
+
+  current_time = int(round(time.time() * 1000))
+
+  last_checkpoint_time_qry = "{0}://{1}/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem".format(scheme,uri)
+  journal_transaction_info_qry = "{0}://{1}/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo".format(scheme,uri)
+
+  # start out assuming an OK status
+  label = None
+  result_code = "OK"
+
+  try:
+last_checkpoint_time = int(get_value_from_jmx(last_checkpoint_time_qry,"LastCheckpointTime"))
+journal_transaction_info = get_value_from_jmx(journal_transaction_info_qry,"JournalTransactionInfo")
+journal_transaction_info_dict = json.loads(journal_transaction_info)
+
+last_tx = int(journal_transaction_info_dict['LastAppliedOrWrittenTxId'])
+most_recent_tx = int(journal_transaction_info_dict['MostRecentCheckpointTxId'])
+transaction_difference = last_tx - most_recent_tx
+
+delta = (current_time - last_checkpoint_time)/1000
+
+label = LABEL.format(h=get_time(delta)['h'], m=get_time(delta)['m'], tx=transaction_difference)
+
+if (transaction_difference > int(checkpoint_tx)) and (float(delta) / int(checkpoint_period)*100 >= int(percent_critical)):
+  result_code = 'CRITICAL'
+elif (transaction_difference > int(checkpoint_tx)) and (float(delta) / 
int(checkpoint_period)*100 >= int(perce
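
The alert escalates only when both tests pass: the uncheckpointed transaction count exceeds dfs.namenode.checkpoint.txns, and the elapsed time reaches the configured percentage of dfs.namenode.checkpoint.period. A self-contained sketch of that decision, with threshold defaults assumed:

    def checkpoint_state(delta_seconds, tx_difference,
                         checkpoint_period=21600, checkpoint_tx=1000000,
                         percent_warning=200, percent_critical=200):
        """Mirror of the OK/WARNING/CRITICAL decision, detached from JMX."""
        percent_elapsed = float(delta_seconds) / checkpoint_period * 100
        if tx_difference > checkpoint_tx and percent_elapsed >= percent_critical:
            return 'CRITICAL'
        if tx_difference > checkpoint_tx and percent_elapsed >= percent_warning:
            return 'WARNING'
        return 'OK'

    # 12 hours since the last checkpoint is 200% of a 6-hour period
    print(checkpoint_state(43200, 2000000))  # CRITICAL
    print(checkpoint_state(3600, 2000000))   # OK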

[16/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/YARN/YARN_metrics.json
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/YARN/YARN_metrics.json
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/YARN/YARN_metrics.json
new file mode 100755
index 000..a66bb34
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/YARN/YARN_metrics.json
@@ -0,0 +1,3486 @@
+{
+  "NODEMANAGER": {
+"Component": [
+  {
+"type": "ganglia",
+"metrics": {
+  "default": {
+"metrics/cpu/cpu_idle": {
+  "metric": "cpu_idle",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/cpu/cpu_nice": {
+  "metric": "cpu_nice",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/cpu/cpu_system": {
+  "metric": "cpu_system",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/cpu/cpu_user": {
+  "metric": "cpu_user",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/cpu/cpu_wio": {
+  "metric": "cpu_wio",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/disk/disk_free": {
+  "metric": "disk_free",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/disk/disk_total": {
+  "metric": "disk_total",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/load/load_fifteen": {
+  "metric": "load_fifteen",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/load/load_five": {
+  "metric": "load_five",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/load/load_one": {
+  "metric": "load_one",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/memory/mem_buffered": {
+  "metric": "mem_buffered",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/memory/mem_cached": {
+  "metric": "mem_cached",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/memory/mem_free": {
+  "metric": "mem_free",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/memory/mem_shared": {
+  "metric": "mem_shared",
+  "pointInTime": true,
+  "temporal": true,
+  "amsHostMetric": true
+},
+"metrics/memory/mem_total": {
+  "metric": "mem_total",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/memory/swap_free": {
+  "metric": "swap_free",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/network/bytes_in": {
+  "metric": "bytes_in",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/network/bytes_out": {
+  "metric": "bytes_out",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/network/pkts_in": {
+  "metric": "pkts_in",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/network/pkts_out": {
+  "metric": "pkts_out",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/process/proc_run": {
+  "metric": "proc_run",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/process/proc_total": {
+  "metric": "proc_total",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/disk/read_count": {
+  "metric": "read_count",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/disk/write_count": {
+  "metric": "write_count",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/disk/read_bytes": {
+  "metric": "read_bytes",
+  "pointInTime": true,
+  "temporal": true
+},
+"metrics/disk/write_bytes": {
+  "metric": "write_bytes",
+  "pointInTime": true,
+  "temporal"

[10/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/stack-advisor/stack_advisor_21.py
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/stack-advisor/stack_advisor_21.py
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/stack-advisor/stack_advisor_21.py
new file mode 100755
index 000..be49bf8
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/stack-advisor/stack_advisor_21.py
@@ -0,0 +1,259 @@
+#!/usr/bin/env ambari-python-wrap
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+"""
+
+from resource_management.core.logger import Logger
+
+try:
+  from stack_advisor_206 import *
+except ImportError:
+  #Ignore ImportError
+  print("stack_advisor_206 not found")
+
+class HDP21StackAdvisor(HDP206StackAdvisor):
+
+  def getServiceConfigurationRecommenderDict(self):
+parentRecommendConfDict = super(HDP21StackAdvisor, 
self).getServiceConfigurationRecommenderDict()
+childRecommendConfDict = {
+  "OOZIE": self.recommendOozieConfigurations,
+  "HIVE": self.recommendHiveConfigurations,
+  "TEZ": self.recommendTezConfigurations
+}
+parentRecommendConfDict.update(childRecommendConfDict)
+return parentRecommendConfDict
+
+  def recommendOozieConfigurations(self, configurations, clusterData, 
services, hosts):
+oozieSiteProperties = getSiteProperties(services['configurations'], 'oozie-site')
+oozieEnvProperties = getSiteProperties(services['configurations'], 'oozie-env')
+putOozieProperty = self.putProperty(configurations, "oozie-site", services)
+putOozieEnvProperty = self.putProperty(configurations, "oozie-env", services)
+
+if "FALCON_SERVER" in clusterData["components"]:
+  putOozieSiteProperty = self.putProperty(configurations, "oozie-site", services)
+  falconUser = None
+  if "falcon-env" in services["configurations"] and "falcon_user" in services["configurations"]["falcon-env"]["properties"]:
+falconUser = services["configurations"]["falcon-env"]["properties"]["falcon_user"]
+if falconUser is not None:
+  putOozieSiteProperty("oozie.service.ProxyUserService.proxyuser.{0}.groups".format(falconUser), "*")
+  putOozieSiteProperty("oozie.service.ProxyUserService.proxyuser.{0}.hosts".format(falconUser), "*")
+falconUserOldValue = getOldValue(self, services, "falcon-env", "falcon_user")
+if falconUserOldValue is not None:
+  if 'forced-configurations' not in services:
+services["forced-configurations"] = []
+  putOozieSitePropertyAttribute = self.putPropertyAttribute(configurations, "oozie-site")
+  putOozieSitePropertyAttribute("oozie.service.ProxyUserService.proxyuser.{0}.groups".format(falconUserOldValue), 'delete', 'true')
+  putOozieSitePropertyAttribute("oozie.service.ProxyUserService.proxyuser.{0}.hosts".format(falconUserOldValue), 'delete', 'true')
+  services["forced-configurations"].append({"type" : "oozie-site", "name" : "oozie.service.ProxyUserService.proxyuser.{0}.hosts".format(falconUserOldValue)})
+  services["forced-configurations"].append({"type" : "oozie-site", "name" : "oozie.service.ProxyUserService.proxyuser.{0}.groups".format(falconUserOldValue)})
+  if falconUser is not None:
+services["forced-configurations"].append({"type" : "oozie-site", 
"name" : 
"oozie.service.ProxyUserService.proxyuser.{0}.hosts".format(falconUser)})
+services["forced-configurations"].append({"type" : "oozie-site", 
"name" : 
"oozie.service.ProxyUserService.proxyuser.{0}.groups".format(falconUser)})
+
+  putMapredProperty = self.putProperty(configurations, "oozie-site")
+  putMapredProperty("oozie.services.ext",
+"org.apache.oozie.service.JMSAccessorService," +
+"org.apache.oozie.service.PartitionDependencyManagerService," +
+"org.apache.oozie.service.HCatAccessorService")
+if oozieEnvProperties and oozieSiteProperties and 
self.checkSiteProperties(oozieSiteProperties, 
'oozie.service.JPAService.jdbc.driver') and 
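
When falcon_user changes, the advisor has to delete the proxyuser properties keyed by the old name and write new ones. The key rewrite in isolation, as a hypothetical helper (not the advisor API):

    PROXY_KEYS = ("oozie.service.ProxyUserService.proxyuser.{0}.groups",
                  "oozie.service.ProxyUserService.proxyuser.{0}.hosts")

    def rewrite_proxyuser(oozie_site, old_user, new_user):
        """Drop the old user's proxyuser entries and grant the new user '*'."""
        for template in PROXY_KEYS:
            oozie_site.pop(template.format(old_user), None)
            oozie_site[template.format(new_user)] = "*"
        return oozie_site

    site = {"oozie.service.ProxyUserService.proxyuser.falcon1.hosts": "*"}
    print(rewrite_proxyuser(site, "falcon1", "falcon2"))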

[02/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/services/HBASE/configuration/hbase-site.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/services/HBASE/configuration/hbase-site.xml
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/services/HBASE/configuration/hbase-site.xml
new file mode 100755
index 000..047d3f6
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/services/HBASE/configuration/hbase-site.xml
@@ -0,0 +1,388 @@
+
+
+
+
+  
+hbase.bulkload.staging.dir
+/apps/hbase/staging
+A staging directory in default file system (HDFS)
+for bulk loading.
+
+
+  
+  
+hbase.hstore.flush.retries.number
+120
+
+The number of times the region flush operation will be retried.
+
+true
+
+  
+  
+hbase.hregion.majorcompaction
+60480
+Time between major compactions, expressed in milliseconds. 
Set to 0 to disable
+  time-based automatic major compactions. User-requested and size-based 
major compactions will
+  still run. This value is multiplied by 
hbase.hregion.majorcompaction.jitter to cause
+  compaction to start at a somewhat-random time during a given window of 
time. The default value
+  is 7 days, expressed in milliseconds. If major compactions are causing 
disruption in your
+  environment, you can configure them to run at off-peak times for your 
deployment, or disable
+  time-based major compactions by setting this parameter to 0, and run 
major compactions in a
+  cron job or by another external mechanism.
+
+  int
+  0
+  259200
+  milliseconds
+
+
+  
+  
+hbase.hregion.majorcompaction.jitter
+0.50
+A multiplier applied to hbase.hregion.majorcompaction to 
cause compaction to occur
+  a given amount of time either side of hbase.hregion.majorcompaction. The 
smaller the number,
+  the closer the compactions will happen to the 
hbase.hregion.majorcompaction
+  interval.
+
+  
+  
+hbase.hregion.memstore.block.multiplier
+4
+
+Block updates if memstore has hbase.hregion.memstore.block.multiplier
+times hbase.hregion.memstore.flush.size bytes.  Useful preventing
+runaway memstore during spikes in update traffic.  Without an
+upper-bound, memstore fills such that when it flushes the
+resultant flush files take a long time to compact or split, or
+worse, we OOME.
+
+HBase Region Block Multiplier
+
+  value-list
+  
+
+  2
+
+
+  4
+
+
+  8
+
+  
+
+
+  
+  
+hbase.bucketcache.ioengine
+
+Where to store the contents of the bucketcache. One of: 
onheap,
+  offheap, or file. If a file, set it to file:PATH_TO_FILE.
+
+  true
+
+
+  
+  
+hbase.bucketcache.size
+
+The size of the buckets for the bucketcache if you only use a 
single size.
+
+  true
+
+
+  
+  
+hbase.bucketcache.percentage.in.combinedcache
+
+Value to be set between 0.0 and 1.0
+
+  true
+
+
+  
+  
+hbase.regionserver.wal.codec
+RegionServer WAL Codec
+org.apache.hadoop.hbase.regionserver.wal.WALCellCodec
+
+  
+hbase-env
+phoenix_sql_enabled
+  
+
+
+  
+  
+hbase.region.server.rpc.scheduler.factory.class
+
+
+  true
+
+
+  
+hbase-env
+phoenix_sql_enabled
+  
+
+
+  
+  
+hbase.rpc.controllerfactory.class
+
+
+  true
+
+
+  
+hbase-env
+phoenix_sql_enabled
+  
+
+
+  
+  
+phoenix.functions.allowUserDefinedFunctions
+ 
+
+  
+hbase-env
+phoenix_sql_enabled
+  
+
+
+  
+  
+hbase.coprocessor.regionserver.classes
+
+
+  true
+
+
+  
+hbase-site
+hbase.security.authorization
+  
+
+
+  
+  
+hbase.hstore.compaction.max
+10
+The maximum number of StoreFiles which will be selected for a 
single minor
+  compaction, regardless of the number of eligible StoreFiles. 
Effectively, the value of
+  hbase.hstore.compaction.max controls the length of time it takes a 
single compaction to
+  complete. Setting it larger means that more StoreFiles are included in a 
compaction. For most
+  cases, the default value is appropriate.
+
+Maximum Files for Compaction
+
+  int
+  
+
+  8
+
+
+  9
+
+
+  10
+
+
+  11
+
+
+  12
+
+
+  13
+
+
+  14
+
+
+  15
+
+  
+
+
+  
+  
+
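
hbase.hregion.majorcompaction.jitter spreads compaction start times across a window of plus or minus jitter times the configured interval. The resulting window for the documented 7-day default (interval and jitter values assumed; several property values above are truncated in this digest):

    def compaction_window_ms(interval_ms=604800000, jitter=0.5):
        # jitter 0.5: the next major compaction starts between 0.5x and 1.5x the interval
        spread = interval_ms * jitter
        return interval_ms - spread, interval_ms + spread

    low, high = compaction_window_ms()
    print("between %.1f and %.1f days" % (low / 86400000.0, high / 86400000.0))  # 3.5 and 10.5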

[07/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/stack-advisor/stack_advisor_25.py
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/stack-advisor/stack_advisor_25.py
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/stack-advisor/stack_advisor_25.py
new file mode 100755
index 000..1f0ae18
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/stack-advisor/stack_advisor_25.py
@@ -0,0 +1,1940 @@
+#!/usr/bin/env ambari-python-wrap
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+"""
+
+import math
+import traceback
+
+from ambari_commons.str_utils import string_set_equals
+from resource_management.core.logger import Logger
+from resource_management.core.exceptions import Fail
+from resource_management.libraries.functions.get_bare_principal import 
get_bare_principal
+
+try:
+  from stack_advisor_24 import *
+except ImportError:
+  #Ignore ImportError
+  print("stack_advisor_24 not found")
+
+class HDP25StackAdvisor(HDP24StackAdvisor):
+
+  def __init__(self):
+super(HDP25StackAdvisor, self).__init__()
+Logger.initialize_logger()
+self.HIVE_INTERACTIVE_SITE = 'hive-interactive-site'
+self.YARN_ROOT_DEFAULT_QUEUE_NAME = 'default'
+self.AMBARI_MANAGED_LLAP_QUEUE_NAME = 'llap'
+self.RANGER_TAGSYNC_SITE = 'ranger-tagsync-site'
+
+  def recommendOozieConfigurations(self, configurations, clusterData, services, hosts):
+super(HDP25StackAdvisor, self).recommendOozieConfigurations(configurations, clusterData, services, hosts)
+putOozieEnvProperty = self.putProperty(configurations, "oozie-env", services)
+
+if not "oozie-env" in services["configurations"] :
+  Logger.info("No oozie configurations available")
+  return
+
+if not "FALCON_SERVER" in clusterData["components"] :
+  Logger.info("Falcon is not part of the installation")
+  return
+
+falconUser = 'falcon'
+
+if "falcon-env" in services["configurations"] :
+  if "falcon_user" in 
services["configurations"]["falcon-env"]["properties"] :
+falconUser = 
services["configurations"]["falcon-env"]["properties"]["falcon_user"]
+Logger.info("Falcon user from configuration: %s " % falconUser)
+
+Logger.info("Falcon user : %s" % falconUser)
+
+oozieUser = 'oozie'
+
+if "oozie_user" \
+  in services["configurations"]["oozie-env"]["properties"] :
+  oozieUser = 
services["configurations"]["oozie-env"]["properties"]["oozie_user"]
+  Logger.info("Oozie user from configuration %s" % oozieUser)
+
+Logger.info("Oozie user %s" % oozieUser)
+
+if "oozie_admin_users" \
+in services["configurations"]["oozie-env"]["properties"] :
+  currentAdminUsers =  
services["configurations"]["oozie-env"]["properties"]["oozie_admin_users"]
+  Logger.info("Oozie admin users from configuration %s" % 
currentAdminUsers)
+else :
+  currentAdminUsers = "{0}, oozie-admin".format(oozieUser)
+  Logger.info("Setting default oozie admin users to %s" % 
currentAdminUsers)
+
+
+if falconUser in currentAdminUsers :
+  Logger.info("Falcon user %s already member of  oozie admin users " % 
falconUser)
+  return
+
+newAdminUsers = "{0},{1}".format(currentAdminUsers, falconUser)
+
+Logger.info("new oozie admin users : %s" % newAdminUsers)
+
+services["forced-configurations"].append({"type" : "oozie-env", "name" : 
"oozie_admin_users"})
+putOozieEnvProperty("oozie_admin_users", newAdminUsers)
+
+  def createComponentLayoutRecommendations(self, services, hosts):
+parentComponentLayoutRecommendations = super(HDP25StackAdvisor, self).createComponentLayoutRecommendations(
+  services, hosts)
+return parentComponentLayoutRecommendations
+
+  def getComponentLayoutValidations(self, services, hosts):
+parentItems = super(HDP25StackAdvisor, self).getComponentLayoutValidations(services, hosts)
+childItems = []
+if self.HIVE_INTERACTIVE_SITE in services['configurations']:
+  hsi_hosts = self.__getHostsForComponent(services, "HIVE", "HIVE_SERVER_INTERACTIVE")
+  if len(hsi_hosts) > 1:
+message = "Only 

[41/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HBASE/package/files/draining_servers.rb
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HBASE/package/files/draining_servers.rb
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HBASE/package/files/draining_servers.rb
new file mode 100755
index 000..a3958a6
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HBASE/package/files/draining_servers.rb
@@ -0,0 +1,164 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# Add or remove servers from draining mode via zookeeper
+
+require 'optparse'
+include Java
+
+import org.apache.hadoop.hbase.HBaseConfiguration
+import org.apache.hadoop.hbase.client.HBaseAdmin
+import org.apache.hadoop.hbase.zookeeper.ZKUtil
+import org.apache.commons.logging.Log
+import org.apache.commons.logging.LogFactory
+
+# Name of this script
+NAME = "draining_servers"
+
+# Do command-line parsing
+options = {}
+optparse = OptionParser.new do |opts|
+  opts.banner = "Usage: ./hbase org.jruby.Main #{NAME}.rb [options] add|remove|list <hostname>|<host:port>|<servername> ..."
+  opts.separator 'Add, remove, or list servers in draining mode. Can accept either a hostname to drain all region servers ' +
+ 'in that host, a host:port pair, or a host,port,startCode triplet. More than one server can be given, separated by spaces'
+  opts.on('-h', '--help', 'Display usage information') do
+puts opts
+exit
+  end
+  options[:debug] = false
+  opts.on('-d', '--debug', 'Display extra debug logging') do
+options[:debug] = true
+  end
+end
+optparse.parse!
+
+# Return array of servernames where servername is hostname+port+startcode
+# comma-delimited
+def getServers(admin)
+  serverInfos = admin.getClusterStatus().getServerInfo()
+  servers = []
+  for server in serverInfos
+servers << server.getServerName()
+  end
+  return servers
+end
+
+def getServerNames(hostOrServers, config)
+  ret = []
+
+  for hostOrServer in hostOrServers
+# check whether it is already serverName. No need to connect to cluster
+parts = hostOrServer.split(',')
+if parts.size() == 3
+  ret << hostOrServer
+else
+  admin = HBaseAdmin.new(config) if not admin
+  servers = getServers(admin)
+
+  hostOrServer = hostOrServer.gsub(/:/, ",")
+  for server in servers
+ret << server if server.start_with?(hostOrServer)
+  end
+end
+  end
+
+  admin.close() if admin
+  return ret
+end
+
+def addServers(options, hostOrServers)
+  config = HBaseConfiguration.create()
+  servers = getServerNames(hostOrServers, config)
+
+  zkw = org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.new(config, "draining_servers", nil)
+  parentZnode = zkw.drainingZNode
+
+  begin
+for server in servers
+  node = ZKUtil.joinZNode(parentZnode, server)
+  ZKUtil.createAndFailSilent(zkw, node)
+end
+  ensure
+zkw.close()
+  end
+end
+
+def removeServers(options, hostOrServers)
+  config = HBaseConfiguration.create()
+  servers = getServerNames(hostOrServers, config)
+
+  zkw = org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.new(config, "draining_servers", nil)
+  parentZnode = zkw.drainingZNode
+
+  begin
+for server in servers
+  node = ZKUtil.joinZNode(parentZnode, server)
+  ZKUtil.deleteNodeFailSilent(zkw, node)
+end
+  ensure
+zkw.close()
+  end
+end
+
+# list servers in draining mode
+def listServers(options)
+  config = HBaseConfiguration.create()
+
+  zkw = org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.new(config, "draining_servers", nil)
+  parentZnode = zkw.drainingZNode
+
+  servers = ZKUtil.listChildrenNoWatch(zkw, parentZnode)
+  servers.each {|server| puts server}
+end
+
+hostOrServers = ARGV[1..ARGV.size()]
+
+# Create a logger and disable the DEBUG-level annoying client logging
+def configureLogging(options)
+  apacheLogger = LogFactory.getLog(NAME)
+  # Configure log4j to not spew so much
+  unless (options[:debug])
+logger = org.apache.log4j.Logger.getLogger("org.apache.hadoop.hbase")
+logger.setLevel(org.apache.log4j.Leve
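
getServerNames above normalizes three input shapes (hostname, host:port, and the full host,port,startcode triple) before matching against live region servers. The same normalization as a Python sketch; the server strings below are made up for illustration:

    def resolve_server_names(host_or_servers, live_servers):
        """live_servers: 'host,port,startcode' strings reported by the master."""
        resolved = []
        for entry in host_or_servers:
            if len(entry.split(',')) == 3:
                resolved.append(entry)        # already a full server name
                continue
            prefix = entry.replace(':', ',')  # host:port -> host,port
            resolved.extend(s for s in live_servers if s.startswith(prefix))
        return resolved

    live = ["rs1.example.com,16020,1498000000000", "rs2.example.com,16020,1498000000001"]
    print(resolve_server_names(["rs1.example.com:16020", "rs2.example.com"], live))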

[12/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/ZOOKEEPER/package/files/zkServer.sh
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/ZOOKEEPER/package/files/zkServer.sh
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/ZOOKEEPER/package/files/zkServer.sh
new file mode 100755
index 000..dd75a58
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/ZOOKEEPER/package/files/zkServer.sh
@@ -0,0 +1,120 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#
+# If this scripted is run out of /usr/bin or some other system bin directory
+# it should be linked to and not copied. Things like java jar files are found
+# relative to the canonical path of this script.
+#
+
+# See the following page for extensive details on setting
+# up the JVM to accept JMX remote management:
+# http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html
+# by default we allow local JMX connections
+if [ "x$JMXLOCALONLY" = "x" ]
+then
+JMXLOCALONLY=false
+fi
+
+if [ "x$JMXDISABLE" = "x" ]
+then
+echo "JMX enabled by default"
+# for some reason these two options are necessary on jdk6 on Ubuntu
+#   accord to the docs they are not necessary, but otw jconsole cannot
+#   do a local attach
+ZOOMAIN="-Dcom.sun.management.jmxremote 
-Dcom.sun.management.jmxremote.local.only=$JMXLOCALONLY 
org.apache.zookeeper.server.quorum.QuorumPeerMain"
+else
+echo "JMX disabled by user request"
+ZOOMAIN="org.apache.zookeeper.server.quorum.QuorumPeerMain"
+fi
+
+# Only follow symlinks if readlink supports it
+if readlink -f "$0" > /dev/null 2>&1
+then
+  ZOOBIN=`readlink -f "$0"`
+else
+  ZOOBIN="$0"
+fi
+ZOOBINDIR=`dirname "$ZOOBIN"`
+
+. "$ZOOBINDIR"/zkEnv.sh
+
+if [ "x$2" != "x" ]
+then
+ZOOCFG="$ZOOCFGDIR/$2"
+fi
+
+if $cygwin
+then
+ZOOCFG=`cygpath -wp "$ZOOCFG"`
+# cygwin has a "kill" in the shell itself, gets confused
+KILL=/bin/kill
+else
+KILL=kill
+fi
+
+echo "Using config: $ZOOCFG"
+
+ZOOPIDFILE=$(grep dataDir "$ZOOCFG" | sed -e 's/.*=//')/zookeeper_server.pid
+
+
+case $1 in
+start)
+echo  "Starting zookeeper ... "
+$JAVA  "-Dzookeeper.log.dir=${ZOO_LOG_DIR}" 
"-Dzookeeper.root.logger=${ZOO_LOG4J_PROP}" \
+-cp "$CLASSPATH" $JVMFLAGS $ZOOMAIN "$ZOOCFG" &
+/bin/echo -n $! > "$ZOOPIDFILE"
+echo STARTED
+;;
+stop)
+echo "Stopping zookeeper ... "
+if [ ! -f "$ZOOPIDFILE" ]
+then
+echo "error: could not find file $ZOOPIDFILE"
+exit 1
+else
+$KILL -9 $(cat "$ZOOPIDFILE")
+rm "$ZOOPIDFILE"
+echo STOPPED
+fi
+;;
+upgrade)
+shift
+echo "upgrading the servers to 3.*"
+java "-Dzookeeper.log.dir=${ZOO_LOG_DIR}" 
"-Dzookeeper.root.logger=${ZOO_LOG4J_PROP}" \
+-cp "$CLASSPATH" $JVMFLAGS org.apache.zookeeper.server.upgrade.UpgradeMain 
${@}
+echo "Upgrading ... "
+;;
+restart)
+shift
+"$0" stop ${@}
+sleep 3
+"$0" start ${@}
+;;
+status)
+STAT=`echo stat | nc localhost $(grep clientPort "$ZOOCFG" | sed -e 's/.*=//') 2> /dev/null| grep Mode`
+if [ "x$STAT" = "x" ]
+then
+echo "Error contacting service. It is probably not running."
+else
+echo $STAT
+fi
+;;
+*)
+echo "Usage: $0 {start|stop|restart|status}" >&2
+
+esac
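
The status branch is just the four-letter-word "stat" command against the client port, filtered for the Mode line. An equivalent probe in Python (host and port assumed; needs a reachable ZooKeeper):

    import socket

    def zk_mode(host="localhost", port=2181, timeout=5.0):
        """Send the 4lw 'stat' and return the 'Mode: ...' line, or None."""
        sock = socket.create_connection((host, port), timeout)
        try:
            sock.sendall(b"stat")
            reply = b""
            while True:
                chunk = sock.recv(4096)
                if not chunk:
                    break
                reply += chunk
        finally:
            sock.close()
        for line in reply.decode("utf-8", "replace").splitlines():
            if line.startswith("Mode:"):
                return line
        return None

    try:
        print(zk_mode() or "no Mode line in reply")
    except (socket.error, OSError):
        print("Error contacting service. It is probably not running.")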

http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/ZOOKEEPER/package/files/zkService.sh
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/ZOOKEEPER/package/files/zkService.sh
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/ZOOKEEPER/package/files/zkService.sh
new file mode 100755
index 000..59bfe90
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/ZOOKEEPER/package/files/zkService.sh
@@ -0,0 +1,26 @@
+#!/usr/bin/env bash
+#
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor licen

[30/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/etc/hive-schema-0.14.0.mysql.sql
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/etc/hive-schema-0.14.0.mysql.sql
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/etc/hive-schema-0.14.0.mysql.sql
new file mode 100755
index 000..e0086ab
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/etc/hive-schema-0.14.0.mysql.sql
@@ -0,0 +1,889 @@
+-- MySQL dump 10.13  Distrib 5.5.25, for osx10.6 (i386)
+--
+-- Host: localhostDatabase: test
+-- --
+-- Server version  5.5.25
+
+/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
+/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
+/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
+/*!40101 SET NAMES utf8 */;
+/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
+/*!40103 SET TIME_ZONE='+00:00' */;
+/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
+/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
+/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
+/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;
+
+--
+-- Table structure for table `BUCKETING_COLS`
+--
+
+/*!40101 SET @saved_cs_client = @@character_set_client */;
+/*!40101 SET character_set_client = utf8 */;
+CREATE TABLE IF NOT EXISTS `BUCKETING_COLS` (
+  `SD_ID` bigint(20) NOT NULL,
+  `BUCKET_COL_NAME` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
+  `INTEGER_IDX` int(11) NOT NULL,
+  PRIMARY KEY (`SD_ID`,`INTEGER_IDX`),
+  KEY `BUCKETING_COLS_N49` (`SD_ID`),
+  CONSTRAINT `BUCKETING_COLS_FK1` FOREIGN KEY (`SD_ID`) REFERENCES `SDS` (`SD_ID`)
+) ENGINE=InnoDB DEFAULT CHARSET=latin1;
+/*!40101 SET character_set_client = @saved_cs_client */;
+
+--
+-- Table structure for table `CDS`
+--
+
+/*!40101 SET @saved_cs_client = @@character_set_client */;
+/*!40101 SET character_set_client = utf8 */;
+CREATE TABLE IF NOT EXISTS `CDS` (
+  `CD_ID` bigint(20) NOT NULL,
+  PRIMARY KEY (`CD_ID`)
+) ENGINE=InnoDB DEFAULT CHARSET=latin1;
+/*!40101 SET character_set_client = @saved_cs_client */;
+
+--
+-- Table structure for table `COLUMNS_V2`
+--
+
+/*!40101 SET @saved_cs_client = @@character_set_client */;
+/*!40101 SET character_set_client = utf8 */;
+CREATE TABLE IF NOT EXISTS `COLUMNS_V2` (
+  `CD_ID` bigint(20) NOT NULL,
+  `COMMENT` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
+  `COLUMN_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
+  `TYPE_NAME` varchar(4000) DEFAULT NULL,
+  `INTEGER_IDX` int(11) NOT NULL,
+  PRIMARY KEY (`CD_ID`,`COLUMN_NAME`),
+  KEY `COLUMNS_V2_N49` (`CD_ID`),
+  CONSTRAINT `COLUMNS_V2_FK1` FOREIGN KEY (`CD_ID`) REFERENCES `CDS` (`CD_ID`)
+) ENGINE=InnoDB DEFAULT CHARSET=latin1;
+/*!40101 SET character_set_client = @saved_cs_client */;
+
+--
+-- Table structure for table `DATABASE_PARAMS`
+--
+
+/*!40101 SET @saved_cs_client = @@character_set_client */;
+/*!40101 SET character_set_client = utf8 */;
+CREATE TABLE IF NOT EXISTS `DATABASE_PARAMS` (
+  `DB_ID` bigint(20) NOT NULL,
+  `PARAM_KEY` varchar(180) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
+  `PARAM_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
+  PRIMARY KEY (`DB_ID`,`PARAM_KEY`),
+  KEY `DATABASE_PARAMS_N49` (`DB_ID`),
+  CONSTRAINT `DATABASE_PARAMS_FK1` FOREIGN KEY (`DB_ID`) REFERENCES `DBS` (`DB_ID`)
+) ENGINE=InnoDB DEFAULT CHARSET=latin1;
+/*!40101 SET character_set_client = @saved_cs_client */;
+
+--
+-- Table structure for table `DBS`
+--
+
+/*!40101 SET @saved_cs_client = @@character_set_client */;
+/*!40101 SET character_set_client = utf8 */;
+CREATE TABLE IF NOT EXISTS `DBS` (
+  `DB_ID` bigint(20) NOT NULL,
+  `DESC` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
+  `DB_LOCATION_URI` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
+  `NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
+  `OWNER_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
+  `OWNER_TYPE` varchar(10) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
+  PRIMARY KEY (`DB_ID`),
+  UNIQUE KEY `UNIQUE_DATABASE` (`NAME`)
+) ENGINE=InnoDB DEFAULT CHARSET=latin1;
+/*!40101 SET character_set_client = @saved_cs_client */;
+
+--
+-- Table structure for table `DB_PRIVS`
+--
+
+/*!40101 SET @saved_cs_client = @@character_set_client */;
+/*!40101 SET character_set_client = utf8 */;
+CREATE TABLE IF NOT EXISTS `DB_PRIVS` (
+  `DB_GRANT_ID` bigint(20) NOT NULL,
+  `CREATE_TIME` int(11) NOT NULL,
+  `DB_ID` bigint(20) DEFAULT NULL,
+  `GRANT_OPTION` smallint(6) NOT NULL,
+  `GRANTOR`

[27/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/package/etc/hive-schema-0.12.0.postgres.sql
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/package/etc/hive-schema-0.12.0.postgres.sql
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/package/etc/hive-schema-0.12.0.postgres.sql
new file mode 100755
index 000..61769f6
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HIVE/package/etc/hive-schema-0.12.0.postgres.sql
@@ -0,0 +1,1405 @@
+--
+-- PostgreSQL database dump
+--
+
+SET statement_timeout = 0;
+SET client_encoding = 'UTF8';
+SET standard_conforming_strings = off;
+SET check_function_bodies = false;
+SET client_min_messages = warning;
+SET escape_string_warning = off;
+
+SET search_path = public, pg_catalog;
+
+SET default_tablespace = '';
+
+SET default_with_oids = false;
+
+--
+-- Name: BUCKETING_COLS; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "BUCKETING_COLS" (
+"SD_ID" bigint NOT NULL,
+"BUCKET_COL_NAME" character varying(256) DEFAULT NULL::character varying,
+"INTEGER_IDX" bigint NOT NULL
+);
+
+
+--
+-- Name: CDS; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "CDS" (
+"CD_ID" bigint NOT NULL
+);
+
+
+--
+-- Name: COLUMNS_OLD; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "COLUMNS_OLD" (
+"SD_ID" bigint NOT NULL,
+"COMMENT" character varying(256) DEFAULT NULL::character varying,
+"COLUMN_NAME" character varying(128) NOT NULL,
+"TYPE_NAME" character varying(4000) NOT NULL,
+"INTEGER_IDX" bigint NOT NULL
+);
+
+
+--
+-- Name: COLUMNS_V2; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "COLUMNS_V2" (
+"CD_ID" bigint NOT NULL,
+"COMMENT" character varying(4000),
+"COLUMN_NAME" character varying(128) NOT NULL,
+"TYPE_NAME" character varying(4000),
+"INTEGER_IDX" integer NOT NULL
+);
+
+
+--
+-- Name: DATABASE_PARAMS; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "DATABASE_PARAMS" (
+"DB_ID" bigint NOT NULL,
+"PARAM_KEY" character varying(180) NOT NULL,
+"PARAM_VALUE" character varying(4000) DEFAULT NULL::character varying
+);
+
+
+--
+-- Name: DBS; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "DBS" (
+"DB_ID" bigint NOT NULL,
+"DESC" character varying(4000) DEFAULT NULL::character varying,
+"DB_LOCATION_URI" character varying(4000) NOT NULL,
+"NAME" character varying(128) DEFAULT NULL::character varying
+);
+
+
+--
+-- Name: DB_PRIVS; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "DB_PRIVS" (
+"DB_GRANT_ID" bigint NOT NULL,
+"CREATE_TIME" bigint NOT NULL,
+"DB_ID" bigint,
+"GRANT_OPTION" smallint NOT NULL,
+"GRANTOR" character varying(128) DEFAULT NULL::character varying,
+"GRANTOR_TYPE" character varying(128) DEFAULT NULL::character varying,
+"PRINCIPAL_NAME" character varying(128) DEFAULT NULL::character varying,
+"PRINCIPAL_TYPE" character varying(128) DEFAULT NULL::character varying,
+"DB_PRIV" character varying(128) DEFAULT NULL::character varying
+);
+
+
+--
+-- Name: GLOBAL_PRIVS; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "GLOBAL_PRIVS" (
+"USER_GRANT_ID" bigint NOT NULL,
+"CREATE_TIME" bigint NOT NULL,
+"GRANT_OPTION" smallint NOT NULL,
+"GRANTOR" character varying(128) DEFAULT NULL::character varying,
+"GRANTOR_TYPE" character varying(128) DEFAULT NULL::character varying,
+"PRINCIPAL_NAME" character varying(128) DEFAULT NULL::character varying,
+"PRINCIPAL_TYPE" character varying(128) DEFAULT NULL::character varying,
+"USER_PRIV" character varying(128) DEFAULT NULL::character varying
+);
+
+
+--
+-- Name: IDXS; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "IDXS" (
+"INDEX_ID" bigint NOT NULL,
+"CREATE_TIME" bigint NOT NULL,
+"DEFERRED_REBUILD" boolean NOT NULL,
+"INDEX_HANDLER_CLASS" character varying(4000) DEFAULT NULL::character 
varying,
+"INDEX_NAME" character varying(128) DEFAULT NULL::character varying,
+"INDEX_TBL_ID" bigint,
+"LAST_ACCESS_TIME" bigint NOT NULL,
+"ORIG_TBL_ID" bigint,
+"SD_ID" bigint
+);
+
+
+--
+-- Name: INDEX_PARAMS; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "INDEX_PARAMS" (
+"INDEX_ID" bigint NOT NULL,
+"PARAM_KEY" character varying(256) NOT NULL,
+"PARAM_VALUE" character varying(4000) DEFAULT NULL::character varying
+);
+
+
+--
+-- Name: NUCLEUS_TABLES; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "NUCLEUS_TABLES" (
+"CLASS_NAME" character varying(128) NOT

[40/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HBASE/widgets.json
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HBASE/widgets.json
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HBASE/widgets.json
new file mode 100755
index 000..a0f5ba9
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/HBASE/widgets.json
@@ -0,0 +1,510 @@
+{
+  "layouts": [
+{
+  "layout_name": "default_hbase_dashboard",
+  "display_name": "Standard HBase Dashboard",
+  "section_name": "HBASE_SUMMARY",
+  "widgetLayoutInfo": [
+{
+  "widget_name": "Reads and Writes",
+  "description": "Count of read and write requests on all regions in 
the cluster.",
+  "widget_type": "GRAPH",
+  "is_visible": true,
+  "metrics": [
+{
+  "name": "regionserver.Server.Get_num_ops._sum",
+  "metric_path": 
"metrics/hbase/regionserver/Server/Get_num_ops._sum",
+  "service_name": "HBASE",
+  "component_name": "HBASE_REGIONSERVER"
+},
+{
+  "name": "regionserver.Server.ScanNext_num_ops._sum",
+  "metric_path": 
"metrics/hbase/regionserver/Server/ScanNext_num_ops._sum",
+  "service_name": "HBASE",
+  "component_name": "HBASE_REGIONSERVER"
+},
+{
+  "name": "regionserver.Server.Append_num_ops._sum",
+  "metric_path": 
"metrics/hbase/regionserver/Server/Append_num_ops._sum",
+  "service_name": "HBASE",
+  "component_name": "HBASE_REGIONSERVER"
+},
+{
+  "name": "regionserver.Server.Delete_num_ops._sum",
+  "metric_path": 
"metrics/hbase/regionserver/Server/Delete_num_ops._sum",
+  "service_name": "HBASE",
+  "component_name": "HBASE_REGIONSERVER"
+},
+{
+  "name": "regionserver.Server.Increment_num_ops._sum",
+  "metric_path": 
"metrics/hbase/regionserver/Server/Increment_num_ops._sum",
+  "service_name": "HBASE",
+  "component_name": "HBASE_REGIONSERVER"
+},
+{
+  "name": "regionserver.Server.Mutate_num_ops._sum",
+  "metric_path": 
"metrics/hbase/regionserver/Server/Mutate_num_ops._sum",
+  "service_name": "HBASE",
+  "component_name": "HBASE_REGIONSERVER"
+}
+  ],
+  "values": [
+{
+  "name": "Read Requests",
+  "value": "${regionserver.Server.Get_num_ops._sum + 
regionserver.Server.ScanNext_num_ops._sum}"
+},
+{
+  "name": "Write Requests",
+  "value": "${regionserver.Server.Append_num_ops._sum + 
regionserver.Server.Delete_num_ops._sum + 
regionserver.Server.Increment_num_ops._sum + 
regionserver.Server.Mutate_num_ops._sum}"
+}
+  ],
+  "properties": {
+"graph_type": "LINE",
+"time_range": "1"
+  }
+},
+{
+  "widget_name": "Read Latency",
+  "description": "maximum of 95% read latency.",
+  "widget_type": "GRAPH",
+  "is_visible": true,
+  "metrics": [
+{
+  "name": "regionserver.Server.Get_95th_percentile._max",
+  "metric_path": 
"metrics/hbase/regionserver/Server/Get_95th_percentile._max",
+  "service_name": "HBASE",
+  "component_name": "HBASE_REGIONSERVER"
+},
+{
+  "name": "regionserver.Server.ScanNext_95th_percentile._max",
+  "metric_path": 
"metrics/hbase/regionserver/Server/ScanNext_95th_percentile._max",
+  "service_name": "HBASE",
+  "component_name": "HBASE_REGIONSERVER"
+}
+  ],
+  "values": [
+{
+  "name": "Cluster wide maximum of 95% Get Latency",
+  "value": "${regionserver.Server.Get_95th_percentile._max}"
+},
+{
+  "name": "Cluster wide maximum of 95% ScanNext Latency",
+  "value": "${regionserver.Server.ScanNext_95th_percentile._max}"
+}
+  ],
+  "properties": {
+"display_unit": "ms",
+"graph_type": "LINE",
+"time_range": "1"
+  }
+},
+{
+  "widget_name": "Write Latency",
+  "description": "maximum of 95% write latency.",
+  "widget_type": "GRAPH",
+  "is_visible": true,
+  "metrics": [
+{
+  "name": "regionserver.Server.Mutate_95th_percentile._max",
+  "metric_path": 
"metrics/hbase/regionserver/S

[13/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/YARN/package/scripts/nodemanager.py
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/YARN/package/scripts/nodemanager.py
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/YARN/package/scripts/nodemanager.py
new file mode 100755
index 000..d185b7e
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/YARN/package/scripts/nodemanager.py
@@ -0,0 +1,144 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+Ambari Agent
+
+"""
+
+import nodemanager_upgrade
+
+from resource_management import *
+from resource_management.libraries.functions import conf_select
+from resource_management.libraries.functions import stack_select
+from resource_management.libraries.functions.version import compare_versions, format_stack_version
+from resource_management.libraries.functions.format import format
+from resource_management.libraries.functions.security_commons import 
build_expectations, \
+  cached_kinit_executor, get_params_from_filesystem, 
validate_security_config_properties, \
+  FILE_TYPE_XML
+
+from yarn import yarn
+from service import service
+
+class Nodemanager(Script):
+
+  def get_component_name(self):
+return "hadoop-yarn-nodemanager"
+
+  def install(self, env):
+self.install_packages(env)
+
+  def configure(self, env):
+import params
+env.set_params(params)
+yarn(name="nodemanager")
+
+  def pre_upgrade_restart(self, env, upgrade_type=None):
+Logger.info("Executing NodeManager Stack Upgrade pre-restart")
+import params
+env.set_params(params)
+
+if params.version and 
compare_versions(format_stack_version(params.version), '4.0.0.0') >= 0:
+  conf_select.select(params.stack_name, "hadoop", params.version)
+  stack_select.select("hadoop-yarn-nodemanager", params.version)
+  #Execute(format("stack-select set hadoop-yarn-nodemanager {version}"))
+
+  def start(self, env, upgrade_type=None):
+import params
+env.set_params(params)
+self.configure(env) # FOR SECURITY
+service('nodemanager',action='start')
+
+  def post_upgrade_restart(self, env, upgrade_type=None):
+Logger.info("Executing NodeManager Stack Upgrade post-restart")
+import params
+env.set_params(params)
+
+nodemanager_upgrade.post_upgrade_check()
+
+  def stop(self, env, upgrade_type=None):
+import params
+env.set_params(params)
+
+service('nodemanager',action='stop')
+
+  def status(self, env):
+import status_params
+env.set_params(status_params)
+check_process_status(status_params.nodemanager_pid_file)
+
+  def security_status(self, env):
+import status_params
+env.set_params(status_params)
+if status_params.security_enabled:
+  props_value_check = {"yarn.timeline-service.http-authentication.type": 
"kerberos",
+   "yarn.acl.enable": "true"}
+  props_empty_check = ["yarn.nodemanager.principal",
+   "yarn.nodemanager.keytab",
+   "yarn.nodemanager.webapp.spnego-principal",
+   "yarn.nodemanager.webapp.spnego-keytab-file"]
+
+  props_read_check = ["yarn.nodemanager.keytab",
+  "yarn.nodemanager.webapp.spnego-keytab-file"]
+  yarn_site_props = build_expectations('yarn-site', props_value_check, 
props_empty_check,
+   props_read_check)
+
+  yarn_expectations ={}
+  yarn_expectations.update(yarn_site_props)
+
+  security_params = 
get_params_from_filesystem(status_params.hadoop_conf_dir,
+   {'yarn-site.xml': 
FILE_TYPE_XML})
+  result_issues = validate_security_config_properties(security_params, 
yarn_site_props)
+  if not result_issues: # If all validations passed successfully
+try:
+  # Double check the dict before calling execute
+  if ( 'yarn-site' not in security_params
+   or 'yarn.nodemanager.keytab' not in security_params['yarn-site']
+   or 'yarn.nodemana
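
The class above follows the resource_management Script contract: Ambari's agent instantiates the subclass and dispatches to install/configure/start/stop/status according to the command it receives. A minimal sketch of that contract (the entry-point idiom below is assumed from other Ambari service scripts, not taken from this file):

  from resource_management import *

  class Example(Script):
    def install(self, env):
      # install_packages() resolves the package list from the command's metadata
      self.install_packages(env)

    def start(self, env, upgrade_type=None):
      pass

    def stop(self, env, upgrade_type=None):
      pass

    def status(self, env):
      pass

  if __name__ == "__main__":
    # execute() reads the requested lifecycle command from the agent's
    # arguments and invokes the matching method on this instance.
    Example().execute()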

[21/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/OOZIE/package/scripts/oozie_server_upgrade.py
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/OOZIE/package/scripts/oozie_server_upgrade.py
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/OOZIE/package/scripts/oozie_server_upgrade.py
new file mode 100755
index 000..aa02f64
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/services/OOZIE/package/scripts/oozie_server_upgrade.py
@@ -0,0 +1,300 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+import glob
+import os
+import shutil
+import tarfile
+import tempfile
+
+from resource_management.core import shell
+from resource_management.core.logger import Logger
+from resource_management.core.exceptions import Fail
+from resource_management.core.resources.system import Execute, Directory
+from resource_management.libraries.functions import Direction
+from resource_management.libraries.functions import format
+from resource_management.libraries.functions import compare_versions
+from resource_management.libraries.functions import format_stack_version
+from resource_management.libraries.functions import tar_archive
+from resource_management.libraries.functions import stack_select
+import oozie
+
+BACKUP_TEMP_DIR = "oozie-upgrade-backup"
+BACKUP_CONF_ARCHIVE = "oozie-conf-backup.tar"
+
+
+def backup_configuration():
+  """
+  Backs up the oozie configuration as part of the upgrade process.
+  :return:
+  """
+  Logger.info('Backing up Oozie configuration directory before upgrade...')
+  directoryMappings = _get_directory_mappings()
+
+  absolute_backup_dir = os.path.join(tempfile.gettempdir(), BACKUP_TEMP_DIR)
+  if not os.path.isdir(absolute_backup_dir):
+os.makedirs(absolute_backup_dir)
+
+  for directory in directoryMappings:
+if not os.path.isdir(directory):
+  raise Fail("Unable to backup missing directory {0}".format(directory))
+
+archive = os.path.join(absolute_backup_dir, directoryMappings[directory])
+Logger.info('Compressing {0} to {1}'.format(directory, archive))
+
+if os.path.exists(archive):
+  os.remove(archive)
+
+# backup the directory, following symlinks instead of including them
+tar_archive.archive_directory_dereference(archive, directory)
+
+
+def restore_configuration():
+  """
+  Restores the configuration backups to their proper locations after an
+  upgrade has completed.
+  :return:
+  """
+  Logger.info('Restoring Oozie configuration directory after upgrade...')
+  directoryMappings = _get_directory_mappings()
+
+  for directory in directoryMappings:
+archive = os.path.join(tempfile.gettempdir(), BACKUP_TEMP_DIR,
+  directoryMappings[directory])
+
+if not os.path.isfile(archive):
+  raise Fail("Unable to restore missing backup archive 
{0}".format(archive))
+
+Logger.info('Extracting {0} to {1}'.format(archive, directory))
+
+tar_archive.untar_archive(archive, directory)
+
+  # cleanup
+  Directory(os.path.join(tempfile.gettempdir(), BACKUP_TEMP_DIR),
+action="delete"
+  )
+
+
+def prepare_libext_directory():
+  """
+  Creates /usr/iop/current/oozie/libext-customer and recursively sets
+  777 permissions on it and its parents.
+  :return:
+  """
+  import params
+
+  # some versions of IOP don't need the lzo compression libraries
+  target_version_needs_compression_libraries = compare_versions(
+format_stack_version(params.version), '4.0.0.0') >= 0
+
+  if not os.path.isdir(params.oozie_libext_customer_dir):
+os.makedirs(params.oozie_libext_customer_dir, 0o777)
+
+  # ensure that it's rwx for all
+  os.chmod(params.oozie_libext_customer_dir, 0o777)
+
+  # get all hadooplzo* JAR files
+  # stack-select set hadoop-client has not run yet, therefore we cannot use
+  # /usr/iop/current/hadoop-client ; we must use params.version directly
+  # however, this only works when upgrading beyond 4.0.0.0; don't do this
+  # for downgrade to 4.0.0.0 since hadoop-lzo will not be present
+  # This can also be called during a Downgrade.
+  # When a version is Installed, it is respons
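
Since backup_configuration() and restore_configuration() above are mirror images driven by the same _get_directory_mappings(), the natural pairing during an upgrade is snapshot-then-rollback. A hedged sketch of that orchestration (illustrative only, not the committed flow):

  def guarded_oozie_upgrade(perform_upgrade):
    # Snapshot the Oozie conf directories first, then roll back on failure.
    backup_configuration()
    try:
      perform_upgrade()
    except Exception:
      restore_configuration()
      raise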

[08/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.0/stack-advisor/stack_advisor_23.py
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.0/stack-advisor/stack_advisor_23.py
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/stack-advisor/stack_advisor_23.py
new file mode 100755
index 000..d8b6729
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.0/stack-advisor/stack_advisor_23.py
@@ -0,0 +1,995 @@
+#!/usr/bin/env ambari-python-wrap
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+"""
+
+# Python Imports
+import os
+import re
+import fnmatch
+import math
+import socket
+
+# Local Imports
+from resource_management.core.logger import Logger
+
+try:
+  from stack_advisor_22 import *
+except ImportError:
+  #Ignore ImportError
+  print("stack_advisor_22 not found")
+
+DB_TYPE_DEFAULT_PORT_MAP = {"MYSQL":"3306", "ORACLE":"1521", 
"POSTGRES":"5432", "MSSQL":"1433", "SQLA":"2638"}
+
+class HDP23StackAdvisor(HDP22StackAdvisor):
+
+  def __init__(self):
+super(HDP23StackAdvisor, self).__init__()
+Logger.initialize_logger()
+
+  def getComponentLayoutValidations(self, services, hosts):
+parentItems = super(HDP23StackAdvisor, 
self).getComponentLayoutValidations(services, hosts)
+
+servicesList = [service["StackServices"]["service_name"] for service in 
services["services"]]
+componentsListList = [service["components"] for service in 
services["services"]]
+componentsList = [item["StackServiceComponents"] for sublist in 
componentsListList for item in sublist]
+childItems = []
+
+if "SPARK" in servicesList:
+  if "SPARK_THRIFTSERVER" in servicesList:
+if not "HIVE_SERVER" in servicesList:
+  message = "SPARK_THRIFTSERVER requires HIVE services to be selected."
+  childItems.append( {"type": 'host-component', "level": 'ERROR', 
"message": message, "component-name": 'SPARK_THRIFTSERVER'} )
+
+  hmsHosts = self.__getHosts(componentsList, "HIVE_METASTORE") if "HIVE" 
in servicesList else []
+  sparkTsHosts = self.__getHosts(componentsList, "SPARK_THRIFTSERVER") if 
"SPARK" in servicesList else []
+
+  # if Spark Thrift Server is deployed but no Hive Server is deployed
+  if len(sparkTsHosts) > 0 and len(hmsHosts) == 0:
+message = "SPARK_THRIFTSERVER requires HIVE_METASTORE to be 
selected/deployed."
+childItems.append( { "type": 'host-component', "level": 'ERROR', 
"message": message, "component-name": 'SPARK_THRIFTSERVER' } )
+
+parentItems.extend(childItems)
+return parentItems
+
+  def __getHosts(self, componentsList, componentName):
+host_lists = [component["hostnames"] for component in componentsList if
+  component["component_name"] == componentName]
+if host_lists and len(host_lists) > 0:
+  return host_lists[0]
+else:
+  return []
+
+  def getServiceConfigurationRecommenderDict(self):
+parentRecommendConfDict = super(HDP23StackAdvisor, 
self).getServiceConfigurationRecommenderDict()
+childRecommendConfDict = {
+  "TEZ": self.recommendTezConfigurations,
+  "HDFS": self.recommendHDFSConfigurations,
+  "YARN": self.recommendYARNConfigurations,
+  "HIVE": self.recommendHIVEConfigurations,
+  "HBASE": self.recommendHBASEConfigurations,
+  "KAFKA": self.recommendKAFKAConfigurations,
+  "RANGER": self.recommendRangerConfigurations,
+  "RANGER_KMS": self.recommendRangerKMSConfigurations,
+  "STORM": self.recommendStormConfigurations,
+  "SQOOP": self.recommendSqoopConfigurations
+}
+parentRecommendConfDict.update(childRecommendConfDict)
+return parentRecommendConfDict
+
+  def recommendTezConfigurations(self, configurations, clusterData, services, 
hosts):
+super(HDP23StackAdvisor, self).recommendTezConfigurations(configurations, 
clusterData, services, hosts)
+
+putTezProperty = self.putProperty(configurations, "tez-site")
+
+if "HIVE" in self.getServiceNames(services):
+  if not "hive-site" in configurations:
+self.recommendHIVEConfigurations(configurations, clusterData, 
services, hosts)
+
+  if "h

[05/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.1/services/HIVE/configuration/hive-env.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.1/services/HIVE/configuration/hive-env.xml
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.1/services/HIVE/configuration/hive-env.xml
new file mode 100755
index 000..2014c78
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.1/services/HIVE/configuration/hive-env.xml
@@ -0,0 +1,189 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements. See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+-->
+<configuration>
+  <!-- hive-env.sh -->
+  <property>
+    <name>content</name>
+    <description>This is the jinja template for hive-env.sh file</description>
+    <value>
+ if [ "$SERVICE" = "cli" ]; then
+   {% if java_version < 8 %}
+if [ -z "$DEBUG" ]; then
+  export HADOOP_OPTS="$HADOOP_OPTS -XX:NewRatio=12 -Xms10m 
-XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=15 -XX:+UseParNewGC 
-XX:-UseGCOverheadLimit"
+else
+  export HADOOP_OPTS="$HADOOP_OPTS -XX:NewRatio=12 -Xms10m 
-XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=15 -XX:-UseGCOverheadLimit"
+fi
+  {% else %}
+if [ -z "$DEBUG" ]; then
+  export HADOOP_OPTS="$HADOOP_OPTS -XX:NewRatio=12 -Xms10m 
-XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=15 -XX:+UseParNewGC 
-XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit"
+else
+  export HADOOP_OPTS="$HADOOP_OPTS -XX:NewRatio=12 -Xms10m 
-XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=15 -XX:-UseGCOverheadLimit"
+fi
+  {% endif %}
+ fi
+
+# The heap size of the jvm stared by hive shell script can be controlled via:
+
+export HADOOP_HEAPSIZE="{{hive_heapsize}}"
+export HADOOP_CLIENT_OPTS="-Xmx${HADOOP_HEAPSIZE}m $HADOOP_CLIENT_OPTS"
+
+# Set JAVA HOME
+export JAVA_HOME={{java64_home}}
+
+# Larger heap size may be required when running queries over large number of 
files or partitions.
+# By default hive shell scripts use a heap size of 256 (MB).  Larger heap size 
would also be
+# appropriate for hive server (hwi etc).
+
+
+# Set HADOOP_HOME to point to a specific hadoop install directory
+HADOOP_HOME=${HADOOP_HOME:-{{hadoop_home}}}
+
+# Hive Configuration Directory can be controlled by:
+export HIVE_CONF_DIR={{hive_config_dir}}
+
+# Folder containing extra libraries required for hive compilation/execution 
can be controlled by:
+if [ "${HIVE_AUX_JARS_PATH}" != "" ]; then
+  export HIVE_AUX_JARS_PATH=${HIVE_AUX_JARS_PATH}
+elif [ -d "{{hcat_lib}}" ]; then
+  export HIVE_AUX_JARS_PATH={{hcat_lib}}
+fi
+
+# Set HIVE_AUX_JARS_PATH
+export HIVE_AUX_JARS_PATH={{hbase_lib}}/hbase-client.jar,\
+{{hbase_lib}}/hbase-common.jar,\
+{{hbase_lib}}/hbase-hadoop2-compat.jar,\
+{{hbase_lib}}/hbase-prefix-tree.jar,\
+{{hbase_lib}}/hbase-protocol.jar,\
+{{hbase_lib}}/hbase-server.jar,\
+{{hbase_lib}}/htrace-core-3.1.0-incubating.jar,\
+${HIVE_AUX_JARS_PATH}
+
+export METASTORE_PORT={{hive_metastore_port}}
+
+    </value>
+    <value-attributes>
+      <type>content</type>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+
+  <property>
+    <name>hive_security_authorization</name>
+    <display-name>Choose Authorization</display-name>
+    <value>None</value>
+    <value-attributes>
+      <type>value-list</type>
+      <entries>
+        <entry>
+          <value>None</value>
+          <label>None</label>
+        </entry>
+        <entry>
+          <value>SQLStdAuth</value>
+          <label>SQLStdAuth</label>
+        </entry>
+      </entries>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+
+  <property>
+    <name>hive_exec_orc_storage_strategy</name>
+    <display-name>ORC Storage Strategy</display-name>
+    <value>SPEED</value>
+    <value-attributes>
+      <type>value-list</type>
+      <entries>
+        <entry>
+          <value>SPEED</value>
+          <label>Speed</label>
+        </entry>
+        <entry>
+          <value>COMPRESSION</value>
+          <label>Compression</label>
+        </entry>
+      </entries>
+      <selection-cardinality>1</selection-cardinality>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+
+  <property>
+    <name>hive_txn_acid</name>
+    <display-name>ACID Transactions</display-name>
+    <value>off</value>
+    <value-attributes>
+      <type>value-list</type>
+      <entries>
+        <entry>
+          <value>on</value>
+          <label>On</label>
+        </entry>
+        <entry>
+          <value>off</value>
+          <label>Off</label>
+        </entry>
+      </entries>
+      <selection-cardinality>1</selection-cardinality>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+
+  <property>
+    <name>hive.heapsize</name>
+    <value>1024</value>
+    <description>Hive Java heap size</description>
+    <display-name>HiveServer2 Heap Size</display-name>
+    <value-attributes>
+      <type>int</type>
+      <minimum>512</minimum>
+      <maximum>2048</maximum>
+      <unit>MB</unit>
+      <increment-step>512</increment-step>
+      <overridable>false</overridable>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+
+  <property>
+    <name>hive.client.heapsize</name>
+    <value>512</value>
+    <description>Hive Client Java heap size</description>
+    <display-name>Client Heap Size</display-name>
+    <value-attributes>
+      <type>int</type>
+      <minimum>512</minimum>
+      <maximum>2048</maximum>
+      <unit>MB</unit>
+      <increment-step>512</increment-step>
+      <overridable>false</overridable>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+
+  <property>
+    <name>hive.metastore.heapsize</name>
+    <value>1024</value>
+    <description>Hive Metastore Java heap size</description>
+    <display-name>Metastore Heap Size</display-name>
+    <value-attributes>
+      <type>int</type>
+      <minimum>512</minimum>
+      <maximum>2048</maximum>
+      <unit>MB</unit>
+      <increment-step>512</increment-step>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+</configuration>
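
The content value above is a Jinja template; placeholders such as {{hive_heapsize}} and {{java64_home}} are filled from the service's params at deploy time. A hedged sketch of that substitution with plain jinja2 (Ambari's InlineTemplate wraps the same mechanism; the JDK path below is illustrative):

  from jinja2 import Template

  snippet = 'export HADOOP_HEAPSIZE="{{hive_heapsize}}"\nexport JAVA_HOME={{java64_home}}'
  print(Template(snippet).render(hive_heapsize="1024",
                                 java64_home="/usr/jdk64/jdk1.8.0_77"))
  # export HADOOP_HEAPSIZE="1024"
  # export JAVA_HOME=/usr/jdk64/jdk1.8.0_77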

http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.1/services/HIVE/configuration/hive-site.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.1/services/HIVE/configuration/hive-site.xml
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.1/services/HIVE/configuration/hive-site.xml
new file mode 100755
index 000..e1a2114
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.1/services/HIVE/configuration/hive-site.xml
@@ -0,0 +1,338 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<configuration>
+  <property>
+    <name>hive.security.authenticator.manager</name>
+    <value>org.apache.hadoop.hive.ql.security.ProxyUserAuthenticator</value>
+    <description>Hive client authenticator manager class name. The 
user-defined authenticator class shoul

[01/51] [partial] ambari git commit: AMBARI-21349. Create BigInsights Stack Skeleton in Ambari 2.5 (alejandro)

2017-06-27 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/branch-feature-AMBARI-21348 7ad307c2c -> 1863c3b90


http://git-wip-us.apache.org/repos/asf/ambari/blob/1863c3b9/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/services/HBASE/metrics.json
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/services/HBASE/metrics.json
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/services/HBASE/metrics.json
new file mode 100755
index 000..604e545
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2.5/services/HBASE/metrics.json
@@ -0,0 +1,9370 @@
+{
+  "HBASE_REGIONSERVER": {
+"Component": [
+  {
+"type": "ganglia",
+"metrics": {
+  "default": {
+"metrics/cpu/cpu_idle":{
+  "metric":"cpu_idle",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/cpu/cpu_nice":{
+  "metric":"cpu_nice",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/cpu/cpu_system":{
+  "metric":"cpu_system",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/cpu/cpu_user":{
+  "metric":"cpu_user",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/cpu/cpu_wio":{
+  "metric":"cpu_wio",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/disk/disk_free":{
+  "metric":"disk_free",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/disk/disk_total":{
+  "metric":"disk_total",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/disk/read_bps":{
+  "metric":"read_bps",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/disk/write_bps":{
+  "metric":"write_bps",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/load/load_fifteen":{
+  "metric":"load_fifteen",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/load/load_five":{
+  "metric":"load_five",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/load/load_one":{
+  "metric":"load_one",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/memory/mem_buffers":{
+  "metric":"mem_buffers",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/memory/mem_cached":{
+  "metric":"mem_cached",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/memory/mem_free":{
+  "metric":"mem_free",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/memory/mem_shared":{
+  "metric":"mem_shared",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/memory/mem_total":{
+  "metric":"mem_total",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/memory/swap_free":{
+  "metric":"swap_free",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/memory/swap_total":{
+  "metric":"swap_total",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/network/bytes_in":{
+  "metric":"bytes_in",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/network/bytes_out":{
+  "metric":"bytes_out",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/network/pkts_in":{
+  "metric":"pkts_in",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/network/pkts_out":{
+  "metric":"pkts_out",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/process/proc_run":{
+  "metric":"proc_run",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/process/proc_total":{
+  "metric":"proc_total",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/disk/read_count":{
+  "metric":"read_count",
+  "pointInTime":true,
+  "temporal":true
+},
+"metrics/disk/

ambari git commit: AMBARI-21350. ADDENDUM. Create a cross stack upgrade pack in Ambari (alejandro)

2017-06-27 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/branch-feature-AMBARI-21348 196ed48b8 -> 7b8f0eab5


AMBARI-21350. ADDENDUM. Create a cross stack upgrade pack in Ambari (alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/7b8f0eab
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/7b8f0eab
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/7b8f0eab

Branch: refs/heads/branch-feature-AMBARI-21348
Commit: 7b8f0eab5146855c2dae21c3431bc1511a1fc4c9
Parents: 196ed48
Author: Alejandro Fernandez 
Authored: Tue Jun 27 14:47:25 2017 -0700
Committer: Alejandro Fernandez 
Committed: Tue Jun 27 14:47:25 2017 -0700

--
 .../BigInsights/4.2/upgrades/config-upgrade.xml | 118 +++
 .../upgrades/nonrolling-upgrade-to-hdp-2.6.xml  | 797 +++
 .../BigInsights/upgrades/config-upgrade.xml | 118 ---
 .../upgrades/nonrolling-upgrade-to-hdp-2.6.xml  | 797 ---
 4 files changed, 915 insertions(+), 915 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/7b8f0eab/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/config-upgrade.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/config-upgrade.xml
 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/config-upgrade.xml
new file mode 100644
index 000..540c017
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/config-upgrade.xml
@@ -0,0 +1,118 @@
+
+
+
+<upgrade-config-changes xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
+  
+
+  
+
+  
+hadoop-env
+
+
+  
+
+  
+
+
+
+  
+
+  
+mapred-site
+
+  
+
+  
+
+
+
+  
+
+  
+hive-site
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+  
+  
+
+  
+  
+  
+
+  
+webhcat-env
+
+  
+  
+  
+webhcat-site
+
+
+
+
+  
+
+  
+
+
+
+  
+
+  
+
+  oozie-site
+  oozie.services
+  org.apache.oozie.service.SchedulerService,  
org.apache.oozie.service.InstrumentationService,  
org.apache.oozie.service.CallableQueueService,  
org.apache.oozie.service.UUIDService,  org.apache.oozie.service.ELService,  
org.apache.oozie.service.AuthorizationService,  
org.apache.oozie.service.UserGroupInformationService,  
org.apache.oozie.service.HadoopAccessorService,   
org.apache.oozie.service.JobsConcurrencyService,  
org.apache.oozie.service.URIHandlerService,  
org.apache.oozie.service.MemoryLocksService,  
org.apache.oozie.service.DagXLogInfoService,  
org.apache.oozie.service.SchemaService,  
org.apache.oozie.service.LiteWorkflowAppService,  
org.apache.oozie.service.JPAService,  
org.apache.oozie.service.StoreService,  
org.apache.oozie.service.SLAStoreService,  
org.apache.oozie.service.DBLiteWorkflowStoreService,  
org.apache.oozie.service.CallbackService,   
org.apache.oozie.service.ActionService, 
 org.apache.oozie.service.ShareLibService,  
org.apache.oozie.service.ActionCheckerService,  
org.apache.oozie.service.RecoveryService,  
org.apache.oozie.service.PurgeService,  
org.apache.oozie.service.CoordinatorEngineService,  
org.apache.oozie.service.BundleEngineService,  
org.apache.oozie.service.DagEngineService,  
org.apache.oozie.service.CoordMaterializeTriggerService,  
org.apache.oozie.service.StatusTransitService,  
org.apache.oozie.service.PauseTransitService,  
org.apache.oozie.service.GroupsService,  
org.apache.oozie.service.ProxyUserService,
org.apache.oozie.service.XLogStreamingService,  
org.apache.oozie.service.JvmPauseMonitorService, 
org.apache.oozie.service.SparkConfigurationService
+
+  
+  
+oozie-env
+
+  
+
+  
+
+  
+

http://git-wip-us.apache.org/repos/asf/ambari/blob/7b8f0eab/ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
--
diff --gi

ambari git commit: AMBARI-21350. Create a cross stack upgrade pack in Ambari (alejandro)

2017-06-27 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/branch-feature-AMBARI-21348 6db15fab9 -> 196ed48b8


AMBARI-21350. Create a cross stack upgrade pack in Ambari (alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/196ed48b
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/196ed48b
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/196ed48b

Branch: refs/heads/branch-feature-AMBARI-21348
Commit: 196ed48b84e8b92d6edd06c166fbf66cfec4e6a2
Parents: 6db15fa
Author: Alejandro Fernandez 
Authored: Tue Jun 27 10:31:31 2017 -0700
Committer: Alejandro Fernandez 
Committed: Tue Jun 27 13:24:23 2017 -0700

--
 .../BigInsights/upgrades/config-upgrade.xml | 118 +++
 .../upgrades/nonrolling-upgrade-to-hdp-2.6.xml  | 797 +++
 2 files changed, 915 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/196ed48b/ambari-server/src/main/resources/stacks/BigInsights/upgrades/config-upgrade.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/upgrades/config-upgrade.xml
 
b/ambari-server/src/main/resources/stacks/BigInsights/upgrades/config-upgrade.xml
new file mode 100644
index 000..540c017
--- /dev/null
+++ 
b/ambari-server/src/main/resources/stacks/BigInsights/upgrades/config-upgrade.xml
@@ -0,0 +1,118 @@
+
+
+
+<upgrade-config-changes xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
+  
+
+  
+
+  
+hadoop-env
+
+
+  
+
+  
+
+
+
+  
+
+  
+mapred-site
+
+  
+
+  
+
+
+
+  
+
+  
+hive-site
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+  
+  
+
+  
+  
+  
+
+  
+webhcat-env
+
+  
+  
+  
+webhcat-site
+
+
+
+
+  
+
+  
+
+
+
+  
+
+  
+
+  oozie-site
+  oozie.services
+  org.apache.oozie.service.SchedulerService,  
org.apache.oozie.service.InstrumentationService,  
org.apache.oozie.service.CallableQueueService,  
org.apache.oozie.service.UUIDService,  org.apache.oozie.service.ELService,  
org.apache.oozie.service.AuthorizationService,  
org.apache.oozie.service.UserGroupInformationService,  
org.apache.oozie.service.HadoopAccessorService,   
org.apache.oozie.service.JobsConcurrencyService,  
org.apache.oozie.service.URIHandlerService,  
org.apache.oozie.service.MemoryLocksService,  
org.apache.oozie.service.DagXLogInfoService,  
org.apache.oozie.service.SchemaService,  
org.apache.oozie.service.LiteWorkflowAppService,  
org.apache.oozie.service.JPAService,  
org.apache.oozie.service.StoreService,  
org.apache.oozie.service.SLAStoreService,  
org.apache.oozie.service.DBLiteWorkflowStoreService,  
org.apache.oozie.service.CallbackService,   
org.apache.oozie.service.ActionService, 
 org.apache.oozie.service.ShareLibService,  
org.apache.oozie.service.ActionCheckerService,  
org.apache.oozie.service.RecoveryService,  
org.apache.oozie.service.PurgeService,  
org.apache.oozie.service.CoordinatorEngineService,  
org.apache.oozie.service.BundleEngineService,  
org.apache.oozie.service.DagEngineService,  
org.apache.oozie.service.CoordMaterializeTriggerService,  
org.apache.oozie.service.StatusTransitService,  
org.apache.oozie.service.PauseTransitService,  
org.apache.oozie.service.GroupsService,  
org.apache.oozie.service.ProxyUserService,
org.apache.oozie.service.XLogStreamingService,  
org.apache.oozie.service.JvmPauseMonitorService, 
org.apache.oozie.service.SparkConfigurationService
+
+  
+  
+oozie-env
+
+  
+
+  
+
+  
+

http://git-wip-us.apache.org/repos/asf/ambari/blob/196ed48b/ambari-server/src/main/resources/stacks/BigInsights/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/BigInsights/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
 
b/ambari-server/src/main/resources/stacks/BigInsights/upgrades/nonrolling-upgrade-to-

ambari git commit: AMBARI-21191. Custom log4j.properties of Flume (wangjianfei via alejandro)

2017-06-20 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/trunk 4dac1b196 -> aa33c1b5b


AMBARI-21191. Custom log4j.properties of Flume (wangjianfei via alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/aa33c1b5
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/aa33c1b5
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/aa33c1b5

Branch: refs/heads/trunk
Commit: aa33c1b5bb1fc1a4742ce98ad7e9ade06212ab20
Parents: 4dac1b1
Author: Alejandro Fernandez 
Authored: Tue Jun 20 11:01:37 2017 -0700
Committer: Alejandro Fernandez 
Committed: Tue Jun 20 11:01:37 2017 -0700

--
 .../1.4.0.2.0/configuration/flume-log4j.xml | 96 
 .../FLUME/1.4.0.2.0/package/scripts/flume.py|  4 +-
 2 files changed, 98 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/aa33c1b5/ambari-server/src/main/resources/common-services/FLUME/1.4.0.2.0/configuration/flume-log4j.xml
--
diff --git 
a/ambari-server/src/main/resources/common-services/FLUME/1.4.0.2.0/configuration/flume-log4j.xml
 
b/ambari-server/src/main/resources/common-services/FLUME/1.4.0.2.0/configuration/flume-log4j.xml
new file mode 100644
index 000..4066367
--- /dev/null
+++ 
b/ambari-server/src/main/resources/common-services/FLUME/1.4.0.2.0/configuration/flume-log4j.xml
@@ -0,0 +1,96 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<configuration>
+  <property>
+    <name>content</name>
+    <display-name>flume-log4j template</display-name>
+    <description>Custom log4j.properties</description>
+    <value>
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+# Define some default values that can be overridden by system properties.
+#
+# For testing, it may also be convenient to specify
+# -Dflume.root.logger=DEBUG,console when launching flume.
+
+#flume.root.logger=DEBUG,console
+flume.root.logger=INFO,LOGFILE
+flume.log.dir={{flume_log_dir}}
+flume.log.file=flume-{{agent_name}}.log
+
+log4j.logger.org.apache.flume.lifecycle = INFO
+log4j.logger.org.jboss = WARN
+log4j.logger.org.mortbay = INFO
+log4j.logger.org.apache.avro.ipc.NettyTransceiver = WARN
+log4j.logger.org.apache.hadoop = INFO
+
+# Define the root logger to the system property "flume.root.logger".
+log4j.rootLogger=${flume.root.logger}
+
+# Stock log4j rolling file appender
+# Default log rotation configuration
+log4j.appender.LOGFILE=org.apache.log4j.RollingFileAppender
+log4j.appender.LOGFILE.MaxFileSize=100MB
+log4j.appender.LOGFILE.MaxBackupIndex=10
+log4j.appender.LOGFILE.File=${flume.log.dir}/${flume.log.file}
+log4j.appender.LOGFILE.layout=org.apache.log4j.PatternLayout
+log4j.appender.LOGFILE.layout.ConversionPattern=%d{dd MMM  HH:mm:ss,SSS} 
%-5p [%t] (%C.%M:%L) %x - %m%n
+
+# Warning: If you enable the following appender it will fill up your disk if 
you don't have a cleanup job!
+# This uses the updated rolling file appender from log4j-extras that supports 
a reliable time-based rolling policy.
+# See 
http://logging.apache.org/log4j/companions/extras/apidocs/org/apache/log4j/rolling/TimeBasedRollingPolicy.html
+# Add "DAILY" to flume.root.logger above if you want to use this
+log4j.appender.DAILY=org.apache.log4j.rolling.RollingFileAppender
+log4j.appender.DAILY.rollingPolicy=org.apache.log4j.rolling.TimeBasedRollingPolicy
+log4j.appender.DAILY.rollingPolicy.ActiveFileName=${flume.log.dir}/${flume.log.file}
+log4j.appender.DAILY.rollingPolicy.FileNamePattern=${flume.log.dir}/${flume.log.file}.%d{-MM-dd}
+log4j.appender.DAILY.layout=org.apache.log4j.PatternLayout
+log4j.appender.DAILY.layout.ConversionPattern=%d{dd MMM  HH:mm:ss,SSS} 
%-5p [%t] (%C.%M:%L) %x - %m%n
+
+# console
+# Add "console" to flume.root.logger above if you want to use this
+log4j.appender.console=org.apache.log4j.ConsoleAppender
+log4j.appender.console.target=System.err
+log4j.appender.console.layout=org.apache.log4j.PatternLayout
+log4j.appender.console.layout.ConversionPattern=%d (%t) [%p - %l] %m%n

+    </value>
+    <value-attributes>
+      <type>content</type>
+      <show-property-name>false</show-property-name>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+</configuration>
\ No newline at end of file

http://git-wip-us.a
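
The flume.py half of this change (its diff is cut off above) is what turns the content property into the agent's log4j.properties. A hedged sketch of the usual resource_management pattern for writing such a templated file (the params attribute names here are assumptions, not taken from the commit):

  import params  # the service's params module, as in other Ambari scripts

  from resource_management.core.resources.system import File
  from resource_management.core.source import InlineTemplate
  from resource_management.libraries.functions.format import format

  # InlineTemplate renders the {{flume_log_dir}}/{{agent_name}} placeholders
  # from params before the file is written out.
  File(format("{flume_conf_dir}/log4j.properties"),
       content=InlineTemplate(params.flume_log4j_content),
       owner=params.flume_user)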

ambari git commit: AMBARI-21191. Custom log4j.properties of Flume (wangjianfei via alejandro)

2017-06-19 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/trunk 2b46cb429 -> df21f9c4b


AMBARI-21191. Custom log4j.properties of Flume (wangjianfei via alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/df21f9c4
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/df21f9c4
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/df21f9c4

Branch: refs/heads/trunk
Commit: df21f9c4b2577a1d801145b131b8a065c8d02a7e
Parents: 2b46cb4
Author: Alejandro Fernandez 
Authored: Mon Jun 19 11:33:32 2017 -0700
Committer: Alejandro Fernandez 
Committed: Mon Jun 19 11:33:58 2017 -0700

--
 .../1.4.0.2.0/configuration/flume-log4j.xml | 96 
 .../FLUME/1.4.0.2.0/package/scripts/flume.py|  4 +-
 2 files changed, 98 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/df21f9c4/ambari-server/src/main/resources/common-services/FLUME/1.4.0.2.0/configuration/flume-log4j.xml
--
diff --git 
a/ambari-server/src/main/resources/common-services/FLUME/1.4.0.2.0/configuration/flume-log4j.xml
 
b/ambari-server/src/main/resources/common-services/FLUME/1.4.0.2.0/configuration/flume-log4j.xml
new file mode 100644
index 000..fb9d7f8
--- /dev/null
+++ 
b/ambari-server/src/main/resources/common-services/FLUME/1.4.0.2.0/configuration/flume-log4j.xml
@@ -0,0 +1,96 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<configuration>
+  <property>
+    <name>content</name>
+    <display-name>flume-log4j template</display-name>
+    <description>Custom log4j.properties</description>
+    <value>
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+# Define some default values that can be overridden by system properties.
+#
+# For testing, it may also be convenient to specify
+# -Dflume.root.logger=DEBUG,console when launching flume.
+
+#flume.root.logger=DEBUG,console
+flume.root.logger=INFO,LOGFILE
+flume.log.dir={{flume_log_dir}}
+flume.log.file=flume-{{agent_name}}.log
+
+log4j.logger.org.apache.flume.lifecycle = INFO
+log4j.logger.org.jboss = WARN
+log4j.logger.org.mortbay = INFO
+log4j.logger.org.apache.avro.ipc.NettyTransceiver = WARN
+log4j.logger.org.apache.hadoop = INFO
+
+# Define the root logger to the system property "flume.root.logger".
+log4j.rootLogger=${flume.root.logger}
+
+# Stock log4j rolling file appender
+# Default log rotation configuration
+log4j.appender.LOGFILE=org.apache.log4j.RollingFileAppender
+log4j.appender.LOGFILE.MaxFileSize=100MB
+log4j.appender.LOGFILE.MaxBackupIndex=10
+log4j.appender.LOGFILE.File=${flume.log.dir}/${flume.log.file}
+log4j.appender.LOGFILE.layout=org.apache.log4j.PatternLayout
+log4j.appender.LOGFILE.layout.ConversionPattern=%d{dd MMM  HH:mm:ss,SSS} 
%-5p [%t] (%C.%M:%L) %x - %m%n
+
+# Warning: If you enable the following appender it will fill up your disk if 
you don't have a cleanup job!
+# This uses the updated rolling file appender from log4j-extras that supports 
a reliable time-based rolling policy.
+# See 
http://logging.apache.org/log4j/companions/extras/apidocs/org/apache/log4j/rolling/TimeBasedRollingPolicy.html
+# Add "DAILY" to flume.root.logger above if you want to use this
+log4j.appender.DAILY=org.apache.log4j.rolling.RollingFileAppender
+log4j.appender.DAILY.rollingPolicy=org.apache.log4j.rolling.TimeBasedRollingPolicy
+log4j.appender.DAILY.rollingPolicy.ActiveFileName=${flume.log.dir}/${flume.log.file}
+log4j.appender.DAILY.rollingPolicy.FileNamePattern=${flume.log.dir}/${flume.log.file}.%d{-MM-dd}
+log4j.appender.DAILY.layout=org.apache.log4j.PatternLayout
+log4j.appender.DAILY.layout.ConversionPattern=%d{dd MMM  HH:mm:ss,SSS} 
%-5p [%t] (%C.%M:%L) %x - %m%n
+
+# console
+# Add "console" to flume.root.logger above if you want to use this
+log4j.appender.console=org.apache.log4j.ConsoleAppender
+log4j.appender.console.target=System.err
+log4j.appender.console.layout=org.apache.log4j.PatternLayout
+log4j.appender.console.layout.ConversionPattern=%d (%t) [%p - %l] %m%n

+    </value>
+    <value-attributes>
+      <type>content</type>
+      <show-property-name>false</show-property-name>
+    </value-attributes>
+  </property>
+</configuration>
\ No newline at end of file

http://git-wip-us.apache

[2/2] ambari git commit: AMBARI-20853. Service Advisor - Allow Service to define its Advisor Type as Python or Java (alejandro)

2017-06-12 Thread alejandro
AMBARI-20853. Service Advisor - Allow Service to define its Advisor Type as 
Python or Java (alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/f1ca09c0
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/f1ca09c0
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/f1ca09c0

Branch: refs/heads/trunk
Commit: f1ca09c03a2fe316129aa5623c675b62d3177112
Parents: acc12fb
Author: Alejandro Fernandez 
Authored: Mon Jun 5 15:24:49 2017 -0700
Committer: Alejandro Fernandez 
Committed: Mon Jun 12 13:02:59 2017 -0700

--
 ambari-client/groovy-client/pom.xml |   2 +
 ambari-funtest/pom.xml  |   2 +
 ambari-infra/ambari-infra-solr-client/pom.xml   |   2 +
 .../ambari-logsearch-config-api/pom.xml |   2 +-
 .../ambari-logsearch-config-zookeeper/pom.xml   |   2 +-
 ambari-logsearch/ambari-logsearch-it/pom.xml|   4 +-
 .../ambari-logsearch-logfeeder/pom.xml  |   4 +-
 .../ambari-metrics-timelineservice/pom.xml  |   4 +-
 ambari-project/pom.xml  |   4 +-
 ambari-server/checkstyle.xml|   4 +
 ambari-server/pom.xml   |  25 ++-
 .../stackadvisor/StackAdvisorHelper.java|  74 +--
 .../stackadvisor/StackAdvisorRunner.java| 207 ---
 .../ComponentLayoutRecommendationCommand.java   |   5 +-
 .../ComponentLayoutValidationCommand.java   |  11 +-
 ...rationDependenciesRecommendationCommand.java |  11 +-
 .../ConfigurationRecommendationCommand.java |  11 +-
 .../ConfigurationValidationCommand.java |  11 +-
 .../commands/StackAdvisorCommand.java   |  15 +-
 .../ambari/server/stack/ServiceModule.java  |   4 +
 .../apache/ambari/server/state/ServiceInfo.java |  26 ++-
 .../stackadvisor/StackAdvisorHelperTest.java|  63 --
 .../stackadvisor/StackAdvisorRunnerTest.java|  10 +-
 .../ConfigurationRecommendationCommandTest.java |   3 +-
 .../commands/StackAdvisorCommandTest.java   |  47 ++---
 .../ambari/server/stack/ServiceModuleTest.java  |  31 +++
 ambari-views/examples/weather-view/pom.xml  |   2 +-
 contrib/views/hawq/pom.xml  |   2 +-
 contrib/views/hive-next/pom.xml |   4 +-
 contrib/views/hive20/pom.xml|   4 +-
 contrib/views/pig/pom.xml   |   2 +-
 contrib/views/tez/pom.xml   |   3 +-
 contrib/views/wfmanager/pom.xml |   1 +
 pom.xml |  17 +-
 serviceadvisor/pom.xml  | 103 +
 .../ambari/serviceadvisor/ServiceAdvisor.java   | 147 +
 .../ServiceAdvisorCommandType.java  |  63 ++
 37 files changed, 756 insertions(+), 176 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/f1ca09c0/ambari-client/groovy-client/pom.xml
--
diff --git a/ambari-client/groovy-client/pom.xml 
b/ambari-client/groovy-client/pom.xml
index fa89a73..8fafdec 100644
--- a/ambari-client/groovy-client/pom.xml
+++ b/ambari-client/groovy-client/pom.xml
@@ -29,10 +29,12 @@
     <dependency>
       <groupId>org.slf4j</groupId>
       <artifactId>slf4j-api</artifactId>
+      <version>1.7.20</version>
     </dependency>
     <dependency>
       <groupId>org.slf4j</groupId>
       <artifactId>slf4j-log4j12</artifactId>
+      <version>1.7.20</version>
     </dependency>
     <dependency>
       <groupId>org.codehaus.groovy</groupId>

http://git-wip-us.apache.org/repos/asf/ambari/blob/f1ca09c0/ambari-funtest/pom.xml
--
diff --git a/ambari-funtest/pom.xml b/ambari-funtest/pom.xml
index 66678c2..3738106 100644
--- a/ambari-funtest/pom.xml
+++ b/ambari-funtest/pom.xml
@@ -266,10 +266,12 @@
     <dependency>
       <groupId>org.slf4j</groupId>
       <artifactId>slf4j-api</artifactId>
+      <version>1.7.20</version>
     </dependency>
     <dependency>
       <groupId>org.slf4j</groupId>
       <artifactId>slf4j-log4j12</artifactId>
+      <version>1.7.20</version>
     </dependency>
     <dependency>
       <groupId>log4j</groupId>

http://git-wip-us.apache.org/repos/asf/ambari/blob/f1ca09c0/ambari-infra/ambari-infra-solr-client/pom.xml
--
diff --git a/ambari-infra/ambari-infra-solr-client/pom.xml 
b/ambari-infra/ambari-infra-solr-client/pom.xml
index 8cb2248..d103003 100644
--- a/ambari-infra/ambari-infra-solr-client/pom.xml
+++ b/ambari-infra/ambari-infra-solr-client/pom.xml
@@ -60,10 +60,12 @@
     <dependency>
       <groupId>org.slf4j</groupId>
       <artifactId>slf4j-api</artifactId>
+      <version>1.7.20</version>
     </dependency>
     <dependency>
       <groupId>org.slf4j</groupId>
       <artifactId>slf4j-log4j12</artifactId>
+      <version>1.7.20</version>
     </dependency>
     <dependency>
       <groupId>log4j</groupId>

http://git-wip-us.apache.org/repos/asf/ambari/blob/f1ca09c0/ambari-logsearch/ambari-logsearch-config-api/pom.xml
--
diff --git a/ambari-logsearch/ambari-logsearch-config-api/pom.xml 
b/ambari-logsearch/ambari-logsearch-config-api/pom.xml
index 5355906..59286a6 100644
--- a/ambari-logsearch/ambari-logsearch-config-api/pom.xml
+++ b

[1/2] ambari git commit: AMBARI-20853. Service Advisor - Allow Service to define its Advisor Type as Python or Java (alejandro)

2017-06-12 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/trunk acc12fb72 -> f1ca09c03


http://git-wip-us.apache.org/repos/asf/ambari/blob/f1ca09c0/serviceadvisor/pom.xml
--
diff --git a/serviceadvisor/pom.xml b/serviceadvisor/pom.xml
new file mode 100644
index 000..ecf6d8b
--- /dev/null
+++ b/serviceadvisor/pom.xml
@@ -0,0 +1,103 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+
+  <groupId>serviceadvisor</groupId>
+  <artifactId>serviceadvisor</artifactId>
+  <name>Service Advisor</name>
+  <version>1.0.0.0-SNAPSHOT</version>
+  <description>Service Advisor</description>
+
+  <dependencies>
+    <dependency>
+      <groupId>org.apache.maven.plugins</groupId>
+      <artifactId>maven-assembly-plugin</artifactId>
+      <version>2.6</version>
+    </dependency>
+    <dependency>
+      <groupId>commons-logging</groupId>
+      <artifactId>commons-logging</artifactId>
+      <version>1.2</version>
+    </dependency>
+    <dependency>
+      <groupId>org.slf4j</groupId>
+      <artifactId>slf4j-api</artifactId>
+      <version>1.7.20</version>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.commons</groupId>
+      <artifactId>commons-lang3</artifactId>
+      <version>3.0</version>
+    </dependency>
+    <dependency>
+      <groupId>commons-lang</groupId>
+      <artifactId>commons-lang</artifactId>
+      <version>2.4</version>
+    </dependency>
+  </dependencies>
+
+  <repositories>
+    <repository>
+      <id>oss.sonatype.org</id>
+      <name>OSS Sonatype Staging</name>
+      <url>https://oss.sonatype.org/content/groups/staging</url>
+    </repository>
+  </repositories>
+
+  <packaging>jar</packaging>
+
+  <build>
+    <plugins>
+      <plugin>
+        <artifactId>maven-compiler-plugin</artifactId>
+        <version>3.2</version>
+        <configuration>
+          <source>1.7</source>
+          <target>1.7</target>
+        </configuration>
+      </plugin>
+    </plugins>
+  </build>
+</project>

http://git-wip-us.apache.org/repos/asf/ambari/blob/f1ca09c0/serviceadvisor/src/main/java/org/apache/ambari/serviceadvisor/ServiceAdvisor.java
--
diff --git 
a/serviceadvisor/src/main/java/org/apache/ambari/serviceadvisor/ServiceAdvisor.java
 
b/serviceadvisor/src/main/java/org/apache/ambari/serviceadvisor/ServiceAdvisor.java
new file mode 100644
index 000..77c482a
--- /dev/null
+++ 
b/serviceadvisor/src/main/java/org/apache/ambari/serviceadvisor/ServiceAdvisor.java
@@ -0,0 +1,147 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.serviceadvisor;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.commons.lang3.EnumUtils;
+import org.apache.commons.lang.StringUtils;
+
+/**
+ * Class that can be called either through its jar or using its run method.
+ * The goal is to invoke a Service Advisor.
+ * Right now, it is backward compatible by invoking the python script and does 
not know which service is affected.
+ */
+public class ServiceAdvisor {
+  protected static Log LOG = LogFactory.getLog(ServiceAdvisor.class);
+
+  private static String USAGE = "Usage: java -jar serviceadvisor.jar [ACTION] 
[HOSTS_FILE.json] [SERVICES_FILE.json] [OUTPUT.txt] [ERRORS.txt]";
+  private static String PYTHON_STACK_ADVISOR_SCRIPT = 
"/var/lib/ambari-server/resources/scripts/stack_advisor.py";
+
+  /**
+   * Entry point for calling this class through its jar.
+   * @param args
+   */
+  public static void main(String[] args) {
+if (args.length != 5) {
+  System.err.println(String.format("Wrong number of arguments. %s", 
USAGE));
+  System.exit(1);
+}
+
+String action = args[0];
+String hostsFile = args[1];
+String servicesFile = args[2];
+String outputFile = args[3];
+String errorFile = args[4];
+
+int exitCode = run(action, hostsFile, servicesFile, outputFile, errorFile);
+System.exit(exitCode);
+  }
+
+  public static int run(String action, String hostsFile, String servicesFile, 
String outputFile, String errorFile) {
+LOG.info(String.format("ServiceAdvisor. Received arguments. Action: %s, 
Hosts File: %s, Services File: %s", action, hostsFile, servicesFile));
+int returnCode = -1;
+
+try {
+  ServiceAdvisorCommandType commandType = 
ServiceAdvisorCommandType.getEnum(action);
+
+  // TODO, load each Service's Service Advisor at Start Time and call it 
instead of Python command below.
+
+  ProcessBuilder builder = preparePythonShellCommand(commandType, 
hostsFile, servicesFile, outputFile, errorFile);
+  returnCode = launchProcess(builder);
+} catch (IllegalArgumentException e) {
+  List values = 
EnumUtils.getEnumList(ServiceAdvis
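
Per the USAGE string printed by main() above, the jar takes five positional arguments. A hedged invocation example (the ACTION value here is illustrative; valid values come from ServiceAdvisorCommandType):

  java -jar serviceadvisor.jar recommend-configurations hosts.json services.json output.txt errors.txt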

ambari git commit: AMBARI-21137. Blueprint export should allow tokenized values in SingleHostUpdater (Amruta Borkar via alejandro)

2017-06-08 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/trunk 9c302dcd2 -> b98f07f90


AMBARI-21137. Blueprint export should allow tokenized values in 
SingleHostUpdater (Amruta Borkar via alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/b98f07f9
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/b98f07f9
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/b98f07f9

Branch: refs/heads/trunk
Commit: b98f07f9093a0b9635443f317e96768b2d8b8ef7
Parents: 9c302dc
Author: Alejandro Fernandez 
Authored: Thu Jun 8 10:33:06 2017 -0700
Committer: Alejandro Fernandez 
Committed: Thu Jun 8 10:33:06 2017 -0700

--
 .../BlueprintConfigurationProcessor.java | 19 ++-
 .../BlueprintConfigurationProcessorTest.java |  3 +++
 2 files changed, 21 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/b98f07f9/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/BlueprintConfigurationProcessor.java
--
diff --git 
a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/BlueprintConfigurationProcessor.java
 
b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/BlueprintConfigurationProcessor.java
index 508bf15..7ebefdd 100644
--- 
a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/BlueprintConfigurationProcessor.java
+++ 
b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/BlueprintConfigurationProcessor.java
@@ -139,6 +139,11 @@ public class BlueprintConfigurationProcessor {
   private static Pattern LOCALHOST_PORT_REGEX = 
Pattern.compile("localhost:?(\\d+)?");
 
   /**
+   * Compiled regex for placeholder
+   */
+  private static final Pattern PLACEHOLDER = Pattern.compile("\\{\\{.*\\}\\}");
+
+  /**
* Special network address
*/
   private static String BIND_ALL_IP_ADDRESS = "0.0.0.0";
@@ -1133,7 +1138,8 @@ public class BlueprintConfigurationProcessor {
   if (! matchedHost &&
   ! isNameServiceProperty(propertyName) &&
   ! isSpecialNetworkAddress(propValue)  &&
-  ! isUndefinedAddress(propValue)) {
+  ! isUndefinedAddress(propValue) &&
+  ! isPlaceholder(propValue)) {
 
 configuration.removeProperty(type, propertyName);
   }
@@ -1143,6 +1149,17 @@ public class BlueprintConfigurationProcessor {
   }
 
   /**
+   * Determine if a property is a placeholder
+   *
+   * @param propertyValue  property value
+   *
+   * @return true if the property has format "{{%s}}"
+   */
+  private static boolean isPlaceholder(String propertyValue) {
+return PLACEHOLDER.matcher(propertyValue).find();
+  }
+
+  /**
* Determines if a given property name's value can include
*   nameservice references instead of host names.
*

http://git-wip-us.apache.org/repos/asf/ambari/blob/b98f07f9/ambari-server/src/test/java/org/apache/ambari/server/controller/internal/BlueprintConfigurationProcessorTest.java
--
diff --git 
a/ambari-server/src/test/java/org/apache/ambari/server/controller/internal/BlueprintConfigurationProcessorTest.java
 
b/ambari-server/src/test/java/org/apache/ambari/server/controller/internal/BlueprintConfigurationProcessorTest.java
index 24fc3c7..ca579ea 100644
--- 
a/ambari-server/src/test/java/org/apache/ambari/server/controller/internal/BlueprintConfigurationProcessorTest.java
+++ 
b/ambari-server/src/test/java/org/apache/ambari/server/controller/internal/BlueprintConfigurationProcessorTest.java
@@ -426,6 +426,7 @@ public class BlueprintConfigurationProcessorTest extends 
EasyMockSupport {
 Map> group2Properties = new HashMap<>();
 Map group2YarnSiteProps = new HashMap<>();
 group2YarnSiteProps.put("yarn.resourcemanager.resource-tracker.address", 
"testhost");
+group2YarnSiteProps.put("yarn.resourcemanager.webapp.https.address", 
"{{rm_host}}");
 group2Properties.put("yarn-site", group2YarnSiteProps);
 // host group config -> BP config -> cluster scoped config
 Configuration group2BPConfiguration = new 
Configuration(Collections.>emptyMap(),
@@ -449,6 +450,8 @@ public class BlueprintConfigurationProcessorTest extends 
EasyMockSupport {
 assertEquals("%HOSTGROUP::group1%", 
properties.get("yarn-site").get("yarn.resourcemanager.hostname"));
 assertEquals("%HOSTGROUP::group1%",
   group2Configuration.getPropertyValue("yarn-site", 
"yarn.resourcemanager.resource-tracker.address"));
+assertNotNull("Placeholder property should not have been removed.",
+  group2Configuration.getPropertyValue("yarn-site", 
"yarn.resourcemanager.webapp.https.address"));
   }
 
   @Test



ambari git commit: AMBARI-21096. ADDENDUM. Provide additional logging for config audit log (alejandro)

2017-06-07 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/branch-2.4 6afeb8ec2 -> 8ff4d12d3


AMBARI-21096. ADDENDUM. Provide additional logging for config audit log 
(alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/8ff4d12d
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/8ff4d12d
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/8ff4d12d

Branch: refs/heads/branch-2.4
Commit: 8ff4d12d3ce30e7fed9c16c6f7e6d600fb956c25
Parents: 6afeb8e
Author: Alejandro Fernandez 
Authored: Wed Jun 7 15:29:02 2017 -0700
Committer: Alejandro Fernandez 
Committed: Wed Jun 7 15:29:02 2017 -0700

--
 .../server/controller/AmbariManagementControllerImpl.java   | 5 -
 .../server/controller/internal/ConfigGroupResourceProvider.java | 3 ++-
 .../server/controller/AmbariManagementControllerTest.java   | 2 +-
 3 files changed, 7 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/8ff4d12d/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
--
diff --git 
a/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
 
b/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
index 12a7eef..f4ad7f7 100644
--- 
a/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
+++ 
b/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
@@ -1751,7 +1751,10 @@ public class AmbariManagementControllerImpl implements 
AmbariManagementControlle
 }
   }
   note = cr.getServiceConfigVersionNote();
-  configs.add(cluster.getConfig(configType, cr.getVersionTag()));
+  Config config = cluster.getConfig(configType, cr.getVersionTag());
+  if (null != config) {
+configs.add(config);
+  }
 }
 if (!configs.isEmpty()) {
   Map existingConfigTypeToConfig = new HashMap();

http://git-wip-us.apache.org/repos/asf/ambari/blob/8ff4d12d/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ConfigGroupResourceProvider.java
--
diff --git 
a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ConfigGroupResourceProvider.java
 
b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ConfigGroupResourceProvider.java
index 7f07cb0..f5cd1fa 100644
--- 
a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ConfigGroupResourceProvider.java
+++ 
b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ConfigGroupResourceProvider.java
@@ -661,10 +661,11 @@ public class ConfigGroupResourceProvider extends
 serviceName = requestServiceName;
   }
 
+  int numHosts = (null != configGroup.getHosts()) ? 
configGroup.getHosts().size() : 0;
   configLogger.info("(configchange) Updating configuration group host 
membership or config value. cluster: '{}', changed by: '{}', " +
   "service_name: '{}', config group: '{}', tag: '{}', num hosts in 
config group: '{}', note: '{}'",
   cluster.getClusterName(), getManagementController().getAuthName(),
-  serviceName, request.getGroupName(), request.getTag(), 
configGroup.getHosts().size(), request.getServiceConfigVersionNote());
+  serviceName, request.getGroupName(), request.getTag(), numHosts, 
request.getServiceConfigVersionNote());
 
   if (!request.getConfigs().isEmpty()) {
 List affectedConfigTypeList = new 
ArrayList(request.getConfigs().keySet());

http://git-wip-us.apache.org/repos/asf/ambari/blob/8ff4d12d/ambari-server/src/test/java/org/apache/ambari/server/controller/AmbariManagementControllerTest.java
--
diff --git 
a/ambari-server/src/test/java/org/apache/ambari/server/controller/AmbariManagementControllerTest.java
 
b/ambari-server/src/test/java/org/apache/ambari/server/controller/AmbariManagementControllerTest.java
index 1ea9407..8607068 100644
--- 
a/ambari-server/src/test/java/org/apache/ambari/server/controller/AmbariManagementControllerTest.java
+++ 
b/ambari-server/src/test/java/org/apache/ambari/server/controller/AmbariManagementControllerTest.java
@@ -7417,7 +7417,7 @@ public class AmbariManagementControllerTest {
 Assert.assertEquals(1, responsesWithParams.size());
 StackVersionResponse resp = responsesWithParams.iterator().next();
 assertNotNull(resp.g

ambari git commit: AMBARI-21096. ADDENDUM. Provide additional logging for config audit log (alejandro)

2017-06-07 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/trunk e801b4199 -> bdadb704b


AMBARI-21096. ADDENDUM. Provide additional logging for config audit log 
(alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/bdadb704
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/bdadb704
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/bdadb704

Branch: refs/heads/trunk
Commit: bdadb704b8982915575ea65d3c42447abcb75d6e
Parents: e801b41
Author: Alejandro Fernandez 
Authored: Wed Jun 7 11:28:46 2017 -0700
Committer: Alejandro Fernandez 
Committed: Wed Jun 7 15:15:38 2017 -0700

--
 .../server/controller/AmbariManagementControllerImpl.java   | 5 -
 .../server/controller/internal/ConfigGroupResourceProvider.java | 3 ++-
 2 files changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/bdadb704/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
--
diff --git 
a/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
 
b/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
index b67b45b..1eeb82b 100644
--- 
a/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
+++ 
b/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
@@ -1777,7 +1777,10 @@ public class AmbariManagementControllerImpl implements 
AmbariManagementControlle
 }
   }
   note = cr.getServiceConfigVersionNote();
-  configs.add(cluster.getConfig(configType, cr.getVersionTag()));
+  Config config = cluster.getConfig(configType, cr.getVersionTag());
+  if (null != config) {
+configs.add(config);
+  }
 }
 if (!configs.isEmpty()) {
   Map<String, Config> existingConfigTypeToConfig = new HashMap<>();

http://git-wip-us.apache.org/repos/asf/ambari/blob/bdadb704/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ConfigGroupResourceProvider.java
--
diff --git 
a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ConfigGroupResourceProvider.java
 
b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ConfigGroupResourceProvider.java
index d2b4a84..cc23177 100644
--- 
a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ConfigGroupResourceProvider.java
+++ 
b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ConfigGroupResourceProvider.java
@@ -657,10 +657,11 @@ public class ConfigGroupResourceProvider extends
 serviceName = requestServiceName;
   }
 
+  int numHosts = (null != configGroup.getHosts()) ? 
configGroup.getHosts().size() : 0;
   configLogger.info("(configchange) Updating configuration group host 
membership or config value. cluster: '{}', changed by: '{}', " +
   "service_name: '{}', config group: '{}', tag: '{}', num hosts in 
config group: '{}', note: '{}'",
   cluster.getClusterName(), getManagementController().getAuthName(),
-  serviceName, request.getGroupName(), request.getTag(), 
configGroup.getHosts().size(), request.getServiceConfigVersionNote());
+  serviceName, request.getGroupName(), request.getTag(), numHosts, 
request.getServiceConfigVersionNote());
 
   if (!request.getConfigs().isEmpty()) {
  List<String> affectedConfigTypeList = new ArrayList<>(request.getConfigs().keySet());


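Both hunks in this ADDENDUM apply the same defensive pattern: fetch the object, test for null, and degrade gracefully (skip the missing config, or log zero hosts) rather than letting cluster.getConfig() or configGroup.getHosts() trigger a NullPointerException. A minimal standalone sketch of that pattern, using hypothetical stand-ins shaped like the Cluster, Config, and ConfigGroup types in the diff:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class NullGuardSketch {

  // Hypothetical stand-ins shaped like the Ambari types in the diff above.
  interface Config { Map<String, String> getProperties(); }
  interface Cluster { Config getConfig(String type, String tag); }
  interface ConfigGroup { Map<Long, String> getHosts(); }

  // Mirrors the first hunk: only collect configs that actually resolve.
  static List<Config> collectConfigs(Cluster cluster, Map<String, String> typeToTag) {
    List<Config> configs = new ArrayList<>();
    for (Map.Entry<String, String> e : typeToTag.entrySet()) {
      Config config = cluster.getConfig(e.getKey(), e.getValue());
      if (null != config) {
        configs.add(config);
      }
    }
    return configs;
  }

  // Mirrors the second hunk: size() on a possibly-null host map, NPE-free.
  static int numHosts(ConfigGroup group) {
    return (null != group.getHosts()) ? group.getHosts().size() : 0;
  }

  public static void main(String[] args) {
    Cluster empty = (type, tag) -> null;   // no config registered for any tag
    System.out.println(collectConfigs(empty, Map.of("storm-env", "v1")).size()); // 0
    System.out.println(numHosts(() -> null));                                    // 0
  }
}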

ambari git commit: AMBARI-21096. ADDENDUM. Provide additional logging for config audit log (alejandro)

2017-06-07 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/branch-2.5 4e97069ee -> 87e993e87


AMBARI-21096. ADDENDUM. Provide additional logging for config audit log 
(alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/87e993e8
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/87e993e8
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/87e993e8

Branch: refs/heads/branch-2.5
Commit: 87e993e8722d568c0afc02f6b41eb27d2a118854
Parents: 4e97069
Author: Alejandro Fernandez 
Authored: Wed Jun 7 11:35:05 2017 -0700
Committer: Alejandro Fernandez 
Committed: Wed Jun 7 15:14:44 2017 -0700

--
 .../server/controller/AmbariManagementControllerImpl.java   | 5 -
 .../server/controller/internal/ConfigGroupResourceProvider.java | 3 ++-
 2 files changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/87e993e8/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
--
diff --git 
a/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
 
b/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
index f454455..4cb72c2 100644
--- 
a/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
+++ 
b/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
@@ -1791,7 +1791,10 @@ public class AmbariManagementControllerImpl implements 
AmbariManagementControlle
 }
   }
   note = cr.getServiceConfigVersionNote();
-  configs.add(cluster.getConfig(configType, cr.getVersionTag()));
+  Config config = cluster.getConfig(configType, cr.getVersionTag());
+  if (null != config) {
+configs.add(config);
+  }
 }
 if (!configs.isEmpty()) {
   Map<String, Config> existingConfigTypeToConfig = new HashMap<>();

http://git-wip-us.apache.org/repos/asf/ambari/blob/87e993e8/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ConfigGroupResourceProvider.java
--
diff --git 
a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ConfigGroupResourceProvider.java
 
b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ConfigGroupResourceProvider.java
index 5c4fea2..200cf27 100644
--- 
a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ConfigGroupResourceProvider.java
+++ 
b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ConfigGroupResourceProvider.java
@@ -659,10 +659,11 @@ public class ConfigGroupResourceProvider extends
 serviceName = requestServiceName;
   }
 
+  int numHosts = (null != configGroup.getHosts()) ? 
configGroup.getHosts().size() : 0;
   configLogger.info("(configchange) Updating configuration group host 
membership or config value. cluster: '{}', changed by: '{}', " +
   "service_name: '{}', config group: '{}', tag: '{}', num hosts in 
config group: '{}', note: '{}'",
   cluster.getClusterName(), getManagementController().getAuthName(),
-  serviceName, request.getGroupName(), request.getTag(), 
configGroup.getHosts().size(), request.getServiceConfigVersionNote());
+  serviceName, request.getGroupName(), request.getTag(), numHosts, 
request.getServiceConfigVersionNote());
 
   if (!request.getConfigs().isEmpty()) {
  List<String> affectedConfigTypeList = new ArrayList<>(request.getConfigs().keySet());



ambari git commit: AMBARI-21129. Nimbus fails to start when Ambari is upgraded to 2.5.1, EU to HDP 2.6.1, and cluster is then Kerberized (alejandro)

2017-06-02 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/branch-2.5 fab367491 -> edcc6c6e8


AMBARI-21129. Nimbus fails to start when Ambari is upgraded to 2.5.1, EU to HDP 
2.6.1, and cluster is then Kerberized (alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/edcc6c6e
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/edcc6c6e
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/edcc6c6e

Branch: refs/heads/branch-2.5
Commit: edcc6c6e89407c51d9be6d67640de38722657d0b
Parents: fab3674
Author: Alejandro Fernandez 
Authored: Fri Jun 2 15:44:34 2017 -0700
Committer: Alejandro Fernandez 
Committed: Fri Jun 2 15:44:34 2017 -0700

--
 .../server/upgrade/UpgradeCatalog251.java   | 50 
 .../server/upgrade/UpgradeCatalog251Test.java   |  6 +++
 2 files changed, 56 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/edcc6c6e/ambari-server/src/main/java/org/apache/ambari/server/upgrade/UpgradeCatalog251.java
--
diff --git 
a/ambari-server/src/main/java/org/apache/ambari/server/upgrade/UpgradeCatalog251.java
 
b/ambari-server/src/main/java/org/apache/ambari/server/upgrade/UpgradeCatalog251.java
index 40fafb2..f4f69f7 100644
--- 
a/ambari-server/src/main/java/org/apache/ambari/server/upgrade/UpgradeCatalog251.java
+++ 
b/ambari-server/src/main/java/org/apache/ambari/server/upgrade/UpgradeCatalog251.java
@@ -18,12 +18,20 @@
 package org.apache.ambari.server.upgrade;
 
 import java.sql.SQLException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Set;
 
 import org.apache.ambari.server.AmbariException;
+import org.apache.ambari.server.controller.AmbariManagementController;
 import org.apache.ambari.server.orm.DBAccessor.DBColumnInfo;
 
 import com.google.inject.Inject;
 import com.google.inject.Injector;
+import org.apache.ambari.server.state.Cluster;
+import org.apache.ambari.server.state.Clusters;
+import org.apache.ambari.server.state.Config;
+import org.apache.ambari.server.state.SecurityType;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -39,6 +47,7 @@ public class UpgradeCatalog251 extends AbstractUpgradeCatalog 
{
   private static final String REQUEST_TABLE = "request";
   private static final String CLUSTER_HOST_INFO_COLUMN = "cluster_host_info";
   private static final String REQUEST_ID_COLUMN = "request_id";
+  protected static final String STORM_ENV_CONFIG = "storm-env";
 
 
   /**
@@ -94,6 +103,7 @@ public class UpgradeCatalog251 extends 
AbstractUpgradeCatalog {
   @Override
   protected void executeDMLUpdates() throws AmbariException, SQLException {
 addNewConfigurationsFromXml();
+updateSTORMConfigs();
   }
 
   /**
@@ -121,4 +131,44 @@ public class UpgradeCatalog251 extends 
AbstractUpgradeCatalog {
 dbAccessor.moveColumnToAnotherTable(STAGE_TABLE, sourceColumn, 
REQUEST_ID_COLUMN, REQUEST_TABLE, targetColumn,
   REQUEST_ID_COLUMN, false);
   }
+
+  /**
+   * Make sure storm-env changes are applied to anyone upgrading to HDP-2.6.1 Storm.
+   * If the base version was before Ambari 2.5.0, this method should wind up 
doing nothing.
+   * @throws AmbariException
+   */
+  protected void updateSTORMConfigs() throws AmbariException {
+AmbariManagementController ambariManagementController = 
injector.getInstance(AmbariManagementController.class);
+Clusters clusters = ambariManagementController.getClusters();
+if (clusters != null) {
+  Map<String, Cluster> clusterMap = getCheckedClusterMap(clusters);
+  if (clusterMap != null && !clusterMap.isEmpty()) {
+for (final Cluster cluster : clusterMap.values()) {
+  Set<String> installedServices = cluster.getServices().keySet();
+
+  // Technically, this should be added when the cluster is Kerberized 
on HDP 2.6.1, but is safe to add even
+  // without security or on an older stack version (such as HDP 2.5)
+  // The problem is that Kerberizing a cluster does not invoke Stack 
Advisor and has no easy way of setting
+  // these configs, so instead, add them as long as Storm is present.
+  if (installedServices.contains("STORM")) {
+Config stormEnv = cluster.getDesiredConfigByType(STORM_ENV_CONFIG);
+String content = stormEnv.getProperties().get("content");
+if (content != null && 
!content.contains("STORM_AUTOCREDS_LIB_DIR")) {
+  Map<String, String> newProperties = new HashMap<>();
+  String stormEnvConfigs = "\n# set storm-auto creds\n" +
+  "# check if storm_jaas.conf in config, only enable 
storm_auto_creds in secure mode.\n" +
+  "STORM

ambari git commit: AMBARI-21129. Nimbus fails to start when Ambari is upgraded to 2.5.1, EU to HDP 2.6.1, and cluster is then Kerberized (alejandro)

2017-06-02 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/trunk 57f4461bd -> 3f9088524


AMBARI-21129. Nimbus fails to start when Ambari is upgraded to 2.5.1, EU to HDP 
2.6.1, and cluster is then Kerberized (alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/3f908852
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/3f908852
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/3f908852

Branch: refs/heads/trunk
Commit: 3f9088524e276671fb0ea8252ce7c1f7ea149cda
Parents: 57f4461
Author: Alejandro Fernandez 
Authored: Fri Jun 2 15:33:09 2017 -0700
Committer: Alejandro Fernandez 
Committed: Fri Jun 2 15:33:09 2017 -0700

--
 .../server/upgrade/UpgradeCatalog251.java   | 45 +++-
 .../internal/UpgradeResourceProviderTest.java   |  1 -
 .../server/upgrade/UpgradeCatalog251Test.java   | 13 --
 3 files changed, 53 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/3f908852/ambari-server/src/main/java/org/apache/ambari/server/upgrade/UpgradeCatalog251.java
--
diff --git 
a/ambari-server/src/main/java/org/apache/ambari/server/upgrade/UpgradeCatalog251.java
 
b/ambari-server/src/main/java/org/apache/ambari/server/upgrade/UpgradeCatalog251.java
index 9255daf..119d9ce 100644
--- 
a/ambari-server/src/main/java/org/apache/ambari/server/upgrade/UpgradeCatalog251.java
+++ 
b/ambari-server/src/main/java/org/apache/ambari/server/upgrade/UpgradeCatalog251.java
@@ -1,4 +1,4 @@
-/*
+/**
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
  * distributed with this work for additional information
@@ -19,6 +19,7 @@ package org.apache.ambari.server.upgrade;
 
 import java.sql.SQLException;
 import java.util.Collections;
+import java.util.HashMap;
 import java.util.Map;
 import java.util.Set;
 
@@ -50,6 +51,7 @@ public class UpgradeCatalog251 extends AbstractUpgradeCatalog 
{
   private static final String REQUEST_TABLE = "request";
   private static final String CLUSTER_HOST_INFO_COLUMN = "cluster_host_info";
   private static final String REQUEST_ID_COLUMN = "request_id";
+  protected static final String STORM_ENV_CONFIG = "storm-env";
 
 
   /**
@@ -106,6 +108,7 @@ public class UpgradeCatalog251 extends 
AbstractUpgradeCatalog {
   protected void executeDMLUpdates() throws AmbariException, SQLException {
 addNewConfigurationsFromXml();
 updateKAFKAConfigs();
+updateSTORMConfigs();
   }
 
   /**
@@ -166,4 +169,44 @@ public class UpgradeCatalog251 extends 
AbstractUpgradeCatalog {
 dbAccessor.moveColumnToAnotherTable(STAGE_TABLE, sourceColumn, 
REQUEST_ID_COLUMN, REQUEST_TABLE, targetColumn,
   REQUEST_ID_COLUMN, false);
   }
+
+  /**
+   * Make sure storm-env changes are applied to anyone upgrading to HDP-2.6.1 Storm.
+   * If the base version was before Ambari 2.5.0, this method should wind up 
doing nothing.
+   * @throws AmbariException
+   */
+  protected void updateSTORMConfigs() throws AmbariException {
+AmbariManagementController ambariManagementController = 
injector.getInstance(AmbariManagementController.class);
+Clusters clusters = ambariManagementController.getClusters();
+if (clusters != null) {
+  Map<String, Cluster> clusterMap = getCheckedClusterMap(clusters);
+  if (clusterMap != null && !clusterMap.isEmpty()) {
+for (final Cluster cluster : clusterMap.values()) {
+  Set<String> installedServices = cluster.getServices().keySet();
+
+  // Technically, this should be added when the cluster is Kerberized 
on HDP 2.6.1, but is safe to add even
+  // without security or on an older stack version (such as HDP 2.5)
+  // The problem is that Kerberizing a cluster does not invoke Stack 
Advisor and has no easy way of setting
+  // these configs, so instead, add them as long as Storm is present.
+  if (installedServices.contains("STORM")) {
+Config stormEnv = cluster.getDesiredConfigByType(STORM_ENV_CONFIG);
+String content = stormEnv.getProperties().get("content");
+if (content != null && 
!content.contains("STORM_AUTOCREDS_LIB_DIR")) {
+  Map<String, String> newProperties = new HashMap<>();
+  String stormEnvConfigs = "\n# set storm-auto creds\n" +
+  "# check if storm_jaas.conf in config, only enable 
storm_auto_creds in secure mode.\n" +
+  "STORM_JAAS_CONF=$STORM_HOME/conf/storm_jaas.conf\n" +
+  
"STORM_AUTOCREDS_LIB_DIR=$STORM_HOME/external/storm-autocreds\n" +
+  "if [ -f $

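The key to updateSTORMConfigs() above is the marker check: the autocreds block is appended to storm-env's content only when STORM_AUTOCREDS_LIB_DIR is absent, which makes the catalog step idempotent across repeated upgrades. A minimal sketch of that append-once idiom (appendOnce is a hypothetical helper, not Ambari API):

public class AppendOnceSketch {

  // Append a fragment exactly once, keyed on a marker string the fragment contains.
  static String appendOnce(String content, String fragment, String marker) {
    if (content == null || content.contains(marker)) {
      return content;  // nothing to update, or the fragment was already applied
    }
    return content + fragment;
  }

  public static void main(String[] args) {
    String fragment = "\n# set storm-auto creds\n"
        + "STORM_AUTOCREDS_LIB_DIR=$STORM_HOME/external/storm-autocreds\n";
    String once  = appendOnce("# storm-env template\n", fragment, "STORM_AUTOCREDS_LIB_DIR");
    String twice = appendOnce(once, fragment, "STORM_AUTOCREDS_LIB_DIR");
    System.out.println(once.equals(twice));  // true: re-running the step is a no-op
  }
}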
ambari git commit: AMBARI-21096. Provide additional logging for config audit log (alejandro)

2017-06-02 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/trunk b7101f782 -> 57f4461bd


AMBARI-21096. Provide additional logging for config audit log (alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/57f4461b
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/57f4461b
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/57f4461b

Branch: refs/heads/trunk
Commit: 57f4461bd002d998796168a00e51113927262ab8
Parents: b7101f7
Author: Alejandro Fernandez 
Authored: Fri Jun 2 14:53:54 2017 -0700
Committer: Alejandro Fernandez 
Committed: Fri Jun 2 14:53:54 2017 -0700

--
 .../AmbariManagementControllerImpl.java | 126 ++-
 .../internal/ConfigGroupResourceProvider.java   |  41 --
 .../internal/HostResourceProvider.java  |   2 +-
 .../server/state/cluster/ClusterImpl.java   |  13 +-
 4 files changed, 155 insertions(+), 27 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/57f4461b/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
--
diff --git 
a/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
 
b/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
index 881ef1a..186a19e 100644
--- 
a/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
+++ 
b/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
@@ -57,6 +57,7 @@ import java.util.EnumMap;
 import java.util.EnumSet;
 import java.util.HashMap;
 import java.util.HashSet;
+import java.util.Iterator;
 import java.util.LinkedHashSet;
 import java.util.LinkedList;
 import java.util.List;
@@ -223,6 +224,7 @@ public class AmbariManagementControllerImpl implements 
AmbariManagementControlle
 
   private final static Logger LOG =
   LoggerFactory.getLogger(AmbariManagementControllerImpl.class);
+  private final static Logger configChangeLog = 
LoggerFactory.getLogger("configchange");
 
   /**
* Property name of request context.
@@ -1520,6 +1522,100 @@ public class AmbariManagementControllerImpl implements 
AmbariManagementControlle
 return response;
   }
 
+  /**
+   * Get a dictionary of all config differences between existingConfig and 
newConfigValues where the key is the config name and the action is one of 
"changed", "added", or "deleted".
+   * @param existingConfig
+   * @param newConfigValues
+   * @return Delta of configs
+   */
+  private Map<String, String> getConfigKeyDeltaToAction(Config existingConfig, Map<String, String> newConfigValues) {
+Map<String, String> configsChanged = new HashMap<>();
+
+if (null != existingConfig) {
+  Map<String, String> existingConfigValues = existingConfig.getProperties();
+
+  Iterator it = existingConfigValues.entrySet().iterator();
+  while (it.hasNext()) {
+Map.Entry pair = (Map.Entry) it.next();
+// Check the value if both keys exist
+if (newConfigValues.containsKey(pair.getKey())) {
+  if (!newConfigValues.get((String) 
pair.getKey()).equals(pair.getValue())) {
+configsChanged.put(pair.getKey(), "changed");
+  }
+} else {
+  // Deleted
+  configsChanged.put(pair.getKey(), "deleted");
+}
+  }
+
+  it = newConfigValues.entrySet().iterator();
+  while (it.hasNext()) {
+Map.Entry pair = (Map.Entry) it.next();
+if (!existingConfigValues.keySet().contains(pair.getKey())) {
+  configsChanged.put(pair.getKey(), "added");
+}
+  }
+} else {
+  // All of the configs in this config type are new.
+  for (String key : newConfigValues.keySet()) {
+configsChanged.put(key, "added");
+  }
+}
+return configsChanged;
+  }
+
+  /**
+   * Invert a HashMap of the form key_i: value_j to value_j: [key_a, ..., key_z]
+   * for all keys that map to that value.
+   * This is useful for printing config deltas.
+   * @param configKeyToAction Original dictionary
+   * @return Inverse of the dictionary.
+   */
+  private Map<String, List<String>> inverseMapByValue(Map<String, String> configKeyToAction) {
+Map<String, List<String>> mapByValue = new HashMap<>();
+
+Iterator it = configKeyToAction.entrySet().iterator();
+while (it.hasNext()) {
+  Map.Entry pair = (Map.Entry) it.next();
+  // Key is the config name, Value is the action (added, deleted, changed)
+  if (mapByValue.containsKey(pair.getValue())) {
+mapByValue.get(pair.getValue()).add(pair.getKey());
+  } else {
+List<String> configListForAction = new ArrayList<>();
+configListForAction.add

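Concretely, the two helpers above turn a pair of config maps into the compact audit line this commit is after: compute a key -> action delta, then invert it to action -> [keys]. A standalone sketch with the Ambari Config type replaced by plain maps (delta and byAction are hypothetical names mirroring getConfigKeyDeltaToAction and inverseMapByValue):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ConfigDeltaSketch {

  // Same contract as getConfigKeyDeltaToAction: key -> "changed" | "added" | "deleted".
  static Map<String, String> delta(Map<String, String> existing, Map<String, String> updated) {
    Map<String, String> actions = new HashMap<>();
    for (Map.Entry<String, String> e : existing.entrySet()) {
      if (!updated.containsKey(e.getKey())) {
        actions.put(e.getKey(), "deleted");
      } else if (!updated.get(e.getKey()).equals(e.getValue())) {
        actions.put(e.getKey(), "changed");
      }
    }
    for (String key : updated.keySet()) {
      if (!existing.containsKey(key)) {
        actions.put(key, "added");
      }
    }
    return actions;
  }

  // Same contract as inverseMapByValue: action -> list of keys, one log line per action.
  static Map<String, List<String>> byAction(Map<String, String> keyToAction) {
    Map<String, List<String>> inverted = new HashMap<>();
    for (Map.Entry<String, String> e : keyToAction.entrySet()) {
      inverted.computeIfAbsent(e.getValue(), a -> new ArrayList<>()).add(e.getKey());
    }
    return inverted;
  }

  public static void main(String[] args) {
    Map<String, String> existing = Map.of("a", "1", "b", "2");
    Map<String, String> updated  = Map.of("a", "1", "b", "3", "c", "4");
    // delta -> {b=changed, c=added}; inverted -> {changed=[b], added=[c]} (order may vary)
    System.out.println(byAction(delta(existing, updated)));
  }
}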
ambari git commit: AMBARI-21096. Provide additional logging for config audit log (alejandro)

2017-06-02 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/branch-2.5 91a7d0efa -> fab367491


AMBARI-21096. Provide additional logging for config audit log (alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/fab36749
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/fab36749
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/fab36749

Branch: refs/heads/branch-2.5
Commit: fab36749131d331cb9e1f01aeac4490309888063
Parents: 91a7d0e
Author: Alejandro Fernandez 
Authored: Fri Jun 2 14:50:54 2017 -0700
Committer: Alejandro Fernandez 
Committed: Fri Jun 2 14:50:54 2017 -0700

--
 .../AmbariManagementControllerImpl.java | 126 ++-
 .../internal/ConfigGroupResourceProvider.java   |  41 --
 .../internal/HostResourceProvider.java  |   2 +-
 .../server/state/cluster/ClusterImpl.java   |  13 +-
 4 files changed, 155 insertions(+), 27 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/fab36749/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
--
diff --git 
a/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
 
b/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
index 01f93b4..eae7343 100644
--- 
a/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
+++ 
b/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
@@ -57,6 +57,7 @@ import java.util.EnumMap;
 import java.util.EnumSet;
 import java.util.HashMap;
 import java.util.HashSet;
+import java.util.Iterator;
 import java.util.LinkedHashSet;
 import java.util.LinkedList;
 import java.util.List;
@@ -227,6 +228,7 @@ public class AmbariManagementControllerImpl implements 
AmbariManagementControlle
 
   private final static Logger LOG =
   LoggerFactory.getLogger(AmbariManagementControllerImpl.class);
+  private final static Logger configChangeLog = 
LoggerFactory.getLogger("configchange");
 
   /**
* Property name of request context.
@@ -1534,6 +1536,100 @@ public class AmbariManagementControllerImpl implements 
AmbariManagementControlle
 return response;
   }
 
+  /**
+   * Get a dictionary of all config differences between existingConfig and 
newConfigValues where the key is the config name and the action is one of 
"changed", "added", or "deleted".
+   * @param existingConfig
+   * @param newConfigValues
+   * @return Delta of configs
+   */
+  private Map<String, String> getConfigKeyDeltaToAction(Config existingConfig, Map<String, String> newConfigValues) {
+Map<String, String> configsChanged = new HashMap<>();
+
+if (null != existingConfig) {
+  Map<String, String> existingConfigValues = existingConfig.getProperties();
+
+  Iterator it = existingConfigValues.entrySet().iterator();
+  while (it.hasNext()) {
+Map.Entry pair = (Map.Entry) it.next();
+// Check the value if both keys exist
+if (newConfigValues.containsKey(pair.getKey())) {
+  if (!newConfigValues.get((String) 
pair.getKey()).equals(pair.getValue())) {
+configsChanged.put(pair.getKey(), "changed");
+  }
+} else {
+  // Deleted
+  configsChanged.put(pair.getKey(), "deleted");
+}
+  }
+
+  it = newConfigValues.entrySet().iterator();
+  while (it.hasNext()) {
+Map.Entry pair = (Map.Entry) it.next();
+if (!existingConfigValues.keySet().contains(pair.getKey())) {
+  configsChanged.put(pair.getKey(), "added");
+}
+  }
+} else {
+  // All of the configs in this config type are new.
+  for (String key : newConfigValues.keySet()) {
+configsChanged.put(key, "added");
+  }
+}
+return configsChanged;
+  }
+
+  /**
+   * Invert a HashMap of the form key_i: value_j to value_j: [key_a, ..., key_z]
+   * for all keys that map to that value.
+   * This is useful for printing config deltas.
+   * @param configKeyToAction Original dictionary
+   * @return Inverse of the dictionary.
+   */
+  private Map<String, List<String>> inverseMapByValue(Map<String, String> configKeyToAction) {
+Map<String, List<String>> mapByValue = new HashMap<>();
+
+Iterator it = configKeyToAction.entrySet().iterator();
+while (it.hasNext()) {
+  Map.Entry pair = (Map.Entry) it.next();
+  // Key is the config name, Value is the action (added, deleted, changed)
+  if (mapByValue.containsKey(pair.getValue())) {
+mapByValue.get(pair.getValue()).add(pair.getKey());
+  } else {
+List<String> configListForAction = new ArrayList<>();
+ 

ambari git commit: AMBARI-21096. Provide additional logging for config audit log (alejandro)

2017-06-02 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/branch-2.4 5db8b4af5 -> 6afeb8ec2


AMBARI-21096. Provide additional logging for config audit log (alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/6afeb8ec
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/6afeb8ec
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/6afeb8ec

Branch: refs/heads/branch-2.4
Commit: 6afeb8ec2c11bd9e5636f34f7220427adada0f0e
Parents: 5db8b4a
Author: Alejandro Fernandez 
Authored: Fri Jun 2 14:48:43 2017 -0700
Committer: Alejandro Fernandez 
Committed: Fri Jun 2 14:48:43 2017 -0700

--
 .../AmbariManagementControllerImpl.java | 125 ++-
 .../internal/ConfigGroupResourceProvider.java   |  26 +++-
 .../internal/HostResourceProvider.java  |   2 +-
 .../server/state/cluster/ClusterImpl.java   |  13 +-
 4 files changed, 151 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/6afeb8ec/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
--
diff --git 
a/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
 
b/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
index 28914db..12a7eef 100644
--- 
a/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
+++ 
b/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
@@ -214,6 +214,7 @@ public class AmbariManagementControllerImpl implements 
AmbariManagementControlle
 
   private final static Logger LOG =
   LoggerFactory.getLogger(AmbariManagementControllerImpl.class);
+  private final static Logger configChangeLog = 
LoggerFactory.getLogger("configchange");
 
   /**
* Property name of request context.
@@ -1477,6 +1478,100 @@ public class AmbariManagementControllerImpl implements 
AmbariManagementControlle
 return response;
   }
 
+  /**
+   * Get a dictionary of all config differences between existingConfig and 
newConfigValues where the key is the config name and the action is one of 
"changed", "added", or "deleted".
+   * @param existingConfig
+   * @param newConfigValues
+   * @return Delta of configs
+   */
+  private Map<String, String> getConfigKeyDeltaToAction(Config existingConfig, Map<String, String> newConfigValues) {
+Map<String, String> configsChanged = new HashMap<>();
+
+if (null != existingConfig) {
+  Map<String, String> existingConfigValues = existingConfig.getProperties();
+
+  Iterator it = existingConfigValues.entrySet().iterator();
+  while (it.hasNext()) {
+Map.Entry pair = (Map.Entry) it.next();
+// Check the value if both keys exist
+if (newConfigValues.containsKey(pair.getKey())) {
+  if (!newConfigValues.get((String) 
pair.getKey()).equals(pair.getValue())) {
+configsChanged.put(pair.getKey(), "changed");
+  }
+} else {
+  // Deleted
+  configsChanged.put(pair.getKey(), "deleted");
+}
+  }
+
+  it = newConfigValues.entrySet().iterator();
+  while (it.hasNext()) {
+Map.Entry pair = (Map.Entry) it.next();
+if (!existingConfigValues.keySet().contains(pair.getKey())) {
+  configsChanged.put(pair.getKey(), "added");
+}
+  }
+} else {
+  // All of the configs in this config type are new.
+  for (String key : newConfigValues.keySet()) {
+configsChanged.put(key, "added");
+  }
+}
+return configsChanged;
+  }
+
+  /**
+   * Invert a HashMap of the form key_i: value_j to value_j: [key_a, ..., key_z]
+   * for all keys that map to that value.
+   * This is useful for printing config deltas.
+   * @param configKeyToAction Original dictionary
+   * @return Inverse of the dictionary.
+   */
+  private Map<String, List<String>> inverseMapByValue(Map<String, String> configKeyToAction) {
+Map<String, List<String>> mapByValue = new HashMap<>();
+
+Iterator it = configKeyToAction.entrySet().iterator();
+while (it.hasNext()) {
+  Map.Entry pair = (Map.Entry) it.next();
+  // Key is the config name, Value is the action (added, deleted, changed)
+  if (mapByValue.containsKey(pair.getValue())) {
+mapByValue.get(pair.getValue()).add(pair.getKey());
+  } else {
+List<String> configListForAction = new ArrayList<>();
+configListForAction.add(pair.getKey());
+mapByValue.put(pair.getValue(), configListForAction);
+  }
+}
+return mapByValue;
+  }
+
+  /**
+   * Get a string that represents config keys that are changed, added, or 
deleted.
+   * @p

ambari git commit: AMBARI-21118. HDP + HDF cluster cannot save configs for Storm when Streamline is installed due to missing configs (alejandro)

2017-05-25 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/trunk a4482bba2 -> f7c04148c


AMBARI-21118. HDP + HDF cluster cannot save configs for Storm when Streamline 
is installed due to missing configs (alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/f7c04148
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/f7c04148
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/f7c04148

Branch: refs/heads/trunk
Commit: f7c04148c1214277cf39479d9a0306850c034ef9
Parents: a4482bb
Author: Alejandro Fernandez 
Authored: Thu May 25 17:24:05 2017 -0700
Committer: Alejandro Fernandez 
Committed: Thu May 25 17:24:05 2017 -0700

--
 .../STORM/1.1.0/configuration/storm-site.xml| 36 +++-
 .../stacks/HDP/2.6/services/STORM/metainfo.xml  |  6 +++-
 2 files changed, 25 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/f7c04148/ambari-server/src/main/resources/common-services/STORM/1.1.0/configuration/storm-site.xml
--
diff --git 
a/ambari-server/src/main/resources/common-services/STORM/1.1.0/configuration/storm-site.xml
 
b/ambari-server/src/main/resources/common-services/STORM/1.1.0/configuration/storm-site.xml
index 1a5dde9..b2e9acb 100644
--- 
a/ambari-server/src/main/resources/common-services/STORM/1.1.0/configuration/storm-site.xml
+++ 
b/ambari-server/src/main/resources/common-services/STORM/1.1.0/configuration/storm-site.xml
@@ -21,24 +21,28 @@
 -->
 
   
-nimbus.autocredential.plugins.classes
+nimbus.impersonation.acl
 
-  Allows users to add token based authentication for services such as 
HDFS, HBase, Hive
-
-
-  
-  
-nimbus.credential.renewers.freq.secs
-
-  Frequency at which tokens will be renewed.
-
-
-  
-  
-nimbus.credential.renewers.classes
-
-  List of classes for token renewal
+  The ImpersonationAuthorizer uses nimbus.impersonation.acl as the acl to 
authorize users. Following is a sample nimbus config for supporting 
impersonation:
+  nimbus.impersonation.acl:
+  impersonating_user1:
+  hosts:
+  [comma separated list of hosts from which impersonating_user1 is allowed 
to impersonate other users]
+  groups:
+  [comma separated list of groups whose users impersonating_user1 is 
allowed to impersonate]
+  impersonating_user2:
+  hosts:
+  [comma separated list of hosts from which impersonating_user2 is allowed 
to impersonate other users]
+  groups:
+  [comma separated list of groups whose users impersonating_user2 is 
allowed to impersonate]
 
+
+
+  
+streamline-env
+streamline_principal_name
+  
+
 
   
 

http://git-wip-us.apache.org/repos/asf/ambari/blob/f7c04148/ambari-server/src/main/resources/stacks/HDP/2.6/services/STORM/metainfo.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/HDP/2.6/services/STORM/metainfo.xml 
b/ambari-server/src/main/resources/stacks/HDP/2.6/services/STORM/metainfo.xml
index 49e00f7..747d951 100644
--- 
a/ambari-server/src/main/resources/stacks/HDP/2.6/services/STORM/metainfo.xml
+++ 
b/ambari-server/src/main/resources/stacks/HDP/2.6/services/STORM/metainfo.xml
@@ -23,9 +23,13 @@
   STORM
   1.1.0
   common-services/STORM/1.1.0
+
   
-application-properties
+
+streamline-env
+streamline-common
   
+
 
   
 



ambari git commit: AMBARI-21118. HDP + HDF cluster cannot save configs for Storm when Streamline is installed due to missing configs (alejandro)

2017-05-25 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/branch-2.5 e52264823 -> 75de63d35


AMBARI-21118. HDP + HDF cluster cannot save configs for Storm when Streamline 
is installed due to missing configs
 (alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/75de63d3
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/75de63d3
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/75de63d3

Branch: refs/heads/branch-2.5
Commit: 75de63d35134626472eb5ca7adeccc8144712f18
Parents: e522648
Author: Alejandro Fernandez 
Authored: Thu May 25 17:18:56 2017 -0700
Committer: Alejandro Fernandez 
Committed: Thu May 25 17:18:56 2017 -0700

--
 .../STORM/1.1.0/configuration/storm-site.xml| 36 +++-
 .../stacks/HDP/2.6/services/STORM/metainfo.xml  |  6 +++-
 2 files changed, 25 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/75de63d3/ambari-server/src/main/resources/common-services/STORM/1.1.0/configuration/storm-site.xml
--
diff --git 
a/ambari-server/src/main/resources/common-services/STORM/1.1.0/configuration/storm-site.xml
 
b/ambari-server/src/main/resources/common-services/STORM/1.1.0/configuration/storm-site.xml
index 1a5dde9..b2e9acb 100644
--- 
a/ambari-server/src/main/resources/common-services/STORM/1.1.0/configuration/storm-site.xml
+++ 
b/ambari-server/src/main/resources/common-services/STORM/1.1.0/configuration/storm-site.xml
@@ -21,24 +21,28 @@
 -->
 
   
-nimbus.autocredential.plugins.classes
+nimbus.impersonation.acl
 
-  Allows users to add token based authentication for services such as 
HDFS, HBase, Hive
-
-
-  
-  
-nimbus.credential.renewers.freq.secs
-
-  Frequency at which tokens will be renewed.
-
-
-  
-  
-nimbus.credential.renewers.classes
-
-  List of classes for token renewal
+  The ImpersonationAuthorizer uses nimbus.impersonation.acl as the acl to 
authorize users. Following is a sample nimbus config for supporting 
impersonation:
+  nimbus.impersonation.acl:
+  impersonating_user1:
+  hosts:
+  [comma separated list of hosts from which impersonating_user1 is allowed 
to impersonate other users]
+  groups:
+  [comma separated list of groups whose users impersonating_user1 is 
allowed to impersonate]
+  impersonating_user2:
+  hosts:
+  [comma separated list of hosts from which impersonating_user2 is allowed 
to impersonate other users]
+  groups:
+  [comma separated list of groups whose users impersonating_user2 is 
allowed to impersonate]
 
+
+
+  
+streamline-env
+streamline_principal_name
+  
+
 
   
 

http://git-wip-us.apache.org/repos/asf/ambari/blob/75de63d3/ambari-server/src/main/resources/stacks/HDP/2.6/services/STORM/metainfo.xml
--
diff --git 
a/ambari-server/src/main/resources/stacks/HDP/2.6/services/STORM/metainfo.xml 
b/ambari-server/src/main/resources/stacks/HDP/2.6/services/STORM/metainfo.xml
index 49e00f7..747d951 100644
--- 
a/ambari-server/src/main/resources/stacks/HDP/2.6/services/STORM/metainfo.xml
+++ 
b/ambari-server/src/main/resources/stacks/HDP/2.6/services/STORM/metainfo.xml
@@ -23,9 +23,13 @@
   STORM
   1.1.0
   common-services/STORM/1.1.0
+
   
-application-properties
+
+streamline-env
+streamline-common
   
+
 
   
 



ambari git commit: AMBARI-21039. Atlas web UI inaccessible after adding Atlas service on upgraded cluster with Hive because /etc/atlas/conf symlink was created ahead of time (alejandro)

2017-05-18 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/trunk 2e27f6619 -> 9cb87011c


AMBARI-21039. Atlas web UI inaccessible after adding Atlas service on upgraded 
cluster with Hive because /etc/atlas/conf symlink was created ahead of time 
(alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/9cb87011
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/9cb87011
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/9cb87011

Branch: refs/heads/trunk
Commit: 9cb87011c716e6fc8eeec0b2ca57a75fa9c7d2d9
Parents: 2e27f66
Author: Alejandro Fernandez 
Authored: Thu May 18 12:16:33 2017 -0400
Committer: Alejandro Fernandez 
Committed: Thu May 18 12:16:33 2017 -0400

--
 .../libraries/functions/conf_select.py | 13 +
 1 file changed, 9 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/9cb87011/ambari-common/src/main/python/resource_management/libraries/functions/conf_select.py
--
diff --git 
a/ambari-common/src/main/python/resource_management/libraries/functions/conf_select.py
 
b/ambari-common/src/main/python/resource_management/libraries/functions/conf_select.py
index ce00f0c..facf186 100644
--- 
a/ambari-common/src/main/python/resource_management/libraries/functions/conf_select.py
+++ 
b/ambari-common/src/main/python/resource_management/libraries/functions/conf_select.py
@@ -356,11 +356,16 @@ def select(stack_name, package, version, try_create=True, 
ignore_errors=False):
   then the Atlas RPM will not be able to copy its artifacts into 
/etc/atlas/conf directory and therefore
prevent Ambari from copying those unmanaged contents into 
/etc/atlas/$version/0
   '''
-  parent_dir = os.path.dirname(current_dir)
-  if os.path.exists(parent_dir):
-Link(conf_dir, to=current_dir)
+  component_list = default("/localComponents", [])
+  if "ATLAS_SERVER" in component_list or "ATLAS_CLIENT" in 
component_list:
+Logger.info("Atlas is installed on this host.")
+parent_dir = os.path.dirname(current_dir)
+if os.path.exists(parent_dir):
+  Link(conf_dir, to=current_dir)
+else:
+  Logger.info("Will not create symlink from {0} to {1} because 
the destination's parent dir does not exist.".format(conf_dir, current_dir))
   else:
-Logger.info("Will not create symlink from {0} to {1} because 
the destination's parent dir does not exist.".format(conf_dir, current_dir))
+Logger.info("Will not create symlink from {0} to {1} because 
Atlas is not installed on this host.".format(conf_dir, current_dir))
 else:
   # Normal path for other packages
   Link(conf_dir, to=current_dir)

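The fix scopes symlink creation to hosts that actually carry an Atlas component, so the Atlas RPM elsewhere can still populate /etc/atlas/conf itself. The original is agent-side Python; the same control flow as a Java sketch (maybeLink and its messages are illustrative, not Ambari API):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class ConfSymlinkSketch {

  // Link confDir -> currentDir only when an Atlas component is local and the
  // destination's parent directory already exists, mirroring the two guards above.
  static void maybeLink(Path confDir, Path currentDir, List<String> localComponents)
      throws IOException {
    if (!localComponents.contains("ATLAS_SERVER")
        && !localComponents.contains("ATLAS_CLIENT")) {
      System.out.println("Will not create symlink: Atlas is not installed on this host.");
      return;
    }
    Path parent = currentDir.getParent();
    if (parent == null || !Files.exists(parent)) {
      System.out.println("Will not create symlink: parent dir does not exist.");
      return;
    }
    Files.deleteIfExists(confDir);
    Files.createSymbolicLink(confDir, currentDir);
  }

  public static void main(String[] args) throws IOException {
    Path tmp = Files.createTempDirectory("conf-select");
    Path current = Files.createDirectory(tmp.resolve("0"));
    maybeLink(tmp.resolve("conf"), current, List.of("ATLAS_CLIENT"));
    System.out.println(Files.isSymbolicLink(tmp.resolve("conf")));  // true
  }
}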


ambari git commit: AMBARI-21039. Atlas web UI inaccessible after adding Atlas service on upgraded cluster with Hive because /etc/atlas/conf symlink was created ahead of time (alejandro)

2017-05-18 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/branch-2.5 3fd229c7f -> dae5606bf


AMBARI-21039. Atlas web UI inaccessible after adding Atlas service on upgraded 
cluster with Hive because /etc/atlas/conf symlink was created ahead of time 
(alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/dae5606b
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/dae5606b
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/dae5606b

Branch: refs/heads/branch-2.5
Commit: dae5606bf165fe708b9e5f9276968dc110e940fd
Parents: 3fd229c
Author: Alejandro Fernandez 
Authored: Thu May 18 12:14:07 2017 -0400
Committer: Alejandro Fernandez 
Committed: Thu May 18 12:14:07 2017 -0400

--
 .../libraries/functions/conf_select.py | 13 +
 1 file changed, 9 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/dae5606b/ambari-common/src/main/python/resource_management/libraries/functions/conf_select.py
--
diff --git 
a/ambari-common/src/main/python/resource_management/libraries/functions/conf_select.py
 
b/ambari-common/src/main/python/resource_management/libraries/functions/conf_select.py
index ce00f0c..facf186 100644
--- 
a/ambari-common/src/main/python/resource_management/libraries/functions/conf_select.py
+++ 
b/ambari-common/src/main/python/resource_management/libraries/functions/conf_select.py
@@ -356,11 +356,16 @@ def select(stack_name, package, version, try_create=True, 
ignore_errors=False):
   then the Atlas RPM will not be able to copy its artifacts into 
/etc/atlas/conf directory and therefore
prevent Ambari from copying those unmanaged contents into 
/etc/atlas/$version/0
   '''
-  parent_dir = os.path.dirname(current_dir)
-  if os.path.exists(parent_dir):
-Link(conf_dir, to=current_dir)
+  component_list = default("/localComponents", [])
+  if "ATLAS_SERVER" in component_list or "ATLAS_CLIENT" in 
component_list:
+Logger.info("Atlas is installed on this host.")
+parent_dir = os.path.dirname(current_dir)
+if os.path.exists(parent_dir):
+  Link(conf_dir, to=current_dir)
+else:
+  Logger.info("Will not create symlink from {0} to {1} because 
the destination's parent dir does not exist.".format(conf_dir, current_dir))
   else:
-Logger.info("Will not create symlink from {0} to {1} because 
the destination's parent dir does not exist.".format(conf_dir, current_dir))
+Logger.info("Will not create symlink from {0} to {1} because 
Atlas is not installed on this host.".format(conf_dir, current_dir))
 else:
   # Normal path for other packages
   Link(conf_dir, to=current_dir)



ambari git commit: AMBARI-21039. Atlas web UI inaccessible after adding Atlas service on upgraded cluster with Hive because /etc/atlas/conf symlink was created ahead of time (alejandro)

2017-05-18 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/branch-2.4 36fbb6163 -> 5db8b4af5


AMBARI-21039. Atlas web UI inaccessible after adding Atlas service on upgraded 
cluster with Hive because /etc/atlas/conf symlink was created ahead of time 
(alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/5db8b4af
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/5db8b4af
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/5db8b4af

Branch: refs/heads/branch-2.4
Commit: 5db8b4af5731a96be3b54545c1906e1dc36e8728
Parents: 36fbb61
Author: Alejandro Fernandez 
Authored: Thu May 18 12:11:31 2017 -0400
Committer: Alejandro Fernandez 
Committed: Thu May 18 12:11:31 2017 -0400

--
 .../libraries/functions/conf_select.py | 13 +
 1 file changed, 9 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/5db8b4af/ambari-common/src/main/python/resource_management/libraries/functions/conf_select.py
--
diff --git 
a/ambari-common/src/main/python/resource_management/libraries/functions/conf_select.py
 
b/ambari-common/src/main/python/resource_management/libraries/functions/conf_select.py
index 8d54053..ddd6ef3 100644
--- 
a/ambari-common/src/main/python/resource_management/libraries/functions/conf_select.py
+++ 
b/ambari-common/src/main/python/resource_management/libraries/functions/conf_select.py
@@ -356,11 +356,16 @@ def select(stack_name, package, version, try_create=True, 
ignore_errors=False):
   then the Atlas RPM will not be able to copy its artifacts into 
/etc/atlas/conf directory and therefore
prevent Ambari from copying those unmanaged contents into 
/etc/atlas/$version/0
   '''
-  parent_dir = os.path.dirname(current_dir)
-  if os.path.exists(parent_dir):
-Link(conf_dir, to=current_dir)
+  component_list = default("/localComponents", [])
+  if "ATLAS_SERVER" in component_list or "ATLAS_CLIENT" in 
component_list:
+Logger.info("Atlas is installed on this host.")
+parent_dir = os.path.dirname(current_dir)
+if os.path.exists(parent_dir):
+  Link(conf_dir, to=current_dir)
+else:
+  Logger.info("Will not create symlink from {0} to {1} because 
the destination's parent dir does not exist.".format(conf_dir, current_dir))
   else:
-Logger.info("Will not create symlink from {0} to {1} because 
the destination's parent dir does not exist.".format(conf_dir, current_dir))
+Logger.info("Will not create symlink from {0} to {1} because 
Atlas is not installed on this host.".format(conf_dir, current_dir))
 else:
   # Normal path for other packages
   Link(conf_dir, to=current_dir)



ambari git commit: AMBARI-20977. Journalnode should support bulk restart or start or stop in hosts' page (zhangxiaolu via alejandro)

2017-05-11 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/trunk 1d46f23f7 -> 023c819a3


AMBARI-20977. Journalnode should support bulk restart or start or stop in hosts' 
page (zhangxiaolu via alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/023c819a
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/023c819a
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/023c819a

Branch: refs/heads/trunk
Commit: 023c819a3d05c5bd8fbf1933930fbc5ddfda6e16
Parents: 1d46f23
Author: Alejandro Fernandez 
Authored: Thu May 11 15:21:29 2017 -0700
Committer: Alejandro Fernandez 
Committed: Thu May 11 15:21:45 2017 -0700

--
 .../main/resources/common-services/HDFS/3.0.0.3.0/metainfo.xml   | 4 
 1 file changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/023c819a/ambari-server/src/main/resources/common-services/HDFS/3.0.0.3.0/metainfo.xml
--
diff --git 
a/ambari-server/src/main/resources/common-services/HDFS/3.0.0.3.0/metainfo.xml 
b/ambari-server/src/main/resources/common-services/HDFS/3.0.0.3.0/metainfo.xml
index 0606883..3e1a7ae 100644
--- 
a/ambari-server/src/main/resources/common-services/HDFS/3.0.0.3.0/metainfo.xml
+++ 
b/ambari-server/src/main/resources/common-services/HDFS/3.0.0.3.0/metainfo.xml
@@ -199,6 +199,10 @@
 PYTHON
 1200
   
+   
+  JournalNodes
+  NAMENODE
+  
   
 
   hdfs_journalnode



[2/2] ambari git commit: AMBARI-20910. HDP 3.0 TP - Unable to install Spark, cannot find package/scripts dir (alejandro)

2017-05-02 Thread alejandro
AMBARI-20910. HDP 3.0 TP - Unable to install Spark, cannot find package/scripts 
dir (alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/4b588a92
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/4b588a92
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/4b588a92

Branch: refs/heads/trunk
Commit: 4b588a9237a72465f3ca83c207a8d4234d9c4c12
Parents: b3f7d9e
Author: Alejandro Fernandez 
Authored: Mon May 1 19:24:22 2017 -0700
Committer: Alejandro Fernandez 
Committed: Tue May 2 13:51:13 2017 -0700

--
 .../2.2.0/package/scripts/job_history_server.py | 108 
 .../SPARK/2.2.0/package/scripts/livy_server.py  | 151 +++
 .../SPARK/2.2.0/package/scripts/livy_service.py |  48 
 .../SPARK/2.2.0/package/scripts/params.py   | 268 +++
 .../2.2.0/package/scripts/service_check.py  |  62 +
 .../SPARK/2.2.0/package/scripts/setup_livy.py   |  88 ++
 .../SPARK/2.2.0/package/scripts/setup_spark.py  | 116 
 .../SPARK/2.2.0/package/scripts/spark_client.py |  62 +
 .../2.2.0/package/scripts/spark_service.py  | 146 ++
 .../package/scripts/spark_thrift_server.py  |  91 +++
 .../2.2.0/package/scripts/status_params.py  |  45 
 .../SPARK/2.2.0/scripts/job_history_server.py   | 108 
 .../SPARK/2.2.0/scripts/livy_server.py  | 151 ---
 .../SPARK/2.2.0/scripts/livy_service.py |  48 
 .../SPARK/2.2.0/scripts/params.py   | 268 ---
 .../SPARK/2.2.0/scripts/service_check.py|  62 -
 .../SPARK/2.2.0/scripts/setup_livy.py   |  88 --
 .../SPARK/2.2.0/scripts/setup_spark.py  | 116 
 .../SPARK/2.2.0/scripts/spark_client.py |  62 -
 .../SPARK/2.2.0/scripts/spark_service.py| 146 --
 .../SPARK/2.2.0/scripts/spark_thrift_server.py  |  91 ---
 .../SPARK/2.2.0/scripts/status_params.py|  45 
 22 files changed, 1185 insertions(+), 1185 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/4b588a92/ambari-server/src/main/resources/common-services/SPARK/2.2.0/package/scripts/job_history_server.py
--
diff --git 
a/ambari-server/src/main/resources/common-services/SPARK/2.2.0/package/scripts/job_history_server.py
 
b/ambari-server/src/main/resources/common-services/SPARK/2.2.0/package/scripts/job_history_server.py
new file mode 100644
index 000..3937c88
--- /dev/null
+++ 
b/ambari-server/src/main/resources/common-services/SPARK/2.2.0/package/scripts/job_history_server.py
@@ -0,0 +1,108 @@
+#!/usr/bin/python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import sys
+import os
+
+from resource_management.libraries.script.script import Script
+from resource_management.libraries.functions import conf_select, stack_select
+from resource_management.libraries.functions.copy_tarball import copy_to_hdfs
+from resource_management.libraries.functions.check_process_status import 
check_process_status
+from resource_management.libraries.functions.stack_features import 
check_stack_feature
+from resource_management.libraries.functions.constants import StackFeature
+from resource_management.core.logger import Logger
+from resource_management.core import shell
+from setup_spark import *
+from spark_service import spark_service
+
+
+class JobHistoryServer(Script):
+
+  def install(self, env):
+import params
+env.set_params(params)
+
+self.install_packages(env)
+
+  def configure(self, env, upgrade_type=None, config_dir=None):
+import params
+env.set_params(params)
+
+setup_spark(env, 'server', upgrade_type=upgrade_type, action = 'config')
+
+  def start(self, env, upgrade_type=None):
+import params
+env.set_params(params)
+
+self.configure(env)
+spark_service('jobhistoryserver', upgrade_type=upgrade_type, 
action='start')
+
+  def stop(self, env, u

[1/2] ambari git commit: AMBARI-20910. HDP 3.0 TP - Unable to install Spark, cannot find package/scripts dir (alejandro)

2017-05-02 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/trunk b3f7d9e42 -> 4b588a923


http://git-wip-us.apache.org/repos/asf/ambari/blob/4b588a92/ambari-server/src/main/resources/common-services/SPARK/2.2.0/scripts/setup_livy.py
--
diff --git 
a/ambari-server/src/main/resources/common-services/SPARK/2.2.0/scripts/setup_livy.py
 
b/ambari-server/src/main/resources/common-services/SPARK/2.2.0/scripts/setup_livy.py
deleted file mode 100644
index adaca87..000
--- 
a/ambari-server/src/main/resources/common-services/SPARK/2.2.0/scripts/setup_livy.py
+++ /dev/null
@@ -1,88 +0,0 @@
-#!/usr/bin/python
-"""
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-
-"""
-
-import os
-from resource_management import Directory, File, PropertiesFile, 
InlineTemplate, format
-
-
-def setup_livy(env, type, upgrade_type = None, action = None):
-  import params
-
-  Directory([params.livy_pid_dir, params.livy_log_dir],
-owner=params.livy_user,
-group=params.user_group,
-mode=0775,
-create_parents = True
-  )
-  if type == 'server' and action == 'config':
-params.HdfsResource(params.livy_hdfs_user_dir,
-type="directory",
-action="create_on_execute",
-owner=params.livy_user,
-mode=0775
-)
-params.HdfsResource(None, action="execute")
-
-params.HdfsResource(params.livy_recovery_dir,
-type="directory",
-action="create_on_execute",
-owner=params.livy_user,
-mode=0700
-   )
-params.HdfsResource(None, action="execute")
-
-  # create livy-env.sh in etc/conf dir
-  File(os.path.join(params.livy_conf, 'livy-env.sh'),
-   owner=params.livy_user,
-   group=params.livy_group,
-   content=InlineTemplate(params.livy_env_sh),
-   mode=0644,
-  )
-
-  # create livy.conf in etc/conf dir
-  PropertiesFile(format("{livy_conf}/livy.conf"),
-properties = params.config['configurations']['livy-conf'],
-key_value_delimiter = " ",
-owner=params.livy_user,
-group=params.livy_group,
-  )
-
-  # create log4j.properties in etc/conf dir
-  File(os.path.join(params.livy_conf, 'log4j.properties'),
-   owner=params.livy_user,
-   group=params.livy_group,
-   content=params.livy_log4j_properties,
-   mode=0644,
-  )
-
-  # create spark-blacklist.properties in etc/conf dir
-  File(os.path.join(params.livy_conf, 'spark-blacklist.conf'),
-   owner=params.livy_user,
-   group=params.livy_group,
-   content=params.livy_spark_blacklist_properties,
-   mode=0644,
-  )
-
-  Directory(params.livy_logs_dir,
-owner=params.livy_user,
-group=params.livy_group,
-mode=0755,
-  )
-

http://git-wip-us.apache.org/repos/asf/ambari/blob/4b588a92/ambari-server/src/main/resources/common-services/SPARK/2.2.0/scripts/setup_spark.py
--
diff --git 
a/ambari-server/src/main/resources/common-services/SPARK/2.2.0/scripts/setup_spark.py
 
b/ambari-server/src/main/resources/common-services/SPARK/2.2.0/scripts/setup_spark.py
deleted file mode 100644
index 9329ce0..000
--- 
a/ambari-server/src/main/resources/common-services/SPARK/2.2.0/scripts/setup_spark.py
+++ /dev/null
@@ -1,116 +0,0 @@
-#!/usr/bin/python
-"""
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations

ambari git commit: AMBARI-20443. No need to show (Masahiro Tanaka via alejandro)

2017-05-02 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/trunk 6e2d32196 -> b3f7d9e42


AMBARI-20443. No need to show  (Masahiro Tanaka via alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/b3f7d9e4
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/b3f7d9e4
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/b3f7d9e4

Branch: refs/heads/trunk
Commit: b3f7d9e4211a3378c99538f606b27c30a33a34be
Parents: 6e2d321
Author: Alejandro Fernandez 
Authored: Tue May 2 11:43:07 2017 -0700
Committer: Alejandro Fernandez 
Committed: Tue May 2 11:43:07 2017 -0700

--
 .../common-services/HIVE/0.12.0.2.0/configuration/hive-env.xml| 3 ++-
 .../common-services/HIVE/2.1.0.3.0/configuration/hive-env.xml | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/b3f7d9e4/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-env.xml
--
diff --git 
a/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-env.xml
 
b/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-env.xml
index caa598a..b2c364c 100644
--- 
a/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-env.xml
+++ 
b/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-env.xml
@@ -117,7 +117,8 @@
 hive_ambari_database
 MySQL
 Database type.
-
+true
+
   
   
 hive_database_name

http://git-wip-us.apache.org/repos/asf/ambari/blob/b3f7d9e4/ambari-server/src/main/resources/common-services/HIVE/2.1.0.3.0/configuration/hive-env.xml
--
diff --git a/ambari-server/src/main/resources/common-services/HIVE/2.1.0.3.0/configuration/hive-env.xml b/ambari-server/src/main/resources/common-services/HIVE/2.1.0.3.0/configuration/hive-env.xml
index 3cef34b..54a62e2 100644
--- a/ambari-server/src/main/resources/common-services/HIVE/2.1.0.3.0/configuration/hive-env.xml
+++ b/ambari-server/src/main/resources/common-services/HIVE/2.1.0.3.0/configuration/hive-env.xml
@@ -86,6 +86,7 @@
 hive_ambari_database
 MySQL
 Database type.
+true
 
   
   


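For readers reconstructing the diff above: the mailing-list rendering drops the XML tags themselves, so lines such as hive_ambari_database / MySQL / Database type. are the text content of a standard Ambari configuration property block (name, value, description), and the added "true" line is a property-level boolean flag whose tag name is not recoverable here. A small sketch of reading that standard shape; the snippet is a hypothetical minimal stand-in, not the full hive-env.xml:

  import xml.etree.ElementTree as ET

  # Minimal stand-in for an Ambari configuration property block.
  snippet = """
  <configuration>
    <property>
      <name>hive_ambari_database</name>
      <value>MySQL</value>
      <description>Database type.</description>
    </property>
  </configuration>
  """

  for prop in ET.fromstring(snippet).findall('property'):
      print('%s = %s  (%s)' % (prop.findtext('name'),
                               prop.findtext('value'),
                               prop.findtext('description')))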

ambari git commit: AMBARI-20865. Remove redundant whitespace in Hadoop 3.0 configs (alejandro)

2017-05-01 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/trunk f167236c2 -> aef60264f


AMBARI-20865. Remove redundant whitespace in Hadoop 3.0 configs (alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/aef60264
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/aef60264
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/aef60264

Branch: refs/heads/trunk
Commit: aef60264f105a3b060a91dea1d637638384f0289
Parents: f167236
Author: Alejandro Fernandez 
Authored: Wed Apr 26 14:04:44 2017 -0700
Committer: Alejandro Fernandez 
Committed: Mon May 1 15:04:14 2017 -0700

--
 .../HDFS/3.0.0.3.0/configuration/hadoop-env.xml | 200 +-
 .../HDFS/3.0.0.3.0/configuration/hdfs-log4j.xml | 382 +--
 .../HIVE/2.1.0.3.0/configuration/hcat-env.xml   |  48 +--
 .../HIVE/2.1.0.3.0/configuration/hive-env.xml   |  78 ++--
 .../configuration/hive-interactive-env.xml  |  63 ++-
 .../YARN/3.0.0.3.0/configuration/yarn-env.xml   | 206 +-
 .../YARN/3.0.0.3.0/configuration/yarn-log4j.xml | 126 +++---
 .../YARN/3.0.0.3.0/configuration/yarn-site.xml  |   7 +-
 .../3.4.5/configuration/zookeeper-log4j.xml |   2 +-
 9 files changed, 555 insertions(+), 557 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/aef60264/ambari-server/src/main/resources/common-services/HDFS/3.0.0.3.0/configuration/hadoop-env.xml
--
diff --git a/ambari-server/src/main/resources/common-services/HDFS/3.0.0.3.0/configuration/hadoop-env.xml b/ambari-server/src/main/resources/common-services/HDFS/3.0.0.3.0/configuration/hadoop-env.xml
index e447c52..e292e6e 100644
--- a/ambari-server/src/main/resources/common-services/HDFS/3.0.0.3.0/configuration/hadoop-env.xml
+++ b/ambari-server/src/main/resources/common-services/HDFS/3.0.0.3.0/configuration/hadoop-env.xml
@@ -269,143 +269,143 @@
 hadoop-env template
This is the jinja template for hadoop-env.sh file
 
-  # Set Hadoop-specific environment variables here.
+# Set Hadoop-specific environment variables here.
 
-  # The only required environment variable is JAVA_HOME.  All others are
-  # optional.  When running a distributed configuration it is best to
-  # set JAVA_HOME in this file, so that it is correctly defined on
-  # remote nodes.
+# The only required environment variable is JAVA_HOME.  All others are
+# optional.  When running a distributed configuration it is best to
+# set JAVA_HOME in this file, so that it is correctly defined on
+# remote nodes.
 
-  # The java implementation to use.  Required.
-  export JAVA_HOME={{java_home}}
-  export HADOOP_HOME_WARN_SUPPRESS=1
+# The java implementation to use.  Required.
+export JAVA_HOME={{java_home}}
+export HADOOP_HOME_WARN_SUPPRESS=1
 
-  # Hadoop home directory
-  export HADOOP_HOME=${HADOOP_HOME:-/usr/lib/hadoop}
+# Hadoop home directory
+export HADOOP_HOME=${HADOOP_HOME:-/usr/lib/hadoop}
 
-  # Hadoop Configuration Directory
-  #TODO: if env var set that can cause problems
-  export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-{{hadoop_conf_dir}}}
+# Hadoop Configuration Directory
+#TODO: if env var set that can cause problems
+export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-{{hadoop_conf_dir}}}
 
 
-  # Path to jsvc required by secure datanode
-  export JSVC_HOME={{jsvc_path}}
+# Path to jsvc required by secure datanode
+export JSVC_HOME={{jsvc_path}}
 
 
-  # The maximum amount of heap to use, in MB. Default is 1000.
-  if [[ ("$SERVICE" = "hiveserver2") || ("$SERVICE" = "metastore") || ( "$SERVICE" = "cli") ]]; then
-  if [ "$HADOOP_HEAPSIZE" = "" ]; then
-  export HADOOP_HEAPSIZE="{{hadoop_heapsize}}"
-  fi
-  else
-  export HADOOP_HEAPSIZE="{{hadoop_heapsize}}"
-  fi
+# The maximum amount of heap to use, in MB. Default is 1000.
+if [[ ("$SERVICE" = "hiveserver2") || ("$SERVICE" = "metastore") || ( "$SERVICE" = "cli") ]]; then
+if [ "$HADOOP_HEAPSIZE" = "" ]; then
+export HADOOP_HEAPSIZE="{{hadoop_heapsize}}"
+fi
+else
+export HADOOP_HEAPSIZE="{{hadoop_heapsize}}"
+fi
 
 
-  export HADOOP_NAMENODE_INIT_HEAPSIZE="-Xms{{namenode_heapsize}}"
+export HADOOP_NAMENODE_INIT_HEAPSIZE="-Xms{{namenode_heapsize}}"
 
-  # Extra Java runtime options.  Empty by default.
-  export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true ${HADOOP_OPTS}"
+# Extra Java runtime options.  Empty by default.
+export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true ${HADOOP_OPTS}"
 
-  # Command specifi

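Context for the whitespace change above: the hadoop-env content block is a Jinja template that Ambari renders verbatim into hadoop-env.sh, so any leading indentation inside the XML value survives into the generated shell script; stripping it keeps the rendered file clean. A minimal rendering sketch (requires the jinja2 package; the values passed to render() are illustrative, and Ambari's own template wrapper may configure Jinja differently):

  from jinja2 import Template

  # Two representative lines from the template above; Ambari fills the
  # placeholders from cluster configuration at deploy time.
  content = (
      'export JAVA_HOME={{java_home}}\n'
      'export HADOOP_HEAPSIZE="{{hadoop_heapsize}}"\n'
  )
  print(Template(content).render(java_home='/usr/jdk64/jdk1.8.0_112',
                                 hadoop_heapsize='1024'))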
[2/5] ambari git commit: AMBARI-20326. HDP 3.0 TP - support for HBase with configs, kerberos, widgets, metrics, quicklinks, and themes (alejandro)

2017-04-23 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/e5fff582/ambari-server/src/main/resources/common-services/HBASE/2.0.0.3.0/package/files/draining_servers.rb
--
diff --git a/ambari-server/src/main/resources/common-services/HBASE/2.0.0.3.0/package/files/draining_servers.rb b/ambari-server/src/main/resources/common-services/HBASE/2.0.0.3.0/package/files/draining_servers.rb
new file mode 100644
index 0000000..5bcb5b6
--- /dev/null
+++ b/ambari-server/src/main/resources/common-services/HBASE/2.0.0.3.0/package/files/draining_servers.rb
@@ -0,0 +1,164 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# Add or remove servers from draining mode via zookeeper 
+
+require 'optparse'
+include Java
+
+import org.apache.hadoop.hbase.HBaseConfiguration
+import org.apache.hadoop.hbase.client.HBaseAdmin
+import org.apache.hadoop.hbase.zookeeper.ZKUtil
+import org.apache.commons.logging.Log
+import org.apache.commons.logging.LogFactory
+
+# Name of this script
+NAME = "draining_servers"
+
+# Do command-line parsing
+options = {}
+optparse = OptionParser.new do |opts|
+  opts.banner = "Usage: ./hbase org.jruby.Main #{NAME}.rb [options] add|remove|list || ..."
+  opts.separator 'Add remove or list servers in draining mode. Can accept either hostname to drain all region servers' +
+ 'in that host, a host:port pair or a host,port,startCode triplet. More than one server can be given separated by space'
+  opts.on('-h', '--help', 'Display usage information') do
+puts opts
+exit
+  end
+  options[:debug] = false
+  opts.on('-d', '--debug', 'Display extra debug logging') do
+options[:debug] = true
+  end
+end
+optparse.parse!
+
+# Return array of servernames where servername is hostname+port+startcode
+# comma-delimited
+def getServers(admin)
+  serverInfos = admin.getClusterStatus().getServerInfo()
+  servers = []
+  for server in serverInfos
+servers << server.getServerName()
+  end
+  return servers
+end
+
+def getServerNames(hostOrServers, config)
+  ret = []
+  
+  for hostOrServer in hostOrServers
+# check whether it is already serverName. No need to connect to cluster
+parts = hostOrServer.split(',')
+if parts.size() == 3
+  ret << hostOrServer
+else 
+  admin = HBaseAdmin.new(config) if not admin
+  servers = getServers(admin)
+
+  hostOrServer = hostOrServer.gsub(/:/, ",")
+  for server in servers 
+ret << server if server.start_with?(hostOrServer)
+  end
+end
+  end
+  
+  admin.close() if admin
+  return ret
+end
+
+def addServers(options, hostOrServers)
+  config = HBaseConfiguration.create()
+  servers = getServerNames(hostOrServers, config)
+  
+  zkw = org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.new(config, "draining_servers", nil)
+  parentZnode = zkw.drainingZNode
+  
+  begin
+for server in servers
+  node = ZKUtil.joinZNode(parentZnode, server)
+  ZKUtil.createAndFailSilent(zkw, node)
+end
+  ensure
+zkw.close()
+  end
+end
+
+def removeServers(options, hostOrServers)
+  config = HBaseConfiguration.create()
+  servers = getServerNames(hostOrServers, config)
+  
+  zkw = org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.new(config, "draining_servers", nil)
+  parentZnode = zkw.drainingZNode
+  
+  begin
+for server in servers
+  node = ZKUtil.joinZNode(parentZnode, server)
+  ZKUtil.deleteNodeFailSilent(zkw, node)
+end
+  ensure
+zkw.close()
+  end
+end
+
+# list servers in draining mode
+def listServers(options)
+  config = HBaseConfiguration.create()
+  
+  zkw = org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.new(config, "draining_servers", nil)
+  parentZnode = zkw.drainingZNode
+
+  servers = ZKUtil.listChildrenNoWatch(zkw, parentZnode)
+  servers.each {|server| puts server}
+end
+
+hostOrServers = ARGV[1..ARGV.size()]
+
+# Create a logger and disable the DEBUG-level annoying client logging
+def configureLogging(options)
+  apacheLogger = LogFactory.getLog(NAME)
+  # Configure log4j to not spew so much
+  unless (options[:debug]) 
+logger = org.apache.log4j.Logger.getLogger("org.apache.hadoop.hbase")
+logger.setLevel(org.apache.log4j.Level::WAR

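The Ruby script above stores each draining RegionServer as a child znode under HBase's draining znode, so the "list" action is just an enumeration of those children. The same read-only check can be done from any ZooKeeper client; a sketch using the kazoo package (not part of Ambari), assuming the default HBase parent znode /hbase and a hypothetical quorum address:

  from kazoo.client import KazooClient

  zk = KazooClient(hosts='zk-host:2181')  # hypothetical quorum address
  zk.start()
  try:
      # Children of the draining znode are servername znodes, one per
      # RegionServer placed in draining mode by the script above.
      for server in zk.get_children('/hbase/draining'):
          print(server)
  finally:
      zk.stop()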
[3/5] ambari git commit: AMBARI-20326. HDP 3.0 TP - support for HBase with configs, kerberos, widgets, metrics, quicklinks, and themes (alejandro)

2017-04-23 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/e5fff582/ambari-server/src/main/resources/common-services/HBASE/2.0.0.3.0/metrics.json
--
diff --git a/ambari-server/src/main/resources/common-services/HBASE/2.0.0.3.0/metrics.json b/ambari-server/src/main/resources/common-services/HBASE/2.0.0.3.0/metrics.json
new file mode 100644
index 0000000..f94f510
--- /dev/null
+++ b/ambari-server/src/main/resources/common-services/HBASE/2.0.0.3.0/metrics.json
@@ -0,0 +1,4733 @@
+{
+  "HBASE_REGIONSERVER": {
+    "Component": [
+      {
+        "type": "jmx",
+        "metrics": {
+          "default": {
+            "metrics/hbase/regionserver/slowPutCount": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.slowPutCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/percentFilesLocal": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.percentFilesLocal",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_min": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Delete_min",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/blockCacheFree": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.blockCacheFreeSize",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/mutationsWithoutWALSize": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.mutationsWithoutWALSize",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/blockCacheMissCount": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.blockCacheMissCount",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/flushQueueSize": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.flushQueueLength",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/deleteRequestLatency_99th_percentile": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Delete_99th_percentile",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/getRequestLatency_num_ops": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Get_num_ops",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/ScanNext_num_ops": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.ScanNext_num_ops",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/Increment_num_ops": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Increment_num_ops",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/Append_num_ops": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Append_num_ops",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/ScanNext_95th_percentile": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.ScanNext_95th_percentile",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/Append_95th_percentile": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Append_95th_percentile",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/Increment_95th_percentile": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.Increment_95th_percentile",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/regionserver/updatesBlockedTime": {
+              "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.updatesBlockedTime",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/IPC/numActiveHandler": {
+              "metric": "Hadoop:service=HBase,name=IPC,sub=IPC.numActiveHandler",
+              "pointInTime": true,
+              "temporal": false
+            },
+            "metrics/hbase/IPC/numCallsInGeneralQueue": {
+              "metric": "Hadoop:service=HBase,name=IPC,sub=IPC

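The shape of each entry above: the key is the Ambari metric path exposed through the API, and the "metric" string names a JMX MBean plus one attribute; "pointInTime": true means the current value can be served directly, while "temporal": false marks this JMX source as having no time-series history. A small sketch of how such a mapping splits into bean and attribute (splitting on the last dot is an assumption for illustration, not the provider's exact parsing code):

  import json

  mapping_json = """
  {
    "metrics/hbase/regionserver/slowPutCount": {
      "metric": "Hadoop:service=HBase,name=RegionServer,sub=Server.slowPutCount",
      "pointInTime": true,
      "temporal": false
    }
  }
  """

  for path, spec in json.loads(mapping_json).items():
      bean, attr = spec['metric'].rsplit('.', 1)
      print('%s <- MBean %s, attribute %s' % (path, bean, attr))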
[1/5] ambari git commit: AMBARI-20326. HDP 3.0 TP - support for HBase with configs, kerberos, widgets, metrics, quicklinks, and themes (alejandro)

2017-04-23 Thread alejandro
Repository: ambari
Updated Branches:
  refs/heads/trunk 12ee39f34 -> e5fff5825


http://git-wip-us.apache.org/repos/asf/ambari/blob/e5fff582/ambari-server/src/main/resources/common-services/HBASE/2.0.0.3.0/package/scripts/setup_ranger_hbase.py
--
diff --git a/ambari-server/src/main/resources/common-services/HBASE/2.0.0.3.0/package/scripts/setup_ranger_hbase.py b/ambari-server/src/main/resources/common-services/HBASE/2.0.0.3.0/package/scripts/setup_ranger_hbase.py
new file mode 100644
index 0000000..d32dce1
--- /dev/null
+++ b/ambari-server/src/main/resources/common-services/HBASE/2.0.0.3.0/package/scripts/setup_ranger_hbase.py
@@ -0,0 +1,106 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+from resource_management.core.logger import Logger
+
+def setup_ranger_hbase(upgrade_type=None, service_name="hbase-master"):
+  import params
+
+  if params.enable_ranger_hbase:
+
+stack_version = None
+
+if upgrade_type is not None:
+  stack_version = params.version
+
+if params.retryAble:
+  Logger.info("HBase: Setup ranger: command retry enables thus retrying if ranger admin is down !")
+else:
+  Logger.info("HBase: Setup ranger: command retry not enabled thus skipping if ranger admin is down !")
+
+if params.xml_configurations_supported and params.enable_ranger_hbase and params.xa_audit_hdfs_is_enabled and service_name == 'hbase-master' :
+  params.HdfsResource("/ranger/audit",
+ type="directory",
+ action="create_on_execute",
+ owner=params.hdfs_user,
+ group=params.hdfs_user,
+ mode=0755,
+ recursive_chmod=True
+  )
+  params.HdfsResource("/ranger/audit/hbaseMaster",
+ type="directory",
+ action="create_on_execute",
+ owner=params.hbase_user,
+ group=params.hbase_user,
+ mode=0700,
+ recursive_chmod=True
+  )
+  params.HdfsResource("/ranger/audit/hbaseRegional",
+ type="directory",
+ action="create_on_execute",
+ owner=params.hbase_user,
+ group=params.hbase_user,
+ mode=0700,
+ recursive_chmod=True
+  )
+  params.HdfsResource(None, action="execute")
+
+if params.xml_configurations_supported:
+  api_version=None
+  if params.stack_supports_ranger_kerberos:
+    api_version='v2'
+  from resource_management.libraries.functions.setup_ranger_plugin_xml import setup_ranger_plugin
+  setup_ranger_plugin('hbase-client', 'hbase', params.previous_jdbc_jar, params.downloaded_custom_connector,
+                      params.driver_curl_source, params.driver_curl_target, params.java64_home,
+                      params.repo_name, params.hbase_ranger_plugin_repo,
+                      params.ranger_env, params.ranger_plugin_properties,
+                      params.policy_user, params.policymgr_mgr_url,
+                      params.enable_ranger_hbase, conf_dict=params.hbase_conf_dir,
+                      component_user=params.hbase_user, component_group=params.user_group, cache_service_list=['hbaseMaster', 'hbaseRegional'],
+                      plugin_audit_properties=params.config['configurations']['ranger-hbase-audit'], plugin_audit_attributes=params.config['configuration_attributes']['ranger-hbase-audit'],
+                      plugin_security_properties=params.config['configurations']['ranger-hbase-security'], plugin_security_attributes=params.config['configuration_attributes']['ranger-hbase-security'],
+                      plugin_policymgr_ssl_properties=params.config['configurations']['ranger-hbase-policymgr-ssl'], plugin_policymgr_ssl_attributes=params.config['configuration_attributes']['ranger-hbase-policymgr-ssl'],
+                      component_list=['hbase-client', 'hbase-master', 'hbase-regionserver'], audit_db_is_enabled=p

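A note on the HdfsResource calls above: with action="create_on_execute" each call only queues a directory specification; nothing touches HDFS until the closing HdfsResource(None, action="execute"), which replays the whole batch in one pass (a single kinit/WebHDFS session instead of one per directory). A toy stand-in for that queue-then-flush contract, since the real HdfsResource is bound to cluster params at agent runtime:

  class BatchedHdfsResource(object):
      """Toy model of HdfsResource's create_on_execute / execute contract."""
      def __init__(self):
          self._queue = []

      def __call__(self, path, action, **kwargs):
          if action == 'create_on_execute':
              self._queue.append((path, kwargs))  # defer; no HDFS call yet
          elif action == 'execute':
              for queued_path, opts in self._queue:  # single flush pass
                  print('mkdir %s %r' % (queued_path, opts))
              self._queue = []

  HdfsResource = BatchedHdfsResource()
  HdfsResource('/ranger/audit/hbaseMaster', action='create_on_execute', mode=0o700)
  HdfsResource(None, action='execute')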
[5/5] ambari git commit: AMBARI-20326. HDP 3.0 TP - support for HBase with configs, kerberos, widgets, metrics, quicklinks, and themes (alejandro)

2017-04-23 Thread alejandro
AMBARI-20326. HDP 3.0 TP - support for HBase with configs, kerberos, widgets, metrics, quicklinks, and themes (alejandro)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/e5fff582
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/e5fff582
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/e5fff582

Branch: refs/heads/trunk
Commit: e5fff5825563bee0b0af7a0fd790731989b9ce0b
Parents: 12ee39f
Author: Alejandro Fernandez 
Authored: Thu Apr 20 15:29:46 2017 -0700
Committer: Alejandro Fernandez 
Committed: Sun Apr 23 15:21:53 2017 -0700

--
 .../common-services/HBASE/2.0.0.3.0/alerts.json |  127 +
 .../HBASE/2.0.0.3.0/configuration/hbase-env.xml |  279 ++
 .../2.0.0.3.0/configuration/hbase-log4j.xml |  188 +
 .../2.0.0.3.0/configuration/hbase-policy.xml|   53 +
 .../2.0.0.3.0/configuration/hbase-site.xml  |  774 +++
 .../configuration/ranger-hbase-audit.xml|  132 +
 .../ranger-hbase-plugin-properties.xml  |  135 +
 .../ranger-hbase-policymgr-ssl.xml  |   66 +
 .../configuration/ranger-hbase-security.xml |   74 +
 .../HBASE/2.0.0.3.0/kerberos.json   |  160 +
 .../HBASE/2.0.0.3.0/metainfo.xml|  232 +
 .../HBASE/2.0.0.3.0/metrics.json| 4733 ++
 .../2.0.0.3.0/package/files/draining_servers.rb |  164 +
 .../package/files/hbase-smoke-cleanup.sh|   23 +
 .../2.0.0.3.0/package/files/hbaseSmokeVerify.sh |   34 +
 .../HBASE/2.0.0.3.0/package/scripts/__init__.py |   19 +
 .../2.0.0.3.0/package/scripts/functions.py  |   54 +
 .../HBASE/2.0.0.3.0/package/scripts/hbase.py|  230 +
 .../2.0.0.3.0/package/scripts/hbase_client.py   |   81 +
 .../package/scripts/hbase_decommission.py   |   94 +
 .../2.0.0.3.0/package/scripts/hbase_master.py   |  163 +
 .../package/scripts/hbase_regionserver.py   |  174 +
 .../2.0.0.3.0/package/scripts/hbase_service.py  |   66 +
 .../2.0.0.3.0/package/scripts/hbase_upgrade.py  |   42 +
 .../HBASE/2.0.0.3.0/package/scripts/params.py   |   28 +
 .../2.0.0.3.0/package/scripts/params_linux.py   |  426 ++
 .../2.0.0.3.0/package/scripts/params_windows.py |   43 +
 .../package/scripts/phoenix_queryserver.py  |   92 +
 .../package/scripts/phoenix_service.py  |   56 +
 .../2.0.0.3.0/package/scripts/service_check.py  |   99 +
 .../package/scripts/setup_ranger_hbase.py   |  106 +
 .../2.0.0.3.0/package/scripts/status_params.py  |   68 +
 .../HBASE/2.0.0.3.0/package/scripts/upgrade.py  |  106 +
 .../package/templates/hbase-smoke.sh.j2 |   44 +
 .../2.0.0.3.0/package/templates/hbase.conf.j2   |   35 +
 .../package/templates/hbase_client_jaas.conf.j2 |   23 +
 .../templates/hbase_grant_permissions.j2|   39 +
 .../package/templates/hbase_master_jaas.conf.j2 |   26 +
 .../templates/hbase_queryserver_jaas.conf.j2|   26 +
 .../templates/hbase_regionserver_jaas.conf.j2   |   26 +
 .../templates/input.config-hbase.json.j2|   79 +
 .../package/templates/regionservers.j2  |   20 +
 .../HBASE/2.0.0.3.0/quicklinks/quicklinks.json  |   97 +
 .../HBASE/2.0.0.3.0/role_command_order.json |   10 +
 .../HBASE/2.0.0.3.0/themes/theme.json   |  407 ++
 .../HBASE/2.0.0.3.0/widgets.json|  510 ++
 .../stacks/HDP/3.0/services/HBASE/metainfo.xml  |   26 +
 47 files changed, 10489 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/ambari/blob/e5fff582/ambari-server/src/main/resources/common-services/HBASE/2.0.0.3.0/alerts.json
--
diff --git a/ambari-server/src/main/resources/common-services/HBASE/2.0.0.3.0/alerts.json b/ambari-server/src/main/resources/common-services/HBASE/2.0.0.3.0/alerts.json
new file mode 100644
index 0000000..6fcb4dc
--- /dev/null
+++ b/ambari-server/src/main/resources/common-services/HBASE/2.0.0.3.0/alerts.json
@@ -0,0 +1,127 @@
+{
+  "HBASE": {
+    "service": [
+      {
+        "name": "hbase_regionserver_process_percent",
+        "label": "Percent RegionServers Available",
+        "description": "This service-level alert is triggered if the configured percentage of RegionServer processes cannot be determined to be up and listening on the network for the configured warning and critical thresholds. It aggregates the results of RegionServer process down checks.",
+        "interval": 1,
+        "scope": "SERVICE",
+        "enabled": true,
+        "source": {
+          "type": "AGGREGATE",
+          "alert_name": "hbase_regionserver_process",
+          "reporting": {
+            "ok": {
+              "text

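On the AGGREGATE source type above: the server evaluates this alert by counting the child hbase_regionserver_process results and comparing the affected fraction against the warning and critical thresholds in the reporting block (truncated above). In outline, with illustrative threshold values since the real ones are cut off:

  def aggregate_state(ok_count, total, warn_ratio=0.1, crit_ratio=0.3):
      # Mirrors the idea of an AGGREGATE alert: state is derived from the
      # fraction of child alerts that are down, not from a direct probe.
      if total == 0:
          return 'UNKNOWN'
      down_ratio = 1.0 - float(ok_count) / total
      if down_ratio >= crit_ratio:
          return 'CRITICAL'
      if down_ratio >= warn_ratio:
          return 'WARNING'
      return 'OK'

  print(aggregate_state(ok_count=9, total=10))  # 10% down -> WARNING here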
[4/5] ambari git commit: AMBARI-20326. HDP 3.0 TP - support for HBase with configs, kerberos, widgets, metrics, quicklinks, and themes (alejandro)

2017-04-23 Thread alejandro
http://git-wip-us.apache.org/repos/asf/ambari/blob/e5fff582/ambari-server/src/main/resources/common-services/HBASE/2.0.0.3.0/metainfo.xml
--
diff --git a/ambari-server/src/main/resources/common-services/HBASE/2.0.0.3.0/metainfo.xml b/ambari-server/src/main/resources/common-services/HBASE/2.0.0.3.0/metainfo.xml
new file mode 100644
index 0000000..ac57693
--- /dev/null
+++ b/ambari-server/src/main/resources/common-services/HBASE/2.0.0.3.0/metainfo.xml
@@ -0,0 +1,232 @@
+
+
+
+  2.0
+  
+
+  HBASE
+  HBase
+  Non-relational distributed database and centralized service for configuration management &
+synchronization
+  
+  
+  2.0.0.3.0
+  
+
+  HBASE_MASTER
+  HBase Master
+  MASTER
+  1+
+  true
+  HBASE
+  
+
+  HDFS/HDFS_CLIENT
+  host
+  
+true
+  
+
+
+  ZOOKEEPER/ZOOKEEPER_SERVER
+  cluster
+  
+true
+HBASE/HBASE_MASTER
+  
+
+  
+  
+scripts/hbase_master.py
+PYTHON
+1200
+  
+  
+
+  hbase_master
+  true
+
+  
+  
+
+  DECOMMISSION
+  
+scripts/hbase_master.py
+PYTHON
+600
+  
+
+  
+
+
+
+  HBASE_REGIONSERVER
+  RegionServer
+  SLAVE
+  1+
+  true
+  true
+  HBASE
+  
+scripts/hbase_regionserver.py
+PYTHON
+  
+  
+RegionServers
+
+HBASE_MASTER
+  
+  
+
+  hbase_regionserver
+  true
+
+  
+
+
+
+  HBASE_CLIENT
+  HBase Client
+  CLIENT
+  1+
+  true
+  
+scripts/hbase_client.py
+PYTHON
+  
+  
+
+  xml
+  hbase-site.xml
+  hbase-site
+
+
+  env
+  hbase-env.sh
+  hbase-env
+
+
+  xml
+  hbase-policy.xml
+  hbase-policy
+
+
+  env
+  log4j.properties
+  hbase-log4j
+
+  
+
+
+
+  PHOENIX_QUERY_SERVER
+  Phoenix Query Server
+  SLAVE
+  0+
+  true
+  
+scripts/phoenix_queryserver.py
+PYTHON
+  
+  
+
+  hbase_phoenix_server
+  true
+
+  
+
+  
+
+  
+
+  any
+  
+
+  hbase
+
+  
+
+  
+
+  
+scripts/service_check.py
+PYTHON
+300
+  
+  
+  
+ZOOKEEPER
+HDFS
+  
+
+  
+core-site 
+hbase-policy
+hbase-site
+hbase-env
+hbase-log4j
+ranger-hbase-plugin-properties
+ranger-hbase-audit
+ranger-hbase-policymgr-ssl
+ranger-hbase-security
+  
+
+  
+
+  quicklinks.json
+  true
+
+  
+
+  
+
+  redhat7,amazon2015,redhat6,suse11,suse12
+  
+
+  hbase_${stack_version}
+
+
+  phoenix_${stack_version}
+  should_install_phoenix
+
+  
+
+
+  debian7,ubuntu12,ubuntu14,ubuntu16
+  
+
+  hbase-${stack_version}
+
+
+  phoenix-${stack_version}
+  should_install_phoenix
+
+  
+
+  
+
+  
+
+  theme.json
+  true
+
+  
+
+
+  
+

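One detail worth noting in the osSpecifics block above: the package names are templates, hbase_${stack_version} on the redhat/suse family versus hbase-${stack_version} on debian/ubuntu, matching each platform's package-naming rules. The agent substitutes a concrete stack version at install time; the normalization sketched below (dots and dashes to underscores, as in RPM names like hbase_2_6_0_0_1245) is an assumption based on observed package names, not the agent's exact code:

  import re

  def expand(package_token, version):
      # Replace the ${stack_version} placeholder with a normalized version id.
      return package_token.replace('${stack_version}',
                                   re.sub(r'[.-]', '_', version))

  print(expand('hbase_${stack_version}', '2.6.0.0-1245'))  # hbase_2_6_0_0_1245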

