[ https://issues.apache.org/jira/browse/AMBARI-11675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14573165#comment-14573165 ]

Hudson commented on AMBARI-11675:
---------------------------------

FAILURE: Integrated in Ambari-trunk-Commit #2811 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/2811/])
AMBARI-11675 - Hive Upgrade Fails Because Of Missing Database Library 
(jonathanhurley) (jhurley: 
http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=36a1c669970bf219c01157b40385218608685a38)
* ambari-server/src/test/python/stacks/2.1/HIVE/test_hive_metastore.py
* 
ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py


> Hive Upgrade Fails Because Of Missing Database Library
> ------------------------------------------------------
>
>                 Key: AMBARI-11675
>                 URL: https://issues.apache.org/jira/browse/AMBARI-11675
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 2.1.0
>            Reporter: Jonathan Hurley
>            Assignee: Jonathan Hurley
>            Priority: Blocker
>             Fix For: 2.1.0
>
>         Attachments: AMBARI-11675.patch
>
>
> Hive rolling upgrade failed during the RESTART HIVE/HIVE_METASTORE, here is 
> the log in the panel:
> {noformat}
> 2015-06-03 01:01:08,835 - hive-metastore is currently at version 2.2.4.2-2
> 2015-06-03 01:01:08,924 - hive-metastore is currently at version 2.2.4.2-2
> 2015-06-03 01:01:09,022 - hive-metastore is currently at version 2.2.4.2-2
> 2015-06-03 01:01:09,107 - call['conf-select set-conf-dir --package hadoop 
> --stack-version 2.3.0.0-2208 --conf-version 0'] {'logoutput': False, 'quiet': 
> False}
> 2015-06-03 01:01:09,200 - call returned (0, 
> '/usr/hdp/2.3.0.0-2208/hadoop/conf -> /etc/hadoop/2.3.0.0-2208/0\r')
> 2015-06-03 01:01:09,504 - call['conf-select set-conf-dir --package hadoop 
> --stack-version 2.3.0.0-2208 --conf-version 0'] {'logoutput': False, 'quiet': 
> False}
> 2015-06-03 01:01:09,597 - call returned (0, 
> '/usr/hdp/2.3.0.0-2208/hadoop/conf -> /etc/hadoop/2.3.0.0-2208/0\r')
> 2015-06-03 01:01:09,692 - hive-metastore is currently at version 2.2.4.2-2
> 2015-06-03 01:01:09,697 - Group['hadoop'] {'ignore_failures': False}
> 2015-06-03 01:01:09,700 - Group['users'] {'ignore_failures': False}
> 2015-06-03 01:01:09,700 - User['hive'] {'gid': 'hadoop', 'ignore_failures': 
> False, 'groups': ['hadoop']}
> 2015-06-03 01:01:09,702 - User['mapred'] {'gid': 'hadoop', 'ignore_failures': 
> False, 'groups': ['hadoop']}
> 2015-06-03 01:01:09,704 - User['ambari-qa'] {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': ['users']}
> 2015-06-03 01:01:09,705 - User['zookeeper'] {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': ['hadoop']}
> 2015-06-03 01:01:09,706 - User['tez'] {'gid': 'hadoop', 'ignore_failures': 
> False, 'groups': ['users']}
> 2015-06-03 01:01:09,708 - User['hdfs'] {'gid': 'hadoop', 'ignore_failures': 
> False, 'groups': ['hadoop']}
> 2015-06-03 01:01:09,710 - User['yarn'] {'gid': 'hadoop', 'ignore_failures': 
> False, 'groups': ['hadoop']}
> 2015-06-03 01:01:09,711 - User['hcat'] {'gid': 'hadoop', 'ignore_failures': 
> False, 'groups': ['hadoop']}
> 2015-06-03 01:01:09,713 - User['ams'] {'gid': 'hadoop', 'ignore_failures': 
> False, 'groups': ['hadoop']}
> 2015-06-03 01:01:09,714 - File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] 
> {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
> 2015-06-03 01:01:09,717 - 
> Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa 
> /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa']
>  {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
> 2015-06-03 01:01:09,769 - Skipping 
> Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa 
> /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa']
>  due to not_if
> 2015-06-03 01:01:09,771 - Group['hdfs'] {'ignore_failures': False}
> 2015-06-03 01:01:09,771 - User['hdfs'] {'ignore_failures': False, 'groups': 
> ['hadoop', 'hdfs']}
> 2015-06-03 01:01:09,773 - Directory['/etc/hadoop'] {'mode': 0755}
> 2015-06-03 01:01:09,800 - 
> File['/usr/hdp/2.3.0.0-2208/hadoop/conf/hadoop-env.sh'] {'content': 
> InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
> 2015-06-03 01:01:09,826 - Execute['('setenforce', '0')'] {'not_if': '(! which 
> getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': 
> True, 'only_if': 'test -f /selinux/enforce'}
> 2015-06-03 01:01:10,006 - Directory['/grid/0/log/hadoop'] {'owner': 'root', 
> 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
> 2015-06-03 01:01:10,008 - Changing owner for /grid/0/log/hadoop from 507 to 
> root
> 2015-06-03 01:01:10,009 - Directory['/var/run/hadoop'] {'owner': 'root', 
> 'group': 'root', 'recursive': True, 'cd_access': 'a'}
> 2015-06-03 01:01:10,010 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 
> 'recursive': True, 'cd_access': 'a'}
> 2015-06-03 01:01:10,018 - 
> File['/usr/hdp/2.3.0.0-2208/hadoop/conf/commons-logging.properties'] 
> {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
> 2015-06-03 01:01:10,022 - 
> File['/usr/hdp/2.3.0.0-2208/hadoop/conf/health_check'] {'content': 
> Template('health_check.j2'), 'owner': 'hdfs'}
> 2015-06-03 01:01:10,023 - 
> File['/usr/hdp/2.3.0.0-2208/hadoop/conf/log4j.properties'] {'content': '...', 
> 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
> 2015-06-03 01:01:10,039 - 
> File['/usr/hdp/2.3.0.0-2208/hadoop/conf/hadoop-metrics2.properties'] 
> {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
> 2015-06-03 01:01:10,040 - 
> File['/usr/hdp/2.3.0.0-2208/hadoop/conf/task-log4j.properties'] {'content': 
> StaticFile('task-log4j.properties'), 'mode': 0755}
> 2015-06-03 01:01:10,566 - call['conf-select set-conf-dir --package hadoop 
> --stack-version 2.3.0.0-2208 --conf-version 0'] {'logoutput': False, 'quiet': 
> False}
> 2015-06-03 01:01:10,662 - call returned (0, 
> '/usr/hdp/2.3.0.0-2208/hadoop/conf -> /etc/hadoop/2.3.0.0-2208/0\r')
> 2015-06-03 01:01:10,760 - hive-metastore is currently at version 2.2.4.2-2
> 2015-06-03 01:01:10,855 - hive-metastore is currently at version 2.2.4.2-2
> 2015-06-03 01:01:10,958 - Execute['ambari-sudo.sh kill `cat 
> /var/run/hive/hive.pid`'] {'not_if': '! (ls /var/run/hive/hive.pid >/dev/null 
> 2>&1 && ps -p `cat /var/run/hive/hive.pid` >/dev/null 2>&1)'}
> 2015-06-03 01:01:11,100 - Execute['ambari-sudo.sh kill -9 `cat 
> /var/run/hive/hive.pid`'] {'not_if': '! (ls /var/run/hive/hive.pid >/dev/null 
> 2>&1 && ps -p `cat /var/run/hive/hive.pid` >/dev/null 2>&1) || ( sleep 5 && ! 
> (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p `cat 
> /var/run/hive/hive.pid` >/dev/null 2>&1) )'}
> 2015-06-03 01:01:16,202 - Skipping Execute['ambari-sudo.sh kill -9 `cat 
> /var/run/hive/hive.pid`'] due to not_if
> 2015-06-03 01:01:16,204 - Execute['! (ls /var/run/hive/hive.pid >/dev/null 
> 2>&1 && ps -p `cat /var/run/hive/hive.pid` >/dev/null 2>&1)'] {'tries': 20, 
> 'try_sleep': 3}
> 2015-06-03 01:01:16,273 - File['/var/run/hive/hive.pid'] {'action': 
> ['delete']}
> 2015-06-03 01:01:16,274 - Deleting File['/var/run/hive/hive.pid']
> 2015-06-03 01:01:16,274 - Executing Metastore Rolling Upgrade pre-restart
> 2015-06-03 01:01:16,277 - Upgrading Hive Metastore
> 2015-06-03 01:01:16,280 - Execute['/usr/hdp/2.3.0.0-2208/hive/bin/schematool 
> -dbType mysql -upgradeSchema'] {'logoutput': True, 'environment': 
> {'HIVE_CONF_DIR': '/etc/hive/conf.server'}, 'tries': 1, 'user': 'hive'}
> WARNING: Use "yarn jar" to launch YARN applications.
> org.apache.hadoop.hive.metastore.HiveMetaException: Failed to load driver
> *** schemaTool failed ***
> {noformat}
> Basically, the new Hive directory /usr/hdp/2.3.0.0-2208/hive/lib does not 
> contain the JDBC connector JAR, so the schemaTool command fails to load the 
> database driver. We should copy the connector JAR into the new directory 
> before invoking schemaTool.
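The proposed fix amounts to copying the JDBC connector JAR from the old Hive lib directory into the new versioned one before schemaTool runs. A minimal standalone sketch of that copy step is below; the function name, the directory paths, and the `mysql-connector*.jar` glob pattern are illustrative assumptions, not the actual Ambari implementation (which lives in hive_metastore.py and uses the resource_management framework).

```python
import glob
import os
import shutil

def copy_connector_jar(old_hive_lib, new_hive_lib, jar_glob="mysql-connector*.jar"):
    """Copy the JDBC connector JAR(s) from the old Hive lib directory into the
    new versioned lib directory so schemaTool can load the database driver.

    Hypothetical helper for illustration; paths and the JAR name pattern are
    assumptions, e.g. old_hive_lib='/usr/hdp/2.2.4.2-2/hive/lib' and
    new_hive_lib='/usr/hdp/2.3.0.0-2208/hive/lib'.
    """
    copied = []
    for jar in glob.glob(os.path.join(old_hive_lib, jar_glob)):
        target = os.path.join(new_hive_lib, os.path.basename(jar))
        if not os.path.exists(target):
            shutil.copy(jar, target)
        copied.append(target)
    return copied
```

Running a step like this immediately before the `schematool -dbType mysql -upgradeSchema` call would ensure the driver is present in the new directory, which is the failure the log above shows.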



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
