Additionally, the Ambari server doesn't even start now:

[root@metron1 ~]# ambari-server start
Using python  /usr/bin/python
Starting ambari-server
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Ambari database consistency check started...
No errors were found.
Ambari database consistency check finished
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start....................
ERROR: Exiting with exit code -1.
REASON: Ambari Server java process died with exitcode 255. Check /var/log/ambari-server/ambari-server.out for more information.
[root@metron1 ~]# cat /var/log/ambari-server/ambari-server.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
[root@metron1 ~]#

How can I enable extra Ambari debugging?
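
For reference, this is roughly what I was planning to try next to get more verbose output; just a sketch, assuming the stock log4j setup under /etc/ambari-server/conf and the default "log4j.rootLogger=INFO" line (adjust if yours differs):

# Bump the server's root log level to DEBUG (path and line are the Ambari defaults).
sed -i 's/log4j.rootLogger=INFO/log4j.rootLogger=DEBUG/' /etc/ambari-server/conf/log4j.properties

# The .out file above only shows the (harmless) MaxPermSize warning,
# so watch the main log while restarting.
ambari-server restart
tail -f /var/log/ambari-server/ambari-server.log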

On 2017-05-05 09:32, Laurens Vets wrote:
Is it normal that I see the following error during the install with the new repo? (The quick check I had in mind is sketched below the output.)

[root@metron1 yum.repos.d]# yum install ambari-server -y
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.sjc02.svwh.net
 * epel: mirrors.kernel.org
 * extras: mirrors.kernel.org
 * updates: repo1.sea.innoscale.net
Resolving Dependencies
--> Running transaction check
---> Package ambari-server.x86_64 0:2.4.2.0-136 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

========================================================================================================================
 Package                     Arch                 Version                    Repository                            Size
========================================================================================================================
Installing:
 ambari-server               x86_64               2.4.2.0-136                Updates-ambari-2.4.2.0               645 M

Transaction Summary
========================================================================================================================
Install  1 Package

Total download size: 645 M
Installed size: 700 M
Downloading packages:
ambari-server-2.4.2.0-136.x86_64.rpm                                                        | 645 MB  00:15:33
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
##### THIS -> #####
cp: cannot stat ‘//var/lib/ambari-server/resources/views/*.jar’: No such file or directory
  Installing : ambari-server-2.4.2.0-136.x86_64                                            1/1
  Verifying  : ambari-server-2.4.2.0-136.x86_64                                            1/1

Installed:
  ambari-server.x86_64 0:2.4.2.0-136

Complete!
[root@metron1 yum.repos.d]#
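
As mentioned above, this is the quick check I had in mind for that cp error; my own sanity check, not something from the install guide:

# Did the views directory get created, and did any view jars land in it after the install?
ls -ld /var/lib/ambari-server/resources/views/
ls -l /var/lib/ambari-server/resources/views/

# Re-run setup to see whether the server still configures cleanly despite the warning.
ambari-server setup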

On 2017-05-04 16:02, David Lyle wrote:
Looks like those instructions could use a bit of a re-vamp. Ambari 2.4.1 isn't supported, but it has you download that version. You'll need to use
Ambari 2.4.2+.


Here's the link for 2.4.2 (save it to /etc/yum.repos.d/ambari.repo):
http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.4.2.0/ambari.repo
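
Roughly, the whole sequence would look something like this (off the top of my head, so double-check; you may need to adjust the OS segment in the repo path to match your hosts):

wget http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.4.2.0/ambari.repo -O /etc/yum.repos.d/ambari.repo
yum install -y ambari-server
ambari-server setup
ambari-server start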


-D...


On Thu, May 4, 2017 at 6:16 PM, Laurens Vets <[email protected]> wrote:

I'm installing Metron on 3 VMs following this guide:
https://cwiki.apache.org/confluence/display/METRON/Metron+with+HDP+2.5+bare-metal+install
Ambari tries to install all components but fails on the Elasticsearch Master install:

stderr: /var/lib/ambari-agent/data/errors-385.txt

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/ELASTICSEARCH/2.3.3/package/scripts/elastic_master.py", line 73, in <module>
    Elasticsearch().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/ELASTICSEARCH/2.3.3/package/scripts/elastic_master.py", line 32, in install
    self.install_packages(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 567, in install_packages
    retry_count=agent_stack_retry_count)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 54, in action_install
    self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 49, in install_package
    self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 83, in checked_call_with_retries
    return self._call_with_retries(cmd, is_checked=True, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 91, in _call_with_retries
    code, out = func(cmd, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 71, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 93, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 141, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 294, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install elasticsearch-2.3.3' returned 1. Error: Nothing to do

stdout: /var/lib/ambari-agent/data/output-385.txt

2017-05-04 15:12:26,016 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-05-04 15:12:26,017 - Group['metron'] {}
2017-05-04 15:12:26,019 - Group['livy'] {}
2017-05-04 15:12:26,019 - Group['elasticsearch'] {}
2017-05-04 15:12:26,019 - Group['spark'] {}
2017-05-04 15:12:26,020 - Group['zeppelin'] {}
2017-05-04 15:12:26,020 - Group['hadoop'] {}
2017-05-04 15:12:26,020 - Group['kibana'] {}
2017-05-04 15:12:26,020 - Group['users'] {}
2017-05-04 15:12:26,021 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-04 15:12:26,021 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-04 15:12:26,022 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-04 15:12:26,023 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-04 15:12:26,023 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-05-04 15:12:26,024 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-04 15:12:26,026 - User['metron'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-04 15:12:26,027 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-04 15:12:26,028 - User['elasticsearch'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-04 15:12:26,029 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-04 15:12:26,030 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-05-04 15:12:26,031 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-04 15:12:26,031 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-04 15:12:26,032 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-04 15:12:26,033 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-04 15:12:26,033 - User['kibana'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-04 15:12:26,034 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-04 15:12:26,035 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-04 15:12:26,035 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-04 15:12:26,036 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-05-04 15:12:26,038 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-05-04 15:12:26,048 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-05-04 15:12:26,048 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-05-04 15:12:26,049 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-05-04 15:12:26,050 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-05-04 15:12:26,060 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-05-04 15:12:26,060 - Group['hdfs'] {}
2017-05-04 15:12:26,060 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2017-05-04 15:12:26,061 - FS Type:
2017-05-04 15:12:26,061 - Directory['/etc/hadoop'] {'mode': 0755}
2017-05-04 15:12:26,073 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-05-04 15:12:26,074 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-05-04 15:12:26,090 - Initializing 2 repositories
2017-05-04 15:12:26,091 - Repository['HDP-2.5'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.5.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-05-04 15:12:26,098 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.5]\nname=HDP-2.5\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.5.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-05-04 15:12:26,099 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-05-04 15:12:26,102 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-05-04 15:12:26,103 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-04 15:12:26,194 - Skipping installation of existing package unzip
2017-05-04 15:12:26,194 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-04 15:12:26,206 - Skipping installation of existing package curl
2017-05-04 15:12:26,206 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-04 15:12:26,220 - Skipping installation of existing package hdp-select
2017-05-04 15:12:26,489 - Install ES Master Node
2017-05-04 15:12:26,491 - Package['elasticsearch-2.3.3'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-04 15:12:26,580 - Installing package elasticsearch-2.3.3 ('/usr/bin/yum -d 0 -e 0 -y install elasticsearch-2.3.3')

Command failed after 1 tries

Any idea what might be going on?
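
For what it's worth, "Error: Nothing to do" seems to mean yum can't see an elasticsearch-2.3.3 package in any enabled repo, so I'm guessing something like the checks below would narrow it down (my own quick sanity checks on the failing node, not from the guide):

# Is an Elasticsearch repo configured and enabled at all?
ls /etc/yum.repos.d/
yum repolist enabled

# Can yum see the exact package/version Ambari asks for?
yum list available 'elasticsearch*'
yum info elasticsearch-2.3.3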
