Tested it on CentOS 7.9; managed to install ZooKeeper and Hadoop so far with some 
workarounds.


1. While running ambari-server setup, I encountered this error with PostgreSQL 
9.2.24 (no harm so far):

Error extracting ambari-views-package-3.0.0.0-SNAPSHOT.jar 



2023-09-03 11:22:13,040 ERROR [main] ViewRegistry:368 - Caught exception 
extracting view archive 
/var/lib/ambari-server/resources/views/ambari-views-package-3.0.0.0-SNAPSHOT.jar.

java.lang.IllegalStateException: Archive 
/var/lib/ambari-server/resources/views/ambari-views-package-3.0.0.0-SNAPSHOT.jar
 doesn't contain a view descriptor.

        at 
org.apache.ambari.server.view.ViewArchiveUtility.getViewConfigFromArchive(ViewArchiveUtility.java:75)

        at 
org.apache.ambari.server.view.ViewRegistry.extractViewArchive(ViewRegistry.java:2114)

        at 
org.apache.ambari.server.view.ViewRegistry.main(ViewRegistry.java:363)

2. While starting ambari-agent, this warning appeared (I also ignored it):

Some native extensions not available for module(s): simplejson, it may affect 
execution performance 



3. While installing ZooKeeper, I encountered "no module named yum", which I fixed 
by applying the patch from:

https://github.com/apache/ambari/pull/3751/commits/35aeae38c2a3ddb9928bc82e94f59bc3915ac67d
 



4. Set python3 as the default python.
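For step 4, here is one way to do it on CentOS 7 via the alternatives system (a hedged sketch, not necessarily the exact commands I ran; it assumes /usr/bin/python3 is installed and you are root):

```shell
# Register python3 with the alternatives system and make it the default
# "python". Guarded so the snippet is a no-op where alternatives is absent.
if command -v alternatives >/dev/null 2>&1 && command -v python3 >/dev/null 2>&1; then
    alternatives --install /usr/bin/python python "$(command -v python3)" 1
    alternatives --set python "$(command -v python3)"
fi
command -v python3
```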

5. yum install bigtop-select
Then ZooKeeper installed successfully.

6. Failed to install the DataNode:

File "/usr/lib/ambari-agent/lib/resource_management/core/sudo.py", line 148, in read_file

    with open(filename, 'rb', encoding=encoding) as fp:

ValueError: binary mode doesn't take an encoding argument

The offending code:

    with open(filename, 'rb', encoding=encoding) as fp:
        content = fp.read()

I removed the encoding argument, and Hadoop installed successfully.
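The fix can be sketched as follows. This is a hedged reconstruction of the helper in resource_management/core/sudo.py, not the exact upstream code; the explicit decode step is my own addition (my actual workaround simply dropped the argument):

```python
# Sketch of the fix: binary mode ('rb') returns bytes, and Python 3's open()
# rejects an encoding argument in binary mode, so drop the argument and
# decode explicitly afterwards if text was requested.
def read_file(filename, encoding=None):
    with open(filename, 'rb') as fp:  # no encoding in binary mode
        content = fp.read()
    if encoding:
        # Decode only when the caller asked for text.
        content = content.decode(encoding)
    return content
```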



I will report back after testing other services.


Sent using https://www.zoho.com/mail/

---- On Tue, 05 Sep 2023 14:19:27 +0330 onmstester onmstester 
<[email protected]> wrote ---



Thank you,



I will try installing this ASAP.

Which of the branches below did you use to build the RPMs:

https://github.com/JiaLiangC/ambari/tree/AMBARI-26000-trunk

https://github.com/JiaLiangC/ambari/tree/trunk_py3



Is it possible to merge this branch with the branch that added Ranger to the 
stack (https://github.com/JiaLiangC/ambari/tree/AMBARI-25929)?



What would be the next stable Ambari release, 2.8 or 3.0 (the 2.8 repo seems to 
have been inactive recently)? And do you expect it to be GA by the end of 2023?



Recently I tried to build RPMs for Ambari 3 on CentOS 8, but the rpmbuild 
command failed, so I took the pre-rpm directories and files and installed 
them on Rocky 8 with some workarounds. After that I tried to install Hadoop 
using the Ambari wizard, but it failed, reporting many missing Python modules 
(yum, rpm, pycurl). I'll retry building and installing the py3 trunk on el8.





---- On Mon, 04 Sep 2023 10:42:29 +0330 Jialiang Cai 
<mailto:[email protected]> wrote ---












Hi all,



Due to the extensive number of files involved in the Python 3 upgrade, 
conducting an online review has been relatively challenging. Therefore, I 
collaborated with the colleague who provided this PR offline, conducting 
extensive review, testing, and bug-fixing work. Now, the Ambari trunk Python 3 
upgrade has been successfully completed. We have passed all unit tests, and 
there have been no issues detected during manual deployments and blueprint 
automation deployments.



The code changes made during the upgrade process have been organized into 
documentation, with reasons for the changes clearly stated to facilitate review 
by everyone. Additionally, the issue provides valuable information on how to 
compile and test this Python 3 Ambari based on CentOS 7.



https://issues.apache.org/jira/browse/AMBARI-26000



The following repository provides packages for installing all dependencies for 
a cluster. Feel free to download and test it, and please report any issues you 
encounter.

(This repository contains Ambari based on trunk with the Python 3 upgrade PR 
merged, and other big data component packages are from Apache Bigtop 3.2.)



http://64.69.37.12:8089



Please note that the bandwidth and traffic for the repository I provided are 
limited. It's best to download it to your local machine and create your own 
repository for testing, which will significantly speed up the cluster 
installation process. Here are the steps to create a repository after 
downloading:



```bash

yum install -y createrepo

yum install -y yum-plugin-priorities



# Create the yum directory

mkdir -p /data1/custom_yum/packages



# Put all the RPMs you need to install in the /data1/custom_yum/packages 
directory, then execute

# Specify 'basedir' as the location where the RPMs are stored; it must be 
# explicitly set to avoid errors, otherwise it defaults to the working directory

createrepo /data1/custom_yum --basedir=/data1/custom_yum/packages



# If you add or modify RPMs, you can update the repository with the following 
command

createrepo --update -p /data1/custom_yum

```



Next, expose the repo over HTTP. You can use Python's built-in server, which 
serves one file at a time (slower):



```bash
# Python 2:
python -m SimpleHTTPServer 8089
# On Python 3 the module was renamed; the equivalent is:
# python3 -m http.server 8089
```



Or you can use the Node.js file server, which is faster:



```bash

npm install --global http-server



cd /data1/custom_yum

http-server -p 8089

```



To use the repository, create a repo configuration file:



```bash

vi /etc/yum.repos.d/ambari_custom.repo

```



Add the following content, replacing `your_ip` with the actual IP address:



```

[c7-media]

name=CentOS-$releasever - Media

baseurl=http://your_ip:8089

gpgcheck=0

enabled=1

priority=2

```



Then, clean the cache and run the following command for testing:



```bash

yum clean all

yum makecache

yum install hadoop_3_2_0

```



Happy testing!
