On Mon, Apr 5, 2021 at 9:23 AM Trevor Gamblin <trevor.gamb...@windriver.com>
wrote:

> Signed-off-by: Trevor Gamblin <trevor.gamb...@windriver.com>
> ---
>  README-Guide.md | 94 +++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 94 insertions(+)
>
> diff --git a/README-Guide.md b/README-Guide.md
> index 21dd7c1..8558c48 100644
> --- a/README-Guide.md
> +++ b/README-Guide.md
> @@ -43,6 +43,16 @@ yocto-controller/yoctoabb
>  yocto-worker
>  ```
>
> +Before proceeding, make sure that the following is added to the
> +pokybuild3 user's exports (e.g. in .bashrc), or builds will fail after
> +being triggered:
> +
> +```
> +export LC_ALL=en_US.UTF-8
> +export LANG=en_US.UTF-8
> +export LANGUAGE=en_US.UTF-8
> +```
>


On the AB at typhoon.yocto.io only LANG=en_US.UTF-8 is set. I don't know
why LC_ALL or LANGUAGE need to be set on your cluster for builds to succeed.
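
If the failures are locale-related, generating the locale on the worker
may be enough on its own. On Ubuntu/Debian something like the following
should work (a suggestion, not part of the patch):

```
sudo locale-gen en_US.UTF-8
sudo update-locale LANG=en_US.UTF-8
```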



> +
>  Next, we need to update the `yocto-controller/yoctoabb/master.cfg`
> towards the bottom where the `title`, `titleURL`, and `buildbotURL` are all
> set.  This is also where you would specify a different password for binding
> workers to the master.
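
For anyone following along, the relevant lines at the bottom of
master.cfg look roughly like this (the values below are placeholders,
not the project's real settings):

```
c['title'] = "Example Autobuilder"
c['titleURL'] = "https://autobuilder.example.org/"
c['buildbotURL'] = "http://147.11.105.72:8010/"
```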
>
>  Then, we need to update the `yocto-controller/yoctoabb/config.py` to
> include our worker.  In that file, find the line where `workers` is set and
> add: ["example-worker"].  _NOTE:_ if your worker's name is different, use
> that here.  Section 3.1 discusses how to further refine this list of
> workers.
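
That is, something like this in config.py (a sketch; the real file
contains much more than this one line):

```
workers = ["example-worker"]
```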
> @@ -112,6 +122,90 @@ sudo
> /home/pokybuild3/yocto-worker/qemuarm/build/scripts/runqemu-gen-tapdevs \
>
>  In the above command, we assume a build named qemuarm failed.  The
> value of 8 is the number of tap interfaces to create on the worker.
>
> +### 1.3) Adding Dedicated Worker Nodes
> +
> +Running both the controller and the worker together on a single machine
> +can quickly result in long build times and an unresponsive web UI,
> +especially if you plan on running any of the more comprehensive builders
> +(e.g. a-full). Additional workers can be added to the cluster by
> +following the steps given above, except that the yocto-controller steps
> +do not need to be repeated. For example, to add a new worker
> +"ala-blade51" to an Autobuilder cluster with a yocto-controller at the
> +IP address 147.11.105.72:
> +
> +1. On the yocto-controller host, add the name of the new worker to a
> +worker list, or create a new one, e.g. 'workers_wrlx = ["ala-blade51"]',
> +and make sure that the new worker ends up in the "workers" list.
> +
> +2. On the new worker node:
> +
> +```
> +sudo apt-get install gawk wget git-core diffstat unzip texinfo \
> +gcc-multilib build-essential chrpath socat cpio python python3 \
> +python3-pip python3-pexpect xz-utils debianutils iputils-ping \
> +libsdl1.2-dev xterm
>

Should we link to
https://docs.yoctoproject.org/ref-manual/system-requirements.html#ubuntu-and-debian
for the current package set as well as listing this information here?


> +
> +sudo pip3 install buildbot buildbot-www buildbot-waterfall-view \
> +buildbot-console-view buildbot-grid-view buildbot-worker
> +
> +useradd -m --system pokybuild3
> +cd /home/pokybuild3
> +mkdir -p git/trash
> +buildbot-worker create-worker -r --umask=0o22 yocto-worker \
> +147.11.105.72 ala-blade51 pass
> +chown -R pokybuild3:pokybuild3 /home/pokybuild3
> +```
> +
> + > Note 1: The URL/IP given to the create-worker command must match the
> +host running the yocto-controller.
> +
> + > Note 2: The "pass" argument given to the create-worker command must
> +match the common "worker_pass" variable set in
> +yocto-controller/yoctoabb/config.py.
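
For step 1 above, the controller-side change might look something like
this (a sketch only; the list names and how config.py combines them are
assumptions, so check the actual file):

```
workers_wrlx = ["ala-blade51"]
workers = ["example-worker"] + workers_wrlx

worker_pass = "pass"  # must match the create-worker argument (Note 2)
```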
> +
> +
> +### 1.4) Configuring NFS for the Autobuilder Cluster
> +
> +The Yocto Autobuilder relies on NFS to distribute a common sstate cache
> +and other outputs between nodes. A similar configuration can be
> +deployed by performing the steps given below, which were written for
> +Ubuntu 18.04. In order for both the controller and worker nodes to be able
> +to access the NFS share without issue, the "pokybuild3" user on all
> +systems must have the same UID/GID, or sufficient permissions must be
> +granted on the /srv/autobuilder path (or wherever you modified the config
> +files to point to). The following instructions assume a controller node
> +at 147.11.105.72 and a single worker node at 147.11.105.71, but
> +additional worker nodes can be added as needed (see the previous
> +section).
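
A quick way to check the UID/GID requirement on each node before
exporting anything (not part of the patch):

```
id pokybuild3
```

The uid/gid values printed must match on the controller and on every
worker that will mount the share.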
> +
> +1. On the NFS host:
> +
> +```
> +sudo apt install -y nfs-kernel-server
> +sudo mkdir -p /srv/autobuilder/autobuilder.yoctoproject.org/pub/sstate
> +sudo chown -R pokybuild3:pokybuild3 /srv
>

Let's only chown the directories we intend to export. Other data may be
present in /srv and leaving its owner intact is desirable.
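
For example:

```
sudo chown -R pokybuild3:pokybuild3 /srv/autobuilder
```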


> +```
> +2. Add the following to /etc/exports, replacing the path and IP fields
> +   as necessary for each client node:
> +```
> +/srv/autobuilder/autobuilder.yoctoproject.org/pub/sstate 147.11.105.71(rw,sync,no_subtree_check)
> +```
> +
> +3. Run
> +```
> +sudo systemctl restart nfs-kernel-server
> +```
> +
> +4. Adjust the firewall (if required). Example:
> +```
> +sudo ufw allow from 147.11.105.71 to any port nfs
> +```
> +
> +5. On the client node(s):
> +```
> +sudo mkdir -p /srv/autobuilder/autobuilder.yoctoproject.org/pub/sstate
> +sudo chown -R pokybuild3:pokybuild3 /srv/autobuilder/
> +sudo mount \
> +147.11.105.72:/srv/autobuilder/autobuilder.yoctoproject.org/pub/sstate \
> +/srv/autobuilder/autobuilder.yoctoproject.org/pub/sstate
> +```
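
It may also be worth mentioning how to verify the export from a client
and how to make the mount persistent. A sketch (showmount comes from
the nfs-common package, and the fstab options here are assumptions):

```
showmount -e 147.11.105.72

# /etc/fstab entry on each client, all on one line:
147.11.105.72:/srv/autobuilder/autobuilder.yoctoproject.org/pub/sstate /srv/autobuilder/autobuilder.yoctoproject.org/pub/sstate nfs defaults 0 0
```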
> +
>  ## 2) Basics
>
>  This section is an overview of operation and a few basic configuration
> file relationships.  See Section 3 for more detailed instructions.
> --
> 2.30.2
>

-- 
Michael Halstead
Linux Foundation / Yocto Project
Systems Operations Engineer