Hi Andreas,

On Fri, Jul 24, 2020 at 12:29 PM Andreas Tille <[email protected]> wrote:

> Hi Pranav,
>
> seems ampliconnoise will keep us busy longer than other packages.  The
> fact that no extra data are shipped any more made my motivation to
> put them into an extra package void.  So I reverted this and in
> addition used xz compression for the data file shipped by upstream.


That sounds great.


> Unfortunately there are issues with the test (which I formerly ignored,
> since inside the pbuilder environment the test is executed as root,
> which does not work for MPI).  I've now tried
> to run /usr/share/doc/ampliconnoise/run-unit-test on my local machine,
> which ended in:
>
> C005.dat
> Calculating .fdist file
> --------------------------------------------------------------------------
> There are not enough slots available in the system to satisfy the 4
> slots that were requested by the application:
>
>   PyroDist
>
> Either request fewer slots for your application, or make more slots
> available for use.
>
> A "slot" is the Open MPI term for an allocatable unit where we can
> launch a process.  The number of slots available are defined by the
> environment in which Open MPI processes are run:
>
>   1. Hostfile, via "slots=N" clauses (N defaults to number of
>      processor cores if not provided)
>   2. The --host command line parameter, via a ":N" suffix on the
>      hostname (N defaults to 1 if not provided)
>   3. Resource manager (e.g., SLURM, PBS/Torque, LSF, etc.)
>   4. If none of a hostfile, the --host command line parameter, or an
>      RM is present, Open MPI defaults to the number of processor cores
>
> In all the above cases, if you want Open MPI to default to the number
> of hardware threads instead of the number of processor cores, use the
> --use-hwthread-cpus option.
>
> Alternatively, you can use the --oversubscribe option to ignore the
> number of available slots when deciding the number of processes to
> launch.
> --------------------------------------------------------------------------
> Exit code:   1
>
>
> Seems MPI needs some initialisation before it runs on the local
> machine.  My machine has
>
> $ nproc
> 4
>
> So 4 processors should not be a problem.


> Any idea what might be wrong here?
>

It didn't require any initialization for me. Maybe we should just try it
with the --use-hwthread-cpus option. I've tried it both inside and outside
the chroot and it works for me. Could you please try it too? I've edited
the commands and pushed.
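In case it helps, here is a rough sketch of what I mean, assuming
run-unit-test builds a plain mpirun command line. PyroDist, the 4-slot
count, and the C005 file names are taken from the error output above; the
-in/-out options are illustrative, and the command is echoed rather than
executed so the sketch stands on its own:

```shell
# Sketch only: show the adjusted command instead of executing it.
NPROCS=4   # the test requested 4 slots
# --use-hwthread-cpus lets Open MPI count hardware threads, not just
# physical cores, when computing the default slot count.
CMD="mpirun --use-hwthread-cpus -np $NPROCS PyroDist -in C005.dat -out C005"
echo "$CMD"
```

If even the hardware-thread count were too small, --oversubscribe would be
the fallback: it tells Open MPI to ignore the available slot count when
launching processes, which is acceptable for a test run.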

Regards,
Pranav