On Wed, Sep 17, 2025 at 12:29:32AM +0000, Rikka Göring wrote:
> Hello ports@,
> 
> I wanted to give some context around several new port submissions I have 
> pending and to outline my ongoing efforts to improve FreeBSD's HPC 
> (High-Performance Computing) software stack.
> 
> Recent activity:
> 
>   * Took over maintainership of sysutils/slurm-wlm (bug #288600,
>     https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=288600)
>   * Submitted new ports: devel/py-reframe (bug #289292,
>     https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=289292),
>     sysutils/py-clustershell (bug #289176,
>     https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=289176),
>     and devel/spack (bug #289296,
>     https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=289296)
> 
> 
> Planned/ongoing ports:
> 
>   * devel/mpifileutils (in progress)
>   * sysutils/openpmix (planned)
>   * sysutils/prrte (planned)
>   * sysutils/flux-core (planned)
>   * sysutils/flux-sched (planned)
> 
> These tools are widely used in Tier-0/Tier-1 HPC centers (e.g. Spack for
> package management, ReFrame for regression testing, OpenPMIx/PRRTE as MPI
> runtime foundations, Flux as a next-generation workload manager). My
> current goal is to make FreeBSD a viable HPC platform by ensuring these
> pieces are available and functional.
> 
> I would appreciate feedback on:
> 
>   * Whether there are other HPC-relevant software packages I should target.
>   * Any pitfalls or best practices to keep in mind while scaling this effort.
>   * Potential co-maintainers or testers interested in this space.
> 
> My aim is to make FreeBSD a serious option for scientific computing and
> large-scale HPC environments, and I welcome any input from the community.

ucx (github.com/openucx/ucx)?

I am not sure how deep the linuxisms run there.  I tried to eliminate some
several years ago, but mostly got stuck on some formal issues.
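
To give a concrete idea of the kind of linuxism involved, here is a
minimal sketch (illustrative only, not code from the ucx tree;
portable_ncpus is a hypothetical helper): counting CPUs by parsing
/proc breaks on FreeBSD, while sysconf(3) with a native sysctl
fallback works on both systems.

/*
 * Sketch of a typical linuxism fix: query sysconf(3) first (works on
 * Linux and FreeBSD), fall back to FreeBSD's hw.ncpu sysctl, and
 * never read /proc.  Illustrative only; not taken from openucx/ucx.
 */
#include <stdio.h>
#include <unistd.h>

#ifdef __FreeBSD__
#include <sys/types.h>
#include <sys/sysctl.h>
#endif

static long
portable_ncpus(void)
{
#ifdef _SC_NPROCESSORS_ONLN
	long n = sysconf(_SC_NPROCESSORS_ONLN);
	if (n > 0)
		return (n);
#endif
#ifdef __FreeBSD__
	int ncpu;
	size_t len = sizeof(ncpu);
	/* Native fallback, equivalent to `sysctl hw.ncpu`. */
	if (sysctlbyname("hw.ncpu", &ncpu, &len, NULL, 0) == 0)
		return ((long)ncpu);
#endif
	return (1);	/* conservative default */
}

int
main(void)
{
	printf("ncpus: %ld\n", portable_ncpus());
	return (0);
}

The harder linuxisms (epoll, /proc/self/maps, sched_setaffinity) need
kqueue, libprocstat, and cpuset_setaffinity(2) equivalents, which is
usually where the real porting work is.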
