Hello Rikka,

This is great news. I'm also interested in HPC and I maintain several
HPC-related ports, in particular net/mpich, net/openmpi,
net/py-mpi4py, net/mpi4py-mpich and sysutils/modules. I have started
work on a new port, sysutils/openpbs, that will give FreeBSD another
option when it comes to HPC workload managers.
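
In case it helps anyone trying out the MPI ports, a minimal smoke
test using net/py-mpi4py looks roughly like this (the file name
hello_mpi.py is just an example; it assumes mpiexec from net/mpich or
net/openmpi is on the PATH):

    # hello_mpi.py - report rank, size and host for each MPI process
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()
    print(f"Hello from rank {rank} of {size} on "
          f"{MPI.Get_processor_name()}")

Running it with something like "mpiexec -n 2 python3 hello_mpi.py"
should print one line per rank.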

Thanks for bringing this up. I'm looking forward to collaborating.

Laurent


On Tue, 16 Sept 2025 at 20:29, Rikka Göring <[email protected]> wrote:
>
> Hello ports@,
>
> I wanted to give some context around several new port submissions I have 
> pending and to outline my ongoing efforts to improve FreeBSD's HPC 
> (High-Performance Computing) software stack.
>
> Recent activity:
>
> Took over maintainership of sysutils/slurm-wlm (bug #288600)
>
> Submitted new ports: devel/py-reframe (bug #289292), sysutils/py-clustershell 
> (bug #289176), devel/spack (bug #289296)
>
>
> Planned/ongoing ports:
>
> devel/mpifileutils (in progress)
> sysutils/openpmix (planned)
> sysutils/prrte (planned)
> sysutils/flux-core (planned)
> sysutils/flux-sched (planned)
>
>
> These tools are widely used in Tier-0/Tier-1 HPC centers (e.g. Spack for 
> package management, ReFrame for regression testing, OpenPMIx/PRRTE as MPI 
> runtime foundations, Flux as a next-generation workload manager). My current 
> goal is to make FreeBSD a viable HPC platform by ensuring these pieces are 
> available and functional.
>
> I would appreciate feedback on:
>
> Whether there are other HPC-relevant software packages I should target.
> Any pitfalls or best practices to keep in mind while scaling this effort.
> Potential co-maintainers or testers interested in this space.
>
>
> My aim is to make FreeBSD a serious option for scientific computing and 
> large-scale HPC environments, and I welcome any input from the community.
>
> Best regards,
> Rikka
