Hello Rikka,

On 9/16/25 06:29PM, Rikka Göring wrote:
Hello ports@,

I wanted to give some context around several new port submissions I have pending and to outline my ongoing efforts to improve FreeBSD's HPC (High-Performance Computing) software stack.

*Recent activity:*

 * Took over maintainership of sysutils/slurm-wlm (bug #288600
   <https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=288600>)

 * Submitted new ports: devel/py-reframe (bug #289292
   <https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=289292>),
   sysutils/py-clustershell (bug #289176
   <https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=289176>),
   devel/spack (bug #289296
   <https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=289296>)


*Planned/ongoing ports:*

 * devel/mpifileutils (in progress)
 * sysutils/openpmix (planned)
 * sysutils/prrte (planned)
 * sysutils/flux-core (planned)
 * sysutils/flux-sched (planned)


These tools are widely used at Tier-0/1 HPC centers (e.g. Spack for package management, ReFrame for regression testing, OpenPMIx/PRRTE as MPI runtime foundations, Flux as a next-generation workload manager). My current goal is to make FreeBSD a viable HPC platform by ensuring these pieces are available and functional.
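
As an illustration of what ReFrame adds, here is a minimal sketch of a check (the file names hello.py/hello.c and the expected output are made up for this example, not taken from any submitted port). It compiles a small C program and verifies its output:

    import reframe as rfm
    import reframe.utility.sanity as sn

    @rfm.simple_test
    class HelloTest(rfm.RegressionTest):
        # Run anywhere: no system- or environment-specific restrictions.
        valid_systems = ['*']
        valid_prog_environs = ['*']
        # ReFrame compiles this source with the selected programming environment.
        sourcepath = 'hello.c'

        @sanity_function
        def assert_hello(self):
            # The check passes only if the greeting appears on stdout.
            return sn.assert_found(r'Hello, World\!', self.stdout)

Running this with something like "reframe -c hello.py -r" exercises the compiler toolchain and runtime on the host, which is exactly the sort of smoke test the devel/py-reframe port is meant to make routine.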
*I would appreciate feedback on:*

 * Whether there are other HPC-relevant software packages I should target.
 * Any pitfalls or best practices to keep in mind while scaling this effort.
 * Potential co-maintainers or testers interested in this space.


My aim is to make FreeBSD a serious option for scientific computing and large-scale HPC environments, and I welcome any input from the community.

I am very much looking forward to this, so thank you! Provided I can get the resources to test, I would be more than happy to do so and contribute in any way I can. If you can share port patches as you go along, I'd be happy to give them a try. Thanks again!

Regards,
Janky Jay, III
