For those who couldn't attend, it was a busy hour (and a half):

On Fri, Jan 09, 2026 at 12:35:45AM +0100, Mark Wielaard wrote:
> 2025 has been a good year for Sourceware infrastructure. Thanks to
> some generous donations we got a larger server, which has been
> installed into a new datacenter with a VM-first setup which has been a
> great step forward for our security isolation story.
> 
> Come and discuss the next steps in 2026. Putting more services into
> their own dedicated VMs. Getting server2 and server3 into the new
> datacenter (early February). And decide how we are going to use the
> other new OSUOSL sourceware-builder3 (2x28 core, 112 thread, 768GB
> RAM) server for buildbot builders and forgejo action runners.

Thomas has been investigating several issues with inbox.sourceware.org.
We discussed making the full config public, putting inbox into its own
VM with newer versions of public-inbox and xapian, and then
reimporting/reindexing all 165 mailing lists.
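
For reference, the per-list reimport/reindex step would look roughly
like this (the list name, paths and address below are placeholders
for illustration, not our actual config):

    # initialize a fresh v2 inbox for one list
    public-inbox-init -V2 gcc-patches /srv/inbox/gcc-patches \
        https://inbox.sourceware.org/gcc-patches \
        gcc-patches@gcc.gnu.org
    # after reimporting the archive, rebuild the xapian indexes
    public-inbox-index --reindex /srv/inbox/gcc-patches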

There was some discussion of Reply-To and/or From munging on
gcc-patches (and other patch lists). Claudio would like to set
Reply-To in batrachomyomachia so that replies don't go to the bot but
to the mailing list and all subscribers. There was some debate on
whether From and/or Reply-To munging for mailing lists would be
helpful or not.
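
For the bot case that would simply mean adding a header like the
following (list address just as an example) to the mails it sends:

    Reply-To: gcc-patches@gcc.gnu.org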

For the VMs we discussed doing on-site backups through logical volume
(copy-on-write) snapshots. server1 has 14TiB of (raid6) disk space
(compared to just 4TiB on server2 and server3); currently only 4TiB
is used, so there is 10TiB free. Each VM has one logical volume that
can be partitioned (or has its own lvm setup) inside the guest VM,
and which can be extended if more space is needed. After the meeting
Frank wrote a vm-backup script that fsfreezes a domain, creates an lv
snapshot and fsthaws the VM. This can be put in a crontab on the host
for regular backups, with old snapshots being copied from the host to
some other machines and/or deleted.
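
The core of such a script (just a sketch of the approach, not Frank's
actual script; the domain and volume group names are placeholders)
would be along the lines of:

    #!/bin/sh
    set -e
    dom=$1
    # freeze the guest filesystems so the snapshot is consistent
    virsh domfsfreeze "$dom"
    # thaw again no matter what happens below
    trap 'virsh domfsthaw "$dom"' EXIT
    # take a copy-on-write snapshot of the guest's logical volume
    lvcreate --snapshot --size 10G \
        --name "$dom-$(date +%Y%m%d)" "/dev/vg0/$dom"

plus a host crontab entry (path hypothetical), e.g. nightly at 3am:

    0 3 * * * /usr/local/sbin/vm-backup vm-inbox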

Mark is going to set up a new forge VM that is bigger and has more
resources than the current one (the RH OSCI VM in the old
datacenter). This one should mirror the forge-stage one and be set up
with Ansible (like forge-stage already is). Other (future) VM setups
are described on https://sourceware.org/sourceware-wiki/OpenHouse2025/

DNS and ipv6 networking. We now have ipv6 in the new datacenter. http
and dns are already listening on it. smtp followed right after the
meeting and should now send and receive email through ipv6.
gcc.gnu.org and valgrind.org only have an ipv4 address at the
moment. Ian and Julian should probably make them CNAMEs for
sourceware.org. Cary and Pono handled the transfer of the
dwarfstd.org domain, which (together with the elfutils.org domain)
should get updated nameservers (currently they only have server2 and
server3, which will get moved to the new datacenter and get new
addresses in February).
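
For anyone who wants to check along at home, dig shows which names
already have ipv6 (AAAA) records and where the nameservers point:

    # AAAA present: reachable over ipv6
    dig +short AAAA sourceware.org
    # no AAAA yet at the time of the meeting
    dig +short AAAA gcc.gnu.org
    dig +short AAAA valgrind.org
    # nameservers that still need updating
    dig +short NS dwarfstd.org
    dig +short NS elfutils.org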

Claudio and Pietro worked on a Containerfile for base forge actions.
https://gcc.gnu.org/cgit/gcc/tree/contrib/ci-containers/README
https://gcc.gnu.org/cgit/gcc/tree/.forgejo/workflows/build-containers.yaml
Now the question is how to make the container image usable for the
action runners. Should they be created on the runners, or stored
(pushed) to a repository, either external or on forge.sourceware.org
itself (so they can be pulled back into the runners)? This brings up
the question of whether we want to provide an image repository
service to the public or if this can be purely local on the
runner/builder. More discussion at
https://forge.sourceware.org/forge/forge/issues/11
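
If the forge route is chosen, Forgejo does come with a built-in
package/container registry, so (as a hypothetical sketch, image name
and owner made up) the flow would be roughly:

    # build from the Containerfile and push to the forge's registry
    podman build -t forge.sourceware.org/forge/ci-base:latest .
    podman push forge.sourceware.org/forge/ci-base:latest
    # on an action runner, pull it back
    podman pull forge.sourceware.org/forge/ci-base:latest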

We really seem to be on top of the whole scraper bots thing and the
git attack that caused some trouble during the end of year
vacation. But we could still look at whether we can make https git
pull more efficient. The efficiency concern really only applies to
gcc.git because that is so freaking big compared to anything else. We
could experiment with Joseph's repacking instructions to see if that
helps.
https://inbox.sourceware.org/[email protected]/
128GB RAM and an hour of cpu are fine (vm01 uses just 56 cpus, half
of those available, and one third of memory, 500GiB out of 1.5TiB).
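
The kind of repack in question (the parameters here are illustrative,
see Joseph's mail above for the real ones) would look something like:

    # aggressively repack gcc.git into a single well-deltified pack,
    # with bitmaps to speed up later fetches/clones over https
    git -C /git/gcc.git repack -a -d -f \
        --window=250 --depth=50 --write-bitmap-index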

Mark created a script to run for the next account cleanup. It lists
the current developers and emeritus accounts per group so it is clear
who (still) has and who doesn't have write access to the project
repositories. The accountinfo page has also been updated with
information on how to reactivate "emeritus" accounts:
https://sourceware.org/sourceware/accountinfo.html
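
The per-group listing itself is the easy part; a minimal sketch
(group name is just an example, and this is not Mark's actual
script):

    # list the members of a project's unix group, one per line
    getent group gcc | cut -d: -f4 | tr ',' '\n' | sort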
