Re: No mutter on master

2024-07-08 Thread Maxim Cournoyer
Hi,

Andreas Enge  writes:

> Hello,
>
> currently I cannot reconfigure my laptop with the Xfce desktop environment
> due to mutter not building:
> Ok:                166
> Expected Fail:       5
> Fail:                1
> Unexpected Pass:     0
> Skipped:             0
> Timeout:             0
> (this was on a local build).
>
> Given the amount of output, I find it difficult to see which test failed.
>
> I am on commit 972c06dc79641864b05590b2cd905cc8b810062b from yesterday.
> As far as I can see, the corresponding derivation has not been built by QA
> (following is my breadcrumb trail of links followed from the data service):
>https://data.guix.gnu.org/revision/972c06dc79641864b05590b2cd905cc8b810062b
>
> https://data.guix.gnu.org/revision/972c06dc79641864b05590b2cd905cc8b810062b/packages
>
> https://data.guix.gnu.org/revision/972c06dc79641864b05590b2cd905cc8b810062b/packages?locale=en_US.UTF-8&search_query=mutter&field=version&field=synopsis&after_name=&limit_results=100
>
> https://data.guix.gnu.org/revision/972c06dc79641864b05590b2cd905cc8b810062b/package/mutter/44.9?locale=en_US.UTF-8
>
> https://data.guix.gnu.org/gnu/store/gn3qgnzix2lnq4cg95samsy07zp0a8qr-mutter-44.9.drv
>
> Going back in time, the last successful build I could find was for commit
> ab41f5ec1cf559708165e1cd28e15538e6a197d6 of June 30. The next entry in the
> dataservice has status "unknown".
>
> After that, I do not see a commit that strikes me as suspicious as far as
> mutter is concerned.
>
> But quite a few patches have been applied to master recently, and I am
> not sure whether they have gone through QA and the process described here:
>
> https://guix.gnu.org/en/manual/devel/en/html_node/Managing-Patches-and-Branches.html
> Going through QA should normally ensure that dependent packages still
> build and that substitutes are directly available (whereas for some
> recently updated packages there are currently no substitutes); I would
> like to invite all committers to follow this procedure.

Perhaps it was a bunch of patches I had manually applied and built.
They built fine for me, but if the mutter test is flaky, then it could
have slipped through, as it did the first time.

I didn't wait for QA on all of these patches, as some were old and
probably wouldn't have been built anyway, while some newer ones looked
trivial enough.

QA is awesome, but sometimes it takes a lot of time to produce a result,
I assume due to the lack of resources compared to Berlin.  Sorry for not
being patient with machines :-).

-- 
Thanks,
Maxim



Re: python-duckdb stuck in its tests

2024-07-08 Thread Sharlatan Hellseher
Hi,

I've pushed the update together with https://issues.guix.gnu.org/71480.
It fixed the build, check, and sanity-check phases in
ce98c3436c57e7b366a3ec06c47a7e8919c990fb.
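
For anyone hitting the same sanity-check failure on an older checkout:
adbc_driver_duckdb imports adbc_driver_manager, which is not among the
package inputs.  A minimal sketch of one possible workaround (not
necessarily what the commit above does) would be to drop that optional
sub-package after installation, with a phase like this added to the
#:phases gexp shown in the quoted patch below:

--8<---cut here---start->8---
;; Hypothetical sketch; the phase name is made up and the actual fix in
;; ce98c343 may well be different.
(add-after 'install 'remove-optional-adbc-bindings
  (lambda _
    ;; adbc_driver_duckdb requires adbc_driver_manager, which is not an
    ;; input here; removing it keeps the sanity check to the modules
    ;; that duckdb itself provides.
    (for-each delete-file-recursively
              (find-files (string-append #$output "/lib")
                          "^adbc_driver_duckdb$"
                          #:directories? #t))))
--8<---cut here---end--->8---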

Thanks,
Oleg

On Sun, 30 Jun 2024 at 11:02, Sharlatan Hellseher  wrote:
>
> Hi Andreas,
>
> It looks like updating to 1.0.0 has no issue with passing tests on my
> local checkout after applying this patch:
>
> --8<---cut here---start->8---
> @@ -23334,20 +23334,24 @@ (define-public python-chevron
>  (define-public python-duckdb
>(package
>  (name "python-duckdb")
> -(version "0.8.1")
> +(version "1.0.0")
>  (source (origin
>(method url-fetch)
>(uri (pypi-uri "duckdb" version))
>(sha256
> (base32
> -"1sgfmii5xlkbx3hzyjxg80gl2ni1rxpabahl4gww9by2mgs3fkd5"
> +"0lyl6di1c7j31i2mk384j711kzyyf9rjd3nqx5mbgmf7gfvmk852"
>  (build-system pyproject-build-system)
>  (arguments
>   (list
>#:test-flags
>'(list "--ignore=tests/slow/test_h2oai_arrow.py"
> - ;; Don't install anything, thank you.
> - "-k" "not test_install_non_existent_extension")
> + "-k" (string-append
> +   ;; Don't install anything, thank you.
> +   "not test_install_non_existent_extension"
> +   ;; assert not ["error: duckdb failed to find <..>
> +   ;; site-packages/duckdb/__init__.py
> +   " and not test_generated_stubs"))
>#:phases
>#~(modify-phases %standard-phases
>;; Tests need this
> --8<---cut here---end--->8---
>
> But the sanity check is not happy with:
> --8<---cut here---start->8---
> starting phase `sanity-check'
> validating 'duckdb'
> /gnu/store/cvr5g1ivssavn3v5bhnbnpkm2zpwyj2s-python-duckdb-1.0.0/lib/python3.10/site-packages
> ...checking requirements: OK
> ...trying to load module adbc_driver_duckdb: ERROR:
> Traceback (most recent call last):
>   File "/gnu/store/iqsjkp55pcx5bfcp2jm9yj5rlx9a0whd-sanity-check.py",
> line 73, in <module>
> importlib.import_module(name)
>   File
> "/gnu/store/4ncpi13lpwj8fk3j7adgnr5mi90dz311-python-3.10.7/lib/python3.10/importlib/__init__.py",
> line 126, in import_module
> return _bootstrap._gcd_import(name[level:], package, level)
>   File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
>   File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
>   File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
>   File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
>   File "<frozen importlib._bootstrap_external>", line 883, in exec_module
>   File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
>   File
> "/gnu/store/cvr5g1ivssavn3v5bhnbnpkm2zpwyj2s-python-duckdb-1.0.0/lib/python3.10/site-packages/adbc_driver_duckdb/__init__.py",
> line 24, in <module>
> import adbc_driver_manager
> ModuleNotFoundError: No module named 'adbc_driver_manager'
> ...trying to load module duckdb: OK
> ...trying to load module duckdb-stubs: OK
> error: in phase 'sanity-check': uncaught exception:
> %exception #<&invoke-error program: "python" arguments:
> ("/gnu/store/iqsjkp55pcx5bfcp2jm9yj5rlx9a0whd-sanity-check.py"
> "/gnu/store/cvr5g1ivssavn3v5bhnbnpkm2zpwyj2s-python-duckdb-1.0.0/lib/python3.10/site-packages")
> exit-status: 1 term-signal: #f stop-signal: #f>
> phase `sanity-check' failed after 0.2 seconds
> command "python"
> "/gnu/store/iqsjkp55pcx5bfcp2jm9yj5rlx9a0whd-sanity-check.py"
> "/gnu/store/cvr5g1ivssavn3v5bhnbnpkm2zpwyj2s-python-duckdb-1.0.0/lib/python3.10/site-packages"
> failed with status 1
> builder for 
> `/gnu/store/8i8z6fynmyg35x48dln3lgl1z1vqiygy-python-duckdb-1.0.0.drv'
> failed with exit code 1
> build of /gnu/store/8i8z6fynmyg35x48dln3lgl1z1vqiygy-python-duckdb-1.0.0.drv
> failed
> View build log at
> '/var/log/guix/drvs/8i/8z6fynmyg35x48dln3lgl1z1vqiygy-python-duckdb-1.0.0.drv.gz'.
> --8<---cut here---end--->8---
>
> Oleg
>
> On Sun, 30 Jun 2024 at 09:23, Andreas Enge  wrote:
> >
> > Hello the Python team,
> >
> > python-duckdb sticks out on the build farm, since it apparently gets blocked
> > in its tests after spending quite some CPU time for building, and finally
> > it times out after a day. It seems to happen consistently over several
> > evaluations. The last lines are:
> > tests/fast/test_multithread.py::TestDuckMultithread::test_transaction[pandas0]
> >  PASSED [  8%]
> > tests/fast/test_multithread.py::TestDuckMultithread::test_transaction[pandas1]
> >  PASSED [  8%]
> > tests/fast/test_multithread.py::TestDuckMultithread::test_df_append[pandas0]
> >  PASSED [  8%]
> > tests/fast/test_multithread.py::TestDuckMultithread::test_df_append[pandas1]
> >
> > (see 
> > https://bordeaux.guix.gnu.org/build/56359106-e402-49d2-8c6b-2f35de90b7da/log)
> >
> > Could you maybe have a look?
> >
> > Andreas
> >
>
>
> --
> VCS: https://github.incerto.xyz/; https://git.sr.ht/~hellseher/
> GPG: 9847 81DE 689C 21C2 6418 0867 76

Re: Sustainable funding and maintenance for our infrastructure

2024-07-08 Thread Vincent Legoll
Hello,

On Mon, Jul 8, 2024 at 3:47 PM Vagrant Cascadian  wrote:

> This may be a little wild, but what are the downsides to doing some
> combination of all of the above?
>
> A mixed strategy could reduce ... the upfront cost of buying and hosting
> hardware (#1), the ongoing costs of renting (#2), and dependence on the
> generosity of a third party for sponsored hardware & hosting (#3).
>
> It seems like any strategy should have some redundancy (e.g. multiple
> independent build farms) so that a failure in one datacenter does not
> effectively take down the whole network...
>

That would be my opinion too.

But for cloud renting, I would first research whether there are associated
network or other costs, because the computing is often priced cheaply only
to lure you into (sometimes prohibitive) hidden costs.


> ... though they are not full replacements for each other.
>

Maybe that should be treated as a bug/issue.

-- 
Vincent Legoll


Re: packaging: what to do about these requiring systemd?

2024-07-08 Thread Andy Tai
There are now cases of packages linking to systemd, not just checking
which init scripts to install.
Should these be considered bugs in such packages (and should we ask the
package authors to make systemd optional)?

On Mon, Jul 8, 2024 at 4:45 AM Ricardo Wurmus  wrote:
>
> Andy Tai  writes:
>
> > Some packages require systemd;
>
> Some of these packages are perfectly happy with elogind.  Others only
> check to decide what files to install (systemd unit files or those for
> another init system).
>
> --
> Ricardo



-- 
Andy Tai, a...@atai.org, Skype: licheng.tai, Line: andy_tai, WeChat: andytai1010
Year 2024 民國113年
自動的精神力是信仰與覺悟
自動的行為力是勞動與技能



Re: Sustainable funding and maintenance for our infrastructure

2024-07-08 Thread Enrico Schwass
Hi Guix

If you are looking for hosting services,
I have had good experience with netcup.de.  They also have ARM machines.
My three machines have great uptime.

Bye
Enno


> On 2024/07/08 at 18:28, Efraim Flashner wrote:
> 
> On Tue, Jul 02, 2024 at 04:24:06PM +0200, Ludovic Courtès wrote:
>> Hello Guix!
>> 
>> We (Andreas, Chris, Ricardo, Romain, and myself) were having a
>> discussion about what it would take to set up a build farm similar to
>> what’s behind ci.guix: roughly 30 x86_64 servers, with 32-core/64-thread
>> CPUs and 128 GiB of RAM.  The reason for this discussion is that we were
>> thinking that we should not take our existing build farms for granted
>> and be prepared for the future.
>> 
>> The various options and back-of-the-envelope estimates we came up with
>> are as follows:
>> 
>>  1. Buying and hosting hardware:
>>  250k€ for hardware
>>  3k€/month (36k€/year)
>> 
>>  2. Renting machines (e.g., on Hetzner):
>>  6k€/month (72k€/year)
>> 
>>  3. Sponsored:
>>  get hardware and/or hosting sponsored (by academic institutions or
>>  companies).
>> 
>> Option #1 gives us “full control”, the downside being that it’s a lot of
>> work and a real burden (get crowdfunding for the initial funding, later
>> on to sustain funding to cover hosting, ensure Guix Foundation is up to
>> the task of managing the assets, and of course to take care of the
>> machines for their entire lifecycle).
>> 
>> Option #2 gives us less control (we don’t know exactly what hardware is
>> being used and have to trust the company hosting the machines).  The
>> upside is that it’s much less work over time (the company is responsible
>> for upgrading hardware) and less work initially (no need to raise as
>> much money to buy hardware).
>> 
>> Option #3 potentially gives less control (depending on the project’s
>> relation with the hosting organization) and makes the project dependent
>> on the sponsor and/or person(s) in touch with them.  On the upside, it
>> could significantly reduce costs (potentially to 0€).
>> 
>> 
>> This is an important topic for the project, one we should plan for:
>> socially, financially, technically.  This takes time, which is why
>> preparation is needed.
>> 
>> What do people think?
>> 
>> Ludo’ & co.
> 
> Looking at Hetzner, they have an option to rent a dedicated ARM server
> with 80 cores/threads with 256GB of RAM and 2x3.84 TB NVMe drives for
> under €300/month and a €94 setup charge. Correct me if I'm wrong, but
> that one box is ~20x our current active aarch64/armv7 capacity.
> 
> Also looking at our current infrastructure at MDC, part of the reason we
> have so many x86_64 machines is because that's what was bought with the
> donated money, not because we actually needed quite that many, so some
> of the numbers might be higher than we actually need.
> 
> -- 
> Efraim Flashner  אפרים פלשנר
> GPG key = A28B F40C 3E55 1372 662D  14F7 41AA E7DC CA3D 8351
> Confidentiality cannot be guaranteed on emails sent or received unencrypted
> 



Re: Sustainable funding and maintenance for our infrastructure

2024-07-08 Thread Efraim Flashner
On Tue, Jul 02, 2024 at 04:24:06PM +0200, Ludovic Courtès wrote:
> Hello Guix!
> 
> We (Andreas, Chris, Ricardo, Romain, and myself) were having a
> discussion about what it would take to set up a build farm similar to
> what’s behind ci.guix: roughly 30 x86_64 servers, with 32-core/64-thread
> CPUs and 128 GiB of RAM.  The reason for this discussion is that we were
> thinking that we should not take our existing build farms for granted
> and be prepared for the future.
> 
> The various options and back-of-the-envelope estimates we came up with
> are as follows:
> 
>   1. Buying and hosting hardware:
>   250k€ for hardware
>   3k€/month (36k€/year)
> 
>   2. Renting machines (e.g., on Hetzner):
>   6k€/month (72k€/year)
> 
>   3. Sponsored:
>   get hardware and/or hosting sponsored (by academic institutions or
>   companies).
> 
> Option #1 gives us “full control”, the downside being that it’s a lot of
> work and a real burden (get crowdfunding for the initial funding, later
> on to sustain funding to cover hosting, ensure Guix Foundation is up to
> the task of managing the assets, and of course to take care of the
> machines for their entire lifecycle).
> 
> Option #2 gives us less control (we don’t know exactly what hardware is
> being used and have to trust the company hosting the machines).  The
> upside is that it’s much less work over time (the company is responsible
> for upgrading hardware) and less work initially (no need to raise as
> much money to buy hardware).
> 
> Option #3 potentially gives less control (depending on the project’s
> relation with the hosting organization) and makes the project dependent
> on the sponsor and/or person(s) in touch with them.  On the upside, it
> could significantly reduce costs (potentially to 0€).
> 
> 
> This is an important topic for the project, one we should plan for:
> socially, financially, technically.  This takes time, which is why
> preparation is needed.
> 
> What do people think?
> 
> Ludo’ & co.

Looking at Hetzner, they have an option to rent a dedicated ARM server
with 80 cores/threads with 256GB of RAM and 2x3.84 TB NVMe drives for
under €300/month and a €94 setup charge. Correct me if I'm wrong, but
that one box is ~20x our current active aarch64/armv7 capacity.

Also looking at our current infrastructure at MDC, part of the reason we
have so many x86_64 machines is because that's what was bought with the
donated money, not because we actually needed quite that many, so some
of the numbers might be higher than we actually need.

-- 
Efraim Flashner  אפרים פלשנר
GPG key = A28B F40C 3E55 1372 662D  14F7 41AA E7DC CA3D 8351
Confidentiality cannot be guaranteed on emails sent or received unencrypted




Re: Sustainable funding and maintenance for our infrastructure

2024-07-08 Thread Vagrant Cascadian
On 2024-07-02, Ludovic Courtès wrote:
> We (Andreas, Chris, Ricardo, Romain, and myself) were having a
> discussion about what it would take to set up a build farm similar to
> what’s behind ci.guix: roughly 30 x86_64 servers, with 32-core/64-thread
> CPUs and 128 GiB of RAM.  The reason for this discussion is that we were
> thinking that we should not take our existing build farms for granted
> and be prepared for the future.
>
> The various options and back-of-the-envelope estimates we came up with
> are as follows:
>
>   1. Buying and hosting hardware:
>   250k€ for hardware
>   3k€/month (36k€/year)
>
>   2. Renting machines (e.g., on Hetzner):
>   6k€/month (72k€/year)
>
>   3. Sponsored:
>   get hardware and/or hosting sponsored (by academic institutions or
>   companies).

This may be a little wild, but what are the downsides to doing some
combination of all of the above? Maybe higher bandwidth requirements
between the various pieces of infrastructure presumably being hosted in
different locations? Maybe also a little more complexity in the overall
setup?

A mixed strategy could reduce ... the upfront cost of buying and hosting
hardware (#1), the ongoing costs of renting (#2), and dependence on the
generosity of a third party for sponsored hardware & hosting (#3).

It seems like any strategy should have some redundancy (e.g. multiple
independent build farms) so that a failure in one datacenter does not
effectively take down the whole network...

In a sense, we already have some of that, with ci.guix.gnu.org and
bordeaux.guix.gnu.org, and also the new North American build farm
... though they are not full replacements for each other.

live well,
  vagrant




Re: Sustainable funding and maintenance for our infrastructure

2024-07-08 Thread Ricardo Wurmus
Hi Simon,

> On Tue, 02 Jul 2024 at 16:24, Ludovic Courtès  wrote:
>
>>   The reason for this discussion is that we were
>> thinking that we should not take our existing build farms for granted
>> and be prepared for the future.
>
> Could you explain the rationale?  I understand and fully agree that
> sustainable funding and maintenance for infrastructure are key topics
> for the project.  Do we need to move ci.guix soon?  Is this related to
> Ricardo's announcement [1]?

There is no urgency.  The build farm at the MDC isn't going anywhere.

But it would be unwise for the project to assume that it will always
stay this way.  In the past we've also had some minor issues outside of
our immediate control that are attributable to hosting these servers at
a research institute, for example a trigger-happy firewall, or blanket
bans on large IP address ranges.

In the past we were given the opportunity to extend and upgrade the
build farm, but we cannot count on good fortune like this.  As a
project it would be wise to continue our efforts to diversify our
distributed build farm.

>> The various options and back-of-the-envelope estimates we came up with
>> are as follows:
>>
>>   1. Buying and hosting hardware:
>>   250k€ for hardware
>>   3k€/month (36k€/year)
>>
>>   2. Renting machines (e.g., on Hetzner):
>>   6k€/month (72k€/year)
>>
>>   3. Sponsored:
>>   get hardware and/or hosting sponsored (by academic institutions or
>>   companies).
>
> Well, on paper, option #1 appears appealing to me, but how do we get
> this 250k€?  Somehow, 250k€ would mean being able to secure 3k€/month
> for almost 7 years, right?
>
> Unless we have a large donation that I am not aware of, I do not see how
> it would be possible to commit while being sure of securing 3k€/month for
> almost 7 years, considering the project is 12 years old.
>
> In other words, option #1 does not appear to me to be an option.

Correct.  I think it is a good reality check to see just how much value
there is (or was in 2019) in all these servers and what our realistic
options are to recreate this when these machines are eventually
decommissioned.  I don't see option #1 as realistic; not only is it a
prohibitively large up-front cost, it is also a serious continuous time
and money sink.  We'd also have to constantly play our cards well and
trade old hardware in for new hardware lest we be stuck with a metric
ton of e-waste.

> Option #2 could be a temporary option for a short time.  But again,
> that’s something.

I think option #2 is not actually terrible.  We like to say that the
cloud is just other people's machines, and our response to that is
aversion to a real or perceived loss of control.  But I'd like to put
this in perspective by asking how much control we *actually* have over
the build farm at the MDC right now.  In practice *I* have some
semblance of control over these machines because I have access to the
data centre.  For the most part, however, I treat these servers as warm
MDC furniture.

Yes, we'd lose a few more options when renting hardware via Hetzner (or
even the well-dressed monocled elephant over there: AWS), but I think we
should think carefully about how valuable our sacrifices are in exchange
for the practical advantages of not being stuck with a rack full of
industrial hardware.

Option #2 is rather quick to set up and quick to abandon should we run
out of money.  It does, however, depend on continuous donations, which
we are currently unable and possibly even unwilling to solicit.

-- 
Ricardo



Re: packaging: what to do about these requiring systemd?

2024-07-08 Thread Ricardo Wurmus
Andy Tai  writes:

> Some packages require systemd;

Some of these packages are perfectly happy with elogind.  Others only
check to decide what files to install (systemd unit files or those for
another init system).
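
For the first case, the usual Guix approach is to build against elogind
instead.  Below is a minimal, purely illustrative sketch: the package
"foo" and the "-Dlogind-provider=elogind" flag are hypothetical, and the
real flag depends on the upstream build system.

--8<---cut here---start->8---
;; Assumes (guix packages), (guix gexp), (guix utils) and
;; (gnu packages freedesktop) are in scope; "foo" is a placeholder
;; package, not an actual recipe.
(define-public foo-with-elogind
  (package
    (inherit foo)
    (name "foo-with-elogind")
    ;; Swap the systemd input for elogind...
    (inputs (modify-inputs (package-inputs foo)
              (replace "systemd" elogind)))
    ;; ...and point the build system at it.
    (arguments
     (substitute-keyword-arguments (package-arguments foo)
       ((#:configure-flags flags #~'())
        #~(cons "-Dlogind-provider=elogind" #$flags))))))
--8<---cut here---end--->8---

Packages in the second category can usually keep systemd out of their
inputs entirely.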

-- 
Ricardo