Should wsl-boot-program create XDG_RUNTIME_DIR?

2022-11-06 Thread dan

Hello guix,

Ever since the WSL image was pushed to the master branch, I've been 
spending time experimenting with it.  It almost runs smoothly, except 
for two points:


1. When logged in, there is a warning that says:
 > warning: XDG_RUNTIME_DIR doesn't exists, on-first-login script won't 
execute anything.  You can check if xdg runtime directory exists, 
XDG_RUNTIME_DIR variable is set to appropriate value and manually 
execute the script by running '$HOME/.guix-home/on-first-login'


The value of $XDG_RUNTIME_DIR is /run/user/$UID, but the /run/user 
directory is empty.  I believe the /run directory is created on WSL's 
side, and there is a step remounting it[1].


This also makes home Shepherd services unusable.  Although I could 
manually create the directory, perhaps it would be better to do the 
work within `wsl-boot-program', the wrapper that makes the login shell 
work properly on WSL.
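
Something along these lines, run by the wrapper before it execs the 
login shell, might do.  This is only a sketch, not the actual wsl2.scm 
code; in particular, how the wrapper obtains the user's passwd entry 
(`pwd' below) is an assumption:

    ;; Sketch: create XDG_RUNTIME_DIR before exec'ing the login shell.
    ;; Assumes /run/user already exists (it does, per the above) and
    ;; that PWD is the passwd entry of the user logging in.
    (let* ((uid (passwd:uid pwd))
           (dir (string-append "/run/user/" (number->string uid))))
      (unless (file-exists? dir)
        (mkdir dir #o700)              ;the XDG spec requires mode 0700
        (chown dir uid (passwd:gid pwd)))
      (setenv "XDG_RUNTIME_DIR" dir))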


2. WSLg is usable, but the mesa package in the Guix repo isn't built 
with the d3d12 gallium driver[2].  So when opening GUI software from 
Guix on WSL, it renders through llvmpipe (using the CPU, not the GPU).


I'm not sure whether building mesa with the d3d12 driver enabled by 
default is a good idea; maybe we could create a new package instead?
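
If a separate package is preferred, a variant along these lines might 
work.  This is only a sketch: it assumes mesa's #:configure-flags is a 
G-expression list (the #~/#$ would need adjusting otherwise), it 
simply appends d3d12 to whatever -Dgallium-drivers= value is already 
in gl.scm, and the d3d12 driver likely needs extra inputs (such as the 
DirectX headers) that are omitted here:

    (use-modules (guix gexp) (guix packages) (guix utils)
                 (gnu packages gl))

    ;; Hypothetical variant; the flag rewrite follows Mesa's
    ;; -Dgallium-drivers= meson option.
    (define-public mesa-d3d12
      (package
        (inherit mesa)
        (name "mesa-d3d12")
        (arguments
         (substitute-keyword-arguments (package-arguments mesa)
           ((#:configure-flags flags)
            #~(map (lambda (flag)
                     (if (string-prefix? "-Dgallium-drivers=" flag)
                         (string-append flag ",d3d12")
                         flag))
                   #$flags))))))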


[1] 
https://git.savannah.gnu.org/cgit/guix.git/tree/gnu/system/images/wsl2.scm#n87

[2] https://git.savannah.gnu.org/cgit/guix.git/tree/gnu/packages/gl.scm#n332

--
dan




Re: Compile skribilo doc containing guix channel references

2022-11-06 Thread Phil
Hi Simon,

zimoun writes:

> I am missing how the Skribilo file is compiled.  At “guix pull” time?
> Or manually when running the script compile-docs.scm?
>

At the moment compilation is only manual and from inside the repo clone:
guix environment skribilo guile -- guix repl -- compile-docs.scm

Making it part of guix pull is an interesting idea given this is where
standard package compilation takes place.  For this to work any code
references would have to be of the form:
(use-modules (my-module foo))
(package->code my-package)

rather than direct access to scheme files in the repo clone using:
(source :file "foo.scm" :definition 'my-package)

The advantage of using package->code is that the documentation
compilation is then decoupled from the repo clone, so the document can
be built anywhere Guix can run (just like guix pull).
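
For instance, a document could embed the source form of a package 
without touching the clone at all.  A minimal sketch, using the real 
package->code from (guix import print), with GNU hello standing in for 
a channel package:

    (use-modules (guix import print)
                 (gnu packages base)
                 (ice-9 pretty-print))

    ;; Print the package definition as an sexp, reconstructed from the
    ;; package object rather than read from a .scm file in the clone.
    (pretty-print (package->code hello))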

The two disadvantages are: 1) files in the repo that are not part of
the Guix module system cannot be referenced; 2) the pretty-printing of
packages generated using package->code does not perfectly reproduce the
format in the repo clone - although they are functionally identical.

For example, inputs are referenced fully qualified in situ - e.g.
(@ (foo bar) baz) - rather than just referencing baz in the inputs and
importing (foo bar) at the top of the module.  Comments, etc., are also
lost.

If there were a nice way of referencing the uncompiled Scheme files in
the channel without needing the repo cloned, that would be perfect
IMHO.  I don't think this is possible, though?



Re: guix pypi-ls

2022-11-06 Thread zimoun
Hi,

On Sat, 05 Nov 2022 at 12:47, jgart  wrote:

> I have this one off script I call `pypi-ls` for listing tar files on
> standard output from pypi to see if they contain tests:
>
> #!/usr/bin/env sh
>
> exec wget -qO- $1 | tar xvz

Well, I think you can avoid the extraction.  Something like:

   exec wget -qO- $1 | tar tz

I have a tiny Guix script that extracts the source URL from a package,
but I am not happy with it.  In other words, from my point of view, the
most annoying part is forming the PyPI URL, i.e., getting $1. :-) For
instance, I have something like,

guix download $(guix repl -- pypi.scm pygls@.13) | tar tz

and we could imagine a better UI.  It would be nice to have an
extension, say “guix upstream”, that does some of the left part of the
pipe.
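
A minimal sketch of what such a pypi.scm helper could look like; the 
spec parsing and invocation via 'guix repl' are assumptions, but 
pypi-uri is the real procedure from (guix build-system pypi) that 
forms the sdist URL:

    ;; pypi.scm -- print the PyPI source URL for a "name@version" spec.
    (use-modules (guix build-system pypi)
                 (ice-9 match))

    (match (string-split (cadr (command-line)) #\@)
      ((name version)
       (display (pypi-uri name version))
       (newline)))

With that, listing a tarball's contents becomes something like
wget -qO- $(guix repl -- pypi.scm foo@1.0) | tar tz, where foo@1.0 is
a made-up spec.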


Cheers,
simon




Re: Compile skribilo doc containing guix channel references

2022-11-06 Thread zimoun
Hi,

On Sat, 05 Nov 2022 at 19:03, Ludovic Courtès  wrote:

>> This generates the docs with guix imports - see compile-command in the 
>> header:
>> https://github.com/quantiletechnologies/qt-guix/blob/feature/EA-133/compile-docs.scm
>>
>> This is the document - it's less ambitious than my internal version in
>> terms of generating content from guix - but has a few examples of
>> generating content from channels:
>> https://github.com/quantiletechnologies/qt-guix/blob/feature/EA-133/qtdocs/qt-guix.skb

I am missing how the Skribilo file is compiled.  At “guix pull” time?
Or manually when running the script compile-docs.scm?

> Nice!  Including channel info in the document like you do here is
> probably a major use case; it also makes a lot of sense in the context
> of reproducible research workflows.

Ludo, what do you have in mind about reproducible research workflows?


Cheers,
simon



Re: splitting up and sorting commits?

2022-11-06 Thread Csepp


Andreas Enge  writes:

> Hello,
>
> Am Wed, Nov 02, 2022 at 12:05:54AM + schrieb Csepp:
>> * It is very easy for packages to get added before their dependencies, so
>> even though by the end of the commit chain everything builds perfectly
>> fine, there are intermediate commits that can't be tested on their own.
>
> Maybe it can be handled by a different workflow? I usually use the git stash
> to perform a depth first traversal of the dependency graph like so:
> - Add package A. Try to build it and see that it needs dependency B.
>   "git stash".
> - Add package B *instead*. Try to build it...
> (da capo ad libitum)
> - "git commit" with package B.
> - "git stash pop". Try to build package A etc.
> - "git commit" with package A.
>
> The only drawback is that B and A are often defined next to each other,
> which creates a spurious merge conflict during "git stash pop", but that
> is easy to solve and requires an additional "git stash drop" in the end.
>
> Sometimes I even give up on package A in the end, but then at least B
> has been added to Guix :)
>
> Andreas

I tried that; the merge conflicts drove me nuts, and juggling the stash
stack was a nightmare.

Instead I ended up committing each package individually without caring
about dependencies, then doing an interactive rebase where every
command was "edit".  Whenever I saw warnings about undefined variables,
I:

1. ran git rebase --edit-todo **without** magit, because it made simple
   textual edits needlessly hard (this is why I use Kakoune and Emacs
   side by side);
2. opened .git/rebase-merge/done, cut the last line, and pasted it into
   the todo after its dependency;
3. ran git reset --hard "HEAD^" **ONLY** if HEAD was actually that
   commit; if it was not yet committed because there was a merge
   conflict, I ran git rebase --skip instead.

Yep, figuring out when to reset and when to skip took a few tries. :)



Re: splitting up and sorting commits?

2022-11-06 Thread Andreas Enge
Hello,

Am Wed, Nov 02, 2022 at 12:05:54AM + schrieb Csepp:
> * It is very easy for packages to get added before their dependencies, so
> even though by the end of the commit chain everything builds perfectly
> fine, there are intermediate commits that can't be tested on their own.

Maybe it can be handled by a different workflow? I usually use the git stash
to perform a depth first traversal of the dependency graph like so:
- Add package A. Try to build it and see that it needs dependency B.
  "git stash".
- Add package B *instead*. Try to build it...
(da capo ad libitum)
- "git commit" with package B.
- "git stash pop". Try to build package A etc.
- "git commit" with package A.

The only drawback is that B and A are often defined next to each other,
which creates a spurious merge conflict during "git stash pop", but that
is easy to solve and requires an additional "git stash drop" in the end.

Sometimes I even give up on package A in the end, but then at least B
has been added to Guix :)

Andreas




Re: Release progress, week 3

2022-11-06 Thread Efraim Flashner
On Thu, Oct 27, 2022 at 10:04:13AM -0700, Vagrant Cascadian wrote:
> On 2022-10-27, Ludovic Courtès wrote:
> > Release progress: week 3.
> ...
> >   • Architectures:
> >
> >  - powerpc64le-linux builds are back behind ci.guix, thanks to
> >Tobias!
> ...
> >  - armhf-linux: No progress so far.
> 
> Not sure where this fits into the release process, but I uploaded a git
> snapshot to Debian of guix from commit
> c07b55eb94f8cfa9d0f56cfd97a16f2f7d842652 ... ppc64le still has not built
> yet, and it has the same test suite failures on two 32-bit architectures
> (i386 and armhf):
> 
>   https://buildd.debian.org/status/fetch.php?pkg=guix&arch=armhf&ver=1.3.0%2B26756.c07b5-1&stamp=1666838910&raw=0
>   https://buildd.debian.org/status/fetch.php?pkg=guix&arch=i386&ver=1.3.0%2B26756.c07b5-1&stamp=1666825176&raw=0
> 
> 
> Testsuite summary for GNU Guix 1.3.0.26756-c07b5
> 
> # TOTAL: 2286
> # PASS:  2002
> # SKIP:  274
> # XFAIL: 4
> # FAIL:  6
> # XPASS: 0
> # ERROR: 0
> 
> 
> All 6 of the failures have the same error:
> 
> test-name: channel-news, no news
> ...
> actual-error:
> + (git-error
> +   #<<git-error> code: -1 message: "invalid version 0 on git_proxy_options" 
> class: 3>)
> result: FAIL
> 
> 
> Would love to figure out these issues in time for release!
> 
> 
> Good news is it built reproducibly (at least on amd64; i386, armhf and
> arm64 pending):
> 
>   https://tests.reproducible-builds.org/debian/rb-pkg/unstable/amd64/guix.html
> 
> This is partly because the Debian package works around
> https://issues.guix.gnu.org/20272 by disabling parallelism in the
> build. Sure would be nice if guix was reproducible when built with Guix
> too!

If you want to add a bit more to the mix, you could add riscv64-linux
and powerpc-linux as build targets.  I'm not sure they'd pass the test
suite, though; I don't think they technically have support in the base
linux-libre package, since it expects a kernel config, and they only
really have their own specialized kernels for now.


-- 
Efraim Flashner  אפרים פלשנר
GPG key = A28B F40C 3E55 1372 662D  14F7 41AA E7DC CA3D 8351
Confidentiality cannot be guaranteed on emails sent or received unencrypted




Re: Antioxidant (new rust build system) update - 100% builds

2022-11-06 Thread Efraim Flashner
On Wed, Nov 02, 2022 at 12:20:14PM +0100, Ludovic Courtès wrote:
> Hi!
> 
> Maxime Devos  skribis:
> 
> > 100% (rounded up) of the packages build with antioxidant, though a
> > very few still fail to build:
> > .
> 
> Woohoo!!
> 
> > So far, work on antioxidant has been done in a separate channel for
> > convenience, but given that almost everything builds now, I think it's
> > a good time to start looking into moving it into Guix proper
> > (initially as a branch, as there are some remaining TODOs like
> > e.g. 'why are some of the binaries made with antioxidant larger than
> > with cargo-build-system + fix that').
> >
> > More concretely, this would mean changing the 'runtime'
> > transformations done by 'antioxidant-packages.scm' (in the style of
> > '(guix)Defining Package Variants') to source code transformations
> > ("guix style").
> >
> > IIRC, Ludo' has some "guix style" patches for moving #:cargo-inputs to
> > 'inputs' and such; those could perhaps be used as a basis.
> 
> That’s  but it probably needs work if
> we want it to work reliably on all the packages.  My understanding is
> that we’d need a “flag day” where we’d switch all Rust packages to
> Antioxydant in one commit, is that correct?  Any ideas how to achieve
> the big migration?
> 
> Efraim, thoughts on this?

Would it be possible to create a branch for it on savannah and hack on
the integration there? Then we can make sure everything looks good and
merge it in after everything builds nicely.

-- 
Efraim Flashner  אפרים פלשנר
GPG key = A28B F40C 3E55 1372 662D  14F7 41AA E7DC CA3D 8351
Confidentiality cannot be guaranteed on emails sent or received unencrypted




Re: How long does it take to run the full rustc bootstrap chain?

2022-11-06 Thread Efraim Flashner
On Wed, Oct 26, 2022 at 09:37:32PM +0200, b...@bokr.com wrote:
> Hi,
> 
> On +2022-10-22 09:48:50 -0400, Maxim Cournoyer wrote:
> > Hi,
> > 
> > Félix Baylac Jacqué  writes:
> > 
> > > Hey Guix,
> > >
> > > I'd be curious to know how long it takes to run the full rustc bootstrap
> > > chain on the Guix build farm. I'm sadly not sure how to approach this
> > > problem.
> > >
> > > Is there a way to extract this information from Cuirass or the Guix data
> > > service?
> > >
> > > Félix
> > 
> > It used to be 16 hours on a Ryzen 3900x machine, then it got halved to 8
> > hours with the work to bootstrap from 1.39, and recently we're
> > bootstrapping from 1.54, so it must have been greatly reduced again.
> > 
> > Looking at (gnu packages rust), the mrustc-based bootstrap starts with
> > 1.54.0.  This one is expensive, probably around 1 h 30 or more on a
> > Ryzen 3900x CPU (24 logical CPUs).
> > 
> > The intermediate builds are typically around 15-20 minutes on that
> > machine, with the last one taking a bit more (30 minutes), so the
> > current bootstrap on such a machine should take about:
> > 
> > 1.54.0: 1h30m
> > 1.55.0 - 1.60.0: 6 X 15-20 min ≈ 1h30m-2h
> > 1.60.0: final build with tests and extra tools: 30 min
> > 
> > The total should be around 3h30-4h on a fast modern x86_64 machine.  I
> > suppose it takes berlin about that long to build it.
> > 
> > HTH!
> > 
> > -- 
> > Thanks,
> > Maxim
> > 
> 
> I'm curious what
> --8<---cut here---start->8---
> $ lsblk -o size,model,type,tran,vendor,name | grep -Ei 'ssd|model'; echo; lspci | grep -i nvme
> --8<---cut here---end--->8---
> on your relevant machines would show.
> 
> I opted for the best SSD available for my Purism librem13v4 at the time,
> and was really happy with what seemed like 10x the speed of the SATA SSD
> in my older (but still i7 x86_64) laptop.  Probably really 4-5x faster.
> 
> So above combo command line now gives me
> --8<---cut here---start->8---
>   SIZE MODEL                          TYPE TRAN VENDOR NAME
> 465.8G Samsung SSD 970 EVO Plus 500GB disk nvme        nvme0n1
> 
> 01:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981
> $ 
> --8<---cut here---end--->8---
> 
> What is (or has been) in your machines?  Could your improved times be
> partly from SSD/controller changes?
> 
> There's really a huge difference between SATA and 4-lane PCIe
> (where both ends can handle it, which may require a firmware update or
> may not be available).
> Obviously 4 lanes are also going to be faster than one.

  SIZE MODEL        TYPE TRAN VENDOR NAME
931.5G NVME SSD 1TB disk nvme        nvme0n1

01:00.0 Non-Volatile memory controller: Silicon Motion, Inc. Device 2263 (rev 03)

-- 
Efraim Flashner  אפרים פלשנר
GPG key = A28B F40C 3E55 1372 662D  14F7 41AA E7DC CA3D 8351
Confidentiality cannot be guaranteed on emails sent or received unencrypted

