Re: [core-updates] It would be nice to fix libsndfile CVE-2021-3246 (arbitrary code execution via crafted WAV file)

2023-04-04 Thread Felix Lechner
Hi Leo,

On Tue, Apr 4, 2023 at 7:49 PM Leo Famulari  wrote:
>
> See , which was never applied
> anywhere.

According to the Debian bug for this issue [1], the upstream commit
with the fix is here [2].

[1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=991496#5
[2] 
https://github.com/libsndfile/libsndfile/commit/deb669ee8be55a94565f6f8a6b60890c2e7c6f32

> I guess it's enough to update libsndfile to 1.1.0 on core-updates.

The upstream commit [2] shows that the issue was fixed in libsndfile's
master branch as part of their merge request #713, which made it into
these versions:

1.2.0
1.1.0
1.1.0beta2
1.1.0beta1

It may therefore be better to upgrade directly to 1.2.0, except that I
think there was an understanding that no new features should land on
our core-updates branch at this time.

In that context, I will mention that Repology shows Guix shipping a
vulnerable version [3], while NIST scored the vulnerability as "8.8
HIGH" [4], although we seem to have company.

Kind regards
Felix Lechner

[3] https://repology.org/project/libsndfile/versions
[4] https://nvd.nist.gov/vuln/detail/CVE-2021-3246



[core-updates] It would be nice to fix libsndfile CVE-2021-3246 (arbitrary code execution via crafted WAV file)

2023-04-04 Thread Leo Famulari
See , which was never applied
anywhere. Like I said in that thread, I no longer understand the patch,
but I guess it's enough to update libsndfile to 1.1.0 on core-updates.



Re: [internship]GSoC proposal review period begins today

2023-04-04 Thread Gábor Boskovits
Hello,

Simon Tournier  wrote (on 2023. Apr. 4,
Tue 13:51):

> Hi Gábor,
>
> On Mon, 20 Mar 2023 at 20:47, Gábor Boskovits 
> wrote:
>
> > the proposal submission deadline, which is 4th April, 1800 UTC. This is a
> > hard deadline, contributors not submitting a proposal by this deadline
> are
> > ineligible to participate in this round of GSoC.
>
> Ouch!  Well, I was on holiday and fully offline these past weeks.
> Therefore, I probably missed it.  What is the status?  Any proposal
> around?
>
Don't worry, we have some. I am going to have a look now, as they should
now be final.

Regards,
g_bor

>
>
> Cheers,
> simon
>
>


laminar and docker

2023-04-04 Thread Antonio Carlos Padoan Junior


Hello guixers,

I have been playing with the laminar service; it seems very nice.  However,
in my use case, I need to manipulate Docker containers, so I certainly need
to make laminar's user part of the docker group.  How can I make that happen
gracefully in my Guix system configuration?

In addition, the laminar user seems to be created with a very bare-bones
configuration.  All my laminar bash scripts need to start with a guix
profile activation, otherwise nothing is available in the script, not even
coreutils.  Where can I create a bashrc that is taken into account by
laminar scripts? (if that is possible)

Best regards,
-- 
Antonio



Re: [GSoC 23] distributed substitutes, cost of storage

2023-04-04 Thread Maxime Devos



On 04-04-2023 at 12:53, Attila Lendvai wrote:

Subject:
Re: [GSoC 23] distributed substitutes, cost of storage
From:
Attila Lendvai 
Date:
04-04-2023 12:53

To:
Maxime Devos 
CC:
Vijaya Anand , pukkamustard 
, guix-devel@gnu.org




it's another question whether this mirroring should be enabled by default in 
the clients. probably it shouldn't,


It probably should -- if things aren't mirrored, then it's not p2p; you
would lose the main performance benefit of p2p systems.

More cynically, some p2p systems (e.g. GNUnet) have mechanisms to
disincentivize freeloaders -- clients that aren't being peers will get
worse downloading speed.


any successful p2p solution must have an incentive system that makes attacks 
expensive (freeloading, DoS'ing, censorship, etc). arguably, the most important 
difference between the various solutions is what this incentive system looks 
like.

from a bird's eye view perspective, there are two fundamental architectures of 
p2p storage networks (that i know of):

  1) ipfs-like, or torrent-like, where the nodes register/publish what
 they have in their local store, and other nodes may request it
 from them

  2) swarm-like, where the nodes are responsible for storing whatever
 content "is" in their "neighborhood". (block hashes and node ids
 are in the same domain, so there's a distance metric between a
 block and a node). put another way: Swarm stores not only the
 metadata in the DHT, but also the data itself.

in 1) there's no need to pay for, and to upload content into the network. a 
node just registers as a source for whatever content it has locally, and then 
serves the incoming requests.

but if you have content that you want to make available in 2) then you need to 
make sure that this content gets to a set of distant nodes that will store it. 
this is very different from 1) from a game theoretic perspective, and can't be 
done without some form of payments/accounting.

in 1) it's simpler for a node to share: just give away your storage and 
bandwidth to the network.

in 2) it's more complicated, because if your node is requesting other nodes to 
do stuff, then you're spending a more complex set of resources than just your 
bandwidth, potentially including some crypto coin payments if the balance goes 
way off.


GNUnet is (1) but also more than that, because of the automatic pushing 
to other nodes.  To my understanding it's not (2), but at the same time 
your comment about (2) applies.


Also, this crypto coin balance problem can be avoided by simply not 
basing your P2P system on money (crypto coins or otherwise); it's a 
problem that those systems invented for themselves.



but both cases are fundamentally the same: users are spending their resources, 
and i wouldn't expect that installing a linux distro will start spending my 
network bandwidth, or any other resource than my machine's local resources.


Network bandwidth (and storage) _is_ a local resource.

Also, how are you going to keep your distribution up to date or install 
new software without allowing your distribution to spend network 
bandwidth? -- For non-P2P systems, it is already the case that that 
network bandwidth is spent by the local machine; P2P systems just make 
it more symmetrical and hence fairer.


More to the point, recalling that this is a reply to my statement that 
mirroring should be enabled by default:


>> it's another question whether this mirroring should be enabled by 
default in the clients. probably it shouldn't,

>
> It probably should -- if things aren't mirrored, then it's not p2p; you
> would lose the main performance benefit of p2p systems.
>
> More cynically, some p2p systems (e.g. GNUnet) have mechanisms to
> disincentivize freeloaders -- clients that aren't being peers will get
> worse downloading speed.

... and noticing that you are making a distinction between the resources 
of the user and others:


‘users are spending _their_ resources, and i wouldn't expect that [...] 
will start spending _my_ network bandwidth, [...], _my_ machine [...]’

(emphasis added)

... it appears that your view is that it's ok to spend resources of 
other people even without trying to reciprocate (*), and that it is 
unreasonable to expect reciprocation by default?


(*) I'm not claiming that not reciprocating is always bad -- it's a 
reasonable thing to not do when on a very limited plan.  Rather, the 
point is that reciprocating by default is reasonable and that in 
reasonable circumstances, not reciprocating is unreasonable.


I mean, given how you are a proponent of crypto, you appear to be a 
capitalist, so I'd think you are familiar with the idea that to use 
resources of other people, you need to compensate them (in money like 
with Swarm or in kind like with P2P systems (*)).


(*) I don't consider Swarm to be a P2P system -- Swarm _by design and 
intentionally_ actively maintains a class distinction between customers 
(people paying for storage and 

Re: Google Summer of Code 2023 Inquiry

2023-04-04 Thread Simon Tournier
Hi Kyle,

On Tue, 04 Apr 2023 at 14:32, Kyle  wrote:

>   The CRAN importer, for example, cannot yet detect non-R
> dependencies. So, the profile author has to figure those out for
> themselves. It's still very useful despite not being perfect.  

Yeah, improving the importers is very helpful…

> Sure, but as is shown with "guix import cran" as I previously
> mentioned, it doesn't have to be perfect to be really useful in many
> cases.

…but please note the R ecosystem is probably one of the best around.

Well, I will not extrapolate to other ecosystems such as Python based
on what Lars did with the channel guix-cran [1].

For more details, have a look at this thread [2],

Accuracy of importers?
Ludovic Courtès 
Thu, 28 Oct 2021 09:02:27 +0200

or slide 53 of
https://git.savannah.gnu.org/cgit/guix/maintenance.git/plain/talks/packaging-con-2021/grail/talk.2020.pdf
 
  

In addition, quoting another discussion from [3]:

Well, it strongly depends on the quality of the targeted language
ecosystem.  Some provide enough metadata to rely on for good
automation; for instance, R with CRAN or Bioconductor.

Sadly, many other ecosystems (upstream) do not provide enough
metadata to automatically fill all the package fields, and some manual
tweaks are required.

For example, let's count the number of packages that tweak their
’arguments’ field (from ’#:tests? #f’ to complex phase modifications).
This is far from being a perfect metric, but it is a rough indication
of upstream quality: whether packages cleanly respect their
build system or require Guix adjustments.

Well, I get:

  r: 2093 = 2093 = 1991 + 102 

which is good (only ~5% require ’arguments’ tweaks), but

  python   : 2630 = 2630 = 803  + 1827

is bad (only ~31% do not require an ’arguments’ tweak).

and the analysis can be refined, for instance which keyword ’arguments’
are they tweaked?  I did it [4] for the emacs-build-system:

emacs: 1234 = 1234 = 878  + 356
("phases" . 213)
("tests?" . 144)
("test-command" . 127)
("include" . 87)
("emacs" . 25)
("exclude" . 20)
("modules" . 7)
("imported-modules" . 4)
("parallel-tests?" . 1) 

Considering these 356 packages, 144 modify the keyword #:tests?.  Note
that ’#:tests? #t’ is counted in these 144, and it reads,

$ ag 'tests\? #t' gnu/packages/emacs-xyz.scm | wc -l
117

Ah!  It requires some investigation. :-)
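The counting described above can be sketched in a few lines; the helper below is hypothetical (not Simon's actual script) and simply matches ’#:tests?’ occurrences in package source text, like the `ag` invocation:

```python
import re

def count_tests_tweaks(scm_source):
    """Count #:tests? occurrences in Guix package source text, a rough
    proxy for packages tweaking that argument (mirrors the `ag` count)."""
    return len(re.findall(r"#:tests\? #[tf]", scm_source))

sample = """
(arguments (list #:tests? #f))
(arguments (list #:tests? #t #:test-command '("make" "check")))
"""
print(count_tests_tweaks(sample))  # 2
```

A real analysis would of course walk the s-expressions rather than grep, to distinguish the keyword from occurrences in comments or strings.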

Last, in addition to the ideas for improvement provided by the threads [3,4],
the conclusion is still:

Indeed, it could be worthwhile to identify common sources of the extra
modifications we make compared to the default emacs-build-system.

Yeah, improving the importers is very helpful! :-)

Well, considering that 95% of the current R packages in Guix just work
out of the box from the CRAN metadata, and considering how many packages
guix-cran provides compared to how many packages CRAN provides, we can
roughly extrapolate the meaning of “doesn't have to be perfect” for
other ecosystems such as Python.  Roughly speaking, consider the 30%
of the current Python packages in Guix that work out of the box.

Yeah, these numbers are very partial, and a finer analysis could help in
improving the importers.  But these numbers show that the conclusion
drawn from the CRAN example would not apply as-is to others, IMHO.


1: 
https://hpc.guix.info/blog/2022/12/cran-a-practical-example-for-being-reproducible-at-large-scale-using-gnu-guix/
2: https://yhetil.org/guix/878ryd8we4@inria.fr/#r
3: https://yhetil.org/guix/86cz9kk71y@gmail.com
4: https://yhetil.org/guix/87cz9gunwx@gmail.com


Cheers,
simon



Re: Google Summer of Code 2023 Inquiry

2023-04-04 Thread Kyle




>I do not know what you have in mind with “working satisfiable
>configurations” or with “a variant of the solver”.  To my knowledge,
>this implies some SAT solver.  Well, before going in this direction, I
>would suggest reading some output of the Mancoosi project [8],
>especially this part [9].  From my point of view, the direction of “working
>satisfiable configurations” or “a variant of the solver” would break the
>reproducibility of a specific configuration in the general case.  Part
>of the problem with computational environment reproducibility is
>that package managers implement solvers for installing packages.

Yeah, we definitely don't want a solver for instantiating a profile. We want 
that explicit already in the manifest.scm. However, my understanding is that 
the role of an importer is to create a manifest.scm or, more realistically, to 
help a user get started creating one. There will probably always be a need for 
additional tweaking related to the intended application the computational 
environment supports. The CRAN importer, for example, cannot yet detect non-R 
dependencies, so the profile author has to figure those out for themselves. 
It's still very useful despite not being perfect. 

>Last, considering all the Guix version fields, I am not convinced it is
>straightforward to guarantee some “nearby” or newer versions.  It can
>only be heuristics working with more or less accuracy; see “guix
>refresh” and all the updaters.

Sure, but as is shown with "guix import cran" as I previously mentioned, it 
doesn't have to be perfect to be really useful in many cases.

>All in all, I am not convinced Guix should try to implement a way to
>“specify the exact software version”, because it leads to the false
>notion that version labels are enough for reproducing
>computational environments, when that is far from the case.

It definitely is not enough, but that is where it's up to the profile author to 
flesh out many examples of what their software is supposed to do and verify 
that those still work under Guix.

Having tools to benchmark against existing, but not long-term reproducible, 
software environments would help in this import case, because that is the goal 
with conda. Researchers should not expect to go from "good enough for now" to 
guaranteed reproducibility without also doing a lot of empirical testing. 

Researchers have to start somewhere, and convenience often trumps other 
considerations at the beginning, since most new projects fail. To get 
researchers to start from Guix, they need either an army of packagers willing 
to assist them with packaging, or so much convenience in packaging new 
software with Guix that it isn't much of a hassle for the researcher to do it 
themselves. I hope for both, but feel that working towards the latter would 
bolster the chances of the former. You could imagine Xapian being used to 
suggest additional package inputs, just as "guix build -f" already suggests 
missing Scheme modules.



Re: [internship]GSoC proposal review period begins today

2023-04-04 Thread Simon Tournier
Hi Gábor,

On Mon, 20 Mar 2023 at 20:47, Gábor Boskovits  wrote:

> the proposal submission deadline, which is 4th April, 1800 UTC. This is a
> hard deadline, contributors not submitting a proposal by this deadline are
> ineligible to participate in this round of GSoC.

Ouch!  Well, I was on holiday and fully offline these past weeks.
Therefore, I probably missed it.  What is the status?  Any proposal
around?


Cheers,
simon




Re: How to manage package replacements?

2023-04-04 Thread Simon Tournier
Hi Chris,

On Tue, 28 Mar 2023 at 12:51, Christopher Baines  wrote:

> This is probably connected with package replacements, maybe the original
> package definition should be marked as hidden? Anyway, I think the
> bigger point here is I'm not sure what's meant to be done when adding
> replacements for a package, and I don't know how to find out what's
> meant to be done?

The issue is ’define-public’, which is unexpected for a replacement.  It
should be just ’define python-pillow/security-fixes’, or the package
should be hidden.

As an aside, I agree that grepping the source for ’/fixed’ to find
examples of how to write grafts and replacements is not the best
documentation. :-)

Cheers,
simon




SWH: extend sources.json and Mercurial (or not Git and not tarball)

2023-04-04 Thread Simon Tournier
Hi,

On Thu, 16 Mar 2023 at 12:48, Ludovic Courtès  wrote:

>   1. Reproducibility of past revisions.  If we lose copies of the
>  auto-generated tarballs, then OpenJDK in past revisions of Guix is
>  irreparably lost.  We should check whether/how to get them in
>  Disarchive + SWH.

The sources.json file that SWH ingests only contains the original
upstream URLs, not our copies.  One step forward would be to also list
the URLs of our tarball substitutes as the last mirror in sources.json.

Any taker? :-)
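Concretely, the extension could look like the sketch below; the sources.json shape and the substitute URL scheme used here are simplified assumptions, not the real formats:

```python
import json

# Hypothetical substitute-server URL scheme; the real one would differ.
SUBSTITUTE_MIRROR = "https://ci.guix.gnu.org/file/{name}"

def add_substitute_mirror(sources):
    """Append our tarball substitute URL as the last mirror of each
    url-typed entry (simplified sources.json shape assumed)."""
    for entry in sources["sources"]:
        if entry.get("type") == "url":
            # Name the substitute after the tarball's basename.
            name = entry["urls"][0].rsplit("/", 1)[-1]
            entry["urls"].append(SUBSTITUTE_MIRROR.format(name=name))
    return sources

example = {"sources": [
    {"type": "url", "urls": ["https://example.org/foo-1.0.tar.gz"]}]}
print(json.dumps(add_substitute_mirror(example), indent=2))
```

SWH would then try the upstream URLs first and only fall back to our substitute mirror last.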

>
>   2. Mercurial/SWH bridge.  While SWH has a one-to-one mapping with Git
>  (you can ask it for a specific Git commit ID), that’s not true for
>  hg.  This is a more general problem, but as things are today,
>  there’s no automatic SWH fallback if the upstream hg server
>  vanishes.

Since most git-fetch origins use tags, the one-to-one mapping is not
guaranteed, and we rely on the SWH resolver using URL + tag to get the
content from SWH.  For instance, if the tag is changed in-place by
upstream to point to a different commit, then SWH creates another
snapshot but our fallback will fail (known issue: history of history,
etc.)

If we had a list of identifiers instead of only NAR+SHA256, and could
have the Git commit ID here (or a SWHID, or others), it would
ease the fallback machinery.
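A sketch of that fallback machinery, with hypothetical resolver hooks (none of these names exist in Guix today): each content-address identifier of an origin is tried in turn against the resolvers that understand it.

```python
def fetch_with_fallback(origin_ids, resolvers):
    """Try each content-address identifier of an origin, returning the
    first content a resolver can produce.  `origin_ids` maps identifier
    kind to value, e.g. {"nar-sha256": ..., "git-commit": ...}."""
    for kind, value in origin_ids.items():
        resolver = resolvers.get(kind)
        if resolver is None:
            continue  # no resolver knows this identifier kind
        content = resolver(value)
        if content is not None:
            return content
    raise LookupError("all identifiers exhausted")

# Hypothetical situation: the NAR mirror is gone, but the Git commit
# can still be resolved (e.g. via Software Heritage).
resolvers = {
    "nar-sha256": lambda h: None,
    "git-commit": lambda c: "content@" + c,
}
ids = {"nar-sha256": "1a2b...", "git-commit": "deadbeef"}
print(fetch_with_fallback(ids, resolvers))  # content@deadbeef
```

With several identifiers per origin, losing one resolution path (a moved tag, a vanished tarball) no longer makes the source unrecoverable.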

SWH folks are currently adding NAR hashes; they store them as ’ExtID’ (see
[1] and merge request [2]), but it is not yet clear how they would
expose the API entry point, or whether they will.

Extending ’origin’ with another optional field using other
content-address keys would make the preservation of Guix more robust.
Yeah, indeed we could also build the X-to-SWH bridge with the Disarchive
database (a global bridge), but it appears to me better to have some
“local” origin-based bridge.

1: https://gitlab.softwareheritage.org/swh/meta/-/issues/4979
2: 
https://gitlab.softwareheritage.org/swh/devel/swh-loader-core/-/merge_requests/459

Cheers,
simon



Re: Google Summer of Code 2023 Inquiry

2023-04-04 Thread Simon Tournier
Hi,

On Mon, 03 Apr 2023 at 20:41, Spencer Skylar Chan  
wrote:

>> I would expect most software versions to not be in Guix. Simon had
>> mentioned that this is mostly what the guix-past repository is
>> for. However, some packages might be buried on some branch or some
>> commit in some Guix related git repository. It may be helpful to
>> facilitate their discovery and extraction for conda import. 

Please note,

 1. The aim of the guix-past [1] channel is to have previous versions of
    some packages still working with recent Guix revisions.  The
    motivation for guix-past was the 10 Years Challenge [2], and it was
    then fed by a hackathon [3].

 2. There is no easy way to know which revision of Guix provides a
    specific version of a package.  Discovering the Guix revisions that
    map to a package version is not straightforward with the current
    tools.  I am aware of two directions: rely on an external server
    such as the Guix Data Service [4], or implement “guix git log” [5]
    (the code lives in the branch ’wip-guix-log’).

1: https://gitlab.inria.fr/guix-hpc/guix-past
2: http://rescience.github.io/ten-years/
3: 
https://hpc.guix.info/blog/2020/07/reproducible-research-hackathon-experience-report/
4: 
https://data.guix.gnu.org/repository/1/branch/master/package/gmsh/output-history
5: https://guix.gnu.org/en/blog/2021/outreachy-guix-git-log-internship-wrap-up/

>> Git has a newish binary file format for caching searches across
>> commits. Maybe it would be helpful to figure out how to parse this
>> format (it's documented) and index the data further using Xapian or a
>> graph data structure (or tree sitter?) with the relevant metadata
>> needed to find and efficiently extract Scheme code and its
>> dependencies? 

Months ago, I started doing that: indexing the package list using
Xapian.  Well, “started” is a strong word here, since I have not done
much.  My idea was (and still is!) to address two things at the same
time: a faster “guix search” [6] and the discovery of past versions.

This would somehow rework Arun’s patches [6].  From my point of view, it
would be possible to add Xapian as a dependency for Guix; therefore I
think it should use GUIX_EXTENSIONS_PATH.

6: https://issues.guix.gnu.org/39258#14


> If the format is documented then this is possible, although I'm not 
> super familiar with these kinds of data structures.

As said, an entry point about how “guix search” works is the super long
discussion in #39258 [7]. :-)

7: https://issues.guix.gnu.org/39258


>> You make an interesting point about compilation errors. It may be more
>> productive to help researchers test for working satisfiable
>> configurations as a more relaxed approach to having to specify the
>> exact software version. Maybe some "nearby" or newer version is
>> packaged and that is enough to successfully run a test suite? I'm
>> imagining something between git bisect and Guix's own package
>> solver. 
>
> Yes, we could have a variant of the solver that's more relaxed. It could 
> output multiple solutions so the user can inspect them and pick the best 
> one.

I do not know what you have in mind with “working satisfiable
configurations” or with “a variant of the solver”.  To my knowledge,
this implies some SAT solver.  Well, before going in this direction, I
would suggest reading some output of the Mancoosi project [8],
especially this part [9].  From my point of view, the direction of
“working satisfiable configurations” or “a variant of the solver” would
break the reproducibility of a specific configuration in the general
case.  Part of the problem with computational environment
reproducibility is that package managers implement solvers for
installing packages.

That said, all the package versions that Guix can provide form a DAG,
because it is a Git history – well, the combination of several Git
histories when considering several channels.  Thus, a specific version
of a package is given by an interval in the graph.  Considering a list
of packages, each at one specific version, we end up with a list of
intervals.  The “working satisfiable configuration” is then the
intersection of all the intervals in this list; note that the resulting
output could also be the empty interval.
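On a fully linear history, that intersection is elementary; a minimal sketch, with revisions numbered along the linearized history and merges left aside:

```python
def intersect_intervals(intervals):
    """Intersect (start, end) revision intervals taken over a
    linearized Guix history; None means no single revision provides
    all the requested package versions at once."""
    lo = max(start for start, _ in intervals)
    hi = min(end for _, end in intervals)
    return (lo, hi) if lo <= hi else None

# Hypothetical revision ranges in which each requested version exists:
print(intersect_intervals([(100, 250), (180, 300), (150, 260)]))  # (180, 250)
print(intersect_intervals([(100, 150), (200, 300)]))              # None
```

The hard part, as noted below, is when the history is not linear: an interval on a DAG is no longer a pair of numbers, and merges of partially-building branches make even membership tests non-trivial.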

It’s a graph problem.  Almost trivial when the graph is linear, but it
requires some work when merges happen.  And note that the merges bring
in branches that do not always fully build; for instance, parts of
core-updates before its merges.  To my knowledge, it is impossible to
detect this beforehand.

We discussed these kinds of topics when introducing “guix package
--export-channels”; this is a variant of that proposal, IMHO.

Last, considering all the Guix version fields, I am not convinced it is
straightforward to guarantee some “nearby” or newer versions.  It can
only be heuristics working with more or less accuracy; see “guix
refresh” and all the updaters.

All in all, I am not convinced Guix should try to implement a way to
“specify the exact software 

Re: [GSoC 23] distributed substitutes, cost of storage

2023-04-04 Thread Attila Lendvai
> > it's another question whether this mirroring should be enabled by default 
> > in the clients. probably it shouldn't,
>
>
> It probably should -- if things aren't mirrored, then it's not p2p; you
> would lose the main performance benefit of p2p systems.
>
> More cynically, some p2p systems (e.g. GNUnet) have mechanisms to
> disincentivize freeloaders -- clients that aren't being peers will get
> worse downloading speed.


any successful p2p solution must have an incentive system that makes attacks 
expensive (freeloading, DoS'ing, censorship, etc). arguably, the most important 
difference between the various solutions is what this incentive system looks 
like.

from a bird's eye view perspective, there are two fundamental architectures of 
p2p storage networks (that i know of):

 1) ipfs-like, or torrent-like, where the nodes register/publish what
they have in their local store, and other nodes may request it
from them

 2) swarm-like, where the nodes are responsible for storing whatever
content "is" in their "neighborhood". (block hashes and node ids
are in the same domain, so there's a distance metric between a
block and a node). put another way: Swarm stores not only the
metadata in the DHT, but also the data itself.
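the distance metric mentioned in 2) is typically a Kademlia-style XOR metric over a shared hash domain; a minimal sketch (hypothetical, not Swarm's actual scheme):

```python
import hashlib

def ident(name):
    """Derive a 256-bit identifier; block hashes and node ids live in
    the same domain, so a distance between them is well defined."""
    return int.from_bytes(hashlib.sha256(name.encode()).digest(), "big")

def distance(a, b):
    """Kademlia-style XOR distance between two identifiers."""
    return a ^ b

def responsible_node(block, nodes):
    """The node whose id is closest to the block's hash stores it --
    the block lands in that node's "neighborhood"."""
    h = ident(block)
    return min(nodes, key=lambda n: distance(ident(n), h))

nodes = ["node-a", "node-b", "node-c"]
print(responsible_node("some-substitute-block", nodes))
```

note that with this metric, which node stores a block is determined by the hashes alone, not by who published the content -- which is exactly why publishing in 2) requires convincing distant nodes to do work for you.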

in 1) there's no need to pay for, and to upload content into the network. a 
node just registers as a source for whatever content it has locally, and then 
serves the incoming requests.

but if you have content that you want to make available in 2) then you need to 
make sure that this content gets to a set of distant nodes that will store it. 
this is very different from 1) from a game theoretic perspective, and can't be 
done without some form of payments/accounting.

in 1) it's simpler for a node to share: just give away your storage and 
bandwidth to the network.

in 2) it's more complicated, because if your node is requesting other nodes to 
do stuff, then you're spending a more complex set of resources than just your 
bandwidth, potentially including some crypto coin payments if the balance goes 
way off.

but both cases are fundamentally the same: users are spending their resources, 
and i wouldn't expect that installing a linux distro will start spending my 
network bandwidth, or any other resource than my machine's local resources.

but this of course can change, too: maybe a future Guix release can advertise 
with big red letters on the download page that installing it will use your 
network bandwidth to serve other guix nodes, unless it is turned off. and then 
all is well WRT informed consent.

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“Historically, the most terrible things - war, genocide, and slavery - have 
resulted not from disobedience, but from obedience.”
— Howard Zinn (1922–2010)




Re: Google Summer of Code 2023 Inquiry

2023-04-04 Thread Kyle
Hi Spencer,

Here is the documentation for the git commit-graph cache file. The authors have 
also written their own blog posts about it with a bit more explanation.

=> https://git-scm.com/docs/commit-graph
=> 
https://devblogs.microsoft.com/devops/updates-to-the-git-commit-graph-feature/

Maybe it won't turn out to be needed... just thought it might help get you 
thinking. Please read all my suggestions from that perspective as a reasonable 
default.

I will have to defer to others for gauging the size of projects. I have found, 
as a rule, that there are always many more details to consider than I could 
have anticipated at the start of a project. That said, I liked your earlier 
stated plan of starting simple. Handling latest releases seems a reasonable 
minimum viable product.

Cheers,
Kyle





On April 3, 2023 8:41:53 PM EDT, Spencer Skylar Chan  
wrote:
>Hi Kyle,
>
>On 3/31/23 11:15, Kyle wrote:
>> I would expect most software versions to not be in Guix. Simon had mentioned 
>> that this is mostly what the guix-past repository is for. However, some 
>> packages might be buried on some branch or some commit in some Guix related 
>> git repository. It may be helpful to facilitate their discovery and 
>> extraction for conda import.
>> 
>> Git has a newish binary file format for caching searches across commits. 
>> Maybe it would be helpful to figure out how to parse this format (it's 
>> documented) and index the data further using Xapian or a graph data 
>> structure (or tree sitter?) with the relevant metadata needed to find and 
>> efficiently extract Scheme code and its dependencies?
>
>If the format is documented then this is possible, although I'm not super 
>familiar with these kinds of data structures.
>
>> You make an interesting point about compilation errors. It may be more 
>> productive to help researchers test for working satisfiable configurations 
>> as a more relaxed approach to having to specify the exact software version. 
>> Maybe some "nearby" or newer version is packaged and that is enough to 
>> successfully run a test suite? I'm imagining something between git bisect 
>> and Guix's own package solver.
>
>Yes, we could have a variant of the solver that's more relaxed. It could 
>output multiple solutions so the user can inspect them and pick the best one.
>
>> It might also be productive to add infrastructure to help scientists more 
>> conveniently track and study their recent packaging experiments. Guix will 
>> only become more useful the more packages which are already available. Work 
>> which makes packaging more approachable by more people benefits everyone. 
>> Perhaps you can think of other ideas in this direction?
>
>I'm not sure how "packaging experiments" are different from packaging software 
>the usual way. I think making the importers easier to use and debug would 
>help, although that sounds outside the scope of the projects.
>
>Finally, would these projects be considered large or medium for the purposes 
>of GSOC?
>
>Thanks,
>Skylar
>
>> On March 30, 2023 7:22:14 PM EDT, Spencer Skylar Chan 
>>  wrote:
>>> Hi Kyle,
>>> 
>>> On 3/24/23 14:59, Kyle wrote:
 I am a bit worried that your proposed project is too focused on replacing 
 python with guile. I think the project would benefit more from making 
 python users more comfortable productively using Guix tools in concert 
 with the tools they are already comfortable with.
>>> 
>>> Yes, I agree with you. Replacing Python with Guile is a much more ambitious 
>>> task and is not the highest priority here.
>>> 
 I'm wondering if you might consider modifying your project goals toward 
 exploring how GWL might be enhanced so that it could better complement 
 more expressive language-specific workflow tools like snakemake. I am also 
 personally interested in exploring such facilities from the targets 
 workflow system in R as well. Alternatively, perhaps you could focus on 
 extending the GWL with more features?
>>> 
>>> I would also be interested in extending GWL with more features, I will 
>>> follow up with this on the GWL mailing list.
>>> 
 I agree that establishing an achievable scope within a short timeline is 
 crucial. The conda env importer idea would be quite an ambitious 
 undertaking by itself and would lead you towards thinking about some 
 pretty interesting and impactful problems.
>>> 
>>> While it's a challenging project, it could be broken into smaller steps:
>>> 
>>> 1. import packages by exact matching names only, without versioning.
>>> 2. extend `guix import` to have `guix import conda` to help with package 
>>> names that do not match exactly, and to accelerate adoption of Conda 
>>> packages not in Guix
>>> 3. match software version numbers when translating Conda packages to Guix
>>> 
>>> What's currently undefined is the error handling:
>>> - if a Conda package does not exist in Guix
>>> - if the dependency graph is not solvable
>>> - if 

GSoC Application deadline today

2023-04-04 Thread Gábor Boskovits
Hello guix,

This is a reminder that the GSoC application deadline is today, 1800 UTC.
All applicants should have their applications uploaded to the GSoC
organization before the deadline.

Regards,
g_bor


Re: PyQt in core-updates

2023-04-04 Thread Lars-Dominik Braun
Hi Andreas,

> I have just fixed calibre. It failed to build because .sip files are
> now in a subdirectory /lib/python3.10/site-packages/PyQt5/bindings
> instead of /share/sip (or maybe before, they were in both directories).

no, it was definitely in /share/sip before, but the pyproject-based
build system does not expose any option to move it there :( If anyone
can figure out how to move it back *and* successfully compile PyQt,
please change it back to /share/sip.

Lars




Re: Contributing Guix Home services

2023-04-04 Thread Tanguy LE CARROUR
Quoting Tanguy LE CARROUR (2023-03-25 17:53:23)
> My main concern now is to figure out how to implement complex
> configurations, to be able to write things like:
> […]
> I'm not sure how to make `define-configuration` accept complex structures.
> When I look at `gnu/home/services/ssh.scm`, it seems to do it the other way
> around, defining the configuration with `define-record-type` and "putting"
> the "configuration" inside.

Note to self: when in doubt, RTFM! … where the "F" stands for "Fabulous"! 


This doesn't answer the question "how complete does a service need to be to
make it to master?", though. But I have a lot of rewriting to do before
submitting patches anyway!

-- 
Tanguy