[sane] how to scan using sane-airscan?

2024-09-03 Thread Giovanni Biscuolo
Hello,

I'm forwarding my message here, maybe someone on guix-devel can help me.

Sorry for the cross-post!

 Start of forwarded message 
From: Giovanni Biscuolo 
To: help-g...@gnu.org
Subject: [sane] how to scan using sane-airscan?
Date: Fri, 30 Aug 2024 15:59:43 +0200

Hello,

I'm using a system with %desktop-services installed (that includes
sane-service-type with sane-backends-minimal).

With sane-airscan installed I can get a list of devices:

--8<---cut here---start->8---

g@ken ~$ airscan-discover 
[devices]
  HP Color LaserJet MFP M181fw (0EEE0D) = http://192.168.1.20:8080/eSCL/, eSCL
  HP Color LaserJet MFP M181fw (0EEE0D) = https://192.168.1.20:443/eSCL/, eSCL
  HP Color LaserJet MFP M181fw (0EEE0D) = http://[fd56:feaa:cf06::20]:53048/, WSD
  HP Color LaserJet MFP M181fw (0EEE0D) = http://[fd56:feaa:cf06::60e:3cff:fe0e:ee0d]:53048/, WSD
  HP Color LaserJet MFP M181fw (0EEE0D) = http://192.168.1.20:53048/, WSD
  HP Color LaserJet MFP M181fw (0EEE0D) = http://[fe80::60e:3cff:fe0e:ee0d%252]:53048/, WSD

--8<---cut here---end--->8---

But if I run simple-scan it cannot find any scanner.

I found an old (2023-01-14) message in this mailing list [1] suggesting
to set some environment variables, but it does not work:

--8<---cut here---start->8---

env LD_LIBRARY_PATH=${HOME}/.guix-profile/lib/sane 
SANE_CONFIG_DIR=${HOME}/.guix-profile/etc/sane.d simple-scan

--8<---cut here---end--->8---

Also this does not work:

--8<---cut here---start->8---

LD_LIBRARY_PATH=${HOME}/.guix-profile/lib/sane; 
SANE_CONFIG_DIR=${HOME}/.guix-profile/etc/sane.d; simple-scan

--8<---cut here---end--->8---

I also tried 'scanimage -L' in place of 'simple-scan' but no scanner is
detected.
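
For what it's worth, a plain `VAR=value;` assignment on its own line sets
a shell variable that child processes do _not_ inherit, which would
explain why the second attempt above fails even if the paths are right;
the one-line `env VAR=... command` form or an `export` passes the
variables to the frontend.  A minimal sketch (assumption: sane-airscan is
installed in ~/.guix-profile as described):

```shell
# Sketch: export the variables so child processes (simple-scan,
# scanimage) actually see them; a bare `VAR=...;` stays shell-local.
export LD_LIBRARY_PATH="${HOME}/.guix-profile/lib/sane"
export SANE_CONFIG_DIR="${HOME}/.guix-profile/etc/sane.d"
# Confirm a child process really inherits the variable:
sh -c 'echo "child sees SANE_CONFIG_DIR=$SANE_CONFIG_DIR"'
# Retry detection (guarded so the sketch is harmless without scanimage):
if command -v scanimage >/dev/null; then scanimage -L || true; fi
```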

In my ${HOME}/.guix-profile/lib I find this symlink: "sane ->
/gnu/store/hls5vghgb9z4isrvrr28n0kjsbhk6i97-sane-airscan-0.99.27/lib/sane"
and in that directory I find "libsane-airscan.so.1"

In my ${HOME}/.guix-profile/etc/ I find this symlink: "sane.d ->
/gnu/store/hls5vghgb9z4isrvrr28n0kjsbhk6i97-sane-airscan-0.99.27/etc/sane.d"
and in that directory:

--8<---cut here---start->8---

/home/g/.guix-profile/etc/sane.d:
dr-xr-xr-x 1 root root   34 Jan  1  1970 .
dr-xr-xr-x 1 root root   34 Jan  1  1970 ..
-r--r--r-- 1 root root 3.3K Jan  1  1970 airscan.conf
dr-xr-xr-x 1 root root   14 Jan  1  1970 dll.d

/home/g/.guix-profile/etc/sane.d/dll.d:
dr-xr-xr-x 1 root root 14 Jan  1  1970 .
dr-xr-xr-x 1 root root 34 Jan  1  1970 ..
-r--r--r-- 1 root root 42 Jan  1  1970 airscan

--8<---cut here---end--->8---

the content of the "airscan" file in
/home/g/.guix-profile/etc/sane.d/dll.d is:

--8<---cut here---start->8---

# sane-dll entry for sane-airscan
airscan

--8<---cut here---end--->8---
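
A debugging sketch that may help pin down where the lookup goes wrong:
the SANE "dll" meta-backend honours the standard SANE_DEBUG_DLL variable
(this is plain sane-backends behaviour, nothing Guix-specific) and logs
which configuration directory, dll.conf/dll.d entries, and libsane-*.so
files it tries to load:

```shell
# Sketch: trace the dll meta-backend's config and library lookups.
export SANE_CONFIG_DIR="${HOME}/.guix-profile/etc/sane.d"
export SANE_DEBUG_DLL=255
# The trace goes to stderr; the first lines show the config dir in use
# and each backend entry it attempts to dlopen.
if command -v scanimage >/dev/null; then scanimage -L 2>&1 | head -n 40; fi
```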


Please, how can I set sane-airscan as a usable backend for SANE's
scanimage and all other SANE frontends like simple-scan?

Thanks, Gio'

[1] id:nlhdbqq--...@tutanota.com, help-guix mailing list

[...]

 End of forwarded message 

-- 
Giovanni Biscuolo

Xelera IT Infrastructures


signature.asc
Description: PGP signature


Re: networking service not starting with netlink-response-error errno:17

2024-06-17 Thread Giovanni Biscuolo
Hi Ludovic,

executive summary: it is (was) a "network architecture" mistake on my
side, since I was mixing a device with static-network defined via Guix
with a bridge defined via libvirt... and this is not good.  The more I
think about it, the more I'm convinced that trying to add a route for
device "swws-bridge" (see below) in the "eno1" [1] static-networking
declaration is simply a... mistake.

Julien, I'm adding you in Cc: only because you develop guile-netlink and
maybe you could see if it's possible to improve netlink-related error
messages.

Ludovic Courtès  writes:

> Giovanni Biscuolo  skribis:
>
>> after a reboot on a running remote host (it was running since several
>> guix system generations ago... but with no reboots meanwhile) I get a
>> failing networking service and consequently the ssh service (et al)
>> refuses to start :-(
>>
>> Sorry I've no text to show you but a screenshot (see attachment below)
>> because I'm connecting with a remote KVM console appliance.

In a follow-up message I was then able to copy the actual error message:

--8<---cut here---start->8---

Jun 14 11:28:32 localhost vmunix: [6.258520] shepherd[1]: Starting service networking...
Jun 14 11:28:32 localhost vmunix: [6.472949] shepherd[1]: Service networking failed to start.
Jun 14 11:28:32 localhost vmunix: [6.474842] shepherd[1]: Exception caught while starting networking: (no-such-device "swws-bridge")
Jun 14 11:28:32 localhost vmunix: [6.492344] shepherd[1]: Starting service networking...
Jun 14 11:28:32 localhost vmunix: [6.509652] shepherd[1]: Exception caught while starting networking: (%exception #<&netlink-response-error errno: 17>)
Jun 14 11:28:32 localhost vmunix: [6.510034] shepherd[1]: Service networking failed to start.

--8<---cut here---end--->8---

Then (in the same message) I described how I was able to solve my issue;
this is the "core" of my configuration _mistake_:

--8<---cut here---start->8---

(service static-networking-service-type
         (list (static-networking
                (addresses (list (network-address
                                  (device ane-wan-device)
                                  (value (string-append ane-wan-ip4 "/24")))))
                (routes (list (network-route
                               (destination "default")
                               (gateway ane-wan-gateway))
                              ;; ip route add 10.1.2.0/24 dev swws-bridge via 192.168.133.12
                              ;; (network-route
                              ;;  (destination "10.1.2.0/24")   ;; lxcbr0 net
                              ;;  (device swws-bridge-name)
                              ;;  (gateway "192.168.133.12"))   ;; on node002
                              ))
                (name-servers '("185.12.64.1" "185.12.64.1")))))

--8<---cut here---end--->8---

I commented out the second network-route definition, the one using
"swws-bridge" [1] as the device to route to 10.1.2.0/24 via 192.168.133.12.

When I used that code, AFAIU, the first time shepherd tried to start the
networking service it failed because "swws-bridge" was missing, so
(guile-)netlink failed with "no-such-device"; it then tried again and
failed because the very same route was already defined (but not
functional).

A failing networking service (although the interface is up and running)
means that ssh (et al) fails to start, because networking is a requisite
of ssh.

> 17 = EEXIST, which is netlink’s way of saying that the device/route/link
> it’s trying to add already exists.

Ah thanks!  I was not able to find that error code.
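
For future reference, a quick way to look such codes up without hunting
through headers (a sketch using Python's stdlib; the `errno` utility from
moreutils works too):

```shell
# Map errno 17 to its symbolic name and human-readable message.
python3 -c 'import errno, os; print(errno.errorcode[17], "-", os.strerror(17))'
# EEXIST - File exists
```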

When run on the command line I get:

--8<---cut here---start->8---

g@ane ~$ sudo ip route add 10.1.2.0/24 dev swws-bridge via 192.168.133.12
RTNETLINK answers: File exists

--8<---cut here---end--->8---

Is it possible to have the same error and/or a little bit of context in
syslog when this happens with 'network-set-up/linux'?

Anyway, I think that "ip route" should just be idempotent... but maybe
I'm missing something (and this is obviously not a downstream issue).
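
An idempotent variant does exist in iproute2 itself: `ip route replace`
adds the route when it is missing and updates it otherwise, so re-running
it never fails with EEXIST.  A sketch (requires root, hence the guard;
the 10.99.99.0/24 test route is a hypothetical placeholder):

```shell
# `ip route add` fails with EEXIST when the route already exists;
# `ip route replace` is add-or-update and can safely be run repeatedly.
if [ "$(id -u)" = "0" ] && command -v ip >/dev/null; then
    ip route replace 10.99.99.0/24 dev lo
    ip route replace 10.99.99.0/24 dev lo   # second run: no "File exists"
    ip route del 10.99.99.0/24 dev lo       # clean up the demo route
fi
```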

> The problem here is that static networking adds devices, routes, and
> links (see ‘network-set-up/linux’ in the code).  If it fails in the
> middle, then it may have added devices without adding routes, s

Re: networking service not starting for a network-route setting (was for network with netlink-response-error errno:17)

2024-06-14 Thread Giovanni Biscuolo
Hello,

OK I've managed to fix my networking problem, here is how I did it...

Giovanni Biscuolo  writes:

[...]

> The networking service is failing with this message (manually copied
> here, please forgive mistakes):

now that I can connect via SSH, I can copy the actual messages:

--8<---cut here---start->8---

Jun 14 11:28:32 localhost vmunix: [6.258520] shepherd[1]: Starting service networking...
Jun 14 11:28:32 localhost vmunix: [6.472949] shepherd[1]: Service networking failed to start.
Jun 14 11:28:32 localhost vmunix: [6.474842] shepherd[1]: Exception caught while starting networking: (no-such-device "swws-bridge")
Jun 14 11:28:32 localhost vmunix: [6.492344] shepherd[1]: Starting service networking...
Jun 14 11:28:32 localhost vmunix: [6.509652] shepherd[1]: Exception caught while starting networking: (%exception #<&netlink-response-error errno: 17>)
Jun 14 11:28:32 localhost vmunix: [6.510034] shepherd[1]: Service networking failed to start.

--8<---cut here---end--->8---

> The strange thing is that all the configured interfaces: eno1

I truncated the list, the actual list of interfaces was (and is):

--8<---cut here---start->8---

g@ane ~$ ip addre ls
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope global lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether b4:2e:99:c5:cc:1c brd ff:ff:ff:ff:ff:ff
    inet 162.55.88.253/24 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::b62e:99ff:fec5:cc1c/64 scope link
       valid_lft forever preferred_lft forever
3: swws-bridge:  mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:9b:c6:63 brd ff:ff:ff:ff:ff:ff
    inet 192.168.133.1/24 brd 192.168.133.255 scope global swws-bridge
       valid_lft forever preferred_lft forever
4: vnet0:  mtu 1500 qdisc noqueue master swws-bridge state UNKNOWN group default qlen 1000
    link/ether fe:54:00:ff:e2:fd brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:feff:e2fd/64 scope link
       valid_lft forever preferred_lft forever
5: vnet1:  mtu 1500 qdisc noqueue master swws-bridge state UNKNOWN group default qlen 1000
    link/ether fe:54:00:41:53:1e brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe41:531e/64 scope link
       valid_lft forever preferred_lft forever
6: vnet2:  mtu 1500 qdisc noqueue master swws-bridge state UNKNOWN group default qlen 1000
    link/ether fe:54:00:3d:17:90 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe3d:1790/64 scope link
       valid_lft forever preferred_lft forever
7: vnet3:  mtu 1500 qdisc noqueue master swws-bridge state UNKNOWN group default qlen 1000
    link/ether fe:54:00:64:81:8f brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe64:818f/64 scope link
       valid_lft forever preferred_lft forever

--8<---cut here---end--->8---

> Please find below the relevant parts of the configuration of my host.
>
> As you can see I've installed a libvirt daemon service (it is working)
> with an autostarted (by libvirt) bridge interface named "swws-bridge"

[...]

> --8<---cut here---start->8---

[...]

sorry, I forgot to add some relevant definitions I have at the start of
my config.scm file:

(define ane-wan-device "eno1")
(define ane-wan-ip4 "162.55.88.253")
(define ane-wan-gateway "162.55.88.193")
(define swws-bridge-name "swws-bridge")

>(list
> (service static-networking-service-type
>  (list (static-networking
>   (addresses (list (network-address
>     (device ane-wan-device)
>     (value (string-append ane-wan-ip4 "/24")))))
>   (routes (list (network-route
>     (destination "default")
>     (gateway ane-wan-gateway))


the next one is the problematic part of my static-networking configuration:

> ;; ip route add 10.1.2.0/24 dev swws-bridge via 192.168.133.12
> (network-route
>  (destination "10.1.2.0/24")   ;; lxcbr0 net
>  (device swws-bridge-name)
>  (gateway "192.168.133.12"))

Re: backdoor injection via release tarballs combined with binary artifacts (was Re: Backdoor in upstream xz-utils)

2024-04-13 Thread Giovanni Biscuolo
Hello Skyler,

Skyler Ferris  writes:

> On 4/12/24 23:50, Giovanni Biscuolo wrote:

>> general reminder: please remember the specific scope of this (sub)thread

[...]

>> (https://yhetil.org/guix/8734s1mn5p@xelera.eu/)
>>
>> ...and if needed read that message again to understand the context,
>> please.
>>
> I assume that this was an indirect response to the email I sent 
> previously where I discussed the problems with PGP signatures on release 
> files.

No, believe me! I'm sorry I gave you this impression. :-)

> I believe that this was in scope

To be clear: I did not mean to say - even indirectly - that you were out
of scope _or_ that you did not understand the context.

Also, I really did not mean to /appear/ as the "coordinator" of this
(sub)thread, and even less as the one who decides what's in scope and
what's OT; obviously everyone is absolutely free to decide what is in
scope and whether they understood the context.

> because of the discussion about whether to use VCS checkouts which
> lack signatures or release tarballs which have signatures.

I still have not commented on what you discussed, just because I lack
time, not interest; if I can I'll do it ASAP™ :-(

[...]

Thanks! Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: backdoor injection via release tarballs combined with binary artifacts (was Re: Backdoor in upstream xz-utils)

2024-04-13 Thread Giovanni Biscuolo
Hi Attila,

sorry for the delay in my reply,

I'm asking myself if this (sub)thread should be "condensed" into a
dedicated RFC (are RFCs an official workflow in Guix now?); if so, I
volunteer to file such an RFC in the next weeks.

Attila Lendvai  writes:

>> Are there other issues (different from the "host cannot execute target
>> binary") that make release tarballs indispensable for some upstream
>> projects?
>
>
> i didn't mean to say that tarballs are indispensable. i just wanted to
> point out that it's not as simple as going through each package
> definition and robotically changing the source origin from tarball to
> git repo. it costs some effort, but i don't mean to suggest that it's
> not worth doing.

OK understood thanks!

[...]

> i think a good first step would be to reword the packaging guidelines
> in the doc to strongly prefer VCS sources instead of tarballs.

I agree.

>> Even if We™ (ehrm) find a solution to the source tarball reproducibility
>> problem (potentially allowing us to patch all the upstream makefiles
>> with specific phases in our package definitions), are we really going to
>> start our own (or one managed by the reproducible build community)
>> "reproducible source tarballs" repository? Is this feasible?
>
> but why would that be any better than simply building from git? which,
> i think, would even take less effort.

I agree, I was just brainstorming.

[...]

Thanks, Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: backdoor injection via release tarballs combined with binary artifacts (was Re: Backdoor in upstream xz-utils)

2024-04-12 Thread Giovanni Biscuolo
Hello,

general reminder: please remember the specific scope of this (sub)thread

--8<---cut here---start->8---

 Please consider that this (sub)thread is _not_ specific to xz-utils but
 to the specific attack vector (matrix?) used to inject a backdoor in a
 binary during a build phase, in a _very_ stealthy way.

 Also, since Guix _is_ downstream, I'd like this (sub)thread to
 concentrate on what *Guix* can/should do to strengthen the build process
 /independently/ of what upstreams (or other distributions) can/should
 do.

--8<---cut here---end--->8---
(https://yhetil.org/guix/8734s1mn5p@xelera.eu/)

...and if needed read that message again to understand the context,
please.

Andreas Enge  writes:

> Am Thu, Apr 11, 2024 at 02:56:24PM +0200 schrieb Ekaitz Zarraga:
>> I think it's just better to
>> obtain the exact same code that is easy to find
>
> The exact same code as what?

Of what is contained in the official tool used by upstream to track
their code, which is the one and _only_ one that is /pragmatically/ open
to scrutiny by other upstream and _downstream_ contributors.

> Actually I often wonder when looking for a project and end up with a
> Github repository how I could distinguish the "original" from its
> clones in a VCS.

Actually it's a little bit of "intelligence work", but it's something
that downstream should usually do: establish a reasonable level of trust
that the origin is really the upstream one.

But here we are /brainstorming/ about the very issue that led to the
backdoor injection, and that issue is how to avoid "backdoor injections
via build subversion exploiting semi-binary seeds in release tarballs"
(see the scope above).

> With the signature by the known (this may also be a wrong assumption,
> admittedly) maintainer there is at least some form of assurance of
> origin.

We should definitely drop the idea of "trust by authority" as a
sufficient requisite for verifiability, which is one of the assumptions
of reproducible builds.

The XZ backdoor injection absolutely demonstrates that one single
_co-maintainer_ was able to hide a trojan in the _signed_ release
tarball and the payload in the git archive (as a very obfuscated
binary), so it was _the origin_ that was "infected".

It's NOT important _who_ injected the backdoor (and it _was_ upstream),
but _how_.

In other words, we need a _pragmatic_ way (possibly with helping tools)
to "challenge the upstream authority" :-)

>> and everybody is reading.
>
> This is a steep claim! I agree that nobody reads generated files in
> a release tarball, but I am not sure how many other files are actually
> read.

Let's say that at least /someone/ should be _able_ to read the files,
but in the attack we are considering /no one/ is _pragmatically_ able to
read the (auto)generated semi-binary seeds in the release tarballs.

Security is a complex system, especially when considering the entire
supply chain: let's focus on this _specific_ weakness of the supply
chain. :-)


Ciao! Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: backdoor injection via release tarballs combined with binary artifacts (was Re: Backdoor in upstream xz-utils)

2024-04-12 Thread Giovanni Biscuolo
Hello,

Ludovic Courtès  writes:

> Ekaitz Zarraga  skribis:
>
>> On 2024-04-04 21:48, Attila Lendvai wrote:
>>> all in all, just by following my gut instincts, i was advocating
>>> for building everything from git even before the exposure of this
>>> backdoor. in fact, i found it surprising as a guix newbie that not
>>> everything is built from git (or their VCS of choice).
>>
>> That has happened to me too.
>> Why not use Git directly always?
>
> Because it create{s,d} a bootstrapping issue.  The
> “builtin:git-download” method was added only recently to guix-daemon and
> cannot be assumed to be available yet:
>
>   https://issues.guix.gnu.org/65866

This will fortunately help a lot with the "everything built from git"
part of the "wishlist", but what about the non-zero number of packages
using "other upstream VCSs"?

[...]

> I think we should gradually move to building everything from
> source—i.e., fetching code from VCS and adding Autoconf & co. as inputs.
>
> This has been suggested several times before.  The difficulty, as you
> point out, will lie in addressing bootstrapping issues with core
> packages: glibc, GCC, Binutils, Coreutils, etc.  I’m not sure how to do
> that but…

does it have to be an "all or nothing" choice?  I mean "continue using
release tarballs" vs "use git" for "all"?

If using git is unfeasible for bootstrapping reasons [1], why not
continue using release tarballs with some _extra_ verification steps,
and possibly add some automation to "lint" to help contributors and
committers check that there are no "quasi-binary" seeds [2] hidden in
release tarballs?
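
A sketch of what such an extra verification step could look like: list
the files present in a release tarball but absent from the corresponding
git tag; autogenerated "quasi-binary" seeds show up in exactly this
difference.  The demo below is self-contained (it fabricates a tiny
hypothetical "project" repo and release tarball just to illustrate the
comparison):

```shell
# Self-contained demo: build a tiny git repo, tag it, make a "release
# tarball" containing one extra generated file, and expose it with comm.
set -e
demo=$(mktemp -d)
git init -q "$demo/project"
( cd "$demo/project" \
  && echo 'int main(void){return 0;}' > main.c \
  && git add main.c \
  && git -c user.name=demo -c user.email=demo@example.org commit -qm v1.0 \
  && git tag v1.0 )
# Release dir = tag contents + an autogenerated file not under VCS:
cp -r "$demo/project" "$demo/project-1.0"
rm -rf "$demo/project-1.0/.git"
echo 'generated seed' > "$demo/project-1.0/configure"
tar czf "$demo/project-1.0.tar.gz" -C "$demo" project-1.0
# The "lint": files only present in the tarball, not in the tag.
tar tzf "$demo/project-1.0.tar.gz" | sed 's|^[^/]*/*||' | grep -v '^$' | sort -u > "$demo/tarball.lst"
git -C "$demo/project" ls-tree -r --name-only v1.0 | sort -u > "$demo/tag.lst"
comm -23 "$demo/tarball.lst" "$demo/tag.lst"   # prints: configure
```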

WDYT?

[...]

Grazie! Gio'



[1] or other reasons specific to a package that should be documented
when needed, at least with a comment in the package definition

[2] the autogenerated files that are not pragmatically verifiable

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: backdoor injection via release tarballs combined with binary artifacts (was Re: Backdoor in upstream xz-utils)

2024-04-05 Thread Giovanni Biscuolo
me very skilled (experienced?)
programmers

[1.1] because it's strictly related to good _redistribution_ of
_trusted_ software, not to good programming

[2]
https://docs.github.com/en/actions/learn-github-actions/understanding-github-actions#runners
«each workflow run executes in a fresh, newly-provisioned virtual machine.»
see also
https://www.paloaltonetworks.com/blog/prisma-cloud/unpinnable-actions-github-security/
for security concerns about GitHub actions relying on Docker containers
used for "reproducibility" purposes.

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: backdoor injection via release tarballs combined with binary artifacts (was Re: Backdoor in upstream xz-utils)

2024-04-04 Thread Giovanni Biscuolo
Hi Attila,

Attila Lendvai  writes:

>> Also, in (info "(guix) origin Reference") I see that Guix packages
>> can have a list of uri(s) for the origin of source code, see xz as an
>> example [7]: are they intended to be multiple independent sources to
>> be compared in order to prevent possible tampering or are they "just"
>> alternatives to be used if the first listed uri is unavailable?
>
> a source origin is identified by its cryptographic hash (stored in its
> sha256 field); i.e. it doesn't matter *where* the source archive was
> acquired from. if the hash matches the one in the package definition,
> then it's the same archive that the guix packager has seen while
> packaging.

Ehrm, you are right, mine was a stupid question :-)

We *are* already verifying that tarballs have not been tampered
with... by anyone but the release manager :-(
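
The point can be illustrated mechanically (a sketch; the file below is a
stand-in for any release tarball): two downloads from different locations
are interchangeable exactly when their SHA-256 hashes match, and that
comparison also rejects a tampered mirror:

```shell
# Content addressing in miniature: the hash pins the bytes, not the URL.
demo=$(mktemp -d)
printf 'release contents\n' > "$demo/origin.tar"   # "upstream" copy
cp "$demo/origin.tar" "$demo/mirror.tar"           # "mirror" copy
h1=$(sha256sum "$demo/origin.tar" | cut -d' ' -f1)
h2=$(sha256sum "$demo/mirror.tar" | cut -d' ' -f1)
[ "$h1" = "$h2" ] && echo "same bytes: either source is acceptable"
# A tampered mirror fails the very same comparison:
printf 'backdoored\n' > "$demo/mirror.tar"
h2=$(sha256sum "$demo/mirror.tar" | cut -d' ' -f1)
[ "$h1" = "$h2" ] || echo "hash mismatch: tampering detected"
```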

[...]

Happy hacking! Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: backdoor injection via release tarballs combined with binary artifacts (was Re: Backdoor in upstream xz-utils)

2024-04-04 Thread Giovanni Biscuolo
Hello,

a couple of additional (IMO) useful resources...

Giovanni Biscuolo  writes:

[...]

> Let me highlight this: «It is pragmatically impossible [...] to peer
> review a tarball prepared in this manner.»
>
> There is no doubt that the release tarball is a much weaker "trusted
> source" (trusted by peer review, not by authority) than the upstream
> DVCS repository.

This kind of attack was described by Daniel Stenberg in his «HOWTO
backdoor curl» article on 2021-03-30 as the "skip-git-altogether" method:

https://daniel.haxx.se/blog/2021/03/30/howto-backdoor-curl/
--8<---cut here---start->8---

The skip-git-altogether methods

As I’ve described above, it is really hard even for a skilled developer
to write a backdoor and have that landed in the curl git repository and
stick there for longer than just a very brief period.

If the attacker instead can just sneak the code directly into a release
archive then it won’t appear in git, it won’t get tested and it won’t
get easily noticed by team members!

curl release tarballs are made by me, locally on my machine. After I’ve
built the tarballs I sign them with my GPG key and upload them to the
curl.se origin server for the world to download. (Web users don’t
actually hit my server when downloading curl. The user visible web site
and downloads are hosted by Fastly servers.)

An attacker that would infect my release scripts (which btw are also in
the git repository) or do something to my machine could get something
into the tarball and then have me sign it and then create the “perfect
backdoor” that isn’t detectable in git and requires someone to diff the
release with git in order to detect – which usually isn’t done by anyone
that I know of.

[...] I of course do my best to maintain proper login sanitation,
updated operating systems and use of safe passwords and encrypted
communications everywhere. But I’m also a human so I’m bound to do
occasional mistakes.

Another way could be for the attacker to breach the origin download
server and replace one of the tarballs there with an infected version,
and hope that people skip verifying the signature when they download it
or otherwise notice that the tarball has been modified. I do my best at
maintaining server security to keep that risk to a minimum. Most people
download the latest release, and then it’s enough if a subset checks the
signature for the attack to get revealed sooner rather than later.

--8<---cut here---end--->8---

Unfortunately, in that section Stenberg misses one attack vector he
mentioned in a previous section of the article, named "The tricking a
user method":

--8<---cut here---start->8---

We can even include more forced “convincing” such as direct threats
against persons or their families: “push this code or else…”. This way
of course cannot be protected against using 2fa, better passwords or
things like that.

--8<---cut here---end--->8---

...and an attack vector involving more subtle means (let's call it
distributed social engineering) to convince the upstream developer and
other contributors and/or third parties that the project needs a
co-maintainer authorized to publish _official_ release tarballs.

Following Stenberg's attack classification, since the supply-chain
attack was intended to install a backdoor in the _sshd_ service, and
_not_ in xz-utils or liblzma, we can classify this attack as:

  skip-git-altogether to install a backdoor further-down-the-chain,
  precisely in a _dependency_ of the attacked target, during a period of
  "weakness" of the upstream maintainers

Stenberg closes his article with this update and one related reply to a
comment:

--8<---cut here---start->8---

Dependencies

Added after the initial post. Lots of people have mentioned that curl
can get built with many dependencies and maybe one of those would be an
easier or better target. Maybe they are, but they are products of their
own individual projects and an attack on those projects/products would
not be an attack on curl or backdoor in curl by my way of looking at it.

In the curl project we ship the source code for curl and libcurl and the
users, the ones that builds the binaries from that source code will get
the dependencies too.

[...]

 Jean Hominal says: 
 April 1, 2021 at 14:04 

 I think the big difference why you “missed” dependencies as an attack
 vector is because today, most application developers ship their
 dependencies in their application binaries (by linking statically or
 shipping a container) – in such a case, I would definitely count an
 attack on such a dependency, that is then shipped as part of the
 project’s artifacts, as a successful attack on the project.

 However, as you only ship a source artifact – of course, dependencies

backdoor injection via release tarballs combined with binary artifacts (was Re: Backdoor in upstream xz-utils)

2024-04-04 Thread Giovanni Biscuolo
 not something under
Guix control.

All in all: should we really avoid the "pragmatically impossible to be
peer reviewed" release tarballs?

WDYT?

Happy hacking! Gio'

[...]

[1] https://gist.github.com/thesamesam/223949d5a074ebc3dce9ee78baad9e27
«FAQ on the xz-utils backdoor (CVE-2024-3094)» (constantly updated)

[2] https://gynvael.coldwind.pl/?lang=en&id=782
«xz/liblzma: Bash-stage Obfuscation Explained»

[3]
e.g. 
https://web.archive.org/web/20110708023004/http://www.h-online.com/open/news/item/Vsftpd-backdoor-discovered-in-source-code-update-1272310.html
«Vsftpd backdoor discovered in source code - update»
"a bad tarball had been downloaded from the vsftpd master site with an
invalid GPG signature"

[4]
https://lists.fedoraproject.org/archives/list/de...@lists.fedoraproject.org/thread/YWMNOEJ34Q7QLBWQAB5TM6A2SVJFU4RV/
«Three steps we could take to make supply chain attacks a bit harder»

[5] https://lists.gnu.org/archive/html/bug-autoconf/2024-03/msg0.html

[6] https://lists.gnu.org/archive/html/automake/2024-03/msg7.html
«GNU Coding Standards, automake, and the recent xz-utils backdoor»

[7]
https://git.savannah.gnu.org/cgit/guix.git/tree/gnu/packages/compression.scm#n494
--8<---cut here---start->8---
(define-public xz
  (package
    (name "xz")
    (version "5.2.8")
    (source (origin
              (method url-fetch)
              (uri (list (string-append "http://tukaani.org/xz/xz-" version
                                        ".tar.gz")
                         (string-append "http://multiprecision.org/guix/xz-"
                                        version ".tar.gz")))
--8<---cut here---end--->8---



P.S.: in a way, I see this kind of attack as exploiting a form of
statefulness of the build system: in this case "build-to-host.m4" was
the /state/; I think that build systems should (also) be stateless, and
Guix is doing a great job to reach this goal.

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: Recent security issues (guix-daemon and xz)

2024-04-02 Thread Giovanni Biscuolo
Hello,

John Kehayias  writes:

[...]

>
> Secondly, perhaps many have heard of the recent security issue
> (backdoor) in the xz project:
>
> - <https://www.openwall.com/lists/oss-security/2024/03/29/4> (original
>   disclosure)
>
> - <https://nvd.nist.gov/vuln/detail/CVE-2024-3094> (CVE-2024-3094)

Pointers to related threads on guix-devel:

1. https://lists.gnu.org/archive/html/guix-devel/2024-03/msg00281.html
   
https://yhetil.org/guix/KRd7y7M0o1HQsUlHonV0ZJDzgHgrAh6Mn1dnSvKq9RBCgBRTcy-MlK9Ai6PJPmiwzTh59wNVrAioo3n0uSrZ4r3n-YX9JOhtX6V6E1P37PQ=@protonmail.com/
   Message-ID: KRd7y7M0o1HQsUlHonV0ZJDzgHgrAh6Mn1dnSvKq9RBCgBRTcy-MlK9Ai6PJPmiwzTh59wNVrAioo3n0uSrZ4r3n-YX9JOhtX6V6E1P37PQ=@protonmail.com

2. https://lists.gnu.org/archive/html/guix-devel/2024-03/msg00284.html
   https://yhetil.org/guix/87ttkon4c4@protonmail.com/
   Message-ID: 87ttkon4c4@protonmail.com

3. https://lists.gnu.org/archive/html/guix-devel/2024-04/msg0.html
   https://yhetil.org/guix/3ae39210-ba8b-49df-0ea1-c520011b7...@gmail.com/
   Message-ID: 3ae39210-ba8b-49df-0ea1-c520011b7...@gmail.com

HTH! Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: the right to rewrite history to rectify the past (was Re: Concerns/questions around Software Heritage Archive)

2024-03-21 Thread Giovanni Biscuolo
Hello pinoaffe,

pinoaffe  writes:

[...]

> I think we, as Guix,
> - should examine if/how it is currently feasible to rewrite our git
> history,

it's not, see also:
https://guix.gnu.org/en/blog/2020/securing-updates/

> - should examine possible workarounds going forward,
> - should move towards something like UUIDs and petnames in the long run.
>
> (see https://spritelyproject.org/news/petname-systems.html).

I don't understand how using petnames, uuids or even a re:claimID
identity (see below) could solve the problem of "rewriting history" in
case a person wishes to change his or her previously _published_ name
(petname, uuid...) in an archived content-addressable storage system.

As a side note, other than the "petname system" please also consider
re:claimID from GNUnet:
https://www.gnunet.org/en/reclaim/index.html
https://www.gnunet.org/en/reclaim/motivation.html

[...]

Regards, Giovanni.


[1] https://guix.gnu.org/en/blog/2020/securing-updates/


-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: Guix role in a free society

2024-03-20 Thread Giovanni Biscuolo
Hello Vivien,

Vivien Kraus  writes:

> Free software enables cooperation in a free society. More precisely, it
> makes it easy for a user of a package to use a new version where the
> personal information has been corrected. The thread in [1] questions
> our handling of potential cases where a transgender contributor of Guix
> or one of its packages requests to change their name. While it would be
> nothing but cruel to deny such a request

Please do not frame the question that way, because it's very different:
the original request is _not_ to use the correct personal information in
a new package to be distributed (and potentially used); the request is
to modify the _correct_ personal information (self-)published in the
past, by rewriting the git history of the SWH-archived copy of the
software.

Guix contributors or package authors can change their personal
information - usually their name and email in copyright attribution(s)
and documentation - at any moment and that will be _automatically_
propagated in all new Guix built artifacts and/or in the Guix git
repositories.

Also, git can _display_ a different name in git logs if instructed to
do so via .mailmap.
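As an illustration (names and addresses invented), a one-line .mailmap
entry at the repository root maps a committed identity to the one to
display:

```
Proper Name <proper@example.org> Old Name <old@example.org>
```

With such a file, git shortlog (and git log, where mailmap use is
enabled) shows "Proper Name", while the stored commit objects stay
byte-for-byte identical.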

The problem, let me call it a "rights clash", arises when pretending
that "rewriting the past" is a right people can exercise, one allegedly
protected also by the European GDPR.

[...]

Loving, Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




the right to rewrite history to rectify the past (was Re: Concerns/questions around Software Heritage Archive)

2024-03-20 Thread Giovanni Biscuolo
history (of /all/ the
copies of the repository archived by SWH, **fork** included?)

The CNIL (the French data regulator) has been involved, but the author
does not trust the CNIL:

--8<---cut here---start->8---

The explanation I can come up with is that CNIL and Inria are friends,
and CNIL will never take action against Inria.

--8<---cut here---end--->8---

Last but NOT least: what is this "right to rectification"?
...simple:

--8<---cut here---start->8---

Art. 16 GDPR: Right to rectification

1. The data subject shall have the right to obtain from the controller
without undue delay the rectification of inaccurate personal data
concerning him or her. 2. Taking into account the purposes of the
processing, the data subject shall have the right to have incomplete
personal data completed, including by means of providing a supplementary
statement.

--8<---cut here---end--->8---
(https://gdpr-info.eu/art-16-gdpr/)

Simple... really?!?

First question: is the "deadname" of the author "inaccurate personal
data concerning him or her", or is it "just" the /accurate/ name the
person had before he or she changed it?

...but the most interesting part is the "suitable recital" n. 65:

--8<---cut here---start->8---

1. A data subject should have the right to have personal data concerning
him or her rectified and a ‘right to be forgotten’ where the retention
of such data infringes this Regulation or Union or Member State law to
which the controller is subject.

[...]

5. However, the further retention of the personal data should be lawful
where it is necessary, for exercising the right of freedom of expression
and information, for compliance with a legal obligation, for the
performance of a task carried out in the public interest or in the
exercise of official authority vested in the controller, on the grounds
of public interest in the area of public health, for archiving purposes
in the public interest, scientific or historical research purposes or
statistical purposes, or for the establishment, exercise or defence of
legal claims.

--8<---cut here---end--->8---
(https://gdpr-info.eu/recitals/no-65/)

Are SWH (and Guix, and... *I*) exercising our rights of /archiving/ and
/scientific or (and!) historical research/?  I say yes.

Last question: do SWH (and Guix, and *I*) have the right to archive and
redistribute free software for historical purposes?

But also: is the retention of the "deadname" even necessary for the
establishment, exercise or defence of legal claims about _copyright_
issues?

And also: is it my right to retain the integrity of the data structures
I obtained from copyright holders, or do I have to throw them away if
one of the copyright holders asks me to retroactively rewrite all
occurrences of his or her name on the basis of an asserted "right to
rectification"?

All in all: what rights are we talking about, please?!?

Loving, Giovanni



[3]
https://yhetil.org/guix/iytrYuvr9BcPdWG17PDP5SXyjrZzwBGx1sbh0BVcDZ8PAifSIMdPXPbuhhDu-2woPlaWmEWnSt09h4OravmRRBrMB5uDlXYtKtI0egEQX_k=@lendvai.name/#r

[4]
https://yhetil.org/guix/86d01304cc8957a2508e1d1732421b5e0f9ceeb5.ca...@planete-kraus.eu/



P.S.: I am DPO and copyright advisor at my tiny company, but IANAL :-D

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




on the bug tracker (Re: Guix Days: Patch flow discussion)

2024-02-29 Thread Giovanni Biscuolo
Hi Josselin,

Josselin Poiret  writes:

[...]

> One thing I would like to get rid of though is debbugs.

Given that a significant part of the Guix infrastructure is provided by
the GNU project, including the bug/issue tracker, and I don't think that
GNU will replace https://debbugs.gnu.org/ (or the forge, Savannah) with
something else, I suggest concentrating the Guix community's efforts on
giving contributors better user interfaces to Debbugs, e.g. Mumi (web
and CLI), instead of trying to get rid of it.

In other words: the "problem" is not the tool, it's the *interface*.

Please also consider that if the Guix project decided (I hope not) to
adopt a different bug/issue tracking system, then Someone™ would have to
administer it, and currently there are other pieces of core
infrastructure that need more resources, e.g. QA.

Speaking of interface features, I simply *love* the email-based
interface provided by Debbugs [1]; the web UI is also not bad, and the
Mumi one (https://issues.guix.gnu.org/) is even better.

But I'm curious: what bug tracker would you suggest instead of Debbugs?

Let's see what some "big players" are using...

> It causes a lot of pain for everyone, eg. when sending patchsets, it
> completely breaks modern email because it insists on rewriting
> DMARC-protected headers, thus needing to also rewrite "From:" to avoid
> DMARC errors.

I don't understand what "completely breaks modern email" means: could
you please point me to a document where this issue/bug is described?

> I've been following the Linux kernel development a bit closer this past
> year, and while there are things that need to improve (like knowing the
> status of a patchset in a maintainer's tree), they at least have a lot
> of tools that I think we should adopt more broadly:

You mention b4/lei and patchwork, but they are not bug/issue trackers.

The Linux kernel community is using https://bugzilla.kernel.org/;
Red Hat, Fedora, openSUSE and SUSE are also using Bugzilla.

Arch Linux has adopted GitLab issues.

Other alternatives:
https://en.wikipedia.org/wiki/Comparison_of_issue-tracking_systems

...or:
https://en.wikipedia.org/wiki/Bug_tracking_system#Distributed_bug_tracking

I personally like the idea that the bug/issue tracker is _embedded_
(integrated?) in the DVCS used by the project, Git in Guix case.

For this reason I find Tissue https://tissue.systemreboot.net/ an
interesting project for *public* issue/bug tracking systems, also
because Tissue is _not_ discussion-oriented: this means that
discussions are managed "somewhere else", because «It's much better to
have a clear succinct actionable issue report. This way, the issue
tracker is a list of clear actionable items rather than a mess of
unreproducible issues.»  [2]

Happy hacking! Gio'

[...]

[1] https://debbugs.gnu.org/server-control.html

[2] https://tissue.systemreboot.net/manual/dev/en/#section11795

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: Guix Days: Patch flow discussion

2024-02-28 Thread Giovanni Biscuolo
Hello Simon,

first and foremost: I'd like to say a big thank you to all the people
working in the Guix community...

...and apologise if I still cannot do more to help.

Simon Tournier  writes:

[...]

> Well, let me try to quickly summarize my conclusion of the session:
>
>  1. We have a social/organisational problem.
>
>  2. We have some tooling annoyances.
>
>
> The easy first: #2 about tools.  The email workflow is often cited as
> part of the issue.  That’s a false-problem, IMHO.

yes, we (as a community) have already had several discussions around the
false-problem named "the email workflow is too hard"; I also dared to
send a *very* lengthy analysis comparing it with the _so_called_ "pull
request model" [1]

Unfortunately I'm pretty sure that _this_ false issue will be cited
again and again and again when discussing "how to better help Guix
maintainers"

...unless one day (info "(guix) Submitting Patches") finally (briefly)
explains why the project is using an email-based workflow and not a "so
called PR workflow" (to understand why the PR workflow is "so called",
please read [1])

But all this discussion about the "email workflow" issue is even more
pointless when considering the commit authentication mechanism
_embedded_ in Guix since 2020; I recently studied this blog post:

https://guix.gnu.org/en/blog/2020/securing-updates/

and it states:

--8<---cut here---start->8---

To implement that, we came up with the following mechanism and rule:

1 The repository contains a .guix-authorizations file that lists the
 OpenPGP key fingerprints of authorized committers.

2 A commit is considered authentic if and only if it is signed by one of
 the keys listed in the .guix-authorizations file of each of its
 parents. This is the authorization invariant.

[...]

The authorization invariant satisfies our needs for Guix. It has one
downside: it prevents pull-request-style workflows. Indeed, merging the
branch of a contributor not listed in .guix-authorizations would break
the authorization invariant. It’s a good tradeoff for Guix because our
workflow relies on [patches carved into stone tablets] (patch tracker),
but it’s not suitable for every project out there.

--8<---cut here---end--->8---
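The quoted rule is compact; a minimal sketch of the check, with invented
names and a toy in-memory history (not Guix's actual implementation,
which walks the real commit graph and verifies OpenPGP signatures),
could look like this:

```python
def authentic(commit, commits):
    """Authorization invariant: a commit is authentic iff its signing
    key is listed in the .guix-authorizations of *each* of its parents.

    commit:  dict with 'parents' (list of ids) and 'signer' (key id).
    commits: id -> dict, also holding 'authorizations' (set of key ids,
             i.e. the content of .guix-authorizations at that commit).
    """
    parents = commit["parents"]
    if not parents:
        # The channel "introduction" commit is trusted axiomatically.
        return True
    return all(commit["signer"] in commits[p]["authorizations"]
               for p in parents)

# Toy history: A authorizes keys {k1, k2}; B drops k2.
commits = {
    "A": {"parents": [],    "signer": "k1", "authorizations": {"k1", "k2"}},
    "B": {"parents": ["A"], "signer": "k1", "authorizations": {"k1"}},
    "C": {"parents": ["B"], "signer": "k2", "authorizations": {"k1"}},
}
print(authentic(commits["B"], commits))  # True: k1 is authorized by A
print(authentic(commits["C"], commits))  # False: k2 was dropped by B
```

The sketch also shows why merging an unauthorized contributor's branch
breaks the invariant: the merge commit would have a parent whose
.guix-authorizations does not list the contributor's key.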

[patches carved into stone tablets] is a link to:

https://lwn.net/Articles/702177/
«Why kernel development still uses email»
By Jonathan Corbet, October 1, 2016 

an article with another ton of reasons why "all patch management tools
suck; email just sucks less".

Anyway, since Guix has been using the "authorization invariant" since
2020, the "email workflow" is embedded in Guix :-D

Am I missing something?

> Projects that use PR/MR workflow have the same problem.  For instance,
> Julia [1] has 896 open PR. 

[...]

> I will not speak about the channel ’nonguix’ but it gives another
> clue.

I will not speak about kubernetes, cited in the above cited LWN article,
I will not speak about Gerrit, also cited there...

[...]

> To be clear, the email workflow might add burden on submitter side but I
> am doubtful it is really part of the bottleneck for reviewing and
> pushing submissions.

The email workflow makes the reviewing workflow _extremely_ easy, given
a good MUA and a _little_ bit of self-discipline in following the /easy/
guidance in (info "(guix) Reviewing the Work of Others")

> Although the tools might add some unnecessary friction, the net of the
> issue is IMHO #1: reviewing is just boring and time-consuming.

This is the one and only reason.

[...]

I don't have anything to add, for now.


Happy hacking! Gio'


[1] id:87y1ha9jj6@xelera.eu aka
https://yhetil.org/guix/87y1ha9jj6@xelera.eu/

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




simple service extensions/composizions (Re: Guix Days: Patch flow discussion)

2024-02-28 Thread Giovanni Biscuolo
Tomas Volf <~@wolfsden.cz> writes:

> On 2024-02-06 13:09:15 +0100, Clément Lassieur wrote:
>> Hi!  Why is it more complicated with services?  You don't need forks at
>> all to use packages and services outside of Guix proper.
>
> For packages we have transformations, or I can just inherit.  But I am not 
> aware
> of such option for services (is there anything?).

Service composition? (info "(guix) Service Composition")

[...]

> Example: For long time the connman-configuration did not support a way to
> provide a configuration file (well, it still does not but that is not the
> point).  But I needed it.  So, what options I had?
 
I still have not tried it, but maybe it is possible to define something
like my/connman-service-type that extends the default one with all the
fields a user may need.

Maybe a Cookbook section could be useful

Happy hacking! Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Cuirass 1.2.0-2.7bcd3d0 (and manual): latest install image not available for download (wrong URL?)

2024-02-28 Thread Giovanni Biscuolo
Hello Tanguy,

IMO there is a bug in the CI UI

Tanguy LE CARROUR  writes:

> In order to test the latest installer, I went to the "latest download"
> page [1] and clicked on "x86_64-linux" under "GNU Guix System on Linux"
> and ended up on an error page [2]:
>
> """
> {"error":"Could not find the requested build product."}
> """
>
> [1]: https://guix.gnu.org/en/download/latest/

the exact URL is this one:

https://ci.guix.gnu.org/search/latest/ISO-9660?query=spec:images+status:success+system:x86_64-linux+image.iso

Looking at these results:
https://ci.guix.gnu.org/search?query=spec:images+status:success+system:x86_64-linux+image.iso

there are plenty of successfully built images.

Unfortunately the "Build outputs" URLs available on the details pages
are broken, so there is no way to download the images via
ci.guix.gnu.org (AFAIU)

> [2]: https://ci.guix.gnu.org/download/1907

I also tested:

- https://ci.guix.gnu.org/build/3395955/details pointing to
  https://ci.guix.gnu.org/download/1927

- https://ci.guix.gnu.org/build/3267415/details pointing to
  https://ci.guix.gnu.org/download/1894

and the result is the same:

--8<---cut here---start->8---

{"error":"Could not find the requested build product."}

--8<---cut here---end--->8---

[...]

> Is it the ISO that has to be tested? How can I download it?

Unfortunately the latest (devel) manual

https://guix.gnu.org/en/manual/devel/en/html_node/USB-Stick-and-DVD-Installation.html

is now also pointing to an invalid URL:

https://ftp.gnu.org/gnu/guix/guix-system-install-f29f80c.x86_64-linux.iso

I'm almost sure that not so long ago it was pointing to a valid file.

Are the two issues linked, or do I need to file a separate bug for the
manual?

Sorry I don't know how to help to fix it.

Best regards, Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: declarative partition and format with Guix (was Re: Guix System automated installation)

2024-02-28 Thread Giovanni Biscuolo
Giovanni Biscuolo  writes:

[...]

>> but I think this is close to the right track.  Either operating-system
>> should be extended to support things like disk partitioning,

the library for doing this with Guile is guile-parted (packaged in
Guix); it's used by the Guix Installer:

https://git.savannah.gnu.org/cgit/guix.git/tree/gnu/installer/parted.scm

AFAIU this (parted.scm above) is the starting point (the Guix library)
that can be used to develop a program that automates the disk
partitioning and filesystem creation based on a gexp (disk-layout.scm ?)
declaration.

>> and effect those changes at reconfigure time (with suitable
>> safeguards to avoid wrecking existing installs),
>
> I would prefer not, such "reconfigurations" should be done "out of band"
> and not "in band", IMHO

Side note: there is a recent discussion on a "Resize Filesystem Service"
at this thread
id:zr0p278mb0268910b4fe39a48112ce740c1...@zr0p278mb0268.chep278.prod.outlook.com
[1]

[...]

Happy hacking! Gio'



[1] 
https://yhetil.org/guix/zr0p278mb0268910b4fe39a48112ce740c1...@zr0p278mb0268.chep278.prod.outlook.com/

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




declarative partition and format with Guix (was Re: Guix System automated installation)

2024-02-27 Thread Giovanni Biscuolo
Hello Ian,

Ian Eure  writes:

> Giovanni Biscuolo  writes:

[...]

>> Please consider that a preseed file is very limited compared to a
>> full-fledged operating-system declaration since the latter contains
>> the declaration for *all* OS configuration, not just the installed
>> packages.
>
> I appreciate where you’re coming from, I also like the one-file 
> system configuration, but this is inaccurate.

Yes you are right, I completely misrepresented the functionality of the
Debian preseed utility, sorry! (...and I used that in a remote past)

[...]

> installed packages.  Right now, Debian’s system allows you to do 
> things which Guix does not.

[...]

> means you can use a preseed file to tell the installer to 
> partition disks, set up LUKS-encrypted volumes (and specify one or 
> more passwords for them), format those with filesystems

Yes, this is what is missing from the Guix installer system

> With Debian, I can create a custom installer image with a preseed
> file, boot it, and without touching a single other thing, it’ll
> install and configure the target machine, and reboot into it.  That
> boot-and-it-just-works experience is what I want from Guix.

I understand that it's just a workaround, but you can achieve this
boot-and-it-just-works experience (if there are no bugs in the
script/preseed) with a simple bash script that automates the "manual
installation"

I wrote it in bash because I'm not able to write it in Guile and/or
extend the "guix system" command to manage the missing bits, but it is a
solution (more of a workaround, for now)

[...]

> There’s no facility for specifying disk partitioning or *creating* 
> filesystems in the system config -- it can only be pointed at ones 
> which have been created already.

Yes: those facilities are missing, we (still?) cannot do that
declaratively... let's do that imperatively, automatically :-)

[...]

>> I would really Love So Much™ to avoid writing imperative bash scripts
>> and just write Scheme code to be able to do a "full automatic" Guix
>> System install, using a workflow like this one:
>>
>> 1. guix system prepare --include preseed.scm disk-layout.scm /mnt
>>
>> where disk-layout.scm is a declarative gexp used to partition, format
>> and mount all needed filesystems
>>
>> the resulting config.scm would be an operating-system declaration with
>> included the contents of preseed.scm (packages and services
>> declarations)
>>
>> 2. guix system init config.scm /mnt (already working now)
>>
>> ...unfortunately I'm (still?!?) not able to contribute such code :-(
>>
>
> I don’t think there’s any need for a preseed.scm file, and I’m not 
> sure what would be in that,

preseed.scm is "just" the part of the operating-system declaration
without the (bootloader [...]), (file-systems [...]) and (swap-devices
[...]) declarations, which would be automatically generated by "guix
system prepare" based on disk-layout.scm

> but I think this is close to the right track.  Either operating-system
> should be extended to support things like disk partitioning, and
> effect those changes at reconfigure time (with suitable safeguards to
> avoid wrecking existing installs),

I would prefer not, such "reconfigurations" should be done "out of band"
and not "in band", IMHO

> or the operating-system config could get 
> embedded in another struct which contains that, similar to the 
> (image ...) config for `guix system image'.  I think there are 
> some interesting possibilities here: you could change your 
> partition layout and have Guix resize them

Root (/) partition resizing must be done with the root filesystem unmounted, no?

Also, since resizing (shrinking?) a filesystem is a very sensitive
operation, I'd exclude it from the "normal" operations done via "guix
system reconfigure"... it's more a job for "guix system prepare..." with
one or more disk partitions (e.g. /home) resized/shrunk or kept as is,
_without_ (re)formatting the file system.

One interesting thing that could be done at "guix system prepare" time
is to restore one or more filesystem content from a (possibly remote)
backup, useful in a disaster recovery scenario.

> / create new ones for you.

[...]

Meanwhile: WDYT about working together on a simple _configurable_ bash
script to help users automate the very first installation of a Guix
System, and trying to upstream it?

Thanks! Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Guix System automated installation

2024-02-26 Thread Giovanni Biscuolo
Hello Ian,

I'm a little late to this discussion, sorry.

I'm adding guix-devel since it would be nice if some Guix developers had
something to add on this matter; for this reason I'm leaving all
previous messages intact

Csepp  writes:

> Ian Eure  writes:
>
>> Hello,
>>
>> On Debian, you can create a preseed file containing answers to all the 
>> questions
>> you’re prompted for during installation, and build a new install image which
>> includes it.  When booted, this installer skips any steps which have been
>> preconfigured, which allows for either fully automated installation, or 
>> partly
>> automated (prompt for hostname and root password, but otherwise automatic).
>>
>> Does Guix have a way to do something like this?  The declarative config is 
>> more
>> or less the equivalent of the Debian preseed file, but I don’t see anything 
>> that
>> lets you build an image that’ll install a configuration.

When using the guided installation (info "(guix) Guided Graphical
Installation"), right before the actual installation on target (guix
system init...) you can edit the operating-system configuration file:
isn't it something similar to what you are looking for?

Please consider that a preseed file is very limited compared to a
full-fledged operating-system declaration since the latter contains the
declaration for *all* OS configuration, not just the installed packages.

Alternatively, you can use the (info "(guix) Manual Installation")
method and copy a pre-configured (preseed? :-) ) operating-system file,
but you have to be very careful (see (info "(guix) Proceeding with the
Installation")).

>> I see there’s `guix deploy’, but that requires an already-installed GuixSD to
>> work, which isn’t helpful for getting it installed in the first place.
>>
>> Thanks,
>>
>>  — Ian

I'm also interested in a way to fully automate the installation [1] of
Guix System hosts and I've developed a small bash script to help me (see
below).

The idea is to use the script to install a very basic Guix System on the
machine and then use "guix deploy" (or deploy "manually") for a
full-fledged configuration.

My initial motivation was (and still is the main one) to allow me to
install Guix System on rented hosts (dedicated servers or VPSs) provided
by vendors that do not have Guix System in the list of operating systems
users can install on their machines: in this case users can boot
machines in rescue mode (AFAIU all hosters provide a rescue system) and
install Guix System in a similar way as described in (info
"(guix-cookbook) Running Guix on a Linode Server") or (info
"(guix-cookbook) Running Guix on a Kimsufi Server")

You can find the script here:
https://gitlab.com/softwareworkers/swws/-/blob/master/infrastructure/hosts/cornouiller/bootstrap-guix.sh?ref_type=heads
(that is the last "version" I used; for now I write a script for every
machine I need... I still have to make this script generic by putting
all needed config variables in an external file)

Please consider it's still in early development, although I've already
tested it both locally and with real rented machines, both bare metal
and VPS.

After some tests I realized that, with a few changes, I could use such a
script both on a rescue system and when installing using the Guix
Installer ISO, selecting a full manual installation (see (info
"(guix) Manual Installation")), and then running the script.

> guix system image is maybe closer, but it doesn’t automate everything that the
> installer does.
> But the installer can be used as a Scheme library, at least in theory.  The 
> way
> I would approach the problem is by creating a Shepherd service that runs at 
> boot
> from the live booted ISO.

I would really Love So Much™ to avoid writing imperative bash scripts
and just write Scheme code to be able to do a "full automatic" Guix
System install, using a workflow like this one:

1. guix system prepare --include preseed.scm disk-layout.scm /mnt

where disk-layout.scm is a declarative gexp used to partition, format
and mount all needed filesystems

the resulting config.scm would be an operating-system declaration with
included the contents of preseed.scm (packages and services
declarations)

2. guix system init config.scm /mnt (already working now)

...unfortunately I'm (still?!?) not able to contribute such code :-(


Happy hacking! Gio'



[1] that means: with almost zero intervention needed from the user...
the user just needs to _design_ the installation.

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: cannot boot after installation on VPS (via rescue system)

2024-02-22 Thread Giovanni Biscuolo
Hi Saku!

Saku Laesvuori  writes:

>>>> Now I'm trying to use it on two VPS from two different vendors, booted
>>>> in rescue mode, but after the installation (via bootstrap-guix.sh) when
>>>> I reboot the VPS I get the usual grub menu but the boot process suddenly
>>>> fails with this error (manually copied from web console, sorry for
>>>> possible typos):
>>>>
>>>> [...]
>> 
>> I'm not on Linode, I'm working on OVH and Hetzner VPSs
>
> I had to add "virtio_scsi" to initrd-modules to get Guix to boot on a
> Hetzner VPS. Maybe that could be the problem here, too?

Yes: you got it!

I asked Hetzner support to have the Guix installer ISO as "custom ISO"
and was able to do a guided install... and it (re)booted fine.  The
first thing that got my attention was this line in the working
config.scm:

--8<---cut here---start->8---

(initrd-modules (append '("virtio_scsi") %base-initrd-modules))
 
--8<---cut here---end--->8---

...and it makes sense! [1]

I also missed that line in (info "(guix-cookbook) Running Guix on a
Linode Server") ;-(

I still have not tested that fix with my bootstrap-guix.sh installation
script, but I'm pretty sure it will work.

Thank you so much! Gio'


[1] I also need to enhance my qemu scripts needed for testing

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: cannot boot after installation on VPS (via rescue system)

2024-02-21 Thread Giovanni Biscuolo
Hi Wojtek!

Wojtek Kosior  writes:

>> --8<---cut here---start->8---
>> 
>> Scanning for Btrfs filesystems
>> ice-9/boot9.scm:1685:16: In procedure raise-exception:
>> In procedure mount: No such file or directory
>> GRUB loading...
>> Entering a new prompt.  Type ',bt' for a backtrace or ',q' to continue.
>> [...]
>> scheme@(guile-user)> ,bt
>> In gnu/build/linux-boot.scm:
>> 637:8  3 (_)
>> 435:8  2 (mount-root-filesystem "/dev/sda3" "btrfs" # _ #:flags ?)
>> In unknown file:
>>1 (mount "/dev/sda3" "/root" "btrfs" 0 "compress=zstd")
>> In ice-9/boot9.scm:
>>   1685:16: 0 (raise-exception _ #:continuable? _)
>> 
>> --8<---cut here---end--->8---
>
> Maybe the device file is called different from /dev/sda3?

Maybe, but I also tried using UUID (the usual method I use) and label,
both failing... I'll investigate

> On one VPS of mine (which also happens to have Guix installed via
> rescue mode) the root is mounted from /dev/vda1.

Out of curiosity: what's the hoster, please?

>> In particular, I don't understand why the boot script is trying to mount
>> the root filesystem at "/root" and not at "/" as it should: am I missing
>> something?
>
> Linux-based systems typically start with initrd filesystem mounted at
> /.  They then mount the real root at some subdirectory of / and use
> either chroot or pivot-root system call to make the processes see it as
> if it were mounted at / in the first place.

Yes! Thank you for your explanation: I checked gnu/build/linux-boot.scm
and it's just as you pointed out; I simply overlooked that the error was
in the "initrd phase"... and now I'm starting to barely understand
what's happening

> I'm not an expert in early boot process so please forgive me any
> mistakes I might have made in this explanation :)

No mistakes :-D

Thank you for your help! Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: cannot boot after installation on VPS (via rescue system)

2024-02-21 Thread Giovanni Biscuolo
Hi Joshua!

jbra...@dismail.de writes:

[...]

>> Now I'm trying to use it on two VPS from two different vendors, booted
>> in rescue mode, but after the installation (via bootstrap-guix.sh) when
>> I reboot the VPS I get the usual grub menu but the boot process suddenly
>> fails with this error (manually copied from web console, sorry for
>> possible typos):
>
> I just logged into my linode server...your script defaults to a btrfs
> filesystem right?

Right, and now that you've pointed that out, I realize it may not be a
good idea for a QEMU virtual machine [1]: I'll switch to ext4 to be sure

> When I tried to add an additional disk in linode just now, the only
> supported filesystem was ext4.  Does linode support btrfs?

I'm not on Linode, I'm working on OVH and Hetzner VPSs

Anyway I don't think that the boot issue is connected with BTRFS, but
I'll investigate!

Thank you! Gio'

[...]

[1] I don't know the host filesystem, but if it's BTRFS, placing a COW
(qcow2) image on top of another COW filesystem gives very bad
performance, see
https://www.qemu.org/docs/master/system/qemu-block-drivers.html#cmdoption-qcow2-arg-nocow


-- 
Giovanni Biscuolo

Xelera IT Infrastructures




cannot boot after installation on VPS (via rescue system)

2024-02-21 Thread Giovanni Biscuolo
Hello,

following the good guidelines from (info "(guix-cookbook) Running Guix
on a Kimsufi Server") and (info "(guix-cookbook) Running Guix on a
Linode Server") I'm developing a shell script to automate the "manual"
installation of Guix on bare metal and VPS, you can find it attached to
this email as bootstrap-guix.sh or at this git repo URL:
https://gitlab.com/softwareworkers/swws/-/blob/master/infrastructure/hosts/cornouiller/bootstrap-guix.sh?ref_type=heads

#!/bin/bash
# Copyright © 2023 Giovanni Biscuolo 
#
# bootstrap-guix.sh is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# bootstrap-guix.sh is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# General Public License for more details.
#
# A copy of the GNU General Public License is available at
# <http://www.gnu.org/licenses/>.

# bootstrap-guix.sh is a very opinionated script to install Guix
# System on a host booted in "rescue" mode
#
# The system is installed on a single disk BTRFS filesystem

# Used variables MUST be initialized.
set -o nounset

# -
# Variables

# Disks
# TODO: transform this in array TARGET_DISKS[TARGET_NUMDISKS], for multi disk setups
export TARGET_NUMDISKS=1
export TARGET_DISK_PART_SUFFIX=""
export TARGET_DISK1="/dev/sda"
export TARGET_SWAP_SIZE="6GB"

# Hostname
export TARGET_HOSTNAME="cornouiller"

# User and pub key (only one admin user for basic installation)
export TARGET_USERNAME="g"
export TARGET_USERGECOS="Giovanni Biscuolo"
TARGET_USERKEY="ssh-ed25519 C3NzaC1lZDI1NTE5ICqpr0unFxPo2PnQTmmO2dIUEECsCL3vVvjhk5Dx80Yb g...@xelera.eu"

# 
# Source os-release information
test -e /etc/os-release && os_release='/etc/os-release' || os_release='/usr/lib/os-release'
. "${os_release}"
echo "### INFO - Detected GNU/Linux distribution: ${PRETTY_NAME}."

# -
# Get package dependencies
export AUTO_INSTALLED=0

if [ $AUTO_INSTALLED -eq 0 ]; then
# Install dependencies with Guix, if available
if command -v guix &> /dev/null; then
	echo "### INFO - Installing dependencies via guix..."
	guix install util-linux parted dosfstools btrfs-progs gnupg
	export AUTO_INSTALLED=1
	echo "### END - Installing dependencies via guix."
fi
fi

if [ $AUTO_INSTALLED -eq 0 ]; then
# Install dependencies with apt, if available
if command -v apt &> /dev/null; then
	echo "### INFO - Installing dependencies via apt..."
	apt install util-linux parted dosfstools btrfs-progs gnupg
	export AUTO_INSTALLED=1
	echo "### END - Installing dependencies via apt."
fi
fi

# Give up installing and notify users
if [ $AUTO_INSTALLED -eq 0 ] ; then
echo "### INFO - I'm not able to automatically install dependencies on ${PRETTY_NAME}."
echo "Please check you have the following commands available: wipefs, parted, mkfs.fat, mkswap, mkfs.btrfs, btrfs, gpg."
echo "### END - Checking dependencies."
fi

# Abort on any error, from now
set -e 

# ###
# DO NOT EDIT these variables
# unless debugging

# (minimal) OS configuration file name
export OS_CONFIG_FILE="bootstrap-config.scm"

# Target OS mount point
export TARGET_MOUNTPOINT="/mnt/guix"

# -
# Download the Guix install script
echo "### START - Downloading guix install script."
wget https://git.savannah.gnu.org/cgit/guix.git/plain/etc/guix-install.sh
chmod +x guix-install.sh
echo "### END - Downloading guix install script."  

# -
# Prepare the target system filesystem

# Wipe the disks
# TODO: use the array TARGET_DISKS[]
echo "### START - Wiping disks."
wipefs -af ${TARGET_DISK1}*
echo "### END - Wiping disks."

# Partition the disks
echo "### START - EFI system check."
if [ -e "/sys/firmware/efi/efivars" ]; then
IS_EFI=true
echo "System is EFI based. ($IS_EFI)"
else
IS_EFI=false
echo "System is BIOS grub based. ($IS_EFI)"
fi
echo "### END - EFI system check."

## Disk 1
echo "### START - partitioning ${TARGET_DISK1}."
parted ${TARGET_DISK1} --align=opt -s -m -- mklabel gpt

# partition p1 will be system boot
if $IS_EFI;

Re: guix installation why internet connection required?

2024-02-05 Thread Giovanni Biscuolo
Hi Maxim and v...@mail-on.us,

I'm including 43...@debbugs.gnu.org to keep track of the discussion on
this feature request (Add the ability to install GuixSD offline)

Maxim Cournoyer  writes:

> v...@mail-on.us writes:
>
>> x86 x64 gnu guix system 1.4.0 iso requires internet connection in order to 
>> get
>> installed. Same goes for i686 iso.
>>
>> Why is that so? Why is there no
>> iso option for installing off line? Thanks.
>
> There's this ticket about the same: #43049.  If I remember correctly it
> may be related by the Guix binary inside the ISO image being from the
> Guix package of the Guix used to generate it, perhaps.

Sorry I don't understand the problem, could you expand please?

The guix (and daemon) versions are those of the channel used when
creating the install .iso image; booting the 1.4.0 installer we get a
"guix version" and "guix describe" value of 989a391...

Also, the /gnu/store (755MB on the 1.4.0 installer image) has all the
software needed to run the installer; when installing the (same) Linux
kernel on the target disk, for example, why would the daemon download
the same substitute when it's already in the store?

Obviously a connection to a substitute server is needed if the user
chooses to install some software not already in the store, so the point
should "just" be to have in the store all the software the installer
allows the user to install.

> That seems like a tricky problem to solve.

...I feel like I'm missing something important here :-)

Thanks! Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures


signature.asc
Description: PGP signature


consider "git describe"... harmful? (if misused)

2024-02-05 Thread Giovanni Biscuolo
Hello developers,

Ipse dixit: a tag is a tag is a tag.

Sorry to stress this, but AFAIU "git describe" and its variants are
(mis)used by some (many?) to obtain the last revision number of packages
from a git tag on a repo, even in a few upstream build config/scripts
(patched in Guix); here are just a few examples I've observed from
messages in this mailing list and our package definitions:

- https://yhetil.org/guix/7a759ffb-fca8-478d-a4aa-08e6b674d...@archlinux.org:
  `git describe --tags`, which is often used for --version output
  (especially in Go projects)

- https://yhetil.org/guix/87ediha5p0.wl-hako@ultrarare.space: I usually
  obtain the revision number from the output of 'git describe --tags', I
  think it's fine to use it when available.

- https://yhetil.org/guix/c93c18e5-8e01-45a0-b79f-05d72f6f8...@archlinux.org
  The output of `git describe --always --tags --dirty` was also embedded.

Some code/comments I got by running "find . -type f -exec grep --color=auto
-nH --null -e "git describe" \{\} +" in "/gnu/packages", in
Emacs:

- ./audio.scm:751: ;; Ardour expects this file to exist at build time.  The
   revision is the output of git describe HEAD | sed
   's/^[A-Za-z]*+//'

- ./build-tools.scm:589: (substitute* "src/tup/link.sh" (("`git
  describe`") ,version))
 
- ./linux.scm:7263: ;; the checkout lacks a .git directory, breaking ‘git
  describe’.
  
- ./axoloti.scm:500: ;; TODO: this is the output of: git describe --long
  --tags --dirty --always

IMHO "git describe" should never be used to obtain the last revision
for the reasons I explained in my previous message (see a quote below):
IF you get it right is ONLY by chance (probably it's most of the times),
not by **design**; executive summary:

1. "git describe [--tag]" has a bug and doesn't traverse the graph in
topological order; for the Guix git repo this means that now the last
"git describe" tells us something like "v1.3.0-53609-gc70c513317" (the
number of commits and the commit hash may vary depending on the last "git
pull"), not something like...

2. it is NOT guaranteed that the last tag reported by "git describe
[--tag]" (even if the above-mentioned bug is resolved) is the one
corresponding to a released revision of the software, since tags (even
annotated ones) can be added by repo committers for any reason they find
useful; e.g. the last tag committed for the Guix repo is
base-for-issue-62196.  If and ONLY IF committers use a recognised
pattern for the tag - e.g. v<version> - we can get the last (tagged)
revision from git (see below for alternative to "

Giovanni Biscuolo  writes:

[...]

> The upstream bug report (and a reproducer) is this one:
> «Subject: [BUG] `git describe` doesn't traverse the graph in topological
> order»
> https://lore.kernel.org/git/ZNffWAgldUZdpQcr@farprobe/
>
> Another user also reported the issue and a reproducer:
> https://public-inbox.org/git/ph0pr08mb773203ce3206b8defb172b2f94...@ph0pr08mb7732.namprd08.prod.outlook.com/
>
> The "executive summary" is that "git describe" gets the count of "fewest
> commits different from the input commit-ish" wrong (see also previous
> messages in this thread for details).
>
> Anyway, even if this bug was solved, I'd warmly suggest NOT to base the
> check for the latest stable Guix commit (usually tagged as v[0-9]*) on
> the result of "git describe".
>
> Today, if "guix describe"

I mean "git describe", sorry!

> had no bugs, the correct result would be:
> "base-for-issue-62196"... AFAIU :-)
>
> This is a reproducer:
>
> --8<---cut here---start->8---
>
> $ git describe $(git rev-list --tags --max-count=1)
> base-for-issue-62196
>
> --8<---cut here---end--->8---
>
> To get the value corresponding to the latest tagged version, we should
> restrict the list of tags to the ones matching the "v[0-9]*" regexp:
>
> --8<---cut here---start->8---
>
> $ git describe $(git rev-list --tags="v[0-9]*" --max-count=1)
> v1.4.0
>
> --8<---cut here---end--->8---

More efficient alternative:

--8<---cut here---start->8---

$ git tag --list 'v*' --sort=-creatordate | head -1
v1.4.0

--8<---cut here---end--->8---
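A hedged variant of the same idea: sorting tags by version number rather
than creation date makes the result independent of when (or in what
order) the tags were created.  This is a self-contained sketch against a
throwaway repository, not the Guix repo itself:

```shell
#!/bin/sh
# Sketch: pick the highest release tag by *version* sort, so the result
# does not depend on tag creation dates.  Uses a throwaway repository.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m 'initial commit'
# v1.4.0 is created last, but v1.10.0 is the highest version:
git tag v1.3.0
git tag v1.10.0
git tag v1.4.0
git tag --list 'v[0-9]*' --sort=-version:refname | head -1   # → v1.10.0
```

Note that a creation-date sort would be fooled by the late v1.4.0 tag
here, while the version sort is not.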

[...]

Should we add some notes (a footnote?) in our Guix manual?

WDYT?

-- 
Giovanni Biscuolo

Xelera IT Infrastructures


signature.asc
Description: PGP signature


Re: bug#63775: git describe on current master says: v1.3.0-38775-g6192acf8b7

2024-02-03 Thread Giovanni Biscuolo
Hi Jonathan,

I'm CC'ing guix-devel because I suspect many users who cloned/updated
the Guix repo are having the same results... and concerns.

This is a git bug, not an issue with our repo, and for this reason (I
hope) I'm closing this bug; please see below.

Jonathan Brielmaier via Bug reports for GNU Guix 
writes:

> Hm, I'm hitting this bug while trying to work on the openSUSE package.
> They offer a way to build RPM packages from the most recent master
> commit, but it's get the wrong version (1.3.0 instead of 1.4.0) due to
> this `git describe` result.

As pointed out by Simon last June the result of "git describe" is not
what users should get given the "Search strategy" documented in the
command manual: https://git-scm.com/docs/git-describe#_search_strategy:

--8<---cut here---start->8---

If multiple tags were found during the walk then the tag which has the
fewest commits different from the input commit-ish will be selected and
output. Here fewest commits different is defined as the number of
commits which would be shown by git log tag..input will be the smallest
number of commits possible.

--8<---cut here---end--->8---

The upstream bug report (and a reproducer) is this one:
«Subject: [BUG] `git describe` doesn't traverse the graph in topological
order»
https://lore.kernel.org/git/ZNffWAgldUZdpQcr@farprobe/

Another user also reported the issue and a reproducer:
https://public-inbox.org/git/ph0pr08mb773203ce3206b8defb172b2f94...@ph0pr08mb7732.namprd08.prod.outlook.com/

The "executive summary" is that "git describe" gets the count of "fewest
commits different from the input commit-ish" wrong (see also previous
messages in this thread for details).

Anyway, even if this bug was solved, I'd warmly suggest NOT to base the
check for the latest stable Guix commit (usually tagged as v[0-9]*) on
the result of "git describe".

Today, if "guix describe" had no bugs, the correct result would be:
"base-for-issue-62196"... AFAIU :-)

This is a reproducer:

--8<---cut here---start->8---

$ git describe $(git rev-list --tags --max-count=1)
base-for-issue-62196

--8<---cut here---end--->8---

To get the value corresponding to the latest tagged version, we should
restrict the list of tags to the ones matching the "v[0-9]*" regexp:

--8<---cut here---start->8---

$ git describe $(git rev-list --tags="v[0-9]*" --max-count=1)
v1.4.0

--8<---cut here---end--->8---

To browse all the tags there is the "git tag" command, for example to
have the list and description of every Guix released version:

--8<---cut here---start->8---

$ git tag -l "v[0-9]*" --sort=-creatordate -n
v1.4.0  GNU Guix 1.4.0.
v1.4.0rc2   GNU Guix 1.4.0rc2.
v1.4.0rc1   GNU Guix 1.4.0rc1.
v1.3.0  GNU Guix 1.3.0.
v1.3.0rc2   GNU Guix 1.3.0rc2.
v1.3.0rc1   GNU Guix 1.3.0rc1.
v1.2.0  GNU Guix 1.2.0.
v1.2.0rc2   GNU Guix 1.2.0rc2.
v1.2.0rc1   GNU Guix 1.2.0rc1.
v1.1.0  GNU Guix 1.1.0.
v1.1.0rc2   GNU Guix 1.1.0rc2.
v1.1.0rc1   GNU Guix 1.1.0rc1.
v1.0.1  GNU Guix 1.0.1.
v1.0.0  GNU Guix 1.0.0.
v0.16.0 GNU Guix 0.16.0.
v0.15.0 GNU Guix 0.15.0.
v0.14.0 GNU Guix 0.14.0.
v0.13.0 GNU Guix 0.13.0.
v0.12.0 GNU Guix 0.12.0
v0.11.0 GNU Guix 0.11.0.
v0.10.0 GNU Guix 0.10.0.
v0.9.0  GNU Guix 0.9.0.
v0.8.3  GNU Guix 0.8.3.
v0.8.2  GNU Guix 0.8.2.
v0.8.1  GNU Guix 0.8.1.
v0.8GNU Guix 0.8.
v0.7GNU Guix 0.7.
v0.6GNU Guix 0.6.
v0.5GNU Guix 0.5.
v0.4GNU Guix 0.4.
v0.3GNU Guix 0.3.
v0.2    GNU Guix 0.2.
v0.1GNU Guix 0.1.
v0.0Guix 0.0, initial announcement.

--8<---cut here---end--->8---

HTH!

Happy hacking, Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: Git-LFS or Git Annex?

2024-01-28 Thread Giovanni Biscuolo
Hi Nicolas,

Nicolas Graves  writes:

[...]

> This is not always true. Git-LFS also has the concept of Custom Transfer
> Agents, which in some cases do not need a running server. One example is
> lfs-folderstore, which can simply use a remote directory as a LFS
> remote.

Thanks, I didn't know about custom transfer agents; their use without an
API server is documented here:

--8<---cut here---start->8---

In some cases the transfer agent can figure out by itself how and where
the transfers should be made, without having to query the API server. In
this case it's possible to use the custom transfer agent directly,
without querying the server, by using the following config option:
 
 lfs.standalonetransferagent, lfs.<url>.standalonetransferagent

Specifies a custom transfer agent to be used if the API server URL
matches as in "git config --get-urlmatch lfs.standalonetransferagent
<apiurl>". git-lfs will not contact the API server. It instead sets
stage 2 transfer actions to null. "lfs.<url>.standalonetransferagent"
can be used to configure a custom transfer agent for individual
remotes. "lfs.standalonetransferagent" unconditionally configures a
custom transfer agent for all remotes. The custom transfer agent must be
specified in a "lfs.customtransfer.<name>" settings group.

--8<---cut here---end--->8---
(https://github.com/git-lfs/git-lfs/blob/main/docs/custom-transfers.md#using-a-custom-transfer-type-without-the-api-server)

some examples:

1. git-lfs-agent-scp: A custom transfer agent for git-lfs that uses scp
   to transfer files. This transfer agent makes it possible to use
   git-lfs in situations where the remote only speaks ssh. This is
   useful if you do not want to install a git-lfs server. (MIT license,
   written in C, URL: https://github.com/tdons/git-lfs-agent-scp)

2. git-lfs-rsync-agent: The rsync git-lfs custom transfer agent allows
   transferring the data through rsync, for example using SSH
   authentication. (MIT license, written in Go, URL:
   https://github.com/excavador/git-lfs-rsync-agent)

3. git-lfs-agent-scp-bash: A custom transfer agent for git-lfs that uses
   scp to transfer files. This is a self-contained bash script designed
   for seamless installation, requiring no prerequisites with the
   exception of the external command scp. It enables using git-lfs even
   if you cannot use http/https but only ssh. (MIT License, written in
   bash, URL: https://github.com/yoshimoto/git-lfs-agent-scp-bash)

So yes: we could use git-lfs without a git-lfs server and set an rsync
or scp transfer agent for each remote (documenting it for users, since
this must be done client-side).
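As a minimal, untested sketch of that client-side wiring: the agent name
"scp-agent", the binary name "git-lfs-agent-scp" and the destination
path are illustrative assumptions based on the git-lfs-agent-scp README,
not a verified setup.

```shell
#!/bin/sh
# Sketch: configure a standalone (server-less) git-lfs transfer agent
# in a throwaway repository.  Only plain "git config" is exercised;
# the agent binary itself is never invoked here.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
# Bypass the LFS API server for all remotes...
git config lfs.standalonetransferagent scp-agent
# ...and describe the custom agent: executable and its arguments
# (hypothetical destination).
git config lfs.customtransfer.scp-agent.path git-lfs-agent-scp
git config lfs.customtransfer.scp-agent.args 'user@host:/srv/lfs-store'
git config --get lfs.standalonetransferagent   # → scp-agent
```

Since this is per-repository configuration, every user cloning the repo
would have to apply it, which is why it needs documenting.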

It's not at all as powerful as the location tracking features of
git-annex but... doable :-)

[...]

>> Another important limitation of Git-LFS is that you cannot delete
>> (remotely stored) objects [1], with git-annex is very easy.
>
> Probably true, haven't encountered the use-case yet.

IMHO this is a very important feature when you have to manage media
archives.

[...]

> Just a note on upsides of Git-LFS :
> - integration with git is better. A special magit extension to use
> git-lfs is not needed, whereas it is with git-annex.

true :-D

> - less operations: once I know which files will be my media files, I
> have less headaches (basically the exact git experience, you don't have
> to think about where I should `git add` or `git annex add` a file).

it's the same with git-annex: you just have to configure/distribute a
.gitattributes file, e.g.:

--8<---cut here---start->8---

* annex.largefiles=(largerthan=5Mb)
* annex.largefiles=(not(mimetype=text/*))

--8<---cut here---end--->8---

see https://git-annex.branchable.com/tips/largefiles/ for a description
of this feature

> It's indeed less copyleft though. Simpler, but also maybe less adapted
> to this use-case.

With git-annex everyone can set up a "git-annex enabled" server
(although haskel dependency is a limitation since it's unsupported in
many architectures)... or use one of the available special remotes.

Thanks! Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: Git-LFS or Git Annex?

2024-01-25 Thread Giovanni Biscuolo
Hi pukkamustard,

git-annex is complex but not so complicated once you learn the two
fundamental concepts (sorry if I say something obvious to you!):

1. only the names of the files and some other metadata are stored in a
git repository when using git-annex, the content is not; when you "git
annex add some-media" it is (locally!) stored in a special folder named
.git/annex/

2. content can be transferred (get, put) from one repository to another,
and the tool used to transfer (rsync, cp or curl) is automatically
chosen by git-annex depending on the remote where the data is; there
are also many "special remotes" available for data transfer (see
https://git-annex.branchable.com/walkthrough/#index11h2 for an ssh
git-annex remote)

See https://git-annex.branchable.com/how_it_works/ for a general
description and https://git-annex.branchable.com/internals/ for a
description of the content of each git-annex managed (and reserved)
directory.

Just to make it clear, you can have one or more "plain" git remotes just
for location tracking and one or more git-annex remotes (also special
remotes) for file transfers (and location tracking if they are also
regular git remotes).

pukkamustard  writes:

[...]

> It ended up sharing remotes that are no longer existant or
> not-accessible and somehow it was hard/impossible to remove reference
> to those remotes (afaiu Git Annex remotes can only be marked as "dead"
> and not removed -
> https://git-annex.branchable.com/git-annex-dead/). As the number of
> such remotes increased, I became more and more confused.

https://git-annex.branchable.com/git-annex-dead/:
--8<---cut here---start->8---

This command exists to deal with situations where data has been lost,
and you know it has, and you want to stop being reminded of that fact.

When a repository is specified, indicates that the repository has been
irretrievably lost, so it will not be listed in eg, git annex whereis.

--8<---cut here---end--->8---

If you want git-annex to definitely forget about dead repositories
(throwing away historical data about past locations of files) you can
use "git-annex forget --drop-dead"

If you want to remove a remote (and stop syncing with it) you can do it
as you do with any git remote: "git remote rm <name>"
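For the plain-git part, a minimal sketch ("olddisk" and its path are
hypothetical); in a real annex repo the "git annex dead" / "git annex
forget --drop-dead" cleanup would come first, which is not shown here:

```shell
#!/bin/sh
# Sketch: remove a remote you no longer sync with.  "olddisk" and the
# repository path are hypothetical placeholders.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git remote add olddisk /mnt/olddisk/repo.git
git remote rm olddisk
git remote   # prints nothing: the remote is gone
```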

[...]

> Still, I would recommend to NOT store the videos in a remote Git
> repository but a publicly accessible rsync server as a Git Annex
> special remote (https://git-annex.branchable.com/special_remotes/).

Good catch!

This way we can still use the current Savannah git hosted remote (not
supporting git-annex-shell, AFAIK) for location tracking and the same
(or more) rsync servers we are using to store media.

Thanks! Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: Git-LFS or Git Annex?

2024-01-24 Thread Giovanni Biscuolo
Hi Ludo’

Ludovic Courtès  writes:

[...]

> The question boils down to: Git-LFS or Git Annex?
>
> From a quick look (I haven’t used them), Git-LFS seems to assume a
> rather centralized model where there’s an LFS server sitting next to the
> Git server¹.  Git Annex looks more decentralized, allowing you to have
> several “remotes”, to check the status of each one, to sync them, etc.²
> Because of this, Git Annex seems to be a better fit.

I've never used Git-LFS for my media repository (and will never use it,
never).

AFAIK these two advantages of git-annex vs Git-LFS are still valid today:

--8<---cut here---start->8---

A major advantage of git annex is that you can choose which file you
want to download.

You still know which files are available thanks to the symlinks.

For example suppose that you have a directory full of ISO files. You can
list the files, then decide which one you want to download by typing:
git annex get my_file.

Another advantage is that the files are not duplicated in your
checkout. With LFS, lfs files are present as git objects both in
.git/lfs/objects and in your working repository. So If you have 20 GB of
LFS files, you need 40 GB on your disk. While with git annex, files are
symlinked so in this case only 20 GB is required.

--8<---cut here---end--->8---
(https://stackoverflow.com/a/43277071, 2018-10-23)

So, AFAIU, with Git-LFS you can have either all your media or none: you
cannot selectively choose which media to get.

Another important limitation of Git-LFS is that you cannot delete
(remotely stored) objects [1]; with git-annex it is very easy.

> Data point: guix.gnu.org source is hosted on Savannah, which doesn’t
> support Git-LFS;

to host a Git-LFS service, a Git-LFS server implementation (one that
replies to the Git-LFS API) is needed:
https://github.com/git-lfs/git-lfs/wiki/Implementations

AFAIU we don't have one packaged (and packaging one of them would take
some precious time).

AFAIK Savannah does not support git-annex either, so we would need to
set up a Guix git-annex server [3]; I suggest using gitolite [4]. I can
help with this task if needed!

> the two other web sites above are hosted on GitLab instances, which I
> think do support Git-LFS.

Yes, Git-LFS is supported on GitLab.com and included in the Community
Edition [2] since late 2015.

git-annex repository support was available on GitLab.com in 2015/16 but
was removed in 2017 [5]

> What’s your experience?  What would you suggest?

I've no experience with Git-LFS (and will never have) but from what I
read I definitely suggest git-annex: it's more efficient, it's more
flexible, can be hosted everywhere with a little bit of effort... can be
hosted on a Guix System host! :-)

As a bonus, git-annex has plenty of super cool features that will
make us very happy, e.g.:

- special remotes: https://git-annex.branchable.com/special_remotes/
  (including rclone
  https://git-annex.branchable.com/special_remotes/rclone/)

- location tracking
  (https://git-annex.branchable.com/location_tracking/)

- manage metadata of annexed files

HTH! Gio'

> Thanks,
> Ludo’.
>
> ⁰ 
> https://git.savannah.gnu.org/cgit/guix/maintenance.git/tree/hydra/berlin.scm#n193
> ¹ https://github.com/git-lfs/git-lfs/wiki/Tutorial
> ² https://git-annex.branchable.com/walkthrough/


[1] https://github.com/git-lfs/git-lfs/wiki/Limitations

[2] GitLab Community Edition

[3]
https://git-annex.branchable.com/tips/centralized_git_repository_tutorial/on_your_own_server/

[4] https://git-annex.branchable.com/tips/using_gitolite_with_git-annex/

[5] 
https://about.gitlab.com/blog/2015/02/17/gitlab-annex-solves-the-problem-of-versioning-large-binaries-with-git/

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: Proposition to streamline our NAR collection to just zstd-compressed ones

2024-01-18 Thread Giovanni Biscuolo
Hello,

Ludovic Courtès  writes:

[...]

> From experience we know that users on foreign distros rarely, if ever,
> upgrade the daemon (on top of that, upgrading the daemon is non-trivial
> to someone who initially installed the Debian package, from what I’ve
> seen, because one needs to fiddle with the .service file to adjust file
> names and the likes),

The upgrade instructions are in (info "(guix) Upgrading Guix").

I run the daemon on Debian but installed it with the install script, not
with the Debian package: I'm going to test the installation on a VM and
I'll see/document what a user should do to upgrade a daemon installed
that way

My /etc/systemd/system/guix-daemon.service is:

--8<---cut here---start->8---

# This is a "service unit file" for the systemd init system to launch
# 'guix-daemon'.  Drop it in /etc/systemd/system or similar to have
# 'guix-daemon' automatically started.

[Unit]
Description=Build daemon for GNU Guix

[Service]
ExecStart=/var/guix/profiles/per-user/root/current-guix/bin/guix-daemon \
    --build-users-group=guixbuild \
    --substitute-urls='https://ci.guix.gnu.org https://bordeaux.guix.gnu.org'
Environment=GUIX_LOCPATH=/var/guix/profiles/per-user/root/guix-profile/lib/locale LC_ALL=en_US.utf8
Environment=TMPDIR=/home/guix-builder
RemainAfterExit=yes
StandardOutput=syslog
StandardError=syslog

# See <https://lists.gnu.org/archive/html/guix-devel/2016-04/msg00608.html>.
# Some package builds (for example, go@1.8.1) may require even more than
# 1024 tasks.
TasksMax=8192

[Install]
WantedBy=multi-user.target

--8<---cut here---end--->8---

I tweaked it a little bit to add "--substitute-urls" to ExecStart and
"LC_ALL" to Environment, but the Guix provided one should work.

AFAIU following the official daemon upgrade instructions should do the
job: right?

If this is not the case with the Debian package IMO it's a Debian
package (.service file) bug: we should add a footnote to (info "(guix)
Upgrading Guix") and file a bug upstream if needed, no?

[...]

> In addition to the warning, we should communicate in advance and make
> sure our instructions on how to upgrade the daemon are accurate and
> clear.

IMO the instructions on (info "(guix) Upgrading Guix") are clear; they
are just for a systemd based distro but should be easily "transposed" to
a different init system by the users... or not?

Thanks! Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




problems installing on LUKS2 encrypted device

2023-12-08 Thread Giovanni Biscuolo
Hello,

I've noticed that the last released system installer [1], when using the
guided install workflow, uses LUKS1 encryption; since I would like
to install on a LUKS2 encrypted root filesystem I tried to "manually"
install following the instructions in the manual [2].

When using a LUKS2 encryption format [3], completing the installation
and rebooting, I get an error from Grub: it cannot find the encrypted
volume, it's trying to open the /unencrypted/ volume instead (via UUID),
child of the LUKS2 encrypted one.

If I just change the type of encryption to "luks1" in [3], the booting
of the installed machine works as expected.

Since I know that the LUKS2 support in Grub was not available when Guix
1.4 was released, I also tried to "guix pull && hash guix" /before/
installing with "guix system init /mnt/etc/config.scm /mnt", but the
error was the same.

I still have not tried to build an updated system installation image to
see if it is working.

Since the (stable) manual provides instructions on how to install Guix
System on a LUKS2 encrypted partition [4], I'd like to understand if I'm
doing something wrong or there is a bug, at least in the manual.

I'm attaching the script I'm using for the "manual" installation: if I
set "luks2" in the "cryptsetup luksFormat..." command /and/ uncomment
the "guix pull && hash guix" commands, the installation provides an
unbootable system.

Sorry for the "short story made long", but my script is a proof of
concept for installing a Guix System starting from any (recent) rescue
system (tested only with a Guix install image and a systemd-based rescue
system, grml); that's why it's so "long":

#!/bin/sh
# Copyright © 2023 Giovanni Biscuolo 
#
# bootstrap-guix.sh is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# bootstrap-guix.sh is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# General Public License for more details.
#
# A copy of the GNU General Public License is available at
# <http://www.gnu.org/licenses/>.

# bootstrap-guix.sh is a very opinionated script to install Guix
# System on a host booted in "rescue" mode.
#
# The system is installed on a single disk BTRFS filesystem on a LUKS
# encrypted partition.

# -
# Variables

# Disks
# TODO: transform this in array TARGET_DISKS[TARGET_NUMDISKS], for multi disk setups
export TARGET_NUMDISKS=1
export TARGET_DISK_PART_SUFFIX=""
export TARGET_DISK1="/dev/sda"
export TARGET_SWAP_SIZE="16GB"

# Hostname
export TARGET_HOSTNAME="pioche"

# User and pub key (only one admin user for basic installation)
export TARGET_USERNAME="g"
export TARGET_USERGECOS="Giovanni Biscuolo"
export TARGET_USERKEY="ssh-ed25519 C3NzaC1lZDI1NTE5ICqpr0unFxPo2PnQTmmO2dIUEECsCL3vVvjhk5Dx80Yb g...@xelera.eu"

# ###
# DO NOT EDIT these variables
# unless debugging

# (minimal) OS configuration file name
export OS_CONFIG_FILE="bootstrap-config.scm"

# Target OS mount point
export TARGET_MOUNTPOINT="/mnt/guix"

# Source os-release information
test -e /etc/os-release && os_release='/etc/os-release' || os_release='/usr/lib/os-release'
. "${os_release}"
echo "### INFO - Detected GNU/Linux distribution: ${PRETTY_NAME}."

# -
# Prepare the target system filesystem

# Wipe the disks
# TODO: use the array TARGET_DISKS[]
echo "### START - Wiping disks."
wipefs -af ${TARGET_DISK1}*
echo "### END - Wiping disks."

# Partition the disks
# FIXME: detect if on EFI platform looking at /sys/firmware/efi and
# perform disk partitioning and filesystem formatting accordingly

## Disk 1
echo "### START - partitioning ${TARGET_DISK1}."
parted ${TARGET_DISK1} --align=opt -s -m -- mklabel gpt
# BIOS grub system partition
parted ${TARGET_DISK1} --align=opt -s -m -- \
   mkpart grub 1MiB 5MiB \
   name 1 grub-1 \
   set 1 bios_grub on
# partition p2 will be swap
parted ${TARGET_DISK1} --align=opt -s -m -- \
   mkpart linux-swap 5MiB ${TARGET_SWAP_SIZE} \
   name 2 swap-1
# partition p3 will be LUKS encrypted device
parted ${TARGET_DISK1} --align=opt -s -m -- \
   mkpart ext4 ${TARGET_SWAP_SIZE} 100% \
   name 3 luks-1
echo "### END - partitioning ${TARGET_DISK1}."

# Create LUKS device on encrypted partition, backup 

Re: Opportunity For Guix: RFI Areas of long-term focus and Prioritization

2023-12-01 Thread Giovanni Biscuolo
Hello Maxim,

Maxim Cournoyer  writes:

[...]

>> The US Federal government has an RFI open on "Areas of long-term
>> focus and Prioritization". They're looking for 10 page briefing memos
>> on supporting memory-safe languages, strengthening software supply
>> chains, sustaining FLOSS, behavioural/economic incentives to secure
>> the OSS ecosytem, or other related ideas.
>> Sounds like an interesting opportunity for Guix hackers
>>
>> https://cloudisland.nz/@freakboy3742/110969575789548640

For the records, the RFI page is this one:
https://www.federalregister.gov/documents/2023/08/10/2023-17239/request-for-information-on-open-source-software-security-areas-of-long-term-focus-and-prioritization
(also on Wayback Machine and archive.is)

[...]

> Thanks for sharing, I've tried my luck.  My RFI reply was about
> strengthening free software distribution supply chain via GNU Guix and
> GNUnet.

Thank you for the initiative!

For the records all submitted comments are published in the Docket ID
ONCD-2023-0002 available here:
https://www.regulations.gov/docket/ONCD-2023-0002

Your comment is published here:
https://www.regulations.gov/comment/ONCD-2023-0002-0033
(unfortunately only as PDF)

IMHO your RFI reply could (hopefully easily) become a Guix project blog
post, with a brief introduction explaining the context, such as a link
to the above-linked page and a quick summary of the request for
information: WDYT?

Anyway, the process now is in Phase III (Government Review) [1]:

--8<---cut here---start->8---

The Government reviews and publishes the RFI responses submitted during
Phase II. The Government may select respondents to engage with the RFI
project team to elaborate on their response to the RFI.

--8<---cut here---end--->8---

We'll see if this "message in a bottle" will be heard by Someone™

Thank you and happy hacking! Gio'


[1] https://www.federalregister.gov/d/2023-17239/p-23

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: when? [guix_milan] Guix Milan first meetup

2023-10-04 Thread Giovanni Biscuolo
Hi Andrea!

[Italian version at the end]

Thank Andrea you for organizing this first meetup!

Andrea Rossi  writes:

[...]

> Then, in order to organise our first event, we decided to make two polls 
> to choose the date [1]

[...]

> [1] https://framadate.org/guix-milan-01-WHEN

The poll [1] got 4 votes so far and the most voted date is Tue, Oct
10: can we close the poll and schedule the meeting, please?

[Italian version]

Thanks Andrea for organizing this first meetup!

The poll [1] got 4 votes and the most voted date is Tuesday, October
10: can we close the poll and set the date of the meeting, please?

Ciao! Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: The elephant in the room and the Guix Bang.

2023-09-20 Thread Giovanni Biscuolo
Hi,

Simon Tournier  writes:

> Hi,
>
> On Tue, 19 Sep 2023 at 18:35, Giovanni Biscuolo  wrote:
>
>> I'm also suspecting that the spark that started the "Guix Bang" in
>> Ludovic's mind, the very moment he realized nix could be better
>> _extended_ using Guile in place of it's DSL, was /caused/ by the fact he
>> was a Lisp programmer, the specific /dialect/ probably did not matter.
>> But I'm just guessing.
>
> Reading your paragraph, it remembers me this reference:
>
> https://archive.fosdem.org/2015/schedule/event/the_emacs_of_distros/
>
> (that I forgot when I wrote [1]).

Oh thanks!  I also forgot it.

With this search restricted to guix.gnu.org:
https://duckduckgo.com/?q=%22emacs+of+distros%22+site%3Ahttps%3A%2F%2Fguix.gnu.org%2F&ia=web

I found the relevant blog post:
https://guix.gnu.org/en/blog/2015/gnu-guix-at-fosdem/

I found a copy of the slides on the Guix site, also:
https://guix.gnu.org/guix-fosdem-20150131.pdf

Maybe a search box in the menu of https://guix.gnu.org/ would be
helpful...

...but a good search query is all we need, sometimes! :-D

Last but _not_ least, I find this a very inspiring Emacs talk:
«EmacsConf 2021: How Emacs made me appreciate software freedom»
https://protesilaos.com/codelog/2021-12-21-emacsconf2021-freedom/

The concepts presented there are also valid for Guix, IMO.

Thanks! Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: [workflow] Automatically close bug report when a patch is committed

2023-09-19 Thread Giovanni Biscuolo
Hi Vagrant,

Vagrant Cascadian  writes:

[...]

> This is why I think it is important for projects to have some sort of
> namespacing for this sort of thing; I am not really opinionated on the
> exact details, other than it not conflicting with Debian's terribly
> generic entirely un-namespaced historical "Closes:" syntax. :)

Noted and agreed!  At this point in the discussion I also think there
is consensus (among the persons involved in this thread so far) to use
a namespaced URI (a full URL) for that upcoming feature (one of the two
discussed in the thread).

Thanks, Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




The elephant in the room and the Guix Bang.

2023-09-19 Thread Giovanni Biscuolo
everyone is free to use and extend them, and also to share with
the whole Guix community how their beloved BYOT are working well.

Please see a recent message of mine (id:87pm2e4flj@xelera.eu [2])
for some of my comments (nothing really new, I just reused concepts
already expressed by others) about the process needed to integrate
useful information (configuration _is_ information) into Guix's
official _recommendations_.

Happy hacking! Gio'

[2] https://yhetil.org/guix/87pm2e4flj@xelera.eu/

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




contribute with content in our official help pages?

2023-09-19 Thread Giovanni Biscuolo
Hi Simon and all readers,

I have some comments I cannot keep to myself, just to help make this
thread even harder to follow, if possible :-D.

Simon Tournier  writes:

[...]

> For instance, Vim [1] or VSCode [2].  The frame is not “what we should
> do” but “let me show you what I have”, IMHO.  Well, you are free to
> join the fun! :-)
>
[1] https://10years.guix.gnu.org/video/using-vim-for-guix-development/
[2] 
https://videos.univ-grenoble-alpes.fr/video/26660-cafe_guix_vscode_comme_outil_deditionmp4/

Thank you for the pointers, Simon; I already knew [1], since I take a
look at 10 Years of Guix from time to time, but I missed [2].  Although
I don't personally use those tools, I find that information useful when
Someone™ asks me how to do Something™ using tool XYZ with Guix... but
I'm sure in 5 minutes I'll forget almost all of it, since I don't
usually use all those XYZ tools.

How can we decrease the effort needed by Someone™ like me to find
useful information on "how to do X with/in Guix"?

1. The "quick and dirty" solution is to use a "smart" query in a web
search engine:

Example query "guix write packages using vim":
https://duckduckgo.com/?q=guix+write+packages+using+vim

It's not very efficient, since the searcher has to do some _hard_ work:
first find a "smart" query, then filter out what they really need, and,
last but not least, understand whether that information is still
relevant, somehow outdated, or not working for their use case.

I'd call this quite a high cognitive overhead.

2. Search one of our _official_ "Guix - Help" resources:
https://guix.gnu.org/en/help/

As you can see, We™ have plenty of them, in (my personal) reverse order
of information discoverability efficacy (cognitive overhead?):

  - IRC (logs): http://logs.guix.gnu.org/guix/
  - mailing lists, see: https://guix.gnu.org/en/contact/
  - videos: https://guix.gnu.org/en/videos/
  - wiki: https://libreplanet.org/wiki/Group:Guix
  - GNU manuals: https://gnu.org/manual
  - Cookbook: https://guix.gnu.org/cookbook/
  - GNU Guix Manual 1.4.0: https://guix.gnu.org/manual/en
  - GNU Guix Manual (Latest): https://guix.gnu.org/manual/devel/

Someone™ could help integrate useful information into one of the above
listed resources; for example, information like [1] and [2] should go
to "videos".

Each of the above listed help channels has a very different
contribution workflow, ranging from chatting on IRC to sending a patch
to include Something™ in the Cookbook or the manual; this means that
the further Someone™ would like that information to travel "upward" in
that list, the greater the effort for Someone™ (Else™?) to add it...

It's a path and it's /also/ fractally recursive! :-D

For example, I have a feeling that much useful information discussed in
this thread has been shared in previous messages on this mailing list
or on IRC, the wiki, videos... probably Someone™ should find a way to
integrate it /at least/ as a blog post in https://guix.gnu.org/en/blog/,
if Someone™ (Else™?) finds it would be helpful in reducing the overhead
of doing XYZ with/for Guix caused by the hard discoverability of that
piece of useful information; sometimes that piece of information is
also /very/ useful in reducing the cognitive overhead for contributors:
that's recursivity (© Simon Tournier).

To be crystal clear: I find that also "externally managed assets" - a
blog post, a video, a tool specifically designed for "Guix integration"
of tool XYZ, a Guix channel - **are** useful _contributions_, but that
kind of contributions are even harder to find and/or use if not properly
"integrated in Guix".

Happy hacking! Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: The already complicated (complex?) process for contributing.

2023-09-16 Thread Giovanni Biscuolo
Hi Simon,

since I already replied to you off-list, please forgive any repetition,
but I later realized that a better comment on your metaphor (see below)
could be useful to other people.

Simon Tournier  writes:

[...]

> On a side note, documentation is very fine but I do not read (or
> study!?) the documentation of my oven, instead I am just cooking stuff
> for fun.

On a side note, I like your metaphors!

OK but in my view the GNU Guix project is not a personal kitchen but an
international food company [1], with a complex /distributed/ production
process involving tools (ovens, etc.) and recipes (code) ranging from
trivial to very complex; not to forget **legal requirements**,
challenging supply chain management and a very _peculiar_ process called
"change integration management" driven by "customer" proposals.  Am I
exaggerating?

Now, assuming the above analogy is valid, is it a fair expectation that
you can contribute to the company [1] the food you cook just for fun
with your oven?

The food the GNU Guix company [1] supplies is bootstrappable and
reproducible binary code, largely "baked" using third-party recipes,
with a progressively shrinking binary seed: yeah! :-D

Well, to be honest the food analogy does not fit very well: Guix
supplies /peculiar/ tools that users can employ for a variety of
activities, ranging from cooking just for fun to managing complex
infrastructures... and to use /those/ tools effectively, users had
better study their documentation (a thankless job, that of the
documentation writer!) ;-)

[...]

> If a project needs really lengthy documentation for just contributing,
> either it is a more-than-thousand contributors project, as the Linux
> kernel

I disagree that this is a valid metric for measuring the complexity of
the process called "change integration management"; it could even be
_one_ customer asking to change the recipe for their preferred cake.

> either something is wrong.  Maybe both. ;-)

...or maybe it's not "just" a project but a whole food company [1].

Oh, oh, oh: wait!  Are we going into /that/ famous essay [2] and its
sequel [3] ?!? (very interesting readings!)

...OK, I surrender, unconditionally! :-D

Happy cooking! :-) Gio'


[1] with a very peculiar vision, mission and business model; but please
concede the analogy

[2] "On Management and the Maginot Line"
http://www.catb.org/~esr/writings/cathedral-bazaar/cathedral-bazaar/ar01s12.html

[3] "Project Structures and Ownership"
http://catb.org/~esr/writings/homesteading/homesteading/ar01s16.html



P.S.: on a side note, I think that part (all?) of the problems discussed
in [2] and [3] are rooted in the anthropological set of questions known
as «Phenomenon of Bullshit Jobs»
https://davidgraeber.org/articles/on-the-phenomenon-of-bullshit-jobs-a-work-rant/

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




The already complicated (complex?) process for contributing.

2023-09-15 Thread Giovanni Biscuolo
Hi Simon,

maybe we are drifting... again? ;-)

Simon Tournier  writes:

[...]

>> If this is stil not properly documented it will be fixed.
>
> Maybe… and it will be another item in the already very long list of
> steps to complete before contributing.  This is one of my concern: add
> yet another thing to an already complicated process for contributing.

[...]

> Yes, yet another thing to an already complicated process for
> contributing.

[...]

> Yes, yet another thing to an already complicated process for
> contributing.

[...]

While I agree that if some-"thing" (or the lack of one, OK?) is
/complicating/ the contributing process that "thing" should be
addressed, I disagree that _adding_ the **requirement** for contributors
to properly configure git to use the git hooks provided by Guix, and to
understand the purpose of and pay attention to the 'Change-Id' field, is
another "thing" that adds /complication/.

Talking in general: if you mean that contributing to Guix is /complex/
I agree, but /complex/ does not imply /complicated/; also, /complexity/
is common to every DVCS-based project of significant size that I know
of.

So yes, contributing /in general/ is a complex process and I guess we
would all like it to be as uncomplicated as possible; proposals in this
thread are trying to go in that direction: adding a little help in
«integrating a proposed change» with no complications (useless by
design) for _all_ involved parties.

Looking at other project development processes, take as an example
**one** of the activities in the Linux kernel development process:
«posting patches» [1].

You also need to know:

- «Submitting patches: the essential guide to getting your code into the
  kernel» [2]

- «Linux Kernel patch submission checklist» [3]

- «Linux kernel coding style» [4]

- «Email clients info for Linux» [5]...  just to mention one of the
cited MUAs, it states: «Gmail (Web GUI). Does not work for sending
patches..».  Probably Guix should copy/paste that.

Is it /complex/ or /complicated/?

To begin with, it's quite a lot of documentation, quite challenging to
study /just/ to be able to send a useful patch to the Linux kernel... or
/just/ to understand how and _why_ the process is designed that way.

I hear you, Someone™ reader: I cannot summarize, sorry! :-D  Anyway,
it's a very interesting read, I'd recommend it. (I did not read it all.)

To have an overall picture of the /complexity/ of the whole development
process of the Linux kernel, take a look at «the index» [6]. :-O

Could it be simplified without making it /complicated/ for Someone™?
...maybe.

Is the Guix development process comparable to the Linux kernel's? ...who
knows :-D


Thanks! Gio'


[1] https://docs.kernel.org/process/5.Posting.html

[2] https://www.kernel.org/doc/html/latest/process/submitting-patches.html

[3] https://www.kernel.org/doc/html/latest/process/submit-checklist.html

[4] https://www.kernel.org/doc/html/latest/process/coding-style.html

[5] https://docs.kernel.org/process/email-clients.html
"Run away from it.": ROTFL!

[6] https://docs.kernel.org/process/index.html

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: [workflow] Automatically close bug report when a patch is committed

2023-09-15 Thread Giovanni Biscuolo
Hello Simon,

thank you for having listed the possible troubles.

Before commenting, just let me repeat that we are _copying_ the
'Change-Id' idea (and related possible implementation issues) from
Gerrit:

https://gerrit-review.googlesource.com/Documentation/user-changeid.html

This means that Somewhere™ in our documentation we should start
explaining that:

--8<---cut here---start->8---

Our code review system needs to identify commits that belong to the same
review.  For instance, when a proposed patch needs to be modified to
address the comments of code reviewers, a second version of that patch
can be sent to guix-patc...@gnu.org.  Our code review system allows
attaching those 2 commits to the same change, and relies upon a
Change-Id line at the bottom of a commit message to do so.  With this
Change-Id, our code review system can automatically associate a new
version of a patch back to its original review, even across cherry-picks
and rebases.

--8<---cut here---end--->8---

In other words, 'Change-Id' is /just/ metadata automatically added to
help in code review **tracking**, specifically helping "across
cherry-picks and rebases" [1].

Sorry if I'm repeating things probably already understood!
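For readers unfamiliar with the mechanism, the commit-msg hook Gerrit
distributes does essentially the following.  This is a minimal sketch
of the idea, not Gerrit's actual implementation (which hashes tree,
parent and author data); the helper name is mine:

```shell
# add_change_id FILE: append a "Change-Id:" trailer to the commit
# message in FILE unless one is already present.  A real commit-msg
# hook lives at .git/hooks/commit-msg and receives the message file
# as its first argument.
add_change_id() {
    msg_file="$1"

    # Leave an existing Change-Id alone: it must survive amends,
    # cherry-picks and rebases to keep its tracking value.
    if grep -q '^Change-Id: I' "$msg_file"; then
        return 0
    fi

    # Derive a 40-hex-digit identifier; here simply a SHA-1 of the
    # message plus a timestamp (Gerrit's hook hashes richer data).
    id=$( { cat "$msg_file"; date +%s; } | sha1sum | cut -c1-40 )

    printf '\nChange-Id: I%s\n' "$id" >> "$msg_file"
}
```

Installed as a commit-msg hook, something like this runs on every
'git commit', so contributors never have to type the trailer by hand.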

Simon Tournier  writes:

[...]

>> Please can you expand what troubles do you see in automatically adding
>> 'Change-Id:' using a hook-commit-msg like
>> https://gerrit-review.googlesource.com/Documentation/cmd-hook-commit-msg.html
>> ?
>
> 1. The hook must be installed.

AFAIU a hook is already installed when configuring for contribution.

If this is still not properly documented, it will be fixed.

> 2. The hook must not be in conflict with user configuration.

I guess you mean not in conflict with other locally installed git hooks:
since AFAIU we **already** have a locally installed git hook, this is
already a requirement and this is something the user (contributor)
should be aware of.

If this is still not properly documented, it will be fixed.

> 3. The generated Change-Id string can be mangled by some user unexpected
>action.

Contributors and committers should not delete or change an already
existing 'Change-Id'; this will be documented.

> Many things can rail out on user side.  For an example, base-commit is
> almost appearing systematically in submitted patches almost 3 years
> later.

I don't understand how this could impact the addition of the
patch-tracking metadata named 'Change-Id'.

> The patches of some submissions are badly formatted.  Etc.

I don't understand what the problem is with having a 'Change-Id' (in
commit messages) in badly formatted patch submissions.

> Whatever the implementation, I am not convinced that the effort is worth
> the benefits.

OK, I'm sorry

> And I am not convinced it will help in closing the submissions when
> the patches have already been applied.

OK, I'm sorry

> That’s said, I am not against the proposal.  I just have mixed feelings
> and before deploying I strongly suggest to review if the proposal covers
> the intent.

OK, thank you for your suggestion.

Happy hacking! Gio'


[1] AFAIU 'Change-Id' can even track different versions of patches
(that by design come from commits in the same branch, properly rebased
as needed) sent by mistake via **different bug reports**; this also
means that different bug reports containing the same 'Change-Id' are
_surely_ linked together.

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: [workflow] Automatically close bug report when a patch is committed

2023-09-14 Thread Giovanni Biscuolo
Maxim Cournoyer  writes:

[...]

> I like the 'Closes: ' trailer idea; it's simple.  However, it'd need to
> be something added locally, either the user typing it out (unlikely for
> most contributors) or via some mumi wizardry (it's unlikely that all
> users will use mumi), which means its usage (and value) would depend on
> how motivated individuals are to learn these new tricks.

I agree: the rationale, or better the use case, of my /trivial/ (in
design, not in implementation) initial proposal [1] was to try to help
committers close bugs "in one go" by adding the proper information to
the commit message, e.g. "Closes: #N".

It was _not_ intended for contributors, also because they could _not_
know whether a **specific** patch in a patch series will really close a
**whole** bug report: that is a judgement only the committer can make.
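As a sketch of how a server-side tool might consume such a trailer
(the "Closes: #N" grammar is the proposal under discussion, not an
existing Guix convention, and the function name is mine):

```shell
# closed_bugs: read commit messages on stdin and print the bug
# numbers named on "Closes: #N" trailer lines, deduplicated.
# A post-receive hook could feed it `git log --format=%B old..new`.
closed_bugs() {
    sed -n 's/^Closes: #\([0-9][0-9]*\)[[:space:]]*$/\1/p' | sort -u
}
```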

Also, my rationale was influenced by my misunderstanding of a series of
bug closing actions performed by Vagrant (see [1] for details): the
problem in the majority (all?) of those cases was **not** that the
committer simply forgot to close the related bug report /but/ that bug
reports containing (different) patches for the _same_ package were not
linked to each other; the solution to this class of problems is
obviously not "Automatically close bug report when a patch is
committed", it's something else [2].

> On the other hands, having Change-Ids added by a pre-commit hook
> automatically would means the user doesn't need to do anything special
> other than using git, and we could still infer useful information at any
> time (in a server hook, or as a batch process).
>
> For this reason, I think we could have both (why not?  Change-Ids by
> themselves provide some value already -- traceability between our git
> history and guix-patches).

I agree: just having 'Change-Id's alone already provides some added
value, even if we still lack the tooling (server-side git hook, batch
processing).

Thanks!  Gio'


[1] id:8734zrn1sc@xelera.eu 
https://yhetil.org/guix/8734zrn1sc@xelera.eu/

[2] id:87msxyfhmv@xelera.eu https://yhetil.org/guix/87msxyfhmv@xelera.eu

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: [workflow] Automatically close bug report when a patch is committed

2023-09-14 Thread Giovanni Biscuolo
Simon Tournier  writes:

[...]

>>  maybe ChangeIds really trump the explicit tags proposed by Giovanni
>> or myself here.  Whether that justifies the cognitive overhead of
>> juggling them around on every submission remains to be shown or
>> disproven.
>
> I agree.  I am not convinced by the benefits and I already see some
> troubles.

Please can you expand what troubles do you see in automatically adding
'Change-Id:' using a hook-commit-msg like
https://gerrit-review.googlesource.com/Documentation/cmd-hook-commit-msg.html
?

Thanks, Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: [workflow] Automatically close bug report when a patch is committed

2023-09-14 Thread Giovanni Biscuolo
Hi Liliana

Liliana Marie Prikler  writes:

> Am Mittwoch, dem 13.09.2023 um 11:27 -0400 schrieb Maxim Cournoyer:

[...]

> I do wonder how the ChangeId would work in practice.

It's a «tag to track commits across cherry-picks and rebases.»

It is used by Gerrit to identify commits that belong to the same review:
https://gerrit-review.googlesource.com/Documentation/user-changeid.html

We could use it for the same purpose and, instead of building a web
application for code review, "simply" check that all 'Change-Id's in a
patchset have been pushed to the official Guix repo before declaring
the related bug report closed.

> Since it's not really assigned by the committer, it would have to be
> generated "on the fly" and attached to the mail in between

Not to the mail, to the commit msg! [1]

> which could result in all kinds of nasty behaviour like unstable Ids
> or duplicated ones.

No, modulo hook script bugs obviously.

> Also, if we can automate this for ChangeIds, we could also automate
> this for patch-sets – the last patch in the series just gets the
> Closes: tag added by mumi.  

That's the idea, but we don't need to add "Closes" to the commit msg
(via a post-receive hook); we "just" need the hook to send an email to
-done on behalf of the committer (the committer, not the
contributor).

> Furthermore, I'm not convinced that it would ease the issue of
> forgotten bugs as you can't really apply them to the past.

No, this 'Change-Id' is not intended for past bug reports since we
**must not** rewrite past commits _because_ commit messages are
/embedded/ in commit objects.

...but for this purpose we could use git-notes, **if** wanted:
https://git-scm.com/docs/git-notes :-D
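A sketch of what that could look like; the notes ref name
"change-id" and the helper name are assumptions of mine, not an
existing convention:

```shell
# tag_commit_with_change_id COMMIT ID: attach a Change-Id to an
# already-published commit as a git note, without rewriting history
# (notes live on a separate ref, here refs/notes/change-id).
tag_commit_with_change_id() {
    commit="$1"
    change_id="$2"
    git notes --ref=change-id add -f -m "Change-Id: $change_id" "$commit"
}

# Reading it back later:
#   git notes --ref=change-id show <commit>
```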

> So the practical use is limited to the case where you intentionally
> cherry- pick this or that commit from a series.

No: the practical use is that for each guix-patches bug report we can
count how many [PATCH]es are left to be committed and act accordingly,
for example notify all involved parties (contributor, committer,
'X-Debbugs-CC's) that N/M patches from the series are still to be merged
upstream... or close the bug report if zero patches are left.
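The counting could be as simple as a set difference; a sketch, where
the two files (names invented) hold one Change-Id per line:

```shell
# unmerged_change_ids SERIES PUSHED: print the Change-Ids listed in
# file SERIES that do not appear in file PUSHED (exact, full-line
# matches).  An empty output means every patch of the series has been
# merged and the bug report can be closed.  Note: grep exits non-zero
# when nothing is left unmerged.
unmerged_change_ids() {
    grep -v -x -F -f "$2" "$1"
}
```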

> How we want to deal with that case could be a discussion in its own
> right, and maybe ChangeIds really trump the explicit tags proposed by
> Giovanni or myself here.  Whether that justifies the cognitive
> overhead of juggling them around on every submission remains to be
> shown or disproven.

There will be no additional cognitive overhead for contributors since
'Change-Id' will be automatically managed, they can simply ignore it.

> Beyond the scope of the discussion so far, it also doesn't help us with
> duplicate or superseded patches (e.g. two series on the mailing list
> propose a similar change, because one of them has already been
> forgotten).

No, IMO there is **no** solution to these problems other than "triaging"
(id:87msxyfhmv@xelera.eu
https://yhetil.org/guix/87msxyfhmv@xelera.eu/)

> Again, the explicit close tags would allow this case to be
> handled in an interpretable fashion.  In both cases, we do however also
> introduce the potential for incorrect tagging, which then needs to be
> resolved manually (more or less a non-issue, as it's the status quo).

There is no potential for incorrect tagging when using a
hook-commit-msg [1] to add 'Change-Id'.

For the other method discussed here, there is no way to avoid users
mistyping 'Closes:' pseudo-headers in their commit messages: if mistyped
they will be ignored :-(


Cheers, Gio'


[1] 
https://gerrit-review.googlesource.com/Documentation/cmd-hook-commit-msg.html

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: [workflow] Automatically close bug report when a patch is committed

2023-09-14 Thread Giovanni Biscuolo
; package guix
>> close  []
>> quit
>>
>>
>> The "Reply-To:" (I still have to test it) will receive a notification
>> from the control server with the results of the commands, including
>> errors if any.
>>
>> Then, the documentation for the close command [5] states:
>
> Note that 'close' is deprecated in favor of 'done', which does send a
> reply.

Sorry, I'm not finding 'done' (and the deprecation note) here:
https://debbugs.gnu.org/server-control.html

Do you maybe mean that we should not use the control server but send a
message to -d...@debbugs.gnu.org?

Like:

--8<---cut here---start->8---

From: guix-commits
To: -d...@debbugs.gnu.org

Version: 

This is an automated email from the git hooks/post-receive script.

This bug report has been closed on behalf of 
<> since he added an appropriate pseudo-footer in the
commit message of  (see ).

For details on the commit content, please see: .

--8<---cut here---end--->8---

OK: this goes in the upcoming [PATCH] and related patch revision
process... :-D

[...]

>> Interesting!  This could also be done by a server post-receive hook, in
>> contrast to a remote service listening for UDP datagrams.
>
> The reason in my scheme why the more capable mumi CLI would be needed is
> because closed series would be inferred from commits Change-IDs rather
> than explicitly declared.

Yes I agree: a more capable mumi CLI is also needed in my scheme :-)

The "only" difference in my scheme is that we don't need an external
server listening for a UPD datagram, IMO a more capable version of our
current git hooks/post-receive script is better.
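Such a hook could start from something like this sketch (the function
name is mine, and a real hook would go on to query mumi with the
extracted ids rather than just print them):

```shell
# post_receive_change_ids: read the "<old> <new> <ref>" lines git
# feeds a post-receive hook on stdin, and print the Change-Ids found
# in the commits newly pushed to master.
post_receive_change_ids() {
    while read -r oldrev newrev refname; do
        [ "$refname" = "refs/heads/master" ] || continue
        git log --format=%B "$oldrev..$newrev" |
            sed -n 's/^Change-Id: \(I[0-9a-f][0-9a-f]*\).*$/\1/p'
    done
}
```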

>>> What mumi does internally would be something like:
>>>
>>> a) Check in its database to establish the Change-Id <-> Issue # relation,
>>> if any.
>>>
>>> b) For each issue, if issue #'s known Change-Ids are all covered by the
>>> change-ids in the arguments, close it
>>
>> I think that b) is better suited for a git post-receive hook and not for
>> mumi triggered by a third service; as said above for sure such a script
>> needs mumi (CLI) to query the mumi (server) database.
>
> To clarify, the above should be a sequential process;

It was clear to me, thanks!

> with the Change-Id scheme, you don't have a direct mapping between a
> series and the Debbugs issue -- it needs to be figured out by checking
> in the Mumi database.

Yes, to be precise it needs to be figured out by a tool that is indexing
'Change-Id' via Xapian.

The preferred tool to be extended by the Guix project contributors is
mumi, obviously.

... but a similar feature could also be provided by an enhanced version
of (the unofficial) guix-patches public-inbox, that uses Xapian queries
for searches [2], it "just" lacks indexing messages by 'Change-Id' (is
there a public-inbox CLI for searching and scripting purposes?!?)

[...]

> It could process the 'Fixes: #N' and other git trailers we choose to
> use as well, but what I had on mind was processing the *guix-patches*
> outstanding Debbugs issues based on the presence of unique Change-Ids in
> them.  It complements the other proposal as it could be useful for when
> a committer didn't specify the needed trailers and forgot to close the
> issue in *guix-patches*, for example.

Yes I think I get it :-)

To be crystal clear: I think that "the other proposal" (that is, using
"Fixes:" and the like in commit messages to close the given bug number)
will be **superseded** once all the tools to manage the 'Change-Id'
pseudo-header/footer (first of all: a CLI query tool) are in place :-D

>>> Since it'd be transparent and requires nothing from a committer, it'd
>>> provide value without having to document yet more processes.
>>
>> No, but we should however document the design of this new kind of
>> machinery, so we can always check that the implementation respects the
>> design and eventually redesign and refactor if needed.
>
> Yes, it should be summarily described at least in the documentation,
> with pointers to the source.
>
> Oof, that's getting long.

Wow! \O/

> To recap:
>
> We have two propositions in there:
>
> 1. A simple one that is declarative: new git trailers added to commit
> messages would convey actions to be done by the server-side hook.
>
> 2. A more complex one that would allow us to close *some* (not all) of
> the *guix-patches* issues automatically, relying on Change-Ids (and
> Mumi's ability to parse them) to infer which patch series saw all their
> commits merged already.
>
> I think both have value to be pursued, but 1. could be implemented first
> since it is simpler.

I think that finally we have a clear big picture.  Thanks!

As stated at the beginning of this message, I'm going to open bug
reports :D


Happy hacking! Gio'


[1] whatever "do" means: it could range from a "simple" search with
mumi (or something similar) by a human "bug reports gardener" to a full
fledged server side git hook to notify involved parties and/or
automatically close the related bug report.

[2] https://yhetil.org/guix-patches/_/text/help/

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: to PR or not to PR, is /that/ the question?

2023-09-13 Thread Giovanni Biscuolo
Hi Simon

Simon Tournier  writes:

> Thank you for your detailed message.  Sorry, I have not read all the
> details because I have been lost.

Sorry!  Forgive me since I am not able to summarize the details without
losing content.

> What do you want to explain?

In the context of this thread of ever-increasing cognitive overhead...

As I wrote: «I don't think that adopting a «web based PR model»
(whatever the term "pull request" means, editor's note) for the Guix
project is a good idea.»

To remain /in topic/: I think that adopting a «web based PR model» will
definitely _not_ decrease the cognitive overhead for contributors.

Also, replacing (or adding to) the current email-based "integration
request" workflow with a "web based PR model" (which would _imply_ a
web-based patch review strictly tied to the chosen "forge"?) would be
an /overwhelming/ cognitive overhead for all.

Last but not least, I also think that adding a patch review system like
Gerrit, Phabricator, or Review Board would not only _increase_ the
overall cognitive overhead but would also be /technically/ _broken_.

Why do I think all these things?  Well, I'm sorry but it's a _very_
long story and it's all written down in the /details/.
(id:87y1ha9jj6@xelera.eu aka 
https://yhetil.org/guix/87y1ha9jj6@xelera.eu/)

[...]

I'm done with this thread, sorry; I'll drift alone :-) 

Happy hacking, Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




to PR or not to PR, is /that/ the question?

2023-09-13 Thread Giovanni Biscuolo
ton of small, good-in-isolation commits (it does take a bit
more work after all), nobody would be forcing you to do so. Instead, if
this is your commit authorship pattern, the submission of the proposed
change could squash these commits together as part of the submission,
optionally rewriting your local history in the process. If you want to
keep dozens of fixup commits around in your local history, that's fine:
just have the tooling collapse them all together on submission.

[...] Making integration requests commit-centric doesn't force people to
adopt a different commit authorship workflow. But it does enable
projects that wish to adopt more mature commit hygiene to do so. That
being said, how tools are implemented can impose restrictions. But
there's nothing about commit-centric review that fundamentally prohibits
the use of fixup commits in local workflows.

[...] I also largely ignored some general topics like the value that an
integration request can serve on the overall development lifecycle:
integration requests are more than just code review - they serve as a
nexus to track the evolution of a change throughout time.

--8<---cut here---end--->8---

In the Guix project (and many others) the evolution of changes
throughout time - made possible by the use of the "integration request"
model instead of the "pull request" model - is tracked by the
guix-commits mailing list, which is auto-populated by a server-side git
hook:
https://lists.gnu.org/archive/html/guix-commits/

> Then many comments later about the
> pros and cons, the discussion is split into the contributor side and
> the reviewer side of the PR model.  Ricardo then reports their
> experience reviewing Pull Requests:
>
> Re: How can we decrease the cognitive overhead for contributors?
> Ricardo Wurmus 
> Fri, 08 Sep 2023 16:44:41 +0200
> id:87sf7o67ia@elephly.net
> https://yhetil.org/guix/87sf7o67ia@elephly.net
> https://lists.gnu.org/archive/html/guix-devel/2023-09
>
> And I provide one typical example where “our“ model leads to some
> friction for the reviewer: the first step, applying the patches.
>
> And this exact same friction does not exist in the PR model by design of
> this very PR model.

I don't know if the "reviewer applies the first patch" friction really
does not exist by design in a PR model (especially a web-based one), but
AFAIU many other frictions /do/ exist... by design!

[...]

(I'm working on a reply to the rest of the message, stay tuned! :-D )

Happy hacking! Gio'



[1]
https://gregoryszorc.com/blog/2020/01/07/problems-with-pull-requests-and-how-to-fix-them/
 (2020-01-07)

[2] look at the Branches listed here: https://qa.guix.gnu.org/

[3] 
https://guix.gnu.org/en/manual/devel/en/html_node/Managing-Patches-and-Branches.html

-- 
Giovanni Biscuolo

Xelera IT Infrastructures


signature.asc
Description: PGP signature


Commenting bug reports via mumi web interface (was: How can we decrease the cognitive overhead for contributors?)

2023-09-13 Thread Giovanni Biscuolo
Hi Ricardo,

Ricardo Wurmus  writes:

> Giovanni Biscuolo  writes:
>
>> AFAIU mumi does not (yet?) have an authentication/authorization
>> system, right?
>>
>> If so, how do you plan to deal with users posting SPAM or similar
>> inappropriate content?
>
> It only sends email on behalf of commenters, so we’re using the same
> email mechanism to deal with spam.

Please forgive me for not reading the source code of the relevant
mumi function; it would be easier for me to see it in action to
understand how the comment feature works.

I mean: I guess commenters are anonymous (?) and the mumi server will
send the email via authenticated SMTP (I hope) as user "mumi server" (or
something similar) on behalf of the commenter, right?

If so, the email is sent with the SPF and DKIM headers of the mail
server configured for the mumi server, and that information is not
useful for catching commenter email spoofing.

If I'm not missing something, then, anyone could send a comment as
"g...@xelera.eu" containing inappropriate content, right?

I know that the GNU mailing lists' mail server surely has an antispam
service, but it cannot use DMARC (SPF and/or DKIM) to filter email
spoofing attempts; all it can do is assign a "spamminess" score to
messages, which is seldom able to effectively spot "inappropriate"
content, right?

Given all this, does this mean that anyone could send an offensive
comment as "g...@xelera.eu" using the mumi commenting form?

...or are all the mailing lists moderated?

I feel I'm really missing something important in this picture; sorry
for not understanding what!

As an /antipattern/ example of a bug reporting system using a web
interface also for comments, I point out the one used by git-annex
(ikiwiki): https://git-annex.branchable.com/bugs/

When you try to "Add a comment", e.g. in:
https://git-annex.branchable.com/bugs/fsck_does_not_detect_corruption_on_yt_vids/

You are presented with an authentication form supporting 3 auth
methods: registered user, email [1] and OpenID.

I still think that they should just allow me to send an email to report
and comment on bugs.


Thanks! Gio'


[1] The server sends you a unique URL you can use to log in, which
expires in one day... why not just send (forward) me the complete
message I want to comment on, with the right Reply-To field pre-filled,
so I can edit my comment with my lovely MUA instead of that /awful/ web
interface?!?

-- 
Giovanni Biscuolo

Xelera IT Infrastructures


signature.asc
Description: PGP signature


Re: [workflow] Automatically close bug report when a patch is committed

2023-09-13 Thread Giovanni Biscuolo
Hi Liliana and Maxim,

Liliana Marie Prikler  writes:

[...]

> The thing is, we're discussing the same basic workflow

No, we are discussing two (complementary) workflows:

1. the one I suggested: add a "footer metadata field" named
Fix/Fixes/Close/Closes/whatever that allows people pushing to the
official Guix repo to _add_ the information that the (already installed,
to be enhanced) server-side git post-receive hook should also close one
or more bug reports; that metadata footer has to be "manually" added to
the commit message before pushing, i.e. the commit must be _amended_
(git commit --amend).

2. the one suggested by Maxim (borrowed from Gerrit): _automatically_
add a "footer metadata field" named 'Change-Id' that allows "a
machinery" (IMO it should be the currently installed hook, enhanced) to
_automatically_ close bug reports once all the 'Change-Id's contained in
a bug report have been pushed to the official Guix repo.

This is my understanding of what we are discussing here: did I miss
something?

> (which you lay out below), just different kinds of metadata that we'd
> have to attach to our commits.

They are different because they serve different needs.

> IIUC ChangeIds need to actually be carried around by the committers as
> they e.g. rewrite patches (rebasing, squashing, what have you)

Since 'Change-Id' is automatically generated by a _local_ git hook upon
committing and left unchanged if already present, the only precaution
the committer should take is to preserve it when rebasing, in case the
person needs to send a new version of the patch.
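For illustration, the idea behind such a _local_ hook can be sketched in
a few lines of Python (a hypothetical stand-in for Gerrit's real
commit-msg hook, which is a shell script; the id-derivation scheme here
is made up):

```python
import hashlib
import os
import sys

def ensure_change_id(msg_path):
    """Append a Change-Id trailer to the commit message at MSG_PATH
    unless one is already present (so rebases keep the existing id)."""
    with open(msg_path, "r", encoding="utf-8") as f:
        msg = f.read()
    if "Change-Id:" in msg:
        return msg  # preserve an existing Change-Id across rebases
    # Derive a unique-looking id from the message plus some entropy.
    digest = hashlib.sha1(msg.encode("utf-8") + os.urandom(16)).hexdigest()
    msg = msg.rstrip("\n") + "\n\nChange-Id: I" + digest + "\n"
    with open(msg_path, "w", encoding="utf-8") as f:
        f.write(msg)
    return msg

if __name__ == "__main__" and len(sys.argv) > 1:
    ensure_change_id(sys.argv[1])  # git passes the message file as $1
```

Installed as .git/hooks/commit-msg, this runs on every commit, so
committers get the trailer "for free".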

> and they're basically opaque hashes so I don't see the benefit to the
> reader.

The benefits are not for the reader but for "the machinery", which
becomes able to determine when a patch set has been completely pushed to
the official Guix repo; this also means that the related bug report (the
one tracking the patch set) can happily be closed automatically.  No?

> (I think you might be arguing that the benefit is uniqueness, but I'm
> not sure if I ought to buy that.)

The benefit is that 'Change-Id' is autogenerated as unique and kept
across rebases (with some precaution by the _local_ committer), and thus
is useful to compute the completion of each patch contained in a bug
report (of class [PATCH]).

Obviously all of this should be clearly documented, so everyone will
understand how it works.

[...]

>> In case it couldn't close an issue, it could send a notification to
>> the submitter: "hey, I've seen some commits of series  landing to
>> master, but not all of the commits appears to have been pushed,
>> please check"
> I'm not sure how common this case is

I don't know the cardinality, but I guess it's a /very/ useful use case
for people who have commit access to the official Guix repo... at least
for Maxim :-)

Anyway, I think this _second_ way of automatically closing related bugs
(the one based upon 'Change-Id') once all the [PATCH]es are pushed is
very useful for all people with commit access to the Guix repo: a sort
of "push and (almost) forget".

[...]

Ciao, Gio'.

-- 
Giovanni Biscuolo

Xelera IT Infrastructures


signature.asc
Description: PGP signature


Re: Mumi CLI client (was: How can we decrease the cognitive overhead for contributors?)

2023-09-12 Thread Giovanni Biscuolo
Hello Ricardo,

Ricardo Wurmus  writes:

> Giovanni Biscuolo  writes:
>
>> […] actually Debbugs or Mumi web interfaces are read-only: you cannot
>> open a bug report or comment it, you have to send an email; this is a
>> _feature_, not a bug since we don't need a _complex_ web based
>> authentication+authorization system for bug reporters/commenters. […]
>
> Mumi actually does support commenting on the web:
>
>
> https://git.savannah.gnu.org/cgit/guix/mumi.git/tree/mumi/web/controller.scm#n145
>
> It’s just been disabled because messages ended up being stuck in the
> queue and nobody could make enough time to debug this.

Uh, I didn't know mumi had that feature.

AFAIU mumi does not (yet?) have an authentication/authorization
system, right?

If so, how do you plan to deal with users posting SPAM or similar
inappropriate content?

Thanks! Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures


signature.asc
Description: PGP signature


Re: [workflow] Automatically close bug report when a patch is committed

2023-09-12 Thread Giovanni Biscuolo
ch traction.  The way I see it, it needs to happen
>>> automatically.
>> I mean, the way I imagine is that you type this as part of your message
>> and then debbugs would do the work of closing the bug.  In short, "git
>> push" saves you the work of writing a mail because there's a hook for
>> it.

I guess all of us are looking for this very same thing: a server-side
git hook that automatically closes bugs (via email) when committers'
pushes "instruct" it to do so.

The automatic email message will be sent to our "bug control and
manipulation server" [5], with this header:

--8<---cut here---start->8---

From: GNU Guix git hook 
Reply-To:  <>
To: cont...@debbugs.gnu.org

--8<---cut here---end--->8---

and this body:

--8<---cut here---start->8---

package guix
close  []
quit

--8<---cut here---end--->8---

The "Reply-To:" address (I still have to test it) will receive a
notification from the control server with the results of the commands,
including errors, if any.

Then, the documentation for the close command [5] states:

--8<---cut here---start->8---

A notification is sent to the user who reported the bug, but (in
contrast to mailing bugnumber-done) the text of the mail which caused
the bug to be closed is not included in that notification.

If you supply a fixed-version, the bug tracking system will note that
the bug was fixed in that version of the package.

--8<---cut here---end--->8---

Last but not least, the very fact that "GNU Guix git hook" has closed
the bug report is tracked and shown in the bug report history, like any
other action performed via email using the Debbugs control server.

WDYT?
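For what it's worth, the control message described above could be
assembled along these lines (a hedged Python sketch; the sender address
is a placeholder, and a real hook would still need to hand the message
to an authenticated SMTP relay):

```python
from email.message import EmailMessage

def make_close_message(bug_number, fixed_version=None,
                       sender="GNU Guix git hook <hook@example.org>",
                       reply_to="guix-devel@gnu.org"):
    """Build a Debbugs control message that closes BUG_NUMBER,
    following the server-control syntax quoted above."""
    close_args = str(bug_number)
    if fixed_version:
        # Debbugs then records the version in which the bug was fixed.
        close_args += " " + fixed_version
    msg = EmailMessage()
    msg["From"] = sender
    msg["Reply-To"] = reply_to
    msg["To"] = "control@debbugs.gnu.org"
    msg["Subject"] = "close {}".format(bug_number)
    msg.set_content("package guix\nclose {}\nquit\n".format(close_args))
    return msg
```

The returned EmailMessage could then be passed to smtplib's
send_message by the post-receive hook.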

> Perhaps both approach could be combined.  I still see value in a general
> scheme to automate closing applied series that linger on in Debbugs.
>
> [0]
> https://lists.gnu.org/archive/html/guix-devel/2023-09/msg00138.html

Yes, I agree, they are two complementary approaches: I think there are
use cases (many? few?) in which committers pushing to the repo are
actually solving an issue in some bug report, even if it is not tracked
as a patch bug report.

[...]

> The process could go like this:
>
> 1. commits of a series pushed to master
> 2. Savannah sends datagram to a remote machine to trigger the
> post-commit job, with the newly pushed commits 'Change-Id' values (a
> list of them).
> 3. The remote machine runs something like 'mumi close-issues [change-id-1
> change-id-2 ...]'

I think that extending the already existing post-receive hook is better
since it does not depend on the availability of a remote service
receiving a **UDP** datagram.

For sure, we need an enhanced version of the mumi CLI (capable of
indexing Change-Id) on the server running the post-receive hook to
achieve this.

> In case it couldn't close an issue, it could send a notification to the
> submitter: "hey, I've seen some commits of series  landing to
> master, but not all of the commits appears to have been pushed, please
> check"

Interesting!  This could also be done by a server-side post-receive
hook, in contrast to a remote service listening for UDP datagrams.

> What mumi does internally would be something like:
>
> a) Check in its database to establish the Change-Id <-> Issue # relation,
> if any.
>
> b) For each issue, if issue #'s known Change-Ids are all covered by the
> change-ids in the arguments, close it

I think that b) is better suited to a git post-receive hook rather than
to mumi triggered by a third service; as said above, such a script
surely needs the mumi CLI to query the mumi server database.
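The core of b) is just set inclusion; a minimal sketch of the decision
logic (assuming the mumi database can give us, per open issue, the set
of Change-Ids it contains):

```python
def issues_to_close(pushed_change_ids, open_issues):
    """Return the open issues whose every recorded Change-Id is among
    the Change-Ids that have just been pushed.

    open_issues maps issue numbers to sets of Change-Ids."""
    pushed = set(pushed_change_ids)
    return [issue for issue, ids in sorted(open_issues.items())
            # close only when the whole series has landed
            if ids and ids <= pushed]
```

Issues with no recorded Change-Id are skipped, since nothing can be
concluded about them from a push.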

> This is a bit more complex (UDP datagram, mumi database) but it does
> useful work for us committers (instead of simply changing the way we
> currently do the work).

I agree: an automatic bug-closing "machinery" for when patches are
pushed to master (and any other official branch?) is the best approach.

> When not provided any change-id argument, 'mumi close-issues' could run
> the process on its complete list of issues.

Do you mean the list of issues provided by "Close/Fix/Resolve:
#bug-number"?

If I'm not missing something, again this is something that should be
provided by a git post-receive hook and not by an enhanced version of
mumi.

> Since it'd be transparent and requires nothing from a committer, it'd
> provide value without having to document yet more processes.

No, but we should nevertheless document the design of this new kind of
machinery, so we can always check that the implementation respects the
design, and redesign and refactor if needed.

WDYT?

Thanks, Gio'



[1] id:87y1hikln6.fsf@wireframe https://yhetil.org/guix/87y1hikln6.fsf@wireframe

[2] id:87pm2pces0@xelera.eu https://yhetil.org/guix/87pm2pces0@xelera.eu

[3] id:875y4gzu85@gmail.com https://yhetil.org/guix/875y4gzu85@gmail.com

[4] https://www.gnu.org/software/emacs/manual/html_node/emacs/Bug-Reference.html
please also consider that bug-reference-bug-regexp can be customized ;-)

[5] https://debbugs.gnu.org/server-control.html

-- 
Giovanni Biscuolo

Xelera IT Infrastructures


signature.asc
Description: PGP signature


Re: How can we decrease the cognitive overhead for contributors?

2023-09-12 Thread Giovanni Biscuolo
Hello Csepp, 

Csepp  writes:

[...]

> I don't think repeating that no forge sucks less advances the
> conversation towards any solution other than keeping the status quo,
> which can't really be called a solution.

Are we really talking about changing the official Guix forge:
https://savannah.gnu.org/projects/guix?

When talking about "what forge to use" I feel that keeping the status
quo is a very good solution, overall.

Thanks, Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures


signature.asc
Description: PGP signature


Re: [workflow] Automatically close bug report when a patch is committed

2023-09-11 Thread Giovanni Biscuolo
Hi Maxim,

Maxim Cournoyer  writes:

[...]

>> If there is enough consensus I volunteer to collect ideas and send a
>> feature request to the mumi and/or Debbugs devels (if we need Debbugs
>> patches I guess it will be a long term goal)
>
> I don't think any changes to Debbugs would be necessary.  Mumi is
> already able to parse mail headers -- parsing a git trailer should be
> just as simple.

You are right, I'll try to file some feature requests for mumi.

Thanks, Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures


signature.asc
Description: PGP signature


Re: [workflow] Triaging issues (was Automatically close bug report when a patch is committed)

2023-09-11 Thread Giovanni Biscuolo
Hi Simon

Simon Tournier  writes:

[...]

>> is enough, but (is:open and tag:patch,moreinfo) is better:

[...]

>> We could also add a feature to have "saved searches" in mumi web and CLI
>> interfaces to help with this task.
>
> Well, the Mumi CLI, “guix shell mumi” and then “mumi search”, should do
> act as “saved searches”.

I don't understand; I mean, for example, having a configuration file
where we can save searches, something like:

--8<---cut here---start->8---

 [patches-to-be-checked]
 "is:open and tag:patch,moreinfo"

--8<---cut here---end--->8---

and then "mumi search --saved patches-to-be-checked"

...something like notmuch, I mean.
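For illustration, such a saved-search lookup is only a few lines; this
Python sketch assumes a hypothetical INI-style file with a 'query' key
(the file format, the option name, and the feature itself are all just
proposals, not anything mumi currently supports):

```python
import configparser

def load_saved_searches(text):
    """Parse a hypothetical saved-searches file where each section
    name is the search alias and its 'query' key holds the query."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    return {name: cfg[name]["query"] for name in cfg.sections()}

EXAMPLE = """\
[patches-to-be-checked]
query = is:open and tag:patch,moreinfo
"""
```

A "mumi search --saved patches-to-be-checked" command would then just
expand the alias into the stored query before searching.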

> Although it does not work for me.

Please help improve mumi with bug reports or patches if you have not
already done so.

> Note that Debian provides some BTS tools, as pointed here,

Yes, I saw your message, useful, thanks; we should package it and maybe
add those functions to mumi, one by one.

Thanks, Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures


signature.asc
Description: PGP signature


Re: How can we decrease the cognitive overhead for contributors?

2023-09-11 Thread Giovanni Biscuolo
Hi Liliana,

Liliana Marie Prikler  writes:

[...]

> For example, whenever people say that "forges would improve stuff", my
> reply is (modulo phrasing) "yes, for the people who are already used to
> forges".

I just want to point out that Guix actually _does have_ a forge: the
software is Savane and it's hosted on savannah.gnu.org:
https://savannah.gnu.org/projects/guix

This is just to remind everyone that there are very different forges out
there, and:

  All forges suck, _none_ sucks less

> Now, forges might indeed be familiar to many,

What kind of forge?  Savannah, GitHub, GitLab, SourceHut, Codeberg (you
name it)

To give a simple example: when talking about the "Pull Request" workflow
used to manage merges, forges are not even interoperable (while git
request-pull /is/).

...not to mention issue management features, when present.

[...]

> Bear in mind, that contributing already has at least one degree of
> complexity baked right into itself on the basis of being a feedback
> loop.

Wow, food for thought!

Thanks! Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures


signature.asc
Description: PGP signature


Re: How can we decrease the cognitive overhead for contributors?

2023-09-11 Thread Giovanni Biscuolo
Hello,

(I find it difficult to efficiently follow this thread and to keep up to
date with reading it, so please forgive me if someone else already
addressed my considerations)

Simon Tournier  writes:

[...]

> « For which contributors do we want to/can we decrease the cognitive
> overhead? », so I read it as: do we discuss about someone who is already
> playing guitar or someone who is knowing nothing about music.
>
> We already have the answer: we are speaking about someone who already
> plays guitar (a skilled programmer).

There are many ways to contribute to Guix:

--8<---cut here---start->8---

- Project Management
- Art
- Documentation
- Packages
- Programming
- System Administration
- Test and Bug Reports
- Translation

--8<---cut here---end--->8---
(https://guix.gnu.org/en/contribute/)

and only a few of them require one to be a skilled programmer :-)

But you are absolutely right, we are talking about someone who already
has:

--8<---cut here---start->8---

(skill
 (or project-management
 user-interface-design
 graphical-design
 multimedia-design
 technical-documentation-writing
 guix-programming
 guile-programming
 program-debugging
 system-administration
 translation-of-technical-documents))
 
--8<---cut here---end--->8---

I'd also say that other "low level" skills are dependencies for some or
all of the above mentioned skills, like: git-dvcs-usage, text-mua-usage

As already mentioned, a conditio sine qua non (hard dependency) for
contributing to Guix (as to many, many other international distributed
projects) is having the "high level" skill named
manage-communications-in-EN.

Last but not least, a "meta skill" is accepting to do all of this as a
volunteer in a large group of volunteers, with very few /direct/ rewards
- the most important one being improving the best free software distro
ever [1] - and many, many issues to address...

Quite a lot of skills needed to be able to contribute, I'd say.

Furthermore, not a skill but another requirement not to be
underestimated: you need a certain amount of time, and unfortunately
many people can only take that from their (often already scarce) free
time.

Probably we should find a way to /introduce/ old and new contributors to
these concepts, since I feel they are sometimes forgotten or
underestimated.

[...]

> Somehow, now we have to discuss about specific task, task by task, and
> propose how to improve.  Survey is one next action for collecting
> data.

My 2 cents: surveys should be _carefully_ designed or the resulting data
will be useless at best, misleading at worst.

[...]

> The improvement had been the removal of the friction by switching to
> some web interface.  Now, the process is probably not easy for people
> like me that are not used to web interface, although interacting with
> web interface is a simpler task than configuring some tools for
> editing translation files.

There is a weblate CLI we should probably package in Guix:
https://docs.weblate.org/en/latest/wlc.html

I just hope that changing from a git-commit-based approach to a
weblate-tool approach has helped find many more active translators:
https://translate.fedoraproject.org/projects/guix/#information

> we are far from the initial discussion. ;-)

Sorry: OT... OT?!? :-O

> I do not see “the practice of controlling access to information,
> advanced levels of study, elite sections of society, etc“.  Well, are
> you French? ;-) Because I feel we are discussing unrelated points
> emerging although we are agree on the core and we just detail tiny
> variations of the same thing. :-)

If you want, I can add a little bit of Italian attitude for discussing
tiny variations of the same thing in detail :-O... just joking, eh! ;-)

[...]

Ciao! Gio'


[1] well, I'm biased :-D

-- 
Giovanni Biscuolo

Xelera IT Infrastructures


signature.asc
Description: PGP signature


Re: How can we decrease the cognitive overhead for contributors?

2023-09-11 Thread Giovanni Biscuolo
Hi Efraim,

Efraim Flashner  writes:

[...]

> On the other hand, if we do manage to automate writing of commit
> messages, it makes one less thing for committers to manually fix before
> pushing the commit.

It would be lovely!  It could also be done by a client-side git hook,
provided in the Guix repo and automatically installed when following the
instructions in the Guix manual (sorry, I'm missing the pointer right
now), so that not only committers but also contributors can benefit from
using that script.

As usual: patches welcome! :-)

Sorry I can't contribute to this task; I really don't know how to
program such a script.

All I can do is suggest adding a git commit message template (see
message id:87y1hhdnzj@xelera.eu, point 4, for details).

Anyway, automation doesn't mean that the contributor/committer can
ignore whether the commit message content conforms to the Guix
guidelines: IMO human supervision is always needed, be it by
contributors before submitting a patch or by a reviewer before
committing.

> The last couple of contributions I pushed had green checks on
> qa.guix.gnu.org and I felt at a bit of a loss what to do while
> checking it over.

Sorry, I feel I don't fully understand what you mean.

I'm guessing here...

AFAIU having a green light on QA means one of the build farms
successfully built the package; I guess this is a "green check" on a
"committer checklist" before committing.  Actually, I can't find such a
checklist, but probably it can be "extrapolated" from the checklist
documented for patch submissions:
https://guix.gnu.org/en/manual/devel/en/html_node/Submitting-Patches.html

So, if all the items in the checklist are OK, the package should be
committed to the appropriate branch.

Lastly, IMO if you are a committer you can go on; if you are not, you
should notify a suitable committer that everything is ready to be
committed.

Maybe if QA sent an email notification to the bug owner (every bug
related to a patch set should have an owner) about that "green light",
it could help with keeping track of what can actually be merged.

> After checking that it was in fact green I double-checked the
> commit message and then also some of the layout of the commit and the
> package itself, and made sure it passed `guix lint`. More resources for
> qa.guix.gnu.org would let us build more patches more quickly.

I agree, QA is a critical resource in this phase of Guix evolution.

More resources IMO also means documentation... and maybe more features?

I feel like we should find a way to sponsor the work on QA.

Happy hacking! Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures


signature.asc
Description: PGP signature


Re: [workflow] Automatically close bug report when a patch is committed

2023-09-11 Thread Giovanni Biscuolo
Hi!

Liliana Marie Prikler  writes:

> Am Donnerstag, dem 07.09.2023 um 09:12 -0700 schrieb Vagrant Cascadian:
>> I am much more comfortable with the "Fixes" convention of:
>> 
>>   Fixes: https://issues.guix.gnu.org/NNN

OK, I understand Vagrant's concerns: we need a _namespaced_ URI, but
there is no need for that URI to be the URL of **one** of our current
web interfaces - why not the other one? ;-)

IMO this is an implementation detail we can easily fix once we find a
consensus on introducing this requirement in the Guix guidelines on
committing.

> I like the idea, but we should also consider the bugs.gnu.org address
> here as well as the convention of putting it into angular brackets.  In
> fact, I might even prefer it if the convention was
>   Fixes: Bug description 
> where bug description is a (possibly empty) name for the bug such as
> "Emacs hangs when I press a key" or something.

I agree that an (optional) bug description might be useful (it could
also be added automatically by some cool etc/committer.scm function?)

I propose:

 Fixes: [optional bug description] 

where namespace is the package name, in our case "guix"; for example:

 Fixes: Emacs hangs when I press a key 

WDYT?
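For illustration, extracting such trailers from a commit message would
be easy to automate; a hedged Python sketch (the exact trailer grammar
is still being discussed in this thread, so the regexp and the example
URL are only illustrative):

```python
import re

# Matches e.g.:
#   Fixes: Emacs hangs when I press a key <https://bugs.gnu.org/12345>
FIXES_RE = re.compile(
    r"^Fixes:\s*(?P<desc>.*?)\s*<https?://[^>]*?(?P<bug>\d+)>\s*$",
    re.MULTILINE)

def fixed_bugs(commit_message):
    """Return (description, bug-number) pairs found in Fixes: trailers."""
    return [(m.group("desc"), int(m.group("bug")))
            for m in FIXES_RE.finditer(commit_message)]
```

A post-receive hook could feed each pushed commit message through this
and send one control message per extracted bug number.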

> As for when to send it, remember that we already send a bunch of mails
> to guix-comm...@gnu.org as our commit hook?  I think it shouldn't be
> too hard to search for the fixes line and send it to debbugs control.

Do you know where to get those scripts, please?  Just to have a quick
look and understand how we could add a function for automatic bug
closing.

Thanks! Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures


signature.asc
Description: PGP signature


Re: [workflow] Automatically close bug report when a patch is committed

2023-09-11 Thread Giovanni Biscuolo
Hi Maxim,

Maxim Cournoyer  writes:

[...]

>> c. how do we get the issue number of a patch containing "Change-Id"? [1]
>
> We'd have to search through the currently opened patches issues; I
> assume using a tool like the 'mumi' command we already have could do
> that.

It would be fantastic if we found a way for mumi to index (via xapian)
the "Change-Id", enabling us to provide a query like this: (is:open and
change-id:).  I don't know if this is doable by mumi alone or if it
needs Debbugs to be able to manage the new "Change-Id" attribute.

If there is enough consensus I volunteer to collect ideas and send a
feature request to the mumi and/or Debbugs developers (if we need
Debbugs patches I guess it will be a long-term goal).

>> [1] right now how do we get the issue number of a committed patch?
>
> There's no direct mapping.  You have to search by subject name.

IMO a link like this is _very_ useful in helping bug tracking management
(and also for historians :-) ) and we should find a way to manage that.

Also for this reason (other than possibly automated bug closing), IMO
committers should add a "Fixes: #" signature (or whatever it is called
in Git) to each commit with an associated bug report.

WDYT?

-- 
Giovanni Biscuolo

Xelera IT Infrastructures


signature.asc
Description: PGP signature


Re: [workflow] Triaging issues (was Automatically close bug report when a patch is committed)

2023-09-11 Thread Giovanni Biscuolo
cks another bug from being fixed. The first
listed bug is the one being blocked, and it is followed by the bug or
bugs that are blocking it. Use unblock to unblock a bug.

--8<---cut here---end--->8---

Unfortunately "merge" is not good for two or more bugs containing
"duplicated" patches.

Could the "Usertags" pseudo-header somehow be used to add "extended"
links to other bug reports?  Something like:

--8<---cut here---start->8---

Usertags: parent_

--8<---cut here---end--->8---

Anyway, using this approach makes bug management (much?) _harder_
because:

1. we have no automatic mechanism to mark the reciprocal bug
relationship; e.g. if I add bug #101 as parent of #100, bug #101 should
have #100 as child, and so on for each relationship.

2. pseudo-headers cannot be managed via server-control commands
[2]

> but bugs can be marked as blocking other bugs... this would make some
> sense in splitting patch series into multiple bugs, marking blocking
> bugs for patches that are dependent on others. But I suspect that
> would be painful in practice in many cases.

In my experience splitting bugs is often useful when managing bug
reports, because you know... ouch; but splitting is also one of the most
painful activities to do, with _every_ bug report tracking system I know
of.

AFAIU splitting bug reports in Debbugs can be done with the clone
command [2]:

--8<---cut here---start->8---

clone bugnumber NewID [ new IDs ... ]

The clone control command allows you to duplicate a bug report. It is useful in 
the case where a single report actually indicates that multiple distinct bugs 
have occurred. "New IDs" are negative numbers, separated by spaces, which may 
be used in subsequent control commands to refer to the newly duplicated bugs. A 
new report is generated for each new ID.

Example usage:

clone 12345 -1 -2
reassign -1 foo
retitle -1 foo: foo sucks
reassign -2 bar
retitle -2 bar: bar sucks when used with foo
severity -2 wishlist
clone 123456 -3
reassign -3 foo
retitle -3 foo: foo sucks
merge -1 -3

--8<---cut here---end--->8---

Doable but not very straightforward; it would be much better if bugs
were split by the submitter (but in my experience sometimes submitters
are lazy :-) )

[...]

>>> That would then make it easier to both issues to be closed if that's
>>> appropriate.
>>
>> I guess you mean that a (human) triager can find related bugs with the
>> help of such a tool.
>>
>> I doubt that related issues should be closed without human intervention,
>> false positives are very dangerous in this case.
>
> With old patches, honestly, it might bring attention back to an issue to
> close it. When I get a bug closed notification, I definitely check to
> make sure the issue is actually fixed, or did not introduce other
> surprises...

I somewhat agree with you, but this "let's close old bugs, just reopen
them if needed" should probably be a scheduled _and_ coordinated effort
between Guix contributors, a sort of meta-hackathon for Guix bugs :-D

> I am not saying I think we should just blindly close old bugs with
> patches, but processes that err slightly on the side of closing old
> ones, perhaps with a message "please verify if the issue is actually
> closed, and reopen in case we have made a mistake." might be
> reasonable.

I agree, it's reasonable IMO.

IMO we should better define what "old patch" means and provide an easy
way to find them; maybe a plain (is:open and tag:patch)

 https://issues.guix.gnu.org/search?query=is%3Aopen+tag%3Apatch

is enough, but (is:open and tag:patch,moreinfo) is better:

 https://issues.guix.gnu.org/search?query=is%3Aopen+tag%3Apatch%2Cmoreinfo

or even filtered to those older than 5 months, because "The bug will be
closed if the submitter doesn't provide more information in a reasonable
(few months) timeframe." [3]

We could also add a feature to have "saved searches" in mumi web and CLI
interfaces to help with this task.

Anyway, whatever we decide to do with old bugs, IMO we should improve
Guix bug report management by having more people (I'm going to
volunteer, as soon as I get all the right tools and knowledge)
performing triage, and by having people working on bug reports make
better use of the tools and guidelines described in [3].

...maybe we should also add a paragraph on this in the Guix manual.

WDYT?

[...]


Thanks! Gio'



[1] I was thinking of what can be done in Request-tracker, which I know
better; it allows to track this kind of links: Depends on, Depended on
by, Parents, Children, Refers to, Referred to by

[2] https://debbugs.gnu.org/server-control.html

[3] https://debbugs.gnu.org/Developer.html

-- 
Giovanni Biscuolo

Xelera IT Infrastructures


signature.asc
Description: PGP signature


Re: How can we decrease the cognitive overhead for contributors?

2023-09-08 Thread Giovanni Biscuolo
Hello Efraim,

Efraim Flashner  writes:

> On Fri, Sep 08, 2023 at 11:53:43AM +0200, Giovanni Biscuolo wrote:
> That wasn't my read of it at all.

I don't understand what part of my message differs from your reading,
so I cannot comment.

> I too have many packages which I haven't upstreamed.

[...]

> but the effort to go through and see which packages are ACTUALLY
> needed and to clean up everything, it's just too much for me.

Do you find that it's too much due to the Guix ChangeLog guidelines?

[...]

> As far as commit messages, I've found that the script in
> etc/committer.scm to be very nice, even if there are plenty of cases
> where it doesn't do the job. I do think there's room for improvement
> and that may be one of the things we can do to make contributing
> easier.

What do you propose should be changed in the commit messages guidelines
to make contributing easier?

In Message-Id 87y1hhdnzj@xelera.eu today I tried to make some
actionable proposals considering the content of this thread: WDYT?

Thanks, Gio'

[...]

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: How can we decrease the cognitive overhead for contributors?

2023-09-08 Thread Giovanni Biscuolo
Ricardo,

Ricardo Wurmus  writes:

> Giovanni,
>
>> You are obviously free not to contribute your patches upstream but the
>> fact that you decided not to because it's "too hard" (my executive
>> summary of your complaints about Change Log content rules) to write
>> commit messages suitable for contribution is _not_ a Guix maintainers'
>> fault, not at all.
>
> As a former Guix co-maintainer I disagree with this take.  (Nobody even
> brought up the word “fault”, which is a particularly unhelpful lens for
> understanding social issues, in my opinion.)

sorry for using "fault", I can't find a better term

> “too hard” sounds (perhaps unintentionally) derisive.

the complete sentence is: «"too hard" (my executive summary about your
complaints about Change Log content rules)»

what can I add about my intentions?

[...]

> It’s not that writing commit messages is hard.  It’s one of many
> obstacles,

IMO one of the smallest ones

> and the lack of clear objective guidelines (fun fact: we aren’t
> actually following the Changelog rules)

Guix has /some/ objective guidelines; they can be enhanced, so please
help the project find better rules or document them better. The fun fact
that Guix is not actually following the "rules" is because they are
actually not rules but guidelines for best practice in documenting
commits for review purposes.

> that means that even something as trivial (compared to the rest of the
> work) as the commit message must be placed on the pile of chores.

You (and others) find it a chore; I (and others) find it very useful
work.

> Add on top of that that there’s a low probability of gratification,
> because committers like myself are getting burned out and so patches
> simply go unacknowledged or only ever see a first cursory review.

I know and understand this class of problems, but it has nothing to
do with the ChangeLog format.

> We can’t blame anyone for seeing these two piles and come to the
> conclusion that it’s not worth the hassle — especially when operating
> your own channel is so easy in comparison.

I'm not blaming Katherine, I respect her decision; I just wanted to say:
please don't blame the guidelines about ChangeLog for the two (or more)
piles.

I see there are several different management problems, and I'm not
trying to say all is fine and good, but IMO "managing git commit
messages following the Guix guidelines" is the least of these management
problems.

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: How can we decrease the cognitive overhead for contributors?

2023-09-08 Thread Giovanni Biscuolo
Hi!

Simon Tournier  writes:

[...]

> For example, we communicate in English.  It appears to me impossible to
> send a contribution without having some basic knowledge of English.

And, believe me or not, for me /that/ is a **significant** cognitive
overhead not just to contribute to international projects [1] (including
our company internal projects), but also to efficiently communicate in
international mailing lists like this one.

For me (actually for everyone) using a natural language, especially a
foreign one, is a constant hack! :-)

[...]

> What is the thing that will tell me that the English I wrote is not
> meeting the standard?

> Why do we accept this “friction” about English filtering people?
>
> Well, I am stretching a bit to make my point. :-)

Yes, it's a stretch, but IMO it helps make the point about the sources
of friction when participating in discussions and trying to contribute
to a "crowded" international project, with people with very different
competence in the English language, including technical English,
programming languages **and** tools.

[...]

>> In the US, the phrase "I don't buy it" is usually the response to 
>> someone trying to trick you into something. This is a little hurtful 
>> because it's either saying:
>
> Sorry, it was not my intent.  I was expressing: I do not believe it is
> *the* real problem.

(Let me use some humor please)

The friction here comes from the fact that all natural languages suck,
no one sucks less :-O

The 5th definition of the transitive verb "buy" taken from
Merriam-Webster is:

--8<---cut here---start->8---

5: ACCEPT, BELIEVE
   I don't buy that hooey.
   — often used with into
   buy into a compromise

--8<---cut here---end--->8---

So AFAIU "I don't buy it" means "I don't [accept|believe] it": no?

...but maybe the Merriam-Webster is too British :-D

[...]

Happy hacking! Gio'



[1] do you have any idea how much time I spend translating concepts
into English for commit messages?  Why can't I just write in my very
good Italian and "git send-email" happily?

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: How can we decrease the cognitive overhead for contributors?

2023-09-08 Thread Giovanni Biscuolo
Hello Katherine,

Katherine Cox-Buday  writes:

[...]

> By "standard" I mean the GNU Changelog format
> (https://www.gnu.org/prep/standards/standards.html#Change-Logs). As
> in: it's expected that commit messages use this format.

[...]

> In my response I was trying to point out a flaw in your comparison: that 
> with style guidelines, which are also complicated, there is usually a 
> formatter that will do it for me, or a linter that will tell me that 
> something is not meeting the standard. This is because languages have 
> grammars, and linters have higher-order system grammars.

AFAIU you are talking about the "Formatting Code" /subset/ of a "Coding
style", because there is no linter that will tell you if you are
following the subset called "Data Types and Pattern Matching" [1]: am I
wrong?

Back to the git commit message formatting: please can you provide us
with one or two examples of how a commit message should be formatted and
what linter is available for that syntax?
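As a thought experiment, the basic *shape* of such a message could be
machine-checked even without ChangeLog semantics; a minimal sketch (the
checks and the 50-character limit are my assumptions for illustration,
not an existing tool or official Guix policy):

```shell
# Thought experiment: a minimal commit-message "linter" that checks
# only the generic shape (subject length, blank second line, presence
# of at least one ChangeLog-style "* file ..." entry).
lint_msg() {
    msg="$1"
    subject=$(printf '%s\n' "$msg" | sed -n 1p)
    second=$(printf '%s\n' "$msg" | sed -n 2p)
    if [ "${#subject}" -gt 50 ]; then
        echo "subject longer than 50 characters"; return 1
    fi
    if [ -n "$second" ]; then
        echo "second line must be blank"; return 1
    fi
    if ! printf '%s\n' "$msg" | grep -q '^\* '; then
        echo "no ChangeLog '* file ...' entry found"; return 1
    fi
    echo ok
}
```

Checking ChangeLog *content* (did you list every changed variable?)
would still need a human, which is perhaps the real point of contention.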

[...]

> Here is my channel with things I intend to upstream, but haven't,
> largely because of this friction.

By "this friction" you mean you miss a linter for commit messages?

Or do you mean you do not agree with the style requested by Guix (and
GNU) for the commit messages?

You are obviously free not to contribute your patches upstream, but the
fact that you decided not to because it's "too hard" (my executive
summary of your complaints about Change Log content rules) to write
commit messages suitable for contribution is _not_ the Guix maintainers'
fault, not at all.

Obviously everyone is free to comment, ask for clarifications or
propose **patches**, but it's not fair to say "I'm not contributing
largely because I have a specific friction with the rules about commit
messages" (again, my executive summary).

[...]

Ciao, Gio'


[1] 
https://guix.gnu.org/en/manual/devel/en/html_node/Data-Types-and-Pattern-Matching.html

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: How can we decrease the cognitive overhead for contributors?

2023-09-08 Thread Giovanni Biscuolo
Hi all!

I think the discussion about ChangeLog Style shows we probably need to:

1. enhance the manual section "22.6 Submitting Patches"
https://guix.gnu.org/en/manual/devel/en/html_node/Submitting-Patches.html

--8<---cut here---start->8---

Please write commit logs in the ChangeLog format (see Change Logs in GNU Coding 
Standards); you can check the commit history for examples.

You can help make the review process more efficient, and increase the chance 
that your patch will be reviewed quickly, by describing the context of your 
patch and the impact you expect it to have. For example, if your patch is 
fixing something that is broken, describe the problem and how your patch fixes 
it. Tell us how you have tested your patch. Will users of the code changed by 
your patch have to adjust their workflow at all? If so, tell us how. In 
general, try to imagine what questions a reviewer will ask, and answer those 
questions in advance.

--8<---cut here---end--->8---

IMO we should move the above paragraphs to a new subsection "22.6.N
Change Logs" and add some rationale (summarized from the GNU Standards
section, maybe with some Guix-specific rationale expressed in this
section), a general rule (to be interpreted by humans, see below) and
some examples taken from one or two relevant recent commits.

2. enhance the section "22.3 The Perfect Setup"

The proposed new "22.6.N Change Logs" subsection above should also
provide a link to the relevant information about the snippets documented
in section "22.3 The Perfect Setup"... and /vice versa/: the "snippets
section" should reference the "Change Logs" section, since the snippets
exist to automate the general rules provided there; I'd also separate
the paragraph related to snippets into a "22.3.1 Emacs snippets"
section.

3. welcome snippets for different IDEs

Somewhere™ in our manual we should say that we are very glad to accept
patches (also to the documentation) adding snippets for free software
IDE templating systems other than Emacs Yasnippet or Tempel, such as
vim-neosnippet for Vim or the native templating system of Kate [1], for
example.  Other examples?

4. add a git commit message template

https://www.git-scm.com/book/en/v2/Customizing-Git-Git-Configuration#_commit_template

--8<---cut here---start->8---

If your team has a commit-message policy, then putting a template for that 
policy on your system and configuring Git to use it by default can help 
increase the chance of that policy being followed regularly.

--8<---cut here---end--->8---

I'd write a template with a very short explanation of the commit message
*basics* (first line is the subject, max 50 chars; blank line; body) and
some https and info links to the relevant section of the manual about
the Change Log format.

Suggestions for other git template content are very welcome.

This template would be added to the git config file etc/git/gitconfig,
which is automatically configured when building the project, as stated
in "22.6.1 Configuring Git".
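A sketch of what that could look like (the template wording is only my
suggestion, not official Guix policy):

```shell
# Write a suggested commit-message template.  Lines starting with '#'
# are stripped by git when the final commit message is saved.
cat > guix-commit-template <<'EOF'
# Subject: max ~50 chars, e.g. "gnu: foo: Update to 1.2.3."
#
# One blank line, then a ChangeLog-style body, e.g.:
#   * gnu/packages/foo.scm (foo): Update to 1.2.3.
#
# See "Submitting Patches" in the Guix manual:
#   https://guix.gnu.org/en/manual/devel/en/html_node/Submitting-Patches.html
EOF

# Per-repository configuration (illustrative; must be run inside a
# git checkout — or use --global to apply it everywhere):
#   git config commit.template ./guix-commit-template
```

With that in place, `git commit` pre-fills the editor with the template
instead of an empty buffer.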



I'll try to send a patchset for 1., 2. and 3; a separate one for 4.

WDYT?

Maxim Cournoyer  writes:

[...]

>> On 2023-09-06, Liliana Marie Prikler wrote:

[...]

>>> It's 
>>>
>>> * file (variable)[field]{do you need 4 levels?}

The general form of the ChangeLog-style format for Guix code (Guile with
gexps) could be rewritten as:

* relative-path-of-changed-file (variable) [field]: Description of
change.

I never saw a {4th level}, so AFAIU it is not needed, unless someone has
a good example: in that case we could add a 4th level to the general
description.

[...]

> Here's an example in the Guix "dialect":
>
> --8<---cut here---start->8---
> * gnu/packages/file.scm (package-symbol)
> [arguments] <#:phases>: New patch-paths phase.
> --8<---cut here---end--->8---
>
> It could also have been:
>
> --8<---cut here---start->8---
> * gnu/packages/file.scm (package-symbol) [arguments]: Add patch-paths
> phase.
> --8<---cut here---end--->8---

Those are good general examples: I'd use them in the manual section
described in 1.

WDYT?

> It doesn't really matter, as long as it's clear and you did the exercise
> of reviewing the code you touched and writing down the changes summary
> for the reviewer (and yourself).

This is a good example of rationale, I'd use that also :-)

Happy hacking! Gio'



[1] 
https://docs.kde.org/stable5/en/kate/kate/kate-application-plugin-snippets.html

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: [workflow] Automatically close bug report when a patch is committed

2023-09-07 Thread Giovanni Biscuolo
Hi,

Giovanni Biscuolo  writes:

[...]

>>> The first thing we need is a server side git post-receive hook on
>>> Savannah, I've opened the sr#110928 support request:
>>> https://savannah.nongnu.org/support/index.php?110928
>>
>> It's something that the Savannah folks would need to maintain
>> themselves, right?
>
> Forgot to mention that I'm pretty sure a post-receive server side hook
> is already running (and maintained) for our guix.git repo on Savannah,
> it's the one that sends notifications to guix-commits mailing list
> https://lists.gnu.org/mailman/listinfo/guix-commits

Regarding server side git hooks, I forgot to mention that on 2023-08-31
a new commit-hook is available on Savannah (installation must be
requested per-project):

git post-receive UDP syndication
https://savannah.gnu.org/news/?id=10508

--8<---cut here---start->8---

A new commit-hook is available to install for git repositories that will send a
single Datagram via UDP after each successful commit.  This can be useful for
continuous integration (CI) schemes and elsewise when a push-driven model is
preferred to (e.g.) regularly re-polling upstream when changes may or may not
have occurred.

To request installation please open a ticket with the Savannah Administration 
project:

[...]

The (sh, GPLv3+) post-receive script source, detail on how the Datagram is 
structured, and example "receiver" scripts (in perl) can be found here:

  https://git.sr.ht/~mplscorwin/git-udp-syndicate

--8<---cut here---end--->8---

Maybe this hook is useful for communication with the QA service(s).

Maybe this hook could be adapted to close bugs instead of sending a UDP
datagram.

Happy hacking! Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: [workflow] Automatically close bug report when a patch is committed

2023-09-07 Thread Giovanni Biscuolo
Hi Felix,

Felix Lechner  writes:

> Hi Gio',
>
> On Thu, Sep 7, 2023 at 4:08 AM Giovanni Biscuolo  wrote:
>>
>> close the bugs that are listed in
>> the commit message
>
> Perhaps you'd like to see Debian's hook [1] for the Salsa web forge
> (which is controversially based on Gitlab).

[...]

Thank you for the reference, it'll be a useful example.

A Ruby interpreter dependency is probably too much in this case, but we
can always rewrite it :-)

Happy hacking! Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: [workflow] Automatically close bug report when a patch is committed

2023-09-07 Thread Giovanni Biscuolo
Hi Maxim,

Maxim Cournoyer  writes:

> Simon Tournier  writes:

[...]

>> Maybe patchwork already running (I think) could help, trying to
>> regularly rebase the branch dedicated to the submission on the top of
>> master, then if all is fine, somehow the two heads from the master
>> branch and the dedicated branch should match, and it would indicate the
>> patches are included and it is safe to close.  More or less. :-)
>
> We could use Gerrit's commit hook that adds a unique ID as a git
> trailer.

Do you mean "commit-msg" hook as documented here:
https://gerrit-review.googlesource.com/Documentation/cmd-hook-commit-msg.html
?

--8<---cut here---start->8---

The Gerrit Code Review supplied implementation of this hook is a short shell 
script which automatically inserts a globally unique Change-Id tag in the 
footer of a commit message. When present, Gerrit uses this tag to track commits 
across cherry-picks and rebases.

--8<---cut here---end--->8---

> Then it should become possible to
>
> 1. Check if all items of a series have appeared in the git history
> 2. If so, close the associated issue if it was still open

Thinking out loud:

a. each contributed patch will have a unique Change-Id, persistent
across rebases (and git commit --amend), and every new patch version
(produced during patch revision) will have the same Change-Id; this is
valid for all commits in a patch set

b. when all "Change-Id"s of patches contained in a patch set are listed
in the git history (of one of the official branches) the associated
issue can be closed

c. how do we get the issue number of a patch containing "Change-Id"? [1]

Did I miss something?
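Point b. could be sketched as a pure check over the log text; a minimal
sketch assuming Gerrit-style "Change-Id: I..." trailers (nothing like
this runs on Guix infrastructure today):

```shell
# Check b.: given the text of `git log` on an official branch, verify
# that every Change-Id of a patch set appears in it.  Returns 0 only
# when all IDs are present.
all_changes_merged() {
    log="$1"; shift
    for id in "$@"; do
        printf '%s\n' "$log" | grep -q "^Change-Id: $id\$" || return 1
    done
}
```

For example, `all_changes_merged "$(git log --format=%B master)" Iaaa
Ibbb` returning 0 would mean the whole series has landed and the
associated issue can be closed.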

Thanks, Gio'


[1] right now how do we get the issue number of a committed patch?

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: [workflow] Automatically close bug report when a patch is committed

2023-09-07 Thread Giovanni Biscuolo
Hi Simon,

Simon Tournier  writes:

> On Wed, 06 Sep 2023 at 12:14, Maxim Cournoyer  
> wrote:
>
>>> Let's avoid manual gardening as much as possible! :-)
>>
>> I like the idea!
>
> I think that automatizing is not trivial.  Sadly.

If we "restrict" the automation to "close the bugs that are listed in
the commit message" do you think it's doable?

[...]

> The potential issue is the number of false-positive;

In the context given above, the only way to get a false positive is for
the committer to give a wrong bug number, right?

> closing and the submission is not applied.

I don't understand: what do you mean by "submission"?

By design:

--8<---cut here---start->8---

The post-receive hook runs after the entire process is completed and can be 
used to update other services or notify users.

--8<---cut here---end--->8---
(https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks)

In this case the "other service update" is "close bug " and is
guaranteed to be done after the commit is applied.

> Maybe patchwork already running (I think) could help, trying to
> regularly rebase the branch dedicated to the submission on the top of
> master, then if all is fine, somehow the two heads from the master
> branch and the dedicated branch should match, and it would indicate the
> patches are included and it is safe to close.  More or less. :-)

I'm lost :-D

> That’s said, I always find annoying to loose the track between the Git
> history and the discussion that happened in the tracker.  Sometimes,
> rational of some details of the implementation had been discussed in the
> tracker and it is impossible to find then back.  Therefore, I would be
> in favor to add ’Close #1234’ in the commit message, say the first one
> from the series tracked by #1234.  Doing so, it would ease automatic
> management of guix-patches.  However, it would add again some burden on
> committer shoulder.

I completely agree that (often? seldom?) we miss traceability if we do
not link a commit with a bug report (when present), and vice versa (when
a bug report is resolved by a series of commits).

> Similarly, we are already adding in the commit message something like
> ’Fixes <https://issues.guix.gnu.org/issue/1234>’.

Is this an informal convention or is this documented somewhere?

> And that could be used for closing.

Yes, we can use a list of keywords for closing (Closes, Close, Fix,
Fixes, etc.), but for the bug report I'd use only a number; the URL
really does not matter.
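A sketch of such keyword matching, extracting bug numbers from a commit
message read on stdin (the keyword list is a proposal, not an existing
Guix convention):

```shell
# Extract Debbugs issue numbers from closing-keyword trailers such as
# "Close: #1234", "Closes: #1234" or "Fixes: #1234" at line start.
closing_bug_numbers() {
    grep -oE '^(Closes?|Fix(es)?): #[0-9]+' | grep -oE '[0-9]+'
}
```

Anchoring on the line start keeps prose like "this fixes #123 in
passing" from triggering a false positive.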

> Again, the concern is about false-positive; closing when it should not
> be.

Modulo a programming error in the script, the only way would be to write
the wrong bug number after the "keyword", and IMO this is similar to
writing the wrong bug number in the "To:" field when closing a bug via
email.

> Well, I think that automatizing is not trivial. :-)

Not trivial but automatable, IMO :-D

Thanks! Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: [workflow] Automatically close bug report when a patch is committed

2023-09-07 Thread Giovanni Biscuolo
Hi Maxim and Ludovic,

I'm including you, Ludovic, because in the past you requested a similar
git hook installation for Guix and maybe you have more info.

Maxim Cournoyer  writes:

> Giovanni Biscuolo  writes:

[...]

OK, I really did point to the wrong examples, as Christopher pointed
out, but my proposal still holds :-D

>> IMO we need a way to automatically close this kind of bug report... or
>> am I missing something?
>>
>> Let's avoid manual gardening as much as possible! :-)
>
> I like the idea!

Fine, let's try to do it!

>> The first thing we need is a server side git post-receive hook on
>> Savannah, I've opened the sr#110928 support request:
>> https://savannah.nongnu.org/support/index.php?110928
>
> It's something that the Savannah folks would need to maintain
> themselves, right?

Forgot to mention that I'm pretty sure a post-receive server side hook
is already running (and maintained) for our guix.git repo on Savannah,
it's the one that sends notifications to guix-commits mailing list
https://lists.gnu.org/mailman/listinfo/guix-commits

Maybe that server-side hook is common to all Savannah-hosted projects,
but in that case it should be configurable per-project, since the
"From:" and "To:" email headers must be project-related.

...or has that post-receive hook been installed by the Savannah admins
of the Guix project?

I don't know what the policy is for "custom" server-side git hooks: for
sure Someone™ with ssh access to /srv/git/guix.git/hooks/ would have to
/install/ such a script.

I've not found any reference to "per project" git hooks in the Savannah
documentation
(https://duckduckgo.com/?q=git+hook+site%3Ahttps%3A%2F%2Fsavannah.gnu.org%2Fmaintenance%2FFrontPage%2F&ia=web)

I have found many support requests mentioning git hooks:
https://savannah.gnu.org/search/?words1=git+hook&type_of_search=support&Search=Search&exact=1#options

In particular, these support threads:

- sr #110614: auctex-diffs emails should be in the same format as emacs-diffs
https://savannah.nongnu.org/support/?func=detailitem&item_id=110614

- sr #110594: git push returns "hints" about ignored hooks
https://savannah.nongnu.org/support/?func=detailitem&item_id=110594

- sr #110482: Install irker hooks in poke.git
https://savannah.nongnu.org/support/?func=detailitem&item_id=110482

- sr #109104: Add Git 'update' hook for Guix repositories
https://savannah.nongnu.org/support/?func=detailitem&item_id=109104

This makes me believe that server-side hooks are something that must be
installed per-project by Savannah sysadmins.

When I asked, I thought the best way would be to scan for a string like
"Close #" in the commit message (the committer should add
such a string), but probably this can be avoided: the bug can be closed
when a patch is committed to one of the listed official branches
(master, core-updates, etc.)
>
> You mean by simply matching the subject?  Maybe that could work.

On second thought, maybe it's better to add a "Close: #" to the
commit message.

Now... let's find (or develop) such a post-receive hook!
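To make the idea concrete, here is a sketch of such a hook; the branch
list, the "Close: #NNNN" trailer and the mail command are all my
assumptions, not existing Guix infrastructure:

```shell
#!/bin/sh
# Sketch of a server-side post-receive hook that closes Debbugs issues
# referenced as "Close: #NNNN" in pushed commit messages.

is_official_branch() {
    case "$1" in
        refs/heads/master|refs/heads/core-updates) return 0 ;;
        *) return 1 ;;
    esac
}

# Extract bug numbers from a commit message read on stdin.
bug_numbers() {
    grep -oE '^Close: #[0-9]+' | grep -oE '[0-9]+'
}

# The actual hook loop would look like this (left commented out here;
# post-receive receives "<old> <new> <ref>" lines on stdin):
#
#   while read -r old new ref; do
#       is_official_branch "$ref" || continue
#       for rev in $(git rev-list "$old..$new"); do
#           git log -1 --format=%B "$rev" | bug_numbers |
#               while read -r n; do
#                   # Debbugs closes a bug when mail reaches NNN-done@
#                   echo "Closed by commit $rev." |
#                       mail -s close "$n-done@debbugs.gnu.org"
#               done
#       done
#   done
```

Because post-receive runs only after the push has been accepted, the
"close" mail is guaranteed to refer to a commit that really landed.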

Happy hacking! Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




[workflow] Triaging issues (was Automatically close bug report when a patch is committed)

2023-09-07 Thread Giovanni Biscuolo
Hi Christopher,

[note: I'm deleting the "In-Reply-To:" header and changing subject to
try to start a new thread]

Christopher Baines  writes:

> Giovanni Biscuolo  writes:

[...]

>> 20 bugs with messages similar to this one:
>>
>>
>>  rofi-wayland was added in:
>>
>>  04b5450ad852735dfa50961d3afc789b2e52b407 gnu: Add rofi-wayland.
>>
>>  And updated to a newer version in:
>>
>>  19c042ddf80533ba7a615b424dedf9647ca65b0f gnu: rofi-wayland: Update to 
>> 1.7.5+wayland2.
>>
>>  Marking as done.
>>
>> (https://yhetil.org/guix/87zg25r0id.fsf@wireframe/)
>>
>> IMO we need a way to automatically close this kind of bug report... or
>> am I missing something?
>
> I think the example you give doesn't relate to what you're looking at
> below (a post-receive hook).

Oh I see, thanks!

This is a complex case (see below), at least not one that can be solved
by automatically closing bug reports upon commits :-O

Sorry for the confusion I added by pointing out the wrong example; a
quick look at many bug reports closed by Vagrant Cascadian last Fri and
Sat shows that many (all?) of the closed bug reports were some sort of
"duplication" of others.  Vagrant, please can you tell us?

Let's call this a "triaging issue"; it is a class of "management issue"
that should be discussed in a separate thread (this one), to stay
focused on the subject.

Probably failing to "manually" close bugs after a patch set has been
committed is not /the worst/ management issue currently, but IMO it
would be better to be able to just "commit and forget" :-)

Probably /triaging/ is one of the most critical bug report management
issues; it should be addressed properly:

- by finding or developing triage helping tools to automate what is
  possible
  
- by having more people do the (boring) task of triaging bugs

Probably we should consider adding one more contributor "level": the
triager; a triager is _not_ a reviewer (and obviously not a committer),
even if she could /also/ be a reviewer and/or committer.

The point is that triaging is a (boring) activity that Someone™ should
perform, sooner or later (as Vagrant did with the bug reports mentioned
above).

Obviously a contributor could (and should) also be a self-triager, if
she wants to help make the review process more efficient.

> There were at least two different issues with patches for adding
> rofi-wayland [1] and [2].
>
> 1: https://issues.guix.gnu.org/53717

This was to add (version "1.7.3+wayland1") and AFAIU was never committed

> 2: https://issues.guix.gnu.org/59241

This issue has two patches:

[PATCH 1/2] gnu: rofi: Update to 1.7.5.

[PATCH 2/2] gnu: Add rofi-wayland.

A (self-)triager would have noted two problems in that patch set
submission:

1. the patch set contains two sets of unrelated changes (?)

Point 12. of the "check list" in 22.6 Submitting Patches
https://guix.gnu.org/en/manual/devel/en/html_node/Submitting-Patches.html says:

--8<---cut here---start->8---

Verify that your patch contains only one set of related changes. Bundling 
unrelated changes together makes reviewing harder and slower.

Examples of unrelated changes include the addition of several packages, or a 
package update along with fixes to that package.

--8<---cut here---end--->8---

Is the addition of rofi-wayland related to the upgrade of rofi?

...probably yes, but...

2. multiple patches without a cover letter

https://guix.gnu.org/en/manual/devel/en/html_node/Sending-a-Patch-Series.html#Multiple-Patches-1

--8<---cut here---start->8---

When sending a series of patches, it’s best to send a Git “cover letter” first, 
to give reviewers an overview of the patch series.

--8<---cut here---end--->8---

A missing cover letter makes triaging harder.

The issue title comes from the first patch (gnu: rofi: Update to
1.7.5.) and IMO is somewhat confusing, because the title is what appears
in search results (Mumi, Debbugs, Emacs Debbugs).

If the contributor had sent a cover letter with the subject "gnu: Update
rofi and add rofi-wayland (inheriting)", possibly with a little
explanation in the message body, the (now undone) early triaging would
have been easier.

How do we solve such bug management class of problems? WDYT?

> One improvement I can think of here is that QA should highlight that
> some of the changes in each of those patch series can be found in
> another patch series.

...and tag both bugs as related in Debbugs?

This would be very helpful for triagers, a very helpful tool.

...but we need triagers, IMO.

> That would then make it easier to both issues to be closed if that's
> appropriate

[workflow] Automatically close bug report when a patch is committed

2023-09-06 Thread Giovanni Biscuolo
Hello,

often bug reports related to patches are left open even after the
patch/patchset has been applied; the latest example is a batch of
Debbugs manual gardening from Vagrant last Fri and Sat, when he closed
more than 20 bugs with messages similar to this one:

--8<---cut here---start->8---

 rofi-wayland was added in:

 04b5450ad852735dfa50961d3afc789b2e52b407 gnu: Add rofi-wayland.

 And updated to a newer version in:

 19c042ddf80533ba7a615b424dedf9647ca65b0f gnu: rofi-wayland: Update to 
1.7.5+wayland2.

 Marking as done.

--8<---cut here---end--->8---
(https://yhetil.org/guix/87zg25r0id.fsf@wireframe/)

IMO we need a way to automatically close this kind of bug report... or
am I missing something?

Let's avoid manual gardening as much as possible! :-)

The first thing we need is a server side git post-receive hook on
Savannah, I've opened the sr#110928 support request:
https://savannah.nongnu.org/support/index.php?110928

When I asked, I thought the best way would be to scan for a string like
"Close #" in the commit message (the committer should add
such a string), but probably this can be avoided: the bug can be closed
when a patch is committed to one of the listed official branches
(master, core-updates, etc.)

WDYT?

Ciao, Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: Mumi CLI client (was: How can we decrease the cognitive overhead for contributors?)

2023-09-05 Thread Giovanni Biscuolo
Hi all

Arun Isaac  writes:

[...]

> I have been following the conversation with much interest. There seems
> to be a developing consensus that we should switch to sourcehut.

I fear that the switch would _not_ solve many of the problems tagged as
"cognitive overhead"; the problem is "the workflow", not the tool.

«All forges suck, _no one_ sucks less» :-D (it's a joke!)

Please forgive me if I insist and/or repeat myself (and others) but:

1. sourcehut is a suite of (alpha) applications: which set of
applications should we switch to?

2. given that all *.sr.ht applications are alpha [1], would the Guix
packages for the applications listed in 1. be maintainable in the _near_
future?  We would also need to package them as Guix _services_ for them
to be useful.

3. given current resources, where do we plan to host the new services?
Are the current Guix sysadmin human resources enough to _support_ the
new services?  Please consider that «Currently, the only officially
supported method for installing sr.ht software is through packages on
Alpine Linux hosts. [...] the installation and deployment process can
become a bit more involving. In particular, many sr.ht services have
their own, unique requirements that necessitate extra installation
steps», unless we package them for Guix (point 2.)

4. git.sr.ht (the "forge"?) implements an email-based patch workflow
and _not_ a web-based pull-request workflow; it's documented here:
https://man.sr.ht/git.sr.ht/#sending-patches-upstream. So git.sr.ht will
_not_ help Guix add a web-based PR workflow; for that we would need
_other_ forges, for example GitLab or Gitea/Forgejo (others?)

5. what about "git request-pull" [2] to enable a PR workflow for Guix?
It seems completely ignored by all the "forges", or am I wrong?
Unfortunately, AFAIU it only runs from the CLI; there is no web or GUI
interface for it.

6. in what respect is todo.sr.ht (the issue tracker) better than
Debbugs (via multiple interfaces)?  AFAIU nothing in that application is
so much better than what Guix actually uses; actually the Debbugs and
Mumi web interfaces are read-only: you cannot open a bug report or
comment on it, you have to send an email; this is a _feature_, not a
bug, since we don't need a _complex_ web-based
authentication+authorization system for bug reporters/commenters.
Please also consider that the Emacs Debbugs interface is very useful;
I'd miss a similar interface for todo.sr.ht (but there is a CLI for the
sourcehut web services)

7. last, but not least: a public-inbox instance (https://yhetil.org/)
greatly helps people who are not subscribed to a mailing list, both with
email replies (e.g. https://yhetil.org/guix-bugs/87il8qrm53@gmail.com/#R)
**and** with patch downloading; IMHO it should be officially provided by Guix
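Regarding point 5, here is a quick sketch of what "git request-pull"
produces, using throwaway local repositories in place of a real public
fork (names and paths are arbitrary):

```shell
# Build a tiny "upstream" repository and a contributor clone.
tmp=$(mktemp -d) && cd "$tmp"
git init -q upstream && cd upstream
git config user.email dev@example.org && git config user.name Dev
echo base > file && git add file && git commit -qm 'Initial commit'
cd .. && git clone -q upstream fork && cd fork
git config user.email dev@example.org && git config user.name Dev
echo change >> file && git commit -qam 'Add a change'

# Draft the pull-request text: "the changes since <start> are
# available in the Git repository at <URL>".  Normally <URL> is the
# contributor's public repository; here it is just the local clone.
branch=$(git rev-parse --abbrev-ref HEAD)
git request-pull "origin/$branch" "$PWD" "$branch"
```

The output is a ready-to-send email body (summary, shortlog and
diffstat), i.e. exactly the kind of thing a forge could render behind a
button.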

> I am all in favour of any decision the community comes to on this.

I just hope the decision comes after an analysis of each single problem,
otherwise there is a chance that the different tools will not solve the
problems.

[...]

> So, I thought it helpful to document it and put it into the manual. I
> have sent a patch to https://issues.guix.gnu.org/65746

Very good work, thanks Arun!

Happy hacking, Gio'


[1] https://man.sr.ht/packages.md

[2] https://www.git-scm.com/docs/git-request-pull

-- 
Giovanni Biscuolo

Xelera IT Infrastructures


signature.asc
Description: PGP signature


Re: How can we decrease the cognitive overhead for contributors?

2023-09-04 Thread Giovanni Biscuolo
Hi,

Felix Lechner via "Development of GNU Guix and the GNU System
distribution."  writes:

> Hi all,
>
> On Sun, Sep 3, 2023 at 3:35 AM Ricardo Wurmus  wrote:
>>
>> I won’t contribute to Mumi any more.  Giving it up doesn’t hurt my
>> feelings.  I’d be glad to see it gone.
>
> For what it's worth, I like Mumi.

me too: it's a very good implementation of a web interface for an
email-based issue tracking (eco)system, in particular it's a very useful
implementation of a web based Xapian interface specialized in issue
_tracking_

> One day, I hope to help offer Scheme diffs there. Perhaps it will take
> just a pull from the base commit and guile-sexp-diff. [1]
>
> Maybe Git can eventually use such diffs, too. As a group we deserve
> better.

Git already has a mechanism to use external diff programs, via
.gitattributes; all we need are better/specialized/semantic diff
programs packaged in Guix.
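A minimal sketch of that mechanism (the driver name "scheme" and the
external command "difft" are arbitrary examples, not something Guix
ships):

```shell
# In a throwaway repository, route *.scm files through a custom
# diff driver declared in .gitattributes...
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
echo '*.scm diff=scheme' > .gitattributes

# ...and point that driver at an external diff program.
git config diff.scheme.command difft

# "git diff" now invokes the external program for *.scm files;
# "git diff --no-ext-diff" falls back to the built-in diff.
git check-attr diff file.scm
```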

> I like the email-based workflow.

I'd say that an email-based workflow is a conditio sine qua non for
_all_ projects, possibly supplemented by a web based interface, possibly
better than Debbugs, possibly better than mumi

mumi has _great_ potential to be extended: it could even become
the web/CLI frontend to other email-based issue tracking
systems.

[...]

Happy hacking, Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: How can we decrease the cognitive overhead for contributors?

2023-09-02 Thread Giovanni Biscuolo
 a more commonly accepted commit message with no special 
> formatting.
>    12. Run `git push` (subsequent changes are still just `git push`).
>    13. Go to forge website, click button to open a pull-request.

Forgive me if I insist: that forge site is _not_ SourceHut

Second: each forge web site has a custom (non-standard) way to manage
pull-requests.

Third: Git has a pull-request mechanism [1] that could _easily_ be
integrated in each and every forge, allowing projects to use
/interoperable/ email based pull-request workflows if they want to.

[...]

> I don't find difficult, and reflected on the difference for awhile, I 
> think, at
> least for me, the highest friction comes from:
>
> - Steps 11-19, or (to assign it a name for easier reference) the "CI
> steps".

OK: AFAIU https://qa.guix.gnu.org/ is _the_ answer, so we need more
contributors to that project

...and this means more cognitive overhead for Someone™ :-)

[...]

> If we wanted to encourage contributors to run "CI steps" 
> locally before
>    submitting, maybe this should be another `guix` sub-command? `guix 
> pre-check`
>    or something? I know there is a potential contributor who had this 
> idea first
>    who would want to hack on this.

Having such a sub-command might or might not help, because IMO the core
and most cognitively challenging parts of the "CI steps" are not about
whether builds are done locally, but are (in order of importance):

1. having patches reviewed by humans, the "not automatable" part, because
Someone™ has to understand the _meaning_ of the patch and verify it
conforms to the coding standards of the project, including "changelog
style" commit messages;

2. understanding why a build derivation fails when it does.

This is real cognitive overhead and it cannot be automated.

> - Steps 19-23, or the "manage patch" steps.
>
>    I think an insight here is that the big button on forges is actually 
> a program
>    removing the mental overhead for you.

On the "web forges" vs "email based" patch workflow management I've said
enough in other messages in this thread; here I just want to add
(repeat) this: please do not only consider the mental overhead of
potential contributors for "managing patches", also consider the mental
overhead for patch reviewers; I've read many articles from professional
patch reviewers that perfectly explain the great advantages of using an
email based workflow.

[...]

>    I also don't usually have to worry nearly as much about crafting a commit
>    message. So long as the title is under a character limit, and the body is
>    helpful, it's OK. I think what bothers me most about the GNU changelog
>    messages is that it's the worst of both spoken language and programming
>    languages: there's an expectation of structure, but no grammar I can 
> parse
>    against, and it's free-form.

I'm sorry that the GNU policy about commit messages bothers you (on the
contrary, it makes me happy); please consider that it is /just/ one of
the policies of the Guix project: code of conduct, coding standards,
others?

[...]

> - Having multiple places to manage aspects of my patch
>
>    In a web-forge, I generally have a URL I can go to and see everything 
> about my
>    patch. I think we have that with https://issues.guix.gnu.org with two
>    exceptions: (1) QA is a click away, or if you're using email, you're 
> not even
>    aware that there's a QA process failing (2) If you're not using email,
>    context-switching between this page and email to respond.

it's "just" an _interface_ issue

[...]

Happy hacking! Gio'


[1] https://www.git-scm.com/docs/git-request-pull

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




commit message helpers (was Re: How can we decrease the cognitive overhead for contributors?)

2023-08-30 Thread Giovanni Biscuolo
uot; "babelfish " "bmp " "clarisworks " "collab " "command "

[...]

 (license license:gpl2+)))

--8<---cut here---end--->8---

As you can see, here the hunk header is:

3. @@ -48,89 +48,89 @@ (define-module (gnu packages abiword)

showing the function in which "define-public abiword" is defined.

I don't know if this is or could be used to automatically draft a more
helpful commit message, but it does help to "figure out which procedures
the changed lines were in, automatically."
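In fact, Git already lets a repository teach the hunk-header heuristic
about Scheme via a diff driver's "xfuncname" regex; a self-contained
sketch (the regex and the package snippet are deliberately naive
illustrations):

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
git config user.email dev@example.org && git config user.name Dev

# Route *.scm files through a "scheme" driver whose hunk headers
# match top-level (define ...) forms.
echo '*.scm diff=scheme' > .gitattributes
git config diff.scheme.xfuncname '^\(define.*'

cat > pkg.scm <<'EOF'
(define-public abiword
  (package
    (name "abiword")
    (version "3.0.5")
    (build-system gnu-build-system)
    (synopsis "Word processor")
    (description "A word processor.")
    (license gpl2+)))
EOF
git add . && git commit -qm 'gnu: Add abiword.'
sed -i 's/Word processor/Word processor for GNU/' pkg.scm

# The hunk header now names the enclosing define-public form.
git diff
```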

WDYT?

Also, I can't find any plan or feature request to add tree-sitter
support to the "git diff" parser, but that could be a great improvement
IMO, since it would allow git users to use all the available tree-sitter
parsers, and not just pattern matching, to understand/show the context
in which the changes happened.

In this context, I found these two currently active tree-sitter based
diff tools:

1. Difftastic https://difftastic.wilfred.me.uk/

2. diffsitter https://github.com/afnanenayet/diffsitter

Both can be used as external "git difftool"s [1] [2]

Neither is currently packaged in Guix.
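Once packaged, wiring either of them in as a difftool is just
configuration; the Difftastic docs suggest something along these lines
(a sketch, shown here in a throwaway repository):

```shell
# In a repository (or with --global), register difftastic as a
# difftool; $LOCAL and $REMOTE are expanded by "git difftool".
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
git config diff.tool difftastic
git config difftool.difftastic.cmd 'difft "$LOCAL" "$REMOTE"'
git config difftool.prompt false

# Then use "git difftool" wherever you would use "git diff".
```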


Happy hacking! Gio'


[1] https://difftastic.wilfred.me.uk/git.html#git-difftool

[2] https://github.com/afnanenayet/diffsitter#git-integration

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: How can we decrease the cognitive overhead for contributors?

2023-08-30 Thread Giovanni Biscuolo
Liliana Marie Prikler  writes:

[...]

>> I tried https://issues.guix.gnu.org/issue/65428/patch-set/1 but I get
>> a blank page: any example plz?
> That's a mumi bug IIUC, it works for M > 1.  For M = 1, use patch-set
> without /M.

OK, thanks!

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




git interfaces (was Re: How can we decrease the cognitive overhead for contributors?)

2023-08-29 Thread Giovanni Biscuolo
Giovanni Biscuolo  writes:

[...]

> Executive summary: it's "just" and **interface** problem; contributors

"an interface problem"

> who like the SourceHut web UI to format and send patchsets via email are
> very wellcome to subscribe and use the SourceHut services to send
> patches (also) to Guix [1].  There is no reason for Guix to be
> (self)hosted on SourceHut.
>
> Also, like there are manu TUI for Git,

"there are many"

> there are also many Git GUIs
> around https://git-scm.com/downloads/guis, maybe some of them also have
> a "patch preparation UI" like the Git SourceHut web service.

For example, I see that git-cola (https://git-cola.github.io/) has an
interface to apply patches [1] (the user still needs a way to download
the patches in order to apply them) but AFAIU it does not have an
interface for "git format-patch"; I guess it should not be too hard to
add one as a custom GUI action [2]

Could git-cola be a tool to help people who dislike CLI/TUI tools
contribute smoothly to Guix (or other email based patch management
projects)?

[...]

Best regards, Gio'


[1] https://git-cola.readthedocs.io/en/latest/git-cola.html#apply-patches

[2] https://git-cola.readthedocs.io/en/latest/git-cola.html#custom-gui-actions

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: How can we decrease the cognitive overhead for contributors?

2023-08-29 Thread Giovanni Biscuolo
Hi

MSavoritias  writes:

[...]

>> Not needing to register yet another account for one-off contributions
>> is an argument that kills all the forges, sadly :)
>>
> With Sourcehut you can contribute without an account.

Because they use an email based patch send/review workflow :-)

In other words: you can contribute to a SourceHut hosted project with
patches (or just with comments to patches) because everyone can send an
email to an email address provided by the project maintainer(s).

> There is also https://forgefed.org/ which is for federated forges using
> activitypub. So you can have one account for all forges that
> federate. :D

Interesting project; for now the implementations are:

--8<---cut here---start->8---

- Vervis is the reference implementation of ForgeFed. It serves as a demo 
platform for testing the protocol and new features.

- Forgejo is implementing federation.

- Pagure has an unmaintained ForgeFed plugin.

--8<---cut here---end--->8---
(via https://forgefed.org/)

Funny thing is that on the main Vervis instance
(https://vervis.peers.community/browse) they say: «NOTE: Federation is
disabled on this instance!»

Let's say this federation project is still in early alpha... and probably
we will never see an implementation by GitHub: WDYT?!? :-O

Anyway, since the SourceHut patch management medium is email, it's
federated by default.

Happy hacking! Gio'

[...]

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: How can we decrease the cognitive overhead for contributors?

2023-08-29 Thread Giovanni Biscuolo
[I sent this message incomplete by mistake, sorry!]

Ciao Giacomo,

I have never used nor installed/administered SourceHut services; I have
some comments from what I learned reading documentation and articles.

Executive summary: it's "just" and **interface** problem; contributors
who like the SourceHut web UI to format and send patchsets via email are
very wellcome to subscribe and use the SourceHut services to send
patches (also) to Guix [1].  There is no reason for Guix to be
(self)hosted on SourceHut.

Also, like there are manu TUI for Git, there are also many Git GUIs
around https://git-scm.com/downloads/guis, maybe some of them also have
a "patch preparation UI" like the Git SourceHut web service.

paul  writes:

> From reading this discussion it appears sourcehut supporting both the
> web and email workflow and being free software is really the best
> solution.

SourceHut is a suite including these tools providing different services
(some of them already covered by specialized tools in Guix, e.g. CI):

- builds.sr.ht
- git.sr.ht
- hg.sr.ht
- hub.sr.ht
- lists.sr.ht
- man.sr.ht
- meta.sr.ht
- pages.sr.ht
- paste.sr.ht
- todo.sr.ht

git.sr.ht has a "web-based patch preparation UI"; it's
documented [1]: that tool is "just" a web interface to prepare a
patchset to be sent upstream via email; please read the whole section and
you'll see that it describes an email based patch management workflow.

Also, in the "Tutorials" page I find the section "Contributing to
projects on SourceHut" [2], and the "Read more" link points to
https://git-send-email.io/

Finally, yesterday I sent a message (id:87il8z9yw8@xelera.eu) with
some pointers to related articles, two were from Drew DeVault, SourceHut
founder

Given what I found above, I'd say that the workflow model used by
SourceHut hosted projects is email based, not web based, and that the
service "git.sr.ht" provides SourceHut users "just" a web UI helping
them to format and send a patchset via email in the same way "git
format-patch" and "git send-email" do.

...it's "just" an _interface_ question, not an email vs web patch
management question :-)

That said, I understand that some (many?) users are not comfortable with
CLI interfaces and prefer a GUI (a web UI is the same), but
there is no reason to increase the reviewers' cognitive overhead by
introducing an inefficient web based patch management workflow just to
address a "simple" and unrelated interface problem.

> It appears to have no downsides (besides for the work required for
> packaging and provisioning the service)

First, let's start with packaging each SourceHut service in Guix: AFAIU
packaging in other distros is not in good shape, e.g.
- Debian
  
https://packages.debian.org/search?keywords=sourcehut&searchon=names&suite=all§ion=all
  https://wiki.debian.org/Teams/pkg-sourcehut

- Arch Linux
  https://archlinux.org/packages/?sort=&q=sourcehut&maintainer=&flagged=

- Fedora
  https://packages.fedoraproject.org/search?query=sourcehut

Sure, they have https://man.sr.ht/packages.md for Alpine Linux, Arch
Linux and Debian, but:

--8<---cut here---start->8---

Warning: SourceHut is still in alpha, and has no stable releases. As such, we 
do not recommend packaging SourceHut for your upstream distribution 
repositories until we have shipped stable versions of our software.

--8<---cut here---end--->8---
(from https://man.sr.ht/packages.md)

So we have to wait for SourceHut to exit its alpha status to start
packaging it, no?

> and everything else either does not support email workflow or does not
> support web workflow.

I insist: SourceHut does _not_ support a "web workflow" (fork, PR and
merge all done via a web UI)

> What are the blockers in Guix's policies for moving in this direction?

Guix is a GNU project; GNU has an infrastructure, a set of services and
a policy for its projects.  Maybe one day GNU will provide a self-hosted
SourceHut service; for now GNU has https://savannah.gnu.org/ and
https://debbugs.gnu.org/

...and please remember: "all forges and all issue trackers suck, some
just suck less" (stolen from the mutt motto)

> Should a team with a defined roadmap be created to address Katherine's 
> and all other people's point, either way the consensus will fall?

First it would be useful to separate each /different/ question raised
into dedicated, actionable threads, possibly leading to a patch or at
least to a consensus... or at least to some sort of clarification.

I hope I helped to clarify the "email vs web patch management" question
with this message.

Ciao! Gio'


[1] https://man.sr.ht/git.sr.ht/#sending-patches-upstream

[2] https://man.sr.ht/tutorials/#contributing-to-srht-projects

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: How can we decrease the cognitive overhead for contributors?

2023-08-29 Thread Giovanni Biscuolo
Ciao Giacomo,

I have never used nor installed/administered sourcehut services; I have
some questions about them:

paul  writes:

> From reading this discussion it appears sourcehut supporting both the
> web and email workflow and being free software is really the best
> solution.

sourcehut is a suite including these tools providing different services
(some of them already covered by specialized tools in Guix, e.g. CI):

- builds.sr.ht
- git.sr.ht
- hg.sr.ht
- hub.sr.ht
- lists.sr.ht
- man.sr.ht
- meta.sr.ht
- pages.sr.ht
- paste.sr.ht
- todo.sr.ht

git.sr.ht is the tool needed for "web workflow" patch management, it's
documented here: https://man.sr.ht/git.sr.ht/#sending-patches-upstream


git.sr.ht provides a web-based patch preparation UI, which you can use to 
prepare changes to send upstream online. You can even use this to prepare 
patches for projects that use email for submission, but are not hosted on 
SourceHut. This tool may be helpful to users who are used to the "pull request" 
style contribution popularized by GitHub, GitLab, and others.


yesterday I sent a message (id:87il8z9yw8@xelera.eu) with some
pointers to related articles, two were from Drew DeVault, sourcehut
founder

in the "Tutorials" page I find the section "Contributing to projects on
SourceHut" [1] and the "Read more" link points to
https://git-send-email.io/




> It appears to have no downsides (besides for the work
> required for packaging and provisioning the service) and everything
> else either does not support email workflow or does not support web
> workflow.
>
> What are the blockers in Guix's policies for moving in this direction? 
> Should a team with a defined roadmap be created to address Katherine's 
> and all other people's point, either way the consensus will fall?
>
> giacomo
>

[1] https://man.sr.ht/tutorials/#contributing-to-srht-projects

-- 
Giovanni Biscuolo

Xelera IT Infrastructures



Re: How can we decrease the cognitive overhead for contributors?

2023-08-28 Thread Giovanni Biscuolo
u M-x debbugs-gnu RET guix-patches RET [then answer prompts]
>> 2. M-x cd RET ~/src/guix or wherever is your guix checkout
>> 3. Select series you want to apply
>> 4. Sort by subject
>
> Also can first read on issues (mumi), find a issue ID,
> then M-x gnus-read-ephemeral-emacs-bug-group ID.

[...]

> Don't know the 'C-u 10 |' one, cool, thank you!

I feel like a section in the Cookbook about email based workflows would
be useful to many, even one with just excerpts from this thread.
Happy hacking! Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: How can we decrease the cognitive overhead for contributors?

2023-08-28 Thread Giovanni Biscuolo
Liliana Marie Prikler  writes:

> Am Freitag, dem 25.08.2023 um 08:07 + schrieb Attila Lendvai:
>> i couldn't even find out which tools are used by those who are
>> comfortable with the email based workflow. i looked around once, even
>> in the manual, but maybe i should look again.
> Users who have tried curlbash also looked at
>   wget https://issues.guix.gnu.org/issue/N/patch-set/M | git am -3

Is this documented somewhere plz?

I tried https://issues.guix.gnu.org/issue/65428/patch-set/1 but I get a
blank page: any example plz?
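(Aside: as written, that one-liner cannot work anyway, since wget saves
to a file instead of writing to the pipe; I guess the intended
incantation is something like this, with N and M being the issue and
patch-set numbers:)

```shell
wget -qO- "https://issues.guix.gnu.org/issue/N/patch-set/M" | git am -3
# or equivalently:
curl -fsSL "https://issues.guix.gnu.org/issue/N/patch-set/M" | git am -3
```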

Thanks! Gio'

[...]

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: bug#65391: People need to report failing builds even though we have ci.guix.gnu.org for that

2023-08-27 Thread Giovanni Biscuolo
Bruno Victal  writes:

> On 2023-08-27 02:13, 宋文武 wrote:
>> Maybe we can automatically report the failures as bugs, say every 7
>> days, and remove a package if it still fail to build in 90 days?

maybe preceded by an automated email notification (to guix-bugs) so
that interested people have the chance to step in and fix it?

> I'm not so sure about removing packages, personally if I'm in need of
> a package that happens to be broken I find it easier to fix it given
> that some work has already been put into writing the package definition
> than starting from scratch.

You don't need to start from scratch: you just have to
check out the right git commit (before the package was deleted) and
start from that, if needed.  WDYT?
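For instance (a sketch with a hypothetical file name), the deletion
commit can be found and the old definition restored like this:

```shell
# Self-contained demo: a repository where a package file is added
# and later removed.
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
git config user.email dev@example.org && git config user.name Dev
echo '(define-public foo ...)' > foo.scm
git add foo.scm && git commit -qm 'gnu: Add foo.'
git rm -q foo.scm && git commit -qm 'gnu: Remove foo.'

# Locate the commit that deleted the file...
git log --oneline --diff-filter=D -- foo.scm

# ...and restore the file as it was just before that commit.
commit=$(git log --format=%H --diff-filter=D -n1 -- foo.scm)
git checkout "$commit^" -- foo.scm
```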

Happy hacking! Gio'

[...]

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




gnu: Add gmt.

2023-07-04 Thread Giovanni Biscuolo
Hello Ricardo,

checking commits done recently [1] I see you pushed ac86174e22, but I
cannot find the related patch sent to guix-patches: have I missed the
relevant messages?

Thanks! Gio'

[1] I use git log to have a quick overview of latest changes and additions


P.S. Generic Mapping Tools is a great package, kudos!

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: FSDG issues of SCUMMVM-based games

2023-06-21 Thread Giovanni Biscuolo
Liliana Marie Prikler  writes:

[...]

> Note, that this discussion started IIRC a year ago and we have
> practically known about actually existing FSDG violations since then. 
> My approach here is quite simple and pragmatic: Remove the games which
> obviously violate the FSDG (that is all the games currently depending
> on ScummVM as far as I know)

I totally agree with this simple and pragmatic solution, but the FSDG
violation is not that the games are distributed with a non-free license
(IMHO the licenses of some or all games have the same legal effect as the
SIL Open Font License v1.1, which is considered free [1])

AFAIU the violation comes from the absence of the source code (or just
the build tools?) to compile the game to ScummVM bytecode (other issues
with the way games are compiled and distributed can be patched, AFAIU)

IMO when removing the games it is very important to mention the
motivation, since it will be useful for potentially similar use cases.

> but keep ScummVM for now to allow folks to experiment.  If in some one
> to five years we still find no practical way of using ScummVM with
> only free software, that might be a reason to remove it then.

Yes, keeping ScummVM is not a clear FSDG violation AFAIU

Thanks all for the heads up!

[1] albeit very poorly worded: 
https://www.gnu.org/licenses/license-list.html#SILOFL

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: Rebasing or merging? [was: Re: 01/03: gnu: wxwidgets: Add libxtst to inputs.]

2023-06-20 Thread Giovanni Biscuolo
Hi Maxim,

Maxim Cournoyer  writes:

> As discussed previously in this thread, a good policy would be to
> suggest avoid *both* rebases and merges during a feature branch
> development.  This way we avoid both problems,

I read the whole thread and AFAIU the (only?) problem with the "merging
master to feature branch" workflow is the one pointed out by Andreas [1]:

--8<---cut here---start->8---

Well, we used to repeatedly merge the master branch to core-updates,
which if I remember well makes the master commits end up first in "git
log". So the core-updates specific commits gradually disappear below
thousands of master commits. So this is a problem.

--8<---cut here---end--->8---

So, if I'm not wrong, the only problem is with "git log" not clearly
showing the commits that are specific to the feature branch: are we sure
there is no option that can help feature branch reviewers focus on the
specific commits?

Isn't "git log --no-merges master..branchname" supposed to do what we
need?  Or "git log --first-parent"? (not tested)
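A quick test in a throwaway repository suggests the first form does
filter out the merged-in master history (take it as a sketch, not a
full answer):

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
git config user.email dev@example.org && git config user.name Dev
echo a > f && git add f && git commit -qm 'master: base'
git branch -M master
git checkout -qb feature
echo b > g && git add g && git commit -qm 'feature: work'
git checkout -q master
echo c > h && git add h && git commit -qm 'master: more'
git checkout -q feature
git merge -q -m 'Merge branch master into feature' master

# Shows only the branch-specific commit; the merged-in master
# commits and the merge commit itself are filtered out.
git log --oneline --no-merges master..feature
```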

> and if the branch is short lived, it should be bearable that is isn't
> synced with master for its short lifetime.

What counts as short lived in the Guix context?  5 days, 3 weeks?

Anyway, I'm not sure that the branches planned for Guix (i.e. those
listed on https://qa.guix.gnu.org/) will be short lived; I guess some
could be long lived (months instead of weeks).

WDYT?

Ciao, Gio'


[1] id:ZIcZ9tUidrWOfgyN@jurong

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: Rebasing or merging? [was: Re: 01/03: gnu: wxwidgets: Add libxtst to inputs.]

2023-06-20 Thread Giovanni Biscuolo
Hello,

please consider I am (was?) a /great/ fan of rebase, but I have to admit
that "the golden rule" [1] of rebasing makes sense: «never rebase on a
public branch.»

Leo Famulari  writes:

> On Sun, Jun 11, 2023 at 08:47:54PM -0400, Maxim Cournoyer wrote:
>> I'm not sure how that'd work, since Git only allows a single PGP
>> signature per commit, as far as I can tell.  When you rewrite the
>> history (by using rebase, say), the existing signatures of the rewritten
>> (rebased) commits are replaced with new ones generated from your key.
>
> Is it so bad to re-sign commits on feature branches that we should lose
> the easy-to-read history of rebased branches?

IMHO this is not a problem, at all.

> To me, it's much easier to understand and review a branch that has been
> updated by rebasing rather than merging. I think that counts for a lot.
> Do many people feel the same way?

Me! ...if you mean "it's much easier to understand the history" I agree,
but all in all this is "just" a "view" problem that should be solved (if
not already solved) by a proper git log "filter".

Conversely, when rebasing, the review process might be (sometimes very)
problematic; this is an excerpt from
«Why you should stop using Git rebase»
https://medium.com/@fredrikmorken/why-you-should-stop-using-git-rebase-5552bee4fed1

--8<---cut here---start->8---

Why do we use Git at all? Because it is our most important tool for
tracking down the source of bugs in our code. Git is our safety net. By
rebasing, we give this less priority, in favour of the desire to achieve
a linear history.

A while back, I had to bisect through several hundred commits to track
down a bug in our system. The faulty commit was located in the middle of
a long chain of commits that didn’t compile, due to a faulty rebase a
colleague had performed. This unneccessary and totally avoidable error
resulted in me spending nearly a day extra in tracking down the commit.

[...] Git merge. It’s a simple, one-step process, where all conflicts
are resolved in a single commit. The resulting merge commit clearly
marks the integration point between our branches, and our history
depicts what actually happened, and when it happened.

The importance of keeping your history true should not be
underestimated. By rebasing, you are lying to yourself and to your
team. You pretend that the commits were written today, when they were in
fact written yesterday, based on another commit. You’ve taken the
commits out of their original context, disguising what actually
happened. Can you be sure that the code builds? Can you be sure that the
commit messages still make sense? You may believe that you are cleaning
up and clarifying your history, but the result may very well be the
opposite.

--8<---cut here---end--->8---

Also, when I read the article mentioned below I had many doubts that
rebase should ever be used in Guix public feature branches:

«Rebase Considered Harmful»
https://fossil-scm.org/home/doc/trunk/www/rebaseharm.md

--8<---cut here---start->8---

2.1 A rebase is just a merge with historical references omitted

[...] So, another way of thinking about rebase is that it is a kind of merge
that intentionally forgets some details in order to not overwhelm the
weak history display mechanisms available in Git. Wouldn't it be better,
less error-prone, and easier on users to enhance the history display
mechanisms in Git so that rebasing for a clean, linear history became
unnecessary? [...]

2.2 Rebase does not actually provide better feature-branch diffs

[...] The argument from rebase advocates is that with merge it is
difficult to see only the changes associated with the feature branch
without the commingled mainline changes. In other words, diff(C2,C7)
shows changes from both the feature branch and from the mainline,
whereas in the rebase case diff(C6,C5') shows only the feature branch
changes.

But that argument is comparing apples to oranges, since the two diffs do
not have the same baseline. The correct way to see only the feature
branch changes in the merge case is not diff(C2,C7) but rather
diff(C6,C7). [...]

(n.d.r. see graphs on original page for clearness)

--8<---cut here---end--->8---
(IMHO the whole article deserves to be read)

[...]

WDYT?

Happy hacking! Gio'


[1] 
https://www.atlassian.com/git/tutorials/merging-vs-rebasing#the-golden-rule-of-rebasing


P.S.: and yes, maybe Fossil is better designed than Git, but I'm not
proposing switching to it, not at all :-)

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: stateful caches (was Re: OBS Studio memory leak)

2023-06-15 Thread Giovanni Biscuolo
Hi!

Guillaume Le Vaillant  writes:

[...]

> I used gdb on versions of mesa and vlc with debug symbols:
>
> --8<---cut here---start->8---
> guix build --with-debug-info=mesa --with-debug-info=vlc vlc
>
> gdb /gnu/store/...-vlc-3.0.18/bin/.vlc-real
> (gdb) run some-video.mkv
> --8<---cut here---end--->8---
>
> Then I sent a SIGSTOP signal to the vlc process, and in gdb I looked at
> the backtrace of all the threads of vlc.

got it, thanks!

[...]

>> do you think this bug (is it a bug, right?) needs to be reported
>> upstream?
>
> I guess it would be better if the code reading the shader cache was more
> robust when reading possibly incompatible or corrupted data. However
> I have not tried more recent versions of mesa, maybe they are better at
> it...
>
> And it seems that Maxim has already reported the issue upstream,
> see <https://issues.guix.gnu.org/63197>

oh I missed it: I'll make my comments in that issue then, thanks!

> and <https://gitlab.freedesktop.org/mesa/mesa/-/issues/8937>

I see

Happy hacking. Gio'


-- 
Giovanni Biscuolo

Xelera IT Infrastructures




stateful caches (was Re: OBS Studio memory leak)

2023-06-15 Thread Giovanni Biscuolo
Hi Guillaume Le Vaillant and Guix Devels,

sorry for cross-posting but IMHO the workaround you found [1] for the memory
leak affecting a number of media processing applications is of interest
for many people potentially not subscribed to help-guix

AFAIK this was not filed as a Guix bug

Guillaume Le Vaillant  writes:

> Ott Joon  skribis:
>
>> Hey
>>
>> Tried the same thing in VLC and it freezes on GPU accel and starts
>> leaking memory while also becoming hard to kill.  Maybe this also
>> explains why some mpv GPU accel settings don't work also in the exact
>> same way.  I have an AMD RX 6900 XT on this machine.

[...]

> It looks like an issue with the shader cache of mesa.
> After clearing it, I don't see the memory leak anymore.

good catch: can you please tell us how you managed to spot that problem?
Did you strace it or did you find a related mesa bug report?

do you think this bug (it is a bug, right?) needs to be reported
upstream?

I'm asking this because I "feel" we (I mean Guix users) could do
something to help upstream remove this "state mismanagement"

> Could you try doing a "rm -r $HOME/.cache/mesa_shader_cache/*" and see
> if it also solves the issue for you?

AFAIU this is "just" another instance of the "mismanaged state" error
class, like the one(s) discussed back in Oct 2019 [2], and probably
periodically recurring since the beginning of some (many) of the
upstream applications' lifecycles.

Back then, Efraim Flashner was using this snippet [2] in his OS-config:

--8<---cut here---start->8---

;; This directory shouldn't exist
(file-system
  (device "none")
  (mount-point "/var/cache/fontconfig")
  (type "tmpfs")
  (flags '(read-only))
  (check? #f))

--8<---cut here---end--->8---

It seems that a similar snippet could also be useful for all
"~/.cache/*" :-O
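To make the idea concrete, here is an untested sketch of what such a
snippet could look like for the mesa cache specifically; the mount point
(and the user name in it) is an illustrative assumption, and of course
this only hides the symptom rather than fixing the upstream state
management:

--8<---cut here---start->8---

;; Untested sketch: make mesa's shader cache volatile by mounting a
;; tmpfs over it, so it is discarded at every boot.  The per-user
;; path is an illustrative assumption.
(file-system
  (device "none")
  (mount-point "/home/alice/.cache/mesa_shader_cache")
  (type "tmpfs")
  (check? #f))

--8<---cut here---end--->8---

Note that, unlike Efraim's read-only variant for fontconfig, this one
stays writable: the application can rebuild its cache during a session,
but the state does not persist across reboots.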

Happy hacking! Gio'



[1] message id:87y1kozvny@robbyzambito.me

[2] message id:20191018073501.GB1224@E5400

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: pam_ssh_agent_auth on a Guix System?

2023-05-31 Thread Giovanni Biscuolo
Hi Felix,

Felix Lechner  writes:

[...]

>> I'd like to execute sudo without having to set and enter a password [1]
>> and that PAM module is needed

well, the above description is misleading :-(

> You could also add a line like this to your /etc/sudoers (but I don't
> recommend it)
>
> user_name ALL=(ALL) NOPASSWD:ALL

actually I don't want to disable authentication, I'd like to:

--8<---cut here---start->8---

permit anyone who has an SSH_AUTH_SOCK that manages the private key
matching a public key in /etc/security/authorized_keys to execute sudo
without having to enter a password. Note that the ssh-agent listening to
SSH_AUTH_SOCK can either be local, or forwarded.

Unlike NOPASSWD, this still requires an authentication, it's just that
the authentication is provided by ssh-agent, and not password entry.

--8<---cut here---end--->8---
(from https://pamsshagentauth.sourceforge.net/)
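For reference, on a traditional distro the setup described above boils
down to two lines, per the pam_ssh_agent_auth documentation (a sketch,
untested; on a Guix System both files are generated from the OS
configuration, so a service would have to produce the equivalent):

--8<---cut here---start->8---

# /etc/pam.d/sudo — try agent-based authentication before a password
auth       sufficient   pam_ssh_agent_auth.so file=/etc/security/authorized_keys

# /etc/sudoers — sudo must keep the agent socket in the environment
Defaults   env_keep += "SSH_AUTH_SOCK"

--8<---cut here---end--->8---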

>> is someone already using such a configuration in a Guix System?
>
> Not quite. I added my public ssh key to root's authorized_keys. It's
> different from what you are looking for but gives you a root prompt
> with 'ssh root@localhost`.

mumble... I wonder if this works with a forwarded ssh-agent (meaning
that you don't need your private ssh key on the remote host to do that
ssh)

> I did it because it's required for 'guix deploy'.
>
> Personally, I have not used the SSH agent, but it's an interesting
> avenue. I use Kerberos instead, which is probably the gold standard
> for distributed authentication. You are doing the right thing by
> thinking about your options.

I have never used Kerberos (I should learn it) but, if possible, I'd
like to avoid installing and configuring extra services; ssh is
ubiquitous, and installing and configuring an ssh-agent on the client is
/maybe/ easier than a Kerberos client

[...]

Thanks! Gio'

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




pam_ssh_agent_auth on a Guix System?

2023-05-30 Thread Giovanni Biscuolo
Hello,

AFAIU pam_ssh_agent_auth https://pamsshagentauth.sourceforge.net/ is
not yet packaged in Guix, or am I missing something?

I'd like to execute sudo without having to set and enter a password [1]
and that PAM module is needed

...then we'd also need a service to properly set up /etc/pam.d/sudo and
/etc/sudoers

is someone already using such a configuration in a Guix System?

Thanks, Gio'


[1] is it safer or more efficient to have user authentication without a
password, only with an SSH key?

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: nudging patches

2023-05-17 Thread Giovanni Biscuolo
Hello Remco,

sorry for cross posting to guix-devel but I think this is more a devel
(committers needing help) discussion than a user (needing help) one :-)

Remco van 't Veer  writes:

> Hi,
>
> What's the preferred / politest way to draw attention to patches (and /
> or bugs) which seem to have been overlooked?

AFAIU, send an email ping to the patch/bug, possibly Cc'ing the related
team [1]

> And while I have your attention and you're wondering which patches I'd
> like to promote.. 😉
>
> - #62557 [guix-patches]
>   [PATCH] gnu: ruby-2.7-fixed: Upgrade to 2.7.8 [fixes CVE-2023-{28755, 
> 28756}]
> - #62558 [guix-patches]
>   [PATCH] gnu: ruby-3.0: Upgrade to 3.0.6 [fixes CVE-2023-{28755, 28756}].
> - #62559 [guix-patches]
>   [PATCH] gnu: ruby-3.1: Upgrade to 3.1.4 [fixes CVE-2023-{28755, 28756}].
> - #62561 [guix-patches]
>   [PATCH] gnu: ruby-3.2: Upgrade to 3.2.2 [fixes CVE-2023-{28755, 28756}].
>
> They still apply cleanly on master.

This is the current Ruby team:

id: ruby
name: Ruby team
description: 
scope: "gnu/packages/ruby.scm" "guix/build/ruby-build-system.scm" 
"guix/build-system/ruby.scm" "guix/import/gem.scm" 
"guix/scripts/import/gem.scm" "tests/gem.scm" 
members:
+ Christopher Baines 

> But seriously, what is the preferred way to do this?

HTH! Gio'

[1] https://guix.gnu.org/en/manual/devel/en/html_node/Teams.html#Teams

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




[job] Semantic CSS Developer to implement new visual identity of a Guix based company

2023-02-28 Thread Giovanni Biscuolo
Hello Guix developers,

[You can also find our announcement on the web: 
https://beta.softwareworkers.it/en/jobs/2023-css-developer/]

We build our Content Management System (CMS) as a static site generator
based on Emacs Org mode and Guix to manage all software dependencies.

We are looking for a CSS developer to implement the new visual identity,
to be applied on our sites [0]. The assignment will be governed by the
Open Contract for Agile Development [1].

Those wishing to apply are invited to check their willingness to adhere
to our CMS design principles [2].

Knowledge of Git distributed version control system, literate
programming, Emacs Org mode authoring system and LASS [3] or similar
Lisp/Scheme based CSS pre-processors are valued as a plus.

All code and documentation will be released with a free copyleft
license.

If interested, please write to j...@softwareworkers.it.

Happy hacking! Gio'

__
Footnotes:

[0] The versions under development are:
https://beta.softwareworkers.it/ (institutional website)
https://beta.meup.io/ (operational portal)
https://doc.meup.io/ (documentation)

[1] 
https://gitlab.com/softwareworkers/swws/-/blob/develop/documentation/source/doc.swws/en/legal/sprint-agreement/index.org

[2] https://doc.meup.io/colophon/#cms-design-principles

[3] https://shinmera.github.io/LASS/

-- 
Giovanni Biscuolo

Software Workers - IT Infrastructures




Re: Stratification of GNU Guix into Independent Channels

2023-01-26 Thread Giovanni Biscuolo
Hi,

just a quick comment

zimoun  writes:

[...]

> Moreover, many channels would be dependant from one to the other.

and this would be **a nightmare** to maintain (as already clearly stated
by others much more competent than me in Guix-things)

to recap: all the PROS that jgart mentioned in his original message do
not apply (or are "reversed" into CONS), and users would not benefit
from this kind of modularity in terms of speed (speed would likely be
much worse with more than 4 channels to pull, and 4 channels would be
too few for a complete system)

as far as I can understand from my journey in Guix, it has the **great**
advantage of /exposing/ the weaknesses of some (some?) upstream packages
not so... diligent with dependency hygiene, leading to the **big**
problem of dependency hell or dependency confusion, which sooner or
later /someone/ (often distro maintainers) has to solve *before*
including that package in the distribution (unlike what is happening
with PyPI, npm and the like: good luck, users!)

as Liliana said:

--8<---cut here---start->8---

 What does work is convincing upstreams to pull in less dependencies and
 drop the outdated ones, because that makes it so that eventually Guix
 has to ship less packages.

--8<---cut here---end--->8---

Unfortunately I have the impression that not so many upstream developers
are really aware of the problem, and when they are, they think it's
someone else's problem, not theirs.

Minimalism at (Guix) System level, one package at a time.

Happy hacking! Gio'

[...]

P.S.: the elevator pitch of all this thread could be: «yes, software
systems are way too /complicated/... but please don't blame Guix for it
(instead "git blame" and bless it!)» :-D

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




declarative containers (was Re: [EXT] Re: Enterprise Guix Hosting?)

2023-01-23 Thread Giovanni Biscuolo
Hello everybody,

(this is an old thread started on help-guix [1])

Ludovic Courtès  writes:

> "Thompson, David"  skribis:
>
>> On Wed, Aug 31, 2022 at 2:40 AM Ricardo Wurmus  wrote:
>>>
>>> Another thing that seems to be missing is a way to supervise and manage
>>> running containers.  I use a shepherd instance for this with
>>> container-specific actions like this:

[...]

>> Hey that's a real nice starting point for a container management tool!
>>  So maybe there should be a system service to manage containers and
>> then a 'docker compose'-like tool for declaratively specifying
>> containers and their network bridging configuration that is a client
>> of the service?
>
> Agreed!  We could turn Ricardo’s code into ‘container-guest-service’ or
> something and have ‘containerized-operating-system’ add it
> automatically.

has there been any progress with this service, please?

once done, would it be possible to declaratively start a whole network
of containers using a dedicated home service, or
containerized-operating-systems (also on foreign distros)?

right now, with "guix system container", we can imperatively manage
containers (start/stop, connect to the console with nsenter) and connect
them to the network [2]; Ricardo showed us how he does it
programmatically, and having a declarative interface (os records) would
be awesome!

I'm very interested and willing to test it, if needed

thanks! Gio'


[1] id:878rn4syql@elephly.net

[2] thank you Ricardo for the cookbook section!
https://guix.gnu.org/en/cookbook/en/guix-cookbook.html#Guix-System-Containers

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: issue tracking in git

2022-11-23 Thread Giovanni Biscuolo
Pjotr Prins  writes:

[...]

> We also have ci and cd based on our own system containers in Guix.
>
> => https://ci.genenetwork.org/
>
> It would be good to write a Guix blog about all this.

Yes: «How to leave useless forges and live happy» :-D

>> Please Arun is there a devel mailing list dedicated to tissue so we can
>> discuss details of the project?
>
> Sounds like an idea. Though in the spirit of tissue we might as well
> set up a repo.

Maybe a mailing list is less intimidating for users (and MUAs have nicer
interfaces for this kind of communication workflow [1]), but I'll
follow the discussion "anywhere" you decide to go :-)

>> I'm not a Guile developer but I would like to help with testing and (the
>> lack of) documentation, if I can.
>> 
>> I'd also like to understand and possibly discuss the overall
>> architecture design of tissue, in particular compared to git-issue
>> internals [1]
>
> I did not know of that project, but it looks similar in concept. With
> gemini support you get some other interesting features. And then Arun
> has added powerful search.

yes, the xapian index is what I like most about tissue... but I'd prefer
to discuss such things in a proper official channel

> Also, when you see Jonathan's E-mail, we are not done building on
> this.

I'm just dreaming of you and Jonathan merging your projects for world
domination in knowledge and workflow management! B-)

>> Last but not least: what about to have tissue packaged [2] in Guix? :-D
>
> It is about time - and a package and system definition exists in a
> channel.

yes, please :-)

Thanks! Gio'


[1] and notmuch/xapian rules!

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: issue tracking in git

2022-11-23 Thread Giovanni Biscuolo
Hello Jonathan,

nice to read you!

I saw your «L'Union Qiuy Fait La Force» presentation at Ten Years of
Guix [1] and I have to admit I still have to "digest" it, because I
still do not understand the overall architecture (see below).

indieterminacy  writes:

[...]

> FWIW, I've been working on Gemini and issue trackers in parallel to 
> Genenetwork.
>
> Arun did such a great job with a minimalist setup that I thought it more 
> proper to create a bigger ladder (given the reach of all the domains 
> which Tissue provides).
>
> I have two main strands:
>
> Firstly, I have been combining Gemtext's terse syntax with that of the 
> Emacs-Hyperbole format, Koutliner, as well as the 
> "recursive-modelling-language" I have been developing Qiuy.
>
> https://git.sr.ht/~indieterminacy/1q20hqh_oqo_parsing_qiuynonical/
>
> As a consequence, it has grown into something different and more 
> complex. I need to trim this, especially as the results of some sprints 
> but once I refactor it it shall be a lot more solid.
>
> Secondly, I have been returning to Gemtext from the perspective of Git 
> diffs, with the idea to generate RDF perspectives one each revision per 
> file and then use RDF calls to resolve more complex queries.

RDF representations of diffs (commits?), so we can combine this
knowledge with other knowledge (represented in RDF), are AFAIU great:
(open) linked data for knowledge management

IMHO RDF is still a little bit underestimated :-D

> https://git.sr.ht/~indieterminacy/1q20twt_oq_parsing-commits_txr
>
> I shall be folding the logic of the first tool into the second 
> (carefully). I need a bit more time to do this to be fully satisfied.

what about gNife?

https://git.sr.ht/~indieterminacy/5q50jq_oq_configuring_emacs
--8<---cut here---start->8---

gNife, an Emacs environment for high throughput issue-tracking and
knowledge-management - utilising GemText, Koutliner and Qiuy

--8<---cut here---end--->8---

is it still relevant or do you plan to substitute it with the tools
listed above?

> There are some other tools floating around my forge (concerning hash 
> trees for different interpreters and rdf from the perspective of project 
> management), its mainly in TXR, Gawk and eLisp (though I will be doing 
> more with respect to Guile for these areas over time).

Looking at the Icebreaker project descriptions:

1. https://nlnet.nl/project/Icebreaker/

2. https://portal.mozz.us/gemini/icebreaker.space

I can understand the design principles of the tools you are developing
and I'm really impressed with the completeness of this approach to
knowledge management; unfortunately I'm missing the overall architecture
and some important details that would allow me to completely understand
how to use (or, one day, try to contribute to) these tools: do you plan
to add some more documentation soon?

Happy hacking! Gio'

>
> Kind regards,
>
>
> -- 
> Jonathan McHugh
> indieterminacy@libre.brussels


[1] https://10years.guix.gnu.org/video/l-union-qiuy-fait-la-force/

-- 
Giovanni Biscuolo

Xelera IT Infrastructures



