Re: Debian Monthly [debian-devel]: AI News Report 2024/10

2024-11-09 Thread Mo Zhou

The LLM I used to produce that news report was gpt-4o-mini, from
OpenAI. ChatGPT is the name of OpenAI's LLM web interface, and the
underlying model name can change. The bulk API calls took roughly
3 minutes.

That said, I have implemented support for basically all commonly
seen LLM inference services:

(4 commercial)
  openai, anthropic, google, xai
(4 self-hosted)
  llamafile, ollama, vllm, zmq (built-in, but somewhat outdated)

Services missing from that list are also supported, as long as they
provide an OpenAI-compatible API.
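
For instance, a self-hosted ollama instance exposes such an endpoint.
A minimal sketch, assuming ollama's default port 11434 and a locally
pulled model named llama3 (both are illustrative):

  curl http://localhost:11434/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "llama3",
         "messages": [{"role": "user", "content": "Summarize this thread."}]}'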

For a use case like summarizing a mailing list, a self-hosted service
will answer the bulk API calls much more slowly unless it is hosted on
a GPU cluster :-)

Small LLMs are not necessarily smart enough. The Open LLM
Leaderboard[3] is a good reference for figuring out the best
open-access LLM for self-hosting.

In terms of "Debian hosted computer with AMD GPU for LLM inference" --
that is exactly one of the long-term goals of the Debian Deep Learning
Team (debian-ai@l.d.o). Team members are working to prepare the ROCm
packages and the ROCm build of PyTorch.

I find ollama[1] and llamafile[2] quite handy for local use with a
spare GPU, if you do not mind using software from outside the Debian
archive.

[1] https://github.com/ollama/ollama
[2] https://github.com/Mozilla-Ocho/llamafile
[3] https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard


On 11/9/24 05:19, PICCA Frederic-Emmanuel wrote:

Is it via ChatGPT or a self-hosted LLM?

Can we imagine having a Debian-hosted computer with an AMD GPU dedicated to
this use case?

We should provide these summary letters for most of our mailing lists :)

cheers

Fred

- On 9 Nov 24, at 14:09, Hector Oron zu...@debian.org wrote:


Hello Lumin,

On Sat, 9 Nov 2024 at 10:27, DebGPT () wrote:

This is an experiment: letting an LLM go through all 369 emails from
debian-devel in October. The command for producing the news report
is included below. Use debgpt's git HEAD if you want to try it.

This is the first time I have seen this kind of email. I thought a
while ago that this would be a really cool use of AI, producing
summaries of mailing lists, since I struggle to read everything.

I just want to thank you for putting this together and, at least from
my side, this is very much appreciated.

Regards
--
  Héctor Orón  -.. . -... .. .- -.   -.. . ...- . .-.. --- .--. . .-.




Re: Debian Monthly [debian-devel]: AI News Report 2024/10

2024-11-09 Thread Jeremy Stanley
On 2024-11-09 14:19:53 +0100 (+0100), PICCA Frederic-Emmanuel wrote:
> is it via ChatGPT or an llm self hosted ?
[...]

It's DebGPT: https://salsa.debian.org/deeplearning-team/debgpt
-- 
Jeremy Stanley




Re: Debian Monthly [debian-devel]: AI News Report 2024/10

2024-11-09 Thread PICCA Frederic-Emmanuel
Is it via ChatGPT or a self-hosted LLM?

Can we imagine having a Debian-hosted computer with an AMD GPU dedicated to
this use case?

We should provide these summary letters for most of our mailing lists :)

cheers

Fred

- On 9 Nov 24, at 14:09, Hector Oron zu...@debian.org wrote:

> Hello Lumin,
> 
> On Sat, 9 Nov 2024 at 10:27, DebGPT () wrote:
>>
>> This is an experiment: letting an LLM go through all 369 emails from
>> debian-devel in October. The command for producing the news report
>> is included below. Use debgpt's git HEAD if you want to try it.
> 
> This is the first time I have seen this kind of email. I thought a
> while ago that this would be a really cool use of AI, producing
> summaries of mailing lists, since I struggle to read everything.
> 
> I just want to thank you for putting this together and, at least from
> my side, this is very much appreciated.
> 
> Regards
> --
>  Héctor Orón  -.. . -... .. .- -.   -.. . ...- . .-.. --- .--. . .-.



Re: Debian Monthly [debian-devel]: AI News Report 2024/10

2024-11-09 Thread Hector Oron
Hello Lumin,

On Sat, 9 Nov 2024 at 10:27, DebGPT () wrote:
>
> This is an experiment: letting an LLM go through all 369 emails from
> debian-devel in October. The command for producing the news report
> is included below. Use debgpt's git HEAD if you want to try it.

This is the first time I have seen this kind of email. I thought a
while ago that this would be a really cool use of AI, producing
summaries of mailing lists, since I struggle to read everything.

I just want to thank you for putting this together and, at least from
my side, this is very much appreciated.

Regards
-- 
 Héctor Orón  -.. . -... .. .- -.   -.. . ...- . .-.. --- .--. . .-.



Re: Debian packaging for git-credential-libsecret

2024-11-09 Thread Chris Hofstaedtler
* M Hickford  [241109 12:45]:
> On Mon, 1 Apr 2024 at 21:42, M Hickford  wrote:
> >
> > Hi. It'd be great to package Git credential helper
> > git-credential-libsecret in Debian. There's a patch prepared, but it
> > needs the attention of a Debian developer. Is anyone here able to
> > help?  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=878599
> 
> Hi. Is any Debian developer able to please look at the patch to package
> the Git credential helper git-credential-libsecret?

I think Debian is mostly waiting for Jonathan to show up again, and
we won't disturb his circles more than necessary in the meantime.

Chris



Re: Debian packaging for git-credential-libsecret

2024-11-09 Thread M Hickford
On Mon, 1 Apr 2024 at 21:42, M Hickford  wrote:
>
> Hi. It'd be great to package Git credential helper
> git-credential-libsecret in Debian. There's a patch prepared, but it
> needs the attention of a Debian developer. Is anyone here able to
> help?  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=878599

Hi. Is any Debian developer able to please look at the patch to package
the Git credential helper git-credential-libsecret?

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=878599



Re: FQDN mandatory or can a machine could not have a domain ?

2024-11-08 Thread Marco d'Itri
On Nov 08, Bastien Roucariès  wrote:

> Does it seem a reasonable assumption to use a domain for a host, even if it
> is localdomain or test?
> 
> Do you think it is a good idea to set the testbed hostname to an FQDN?
Please do. This has been a pain for the INN CI as well.
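
A minimal sketch of doing that on a testbed; the name
testbed.localdomain is illustrative:

  # set the static hostname to an FQDN (systemd systems)
  hostnamectl set-hostname testbed.localdomain
  # and make it resolvable via /etc/hosts, Debian-style:
  #   127.0.1.1  testbed.localdomain  testbed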

-- 
ciao,
Marco




Re: Why is knot migration blocked by done 1081191?

2024-11-08 Thread Chris Hofstaedtler
* Niels Thykier  [241108 14:12]:
> Jakub Ružička:
> > I've fixed #1081191 through changelog entry in knot/3.4.0-3 and it's
> > marked as Done but the bug still blocks knot migration for reasons I
> > don't understand:
> > https://qa.debian.org/excuses.php?package=knot
> 
> The problem is that the BTS thinks that knot/3.3.9-1 is still in unstable
> and is still affected[1]. As long as this is the case, the BTS will inform
> Britney that knot is still affected by the bug in unstable.
> 
> This can occur if other binaries are left over that have not been removed.

I imagine these binaries are the leftovers:

knot-module-dnstap| 3.3.9-1 | unstable   | armel
knot-module-dnstap-dbgsym | 3.3.9-1 | unstable-debug | armel
knot-module-geoip | 3.3.9-1 | unstable   | armel
knot-module-geoip-dbgsym  | 3.3.9-1 | unstable-debug | armel

and their removal will need to be requested by filing an RM bug.
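
A sketch of the conventional request, assuming the usual
ftp.debian.org pseudo-package and subject format (the exact tag and
wording are the submitter's call):

  reportbug ftp.debian.org
  # Subject: RM: knot [armel] -- NBS; obsolete 3.3.9-1 binaries on armel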

Chris



Re: Why is knot migration blocked by done 1081191?

2024-11-08 Thread Niels Thykier

Jakub Ružička:

Hello,

I've fixed #1081191 through a changelog entry in knot/3.4.0-3 and it's
marked as Done, but the bug still blocks knot's migration for reasons I
don't understand:

https://qa.debian.org/excuses.php?package=knot

What do I need to do to finally finish the migration?


Cheers,
Jakub Ružička


Hi Jakub

The problem is that the BTS thinks that knot/3.3.9-1 is still in 
unstable and is still affected[1]. As long as this is the case, the BTS 
will inform Britney that knot is still affected by the bug in unstable.


This can occur if other binaries are left over that have not been 
removed. Testing tends to "avoid" this trap because Britney is more 
aggressive in pruning left-over binaries when a package migrates to 
testing (historically, Britney did not allow out-of-date binaries in 
testing at all; now it allows them in some cases and prunes them out 
when possible).


As it is, "dak ls knot" does list knot/3.3.9-1 as a known version for 
sid (rmadison should show the same). It is unclear to me why, but once 
you resolve that, I think the rest should follow from there.


Best regards,
Niels

[1]: 
https://bugs.debian.org/cgi-bin/version.cgi?package=knot-exporter;found=knot%2F3.4.0-2;found=knot%2F3.3.9-1;collapse=1;absolute=0;fixed=knot%2F3.4.0-3;fixed=knot-exporter%2F3.4.0-3;info=1






Re: RFC: "Recommended bloat", and how to possibly fix ito

2024-11-08 Thread Marc Haber
On Fri, 8 Nov 2024 12:53:15 +0500, Andrey Rakhmatullin
 wrote:
>On Fri, Nov 08, 2024 at 08:20:46AM +0100, IOhannes m zmölnig wrote:
>> Am 8. November 2024 06:42:25 MEZ schrieb Marc Haber 
>> :
>> >Agreed! And I would also love the possibility to directly paste a
>> >package list from apt show's output into apt install without having to
>> >remove the commas.
>> >
>> 
>> 
>> This!
>
>apt satisfy

TIL. Thanks.

Greetings
Marc
-- 

Marc Haber |   " Questions are the | Mailadresse im Header
Rhein-Neckar, DE   | Beginning of Wisdom " | 
Nordisch by Nature | Lt. Worf, TNG "Rightful Heir" | Fon: *49 6224 1600402



Re: RFC: "Recommended bloat", and how to possibly fix ito

2024-11-07 Thread Andrey Rakhmatullin
On Fri, Nov 08, 2024 at 08:20:46AM +0100, IOhannes m zmölnig wrote:
> Am 8. November 2024 06:42:25 MEZ schrieb Marc Haber 
> :
> >Agreed! And I would also love the possibility to directly paste a
> >package list from apt show's output into apt install without having to
> >remove the commas.
> >
> 
> 
> This!

apt satisfy
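
A sketch of using it: apt satisfy (available in apt >= 2.0) accepts a
Depends-style string as-is, commas and version constraints included;
the package names here are illustrative:

  sudo apt satisfy 'libfoo (>= 1.0), libbar, libbaz'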

-- 
WBR, wRAR




Re: RFC: "Recommended bloat", and how to possibly fix ito

2024-11-07 Thread IOhannes m zmölnig
Am 8. November 2024 06:42:25 MEZ schrieb Marc Haber 
:
>Agreed! And I would also love the possibility to directly paste a
>package list from apt show's output into apt install without having to
>remove the commas.
>


This!


mfh.her.fsr
IOhannes



Re: RFC: "Recommended bloat", and how to possibly fix it

2024-11-07 Thread Aaron Rainbolt
On Thu, 7 Nov 2024 22:29:07 -0500
"Theodore Ts'o"  wrote:

> On Thu, Nov 07, 2024 at 12:08:22AM -0700, Soren Stoutner wrote:
> > On Wednesday, November 6, 2024 10:41:46 PM MST Aaron Rainbolt
> > wrote:  
> > > Again, this isn't a problem limited to a derivative distribution.
> > > I respect that your opinion of how Recommends should work differs
> > > from mine. That doesn't change the policy though, and it doesn't
> > > change that neglecting or changing the policy's rules in this
> > > area will cause and is causing problems to some of Debian's
> > > users. Ultimately I don't care exactly how those problems are
> > > solved, I just want to solve them.  
> > 
> > Let me try to explain what I see as the core of the problem.
> > 
> > First, some background on the three existing categories.
> > 
> > 1.  Depends:  These are the packages necessary to install and run
> > the basic functionality of the package.
> > 
> > 2.  Recommends:  These are the packages required to enable all the
> > features of the package.
> > 
> > 3.  Suggests:  These are packages that enhance the functionality of
> > the package.  
> 
> So if we have consensus that these definitions of the categories are
> correct, then there shouldn't be any controversy over whether gwenview is
> "wrong" in recommending kamera, which Aaron was objecting to.  After
> all, there is a button in gwenview that would activate kamera.  If
> kamera is not installed, then no matter how many times the user mashes
> the "kamera" button, Nothing Will Happen.  So clearly, that's a
> "feature" of gwenview, and kamera should _absolutely_ be a Recommends.
> No?
> 
> But let's take a step backwards about why there seems to be so much
> passion about this question.  Especially when I really don't care all
> that much.  Part of this is because as far as I'm concerned, my time
> is valuable, and storage is _cheap_ so I always configure a huge amount
> of storage on my desktops.  For example, my root file system is 824
> GiB --- and Kamera is 1 MiB.  So as far as I'm concerned, the amount
> of time that I've spent reading this e-mail thread is far more
> expensive than the storage cost of Kamera.   :-)
> 
> So why do we care?  If someone is installing in a very
> storage-constrained environment --- say, they are creating some
> appliance image to be hosted on a VM, or Docker, or a Raspberry Pi ---
> then just configure apt to ignore all of the recommends.  Someone who
> is trying to configure the appliance may need to do a bit more work to
> figure out which packages are *really* needed, but if they are really so
> concerned about minimizing every single byte, then that's probably the
> right answer.
> 
> And these days, with the size of most desktop systems, maybe we should
> be optimizing for user convenience, and for that use case, just
> installing all of the Recommends packages is again, probably the best
> choice in terms of making life easy for users, at the minor cost of
> paying a bit more for storage.  (Reminder: 1TiB of SSD can be as cheap
> as $50 USD --- and if you want a super-duper expensive, gold-plated
> SSD you have to spend $100 USD.   Horrors!)
> 
> So what we seem to be trying to optimize for is this middle ground
> where storage is not super-duper constrained, where the user will want
> to very carefully consider every single package to decide whether or
> not each package should be installed --- and yet storage is *just*
> cheap enough that you want to have "the right thing" to happen. The
> only problem is that everybody's opinion is going to be different
> about what "the right thing would actually be".  And until we can have
> some kind of AI where you can tell the system what exactly you plan to
> use diffoscope for in human language, and have it automatically decide
> what packages you need or not, I don't think this problem is really
> soluble.
> 
> Fortunately, I also don't think it's all that important that we solve
> it.  Personally, I think the current set of knobs and defaults are the
> right one.

The problem isn't mainly storage, actually; there are reasons beyond
storage why one may wish to keep one's package set minimal. I'm
tempted to give examples here but almost every example I give ends up
being a point of contention, so I think I'll just leave it there. :P
At least from my standpoint, and probably from the standpoint of
several others, Recommends oftentimes pulls in things that are
problematic for whatever reason (causes crashes, introduces wrong
artwork, etc.). In the opinions of many other developers, it appears
that not all of these "problematic" things can be moved out of
Recommends reasonably (and frankly I agree). There are some controls
in apt and Debian for dealing with this, but they are *not* sufficient
to reasonably avoid issues in all situations (unless reimplementing the
Recommends field using a metapackage network is considered reasonable).

Elsewhere, someone had the idea of restricting which packages had
their recommends installed.

Re: RFC: "Recommended bloat", and how to possibly fix ito

2024-11-07 Thread Marc Haber
On Fri, 8 Nov 2024 03:47:05 +0100, Fay Stegerman 
wrote:
>I personally would consider an "apt install-recs", analogous to "apt build-dep",
>quite useful.  That would require multiple steps instead of a single command,
>but allow installing the recommends for a specific package at any later time,
>which is also useful when they have changed over time or you change your mind
>about installing them.

Agreed! And I would also love the possibility to directly paste a
package list from apt show's output into apt install without having to
remove the commas.

Greetings
Marc
-- 

Marc Haber |   " Questions are the | Mailadresse im Header
Rhein-Neckar, DE   | Beginning of Wisdom " | 
Nordisch by Nature | Lt. Worf, TNG "Rightful Heir" | Fon: *49 6224 1600402



Re: RFC: "Recommended bloat", and how to possibly fix it

2024-11-07 Thread Theodore Ts'o
On Thu, Nov 07, 2024 at 12:08:22AM -0700, Soren Stoutner wrote:
> On Wednesday, November 6, 2024 10:41:46 PM MST Aaron Rainbolt wrote:
> > Again, this isn't a problem limited to a derivative distribution. I
> > respect that your opinion of how Recommends should work differs from
> > mine. That doesn't change the policy though, and it doesn't change that
> > neglecting or changing the policy's rules in this area will cause and
> > is causing problems to some of Debian's users. Ultimately I don't care
> > exactly how those problems are solved, I just want to solve them.
> 
> Let me try to explain what I see as the core of the problem.
> 
> First, some background on the three existing categories.
> 
> 1.  Depends:  These are the packages necessary to install and run the basic
> functionality of the package.
> 
> 2.  Recommends:  These are the packages required to enable all the features
> of the package.
> 
> 3.  Suggests:  These are packages that enhance the functionality of the
> package.

So if we have consensus that these definitions of the categories are
correct, then there shouldn't be any controversy over whether gwenview is
"wrong" in recommending kamera, which Aaron was objecting to.  After
all, there is a button in gwenview that would activate kamera.  If
kamera is not installed, then no matter how many times the user mashes
the "kamera" button, Nothing Will Happen.  So clearly, that's a
"feature" of gwenview, and kamera should _absolutely_ be a Recommends.
No?

But let's take a step backwards about why there seems to be so much
passion about this question.  Especially when I really don't care all
that much.  Part of this is because as far as I'm concerned, my time
is valuable, and storage is _cheap_ so I always configure a huge amount
of storage on my desktops.  For example, my root file system is 824
GiB --- and Kamera is 1 MiB.  So as far as I'm concerned, the amount
of time that I've spent reading this e-mail thread is far more
expensive than the storage cost of Kamera.   :-)

So why do we care?  If someone is installing in a very
storage-constrained environment --- say, they are creating some
appliance image to be hosted on a VM, or Docker, or a Raspberry Pi ---
then just configure apt to ignore all of the recommends.  Someone who
is trying to configure the appliance may need to do a bit more work to
figure out which packages are *really* needed, but if they are really so
concerned about minimizing every single byte, then that's probably the
right answer.
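
For reference, a sketch of that configuration; the file name is
illustrative, the options are standard apt.conf syntax:

  // /etc/apt/apt.conf.d/99no-recommends
  APT::Install-Recommends "false";
  APT::Install-Suggests "false";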

And these days, with the size of most desktop systems, maybe we should
be optimizing for user convenience, and for that use case, just
installing all of the Recommends packages is again, probably the best
choice in terms of making life easy for users, at the minor cost of
paying a bit more for storage.  (Reminder: 1TiB of SSD can be as cheap
as $50 USD --- and if you want a super-duper expensive, gold-plated
SSD you have to spend $100 USD.   Horrors!)

So what we seem to be trying to optimize for is this middle ground
where storage is not super-duper constrained, where the user will want
to very carefully consider every single package to decide whether or
not each package should be installed --- and yet storage is *just*
cheap enough that you want to have "the right thing" to happen. The
only problem is that everybody's opinion is going to be different
about what "the right thing would actually be".  And until we can have
some kind of AI where you can tell the system what exactly you plan to
use diffoscope for in human language, and have it automatically decide
what packages you need or not, I don't think this problem is really
soluble.

Fortunately, I also don't think it's all that important that we solve
it.  Personally, I think the current set of knobs and defaults are the
right one.

Regards,

- Ted



Re: RFC: "Recommended bloat", and how to possibly fix ito

2024-11-07 Thread Fay Stegerman
* Aaron Rainbolt  [2024-11-08 00:21]:
[...]
> However, this isn't that hard to rectify - rather than specifying the
> depth at which apt should stop installing recommends, one can specify
> the packages in the dependency tree from which apt should install
> recommends from. I.e., to replicate --no-transitive-recommends, one
> would do
> `sudo apt install package --only-install-recommends-from=package`
> (please someone pick a better name for this switch). If you're
> installing a complex metapackage network, you can just specify all the
> metapackages you want to allow recommends from, i.e.
> `sudo apt install metapackage 
> --only-install-recommends-from=metapackage,submetapackage1,...`
[...]

I personally would consider an "apt install-recs", analogous to "apt build-dep",
quite useful.  That would require multiple steps instead of a single command,
but allow installing the recommends for a specific package at any later time,
which is also useful when they have changed over time or you change your mind
about installing them.
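
No such command exists today; a rough shell approximation of the idea,
under the assumption that virtual packages and alternatives are simply
skipped (illustrative only):

  # install the packages that gwenview Recommends, after the fact
  apt-cache depends gwenview | awk '/Recommends:/ {print $2}' \
    | grep -v '^<' | xargs -r sudo apt-get install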

- Fay



Re: RFC: "Recommended bloat", and how to possibly fix it

2024-11-07 Thread Philipp Kern

On 2024-11-08 00:17, gregor herrmann wrote:

The distinction between Depends, Recommends or Suggests is not a
true/false thing; this is not a question of mathematics or science
but always a judgement call. Adding another category won't solve
anything IMO but only extend the sometimes blurry area.

Clarifying policy may or may not help; in the end there will always
be uncertainties, clarifications, bug reports, and the common effort
to find the best solution for most users.


And, IMO more importantly, there is the question of why this problem needs 
solving. What are the underlying pain points people have? If a package 
that is pulled in by a Recommends breaks your local configuration (the 
example with the terminal emulator getting hijacked), that is indeed a 
problem - and that should be fixed regardless. Otherwise it is maybe a 
bit wasteful in terms of bandwidth (initial download and updates) and 
disk space - but installing yet another package should not otherwise 
hurt the user. In general the requirements imposed here are not 
outrageous, and maybe in the rare cases where they are, bug reports 
might be useful.


If you are building a derivative and are concerned about recommends 
pulling in "random" things: Sure, but arguably you would want to control 
your dependencies more strongly anyway - be it for support load, or 
other constraints. Having an allowlist of packages that you compare your 
package set against that you review for changes might help. And then you 
just go and prune what isn't on the list. Or maybe have a metapackage 
that conflicts against unwanted software.
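
A sketch of that comparison, with illustrative file names:

  # list what is installed and compare it against a sorted allowlist
  dpkg-query -W -f '${Package}\n' | sort > installed.list
  comm -23 installed.list allowlist.list   # packages to review or prune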


For others it might be about more easily surfacing individual feature 
sets to the user (like tasksel, but for software groups) where 
metapackages might be a bit too messy. But then that's a different ask 
from a weak-depends, as well.


Kind regards
Philipp Kern



Re: RFC: "Recommended bloat", and how to possibly fix it

2024-11-07 Thread gregor herrmann
On Thu, 07 Nov 2024 00:08:22 -0700, Soren Stoutner wrote:

> Some packages are clearly in Depends, Recommends, or Suggests.  Others
> might be right on the line between two of the categories.  In these cases,
> a maintainer has to make a judgement call.  If a user thinks they have got
> it wrong, they are welcome to submit a bug report explaining why they think
> it should be in the other category.

This paragraph sums up my thoughts on this topic pretty well, thanks.

The distinction between Depends, Recommends or Suggests is not a
true/false thing; this is not a question of mathematics or science
but always a judgement call. Adding another category won't solve
anything IMO but only extend the sometimes blurry area.

Clarifying policy may or may not help; in the end there will always
be uncertainties, clarifications, bug reports, and the common effort
to find the best solution for most users.


Cheers,
gregor

-- 
 .''`.  https://info.comodo.priv.at -- Debian Developer https://www.debian.org
 : :' : OpenPGP fingerprint D1E1 316E 93A7 60A8 104D  85FA BB3A 6801 8649 AA06
 `. `'  Member VIBE!AT & SPI Inc. -- Supporter Free Software Foundation Europe
   `-   




Re: RFC: "Recommended bloat", and how to possibly fix ito

2024-11-07 Thread Soren Stoutner
On Thursday, November 7, 2024 12:53:27 PM MST Roger Lynn wrote:
> On 06/11/2024 19:20, Bill Allombert wrote:
> > Le Tue, Nov 05, 2024 at 05:35:59PM -0600, Aaron Rainbolt a écrit :
> >> Hello, and thanks for your time.
> >> 
> >> I've been a Debian user and contributor for a while, and have noticed a
> >> rather frustrating issue that I'm interested in potentially
> >> contributing code to fix. The issue is what I call "Recommended bloat",
> >> which in short is what happens when you install a package with all of
> >> its recommended packages, and end up with a whole lot of stuff installed
> >> that you don't want and that the package you actually wanted probably
> >> didn't even need.
> > 
> > A proposal I made was an option for apt to handle Recommends non
> > recursively.
> > That is if A Recommends B and B Recommends C,
> > apt-get install A --no-transitive-recommends
> > would install B but not C.
> 
> This, please!
> 

I should have noted previously (I apologize for not doing so) that the 
objections I have voiced to some of the proposals do not apply to this 
proposal.  If having this type of option would be helpful (and if the 
maintainers of apt don't have objections to someone implementing this) then I 
also don't have any objections, as it doesn't cause any extra work for package 
maintainers in general and it doesn't change the way Recommends works for 
people who choose not to use this argument.

-- 
Soren Stoutner
so...@debian.org



Re: RFC: "Recommended bloat", and how to possibly fix it

2024-11-07 Thread gregor herrmann
On Wed, 06 Nov 2024 13:08:07 -0600, Aaron Rainbolt wrote:

> Then again,
> given that policy is clear about how Recommends ought to be used and
> it's pretty clear that there are packages that just don't use it right,

I'm sorry but I have to disagree here; it's "pretty clear" for you
but I believe that there is no such thing as an absolute
(nature|god|policy)-given truth; some people may think that the usage
of Recommends is more or less correct, others may think that it's more
or less incorrect, but all we have is a hopefully respectful way to
find a temporary consensus.

(Why do I write such philosophical mails? Because I'm a bit concerned
how often people confuse their perception with some kind of
"reality"; and I had the hunch that this thread might also partially
take this direction …)


Cheers,
gregor

-- 
 .''`.  https://info.comodo.priv.at -- Debian Developer https://www.debian.org
 : :' : OpenPGP fingerprint D1E1 316E 93A7 60A8 104D  85FA BB3A 6801 8649 AA06
 `. `'  Member VIBE!AT & SPI Inc. -- Supporter Free Software Foundation Europe
   `-   




Re: RFC: "Recommended bloat", and how to possibly fix ito

2024-11-07 Thread Aaron Rainbolt
On Thu, 7 Nov 2024 19:53:27 +
Roger Lynn  wrote:

> On 06/11/2024 19:20, Bill Allombert wrote:
> > Le Tue, Nov 05, 2024 at 05:35:59PM -0600, Aaron Rainbolt a écrit :  
> >> Hello, and thanks for your time.
> >> 
> >> I've been a Debian user and contributor for a while, and have
> >> noticed a rather frustrating issue that I'm interested in
> >> potentially contributing code to fix. The issue is what I call
> >> "Recommended bloat", which in short is what happens when you
> >> install a package with all of its recommended packages, and end up
> >> with a whole lot of stuff installed that you don't want and that
> >> the package you actually wanted probably didn't even need.  
> > 
> > A proposal I made was an option for apt to handle Recommends non
> > recursively.
> > That is if A Recommends B and B Recommends C,
> > apt-get install A --no-transitive-recommends
> > would install B but not C.  
> 
> This, please!
> 
> As a user, when I choose to install a package, I am likely to have a
> reasonable idea of what that package's recommendations do and whether
> I need them. However, for transitive recommendations, it is unlikely
> that I will know whether I need those packages. If they in turn have
> lots of further dependencies then I will probably not install them
> and take the risk of unwanted breakage to my system. If the top level
> package that I originally did want needs those transitive
> recommendations it should recommend them itself, rather than relying
> on recommendations further down the dependency chain.

One issue with this is that it doesn't really work if you're dealing
with metapackages that in turn reference other metapackages. You end up
with the "lower" metapackages installed but all of their recommends
missing. Making it possible to tune the number of "levels" apt digs
would make something like this more useful, but it's still possible for
there to be a "lopsided" network of metapackages, where installing
recommends to level N results in a metapackage's recommends being
omitted, but installing recommends to level N+1 results in a
non-metapackage's recommends being installed.

However, this isn't that hard to rectify - rather than specifying the
depth at which apt should stop installing recommends, one can specify
the packages in the dependency tree from which apt should install
recommends. I.e., to replicate --no-transitive-recommends, one
would do
`sudo apt install package --only-install-recommends-from=package`
(please someone pick a better name for this switch). If you're
installing a complex metapackage network, you can just specify all the
metapackages you want to allow recommends from, i.e.
`sudo apt install metapackage 
--only-install-recommends-from=metapackage,submetapackage1,...`
This could then be used in combination with per-install package
blacklisting (i.e., `sudo apt install gwenview+ kamera-`) to control
recommends very easily and without much effort. As for Kicksecure, it
would solve the problems we're running into perfectly, since we'd be
able to use recommends in our metapackages and get the advantages of
skipping them for normal packages. No changes needed to Debian policy.
No changes needed to debian/control. Just modifying apt is enough to
begin with, and might even be enough in the long run. This wouldn't fix
the issue of packages having too much in their Recommends, but that can
be handled with bug reports.

Aaron

> It would also be helpful if more package descriptions could explain
> why recommended and suggested packages are needed or helpful and what
> functionality they provide that would be lost if they were not
> installed. (Many already do this.)
> 
> Thanks,
> 
> Roger
> 
> PS. I use aptitude, so I can interactively browse through the lists of
> recommendations, but it's still hard work and it can be a long list
> of very obscure packages. Do any of the GUI package managers show a
> graphical dependency tree? That might be really helpful to understand
> the package relationships and visualise the consequences of various
> actions.
> 
> PPS. And the moon on a stick too, please!
> 





Re: RFC: "Recommended bloat", and how to possibly fix ito

2024-11-07 Thread Roger Lynn
On 06/11/2024 19:20, Bill Allombert wrote:
> Le Tue, Nov 05, 2024 at 05:35:59PM -0600, Aaron Rainbolt a écrit :
>> Hello, and thanks for your time.
>> 
>> I've been a Debian user and contributor for a while, and have noticed a
>> rather frustrating issue that I'm interested in potentially
>> contributing code to fix. The issue is what I call "Recommended bloat",
>> which in short is what happens when you install a package with all of
>> its recommended packages, and end up with a whole lot of stuff installed
>> that you don't want and that the package you actually wanted probably
>> didn't even need.
> 
> A proposal I made was an option for apt to handle Recommends non
> recursively.
> That is if A Recommends B and B Recommends C,
> apt-get install A --no-transitive-recommends
> would install B but not C.

This, please!

As a user, when I choose to install a package, I am likely to have a
reasonable idea of what that package's recommendations do and whether I need
them. However, for transitive recommendations, it is unlikely that I will
know whether I need those packages. If they in turn have lots of further
dependencies then I will probably not install them and take the risk of
unwanted breakage to my system. If the top level package that I originally
did want needs those transitive recommendations it should recommend them
itself, rather than relying on recommendations further down the dependency
chain.

It would also be helpful if more package descriptions could explain why
recommended and suggested packages are needed or helpful and what
functionality they provide that would be lost if they were not installed.
(Many already do this.)

Thanks,

Roger

PS. I use aptitude, so I can interactively browse through the lists of
recommendations, but it's still hard work and it can be a long list of very
obscure packages. Do any of the GUI package managers show a graphical
dependency tree? That might be really helpful to understand the package
relationships and visualise the consequences of various actions.

PPS. And the moon on a stick too, please!



Re: Bug#1086878: python-catalogue: 2.1.0 was yanked - what version scheme should we use for 2.0.10?

2024-11-07 Thread Andreas Tille
Hi,

Am Thu, Nov 07, 2024 at 11:41:28AM +0100 schrieb Guillem Jover:
> Ah, I assumed the sources were being taken from PyPI; if they are taken
> from GitHub, then that explains it, yes. Perhaps using
> https://pypi.org/project/catalogue/#files as the URL for uscan (if uscan
> is happy with that one), would solve that problem? (And if it does,
> then perhaps python packages should be progressively transitioned to
> use pypi URLs to avoid this kind of problem?)

I have *not* inspected the situation personally.  However, you might
want to check the difference between the PyPI tarball and the tarball
from Github.  In lots of cases these are different and the maintainer
might have picked Github for a reason (which is actually the
recommendation I've read several times on the Debian Python list).
A common case is a missing test suite inside the PyPI tarball, but maybe
other things as well.
 
> But I'm thinking that, perhaps the best option is to ask upstream
> directly, whether they are going to release a 2.1.x release soon, or
> if they could do that now, and/or whether they could perhaps
> remove/rename the git tag perhaps (with the implied issues with messing
> with history and git tags being sticky on cloned repos)? As I assume
> other downstreams might be in the same/similar situation?

Sounds sensible - otherwise I'd probably go with the override proposed
by Colin.

Kind regards
Andreas. 

-- 
https://fam-tille.de



Re: Bug#1086878: python-catalogue: 2.1.0 was yanked - what version scheme should we use for 2.0.10?

2024-11-07 Thread Colin Watson
On Thu, Nov 07, 2024 at 11:41:28AM +0100, Guillem Jover wrote:
> On Thu, 2024-11-07 at 02:01:49 +, Colin Watson wrote:
> > I'd initially misread it as being just a day or two after the yanked
> > version, but you're right, it was months later.  I suspect it was simply
> > uscan - it's using the GitHub tags rather than looking at PyPI, and the
> > tag was never removed, so it's hard to see how it could have known any
> > better.
> > 
> > This does leave the question of how to hide that version from uscan in
> > the future, since uscan doesn't make it easy to ignore specific upstream
> > versions and I'd prefer to avoid using opaque regex constructions to do
> > it.  My best idea is to use uversionmangle to turn 2.1.0 into something
> > like 2.0.8~pre1, but is there a better idiom?
> 
> Ah, I assumed the sources were being taken from PyPI; if they are taken
> from GitHub, then that explains it, yes. Perhaps using
> https://pypi.org/project/catalogue/#files as the URL for uscan (if uscan
> is happy with that one), would solve that problem? (And if it does,
> then perhaps python packages should be progressively transitioned to
> use pypi URLs to avoid this kind of problem?)

Some do, but it can't be done systematically: if we get the orig.tar
from PyPI then it's the "sdist", which is built from the upstream
repository, but Python's build tooling unfortunately means it's a bit
easier than ideal for the sdist to be accidentally lacking some files,
such as documentation or tests.  Examples from a quick look in my
browser history:

  https://github.com/RKrahl/pytest-dependency/pull/79
  https://github.com/pgjones/quart-trio/pull/19
  https://github.com/jendrikseipp/vulture/pull/368

> But I'm thinking that, perhaps the best option is to ask upstream
> directly, whether they are going to release a 2.1.x release soon, or
> if they could do that now, and/or whether they could perhaps
> remove/rename the git tag perhaps (with the implied issues with messing
> with history and git tags being sticky on cloned repos)? As I assume
> other downstreams might be in the same/similar situation?

Indeed.  I've filed https://github.com/explosion/catalogue/issues/74
asking if they're willing to help out here.
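
A minimal debian/watch sketch of the uversionmangle idiom discussed
above, assuming upstream tags of the form vX.Y.Z on GitHub (the mangle
expression is illustrative):

  version=4
  opts="uversionmangle=s/^2\.1\.0$/2.0.8~pre1/" \
    https://github.com/explosion/catalogue/tags .*/v?(\d[\d.]+)\.tar\.gz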

-- 
Colin Watson (he/him)  [cjwat...@debian.org]



Re: Bug#1086878: python-catalogue: 2.1.0 was yanked - what version scheme should we use for 2.0.10?

2024-11-07 Thread Guillem Jover
Hi!

On Thu, 2024-11-07 at 02:01:49 +, Colin Watson wrote:
> On Thu, Nov 07, 2024 at 02:42:08AM +0100, Guillem Jover wrote:
> > Given that I assume the current (non-retracted) upstream version is
> > going to be close to surpass the retracted one, I'd go for the +really
> > hack. In this case invalidating relationships for external
> > dependencies would not seem like a big issue, because it looks like
> > the yanked version is the only one that has ever been in Debian, but
> > it avoids the ugliness and confusion of epochs (people tend to forget
> > to add the epoch in relationships for example) and its stickiness,
> > going forward.
> 
> I don't really have any information on whether upstream plans a 2.1.1 or
> similar, but it's true it might well happen.

Right, see below…

> > The other question that comes to mind is why the yanked version was
> > uploaded, as from that issue above it seems at that time it should
> > have already been marked as yanked. Perhaps we have some automated
> > tool that does not honor the yanked markings, which might deserve a bug
> > report? Andreas do you recall what tool or process you used for that?
> 
> I'd initially misread it as being just a day or two after the yanked
> version, but you're right, it was months later.  I suspect it was simply
> uscan - it's using the GitHub tags rather than looking at PyPI, and the
> tag was never removed, so it's hard to see how it could have known any
> better.
> 
> This does leave the question of how to hide that version from uscan in
> the future, since uscan doesn't make it easy to ignore specific upstream
> versions and I'd prefer to avoid using opaque regex constructions to do
> it.  My best idea is to use uversionmangle to turn 2.1.0 into something
> like 2.0.8~pre1, but is there a better idiom?

Ah, I assumed the sources were being taken from PyPI; if they are taken
from GitHub, then that explains it, yes. Perhaps using
https://pypi.org/project/catalogue/#files as the URL for uscan (if uscan
is happy with that one), would solve that problem? (And if it does,
then perhaps python packages should be progressively transitioned to
use pypi URLs to avoid this kind of problem?)

But I'm thinking that, perhaps the best option is to ask upstream
directly, whether they are going to release a 2.1.x release soon, or
if they could do that now, and/or whether they could perhaps
remove/rename the git tag perhaps (with the implied issues with messing
with history and git tags being sticky on cloned repos)? As I assume
other downstreams might be in the same/similar situation?

Thanks,
Guillem



Re: RFC: "Recommended bloat", and how to possibly fix it

2024-11-07 Thread Julien Plissonneau Duquène

On 07/11/2024 04:12, Aaron Rainbolt wrote:

just to get around problematic recommends in Debian's packages.


What about having a way to configure the packaging tools you use to only 
consider a whitelist (with pattern match, pin-style) of package 
Recommends? This way you could use your metapackages' Recommends while 
ignoring others.


One major issue I see with your initial proposal is that if there is 
already some room for interpretation with the current fields, adding a 
new field is likely to make things even worse in terms of compliance as 
there will still be room for interpretation (or misuse) with now two 
fields with slightly tighter definitions and some significant overlap. 
In short this is a case of "now you have two problems".


--
Julien Plissonneau Duquène



Re: RFC: "Recommended bloat", and how to possibly fix it

2024-11-07 Thread nick black
Colin Watson left as an exercise for the reader:
> (https://peps.python.org/pep-0508/#extras): effectively groups of
> additional dependencies to enable some kind of feature that you can opt
> into if you need that feature, rather than having to pick from an
> undifferentiated pile of Recommends, or do things like devscripts does

the "undifferentiated" feels like it's doing some lifting here.
arch's yay (and presumably other tools in the pacman family)
shows a short justification for each optdepends[0] entry (when
such information is provided). it would require not just control
file changes but also UI work, of course.
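
a sketch of that format, with an illustrative entry (optdepends is a
bash array in PKGBUILD; the justification follows the colon):

  optdepends=('kamera: camera import support in gwenview')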

--nick

[0] https://man.archlinux.org/man/PKGBUILD.5

-- 
nick black -=- https://nick-black.com
to make an apple pie from scratch,
you need first invent a universe.




Re: RFC: "Recommended bloat", and how to possibly fix it

2024-11-06 Thread Soren Stoutner
On Wednesday, November 6, 2024 10:41:46 PM MST Aaron Rainbolt wrote:
> Again, this isn't a problem limited to a derivative distribution. I
> respect that your opinion of how Recommends should work differs from
> mine. That doesn't change the policy though, and it doesn't change that
> neglecting or changing the policy's rules in this area will cause and
> is causing problems to some of Debian's users. Ultimately I don't care
> exactly how those problems are solved, I just want to solve them.

Let me try to explain what I see as the core of the problem.

First, some background on the three existing categories.

1.  Depends:  These are the packages necessary to install and run the basic
functionality of the package.

2.  Recommends:  These are the packages required to enable all the features
of the package.

3.  Suggests:  These are packages that enhance the functionality of the
package.

Enabling a feature in the package is a "strong dependency", which is what is
meant by the policy.  I agree that it could be worded in such a way as to be
more clear to some people, particularly if they are not familiar with Debian
or for whom English is not their primary language, but I want to be clear
that the way I have defined Recommends above is exactly what the current
policy envisions (at least I believe that is the understanding of the
majority of the Debian community).

When a user installs a package and all the Recommends, they should be able
to expect that all of the features of the package will just work.  They
should not have to go hunting down some other package to install to enable
one of the features of the package, even if that feature is less commonly
used.

Some packages are clearly in Depends, Recommends, or Suggests.  Others might
be right on the line between two of the categories.  In these cases, a
maintainer has to make a judgement call.  If a user thinks they have got it
wrong, they are welcome to submit a bug report explaining why they think it
should be in the other category.

Some upstream projects are complex enough that the above three categories
don't fully capture the needs of users.  In those cases, meta-packages can
accommodate those needs, as has already been discussed (which is analogous
to Python "extras").  If a user believes there is a compelling case to be
made for an additional meta-package, they should feel free to file a bug
report and explain the merits of their request.

With all of that said as background, the reason why I am opposed to what you
are requesting is that it boils down to: "Please create a package category
that only installs the important features instead of all the ones I don't
use".  This category would be somewhere in between Depends (the packages
necessary to run the basic functionality) and Recommends (the packages
necessary to run all the functionality).

What I don't like about this idea is that it requires the package maintainer
to be a mind reader.  Specifically, they need to read *your* mind, because
every single user has a different list of what the "important" functionality
is.  What you or your distribution considers important, the next user or
distribution would say, "I never use that, take it out.  But I need this
other thing."

Maintainers would end up having to create multiple variations of each
package to make everyone happy.  It is unsustainable.  If you, as a user,
want something between the basic functionality and all the functionality,
please just install the basic functionality (Depends) and then add the extra
functionality that is important to you.  Don't ask the package maintainer to
read your mind and guess at what that will be.



Re: RFC: "Recommended bloat", and how to possibly fix it

2024-11-06 Thread Sune Vuorela
On 2024-11-06, Colin Watson  wrote:
> In some ways I think what we're missing is really a way to do the
> equivalent of "extras" in Python packages
> (https://peps.python.org/pep-0508/#extras): effectively groups of
> additional dependencies to enable some kind of feature that you can opt
> into if you need that feature, rather than having to pick from an

And conditional dependencies/recommends. Maybe they're kind of the same:

Package: foo
Recommends: foo-l10n-de[language-de]

Package: libQtGui
Depends: libQtWayland[gui-thingie-wayland], libQtX11[gui-thingie-x11]

Package: php
Depends: php-apache2-glue[apache2-is-installed],
php-nginx-glue[nginx-is-installed]

But I think this is kind of just dreaming at this point.

/Sune



Re: RFC: "Recommended bloat", and how to possibly fix it

2024-11-06 Thread Aaron Rainbolt
On Wed, 06 Nov 2024 21:41:43 -0700
Soren Stoutner  wrote:

> On Wednesday, November 6, 2024 8:12:29 PM MST Aaron Rainbolt wrote:
> > At this point, we have two options. We can either explicitly remove
> > all of the extra packages that get installed, or we can skip
> > installing recommends at all. Both of these come with their own
> > severe disadvantages. If we manually uninstall everything we don't
> > need, that means we have to maintain a list of packages that get
> > incorrectly installed, and keep it up-to-date as package
> > dependencies in the archive change. On top of that, if a package
> > happens to not uninstall cleanly, or it otherwise causes problems
> > when installed or uninstalled, we'd have to figure out how to
> > manually fix that, or avoid letting that particular package ever
> > get installed in the first place via held packages or a similar
> > mechanism. This is a rather large amount of work, and it's not easy
> > to maintain.
> > 
> > The other option is to skip installing recommends. The reason that
> > doesn't work well is because if we do that, we can't use recommends
> > in our metapackages at all - anything we specify as a recommends in
> > the metapackages won't end up installed. To work around that, we
> > have to use depends for everything, and to do *that*, we have to
> > basically reimplement the whole darn recommended packages mechanism
> > in metapackages. This results in the metapackages setup that we have
> > today, which can be seen at [1]. There's some "dependencies"
> > packages, some "recommends" packages, a complex network between the
> > packages to make things work right, and a smattering of dummy
> > dependencies to allow users to override certain depends. Our
> > current metapackages scheme has some rather inconsistent naming and
> > could use a lot of improvement, but even under ideal conditions
> > (like what I've proposed for Kicksecure in [2]), this is a lot of
> > ugly hacks that reimplement existing apt features without using
> > those features just to get around problematic recommends in
> > Debian's packages. And this is likely *easier* to maintain than
> > manually uninstalling everything we don't want.  
> 
> This sounds like exactly the type of work I would expect a derivative 
> distribution to do.  If I were in your shoes, I would probably do
> something like rebuild all packages for my derivative and host them
> in my own repositories, like Ubuntu does.  During that rebuild
> process, I would use some sort of patch process to alter the
> Recommends fields to suit the needs of my particular derivative
> distribution.  It would take time to setup and maintain such patches,
> but that is exactly the type of effort that is required to run a
> distribution.

I'm not sure why this would be a reasonable solution here. Kicksecure
already has a solution that is working. We dislike it, and I'd like for
something better to exist, but our metapackage tricks work for the time
being.

To be clear, I am not just doing this because I have a job to do and
this is part of it. I'm doing this because I truly believe that
Debian's current "recommended bloat" issues are a problem within Debian
itself and are worth my time and effort to solve. With that in mind,
bringing into the picture what a derivative should or shouldn't do
isn't really relevant to the discussion at hand. Much like the
diffoscope example, Kicksecure's issues were intended to be an example,
not a definition of an end goal.

> > So, that's the issue we're having, and the solutions I'm pursuing
> > (using Recommends differently, adding a Weak-Depends field, or
> > implementing something like Python's extras like Colin was
> > mentioning) are things I think would fix the problem for us and
> > others. I'm definitely open to suggestions for other ways we could
> > avoid these problems though.  
> 
> This really is the type of problem that needs to be solved inside of
> your derivative distribution, not in Debian itself, especially in
> ways that makes Debian worse for its own users or requires a bunch of
> extra work for Debian's maintainers (like figuring out how to sort
> all of the Recommends into these new fields/extras options, which
> different derivatives/users would have distinct opinions about where
> they should go and would require a lot of time from Debian developers
> to get each package to a state that would make all potential
> consumers of the package happy).

There's no reason this exact issue couldn't happen in Debian itself.
Any metapackage system that encourages its users to use
--no-install-recommends to avoid installation of unnecessary packages
is going to run into the exact same problem, whether inside of Debian
or outside of it. For that matter, any Debian user who uses
--no-install-recommends with a metapackage that uses Recommends is
going to run into the same kind of problem.

I am sympathetic to the problems you are having.  And I wish you all
the best in creating a distribution that meets the needs of your users.

Re: RFC: "Recommended bloat", and how to possibly fix it

2024-11-06 Thread Soren Stoutner
On Wednesday, November 6, 2024 8:12:29 PM MST Aaron Rainbolt wrote:
> At this point, we have two options. We can either explicitly remove all
> of the extra packages that get installed, or we can skip installing
> recommends at all. Both of these come with their own severe
> disadvantages. If we manually uninstall everything we don't need, that
> means we have to maintain a list of packages that get incorrectly
> installed, and keep it up-to-date as package dependencies in the
> archive change. On top of that, if a package happens to not uninstall
> cleanly, or it otherwise causes problems when installed or uninstalled,
> we'd have to figure out how to manually fix that, or avoid letting that
> particular package ever get installed in the first place via held
> packages or a similar mechanism. This is a rather large amount of work,
> and it's not easy to maintain.
> 
> The other option is to skip installing recommends. The reason that
> doesn't work well is because if we do that, we can't use recommends in
> our metapackages at all - anything we specify as a recommends in the
> metapackages won't end up installed. To work around that, we have to
> use depends for everything, and to do *that*, we have to basically
> reimplement the whole darn recommended packages mechanism in
> metapackages. This results in the metapackages setup that we have
> today, which can be seen at [1]. There's some "dependencies" packages,
> some "recommends" packages, a complex network between the packages to
> make things work right, and a smattering of dummy dependencies to allow
> users to override certain depends. Our current metapackages scheme has
> some rather inconsistent naming and could use a lot of improvement, but
> even under ideal conditions (like what I've proposed for Kicksecure in
> [2]), this is a lot of ugly hacks that reimplement existing apt
> features without using those features just to get around problematic
> recommends in Debian's packages. And this is likely *easier* to
> maintain than manually uninstalling everything we don't want.

This sounds like exactly the type of work I would expect a derivative 
distribution to do.  If I were in your shoes, I would probably do something 
like rebuild all packages for my derivative and host them in my own 
repositories, like Ubuntu does.  During that rebuild process, I would use some 
sort of patch process to alter the Recommends fields to suit the needs of my 
particular derivative distribution.  It would take time to setup and maintain 
such patches, but that is exactly the type of effort that is required to run a 
distribution.

> So, that's the issue we're having, and the solutions I'm pursuing
> (using Recommends differently, adding a Weak-Depends field, or
> implementing something like Python's extras like Colin was mentioning)
> are things I think would fix the problem for us and others. I'm
> definitely open to suggestions for other ways we could avoid these
> problems though.

This really is the type of problem that needs to be solved inside of your 
derivative distribution, not in Debian itself, especially in ways that makes 
Debian worse for its own users or requires a bunch of extra work for Debian's 
maintainers (like figuring out how to sort all of the Recommends into these new 
fields/extras options, which different derivatives/users would have distinct 
opinions about where they should go and would require a lot of time from 
Debian developers to get each package to a state that would make all potential 
consumers of the package happy).

I am sympathetic to the problems you are having.  And I wish you all the best 
in creating a distribution that meets the needs of your users.  Like I said 
earlier, if there are specific packages that actually have the wrong 
Recommends, I’m sure the package maintainers would welcome a bug report 
explaining why a package should be moved to Suggests.  But in most cases, what 
is currently in Recommends is what is in the best interests of Debian users.  
Unless you can point to a systematic problem with Recommends in Debian (which 
the examples you have presented so far have not shown), I don’t think upstream 
Debian is the place to fix the particular needs of your derivative 
distribution.

-- 
Soren Stoutner
so...@debian.org



Re: RFC: "Recommended bloat", and how to possibly fix it

2024-11-06 Thread Aaron Rainbolt
On Wed, 06 Nov 2024 15:59:22 -0700
Soren Stoutner  wrote:

> On Wednesday, November 6, 2024 3:06:59 PM MST Aaron Rainbolt wrote:
> > And this brings us back to the original idea of creating a
> > Weak-Depends field. From my viewpoint, policy states that
> > Recommends is for declaring a strong (heavy emphasis on "strong"
> > here), but not absolute, dependency.  
> 
> After reading over the policy on the subject, I do agree that the
> current policy is a little vague as to what is intended.
> 
> "Recommends
> 
> This declares a strong, but not absolute, dependency.
> 
> The Recommends field should list packages that would be found
> together with this one in all but unusual installations.”
> 
> https://www.debian.org/doc/debian-policy/ch-relationships.html#binary-dependencies-depends-recommends-suggests-enhances-pre-depends
> 
> What constitutes a strong dependency, or what constitutes packages
> being found together in all but unusual installations can be
> interpreted differently by different people.  I would be in favor of
> rewording the policy to be more expressly inline with how Recommends
> is currently interpreted and used by the majority of maintainers and
> users, which is that all packages that are required for expected
> functionality should be included in Recommends, even if a feature is
> only used by a subset of users.  Suggests should be for packages that
> enhance some aspect of the program, but which most users would either
> not expect to be installed automatically or are so large that a user
> should make an explicit decision to install them.

I suppose I'd argue that kamera only enhances some aspect of gwenview,
but meh, neither of us are really "right" since this is a matter of
opinion. It's probably better to look at the practical problems I and
the developers of Kicksecure are running into that inspired this thread
initially.

Kicksecure is a security-focused Debian derivative that is intended to
be essentially Debian with every reasonable security hardening feature
in existence enabled out of the box. Kicksecure is also used as the
base of Whonix, which is intended to provide anonymity features that
can be used within virtual machines. We have a number of our own
packages that we ship on our images, and we have a number of
metapackages that depend on both our own packages and many packages
from the Debian repositories.

Some of the packages Kicksecure's metapackages depend on are hard
dependencies, i.e. if they're removed, the system should be considered
broken. On the other hand, many of our packages are ones that we
believe should be present on most installations, but that a user may
legitimately want rid of. This probably sounds like a good fit for the
Depends and Recommends fields of debian/control, since it is, except
for one major issue.

When we build the Kicksecure images with recommends enabled, they end
up with *a lot* of gunk that we don't want. Some packages that end up
installed cause actual problems (user can't log in, Debian artwork
ends up appearing where it shouldn't, default terminal emulator is
hijacked by something called ZuTTY, etc.), other packages simply
increase the size of the ISO for no good reason. This is the result of
people using the Recommends field the way you're suggesting the policy
be changed to explicitly allow.

At this point, we have two options. We can either explicitly remove all
of the extra packages that get installed, or we can skip installing
recommends at all. Both of these come with their own severe
disadvantages. If we manually uninstall everything we don't need, that
means we have to maintain a list of packages that get incorrectly
installed, and keep it up-to-date as package dependencies in the
archive change. On top of that, if a package happens to not uninstall
cleanly, or it otherwise causes problems when installed or uninstalled,
we'd have to figure out how to manually fix that, or avoid letting that
particular package ever get installed in the first place via held
packages or a similar mechanism. This is a rather large amount of work,
and it's not easy to maintain.

The other option is to skip installing recommends. The reason that
doesn't work well is because if we do that, we can't use recommends in
our metapackages at all - anything we specify as a recommends in the
metapackages won't end up installed. To work around that, we have to
use depends for everything, and to do *that*, we have to basically
reimplement the whole darn recommended packages mechanism in
metapackages. This results in the metapackages setup that we have
today, which can be seen at [1]. There's some "dependencies" packages,
some "recommends" packages, a complex network between the packages to
make things work right, and a smattering of dummy dependencies to allow
users to override certain depends. Our current metapackages scheme has
some rather inconsistent naming and could use a lot of improvement, but
even under ideal conditions (like what I've proposed 

Re: Bug#1086878: python-catalogue: 2.1.0 was yanked - what version scheme should we use for 2.0.10?

2024-11-06 Thread Colin Watson
On Thu, Nov 07, 2024 at 02:42:08AM +0100, Guillem Jover wrote:
> On Thu, 2024-11-07 at 01:15:08 +, Colin Watson wrote:
> > https://pypi.org/project/catalogue/#history shows that 2.1.0 was yanked
> > from PyPI, but it's what we currently have in Debian.  Some of the more
> > recent releases (which I think are really in the same version series -
> > it's just that upstream changed their minds about bumping the patch
> > version) contain fixes that we should have in Debian; in particular
> > while trying to fix python-srsly I ran into a problem which I think is
> > fixed by
> > https://github.com/explosion/catalogue/commit/75f5e9c24e93b5fcc2b3e9f324d9328bc871abad.
> 
> > I'm happy to do the work to get us onto a newer version, but I wanted to
> > check what to use for the version scheme.  Should we use
> > 2.1.0+really2.0.10-1 or 1:2.0.10-1?  I think there'd be some
> > justification for an epoch here since the upstream version numbering
> > scheme changed (cf.
> > https://www.debian.org/doc/debian-policy/ch-controlfields.html#s-f-version),
> > but I'm copying debian-devel as per policy on epoch proposals.
> 
> The way I see it, this does not look like a version schema change to
> me, upstream released a version that they then retracted:
> 
>   https://github.com/explosion/catalogue/issues/46

I guess you might or might not describe that as a change in the upstream
version numbering scheme, depending on how you interpret policy's
language.  *shrug*

> Given that I assume the current (non-retracted) upstream version is
> going to be close to surpass the retracted one, I'd go for the +really
> hack. In this case invalidating relationships for external
> dependencies would not seem like a big issue, because it looks like
> the yanked version is the only one that has ever been in Debian, but
> it avoids the ugliness and confusion of epochs (people tend to forget
> to add the epoch in relationships for example) and its stickiness,
> going forward.

I don't really have any information on whether upstream plans a 2.1.1 or
similar, but it's true it might well happen.

> The other question that comes to mind is why the yanked version was
> uploaded, as from that issue above it seems at that time it should
> have already been marked as yanked. Perhaps we have some automated
> tool that does not honor the yanked markings, which might deserve a bug
> report? Andreas do you recall what tool or process you used for that?

I'd initially misread it as being just a day or two after the yanked
version, but you're right, it was months later.  I suspect it was simply
uscan - it's using the GitHub tags rather than looking at PyPI, and the
tag was never removed, so it's hard to see how it could have known any
better.

This does leave the question of how to hide that version from uscan in
the future, since uscan doesn't make it easy to ignore specific upstream
versions and I'd prefer to avoid using opaque regex constructions to do
it.  My best idea is to use uversionmangle to turn 2.1.0 into something
like 2.0.8~pre1, but is there a better idiom?
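
For concreteness, a minimal sketch of that mangle in debian/watch 
(untested, and the package's actual watch line and tag pattern may 
differ):

version=4
opts="uversionmangle=s/^2\.1\.0$/2.0.8~pre1/" \
  https://github.com/explosion/catalogue/tags .*/v?(\d\S+)\.tar\.gz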

-- 
Colin Watson (he/him)  [cjwat...@debian.org]



Re: Rebuilds to enable PAC and BTI support on arm64

2024-11-06 Thread Guillem Jover
Hi!

On Wed, 2024-11-06 at 17:28:38 +0500, Andrey Rakhmatullin wrote:
> On Wed, Nov 06, 2024 at 10:43:07AM +0100, Emanuele Rocca wrote:
> > As a final thought, given that new toolchain versions bring multiple
> > improvements over the years it's perhaps worth thinking about rebuilding
> > the archive on some sort of regular basis to make sure we get the
> > benefits?
> 
> "Let's at least force rebuilds all packages not rebuilt since stable
> before every freeze starts" is a popular opinion.

Of course, as Emanuele mentions doing mass rebuilds, which seems to
be more common nowadays, can bring benefits from toolchain improvements
(potentially all layers from compilers, to packaging tools).

But routinely rebuilding and _uploading_ the results also comes with
its drawbacks. For one it will hide ABI breaks. Having packages not get
upgraded during a Debian release upgrade has also been a good and easy
indicator for end users that those packages are stale and perhaps it's
time to look for alternatives.

Thanks,
Guillem



Re: Bug#1086878: python-catalogue: 2.1.0 was yanked - what version scheme should we use for 2.0.10?

2024-11-06 Thread Guillem Jover
Hi!

On Thu, 2024-11-07 at 01:15:08 +, Colin Watson wrote:
> Source: python-catalogue
> Version: 2.1.0-6
> Severity: normal
> X-Debbugs-Cc: Andreas Tille , debian-devel@lists.debian.org

> https://pypi.org/project/catalogue/#history shows that 2.1.0 was yanked
> from PyPI, but it's what we currently have in Debian.  Some of the more
> recent releases (which I think are really in the same version series -
> it's just that upstream changed their minds about bumping the patch
> version) contain fixes that we should have in Debian; in particular
> while trying to fix python-srsly I ran into a problem which I think is
> fixed by
> https://github.com/explosion/catalogue/commit/75f5e9c24e93b5fcc2b3e9f324d9328bc871abad.

> I'm happy to do the work to get us onto a newer version, but I wanted to
> check what to use for the version scheme.  Should we use
> 2.1.0+really2.0.10-1 or 1:2.0.10-1?  I think there'd be some
> justification for an epoch here since the upstream version numbering
> scheme changed (cf.
> https://www.debian.org/doc/debian-policy/ch-controlfields.html#s-f-version),
> but I'm copying debian-devel as per policy on epoch proposals.

The way I see it, this does not look like a version schema change to
me, upstream released a version that they then retracted:

  https://github.com/explosion/catalogue/issues/46

Given that I assume the current (non-retracted) upstream version is
going to be close to surpass the retracted one, I'd go for the +really
hack. In this case invalidating relationships for external
dependencies would not seem like a big issue, because it looks like
the yanked version is the only one that has ever been in Debian, but
it avoids the ugliness and confusion of epochs (people tend to forget
to add the epoch in relationships for example) and its stickiness,
going forward.
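
As a quick sanity check of the ordering (2.1.0-6 being the version 
currently in Debian, and 2.1.1-1 a hypothetical future upload):

$ dpkg --compare-versions 2.1.0+really2.0.10-1 gt 2.1.0-6 && echo newer
newer
$ dpkg --compare-versions 2.1.0+really2.0.10-1 lt 2.1.1-1 && echo superseded
superseded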

The other question that comes to mind is why the yanked version was
uploaded, as from that issue above it seems at that time it should
have already been marked as yanked. Perhaps we have some automated
tool that does not honor the yanked markings, which might deserve a bug
report? Andreas do you recall what tool or process you used for that?

Thanks,
Guillem



Re: RFC: "Recommended bloat", and how to possibly fix it

2024-11-06 Thread Charles Plessy
Hello Aaron and everybody,

the R package ecosystem uses control fields inspired by ours.  But there
the Suggests field is used in a stronger way: packages listed in
Suggests usually provide functionality that is directly leveraged by
packaged code, and the usual pattern is that if the suggested package is
not installed, there is error catching that leads to a less preferred
alternative or to a message with guidance about installing the suggested
package.

In Debian, if I remember correctly, I have seen that kind of behaviour with
Inkscape opening a pop-up requesting the installation of extra Python
packages when trying to use some filters.

The Suggests field in Debian is not very useful at the moment, but there
is a straightforward way to repurpose it.  And apt-get already has
an --install-suggests option.
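
For example, to opt into the whole suggested set for a single 
installation (the flag corresponds to the APT::Install-Suggests 
configuration item):

$ apt-get install --install-suggests inkscape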

Have a nice day,

-- 
Charles Plessy Nagahama, Yomitan, Okinawa, Japan
Debian Med packaging team http://www.debian.org/devel/debian-med
Tooting from home  https://framapiaf.org/@charles_plessy
- You  do not have  my permission  to use  this email  to train  an AI -



Re: RFC: "Recommended bloat", and how to possibly fix it

2024-11-06 Thread Aaron Rainbolt
On Wed, 6 Nov 2024 22:55:43 +
Colin Watson  wrote:

> On Wed, Nov 06, 2024 at 04:06:59PM -0600, Aaron Rainbolt wrote:
> > (Side note, I wonder if there's a way to implement Weak-Depends that
> > *doesn't* require modifying all of the tons of packages Johannes
> > mentioned. Maybe some way of annotating packages in Recommends as
> > "important" would permit a distinction to be made here in such a way
> > that most tools wouldn't need changes? Those that were updated
> > would be able to understand the distinction, those that weren't
> > would continue to treat Recommends the same way they do today. No
> > idea if that's feasible, just something that went through my head.)
> >  
> 
> In some ways I think what we're missing is really a way to do the
> equivalent of "extras" in Python packages
> (https://peps.python.org/pep-0508/#extras): effectively groups of
> additional dependencies to enable some kind of feature that you can
> opt into if you need that feature, rather than having to pick from an
> undifferentiated pile of Recommends, or do things like devscripts does
> where you explain the Recommends you need for various tools in your
> package description (and hope that they never change, because people
> might not know that they need to keep up).
>
> I don't think the proposed Weak-Depends particularly helps with that;
> it adds another fine gradation along the axis of "how important is
> this dependency", rather than considering the orthogonal axis of
> "what is this dependency for".

That's a good point, and with a solution that focused on what
dependencies are for, one could prevent severe issues by marking a
dependency as "basic", i.e. without this, basic functionality breaks or
is very likely to break. That could be used for things like
dracut's dependency on systemd-cryptsetup or live-config's dependencies
on sudo and user-setup. Then the user could choose the extra
functionality they wanted on a package-by-package basis.

There are two kinds of categorization here that should be taken into
account - there's the way a package perceives its dependencies, and the
way a package perceives itself. For instance, diffoscope may perceive
mono-runtime primarily as a tool for disassembling .NET applications
into CIL bytecode, while other packages may perceive it primarily as a
runtime environment. The way in which a package perceives itself is
already handled in Debian (to some degree at least) by the sections
mechanism and the priority field, while it sounds like you're talking
specifically about how packages perceive other packages, for instance
diffoscope could declare its dependency on mono-runtime as being in a
group of recommends called "code-disassemblers" or whatever. Is that
right?

> In most cases you can get around this sort of thing using some extra
> metapackages.  This is usually workable, but since it involves a trip
> through NEW people are reluctant to do it too much, and it contributes
> to Packages bloat.
> 
> Still, I entirely agree with those who've said that adding new package
> relationship fields is a _lot_ of work and should have a very high
> bar, and the same goes for extending the syntax of existing fields.
> Not to mention that we're kind of running out of convenient ASCII
> punctuation characters.

Agreed. Still though, if a good theoretical solution is found here, I'm
willing to at least try implementing it even if it does end up being a
lot of work to bring it to fruition.

Aaron


pgpLW15c_P7dt.pgp
Description: OpenPGP digital signature


Re: RFC: "Recommended bloat", and how to possibly fix it

2024-11-06 Thread Soren Stoutner
On Wednesday, November 6, 2024 3:06:59 PM MST Aaron Rainbolt wrote:
> And this brings us back to the original idea of creating a Weak-Depends
> field. From my viewpoint, policy states that Recommends is for
> declaring a strong (heavy emphasis on "strong" here), but not absolute,
> dependency.

After reading over the policy on the subject, I do agree that the current 
policy is a little vague as to what is intended.

"Recommends

This declares a strong, but not absolute, dependency.

The Recommends field should list packages that would be found together with 
this one in all but unusual installations.”

https://www.debian.org/doc/debian-policy/ch-relationships.html#binary-dependencies-depends-recommends-suggests-enhances-pre-depends

What constitutes a strong dependency, or what constitutes packages being found 
together in all but unusual installations can be interpreted differently by  
different people.  I would be in favor of rewording the policy to be more 
expressly in line with how Recommends is currently interpreted and used by the 
majority of maintainers and users, which is that all packages that are 
required for expected functionality should be included in Recommends, even if 
a feature is only used by a subset of users.  Suggests should be for packages 
that enhance some aspect of the program, but which most users would either not 
expect to be installed automatically or are so large that a user should make 
an explicit decision to install them.

There probably are some packages that have items in Recommends that should 
actually be in Suggests.  I once received a bug report from a user asking me 
to move a package from Recommends to Suggests, which suggestion I agreed with 
and was grateful for receiving.  But personally, I would find an additional 
category unnecessary and even confusing, especially because users already have 
the option to not auto-install recommended packages if they really don’t want 
them (which usually comes down to space-saving concerns, especially on 
embedded systems).

-- 
Soren Stoutner
so...@debian.org

signature.asc
Description: This is a digitally signed message part.


Re: RFC: "Recommended bloat", and how to possibly fix it

2024-11-06 Thread Colin Watson
On Wed, Nov 06, 2024 at 04:06:59PM -0600, Aaron Rainbolt wrote:
> (Side note, I wonder if there's a way to implement Weak-Depends that
> *doesn't* require modifying all of the tons of packages Johannes
> mentioned. Maybe some way of annotating packages in Recommends as
> "important" would permit a distinction to be made here in such a way
> that most tools wouldn't need changes? Those that were updated would be
> able to understand the distinction, those that weren't would continue
> to treat Recommends the same way they do today. No idea if that's
> feasible, just something that went through my head.)

In some ways I think what we're missing is really a way to do the
equivalent of "extras" in Python packages
(https://peps.python.org/pep-0508/#extras): effectively groups of
additional dependencies to enable some kind of feature that you can opt
into if you need that feature, rather than having to pick from an
undifferentiated pile of Recommends, or do things like devscripts does
where you explain the Recommends you need for various tools in your
package description (and hope that they never change, because people
might not know that they need to keep up).
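
For readers who have not used extras: a PEP 508 dependency string opting
into a named group of optional dependencies looks like

  requests[security] >= 2.8.1

and a hypothetical apt analogue might read something like
diffoscope[android].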

I don't think the proposed Weak-Depends particularly helps with that; it
adds another fine gradation along the axis of "how important is this
dependency", rather than considering the orthogonal axis of "what is
this dependency for".

In most cases you can get around this sort of thing using some extra
metapackages.  This is usually workable, but since it involves a trip
through NEW people are reluctant to do it too much, and it contributes
to Packages bloat.

Still, I entirely agree with those who've said that adding new package
relationship fields is a _lot_ of work and should have a very high bar,
and the same goes for extending the syntax of existing fields.  Not to
mention that we're kind of running out of convenient ASCII punctuation
characters.

-- 
Colin Watson (he/him)  [cjwat...@debian.org]



Re: RFC: "Recommended bloat", and how to possibly fix it

2024-11-06 Thread Aaron Rainbolt
On Wed, 06 Nov 2024 14:21:30 -0700
Soren Stoutner  wrote:

> On Wednesday, November 6, 2024 12:08:07 PM MST Aaron Rainbolt wrote:
> >For instance, gwenview currently
> > recommends kamera. gwenview is an image viewer, kamera is a tool for
> > working with digital cameras. Now it is true that kamera enhances
> > gwenview's functionality by allowing it to see pictures on a digital
> > camera that is plugged into the system, but by no means is gwenview
> > useless or even substantially degraded from a functionality
> > standpoint when kamera is missing.  
> 
> In my opinion, gwenview recommending kamera is the correct behavior
> and in line with policy.
> 
> If I install an app, Recommends should pull in everything that would
> make all the buttons in the GUI work correctly.  I shouldn’t have to
> manually install anything else for them to work.  Kamera should
> definitely be in Recommends for gwenview (although not in depends).
> 
> If you don’t want that, set your system to not automatically install 
> recommended packages, and then you can manually install whatever you
> want to get all the features of the app to work correctly.  But
> please don’t mess up Recommends working correctly for the rest of us.

And this brings us back to the original idea of creating a Weak-Depends
field. From my viewpoint, policy states that Recommends is for
declaring a strong (heavy emphasis on "strong" here), but not absolute,
dependency. If the lack of kamera made it so that portions of
gwenview's user interface were visibly grayed out, or error messages
were displayed, then I would see it as a strong dependency since the
lack of it results in a substantial degradation of functionality. But
when kamera is missing, gwenview still works. Things that try to access
the `camera:` KIO slave won't work, but that's the extent of the issues
caused. KIO slaves are essentially plugins of sorts, so I see this as
an additional "nice-to-have" that isn't a dependency at all. This
isn't the only example either - I've had live image builds pull in
things that seemed (to me) to be completely "out of left field", like
random terminal emulators that I have neither interest in nor
willingness to keep in my image builds. (I don't know exactly which
package was the culprit there.)

The solution you mention is to not automatically install recommended
packages. But then you run into "fun" edge cases like system boot
failures[1], sudo or user-setup not being installed[2], and things like
that. When a packager needs to say "technically package A can function
without package B, but only if the user knows exactly what they're
doing", Recommends is the only way to express that, and if Recommends
is also being used to ship unnecessary terminal emulators, plugins for
little-used image viewer functionality, and the like, there's no good
solution except for manual fiddling. This is causing real problems in
at least one downstream derivative of Debian (Kicksecure, which I am
working with), which is why I'm hoping to find a good solution that I
can start implementing and sending patches for.

This isn't to disagree with you at all - I agree that gwenview shipping
kamera as a Recommends is a good thing. Like you say, "I shouldn't have
to manually install anything else for [the buttons in the GUI] to
work". That's a perfectly logical thing to use a field called
Recommends for. But if it's going to be used for that, it would be very
beneficial to identify packages that are recommended and important vs.
packages that are recommended but not that important. That's what I was
thinking Weak-Depends, or a solution similar to it, could do. In the
absence of that, demoting kamera to a Suggests would seem more
policy-compliant in my opinion, but then again that's just my opinion.

(Side note, I wonder if there's a way to implement Weak-Depends that
*doesn't* require modifying all of the tons of packages Johannes
mentioned. Maybe some way of annotating packages in Recommends as
"important" would permit a distinction to be made here in such a way
that most tools wouldn't need changes? Those that were updated would be
able to understand the distinction, those that weren't would continue
to treat Recommends the same way they do today. No idea if that's
feasible, just something that went through my head.)

[1]: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1078792
 If systemd-cryptsetup is missing on an encrypted system and one
 installs dracut with --no-install-recommends, the system will be
 rendered unbootable.
[2]: 
https://live-team.pages.debian.net/live-manual/html/live-manual/customizing-package-installation.en.html
 Section 8.4.3
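
To make [1] concrete: with recommends disabled, the otherwise-recommended
package has to be named explicitly at install time, e.g.:

$ apt-get install --no-install-recommends dracut systemd-cryptsetup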


pgpyd4I9j4qPq.pgp
Description: OpenPGP digital signature


Re: RFC: "Recommended bloat", and how to possibly fix it

2024-11-06 Thread Soren Stoutner
On Wednesday, November 6, 2024 12:08:07 PM MST Aaron Rainbolt wrote:
>For instance, gwenview currently
> recommends kamera. gwenview is an image viewer, kamera is a tool for
> working with digital cameras. Now it is true that kamera enhances
> gwenview's functionality by allowing it to see pictures on a digital
> camera that is plugged into the system, but by no means is gwenview
> useless or even substantially degraded from a functionality standpoint
> when kamera is missing.

In my opinion, gwenview recommending kamera is the correct behavior and in line 
with policy.

If I install an app, Recommends should pull in everything that would make all 
the buttons in the GUI work correctly.  I shouldn’t have to manually install 
anything else for them to work.  Kamera should definitely be in Recommends for 
gwenview (although not in depends).

If you don’t want that, set your system to not automatically install 
recommended packages, and then you can manually install whatever you want to 
get all the features of the app to work correctly.  But please don’t mess up 
Recommends working correctly for the rest of us.

-- 
Soren Stoutner
so...@debian.org

signature.asc
Description: This is a digitally signed message part.


Re: diffoscope dependency granularity [was RFC: "Recommended bloat", and how to possibly fix it]

2024-11-06 Thread Richard Lewis
Fay Stegerman  writes:

> [Added diffosc...@lists.reproducible-builds.org to Cc]
>
> * Fay Stegerman  [2024-11-06 17:43]:
>> * Johannes Schauer Marin Rodrigues  [2024-11-06 02:28]:
>> [...]
>> > Have one package diffoscope and one package diffoscope-full and you
>> > could even have a package diffoscope-minimal and there you have
>> > user-selectable granularity.
>> 
>> We already have two diffoscope packages for exactly this reason (I work on
>> diffoscope and only have -minimal installed myself):
>> 
>> $ apt-cache show diffoscope/sid | grep -A1 full
>>  This is a dependency package that recommends the full set of external tools,
>>  to support as many type of files as possible.
>> 
>> $ apt-cache show diffoscope-minimal/sid | grep -A2 partial
>>  This -minimal package only recommends a partial set of the supported 3rd party
>>  tools needed to produce file-format-specific comparisons, excluding those that
>>  are considered too large or niche for general use.
>
> IMO in the case of diffoscope it could make sense to have multiple tools
> metapackages, like a diffoscope-tools-android etc., to both make it easier to
> avoid those dependencies if you know you don't need them, but also to easily
> install the dependencies for the file formats you do work with.

I think even diffoscope-minimal pulls in too much - all I wanted was a
better version of debdiff, but diffoscope-minimal recommends things like
openssl, openssh-client, r-base-core, which seem well beyond "minimal" --
but you can't just purge all its recommends, as some (like xz-utils) are
needed for other reasons.



Re: RFC: "Recommended bloat", and how to possibly fix ito

2024-11-06 Thread Bill Allombert
On Tue, Nov 05, 2024 at 05:35:59PM -0600, Aaron Rainbolt wrote:
> Hello, and thanks for your time.
> 
> I've been a Debian user and contributor for a while, and have noticed a
> rather frustrating issue that I'm interested in potentially
> contributing code to fix. The issue is what I call "Recommended bloat",
> which in short is what happens when you install a package with all of
> its recommended packages, and end up with a whole lot of stuff installed
> that you don't want and that the package you actually wanted probably
> didn't even need.

A proposal I made was an option for apt to handle Recommends
non-recursively.
That is, if A Recommends B and B Recommends C,
apt-get install A --no-transitive-recommends
would install B but not C.

Cheers,
Bill.



Re: RFC: "Recommended bloat", and how to possibly fix it

2024-11-06 Thread Aaron Rainbolt
On Wed, 6 Nov 2024 18:03:07 +
Holger Levsen  wrote:

> On Wed, Nov 06, 2024 at 06:03:45PM +0100, IOhannes m zmölnig wrote:
> > Afaict, the problem is that we have 3 options to pick from, and
> > it's hard for people to decide which is the "right one". I do not
> > see how adding a 4th option will make this decision any easier
> > (esp. since that new option is so similar to an already existing
> > one).  
> 
> absolutely.

I have been thinking about it for a while and I am thinking that adding
a fourth field really is probably not the right way to go. Then again,
given that policy is clear about how Recommends ought to be used and
it's pretty clear that there are packages that just don't use it right,
I don't think an effort to "fix" the use of Recommends across Debian is
going to be terribly helpful because even if it succeeds, people are
probably going to make the same mistakes again. (It's not obvious that a
field called "Recommends" should only be used for not-quite-vital
dependencies, not recommendations in general. The very name of the
field begs for people to misuse it.)

Any suggestions for how to approach the problem as one of awareness
rather than one of technical limitations?

> maybe the problem is also related to the fact that the original poster
> didn't know about the diffoscope-minimal package, because it's
> non-obvious and somewhat hard to find?

My use of diffoscope was more to make a point in general about a
systemic problem (it was the most extreme example I could think of).
However, it is also true that I did not know about diffoscope-minimal,
and had I known about it I probably would have picked some other
package to illustrate the point. For instance, gwenview currently
recommends kamera. gwenview is an image viewer, kamera is a tool for
working with digital cameras. Now it is true that kamera enhances
gwenview's functionality by allowing it to see pictures on a digital
camera that is plugged into the system, but by no means is gwenview
useless or even substantially degraded from a functionality standpoint
when kamera is missing. It ought to be a Suggests according to Debian
policy, but it's obvious why it ended up being a Recommends because it
does enable additional functionality that doesn't work otherwise.

Aaron

> $ apt-cache search minimal|cut -d ' ' -f1|grep minimal$
> cm-super-minimal
> diffoscope-minimal
> gdb-minimal
> haskell-devscripts-minimal
> keepassxc-minimal
> kde-telepathy-minimal
> python3-minimal
> libpython3.12-minimal
> python3.12-minimal
> libpython3.13-minimal
> python3.13-minimal
> virtuoso-minimal
> 
> are there other similar metapackages?
> 
> 



pgpAGJl7tCgTP.pgp
Description: OpenPGP digital signature


Re: Rebuilds to enable PAC and BTI support on arm64

2024-11-06 Thread Andrey Rakhmatullin
On Wed, Nov 06, 2024 at 10:43:07AM +0100, Emanuele Rocca wrote:
> As a final thought, given that new toolchain versions bring multiple
> improvements over the years it's perhaps worth thinking about rebuilding
> the archive on some sort of regular basis to make sure we get the
> benefits?

"Let's at least force rebuilds all packages not rebuilt since stable
before every freeze starts" is a popular opinion.

-- 
WBR, wRAR


signature.asc
Description: PGP signature


Re: RFC: "Recommended bloat", and how to possibly fix it

2024-11-06 Thread Holger Levsen
On Wed, Nov 06, 2024 at 06:03:45PM +0100, IOhannes m zmölnig wrote:
> Afaict, the problem is that we have 3 options to pick from, and it's hard for 
> people to decide which is the "right one".
> I do not see how adding a 4th option will make this decision any easier (esp. 
> since that new option is so similar to an already existing one).
 
absolutely.

maybe the problem is also related to the fact that the original poster didn't
know about the diffoscope-minimal package, because it's non-obvious and
somewhat hard to find?

$ apt-cache search minimal|cut -d ' ' -f1|grep minimal$
cm-super-minimal
diffoscope-minimal
gdb-minimal
haskell-devscripts-minimal
keepassxc-minimal
kde-telepathy-minimal
python3-minimal
libpython3.12-minimal
python3.12-minimal
libpython3.13-minimal
python3.13-minimal
virtuoso-minimal

are there other similar metapackages?


-- 
cheers,
Holger

 ⢀⣴⠾⠻⢶⣦⠀
 ⣾⠁⢠⠒⠀⣿⡁  holger@(debian|reproducible-builds|layer-acht).org
 ⢿⡄⠘⠷⠚⠋⠀  OpenPGP: B8BF54137B09D35CF026FE9D 091AB856069AAA1C
 ⠈⠳⣄

"Fascists never stop being fascists;
you don't debate with them, history has shown"...


signature.asc
Description: PGP signature


Re: RFC: "Recommended bloat", and how to possibly fix it

2024-11-06 Thread IOhannes m zmölnig
On 6 November 2024 00:35:59 CET, Aaron Rainbolt wrote:
>
>
>* Add a "Weak-Depends" field to the list of binary dependency control
>  fields in the Debian Policy Manual section 7.2, with a definition
>  very similar to the existing definition for "Recommends".
>* Change the definition of the "Recommends" field to match the way
>  the field is oftentimes used in the Debian archive.


This awfully reminds me of xkcd927 (even though it's not a good analogy).

Afaict, the problem is that we have 3 options to pick from, and it's hard for 
people to decide which is the "right one".
I do not see how adding a 4th option will make this decision any easier (esp. 
since that new option is so similar to an already existing one).

Just my 2¢


mfh.her.fsr
IOhannes



Re: RFC: "Recommended bloat", and how to possibly fix it

2024-11-06 Thread Fay Stegerman
* Johannes Schauer Marin Rodrigues  [2024-11-06 02:28]:
[...]
> Have one package diffoscope and one package diffoscope-full and you could even
> have a package diffoscope-minimal and there you have user-selectable
> granularity.

We already have two diffoscope packages for exactly this reason (I work on
diffoscope and only have -minimal installed myself):

$ apt-cache show diffoscope/sid | grep -A1 full
 This is a dependency package that recommends the full set of external tools,
 to support as many type of files as possible.

$ apt-cache show diffoscope-minimal/sid | grep -A2 partial
 This -minimal package only recommends a partial set of the supported 3rd party
 tools needed to produce file-format-specific comparisons, excluding those that
 are considered too large or niche for general use.

- Fay



Re: Rebuilds to enable PAC and BTI support on arm64

2024-11-06 Thread Andreas Tille
On Wed, Nov 06, 2024 at 01:16:57PM +, Holger Levsen wrote:
> On Wed, Nov 06, 2024 at 05:28:38PM +0500, Andrey Rakhmatullin wrote:
> > "Let's at least force rebuilds of all packages not rebuilt since stable
> > before every freeze starts" is a popular opinion.
> 
> true. and "let's not do that" is even more popular, else why haven't we
> done this in three decades?

Changing something simply takes effort.  How do you want to distinguish
the "let's not do that" opinion from the "no spare time to trigger a
change" "opinion"?

Finally, we have those regular archive-wide rebuilds that trigger lots of
FTBFS bugs.  I could imagine simply uploading those packages that pass the
rebuild tests.  Otherwise the package needs manual intervention by the
maintainer, which will also end up in an upload or a testing removal.

Kind regards
Andreas.

-- 
https://fam-tille.de



Re: Rebuilds to enable PAC and BTI support on arm64

2024-11-06 Thread Andrey Rakhmatullin
On Wed, Nov 06, 2024 at 01:16:57PM +, Holger Levsen wrote:
> On Wed, Nov 06, 2024 at 05:28:38PM +0500, Andrey Rakhmatullin wrote:
> > "Let's at least force rebuilds all packages not rebuilt since stable
> > before every freeze starts" is a popular opinion.
> 
> true. and "let's not do that" is even more popular, else why haven't we
> done this in three decades?

Doing is harder than talking, and doing something like that is much
harder than, say, fixing and uploading a package. Where can one start?

-- 
WBR, wRAR


signature.asc
Description: PGP signature


Re: Rebuilds to enable PAC and BTI support on arm64

2024-11-06 Thread Holger Levsen
On Wed, Nov 06, 2024 at 05:28:38PM +0500, Andrey Rakhmatullin wrote:
> "Let's at least force rebuilds all packages not rebuilt since stable
> before every freeze starts" is a popular opinion.

true. and "let's not do that" is even more popular, else why haven't we
done this in three decades?


-- 
cheers,
Holger

 ⢀⣴⠾⠻⢶⣦⠀
 ⣾⠁⢠⠒⠀⣿⡁  holger@(debian|reproducible-builds|layer-acht).org
 ⢿⡄⠘⠷⠚⠋⠀  OpenPGP: B8BF54137B09D35CF026FE9D 091AB856069AAA1C
 ⠈⠳⣄

If you upload your address book to "the cloud", I don't want to be in it.


signature.asc
Description: PGP signature


Re: Rebuilds to enable PAC and BTI support on arm64

2024-11-06 Thread Emanuele Rocca
On 2024-10-28 10:55, Sebastian Ramacher wrote:
> since dpkg 1.22.0 the additional hardening flags to enable Pointer
> Authentication (PAC) and Branch Target Identification (BTI)
> on arm64 are enabled by default.

Some more background and an update on this.

Both PAC and BTI are enabled by adding -mbranch-protection=standard to
the compiler flags. The defaults in Debian sid have included this flag
since August 2023 (dpkg 1.22.0), as Sebastian said.

However, PAC and BTI differ in the way they are enabled. For PAC, simply
building a program with -mbranch-protection=standard results in PAC
being enabled. When it comes to BTI, all execution units (i.e. all object
files) linked together need to have BTI in order for the resulting ELF
file to have BTI turned on. Since pretty much every program in the world
uses crtbeginS.o and crtendS.o from GCC as well as crti.o, Scrt1.o and
crtn.o from glibc, this means that only packages built with a
BTI-enabled GCC and glibc get the feature. In sid, we enabled BTI
support in gcc-14 14.1.0-4 (2024-07-10) and glibc 2.39-5 (2024-07-22).
See https://wiki.debian.org/ToolChain/PACBTI for more details.
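
One way to check whether a given binary actually ended up with the
markings is to inspect its GNU property note, e.g. (output abridged;
exact formatting varies with the binutils version):

$ readelf --notes /usr/bin/ls | grep 'AArch64 feature'
      Properties: AArch64 feature: BTI, PAC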

I performed a local archive rebuild to get the list of all packages that
don't currently have BTI on, but would get it with a simple rebuild
(binNMU). I added the date of "last build" to the output just to verify
that no package was built after the end of July 2024, as those should
have had BTI already. To my surprise, some of the packages in the list
were last built in 2014, which is... well, a long time ago!
https://people.debian.org/~ema/pac-bti/arm64-binNMUs.log

When the binNMUs started (thank you Sebastian) we had 10204 binary
packages with BTI turned on, and we are now at 18348. Once the rebuilds
are over I'll check the situation again. There's likely going to be a
long tail of packages that don't get BTI with a simple rebuild for many
reasons, including for example not using the default compiler flags.

As a final thought, given that new toolchain versions bring multiple
improvements over the years it's perhaps worth thinking about rebuilding
the archive on some sort of regular basis to make sure we get the
benefits?

  ema



Re: RFC: "Recommended bloat", and how to possibly fix it

2024-11-05 Thread Aaron Rainbolt
Partially top-posting because there are some assumptions here that need
to be cleared up.

Hi Johannes, and thanks for your feedback. I would like to point out
that I explicitly said in the very start of my email, that this was
something I was "interested in potentially contributing code to fix." I
see this part got omitted from what you quoted of my email, so it may
have gone unnoticed. I'm fully intend to implement and test much or
even most of the code require to make this happen should it be
considered desirable. So in reply to your "Do you plan to send
patches?" questions, yes, I intended to send patches from the moment I
started writing the email. It's generally understood and oftentimes
spelled out in open-source projects that one should discuss and get
feedback on a large change before even starting to implement it, which
is exactly what I'm doing here. I know that my proposal, as given, is
likely to not be accepted as is and may even be rejected altogether,
which is why I said "potentially". I would appreciate if you would look
at my suggestion in this light rather than assuming I'm just coming up
with a "great idea" that I expect everyone else to work on without any
further help from me.

It may also help to look at this as an initial "testing the waters"
conversation starter, and not as a proposal in its entirety. That's
what the DEP process is for. In your reply you mention "any good
proposal includes disadvantages", which is true. This doesn't have
those because it's not meant to be a formal proposal. The only reason I
sent this on the mailing lists rather than simply starting a
conversation on IRC is because I wanted it to get wider visibility.

The rest of my replies are inline.

On Wed, 06 Nov 2024 02:28:33 +0100
Johannes Schauer Marin Rodrigues  wrote:

> Hi,
> 
> Quoting Aaron Rainbolt (2024-11-06 00:35:59)
> > According to the Debian Policy Manual, section 7.2, the Recommends
> > field in Debian packages "declares a strong, but not absolute,
> > dependency. The Recommends field should list packages that would be
> > found together with this one in all but unusual installations."
> > While this is a very useful definition, the actual way in which
> > Recommends are used seems to differ substantially from this.  
> 
> then you should file bugs against the packages that violate this part
> of policy.
> 
> > If this was just a diffoscope problem, it would be easy to just
> > file a bug asking that most of these packages be demoted to
> > Suggests, but this is a much more pervasive issue, as evidenced by
> > the fact that the live-build manual has special instructions for
> > how to disable the installation of *all* recommended packages when
> > building a live image[1]. I have built live images that ended up
> > with all sorts of weird packages installed on them, which issue was
> > resolved by disabling the installation of recommended packages.  
> 
> On my system, I have apt configured to not install Recommends by
> default because I want to manually pick what to install when I
> install new things. Oftentimes I find the extra things that get
> installed too bloated for my taste, so I sympathize with your quest,
> but I do not agree with your solution.
> 
> > Furthermore, the current (ab)use of Recommends in Debian packages
> > illustrates something important - there is a real need for
> > packagers to specify packages that should automatically be
> > installed alongside another package, but that aren't necessarily
> > strong dependencies. Using diffoscope again as an example, it's
> > reasonable that the diffoscope maintainers want *all* of
> > diffoscope's functionality to "just work" out of the box, even if
> > that means installing over three and a half gigabytes of packages
> > to do it.[2] This may not be policy-compliant, but demoting these
> > packages to the "Suggests" field doesn't feel right.  Should a user
> > who just wants to compare things have to figure out the right combo
> > of packages to make diffoscope work for their particular use
> > case?[3] There's also the question of logistics - going through and
> > "fixing" all of the packages with overkill Recommends could be a
> > massive undertaking, and it's more than likely that there will be
> > some packagers who won't be thrilled with the changes.  
> 
> This argument goes both ways. Your proposed solution has the same
> drawbacks. To properly implement your solution, you still have to go
> through and fix all of the packages with overkill Recommends which is
> a massive undertaking and it's not unlikely that there will be some
> packagers who won't be thrilled with the change (as is usually the
> case for any kind of change you propose to somebody's package).

Specifically for this point, the reason I thought that my proposed
solution would be superior to refactoring packages or demoting things
to Suggests is because both of those solutions require somewhat
invasive or significantly functionality-modifying changes to pack

Re: RFC: "Recommended bloat", and how to possibly fix it

2024-11-05 Thread Johannes Schauer Marin Rodrigues
Hi,

Quoting Aaron Rainbolt (2024-11-06 00:35:59)
> According to the Debian Policy Manual, section 7.2, the Recommends field in
> Debian packages "declares a strong, but not absolute, dependency. The
> Recommends field should list packages that would be found together with this
> one in all but unusual installations." While this is a very useful
> definition, the actual way in which Recommends are used seems to differ
> substantially from this.

then you should file bugs against the packages that violate this part of
policy.

> If this was just a diffoscope problem, it would be easy to just file a bug
> asking that most of these packages be demoted to Suggests, but this is a much
> more pervasive issue, as evidenced by the fact that the live-build manual has
> special instructions for how to disable the installation of *all* recommended
> packages when building a live image[1]. I have built live images that ended
> up with all sorts of weird packages installed on them, which issue was
> resolved by disabling the installation of recommended packages.

On my system, I have apt configured to not install Recommends by default
because I want to manually pick what to install when I install new things.
Oftentimes I find the extra things that get installed too bloated for my taste,
so I sympathize with your quest, but I do not agree with your solution.
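
For reference, that configuration is a single line in a file such as
/etc/apt/apt.conf.d/99norecommends:

APT::Install-Recommends "false";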

> Furthermore, the current (ab)use of Recommends in Debian packages illustrates
> something important - there is a real need for packagers to specify packages
> that should automatically be installed alongside another package, but that
> aren't necessarily strong dependencies. Using diffoscope again as an example,
> it's reasonable that the diffoscope maintainers want *all* of diffoscope's
> functionality to "just work" out of the box, even if that means installing
> over three and a half gigabytes of packages to do it.[2] This may not be
> policy-compliant, but demoting these packages to the "Suggests" field doesn't
> feel right.  Should a user who just wants to compare things have to figure
> out the right combo of packages to make diffoscope work for their particular
> use case?[3] There's also the question of logistics - going through and
> "fixing" all of the packages with overkill Recommends could be a massive
> undertaking, and it's more than likely that there will be some packagers who
> won't be thrilled with the changes.

This argument goes both ways. Your proposed solution has the same drawbacks. To
properly implement your solution, you still have to go through and fix all of
the packages with overkill Recommends which is a massive undertaking and it's
not unlikely that there will be some packagers who won't be thrilled with the
change (as it usually is the case for any kind of change you propose to
somebody's package).

> Weak-Depends would basically just be a stronger "Recommends".

I don't think that the solution to your problem is to make the
Recommends/Suggests logic more granular. You still have to convert all the
packages which you think have too many Recommends. Even assuming that their
maintainers agree, instead of doing that work, why not invest that time in
finding a solution using the existing mechanisms instead? You mentioned that
meta-packages are beside the point. Why? Have one package diffoscope and one
package diffoscope-full and you could even have a package diffoscope-minimal
and there you have user-selectable granularity.
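
To sketch the idea, a skeletal debian/control stanza for such a
metapackage might look like this (hypothetical names and Recommends
list, purely for illustration):

Package: diffoscope-full
Architecture: all
Depends: diffoscope, ${misc:Depends}
Recommends: mono-runtime, openssh-client, r-base-core
Description: diffoscope with the full set of external tools
 Metapackage pulling in every optional helper diffoscope can use.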

> Some of the advantages of this solution include:

Any good proposal also includes disadvantages. Any suggestion to change
something should come with an attempt to weigh the benefits against the costs.
In your mail you do not make an argument about the costs of your proposed
change. I'd argue, that the cost of your proposal are too high. Especially when
comparing that cost to using the existing packaging mechanisms to achieve
essentially the same thing.

> * It requires comparatively few changes to initially implement.

I strongly disagree. I think you are underestimating the number of tools which
mess around with Debian packages and their dependencies and which would have
to receive a patch to support a new field.

>   All existing packages in the Debian repository will be compliant with a
>   Debian Policy Manual update that adds Weak-Depends, without changes.

I thought your proposal was to make Weak-Depends effectively what Recommends is
today?

>   Packagers can start using Weak-Depends if they want to or if a bug
>   report requests that they do. Some of the packages that would need to
>   change to implement this would be dpkg, apt, possibly the libraries
>   they depend on, and live-build.

Yes. And sbuild, python-apt, aptitude, mmdebstrap, debootstrap, cdebootstrap,
debhelper, cdbs, lintian, pbuilder, dose3, wanna-build, dak, python-debian,
libconfig-model-dpkg-perl, augeas, haskell-debian, dh-exec, autopkgtest and
very many tools in devscripts like build-rdeps, debrebuild, wrap-and-sort...
A

Re: Most optimal way to import NMU into existing git-buildpackage repository?

2024-11-03 Thread Sean Whitton
Hello,

On Tue 29 Oct 2024 at 02:48pm +01, Helmut Grohne wrote:

> Andrey has already said much of what I could add to the thread, but I
> think I can slightly clarify the needs of NMUers.
>
> On Fri, Oct 25, 2024 at 08:45:16AM +0200, Andreas Henriksson wrote:
>> I would very much prefer if it was possible in Debian to not allow
>> the archive to get out of sync with packaging git repo (for example
>> when it lives under salsa.debian.org/debian which uploaders should have
>> access to already).
>
> There are three quite fundamental pieces missing to achieve this.
>
> There needs to be a simple way to turn a git commit into a source package.
> If the source of truth ever is to become git, the .dsc becomes an export
> format and then this becomes a hard requirement. We can turn git commits
> into source packages. The problem is that there is not one way to do
> this, but about a hundred and you need to know which package uses which.
> That does not scale.
>
> There needs to be a simple way to figure out the commit that corresponds
> to an upload. This problem has been approached in two ways. For one
> thing, there is DEP14 recommending a particular tag layout, but I think
> this is backwards. It assumes that the git repository is trusted, but in
> reality git repositories allow for much wider access than Debian
> uploads. What we really need is a source package to know the commit id
> it was generated from.
>
> These operations need to round-trip. If you take a source package,
> identify the git commit and export it to .dsc, it must be functionally
> equivalent to what you started with. Timestamps may differ, but file
> content or contained files very much not.
>
> To me, these are hard requirements for using maintainer git
> repositories for performing NMUs.
>
> Now the dgit users among us will be grinning already as what I have
> written here, very much reads like a specification of (parts of) dgit.
> Once again, I question whether salsa as we use it now is the solution or
> the problem. I note that it is practically possible to push your dgit
> history to salsa and then NMUers can easily do meaningful MRs for their
> uploads even when your maintainer git has changes that have not yet been
> uploaded.

Well, quite, this is dgit indeed.

-- 
Sean Whitton


signature.asc
Description: PGP signature


Re: Most optimal way to import NMU into existing git-buildpackage repository?

2024-11-03 Thread Sean Whitton
Hello,

On Sat 26 Oct 2024 at 09:31am GMT, Holger Levsen wrote:

> On Fri, Oct 25, 2024 at 03:03:53PM +, Holger Levsen wrote:
>> the current expectation is that an NMU bug is opened, which contains
>> the debdiff.
>>
>> https://www.debian.org/doc/manuals/developers-reference/developers-reference.en.html#when-and-how-to-do-an-nmu
>>
>> "... Then, you must send a patch with the differences between the current
>>  package and your proposed NMU to the BTS. The nmudiff script in the
>>  devscripts package might be helpful"
>
> FWIW, I think this should stay the default when doing NMUs but I also think
> it should be (spelled out that it's) equally fine to open a MR on salsa
> *if* the specific package somehow specifies this is ok.
>
> I also think that currently no package should be able to opt-out from
> getting NMUdiffs via the BTS, because it's good to have one workflow which
> works for *all* packages.

I think so too.

-- 
Sean Whitton


signature.asc
Description: PGP signature


Re: Binary uploads into the archive

2024-11-03 Thread Mechtilde Stehmann

Hello Dennis,

On 29.10.24 at 14:56, Dennis van Dok wrote:
> On 28-10-2024 22:09, Daniel Leidert wrote:
>> Hi,
>>
>> by accident, I uploaded a binary package today (ruby-rouge) instead of
>> its source package into the archive. I expected the binary package to
>> be rejected once I discovered my mistake. But it was accepted instead,
>> and it was also not rebuilt. Didn't we turn off binary package
>> uploads? Shouldn't this be rejected?
>
> Coincidentally, I did the exact same thing (with igtf-policy-bundle);
> but this is now stuck, as it cannot migrate to testing (unless somebody
> manually intervenes).
>
> I think what I should do is update the release number and do another
> (source only) upload.
>
> Dennis

You can do a new upload by increasing the revision number and adding a
changelog entry noting that this is a source-only upload.
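
A minimal sequence for such a no-change source-only upload might be
(sketch; the changes file name depends on the version dch picks):

$ dch --increment "No-change source-only upload."
$ dpkg-buildpackage -S
$ dput igtf-policy-bundle_<version>_source.changes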



Regards
--
Mechtilde Stehmann
## Debian Developer
## PGP encryption welcome
## F0E3 7F3D C87A 4998 2899  39E7 F287 7BBA 141A AD7F



OpenPGP_signature.asc
Description: OpenPGP digital signature


Re: Most optimal way to import NMU into existing git-buildpackage repository?

2024-11-03 Thread Sean Whitton
Hello,

On Wed 30 Oct 2024 at 01:24pm GMT, Otto Kekäläinen wrote:

> I can (and I did test already) do a merge with --allow-unrelated
> histories, but dgit history always has patches applied as separate
> commits that get rebased and thus there is no quilt/gbp pq -compatible
> git history to merge from. If I later do a 'dgit --gbp push' to
> upload, how do I push a development version to Salsa for review and CI
> testing?

If the NMU was done with dgit, then dgit only appends commits to apply
the patches.  Thus if you look a few commits back in history, you'll
find something you can merge.

> Based on docs and your previous replies it isn't possible, thus `gbp
> import-dsc --verbose --pristine-tar apt:j4-dmenu-desktop/sid` still
> seems to stand as the optimal way to import NMUs onto a Debian
> packaging git repo?

If the upload was not done with dgit, then probably, yes.

> The man pages of dgit are extensive and explains well how to interact
> with the Debian repository as if it was a git repository, but I am
> unable to find descriptions of how to use dgit in team maintained
> packages with testing and reviews prior to uploads.

The idea is that dgit is only needed when dealing with source packages.
When collaborating with team members prior to upload, you don't really
need any source packages -- you just build straight out of git.

If you can identify a place in the docs where some reference to this
could be added that would have been helpful to you, a patch would be
very welcome.

-- 
Sean Whitton


signature.asc
Description: PGP signature


Re: Binary uploads into the archive

2024-11-01 Thread Sean Whitton
Hello,

On Tue 29 Oct 2024 at 03:15pm +01, Marco d'Itri wrote:

> On Oct 29, Dennis van Dok  wrote:
>
>> I think what I should do is update the release number and do another (source
>> only) upload.
> Correct: these uploads are supposed to be accepted because binary
> uploads are still needed for passing through NEW (in that case: it's to
> target experimental for the first upload of the pair).

Also non-free packages that aren't marked as autobuildable.
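That marking is a field in the source stanza of debian/control (a
sketch; "foo" is illustrative, and archive-side whitelisting plus a
licence that permits autobuilding are also required):

    Source: foo
    Section: non-free/utils
    XS-Autobuild: yes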

-- 
Sean Whitton



Re: s390x architecture status?

2024-10-31 Thread Alberto Garcia
On Mon, Oct 28, 2024 at 10:24:04AM +0100, Chris Hofstaedtler wrote:
> b) various packages already ignore s390x (gnome? others?)

WebKitGTK still builds in s390x, but the Skia graphics library does
not support big-endian machines so if the Cairo backend is ever
dropped then we probably won't be able to support s390x any longer.

Berto

[1] 
https://github.com/WebKit/WebKit/blob/webkitgtk-2.47.1/Source/ThirdParty/skia/include/private/base/SkLoadUserConfig.h#L56



Re: Is ftp.upload.debian.org not working?

2024-10-31 Thread Colin Watson
On Thu, Oct 31, 2024 at 08:46:08AM -0500, Steven Robbins wrote:
> I pushed an update ("dput digikam_8.4.0-4_source.changes") ten hours ago to 
> ftp.upload.debian.org and dput reported success.  But there has been no email 
> acknowledgement nor change to the archive that I can spot.
> 
> Is everything working?

Things are very slow because a mass-binNMU for PAC/BTI support caused a
huge backlog in queue processing.  I can now see that your upload was
accepted a couple of hours ago, but it may still take a while to reach
the archive proper.

-- 
Colin Watson (he/him)  [cjwat...@debian.org]



Re: Rebuilds to enable PAC and BTI support on arm64

2024-10-31 Thread Holger Levsen
On Mon, Oct 28, 2024 at 10:55:57PM +0100, Sebastian Ramacher wrote:
> since dpkg 1.22.0 the additional hardening flags to enable Pointer
> Authentication (PAC) and Branch Target Identification (BTI)
> on arm64 are enabled by default. See [1] for the discussion to enable
> these flags.

/me likes
 
> To have the desired effect for the next release and have some time
> to catch regressions, I have started with scheduling rebuilds of
> packages that have not been built since the change in the default flags.
> While the change of flags only affects arm64, packages building
> Multi-Arch: same binaries require consistent versions on all
> architectures. For those packages, the rebuilds have been scheduled on
> all architectures.

/me likes very much! background: even though snapshot.d.o has been
fixed now, so that it's become generally usable again, many
snapshots from 2023 and 2024 are missing, thus making it impossible
to recreate the build environments needed for reproducible builds
of trixie.

these mass rebuilds will help reduce that gap. 

> Thanks to Emanuele Rocca for identifying the list of packages that have
> to be rebuilt to gain PAC/BTI support.
 
thank you both! :)


-- 
cheers,
Holger

 ⢀⣴⠾⠻⢶⣦⠀
 ⣾⠁⢠⠒⠀⣿⡁  holger@(debian|reproducible-builds|layer-acht).org
 ⢿⡄⠘⠷⠚⠋⠀  OpenPGP: B8BF54137B09D35CF026FE9D 091AB856069AAA1C
 ⠈⠳⣄

Gendering language is like sausage without meat: progress.


signature.asc
Description: PGP signature


Re: Most optimal way to import NMU into existing git-buildpackage repository?

2024-10-30 Thread Otto Kekäläinen
Hi,

On Mon, 28 Oct 2024 at 4:27, Sean Whitton wrote:
>
> Hello,
>
> On Sun 27 Oct 2024 at 05:29pm GMT, Otto Kekäläinen wrote:
>
> > Hi!
> >
> >> > Seems this is still the most optimal way to ensure git is correct:
> >> >
> >> >gbp import-dsc --verbose --pristine-tar apt:j4-dmenu-desktop/sid
> >> >
> >> > Also, dgit pull can be used to get the latest source automatically, but 
> >> > unfortunately those
> >> > git commits are made as a custom "Debian as a git repo" representation, 
> >> > and is not
> >> > compatible with using CI testing and code review before upload in the 
> >> > way many of us
> >> > are doing on Salsa currently.
> >>
> >> 'dgit pull' integrates the NMU automatically, when it can.  It doesn't
> >> just fetch the source.  I don't follow how it's different from 'gbp
> >> import-dsc'.  Could you say more?
> >
> In a gbp checkout of git@salsa.debian.org:debian/j4-dmenu-desktop.git,
> > how would you invoke 'dgit pull sid' to import the NMU?
> >
> > Without any parameters, it will create branch 'dgit/sid' which has
> > unrelated history and patches are applied and nothing can be merged or
> > cherry-picked to the git-buildpackage master branch. Perhaps I am just
> > missing something on how this should work, or perhaps
> > https://manpages.debian.org/unstable/dgit/dgit-maint-gbp.7.en.html#INCORPORATING_NMUS
> > implies the functionality isn't yet there?
>
> Ah, I was thinking that you had already been using 'dgit --gbp push' to
> upload the package.  In that case the histories would be related, just
> with some additional commits on top, and a manual merge would be
> possible.


I can (and I did test already) do a merge with --allow-unrelated
histories, but dgit history always has patches applied as separate
commits that get rebased and thus there is no quilt/gbp pq -compatible
git history to merge from. If I later do a 'dgit --gbp push' to
upload, how do I push a development version to Salsa for review and CI
testing?

Based on docs and your previous replies it isn't possible, thus `gbp
import-dsc --verbose --pristine-tar apt:j4-dmenu-desktop/sid` still
seems to stand as the optimal way to import NMUs onto a Debian
packaging git repo?

The man pages of dgit are extensive and explain well how to interact
with the Debian repository as if it was a git repository, but I am
unable to find descriptions of how to use dgit in team maintained
packages with testing and reviews prior to uploads.



Re: [RFH] Running Python tests that require the source to be installed

2024-10-30 Thread Marcus Schäfer
Hi,

> On Sat, 2024-10-26 at 11:08 +0200, John Paul Adrian Glaubitz wrote:
> > I just realized that this doesn't work because kiwi doesn't have a setup.py
> > anymore but just uses pyproject.toml. Do you know how it works in this case?
> 
> OK, I just removed every custom override and let debhelper and pybuild
> just do its magic. Unfortunately, it turns out that the package now
> requires pytest_container, which doesn't exist in Debian.
> 
> *sigh*

Sorry for the trouble. A PR to fix the unneeded hard pytest_container
requirement is available here:

https://github.com/OSInside/kiwi/pull/2672

So that part will be fixed soon

Thanks much to all of you for the Debian packaging effort

Regards,
Marcus
-- 
 Public Key available via: https://keybase.io/marcus_schaefer/key.asc
 keybase search marcus_schaefer


signature.asc
Description: Digital signature


Re: Most optimal way to import NMU into existing git-buildpackage repository?

2024-10-29 Thread Helmut Grohne
Andrey has already said much of what I could add to the thread, but I
think I can slightly clarify the needs of NMUers.

On Fri, Oct 25, 2024 at 08:45:16AM +0200, Andreas Henriksson wrote:
> I would very much prefer if it was possible in Debian to not allow
> the archive to get out of sync with packaging git repo (for example
> when it lives under salsa.debian.org/debian which uploaders should have
> access to already).

There are three quite fundamental pieces missing to achieve this.

There needs to be a simple way to turn a git commit into a source package.
If the source of truth ever is to become git, the .dsc becomes an export
format and then this becomes a hard requirement. We can turn git commits
into source packages. The problem is that there is not one way to do
this, but about a hundred and you need to know which package uses which.
That does not scale.
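For instance, each of these produces a .dsc from a git tree, and which
one is right depends on the package's workflow (a sketch; the point is
precisely that the choice does not generalize):

    gbp buildpackage -S       # git-buildpackage trees
    dgit build-source         # dgit-compatible trees
    dpkg-buildpackage -S      # plain trees without a helper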

There needs to be a simple way to figure out the commit that corresponds
to an upload. This problem has been approached in two ways. For one
thing, there is DEP14 recommending a particular tag layout, but I think
this is backwards. It assumes that the git repository is trusted, but in
reality git repositories allow for much wider access than Debian
uploads. What we really need is for a source package to know the commit id
it was generated from.

These operations need to round-trip. If you take a source package,
identify the git commit and export it to .dsc, it must be functionally
equivalent to what you started with. Timestamps may differ, but file
contents or the set of contained files very much must not.

To me, these are hard requirements for using maintainer git
repositories for performing NMUs.

Now the dgit users among us will be grinning already, as what I have
written here very much reads like a specification of (parts of) dgit.
Once again, I question whether salsa as we use it now is the solution or
the problem. I note that it is practically possible to push your dgit
history to salsa and then NMUers can easily do meaningful MRs for their
uploads even when your maintainer git has changes that have not yet been
uploaded.
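A sketch of that push (remote and package names are illustrative):

    dgit clone foo sid && cd foo
    git remote add salsa git@salsa.debian.org:debian/foo.git
    git push salsa dgit/sid:refs/heads/dgit-sid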

Helmut



Re: libtool -D_FILE_OFFSET_BITS= (empty) breaks build

2024-10-29 Thread Jakub Wilk

* Simon McVittie , 2024-10-29 14:38:
I would suggest looking for the root cause in some higher-level 
component or in the lcmaps package itself.


$ grep -rP 'D_FILE_OFFSET_BITS=(?!64)' /usr
/usr/lib/x86_64-linux-gnu/pkgconfig/globus-common.pc:Cflags:  
-D_FILE_OFFSET_BITS= -I${includedir}
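Presumably the intended line defines the macro to the value the grep
above treats as correct, i.e. something like this (an assumption; the
actual fix belongs in globus-common):

    Cflags: -D_FILE_OFFSET_BITS=64 -I${includedir}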

--
Jakub Wilk



Re: libtool -D_FILE_OFFSET_BITS= (empty) breaks build

2024-10-29 Thread Simon McVittie
On Tue, 29 Oct 2024 at 16:03:17 +0100, Jakub Wilk wrote:
> $ grep -rP 'D_FILE_OFFSET_BITS=(?!64)' /usr
> /usr/lib/x86_64-linux-gnu/pkgconfig/globus-common.pc:Cflags:  
> -D_FILE_OFFSET_BITS= -I${includedir}

Thanks, I've reported  (plus a wishlist
bug report suggesting the addition of a superficial autopkgtest to
globus-common, which would probably have detected this).
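Such a test could be as small as compiling a trivial program with the
advertised flags -- a sketch of a debian/tests/control stanza, assuming
the usual tool and -dev package names (glibc's headers reject an empty
_FILE_OFFSET_BITS, so this fails on the broken .pc file):

    Test-Command: printf '#include <stdio.h>\nint main(void){return 0;}\n' > "$AUTOPKGTEST_TMP/t.c" && gcc $(pkg-config --cflags globus-common) "$AUTOPKGTEST_TMP/t.c" -o /dev/null
    Depends: build-essential, pkgconf, libglobus-common-dev
    Restrictions: superficial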

smcv



Re: Binary uploads into the archive

2024-10-29 Thread Andrey Rakhmatullin
On Tue, Oct 29, 2024 at 02:56:46PM +0100, Dennis van Dok wrote:
> Coincidentally I did the exact same thing (with igtf-policy-bundle); but this
> is now stuck as it cannot migrate to testing (unless somebody manually
> intervenes).
> 
> I think what I should do is update the release number and do another (source
> only) upload.

Yes, because it builds arch:all packages (otherwise a binNMU would be
enough and one would be scheduled automatically).

-- 
WBR, wRAR


signature.asc
Description: PGP signature


Re: libtool -D_FILE_OFFSET_BITS= (empty) breaks build

2024-10-29 Thread Simon McVittie
On Tue, 29 Oct 2024 at 14:52:17 +0100, Dennis van Dok wrote:
> https://buildd.debian.org/status/fetch.php?pkg=lcmaps&arch=amd64&ver=1.6.6-3.1%2Bb2&stamp=1730151515&file=log
> 
> libtool sets [_FILE_OFFSET_BITS] to an empty string. What I think was supposed
> to happen is not defining this at all; leaving it empty makes the expression 
> syntax
> invalid.

libtool is not choosing to set this to an empty string: some higher-level
component (perhaps Autoconf or Automake or the package-specific build
system) is asking libtool to set _FILE_OFFSET_BITS empty (by giving it the
command-line option "-D_FILE_OFFSET_BITS="), and libtool is obediently
passing on that option to gcc.

I recently uploaded dbus_1.14.10-6 which is another
autoconf/automake/libtool package, and that built successfully on all
release and -ports architectures (except for alpha and hppa where it has
not been tried yet), so it seems like this is not a completely general
problem with autoconf/automake/libtool or with dpkg-buildflags.

I would suggest looking for the root cause in some higher-level component
or in the lcmaps package itself.

smcv



Re: Binary uploads into the archive

2024-10-29 Thread Marco d'Itri
On Oct 29, Dennis van Dok  wrote:

> I think what I should do is update the release number and do another (source
> only) upload.
Correct: these uploads are supposed to be accepted because binary 
uploads are still needed for passing through NEW (in that case: it's to 
target experimental for the first upload of the pair).

-- 
ciao,
Marco


signature.asc
Description: PGP signature


Re: Binary uploads into the archive

2024-10-29 Thread Dennis van Dok

On 28-10-2024 22:09, Daniel Leidert wrote:

Hi,

by accident, I uploaded a binary package today (ruby-rouge) instead of
its source-package into the archive. I expected the binary package
being rejected once I discovered my mistake. But it was accepted
instead, and it was also not being rebuilt. Didn't we turn off binary
package uploads? Shouldn't this be rejected?


Coincidentally I did the exact same thing (with igtf-policy-bundle); but 
this is now stuck as it cannot migrate to testing (unless somebody 
manually intervenes).


I think what I should do is update the release number and do another 
(source only) upload.


Dennis



Re: Binary uploads into the archive

2024-10-28 Thread Andrey Rakhmatullin
On Mon, Oct 28, 2024 at 10:09:16PM +0100, Daniel Leidert wrote:
> by accident, I uploaded a binary package today (ruby-rouge) instead of
> its source-package into the archive. I expected the binary package
> being rejected once I discovered my mistake. But it was accepted
> instead, and it was also not being rebuilt. Didn't we turn off binary
> package uploads? Shouldn't this be rejected?

We didn't, and they are still required for packages that need to go
through NEW. They are also the only way to bootstrap packages.

-- 
WBR, wRAR


signature.asc
Description: PGP signature


Re: s390x architecture status?

2024-10-28 Thread Elizabeth K. Joseph
On Mon, Oct 28, 2024 at 5:49 AM Marco d'Itri  wrote:
>
> On Oct 28, Chris Hofstaedtler  wrote:
>
> > It also appears true that IBM has an interest in s390x, but today
> I wonder if their interest could actually be just in Debian providing
> a base for the Ubuntu port (which I understand used to be funded by
> IBM).
> And if this is still true now that IBM owns Red Hat.
>
> You have been clear enough in documenting how this port has no porters
> and probably also very few users. I think that somebody who actually has
> an interest in it needs to step up very soon to maintain it, or else the
> first step would be to declare it not a release architecture anymore.

It is still true, and as a data point the IBM LinuxONE Open Source
Cloud has Debian as one of the operating systems that we offer to open
source software projects doing development, due to popular demand.

I work at IBM and have been in touch with Berli Gayathri as she gets
up to speed with the Debian community and is learning the intricacies
of being involved with the project. We're also looking internally at
IBM to see if any of the Debian Developers who already work here are
able to help us continue maintaining this port. It's taking some time,
but we are still eager to see the support continue, and not just as a
base for Ubuntu.

-- 
Elizabeth K. Joseph || Lyz || pleia2



Re: s390x architecture status?

2024-10-28 Thread Marco d'Itri
On Oct 28, Chris Hofstaedtler  wrote:

> It also appears true that IBM has an interest in s390x, but today
I wonder if their interest could actually be just in Debian providing 
a base for the Ubuntu port (which I understand used to be funded by 
IBM).
And if this is still true now that IBM owns Red Hat.

You have been clear enough in documenting how this port has no porters 
and probably also very few users. I think that somebody who actually has 
an interest in it needs to step up very soon to maintain it, or else the 
first step would be to declare it not a release architecture anymore.

-- 
ciao,
Marco


signature.asc
Description: PGP signature


Re: s390x architecture status?

2024-10-28 Thread Simon McVittie
On Mon, 28 Oct 2024 at 10:24:04 +0100, Chris Hofstaedtler wrote:
> b) various packages already ignore s390x (gnome? others?)

GNOME is currently buildable on s390x, but we have to ignore a lot of test
failures related to incorrect endianness of colour channels in image data
(for example in GTK 3, GTK 4, librsvg) and investigating those issues
tends to consume a much larger amount of maintainer time than we can
really justify.

We did temporarily remove high-level parts of GNOME (GNOME Shell
and friends) from s390x in a previous release cycle, because the
mozjs JavaScript engine was known-broken on s390x at the time, but in
current versions it appears to be working mostly as intended. However,
I'm reasonably sure that GNOME Shell has never been genuinely useful on
IBM mainframes: there is a not-entirely-hypothetical use-case for running
GTK/GNOME *apps* on one of these machines via X-forwarding or some
similar mechanism, but running a full Wayland compositor on a mainframe
seems really unlikely to be useful, particularly one like GNOME Shell
that is designed to make use of a GPU.

> I acknowledge that s390x is the last big-endian release arch.
> While this fact may be the cause of interest for curious people,
> in general it seems to cause more problems than we need.

I believe the latest status from porters outside Debian is that the
GTK and librsvg issues are believed to be caused by a regression in
src:pixman, and not actually the GNOME libraries' fault. The regression
in pixman appears to have been caused by a well-intentioned porter trying
to solve some other endianness bug but instead working around it in the
wrong layer, and last time I looked, it was believed to have been fixed
upstream but not yet fixed in Debian.

Based on fixes I've contributed to graphics-related libraries like GTK
and SDL, I've come to believe that the only way to solve endianness
issues without regressions is for each layer of the stack to have clear
documentation about the byte-order that is intended, for example
https://github.com/libsdl-org/SDL/commit/3698630bbc8e2ac501127c9c522cc0463a6c1565
which explains the meaning of each of SDL's pixel formats. Otherwise,
it's very easy for porters for a big-endian architecture to "fix" an
endianness-swap by reversing the endianness at the wrong layer of the
library stack, resulting in the correct result via "compensating errors"
for the particular library stack they are currently looking at, but with
the side-effect of regressions in a different library stack.

For example, if GTK is handling endianness wrongly, and a porter tries
to solve that by swapping the endianness in a lower-level library that
is used by both GTK and Qt, then they'll make GTK work as intended but
break Qt. Instead, the correct fix is to make each library work according
to its documentation (or, if documentation is missing, first document the
behaviour it was designed to have, and then fix any deviations from that).

> Maybe, motivated porters will show up and maintain the architecture and
> its key packages (like s390-tools).

Another factor that is relevant to s390x is that many newer architectures
like aarch64 and riscv64 are converging on some common architecture
properties (such as booting via EFI from an NVMe, SATA or SCSI disk,
having USB connectivity, having a VGA-compatible GPU and/or a serial
console, and generally behaving a bit like a PC), but because s390x is
a mainframe architecture, it has various s390x-specific quirks such as
its own special disk and terminal devices.

This means that it will tend to need special-cased code paths in
infrastructure tools like autopkgtest-build-qemu[1], which makes it
harder for a typical Debian developer to investigate failures even if
they are willing to spend time on it.

Also, unlike typical newer ports like riscv64, the minimal s390x machine
is a large, power-hungry, expensive mainframe: an interested developer
can't simply set up a dev board on their desk like they can for embedded
or "PC-like" ports.

smcv

[1] https://lists.debian.org/debian-devel/2024/10/msg00284.html



Re: Most optimal way to import NMU into existing git-buildpackage repository?

2024-10-27 Thread Sean Whitton
Hello,

On Sun 27 Oct 2024 at 05:29pm GMT, Otto Kekäläinen wrote:

> Hi!
>
>> > Seems this is still the most optimal way to ensure git is correct:
>> >
>> >gbp import-dsc --verbose --pristine-tar apt:j4-dmenu-desktop/sid
>> >
>> > Also, dgit pull can be used to get the latest source automatically, but 
>> > unfortunately those
>> > git commits are made as a custom "Debian as a git repo" representation, 
>> > and is not
>> > compatible with using CI testing and code review before upload in the way 
>> > many of us
>> > are doing on Salsa currently.
>>
>> 'dgit pull' integrates the NMU automatically, when it can.  It doesn't
>> just fetch the source.  I don't follow how it's different from 'gbp
>> import-dsc'.  Could you say more?
>
> In a gbp checkout of git@salsa.debian.org:debian/j4-dmenu-desktop.git,
> how would you invoke 'dgit pull sid' to import the NMU?
>
> Without any parameters, it will create branch 'dgit/sid' which has
> unrelated history and patches are applied and nothing can be merged or
> cherry-picked to the git-buildpackage master branch. Perhaps I am just
> missing something on how this should work, or perhaps
> https://manpages.debian.org/unstable/dgit/dgit-maint-gbp.7.en.html#INCORPORATING_NMUS
> implies the functionality isn't yet there?

Ah, I was thinking that you had already been using 'dgit --gbp push' to
upload the package.  In that case the histories would be related, just
with some additional commits on top, and a manual merge would be
possible.

-- 
Sean Whitton



Re: Most optimal way to import NMU into existing git-buildpackage repository?

2024-10-27 Thread Sean Whitton
Hello,

On Sun 27 Oct 2024 at 05:29pm GMT, Otto Kekäläinen wrote:

> Hi!
>
>> > Seems this is still the most optimal way to ensure git is correct:
>> >
>> >gbp import-dsc --verbose --pristine-tar apt:j4-dmenu-desktop/sid
>> >
>> > Also, dgit pull can be used to get the latest source automatically, but 
>> > unfortunately those
>> > git commits are made as a custom "Debian as a git repo" representation, 
>> > and is not
>> > compatible with using CI testing and code review before upload in the way 
>> > many of us
>> > are doing on Salsa currently.
>>
>> 'dgit pull' integrates the NMU automatically, when it can.  It doesn't
>> just fetch the source.  I don't follow how it's different from 'gbp
>> import-dsc'.  Could you say more?
>
> In a gbp checkout of git@salsa.debian.org:debian/j4-dmenu-desktop.git,
> how would you invoke 'dgit pull sid' to import the NMU?
>
> Without any parameters, it will create branch 'dgit/sid' which has
> unrelated history and patches are applied and nothing can be merged or
> cherry-picked to the git-buildpackage master branch. Perhaps I am just
> missing something on how this should work, or perhaps
> https://manpages.debian.org/unstable/dgit/dgit-maint-gbp.7.en.html#INCORPORATING_NMUS
> implies the functionality isn't yet there?

This is one of the cases where it can't do it completely automatically,
but a manual merge may be possible.

-- 
Sean Whitton



Re: Most optimal way to import NMU into existing git-buildpackage repository?

2024-10-27 Thread Otto Kekäläinen
Hi!

> > Seems this is still the most optimal way to ensure git is correct:
> >
> >gbp import-dsc --verbose --pristine-tar apt:j4-dmenu-desktop/sid
> >
> > Also, dgit pull can be used to get the latest source automatically, but 
> > unfortunately those
> > git commits are made as a custom "Debian as a git repo" representation, and 
> > is not
> > compatible with using CI testing and code review before upload in the way 
> > many of us
> > are doing on Salsa currently.
>
> 'dgit pull' integrates the NMU automatically, when it can.  It doesn't
> just fetch the source.  I don't follow how it's different from 'gbp
> import-dsc'.  Could you say more?

In a gbp checkout of git@salsa.debian.org:debian/j4-dmenu-desktop.git,
how would you invoke 'dgit pull sid' to import the NMU?

Without any parameters, it will create branch 'dgit/sid' which has
unrelated history and patches are applied and nothing can be merged or
cherry-picked to the git-buildpackage master branch. Perhaps I am just
missing something on how this should work, or perhaps
https://manpages.debian.org/unstable/dgit/dgit-maint-gbp.7.en.html#INCORPORATING_NMUS
implies the functionality isn't yet there?



Re: Bug#1072521: fakeroot hangs on some commands with faked-sysv using 100% CPU

2024-10-27 Thread Clint Adams
On Sun, Oct 27, 2024 at 04:32:57PM +0100, Chris Hofstaedtler wrote:
> Do you have the bandwidth to drive this as a stable-update sometime
> soon?

Probably not within the next two weeks.

https://bugs.launchpad.net/ubuntu/+source/fakeroot/+bug/2068702 might
be relevant as well if any very old kernels are involved.



Re: Bug#1072521: fakeroot hangs on some commands with faked-sysv using 100% CPU

2024-10-27 Thread Chris Hofstaedtler
Control: found 1072521 fakeroot/1.25.3-1.1
Control: found 1072521 fakeroot/1.31-1.2

(CCing d-devel@ for awareness)

Hi Clint,

On Wed, Jun 05, 2024 at 07:54:07PM +, Debian Bug Tracking System wrote:
>  fakeroot (1.35-1) unstable; urgency=medium
>  .
>* New upstream version.
>  - Use close_range when available.  closes: #1072521.

thanks for fixing this (fd closing with "unlimited" range) in
unstable. The same problem occurs in build(d) chroots for stable
(bookworm) and older, if the host is running trixie or newer.

As a result, basically everyone preparing updates for bookworm is
affected, when building them on an unstable/trixie host.

It would be great if the fix could be backported to bookworm
and maybe bullseye.  Aurelien Jarno mentioned that the buildds will
need this too.

Do you have the bandwidth to drive this as a stable-update sometime
soon?

Many thanks,
Chris



Re: Debian CI and autopkgtest artifacts

2024-10-26 Thread Simon McVittie
On Sat, 26 Oct 2024 at 21:45:15 +, Daniel Markstedt wrote:
> The autopkgtest docs suggest that by putting a file in a particular
> directory would have it picked up as a test artifact for the CI job.

Yes. During your test, the name of that directory is given by the
environment variable AUTOPKGTEST_ARTIFACTS.

> Would someone know where this dir would be, relative to the source dir?

You cannot predict it ahead of time. The specification is that you need
to read the environment variable AUTOPKGTEST_ARTIFACTS during testing
to find out what the correct location is for this particular test run.

If you're running an upstream test suite that writes out artifacts in
some location of its choice, you can wrap it in a shell script (or a
Perl or Python script or whatever you prefer) that copies the upstream
test suite's artifacts from that location into $AUTOPKGTEST_ARTIFACTS
(for example see debian/tests/upstream-runtime-tests in the keyutils
source package).
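A minimal sketch of such a wrapper (the upstream command and result
paths are illustrative):

    #!/bin/sh
    # debian/tests/run-upstream: run upstream's suite, then keep its output
    set -e
    status=0
    make -C tests check || status=$?
    # copy whatever upstream wrote so it survives as a CI artifact
    cp -r tests/results/. "$AUTOPKGTEST_ARTIFACTS"/ || true
    exit "$status"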

Or if you're running a tool or an upstream test suite that can be told
to write logs or other artifacts to a particular location, you can wrap
it in a script that tells it to put artifacts in $AUTOPKGTEST_ARTIFACTS
(for example see debian/tests/installed-tests in the gtk4 source package).

smcv



Re: [RFH] Running Python tests that require the source to be installed

2024-10-26 Thread Stefano Rivera
Hi John (2024.10.26_08:45:15_+)
> > tox tests in Debian package builds are a little different because we use
> > --system-site-packages virtualenvs, but they can be a good way to deal
> > with this.
> 
> Can you name any package using this mechanism so I can have a look?

wheel uses tox tests, in about the most straightforward way possible
(declare build-depends on tox).

Sometimes you'll need to patch tox.ini to add things to
allowlist_externals, because we're using --system-site-packages.

You'll also need to Build-Depend on everything that tox wants to install
into the virtualenv.
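In debian/control that looks roughly like this (a sketch; pybuild runs
the test suite under tox when tox appears in the build dependencies, and
the exact list depends on what tox.ini wants to install):

    Build-Depends: debhelper-compat (= 13),
                   dh-sequence-python3,
                   pybuild-plugin-pyproject,
                   python3-all,
                   python3-pytest,
                   tox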

Some more examples:

$ reverse-depends -b tox
Reverse-Build-Depends
=
* anosql
* ceph
* custodia
* diskcache
* django-auth-ldap
* duplicity
* enlighten
* flask-jwt-simple
* git-imerge
* gitsome
* gnome-keysign
* gubbins
* mitmproxy
* pytest-datadir
* pytest-mypy-plugins
* python-django-solo
* python-json-log-formatter
* python-kdcproxy
* python-magic
* python-mrcfile
* python-nox
* python-pkginfo
* python-prettylog
* python-pyforge
* python-pyvmomi
* python-versioneer
* python-w3lib
* pytrainer
* rdflib-sqlalchemy
* reprotest
* sagemath
* sagenb-export
* sshtunnel
* tox-current-env
* wheel

Reverse-Build-Depends-Indep
===
* awscli
* python-bottle
* python-scrapy

> 
> I'm not really experienced with tox, so it would be great to have some
> guidance in the form of sample code.
> 
> Adrian
> 
> -- 
>  .''`.  John Paul Adrian Glaubitz
> : :' :  Debian Developer
> `. `'   Physicist
>   `-GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913
> 

-- 
Stefano Rivera
  http://tumbleweed.org.za/
  +1 415 683 3272



Re: autopkgtest on s390x (was: Migration blocked by tests of depending package)

2024-10-26 Thread Simon McVittie
On Sat, 26 Oct 2024 at 11:02:29 +0200, Joachim Zobel wrote:
> I have tried all available --boot options for autopkgtest with s390x
> and was not able to create an image

I suspect you mean autopkgtest-{build,virt}-qemu. autopkgtest itself (the
test runner) does not have a --boot option.

The ci.debian.net infrastructure does not use autopkgtest-{build,virt}-qemu
on s390x. Instead, it sends jobs to a pre-existing s390x worker machine
(probably a VM, but I don't know) and the worker machine runs the tests
using autopkgtest-virt-lxc, most likely something like this:

autopkgtest ./package.dsc -- lxc autopkgtest-sid

where the autopkgtest-sid lxc image was probably generated by
autopkgtest-build-lxc.
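Creating that image would look roughly like this (a sketch;
ci.debian.net's exact setup may differ):

    sudo autopkgtest-build-lxc debian sid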

To run an emulated or virtual s390x machine via qemu, there are two steps:
build the image (autopkgtest-build-qemu or manually), and then boot it
(autopkgtest-virt-qemu or manually).
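On architectures where image building is supported, the two steps look
roughly like this (image path is illustrative):

    sudo autopkgtest-build-qemu sid ./autopkgtest-sid.img
    autopkgtest ./package.dsc -- qemu ./autopkgtest-sid.img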

If there is a way for autopkgtest-build-qemu to install a bootloader
into a qemu image and make it bootable by qemu, s390x porters or other
interested developers would be very welcome to provide a MR adding
it. I would guess that it will be most similar to the --boot=bios and
--boot=ieee1275 code paths. Depending on how booting a s390x VM works,
it might be possible to do this purely within autopkgtest-build-qemu,
or it might require changes in vmdb2 first. I assume this would have
something to do with the zipl bootloader, which is packaged in s390-tools
and seems to be vaguely "the same shape" as syslinux, but I don't know
the specifics.

autopkgtest-virt-qemu can run images prepared via autopkgtest-build-qemu,
but can also run images prepared in some other way (for example manually,
using debian-installer). It only needs a special --boot option if there
is something extra that needs to be added to the qemu command-line to
make the bootloader work: currently --boot=bios, --boot=ieee1275 and
--boot=none are functionally equivalent (they just run qemu in the obvious
way and hope that a bootloader comes up), and it's only --boot=efi that
is special (it needs to add appropriate emulated flash devices containing
EFI firmware).

smcv



Re: Most optimal way to import NMU into existing git-buildpackage repository?

2024-10-26 Thread Holger Levsen
On Fri, Oct 25, 2024 at 03:03:53PM +, Holger Levsen wrote:
> the current expectation is that an NMU bug is opened, which contains
> the debdiff.
> 
> https://www.debian.org/doc/manuals/developers-reference/developers-reference.en.html#when-and-how-to-do-an-nmu
> 
> "... Then, you must send a patch with the differences between the current
>  package and your proposed NMU to the BTS. The nmudiff script in the
>  devscripts package might be helpful"

FWIW, I think this should stay the default when doing NMUs but I also think
it should be (spelled out that it's) equally fine to open a MR on salsa
*if* the specific package somehow specifies this is ok.

I also think that currently no package should be able to opt-out from
getting NMUdiffs via the BTS, because it's good to have one workflow which
works for *all* packages.


-- 
cheers,
Holger

 ⢀⣴⠾⠻⢶⣦⠀
 ⣾⠁⢠⠒⠀⣿⡁  holger@(debian|reproducible-builds|layer-acht).org
 ⢿⡄⠘⠷⠚⠋⠀  OpenPGP: B8BF54137B09D35CF026FE9D 091AB856069AAA1C
 ⠈⠳⣄

There is no such thing as trans rights or gay rights or lesbian rights. There
are human rights of people who are gay, human rights of people who are lesbian,
and human rights of people who are trans. (@victor_madrigal)


signature.asc
Description: PGP signature


Re: [RFH] Running Python tests that require the source to be installed

2024-10-26 Thread John Paul Adrian Glaubitz
On Sat, 2024-10-26 at 11:08 +0200, John Paul Adrian Glaubitz wrote:
> I just realized that this doesn't work because kiwi doesn't have a setup.py
> anymore but just uses pyproject.toml. Do you know how it works in this case?

OK, I just removed every custom override and let debhelper and pybuild
just do its magic. Unfortunately, it turns out that the package now
requires pytest_container, which doesn't exist in Debian.

*sigh*

Adrian

-- 
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer
`. `'   Physicist
  `-GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913



Re: [RFH] Running Python tests that require the source to be installed

2024-10-26 Thread John Paul Adrian Glaubitz
Hi Timo,

On Sat, 2024-10-26 at 10:15 +0200, John Paul Adrian Glaubitz wrote:
> On Fri, 2024-10-25 at 22:19 +0200, Timo Röhling wrote:
> > I ran into this issue with old-style setuptools packages (i.e., 
> > packages with a setup.py); AFAIK the entry_points mechanism needs 
> > valid egg-info or dist-info metadata in the PYTHONPATH. My 
> > workaround was
> > 
> > 
> > export PYBUILD_BEFORE_TEST=\
> >  {interpreter} setup.py egg_info; \
> >  cp -r {dir}/src/*.egg-info {build_dir}
> > 
> > export PYBUILD_AFTER_TEST=\
> >  rm -r {dir}/src/*.egg-info {build_dir}/*.egg-info
> 
> Thanks, this is what I was looking for. I will give this a try.

I just realized that this doesn't work because kiwi doesn't have a setup.py
anymore but just uses pyproject.toml. Do you know how it works in this case?

Adrian

-- 
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer
`. `'   Physicist
  `-GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913



Re: Migration blocked by tests of depending package

2024-10-26 Thread Joachim Zobel
On Friday 25.10.2024 at 08:08 +0200, Paul Gevers wrote:
> > 2. I am unable to reproduce the failing test since there is no working
> > --boot option for autopkgtest and and I can't find any info on how to
> > run qemu for that architecture.
> 
> 
> I don't understand the remark in the first part of the sentence. Can you 
> elaborate?

I have tried all available --boot options for autopkgtest with s390x
and was not able to create an image. I haven't looked closely into
other options but it seems to be difficult.

Thanks,
Joachim



Re: [RFH] Running Python tests that require the source to be installed

2024-10-26 Thread John Paul Adrian Glaubitz
Hello Stefano,

On Sat, 2024-10-26 at 02:45 +, Stefano Rivera wrote:
> If it actually needs the .dist-info to be on path (fully installed), you
> can try running the tests under tox. That's probably what the upstream
> does.
> 
> tox tests in Debian package builds are a little different because we use
> --system-site-packages virtualenvs, but they can be a good way to deal
> with this.

Can you name any package using this mechanism so I can have a look?

I'm not really experienced with tox, so it would be great to have some
guidance in the form of sample code.

Adrian

-- 
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer
`. `'   Physicist
  `-GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913



Re: [RFH] Running Python tests that require the source to be installed

2024-10-26 Thread John Paul Adrian Glaubitz
Hi Timo,

On Fri, 2024-10-25 at 22:19 +0200, Timo Röhling wrote:
> I ran into this issue with old-style setuptools packages (i.e., 
> packages with a setup.py); AFAIK the entry_points mechanism needs 
> valid egg-info or dist-info metadata in the PYTHONPATH. My 
> workaround was
> 
> 
> export PYBUILD_BEFORE_TEST=\
>  {interpreter} setup.py egg_info; \
>  cp -r {dir}/src/*.egg-info {build_dir}
> 
> export PYBUILD_AFTER_TEST=\
>  rm -r {dir}/src/*.egg-info {build_dir}/*.egg-info

Thanks, this is what I was looking for. I will give this a try.

Adrian

-- 
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer
`. `'   Physicist
  `-GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913



Re: [RFH] Running Python tests that require the source to be installed

2024-10-25 Thread Stefano Rivera
Hi John (2024.10.24_10:44:45_+)
> I am maintaining the package src:kiwi [1] which hasn't been updated in
> Debian for some time since upstream has added tests that only work when
> the package source is installed into the test environment, i.e. available
> through PYTHONPATH.

If it actually needs the .dist-info to be on path (fully installed), you
can try running the tests under tox. That's probably what the upstream
does.

tox tests in Debian package builds are a little different because we use
--system-site-packages virtualenvs, but they can be a good way to deal
with this.

Stefano

-- 
Stefano Rivera
  http://tumbleweed.org.za/
  +1 415 683 3272



Re: [RFH] Running Python tests that require the source to be installed

2024-10-25 Thread Timo Röhling

Hi Adrian,

* John Paul Adrian Glaubitz [2024-10-24 13:11]:
> I had to look it up and the mechanism used is called
> "entry_points". The kiwi package adds such entry_points and wants
> to test them in its testsuite.
>
> Thus, I need to figure out how to make those entry_points visible
> from the build environment so that the testsuite can find and test
> them.
>
> Is there any other Debian Python package that uses entry points?
I ran into this issue with old-style setuptools packages (i.e., 
packages with a setup.py); AFAIK the entry_points mechanism needs 
valid egg-info or dist-info metadata in the PYTHONPATH. My 
workaround was



export PYBUILD_BEFORE_TEST=\
{interpreter} setup.py egg_info; \
cp -r {dir}/src/*.egg-info {build_dir}

export PYBUILD_AFTER_TEST=\
rm -r {dir}/src/*.egg-info {build_dir}/*.egg-info


Cheers
Timo


--
⢀⣴⠾⠻⢶⣦⠀   ╭╮
⣾⠁⢠⠒⠀⣿⡁   │ Timo Röhling   │
⢿⡄⠘⠷⠚⠋⠀   │ 9B03 EBB9 8300 DF97 C2B1  23BF CC8C 6BDD 1403 F4CA │
⠈⠳⣄   ╰╯


signature.asc
Description: PGP signature


Re: Most optimal way to import NMU into existing git-buildpackage repository?

2024-10-25 Thread Sean Whitton
Hello,

On Fri 25 Oct 2024 at 08:07pm +01, Otto Kekäläinen wrote:

> Thanks for all the comments!
>
> Trying to summarize and expand on the points raised:
>
> Seems this is still the most optimal way to ensure git is correct:
>
>gbp import-dsc --verbose --pristine-tar apt:j4-dmenu-desktop/sid
>
> Also, dgit pull can be used to get the latest source automatically, but 
> unfortunately those
> git commits are made as a custom "Debian as a git repo" representation, and 
> is not
> compatible with using CI testing and code review before upload in the way 
> many of us
> are doing on Salsa currently.

'dgit pull' integrates the NMU automatically, when it can.  It doesn't
just fetch the source.  I don't follow how it's different from 'gbp
import-dsc'.  Could you say more?

-- 
Sean Whitton



Re: Most optimal way to import NMU into existing git-buildpackage repository?

2024-10-25 Thread Otto Kekäläinen
Thanks for all the comments!

Trying to summarize and expand on the points raised:

Seems this is still the most optimal way to ensure git is correct:

   gbp import-dsc --verbose --pristine-tar apt:j4-dmenu-desktop/sid

Also, dgit pull can be used to get the latest source automatically, but
unfortunately those git commits are made as a custom "Debian as a git repo"
representation, and is not compatible with using CI testing and code review
before upload in the way many of us are doing on Salsa currently.

Seems also others are occasionally annoyed by NMUs. Ideally mass change
drivers would do mass bug filings and let maintainers upload instead of
resorting immediately to NMUs.

The post-upload NMU bug report and diff will help ensure that the
maintainer discovers the NMU happened, but reconciling the packaging git
contents to 100% match what was uploaded is still best done with the
command above.

Packages that are actively maintained should in general never need an NMU.
If packages are abandoned they should move to the salsa.debian.org/debian
namespace, as that would solve the access permissions and allow for some
changes to be pushed to git.

The Debian Janitor could perhaps be used both to sync NMUs back to git
repos, and as a replacement for NMUs in some cases. Any repo that has Salsa
CI passing and/or a gbp.conf is pretty easy to do an MR on, but doing
mass-MRs is a complex topic that deserves documentation of its own.

Also, wider standardization of packaging workflows, and use of
machine-readable configuration such as gbp.conf, is needed to make it
easier to work via git. However, there is, and will likely remain, an
inherent long-term conflict in the duality that uploads can happen
irrespective of any version control in git, with the Debian repositories
themselves serving as the version control.
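A minimal machine-readable debian/gbp.conf of the kind meant here (a
sketch; the branch names follow DEP-14 but vary per team):

    [DEFAULT]
    debian-branch = debian/latest
    upstream-branch = upstream/latest
    pristine-tar = True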

Anyway, a mass-MR filing does not automatically result in an upload
happening for every package, so some NMUs are likely to occur. Thus,
knowing the easiest one-liner to reconcile the git packaging repo is
still the most important thing for DDs.


Re: Most optimal way to import NMU into existing git-buildpackage repository?

2024-10-25 Thread Andrey Rakhmatullin
On Fri, Oct 25, 2024 at 12:55:48PM -0400, Noah Meyerhans wrote:
> > > Honestly I'd be happy if we could just establish some expectation that
> > > the NMUer open a merge request for their changes.  It can be merged
> > > later without losing anything or requiring additional work.  Enforcement
> > > of this expectation would be even better, of course.
> > 
> > the current expectation is that an NMU bug is opened, which contains
> > the debdiff.
> > 
> > https://www.debian.org/doc/manuals/developers-reference/developers-reference.en.html#when-and-how-to-do-an-nmu
> > 
> > "... Then, you must send a patch with the differences between the current
> >  package and your proposed NMU to the BTS. The nmudiff script in the
> >  devscripts package might be helpful"
> 
> Right, and that's not a whole lot more helpful than requiring me to
> > download the source package and generate the debdiff myself.  Sure all
> the content is there, but it's still a tedious amount of work that's
> easily forgotten.  Further, it loses a little bit of metadata, in that
> the git commit now comes from me, rather than the person doing the
> actual NMU.
> 
> Yes, I know this is trivial, and yes I know I can fix it with more work;
> I don't want NMUs to make more work for me.  It makes me not like NMUs.

Sure.
We have two options here: make the project do fewer NMUs by doing more
maintainer uploads, or standardize and mandate a git workflow or two.
We don't like *doing* NMUs either.

-- 
WBR, wRAR


signature.asc
Description: PGP signature


Re: Most optimal way to import NMU into existing git-buildpackage repository?

2024-10-25 Thread Noah Meyerhans
On Fri, Oct 25, 2024 at 03:03:53PM +, Holger Levsen wrote:
> > Honestly I'd be happy if we could just establish some expectation that
> > the NMUer open a merge request for their changes.  It can be merged
> > later without losing anything or requiring additional work.  Enforcement
> > of this expectation would be even better, of course.
> 
> the current expectation is that an NMU bug is opened, which contains
> the debdiff.
> 
> https://www.debian.org/doc/manuals/developers-reference/developers-reference.en.html#when-and-how-to-do-an-nmu
> 
> "... Then, you must send a patch with the differences between the current
>  package and your proposed NMU to the BTS. The nmudiff script in the
>  devscripts package might be helpful"

Right, and that's not a whole lot more helpful than requiring me to
download the source package and generate the debdiff myself.  Sure all
the content is there, but it's still a tedious amount of work that's
easily forgotten.  Further, it loses a little bit of metadata, in that
the git commit now comes from me, rather than the person doing the
actual NMU.

Yes, I know this is trivial, and yes I know I can fix it with more work;
I don't want NMUs to make more work for me.  It makes me not like NMUs.

noah



Re: Most optimal way to import NMU into existing git-buildpackage repository?

2024-10-25 Thread Holger Levsen
On Fri, Oct 25, 2024 at 10:06:56AM -0400, Noah Meyerhans wrote:
> Honestly I'd be happy if we could just establish some expectation that
> the NMUer open a merge request for their changes.  It can be merged
> later without losing anything or requiring additional work.  Enforcement
> of this expectation would be even better, of course.

the current expectation is that an NMU bug is opened, which contains
the debdiff.

https://www.debian.org/doc/manuals/developers-reference/developers-reference.en.html#when-and-how-to-do-an-nmu

"... Then, you must send a patch with the differences between the current
 package and your proposed NMU to the BTS. The nmudiff script in the
 devscripts package might be helpful"


-- 
cheers,
Holger

 ⢀⣴⠾⠻⢶⣦⠀
 ⣾⠁⢠⠒⠀⣿⡁  holger@(debian|reproducible-builds|layer-acht).org
 ⢿⡄⠘⠷⠚⠋⠀  OpenPGP: B8BF54137B09D35CF026FE9D 091AB856069AAA1C
 ⠈⠳⣄

"Der Tod der menschlichen Empathie ist eines der frühesten und deutlichsten
Zeichen dafür, dass eine Kultur gerade in Barbarei verfällt." Hannah Arendt


signature.asc
Description: PGP signature


Re: Most optimal way to import NMU into existing git-buildpackage repository?

2024-10-25 Thread Andrey Rakhmatullin
On Fri, Oct 25, 2024 at 10:06:56AM -0400, Noah Meyerhans wrote:
> > I would very much prefer if it was possible in Debian to not allow
> > the archive to get out of sync with packaging git repo (for example
> > when it lives under salsa.debian.org/debian which uploaders should have
> > access to already).
> > That would probably also require some "tag to upload" solution to be
> > implemented first I presume.
> 
> Honestly I'd be happy if we could just establish some expectation that
> the NMUer open a merge request for their changes. 

I write this too often in recent months (maybe because I did much
fewer NMUs before 2024 than I did in 2024 for t64 and so I didn't care
before), but: if an NMUer wants to modify a random repo, they need to:

1. Identify if the repo is up to date.
2. Identify which workflow is used in the repo, and whether that workflow
is some typical one or some random undocumented one.
3. Study the workflow used, to know how to rebuild the package, and
ideally at this step rebuild the package without modifications to make
sure it works.
4. Study the workflow used some more, to know how to add modifications to
the package, add them, rebuild the package once again, make sure the
built package is correct.

This is literally impossible unless the workflow is typical and you
guessed it correctly, and is too much additional work even when it's
possible. 

-- 
WBR, wRAR


signature.asc
Description: PGP signature


Re: Most optimal way to import NMU into existing git-buildpackage repository?

2024-10-25 Thread Sean Whitton
Hello,

On Thu 24 Oct 2024 at 09:36pm +01, Otto Kekäläinen wrote:

> Hi,
>
> I occasionally run into the situation that a package has been NMU'd or
> otherwise updated directly into the Debian repositories,
> bypassing/ignoring that a packaging git repository existed. I was
> wondering what techniques other DDs use to
> 1) detect that the git packaging repository was bypassed/diverged?

If you do all your uploads using 'dgit push', it will always detect this.

> 2) bring the git repository back in sync with minimal effort?

If you are using a patches-applied workflow or you have no patches,
'dgit pull' will do this.

If you are using patches-unapplied, you might be able to 'dgit fetch'
and then manually merge.

-- 
Sean Whitton



Re: Most optimal way to import NMU into existing git-buildpackage repository?

2024-10-25 Thread Noah Meyerhans
On Fri, Oct 25, 2024 at 08:45:16AM +0200, Andreas Henriksson wrote:
> I would very much prefer if it was possible in Debian to not allow
> the archive to get out of sync with packaging git repo (for example
> when it lives under salsa.debian.org/debian which uploaders should have
> access to already).
> That would probably also require some "tag to upload" solution to be
> implemented first I presume.

Honestly I'd be happy if we could just establish some expectation that
the NMUer open a merge request for their changes.  It can be merged
later without losing anything or requiring additional work.  Enforcement
of this expectation would be even better, of course.

noah



Re: Most optimal way to import NMU into existing git-buildpackage repository?

2024-10-25 Thread Sean Whitton
Hello,

On Fri 25 Oct 2024 at 12:37pm +05, Andrey Rakhmatullin wrote:

> Not sure what's the logic here, but I feel like what you thought about may
> require some "tag to upload" solution not to be just implemented but also
> mandated, which won't happen.

We do intend to automatically import all uploads back into dgit-repos.
So we will have a gitified source of truth, which is a step forward.

-- 
Sean Whitton


