Re: Using release-monitoring.org [was: uscan roadmap]

2021-12-03 Thread Stephan Lachnit
On Thu, Dec 2, 2021 at 11:52 PM Paul Wise  wrote:
>
> On Thu, 2021-12-02 at 23:36 +0100, Stephan Lachnit wrote:
>
> > If I understand correctly, release-monitoring already offers such a
> > mapping [1].
>
> It seems like the Anitya distro mapping needs to be done manually once
> per package, while using the Repology data would automatically get us
> the mapping for each existing package and all future packages.

I mean it looks rather easy to do, just a couple of mouse clicks.
Compare that to writing a watch file at the moment (assuming one has
to do more than copy and paste the github example).

> > Hm, I can't really think of an example where such a thing couldn't
> > also be implemented in release-monitoring.org.
>
> None of the three use-cases I listed can be done by it AFAICT.
>
> It can't check things that it doesn't have a check for, while
> individual package maintainers in various distros will update their
> packages and Repology will notice the new versions.

Then the maintainer would just have to write a check, just like they
have to do now.

Also, mapping on Repology sometimes needs to be adjusted manually. And
sometimes they disagree and instead tell you to rename the source
package in the distro (happened to me once), which is not really
viable in Debian.

> It presumably doesn't look at the versions for all distros, so it can't
> do the cross-distro VCS snapshot choice check, while individual package
> maintainers in various distros know their packages well and might
> upgrade to a VCS snapshot in their distro, which Repology notices.

Yes it can't, but also I don't think this is something *release
monitoring* should do. It is definitely a good use case and that is
why there is a link to repology on the tracker (called "other
distros"), but it has IMHO nothing to do with *automatic* release
monitoring. Don't get me wrong, I actually like repology exactly for
this particular reason.

> It also isn't going to check locations it doesn't check yet, while
> individual package maintainers in other distros might do that after
> noticing their package hasn't been updated recently and then going
> searching for a new upstream and updating, which Repology notices.

Fair point, but if we worked together on release-monitoring.org
with Fedora, there would be more eyes on it than in the current
situation.
Repology still has more eyes of course, but then again the link to
Repology is right there on the tracker already if one is curious.

> > Just one quick idea I had: what about a "fake" uscan backend? I.e.
> > something like `Version: release-monitoring.org` in d/watch. In that
> > case uscan will launch an external program that fetches the data from
> > there and gives it back to uscan, so that other tools stay unaffected
> > until a full transition is done.
>
> Excellent idea, that would be great to have.

One more thought on this. If we go with version 5, maybe something like:

Version: 5
Source: release-monitoring.org

Would also work for multiple sources then and in general would fit
nicely with the current idea for v5. It also solves the problem with the
tooling: watch files and uscan would still exist, but the "searching"
portion is offloaded.
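As a sketch of how the offloaded "searching" step might look, the helper below asks Anitya for a project's latest known version. The endpoint follows release-monitoring.org's v2 HTTP API, but the function names, the query-by-name approach, and the response handling are illustrative assumptions, not an agreed interface:

```python
import json
import urllib.parse
import urllib.request

# Anitya's project query endpoint (release-monitoring.org v2 API).
# The exact parameters and response layout are assumptions here.
ANITYA_API = "https://release-monitoring.org/api/v2/projects/"

def pick_latest(payload):
    """Extract the newest upstream version from an Anitya project lookup.

    Anitya answers with a paginated {"items": [...]} structure; each
    item carries the latest version it has recorded for that project.
    """
    items = payload.get("items", [])
    return items[0].get("version") if items else None

def query_project(name):
    # Hypothetical fetch step for the "fake uscan backend" idea: uscan
    # would call this instead of scraping an upstream download page.
    url = ANITYA_API + "?name=" + urllib.parse.quote(name)
    with urllib.request.urlopen(url) as resp:
        return pick_latest(json.load(resp))
```

uscan would then compare the returned version against the one in debian/changelog, exactly as it does with versions found via a watch file today.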

> The one issue I can think of with using release-monitoring.org is that
> Debian becomes more reliant on an external service, while currently we
> are completely independent of other distros for version checking.
>
> Converting the release-monitoring.org check to a watch file might be an
> alternative to using it directly that maintains our independence.

Hm right, independence is a valid concern. Anitya itself is open
source [1], so we could host it ourselves easily, but of course the real
problem would be the stored project data. I don't know if it is
hosted somewhere, but I'm sure the Fedora folks would be open to sharing
it with us, so that we could easily spin up a mirror in case there
are any problems (it's probably a good idea to host a read-only mirror
just in case).

This sounds more reasonable to me than writing a tool that converts a
new standard to the old one just as backup.


Regards,
Stephan

[1] https://github.com/fedora-infra/anitya



Re: /usr/bin/ld.so as a symbolic link for the dynamic loader

2021-12-03 Thread Florian Weimer
* Paul Wise:

> Florian Weimer wrote:
>
>> I'd like to provide an ld.so command as part of glibc.
>
> Will this happen in glibc upstream or just in Debian?

Upstream, and then Debian.  The symbolic link would likely end up in
libc-bin in Debian.

>> Today, ld.so can be used to activate preloading, for example. 
>> Compared to LD_PRELOAD, the difference is that it's specific to one
>> process, and won't be inherited by subprocesses—sometimes that is
>> exactly what is needed.
>
> That appears to be activated like this:
>
> /lib64/ld-linux-x86-64.so.2 --preload 
> /usr/lib/x86_64-linux-gnu/libeatmydata.so.1.3.0 /bin/ls

Right, thanks for providing a concrete example.  A (somewhat) portable
version would look like this:

  ld.so --preload '/usr/$LIB/libeatmydata.so.1.3.0' /bin/sl

This assumes that $LIB expands to the multi-arch subdirectory.
(In upstream, it switches between lib, lib64, libx32 as needed.)

>> Anyway, do you see any problems with providing /usr/bin/ld.so for use
>> by skilled end users?
>
> It means more folks get exposed to ld.so features, which might mean
> more support and feature requests for glibc upstream. For example the
> set of features provided by environment variables is different to the
> set of features provided by command-line options.

The intent of this change is to expose these loader features to more
users and tools.  This came up for --list-diagnostics, where we'd
otherwise have had to teach the sos tool (and others) how to find the
loader path.

Thanks,
Florian



Re: /usr/bin/ld.so as a symbolic link for the dynamic loader

2021-12-03 Thread Bastian Blank
On Fri, Dec 03, 2021 at 01:57:08PM +0100, Florian Weimer wrote:
> Right, thanks for providing a concrete example.  A (somewhat) portable
> version would look like this:
>   ld.so --preload '/usr/$LIB/libeatmydata.so.1.3.0' /bin/sl

You mean
  ld.so --preload libeatmydata.so.1.3.0 /bin/ls
?

ld.so is able to find the correct path itself.  So only looking up the
correct ld.so implementation would be needed.

Bastian

-- 
Behind every great man, there is a woman -- urging him on.
-- Harry Mudd, "I, Mudd", stardate 4513.3



Re: /usr/bin/ld.so as a symbolic link for the dynamic loader

2021-12-03 Thread Florian Weimer
* Bastian Blank:

> On Fri, Dec 03, 2021 at 01:57:08PM +0100, Florian Weimer wrote:
>> Right, thanks for providing a concrete example.  A (somewhat) portable
>> version would look like this:
>>   ld.so --preload '/usr/$LIB/libeatmydata.so.1.3.0' /bin/sl
>
> You mean
>   ld.so --preload libeatmydata.so.1.3.0 /bin/ls
> ?

Right, that is even better.

No objections to /usr/bin/ld.so then?

Thanks,
Florian



Re: /usr/bin/ld.so as a symbolic link for the dynamic loader

2021-12-03 Thread Timo Lindfors

Hi,

On Fri, 3 Dec 2021, Florian Weimer wrote:

> No objections to /usr/bin/ld.so then?


Just a random thought: If you have configured a restricted shell (e.g. 
rbash) that only lets you execute commands in PATH, will this make it 
possible to bypass the restriction with "ld.so /tmp/some-random-binary"? 
This is not necessarily an argument against doing this, I'm just
wondering what the implications could be.


-Timo



Re: /usr/bin/ld.so as a symbolic link for the dynamic loader

2021-12-03 Thread Theodore Y. Ts'o
On Fri, Dec 03, 2021 at 02:46:27PM +0100, Bastian Blank wrote:
> On Fri, Dec 03, 2021 at 01:57:08PM +0100, Florian Weimer wrote:
> > Right, thanks for providing a concrete example.  A (somewhat) portable
> > version would look like this:
> >   ld.so --preload '/usr/$LIB/libeatmydata.so.1.3.0' /bin/sl
> 
> You mean
>   ld.so --preload libeatmydata.so.1.3.0 /bin/ls

Some stupid questions that I couldn't answer by reading the man page
or doing a quick google search:

* How does ld.so --preload *work*?

* Does it modify /bin/ls, so that all users running /bin/ls get the
preloaded library?

* Does it modify something in the user's home directory?

* How do you undo the effects of ld.so --preload?

"Inquiring minds want to know"  :-)

Thanks,

- Ted



Re: /usr/bin/ld.so as a symbolic link for the dynamic loader

2021-12-03 Thread Simon McVittie
On Thu, 02 Dec 2021 at 19:51:16 +0100, Florian Weimer wrote:
> Having ld.so as a real command makes the name architecture-agnostic.
> This discourages hard-coding non-portable paths such as
> /lib64/ld-linux-x86-64.so.2 or even (the non-ABI-compliant)
> /lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 in scripts that require
> specific functionality offered by such an explicit loader invocation.

This works up to a point, but because there is only one /usr/bin/ld.so,
it can only work for one architecture per machine, so saying it's
architecture-agnostic is still a bit of a stretch.

In multiarch, in principle there is no such thing as "the" primary
architecture (it is valid to combine sed:amd64 and coreutils:i386 on an
amd64 kernel), but in practice it's usually the case that "most"
executables come from the same architecture as dpkg.

So if we only have one ld.so, then on typical Debian x86_64 machines
it will only work for x86_64 executables, and not for i386 executables
(or cross-executables via qemu-aarch64-static or whatever). Similarly,
on Red Hat-style multilib, it will only work for x86_64 and not for
i386. Does that give you the functionality you are expecting?

One way to make it closer to architecture-agnostic would be to name it
${tuple}-ld.so, similar to how gcc (cross-)compilers are named. From
Debian's point of view, ideally the tuple would be a multiarch tuple,
which is a GNU tuple normalized to eliminate differences within an
ABI-compatible family of architectures:

- start from the GNU tuple, e.g. i686-pc-linux-gnu
- discard the vendor part, e.g. i686-linux-gnu
  - this version is the tools prefix used in cross-compilation
- replace i?86 with i386 and arm* with arm, e.g. i386-linux-gnu
  - this version is the Debian multiarch tuple

(Or perhaps better to have symlinks with both the cross-tools prefix
and the multiarch tuple, where they differ - which I believe is only
i386 and 32-bit ARM, because most/all other architectures sensibly change
the GNU tuple if and only if the ABI is different.)
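Those normalization steps are mechanical enough to sketch in a few lines of Python (the function name is made up, and the special cases are only the ones listed above; dpkg-architecture remains the authoritative table):

```python
import re

def multiarch_tuple(gnu_tuple):
    """Turn a GNU tuple into a Debian-style multiarch tuple (sketch).

    Implements the steps described above: drop the vendor part, fold
    i[3-6]86 into i386, and fold arm* into arm.
    """
    parts = gnu_tuple.split("-")
    if len(parts) == 4:
        del parts[1]                      # discard the vendor part
    if re.fullmatch(r"i[3-6]86", parts[0]):
        parts[0] = "i386"                 # one ABI family for 32-bit x86
    elif parts[0].startswith("arm"):
        parts[0] = "arm"                  # likewise for 32-bit ARM
    return "-".join(parts)

# e.g. multiarch_tuple("i686-pc-linux-gnu") == "i386-linux-gnu"
```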

> The initial implementation will be just a symbolic link.  This means
> that multi-arch support will be missing: the amd64 loader will not be
> able to redirect execution to the s390x loader.

... or to the i386 loader, which is probably a concern for more people
(that affects Red Hat-style multilib, which is present in some form on
most distros, and not just Debian-style multiarch, which is only seen in
Debian derivatives and the freedesktop.org SDK).

> In principle, it should
> be possible to find PT_INTERP with a generic ELF parser and redirect to
> that, but that's vaporware at present.  I don't know yet if it will be
> possible to implement this without some knowledge of Debian's multi-arch
> support in the loader.

I believe Debian uses the interoperable (ABI-compliant) ELF interpreter
as listed on https://sourceware.org/glibc/wiki/ABIList for all
architectures - it certainly does for all *common* architectures (for
example our x86_64 executables use /lib64/ld-linux-x86-64.so.2, which is
a special exception to the rule that we don't usually use lib64).

I had naively believed that all distros do the same, but unfortunately
my work on the Steam Runtime has taught me otherwise: for example, Arch
Linux has a non-standard ELF interpreter /usr/lib/ld-linux-x86-64.so for
executables that are built from the glibc source package (but uses the
interoperable ELF interpreter for everything else), and Exherbo
consistently puts their dynamic linkers in /usr/x86_64-pc-linux-gnu/lib.

Does glibc automatically set up the interoperable ELF interpreter, or is
it something that distros' glibc maintainers have to "just know" if they
are using a non-default ${libdir}?

> If someone wants to upstream the multi-arch patches, that would be
> great.  glibc now accepts submissions under DCO, so copyright assignment
> should no longer be an obstacle.

(Please note that I am not a glibc maintainer and cannot speak for them.)

I think multiarch is mostly build-time configuration rather than patches.
The main thing needing patching is that we want ${LIB} to expand to
lib/x86_64-linux-gnu instead of just x86_64-linux-gnu, so that the
"/usr/${LIB}/libfoo.so.0" idiom works, but glibc would normally only take
the last component of the ${libdir}:

https://salsa.debian.org/glibc-team/glibc/-/blob/sid/debian/patches/any/local-ld-multiarch.diff

The freedesktop.org SDK used for Flatpak also uses Debian-style multiarch
(but is not otherwise Debian-derived), and addresses that differently, in a
way that might be more upstream-suitable:

https://gitlab.com/freedesktop-sdk/freedesktop-sdk/-/blob/master/patches/glibc/fix-dl-dst-lib.patch

smcv



Re: /usr/bin/ld.so as a symbolic link for the dynamic loader

2021-12-03 Thread Florian Weimer
* Theodore Y. Ts'o:

> * How does ld.so --preload *work*?

The dynamic loader has an array of preloaded sonames, and it processes
them before loading the dependencies of the main program.  This way,
definitions in the preloaded objects preempt definitions in the shared
objects.

> * Does it modify /bin/ls, so that all users running /bin/ls get the
> preloaded library?

No, it's purely a run-time change.

The global setting is in /etc/ld.so.preload.

> * Does it modify something in the user's home directory?

No.  Well, the shell might put that command into .bash_history, or
something like that.

> * How do you undo the effects of ld.so --preload?

You run the program without the --preload option.  Or unset LD_PRELOAD.

Thanks,
Florian



Re: /usr/bin/ld.so as a symbolic link for the dynamic loader

2021-12-03 Thread Bastian Blank
On Fri, Dec 03, 2021 at 04:16:04PM +0200, Timo Lindfors wrote:
> Just a random thought: If you have configured a restricted shell (e.g.
> rbash) that only lets you execute commands in PATH, will this make it
> possible to bypass the restriction with "ld.so /tmp/some-random-binary"?
> This is not necessary an argument to not do this, I'm just wondering what
> the implications could be.

The same as /bin/bash -c, /usr/bin/python3 -c.  ld.so is an interpreter
in the same way.

Bastian

-- 
Spock: The odds of surviving another attack are 13562190123 to 1, Captain.



Re: /usr/bin/ld.so as a symbolic link for the dynamic loader

2021-12-03 Thread Bastian Blank
On Fri, Dec 03, 2021 at 11:33:25AM -0500, Theodore Y. Ts'o wrote:
> Some stupid questions that I couldn't answer by reading the man page
> or doing a quick google search
> * How does ld.so --preload *work*?

ld.so is the ELF interpreter.  If you run a normal binary, the kernel
rewrites this request to load and execute /lib64/ld-linux-x86-64.so.2
instead.  This interpreter then loads the real binary, the libraries and
jumps to the real code.

If you run ld.so directly, that kernel redirection is just not done.
The rest of its tasks, however, are done just as usual.

--preload behaves just as if the interpreter had seen LD_PRELOAD set.

> * Does it modify /bin/ls, so that all users running /bin/ls get the
> preloaded library?

No, preload does not modify binary files, just the loaded binary in
memory.

> * Does it modify something in the user's home directory?

No.

> * How do you undo the effects of ld.so --preload?

You just don't use ld.so and don't set LD_PRELOAD.

Bastian

-- 
He's dead, Jim.
-- McCoy, "The Devil in the Dark", stardate 3196.1



Re: /usr/bin/ld.so as a symbolic link for the dynamic loader

2021-12-03 Thread Florian Weimer
* Simon McVittie:

> On Thu, 02 Dec 2021 at 19:51:16 +0100, Florian Weimer wrote:
>> Having ld.so as a real command makes the name architecture-agnostic.
>> This discourages hard-coding non-portable paths such as
>> /lib64/ld-linux-x86-64.so.2 or even (the non-ABI-compliant)
>> /lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 in scripts that require
>> specific functionality offered by such an explicit loader invocation.
>
> This works up to a point, but because there is only one /usr/bin/ld.so,
> it can only work for one architecture per machine, so saying it's
> architecture-agnostic is still a bit of a stretch.

We can add a generic ELF parser to that ld.so and use PT_INTERP, as I
mentioned below.  I think this is the way to go.  Some care will be
needed to avoid endless loops, but that should be it.
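To illustrate how little a PT_INTERP lookup needs, here is a minimal parser. It handles only 64-bit little-endian ELF and does none of the bounds checking, 32-bit/big-endian handling, or loop detection a real ld.so would need:

```python
import struct

PT_INTERP = 3  # program header type for the requested ELF interpreter

def elf_interp(data):
    """Return the PT_INTERP path of a 64-bit little-endian ELF, or None."""
    if data[:4] != b"\x7fELF":
        raise ValueError("not an ELF image")
    # Fixed ELF64 header offsets: e_phoff, then e_phentsize and e_phnum
    (e_phoff,) = struct.unpack_from("<Q", data, 32)
    (e_phentsize, e_phnum) = struct.unpack_from("<HH", data, 54)
    for i in range(e_phnum):
        base = e_phoff + i * e_phentsize
        (p_type,) = struct.unpack_from("<I", data, base)
        if p_type == PT_INTERP:
            (p_offset,) = struct.unpack_from("<Q", data, base + 8)
            (p_filesz,) = struct.unpack_from("<Q", data, base + 32)
            return data[p_offset:p_offset + p_filesz].rstrip(b"\x00").decode()
    return None
```

On an amd64 Debian system, feeding it the bytes of /bin/sh would be expected to return /lib64/ld-linux-x86-64.so.2.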

Things will break if people link with --dynamic-linker=/usr/bin/ld.so,
but that's just broken (like using --dynamic-linker=/lib/dl-2.33.so
today).

>> In principle, it should
>> be possible to find PT_INTERP with a generic ELF parser and redirect to
>> that, but that's vaporware at present.  I don't know yet if it will be
>> possible to implement this without some knowledge of Debian's multi-arch
>> support in the loader.
>
> I believe Debian uses the interoperable (ABI-compliant) ELF interpreter
> as listed on https://sourceware.org/glibc/wiki/ABIList for all
> architectures - it certainly does for all *common* architectures (for
> example our x86_64 executables use /lib64/ld-linux-x86-64.so.2, which is
> a special exception to the rule that we don't usually use lib64).

I'm not aware of any Debian divergence yet, either.

If we can just run any specified PT_INTERP and use something else for
loop detection (e.g., an additional argument), then it should probably
work out of the box.  I was just trying to set expectations because I
had not really thought about it in detail, in particular the loop
avoidance scheme and whether it must know about all the known
loaders.

Some distributions also want to avoid code execution from ldd.  Another
thing to consider before lifting paths out of PT_INTERP.

> I had naively believed that all distros do the same, but unfortunately
> my work on the Steam Runtime has taught me otherwise: for example, Arch
> Linux has a non-standard ELF interpreter /usr/lib/ld-linux-x86-64.so for
> executables that are built from the glibc source package (but uses the
> interoperable ELF interpreter for everything else),

/usr/lib/ld-linux-x86-64.so could be a botched attempt at completing
UsrMove.  The upstream makefiles are not really set up for that.

> and Exherbo consistently puts their dynamic linkers in
> /usr/x86_64-pc-linux-gnu/lib.

No idea about that one.

> Does glibc automatically set up the interoperable ELF interpreter, or is
> it something that distros' glibc maintainers have to "just know" if they
> are using a non-default ${libdir}?

With ./configure --prefix=/usr, upstream glibc is expected to use the
official path in the file system (and it should no longer be a symbolic
link, either).  The just-built binaries should use that path, too.

But the dynamic linker pathname is not entirely unique, which creates
problems for Debian-style multi-arch.

>> If someone wants to upstream the multi-arch patches, that would be
>> great.  glibc now accepts submissions under DCO, so copyright assignment
>> should no longer be an obstacle.
>
> (Please note that I am not a glibc maintainer and cannot speak for them.)
>
> I think multiarch is mostly build-time configuration rather than patches.
> The main thing needing patching is that we want ${LIB} to expand to
> lib/x86_64-linux-gnu instead of just x86_64-linux-gnu, so that the
> "/usr/${LIB}/libfoo.so.0" idiom works, but glibc would normally only take
> the last component of the ${libdir}:
>
> https://salsa.debian.org/glibc-team/glibc/-/blob/sid/debian/patches/any/local-ld-multiarch.diff

That must get the data from somewhere else.  Looking at

  

it seems to come from DEB_HOST_MULTIARCH, and that's:

| DEB_HOST_MULTIARCH?= $(shell dpkg-architecture -qDEB_HOST_MULTIARCH)

We would have to take the table out of dpkg-architecture and put it into
upstream glibc (or gcc or binutils), otherwise you can't build a
multi-arch glibc on a non-Debian system.  Something like a generic
--with-multiarch-tuple= configure option would sidestep that, but risk
that different distributions end up with different multi-arch tuples.

> The freedesktop.org SDK used for Flatpak also uses Debian-style multiarch
> (but is not otherwise Debian-derived), and addresses that differently, in a
> way that might be more upstream-suitable:
>
> https://gitlab.com/freedesktop-sdk/freedesktop-sdk/-/blob/master/patches/glibc/fix-dl-dst-lib.patch

The use of realpath again assumes that the file system already contains
the answer, which is not true if you have a different architecture or
dist

Re: /usr/bin/ld.so as a symbolic link for the dynamic loader

2021-12-03 Thread Simon McVittie
On Fri, 03 Dec 2021 at 18:29:33 +0100, Florian Weimer wrote:
> > On Thu, 02 Dec 2021 at 19:51:16 +0100, Florian Weimer wrote:
> >> If someone wants to upstream the multi-arch patches, that would be
> >> great.
> >
> > I think multiarch is mostly build-time configuration rather than patches.
> 
> We would have to take the table out of dpkg-architecture and put it into
> upstream glibc (or gcc or binutils), otherwise you can't build a
> multi-arch glibc on a non-Debian system.

Sorry, you asked about patches, so I thought you were under the
impression that Debian was patching glibc to have it use multiarch
library directories. I believe it's mainly done with build-time
configuration rather than by patching, so there isn't necessarily
anything to upstream, because most of what's necessary to enable/allow
that build-time configuration is already upstream - unless you want
glibc to be generically aware of multiarch paths even when built on
non-Debian? Is that your goal here?

As I said, a 99% implementation of multiarch tuples is to take the GNU
tuple that any Autotools-based build system already relies on, discard the
vendor part, and normalize a finite number of special cases (i[3456]86
and arm* are the only ones I'm aware of). I believe the people who did
the early design of multiarch were hoping to standardize it via something
like LSB, but that effort seems approximately as dead as LSB itself.

systemd has an independent implementation of the list of known multiarch
tuples:
https://github.com/systemd/systemd/blob/main/src/basic/architecture.h
https://github.com/systemd/systemd/blob/main/src/basic/architecture.c

Some sort of change to the expansion of $LIB is maybe the only thing
needed in addition to build-time configuration, because the upstream
implementation of $LIB makes the assumption that only the last path
segment of the ${libdir} is desired in $LIB, which is usually the case
but happens to be untrue for multiarch.
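Concretely, the mismatch looks like this (paths are the usual Debian amd64 ones, used purely for illustration):

```python
import os.path

libdir = "/usr/lib/x86_64-linux-gnu"    # Debian multiarch ${libdir}

# Upstream glibc derives $LIB from the last path segment of ${libdir}:
upstream_lib = os.path.basename(libdir)          # -> "x86_64-linux-gnu"

# ...but for the "/usr/$LIB/libfoo.so.0" idiom to resolve on Debian,
# $LIB must keep the "lib/" prefix too:
multiarch_lib = os.path.relpath(libdir, "/usr")  # -> "lib/x86_64-linux-gnu"
```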

> In addition to getting the right value for $LIB, it's also desirable to get
> the default search paths right.

This is done by runtime configuration rather than patching, at the moment,
for example this file installed as part of libc6:amd64:

$ cat /etc/ld.so.conf.d/x86_64-linux-gnu.conf
# Multiarch support
/usr/local/lib/x86_64-linux-gnu
/lib/x86_64-linux-gnu
/usr/lib/x86_64-linux-gnu

If you would prefer this to be hard-coded into ldconfig, I suspect there's
no implementation right now that could be upstreamed.

> And there's also /usr/libexec/getconf to worry about.

At the moment, /usr/bin/getconf is only installed for the "main"
architecture (more precisely, for whichever architecture of libc-bin is
installed). If there's meant to be one getconf per architecture, then it
wouldn't be able to appear in /usr/bin or /usr/libexec with that name.
As with ld.so, this is not unique to multiarch: multilib would have the
same problem.

Is that /usr/libexec/getconf upstream, or is /usr/libexec/getconf
something else?

smcv



Bug#1001084: ITP: meli -- terminal mail client

2021-12-03 Thread Jonas Smedegaard
Package: wnpp
Severity: wishlist
Owner: Jonas Smedegaard 
X-Debbugs-Cc: debian-devel@lists.debian.org


* Package name: meli
  Version : 0.7.2
  Upstream Author : Manos Pitsidianakis 
* URL : https://meli.delivery/
* License : GPL-3+
  Programming Lang: Rust
  Description : terminal mail client

 meli is a terminal email client with support for multiple accounts
 and Maildir, mbox, notmuch, IMAP, and JMAP.

The packaging will be maintained at https://salsa.debian.org/debian/meli


 - Jonas




Bug#1001085: ITP: ruby-thread-local -- provide a class-level mixin to make thread local state easy

2021-12-03 Thread Daniel Leidert
Package: wnpp
Severity: wishlist
Owner: Daniel Leidert 
X-Debbugs-Cc: debian-devel@lists.debian.org, debian-r...@lists.debian.org


* Package name: ruby-thread-local
  Version : 1.1.0
  Upstream Author : Samuel Williams
* URL : https://github.com/socketry/thread-local
* License : MIT/X
  Programming Lang: Ruby
  Description : provide a class-level mixin to make thread local state easy

This gem provides a simple high level interface for per-class thread locals,
and it implements a standard interface for "shared global state". Using this
implementation avoids reinventing thread-local semantics in your own code.
.
Global variables are often not thread-safe and encourage poor programming
style. In many cases it is desirable to have thread-local state, but
implementing this directly in Ruby is unpleasant. This gem provides a
best-practice wrapper which can extend existing classes to provide per-thread
instances.
.
Conceptually, a thread is a container for application state. This works well
when servers consider applications to be isolated on a per-thread basis, but
this isn't always the case:


This is a new dependency of ruby-async-http and necessary to update this
package to fix #995354.





Re: /usr/bin/ld.so as a symbolic link for the dynamic loader

2021-12-03 Thread Theodore Y. Ts'o
On Fri, Dec 03, 2021 at 05:59:17PM +0100, Florian Weimer wrote:
> * Theodore Y. Ts'o:
> 
> > * How does ld.so --preload *work*?
> 
> The dynamic loader has an array of preloaded sonames, and it processes
> them before loading the dependencies of the main program.  This way,
> definitions in the preloaded objects preempt definitions in the shared
> objects.
> 
> > * Does it modify /bin/ls, so that all users running /bin/ls get the
> > preloaded library?
> 
> No, it's purely a run-time change.

Ah, sorry, I was assuming it was more than just a run-time change.  So
this would effectively be equivalent to something like this, right?

(export LD_PRELOAD="/usr/lib/x86_64-linux-gnu/libeatmydata.so.1 $LD_PRELOAD"; /bin/ls)


By the way, it might be nice if ld.so queried something like
$HOME/.config/ld.so.preload as a user-specific version of
/etc/ld.so.preload.

I guess there might be some potential security concerns, but if an
attacker has write access to a user's home directory, they probably
can do all sorts of mischief anyway.

- Ted




Bug#1001089: ITP: golang-github-clbanning-mxj -- mxj - to/from maps, XML and JSON (Go library)

2021-12-03 Thread Anthony Fok
Package: wnpp
Severity: wishlist
Owner: Anthony Fok 

* Package name: golang-github-clbanning-mxj
  Version : 2.5.5-1
  Upstream Author : Charles Banning
* URL : https://github.com/clbanning/mxj
* License : Expat
  Programming Lang: Go
  Description : mxj - to/from maps, XML and JSON (Go library)
 Decode/encode XML to/from map[string]interface{} (or JSON) values,
 and extract/modify values from maps by key or key-path, including wildcards.
 .
 mxj supplants the legacy x2j and j2x packages.
 If you want the old syntax, use mxj/x2j and mxj/j2x packages.

Reason for packaging: Needed by the upcoming Hugo 0.90.0 and up



Re: Using release-monitoring.org [was: uscan roadmap]

2021-12-03 Thread Paul Wise
On Fri, 2021-12-03 at 13:12 +0100, Stephan Lachnit wrote:

> I mean it looks rather easy to do, just a couple of mouse clicks.
> Compare that to writing a watch file at the moment (assuming one has
> to do more than copy and paste the github example).

Repology gets you mappings for all the source packages in Debian in one
download (assuming it has an export of the mappings, that may need to
be added), while the Anitya mapping requires a human to manually add a
mapping for each of the thousands of source packages in Debian. Not all
maintainers are going to bother and repetitive clicking is going to get
boring for the folks trying to make up for that.

> Then the maintainer would just have to write a check, just like they
> have to do now.

Or you could get the most recent distro version for free without manual
work by using the Repology data. While it isn't always the true latest
release from upstream, often the latest distro version being newer than
the Debian version is good enough to notify the Debian maintainer to
update the package to the true latest release from upstream.

> Also, mapping on Repology sometimes needs to be adjusted manually. And
> sometimes they disagree and instead tell you to rename the source
> package in the distro (happened to me once), which is not really
> viable in Debian.

I wasn't aware of the renaming part, seems kind of weird.

> Yes it can't, but also I don't think this is something *release
> monitoring* should do. It is definitely a good use case and that is
> why there is a link to repology on the tracker (called "other
> distros"), but it has IMHO nothing to do with *automatic* release
> monitoring. Don't get me wrong, I actually like repology exactly for
> this particular reason.

I was taking the thread topic to be the slightly more general area of
"monitoring when a package needs updating to a new upstream release,
snapshot or fork". New VCS snapshots in other distros fits that IMO.

> Fair point, but if we worked together on release-monitoring.org
> with Fedora, there would be more eyes on it than in the current
> situation.
> Repology still has more eyes of course, but then again the link to
> Repology is right there on the tracker already if one is curious.

Sure, I think we need all three solutions (debian/watch, Anitya and
Repology), as they each fit different use-cases within the "update to
latest release/snapshot/fork" arena; see below for more about why. There
is already a bug about more Repology integration within the package
tracker, and I was the one who did the existing tracker integration.

> One more thought on this. If we go with version 5, maybe something
> like:
> 
> Version: 5
> Source: release-monitoring.org

Looks good.

> Would also work for multiple sources then and in general would fit
> nicely to the current idea for v5. It also solves the problem with the
> tooling, watch files and uscan would still exist, but the "searching"
> portion is offloaded.

Agreed.
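For illustration only, a complete watch file in such a hypothetical v5
format might look like the following; everything beyond the quoted
Version and Source fields is invented here, since no v5 spec exists yet:

```
Version: 5
Source: release-monitoring.org
# "Project-Id" is an invented field name: the numeric id that
# release-monitoring.org assigns to each upstream project it tracks
Project-Id: 12345
```

uscan would then hand the "searching" step to an external backend keyed on Source, as suggested above, while other tools keep parsing debian/watch unchanged.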

> Hm right, independence is a valid concern. Anitya itself is open
> source [1] so we could host it easily, but of course the real problem
> would be the stored data of the projects. I don't know if they are
> hosted somewhere, but I'm sure the Fedora guys would be open to share
> them with us, so that we could easily spin up a mirror in case there
> are any problems (it's probably a good idea to host a read-only mirror
> just in case).

The other issue with using Anitya is that Debian and Fedora have
different policies and cultures for choosing which upstream versions to
update to. Debian strongly prefers LTS versions while Fedora is all
about the latest and greatest, which is a bit of a culture clash and
likely means we couldn't use Anitya for some packages.

In addition to independence there is the issue Jonas mentioned
elsewhere in the initial uscan thread that some Debian people prefer
the info to be maintained in the source package instead of elsewhere.

> This sounds more reasonable to me than writing a tool that converts a
> new standard to the old one just as backup.

Given the above, perhaps a way to sync a locally stored file and the
Anitya one, and then have uscan understand the Anitya format?
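A rough sketch of that sync idea, using Anitya's documented v2 HTTP API
(https://release-monitoring.org/api/v2/projects/). The cache file layout
and function names are assumptions for illustration, not an agreed
design:

```python
# Sketch: fetch the latest upstream version Anitya knows about and
# mirror it into a locally stored file, so tooling keeps working if
# release-monitoring.org is unreachable.
import json
import urllib.parse
import urllib.request
from pathlib import Path

ANITYA = "https://release-monitoring.org/api/v2/projects/"

def latest_upstream(project_name):
    """Ask Anitya (live API) for a project's latest known version."""
    url = ANITYA + "?" + urllib.parse.urlencode({"name": project_name})
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    items = data.get("items", [])
    return items[0]["version"] if items else None

def sync_cache(project_name, cache_path, fetched_version):
    """Store a freshly fetched version locally; fall back to the
    cached copy when the fetch returned nothing."""
    cache = Path(cache_path)
    if fetched_version is not None:
        cache.write_text(json.dumps({"name": project_name,
                                     "version": fetched_version}))
        return fetched_version
    if cache.exists():
        return json.loads(cache.read_text())["version"]
    return None
```

uscan would then only need to read the cached file, keeping the per-package data in the source package as Jonas prefers while the network side stays optional.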

-- 
bye,
pabs

https://wiki.debian.org/PaulWise




Re: Using release-monitoring.org [was: uscan roadmap]

2021-12-03 Thread Scott Kitterman



On December 3, 2021 12:12:47 PM UTC, Stephan Lachnit wrote:
>On Thu, Dec 2, 2021 at 11:52 PM Paul Wise  wrote:
>>
>> On Thu, 2021-12-02 at 23:36 +0100, Stephan Lachnit wrote:
>>
>> > If I understand correctly, release-monitoring already offers such a
>> > mapping [1].
>>
>> It seems like the Ayanita distro mapping needs to be done manually once
>> per package, while using the Repology data would automatically get us
>> the mapping for each existing package and all future packages.
>
>I mean it looks rather easy to do, just a couple of mouse clicks.
>Compare that to writing a watch file at the moment (assuming one has
>to do more than copy and paste the github example).
>
>> > Hm, I can't really think of an example where such a thing couldn't
>> > also be implemented in release-monitoring.org.
>>
>> None of the three use-cases I listed can be done by it AFAICT.
>>
>> It can't check things that it doesn't have a check for, while
>> individual package maintainers in various distros will update their
>> packages and Repology will notice the new versions.
>
>Then the maintainer would just have to write a check, just like they
>have to do now.
>
>Also, mapping on Repology sometimes needs to be adjusted manually. And
>sometimes they disagree and instead tell you to rename the source
>package in the distro (happened to me once), which is not really
>viable in Debian.
>
>> It presumably doesn't look at the versions for all distros, so it can't
>> do the cross-distro VCS snapshot choice check, while individual package
>> maintainers in various distros know their packages well and might
>> upgrade to a VCS snapshot in their distro, which Repology notices.
>
>Yes it can't, but also I don't think this is something *release
>monitoring* should do. It is definitely a good use case and that is
>why there is a link to repology on the tracker (called "other
>distros"), but it has IMHO nothing to do with *automatic* release
>monitoring. Don't get me wrong, I actually like repology exactly for
>this particular reason.
>
>> It also isn't going to check locations it doesn't check yet, while
>> individual package maintainers in other distros might do that after
>> noticing their package hasn't been updated recently and then going
>> searching for a new upstream and updating, which Repology notices.
>
>Fair point, but if we would work together on release-monitoring.org
>with Fedora, there are more eyes on it as well as in the current
>situation.
>Repology still has more eyes of course, but then again the link to
>Repology is right there on the tracker already if one is curious.
>
>> > Just one quick idea I had: what about a "fake" uscan backend? I.e.
>> > something like `Version: release-monitoring.org` in d/watch. In that
>> > case uscan will launch an external program that fetches the data from
>> > there and gives it back to uscan, so that other tools stay unaffected
>> > until a full transition is done.
>>
>> Excellent idea, that would be great to have.
>
>One more thought on this. If we go with version 5, maybe something like:
>
>Version: 5
>Source: release-monitoring.org
>
>Would also work for multiple sources then and in general would fit
>nicely to the current idea for v5. It also solves the problem with the
>tooling, watch files and uscan would still exist, but the "searching"
>portion is offloaded.
>
>> The one issue I can think of with using release-monitoring.org is that
>> Debian becomes more reliant on an external service, while currently we
>> are completely independent of other distros for version checking.
>>
>> Converting the release-monitoring.org check to a watch file might be an
>> alternative to using it directly that maintains our independence.
>
>Hm right, independence is a valid concern. Anitya itself is open
>source [1] so we could host it easily, but of course the real problem
>would be the stored data of the projects. I don't know if they are
>hosted somewhere, but I'm sure the Fedora guys would be open to share
>them with us, so that we could easily spin up a mirror in case there
>are any problems (it's probably a good idea to host a read-only mirror
>just in case).
>
>This sounds more reasonable to me than writing a tool that converts a
>new standard to the old one just as backup.

I think there's a security consideration associated with all these 
proposals for externalizing the discovery of upstream updates.  Currently 
watch files and at least the redirectors I know of all run on Debian 
infrastructure or on the systems of the Debian person doing the update.

If one of these services were ever compromised it would provide a vector for 
offering substitute upstream code (at least for the cases where upstream 
releases aren't both signed by upstream and verified in Debian).  I find that 
prospect concerning.
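For context, the signed-and-verified case mentioned above is what uscan
already supports via a key in debian/upstream/signing-key.asc. A typical
v4 stanza enabling that check, with a placeholder URL:

```
version=4
opts="pgpsigurlmangle=s/$/.asc/" \
  https://example.org/releases/foo-([\d.]+)\.tar\.gz
```

A compromised external discovery service could still point at a malicious tarball, but with signature verification in place uscan would reject it, which narrows the attack to packages without upstream signatures.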

Scott K


