On 18 Jan 2014 10:29, "Donald Stufft" <don...@stufft.io> wrote:
>>
>> > I haven't used --find-links yet so I may be wrong but tarring up
packages and serving them by a web server is additional work that I'd
rather avoid. It just creates yet another copy of the package and requires
maintaining an additional server component.
>> >
>> > I loved the fact that I could point pip directly at my source repos
for both my top-level projects as well as their dependencies. That seemed
like a perfect fit for an interpreted language like Python where packages
are distributed in source form.
>>
>> In a trusted environment, it can be, but on the internet, it is
intolerably insecure.
>>
>> However, making it easy to generate a transitive requirements.txt for a
trusted environment may be a usage model we can support without leaving
upstream distribution and usage open to compromise.
>>
>> Regards,
>> Nick.
>
> Let me just be explicit that I totally get the fact that this is breaking
your workflow. Nobody *ever* likes being told that the way they are doing
something is no longer going to be supported. Processing dependency links
by default is a security risk; in your case it’s probably OK, but only
because your use case is tightly constrained.
>
> One thing that pip *could* do is leave the --process-dependency-links flag
in place instead of removing it as per the deprecation cycle, but I think
that would ultimately be a disservice to users. The new formats that we’re
designing and implementing are not going to support things like dependency
links, so while this would allow things to continue for a while as we
further improve the system, it will likely become increasingly difficult to
maintain the functionality of dependency links.

We don't know that for sure - dependency links are entirely supportable even
under PEP 426 through an appropriate metadata extension; it would just
require someone to write the code to generate and process it.
Alternatively, affected users could just stay with setuptools-style
metadata for internal use.
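For illustration only, such a PEP 426 metadata extension might look
something like the sketch below; the "dependency_links" extension name and
its contents are hypothetical, not anything the PEP actually defines:

```python
import json

# Hypothetical PEP 426 (metadata 2.0) document carrying dependency
# links in a custom extension. The "dependency_links" extension name
# and its shape are illustrative assumptions, not part of the PEP.
metadata = {
    "metadata_version": "2.0",
    "name": "internal-app",
    "version": "1.0",
    "run_requires": [{"requires": ["libfoo"]}],
    "extensions": {
        "dependency_links": {
            "links": ["git+https://git.example.com/libfoo#egg=libfoo"],
        },
    },
}

print(json.dumps(metadata, indent=2))
```

A tool that opted in to the extension could read the links; everything
else would simply ignore the unknown extension key, which is the point
of the extension mechanism.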

> So again I’m really really sorry that this is affecting you, but I can’t
see a good way to keep this concept around that doesn’t hurt the security
for the other use case.

That's where I see a separate "scan-dependency-links" tool potentially
fitting in. The concept would still be completely gone from the core
installation toolchain in pip so it isn't available by default, but users
in a situation like Hannes's could use it to generate a suitable
requirements.txt file (perhaps even dynamically) and feed *that* to pip.
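A minimal sketch of what the core of such a tool might do, assuming the
scan step has already collected each project's setuptools metadata into a
plain mapping; the `transitive_requirements` name and the data shape are
illustrative, not an actual tool:

```python
def transitive_requirements(roots, metadata):
    """Given pre-scanned setuptools metadata (a mapping of project
    name -> {"requires": [...], "link": dependency-link URL}), walk
    the requirements transitively and return requirements.txt text
    with one dependency-link line per project reached."""
    seen, stack, lines = set(), list(roots), []
    while stack:
        name = stack.pop()
        if name in seen:
            continue
        seen.add(name)
        info = metadata[name]
        lines.append(info["link"])
        stack.extend(info["requires"])
    return "\n".join(sorted(lines)) + "\n"


# Example: an internal app depending on an internal library,
# both served straight from source repos.
scanned = {
    "app": {"requires": ["libfoo"],
            "link": "git+https://git.example.com/app#egg=app"},
    "libfoo": {"requires": [],
               "link": "git+https://git.example.com/libfoo#egg=libfoo"},
}
print(transitive_requirements(["app"], scanned), end="")
```

The generated file would then be consumed by an ordinary
`pip install -r requirements.txt`, keeping all link scanning out of pip
itself.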

PyPI itself, however, could still change to block uploading of *new*
packages that define dependency links (similar to what we did for external
link scanning).

Imposing backwards incompatible changes like this one, however well
justified, always brings with it a certain responsibility to help users in
managing the transition. In this case, I suspect a *separate* link scanning
tool (which we can put as many security warnings on as you like) that
generates a requirements.txt file may be a more appropriate approach than
just telling users to use built packages on a private HTTP or PyPI server,
since spinning up and maintaining new server infrastructure is often a much
more challenging prospect in user environments than installing and using a
new client tool.

Cheers,
Nick.

>
>
> -----------------
> Donald Stufft
> PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372
DCFA
>
_______________________________________________
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig
