On Mon, Sep 3, 2018 at 4:51 PM, Nick Coghlan <ncogh...@gmail.com> wrote:
> On Mon., 3 Sep. 2018, 5:48 am Ronald Oussoren, <ronaldousso...@mac.com>
> wrote:
>>
>>
>> What’s the problem with including GPU and non-GPU variants of code in a
>> binary wheel other than the size of the wheel? I tend to prefer binaries
>> that work “everywhere”, even if that requires some more work in building
>> binaries (such as including multiple variants of extensions to have
>> optimised code for different CPU variants, such as SSE and non-SSE variants
>> in the past).
>
>
> As far as I'm aware, binary artifact size *is* the problem. It's just that
> once you're automatically building and pushing an artifact (or an image
> containing that artifact) to thousands or tens of thousands of managed
> systems, the wasted bandwidth from pushing redundant implementations of the
> same functionality becomes more of a concern than the convenience of being
> able to use the same artifact across multiple platforms.

None of the links that Dustin gave at the top of the thread are about
managed systems, though. As far as I can tell, they all come down to
one of two issues: given that "tensorflow" and "tensorflow-gpu" are
both on PyPI, (a) how can users automatically get the appropriate
version without having to select one manually, and (b) how can other
packages express a dependency on "tensorflow or tensorflow-gpu"? And
maybe (c) how can we stop tensorflow and tensorflow-gpu from
accidentally getting installed on top of each other?
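(For context on (b): pip's dependency metadata has no "or" operator, so
the workaround packages tend to use today is a runtime probe for
whichever distribution happens to be installed. A minimal sketch of that
probe, using `importlib.metadata` from Python 3.8+; the helper name is
made up for illustration, but "tensorflow" and "tensorflow-gpu" are the
real PyPI distribution names:)

```python
# Hypothetical runtime probe: since "tensorflow or tensorflow-gpu"
# cannot be written as a pip dependency, a depending package can check
# at import time which distribution (if either) is actually installed.
from importlib.metadata import distribution, PackageNotFoundError


def find_installed(names):
    """Return the first distribution name from `names` that is
    installed in the current environment, or None if none are."""
    for name in names:
        try:
            distribution(name)  # raises if the dist is not installed
            return name
        except PackageNotFoundError:
            continue
    return None


# Example: pick whichever TensorFlow flavour the user installed.
tf_dist = find_installed(["tensorflow", "tensorflow-gpu"])
```

This only papers over the problem, of course: it detects the missing
dependency after installation instead of letting the resolver satisfy
it, which is exactly why (b) keeps coming up.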

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mm3/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/mm3/archives/list/distutils-sig@python.org/message/CVW7ZVMHJTY2LCQZ33KO3WJQJM76WNF3/