[Distutils] Re: Depending on external library

2019-01-22 Thread Robert T. McGibbon
It's necessary to bundle the external library with the wheel.

On Linux, this is what the manylinux tags are all about (see PEPs 513 & 571),
and the auditwheel tool (https://github.com/pypa/auditwheel) will bundle
the library into the wheel.

On OS X and Windows, I'm less familiar with the toolchain but the same
principle applies.
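To make the bundling step concrete, here is a minimal sketch of how a build script might assemble the per-platform bundling command. It assumes auditwheel (Linux) or delocate (OS X) is installed in the build environment; the `wheelhouse/` output directory and the `bundle_command` helper are invented for illustration.

```python
def bundle_command(wheel_path, platform):
    """Return the command that copies external shared libraries into the
    wheel (sketch; assumes auditwheel/delocate are installed)."""
    if platform == "linux":
        # auditwheel rewrites the wheel, grafting in external libraries
        return ["auditwheel", "repair", wheel_path, "-w", "wheelhouse/"]
    if platform == "darwin":
        # delocate is the rough OS X equivalent
        return ["delocate-wheel", "-w", "wheelhouse/", wheel_path]
    raise ValueError("no bundling tool known for %r" % platform)
```

The returned list can be handed to `subprocess.check_call` after `pip wheel` has produced the wheel.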

-Robert


On Tue, Jan 22, 2019 at 6:41 AM Jeroen Demeyer  wrote:

> Hello,
>
> I have a Python project that depends on an external C library (which is
> unrelated to Python and NOT part of the OS).
>
> For an sdist, this is easy: my setup.py assumes that the library is
> pre-installed somewhere on the system where setuptools can find it.
>
> However, is there a standard solution for packaging such a project as a
> wheel? Ideally, the project should "just work" when doing pip install on
> it, which means that the external library should somehow be bundled in
> the wheel.
>
> I know that for example numpy does that, but it also has a very
> complicated build system (containing a fork of distutils!).
>
> Does anybody know any pointers for this? Or if you think that this is a
> minefield which is not really supported, feel free to say so.
>
>
> Thanks,
> Jeroen.
> --
> Distutils-SIG mailing list -- distutils-sig@python.org
> To unsubscribe send an email to distutils-sig-le...@python.org
> https://mail.python.org/mailman3/lists/distutils-sig.python.org/
> Message archived at
> https://mail.python.org/archives/list/distutils-sig@python.org/message/6RZVKM5YQVAAOR5ENDS34HH5VFXHZ22G/
>


-- 
-Robert


[Distutils] Re: Idea: perennial manylinux tag

2018-12-02 Thread Robert T. McGibbon
Hi,

As the original author of auditwheel and co-author of PEP 513, I figure I
should probably chime in.

I suspect that *I* am one of the major reasons that the manylinux1 ->
manylinux2010 transition has been unreasonably drawn out, rather than any
particular design flaw in the versioning scheme (manylinux_{cardinal
number} vs. manylinux_{year} vs. manylinux_{glibc version}).

I wrote auditwheel while I was finishing up graduate school. For years, the
"test suite" was just a couple wheel files hosted on my school's
per-student cgi-bin/ directory (which recently stopped working, since I'm
no longer a student). The logging was just random print statements. It
worked (I think), but it wasn't particularly well designed, nor was it well
tested. And it was basically a one-person project. After I finished my
Ph.D., I got a full time job and mostly stopped contributing to open
source. I think a lot of the reason for the delay in the manylinux2010
transition was that nobody was present and accounted for to develop
auditwheel.

Now, the project is in a much better place. Elana Hashman is doing an
awesome job leading auditwheel, and it seems like there's new momentum
for manylinux2010. With proper maintenance and testing for auditwheel now
in place, I don't think it will be as hard to jump to the next iteration of
manylinux (e.g. manylinux2014) as it was to jump from manylinux1 to
manylinux2010.

-Robert

On Fri, Nov 30, 2018 at 3:12 AM Nathaniel Smith  wrote:

> Hi all,
>
> The manylinux1 -> manylinux2010 transition has turned out to be very
> difficult. Timeline so far:
>
> March 2017: CentOS 5 went EOL
> April 2018: PEP 571 (manylinux2010) accepted
> May 2018: support for manylinux2010 lands in warehouse
> November 2018: support lands in auditwheel, and pip master
> December 2018: 21 months after CentOS 5 EOL, we still don't have an
> official build environment, or support in a pip release
>
> We'll get through this, but it's been super painful and maybe we can
> change things somehow so it will suck less next time.
>
> We don't have anything like this pain on Windows or macOS. We never have
> to update pip, warehouse, etc., after those OSes hit EOLs. Why not?
>
> On Windows, we have just two tags: "win32" and "win_amd64". These are
> defined to mean something like "this wheel will run on any recent-ish
> Windows system". So the meaning of the tag actually changes over time: it
> used to be that if a wheel said it ran on win32, then that meant it would
> work on winxp, but since winxp hit EOL people started uploading "win32"
> wheels that don't work on winxp, and that's worked fine.
>
> On macOS, the tags look like "macosx_10_9_x86_64". So here we have the OS
> version embedded in the tag. This means that we do occasionally switch
> which tags we're using, kind of like how manylinux1 -> manylinux2010 is
> intended to work. But, unlike for the manylinux tags, defining a new macosx
> tag is totally trivial: every time a new OS version is released, the tag
> springs into existence without any human intervention. Warehouse already
> accepts uploads with this tag; pip already knows which systems can install
> wheels with this tag, etc.
>
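The reason no human intervention is needed for the macOS tags is that they can be decoded mechanically. A rough sketch (assuming the `macosx_<major>_<minor>_<arch>` layout described above; function names invented):

```python
def parse_macosx_tag(tag):
    # Split e.g. 'macosx_10_9_x86_64' into an OS version and architecture.
    parts = tag.split("_")
    assert parts[0] == "macosx"
    major, minor = int(parts[1]), int(parts[2])
    arch = "_".join(parts[3:])   # rejoin multi-part arches like x86_64
    return (major, minor), arch

def tag_installable(tag, system_version):
    # A wheel tagged for 10.9 installs on 10.9 or anything newer.
    (major, minor), _arch = parse_macosx_tag(tag)
    return system_version >= (major, minor)
```

Any new OS release is automatically covered: the installer only compares version tuples, so no tool needs updating.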
> Can we take any inspiration from this for manylinux?
>
> We could do the Windows thing, and have a plain "manylinux" tag that means
> "any recent-ish glibc-based Linux". Today it would be defined to be "any
> distro newer than CentOS 6". When CentOS 6 goes out of service, we could
> tweak the definition to be "any distro newer than CentOS 7". Most parts of
> the toolchain wouldn't need to be updated, though, because the tag wouldn't
> change, and by assumption, enforcement wouldn't really be needed, because
> the only people who could break would be ones running on unsupported
> platforms. Just like happens on Windows.
>
> We could do the macOS thing, and have a "manylinux_${glibc version}" tag
> that means "this package works on any Linux using glibc newer than ${glibc
> version}". We're already using this as our heuristic to handle the current
> manylinux profiles, so e.g. manylinux1 is effectively equivalent to
> manylinux_2_5, and manylinux2010 will be equivalent to manylinux_2_12. That
> way we'd define the manylinux tags once, get support into pip and warehouse
> and auditwheel once, and then in the future the only thing that would have
> to change to support new distro releases or new architectures would be to
> set up a proper build environment.
>
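Under the `manylinux_${glibc version}` scheme, the installer-side compatibility check becomes purely mechanical as well. A sketch of how pip could decide whether a tag applies, assuming it runs on a glibc-based Linux (`gnu_get_libc_version` is a real glibc function; the helper names are invented):

```python
import ctypes

def glibc_version():
    # Ask the C library of the running process for its version, e.g. "2.17".
    libc = ctypes.CDLL(None)
    f = libc.gnu_get_libc_version
    f.restype = ctypes.c_char_p
    return f().decode("ascii")

def manylinux_tag_compatible(tag):
    # 'manylinux_2_12' is installable when the local glibc is >= 2.12.
    _prefix, major, minor = tag.split("_")
    here = tuple(int(p) for p in glibc_version().split(".")[:2])
    return here >= (int(major), int(minor))
```

With this rule in place, new tags like manylinux_2_24 "spring into existence" for free, just like the macOS ones.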
> What do y'all think?
>
> -n


-- 
-Robert

Re: [Distutils] Installation problem

2017-05-31 Thread Robert T. McGibbon
This isn't actually the right place to ask for installation help, Aneesh.
Your best bet would be the python-list or python-help mailing list.

-Robert

On Tue, May 30, 2017 at 7:24 PM, Aneesh Kona  wrote:

> Hi,
> I'm getting an error while installing software,
> I did installed firstly but did in another in local disk,
> but I want to install again, now it shows 'fatal error,' can you please
> let me fix it, so that I can install again in C disk,
> plz hand me in this particular,
> Thanks.
>
> Venkata Aneesh Kona
> Graduate Assistant
> Department of Mining Engineering and Management
> South Dakota School of Mines & Technology
> @: venkataaneesh.k...@mines.sdsmt.edu
> ph: 6823042622
>


-- 
-Robert


Re: [Distutils] Maintaining a curated set of Python packages

2016-12-02 Thread Robert T. McGibbon
Isn't this issue already solved by (and the raison d'être of) the multiple
third-party Python redistributors, like the various OS package maintainers,
Continuum's Anaconda, Enthought Canopy, ActiveState Python, WinPython, etc?

-Robert

On Thu, Dec 1, 2016 at 4:45 AM, Freddy Rietdijk 
wrote:

> Hi,
>
> I would like to propose that, as a community, we jointly maintain a
> curated set of Python packages that are known to work together. These
> packages would receive security updates for some time and every couple of
> months a new major release of the curated set comes available. The idea of
> this is inspired by Haskell LTS, so maybe we should call this PyPI LTS?
>
> So why a PyPI LTS?
>
> PyPI makes available all versions of packages that were uploaded, and by
> default installers like pip will try to use the latest available versions
> of packages, unless told otherwise. With a requirements.txt file (or a
> future pipfile.lock) and setup.py we can pin as much as we like our
> requirements of respectively the environment and package requirements,
> thereby making a more reproducible environment possible and also fixing the
> API for developers. Pinning requirements is often a manual job, although
> one could use pip freeze or other tools.
>
> A common problem is when two packages in a certain environment require
> different versions of a package. Having a curated set of packages,
> developers could be encouraged to test against the latest stable and
> nightly of the curated package set, thereby increasing compatibility
> between different packages, something I think we all want.
>
> Having a compatible set of packages is not only interesting for
> developers, but also for downstream distributions. All distributions try to
> find a set of packages that are working together and release them. This is
> a lot of work, and I think it would be in everyone's benefit if we try to
> solve this issue together.
>
> A possible solution
>
> Downstream, that is developers and distributions, will need a set of
> packages that are known to work together. At minimum this would consist of,
> per package, the name of the package and its version, but for
> reproducibility I would propose adding the filename and hash as well.
> Because there isn't any reliable method to extract the requirements of a
> package, I propose also including `setup_requires`, `install_requires`, and
> `tests_require` explicitly. That way, distributions can automatically build
> recipes for the packages (although non-Python dependencies would still have
> to be resolved by the distribution).
>
> The package set would be released as lts-YYYY-MM-REVISION, and developers
> can choose to track a specific revision, but would typically be asked to
> track only lts-YYYY-MM, which would resolve to the latest REVISION.
>
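Resolving a track name to its latest revision is simple once the released sets are indexed. A sketch, assuming a JSON-like index of releases (the set names, package entries, and `resolve` helper are all invented for illustration):

```python
# Hypothetical index: released set name -> pinned packages
# (name -> version/filename/hash entries, as the proposal describes).
INDEX = {
    "lts-2016-12-1": {"requests": {"version": "2.12.0"}},
    "lts-2016-12-2": {"requests": {"version": "2.12.3"}},
}

def resolve(track):
    # 'lts-2016-12' resolves to the highest REVISION present in the index.
    revisions = [name for name in INDEX if name.startswith(track + "-")]
    return max(revisions, key=lambda n: int(n.rsplit("-", 1)[1]))
```

Developers tracking the bare `lts-2016-12` name would transparently pick up security-fix revisions, while pinning the full name stays fully reproducible.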
> Because dependencies vary per Python language version, interpreter, and
> operating system, we would have to have these sets for each combination and
> therefore I propose having a source which evaluates to say a TOML/JSON file
> per version/interpreter/OS.
> How this source file should be written I don't know; while I think the Nix
> expression language is an excellent choice for this, it is not possible for
> everyone to use and therefore likely not an option.
>
> Open questions
>
> There are still plenty of open questions.
>
> - Who decides when a package is updated that would break dependents? This
> is an issue all distributions face, so maybe we should involve them.
> - How would this be integrated with pip / virtualenv / pipfile.lock /
> requirements.txt / setup.py? See e.g.
> https://github.com/pypa/pipfile/issues/10#issuecomment-262229620
>
> References to Haskell LTS
>
> Here are several links to some interesting documents on how Haskell LTS
> works.
> - A blog post describing what Haskell LTS is:
> https://www.fpcomplete.com/blog/2014/12/backporting-bug-fixes
> - Rules regarding uploading and breaking packages:
> https://github.com/fpco/stackage/blob/master/MAINTAINERS.md#adding-a-package
> - The actual LTS files: https://github.com/fpco/lts-haskell
>
>
> What do you think of this proposal? Would you be interested in this as
> developer, or packager?
>
>
> Freddy
>
>
>
>


-- 
-Robert


Re: [Distutils] If you want wheel to be successful, provide a build server.

2016-05-26 Thread Robert T. McGibbon
> I want to get something setup that would allow people to only need to
upload
> a source release to PyPI and then have wheels automatically built for them
> (but not mandate that- Projects that wish it should always be able to
control
> their wheel generation). I don’t know what that would specifically look
> like, if someone is motivated to work on it I’m happy to help figure out
what
> it should look like and provide guidance where I can, otherwise it’ll wait
> until I get around to it.

One first step towards this that's a natural follow-on to the manylinux work
might be to define an overall build configuration file format and process for
automating the whole wheel build cycle (I'm thinking of something modeled
after conda-build) that would, among other things:

for potentially multiple versions of python:
- run `pip wheel` (or `setup.py bdist_wheel`) to compile the wheel
- run `auditwheel` (linux) or `delocate` (osx) to bundle any external
libraries
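A sketch of what such a declarative build config might look like, expanded into a per-Python build matrix. The format, field names, and `build_matrix` helper are invented for illustration, modeled loosely on conda-build's recipes:

```python
# Hypothetical declarative wheel-build configuration.
BUILD_CONFIG = {
    "package": "mypkg",
    "pythons": ["2.7", "3.4", "3.5"],
    "steps": [
        "pip wheel . -w dist/",
        "auditwheel repair dist/*.whl",  # Linux; delocate on OS X
    ],
}

def build_matrix(config):
    # One list of shell commands per requested Python version;
    # a driver would run each list inside the matching environment.
    return {py: list(config["steps"]) for py in config["pythons"]}
```

A build service could consume a file like this from the sdist and produce the full set of wheels without any per-project scripting.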

-Robert

On Thu, May 26, 2016 at 2:47 PM, Donald Stufft  wrote:

>
> > On May 26, 2016, at 2:41 PM, Matthew Brett 
> wrote:
> >
> > On Thu, May 26, 2016 at 2:28 PM, Daniel Holth  wrote:
> >> Maybe there could be a way to say "the most recent release that has a
> wheel
> >> for my platform". That would help with the problem of binaries not being
> >> available concurrently with a new source distribution.
> >
> > Yes, that would certainly help get over some of the immediate problems.
> >
> > Sorry for my ignorance - but does ``--only-binary`` search for an
> > earlier release with a binary or just bomb out if the latest release
> > does not have a binary?   It would also be good to have a flag to say
> > "if this is pure Python go ahead and use the source, otherwise error".
> >   Otherwise I guess we'd have to rely on everyone with a pure Python
> > package generating wheels.
>
> I believe it would find the latest version that has a wheel available,
> I could be misremembering though.
>
> >
> > It would be very good to work out a plan for new Python releases as
> > well.  We really need to get wheels up to pypi a fair while before the
> > release date, and it's easy to forget to do that, because, at the
> > moment, we don't have much testing infrastructure to make sure that a
> > range of wheel installs are working OK.
> >
>
> I want to get something setup that would allow people to only need to
> upload
> a source release to PyPI and then have wheels automatically built for them
> (but not mandate that- Projects that wish it should always be able to
> control
> their wheel generation). I don’t know what that would specifically look
> like, if someone is motivated to work on it I’m happy to help figure out
> what
> it should look like and provide guidance where I can, otherwise it’ll wait
> until I get around to it.
>
> —
> Donald Stufft
>
>
>



-- 
-Robert


Re: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions

2016-02-16 Thread Robert T. McGibbon
On Tue, Feb 16, 2016 at 4:10 PM, Glyph Lefkowitz 
wrote:
>
> This whole section is about a tool to automatically identify possible
> issues with these wheels -
> https://www.python.org/dev/peps/pep-0513/#auditwheel - so I don't even
> really know what you mean by this comment.  I thought that the existence of
> this tool is one of the best parts of this PEP!
>

Oh cool! Thanks, Glyph! I had a lot of fun writing it.

-- 
-Robert


Re: [Distutils] Alternative build system abstraction PEP

2016-02-16 Thread Robert T. McGibbon
> ... The ideal would be that there would be a single static pypackage.cfg
that could be dropped into any "version 0" VCS checkout to convert

I think you mean "pypackage.json" here, not .cfg?

-Robert

On Tue, Feb 16, 2016 at 8:43 PM, Robert Collins 
wrote:

> Cool. I've replied on the PR as well, but my understanding from Donald
> was that from pip's perspective, the spawn interface being the
> contract was preferred; the other two differences are I think mostly
> cosmetic - except that I'm very worried about the injection
> possibilities of the calling code being responsible for preserving
> metadata between 'metadata' and 'build-wheel'. That seems like an
> unnecessary risk to me.
>
> -Rob
>
> On 17 February 2016 at 14:32, Nathaniel Smith  wrote:
> > Hi all,
> ...



-- 
-Robert


Re: [Distutils] Does anyone understand what's going on with libpython on Linux?

2016-02-07 Thread Robert T. McGibbon
> What are Debian/Ubuntu doing in distutils so that extensions don't link
> to libpython by default?

I don't know exactly, but one way to reproduce this is simply to build the
interpreter without `--enable-shared`.

I don't know what their reasons are, but I presume that the Debian
maintainers have a well-considered reason for this design.

The PEP 513 text currently says that it's permissible for manylinux1 wheels
to link against libpythonX.Y.so. So presumably for a platform to be
manylinux1-compatible, libpythonX.Y.so should be available. I guess my
preference would be for pip to simply check whether libpythonX.Y.so is
available in its platform detection code (pypa/pip/pull/3446).

Because Debian/Ubuntu is such a big target, instead of just bailing out and
forcing the user to install the sdist from PyPI (which is going to fail,
because Debian installations that lack libpythonX.Y.so also lack Python.h),
I would be +1 for adding some kind of message for this case that says,
"maybe you should `sudo apt-get install python-dev` to get these fancy new
wheels rolling."
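A rough sketch of the kind of check pip could do. The assumption here is that `sysconfig`'s build-time variables reflect whether the interpreter was built with `--enable-shared` (the Fedora-style layout); the helper names are invented:

```python
import sysconfig

def built_with_shared_libpython():
    # Py_ENABLE_SHARED is 1 for --enable-shared builds, 0 otherwise.
    return bool(sysconfig.get_config_var("Py_ENABLE_SHARED"))

def installed_libpython_name():
    # e.g. 'libpython2.7.so.1.0' on shared Linux builds; a static
    # archive name otherwise. May be None on exotic platforms.
    return sysconfig.get_config_var("INSTSONAME")
```

This only tells you how the running interpreter was built, not whether a separate libpythonX.Y.so package happens to be installed, so a real implementation would probably also probe the dynamic linker.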

-Robert

On Sun, Feb 7, 2016 at 12:01 AM, Nathaniel Smith  wrote:

> So we found another variation between how different distros build
> CPython [1], and I'm very confused.
>
> Fedora (for example) turns out to work the way I naively expected:
> taking py27 as our example, they have:
> - libpython2.7.so.1.0 contains the actual python runtime
> - /usr/bin/python2.7 is a tiny (~7 KiB) executable that links to
> libpython2.7.so.1 to do the actual work; the main python package
> depends on the libpython package
> - python extension module packages depend on the libpython package,
> and contain extension modules linked against libpython2.7.so.1
> - python extension modules compiled locally get linked against
> libpython2.7.so.1 by default
>
> Debian/Ubuntu do things differently:
> - libpython2.7.so.1.0 exists and contains the full python runtime, but
> is not installed by default
> - /usr/bin/python2.7 *also* contains a *second* copy of the full
> python runtime; there is no dependency relationship between these, and
> you don't even get libpython2.7.so.1.0 installed unless you explicitly
> request it or it gets pulled in through some other dependency
> - most python extension module packages do *not* depend on the
> libpython2.7 package, and contain extension modules that are *not*
> linked against libpython2.7.so.1.0 (but there are exceptions!)
> - python extension modules compiled locally do *not* get linked
> against libpython2.7.so.1 by default.
>
> The only things that seem to link against libpython2.7.so.1.0 in debian
> are:
> a) other packages that embed python (e.g. gnucash, paraview, perf, ...)
> b) some minority of python packages (e.g. the PySide/QtOpenGL.so
> module is one that I found that directly links to libpython2.7.so.1.0)
>
> I guess that the reason this works is that according to ELF linking
> rules, the symbols defined in the main executable, or in the
> transitive closure of the libraries that the main executable is linked
> to via DT_NEEDED entries, are all injected into the global scope of
> any dlopen'ed libraries.
>
> Uh, let me try saying that again.
>
> When you dlopen() a library -- like, for example, a python extension
> module -- then the extension automatically gets access to any symbols
> that are exported from either (a) the main executable itself, or (b)
> any of the libraries that are listed if you run 'ldd <the
> executable>'. It also gets access to any symbols that are exported by
> itself, or any of the libraries listed if you run 'ldd <the
> library>'. OTOH it does *not* get access to any symbols exported by
> other libraries that get dlopen'ed -- each dlopen specifically creates
> its own "scope".
>
> So the reason this works is that Debian's /usr/bin/python2.7 itself
> exports all the standard Python C ABI symbols, so any extension module
> that it loads automatically get access to the CPython ABI, even if
> they don't explicitly link to it. And programs like gnucash are linked
> directly to libpython2.7.so.1, so they also end up exporting the
> CPython ABI to any libraries that they dlopen.
>
> But, it seems to me that there are two problems with the Debian/Ubuntu
> way of doing things:
> 1) it's rather wasteful of space, since there are two complete
> independent copies of the whole CPython runtime (one inside
> /usr/bin/python2.7, the other inside libpython2.7.so.1).
> 2) if I ever embed cpython by doing dlopen("libpython2.7.so.1"), or
> dlopen("some_plugin_library_linked_to_libpython.so"), then the
> embedded cpython will not be able to load python extensions that are
> compiled in the Debian-style (but will be able to load python
> extensions compiled in the Fedora-style), because the dlopen() the
> loaded the python runtime and the dlopen() that loads the extension
> module create two different scopes that can't see each other's
> symbols. [I'm pretty sure this is right, but linking is 
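The scope rule described above can be observed from within Python itself: on Linux, `dlopen(NULL)` (spelled `ctypes.CDLL(None)`) opens the global symbol scope of the running process, where the CPython ABI symbols are visible whether they come from the executable (Debian-style) or from a linked libpython (Fedora-style):

```python
import ctypes

# Open the global symbol scope of the current process (dlopen(NULL)).
process = ctypes.CDLL(None)

# Py_GetVersion is part of the CPython C ABI; it resolves here regardless
# of which of the two linking layouts the interpreter was built with.
process.Py_GetVersion.restype = ctypes.c_char_p
version = process.Py_GetVersion().decode("ascii")
print(version)
```

This is the same mechanism by which Debian's statically-linked `/usr/bin/python2.7` supplies the ABI to extension modules that have no DT_NEEDED entry for libpython.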

Re: [Distutils] Does anyone understand what's going on with libpython on Linux?

2016-02-07 Thread Robert T. McGibbon
> One option we have then is to remove all DT_NEEDED references to
> libpython in manylinux wheels. We get instant compatibility for bare
> Debian / Ubuntu Python installs, at the cost of causing some puzzling
> crash for the case of: dlopened library with embedded Python
> interpreter where the embedded Python interpreter imports a manylinux
> wheel.

I don't think this is acceptable, since it's going to break some packages
that depend on dlopen.

> On the other hand, presumably this same crash will occur for nearly
> all Debian-packaged Python extension modules (if it is true that they
> do not specify a libpython dependency) - so it seems unlikely that
> this is a common problem.

I don't think so. Debian-packaged extensions that require libpython to
exist (a minority of them, to be sure, but ones that use complex shared
library layouts) just declare a dependency on libpython. For example,
python-pyside has a Depends on libpython2.7:

```
$ apt-cache depends python-pyside.qtcore
python-pyside.qtcore
  Depends: libc6
  Depends: libgcc1
  Depends: libpyside1.2
  Depends: libpython2.7
  Depends: libqtcore4
  Depends: libshiboken1.2v5
  Depends: libstdc++6
  Depends: python
  Depends: python
  Conflicts: python-pyside.qtcore:i386
```

-Robert




On Sun, Feb 7, 2016 at 2:06 PM, Matthew Brett <matthew.br...@gmail.com>
wrote:

> On Sun, Feb 7, 2016 at 4:38 AM, Antoine Pitrou <solip...@pitrou.net>
> wrote:
> > On Sun, 7 Feb 2016 00:25:57 -0800
> > "Robert T. McGibbon" <rmcgi...@gmail.com> wrote:
> >> > What are Debian/Ubuntu doing in distutils so that extensions don't
> link
> >> to libpython by default?
> >>
> >> I don't know exactly, but one way to reproduce this is simply to build
> the
> >> interpreter without `--enable-shared`.
> >
> > See https://bugs.python.org/issue21536. It would be nice if you could
> > lobby for this issue to be resolved... (though that would only be for
> > 3.6, presumably)
>
> Just to unpack from that issue - and quoting a nice summary by you
> (Antoine):
>
> "... the -l flag was added in #832799, for a rather complicated case
> where the interpreter is linked with a library dlopened by an
> embedding application (I suppose for some kind of plugin system)."
>
> Following the link to https://bugs.python.org/issue832799 - the `-l`
> flag (and therefore the dependency on libpython was added at Python
> 2.3 for the case where an executable A dlopens a library B.so . B.so
> has an embedded Python interpreter and is linked to libpython.
> However, when the embedded Python interpreter in B.so loads an
> extension module mymodule.so , mymodule.so does not inherit a
> namespace with the libpython symbols already loaded. See
> https://bugs.python.org/msg18810 .
>
> One option we have then is to remove all DT_NEEDED references to
> libpython in manylinux wheels. We get instant compatibility for bare
> Debian / Ubuntu Python installs, at the cost of causing some puzzling
> crash for the case of: dlopened library with embedded Python
> interpreter where the embedded Python interpreter imports a manylinux
> wheel.
>
> On the other hand, presumably this same crash will occur for nearly
> all Debian-packaged Python extension modules (if it is true that they
> do not specify a libpython dependency) - so it seems unlikely that
> this is a common problem.
>
> Cheers,
>
> Matthew



-- 
-Robert


Re: [Distutils] setup('postinstall'='my.py')

2016-02-02 Thread Robert T. McGibbon
One very simple technique used by some projects like numpy is just to have
``setup.py`` write a file into the source tree before calling setup().

example: https://github.com/numpy/numpy/blob/master/setup.py#L338-L339

-Robert

On Tue, Feb 2, 2016 at 1:48 PM, AltSheets Dev <
altsheets+mailingli...@gmail.com> wrote:

> Hello everyone on distutils-sig@,
> this is a first timer, please be gentle to me *g*
>
> I am just starting with setuptools,
> and I just cannot get my idea working:
>
> At install time,
> I'd like to generate a file with a random UID,
> which will later always be the same.
>
> I had hoped for a setup('postinstall'='my.py') or setup('preinstall'= ...)
> but there isn't one.
>
> Then I have been trying with a customized
> distutils.command.install.install
> class - but so far with no success.
>
> Here is a detailed explanation of my futile attempts:
> https://github.com/altsheets/coinquery/blob/master/README-setupCustom.md
>
> I guess this is be a pretty frequent question?
>
> Happy about any hints.
>
> Thanks.
> :-)
>
>
>


-- 
-Robert


Re: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions

2016-02-01 Thread Robert T. McGibbon
On Mon, Feb 1, 2016 at 3:47 PM, Alexander Walters 
wrote:

>
>
> On 2/1/2016 18:37, Matthias Klose wrote:
>
>> On 30.01.2016 00:29, Nathaniel Smith wrote:
>>
>>> Hi all,
>>>
>>> I think this is ready for pronouncement now -- thanks to everyone for
>>> all their feedback over the last few weeks!
>>>
>>
>> I don't think so.  I am biased because I'm the maintainer for Python in
>> Debian/Ubuntu.  So I would like to have some feedback from maintainers of
>> Python in other Linux distributions (Nick, no, you're not one of these).
>>
>> The proposal just takes some environment and declares that as a
>> standard.  So everybody wanting to supply these wheels basically has to use
>> this environment. Without giving any details, without giving any advice how
>> to produce such wheels in other environments. Without giving any hints how
>> such wheels may be broken with newer environments. Without mentioning this
>> is amd64/i386 only.
>> There might be more. Pretty please be specific about your environment.
>> Have a look how the LSB specifies requirements on the runtime environment
>> ... and then ask yourself why the lsb doesn't have any real value.
>>
>> Matthias
>>
> I... thought the environment this pep describes is the docker image, and
> only the docker image, and anything not made on that docker image is in
> violation of the pep.



It's not correct that anything made outside the docker image is in
violation of the PEP. The docker images are just tools that can help you
compile compliant wheels. Nathaniel and I tried to describe this as
precisely as we could. See this section of the PEP.

To comply with the policy, the wheel needs to (a) not link against any other
external libraries beyond those mentioned in the PEP, (b) *work* on a stock
CentOS 5.11 machine, and (c) not use any narrow-unicode symbols (only
relevant for Python < 3.2). A consequence of requirements (a) and (b) is
that versioned symbols that are referenced in the depended-upon shared
libraries need to use sufficiently old versions of the symbols, which are
noted in the PEP as well. In order to satisfy this aspect of the policy,
the easiest route, from our experience, is to compile the wheel inside a
CentOS 5 environment, but other routes are possible, including

 - statically link everything
 - use your favorite linux distro, but install an older version of glibc
and configure your compiler to point to that.
 - use some inline assembly to instruct the linker to prefer older symbol
versions in libraries like glibc.
 - etc.

I also wrote the auditwheel command line tool that can check to see if a
wheel is manylinux1 compatible, and give you some feedback about what to
fix if it's not.

And furthermore, I've just put up an example project on github that you can
use as a template for compiling manylinux1 wheels using Travis-CI. You can
find it here: https://github.com/pypa/python-manylinux-demo

-Robert


Re: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions

2016-01-30 Thread Robert T. McGibbon
Oh yes, this is very important! I will have to put a check in auditwheel as
well to verify this in the extensions too.

-Robert


Re: [Distutils] How to get pip to really, really, I mean it -- rebuild this damn package!

2016-01-28 Thread Robert T. McGibbon
The pip wheel cache is in ~/Library/Caches/pip/wheels (OS X) or
~/.cache/pip/wheels (Linux). I'm not sure about Windows. You might have
some luck deleting files from there.
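A sketch of those default locations in code. Assumptions: no `PIP_CACHE_DIR` or `XDG_CACHE_HOME` override is in effect, and the Windows branch is a best guess (per the "not sure about Windows" above):

```python
import os
import sys

def default_pip_wheel_cache():
    home = os.path.expanduser("~")
    if sys.platform == "darwin":                          # OS X
        return os.path.join(home, "Library", "Caches", "pip", "wheels")
    if os.name == "nt":                                   # Windows (guess)
        base = os.environ.get("LOCALAPPDATA", home)
        return os.path.join(base, "pip", "Cache", "wheels")
    return os.path.join(home, ".cache", "pip", "wheels")  # Linux
```

Deleting the relevant subdirectories from this path forces pip to rebuild the wheel on the next install.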

-Robert

On Thu, Jan 28, 2016 at 4:28 PM, Chris Barker  wrote:

> Context:
>
> I'm maintaining a number of conda packages of various packages, some  of
> which are mine, some others, some pure python, some extensions, etc.
>
> The way conda build works is you  specify some meta data, and a build
> script(s), and conda:
>
> sets up an isolated environment in which to build.
> installs the build dependencies
> runs the build script
> sees what got installed, and makes a package of it.
> (there are complications, but that's the idea)
>
>
> so what to do in the build script for a python package? the simple answer
> is:
>
> $PYTHON setup.py install
>
> But then you get those god-awful eggs, or if it's not a setuptools built
> package, you don't get the right meta data for pip, etc. to resolve
> dependencies.
>
> [NOTE: I do want all the pip compatible meta data, otherwise, you have pip
> trying to re-install stuff, etc., if someone does install something with pip,
> or pip in editable mode, or...]
>
> so some of us have started doing:
>
> $PYTHON setup.py install --single-version-externally-managed  --record
> record.txt
>
> Which mostly seems to work -- though that is a God-awful command line to
> remember
>
> And it fails if the package has a plain old distutils-based setup.py
>
> so I started going with:
>
> $PYTHON -m pip install ./
>
> and that seemed to work for a while for me. However, I've been having
> problems lately with pip not building and re-installing the package. This is
> really weird, as the conda build environment is a clean environment, there
> really isn't a package already installed.
>
> here is the log:
>
> + /Users/chris.barker/miniconda2/envs/_build/bin/python -m pip install -v
> ./
>
> Processing /Users/chris.barker/miniconda2/conda-bld/work/gsw-3.0.3
>
>   Running setup.py (path:/tmp/pip-umxsOD-build/setup.py) egg_info for
> package from file:///Users/chris.barker/miniconda2/conda-bld/work/gsw-3.0.3
>
> Running command python setup.py egg_info
>
>   Source in /tmp/pip-umxsOD-build has version 3.0.3, which satisfies
> requirement gsw==3.0.3 from
> file:///Users/chris.barker/miniconda2/conda-bld/work/gsw-3.0.3
>
>   Requirement already satisfied (use --upgrade to upgrade): gsw==3.0.3
> from file:///Users/chris.barker/miniconda2/conda-bld/work/gsw-3.0.3 in
> /Users/chris.barker/miniconda2/conda-bld/work/gsw-3.0.3
>
> Requirement already satisfied (use --upgrade to upgrade): numpy in
> /Users/chris.barker/miniconda2/envs/_build/lib/python2.7/site-packages
> (from gsw==3.0.3)
>
> Requirement already satisfied (use --upgrade to upgrade): nose in
> /Users/chris.barker/miniconda2/envs/_build/lib/python2.7/site-packages
> (from gsw==3.0.3)
>
> Building wheels for collected packages: gsw
>
>   Running setup.py bdist_wheel for gsw ...   Destination directory:
> /tmp/tmprPhOYkpip-wheel-
>
>   Running command /Users/chris.barker/miniconda2/envs/_build/bin/python -u
> -c "import setuptools,
> tokenize;__file__='/tmp/pip-umxsOD-build/setup.py';exec(compile(getattr(tokenize,
> 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))"
> bdist_wheel -d /tmp/tmprPhOYkpip-wheel- --python-tag cp27
>
> done
>
>   Stored in directory:
> /Users/chris.barker/Library/Caches/pip/wheels/51/4e/d7/b4cfa75866df9da00f4e4f8a9c5c35cfacfa9e92c4885ec5c4
>
>   Removing source in /tmp/pip-umxsOD-build
>
> Successfully built gsw
>
> Cleaning up...
>
> You are using pip version 8.0.1, however version 8.0.2 is available.
>
> You should consider upgrading via the 'pip install --upgrade pip' command.
>
> So it seems to think it's already installed -- huh? what? IN any case, it
> doesn't install anything. It looks like it's referencing some cache, or
> manifest or something outside of the python environment itself. So if I
> installed it in a different Anaconda environment, it gets confused here.
>
> (BTW, I can replicate this behavior outside of conda build by creating a
> new conda environment by hand, and trying to use pip to build a package
> locally)
>
> So I tried various command-line options:
>
> $PYTHON -m pip install -I -v --upgrade --no-deps ./
> but no dice.
>
> I also tried --no-cache-dir -- no change.
>
> So how can I tell pip that I really do want it to build and install this
> darn package from source, damn it!
>
> Other option -- go back to:
>
> $PYTHON setup.py install --single-version-externally-managed  --record
> record.txt
> And have to fight with pip only for the non-setuptools packages. Does the
> --single-version-externally-managed command do anything different than pip?
>
> Thanks,
>
> -Chris
>
>
>
>
>
>
>
>
>
>
>
> --
>
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR   (206) 526-6959   voice
> 7600 Sand Point Way NE   (206) 526-6329   fax
> 

Re: [Distutils] PEP 513: A Platform Tag for Portable Linux Built Distributions Version

2016-01-26 Thread Robert T. McGibbon
The file pip uses to record its latest check for being out of date (on
linux ~/.cache/pip/selfcheck.json) is a json file that maps the path to the
interpreter's sys.prefix to some metadata about the installation

e.g.

{"/home/rmcgibbo/opt/python-3.5/":
{"last_check":"2016-01-12T00:09:02Z","pypi_version":"7.1.2"}, ...}

Perhaps a similar model should be used for /etc/python/compatibility.cfg.
That is, the configuration option should be keyed on a per-interpreter
basis.

[/usr]
manylinux1 compatible = true

[/path/to/armv7l/python]
manylinux1 compatible = false
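A per-interpreter lookup along these lines could be implemented with the stdlib `configparser`; the file path, section naming, and key are just the proposal above, not anything pip actually reads today:

```python
import configparser

def manylinux1_compatible(cfg_path, prefix):
    """Look up the proposed per-interpreter flag for the interpreter
    installed at ``prefix``; None means 'not configured'."""
    cp = configparser.ConfigParser()
    cp.read(cfg_path)
    if cp.has_section(prefix):
        return cp.getboolean(prefix, "manylinux1 compatible", fallback=None)
    return None
```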


-Robert



On Tue, Jan 26, 2016 at 4:22 AM, Olivier Grisel 
wrote:

> 2016-01-26 11:41 GMT+01:00 Olivier Grisel :
> > Maybe we could devise some syntax for /etc/python/compatibility.conf
> > to state that the manylinux1 entry is only valid for Python
> > interpreters such as distutils.util.get_platform() == 'linux-x86_64'?
>
> Actually this won't work. I just tried to debootstrap ubuntu trusty i386
> on top of ubuntu trusty amd64 and I get the following behavior in
> python3 inside the i386 chroot:
>
> >>> from distutils.util import get_platform
> >>> get_platform()
> 'linux-x86_64'
>
> >>> import platform
> >>> platform.machine()
> 'x86_64'
>
> >>> import sys
> >>> sys.maxsize > 2**32
> False
>
> So this is actually a 32 bit Python declared as running on a
> linux-x86_64 platform (even though everything is i386 inside the
> chroot)...
>
> I get the exact same behavior when installing the 32 bit miniconda on
> ubuntu trusty amd64.
>
> --
> Olivier
> http://twitter.com/ogrisel - http://github.com/ogrisel
>



-- 
-Robert


Re: [Distutils] PEP 513: A Platform Tag for Portable Linux Built Distributions Version

2016-01-26 Thread Robert T. McGibbon
Virtualenvs have `sys.base_prefix`, so perhaps they could just check
whether the key exists for their parent environment?
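A sketch of that fallback: `sys.base_prefix` exists on Python 3.3+, while old-style virtualenv set `sys.real_prefix` instead, so a lookup might try both:

```python
import sys

def config_lookup_keys():
    """Interpreter prefixes to try against the config, most specific
    first (sketch of the proposed fallback, not an implemented API)."""
    keys = [sys.prefix]
    parent = getattr(sys, "real_prefix", None) or getattr(sys, "base_prefix", sys.prefix)
    if parent != sys.prefix:
        keys.append(parent)  # fall back to the parent environment's entry
    return keys
```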

-Robert

On Tue, Jan 26, 2016 at 1:37 PM, Nathaniel Smith <n...@pobox.com> wrote:

> On Jan 26, 2016 1:04 PM, "Robert T. McGibbon" <rmcgi...@gmail.com> wrote:
> >
> > The file pip uses to record its latest check for being out of date (on
> linux ~/.cache/pip/selfcheck.json) is a json file that maps the path to the
> interpreter's sys.prefix to some metadata about the installation
> >
> > e.g.
> >
> > {"/home/rmcgibbo/opt/python-3.5/":
> {"last_check":"2016-01-12T00:09:02Z","pypi_version":"7.1.2"}, ...}
> >
> > Perhaps a similar model should be used for
> /etc/python/compatibility.cfg. That is, the configuration option should be
> keyed on a per-interpreter basis.
> >
> > [/usr]
> > manylinux1 compatible = true
> >
> > [/path/to/armv7l/python]
> > manylinux1 compatible = false
>
> Interesting idea. Any thoughts on how to propagate this information from
> parent envs into virtualenvs?
>
> -n
>



-- 
-Robert


Re: [Distutils] draft PEP: manylinux1

2016-01-25 Thread Robert T. McGibbon
On Mon, Jan 25, 2016 at 10:29 PM, Chris Barker - NOAA Federal <
chris.bar...@noaa.gov> wrote:

> Given that we're starting now ( not a year or two ago) and it'll take
> a while for it to really catch on, we should go CentOS 6 ( or
> equivalent ) now?
>
> CentOS5 was released in 2007! That is a pretty long time in computing.
>

I understand the concern, but I think we should follow the lead of the
other projects
that have been doing portable linux binaries (holy build box, traveling
ruby, portable-pypy,
firefox, enthought, continuum) for some time, all based on CentOS 5. At
some point things
like C++17 support will be important and I agree that we'll need to update
the base spec,
but in the meantime, I don't see this as a problem where we should be the
first mover.

-Robert


Re: [Distutils] PEP 513: A Platform Tag for Portable Linux Built Distributions Version

2016-01-25 Thread Robert T. McGibbon
I agree that this is an important detail. I generally use machines that
have many different Python interpreters installed (some distro-provided and
others in my home directory), and can easily imagine wanting them to have
different behavior w.r.t. manylinux1 wheels.

Perhaps the option could be put in site.py, or somewhere in
lib/pythonX.Y/configX.Y/? I'm not sure what the appropriate solution here
is.

-Robert


Re: [Distutils] PEP 513: A Platform Tag for Portable Linux Built Distributions Version

2016-01-25 Thread Robert T. McGibbon
Alternatively, perhaps this question could be delegated to the pip
maintainers, for pip to store and maintain this configuration option
itself, perhaps by using its cache (for example, Linux pip already stores
some caches in ~/.cache/pip)?

-Robert

On Mon, Jan 25, 2016 at 8:37 PM, Robert T. McGibbon <rmcgi...@gmail.com>
wrote:

> I agree that this is an important detail. I generally use machines that
> have many different Python interpreters installed (some distro-provided and
> others in my home directory), and can easily imagine wanting them to have
> different behavior w.r.t. manylinux1 wheels.
>
> Perhaps the option could be put in site.py, or somewhere in
> lib/pythonX.Y/configX.Y/? I'm not sure what the appropriate solution here
> is.
>
> -Robert
>



-- 
-Robert


[Distutils] PEP 513: A Platform Tag for Portable Linux Built Distributions Version

2016-01-25 Thread Robert T. McGibbon
Hi,

This PEP is an updated version of the draft manylinux1 PEP posted to this
list a couple days
ago by Nathaniel and myself. The changes reflect the discussion on the list
(thanks to everyone
for all of the feedback), and generally go to the clarity and precision of
the text.

HTML version: https://github.com/manylinux/manylinux/blob/master/pep-513.rst

-Robert



PEP: 513
Title: A Platform Tag for Portable Linux Built Distributions
Version: $Revision$
Last-Modified: $Date$
Author: Robert T. McGibbon <rmcgi...@gmail.com>, Nathaniel J. Smith
<n...@pobox.com>
BDFL-Delegate: Nick Coghlan <ncogh...@gmail.com>
Discussions-To: Distutils SIG <distutils-sig@python.org>
Status: Draft
Type: Informational
Content-Type: text/x-rst
Created: 19-Jan-2016
Post-History: 19-Jan-2016, 25-Jan-2016


Abstract


This PEP proposes the creation of a new platform tag for Python package built
distributions, such as wheels, called ``manylinux1_{x86_64,i386}`` with
external dependencies limited to a standardized, restricted subset of
the Linux kernel and core userspace ABI. It proposes that PyPI support
uploading and distributing wheels with this platform tag, and that ``pip``
support downloading and installing these packages on compatible platforms.


Rationale
=

Currently, distribution of binary Python extensions for Windows and OS X is
straightforward. Developers and packagers build wheels [1]_ [2]_, which are
assigned platform tags such as ``win32`` or ``macosx_10_6_intel``, and upload
these wheels to PyPI. Users can download and install these wheels using tools
such as ``pip``.

For Linux, the situation is much more delicate. In general, compiled Python
extension modules built on one Linux distribution will not work on other Linux
distributions, or even on different machines running the same Linux
distribution with different system libraries installed.

Build tools using PEP 425 platform tags [3]_ do not track information about the
particular Linux distribution or installed system libraries, and instead assign
all wheels the too-vague ``linux_i386`` or ``linux_x86_64`` tags. Because of
this ambiguity, there is no expectation that ``linux``-tagged built
distributions compiled on one machine will work properly on another, and for
this reason, PyPI has not permitted the uploading of wheels for Linux.

It would be ideal if wheel packages could be compiled that would work on *any*
linux system. But, because of the incredible diversity of Linux systems -- from
PCs to Android to embedded systems with custom libcs -- this cannot
be guaranteed in general.

Instead, we define a standard subset of the kernel+core userspace ABI that,
in practice, is compatible enough that packages conforming to this standard
will work on *many* linux systems, including essentially all of the desktop
and server distributions in common use. We know this because there are
companies who have been distributing such widely-portable pre-compiled Python
extension modules for Linux -- e.g. Enthought with Canopy [4]_ and Continuum
Analytics with Anaconda [5]_.

Building on the compatibility lessons learned from these companies, we thus
define a baseline ``manylinux1`` platform tag for use by binary Python
wheels, and introduce the implementation of preliminary tools to aid in the
construction of these ``manylinux1`` wheels.


Key Causes of Inter-Linux Binary Incompatibility


To properly define a standard that will guarantee that wheel packages meeting
this specification will operate on *many* linux platforms, it is necessary to
understand the root causes which often prevent portability of pre-compiled
binaries on Linux. The two key causes are dependencies on shared libraries
which are not present on users' systems, and dependencies on particular
versions of certain core libraries like ``glibc``.


External Shared Libraries
-

Most desktop and server linux distributions come with a system package manager
(examples include ``APT`` on Debian-based systems, ``yum`` on
``RPM``-based systems, and ``pacman`` on Arch linux) that manages, among other
responsibilities, the installation of shared libraries installed to system
directories such as ``/usr/lib``. Most non-trivial Python extensions will depend
on one or more of these shared libraries, and thus function properly only on
systems where the user has the proper libraries (and the proper
versions thereof), either installed using their package manager, or installed
manually by setting certain environment variables such as ``LD_LIBRARY_PATH``
to notify the runtime linker of the location of the depended-upon shared
libraries.
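The policy that grew out of this observation is a short whitelist of sonames an extension may depend on; everything else must be vendored or statically linked. A toy version of that check (the whitelist here is abbreviated; see the PEP text for the complete set):

```python
# Abbreviated from the manylinux1 whitelist; the full policy permits a
# few more ubiquitous libraries (libICE, libSM, libGL, ...).
MANYLINUX1_SONAMES = {
    "libc.so.6", "libm.so.6", "libdl.so.2", "libpthread.so.0",
    "librt.so.1", "libcrypt.so.1", "libgcc_s.so.1", "libstdc++.so.6",
    "libX11.so.6", "libglib-2.0.so.0",
}

def external_violations(needed_sonames):
    """Return the DT_NEEDED entries that fall outside the whitelist and
    would therefore need to be vendored or statically linked."""
    return sorted(set(needed_sonames) - MANYLINUX1_SONAMES)
```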


Versioning of Core Shared Libraries
---

Even if the developers of a Python extension module wish to use no
external shared libraries, the modules will generally have a dynamic runtime
dependency on the GNU C library, ``glibc``. While it is possible, st

Re: [Distutils] draft PEP: manylinux1

2016-01-23 Thread Robert T. McGibbon
On Sat, Jan 23, 2016 at 6:19 PM, Chris Barker  wrote:

>
> 1)  each package that needs a third partly lib statically links it in.
> 2)  each package that needs a third partly lib provides it, linked with a
> relative path (IIUC, that's how most Windows packages are done).
> 3) We establish some standard for providing binary libs as wheels, so that
> other packages can depend on them and link to them.
>

In my view, *all* of these are valid options. I think much of this will
need to be worked out by the communities -- especially if individual
packages and subcommunities decide to take the option (3) approach. I hope
this PEP will enable the communities involved in OpenGIS, audio processing,
image processing, etc to work out the solutions that work for them and
their users.

Perhaps one thing that is missing from the PEP is an explicit statement
that option (3) is compatible with the manylinux1 tag -- bundling is a
valid solution, but it's not the *only* solution.

-Robert


Re: [Distutils] draft PEP: manylinux1

2016-01-22 Thread Robert T. McGibbon
On Jan 22, 2016 9:04 AM, "Lowekamp, Bradley (NIH/NLM/LHC) [C]" <
blowek...@mail.nih.gov> wrote:
>
> Hello,
>
> I noticed that libpython is missing from the lists of dependent
libraries. Also the “manylinux” Docker image has its Python versions
compiled with libpython static.
>
> Does this mean that we must do static linking against libpython?

This is a bug/imprecision in the PEP. Manylinux1 wheels *can* link against
libpython (the appropriate version for whatever python they're targeting),
and the latest version of the docker image uses a shared libpython now.

-Robert


Re: [Distutils] draft PEP: manylinux1

2016-01-21 Thread Robert T. McGibbon
On Thu, Jan 21, 2016 at 8:53 AM, M.-A. Lemburg  wrote:

> [...]
> What we need is a system that provides a few dimensions
> for various system specific differences (e.g. bitness,
> architecture) and a recommendation for library
> versions of a few very basic libraries to use when
> compiling for those systems.
>
> I believe the PEP is almost there, it just needs to use a
> slightly different concept, since limiting the set of
> allowed libraries does not provide the basis of an open
> system which PyPI/pip/wheels need to be.



Sorry, I still don't think I quite understand. Is your position essentially
that we should allow wheels to link against any system library included
in (for example) stock openSUSE 11.5? Or that we should allow wheels
to link against any library that can be installed into openSUSE 11.5 using
`yum install `?

-Robert


Re: [Distutils] draft PEP: manylinux1

2016-01-21 Thread Robert T. McGibbon
On Jan 21, 2016 11:55 AM, "Chris Barker"  wrote:
>
> On Thu, Jan 21, 2016 at 11:37 AM, Nathaniel Smith  wrote:
>>
>>  Glyph told us last week that this proposal is exactly how the
cryptography package wants to handle their openssl dependency:
https://www.mail-archive.com/distutils-sig@python.org/msg23506.html
>
> well, SSL is a pretty unique case --  there's one where controlling the
version of the lib, and having it be recent, is critical.
>
> We will have issues with all sorts of other "Pretty common, but can't
count on it" libs.
>>
>> Is manylinux1 the perfect panacea for every package? Probably not. In
particular it's great for popular cross platform packages, because it works
now and means they can basically reuse the work that
>>
>> they're already doing to make static windows and OSX wheels;
>
> except that static linking is a pain on LInux  -- the toolchain really
doesn't want you to do that :-) It's also not part of the culture.
>
> Windows is "working" because of Chris Gohlke's heroic efforts.
>
> OS-X is kind-of sort of working, because of Matthew Brett's also heroic
efforts.
>
> But Anaconda and Canopy exist, and are popular, for a reason -- they
solve a very real problem, and many linux is only solving a very small part
of that problem -- the easy part.
>
> Maybe there will be a Gohlke-like figure that will step up and build
statically linked wheels for all sorts of stuff -- but is that the end-game
we want anyway? everyting statically linked?
>
> One plus -- with Docker and CI systems, it's getting pretty easy to set
up a build sandbox that only has the manylinux libs on it -- so not too
hard to automate and test your builds
>
>> > The idea to include the needed share libs in the wheel
>> > goes completely against the idea of relying on a system
>> > vendor to provide updates and security fixes. In some cases,
>> > this may be reasonable, but as design approach, it's
>> > not a good idea.
>
> Is this any different than static linking -- probably not. And that's
pretty much what I mean by the culture of dynamic linking on Linux.

The difference between static linking and vendoring the shared libraries
into the wheel using ``auditwheel repair`` is essentially just that the
second option is much easier because it doesn't require modifications to
the build system. Other than that, they're about the same.

This is why Nathaniel was able to make a PySide wheel in 5 minutes. I don't
think heroic efforts are really required.
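Conceptually, ``auditwheel repair`` copies the external shared libraries into the unpacked wheel and points the extension modules at the copies. A heavily simplified sketch (the real tool also hashes and renames the vendored libraries to avoid clashes, and rewrites ELF RPATHs with patchelf):

```python
import os
import shutil

def vendor_libraries(unpacked_wheel_dir, external_libs, pkg_name):
    """Copy external shared libs into a .libs dir inside the unpacked
    wheel (simplified sketch of what ``auditwheel repair`` does)."""
    libs_dir = os.path.join(unpacked_wheel_dir, pkg_name + ".libs")
    os.makedirs(libs_dir, exist_ok=True)
    for lib in external_libs:
        shutil.copy2(lib, libs_dir)
    # The real tool would now set each extension module's RPATH to
    # $ORIGIN/../<pkg_name>.libs so the loader finds these copies.
    return libs_dir
```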


Re: [Distutils] draft PEP: manylinux1

2016-01-21 Thread Robert T. McGibbon
On Jan 21, 2016 12:04 PM, "Matthias Klose"  wrote:

> so this is x86_64-linux-gnu. Any other architectures?  Any reason to
choose gcc 4.8.2 which is known for its defects?

I'm not aware of those defects. Can you share more information?

-Robert


Re: [Distutils] draft PEP: manylinux1

2016-01-21 Thread Robert T. McGibbon
On Thu, Jan 21, 2016 at 7:29 PM, Donald Stufft <don...@stufft.io> wrote:

>
> On Jan 21, 2016, at 10:27 PM, Robert T. McGibbon <rmcgi...@gmail.com>
> wrote:
>
> I'm hopeful that in the future, if this PEP is accepted
> and we can make the tooling and documentation excellent, uploading Linux
> wheels can start to become a
> standard part of the PyPI release cycle for package maintainers.
>
>
>
> Longer term, I really want to (but have put zero effort into trying to
> plan it out beyond pie in the sky dreams) have it so that maintainers can
> publish a sdist to PyPI, and have PyPI automatically build wheels for you.
> This is a long way down the road though.
>


Yes, absolutely! I think this will actually not be _too_ difficult for
Linux (because docker). The challenges for Windows
and OS X are more significant.  For the open source communities that I'm
involved in, Travis-CI has really made everyone much more aware and
comfortable with Linux container-based web services that compile our
packages, so
a PyPI wheel farm seems very much within reach over the next year or so.

-Robert


Re: [Distutils] draft PEP: manylinux1

2016-01-21 Thread Robert T. McGibbon
On Thu, Jan 21, 2016 at 9:23 AM, Donald Stufft  wrote:
>
> >> On Jan 21, 2016, at 4:31 AM, Nick Coghlan  wrote:
> >> If Donald can provide the list of "most downloaded wheel files" for
> >> other platforms, that could also be a useful guide as to how many
> >> source builds may potentially already be avoided through the draft
> >> "manylinux1" definition.
> >
>
> Or https://gist.github.com/dstufft/ea8a95580b022b233635 if you prefer it
> grouped by project.



I went through this list and compiled manylinux1 wheels for each of the top
15 projects in the
list (py35).

The wheels are here, if you're interested
http://stanford.edu/~rmcgibbo/wheelhouse/. The amount
of work was pretty small -- the complete dockerfile for this, building off
from the
quay.io/manylinux/manylinux image mentioned in the pep draft is here:
https://github.com/rmcgibbo/manylinux/blob/popular/build-popular/Dockerfile

-Robert


Re: [Distutils] draft PEP: manylinux1

2016-01-21 Thread Robert T. McGibbon
On Thu, Jan 21, 2016 at 7:19 PM, Chris Barker - NOAA Federal <
chris.bar...@noaa.gov> wrote:

> Cool! Are the non-manylinux dependencies all statically linked?
>


No, they're not statically linked. They're vendored inside the wheel using
``auditwheel repair``.

-Robert


Re: [Distutils] draft PEP: manylinux1

2016-01-21 Thread Robert T. McGibbon
On Thu, Jan 21, 2016 at 7:32 PM, Nick Coghlan  wrote:

>
> However, it does suggest a possible alternative approach to naming
> these compatibility subsets: what if the name of this particular
> platform compatibility tag was something like "linux-sciabi1", rather
> than "manylinux1"?
>

That's an interesting idea, but I personally don't see the manylinux1 list
as particularly
"scientific". If anything, I'd call it "minimal".


> That way, if someone later wanted to propose "linux-guiabi1" or
> "linux-audioabi1" or "linux-videoabi1", that could be done.


This would be something, but if we want to have Linux binary wheels that
tightly integrate
with system libraries for certain use cases, the *really* valuable thing
would be https://github.com/pypa/interoperability-peps/pull/30/files, more
so than specific ABI tags, IMO.

-Robert


Re: [Distutils] draft PEP: manylinux1

2016-01-21 Thread Robert T. McGibbon
On Thu, Jan 21, 2016 at 6:47 PM, Chris Barker - NOAA Federal <
chris.bar...@noaa.gov> wrote:


> Maybe the infrastructure has improved, and the community grown enough,
> that this will all work. We'll see.
>


Yeah, that's my hope too. Currently, the community lacks the permissions to
upload Linux wheels to PyPI. Given
that, it's no surprise that the community hasn't formed yet. I'm hopeful
that in the future, if this PEP is accepted
and we can make the tooling and documentation excellent, uploading Linux
wheels can start to become a
standard part of the PyPI release cycle for package maintainers.

-Robert