On 15.08.22 at 14:30, Fons Adriaensen wrote:
> I have some mixed python/C++ packages, e.g. zita-audiotools
> and zita-jacktools.
>
> To install these I expect the following to happen:
>
> 1. The C++ parts are compiled and combined into a *.so
>    file which is a python extension.
> 2. The *.so and the python parts, any data etc. get
>    installed into the user's /usr/lib/python*.*/site-packages.

Nowadays, the way to install a Python package should be via a wheel package (unless it is installed via a (Linux) distro package).

These wheel files are basically Zip archives with some additional metadata and can contain compiled Python extensions (which of course makes them Python-version-, OS- and architecture-dependent).

The advantages of using wheels are:

- no code from the package is executed at installation time (e.g. setup.py etc.)
- users can choose to install system-wide, under ~/.local or into a virtual environment
- installation is generally faster, and wheels can be cached to speed up repeated installations (e.g. in a development or CI environment)
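To see that a wheel really is just a Zip archive with a .dist-info directory, you can build a stand-in layout and pack it with the stdlib zipfile CLI. This is only an illustration — the package name zita_xyz, its version and the file contents are made up:

```shell
# Build a stand-in wheel layout and pack it with the stdlib zipfile CLI.
mkdir -p demo/zita_xyz demo/zita_xyz-1.0.dist-info
echo '# package code (or a compiled *.so extension)' > demo/zita_xyz/__init__.py
printf 'Metadata-Version: 2.1\nName: zita-xyz\nVersion: 1.0\n' \
    > demo/zita_xyz-1.0.dist-info/METADATA
(cd demo && python3 -m zipfile -c ../zita_xyz-1.0-py3-none-any.whl \
    zita_xyz zita_xyz-1.0.dist-info)
# Listing the archive shows the package files plus the metadata:
python3 -m zipfile -l zita_xyz-1.0-py3-none-any.whl
```

The py3-none-any tag in the file name marks a pure-Python wheel; a wheel containing a compiled extension instead carries interpreter/ABI/platform tags such as cp310-cp310-linux_x86_64.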


> Originally this used distutils, when that got 'deprecated'
> this changed to setuptools. [...]
> Then I got warnings telling me that calling setup.py directly
> is now also deprecated, and that I should use 'official tools'
> to build and install. What exactly that means I was unable to
> find out, but the following seems to work:

This guide explains the up-to-date process to define a package, at least for pure Python packages:

https://packaging.python.org/en/latest/tutorials/packaging-projects/

If you have Python extensions, you need to look into the documentation of your chosen build backend (e.g. poetry, flit, etc.).

In short, you write a pyproject.toml file in which you define which build backend to use. Then, depending on that backend, you either define the package metadata and build instructions in that same file, or (if you still want to use setuptools as a build backend) in a pair of setup.py / setup.cfg files. The goal is to have as little actual Python code in setup.py as possible and to define all static metadata elsewhere.
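For illustration, a minimal pyproject.toml using setuptools as the build backend might look like this (name and metadata are placeholders; note that newer setuptools, >= 61, can read the [project] table directly, while older versions need setup.cfg / setup.py for the metadata, and a package with C++ extensions still needs a small setup.py defining ext_modules):

```toml
[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"

[project]
name = "zita-xyz"
version = "1.0.0"
description = "Example package (placeholder metadata)"
requires-python = ">=3.8"
```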

WARNING: Running pip as the 'root' user can result in broken
permissions and conflicting behaviour with the system package
manager. It is recommended to use a virtual environment instead.

What this basically says is that you shouldn't normally mess with the system-wide Python packages in /usr/lib/pythonX.Y/site-packages, because this location is used by the Python packages installed via the distribution's packaging system. Programs that are themselves installed via distro packages may rely on the packages installed there, and on specific versions of them.

If you install a Python package via pip as root into this location, pip will resolve dependencies, which may result in some existing packages being upgraded to newer versions. As the warning says, this may cause conflicts (or subtly break things).

> Now clearly installing things in site-packages requires root,
> so what is then the recommended method ?? And why the virtual
> environment (which is used by build anyway) ??

IMO you should distinguish between these scenarios:

a) A program depends on, say, zita-xyz and is installed via a distro package. It should then depend on the respective distro package for zita-xyz, which installs into /usr/lib/pythonX.Y/site-packages. You can let the distro packagers worry about the pip warning; you just provide the correct pyproject.toml and other build files.

b) A program depends on zita-xyz and is installed via pip. Then it should declare this dependency in pyproject.toml, and pip will take care of installing zita-xyz. The user running pip can decide whether to install the program system-wide (not recommended), under ~/.local, or into a virtual environment (e.g. using pipx [1]); zita-xyz will be installed in the same location. Installing both into a dedicated virtual environment ensures that there are no conflicts with other programs that may depend on a different version of zita-xyz.

c) A developer (or "power-user") uses zita-xyz for the implementation of a program or some utility scripts. He/she uses pip to install zita-xyz (and its dependencies, if any).

If he/she wants zita-xyz to be importable from any Python interpreter with a matching version started *by that user*, he/she installs it with the --user option into ~/.local.
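For this --user case, the target directory can be checked with the stdlib site module. A small sketch (the install line is commented out because it needs network access, and the package name is only an example):

```shell
# python3 -m pip install --user zita-jacktools   # example; requires network
# Show the per-user site-packages directory that --user installs into:
python3 -m site --user-site
```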

If he/she wants to make sure that a certain program always has access to and uses a certain zita-xyz version it depends on, he/she installs it into dedicated virtual environment for that (collection) of program(s) and always starts the program (or the Python interpreter) from the bin directory of that virtual env.
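A minimal sketch of that dedicated-venv approach (the directory name myapp-env and the package names are made up; --without-pip only keeps the example self-contained, a real setup would of course keep pip):

```shell
# Create a dedicated virtual environment for one program (or a set of them).
python3 -m venv --without-pip myapp-env
# myapp-env/bin/pip install zita-jacktools myapp   # hypothetical; needs network
# Always run the program (or the interpreter) from the venv's bin directory,
# so it sees exactly the packages installed there:
myapp-env/bin/python -c 'import sys; print(sys.prefix)'
```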

There are several ways to facilitate / automate the latter, e.g. via poetry [2], with which I have had mostly good experiences.

And oh, btw, any virtual environment created by the build process is ephemeral and not relevant to the actual installation.

One last thing: extension packages come with the additional difficulty that they either need to be compiled at installation time by the user (which requires the presence of the needed headers etc.), or somebody (e.g. the project maintainer) needs to provide a host of Python version/OS/architecture-dependent variants of wheel packages. For Windows and macOS this is relatively easy via CI environments like GitLab CI or GitHub Actions. But for Linux, special care needs to be taken for these binary wheels to be compatible with a wide variety of distributions. For more information, look for example at [3].
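As a taste of what [3] automates, cibuildwheel can be configured from pyproject.toml. A hedged sketch — the version selectors and image choice below are illustrative, not a recommendation:

```toml
[tool.cibuildwheel]
# Build CPython 3.8-3.11 wheels, skip 32-bit targets (illustrative selection).
build = "cp38-* cp39-* cp310-* cp311-*"
skip = "*-win32 *_i686"

[tool.cibuildwheel.linux]
# manylinux base images are what make the resulting Linux wheels
# compatible with a wide range of distributions.
manylinux-x86_64-image = "manylinux2014"
```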


Hth, Chris

[1] https://pypi.org/project/pipx/
[2] https://pypi.org/project/poetry/
[3] https://github.com/pypa/cibuildwheel
_______________________________________________
Linux-audio-dev mailing list
[email protected]
https://lists.linuxaudio.org/listinfo/linux-audio-dev
