I tried uv, and I don't see the point of using it. It does what some
other existing tools do. Maybe faster. But the speed-up is in a place
where speed isn't important.

For work, I follow the company's rules, which require setting up a
project in the company's Git server. It doesn't really matter in what
order this happens: local repository first, or Git server first; the
result is the same. I don't have a template for a project, and I don't
see why anyone would need one. I worked with a company that had a
cookiecutter template, and everything it added was unnecessary. I put
up a bit of a fight against that company's policy, which required
using that template for no reason. Eventually, I lost. So I'd use the
template, delete all the garbage it generated, and move on. Really,
all the infrastructure a project needs is a setup.py and a README.
Other configuration files are contextual: do the people working on the
project want a linter, and if so, which one? Do they want to keep some
other documents or configuration around?
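
In that spirit, a minimal setup.py is only a few lines (the names here
are placeholders, of course):

    from setuptools import setup, find_packages

    setup(
        name="myproject",
        version="0.1.0",
        packages=find_packages(),
    )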

When I'm responsible for a project's CI and infrastructure in general,
my approach is to try to install the dependencies with pip and conda
to see what conflicts I'll get, and what unnecessary garbage the
package installer will pull in alongside the packages I actually need.
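
On the pip side, a dry run is enough to see the whole tree before
anything touches the environment. A minimal sketch, assuming pip >=
22.2 (which added --dry-run and --report); the file names are made up:

    import json, subprocess, sys

    # Resolve everything without installing anything; write a
    # machine-readable report of what pip would have done.
    subprocess.run(
        [sys.executable, "-m", "pip", "install", "--dry-run",
         "--report", "report.json", "-r", "requirements.txt"],
        check=True,
    )
    with open("report.json") as f:
        report = json.load(f)
    # Every distribution the resolver picked, wanted or not.
    for item in report["install"]:
        meta = item["metadata"]
        print(meta["name"], meta["version"], item["download_info"]["url"])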
Then it usually comes down to patching the dependency specification in
the packages I want to install, and, because I'm free to put the
patched versions in the company-run PyPI index, I just replace the
original version with the patched one. Sometimes patching takes more
effort, when I need to strip a bunch of junk out of the package:
things like unit tests or unnecessary modules.
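
For a metadata-only patch, rewriting the wheel in place is often
enough. A rough sketch, with made-up package names; note that the
RECORD entry has to be refreshed so its hash still matches the edited
METADATA:

    import base64, hashlib, pathlib, zipfile

    SRC = "somepkg-1.0-py3-none-any.whl"        # hypothetical wheel
    META = "somepkg-1.0.dist-info/METADATA"
    RECORD = "somepkg-1.0.dist-info/RECORD"

    def record_hash(data):
        # RECORD uses urlsafe base64 of the sha256 digest, '=' stripped
        digest = base64.urlsafe_b64encode(hashlib.sha256(data).digest())
        return "sha256=" + digest.rstrip(b"=").decode()

    pathlib.Path("patched").mkdir(exist_ok=True)
    with zipfile.ZipFile(SRC) as zin, \
         zipfile.ZipFile("patched/" + SRC, "w",
                         zipfile.ZIP_DEFLATED) as zout:
        meta = zin.read(META).replace(
            b"Requires-Dist: somedep==1.2.3",   # an overly strict pin...
            b"Requires-Dist: somedep>=1.2",     # ...relaxed
        )
        for info in zin.infolist():
            if info.filename == META:
                zout.writestr(info, meta)
            elif info.filename == RECORD:
                lines = []
                for line in zin.read(RECORD).decode().splitlines():
                    if line.startswith(META + ","):
                        # entries look like "path,sha256=...,size"
                        line = ",".join(
                            [META, record_hash(meta), str(len(meta))])
                    lines.append(line)
                zout.writestr(info, "\n".join(lines) + "\n")
            else:
                zout.writestr(info, zin.read(info))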

Depending on the deployment target, I might byte-compile the package
and remove the sources to slim it down, or combine a collection of
wheels into a single wheel and deploy that instead (especially if the
customer wants to install with pip for some reason).

More typically, though, for development purposes, I record the URLs
pip used to download the packages I managed to get working, and during
CI and routine development I use a script to re-download the packages
from those URLs, skipping any interaction with pip (or uv) altogether;
sketches of both steps follow below. This makes things a lot faster
and, more importantly, a lot more reliable. I then revisit the URL
list every few months to see whether there are any worthwhile updates,
and regenerate it.
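
The re-download script can be very small. A minimal sketch, with
hypothetical file names; urls.txt is the list captured from a previous
known-good pip run (for instance, out of the --report output above):

    import pathlib, urllib.request

    dest = pathlib.Path("wheelhouse")
    dest.mkdir(exist_ok=True)
    for url in pathlib.Path("urls.txt").read_text().split():
        target = dest / url.rsplit("/", 1)[-1]
        if not target.exists():                 # crude local cache
            urllib.request.urlretrieve(url, target)
    # then install offline: pip install --no-index --find-links wheelhouse ...
    # (or unpack the wheels directly, keeping pip out of it entirely)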
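
And the byte-compile-and-strip step mentioned above, as a sketch. The
usual caveat applies: .pyc files are tied to the interpreter version,
and a sources-free layout needs the legacy .pyc location, hence
legacy=True:

    import compileall, pathlib, shutil

    pkg = pathlib.Path("build/somepkg")         # hypothetical unpacked tree
    # write foo.pyc next to foo.py instead of under __pycache__
    compileall.compile_dir(str(pkg), legacy=True, quiet=1)
    for py in pkg.rglob("*.py"):
        py.unlink()                             # ship only the .pyc files
    for cache in pkg.rglob("__pycache__"):
        shutil.rmtree(cache)                    # drop any stale caches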

It's harder to do this with conda, because it would require setting up
a special channel with the necessary packages, and conda is a huge
mess when it comes to building packages... so the amount of work it
would take to remove conda from the development process may be too
much to bother with. If the package needs to be distributed to both
worlds, or just to the conda world, I usually bite the bullet, let
conda (mamba) install things, and then eat dust every few weeks when
they add a new breaking change. The conda world brings a lot of
unnecessary suffering...
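
For what it's worth, the channel itself is the easy part: it's just an
indexed directory (building the packages is where the mess is). A
sketch, with made-up paths, assuming conda-build (or the standalone
conda-index) provides the conda index command:

    import pathlib, shutil, subprocess

    channel = pathlib.Path("/srv/conda-channel")   # hypothetical location
    (channel / "linux-64").mkdir(parents=True, exist_ok=True)
    for pkg in pathlib.Path("patched").glob("*.conda"):
        shutil.copy(pkg, channel / "linux-64")
    # generates repodata.json for each platform subdirectory
    subprocess.run(["conda", "index", str(channel)], check=True)
    # then: conda install -c file:///srv/conda-channel somepkg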