Re: python devs complaining about debian packaging

2024-06-03 Thread Paul Boddie
On Monday, 3 June 2024 16:27:29 CEST Donald Stufft wrote:
> 
> In the interim the packaging toolchain evolved to the point that having
> distutils in the stdlib was no longer of general benefit, and in fact made
> things worse because people had grown accustomed to things like `from
> distutils import setup` being transparently monkeypatched to be setuptools
> under the covers.

The way that distutils could not be relied upon to behave in a sensible way is 
entirely the fault of the developer(s) of setuptools having free rein to 
corrupt the Python packaging stack. In environments where I wanted to install 
only the software I had already acquired to a particular location - which is 
largely what the deployment element of distribution packaging reduces to - I 
did not want to be dealing with setuptools when distutils would do, nor with 
random hacks introduced to make distutils behave like setuptools.

For a while, I routinely stripped out unnecessary setuptools references from 
setup.py files, put distutils support back in, and mostly got the desired 
effect. But in the Python world, once someone teases some fancy new features 
and they catch on, everybody else has to hold on for the wild ride and budget 
for the consequences.

(For some software I have been trying to package, I see now that there is no 
setup.py or anything else, with some more "magic" introduced to be processed 
by yet another tool. I apparently have to get with it, or something to that 
effect, which severely diminishes my interest in packaging that software at 
all. The outcome actually affects the Debian project directly, although not 
very many people seem to care.)

I suppose it also didn't help that distutils entered the standard library in 
the era where it was apparently acceptable to get one's code included and to 
then declare the job done. Back when Python 3 was initially introduced, I 
suggested that the standard library be reviewed and fixed up, especially since 
there was going to be a compatibility break anyway, but there was no appetite 
for it.

Still, I appreciate you engaging with this forum, even if it probably means 
having to defend decisions made by others.

Paul




Re: python devs complaining about debian packaging

2024-06-02 Thread Paul Boddie
On Monday, 27 May 2024 04:07:34 CEST Scott Kitterman wrote:
> 
> While there are technical concerns on both sides, socially I think the
> Python community isn't that interested in outside perspectives.

I managed to dig up these notes from the packaging summit at PyCon:

https://hackmd.io/@pradyunsg/pycon2024-pack-summit

Here's the summit page itself:

https://us.pycon.org/2024/events/packaging-summit/

There is some fixation on the "system Python" in distributions, and the 
following remarks:

"At least one distro team is working on moving their own Python out of the 
way, so users can install their own Python packages [...] Fedora tried 
platform-python and it broke everything, so it didn't really work"

Given the proliferation of "virtual environments" around Python, where you 
just pick your own Python version and accompanying packages, I find it odd 
that the Python packaging community gets so hung up on the system Python. Do 
they want it to just go away or not be on the path or something? Wouldn't 
having a singular upstream Python just cause the same problems when someone 
finds that it isn't the version they need?

For my own amusement and to confirm my own memories, I went back to look at 
the Python Web site as it was in the 1990s, and back then there was no problem 
in providing binaries for the Unix products of choice from that era - AIX, 
FreeBSD, HP-UX, Irix, OSF/1, Solaris, and SunOS 4 - plus a Linux binary 
supplied either as an RPM or in a plain archive:

https://web.archive.org/web/20021030010019/http://www.python.org/ftp/python/binaries-1.4/

What is stopping anyone from doing that all over again? The user downloads the 
binary, puts it in their current directory, and runs their software. Could it 
be that the burden of support is perhaps a little greater than one might 
expect?

Because from that starting point, you have to build multiple versions, and 
then you have to build accompanying libraries, and then you have to support 
third-party packages which need third-party libraries. It wasn't a surprise 
that things like Sun Freeware (http://www.sunfreeware.com) emerged to cater to 
the proprietary platforms, whereas distributions emerged around Linux to 
manage this complexity and provide all this software.

It is easy for the various language communities to focus only on their own 
ecosystems, but everybody's software has to work together. And then there are 
companies targeting various markets that demand software built on a selection 
of different technologies, so you get perspectives like these:

"Why did the PyPI and Conda ecosystem get created? It was originally created 
as an educational teaching language. If all of your tools are in Python, all 
of the things in your ecosystem are supposed to work well together. However, 
the tools that scientists and data scientists use are very commonly written in 
C, C++, etc. and so there’s something called a “native binary problem”. Making 
this stuff compatible across the board is an incredibly challenging issue! 
Conda was created to resolve those binary compatibility issues."

I honestly don't know what these people think operating system distributions 
are doing if not solving the "native binary problem".

Paul




Re: python devs complaining about debian packaging

2024-05-26 Thread Paul Boddie
On Sunday, 26 May 2024 03:33:09 CEST Ian Norton wrote:
> I am puzzled about some of the responses there, how can anyone expect to
> randomly update packages on the system using pip and not have it go wrong
> on any distribution? This is why things like pipenv exist.

Or whatever today's tool is called. That's just one of the problems that the 
core Python packaging scene has. To be fair, though, I found myself using pip 
in a job a couple of years back just to "go with the flow", and it mostly 
worked. My expectations were not particularly high, however.

Where I was just installing things casually, I had pip install things using 
the --user option, which employs the unpleasant but pervasive approach of 
stashing huge volumes of software inside a hidden directory (something 
software like Plasma also does), leaving one to wonder where all the disk 
space has gone. There's also the issue of uninstalling unwanted software, 
which probably still isn't solved.

Where I was installing things in a Web application environment, I made pip 
install in a designated directory, but I seem to remember having to do a fair 
amount of environmental configuration to make sure that Python and Apache were 
able to find all the modules. I guess I could have used virtualenv, venv, 
pipenv (see what I mean?), but one is left wondering how all of these things 
interact with the rest of the system. How does one make Apache "enter" the 
virtual environment of whichever flavour, for instance?
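
(For what it's worth, the answer I have since seen most often involves 
mod_wsgi and its python-home option, which points the daemon process at a 
virtual environment. A minimal sketch, assuming a hypothetical venv at 
/srv/venvs/myapp and a WSGI entry point at /srv/www/myapp/app.wsgi:

# Apache configuration fragment: python-home makes mod_wsgi's daemon
# process resolve packages from the given virtual environment.
WSGIDaemonProcess myapp python-home=/srv/venvs/myapp
WSGIProcessGroup myapp
WSGIScriptAlias /myapp /srv/www/myapp/app.wsgi

Other deployment arrangements presumably need their own equivalent 
incantations.)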

(Yes, "containers" exist, but I am reminded of the saying about solving 
problems with regular expressions: now you have two problems. Particularly 
with the inelegant way Linux provides support for containers.)

With the fundamental Python packaging technology, the one thing that seems to 
have improved somewhat is the availability of wheels - whose implementation is 
probably heavily underappreciated - which avoids the issue I previously had 
where a source package is downloaded by pip and then fails to compile in a bad 
way. Twenty or so years ago, people wanted to replicate the CPAN experience 
for Python, but I doubt that this was what they meant.

The Python core developers have always complained about distribution packaging 
and distributions "ruining the experience for users", all reported rather 
selectively, but this is a situation largely of their own making. They have 
always wanted to do everything themselves, traditionally with the 
justification that Windows and the Mac have to be catered for. People used to 
bandy around the term "stop energy", but pandering to Microsoft and Apple 
seemed to be a significant source of that energy.

Meanwhile, there are competing but incompatible tools like Conda that do 
everything themselves, too. When deploying things conservatively using pip, I 
had it suggested to me that I should use Conda. Conda's shared libraries are 
incompatible with the way Apache is built on Red Hat systems, so the proposed 
"solution" was to run the Web application using the built-in Python Web server 
and to proxy it using mod_proxy in Apache. What a joke!

So, the core developers make their own package repository, downloader, 
dependency resolver, installer, and environment manager. At many points they 
had the opportunity to leave this particular highway of madness, but they 
raised the stakes instead and are now in the business of effectively running 
their own "app store", all without Big Tech's legions of low-cost workers 
trying to prevent malware from being part of everyone's next invocation of 
"pip install".

I could be rather cynical about some of the motivations of people in the core 
development community. I see that a few of the discussion participants have 
worked for Red Hat and at least one of them was involved in the enthusiastic 
deprecation of Python 2 on the basis that people could always pay for 
continued support. No coincidence that RHEL is where people will go for that 
support, I suppose.

And these discussions have always had the tone of people pontificating about 
what "the users" want or do, even speaking for them, and yet never really 
engaging with the actual users. This is combined with a lack of appreciation 
that most users run other software that isn't written in (or for) Python, 
probably because Windows doesn't provide such software out of the box. Some actual 
users are still running Python 2, believe it or not, regardless of whether 
that can be seen from the ivory tower.

In case anyone thinks this rant was a bit unfair, I followed the Python 
mailing lists for almost a couple of decades and saw plenty of discussions on 
such matters on python-dev. Then again, I think the stewardship record of the 
Python language over the last twenty years rather speaks for itself. That is a 
statement left for one to interpret in the way one feels most comfortable 
with.

Paul




Re: Request to join the Debian Python Team

2024-04-04 Thread Paul Boddie
On Thursday, 4 April 2024 18:02:42 CEST Pierre-Elliott Bécue wrote:
> 
> I've bumped your access to maintainer, let's try, and I'll downgrade
> when you're done.

The project has been transferred and now resides here:

https://salsa.debian.org/python-team/packages/shedskin

Many thanks once again!

Paul




Re: Request to join the Debian Python Team

2024-04-04 Thread Paul Boddie
On Wednesday, 3 April 2024 19:03:44 CEST Pierre-Elliott Bécue wrote:
> Paul Boddie  wrote on 03/04/2024 at 16:21:05+0200:
> > 
> > Many thanks for giving me access! Would it make sense to move the existing
> > project into the Python Team's packages collection on Salsa, or is that
> > only permitted for packages that are actually adopted by the team?
> 
> On the contrary, please do!

It seems that the "transfer project" function would be the most appropriate 
way of moving the project, but I don't think I am allowed to perform the 
transfer. In the settings for my project, the pull-down menu only offers 
transfer to the "moin" group where I have some other projects.

I reviewed the documentation...

https://salsa.debian.org/help/user/project/settings/migrate_projects#transfer-a-project-to-another-namespace

...and I think that the only obstacle is that I am a developer as opposed to a 
maintainer. I suppose that I could re-create the project in the group instead, 
if that is more appropriate.

Thanks in advance for any assistance that can be offered, and sorry to bring 
up such tedious administrative issues!

Paul




Re: Request to join the Debian Python Team

2024-04-03 Thread Paul Boddie
On Wednesday, 3 April 2024 16:51:33 CEST Andrey Rakhmatullin wrote:
> 
> Then putting the repo in the team namespace makes sense.

OK. I will aim to do that.

> On Wed, Apr 03, 2024 at 04:48:44PM +0200, Paul Boddie wrote:
> > However, I did not presume that I could just set the Maintainer field in
> > debian/control before applying to join the team, nor did I set any
> > particular headers or metadata in the ITP for shedskin to reference the
> > team, since I thought that this might be impolite. I imagine that I would
> > change the Maintainer as noted in the team policy if the package were to
> > be adopted within the team.
> 
> There is no "adopted within the team" process for packages, so you just
> put the repo there and put the team into d/control.

Right. I will do that.

I was just looking at the Wiki documentation for this and I think there might 
need to be some updates and clarifications. For example, on this page:

https://wiki.debian.org/Teams/PythonTeam/HowToJoin

It mentions the Maintainer and Uploaders fields and a more comprehensive 
policy than the Salsa-hosted policy document. I assume that 
this policy is still applicable, but I wonder how it interacts with people 
like me who do not have Debian Developer status and presumably cannot be an 
uploader.

The above page also has references to Alioth mailing lists, which seem to be 
active even though the archives contain a lot of spam. It also says this:

"If the team is in Maintainer field, remember to subscribe appropriate mailing 
list or at least bts and contact keywords on this [Advanced subscription form 
for the Package Tracking System] page."

I assume that this would involve me subscribing to the applications list or to 
notifications from the PTS, but the grammar in the above sentence is ambiguous 
and I don't have the background knowledge to successfully disambiguate it. It 
could also mean subscribing the team to the PTS for the package.

Sorry if this is asking questions with obvious answers!

Paul




Re: Request to join the Debian Python Team

2024-04-03 Thread Paul Boddie
On Wednesday, 3 April 2024 16:25:10 CEST Andrey Rakhmatullin wrote:
> On Wed, Apr 03, 2024 at 04:21:05PM +0200, Paul Boddie wrote:
> > 
> > Many thanks for giving me access! Would it make sense to move the existing
> > project into the Python Team's packages collection on Salsa, or is that
> > only permitted for packages that are actually adopted by the team?
> 
> If you are going to maintain this package in the team then it can and
> should be in the team namespace, or are you asking about something else?

My aim is to maintain it in the team, yes. Previously, I maintained the 
package in Debian, although I never became an actual Debian developer and 
relied on a mentor to perform uploads. This time round, I envisaged that it 
would be more sustainable if I joined the Debian Python Team and maintained 
the package there, relying on uploaders in the team to perform the uploads.

However, I did not presume that I could just set the Maintainer field in 
debian/control before applying to join the team, nor did I set any particular 
headers or metadata in the ITP for shedskin to reference the team, since I 
thought that this might be impolite. I imagine that I would change the 
Maintainer as noted in the team policy if the package were to be adopted 
within the team.

Paul




Re: Request to join the Debian Python Team

2024-04-03 Thread Paul Boddie
On Wednesday, 3 April 2024 15:56:48 CEST Pierre-Elliott Bécue wrote:
> 
> I've granted you developer access to the package subgroup. You can
> create a shedskin project here if you want.

Many thanks for giving me access! Would it make sense to move the existing 
project into the Python Team's packages collection on Salsa, or is that only 
permitted for packages that are actually adopted by the team?

> If you lack some access, don't hesitate to reach out for help (we are on
> #debian-python on OFTC IRC network).
> 
> Don't hesitate after some time and contributions to ask for maintainer
> access if you need to create new repos.

One step at a time, I think!

Thanks once again,

Paul




Re: Uscan: watch and changelog

2024-03-31 Thread Paul Boddie
On Sunday, 31 March 2024 15:31:36 CEST c.bu...@posteo.jp wrote:
> 
> Don't write and explain such things in emails. Save your time and
> resources. Write it into the wiki just one time and then you can link
> to it. You wrote it. I won't copy and paste your stuff. Ladies and
> Gentlemen please do edit your own wiki pages yourself. You are the
> experts here. I am not.

I agree broadly with these sentiments.

> This attitude is also one of the reasons why the wiki is in such a bad
> shape.

I somewhat agree with this, although it is actually why documentation is in 
such bad shape across the entire industry, not just the Debian Wiki. However, 
I sympathise with developers that there are only so many hours in the day, 
too, and that they are often tasked with getting working code out of the door, 
condemning supposedly optional activities like testing and documentation to 
perennial neglect.

I sympathise less with the school of development where everything is thrown 
over the side, redone with little benefit to anyone, and then the developers 
claim that they didn't have time to document anything, not least because they 
are about to go through the same performance all over again.

> Otherwise just delete the whole wiki. That would be an increase in
> the quality of Debians documentation. In its current state it is
> embarrassing and it harms the project called Debian.

I don't agree and I am evidently not the only one. There have been situations 
where the only usable documentation I could find was in the Debian Wiki.

> Debian is not a hobby. Doing FOSS shouldn't be an excuse for not taking
> responsibility.

Well, it should be more than a hobby, and it is indeed a job for some people. 
To upgrade it from hobby to job involves money, and that would definitely be 
an incentive for people to "take responsibility". Otherwise, nobody should 
believe that they can set people's work priorities: that is the job of the 
boss in an actual employment relationship.

> I tried. But it seems I am the only person caring for new contributors
> and how the read documentation. I am tortured with man pages and links
> to out dated documentation (e.g. "New maintainers guide").

I certainly care about it. Did you find my previous message about uscan 
helpful? I don't maintain any of these tools, but I am willing to share my 
experiences in order to evolve a reasonable consensus about how these tools 
might be used, eventually providing concise, usable documentation that would 
help newcomers quickly become productive.

> The situation is not healthy to me. So I need to stop from here with
> fixing other peoples documentation.

I think that you can make a difference, so don't stop trying to do something 
about it.

Paul




Re: Uscan: watch and changelog

2024-03-29 Thread Paul Boddie
On Friday, 29 March 2024 17:39:04 CET c.bu...@posteo.jp wrote:
> 
> On 2024-03-29 16:23 Carsten Schoenert  wrote:
> > You need to use "opts=mode=git, ...", see the man page of uscan.
> 
> Are you sure. For example this watch file do not use "opts="
> https://sources.debian.org/src/backintime/1.4.3-1/debian/watch/

If I look at your releases page, I see that you have one release that you will 
probably be wanting to package:

https://codeberg.org/buhtz/hyperorg/releases

However, uscan seems to be driven by the debian/changelog and debian/watch 
files, with the former supplying the version or release to be searched for, 
and the latter specifying the mechanism by which the searching will occur.

So, I imagine that your debian/changelog would need an entry starting with 
something like this:

hyperorg (0.1.0-1) unstable; urgency=low

If you hadn't tagged a release in the repository, then you would probably have 
to invent a suitable version number based on the changeset identifier. 
Something like this perhaps:

hyperorg (0.1.0+git20240329.4a77811214-1) unstable; urgency=low

Here, I've used the changeset identifier 4a77811214, which is the latest 
commit that I can see; hopefully that serves to illustrate the principle.

I suppose that the watch file would then need to use the "mode=git" option.
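
To illustrate, a git-mode watch file might look something like this - a 
sketch only, since I have not tested it against Codeberg:

version=4
# Monitor the upstream Git repository directly, matching version tags.
opts="mode=git, gitmode=shallow, pgpmode=none" \
  https://codeberg.org/buhtz/hyperorg.git \
  refs/tags/v([\d.]+) debian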

[...]

> > And uscan will find the most recent version matching the regexp.
> 
> I see 3 different regex patterns in this line. Why?

I must admit to just copying the recommended patterns for my own packaging 
exercises. Although I could happily explore the process of downloading a Web 
page - in your case, the one mentioned above - and parsing the HTML to get at 
the download links, I assume that someone has already done that hard work.

But to answer your question, what is happening is that the release or version 
from the changelog is transformed into something that might appear on the 
downloaded page. You need to get from "hyperorg-0.1.0-1" to this:

https://codeberg.org/buhtz/hyperorg/archive/v0.1.0.tar.gz

Or if a specific changeset is involved, the process is a bit more complicated, 
but would involve transforming "hyperorg-0.1.0+git20240329.4a77811214-1" into 
this:

https://codeberg.org/buhtz/hyperorg/archive/4a77811214e5bd73247a197dbb5d1a4367c99109.tar.gz

To summarise, uscan is just searching a page listing releases or, in the 
second case, a specific Web page for an appropriate download that it can then 
obtain.
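
Putting the first case into concrete form, a watch file scanning the releases 
page for archive links might look roughly like this - again, a sketch that I 
have not verified:

version=4
# Scan the releases page for links like .../archive/v0.1.0.tar.gz.
https://codeberg.org/buhtz/hyperorg/releases \
  .*/archive/v([\d.]+)\.tar\.gz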

Hope this helps a little!

Paul




Re: Docu: Need help to understand section about package creation

2024-03-29 Thread Paul Boddie
On Friday, 29 March 2024 09:52:14 CET Carsten Schoenert wrote:
> 
> Starting with Debian packaging isn't a easy thing and there is *not* the
> one way to do it right. And there are for sure hundreds of HowTos out
> there. You will need to try a few of them and chose in the end the
> workflow that fit's best for you.

The problem with this advice is that for the Debian Python Team, there 
probably aren't many ways to "do it right". Instead, as I understand it, there 
will be very few ways that people tolerate, as recent discussion on this list 
has indicated.

> git-buildpackage is one of the high level tools that can and should
> packaging tasks easier, to use it effective you will need to know what
> it is doing under the hood, means you need to be familiar with the low
> level tools that getting called by gbp.

I packaged stuff for Debian about ten years ago or so and recently repackaged 
one tool, whose package is now parked in a state where it could conceivably be 
incorporated into Debian but is evidently not of interest to anyone here (or I 
need to promote it more, perhaps). I have also been packaging Moin 2.0 which 
is coincidentally relevant to these documentation discussion threads.

After following the advice on the Debian Wiki about suitable approaches for 
packaging Python libraries and applications, I ended up reading all about gbp 
and trying out the different suggestions in its documentation. Although quite 
helpful, this documentation does not give concrete advice about what would-be 
packagers should do, so it really does fall on context-specific advice to fill 
in those gaps.

In the end, I did my usual thing and distilled the documentation's prose down 
to a concise workflow to remind me of what I might need to do if I were to 
start packaging something else. In fact, I wrote the following for the Moin 
2.0 packages and then made use of it for the other package:

# Copy the imported source to an upstream branch and start a packaging branch.
git branch -c master upstream
git checkout -b debian/master

# Record the Debian packaging.
git add debian
git commit

# Publish the branches, then build with sbuild via gbp.
git push origin 
gbp buildpackage --git-debian-branch=debian/master \
  --git-upstream-tag='upstream/%(version)s' --git-builder=sbuild

This saves me from having to re-read the gbp documentation or find the 
appropriate mention on the Debian Wiki of whatever the accepted approach might 
be. I also wrote summaries of how to maintain patches, since it seemed that 
gbp could be quite obstructive if not operated in precisely the right fashion, 
which is not entirely obvious from its documentation.
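
For the record, the distilled patch workflow was based on gbp pq, roughly as 
follows (reconstructed from memory, so treat it as indicative rather than 
authoritative):

gbp pq import    # build a patch-queue branch from debian/patches
# ...edit files and commit the changes as ordinary Git commits...
gbp pq export    # regenerate debian/patches from those commits
git add debian/patches
git commit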

Obviously, this leaves a lot of stuff out, but it is a reminder and not a 
complete guide. Indeed, the issue of what the Debian directory might contain 
is almost an orthogonal topic to the mechanics of gbp. And then there are the 
social or collaborative aspects of the exercise as well, like setting up a 
Salsa account and project, filing an ITP, and so on.

Yes, such matters are covered in the more general documentation, at least 
where it has been updated and no longer refers to Alioth or the like. But in 
this context the advice needs to be quite specific, not vague suggestions that 
present more choices to be made rather than eliminating them.

In any case, I commend this effort to improve the documentation. In some ways, 
it really is quite a brave endeavour. I would have made edits to various pages 
myself, but I don't want to tread on any toes, and I don't really have much 
time to spend on this matter alongside numerous other commitments at the 
moment.

Paul

P.S. The argument made about needing to understand what happens "under the 
hood" is something of an indictment of the way technology is developed these 
days. A tool that is meant to simplify something should present its own 
coherent level of abstraction; deferring to lower-level mechanisms is 
something that the Git developers and community like to do, which is why the 
usability of Git is the subject of occasional jokes and of somewhat less 
frequent attempts to wrap it in more usable interfaces.




Request to join the Debian Python Team

2024-01-15 Thread Paul Boddie
Hello,

I would like to request membership of the Debian Python Team. For a while, I 
have been working on a package for Shed Skin (shedskin) which is a compiler 
for a dialect of Python.

I previously maintained a package for this software in the Python 2 era, and 
since it has been made compatible with Python 3, I feel that it would be 
beneficial to Debian users to have convenient access to it once again. To this 
end, I filed an "intent to package" report a while back:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1051352

The principal developer of Shed Skin is very encouraging of packaging efforts 
and has been responsive to feedback about issues that might otherwise incur 
distribution-level patches. Personally, I think that the software deserves 
more exposure, especially when considering some of the transformative effects 
it has on some kinds of Python code:

http://shed-skin.blogspot.com/2024/01/fast-doom-wad-renderer-in-999-lines-of.html

I have sought to follow advice provided in the team policy, making a Salsa 
project available here:

https://salsa.debian.org/pboddie/shedskin

Building on experience with a previous repository (noted in the ITP report), I 
have hopefully managed to follow the branching and pristine-tar guidelines, 
also introducing test running using Salsa CI.

I have read and accept the policy published here:

https://salsa.debian.org/python-team/tools/python-modules/blob/master/policy.rst

My Salsa login is pboddie.

Thanks in advance for any guidance that might be offered!

Paul




Re: [Debian-salsa-ci] Publishing multiple packages with aptly in Salsa CI

2023-09-15 Thread Paul Boddie
On Friday, 15 September 2023 16:33:25 CEST Philip Hands wrote:
>
> For another angle, see:
> 
>   https://salsa.debian.org/philh/user-setup/-/pipelines/576662
> 
> In which I have a `harvest-repos` job that grabs artifacts from `build`
> jobs in other pipelines, and an `aptly-plus` job that's got an added
> `needs: harvest-repos` that can combine the artifacts from its build and
> harvest-repos jobs and lump them all together.
>
> In the resulting aptly repo you can see that it includes both the
> local package (user-setup) under the 'u' directory, and 'grub-installer'
> under the 'g' directory:
>  
> https://salsa.debian.org/philh/user-setup/-/jobs/4671054/artifacts/browse/aptly/pool/main/

Yes, that was the desired effect: have packages corresponding to several 
source packages being made available in the pool and thus the created package 
repository.

> That's done with:
> 
>   https://salsa.debian.org/installer-team/branch2repo/
> 
> Quite a lot of that is already part of the standard salsa-CI pipeline,
> and my aim is that branch2repo will pretty-much disapear, with its
> components being optional bits of the standard pipeline, and maybe a few
> variable settings.

Indeed. As noted before, my modifications were effectively (1) to keep the 
source repository around to get access to my scripts, (2) to be able to add 
package repositories to the apt configuration, (3) to download packages so 
that aptly can process them.

(1) is just preferable to writing scripts in the horrible YAML syntax and then 
doing fragment inclusion, which I found can lead to opaque error messages. 
Being able to obtain a set of scripts would be quite helpful generally as the 
environment can vary substantially from one kind of job to another.

(2) is something that could be part of the standard job definitions, 
particularly since other jobs like autopkgtest need to know about additional 
repositories in certain cases, brand new packages being one of them.

(3) is just a small enhancement for this specific scenario.

I thought that someone must have done this before, even if the standard 
pipeline didn't support it. I imagine that some coordination would be 
desirable to prevent fragmentation and people like me introducing our own ways 
of addressing this particular need.

Paul




Publishing multiple packages with aptly in Salsa CI

2023-09-10 Thread Paul Boddie
Hello,

A few weeks ago, I asked about techniques for making new packages available to 
other new packages so that the autopkgtest job could be run successfully in a 
pipeline in the Salsa CI environment. Eventually, this was made to work by 
taking advantage of the aptly job defined in the standard Debian CI pipeline.

To recap, the aptly job publishes the package built during the execution of a 
pipeline by creating a dedicated apt-compatible package repository just for 
that package. Such repositories can then be made available to various jobs in 
the pipeline of another package, allowing the successful installation of that 
package and its dependencies, including new packages, and the successful 
execution of jobs like autopkgtest.

However, one other thing I wanted to achieve was to take the complete set of 
new packages and to publish them in a single package repository. This would 
allow people to install and test the built packages in a more convenient 
fashion than asking them to hunt down each built package from job artefacts or 
to build the packages themselves.

Obviously, the aptly job in the standard Debian CI pipeline publishes a single 
package (or maybe a collection of packages built from a single source 
package), but I wanted to aggregate all packages published by a collection of 
aptly repositories. Fortunately, it seems that this is possible by augmenting 
the existing aptly job definition as shown in the following file:

https://salsa.debian.org/moin-team/moin/-/blob/debian/master/debian/salsa-ci.yml

Ignoring the "ls" command which was there to troubleshoot this rather opaque 
environment, one script adds repository definitions to the apt configuration, 
whereas the other performs the appropriate "apt source" and "apt download" 
commands, making the source and binary package files available for aptly to 
process. Consequently, aptly will include all the dependency packages in the 
final package repository.

(Since the scripts reside in my package's debian directory, I also had to 
request the availability of the Git repository for the job. Generally, I have 
found it challenging to have these job definitions make effective use of 
scripts due to environmental inconsistencies.)

I imagine that publishing packages like this is not particularly desirable, at 
least if done widely, but I hope it shows that it can be done relatively 
easily, at least if all the right incantations have been discovered.

Paul




Re: Package testing with Salsa CI for new packages

2023-08-21 Thread Paul Boddie
On Monday, 21 August 2023 15:01:26 CEST Carles Pina i Estany wrote:
> 
> If you want, you can simplify more (it's not exactly the same, so it
> might or might not help). There is a way on GitLab to point to the
> latest build of a job. For example, you have the following URL for one
> of the git repos:
>
> https://salsa.debian.org/moin-team/emeraldtree/-/jobs/4575438/artifacts/raw/aptly
> 
> You could use instead (to avoid the pipeline number):
> https://salsa.debian.org/moin-team/emeraldtree/-/jobs/artifacts/debian/master/raw/aptly?job=aptly

That is a good idea, since I always intend the latest job to be successful. 
:-)

> Which is a redirect to the latest pipeline. Currently:
> 
> $ curl -s -I "https://salsa.debian.org/moin-team/emeraldtree/-/jobs/artifacts/debian/master/raw/aptly?job=aptly" | grep -E -i "^(http|location)"
> HTTP/2 302
> location: https://salsa.debian.org/moin-team/emeraldtree/-/jobs/4575438/artifacts/raw/aptly
> 
> 
> Follows this format:
> BRANCH=debian/master
> DIRECTORY=aptly
> JOB_NAME=aptly
> https://salsa.debian.org/moin-team/emeraldtree/-/jobs/artifacts/${BRANCH}/raw/${DIRECTORY}?job=${JOB_NAME}

Thanks for demonstrating this. I imagine that it is documented somewhere in 
the GitLab manuals, too. One note of caution, though: it seems that using 
these redirected URLs is not necessarily possible with apt, and so it might be 
advisable to obtain the redirected URL as shown in the output above.

In my script I now get curl (and not wget any more) to obtain the final URL to 
provide to the apt configuration, and I also use this job-specific URL to 
obtain each GPG key. I suppose I could change the apt configuration to allow 
redirects instead.

> Just a side note: be careful about expiring artifacts. In some projects
> (settings dependant) only the latest artifact is kept and older ones
> might be expired (deleted) after some time. I don't think that this is
> the case of the moin-team/emeraldtree after a quick check... but I'm
> unsure where this is properly checked on GitLab.

"CI/CD Settings", "Artifacts", "Keep artifacts from most recent successful 
jobs" is set, so I imagine that as long as the latest job is successful, 
everything will still work. Indeed, it is desirable to rely on the latest job 
under such conditions: one would not want a newer dependency build to break 
the Moin package because an older set of artefacts has been deleted.

Thanks once again for the feedback and advice!

Paul




Re: Package testing with Salsa CI for new packages

2023-08-20 Thread Paul Boddie
On Sunday, 20 August 2023 14:06:37 CEST Carles Pina i Estany wrote:
> 
> Interesting approach, but if you could use the variables and ship the
> shell script it might reduce some duplication between jobs (if possible,
> I haven't look into too much detail in your case) and more importantly
> you might be able to use the standard "autopkgtest" or "piuparts"
> (instead of redefining them).

I have continued along such a path.

> Just last week I created a new autopkgtest (I have it running on
> bookworm and bullseye). It's done this way:
> ---
> autopkgtest-bullseye:
>   extends: .test-autopkgtest
>   variables:
>     RELEASE: "bullseye-backports"
>     SALSA_CI_AUTOPKGTEST_ARGS: '--setup-commands=ci/pin-django-from-backports.sh
>       --skip-test=debusine-doc-linkcheck'
> ---
> 
> Or, for more context:
> https://salsa.debian.org/freexian-team/debusine/-/blob/add-bullseye-support/.gitlab-ci.yml#L84
> It ends up having "autopkgtest" and
> "autopkgtest-bullseye". But using the variable SALSA_CI_DISABLE_AUTOPKGTEST
> I could disable the one without my extends.

Previously, I had merely redefined the autopkgtest and piuparts jobs, 
replacing them with ones that use the customised code:

autopkgtest:
  extends: .test-autopkgtest-revised

piuparts:
  extends: .test-piuparts-revised

I had assumed that such redefinition was possible. When people decide to 
introduce a new language, as GitLab appear to have done, they should realise 
that they need to match the considerable levels of documentation that existing 
languages may have. Fortunately, my assumptions were correct.

Following your lead, I have since found that the following works:

autopkgtest:
  extends: .test-autopkgtest
  variables:
    SALSA_CI_AUTOPKGTEST_ARGS: '--setup-commands=debian/salsa/add-repositories.sh'

piuparts:
  extends: .test-piuparts
  variables:
    SALSA_CI_PIUPARTS_PRE_INSTALL_SCRIPT: 'debian/salsa/add-repositories.sh'

You can see that by defining the variables to customise the tools, I am able 
to work with the existing job definitions. This simplifies the CI description 
file considerably:

https://salsa.debian.org/moin-team/moin/-/blob/debian/master/debian/salsa-ci.yml

The script is pretty straightforward, too:

https://salsa.debian.org/moin-team/moin/-/blob/debian/master/debian/salsa/add-repositories.sh

It is so mundane that I could imagine it being replaced by standard 
functionality very easily.
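
To give a flavour of just how mundane, the essence of such a script is 
roughly the following (a paraphrase rather than the script itself - see the 
link above for the real thing, with the repository URL here being merely 
illustrative):

#!/bin/sh
set -e
# The aptly repository published by a dependency's pipeline (illustrative).
REPO=https://salsa.debian.org/moin-team/emeraldtree/-/jobs/4575438/artifacts/raw/aptly
# Tell apt about the repository...
echo "deb $REPO unstable main" > /etc/apt/sources.list.d/emeraldtree.list
# ...obtain its signing key...
wget -q -O /etc/apt/trusted.gpg.d/emeraldtree.asc "$REPO/public-key.asc"
# ...and refresh the package lists.
apt-get update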

[Job numbers]

> If it's in the same pipeline: you don't need the numbers to get the
> artifacts (the .deb files from build).

The .deb files are produced by the pipelines associated with other package 
repositories. Thus, aptly is used to make the .deb files available to the Moin 
package's pipeline. I imagine that I could build the dependencies in the same 
pipeline, however. Apparently, GitLab's non-free editions support 
"multi-project" pipelines, but I suspect that aptly has been introduced to get 
around the lack of such functionality in the free edition.

> For the aptly generated artifact: I will investigate this tomorrow (to
> fetch it and try to use) (I want to replace some code that I did with
> the aptly repo, if possible).

Good luck with that task! :-)

As a result of this discussion and a lot of effort, I have managed to get 
autopkgtest and piuparts to see and to use the package dependencies needed by 
the Moin package. This allowed me to tackle the test suite that had caused me 
to look at all of this in the beginning. And now, it seems that I can get all 
the reasonable tests to pass:

https://salsa.debian.org/moin-team/moin/-/pipelines/567871

Once again, many thanks for all the guidance!

Paul




Re: Package testing with Salsa CI for new packages

2023-08-19 Thread Paul Boddie
On Saturday, 19 August 2023 20:32:59 CEST Carles Pina i Estany wrote:
> 
> Quick answer for now...

And a very quick reply from me... :-)

[...]

> I don't know if a .asc files are allowed. Actually, from my
> .bash_history:
> ---
> wget -O- https://www.virtualbox.org/download/oracle_vbox_2016.asc | sudo gpg
> --dearmor --yes --output /usr/share/keyrings/oracle-virtualbox-2016.gpg
> ---
> 
> (I was following instructions, I didn't try leaving the .asc file there)

They are allowed in recent versions of apt, as confirmed by the man page.

[...]

> You could try using:
> ---
> variables:
>   SALSA_CI_AUTOPKGTEST_ARGS: '--setup-commands=ci/setup-the-repo.sh'
> ---
> (and write and ship the ci/setup-the-repo.sh in the repo, do whatever
> you need there)

This could be very useful.

> Or, probably even better but less flexible:
> ---
> variables:
>   SALSA_CI_AUTOPKGTEST_ARGS: '--add-apt-source="deb http://MIRROR SUITE COMPONENT"'
> ---
> (I found add-apt-source via "man autopkgtest" in TEST BED SETUP OPTIONS.
> I haven't used add-apt-source, I don't know what happens with the gpg
> keys... but you could use [trusted=yes] :-| )
> 
> For the MIRROR you have salsa variables that might help if you need to
> specify the pipeline.

So could this.

> > So, what I now need to do is to find out how I can make the new packages
> > available to autopkgtest specifically.
> 
> We will get there! (one way or another)

Well, I have engineered something very inelegant, revealed in full here:

https://salsa.debian.org/moin-team/moin/-/blob/debian/master/debian/salsa-ci.yml

It defines a package list file and uses wget to obtain each repository's 
public key. Fortunately, wget is available at the "script" level of the jobs 
involved, whereas at what must have been the global "before_script" level 
(suggested in the documentation) it was not available and first needed to be 
installed.

To make autopkgtest and piuparts find the packages, the repository details are 
then copied into the appropriate location in each case. However, since such 
customisation is not readily incorporated into the existing job descriptions, 
I have had to duplicate the existing job description code and insert 
additional statements in the script sections.

As a consequence of these changes, the extra packages are found and installed 
within the environments concerned. This is enough to make the piuparts job 
succeed, but autopkgtest still fails. Fortunately, that may be due only to 
problems with the actual tests: I think the upstream maintainers may have made 
some errors.

[...]

> hopefully SALSA_CI_AUTOPKGTEST_ARGS will help you. I added this in
> salsa-ci/pipeline last year because I was trying to do a similar thing
> to what you are doing (I had to pin a package from backports).
> 
> (I'm just a user of salsa-ci/pipeline, not a member of the team neither
> I can speak for them!)

If this can simplify what I've done, which is really quite horrible, then I 
will adopt it instead. The way that the artefacts of the dependencies are 
bound up in specifically numbered jobs is also particularly unfortunate. Of 
course, if the packages enter the Debian archives, all of this extra effort 
will be made largely redundant, anyway, even if coordinated testing of new 
package versions would still benefit from such effort.

[...]

> I don't know if an FAQ, conventions, best practises or something might
> help...

I think that the documentation could be reworked, but that is largely true for 
a lot of the materials available for Debian. And since some of that 
documentation resides on the Debian Wiki, that is another reason why I am 
trying to move this particular packaging exercise forwards.

Thanks once again!

Paul




Re: Package testing with Salsa CI for new packages

2023-08-19 Thread Paul Boddie
On Friday, 18 August 2023 19:54:55 CEST Carles Pina i Estany wrote:
> 
> Ha! I wasn't aware of the aptly option
> (https://salsa.debian.org/salsa-ci-team/pipeline/#using-automatically-built-apt-repository
> and SALSA_CI_DISABLE_APTLY=0).
> I think that I might have re-invented the wheel in a tiny part of
> Debusine CI/CD.

It is certainly a way of propagating packages to those that might need them. 
However, the instructions indicating how a package might access these 
dependencies appear to be deficient.

It does not appear to be sufficient merely to specify the dependency package 
repositories and mark them as trusted. Doing that just causes the repositories 
to be ignored and the GPG keys to be reported as unrecognised.

So, the GPG keys need to be obtained. This is a hassle because something like 
wget is needed to do that, and then apt has to be persuaded not to fail in an 
opaque way. So the modified recipe is something like this:

before_script:
  - apt-get update
  - NON_INTERACTIVE=1 apt-get install -y wget
  - echo "deb https://salsa.debian.org/moin-team/emeraldtree/-/jobs/4575438/
artifacts/raw/aptly unstable main" | tee /etc/apt/sources.list.d/pkga.list
  ...
  - wget -q -O /etc/apt/trusted.gpg.d/emeraldtree.asc https://
salsa.debian.org/moin-team/emeraldtree/-/jobs/4575438/artifacts/raw/aptly/
public-key.asc
  ...
  - apt-get update

This seems to make the various jobs happy, but the one that I was most 
concerned with remains unhappy! I don't actually need the dependencies for 
anything other than autopkgtest, but that job employs its own environment 
where the above recipe has no effect.

So, what I now need to do is to find out how I can make the new packages 
available to autopkgtest specifically.

> I will point out at some things that might safe some time to you
> (great) or not (ignore! :-) ):
> 
> debusine's .gitlab-ci.yml:
> https://salsa.debian.org/freexian-team/debusine/-/blob/devel/.gitlab-ci.yml

This is definitely useful for examples of the CI definition syntax and how 
hidden jobs can be used to provide additional script fragments.

[...]

> The job autopkgtest, via debci command, runs autopkgtest:
> https://salsa.debian.org/freexian-team/debusine/-/jobs/4574458#L65

The difficult thing appears to be the configuration of this testing 
environment. One approach might involve overriding the existing job 
definitions to incorporate the injection of new packages.

[...]

> It's possible for sure. Other people in this list might come up with a
> different idea. I don't have almost any experience with Debian
> packaging, but I have some experience on the salsa CI. So I might be
> giving you solutions that might be sub-optimal! :-)

I think that you are giving me some good ideas at least. I do feel that I am 
encountering problems that must have been encountered many times before, but I 
could imagine that people do not use these tools in this way or have different 
expectations of what they are able to accomplish.

As for Debian's tooling, I consider it rather chaotic. When I previously did 
packaging, things like pbuilder were still in use, being slowly pushed aside 
by sbuild and probably other things that are now forgotten or obsolete. There 
always seem to be new things coming along, new mixtures of technologies, and 
the potential for more confusion and uncertainty.

Paul




Re: Package testing with Salsa CI for new packages

2023-08-18 Thread Paul Boddie
On Friday, 18 August 2023 18:21:00 CEST Andrey Rakhmatullin wrote:
> On Thu, Aug 17, 2023 at 05:10:08PM +0200, Paul Boddie wrote:
> > Here, it would seem that the most prudent approach is to use the Salsa CI
> > service instead of trying to get the test suite to run during the package
> > build process.
> 
> You should do both if possible, assuming that by "Salsa CI service" you
> mean autopkgtests which you can, and IMO should, also run locally.

I'm not really clear on what autopkgtest really is, other than a tool that 
uses some kind of test suite description that may reside in debian/tests. I'm 
also not completely clear on what is supposed to invoke it, other than either 
the Salsa CI mechanism or dh_auto_test. In the Debian Wiki documentation...

https://wiki.debian.org/Python/LibraryStyleGuide

...it mentions a field in debian/control:

Testsuite: autopkgtest-pkg-python

My impression is that this calls autodep8 to generate some kind of test suite 
description which is then invoked by dh_auto_test. It doesn't help that there 
is an alternative to this that resembles it but behaves differently:

Testsuite: autopkgtest-pkg-pybuild

What I have previously managed to do with the Salsa CI mechanism is to run 
tests specified in debian/test, disabling these tests at build time using the 
following in debian/rules:

export PYBUILD_DISABLE=test

For another package which has limited dependencies, I do this because the 
tests take a long time to run, and I even need to increase the timeout for the 
Salsa CI jobs. For this package, the tests don't take too long to run (those 
that do run), but some tests fail for reasons previously described, and this 
is due to some configuration issue that is unpleasant to debug.

Success with testing using Salsa CI steers me away from using the Testsuite 
field in the normal build environment because the resulting test suite 
invocation failures suggest that some customisation is needed or that the 
environment is inappropriate in some way (even if I specify numerous build 
dependencies).

> > One motivation for doing so involves not having to specify
> > build dependencies for packages only needed for test suite execution,
> > which itself requires the invocation of gbp buildpackage with
> > --extra-package arguments since some packages are completely new to
> > Debian.
> 
> This is fine. Not to mention that the same problem exists for
> autopkgtests, as you say below.
> 
> > I have also found it difficult to persuade the tests to run successfully
> > during the build process. A few of these attempt to invoke the moin
> > program, but this cannot be located since it is not installed in the
> > build environment.
>
> This should also be fine, unless it's completely impossible to run it
> without installing into /.

The moin program is made available in setup.py using an entry point. Maybe if 
there were some kind of script instead, it would work.

> > However, one conclusion is that testing a system, as some of the
> > test cases appear to do, and as opposed to testing library functionality,
> > is not necessarily appropriate when directed by something like
> > dh_auto_test.
>
> If there are tests that can't be run at build time you can skip those. You
> can even ask the upstream to provide tool arguments to simplify that.

I may well discuss such matters with them. One challenge that is relevant in 
this situation is that upstream have been working in their own virtualenv (or 
venv, or whatever it is now) world for a few years, using plenty of 
dependencies that were not packaged in Debian. That hasn't been entirely 
conducive to the adoption of Moin 2, which has in turn isolated its developers 
further.

Fortunately, the gulf between that world and Debian can be bridged, and some 
of the more tedious packaging seems to have been done for many of the 
dependencies involved. It just needs someone to shepherd this effort to an 
acceptable conclusion, which is of concern to Debian itself since Moin runs 
the Debian Wiki.

Thanks for the encouragement and advice.

Paul




Re: Package testing with Salsa CI for new packages

2023-08-18 Thread Paul Boddie
On Friday, 18 August 2023 16:12:19 CEST Carles Pina i Estany wrote:
> 
> For the jobs it is happening via
> https://salsa.debian.org/salsa-ci-team/pipeline/#using-automatically-built-apt-repository

Reviewing this documentation is actually more helpful than I thought it would 
be. I had noticed the "aptly" task in the YAML files, and I had started to 
wonder if that meant that some kind of repository publishing was occurring 
somewhere.

> > In the Salsa CI environment, I would need to have the built packages
> > (found in the artefacts for each package's build job) copied somewhere
> > that can then be found by the Moin package's pipeline jobs and the
> > scripts creating a special repository of new packages.
> 
> Archiving artifacts should happen automatically on the "build" step of
> Salsa CI (salsa-ci/pipeline). If I understand correctly what you
> wanted...

I think you have a good understanding of what I am trying to achieve. If I can 
get the new package dependencies (emeraldtree, feedgen, and so on) to yield 
installable packages when built that can then be referenced by Salsa CI as it 
runs the build jobs for Moin, I have a chance of running the test suite.

I'm currently persuading the CI system to run the "aptly" task and to publish 
package repositories. I will then augment a customised YAML file for the Moin 
package with references to these repositories and see if the test suite can be 
invoked somewhat more successfully as a consequence.

Thanks once again for the guidance!

Paul




Re: Package testing with Salsa CI for new packages

2023-08-18 Thread Paul Boddie
On Friday, 18 August 2023 09:51:29 CEST Carles Pina i Estany wrote:
> 
> I'm not a Debian developer but I have some experience on Salsa CI, so I
> thought that I might be able to help... but then I was confused by a
> specific part of the message:
> 
> On 17 Aug 2023 at 17:10:08, Paul Boddie wrote:
> 
> [...]
> 
> > For another package I have been working on, the Salsa CI facility has
> > proven to be usable, configured using files in debian/test, particularly
> > as it allows test-related dependencies to be specified independently.
> > However, this other package has no dependencies that are currently
> > unpackaged in Debian. Meanwhile, the testing of this new Moin package
> > depends on brand new packages somehow being made available.
> 
> If this dependencies are available on the "build" step: could they be
> made available on the autopkgtest? I didn't quite understand why this is
> not possible. I've found the autopkgtest quite flexible (since the tests
> are scripts that could prepare some environment)

The package has dependencies on installation but these dependencies are not 
strictly necessary when building. However, if I wanted to run the test suite 
when building, I would indeed need to pull in these dependencies as build 
dependencies so that the software being tested can run without import errors.

Since some of these dependencies are new packages, I had been specifying them 
using the --extra-package option of gbp buildpackage in order to satisfy such 
build dependencies, but these were really "test dependencies". I have since 
decided that I would rather distinguish between build and test dependencies, 
as previously noted.

I have to add that the other package I refer to has a test suite that takes a 
long time to run, so that is another reason why I chose Salsa CI for that 
package instead of letting autopkgtest do its work:

https://salsa.debian.org/pboddie/shedskin

When I get the courage to do so, I will probably submit an intention-to-package 
report for that package.

[...]

> For me, the packages on the build job are made available via a Salsa
> artifact automatically (and easy to download as a .zip of the *.deb). I
> think that this happens on all the "build" jobs on Salsa CI. They can be
> downloaded as a .zip file
> (e.g. https://salsa.debian.org/freexian-team/debusine/-/jobs/4564890,
> right hand side "Job artifacts" and click "Download"). Would that be
> enough?

I think the problem is that when running the test suite, the Moin package 
would need to obtain the test dependencies specified in debian/tests/control. 
These include various new packages that cannot be obtained from the usual 
Debian archives, and so some kind of equivalent to gbp's --extra-package would 
be needed in the Salsa CI environment.

One can imagine having a common storage area holding these newly introduced 
packages that the CI scripts could access in preference to the usual archives. 
In fact, this would be something that might also affect existing packages. 
Consider the situation where fixes to a dependency are required to fix 
functionality in a particular package. One would have to wait for the fixed 
dependency to become integrated into the unstable archive before the principal 
package's tests would start to work again.

> I also created a repo and hosted it on a Salsa CI page for internal
> testing but this is a bit of a workaround. This is in a new job but just
> download the artifacts (via a Salsa CI dependency) and run
> dpkg-scanpackages and copy the files to the right place).

This sounds like something related to what might be required. In effect, you 
seem to be doing what I am doing when I actually install my built packages in 
a chroot. I run apt-ftparchive (which performs the role of dpkg-scanpackages) 
to generate the Packages, Sources and Release files that tell apt about the 
new packages when the directory is added as another package source.
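
(Concretely, that amounts to something like the following, run in the 
directory containing the built packages, with paths and extra options 
omitted:

apt-ftparchive packages . > Packages   # scan the .deb files
apt-ftparchive sources . > Sources     # scan the .dsc files
apt-ftparchive release . > Release     # summarise the repository

The directory can then be listed in apt's configuration as an additional 
package source.)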

In the Salsa CI environment, I would need to have the built packages (found in 
the artefacts for each package's build job) copied somewhere that can then be 
found by the Moin package's pipeline jobs and the scripts creating a special 
repository of new packages.

Thanks for replying to my enquiry!

Regards,

Paul




Package testing with Salsa CI for new packages

2023-08-17 Thread Paul Boddie
Hello,

It has been quite some time since I contributed any packaging to Debian, but 
in the last year or so I have been trying to put together some packaging for 
MoinMoin 2. As many of you will be aware, MoinMoin (colloquially referred to 
as "Moin" hereafter) is currently used by Debian for the Debian Wiki, but this 
service runs Moin 1.9, which requires Python 2.

After some discussion, I have put together a few packages for Moin 2 and its 
dependencies that are not currently packaged in Debian. These have been hosted 
on Salsa and are available here:

https://salsa.debian.org/moin-team/

A few of these packages, named using "old-" as a prefix, are discarded 
attempts and only remain because I do not have privileges to delete them. The 
rest have been packaged using the gbp tooling and associated techniques.

Although I feel that this packaging effort is now fairly well advanced - I 
can build packages locally, install them in a chroot, and the installed 
software appears to work correctly (perhaps with some additional fixes needed 
for behaviour inherent in the software itself) - I find myself trying to 
satisfy various packaging requirements. One of these is the ability to run 
the software's test suite successfully.

Here, the most prudent approach seems to be to use the Salsa CI service 
instead of trying to get the test suite to run during the package build 
process. One motivation is to avoid declaring build dependencies on packages 
only needed for test suite execution, which itself requires invoking gbp 
buildpackage with --extra-package arguments since some of those packages are 
completely new to Debian.

I have also found it difficult to persuade the tests to run successfully 
during the build process. A few of these attempt to invoke the moin program, 
but this cannot be located since it is not installed in the build environment. 
(There may be other failure situations, but these will be dealt with 
eventually.) I find the different test features of the build tooling - 
dh_auto_test, autopkgtest, and so on - all rather confusing and poorly 
summarised. However, one conclusion is that testing a whole system, as some 
of the test cases appear to do, as opposed to testing library functionality, 
is not really appropriate when directed by something like dh_auto_test.
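
In the meantime, one escape hatch is to skip the test suite during the 
package build itself and defer testing to a separate stage (a sketch, and not 
necessarily the right long-term answer):

    # dh_auto_test honours the nocheck build option, so the package still
    # builds without running the tests:
    DEB_BUILD_OPTIONS=nocheck gbp buildpackage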

For another package I have been working on, the Salsa CI facility has proven 
to be usable, configured using files in debian/tests, particularly as it 
allows test-related dependencies to be specified independently. However, all 
of that package's dependencies are already packaged in Debian, whereas the 
testing of this new Moin package depends on brand new packages somehow being 
made available.

So, I would appreciate guidance on how I might enable testing of this new Moin 
package, along with its new dependencies, in the Salsa environment or any 
other appropriate environment that would satisfy Debian packaging needs and 
policies. It would also be quite helpful if built packages could be published 
somewhere for people to test, but this is a separate consideration, even if 
such packages would obviously need to be generated as part of the testing 
regime.

Thanks for reading this far and for any guidance that might be offered!

Regards,

Paul




Re: PyCon 2013 -- tentative title/abstract/outline -- feedback plz

2012-09-27 Thread Paul Boddie
On Thursday 27 September 2012 17:50:04 Paul Tagliamonte wrote:
> On Thu, Sep 27, 2012 at 08:51:37AM -0400, Yaroslav Halchenko wrote:
>> not a single comment... bad... I guess I need to work on the text
>> more if even hardcore Debian people do not feel 'moved' ;-)
>
> Well, i'll give my 2c as a pythonista and a Debian-folk

I was going to give some feedback more as the kind of person who has gone to 
Python conferences, and certainly, if you want a native speaker to give 
feedback on the phrasing of your proposal, I'd be happy to make some 
suggestions.

[...]

> I can see this becoming a flamefest.

> Most hardcore pythonistas (and the types to be at PyCon) refuse to
> allow apt to install libs globally, and use virtualenv(wrapper) to
> isolate deps for a few reasons -- the big ones being:
>
>  - more up to date
>  - isolates dependency hell
>
> which (frankly) apt-get / Debian stable can't really address. Sometimes
> Python packages in sid are out of date as well.

Python packaging has become somewhat insular over the years with 
Python-centric solutions that work across different systems rather than 
solutions that work well with the rest of the software on particular systems. 
However, people appear to like things like virtualenv, especially the Web 
crowd that makes up a lot of the audience at events like this, because it 
lets them set up relatively cheap configurations for separate Web 
applications or for experimenting.
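
The appeal is easy to demonstrate (a sketch of the typical routine, with an 
invented environment name):

    # Each application gets its own cheap, disposable set of libraries:
    virtualenv webapp-env
    . webapp-env/bin/activate
    pip install Flask    # or whatever the application needs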

I have advocated solutions based on fakechrooted debootstrapped installations 
if only because you can manage the libraries below the Python modules and 
extensions as well as the stuff that supports things like distutils and 
setuptools. However, the people who can change this situation don't see the 
need or the point: it's either "but I have root!" or "they can always build 
from source!" No wonder people use stuff like virtualenv instead. It is in 
this area where I feel that the Debian community could do more to meet others 
half-way.
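
For reference, that approach amounts to something like this (the suite and 
paths are merely illustrative):

    # An unprivileged Debian installation that also provides the native
    # libraries beneath any Python modules and extensions:
    fakechroot fakeroot debootstrap --variant=fakechroot stable ./sandbox
    fakechroot fakeroot chroot ./sandbox apt-get install -y python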

> People don't care about API stability or anything like that, so I think
> you might have to try to frame this in a way that doesn't provoke a
> virtualenv-vs-apt battle -- because, frankly, neither side will win and
> it'll just become a bit murky.
>
> I'd be happy to help you prepare / do more interactive work with folks
> at PyCon (I should likely be there) :)

The one case that many language-focused groups ignore, and where 
distributions do well, is where a range of different technologies must be 
managed and administrators just wouldn't be able to keep up with Python eggs, 
Ruby gems, CPAN, and the language-specific technology of the week. 
Persuading the Python community to feed packages into Debian so that they 
become a safer choice for people who routinely use or know other technologies 
is definitely a worthwhile cause.

Paul





Re: PyCon 2013 -- tentative title/abstract/outline -- feedback plz

2012-09-27 Thread Paul Boddie
On Friday 28 September 2012 00:23:10 Yaroslav Halchenko wrote:
> Thank you Paul ;-)

> Good comments -- once again, arguments seems to be oriented mostly
> toward developers...  I guess I should explicitly guide the
> abstract more toward 'user-' and sysadmin- use cases:  people
> in need to have easy and uniform paths for software installation and
> maintenance of the heterogeneous system.  In scientific users domain it
> becomes even more fun with heavier reliance on computational and I/O
> libraries (blas/atlas, hdf5, etc) building and maintaining which might
> be quite a bit of a hassle.

Yes, I had cause to build NumPy from scratch recently, and it was quite 
intimidating. It happened to involve a fairly low-performance device with 
fairly severe memory constraints, and the experience really pushed me towards 
looking harder at Debian and the packaging work that I knew would have been 
done, potentially solving some of the issues I was experiencing. One of these 
was modularisation of the library, although I'm not sure how much this is 
actually done with NumPy in Debian.

> Few inline comments.

>> I was going to give some feedback more as the kind of person who has gone
>> to Python conferences, and certainly, if you want a native speaker to
>> give feedback on the phrasing of your proposal, I'd be happy to make some
>> suggestions.

> I would appreciate native speaker feedback!  since "accepting all
> types of proposals through September 28, 2012", I guess I have the whole
> tomorrow to revise and submit.  I hope to find some time later today to
> revise my abstract and will post it again for further phrasing
> suggestions

I'll try and follow the list and get back to you.

> That is true... Somewhat offtopic -- that is why with neuro.debian.net
> we pretty much serve an unofficial backports repository for a good
> portion of Python modules we maintain.  Besides immediate benefit for
> users, benefit from backporting for developers has been build-time
> testing across various releases of Debians and Ubuntus, picking out
> problems with specific versions of the core libraries... So, may be I
> should add an accent that availability in Debian doesn't only guarantee
> ease of installation (for users) but provides a good test bed for the
> developers to preclude problems with future deployments on Debian-based
> platforms... ?

Having gone through the packaging process, I know that a lot of the hurdles 
actually do help packagers improve the reliability of the eventual package, 
even though the process can be drawn out and frustrating. Even non-technical 
work like auditing the licences is valuable, since it gives users confidence 
that the end product is of high quality and not just zipped up and uploaded 
to some site or other.

The Python conference scene seems to love testing, so a case built around 
Debian and quality assurance may well appeal: Debian has been doing things 
popular with this crowd for years, such as automated builds and very strict 
package-building tools that won't let you build without a precise 
specification of the build dependencies.

>> Python packaging has become somewhat insular over the years with
>> Python-centric solutions that work across different systems rather than
>> solutions that work well with the rest of the software on particular
>> systems. However, people appear to like things like virtualenv,
>> especially the Web crowd that makes up a lot of the audience at events
>> like this, because it lets them set up relatively cheap configurations
>> for separate Web applications or for experimenting.

> virtualenv is indeed great for the reasons you guys point out AND
> indeed, it is very Python-centric and maintenance of a configured
> virtualenv might become cumbersome for projects with lots of 3rd party
> dependencies and for regular users who would not want to care to switch
> among different virtualenvs etc.

It's a Python tool for Python developers, primarily. If you're running a Web 
operation with lots of Python developers and a commitment to Python then it 
may make sense, but that sort of builds a wall that deters people from 
adopting Python, too.

> I guess I should revise abstract to aim a listener wondering "why should
> I care about Debian if there is virtualenv" WITHOUT explicitly pointing
> to its pros to not cause any flames.  And not sure I would be able to
> convince hard-core Python-ians, so I might not even try and orient
> it more toward users/admins.

I don't think you should worry too much about flames. My impression is that 
the packaging people are trying to scale back their ambitions and just get 
everyone to do the basic things like writing decent metadata, mostly because 
the process of delivering a comprehensive solution is deadlocked once again. 
I also think that people do see the need to hear from distributions and to 
get as much input from them as possible.

>> I have advocated solutions based on fakechrooted [...]

Re: PyCon 2013 -- anyone going? ideas for the talks?

2012-09-21 Thread Paul Boddie
On Friday 21 September 2012 20:13:22 Yaroslav Halchenko wrote:

> eh -- I cannot recommend any specific tutorial, especially tailored toward
> Python (yet). Lucas' packaging-tutorial is quite nice but IMHO for 1/2 hour
> introduction into packaging tutorial  should be more concise and more
> specific toward common situations with Python modules/extensions/apps
> packaging.  But I would advise to repeat a few times that the first
> pre-requisite toward easy packaging is for the project to follow the
> standard procedures, i.e. using distutils (setuptools) and having a working
> setup.py, having clear specification of copyright/license terms and
> dependencies.  Additional benefit -- having a good collection of unittests
> to be enabled at build time.  With those ideas in the pocket, in 90% of the
> cases the basic packaging would be quite easy thanks to dh+dh_python[23]
> bundle.

Judging from discussions on python-dev, I imagine that it might be 
interesting for people to be shown how following the basic best practices 
around metadata and configuration information can get them most of the way to 
making a half-decent package. It might also be informative if Natalia 
Frydrych's PyPI-to-Debian work were covered somehow, because that would make 
people aware of packaging portability and show that, with only a little extra 
consideration, they could have their work conveniently available to an entire 
community.

Paul





Request for Review: Shedskin (a Python-to-C++ compiler)

2011-11-15 Thread Paul Boddie
Hello,

As I mentioned in the thread concerning Nuitka, I've been trying for a while 
to get Shedskin packaged for Debian, sending the occasional RFS to the 
Mentors list, and I'd like to know whether anyone would take this work 
further and/or sponsor it. Here's the Mentors link:

http://mentors.debian.net/package/shedskin

And here's a link to the packaging itself:

http://hgweb.boddie.org.uk/shedskin-packaging/

Julian Taylor and Jakub Wilk have been very helpful in giving feedback and 
correcting my many packaging mistakes. D Haley also gave many suggestions in 
response to the ITP:

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=564533

Although I don't really use Shedskin myself - I have too many other projects 
currently active, but I have been meaning to use it more - the software has 
made substantial progress towards being a widely applicable solution and can 
compile non-trivial programs, producing significant speed-ups. One fairly 
impressive example (which unfortunately cannot be distributed with Debian) is 
a Commodore 64 emulator:

http://shed-skin.blogspot.com/2011/06/shed-skin-08-programming-language.html
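
For anyone unfamiliar with the tool, the workflow is roughly as follows (the 
program name is invented for illustration):

    # Translate a restricted-Python program to C++ and build it:
    shedskin sieve.py    # emits sieve.cpp and sieve.hpp plus a Makefile
    make                 # produces a native executable
    ./sieve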

There are also indications that Shedskin is occupying a firm place in the 
range of solutions employed to increase the performance of Python-based 
software systems:

http://ianozsvald.com/2011/07/25/high-performance-python-tutorial-v0-2-from-europython-2011/

I intend to maintain the package as long as is necessary, although I would 
also be prepared to let a motivated Debian developer take over the 
responsibility for maintenance. My aim is primarily to make the inclusion of 
Shedskin into Debian as little work as possible for others.

Regards,

Paul

