Re: arm64 Debian/Ubuntu port image available

2013-02-28 Thread Michael K. Edwards
Nice work, Wookey!  If my experience cross-building for armhf is any guide,
all you need for NSS is a host build of shlibsign; see
https://github.com/mkedwards/crosstool-ng/blob/master/patches/nss/3.12.10/0001-Modify-shlibsign-wrapper-for-cross-compilation.patch.
There's also scriptage in that repo for the build sequence and cross
parameters:
https://github.com/mkedwards/crosstool-ng/blob/master/scripts/build/cross_me_harder/510-nss.sh.
It's ugly in places (cross pkgconfig was kind of wonky at the time) but it
may help you get past NSS and on to apt.
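
(In case it helps anyone else: the NSS build wants to run the freshly
built shlibsign against the freshly built libraries, which a cross build
obviously can't do on the build machine. A minimal sketch of the wrapper
idea -- paths and variable names here are illustrative, not taken from the
patch above:

    #!/bin/sh
    # Stand-in shlibsign for a cross-build: run a natively executable
    # host build instead of the unrunnable target binary.
    # HOST_SHLIBSIGN is a hypothetical location; adjust to taste.
    HOST_SHLIBSIGN=${HOST_SHLIBSIGN:-/usr/local/nss-host/bin/shlibsign}
    exec "$HOST_SHLIBSIGN" "$@"

The real patch does more bookkeeping, but that's the gist.)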

Cheers,
- Michael
On Feb 26, 2013 6:11 PM, Wookey woo...@wookware.org wrote:

 State of the Debian/Ubuntu arm64 port
 ======================================

 *** Arm64 lives! ***

 Executive summary
 -----------------

  * There is now a bootable (raring) image to download and run
  * Everything has been rebuilt against glibc 2.17, so it works
  * A bit more work is needed to make the rootfs usable as a native buildd
  * Multiarch crossbuilding and the build-profile mechanism are mature
    enough to cross-build a port from scratch (this is a big deal IMHO)
  * All packages, sources and tools are in a public repo and this work
    should be reproducible.
  * This image is fully multiarched, so co-installing armhf for a
    64/32 mix should work nicely, as should multiarch crossbuilding to
    legacy x86 architectures. :-) (but I haven't tried that yet...)


  * Linaro wants 'the distros' to take this work forward from here, so
    people interested in Debian and Ubuntu on 64-bit ARM hardware need to
    step up and help out.


 Bootable images
 ---------------

 A milestone was reached this week: enough packages were built for arm64
 to debootstrap an image which booted to a prompt! After a bit of
 fettling (and switching to multistrap) I got an image with all the
 packages configured, which boots with upstart to a login prompt. (I
 admit, I did get quite excited about this, as it represents the coming
 together of nearly 3 years' work on multiarch, crossbuilding,
 bootstrapping, cyclic dependencies and arm64 :-)

 The images are available for download:
 http://wiki.debian.org/Arm64Port#Pre-built_Rootfs
 and there are instructions there for making your own.
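
 For the curious, a multistrap recipe is just a small config file. A
 minimal illustrative sketch follows (section and field names as in
 multistrap(1); the package list and repo URL here are guesses, not the
 real recipe -- that is on the wiki page above):

     [General]
     arch=arm64
     directory=arm64-rootfs
     aptsources=Arm64
     bootstrap=Arm64

     [Arm64]
     packages=ifupdown netbase udev
     source=http://people.linaro.org/~wookey/buildd/raring-arm64
     suite=raring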

 All these packages were cross-built on raring, untangling cyclic
 dependencies with build profiles (see wiki.debian.org/DebianBootstrap
 for how that works), making this the first (non-x86) self-bootstrapped
 Debian port ever (so far as I know). All (?) previous ports have been
 done using something else, like OpenEmbedded (armel, armhf),
 RedHat/HardHat (arm, alpha, mips), or something IBMy (s390), to get an
 initial Linux rootfs on which Debian packages are built.
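
 (For anyone who hasn't met build profiles: a package's Build-Depends can
 be annotated so that a reduced 'stage1' build drops the dependencies
 that would otherwise form a cycle. A hand-wavy sketch, using the
 restriction syntax as it later settled down -- the DebianBootstrap page
 has the real details:

     Build-Depends: debhelper (>= 9), libselinux1-dev <!stage1>

 A stage1 build skips libselinux1-dev, breaking the cycle; the package is
 then rebuilt normally once the dependency exists in the archive.)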

 The new bootstrap process is (almost) just a list of sbuild commands.
 In practice there are still a few rough edges around cross
 build-dependencies, so of the 140 packages needed for the bootstrap, 9
 had to be built manually with 'dpkg-buildpackage -aarm64 -d' (to skip
 build-dep checks) instead of 'sbuild --host arm64 package'.
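
 In other words, the happy path and the fallback look roughly like this
 (package name illustrative):

     # normal cross-build of a bootstrap package:
     sbuild --host arm64 somepackage
     # fallback for the 9 packages whose cross build-deps don't resolve
     # (-d skips the build-dependency check entirely):
     dpkg-buildpackage -aarm64 -d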

 The current bootstrap packageset status is here:

 http://people.linaro.org/~wookey/buildd/raring-arm64/status-bootstrap.html

 There is no armv8 (arm64/aarch64) hardware available yet, so this image
 can currently only be run in a model. ARM provide a free-beer
 proprietary 'Foundation model', so we do have something to test with.
 It's sluggish but perfectly usable. Booting the images takes a couple
 of minutes on my fairly average machine.

 The images are using the Linaro OE release kernels, which seem to work
 fine for this purpose. Thanks to Marcin for the modified bootloader
 lines in .axf files.



 Image status
 ------------

 I was impressed that things basically 'just worked' on first boot.
 There is of course plenty of breakage, I'm sure, and I haven't looked
 very hard yet, but it's a lot better than I expected after months of
 just building stuff and testing nothing. (Things that are poorly: nano
 can't parse its own syntax-colouring files, for example, and multiarch
 perl has the wrong @INC path compiled in; I'm sure there is more.)
 Consider this alpha-grade until it's been used a bit more.

 Things that are not yet built which would make the images a lot more
 useful are apt and a dhcp client. apt needs gnupg needs curl needs nss.
 The nss cross-build needs fixing to unbung that. A Debian chroot
 without apt turns out to be disappointing quite quickly :-)
 Expect an updated image with more packages very soon.


 Multiarch crossbuilding
 -----------------------

 It's really nice to have building and crossbuilding using exactly the
 same mechanisms and tools, with all files having one canonical
 location, and dependency mechanisms that are reliable. The more I've
 used this, the more I've been impressed by it. There is still work to
 do to expand the set of cross-buildable stuff, but it's a solid base to
 work from.
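
 (To make the 'same mechanisms' point concrete: on a multiarch system
 the cross-build setup is roughly just adding the target architecture
 and letting the normal dependency machinery do the rest. A sketch,
 assuming raring-era tool behaviour and an arm64-enabled archive:

     dpkg --add-architecture arm64
     apt-get update
     apt-get build-dep -a arm64 somepackage   # cross Build-Depends
     sbuild --host arm64 somepackage

 The same apt/dpkg resolver that handles native builds handles the
 cross case.)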

 Getting this port working has been 'interesting' because it's attempting 4
 new 

Re: Bruce Perens hosts party at OSCON Wednesday night

2005-08-02 Thread Michael K. Edwards
On 8/2/05, Adam Heath [EMAIL PROTECTED] wrote:
 Unsolicited Commercial Email.  Please pay the standard $2000 fee for
 advertisements on Debian mailing lists.

Adam, I'm kind of curious what you mean by that.  What, if any, actual
or proposed statutory standard for UCE did you have in mind when you
called Bruce's message UCE?  What, if anything, are or should be the
consequences if he doesn't pay $2000 to whomever you had in mind?  How
can I tell whether a message I'm thinking of sending will trigger the
same penalties?  How do people receive notice of this standard?

Cheers,
- Michael



Re: OT: debian mentors ubuntu

2005-07-19 Thread Michael K. Edwards
On 7/19/05, Ben Armstrong [EMAIL PROTECTED] wrote:
 On Tue, 2005-07-19 at 12:21 +0200, Nico Golde wrote:
  Heyho,
  why is mentors.debian.net powered by Ubuntu?
 
 http://mentors.debian.net/
 About this repository
 Welcome to the debian-mentors public software repository.
 ...
 Please note that this service is not run as a part of the
 official debian.org web site.*
 ...
 If you have questions which you do not find answered on these
 web pages please write an email to [EMAIL PROTECTED]
 
 Ben
 * And if it were, you'd ask debian-project, not debian-devel.

And if so, hopefully they would answer that it reflects a judgment
within the discretion of its principal sysadmin(s), and that success
for a Debian derivative is not failure for Debian.  And hopefully they
would be able to point to other services that run on other Debian
derivatives, or at least a plan to deploy such.  And hopefully this
would reflect a generally more positive attitude among DDs towards the
leveraging of Debian's mechanisms and code in good faith by
profit-seeking entities than one would gather from reading d-d.

Call me an unreasoning optimist (or vilify me as a traitor to
communitarian ideals), but I think it is not out of the question for
the Debian project and community to evolve in that direction.  Might
contribute to more nuanced views of legal issues, too.

Cheers,
- Michael
(IANADD)



Re: libcurl3-dev: A development package linked again gnutls needed

2005-07-18 Thread Michael K. Edwards
On 7/17/05, Marco d'Itri [EMAIL PROTECTED] wrote:
 Upstream developers should get a clue and either properly license their
 software, stop using libcurl, or add gnutls support to it.

Upstream developers (and a lot of other people) should stop believing
the FSF's FUD about how it's not legal to integrate (in an engineering
sense) components offered under the GPL and GPL-incompatible licenses,
especially OpenSSL.  That claim is (IANAL, TINLA) utterly without
foundation in the applicable law, not to mention profoundly
hypocritical in light of the LPF's amicus brief in Lotus v. Borland
and the incorporation of the OpenSSL interface spec in the shim
bundled with GNU TLS.  Knuckling under to it, where OpenSSL is
concerned, amounts IMHO to collusion with an extra-legal
anti-competitive strategy that may yet be proven against the FSF under
a more appropriate choice of law by someone more genuinely affected
and more legally competent than Daniel Wallace.

Cheers,
- Michael



Re: unreproducable bugs

2005-07-16 Thread Michael K. Edwards
On 7/15/05, Manoj Srivastava [EMAIL PROTECTED] wrote:
[cranky but funny stuff]

If there ever is a blackball committee, Manoj of all people belongs on it.  :-)

Cheers,
- Michael



Re: unreproducable bugs

2005-07-15 Thread Michael K. Edwards
On 7/15/05, Manoj Srivastava [EMAIL PROTECTED] wrote:
 What's with the recent push to get every little thing written
  down into policy, so the developer no longer is required to have an
  ability to think, or exercise any judgement whatsoever?

Welcome to the software industry in 2005.  If you haven't yet
encountered a senior software engineer with three degrees and a
six-figure salary who couldn't debug his way out of a paper bag, you
work in a very different part of the industry than I do.  [Note that I
am not accusing Nico or anyone else in Debian of fitting this
description.]

The threshold at which it is actually rather improbable that one
totally lacks the capacity for independent judgment seems to be
"principal engineer" -- a director equivalent in many large companies.
I have worked with a number of junior staff whose performance
exceeded my expectations for their level of seniority -- including at
least one guy with a so-so high school education who was more able
than several MSCS's I have known -- but they are very much the
exceptions rather than the rule.

It's not the lack of (programming or human) language skills that's the
problem -- it's the lack of thinking skills.  I don't know if they can
be taught, but they certainly aren't being taught.  This problem is
endemic in the US educational system -- reputed to be worse in
California than almost anywhere else, even most of the Deep South --
and if my personal experience is any guide there are a few other
countries that are in similar positions.

Formal evaluation processes don't seem to do jack to keep the nitwits
out.  The only thing I've ever seen work is a self-selected review
team with anonymous blackball authority and a few seriously cranky
members.  That, of course, has its own problems; but it does work, at
least for a while.

Cheers,
- Michael



Re: shared library -dev package naming proposal

2005-07-15 Thread Michael K. Edwards
On 7/15/05, Steve Langasek [EMAIL PROTECTED] wrote:
 On Fri, Jul 15, 2005 at 05:30:44PM +0900, Junichi Uekawa wrote:
  An alternate solution is to have a database for that kind of thing,
  but I foresee that it requires effort to maintain and keep up-to-date.
 
 Like the database I just queried above? :)

There's an even better one called Google.  If you're adding a
library dependency to a package that you plan to maintain for the
benefit of a large number of users, you might want to know a little
more about the library, its upstream, and its packager than just what
the relationship is between foo.sf.net, foo-X.X.tgz, and the binary
package names.

Automated tools, on the other hand, can and should be primed with
data, not heuristics.  Test suites _should_ be fragile so that if
something changes in a remotely questionable way you _spot_ it.  Then
you use a heuristic, if available, to update the priming data and
touch it up manually where necessary.  Automate where it helps, not
where it hurts.

  It's rather embarrassing to know that Debian isn't organized at all in this
  manner.

Organization is overrated.  While good code is, in the long run, an
aesthetic criterion as much as anything else, some aesthetic instincts
can be misleading.  Cathedral / bazaar, and all that.  (Though I
personally prefer cathedrals, and if you've read about how they were
actually built, you will see that the Linux, glibc, GCC, perl, python,
etc. development process looks much more like cathedral building than
like the Kasbah.)

 You seem to be embarrassed easily.  If this is such a problem for using
 Debian as a development platform, why is this the first time I've seen the
 subject discussed on debian-devel?

There may well be useful tools that are made harder to write by the
indiscriminate naming of packages.  For an example where the global
aesthetic criterion does tend to win, at the expense of some use
cases, consider the prejudice against splitting off micro-packages
to slim down the dependencies of the main binary package.  tetex-bin
comes to mind -- and don't tell me that tetex-base is the main
package, because it's tetex-bin that is needed when building X11 (last
time I checked; still true of xfree86 in unstable; apparently also
true of xorg).

Perhaps it's not worth splitting out xpdf as a separate source package
to break the circular build-depends -- although it would avoid
gratuitous security updates to the rest of tetex.  But I for one
really don't like having to have the binary packages from an old
xfree86 build installed in order to do a new build.  Yeah, you can
build your own tetex-bin with xpdf excluded, or just force-install
tetex-bin without the X libs in a chroot -- but it's ugly.

I know that the package count is getting to be a scalability problem
and that people are working on ways of dealing with that, and in the
meantime there is some rational pressure not to split packages
needlessly.  I'm not blaming the TeTeX team for weighing the factors
and deciding not to split.  I'm just giving an example of a warning
sign that too many meanings are being overloaded onto one technical
distinction -- in this case, the boundaries of a .deb.  Another
example would be localization packages; I hope I don't need to spell
that one out.

 I'm not convinced that the problem you're trying to solve is of sufficiently
 general interest to outweigh all of the other problems it introduces (such
 as the ones that have been pointed out in this thread).

IMHO the problem is real, the solution is wrong.  Don't try to
organize the underlying data; add enough metadata markup that you can
present better organized views for various purposes.  Don't rush to
add that metadata to debian/control; sketch out a heuristic using
existing metadata that leaves you with a relatively small number of
manual overrides, write real applications that use it, and then decide
if it's OK to keep the manual overrides as detached metadata or if
they belong in debian/control.

Cheers,
- Michael



Re: unreproducable bugs

2005-07-15 Thread Michael K. Edwards
On 7/15/05, Rich Walker [EMAIL PROTECTED] wrote:
 Michael K. Edwards [EMAIL PROTECTED] writes:
  On 7/15/05, Manoj Srivastava [EMAIL PROTECTED] wrote:
  What's with the recent push to get every little thing written
   down into policy, so the developer no longer is required to have an
   ability to think, or exercise any judgement whatsoever?
 
  Welcome to the software industry in 2005.
 
 Yes, to rely on 1300 developers to all think of your cunning method of
 solving a problem clearly makes sense. After all, to *write down* a
 technique that solves the problem, and make it available to all of them
 would stilt their creativity, hinder their intellect, and prevent the
 development of a consistent style!

I am having a hard time reading this as anything but a non sequitur. 
Personally, I prefer for a solution to be demonstrated to work, both
socially and technically, before it is enshrined in policy.  Drafts
are, of course, welcome at any stage.  Rough consensus and running
code.  YMMV.

 Sheesh, next you'll be arguing in favour of personal indentation styles!

Well, yes -- as long as the indent / emacs-mode / vim-mode
incantations that reproduce them are well documented, preferably in a
magic comment at the end of each file.  :-)
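
For instance (illustrative only -- conventions vary), the tail of a
shell script might carry both flavors:

    # Local Variables:
    # sh-basic-offset: 4
    # End:
    # vim: set ts=4 sw=4 et: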

Cheers,
- Michael



Re: unreproducable bugs

2005-07-15 Thread Michael K. Edwards
On 7/15/05, Rich Walker [EMAIL PROTECTED] wrote:
  I am having a hard time reading this as anything but a non sequitur.
 
 Umm; it follows more from Manoj's comment than yours.

Ah.  OK.

  Personally, I prefer for a solution to be demonstrated to work, both
  socially and technically, before it is enshrined in policy.  Drafts
  are, of course, welcome at any stage.  Rough consensus and running
  code.  YMMV.
 
 You scale an organisation, I understand, by removing the *need* for
 everyone in it to be a genius at everything it does.

Bingo!  You also take care not to formalize unduly, or you get a
sclerotic bureaucracy.

 Hence the comment about the US army: "designed by genius to be run by
 sergeants".

As a close associate of several sergeants in the US Army, I question
only the "designed by genius" part.  Given what armies do for a
living, Darwinian selection is probably also a factor.  :-)

 There does seem to be a lot of discussion on the debian groups about
 policy. If Debian is lucky, or well-managed, then it is the process you
 are describing. If it is unlucky, then it is a bunch of rule-lawyers
 having fun.

Don't knock rule-lawyers, just ignore them until they produce
something you can tolerate.  And keep your eyes open for things that
you don't want to agree with but that happen to reflect a real-world
truth of which you were previously unaware.  Kinda like real lawyers,
actually.

  Well, yes -- as long as the indent / emacs-mode / vim-mode
  incantations that reproduce them are well documented, preferably in a
  magic comment at the end of each file.  :-)
 
 Exactly: that and an indent script in the checkin routine remove any
 issue.

As long as it's purely advisory, please -- no tool is perfect
(although TeX is damn close).

 See how that compares to policy, which is hopefully implemented in such
 a way as to be mechanically testable?

To within certain limits, as demonstrated by lintian and linda -- up
there with dpkg and debhelper in the pantheon of Debian's
contributions to the world.  Not quite on par with the DFSG, but
that's only to be expected; the DFSG is not intended to be testable by
a machine that is less than Turing-complete.  :-)

Cheers,
- Michael



Re: unreproducable bugs

2005-07-15 Thread Michael K. Edwards
On 7/15/05, Rich Walker [EMAIL PROTECTED] wrote:
 (As a practicing SubGenius, I like to think of the ornery, cussing
 Debian, up there with the Two-Fisted Jesus, and the Butting
 Buddha. Others may have other views)

As a practicing Episcopatheist, I like to murmur, "There is no God,
and debian-legal is her Prophet."  ;-)

 Helps. The British Army likes to send officers out in front - produces
 lots of dead heroes in the upper classes, as well as reducing incidence
 of fragging...

Yes, indeed.  Dead heroes are the safe kind, from a political point of
view.  The US, with a little help from its foes, is currently engaged
in producing an alarming number of one of the unsafe kinds: living but
multiply amputated and/or addled by massive brain trauma.

Ooh -- I didn't notice your .sig before I wrote that.  Synchronicity?  :-/

 By the way, a spot of Google produces:
 
 Child (1984) cited "A machine designed by geniuses to be run by idiots",
 Herman Wouk, The Caine Mutiny, on the organization of the wartime US
 Navy.

I do like a man who cites his sources.

 I get asked from time to time by academics for interesting projects for
 their students. I think I now have another:
 
 Implement a system capable of using standard AI techniques to process
 the (a) existing judgements and (b) content of debian-legal such that it
 can issue plausible analysis of a new software license...

Plausible?  No problem -- in the trade we call that a "pseudo-random
number generator".  Hard to implement without infringing some patent
the USPTO issued last week, though ...

Cheers,
- Michael



Re: skills of developers

2005-07-14 Thread Michael K. Edwards
On 7/14/05, Bartosz Fenski aka fEnIo [EMAIL PROTECTED] wrote:
 In sum. Maybe it's time to create additional positions in Debian project?
 Maybe something like Packager (with knowledge about Bash and Debian
 Policy), Translator (with knowledge about some particular language and
 English), Helper (with knowledge about Debian in general), and finally
 DEVELOPER which develops software and is able to fix it if it's broken.
 Developers could be split into Python/Perl/C/C++/Java/Mono/and so on...

AIUI, DD-hood principally conveys voting rights, upload and login
privileges to certain machines, and r/w access to debian-private and w
access to d-d-a.  None of this has much to do with real-world
software developer competence and it would be rather odd to try to
retrofit such an expectation onto the Developer status at this
point.  It's not as if bogus position titles weren't ubiquitous in the
software industry anyway -- I have knuckled under to accepting titles
like Staff Engineer despite the fact that I am no engineer and do
not pretend to be even in job interviews, let alone any other setting.

But FWIW I would be disinclined to see Developer status split along
programming language lines in any way that isn't purely advisory.  In
a crisis I'd rather have a wizard developer who has never seen Python
(Scheme, OCaml, whatever) before step in, figure out an RC bug, and
deal with it without having to jump some stupid "hello world" hoops
first.  After you've worked in a dozen disparate languages, the
thirteenth is just more grist for the mill.  And for that matter, with
a little help from Google, fixing a screwed-up translation file in
some human language you don't know isn't all that hard either.  It
won't be idiomatic, but they'll get the idea.

Cheers,
- Michael



Re: Bug#301527: ITP: mazeofgalious -- The Maze of Galious

2005-07-13 Thread Michael K. Edwards
Added debian-legal; please drop debian-devel on follow-ups.

On 7/9/05, John Hasler [EMAIL PROTECTED] wrote:
  It is still using a copyrighted/trademarked (don't know which) name
 
 There is no such thing as a copyrighted name.  The name does appear to have
 been a trademark at one time, but if enough time has gone by without a
 product being marketed under that name the trademark will have lapsed.

It does, however, infringe the original's copyright in characters,
mise-en-scène, etc., as well as a plethora of literal elements
("Nearly the same look-and-feel", per your linked page).  Compare the
discussion of OpenTTD at
http://lists.debian.org/debian-legal/2005/05/msg00628.html .  And
please keep it out of Debian; we have more than enough abandonware
clones as it is.

- Michael



Re: Ongoing Firefox (and Thunderbird) Trademark problems

2005-07-03 Thread Michael K. Edwards
On 7/3/05, Wouter Verhelst [EMAIL PROTECTED] wrote:
 On Sat, Jul 02, 2005 at 05:35:09PM -0700, Michael K. Edwards wrote:
  On 7/2/05, Andrew Suffield [EMAIL PROTECTED] wrote:
   On Thu, Jun 30, 2005 at 09:43:04PM +0100, Gervase Markham wrote:
These are two very different cases, though. If a local admin installs a
new root cert, that's cool - they are taking responsibility for the
security of those users, and they have extreme BOFH power over them
anyway. However, having the root appear by default, so that no-one at
the remote site really knows it's there (who consults the root list) and
it's now on Y thousand or million desktops - that is a different kettle
of fish.
  
   You've missed the really interesting, really important case.
  
   What about the site admin team for X thousand desktops who produce a
   modified firefox package to be used across the whole company? This is
   the normal, expected usage of Debian.
 
  Happily, trademark law is perfectly indifferent to this case; when the
  modified package is not advertised, marketed, sold, or otherwise used
  in commerce under the trademark, there is no case for trademark
  infringement (AIUI, IANAL).
 
 And?
 
 It's not because Debian doesn't advertise, market, sell, or otherwise
 use in commerce its own modified package that other people don't do
 that. Think of people selling CD images, preinstalled computers...

Note that I was responding specifically to a concern raised by Andrew
about whether the Mozilla Foundation's objections to adding root certs
to the Firefox package would affect corporate site admins who rebuild
Debian's Firefox.  As I see it, that is indeed a legitimate scenario
for downstream alteration of the root cert list, but that's OK -- the
trademark safety zone automatically extends to the site admin, since
she is not marketing her altered package under the trademark.

In other words, adding root certs is a common case; but adding root
certs _and_marketing_the_result_ is a corner case.  I don't have a
problem with a clause in a trademark policy that says, "the 'safety
zone' for descriptive use of our trademark on modified builds does not
extend to people who add root certs without our approval and advertise
or sell the result under our trademark."  I think it's fair to say
that QA on the root cert list is rather central to the QA of a
secure browser, and for the Mozilla folks to call this out as a
sensitive issue is not only reasonable but perhaps (IANAL) necessary
if they want to retain the trademark.

Cheers,
- Michael



Re: Ongoing Firefox (and Thunderbird) Trademark problems

2005-07-02 Thread Michael K. Edwards
On 7/2/05, Andrew Suffield [EMAIL PROTECTED] wrote:
 On Thu, Jun 30, 2005 at 09:43:04PM +0100, Gervase Markham wrote:
  These are two very different cases, though. If a local admin installs a
  new root cert, that's cool - they are taking responsibility for the
  security of those users, and they have extreme BOFH power over them
  anyway. However, having the root appear by default, so that no-one at
  the remote site really knows it's there (who consults the root list) and
  it's now on Y thousand or million desktops - that is a different kettle
  of fish.
 
 You've missed the really interesting, really important case.
 
 What about the site admin team for X thousand desktops who produce a
 modified firefox package to be used across the whole company? This is
 the normal, expected usage of Debian.

Happily, trademark law is perfectly indifferent to this case; when the
modified package is not advertised, marketed, sold, or otherwise used
in commerce under the trademark, there is no case for trademark
infringement (AIUI, IANAL).  A trademark license can of course be
conditioned on the licensee's agreement to arbitrary constraints,
since (like a copyright license) it is an offer of contract; but the
offer on the table from the Mozilla Foundation more nearly resembles a
unilateral grant, articulating a safety zone within which no license
as such is required, and places no onus on Debian to ensure that
recipients of the Debian package don't further re-work it.

  A quick reminder of what's at risk here: if the private key of a root
  cert trusted by Firefox became compromised, _any_ SSL transaction that
  any user trusting that cert performed could be silently MITMed and
  eavesdropped on.
 
 Let's be serious here. You've already got the verisign certificates,
 and you've got a helpful dialog box that appears whenever new
 certificates are presented to the browser such that the user can just
 whack 'ok' without reading it. SSL security on the internet at large
 is a myth. Anybody who trusts it is insane; the risks aren't very
 significant.

Information security in general is a myth, but like many myths it has
utility in some contexts.  If you feel some degree of responsibility
for ensuring the good conduct of someone else, then it's wise to make
good conduct convenient for them and bad conduct as inconvenient,
risky, and easy to detect as possible.  Site admins who care can make
it quite inconvenient for the user to whack 'ok' when there is no
chain of trust to a root cert, and can back this up with logging to
make it easy to detect and a site policy to make it risky.  Selecting
root certs carefully can make it relatively convenient for a
legitimate site to establish a chain of trust and relatively
inconvenient to undermine that trust.

None of this is anything resembling foolproof -- or rather clever,
determined, non-risk-averse, expendable attacker-proof.  But it's
sophomoric to claim that ease of circumvention by an unsupervised user
would justify a cavalier attitude to root cert security on the part of
the Mozilla Foundation.

Cheers,
- Michael



Re: Ongoing Firefox (and Thunderbird) Trademark problems

2005-06-27 Thread Michael K. Edwards
On 6/27/05, Wouter Verhelst [EMAIL PROTECTED] wrote:
 On Mon, Jun 27, 2005 at 02:34:00AM -0400, Eric Dorland wrote:
  Presumably isn't good enough IMHO. If they cared about fairness they
  would develop a trademark policy that could be applied to everyone,
  based on the quality criteria that is right now only known to the
  MoFo.
 
 How do you judge quality? Do you apply some basic filter, read a metric,
  and say "this program scores 83% on the scale of good code"?
 
 Or do you have a look at how people write their code, what the result
 is, and whether you think that result is a good thing? In other words,
 do you make a judgement call?

As a general rule, the trademark holder is obliged to retain the
authority to judge whether or not others are maintaining adequate
quality controls.  This authority is not without bounds (see
Prestonettes v. Coty and other precedents I have cited), but
delegating too many subjective judgment calls to the trademark user
(whether or not he/she is a licensee) risks loss of the trademark.

It is apparently possible for a trademark holder to accept contractual
limits on his/her/its authority to issue a product a failing grade;
see the Sun v. Microsoft saga and the extent to which Sun may be
obliged to pass a Java implementation that passes their Test
Compatibility Kit.  But a browser is not a JVM, and I don't think it's
reasonable to ask the Mozilla folks to reduce their QA role to
objective validation with a currently non-existent TCK.

I hope that Eric or the DPL will decide to formally acknowledge the
safety zone offered to Debian by the Mozilla Foundation rather than
profess to "ignore" the trademark policy.  A policy that one is
"ignoring" can't be held up to other trademark holders as a modus
vivendi; and in any case I think the Foundation deserves better from
Debian than that.  I do find it encouraging that Eric has invited the
DPL to weigh in.  I hope that Branden finds the time to consult with
competent counsel (which I am not) and to propose a solution that
garners enough developer support to be a model for our relationship
with other trademark holders.

Cheers,
- Michael
(IANADD, IANAL, TINLA)

P. S.  The Mozilla Foundation doesn't seem to object to MoFo, and
may even have originated the term.  But I can certainly see how it
could lead to misunderstandings.  :)



Re: Debian concordance

2005-06-20 Thread Michael K. Edwards
On 6/20/05, Ian Murdock [EMAIL PROTECTED] wrote:
 On 6/18/05, Michael K. Edwards [EMAIL PROTECTED] wrote:
  In any case, Ubuntu packages aren't Debian packages any more than
  Mandrake packages are Red Hat packages.
 
 If Ubuntu sees itself to Debian as Mandrake was to Red Hat, then that
 certainly explains a lot.

Well, I haven't used Mandrake in a while, and if there's bad blood
there that I don't know about then it's probably a bad analogy.  (And
remember, I have no affiliation with Ubuntu either.)  All I'm saying
is that, unlike a CDD or a Debian-pony-with-one-extra-trick (Knoppix,
etc.), Ubuntu is a full distro which tracks Debian at the source code
level rather than using Debian binary packages.

This has consequences for ISVs not unlike those of the early Mandrake
releases, when Mandrake tracked Red Hat's code and releases but
optimized for Pentium and wrote an alternate installer.  If you wanted
your third-party application to run on both, you couldn't just sort of
pretend you were part of the Red Hat release cycle; you needed to
concoct a build environment whose products were distro-agnostic. 
Choice is good; but choice doesn't always make things easier.

  If you want binary
  compatibility, you need to build a system whose engineering outcome is
  binary compatibility
 
 That's precisely what I'm proposing we should do here! There will never
 be a better time.

Building that system doesn't mean cajoling Ubuntu into holding breezy
back.  It means (as I see it) constructing an apt repository and a
debootstrap recipe for Debian Standard Base 05.03 and 05.09 -- build
environments for sarge+hoary+breezy+etch-compatible binary debs and
breezy+etch-compatible debs respectively.

Presumably packages built in 05.03 won't be able to use ABI fixes,
etc. introduced at the last minute in sarge and/or hoary, and anything
that we can already tell won't run on breezy or etch should also be
excluded.  If that leaves a build environment that won't build much
other than C programs with few library dependencies, maybe we should
think about formalizing more backwards compatibility mechanisms in
breezy/etch.  If, that is, we care about ISVs and other poor sods who
don't want their application upgrade cycle dictated by their O/S
upgrade cycle (and vice versa).

Cheers,
- Michael



Re: Debian concordance

2005-06-19 Thread Michael K. Edwards
On 6/18/05, Joey Hess [EMAIL PROTECTED] wrote:
 Matt Zimmerman wrote:
  Practically speaking, the differences in compatibility between Ubuntu and
  Debian is of as much concern as those between Debian stable and Debian
  unstable.  New interfaces are added in unstable constantly, and software is
  adapted to use them.  Binary packages from unstable are rarely installable
  on stable without upgrading other supporting packages.  Third party
  packagers must choose whether to provide builds for stable (if the package
  builds), unstable or both.  So far, this has not resulted in a problem for
  Debian.
 
 Except unstable is capable of running packages built on stable, and
 stable is to some extent (partial upgrades) capable of running packages
 built on unstable. And if it doesn't work, a dependency will tell you it
 doesn't work. And Debian is able to decide we want to make it work better
 and fix things. So I don't think your analogy holds up very well.

After six months, I suspect that sid will have evolved to where no
binary package of any great complexity from sarge will install on it
without a stack of oldlibs; and backports will be (as usual) a royal
pain.  Better just to run a carefully selected sid snapshot.  Test
your backups frequently, though.  :-)

But Joey's right that having two stable releases, neither of which
has systematically greater version numbers than the other, complicates
the graph a lot.  This isn't necessarily a problem; in a way, it's a
nice business opportunity for people with a good grasp on the whole
process to build customized distros for businesses that need them for
embedded / ISV purposes.  But that opportunity already existed
mid-Debian-release-cycle; it's just sort of changed shape.

A former client of mine was very appreciative of the glibc
2.2.5-final-that-never-was build that I did for them (extending the
useful life of their woody systems, for their particular purposes, by
a year or so).  In a hypothetical similar situation six months from
now on sarge, I would probably suggest that they try breezy for a
while rather than go custom; but they would need custom work from a
different angle to port their internal code sideways and re-tune
their automated install procedures.  And it works both ways, of
course; developers who jumped immediately to hoary may decide that
they want python 2.3.x by default after all, and need help persuading
sarge to play nice with multiarch.

It's dull work in some ways, but it's bread and butter for the local
distro wrangler.  I'd sure rather have several Debian-style arrows in
my quiver than have to choose between Good Luck Enterprise Linux
flavors R, S, and W.  :-)

  The cost of guaranteeing ABI compatibility is high, and the benefit to free
  software is marginal.  It is a problem for proprietary software vendors to
  be concerned with, and we should leave it to them.  We have more important
  work to do, and we're doing it in source code form.
 
 If I were only interested in source code, I would not be a contributor
 to this distribution. I am interested in whole, working systems which
 are accessible to end users.

Amen, brother.  But Ubuntu's end users shouldn't be pointing their
sources.list at J. R. Hacker's apt repository, even when J. R. Hacker
is pronounced "Debian", and vice versa.  On the other hand, I
respectfully differ from Matt about whether the creation of an
ISV-friendly build environment should be left to ISVs.  Coaxing, say,
a major proprietary RDBMS vendor onto Debian/Ubuntu, and reaping the
benefit of their benchmark-driven advice in tuning our kernels and
libraries, would benefit everyone, including those who prefer
open-source databases.

  "Debian packages just work", in the environment for which they were intended
 
  No, Debian packages "just work", if dpkg allows you to install them on
  your system.
 
 Unless, now, they happen to be built by someone running the other
 distribution.

I can think of several ways that this could happen, but I haven't
actually seen any of them yet.  Would you mind adducing some examples?

  This has nothing to do with binary compatibility, and everything to do
  with rigorous packaging practices (which is the true basis for this
  selling point).
 
 I agree that ABI compatability is only part of the picture, though it
 seems like one of the more important parts. However, the other parts of
 the picture suffer from similar problems.

It's important if you like, say, sarge for compile farms and hoary as
a base system for demo LiveCDs.

 Just as a random example, Ubuntu's fork of debhelper has a hack[1] in
 dh_builddeb to run pkgstriptranslations, an Ubuntu-specific command which
 removes mo files from usr/share/locale. That works ok until Debian adds
 a pkgstriptranslations that does something else. Or until the Debian
 version of debhelper is installed on someone's Ubuntu system and begins
 building packages without calling this tool.

I agree with Joey that mucking with the actual packaging 

Re: Debian concordance

2005-06-19 Thread Michael K. Edwards
On 6/19/05, Steve Langasek [EMAIL PROTECTED] wrote:
 Of 596 lib packages in woody (loosely identified), 325 are still
 present in sarge.  That's after three years of more or less constant
 development.  Where did you come up with this absurd idea that all binary
 packages of any great complexity will become uninstallable after only six
 *months*?

The examples that come to mind immediately are those with substantial
components in both native code and an interpreted or bytecode
language, such as Perl XSUBs and Python extensions.  Last time around,
I seem to recall that Perl was updated shortly after the release, and
you couldn't very well install a binary package containing a Perl
module (especially an XSUB) without also installing the old Perl, by
which time you might as well have a stable chroot.  And what are the
odds of my tweaked Python profiler, built to divert individual files
within the Python library tree, working with sid's stock Python build
come December?

The next example to pop into my head is the Midgard content management
framework, which involves both an Apache module and piles of PHP code.
 The chances of a binary package built on sarge installing (let alone
working) against next year's apache and php packages in sid probably
aren't that high.

The fact is that, while the C dynamic library naming system and the
split into libfoo2 and libfoo-dev allows multiple historical versions
to co-exist, the same cannot be said of all languages and development
frameworks.  Biggish software packages tend to have a few too many
paths and versions compiled into them to survive the first six months
after a release without source code changes and recompilation.  Think,
say, the LyX WYSIWYM front end to LaTeX (which knows the paths to all
sorts of document format converters) or a build of gvim with all of
the scripting language bells and whistles.

Doubtless others who have had more occasion to attempt this sort of
thing than I have will provide more examples if needed.

Cheers,
- Michael



Re: Mozilla Foundation Trademarks

2005-06-19 Thread Michael K. Edwards
On 6/19/05, Eric Dorland [EMAIL PROTECTED] wrote:
 * Michael K. Edwards ([EMAIL PROTECTED]) wrote:
   I wouldn't say "accept it", I would say "acknowledge" the safety zone
  offered unilaterally by the Mozilla Foundation, and as a courtesy to
  them make some effort to stay comfortably within it while continuing
  to ship under the Mozilla names.  Their trademark policy is surely
  less draconian than, say, Red Hat's, and we aren't going around
  purging the RedHat Package Manager from Debian.
 
 I think you're playing word games now. Even if this is a unilateral
 gift we still need to decide if we want it or not.

Of course; and as the maintainer, you are going to be the one making
that call.  I'm just chary of using words like "offer" and "accept"
because they suggest that we are in the contract zone.  I think (I
could be wrong; IANAL) that, in the free software arena, it's actually
better for both sides for the trademark holder to say:

"We aren't exactly licensing all and sundry to manufacture products
under our brand.  Our product line includes both the source code and
the 'official' binaries; we are of the opinion that third parties who
follow these guidelines when building and distributing their own
binaries are merely re-packaging our source code product (under
standards like Prestonettes v. Coty in the US and X, Y, and Z
elsewhere) and using our trademarks descriptively.

"We reserve the right to decide unilaterally that someone either has
created a product of their own (to which our trademark can't be
applied without license) or isn't doing an adequate job of QA in the
course of re-packaging.  But if and when that happens, we're going to
follow the steps outlined here to try to bring them into voluntary
compliance before we demand that they either accept a formal license
and traditional oversight procedures or cease to apply our trademarks
to their modified version of our product."

From what I've read, that's as open a policy as a trademark holder can
offer and still retain control of the trademark in the long run.  I
may be overstating here how far the Mozilla Foundation is willing to
go; but if a modus vivendi can be reached in which the only thing
special about Debian is that the guidelines more or less reflect the
maintainer's actual practice, I think that sits more comfortably with
DFSG #8 than a license per se.

  If the offer from six months ago still stands (which, to my
  recollection and in my non-lawyer view, read like a unilateral safety
  zone rather than a trademark license as such), that's extraordinarily
  accommodating on MoFo's part.  It's a square deal from people with a
  pretty good reputation for square deals.  They deserve better from
  Debian than to have their flagship products obscured by a rename when
  they haven't done anything nasty to anyone yet.
 
 What reputation are you referring to? Not that I necessarily disagree,
 but what are you basing that assessment on?

Their rebranding isn't special for Netscape/AOL and other corporate
partners; they've worked very hard to make it accessible to third
parties without any need for explicit cooperation with them.  They're
going through the agony of relicensing their entire code base under
MPL/LGPL/GPL so that GPL projects can cherry-pick at source code
level.  They're good citizens in W3C standards space even when the
committee decisions go against them (e.g., XUL vs. XForms).  I don't
know the details of their CA certificate handling, but at least they
_have_ a policy and respond constructively to criticism of it.  And
Mitch Kapor and the rest of the MoFo board have a lot of street cred
as individuals.

  The FSF has, at best, completely failed to offer leadership with
  respect to free software and trademarks, as the MySQL case and the Red
  Hat / UnixCD mess have shown.  I think it would be rather gratifying
  if Debian could step in to fill the void.  And it would be kind of
  nice to have a workable modus vivendi to exhibit if and when the Linux
  Mark Institute (or the OpenSSL team or the PHP folks or Red Hat or
  MySQL) comes knocking.
 
 I do have to agree that guidance when it comes to trademark situations
 is sorely lacking. There doesn't seem to be that consistent a
 viewpoint with Debian either unfortunately.

It's a sticky wicket.  Free software enthusiasts (among whom I count
myself) don't like systems that exacerbate second-class-citizenship
among those whose motivations aren't principally commercial.  Nowadays
everyone's a publisher, and the paperwork overhead of copyright has
dropped near to zero (until you try to enforce it); but not everyone
is a marketer, and that's what trademarks are about.  I think it's
possible to have a personal-freedom-compatible trademark policy, but
it's not trivial, and the first few tries are bound to have their
discontents.  Doesn't mean it's not worth trying, though.

Cheers,
- Michael
(IANADD, IANAL)



Re: Debian concordance

2005-06-19 Thread Michael K. Edwards
On 6/19/05, Steve Langasek [EMAIL PROTECTED] wrote:
 On Sun, Jun 19, 2005 at 01:41:47AM -0700, Michael K. Edwards wrote:
  The examples that come to mind immediately are those with substantial
  components in both native code and an interpreted or bytecode
  language, such as Perl XSUBs and Python extensions.
 
 Yes, these are specific examples of packages that will be rendered
 uninstallable in unstable by an ABI change on a particular package.  Your
 claim was that *all* packages of any great complexity would be
 uninstallable after six months.

Perhaps that reflects my idea of what constitutes "any great
complexity" -- a high-level implementation language plus some glue and
accelerators in C/C++, maybe a GUI library or two, maybe a module for
a separately packaged daemon.  I also wrote "without a stack of
'oldlibs'" -- and even packages whose dependencies are completely
captured by ldd on a few binaries are going to be in that boat real
soon now.

 And while perl XSUBs from woody are not installable in sarge, python
 extensions from woody *are* generally installable in sarge (for reasons we
 shouldn't be particularly proud of, but still).

OK; but a program that makes use of them probably doesn't work unless
it was future-proofed in advance by systematically avoiding references
to python without an explicit version.  And anything that uses, say,
woody's libwxbase2.2 is out of luck.  And python-gdbm isn't the
version built against python 2.1 anymore, so packages built against it
will have the wrong dependencies for sarge.  And so on and so forth.

  And what are the odds of my tweaked Python profiler, built to divert
  individual files within the Python library tree, working with sid's stock
  Python build come December?
 
 A pathological case if ever I heard one...

Maybe so; but then all efforts to use the packaging system to update
bits of a language's standard library, without rebuilding (and taking
security update responsibility for) the whole shebang, are equally
pathological.  That would apply to large fractions of CPAN and CTAN
and many emacs modes as well as local customizations like my
experimental profiler.

  The next example to pop into my head is the Midgard content management
  framework, which involves both an Apache module and piles of PHP code.
   The chances of a binary package built on sarge installing (let alone
  working) against next year's apache and php packages in sid probably
  aren't that high.
 
 Which still doesn't prove a claim that *no* packages will be installable
 after six months.

What I said was:

After six months, I suspect that sid will have evolved to where no
binary package of any great complexity from sarge will install on it
without a stack of oldlibs; and backports will be (as usual) a royal
pain.  Better just to run a carefully selected sid snapshot.  Test
your backups frequently, though.  :-)

Would you settle for "few binary packages of any great complexity", etc.?

Cheers,
- Michael



Re: Mozilla Foundation Trademarks

2005-06-18 Thread Michael K. Edwards
On 6/17/05, Eric Dorland [EMAIL PROTECTED] wrote:
 * John Hasler ([EMAIL PROTECTED]) wrote:
  Exactly.  If Debian doesn't need such an arrangement, neither do our users.
  And if our users don't need such an arrangement, our accepting it does not
  put us in a privileged position with respect to them: they have the legal
  right to do everything that we want to do with or without permission.
 
  So let's accept the arrangement and move on.  There is no DFSG problem
  here even if we do accept the notion that the DFSG applies to trademarks.
 
 If we don't need the arragement, why exactly would we accept it
 anyway?

I wouldn't say "accept it", I would say "acknowledge" the safety zone
offered unilaterally by the Mozilla Foundation, and as a courtesy to
them make some effort to stay comfortably within it while continuing
to ship under the Mozilla names.  Their trademark policy is surely
less draconian than, say, Red Hat's, and we aren't going around
purging the RedHat Package Manager from Debian.

If the offer from six months ago still stands (which, to my
recollection and in my non-lawyer view, read like a unilateral safety
zone rather than a trademark license as such), that's extraordinarily
accommodating on MoFo's part.  It's a square deal from people with a
pretty good reputation for square deals.  They deserve better from
Debian than to have their flagship products obscured by a rename when
they haven't done anything nasty to anyone yet.

The FSF has, at best, completely failed to offer leadership with
respect to free software and trademarks, as the MySQL case and the Red
Hat / UnixCD mess have shown.  I think it would be rather gratifying
if Debian could step in to fill the void.  And it would be kind of
nice to have a workable modus vivendi to exhibit if and when the Linux
Mark Institute (or the OpenSSL team or the PHP folks or Red Hat or
MySQL) comes knocking.

Cheers,
- Michael



Re: Debian concordance

2005-06-18 Thread Michael K. Edwards
On 6/18/05, Ian Murdock [EMAIL PROTECTED] wrote:
 I'm more worried about the future; and I still haven't seen anyone
 address my initial question, which is why Ubuntu is tracking sid on core
 things like libc in the first place. The value you add is around
 the edges with stuff like X.org and GNOME 2.10. I'd like to see you do
 that in a manner that promotes compatibility with sarge, just as we're
 doing at Progeny as we move forward in these same areas. But I certainly
 understand why you want to move forward in these areas.. I do as well.

X.org isn't exactly "around the edges" in my book.  And in any case,
if you think of Ubuntu exclusively as a desktop distro, I don't think
you have it straight; hoary works very nicely as a server platform. 
From my perspective, Ubuntu adds value mostly on process fronts:
predictable release cycles, a clear distinction between "supported"
and "best effort" (and a systematic team approach to that support),
and a commercial support model for people who want that sort of thing.
 Each of these has its down side as well, and Debian works differently
for mostly good reasons; but they have inevitable engineering
consequences, and not just around the edges either.

 The core is a completely different issue. Where at the core do you add
 value? Ok, perhaps you can get a bug fix in here, better support for
 an architecture here. But are those things worth breaking compatibility?
 If your changes are important enough, they should be in Debian too.
 If they aren't, they're not as important as compatibility.
 "Debian packages just work" has been a truism for *years*, and it's been
 one of our key technical selling points. I don't want to see that fall
 by the wayside. This thread is a perfect example of what will happen
 if we don't worry about this stuff *now*. I've seen this movie before.

In case you hadn't gleaned it from Matt's timeline, most of Ubuntu's
work on glibc late in the hoary cycle did make it into sarge (or, if
you like, Ubuntu-the-company sponsored some excellent work by some of
Debian's glibc team, which made it under the wire for both releases),
and I for one am very glad that it did.  I really didn't want to be
stuck with Ubuntu #7897 (Debian #300943) for the next three years.

In any case, Ubuntu packages aren't Debian packages any more than
Mandrake packages are Red Hat packages.  If you want binary
compatibility, you need to build a system whose engineering outcome is
binary compatibility, which will look a lot more like the LSB than any
one distro.  The fact that most packages compiled on sid three months
ago will run on both sarge and hoary is no more significant than the
fact that most packages compiled on Red Hat 6.0 ran on both Red Hat
7.0 and Mandrake 7.0.

 If there's ever been or ever will be a perfect time for Debian and
 Ubuntu to sync up, it's now. Sarge is out, and there is significant
 momentum within the project behind the idea of fixing the release cycle
 problem, so it seems likely that etch will be out in some predictable
 and reasonable amount of time. Why not take advantage of that? Better
 yet, why not help make it happen? Why not, for example, work with
 Debian on putting together a plan for migrating to GCC 4 rather than
 just plowing ahead on your own? Going it alone is sure to cause
 compatibility problems that make the current ones pale by comparison.

Debian and Ubuntu are syncing up, with the maintainers' eyes on the
future instead of the past.  I think the evidence is overwhelming that
Ubuntu is not "going it alone" or "plowing ahead on their own".  To
take your example: if you hadn't noticed, Matthias Klose is the only
person who has ever uploaded GCC 4.0 to either Debian or Ubuntu.  I
suspect that he coordinates with appropriate teams on both sides.  The
glibc teams obviously coordinate very closely, and the shlibver
problem smells more like the law of unintended consequences than
anything else; I think it very improbable that it will recur in the
next cycle.  The X.org packagers on both sides are doing their best --
it's a very challenging situation -- and it looks like they will sync
up where it matters, on the modular tree.

Ubuntu's collective patience and tolerance has been quite remarkable
-- as has Debian's, with few exceptions.  It doesn't make much sense
to me, looking from the outside, to ask Ubuntu to freeze the breezy
ABIs at the place where the sarge roulette wheel stopped.  How about
if everybody puts the sarge/hoary baggage behind them and works on
etch/breezy?

Cheers,
- Michael



Re: Ongoing Firefox (and Thunderbird) Trademark problems

2005-06-18 Thread Michael K. Edwards
On 6/18/05, Eric Dorland [EMAIL PROTECTED] wrote:
 You're skipping the crucial point here. Under the publicly available
 licenses/policies, we *cannot* call it Firefox. The MoFo is offering
 us an agreement that allows us to use the mark. I think agreeing to
 this is against the spirit of DFSG #8, and sets a bad precedent
 (speaking of precedents, have we ever made such an agreement before to
 use a trademark?).

I don't think so.  Basically, as far as I know Debian hasn't coped yet
with any of the foreseeable trademark problems, from Linux to MySQL. 
But as I think I have mentioned, the approach under discussion for the
Mozilla trademarks is about the most free-software-friendly tactic I
can imagine that still more or less preserves the legal forms.  (Other
people may have better imaginations than mine.  :-)

With that said, I definitely think that an actual IP lawyer with a
specialty in trademark should review the memorandum of trademark
safety zone in its final form in order to keep the risk of unintended
consequences down to a low roar.  If you are not unalterably opposed
to a "trust but verify" stance towards MoFo and DFSG #8, perhaps you
could ask Gervase what exactly is on offer at this point, and run it
past SPI's counsel before making any commitments.

Cheers,
- Michael



Re: Debian concordance

2005-06-17 Thread Michael K. Edwards
On 6/17/05, Ian Murdock [EMAIL PROTECTED] wrote:
 On 6/16/05, Michael K. Edwards [EMAIL PROTECTED] wrote:
  Speaking as someone with no Ubuntu affiliation (and IANADD either), I
  think that statement is based on a somewhat shallow analysis of how
  glibc is handled. [...]
 
 I don't doubt there were changes, even some worthwhile changes,
 between the version of libc in sarge and the versions in
 hoary/breezy. My question is: Are the changes worth breaking
 compatibility? It's a cost/benefit thing. And if they're
 important enough, why aren't they going into Debian directly?

Well, if anyone broke compatibility in the sarge/hoary timeframe, it
wasn't Ubuntu.  The sched_[gs]etaffinity change in sarge is the only
one that broke an ABI and bumped shlib_dep_ver.  The Ubuntu folks
quite consciously refrained from rolling out 2.3.[45] in hoary;
demanding that they merge 2.3.2.ds1-22 and nothing else into breezy,
and therefore further postpone upstream's improved ppc64 support and
compatibility with other toolchain updates, seems like a lot to ask.

 I understand why Ubuntu was moving ahead of Debian before, since
 Debian was so far behind. But now that sarge is out, don't
 you think it would be worthwhile to give Debian a chance to fix its
 release cycle problems and, better yet, to try to help fix them,
 rather than simply saying Debian is too slow/unpredictable for us?

I really don't think they're saying that.  Goto-san and the rest of
the Debian glibc team are appropriately cautious about moving 2.3.5
from experimental to unstable at the same time that many other things
are in flux.  But if Ubuntu is going to move the toolchain forward in
breezy at all, they just can't wait.  If anything, this will help
smooth the transition in Debian and speed up the etch cycle
accordingly.  There's no particular reason why etch can't ship in six
or nine months, with better application compatibility between etch and
breezy than there is now between sarge and hoary.

 Again, as I've said before, it's *sarge* the rest of the world thinks
 of as Debian, not sid. So, "we're getting our patches into
 sid" or "we're tracking sid" or whatever doesn't really help anything.

I basically agree with you there; what would help, in my view, is a
sort of Debian/Ubuntu mini-LSB, ideally with a white box testing
framework that helps validate that a .deb is installable and likely to
function properly on some combination of sarge, hoary, breezy, and
etch.  If, that is, ISV support is of interest, and you don't want to
go the LCC multi-distro golden binaries route.
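
A crude sketch of the kind of check I mean -- the chroot paths and
package name here are made up, and this tests only the libc6 versioned
dependency that has been biting people, not full installability:

    # Does this .deb's versioned libc6 dependency hold in a given chroot?
    DEB=mypackage_1.0-1_i386.deb
    want=$(dpkg-deb -f "$DEB" Depends | tr ',' '\n' \
           | sed -n 's/.*libc6 *(>= *\([^)]*\)).*/\1/p')
    for root in /srv/chroots/sarge /srv/chroots/hoary; do
        have=$(chroot "$root" dpkg-query -W -f='${Version}' libc6)
        if dpkg --compare-versions "$have" ge "$want"; then
            echo "$root: libc6 $have satisfies >= $want"
        else
            echo "$root: libc6 $have is older than $want -- not installable"
        fi
    done

A real framework would of course grind through every dependency and
actually attempt the install, but even this much would catch the
failure mode at hand.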

Cheers,
- Michael



Re: Mozilla Foundation Trademarks

2005-06-17 Thread Michael K. Edwards
On 6/17/05, Gervase Markham [EMAIL PROTECTED] wrote:
 John Hasler wrote:
  Alexander Sack writes:
 
 In general the part of the MoFo brand we are talking about is the product
 name (e.g. firefox, thunderbird, sunbird). From what I can recall now, it
 is used in the help menu, the about box, the package-name and the window
 title bar.
 
  I'm not convinced that any of these constitute trademark infringement.
 
 Then I'm slightly confused as to your concept of trademark infringement.
 If I label the car I've built as a Ford (even if it uses a lot of Ford
 parts), it infringes Ford's trademark.
 
 I haven't heard anyone else disputing that to ship a web browser called
 Firefox, Debian needs an arrangement with the owner of the trademark
 Firefox as applied to web browsers.

Debian doesn't need such an arrangement, as I argued in a previous
thread six months ago; there's the Coty v. Prestonettes standard and
all that.  But IMHO it would be advisable for both sides if such an
arrangement were reached.  I prefer the not-quite-a-trademark-license
arrangement discussed in the thread ending at
http://lists.debian.org/debian-legal/2005/01/msg00795.html .

But then, I tend to take the "square deal / keep people's options
open when that won't result in a tragedy of the commons" approach to
freedom rather than the "natural right" approach.  So I'm
pro-GPL-as-construed-under-the-actual-law,
pro-trademark-when-used-to-discourage-misrepresentation, and
pro-real-world-legal-system generally.  This may put me in a minority
among debian-legal regulars.  :-)

Cheers,
- Michael
(IANAL, IANADD)



Re: Question regarding 'offensive' material

2005-06-17 Thread Michael K. Edwards
On 6/17/05, Andrew Suffield [EMAIL PROTECTED] wrote:
 I think you'll find that porn is the majority industry on the internet.

The Internet is, to zeroth order, useful only for the same four things
that interactive TV is well suited for: video games, gambling,
pornography, and pornographic gambling video games.  Its first-order
uses are cracker joyriding, make-money fast schemes, and hot chat
leading to occasional sexual assignations (oddly parallel to the
zeroth-order uses), plus ripping off copyrighted media and movement of
large military science data sets.  Usages that you wouldn't be ashamed
to admit to your mother are second-order effects at best.  These
proportions are essentially unchanged since the opening of the
Internet to general US undergraduate populations in the mid-80's.  Ask
anyone who's worked at an ISP or in a university IT department.

This does not, of course, mean that I approve of any software on my
systems downloading random _anything_ from the internet without my
very explicit approval.  It astonishes me that anyone opposes the
instant removal of something so fundamentally stupid to include in the
Debian operating system.

Cheers,
- Michael



Re: Debian concordance

2005-06-16 Thread Michael K. Edwards
On 6/16/05, Daniel Stone [EMAIL PROTECTED] wrote:
 On Thu, Jun 16, 2005 at 12:54:08PM -0500, Ian Murdock wrote:
  Daniel Stone wrote:
   libc6 added interfaces between 2.3.2 and 2.3.5 and made several other
   major changes, so all packages built with .5 depend on .5 or above,
   in case you use one of the new interfaces.
  
   A binary built with 2.3.2 can run with .5, but a binary built with .5
   can't necessarily run with .2.
 
  Then why not build your packages against 2.3.2? That would ensure
  maximum compatibility with Debian proper (which to most of the
  world is sarge, *not* sid, so don't answer that you're almost the
  same as sid).
 
 Hoary (like sarge) is built against 2.3.2.
 
 Breezy (like current sid) is built against 2.3.5.

Unfortunately, it's worse than this.  A last-minute ABI change in
sarge (backporting some glibc 2.3.4 symbols to 2.3.2) has the effect
that any package whose autogenerated shlibdeps includes libc6, when
built on sarge, isn't installable on hoary.  Any package that doesn't
use the affected APIs can be hacked to work around this (by lowering
the versioned dependency on libc6), but it's quite an inconvenience.
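
For what it's worth, the hack looks something like this -- the
filenames and version strings are illustrative, and it's only safe if
the package doesn't actually reference any of the backported
GLIBC_2.3.4 symbols:

    # Unpack a sarge-built .deb, relax its versioned libc6 dependency,
    # and repack it so hoary's dpkg will accept it.
    dpkg-deb -x mypackage_1.0-1_i386.deb tmp/
    dpkg-deb -e mypackage_1.0-1_i386.deb tmp/DEBIAN
    sed -i 's/libc6 (>= 2\.3\.2\.ds1-21)/libc6 (>= 2.3.2)/' tmp/DEBIAN/control
    dpkg-deb -b tmp mypackage-relaxed_1.0-1_i386.deb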

In general, it's not trivial to set up a build environment that
reliably produces binary packages that are installable on both sarge
and hoary.  (I happen to have such an environment at work, based on a
part-careful-part-lucky snapshot of sid, but it's not something I
would care to support for more than a few packages.)  It would be
awfully nice if Debian and Ubuntu could coordinate on a 90% solution;
I don't necessarily expect to be able easily to build python packages
that will run on both (given Ubuntu's early move to 2.4) but how about
basic C, C++, and perl ABI compatibility?

(Yes, I know this is what the LSB is for, but Debian and Ubuntu are so
closely related that the 90% solution probably isn't that hard.  An
apt repository containing a few packages with
lowest-common-denominator ABIs, plus a debootstrap script for use with
pbuilder, would probably do it.)
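
Mechanically it's nothing exotic; roughly this, where the LCD
repository URL and package name are hypothetical:

    # Bootstrap a sarge chroot with the lowest-common-denominator
    # packages layered in, then use it as the pbuilder base for every
    # multi-distro build.
    pbuilder --create --distribution sarge \
        --basetgz /var/cache/pbuilder/lcd-base.tgz \
        --othermirror 'deb http://example.org/debian-ubuntu-lcd ./'
    pbuilder --build --basetgz /var/cache/pbuilder/lcd-base.tgz \
        mypackage_1.0-1.dsc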

Cheers,
- Michael



Re: Debian concordance

2005-06-16 Thread Michael K. Edwards
On 6/16/05, Ian Murdock [EMAIL PROTECTED] wrote:
 glibc. Shipping X.org and GNOME 2.10 adds value, since sarge doesn't
 ship them. Shipping glibc 2.3.5 vs. glibc 2.3.2 just adds
 incompatibilities.

Speaking as someone with no Ubuntu affiliation (and IANADD either), I
think that statement is based on a somewhat shallow analysis of how
glibc is handled.  Jeff Bailey and Goto Masanori, and a couple of
other glibc packagers, are probably the people who can make genuinely
informed comments; but having watched the final stages of glibc
convergence for both hoary and sarge, I can say that they are
significantly different under the hood despite both being labeled
2.3.2.

On the Ubuntu side, divergences from the last Debian glibc drop that
was merged into hoary (2.3.2.ds1-20) include subtle but important
fixes to NPTL/TLS (with particular implications for heavily
multithreaded apps on Sun's JRE), a fix for a sigsetjmp() issue that
hits gcc 4.0, substantial rework of UTF-8 locale handling, and Tollef
Fog Heen's multiarch support.  The first two of these appear to have
been addressed similarly in Debian's 2.3.2.ds1-21, but there's also a
change to sched_[gs]etaffinity that resulted in GLIBC_2.3.4 versioned
symbols and thus bumped shlib_dep_ver up to 2.3.2.ds1-21.  That's why
many (most?) packages compiled on sarge won't install on hoary.

All of this is on top of the extensive backports to 2.3.2 of fragments
of later glibc snapshots that were already in Debian's 2.3.2.ds1-20. 
Goto-san has expressed the intention of moving sid to glibc 2.3.5 ASAP
(his first upload of 2.3.4 to experimental was in March), and Ubuntu
has synced up to his latest upload and moved forward with build fixes.
If it is Ubuntu's desire not to diverge too much from sid in the
breezy timeframe, they really did have to merge from Debian
experimental immediately after the hoary release.

It doesn't seem reasonable to me to ask Ubuntu to continue shipping a
fork labeled 2.3.2-whatever for the duration of stable=sarge.  If it
were possible for breezy to bump shlib_dep_ver to 2.3.2.ds1-21 and
hold it there, that would be great; but it looks to me like Jeff has
bumped it to 2.3.4-1 for good and sufficient reason.  At least, with
some luck, packages compiled on sarge will run on breezy (but not vice
versa without manual shlibver mangling), just as packages compiled on
hoary will run on sarge (ditto).

Cheers,
- Michael



Re: Debian concordance

2005-06-16 Thread Michael K. Edwards
On 6/16/05, Matthias Klose [EMAIL PROTECTED] wrote:
 Python is basic for Ubuntu. Given the long freeze of sarge, Debian had
 to support 2.1 (jython), 2.2 (for zope 2.6) and 2.3 for sarge. I'm
 happy we had the possibility to ship 2.4.1 with sarge. Maybe not
 with the best packaging, but it's included in the release. We will
 have better packaging for etch/breezy.

No criticism intended of Ubuntu's work on python packaging, which is
awesome.  I just meant to point out that it is pretty much inevitable
that python packages need to be compiled separately for sarge and
hoary if they are to be used with the default runtime.

 Please stop spreading fud about C++ and perl ABI compatibility. There
 is no incompatibility! Both sarge and hoary shipped with compatible
 Perl and C++ ABIs.  Basic ABI/API updates are done at the start of a new
 release cycle, there is no difference how that is done in Debian and
 Ubuntu.  There is no point comparing the state of development and
 unstable versions.

I'm sorry, I really don't mean to be spreading FUD.  I meant those as
criteria for a lowest-common-denominator build environment, not as
statements about what is or isn't compatible between sarge and hoary. 
There is of course a great deal more involved in practical ABI
compatibility than compiler / standard library divergences; addressing
the libc6 shlibver issue may resolve the Perl XSUB problems (I haven't
tried it yet), but I haven't even checked whether there are other
last-minute divergences in things like libxml2, glib, libnspr4,
libssl, etc.

In general, I think it's normal for it to take some work to set up a
build environment for binary packages that are supposed to install and
function on multiple distros, no matter how closely related the
distros are.  It would be a really wonderful thing if Debian and
Ubuntu (and any other Debian derivatives willing to pitch in) could do
that work instead of sticking ISVs (commercial or otherwise) with it.

Cheers,
- Michael



Re: Debian concordance

2005-06-16 Thread Michael K. Edwards
On 6/16/05, Steve Langasek [EMAIL PROTECTED] wrote:
 On Thu, Jun 16, 2005 at 04:03:32PM -0700, Michael K. Edwards wrote:
  On the Ubuntu side, divergences from the last Debian glibc drop that
  was merged into hoary (2.3.2.ds1-20) include subtle but important
  fixes to NPTL/TLS (with particular implications for heavily
  multithreaded apps on Sun's JRE), a fix for a sigsetjmp() issue that
  hits gcc 4.0, substantial rework of UTF-8 locale handling and Tollef
  Fog Heen's multiarch support.  The first two of these appear to have
  been addressed similarly in Debian's 2.3.2.ds1-21,
 
 So if they were merged, why is it relevant that they were merged from hoary
 to sarge instead of the other way around?

It's not particularly; they just happened to be things I was aware of
on the Ubuntu side, and in verifying the corresponding changes in -21
I was relieved to see that similar (not, I think, identical) patches
went into sarge.  Looks like Ubuntu has since adopted Debian's fix for
at least the TLS fix (not sure about the others, they may already be
dealt with upstream in 2.3.[45]).

  but there's also a change to sched_[gs]etaffinity that resulted in
  GLIBC_2.3.4 versioned symbols and thus bumped shlib_dep_ver up to
  2.3.2.ds1-21.
 
 Indeed; and the shlibs model of identifying dependencies unfortunately is
 not very fine-grained, with the result that it's a lose-lose choice between
 making people manually set shlibs when building on sarge if they want hoary
 compatibility, or shipping glibc in sarge with an incorrectly relaxed shlibs
 that could let you install packages on hoary that won't work there.

Right.  As I said, it's not surprising, and not cause for criticism,
that there's a need for a specialized environment if you want to build
multi-distro binary packages.  It's a bit like building LSB RPMs,
except it's less painful because of the shared heritage and the
reduced need to lag drastically when there are fewer distros involved.

 So, maybe it's time to revisit the weaknesses of the shlibs system,
 particularly as they apply to glibc.  Scott James Remnant had done some
 poking in this area about a year ago, which involved tracking when
 individual symbols were added to a package -- apparently, many packages
 would actually be happy with glibc 2.1 or so (at least on i386), but we have
 no way to track this...

It would be pretty cool to actually extract the external symbols
referenced in a package's ELF objects and deduce the minimum happy
library versions accordingly.  Alternately, one could construct a set
of chroots with various historical library mixes and grind through the
ELF objects with ldd.  That's really more along the lines of automated
white box testing, and might fit better into the sort of grandiose
post-build QA-role-signature chain I've proposed a couple times than
it does in the build process per se.
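
As a first cut at the symbol-extraction idea (GNU binutils assumed;
the binary path is just an example):

    # List the versioned glibc symbols a binary actually imports, then
    # report the newest GLIBC_x.y among them -- a defensible minimum
    # for its libc6 dependency, far tighter than the shlibs default.
    objdump -T /usr/bin/somebinary \
        | awk '/\*UND\*/ && match($0, /GLIBC_[0-9.]+/) {
                   print substr($0, RSTART, RLENGTH)
               }' \
        | sort -uV | tail -1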

This holds especially when it comes to ISV certifications, to the
extent that that's of interest to Debian and/or derivatives.  (Oracle
on Ubuntu, anyone?)  ISV build processes tend to be rather stinky, and
post-build touch-ups to metadata are more practical than retrofits to
the build.  (There's likely to be an alien / java-package style stage
in the process anyway.)

Cheers,
- Michael



Re: Ports helping in World Domination? (was: Re: Canonical and Debian)

2005-06-07 Thread Michael K. Edwards
On 6/6/05, Christian Perrier [EMAIL PROTECTED] wrote:
 Quoting Julien BLACHE ([EMAIL PROTECTED]):
 
  Eh, to achieve Total World Domination, we need to support every
  architecture out there. Looks like a step in the wrong direction ;)
 
 Well, frankly speaking, Julien, last time I checked, most so-called
 third-world users just don't give a shit about non-i386
 architectures... :-). They just want a functional operating system for
 the only architecture which is really available to them, no matter
 whether we like it or not.

I'm running Linux, and soon Debian, on consumer-market embedded
wireless routers and file servers based on mipsel, arm, and powerpc
(my personal favorite).  They aren't graphical desktops, but their low
power consumption and low price point make them excellent candidates
for developing-country infrastructure gear -- not to mention learning
environments for OS developers.

Total World Domination means more than the desktop.  Vanilla Debian
isn't that well suited to be the actual embedded release, and of
course buildroot works fine as a cross-compilation environment, but it
can be very handy to test in a chroot.  And when you're in a hurry,
sometimes it's easier to use a buildroot environment hosted on the
same arch than it is to fix a configure script that breaks on
cross-compiles.
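
To make that concrete: when a configure script insists on running a
test program (which can't work in a cross build), you can often feed
it the answer instead of fixing the script.  The cache variable below
is one real example; the names vary from package to package:

    # Pre-seed the autoconf cache so configure skips a run-time test
    # that would otherwise abort the cross-compile.
    ./configure --build=i386-linux --host=mipsel-linux \
        ac_cv_func_setvbuf_reversed=no

Building natively in a chroot on the target arch sidesteps the whole
question, because the test program simply runs.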

Cheers,
- Michael



Re: Canonical and Debian

2005-06-06 Thread Michael K. Edwards
On 6/5/05, Tollef Fog Heen [EMAIL PROTECTED] wrote:
 * Michael K. Edwards
 
 | So either Debian collectively is
 | willing to labor to maintain a high standard of portability and
 | stability, or we need to focus on a few arches and ignore
 | bugs-in-principle that don't happen to break on those systems.
 
 At the same time, if we're releasing i386, powerpc and amd64 we have a
 fair bit of spread: We have both little- and big-endian, we have 32
 bit and 64 bit and we have signed and unsigned char and we have
 linuxthreads and pure NPTL threading libraries.
 
 This doesn't uncover all possible build and runtime problems, but it's
 a good start to having portable code.

In 1990, this was a good start (substituting vendor variants of thread
libraries for linux variants).  In 2005, a good start is to have a
variety of processor alignment constraints, a variety of OCaml and gcj
native-code back ends (and absences thereof), a variety of space/speed
trade-offs in the gcc -O2 defaults, and a variety of kernel
micro-versions.  Granted that that's a higher quality standard than
could seriously have been proposed in 1990, but if you ever paid the
FSF $5K for a coherent GNU tools build (as I did back then), you know
how much the state of the art has improved.

When I first packaged an OCaml library with a native-code component, I
struggled with build issues on unfamiliar Linux ports.  That's a
_good_ thing.  It meant that I had to learn the difference between
optimized and optimizing OCaml compilers, and to fix broken upstream
build scripts that didn't make the distinction correctly.  Next year,
when open-source MMIX cores have the best bang-for-the-buck in
embedded space :-), we'll be glad that Debian's 11 architectures cover
quite a bit of autoconf parameter space.

Cheers,
- Michael



Re: Canonical and Debian

2005-06-05 Thread Michael K. Edwards
On 6/5/05, Steve Langasek [EMAIL PROTECTED] wrote:
 You can either step up and make sure the
 architectures you care about are in good shape for etch, or you can be a
 whiny brat expecting everything to be handed to you on a silver platter and
 accusing people of being members of a Canonical-controlled cabal when they
 do you the courtesy of informing you about their personal priorities for
 etch.  Your choice.

I am no fan of the Vancouver proposal, but Steve's got a point. 
Ensuring that packages build and run properly on a wide variety of
architectures is _work_.  I happen to think that it's worthwhile work,
and that it's the main factor that sets Debian apart from all the rest
and directly contributes to the superior quality of Debian relative to
other distros.  But if it isn't spread across a large number of
people, it's a crushing burden, and no one has a right to ask the
release team to shoulder it.

The mirror network is not the big issue, as I see it; I care more
about the question of whether the build procedures have adequate
conditional logic to handle the presence/absence of a native-code
compiler for language X, the existence or lack of an assembly-language
implementation of core routine Y, etc.  As I have argued previously,
the diversity of architectures is the best available proxy for the
evolution of platforms over time, and the packages which have a hard
time building on all arches are precisely those which it's a struggle
to maintain for the duration of a release cycle.
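
The kind of conditional logic I mean is often as simple as this (a
shell sketch with made-up target names, OCaml being the example I know
best):

    # Build native code where a native-code compiler exists for this
    # arch, and fall back to bytecode everywhere else.
    if command -v ocamlopt >/dev/null 2>&1; then
        make native-targets
    else
        make byte-targets
    fi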

Steve's also right that buildds that have non-zero queue depths for
non-trivial lengths of time tend to expose fragility in build and
run-time dependencies, and so they get stuck in ugly ways that need
release team (or porter) attention.  So either Debian collectively is
willing to labor to maintain a high standard of portability and
stability, or we need to focus on a few arches and ignore
bugs-in-principle that don't happen to break on those systems.  I know
which one I'd like to see, but I have to admit that I have done a
lousy job of following through so far on things that I could help with
myself.  But at least I don't go around blaming the people who are
actually doing the work.  :-/

Cheers,
- Michael



Re: Is Ubuntu a debian derivative or is it a fork?

2005-06-01 Thread Michael K. Edwards
On 5/31/05, Stephen Birch [EMAIL PROTECTED] wrote:
 Okay - you have my attention.  If you are right etch will be as
 beautiful as Hoary within a few weeks of the sarge release.

I think it's been so long since Debian started having pre-sarge
freeze-spasms that we've all forgotten what it's like when the
floodgates open post-release.  (I, for one, wasn't around for the last
one, but I've done a certain amount of reading.)  A few weeks is
probably on the optimistic side.  I don't know whether Ubuntu has Breezy
in any kind of usable state yet, and I don't think they've let in
anything like the backlog of updates that is awaiting etch.

Once sarge does release, the Ubuntu folks are going to be right there
in the trenches with everyone else dealing with GCC 4.0, the death of
devfs, and the demands for a graphical installer.  If anything they'll
be pulling Debian forward with Linux 2.6.11.bignum, just as they are
with Python 2.4 and some of the remaining java-in-main issues.  It is
not in their interest to let their fork (or spoon or other implement
of destruction) go off into the rough.

 Oh my gosh, I hope and pray you are right.
 
 We are all watching ...

I'm not quite sure this is sarcasm, although I have my suspicions. 
But I happened to be watching on #ubuntu-devel as the last few hoary
RCs got knocked off.  These guys (and gals) are pros, and they're
excited about what they're doing, and they aren't any less committed
to Debian than they were before no-name-yet.  They're used to dealing
patiently with bull-headed upstreams when wearing their DD hats, so
they can probably take Debian-Ubuntu frictions in stride.

Some things about the relationship are going to be hard, though.  I
was very distressed to find that a last-minute ABI change in sarge's
glibc will cause any package built on sarge that gets a versioned
glibc dependency to be uninstallable on hoary.  I really had hoped to
run the same mysql-server packages on both, and I'm not quite sure
what I'm going to do for a distro-neutral C++ build environment.  :-(

Cheers,
- Michael



Re: Is Ubuntu a debian derivative or is it a fork?

2005-05-31 Thread Michael K. Edwards
On 5/31/05, Andrew Suffield [EMAIL PROTECTED] wrote:
  Also, when Ubuntu makes improvements to packages how do those
  improvements flow back to Debian?
 
 They generally don't. Ubuntu considers it more effective to spend
 their time on PR to make people think they are giving stuff back, than
 to actually do it; it generates more 'goodwill', since most people
 won't bother to check. This thread will probably become a good
 example, most of the others did.

If you've been around d-d awhile, you know not to take Andrew's
randomly directed flame cannon very seriously.

I have no relationship to Debian and Ubuntu other than satisfied
industrial-scale user (and kibitzer), but I think it's inane to kvetch
about Ubuntu's pattern of giving stuff back over the last year. 
Software process is hard work, especially when you add an extra layer
of judgment calls between upstream and release engineering.

What is the point of, say, harassing the glibc maintainer to take a
patch against the version in sid, when he's planning on jumping to
2.3.4 as soon as sarge releases?  If you want evidence on which to
judge the sincerity of Ubuntu's giving back, watch what happens
post-sarge.  I'm optimistic, largely because the Ubuntu folks seem to
have thick enough skins to shrug off attitudes like Andrew's.

Cheers,
- Michael



Re:

2005-05-20 Thread Michael K. Edwards
On 5/19/05, Thomas Bushnell BSG [EMAIL PROTECTED] wrote:
[snip arguments that might have been worthy of rebuttal on
debian-legal five months ago]

I'm not trying to be snotty about this, but if you want to engage in
the debate about the proper legal framework in which to understand the
GPL, I think you would do best to at least dip into recent
debian-legal archives and also look at some of the precedents cited
back in December and January.  At this point, there seem to be quite a
few people who agree that the FSF's stance (copyright-based license)
and the far-from-novel one that you advance (unilateral license /
donee beneficiaries) are untenable in the jurisdictions with whose
law they are to some degree familiar.

 And finally, for Debian's purposes, it's even more irrelevant.  Our
 standing policy is that if there is doubt about the force or intention
 of a license, we err on the side of simply doing what the licensor
 demands.

Which is great, until you find yourself estopped from arguing
otherwise in a courtroom.  It matters both what you do and why you say
you do it.

Cheers,
- Michael



Re:

2005-05-20 Thread Michael K. Edwards
On 5/19/05, Thomas Bushnell BSG [EMAIL PROTECTED] wrote:
 Michael K. Edwards [EMAIL PROTECTED] writes:
 
  At this point, there seem to be quite a
  few people who agree that the FSF's stance (copyright-based license)
  and the far-from-novel one that you advance (unilateral license /
  donee beneficiaries) are untenable in the jurisdictions with whose
  law they are to some degree familiar.
 
 You are choosing to post on three different forums.  Having made that
 choice, it is your obligation to make your comments relevant to them
 all; you cannot post on debian-devel, and then insist that your
 interlocutors there read a different list.

Oh, nuts.  I didn't realize this thread was still copied to hell and
gone.  I'll try to summarize briefly, and would the next person please
cut d-d and waste-public off if appropriate?

 Please don't put words into my mouth.  The quotes you give are not my
 words; I have not spoken of a unilateral license / donee
  beneficiaries, though your words suggest I have.

Sorry about that; I skipped a step or two.  Your unilateral grant of
permission is not in fact a recognized mechanism under law for the
conveyance of a non-exclusive copyright license.  In common law
systems, the mechanism that does exist is called a contract.  :-) 
This horse has been beaten to a pulp on debian-legal, and I think even
my esteemed fencing partner Raul is approaching convinced; if you want
one case law citation on the topic, try Effects Associates v. Cohen
from the Ninth Circuit.  Apparently quite firmly established in
various civil law systems as well.  (IANALIAJ.)

There is such a thing as a unilateral contract, also sometimes called
a defective contract, which can't be held to its full ostensible
extent against the drafter by donee beneficiaries for lack of
evidence of acceptance and/or return consideration.  That doesn't
apply in the case of the GPL; acceptance through conduct is quite
sufficient, and the various obligations accepted by the licensee
(especially the offer of source code) are in fact return
consideration, not a limitation on the scope of license.

Specht v. Netscape is sometimes cited as an obstacle to finding
acceptance of a browse-wrap license.  But it doesn't even contain
the word "copyright", and boils down to this analogy (quoted from the
opinion): "From the user's vantage point, SmartDownload could be
analogized to a free neighborhood newspaper, readily obtained from a
sidewalk box or supermarket counter without any exchange with a seller
or vender."  As I wrote in the thread I cited, picking up a free
newspaper doesn't grant you copyright license on its contents.

A better parallel may be found in Jacob Maxwell v. Veeck, in which the
court used evidence of an oral exclusive license agreement to construe
a non-exclusive copyright license and then applied contract law to
establish whether and when it was terminated.  Oral evidence of intent
to offer an exclusive license -- something that by law must be in
writing -- is hardly less valid an offer of contract than a document
whose drafter professes to believe ought to be interpreted under some
other legal theory.  As for consideration, see Fosson v. Palace
Waterland and the cases involving the GPL itself that have been
discussed ad nauseam on d-l.

These and other equines are nearing a blenderized condition on d-l,
whether or not a consensus comes out of it.  I am omitting the rest of
the mountain of case law that has been cited there; persons who wish
pointers to specific topics within that discussion are welcome to ask
there, but let's spare d-d.

 You have not explained here (on debian-devel, that is) at all why we
 should disgregard the actual success of the license in convincing
 reluctant people to comply with its provisions.  Indeed, to date there
 is nobody who is willing to risk a lawsuit due to noncompliance with
 the GPL when the FSF's compliance folks have come after them.  This in
 itself suggests very strongly that those who have money to lose on the
 question think the GPL is binding
 
 And you haven't answered my question.  Please explain how the
 difference in legal theory here affects the bindingness of the GPL on
 those who choose to distribute GPLd software.

There's no question in my mind that the GPL is binding, i.e., a
valid offer of contract that licenses various rights to a given work. 
There are some grounds to worry that Section 6 is hard to implement in
at least some jurisdictions, for reasons having to do with the
doctrine of agency and how strong a form of agreement is needed in
order to construe agency to sub-license; but that's unlikely ever to
be litigated, and if it is we can hope that an appeals court can find
a way around it.

There's also no question that the GPL is enforceable (and has been
successfully enforced by Harald Welte in Deutschland) using a breach
of contract theory against people who don't release source code to
GPL works when they modify and distribute them.  But applying contract
law standards of construction against the offeror, notice and cure of
breach, grounds for preliminary injunction, and all that -- together
with a correct reading of phrases like "derivative work" under
copyright law and "mere aggregation" -- results in a GPL whose
utility as a club against the Wicked Linker is greatly weakened and
possibly (IANALIAJ) zero.  Which is, in my personal view, as it should
be.

Cheers,
- Michael

Re:

2005-05-20 Thread Michael K. Edwards
On 5/20/05, Thomas Bushnell BSG [EMAIL PROTECTED] wrote:
 Michael K. Edwards [EMAIL PROTECTED] writes:
 
  Sorry about that; I skipped a step or two.  Your unilateral grant of
  permission is not in fact a recognized mechanism under law for the
  conveyance of a non-exclusive copyright license.
 
 I'm sorry, can you point me to the statute here?  The US statute
 simply prohibits copying without permission.  It says nothing about
 how permission is granted.  Can you point me to a court case which
 said that grant of permission is not contractual, and therefore no
 permission has been granted?

You might read the Jacob Maxwell v. Veeck case, in which the defendant
argued exactly that (because by law an exclusive license must be a
written contract).  The court agreed that federal law didn't permit
the finding of an exclusive license under the circumstances, discussed
exactly what a non-exclusive license is, and proceeded to construe and
interpret one under the applicable state contract law.  Honest to
Murgatroyd, "copyright (and patent, etc.) licenses are [terms in]
contracts" is a principle that long predates modern copyright statutes
and you're not going to find any counter-examples.

 We aren't concerned with a browsewrap or shrinkwrap license; all the
 cases you point to are about that.  Those are about licenses which
 attempt to take away rights that a person would have had if they had
 never agreed to the license.  Since the GPL only gives you new rights,
 never taking away any, it's not clear how objections to those kinds of
 licenses would matter.

That argument simply doesn't hold water.  Covenants to offer source
code in this and such a way are not scope of license, they're return
consideration.  The GPL is a true offer of bilateral contract.  And
yes, I've read lots of unfounded assertions from the FSF and others on
the subject, and this and other arguments have been made with a
reasonable degree of skill on debian-legal, and I see no reason to
repeat them on d-d.

  There's also no question that the GPL is enforceable (and has been
  successfully enforced by Harald Welte in Deutschland) using a breach
  of contract theory against people who don't release source code to
  GPL works when they modify and distribute them.  But applying contract
  law standards of construction against the offeror, notice and cure of
  breach, grounds for preliminary injunction, and all that -- together
  with a correct reading of phrases like "derivative work" under
  copyright law and "mere aggregation" -- results in a GPL whose
  utility as a club against the Wicked Linker is greatly weakened and
  possibly (IANALIAJ) zero.  Which is, in my personal view, as it should
  be.
 
 I see, so this is what you're claiming.  Since the proponents of the
 unilateral-grant-of-permission theory completely agree that contract
 law is the normal rule for the interpretation of such documents, there
 isn't any debate there.  If you only reason for invoking contract law
 is to say the license must be interpreted in accord with the
 standards of contract construction, there is already broad agreement
 about that point.

Not from the copyright-based license crowd, who would have you
believe that contract law standards don't apply and the GPL has a fast
path to preliminary injunction under copyright infringement standards.
It is, however, a blenderized equine on d-l, so there's no particular
need to continue it here.

   There's a world of difference between "we can't link Quagga against an
   OpenSSL-enabled NetSNMP because it's illegal; whoops, we already did
   so (and a thousand similar things), which means we have to beg the FSF
   to un-automatically-terminate all of our GPL rights" and "as a matter
   of courtesy to the FSF, we usually make a reasonable effort to obtain
   OpenSSL 'exemption riders' where their FAQ recommends them,
   irrespective of whether the assertions in their FAQ and related
   statements are legally valid."
 
 Yes, and we can simply make neither statement, but ask for the rider,
 make no statements to the FSF about whether our past actions were
 right or wrong, and if the rider is not granted, stop distributing
 (which we would do anyway).
 
 So this is a tempest in a silly teapot.  I'm happy to leave the thread
 here, since the upshot is a no-relevance-to-important-issues.

Fair enough; although you may find that not everyone agrees that "stop
distributing" is the right answer when we are talking dynamic linking
across one or more package boundaries.  Especially when the FSF is not
the sole copyright holder on the GPL'ed upstream, as in the case of
Quagga (now under discussion on d-l).

Cheers,
- Michael



Re: [WASTE-dev-public] Do not package WASTE! UNAUTHORIZED SOFTWARE [Was: Re: Questions about waste licence and code.]

2005-05-19 Thread Michael K. Edwards
On 5/18/05, Roberto C. Sanchez [EMAIL PROTECTED] wrote:
 Point taken.  However, the GPL clearly states the conditions in
 section 6:
 
   6. Each time you redistribute the Program (or any work based on the
 Program), the recipient automatically receives a license from the
 original licensor to copy, distribute or modify the Program subject to
 these terms and conditions.  You may not impose any further
 restrictions on the recipients' exercise of the rights granted herein.
 You are not responsible for enforcing compliance by third parties to
 this License.
 
 To me, that says "Once the cat is out, it's out for good."  So,
 if you as the author of GPL software, try to restrict someone that
 has already received your software under the terms of the GPL, then
 you violate the license.  Since you are the author, it doesn't
 affect you so much, since you are also the copyright holder.

And what, exactly, is the licensee's recourse if the licensor
violates the license in this way?  Are you mistaking the GPL for a
statute?

The law about whether a license without an explicit term can be
revoked at will varies from one contract law jurisdiction to another. 
See Walthal v. Corey Rusk at
http://caselaw.lp.findlaw.com/data2/circs/7th/981659.html -- and
observe that appeals courts sometimes screw up too (note the scathing
commentary regarding the Ninth Circuit's opinion in Rano v. Sipa
Press).  Even in the Ninth, you probably wouldn't want to be using
Rano as a central part of the argument in a case today.

 The only other alternative is that the GPL is not enforceable.
 That would probably call into question the validity of all software
 licenses.  However, I am not lawyer (I'm sure you guessed that by
 now), so I will refrain from speaking further on this subject.

IANAL either, but this sweeping statement is obviously nonsense.  The
typical EULA is a dog's breakfast of enforceable and unenforceable
constraints, but there's getting to be quite a bit of statute and case
law about how to construe a EULA under any given jurisdiction's
contract law.  A court of fact's analysis of the GPL terms would in
any case have no value as precedent in a later court of fact where
some EULA (or for that matter the GPL) is under discussion.

The GPL is anomalous in that the drafter has published a widely
believed, but patently false, set of claims about its legal basis in
the FSF FAQ.  Yet in many ways the actual GPL text, properly
construed, is sounder than the typical EULA.  I don't believe that it
bans all of the things that the FSF says it does (notably dynamic
linking GPL and non-GPL code).  But the only thing I can see that
might jeopardize its validity with respect to real derivative works is
the difficulty of construing a legitimate implementation of that
"automatically receives" language in Section 6, which a court would
have to construe in terms of conventional rules of agency to
sub-license.

 Incidentally, if there was so much controversy about this and the
 origins and rights to the code have been in question, why has
 SourceForge let the project continue for 2 years?  I imagine that
  it is not their responsibility to comb through every piece
 of code housed on their servers.  However, I would imagine that
 it would be part of their due diligence to verify whether a project
 like this can even exist on their servers in the first place.

SourceForge is not the tightest run ship on the planet.  They are
probably not protected by any kind of common carrier exemption, but
they also probably figure they can wait until they get a cease and
desist letter.  In the real world, most violations of the law go
unpunished unless they involve major bodily harm, justify a claim for
large monetary damages, run afoul of the ascendant political agenda,
or really piss someone off.

Cheers,
- Michael



Re: [WASTE-dev-public] Do not package WASTE! UNAUTHORIZED SOFTWARE [Was: Re: Questions about waste licence and code.]

2005-05-19 Thread Michael K. Edwards
On 5/19/05, Raul Miller [EMAIL PROTECTED] wrote:
 On 5/19/05, Michael K. Edwards [EMAIL PROTECTED] wrote:
  The GPL is anomalous in that the drafter has published a widely
  believed, but patently false, set of claims about its legal basis in
  the FSF FAQ.
 
 For the record, I disagree that this faq is patently false.
 
 It is, in places, a bit simplistic, but I wouldn't advise anyone to
 delve into those fine points of law unless they've retained
 the services of a lawyer (at which point the FAQ is merely
 an interesting commentary -- it has less weight than
 professional advice).

The FAQ is not merely an interesting commentary -- it is the
published stance of the FSF, to which its General Counsel refers all
inquiries.  Although I am not legally qualified to judge, I believe
that he can have no reasonable basis under the law in his jurisdiction
for many of the assertions that it contains, particularly the
assertion that the GPL is a creature of copyright law and not an
ordinary offer of contract.  That may yet become a problem for him
personally as well as for the FSF.

This is not a fine point of law, it is first-year law student stuff
that anyone with a modicum of familiarity with legalese can easily
verify for himself or herself by the use of two law references (Nimmer
on Copyright and Corbin on Contracts) found in every law library in
the US.  These law references are probably also available from most
law libraries in any English-speaking country and the bigger ones
anywhere in the world, as are their equivalents for other national
implementations.  The fact that all licenses are (terms in) contracts
is also blatantly obvious from a few hours' perusal of the primary
literature -- statute and appellate case law -- which is available for
free through www.findlaw.com.  Don't believe me; look it up for
yourself.

 Furthermore, that FAQ is far and away better than anything
 you've proposed.

If that is a challenge to produce an adequate summary of my writings
to date on the topic, I think I'll take it up, in my own sweet time. 
It won't be legal advice (IANAL), but it will be firmly grounded in
the applicable law to the best of my ability, which is a hell of a lot
more than you can say for the FSF FAQ.

Cheers,
- Michael



Re: [WASTE-dev-public] Do not package WASTE! UNAUTHORIZED SOFTWARE [Was: Re: Questions about waste licence and code.]

2005-05-19 Thread Michael K. Edwards
On 5/19/05, Raul Miller [EMAIL PROTECTED] wrote:
   For the record, I disagree that this faq is patently false.
  
    It is, in places, a bit simplistic, but I wouldn't advise anyone to
   delve into those fine points of law unless they've retained
   the services of a lawyer (at which point the FAQ is merely
   an interesting commentary -- it has less weight than
   professional advice).
 
 On 5/19/05, Michael K. Edwards [EMAIL PROTECTED] wrote:
  The FAQ is not merely an interesting commentary -- it is the
  published stance of the FSF, to which its General Counsel refers all
  inquiries.
 
 And if you have retained counsel of your own, I'd let your
 lawyer deal with that.  If you haven't, then my interesting
 commentary comment is irrelevant.

Perhaps that is indeed what you would do.  I don't consider lawyers to
be the only persons capable of reading the law for themselves.  They
are the only ones authorized to offer certain forms of legal advice
and legal representation, but that's a whole 'nother animal.

  Although I am not legally qualified to judge, I believe
  that he can have no reasonable basis under the law in his jurisdiction
  for many of the assertions that it contains, particularly the
  assertion that the GPL is a creature of copyright law and not an
  ordinary offer of contract.  That may yet become a problem for him
  personally as well as for the FSF.
 
 I don't find in the GPL FAQ any assertion that the GPL is not
 to be considered an agreement under contract law.

Very, very interesting.  The grossly erroneous conclusions are there
(including various statements about run-time use that are false in the
US in light of 17 USC 117, and false for other reasons in many other
jurisdictions), but the "GPL is a creature of copyright law" bit is
not.  Does anyone happen to have a six-month-old copy of the FSF FAQ?

Happily, the public record is not limited to websites under the FSF's
control.  Google Eben Moglen for the text of various interviews, and
read his statements (especially paragraph 18) in
http://www.gnu.org/press/mysql-affidavit.html -- or Google's cached
copy, if that URL mysteriously stops working.

 I can only guess that you're objecting to the implication that
 copyright law is somehow important to understanding the
 GPL.

Presumably this bit of grandstanding is meant for the benefit of any
reader who doesn't know that you and I have been spamming debian-legal
(and on and off debian-devel) with this debate for months, and hence
you can guess a great deal more than that.

 I'm stopping here because I'm assuming that the rest of what
 you wrote is somehow logically related to these assertions
 which do not appear in the FAQ.

Yeah, right.  Like you haven't been arguing strenuously for months
that the GPL is not an offer of contract.  I am starting to question
your sincerity again.

- Michael



Re: [WASTE-dev-public] Do not package WASTE! UNAUTHORIZED SOFTWARE [Was: Re: Questions about waste licence and code.]

2005-05-19 Thread Michael K. Edwards
On 5/19/05, Roberto C. Sanchez [EMAIL PROTECTED] wrote:
 http://web.archive.org/web/20041130014304/http://www.gnu.org/philosophy/free-sw.html
 http://web.archive.org/web/20041105024302/http://www.gnu.org/licenses/gpl-faq.html

Thanks, Roberto.  The (moderately) explicit bit I had in mind is in
fact still in the current FAQ, I just missed it:

( http://www.fsf.org/licensing/licenses/gpl-faq.html#TOCIfInterpreterIsGPL )

"... The interpreted program, to the interpreter, is just data; a free
software license like the GPL, based on copyright law, cannot limit
what data you use the interpreter on."

But you are quite right to provide the philosophy link, since that's
the one that (IMHO, IANAL) goes way over the top:

<quote>
Most free software licenses are based on copyright, and there are
limits on what kinds of requirements can be imposed through copyright.
If a copyright-based license respects freedom in the ways described
above, it is unlikely to have some other sort of problem that we never
anticipated (though this does happen occasionally). However, some free
software licenses are based on contracts, and contracts can impose a
much larger range of possible restrictions. That means there are many
possible ways such a license could be unacceptably restrictive and
non-free.

We can't possibly list all the possible contract restrictions that
would be unacceptable. If a contract-based license restricts the user
in an unusual way that copyright-based licenses cannot, and which
isn't mentioned here as legitimate, we will have to think about it,
and we will probably decide it is non-free.
</quote>

This text is still present in http://www.fsf.org/licensing/essays/free-sw.html .

Cheers,
- Michael



Re: [WASTE-dev-public] Do not package WASTE! UNAUTHORIZED SOFTWARE [Was: Re: Questions about waste licence and code.]

2005-05-19 Thread Michael K. Edwards
On 5/19/05, Raul Miller [EMAIL PROTECTED] wrote:
[snip Raul's honest and polite response]
 I've been objecting to the nature of the generalizations you've been
 making.  In other words, I see you asserting that things which are
 sometimes true must always be true.
 
 In the case of the contract issue -- I've been arguing that it's
 not always the case that the law will rely solely on contract law.
 I've not been arguing that contract law would never apply.

I believe it to be the case that contract law is the only basis on
which the text of the GPL has any significance whatsoever in any
jurisdiction I have heard spoken of, except that some jurisdictions
may also apply doctrines of estoppel, reliance, etc. against the FSF
and other GPL licensors in tort proceedings.  An action for copyright
infringement, or any similar proceeding under droit d'auteur for
instance, will look at the GPL (like any other license agreement) only
through the lens of contract law.  IANAL, TINLA.  I don't believe you
have succeeded in providing any evidence to the contrary.

 In my opinion, an assertion that contract law would never apply
 would involve the same kind of over generalization as an assertion
 that contract law must always apply.

Contract law (or its equivalent in a civil law system) always applies
to offers of contract; that's kind of tautological.  And the GPL has
no legal significance as anything other than an offer of contract,
except perhaps as a public statement by the FSF and hence conceivably
as grounds for estoppel.

 I have been convinced, over the last week, that within the U.S.,
 contract law will almost always apply.  I think there is a basis
 even in U.S. law for other kinds of legal action, but I think that
 you're much more likely to find examples in international law
 than in U.S. law.

People with actual legal qualifications in continental Europe and in
Brazil, as well as other laymen who read and cite law, have weighed in
on this one.  While they are less prolix than I, they seem to be no
less certain as to the offer-of-contract nature of the GPL.  Have you
any more evidence to adduce in opposition?

Cheers,
- Michael



Re:

2005-05-19 Thread Michael K. Edwards
On 5/19/05, Thomas Bushnell BSG [EMAIL PROTECTED] wrote:
 Michael K. Edwards [EMAIL PROTECTED] writes:
 
  An action for copyright
  infringement, or any similar proceeding under droit d'auteur for
  instance, will look at the GPL (like any other license agreement) only
  through the lens of contract law.  IANAL, TINLA.  I don't believe you
  have succeeded in providing any evidence to the contrary.
 
 Um, it is true that the rules for interpreting the meaning of licenses
 are more or less the same as the rules for interpreting contracts.  It
 does not follow that licenses are therefore contracts.

The words license and contract are indeed not synonymous under
law.  But the law applicable to offers of contract containing grants
of license is contract law (or the equivalent codes in civil law
systems).

  Contract law (or its equivalent in a civil law system) always applies
  to offers of contract; that's kind of tautological.  And the GPL has
  no legal significance as anything other than an offer of contract,
  except perhaps as a public statement by the FSF and hence conceivably
  as grounds for estoppel.
 
 Huh?  What about the license as just what it purports to be: a
 license?

You're a little bit late to the party.  Check the debian-legal
archives for debate and case law out the yin-yang.  There's no such
thing as a copyright-based license.

 There is a thing you are not considering: it is a unilateral grant of
 conditional permission.  This is a perfectly well-traveled area of
 law.

Also part of contract law; and not applicable to the GPL, which does
not lack for acceptance or consideration.  Thread at
http://lists.debian.org/debian-legal/2004/12/msg00209.html .

Cheers,
- Michael
(IANAL, TINLA, etc.)



Re: Debian as living system

2005-05-18 Thread Michael K. Edwards
On 5/18/05, Andrew Suffield [EMAIL PROTECTED] wrote:
 I really don't care. If somebody can't be bothered to write a mail in
 comprehensible English, they shouldn't expect anybody else to bother
 to read it. Most won't even bother to say why they didn't bother to
 read it. He's lucky that I did, and should be grateful for that. I
 *could* have simply ignored him.

Yes, and to be ignored by Andrew Suffield is a fate worse than death!

Cheers,
- Michael

(I have mostly ignored his "I killfiled him" nitwittery but this was
too good to pass up.)



Re: Debian as living system

2005-05-18 Thread Michael K. Edwards
On 5/18/05, Wouter Verhelst [EMAIL PROTECTED] wrote:
 Yeah, well. But he's still right. This once.

Is there some reason why "eat a dictionary" had to be copied to all of
debian-devel in order to inform bluefuture of his linguistic
difficulties?  (I ask this knowing full well that my own pot has black
spots.)

Cheers,
- Michael



Re: Debian as living system

2005-05-18 Thread Michael K. Edwards
On 5/18/05, Christian Perrier [EMAIL PROTECTED] wrote:
 Sure. And this list subscribers deserve some apologies for myself
 being annoyed enough to be impolite to them and write ununderstandable
 prose here... even if obviously on purpose.

Well, I enjoyed it immensely, despite my execrable French.  And Google
Translation / Babelfish are pretty easy to use, though their
literalist translations often lack that je ne sais quoi.

 So, let's go back to my awful English.

If you would like to do a little penance, you might drop by
debian-legal and check how badly I have mangled the sense of a couple
of French court decisions relating to droits moraux de l'auteur.

Cheers,
- Michael



Re: Debian as living system

2005-05-18 Thread Michael K. Edwards
On 5/18/05, Wouter Verhelst [EMAIL PROTECTED] wrote:
 On Wed, May 18, 2005 at 01:41:34PM -0700, Michael K. Edwards wrote:
  On 5/18/05, Wouter Verhelst [EMAIL PROTECTED] wrote:
   Yeah, well. But he's still right. This once.
 
  Is there some reason why eat a dictionary had to be copied to all of
  debian-devel in order to inform bluefuture of his linguistic
  difficulties?
 
 I never said so. On the contrary. Please read my post again.

Check.  I was getting my pots and kettles backwards.  Thanks for the correction.

Cheers,
- Michael



Re: [WASTE-dev-public] Do not package WASTE! UNAUTHORIZED SOFTWARE [Was: Re: Questions about waste licence and code.]

2005-05-18 Thread Michael K. Edwards
On 5/18/05, Peter Samuelson [EMAIL PROTECTED] wrote:
[snip]
 I know at least one developer on a prominent open source project who
 believes otherwise, and claims to be prepared to revoke their license
 to her code, if they do certain things to piss her off.  Presumably
 this is grounded on the basis of her having received no consideration,
 since it's a bit harder to revoke someone's right to use something they
 bought and paid for.  It is also possible that she's a looney.

If the GPL were the creature of copyright law that the FSF proclaims,
or the unilateral contract that some apologists believe, that would be
a real problem.  Talk about the Law of Unintended Consequences! 
Happily, there is no defect of consideration in the GPL, as the
covenants of return performance (principally the obligation to offer
source code) are non-trivial promises with some value to the licensor.
Thread at http://lists.debian.org/debian-legal/2004/12/msg00209.html ;
note references to Planetary Motion v. Techplosion (2001) later in the
thread.

 Yes, I'm aware that if it's possible to revoke the GPL, it fails the
 Tentacles of Evil test, and GPL software would be completely unsuitable
 for any serious deployment.  Note, however, that but it *can't* be
 that way because if it is, we're all in trouble is not a very strong
 argument.

It's not a meaningless argument, though; there's a doctrine of
reliance that can substitute for acceptance under some circumstances,
and might be used to estop a copyright holder from yanking an
ostensibly perpetual license if all else failed.  IANAL, etc.

Cheers,
- Michael



Re: [WASTE-dev-public] Do not package WASTE! UNAUTHORIZED SOFTWARE [Was: Re: Questions about waste licence and code.]

2005-05-18 Thread Michael K. Edwards
On 5/18/05, Roberto C. Sanchez [EMAIL PROTECTED] wrote:
 Peter Samuelson wrote:
[snip]
  Yes, I'm aware that if it's possible to revoke the GPL, it fails the
  Tentacles of Evil test, and GPL software would be completely unsuitable
  for any serious deployment.  Note, however, that but it *can't* be
  that way because if it is, we're all in trouble is not a very strong
  argument.
 
 But it can't be done, period.
 
 Reference: http://www.gnu.org/philosophy/free-sw.html
 
 In order for these freedoms to be real, they must be irrevocable as
 long as you do nothing wrong; if the developer of the software has the
 power to revoke the license, without your doing anything to give cause,
 the software is not free.

I would advise you to be very, very wary of assertions made by the FSF
about the legal import of the GPL.  Philosophy is strong stuff and has
been known to cloud the mind.  Case law is a more trustworthy guide to
what is and isn't legally possible, not to mention what can and can't
be construed into the terms of the GPL.

Cheers,
- Michael



Re: GPL and linking

2005-05-11 Thread Michael K. Edwards
On 5/11/05, Raul Miller [EMAIL PROTECTED] wrote:
[an argument, much of which would make sense in a parallel universe
where the GPL is on the law books as 17 USC 666]

I am not a lawyer (or a fortiori a judge), so all that I can do to
explain why this isn't valid legal reasoning is to point you at
documents to which you and I both have access.  To the extent that the
arguments that I have made involve fine points, I have backed them up
with more valid binding case law than you can shake a stick at.  You
have offered me the instruction sheet for a copyright registration
form and some definitions from random online dictionaries.

So I'm not going to say that your point of view isn't perfectly valid
as your own point of view; but I don't have any reason to believe that
it's a good predictor of how a court case involving the FSF suing
FooSoft for linking against GNU readline would be argued.

Cheers,
- Michael



Re: GPL and linking

2005-05-11 Thread Michael K. Edwards
On 5/11/05, Raul Miller [EMAIL PROTECTED] wrote:
 Of course, a court case does not have to be argued that way.

No, but if it's to have a prayer of winning, it has to be argued in
terms of the law that is actually applicable, not as if the court were
obliged to construe the GPL so that every word has meaning and then
proceed directly to copyright law.

 However, I believe that a person who holds a GPL copyright
 who neglects these points in court is likely to lose.

Erroneous beliefs are among the liberties granted to humankind by the
universe.  One or both of us holds some very erroneous beliefs.

 A judge can ignore issues which are not raised in court, and
 will focus on issues which are raised and contested in court.

A judge cannot ignore law which doesn't happen to be in one of the
parties' briefs.

Cheers,
- Michael



Re: GPL and linking

2005-05-11 Thread Michael K. Edwards
Fine.  I have been goaded into rebutting this specimen.

On 5/11/05, Raul Miller [EMAIL PROTECTED] wrote:
 I'm disputing an argument which seems to require a number of such fine
 points.  It is difficult for me to raise such disputes without
 mentioning the points themselves.
 
 However, I can present my point of view without resorting to this argument:
 
 Let's say that we have a court case which involves some contested GPLed work.
 How should we proceed?
 
 First, let's consider a work which doesn't have any binaries.  This
 would be no different from any other copyright case -- you have to show
 that the work in question is copyrighted under the GPL, and you'd have
 to show that the terms of the GPL are being violated.  This should be
 relatively simple, and we can neglect sections 2 and 3 (which are
 clearly being complied with if the rest of the license is being
 followed).

Nope.  Under US law at least (IANALIAJ), you'd have to show:

1.  that you, yourself, hold a valid registered copyright to a
specific portion of the copyrightable expression in a particular work
A; and

2.  that a portion of your contribution to A has been copied to work
B, using the Computer Associates v. Altai
abstraction-filtration-comparison standard, and that the amount of
_copyrightable_ material that has been copied exceeds de minimis;
and

3.  that the distributor of B does not have license from you to copy
that material from A to B, or that the distributor's conduct exceeds
the scope of the license (e. g. creation of a derivative work when the
license extends only to verbatim copies), or that the license has been
terminated for material breach not otherwise reparable under the
applicable contract law standard;

After which, the distributor of B has an opportunity to demonstrate:

4.  that some statutory or judicially created affirmative defense,
such as fair use, justifies the distributor's conduct; or

5.  that public policy or a principle of equity demands that the
distributor's conduct be sanctioned despite the unavailability of any
defense under current law.

Then, and only then, you may be entitled to some relief under
copyright law.  That relief may be as little as one dollar of damages.

 Now let's imagine that we've got a case which involves binaries.  What do we
 have to do?
 
 First, we need exhibits: the sources, and the binaries.  Out of
 consideration for the court, we want to pick examples which are as
 simple as possible while representing all of the important contested
 issues.  So let's imagine we have Exhibit A (the sources) and Exhibit B
 (the binary).  [We need to also show that this binary is representative
 of something which is being distributed, but that's not really
 different from what you have to do in other copyright cases, so I'll
 ignore that part.]
 
 Second, we need to show that Exhibit B is derived from Exhibit A.  Again, we
 want to present this in a simple and easily understandable form, and we
 want to also present complete information.
 
 Once we've shown that B is derived from A, we can start examining the terms
 of the GPL to make sure that they are being followed.
 
 For example, let's say now that we're the defending party, and we want to show
 that the "mere aggregation" clause applies.  To do this, we would show that
 the disputed work could be replaced by something trivial, and that having done
 so, the program is still the same program -- we might do this by showing that
 it still has the same behavior.

This has no bearing on the definition of "work based on the Program"
or of "mere aggregation" or on any other relevant ambiguity in the
construction of the contract.  The only sense in which I can see it
having any relevance is if the only theory under which B is derived
from A is "characters and mise en scene", as in Micro Star v. FormGen;
in which case the existence of a reasonable alternative to A, under
which B does something similarly useful, may be a successful defense.

 Switching sides again, if someone asserted that the "mere aggregation" clause
 applied, and used program behavior to make that assertion, and I believed that
 mere aggregation did not apply, I would show how the program failed to
 operate in some independent context, with the disputed section removed.
 
 Is that clear enough?

Clear as mud.  What do you mean, "used program behavior to make that
assertion"?  Even though this is an offer of contract, its drafter
harps on one copyright note.  "Mere aggregation" is a phrase with no
legal meaning (there is a single usage of this phrase in all of the
appellate law accessible to FindLaw, and it refers to members of a
school prayer club).  According to FindLaw, Merriam-Webster's
Dictionary of Law defines "aggregation" as:

1: the collecting of individual units (as damages) into a whole

2: a collection of separate parts that is unpatentable because no
integrated mechanism or new and useful result is produced

I think it is vanishingly improbable, even if this were a 

Re: GPL and linking

2005-05-09 Thread Michael K. Edwards
I haven't replied in detail to Batist yet because I am still digesting
the hash that Babelfish makes out of his Dutch article.  And I don't
entirely agree that the GPL is horribly drafted, by comparison with
the kind of dog's breakfast that is the typical license contract.  In
the past, I have tried to draft something with similar legal meaning
myself, and on review I did a really lousy job.

I have used the GPL, and will probably use it again (emphatically
without the upgrade option) the next time it comes up, as the
default license under which I provide source code for software I write
primarily for a client's internal use, insofar as work made for hire
provisions do not apply.  As such, I have gone out on quite a limb in
this discussion, possibly giving a future legal opponent grounds for
estopping me from making certain arguments in a courtroom.  So be it.

On 5/9/05, Humberto Massa [EMAIL PROTECTED] wrote:
[snip]
 Batist, I think you are mistaken about the meaning of the any later
 version copyright license... the terms are precisely '' This program is
 free software; you can redistribute it and/or modify it under the terms
 of the GNU General Public License as published by the Free Software
 Foundation; either version 2 of the License, or (at your option) any
 later version. '' and they mean that said program is dually-triply-etc
 licensed under the GPLv2 or v3 or v4 or any other upcoming FSF-GPL, at
 the licensee's discretion.

I used to think it extraordinarily unlikely that this formula, with
regard to as-yet-unwritten offers of contract, would have legal force
in any jurisdiction.  The prevalence of similar terms in shrink-wrap
software licenses nowadays -- which I abhor, and blame directly on
RMS, Eben Moglen, and the FSF -- has eroded that confidence to some
degree.  If it were ever to come up in a court case in which I
personally was involved, I envision disputing its validity to the last
breath.  (I reserve the right to do otherwise, of course.)

 I am a defender of the GPLv2. I am not a defender of the GPLv3 because I
 don't know its terms yet... :-) I don't know why would anyone license
 their work under yet-undisclosed terms, but...

I too am a defender of the GPLv2 under an interpretation which I
believe to be correct under the law in the jurisdiction in which I
reside.  As to gambling on future license texts: I find it
uncomfortable enough to live in a society in which disputes on all
scales are frequently settled by reference to a corpus of law of which
no human being can possibly retain more than a small fraction in his
or her brain, and which is perpetually being evolved and ramified by
legislatures, courts, and unspoken consensus.  The existence of 
persons who would knowingly further complicate their lives by handing
over additional liberties to a person who publishes opinions such as
http://www.gnu.org/philosophy/enforcing-gpl.html appalls me but has
ceased to amaze me.

Cheers,
- Michael



Re: GPL and linking

2005-05-07 Thread Michael K. Edwards
On 5/7/05, Batist Paklons [EMAIL PROTECTED] wrote:
 [Note: IALNAP (I am lawyer, not a programmer), arguing solely in
 Belgian/European context, and english is not my native language.]

It's really cool to have an actual lawyer weigh in, even if TINLAIAJ.  :-)

 On 07/05/05, Michael K. Edwards [EMAIL PROTECTED] wrote:
  Again, that's not how it works.  In the presence of a valid license
  contract, one is entitled to contract-law standards of the
  reasonableness of one's attempts to cure a breach when notified.  The
  automatic termination clause is probably unenforceable in most
  jurisdictions; I think (IANAL) few would even read it as authority to
  terminate on inadvertent (non-material) breach, let alone on the
  licensor's idea of breach if the licensee's (reasonable) construction
  makes it not a breach.
 
 Automatic termination clauses are quite common, and generally held
 valid. It is often only what constitutes a breach that can lead to
 such termination that is disputed in court. In my opinion that is one
 of the few GPL license terms that is quite sound, only the grounds on
 which that termination happens seem extremely flakey to me.

You're quite right; I didn't really mean "unenforceable", I meant
"ineffective as a means of circumventing a court's authority to
interpret the contract and set standards of breach and remedy".  As in
the MySQL case, where the judge decided that the definitional issue
was a matter of fair dispute, and thus MySQL could not meet the
standard of "likely to prevail on the facts"; and even if MySQL's
interpretation was upheld the breach might well have been cured
(leaving the contract intact) by Progress's conduct subsequent to
notice of breach; and even if it weren't cured, MySQL could show
neither the prospect of irreparable harm nor that the balance of harms
favored it, given the conduct pledged by Progress.  Hence the already
pledged conduct would constitute sufficient remedy pending a full
trial of fact, even though the only remedy specified in the GPL is
termination.

What I really should have written is that automatic termination
clauses only affect the date from which the license is deemed to have
been terminated in the event that a court determines material breach,
but don't give the offeror or drafter any additional authority to
interpret whether a breach has occurred.  From this perspective, an
automatic termination clause isn't so much a way of strengthening the
licensor's authority to terminate as it is a declaration that the
licensee waives any waivable statutory provisions about notice of
termination in the event of breach.  It might also affect whether a
court-ordered remedy at the conclusion of a full trial includes
license termination (i. e., an injunction against continued exercise
of rights granted by the license) or merely damages for any conduct to
date that fell outside the license.

This is in contrast to "in the sole judgment of the licensor"
language, which as I understand it can only take effect upon notice in
most jurisdictions, and amounts to "termination at will" plus a
covenant not to terminate without a reasonable belief that one of the
termination conditions has been met.  Such language (which is not
present in the GPL) places the burden upon the licensee to
demonstrate, in the event of notice of termination, that the licensor
did not have a reasonable basis for belief that there was reason to
terminate.

Is that how it works in your jurisdiction, more or less?

 As to the whole derivative work discussion, my opinion is that a judge
 would rather easily decide something isn't a derived work. The linux
 kernel, e.g., wouldn't need those notes of Linus to allow use of the
 API and so on, on the simple reason that the kernel is designed to do
 just that. In Europe at least one has an automatic license to do
 everything that is necessary to run a program for the purpose it is
 intended to, unless explicitly otherwise agreed to. I believe for the
 GPL to rule this out, it has to draft a clause that says: you cannot
 link to this program in such and such a way, unless it is also GPL'ed.
 In general exceptions to a rule have to be very precise, lest they
 become the rule and the rule the exception.

Woohoo.  Yes, that's how I understand it under US law as well
(IANALIAJ), with a couple of asterisks about estoppel and laches.

 I am reasoning from a legal background, and I believe that is also what
 a judge would do. It is my general opinion, following Michael, that
 large portions of the FSF FAQ are simply wrong. I have written some
 more elaborate papers on that topic, albeit discussing intellectual
 property in more general terms, focussed on Open Source. See
 http://m9923416.kuleuven.be for that (unfortunately, the most
 interesting one is written in Dutch, and I do not have time to
 translate).

I suppose that if I profess to be able to read legalese, I ought to be
able to tackle Dutch, with a little help from Google and/or Babelfish.
 :-)

 Kind

Re: GPL and linking (was: Urgently need GPL compatible libsnmp5-dev replacement :-()

2005-05-06 Thread Michael K. Edwards
On 5/6/05, Raul Miller [EMAIL PROTECTED] wrote:
 On 5/5/05, Michael K. Edwards [EMAIL PROTECTED] wrote:
  Sorry to spam debian-devel -- and with a long message containing long
  paragraphs too, horrors! -- in replying to this.
 
 Who is sorry?  How sorry?
 
 Let's assume, for the sake of argument, that this sorry-ness is not
 something that matters enough to you to avoid posting long and
 elliptical messages to debian-devel.

As I wrote, debian-devel is where the "Urgently need GPL compatible
libsnmp5-dev replacement" discussion is happening.  Andrew's somewhat
disingenuous "This part of the thread belongs on -legal"
notwithstanding, it had not previously been moved to -legal, just
copied there.

I was uncertain whether to remove -devel from my reply, but eventually
decided to leave it as it was; was there some onus on me to remove
-devel?  I am hardly a major source of -devel noise, by message count
or by bandwidth.  But perhaps -devel is reserved for short, erroneous,
discourteous messages?  (That's not really aimed at Raul, actually.)

   On Wed, May 04, 2005 at 11:51:51PM -0500, Peter Samuelson wrote:
   The GPL simply defers to copyright law to define derivative work.
 
  Actually, it tries to define "work based on the Program" in terms of
  "derivative work" under copyright law, and then incorrectly
  paraphrases that definition.
 
 It's probably worth noting that "derivative work" and "work based on the
 Program" are spelled differently.  What's not clear, to me, is whether the
 word "that" refers to the "d" phrase or the "w" phrase.  Careful study sheds
 no insight into this burning issue.
 
 [If I read the GPL, I can't find where it paraphrases the "d" phrase.  On the
 other hand I can't figure out how someone could claim that the GPL
 incorrectly paraphrases the "w" phrase.]

Second sentence in Section 0: "The "Program", below, refers to any
such program or work, and a "work based on the Program" means either
the Program or any derivative work under copyright law: that is to
say, a work containing the Program or a portion of it, either verbatim
or with modifications and/or translated into another language."

As I read it, the phrase after the colon is a paraphrase of the
either/or clause it follows, i. e., an attempt to restate it in
layman's terms.  And it's incorrect, as I explained, and for which I
have previously given references to treaty, several countries'
statutes, and lots of case law, in messages on -legal to which you
responded (generally constructively and courteously, I might add).

Ignoring the actual definition and taking the paraphrase would mean
that the largest possible "work containing GPL licensed material"
would still be subject to GPL constraints (modulo the "mere
aggregation" clause, which, if it has legal meaning, applies only to
Section 2).  And yes, anything copyrightable under the Berne
Convention is a "work", including (for instance) a Debian CD set.
That's obviously problematic, it's obviously not what any GPL licensee
believes ("GPL section 3 0wns my distro?  yeah, right"), and it's
obviously not a reading any court would accept, even absent the rule
of construction against the offeror.

  There has been so much silliness written about this topic ...
 
 Agreed.

Lots of sarcasm and cheap shots, too; of which I have sometimes been
guilty as well.  But they do not constitute negative silliness, and
are not something I have associated with your by-line in the past.

Cheers,
- Michael



Re: GPL and linking

2005-05-06 Thread Michael K. Edwards
On 5/6/05, Jeremy Hankins [EMAIL PROTECTED] wrote:
 All of this discussion of legal minutia misses (and perhaps supports)
 what, to my mind, is the most compelling argument for accepting the
 FSF's position on the subject.  The fact is that the question does
 depend on a lot of legal minutia that almost all of us aren't qualified
 to have an opinion on.  So unless it's a make-or-break issue for Debian
 (which I just don't see), the obvious thing to do is to take the
 agreeable, safe position.

You may not be qualified (as I am not) to offer legal advice.  But
you're certainly qualified to have an opinion.  And there isn't
necessarily an agreeable, safe position.

If your livelihood depends on your continued ability to work in the
software field, I think it helps to have the ability to read deeply
into a contract.  Sometimes that requires a review of the law
applicable to you personally.  I know people who really, really wish
they hadn't accepted EULA X, let alone Shared Source Agreement Y. 
Subtle issues of what constitutes contract acceptance in a given
jurisdiction, whose interpretation of an ambiguity prevails, and what
things can only be agreed to in writing (or can't be made binding at
all) do matter.

My experience with lawyers has been quite positive overall, but I have
learned two cautionary lessons.  One: a lawyer's research is always
focused on either backing or influencing his or her client's position,
and his or her thinking about the arguments on the other side is often
limited to finding counter-arguments for them.  Two: in the absence of
a lawyer who's on your payroll -- not your company's, not your
friend's, not a trusted third party's -- you are your own best legal
researcher.  Actually, the lawyer I respect most says that's true even
when he is on my payroll.  Use the primary literature; it's not really
that hard, though you might have to do a lot of background reading. 
(Same goes for medicine and algorithms, and almost all science that
merits the name.)

 So the question of whether or not the FSF is actually *right* doesn't
 matter.  We should only disagree with them if we have to for the sake of
 Debian -- in which case we're probably in trouble and should hire a
 lawyer ASAP.

The FSF has its own agenda, and it's not principally about keeping
people out of the courtroom.  Many Debian contributors have said that
one recent FSF action or another has seriously damaged their trust in
the FSF as a steward of the portion of the software commons that they
have acquired by copyright assignment, let alone of all software
offered under the GPL.  Note that the FSF is not unique in this
(RedHat, XFree86, and the Mozilla Foundation are other recent
examples), and I still think they're on the side of the angels most of
the time.

Lots of people rely on Debian to have made the most informed judgment
its members can about legal issues.  That doesn't mean just the SPI's
legal counsel, the -legal regulars, or the ftpmasters; that means the
DDs and, to a lesser extent, fellow travelers like me.  Oh, with
respect to Debian as such it doesn't necessarily mean me; IANAL,
TINLA, IANADD, and all that.  But when it comes to other entities that
accept my recommendation of Debian for their IT or product platform,
it's my judgment (among others') that they rely on.  In the primary
literature I trust; all others pay cash.

Cheers,
- Michael



Re: GPL and linking (was: Urgently need GPL compatible libsnmp5-dev replacement :-()

2005-05-06 Thread Michael K. Edwards
On 5/6/05, Raul Miller [EMAIL PROTECTED] wrote:
 On 5/6/05, Michael K. Edwards [EMAIL PROTECTED] wrote:
[snip]
  Second sentence in Section 0: "The "Program", below, refers to any
  such program or work, and a "work based on the Program" means either
  the Program or any derivative work under copyright law: that is to
  say, a work containing the Program or a portion of it, either verbatim
  or with modifications and/or translated into another language."
 
 I believe you're objecting to the "that is to say" phrase, which restates
 what "work based on the Program" means.

Attempts to, anyway.

  As I read it, the phrase after the colon is a paraphrase of the
  either/or clause it follows, i. e., an attempt to restate it in
  layman's terms.
 
 Yes.  And that either/or clause says what work based on the Program
 means.

Yep.  That phrase is, in its entirety: "either the Program or any
derivative work under copyright law".  And that's the definition of
"work based on the Program" for the duration of the GPL, as far as I'm
concerned.

  And it's incorrect, as I explained, and for which I
  have previously given references to treaty, several countries'
  statutes, and lots of case law, in messages on -legal to which you
  responded (generally constructively and courteously, I might add).
 
 I disagree:
 
 "work based on the Program" is not the same thing as "derivative work".
 
 The definition of "work based on the Program" uses the "derivative
 work" concept, but builds on that concept.
 
 I think claiming they're equivalent is silly.

Right.  "either the Program or any derivative work under copyright
law" \superset "derivative work".  But "collections containing the
Program" don't fit.  "That is to say" introduces an (incorrect)
paraphrase -- not a further expansion of the category.  To read
otherwise is to do violence to both the grammar and the legal sense of
the definition; and as I wrote, would result in an unacceptable scope
for the license (any "work containing GPL material", up to and
including an entire CD set and the shelf of books bundled with it).

People who say publicly and often enough that they accept the FSF
FAQ's statement that programs using GPL libraries must be released
under the GPL (
http://www.fsf.org/licensing/licenses/gpl-faq.html#IfLibraryIsGPL )
may well be estopped from arguing otherwise in court.  I prefer not to
be numbered among them.  (And no, before you say it, I'm not trolling
to build a defense for some court case.)  But that's completely
different from affecting the legal meaning of the license (see Linus's
LKML post again).

I'd be sorry to see, say, a GR swearing allegiance to the FSF FAQ;
that would probably estop Debian in perpetuity from linking GPL
against non-GPL, trigger the automatic termination provision
immediately and retrospectively due to any of a zillion inadvertent
build bugs in the past decade, and lead to the Death Of Debian (TM). 
But it wouldn't have any effect on what license terms I or any Debian
user or derivative would be obligated to accept.

Cheers,
- Michael



Re: GPL and linking

2005-05-06 Thread Michael K. Edwards
On 5/6/05, Jeremy Hankins [EMAIL PROTECTED] wrote:
 Michael K. Edwards [EMAIL PROTECTED] writes:
 
  You may not be qualified (as I am not) to offer legal advice.  But
  you're certainly qualified to have an opinion.
 
 Sure.  But it's not relevant to this discussion -- despite what many of
 the participants seem to believe.

Did you read any of the rest of my message?  This particular sentence
of mine disagrees with your claim that "almost all of us aren't
qualified to have an opinion" on license issues.  Then what are we
doing messing around with other people's copyrighted material?

  And there isn't
  necessarily an agreeable, safe position.
 
 Are you saying there's not?  So who's going to sue me (or Debian) for
 adopting an overbroad idea of what constitutes a derivative?  "Hey, you
 decided to abide by my license terms when you didn't have to.  I'm gonna
 sue!"  (Standing?  What's that?)

It's not particularly agreeable or safe to say, "we, Debian, interpret
the GPL to recursively follow the depends/reverse-depends
relationships of GPLed packages, crossing most of the individual API
and package boundaries within the work called Debian, and therefore
the strong set within Debian is being offered to our users under the
GPL alone, even if the individual packages contain MIT/BSD/whatever
licenses in debian/copyright."  That's probably a little stronger than
the estoppel one risks in saying "the Debian consensus is that
dynamically linked Quagga -> NetSNMP -> OpenSSL is illegal (disallowed
under the GPL)", but not much.

My take on it is that such relationships are perfectly legal, but that
as a courtesy to the FSF we undertake to resolve such situations when
they are discovered, either by efforts to obtain unambiguous license
compatibility or by package removal.  And if it were me, I'd keep
building Quagga against NetSNMP while proceeding with reasonable
dispatch, but not in a panic, to request that the Quagga upstream get
it together with respect to an OpenSSL exemption.

The risk in publicly acknowledging the FSF FAQ as a standard of
legitimacy is not that anyone will sue you but that Debian will
unwittingly provide a stalking-horse for some GPL copyright holder
(not necessarily the FSF) to attack Debian users and derivatives. 
Say, for instance, I write a program that uses an LGPL library whose
upstream doesn't follow a copyright assignment policy, and then
someone claims that their GPL code was pasted in a while ago.  I watch
helplessly while Debian relabels it GPL and purges all
GPL-incompatible engineering relationships to that library -- and
knowing that they have done so might put me at risk of being estopped
along with Debian even if I don't agree with the FSF FAQ myself.  That
would not be a good situation.

(By the way, my undying thanks to the Debian X Strike Force for
handling the XFree86 license situation the way they have.  No panic,
no sudden abandonment of the XFree86 code base, just a decision to
decline contributions not available under the MIT/X11 license even if
they're from upstream, and to move to an alternate upstream fork after
sarge.  And a carefully written FAQ, not over-committal on legal
issues.)

 Conversely, if our idea of what constitutes a derived work is too
 narrow we could end up violating someone's copyright.

Again, that's not how it works.  In the presence of a valid license
contract, one is entitled to contract-law standards of the
reasonableness of one's attempts to cure a breach when notified.  The
automatic termination clause is probably unenforceable in most
jurisdictions; I think (IANAL) few would even read it as authority to
terminate on inadvertent (non-material) breach, let alone on the
licensor's idea of breach if the licensee's (reasonable) construction
makes it not a breach.

Consider how it worked in Progress Software v. MySQL.  The FSF's
affidavit on MySQL's behalf claimed that Progress's license was
terminated, but the judge didn't buy it, and upheld Progress's right
to go on distributing MySQL's GPL material.  The judge called the
derivative work issue "a matter of fair dispute" -- and hence not a
deliberate breach -- noted that it was arguably cured anyway, that
MySQL had not demonstrated irreparable harm, and that the balance of
harms favored Progress, and denied the request for preliminary
injunction on GPL/copyright grounds.

For legal purposes, it often matters not only what you do and don't do
but why you say you're (not) doing it.  Saying in public that you're
trying to do X less often because you believe it's illegal is
injudicious at best.  Doubly so if you go on to say that you believe
that you permanently lost your rights under a license every time you
did X.

Cheers,
- Michael



Re: GPL and linking (was: Urgently need GPL compatible libsnmp5-dev replacement :-()

2005-05-06 Thread Michael K. Edwards
On 5/6/05, Raul Miller [EMAIL PROTECTED] wrote:
 On 5/6/05, Michael K. Edwards [EMAIL PROTECTED] wrote:
  On 5/6/05, Raul Miller [EMAIL PROTECTED] wrote:
   I believe you're objecting to the that is to say phrase, which restates 
   what
   work based on the Program: means.
 
  Attempts to, anyway.
 
 I think this "attempts to" quip is meaningless.

How would you like me to say it?  "Purports to"?  "Professes to"?
"Makes an honest but flawed effort to"?  Do you not understand my
interpretation that the use of quotes around "work based on the
Program" means that the writer is defining it as shorthand for "either
the Program or any derivative work under copyright law"?  And that an
attempt is then made to paraphrase (restate, whatever) the latter
phrase, and that restatement is just plain wrong?  You don't have to
agree with it, of course, but surely you get it now.

   Yes.  And that either/or clause says what work based on the Program
   means.
 
  Yep.  That phrase is, in its entirety: either the Program or any
  derivative work under copyright law.  And that's the definition of
  work based on the Program for the duration of the GPL, as far as I'm
  concerned.
 
 To recap:
 
 W: "work based on the program"
 D: "derivative work"
 E: either/or phrase
 C: phrase after the colon.
 
 W means E
 C paraphrases E
 
 Thus, you have concluded, C attempts to paraphrase D

No.  E defines W, which appears in quotes in the original to indicate
that it is being given a formal meaning.  C is grammatically a
paraphrase of E.  However, C and E are not the same thing according to
law; and grammatically and legally, E is the definition of W, and C is
not.  Neither is C \union E, C - D, or some other way to assign W a
meaning based on the wording of W, the content of an unrelated
document, or the distance to the moon.

 Should we keep going back and forth on this, trying to show why
 you believe C attempts to paraphrase D?

I don't, except insofar as C - "the Program" attempts to paraphrase E
- "the Program" (= D).  Are we done?  And if you're going to move it
to private e-mail, do it, don't grandstand about it.  That is also
more characteristic of others around here than it previously has been
of you.

Cheers,
- Michael



Re: GPL and linking (was: Urgently need GPL compatible libsnmp5-dev replacement :-()

2005-05-06 Thread Michael K. Edwards
 I don't, except insofar as C - "the Program" attempts to paraphrase E
 - "the Program" (= D).

Oh for Pete's sake, (E - "the Program") (= D).  What a great place for
a word wrap.

- Michael



Packaging audit trail mechanism (was: Ubuntu and its appropriation of Debian maintainers)

2005-05-05 Thread Michael K. Edwards
On 5/2/05, Matt Zimmerman [EMAIL PROTECTED] wrote:
 Another option would be to leave the source package maintainer the same (to
 retain proper credit, etc.), but override the binary package maintainer
 during the build (to reflect that it is a different build, and also display
 a more appropriate name in apt-cache show etc.).
 
 What do you think about this approach?

Personally, when I rebuild a package that might get handed to someone
else -- even if I didn't touch the source, but am rebuilding in a
known environment so I can reproduce it later -- I change the
Maintainer field to an e-mail address that reaches me, and add a
debian/changelog entry with an explanation of why it was rebuilt and
an appropriate suffix on the version number (a rough sketch follows
the list below).  Otherwise, I'm risking:

1) Implying that the Debian maintainer is part of my organization,
since it appears that he/she was the last person to touch the package;

2) Suggesting that bug reports should be sent directly to the Debian
maintainer and/or BTS, possibly annoying him/her and probably leaving
me and my organization out of an interaction that we ought to know
about;

3) Violating some licenses (the GPL, for instance), at least in
spirit, by making it hard to determine who is responsible for meeting
obligations to provide source code (and, again at least in spirit,
detailed instructions about reproducing the build environment).
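
In shell terms the ritual is roughly this -- a sketch only: "foo", the
version suffix, and the address are placeholders, and dch comes from
the devscripts package:

apt-get source foo
cd foo-1.2
# take over the Maintainer field so the rebuild is blamed on us
sed -i 's/^Maintainer:.*/Maintainer: Rebuilds <rebuilds@example.org>/' \
    debian/control
# record why it was rebuilt, with a distinguishing version suffix
dch --newversion 1.2-3+rebuild1 \
    "Rebuilt unmodified in a known chroot; report bugs to us."
dpkg-buildpackage -rfakeroot -us -uc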

When I am distributing unaltered Debian source packages alone, or
bit-exact copies of Debian binary packages, I don't worry as much
about these things.  Actually, in principle I ought to have a cache of
the source packages associated with all binary packages I distribute,
although for one-offs I usually assume I can get it from
snapshot.debian.net if I need it.  (snapshot has saved my bacon more
than once -- thank you Ukai-san and FSIJ!)

If I had Ubuntu's resources, I'd handle it differently.  Relying on
people (or even an automated process) to touch up debian/control and
debian/changelog on rebuild is so 1990's.  A Debian upload isn't
acceptable without a signed changes file, and an autobuilt package
doesn't make it onto ftpmaster without a signed buildd log (as I
understand it, anyway).  Soon it will be practical to install only
signed binary packages (what gets signed in apt 0.6, actually?
md5sums?) on a Debian / Debian-derived system.  I would like to see
all binary packages accompanied by information equivalent to the
contents of a changes file, signed in a way that allows bug reporting
tools to check the chain of trust and choose a bug report destination
accordingly.

I believe that the right way to handle this (no, I don't have code in
my back pocket -- yet) is to use a token for package integrity that
can be multiply signed, and on which those signatures can be revoked,
so that an organization can easily delegate release engineering /
update tracking to an internal guru or a consultant they trust, or
spread it across multiple roles and automated processes.  These
integrity tokens should be distributed using a mechanism that makes it
easy to check the current signature set of the token and to add and
revoke signatures in any order, and this mechanism should be proven to
scale to millions of tokens with thousands of signatures on each.

Stating it this way should make it obvious that I have in mind using
single-use GPG keys as integrity tokens and distributing them with a
network of keyservers.  (Not, obviously, the public keyservers, on
which keys that represent things rather than people have no place.) 
Single-use keys would be generated at the conclusion of the package
build cycle, similar to a changes file except one per .deb.  The
sha1sums of the .deb and .dsc would appear in the key's userid, and
full vital data for the binary package, others built in the same
dpkg-buildpackage run, and the source from which it was built go in,
say, the Notation field for the self-signature.
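
Concretely, minting such a token with stock GnuPG might look roughly
like this (a sketch under assumptions: GnuPG 1.x batch-mode key
generation, a hypothetical keyserver name, and the notation-field
part omitted):

# sha1sums of the freshly built artifacts go into the userid
DEB_SHA=$(sha1sum foo_1.2-3_arm.deb | cut -d' ' -f1)
DSC_SHA=$(sha1sum foo_1.2-3.dsc | cut -d' ' -f1)
mkdir -m 700 tokenring
gpg --homedir ./tokenring --batch --gen-key <<EOF
Key-Type: DSA
Key-Length: 1024
Name-Real: deb=$DEB_SHA dsc=$DSC_SHA
Expire-Date: 0
%commit
EOF
# publish the public half, then discard the private half for good
KEYID=$(gpg --homedir ./tokenring --list-keys --with-colons \
        | awk -F: '/^pub/ { print $5 }')
gpg --homedir ./tokenring --keyserver pkg-keys.example.org \
    --send-keys $KEYID
rm ./tokenring/secring.gpg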

The sha1sum of a .deb can thus be used to look up sha1sums for its
parent source and its sibling .debs, and given an sha1sum index to
snapshot.debian.net or the Debian derivative's equivalent, the
single-use public key is a reliable clue to fetch the packages
themselves.  The sha1sum of the .dsc is also in the userid to make it
easy to find other binary packages built from the same source, which
facilitates use cases like the "M out of N security experts" mentioned
below, in which some roleplayers' auditing of packages built from the
same source is good enough.

Once the public half of the key is self-signed with vital data
embedded, the private half is discarded, and the public half is
uploaded to the package keyserver network.  Thereafter, its primary
function is to accumulate signatures (and revocations), which
represent the audit trail through whatever processes, human and
automated, anyone who cares to use the package sees fit.  (Note that
the key isn't used to sign anything but itself, and the sha1sums in
its userid make leakage of the private key 

GPL and linking (was: Urgently need GPL compatible libsnmp5-dev replacement :-()

2005-05-05 Thread Michael K. Edwards
On 5/4/05, Andrew Suffield [EMAIL PROTECTED] wrote:
 [This part of the thread belongs on -legal]

Sorry to spam debian-devel -- and with a long message containing long
paragraphs too, horrors! -- in replying to this.  But that's where
this discussion is actually happening now, and I'm afraid I can't
agree with Andrew's implication that this issue is settled on
debian-legal in favor of the FSF FAQ's interpretation.  This isn't
about license text, this is about GPL FUD and Debian's maintainers and
other contributors, and debian-devel as a whole needs to hear it once
in a while.

I argue largely in the context of US law because it's been convenient
for me to research, but I welcome counter-arguments from other legal
systems -- with concrete references.

 On Wed, May 04, 2005 at 11:51:51PM -0500, Peter Samuelson wrote:
  [Paul TBBle Hampson]
   This of course assumes the phrase "derived work" is legalese for
   "code dependency" or something. I'm sure the GPL actually defines
   what _they_ mean by it...

 The GPL simply defers to copyright law to define derivative work.

Actually, it tries to define "work based on the Program" in terms of
"derivative work" under copyright law, and then incorrectly
paraphrases that definition.  Under contract law (in most US
jurisdictions at least, IANAL, etc.) the recipient is entitled to have
this ambiguity construed against the drafter.  More below.

  I might add that
  claiming a program that uses a library's published API is a derived
  work is a bit shaky from the get-go.  If you actually cut and paste
  code from the library into your program, it's a lot more clear-cut.
 
 We talk about APIs on forums like -legal to save time, because
 everybody (supposedly) knows what we're talking about there. They
 aren't directly relevant, it's just that certain aspects of program
 design will normally have certain legal implications because that's
 how those things are normally implemented.

I think Peter has it right, and I'd like to know what grounds there
may be to demur.  See my recent posts to debian-legal archives for US
case law on the matter, which I (IANAL) would summarize as "published
APIs are not copyrightable in their role as 'methods of operation' as
distinct from their role as creative expression."  It's kind of an odd
stance for the law to have arrived at -- a difference of usage changes
not just whether an act of copying is an infringement but whether the
copied material is copyrightable at all.  But it makes sense in the
context of the prescribed sequence of legal analysis, in which
recognizing a protected use too late in the sequence leaves the
copier open to lawsuits de novo for subsequent acts of copying the
same material.

The last time I know of that the US Supreme Court looked at the issue
-- an appeal from Lotus Development Corporation v. Borland
International, Inc., 49 F.3d 807 (1995) -- they deadlocked 4-4 in one
justice's absence.  The court transcript is fascinating.  The latest
and greatest analysis at circuit court level appears to be Lexmark v.
Static Control (2004).

Yes, the US is not the world.  Other legal systems are perfectly
within their rights to arrive at different conclusions, and the Berne
Convention is far from explicit on the matter.  But what actual
grounds are there for a belief that some particular country's legal
system would rule that the arm's-length use of a published API creates
a derivative work?  Chapter and verse, folks; even if precedents are
not law in your legal system, they're a lot more reliable than
reasoning outside a courtroom with no judge in sight.

 Changing static linking to dynamic, or replacing a linker call with a
 dlopen() call, *always* has precisely *zero* effect on whether
 something is a derivative work or not. A work is created derivative,
 or not, at the time of inception. For source code, this is the time
 when the code is written. The way in which it is compiled is
 irrelevant. For a binary, this is the time when the binary is built
 and linked. A statically linked binary is a derivative work of
 everything it links because it contains verbatim copies of those
 things. Every binary, static, dynamic, or other, is a derivative of
 everything that any part of its source was derived from.

I do not think that the binary part of this analysis is correct in any
country that implements the Berne Convention.  My rebuttal is long
enough without the case law references, but you can find them all in
the debian-legal archives.

Whether statically linked or provided as multiple dynamically linked
files, a program composed of separately identifiable independent works
of authorship is a "collection" (in some countries' implementation,
"compilation") as defined in Article 2 Section 5.  "Derivative works"
are defined in Article 2 Section 3 to be "[t]ranslations, adaptations,
arrangements of music and other alterations of a literary or artistic
work".  These exist as categories of copyrightable works for
completely separate reasons -- 

Re: How to show $arch releaseability (was: Re: How to define a release architecture)

2005-03-22 Thread Michael K. Edwards
On Tue, 22 Mar 2005 11:02:47 +0100, David Schmitt
[EMAIL PROTECTED] wrote:
[snip]
 As Steve mentioned in another mail[1], one of the points where arches offload
 work onto the release team is
 
 3) chasing down, or just waiting on (which means, taking time to poll the
 package's status to find out why a bug tagged 'testing' is still open),
 missing builds or fixes for build failures on individual architectures that
 are blocking RC bugfixes from reaching testing

And it's not just the number of arches.  Slow arches really do make
more work, and it's not just about migrations to testing.  There's a
concrete example right now that shows why a large number of slow
autobuilders, working in parallel, isn't a great solution.  (Although
distcc is another story, if its results are truly reproducible.)

The latest uim FTBFS twice on ARM because of the removal of howl
dependencies from gnome packages.  The rebuilt gnome-vfs2 still hadn't
made it to unstable as of the second try, so the archive wasn't in a
state that any package dependent on one of its binary packages could
be autobuilt.  It would probably build now, but Steve doesn't want to
submit it a third time until the build-depends have actually been
inspected for stray libhowl.la files -- and I can't really blame him. 
We are not swimming in arm buildd cycles.

The inspection will be a painful process for anyone who doesn't have
an otherwise idle ARM handy.  Sure, he (or I) can massage the build
log to construct URLs for files in pool, and pull them to $fast_box,
unpack, and inspect.  But an arm porter is in a better position to do
this inspection, since apt-get build-dep will work without a bunch of
script-fu.  Of course, that's going to pollute the box; so do it in a
nice new chroot.  Better have a mirror of a sane snapshot of unstable
handy.

There are things that can be done to make this easier, starting with
better facilities for snapshotting chroots and overlaying them using
device-mapper or unionfs.  But it does take some commitment by people
who have (access to) boxes they can experiment on -- and preferably
also skills in many programming languages and laboratory-scale systems
administration.

Cheers,
- Michael





Re: How to show $arch releaseability (was: Re: How to define a release architecture)

2005-03-22 Thread Michael K. Edwards
On Tue, 22 Mar 2005 04:58:33 -0800, Steve Langasek [EMAIL PROTECTED] wrote:
 Eh, not particularly.  This inspection can be done on any machine, and
 there's no reason not to just use the fastest one available to you (whether
 that's by CPU, or network); what's needed here is to first identify which
 packages that were used in the build were broken, which can't be done with
 apt-get build-dep even on arm; and then verify whether the packages in
 question have been fixed in a newer version.  In general, just looking at
 apt-get build-dep doesn't guarantee that you haven't overlooked the problem
 that caused the build failure in the first place, and it isn't even
 guaranteed to install the same *packages* (let alone package *versions*) as
 sbuild.

Diffing the logs against a successful build on powerpc caught the
libpanel-applet2-dev (source gnome-vfs2) version skew immediately, and
looking at the changelog confirmed that the version arm pulled in was
built against libhowl and was enough to cause the failure.  But if you
want to be certain that there isn't additional breakage hiding behind
that, you need to inspect the actual packages.
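
One way to spot that skew mechanically -- a sketch, using the same
perl idiom as the one-liner below; it extracts package_version and
ignores the architecture so the two logs line up:

for log in arm.log powerpc.log; do
  perl -ne 'print "$1\n" if m#^Unpacking.*/([^/_]*_[^/_]*)_#' $log \
    | sort > $log.pkgs
done
diff arm.log.pkgs powerpc.log.pkgs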

Here's what I had in mind:
apt-get --download-only build-dep uim
on a clean chroot, arm host, against snapshot.debian.net for the
appropriate date gets you 95% of the way to a matching build system. 
Match
(cd /var/cache/apt/archives && ls *.deb)
against
perl -ane 'print "$1\n" if m#^Unpacking.*/([^/]*\.deb)\)#' arm.log
and you find a couple of packages for which you need to fetch the
precise version from snapshot.

dpkg --contents gets you enough information for this particular case,
but in general you want to actually install them and at least run
configure to verify, say, that autoreconf did the right thing -- and
that requires $arch.  Doesn't necessarily have to be your own, if
you're a DD and have access to developer boxes; but it helps to have a
quick way to bypass debootstrap, and that takes a little planning
ahead.

I'd be interested in seeing something equally simple that follows the
same logic as the apt-get step but doesn't have to be done from $arch.
 Bonus points if it monitors buildd failures and pulls packages to a
local mirror promptly so one doesn't have to go fishing around on
snapshot.

Cheers,
- Michael





Re: How to show $arch releaseability (was: Re: How to define a release architecture)

2005-03-22 Thread Michael K. Edwards
On Tue, 22 Mar 2005 14:15:13 +0100, Wouter Verhelst [EMAIL PROTECTED] wrote:
[snip]
 Except that arm doesn't *have* a large number of slow autobuilders,
 working in parallel. They have four, and are having problems keeping up
 right now.

Precisely.  And four is already pushing the point of diminishing
returns, unless you have a good mechanism for enforcing rules like
"builds against existing packages derived from gnome-vfs2 will be no
good; don't schedule uim until libpanel-applet2-dev, etc. have
actually made it into unstable where buildds can get at them."
Otherwise you get this kind of race condition.

Maybe someone already experienced in wanna-build hacking could
implement a sort of "write barrier" field in changelog entries,
perhaps as "urgency: flush".  This would force all packages that
build-depend on foo-dev (built from foo) to wait until foo/foo-dev
makes it to unstable for $arch before they can be scheduled on an
$arch buildd.  It would also notify appropriate humans that everything
that build-depends on foo-dev needs to be rebuilt, and facilitate
scheduling of these builds ahead of their reverse build dependencies. 
And so forth.
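
A purely hypothetical changelog stanza for such an upload ("flush" is
invented here; no current tool accepts it as an urgency):

foo (0.9.8-2) unstable; urgency=flush

  * Drop libhowl support; everything that build-depends on
    libfoo-dev must be rebuilt against this version first.

 -- A. Maintainer <maint@example.org>  Tue, 22 Mar 2005 12:00:00 +0000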

This would help reduce pointless FTBFS like this uim run.  But the
bottom line is that some rebuild sequences are intrinsically
sequential, and late in a release cycle is when they hit the hardest,
because issues like the howl license finally get forced.

Cheers,
- Michael





Re: How to show $arch releaseability

2005-03-22 Thread Michael K. Edwards
On Tue, 22 Mar 2005 14:07:32 +0100, Simon Richter
[EMAIL PROTECTED] wrote:
 That sounds more like a case of too-loose build-dependencies to me
 rather than architecture specific problems. This can also hit i386, the
 fact that it hit ARM this time is sheer coincidence.

Should the uim maintainer have to add versioned build-depends because
gnome-vfs2 had to drop howl support?  Without good tracking tools,
that's tough.  Something like the "urgency: flush" mechanism would be
better.
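
For comparison, the hand-maintained alternative would be a versioned
build-dependency in uim's debian/control, something like this (the
version constraint is hypothetical):

Build-Depends: libpanel-applet2-dev (>= 2.8.3-2)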

i386 doesn't get hit by it often because most people upload i386
binaries and the wanna-build queue is almost always empty.  Race
conditions are exposed by parallel architectures.

Cheers,
- Michael





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-22 Thread Michael K. Edwards
On Tue, 22 Mar 2005 12:14:17 +0100, Adrian Bunk [EMAIL PROTECTED] wrote:
 On Mon, Mar 21, 2005 at 06:50:22PM -0800, Steve Langasek wrote:
 ...
  The top three things I've spent release management time on that I shouldn't
  have had to are, in no discernable order:
 
  1) processing new RC bug reports to set sarge/sid tags appropriately, so
  that the RC bug list for sarge bears some resemblance to reality

To the extent that maintainers accept upstream's crack-of-the-day into
sid, relying on a not-for-sarge mechanism instead of letting the bugs
pile up upstream, the testing scripts do worsen the traffic late in
the release cycle.  Beats binge-purge, if you ask me, but YMMV. 
During a freeze-by-whatever-mechanism, becoming informed about whether
allowing a given update would improve or worsen the situation takes
effort; sarge/sid tags are part of that analysis.  I think Steve is
mostly saying that scrubbing the raw bug report data by disambiguating
sarge and sid bugs shouldn't be the release manager's job.

  2) prodding maintainers to get all packages associated with library soname
  transitions synchronized so that they can progress into testing together
  (seems to require an average of 2-3 iterations, and 3-4 weeks)

Yep, this is a lot of work.  The alternatives are unbuildable packages
(left behind by a transition) or multiple versions of core libraries. 
The relevant packaging teams are getting the hang of it, though, and
testing is a good tool for measuring progress towards an engineered
release.  Again, I think Steve wants this to be more of a
maintainer/porter responsibility during more of the cycle.

  3) chasing down, or just waiting on (which means, taking time to poll the
  package's status to find out why a bug tagged 'testing' is still open),
  missing builds or fixes for build failures on individual architectures that
  are blocking RC bugfixes from reaching testing

Comments elsewhere; but I certainly don't think this is caused by
testing.  Don't shoot the messenger.

  Taken together, these probably account for more of my release management
  time than anything else, including actual work on release-critical bugs.

Well, if it were efficient for the RM to be doing bug fixes himself,
something would be broken.  :-)

 Is it correct that none of these three points that account for most of
 your release management time would exist in a release process without
 testing?

Doubtless the timing patterns would be different; but if you want
sarge not to suck, it has to follow a meaningful bug metric down to
(near) zero, contain a choreographed release of libraries and desktop
integration, and the polish that comes from fixing bugs all the way
instead of ignoring corner cases.  None of these challenges is unique
to a testing-mediated release process.

Cheers,
- Michael





Re: The 98% and N=2 criteria

2005-03-21 Thread Michael K. Edwards
On Mon, 21 Mar 2005 15:02:39 +0100, Wouter Verhelst [EMAIL PROTECTED] wrote:
[snip]
 Uh. Most porting bugs that require attention fall in one of the
 following areas:
 * Toolchain problems (Internal Compiler Errors, mostly)
 * Mistakes made by the packager. Quite easy to fix, usually.
 * Incorrect assumptions in the source code. These are becoming
   increasingly rare these days, IME.

The challenging bugs tend to involve all three factors.  For instance,
I was just looking at a FTBFS for lineak-xosdplugin on Ubuntu/ia64. 
It turned out that there was:

- a historical toolchain problem that XFree86 kludged around with a
static Xinerama_pic library
- an incorrect assumption upstream (adding -lXinerama_pic both to the
xosd library build and to the xosd-config output)
- out-of-date packaging; for XFree86 4.3 and Xorg, the package should
build-depend on libxinerama-dev and build-conflict (if there were such
a thing) with xlibs-static-pic
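
In control-file terms that last fix sketches out as follows (versions
omitted; whether the era's autobuilders actually enforce
Build-Conflicts is a separate question):

Build-Depends: libxosd-dev, libxinerama-dev
Build-Conflicts: xlibs-static-pic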

Note that the xosd build itself did not break; but packages that use
it FTBFS when there's a distinct Xinerama_pic, and may be subtlely
broken on platforms where -lXinerama gets linked statically into
libxosd.so.2 but is still included in the xosd-config output.

Having a big matrix of platforms vs. packages, on which packagers are
expected to take problems seriously, is really good for software
quality.  It's easy to dismiss FTBFS on minority platforms as "buggy
gcc back end", and that may have been the most common cause during the
gcc 3.[0-2] interval.  But there are plenty of other root causes,
especially outside C/C++ space, and ignoring them can cause serious
build rot over time.

For instance, on some platforms, the Ocaml compiler (itself an Ocaml
bytecode in its basic incarnation) also comes in native (compiler
binaries that produce bytecode) and/or optimizing (produces stubs
and shared objects instead) varieties.  Some library upstreams confuse
the two when deciding whether to build the optimized form, leading to
FTBFS on hppa.  Debian packagers are expected to fix that, which makes
a big difference to portability onto new platforms.

It's a lot easier to focus on the arches that upstream cares about, or
to define a set of core packages and deprioritize FTBFS on the rest
(even if the problem is actually in the core package).  But that
doesn't accomplish what I look to Debian for.  Debian's the single
most effective caretaker of the software commons largely because it
doesn't settle for build rot, and pushes that expectation to the outer
limit of feasibility with each release cycle.

Cheers,
- Michael





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-18 Thread Michael K. Edwards
AJ's categorization has some traction, but I think it's a somewhat
short-term perspective.  Just because a full Debian doesn't usually
fit today's embedded footprint doesn't mean it won't fit tomorrow's,
and in the meantime Debian's toolchain, kernel, and initrd-tools are
probably the best embedded Linux development and packaging environment
going.  Doubly so if you respect the spirit of open source development
and feel obliged to enable end users to reproduce firmware after a
source-level change.

I think Sarge on ARM has the potential to greatly reduce the learning
curve for some kinds of embedded development, especially if Iyonix
succeeds in its niche (long live the Acorn!).  In particular I look
forward to being able to woo certain mobile computing colleagues,
currently doomed to PocketPC, with a proper native development
environment.  The same goes for some apparent doorstop arches:
mipsel in networking and storage (e. g., SoC from Broadcom in
set-tops, wireless gateways, and micro-NAS) and m68k in device control
(68332 peripheral support, anyone?).

On the other hand, enterprise sparc boxes with niceties like hot-swap
PCI make lovely debian targets, and sparc64 may prove as practical on
the high end as p(ower)?pc64 or even amd64.  And these big girls
haven't lost touch with their little sisters.  In the etch time frame,
sparc32 looks to me mostly like an embedded architecture (sparc-based
CompactPCI boards remain in use in industrial automation) and powerpc
(ppc32) isn't far behind.  On present trends, i386 (amd32 :) ) will be
a doorstop/embedded arch for etch+1 at the latest.

That leaves mips (big-endian), hppa, alpha, and s390.  Not so much
doorstops as space heaters; some people might put ia64 in this
category too.  To my mind, they remain interesting because they cover
more parameter space in terms of instruction set design, cache
architecture, and relative speeds and sizes of
CPU/memory/interconnect.  When something close to a common kernel
source base works adequately on all of them, it starts to look
production-ready.

Likewise, minority-architecture autobuilders are one reason why Debian
is really the only organization I trust to QA a toolchain any more. 
For instance, compiling KDE for all of them expands the C++ stress
test in a really useful way.  Even better if at least a couple of
people actually run big globs of GUI on their kotatsu and catch
run-time problems like #292673 (grave glibc bug, spotted with
evolution on ia64).

Although sarge's long cycle has been frustrating for many people, if
you ask me it's just as well that Debian never put the label "stable"
on kernel 2.6.7 (i. e., pre-objrmap), gcc 3.2.3+ (not just C++, but
nagging C and optimizer problems, often exposed by non-i386 kernels,
in all previous 3.x), or glibc 2.3.x (before next week or so, given
#292673).  By comparison, the next year's core changes are likely to
be much more incremental in nature, with a few exceptions we can
already see coming (UTF-8 everywhere, the rest of FLOSS Java in main,
Perl 6 :-) ) and one big asterisk (biarch, aka dpkg 2 (c: ).

None of that says that the world has a right to put the burden of
sysadmining the broadest single software QA effort in history on the
Debian release team's shoulders.  But if specific technical problems
can be identified and addressed to where the infrastructure equipment
and teams can stand it, keeping Debian Universal for at least one more
cycle would be Herculean but not impossible.  I think this is one of
those cases where the last 20% of the effort invested (coaxing along
minority architectures) provides 80% of the value ("stable" actually
means something).

Or look at it this way:  supporting minority architectures has
revealed all sorts of scalability problems in Debian.  Some of those
problems will be really nasty if we wait until the major architectures
are in crisis to face them.  The doorstops are the canaries in the
coal mine that start to suffocate before the big guys notice air
quality problems.  Don't like performing CPR on canaries?  Don't put
'em down in coal mines!  Wait, there's something wrong with that logic
...

Cheers,
- Michael





Re: rudeness in general

2005-01-12 Thread Michael K. Edwards
On Wed, 12 Jan 2005 18:09:18 +0100, Helmut Wollmersdorfer
[EMAIL PROTECTED] wrote:
[snip]
 My few attempts to step into debian as a contributor ended after some
 hours of senseless discussions or waste of time against unnecessary
 barriers. Compared against average OSS, or OSS where I contribute,
 debian seems to be on the bad side - IMHO.

debian-devel reminds me of alt.folklore.urban once upon a time.  Some
actual discussion of substance taking place, occasionally on a new
topic but more often in response to a new bit of evidence or a new
participant's opinion on a well-worn topic.  Regulars varying from the
saintly to the obnoxious (sometimes the same person with different
degrees of caffeination).  Visitors asking naive questions (and
getting anything from clear, concise answers to cleverly snarky
half-answers to RTFFAQ), trolling, debating furiously and then
disappearing, and occasionally scattering pearls of wisdom.  And, of
course, the occasional meta-discussion on community standards.

Then again, alt.folklore.urban was trying neither to build a better
operating system nor to take over the world.  Some Debian Developers
and other contributors are trying for one or both.  If you wanna join
the circus, better learn to tell the geeks from the rubes.  ;-)

Cheers,
- Michael





Re: Why does Debian distributed firmware not need to be Depends: upon? [was Re: LCC and blobs]

2005-01-10 Thread Michael K. Edwards
On Sun, 9 Jan 2005 22:01:52 -0800, Steve Langasek [EMAIL PROTECTED] wrote:
 It is not enough to say that you *could* create free firmware files.  As a
 user of xpdf, I can unequivocally say that there are pdfs that I have full
 rights to, because *I created them*.  I cannot say that about firmware
 files.  If you have a free firmware file that works with the driver in
 question, please produce it for us to see.  It should become part of the
 package immediately, and be loaded by default by the driver.
 
 If, on the other hand, we know that the driver needs to load firmware from
 disk before it can actually be usable with any device, and we don't have any
 real, working firmware images that are free, it is disingenuous to handwave
 this away by saying that free firmware could exist.  We either have free
 firmware for use with the device, or we don't.  If we don't, then the driver
 won't work for our users without additional effort, and we should be honest
 about that.

I think the best way to be honest about that is to exclude non-free
firmware images from the kernel binary and modules themselves but to
permit loading them from the initrd or the root filesystem.  Initrd
images in main shouldn't contain non-free firmware; initrd images in
non-free may (presuming that they are legitimately distributable), and
Debian's mkinitrd tools are available (and quite usable) for
sophisticated users to roll their own.
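
The repack itself is mechanical; here's a sketch assuming an
ext2-style 2.6-era initrd, with all file names illustrative:

    # Hypothetical repack of an initrd image with non-free firmware added:
    gunzip -c initrd.img > initrd.ext2
    mount -o loop initrd.ext2 /mnt
    cp vendor-firmware.bin /mnt/lib/firmware/   # exact path depends on driver
    umount /mnt
    gzip -9 -c initrd.ext2 > initrd-nonfree.img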

Depending on what happens at the day job, I may have a chance to put
in some effort along those lines as I migrate their platform to 2.6.x
kernels.

Cheers,
- Michael





Re: GPL and LGPL issues for LCC, or lack thereof

2004-12-17 Thread Michael K. Edwards
Hopefully this continues to be interesting to debian-devel readers. 
Perhaps replies should go to debian-legal; GMail doesn't seem to let
me set Followup-To, but feel free to do so if you think best.

I have copied Eben Moglen (General Counsel to the FSF) at Bruce's
suggestion.  Mr. Moglen, I am not a lawyer, but I'm doing my humble
best to match the (L)GPL up to recent case law.  Your name was invoked
by Bruce Perens with regard to an analysis of the Red Hat service
contract and the GPL, and if you have the time, I would value your
comments.

This debate arose in the context of a discussion about the Linux Core
Consortium's proposal to share binaries among GNU/Linux distros.  I
don't think the LGPL ought to permit ISVs to discriminate between
users who link against these golden binaries and those who link
against functional equivalents, and I don't think that distros ought
to offer ISVs this illusory solution to their (perceived) quality and
interoperability problems.  I also think, based on the actual text of
the LGPL, that it may already be enforceable as a ban against this
discrimination.

You can find previous messages in this thread at
http://lists.debian.org/debian-devel/2004/12/thrd3.html starting at
Linux Core Consortium (Bruce Perens).

me> What part of "normally distributed ... with ... the operating system" is
me> confusing?

Bruce> The license requires that the source code of all of the pieces that
Bruce> constitute a derivative work of some original piece of GPL code must
Bruce> be provided. This would be the original GPL program and the scripts
Bruce> used to build it, and any other code you link into it and the scripts
Bruce> used to build that. The build tools are separate works that never
Bruce> become derivative of the GPL program.

I am talking about the LGPL here, not the GPL.  Please re-read
sections 5 and 6 of the LGPL for the definition of "work that uses the
Library" (which assumes that it only becomes a derivative work after
linking) and the special exception to the GPL requirement of
releasing source code.  As I read it, the exception clause can and
does place limits on the allowable build tools even though they are
not derivative works.

It may not have been the original intention of the GPL authors to
address the availability of build tools.  But the language I cited
from LGPL section 6 seems to succeed in the intention stated in the
preamble, which includes: "If you link other code with the library,
you must provide complete object files to the recipients, so that they
can relink them with the library after making changes to the library
and recompiling it."  You can't do that if you don't have the full
means of recompiling it.

Bruce> Conceivably a contract could require that you specify the build
Bruce> system, but the GPL doesn't purport to be a contract. It is designed
Bruce> to only grant rights, not restrict any rights that you would
Bruce> otherwise have. This also side-steps the issue of consent.

I am having a hard time believing that you are arguing that the GPL is
not an enforceable contract.

If it's not a contract, then what is it?  At least under U. S. law,
I'm not aware of any other theory under which copyright license can be
granted (fair use and the like are acceptable justifications for
using copyright material without a license, but do not result in a
grant of license).  Even a grant with no return consideration is
considered a unilateral contract, and the (L)GPL is certainly not
intended to be a unilateral grant.  A grant of license in return for
specific performance is in fact a contract whether there's a signature
or not.  The distributor can choose whether or not to accept the
offered contract, but if they don't choose to do so, they can't claim
to have received a copyright license.

For an analysis of similar issues in an offer of copyright license,
see Fosson v. Palace Waterland (1995) at
http://caselaw.lp.findlaw.com/data2/circs/9th/940.html .  You will
again have to go beyond a superficial reading of who won, because
issues of fact went against the copyright holder.  Here are a few
excerpts (see the original for references to other cases):

[excerpts]

Under California law, an offer is "a manifestation of willingness to
enter into a bargain, so made as to justify another person in
understanding that his assent to that bargain is invited and will
conclude it."

... under California law, acceptance is the `manifestation of assent
to the terms thereof made by the offeree in a manner invited or
required by the offer.'

... under California law, where no time limit is specified for the
performance of an act, a reasonable time is implied.  ...  Thus, the
failure to specify a timeframe for payment of the license fee would
not render the contract illusory.

If doubt [exists] as to whether the agreement was bilateral or
unilateral, such doubt would have to be resolved by interpreting the
agreement to be bilateral. . . . There is a presumption in favor of
interpreting ambiguous 

Re: GPL and LGPL issues for LCC, or lack thereof

2004-12-17 Thread Michael K. Edwards
On re-reading the sequence of events, it looks like I was the one who
switched the context of the hypothetical reproducible build tools
obligation from GPL to LGPL.  Bruce, my apologies for implying that
you were the one who switched contexts.  So we seem to agree that the
support for this requirement isn't adequate in the GPL (which I
consider to be a flaw in the GPL).

I think the support is adequate in the LGPL, as my most recent e-mail
elaborates.  Presumably that's what is really at issue (at a strictly
legal level) in the LCC; proprietary applications don't usually link
against GPL libraries, since most ISVs consider the GPL likely to be
enforceable.  For code under other licenses, I have to fall back on
the DFSG to contend that Debian shouldn't encourage efforts to
standardize binaries.  I find arguments from the Social Contract and
hypothetical benefits to users unpersuasive.

Cheers,
- Michael




Re: GPL and LGPL issues for LCC, or lack thereof

2004-12-17 Thread Michael K. Edwards
I'll try to address the Specht case and summarize, and we can call
this an end to the discussion if that's what you want.

Bruce> You can read a case on the nature of consent such as Specht v.
Bruce> Netscape, which might convince you that we don't necessarily get
Bruce> sufficient consent on the licenses that we distribute for them to
Bruce> bind as contracts.

Specht v. Netscape 2001 (if we are talking about
http://www.nysd.uscourts.gov/courtweb/pdf/D02NYSC/01-07482.PDF )
relies on a theory of final (retail) sale under the California version
of the Uniform Commercial Code.  The opinion doesn't even contain the
word "copyright".

If you want to argue that the Specht case applies to a distro's or an
ISV's use of LGPL code, then you are saying that the (L)GPL isn't
enforceable at all in the US unless the copyright holder takes
technical measures to require all recipients to read the license. 
Even this probably isn't true, because what distro or ISV can
plausibly claim to be ignorant of the (L)GPL, as the Specht plaintiffs
claimed to be ignorant of the arbitration clause in Netscape's
license?

Here's a precis of those "volumes of contract cases" (two actually,
for which I gave the URLs, both addressing copyright licenses), plus
Specht:

LGPL is an illusory contract (a factual question, the criteria
for which are discussed in Fosson)
 or not enough consent to bind as contract (a factual question,
discussed in Specht)
 => no contract
 => copyright law applies, and in the absence of a positive defense
such as fair use, likely to succeed on merits.

Alternately,
 valid contract
 and violation of terms (a factual question, Sun)
 => likely to succeed on merits;
 likely to succeed on merits
 and terms are license restrictions (vs. contractual covenants, a
factual question, Sun) => conduct is outside license and copyright law
applies.

 Copyright law applies
 => automatic presumption of irreparable harm;
 Irreparable harm
 and likely to succeed on merits
 => preliminary injunction.

That's the entire scope of what I claim to have read in those
appellate decisions.  If it's correct, the only open question is
whether the limits on the exception granted in LGPL Section 6 permit
the hypothesized conduct of the ISV and/or the distro.  The remaining
factual issues do appear to me to be clear in the LGPL (the Fosson
criteria succeed and Specht doesn't fit the facts), but don't even
affect the conclusion (both contract and no contract cases are
covered).

You also claim that the LGPL doesn't require constructive availability
of tools required to exercise the right to modify, that the (L)GPL is
a unilateral and unconditional grant of rights under some theory other
than contract, and that the act of distribution is somehow separable
from the terms of a commercial software license.  I believe that I
have already adequately refuted these assertions.  Did I miss any
other arguments?

Cheers,
- Michael




Re: Linux Core Consortium

2004-12-16 Thread Michael K. Edwards
me> binutils and modutils both depend on it.

Bruce> On flex? No. At least not in unstable.

Sorry, I meant to write Build-Depend.

me> Or is the LCC proposing to standardize on a set of binaries without
me> specifying the toolchain that's used to reproduce them?

Bruce> Linking and calling conventions should be standardized and should
Bruce> survive for reasonably long. We need to know what we use to build the
Bruce> packages, but we are not currently proposing to standardize
Bruce> development tools on the target system.

Agreed there needn't be development tools on the target system.  But
the development system itself needs to be fully and accurately
specified, both among the participating distros and to the end users. 
That's what it takes to satisfy the letter of the GPL, at least as I
read it, and it's certainly the standard to which Debian packages are
held.  It's going beyond the level of effort that has historically
been put into binary distributions, but I don't think it's too much to
ask in this context.

me> Not having a policy is also a choice.  For a variety of reasons, a
me> written policy about legal and technical issues can be a handicap to
me> the sort of calculated risk that many business decisions boil down to.

Bruce> "The sort of calculated risk that many business decisions boil down
Bruce> to" is too vague to have meaning. What you may be groping at is that
Bruce> some publicized policy can be taken as a promise. The organizations
Bruce> participating in LCC have chosen to make such promises.

I wasn't groping, I was trying to leave it to the reader's imagination
rather than rehash old flamewars.  On the legal side, for instance,
some distros have been known to be cavalier about GPL+QPL and
GPL+SSLeay license incompatibilities.  On the technical side,
expecting releases to be self-hosting can constrain release timing
relative to toolchain changes.

I tend to be skeptical of promises that I think are logically
contradictory.  Promising ISVs that they need only test against
"golden builds", while promising end users the Four Freedoms, doesn't
add up.

Note that if Distro X distributes both NonFreeWareApp and glibc, and
only offers technical support on NonFreeWareApp to those who don't
recompile their glibc, then Distro X's right to distribute glibc under
the LGPL is automatically revoked.  (IANAL, etc., etc., but I don't
see much ambiguity in this.)

Cheers,
- Michael




Re: Linux Core Consortium

2004-12-16 Thread Michael K. Edwards
On Thu, 16 Dec 2004 21:25:38 +0100, Wouter Verhelst [EMAIL PROTECTED] wrote:
[snip]
 Well, frankly, I don't care what [ISVs] think is 'viable'.

I do care.  Apparently some ISVs think a common binary core is
viable.  I think they might change their minds if the argument against
golden binaries is made with reference to historical reality (golden
binaries don't stay golden) as well as Free Software principle, and if
their problem is taken seriously and respectfully enough to propose a
better answer.  It is not enlightenment to despise the unenlightened.

Cheers,
- Michael




Re: GPL and LGPL issues for LCC, or lack thereof

2004-12-16 Thread Michael K. Edwards
This probably belongs on debian-legal, but let's go one more round on
debian-devel given the scope of the LCC's potential impact on Debian. 
(Personally, I'm more interested in the question of whether agreeing
to consecrate particular binaries contravenes a distro's commitment to
the Four Freedoms than I am in the legal niceties; but I feel obliged
to substantiate my earlier assertions.)  As always, IANAL.

[Bruce quoting the GPL]
    However, as a
  special exception, the source code distributed need not include
  anything that is normally distributed (in either source or binary
  form) with the major components (compiler, kernel, and so on) of the
  operating system on which the executable runs, unless that component
  itself accompanies the executable.

Bruce> This does not require specification of the development environment.

What part of "normally distributed ... with ... the operating system"
is confusing?  As written, this requires that any build requirement
that doesn't come with the operating system must be available, in
source code form, from the entity distributing the compiled Program. 
If the compiler required is some magic binary that isn't distributed
with the operating system, then this clause isn't met.

I will grant you that this clause was written in the days when
commercial operating systems shipped with the C compiler bundled, and
has since been interpreted to mean "as long as the development
requirements are obtainable on the same terms as those needed to
compile code in the same language, with similar functionality, that
you wrote yourself" -- more or less.  That's why I wrote "the letter
of the GPL".

What I expect from binary distributions of free software is that I can
reproduce the binaries from the source code without undue effort, so
that I can then exercise my freedom to modify it without debugging
major functional regressions first.  Given the complexity of a full
GNU/Linux distro, I don't go around crying GPL violation every time
something fails to build from source (or fails to work right when
rebuilt).  But I do think, based on the above-cited license text, that
it's a technical violation of the GPL as well as a sign of poor
software engineering process.

me> Note that if Distro X distributes both NonFreeWareApp and glibc, and
me> only offers technical support on NonFreeWareApp to those who don't
me> recompile their glibc, then Distro X's right to distribute glibc under
me> the LGPL is automatically revoked.

Bruce> The word "support" does not appear in the LGPL. What I do see is:
Bruce>
Bruce>   Activities other than copying, distribution and modification are
Bruce>   not covered by this License; they are outside its scope.
Bruce>
Bruce> This would imply that support is outside the scope of the license. I
Bruce> don't see any language in the LGPL specifying a support obligation of
Bruce> any kind.

I'm relying on LGPL 6, which addresses distribution terms:

"As an exception to the Sections above, you may also combine or link a
work that uses the Library with the Library to produce a work
containing portions of the Library, and distribute that work under
terms of your choice, provided that the terms permit modification of
the work for the customer's own use and reverse engineering for
debugging such modifications."


At first blush, I would expect "distribute ... under terms of your
choice" to refer to the entire contract between licensor and
licensee (if we are talking about software that is "licensed" and
not "sold"; the common-law contract between seller and buyer is a whole
different animal).  If that contract includes a support clause, and
the support clause does not permit modification of the work without
loss of some of the economic benefits of the contract, then one could
argue that this exception (from the requirement to offer source as
per the GPL) should not apply, and that the distributor must either
offer source or refrain from distributing the LGPL material.

In fact, it depends on the legal regime that applies.  Consider a
recent (1999) decision of the U. S. 9th Circuit Court of Appeals, Sun
v. Microsoft, which may be found at
http://caselaw.lp.findlaw.com/data2/circs/9th/9915046.html (FindLaw is
cool).  The lower court had granted Sun a preliminary injunction
against Microsoft's continued distribution of JVMs incompatible with
Sun's Java standard.  The appeals court vacated the district court's
injunction, stating:

[9] The enforcement of a copyright license raises issues
that lie at the intersection of copyright and contract law, an
area of law that is not yet well developed. We must decide an
issue of first impression: whether, where two sophisticated
parties have negotiated a copyright license and dispute its
scope, the copyright holder who has demonstrated likely suc-
cess on the merits is entitled to a presumption of irreparable
harm. We hold that it is, but only after the copyright holder
has established that the disputed terms are limitations on the
scope of the license 

Re: Linux Core Consortium

2004-12-15 Thread Michael K. Edwards
 Bruce

 Well, please don't tell this to all of the people who we are attempting
 to get to use Linux as the core of their products.

core (software architecture) != core (customer value).

 Also, please make sure to tell the upstream maintainers that we aren't
 going to use their code any longer, because we have decided that it's a
 bad idea to outsource the core of our product.

Debian isn't a product, it's a project, and the core of the project
isn't code, it's principles and processes.  Outsourcing the core of
Debian would be delegating judgements about software freeness and
integrity.

Cheers,
- Michael




Re: Linux Core Consortium

2004-12-15 Thread Michael K. Edwards
Whoops, I guess that's what I get for trying to be concise for once. 
I'll try again.

Bruce> Well, please don't tell this [i. e., outsourcing your core is a bad
Bruce> idea] to all of the people who we are attempting to get to use Linux
Bruce> as the core of their products.

me> core (software architecture) != core (customer value).

In other words, Bruce seemed to be conflating the usage of "core" in a
software architecture sense (kernel, toolchain, libraries) with "core"
in a business sense (value proposition to the customer).  It's smart
for many businesses to adopt the GNU/Linux core precisely because
writing operating systems isn't their own core competence and wouldn't
make their products better.

Bruce> Also, please make sure to tell the upstream maintainers that we
Bruce> aren't going to use their code any longer, because we have decided
Bruce> that it's a bad idea to outsource the core of our product.

me> Debian isn't a product, it's a project,

[snip Manoj's response, which seems to have been aimed at someone else]

me> and the core of the project isn't code, it's principles and
me> processes.  Outsourcing the core of Debian would be delegating
me> judgements about software freeness and integrity.

What I was trying to say is that Linux (or any other chunk of upstream
code) doesn't represent the core of Debian, so Bruce's argument that
we've already outsourced our core doesn't hold water.  Our core is the
DFSG and the Social Contract, plus the processes we have in place to
deliver on the promises they contain.

I would argue that any strategy that consecrates particular binaries
-- even those built by Debian maintainers -- flies in the face of
those principles and processes.  Even a commitment to sync up at the
source code level constitutes a delegation of judgments about how to
maintain software integrity, and it risks a delegation of judgments
about freeness (think firmware BLOBs, or the XFree86 license changes).
 That's the part of what the LCC proposes which I think would
constitute outsourcing Debian's core.

Is there a paper tiger lurking in there somewhere?

Cheers, 
- Michael




Re: Linux Core Consortium

2004-12-15 Thread Michael K. Edwards
Bruce> Fortunately, flex isn't in the problem space. If you stick to what
Bruce> version of libc, etc., it'll make more sense.

Flex isn't in the problem space if we're talking core ABIs.  But it
certainly is if we're talking core implementations, as binutils and
modutils both depend on it.  Or is the LCC proposing to standardize on
a set of binaries without specifying the toolchain that's used to
reproduce them?

Bruce> Do you know of any other distribution that has taken the trouble to
Bruce> write down as much policy as Debian has? It's not clear that the
Bruce> others have anything to put against it.

Not having a policy is also a choice.  For a variety of reasons, a
written policy about legal and technical issues can be a handicap to
the sort of calculated risk that many business decisions boil down to.
 Debian has flamewars about license compatibility and degree of
dependency on non-free materials precisely because it has a policy and
tries to abide by it.

But again, you may not always get what you pay for, but you rarely
fail to pay for what you get.  If all distros were as sensitive as
Debian is to questions of reproducibility from unencumbered source
code and build environments, then perhaps we wouldn't be debating the
need for golden binaries.

Cheers,
- Michael




Re: Linux Core Consortium

2004-12-14 Thread Michael K. Edwards
>> me
> Ian Murdock (quotes out of order)

>> If the LSB only attempts to certify things that haven't forked, then
>> it's a no-op.  Well, that's not quite fair; I have found it useful to
>> bootstrap a porting effort using lsb-rpm.  But for it to be a software
>> operating environment and not just a software porting environment, it
>> needs to have a non-trivial scope, which means an investment by both
>> ISVs and distros.
>
> That's precisely what we're trying to do. :-)

The context of my remark was your claim that "by definition, the LSB
codifies existing standards", i.e., things everyone already agree[s]
with.  If that's
true, then the LSB doesn't represent a non-trivial investment on the
distros' part, and no one should be surprised that the ISVs don't care
about it.  Agreeing on a set of LCC binaries would be non-trivial for
the distros but its entire justification would be that it's trivial
for the ISVs.  That would be fine (on the assumption that ISV success
is what matters) except that I don't think it will work.

I respect your efforts to make commercial software on Linux more
viable.  But be careful what you wish for -- you might get it.  Make
it impossible to remove implementation warts because the binaries are
more authoritative than the specification, and pretty soon you have a
thriving ISV market -- for antivirus software, system repair
utilities, interface hacks, and virtual machines to graft
unreproducible system images onto new hardware.

 Wishing the ISVs operated a different way doesn't really get us any
 closer to a solution..

You seem to have missed my point.  I don't wish for ISVs to operate
a different way; I cope daily with the way that they _do_ operate, and
am distressed by proposals that I think will make it worse.  In my
opinion, catering to poor software engineering practice by applying
Holy Penguin Pee to a particular set of core binaries is unwise.  I
would have expected you and Bruce to agree -- in fact, to hold rather
more radical opinions than I do -- so I must be missing something. 
Here's the case I would expect a Debian founder to be making.

In short, I don't think that ISVs can afford for their releases to
become hothouse flowers that only grow in the soil in which they
germinated.  It's understandable for ISVs to pressure distros to pool
their QA efforts and to propose that this be done at a tactical level
by sharing binaries.  But I think that's based on a naive analysis of
the quality problem.  Inconsistent implementation behavior from one
build to another is generally accompanied by similar inconsistencies
from one use case to another, which don't magically get better by
doing fewer builds.

The requirement that software run on multiple distros today serves as
a proxy for the much more important requirement that it run on the
same distro today and tomorrow.  It's a poor substitute for testing in
a controlled environment, exposing only functionality which is common
denominator among distros, but that takes skill, understanding, and
labor beyond what ISVs are willing to invest.  In practice, there are
still going to be things that break when bits change out from under
them.  But it makes a great deal of difference whether an ISV is
committed to fixing accidental dependencies on binary internals.

Suppose I want to use an ISV's product on Debian myself, or to support
its use on systems that I control.  Usually the ISV's approach to
Linux is to list a set of supported RedHat and SuSE distributions
(often from years ago) on which they have more or less tested their
software.  That gives me some idea of what its de facto environmental
requirements are.  Then I reverse engineer the actual requirements
(shared library versions, which shell /bin/sh is assumed to be, which
JVM installed where, etc.) and select/adapt Debian packages
accordingly.  This is a pain but at least I get to use competently
packaged open source code for all of the supporting components, and I
can fix things incrementally without expecting much help from the ISV.

If I'm going to go to this trouble, it's actually to my advantage that
Debian packages are completely independently built -- separate
packaging system, separate configure parameters, separately chosen
patches.  I find a lot of things up front that would otherwise hit me
in the middle of an urgent upgrade -- calls to APIs that are supposed
to be private, incorrectly linked shared objects (I do an ldd -r -v on
everything), code built with dubious toolchains, weird uses of
LD_LIBRARY_PATH, FHS violations.  Sometimes ISVs even appreciate a
heads-up about these problems and fix them in the next build.
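
For concreteness, the sort of sweep I mean, sketched against a
hypothetical vendor tree under /opt/isvapp:

    # ldd -r also performs data/function relocations, so it reports
    # undefined symbols that a plain ldd run would miss.
    find /opt/isvapp -type f | while read f; do
        file "$f" | grep -q ELF || continue
        echo "== $f"
        ldd -r "$f" 2>&1 | grep -E 'not found|undefined symbol'
    done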

Given this strategy for ISV handling, obviously I prefer the ABI /
test kit approach to distro convergence.  For one thing, if commercial
distros cooperated that way, it would make the reverse engineering
easier.  More importantly, any ISV which publicly buys into ABI-based
testing will immediately gain credibility with me in a way that I can
explain to 

Re: Linux Core Consortium

2004-12-09 Thread Michael K. Edwards
Name changes are a superficial design flaw that obscures the
fundamental design flaw in this proposal -- sharing binaries between
Linux distributions is a bad idea to begin with.

Fixing ABI forks, and articulating best known practices about managing
ABI evolution going forward, that's a good idea.  Building an open
source test kit that exercises the shared ABIs, validating that the
test kit builds substantially the same on each distro, and helping
ISVs resolve issues that the test kit missed (and add them as new test
cases), that's even better.  But if two competent packagers, working
on different distros, can't get the same ABI out of the same source
code, then upstream's build procedures are badly broken -- and I don't
want that papered over by passing binaries around!

From the point of view of a user of commercial software, I want to do
business with ISVs that take responsibility for the proper functioning
of their software on a system that is "reasonably compatible" with
their anticipated target environment.  Exposed ABIs, as verified by a
test kit, are an appropriate standard of "reasonably compatible".
It's in everyone's interest for that test kit to be correct and
thorough, which is a good thing, because it's a lot of work to build
and maintain it.

I prefer open source platforms for a number of reasons.  One is that
it's the source code, not any particular binary, that is authoritative
about how things should work.  In principle, one can bug-fix and
rebuild any components in any order without breaking the system.  My
experience has been that the more faithful a distro is to this
principle, and the more work is put into abiding by it, the more
likely it is that I will be able to use its binaries unaltered! 
That's one reason why I choose Debian when I have the option.

ISVs and IHVs who think that binaries shared among distros will help
them manage tech support costs and quality issues aren't thinking
along these lines, perhaps because testimonials like mine tend to be
anecdotal.  Maybe they could be persuaded by quantitative evidence. 
I'm not in a great position to gather that evidence; perhaps someone
else is?

Cheers,
- Michael




Re: Linux Core Consortium

2004-12-09 Thread Michael K. Edwards
If ISVs want "exactly the same", they are free to install a chroot
environment containing the binaries they certify against and to supply
a kernel that they expect their customers to use.  That's the approach
I've had to take when bundling third-party binaries built by people
who were under the illusion that "exactly the same" was a reasonable
thing to ask for.  Once I got things working in my chroot, and
automated the construction of the chroot, I switched back to the
kernel I wanted to use; the ISV and I haven't troubled one another
since.
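
In outline, with suite, mirror, and paths all illustrative:

    # Hypothetical construction of the chroot described above:
    debootstrap woody /srv/isv-chroot http://mirror.example.org/debian
    tar -C /srv/isv-chroot/opt -xzf vendor-app.tar.gz
    chroot /srv/isv-chroot /opt/vendor-app/bin/run
    # Script these steps so the chroot can be rebuilt on demand; the
    # host kernel stays whatever you prefer.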

If the LSB only attempts to certify things that haven't forked, then
it's a no-op.  Well, that's not quite fair; I have found it useful to
bootstrap a porting effort using lsb-rpm.  But for it to be a software
operating environment and not just a software porting environment, it
needs to have a non-trivial scope, which means an investment by both
ISVs and distros.

As a strategy for defining and extending the scope of consensus
preparatory to a release of a test suite, sharing binaries is fine. 
But as a strategy for making ISVs and their customers happy, I think
it's a chimera.

Cheers,
- Michael




Re: Linux Core Consortium

2004-12-09 Thread Michael K. Edwards
On Thu, 09 Dec 2004 17:20:00 -0600, Ron Johnson [EMAIL PROTECTED] wrote:

 libfoo 1.7 fixes a non-security bug in v1.6.  bar segfaults when
 running libfoo 1.6.  But libfoo 1.6 is in Sarge, and the bug won't
 be fixed because it's not a security bug.

Having a formal GNU/Linux Distro Test Kit would help organize this
process.  Suppose that sarge validates against DTK 1.0, the vendor of
bar contributes a new test case which is accepted into DTK 1.0.1,
and DTK 1.0.1 runs against sarge + libfoo 1.7.  Then we'd be in no
worse a position than Other-Distro 14.7 which also included libfoo 1.6
and validated against DTK 1.0 -- except insofar as commercial distros
are more likely to silently roll DTK 1.0.1 and libfoo 1.7 into
14.7-updates, which isn't the way Debian handles stable.

This could be handled with overlay repositories that handle bug
fixes within a DTK minor version.  In the case described, bar
depends on validated-with-dtk (>= 1.0.1, << 1.1), validated-with-dtk
1.0.1-1 depends on libfoo (>= 1.7-1), and you need to have packages
from the validated-with-dtk 1.0.1 repository in order to install bar.
A security fix to libfoo 1.6 (in stable) would need re-validating
against DTK 1.0, and might also need to be done on libfoo 1.7,
resulting in validated-with-dtk 1.0.1-2 (built to depend on libfoo
1.7-2).
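
Spelled out as control stanzas (all package names hypothetical, as
above):

    Package: bar
    Depends: validated-with-dtk (>= 1.0.1), validated-with-dtk (<< 1.1)

    Package: validated-with-dtk
    Version: 1.0.1-1
    Depends: libfoo (>= 1.7-1)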

I actually think trying to handle this with a validated-with-dtk
package is a kludge, but it doesn't require any new development
(except, of course, the DTK, and perhaps automation of propagation
into the overlay repositories).  One of these days I will get around
to doing a more competent job of proposing Signed Package Integrity
Tokens (my last effort wasn't worth SPIT :-) as a way for ISVs to
self-service (or delegate self-servicing) at the level of functional
test and validation.

Either approach partitions the problem of security/bug-fix management
so that ISVs and their customers can contribute to things that matter
to them.  The DTK needs to be re-run, and the validated-with-dtk
dependencies updated or the PITs re-signed, any time there is a
security update to the relevant packages.  Security updates themselves
multiply because they may hit the versions in the overlay repositories
as well.  But at least it's possible to estimate the resource cost of
maintaining the various overlays, and it's clear what the criterion
for adding an overlay package is -- a DTK update that fails without
the overlay.

Note that many of the core packages we are talking about already have
extensive test suites, so the DTK could get off the ground by adapting
them into tests of the package in its installed state instead of
build-time tests.  This would make it pretty easy for an ISV to
prototype a test within the DTK, send it to upstream (the same test
runs in the build-time test framework), and get the bug fixed (or
feature added).

Obviously there aren't any new ideas in this (DejaGNU, anyone?) and
it's not a panacea.  The LCC's proposal is just a special case of this
-- a DTK that verifies the checksums of the common binaries :-).  But
I prefer an approach in which I can verify when and why the contract
between distro and ISV changed, so that I can make reasonable
decisions about whether and how to replace distro binaries with builds
that meet my own needs better.

Cheers,
- Michael




Re: Ubuntu discussion at planet.debian.org

2004-10-25 Thread Michael K. Edwards
 Steve Langasek

 It is not correct.  At the time testing freezes for sarge, there are likely
 to be many packages in unstable which either have no version in testing, or
 have older versions in testing.  The list of such packages is always visible
 at http://ftp-master.debian.org/testing/update_excuses.html.gz.  While
 it's a goal of the release team to ensure that *incidental* issues don't
 keep package fixes/updates out of testing, there are plenty of package
 updates which will come too late for consideration, or will be RC-buggy in
 their own right, that won't be part of sarge.

That's the URL I was trying to remember; thanks.  That's what I meant
by "the interesting thing about testing is the dependency analysis".
I think the information in update_excuses mostly supports the
"convergence is readiness" hypothesis.

It seems to me that Jérôme's observation also takes into account the
fact that experimental exists, so that changes that maintainers know
would break britney don't get put into unstable late in the cycle.
Without that, I wouldn't expect testing -> unstable convergence ever
to happen.  But don't you think that, until testing converges (nearly)
to unstable, it's hard to know how much of testing will FTBFS on
testing itself?

Although it does sometimes happen that an update breaks something that
works in the version in testing, I think it's more common for an RC
bug to apply to earlier versions as well, even when it's an FTBFS for
something that used to build.  (That often seems to mean that one of
the build-deps evolved out from under the package or got removed
because it was old or broken, and the source that's made it into
testing won't build there either.)  So I would expect that the vast
majority of RC bugs filed against packages in sid have to be handled
by really fixing them -- and letting the fix propagate into testing --
or excluding the package from sarge.

Freezing base+standard at this stage saves the package maintainers the
trouble of uploading to experimental instead of unstable for a while,
and makes it a lot easier for the RMs to allow fixes in selectively. 
Otherwise, progressive freezes don't really alter this analysis.

 And immediately *after* the freeze point, I think we can safely expect
 unstable to begin diverging even further from testing.

True enough.  In a lot of commercial software development, the
interval between code freeze / VC branch and release is necessary so
that QA can finally do a full run through the test plan and the senior
coders are free to fix any RC bug they can.  Everybody else works on
the trunk.  So apply the "testing (almost) = unstable" criterion to
the freeze point rather than the release point, with the understanding
that the packages for which it's not true are exactly the ones that
need more / different attention during the freeze than they were
getting before.

 While getting KDE updates into testing has been a significant task in the
 past, I'm not aware of any point during the sarge release cycle when KDE has
 been a factor delaying the release.

Er, does the current situation fit?  An awful lot of update_excuses
seems to boil down to Bug#266478, and it's hard to see the RC bug
count on KDE 3.2 apps dropping by much until the debate about letting
KDE 3.3 in is resolved.  I think the C++ toolchain issues I mentioned
were a factor in KDE 3.2 propagation into testing being delayed to the
point that KDE 3.3 is even worth discussing.  But I haven't been
following those issues at all lately, so don't take my opinion on this
too seriously; maybe I should just ignore that portion of
update_excuses.

Cheers,
- Michael




Re: Ubuntu discussion at planet.debian.org

2004-10-24 Thread Michael K. Edwards
On Sat, 23 Oct 2004 01:04:41 +0200, Jérôme Marant [EMAIL PROTECTED] wrote:
 As soon as testing is strictly equal to unstable regarding package
 versions, testing is roughly ready for release.

I think this observation is acute -- as applied to the _current_
testing mechanism.

Personally, I view testing as a QA system for the release process,
not a sensible distribution for anyone (developer or end user) to be
running on a real system.  My understanding of the mechanism by
which packages propagate into testing is that there's only one
interesting thing about it: the _reason_ why any given package fails
to propagate.  The automated dependency analysis takes some of the
personality conflicts out of the assessment of the release status, and
provides macroscopic milestones (systematic transition to library X,
version upgrade to desktop suite Y) during a certain phase of the
release cycle.

I am in the interesting position of serving as buildmaster for an
appliance incorporating a snapshot of a subset of Debian unstable. 
(I may perhaps deserve some flamage for not keeping up communication
with the Debian project while working on this, more or less in
isolation.  Allow me to plead that the pressure of circumstances has
been rather intense, and I am hoping that recent management changes
will result in more follow-through on promises of return contributions
of time and other resources.)

Perhaps I've just been lucky, but I haven't had any technical trouble
at all due to the choice of unstable.  The only issue I encountered
was an upstream-induced regression in MySQL 4.0.21, which would have
hit me anyway (we bought MySQL's commercial support contract, but I
have no desire to ship any bits that haven't been through the hands of
the Debian packager, who seems to be on top of the situation). 
snapshot.debian.net was a real lifesaver on this one, allowing me to
choose the particular historical version I wanted for the affected
packages.

When sarge releases, I'm going to want to be able to sync the boxes in
the field up to sarge so that they can participate in the stable
security maintenance process.  In the best of all possible worlds, I'd
have some guarantee that sarge will contain package versions no lower
than today's unstable, at least for the packages I'm bundling.  But I
don't think it's at all reasonable to expect that kind of a guarantee,
and I'm just going to have to do my own QA on the upgrade/downgrade
process from the snapshot I've chosen to the eventual golden sarge.

If Jérôme's observation is correct, then I don't need to worry;
unstable will converge to a consistent state under the watchful eyes
of the RM (and many others), testing will rise to meet it, and the
worst that might happen is that some of the packages I've chosen could
be excluded from sarge because of a quality problem or an ill-timed
maintainer absence.  This would be an inconvenience but hardly grounds
for complaint about Debian's release process.

In this light (and for my purposes), the only sensible place to branch
stable off from unstable is at a point where the major subsystems are
all going to be reasonably maintainable on the branch.  Perhaps we're
close to such a point now and just haven't been for a while, for
reasons largely beyond the Debian project's control.  (Apart from the
definition of its major subsystems, that is; note that Ubuntu
doesn't expect to be able to provide the level of support for KDE that
they plan for Gnome, and it appears to me that the effect of changes
in the C++ toolchain on KDE has been a significant factor in delaying
sarge.  Do tell me if I'm mistaken about that, but please don't flame
too hard; I'm not casting aspersions on KDE or g++/libstdc++, just
recording an impression.)

To me, the miracle is that a stable distribution is possible at all,
given human nature and the scope of the Debian archive.  The old adage
about sausage and the law goes double for software, perhaps because
it's a sort of sausage (a composite product of many, er, committed
contributors) stuffed with law (part logic and part historical
circumstance).  I have to admit that it takes a strong stomach to
watch the sausage being made and then eat it anyway, but it helps to
focus on how much better it is than one's other options in sausages. 
That's still how I feel about Debian, with good reason.

Cheers,
- Michael




Accepted cryptokit 1.2-1 (i386 source)

2004-01-28 Thread Michael K. Edwards
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Format: 1.7
Date: Mon, 19 Jan 2004 11:48:52 -0800
Source: cryptokit
Binary: libcryptokit-ocaml libcryptokit-ocaml-dev
Architecture: source i386
Version: 1.2-1
Distribution: unstable
Urgency: low
Maintainer: Debian OCaml Maintainers [EMAIL PROTECTED]
Changed-By: Michael K. Edwards [EMAIL PROTECTED]
Description: 
 libcryptokit-ocaml - cryptographic algorithm library for OCaml - runtime
 libcryptokit-ocaml-dev - cryptographic algorithm library for OCaml - development
Closes: 203256
Changes: 
 cryptokit (1.2-1) unstable; urgency=low
 .
   * First upload (closes: Bug#203256)
   * debian/control: explicit Section: lines for binary packages
   * debian/control: Maintainer: Debian OCaml Maintainers, etc.
   * Sign with DSA subkey of new GPG key
Files: 
 938cb5f268eb30ffabfd418f7733857c 968 libdevel optional cryptokit_1.2-1.dsc
 0249135953f10c1515e88985b45ee4c9 106543 libdevel optional cryptokit_1.2.orig.tar.gz
 d08f02fc82c68edd32ccb7dd5007798e 12307 libdevel optional cryptokit_1.2-1.diff.gz
 681c699ffa6e140f1bdd24c2263d0dfd 19132 libs optional libcryptokit-ocaml_1.2-1_i386.deb
 937d8d9bcfb8cc9fb296c7ab79b909cf 287122 libdevel optional 
libcryptokit-ocaml-dev_1.2-1_i386.deb

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.4 (GNU/Linux)

iD8DBQFAD7LX2WTeT3CRQaQRAkgCAJ43/nJVSkuOLN5fmjvp8Cmx61XyXgCdHH8R
fBUQwryhfKQjnJNdT/czHdU=
=bfrI
-----END PGP SIGNATURE-----


Accepted:
cryptokit_1.2-1.diff.gz
  to pool/main/c/cryptokit/cryptokit_1.2-1.diff.gz
cryptokit_1.2-1.dsc
  to pool/main/c/cryptokit/cryptokit_1.2-1.dsc
cryptokit_1.2.orig.tar.gz
  to pool/main/c/cryptokit/cryptokit_1.2.orig.tar.gz
libcryptokit-ocaml-dev_1.2-1_i386.deb
  to pool/main/c/cryptokit/libcryptokit-ocaml-dev_1.2-1_i386.deb
libcryptokit-ocaml_1.2-1_i386.deb
  to pool/main/c/cryptokit/libcryptokit-ocaml_1.2-1_i386.deb





Accepted libimager-perl 0.42-1 (powerpc source)

2004-01-28 Thread Michael K. Edwards
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Format: 1.7
Date: Tue, 20 Jan 2004 01:44:53 -0800
Source: libimager-perl
Binary: libimager-perl
Architecture: source powerpc
Version: 0.42-1
Distribution: unstable
Urgency: low
Maintainer: Michael K. Edwards [EMAIL PROTECTED]
Changed-By: Michael K. Edwards [EMAIL PROTECTED]
Description: 
 libimager-perl - Perl extension for Generating 24 bit Images
Changes: 
 libimager-perl (0.42-1) unstable; urgency=low
 .
   * Initial Release.
Files: 
 4368b90a7450144d0f31f4be0b65e82a 812 perl optional libimager-perl_0.42-1.dsc
 46275d8badb93e323345d520a5931017 550899 perl optional libimager-perl_0.42.orig.tar.gz
 c666c80631d51d8f043c2477bb33aae3 1961 perl optional libimager-perl_0.42-1.diff.gz
 edac3f52cf67e6a42eaee7c25f0b00ee 439920 perl optional 
libimager-perl_0.42-1_powerpc.deb

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.2 (GNU/Linux)
Comment: Colin Watson [EMAIL PROTECTED] -- Debian developer

iD8DBQFADTJ79t0zAhD6TNERAi2OAJ9u5Yy+Buo+ERryj5cWwsbG0c7YngCfQ1FI
k2G/fNEKAyzKMsqcLUAT+ss=
=00yK
-----END PGP SIGNATURE-----


Accepted:
libimager-perl_0.42-1.diff.gz
  to pool/main/libi/libimager-perl/libimager-perl_0.42-1.diff.gz
libimager-perl_0.42-1.dsc
  to pool/main/libi/libimager-perl/libimager-perl_0.42-1.dsc
libimager-perl_0.42-1_powerpc.deb
  to pool/main/libi/libimager-perl/libimager-perl_0.42-1_powerpc.deb
libimager-perl_0.42.orig.tar.gz
  to pool/main/libi/libimager-perl/libimager-perl_0.42.orig.tar.gz





Accepted libobject-multitype-perl 0.04-1 (all source)

2004-01-28 Thread Michael K. Edwards
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Format: 1.7
Date: Mon, 19 Jan 2004 16:21:28 -0800
Source: libobject-multitype-perl
Binary: libobject-multitype-perl
Architecture: source all
Version: 0.04-1
Distribution: unstable
Urgency: low
Maintainer: Michael K. Edwards [EMAIL PROTECTED]
Changed-By: Michael K. Edwards [EMAIL PROTECTED]
Description: 
 libobject-multitype-perl - Perl Objects as Hash, Array, Scalar, Code and Glob at once
Changes: 
 libobject-multitype-perl (0.04-1) unstable; urgency=low
 .
   * Initial Release.
Files: 
 c9b672d3f2f5e5edda0adb83126bcfab 782 perl optional libobject-multitype-perl_0.04-1.dsc
 5997746d8bbe57c3fa7e581f4fe5ebe7 6586 perl optional 
libobject-multitype-perl_0.04.orig.tar.gz
 4f17548c6649d2174bbc8199ca245cbe 1903 perl optional 
libobject-multitype-perl_0.04-1.diff.gz
 80039ae115a285e358e2d13c88371a7b 13716 perl optional 
libobject-multitype-perl_0.04-1_all.deb

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.2 (GNU/Linux)
Comment: Colin Watson [EMAIL PROTECTED] -- Debian developer

iD8DBQFADTVC9t0zAhD6TNERAk5CAJ9bRoTXQ2sCDkIyGoFRI5yyvzdnNwCfUIf6
xuzUjHcP06sV8rdjKv2VM4Y=
=JEzZ
-----END PGP SIGNATURE-----


Accepted:
libobject-multitype-perl_0.04-1.diff.gz
  to pool/main/libo/libobject-multitype-perl/libobject-multitype-perl_0.04-1.diff.gz
libobject-multitype-perl_0.04-1.dsc
  to pool/main/libo/libobject-multitype-perl/libobject-multitype-perl_0.04-1.dsc
libobject-multitype-perl_0.04-1_all.deb
  to pool/main/libo/libobject-multitype-perl/libobject-multitype-perl_0.04-1_all.deb
libobject-multitype-perl_0.04.orig.tar.gz
  to pool/main/libo/libobject-multitype-perl/libobject-multitype-perl_0.04.orig.tar.gz





Accepted libxml-smart-perl 1.5-1 (all source)

2004-01-28 Thread Michael K. Edwards
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Format: 1.7
Date: Mon, 19 Jan 2004 15:36:14 -0800
Source: libxml-smart-perl
Binary: libxml-smart-perl
Architecture: source all
Version: 1.5-1
Distribution: unstable
Urgency: low
Maintainer: Michael K. Edwards [EMAIL PROTECTED]
Changed-By: Michael K. Edwards [EMAIL PROTECTED]
Description: 
 libxml-smart-perl - Convenience features for access to parsed XML trees
Changes: 
 libxml-smart-perl (1.5-1) unstable; urgency=low
 .
   * Initial Release.
Files: 
 a7cacac42f30785ae47734bd7a885512 714 perl optional libxml-smart-perl_1.5-1.dsc
 c89344dad541f4f7b87b5ef36b06535b 32000 perl optional libxml-smart-perl_1.5.orig.tar.gz
 73832931b1518b3d7a3a34f48f81fe02 2091 perl optional libxml-smart-perl_1.5-1.diff.gz
 1520a804f12b056cf34df1bccee72394 43786 perl optional libxml-smart-perl_1.5-1_all.deb

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.4 (GNU/Linux)

iD8DBQFADwAfZNh5D+C4st4RAspVAKCBYMaDXA4Yh1B4xZUeUM2+6d6cGwCfSKwI
BuNmKsUvPpDEcJtHqihOYRM=
=FLF8
-----END PGP SIGNATURE-----


Accepted:
libxml-smart-perl_1.5-1.diff.gz
  to pool/main/libx/libxml-smart-perl/libxml-smart-perl_1.5-1.diff.gz
libxml-smart-perl_1.5-1.dsc
  to pool/main/libx/libxml-smart-perl/libxml-smart-perl_1.5-1.dsc
libxml-smart-perl_1.5-1_all.deb
  to pool/main/libx/libxml-smart-perl/libxml-smart-perl_1.5-1_all.deb
libxml-smart-perl_1.5.orig.tar.gz
  to pool/main/libx/libxml-smart-perl/libxml-smart-perl_1.5.orig.tar.gz





Accepted numerix 0.19-1 (i386 source all)

2004-01-28 Thread Michael K. Edwards
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Format: 1.7
Date: Mon, 19 Jan 2004 13:40:45 -0800
Source: numerix
Binary: libcnumx0-dev numerix-doc libnumerix-ocaml libcnumx0 libnumerix-ocaml-dev
Architecture: source i386 all
Version: 0.19-1
Distribution: unstable
Urgency: low
Maintainer: Debian OCaml Maintainers [EMAIL PROTECTED]
Changed-By: Michael K. Edwards [EMAIL PROTECTED]
Description: 
 libcnumx0  - Numerix big integer library for C - runtime
 libcnumx0-dev - Numerix big integer library for C - development
 libnumerix-ocaml - Numerix big integer library for OCaml - runtime
 libnumerix-ocaml-dev - Numerix big integer library for OCaml
 numerix-doc - Numerix big integer library for C and OCaml - documentation
Changes: 
 numerix (0.19-1) unstable; urgency=low
 .
   * Initial Release
   * Packaging reworked to make proper libcnumx{,-dev} and
 libnumerix-ocaml{,-dev} -- many thanks to Sven Luther
   * Packaging style improvements -- thanks to Sylvain Le Gall
   * Split out arch-indep numerix-doc package
   * Build against new Bignum implementation
   * Added build dependency on tetex-extra (current location of amssymb.sty)
   * debian/control: revised Section: lines for source and binary packages
   * Don't include ocamlnumx toplevel and man page in libnumerix-ocaml-dev
   * Maintainer: Debian OCaml Maintainers, etc.
   * Sign with DSA subkey of new GPG key
   * Rebuild against proper upstream tarball, not the one from sks
   * Use autotools-dev to create configure at build time
Files: 
 6f100c4abf6d02b7bc164ec70647b78f 1043 libdevel optional numerix_0.19-1.dsc
 7b91889568145a3e76501107ed5407bc 614924 libdevel optional numerix_0.19.orig.tar.gz
 d94a45b1e4d2b7da75c56d0c91163c87 53798 libdevel optional numerix_0.19-1.diff.gz
 35eb558464c0c63496be88962a97d7fc 54750 doc optional numerix-doc_0.19-1_all.deb
 fa41815cc75dcf5c340f4720f3562cca 69130 libs optional libcnumx0_0.19-1_i386.deb
 0493e87af5be905cf6f4087bf4816086 93932 libdevel optional libcnumx0-dev_0.19-1_i386.deb
 af30c7ea475ce01330b595e480427296 78836 libs optional libnumerix-ocaml_0.19-1_i386.deb
 7f3f8999ff0065692f40fed1b480e9d4 180330 libdevel optional libnumerix-ocaml-dev_0.19-1_i386.deb

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.4 (GNU/Linux)

iD8DBQFADlhd2WTeT3CRQaQRAtkxAKCOAmtAGYETiAqSbAm++k6qPoKzAwCfdDXq
WutNUoLTFiMpMYbkIX30J5s=
=/YTL
-----END PGP SIGNATURE-----


Accepted:
libcnumx0-dev_0.19-1_i386.deb
  to pool/main/n/numerix/libcnumx0-dev_0.19-1_i386.deb
libcnumx0_0.19-1_i386.deb
  to pool/main/n/numerix/libcnumx0_0.19-1_i386.deb
libnumerix-ocaml-dev_0.19-1_i386.deb
  to pool/main/n/numerix/libnumerix-ocaml-dev_0.19-1_i386.deb
libnumerix-ocaml_0.19-1_i386.deb
  to pool/main/n/numerix/libnumerix-ocaml_0.19-1_i386.deb
numerix-doc_0.19-1_all.deb
  to pool/main/n/numerix/numerix-doc_0.19-1_all.deb
numerix_0.19-1.diff.gz
  to pool/main/n/numerix/numerix_0.19-1.diff.gz
numerix_0.19-1.dsc
  to pool/main/n/numerix/numerix_0.19-1.dsc
numerix_0.19.orig.tar.gz
  to pool/main/n/numerix/numerix_0.19.orig.tar.gz





Accepted libxml-sax-writer-perl 0.44-3 (all source)

2004-01-20 Thread Michael K. Edwards
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Format: 1.7
Date: Mon, 19 Jan 2004 00:41:30 -0800
Source: libxml-sax-writer-perl
Binary: libxml-sax-writer-perl
Architecture: source all
Version: 0.44-3
Distribution: unstable
Urgency: low
Maintainer: Michael K. Edwards [EMAIL PROTECTED]
Changed-By: Michael K. Edwards [EMAIL PROTECTED]
Description: 
 libxml-sax-writer-perl - Perl module for a SAX2 XML writer
Closes: 196373 210544
Changes: 
 libxml-sax-writer-perl (0.44-3) unstable; urgency=low
 .
   * New maintainer (Closes: Bug#210544)
   * Added Jay Bonci as co-maintainer
   * Add ':encoding(EncodeTo)' line discipline to FileConsumer to defeat
 perl's automatic charset conversion.  Thanks to Michael Fowler.
 (Closes: Bug#196373; see the sketch after this changelog)
   * Add trailing newline to xml declaration and to file/handle finalize()
   * Change quote marks in xml declaration to '"' to match most W3C examples
   * debian/control: upgraded to Debian Policy 3.6.1 (no changes)
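
A minimal sketch of the ':encoding(...)' fix above, assuming a consumer
that opens its own output handle; open_output and the file names are
illustrative stand-ins, not the actual XML::SAX::Writer internals:

    #!/usr/bin/perl
    # Push an ':encoding(...)' I/O layer onto the output handle so the
    # bytes on disk are written in the requested charset, instead of
    # whatever perl's default conversion would otherwise produce.
    use strict;
    use warnings;

    sub open_output {
        my ($filename, $encode_to) = @_;    # $encode_to e.g. 'UTF-8'
        open my $fh, ">:encoding($encode_to)", $filename
            or die "cannot open $filename: $!";
        return $fh;
    }

    my $fh = open_output('out.xml', 'UTF-8');
    print {$fh} qq{<?xml version="1.0" encoding="UTF-8"?>\n};
    close $fh or die "close failed: $!";
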
Files: 
 509d0942387a84c70443ede15d19c2d8 925 perl optional libxml-sax-writer-perl_0.44-3.dsc
 edfd4c5aa2110bd5a1014cc92d0589fb 3089 perl optional libxml-sax-writer-perl_0.44-3.diff.gz
 87218c4a6c8188cde0faa94c3eebcfd3 21576 perl optional libxml-sax-writer-perl_0.44-3_all.deb

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.2 (GNU/Linux)
Comment: Colin Watson [EMAIL PROTECTED] -- Debian developer

iD8DBQFADQvN9t0zAhD6TNERAu5CAJ0QLs90/BeBFD+tyKHoHnmmUG/BPQCdGuYA
/M8Zz/rn7d9Dp5LJPoyWvw4=
=8YYV
-----END PGP SIGNATURE-----


Accepted:
libxml-sax-writer-perl_0.44-3.diff.gz
  to pool/main/libx/libxml-sax-writer-perl/libxml-sax-writer-perl_0.44-3.diff.gz
libxml-sax-writer-perl_0.44-3.dsc
  to pool/main/libx/libxml-sax-writer-perl/libxml-sax-writer-perl_0.44-3.dsc
libxml-sax-writer-perl_0.44-3_all.deb
  to pool/main/libx/libxml-sax-writer-perl/libxml-sax-writer-perl_0.44-3_all.deb





Accepted libxml-libxml-perl 1.56-6 (powerpc source)

2004-01-20 Thread Michael K. Edwards
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Format: 1.7
Date: Mon, 19 Jan 2004 00:28:59 -0800
Source: libxml-libxml-perl
Binary: libxml-libxml-perl
Architecture: source powerpc
Version: 1.56-6
Distribution: unstable
Urgency: low
Maintainer: Michael K. Edwards [EMAIL PROTECTED]
Changed-By: Michael K. Edwards [EMAIL PROTECTED]
Description: 
 libxml-libxml-perl - Perl module for using the GNOME libxml2 library
Changes: 
 libxml-libxml-perl (1.56-6) unstable; urgency=low
 .
   * Integrate CVS as of 20040115
   * Sign with subkey of real key (new UID)
   * Add Jay Bonci as co-maintainer
Files: 
 7bf5bcc5a76064f6b36ebc4927dbed25 882 perl optional libxml-libxml-perl_1.56-6.dsc
 5c401e40a870d381c5047ac419a9ac3b 40683 perl optional libxml-libxml-perl_1.56-6.diff.gz
 4caf604f81e8a6ca556fbbdb0abc8c3c 263510 perl optional libxml-libxml-perl_1.56-6_powerpc.deb

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.2 (GNU/Linux)
Comment: Colin Watson [EMAIL PROTECTED] -- Debian developer

iD8DBQFADQqX9t0zAhD6TNERAnXHAJ9j9GbV2M5Mub2uaQArz/88sffFQwCfZGLX
gR/RvbEryDTadVtb5VPilTw=
=Qrsk
-----END PGP SIGNATURE-----


Accepted:
libxml-libxml-perl_1.56-6.diff.gz
  to pool/main/libx/libxml-libxml-perl/libxml-libxml-perl_1.56-6.diff.gz
libxml-libxml-perl_1.56-6.dsc
  to pool/main/libx/libxml-libxml-perl/libxml-libxml-perl_1.56-6.dsc
libxml-libxml-perl_1.56-6_powerpc.deb
  to pool/main/libx/libxml-libxml-perl/libxml-libxml-perl_1.56-6_powerpc.deb





Accepted libghttp 1.0.9-15 (powerpc source)

2004-01-20 Thread Michael K. Edwards
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Format: 1.7
Date: Mon, 19 Jan 2004 01:18:25 -0800
Source: libghttp
Binary: libghttp1 libghttp-dev
Architecture: source powerpc
Version: 1.0.9-15
Distribution: unstable
Urgency: low
Maintainer: Michael K. Edwards [EMAIL PROTECTED]
Changed-By: Michael K. Edwards [EMAIL PROTECTED]
Description: 
 libghttp-dev - original GNOME HTTP client library - development kit
 libghttp1  - original GNOME HTTP client library - run-time kit
Changes: 
 libghttp (1.0.9-15) unstable; urgency=low
 .
   * GPG key / UID change (sign with subkey of securely managed primary key)
Files: 
 63a8107e80b20841eb68f46607ce387e 704 net optional libghttp_1.0.9-15.dsc
 cdf2e2be2a4f04ab2e2115fa1d5e60d5 6572 net optional libghttp_1.0.9-15.diff.gz
 551a9e440577ea1182970a3f43b52650 21160 oldlibs optional libghttp1_1.0.9-15_powerpc.deb
 7b94b2ae43c49814aa9a77283d7df9ad 39430 oldlibs optional libghttp-dev_1.0.9-15_powerpc.deb

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.2 (GNU/Linux)
Comment: Colin Watson [EMAIL PROTECTED] -- Debian developer

iD8DBQFADRAS9t0zAhD6TNERAmM9AJ9NZLyJERgBEsd4ctRmRon4dfPiuACfStVM
96p7km3mgehvv7LvrsmBW1g=
=wrWx
-----END PGP SIGNATURE-----


Accepted:
libghttp-dev_1.0.9-15_powerpc.deb
  to pool/main/libg/libghttp/libghttp-dev_1.0.9-15_powerpc.deb
libghttp1_1.0.9-15_powerpc.deb
  to pool/main/libg/libghttp/libghttp1_1.0.9-15_powerpc.deb
libghttp_1.0.9-15.diff.gz
  to pool/main/libg/libghttp/libghttp_1.0.9-15.diff.gz
libghttp_1.0.9-15.dsc
  to pool/main/libg/libghttp/libghttp_1.0.9-15.dsc





Accepted libxml-filter-xslt-perl 0.03-4 (all source)

2004-01-20 Thread Michael K. Edwards
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Format: 1.7
Date: Mon, 19 Jan 2004 00:50:51 -0800
Source: libxml-filter-xslt-perl
Binary: libxml-filter-xslt-perl
Architecture: source all
Version: 0.03-4
Distribution: unstable
Urgency: low
Maintainer: Michael K. Edwards [EMAIL PROTECTED]
Changed-By: Michael K. Edwards [EMAIL PROTECTED]
Description: 
 libxml-filter-xslt-perl - Perl module for XSLT as a SAX Filter
Closes: 197760 210526
Changes: 
 libxml-filter-xslt-perl (0.03-4) unstable; urgency=low
 .
   * New maintainer (closes: Bug#210526)
   * Added Jay Bonci as co-maintainer
   * debian/control: add libxml-sax-perl to Build-Depends and build against
 overhauled libxml-libxml-perl (1.56-5) to fix FTBFS (closes: Bug#197760)
   * debian/control: upgraded to Debian Policy 3.6.1 (no changes)
Files: 
 d650de623e7b3ca18272f53b6b3808f3 811 perl optional libxml-filter-xslt-perl_0.03-4.dsc
 72610ac4718d7b0518c1e24d7c91ad25 2276 perl optional libxml-filter-xslt-perl_0.03-4.diff.gz
 d57016be52555142d44dab6ad58d6c36 8274 perl optional libxml-filter-xslt-perl_0.03-4_all.deb

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.2 (GNU/Linux)
Comment: Colin Watson [EMAIL PROTECTED] -- Debian developer

iD8DBQFADQ+U9t0zAhD6TNERArFRAJ9ZOv3GIlqgX2rR9CAPVxVH2JxiVwCfYDu8
RugJIKgksKXpay+NJivySdA=
=UD5E
-----END PGP SIGNATURE-----


Accepted:
libxml-filter-xslt-perl_0.03-4.diff.gz
  to pool/main/libx/libxml-filter-xslt-perl/libxml-filter-xslt-perl_0.03-4.diff.gz
libxml-filter-xslt-perl_0.03-4.dsc
  to pool/main/libx/libxml-filter-xslt-perl/libxml-filter-xslt-perl_0.03-4.dsc
libxml-filter-xslt-perl_0.03-4_all.deb
  to pool/main/libx/libxml-filter-xslt-perl/libxml-filter-xslt-perl_0.03-4_all.deb





Accepted libxml-libxslt-perl 1.53-4 (powerpc source)

2004-01-20 Thread Michael K. Edwards
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Format: 1.7
Date: Mon, 19 Jan 2004 00:44:24 -0800
Source: libxml-libxslt-perl
Binary: libxml-libxslt-perl
Architecture: source powerpc
Version: 1.53-4
Distribution: unstable
Urgency: low
Maintainer: Michael K. Edwards [EMAIL PROTECTED]
Changed-By: Michael K. Edwards [EMAIL PROTECTED]
Description: 
 libxml-libxslt-perl - Perl module for using the GNOME libxslt library
Closes: 210535
Changes: 
 libxml-libxslt-perl (1.53-4) unstable; urgency=low
 .
   * New maintainer (closes: Bug#210535)
   * Added Jay Bonci as co-maintainer
   * Reverse changes from NMU, verify against perl v5.8.2 / libxml2 2.6.3
 (The regression test is not actually bad; the real cause was
 libxml-libxml-perl breakage.  A small change was made to the test()
 callback to suppress a warning.)
   * LibXSLT.xs: fix double-free bug in LibXSLT_generic_function
Files: 
 03a3fa14f59e0c73b4dfd8f75a292673 822 perl optional libxml-libxslt-perl_1.53-4.dsc
 4fbe9299fd738a8f72343008a52aa89e 3233 perl optional libxml-libxslt-perl_1.53-4.diff.gz
 ec1ed9670e42dfe137d85033bdf443c3 35704 perl optional libxml-libxslt-perl_1.53-4_powerpc.deb

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.2 (GNU/Linux)
Comment: Colin Watson [EMAIL PROTECTED] -- Debian developer

iD8DBQFADQ489t0zAhD6TNERAj3lAJ0dDpWo7JppKXM2uOGDqsnt+FYzvgCeMhx5
zLp6i4s2Ftjr0aBzOkdNiII=
=TKG0
-----END PGP SIGNATURE-----


Accepted:
libxml-libxslt-perl_1.53-4.diff.gz
  to pool/main/libx/libxml-libxslt-perl/libxml-libxslt-perl_1.53-4.diff.gz
libxml-libxslt-perl_1.53-4.dsc
  to pool/main/libx/libxml-libxslt-perl/libxml-libxslt-perl_1.53-4.dsc
libxml-libxslt-perl_1.53-4_powerpc.deb
  to pool/main/libx/libxml-libxslt-perl/libxml-libxslt-perl_1.53-4_powerpc.deb





Accepted libhttp-ghttp-perl 1.07-7 (powerpc source)

2004-01-20 Thread Michael K. Edwards
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Format: 1.7
Date: Sun, 18 Jan 2004 22:58:42 -0800
Source: libhttp-ghttp-perl
Binary: libhttp-ghttp-perl
Architecture: source powerpc
Version: 1.07-7
Distribution: unstable
Urgency: low
Maintainer: Michael K. Edwards [EMAIL PROTECTED]
Changed-By: Michael K. Edwards [EMAIL PROTECTED]
Description: 
 libhttp-ghttp-perl - Perl module for using the Gnome ghttp library
Closes: 210214
Changes: 
 libhttp-ghttp-perl (1.07-7) unstable; urgency=low
 .
   * New maintainer (closes: Bug#210214)
   * Added Jay Bonci as co-maintainer
   * Regenerated debian/rules using dh-make-perl
   * debian/control: upgraded to Debian Policy 3.6.1 (no changes)
Files: 
 baac10bdba34867c1ec87398141f2015 750 perl optional libhttp-ghttp-perl_1.07-7.dsc
 d08432115183f965af21add517ba7429 2874 perl optional libhttp-ghttp-perl_1.07-7.diff.gz
 410013fa1cc56aea5879cdf846df4cd8 30340 perl optional libhttp-ghttp-perl_1.07-7_powerpc.deb

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.2 (GNU/Linux)
Comment: Colin Watson [EMAIL PROTECTED] -- Debian developer

iD8DBQFADREo9t0zAhD6TNERAsRNAJ9wmuaugmGR7GJv5lj4EOJdfgmVRgCfUY0X
nGThNGINVwqsxHbf+co1a5I=
=MHAW
-----END PGP SIGNATURE-----


Accepted:
libhttp-ghttp-perl_1.07-7.diff.gz
  to pool/main/libh/libhttp-ghttp-perl/libhttp-ghttp-perl_1.07-7.diff.gz
libhttp-ghttp-perl_1.07-7.dsc
  to pool/main/libh/libhttp-ghttp-perl/libhttp-ghttp-perl_1.07-7.dsc
libhttp-ghttp-perl_1.07-7_powerpc.deb
  to pool/main/libh/libhttp-ghttp-perl/libhttp-ghttp-perl_1.07-7_powerpc.deb





Accepted libxml-libxml-perl 1.56-4 (i386 source)

2003-12-31 Thread Michael K. Edwards (in Debian context)
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Format: 1.7
Date: Wed, 31 Dec 2003 15:21:18 -0800
Source: libxml-libxml-perl
Binary: libxml-libxml-perl
Architecture: source i386
Version: 1.56-4
Distribution: unstable
Urgency: low
Maintainer: Michael K. Edwards (in Debian context) [EMAIL PROTECTED]
Changed-By: Michael K. Edwards (in Debian context) [EMAIL PROTECTED]
Description: 
 libxml-libxml-perl - Perl module for using the GNOME libxml2 library
Closes: 210534 225620
Changes: 
 libxml-libxml-perl (1.56-4) unstable; urgency=low
 .
   * New maintainer (closes: Bug#210534)
   * debian/rules: rebuilt with dh-make-perl, add DEB_BUILD_OPTIONS=noopt
 support, removed DEB_BUILD_OPTIONS=debug (now always built -g -Wall;
 dh_strip respects nostrip automatically)
   * t/02parse.t: bypass tests that crash due to regressions in libxml2
   * LibXML.xs: don't redeclare externs now in libxml2 globals.h
 (closes: Bug#225620)
   * LibXML.pm, LibXML.xs: fix spurious warnings during make test
   * lib/XML/LibXML/SAX.pm: inherit superclass using 'use base' syntax
 (see the sketch after this list)
   * rebuild against perl v5.8.2, bump build-depends
   * rebuild against libxml2 v2.6.3-1, bump build-depends
   * debian/control: upgraded to Debian Policy 3.6.1 (no changes)
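
A minimal sketch of the 'use base' change above; the package name here is
hypothetical rather than the actual lib/XML/LibXML/SAX.pm source:

    package My::SAX::Handler;    # illustrative name only
    use strict;
    use warnings;

    # Old style: load the parent and set @ISA by hand at runtime:
    #   require XML::SAX::Base;
    #   our @ISA = ('XML::SAX::Base');

    # 'use base' loads XML::SAX::Base and sets @ISA at compile time,
    # which is the change the entry above describes:
    use base qw(XML::SAX::Base);

    1;
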
Files: 
 8aa39c0bd17a401c50db95a063c2fdab 785 perl optional libxml-libxml-perl_1.56-4.dsc
 37241704b13220eef099319cea04701b 5213 perl optional libxml-libxml-perl_1.56-4.diff.gz
 7ead35ab60520c11b5df1557c0cff800 261866 perl optional libxml-libxml-perl_1.56-4_i386.deb

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.3 (GNU/Linux)

iD8DBQE/84IeKl23+OYWEqURAtr0AKCsEkD7FOPshAmVo+/AteQn3GlCQwCggYs4
suxJMTBDtLE2gcidiouFTK8=
=23th
-----END PGP SIGNATURE-----


Accepted:
libxml-libxml-perl_1.56-4.diff.gz
  to pool/main/libx/libxml-libxml-perl/libxml-libxml-perl_1.56-4.diff.gz
libxml-libxml-perl_1.56-4.dsc
  to pool/main/libx/libxml-libxml-perl/libxml-libxml-perl_1.56-4.dsc
libxml-libxml-perl_1.56-4_i386.deb
  to pool/main/libx/libxml-libxml-perl/libxml-libxml-perl_1.56-4_i386.deb




