Re: What CDs and DVDs should we produce for lenny?

2008-03-22 Thread Peter 'p2' De Schrijver
On 2008-03-16 23:59:52 (+), Steve McIntyre <[EMAIL PROTECTED]> wrote:
> [ Please note Reply-To: to debian-cd... ]
> 
> Hi folks,
> 
> It's time for me to ask the question again - what CDs and DVDs will we
> find useful enough that we should make them for lenny? The reason I'm
> asking is that we're looking at a *huge* number of discs, and it's not
> clear that they'll all be useful. I've just finished building the full
> set for lenny d-i beta 1 (hence why I've been so quiet the last few
> days), and what we're looking at *now* is quite scary:
> 
>  2 small CDs per arch (business card, netinst)
>  ~30 CDs per arch for a full CD set
>  ~4 DVDs per arch for a full DVD set
>  (total 353 CDs, 51 DVDs, 426 GB)
> 
> Things are only going to get bigger: we're about to add armel to the
> mix, and I'm expecting that we're going to grow further yet in terms
> of the number and sizes of packages before we release lenny. That
> leaves us with a huge amount of data for us to build and host, and for
> our mirrors to handle too. So...
> 
>  1. Is it worth making full sets of CDs at all? Can we rely on people
> having a net connection or being able to use DVDs if they want
> *everything*?
> 
>  2. Is it worth producing all the CDs/DVDs/whatever for all the
> architectures?
> 
>  3. For some arches, should we just provide the first couple of CDs
> and a full set of DVDs? This is a bit of a compromise option - if
> a given machine will not boot from DVD, but can boot from CD and
> get the rest of its packages from a network share then all's good.
> 
>  4. ??? - what else would be a sane option?
> 

Considering that modern machines can boot from the network or from a USB
stick, that some machine classes lack optical drives entirely (small
laptops, many non-ia32/amd64/ppc machines), that DVD readers are uncommon
outside the ia32/amd64/ppc world, and that optical media are slow compared
to the network or USB sticks, I would say the importance of optical media
images is low. It's cool to have CDs or DVDs to give away at fairs etc.,
but for practical use much better installation methods (network and USB
stick) are available IMO. So in practice business card CD images for
ia32/amd64/ppc, plus CDs (maybe DVDs) with the most important packages,
should suffice. For the other archs I see no use in having CD or DVD images.
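
For illustration, the USB stick method is already just a couple of
commands (a rough sketch, assuming the hd-media boot image and a stick
that shows up as /dev/sdX; names are illustrative and this wipes the
stick):

  zcat boot.img.gz > /dev/sdX
  mount /dev/sdX /mnt
  cp debian-lenny-netinst.iso /mnt/
  umount /mnt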

Cheers,

Peter.





CFP: fosdem 2008 embedded track

2007-12-11 Thread Peter 'p2' De Schrijver
For those that are interested:

CALL FOR PAPERS for the 6th EMBEDDED track at FOSDEM 2008
==========================================================

sat 23 - sun 24 February 2008, Brussels

Call for papers


The 2008 edition of FOSDEM (Free and Open Source Developers' European
Meeting; http://www.fosdem.org) will take place in Brussels, Belgium 
on 23 and 24 February 2008. For the sixth time, a track on Embedded 
Systems and Operating Systems will be organized. The fourth edition 
was quite successful and attracted up to 150 attendees for certain
topics.

For last year's program see:
http://archive.fosdem.org/2007/schedule/tracks/embedded

The use of Free Software in the infrastructure of Embedded Systems 
is booming, e.g. by the use of Linux, uClinux, eCos, RedBoot, RTEMS 
and many other Free Software components. More companies are supporting
Embedded Free Software every day because of the reliability and cheap
licensing. This can be confirmed by looking at some high-profile
releases of embedded GNU/Linux systems. Some examples are the Nokia 770 & N800,
some models of Motorola's smartphones and Archos media players. Not to 
forget the popular TomTom navigation system, the numerous SOHO routers 
and NAS devices based on Linux.

Operating System development has always been a very important topic in
Free Software.
As embedded and real-time systems typically have special OS
requirements, we organise this Free Embedded and OS development track at
FOSDEM to give people the opportunity to present their (or their team's)
achievements.

This track at FOSDEM provides a remarkable opportunity to present and
discuss the ongoing work in these areas, and we invite developers to 
present their current projects. Technical topics of the conference 
include but are not limited to:

* OS Development: kernel architecture and implementation, libraries, power 
   management, TIPC, boot time and memory usage optimizations
  (e.g. Linux, BSD, uClinux, uClibc, newlib, slob allocator,...)

* Practical experiences in implementing Free Software in embedded
systems (e.g. reverse engineering, porting to (and adapting of)
commercial devices like the iPAQ, Linksys WRT54G, NSLU2)

* Toolchain, performance testing and build environment 
  (e.g. crosstool, emdebian, openembedded, PTX dist, packaging,
scratchbox, Eclipse, Valgrind,...)

* GUIs for embedded systems
  (Gtk, Qt-(embedded), GPE, Qtopia, UI design with touchscreen, 
   Hildon GUI extensions, OpenMoko, OpenGL ES, ...)

* Multimedia applications for embedded systems
  (e.g. integer only decoders, Opieplayer, gstreamer... )

* Real-time extensions, nanokernels and hardware virtualization software
  (e.g. RTAI, Adeos, KURT, L4, Qemu, User Mode Linux, VirtualLogix,
   high resolution timers, ...)
 
* Hard real-time OS's
  (eCos, RTEMS, Real Time Linux,...)

* Open hardware, DSP, softcores and general hardware management
  (e.g. opencores.org, OpenRISC, LEON SPARC, FPGAs, specific design
restrictions for free systems, DSP, power management...)

* Safety and security certifications applied to Free software
   (e.g. security measures in Embedded systems, TPM, SELinux
for embedded, TrustZone, ...)

* Tools and techniques for programming multicore systems

* Free software licenses and embedded systems

Authors who wish to present a topic are requested to submit their
abstracts online to [EMAIL PROTECTED] before 23/01/2008. Notification
of receipt will be sent within 48 hours. Authors wishing to submit a
full paper (between 6 and 12 A4 pages) can do so in PS or PDF format.

The Program Committee, which will evaluate the abstracts, consists of:

* Geert Uytterhoeven, Sony NSCE, Belgium
* Peter De Schrijver (p2), Nokia (OSSO), Finland
* Philippe De Swert, Nokia (OSSO), Finland
* Klaas van Gend, MontaVista Software, The Netherlands
* Michael Opdenacker, Free Electrons, France





Re: multiarch status update

2006-05-16 Thread Peter 'p2' De Schrijver
> 
> But say you have the old i486 ls installed in /bin/ls and now you
> install the new amd64 ls in /bin/ls/x86_64.
> 
> Wait a second. How do you create the dir when the file already exists?
> dpkg has to specialy handle this case for every package.
> 

That's probably a bit of a problem. But that doesn't detract from the
usefulness of being able to have multiple binaries with the same path
IMO.

> >> Also what architecture should be called on x86_64 if both are there?
> >> i486 or amd64? Should that be configurable?
> >> 
> >
> > What do you mean here ? 
> 
> Say I have /usr/bin/firefox/i486 and /usr/bin/firefox/x86_64. Which
> one should be the default? Where/how do I set the default?
> 

The default would then be x86_64. I don't remember if AIX had a
per-process setting to change this. 

> I never use flash so I want the x86_64 default. But userB always uses
> flash and wants i486. How do you set that up per user?
> 

You could use something like prctl for this. Note that current multiarch
doesn't solve this problem either.

> 
> But /bin/sh then is a directory containing i486 and x86_64. Or just
> one of them. Or cotaining mips, mipsn32, mips64, mipsel, mipsn32el,
> mips64el, mips-abi32, mips-abi64, mips-abi32el, mips-abi64el.
> 

So? There is no difference between executing /bin/sh directly and
having it invoked as the interpreter for a script.

L & L

p2.

-- 
goa is a state of mind


signature.asc
Description: Digital signature


Re: multiarch status update

2006-05-16 Thread Peter 'p2' De Schrijver
> > I don't think so. I see at least a few possible uses for this :
> >
> > 1) have a shared filesystem between machines of multiple architectures
> > 2) test your programs on architectures you don't have by using qemu
> 
> It might have its use there but it can't be simply done. The files
> from two packages must be disjunct. That was my point. Moving binaries
> into subdirs and calling them by their arch (e.g. /bin/ls/i486) would
> solve that. But something has to do this change. Either the packaging
> itself or dpkg when unpacking the deb. Both would mean a major change
> in what we (and everybody else) currently have.
> 

That could just be part of the package, i.e. unpacking the files
automatically puts them in the right place.

> Lets say we do add special dirs for binaries and let dpkg manage
> them. How would that work with old and new debs mixed together? Should
> dpkg move all binaries into subdirs on upgrade once? Should it move
> binaries into subdirs when a second arch gets installed?
> 

It is possible to have both 'normal' and 'directory' binaries at the
same time. At least AIX managed to do that, although I don't know
exactly how. So this problem is probably non-existent.

> Also what architecture should be called on x86_64 if both are there?
> i486 or amd64? Should that be configurable?
> 

What do you mean here ? 

> I imagine that would need kernel support to work for "#!/bin/sh" and
> the like which again raises the question of compatibility.
> 
> 

No. #!/bin/sh would just execute /bin/sh as usual.

> Weigh the gain against the work and hopefully you will see that the
> cost outweigh the gain by a lot. If you want to share a filesystem to
> i486 and amd64 systems I guess you could use a unionfs for amd64 that
> has i486 as base and then just adds the 64bit stuff. Thats probably
> far simpler and better than adding the complexity to dpkg.
> 

Well, no, because there are far more uses than just i486 and amd64. I don't
think dpkg needs extra changes beyond being able to install packages for
another architecture and handling the dependencies per architecture (all of
which is necessary for multiarch anyway).

L & L

p2.

-- 
goa is a state of mind


signature.asc
Description: Digital signature


Re: multiarch status update

2006-05-16 Thread Peter 'p2' De Schrijver

> 
> The obvious problem here: The scheme is incompatible with non-multiarched
> software. It would at least require a package manager which specialcases
> /bin directory, a one-time conversion which moves the binaries, and
> some trickery for alternatives. Plus some more things which don't come
> to mind immediately, I guess.
> 

Hmm. I somehow recall that you could also have normal binaries; at least I
don't remember always having to make this sort of 'binaries'. I'm not sure
how AIX distinguished between the two types though. I guess the 'directory
binaries' had some magic bit set.

L & L

p2.

-- 
goa is a state of mind


signature.asc
Description: Digital signature


Re: multiarch status update

2006-05-15 Thread Peter 'p2' De Schrijver
> > Being able to install multiple versions is some use to multiarch, but
> > could also be used for other things, such if two packages provide the
> > same binary (git for example).
> > Or to install multiple 'version 'numbers' of the same package.
> 
> The big problem then is when to install multiple versions of a binary?
> How should the depends decide when that is needed or wanted and when
> not? Esspecialy when different versions are available per
> architecture.
> 

One way of doing this would be to make a binary a special directory
which contains the actual binary files for the architectures for which
binaries exist. AIX 1.x did this and allowed transparent execution of
binaries in a heterogeneous cluster. So if you started a binary on an
IA32 AIX machine which only existed in a mainframe AIX version, the
system would automatically start the binary on one of the mainframe AIX
nodes in the cluster. If an IA32 AIX binary was available, it would
start this binary locally. The 'binary directory' had the name the
binary would normally have, and the actual binary files get the
architecture name they implement as the filename. E.g. there would be a
/bin/ls directory containing 2 files: i386 and ibm370.
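
In shell terms, the hypothetical layout and lookup would be something
like this (a sketch only, not how dpkg or the kernel behaves today;
names are illustrative):

  # /bin/ls is a directory carrying one executable per architecture:
  #   /bin/ls/i386
  #   /bin/ls/ibm370
  # a naive resolver picks the file matching the local architecture
  arch=$(uname -m)
  if [ -d /bin/ls ] && [ -x "/bin/ls/$arch" ]; then
      exec "/bin/ls/$arch" "$@"
  fi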

> How would programs or user specifiy what binary to call? How would

You could explicitly start /bin/ls/i386 I think (which would fail if
you did it on the wrong machine).

> users even know which binary is which if they have the same name and
> both packages are installed on the system? Just imagine the confusion
> of a user installing foo (which provides the same binary "foo" as bar)
> and calling foo gives him bars "foo".
> 
> That is totaly out of the question. Packages that provide the same (by
> name) binary (or even just file) MUST conflict. period.
> 

I don't think so. I see at least a few possible uses for this :

1) have a shared filesystem between machines of multiple architectures
2) test your programs on architectures you don't have by using qemu

L & L

p2.
-- 
goa is a state of mind


signature.asc
Description: Digital signature


Bug#354906: ITP: libftdi -- programming interface for FTDI FT2232C, FT232BM and FT245BM USB interface chips.

2006-03-01 Thread Peter 'p2' De Schrijver
Package: wnpp
Severity: wishlist
Owner: "Peter 'p2' De Schrijver" <[EMAIL PROTECTED]>


* Package name: libftdi
  Version : 0.7
  Upstream Author : Intra2net AG <[EMAIL PROTECTED]>
* URL : http://www.intra2net.com/de/produkte/opensource/ftdi/
* License : LGPL
  Description : programming interface for FTDI FT2232C, FT232BM and FT245BM 
USB interface chips.

 libftdi is a library which provides a programming interface for the advanced
 features of the FTDI FT2232C, FT232BM and FT245BM chips. These chips can act
 as a USB interface to GPIO lines, a multiprotocol synchronous serial engine
 or an 8051-style bus, in addition to a standard USB to asynchronous serial
 converter. This library provides access to the additional functions. More
 information on the FTDI chips can be found at http://www.ftdichip.com.


-- System Information:
Debian Release: 3.1
  APT prefers unstable
  APT policy: (500, 'unstable')
Architecture: i386 (i686)
Kernel: Linux 2.6.13.4
Locale: [EMAIL PROTECTED], [EMAIL PROTECTED] (charmap=UTF-8) (ignored: LC_ALL 
set to en_IE.UTF-8)





Re: Co-maintainers sought

2005-12-11 Thread Peter 'p2' De Schrijver
On Sat, Dec 10, 2005 at 04:00:14PM +0100, Daniel Baumann wrote:
> Francesco Paolo Lovergine wrote:
> > X31 and T43p, and some friends with X40 and A series :-P
> 
> I can even top that one: r40, r50, x31, x40, x41, t42p, t43p and a30 :PP
> 

You want a Beowulf of ThinkPads? :)

> (and, just for the records, a 730c..)
> 

A30 and 860 here.

L & L,

p2.

-- 
goa is a state of mind


signature.asc
Description: Digital signature


Re: Announcing an intention to produce an armeb port of Debian

2005-09-19 Thread Peter 'p2' De Schrijver
On Mon, Sep 19, 2005 at 12:59:52PM +0200, Christoph Hellwig wrote:
> On Mon, Sep 19, 2005 at 08:16:42AM +, W. Borgert wrote:
> > On Mon, Sep 19, 2005 at 10:45:26AM +0930, Debonaras Project Lead wrote:
> > > The Debonaras project (http://www.debonaras.org) is a group of Linux
> > > developers who have created the beginnings of a big-endian ARM (armeb)
> > > port of Debian.  We have built 2500+ stable packages so far (see
> > > http://ftp.debonaras.org).
> > 
> > Just a completely uninformed question: Wasn't the -el (endian
> > little) in mipsel a pun on the "wrong" endianess?  If so,
> > shouldn't it be armBE, because it's the "right" endianess?
> 
> What gets you the impression there's a "wrong" endianess?
> 
> BE for arm is unusal, but I couldn't see why one is wrong or right.

It's not so unusual anymore since Intel introduced the IXP series of
chips, which come with mostly BE-oriented reference designs.

Cheers,

Peter (p2).

-- 
goa is a state of mind


signature.asc
Description: Digital signature


Re: Handling event device files [was: Bug#324604: [Fwd: The bug persists]]

2005-09-11 Thread Peter 'p2' De Schrijver
> 2) make makedev produce more of these files (but probably most users
>don't need them, at least not on desktop PCs which have seldomly
>two mouses or keyboards)

That's probably the right solution. Device nodes hardly take any
resources anyway.

Cheers,

Peter (p2).


signature.asc
Description: Digital signature


Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-24 Thread Peter 'p2' De Schrijver

> > > * By using a cross-compiler, by definition you use a compiler that is
> > >   not the same as the default compiler for your architecture. As such,
> > >   your architecture is no longer self-hosting. This may introduce bugs
> > >   when people do try to build software for your architecture natively
> > >   and find that there are slight and subtle incompatibilities.
> > > 
> > 
> > I have never seen nor heared about such a case. IME this is extremely
> > rare (if it happens at all).
> 
> Do you want to take the chance of finding out the hard way after having
> built 10G (or more) worth of software?
> 

I don't see why the risk would be higher compared to native compilation.

> This is not a case of embedded software where you cross-compile
> something that ends up on a flash medium the size of which is counted in
> megabytes; this is not a case of software which is being checked and

Some embedded software is fairly extensive and runs from HD.

> tested immediately after compilation and before deployment. This is a

Most packages are not tested automatically at all.

> whole distribution. Subtle bugs in the compiler may go unnoticed for a
> fair while if you don't have machines that run that software 24/7. If

Only a very tiny fraction of the software in debian runs 24/7 on debian
machines.

> you replace build daemons by cross-compiling machines, you lose machines
> that _do_ run the software at its bleeding edge 24/7, and thus lose
> quite some testing. It can already take weeks as it is to detect and

Most cross-compiled software also runs 24/7. I have yet to see problems
caused by cross-compiling the code.

> track down subtle bugs if they creep up in the toolchain; are you
> willing to make it worse by delaying the time of detection like that?
> 

They wouldn't necessarily show up any faster in native builds. 

> I'm not saying this problem is going to hit us very often. I do say this
> is going to hit us at _some_ point in the future; maybe next year, maybe
> in five years, maybe later; in maintaining autobuilder machines over the
> past four years, I've seen enough weird and unlikely problems become
> reality to assume murphy's law holds _quite_ some merit here. The
> important thing to remember is that this is a risk that is real, and
> that should be considered _before_ we blindly switch our build daemons
> to cross-compiling machines.
> 

I don't think the risk is real considering the amount of cross-compiled
software already running in the world.

Cheers,

Peter (p2).


signature.asc
Description: Digital signature


Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-24 Thread Peter 'p2' De Schrijver
> * Many packages don't support cross-compiling, and those that do may
>   have bugs in their makefiles that make cross-compiling either harder
>   or impossible.
> * You can't run the test suites of the software you're compiling, at
>   least not directly.
> * There's a serious problem with automatically installing
>   build-dependencies. Dpkg-cross may help here, but there's no
>   apt-cross (at least not TTBOMK); and implementing that may or may not
>   be hard (due to the fact that build-dependencies do not contain
>   information about whether a package is an arch:all package or not).

Scratchbox solves these problems.

> * By using a cross-compiler, by definition you use a compiler that is
>   not the same as the default compiler for your architecture. As such,
>   your architecture is no longer self-hosting. This may introduce bugs
>   when people do try to build software for your architecture natively
>   and find that there are slight and subtle incompatibilities.
> 

I have never seen nor heard about such a case. IME this is extremely
rare (if it happens at all). The only way to know whether this is a real
problem is to try cross-compiling and verify against existing natively
compiled binaries. Unfortunately the verification step is quite annoying,
as a simple cmp will likely fail because of things like the build date,
build number, etc. included in the binary. For packages which have a
testsuite, that testsuite could be used as the verification step.
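
A rough sketch of such a verification run, assuming a natively built and
a cross-built .deb of the same package are already at hand (file names
are illustrative):

  # unpack both packages and compare their contents file by file;
  # expect some noise from embedded build dates and the like
  dpkg-deb -x foo_1.0-1_arm_native.deb native/
  dpkg-deb -x foo_1.0-1_arm_cross.deb cross/
  diff -r native/ cross/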

> Hence the point of trying out distcc in the post to d-d-a; that will fix
> the first three points here, but not the last one. But it may not be
> worth the effort; distcc runs cc1 and as on a remote host, but cpp and
> ld are still being run on a native machine. Depending on the program
> being compiled, this may take more time than expected.
> 

Which is why Scratchbox is a more interesting solution, as it only runs
those parts on the target which can't be run on the host.

Cheers,

Peter (p2).


signature.asc
Description: Digital signature


Re: Using buildds only (was: Results of the meeting...)

2005-08-23 Thread Peter 'p2' De Schrijver
On Tue, Aug 23, 2005 at 11:25:41AM +0200, Marc Haber wrote:
> On Tue, 23 Aug 2005 01:42:18 +0200, Martin Pitt <[EMAIL PROTECTED]>
> wrote:
> >Something like this is in fact considered. Probably Ubuntu won't use
> >pbuilder itself since it is not the most efficient implementation
> >around, but rebuilding the buildd chroots from scratch would help to
> >eliminate many FTBFS bugs due to polluted chroots.
> 
> Surely you are aware how much time it takes to rebuild a chroot on
> slower architectures. Some technology able to restore a large
> directory tree to a static default in short time should be used here.
> 
> I am not sure whether LVM (have a LV with the master chroot image,
> make a snapshot, build inside the snapshot, remove the snapshot) could
> help here.

Or only rebuild the chroot when a build failure is detected (or better,
when a build failure is detected that could be caused by a broken
chroot).
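
For reference, the snapshot approach mentioned above could look roughly
like this (a sketch, assuming a volume group "vg" holding a pristine
chroot in the logical volume "chroot-sid"; names and sizes are
illustrative):

  lvcreate -s -L 2G -n build-tmp /dev/vg/chroot-sid
  mount /dev/vg/build-tmp /srv/build
  # ... run the build inside /srv/build (sbuild, dpkg-buildpackage, ...) ...
  umount /srv/build
  lvremove -f /dev/vg/build-tmp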

Cheers,

Peter (p2).


signature.asc
Description: Digital signature


Re: vancouver revisited

2005-08-22 Thread Peter 'p2' De Schrijver
On Tue, Aug 23, 2005 at 12:00:07AM +0200, Wouter Verhelst wrote:
> On Mon, Aug 22, 2005 at 01:53:37PM +0200, Peter 'p2' De Schrijver wrote:
> > > Claiming "nobody sane will ever use that" means someone who's actually
> > > interested in using said software, even if it's slow is left out in the
> > > cold. That's silly.
> > 
> > The user can always ask to build it or provide resources to build it.
> 
> Rotfl.
> 
> Imagine:
> 
> You have one m68k machine, and want to use it as a lightweight browsing
> machine (yes, some people do that). To have a browser, however, you'll
> need to let it run for >24 hours because the browser isn't compiled...
> and then you end up with mozilla, but you prefer a lightweight browser,
> such as galeon. Add another bunch of hours.

Galeon is by no means a lightweight browser. If you want a lightweight
browser, look at Dillo or the GPE mini browser.

Cheers,

Peter (p2).


signature.asc
Description: Digital signature


Re: vancouver revisited

2005-08-22 Thread Peter 'p2' De Schrijver
> Uh, no. Just to name one example: tell me, are you absolutely and 100%
> sure no user will ever try to use a gecko-based browser on an older
> architecture? And yes, if you want to support that, that means you have
> to build mozilla
> 
> There _are_ lightweight gecko-based browsers, you know.
> 

Apparently not anything that is acceptable for GPE.

> Claiming "nobody sane will ever use that" means someone who's actually
> interested in using said software, even if it's slow is left out in the
> cold. That's silly.

The user can always ask to build it or provide resources to build it.

Cheers,

Peter (p2).


signature.asc
Description: Digital signature


Re: vancouver revisited

2005-08-22 Thread Peter 'p2' De Schrijver

> 
> I don't agree with that interpretation of "arch-specific", and neither
> do the maintainers of the Packages-arch-specific list AFAICT, so please
> stop trying to use creative interpretations of people's words to torpedo
> the proposal that porters should be accountable for their ports.
> 

I have no idea what you're trying to say here.

> > > > and it cannot be the porters problem that packages violate
> > > > language rules and therefore fail to compile or work on some arch.
> 
> > > well, if the package is bogus from the language usage, than that's not
> > > the porters problem (but how often did that hit exactly one arch?). If
> > > the arch can't e.g. use C++-packages because it doesn't have the
> > > toolchain for c++, I think that is the porters problem (just to give an
> > > possible example).
> 
> > I have seen multiple examples of builds failing because the testsuite or
> > a buildtime generated tool crashed on a specific arch due to bad coding
> > practices.
> 
> And in some cases these are so severe that the package should
> unequivocally be ignored for that architecture.  In other cases, it is
> incumbent upon porters to, y'know, *port*.  If we're going to give a
> port a free pass every time some base package, or package that's
> installed as part of the desktop task (for example) manages to include
> code that's not portable, then I don't see any point at all in treating
> these as release architectures to begin with, because at that point
> they're *not* shipping the same OS that the other architectures are.
> 

What did you want to say here ?

Cheers,

Peter (p2).


signature.asc
Description: Digital signature


Re: vancouver revisited

2005-08-22 Thread Peter 'p2' De Schrijver
> > but well, I suppose the line is hard to draw.
> 
> Exactly, and that is why we don't try.
> 
> I agree with you that mozilla is probably fairly useless on m68k. But if
> you start excluding packages, you'll fairly soon end up on a slipperly
> slope where you start excluding packages, and in the end you don't build
> any useful stuff anymore.

The line is quite easy to draw; it just requires a bit of common sense.

Cheers,

Peter (p2).


signature.asc
Description: Digital signature


Re: vancouver revisited

2005-08-22 Thread Peter 'p2' De Schrijver
> > Trivial. debootstrap does that. 
> 
> Debootstrap is not an installer, in very much the same way that tar
> isn't, either.
> 

They both are: they can install Debian, so they are installers.

> > > - security team, DSA, and release team must not veto inclusion
> > 
> > Arbitrary veto power. This requirement is unacceptable for me. Noone
> > should be allowed to just ignore other peoples work within the project.
> 
> This is one argument I do have sympathy for. In fact, I went to that
> meeting with the exact same thought.
> 
> The thing is, however, that you're dealing with a madman scenario:
> you're afraid that one day, a member of one of the above teams will go
> crazy and declare that they don't like this person that works on a port,
> or just that port entirely. Or, less extreme, you're afraid that a
> member of the above teams will make a judgement error and veto a port
> for bogus reasons.

Rules don't work for madman scenarios anyway, so the rule is useless and
should not be there.

Cheers,

Peter (p2).


signature.asc
Description: Digital signature


Re: vancouver revisited

2005-08-22 Thread Peter 'p2' De Schrijver
On Mon, Aug 22, 2005 at 11:05:59AM +0200, Pierre Habouzit wrote:
> Le Lun 22 Août 2005 10:29, Peter 'p2' De Schrijver a écrit :
> > Hi,
> >
> > > The "reasonable foundation" for having a redundant buildd in a
> > > separate physical location is, I think, well-established.  Any
> > > random facility can lose power, perform big router upgrades, burn
> > > down, etc.  Debian machines also seem to be prone to bad RAM, bad
> > > power supplies, bad disk arrays, and the like, and these things
> > > can't always be fixed within a tight time window.
> >
> > The problem is not requiring a redundant buildd, the problem is
> > the arbitrary limit on the amount of 'buildd machines' of 2.
> 
> if one of one buildd is down, or more likely if one piece of network 
> behind the buildd and the rest of the world is down for 1 month, or 
> worse than down : malfunctioning (with some nice tcp connections 
> loss) ... then if such a thing happens during :
>  * the c++ transition (that is a *real* pain for the buildd's)
>  * 2 weeks before a major distro freeze
>  * 
> what can you do ? the answer is wait and pray. *great*
> 
> No, 2 buildd's machines is a minimum requirement that is *not* 
> arbitrary, it's only wisdom.
> 

I think you misunderstood me here. The limit is an upper limit, not a
lower limit. 

Cheers,

Peter (p2).


signature.asc
Description: Digital signature


Re: vancouver revisited

2005-08-22 Thread Peter 'p2' De Schrijver
> > How do you boot to a system to run debian-installer when there is no
> > bios or bootloader on the system yet?
> 
> Just take a look at the existing Debian ports, and you see that it's ok
> to use a bios that's part of the hardware.
> 
> > Should debian-installer support
> > installing via JTAG? What happens on many embedded systems, 
> > debianish system is debootstrapped on a different machine and put into 
> > jffs2 image,  which is then flashed to a pile of devices. Walking 
> > through d-i every time would be very clumsy, so there is no use
> > for a working installer for those systems.
> 
> Do we speak about ports to architectures or machines? If the sole
> purpose is to be able to flash some embedded device, frankly speaking I
> doubt that we need to host any *.deb-files on the ftp mirror network.
> 

That's all beside the point. The point is that there are a lot of other
ways to install Debian besides booting from installer media and running
d-i, like flashing a premade image using JTAG, and these count as much as
a 'Debian installer' as running d-i does.
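
A rough sketch of that "debootstrap elsewhere, flash everywhere" flow
(a sketch only; arch, suite and mirror are illustrative, and mkfs.jffs2
comes from the MTD tools):

  debootstrap --arch arm --foreign sarge ./rootfs http://ftp.debian.org/debian
  # ... finish the second stage on the target or under emulation ...
  mkfs.jffs2 --root=./rootfs -o rootfs.jffs2
  # then write rootfs.jffs2 to the devices via JTAG or a flash programmer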

Cheers,

Peter (p2).


signature.asc
Description: Digital signature


Re: vancouver revisited

2005-08-22 Thread Peter 'p2' De Schrijver
Hi,

> The "reasonable foundation" for having a redundant buildd in a separate
> physical location is, I think, well-established.  Any random facility
> can lose power, perform big router upgrades, burn down, etc.  Debian
> machines also seem to be prone to bad RAM, bad power supplies, bad disk
> arrays, and the like, and these things can't always be fixed within a
> tight time window.
> 

The problem is not requiring a redundant buildd; the problem is
the arbitrary limit of 2 on the number of 'buildd machines'.

> > Except that arch-specific package has always meant 'contains arch
> > specific code', not 'does not make sense to run on this arch'. So
> > this clause doesn't cover all cases.
> 
> Packages can be confined to specific architectures even in cases where
> they're written portably.  For example, I just looked at the
> isapnptools source and I don't see anything particularly non-portable
> in it.  (It does require a concept of iopl().)  But it's still useless
> on platforms that don't have ISA busses, so it claims "Architecture:
> alpha amd64 arm i386".  And I don't remembering seeing anyone wanting
> to change this, even before isapnptools was removed from unstable.

This is still somewhat arch-specific code, as it assumes iopl and the
availability of an ISA bus. I'm thinking more of packages which require a lot
of RAM to build and run, and are thus useless on archs which generally don't
have that much RAM, for example. 

Cheers,

Peter (p2).


signature.asc
Description: Digital signature


Re: vancouver revisited

2005-08-21 Thread Peter 'p2' De Schrijver
Hi,

> 
> > Bogus requirement. At the moment we have less then 1 s390 buildd for 
> > example.
> 
> "machine" translates with partition btw - though the two different
> partitions should be in different physical locations, for obvious
> reasons. Yes, we want a redundancy for good reasons.
> 

Which is very arbitrary to me; 'machine' to me means a physical box with
hardware and software doing stuff. So this requirement is very much
arbitrary and without any reasonable foundation.

> 
> > > Overall:
> > > - must have successfully compiled 98% of the archive's source
> > >   (excluding arch-specific packages)
> 
> > Useless requirement. Less then 98% of the archive may be useful for the
> > architecture
> > >   (excluding arch-specific packages)
> that's there for a reason
> 

Except that 'arch-specific package' has always meant 'contains arch-specific
code', not 'does not make sense to run on this arch'. So this
clause doesn't cover all cases.

> > and it cannot be the porters problem that packages violate
> > language rules and therefore fail to compile or work on some arch.
> 
> well, if the package is bogus from the language usage, than that's not
> the porters problem (but how often did that hit exactly one arch?). If
> the arch can't e.g. use C++-packages because it doesn't have the
> toolchain for c++, I think that is the porters problem (just to give an
> possible example).
> 
> 

I have seen multiple examples of builds failing because the testsuite or
a build-time-generated tool crashed on a specific arch due to bad coding
practices.

> > > - must have a working, tested installer
> 
> > Trivial. debootstrap does that. 
> 
> How do you boot the system to run debootstrap? (Note: the answer
> "gentoo" or "Windows" is not what I want to hear :) It is agreed that
> this isn't a too high barrier, but - well, we should still require it.
> If no (potential) port has an issue with that, it's even better.
> 

You boot from an existing rootfs image, just like you boot from your
existing install media. 

> 
> > > - security team, DSA, and release team must not veto inclusion
> 
> > Arbitrary veto power. This requirement is unacceptable for me. Noone
> > should be allowed to just ignore other peoples work within the project.
> 
> Please read Wouter's explanation. There was really very much discussion
> on exactly this.
> 

I did. And I still consider it unacceptable. I see no reason for having
this sort of far reaching veto powers.

Cheers,

Peter (p2).


signature.asc
Description: Digital signature


vancouver revisited

2005-08-21 Thread Peter 'p2' De Schrijver
Hi,

Some comments :

> Initial:
> - must be publically available to buy new

Trivially true for any architecture, even VAX.

> - must be freely usable (without NDA)
> - must be able to run a buildd 24/7 without crashing
> - must have an actual, working buildd
> - must include basic UNIX functionality

Whatever that may mean

> - 5 developers must send in a signed request for the addition
> - must demonstrate to have at least 50 users
> - must be able to keep up with unstable with 2 buildd machines, and must
>   have one redundant buildd machine

Bogus requirement. At the moment we have less than 1 s390 buildd, for example.

> Overall:
> - must have successfully compiled 98% of the archive's source
>   (excluding arch-specific packages)

Useless requirement. Less than 98% of the archive may be useful for the
architecture, and it cannot be the porters' problem that packages violate
language rules and therefore fail to compile or work on some arch.

> - must have a working, tested installer

Trivial. debootstrap does that. 

> - security team, DSA, and release team must not veto inclusion

Arbitrary veto power. This requirement is unacceptable to me. No one
should be allowed to just ignore other people's work within the project.

Cheers,

Peter (p2).


signature.asc
Description: Digital signature


Re: And now for something completely different... etch!

2005-06-16 Thread Peter 'p2' De Schrijver
> 
> Ummm... And if instead of asking the user for a disk change, this
> mini-initrd just keeps polling the floppy for a non-erroneous read
> (this means, the drive is not empty) with the correct magic at the
> correct place?

I don't think you actually have to read anything. You can use the disk
change line to see if a disk has been inserted or not.

Cheers,

Peter (p2).


signature.asc
Description: Digital signature


Re: machines (was: Canonical and Debian)

2005-06-08 Thread Peter 'p2' De Schrijver
> That gdb-problem is under investigation.
> 
> > Well, at least we've got a porter machine, could that be turned into a
> > buildd on relatively short notice if necessary?  The gdb issue is
> > something I certainly hope is being looked into or at least has been
> > brought up to the LKML and debian-hppa folks.
> 
> Well, my understand was that we first try the kernel that was not killed
> by gdb, and also a current kernel, and look at the differences. For
> obvious reasons, one wants a local admin to be around for that.
> 

Is this problem easily reproducible? Is it machine/CPU-type specific?
Perhaps we could look into it in HEL; we should have alphas there.

Cheers,

Peter (p2).


signature.asc
Description: Digital signature


Re: Storage (was: Canonical and Debian)

2005-06-07 Thread Peter 'p2' De Schrijver
On Tue, Jun 07, 2005 at 09:47:23AM +0200, Stig Sandbeck Mathisen wrote:
> Peter 'p2' De Schrijver <[EMAIL PROTECTED]> writes:
> 
> > That sounds retarded in an age where a 200GB HD cost less then 100
> > Euro...
> 
> Regarding storage: "Fast, cheap and secure; pick any two".
> 
> Good Storage have more costs than the price of the cheapest disks on
> the market.  For a file server, especially a software mirror for
> Internet users, you'll want "fast" and "secure", you can't have
> "cheap".
> 

For lightly used archives, secure and cheap are more important than
fast. So I pick secure and cheap in that case.

Cheers,

Peter (p2).


signature.asc
Description: Digital signature


Re: Canonical and Debian

2005-06-06 Thread Peter 'p2' De Schrijver
On Mon, Jun 06, 2005 at 11:24:14PM +0200, Wouter Verhelst wrote:
> On Mon, Jun 06, 2005 at 07:22:08PM +0200, Peter 'p2' De Schrijver wrote:
> > > * Split the architectures over two sets of mirror networks, so that
> > >   mirror administrators don't need 100G just to mirror Debian anymore.
> > 
> > That sounds retarded in an age where a 200GB HD cost less then 100 Euro...
> > Anyway you can always decide to mirror only part of the archive if you
> > want to, even today.
> 
> Oh, come on! That's not a serious argument, is it?
> 
> First, to run a reliable mirror, one hard disk isn't going to get you
> there -- you need several in some sort of a RAID setup.
> 

I don't think this argument holds for lightly used parts of the mirror.
And if the disk breaks, too bad, just download the lost bits again.
I find it hard to believe you would need an enterprise-class storage
system to store a lightly used mirror. Even with RAID 1 it would still
be under 200 Euro.

> Second, I have yet to see the first mirror that mirrors Debian
> exclusively. Most mirror Debian and some other sites.
> 

I didn't say all of the mirror should be on 1 disk :)

> Third, the 100G isn't a constant number. It will most likely jump up
> high in a few weeks, as we split a new testing off of unstable and leave
> woody to become oldstable.
> 

It most likely won't double in the next half year :)

> Fourth, adding the architectures that are waiting around the corner
> (amd64, kFreeBSD, ...) will the disk space requirements even more.
> 

Yes. Disk prices go down, though. As long as Debian doesn't grow faster
than disks become cheaper, we're fine.

> Fifth, many of our mirror admins want this -- the proof is easy, just
> look at the number of mirrors that does already drop a few from the list
> of mirrorred architectures even today. That even includes at least one
> primary mirror.
> 

Aha. So there is no reason to change anything, as those who wish to only
mirror part of the archive can already do that now.

> > Downloading 5GiB takes about 1 and 12 minutes on a 2Mbit/s link...
> > 2Mbit/s is hardly state of the art in IP networks... (state of the art
> > is more like 40GBit/s). And you can still mirror only part of the
> > archive if you want to save bandwidth, even today.
> 
> Indeed, and some of our mirrors are already doing so. It would, thus, be
> interesting if we could formalize that somehow, which is what the first
> bit of the proposal is all about.
> 

But as you say, this is being done right now. No reason to change
anything.

> > The current list doesn't make much sense at all.
> 
> Some elements in the current list don't, indeed. Some do.
> 
> > Some points just don't make any sense (like limiting the number of
> > buildds, or just outright refusing the arch for no reason,...)
> 
> Those are indeed two elements that I personally would like to see
> removed. But the idea of requiring that an architecture fulfills some
> basic quality criteria isn't too silly.
> 

Yes. The basic quality criterion which makes sense is that the packages
in the archive work. The 98% criterion doesn't make sense, as it
might not be useful to have some packages on some architectures.

Happy Hacking,

p2.


signature.asc
Description: Digital signature


Re: Canonical and Debian

2005-06-06 Thread Peter 'p2' De Schrijver
> > Then how did these people end up choosing to support the same set of
> > architectures as Ubuntu?
> 
> I know I've been screaming murder and hell about this, but in hindsight,
> after having read the proposers' explanations (and the proposal itself
> for a few more times), this certainly is not what they're proposing.
> 

It's just a 'coincidence...'

> The whole thing is confusing, because the one nybbles mail talks about
> multiple things, and it's easy to mix those up. But in reality, the
> nybbles proposal suggests that we do the following:
> 
> * Split the architectures over two sets of mirror networks, so that
>   mirror administrators don't need 100G just to mirror Debian anymore.

That sounds retarded in an age where a 200GB HD costs less than 100 Euro...
Anyway, you can always decide to mirror only part of the archive if you
want to, even today.
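
Mirroring only part of the archive is already straightforward (a sketch,
assuming debmirror; host, suite, sections and architectures are
illustrative, see debmirror(1)):

  debmirror --host=ftp.debian.org --root=debian --method=http \
            --dist=sarge --section=main,contrib --arch=i386,powerpc \
            /srv/mirror/debian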

>   This has nothing to do with what architectures can release a "stable"
>   and what architectures cannot; only with mirror bandwidth and disk
>   space usage. The popularity of an architecture will be a deciding
>   factor in the decision of what archive mirror network will be used,
>   but there's of course nothing wrong with that; architectures would be
>   allowed to create a stable release on that "second-class" mirror
>   network, and that's what counts.

Downloading 5GiB takes about 1 and 12 minutes on a 2Mbit/s link...
2Mbit/s is hardly state of the art in IP networks... (state of the art
is more like 40GBit/s). And you can still mirror only part of the
archive if you want to save bandwidth, even today.

> * Create a set of rules that an architecture has to abide by in order to
>   be allowed to release. This set of rules would be there to make sure
>   that a port's porters, rather than the set of release managers, ftp
>   masters and the like, do all the work in making the port work.
>   Provided that set of rules is sensible (which I'm not entirely sure of
>   right now, but that can be fixed), there's nothing wrong with such a
>   suggestion.
> 

The current list doesn't make much sense at all. Some points just don't
make any sense (like limiting the number of buildds, or just outright
refusing the arch for no reason,...)

> While it is indeed very likely that only amd64, i386, and (perhaps)
> powerpc fall in the first thread, the same is very much not true for the
> second set.
> 

The second set is also not Debian. It's not based on the same source
packages, it has different release cycles, it has a different testing
repository, and we will have 6 or more of those variants
(mips, sparc, alpha, hppa, m68k, arm) all called 'etch'.

So effectively this proposal kills 82% of Debian, and causes more work and
more confusion.

Happy hacking,

Peter (p2).


signature.asc
Description: Digital signature


Re: acenic firmware rewrite

2005-04-09 Thread Peter 'p2' De Schrijver
On Sat, Apr 09, 2005 at 01:13:57PM -0400, Andres Salomon wrote:
> On Thu, 07 Apr 2005 01:11:38 +0200, Peter 'p2' De Schrijver wrote:
> 
> > Hi,
> > 
> > Reading http://lists.debian.org/debian-legal/2004/12/msg00078.html I
> > wondered if people would be willing to work on a free firmware for the
> > Tigon II chip.  I didn't look at the existing code yet, but looking at the
> > datasheet
> > (http://alteon.shareable.org/firmware-source/12.4.13/tigonbk.pdf.bz2) it
> > doesn't seem to be a very complicated chip to code for. I'm not sure
> > however, how to handle the development in such a way the resulting
> > firmware can be released under a free license without any legal risks. I
> > have 2 PCI boards with the Tigon II chip and a 1000BaseSX PHY. I also have
> > an Ace Director III loadbalancer which consist of 10 Tigon II chips. 8 of
> > those are used for 100BaseT interfaces, 1 has a 1000BaseSX PHY and the
> > 10th is used as a system controller.
> > 
> 
> I can't imagine there would be any issues if you released both the source
> and binary, and licensed them under the GPL.  The firmware source and
> binary can both be distributed in the kernel (with the binary actually in
> the driver source code).
> 

Sure, that's OK. I was thinking more of someone reading the existing
firmware sources and writing a spec, and a second person/group implementing
the new free firmware based on the spec. AFAICS the implementors and the
spec writers should be different people/groups. Or do you think it would
be OK if the same people read the existing non-free sources and reimplemented
their functionality in a new free firmware?

Thanks,

Peter (p2).


signature.asc
Description: Digital signature


acenic firmware rewrite

2005-04-06 Thread Peter 'p2' De Schrijver
Hi,

Reading http://lists.debian.org/debian-legal/2004/12/msg00078.html I
wondered if people would be willing to work on a free firmware for
the Tigon II chip.  I didn't look at the existing code yet, but looking
at the datasheet
(http://alteon.shareable.org/firmware-source/12.4.13/tigonbk.pdf.bz2) it
doesn't seem to be a very complicated chip to code for. I'm not sure,
however, how to handle the development in such a way that the resulting
firmware can be released under a free license without any legal risks.
I have 2 PCI boards with the Tigon II chip and a 1000BaseSX PHY. I also
have an Ace Director III load balancer which consists of 10 Tigon II
chips. 8 of those are used for 100BaseT interfaces, 1 has a 1000BaseSX
PHY and the 10th is used as a system controller.

Comments or ideas welcome.

Cheers,

Peter (p2).



signature.asc
Description: Digital signature


Re: Vancouver meeting - clarifications

2005-04-03 Thread Peter 'p2' De Schrijver
On Sun, Apr 03, 2005 at 10:30:03PM +1000, Andrew Pollock wrote:
> On Sun, Mar 27, 2005 at 05:03:50PM +0200, Peter 'p2' De Schrijver wrote:
> > 
> > You don't need to install anyone else's operating system. You can easily
> > do : 
> > 
> > Boot target using NFS root
> 
> Ah but how do you create an NFS root for one architecture on another? This
> is one of the limitations of FAI. One cannot create the NFS root for say
> Sparc on i386.

That's a generic Debian problem. The Hurd people have built something
which solves this issue more or less. Presumably it can be extended to
other archs/OSes as well; I haven't looked into it though.

Cheers,

Peter (p2).


signature.asc
Description: Digital signature


Re: Vancouver meeting - clarifications

2005-03-27 Thread Peter 'p2' De Schrijver
On Sun, Mar 27, 2005 at 03:00:07AM -0800, Steve Langasek wrote:
> On Tue, Mar 15, 2005 at 01:39:27PM +0100, Peter 'p2' De Schrijver wrote:
> > > | - the release architecture must have a working, tested installer
> > > I hope that's obvious why. :)
> 
> > As long as FAI or even raw debootstrap counts, I can agree here.
> 
> No, debootstrap isn't an installer, and shouldn't be counted as such for the
> purpose of release eligibility.  If you have to install someone else's
> operating system first to be able to install Debian, then we don't have an
> installer.  There *are* reasons that debian-installer has been emphasized as
> much as it has during the sarge release.
> 

You don't need to install anyone else's operating system. You can easily
do:

Boot the target using an NFS root
Create filesystems
Run debootstrap
Edit the config files
Reboot into the new system

The NFS root can be created using debootstrap or extracted from a
prebuilt archive. 
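
A minimal sketch of those steps, once the target is up on its NFS root
(device, suite and mirror are illustrative):

  mkfs.ext3 /dev/sda1
  mount /dev/sda1 /target
  debootstrap sarge /target http://ftp.debian.org/debian
  # edit /target/etc/fstab and /target/etc/network/interfaces, install a
  # kernel and boot loader, then reboot into the new system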

Cheers,

Peter (p2).


signature.asc
Description: Digital signature


Re: How to define a release architecture

2005-03-22 Thread Peter 'p2' De Schrijver
> 
> The only sarge architectures that are likely of being affected by your 
> "must be publicly available to buy new" rule during the next 10 years 
> are hppa and alpha (dunno about s390).
> 

Given IBM's track record in backwards compatibility, I don't expect s390
to die at all :) Even the latest zSeries can still run IBM 360 code,
AFAIK.

Cheers,

Peter (p2).


signature.asc
Description: Digital signature


Re: How to define a release architecture

2005-03-22 Thread Peter 'p2' De Schrijver
On Tue, Mar 22, 2005 at 07:45:00AM +, Alastair McKinstry wrote:
> On Máirt, 2005-03-22 at 00:11 +0100, Peter 'p2' De Schrijver wrote:
> > > If Debian is keeping an arch alive so much that one can still buy it new, 
> > > I
> > > certainly can't see why we should not continue releasing for that arch,
> > > however.  So I'd say Matthew's explanation is not perfect.  But the
> > > reasoning behind it is not difficult to spot.
> > > 
> > > Throwing out this requirement makes sense, if and only if there is another
> > > way to get sure a released arch will not be left stranded.  So, let's work
> > > on these alternate ways, so that this rule can be removed.
> > > 
> > 
> > It's not because you can't buy a new machine, the arch suddenly stops
> > being useful.
> 
> 
> I think the point of this requirement is to support it we need buildds
> in the future for security fixes. Hence while I might like my mips box,
> etc. it would be irresponsible for us to do a release that we could not
> support in e.g. two years time when the motherboards of our buildds die.
> 

The arch should still be available, but a big enough collection of
existing machines will do here IMO. Not that this matters for mips, as
there are new MIPS-based systems available. Both Broadcom and PMC
announced new MIPS-based chips, for example, and there are AMD (Alchemy)
and a bunch of others using MIPS as well.

Cheers,

Peter (p2).


signature.asc
Description: Digital signature


Re: How to define a release architecture

2005-03-21 Thread Peter 'p2' De Schrijver
> If Debian is keeping an arch alive so much that one can still buy it new, I
> certainly can't see why we should not continue releasing for that arch,
> however.  So I'd say Matthew's explanation is not perfect.  But the
> reasoning behind it is not difficult to spot.
> 
> Throwing out this requirement makes sense, if and only if there is another
> way to get sure a released arch will not be left stranded.  So, let's work
> on these alternate ways, so that this rule can be removed.
> 

It's not because you can't buy a new machine that the arch suddenly stops
being useful.

> > 
> > A far more acceptable alternative would be to not have openoffice on those 
> > archs.
> 
> IMHO you are quite right.  If we can agree on that one, we should make it so
> that packages that are effectively blocked from an arch count as an
> arch-specific (of another arch) for that arch. I.e., they do not lower the
> number-of-packages-built rating.  So, they do not bother that arch at all.
> 

Marking them arch-specific would be a good starting point IMO.

> Add a hard limit that at least the entire set of non-arch-specific standard,
> required and essential packages have to be supported (or a minimum of 90% of
> them, maybe?), plus at least 50% of optional and 50% of extra (again,
> non-arch-specific) to avoid abuse, and that would do it.
> 

You need something here, yes, but not as stringent as what has been
proposed up to now.

> DO note that we could also decide to accept that a buildd is something that
> has the latencies of a [fast enough] single unit, so, e.g., a distcc farm
> would count as one buildd.  Makes a lot of sense to me, since it addresses
> the latency problem...  but you WOULD need N+1 *farms* in different
> locations, etc. to address the reliability problem.
> 
> Let the porter teams release an *official* list of not-for-us packages for
> their arch, which get permanently excluded from that arch's release, and
> does not cause any sort of problems *at* *all* for the maintainer of said
> packages (such as annoying problems with the testing scripts if the arch
> drops the package after it had made it to testing for that arch), and I
> don't see how that could be a bad thing.
> 

Seems to make sense yes.

> > > packages are held back from testing by an architecture not being able to
> > > build them. 
> >
> > In which case those packages can simply be not available for the
> > architecture in question.
> 
> Yes, IMHO.  This would take care of it, as long as we have the proper
> infrastructure in place to make it easy to manage.
> 

The current Architecture: list should do ?

> No.  This requirement can be satisfied by having someone IN THE SECURITY
> team willing to support the arch.  There are various ways to make sure the
> security team is willing to support one arch, and I believe the most
> effective one is to get your hands dirty and go help them like crazy right
> now, so that they trust they will have help for that arch should they need
> it.
> 
> I'd suppose a damn good way to start is to make sure there are at least two
> porters of every arch (and I *do* count i386, amd64, powerpc and other
> popular arches here) *active* in the testing security team.
> 

Except that this requires NDAs to be signed which people might not be
willing to do.

> > > * the Debian System Administrators (DSA) must be willing to support
> > >   debian.org machine(s) of that architecture
> > > 
> > > This is in order to ensure that developer-accessible machines exist.
> > 
> > This requirement can be satisfied by stating that some one must be
> > willing to support debian.org machine(s) of that architecture.
> 
> My guess is that it would need to be a machine under the DSA control to make
> sure the security team and stable maintainership is never left out in the
> cold.
> 
> Is this actually a problem for any current arch (either released or not)?
> 

AFAIK not, but you never know.

> > > * the Release Team can veto the architecture's inclusion if they have
> > >   overwhelming concerns regarding the architecture's impact on the
> > >   release quality or the release cycle length
> > > 
> > > A get out clause - it must be possible for something to be refused if it
> > > would break the release, even if it meets all the other criteria.
> > 
> > This is unacceptable. It would for example allow archs to be refused
> > because their names starts with an 'A'.
> 
> Let's be very realistic here.  If for some weird reason, any of the
> post-release teams (security, DSA, release manager) will not touch any archs
> whose name start with an 'A', how exactly can we keep these archs working as
> a Debian stable arch must be?
> 
> We would have to do a parallel release or something like that, using
> unofficial mirrors, etc... or to drop a released stable arch in the middle
> of a stable release.  This is *unacceptable*.  Regardless of wether it is
> acceptable or not that a post-release team refuses to work with a given
> arch, as that pr

Re: The 98% and N<=2 criteria

2005-03-21 Thread Peter 'p2' De Schrijver
> That has happened, but that are not the really bad problems with the
> toolchain. The really bad problems is if e.g. a class of packages starts
> to fail to build from source. Or some new required kernel version forces
> all to upgrade some autoconf-scripts.
> 

Both problems are easy to solve compared to hitting an obscure
toolchain bug.

Cheers,

Peter (p2).


signature.asc
Description: Digital signature


Re: The 98% and N<=2 criteria

2005-03-21 Thread Peter 'p2' De Schrijver
> 

Because it should not be a reason to throw out an entire architecture. I.e.
if the package cannot be compiled on $arch and the toolchain cannot be
fixed in time, then release $arch without the package instead of
throwing out the whole arch.

Cheers,

Peter (p2).




Re: How to define a release architecture

2005-03-21 Thread Peter 'p2' De Schrijver
> * the release architecture must be publicly available to buy new
> 
> Avoids a situation where Debian is keeping an architecture alive. This
> isn't intended to result in an architecture being dropped part way
> through a release cycle or once it becomes hard to obtain new hardware.
> 

What problem does this rule solve ?

> * the release architecture must have N+1 buildds where N is the number
>   required to keep up with the volume of uploaded packages
> 
> This is to ensure that all unstable packages are built in a timely
> manner, and that there is adequate redundancy to prevent a single buildd
> failure from delaying packages.
> 
> * the value of N above must not be > 2
> 
> This effectively sets an upper limit on the length of time a single
> package may take to build, which helps ensure that doing things like
> security fixes for Openoffice doesn't become a problem.
> 

A far more acceptable alternative would be to not have openoffice on those 
archs.

> * the release architecture must have successfully compiled 98% of the
>   archive's source (excluding architecture-specific packages)
> 
> A fairly arbitrary figure, but intended to prevent situations where 
> packages are held back from testing by an architecture not being able to
> build them. 
> 

In which case those packages can simply be not available for the
architecture in question.

> * the release architecture must have a working, tested installer
> 
> It's not acceptable to release without a working installer, for fairly
> obvious reasons.
> 
> * the Security Team must be willing to provide long-term support for
>   the architecture
> 
> All releases require security updates, again for obvious reasons.
> 

This requirement can be satisfied by stating that someone must be
willing to support the security team.

> * the Debian System Administrators (DSA) must be willing to support
>   debian.org machine(s) of that architecture
> 
> This is in order to ensure that developer-accessible machines exist.
> 

This requirement can be satisfied by stating that someone must be
willing to support debian.org machine(s) of that architecture.

> * the Release Team can veto the architecture's inclusion if they have
>   overwhelming concerns regarding the architecture's impact on the
>   release quality or the release cycle length
> 
> A get out clause - it must be possible for something to be refused if it
> would break the release, even if it meets all the other criteria.
> 

This is unacceptable. It would for example allow archs to be refused
because their names start with an 'A'.

> * there must be a developer-accessible debian.org machine for the
>   architecture.
> 
> Developers must be able to test their code on a machine running that 
> architecture.
> 
> 
> The Vancouver proposals satisfy all of these, potentially at the cost of
> removing some architectures from the set released by Debian. If we want
> to avoid that cost, can we come up with another proposal that solves the
> same problems in a way that satisfies the release team? 

Yes. See above. Most problems can be solved by other means than just
throwing out lots of people's work. Some are actually not problems, but
were probably invented to artificially limit the number of archs.

Cheers,

Peter (p2).




Re: The 98% and N<=2 criteria (was: Vancouver meeting - clarifications)

2005-03-21 Thread Peter 'p2' De Schrijver

> A QA measure for kernel/toolchain issues, sure. Many compiler bugs are
> identified by compiling 10G worth of software for an architecture;
> perhaps we should have a better way of tracking these, but it surely is
> a class of problems that /cannot/ be identified by just building on the
> big N architectures.
> 

Indeed. Some of the issues I can think of right now (the first two are
sketched below):

- wrong assumptions about the sizeof of base types
- unaligned accesses
- dependency on stack growth direction
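
A minimal, purely invented C sketch of the first two (not taken from any
real package):

    /* portability-bugs.c -- illustrative only */
    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void)
    {
        char buf[16] = "portability";

        /* 1. Wrong assumption about the size of a base type: stuffing a
         *    pointer into an int works on ILP32 archs but silently
         *    truncates where sizeof(int) < sizeof(void *). */
        int as_int = (int)(intptr_t)buf;
        printf("pointer squeezed into an int: %d\n", as_int);

        /* 2. Unaligned access: a 4-byte load from an odd address is fine
         *    on i386 but a bus error on strict-alignment archs such as
         *    sparc or older ARM. */
        /* BAD:  uint32_t v = *(uint32_t *)(buf + 1); */
        uint32_t v;
        memcpy(&v, buf + 1, sizeof v);   /* portable replacement */
        printf("word at odd offset: %u\n", (unsigned)v);

        return 0;
    }

The stack growth direction issue is harder to show in a couple of lines,
but the point is the same: none of these bugs are likely to be caught by
building and running on only one or two architectures.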

Cheers,

Peter (p2).




Re: The 98% and N<=2 criteria

2005-03-21 Thread Peter 'p2' De Schrijver
On Mon, Mar 21, 2005 at 03:09:26PM +0100, Andreas Barth wrote:
> * Wouter Verhelst ([EMAIL PROTECTED]) [050321 15:05]:
> > On Sun, Mar 20, 2005 at 03:16:08PM +0100, Andreas Barth wrote:
> > > Well, the toolchain is perhaps not the part where they turn up most
> > > likely, but it's the part that creates most of the workload and delays.
> 
> > Uh. Most porting bugs that require attention fall in one of the
> > following areas:
> > * Toolchain problems (Internal Compiler Errors, mostly)
> > * Mistakes made by the packager. Quite easy to fix, usually.
> > * Incorrect assumptions in the source code. These are becoming
> >   increasingly rare these days, IME.
> 
> Exactly. And if you s/workload/workload for the release team/, then the
> first one is the usual spot for issues for us. The middle one is fixed

No. Because you can always consider leaving the package out for that
specific architecture (except if it were a really important one).
That would be far more acceptable than throwing out the complete archive
for that architecture; that would be throwing out the baby with the
bathwater.

Cheers,

Peter (p2).






Re: my thoughts on the Vancouver Prospectus

2005-03-20 Thread Peter 'p2' De Schrijver
On Sun, Mar 20, 2005 at 09:27:26AM +0100, Matthias Urlichs wrote:
> Hi, Peter 'p2' De Schrijver wrote:
> 
> > This is obviously unacceptable. Why would a small number of people be
> > allowed to veto inclusion of other people's work ?
> 
> Why not? (Assuming they do have a valid reason. For instance, I probably
> wouldn't allow an MMIX port into the archive even if it sat up and begged.)

Because it's not fair? I would allow an MMIX port if it existed and
worked.

Cheers,

Peter (p2).




Re: my thoughts on the Vancouver Prospectus

2005-03-19 Thread Peter 'p2' De Schrijver
> > * Why is the permitted number of buildds for an architecture restricted to
> >   2 or 3?
> 
> - Architectures which need more than 2 buildds to keep up with package
>   uploads on an ongoing basis are very slow indeed; while slower,
>   low-powered chips are indeed useful in certain applications, they are
>   a) unlikely to be able to usefully run much of the software we currently
>   expect our ports to build, and b) definitely too slow in terms of

You're spouting nonsense here. The vast majority of debian packages
are useful on slower architectures.

>   single-package build times to avoid inevitably delaying high-priority
>   package fixes for RC bugs.
> 

> - If an architecture requires more than 3 buildds to be on-line to keep up
>   with packages, we are accordingly spreading thin our trust network for
>   binary packages.  I'm sure I'll get flamed for even mentioning it, but
>   one concrete example of this is that the m68k port, today, is partially
>   dependent on build daemons maintained by individuals who have chosen not
>   to go through Debian's New Maintainer process.  Whether or not these
>   particular individuals should be trusted, the truth is that when you have
>   to have 10 buildds running to keep up with unstable, it's very difficult
>   to get a big-picture view of the security of your binary uploads.
>   Security is only as strong as the weakest link.
> 

We now rely on about 1000 developers who can upload binary packages
for any arch, and those do not get rebuilt by the buildds. Thanks for
playing.

> - While neither of the above concerns is overriding on its own (the
>   ftpmasters have obviously allowed these ports to persist on
>   ftp-master.debian.org, and they will be released with sarge), there is a
>   general feeling that twelve architectures is too many to try to keep in
>   sync for a release without resulting in severe schedule slippage.
>   Pre-sarge, I don't think it's possible to quantify "slippage that's
>   preventible by having more active porter teams" vs. "slippage that's
>   due to unavoidable overhead"; but if we do need to reduce our count of
>   release archs, and I believe we do, then all other things being equal, we
>   should take issues like the above into consideration.
> 

Would you please stop generalizing your opinions? There is an idea in
some people's minds that 12 architectures are too many. If you look at
the number of reactions on this list, you will notice that a lot of
people do not agree with you on this point. So stop inventing bogus
arguments to justify it.

> > * Three bodies (Security, System Administration, Release) are given
> >   independent veto power over the inclusion of an architecture.
> >   A) Does the entire team have to exercise this veto for it to be
> >  effective, or can one member of any team exercise this power
> >  effectively?
> 
> It's expected that each team would exercise that veto as a *team*, by
> reaching a consensus internally.
> 

This is obviously unacceptable. Why would a small number of people be
allowed to veto inclusion of other people's work ?

> >   B) Is the availability of an able and willing Debian Developer to join
> >  one of these teams for the express purpose of caring for a given
> >  architecture expected to mitigate concerns that would otherwise lead
> >  to a veto?
> 
> Without knowing beforehand what the reason for the veto would be (and if we
> knew, we would list them explicitly as requirements), this isn't possible to
> answer.
> 

So drop this bullshit veto thing. There is no reason to have this.

Cheers,

Peter (p2).




Re: Emulated buildds (for SCC architectures)?

2005-03-19 Thread Peter 'p2' De Schrijver
> Yes, but the argument against cross-compiling has always been stronger
> - If you are compiling under an emulator, you can at least test the
> produced binaries under that same emulator, and you have a high degree
> of confidence that they work reliably (this is, if an emulator bug
> leads to gcc miscompiling, it'd be surprising if it allowed for
> running under the emulator). Using cross-compilers you can't really
> test it. And, also an important point, you can potentially come up
> with a resulting package you could not generate in the target
> architecture.
> 

You can always run generated binaries on an emulator or a target board
for testing. I have cross-compiled a lot of code with gcc and have yet
to see wrong binaries caused by cross compiling rather than native
compiling. I could imagine problems with floating-point expressions
evaluated at compile time resulting in slightly different results (a
small illustrative probe follows below). The only way to see whether
cross compiling generates wrong binaries is to try it and evaluate the
results.
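
Purely as an illustration of the compile-time evaluation concern, one
can compare a constant expression the compiler folds at build time with
the same expression computed at run time on the target:

    /* fp-fold.c -- illustrative probe only */
    #include <stdio.h>

    /* folded at build time by the (possibly cross-) compiler */
    static const double folded = 1.0 / 3.0 * 3.0 - 1.0;

    int main(void)
    {
        volatile double three = 3.0;   /* defeats constant folding */
        double runtime = 1.0 / three * three - 1.0;

        printf("compile time: %.20g\n", folded);
        printf("run time    : %.20g\n", runtime);
        /* Any difference between the two lines hints at a mismatch
         * between the compiler's constant folding and the target's
         * floating point arithmetic. */
        return 0;
    }

Building such a probe both natively and cross-compiled, then diffing the
output on the target, is exactly the "try it and evaluate the results"
approach.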

> But, yes, I'd accept a cross-compiler as a solution as well in case we
> could not run an emulator for a given slow platform.

We will probably need both as some build scripts run generated code.

Cheers,

Peter (p2).




Re: Emulated buildds (for SCC architectures)?

2005-03-19 Thread Peter 'p2' De Schrijver
On Fri, Mar 18, 2005 at 06:58:50PM -0800, Thomas Bushnell BSG wrote:
> Peter 'p2' De Schrijver <[EMAIL PROTECTED]> writes:
> 
> > A much faster solution would be to use distcc or scratchbox for
> > crosscompiling.
> 
> Debian packages cannot be reliably built with a cross-compiler,
> because they very frequently need to execute the compiled binaries as
> well as just compile them.

That's exactly the problem which is solved by using distcc or
scratchbox. distcc basically sends preprocessed source to another
machine and expects an object back. So you run the build on a machine of
the target arch (or an emulator), but the compiler part is actually a
small program which sends the source to the fast machine running the
cross compiler and expects the object code back. 
Scratchbox provides a sandbox on the machine doing the cross compile, in which 
target binaries can be executed by either running them on a target board
sharing the sandbox filesystem using NFS or by running them in qemu.

Cheers,

Peter (p2).




Re: Emulated buildds (for SCC architectures)?

2005-03-18 Thread Peter 'p2' De Schrijver
On Fri, Mar 18, 2005 at 08:06:47PM -0600, Gunnar Wolf wrote:
> Hi,
> 
> I haven't followed as thoroughly as I would have liked the recent
> verborrhea in the list regarding the Vancouver proposal. Anyway, I'd
> like to raise a point that I brought up during Debconf3, in the light
> of the changes that we are now facing.
> 
> Most (although not all) of the architectures facing being downgraded
> are older, slower hardware, and cannot be readily found. Their build
> speed is my main argument against John Goerzen's proposal [1]. Now, I
> understand that up to now we have had the requirement of the builds
> running in the real hardware. 
> 
> Nowadays, an i386 system emulating a m68k (using either UAE or
> Basilisk2) is at least comparable to the fastest m68k system ever
> produced. I have worked with both emulators, and both seem completely
> safe - Yes, I know we cannot run Debian on a regular UAE because of
> the lack of a MMU in the official package, but we _can_ run it inside
> Basilisk2. 
> 

A much faster solution would be to use distcc or scratchbox for
crosscompiling.

> A completely different problem with the same results arises when using
> s390 machines: As someone noted recently, most of us cannot afford
> having a s390 running in the basement. But AFAICT, Hercules is a quite
> usable s390 emulator.
> 
> And I am sure we can find more examples like these - I have not really
> checked, but I would be surprised if architectures as popular as
> Sparc, Alpha or ARM wouldn't have an emulator (although probably not
> currently as reliable as those two).
> 

ARM is supported by both qemu and scratchbox, so you could do a cross
compiling buildd without needing actual ARM hardware (scratchbox
normally uses a target board to run generated binaries during the
buildprocess, but it can also use qemu). OTOH Intel's IOP Xscale series
is quite fast and there are faster ARMs coming, so it's probably not
necessary to use crosscompiling to keep up.

Alpha and Sparc should be fast enough to keep up. 

> Now, if we face dropping one or more of our architectures (i.e. m68k)
> because new hardware can not be found anymore (the Vancouver proposal
> mentions that "the release architecture must be publicly available to
> buy new" in order to keep it as a fully supported architecture - I
> know, SCC != fully supported, but anyway, a buildd can die and create
> huge problems to a port), why shouldn't we start accepting buildds
> running under emulated machines?

If you don't tell people, how would they know ? :)

Cheers,

Peter (p2).




Re: Do not make gratuitous source uploads just to provoke the buildds!

2005-03-18 Thread Peter 'p2' De Schrijver
> Porters who have worked on getting an arch to REGULAR status are in a much
> better position (demonstrated commitment, technical aptness and
> experience-wise) to solve those problems than random-joe-developer.
> 

I have no idea what you're trying to say here.

> Always remember that the main reason that it is easier for a porters team to 
> release within the (current) Debian framework than outside is that _others_ 
> do work for them.
> 

That has been the case up to now, but won't be the case after sarge.

Cheers,

Peter (p2).




Re: Do not make gratuitous source uploads just to provoke the buildds!

2005-03-18 Thread Peter 'p2' De Schrijver
> Except the possibility to profit from the release team's efforts,
> and to create an actually supported release. It is not reasonable
> to believe a small porter team can do security updates for an
> unstable snapshot when a task of similar size already overloads
> the stable security team.
> 

No. As neither the release team nor the security team will take
non-release archs into account.

Cheers,

Peter (p2).




Re: Do not make gratuitous source uploads just to provoke the buildds!

2005-03-17 Thread Peter 'p2' De Schrijver
On Thu, Mar 17, 2005 at 08:22:04PM +0100, Goswin von Brederlow wrote:
> Andreas Barth <[EMAIL PROTECTED]> writes:
> 
> > * Mike Fedyk ([EMAIL PROTECTED]) [050316 20:55]:
> >> Andreas Barth wrote:
> >> >If that happens for a too long period, we might consider such an
> >> >architecture to be too slow to keep up, and will eventually discuss
> >> >about kicking it out of the architectures we wait for testing migration
> >> >at all, or even kicking it out of testing at all. Not waiting for such
> >> >an arch has happened and might happen again.
> >
> >> I think it makes sense to shorten the list of arches we wait upon for 
> >> testing migration, but I fail to see the usefulness of removing an arch 
> >> from testing.
> >
> > If we don't wait for an arch, it gets out-of-sync quite soon, and due to
> > e.g. legal requirements, we can't release that arch. (In other words, if
> > an arch is too long ignored for testing, we should remove it, as we
> > can't release it in any case.)
> 
> Not if each arch has it's own source tracking. And you already need
> that for snapshot fixes.
> 
> Non release archs should be released by the porters alone (no burden
> to RMs) with a minimum of arch specific versions or patches. There
> should be a strong encouragement to remove software instead of
> patching it when it comes close to the actual release so when the port
> does release (after the main release) it is based on stable source for

Why would a port release after the main release ? Why, if debian doesn't
care about the non-release archs, would the porters even bother to
follow the release arch sources and not just release whenever they 
like ? They don't gain anything by following the main release.

> everything but last minute flaws in essential packages. Maintaining
> those patches in case of security updates or for the point releases
> again should lie with the porters.
> 
> 
> The reason why I favour this is that I have in mind that some archs
> will be too slow, they won't be able to keep up every day. But given
> an extra month they can compile the backlog or kick out out-of-sync
> software and release separately with a nearly identical source
> tree. The remaining source changes can (and basically must for
> legal reasons) be contained on the binary CD/DVD set and in a branch
> of the scc.d.o tree.
> 
> Take for example m68k. It will always be at risk of lagging a few days
> behind because some packages take a few days to build. It is always
> out-of-sync longer than the other archs but it is not getting worse,
> it is just a step behind. That is totally different than arm or s390
> taking a deep dive getting some 300+ package backlog and having
> packages stuck for a month.
> 
> If an arch has enough developers on it to keep stuff building, and
> that means supplying patches to unstable fast and early enough to get
> them into testing and ultimately stable I see no reason why the arch
> shouldn't release. Make some rule to limit out-of-sync, say no more
> than 1% sources differences to stable for the release.
> 
> Any problems with that?
> 

Yes. It doesn't make sense. Either debian releases with all archs, or
every arch releases on its own. The latter is favoured by the current
proposal and will diminish debian's value. The former is the way to go.
Scalability problems need to be solved by improving infrastructure or
procedures as appropriate. A middle ground between both release
approaches is not sensible as it will still make the ports dependent on
a possibly suboptimal upstream tree without having the benefit of debian
security updates. Ie. ports get all the disadvantages of a synchronized
release without getting any of the advantages. That's just plain unfair.

Cheers,

Peter (p2).




Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-17 Thread Peter 'p2' De Schrijver
> > Snapshots of unstable.  And people would run that on their servers?
> 
> Some, maybe.  Are there lots of people running servers on m68k and arm?
> 

Debian/ARM is becoming viable as a handheld platform now that handhelds
have 400+ MHz CPUs and gigabytes of storage (all of which is available
today).

Cheers,

Peter (p2).




Re: Vancouver meeting - clarifications

2005-03-15 Thread Peter 'p2' De Schrijver
> > I strongly disagree with this. There is a need for a set of base
> > packages to work, but it's entirely reasonable to have a release for eg
> > m68k without KDE or other large package sets. It's not as if debian/m68k
> > would be unusable without KDE packages for example. 
> 
> You might try to convince me that KDE is architecture-specific :)
> 

That was just an example, but OpenOffice.org is architecture-specific
and doesn't even run on any 64-bit arch. So you never know what upstream
does :)

> I hope you can agree that we need to say that "almost all" packages that
> should be built are built. And I consider 97.5% to be a reasonable
> level. Also, if we exclude too much, we might start to ask the question
> why we should do a stable release for an arch that builds only a very
> minor part of the archive. But the "excluding architecture-specific
> packages" gives of course some possibilities which packages count and
> which not.
> 

I think we should distinguish between what's really necessary to have a
usable release and what is nice to have. It's obviously nice to compile
almost everything for all archs. But if upstream is too broken for this
to be possible, it might make more sense to leave the broken bits out
than to delay everything.

> 
> > > | - the release architecture must have a working, tested installer
> > > I hope that's obvious why. :)
> 
> > As long as FAI or even raw debootstrap counts, I can agree here.
> 
> Any installer inside Debian. Of course, we can't tell people "To install
> Debian, you first need to install Gentoo" :)
> 
> 

Sure, but manually punching paper tape for the upcoming PDP-11 release
should be valid :)

Cheers,

Peter (p2).




Re: Vancouver meeting - clarifications

2005-03-15 Thread Peter 'p2' De Schrijver
> | - the release architecture must have N+1 buildds where N is the number
> |   required to keep up with the volume of uploaded packages
> The reason for this proposal should be instantly clear to everyone who
> ever suffered from buildd backlogs. :)
> 
> We want that all unstable packages are directly built under normal
> circumstances and that in the event of a buildd going down the arch does
> not suffer noticeably.  The long periods of trying to get some RC-bugfix
> in while some arch has a backlog should disappear with etch.
> 

That should be easy to achieve on any of the currently supported debian
architectures. Perhaps we need things like scratchbox or distcc to do
this, but it can be done and there is sufficient interest in the
community to make this happen.

> | - the release architecture must have successfully compiled 98% of the
> |   archive's source (excluding architecture-specific packages)
> well, that's just an "the architecture is basically working", so that we
> don't get too many RC-bugs because of architecture-specific issues, and
> also don't get too many packages kept out of testing for not all archs
> being built. Of course, the 98% is not engraved into stone, and might just
> be another arbitrary high number like 97.5%. From looking at the usual
> level where we start noticing problems with testing migrations, a number
> in this range is sane.

I strongly disagree with this. There is a need for a set of base
packages to work, but it's entirely reasonable to have a release for eg
m68k without KDE or other large package sets. It's not as if debian/m68k
would be unusable without KDE packages for example. 

> 
> | - the release architecture must have a working, tested installer
> I hope that's obvious why. :)
> 

As long as FAI or even raw debootstrap counts, I can agree here.

> | - the Security Team must be willing to provide long-term support for
> |   the architecture
> If not, we can't release with that arch. I think this is also quite
> obvious. Of course, one way to get the security team to provide support
> might be to help them.
> 

The porting community is happy to help out here. Feel free to send
requests for help, even to me.

> | - the Debian System Administrators (DSA) must be willing to support
> |   debian.org machine(s) of that architecture
> | - there must be a developer-accessible debian.org machine for the
> |   architecture.
> Well, the second point is - I hope - obvious why we want this. This first
> one is just a conclusion of the second.
> 

The first one basically gives the DSA unlimited powers over which archs
can be in debian. That's bad.

> | - the Release Team can veto the architecture's inclusion if they have
> |   overwhelming concerns regarding the architecture's impact on the
> |   release quality or the release cycle length
> This is just more or less an emergency-exit: If we consider an architecture
> really unfit for release, we can use our veto. This is one of the things I
> hope will never happen.
> 
> 

If you don't expect it to happen, there is no reason for it to be
here. Ergo, please remove it.

> 
> Something else which is related to the number of architectures in testing
> is that we pay a price for every architecture:
> 
> For example, the more architectures are included the longer the migration
> testing script takes.  We are already at the limit currently (and also
> have out-of-memory issues from time to time). For example, currently we
> restrict the number of some hints to only 5 per day to keep up the
> scripts. Also, the udebs are not taken into account, which requires more
> manual intervention. With a lower number of release architecture, we can
> and will improve our scripts.
> 
> 

I fail to see why fewer archs would allow you to improve the scripts. You
can always improve them, regardless of how much usage they get. I believe
there is sufficient algorithmic know-how in the debian developer community
to solve the scalability problems.

> 
> Having said this, this all doesn't exclude the possibility for a
> non-release arch to have some "testing" which can be (mostly) in sync with
> the release architectures testing - just that if it breaks, the release
> team is not forced anymore to hold the strings together.  For example,
> the amd64-people are doing something like that right now.
> 
> 

The current proposal will lead to random arch-specific forks. Each arch
decides when it wants a release and will include its own fixes for
arch-specific problems. These fixes might or might not go into debian or
upstream (depending on whether debian or upstream wants them; given the
current hostile feelings of some debian people against anything not
mainstream, I hope upstream will want them). If they are not integrated
in debian or upstream, this will lead to multiple archs solving the same
problems (consider endianness, alignment bugs, wrong use of
bitfields, ...). This will also lead to multiple inconsistent debian
releases as the specific arch teams
decide on when to r

Re: Let's remove mips, mipsel, s390, ... (Was: [Fwd: Re: GTK+2.0 2.6.2-3 and buildds running out of space])

2005-02-21 Thread Peter 'p2' De Schrijver
 
> The main problem with distcc across architectures is the FUD
> surrounding whether gcc-as-cross-compiler spits out the same code as
> gcc-as-native-compiler.  The gcc team seem to be very hesitant to make
> any guarantees about that, as it's not something they test much.
> Without better information than guesswork, I'd say you might minimise
> your chances of cross-gcc bugs by using a host with the same endianness
> and bit width.

I guess differences in floating point implementations might be an issue
too for expressions containing floats which can be evaluated at compile
time. 

Cheers,

Peter (p2).




Re: Let's remove mips, mipsel, s390, ... (Was: [Fwd: Re: GTK+2.0 2.6.2-3 and buildds running out of space])

2005-02-21 Thread Peter 'p2' De Schrijver
> There are a few reasons why we usually avoid cross-compilers for buildd
> purposes. For one, as one cannot as easily test a cross-compiler by
> running a test suite, it may have been miscompiled -- but you wouldn't
> notice; this would result in strange, hard to debug behaviour by the
> resulting cross-compiled packages. For another, satisfying

This can be solved by using emulation tools (like qemu). Unfortunately
qemu doesn't support m68k as a target yet. It would not only help for
cross buildds, but also allow maintainers to debug arch-specific
problems in their package on their laptop :)

> build-dependencies for cross-compiling is pretty hard, as anyone using
> dpkg-cross can tell you.
> 

Yes, this is not solved yet, although emdebian and scratchbox are
making progress in this area. Someday this problem will be mastered, at
least for archs which have qemu support. The critical part is executing
target code in maintainer and build scripts. This can be done using
qemu user emulation.

Cheers,

Peter (p2).




Re: Bug#293292: ITP: btexmms -- XMMS plugin to use some (Sony) Ericsson phones as a remote control

2005-02-02 Thread Peter 'p2' De Schrijver
On Wed, Feb 02, 2005 at 04:41:15PM +, Paul Brossier wrote:
> On Wed, 2005-02-02 at 10:41 +0100, Peter 'p2' De Schrijver wrote:
> > * Package name: btexmms
> 
> xmms plugins would be better named xmms- (btexmms for the
> source should be fine though)
> 

So xmms-btexmms would be better ?

> >   Version : x.y.z
> >   Upstream Author : Name <[EMAIL PROTECTED]>
> > * URL : http://www.example.org/
> > * License : (GPL, LGPL, BSD, MIT/X, etc.)
> 
> mmh, looks like it lacks a few info here...
> 

It seems I missed some bits of the template yes.

Version : 0.5
Upstream Author : Nikolay Igotti ([EMAIL PROTECTED])
URL : http://www.lyola.com/bte/
License : GPL

> >   Description : XMMS plugin to use some (Sony) Ericsson phones as a 
> > remote control
> 
> what is the name of the feature provided by 'some (Sony) Ericsson' ?
> imo, it would looks better with that name instead.
> 

It uses the accessory menu feature to display messages and mobile
equipment event reporting to read the keys.

> > This plugin allows using some Ericsson and Sony Ericsson phones as a remote 
> > control for XMMS. Phones which are known to work are the SE T68i and the
> > SE T610. The plugin uses the accessory commands documented in the
> > Ericsson R320 manual.
> 
> any chance this documentation can be shipped with the package itself ?

No. The documentation used to be available on the ericsson website
(http://mobileinternet.ericsson.se/emi_manuals/R320s/ATCommand/R320AT_R1A.pdf)
but the link is dead now. The document says (c) Ericsson Mobile
Communications. There are a few sites which still have it, but I'm not
convinced that this is legal.

Cheers,

Peter (p2).




Bug#293292: ITP: btexmms -- XMMS plugin to use some (Sony) Ericsson phones as a remote control

2005-02-02 Thread Peter 'p2' De Schrijver
Package: wnpp
Severity: wishlist


* Package name: btexmms
  Version : x.y.z
  Upstream Author : Name <[EMAIL PROTECTED]>
* URL : http://www.example.org/
* License : (GPL, LGPL, BSD, MIT/X, etc.)
  Description : XMMS plugin to use some (Sony) Ericsson phones as a remote 
control

This plugin allows using some Ericsson and Sony Ericsson phones as a remote 
control for XMMS. Phones which are known to work are the SE T68i and the
SE T610. The plugin uses the accessory commands documented in the
Ericsson R320 manual.

-- System Information:
Debian Release: 3.1
  APT prefers unstable
  APT policy: (500, 'unstable')
Architecture: i386 (i686)
Kernel: Linux 2.6.10
Locale: [EMAIL PROTECTED], [EMAIL PROTECTED] (charmap=ISO-8859-15) (ignored: 
LC_ALL set to [EMAIL PROTECTED])





Re: Why does Debian distributed firmware not need to be Depends: upon? [was Re: LCC and blobs]

2005-01-10 Thread Peter 'p2' De Schrijver
> 
> And I still don't think anyone could argue that it would be reasonable
> to stick a driver on a Debian CD with a README that says "if you want
> to use this driver, you'll need to write a firmware file for your SCSI
> card.  Use the following assembler"
> 

I never said the USER HAS to write the firmware. But the user (or more
likely the driver developer) CAN write the firmware.

> Do any of you seriously believe that Debian users would be satisfied
> with a driver that worked only after they sat down and wrote a firmware
> file using some free tools helpfully provided in Debian?  Do you think
> Debian users would consider the driver useful if shipped in that state?

The same argument goes for any other software shipped by Debian.

Cheers,

Peter (p2).




Re: Why does Debian distributed firmware not need to be Depends: upon? [was Re: LCC and blobs]

2005-01-09 Thread Peter 'p2' De Schrijver
>Firmware files are not the sort of thing people can create their own
>versions of.  In most cases the format is not documented and there
>are no free or even publicly available tools for this, and even in
>cases where it *is* documented, this is not by any stretch of the
>imagination a typical use case.
> 

That's not true. Firmware can be created by anyone and requires only
documentation and a compiler/linker for the target processor. In many
cases the CPU used is already supported by some free toolchain.
Look for example at:

linux-2.6.10/drivers/scsi/aic7xxx/aic79xx.seq
linux-2.6.10/drivers/scsi/aic7xxx/aic7xxx.seq
linux-2.6.10/drivers/scsi/sym53c8xx_2/sym_fw[1-2].h 
linux-2.6.10/drivers/usb/serial/keyspan_pda.S
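
To make that concrete, here is a deliberately tiny, hypothetical piece
of firmware source in freestanding C. The device, register addresses and
names are invented for illustration; real firmware like the aic7xxx
sequencer code is of course written against the device's actual
programming manual:

    /* toy-firmware.c -- imaginary device, made-up registers.
     * Compiles with any C (cross) toolchain, e.g. gcc -c -ffreestanding. */
    #include <stdint.h>

    #define STATUS_REG   (*(volatile uint32_t *)0x40000000u)  /* invented */
    #define DATA_REG     (*(volatile uint32_t *)0x40000004u)  /* invented */
    #define STATUS_READY 0x1u

    /* entry point reached from the device's reset vector */
    void firmware_main(void)
    {
        for (;;) {
            /* busy-wait until the host has posted a word for us */
            while ((STATUS_REG & STATUS_READY) == 0)
                ;
            /* stand-in for real command processing: complement and echo */
            DATA_REG = ~DATA_REG;
        }
    }

Nothing in there needs a proprietary tool: documentation of the
registers plus a free (cross) toolchain is enough.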

Cheers,

Peter (p2).






Re: LCC and blobs

2004-12-15 Thread Peter 'p2' De Schrijver
> What would you gain by having the firmware source.
> Please don't tell me that you want to fix bugs there.
> 
> The firmware is part of the hardware and we don't ask the vendors to
> give away their .vhdl files of the hardware.  Both firmware and hardware
> source are useless as they usually need highly proprietary tools to
> build.
> 

Firmware generally is software, and it is built exactly the same way any
other software is built. Firmware is mostly written in C or assembler,
which doesn't require any highly proprietary tools. Some firmware is a
'configuration file' for an FPGA; I'm not sure whether that counts as
software or not, since it's not executed the same way machine code is.

Cheers,

Peter (p2).





Re: Linux Core Consortium

2004-12-10 Thread Peter 'p2' De Schrijver
Hi,

> 
>  * We should commit to strict release cycles of a base system others
>(and Debian itself) can build value upon.
> 
>  * We should probably also commit to a set of core architectures which
>*need* to be bug-free on release, while the rest *should* be, but
>would not delay the release.
> 

I don't think that buys us anything. I don't think there is a single
architecture which has blocked the release up to now. All bugs that
appeared through testing on different architectures were real bugs in
the code; they just didn't show up when testing on only a few
architectures. In short, releases would not get faster and would
probably contain more bugs.

Cheers,

Peter (p2).

