Re: Bug#616317: base: commit= ext3 mount option in fstab has no effect.

2011-05-07 Thread Roger Leigh
On Sun, May 08, 2011 at 05:24:09AM +0100, Ben Hutchings wrote:
> On Sat, 2011-05-07 at 22:43 -0400, Ted Ts'o wrote:
> [...]
> > Should we try to make this work (at best badly) since a change in
> > mount options in /etc/fstab would only take effect at the next
> > mkinitramfs and/or update-grub invocation?  Or should we just close
> > out this bug and say, "tough luck, kid; if you want to change the root
> > file system's mount options, you need to edit your kernel's boot
> > options using whatever bootloader you might happen to be using"?
> [...]
> 
> Could we not have init remount root based on /etc/fstab?  It already
> handles remounting read-write.  I suppose the problem then is that some
> mount options can't practically be changed when remounting.  (Worse, the
> failure to change them is silent in some cases.  And that is definitely
> a bug.)

See also: #520009

Also, with the version of initscripts in experimental, the domount
mount helper can remount filesystems with the options from /etc/fstab.
This is used to remount the filesystems mounted in the early initramfs
with the options the sysadmin defined (if any), and it could be
extended to do the same for the rootfs.  The existing read_fstab could
also be refactored to use the generic read_fstab_entry to pull out the
entire set of mount options for remounting, rather than just the ro
option.  As both this and #520009 note, this won't work for all mount
options (such as data= for ext3), but it would allow any other options
from fstab to be applied to the root mount.
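
For illustration only (this is not the actual initscripts code, and the
fstab matching and option handling are simplified), remounting / with
the options from fstab could look roughly like:

  opts="$(awk '$1 !~ /^#/ && $2 == "/" { print $4 }' /etc/fstab)"
  if [ -n "$opts" ] && [ "$opts" != "defaults" ]; then
      # reapply the sysadmin's options to the already-mounted rootfs
      mount -o "remount,$opts" /
  fi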


Regards,
Roger

-- 
  .''`.  Roger Leigh
 : :' :  Debian GNU/Linux http://people.debian.org/~rleigh/
 `. `'   Printing on GNU/Linux?   http://gutenprint.sourceforge.net/
   `-     GPG Public Key: 0x25BFB848   Please GPG sign your mail.




Re: Bug#616317: base: commit= ext3 mount option in fstab has no effect.

2011-05-07 Thread Ben Hutchings
On Sat, 2011-05-07 at 22:43 -0400, Ted Ts'o wrote:
[...]
> Should we try to make this work (at best badly) since a change in
> mount options in /etc/fstab would only take effect at the next
> mkinitramfs and/or update-grub invocation?  Or should we just close
> out this bug and say, "tough luck, kid; if you want to change the root
> file system's mount options, you need to edit your kernel's boot
> options using whatever bootloader you might happen to be using"?
[...]

Could we not have init remount root based on /etc/fstab?  It already
handles remounting read-write.  I suppose the problem then is that some
mount options can't practically be changed when remounting.  (Worse, the
failure to change them is silent in some cases.  And that is definitely
a bug.)

Ben.

-- 
Ben Hutchings
Once a job is fouled up, anything done to improve it makes it worse.




Re: Bug#616317: base: commit= ext3 mount option in fstab has no effect.

2011-05-07 Thread Ted Ts'o
reassign 616317 base
thanks

This isn't a bug in e2fsprogs; e2fsprogs has absolutely nothing to do
with mounting the file system.

Debian simply doesn't support the mount options for the root file
system in /etc/fstab having any effect on how the root file system is
mounted.  The root file system is mounted by the kernel, and the mount
options used by the kernel are specified by the rootflags= option on
the kernel's boot command line.
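
As a hedged example (not taken from this bug report): on a system using
grub2, one would typically add the option to the kernel command line in
/etc/default/grub and re-run update-grub, e.g.

  # illustrative only: pass ext3 mount options for / via rootflags=
  GRUB_CMDLINE_LINUX="rootflags=commit=30"

after which the kernel mounts the root file system with commit=30.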

This is effectively a feature request, and I debated what was the best
way to deal with this bug.  I could close it, and say, "not a bug",
since Debian has never worked this way, and I suspect it was
deliberate.

Or, I could assign it to initramfs-tools, since what some other
distributions do is look in /etc/fstab, parse out the mount options
for the root file system, and then insert the appropriate root mount
options into the initrd image.  The problem with this is, (a) it's a
bit of a hack, (b) it only takes effect the next time you install a
new kernel, or if you deliberately and explicitly run mkinitramfs,
which has fairly baroque options that most users would never figure
out, and (c) not all Debian installations use an initrd, so whether or
not it works would depend on how the boot sequence was set up.  If you
don't use an initrd, you'd have to edit it into grub's configuration
file.  But then, not all Debian systems use grub as their boot loader.

Neither of these seemed obviously the right choice.

So I'm going to do the cowardly thing, and choose the third option,
which is to reassign this back to base, cc'ing debian-devel.  I'm not
sure what the right thing is to do here, since honoring this feature
request would require making changes to multiple different packages:
initramfs-tools, all of the bootloaders, etc.

Should we try to make this work (at best badly) since a change in
mount options in /etc/fstab would only take effect at the next
mkinitramfs and/or update-grub invocation?  Or should we just close
out this bug and say, "tough luck, kid; if you want to change the root
file system's mount options, you need to edit your kernel's boot
options using whatever bootloader you might happen to be using"?

I have a slight preference for the latter, since it's a lot less
complexity that won't really work right anyway, but let's see what
other people think.

Regards,

- Ted





Re: Bits from the Release Team - Kicking off Wheezy

2011-05-07 Thread Charles Plessy
Le Sat, May 07, 2011 at 03:43:11PM +0200, Enrico Zini a écrit :
> 
> a GR was needed to be able to proceed, because the hands of FD were
> rather tied by this other GR: http://www.debian.org/vote/2008/vote_002

Hi Enrico,

the 2008 GR invites us to seek consensus, and in my opinion, what proved to be
anti-consensual was dividing developers into formal categories.  In recent
years I have not seen such vigorous opposition to the idea of accepting
non-packaging developers in Debian.

Cheers,

-- 
Charles Plessy
Tsurumi, Kanagawa, Japan





Re: Best practice for cleaning autotools-generated files?

2011-05-07 Thread Henrique de Moraes Holschuh
On Sat, 07 May 2011, Tollef Fog Heen wrote:
> | The problem is that autoreconf offers NO command line options for you to
> | pass the required -I parameters for aclocal, nor is there a way to encode
> | that information in the one place where it could conveniently live
> | (configure.ac) AFAIK.
> 
> Can't you use AC_CONFIG_MACRO_DIR?  Note that this requires
> ACLOCAL_AMFLAGS = -I m4 too in the top level Makefile.am if you're using
> automake, though.

Hey, thanks!  Yes, it looks like it does exactly what I wanted for
cyrus-imap.

-- 
  "One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie." -- The Silicon Valley Tarot
  Henrique Holschuh





Re: Best practice for cleaning autotools-generated files?

2011-05-07 Thread Simon McVittie
On Sat, 07 May 2011 at 13:33:53 +0200, Enrico Weigelt wrote:
> c) does _NOT_ call configure

As much as I wish this had been the convention, it isn't - the convention is
that autogen.sh *does* call ./configure (often with options suitable for
developers of the project, whereas the ./configure defaults are more suitable
for packagers). Not doing so will just confuse people (and build systems
like jhbuild, for that matter). It's mainly intended to be used to get
started with development from a VCS checkout.

Many upstreams do provide some way to not run ./configure (something like
"./autogen.sh --no-configure" or "NOCONFIGURE=1 ./autogen.sh").

For many (most? all?) autoconf/automake projects, running "autoreconf" is
enough; that's essentially what dh_autoreconf does.
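
A minimal autogen.sh following the NOCONFIGURE convention might look
like the sketch below (an illustration only, with the option handling
kept deliberately simple):

  #!/bin/sh
  # regenerate the autotools output, then run configure unless the
  # caller asked us not to (NOCONFIGURE=1 ./autogen.sh)
  set -e
  autoreconf --force --install
  if [ -z "$NOCONFIGURE" ]; then
      exec ./configure "$@"
  fi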

S





Bug#626017: ITP: ruby-shoulda-context -- context framework for Test::Unit

2011-05-07 Thread Antonio Terceiro
Package: wnpp
Severity: wishlist
Owner: Antonio Terceiro 

* Package name: ruby-shoulda-context
  Version : 1.0.0.beta1
  Upstream Author : Several authors
* URL : https://github.com/thoughtbot/shoulda-context
* License : MIT
  Programming Lang: Ruby
  Description : context framework for Test::Unit

From upstream README:

Shoulda’s contexts make it easy to write understandable and maintainable
tests for Test::Unit. It’s fully compatible with your existing tests in
Test::Unit, and requires no retooling to use.

-- 
Antonio Terceiro 
http://softwarelivre.org/terceiro






Bug#626016: ITP: ruby-shoulda-matchers -- Test helpers for Rails applications, compatible with Test::Unit and RSpec

2011-05-07 Thread Antonio Terceiro
Package: wnpp
Severity: wishlist
Owner: Antonio Terceiro 

* Package name: ruby-shoulda-matchers
  Version : 1.0.0.beta2
  Upstream Author : Several authors
* URL : https://github.com/thoughtbot/shoulda-matchers
* License : MIT
  Programming Lang: Ruby
  Description : Test helpers for Rails applications, compatible with 
Test::Unit and RSpec

From upstream README:

Test::Unit- and RSpec-compatible one-liners that test common Rails
functionality. These tests would otherwise be much longer, more complex,
and error-prone.

-- 
Antonio Terceiro 
http://softwarelivre.org/terceiro






Re: Best practice for cleaning autotools-generated files?

2011-05-07 Thread Enrico Weigelt
* Henrique de Moraes Holschuh  schrieb:

> > I'm (as upstream) using several macros in their own .m4 files (eg.
> > in ./m4/, maybe even sorted into subdirs). Can autoreconf figure
> > out the required search paths all on its own ?
> 
> The problem is that autoreconf offers NO command line options for you to
> pass the required -I parameters for aclocal, nor is there a way to encode
> that information in the one place where it could conveniently live
> (configure.ac) AFAIK.

So, more precisely: autoreconf lacks an important feature I would like
to see.  Until this feature is available, we'll still need another
way to cope with those situations.  Sure, most packages could probably
be changed to work with a simple autoreconf call, but I've seen several
cases where it's not that easy.  In the end we have two options:

a) maintain specific build rules for virtually any package in virtually
   any distro (yes, I'm not just looking at one single distro), 
   indefinitely

b) fix it in the source (the package itself) once and for all

> I sure hope it will NEVER decide to actually search for .m4 files at
> non-standard directories on its own, that would make things much worse.
> 
> Anyway, you have to work around it using something like:
> 
> ACLOCAL='aclocal -I foo -I bar' autoreconf

Well, that would again be a package-specific workaround which has to be
maintained in the distro's build descriptors (whichever distro you're
looking at).  And if that changes, the distro's package maintainer has
to find out early enough to (manually) adapt properly.

That leaves too much manual work and too many chances of breakage for my taste.


cu
-- 
--
 Enrico Weigelt, metux IT service -- http://www.metux.de/

 phone:  +49 36207 519931  email: weig...@metux.de
 mobile: +49 151 27565287  icq:   210169427 skype: nekrad666
--
 Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme
--





Re: Best practice for cleaning autotools-generated files?

2011-05-07 Thread Tollef Fog Heen
]] Henrique de Moraes Holschuh 

Hiya,

| The problem is that autoreconf offers NO command line options for you to
| pass the required -I parameters for aclocal, nor is there a way to encode
| that information in the one place where it could conveniently live
| (configure.ac) AFAIK.

Can't you use AC_CONFIG_MACRO_DIR?  Note that this requires
ACLOCAL_AMFLAGS = -I m4 too in the top level Makefile.am if you're using
automake, though.
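
Spelled out, the suggested combination is (the m4/ directory name is
only an example):

  # configure.ac
  AC_CONFIG_MACRO_DIR([m4])

  # top-level Makefile.am (only needed when automake is used)
  ACLOCAL_AMFLAGS = -I m4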

cheers,
-- 
Tollef Fog Heen
UNIX is user friendly, it's just picky about who its friends are





Re: Best practice for cleaning autotools-generated files?

2011-05-07 Thread Henrique de Moraes Holschuh
On Sat, 07 May 2011, Enrico Weigelt wrote:
> * Henrique de Moraes Holschuh  schrieb:
> > Yes.  I think it was Cyrus IMAP that required -I in places where
> > autoreconf doesn't reach, so I called each tool separately.  Which is
> > obviously a problem in autoreconf.
> 
> Is it really a problem of autoreconf ?

Yes.

> Imagine the following situation:
> 
> I'm (as upstream) using several macros in their own .m4 files (eg.
> in ./m4/, maybe even sorted into subdirs). Can autoreconf figure
> out the required search paths all on its own ?

The problem is that autoreconf offers NO command line options for you to
pass the required -I parameters for aclocal, nor is there a way to encode
that information in the one place where it could conveniently live
(configure.ac) AFAIK.

I sure hope it will NEVER decide to actually search for .m4 files at
non-standard directories on its own, that would make things much worse.

Anyway, you have to work around it using something like:

ACLOCAL='aclocal -I foo -I bar' autoreconf

(assuming that even works, I didn't test).  I consider that way too icky.
At that point, I just call each required tool directly in autogen.sh,
instead of relying on autoreconf.
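
For completeness, calling the tools individually looks something like
the following sketch (the -I directories and the libtoolize step are
illustrative and depend on the package):

  aclocal -I m4 -I acinclude
  libtoolize --copy --force   # only if the package uses libtool
  autoheader
  automake --add-missing --copy
  autoconf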

-- 
  "One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie." -- The Silicon Valley Tarot
  Henrique Holschuh





Re: glibc: causes segfault in Xorg

2011-05-07 Thread Steve M. Robbins
On Sat, May 07, 2011 at 12:25:15PM +0200, Aurelien Jarno wrote:
> On Wed, May 04, 2011 at 02:30:35PM +0200, Aurelien Jarno wrote:
> > Le 04/05/2011 07:42, Steve M. Robbins a écrit :

> > > P.S.  I tried rebuilding glibc myself locally, but gcc also segfaults
> > > in the process :-(
> > 
> > Are you sure it is something related? Which gcc version are you using?
> > Do you have a backtrace point to the same issue?

I was careless in my initial report; I should have specified that I
tried rebuilding the *old* glibc and got a segfault.  At this point,
all I really know is that building the eglibc 2.11.2-11 Debian source
package on my up-to-date sid amd64 machine fails:

make[3]: Entering directory `/home/steve/tmp/old-eglibc/eglibc-2.11.2/sunrpc'
CPP='gcc-4.4 -E -x c-header'
/home/steve/tmp/old-eglibc/eglibc-2.11.2/build-tree/amd64-libc/elf/ld-linux-x86-64.so.2
 --library-path 
/home/steve/tmp/old-eglibc/eglibc-2.11.2/build-tree/amd64-libc:/home/steve/tmp/old-eglibc/eglibc-2.11.2/build-tree/amd64-libc/math:/home/steve/tmp/old-eglibc/eglibc-2.11.2/build-tree/amd64-libc/elf:/home/steve/tmp/old-eglibc/eglibc-2.11.2/build-tree/amd64-libc/dlfcn:/home/steve/tmp/old-eglibc/eglibc-2.11.2/build-tree/amd64-libc/nss:/home/steve/tmp/old-eglibc/eglibc-2.11.2/build-tree/amd64-libc/nis:/home/steve/tmp/old-eglibc/eglibc-2.11.2/build-tree/amd64-libc/rt:/home/steve/tmp/old-eglibc/eglibc-2.11.2/build-tree/amd64-libc/resolv:/home/steve/tmp/old-eglibc/eglibc-2.11.2/build-tree/amd64-libc/crypt:/home/steve/tmp/old-eglibc/eglibc-2.11.2/build-tree/amd64-libc/nptl
 /home/steve/tmp/old-eglibc/eglibc-2.11.2/build-tree/amd64-libc/sunrpc/rpcgen 
-Y ../scripts -c rpcsvc/bootparam_prot.x -o 
/home/steve/tmp/old-eglibc/eglibc-2.11.2/build-tree/amd64-libc/sunrpc/xbootparam_prot.T
make[3]: *** 
[/home/steve/tmp/old-eglibc/eglibc-2.11.2/build-tree/amd64-libc/sunrpc/xbootparam_prot.stmp]
 Segmentation fault (core dumped)

The segfault is actually in the "ld-linux-x86-64.so.2" binary produced
during the build, not gcc as I had earlier written.  The backtrace is:

(gdb) bt full
#0  0x in ?? ()
No symbol table info available.
#1  0x2b4d84d7e990 in call_init (l=, argc=7, 
argv=0x7fff26bf50a0, env=0x7fff26bf50e0) at dl-init.c:85
j = 1
jm = 4
init_array = 0x2b4d852ebb50
#2  0x2b4d84d7ea87 in _dl_init (main_map=0x2b4d84f90178, argc=7, 
argv=0x7fff26bf50a0, env=0x7fff26bf50e0) at dl-init.c:134
preinit_array = 
preinit_array_size = 0x0
i = 0
#3  0x2b4d84d71b2a in _dl_start_user ()
   from 
/home/steve/tmp/old-eglibc/eglibc-2.11.2/build-tree/amd64-libc/elf/ld-linux-x86-64.so.2
No symbol table info available.
#4  0x7fff26bf6574 in ?? ()
No symbol table info available.
#5  0x0007 in ?? ()
No symbol table info available.
#6  0x7fff26bf6825 in ?? ()
No symbol table info available.
#7  0x7fff26bf6872 in ?? ()
No symbol table info available.
#8  0x7fff26bf6875 in ?? ()
No symbol table info available.
#9  0x7fff26bf6880 in ?? ()
No symbol table info available.
#10 0x7fff26bf6883 in ?? ()
No symbol table info available.
#11 0x7fff26bf689b in ?? ()
No symbol table info available.
#12 0x7fff26bf689e in ?? ()
No symbol table info available.
#13 0x in ?? ()
No symbol table info available.

Hope this clarifies the issue somewhat.

Thanks,
-Steve




Re: Bug#625865: ITP: ocportal -- ocPortal is a Content Management System for building and maintaining a dynamic website

2011-05-07 Thread Asheesh Laroia

On Fri, 6 May 2011, Chris Warburton wrote:


> On Fri, 2011-05-06 at 11:29 -0400, Scott Kitterman wrote:
> > On Friday, May 06, 2011 11:23:50 AM Tshepang Lekhonkhobe wrote:
> > > On Fri, 2011-05-06 at 09:11 -0400, Scott Kitterman wrote:
> > > > On Friday, May 06, 2011 08:56:21 AM Chris Warburton wrote:
> > > > >   Programming Lang: PHP
> > > > >   Description : ocPortal is a Content Management System for
> > > > > building and maintaining a dynamic website
> > > >
> > > > How many content management systems written in php does Debian need?
> > >
> > > It's not kool that you didn't even ask about how good it is. Maybe it's
> > > better than whatever exists in Debian currently, have you checked? My
> > > point is your question isn't helpful. It smacks of flaming.
> >
> > The question I should have asked is what its security record is like.  This
> > is an area that's rife with applications that have 'poor' security records.
> > Adding more to that pile would be an unfortunate burden on the security team.
> > That's probably the most significant of the project-wide costs adding a
> > package like this brings with it.
> >
> > Scott K
>
> Hi Scott. ocPortal isn't massively widespread compared to other systems,
> so there's obviously less experimental proof of security. We had a
> security hole a few years ago; this was before I got involved, but
> there's details here: http://en.wikipedia.org/wiki/OcPortal#Criticisms


Hi Chris and the ITP and debian-devel,

I think that if you are willing to work to make this a high-quality
package, and to be a responsive maintainer for bugs reported by users,
it will be great to have you maintain it in Debian.


The security work that you've described sounds great, and I hope that 
other PHP app upstreams hold their apps to such a high standard. If not, 
maybe you can use your tools to start filing bugs left and right against 
them. (-:


For that reason, I will review your packaging when it's ready, and sponsor 
it into Debian if it passes muster. Keep me posted.


--
-- Asheesh.

http://asheesh.org/

Life is to you a dashing and bold adventure.





Re: Bits from the Release Team - Kicking off Wheezy

2011-05-07 Thread Michael Gilbert
Enrico Zini wrote:

> On Fri, May 06, 2011 at 02:04:20PM -0400, Michael Gilbert wrote:
> 
> > It wasn't the GR itself. It was the fact that these changes to the NM
> > process were actually made. I suppose it is arguable that those changes
> > simply would not have happened without the GR, but that indicates more
> > of a lack of direct motivation within the new maintainer team.  
> > 
> > So, if it required the GR to motivate them, then I suppose it was a
> > necessity and ultimately a good thing, but my point is simply that it's
> > better when motivation comes from within rather than from an applied
> > external force.
> 
> I do not see how talking about the NM process or that GR is at all
> relevant in this thread, and please do not consider this message of mine
> an intent to contribute to it in any other way but to clarify a
> misrepresentation of a team I'm a member of.
>
> From the point of view of Front Desk, motivation has always been there:
> http://lists.debian.org/debian-devel-announce/2008/10/msg5.html
> but a GR was needed to be able to proceed, because the hands of FD were
> rather tied by this other GR: http://www.debian.org/vote/2008/vote_002

Then the core problem is the hand-tying itself, and that is a
consequence of the GR process.  Thus, my point remains: GRs are
the wrong way to achieve change; they have long-term and unintended
consequences.  Ideally, changes should be adopted on technical merit
rather than forced.

> Please do not try to provide facts about the motivation or intentions of
> others unless you really know them, otherwise you run into the risk of
> misrepresenting other people, which is bad.

OK, I actually tried to avoid presenting that as a fact, and more as an
interpretation of the situation.  "...if it required the GR to motivate
them..." makes that intent pretty clear I think.  I didn't intend the
word "motivation" to be interpreted as lack of interest or in any kind
of negative connotation, but more in the sense of overcoming some kind
of barrier/inertia; i.e. definition 1 in a google search: "The reason
or reasons one has for acting or behaving in a particular way".

Best wishes,
Mike





Re: Integrating aptosid?

2011-05-07 Thread Stefano Zacchiroli
On Tue, May 03, 2011 at 03:05:34PM +0200, Pierre Habouzit wrote:
> I know it's not simple, but it's not necessarily harder than making
> testing usable, I think Joss made a pretty good case about that on his
> blog.

FWIW (#1), for the non-planet readers this is at
http://np237.livejournal.com/31868.html

> aptosid is just an example, I don't even know the distro, they may not
> be the best choice, I just try to find alternates ideas. Note that I
> don't think it takes more than 10 people to do the rolling distro like
> Joss propose it: snapshot unstable every month and fix the worst
> breakages. It takes more than 10 people to do the same in testing
> because each individual fix is tangled in the migration issue, hence
> rapidly needs to update things that are at first totally unrelated to be
> fixed too first.

FWIW (#2). Even though we seem to have consensus on going in a different
direction for a Debian rolling suite, I've mimicked what Lucas did and
reached out to a couple of aptosid people I've had a chance to meet at a
past FOSS event in Dublin a few months ago and asked them to comment on
this. You can find attached a mail from Niall Walsh, an aptosid
developer, whom I've asked to comment on two issues: 1) who they
believe their target users are, and 2) the possibility of merging
efforts with Debian.

The mail is forwarded with permission from Niall, although it was
initially written for a single recipient rather than for a list. Take
it with a grain of salt. Niall has also clarified that his views are
only his and not necessarily representative of all aptosid developers,
but I've found them interesting and useful for this debate nonetheless.

Thanks a lot to Niall and to all the other developers for aptosid!

Cheers.

-- 
Stefano Zacchiroli -o- PhD in Computer Science \ PostDoc @ Univ. Paris 7
zack@{upsilon.cc,pps.jussieu.fr,debian.org} -<>- http://upsilon.cc/zack/
Quando anche i santi ti voltano le spalle, |  .  |. I've fans everywhere
ti resta John Fante -- V. Capossela ...| ..: |.. -- C. Adams
--- Begin Message ---
Hi Stefano,

Paul forwarded your e-mail onto me (not in a way I could really reply to 
inline, sigh).

First off, obviously you now have my e-mail address.

Secondly I permalurk on OFTC with the nick bfree, feel free to ping me 
there any time.

You probably won't be surprised at all to know that I and all the other 
aptosid people are aware of the rambling (rolling) thread on debian-devel.  
 Yesterday we even had lucas come join us in #aptosid where he spoke to a 
few of the aptosid team and had most of his questions answered as best they 
could be on the spot by Kel Modderman.

You also probably won't be surprised to know that it's certainly not easy 
to even begin to try and answer the questions in your mail.   I've thrown 
the mail in front of the other members of the aptosid team in case it helps 
prompt any of them to put on their asbestos underpants and join in.

Both Kel and Stefan Lippers-Hollmann (CC'd here at his request as he hopes 
to send you an answer of his own) have shown their faces in the monster 
thread on debian-devel, so aptosid hasn't been totally silent on it.

Some initial comments (I won't call them answers) to the questions posed in 
your mail:

1)

I think many aptosid users are using it simply because they like to be
closer to the "bleeding edge" than Debian stable would allow, without losing
all the advantages of Debian.   This mainly means simply getting newer
versions of their applications.  But as a group I guess I would say they
must also be driven by enjoying staying closer to current software
development, seeing things when they are newer and not waiting years before
they get their hands on the features they hear about being released by the
upstream projects.

I guess I would characterise the vast majority of aptosid users as people
who have prior Linux experience on one distribution or another.   Many seem
drawn by the chance to use Debian's repository, either because they know or
have heard about the scope and quality of the software it contains.
Obviously though they want (or at least think they want) newer things than
Debian stable will give them.   Quite a few will say they want a rolling
release (and many mention having tried or considered Arch), though I've
never tried to find out why, so I can only answer that one for myself.

I personally like a rolling release as a user because it means you don't 
get to face a day where you suddenly upgrade your entire system at once.   
Obviously the downside to this is you get many more opportunities for an 
upgrade to break something, but if it happens it's usually much easier to 
pick out the problem.

I also like to feel that I am contributing to improving Debian even if I am 
only using unstable and do not contribute anything.   The reality is though 
that the more people who use unstable, the more bugs will be found earlier 
and the better Debian will be for all, w

Re: New Packages generator

2011-05-07 Thread Joerg Jaspert
> As we already noted in our meeting minutes, there is one difference in
> the files that you might notice: The order of the fields within one
> entry is different. This should not create any trouble with compliant
> 822 parsers, as order does not matter, but in case you do some weird
> parsing on your own relying on any special order you might want to check
> it still does what you expect.

Just to clarify: the order of the fields is static; it doesn't change per
call.  It is just different from what we had.  (And you shouldn't rely on
the order anyway :) )
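
Purely as an illustration (this is not the archive software's code),
such 822 stanzas can be parsed by field name rather than by position,
e.g. with awk in paragraph mode; continuation lines are ignored in this
sketch:

  # print the Version of package "bash" from a Packages file,
  # regardless of the order of the fields within each stanza
  awk 'BEGIN { RS=""; FS="\n" }
       {
         split("", f)
         for (i = 1; i <= NF; i++) {
           n = index($i, ": ")
           if (n > 0) f[substr($i, 1, n - 1)] = substr($i, n + 2)
         }
         if (f["Package"] == "bash") print f["Version"]
       }' Packages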

-- 
bye, Joerg





Re: Best practice for cleaning autotools-generated files?

2011-05-07 Thread Neil Williams
On Sat, 7 May 2011 16:51:01 +0200
Enrico Weigelt  wrote:

> * Neil Williams  schrieb:
> 
> > Nonsense. It is not the job of ./autogen.sh to revert to the VCS state
> 
> It is its job to produce a clean state where *all* generated
> files have been regenerated and the next stage (configure)
> can start from here, w/o any manual intervention or workarounds.

s/clean//

./autogen.sh does not clean anything. It may refresh stuff but it's not
about cleaning things. It's not the job of ./autogen.sh to convert a
semi-built broken setup back to the original VCS state. It simply
regenerates the autotools stuff and provides the ./configure script.

Any ./autogen.sh script which deletes .o files or messes with existing
content in .libs/ directories is RC buggy IMHO. Fair enough if those
are changed when the next 'make' is issued but that's the point, it's
up to $(MAKE), not ./autogen.sh.
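
To make that division of labour concrete (an illustration of the point,
not a prescription):

  ./autogen.sh      # regenerate configure, Makefile.in, aclocal.m4, ...
  ./configure
  make
  make clean        # removing .o files and other build output is make's job
  make distclean    # ditto for configure's output (Makefiles, config.h, ...)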

> > and there's no harm in ./autogen.sh calling configure with whatever
> 
> It *is*, as soon as this cannot run through without special
> flags (eg. if some features have to be switched off, etc).

The options are just passed on unchanged. No problem with that. Can
still call ./configure directly. Maybe it wastes a little bit of time
but if you're cross-building, you're using a really fast machine so
this is hardly of concern.

> > arguments are passed to ./autogen.sh - as long as those options have
> > the same effect when passed to ./configure for subsequent runs during
> > development.
> 
> Adds additional complexity to add proper parameters here, for each
> individual package. (and, of course, find out all this first).
> 
> This way, you add unnecessary burden to all the maintainers of all
> the distros out there (that might be interested in your package).

and all the other build systems out there are suddenly compatible
across all packages?? Dream on.

> > There is no technical reason for any of these suggestions, 
> 
> Actually, there is. (okay, maybe more an organisational reason).

That's not technical and there is no one organisation which bridges
all of the various upstream teams. If you want to change things, you
must persuade each team to adopt your preferences and you have
singularly failed to convince me - because these ARE preferences, not
best practice.

> Exactly the same reason why things like AC plugs, voltages and
> frequencies are standardized.

Not true. Those things MUST work together in every permutation within a
specific jurisdiction or people can die. Debian and autotools are
nowhere near that level of importance.

(My previous field {for >20yrs} was medical, so don't go assuming I'm
unaware of the risks of non-standard electrical services. I've treated
many patients for the kind of burns which result when things are even
slightly outside the standard.)

-- 


Neil Williams
=
http://www.linux.codehelp.co.uk/





Re: Best practice for cleaning autotools-generated files?

2011-05-07 Thread Enrico Weigelt
* Neil Williams  schrieb:

> Nonsense. It is not the job of ./autogen.sh to revert to the VCS state

It is its job to produce a clean state where *all* generated
files have been regenerated and the next stage (configure)
can start from here, w/o any manual intervention or workarounds.

> and there's no harm in ./autogen.sh calling configure with whatever

It *is*, as soon as this cannot run through without special
flags (eg. if some features have to be switched off, etc).

> arguments are passed to ./autogen.sh - as long as those options have
> the same effect when passed to ./configure for subsequent runs during
> development.

Adds additional complexity to add proper parameters here, for each
individual package. (and, of course, find out all this first).

This way, you add unnecessary burden to all the maintainers of all
the distros out there (that might be interested in your package).

> Again, this has nothing to do with Debian in most cases.

True. Applies to virtually any distro. And they all have to cope
with that crap, again, and again, and again. Total waste of human
workforce, which would go away by simply fixing a few lines once.

> There is no technical reason for any of these suggestions, 

Actually, there is. (okay, maybe more an organisational reason).
Exactly the same reason why things like AC plugs, voltages and
frequencies are standardized.


cu
-- 
--
 Enrico Weigelt, metux IT service -- http://www.metux.de/

 phone:  +49 36207 519931  email: weig...@metux.de
 mobile: +49 151 27565287  icq:   210169427 skype: nekrad666
--
 Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme
--





Re: Best practice for cleaning autotools-generated files?

2011-05-07 Thread Neil Williams
On Sat, 7 May 2011 15:41:22 +0200
Enrico Weigelt  wrote:

> * Neil Williams  schrieb:
> 
> > No. Reverting the build to the point where it is equivalent to only
> > what is in the VCS adds completely unnecessary build dependencies and
> > build stages.
> 
> Which ones exactly (besides the usual autotools) ?

bison, flex and a few others. Also that certain stages of the
transition from VCS checkout to tarball are not actually scripted
anywhere, just described in a file.
 
'make distcheck' is an extremely useful sanity check with autotools
packages. To do the check and then not distribute the results is just
odd. To not do the check is negligent.
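
For reference, the check in question is just part of the normal
autotools release flow, e.g. (a sketch, with commonly used flags):

  autoreconf -fi
  ./configure
  make distcheck    # builds the release tarball, then unpacks, builds,
                    # tests and installs it in a scratch directory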

> I don't want to repeat the last decades of SCM theory and common
> practices here, but one of the major goals of an SCM infrastructure
> is that you can always grab the source of some product in some
> specific version and run a full rebuild, anytime. _Reliable_
> reproducibility. Tarballs that are not fully-automatically produced
> from the appropriate SCM tag (and can be reproduced anytime) simply
> don't meet that requirement.

Disagree. A tarball is easier to get hold of than some weird tag in an
unfamiliar VCS.

Common format, common methods to download and obtain it, simple URL's
become possible instead of weird ones with ?=_, and other characters
which rely on active parsing by a server instead of a simple directory
listing.
 
> Maybe that all worked for you over many years. Fine.

It has indeed - about 10yrs at last count.

> For me it never worked, starting with common things like
> sysrooted cross-compiling (yes, I've probably got a different
> scope than you; I'm coming from the embedded front)

Umm, please check www.emdebian.org - I am very deeply involved in all
of the embedded activity in Debian and elsewhere. It's been my geek
obsession since before I was a DD and it's now my full time employment.
Personally, I dislike sysroot because it doesn't actually make my
cross-building work any easier. I work mainly on a dpkg-cross based
system, I wrote or improved nearly all of the scripts which currently
support cross-building in Debian and prepared the only completely
cross-built Debian distribution for ARM and released it alongside
Lenny. I did some early work on the cross-building toolchains before
handling that task over to Hector. I think I qualify as "embedded".

Don't make assumptions - my embedded work isn't exactly hard to find.

> > I wouldn't want to work with any upstream which does not produce
> > stable tarballs which are available for download without going
> > anywhere near the VCS. I shouldn't even need to know what VCS is
> > being used if all I want to do is build the package for my own use.
> 
> Guess what, I wouldn't work with upstreams that don't provide 
> reliable release tags in the VCS (an VCS that I can import to git).
> Actually, I sometimes do, but it adds a lot of extra burden.

Fine. That means that any difference on that issue is personal
preference and has no place in any "best practice" guide or
specification.

Each preference works for those who prefer that method. Either the
guide / spec steers well clear of the entire area or it gives 100%
equal coverage to all of the possible preferences.

> My buildsystem (called Briegel), doesn't even bother with things
> like tarballs or patches (still supported, but unused for years now).
> It simply grabs the source tree from git by a normalized ref (eg. tag).
> Normalized means: there's a simple rule to transform a vector of
> package name and (also normalized) version number into a ref name.
> Period.

Then your build system cannot cope with any of my packages. The point
with Emdebian is that it must cope with all packages which might be
useful, across most of Debian.
 
> Any changes needed in the source tree are done through SCM.
> (the processes behind may vary between individual packages).

No, those should be done via patches, managed by dpkg-source format 3
or directly incorporated into the upstream tarball (which is how I
prefer to do it).

Depending on the purpose of the patches, the patch can go upstream or
into the Debian package.
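
A purely illustrative example of the patch route with source format
"3.0 (quilt)" (the patch and file names here are made up):

  echo "3.0 (quilt)" > debian/source/format
  export QUILT_PATCHES=debian/patches
  quilt new fix-cross-build.patch
  quilt add src/Makefile.am
  # ... edit src/Makefile.am ...
  quilt refresh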

> > Overall, this has little to do with Debian. 
> 
> True. It's a distro-agnostic issue. It's about how to be a good
> upstream to make distros' lives easier, and it extends into the whole
> QM topic.

And that includes the ability to compare the checksums of tarballs
included in the distributed sources of an embedded cross-built
distribution to check that you really do have the same version as other
distributions. It helps limit the number of things which need checking
when a bug only appears on one side. Normally, the cross-building
process takes the most blame but there can be other issues.
 
> > If those people are happy with their own system, then arguing
> > about tags versus tarballs is just bikeshedding. 
> 
> This argument is only valid as long as you only consider official
> upstream releases as your base. For example, I'

Re: Best practice for cleaning autotools-generated files?

2011-05-07 Thread Enrico Weigelt
* Sean Finney  schrieb:



IOW: all the fun of indeterministically self-modifying code ...


cu
-- 
--
 Enrico Weigelt, metux IT service -- http://www.metux.de/

 phone:  +49 36207 519931  email: weig...@metux.de
 mobile: +49 151 27565287  icq:   210169427 skype: nekrad666
--
 Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme
--





Re: Bits from the Release Team - Kicking off Wheezy

2011-05-07 Thread Enrico Zini
On Fri, May 06, 2011 at 02:04:20PM -0400, Michael Gilbert wrote:

> It wasn't the GR itself. It was the fact that these changes to the NM
> process were actually made. I suppose it is arguable that those changes
> simply would not have happened without the GR, but that indicates more
> of a lack of direct motivation within the new maintainer team.  
> 
> So, if it required the GR to motivate them, then I suppose it was a
> necessity and ultimately a good thing, but my point is simply that it's
> better when motivation comes from within rather than from an applied
> external force.

I do not see how talking about the NM process or that GR is at all
relevant in this thread, and please do not consider this message of mine
an intent to contribute to it in any other way but to clarify a
misrepresentation of a team I'm a member of.

From the point of view of Front Desk, motivation has always been there:
http://lists.debian.org/debian-devel-announce/2008/10/msg5.html
but a GR was needed to be able to proceed, because the hands of FD were
rather tied by this other GR: http://www.debian.org/vote/2008/vote_002

Please do not try to provide facts about the motivation or intentions of
others unless you really know them, otherwise you run into the risk of
misrepresenting other people, which is bad.


Ciao,

Enrico

-- 
GPG key: 4096R/E7AD5568 2009-05-08 Enrico Zini 




Re: Best practice for cleaning autotools-generated files?

2011-05-07 Thread Enrico Weigelt
* Neil Williams  schrieb:

> No. Reverting the build to the point where it is equivalent to only
> what is in the VCS adds completely unnecessary build dependencies and
> build stages.

Which ones exactly (besides the usual autotools) ?

> VCS is not the final source, it's a tool for upstream to create the
> final source.

Lets replace "VCS" by "SCM" to get to the whole picture.

I don't want to repeat the last decades of SCM theory and common
practices here, but one of the major goals of an SCM infrastructure
is that you can always grab the source of some product in some
specific version and run a full rebuild, anytime. _Reliable_
reproducibility. Tarballs that are not fully-automatically produced
from the appropriate SCM tag (and can be reproduced anytime) simply
don't meet that requirement.

Maybe that all worked for you over many years. Fine.
For me it never worked, starting with common things like
sysrooted cross-compiling (yes, I've probably got a different
scope than you; I'm coming from the embedded front)

> I wouldn't want to work with any upstream which does not produce
> stable tarballs which are available for download without going
> anywhere near the VCS. I shouldn't even need to know what VCS is
> being used if all I want to do is build the package for my own use.

Guess what, I wouldn't work with upstreams that don't provide 
reliable release tags in the VCS (an VCS that I can import to git).
Actually, I sometimes do, but it adds a lot of extra burden.

My buildsystem (called Briegel), doesn't even bother with things
like tarballs or patches (still supported, but unused for years now).
It simply grabs the source tree from git by a normalized ref (eg. tag).
Normalized means: there's a simple rule to transform a vector of
package name and (also normalized) version number into a ref name.
Period.
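
A tiny sketch of such a normalization rule (the package name, version,
URL and tag scheme below are all made up):

  pkg=foo
  ver=1.2.3
  git clone "git://git.example.org/${pkg}.git"
  cd "$pkg" && git checkout "${pkg}-${ver}"   # e.g. the tag "foo-1.2.3"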

Any changes needed in the source tree are done through SCM.
(the processes behind may vary between individual packages).
And these efforts, prodiving (long term) stable source trees
is scope of the OSS-QM project (which acts as an intermediate
between the upstream and consumers like my Briegel-based
special-purpose distros).

> Overall, this has little to do with Debian. 

True. It's a distro-agnostic issue. It's about how to be a good
upstream to make distros' lives easier, and it extends into the whole
QM topic.

> If those people are happy with their own system, then arguing
> about tags versus tarballs is just bikeshedding. 

This argument is only valid as long as you only consider official
upstream releases as your base. For example, I'm running automated
git imports and cleanups for certain packages where upstream doesn't
bother to provide a clean and stable source. You could use this,
or copy the machinery and run it on your own (IOW: join OSS-QM ;-p)

> Personal preferences have no place in a "best practice" statement -
> that way lies the hell of mandating which VCS gets used, pandering to
> whichever fanboy shouts the loudest.

Well, I wouldn't set any rule which VCS upstreams should use. That's
their choice. But they should provide anything that can be easily
imported (eg. into git). But as distro maintainers (from whatever
distro some might come from), we could agree to import all our required
source trees into some standardized SCM where anyone can grab the
(maybe cleaned) trees for a particular package/version automatically,
and additionally maintain our bugfix and/or tailoring changes there.
(-> OSS-QM ;-p)

> > A good rule, IMHO, is: take *only* the *actual* source files (those
> > written by the developers) and always regenerate all intermediate
> > steps. Of course this means, _never ever_ use prebuild configure
> > scripts etc.
> 
> Fundamentally disagree. 

Obviously, you almost never have to change the input files.
Or need to fix/tailor autotools. For mainline distros this might
work well, but in the embedded world you're easily out of luck
with this approach.

> That is a very, very, bad rule IMHO. ./autogen.sh is a valuable
> aid and it is pointless to try and assert that it should be called
> in all builds. It can become necessary if the latest upstream
> tarball has gone stale but that's about it. 

And it *IS* necessary, as soon as you have to change some of the
input files. In my daily projects, this happens on the majority
of the packages.

> > To go even one step further: dont use tarballs, instead upstream's
> > VCS release tags (assuming upstream provides that ;-o)
> 
> ... and that is simply rubbish. Just because some teams want to use VCS
> tags does not mean that all upstreams would consider such a change. 

True, not every upstream wants to do that. And not every upstream
wants to maintain stable releases. Well, that's life. Essentially
we have two options to cope with that:

a) everybody does the required fixups and workarounds on their own
b) collaborate and provide a stabilized midstream together, and so
   save a lot of human workforce.

> As an upstream 

Bug#625971: ITP: wims-help -- help files for wims

2011-05-07 Thread Georges Khaznadar
Package: wnpp
Severity: wishlist
Owner: Georges Khaznadar 


* Package name: wims-help
  Version : 4.01
  Upstream Authors : Bernadette Perrin-Riou ,
 Marie-Claude David ,
 Association WIMSEDU 
* URL : http://www.wimsedu.info
* License : GPL-2
  Programming Lang: phtml, and oef languages (parsed by Wims)
  Description : help files for wims

 WIMS' modules implement every user interface beyond its main page;
 this package provides the help modules.
 .
 WIMS is an acronym for WWW Interactive Mathematics Server. Nowadays
 WIMS serves much more than mathematical content (physics, chemistry,
 biology, languages).
 .
 The WIMS educational platform features a rich set of resources and
 exercises either with free access or for personalised study.






Re: 0-day NMUs for RC bugs without activity for 7 days?

2011-05-07 Thread Stefano Zacchiroli
On Sat, May 07, 2011 at 12:51:46PM +0200, Jakub Wilk wrote:
> This works both ways. If a NMUer uploaded my package without a delay
> and without a good reason[0], I want to be able to yell at him „you
> are
> a jerk (according to Developers Reference)!”
> 
> Unfortunately, clueless NMUers do exist. :(

It's all a matter of trade-offs. You're right that clueless NMUers do
exist. OTOH, very good NMUers who won't go ahead because a policy tells
them not to exist as well. The question is whether there are more
people in the first camp or in the second. In my experience with
NMUs---both as NMUer and as NMUed maintainer---I tend to believe that
in Debian we have far more people on the cautious side than clueless
NMUers.

Ultimately, having an overcautious policy for NMUs has the potential of
blocking bug fixes and evolution in our project. I think we really need
to fight that, and that we should tolerate the risk of a handful of
clueless NMUers going ahead. Of course, when that happens, we should
take good care of explaining to them why their approach was suboptimal
and not in the best interest of Debian. I believe that way we can, in
the long run, both increase our culture of doing good NMUs and avoid
overzealous blockers that simply delay our procedures and increase
frustration for people who know they can just go ahead and fix a broken
package.

But I agree that this policy should not force maintainers of several
packages to ping their bug logs every 7 days, although at the very
minimum I do expect a maintainer to post at least one "I'm on it"
message to an RC bug log. I've proposed before, for this change, to
stress that the NMUer should make a best-effort attempt to verify
whether the maintainer is working on the fix, for instance by looking
at the VCS head of the package, asking on IRC, or the like.

Finally, considering this policy has been in effect for 5 years or so,
and considering that devref states guidelines rather than hard rules, I
believe the _practical_ impact of this change would be very low.

Cheers.

-- 
Stefano Zacchiroli -o- PhD in Computer Science \ PostDoc @ Univ. Paris 7
zack@{upsilon.cc,pps.jussieu.fr,debian.org} -<>- http://upsilon.cc/zack/
Quando anche i santi ti voltano le spalle, |  .  |. I've fans everywhere
ti resta John Fante -- V. Capossela ...| ..: |.. -- C. Adams




Re: 0-day NMUs for RC bugs without activity for 7 days?

2011-05-07 Thread Neil Williams
On Sat, 7 May 2011 12:51:46 +0200
Jakub Wilk  wrote:

> [0] No, 7 days without activity in the bug log is not a good enough 
> reason. Let's turn our empathy on and face it: we are not bug-fixing 
> monkeys and 7 days is a very short time frame.

It's 7 days without maintainer response - if you're busy, as we all get
from time to time, it just means that you explain that in the bug
report with a single email. Let people know - we're not mind
readers. RC bugs are on the radar of every DD; you may have told those
you often work with that things are busy in real life, but that doesn't
mean that someone else affected by the RC bug will know, unless you put
a brief note in the actual bug report.

It's an open development model, so communicate openly - even if you're
busy, sending a short reply to the bug report shouldn't be beyond what
Debian can justifiably expect from any maintainer.

7 days without email is a good enough reason to assume that the
maintainer isn't going to upload a fix any time soon.

-- 


Neil Williams
=
http://www.linux.codehelp.co.uk/





Re: Best practice for cleaning autotools-generated files?

2011-05-07 Thread Neil Williams
On Sat, 7 May 2011 13:33:53 +0200
Enrico Weigelt  wrote:

> * Josue Abarca  schrieb:
> 
> > From: /usr/share/doc/autotools-dev/README.Debian.gz
> > "Example autogen.sh and debian/rules files can be found in
> > /usr/share/doc/autotools-dev/examples.  Do not use them as-is. Rather,
> > properly customize your own."
> > 
> > and from /usr/share/doc/autotools-dev/examples/autogen.sh
> > " ...Still, this script is capable of cleaning
> > # just about any possible mess of autoconf files."
> 
> Aprops autogen.sh:
> 
> IMHO the upstream always should supply a proper autogen.sh script,
> which:
> 
> a) removes all autogenerated files
> b) calls all the autotools commands, which can be overridden
>via environment (eg. $AUTOCONF, ...)
> c) does _NOT_ call configure

Nonsense. It is not the job of ./autogen.sh to revert to the VCS state
and there's no harm in ./autogen.sh calling configure with whatever
arguments are passed to ./autogen.sh - as long as those options have
the same effect when passed to ./configure for subsequent runs during
development.

Again, this has nothing to do with Debian in most cases. Fix the few
occasions where it might matter, don't make it a general requirement.

> Perhaps we could write some spec how an "good" upstream autogen.sh
> script should look like, fix them where necessary (so no individual
> logic in the control files anymore for this part) and push the
> patches to upstreams.

Try that with my upstreams and I'll do the opposite. There are no good
reasons to make such a spec which elevates personal preference to best
practice. There is no technical reason for any of these suggestions, no
benefit from these being turned into "best practice" and no gain in
Debian from wider adoption of such *preferences*.

> I've already done that for several packages in the OSS-QM project.
> (which is not meant for Debian, but completely distro agnostic)
 
I've no problem with that as long as you leave my upstreams alone.
;-)

-- 


Neil Williams
=
http://www.linux.codehelp.co.uk/





Re: Best practice for cleaning autotools-generated files?

2011-05-07 Thread Enrico Weigelt
* Henrique de Moraes Holschuh  schrieb:

> Yes.  I think it was Cyrus IMAP that required -I in places where
> autoreconf doesn't reach, so I called each tool separately.  Which is
> obviously a problem in autoreconf.

Is it really a problem of autoreconf ?

Imagine the following situation:

I'm (as upstream) using several macros in their own .m4 files (eg.
in ./m4/, maybe even sorted into subdirs). Can autoreconf figure
out the required search paths all on its own ?


cu
-- 
--
 Enrico Weigelt, metux IT service -- http://www.metux.de/

 phone:  +49 36207 519931  email: weig...@metux.de
 mobile: +49 151 27565287  icq:   210169427 skype: nekrad666
--
 Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme
--





Re: Best practice for cleaning autotools-generated files?

2011-05-07 Thread Neil Williams
On Sat, 7 May 2011 13:48:00 +0200
Enrico Weigelt  wrote:

> * Neil Williams  schrieb:
> 
> > Not every generated file needs to be cleaned. The list can vary if the
> > package is very old and depends on how the package itself handles the
> > clean internally.
> 
> IMHO, *all* generated files should be cleaned away, just to be sure.
> In the last decade I've encountered a lot of nasty problems w/
> forgotten generated files and similar stuff.

No. Reverting the build to the point where it is equivalent to only
what is in the VCS adds completely unnecessary build dependencies and
build stages. VCS is not the final source, it's a tool for upstream to
create the final source. I wouldn't want to work with any upstream
which does not produce stable tarballs which are available for download
without going anywhere near the VCS. I shouldn't even need to know what
VCS is being used if all I want to do is build the package for my own
use.

I don't care what people say here, I am not putting the pre-tarball
build stages into the upstream build system of my upstream packages. I
usually document them in the README.svn but if it doesn't work, meh,
use the tarball which I ensure does work.

Overall, this has little to do with Debian. There is no consensus
amongst upstream teams, no benefit in breaking stable projects by
forcing a change in the build system and it just ends up as a simple
preference by whoever has commit access to the upstream. If those
people are happy with their own system, then arguing about tags versus
tarballs is just bikeshedding. 

Personal preferences have no place in a "best practice" statement -
that way lies the hell of mandating which VCS gets used, pandering to
whichever fanboy shouts the loudest.

> A good rule, IMHO, is: take *only* the *actual* source files (those
> written by the developers) and always regenerate all intermediate
> steps. Of course this means, _never ever_ use prebuild configure
> scripts etc.

Fundamentally disagree. That is a very, very, bad rule
IMHO. ./autogen.sh is a valuable aid and it is pointless to try and
assert that it should be called in all builds. It can become necessary if
the latest upstream tarball has gone stale but that's about it. 

> To go even one step further: dont use tarballs, instead upstream's
> VCS release tags (assuming upstream provides that ;-o)

... and that is simply rubbish. Just because some teams want to use VCS
tags does not mean that all upstreams would consider such a change. As
an upstream developer, I simply refuse. The VCS is for upstream,
including the tags. What gets into distributions is the tarball because
then I can be sure that the package builds with 'make distcheck' without
using a new £$ %$£ branch for every "£$£" change between releases.
Lunacy. It's elevating a tool into a stick to hit people with. I've
only got one answer to that and it's NO.

One method does not fit all. I've got no problem with teams who do want
to use VCS tags as a release process but don't push your model on
others. All you'll get from my upstream projects is tarballs and if
the VCS tag doesn't build, I really don't care.

There is no such thing as a "release tag" in my upstreams, there might
be tags but those are often just experiments.

> > Building direct from VCS is a nice idea but sometimes, it just isn't
> > worth the pain - let the build system build the tarball and package
> > from the tarball. It's just easier. Or just change the build system
> > and/or the tool.
> 
> Actually, when building directly from a VCS release tag causes big pain,
> upstream is doing something _seriously_ wrong. So: fix the upstream,
> or put in an intermediate project for the (distro-independent) QM
> work (-> OSS-QM project).

NO. I am one of those upstreams and I do not care if any VCS tag causes
pain to anyone when building. The tarball is the released code, use it
or find someone else to package it.

Indeed, in a few upstream projects, the only files which ever get
tagged are the debian/ files. ;-)

-- 


Neil Williams
=
http://www.linux.codehelp.co.uk/





Re: Best practice for cleaning autotools-generated files?

2011-05-07 Thread Enrico Weigelt
* Henrique de Moraes Holschuh  wrote:

> 1. No spawn from autotooling allowed in the VCS.  EVER.  .gitignore it
>away at once.  Autogenerated files inside the VCS repo are almost
>always a bad idea.  It was madness with CVS, it was bad mojo with
>SVN, and it is certainly at least an annoyance with git.

ACK, couldn't agree more.

BTW: last week, in a commercial project, a colleague actually proposed
checking prebuilt binaries into the VCS ... (well, not a VCS,
but TFS) ... I almost spit out my coffee when he said this ;-o
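
For illustration, a minimal sketch of such an ignore list; the exact set
of entries depends on the package (e.g. config.h.in only exists if
autoheader is used):

    # append common autotools spawn to .gitignore; adjust to the package
    printf '%s\n' aclocal.m4 autom4te.cache/ configure config.h.in \
        Makefile.in config.guess config.sub depcomp install-sh \
        ltmain.sh missing >> .gitignore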

> 2. Add an autogen.sh script that makes it trivial to retool everything as
>needed.  Keep it up-to-date.

ACK. I'm already doing so for several packages in the OSS-QM project.
Such steps heavily reduce the complexity of individual distros' build
systems/package descriptors. It's all about standardized interfaces.

> 3. I always hook the debian package to retool on package build.  I have
>no reason to trust whatever cruft upstream shipped, even if I happen to
>be upstream at that moment in time.

ACK. Pre-shipped autotools-generated files tend to be unreliable;
the goal of "generate once, run everywhere" is simply missed.
If for any reason even one of the input files has to be changed,
a full regeneration is needed to be sure. At that point we can
question whether we want all the complexity (not just technical,
but also in human effort) of deciding whether to regenerate,
or whether we should just always do it by default.
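
As a hedged illustration of what such a retool-on-build hook effectively
runs (in a real Debian package this would normally live in debian/rules,
e.g. via dh-autoreconf, rather than being hand-rolled shell):

    set -e
    autoreconf --force --install --verbose   # regenerate configure, Makefile.in, ...
    ./configure --prefix=/usr                # then configure and build as usual
    make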
 
>It also means I will instantly notice if upstream started doing very
>"unpure" things to the build system, such as directly modifying
>autogenerated files, and it lets me try to contact them (or fix my ways,
>if I am upstream) and contain the braindamage before it causes any Debian
>victims.

ACK. And the downstream hopefully doesn't have similar insane ideas
(e.g. patching autogenerated files) ...


cu
-- 
--
 Enrico Weigelt, metux IT service -- http://www.metux.de/

 phone:  +49 36207 519931  email: weig...@metux.de
 mobile: +49 151 27565287  icq:   210169427 skype: nekrad666
--
 Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme
--





Re: 0-day NMUs for RC bugs without activity for 7 days?

2011-05-07 Thread Raphael Hertzog
Hi,

On Sat, 07 May 2011, Jakub Wilk wrote:
> This works both ways. If an NMUer uploaded my package without a delay
> and without a good reason[0], I want to be able to yell at him „you
> are a jerk (according to Developers Reference)!”

No.

First off, I never said that the rules are there to be able to badmouth
people. So calling someone a jerk is never a good idea.

And whatever mistake was made, the NMUer is still someone trying to help you,
so you should respond calmly: "Thank you for the NMU, but the issue really did
not warrant a direct upload; you would have saved us some trouble by
contacting me or by uploading to DELAYED."

> [0] No, 7 days without activity in the bug log is not a good enough
> reason. Let's turn our empathy on and face it: we are not bug-fixing
> monkeys and 7 days is a very short time frame.

We're speaking of RC bugs; you should not have many RC bugs on your
shoulders...

Cheers,
-- 
Raphaël Hertzog ◈ Debian Developer

Follow my Debian News ▶ http://RaphaelHertzog.com (English)
  ▶ http://RaphaelHertzog.fr (Français)





Re: 0-day NMUs for RC bugs without activity for 7 days?

2011-05-07 Thread Charles Plessy
On Fri, May 06, 2011 at 11:14:55PM +0200, Lucas Nussbaum wrote:
> 
> - The dev-ref documents the "default" choice. While there are cases where I
>   agree that uploading the fixed package ASAP is necessary, in most cases, we
>   can probably survive two more days with the bug, if we already survived more
>   than 7 days.

Dear Lucas and everybody,

I also think that not all RC bugs are equal.  What Lucas wrote above is
reflected in the fact that some RC bug fixes will be uploaded with a “low”
urgency, while others will have a higher one.  In http://bugs.debian.org/625449#85,
I am proposing the following:

 - urgency=emergency: no communication needed.
 - urgency=high: normal upload if no activity for 7 days.
 - urgency=medium: DELAYED/2 if no activity for 7 days.
 - urgency=low: ask first.

Cheers,

-- 
Charles Plessy
Tsurumi, Kanagawa, Japan





Re: Best practice for cleaning autotools-generated files?

2011-05-07 Thread Enrico Weigelt
* Neil Williams  wrote:

> Not every generated file needs to be cleaned. The list can vary if the
> package is very old and depends on how the package itself handles the
> clean internally.

IMHO, *all* generated files should be cleaned away, just to be sure.
In the last decade I've encountered a lot of nasty problems w/
forgotten generated files and similar stuff.

A good rule, IMHO, is: take *only* the *actual* source files (those
written by the developers) and always regenerate all intermediate
steps. Of course this means: _never ever_ use prebuilt configure
scripts etc.

To go even one step further: don't use tarballs; use upstream's
VCS release tags instead (assuming upstream provides them ;-o)

> >Since I use (or plan to use) git-buildpackage, I don't have a
> > tarball which could serve as an authoritative whitelist. Thus an
> > additional whitelist refresh step would be required every time I
> > merge the upstream branch into the debian branch. That's bad.
> 
> Is this a case of a problem with a tool causing more work? Fix the
> tool? Change your choice of tool?

No, git-buildpackage (IMHO) is the way to go. But at this point I
wouldn't do any source-tree manipulation outside of git; do all of this
within the VCS (e.g. via tailored import/rebase mechanisms, etc.).
I'm using a similar approach with my Briegel embedded build system
and the OSS-QM source repositories. I could go into more detail if
you'd like.
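
For example (a hedged sketch; the upstream tarball name is hypothetical),
the 2011-era git-buildpackage tools keep the whole cycle inside git:

    git-import-orig ../foo_1.2.orig.tar.gz   # merge the new upstream release into the 'upstream' branch
    git-buildpackage                         # build the Debian package from the git tree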

> Building direct from VCS is a nice idea but sometimes, it just isn't
> worth the pain - let the build system build the tarball and package
> from the tarball. It's just easier. Or just change the build system
> and/or the tool.

Actually, when building directly from a VCS release tag causes big pain,
upstream is doing something _seriously_ wrong. So: fix the upstream,
or put in an intermediate project for the (distro-independent) QM
work (-> OSS-QM project).


cu
-- 
--
 Enrico Weigelt, metux IT service -- http://www.metux.de/

 phone:  +49 36207 519931  email: weig...@metux.de
 mobile: +49 151 27565287  icq:   210169427 skype: nekrad666
--
 Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme
--





Re: Best practice for cleaning autotools-generated files?

2011-05-07 Thread Enrico Weigelt
* Josue Abarca  wrote:

> From: /usr/share/doc/autotools-dev/README.Debian.gz
> "Example autogen.sh and debian/rules files can be found in
> /usr/share/doc/autotools-dev/examples.  Do not use them as-is. Rather,
> properly customize your own."
> 
> and from /usr/share/doc/autotools-dev/examples/autogen.sh
> " ...Still, this script is capable of cleaning
> # just about any possible mess of autoconf files."

Apropos autogen.sh:

IMHO upstream should always supply a proper autogen.sh script,
which:

a) removes all autogenerated files
b) calls all the autotools commands, which can be overridden
   via the environment (e.g. $AUTOCONF, ...)
c) does _NOT_ call configure
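
A minimal sketch of such a script along those lines (assuming local macros
live in ./m4 and the package uses autoheader/automake; adjust as needed):

    #!/bin/sh
    set -e

    : "${ACLOCAL:=aclocal}"
    : "${AUTOHEADER:=autoheader}"
    : "${AUTOCONF:=autoconf}"
    : "${AUTOMAKE:=automake}"

    # a) remove previously generated files so nothing stale survives
    rm -rf autom4te.cache aclocal.m4 configure config.h.in
    find . -name Makefile.in -exec rm -f {} +

    # b) regenerate everything, honouring $AUTOCONF etc. from the environment
    "$ACLOCAL" -I m4
    "$AUTOHEADER"
    "$AUTOCONF"
    "$AUTOMAKE" --add-missing --copy
    # (add libtoolize --copy --force before aclocal if the package uses libtool)

    # c) deliberately do NOT call ./configure here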


Perhaps we could write a spec for how a "good" upstream autogen.sh
script should look, fix the scripts where necessary (so no individual
logic remains in the control files for this part), and push the
patches to the upstreams.

I've already done that for several packages in the OSS-QM project
(which is not meant specifically for Debian, but is completely
distro-agnostic).


cu
-- 
--
 Enrico Weigelt, metux IT service -- http://www.metux.de/

 phone:  +49 36207 519931  email: weig...@metux.de
 mobile: +49 151 27565287  icq:   210169427 skype: nekrad666
--
 Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme
--





Re: 0-day NMUs for RC bugs without activity for 7 days?

2011-05-07 Thread Jakub Wilk

* Raphael Hertzog , 2011-05-07, 09:12:
> > A patch was proposed (#625449) to implement in developers-reference the
> > "DELAYED/0 for upload fixing only release-critical bugs older than 7
> > days, without maintainer activity for 7 days" policy.
> >
> > I don't think that this policy is a good idea. But, since I was one of
> > the drivers of the DEP that resulted in the current NMU policy
> > (http://dep.debian.net/deps/dep1.html), I'm a bit biased, and I
> > thought I would bring this proposed change to the attention of
> > -devel@, so that we can get more feedback.
>
> I wonder how many people made use of that 0-day NMU rule. I have always
> interpreted those "rules" as “go ahead with NMUs, if the maintainer is
> a jerk and complain, we'll be on your side”.

This works both ways. If an NMUer uploaded my package without a delay and
without a good reason[0], I want to be able to yell at him „you are
a jerk (according to Developers Reference)!”

Unfortunately, clueless NMUers do exist. :(

[0] No, 7 days without activity in the bug log is not a good enough
reason. Let's turn our empathy on and face it: we are not bug-fixing
monkeys and 7 days is a very short time frame.


--
Jakub Wilk





Re: glibc: causes segfault in Xorg

2011-05-07 Thread Aurelien Jarno
On Wed, May 04, 2011 at 02:30:35PM +0200, Aurelien Jarno wrote:
> Le 04/05/2011 07:42, Steve M. Robbins a écrit :
> > On Wed, May 04, 2011 at 12:10:48AM -0500, Jonathan Nieder wrote:
> > 
> >> Sounds like http://sourceware.org/bugzilla/show_bug.cgi?id=12518
> >> which is fixed (sort of) by commit 0354e355 (2011-04-01).
> > 
> > Oh my word.  So glibc 2.13 breaks random binaries that happened to
> > incorrectly use memcpy() instead of memmove()?  What's wrong with the
> > glibc developers (and Ulrich Drepper in particular)?
> > 
> > I'm with Linus on this: let's just revert to the old behaviour.  A
> > tiny amount of clock cycles saved isn't worth the instability.
> > 
> > Thanks,
> > -Steve
> > 
> > P.S.  I tried rebuilding glibc myself locally, but gcc also segfaults
> > in the process :-(
> > 
> 
> Are you sure it is related? Which gcc version are you using?
> Do you have a backtrace pointing to the same issue?
> 
> I have been using this libc version for two months (on a CPU with the
> ssse3 instruction set), and it is also used by other distributions, so I
> find it strange that it breaks something as common as gcc. For Xorg it
> could be due to a difference in configuration; that's why the problem
> stayed unnoticed.
> 

Any news about that? Which GCC version is affected? Can you please send
us the backtrace?

-- 
Aurelien Jarno  GPG: 1024D/F1BCDB73
aurel...@aurel32.net http://www.aurel32.net





Re: Bug#625865: ITP: ocportal -- ocPortal is a Content Management System for building and maintaining a dynamic website

2011-05-07 Thread George Danchev
On Saturday 07 May 2011 09:41:34 Raphael Hertzog wrote:
> Hi,

Hi,

> On Fri, 06 May 2011, George Danchev wrote:
> > * writing a meaningful ITP helps to grab attention, especially if there
> > are multiple alternatives. Prove your point (ref: I'm upstream and I
> > want to maintain it, doesn't magically buy you a slot into the archive)
> 
> There's nothing to buy... only people offering to maintain packages in
> Debian. But we should certainly not turn away upstreams who are willing to
> maintain the package in Debian.
>
> In fact I want more upstream involved in Debian!

I didn't write exactly that. You simply twisted the meaning of what I wrote.
Please re-read; the keyword is *magically*.

> (Unless someone does a serious review and has enough credit to convince
> many people that the software is crap and would really be a big burden)

We will accumulate tons of PHP CMSes that way, which doesn't seem to scale. In
the case of multiple alternatives, I'd prefer inclusion only if enough
arguments exist that it is better than the ones already included.

> > * writing lengthy rebuttals for well-known facts from the past is quite
> > unlikely; people have more important things to do.
> 
> We're not speaking of lengthy rebuttals. I agree with Tshepang that the
> answers were rather aggressive when you consider that you speak with
> someone who is starting in the Debian community.
> 
> Something like this would have perfectly done the job:
> "We already have many PHP CMS in the archive, what does this one offer
> that the other don't? Also PHP software tends to have a bad security track
> record, is ocPortal any better in that regard?"

That would have been better. I agree.

> > * recognize it when someone says that chances are high you are
> > about to be wasting your own time packaging $something.
> 
> Everybody is free to do what they want with their own time, so you should
> certainly not say anyone that they are wasting their time. If you believe
> they are, you can certainly hint at better alternatives and let people
> see by themselves if they wish to spend their time differently now that
> they know of a possible alternative.

Okay, I just gave a hint from my side; let's see what happens.

-- 
pub 4096R/0E4BD0AB 





Re: 0-day NMUs for RC bugs without activity for 7 days?

2011-05-07 Thread Raphael Hertzog
On Fri, 06 May 2011, Lucas Nussbaum wrote:
> A patch was proposed (#625449) to implement in developers-reference the
> "DELAYED/0 for upload fixing only release-critical bugs older than 7 days,
> without maintainer activity for 7 days" policy.
> 
> I don't think that this policy is a good idea. But, since I was one of the
> drivers of the DEP that resulted in the current NMU policy
> (http://dep.debian.net/deps/dep1.html), I'm a bit biased, and I thought I
> would bring this proposed change to the attention of -devel@, so that we
> can get more feedback.

I wonder how many people made use of that 0-day NMU rule. I have always
interpreted those "rules" as “go ahead with NMUs, if the maintainer is a
jerk and complain, we'll be on your side”.

In other words, I think that the release team wants to encourage NMUs that
fix RC bugs rather than to save 2 more days in the usual NMU process.

So I believe the change is not very relevant. But I have no reason to
object.

That said, I truly believe that RC bugs should be treated as high-priority
issues by maintainers and that we should have the required information in
the bug log ("I'll take care of it at ", "I have no idea how
to fix this, any help welcome" + tag help, etc.).

Cheers,
-- 
Raphaël Hertzog ◈ Debian Developer

Follow my Debian News ▶ http://RaphaelHertzog.com (English)
  ▶ http://RaphaelHertzog.fr (Français)

