Bug#374029: Fixing the inconsistent and unhelpful Build-Depends/Conflicts definition

2006-06-16 Thread Goswin Brederlow
Package: debian-policy
Severity: normal

Hi,

the current use and definition of Build-Depends/Conflicts[-Indep] in
policy 7.6 don't match. Both the use and the definition also greatly
reduce the usefulness of these fields. This issue has come up again
and again over the last few years and nothing has been done about it.
I hope this proposal provides an elegant and non-disruptive way out so
we can finally do something about it.


Currently policy reads:

| 7.6 Relationships between source and binary packages -
| Build-Depends, Build-Depends-Indep, Build-Conflicts,
| Build-Conflicts-Indep
|
| Source packages that require certain binary packages to be installed
| or absent at the time of building the package can declare
| relationships to those binary packages.
|
| This is done using the Build-Depends, Build-Depends-Indep,
| Build-Conflicts and Build-Conflicts-Indep control file fields.
|
| Build-dependencies on "build-essential" binary packages can be
| omitted. Please see Package relationships, Section 4.2 for more
| information.
|
| The dependencies and conflicts they define must be satisfied (as
| defined earlier for binary packages) in order to invoke the targets in
| debian/rules, as follows:[42]
|
| Build-Depends, Build-Conflicts
|
| The Build-Depends and Build-Conflicts fields must be satisfied
| when any of the following targets is invoked: build, clean,
| binary, binary-arch, build-arch, build-indep and binary-indep.

This comes down to: Build-Depends always has to be installed. Buildds
always, and only, install Build-Depends.

| Build-Depends-Indep, Build-Conflicts-Indep
|
| The Build-Depends-Indep and Build-Conflicts-Indep fields must be
| satisfied when any of the following targets is invoked: build,
| build-indep, binary and binary-indep.

But buildds do call the build target (via dpkg-buildpackage) and
don't honor Build-Depends/Conflicts-Indep. And since build calls
build-indep, that means anything needed to build the
architecture-independent part has to be included in Build-Depends.
This makes Build-Depends-Indep quite useless.

[Side note: Buildds/dpkg-buildpackage have no robust way of telling
whether the optional build-arch target exists, and so must call
build. This is wasteful in terms of both build dependencies and build
time.]


Proposal:
---------

Two new fields are introduced:

Build-Depends-Arch, Build-Conflicts-Arch

The Build-Depends-Arch and Build-Conflicts-Arch fields must be
satisfied when any of the following targets is invoked:
build-arch, binary-arch.

The existence of either of the two makes build-arch mandatory.


The old fields change their meaning:

Build-Depends, Build-Conflicts

The Build-Depends and Build-Conflicts fields must be satisfied
when any target is invoked.

Build-Depends-Indep, Build-Conflicts-Indep

The Build-Depends-Indep and Build-Conflicts-Indep fields must be
satisfied when any of the following targets is invoked:
build-indep, binary-indep.

The existence of either of the two makes build-indep mandatory.

The use of Build-Depends/Conflicts-Arch/Indep is optional, but they
should be used in packages mixing architecture "all" and "any". If
any of them is omitted, the respective Build-Depends/Conflicts field
must already be sufficient on its own.
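
As an illustration, a source package mixing architecture "all" and
"any" could then declare (the field values are invented for the
example):

  Source: foo
  Build-Depends: debhelper
  Build-Depends-Arch: libbar-dev
  Build-Depends-Indep: python, docbook-utils

A buildd would install Build-Depends plus Build-Depends-Arch and call
build-arch; only a build of the architecture-independent packages
would additionally need Build-Depends-Indep and build-indep.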

### End of Proposal ###



Why is this proposal better than others that have failed before?

- Non-disruptive: current buildd behaviour will continue to build all
  existing packages.

- Packages will not instantly have RC bugs.

- Simple to implement.
  + Trivial change in dpkg for the new fields.
  + dpkg-checkbuilddeps has to parse three fields (two with the -B
    option) instead of two (one).
  + sbuild needs the same change.
  + Simple change for 'apt-get build-dep'.

- Buildds/dpkg-buildpackage can use the build-arch target (see the
  sketch after this list)
  + reduces the Build-Depends field and the time spent installing it
  + build-indep is no longer called, which reduces compile time and
    disk space

- Build-Depends/Conflicts-Indep becomes useful, and build-indep
  becomes useful.

  Large packages no longer have to build the architecture-independent
  stuff in the binary-indep target just to avoid building it on the
  buildds.
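
A minimal sketch of the buildd side of that check (grepping
debian/control is illustrative; a real implementation would use the
parsed source stanza):

    # Either -Arch field makes build-arch mandatory, so the target
    # can be invoked without guessing whether it exists:
    if grep -Eq '^Build-(Depends|Conflicts)-Arch:' debian/control; then
        debian/rules build-arch
    else
        debian/rules build
    fi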

MfG
Goswin

-- System Information:
Debian Release: 3.1
Architecture: amd64 (x86_64)
Kernel: Linux 2.6.16-rc4-xen
Locale: LANG=C, LC_CTYPE=C (charmap=ANSI_X3.4-1968)





Bug#204422: ITP: debix -- Live filesystem creation tool

2003-08-07 Thread Goswin Brederlow
Package: wnpp
Version: unavailable; reported 2003-08-07
Severity: wishlist

  Package name: debix
  Version : 0.1
  Upstream Author : Goswin von Brederlow <[EMAIL PROTECTED]>
  License : GPL
  Description : Live filesystem creation tool
  Sponsor : wanted

Debix is a collection of scripts to create live filesystems. Several
flavours are planned:

- Make a live filesystem image from any existing Linux system
  Apart from a special initrd, a plain image of the existing system is
  made without changes. The image on CD is made seemingly writable via
  LVM2 snapshots by the initrd, and then the normal init is started.

- Pure live filesystem like Knoppix (+ zero-reboot installation)
  The difference to Knoppix would be customizable size, being a pure
  Debian system, and the possibility to migrate the live filesystem to
  harddisk on-the-fly to get a running Debian system (with the
  drawback that the partitioning scheme is mostly fixed; using online
  ext2/3 resize patches could solve that).

- Make a live filesystem with boot-floppies or debian-installer
  Console and X subflavours included. The advantage over the normal
  CDs would be better autodetection and access to www, IRC and local
  docs during installation (one could read the installation docs on
  www.debian.org in galeon while running boot-floppies in an xterm).
  A mixture of Knoppix and installer.

A sponsor should be versed in /bin/sh and interested in creating live
filesystems. Having a CD-RW or DVD-RW burner would be a big plus, but
bochs or vmware will do for testing.

The sources aren't debianized yet, but I have an example CD image made
from a normal woody system (flavour 1 from above) at
rsync://mrvn.homeip.net/images/

MfG
Goswin

-- System Information:
Debian Release: testing/unstable
Architecture: i386
Kernel: Linux dual 2.4.21-ac4 #1 SMP Sat Jul 5 17:53:13 CEST 2003 i686
Locale: LANG=C, LC_CTYPE=de_DE





Re: NM non-process

2003-08-06 Thread Goswin Brederlow
Tollef Fog Heen <[EMAIL PROTECTED]> writes:

> * Adam Majer 
> 
> | My definition of MIA for DD: Doesn't fix release critical bugs for
> | his/her package(s) within a week or two and doesn't respond to
> | direct emails about those bugs.
> 
> I guess I'm MIA, then, since I have an RC bug which is 156 days (or
> so) old, which is waiting for upstream to rewrite the program.

Tagged forwarded?

MfG
Goswin




Re: NM non-process

2003-08-06 Thread Goswin Brederlow
Andreas Barth <[EMAIL PROTECTED]> writes:

> * Goswin Brederlow ([EMAIL PROTECTED]) [030806 05:35]:
> > Kalle Kivimaa <[EMAIL PROTECTED]> writes:
> > > BTW, has anybody done any research into what types of package
> > > maintainers tend to go MIA? I would be especially interested in a
> > > percentage of "old" style DD's, DD's who have gone through the NM
> > > process, people going MIA while in the NM queue, and people going MIA
> > > without ever even entering the NM queue. I'll try to do the statistics
> > > myself if nobody has done it before.
> 
> > And how many NMs go MIA because they still stuck in the NM queue after
> > years? Should we ask them? :)
> 
> Many. While cleaning up the ITPs/RFPs I asked many packagers about the
> status of their package and got quite often a "package is more or less
> ready, but I'm waiting of DAM-approval because I don't want the hassle
> of another sponsored package", or, what's worse a "package was ok some
> time ago, but as Debian doesn't want me I stopped fixing it".
> 
> Sad.

Until this morning I was one of those NMs not wanting the hassle of a
sponsor, but now I had to change my maintainer email and fix some RC
bugs, so I did bully someone into sponsoring it.

You wait 5 months for the DAM and should thus become a DD any day
now. Would you really go hunting for a sponsor again? Now that I did,
I'll probably become a DD tomorrow, so it was a waste of
time. .oO( Damn, now I've jinxed becoming a DD again ).

MfG
Goswin




Re: Request for maintainer

2003-08-05 Thread Goswin Brederlow
[EMAIL PROTECTED] writes:

> Hi!
> 
> Subject: RFP: GRubik -- A 3D Rubik cube game
> Package: wnpp
> Version: N/A; reported 2003-08-06
> Severity: wishlist
> 
> * Package name: GRubik
>   Version : 1.16
>   Upstream Author : John Darrington <[EMAIL PROTECTED]>
> * URL : http://www.freesoftware.fsf.org/rubik/grubik.html
> * License : GPL-2
>   Description : A 3D Rubik cube game
> 
> This is an OpenGL / GTK+ package which I have written.  I've had
> positive feedback from the people who've looked at it so far. It's fully
> internationalised, has a number (5?) localisations.
> 
> I'm not a DD, and am not currently able to commit to being one.
> However if a DD wants to become a maintainer of this package I will
> co-operate with him/her (eg upstream patches where needed). 
> 
> I have created an unofficial deb for this software, and you can get it
> from my website http://darrington.wattle.id.au/deb if you want to use
> that as a starting point.  

Maybe you should ask on the new-maintainer list for some new
maintainer that's interested and needs a package to maintain. Just an
idea.

MfG
Goswin




Re: NM non-process

2003-08-05 Thread Goswin Brederlow
Kalle Kivimaa <[EMAIL PROTECTED]> writes:

> Roland Mas <[EMAIL PROTECTED]> writes:
> > with.  The MIA problem is significant enough that NM might be the only
> > way to tackle with it seriously.  That means taking time to examine
> > applications.
> 
> BTW, has anybody done any research into what types of package
> maintainers tend to go MIA? I would be especially interested in a
> percentage of "old" style DD's, DD's who have gone through the NM
> process, people going MIA while in the NM queue, and people going MIA
> without ever even entering the NM queue. I'll try to do the statistics
> myself if nobody has done it before.

And how many NMs go MIA because they are still stuck in the NM queue
after years? Should we ask them? :)

MfG
Goswin




Re: NM non-process

2003-08-05 Thread Goswin Brederlow
[EMAIL PROTECTED] (Nathanael Nerode) writes:

> Steve Langasek said:
> >I don't think it irrelevant that those clamouring loudest for the DPL
> >to do something to fix the situation are people who don't actually have
> >a say in the outcome of DPL elections.  While I'm not happy to see such
> >long DAM wait times, I'm also not volunteering to take on the thankless
> >job myself.
> 
> No, it's not irrelevant.  It means precisely that Debian is in danger of 
> becoming an unresponsive, closed group which does not admit new people.  
> If this continues for, say, 2 more years, I would expect a new Project 
> to be formed, replicating what Debian is doing, but admitting new 
> people.  I'd probably be right there starting it.
> 
> That would be a stupid waste of effort, so I hope it turns out to be 
> unnecessary.

I know of several DDs and non-DDs thinking about creating a Debian2
(or whatever it would be named) project because of this and other
lack-of-response problems, and the group is growing. The danger is
already there and should not be ignored.

MfG
Goswin




Re: NM non-process

2003-08-05 Thread Goswin Brederlow
Steve Langasek <[EMAIL PROTECTED]> writes:

> On Mon, Jul 21, 2003 at 10:17:24AM -0400, Nathanael Nerode wrote:
> 
> > Martin Schulze is listed as the other DAM member.  He's also the Press 
> > Contact, so I certainly hope he has good communication skills!
> 
> And the Stable Release Manager, and a member of the Security Team, and
> a member of debian-admin.  What makes you think he would give a higher
> priority to DAM work than James currently does?
> 
> Actually, given that Joey is already listed as part of DAM, and isn't
> actively involved, doesn't this suggest he already gives a lower
> priority to this work?

As far as I heard, Martin is only there in case the DAM dies. He won't
create or delete accounts on his own while the DAM is still breathing
(or thought to be).

MfG
Goswin




Re: Excessive wait for DAM - something needs to be done

2003-08-05 Thread Goswin Brederlow
"Dwayne C. Litzenberger" <[EMAIL PROTECTED]> writes:

> On Sun, Jul 13, 2003 at 01:09:47PM -0600, Jamin W. Collins wrote:
> > 2001-01-24 - Dwayne Litzenberger <[EMAIL PROTECTED]>
> >http://nm.debian.org/nmstatus.php?email=dlitz%40dlitz.net
> 
> For the record, I'm still interested in becoming a DD.
> 
> Nice to see this finally being addressed!  Thanks!

http://nm.debian.org/nmstatus.php?email=brederlo%40informatik.uni-tuebingen.de

Received application: 2000-08-25

Still waiting too.

MfG
Goswin




OT (Re: Excessive wait for DAM - something needs to be done)

2003-08-05 Thread Goswin Brederlow
Martin Michlmayr - Debian Project Leader <[EMAIL PROTECTED]> writes:

> * Jamin W. Collins <[EMAIL PROTECTED]> [2003-07-21 18:52]:
> > Perhaps that is because only the DPL can appoint them (as far as I can
> > tell) and we haven't seen a request from you for them.
> 
> Request for help are usually very ineffective; examples: apt's
> maintainer asked for help and didn't get any (with the exception that
> mdz has started more apt work, but he worked on apt before so
> effectively there are no new volunteers), Bdale is looking for a
> co-maintainer of ntp as is md for mutt.

Mails with suggestions and offers to help (and I especially mean apt
here :( ) also go unanswered, or patches for bugs or improvements just
get overlooked and never included.

Shit happens. Don't stop trying.

MfG
Goswin




Re: A success story with apt and rsync

2003-07-07 Thread Goswin Brederlow
Michael Karcher <[EMAIL PROTECTED]> writes:

> On Sun, Jul 06, 2003 at 01:29:06AM +0200, Andrew Suffield wrote:
> > It should put them in the package in the order they came from
> > readdir(), which will depend on the filesystem. This is normally the
> > order in which they were created,
> As long as the file system uses an inefficient approach for directories like
> the ext2/ext3 linked lists. If directories are hash tables (like on
> reiserfs) even creating another file in the same directory may totally mess
> up the order.
> 
> Michael Karcher

ext2/ext3 also have hashed dirs if you configure them that way.

MfG
Goswin




Re: Resolvconf -- a package to manage /etc/resolv.conf

2003-07-07 Thread Goswin Brederlow
Thomas Hood <[EMAIL PROTECTED]> writes:

> On Sun, 2003-07-06 at 01:00, Goswin Brederlow wrote:
> > You should think of a mechanism for daemons to get notified about
> > changes in resolv.conf.
> 
> There is already such a mechanism.  See below.
> 
> > Like providing a function to register a script
> > and a list of arguments (like the PID of the program to
> > notify). Whenever the resolv.conf changes all currently registered
> > scripts would be called with their respective arguments.
> > 
> > The simplest form would be:
> > 
> > resolv.conf-register /etc/init.d/squid reload
> > 
> > That would make squid to reload its config each time a nameserver is
> > added or removed.
> 
> Currently, scripts in /etc/resolvconf/update.d/ get run when
> resolver information changes.  So, would it suffice to create
> /etc/resolvconf/update.d/squid containing the following?
> #!/bin/sh
> /etc/init.d/squid reload
> 
> --
> Thomas Hood

Great.

MfG
Goswin




Re: A success story with apt and rsync

2003-07-05 Thread Goswin Brederlow
Koblinger Egmont <[EMAIL PROTECTED]> writes:

> Hi,
> 
> From time to time the question arises on different forums whether it is
> possible to efficiently use rsync with apt-get. Recently there has been a
> thread here on debian-devel and it was also mentioned in Debian Weekly News
> June 24th, 2003. However, I only saw different small parts of a huge and
> complex problem set discussed at different places, I haven't find an
> overview of the whole situation anywhere.
...

I worked on an rsync patch for apt-get some years ago and raised some
design questions, some the same as you did in the deleted parts. Let's
summarize what I still remember:

1. debs are gzipped, so any change (even a change in timestamps)
results in a different gzip. The rsyncable patch for gzip helps a lot
there, so let's consider that fixed.

2. Most of the time you have no old file to rsync against. Only
mirrors will have an old file, and they already use rsync.

3. Rsyncing against the previous version is only possible via some
dirty hack as an apt module. apt would have to be changed to give
modules access to its cache structure, or at least to pass any
previous version as an argument. Some mirror scripts already use older
versions as templates for new versions.

4. (and this is the knockout) rsync support for apt-get is NOT
WANTED. rsync uses too many resources (CPU and, more relevantly, IO)
on the server side, and widespread use of rsync for apt-get would
choke the rsync mirrors and do more harm than good.

> conclusion
> --
> 
> The good news is that it is working perfectly.
> 
> The bad news is that you can't hack it on your home computer as long as your
> distribution doesn't provide rsync-friendly packages. Maybe one could set up
> a public rsync server with high bandwidth that keeps syncing the official
> packages and repacks them with rsync-friendly gzip/zlib and sorting the
> files.

There is a growing lobby for using gzip --rsyncable for Debian
packages by default. It's coming.
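
For reference, repacking a deb's data member with the rsync-friendly
gzip looks like this (--rsyncable comes from the Debian patch to
gzip, not from stock upstream):

    gzip --rsyncable -9 < data.tar > data.tar.gz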


So what can be done?


Doogie is thinking about extending the BitTorrent protocol for use as
an apt-get method. I talked with him on IRC about some design ideas,
and so far it looks really good if he can get some mirrors to host it.

The BitTorrent protocol organises multiple downloaders so that they
also upload to each other, thereby reducing the traffic on the main
server. The extension of the protocol should also utilise http/ftp
mirrors as sources for the files, spreading the load evenly over
multiple servers.

BitTorrent calculates a hash for each block of a file, very similar to
what rsync needs to work. Via another small extension, rolling
checksums for each block could be included in the protocol and a
client-side rsync could be done. (I heard this variant of rsync is
supposedly patented in the US but never saw real proof of it.)


All together I think an extended BitTorrent module for apt-get is by
far the better solution, but it will take some more time and design
work before it can be implemented.

MfG
Goswin




Re: Debconf and XFree86 X servers

2003-07-05 Thread Goswin Brederlow
Branden Robinson <[EMAIL PROTECTED]> writes:

> [Please direct any XFree86-specific followup to debian-x.]
> 
> On Sat, Jul 05, 2003 at 08:46:00AM -0400, Theodore Ts'o wrote:
> > Yet another reasons for wanting to decouple installation and
> > configuration is if some hardware company (such as VA^H^H Emperor
> > Linux) wishes to ship Debian pre-installed on the system.  In that
> > case, installation happens at the factory, and not when the user
> > receives it in his/her hot little hands.

So they should just provide a "setup.sh" script that calls
dpkg-reconfigure for the relevant packages again.

Otherwise just type "dpkg-reconfigure --all" and spend hours
configuring your system as much as you like.

MfG
Goswin




Re: 469 packages still using dh_undocumented, check if one is yours

2003-07-05 Thread Goswin Brederlow
"Artur R. Czechowski" <[EMAIL PROTECTED]> writes:

> On Sat, Jul 05, 2003 at 11:26:44AM -0400, Joey Hess wrote:
> > Goswin Brederlow wrote:
> > > Could the dh_undocumented programm allways fail with an error "Don't
> > > use me" as the next step? That way all new uploads will be forced to
> > > care.
> > No. Breaking 400+ packages so our uses cannot build them from source is
> > unacceptable.
> What's about dh_undocumented looking like:
> --
> #!/bin/bash
> if [ "$FORCE_UNDOCUMENTED" = 1 ]; then
>   echo "You are still using dh_undocumented, which is obsolete."
>   echo "Stop it."
> else
>   echo "You are using the obsolete dh_undocumented in your debian/rules."
>   echo "Please stop it and prepare a manpage for your package."
>   echo "If you really want to build this package, read (pointer to"
>   echo "documentation which explains how to set FORCE_UNDOCUMENTED or how"
>   echo "to remove this from debian/rules and why dh_undocumented is bad)."
>   exit 1
> fi
> --
> 
> Pro:
>   - it is possible to build package with buildd or any other autobuilder
>   - human building package can force it to build too
>   - it requires an interaction from developer, but this interaction is not
> time consuming
> 
> This is a good compromise between technical and social means to achieve a 
> goal.

I would have reversed it: use "$FAIL_UNDOCUMENTED" and have the
autobuilders set that.

Unknowing users aren't bothered and old sources still compile, but new
uploads are forced to handle the issue.
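
A minimal sketch of that reversed check, assuming the autobuilders
export FAIL_UNDOCUMENTED=1 in their build environment:

    #!/bin/sh
    # Default: warn but succeed, so users building from source are
    # not bothered; autobuilders opt in to the hard failure.
    if [ "$FAIL_UNDOCUMENTED" = 1 ]; then
        echo "dh_undocumented is obsolete; refusing to build." >&2
        exit 1
    fi
    echo "warning: dh_undocumented is obsolete, write a real manpage." >&2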

But Joey has stopped reading this, so nothing will happen. EOD.

MfG
Goswin




Re: Resolvconf -- a package to manage /etc/resolv.conf

2003-07-05 Thread Goswin Brederlow
Thomas Hood <[EMAIL PROTECTED]> writes:

> Summary
> ~~~
> Resolvconf is a proposed standard framework for updating the
> system's information about currently available nameservers.
> 
> Most importantly, it manages /etc/resolv.conf , but it does 
> a bit more than that.

You should think of a mechanism for daemons to get notified about
changes in resolv.conf, like providing a function to register a script
and a list of arguments (such as the PID of the program to
notify). Whenever resolv.conf changes, all currently registered
scripts would be called with their respective arguments.

The simplest form would be:

resolv.conf-register /etc/init.d/squid reload

That would make squid reload its config each time a nameserver is
added or removed.

MfG,
Goswin




Re: 469 packages still using dh_undocumented, check if one is yours

2003-07-05 Thread Goswin Brederlow
Bernd Eckenfels <[EMAIL PROTECTED]> writes:

> On Sat, Jul 05, 2003 at 04:43:56PM +0200, Goswin Brederlow wrote:
> > Could the dh_undocumented programm allways fail with an error "Don't
> > use me" as the next step? That way all new uploads will be forced to
> > care.
> 
> this will still create fail to build bugs for no good reason.

Some sort of "Hey you, your doing something wrong but I will let is
pass this time" feedback from autobuilder would be nice together with
keeping such nearly broken packages out of testing (to give some
incentive for mainatiner to fix the problem).

But that probably goes too far.

MfG
Goswin




Re: 469 packages still using dh_undocumented, check if one is yours

2003-07-05 Thread Goswin Brederlow
Joey Hess <[EMAIL PROTECTED]> writes:

> Artur R. Czechowski wrote:
> > OTOH, maybe dh_undocumented should be removed from debhelper with prior
> > notice? "This program does nothing and should no longer be used."
> 
> As a rule I try to avoid causing less than 469 FTBFS bugs with any given
> change I make to debhelper. I have removed programs when as many as
> three packages still used them, after appropriate bug reports and a two
> month grace period.

Could the dh_undocumented program always fail with an error "Don't
use me" as the next step? That way all new uploads would be forced to
care.

MfG
Goswin




469 packages still using dh_undocumented, check if one is yours

2003-07-03 Thread Goswin Brederlow
Hi,

I came across some sources still using dh_undocumented, so I did a
quick search through sid's *.diff.gz files. Here is the result:

find -name "*diff.gz" | xargs zgrep ':+[[:space:]]*dh_undocumented' \
| cut -f 1 -d"_" | sort -u | cut -f6- -d"/"

./dists/potato/main/source/devel/fda
./dists/potato/main/source/libs/libgd-gif
./dists/potato/main/source/otherosfs/lpkg
./dists/potato/main/source/web/cern-httpd

3dwm

acfax acorn-fdisk adns adolc affiche alsaplayer am-utils amrita anthy
antiword apcupsd-devel apcupsd aprsd aprsdigi argus-client ascdc asd4
aspseek at-spi atlas august ax25-apps ax25-tools

bayonne bbmail bbpal bbsload bbtime bind binutils-avr bird bitcollider
blackbook bnc bnlib bonobo-activation bonobo-conf bonobo-config bonobo
bookview brahms bwbar

cam camstream canna capi4hylafax catalog cdrtoaster cfengine cftp
chasen checkmp3 chipmunk-log chrpath clanbomber clips codebreaker
console-tools cooledit coriander corkscrew courier cpcieject crack ctn
cutils cyrus-sasl

dancer-services db2 db3 db4.0 dcgui dcl dctc dds2tar dia2sql directfb
directory-administrator dotconf drscheme

eb eblook elastic electric elvis epwutil erlang ethstats evolution
ewipe ezpublish

fcron fidelio firebird flink fluxbox fnord fort freewnn ftape-tools
funny-manpages

gaby gasql gato gbase gbiff gcc-2.95 gcc-2.96 gcc-3.0 gcc-3.2 gcc-3.3
gcc-avr gcc-snapshot gconf-editor gconf gconf2 gcrontab gdis gdkxft
geda-doc geda-symbols gerris gimp1.2 gkrellm-newsticker
gkrellm-reminder glade-2 glbiff glib2.0 gnet gnome-db gnome-db2
gnome-doc-tools gnome-gv gnome-libs gnome-think gnome-vfs gnome-vfs2
gnomesword gnu-smalltalk gnudip gnumail goo gpa gpgme gpgme0.4 gpm
gpppkill gpsdrive gscanbus gstalker gtk+2.0 gtk-menu gtkglarea
gtkmathview gtkwave guile-core gwave

hamfax happy hesiod hmake hns2 htmlheadline hubcot hwtools

i2e ibcs ic35link icebreaker iceconf icom inews intltool iog ipv6calc
ipxripd ircp

jabber jack-audio-connection-kit jags jailtool jenova jfbterm jigdo
jless jlint jpilot junit-freenet

kakasi kdrill kfocus kon2 krb4 krb5 ksocrat

lablgtk lam lesstif1-1 lineakconfig linpqa lirc lm-batmon lm-sensors
libapache-mod-dav libax25 libbit-vector-perl libcap libcommoncpp2
libctl libdate-calc-perl libdbi-perl libgda2 libglpng libgnomedb
libgpio libjconv liblog-agent-logger-perl liblog-agent-perl
liblog-agent-rotate-perl libmng libnet-daemon-perl libnet-rawip-perl
libplrpc-perl libpod-pom-perl libprelude libproc-process-perl
libprpc-perl libquicktime librep libsdl-erlang libsigc++-1.1
libsigc++-1.2 libsigc++ libsmi libstroke libtabe libunicode libusb
libxbase

mailscanner makedev manderlbot mbrowse mdk medusa meshio mew mgetty
midentd mii-diag mingw32-binutils mingw32 mixer.app mmenu mnogosearch
mobilemesh mondo mosix motion mova mozilla-snapshot mozilla mp3blaster
mp3info mpatrol mped mpqc mtools mtrack multi-gnome-terminal
multiticker murasaki muse mwavem

namazu2 ncpfs net-snmp netcast netjuke nictools-nopci nmap notifyme
nte nvi-m17n

oaf obexftp ocaml octave2.1 oftc-hybrid oo2c openafs-krb5 openafs
opendchub opengate openh323gk openmash openmosix opensp openssl
overkill

pam pango1.0 parted passivetex pccts pdnsd peacock phaseshift phototk
pike pike7.2 pike7.4 pilot-link pimppa pinball pkf plum pong poppassd
postilion powertweak ppxp-applet ppxp progsreiserfs pronto ptex-bin
pybliographer pymol python-4suite python-stats python-xml-0.6
python2.1 python2.2 python2.3

qemacs qhull qm qstat quadra quota

radiusclient radiusd-livingston rblcheck rdtool read-edid
realtimebattle remem rfb rplay rubyunit rumba-manifold rumba-utils
rumbagui rumbaview

samhain sanitizer scandetd scanmail scite scrollkeeper scsitools
search-ccsb sg-utils sg3-utils shadow shapetools sidplay-base skkfep
smail sml-mode sms-pl sn snort soap-lite socks4-server sonicmail
sortmail soup soup2 sourcenav spass speech-tools speedy-cgi-perl
spidermonkey spong squidguard squidtaild stegdetect stopafter superd
sympa syscalltrack

tclex tclx8.2 tclx8.3 tcpquota tdb tetrinetx tex-guy texfam tik
tintin++ titrax tix8.1 tkisem tkpaint tkvnc tolua torch-examples
tptime tramp tsocks

ucd-snmp ucspi-proxy umodpack unixodbc usbmgr uw-imap

vdkxdb vdkxdb2 verilog vflib2 vflib3 vgagamespack vipec vtcl

w3cam webalizer webbase wine wings3d wmcdplay wmmon wmtime wwl

xbvl xclass xdb xdvik-ja xemacs21 xevil xfce xfree86 xfree86v3 xfreecd
xfs-xtt xirssi xitalk xkbset xlife xmacro xmms xnc xotcl xpa xpdf xpm
xracer xscorch xsmc-calc xstroke xsysinfo

zebra zmailer




Re: bugs.debian.org: ChangeLog closes handling should be changed

2002-09-03 Thread Goswin Brederlow
Gerfried Fuchs <[EMAIL PROTECTED]> writes:

> * Brian May <[EMAIL PROTECTED]> [2002-08-29 09:50]:
> > On Tue, Aug 27, 2002 at 08:48:31AM +0200, Gerfried Fuchs wrote:
> >>  What we need is a change here: Bugs should just be closed in unstable.
> >> How to do this?  They should be rather be tagged  than be closed
> >> by an upload to unstable.  Not unconditionally, of course.  The version
>^^^
> >> of the bugreport should be compared with the version currently in
>   ^^
> >> testing.  Some sort of algorithm not too complex but able to handle most
>
> >> of the cases shouldn't be too hard to do (yes, I volunteer to help
> >> there).
> > 
> > Then bugs will me marked as sarge, even though they might be bugs
> > specific to unstable.
> 
>  Just to make sure you didn't miss that I thought of that problem
> already, thanks.  Or do you think of that the bugs are there because of
> other packages in unstable?  Well, then the bug might be filed against
> the wrong package.  Which would leave us with a problem -- the version
> is rather meta information in the bugreport and not real useful data, is
> it?  So currently there is no need to change the version field if a bug
> gets reassigned to a different package... *hmmm*  Difficult issue, but
> still no issue that shouldn't be raised/thought about.

What about the following mechanism:

- The dist of the reporter(s) is checked. If it's not in the bug
  report, guess stable.

- When a "closes" is seen in the changelog, the version of the
  uploaded package is noted.

- Reporters using unstable get the normal mail as it is now.

- Reporters on testing get a mail that a bugfix was uploaded to
  unstable and that they might want to try it out. Once the package
  closing the bug (or any later version) moves to testing, they get a
  mail saying that the bug is fixed in testing.

- The same for stable, but with three mails: an "is in unstable", an
  "is in testing" and an "is closed" mail.

- The BTS tags bugs as [closed unstable], [closed testing] or
  [closed] respectively. The script moving things to testing and
  stable tells the BTS about the version it moves, and the BTS changes
  the tags according to a version compare.

- Bugs with the tag [sid] could get closed immediately with an upload
  to sid. The [sid] or [sarge] tags could be adjusted by the BTS when
  a new version is moved. If bar is moved from sid to sarge, all [sid]
  tags become [sid][sarge]. A [closed unstable] would replace the
  [sid] tag but leave the [sarge] tag.
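
The version compare itself is cheap; a sketch of what the BTS could
run when a package migrates (the variable and helper names are made
up):

    # fixed_ver: version whose changelog closed the bug
    # testing_ver: version that just entered testing
    if dpkg --compare-versions "$testing_ver" ge "$fixed_ver"; then
        retag_bug "closed testing"   # hypothetical BTS helper
        mail_testing_reporters       # hypothetical notification step
    fi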


MfG
Goswin




Re: Dependencies on -dev packages

2002-09-03 Thread Goswin Brederlow
Stephen Zander <[EMAIL PROTECTED]> writes:

> What is the thinking behind always requiring libfoo-dev to depend on
> libbar-dev when libfoo depends on libbar?  I understand the need when
> /usr/include/foo.h contains
> 
>   #include <bar.h>
> 
> but if libfoo opaquely wraps libbar, why have libfoo-dev depend on
> libbar-dev?

Do you have a case where there's no #include <bar.h>? Then you would
just build-depend on libbar-dev.

If it works 100% without it, there is no need to depend.
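
A sketch of the opaque-wrapper case (the package names are invented):
libbar is needed only at build time, so only the source stanza
mentions it:

    Source: libfoo
    Build-Depends: debhelper, libbar-dev

    Package: libfoo-dev
    Depends: libfoo0 (= ${Source-Version})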

MfG
Goswin




Re: Dumb little utilities

2002-09-03 Thread Goswin Brederlow
"J. Scott Edwards" <[EMAIL PROTECTED]> writes:

> On Wed Aug 28 11:37:29 2002 Allan Wind wrote:
> > On 2002-08-27 21:59:28, J. Scott Edwards wrote:
> > > file slicer (that can slice up a file into different size chunks).
> >
> > dd?
> 
> Yea, that would do it, slightly more cumbersome to use.

split?

MfG
Goswin




Re: Improper NMU (Re: NMU for libquota-perl)

2002-09-03 Thread Goswin Brederlow
Martin Wheeler <[EMAIL PROTECTED]> writes:

> On Mon, 2 Sep 2002, Sebastian Rittau wrote:
> 
> >  If you're not able to maintain
> > your packages properly and in a timely manner, and your holding up a
> > major part of the distribution, it's your fault.
> 
> I'm interested in this.
> Individuals differ greatly in their working methods.
> So what is considered: "in a timely manner"?  (Seriously.)
> 
> I have dropped out of several co-operative projects in the past simply
> because I'm a relatively "slow" worker -- i.e. I don't tend to give three
> or four fast responses to collective development _in the same day_.
> I find that sort of speed of progression highly de-motivating.
> Am I alone in this?  (I might take up to a week to respond to any
> particular event -- this is normal for _me_.)
> 
> I'm curious as to what the expectations of other debian-developers are --
> for example, does not being online 24/7 -- or even once a day --
> effectively create a barrier to participation in collective development
> projects in debian?  (Theory would say: No.  But what does _practical_
> experience dictate?)

That completely depends on the problem at hand. For example, if the
group is fixing a security bug, responding a week later probably
doesn't help and you would be left out.

If you discuss where to go next, what parts to improve or how to
extend the interface of a library or such, discussing it for a month
might be normal and needed to get a proper consensus.

> So I guess my question really is: what is "timely"; and what is
> "untimely"?
> Where is it defined?  By whom?  Does it make sense?

It's decided on a case-by-case basis, so it's full of errors, opinions
and flamewars.

MfG
Goswin




Re: Packages affected by removal of free mp3 players

2002-09-02 Thread Goswin Brederlow
Joe Drew <[EMAIL PROTECTED]> writes:

> If the worst does happen, and we need to remove all mp3 players from
> Debian, many packages will be affected. Most of these are because of

Why no non-free versions?

> their dependency on libsmpeg, which is the SDL MPEG audio and video
> decoder. Others depend on other MP3-playing libraries, such as libmad,
> mpeglib (which is named oddly), etc. (Please let me know if I've
> forgotten some.) A few packages depend on mpg321 as well - mostly
> front-ends.

"un" versions or "dummy" versions could be provided to make the
software work but not the mp3 features. Would be quite stupid for
mpg321 but for ogg it would make sense. People could then get a real
mp3lib from non-free or somewhere else and replace the "un" version
with that without recompile of the debs.

MfG
Goswin




Re: The harden-*flaws packages.

2002-09-02 Thread Goswin Brederlow
Daniel Martin <[EMAIL PROTECTED]> writes:

> Martin Schulze <[EMAIL PROTECTED]> writes:
> 
> Hrm.  The more I think about this the more I wonder if maybe the
> harden-*flaws packages make much sense in stable at all.  If someone
> is apt-get'ing from security.debian.org, they're already replacing
> vulnerable versions with fixed ones.  If someone is updating from a
> point release CD, the same thing applies.  The only case where I can
> see it making sense is with someone following testing with most of
> their packages on hold (they really want a stable system, and only
> upgrade a package when they need to).  Am I missing a scenario?

They should have stable as the distribution with the highest priority
for apt. That includes security updates for stable.

On top of that, the few packages they want to be more current can be
installed from woody or sid. There is no need to keep everything else
on hold; making stable apt's first priority should be enough.

And then they would get security updates.
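
A sketch of the corresponding /etc/apt/preferences (the priorities
are just examples):

    Package: *
    Pin: release a=stable
    Pin-Priority: 900

    Package: *
    Pin: release a=unstable
    Pin-Priority: 50

With that, apt tracks stable by itself, and packages from unstable
are only pulled in when asked for explicitly (e.g. apt-get install
foo/unstable).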

MfG
Goswin




Re: Improper NMU (Re: NMU for libquota-perl)

2002-09-02 Thread Goswin Brederlow
Elie Rosenblum <[EMAIL PROTECTED]> writes:

> On Mon, Sep 02, 2002 at 02:25:30AM +0100, Colin Watson wrote:
> > On Sun, Sep 01, 2002 at 09:19:17PM -0400, Elie Rosenblum wrote:
> > > On Mon, Sep 02, 2002 at 02:16:11AM +0100, Colin Watson wrote:
> > > > Technically it wasn't. The upload is still in the DELAYED queue, which
> > > > is really just a convenient automated way of saying "I'll NMU this
> > > > package in <n> days if I don't hear anything", with the added bonus of
> > > > allowing the maintainer to poke at it and see exactly what would go in
> > > > in the absence of a maintainer upload. I usually explain this when using
> > > > the delayed queue.
> > > 
> > > I assume you also submit a bug.
> > 
> > Quite.
> 
> Would you agree that performing an NMU without a BTS entry is wrong?
> 
> > > Do you generally do this without leaving a bug for a few days first?
> > 
> > In the case of the perl transition I've been given to understand by the
> > actions of other developers that the -devel-announce post on 31st July
> > was enough. Otherwise no.
> 
> I see.
> 
> Well, I disagree with this (as do I believe some others), but only
> in that no NMU should be done until the bug has existed for a few
> days (if nothing else, this addresses the distinct possibility of
> NMUs actually breaking stuff, which has already been brought up in
> this thread). I'm probably not going to convince you of this, any

The bug has existed since 31st July, even if it wasn't formally in the
BTS against your package.

> more than you will convince me that I'm wrong here. I have not,
> however, been hit with this general case...I've been hit with an
> irresponsible maintainer performing an NMU without submitting a
> bug at all, even if it was 5 minutes before he uploaded. This is
> just plain wrong, and something that can cause us really serious
> problems if people start to imagine that it's acceptable - 
> especially since we have little control over which keys can
> successfully upload any given package.

In the case of something as trivial as causing a recompile for a
problem that has been known for some time, the warning given by the
delayed upload should be enough.

Do we really need to mass-file bug reports for this? That's the
alternative to mentioning something like the perl transition on -devel
and then fixing it in a group effort some time afterwards.


You might have a point in general, but not in this case.

Just my 2c of thought; don't blame anyone else.
Goswin




Re: woody CD not bootable with adaptec + scsi-cdrom

2002-09-02 Thread Goswin Brederlow
Andreas Metzler <[EMAIL PROTECTED]> writes:

> Disclaimer: Because I do not work on the debian-cd, bf or installer
> and do not follow the mailing lists regularily, I do not know much
> about this issue and there are probably lots of errors in this mail.
> 
> Goswin Brederlow <[EMAIL PROTECTED]> wrote:
> > I just crasht my system working on libsafe and hat to boot from CD.
> 
> > I the discovered that the woody CD (linuxtag prerelease) doesn't
> > boot. I heard of similar for the real woody release CDs on irc.
> 
> > Can anyone boot the CDs, which one of the set and what hardware?
> > Same if you can't boot.
> 
> Hello,
> Lots of people can, the official CD-image _were_ tested. - I think the
> linuxtag prerelease CD is similar to the official CD1.
> 
> I can boot from CD1, on a PentiumMMX-class machine (SiS 5591/5), an
> iirc 1 year old Athlon 800 (VIA 133) and a 2 month old Duron1200
> (VIA 266A).

Who cares about your CPU?
What CD-ROM? IDE or SCSI? Which controller?


> > Also whats different between potato and woody?
> 
> potato used floppy-emulation, woody _CD1_ uses isolinux(??).

That explains the difference in output. The floppy emulation shows up
when the Adaptec detects the bootable CD-ROM. Or is that unrelated?


> > potato has this multiboot thing and woody not anymore, right? What
> > was wrong with it?  Seems to be more compatible the old way.
> 
> It is not, search the Debian-CD mailing-list.
> 
> IIRC: Basically Microsoft switched the CD-boot method they used for
> their OS CDs, the BIOS manufacturers followed suit and dropped support
> for or did not fix bugs in the old method and added support for the
> "new" method. For maximum compatibility with old computers you need
> floppy-emulation, for new computers you need the new method. BTW
> RedHat et al. don't use floppy-emulation, too.
> 
> If your computer cannot boot woody CD1 try CD2-CD7 - they use
> floppy-emulation and should work on old computers.

Good to know. Is that in the install docs somewhere?

MfG
Goswin




Re: woody CD not bootable with adaptec + scsi-cdrom

2002-09-02 Thread Goswin Brederlow
Richard Atterer <[EMAIL PROTECTED]> writes:

> So the final solution was to use multiboot only on the first CD, the
> other CDs use the same method as potato. The few machines on which it
> fails are either old or have a SCSI CD-ROM, booting from one of the
> later CDs should work for them. AFAIK if the woody release multiboot
> CD fails, it even prints a message which tells you to do that.

Haven't seen that. But maybe it wasn't ready or wasn't included on the
Linuxtag CDs; they are pre-stable.

MfG
Goswin

PS: I will see if I can get a real woody CD to test this. I don't want
to waste the traffic, though.




Re: Non-Intel package uploads by maintainer

2002-09-01 Thread Goswin Brederlow
John Goerzen <[EMAIL PROTECTED]> writes:

> On Sun, Sep 01, 2002 at 09:51:32PM +0200, Goswin Brederlow wrote:
> 
> > Imagine for eample the case where the sources are missing files, as
> > happens too often. Then the binary is in violation of the GPL.
> > 
> > Not good. So why not let the autobuilder do their job for all archs.
> 
> Because then there's no opportunity to test what was built prior to
> uploading it to the archive.  If you do a standard upload, you'll at least
> have the opportunity to test the actual .debs produced on one architecture.
> 
> -- John

You should do that anyway. Testing and uploading should be seperate
steps.

MfG
Goswin

PS: you should also test the result of the autobuilder.




Re: Non-Intel package uploads by maintainer

2002-09-01 Thread Goswin Brederlow
Josip Rodin <[EMAIL PROTECTED]> writes:

> On Sun, Sep 01, 2002 at 06:44:24AM +0200, Goswin Brederlow wrote:
> > > > Don't upload binaries at all.
> > > > 
> > > > The autobuilder will check the build-process of your package. It will
> > > > build in a clean chroot with proper build-depends. With proper
> > > > versions of all tools.
> > > > 
> > > > If you upload binaries you get the usual bugs of missing
> > > > build-depends, wrong versions of tools or libraries and so on. Just
> > > > because you had them installed.
> > > 
> > > In his case, most of these will be noticed by the remaining nine buildds,
> > > anyway. 
> > 
> > But the uploaded binaries will still be in the archive with broken
> > sources.
> 
> Er, so? He'll still get RC bugs filed, and will have to upload another
> revision, which would be fixed.

Imagine for eample the case where the sources are missing files, as
happens too often. Then the binary is in violation of the GPL.

Not good. So why not let the autobuilder do their job for all archs.

MfG
Goswin




woody CD not bootable with adaptec + scsi-cdrom

2002-08-31 Thread Goswin Brederlow
Hi,

I just crashed my system working on libsafe and had to boot from CD.

I then discovered that the woody CD (Linuxtag prerelease) doesn't
boot. I have heard of similar reports for the real woody release CDs
on IRC.

Can anyone boot the CDs? Which one of the set, and on what hardware?
The same if you can't boot.


Also, what's different between potato and woody? potato had this
multiboot thing and woody doesn't anymore, right? What was wrong with
it? The old way seems to be more compatible.

MfG
Goswin




Bug#159037: general: Time Problem

2002-08-31 Thread Goswin Brederlow
"Matt Filizzi" <[EMAIL PROTECTED]> writes:

> Package: general
> Version: N/A; reported 2002-08-31
> Severity: normal
> Tags: sid
> 
> I don't know what is causing this problem but all I know is that I have
> narrowed it down to being caused either by a package or by the install
> system.  I installed from the woody install disks then upgraded to sid.
> What happenes is that the time jumps ahead then back, eg (this is output
> from "while true; do date;done"
> 
> Sat Aug 31 19:07:26 EDT 2002
> Sat Aug 31 19:07:26 EDT 2002
> Sat Aug 31 19:07:26 EDT 2002
> Sat Aug 31 19:07:26 EDT 2002
> Sat Aug 31 20:19:01 EDT 2002
> Sat Aug 31 20:19:01 EDT 2002
> Sat Aug 31 20:19:01 EDT 2002
> Sat Aug 31 19:07:27 EDT 2002
> Sat Aug 31 19:07:27 EDT 2002
> 
> The only thing I did differently then previous installs was I told the
> installer that it could set the bios go UTC.  The only time it is really
> noticable is when in X, the screensaver kicks in when it jumps.

Do you have ntpdate, ntp, chrony, or something similar installed?

Maybe two of them, one with summer time and one without in its config?

MfG
Goswin




Re: Non-Intel package uploads by maintainer

2002-08-31 Thread Goswin Brederlow
Joerg Jaspert <[EMAIL PROTECTED]> writes:

> Goswin Brederlow <[EMAIL PROTECTED]> writes:
> 
> > There are several reasons not to do this.
> > Don't upload binaries at all.
> 
> Why?
> 
> > The autobuilder will check the build-process of your package.
> 
> YOU should do that.

To err is human.

> > It will build in a clean chroot with proper build-depends.
> > With proper versions of all tools.
> 
> man pbuilder|sbuild|chroot by hand
> 
> There are enough ways to test your Build-Depends.
> And if you have an up2date chroot the versions of the Depends should be
> alright.

How many maintainers really bother? And even then there are still mistakes.

MfG
Goswin




Re: Non-Intel package uploads by maintainer

2002-08-31 Thread Goswin Brederlow
Josip Rodin <[EMAIL PROTECTED]> writes:

> On Sun, Sep 01, 2002 at 12:17:01AM +0200, Goswin Brederlow wrote:
> > Don't upload binaries at all.
> > 
> > The autobuilder will check the build-process of your package. It will
> > build in a clean chroot with proper build-depends. With proper
> > versions of all tools.
> > 
> > If you upload binaries you get the usual bugs of missing
> > build-depends, wrong versions of tools or libraries and so on. Just
> > because you had them installed.
> 
> In his case, most of these will be noticed by the remaining nine buildds,
> anyway. 

But the uploaded binaries will still be in the archive with broken
sources.

MfG
Goswin




Re: Non-Intel package uploads by maintainer

2002-08-31 Thread Goswin Brederlow
Dale Scheetz <[EMAIL PROTECTED]> writes:

> Since I have access to both Intel and Sparc hardware, it would be possible
> for me to upload both the i386 version and the Sparc version of the binary
> packages when I build a new release.
> 
> Is there any reason not to do this? It seems that it might speed up the
> autobuild process, specially when it is a library like libgmp3 which other
> packages depend upon for their builds...

There are several reasons not to do this.

Don't upload binaries at all.

The autobuilder will check the build-process of your package. It will
build in a clean chroot with proper build-depends. With proper
versions of all tools.

If you upload binaries you get the usual bugs of missing
build-depends, wrong versions of tools or libraries and so on. Just
because you had them installed.

The only reason I see for binary uploads would be archs that are far
behind, or packages that need that little extra time to build
(OpenOffice with its 4 GB space requirement also comes to mind).

MfG
Goswin

PS: I assume dinstall got fixed so it doesn't delete source-only uploads.




Re: Packages.bz2, Sources.bz2, Contents-*.bz2, oh my

2002-08-30 Thread Goswin Brederlow
[EMAIL PROTECTED] writes:

> Hello world,
> 
> In a couple of days uncompressed Packages files for unstable will cease
> to be generated, and bzip2'ed Packages files will be generated in their

That will also break rsyncing them, which saves a lot.
The Packages, Sources and Contents files have only minimal changes
from day to day, so downloading them again and again in full is a waste.
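
A fetch that exploits this looks roughly like the following (the
mirror path is hypothetical):

    rsync -t --compress \
        some.mirror.org::debian/dists/sid/main/binary-i386/Packages \
        Packages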

MfG
Goswin




Re: Large file support?

2002-08-30 Thread Goswin Brederlow
Torsten Landschoff <[EMAIL PROTECTED]> writes:

> On Fri, Aug 09, 2002 at 01:42:04PM +0200, Andreas Metzler wrote:
>  
> > This does not solve the issue, LFS requires 2.4 or a patched 2.2
> > Kernel.
> > http://www.suse.de/~aj/linux_lfs.html
> 
> But with a standard 2.2 kernel it should still work for files < 2GB 
> I hope? I built openldap2 with lfs support and I am only running 2.4
> kernels. Can somebody tell me if it is going to break on 2.2?
> 
> Greetings
> 
>   Torsten

It's not. glibc has taken care of that since more or less forever.

Otherwise ls, dd, cat, tar and the rest would all be broken.

MfG
Goswin




Re: RFD: Architecture field being retarded? [was: How to specify architectures *not* to be built?]

2002-08-30 Thread Goswin Brederlow
Adam Heath <[EMAIL PROTECTED]> writes:

> On Mon, 12 Aug 2002, Brian May wrote:
> 
> > This proposal would also allow, say bochs, to provide i386 too (although
> > I think more work might be needed here).
> 
> No, it wouldn't.
> 
> Say you install bochs on alpha.  If bochs provides i386, then this would tell
> dpkg that it is ok to install i386 binaries in the host.

There's also a more suitable project under way. Instead of emulating a
complete system, it just translates the assembler code and translates
the syscalls to your architecture.

I don't know its name because I have only heard about it. Falk
Hueffner is trying to get his Mathematica (i386) running on his alpha
with it.

From what he told me it's way faster than bochs and transparent. It
could probably be made into a binfmt_misc-style module so the kernel
supports i386 ELF binaries.

MfG
Goswin




Re: Is there a limitation on swap parition size linux can use?

2002-08-30 Thread Goswin Brederlow
Walter Tautz <[EMAIL PROTECTED]> writes:

> I heard that 2Gb is the limit. If so I would have
> to create distinct swap partitions if I wanted to
> have more than 2Gb swap? Just wondering...

Older kernels only allowed swap partitions up to 128 MB. Newer kernels
allow 2 GB per swap partition or file.

Nobody said you can only have one partition. :)


In fact it's faster to spread the swap over several disks, having a
small partition on each, all with the same priority. Linux will then
automatically "raid0" them for greater speed (see the fstab sketch
below).

If you need even more than 2 GB of swap per disk, you can have
multiple swap partitions or files on one disk. But then better keep
them at different priorities (the default) so they don't get used in
parallel.
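
For illustration, an /etc/fstab spreading equal-priority swap over two
disks (the device names are made up):

    /dev/hda2  none  swap  sw,pri=1  0  0
    /dev/hdc2  none  swap  sw,pri=1  0  0

Equal pri= values make the kernel stripe across both; omitting pri=
gives the sequential, one-after-the-other behaviour instead.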

MfG
Goswin




Re: RFD: Architecture field being retarded? [was: How to specify architectures *not* to be built?]

2002-08-30 Thread Goswin Brederlow
Andreas Rottmann <[EMAIL PROTECTED]> writes:

> > "Russell" == Russell Coker <[EMAIL PROTECTED]> writes:
> 
> Russell> On Sun, 11 Aug 2002 16:35, Geert Stappers wrote:
> >> When the cause of the buildproblem is in the package, fix the
> >> problem there. The package maintainer hasn't to do it by
> >> himself, he can/must/should cooperate with people of other
> >> architectures.  A sign like "!hurd-i386" looks to me like "No
> >> niggers allowed", it is not an invitation to cooperation.
> 
> Russell> So you think I should keep my selinux packages as
> Russell> architecture any, even though they will never run on on
> Russell> HURD or BSD?
> 
> Thanks, Russell, you are making my point. It is similiar with radvd,
> which was designed for Linux/BSD and won't work on the HURD, since it
> simply isn't supported upstream. I am not in the position to port
> radvd to the HURD, altough this would be the ideal way to go.

What about setting !hurd-i386 and filing a bug about it with the tag
"Help needed"?

That should encourage people to help and prevent the autobuilders from
sending build-failed mails for every release.

Just a random thought.
Goswin




debbugs: assign someone from a group of willing people to fix a bug

2002-08-29 Thread Goswin Brederlow
Package: debbugs
Version: 2.3-4
Severity: wishlist

Hi,

Ever noticed how many bugs there are? How many don't get fixed, get
ignored, get forgotten? A lot of bugs are years old and might not even
exist anymore.

I know (most :) maintainers do their best to fix bugs, but sometimes
there just isn't enough time or will. Or the problem is hard to
reproduce. Maintainers might also not have the same architecture or
setup as the reporter of a bug.

What to do?

I would like to propose a setup similar to the one used to translate
package descriptions:

If a bug is not dealt with for some time (no mails or status changes
indicating work being done), a person is selected out of a pool of
willing people and is mailed the bug. He can then check out the bug,
fix it if possible, and has the right to do an NMU or close the bug
etc.

If nothing happens to the bug, or if the person sends a reject for the
bug, another person gets drafted, and so on.

Some commands could be introduced to control which person gets the bug
next, like selecting the architecture or some capabilities of the
person to be drafted next. Maintainers should also be able to force or
stop drafting someone; e.g. if a maintainer thinks it's an
alpha-related problem, he can tell the BTS to restrict the bug to
people having an alpha and draft someone immediately (without some
lengthy wait for the drafting to kick in).

The easiest way might be to always draft someone but draft the
maintainer first. If he doesn't react, say within a month, the next
person is drafted from the pool.


The primary criterion for drafting someone from the pool should be
workload: take the person with the lowest number of bugs assigned.
Additionally, factors like architecture, dist
(stable/testing/unstable) and kernel should be matched to the bug
reporter if possible. Capabilities like knowing perl or C, or
preferences like loving games, could also be considered.

When starting out there should probably be a limit of a few bugs per
person; otherwise all the thousands of open bugs would be reassigned
to a few, then soon unwilling, helpers.


Any comments? Maybe something more than "send a patch and we'll think
about it"?

May the Source be with you.
Goswin

-- System Information:
Debian Release: testing/unstable
Architecture: i386
Kernel: Linux dual 2.4.16 #19 SMP Sat Jul 6 04:37:14 CEST 2002 i686
Locale: LANG=C, LC_CTYPE=de_DE

Versions of packages debbugs depends on:
ii  ed0.2-19 The classic unix line editor
ii  exim [mail-transport-agent]   3.35-1 An MTA (Mail Transport Agent)
ii  libmailtools-perl [mailtools] 1.48-1 Manipulate email in perl programs
ii  perl [perl5]  5.6.1-7Larry Wall's Practical Extraction 

-- no debconf information




new selftest target in debian/rules, new package state for autobuilder (suspect), debs that selftest on build

2002-01-06 Thread Goswin Brederlow
Hi,

I'm trying to build gcc-3.0 manually because the autobuilder on m68k
just times out on it. All of this takes gcc-3.0 as an example; nothing
personal. This is more about improving the autobuilders.

Doing a test compile on i386 (way faster to check build-depends and
general errors there) I noticed that the selftests of gcc had
unexpected failures.

But since that happens all the time, the results of the selftests are
just ignored and the build succeeds, the reason being that otherwise
there would never be a successful build, especially for the
MHz-challenged archs.


But neither failing nor succeeding seems right here.
My suggestion would be to have a "selftests" target in
debian/rules. During the build, that target would be called to carry
out the selftests (if any).

Now two things can happen:

- The selftests work fine. The package completes its build and gets
  uploaded to unstable (the build succeeds).

- The selftests fail. The package gets flagged as suspect because of
  selftest failures and is uploaded to experimental or a
  selftest-failures area or so. Then someone can take a look at the
  debs and check what caused the selftest failures. If it's nothing
  serious (like outdated testcases), the package can still be
  released. If it's something serious, he can patch it and upload a
  new version.
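
A sketch of the hook in debian/rules (make syntax; the log file name
is invented). The leading "-" keeps a test failure from aborting the
build, leaving the "suspect" decision to the autobuilder:

    selftests: build-stamp
            -$(MAKE) check 2>&1 | tee debian/selftest.log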



Why not just fail the build when the selftests fail?

Many packages build multiple debs, and many take very long to build,
especially those with selftests. A selftest failure in, for example,
the libjava portion of gcc should not hold back all the other gcc
packages. The debs should still be built, and then the maintainer can
move everything but libjava.deb to incoming.


Comments, ideas, flames?
Goswin




Re: An alarming trend (no it's not flaimbait.)

2002-01-06 Thread Goswin Brederlow
Henrique de Moraes Holschuh <[EMAIL PROTECTED]> writes:

> On Thu, 03 Jan 2002, Craig Dickson wrote:
> > Karl M. Hegbloom wrote:
> > >  If a package has gotten very stale, and nobody has taken up
> > >  maintainence, isn't that a pretty good indication that nobody is
> > >  using it anyhow?
> > 
> > Is it? Is the average Debian user both able and willing to be a
> 
> Obviously not. It is a pretty good indication that no developer is using it
> anymore, but just that.

1. Debian developers are a good sample of the Debian users. Only a
selected group, but it still gives some indication.

2. popularity-contest should also give you a hint.

3. If a package has a bug and is not maintained, that can be
noticed. If the bug is release critical, it drops out of stable. Watch
out for those; clean up those buggy, stale debs first.

4. Check for packages that are outdated compared to the upstream
source. Ask upstream whether they know someone to maintain it.

But what about stale, unused, bug-free debs that are just perfect
(yeah, show me one), with no newer upstream and no other indication of
staleness?  First of all, the maintainer should know. Would you
maintain a package you don't use? The package should be orphaned when
it's not maintained and then go the way of all orphans: get adopted or
grow up and earn your own money. :)

The only way to see if a probably unused package is really unused is to
remove it and wait for someone to scream. Do you want to listen to all
those screams? Removing a package should be well thought through.

MfG
Goswin

PS: I'm all for cleaning up old cruft. Just remember that someone's cruft
might be someone else's dearest.
PPS: NEVER REMOVE MOONBUGGY




Re: How to put files at a location determined at install-time.

2002-01-01 Thread Goswin Brederlow
"John H. Robinson, IV" <[EMAIL PROTECTED]> writes:

> On Mon, Dec 31, 2001 at 07:09:41PM +0100, Marc L. de Bruin wrote:
> > 
> > Therefore it is up to the root-user (and his filesystem) where the files 
> > should end up after installation.
> > 
> > Is this possible? Thanks again,
> 
> if this is the case, then i would strongly recommend distributing it as a
> tarball, and allowing the admin to extract it wherever it is required. the
> drawback is that after an upgrade, the admin would have to manually
> extract the tarball.
> 
> now, if you ask the admin via debconf where the tarball should be extracted
> to, you can query that, and extract in post-inst with no further
> interaction.
> 
> so, i guess the answer is yes: distribute the data as a tarball, and
> query the admin as to where to extract it to. you may even want to rm
> the tarball after extraction.

Why not use one directory in a standard place, like
/lib/share/package/? If the admin doesn't like the data there, he can
link that dir to another place or mount a partition there.

The problem with having it in a random place is finding it again,
especially when the previous admin runs away to a better-paid job and
you are left with the remains.

MfG
Goswin




Re: Some Crazy and Happier Ideas for 2002

2002-01-01 Thread Goswin Brederlow
"" <[EMAIL PROTECTED]> writes:

> [] A spanking new hardware platform without any compromise to aged standards 
> is released and produced. Linux is the OS of choice together with BSD and 
> other Open OS's. Plain boxes with just a couple connectors, stylish, vector, 
> plain [] // Oh well, sick of that x86 like alley, gimme something cool //

Take a few million $, a big box, 32 EV8 Alpha CPUs (yet to be built),
design a fast motherboard, write a free BIOS and an i386 BIOS emulator
(to initialise PCI cards) and sell them for $100 each.

[ Madness starts when you turn on an i386 :]

:)




ECN: Why just on/off? can one mangle that per iptables?

2001-09-02 Thread Goswin Brederlow
Hi,

I have some thoughts about the ECN bit:

Why is it on by default when compiled in?

  Normally I would expect it to be off unless activated via /proc, like
  ip_forward or syn-cookies or lots of other stuff.


Why can one only turn it on/off?

  I want it on normally, but not for a few hosts or routes. Why can't I
  enable it on eth0 but not on eth1?


Is there a way to mangle that bit (on or off) with iptables?

  I would really love that feature. I have one site which doesn't work
  with ECN so I have to disable it. Why not catch all connections to
  that site and clear the ECN bit on connect via iptables?

  I would like a package that comes with a blacklist of non-ECN hosts
  and disables those via iptables, with regular updates.
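
Something along these lines is what I have in mind (a sketch only; it
assumes a kernel and iptables built with the experimental ECN mangle
target, and broken.example.com is a made-up placeholder):

# strip the ECN TCP bits on all connections towards one broken host
iptables -t mangle -A OUTPUT -p tcp -d broken.example.com \
         -j ECN --ecn-tcp-remove

One such rule per blacklisted host would do.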

May the Source be with you.
Goswin




Re: sysctl should disable ECN by default

2001-09-02 Thread Goswin Brederlow
Eduard Bloch <[EMAIL PROTECTED]> writes:

> Package: procps
> Version: 1:2.0.7-6
> Severity: wishlist
> Tags: woody
> 
> I suggest to disable ECN¹ in the default network configuration.
> This should be done in Woody since we don't like our users to be
> confused just because of the ECN support in kernel is turned on.
> 
> ¹ ECN bit causes trouble on bad configured firewalls, so the fresh
> installed Debian box won't be able to connect some remote hosts.
> 
> Gruss/Regards,
> Eduard.

I think that should be refiled against kernel-image-2.4.x. Those
packages, since they have the flag enabled, should warn about it and
turn it off in /etc/sysctl.conf upon first install (not on update, so
you can delete the option).

Or just ask via debconf.

MfG
Goswin




Re: big Packages.gz file

2001-01-10 Thread Goswin Brederlow
> " " == Brian May <[EMAIL PROTECTED]> writes:

> "zhaoway" == zhaoway  <[EMAIL PROTECTED]> writes:
zhaoway> This is only a small part of the whole story, IMHO. See
zhaoway> my other email replying you. ;)

>>> Maybe there could be another version of Packages.gz without
>>> the extended descriptions -- I imagine they would take
>>> something like 33% of the Packages file, in line count at
>>> least.

zhaoway> Exactly. DIFF or RSYNC method of APT (as Goswin pointed
zhaoway> out), or just seperate Descriptions out (as I pointed out
zhaoway> and you got it too), nearly 66% of the bits are
zhaoway> saved. But this is only a hack, albeit efficient.

 > At the risk of getting flamed, I investigated the possibility
 > of writing an apt-get method to support rsync. I would use this
 > to access an already existing private mirror, and not the main
 > Debian archive. Hence the server load issue is not a
 > problem. The only problem I have is downloading several megs of
 > index files every time I want to install a new package (often
 > under 100kb) from unstable, over a volume charged 28.8 kbps PPP
 > link, using apt-get[1].

I tried the same, but I used the copy method as a template, which is
rather bad. I should have used the http method as a starting point.

Can you send me your patch, please?

 > I think (if I understand correctly) that I found three problems
 > with the design of apt-get:

 > 1. It tries to down-load the compressed Packages file, and has
 > no way to override it with the uncompressed file. I filed a bug
 > report against apt-get on this, as I believe this will also be
 > a problem with protocols like rproxy too.

 > 2. apt-get tries to be smart and passes the method a
 > destination file name that is only a temporary file, and not
 > the final file. Hence, rsync cannot make a comparison between
 > local and remote versions of the file.

I wrote to the deity mailing list concerning those two problems, with
two possible solutions. So far the only answer I got was "NO, we don't
want rsync", after pressing the issue here on debian-devel.

 > 3. Instead, rsync creates its own temporary file while
 > downloading, so apt-get cannot display the progress of the
 > download operation because as far as it is concerned the
 > destination file is still empty.

Hmm, isn't there an informational message you can output to hint at the
progress? We would have to patch rsync to generate that style of
progress output, or fork and parse the output of rsync and pass on
altered output.

 > I think the only way to fix both 2 and 3 is to allow some
 > coordination between apt-get and rsync where to put the
 > temporary file and where to find the previous version of the
 > file.

The more I think about it, the more I like the second solution to the
problem:

1. Include a template (some file that apt-get thinks matches best) in
the fetch request. The rsync method can then copy that file to the
destination and rsync on it. This would be the uncompressed Packages
file, a previous deb or the old source.

2. Return whether the file is compressed or not simply by passing
back the destination filename with the appropriate extension (.gz). So
the destination filename is altered to reflect the file format.

MfG
Goswin




Re: Solving the compression dilema when rsync-ing Debian versions

2001-01-09 Thread Goswin Brederlow
> " " == Otto Wyss <[EMAIL PROTECTED]> writes:

>> > gzip --compress-like=old-foo foo
>> >
>> > where foo will be compressed as old-foo was or as aquivalent as
>> > possible. Gzip does not need to know anything about foo except how
>> > it was compressed. The switch "--compress-like" could be added to
>> > any compression algorithmus (bzip?) as long as it's easy to
>> > retrieve the
>> No, this won't work with very many compression algorithms.
>> Most algorithms update their dictionaries/probability tables
>> dynamically based on input.  There isn't just one static table
>> that could be used for another file, since the table is
>> automatically updated after every (or near every) transmitted
>> or decoded symbol.  Further, the algorithms start with blank
>> tables on both ends (compression and decompression), the
>> algorithm doesn't transmit the tables (which can be quite large
>> for higher order statistical models).
>> 
 > Well the table is perfectly static when the compression
 > ends. Even if the table isn't transmitted itself, its
 > information is contained in the compressed file, otherwise the
 > file couldn't be decompressed either.

Yes, THEY are. Most of the time each character is encoded by its own
table, which is constructed out of all the characters encoded or
decoded before. The tables are static but 100% dependent on the
data. Change one char and all later tables change (except when gzip
cleans the dictionary, see other mail).
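
You can see that dependence directly (Packages here stands for any
large text file; -n just keeps the gzip header deterministic):

# flip the first character, compress both versions, compare
sed '1s/^./X/' Packages > Packages.mod
gzip -9nc Packages > a.gz
gzip -9nc Packages.mod > b.gz
cmp -l a.gz b.gz | wc -l    # nearly every byte after the change differs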

MfG
Goswin




Re: big Packages.gz file

2001-01-09 Thread Goswin Brederlow
> " " == Brian May <[EMAIL PROTECTED]> writes:

> "sluncho" == sluncho  <[EMAIL PROTECTED]> writes:
sluncho> How hard would it be to make daily diffs of the Package
sluncho> file? Most people running unstable update every other day
sluncho> and this will require downloading and applying only a
sluncho> couple of diff files.

sluncho> The whole process can be easily automated.

 > Sounds remarkably like the process (weekly not daily though) to
 > distribute Fidonet nodelist diffs. Also similar to kernel
 > diffs, I guess, too.

 > Seems a good idea to me (until better solutions like rproxy are
 > better implemented), but you have to be careful not to
 > apply diffs in the wrong order.  -- Brian May <[EMAIL PROTECTED]>

Or missing one, or having a corrupted file to begin with, or any other
of 1000 possibilities.

Also mirrors will always lag behind, have erratic timestamps on
those files and so on. I think it would become a mess pretty soon.

The nice thing about rsync is that it's self-repairing. It's also more
efficient than a normal diff.

MfG
Goswin




Re: Solving the compression dilema when rsync-ing Debian versions

2001-01-09 Thread Goswin Brederlow
> " " == Otto Wyss <[EMAIL PROTECTED]> writes:

>> > gzip --compress-like=old-foo foo
>> 
>> AFAIK thats NOT possible with gzip. Same with bzip2.
>> 
 > Why not.

gzip creates a dictionary (that gets really large) of strings that are
used and encodes references to them. At the start the dictionary is
empty, so the first char is pretty much unencoded and inserted into
the dictionary. The next char is encoded using the first one and so
on. That way longer and longer strings enter the dictionary.

Every sequence of bytes creates a unique (maybe not unique, but
pretty much so) dictionary that can be completely reconstructed from
the compressed data. Given the dictionary after the first n characters,
the (n+1)st character can be decoded and the next dictionary can be
calculated.

I think it's pretty much impossible to find two files resulting in the
same dictionary. It certainly is impossible at the speed we need.

You cannot encode two arbitrary files with the same dictionary, not
without adding the dictionary to the file, which gzip does not do
(since it's a waste).

So, as you see, that method is not possible.

But there is a little optimisation that can be used (and is used by
the --rsyncable patch):

If the dictionary gets too big, the compression ratio drops. It becomes
ineffective. Then gzip flushes the dictionary and starts again with an
empty one.

The --rsyncable patch now changes the moments when that will
happen. It looks for blocks of bytes that have a certain rolling
checksum, and when it matches, it flushes the dictionary. Most likely
two similar files will therefore flush the dictionary at exactly the
same places. If two files are equal after such a flush, the data will
be encoded the same way and rsync can match those blocks.

The author claims that it takes about 0.1-0.2% more space for
rsyncable gzip files, which is a loss I think everybody is willing to
pay.
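
The effect is easy to measure locally, assuming a gzip built with
Rusty's --rsyncable patch (old/Packages and new/Packages stand for two
similar uncompressed files):

gzip -9c --rsyncable old/Packages > basis.gz
gzip -9c --rsyncable new/Packages > new.gz
# place the old compressed file where the new one should land, then
# force the delta algorithm even for a local transfer
cp basis.gz target.gz
rsync --no-whole-file --stats new.gz target.gz

Compare the "literal data" figure in the stats with the size of
new.gz, then repeat without --rsyncable to see the difference.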

>> I wish it where that simple.
>> 
 > I'm not saying it's simple, I'm saying it's possible. I'm not a
 > compression speciallist but from the theory there is nothing
 > which prevents this except from the actual implementation.

 > Maybe it's time to design a compression alogrithmus which has
 > this functionality (same difference rate as the source) from
 > the ground up.

There are such algorithms, but they either always use the same
dictionary or table (like some i386 .exe runtime compressors that are
specialised to the patterns used in opcodes) or they waste space by
adding the dictionary/table to the compressed file. That's a huge waste
with all the small diff files we have.


The --rsyncable patch looks promising for a start and will greatly
reduce the downloads for source mirrors, if it's used.

MfG
Goswin




Re: Debian unstable tar incompatible with 1.13.x?

2001-01-08 Thread Goswin Brederlow
> " " == safemode  <[EMAIL PROTECTED]> writes:

 > I have used tar with gzip and bzip2 in debian unstable and in
 > each case users who use older versions of tar ( like 1.13.11 )
 > were unable to decompress it.

Well, the bzip2 one is known. It just doesn't work anymore (see the big
flamewar here :)

 > [49: huff+mtf rt+rld]data integrity (CRC) error in data

That's strange. Do you have a small example as tar and tar.gz files?
Best would be the same data once with the old and once with the new
tar.

 > and such error messages like that .  This troubles me greatly.
 > Any info about this?  I'm using debian unstable's current tar
 > and bzip2 and gzip to make the tarballs.

MfG
Goswin




Re: Solving the compression dilema when rsync-ing Debian versions

2001-01-08 Thread Goswin Brederlow
> " " == John O Sullivan <[EMAIL PROTECTED]> writes:

 > There was a few discussions on the rsync mailing lists about
 > how to handle compressed files, specifically .debs I'd like to
 > see some way of handling it better, but I don't think it'll
 > happen at the rsync end. Reasons include higher server cpu load
 > to (de)compress every file that is transferred and problems
 > related to different compression rates.  see this links for
 > more info
 > http://lists.samba.org/pipermail/rsync/1999-October/001403.html

Did you read my proposal a few days back? That should do the trick. It
works without unpacking on the server side and actually reduces the
load on the server, because it can then cache the checksums,
i.e. calculate them once and reuse them every time.

MfG
Goswin




Re: Solving the compression dilema when rsync-ing Debian versions

2001-01-08 Thread Goswin Brederlow
> " " == Andrew Lenharth <[EMAIL PROTECTED]> writes:

 > What is better and easier is to ensure that the compression is
 > deturministic (gzip by default is not, bzip2 seems to be), so
 > that rsync can decompress, rsync, compress, and get the exact
 > file back on the other side.

gzip encodes timestamps, which makes identical files seem to be
different when compressed.

Given the same file with the same timestamp, gzip should always
generate an identical file.

Of course that also depends on the options used.
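
gzip's -n (--no-name) switch omits both the timestamp and the original
file name from the header, which makes the output depend only on
content and options. Quick check (Packages stands for any input file):

gzip -9c Packages > a.gz
touch Packages                   # change only the mtime
gzip -9c Packages > b.gz
cmp a.gz b.gz                    # differ: the header embeds the timestamp

gzip -9nc Packages > a.gz
touch Packages
gzip -9nc Packages > b.gz
cmp a.gz b.gz && echo identical  # -n drops timestamp and name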

MfG
Goswin




Re: Solving the compression dilema when rsync-ing Debian versions

2001-01-08 Thread Goswin Brederlow
> " " == Otto Wyss <[EMAIL PROTECTED]> writes:

>>> So why not solve the compression problem at the root? Why not
>>> try to change the compression in a way so it does produce a
>>> compressed result with the same (or similar) difference rate as
>>> the source?
>>  Are you going to hack at *every* different kind of file format
>> that you might ever want to rsync, to make it rsync friendly?
>> 
 > No, I want rsync not even to be mentioned. All I want is
 > something similar to

 > gzip --compress-like=old-foo foo

AFAIK that's NOT possible with gzip. Same with bzip2.

 > where foo will be compressed as old-foo was or as aquivalent as
 > possible. Gzip does not need to know anything about foo except
 > how it was compressed. The switch "--compress-like" could be
 > added to any compression algorithmus (bzip?) as long as it's
 > easy to retrieve the compression scheme. Besides the following
 > is completly legal but probably not very sensible

 > gzip --compress-like=foo bar

 > where bar will be compressed as foo even if they might be
 > totally unrelated.

 > Rsync-ing Debian packages will certainly take advantage of this
 > solution but the solution itself is 100% pure compression
 > specific. Anything which needs identical compression could
 > profit from this switch. It's up to profiting application to
 > provide the necessary wrapper around.

>> gzip --rsyncable, already implemented, ask Rusty Russell.

 > The --rsyncable switch might yield the same result (I haven't
 > checked it sofar) but will need some internal knowledge how to
 > determine the old compression.

As far as I understand the patch, it forces gzip to compress the binary
into chunks of 8K. So every 8K there's a break where rsync can try to
match blocks. It seems to help somewhat, but I think it handles
movement of data in a file badly (like when a line is inserted).

 > As I read my mail again the syntax for "compressing like" could
 > be

 > gzip --compress=foo bar

 > where bar is compressed as foo was. Foo is of course a
 > compressed file (how else could the compression be retrieved)
 > while bar is not.

I wish it were that simple.

MfG
Goswin




Re: Solving the compression dilema when rsync-ing Debian versions

2001-01-08 Thread Goswin Brederlow
> " " == Jason Gunthorpe <[EMAIL PROTECTED]> writes:

 > On 7 Jan 2001, Bdale Garbee wrote:

>> > gzip --rsyncable, already implemented, ask Rusty Russell.
>> 
>> I have a copy of Rusty's patch, but have not applied it since I
>> don't like diverging Debian packages from upstream this way.
>> Wichert, have you or Rusty or anyone taken this up with the
>> gzip upstream maintainer?

 > Has anyone checked out what the size hit is, and how well
 > ryncing debs like this performs in actual use? A study using
 > xdelta on rsyncable debs would be quite nice to see. I recall
 > that the results of xdelta on the uncompressed data were not
 > that great.

That might be a problem with xdelta; I heard it's pretty ineffective.

MfG
Goswin




Re: package pool and big Packages.gz file

2001-01-08 Thread Goswin Brederlow
>>>>> " " == Jason Gunthorpe <[EMAIL PROTECTED]> writes:

 > On 8 Jan 2001, Goswin Brederlow wrote:
 
>> I don't need to get a filelisting, apt-get tells me the
>> name. :)

 > You have missed the point, the presence of the ability to do
 > file listings prevents the adoption of rsync servers with high
 > connection limits.

Then that feature should be limited to non-recursive listings or
turned off. Or .listing files should be created that are just served.

>> > Reversed checksums (with a detached checksum file) is something
>> > someone should implement for debian-cd. You could even quite
>> > reasonably do that totally using HTTP and not run the risk of
>> > rsync load at all.
>> 
>> At the moment the client calculates one rolling checksum and
>> md5sum per block.

 > I know how rsync works, and it uses MD4.

Oops, then s/5/4/g.

>> Given a 650MB file, I don't want to know the hit/miss ratios
>> for the rolling checksum and the md5sum. Must be really bad.

 > The ratio is supposed to only scale with block size, so it
 > should be the same for big files and small files (ignoring the
 > increase in block size with file size).  The amount of time
 > expended doing this calculation is not trivial however.

Hmm, the technical paper says that it creates a 16-bit external hash
table, each entry a linked list of items containing the full 32-bit
rolling checksum (or the other 16 bits) and the md4sum.

So when you have more blocks, the hash will fill up. So you get more
hits on the first level and need to search a linked list. With a block
size of 1K, a CD image (650MB, i.e. about 650,000 blocks spread over
65,536 buckets) has ~10 items per hash entry; it's 1000% full. The
time wasted just checking the rolling checksums must be huge.

And with 65 rolling checksums for the image there's a ~10/65536
chance of hitting the same checksum with a different md4sum, so
that's about 100 times per CD, just by pure chance.

If the images match, then it's 65 times.

So the better the match and the more blocks you have, the more CPU it
takes. Of course larger blocks take more time to compute an md4sum, but
then you will have fewer blocks.

 > For CD images the concern is of course available disk
 > bandwidth, reversed checksums eliminate that bottleneck.

That too. And RAM.

MfG
Goswin




Re: package pool and big Packages.gz file

2001-01-07 Thread Goswin Brederlow
>>>>> " " == Jason Gunthorpe <[EMAIL PROTECTED]> writes:

 > On 7 Jan 2001, Goswin Brederlow wrote:

>> Actually the load should drop, providing the following feature
>> add-ons:
>>
>> 1. cached checksums and pulling instead of pushing
>> 2. client-side unpacking of compressed streams

 > Apparently reversing the direction of rsync infringes on a
 > patent.

When I rsync a file, rsync starts ssh to connect to the remote host
and starts rsync there in the reverse mode.

You say that the receiving end is violating a patent and the sending
end is not?

Hmm, which patent anyway?

So I have to fork a rsync-non-US because of a patent?

 > Plus there is the simple matter that the file listing and file
 > download features cannot be seperated. Doing a listing of all
 > files on our site is non-trivial.

I don't need to get a file listing, apt-get tells me the name. :)
Also I can do "rsync -v host::dir" and parse the output to grab the
actual files with another rsync. So file listing and downloading are
absolutely separable.

Doing a listing of all files probably results in a timeout. The
hard drives are too slow.

 > Once you strip all that out you have rproxy.

 > Reversed checksums (with a detached checksum file) is something
 > someone should implement for debian-cd. You calud even quite
 > reasonably do that totally using HTTP and not run the risk of
 > rsync load at all.

At the moment the client calculates one rolling checksum and one md5sum
per block.

The server, on the other hand, calculates the rolling checksum per
byte, and for each hit it calculates an md5sum for one block.

Given a 650MB file, I don't want to know the hit/miss ratios for the
rolling checksum and the md5sum. Must be really bad.

The smaller the file, the fewer wrong md5sums need to be calculated.

 > Such a system for Package files would also be acceptable I
 > think.

For Packages files even cvs -z9 would be fine. They are comparatively
small next to the rest of the load, I would think.

But I, just as you, think that it would be a really good idea to
have precalculated rolling checksums and md5sums, maybe even for
various block sizes, and let the client do the time-consuming guessing
and calculating. That would keep rsync from reading every file served
twice, as it does now when the files are dissimilar.
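
A crude stand-in for the idea, using fixed 8K blocks and md5sum (rsync
proper uses a rolling checksum plus MD4, so this only illustrates the
caching, not the real format):

# compute one checksum line per 8K block of a CD image -- once;
# clients could then fetch image.iso.sums instead of making the
# server rescan 650MB per connection
size=$(wc -c < image.iso)
blocks=$(( (size + 8191) / 8192 ))
i=0
while [ $i -lt $blocks ]; do
    dd if=image.iso bs=8192 skip=$i count=1 2>/dev/null | md5sum
    i=$((i + 1))
done > image.iso.sums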

May the Source be with you.
Goswin




Re: apt maintainers dead?

2001-01-07 Thread Goswin Brederlow
>>>>> " " == Jason Gunthorpe <[EMAIL PROTECTED]> writes:

 > On 7 Jan 2001, Goswin Brederlow wrote:

>> I tried to contact the apt maintainers about rsync support for
>> apt-get (a proof of concept was included) but haven't got an
>> answere back yet.

 > No, you are just rediculously impatatient.

 > Date: 06 Jan 2001 19:26:59 +0100 Subject: rsync support for apt

 > Date: 07 Jan 2001 22:42:02 +0100 Subject: apt maintainers dead?

 > Just a bit over 24 hours? Tsk Tsk.

Usually with people living worldwide someone is always reading their
mail, so an answer within minutes is possible.

 > The short answer is exactly what you should expect - No,
 > absolutely not.  Any emergence of a general rsync for APT

Then why did it take so long? :)

 > method will result in the immediate termination of public rsync
 > access to our servers.

I think that is something to be discussed. As I said before, I expect
rsync plus some features to produce less load than ftp or http.

Given that it doesn't need more resources than those two, is the
answer still no?

 > I have had discussions with the rproxy folks, and I feel that
 > they are currently the best hope for this sort of thing. If you
 > want to do something, then help them.

I'm still at the "designing technical details and specs" stage, so
anything is possible. Gotta check rproxy out when I wake up
again. I hope I'll have a URL for it by then.

MfG
Goswin




Re: package pool and big Packages.gz file

2001-01-07 Thread Goswin Brederlow
>>>>> " " == Brian May <[EMAIL PROTECTED]> writes:

>>>>> "Goswin" == Goswin Brederlow <[EMAIL PROTECTED]> writes:
Goswin> Actually the load should drop, providing the following
Goswin> feature add ons:

 > How does rproxy cope? Does it require a high load on the
 > server?  I suspect not, but need to check on this.

 > I think of rsync as just being a quick hack, rproxy is the
 > (long-term) direction we should be headed. rproxy is the same
 > as rsync, but based on the HTTP protocol, so it should be
 > possible (in theory) to integrate into programs like Squid,
 > Apache and Mozilla (or so the authors claim).  -- Brian May
 > <[EMAIL PROTECTED]>

URL?

Sounds more like encapsulation of an rsync-like protocol in HTTP,
but it's hard to tell from the few words you write. Could be
interesting, though.

Anyway, it will not solve the problem with compressed files if it's
just like rsync.

MfG
Goswin




Re: Drag-N-Drop Interface

2001-01-07 Thread Goswin Brederlow
> " " == Michelle Konzack <[EMAIL PROTECTED]> writes:

 > Hello and good evening.  Curently I am programing a new
 > All-In-One Mail-Client (for Windows- Changers ;-)) ) and I need
 > to program a Drag-N-Drop interface.

 > Please can anyone point me to the right resources ???  I
 > program in C.

Well, look at KDE and GNOME. AFAIK they share a common drag&drop
interface.

MfG
Goswin




Linux Gazette [Was: Re: big Packages.gz file]

2001-01-07 Thread Goswin Brederlow
> " " == Chris Gray <[EMAIL PROTECTED]> writes:

> Brian May writes:
> "zhaoway" == zhaoway  <[EMAIL PROTECTED]> writes:
zhaoway> 1) It prevent many more packages to come into Debian, for
zhaoway> example, Linux Gazette are now not present newest issues
zhaoway> in Debian. People occasionally got fucked up by packages

Any reason why the Linux Gazette is not present anymore?

And is there a virtual package for the Linux Gazette that always
depends on the newest version?

MfG
Goswin




Re: Solving the compression dilema when rsync-ing Debian versions

2001-01-07 Thread Goswin Brederlow
> " " == Otto Wyss <[EMAIL PROTECTED]> writes:

 > It's commonly agreed that compression does prevent rsync from
 > profit of older versions of packages when synchronizing Debian
 > mirrors. All the discussion about fixing rsync to solve this,
 > even trough a deb-plugin is IMHO not the right way. Rsync's
 > task is to synchronize files without knowing what's inside.

 > So why not solve the compression problem at the root? Why not
 > try to change the compression in a way so it does produce a
 > compressed result with the same (or similar) difference rate as
 > the source?

 > As my understanding of compression goes, all have a kind of
 > lookup table at the beginning where all compression codes where
 > declared. Each time this table is created new, each time
 > slightly different than the previous one depending on the

Nope. Only a few compression programs use a table at the start of the
file. Most build the table as they go along; it saves a lot of space
not to copy the table.

gzip (I hope I remember that correctly), for example, grows its table
with every character it encodes, so when you compress a file that
contains only 0s, the table will not contain any a's, so an a can't
even be encoded.

bzip2, on the other hand, re-sorts the input in some way to get better
compression ratios. You can't re-sort the input in the same way with
different data; the compression rate would drop dramatically otherwise.

ppm, as a third example, builds a new table for every character that's
transferred and encodes the probability range of the real character in
one of the current contexts. And the contexts are based on all
previous characters. The first character will be plain text, and the
rest of the file will (most likely) differ if that char changes.

 > source. So to get similar results when compressing means using
 > the same or at least an aquivalent lookup table.  If it would
 > be possible to feed the lookup table of the previous compressed
 > file to the new compression process, an equal or at least
 > similar compression could be achieved.

 > Of course using allways the same lookup table means a deceasing
 > of the compression rate. If there is an algorithmus which
 > compares the old rate with an optimal rate, even this could be
 > solved. This means a completly different compression from time
 > to time. All depends how easy an aquivalent lookup table could
 > be created without loosing to much of the compression rate.

Knowing the structure of the data can greatly increase the compression
ratio. Also knowing the structure can greatly reduce the differences
needed to sync two files.

So why should rsync stay stupid?

MfG
Goswin




Re: tar -I incompatibility

2001-01-07 Thread Goswin Brederlow
> " " == Paul Eggert <[EMAIL PROTECTED]> writes:

>> Date: Sun, 7 Jan 2001 12:07:14 -0500 From: Michael Stone
>> <[EMAIL PROTECTED]>

>> I certainly hope that the debian version at least prevents
>> serious silent breakage by either reverting the change to -I
>> and printing a message that the option is deprecated or
>> removing the -I flag entirely.

 > Why would deprecating or removing the -I flag help prevent
 > serious silent breakage?  I would think that most people using
 > -I in the 1.13.17 sense would use it like this:

 > tar -xIf archive.tar

 > and this silently breaks in 1.13.18 only in the unlikely case
 > where "f" is a readable tar file.

% tar --version
tar (GNU tar) 1.13.18
Copyright 2000 Free Software Foundation, Inc.
This program comes with NO WARRANTY, to the extent permitted by law.
You may redistribute it under the terms of the GNU General Public License;
see the file named COPYING for details.
Written by John Gilmore and Jay Fenlason.
% tar -cIvvf bla.tar.bz2 bla
tar: bla: Cannot stat: No such file or directory
tar: Error exit delayed from previous errors
% mkdir bla
% tar -cIvvf bla.tar.bz2 bla
drwxr-xr-x mrvn/mrvn 0 2001-01-07 22:50:27 bla/
% file bla.tar.bz2  
bla.tar.bz2: GNU tar archive
% tar -tIvvf bla.tar.bz2 
drwxr-xr-x mrvn/mrvn 0 2001-01-07 22:50:27 bla/

As you see, -I is silently ignored, in violation of
/usr/share/doc/tar/NEWS.gz:

* The short name of the --bzip option has been changed to -j,
  and -I is now an alias for -T, for compatibility with Solaris tar.

That's part of the problem: people won't get any error message at the
moment. Everything looks fine until you compare the sizes, run file, or
try to bunzip2 the file manually.

As I said before, tar -I in its old usage should give one of several
errors, but doesn't. Can't remember the bug number, but it's in the BTS.

 > I'm not entirely opposed to deprecating -I for a while -- but I
 > want to know why it's helpful to do this before installing such
 > a change.

If it's deprecated, people will get a message every time they use
-I, and cron jobs will generate a mail every time they run.
Just think how annoying a daily mail is and how fast people will change
the option.
BUT nothing will break, no data will be lost.

On the other hand, just changing the meaning or deleting the option
will result in severe breakage in 3rd party software. Sometimes
without even giving a hint of the cause. You know how bad 3rd party
software can be. :)

MfG
Goswin




Re: tar -I incompatibility

2001-01-07 Thread Goswin Brederlow
 > On Mon, Jan 08, 2001 at 12:12:59AM +1100, Sam Couter wrote:
>> Goswin Brederlow <[EMAIL PROTECTED]>
>> wrote: > Just as linux-centric as the other way is
>> solaris-centric.
>> 
>> Not true. There's the way GNU tar works, then there's the way
>> every other tar on the planet works (at least with respect to
>> the -I option). GNU tar is (used to be) the odd one out. Now
>> you're saying that not behaving like the odd man out is being
>> Solaris-centric? I don't think so.

I worked and still work on several platforms, and the first thing I
usually do is make them compatible:

compile bash, make, gcc, bash (again, to correct the stupid cc bugs),
make, automake, autoconf, zsh, xemacs, tar, gzip, bzip2, qvwm.

All the normal tools from commercial unixes are proprietary to
their system. The only way to standardise those is to use the free
common source of what we have under Linux.

So I say that what Debian uses can be the default on all Unix systems.

Just my 2c.
Goswin




apt maintainers dead?

2001-01-07 Thread Goswin Brederlow
Hi,

I tried to contact the apt maintainers about rsync support for
apt-get (a proof of concept was included) but haven't got an answer
back yet.

Is the whole team on vacation? Who is actually on that list?

From the number of bugs open against apt-get I would think they are
all dead. Please prove me wrong.

MfG
Goswin




Re: RFDisscusion: Big Packages.gz and Statistics and Comparing solution

2001-01-07 Thread Goswin Brederlow
>>>>> " " == zhaoway  <[EMAIL PROTECTED]> writes:

 > [A quick reply. And thanks for discuss with me! And no need to
 > Cc: me anymore, I updated my DB info.]

 > On Sun, Jan 07, 2001 at 05:51:26PM +0100, Goswin Brederlow
 > wrote:
>> The problem is that people want to browse descriptions to find
>> a package fairly often or just run "apt-cache show package" to
>> see what a package is about. So you need a method to download
>> all descriptions.

 > The big Packages.gz is still there. No conflict between the two
 > method.  And the newest, most updated information is always on
 > freshmeat.net. ;)

>> As far as I see theres no server support needed for rsync
>> support to operate better on compressed files.

 > Um, I don't know. But doesn't RSYNC need a server side RSYNC to
 > run?  Or, can I expect a HTTP server to provide RSYNC? (Maybe I
 > am stupid, I'll read RSYNC man page, later.)

Yes, either rsyncd or rshd/sshd needs to be running. But that's
already the case.

What I meant was that the new feature to uncompress archives before
rsyncing can (hopefully) be done without any changes to existing
servers and without unpacking on the server side. All old servers
should do fine. That's what I aim to achieve.

>> If you update often, saving 1 Byte every time is worth it. If
>> you update seldomely, it doesn't realy matter that you download
>> a big Packages.gz. You would have to downlaod all the small
>> Packages.gz files also.

 > There is an approach to help this. But that is another
 > story. Later.

>> So you see, between potato and woody diff saves about 60%.
>> Also note that rsync usually performs better than cvs, since it
>> does not include the to be removed lines in the download.

 > Pretty sounding argument. My only critic on DIFF or RSYNC now
 > is just server support now. (Again, I'll read RSYNC man page
 > later. ;-)

 > The point is, can a storage server which provides merely HTTP
 > and/or FTP service do the job for apt-get?

Nope, but rsync servers already exist. Time to get people to convert
their services by pushing the users to use them.

Also think of the benefit when updating. With some extra code on the
client side (for example in apt) a pseudo-deb can be created from the
installed version and then rsynced against the new version. You
wouldn't need a local mirror and you would still save a lot of download.

Of course this all needs support for rsyncing compressed archives
uncompressed in the rsync client.
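
Roughly like this (a sketch only; a tool like dpkg-repack can rebuild a
.deb from the installed files, and the mirror path and version below
are made up):

dpkg-repack bash                  # recreate the installed version as a .deb
mv bash_*.deb bash_new.deb        # use it as the basis file
rsync --no-whole-file --stats \
    mirror::debian/pool/main/b/bash/bash_2.05-1_i386.deb bash_new.deb

Without the client-side decompression the saving would of course stay
small, for the reasons discussed above.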

MfG
Goswin




Re: tar -I incompatibility

2001-01-07 Thread Goswin Brederlow
>>>>> " " == Marcus Brinkmann <[EMAIL PROTECTED]> writes:

 > On Sun, Jan 07, 2001 at 02:05:27AM -0500, Michael Stone wrote:
 >> On Sun, Jan 07, 2001 at 04:25:43AM +0100, Marcus Brinkmann wrote:
 >> > On Sun, Jan 07, 2001 at 03:28:46AM +0100, Goswin Brederlow wrote:
 >> > > "tar -xIvvf file.tar.bz2" has been in use under linux for over
 >> > > a year by pretty much everybody. Even if the author never
 >> > > released it as stable, all linux distributions did it. I think
 >> > > that should count something.
 >> > It tells a lot about the people making the distributions at least.
 >>
 >> Before making such snide comments, take a look at the
 >> changelog.Debian entries relating to the switch from 1.13 to
 >> 1.13.x.

 > I see. Well, I don't think that Bdale did something wrong with
 > including 1.13.x. But I find the reactions to the flag change
 > shown here by some people quite inappropriate. When using
 > unreleased software, people have to expect such changes,
 > especially for non-standard extensions. It happens all the
 > time.

On anything apart from Debian I wouldn't say a word about it.

BUT on Debian tar -I is a standard and it's stable. So I start
screaming. Since the Debian maintainer made -I stable with an unstable
upstream source, it's his responsibility to watch it.

It's the author's fault for not resolving the problem for so long and
then suddenly resolving it in such a disastrous way, but it's also the
Debian maintainer's fault for not warning us and easing the transition.

Fault might be too strong a word. I just mean that there should be a
new upload asap that either reverts the -I change or tells the user
about it. Having -I silently just do something else is not an option
in my eyes.

MfG
Goswin




Re: RFDisscusion: Big Packages.gz and Statistics and Comparing solution

2001-01-07 Thread Goswin Brederlow
> " " == zhaoway  <[EMAIL PROTECTED]> writes:

 > Hi, [Sorry for the thread broken, my POP3 provider stopped.]
 > [Please Cc: me! <[EMAIL PROTECTED]>. Sorry! ;-)]

 > 1. RFDiscussion on big Packages.gz

 > 1.1. Some statistics

 > % grep-dctrl -P \
 >     -sPackage,Priority,Installed-Size,Version,Depends,Provides,Conflicts,Filename,Size,MD5sum \
 >     -r '.*' ftp.jp.debian.org_debian_dists_unstable_main_binary-i386_Packages \
 >     | gzip -9 > test.pkg.gz
 > % gzip -9 ftp.jp.debian.org_debian_dists_unstable_main_binary-i386_Packages
 > % ls -alF *.gz
 > -rw-r--r-- 1 zw zw 1157494 Jan 7 21:20 ftp.jp.debian.org_debian_dists_unstable_main_binary-i386_Packages.gz
 > -rw-r--r-- 1 zw zw  341407 Jan 7 21:23 test.pkg.gz
 > %

Ahh, what does it do? Just take out the descriptions?

 > This approach is simple and straight and almost compatible. But
 > could accpect 10K more packages come into Debian with little
 > loss. Worth consideration. IMHO.

 > Better, if `Description:' etc. could come into seperate gzipped
 > file along with the Debian package.

The problem is that people fairly often want to browse descriptions to
find a package, or just run "apt-cache show package" to see what
a package is about. So you need a method to download all descriptions.

Also, many small files compress far worse than one big file.


 > 2. Compare with DIFF and RSYNC method of APT

 > 2.1. They need server support. (More than a directory layout
 > and client tool changing.)

As far as I can see there's no server support needed for rsync support
to operate better on compressed files.

 > 2.2. If you don't update for a long time, DIFF won't
 > help. RSYNC help less.

If you update often, saving a byte every time is worth it. If you
update seldom, it doesn't really matter that you download a big
Packages.gz. You would have to download all the small Packages.gz
files too.

And after that you download 500 MB of updates. So who cares about a
2MB Packages.gz?

Also, diff and rsync do a great job even after a long time:

diff potato_Packages woody_Packages | gzip -9 | wc --bytes
 339831

% ls -l /debian/dists/woody/main/binary-i386/Packages.gz
-rw-r--r--1 mrvn mrvn   955259 Jan  6 21:03 
/debian/dists/woody/main/binary-i386/Packages.gz

So you see, between potato and woody, diff saves about 60%.
Also note that rsync usually performs better than cvs, since it does
not include the to-be-removed lines in the download.

 > 3. Additional benefits

 > Seperate changelog.Debian and `Description:' etc. out into
 > meta-info file could help users: 1) reduce the bandwidth eaten
 > 2) help their upgrade decisions easily.

A global Description.gz might benefit from the fact that the
description doesn't change with each update, but the extra work needed
for this to really work is not worth it. It would only benefit people
that do daily mirroring, where rsync would do just as well.

MfG
Goswin




Re: package pool and big Packages.gz file

2001-01-07 Thread Goswin Brederlow
>>>>> " " == Matt Zimmerman <[EMAIL PROTECTED]> writes:

 > On Sun, Jan 07, 2001 at 03:49:43PM +0100, Goswin Brederlow
 > wrote:
>> Actually the load should drop, providing the following feature
>> add ons: [...]

 > The load should drop from that induced by the current rsync
 > setup (for the mirrors), but if many, many more client start
 > using rsync (instead of FTP/HTTP), I think there will still be
 > a significant net increase in load.

 > Whether it would be enough to cause a problem is debatable, and
 > I honestly don't know either way.

When the checksums are cached there will be no CPU load caused by
rsync, since it will only transfer the file. And the checksum files
will be really small, as I said, so if some similarity is found, the
reduction in data will more than make up for the checksum download.

The only increase is the space needed to store the checksums in some
form of cache.

MfG
Goswin




Re: What to do about /etc/debian_version

2001-01-07 Thread Goswin Brederlow
> " " == Martin Keegan <[EMAIL PROTECTED]> writes:

 > Martijn van Oosterhout <[EMAIL PROTECTED]> writes:

>> Joey Hess wrote: > I think /etc/mtab is on its way out. A 2.4.x
>> kernel with devfs has a > /proc/mounts that actually has a
>> proper line for the root filesystem.  > Linking the two files
>> would probably actually work on such a system > without
>> breakage.
>> 
>> Does 2.4 now also include the information on which loop devices
>> are related to which filesystems? AFAIK that's the only thing
>> that went strange after linking /proc/mounts and /etc/mtab;
>> loop devices not being freed after unmounting.

No, not that I saw a change for it. How could it?  Currently, when
mounting a loop device, mount writes the filename that gets attached
to the loop device into /etc/mtab and then mounts /dev/loopX. Because
/etc/mtab is read-only, mount can't write the filename and thus doesn't
know what to detach when unmounting.

mount can't know the difference between

  mount -oloop file path

and

  losetup /dev/loop0 file
  mount /dev/loop0 path

Maybe the mount or loopback interface could be changed to record that
umount has to free the loop device.

 > When doing this I had a problem with the mount programme
 > insisting on explicitly checking whether /etc/mtab were a
 > symlink and explicitly breaking if it were. Why is this?

Never had that problem.

MfG
Goswin




Re: package pool and big Packages.gz file

2001-01-07 Thread Goswin Brederlow
> " " == Sam Vilain <[EMAIL PROTECTED]> writes:

 > On Fri, 5 Jan 2001 09:33:05 -0700 (MST) Jason Gunthorpe
 > <[EMAIL PROTECTED]> wrote:

>> > If that suits your needs, feel free to write a bugreport on
>> > apt about this.
>>
>> Yes, I enjoy closing such bug reports with a terse response.
>> Hint: Read the bug page for APT to discover why!

>> From bug report #76118:

 > No. Debian can not support the use of rsync for anything other
 > than mirroring, APT will never support it.

 > Why?  Because if everyone used rsync, the loads on the servers
 > that supported rsync would be too high?  Or something else?  --
 > Sam Vilain, [EMAIL PROTECTED] WWW: http://sam.vilain.net/ GPG
 > public key: http://sam.vilain.net/sam.asc

Actually the load should drop, providing the following feature
add-ons:

1. cached checksums and pulling instead of pushing
2. client-side unpacking of compressed streams

That way the rsync server would first serve the checksum file from
cache (being 200-1000 times smaller than the real file) and then
just the blocks the client asks for. So if 1% of the file being
rsynced matches it breaks even, and everything above that saves
bandwidth.

The current mode of operation of rsync works in reverse, so all
the computation is done on the server every time, which of course is a
heavy load on the server.

I hope both features will work without changing the server, but if not,
we will have to wait till servers catch up with the feature.

MfG
Goswin




Re: tar -I incompatibility

2001-01-06 Thread Goswin Brederlow
>>>>> " " == Sam Couter <[EMAIL PROTECTED]> writes:

 > Goswin Brederlow <[EMAIL PROTECTED]>
 > wrote:
>> PS: Why not change the Solaris version to be compatible with
>> the widely used linux version? I'm sure there are more people
>> and tools out there for linux using -I then there are for
>> solaris.

 > This is an incredibly Linux-centric point of view. You sound
 > worse than the BSD bigots.

Just as Linux-centric as the other way is Solaris-centric.

Letting an option die out is bad. Changing an option name is
evil. Changing the meaning of an option on the fly is pure evil[tm].

I think Debian should patch -I back to the old meaning. If
compatibility with Solaris tar is wanted, then let -I print a warning
that it's deprecated. In a few months give an error, and maybe in a
year adopt a new meaning for -I (if that's really wanted).
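
The warning stage could be as cheap as a wrapper script (a sketch only:
tar.real is a made-up name for the real binary, and the option match is
crude -- it misses the old bundled "tar xIf" spelling):

#!/bin/sh
# transitional /usr/bin/tar: warn about the old GNU meaning of -I
for arg in "$@"; do
    case "$arg" in
    --*)  ;;    # long options are fine
    -*I*) echo "tar: warning: -I is deprecated, use -j for bzip2" >&2
          break ;;
    esac
done
exec /usr/bin/tar.real "$@"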

 > There are many, many, many different unices that are *not*
 > Linux. You can't hope to change them all to be Just Like Linux
 > (tm). You'll be lucky if any of them follow Linux behaviour,
 > rather than the other way around.

I don't want to change them but I also don't want to be changed by
them in ways that are plain stupid. And the -I just changing meaning
without any warning is plain stupid.

 > Hint: Adopt some cross-platform habits like: "bzip2 -dc
 > foo.tar.bz2 | tar xf -"

 > Not only will you then become more immune to changes in
 > behaviour that was non-standard to begin with, you'll also find
 > adjustment to other systems a lot easier.


I like systems that don't change on a day to day basis. I don't want
"ls *" to do "rm *" tomorrow just because some other unix does it and
the author feels like it.


"tar -xIvvf file.tar.bz2" has been in use under linux for over a year
by pretty much everybody. Even if the author never released it as
stable, all linux distributions did it. I think that should count
something. Enough to at least ease the transition.

MfG
Goswin




Re: What to do about /etc/mtab

2001-01-06 Thread Goswin Brederlow
> " " == s Lichtmaier  writes:

>> > [EMAIL PROTECTED]:/tmp> mount -o loop foo 1
>> > Why dont we just patch mount to use /var/run/mtab?
>> > I dont know about any other program which modifies it.
>>
>> because /var is not always on the same partition as /

 >  /etc/mtab shouldnt exist, all the information should be
 > handled by the kernel itself. But for the time being, I think I
 > have a better solution than the current one:

 >  Allocate a shared memory area. SHM areas are kept in memory
 > like small ramdisk. /etc/mtab is rather small, never longer
 > than a 4k page, besides the memory is swappable.

 >  And theres an advantage: With a SHM capable mount program
 > there would be no problem when mounting root read only.

umount /proc
cp /etc/mtab /proc/mounts
mount /proc
rm /etc/mtab
ln -s /proc/mounts /etc/mtab

Works fine even with 2.4 + devfs. Only find seems to still have a
slight bug there.

MfG
Goswin




Re: What do you wish for in an package manager? Here you are!

2001-01-06 Thread Goswin Brederlow
> " " == Thorsten Wilmer <[EMAIL PROTECTED]> writes:

 > Hello Petr Čech wrote:
>> Adam Lazur wrote:
>>> The ability to install more than one version of a package
>>> simultaneously.
>>  Hmm. SO you install bash 2.04-1 and bash 2.02-3. Now what will
>> be /bin/bash 2.04 or 2.02 version? You will divert both of them
>> and symlink it to the old name - maybe, but but how will you
>> know, to what name it diverts to use it?
>> 
>> Give me please 3 sane examples, why you need this. And no,
>> shared libraries are NOT an excuse for this.

The only usable way is to have /bin/bash always point to a stable
version.

Apart from that, anyone who cares which version is used must use the
full path to the binary or a versioned name, like /bin/bash-2.04-1.

I would like binaries to be compiled to reside in versioned
directories, but I also see a lot of problems with it.
Especially with /etc, /usr, /usr/share and so on: every
directory would have to have a subdir for every package that has files
there. What a chaos.

Of course in special cases you can install all packages to
/usr/share/software/package-version/ and symlink, but that's not a
general solution to the problem. For stuff like /bin/sh a network
filesystem doesn't work.

MfG
Goswin




Re: [devfs users]: evaluate a patch please

2001-01-06 Thread Goswin Brederlow
> " " == Martin Bialasinski <[EMAIL PROTECTED]> writes:

 > Hi, there is a bug in the mc package, that most likely is
 > related to devfs. I can't reproduce it, nor does it seem to be
 > common.

 > http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=57557&repeatmerged=yes

 > mc hangs occasionally on starup on the VC.

Oh, that's the reason why it hangs. I wondered about that.

 > There is a patch at the bottom of the report.

 > Could you tell me, if it is formally OK and if it fixes the
 > problem for you, if you can reproduce the bug?

Gotta test that.

I will be back.
Goswin




Re: tar -I incompatibility

2001-01-06 Thread Goswin Brederlow
>>>>> " " == Scott Ellis <[EMAIL PROTECTED]> writes:

>> Goswin Brederlow wrote:
>> > the Author of tar changed the --bzip option again. This time its
>> > even worse than the last time, since -I is still a valid option
>> > but with a totally different meaning.
>> >
>> > This totally changes the behaviour of tar and I would consider
>> > that a critical bug, since backup software does break horribly
>> > with the new semantic.
>>
>> Yes, I think that this should definetely be changed back. The
>> first time I encountered this problem, I thought that the
>> tar.bz2 archive was broken from the error message tar
>> reported. (Not a valid tar archive or so.) This change is
>> confusing and unreasonable IMHO.

 > Of course the -I option to tar was completely non-standard.
 > The changelog explains why it changed, to be consistant with
 > Solaris tar.  I'd prefer portability and consistancy any day,
 > it shouldn't take that long to change any custom scripts you
 > have.  I always use long options for nonstandard commands when
 > building scripts anyway :)

The problem is that -I works although it should completely break
everything. The only difference is that the tar file won't be
compressed anymore.

No warning, no error, and no one reads changelogs unless something
breaks. (Well, most people don't.)

"mkdir bla"
"tar -cIvvf bla.tar.bz2 bla" should give:

"bla.tar.bz2: No such file" Since -I reads the files to be included
from a file.

"bla: Failed to open file, bla is a directory"  Since tar should try
to create a tra file named bla, which is a directoy.

or

"tar: cowerdly refusing to create empty archive"Since there
are no file given as parameters and none read from
bla.tar.bz2.

So where are the errors?

MfG
Goswin

PS: Why not change the Solaris version to be compatible with the widely
used Linux version? I'm sure there are more people and tools out there
for Linux using -I than there are for Solaris.




Re: diskless package and devfs (Linux 2.4.x)

2001-01-06 Thread Goswin Brederlow
> " " == Brian May <[EMAIL PROTECTED]> writes:

 > Hello, would anyone object if I made the diskless package
 > depend on devfs support from 2.4.x in future versions?

Please do.

MfG
Goswin (a devfs fan).




Re: Upcoming Events in Germany

2001-01-06 Thread Goswin Brederlow
> " " == Martin Schulze <[EMAIL PROTECTED]> writes:

 > May 19-20 Berliner Linux Infotage
 > http://www.belug.org/infotage/

Interesting. Gotta check my calendar for a visit to my parents in
Berlin during that time.

 > July 5-8 LinuxTag 2001, Stuttgart http://www.linuxtag.org/
 > http://www.infodrom.ffis.de/Debian/events/LinuxTag2001/

Already planned to be there.

MfG
Goswin




Re: rsync mirror script for pools - first pre alpha release

2001-01-06 Thread Goswin Brederlow
> " " == esoR ocsirF <[EMAIL PROTECTED]> writes:

 > I would like to set up our local partial mirror to run without
 > attendance through multiple releases. If I hard code the
 > release candidate name into the mirror script, wont it just
 > break when testing goes stable?

The problem is that those are links, and rsync can either keep links
or follow them.

At the moment there are a lot of links in potato/woody, so I can't
follow links, so no mirroring of stable/unstable/testing.

I could check what stable/testing/unstable is and then mirror what
they point to, but who cares. In a year or so an update of
debian-mirror will ask you whether you want to start mirroring the new
unstable and whether to drop the old stable.

MfG
Goswin




Re: package pool and big Packages.gz file

2001-01-05 Thread Goswin Brederlow
>>>>> " " == Junichi Uekawa <[EMAIL PROTECTED]> writes:

 > In 05 Jan 2001 19:51:08 +0100 Goswin Brederlow
 > <[EMAIL PROTECTED]> cum veritate
 > scripsit : Hello,

>> I'm currently discussing some changes to the rsync client with
>> some people from the rsync ML which would uncompress compressed
>> data on the client side (no changes to the server) and rsync
>> those. Sounds like not improving anything, but when reading the
>> full description on this it actually does.
>> 
>> Before that, rsyncing new debs with old ones hardly ever saves
>> anything. Where it helps is with big packages like xfree, where
>> several packages are identical between releases.

 > No offence, but wouldn't it be a tad difficult to play around
 > with it, since deb packages are not just gzipped archives, but
 > ar archive containing gzipped tar archives?

Yes and no.

The problem is that deb files are special ar archives, so you can't
just download the member files and ar them together.

One way would be to download the files in the ar, ar them together and
rsync again. Since ar does not change the data in it, the deb has the
same data just at different places, and rsync handles that well.

This would be possible, but would require server changes.

The trick is to know a bit about ar, but not too much. Just rsync the
header of the ar file up to the first real file in it, then rsync
that recursively, then a bit more ar file data and another file and so
on. Knowing where the subfiles start and how long they are is enough.
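
Those boundaries are trivial to get at; something like (the file name
is made up, the sizes are illustrative only):

ar tv foo_1.0-1_i386.deb
# rw-r--r-- 0/0      4 Jan  5 12:00 2001 debian-binary
# rw-r--r-- 0/0   1523 Jan  5 12:00 2001 control.tar.gz
# rw-r--r-- 0/0 501982 Jan  5 12:00 2001 data.tar.gz

Each member is preceded by a fixed 60-byte header, so the offsets
follow directly from the sizes.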

The question will be how much intelligence to teach rsync. I like
rsync stupid but still intelligent enough to do the job.

It's pretty tricky, so it will be some time before anything in that
direction is usable.

MfG
Goswin




Re: package pool and big Packages.gz file

2001-01-05 Thread Goswin Brederlow
>>>>> " " == Jason Gunthorpe <[EMAIL PROTECTED]> writes:

 > On 5 Jan 2001, Goswin Brederlow wrote:

>> If that suits your needs, feel free to write a bugreport on apt
>> about this.

 > Yes, I enjoy closing such bug reports with a terse response.

 > Hint: Read the bug page for APT to discover why!

 > Jason

I couldn't find any existing bug report concerning rsync support for
apt-get in the long list of bugs.

So why would you close such a wishlist bug report?
And why with a terse response?

MfG
Goswin




Re: package pool and big Packages.gz file

2001-01-05 Thread Goswin Brederlow
>>>>> " " == Sami Haahtinen <[EMAIL PROTECTED]> writes:

 > On Fri, Jan 05, 2001 at 03:05:03AM +0100, Goswin Brederlow
 > wrote:
>> Whats the problem with a big Packages file?
>> 
>> If you don't want to download it again and again just because
>> of small changes I have a better solution for you:
>> 
>> rsync
>> 
>> apt-get update could rsync all Packages files (yes, not the .gz
>> once) and thereby download only changed parts. On uncompressed
>> files rsync is very effective and the changes can be compressed
>> for the actual transfer. So on upload you will pratically get a
>> diff.gz to your old Packages file.


 > this would bring us to apt renaming the old deb (if there is
 > one) to the name of the new package and rsyncing those. And we
 > would save some time once again...

That's what the debian-mirror script does (about half of the script is
just for that). It also uses old tar.gz, orig.tar.gz, diff.gz
and dsc files.
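
The core of the renaming trick is tiny (a sketch with a made-up package
name; the real script does more bookkeeping):

#!/bin/sh
# Seed the new filename with the newest version already on disk; rsync
# then only transfers the blocks that differ between the versions.
new=foo_1.2_i386.deb
old=`ls foo_*_i386.deb 2>/dev/null | tail -1`  # crude "latest version" pick
[ -n "$old" ] && [ ! -e "$new" ] && cp "$old" "$new"
rsync host::debian/pool/main/f/foo/$new $new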

 > Or, can rsync sync binary files?

Of course, but forget it with compressed data.

 > hmm.. this sounds like something worth implementing..

I'm currently discussing some changes to the rsync client with some
people from the rsync ML which would uncompress compressed data on the
client side (no changes to the server) and rsync those. Sounds like it
doesn't improve anything, but when reading the full description of this
it actually does.

Without that, rsyncing new debs against old ones hardly ever saves
anything. Where it helps is with big packages like xfree, where several
packages are identical between releases.

MfG
Goswin




tar -I incompatibility

2001-01-05 Thread Goswin Brederlow
Hi,

the author of tar has changed the bzip2 option again. This time it's
even worse than the last time, since -I is still a valid option but
with a totally different meaning.

This totally changes the behaviour of tar, and I would consider that a
critical bug, since backup software breaks horribly with the new
semantics.
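
Until that settles down, the only safe spelling is the long option
(assuming GNU tar):

# immune to the one-letter flag reshuffles
tar --use-compress-program=bzip2 -cf backup.tar.bz2 /etc
tar --use-compress-program=bzip2 -xf backup.tar.bz2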

Any good reason not to whack the author with a 50 pound unix manual
and revert the changes?

MfG
Goswin




Potato depopularisation, wired links

2001-01-05 Thread Goswin Brederlow
Hi,

it seems that more and more packages disappear from potato and are
replaced by links into the pools. And it's not new packages that are
becoming stable, but old ones getting moved.

Did I miss something there?

Also a link is placed in /debian/dists/potato/main/source for each
package that's now in the pools. Directly in source, not in source/x11
or similar.

First, why the links at all? And then, why not sorted into sections?

MfG
Goswin




Re: Problem with start-stop-daemon and pidfile

2001-01-04 Thread Goswin Brederlow
>>>>> " " == Matt Zimmerman <[EMAIL PROTECTED]> writes:

 > On Wed, Jan 03, 2001 at 02:10:19AM +0100, Goswin Brederlow
 > wrote:
>> touch /var/run/debian-mirror.pid
>> chown mirror.nogroup /var/run/debian-mirror.pid
>> 
>> touch /var/log/debian-mirror.log
>> chown mirror.nogroup /var/log/debian-mirror.log

 > Please don't do this.  nogroup should not be the group of any
 > files, just as nobody should not be the owner of any files.

Oops, yes.

What should I use? root?

MfG
Goswin




Re: useful tools for packages--any comprehensive list?

2001-01-04 Thread Goswin Brederlow
> " " == Mikael Hedin <[EMAIL PROTECTED]> writes:

 > Hi,
 > from time to time people mention some nifty tools (mostly
 > scripts?) to search for info about packages and similar.  Eg the
 > citation below.  Is there some list/collection/etc of such
 > utilities?  Or for other useful things like `apt-cache search
 > <pattern>', which can be found in the manuals but is a bit
 > tricky to find?

 > I couldn't find an easy way to find the biggest packages
 > installed so I hacked a script, but I suppose lots of these
 > things have already been done, I just can't find them.

console-apt: press s (sort), then s (sort by size). [Well, the
interface has changed a bit, but you can still sort by installed size
or similar.]
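
For the "biggest installed packages" question, a quick hack over the
dpkg status file also works (Installed-Size is in kilobytes; note this
also lists removed-but-not-purged entries):

awk '/^Package:/ { p = $2 }
     /^Installed-Size:/ { print $2, p }' /var/lib/dpkg/status \
    | sort -n | tail -20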

MfG
Goswin




Re: package pool and big Packages.gz file

2001-01-04 Thread Goswin Brederlow
> " " == zhaoway  <[EMAIL PROTECTED]> writes:

 > hi, [i'm not sure if this has been resolved, lart me if you
 > like.]

 > my proposal to resolve big Packages.gz is through package pool
 > system.


What's the problem with a big Packages file?

If you don't want to download it again and again just because of small
changes, I have a better solution for you:

rsync

apt-get update could rsync all Packages files (yes, not the .gz ones)
and thereby download only changed parts. On uncompressed files rsync
is very effective, and the changes can be compressed for the actual
transfer. So on update you will practically get a diff.gz to your old
Packages file.
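
Roughly like this (a sketch; the mirror module and list location are
just examples):

#!/bin/sh
# rsync the uncompressed Packages file: the rolling checksum transfers
# only the changed blocks, and -z compresses them on the wire.
rsync -tz ftp.de.debian.org::debian/dists/sid/main/binary-i386/Packages \
    /var/state/apt/lists/Packages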

If that suits your needs, feel free to file a bug report on apt about
this.

MfG
Goswin




Re: maybe ITP rsync mirror script for pools

2001-01-04 Thread Goswin Brederlow
>>>>> " " == Marco d'Itri <[EMAIL PROTECTED]> writes:

 > On Jan 02, Goswin Brederlow
 > <[EMAIL PROTECTED]> wrote:
>> So would there be intrest in a deb of the script coming with a
>> debconf interface for configuration, cronjob or ip-up support
>> and whatever else is needed to keep an uptodate mirror.
 > Please don't encourage private mirrors!

 > I have been the administrator of ftp.it.debian.org since a long
 > time, and I notice there are many sites doing nightly mirrors
 > for their own use.  Mirroring is free for them because it's
 > done at night when offices are empty and there is nobody
 > downloading porn, but the aggregated traffic is significant for
 > me!  They could save bandwidth and disk space just by using a
 > correctly configured squid cache.

 > -- ciao, Marco

First, that's not my problem, sorry. I just provide the means to do it
efficiently.

People will mirror anyway, and my script is for any partial mirror
(which might be public or private, for a company or for people burning
CDs). I use it to cut the download time down heavily (comfort) and to
actually reduce traffic (because my modem is too slow to keep an ftp
mirror up-to-date).

If you don't want people to do nightly mirrors, tell them so, or deny
the service. Not providing a script for people who need one will only
make them write their own, probably less efficient, scripts.

MfG
Goswin




rsync mirror script for pools - first pre alpha release

2001-01-03 Thread Goswin Brederlow
>>>>> " " == Goswin Brederlow <[EMAIL PROTECTED]> writes:

 > Hi, I've been asked about my rsync mirror script, which is an
 > extension of Joey Hess's one, on irc and here several times.

 > So, would there be interest in a deb of the script coming with a
 > debconf interface for configuration, cronjob or ip-up support
 > and whatever else is needed to keep an up-to-date mirror?

 > Or do you all prefer to do it your own way? I don't want to
 > package something just for 2 or 3 people.

 > MfG Goswin

 > PS: Just send me a "yes, I want it" privately if you want such a
 > package.

So here it is. You still need to configure /etc/debian-mirror.conf
manually at the moment, but once configured it works fine for me.

On install it will ask you via debconf about running debian-mirror from
cron.daily, and all the other options will be asked there as well in
the future.

It will create a user mirror when installed and runs as mirror:nogroup
when started from /etc/cron.daily/debian-mirror.

The package is called debian-mirror and is available from:

deb ftp://rut.informatik.uni-tuebingen.de unstable main
deb-src ftp://rut.informatik.uni-tuebingen.de unstable main

Suggestions for the script are welcome, especially: how do I make
debconf pop up a checklist like this?

+----------------------------------------+
| What distributions should be mirrored? |
|                                        |
| [ ] potato                             |
| [ ] woody                              |
| [ ] sid                                |
|                                        |
|                 < OK >                 |
+----------------------------------------+
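
From a first look at the debconf docs, a multiselect template seems to
be the way; something like this, untested:

In debian/templates:

Template: debian-mirror/dists
Type: multiselect
Choices: potato, woody, sid
Description: What distributions should be mirrored?

And in the debconf config script:

#!/bin/sh
. /usr/share/debconf/confmodule
# || true: db_input may fail when the question was already answered
db_input medium debian-mirror/dists || true
db_go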

Happy testing,
Goswin




Re: Problem with start-stop-daemon and pidfile

2001-01-02 Thread Goswin Brederlow
>>>>> " " == Adam Heath <[EMAIL PROTECTED]> writes:

 > On 3 Jan 2001, Goswin Brederlow wrote:
>> Hi,
>> 
>> I want to use start-stop-daemon to start the debian-mirror
>> script if it's not already running. I don't trust the script, so
>> I run it as user mirror:nogroup.
>> 
>> But then start-stop-daemon can't write a pidfile to /var/run.
>> 
>> What's the right[tm] way to do this?
>> 
>> root:~% start-stop-daemon -S -m -c mirror:nogroup -u mirror \
>>     -p /var/run/debian-mirror.pid -x /usr/sbin/debian-mirror
>> start-stop-daemon: Unable to open pidfile
>> `/var/run/debian-mirror.pid' for writing: Permission denied

 > Touch the file first, then chown it, before calling s-s-d.


Ok, that helps somewhat. But now start-stop-daemon always starts the
script.

--
#!/bin/sh
#
# debian-mirror cron script
#
# This will start the debian-mirror script to update the local debian
# mirror, unless it's still running.

set -e

test -x /usr/sbin/debian-mirror || exit 0

touch /var/run/debian-mirror.pid
chown mirror.nogroup /var/run/debian-mirror.pid

touch /var/log/debian-mirror.log
chown mirror.nogroup /var/log/debian-mirror.log

start-stop-daemon -S -m -c mirror:nogroup -u mirror \
    -p /var/run/debian-mirror.pid -x /usr/sbin/debian-mirror \
    >>/var/log/debian-mirror.log &
--

That's how I start the script now, and "ps aux" shows:

mirror   20123  0.5  0.4  2076 1044 pts/3S02:07   0:00 sh -e 
/usr/sbin/debian-mirror
mirror   20125  0.2  0.2  1516  640 pts/3S02:07   0:00 rsync -rlpt 
--partial -v --progress --exclude Packages --delete ftp.de.debian.org 
:debian/dists/sid/Contents-i386.gz /mnt/raid/rsync-mirror/debian/dists/sid/

and cat /var/run/debian-mirror.pid:
20123

But running the script again starts a new instance:

mirror   20123  0.0  0.4  2076 1044 pts/3S02:07   0:00 sh -e 
/usr/sbin/debian-mirror
mirror   20135  0.1  0.4  2076 1044 pts/3S02:07   0:00 sh -e 
/usr/sbin/debian-mirror
mirror   20137  0.0  0.2  1516  696 pts/3S02:07   0:00 rsync -rlpt 
--partial -v --progress --exclude Packages --delete ftp.de.debian.org 
:debian/dists/sid/Contents-i386.gz /mnt/raid/rsync-mirror/debian/dists/sid/
mirror   20143  0.0  0.2  1516  668 pts/3S02:08   0:00 rsync -rlpt 
--partial -v --progress --exclude Packages --delete ftp.de.debian.org 
:debian-non-US/dists/sid/non-US/Contents-i386.gz 
/mnt/raid/rsync-mirror/debian/non-US/dists/sid/non-US/

and cat /var/run/debian-mirror.pid:
20135



So what am I doing wrong there?
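
Best guess so far: the pid in the pidfile belongs to "sh -e
/usr/sbin/debian-mirror", so its executable is /bin/sh and the
-x /usr/sbin/debian-mirror check can never match the running copy;
start-stop-daemon then concludes nothing is running. Matching on the
pidfile and user only, and using -a just to say what to start, might
do it (untested):

# match by pidfile + user; -x fails for scripts because the kernel
# reports the interpreter (/bin/sh) as the executable, not the script
start-stop-daemon -S -m -c mirror:nogroup -u mirror \
    -p /var/run/debian-mirror.pid -a /usr/sbin/debian-mirror \
    >>/var/log/debian-mirror.log &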

MfG
Goswin




Problem with start-stop-daemon and pidfile

2001-01-02 Thread Goswin Brederlow
Hi,

I want to use start-stop-daemon to start the debian-mirror script if
it's not already running. I don't trust the script, so I run it as user
mirror:nogroup.

But then start-stop-daemon can't write a pidfile to /var/run.

What's the right[tm] way to do this?

root:~% start-stop-daemon -S -m -c mirror:nogroup -u mirror \
    -p /var/run/debian-mirror.pid -x /usr/sbin/debian-mirror
start-stop-daemon: Unable to open pidfile `/var/run/debian-mirror.pid'
for writing: Permission denied

May the Source be with you.
Goswin




Re: autodetecting MBR location

2001-01-02 Thread Goswin Brederlow
> " " == Tollef Fog Heen <[EMAIL PROTECTED]> writes:

 > * Russell Coker
 > | My lilo configuration scripts need to be able to infer the
 > | correct location for the MBR.  I am currently using the
 > | following algorithm: take root fs device from /etc/fstab and
 > | do the following:
 > | s/[0-9]*//
 > | s/part$/disc/

 > What is the use of the first s/?  Unless your first letter is a
 > digit, it will just remove the zero-width string '' between the
 > first / and the beginning of the string.

 > A better solution will probably be to

 > s/[0-9]$//

 > which will remove 5 from /dev/hda5.

You forgot /dev/hda17, which would become /dev/hda1 with your syntax.
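
Anchoring a greedy digit run at the end of the string handles both:

echo /dev/hda5  | sed 's/[0-9]*$//'    # -> /dev/hda
echo /dev/hda17 | sed 's/[0-9]*$//'    # -> /dev/hda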

MfG
Goswin




maybe ITP rsync mirror script for pools

2001-01-02 Thread Goswin Brederlow
Hi,

I've been asked about my rsync mirror script, which is an extension
of Joey Hess's one, on irc and here several times.

So, would there be interest in a deb of the script coming with a
debconf interface for configuration, cronjob or ip-up support and
whatever else is needed to keep an up-to-date mirror?

Or do you all prefer to do it your own way? I don't want to package
something just for 2 or 3 people.

MfG
Goswin

PS: Just send me a "yes, I want it" privately if you want such a
package.




Re: bugs + rant + constructive criticism (long)

2001-01-02 Thread Goswin Brederlow
> " " == Erik Hollensbe <[EMAIL PROTECTED]> writes:

 > Some packages refuse to install, and of course, break apt in
 > the process.  Right now, I'm *hopefully* going to be able to
 > repair a totally hosed server that failed an apt-get because
 > MAN AND GROFF failed to install properly, ending the upgrade
 > process and therefore stopping the install of all the
 > perl/debian-perl packages except the binary, rendering apt
 > practically useless.

Try to configure the unpacked packages with "dpkg --configure
--pending". That helps a lot most of the time. Apart from that, have a
look at what gets updated. If you update 200 packages of unstable in
one go you will kill your system with 99% certainty. Be a bit selective
and do "apt-get install <package>" for the major components like libc,
perl, apt, dpkg before updating all the other stuff.

I know that should not be necessary, but with unstable being unstable,
I found that a good way to reduce the likelihood of unnecessary
packages breaking vital ones.
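
As a recipe, the cautious order is something like this (package names
as examples):

dpkg --configure --pending             # finish whatever is unpacked
apt-get install libc6 perl apt dpkg    # nail down the core first
apt-get -u dist-upgrade                # -u lists everything before acting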

 > No doubt the failure of man and groff has to do with the
 > problem that i've been having with many other packages, which I
 > will detail below.

 > Please, please, please, please... Checking your shell scripts
 > for SYNTAX ERRORS is not a bad idea before you submit it to the
 > package repository!  You have no idea how many times, that I
 > have helped people in #debian on OPN fix shell script errors
 > for packages like mysql-server, which, could have easily
 > rendered a semi-production system completely dead (hopefully
 > they compile from source, but that's not the point, is it?)
 > simply because someone forgot a bracket or used the wrong 'set'
 > parameters in their script.

 > Other issues with apt in general - there is no OBVIOUS way
 > (short of reading the APT/DPKG perl classes) to force certain
 > flags.

RTFM

 > For instance - install package 'realplayer', then, upgrade your
 > copy of xfree86-server or xfree86-common, and watch them fail
 > as it tries to write to a file in /etc/X11. I don't think I
 > need to go into detail about how much stuff like this pisses
 > off the average user. rpm anyone? (no, apt-get -f install does
 > not work, so don't even bother)

Did you file a bug report?

 > And why are packages being REMOVED (lib-pg-perl for example)
 > when I dist upgrade?

RTFM, that's what dist-upgrade is for. Probably a conflict of some
updated package.

 > apt-get and its kin need more simple getopt-style flags that
 > allow overriding of certain things, mainly conflicts. Also, an
 > option to actually view what's being upgraded before you
 > download 250 packages that are only going to break your system
 > would be nice as well.

RTFM:

apt-get -u dist-upgrade

Also do an "apt-get -u upgrade" first. That won't change which packages
are installed; it only upgrades what's possible.

 > I dunno - I was using debian back when hamm was released, and I
 > have never seen such an utter mess of incompatibilities and
 > stupid human error even in the worst mess of unstable upgrades
 > (which happens, and is understandable). Almost all of this is
 > due to a significant lack of adequate testing by package
 > maintainers.

You are the tester; keep testing and FILE BUG REPORTS.

Altogether I must say that unstable has become better and better. In
the last 3 years I never had to reinstall stable after an unstable
update. In the last year I didn't need a rescue disk after an unstable
update. In the last month I didn't even have an error on update (but I
haven't updated for the last 3 weeks, so that might explain it).

MfG
Goswin




Re: finishing up the /usr/share/doc transition

2001-01-01 Thread Goswin Brederlow
>>>>> " " == Joey Hess <[EMAIL PROTECTED]> writes:

 > Goswin Brederlow wrote:
>> What is the reason for linking /usr/doc to /usr/share/doc (or
>> share/doc)?

 > So that packages that are not policy compliant and contain
 > files only in /usr/doc still end up installing them in
 > /usr/share/doc.

So bugs won't be noticed. Maybe a simple grep in the Contents files
would be enough to find all such packages.
Does lintian check for /usr/[share/]doc?

/debian/dists/woody% zgrep "usr/doc" Contents-i386.gz \
  | while read FILE PACKAGE; do echo $PACKAGE; done | sort -u | wc
748 748   12849

Seems to be a lot of packages still using /usr/doc.

>> Maybe I have architecture-dependent documentation that should
>> not be in share.

 > Er. Well policy does not allow for this at all. If you do
 > actually have such a thing (it seems unlikely), perhaps you
 > should bring it up on the policy list and ask for a location to
 > put it.

I don't have any, and I don't think anyone can make a good case for
any. What reason could there be that I can't read some i386-specific
documentation on an alpha and use it, e.g., in plex or bochs?
The only exception would be documentation in an executable form, which
is a) evil and b) should be in /usr/bin.

MfG
Goswin




Re: finishing up the /usr/share/doc transition

2001-01-01 Thread Goswin Brederlow
> " " == Joey Hess <[EMAIL PROTECTED]> writes:

 > So it will need to:

 > 1. Remove all symlinks in /usr/doc that correspond to symlinks
 >or directories with the same names in /usr/share/doc
 > 2. If there are any directories with the same names in /usr/doc
 >and /usr/share/doc, merge them. (And probably whine about it,
 >since that's a bug.)
 > 3. Move any remaining directories and symlinks that are in
 >/usr/doc to /usr/share/doc
 > 4. Move any files in /usr/doc to /usr/share/doc (shouldn't be
 >necessary, but just in case).
 > 5. Remove /usr/doc
 > 6. Link /usr/doc to /usr/share/doc

What is the reason for linking /usr/doc to /usr/share/doc (or
share/doc)?

Maybe I have architecture-dependent documentation that should not be
in share.

This has probably been answered a thousand times, but please, just once
more for me.

MfG
Goswin

PS: and don't say it's so that users looking in /usr/doc find the docs
in /usr/share/doc; users should adapt. :)




rsync'ing pools (Was: Re: DEBIAN IS LOOSING PACKAGES AND NOBODY CARES!!!)

2001-01-01 Thread Goswin Brederlow
>>>>> " " == Tinguaro Barreno Delgado <[EMAIL PROTECTED]> writes:

 > Hello again.

 > On Sun, Dec 31, 2000 at 02:22:45PM +, Miquel van
 > Smoorenburg wrote:
>>  Yes. The structure of the archive has changed because of
>> 'package pools'.  You need to mirror 'pool' as well.
>> 
>> Also, "woody" is no longer "unstable". "sid" is. "woody" is
>> "testing".
>> 
>> Mike.
>> 

 > Ok. Thanks to Peter Palfrader too. Then, there is a more
 > complicated issue for those who have a partial mirror (only i386
 > for me), but I think that is possible with rsync options.

There was a script posted here to do partial rsync mirrors.

I used that script and added several features to it. What's missing is
support for the debian-installer in sid, but I'm working on that.

Changes:
- multiple architectures
- keep links from woody -> potato
- mirror binary-all
- mirror US and non-US pools
- use last version as template for new files
- mirror disks

People interested in only one arch and only woody/sid should remove
binary-all and should resolve the links.

Joey, can you put this where the original came from, or next to the
original script? Any changes to the script from your side?

So here's the script for all who care:

----------
#!/bin/sh -e
# Anon rsync partial mirror of Debian with package pool support.
# Copyright 1999, 2000 by Joey Hess <[EMAIL PROTECTED]>, GPL'd.
# Add ons by Goswin Brederlow <[EMAIL PROTECTED]>

# Update potato/woody files and Packages.gz, or use the old ones? If
# you already have new enough ones, say yes. This is for cases where
# you restart a scan after the modem died.
# No is the safe answer here, but it wastes bandwidth when resuming.
HAVE_PACKAGE_FILES=no

# Should a Contents file be kept updated? Saying no won't delete old
# Contents files, so when resuming you might want to say no here
# temporarily.
CONTENTS=yes

# Flags to pass to rsync. More can be specified on the command line.
# These flags are always passed to rsync:
FLAGS="$@ -rlpt --partial -v --progress"
# These flags are not passed in when we are getting files from pools.
# In particular, --delete is a horrid idea at that point, but good here.
FLAGS_NOPOOL="$FLAGS --exclude Packages --delete"
# And these flags are passed in only when we are getting files from pools.
# Remember, do _not_ include --delete.
FLAGS_POOL="$FLAGS"
# The host to connect to. Currently must carry both non-us and main
# and support anon rsync, which limits the options somewhat.
HOST=ftp.de.debian.org
# Where to put the mirror (absolute path, please):
DEST=/mnt/raid/rsync-mirror/debian
# The distribution to mirror:
DISTS="sid potato woody"
# Architecture to mirror:
ARCHS="i386 alpha m68k"
# Should source be mirrored too?
SOURCE=yes
# The sections to mirror (main, non-free, etc):
SECTIONS="main contrib non-free"
# Should symlinks be generated to every deb, in an "all" directory?
# I find this very handy to ease looking up deb filenames.
SYMLINK_FARM=no

###

mkdir -p $DEST/dists $DEST/pool

# Snarf the Contents files.
if [ "$CONTENTS" = yes ]; then
    for DIST in ${DISTS}; do
        for ARCH in ${ARCHS}; do
            echo Syncing $DEST/dists/${DIST}/Contents-${ARCH}.gz
            rsync $FLAGS_NOPOOL \
                $HOST::debian/dists/$DIST/Contents-${ARCH}.gz \
                $DEST/dists/${DIST}/
            echo Syncing $DEST/non-US/dists/${DIST}/non-US/Contents-${ARCH}.gz
            rsync $FLAGS_NOPOOL \
                $HOST::debian-non-US/dists/$DIST/non-US/Contents-${ARCH}.gz \
                $DEST/non-US/dists/${DIST}/non-US/
        done
    done
fi

# Generate the list of archs to download.
ARCHLIST="binary-all"
DISKS_ARCHLIST=""
NONUS_ARCHLIST="binary-all"

for ARCH in ${ARCHS}; do
    ARCHLIST="${ARCHLIST} binary-${ARCH}"
    DISKS_ARCHLIST="${DISKS_ARCHLIST} disks-${ARCH}"
    NONUS_ARCHLIST="${NONUS_ARCHLIST} binary-${ARCH}"
done

if [ "$SOURCE" = yes ]; then
    ARCHLIST="${ARCHLIST} source"
    NONUS_ARCHLIST="${NONUS_ARCHLIST} source"
fi

# Download Packages files (and .debs and sources too, until we move
# fully to pools).

if [ x$HAVE_PACKAGE_FILES != xyes ]; then
    for DIST in ${DISTS}; do
        for section in $SECTIONS; do
            for type in ${ARCHLIST}; do
                echo Syncing $DEST/dists/$DIST/$section/$type
                mkdir -p $DEST/dists/$DIST/$section/$type
                rsync $FLAGS_NOPOOL \

Re: corelinux debian packages

2000-08-19 Thread goswin . brederlow
"Christophe Prud'homme" <[EMAIL PROTECTED]> writes:

> Hi,
> 
> I am waiting for my debian maintainer application to take place.
> In the meantime, I want to provide my work to the masses.
> 
> Here is the apt line to add if you want corelinux (OOA and OOD library for 
> Linux)
> These packages were compiled using WOODY
> 
> deb http://augustine.mit.edu/~prudhomm/debian ./
> 
> more packages (not corelinux related) to follow in the future:

How about source? Does

deb-src http://augustine.mit.edu/~prudhomm/debian ./

work?

May the Source be with you.
Goswin

PS: Get a mentor to upload those. :)




Re: Problem with apt on slink systems

2000-08-19 Thread goswin . brederlow

> Where the heck does the word 'stable' come from? I removed my whole
> /var/state/apt/ and I do not know where it comes from. Hardcoded
> anywhere perhaps? Or did I miss something grave?
> 
> 
> MfG/Regards, Alexander

What revision of slink do you have? slink 2.1R3 doesn't have that
problem.

Try updating to potato, since that's now stable.

May the Source be with you.
Goswin



