Re: LILO 21.6-2

2001-01-05 Thread Chris Rutter
On Sat, 6 Jan 2001, Russell Coker wrote:

> You don't have sym-links to the root directory?  Why not?

There's absolutely no need to, and not necessarily any desire either;
besides which, the point is moot: if you're in the automation arena,
you'll notice that kernel-package no longer produces them.

Trying to create a GUI interface to something with the free-form
syntax and complexity of lilo.conf -- even totally disregarding the
idea of allowing interoperation between a human editor and your
configurator -- is /seriously hard/ to achieve plausibly.

Things get worse, though: you try to model the configuration setup in
a strange orthogonal-linear fashion, which is destined to fail --
inelegantly.  If anything, a structure more similar to pppd's configuration
would befit what you're trying to do.

I think you need to go back to basics: to really carefully consider
exactly how you might most efficiently model the configuration in as
much depth as you feel you need, and then how best you might represent
that within debconf, if debconf is what you want to use.

And then, I'd think carefully about how, at a lower level, you might
best effect some degree of interoperability between a human editor and
your configuration mechanism.  Consider the following situations:

  * human prefers editing all files by hand, beautifully laid out in every
typographical sense;

  * your configurator generates a perfectly adequate file, and human has
no desire nor understanding to ever modify it in any way;

  * your configurator is not advanced enough to cater for the user's needs;
not even close;

  * your configurator generates a file that's /almost perfect/, but just
needs one or two tiny tweaks (like adding `lba32', say) -- human wants
to use your program to do the hard work but preserve the tweaks.

You could perhaps model this by offering a few options during the postinst:

  * ignore lilo.conf and never touch it again, leaving it all to the human;

  * generate a template file in /etc/lilo.conf.template, and then switch
back to mode (1);

  * generate the master file, /etc/lilo.conf.

(I hasten to add that the default should be the first: an upgrade should
/never/, /ever/ corrupt a working lilo installation.)
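To make the free-formality concrete, here is a minimal lilo.conf of the sort all this has to cope with, including the `lba32' tweak from the last case above (device names and paths are illustrative, not a recommendation):

```
# Global section
boot=/dev/hda
lba32
prompt
timeout=50
default=Linux

# One stanza per bootable image
image=/boot/vmlinuz
        label=Linux
        root=/dev/hda1
        read-only
```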

This scheme, as you can see, totally sidesteps the problem of
interoperation between human and script editors -- for which you'd need
to invent a parser that could read in lilo.conf and present it in a form
which your program fully understood.  If you're going down that route,
you might want to even analyse the syntactical sugar used by the human
and try and model it in the entries your program modifies.

Alternatively, you could avoid this whole minefield by desisting.

I haven't even /begun/ to describe the complexities of implementing a
proper scheme -- herein is mentioned only the very tip of the iceberg.
If you're confident, please do go ahead -- the result might break some
ground.  I feel, though, that lilo is probably trickier than most of
the other slightly easier and simpler examples of this sort of thing,
and for the time being, it might be better avoided, or at least /not/
shoved in people's faces as it now is: never the best way to endear
people to your work [viz. M$].

c.




Re: Embedded Debian (was: compaq iPaq)

2000-08-19 Thread Chris Rutter
On Sat, 19 Aug 2000 09:22:12 Glenn McGrath wrote:

> hmm, im not sure its practical to create extra binary packages, wouldnt
> it be more effective to exclude files from regular packages as its
> installed.

I was suggesting that the script would create them on-the-fly -- they
wouldn't reside anywhere as such.
 
> It could use an external script like you mention in your second point.
> 
> You could have some sort of wrapper around dpkg to do it, would be
> easier than creating new tools, new packages.

That's the sort of thing I meant, yeah.  I suggested the existence of a
tool which would handle all this sort of thing to create a whole
filesystem, or individual small packages which could be transferred onto
an embedded system and `dpkg -i'ed.

c.





Re: Embedded Debian (was: compaq iPaq)

2000-08-18 Thread Chris Rutter
On Wed, 16 Aug 2000 14:14:24 Ben Armstrong wrote:

> For the most part, I think there is enough flexibility within Debian to
> pick and choose the smallest tools that will do the job from among the
> binary packages.  Where Debian currently falls short, we can create -tiny
> versions of packages as needed.  Most useful optimizations that can be
> done at compile time can also be used to create binary packages to save
> people the time and bother of compiling it themselves. 

Yes; I have an idea for a solution to the problem:

  * For each package, logically create another two packages (although there
could be many categories): `-small' and `-tiny'.

  * Write a script that will take a binary package and, based on guesses,
squeeze it down to size; e.g. squeezing binaries, removing documentation,
removing bash or Perl scripts (depending on whether the target supports
bash and perl), header files, etc.

  * Define a mechanism so that a binary package can contain a file in
`DEBIAN/', called (say) `squeeze-small' and `squeeze-tiny', overriding
the script's guesses, and specifying more exactly how to squeeze the
package to its corresponding smaller version.

  * Define a mechanism so that a source package can contain a file which
specifies a list of `small' options (e.g. portions of glibc to compile
in) which can be defined to create a squeezed package in one form.
(I think few packages would need these.)

  * Write a tool analogous to the task selector to build these `small'
packages and create filesystem images out of them.

  * Package up newlib and friends and make them provide libc6. :-)
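The squeezing script from the second point might be sketched roughly as follows. This is only a guess at behaviour: the directory choices, the in-place operation on an already-unpacked tree, and the `squeeze_tree` name are all assumptions, not the proposed tool.

```shell
#!/bin/sh
# Sketch: shrink an unpacked package tree in place, per the guesses in
# the text (squeeze binaries, remove documentation, headers, etc.).
# All path choices here are assumptions, not the real tool's behaviour.
set -e

squeeze_tree() {
    tree="${1:?usage: squeeze_tree <dir>}"
    # Documentation and manual pages go wholesale.
    rm -rf "$tree/usr/share/doc" "$tree/usr/share/man" \
           "$tree/usr/doc" "$tree/usr/man"
    # Header files: no development happens on the target.
    rm -rf "$tree/usr/include"
    # Strip unneeded symbols from executables, where strip exists.
    if command -v strip >/dev/null 2>&1; then
        find "$tree" -type f -perm -u+x \
            -exec strip --strip-unneeded {} + 2>/dev/null || true
    fi
}

# Demonstration on a throwaway tree.
t=$(mktemp -d)
mkdir -p "$t/usr/share/doc/foo" "$t/usr/include" "$t/usr/bin"
echo "docs" > "$t/usr/share/doc/foo/README"
echo "hdr"  > "$t/usr/include/foo.h"
echo "bin"  > "$t/usr/bin/foo"
squeeze_tree "$t"
remaining=$(cd "$t" && find . -type f)
echo "$remaining"
rm -rf "$t"
```

The per-package `squeeze-small'/`squeeze-tiny' files in `DEBIAN/' would then override these defaults on a case-by-case basis.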

c.





Re: Is XEmacs nonfree?

1999-10-01 Thread Chris Rutter
On 30 Sep 1999, David Coe wrote:

> Is that still an accurate description of the legal status (from 
> FSF's perspective) of XEmacs, and if so, shouldn't we move it to
> non-free?

Yes, probably; but no.  RMS is referring to the fact that many authors
of many pieces of xemacs haven't assigned copyright to the FSF,
meaning that copyright remains with them, or possibly even their
employer, depending on sticky employment contracts.  Therefore,
to be absolutely 100% anal about the `freeness' of the `GNU system',
he is declaring that any code that hasn't been copyright-assigned
to the FSF is not worthy of inclusion in the GNU system.

-- 
Chris <[EMAIL PROTECTED]> ( http://www.fluff.org/chris )



Re: Swap setup on Debian

1999-10-01 Thread Chris Rutter
On Tue, 28 Sep 1999, Staffan Hamala wrote:

> Why doesn't the installer use -v1 so that larger swaps that 128MB can
> be used?

I presume this is a boot-floppies issue, and will indeed be rectified
nearer release time -- for the time being, it would seem prudent not
to sacrifice any compatibility for the sake of an extra few
commands.

-- 
Chris <[EMAIL PROTECTED]> ( http://www.fluff.org/chris )



Re: Packages should not Conflict on the basis of duplicate funct

1999-09-28 Thread Chris Rutter
On Mon, 27 Sep 1999, Sean 'Shaleh' Perry wrote:

> b) if you know what you are doing, compile the packages by hand, fix their
> install scripts, and remove the conflicts.  You are trying to circumvent the
> norm.

But I think, to be fair, that what he's proposing *isn't* necessarily
`not the norm' -- any decent-sized ISP could easily have cause to do
this, and it would seem better just to tweak the packages slightly
to let them both install than to make it a back-to-the-grindstone
exercise.

> Debian is operating on making the easy case easy.  90+% of our users want to
> just install a package and go.

Oh jeezus.  This sounds like a Microsoft slogan.

I thought Debian was about handling every case -- from the novice to
the guru -- flexibly, powerfully and elegantly.  I had no idea that
there was a lowest-common-denominator appeal element here.  It sounds
like a crude idea/slogan to try and reverse ratings like PCPlus hands
out: "Debian -- for pros, don't touch it" (which I agree is a problem).

-- 
Chris <[EMAIL PROTECTED]> ( http://www.fluff.org/chris )



Re: Packages should not Conflict on the basis of duplicate functionality

1999-09-27 Thread Chris Rutter
On Mon, 27 Sep 1999, Brian May wrote:

> However, if both packages contain a different implementation of the
> same file (or even worse - a completely different program with the same
> name), then things will break, depending on what order the
> programs are installed in.

This is true, and the packaging policy/system would need extra
heuristics to cope with it.  All conflicting files could be
given names that wouldn't clash (prefix them with the first letter
of the package name, or whatever), and then diversions/symlinks
used.

> The warning messages produced when a file does conflict have a tendancy
> to scroll of the top of the screen, and unless you are paying attention,
> there is no way (that I know of) to find what files (if any) conflict
> after installing multiple packages. If I submit a bug report
> against package X, how are you, the maintainer to know that
> it is broken, because an important file, eg /usr/bin/z was overwritten
> by packge Y, which does something completely different?

Um, yes, it's a difficult one, but I don't think the problem is
*that* wide-spread.  I agree with the Apache and Roxen example --
I have wanted to do this as a quite straight-forward thing before,
but I think few packages will have big difficulties.

This won't list all diversions, etc., but you can always diff the
output of `dpkg -L' for two packages to see if files conflict.
Or maybe I mean `sort | uniq -d' -- of course this doesn't help
if some people have chosen /usr/X11R6/bin and some /usr/bin/X11,
for instance.  I'm sure a trivial utility could be made to do this.
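The `sort | uniq -d' approach can be sketched with plain shell. The two file lists below are invented stand-ins for real `dpkg -L' output:

```shell
#!/bin/sh
# Find files claimed by two packages at once: concatenate their
# `dpkg -L' listings, sort, and keep only duplicated lines.
# The listings here are made-up stand-ins for real output.
set -e

cat > pkg-a.list <<'EOF'
/etc/webserver.conf
/usr/bin/httpd
/usr/sbin/webctl
EOF

cat > pkg-b.list <<'EOF'
/usr/bin/httpd
/usr/sbin/otherctl
EOF

# `uniq -d' prints each pathname that appears in both lists.
conflicts=$(sort pkg-a.list pkg-b.list | uniq -d)
echo "$conflicts"     # the shared /usr/bin/httpd
rm -f pkg-a.list pkg-b.list
```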

I suggest you compile a list of these sorts of problems, and set
about trying to fix them in a sane way, if you have time.

-- 
Chris <[EMAIL PROTECTED]> ( http://www.fluff.org/chris )



Re: Packages should not Conflict on the basis of duplicate functionality

1999-09-27 Thread Chris Rutter
On Sat, 25 Sep 1999, Raul Miller wrote:

> Perhaps there are people who want a "service enabled by default" policy,
> and perhaps we should accomodate them.  However, I'm not one of them
> and I don't want any services turned on on some of my machines without
> my explicit ok.

Yes, and I think this is an *excellent* candidate for something that
can be trivially fixed by the new debconf -- you can tell it (at
least I hope you can) that the default answer for all of the `Can
I start this daemon?' questions is `no' (assuming, of course, that
the maintainers add questions like these into their packages).
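A package's question of that sort might look something like the fragment below in its debconf templates file. The template name and wording are hypothetical, purely for illustration:

```
Template: foo/run_daemon
Type: boolean
Default: false
Description: Should the foo daemon be started at boot?
 If you decline, the init script is installed but the service is
 left disabled until you enable it yourself.
```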

-- 
Chris <[EMAIL PROTECTED]> ( http://www.fluff.org/chris )



Re: Packages should not Conflict on the basis of duplicate functionality

1999-09-27 Thread Chris Rutter
On Fri, 24 Sep 1999, Clint Adams wrote:

> They both provide httpd; should I file bugs against them demanding that
> they conflict with it too?

I think this is a good point; it doesn't seem to be a clear area
of policy.  It sounds like perhaps some new system needs to be
implemented.  Perhaps a Suspicious: line in the control file, which
would mirror Conflicts:, except only eliciting a warning to the
user?  Or perhaps a simple flag is required to override it?

What is wrong with the semantics of `dpkg --force-conflicts' as it
stands?  That it confuses tools like `apt-get', which whinge about
broken packages, or some other reason?

I think any deeper idea (such as a database of `claimed' ports in
/etc, or something) would be ugly, and best avoided in the name of
non-intervention.

-- 
Chris <[EMAIL PROTECTED]> ( http://www.fluff.org/chris )



Re: A few changes

1999-09-27 Thread Chris Rutter
On Fri, 24 Sep 1999, Matthew Vernon wrote:

> This is all very well, except for those of us who email from work, and 
> have their PGP key at home...

Well, depending on how paranoid you may be, there are a few solutions:

  * Keep a copy of at least your `secring.pgp' on a floppy disk, and
use this at work (trying to avoid disk caching problems).

  * Use an intermediary machine (i.e. one permanently connected to the Internet).
This option depends on many things -- the machine is bound to be
a multi-user one, which is in theory a no-no, but if it's fairly
tightly under your administrative control, then it's unlikely that
your keyrings stored on it will be compromised.  If you can ssh
into this machine, it should be safe.

I actually do this, almost; I have two keys: <[EMAIL PROTECTED]> and
<[EMAIL PROTECTED]>.  The former sits on a system with many users
that I administer (inkvine.fluff.org), and is in theory vulnerable
at various times to several attacks: Ethernet snooping, and compromise
by local root-style exploit.  The latter has never left my home
machine, and assuming no one breaks in to my home machine during
dial-up time (unlikely; I watch /var/log like a hawk), the key is
safe from those sorts of exploits.

So, for anything lasting or really important, I use the home signature,
from home.

-- 
Chris <[EMAIL PROTECTED]> ( http://www.fluff.org/chris )



Re: A few changes

1999-09-19 Thread Chris Rutter
On Sat, 18 Sep 1999, Michael Stone wrote:

> Definately by package. I can think of several circumstances where this
> is useful: when a bug is closed in unstable but someone using stable
> wants an explanation for a problem; when a bug is inadvertantly
> reintroduced; when a maintainer closes a bug caused by user error, and
> [...]

Agreed.  It would be nice to have a mail server command `resurrect',
or similar, that would bring a dead bug back to life (if it were
found not to be dead, or whatever; several reasons were listed above).

This would also go some way towards producing an automated list of
bugs fixed in each release of Debian; people could visit a list somewhere
showing them precisely which bugs in which packages had been fixed
between their version of Debian and unstable.

-- 
Chris <[EMAIL PROTECTED]> ( http://www.fluff.org/chris )



Re: Move proftpd to contrib

1999-09-17 Thread Chris Rutter
On 17 Sep 1999, Martin Bialasinski wrote:

> OK, a bug in cron has recently produced a root exploit. What a crappy
> software, it should be moved to contrib.

Yes, but there aren't *hundreds* of bugs in cron, all giving security
problems; it has been subject (presumably) to security review;
bugs don't keep on appearing one after another, like cockroaches,
as they do in ProFTPD.

Read what SuSE said about ProFTPD, and then see how much of it
applies to cron.  Not much.

And, also, arguably cron is a more important part of a Unix system
than a specific FTP daemon.

-- 
Chris <[EMAIL PROTECTED]> ( http://www.fluff.org/chris )



Debian 2.1r3

1999-09-17 Thread Chris Rutter
The current `sub-release' (whatever) of Debian 2.1 is r3, right?
I was just wondering, as all references on the web site are to r2,
but I thought I received a message from the security team about
r3 last week sometime.  Just wanted to check before I filed a
boring bug report, or something.

-- 
Chris <[EMAIL PROTECTED]> ( http://www.fluff.org/chris )



Re: Crazy Idea: debian developer conference

1999-09-17 Thread Chris Rutter
On Fri, 17 Sep 1999, Remco van de Meent wrote:

> Uhm, don't forget that in .nl there is only one campus university like the
> ones widespread in the USA. And moreover (I currently live on that campus)
> there ain't that many free dorm rooms during summer (people tend to stay on
> campus during summer)...

For that, try the UK.  There are plenty of 'em.  Hm, other than that,
I remember a university-affiliated conference centre sort-of thing
I once stayed at in .dk... I wonder where it was.  This is what
comes of attending 10 years' worth of singing festivals around the
world. ;-)

-- 
Chris <[EMAIL PROTECTED]> ( http://www.fluff.org/chris )



Re: ProFTPd being lame

1999-09-17 Thread Chris Rutter
Re: all the bug-finding in ProFTPd (I just read the SuSE notice about
it being dropped for lameness reasons, including it *still* being
vulnerable to remote exploit) -- if it is, indeed, *that* bad
(and the common consensus among admins I know is that it is), perhaps
the netkit ftpd shouldn't come with this message:

  This is the netkit ftp server.  It is recommended for you to use one of its
  alternatives, such as wu-ftpd or proftpd.

Most people I know prefer using the OpenBSD-derived server, because
it seems to be more stable and less buggy than the rest -- why is
it being deprecated by Debian (or Herbert, I don't know) in this
way?

-- 
Chris <[EMAIL PROTECTED]> ( http://www.fluff.org/chris )



Re: building kernel 2.0.x under potato

1999-09-17 Thread Chris Rutter
On Thu, 16 Sep 1999, John Lapeyre wrote:

> The 2.0.37 and 2.2.x kernels keep hanging on my AMD K6-2.

This sounds *bad*, BTW; have you checked around to see if anyone
else has had these kinds of freezing problems?  Is your machine
unstable in any other way?

You may find all you need to do is tweak a CPU register or two,
or apply some patch to the kernel to make the machine stable on
any kernel you like -- it's worth checking, because the kernel
*shouldn't* have become randomly unstable in 2.0.37.

-- 
Chris <[EMAIL PROTECTED]> ( http://www.fluff.org/chris )



Re: history (Was Re: Corel/Debian Linux Installer)

1999-09-17 Thread Chris Rutter
On Thu, 16 Sep 1999, David Bristel wrote:

> With this in mind, I think that having a configuration variable for apt that
> would allow the downloaded .deb files to be put in a user defined place.  This
> way, if your /var is close to being full, you could, for example, drop it into
> a temporary directory on /home for the upgrade.  This isn't the best place,
> but on many systems, /home is one of the largest partitions on a system, and
> tends to have a good ammount of free space on it because users may use a
> large ammount of space.

Yes, either this or a FIFO expiration policy on /var/cache/apt/archives
which gets automatically applied when space runs out.  Or possibly
the option of using /tmp/.apt, with a warning message that the
packages are in there and need to be moved into the cache.
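A FIFO expiry of that sort might be sketched as below. The cache location and keep-count are placeholders, not anything apt actually does:

```shell
#!/bin/sh
# FIFO expiry sketch for a package cache: when the cache holds more
# than $keep files, delete the oldest until it doesn't.  The cache
# path and keep-count are placeholders, not apt behaviour.
set -e

cache=$(mktemp -d)   # stand-in for /var/cache/apt/archives
keep=2

# Fake some cached .debs with distinct timestamps (oldest first).
for p in old mid new; do
    : > "$cache/$p.deb"
    sleep 1
done

# List by modification time, newest first; everything past $keep goes.
ls -1t "$cache" | tail -n +$((keep + 1)) | while read -r f; do
    rm -f "$cache/$f"
done

survivors=$(ls "$cache" | sort)
echo "$survivors"
rm -rf "$cache"
```

A real implementation would presumably trigger on free space rather than file count, but the first-in-first-out ordering is the same.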

I *don't* think that `apt' (or any other package) should use any
undefined directories (such as /home) for temporary storage.
If people want that, they'll symlink /tmp -> /home/.tmp or something.

Alternatively, is there any other, er, `in bits' way that the
upgrade can be done?

-- 
Chris <[EMAIL PROTECTED]> ( http://www.fluff.org/chris )



Re: Bug o' the week

1999-09-17 Thread Chris Rutter
On Wed, 15 Sep 1999, Michael Stone wrote:

> How much trouble would it be to add another category--"unreproduced" or
> somesuch?

Yes, or `observational', `possible', that sort of thing.  I agree.

-- 
Chris <[EMAIL PROTECTED]> ( http://www.fluff.org/chris )



Re: Crazy Idea: debian developer conference

1999-09-17 Thread Chris Rutter
On Fri, 17 Sep 1999, Federico Di Gregorio wrote:

> Having a big convention would be really awfull, but it's difficult to
> get sponsors and much more difficult to gather developers from all
> over the world. What about a series of smaller conferences? We can have
> Debian Europe, Debian America (North and South?), Debian Australasia, etc...

True, but I think that sort of defeats the point.  I would *like*
to see people from other parts of the globe (and the other side
of the pond), and localising it wouldn't do anything for that.

What I propose, perhaps, is that a notice be put out asking anyone
*not* interested *at all* in attending a conference *wheresoever*
it may be to mail in to whomever coordinates this.  Then, pick a few
locations, post them out, and ask people (or at least one from each
region) to figure out roughly how much (if they were doing this
on the cheap) it would cost to get them there, stay, and back.

Perhaps it could even be nicely automated, in some way.  It might
give a feel for the true cost, depending on location.

I think, re: sponsorship, that probably the way to do it is to ask
no developer to pay more than, say, $200 or $300, and make the rest
up from there.  Anyone short (and there will be plenty) can take more;
people not travelling far could do less.  What d'ya think?

-- 
Chris <[EMAIL PROTECTED]> ( http://www.fluff.org/chris )



Re: Crazy Idea: debian developer conference

1999-09-17 Thread Chris Rutter
On 16 Sep 1999, Michael Alan Dorman wrote:

> I would _hope_, however, that being face to face might have the
> opposite effect.

Yes, I agree, and in all likelihood I think that's what'll happen. :)

-- 
Chris <[EMAIL PROTECTED]> ( http://www.fluff.org/chris )



Re: Binary Deb 'Diffs'

1999-09-17 Thread Chris Rutter
On Thu, 16 Sep 1999, Jordan Mendelson wrote:

> Just a quick idea, instead of having to download an entire package where 95%
> of the files don't change, what about downloading a type of binary diff? I can
> think of two ways to do it:

I've wanted something like this for a while -- I was also wondering
whether it would solve the PINE problem: the package could be
distributed as what appears to be a standard .deb file, but inside
there would be not only an original archive, but a binary patch
that dpkg-deb would automatically apply when unpacking -- neat?
If that solution did indeed get round UWashington's silly
licence, it would be a nice way of making it transparent to the
user, and sticking two fingers in UW's face at the same time.

> 1) Package everything in a type of 'pdeb' (patch deb). It should contain
> reconfiguration information, and files which have changed since version
> locally installed

Well, without getting so elaborate, most people have the old .deb
files sitting around, if not on CD-ROM or in their home directories,
then in /var/cache/apt/archives.  It would be feasible just to distribute
a plain binary patch.

Notice that a plain binary diff between two .debs won't be much
use, due to `randomisation' by compression.  I think you'd need
to distribute an actual `diff -urN' between the two unpacked
package trees, along with the new configuration files.
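The unpacked-tree approach can be sketched with plain diff(1) and patch(1). The trees below are stand-ins for `dpkg-deb -x' output of the old and new versions of a package:

```shell
#!/bin/sh
# Patch between two unpacked package trees: the server diffs the old
# and new trees; the client applies the patch to its own copy of the
# old tree.  The trees are stand-ins for `dpkg-deb -x' output.
set -e

work=$(mktemp -d); cd "$work"
mkdir -p old/usr/bin new/usr/bin
printf 'v1\n'   > old/usr/bin/tool
printf 'v2\n'   > new/usr/bin/tool
printf 'same\n' > old/usr/bin/helper   # unchanged between versions
printf 'same\n' > new/usr/bin/helper

# Server side: produce the inter-version patch.
diff -urN old new > pkg.patch || true  # diff exits 1 when files differ

# Client side: apply it to a copy of the old tree.
cp -r old upgraded
patch -s -p1 -d upgraded < pkg.patch   # -p1 strips the old/ prefix

result=$(cat upgraded/usr/bin/tool)
echo "$result"
cd /; rm -rf "$work"
```

Note the unchanged helper contributes nothing to the patch, which is where the bandwidth saving comes from; binaries that change, of course, diff badly line-by-line.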

> 2) Package everything in a 'pdeb' w/ real binary diff. Instead of packaging
> entire files which have changed, package patches. This would require a system
> which logged changes in order to work correctly, similar to CVS.

Um, this is complicated (incremental package-by-CVS), and I
doubt it'll ever see the light of day, nice as it would be.
I think it would be simpler for master to just `dpkg -x' on
every upload, and diff against the existing file.

> Both of these would need to include a checksum per file. Optimally it would
> require that the storage of deb's on HTTP and FTP servers change as well,
> requiring the files to be unpacked so apt can grab a single file from a .deb.

Well, I think the best way of doing it would just be to store
all the patch files in the same current places as the .debs
are stored, with descriptive filenames that identify from which
to which versions they convert.  Of course, people won't want the
main tree being cluttered up with all this, so perhaps all the
patch files could be stored on a separate server.

> I don't know, I figured it might be a way to save bandwidth & disk space.

I don't think this is going to save anyone any disk space.
What I think it will save is bandwidth to the `end-user' -- those
using apt-get.  Remember that `rsync' has difference functions
in it already.

-- 
Chris <[EMAIL PROTECTED]> ( http://www.fluff.org/chris )



Re: building kernel 2.0.x under potato

1999-09-16 Thread Chris Rutter
On Thu, 16 Sep 1999, John Lapeyre wrote:

>The link to suse doesn't work at the moment, but I'll give it a try.
>   The blurb at cygnus does not look encouraging.  I think it is claiming
> that I have to "to change asm constructs" at various unspecified places
> in the source.

Nah, they're just trying to cover their backs, so that people can't
whinge at them *if* things go wrong -- they're only *trying* to
worry you.  Everyone I know who uses the patches says they're
fine.


> What should work? gcc272 ?  I have tried it on two current potato
> machines. building with CC=gcc272 fails to build both 2.0.x and 2.2.x
> kernels.  Building with the default compiler (egcs 2.95) will only build
> 2.2.x kernels.  The kernel mailing list still claims that I should
> build with 2.7.2 before sending a bug report about my corrupted fs.

Yeah, 2.7.2.* is the canonical compiler for 2.0 kernels.  Can you
post what's actually going wrong?

> I have an old 2.0.36 kernel, but I need to compile a module for a driver.
> I think that given the number of instability reports regarding 2.2.x
> kernels it might be nice to be able to compile 2.0.x somewhat easily.

What module's that -- does it not work under 2.2?  Yeah, it *should*
be straightforward...

> Am I being obtuse, or are things pretty fucked up regarding kernels and
> compilers ?

Er, a little, yeah.  Unfortunately the Linux kernel is quite a
stressful bit of code to compile (it needs to get good x86
performance), and so the compiler really got tested to the full;
but the compiler had bugs, and they had to be
worked around, etc.  It's not that pretty.

-- 
Chris <[EMAIL PROTECTED]> ( http://www.fluff.org/chris )



Re: building kernel 2.0.x under potato

1999-09-16 Thread Chris Rutter
On Thu, 16 Sep 1999, John Lapeyre wrote:

>Is it possible to build 2.0.x kernels under a reasonable
> potato build environment ?  I tried "make CC=gcc272", but
> I still get failures from the assembler, I think. 

Erm, yeah, I had no problems as I remember.  Just apply the
patches mentioned at ,
and you should be fine.  Alternatively (it *should* work if
binutils is sane, and you're pointing at the right gcc),
post the question to one of the egcs lists, and you should
get a quick response.

-- 
Chris <[EMAIL PROTECTED]> ( http://www.fluff.org/chris )



Re: debian O'Reilly book cover is up

1999-09-16 Thread Chris Rutter
On 16 Sep 1999, Steve Dunham wrote:

> All of their Linux books use a rodeo/cowboy theme rather than the
> traditional animal theme.  I have no idea why.  I kinda prefer the
> animals, but maybe they were running out?

Last time I asked I got some mutter about `brand pollution' or
something.  Personally I think they're ugly as sin (especially
that red `Running Linux' book) -- we could ask nicely?

-- 
Chris <[EMAIL PROTECTED]> ( http://www.fluff.org/chris )




Re: Crazy Idea: debian developer conference

1999-09-16 Thread Chris Rutter
On Wed, 15 Sep 1999, Joey Hess wrote:

> Wouldn't it be great if all the debian developers could be flown in to a
> convention site, get to meet each other, really tighten up the gpg web of
> trust, attend talks by developers, discuss important issues in person, and
> so on? It would really make us more of a community.

You've been thinking about this as well?  Cool. :)

>   Where?

Well, a location that'd be as cheap as possible overall, I think,
considering flight fares for everyone.

Problem is there ain't an `average developer' in terms of location,
as far as I can see -- they're all over the place. ;-)  I'd
certainly rather somewhere Scandinavian, probably -- it's nice,
clean, historic, etc. -- and closer to home.

>   I'm figuring around $700 per developer, for plane fare and
>   lodging. If 250 attend that's $175k. Plus some unkown amount
>   to rent out a convention center.

Yeah, but people won't all be coming just on their own, surely?
I think it would be possible to whittle that down a little, actually:
if any block bookings can be made on travel, perhaps.  And also,
if Debian get a little `creative' with accommodation (is there
anywhere with a high enough concentration of Debianites to house
200 visiting developers?  Probably not), some money could be saved
there.

For instance, if the Computer Lab could be persuaded, they might
be able to net some cheap accommodation at Cambridge (University).
Then again, I could be dreaming.

>   We always seem to have money we don't need to spend on
>   hardware or bandwidth, but I don't think it's on this scale.
>   Corporate donations? I don't really know.

Corporate donation is possible, perhaps, but I suspect things would
get a little more complicated then -- like they'd want some
advertising, or something.

Hey, why not turn it into a full-blown trade fair? :)

> Is this idea worth pursuing?

I think so, although in practice it's probably best just to wait for
the next Linux Expo/LinuxKongress/Linux World/whatever, and
arrange a large-scale Debian meet; that way the conference hall
would be basically free, and we'd get an opportunity to foist a
few copies of Debian off onto some punters.

-- 
Chris <[EMAIL PROTECTED]> ( http://www.fluff.org/chris )



Re: history (Was Re: Corel/Debian Linux Installer)

1999-09-16 Thread Chris Rutter
On Wed, 15 Sep 1999, Jonathan Walther wrote:

> drives.  But given they are in such a vast minority, the current scheme of
> providing sensible defaults and popping the installer into a tool for
> creating your own arbitrary partition scheme is really the best.
> (at least, Im ASSUMING we do that the same as FreeBSD... I haven't installed
> Debian in a while.  Just duplicated already working drives)

You say this, but almost every single one of my drives >120MB
(3.6GB, 6GB, 9GB, 13.5GB, 17GB) is partitioned into a single huge
Linux partition (and 256MB swap) -- I thought long and hard about this,
and I have yet to come across a time where having several
partitions would have been easier.

Initially, when I set up the first large multi-user system that I
administer, I *did* split it into lots of little bits (on a 6GB disk).
This was a *nightmare* -- bits of /usr were symlinked into /home;
bits of /var were symlinked into /usr, and so on.  I had constant
nightmares trying to distribute the disk load evenly and ensure
free space was there all around, so when I finally reinstalled it
(after 4 years) with Debian, I left both of its disks as single
huge partitions, so that it now has 8GB / and 6GB /home, and I've
been happier.

I'm not especially bothered about the fsck time -- this box goes
down only 3 or 4 times a year, if that.  Backups are taken
(over the network), and if my data crashes on the system, then
I'll reconstruct from that.

With the (good) Debian policy of fully integrating packages into
the /usr, /var tree (rather than just leaving them in a heap),
saving /var at the expense of /usr wouldn't be terribly useful,
anyway.

So, er, what reasons are there (for me, at least, and I think
I'm fairly typical of small--medium size system admins) for
splitting?

-- 
Chris <[EMAIL PROTECTED]> ( http://www.fluff.org/chris )



w only giving `-' as the FROM field

1999-09-15 Thread Chris Rutter
For months now, `w' has only reported `-' (well, *almost* all the
time, anyway) in the FROM field for any connections made through
`telnetd'.  Finally, with the update to PAMed `login', I once again
have the hostnames correctly appearing in FROM again.  Does anyone
know why this wasn't working for so long?

-- 
Chris <[EMAIL PROTECTED]> ( http://www.fluff.org/chris )



Re: Increasing regularity of build systems

1999-09-15 Thread Chris Rutter
On Wed, 15 Sep 1999, Paul Slootman wrote:

> 
> If all I'm doing is trying fix something, usually just invoking 'make'
> will do it (or some subtle variation that a glance at the rules file
> will make clear). Once it builds, I do 'debian/rules clean' and then
> restart the package build, to ensure that the final package can be
> reproduced (restarting things from the middle sometimes leads to things
> happening differently).

Yes, indeed.  Just be thankful you're not a porter for RedHat: with
`rpm', the *only* way to invoke a build that will end up in a binary
package is to invoke a rule with `rm -rf <build tree>' as
its first command.  Imagine building glibc, where everything works
up until the *very* last file -- so you fix the bug stopping the last
file from building, and you expect to be able to just restart make?
Oh no.  You restart the whole *damned* thing.  Maybe you're not sure
the fix will work?  Prepare to rebuild all of glibc 20 times.

Yes, you can edit the build script not to `rm -rf', but if you do,
odd files tend not to get properly included in the rpm archive.
However, this has been a preferable alternative to rebuilding for
another 36 hours.  (I could go on.  RPM is shit, for porters at least.)

-- 
Chris <[EMAIL PROTECTED]> ( http://www.fluff.org/chris )