Re: LDAP gateway broken?

2003-12-09 Thread Jason Gunthorpe

On Mon, 8 Dec 2003, John Goerzen wrote:

> I just sent my first-ever message to the LDAP gateway to reset my
> password.  I got the below message back.  BTW, my clock is accurate.
> 
> I used the exact "echo" command given in the docs.
> 
> Also, I received no other reply.

There is a small bug in the script: if for some reason the LDAP server is
unreachable, it adds the signature of your message to the replay cache
and returns TEMPFAIL to exim, which retries your message and then fails it
because the replay cache already has it.
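To make the failure mode concrete, here is a minimal sketch of the ordering bug described above; the function names, signature value, and return values are all illustrative, not the actual gateway code.

```python
# Sketch of the bug: the signature is cached *before* the LDAP work
# succeeds, so exim's retry of the very same message is rejected as a
# replay.  Names and values here are illustrative.
TEMPFAIL = 75  # conventional EX_TEMPFAIL exit status

def handle_message(sig, replay_cache, ldap_reachable):
    if sig in replay_cache:
        return "reject: replay"
    replay_cache.add(sig)       # bug: cached before the request is processed
    if not ldap_reachable:
        return TEMPFAIL         # exim retries... and then hits the cache
    return "ok"

cache = set()
first = handle_message("sig123", cache, ldap_reachable=False)
retry = handle_message("sig123", cache, ldap_reachable=True)
print(first, retry)  # 75 reject: replay
```

The fix, of course, is to add the signature to the cache only after the LDAP operation succeeds.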

Try again. Be sure to make a new signature.

Jason




Bug#212028: apt-cache uses "dependency" backwards

2003-09-21 Thread Jason Gunthorpe

On Sun, 21 Sep 2003, Daniel B. wrote:

> Per the The American Heritage Dictionary (via
> http://dictionary.reference.com/search?q=dependency), a dependency
> is:
> ...
> 2. Something dependent or subordinate. 
> ...
> 
> That is, if A depends on B, A is a dependency of B.  (B is not a 
> dependency of A.)

Definition #1 for 'dependency' is 'dependence', which is defined as

   1. The state of being dependent, as for support.

So if package A requires some supporting functionality from package B then
'A has a dependence on B' - which is also correctly said as 'A has a
dependency for B'.

Consider some commonly heard phrases today: 'Jack depends on drugs', 'Jack has
a drug dependency', 'Jack is dependent on drugs', 'Jack has a dependency on
drugs'. 'Package: jack\n Depends: drugs'. 

In this case, your example results in something very odd indeed:
'Jack is a dependency of drugs' but 'Drugs are not a dependency of Jack',
which is clearly not the expected meaning of 'Jack depends on drugs'.

You might say 'Drugs are a dependency of Jack's', however.

Then again, I am not an English major.

Jason





Re: Bug in apt-get ? [replace essential package / Yes, do as I say]

2003-04-29 Thread Jason Gunthorpe

On Tue, 29 Apr 2003, Miquel van Smoorenburg wrote:

> > There are a lot of wonky things that can happen during most of the essential
> > package remove scenarios that can completely screw your system so it doesn't
> > boot or can't run programs, install scripts or something er other.
> > 
> > Your case may or may not have these properties, it's impossible to tell.
 
> Well yes, it probably is. Sysv-rc and file-rc *are not* essential.
> Sysvinit is, and it depends on sysv-rc | file-rc, and that's why
> apt 'upgrades' the status of those packages to Essential. Even if
> I put Essential: no in the control file, apt still ignores that.

Well, for the purposes of this check they are essential. It is
specifically designed to protect against removing them.
 
> I'd say in this case apt is going slightly overboard. It's against
> current policy, and dpkg itself does get it right.

It isn't against policy.

> Unix is all about having enough rope to hang yourself and being
> able to shoot yourself in the foot etc, right, so why is apt
> preventing me from doing something that actually makes a lot
> of sense.

Yeah, well, a few years ago people were busily shooting themselves in the
foot and rendering their systems completely inoperable by removing
these 'virtually essential' packages. They complained loudly that it
should not be the way it was and I agreed. So this check is here to
stay.

You have to find another way to do what you want that isn't so risky.

> When replacing a (virtually) essential package, apt should simply
> not remove the old package first and install the replacing
> package after that. It should let dpkg take care of that. Apt
> is able to order things the right way, so it should be able to
> configure the replacing package as soon as it is unpacked.

This isn't relevant. Dpkg does the same operations internally, it just
does them a little faster. You still get the same potential bad effects.

> Now I have to find a sane way to describe that, and file a
> bug report against apt, I guess. Apt being written in C++ I'm
> not going to try to fix it myself.

It's not a bug, so please don't file more pointless bugs against it.

I seriously recommend against using virtual packages, or |'s with packages
that are marked essential. There is a lot of very poor behavior you will
run into.

Jason




Re: Bug#170069: ITP: grunt -- Secure remote execution via UUCP or e-mail using GPG

2002-11-21 Thread Jason Gunthorpe

On Fri, 22 Nov 2002, Joey Hess wrote:

> > After verifying the signature on the data, the receiver does some sanity
> > checks.  One of the checks is doing an md5sum over the entire file
> > (remember, this includes both the headers and the payload).  If it
> > has seen the same md5sum in the last 60 days, it rejects the request.  If
> > the date of the request was more than 30 days ago, it rejects the request.
> 
> Hold on, if you're md5summing the headers, what is to stop an attacker
> from modifying the subject, and using an intercepted, gpg-signed body to
> repeat the command?

PGP signatures have a signature ID and a date that are meant to be used to
prevent replay attacks. I forget the exact details, but there is a
gpg mode that prints them out. The db.debian.org gateways all make use of
this. 
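As a rough sketch of how a gateway can use this: gpg's `--status-fd` output includes a `SIG_ID` status line carrying a per-signature ID and the signing date, which is exactly what a replay cache can key on. The parsing below is a hedged illustration, and the sample status line is made up; check the GnuPG documentation for the authoritative format.

```python
# Parse the SIG_ID status line that "gpg --status-fd 1 --verify" emits.
# A gateway can reject a message whose (sig_id, date) was already seen,
# or whose date is too old.  The sample input is illustrative.
def parse_sig_id(status_output):
    """Return (sig_id, date) from gpg status output, or None."""
    for line in status_output.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[:2] == ["[GNUPG:]", "SIG_ID"]:
            return fields[2], fields[3]
    return None

sample = "[GNUPG:] SIG_ID AbCdEfExampleRadix64 2002-11-22 1037923200"
sig_id, date = parse_sig_id(sample)
print(sig_id, date)
```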

Jason




Re: Packages.bz2, Sources.bz2, Contents-*.bz2, oh my

2002-08-30 Thread Jason Gunthorpe

On Sat, 31 Aug 2002, Anthony Towns wrote:

> On Fri, Aug 30, 2002 at 05:34:48PM -0500, Adam Heath wrote:
> > This will break apt, as it doesn't look for compressed versions when using
> > file uris.
> 
> Then apt, or debian-cd, needs to be fixed. *shrug*

Huh. debian-cd can just uncompress them, but file: uris are a bit of a
pickle.

Jason




Re: First experience with gcc-3.2 recompiles

2002-08-26 Thread Jason Gunthorpe

On Mon, 26 Aug 2002, Gerhard Tonn wrote:

> > apt: failed with "debian/rules:20: build/environment.mak: No such file or
> > directory" gg: failed with "/usr/bin/ld: cannot find -ljpeg"
> > hylafax: failed because textfmt wasn't built[1]
> > latte: failed with "Your STL string implementation is unusable."
> > nana: failed with "make: dh_testdir: Command not found"
> > qnix: failed with "make: execvp: ./configure: Permission denied"
> >
 
> I am going to write bug reports for build problems that are not gcc 3.2 
> specific.

Do be a bit careful: APT didn't build because you changed the package
(NMU version number) and didn't have autoconf etc. installed, which are not
needed otherwise.

Jason




Re: apt-get wants to upgrade package to same version?

2002-08-21 Thread Jason Gunthorpe

On Thu, 22 Aug 2002, Brian May wrote:

> I ran dpkg-scanpackages on it myself, and haven't updated anything
> since (besides, if I had updated something, the MD5sum check would fail
> wouldn't it?)

Nope.
 
> Description: Dummy library package for Kerberos4 From KTH.
>  This is a dummy package. It should be safe to remove it.
>  installed-size: 76
>  source: krb4

vs
 
>  Description: Dummy library package for Kerberos4 From KTH.
>   This is a dummy package. It should be safe to remove it.
 
> What is different?

The description?

Looks like dpkg-scanpackages foobar'd you.

Jason




Re: apt-get wants to upgrade package to same version?

2002-08-21 Thread Jason Gunthorpe

On Wed, 21 Aug 2002, Brian May wrote:

> On Tue, Aug 20, 2002 at 11:19:13PM -0600, Jason Gunthorpe wrote:
> > > apt-get knows that it has to get the file from:
> > > 
> > > deb http://snoopy.apana.org.au/~ftp/debian woody main
> > > 
> > > and the md5sum of the Packages file from this source, as quoted
> > > before matches exactly.
> > 
> > Er, the md5sum of the deb is not kept by dpkg after you install a .deb.
 
> This MD5sum matches that of the file on the server, exactly:

Just ignore the md5sum, it isn't (can't be!) used for anything like this.

If you do apt-cache show kerberos4kth1 after installing and look very
carefully you will see that the two listed 1.1-11-2 stanzas are subtly
different. The problem is that your Packages file does not
accurately reflect the contents of the .deb. That is all.

Jason




Re: apt-get wants to upgrade package to same version?

2002-08-21 Thread Jason Gunthorpe

On Wed, 21 Aug 2002, Brian May wrote:

> On Tue, Aug 20, 2002 at 09:50:21PM -0600, Jason Gunthorpe wrote:
> > The entries in Packages files and those in the .deb must match exactly
> > (ie byte for byte), otherwise it sees them as different packages. Since
> > dpkg manipulates the status file and only has information from the .deb
> > there is no way to force a particular contents into the status file.
> 
> apt-get knows that it has to get the file from:
> 
> deb http://snoopy.apana.org.au/~ftp/debian woody main
> 
> and the md5sum of the Packages file from this source, as quoted
> before matches exactly.

Er, the md5sum of the deb is not kept by dpkg after you install a .deb.

Jason




Re: apt-get wants to upgrade package to same version?

2002-08-20 Thread Jason Gunthorpe

On Wed, 21 Aug 2002, Brian May wrote:

> Can't apt realize that if it is going to install a package from source
> X, it should use the Packages entry from source X too?

The entries in Packages files and those in the .deb must match exactly
(i.e. byte for byte), otherwise it sees them as different packages. Since
dpkg manipulates the status file and only has information from the .deb,
there is no way to force particular content into the status file.
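A minimal illustration of that matching rule, with made-up stanzas: the comparison is on the raw bytes, so even a single extra space makes the two records look like different versions.

```python
# Two stanzas describing the "same" version; the Packages copy has one
# extra space of indentation in the Description continuation line.
# Stanzas are illustrative, not taken from a real archive.
status_stanza = (
    "Description: Dummy library package for Kerberos4 From KTH.\n"
    " This is a dummy package. It should be safe to remove it.\n"
)
packages_stanza = (
    "Description: Dummy library package for Kerberos4 From KTH.\n"
    "  This is a dummy package. It should be safe to remove it.\n"
)
identical = status_stanza == packages_stanza  # byte-for-byte comparison
print(identical)  # False -> an "upgrade" to the same version is offered
```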

Jason





Re: Are libtool .la files in non-dev library packages bugs?

2002-08-20 Thread Jason Gunthorpe

On Tue, 20 Aug 2002, Marcelo E. Magallon wrote:

>  I beg your pardon?  Which naiveness?  That particular bit of libtool
>  solves a very real problem: dlopen is *not* portable.

Careful here: dlopen is defined by SUSv2; all the libtool hackery does
is allow OSes to get away with not conforming to SUSv2 for longer :<

Jason




Re: Are libtool .la files in non-dev library packages bugs?

2002-08-19 Thread Jason Gunthorpe

On Mon, 19 Aug 2002, Ben Collins wrote:

> > > Not only that, it's only useful for linking, so has no reason being in
> > > the primary runtime.
> > 
> > ltdl needs them at runtime.
> 
> Then ltdl is broken. How does one install libfoo.so.1 and libfoo.so.2
> and only have libfoo.la, and ltdl expect to work?

I was always under the impression that ltdl only really needed the .la
files on defective OSes, not on Linux.

Just look in a .la, there is nothing in there that can't be properly done
by ld.so. 

Jason




Re: rsync and debian -- summary of issues

2002-04-12 Thread Jason Gunthorpe

On Thu, 11 Apr 2002, Martin Pool wrote:

> I'd appreciate comments.

Hmm...

As you may know, I'm both the APT author and the administrator of the top
level Debian mirrors and associated mirror network. So,

> 3.2 rsync is too hard on servers
> If it is, then I think we should fix the problems, rather than
> invent a new system from scratch. I think the scalability problems
> are accidents of the current codebase, rather than anything inherent
> in the design.

It's true I'm afraid. Currently on ftp.d.o:

nobody8835 25.7  0.3 22120 1740 ?RN   Apr10 525:24 rsync --daemon
nobody   22896  5.0  0.3 22828 1992 ?SN   Apr11  21:20 rsync --daemon
nobody3907  7.3  0.5 22336 2820 ?RN   Apr11  15:30 rsync --daemon
nobody   10729 13.7  4.0 22308 20904 ?   RN   Apr11  13:10 rsync --daemon

The load average is currently > 7, all due to rsync. I'm not sure what the
one that has sucked up 500 minutes is actually doing, but I've come to accept
that as 'normal'. I expect some client has asked it to recompute every
checksum for the entire 30G of data and it's just burning away processor
power.

We tend to allow only 10-15 simultaneous rsync connections because of
this.

Things are better now; in the past, with 2.2 kernels and somewhat slower
disks, rsync would not just suck up CPU power but would seriously hit
the drives as well. I think the improvements in inode/dentry caching in
2.4, and our new archive structure, are largely responsible for making that
less noticeable.

IMHO, as long as rsync continues to have a server-heavy design its ability
to scale is going to be quite poor. Right now there are 91 people
connected to ftp/http on ftp.d.o; if they were all using rsync I'm sure the
poor server would be quite dead indeed.

> 3.1 Compressed files cannot be differenced

I recall seeing some work done to determine how much savings you could
expect if you used xdeltas of the uncompressed data. This would be the
best result you could expect from gzip --rsyncable. I recall the numbers 
were disappointing, it was << 50% on average or some such. It would be nice
if someone could find that email or repeat the experiments.

> 3.5 Goswin Brederlow's proposal to use the reverse rsync algorithm over
> HTTP Range requests

Several years ago I suggested this in a conversation with you on one of
the rsync lists; someone else was able to pull a reference from the IBM
patent database and claimed it was the particular patent that prohibits
the server-friendly reverse implementation.

> 3.7 rsync uses too much memory

This only really seems to be true for tree-mirroring; the filelists can be
very big indeed.

Jason


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]




Re: Debian's problems, Debian's future

2002-04-09 Thread Jason Gunthorpe

On Tue, 9 Apr 2002, Michael Bramer wrote:

>   -> make the check on the client site and
>   -> download the file partly per ftp/http 
>   -> make the new file with the old and downloaded parts
> 
> With this the server need only extra rsync-checksum files.

Rumor around rsync circles is that this is patented.

Jason






Re: New Packages (i18n version of APT)

2002-04-08 Thread Jason Gunthorpe

On Mon, 8 Apr 2002, Michael Piefel wrote:

> clear to someone who takes the easy path like me. It would also help if
> I could see your current source; the CVS archive on cvs.debian.org does
> not seem to be current.

It is current.

Jason






Re: New Packages (i18n version of APT)

2002-04-07 Thread Jason Gunthorpe

On Sun, 7 Apr 2002, Michael Piefel wrote:

> You, Jason, did not add full i18n support to APT, and were not willing
> to accept my patches for woody. This is OK, as APT is a very central
> package and has been in different shades of freeze for quite some time.

Bzzt. I accepted the parts of your patches that met my criteria and asked
you to rework the rest; you never did, so it is no big surprise that the
result is incomplete.

> Don't say I didn't make the patch to your likings when you are not
> willing (or able) to tell others what exactly your likings are.

Seems to me you re-iterated what I wanted pretty well in your email.

Jason






Re: New Packages (i18n version of APT)

2002-04-06 Thread Jason Gunthorpe

On Sun, 7 Apr 2002, Erich Schubert wrote:

> I REALLY REALLY would like to see translated apt in woody.
> And i cannot understand why apt-i18n is not installed so we could
> test it. Adding apt-i18n to unstable will not break anything, but
> interested developers can test this before adding it to real apt.

Because it is a bad idea? The people who made it still have not produced a
complete patch against normal APT; instead of doing that, they just
opted to try and force their work into the archive.

Current CVS has most, but not all, of the necessary patching.

Jason







Re: apt complaining about valid dependencies

2001-05-06 Thread Jason Gunthorpe

On Mon, 7 May 2001, Oliver Elphick wrote:

> What is apt-get upgrade complaining about here?  On a cursory glance,
> there isn't anything wrong with any of these proposed installations:

It's an error in how the message is printed; it is showing the wrong
number for 'is to be installed', IIRC.

Jason




Re: Bug#95801: won't let me upgrade perl from stable to unstable

2001-05-01 Thread Jason Gunthorpe

On 1 May 2001, Brian May wrote:

> Jason> No, it means you can't do this specific situation you asked
> Jason> for, you said dist-upgrade works, so you can in fact
> Jason> upgrade to unstable!
>  
> I don't recall saying that. If so, I did, I am sorry, I must have been
> confused at the time (or maybe the archive has changed since then, or

You seem to be confused. It is in the bug log.

[..]
> apt-get dist-upgrade would work, but I want to remove my Helix packages
> first (otherwise things will break), and apt wont let me do that without
[..]

> It has the highest version number I always run dselect update???

I don't care? It is not the installed package! Why are you ignoring that
absolutely critical detail?? Let's review my first message:

> Try looking at the current version of perl-base, and then look at the
> thing that actually provides the pre-depends it needs, which in this case
> will be perl-5.004-base. This has very little to do with the packages you
> are going to install. 

Now, hold my hand. Your installed perl-base looks like this: 

> Package: perl-base
> Essential: yes
> Priority: required
> Section: base
> Installed-Size: 10
> Maintainer: Darren Stalder <[EMAIL PROTECTED]>
> Architecture: all
> Version: 5.004.05-1.1
> Pre-Depends: perl5-base
   

And your perl-5.004-base looks like this!

> snoopy:nfsroot:/# dpkg -s perl-5.004-base
> Package: perl-5.004-base
> Provides: perl5-base
^^

Mkay? 

Now, the *only* ground you can attempt to stand on to claim this is an APT
bug is that it is erroneously giving the error.

The above proves the relationship between perl-base and perl-5.004-base
and it also proves the fact that perl-base is essential -- on your system,
as it is now. 

I established this in the last paragraph of my first message, which you
totally ignored. Establishing this absolves me of all responsibility so I
closed the bug.

> details to explain the problem, which you don't seem to understand,
> you are constantly accusing me of being wrong. However, thank-you for

Yes, of course, I have no idea how the software I wrote works. Silly me.

Jason




Re: Bug#95801: won't let me upgrade perl from stable to unstable

2001-04-30 Thread Jason Gunthorpe

On 1 May 2001, Brian May wrote:

> >>>>> "Jason" == Jason Gunthorpe <[EMAIL PROTECTED]> writes:
> 
> Jason> On Mon, 30 Apr 2001, Brian May wrote:
> 
> >> WARNING: The following essential packages will be removed This
> >> should NOT be done unless you know exactly what you are doing!
> >> perl-5.004-base (due to perl-base)
> 
> Jason> I'm confused, why is this a bug? You asked it to remove a
> 
> No I didn't! I asked to upgrade to the latest perl from unstable!

Please read exactly what I am saying; I already discussed this
problem with Brendan before I replied to you. The fact that dist-upgrade
works makes this not-a-bug.

Asking it to remove some packages implicitly means others need to be
upgraded, which in turn means that some perl packages need to be removed.
You asked for that, so you have to accept it. You may suggest it not remove
perl-5.004-base by also listing that on the command line, but you may find
it removes more packages than you'd like.

> It is a bug. It means that I cannot upgrade from stable to unstable.

No, it means you can't do the specific thing you asked for; you said
dist-upgrade works, so you can in fact upgrade to unstable!
 
> >> The thing is, I can't even see how this is meant to work:
> >> 
> >> dewey:~# dpkg --print-avail perl-base
  
 
> Jason> Try looking at the current version of perl-base, and then
   

> conflicts. Perhaps you are confused? Please check the version number
> again.

Read it. Carefully. Your output has no relevance whatsoever, for two
reasons: first, it is not the installed package, and second, it is dpkg
'avail' output, which is not used by APT.

In fact the output you quote below demonstrates exactly what I was talking
about.

> If you do close this bug again without resolving it, then I guess you
> have just demonstrated that Debian has grown too large and

Why are you arguing with me? Instead of taking the time to reopen the bug,
you should have reassigned it to a perl package, or accepted that it is
Not-A-Bug. I have already provided sufficient explanation in the first
message to show that it is not an APT bug. 

Since I am not convinced this is even a bug at all, I am not going to
foist it on anyone else. If you still believe something is wrong then it is
up to you to talk to Brendan yourself - that is why I closed the bug, why
you reopened it in spite of my explanation of why it is not an apt bug is
beyond me...

Jason




Re: IA-64?

2001-01-09 Thread Jason Gunthorpe

On Tue, 9 Jan 2001, Bruce Perens wrote:

> > Speaking of IA-64: Do we have a machine yet? AFAIK not. Do you think HP
> > would be willing to make one availible to Debian?
> 
> Please verify the situation regarding ia64 and get back to me.

HP, via Matt Taggart, is planning to put an IA64 box and an HPPA box for us
at their Fort Collins, Colorado facility.

Bdale has got himself an IA64 box that he is using for Debian stuff as
well.

Jason




Re: package pool and big Packages.gz file

2001-01-08 Thread Jason Gunthorpe

On 8 Jan 2001, Goswin Brederlow wrote:

> Then that feature should be limited to non-recursive listings or
> turned off. Or .listing files should be created that are just served.

*couf* rproxy *couf*

> So when you have more blocks, the hash will fill up. So you have more
> hits on the first level and need to search a linked list. With a block
> size of 1K a CD image has 10 items per hash entry, its 1000% full. The
> time wasted alone to check the rolling checksum must be huge.

Sure, but that is trivially solvable and is really a minor amount of
time compared with computing the MD4 hashes. In fact, when you
start talking about 65 blocks you want to reconsider the design choices
that were made with rsync's searching - it is geared toward small files
and is not really optimal for big ones.

> So the better the match, the more blocks you have, the more cpu it
> takes. Of cause larger blocks take more time to compute a md4sum, but
> you will have less blocks then.

No. The smaller the blocks, the more CPU time it will take to compute MD4
hashes. Expect MD4 to run at > 100meg/sec on modern hardware, so you are
looking at burning 6 seconds of CPU time to verify the local CD image.

If you start getting large 32 bit checksum matches with md4 mismatches due
to too large a block size then you could easially double or triple the
number of md4 calculations you need. That is still totally dwarfed by the
< 10meg/sec IO throughput you can expect with a copy of a 600 meg ISO
file. 
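The arithmetic behind those figures, spelled out with the rates assumed in the text (rough estimates from the paragraphs above, not benchmarks):

```python
# Back-of-envelope: hashing a 600 MB ISO at the assumed MD4 rate vs.
# merely reading it at the assumed disk rate.
iso_mb = 600
md4_rate = 100.0   # MB/s, assumed hash throughput ("> 100meg/sec")
io_rate = 10.0     # MB/s, assumed disk throughput ("< 10meg/sec")

cpu_seconds = iso_mb / md4_rate   # ~6 s of CPU, as stated above
io_seconds = iso_mb / io_rate     # ~60 s of I/O, the real bottleneck
print(cpu_seconds, io_seconds)  # 6.0 60.0
```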
 
Jason




Re: package pool and big Packages.gz file

2001-01-07 Thread Jason Gunthorpe

On 8 Jan 2001, Goswin Brederlow wrote:

>  > Apparently reversing the direction of rsync infringes on a
>  > patent.
 
> When I rsync a file, rsync starts ssh to connect to the remote host
> and starts rsync there in the reverse mode.

Not really, you have to use quite a different set of operations to do it
one way vs the other. The core computation is the same, mind you.
 
> Hmm, which patent anyway?

Don't know, I never heard back from Tridge on that.
 
> I don't need to get a filelisting, apt-get tells me the name. :)

You have missed the point, the presence of the ability to do file listings
prevents the adoption of rsync servers with high connection limits.

>  > Reversed checksums (with a detached checksum file) is something
>  > someone should implement for debian-cd. You could even quite
>  > reasonably do that totally using HTTP and not run the risk of
>  > rsync load at all.
> 
> At the moment the client calculates one roling checksum and md5sum per
> block.

I know how rsync works, and it uses MD4.

> Given a 650MB file, I don't want to know the hit/miss ratios for the
> roling checksum and the md5sum. Must be realy bad.

The ratio is supposed to scale only with block size, so it should be the
same for big files and small files (ignoring the increase in block size
with file size). The amount of time expended doing this calculation is
not trivial, however.
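For reference, the weak checksum under discussion is rsync's rolling sum. A minimal sketch of it (not rsync's actual code) shows why it "rolls": sliding the window one byte is O(1), so the client can scan the whole file cheaply and pay for a strong (MD4) hash only on the rare weak matches.

```python
# Minimal version of rsync's Adler-32-style weak rolling checksum.
M = 1 << 16

def weak_checksum(block):
    a = sum(block) % M
    b = sum((len(block) - i) * x for i, x in enumerate(block)) % M
    return a, b

def roll(a, b, out_byte, in_byte, block_len):
    """Slide the window one byte in O(1): drop out_byte, add in_byte."""
    a = (a - out_byte + in_byte) % M
    b = (b - block_len * out_byte + a) % M
    return a, b

data = bytes(range(40))
n = 8
a, b = weak_checksum(data[0:n])
a, b = roll(a, b, data[0], data[n], n)     # window now covers data[1:9]
rolled_matches = (a, b) == weak_checksum(data[1:1 + n])
print(rolled_matches)  # True
```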

For CD images the concern is of course available disk bandwidth, reversed
checksums eliminate that bottleneck.

Jason




Re: Solving the compression dilema when rsync-ing Debian versions

2001-01-07 Thread Jason Gunthorpe

On 7 Jan 2001, Bdale Garbee wrote:

> > gzip --rsyncable, aloready implemented, ask Rusty Russell.
> 
> I have a copy of Rusty's patch, but have not applied it since I don't like
> diverging Debian packages from upstream this way.  Wichert, have you or Rusty
> or anyone taken this up with the gzip upstream maintainer?

Has anyone checked out what the size hit is, and how well rsyncing debs
like this performs in actual use? A study using xdelta on rsyncable debs
would be quite nice to see. I recall that the results of xdelta on the
uncompressed data were not that great.

Jason




Re: apt maintainers dead?

2001-01-07 Thread Jason Gunthorpe

On 8 Jan 2001, Goswin Brederlow wrote:

>  > The short answer is exactly what you should expect - No,
>  > absolutely not.  Any emergence of a general rsync for APT
> 
> Then why did it take so long? :)

I was traveling.
 
>  > method will result in the immediate termination of public rsync
>  > access to our servers.
> 
> I think that is something to be discussed. As I said before, I expect
> the rsync + some features to produce less load than ftp or http.

No. If this comes about the load will go up, our mirrors will have trouble
getting access, and I then will disable it.

> Given that it doesn't need more resources than those two, is the
> answere still no?

Yes. I cannot create more rsync slots because those slots can just as
easily be used for more intensive operations. 

> again. I hope I got an url for it by then.

Check sourceforge.

Jason




Re: apt maintainers dead?

2001-01-07 Thread Jason Gunthorpe

On 8 Jan 2001, Brian May wrote:

> Do you know when they plan to integrate rproxy support into programs
> like squid, apache and Mozilla (as per their web site)?

No, I do not follow that closely. What we need to see is an apache module
primarily. There are also some (IMHO) serious issues with aborted
transfers and resumption - I don't think those will ever be resolved
completely.

Jason





Re: package pool and big Packages.gz file

2001-01-07 Thread Jason Gunthorpe

On 7 Jan 2001, Goswin Brederlow wrote:

> Actually the load should drop, providing the following feature add
> ons:
> 
> 1. cached checksums and pulling instead of pushing
> 2. client side unpackging of compressed streams

Apparently reversing the direction of rsync infringes on a patent.

Plus there is the simple matter that the file listing and file download
features cannot be separated. Doing a listing of all files on our site is
non-trivial.

Once you strip all that out you have rproxy.

Reversed checksums (with a detached checksum file) are something someone
should implement for debian-cd. You could even quite reasonably do that
totally using HTTP and not run the risk of rsync load at all.

Such a system for Package files would also be acceptable I think.
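A sketch of what such a detached-checksums-over-HTTP scheme could look like, simulated entirely in memory; the block size, names, and the fetch callback are all illustrative assumptions:

```python
# The server publishes per-block checksums once; the client reuses the
# local blocks that still match and fetches only the changed byte ranges
# (standing in for HTTP Range requests).
import hashlib

BLOCK = 4  # illustrative block size

def block_sums(data):
    return [hashlib.md5(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def reconstruct(local, server_sums, fetch_range):
    """Rebuild the current file from a stale local copy plus range fetches."""
    local_sums = block_sums(local)
    parts = []
    for n, want in enumerate(server_sums):
        if n < len(local_sums) and local_sums[n] == want:
            parts.append(local[n * BLOCK:(n + 1) * BLOCK])  # reuse local block
        else:
            parts.append(fetch_range(n * BLOCK, (n + 1) * BLOCK))
    return b"".join(parts)

current = b"aaaaXXXXccccdddd"   # what the server has now
stale = b"aaaabbbbccccdddd"     # what the client has locally
ranges = []
def fetch_range(start, end):    # stands in for an HTTP "Range:" request
    ranges.append((start, end))
    return current[start:end]

rebuilt = reconstruct(stale, block_sums(current), fetch_range)
print(rebuilt == current, ranges)  # True [(4, 8)]
```

The appeal is that the server's only jobs are publishing the checksum file once and answering ordinary Range requests, so the per-client CPU cost stays on the client.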

Jason




Re: apt maintainers dead?

2001-01-07 Thread Jason Gunthorpe

On 7 Jan 2001, Goswin Brederlow wrote:

> I tried to contact the apt maintainers about rsync support for
> apt-get (a proof of concept was included) but haven't got an answere
> back yet.

No, you are just ridiculously impatient.

Date: 06 Jan 2001 19:26:59 +0100
Subject: rsync support for apt

Date: 07 Jan 2001 22:42:02 +0100
Subject: apt maintainers dead?

Just a bit over 24 hours? Tsk Tsk.

The short answer is exactly what you should expect - No, absolutely not. 
Any emergence of a general rsync for APT method will result in the
immediate termination of public rsync access to our servers.

I have had discussions with the rproxy folks, and I feel that they are
currently the best hope for this sort of thing. If you want to do
something, then help them.

Jason




Re: package pool and big Packages.gz file

2001-01-05 Thread Jason Gunthorpe

On 5 Jan 2001, Goswin Brederlow wrote:

> If that suits your needs, feel free to write a bugreport on apt about
> this.

Yes, I enjoy closing such bug reports with a terse response.

Hint: Read the bug page for APT to discover why!

Jason




Re: Something has broken APT on my system...

2001-01-05 Thread Jason Gunthorpe

On Thu, 4 Jan 2001, Heikki Kantola wrote:

> For few days (first experienced this on 1.1.) I've been trying to figure
> out what's wrong with APT as whatever command I try, I get:

Er, ah, er... the only time I've seen that is when someone had too many
items in their sources.list, but I did not think we were even close to
having a problem with that again. It means the package data exceeded 4
meg or some such large number. There is a configuration setting you can
tweak to make that higher; check the apt.conf man page.
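For illustration, in apt of this era the knob was the cache-limit setting; the exact name and the default should be confirmed against the apt.conf man page, and the value below is just an example:

```
// /etc/apt/apt.conf -- raise the package-cache ceiling (value in bytes).
APT::Cache-Limit "16777216";
```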

Jason




Re: apt-get and proxy

2000-09-14 Thread Jason Gunthorpe

On Wed, 13 Sep 2000, Andreas Tille wrote:

> When I wrote, that the proxy variables were ignored just my description
> was wrong.  May be they are used but they are used in an other way
> than if I use settings in /etc/apt/apt.conf.  While trying several different
> proxy-settings (sorry, don't remember) there, I got explicitely the
> message that the proxy is contacted.  Using just the environment

Nope, they are 100% identical. The only way it could not work is if you
were not actually exporting the variable, or were typing something wrong.
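To spell out the equivalence (the hostname and port here are made up):

```
// /etc/apt/apt.conf -- same effect as exporting the variable in the shell:
//   export http_proxy=http://proxy.example.com:3128/
Acquire::http::Proxy "http://proxy.example.com:3128/";
```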

> the time is always the same when updating package list (also doing this
> several times on the same box - at least this could be cached even
> without using a proxy - is this worth a wishlist-bug?) or when obtaining

It is cached - only environmental problems can defeat the cache - these
invariably boil down to defective servers, transparent proxies, or
*something* like that.

Jason






Re: apt-get and proxy

2000-09-14 Thread Jason Gunthorpe

On Wed, 13 Sep 2000, Andreas Tille wrote:

> >From /var/lib/dpkg/available:
> Package: makedev:
> ...
> MD5sum: 7f6b97b984c246ead2c7be45ce4f1678
> 
> /var/cache/apt/archives/partial> md5sum makedev_2.3.1-46_all.deb
> 7f6b97b984c246ead2c7be45ce4f1678  makedev_2.3.1-46_all.deb

Please use apt-cache show makedev rather than the available file, and
verify the version numbers too. 

Are you certain there is not a problem with your CPU/Memory that could
cause this?

See, the only time bytes are added to the hash is when they are written to
the file, so... well, what you are describing is impossible :> I'd like to
see strace (-o /tmp/foo -f -ff -s200) output and script logs of an apt-get
doing this.
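The "hashed if and only if written" point can be sketched like this (illustrative, not APT's actual code):

```python
# Feed each chunk to the MD5 context at the moment it is written, so the
# recorded digest covers exactly the bytes that ended up in the file.
import hashlib
import io

def fetch(chunks, out):
    """Write chunks to out, hashing precisely the bytes that are written."""
    md5 = hashlib.md5()
    for chunk in chunks:
        out.write(chunk)
        md5.update(chunk)
    return md5.hexdigest()

buf = io.BytesIO()
digest = fetch([b"some ", b"deb ", b"payload"], buf)
matches = digest == hashlib.md5(buf.getvalue()).hexdigest()
print(matches)  # True
```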

I did lots of testing of apt-get and most squids and never once
encountered an MD5 error.
 
> Well but some of my boxes don't use NFS and those using NFS have trouble
> with the lock file.  At least I had when I tried.  Any example for 
> /etc/exports and /etc/fstab which handle this right?

You need kernel NFS server for locking.

Jason





Re: apt-get and proxy

2000-09-13 Thread Jason Gunthorpe

On Wed, 13 Sep 2000, Andreas Tille wrote:

> I'm in real trouble with apt-get and a squid proxy.  First of all
> I found out that in contrast to the manual of apt.conf the environment
> variables

Uh..

Wakko{root}~/work/apt2/build/bin# http_proxy="http://void" apt-get install apt
Reading Package Lists... Done
Building Dependency Tree... Done
1 packages upgraded, 0 newly installed, 0 to remove and 362 not upgraded.
Need to get 483kB of archives. After unpacking 142kB will be used.
Err http://sunsite.ualberta.ca woody/main apt 0.3.19
  Could not resolve 'void'

Maybe your shell is foobar 

> Unfortunately I've got MD5 sum errors for all files I got via
>   apt-get install
> which remained in /var/cache/apt/archives/partial if the sources.list

Well, this means the bits that were pulled down don't match what the
Packages file claims. Should never ever happen, of course.

> file enforced http-protocol instead of ftp.  But all files where OK
> and I could cmp them perfectly to files I got "by hand".  I could

Run md5sum on the files in partial and check against the Packages file.
Your cache may be serving a corrupted file, or the end servers
just might be bad. Heck, you might have a 1-bit error that isn't
within any compressed data, so it eludes gzip's CRC.
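The check being described is easy to reproduce by hand. A minimal, self-contained sketch of the comparison apt performs (the payload and the expected sum are illustrative, not from a real Packages file):

```shell
# Hash the bytes the way apt hashes a fetched file, then compare the
# result with the sum the archive advertised for it.
expected="b1946ac92492d2347c6235b4d2611184"   # MD5 of "hello\n"
actual=$(printf 'hello\n' | md5sum | cut -d' ' -f1)
if [ "$actual" = "$expected" ]; then
    echo "md5 OK"
else
    echo "md5 MISMATCH"
fi
```

On a real system the same comparison is `md5sum /var/cache/apt/archives/partial/*.deb` against the MD5sum field from `apt-cache show <pkg>`.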

If they do match, then congrats, you found a bug - though due to the way
the code is that would be .. interesting .. 

> perfectly install them via "dpkg -i".  I guess that the squid-proxy
> prevents a MD5 validation.

Nope, impossible.

> I've thought I could get rid off this problem using ftp-protocol
> in sources.list entries, because the MD5 problem vanished.  Today I
> recognized that the cache is ignored and files are obtained every time
> from the far host instead of using the squid-cache.

FTP proxied over HTTP through Squid is slightly less than desirable; I don't
think If-Modified-Since actually works there (a Squid bug). I *don't*
recommend this configuration, BTW.

> I really hope that there is anybody who can help me out this situation.
> I'm sharing a 128kByte line with many people :-(( and need the cache
> very hard.

I recommend shared NFS of /var/cache/apt/archives... Faster/better than
squid for .debs

Jason





Re: Problems with mail system? [Fwd: Returned mail: User unknown]

2000-09-07 Thread Jason Gunthorpe

On Thu, 7 Sep 2000, Timshel Knoll wrote:

> Oliver Schulze is an upstream maintainer of one of my prospective packages,
> and he's had problems sending mail to my @debian.org address. I believe that
> this is something to do with master's IPv6 configuration - the SMTP error
> message from master is:
> 
> <<< 550 mail from :::216.250.196.10 rejected: administrative prohibition 
> (failed to find host name from IP address)

This is just your standard lack of reverse DNS.. Part of the anti-spam
bit. The sender needs to get working reverse DNS I suppose..

Jason





Re: apt and multiple connections

2000-09-06 Thread Jason Gunthorpe

On 6 Sep 2000, Andrew J Cosgriff wrote:

> On a similar (but kinda opposite) note, are there any plans to add
> some bandwidth-limiting functionality to apt/apt-get ?

No. It isn't very effective to try and do that from an application - use
the services in the linux kernel if you really need it... 
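As a sketch of what "use the services in the kernel" means here, ingress policing with tc can cap inbound traffic (and hence apt's downloads); the interface name and rate are examples, the commands need root, and modern iproute2 syntax is shown:

```
# Attach an ingress qdisc and police all inbound IP traffic to ~64kbit/s;
# packets over the rate are dropped and TCP backs off accordingly.
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 \
    match u32 0 0 police rate 64kbit burst 10k drop
```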
 
Jason





Re: apt and multiple connections

2000-09-05 Thread Jason Gunthorpe

On Mon, 4 Sep 2000, Russell Coker wrote:

> I would like to transfer several files at a time to enable usable throughput
> through slow web caches.  Is there any way this can be done?  If not can this
> feature be added?

If I recall it isn't too hard, but it isn't there, specifically to prevent
yahoos on 'fast' links from tanking our archive servers.

As much as I hate saying it, if you are behind a poor web cache or have an
ISP that QOS's HTTP then you should probably use ftp..

Jason





Re: APT problem

2000-09-02 Thread Jason Gunthorpe

On 1 Sep 2000, Alex Romosan wrote:

> with 'apt-get source -b '. what's the point in having the
> ability to download the source and recompile it automatically if the
> next upgrade will wipe it out. if i choose to recompile a package, apt

Mostly to compile versions that are not available for 'stable' but are
available for slink. All other recompiles really should bump the version
number to keep things sane.

Jason





Re: (Beware helix packages) Re: [CrackMonkey] The right to bare legs

2000-08-30 Thread Jason Gunthorpe

On Wed, 30 Aug 2000, Peter Teichman wrote:

> I have one question. What is the preferred way for me to handle our
> gtk package? This is a library package that we actually apply some
> patches to for a slightly nicer user interface.

Well, we don't have much provision for flavors of shared libraries. The
best solution would be to use versioned provides and provide a differently
named package, libgtk-helix or something.

Jason




Re: APT problem

2000-08-30 Thread Jason Gunthorpe

On 30 Aug 2000, Alex Romosan wrote:

> > It means the libc6 package you have installed has a different md5sum than
> > the package it finds on ftp.corel.com, and assumes that the version on

No, this is not at all how it works..

> which are not on by default and then i have to put the packages on hold
> because apt wants to get the remote ones.

You have to do this anyhow, otherwise the package will get upgraded and
you will lose your changes. In all cases I can think of where a package
is locally recompiled and has not been placed on hold you would indeed
want the 'newer' archive package to be installed. The motivating factor
here is local slink recompiles of potato packages. 
 
> can we please, please reverse the behaviour, or at least make it
> configurable in /etc/apt/apt.conf, something like PreferLocal "yes".
> if there is such an option and i missed it, please point it out to me.

Come up with a reasonable situation where you would want to have a
non-held package not be moved to the archive version of the same version
but be moved to the newer archive version :>

Jason





Re: APT problem

2000-08-30 Thread Jason Gunthorpe

On Wed, 30 Aug 2000, Michael Meskes wrote:

> | Status=Not/Installed/Config-files/Unpacked/Failed-config/Half-installed
> |/ Err?=(none)/Hold/Reinst-required/X=both-problems (Status,Err: uppercase=bad)
> ||/ Name           Version        Description
> +++-==============-==============-===========================================
> ii  libc6          2.1.3-10       GNU C Library: Shared libraries and Timezone
> feivel:~# dpkg --print-avail libc6

> Package: libc6
> [...]
> Version: 2.1.3-10
> [...]
> feivel:~# apt-get upgrade
> [...]
> Get:1 ftp://ftp.corel.com corellinux-1.2/corel_updates libc6 2.1.3-10 [1904kB]
> [...]

> Could anyone please explain this to me? Did Corel do anything to their files
> that makes apt think it has to upgrade although its up-to-date? Or is this
> a bug in apt?

This is a feature going awry in APT.. someone at Corel must have done
something weird to their package files. Basically APT looks at the
control records in the packages for tell tale signs of recompiles. Ie if
you have one package that depends on libc6 and one that depends on libc5
it can know. In this case it differentiates the remote package and the
local package and then always installs the remote package.

What needs to be done is diff the record from the corel package file
against what is in their .deb and see if there is a difference in any
fields.
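The comparison being proposed can be sketched like this; the records below are invented stand-ins, and on a real system they would come from `apt-cache show libc6` and `dpkg-deb --field <deb>` respectively:

```shell
# Write the two control records to files and diff them field by field.
cat > /tmp/from-archive <<'EOF'
Package: libc6
Version: 2.1.3-10
EOF
cat > /tmp/from-deb <<'EOF'
Package: libc6
Version: 2.1.3-10
EOF
diff /tmp/from-archive /tmp/from-deb && echo "records identical"
```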

Depending on what it is I will say one of two things:
   1) Forget it, corel has to update their package file
   2) Oops, APT needs to ignore that difference.

Jason




Re: APT problem

2000-08-30 Thread Jason Gunthorpe

On Wed, 30 Aug 2000, Bernd Eckenfels wrote:

> > Could anyone please explain this to me? Did Corel do anything to their files
> > that makes apt think it has to upgrade although its up-to-date? Or is this
> > a bug in apt?
> 
> I see this quite often, so it is a bug in the current apt lib. aptitude is
> even more vulnerable to this... at least the cache does work so you d/l it
> only once.

*blink* could I hear more details on this?

I'd like to see the dpkg --status for the offending package and the
dpkg-deb -I output for the deb.

Thanks,
Jason




Re: (Beware helix packages) Re: [CrackMonkey] The right to bare legs

2000-08-30 Thread Jason Gunthorpe

On Wed, 30 Aug 2000, Branden Robinson wrote:

> > That is one mechanism of creating a private namespace, isn't another 
> > Setting the origin to something other than Debian?
> 
> Please see elsewhere in this thread for my other remarks on this subject.
> 
> An Origin field is a great idea.

We have one; sorry I am so late making it really useful - very busy, you
know.. 

Assuming I can manage to get in the proper frame of mind this problem
should become much less troubling for most APT users.

The versioning scheme I will suggest is fairly direct:
  1) If the package is derived from a debian package it should encode that
 fact by using -1.storm.1, or -1.1, -0.storm.1, etc or whatever seems
 appropriate.
  2) If the package is not derived from a debian package it should use a 
 plain version, -1.storm, -1 or something
  3) Libraries - All possible effort should be made to make Debian the
 primary source of libraries. Period full stop. This is so important
 because of what we are seeing with helix and their special library 
 packages now. Thus, I suggest the following:
   a) If an add-on vendor needs a newer upstream library then they
  can follow standard NMU procedures, using a -0.1.helix type tag.
   b) If there is some critical bug in the Debian library then they
  should still follow NMU type procedures and work with the
  Debian library packager and upstream to make sure the next rev
  is properly fixed.
   c) I recommend the vendor provide a separate section of their
  FTP site specifically for libraries, tagged with a proper
  Release file. The libraries they collect there should be the
  libraries they use and have modified. It would be best if most
  of these files were exactly identical to what Debian ships.
  Rationale: 
 i) I expect people like helix will include woody
libraries that work on a potato system. These can use the
'usual' 1.2-0.1.woody.1 tagging scheme and probably will not
be included by Debian.
ii) I want the user to be able to say 'I want only helix gnome 
but pick the newest library from the union of
debian+helix'. This is easiest if the libraries are
seperated.
   iii) Libraries are truly a shared resource; we need to take
special steps to ensure apps in Debian linked to them work
and apps in Helix linked to them work - best way to that
is to only have 1 library package that everyone uses and
tests against.

Encoding the vendor tag in the version string will allow the user to know
which version has been installed. It is also important to make sure that
each vendor is creating universally unique package/version/arch triplets.
APT can handle most cases where this is not true but it is *very*
confusing to the end user and is best avoided.

Inter-origin version comparisons are probably fairly pointless - what is
newer? libgnome 1.2-1.storm.1 or libgnome 1.2-1.helix.1 ?
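Assuming dpkg is available, the orderings this scheme relies on can be checked with dpkg's own comparator (the version strings are the examples from above):

```shell
# A vendor tag appended to a Debian revision sorts after it, and a
# -0.x vendor revision sorts before the first real Debian revision.
dpkg --compare-versions 1.2-1 lt 1.2-1.storm.1 && echo "1.2-1 < 1.2-1.storm.1"
dpkg --compare-versions 1.2-0.1.helix lt 1.2-1 && echo "1.2-0.1.helix < 1.2-1"
```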

Selecting which origin is preferred (debian, helix, storm) is done in APT,
via a user configurable system on a per-package and 'default' sort of
basis. Vendors should not try to use weird versioning like epochs and
-storm and so on to enforce an upgrade path.
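For illustration, this is roughly what such a per-origin preference looks like in APT's later preferences syntax; the origin string "Helix" is an assumption about what the vendor would put in its Release file:

```
# /etc/apt/preferences -- prefer packages whose origin is Helix
Package: *
Pin: release o=Helix
Pin-Priority: 900
```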

I hope there will ultimately be a nice simple command that says 
'Prefer to install packages from helix' which can be placed in
installation instructions and in installation scripts.

Night,
Jason




Re: Potato now stable

2000-08-19 Thread Jason Gunthorpe

On Fri, 18 Aug 2000, Anthony Towns wrote:

> Presumably sections and tasks will both be subsumed by this. I think
> these should probably be handled differently: saying "I want the games
> task" should probably default to installing all; whereas you'd probably
> not want to say "I want the games section" and have all of them installed.

Well, is this really an issue? If we maintain the task-* prefix it becomes
clear to the user.. Maybe someone will want to install a full section
- especially if our sections become significantly more useful!

> Changing the meaning of "Section" like this is probably dependent on
> getting dinstall rewritten and the archive restructured first.

Hm, Possibly. I'd have to ask James of course.

> > be installed. The UI tool will track when new packages are added to groups
> > and present that information in conjunction with the traditional new
> > packages display.

> This sort of behaviour probably wouldn't be suitable for sections. Are
> there any other "grouping" style things apart from sections and tasks
> that we can consider?

Why? Right now our sections are pretty useless because they have too much
of a mishmash of things in them. But that doesn't have to remain true.
 
> This makes the "extra" priority not really fit in though: while you can
> (in theory) install all packages of any of the other priorities you
> specifically *can't* do this with packages in extra. This priority is

True - eliminating it would be my answer. 'extra' packages are grouped into a
view by sections or by name - but not by priority.

> I suspect you'd want a different interface to play with priorities than
> with tasks though, too.

Possibly, I don't know..

> (if you *really* group everything into just one way of doing things),
> but I think this would probably require icky handling on behalf of apt
> or dselect. It probably *would* make it much easier to introduce new
> styles of groupings in future though.

If people want to see this then internally I will convert all groupable
things into whatever the internal group representation is - that makes it
much, much, much simpler to deal with. It isn't so important if that is
done in the archive or not.

Do people like this idea? I mean - if nobody cares I'm certainly not going
to spend any time on it.

Jason




Re: Intent To Split: netbase

2000-08-17 Thread Jason Gunthorpe

On Thu, 17 Aug 2000, Herbert Xu wrote:

> snmpnetstat will show the routing table of routers that export it
> through SNMP.  My point is that route in this case is simply a
> special case of snpmnetstat.

Most routers have a security arrangement so that the information is not
public.

Jason




Re: Problem with apt on slink systems

2000-08-16 Thread Jason Gunthorpe

On Wed, 16 Aug 2000, Alexander Reelsen wrote:

> Where the heck does the word 'stable' come from? I removed my whole
> /var/state/apt/ and I do not know where it comes from. Hardcoded anywhere
> perhaps? Or did I miss something grave?

The slink package files have this inside.. That needs to be changed by us.

Jason




Re: compaq iPaq

2000-08-16 Thread Jason Gunthorpe

On Tue, 15 Aug 2000, Joey Hess wrote:

> It's been pointed out that emdebian (http://www.emdebian.org/) is
> essentilly an effort to do just this.

It is? I use their stuff and the main focus is cross compilers and cross
environments for debian, not really shrinking and porting debian proper. 

That is, it is more tools for embedded development rather than an
embedded operating system.

Jason




Re: Intent To Split: netbase

2000-08-15 Thread Jason Gunthorpe

On 15 Aug 2000, Manoj Srivastava wrote:

>   Is it really your contention that all MTA's should provide for
>  this configurability, and cooperate with all other MTA packages out
>  of the box? I am afraid that all this handshaking is going to entail
>  a lot of effort, and the resultant gains seem fairly minimal (

No, it is just not common enough to be worthwhile. The way you do it by
hand is to divert sendmail, install the alternate mailer, carefully work
around the TCP service problem, then hack the status file to make things
sane again. It isn't unduly hard.

Jason




Re: Potato now stable

2000-08-15 Thread Jason Gunthorpe

On Tue, 15 Aug 2000, Anthony Towns wrote:

> What I'd like to happen is basically be able to remove the package,
> and just have the task automatically act as though that package had
> never existed. Not complain in dselect about it, not worry people when
> Apt gives you a warning, not do anything.

Well, this is what I was trying to say before - logically it makes a lot of
sense if packages are members of groups; this is the reverse of what we
have now - a list of packages in a group.

Delivery and storage of this data has *lots* of options.. 

Let me outline more clearly how I think task packages should work from a
user's POV:

The user should see a list of groups (I will call them this because I
think groupings can be more general than just tasks). The UI tool will
allow sorting and searching of the groups and when browsing individual
packages it will be possible to see what groups they are part of. 

The user can select that a group is of interest to them and mark it for
'installation'. Once done this means all packages currently in the group
will be installed and all new packages added to the group in future will
be installed. The UI tool will track when new packages are added to groups
and present that information in conjunction with the traditional new
packages display.

A tree-like display can be used to show what packages are part of a group
and allow individual selection. Since some groups are quite large it may
make sense to categorize the package lists into finer subgroups
(primarily to help the user navigate around, but they could be separate
at the top level too) that can all be individually selected for install. 
[Example: task-python-critical, task-python-web, task-python-gui]

Since there is a tree-like display the user can pick off individual
sub-packages of the group, which would now serve nicely as an
organizational tool. Packages may belong to many groups and appear in
multiple places in this tree - again for organization.

Important/standard/etc priorities would become mega-groups, most people
would run with important and standard set to install - [like dselect
does], but this becomes optional - and much more controlled.

I can see that blacklisting within a group may be useful on a limited
scale. The blacklist would be expressed as 'packages a1,a2.. in group b
are not to be installed, but the rest of b is' which allows undesired
components to be eliminated by the user. Most groups should be designed to
minimize this, hence this is primarily aimed at the mega-groups rather
than smaller ones. (This is a similar, but stronger statement than your
original proposal - not automatic either) 
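Purely as a hypothetical illustration of the data such a scheme would involve (none of these fields exist anywhere; the names are invented for the sketch):

```
Group: gnome
Members: gnome-core, gnome-session, gnome-games
Install: yes
Exclude: gnome-games
```

i.e. "install the gnome group and whatever is added to it later, except gnome-games".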

So now we can bring organization in on a grand scale. I can envision task
package groups that are like we have now, small very focused things,
priority groups which reflect the standard UNIX view of a system, and new
kinds of purely organization groups (how about a gnome mega-group?). We
could bring some sanity to the section arrangement by having things be
part of multiple sections, and provide stronger guidelines and more
sections.

And if you recall what I said in my last message about recommends - take
this same concept and apply it to a 'micro-group' of a single package
(where recommends and suggests form sub groups) and you have a simple
understandable concept that can be applied and used for about 5 different
things! In my book that's a good thing!

If we can work out the details I think an idea like this could help in 
*a lot* of areas, and is not really super complicated for us to deploy!

Jason







Re: Intent To Split: netbase

2000-08-15 Thread Jason Gunthorpe

On Tue, 15 Aug 2000, Jacob Kuntz wrote:

> while i can't imagine ever justifying having postfix AND exim installed on
> the same machine, your argument holds true for other things. for instance,
> it's not uncommon to see a machine that has apache running on 80 for

I've done it - had to really.. Two reasons
   1) Exim provides a different command line interface than say qmail,
  some software simply will not work. Thus we need a mail agent to
  move messages outbound only.
   2) Migrating between mailers. Need to have both operational at once
  in order to flush queues, test and migrate.

> modperl pages, with thttpd or aolserver on 8080 for static content. not to
> mention what will happen when we see TUX packaged.

Yes, good examples too. 

Jason




Re: Potato now stable

2000-08-15 Thread Jason Gunthorpe

On Mon, 14 Aug 2000, Joey Hess wrote:

> Drake Diedrich wrote:
> >Under the Irix packaging system (quite nice UI except that it has to
> > handle Irix packages..) packages exist in a hierarchy, with lowest level
> > packages quite fine grained.
> 
> Wow, I quite like this. How could we do it?

This is the ultimate in micropackaging - doing something like that would
solve so many different requests in one big *splat*.

We could have sparc32/64 binaries, PIII optimized binaries, systems
without /usr/doc, etc.

Off hand, I would suspect you'd take an arbitrary .deb and carve it into
sub-packages internally - this is for efficiency.. Other debs can come
along and cleanly install over the sub packages. Ex:

You have apt_1.1_i386.deb which contains
'doc'
'binary'

And an apt_1.1_i686_bin.deb which just has
'binary' 
Inside

Package tools would sort that out through some magic means..

Of course this is all just off hand... :>

Jason





Re: Potato now stable

2000-08-15 Thread Jason Gunthorpe

On Mon, 14 Aug 2000, Joey Hess wrote:

> Jason Gunthorpe wrote:
> > Tasks are bettered handled through some kind of non-package means. I've
> > long said we need to determine some kind of meta-package scheme (a
> > 'package' whose only purpose is to logically group other packages).
> 
> How is introducing some basterdized form of package (perhaps it's just
> an entry in the Packages file or something), going to allow us to
> address problems like aj was talking about, where one of the things it
> depends on is removed from debian, and it needs to be updated?

You already have a bastardized form of packages, that's what a task package
is! The reason there are problems is specifically because task packages
*aren't really packages* and we don't have enough expressiveness in our
packaging system to make them really work in a good way. [nor should we,
IMHO]

Trying to put hack upon hack into the package tools to support
magic-special packages in a limited fashion does not seem to be a good
solution because:
  1) They are not packages!
  2) You will never get everything you want because you are treating
 specialized data in a generic way

The exact problem AJ is talking about is easily handled when you no
longer have task packages, because suddenly there are no more dependencies;
you have a grouping which can be as strong or weak as the user+packager
desires. 

Your suggestion would work to solve AJ's problem, but it suddenly makes
apt-get act really damn weird. You now have a black list of packages which
are hidden from recommends. This black list can't be updated if someone
uses dpkg because it doesn't know about it, and there is not really a
super-good way to edit it and it doesn't buy you anything in terms of ease
of use and organization. 

I suspect the model APT GUIs, and perhaps apt-get too, will use for
recommends will be a white list where specific packages have their
recommends and suggests promoted to depends under user control. That list
can be fully maintained safely within APT and matches the familiar model
that dselect uses. (pull stuff in, don't exclude stuff out) 

We also already have the concept of groups (priority/section), our users
are familiar with it - we even have automatic groups ala task-packages
(priority=important). So why not enhance that and create something
really spanky?

> > priorities of packages (ie -python doesn't need to install every freaking
> > package, but some are definately critical) and the ability to track and
> > optionally install new packages added to the group, remove the whole
> > group, etc.
> 
> I don't disagree that all this would be nice, but it seems like icing on
> a cake that's just hiding the nasty holes.

Eh? That's completely unreasonable - the entire point is that expressing
groupings using the dependency mechanism has severe drawbacks, you have to
get away from that - you can't consider anything else as full of holes and
expect to fix any of the drawbacks!

> > Logically, the way to represent this is to have package declare their
> > membership in a grouping.
> 
> You know, we had this discussion already. Please see the list archives
> of this winter. We decided this was not the correct way to do it,

I'm well aware of that - and that has zippo to do with delivery of the
data. We already have the ability to override sections and priority;
groups are not a big stretch. 

Inlining group membership with each package is a good way to deliver this
data without making major changes to the delivery system, another option
is to throw another index file in the archive or somehow abuse the content
of the Package file. But the best option from a modeling viewpoint is to
have packages be members of groups, not have groups with packages in them.

Jason




Re: Potato now stable

2000-08-14 Thread Jason Gunthorpe

On Mon, 14 Aug 2000, Joey Hess wrote:

> > * Tasks are great, but task-* packages suck when some of the
> >   packages included have release critical bugs. (Remove the
> >   package, the entire task breaks)
> 
> You know, if apt could only support Recommends, task packages could be
> a lot saner. Sure, it'd still be ugly if something they depended on went
> missing, but at least they'd still be usable.
> 
> I think apt could support recommends like this:
> 
> * Automatically install all recommended packages when
>   installing/upgrading a package.
> * If a package that something recommended was manually removed, don't
>   re-install it next time a package that recommends it is installed.
> 
> Of course whether this is doable is up to Jason..

I don't care for this much, it breaks the model that apt-get follows, it
adds this extra variable of 'things that were removed' which can lead to
subtle unexpected behavior. The way it is now the command-line tool
consistently ignores recommends/suggests, like dpkg. Higher level tools
are free to do whatever they want.

Tasks are better handled through some kind of non-package means. I've
long said we need to determine some kind of meta-package scheme (a
'package' whose only purpose is to logically group other packages).

Clearly the desired effect of all meta-packages is to provide the user
with a single node to manipulate and view a group of packages. They should
have special properties in any UI, you should be able to view and
manipulate their grouped packages. Ideally the grouping would have
priorities of packages (ie -python doesn't need to install every freaking
package, but some are definitely critical) and the ability to track and
optionally install new packages added to the group, remove the whole
group, etc.

All this data is orthogonal to the dependency structure. Perhaps if some
thought is put into this a rational solution to the package splitting
problem can be found (convert the old 'big' package into a meta-package
before touching the original 'big' package -> provides a simple and safe
transition?) 

If you take this thought to its logical extent then things like the
important priority are merely hard-coded examples of this.

Logically, the way to represent this is to have packages declare their
membership in a grouping. This could be done via the override file so as
to maintain a centralized authority like we have now with the task
packages. Groups and user preferences about them could be stored separately
from the status file.

Jason




Re: Signing Packages.gz

2000-04-02 Thread Jason Gunthorpe

On Sun, 2 Apr 2000, Marcus Brinkmann wrote:

> This is a seperate problem. I agree that this should not be the case, but it
> has no place in this discussion. If individual developer keys are
> compromised, we have a problem no matter what. Developers should not store
> secret keys on net connected machines, point.
> 
> However, this only affects the developers packages, not the whole archive.
  ^
 
GAH!? Don't you see that isn't true?? Look, a hack attempt would go like
this.

  1) Break root on master
  2) Use that to break a user account on a developer victim (any will do)
 (Hint: I have already shown that torsten at least could be 
  attacked quite easily)
  3) Steal PGP key
  4) Use stolen PGP to form new glibc package with trojan, sneak into
 archive using #1

If #1 is possible then #3 and #4 sure as heck will be too! Furthermore,
this is lethal; it can affect stable, unstable, distributed CDs -
everything! What is worse, once you know it has happened - how do you
determine which PGP key has been stolen? You have to *manually* go
through every single package and check the signer by hand to make sure it
is all correct. Only someone very well versed in the ftp archive can do
this.

In fact, any time a developer is forced to revoke his key for any reason
it calls the security of 'fixed' things we have distributed (stable
basically) into question; you can't quite tell if that CD out there is
legit or modified. This is a very serious weakness. Think about that, it
is important.

With a dinstall key it goes like this
  1) Break root on master
  2) Hack archive use dinstall key.

However, an attacker doing this can only ruin unstable; our stable
distribution and all CDs *remain secure*. The archive itself is recoverable
because the process above can be done. 

This is also very easily recoverable: we revoke the dinstall key, create
a new one signed by the security key, and automated tools can fix the
situation without hassle. The dinstall key has no permanence (on CDs and
the like) so this isn't a big deal.

With the secure dinstall key things are the best they can be:
  1) Break root on wichert's machine
  2) Steal security key
  3) Break root on master or forge CD's

Now we assume Wichert is very careful with the security key [more
careful than the average developer] so #2 is very very hard - thus this
is the most secure alternative of the two above. But it is impossible to
use on a daily basis.

Jason



Re: Signing Packages.gz

2000-04-02 Thread Jason Gunthorpe

On 2 Apr 2000, Robert Bihlmeyer wrote:

> > Solution: remove the identity from .ssh/authorized_keys on my home
> > machine.
 
> Note that *any* keys that your agent holds can be snarfed by the
> admin(s) of any hosts where you ssh-in with agent forwarding enabled.

No, that is the point of ssh-agent. The key never leaves your machine; the
authentication request travels through SSH to your agent, and then back
again with the proper encrypted credentials. So long as your ssh session is
active an attacker can use the agent to access other machines you normally
ssh into and presumably implant his own authorized_key.
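To make the mechanism concrete (the socket path and hostname are illustrative): the remote host only ever sees a proxy socket for the agent, never the key material, but root there can borrow that socket while the session lasts:

```
# On the remote host, a forwarded agent appears as a socket:
echo "$SSH_AUTH_SOCK"                    # e.g. /tmp/ssh-XXab1234/agent.1234
# root on that host can point at the socket and authenticate as you while
# the session is up -- without ever being able to extract the key itself:
SSH_AUTH_SOCK=/tmp/ssh-XXab1234/agent.1234 ssh your-home-machine
```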

Jason



Re: Signing Packages.gz

2000-04-02 Thread Jason Gunthorpe

On Sun, 2 Apr 2000, Julian Gilbey wrote:

> On Sat, Apr 01, 2000 at 03:16:23PM -0700, Jason Gunthorpe wrote:
> > How many people
> > forward ssh agents and put that key in their home .ssh/authorized_keys?
> 
> What does that mean?  It could easily be that I am doing something
> wrong without even realising it

If you can ssh into your machine using RSA authentication and the key you
use for that is in your ssh agent and you forward your agent then you can
ssh from master back to your home machine without a password - and thus so
can root. 
 
Jason



Re: Signing Packages.gz

2000-04-01 Thread Jason Gunthorpe

On Sat, 1 Apr 2000, Marcus Brinkmann wrote:

> We already use link 1 (signed changes files), and trust it. This won't
> be changed by either proposal. Yes, even in the signed packages file you
> trust all developers keys.

We only trust link 1 due to the vigilance of our FTP masters and people
reading -changes lists to make sure that, say, glibc is not uploaded by
someone other than Joel. That is a critical part of the trust in that
step.
 
> Now link 2. It is currently absent. What you seem to suggest is to add a key
> (dinstall-key) here, so the user can verify the archive. This adds a point
> of weakness. As the dinstall key can't be used automatically and kept 
> "truly"[1]

How about this: if someone was able to hack master to the point of being
able to get the dinstall key, I assure you they would be able to hack
some weak developer machine and lift their key too. I also assert that the
chance of a hacker getting the security key is lower than for, say, 50% of
the keys in our keyring.

Furthermore, it is comparatively easy to revoke a dinstall key - much
harder to detect and revoke individual keys.

> What link 2 asserts instead is that the packages come from master. It solves
> the mirror problem, but does not solve the master problem.

The master problem cannot be solved in an automatic fashion, it will
always require skilled intervention by a human.

Jason



Re: Signing Packages.gz

2000-04-01 Thread Jason Gunthorpe

On Sat, 1 Apr 2000, Marcus Brinkmann wrote:

> Wrong. If you have signed debs, and you are careful when updating the
> debian-keyring package, there is no risk even if master is compromised.

Hahha!

Sorry, you are deluded if you believe this :> Seriously, if someone can
hack master we are all vulnerable - how many people out there do you think
use the same password on master as on their home boxes? How many people
forward ssh agents and put that key in their home .ssh/authorized_keys? How
many people have foolishly left their pgp key on master?

Hint: Lots to all of the above [except the last, we purged a bunch of
people for that awhile ago].

If master is compromised right now, we would take the d-changes archive
from a more secure machine [which we may not even have, hence the interest
in storing that in the archive], a slink CD, some potato CDs developers
might have, etc, and begin painstakingly verifying each and every .deb and
.dsc to make sure it comes from where it was supposed to come from - there
is no automated way to do this and only people like James would actually
know who should be signing what packages.

Jason



Re: Signing Packages.gz

2000-04-01 Thread Jason Gunthorpe

On Sat, 1 Apr 2000, Marcus Brinkmann wrote:

> In the signed .debs case, I, as a developer, assert that the package comes
> from me. A user can directly verify this by checking the signature.

No, the user cannot verify that. The user can check the signature against
our keyring but they have no idea who *should* have signed it. This means
that all I need to do is nix one of our maintainers keys and I can
undetectably forge Debian packages willy nilly. 

This is aside from the other problem of keeping 600 keys up to date on the
client machines and making sure that huge keyring is not disturbed in
transit. 

> whatever comes from dinstall, but he can not directly check if what is in
> the archive comes really from the developers (not a problem if dinstall can
> be trusted).

If we store the .changes files as I propose then the end user can check
them, if they want. But nobody will, because it is not a useful thing to
check. It has use in definitively verifying the root archive (say, after a
hacking or something) but otherwise the end user cannot make much use of it
at all.

> The latter adds a chain, thus one further point of weakness. I might add
> that as the dinstall key can't be kept truly secret if it is stored on a
> net-connected machine, this weakness is rather huge.

The dinstall and security keys (particularly the security key) are going
to be far, far more secure than the weakest key in the keyring.

> I could not trust either. The former, because it is stored on a network
> connected machine, the latter because it is transfered over the net (if it

This is a flawed assertion - by your logic SSL is insecure and must not be
used. In reality it is a perfectly good system that has really good
security benefits.

Jason



Re: Signing Packages.gz

2000-04-01 Thread Jason Gunthorpe

On Sat, 1 Apr 2000, Anthony Towns wrote:

> Why would verifying a new security-key necessarily be significantly harder
> than verifying a new unstable-key, though? In both cases you only really
> want to check that its signed by the previous security-key.

But in the other case it replaces/augments the security key; having an
automatic means for that seems like a bad idea.

> A global index wouldn't be entirely appropriate for partial mirrors. *shrug*

The file would be small; people can mirror it too. Partial mirrors are
going to need more and more special care in the future, so I don't think
this is a concern.

> How would you go about signing half of a global index with the unstable
> key, and leaving the rest signed by the security key?

Two indexes each signed by their respective keys, and the two keys.
 
> Having a new file right next to the old Packages.gz file might be
> easier to ensure mirroring too. I'm not sure where you'd put a global,
> signed index? *shrug*

debian/indices with the rest of that stuff.
 
> You could have both, if you wanted, too, I guess. How would the index
> be particularly more useful?

I've always wanted an index :> It is simpler to work with and faster
overall (two gpg checks vs ~36; gpg is very, very slow). It would also have
file sizes - I like file sizes :>

Jason



Re: Signing Packages.gz

2000-04-01 Thread Jason Gunthorpe

On Sat, 1 Apr 2000, Anthony Towns wrote:

>   * the web of trust, and having the ftp-team sign it

The average user has no entry into the web of trust, so this is just as
useless (and massively involved for our poor end user).
 
>   * putting a fingerprint on the website and in Debian books,
> and making it easy for people to verify said fingerprint

This is probably the only thing we can do.

> This key (or the private half thereof) wouldn't need to be anywhere near
> any public machines, either.

? The dinstall daily key has to be on master and have no password. The
security key is kept by a handful of people, on their local machines, who
are rather paranoid.
 
> Stick it on the ftp site, and use the web of trust. (If the secure-key that
> you currently have trusts it, then it's good. Either because it's an update
> of the old secure-key, or because it's an unstable-key).

The security key must never be obsoleted; it should last the lifetime of
the project - anything else is too complicated for our users :|
 
> or so before gzipping anything.

I'd like a separate global index; that is much more useful really.

Jason



Re: Signing Packages.gz

2000-04-01 Thread Jason Gunthorpe

On Sat, 1 Apr 2000, Anthony Towns wrote:

> I'm not sure why this isn't getting through. Automatically, cavalierly
> signing Packages.gz on master *HAS DEFINITE GAINS OVER THE PRESENT WAY
> OF DOING THINGS*.

How exactly do you propose to transfer a verification key to the clients?
I can't think of any decent way to do this that isn't prone to some kind
of hack-a-mirror thing or involves annoying extra steps.

You are wrong about signed .debs vs signed package files. Signed .debs are
not worth the bytes to transfer a signature or the time to check it. Their
only real use is to check the master archive against hacking/corruption,
and even that is better served by saving the uploaded .changes file
[preferably on multiple hosts, hence d-devel-changes]. In fact I would
argue .deb sigs only give people a false sense of security, because they
make the system as weak as the weakest key in our keyring.

Signed package files, on the other hand, provide a really fast and
efficient way to definitively verify the whole chain, from us to the user.
In particular, we could have a relatively insecure daily-use dinstall key
[for unstable] and a strong release key (aka the key the security team
uses). When we do a release, all the package files are signed using the
security key and we have a nice sealed package that can be checked quickly
and efficiently by the users.

The trick, however, is to distribute the security key automatically and
efficiently. [The dinstall key can be derived from this one] Ponder
ponder.

Incidentally, the dinstall change is all of (basically)

cd .../dists/unstable/
find \( -name Packages -o -name Packages.gz \) | xargs mymd5sum | gpg --clearsign > Packages.sig

stable would be signed once at each release time by the security team.

APT would need to download this file, verify it, then load the md5s
internally for checking the package files. It would also have the nice
side effect of providing accurate progress meters for package files
(since you would want to include sizes in the index).

There are some tricky details about how to locate the .sig file for each
package file, but that I think is fairly resolvable..
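
The check APT would do on the client side can be sketched with a toy
archive. The paths and file names below are made up for illustration, and
the gpg --verify step on the clearsigned index is elided; the point is the
md5sum comparison that follows it.

```shell
# Toy archive: what dinstall would sign vs. what the client checks.
tmp=$(mktemp -d); cd "$tmp"
mkdir -p dists/unstable/main/binary-i386
echo "Package: demo" > dists/unstable/main/binary-i386/Packages

# Server side: the payload that would go inside the clearsigned Packages.sig
find dists -name Packages -exec md5sum {} + > index

# Client side: after gpg --verify succeeds, recompute and compare the sums
md5sum -c index >/dev/null 2>&1 && before=ok || before=bad

# A tampered (or half-mirrored) Packages file is caught immediately
echo tampered >> dists/unstable/main/binary-i386/Packages
md5sum -c index >/dev/null 2>&1 && after=ok || after=bad

echo "clean: $before  tampered: $after"   # -> clean: ok  tampered: bad
```

A mirror cannot alter a Packages file without failing this check, which is
what makes hack-a-mirror attacks detectable once the key is distributed.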

Jason




Re: [Election Results] Official and Final

2000-03-31 Thread Jason Gunthorpe

On Fri, 31 Mar 2000, Josip Rodin wrote: 

> 216 people, if I counted right (wc(1) :). So much for the `300 active
> developers' vaporware, even if you include dissidents et al...

I think it just clearly shows a typical lack of election interest. FYI,
Echelon has confirmed a total of 346 developers by PGP verification.

Jason



Re: RBL report..

2000-03-29 Thread Jason Gunthorpe

On Thu, 30 Mar 2000, Craig Sanders wrote:

> you were lucky enough to be able to set up something at work. many
> others will be able to setup something similar. debian developers
> should have the option of a uucp account from one of the debian servers
> (trivially easy for us to set up). 

I think we have been over this in various forms; I don't think we can do
it without some complications, and it would be an inappropriate use of
sponsored machines/bandwidth.

It would be better for someone else to provide a service like this.

Jason



Re: RBL report..

2000-03-29 Thread Jason Gunthorpe

On Wed, 29 Mar 2000, Larry Gilbert wrote:

> Why is murphy.debian.org not adding a "Received:" header to show where
> messages are originating?  This information is useful when trying to
> track down actual spammers.  Is this being deliberately omitted or does
> qmail just normally not include this info?

This is deliberately removed, we had some problems a year or so ago with
the received lines getting too long for some mailers. We are looking at
putting them back.

Jason



Re: Idea: Debian Developer Information Center

2000-03-29 Thread Jason Gunthorpe

On Wed, 29 Mar 2000, Raphael Hertzog wrote:

> No, not yet. But as it must integrate in what we already have ... WML has
> support for eperl. But I have decided of absolutely nothing and it's
> possible that I end without eperl and without php with a simple perl
> script (I don't know python but most of the db.debian.org stuff is written
> in Python AFAIK) that will generate the HTML files.

We have chatted about moving db.d.o to PHP4, which has ldap and the right
sort (tm) of session management support. The web site is written
completely in perl.

Jason



Re: RBL report..

2000-03-28 Thread Jason Gunthorpe

On Tue, 28 Mar 2000, Alexander Koch wrote:

> DUL is interesting. I changed my mind on that. I rather say
> we use it since the amount of spam is certainly increasing
> the last weeks and DUL is understandable.

Yes there is more spam, but I've been looking and I haven't seen much
(if any at all) that would be blocked by DUL.

Jason



RBL report..

2000-03-26 Thread Jason Gunthorpe

Okay, since everyone really desperately wants to know, I ran the numbers
on the effectiveness of RBL, RSS, DUL and ORBS against the mail intake for
lists.debian.org. All of this is theoretical and done offline against the
log file; we are blocking only via RBL (and now RSS).

The period of analysis was 1 week.

Stat #1
  Of 3054 unique IPs 386 are in one of the RBL's, the breakdown is:
   RBL - 16
   RSS - 45
   DUL - 49 [17 rcn.com, 14 psi.net]
   ORBS - 314
  Comparing connections it is found that 3970 out of 40236 connection
  attempts would have been blocked. This can be roughly considered to be
  3970 emails blocked.

Stat #2
  Cross referencing the IP list against the bad bounce log shows 13 IPs. 
  These are highly likely to be legitimate emails.

Stat #3 
  Cross referencing the IP list against the content filtered spam log
  shows 0 hits [not surprising, this log is very small].

Stat #4
  Taking the list of all subscriber domains and substring matching it
  against the list (loosely, checking for people who are blocked but
  subscribed to the list) gives 226 matches. Breakdown:
RBL - 1 
RSS - 12
DUL - 26
ORBS - 196
  The RBL and RSS hits show a very good chance of actually being
  legitimate list subscribers :< It is impossible to tell with DUL whether
  the host is a subscriber on a modem or something else. ORBS is too
  prolific to check by hand.

Stat #5
  Collecting IPs from all received and relayed (ie good) list mail and
  correlating gives 28 matches. Breakdown:
RBL - 0[Expected, we are banning RBL]
RSS - 1
DUL - 18 [17 from a single user on rcn.com]
ORBS - 10
  Note, during the 1 week period I estimate that no more than 5 unique
  spams were received. Many of the spams were sent to all lists. Also
  note that aliases like [EMAIL PROTECTED] are not covered by these
  stats.

There seems to be a huge mismatch between messages accounted for and
messages taken in; I think these are due to successfully processed bounces
by the list software, which do not get logged [?]

Conclusions

I have been unable to conclusively show that any of the RBLs are actually
reducing spam, but I have positively confirmed that they *all* (save RBL,
which I cannot check since we block on it) would result in legitimate
messages being blocked.

ORBS deserves special mention because of its insane hit count. I don't
know what that is about, but ORBS would block 10% of the mails we get. I
think it is without question that the majority of those blocks are
legitimate mails. ORBS is also almost completely inclusive of RSS and
RBL.

DUL would seem to affect at most maybe 10 people, but it hasn't actually
been shown to stop any spam - so this needs more investigation. DUL has a
policy that many people find objectionable.

A perusal of the DUL IPs suggests they are *all* modems, which is a really
selective filter swath. No DSL or cable IPs appear to be listed!

RBL has not been conclusively shown to stop spam, but it has such a low
impact (<3 unique hits each day) that we use it anyhow.

RSS has been observed to list the occasional spam; this is expected since
they respond to spammer activity - but it is also shown that it will
affect at least 1-2 people.

* Note, once a site is listed in one of these RBLs it becomes impossible
for a user to unsubscribe from our lists - no matter what they do they
will never be able to communicate a bounce or an unsubscribe request - this
is pretty bad.

Jason




Re: Apt-Problem

2000-03-20 Thread Jason Gunthorpe

On Mon, 20 Mar 2000, Andreas Tille wrote:

> output after failing to install 42 packages).  I repeat: All packages
> were installable with dpkg -i after apt-get was unable to install

That doesn't mean anything; if the file was only 1 byte short, chances are
it would still be entirely valid. dpkg -i would take it, but apt would not,
due to a size and md5 mismatch.
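
The difference is easy to see with a toy file (the name and contents here
are hypothetical; assumes GNU coreutils). The Packages entry records an
exact Size and MD5sum for every .deb, and apt checks both, so even a file
one byte short is rejected:

```shell
# Record the size and md5 a Packages entry would carry, then truncate.
tmp=$(mktemp -d); cd "$tmp"
printf 'pretend this is libtool_1.3.3-9.deb' > libtool.deb
expected_size=$(wc -c < libtool.deb)               # what Packages would list
expected_md5=$(md5sum libtool.deb | cut -d' ' -f1)

truncate -s -1 libtool.deb                         # simulate a cut-off download

actual_size=$(wc -c < libtool.deb)
actual_md5=$(md5sum libtool.deb | cut -d' ' -f1)
[ "$actual_size" -eq "$expected_size" ] || echo "Size mismatch"
[ "$actual_md5" = "$expected_md5" ]     || echo "MD5 mismatch"
```

dpkg, by contrast, only looks at the archive's internal structure, which
can remain parseable after truncation.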

Jason



Re: [transcript] source package formats

2000-03-20 Thread Jason Gunthorpe

On Mon, 20 Mar 2000, Adam Heath wrote:

> You'll note the addition of 3 fields(Format, Patches, and Tarballs), and the
> different files specified for the files field.  The existance of a Format

Having a .tarballs.tar.gz seems rather pointless; just have all the tars
separate. The same goes for including the md5s of the patches in the .dsc -
have a manifest file in the patch tar instead. Though the point of that
does rather elude me.

Jason



Re: Apt-Problem

2000-03-18 Thread Jason Gunthorpe

On 18 Mar 2000, Brian May wrote:

> I believe the original poster used dpkg -i to install the same copy
> that apt had downloaded - ie only one copy ever downloaded.

Then dpkg should have failed to install it since it is a truncated file.
 
> Not sure about libtool, but have a look at bugs 60339 and 60399 for a
> similar problem with man-db. This was posted as another thread on
> debian-devel.

This looks like something entirely different.

Jason



Re: Apt-Problem

2000-03-18 Thread Jason Gunthorpe

On 18 Mar 2000, Brian May wrote:

> >> libtool 1.3.3-9 [177kB] Failed to fetch
> >> 
> http://ftp.tu-clausthal.de/pub/linux/debian/dists/frozen/main/binary-i386/devel/libtool_1.3.3-9.deb
> >> Size mismatch E: Unable to fetch some archives, maybe try with
> >> --fix-missing?
 
> Jason> This means your mirror is broken, try another site.
 
> Not if it works when manually installing the same deb file with
> dpkg...

Maybe by the time you downloaded the new file your mirror had fixed
itself. That error means the .deb apt fetched was too small, ie it was
still being downloaded.

Jason



Re: ITP: dvipdfm - A DVI to PDF translator

2000-03-17 Thread Jason Gunthorpe

On Fri, 17 Mar 2000, Brian Mays wrote:

> > But the comment says the whole story, it is compatible with standard
> > Adobe fonts, aka times which is what I had the problem with.
 
> Jason - You would think so.  Nevertheless, try it; it works.  Therefore, I 
> must assume that the comment is incorrect.

In that case, this is very exciting :>

Jason



Re: ssh & master

2000-03-17 Thread Jason Gunthorpe

On Fri, 17 Mar 2000 michael@fam-meskes.de wrote:

> It seems master does not accept ssh connections. What's going on?

No, just from you..

Mar 17 00:57:44 master named[1815]: bad referral (de.colt.net !<
host.DE.COLT.net)
Mar 17 00:57:44 master sshd[4266]: warning: /etc/hosts.deny, line 15:
can't verify hostname: gethostbyname(h-62.96.162.190.host.de.colt.net) failed
Mar 17 00:57:44 master sshd[4266]: refused connect from 62.96.162.190
 
Jason



Re: ITP: dvipdfm - A DVI to PDF translator

2000-03-17 Thread Jason Gunthorpe

On Thu, 16 Mar 2000, Brian Mays wrote:

> The ligatures are supported, but dvips switches the characters in the
> font around.  This can be fixed by turning off the "G" option in the
> /etc/texmf/dvips/config.pdf file.

But the comment says the whole story, it is compatible with standard
Adobe fonts, aka times which is what I had the problem with.

Jason



Re: Apt-Problem

2000-03-15 Thread Jason Gunthorpe

On Wed, 15 Mar 2000, Andreas Tille wrote:

> Reading Package Lists... Done
> Building Dependency Tree... Done
> The following NEW packages will be installed:
>   libtool
> 0 packages upgraded, 1 newly installed, 0 to remove and 13 not upgraded.
> Need to get 177kB of archives. After unpacking 681kB will be used.
> Get:1 http://ftp.tu-clausthal.de potato/main libtool 1.3.3-9 [177kB]
> Failed to fetch 
> http://ftp.tu-clausthal.de/pub/linux/debian/dists/frozen/main/binary-i386/devel/libtool_1.3.3-9.deb
>   Size mismatch
> E: Unable to fetch some archives, maybe try with --fix-missing?

This means your mirror is broken, try another site.

Jason



Re: So, what's up with the XFree86 4.0 .debs?

2000-03-14 Thread Jason Gunthorpe

On Mon, 13 Mar 2000, Steve Greenland wrote:

> Why not? Have you read the compiler/linker docs? Adding -I/some/dir/inc
> and -L/some/dir/lib causes those directories to be searched *before* the
> default directories. I don't have an opinion about where the X stuff
> should go, but the above argument is completely bogus FUD.

For ages now all my X stuff certainly has not used any -I and -L
directives on Debian; the headers/libs are already in the standard
locations!

Jason



Re: Danger, Branden Robinson! Danger!

2000-03-13 Thread Jason Gunthorpe

On Sun, 12 Mar 2000, Joey Hess wrote:

> I hope you weren't even considering one package per module? I understand
> (from IRC) that the 100 modules weigh in at 12 mb. The typical xserver-*

Why not? Nobody else seems to have a problem with creating bazillions of
itty bitty packages for some incomprehensible reason. It's not like that
is making the install smaller or anything.. 

Besides, if we are to have woody have 6000 packages and send dpkg sobbing
into a corner [not to mention those people with less than 64M of ram], we
better think big! 

Jason



Re: Danger Will Robinson! Danger!

2000-03-12 Thread Jason Gunthorpe

On 11 Mar 2000, Manoj Srivastava wrote:

> I've been running 2.3 kernels for a while now, and so have
>  several people. Though it may not work as a default ekrnel,

But can we integrate the necessary new changes to properly support 2.4?
devfsd, the new firewall code, new PCMCIA, etc?

Jason



Re: nasty slink -> potato upgrade problem

2000-03-12 Thread Jason Gunthorpe

On Sat, 11 Mar 2000, [iso-8859-1] Nicolás Lichtmaier wrote:

> > > Trouble ahead?
> > Please run "apt-get install apt" before doing the dist-upgrade. Old apt
> > don't manage well the perl transition. This will be documented in the
> > Release Notes.
> 
>  Why don't we make the new perls conflict the old apt?

Augh, no don't do that!

Upgrading APT will have to be in the release notes, you *HAVE* to run

'apt-get install apt' 

For a lot of reasons, more than just perl, and you have to be running the
new APT before you start going and installing other things for it to be of
any value (ie depends are pointless).

Jason



Re: better RSYNC mirroring , for .debs and others

2000-03-10 Thread Jason Gunthorpe

On Fri, 10 Mar 2000, Jacob Kuntz wrote:

> wouldn't it make more sense to use something like mirror or wget untill
> debdiff matures? are mirror admins required to use rsync?

Sadly rsync is far, far better than mirror or wget, both of which are
verging on useless for an archive of our size.

We use rsync not for its ability to do binary file diffs, but because it
largely works.

Sadly my project to get a real mirroring system written is on hold (alas)

Jason



Re: login message on lully.debian.org

2000-03-09 Thread Jason Gunthorpe

On 9 Mar 2000, Douglas Bates wrote:

> The system has been up for 14 days and /etc/motd was last modified on
> Jan 27.  Is it possible that the repairs are complete and someone
> forgot to remove this line from /etc/motd?

No
 
Jason



Re: better RSYNC mirroring , for .debs and others

2000-03-09 Thread Jason Gunthorpe

On Thu, 9 Mar 2000, David Starner wrote:

> I'm not arguing the rest of your points, but I'm curious about 
> this one. IIRC, the last thing a full bootstrap of GCC does,
> after building stage one binaries with the native compiler,

Hum, it *used* to do this; I can't seem to get it to do it today though.

IIRC it only applied to debug information; it included timestamps or
some such.

Jason



Re: better RSYNC mirroring , for .debs and others

2000-03-09 Thread Jason Gunthorpe

On Thu, 9 Mar 2000, Andrea Mennucc1 wrote:

> rsync contains a wonderful algorithm to speedup downloads when mirroring
> files which have only minor differences;
> only problem is, this algorithm is ALMOST NEVER  used
> when mirroring a debian repository

Small detail here: .debs, like .gz files, are basically not rsyncable.
gzip effectively randomizes the contents of the files, making the
available differences very, very small. This is particularly true for
.debs when you add in the fact that gcc never produces binary-identical
output on consecutive runs.
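
The effect is easy to demonstrate with standard gzip and coreutils:
prepend a single line to a file, compress both copies, and count how many
bytes of the compressed streams still agree. rsync's rolling checksum
finds essentially nothing to reuse.

```shell
# One extra input line scatters changes through the whole gzip stream.
tmp=$(mktemp -d); cd "$tmp"
seq 1 10000 > a
{ echo 0; seq 1 10000; } > b          # b = a with one line prepended
gzip -c a > a.gz
gzip -c b > b.gz
diff_bytes=$(cmp -l a.gz b.gz | wc -l)   # differing byte positions
size=$(wc -c < a.gz)
echo "$diff_bytes of $size overlapping compressed bytes differ"
```

On the uncompressed files rsync would transfer only the inserted line; on
the .gz pair nearly every byte after the header differs.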

Please *do not* run a client with this type of patch connected to any of
our servers, it will send the load sky high for no good reason, rsync is
already responsible for silly amounts of load, do not make it worse.

Jason



Re: PGP/GPG Keys

1999-10-05 Thread Jason Gunthorpe

On Mon, 4 Oct 1999, Rene Mayrhofer wrote:

> Is it possible to use a key created by pgp5 for package signing ? The
> key works for me when I use it with gpg, both the opposite is not true
> (e.g. pgp5 is unable to verify a signature created with a gpg key). I am
> no maintainer yet and so I want to start cleanly. What is the "right"
> way if I want to use gpg and pgp5 and communicate with people using pgp5
> ? Can I create a gpg key usable by pgp5 or is it possible to use the
> pgp5 key for administrative purposes ?
> I really want to revoke my rsa key and use only one key for all
> purposes.

This should be OK; GPG implements the OpenPGP spec, and so does PGP5. If
you use a new enough PGP version you should have no problems reading
GPG-signed things. So long as GPG properly understands your key it is
fine to use.

Jason



Re: slink -> potato

1999-10-04 Thread Jason Gunthorpe

On Mon, 4 Oct 1999, Yves Arrouye wrote:

> > As for the discussion, APT actually has such a feature cleverly
> > undocumented and unmentioned - if you flag a package as Impotant: then
> > its downtime is minizimized by the ordering code.

> packages that conflict with them. An example is moving from the 1.1.2
> KDE packages to the 2.0 ones, eg. from kdebase to kdebase-cvs etc. USing
> dselect and APT, what happens is that somehow installation of the new
> packages is tried first, and fails, and then deinstallation does not
> proceed. Soone needs to explicitely delete the old packages first and
> install the new ones after. That should be figured out by the package
> management tools.

It is figured out by the tools, but it sounds like the KDE packages lack
the proper headers to tell what to do.

Jason



Debian Business Cards

1999-10-04 Thread Jason Gunthorpe

Hi,

I have done some improvements to the Debian business card tex files that
are floating around. My changes are at http://www.debian.org/~jgg.

The rundown is that I sized and made available the bottle version of the
logo, adjusted the PGP key font/spacing, reordered some text and put much
better cut marks in.

Given that ALS is coming up soon this is a really nice way to print out
cards with your PGP key fingerprints for key signing. You get 10 cards per
page.

Jason



Re: slink -> potato

1999-10-04 Thread Jason Gunthorpe

On Mon, 4 Oct 1999, Herbert Xu wrote:

> On Sun, Oct 03, 1999 at 07:06:10PM -0400, Raul Miller wrote:
> > 
> > On Mon, Oct 04, 1999 at 08:15:54AM +1000, Herbert Xu wrote:
> > > I think the worst case would be a telnetd linked with a broken
> > > shlib (or in the case of telnetd, perhaps a missing or broken
> > > /usr/lib/telnetd/login) that gives a security hole. If you wish to
> > > minimise downtime, the proper way to do it IMHO is to have certain
> > > packages flagged as daemons, and they should be upgraded (by whatever
> > > program that is in charge) one by one.
> > 
> > Under what circumstances would this be in effect during an
> > upgrade but not otherwise?
 
> The fact that dpkg does not deconfigure a package which depends on another
> deconfigured package is a bug in dpkg.  This should not be used as an excuse
> to not deal with things correctly in maintainer scripts.

It isn't a bug, it is a feature.

As for the discussion, APT actually has such a feature, cleverly
undocumented and unmentioned - if you flag a package as Important: then
its downtime is minimized by the ordering code.

For the record, many daemon packages (like apache) use the installation
script arguments to tell what is going on and do not needlessly stop the
server during an upgrade - this is the best solution to the problem.

Jason



Re: Little FAQ for users and maintainers

1999-10-02 Thread Jason Gunthorpe

On Fri, 1 Oct 1999, Fabien Ninoles wrote:

> Many time, apt-get break on conflicting files. It happens me often
> on unstable but also when upgrading from slink to potato. Here some
> recommendations to help users resolved the conflicts and also to
> help maintainers do the Right Things (TM) the first time.

I assume you mean file conflicts. I generally recommend adding this to
/etc/apt/apt.conf 

dpkg::options {"--force-overwrite";};

And use an 0.3 version of APT. Of course you should file bugs when you see
the warning ;>

Jason



Re: BTS: How are the bug reports organized?

1999-10-02 Thread Jason Gunthorpe

On Sat, 2 Oct 1999, Thomas Schoepf wrote:

> I don't understand how this should reduce/limit the number of files in a
> single directory.

Well, it's an application of probability theory: the last couple of
digits are more evenly distributed over the range of active (and
inactive) bugs, so you get a more even spread of files.

Consider if we have bugs 0->199 and you take the first digit. You end up
with about 10 bugs in each bucket except bucket '1' (1, 10-19, 100-199),
which has over 100. Put that on a broader scale and account for expired
bugs and you see the trouble.
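
The skew is quick to reproduce with a shell count over the same 0-199
range, comparing bucket '1' under a first-digit split versus a last-digit
split:

```shell
# How many of bug numbers 0..199 land in bucket "1"?
first=$(seq 0 199 | cut -c1 | grep -c '^1')              # split on first digit
last=$(seq 0 199 | sed 's/.*\(.\)/\1/' | grep -c '^1')   # split on last digit
echo "first digit: $first   last digit: $last"   # -> first digit: 111   last digit: 20
```

The first-digit scheme piles 1, 10-19 and 100-199 into one directory,
while the last-digit scheme keeps every bucket near 20.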

Jason



Re: SSH never free

1999-10-01 Thread Jason Gunthorpe

On 1 Oct 1999, James Troup wrote:

> [ RSA is no longer included. ]

Wait, wait - doesn't this mean that ssh RSA authentication is gone as
well?? Did they replace it with DSS/DH or what? IMHO ssh would cease to be
very useful as a security tool without a public-key mechanism, not to
mention that existing ssh clients would not be able to securely connect to
obsd-ssh servers :<

Jason



Re: BTS: How are the bug reports organized?

1999-10-01 Thread Jason Gunthorpe

On Thu, 30 Sep 1999, Darren Benham wrote:

> No, seriously, that's how it's created but as long as we don't start ignoring
> bugs, we'll never see  or 9 bugs in a single directory.

Yeah, but the entire reason behind splitting things up like that was to
reduce the number of files per directory. Thomas is right to observe that
in a few years we will be back where we started again :<

Jason



Re: {R,I[INEW]}TP: free ssh [non-US]

1999-10-01 Thread Jason Gunthorpe

On 30 Sep 1999, James Troup wrote:

> OpenBSD have started working on the last free SSH (1.2.12 was under a
> DFSG free license AFAICT[1]), they also, (again AFAICT [I'm going by
> the CVS commits]), are ripping out the patented algrothims (IDEA,
> etc.).  Unfortunately, I'm chronically busy with work and haven't had

This is very exciting; ssh is one of the few remaining non-free programs
that Debian relies on, and it would be very nice to get a real
replacement.

Can someone confirm the DFSGness of it?

Jason



Re: Can't acces db.debian.org

1999-09-30 Thread Jason Gunthorpe

On Thu, 30 Sep 1999, Federico Di Gregorio wrote:

>   apparently I can't access db.debian.org: I use my password
> on master and the server gives me "authentication failed". (Note that
> I can login in master with that same password.) Is something broken?
> (my brain for example?)

You probably changed your password since it was copied over. Use this
command:

echo "Please change my Debian password" | gpg --clearsign | mail [EMAIL PROTECTED]

And you will get a new password mailed back. (pgp -fast instead of gpg if
you are not using gpg yet)

Jason



Developer Lat/Long positions

1999-09-29 Thread Jason Gunthorpe

I have put up a new way to enter your location information: it is a
PGP-signed mail gateway at [EMAIL PROTECTED]. It can actually change quite
a few things, but for the moment I am only announcing the ability to set
location and contact information :>

The server is line oriented much like [EMAIL PROTECTED] and
regex's each line to determine what to do, here is a sample session:

c: CA
l: Edmonton, Alberta
Lat: 55n33 Long: 113w28

---> Daemon sends back this:
> c: ca
Changed entry c to ca
> l: Edmonton, Alberta
Changed entry l to Edmonton, Alberta
> Lat: 55n33 Long: 113w28
Position set to +5533/-11328 (+55.55/-113.47 decimal degrees)
--->

[aside, the 'fast' way to enter the data is like:
 echo "Lat: 55n33 Long: 113w28" | gpg --clearsign | mail [EMAIL PROTECTED]
 or pgp -fast. The gateway should work with most popular mailers too and
 PGP/MIME]

Which will set your country code, city value and location. The parser that
handles the lat/long for -this- service is more sophisticated than the web
page or command line version and can handle a lot of the common formats
found on the net. It converts them to the 'standard' form of DGMS, DGM or
decimal degrees, whichever is most natural for the input. A quick rundown
on the supported types is:

 D = Degrees, M = Minutes, S = Seconds, x = n, s, e, or w
  +-DDD.D,  +- DDDMM.,  +-DDDMMSS.   [standard forms]
  DDxMM., DD:MM. x,  DD:MM:SS.SSS X

I haven't seen a format outside of that yet, but let me know if you find
one. 
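For illustration, here is a minimal Python sketch of the kind of regex-driven
parsing described above. This is my own reconstruction, not the actual gateway
code, and it covers only a few of the listed formats:

```python
import re

def parse_coord(text):
    """Parse one latitude/longitude value into decimal degrees.

    Handles a few of the common formats mentioned in the post:
      55n33          -> degrees and minutes, hemisphere letter in the middle
      34:50:12.245 N -> colon-separated D:M:S with trailing hemisphere
      -113.47        -> plain signed decimal degrees
    """
    text = text.strip()

    # DDxMM.M, e.g. "55n33" or "113w28" (x = n, s, e, or w)
    m = re.fullmatch(r'(\d+)([nsew])(\d+(?:\.\d+)?)', text, re.IGNORECASE)
    if m:
        deg, hemi, minutes = m.groups()
        sign = -1 if hemi.lower() in 'sw' else 1
        return sign * (int(deg) + float(minutes) / 60)

    # DD:MM.M x or DD:MM:SS.SSS X, hemisphere letter at the end
    m = re.fullmatch(r'(\d+):(\d+(?:\.\d+)?)(?::(\d+(?:\.\d+)?))?\s*([nsew])',
                     text, re.IGNORECASE)
    if m:
        deg, minutes, seconds, hemi = m.groups()
        sign = -1 if hemi.lower() in 'sw' else 1
        return sign * (int(deg) + float(minutes) / 60
                       + float(seconds or 0) / 3600)

    # Plain signed decimal degrees
    if re.fullmatch(r'[+-]?\d+(?:\.\d+)?', text):
        return float(text)

    raise ValueError("unrecognized coordinate: %r" % text)
```

So "Lat: 55n33" would come out as +55.55 decimal degrees, matching the sample
session above.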

The end result is that people who had trouble entering their positions
before should be able to cut and paste straight from the original source
and get correct results. Also people who don't have SSL/Web access can now
use the mail gateway.

Here is my list of links to find out some decent coords for yourself:
 http://www.astro.com/atlas/
 http://www.mapblast.com
 http://www.geocode.com/eagle.html-ssi
 http://www.environment.gov.au/database/MAN200R.html
 http://GeoNames.NRCan.gc.ca/

Jason




Re: scanning my ports

1999-09-26 Thread Jason Gunthorpe

On 26 Sep 1999, Mark W. Eichin wrote:

> In addition to apologies to Mr. Norman, perhaps there's some value in
> either (1) making tcplogd etc. require enough configuration to force
> people to read the documentation, or (2) enhance those packages to
> interpret things a little more, so they scare naive users a bit less?

debian-admin gets reports like this on virtually a monthly basis; the
response is always that the user is using port-mode ftp and that the site
is an ftp server.

Some of the 'reports' are extremely angry and irritated - I think the best
one was from some admin who had a user who subscribed to a Debian list;
he was incensed that we were 'attacking' his mail server by *gasp* sending
it mail!

Jason



Re: ssh keys in ldap

1999-09-26 Thread Jason Gunthorpe

On Sun, 26 Sep 1999, Wichert Akkerman wrote:

> Previously Jason Gunthorpe wrote:
> > I would like a couple people to look over this patch I have made to SSH.
> > It creates a new option that allows ssh to lookup RSA authentication keys
> > in a global file modeled after the shadow password file.
> 
> Does this support multiple keys?

Yes, it is exactly like the existing search method, it tries every key
assigned to the user until one actually works.

Jason



ssh keys in ldap

1999-09-25 Thread Jason Gunthorpe

Hi all,

I would like a couple of people to look over this patch I have made to SSH.
It creates a new option that allows ssh to look up RSA authentication keys
in a global file modeled after the shadow password file. The intent is to
allow users to place their RSA ssh key into the ldap directory and then
have that key replicated automatically to all machines and used by ssh.

Checking of the global key file is done after looking at the user's
.ssh/authorized_keys file, and the global file is keyed to each maintainer.
LDAP entries would look like this:

sshrsaauthkey=1024 35
13188913800864665310056145282172752809896969986210687776638992421269538682667499807562325681722264279958572627924253677904887346542958562754647616248471798299277451202136815142932982865314941795877586991831796183279248323438349823299332680534314763423857547649263063185581654408646481264156574330001283021
[EMAIL PROTECTED]
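A shadow-style global key file built from such entries might look like the
fragment below. The field layout and path are my assumptions based on the
description above, not the deployed format, and the key material is an
illustrative placeholder:

```
# hypothetical global key file, one line per user:
# login name, then the RSA public key (bits exponent modulus comment)
jgg:1024 35 1318891380086466...21283021 key-comment
```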

And I would probably put up a PGP mail gateway to set new keys. [ie gpg
--clearsign < .ssh/identity.pub | mail [EMAIL PROTECTED]]

The advantage would be that everyone can use their ssh key uniformly on
all the machines. If someone loses their key or needs to revoke it due to
a compromise it can be done quickly and correctly. 

If nobody can see why this would be a bad idea I will deploy this system
on db.debian.org and the debian.org machines in the near future. I hope
that when lsh becomes usable a similar patch to it can be made.

Thanks,
Jason

diff -ur ssh-1.2.27/auth-rsa.c ssh-1.2.27+jgg/auth-rsa.c
--- ssh-1.2.27/auth-rsa.c   Wed May 12 05:19:24 1999
+++ ssh-1.2.27+jgg/auth-rsa.c   Sat Sep 25 14:25:40 1999
@@ -211,7 +211,7 @@
successful.  This may exit if there is a serious protocol violation. */
 
 int auth_rsa(struct passwd *pw, MP_INT *client_n, RandomState *state,
- int strict_modes)
+ int strict_modes,int global)
 {
   char line[8192];
   int authenticated;
@@ -220,61 +220,93 @@
   UserFile uf;
   unsigned long linenum = 0;
   struct stat st;
-
-  /* Check permissions & owner of user's .ssh directory */
-  snprintf(line, sizeof(line), "%.500s/%.100s", pw->pw_dir, SSH_USER_DIR);
-
-  /* Check permissions & owner of user's home directory */
-  if (strict_modes && !userfile_check_owner_permissions(pw, pw->pw_dir))
-{
-  log_msg("Rsa authentication refused for %.100s: bad modes for %.200s",
-  pw->pw_name, pw->pw_dir);
-  packet_send_debug("Bad file modes for %.200s", pw->pw_dir);
-  return 0;
-}
-
-  /* Check if user have .ssh directory */
-  if (userfile_stat(pw->pw_uid, line, &st) < 0)
-{
-  log_msg("Rsa authentication refused for %.100s: no %.200s directory",
-  pw->pw_name, line);
-  packet_send_debug("Rsa authentication refused, no %.200s directory",
-line);
-  return 0;
-}
-  
-  if (strict_modes && !userfile_check_owner_permissions(pw, line))
-{
-  log_msg("Rsa authentication refused for %.100s: bad modes for %.200s",
-  pw->pw_name, line);
-  packet_send_debug("Bad file modes for %.200s", line);
-  return 0;
-}
+  const char *keyfile = 0;
+   
+  if (global == 0)
+  {
+ /* Check permissions & owner of user's .ssh directory */
+ snprintf(line, sizeof(line), "%.500s/%.100s", pw->pw_dir, SSH_USER_DIR);
+ 
+ /* Check permissions & owner of user's home directory */
+ if (strict_modes && !userfile_check_owner_permissions(pw, pw->pw_dir))
+ {
+   log_msg("Rsa authentication refused for %.100s: bad modes for %.200s",
+   pw->pw_name, pw->pw_dir);
+   packet_send_debug("Bad file modes for %.200s", pw->pw_dir);
+   return 0;
+ }
+ 
+ /* Check if user have .ssh directory */
+ if (userfile_stat(pw->pw_uid, line, &st) < 0)
+ {
+   log_msg("Rsa authentication refused for %.100s: no %.200s directory",
+   pw->pw_name, line);
+   packet_send_debug("Rsa authentication refused, no %.200s directory",
+ line);
+   return 0;
+ }
+ 
+ if (strict_modes && !userfile_check_owner_permissions(pw, line))
+ {
+   log_msg("Rsa authentication refused for %.100s: bad modes for %.200s",
+   pw->pw_name, line);
+   packet_send_debug("Bad file modes for %.200s", line);
+   return 0;
+ }
+ 
+ /* Check permissions & owner of user's authorized keys file */
+ snprintf(line, sizeof(line),
+ "%.500s/%.100s", pw->pw_dir, SSH_USER_PERMITTED_KEYS);
+ 
+ /* Open the file containing the authorized keys. */
+ if (userfile_stat(pw->pw_uid, line, &st) < 0)
+   return 0;
+ 
+ if (strict_modes && !userfile_check_owner_permissions(pw, line))
+ {
+   log_msg("Rsa authentication refused for %.100s: bad modes for %.200s",
+   pw->pw_name, line);
+   packet_send_debug("Bad file modes for %.200s", line);
+   return 0;
+ }
+
+ uf = userfile_open(pw->pw_uid, line, O_RDONLY, 0);
+ 

Re: Add your location to the developer db so it can be added to the map

1999-09-23 Thread Jason Gunthorpe

On Wed, 22 Sep 1999, James A. Treacy wrote: 

> I may add a comment to the coords file describing how the image is created
> so people can create their own. Hopefully someone has a printer that can
> print a large version.

Can you send them to me? I will include them in the man page for ud-xearth

Jason



Re: Use https://db.debian.org/ [was Re: Add your location ...]

1999-09-23 Thread Jason Gunthorpe

On Wed, 22 Sep 1999, James A. Treacy wrote:

> I should have used https://www.debian.org/ in the original mail.
> Sorry. Everyone who can (legally) use ssl should use that URL.

Yes, this is definitely the best way to enter the data right now.
Encrypted LDAP is coming in many months though.
 
> Additionally, I have asked for a page to be linked from
> db.debian.org to describe what those who have lost their
> password should do.

The procedure is this:

echo "Please change my Debian password" | gpg --clear-sign | mail [EMAIL PROTECTED]

[Or the equivalent if you use pgp, what are the options for a clear
signed ascii armored message anyhow?]

You will be emailed back a new password encrypted with your PGP key.  This
password will automatically propagate to all machines except pandora,
master and va. 

At some point in the future it will propagate, so don't lose it.

Here are my notes on location information and some sources to find the
data:

LAT/LONG POSITION
   There are three possible formats for giving position
   information and several online sites that can give an
   accurate position fix based on mailing address.


   Decimal Degrees
  The format is +-DDD.DDD. This is the
  format programs like xearth use and the format that
  many positioning web sites use. However typically
  the precision is limited to 4 or 5 decimals.

   Degrees Minutes (DGM)
  The format is +-DDDMM.M. It is not an
  arithmetic type, but a packed representation of two
  separate units, degrees and minutes. This output is
  common from some types of hand held GPS units and
  from NMEA format GPS messages.


   Degrees Minutes Seconds (DGMS)
  The format is +-DDDMMSS.SSS. Like DGM, it
  is not an arithmetic type but a packed representation
  of three separate units, degrees, minutes and
  seconds. This output is typically derived from web
  sites that give 3 values for each position. For
  instance 34:50:12.24523 North might be the position
  given; in DGMS it would be +0345012.24523.

   For Latitude + is North, for Longitude + is East. It is
   important to specify enough leading zeros to disambiguate
   the format that is being used if your position is less
   than 2 degrees from a zero point.

   Some locations to find positioning information are:


   o  Good starting point -
  http://www.ckdhr.com/dns-loc/finding.html

   o  AirNav - GPS locations for airports around the
  world http://www.airnav.com/

   o  GeoCode - US index by ZIP Code
  http://www.geocode.com/eagle.html-ssi

   o  Map Blast! Canadian, US and some European maps -
  http://www.mapblast.com/

   o  Australian Database
  http://www.environment.gov.au/database/MAN200R.html

   o  Canadian Database http://GeoNames.NRCan.gc.ca/

   o  GNU Timezone database, organized partially by
  country /usr/share/zoneinfo/zone.tab

   Remember that we are after reasonable coordinates for
   drawing an xearth graph and looking for people to sign
   keys, not for coordinates accurate enough to land an ICBM
   on your doorstep!
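Unpacking the DGMS form described above is simple arithmetic on the packed
digits. A sketch of that conversion (my own illustration, not the actual
db.debian.org code):

```python
def dgms_to_decimal(value):
    """Convert packed DGMS (+-DDDMMSS.SSS) to decimal degrees.

    Example from the text: +0345012.24523 packs 34 degrees,
    50 minutes, 12.24523 seconds North.
    """
    sign = -1 if value.startswith('-') else 1
    digits = value.lstrip('+-')
    whole, _, frac = digits.partition('.')
    seconds = float(whole[-2:] + '.' + (frac or '0'))  # last two digits + fraction
    minutes = int(whole[-4:-2])                        # next two digits
    degrees = int(whole[:-4])                          # remaining leading digits
    return sign * (degrees + minutes / 60 + seconds / 3600)
```

For instance +0345012.24523 comes out near +34.8367 decimal degrees, which is
the xearth-style form.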



Re: Debian BTS

1999-09-15 Thread Jason Gunthorpe

On Wed, 15 Sep 1999, Hamish Moffatt wrote:

> Great work guys. Just a query though -- is the web server on
> www.debian.org working properly? It takes me several minutes to retrieve
> the home page lately! No other sites exhibit this problem.

AFAIK it is, but one other person did mention this - can you tcpdump a wget of
www.debian.org and see if there are any obvious problems?

Thanks,
Jason


