Bug#278746: prerc2-net partman? detect problems & 2.6 red screen

2004-10-28 Thread Drew Scott Daniels
Package: installation-reports

Debian-installer-version:
http://cdimage.debian.org/pub/cdimage-testing/sid_d-i/i386/pre-rc2/sarge-i386-netinst.iso
20041024
uname -a: Linux Markov 2.4.18-bf2.4 #1 Mon Apr 12 11:37:50 UTC 2004 i686 unknown
Date: 20041027
Method: booted off CD-RW in my Pioneer A03? DVD+-RW drive, chose expert,
then expert26

Machine: OEM
Processor: Pentium 4, 1.5GHz
Memory: 128MiB RAM
Root Device: fdisk -l /dev/hdc

Disk /dev/hdc: 255 heads, 63 sectors, 1232 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/hdc1              1        93    746991    6  FAT16
/dev/hdc2            130      1232   8859847+   f  Win95 Ext'd (LBA)
/dev/hdc5            131       207    618502+   6  FAT16
/dev/hdc6            261       371    891576    6  FAT16
/dev/hdc7            391       482    738958+   6  FAT16
/dev/hdc8            521       966   3582463+   b  Win95 FAT32
/dev/hdc9            967      1232   2136613+  83  Linux

This is after I tried to resize hdc1-7 under expert. Problem details to
follow.

Output of lspci and lspci -n:
00:00.0 Host bridge: Intel Corp. 82845 845 (Brookdale) Chipset Host
Bridge (rev 03)
00:01.0 PCI bridge: Intel Corp. 82845 845 (Brookdale) Chipset AGP Bridge
(rev 03)
00:1e.0 PCI bridge: Intel Corp. 82820 820 (Camino 2) Chipset PCI (rev
12)
00:1f.0 ISA bridge: Intel Corp. 82820 820 (Camino 2) Chipset ISA Bridge
(ICH2) (rev 12)
00:1f.1 IDE interface: Intel Corp. 82820 820 (Camino 2) Chipset IDE U100
(rev 12)
00:1f.2 USB Controller: Intel Corp. 82820 820 (Camino 2) Chipset USB
(Hub A) (rev 12)
00:1f.3 SMBus: Intel Corp. 82820 820 (Camino 2) Chipset SMBus (rev 12)
00:1f.4 USB Controller: Intel Corp. 82820 820 (Camino 2) Chipset USB
(Hub B) (rev 12)
00:1f.5 Multimedia audio controller: Intel Corp. 82820 820 (Camino 2)
Chipset AC'97 Audio Controller (rev 12)
02:0a.0 Ethernet controller: Realtek Semiconductor Co., Ltd.
RTL-8029(AS)
02:0b.0 VGA compatible controller: ATI Technologies Inc 3D Rage I/II
215GT [Mach64 GT] (rev 41)

00:00.0 Class 0600: 8086:1a30 (rev 03)
00:01.0 Class 0604: 8086:1a31 (rev 03)
00:1e.0 Class 0604: 8086:244e (rev 12)
00:1f.0 Class 0601: 8086:2440 (rev 12)
00:1f.1 Class 0101: 8086:244b (rev 12)
00:1f.2 Class 0c03: 8086:2442 (rev 12)
00:1f.3 Class 0c05: 8086:2443 (rev 12)
00:1f.4 Class 0c03: 8086:2444 (rev 12)
00:1f.5 Class 0401: 8086:2445 (rev 12)
02:0a.0 Class 0200: 10ec:8029
02:0b.0 Class 0300: 1002:4754 (rev 41)

Base System Installation Checklist:

Initial boot worked:[O]
Configure network HW:   [O]
Config network: [ ]
Detect CD:  [O]
Load installer modules: [O]
Detect hard drives: [O]
Partition hard drives:  [E]

Comments/Problems:
The errata said to report when a red screen is seen for 2.6 boots. I saw
one. If I can help with something there, let me know.

My main reason for making this report is that the partitions didn't
show up properly when I went to resize and move them around.

2.4 showed all but the ext2 partition as FAT16 (which I thought was
plausible at first). I went through and figured out how to resize all
the partitions except for hdc8 (marked FAT32 above). Trying to resize
hdc8 gave me a message about an impossible configuration, or something
like that. I suspect it was smart enough to check the partition size
and knew that FAT16 can't hold it (FAT16 tops out at 2GiB with
standard 32KiB clusters, and hdc8 is about 3.4GiB)? So I decided to
try expert26.

2.6 showed only two partitions, and I think sfdisk said they were
EZBIOS? I can't remember their type, and I don't think partman and
sfdisk agreed, but I'm not sure. If it'll help I can check.
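(For next time: from the installer's second console, something like

  fdisk -l /dev/hdc
  sfdisk -l /dev/hdc
  cat /proc/partitions

would capture fdisk's view, sfdisk's view, and the kernel's view of
the table in one go, assuming both tools are on the initrd.)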

I was also surprised that I couldn't find a move-partition option.
While I managed to free up space after several of the partitions, I
couldn't merge the free space to make one larger partition.


I'd really like to get a 4.7GB partition free... ;-) Yes, that's right,
I'm trying to use the installer simply to repartition an extra drive
that's full... all so I can back things up again. :-( Streaming backup
to DVD sounds great, but then wham, there comes that 2GiB limit. Oh
well, I'm on a tangent.


 Drew Daniels





Bug#192305: "q to end, b for begin" ambiguous

2003-11-13 Thread Drew Scott Daniels
On Thu, 13 Nov 2003, Martin Sjögren wrote:

> On Wed 2003-11-12 at 00.33, Denis Barbier wrote:
> >   * Prompt is:
> >  q to quit select, b for back, n for next page
>
> How about "b to back up"? It would be either that or "b for previous
> page" IMHO.
>
>
Or, to be consistent, also "n to go to the next page" or the like...
FWIW, I think the important change was "q to quit select".

More thoroughly, something like:
Press q to quit the selection menu
Press b to go back one page in the current selection menu
Press n to go to the next page in the current selection menu

But all this can be left for an extended help option or a manual... unless
of course it's clear that enough people are still confused.

 Drew Daniels





Bug#192305: "q to end, b for begin" ambiguous

2003-11-12 Thread Drew Scott Daniels
>  q to quit select, b for back, n for next page
Sounds great to me. Thanks!

 Drew Daniels





Re: d-i minimum system requirements

2003-09-30 Thread Drew Scott Daniels
My paraphrased minimum requirements, from various sources and some
guesses, are at: http://wiki.debian.net/index.cgi?DebianInstallerMinReqs
-
In response to
  ??? - Where Do you live ???
  In Africa ??? Last Place in Congo ??? (Nice holliday here)
  Or do you install via Cell-Phone and 9600 BpS ???
  ;-)
I live in Winnipeg, Manitoba, Canada but sometimes I spend time at Lake of
the Woods where I don't have a good land line.

I'm also looking to get together a computer for a local church and the
best I've found in my budget range is a 16MB 486, single speed CD-ROM
drive. There's a very aggressive "Computers for Schools" program here
which seems to grab all the cheap computers at the Pentium level. And
try finding cheap, large RAM SIMMs for a 486... eBay would be good if
it weren't for shipping costs.

I've also managed to dig up two other 486's that I'm personally
interested in maintaining... I believe in reuse before refuse.

I've seen others comment on debian-devel that they still use 486's for
routers and the like. There were at least some grumblings when 386
support was dropped from, IIRC, the C++ library, and I know there were
grumblings about the speed of the kernel doing 486 emulation on 386's,
not to mention the few K of "bloat".

-
From all of this I've basically learned that the best way to find out is
to give it a try.

I'm still curious, though, as to how hard it would be to create an
installer that has a ramdisk with just enough to create a hard drive
partition, and then has the installer proper use that partition. This
would probably impose the burden of restarting the computer, managing
another ramdisk image with its related software (the d-i installer?),
and adding hard drive support to the installer... likely at least a
bit of a mess.
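Done by hand, the idea would look roughly like this (an untested
sketch; the device names are only examples):

  fdisk /dev/hda             # carve out a small partition, say hda2
  mke2fs /dev/hda2           # give it a filesystem
  mount /dev/hda2 /mnt
  cp vmlinuz initrd.gz /mnt  # stash the full installer image there
  # then point lilo/grub at the kernel and initrd on hda2 and reboot

The installer would just be automating those steps and the extra
reboot.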

 Drew Daniels





d-i minimum system requirements

2003-09-26 Thread Drew Scott Daniels
Arg, resending...

 Drew Daniels

-- Forwarded message --
Date: Wed, 24 Sep 2003 11:06:20 -0500 (CDT)
From: Drew Scott Daniels <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Subject: d-i minimum system requirements

Hi,
I was wondering if there was any speculation on what the minimum system
requirements for d-i are.

Is 4MB, 8MB, or 12MB enough RAM? How close are things to any real RAM limit?

The kernel currently used should make d-i usable on a 386 and newer
for ia32 arches, since math emulation is turned on?

Does the installer require a CD-ROM drive or a network connection? I.e.
does the CD install work, and can one install via floppy disks? If it
is possible to install via floppy disks, what are the minimum
requirements for the disks? Single-sided, single-density?

Can anna recover from download failures? High latency, corrupted
packets, and low bandwidth are still issues that users deal with these
days... although TCP usually does a good job of recovering from
corrupted packets. If it does have the ability to recover, how nice is
it?

The current discover was having problems finding ISA devices, unlike
an older version... Is this fixed? Is discover 2.0 out yet? Is discover
still used in the installer? A good test for this might be Bochs'
ethernet card emulation.

What other minimum requirements are there?

I'll look at doing some testing on Bochs again once I have time.

 Drew Daniels





squashfs

2003-03-04 Thread Drew Scott Daniels
On Wed, 5 Mar 2003, Herbert Xu wrote:

> On Tue, Mar 04, 2003 at 12:24:49PM -0600, Drew Scott Daniels wrote:
...
> > Where would I file a wishlist bug to get squashfs included in
> > kernel-images? Its value is discussed in
> > http://lists.debian.org/debian-boot/2003/debian-boot-200302/msg00412.html
> > There are three or four separate threads about squashfs in the archives
> > of debian-boot for Feb 2003.
>
> I have followed those threads and I don't see any reason why we must have
> it.  However, any Debian developer can package it as a kernel-patch
> package.

I don't see a reason why it *must* be included, but in
http://lists.debian.org/debian-boot/2003/debian-boot-200302/msg00449.html
Glenn McGrath <[EMAIL PROTECTED]> says: "The comparisons to cramfs
does favour squashfs for lowmem installs."

I can see situations where it must be used. I may be facing such a
situation myself with a lowmem install on an old system of mine, but I may
be able to pull things off with 4MB and no system reserved blocks.

I would like to have an easy way of creating a Debian install that
uses squashfs, whether that means an official kernel-patch package
plus an unofficial udeb, or some other method. To be honest, I'd just
like someone else to do it, and I'm unsure how much value it has.

Squashfs' usefulness over cramfs seems to be in situations where cramfs
images are too large for storage on media. Since the advantage is only 13%
in the given example, there would not be a huge number of cases, but it is
significant. 13% could be very useful to the installer team, and I would
like the option somehow made available.
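(For scale, and this is only my back-of-the-envelope arithmetic: 13%
of a 1.44MB boot floppy image is roughly 190KB, which could be a
kernel module or two.)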

Perhaps Knoppix may see some advantage to using squashfs as the CD
filesystem. If someone demonstrates that squashfs is useful enough to
create an official Debian kernel-image udeb/deb, then they would
likely be the first maintainer, and it would be made official.

"Useful enough" should be qualified. I don't know what kind of
limitations on filesystem size or memory requirements have currently
been reached. Can anyone point to things that are wanted in, but
missing from, Debian's images? Could squashfs reduce the number of
disks used to install Debian? I doubt squashfs would be any more
useful than gzip for the base images.

 Drew Daniels





Re: squashfs compressed file system update?

2003-02-20 Thread Drew Scott Daniels
From: ftp://sources.redhat.com/pub/bzip2/docs/manual_2.html#SEC7
         Compress   Decompress   Decompress   Corpus
  Flag     usage       usage      -s usage     Size

   -1      1200k        500k        350k      914704
   -2      2000k        900k        600k      877703
   -3      2800k       1300k        850k      860338
   -4      3600k       1700k       1100k      846899
   -5      4400k       2100k       1350k      845160
   -6      5200k       2500k       1600k      838626
   -7      6100k       2900k       1850k      834096
   -8      6800k       3300k       2100k      828642
   -9      7600k       3700k       2350k      828642



It's interesting to note that in this case -8 and -9 yield the same
compressed size, but -9 requires more memory for both compression and
decompression. I suspect the memory usage figures will remain constant
for any file (it's inherent in the block sorting), not just the
corpus. The compression results will of course vary with the input file.
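(The same manual gives the formulas behind those columns, if I
remember them right: compression takes 400k + 8 x block size,
decompression 100k + 4 x block size, or 100k + 2.5 x block size with
-s. They check out against the table: for -9, 400k + 8 x 900k = 7600k,
100k + 4 x 900k = 3700k, and 100k + 2.5 x 900k = 2350k.)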

If memory usage is a concern and small files like an initrd are being
compressed, the choice between gzip and bzip2 leans towards gzip.
bzip2 only really compresses better than gzip with large blocks (thus
high memory requirements), and on small files like an initrd it
doesn't make much of a difference in absolute (non-percentage) terms.
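Checking that on a real initrd is a one-liner each way (a sketch; the
initrd path is just an example):

  zcat initrd.gz > initrd
  gzip -9c initrd | wc -c
  bzip2 -9c initrd | wc -c

If the above holds, the two byte counts should come out close.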

Prediction by Partial Match (PPM) algorithms are usually very good at
compression, take lots of memory and are very slow. Thus PPM algorithms
would be good to compress an entire initrd file to make the media space
requirements small, but require more memory and CPU time than other
compressors.

The order of the files in the initrd can make a difference if the run
length or context of the compressor can span multiple files (large
block sizes). I don't know how one changes the order of files in an
initrd image, but I suspect it's like tar: just pass them in a
different order. The order of files makes no difference if files are
compressed independently, as squashfs does.

Phillip Lougher:
I'm not subscribed to debian-boot, so I can't get the threading right,
but you did. I read the mailing list at
http://lists.debian.org/debian-boot along with several others on
lists.debian.org.

Would you be willing to try to get squashfs to be made part of the
standard Debian Linux kernel? I'm not sure how, or what the reaction would
be.

I've submitted a request for packaging, which you can see at
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=179672 It's likely
that the Debian Linux kernel maintainers would not include squashfs,
but would suggest that it be made a patch package. If the boot system
team finds it useful enough, maybe that would be enough to have
squashfs included in the Debian Linux kernel.

 Drew Daniels






PPMd (PPMII) memory, file order in archives, special tar/debs/udebs

2003-02-20 Thread Drew Scott Daniels
Sorry about the cross-post, but the topics here are relevant to the
bug and discussed on both lists.

On Thu, 20 Feb 2003, Magnus Ekdahl wrote:

> > There is a good paper on the PPMII algorithm (based on PPM), which
> > ppmd uses, at: http://DataCompression.info/Miscellaneous/PPMII_DCC02.pdf
> > Basically, as I understand it, the more memory you allow a PPM
> > algorithm to use, the more of the file it can check for similar
> > contexts, or come up with better prediction values by checking more
> > of the file, and more within it.
>
> True, the PPMII can use a lot of memory and if that memory ain't available
> the compression model have to be crushed somehow. With decreased
> compression ratio as a result. But if PPMII only needs 10Mb for its model,
> another 100Mb won't make the compression any better. (As far as I understand).
>
It's possible. It depends on the maximum context size. If you have a
very large chunk in the same context (and it's allowed), then the
statistics table generator should, IMHO, be checking all predictors
and predicted blocks. I didn't check whether it actually allows large
contexts and builds tables using all the predictors and predicted
blocks in larger contexts. I'm also not sure how the different models
would affect maximum context sizes, but I wouldn't be surprised if
binaries were expected to have small contexts.

Another possible reason for allowing a PPM algorithm to use more
memory is to let it be sloppy and quick by using full ints (or
equivalent) in its in-memory table generation. There are memory usage
optimizations that can be applied because the maximum number of
possible occurrences of a block is known ahead of time: take the
amount of space left in the file and divide it by the block size and
you get the maximum possible number of occurrences (a rough
description). If that maximum is small enough, there is spare space in
the int (or whatever kind of counter is used) that could be shared
with another table entry (counter/statistic). I doubt ppmd uses such
an optimization, though, and I'm not sure whether its counters are
slightly lossy (i.e. floats), although lossy counters can be optimized
too.

> > Another thing to note when creating compressed *archives* is that the
> > order of files may be important. It has been documented (I don't remember
> > where) that grouping files of similar types together can yield better
> > compression. It might yield optimum compression to go through all the
> > different ways of ordering files. I suspect this is how the 7-zip (
> > http://www.7-zip.org ) algorithm outperforms other zip algorithms and
> > still maintains compatibility with all other zip archivers.
> >
> > A program or script to put files into different orders for archiving (in a
> > list or in a list passed to something like tar) is something that I have
> > been meaning to do. Going through all possible combinations would be nice,
> > but it may be faster to use something like file to group like files
> > together. File has quite a few bugs against it though (
> > http://bugs.debian.org/file ) and it may be far from an optimal choice.
> > Certainly using file would be a good place for me to start. It'd be nice if
> > someone wrote such a script before me though. ;-)
>
> Hmm... this might be more of a tar issue than a PPMd issue. Tar has
> additional problems too, since it adds extra things in the archive.
> Christian Fasshauer <[EMAIL PROTECTED]> has done some work with tars
> problems, and his solution was supposed to decrease the size of the debian
> packages. Perhaps he has more suggestions of what can be done.
>
I like the extra information that tar includes, more so in source
files, and especially in "original" source files. When looking through
source code, seeing the date a file was changed/modified/created, and
its attributes, can tell a lot about the history of the file. For
binary packages the value of the information is debatable. I'm on the
fence about removing things like the number of user/group entries and
the other things suggested in
http://lists.debian.org/debian-dpkg/2003/debian-dpkg-200301/msg00049.html
but I'm leaning towards their inclusion in standard debs and their
removal from udebs.

Glenn, did you ever get around to making a busybox dar applet? If so,
what were your results? Does it implement Christian Fasshauer's
suggestions? If so, maybe Russell Coker could get his comparisons done.

http://bugs.debian.org/cgi-bin/bugreport.cgi?archive=no&bug=174308 is also
an interesting discussion about gtar, star... I really don't know enough
about any tar to have an opinion as to what kind of tar is best.

To save even more space, a special compressor with a dictionary of
Debian blocks and/or statistics could be used. If few users/groups
etc. were used, such a dictionary would pick up on that. As far as
compression goes, if there's a common dictionary to all the files,
then the

PPMd (PPMII), bzip2 -9 misconception, order of files in archives

2003-02-19 Thread Drew Scott Daniels
On 19 Feb 2003 17:16:48 +1100, Glenn McGrath <[EMAIL PROTECTED]> wrote:
>I have looked at PPMd, ive read thats good but it took me a while to
>workout how to use it to get maximum compression, here is a comparison.
>
>8921088 Packages
>2375680 Packages.gz
>1814528 Packages.bz2
>1335296 Packages.pmd
>
>That was done with "PPMd e -o16 -m256 -r2 Packages", it surprised me
>that the -m option (amount of memory to use) effects the compression
>ratio.
>
>Thats a pretty improvement over bz2.

I would guess that memory usage improving compression is a concept
that is widely unknown, so I filed a bug against ppmd:
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=181665
I'm CCing the bug because I discuss the -m option in the next
paragraph.

There is a good paper on the PPMII algorithm (based on PPM), which
ppmd uses, at: http://DataCompression.info/Miscellaneous/PPMII_DCC02.pdf
Basically, as I understand it, the more memory you allow a PPM
algorithm to use, the more of the file it can check for similar
contexts, or come up with better prediction values by checking more of
the file, and more within it.
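Glenn's command line makes the effect easy to demonstrate (a sketch I
haven't run, assuming PPMd leaves the input in place and writes
Packages.pmd, as his listing suggests):

  for m in 16 64 256; do
      PPMd e -o16 -m$m -r2 Packages
      ls -l Packages.pmd
      rm Packages.pmd
  done

The archive should shrink as -m grows, at least until the model no
longer has to be truncated.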



Some interesting stuff about bzip2 from:
ftp://sources.redhat.com/pub/bzip2/docs/manual_1.html#SEC1
-
bzip2 compresses files using the Burrows-Wheeler block-sorting text
compression algorithm, and Huffman coding. Compression is generally
considerably better than that achieved by more conventional
LZ77/LZ78-based compressors, and approaches the performance of the PPM
family of statistical compressors.
-


Also, when using bzip2, note that -9 does not mean "use the best
compression" (best as in size); it specifies the largest block size,
which *usually* gives the smallest compressed file. Sometimes other
block sizes can actually beat -9. I'd have to test to be sure, but the
case I was testing was compressing several megabytes of IRC logs,
which are ASCII text with many repeating words and a defined structure.

The reference fwiw:
ftp://sources.redhat.com/pub/bzip2/docs/manual_2.html#SEC6
-
-1 (or --fast) to -9 (or --best)
Set the block size to 100 k, 200 k .. 900 k when compressing.
-

Perhaps I should file a wishlist bug against bzip2 about this option's
documentation. --best could also be read as "best guess at block
size", i.e. the 900k block size. People using bzip2 might also need to
be told explicitly. Perhaps all bzip2'd files in Debian should be
checked to see which block size is best, with bugs filed against the
few that don't have optimal block sizes chosen. I might warn of a mass
bug filing on debian-devel if I decide I want to test all the relevant
bz2 files.
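Testing one file would go something like this (sketch):

  bzcat file.bz2 > file
  for n in 1 2 3 4 5 6 7 8 9; do
      printf '%s ' "-$n:"
      bzip2 -$n -c file | wc -c
  done

and then re-compress with whichever block size wins.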

It should also be noted that bzip2 may not be an optimal implementation of
the Burrows-Wheeler block-sorting algorithm. In
ftp://sources.redhat.com/pub/bzip2/docs/manual_4.html#SEC44 Julian Seward
says: "It would be fair to say that the bzip2 format was frozen before I
properly and fully understood the performance consequences of doing so."



The Archive Comparison Test (ACT) (usually available at
http://compression.ca ) has a comparison of archivers. You can find
the algorithm that each archiver uses in the index of archivers. I
found the best compressors for size use Prediction by Partial Match
(PPM, which is based on Markov chains) and Context Tree Weighting
(CTW) algorithms.



Another thing to note when creating compressed *archives* is that the
order of files may be important. It has been documented (I don't remember
where) that grouping files of similar types together can yield better
compression. It might yield optimum compression to go through all the
different ways of ordering files. I suspect this is how the 7-zip (
http://www.7-zip.org ) algorithm outperforms other zip algorithms and
still maintains compatibility with all other zip archivers.

A program or script to put files into different orders for archiving (in a
list or in a list passed to something like tar) is something that I have
been meaning to do. Going through all possible combinations would be nice,
but it may be faster to use something like file to group like files
together. File has quite a few bugs against it though (
http://bugs.debian.org/file ) and it may be far from an optimal choice.
Certainly using file would be a good place for me to start. It'd be nice if
someone wrote such a script before me though. ;-)
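A first cut could be as short as this (an untested sketch; it sorts
paths by file's description so similar files land next to each other,
and it breaks on filenames containing colons):

  find tree -type f -exec file {} + | sort -t: -k2 | cut -d: -f1 > order
  tar -cf tree.tar -T order
  gzip -9 tree.tar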

 Drew Daniels






squashfs compressed file system update?

2003-02-18 Thread Drew Scott Daniels
On Wed, 5 Feb 2003 14:08:14 +1100 Glenn McGrath wrote:
> On Tue, 4 Feb 2003 10:27:59 -0600 (CST) Drew Scott Daniels
> <[EMAIL PROTECTED]> wrote:
>
>> It looks stable enough now, but I wonder why it hasn't been included
>> in the Linux kernel (even 2.5?). I also have to question the value of
>> zlib compression vs other types of compression, but then I suppose
>> that zlib usually gives better compression than none. If I ever get to
>> it I'll try writing a zlib replacement that uses a PPM variant, but it
>> seems unlikely that I will.
>
>The reason gzip compression is used on initrds is because the kernel and
>some bootloaders (grub at least) support it.
>
Would squashfs work? I doubt its compression is optimal, but it might
be better than the alternatives. Currently the best free solution
might be based on PPMd, which has been packaged for Debian.
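A rough size comparison on an unpacked initrd would go something like
this (a sketch I haven't run; the mkcramfs/mksquashfs invocations are
from memory):

  zcat initrd.gz > initrd
  mkdir tree && mount -o loop initrd tree
  mkcramfs tree initrd.cramfs
  mksquashfs tree initrd.sqsh
  ls -l initrd.gz initrd.cramfs initrd.sqsh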

>It would be great is we could use better compression on the initrd.
>
What support would be needed for this?

>Ill have a look at squash fs for comparison, but if it isnt in the main
>kernel its hard to depend on it for the installer, it could be used in
>custom builds though i guess.
Have you done a comparison? Would it be possible to include the option to
use squashfs if it's deemed better than at least some of the other
options?
Thanks

 Drew Daniels






squashfs compressed file system

2003-02-04 Thread Drew Scott Daniels
I've been meaning to file an RFP on squashfs for a while, and I
finally got around to it (
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=179672 ). While I was
at it, I figured I'd ask whether this filesystem has been looked at
for use on any Debian media. I know there have recently been some
comparisons of cramfs, ext2, romfs...

It looks stable enough now, but I wonder why it hasn't been included in
the Linux kernel (even 2.5?). I also have to question the value of zlib
compression vs other types of compression, but then I suppose that zlib
usually gives better compression than none. If I ever get to it I'll try
writing a zlib replacement that uses a PPM variant, but it seems unlikely
that I will.

Squashfs is hosted at http://squashfs.sourceforge.net

 Drew Daniels

