> Wouldn't ZFS's being an integrated filesystem make it easier for it to
> identify the file types vs. a standard block device with a filesystem
> overlaid upon it?
> 
> I read in another post that with compression enabled, ZFS attempts to
> compress the data and stores it compressed if it compresses enough. As
> for identifying the file type/data type, how about:
> 1.) The ZFS block compression system reads the ZFS file table to
> identify which blocks are the beginning of files (or, for new writes,
> the block compression system is notified that file.ext is being
> written starting at block #### (e.g. block 9,000,201)).
> 2.) The ZFS block compression system reads block ####, identifies the
> file type (probably based on the file header) and applies the most
> appropriate compression format, or if none is found, the default.
> 
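Roughly what I picture step 2.) looking like - just a sketch, nothing
like the real ZFS code path, and the enum/type names here are made up -
sniff a few magic bytes at the front of the block and pick a compressor
from that:

/*
 * Toy file-type sniffer: look at the first bytes of a block and return
 * a hint about which compressor (if any) is worth running on it.
 * The enum and its members are invented for this example.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <stddef.h>

typedef enum {
    COMPRESS_DEFAULT,   /* fall back to the pool's default algorithm */
    COMPRESS_NONE,      /* data already compressed; don't waste CPU  */
    COMPRESS_TEXTLIKE   /* e.g. a BWT-style compressor for text/XML  */
} compress_hint_t;

static compress_hint_t
pick_compressor(const uint8_t *blk, size_t len)
{
    if (len < 4)
        return (COMPRESS_DEFAULT);

    if (blk[0] == 0xFF && blk[1] == 0xD8)          /* JPEG */
        return (COMPRESS_NONE);
    if (blk[0] == 0x1F && blk[1] == 0x8B)          /* gzip */
        return (COMPRESS_NONE);
    if (memcmp(blk, "PK\x03\x04", 4) == 0)         /* ZIP  */
        return (COMPRESS_NONE);
    if (memcmp(blk, "<?xm", 4) == 0)               /* XML, compresses well */
        return (COMPRESS_TEXTLIKE);

    return (COMPRESS_DEFAULT);
}

int
main(void)
{
    const uint8_t jpeg_header[] = { 0xFF, 0xD8, 0xFF, 0xE0 };

    printf("hint for a JPEG block: %d\n",
        pick_compressor(jpeg_header, sizeof (jpeg_header)));
    return (0);
}

Already-compressed formats (JPEG, gzip, ZIP) would get skipped
entirely, which keeps the CPU for data that actually shrinks.
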
> An approach for maximal compression:
> The algorithm selection could be:
> 1.) Attempt to compress using BWT; store compressed if BWT works
> better than no compression.
> 2.) When the CPU is otherwise idle, use 10% of spare CPU cycles to
> "walk the disk", trying to recompress each block with each of the
> various supported compression algorithms, ultimately storing that
> block in the most space-efficient compression format.
> 
> This technique would result in a file system that tends to compact its
> data ever more tightly as the data sits in it. It could be compared to
> 'settling' flakes in a cereal box... the contents may have had a lot of
> 'air space' before shipment, but are now 'compressed'. The
> recompression step might even be part of a periodic disk-scrubbing pass
> meant to check and recheck previously written data to make sure the
> sector it is sitting on isn't going bad.
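
And the "walk the disk, recompress with everything, keep the winner"
step could boil down to something like this per block - again only an
illustration, with zlib and bzip2 (which is BWT-based) standing in for
whatever algorithms the pool actually supports; build with -lz -lbz2:

/*
 * Recompression pass for a single block: try each available compressor
 * and remember which one produced the smallest result.  zlib plays the
 * "default" algorithm and bzip2 the BWT-based one; neither is what ZFS
 * really uses internally.
 */
#include <stdio.h>
#include <zlib.h>
#include <bzlib.h>

#define BLOCK_SIZE 131072    /* 128K, a common ZFS record size */

static unsigned char block[BLOCK_SIZE];
static unsigned char zbuf[BLOCK_SIZE + BLOCK_SIZE / 100 + 1024];
static unsigned char bzbuf[BLOCK_SIZE + BLOCK_SIZE / 100 + 1024];

int
main(void)
{
    uLongf zlen = sizeof (zbuf);
    unsigned int bzlen = sizeof (bzbuf);
    size_t best = BLOCK_SIZE;          /* baseline: store uncompressed */
    const char *winner = "none";

    /* Stand-in for one block read off the disk during the walk. */
    for (int i = 0; i < BLOCK_SIZE; i++)
        block[i] = "settling flakes in a cereal box "[i % 32];

    /* Candidate 1: zlib (deflate). */
    if (compress2(zbuf, &zlen, block, BLOCK_SIZE, 6) == Z_OK &&
        zlen < best) {
        best = zlen;
        winner = "zlib";
    }

    /* Candidate 2: bzip2, i.e. a BWT-based compressor. */
    if (BZ2_bzBuffToBuffCompress((char *)bzbuf, &bzlen, (char *)block,
        BLOCK_SIZE, 9, 0, 0) == BZ_OK && bzlen < best) {
        best = bzlen;
        winner = "bzip2 (BWT)";
    }

    printf("keep the %s copy: %zu of %d bytes\n", winner, best, BLOCK_SIZE);
    return (0);
}

A real background pass would just run this per block whenever the
machine is idle (or as part of the scrub), and rewrite the block only
when one of the candidates beats what is already on disk.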


This sounds really good - I like these ideas.
 
 