Adam Leventhal wrote:
This is a great idea. I'd like to add a couple of suggestions:
It might be interesting to focus on compression algorithms which are
optimized for particular workloads and data types, an Oracle database for
example.
NB. Oracle 11g has builtin compression. In general,
Wouldn't ZFS's being an integrated filesystem make it easier for it to
identify the file types vs. a standard block device with a filesystem
overlaid upon it?
I read in another post that with compression enabled, ZFS attempts to
compress the data and stores it compressed if it
pedantic comment below...
dave johnson wrote:
Richard Elling [EMAIL PROTECTED] wrote:
Dave Johnson wrote:
roland [EMAIL PROTECTED] wrote:
there is also no filesystem based approach in compressing/decompressing
a whole filesystem.
one could kludge this by
roland [EMAIL PROTECTED] wrote:
there is also no filesystem based approach in compressing/decompressing a
whole filesystem. you can have 499gb of data on a 500gb partition - and if
you need some more space you would think turning on compression on that fs
would solve your problem. but
Dave Johnson wrote:
roland [EMAIL PROTECTED] wrote:
there is also no filesystem based approach in compressing/decompressing a
whole filesystem. you can have 499gb of data on a 500gb partition - and if
you need some more space you would think turning on
Hi,
It might be interesting to focus on compression algorithms which are
optimized for particular workloads and data types, an Oracle database for
example.
Yes, I agree. That is what I meant when I said "The study might be
extended to the analysis of data in specific applications (e.g. web
Hi,
why not start with LZO first? It's already in zfs-fuse on Linux, and it
looks like it sits between LZJB and gzip in terms of performance and
compression ratio.
It remains to be demonstrated that it behaves similarly on Solaris.
Good question and I'm afraid I don't have a
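The "in between LZJB and gzip" claim is exactly the kind of thing a small benchmark can check. Neither LZO nor LZJB is in the Python standard library, so the sketch below uses zlib at levels 1 and 9 purely as stand-ins for a fast codec and a thorough one; it illustrates how to measure the speed/ratio tradeoff, not the actual numbers for these algorithms.

```python
import time
import zlib

def profile(data: bytes, level: int) -> tuple[float, float]:
    """Return (compression ratio, elapsed seconds) for zlib at `level`.
    Ratio is compressed/original, so lower means better compression."""
    t0 = time.perf_counter()
    out = zlib.compress(data, level)
    return len(out) / len(data), time.perf_counter() - t0

# Level 1 stands in for a fast codec (LZJB/LZO), level 9 for gzip's
# slower, tighter end of the tradeoff.
data = b"The quick brown fox jumps over the lazy dog. " * 5000
fast_ratio, fast_t = profile(data, 1)
best_ratio, best_t = profile(data, 9)
```

A real comparison would run each candidate codec over representative workload data (database files, text, media) rather than one synthetic buffer.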
On Jul 9 2007, Domingos Soares wrote:
Hi,
It might be interesting to focus on compression algorithms which are
optimized for particular workloads and data types, an Oracle database for
example.
Yes, I agree. That is what I meant when I said "The study might be
extended to the analysis of data
Wouldn't ZFS's being an integrated filesystem make it easier for it to
identify the file types vs. a standard block device with a filesystem
overlaid upon it?
I'm not sure. I would think that most applications are going to use the
POSIX layer where there's no separate API for filetypes.
On Mon, Jul 09, 2007 at 05:27:44PM -0500, Haudy Kazemi wrote:
Wouldn't ZFS's being an integrated filesystem make it easier for it to
identify the file types vs. a standard block device with a filesystem
overlaid upon it?
How? The API to ZFS that most everything uses is the POSIX API.
On Mon, Jul 09, 2007 at 03:42:03PM -0700, Darren Dunham wrote:
Wouldn't ZFS's being an integrated filesystem make it easier for it to
identify the file types vs. a standard block device with a filesystem
overlaid upon it?
I'm not sure. I would think that most applications are going to
Richard Elling [EMAIL PROTECTED] wrote:
Dave Johnson wrote:
roland [EMAIL PROTECTED] wrote:
there is also no filesystem based approach in compressing/decompressing
a whole filesystem.
one could kludge this by setting the compression parameters desired on the
One thing ZFS is missing is the ability to select which files to compress.
yes.
there is also no filesystem based approach in compressing/decompressing a whole
filesystem. you can have 499gb of data on a 500gb partition - and if you need
some more space you would think turning on compression on
nice idea! :)
We plan to start with the development of a fast implementation of a
Burrows-Wheeler Transform (BWT) based algorithm.
why not start with LZO first? It's already in zfs-fuse on Linux, and it
looks like it sits between LZJB and gzip in terms of performance and
compression
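For readers unfamiliar with the BWT mentioned above, here is a naive sketch of the forward transform. It sorts all rotations of the input, which is O(n² log n); a real implementation (the "fast implementation" the proposal aims at) would build the transform from a suffix array instead.

```python
def bwt(text: str, sentinel: str = "\0") -> str:
    """Naive Burrows-Wheeler Transform: append a sentinel smaller than
    any input character, sort all rotations, take the last column.
    A sketch of the idea only; not suitable for large blocks."""
    s = text + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

# The transform clusters similar characters together, which is what
# makes the output easy for a following move-to-front + entropy
# coding stage to shrink (as in bzip2).
```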
This is a great idea. I'd like to add a couple of suggestions:
It might be interesting to focus on compression algorithms which are
optimized for particular workloads and data types, an Oracle database for
example.
It might be worthwhile to have some sort of adaptive compression whereby ZFS
Below follows a proposal for a new opensolaris project. Of course,
this is open to change since I just wrote down some ideas I had months
ago, while researching the topic as a graduate student in Computer
Science, and since I'm not an opensolaris/ZFS expert at all. I would
really appreciate any