On Mon, 21 Sep 2009 18:20:53 -0400
Richard Elling richard.ell...@gmail.com wrote:
On Sep 21, 2009, at 2:43 PM, Andrew Deason wrote:
On Mon, 21 Sep 2009 17:13:26 -0400
Richard Elling richard.ell...@gmail.com wrote:
You don't know the max overhead for the file before it is
allocated.
On Tue, 22 Sep 2009 13:26:59 -0400
Richard Elling richard.ell...@gmail.com wrote:
That seems to differ quite a bit from what I've seen; perhaps I am
misunderstanding... is the + 1 block of a different size than the
recordsize? With recordsize=1k:
$ ls -ls foo
2261 -rw-r--r-- 1 root
On Sun, 20 Sep 2009 20:31:57 -0400
Richard Elling richard.ell...@gmail.com wrote:
If you are just building a cache, why not just make a file system and
put a reservation on it? Turn off auto snapshots and set other
features as per best practices for your workload? In other words,
treat it like we treat dump space.
On Mon, 21 Sep 2009 17:13:26 -0400
Richard Elling richard.ell...@gmail.com wrote:
OK, so the problem you are trying to solve is how much stuff can I
place in the remaining free space? I don't think this is knowable
for a dynamic file system like ZFS where metadata is dynamically
allocated.
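Even if exact usage is not knowable up front, a cache can bound it from above. As a minimal sketch (not ZFS's actual accounting), a pessimistic estimate rounds the apparent file size up to whole records; the function name and the decision to ignore metadata are assumptions for illustration:

```python
def worst_case_usage(filesize, recordsize=128 * 1024):
    """Pessimistic on-disk estimate: round the apparent size up to
    whole records. Real ZFS usage also includes dynamically allocated
    metadata (and may be lower with compression), so this is only a
    rough upper bound on the data blocks, not an exact figure."""
    records = max(1, -(-filesize // recordsize))  # ceiling division
    return records * recordsize
```

With the default 128k recordsize this gives 131072 bytes for a 1-byte file, which is consistent with the ~130k figure observed for a truncated file earlier in the thread.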
On Fri, 18 Sep 2009 17:54:41 -0400
Robert Milkowski mi...@task.gda.pl wrote:
There will be a delay of up to 30s currently.
But how much data do you expect to be pushed within 30s?
Let's say it would be even 10g of lots of small files and you would
calculate the total size by only summing up
I think that we are getting caught up in trying to answer
On Thu, 17 Sep 2009 18:40:49 -0400
Robert Milkowski mi...@task.gda.pl wrote:
if you would create a dedicated dataset for your cache and set quota
on it then instead of tracking a disk space usage for each file you
could easily check how much disk space is being used in the dataset.
Would it suffice for you?
On Fri, 18 Sep 2009 12:48:34 -0400
Richard Elling richard.ell...@gmail.com wrote:
The transactional nature of ZFS may work against you here.
Until the data is committed to disk, it is unclear how much space
it will consume. Compression clouds the crystal ball further.
...but not impossible.
On Fri, 18 Sep 2009 16:38:28 -0400
Robert Milkowski mi...@task.gda.pl wrote:
No. We need to be able to tell how close to full we are, for
determining when to start/stop removing things from the cache
before we can add new items to the cache again.
but having a dedicated dataset will
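The "how close to full are we" check against a dedicated dataset can be done with statvfs(3C) on its mountpoint. A minimal sketch, with hypothetical high/low water marks (the thresholds and function names are assumptions, not anything from the thread):

```python
import os

HIGH_WATER = 0.95  # hypothetical: start evicting above this fraction
LOW_WATER = 0.85   # hypothetical: stop evicting below this fraction

def fullness(mountpoint):
    # Fraction of the filesystem's blocks currently in use, as seen
    # by statvfs on the dataset's mountpoint.
    st = os.statvfs(mountpoint)
    return (st.f_blocks - st.f_bfree) / st.f_blocks

def should_evict(mountpoint, evicting):
    # Simple hysteresis: start evicting above HIGH_WATER and keep
    # going until usage drops back below LOW_WATER.
    f = fullness(mountpoint)
    return f > HIGH_WATER if not evicting else f > LOW_WATER

frac = fullness("/")
```

The hysteresis avoids flapping at the threshold; with a quota set on the dataset, `f_blocks` reflects the quota rather than the whole pool.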
Andrew Deason wrote:
As I'm sure you're all aware, filesize in ZFS can differ greatly from
actual disk usage, depending on access patterns. e.g. truncating a 1M
file down to 1 byte still uses up about 130k on disk when
recordsize=128k. I'm aware that this is a result of ZFS's rather
different
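The gap between apparent size and allocation described above can be observed directly with stat(2). A minimal sketch using a sparse file (the opposite divergence from the truncation case: small allocation, large apparent size, but the same `st_size` vs. `st_blocks` distinction that `ls -ls` reports):

```python
import os
import tempfile

def sizes(path):
    st = os.stat(path)
    # st_size is the apparent length; st_blocks counts 512-byte
    # units actually allocated (per POSIX), which is what the first
    # column of `ls -ls` is derived from.
    return st.st_size, st.st_blocks * 512

# A sparse 1 MiB file: write a single byte at the end, leaving a hole.
fd, path = tempfile.mkstemp()
os.pwrite(fd, b"\0", 1024 * 1024 - 1)
os.close(fd)
apparent, allocated = sizes(path)
os.unlink(path)
```

On any filesystem that supports holes, `allocated` stays far below `apparent`; in the truncation case from the post, it is the allocation that stays large while the apparent size shrinks.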
On Thu, 17 Sep 2009 22:55:38 +0100
Robert Milkowski mi...@task.gda.pl wrote:
IMHO you won't be able to lower a file blocksize other than by
creating a new file. For example:
Okay, thank you.
If you are not worried with this extra overhead and you are mostly
concerned with proper accounting
Setting recordsize to 1k if you have lots of files (I