OK.
> Once the file is deleted, ARC entry {dva 5 txg 5} is no longer valid.
> So why do we need it?
>>Let's suppose file is removed from the directory but is still open by
>>some process...
On a traditional file system, you would not be able to delete an open file.
Which I believe is good.
>TXG action
>5 write of first block of File A, assigned DVA 5, birth TXG 5
>10 file A is deleted
>15 write to first block of File B, assigned DVA 5, birth TXG 15
>
>The two blocks are distinct, and are cached separately in the ARC.
Once the file is deleted, ARC entry {dva 5 txg 5} is no longer valid.
Hi Darren,
Thanks for replying.
I am confused then. What is the purpose of incorporating the birth txg
in the arc hash?
How do 2 different files find a shared block?
Obviously, the first one has to read it from disk, and subsequently remember
the birth txg in addition to the dva.
I am pondering the fact that the txg is part of the hash key for the ARC.
It seems to me this has a profound implication: read caching is per txg.
If I read a record into the cache in txg N, and the current txg is closed,
this record becomes effectively un-cached in the new (N+1) txg.
I should have suggested checking that ->b_private is non-NULL before
dereferencing it.
Come to think of it, ->b_private and ->b_efunc should be set and cleared
together, yes?
Except that in arc_buf_evict, when taking the
if (buf->b_data == NULL) {
path, ->b_private does not get cleared, only ->b_efunc.
Hello,
Looking at the arc_buf_destroy code, I am wondering: before an arc buf record
gets freed (deallocated), why not set the associated dmu_buf_impl_t
record's pointer to NULL?
dmu_buf_impl_t *db = buf->b_private;
DBUF_VERIFY(db);
db->db_buf = NULL;
The objecti
Awesome, thanks.
--
This message posted from opensolaris.org
Thanks for clarifying the HOLE-s.
Why are there (possibly) multiple dbuf_dirty_record_t's hanging off of a given
dmu_buf_impl_t?
Thanks for the explanation.
> a dirty buffer goes onto the list corresponding to the txg it belongs to.
Ok. I see that all dirty buffers are put on a per txg list.
This is for easy synchronization, makes sense.
The per dmu_buf_impl_t details are a bit fuzzy.
I see there can be m
Hello,
I believe the following is true, correct me if it is not:
If more than one object references a block (e.g. 2 files have the same block
open), there must be multiple clones of the arc_buf_t (and associated
dmu_buf_impl_t) records present, one for each of the objects.
This is always so, eve
Hello,
Thanks for replying.
I am merely talking about explicitly assigning 0 or NULL to fields of a
structure right after it was allocated. The structure was zero-filled
already in its kmem_cache_alloc* constructor.
I believe these assignments are not necessary on any OS. Am I missing
something?
I started to look at ref counting to convince myself that the db_buf field in
a cached dmu_buf_impl_t object is guaranteed to point at a valid arc_buf_t.
I have seen a "deadbeef" crash on a busy system when zfs_write() is
pre-pagefaulting in the file's pages.
The page fault handler eventually wi
Thanks for replying.
>>I am a bit puzzled why a new ARC entry could not be cloned in arc_release.
>
>We have already handed out a reference to the data at the point that
>buffer is being released, so we cannot allocate a new block "JIT".
I think I get it. Each colliding thread needs
I see a consistent coding pattern in the various zfs object cache constructors.
These constructors always bzero the new structure.
Given this, all subsequent initializations that merely set various fields to 0
or NULL are unnecessary.
No big deal, but fewer CPU cycles and fewer lines of code.
Greets,
I have read a couple of earlier posts by Jeff and Mark Maybee explaining how
ARC reference counting works.
These posts did help clarify this piece of code (a bit complex, to say the
least).
I would like to solicit more comments elucidating ARC reference counting.
The usage pattern