Miles Nordin wrote:
"km" == Kyle McDonald <kmcdon...@egenera.com> writes:

    km> These drives do seem to do a great job at random writes; most
    km> of the promise shows at sequential writes. So does the slog
    km> attempt to write sequentially through the space given to it?


> <thwack> NO!  Everyone who is using the code, writing the code, and
> building the systems says io/s is the number that matters.  If you've
> got some experience otherwise, fine, odd things turn up all the time,
> but AFAICT the consensus is clear right now.

Yeah, I know. I get it. I screwed up and used the wrong term. OK? I agree with you.

Still, when all the previously erased pages are gone, write latencies go up drastically (in some cases to worse than a spinning HD) and io/s goes down; a rough sketch of the arithmetic is below. That's what I really wanted to get at with the question quoted further down.
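To put illustrative numbers on it (the sizes are the commonly quoted 4K-page / 512K-erase-block figures, not anything from a particular drive's datasheet):

    # Illustrative arithmetic only; the sizes are the commonly quoted
    # figures, not measurements from any particular drive.
    PAGE = 4 * 1024             # smallest unit that can be programmed
    ERASE_BLOCK = 512 * 1024    # smallest unit that can be erased

    pages_per_block = ERASE_BLOCK // PAGE     # 128

    # With a pre-erased page available, a 4K write programs one page.
    # With none available, the controller must read the whole erase
    # block, erase it, and reprogram all 128 pages to change one:
    bytes_moved = pages_per_block * PAGE      # 524288
    write_amplification = bytes_moved / PAGE  # 128.0
    print(pages_per_block, write_amplification)

Since a NAND block erase takes on the order of milliseconds, a 128x blow-up per 4K write is roughly how an SSD can end up behind a spinning disk on latency.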
    km> they can't overwrite one page (usually 4k) without erasing the
    km> whole (512k) block the page is in.

> don't presume to get into the business of their black box so far.
I'm not.

Guys like this are:

http://www.anandtech.com/storage/showdoc.aspx?i=3531&p=8
> That's almost certainly not what they do.  They probably do COW like
> ZFS (and yaffs and jffs2 and ubifs), so they will do the 4k writes to
> partly-empty pages until the page is full.  In the background a gc
> thread will evacuate and rewrite pages that have become spattered with
> unreferenced sectors.
That's where the problem comes in. The drive has no knowledge of the filesystem above it, and doesn't know which previously written blocks are still referenced. When the OS filesystem rewrites a directory to drop the pointer to the blocks a deleted file used, and updates its own map of which LBA sectors are free vs. in use, it probably happens pretty much exactly like you say.

But that doesn't let the SSD mark the sectors the file used as unreferenced, so the gc thread can't "evacuate" them ahead of time and add them to the empty page pool.
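To make the bookkeeping problem concrete, here's a toy model of it. Everything in it (the names, the single live-set, the 1000-LBA fill) is invented for illustration; the real firmware is exactly the black box we can't see into:

    # Toy model of the FTL's bookkeeping, not of any real firmware.
    # Without TRIM, the drive only learns a sector is dead when the
    # OS overwrites that same LBA.

    drive_live = set()   # LBAs the controller believes are referenced

    def ssd_write(lba):
        # New data is live. An overwrite supersedes the old copy in the
        # mapping, but the set of referenced LBAs never shrinks.
        drive_live.add(lba)

    def fs_delete_file(lbas):
        # The filesystem just flips bits in its own free-space map; no
        # I/O touches the dead sectors, so the drive never hears of it.
        pass

    def ssd_trim(lbas):
        # What TRIM adds: the OS tells the drive these LBAs are garbage,
        # so the gc thread can evacuate their pages ahead of time.
        drive_live.difference_update(lbas)

    for lba in range(1000):        # fill the drive once
        ssd_write(lba)
    fs_delete_file(range(1000))    # delete everything at the FS level
    print(len(drive_live))         # 1000: gc still sees it all as live
    ssd_trim(range(1000))
    print(len(drive_live))         # 0: now gc has pages it can reclaim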
    km> The drive vendors have come up with a new TRIM command, which
    km> some OSes (Win7) are talking about supporting in their
    km> filesystems.

> this would be useful for VMs with thin-provisioned disks, too.
True. Keeping or putting the 'holes' back in the 'holey' disk files when the VM frees up space would be very useful. A sketch of what the host side of that could look like is below.
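This is only a sketch, and it assumes a Linux host where fallocate(2) can punch holes in a raw sparse image; the FALLOC_FL_* constants (from <linux/falloc.h>), the punch_hole helper, and the guest.img example are my own invention, not any hypervisor's actual plumbing:

    # Sketch only: map a guest's TRIM of an LBA range onto a hole
    # punched in the sparse backing file. Linux-specific.
    import ctypes, ctypes.util, os

    FALLOC_FL_KEEP_SIZE  = 0x01
    FALLOC_FL_PUNCH_HOLE = 0x02

    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
    libc.fallocate.argtypes = [ctypes.c_int, ctypes.c_int,
                               ctypes.c_longlong, ctypes.c_longlong]

    def punch_hole(fd, offset, length):
        # Deallocate the range without changing the file's apparent size.
        if libc.fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                          offset, length) != 0:
            err = ctypes.get_errno()
            raise OSError(err, os.strerror(err))

    # e.g. the guest trims 512-byte sectors 2048-4095 of a raw image:
    # with open("guest.img", "r+b") as img:
    #     punch_hole(img.fileno(), 2048 * 512, 2048 * 512)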
    km> I would think that the slog code would need to use it in order
    km> to keep write speeds up and latencies down. No?

> read the goofy gamer site review, please.  No: with the latest
> Intel firmware, it's not needed.
I did read at least one review that compared old and new firmware on the Intel M model. I'm pretty sure they still saw a performance hit (in latency) once the entire drive had been written to. It may have taken longer to show up, and it may not have been as drastic, but it was still there.

Which review are you talking about?

So what if Intel has fixed it? Not everyone is going to use the Intel drives. If the TRIM command (assuming it can help at all) can keep the other brands and models performing close to how they performed when new, then I'd say it's useful in the ZFS slog too. Just because one vendor may have made it unnecessary doesn't mean it's unnecessary for everyone.

Does it?

 -Kyle



_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss