On Thursday 11 March 2010 17:19:32 Chris Mason wrote:
> On Thu, Mar 11, 2010 at 04:03:59PM +0000, Gordan Bobic wrote:
> > On Thu, 11 Mar 2010 16:35:33 +0100, Stephan von Krawczynski
> > <sk...@ithnet.com> wrote:
> > >> Besides, why shouldn't we help the drive firmware by
> > >> - writing the data only in erase-block sizes
> > >> - trying to write blocks that are smaller than the erase-block in a
> > >> way that won't cross the erase-block boundary
> > > 
> > > Because if the designing engineer of a good SSD controller wasn't able
> > > to cope with that he will have no chance to design a second one.
> > 
> > You seem to be confusing quality of implementation with theoretical
> > possibility.
> > 
> > >> This will not only increase the life of the SSD but also increase its
> > >> performance.
> > > 
> > > TRIM: maybe yes. Rest: pure handwaving.
> > > 
> > >> [...]
> > >> 
> > >> > > And your guess is that intel engineers had no clue when designing
> > >> > > the XE including its controller? You think they did not know what
> > >> > > you and me know and therefore pray every day that some smart fs
> > >> > > designer falls from heaven and saves their product from dying in
> > >> > > between? Really?
> > >> > 
> > >> > I am saying that there are problems that CANNOT be solved on the
> > >> > disk firmware level. Some problems HAVE to be addressed higher up
> > >> > the stack.
> > 
> > >> Exactly, you can't assume that the SSD's firmware understands any and
> > >> all file system layouts, especially if they are on fragmented LVM or
> > >> other logical volume manager partitions.
> > > 
> > > Hopefully the firmware understands exactly no fs layout at all. That
> > > would be braindead. Instead it should understand how to arrange
> > > incoming and outgoing data in a way that its own technical requirements
> > > are met as perfectly as possible. This is no spinning disk; it is
> > > completely irrelevant what the data layout looks like as long as the
> > > controller finds its way through and copes best with read/write/erase
> > > cycles. It may well use additional RAM for caching and data reordering.
> > > Do you really believe ascending block numbers are placed in ascending
> > > addresses inside the disk (as an example)? Why should they? What does
> > > that mean for fs block ordering? If you don't know anyway what a
> > > controller does to your data ordering, how do you want to help it with
> > > its job?
> > > Please accept that we are _not_ talking about trivial flash mem here or
> > > pseudo-SSDs consisting of sd cards. The market has already evolved
> > > better products. The dinosaurs are extinct even if some still look
> > > alive.

You seem to be forgetting that CEOs like to save 10 cents per drive to show 
"millions of dollars saved" by their work. I highly doubt that we won't still 
see SSDs with half-assed wear-leveling implementations 10 years from now.

And no, I don't think that the linear storage we see at the ATA level is at 
all linear on the drive itself. But erase blocks are still erase blocks. I 
highly doubt that the abstraction layer works over sector sizes (512 B) 
rather than over whole erase-block sizes -- that would make it much more 
complicated, and thus slower.

This way, even if writes to the flash cells are made in a fashion similar to 
LogFS, one will still get a r/m/w cycle if the write is 512 B in size and 
lands on an erase block that also holds other data.
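To make the argument concrete, here is a toy sketch (my own simplification, 
not any real firmware) of the decision a controller faces: a sub-erase-block 
write into a block that already holds live data forces a read/modify/write, 
while the same write into an erased block needs only a program cycle:

```python
# Toy model of a flash controller's write path (hypothetical, for
# illustration only; sizes are the ones assumed in this thread).

ERASE_BLOCK = 256 * 1024   # assumed 256 KiB erase block
SECTOR = 512               # ATA sector size

def service_write(block_has_live_data: bool, write_size: int) -> str:
    """Return which cycle the simplified controller must perform."""
    if block_has_live_data:
        # Old data must be read out, merged, the block erased and
        # reprogrammed -- the r/m/w cycle discussed above.
        return "read/modify/write"
    # Target block is already erased: a plain program cycle suffices,
    # whether the write is a full erase block or smaller.
    return "write"

print(service_write(True, SECTOR))    # read/modify/write
print(service_write(False, SECTOR))   # write
```

The point of the sketch is only that the r/m/w penalty depends on whether 
the target erase block is clean, not on how clever the block remapping is.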

> > 
> > I am assuming that you are being deliberately facetious here (the
> > alternative is less kind). The simple fact is that you cannot come up
> > with some magical data (re)ordering method that nullifies problems of
> > common use-cases that are quite nasty for flash based media.
> > 
> > For example - you have a disk that has had all its addressable blocks
> > tainted. A new write comes in - what do you do with it? Worse, a write
> > comes in spanning two erase blocks as a consequence of the data
> > re-alignment in the firmware. You have no choice but to wipe them both
> > and re-write the data. You'd be better off not doing the magic and
> > assuming that the FS is sensibly aligned.
> 
> Ok, how exactly would the FS help here?  We have a device with a 256kb
> erasure size, and userland does a 4k write followed by an fsync.

I assume here that the FS knows the erase-block size and implements TRIM.

> If the FS were to be smart and know about the 256kb requirement, it
> would do a read/modify/write cycle somewhere and then write the 4KB.

If all the free blocks have been TRIMmed, the FS should pick a completely 
free erase-block-sized region and write those 4 KiB of data there.

A correct wear-leveling implementation in the drive should notice that the 
write falls entirely inside a free block and perform just a write cycle, 
padding the supplied data with zeros to the end of the block.
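A minimal sketch of that allocation policy (my naming, not btrfs code): the 
FS places the small write at the start of a fully TRIMmed erase block, so the 
drive can satisfy it with a single program pass:

```python
# Hypothetical allocator sketch: align a small write to the start of a
# completely TRIMmed (erased) erase block so the drive needs no r/m/w.

ERASE_BLOCK = 256 * 1024   # assumed 256 KiB erase block

def pick_target(free_erase_blocks, data):
    """Return (byte offset, block-padded payload) for a small write.

    free_erase_blocks -- indices of fully TRIMmed erase blocks
    data              -- payload smaller than one erase block
    """
    if not free_erase_blocks:
        raise RuntimeError("no TRIMmed blocks free; the drive will r/m/w")
    offset = free_erase_blocks.pop(0) * ERASE_BLOCK
    # The firmware can pad the rest of the block with zeros in the same
    # program pass, as described above.
    padded = data + b"\0" * (ERASE_BLOCK - len(data))
    return offset, padded

off, payload = pick_target([3, 7], b"x" * 4096)
print(off, len(payload))   # 786432 262144
```

Whether real drives special-case such aligned writes is exactly the open 
question of this thread; the sketch only shows what the FS side would do.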

> The underlying implementation is the same in the device.  It picks a
> destination, reads it then writes it back.  You could argue (and many
> people do) that this operation is risky and has a good chance of
> destroying old data.  Perhaps we're best off if the FS does the rmw
> cycle instead into an entirely safe location.

And IMO that's the idea behind TRIM -- not to force the device to do r/m/w 
cycles, only write or erase cycles, provided there's free space and that 
free space doesn't have considerably more write cycles on it than the 
already allocated data.

> 
> It's a great place for research and people are definitely looking at it.
> 
> But with all of that said, it has nothing to do with alignment or trim.
> Modern ssds are a raid device with a large stripe size, and someone
> somewhere is going to do a read/modify/write to service any small write.
> You can force this up to the FS or the application, it'll happen
> somewhere.

Yes, and if the partition is full the r/m/w will happen in the drive. But if 
the partition is far from full and the free space is TRIMmed, then the r/m/w 
cycle will happen inside btrfs and the SSD won't have to do its magic -- 
making the process faster.

The effect will be an FS that behaves consistently over a broad range of 
SSDs, provided there's free space left.
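The policy argued for above can be summarized in a few lines (a toy sketch 
of my claim, not actual btrfs behavior):

```python
# Where the r/m/w cycle happens under the policy described above
# (hypothetical decision rule, for illustration only).

def who_does_rmw(trimmed_free_blocks: int) -> str:
    """With TRIMmed free blocks, the FS relocates the write itself;
    on a full partition only the drive can do the r/m/w."""
    if trimmed_free_blocks > 0:
        return "filesystem"   # fast, consistent across drives
    return "drive"            # depends on firmware quality

print(who_does_rmw(128))   # filesystem
print(who_does_rmw(0))     # drive
```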

> The filesystem metadata writes are a very small percentage of the
> problem overall.  Sure we can do better and try to force larger metadata
> blocks.  This was the whole point behind btrfs' support for large tree
> blocks, which I'll be enabling again shortly.

-- 
Hubert Kario
QBS - Quality Business Software
ul. Ksawerów 30/85
02-656 Warszawa
POLAND
tel. +48 (22) 646-61-51, 646-74-24
fax +48 (22) 646-61-50