Hi,
Jianjian, you really got to the point of the fundamental design.
If I understand it correctly, the whole idea is indeed very simple:
the consumer/provider and circular buffer model. Use the SSD as a circular
write buffer; the flush thread stores incoming writes to this buffer
sequentially as the provider, and the writeback thread writes those data logs
sequentially to the backing device as the consumer.
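To make the model concrete, here is a minimal sketch of such a provider/consumer ring, with hypothetical names and sizes (an illustration of the idea only, not writeboost's actual structures):

#include <stdbool.h>
#include <stdint.h>

#define SEGMENT_SIZE (512 * 1024)   /* one log segment on the SSD */
#define NR_SEGMENTS  4096           /* hypothetical cache size: 4096 * 512KB = 2GB */

/* Hypothetical ring-buffer bookkeeping for the cache device. */
struct cache_ring {
    uint64_t head;  /* next segment the flush thread (provider) will fill */
    uint64_t tail;  /* next segment the writeback thread (consumer) will drain */
};

/* Provider side: reserve the next segment slot, or fail if the ring is full. */
bool ring_reserve_for_flush(struct cache_ring *r, uint64_t *seg_idx)
{
    if (r->head - r->tail >= NR_SEGMENTS)
        return false;               /* writeback must catch up first */
    *seg_idx = r->head++ % NR_SEGMENTS;
    return true;
}

/* Consumer side: take the oldest segment for writeback to the backing HDD. */
bool ring_take_for_writeback(struct cache_ring *r, uint64_t *seg_idx)
{
    if (r->tail == r->head)
        return false;               /* nothing left to write back */
    *seg_idx = r->tail++ % NR_SEGMENTS;
    return true;
}

Because both cursors only move forward, the SSD sees nothing but large sequential writes from the provider and large sequential reads from the consumer.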
Hi,
The major reason is that it needs to read each full 512KB segment and
calculate its checksum to know whether the log was half-written.
(Reading a 500GB SSD that does 500MB/sec sequential read takes 1000 seconds.)
I've just measured how long cache resuming takes.
I used a 2GB SSD for the cache device.
512KB seqread ...
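For illustration, a rough sketch of that kind of resume scan, assuming a made-up segment header carrying an id and a checksum over the 512KB payload (the real on-disk format and checksum algorithm differ; a driver would more likely use crc32c):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SEGMENT_SIZE (512 * 1024)

/* Hypothetical on-SSD segment header. */
struct segment_header {
    uint64_t id;        /* monotonically increasing log id */
    uint64_t checksum;  /* checksum over the rest of the segment */
};

/* Toy checksum stand-in. */
uint64_t sum64(const uint8_t *buf, size_t len)
{
    uint64_t s = 0;
    for (size_t i = 0; i < len; i++)
        s = s * 31 + buf[i];
    return s;
}

/*
 * Resume scan: read every 512KB segment in order and verify it.
 * A half-written segment fails the check and marks the end of the log,
 * which is why resume time is bounded by sequential-read bandwidth
 * (e.g. a 500GB cache at 500MB/sec takes ~1000 seconds to scan).
 */
uint64_t find_log_end(FILE *cache_dev, uint64_t nr_segments)
{
    static uint8_t seg[SEGMENT_SIZE];

    for (uint64_t i = 0; i < nr_segments; i++) {
        struct segment_header h;

        if (fread(seg, 1, SEGMENT_SIZE, cache_dev) != SEGMENT_SIZE)
            return i;
        memcpy(&h, seg, sizeof(h));
        if (sum64(seg + sizeof(h), SEGMENT_SIZE - sizeof(h)) != h.checksum)
            return i;   /* first invalid segment: the log ends here */
    }
    return nr_segments;
}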
Hi,
I've just measured how the splitting affects performance.
I think seqread numbers can make the discussion solid,
so these are the results of reading 6.4GB (64MB * 100) sequentially
(a rough reproduction sketch follows the numbers).
HDD:
64MB read
real 2m1.191s
user 0m0.000s
sys 0m0.470s
Writeboost (HDD+SSD):
64MB read
real 2m13.532s
user 0m0.000s
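For reference, the numbers above came from time(1). A rough, hypothetical user-space reproduction of the same kind of run (the device path is a placeholder, and page-cache effects are ignored, so drop caches or use O_DIRECT for a fair comparison):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define CHUNK   (64UL * 1024 * 1024)   /* 64MB per read */
#define NCHUNKS 100                    /* 100 * 64MB = 6.4GB total */

int main(int argc, char **argv)
{
    const char *dev = argc > 1 ? argv[1] : "/dev/mapper/wbdev"; /* placeholder */
    char *buf = malloc(CHUNK);
    int fd = open(dev, O_RDONLY);
    struct timespec t0, t1;

    if (fd < 0 || !buf) { perror("setup"); return 1; }

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < NCHUNKS; i++) {
        ssize_t done = 0;
        while (done < (ssize_t)CHUNK) {
            ssize_t n = read(fd, buf + done, CHUNK - done);
            if (n <= 0) { perror("read"); return 1; }
            done += n;
        }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("read %.1f GB in %.3f s (%.1f MB/s)\n",
           CHUNK * (double)NCHUNKS / 1e9, secs,
           CHUNK * (double)NCHUNKS / 1e6 / secs);
    return 0;
}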
Jianjian,
How about invalidating previous writes to the same sector address? If the
first write is stored in one 512KB log on the SSD and the user later writes
to the same address, will writeboost invalidate the old write by updating the
metadata header in place in that 512KB log? And other metadata like the
superblock, ...
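For context, log-structured caches usually handle overwrites without touching the old 512KB log on the SSD at all: the new data is appended to the current log, an in-memory lookup table is repointed at the newest copy, and the stale block is simply ignored until its segment is reclaimed. A toy sketch of that idea with hypothetical structures (not writeboost's actual metadata):

#include <stddef.h>
#include <stdint.h>

#define HASH_BUCKETS 4096

/* Where the newest copy of a cached 4KB block currently lives in the log. */
struct cache_entry {
    uint64_t sector;            /* backing-device sector (lookup key) */
    uint64_t seg_idx;           /* which 512KB segment holds the data */
    uint32_t offset;            /* offset of the 4KB block inside the segment */
    struct cache_entry *next;   /* hash chain */
};

static struct cache_entry *buckets[HASH_BUCKETS];

/* Find the chain slot for a sector (points at the entry, or at NULL). */
struct cache_entry **lookup_slot(uint64_t sector)
{
    struct cache_entry **pp = &buckets[sector % HASH_BUCKETS];
    while (*pp && (*pp)->sector != sector)
        pp = &(*pp)->next;
    return pp;
}

/*
 * On a write to an already-cached sector, the freshly appended log location
 * replaces the old mapping; the old data in its 512KB log is never updated
 * in place, it just becomes garbage dropped when the segment is reused.
 */
void remap_sector(struct cache_entry *fresh, uint64_t sector,
                  uint64_t seg_idx, uint32_t offset)
{
    struct cache_entry **pp = lookup_slot(sector);

    fresh->sector = sector;
    fresh->seg_idx = seg_idx;
    fresh->offset = offset;
    fresh->next = *pp ? (*pp)->next : NULL;
    *pp = fresh;                /* any previous entry is now stale/unreferenced */
}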
On 12/12/14 01:42, Akira Hayakawa wrote:
1. Writeboost shouldn't split the bio into 4KB chunks.
No. It is necessary.
I know WALB (https://github.com/starpos/walb) logs data without
splitting but the data structure becomes complicated.
If you read my code carefully, you will notice that splitting ...
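For reference, the splitting itself just carves an incoming request into 4KB-aligned pieces so that each piece maps onto a single cache block; a simplified stand-alone sketch (plain user-space C, not the driver's actual bio handling):

#include <stdint.h>
#include <stdio.h>

#define CHUNK_SIZE 4096ULL

/* Stand-in for the per-4KB-block cache logic. */
void handle_chunk(uint64_t offset, uint64_t len)
{
    printf("chunk: offset=%llu len=%llu\n",
           (unsigned long long)offset, (unsigned long long)len);
}

/* Split an arbitrary [offset, offset+len) request on 4KB boundaries. */
void split_io(uint64_t offset, uint64_t len)
{
    while (len) {
        uint64_t boundary = (offset / CHUNK_SIZE + 1) * CHUNK_SIZE;
        uint64_t piece = boundary - offset;

        if (piece > len)
            piece = len;
        handle_chunk(offset, piece);
        offset += piece;
        len -= piece;
    }
}

int main(void)
{
    split_io(3000, 10000);   /* an unaligned 10000-byte request becomes 4 pieces */
    return 0;
}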
On 12/12/14 6:12 PM, Bart Van Assche wrote:
This is the first time I see someone claiming that reducing the request size
improves performance. I don't know any SSD model for which splitting requests
improves performance.
On 12/12/14 10:35, Akira Hayakawa wrote:
Writeboost batches a number of writes into a log (512KB large) and
submits it to the SSD,
which maximizes the throughput and the lifetime of the SSD.
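As an illustration of that batching, a simplified sketch with made-up structures: incoming 4KB writes are copied into a RAM staging buffer behind a small header, and the whole 512KB segment is submitted as one sequential write once it fills up (the real driver's segment layout, deadlines and submission path differ):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define SEGMENT_SIZE   (512 * 1024)
#define BLOCK_SIZE     4096
#define HEADER_SIZE    BLOCK_SIZE                                  /* first 4KB: metadata */
#define BLOCKS_PER_SEG ((SEGMENT_SIZE - HEADER_SIZE) / BLOCK_SIZE) /* 127 data blocks */

/* In-RAM staging buffer for the segment currently being filled. */
struct ram_segment {
    uint8_t  buf[SEGMENT_SIZE];
    uint64_t sectors[BLOCKS_PER_SEG];  /* which backing sector each block caches */
    unsigned nr_filled;
};

/* Stub for the actual "write 512KB sequentially to the SSD" step. */
void submit_segment_to_ssd(struct ram_segment *seg)
{
    (void)seg;
}

/*
 * Copy one 4KB write into the staging buffer; submit the segment when full.
 * Batching turns many small random writes into one large sequential SSD write,
 * which is where the throughput and lifetime benefit comes from.
 */
void stage_write(struct ram_segment *seg, uint64_t sector, const void *data)
{
    size_t off = HEADER_SIZE + (size_t)seg->nr_filled * BLOCK_SIZE;

    memcpy(seg->buf + off, data, BLOCK_SIZE);
    seg->sectors[seg->nr_filled] = sector;
    seg->nr_filled++;

    if (seg->nr_filled == BLOCKS_PER_SEG) {
        submit_segment_to_ssd(seg);     /* one sequential 512KB write */
        memset(seg, 0, sizeof(*seg));   /* start filling the next segment */
    }
}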
Does this mean that dm-writeboost is similar to btier? If so, this
makes me wonder which of these two ...
On Fri, Dec 12, 2014 at 09:42:15AM +0900, Akira Hayakawa wrote:
The SSD-caching should be log-structured.
No argument there, and this is why I've supported you with
dm-writeboost over the last couple of years.
However, after looking at the current code, and using it, I think it's
a long, long way from being ready for production. As we've already
discussed, there are some very naive design decisions in there, such as
copying every bio payload to another memory buffer and splitting all io
down to 4k.
If I understand it correctly, the whole idea is indeed very simple:
the consumer/provider and circular buffer model. Use the SSD as a circular
write buffer; the flush thread stores incoming writes to this buffer
sequentially as the provider, and the writeback thread writes those data logs
sequentially into the backing device.
On Wed, Dec 10 2014 at 6:42am -0500,
Akira Hayakawa ruby.w...@gmail.com wrote:
This patch adds dm-writeboost to the staging tree.
dm-writeboost is a log-structured SSD-caching driver.
It caches data in a log-structured way on the cache device
so that the performance is maximized.
The merit of ...
Mike,
Below are my comments in response to Joe's previous comments:
1. Writeboost shouldn't split the bio into 4KB chunks.
No. It is necessary.
I know WALB (https://github.com/starpos/walb) logs data without
splitting but the data structure becomes complicated.
If you read my code carefully, you will ...
This patch adds dm-writeboost to the staging tree.
dm-writeboost is a log-structured SSD-caching driver.
It caches data in a log-structured way on the cache device
so that the performance is maximized.
The merit of putting this driver in the staging tree is
to make it possible to get more feedback from ...