Jan 2015 16:09:52 -0800
Greg KH gre...@linuxfoundation.org wrote:
On Thu, Jan 01, 2015 at 05:44:39PM +0900, Akira Hayakawa wrote:
This patch adds dm-writeboost to staging tree.
dm-writeboost is a log-structured SSD-caching driver.
It caches data in a log-structured way on the cache device
so that the performance is maximized.
- Akira
On 2015/02/21 1:17, Joe Thornber wrote:
On Sat, Feb 21, 2015 at 01:06:08AM +0900, Akira Hayakawa wrote:
The size is configurable but typically 512KB (that's the default).
Referring to the bio payload sounds really dangerous, but it may be possible
in some tricky way
the ownership?
- Akira
On 2015/02/21 0:50, Joe Thornber wrote:
On Sat, Feb 21, 2015 at 12:25:53AM +0900, Akira Hayakawa wrote:
Yes.
How big are your log chunks? Presumably they're relatively small (e.g.,
256k). In which case you can optimise for the common case where you
have enough bios to hand
didn't exhibit before but it's truly a bug.
- Fully revised the README.
Now that we have read-caching support, the old README was completely obsolete.
- Update TODO
Implementing read-caching is done.
- bump up the copyright year to 2015
- fix up comments
Signed-off-by: Akira Hayakawa ruby.w...@gmail.com
will perform garbage collections
twice.
And one question: how long will the data log replay take during init
if the SSD is almost full of dirty data logs?
Jianjian
On Fri, Dec 12, 2014 at 7:09 AM, Akira Hayakawa ruby.w...@gmail.com wrote:
However, after looking at the current code, and using
for the
oldest log as the starting point. They are 4KB metadata reads, but they take
time to some extent.
(The other 2 sec is thought to be spent by this.)
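Jianjian's replay-time question can be bounded with a back-of-envelope estimate. The sketch below is a hypothetical calculation, not a figure from the thread: the cache-device size and per-read latency are illustrative assumptions; only the 512KB log size and the 4KB metadata read per log come from the discussion.

```python
# Back-of-envelope estimate of log replay time at init.
# Device size and read latency are hypothetical, not figures from the thread.
def replay_time_sec(cache_size_bytes, log_size_bytes, metadata_read_sec):
    """Replay scans one small metadata block per log segment on the SSD."""
    num_logs = cache_size_bytes // log_size_bytes
    return num_logs * metadata_read_sec

ssd = 128 * 2**30   # 128 GiB cache device (hypothetical)
log = 512 * 2**10   # 512 KiB log, the default mentioned in the thread
read = 100e-6       # ~100 us per 4 KiB random read (hypothetical)
t = replay_time_sec(ssd, log, read)
print(f"{ssd // log} logs, ~{t:.1f} s to scan metadata")
```

So even a full cache device costs tens of seconds of metadata scanning under these assumptions, which is why the starting point (the oldest valid log) matters.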
- Akira
On 12/13/14 11:07 PM, Akira Hayakawa wrote:
Hi,
Jianjian, you really got to the point of the fundamental design.
If I understand it correctly
like this
before: bio -> ~map: bio -> bio
after: bio -> ~should_split: bio -> bool -> ~map: bio -> bio
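The interface change in the two lines above can be modeled in a few lines of code. This is an illustrative Python toy, not the kernel interface: the dict-based "bio", the MAX_IO threshold, and the names map_bio/submit are all assumptions; only the idea of consulting a should_split predicate before map comes from the fragment.

```python
# Toy model of the proposed interface change (not the kernel code):
# before: the target exposes only map(bio);
# after: a should_split(bio) predicate is consulted first, and oversized
# bios are split before each piece is passed to map().
MAX_IO = 512 * 1024  # hypothetical split threshold

def should_split(bio):
    return bio["size"] > MAX_IO

def map_bio(bio):
    # Stand-in for the target's map function: remap the bio to the cache.
    return dict(bio, device="cache")

def submit(bio):
    if not should_split(bio):
        return [map_bio(bio)]
    pieces = []
    offset = 0
    while offset < bio["size"]:
        size = min(MAX_IO, bio["size"] - offset)
        pieces.append(map_bio({"offset": bio["offset"] + offset, "size": size}))
        offset += size
    return pieces

print(len(submit({"offset": 0, "size": 3 * MAX_IO})))  # -> 3 pieces
```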
- Akira
On 12/13/14 12:09 AM, Akira Hayakawa wrote:
However, after looking at the current code, and using it I think it's
a long, long way from being ready for production. As we've already
discussed
, Jianjian Huo wrote:
On Sat, Dec 13, 2014 at 6:07 AM, Akira Hayakawa ruby.w...@gmail.com wrote:
Hi,
Jianjian, you really got to the point of the fundamental design.
If I understand it correctly, the whole idea indeed is very simple:
the consumer/provider and circular-buffer model. Use the SSD as a circular
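The consumer/provider, circular-buffer idea can be sketched as a userspace toy. This Python model is illustrative only (the slot count and names are made up); it just shows the invariant: when the ring of log segments is full, the oldest segment is migrated to the backing device before a new one is admitted.

```python
from collections import deque

class CircularLogCache:
    """Toy model of the SSD-as-circular-buffer design: the producer appends
    log segments; when the ring is full, the consumer (writeback) must first
    migrate the oldest segment to the backing device."""
    def __init__(self, num_slots):
        self.num_slots = num_slots
        self.ring = deque()
        self.backing = []          # stand-in for the backing (slow) device

    def produce(self, segment):
        if len(self.ring) == self.num_slots:
            self.consume_oldest()  # make room, like writeback under pressure
        self.ring.append(segment)

    def consume_oldest(self):
        self.backing.append(self.ring.popleft())

cache = CircularLogCache(num_slots=4)
for seg in range(6):
    cache.produce(seg)
print(list(cache.ring), cache.backing)  # -> [2, 3, 4, 5] [0, 1]
```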
On 12/12/14 6:12 PM, Bart Van Assche wrote:
This is the first time I see someone claiming that reducing the request size
improves performance. I don't know any SSD model for which splitting requests
improves performance.
Writeboost batches a number of writes into a log (that is 512KB large)
the mechanism is.
Not to mention the code itself is.
- Akira
On 12/12/14 11:24 PM, Joe Thornber wrote:
On Fri, Dec 12, 2014 at 09:42:15AM +0900, Akira Hayakawa wrote:
The SSD-caching should be log-structured.
No argument there, and this is why I've supported you with
dm-writeboost over the last
2014 at 6:42am -0500, Akira Hayakawa ruby.w...@gmail.com wrote:
This patch adds dm-writeboost to staging tree.
dm-writeboost is a log-structured SSD-caching driver.
It caches data in a log-structured way on the cache device
so that the performance is maximized.
The merit of putting
users
and polish the code.
Signed-off-by: Akira Hayakawa ruby.w...@gmail.com
---
 MAINTAINERS                        |  6 +
 drivers/staging/Kconfig            |  2 +
 drivers/staging/Makefile           |  1 +
 drivers/staging/writeboost/Kconfig
Dave,
# -EIO returned corrupts XFS
I turned up
lockdep, frame pointer, xfs debug
and also changed to 3.12.0-rc5 from rc1.
What's changed is that
the problem we discussed in previous mails
*never* reproduces.
However, if I turn off only lockdep,
it hangs up when I set blockup to 1 and then to 0.
Dave
XFS shuts down because you've returned EIO to a log IO. That's a
fatal error. If you do the same to an ext4 journal write, it will do
the equivalent of shut down (e.g. complain and turn read-only).
You mean a block device should not return -EIO anyway if
it doesn't want XFS to suddenly shut down?
Mikulas,
I/Os shouldn't be returned with -ENOMEM. If they are, you can treat it as
a hard error.
It seems that blkdev_issue_discard returns -ENOMEM
when bio_alloc fails, for example.
Waiting for a second until we can allocate the memory is my idea
for handling a returned -ENOMEM.
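The wait-and-retry idea can be sketched in userspace. This Python model is illustrative only (the retry count, delay, and the flaky allocator are made up): it retries a transient allocation failure a few times before surfacing it as a hard error, mirroring the "wait and retry" handling of a transient -ENOMEM.

```python
import errno
import time

def alloc_with_retry(try_alloc, retries=5, delay=0.01):
    """Retry a transient allocation failure instead of surfacing it as
    a hard I/O error. try_alloc returns a buffer or raises MemoryError."""
    for attempt in range(retries):
        try:
            return try_alloc()
        except MemoryError:
            if attempt == retries - 1:
                raise            # give up: report the hard error
            time.sleep(delay)    # wait a moment and hope memory frees up

# Simulated allocator that fails twice, then succeeds.
attempts = {"n": 0}
def flaky_alloc():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise MemoryError(errno.ENOMEM)
    return bytearray(4096)

buf = alloc_with_retry(flaky_alloc)
print(len(buf), attempts["n"])  # -> 4096 3
```

The design question the thread raises still stands: retrying is only safe if the failure really is transient; a persistent -ENOMEM must eventually be reported.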
Blocking I/O
Hi, DM Guys
I suppose I have finished the tasks to
address the points Mikulas raised.
So, let me update the progress report.
The code is updated now on my Github repo.
Check out the develop branch to get
the latest source code.
Compilation Status
------------------
First, compilation status.
Mikulas,
Let me ask you about this comment
regarding the choice of the best API.
For the rest, I will reply later.
BTW. You should use wait_event_interruptible_lock_irq instead of
wait_event_interruptible and wait_event_interruptible_lock_irq_timeout
instead of wait_event_interruptible_timeout. The
Mikulas,
Waking up every 100ms in flush_proc is not good because it wastes CPU time
and energy if the driver is idle.
Yes, 100ms is too short. I will change it to 1sec then.
We can wait for 1 sec in termination.
The problem is that if you fill up the whole cache device in less time
than 1
Mikulas,
nvidia binary driver, but it may happen in other parts of the kernel too.
The fact that it works in your setup doesn't mean that it is correct.
You are right. I am convinced.
As far as I have looked around the kernel code,
it seems that kthread is chosen when one needs looping in
Dave,
That's where arbitrary delays in the storage stack below XFS cause
problems - if the first FUA log write is delayed, the next log
buffer will get filled, issued and delayed, and when we run out of
log buffers (there are 8 maximum) the entire log subsystem will
stall, stopping *all* log
Mikulas,
The change seems ok. Please, also move this piece of code in flush_proc
out of the spinlock:
if (kthread_should_stop())
return 0;
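Mikulas's point above can be modeled with a userspace analogue. This Python sketch is illustrative only: threading.Event stands in for kthread_should_stop() and a Lock for the spinlock; the important part is that the stop check happens outside the lock, so the thread never returns while holding it.

```python
import threading

# Userspace analogue of the flush_proc fix: check the stop condition
# OUTSIDE the lock, so the thread can exit without holding the "spinlock".
stop = threading.Event()
lock = threading.Lock()
queue = []
flushed = []

def flush_proc():
    while True:
        if stop.is_set():        # checked outside the lock, as suggested
            return
        with lock:               # lock held only to grab pending work
            work, queue[:] = queue[:], []
        flushed.extend(work)     # do the actual flushing unlocked

with lock:
    queue.extend([1, 2, 3])
t = threading.Thread(target=flush_proc)
t.start()
while not flushed:               # wait for the worker to drain the queue
    pass
stop.set()                       # ask the worker to stop; it exits cleanly
t.join()
print(flushed)  # -> [1, 2, 3]
```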
It caused the workqueue warning I reported before and still causes warning
with kthreads:
into
any type of workqueue?
You are only mentioning that
putting a looping work item in system_wq
is the wrong way, since
the nvidia driver flushes the workqueue.
Akira
On 10/4/13 10:38 PM, Mikulas Patocka wrote:
On Fri, 4 Oct 2013, Akira Hayakawa wrote:
Hi, Mikulas,
I am sorry to say that
I don't have such machines to reproduce the problem.
Hi, Mikulas,
Thank you for reporting.
I am really happy to see this report.
First, I respond to the performance problem.
I will make time later for investigating the rest and answer.
Some deadlock issues are difficult to solve in short time.
I tested dm-writeboost with disk as backing device
Hi, Mikulas,
I am sorry to say that
I don't have such machines to reproduce the problem.
But I agree that I am dealing with the workqueue subsystem
in a little bit weird way.
I should clean them up.
For example,
free_cache() routine below is
a destructor of the cache metadata
including all
superblock_recorder should be in the -metadata.c file,
but I chose to put it in this file for unity.
Thanks,
Akira
followed by the current .h files.
-- dm-writeboost-daemon.h --
/*
* Copyright (C) 2012-2013 Akira Hayakawa ruby.w...@gmail.com
*
* This file is released under the GPL
design rule?
Akira
On 9/26/13 2:37 AM, Mike Snitzer wrote:
On Tue, Sep 24 2013 at 8:20am -0400,
Akira Hayakawa ruby.w...@gmail.com wrote:
Hi, Mike
I am now working on the redesign and implementation
of dm-writeboost.
This is a progress report.
Please run
git clone https://github.com/akiradeveloper/dm-writeboost.git
Hi, Mike
I am now working on the redesign and implementation
of dm-writeboost.
This is a progress report.
Please run
git clone https://github.com/akiradeveloper/dm-writeboost.git
to see full set of the code.
* 1. Current Status
writeboost in the new design passed my tests.
Documentations are
benefit.
Akira
On 9/18/13 5:59 AM, Mike Snitzer wrote:
On Tue, Sep 17 2013 at 8:43am -0400,
Akira Hayakawa ruby.w...@gmail.com wrote:
Hi, Mike
There are two designs in my mind
regarding the formatting cache.
You said
administer the writeboost devices. There is no need for this. Just
Mike,
First, thank you for your commenting.
I was looking forward to your comments.
I suppose you are sensing some smell in my design.
You are worrying that dm-writeboost will not only confuse users
but also fall into the worst situation of giving up backward compatibility
after merging into the tree.
Hi, Mike
There are two designs in my mind
regarding the formatting cache.
You said
administer the writeboost devices. There is no need for this. Just
have a normal DM target whose .ctr takes care of validation and
determines whether a device needs formatting, etc.
makes me wonder