On 2017/7/1 4:43 AM, bca...@lists.ewheeler.net wrote:
> From: Tang Junhui
>
> bucket_in_use is updated in the gc thread, which is triggered by
> invalidating or writing sectors_to_gc dirty data. That interval is too
> long; therefore, when we use it to compare with the threshold, it is
On 2017/7/11 上午5:46, Kai Krakow wrote:
> On Mon, 10 Jul 2017 19:28:28 +0800, Coly Li wrote:
>
>> If a read bio to the cache device fails, bcache will try to
>> recover it by forwarding the read bio to the backing device. If the
>> backing device responds to the read request successfully
On 2017/7/6 11:24 PM, Christoph Hellwig wrote:
> On Thu, Jul 06, 2017 at 03:35:48PM +0800, Coly Li wrote:
>> Then does gfs2 break the above rule? In gfs2_metapath_ra() and
>> gfs2_dir_readahead(), only REQ_META is used in submit_bh(). It seems an
>> extra REQ_PRIO should be there.
>
> Or maybe not.
On Mon, Jul 10 2017, Shaohua Li wrote:
> On Mon, Jul 10, 2017 at 03:25:41PM +0800, Ming Lei wrote:
>> On Mon, Jul 10, 2017 at 02:38:19PM +1000, NeilBrown wrote:
>> > On Mon, Jul 10 2017, Ming Lei wrote:
>> >
>> > > On Mon, Jul 10, 2017 at 11:35:12AM +0800, Ming Lei wrote:
>> > >> On Mon, Jul 10,
On Tue, Jul 04, 2017 at 10:33:07PM -0500, Goldwyn Rodrigues wrote:
> From: Goldwyn Rodrigues
>
> Assigning pos for use early messes things up in append mode, where
> pos is re-assigned in generic_write_checks(). Assign pos later to
> get the correct position to write from.
On Mon, Jul 10, 2017 at 12:05:49PM -0700, Shaohua Li wrote:
> On Mon, Jul 10, 2017 at 03:25:41PM +0800, Ming Lei wrote:
> > On Mon, Jul 10, 2017 at 02:38:19PM +1000, NeilBrown wrote:
> > > On Mon, Jul 10 2017, Ming Lei wrote:
> > >
> > > > On Mon, Jul 10, 2017 at 11:35:12AM +0800, Ming Lei wrote:
Hi, everyone,
I did some benchmarks of Kyber on 4.12 that I wanted to share. If anyone
else has done any testing, I'd love to see the results.
== Latency
Kyber's basic function is controlling latency, so the first benchmark I
did was to measure latency of a mixed workload. When idle, the NVMe
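A mixed-workload latency run of this kind can be driven with fio along these lines (device path, mix ratio, and depths are assumptions for illustration, not the poster's exact setup):

```shell
# Illustrative fio invocation for a mixed random read/write latency test
# against an NVMe device; adjust filename, runtime, and iodepth to taste.
fio --name=mixed --filename=/dev/nvme0n1 --direct=1 --ioengine=libaio \
    --rw=randrw --rwmixread=75 --bs=4k --iodepth=64 \
    --runtime=60 --time_based
```

fio reports completion-latency percentiles per job, which is the number a latency-targeting scheduler like Kyber is meant to control.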
So, adding hpsa_allow_any=1 did not work...
When you added the 0x40800e11, did you add it to both tables?
/* define the PCI info for the cards we can control */
static const struct pci_device_id hpsa_pci_device_id[] = {
{PCI_VENDOR_ID_COMPAQ, PCI_DEVICE_ID_COMPAQ_CISSB, 0x0E11,
On Mon, Jul 10, 2017 at 03:25:41PM +0800, Ming Lei wrote:
> On Mon, Jul 10, 2017 at 02:38:19PM +1000, NeilBrown wrote:
> > On Mon, Jul 10 2017, Ming Lei wrote:
> >
> > > On Mon, Jul 10, 2017 at 11:35:12AM +0800, Ming Lei wrote:
> > >> On Mon, Jul 10, 2017 at 7:09 AM, NeilBrown
bio_free isn't a good place to free cgroup info. There are a lot of
cases where a bio is allocated in a special way (for example, on the
stack) and bio_put, and hence bio_free, never gets called, so we are
leaking memory. This patch moves the free to bio endio, which should
be called anyway. The
bio_uninit call in
On 6/19/2017 12:18 AM, Christoph Hellwig wrote:
+static void nvme_rdma_destroy_admin_queue(struct nvme_rdma_ctrl *ctrl,
+		bool remove)
 {
+	nvme_rdma_stop_queue(&ctrl->queues[0]);
+	if (remove) {
+		blk_cleanup_queue(ctrl->ctrl.admin_connect_q);
+
On 07/10/2017 12:40 PM, Shaohua Li wrote:
> bio_free isn't a good place to free cgroup info. There are a lot of
> cases where a bio is allocated in a special way (for example, on the
> stack) and bio_put, and hence bio_free, never gets called, so we are
> leaking memory. This patch moves the free to bio endio, which
On 2017/7/1 4:43 AM, bca...@lists.ewheeler.net wrote:
> From: Tang Junhui
>
> Since dirty sectors of a thin flash volume cannot be used to cache data
> for the backing device, we should subtract them when calculating the
> writeback rate.
>
I see you want to get rid of the noise of flash
On 07/10/2017 11:16 AM, Sagi Grimberg wrote:
> Hey Jens,
>
> Another round of early patches for 4.13.
>
> I added the quiesce/unquiesce patches in here as it's
> easy for me to apply changes on top. It has accumulated
> reviews and includes mostly nvme anyway; please tell me if
> you don't
Hey Jens,
Another round of early patches for 4.13.
I added the quiesce/unquiesce patches in here as it's
easy for me to apply changes on top. It has accumulated
reviews and includes mostly nvme anyway; please tell me if
you don't want to take them with this.
This includes:
-
On 07/08/2017 01:06 PM, Levin, Alexander (Sasha Levin) wrote:
> Hi all,
>
> syzkaller seems to be hitting a lockup with the reproducer below:
>
> INFO: task syzkaller490361:8788 blocked for more than 120 seconds.
> Not tainted 4.12.0-next-20170706+ #186
> "echo 0 >
Hello,
We can build all schedulers either as modules or built-in, and set the
defaults with
CONFIG_SCSI_MQ_DEFAULT=y/n / CONFIG_DM_MQ_DEFAULT=y/n.
Following setup:
All schedulers set to Y (built-in)
CONFIG_DEFAULT_CFQ=y
CONFIG_SCSI_MQ_DEFAULT and CONFIG_DM_MQ_DEFAULT set to N
Boot now with
Hello Christoph Hellwig,
The patch 2a842acab109: "block: introduce new block status code type"
from Jun 3, 2017, leads to the following static checker warning:
fs/exofs/inode.c:1333 exofs_new_inode()
error: passing non negative 255 to ERR_PTR
drivers/scsi/osd/osd_initiator.c
> On Fri, Jul 07, 2017 at 11:42:38AM -0400, Laurence Oberman wrote:
> > What happens when hpsa_allow_any=1 with the Smart Array 64xx
> > It should probe.
>
> But only if it has a HP vendor ID as far as I can tell. We'd
> still need to add the compaq ids so that these controllers get
> probed.
On Fri, Jul 07, 2017 at 08:09:28PM -0600, Jens Axboe wrote:
> On 07/07/2017 07:51 PM, Goldwyn Rodrigues wrote:
> > On 07/04/2017 05:16 PM, Jens Axboe wrote:
> >>
> >> Please expedite getting this upstream, asap.
> >
> > I have posted an updated patch [1] and it is acked by David. Would you
> >
If a read bio to the cache device fails, bcache will try to recover it
by forwarding the read bio to the backing device. If the backing device
responds to the read request successfully, then the bio containing data
from the backing device will be returned to the upper layer.
The recovery effort in
On Mon, Jul 10, 2017 at 02:38:19PM +1000, NeilBrown wrote:
> On Mon, Jul 10 2017, Ming Lei wrote:
>
> > On Mon, Jul 10, 2017 at 11:35:12AM +0800, Ming Lei wrote:
> >> On Mon, Jul 10, 2017 at 7:09 AM, NeilBrown wrote:
> ...
> >> >> +
> >> >> + rp->idx = 0;
> >> >
> >>