On Sat, Oct 21, 2017 at 4:58 AM, Bart Van Assche wrote:
> Sorry, but I'm not sure that's the best possible answer. In my opinion,
> avoiding dependencies of completion objects on other lock objects,
> e.g. by not waiting on a completion object while holding a mutex, is a
> far superior s
On Fri, 2017-10-20 at 16:54 -0600, dann frazier wrote:
> hey,
> I'm seeing a regression when executing 'dmraid -r -c' in an arm64
> QEMU guest, which I've bisected to the following commit:
>
> ca18d6f7 "block: Make most scsi_req_init() calls implicit"
>
> I haven't yet had time to try and deb
Hey,
I'm seeing a regression when executing 'dmraid -r -c' in an arm64
QEMU guest, which I've bisected to the following commit:
ca18d6f7 "block: Make most scsi_req_init() calls implicit"
I haven't had time to debug it yet, but wanted to get the report
out there before the weekend.
On Fri, 2017-10-20 at 11:39 +0200, Roman Penyaev wrote:
> But what bothers me is these looong loops inside blk_mq_sched_restart(),
> and since you are the author of the original 6d8c6c0f97ad ("blk-mq: Restart
> a single queue if tag sets are shared") I want to ask what was the original
> problem wh
On Fri, 2017-10-20 at 08:34 +0200, Thomas Gleixner wrote:
> On Thu, 19 Oct 2017, Bart Van Assche wrote:
> > Are there any completion objects for which the cross-release checking is
> > useful?
>
> All of them by definition.
Sorry, but I'm not sure that's the best possible answer. In my opinion,
avoiding dependencies of completion objects on other lock objects,
e.g. by not waiting on a completion object while holding a mutex, is a
far superior s
Javier
> On 18 Oct 2017, at 18.52, Christoph Hellwig wrote:
>
> Introduce a new struct nvme_ns_head that holds information about an actual
> namespace, unlike struct nvme_ns, which only holds the per-controller
> namespace information. For private namespaces there is a 1:1 relation of
> the two,
On Fri, Oct 20, 2017 at 06:26:30PM +0200, Christoph Hellwig wrote:
> On Thu, Oct 19, 2017 at 05:59:33PM +0200, Benjamin Block wrote:
> > > +#define ptr64(val) ((void __user *)(uintptr_t)(val))
> >
> > Better to reflect in the name of the macro the special property
> > that it is a user pointer.
On Thu, Oct 19, 2017 at 05:59:33PM +0200, Benjamin Block wrote:
> > +#define ptr64(val) ((void __user *)(uintptr_t)(val))
>
> Better to reflect in the name of the macro the special property
> that it is a user pointer. Maybe something like user_ptr(64). The same
> comment for the same macro in b
We need to look for an active PM request until the next softbarrier
instead of looking for the first non-PM request. Otherwise any cause
of request reordering might starve the PM request(s).
Signed-off-by: Christoph Hellwig
---
block/blk-core.c | 35 ++-
1 file c
The Subject prefix for this should be "block:".
> @@ -945,7 +945,7 @@ int submit_bio_wait(struct bio *bio)
> {
> struct submit_bio_ret ret;
>
> - init_completion(&ret.event);
> + init_completion_with_map(&ret.event, &bio->bi_disk->lockdep_map);
FYI, I have an outstanding patch to
On 10/20/2017 08:17 AM, Christoph Hellwig wrote:
> Hi Jens,
>
> below are two regression fixes each for RDMA and FC, and a fix for a SQHD
> update race in the target.
>
> The following changes since commit 639812a1ed9bf49ae2c026086fbf975339cd1eef:
>
> nbd: don't set the device size until we're
Hi Jens,
below are two regression fixes each for RDMA and FC, and a fix for a SQHD
update race in the target.
The following changes since commit 639812a1ed9bf49ae2c026086fbf975339cd1eef:
nbd: don't set the device size until we're connected (2017-10-09 12:29:22 -0600)
are available in the git
On 19/10/17 14:44, Adrian Hunter wrote:
> On 18/10/17 09:16, Adrian Hunter wrote:
>> On 11/10/17 16:58, Ulf Hansson wrote:
>>> On 11 October 2017 at 14:58, Adrian Hunter wrote:
On 11/10/17 15:13, Ulf Hansson wrote:
> On 10 October 2017 at 15:31, Adrian Hunter
> wrote:
>> On 10/1
> Elena Reshetova writes:
> > Elena Reshetova (6):
> > block: convert bio.__bi_cnt from atomic_t to refcount_t
> > block: convert blk_queue_tag.refcnt from atomic_t to refcount_t
> > block: convert blkcg_gq.refcnt from atomic_t to refcount_t
> > block: convert io_context.active_ref from a
Hi Bart,
On Thu, Oct 19, 2017 at 7:47 PM, Bart Van Assche wrote:
> On Wed, 2017-10-18 at 12:22 +0200, Roman Pen wrote:
>> the patch below fixes queue stalling when a shared hctx is marked for
>> restart (BLK_MQ_S_SCHED_RESTART bit set) but q->shared_hctx_restart
>> stays zero. The root cause is that hctx
Elena Reshetova writes:
> Elena Reshetova (6):
> block: convert bio.__bi_cnt from atomic_t to refcount_t
> block: convert blk_queue_tag.refcnt from atomic_t to refcount_t
> block: convert blkcg_gq.refcnt from atomic_t to refcount_t
> block: convert io_context.active_ref from atomic_t to re
atomic_t variables are currently used to implement reference
counters with the following properties:
- counter is initialized to 1 using atomic_set()
- a resource is freed upon counter reaching zero
- once counter reaches zero, its further increments aren't allowed
- counter schema uses basi
Changes in v4:
- Improved commit messages and signoff info.
- Rebase on top of linux-next as of yesterday.
- WARN_ONs are restored since x86 refcount_t does not WARN on zero
Changes in v3:
No changes in patches apart from trivial rebases, but now by
default refcount_t = atomic_t and uses all at