On Wed, Oct 16, 2019 at 7:26 AM David Howells wrote:
>
> Btw, is there any point in __wake_up_sync_key() taking a nr_exclusive
> argument since it clears WF_SYNC if nr_exclusive != 1 and doesn't make sense
> to be >1 anyway.
Ack, looks sane to me.
We have _very_ few users of nr_exclusive. I wond
Aside from the two small comments, the pipe side looked reasonable,
but I stopped looking when the patches moved on to the notification
part, and maybe I missed something in the earlier ones too.
Which does bring me to the meat of this email: can we please keep the
pipe cleanups and prepwork (and
On Tue, Oct 15, 2019 at 2:49 PM David Howells wrote:
>
> + if (head - pipe->tail == buffers) {
Can we just have helper inline functions for these things?
You describe them in the commit message of 03/21 (good), but it would
be even better if the code was just self-describin
On Tue, Oct 15, 2019 at 2:48 PM David Howells wrote:
>
> Add a wakeup call for a case whereby the caller already has the waitqueue
> spinlock held.
That naming is crazy.
We already have helper functions like this, and they are just called
"wake_up_locked()".
So the "prelocked" naming is just od
On Tue, Sep 17, 2019 at 5:18 PM Linus Torvalds
wrote:
>
> Honestly, I would much rather have seen this as three completely
> separate pull requests:
There are two hard problems in computer science: cache invalidation,
naming things, and off-by-one errors.
I said "three" and
On Mon, Sep 16, 2019 at 7:52 AM Jens Axboe wrote:
>
> - blk-cgroup writeback fixes (Tejun)
Well, that's a very short description of a pretty subtle series.
Honestly, I would much rather have seen this as three completely
separate pull requests:
- the writeback and cgroup stuff
- the core que
On Fri, Sep 6, 2019 at 10:07 AM Linus Torvalds
wrote:
>
> Hmm. Maybe somebody can come up with a good legacy signaling solution
> (and "just use another pipe for error notification and OOB data" for
> the first one may _work_, but that sounds pretty hacky and just
On Fri, Sep 6, 2019 at 9:12 AM Steven Whitehouse wrote:
>
> The events are generally not independent - we would need ordering either
> implicit in the protocol or explicit in the messages.
Note that pipes certainly would never re-order messages. It's just
that _if_ you have two independent and co
On Fri, Sep 6, 2019 at 8:35 AM Linus Torvalds
wrote:
>
> This is why I like pipes. You can use them today. They are simple, and
> extensible, and you don't need to come up with a new subsystem and
> some untested ad-hoc thing that nobody has actually used.
The only _real_ comp
On Fri, Sep 6, 2019 at 3:09 AM David Howells wrote:
>
> Linus Torvalds wrote:
>
> > But it's *literally* just finding the places that work with
> > pipe->curbuf/nrbufs and making them use atomic updates.
>
> No. It really isn't. That's two variable
On Thu, Sep 5, 2019 at 4:18 PM David Howells wrote:
>
> Can you write into a pipe from softirq context and/or with spinlocks held
> and/or with the RCU read lock held? That is a requirement. Another is that
> messages get inserted whole or not at all (or if they are truncated, the size
> field g
On Thu, Sep 5, 2019 at 2:32 PM David Howells wrote:
>
> (1) /dev/watch_queue just implements locked-in-memory buffers. It gets you
> no events by simply opening it.
Cool. In-memory buffers.
But I know - we *have* one of those. There's already a system call for
it, and has been forever. On
On Thu, Sep 5, 2019 at 11:33 AM Ray Strode wrote:
>
> Hi,
>
> On Thu, Sep 5, 2019 at 1:20 PM Linus Torvalds
> wrote:
> > You've at least now answered part of the "Why", but you didn't
> > actually answer the whole "another developer" p
On Thu, Sep 5, 2019 at 10:01 AM David Howells wrote:
> >
> > I'm just going to be very blunt about this, and say that there is no
> > way I can merge any of this *ever*, unless other people stand up and
> > say that
> >
> > (a) they'll use it
> >
> > and
> >
> > (b) they'll actively develop it a
On Wed, Sep 4, 2019 at 3:15 PM David Howells wrote:
>
>
> Here's a set of patches to add a general notification queue concept and to
> add event sources such as:
Why?
I'm just going to be very blunt about this, and say that there is no
way I can merge any of this *ever*, unless other people stan
On Fri, Jul 26, 2019 at 11:39 AM Jens Axboe wrote:
>
> The fix is sitting in the SCSI tree:
You clearly didn't follow the links.
That's *one* of the fixes.
Not the one that possibly should be fixed in block/blk-settings.c and
that I expected to come through your tree.
Linus
Guys, what happened to the wrong sector boundary and max sector mess?
There are at least two different issues (with one of them having two
proposed fixes);
https://lore.kernel.org/linux-block/1563896932.3609.15.ca...@hansenpartnership.com/
https://lore.kernel.org/lkml/1563895995.3609.10.ca..
On Sat, Jun 8, 2019 at 11:00 PM Jens Axboe wrote:
>
> FWIW, the concept/idea goes back a few months and was discussed with
> the cgroup folks. But I totally agree that the implementation could
> have been cleaner, especially at this point in time.
>
> I'm fine with you reverting those two patches
On Sat, Jun 8, 2019 at 1:21 AM Jens Axboe wrote:
>
> Angelo Ruocco (2):
> cgroup: let a symlink too be created with a cftype file
So I'm not seeing any acks by the cgroup people who actually maintain
that file, and honestly, the patch looks butt-ugly to me.
Why are you adding an odd "write
On Fri, Apr 12, 2019 at 12:04 PM Jens Axboe wrote:
>
> git://git.kernel.dk/linux-block.git for-linus-20190412
That tag does not exist. There's a for-linus-20190407, but not a 20190412..
Pls advise,
Linus
On Sat, Mar 23, 2019 at 8:59 AM Jens Axboe wrote:
>
> The first 5 in this series are heavily inspired by the work Al did on
> the aio side to fix the races there.
Al, mind sending your aio.c side changes too, so that we can close the
loop on this all?
The last series looked good to me, were ther
On Fri, Mar 15, 2019 at 9:27 AM Jens Axboe wrote:
>
> Linus, curious on your opinion on this one. I had this as part of the
> original series, but removed it from the pull request due to the
> mincore etc discussions.
I'd rather not have new ways to leak cache information, and in fact
already the
On Fri, Mar 8, 2019 at 8:39 PM Jens Axboe wrote:
>
> If all else fails, it's also trivial to yank the poll command support for
> now, which would kill the io_req refs.
No, it really wouldn't.
The poll() code thinks it needs the refs, and the poll code is right -
but the refs are just badly done
On Fri, Mar 8, 2019 at 8:25 PM Jens Axboe wrote:
>
> I'll take a look
> at the patches that went into aio and apply the same fix(es) to
> io_uring.
Note that they are not upstream yet, there's a work-in-progress branch
in Al's tree at
git://git.kernel.org/pub/scm/linux/kernel/git
Oh well. I'm obviously not entirely happy about it, but the refcount
issues look fixable.
So I pushed it out. Al, mind taking a look? Jens claimed in his pull
request that you've already looked at the refcounting, so I think the
only issue is whether it shares the bug with fs/aio.c that I think it
On Fri, Mar 8, 2019 at 3:36 PM Linus Torvalds
wrote:
>
> It was bogus garbage in fs/aio.c, and honestly, looking at how much of
> the logic looks suspiciously very similar here, I suspect it's bogus
> garbage here too.
Apart from that "this really looks suspicious"
On Fri, Mar 8, 2019 at 2:55 PM Linus Torvalds
wrote:
>
> I'm going to run the usual build tests, but also look at the basic
> sanity tests and boot and run them just to be careful before actually
> doing that final "ok pushed out".
While waiting for that, I'm
On Wed, Mar 6, 2019 at 12:05 PM Jens Axboe wrote:
>
> Slight mess up in the stats, here's the correct one... Note that this
> also throws a few more merge conflicts now, due to the syscall merges.
> All trivial, though, and the branch was prepared for it in terms of
> numbering.
Ok, merged in my
On Sun, Mar 3, 2019 at 5:53 PM Jens Axboe wrote:
>
> This pull request adds support for a new IO interface, io_uring.
> io_uring allows an application to communicate with the kernel through
> two rings, the submission queue (SQ) and completion queue (CQ) ring.
> This allows for very efficient hand
On Tue, Jan 15, 2019 at 4:28 PM Jens Axboe wrote:
>
> Thanks Ming, I'll queue this up for shipping this week.
Oops. I _just_ applied it to my tree as a follow-up to Steven's
softirq version. I just hadn't had time to build test and push out
yet.
Linus
ld probably be inspected.
In the meantime, the attached is what I committed.
Linus
commit 1ac5cd4978794bd060d448acc0305e9fc996ba92
Author: Linus Torvalds
Date: Wed Jan 2 10:46:03 2019 -0800
block: don't use un-ordered __set_current_state(TASK_UNINTERRUPTIBLE)
This mo
On Wed, Jan 2, 2019 at 9:55 AM Jens Axboe wrote:
>
> A previous commit deemed it safe to use __set_current_state() for IRQ
> driven O_DIRECT, but that's isn't necessarily the case. Be safer and
> only apply that optimization to polled IO, where we know the the task is
> going to find the completio
On Wed, Jan 2, 2019 at 10:07 AM Linus Torvalds
wrote:
>
> Stop this craziness. The optimization is garbage. If you want to save
> 15 cycles, get rid of code, don't add new code in an area where the
> block maintainers have already shown that they can't get it right.
Side
On Sun, Dec 30, 2018 at 11:15 AM Mikulas Patocka wrote:
>
> But you're right that 2TiB devices are common and that perhaps this option
> should go away.
2TiB devices are definitely not common in the one situation where this
option might matter: small embedded devices.
I don't think the cost of 6
On Fri, Dec 28, 2018 at 1:48 PM Linus Torvalds
wrote:
>
> Maybe I'm missing something, but this really looks like a completely
> invalid "optimization" to me. And it's entirely bogus too. If that
> memory barrier matters, you're almost certainly doing someth
On Thu, Dec 20, 2018 at 8:05 PM Jens Axboe wrote:
>
> Jens Axboe (108):
> block: avoid ordered task state change for polled IO
This one seems *very* questionable.
The commit message is misleading:
For the core poll helper, the task state setting don't need to imply any
atomics, as
On Thu, Dec 6, 2018 at 6:12 PM Jens Axboe wrote:
>
>
> Linus, I just know notice you are not on the CC for the discussion about
> the patch. Don't pull this one yet. It'll solve the issue, but it'll also
> mess up the BUSY feedback loop that DM relies on for good merging of
> sequential IO. Testin
On Fri, Nov 9, 2018 at 1:29 PM Jens Axboe wrote:
>
> A select set of fixes that should go into this release. This pull
> request contains:
This is part of a final few "ack" emails, pointing out that there is
now automation in place if you cc lkml in your pull request.
That automation will parse
On Fri, Nov 2, 2018 at 10:08 AM Jens Axboe wrote:
>
> The biggest part of this pull request is the revert of the blkcg cleanup
> series. It had one fix earlier for a stacked device issue, but another
> one was reported. Rather than play whack-a-mole with this, revert the
> entire series and try ag
On Fri, Oct 26, 2018 at 9:00 AM Jens Axboe wrote:
>
> A followup pull request for this merge window.
Pulled,
Linus
On Wed, Aug 22, 2018 at 10:54 AM Jens Axboe wrote:
>
> - Set of bcache fixes and changes (Coly)
Some of those bcache style fixes look questionable.
Maybe we should push back on some of the checkpatch rules instead?
Like having argument names in declarations - sometimes descriptive
names can be
On Mon, Jun 4, 2018 at 5:56 PM Kent Overstreet
wrote:
>
> I like your patch for a less invasive version, but I did finish and test my
> version, which deletes more code :)
I certainly won't object to that.
Your patch looks fine, and looks like the right thing in the long run anyway.
Plus, it's
On Mon, Jun 4, 2018 at 5:42 PM Linus Torvalds
wrote:
>
> How about just the attached?
Note: it probably goes without saying that the patch was entirely
untested, but it does build, and it does get rid of the insane stack
use.
Linus
On Mon, Jun 4, 2018 at 12:04 PM Kent Overstreet
wrote:
>
> However, that's not correct as is because mddev_delayed_put() calls
> kobject_put(), and the kobject isn't initialized when the mddev is first
> allocated, it's initialized when the gendisk is allocated... that isn't hard
> to
> fix but t
On Mon, Jun 4, 2018 at 11:20 AM Tejun Heo wrote:
>
>
> Looking at the code, the fundamental problem seems to be that it's
> weaving different parts of sync and async paths. I don't understand
> why it'd punt the destructin of mddev but destroy biosets
> synchronously. Can't it do sth like the fo
On Mon, Jun 4, 2018 at 8:54 AM Jens Axboe wrote:
>
> On 6/4/18 9:51 AM, Linus Torvalds wrote:
> >
> > Why the hell doesn't it just do bioset_exit() on the originals instead,
> > before freeing the mddev?
>
> CC'ing Neil to get his input on how best to clean
On Sun, Jun 3, 2018 at 5:42 PM Jens Axboe wrote:
>
> drivers/md/md.c | 61 +--
> drivers/md/md.h | 4 +-
So I've pulled this, but I get a new warning:
drivers/md/md.c: In function ‘mddev_put’:
drivers/md/md.c:543:1: warning:
On Tue, May 22, 2018 at 2:09 AM Roman Penyaev <roman.peny...@profitbricks.com> wrote:
> Should I resend current patch with more clear comments about how careful
> caller should be with a leaking pointer?
No. Even if we go your way, there is *one* single user, and that one is
special and needs to
On Mon, May 21, 2018 at 6:51 AM Roman Penyaev <roman.peny...@profitbricks.com> wrote:
> No, I continue from the pointer, which I assigned on the previous IO
> in order to send IO fairly and keep load balanced.
Right. And that's exactly what has both me and Paul nervous. You're no
longer in the R
On Sat, May 19, 2018 at 1:25 PM Roman Penyaev <roman.peny...@profitbricks.com> wrote:
> Another one list_for_each_entry_rcu()-like macro I am aware of is used in
> block/blk-mq-sched.c, is called list_for_each_entry_rcu_rr():
https://elixir.bootlin.com/linux/v4.17-rc5/source/block/blk-mq-sched.
On Sat, May 19, 2018 at 1:21 PM Roman Penyaev <roman.peny...@profitbricks.com> wrote:
> I need -rr behaviour for doing IO load-balancing when I choose next RDMA
> connection from the list in order to send a request, i.e. my code is
> something like the following:
[ incomplete pseudocode ]
> i.e.
On Fri, May 18, 2018 at 6:07 AM Roman Pen wrote:
> Function is going to be used in transport over RDMA module
> in subsequent patches.
Does this really merit its own helper macro in a generic header?
It honestly smells more like "just have an inline helper function that is
specific to rdma" to
On Tue, May 1, 2018 at 3:27 AM Tetsuo Handa <penguin-ker...@i-love.sakura.ne.jp> wrote:
> Can you review this patch? syzbot has hit this bug for nearly 4000 times but
> is still unable to find a reproducer. Therefore, the only way to test would be
> to apply this patch upstream and test whether t
On Mon, Apr 9, 2018 at 3:32 PM, Jens Axboe wrote:
>
> The resulting min/max and friends would have been trivial to test, but
> clearly they weren't.
Well, the min/max macros themselves actually were tested in user space by me.
It was the interaction with the unrelated "min_not_zero()" that wasn'
On Fri, Mar 2, 2018 at 8:57 AM, Linus Torvalds
wrote:
>
> Like the page table caching entries, the memory type range registers
> are really just "secondary information". They don't actually select
> between PCIe and RAM, they just affect the behavior on top of that.
On Fri, Mar 2, 2018 at 8:22 AM, Kani, Toshi wrote:
>
> FWIW, this thing is called MTRRs on x86, which are initialized by BIOS.
No.
Or rather, that's simply just another (small) part of it all - and an
architected and documented one at that.
Like the page table caching entries, the memory type r
On Thu, Mar 1, 2018 at 2:06 PM, Benjamin Herrenschmidt wrote:
>
> Could be that x86 has the smarts to do the right thing, still trying to
> untangle the code :-)
Afaik, x86 will not cache PCI unless the system is misconfigured, and
even then it's more likely to just raise a machine check exceptio
On Mon, Jan 29, 2018 at 12:08 PM, Jens Axboe wrote:
>
> Yes of course, I can switch to using signed tags for pull requests
> for you.
Thanks. I don't actually know the maintainership status of kernel.dk.
You show up as the 'whois' contact, but I'm also assuming it doesn't
have the same kind of f
On Mon, Jan 29, 2018 at 12:17 PM, Jens Axboe wrote:
>
> If you're fine with signed tags on git.kernel.dk, I'd prefer
> continue using that, but just signing the pull requests.
Oh, absolutely. That's fine,
I absolutely am not in any way trying to push for kernel.org repos, I
just want to have mo
On Mon, Jan 29, 2018 at 7:40 AM, Jens Axboe wrote:
>
> This is the main pull request for block IO related changes for the 4.16
> kernel. Nothing major in this pull request, but a good amount of
> improvements and fixes all over the map. This pull request contains:
Pulled.
However, can I request
On Mon, Dec 11, 2017 at 9:20 PM, Byungchul Park wrote:
>
> The *problem* is false positives, since locks and waiters in
> kernel are not classified properly
So the problem is that those false positives apparently end up being a
big deal for the filesystem people.
I personally don't think the cod
On Sun, Dec 10, 2017 at 7:50 PM, Theodore Ts'o wrote:
> CONFIG_LOCKDEP_CROSSRELEASE and CONFIG_LOCKDEP_COMPLETIONS can result
> in a large number of false positives because lockdep doesn't
> understand how to deal with multiple stacked loop or MD devices.
Guys, can we just remove this nasty crud
On Fri, Nov 17, 2017 at 12:18 PM, Paolo Valente
wrote:
>
> Sorry for causing this problem. Yours was our first version, but then
> we feared that leaving useless instructions was worse than adding a
> burst of ifdefs. I'll try not to repeat this mistake.
So I generally do not want people to depe
On Fri, Nov 17, 2017 at 11:29 AM, Linus Torvalds
wrote:
>
> "F*ck no, that code is too ugly, you need to write it so that it can
> be maintained".
Dammit, the rest of the pull looks ok, so I'll take it anyway.
But I really do expect you to
(a) clean up that mess - m
On Fri, Nov 17, 2017 at 8:51 AM, Jens Axboe wrote:
>
> - Small BFQ updates series from Luca and Paolo.
Honestly, this code is too ugly to live.
Seriously. That update should have been rejected on the grounds of
being completely unmaintainable crap.
Why didn't you?
Why are you allowing code lik
On Wed, Sep 27, 2017 at 5:41 AM, Jens Axboe wrote:
>
> So I reworked the series, to include three prep patches that end up
> killing off free_more_memory(). This means that we don't have to do the
> 1024 -> 0 change in there. On top of that, I added a separate bit to
> manage range cyclic vs non r
On Mon, Sep 25, 2017 at 2:17 PM, Chris Mason wrote:
>
> My understanding is that for order-0 page allocations and
> kmem_cache_alloc(buffer_heads), GFP_NOFS is going to either loop forever or
> at the very least OOM kill something before returning NULL?
That should generally be true. We've occasi
On Mon, Sep 25, 2017 at 7:46 AM, Jens Axboe wrote:
>
> Honestly, what I really wanted to do was kill that call completely.
I can understand why you'd want to, but at the same time it is not
entirely clear that is possible.
So the problem is that we need more memory for buffer head
allocations, a
On Sun, Sep 24, 2017 at 5:03 PM, Jens Axboe wrote:
>
> NVMe fixes have sometimes been accepted for the current series, while
> they should have been pushed. I'm not going to argue with that, but I
> think we've managed to make that better the last few series. It's still
> not squeaky clean, but it
On Sun, Sep 24, 2017 at 6:03 AM, Christoph Hellwig wrote:
> On Fri, Sep 22, 2017 at 05:18:49PM -1000, Linus Torvalds wrote:
>> WTF? Why is this so hard? It used to be that IDE drove people crazy.
>> Now it's NVMe and generic block layer stuff.
>
> Can you please expl
On Fri, Sep 22, 2017 at 9:32 AM, Jens Axboe wrote:
> Hi Linus,
>
> A round of fixes for this series. This pull request contains:
No, Jens.
This isn't fixes. This is new code that does new things, or cleanups,
or other random things mixed up with some fixes.
Stop this fuckery already.
The block
On Sat, Sep 9, 2017 at 12:54 PM, Linus Torvalds
wrote:
>
> Other than that, we match.
Oh, and I almost cried when I saw this nasty thing:
ilog2(SZ_4K) - 9
in that nvme code, but I left it alone.
Why the hell people think "SZ_4K" is somehow more legible than 4096, I
h
On Fri, Sep 8, 2017 at 10:34 AM, Jens Axboe wrote:
>
> Single merge conflict here, for nvme/rdma. Trivial to fixup, see my
> for-next branch here:
>
> http://git.kernel.dk/cgit/linux-block/log/?h=for-next
Your earlier merge for mm/page_io.c is ugly, and mixes up the bi_disk
thing oddly in the mid
On Thu, Sep 7, 2017 at 12:27 PM, Jens Axboe wrote:
>
> Which was committed yesterday? It was not from my tree. I try to keep
> an eye out for potential conflicts or issues.
It was from Andrew, so I'm assuming it was in linux-next. Not as a git
tree, but as the usual akpm branch.
I'm not sure why
On Tue, Sep 5, 2017 at 7:31 AM, Jens Axboe wrote:
>
> Note that you'll hit a conflict in block/bio-integrity.c and
> mm/page_io.c, since we had fixes later in the 4.13 series for both of
> those that ended up conflicting with new changes. Both are trivial to
> fix up, I've included my resolution a
On Tue, Jul 11, 2017 at 8:17 PM, Linus Torvalds
wrote:
>
> If that's the case, I'd prefer just turning off the format-truncation
> (but not overflow) warning with '-Wno-format-trunction".
Doing
KBUILD_CFLAGS += $(call cc-disable-warning, format-truncation)
in the
On Tue, Jul 11, 2017 at 8:10 PM, Guenter Roeck wrote:
>
> The hwmon warnings are all about supporting no more than 9,999 sensors
> (applesmc) to 999,999,999 sensors (scpi) of a given type.
Yeah, I think that's enough.
> Easy "fix" would be to replace snprintf() with scnprintf(), presumably
> bec
[ Very random list of maintainers and mailing lists, at least
partially by number of warnings generated by gcc-7.1.1 that is then
correlated with the get_maintainers script ]
So I upgraded one of my boxes to F26, which upgraded the compiler to gcc-7.1.1
Which in turn means that my nice clean allm
On Sun, Jul 2, 2017 at 4:44 PM, Jens Axboe wrote:
>
> This is the main pull request for the block layer for 4.13. Not a huge
> round in terms of features, but there's a lot of churn related to some
> core cleanups. Note that merge request will throw 3 merge failures for
> you. I've included how I
On Fri, Feb 24, 2017 at 9:39 AM, Bart Van Assche
wrote:
>
> So the crash is caused by an attempt to dereference address 0x6b6b6b6b6b6b6b6b
> at offset 0x270. I think this means the crash is caused by a use-after-free.
Yeah, that's POISON_FREE, and that might explain why you see crashes
that other
On Wed, Feb 22, 2017 at 1:50 PM, Markus Trippelsdorf
wrote:
>
> But what about e.g. SATA SSDs? Wouldn't they be better off without any
> scheduler?
> So perhaps setting "none" for queue/rotational==0 and mq-deadline for
> spinning drives automatically in the sq blk-mq case?
Jens already said that
On Wed, Feb 22, 2017 at 10:58 AM, Jens Axboe wrote:
> On 02/22/2017 11:56 AM, Linus Torvalds wrote:
>
> OK, so here's what I'll do:
>
> 1) We'll kill the default scheduler choices. sq blk-mq will default to
> mq-deadline, mq blk-mq will default to "none"
On Wed, Feb 22, 2017 at 10:52 AM, Jens Axboe wrote:
>>
>> It's that simple.
>
> No, it's not that simple at all. Fact is, some optimizations make sense
> for some workloads, and some do not.
Are you even listening?
I'm saying no user can ever give a sane answer to your question. The
question is
On Wed, Feb 22, 2017 at 10:41 AM, Jens Axboe wrote:
>
> The fact is that we have two different sets, until we can yank
> the old ones. So I can't just ask one question, since the sets
> aren't identical.
Bullshit.
I'm saying: rip out the question ENTIRELY. For *both* cases.
If you cannot yours
On Wed, Feb 22, 2017 at 10:26 AM, Linus Torvalds
wrote:
>
> And dammit, IF YOU DON'T EVEN KNOW, WHY THE HELL ARE YOU ASKING THE POOR USER?
Basically, I'm pushing back on config options that I can't personally
even sanely answer.
If it's a config option about "
On Wed, Feb 22, 2017 at 10:14 AM, Jens Axboe wrote:
>
> What do you mean by "the regular IO scheduler"? These are different
> schedulers.
Not to the user they aren't.
If the user already answered once about the IO schedulers, we damn
well shouldn't ask again about another small implementation de
On Tue, Feb 21, 2017 at 3:15 PM, Jens Axboe wrote:
>
> But under a device managed by blk-mq, that device exposes a number of
> hardware queues. For older style devices, that number is typically 1
> (single queue).
... but why would this ever be different from the normal IO scheduler?
IOW, what m
Hmm. The new config options are incomprehensible and their help
messages don't actually help.
So can you fill in the blanks on what
Default single-queue blk-mq I/O scheduler
Default multi-queue blk-mq I/O scheduler
config options mean, and why they default to none?
The config phase of the k
On Sun, Feb 19, 2017 at 4:10 PM, Jens Axboe wrote:
>
> Please pull! Either this pre-merged branch:
>
> git://git.kernel.dk/linux-block.git for-4.11/linus-merge-signed
>
> or
>
> git://git.kernel.dk/linux-block.git for-4.11/block-signed
> git://git.kernel.dk/linux-block.git for-4.11/next-sign
On Sat, Jan 7, 2017 at 6:02 PM, Johannes Weiner wrote:
>
> Linus? Andrew?
Looks fine to me. Will apply.
Linus
On Fri, Dec 23, 2016 at 2:00 AM, Christoph Hellwig wrote:
>
> From: Christoph Hellwig
> Date: Fri, 23 Dec 2016 10:57:06 +0100
> Subject: virtio_blk: avoid DMA to stack for the sense buffer
>
> Most users of BLOCK_PC requests allocate the sense buffer on the stack,
> so to avoid DMA to the stack c
On Wed, Dec 21, 2016 at 9:13 PM, Dave Chinner wrote:
>
> There may be deeper issues. I just started running scalability tests
> (e.g. 16-way fsmark create tests) and about a minute in I got a
> directory corruption reported - something I hadn't seen in the dev
> cycle at all.
By "in the dev cycle
Hi,
On Wed, Dec 21, 2016 at 2:16 PM, Dave Chinner wrote:
> On Fri, Dec 16, 2016 at 10:59:06AM -0800, Chris Leech wrote:
>> Thanks Dave,
>>
>> I'm hitting a bug at scatterlist.h:140 before I even get any iSCSI
>> modules loaded (virtio block) so there's something else going on in the
>> current me
On Fri, Nov 11, 2016 at 4:11 PM, Jens Axboe wrote:
> Hi Linus,
>
> Three small (really, one liners all of them!) fixes that should go into
> this series:
What about the aoeblk one? That seems to have come in with a tester
lately. From your original email:
"I'm wondering if this is bio iteration