On 2019/08/26 1:54, Linus Torvalds wrote:
> On Sat, Aug 24, 2019 at 10:50 PM Tetsuo Handa
> <[email protected]> wrote:
>>
>> @@ -142,7 +144,7 @@ static ssize_t read_mem(struct file *file, char __user *buf,
>>                  sz = size_inside_page(p, count);
>>                  cond_resched();
>>                  err = -EINTR;
>> -                if (fatal_signal_pending(current))
>> +                if (signal_pending(current))
>>                          goto failed;
>>
>>                  err = -EPERM;
>
> So from a "likelihood of breaking" standpoint, I'd really like to make
> sure that the "signal_pending()" checks come at the *end* of the loop.
>
> That way, if somebody is doing a 4-byte read from MMIO, he'll never see
> -EINTR.
>
> I'm specifically thinking of tools like user-space 'lspci' etc, which
> I wouldn't be surprised could happen.
>
> Also, just in case things break, I do agree with Ingo that this should
> be split up into several patches.
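If I read the suggestion correctly, moving the check to the end of the loop body
would look roughly like the following fragment (an untested sketch against the
read_mem() loop quoted above, not the actual patch). Since the check is skipped
once the last chunk has been copied, a single-iteration read such as a 4-byte
MMIO read never sees -EINTR:

	while (count > 0) {
		sz = size_inside_page(p, count);

		/* ... existing -EPERM/-EFAULT handling and copy of sz bytes ... */

		buf += sz;
		p += sz;
		count -= sz;
		read += sz;

		/* Check only between chunks, after at least one chunk succeeded. */
		if (count > 0) {
			cond_resched();
			err = -EINTR;
			if (signal_pending(current))
				goto failed;
		}
	}
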
That said, thinking from how read_mem() returns an error code instead of the
number of bytes already processed, no sane user will try to read that much
memory (like 2GB) in one go. If a userspace program really needed to read that
much, there would already have been attempts to improve its performance; I
guess such a program somehow knows which region to read and reads only the
meaningful pages (which would not amount to hundreds of MB).

Thus, I don't think we want to make /dev/{mem,kmem} interruptible. Just making
them killable, so that an insane userspace program (like a fuzzer) which tries
to read/write that much memory can still be stopped, will be sufficient...
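
If we go that way, one possible shape (just a sketch; the helper name below is
made up for illustration) would be to keep fatal_signal_pending() and factor the
check into a small helper that each of the read/write loops in drivers/char/mem.c
calls at the end of its loop body:

	/* Hypothetical helper, not an existing kernel function. */
	static inline bool mem_rw_should_stop(void)
	{
		/* Give up the CPU between chunks, but bail out only on SIGKILL. */
		cond_resched();
		return fatal_signal_pending(current);
	}

used as

		err = -EINTR;
		if (mem_rw_should_stop())
			goto failed;

That keeps ordinary short reads completely untouched, while a killed fuzzer
stops at the next chunk boundary.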