On Fri, Apr 22, 2016 at 11:49 PM, Andrew Morton
wrote:
> On Fri, 15 Apr 2016 12:14:31 +0100 Chris Wilson
> wrote:
>
>> > > purge_fragmented_blocks() manages per-cpu lists, so that looks safe
>> > > under its own rcu_read_lock.
>> > >
>> > > Yes, it looks feasible to remove the purge_lock if we
On Fri, Apr 15, 2016 at 1:07 PM, Chris Wilson wrote:
> When mixing lots of vmallocs and set_memory_*() (which calls
> vm_unmap_aliases()) I encountered situations where the performance
> degraded severely due to the walking of the entire vmap_area list each
> invocation. One simple improvement is
On Thu, Apr 14, 2016 at 3:49 PM, Chris Wilson wrote:
> On Thu, Apr 14, 2016 at 03:13:26PM +0200, Roman Peniaev wrote:
>> Hi, Chris.
>>
>> Is it made on purpose not to drop VM_LAZY_FREE flag in
>> __purge_vmap_area_lazy()? With your patch va->flags
>>
Hi, Chris.
Is it made on purpose not to drop VM_LAZY_FREE flag in
__purge_vmap_area_lazy()? With your patch va->flags
will have two bits set: VM_LAZY_FREE | VM_LAZY_FREEING.
Seems it is not that bad, because all other code paths
do not care, but still the change is not clear.
Also, did you
Hi, Chris.
Comment is below.
On Thu, Mar 17, 2016 at 12:59 PM, Chris Wilson wrote:
> vmaps are temporary kernel mappings that may be of long duration.
> Reusing a vmap on an object is preferrable for a driver as the cost of
> setting up the vmap can otherwise dominate the operation on the
On Thu, Mar 17, 2016 at 1:57 PM, Chris Wilson wrote:
> On Thu, Mar 17, 2016 at 01:37:06PM +0100, Roman Peniaev wrote:
>> > + freed = 0;
>> > + blocking_notifier_call_chain(_notify_list, 0, );
>>
>> It seems to me that alloc_vmap_area() was d
On Mon, Feb 8, 2016 at 5:35 PM, Greg Kroah-Hartman
wrote:
> On Mon, Feb 08, 2016 at 11:28:52AM +0100, Roman Peniaev wrote:
>> On Mon, Feb 8, 2016 at 7:38 AM, Greg Kroah-Hartman
>> wrote:
>> > On Thu, Dec 10, 2015 at 01:47:12PM +0100, Roman Pen wrote:
On Mon, Feb 8, 2016 at 7:38 AM, Greg Kroah-Hartman
wrote:
> On Thu, Dec 10, 2015 at 01:47:12PM +0100, Roman Pen wrote:
>> Directory inodes should start off with i_nlink == 2 (for "." entry).
>> Of course the same rule should be applied to automount dentries for
>> child and parent inodes as well.
On Tue, Dec 8, 2015 at 12:49 PM, Greg Kroah-Hartman
wrote:
> On Tue, Dec 08, 2015 at 10:51:03AM +0100, Roman Pen wrote:
>> Hello.
>>
>> Here is an attempt to solve annoying race, which exists between two
>> operations
>> on debugfs entries: write (setting a request) and read (reading a
On Mon, Sep 28, 2015 at 7:27 PM, Jens Axboe wrote:
> On 09/27/2015 02:44 PM, Roman Pen wrote:
>>
>> In case of several stacked block devices, which both were inited by
>> blk_init_queue call, you can catch the queue stuck, if first device
>> in stack makes bio submit being in a flush of a plug
On Wed, Mar 25, 2015 at 7:00 AM, Andrew Morton
wrote:
> On Thu, 19 Mar 2015 23:04:39 +0900 Roman Pen wrote:
>
>> If suitable block can't be found, new block is allocated and put into a head
>> of a free list, so on next iteration this new block will be found first.
>>
>> ...
>>
>> Cc:
On Tue, Mar 17, 2015 at 4:29 PM, Joonsoo Kim wrote:
> On Tue, Mar 17, 2015 at 02:12:14PM +0900, Roman Peniaev wrote:
>> On Tue, Mar 17, 2015 at 1:56 PM, Joonsoo Kim wrote:
>> > On Fri, Mar 13, 2015 at 09:12:55PM +0900, Roman Pen wrote:
>> >> If suitable b
On Tue, Mar 17, 2015 at 1:56 PM, Joonsoo Kim wrote:
> On Fri, Mar 13, 2015 at 09:12:55PM +0900, Roman Pen wrote:
>> If suitable block can't be found, new block is allocated and put into a head
>> of a free list, so on next iteration this new block will be found first.
>>
>> That's bad, because
On Mon, Mar 16, 2015 at 7:49 PM, Roman Peniaev wrote:
> On Mon, Mar 16, 2015 at 7:28 PM, Gioh Kim wrote:
>>
>>
>> 2015-03-13 오후 9:12에 Roman Pen 이(가) 쓴 글:
>>> Hello all.
>>>
>>> Recently I came across high fragmentation of vm_map_ram allocator:
On Mon, Mar 16, 2015 at 7:28 PM, Gioh Kim wrote:
>
>
> 2015-03-13 오후 9:12에 Roman Pen 이(가) 쓴 글:
>> Hello all.
>>
>> Recently I came across high fragmentation of vm_map_ram allocator: vmap_block
>> has free space, but still new blocks continue to appear. Further
>> investigation
>> showed that
On Fri, Jan 23, 2015 at 3:07 AM, Kees Cook wrote:
> On Wed, Jan 21, 2015 at 5:24 PM, Roman Peniaev wrote:
>> On Thu, Jan 22, 2015 at 8:32 AM, Kees Cook wrote:
>>> On Tue, Jan 20, 2015 at 3:04 PM, Russell King - ARM Linux
>>> wrote:
[snip]
>>>
>>
On Thu, Jan 22, 2015 at 8:32 AM, Kees Cook wrote:
> On Tue, Jan 20, 2015 at 3:04 PM, Russell King - ARM Linux
> wrote:
>> On Tue, Jan 20, 2015 at 10:45:19PM +, Russell King - ARM Linux wrote:
>>> Well, the whole question is this: is restarting a system call like
>>> usleep() really a
On Sat, Jan 17, 2015 at 8:54 AM, Kees Cook wrote:
> On Fri, Jan 16, 2015 at 11:57 AM, Kees Cook wrote:
>> On Fri, Jan 16, 2015 at 8:17 AM, Russell King - ARM Linux
>> wrote:
>>> On Sat, Jan 17, 2015 at 01:08:11AM +0900, Roman Peniaev wrote:
>>>> On Sat, Ja
On Sat, Jan 17, 2015 at 12:59 AM, Russell King - ARM Linux
wrote:
> On Sat, Jan 17, 2015 at 12:57:02AM +0900, Roman Peniaev wrote:
>> On Fri, Jan 16, 2015 at 7:54 AM, Kees Cook wrote:
>> > One interesting thing I noticed (which is unchanged by this series),
>> >
On Fri, Jan 16, 2015 at 7:54 AM, Kees Cook wrote:
> On Wed, Jan 14, 2015 at 5:54 PM, Roman Peniaev wrote:
>> On Thu, Jan 15, 2015 at 5:51 AM, Kees Cook wrote:
>>> On Tue, Jan 13, 2015 at 12:35 AM, Roman Peniaev wrote:
>>>> On Tue, Jan 13, 2015 at 3:39 AM, Will De
On Thu, Jan 15, 2015 at 5:51 AM, Kees Cook wrote:
> On Tue, Jan 13, 2015 at 12:35 AM, Roman Peniaev wrote:
>> On Tue, Jan 13, 2015 at 3:39 AM, Will Deacon wrote:
>>> On Sun, Jan 11, 2015 at 02:32:30PM +, Roman Pen wrote:
>>>> thread_info->syscall is used on
On Tue, Jan 13, 2015 at 5:35 PM, Roman Peniaev wrote:
> On Tue, Jan 13, 2015 at 3:39 AM, Will Deacon wrote:
>> On Sun, Jan 11, 2015 at 02:32:30PM +, Roman Pen wrote:
>>> thread_info->syscall is used only for ptrace, but syscall number
>>> is also used
On Wed, Jan 14, 2015 at 5:08 AM, Kees Cook wrote:
> On Sun, Jan 11, 2015 at 6:32 AM, Roman Pen wrote:
>> In previous patch current_thread_info()->syscall is set with
>> corresponding syscall number prior to further calls, thus there
>> is no any need to pass 'scno'.
>>
>> Also, add explicit
On Tue, Jan 13, 2015 at 3:39 AM, Will Deacon wrote:
> On Sun, Jan 11, 2015 at 02:32:30PM +, Roman Pen wrote:
>> thread_info->syscall is used only for ptrace, but syscall number
>> is also used by syscall_get_nr and returned to userspace by the
>> following proc file access:
>>
>> $ cat
On Sat, Sep 20, 2014 at 11:42 PM, Greg Kroah-Hartman
wrote:
> On Sat, Sep 20, 2014 at 10:18:39PM +0900, Roman Peniaev wrote:
>> On Sat, Sep 20, 2014 at 6:42 AM, Greg Kroah-Hartman
>> wrote:
>> > On Fri, Sep 19, 2014 at 09:44:24PM +0900, Roman Pen wrote:
>> >>
Thanks, Oleg, for the review.
--
Roman
On Sat, Sep 20, 2014 at 4:45 AM, Oleg Nesterov wrote:
> On 09/19, Roman Pen wrote:
>>
>> +void wait_for_rootfs(void)
>> +{
>> + /* Here we try to protect from a few things:
>> + * 1. Avoid waiting for ourselves, when init thread has not
>> +
On Sat, Sep 20, 2014 at 6:42 AM, Greg Kroah-Hartman
wrote:
> On Fri, Sep 19, 2014 at 09:44:24PM +0900, Roman Pen wrote:
>> The thing is that built-in modules are being inited before
>> rootfs mount. Some of the modules can request firmware loading
>> from another thread using async
On Fri, Sep 19, 2014 at 2:41 AM, Oleg Nesterov wrote:
> On 09/18, Roman Pen wrote:
>>
>> +void wait_for_rootfs(void)
>> +{
>> + /* Avoid waiting for ourselves */
>> + if (WARN_ON(is_global_init(current)))
>> + return;
>> + else
>> + wait_event(rootfs_waitq,
On Thu, Sep 18, 2014 at 2:46 AM, Oleg Nesterov wrote:
> On 09/17, Roman Pen wrote:
>>
>> +void wait_for_rootfs(void)
>> +{
>> + /* Avoid waiting for ourselves */
>> + if (is_global_init(current))
>> + pr_warn("init: it is not a good idea to wait for the rootfs
>> mount from
On Tue, Sep 16, 2014 at 1:39 AM, Oleg Nesterov wrote:
> On 09/15, Roman Pen wrote:
>>
>> +static DECLARE_COMPLETION(rootfs_mounted);
>> +
>> +void wait_for_rootfs(void)
>> +{
>> + /* Avoid waiting for ourselves */
>> + if (is_global_init(current))
>> + pr_warn("it is not a
On Fri, Mar 14, 2014 at 11:11 PM, Tejun Heo wrote:
> Hello,
>
> On Fri, Mar 14, 2014 at 11:07:04PM +0900, Roman Peniaev wrote:
>> Seems the following message should be better:
>> When data inegrity operation (sync, fsync, fdatasync calls) happens
>> writeback co
On Fri, Mar 14, 2014 at 11:20 PM, Tejun Heo wrote:
> On Fri, Mar 14, 2014 at 11:17:56PM +0900, Roman Peniaev wrote:
>> No, no. Not device does not support flush, filesystem does not care about
>> flush.
>> (take any old school, e.g. ext2)
>>
>> We did
On Fri, Mar 14, 2014 at 11:15 PM, Jan Kara wrote:
> On Fri 14-03-14 10:11:43, Tejun Heo wrote:
>> > Also, could you please help me do understand how can I guarantee
>> > integrity in case of block device with big volatile
>> > cache and filesystem, which does not support REQ_FLUSH/FUA?
>>
>> If a
Hello, Tejun.
On Fri, Mar 14, 2014 at 10:07 PM, Tejun Heo wrote:
> Hello, Andrew.
>
> On Thu, Mar 13, 2014 at 02:34:56PM -0700, Andrew Morton wrote:
>> Jens isn't talking to us. Tejun, are you able explain REQ_SYNC?
>
> It has nothing to do with data integrity. It's just a hint telling
> the
Jens,
could you please explain the real purpose of WAIT_SYNC?
In case of wbc->sync_mode == WB_SYNC_ALL.
Because my current understanding is if writeback control has
WB_SYNC_ALL everything
should be submitted with WAIT_SYNC.
--
Roman
On Wed, Feb 19, 2014 at 10:38 AM, Roman Peniaev wrote:
On Fri, Mar 7, 2014 at 3:44 PM, Brian Norris
wrote:
> On Thu, Jan 02, 2014 at 01:21:21AM +0900, Roman Pen wrote:
>> From: Roman Peniaev
>>
>> mtd_blkdevs is device with volatile cache (writeback buffer), so it should
>> support
>> REQ_FLUSH to do explicit flus
(my previous email was rejected by vger.kernel.org because google web
sent it as html.
will resend the same one in plain text mode)
> What do REQ_SYNC and REQ_NOIDLE actually *do*?
Yep, this REQ_SYNC is very confusing to me.
First of all according to the sources of old school block buffer
Hello, Phillip.
one remark below:
>
> +static int squashfs_read_cache(struct page *target_page, u64 block, int bsize,
> +				int pages, struct page **page)
> +{
> +	struct inode *i = target_page->mapping->host;
> +	struct squashfs_cache_entry *buffer = squashfs_get_datablock(i->i_sb,