Re: block: optionally snapshot page contents to provide stable pages during write

2013-01-26 Thread Darrick J. Wong
On Sat, Jan 26, 2013 at 01:39:46PM +0100, Sedat Dilek wrote:
> Hi Darrick,
> 
> can you tell me why you do not put your help text where it normally
> belongs ("help" Kconfig item)?

Sure -- the non-ISA bounce pool is only used by a small number of specific
parts of the kernel that require it.  If those parts aren't built, then forcing
it on causes a useless memory pool to be created, wasting memory.  Since kbuild
can figure out when we need it and when we don't, there's no need to present
the user with a config option that they can only use to do the wrong thing.
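
As a sketch, here is the difference between a user-visible option and a
promptless, kbuild-computed symbol (hypothetical names, not from this patch):

# A visible option: it appears in the configurator, so "help" is reachable.
config EXAMPLE_FEATURE
	bool "Enable the example feature"
	help
	  Text shown to the user under the option.

# A promptless symbol: there is no prompt string after 'bool', so the user
# never sees it and kbuild computes the value from 'default'; a "help"
# block would be unreachable, hence the comment above the symbol instead.
config EXAMPLE_INTERNAL
	bool
	default y if EXAMPLE_FEATURE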

--D
> 
> 273 # We also use the bounce pool to provide stable page writes for jbd.  jbd
> 274 # initiates buffer writeback without locking the page or setting PG_writeback,
> 275 # and fixing that behavior (a second time; jbd2 doesn't have this problem) is
> 276 # a major rework effort.  Instead, use the bounce buffer to snapshot pages
> 277 # (until jbd goes away).  The only jbd user is ext3.
> 278 config NEED_BOUNCE_POOL
> 279 bool
> 280 default y if (TILE && USB_OHCI_HCD) || (BLK_DEV_INTEGRITY && JBD)
> 281 help
> 282 line #273..277
> 
> Noticed while hunting a culprit commit in Linux-Next as my
> kernel-config got changed between next-20130123..next-20130124.
> 
> Regards,
> - Sedat -
> 
> [1] 
> http://git.kernel.org/?p=linux/kernel/git/next/linux-next.git;a=commitdiff;h=3f1c22e#patch5


Re: [PATCH 4/6] block: Optionally snapshot page contents to provide stable pages during write

2013-01-21 Thread Jan Kara
On Fri 18-01-13 17:13:01, Darrick J. Wong wrote:
> This provides a band-aid to provide stable page writes on jbd without needing
> to backport the fixed locking and page writeback bit handling schemes of jbd2.
> The band-aid works by using bounce buffers to snapshot page contents instead
> of waiting.
> 
> For those wondering about the ext3 bandage -- fixing the jbd locking (which
> was done as part of ext4dev years ago) is a lot of surgery, and setting
> PG_writeback on data pages when we actually hold the page lock dropped ext3
> performance by nearly an order of magnitude.  If we're going to migrate iscsi
> and raid to use stable page writes, the complaints about high latency will
> likely return.  We might as well centralize their page snapshotting thing to
> one place.
> 
> Tested-by: Andy Lutomirski 
> Signed-off-by: Darrick J. Wong 
  The patch looks good. You can add:
Reviewed-by: Jan Kara 

Honza

> ---
>  arch/tile/Kconfig   |6 --
>  block/blk-core.c|8 +---
>  fs/ext3/super.c |1 +
>  include/uapi/linux/fs.h |3 +++
>  mm/Kconfig  |   13 +
>  mm/bounce.c |   48 +++
>  mm/page-writeback.c |4 
>  7 files changed, 70 insertions(+), 13 deletions(-)
> 
> 
> diff --git a/arch/tile/Kconfig b/arch/tile/Kconfig
> index 875d008..c671fda 100644
> --- a/arch/tile/Kconfig
> +++ b/arch/tile/Kconfig
> @@ -410,12 +410,6 @@ config TILE_USB
> Provides USB host adapter support for the built-in EHCI and OHCI
> interfaces on TILE-Gx chips.
>  
> -# USB OHCI needs the bounce pool since tilegx will often have more
> -# than 4GB of memory, but we don't currently use the IOTLB to present
> -# a 32-bit address to OHCI.  So we need to use a bounce pool instead.
> -config NEED_BOUNCE_POOL
> - def_bool USB_OHCI_HCD
> -
>  source "drivers/pci/hotplug/Kconfig"
>  
>  endmenu
> diff --git a/block/blk-core.c b/block/blk-core.c
> index c973249..277134c 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -1474,6 +1474,11 @@ void blk_queue_bio(struct request_queue *q, struct bio *bio)
>*/
>   blk_queue_bounce(q, &bio);
>  
> + if (bio_integrity_enabled(bio) && bio_integrity_prep(bio)) {
> + bio_endio(bio, -EIO);
> + return;
> + }
> +
>   if (bio->bi_rw & (REQ_FLUSH | REQ_FUA)) {
>   spin_lock_irq(q->queue_lock);
>   where = ELEVATOR_INSERT_FLUSH;
> @@ -1714,9 +1719,6 @@ generic_make_request_checks(struct bio *bio)
>*/
>   blk_partition_remap(bio);
>  
> - if (bio_integrity_enabled(bio) && bio_integrity_prep(bio))
> - goto end_io;
> -
>   if (bio_check_eod(bio, nr_sectors))
>   goto end_io;
>  
> diff --git a/fs/ext3/super.c b/fs/ext3/super.c
> index 6e50223..4ba2683 100644
> --- a/fs/ext3/super.c
> +++ b/fs/ext3/super.c
> @@ -2065,6 +2065,7 @@ static int ext3_fill_super (struct super_block *sb, void *data, int silent)
>   test_opt(sb,DATA_FLAGS) == EXT3_MOUNT_JOURNAL_DATA ? "journal":
>   test_opt(sb,DATA_FLAGS) == EXT3_MOUNT_ORDERED_DATA ? "ordered":
>   "writeback");
> + sb->s_flags |= MS_SNAP_STABLE;
>  
>   return 0;
>  
> diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
> index 780d4c6..c7fc1e6 100644
> --- a/include/uapi/linux/fs.h
> +++ b/include/uapi/linux/fs.h
> @@ -86,6 +86,9 @@ struct inodes_stat_t {
>  #define MS_KERNMOUNT (1<<22) /* this is a kern_mount call */
>  #define MS_I_VERSION (1<<23) /* Update inode I_version field */
>  #define MS_STRICTATIME   (1<<24) /* Always perform atime updates */
> +
> +/* These sb flags are internal to the kernel */
> +#define MS_SNAP_STABLE   (1<<27) /* Snapshot pages during writeback, if needed */
>  #define MS_NOSEC (1<<28)
>  #define MS_BORN  (1<<29)
>  #define MS_ACTIVE(1<<30)
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 278e3ab..7901d83 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -258,6 +258,19 @@ config BOUNCE
>   def_bool y
>   depends on BLOCK && MMU && (ZONE_DMA || HIGHMEM)
>  
> +# On the 'tile' arch, USB OHCI needs the bounce pool since tilegx will often
> +# have more than 4GB of memory, but we don't currently use the IOTLB to present
> +# a 32-bit address to OHCI.  So we need to use a bounce pool instead.
> +#
> +# We also use the bounce pool to provide stable page writes for jbd.  jbd
> +# initiates buffer writeback without locking the page or setting PG_writeback,
> +# and fixing that behavior (a second time; jbd2 doesn't have this problem) is
> +# a major rework effort.  Instead, use the bounce buffer to snapshot pages
> +# (until jbd goes away).  The only jbd user is ext3.
> +config NEED_BOUNCE_POOL
> + bool
> + default y if (TILE && USB_OHCI_HCD) || (BLK_DEV_INTEGRITY && JBD)
> +
>  config NR_QUICK
> 

[PATCH 4/6] block: Optionally snapshot page contents to provide stable pages during write

2013-01-18 Thread Darrick J. Wong
This provides a band-aid to provide stable page writes on jbd without needing
to backport the fixed locking and page writeback bit handling schemes of jbd2.
The band-aid works by using bounce buffers to snapshot page contents instead of
waiting.

For those wondering about the ext3 bandage -- fixing the jbd locking (which was
done as part of ext4dev years ago) is a lot of surgery, and setting
PG_writeback on data pages when we actually hold the page lock dropped ext3
performance by nearly an order of magnitude.  If we're going to migrate iscsi
and raid to use stable page writes, the complaints about high latency will
likely return.  We might as well centralize their page snapshotting thing to
one place.
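
A minimal sketch of the snapshot step (illustrative helper name, not this
patch's actual API -- the real logic lives in __blk_queue_bounce()):

#include <linux/gfp.h>
#include <linux/highmem.h>
#include <linux/mempool.h>

/*
 * Copy a page's contents into a page drawn from the bounce mempool, so
 * that the original page may keep changing while the snapshot is being
 * written out.
 */
static struct page *snapshot_page(mempool_t *pool, struct page *from)
{
	struct page *to = mempool_alloc(pool, GFP_NOIO);

	copy_highpage(to, from);	/* kmap, memcpy, kunmap */
	return to;
}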

Tested-by: Andy Lutomirski 
Signed-off-by: Darrick J. Wong 
---
 arch/tile/Kconfig   |6 --
 block/blk-core.c|8 +---
 fs/ext3/super.c |1 +
 include/uapi/linux/fs.h |3 +++
 mm/Kconfig  |   13 +
 mm/bounce.c |   48 +++
 mm/page-writeback.c |4 
 7 files changed, 70 insertions(+), 13 deletions(-)


diff --git a/arch/tile/Kconfig b/arch/tile/Kconfig
index 875d008..c671fda 100644
--- a/arch/tile/Kconfig
+++ b/arch/tile/Kconfig
@@ -410,12 +410,6 @@ config TILE_USB
  Provides USB host adapter support for the built-in EHCI and OHCI
  interfaces on TILE-Gx chips.
 
-# USB OHCI needs the bounce pool since tilegx will often have more
-# than 4GB of memory, but we don't currently use the IOTLB to present
-# a 32-bit address to OHCI.  So we need to use a bounce pool instead.
-config NEED_BOUNCE_POOL
-   def_bool USB_OHCI_HCD
-
 source "drivers/pci/hotplug/Kconfig"
 
 endmenu
diff --git a/block/blk-core.c b/block/blk-core.c
index c973249..277134c 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1474,6 +1474,11 @@ void blk_queue_bio(struct request_queue *q, struct bio *bio)
 */
	blk_queue_bounce(q, &bio);
 
+   if (bio_integrity_enabled(bio) && bio_integrity_prep(bio)) {
+   bio_endio(bio, -EIO);
+   return;
+   }
+
if (bio->bi_rw & (REQ_FLUSH | REQ_FUA)) {
spin_lock_irq(q->queue_lock);
where = ELEVATOR_INSERT_FLUSH;
@@ -1714,9 +1719,6 @@ generic_make_request_checks(struct bio *bio)
 */
blk_partition_remap(bio);
 
-   if (bio_integrity_enabled(bio) && bio_integrity_prep(bio))
-   goto end_io;
-
if (bio_check_eod(bio, nr_sectors))
goto end_io;
 
diff --git a/fs/ext3/super.c b/fs/ext3/super.c
index 6e50223..4ba2683 100644
--- a/fs/ext3/super.c
+++ b/fs/ext3/super.c
@@ -2065,6 +2065,7 @@ static int ext3_fill_super (struct super_block *sb, void *data, int silent)
test_opt(sb,DATA_FLAGS) == EXT3_MOUNT_JOURNAL_DATA ? "journal":
test_opt(sb,DATA_FLAGS) == EXT3_MOUNT_ORDERED_DATA ? "ordered":
"writeback");
+   sb->s_flags |= MS_SNAP_STABLE;
 
return 0;
 
diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
index 780d4c6..c7fc1e6 100644
--- a/include/uapi/linux/fs.h
+++ b/include/uapi/linux/fs.h
@@ -86,6 +86,9 @@ struct inodes_stat_t {
 #define MS_KERNMOUNT   (1<<22) /* this is a kern_mount call */
 #define MS_I_VERSION   (1<<23) /* Update inode I_version field */
 #define MS_STRICTATIME (1<<24) /* Always perform atime updates */
+
+/* These sb flags are internal to the kernel */
+#define MS_SNAP_STABLE (1<<27) /* Snapshot pages during writeback, if needed */
 #define MS_NOSEC   (1<<28)
 #define MS_BORN(1<<29)
 #define MS_ACTIVE  (1<<30)
diff --git a/mm/Kconfig b/mm/Kconfig
index 278e3ab..7901d83 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -258,6 +258,19 @@ config BOUNCE
def_bool y
depends on BLOCK && MMU && (ZONE_DMA || HIGHMEM)
 
+# On the 'tile' arch, USB OHCI needs the bounce pool since tilegx will often
+# have more than 4GB of memory, but we don't currently use the IOTLB to present
+# a 32-bit address to OHCI.  So we need to use a bounce pool instead.
+#
+# We also use the bounce pool to provide stable page writes for jbd.  jbd
+# initiates buffer writeback without locking the page or setting PG_writeback,
+# and fixing that behavior (a second time; jbd2 doesn't have this problem) is
+# a major rework effort.  Instead, use the bounce buffer to snapshot pages
+# (until jbd goes away).  The only jbd user is ext3.
+config NEED_BOUNCE_POOL
+   bool
+   default y if (TILE && USB_OHCI_HCD) || (BLK_DEV_INTEGRITY && JBD)
+
 config NR_QUICK
int
depends on QUICKLIST
diff --git a/mm/bounce.c b/mm/bounce.c
index 0420867..5f89017 100644
--- a/mm/bounce.c
+++ b/mm/bounce.c
@@ -178,8 +178,45 @@ static void bounce_end_io_read_isa(struct bio *bio, int err)
__bounce_end_io_read(bio, isa_page_pool, err);
 }
 
+#ifdef CONFIG_NEED_BOUNCE_POOL
+static int must_snapshot_stable_pages(struct request_queue *q, 

Re: [PATCH 4/6] block: Optionally snapshot page contents to provide stable pages during write

2013-01-17 Thread Jan Kara
On Wed 16-01-13 19:01:32, Darrick J. Wong wrote:
> > > diff --git a/block/blk-core.c b/block/blk-core.c
> > > index c973249..277134c 100644
> > > --- a/block/blk-core.c
> > > +++ b/block/blk-core.c
> > > @@ -1474,6 +1474,11 @@ void blk_queue_bio(struct request_queue *q, struct bio *bio)
> > >*/
> > >   blk_queue_bounce(q, &bio);
> > >  
> > > + if (bio_integrity_enabled(bio) && bio_integrity_prep(bio)) {
> > > + bio_endio(bio, -EIO);
> > > + return;
> > > + }
> > > +
> > >   if (bio->bi_rw & (REQ_FLUSH | REQ_FUA)) {
> > >   spin_lock_irq(q->queue_lock);
> > >   where = ELEVATOR_INSERT_FLUSH;
> > > @@ -1714,9 +1719,6 @@ generic_make_request_checks(struct bio *bio)
> > >*/
> > >   blk_partition_remap(bio);
> > >  
> > > - if (bio_integrity_enabled(bio) && bio_integrity_prep(bio))
> > > - goto end_io;
> > > -
> >   Umm, why did you move this hunk?
> 
> I moved it so that the integrity data are generated against the contents of
> the bounce buffer because the write paths don't wait for writeback to finish
> if the snapshotting mode is enabled, which means (I think) that programs can
> wander in and scribble on the original page in between bio_integrity_prep and
> blk_queue_bounce.
  Ah, I see. OK.

> > >   if (bio_check_eod(bio, nr_sectors))
> > >   goto end_io;
> > 
> > > diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
> > > index 780d4c6..0144fbb 100644
> > > --- a/include/uapi/linux/fs.h
> > > +++ b/include/uapi/linux/fs.h
> > > @@ -69,6 +69,7 @@ struct inodes_stat_t {
> > >  #define MS_REMOUNT   32  /* Alter flags of a mounted FS */
> > >  #define MS_MANDLOCK  64  /* Allow mandatory locks on an FS */
> > >  #define MS_DIRSYNC   128 /* Directory modifications are synchronous */
> > > +#define MS_SNAP_STABLE   256 /* Snapshot pages during writeback, if needed */
> > >  #define MS_NOATIME   1024 /* Do not update access times. */
> > >  #define MS_NODIRATIME   2048 /* Do not update directory access times */
> > >  #define MS_BIND  4096
> >   Please don't mix MS_SNAP_STABLE flag among flags passed by mount(2)
> > syscall. I think putting it at 1 << 27 might be acceptable. I remember
> > Al Viro saying something along the lines that kernel internal superblock
> > flags should be separated from those passed from userspace into a special
> > superblock entry but that's a different story I guess.
> 
> Ok, I'll change it to 1<<27.  I'll add a comment stating that we're trying to
> keep internal sb flags separate.  It looks like those last four flags are all
> internal?
  Yes. Flags with low numbers are part of kernel ABI...

> > > diff --git a/mm/bounce.c b/mm/bounce.c
> > > index 0420867..a5b30f9 100644
> > > --- a/mm/bounce.c
> > > +++ b/mm/bounce.c
> > > @@ -178,8 +178,44 @@ static void bounce_end_io_read_isa(struct bio *bio, int err)
> > >   __bounce_end_io_read(bio, isa_page_pool, err);
> > >  }
> > >  
> > > +#ifdef CONFIG_NEED_BOUNCE_POOL
> > > +static int must_snapshot_stable_pages(struct bio *bio)
> > > +{
> > > + struct page *page;
> > > + struct backing_dev_info *bdi;
> > > + struct address_space *mapping;
> > > + struct bio_vec *from;
> > > + int i;
> > > +
> > > + if (bio_data_dir(bio) != WRITE)
> > > + return 0;
> > > +
> > > + /*
> > > +  * Based on the first page that has a valid mapping, decide whether or
> > > +  * not we have to employ bounce buffering to guarantee stable pages.
> > > +  */
> > > + bio_for_each_segment(from, bio, i) {
> > > + page = from->bv_page;
> > > + mapping = page_mapping(page);
> > > + if (!mapping)
> > > + continue;
> > > + bdi = mapping->backing_dev_info;
> > > + if (!bdi_cap_stable_pages_required(bdi))
> > > + return 0;
> > > + return mapping->host->i_sb->s_flags & MS_SNAP_STABLE;
> > > + }
> > > +
> > > + return 0;
> > > +}
> >   How about using q->backing_dev_info for the
> > bdi_cap_stable_pages_required() check? It will be a fast path and this check
> > will be faster..
> Ok.
  And maybe I should have told explicitly that then you can move the check
before bio_for_each_segment() loop...
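
Folding both suggestions in, the check might end up looking like this (a
sketch only: the request_queue argument matches the v2 posting above, and
bdi_cap_stable_pages_required() comes from earlier in this series; not
necessarily the final committed code):

static int must_snapshot_stable_pages(struct request_queue *q, struct bio *bio)
{
	struct bio_vec *from;
	int i;

	if (bio_data_dir(bio) != WRITE)
		return 0;

	/* Fast path: test the queue's bdi once, before touching any pages. */
	if (!bdi_cap_stable_pages_required(&q->backing_dev_info))
		return 0;

	/* The first page with a valid mapping decides for the whole bio. */
	bio_for_each_segment(from, bio, i) {
		struct address_space *mapping = page_mapping(from->bv_page);

		if (!mapping)
			continue;
		return mapping->host->i_sb->s_flags & MS_SNAP_STABLE;
	}

	return 0;
}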

Honza
-- 
Jan Kara 
SUSE Labs, CR


Re: [PATCH 4/6] block: Optionally snapshot page contents to provide stable pages during write

2013-01-16 Thread Martin K. Petersen
> "Darrick" == Darrick J Wong  writes:

>> Umm, why did you move this hunk?

Darrick> I moved it so that the integrity data are generated against the
Darrick> contents of the bounce buffer because the write paths don't
Darrick> wait for writeback to finish if the snapshotting mode is
Darrick> enabled, which means (I think) that programs can wander in and
Darrick> scribble on the original page in between bio_integrity_prep and
Darrick> blk_queue_bounce.

It's also important that bio_integrity_prep() is run after partition
remapping has taken place. But it looks like that's still the case after
your change.
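
In call-order terms, the constraint looks like this (reconstructed from the
quoted hunks; a sketch of the submission path, not verbatim blk-core.c):

	generic_make_request_checks(bio)
		blk_partition_remap(bio);	/* remap onto the whole device first */
		bio_check_eod(bio, nr_sectors);
	blk_queue_bio(q, bio)
		blk_queue_bounce(q, &bio);	/* may substitute snapshot pages */
		bio_integrity_prep(bio);	/* checksums the remapped, bounced bio */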

-- 
Martin K. Petersen  Oracle Linux Engineering


Re: [PATCH 4/6] block: Optionally snapshot page contents to provide stable pages during write

2013-01-16 Thread Darrick J. Wong
On Wed, Jan 16, 2013 at 03:00:00AM +0100, Jan Kara wrote:
> On Mon 14-01-13 21:43:05, Darrick J. Wong wrote:
> > This provides a band-aid to provide stable page writes on jbd without
> > needing to backport the fixed locking and page writeback bit handling
> > schemes of jbd2.  The band-aid works by using bounce buffers to snapshot
> > page contents instead of waiting.
> > 
> > For those wondering about the ext3 bandage -- fixing the jbd locking
> > (which was done as part of ext4dev years ago) is a lot of surgery, and
> > setting PG_writeback on data pages when we actually hold the page lock
> > dropped ext3 performance by nearly an order of magnitude.  If we're going
> > to migrate iscsi and raid to use stable page writes, the complaints about
> > high latency will likely return.  We might as well centralize their page
> > snapshotting thing to one place.
> > 
> > Tested-by: Andy Lutomirski 
> > Signed-off-by: Darrick J. Wong 
> > ---
> >  arch/tile/Kconfig   |6 --
> >  block/blk-core.c|8 +---
> >  fs/ext3/super.c |1 +
> >  include/uapi/linux/fs.h |1 +
> >  mm/Kconfig  |   13 +
> >  mm/bounce.c |   47 +++
> >  mm/page-writeback.c |4 
> >  7 files changed, 67 insertions(+), 13 deletions(-)
> > 
> > 
> > diff --git a/arch/tile/Kconfig b/arch/tile/Kconfig
> > index 875d008..c671fda 100644
> > --- a/arch/tile/Kconfig
> > +++ b/arch/tile/Kconfig
> > @@ -410,12 +410,6 @@ config TILE_USB
> >   Provides USB host adapter support for the built-in EHCI and OHCI
> >   interfaces on TILE-Gx chips.
> >  
> > -# USB OHCI needs the bounce pool since tilegx will often have more
> > -# than 4GB of memory, but we don't currently use the IOTLB to present
> > -# a 32-bit address to OHCI.  So we need to use a bounce pool instead.
> > -config NEED_BOUNCE_POOL
> > -   def_bool USB_OHCI_HCD
> > -
> >  source "drivers/pci/hotplug/Kconfig"
> >  
> >  endmenu
> > diff --git a/block/blk-core.c b/block/blk-core.c
> > index c973249..277134c 100644
> > --- a/block/blk-core.c
> > +++ b/block/blk-core.c
> > @@ -1474,6 +1474,11 @@ void blk_queue_bio(struct request_queue *q, struct bio *bio)
> >  */
> > blk_queue_bounce(q, &bio);
> >  
> > +   if (bio_integrity_enabled(bio) && bio_integrity_prep(bio)) {
> > +   bio_endio(bio, -EIO);
> > +   return;
> > +   }
> > +
> > if (bio->bi_rw & (REQ_FLUSH | REQ_FUA)) {
> > spin_lock_irq(q->queue_lock);
> > where = ELEVATOR_INSERT_FLUSH;
> > @@ -1714,9 +1719,6 @@ generic_make_request_checks(struct bio *bio)
> >  */
> > blk_partition_remap(bio);
> >  
> > -   if (bio_integrity_enabled(bio) && bio_integrity_prep(bio))
> > -   goto end_io;
> > -
>   Umm, why did you move this hunk?

I moved it so that the integrity data are generated against the contents of the
bounce buffer because the write paths don't wait for writeback to finish if the
snapshotting mode is enabled, which means (I think) that programs can wander in
and scribble on the original page in between bio_integrity_prep and
blk_queue_bounce.
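
Spelled out, the race that the reordering avoids looks roughly like this (an
illustrative timeline, assuming the snapshot mode where writers are not
blocked on PG_writeback):

	bio_integrity_prep(bio);	/* old order: checksum page P        */
	/* ... userspace scribbles on P ... */
	blk_queue_bounce(q, &bio);	/* snapshot captures the modified P  */
	/* The device now writes the snapshot but verifies checksums taken  */
	/* from the old contents, so the I/O can fail integrity spuriously. */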

> > if (bio_check_eod(bio, nr_sectors))
> > goto end_io;
> 
> > diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
> > index 780d4c6..0144fbb 100644
> > --- a/include/uapi/linux/fs.h
> > +++ b/include/uapi/linux/fs.h
> > @@ -69,6 +69,7 @@ struct inodes_stat_t {
> >  #define MS_REMOUNT 32  /* Alter flags of a mounted FS */
> >  #define MS_MANDLOCK64  /* Allow mandatory locks on an FS */
> >  #define MS_DIRSYNC 128 /* Directory modifications are synchronous */
> > +#define MS_SNAP_STABLE 256 /* Snapshot pages during writeback, if needed */
> >  #define MS_NOATIME 1024 /* Do not update access times. */
> >  #define MS_NODIRATIME  2048 /* Do not update directory access times */
> >  #define MS_BIND 4096
>   Please don't mix MS_SNAP_STABLE flag among flags passed by mount(2)
> syscall. I think putting it at 1 << 27 might be acceptable. I remember
> Al Viro saying something along the lines that kernel internal superblock
> flags should be separated from those passed from userspace into a special
> superblock entry but that's a different story I guess.

Ok, I'll change it to 1<<27.  I'll add a comment stating that we're trying to
keep internal sb flags separate.  It looks like those last four flags are all
internal?

> > diff --git a/mm/Kconfig b/mm/Kconfig
> > index 278e3ab..7901d83 100644
> > --- a/mm/Kconfig
> > +++ b/mm/Kconfig
> > @@ -258,6 +258,19 @@ config BOUNCE
> > def_bool y
> > depends on BLOCK && MMU && (ZONE_DMA || HIGHMEM)
> >  
> > +# On the 'tile' arch, USB OHCI needs the bounce pool since tilegx will often
> > +# have more than 4GB of memory, but we don't currently use the IOTLB to present
> > +# a 32-bit address to OHCI.  So we need to use a bounce 


Re: [PATCH 4/6] block: Optionally snapshot page contents to provide stable pages during write

2013-01-15 Thread Jan Kara
On Mon 14-01-13 21:43:05, Darrick J. Wong wrote:
> This provides a band-aid to provide stable page writes on jbd without needing
> to backport the fixed locking and page writeback bit handling schemes of jbd2.
> The band-aid works by using bounce buffers to snapshot page contents instead
> of waiting.
> 
> For those wondering about the ext3 bandage -- fixing the jbd locking (which
> was done as part of ext4dev years ago) is a lot of surgery, and setting
> PG_writeback on data pages when we actually hold the page lock dropped ext3
> performance by nearly an order of magnitude.  If we're going to migrate iscsi
> and raid to use stable page writes, the complaints about high latency will
> likely return.  We might as well centralize their page snapshotting thing to
> one place.
> 
> Tested-by: Andy Lutomirski 
> Signed-off-by: Darrick J. Wong 
> ---
>  arch/tile/Kconfig   |6 --
>  block/blk-core.c|8 +---
>  fs/ext3/super.c |1 +
>  include/uapi/linux/fs.h |1 +
>  mm/Kconfig  |   13 +
>  mm/bounce.c |   47 +++
>  mm/page-writeback.c |4 
>  7 files changed, 67 insertions(+), 13 deletions(-)
> 
> 
> diff --git a/arch/tile/Kconfig b/arch/tile/Kconfig
> index 875d008..c671fda 100644
> --- a/arch/tile/Kconfig
> +++ b/arch/tile/Kconfig
> @@ -410,12 +410,6 @@ config TILE_USB
> Provides USB host adapter support for the built-in EHCI and OHCI
> interfaces on TILE-Gx chips.
>  
> -# USB OHCI needs the bounce pool since tilegx will often have more
> -# than 4GB of memory, but we don't currently use the IOTLB to present
> -# a 32-bit address to OHCI.  So we need to use a bounce pool instead.
> -config NEED_BOUNCE_POOL
> - def_bool USB_OHCI_HCD
> -
>  source "drivers/pci/hotplug/Kconfig"
>  
>  endmenu
> diff --git a/block/blk-core.c b/block/blk-core.c
> index c973249..277134c 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -1474,6 +1474,11 @@ void blk_queue_bio(struct request_queue *q, struct bio *bio)
>*/
>   blk_queue_bounce(q, &bio);
>  
> + if (bio_integrity_enabled(bio) && bio_integrity_prep(bio)) {
> + bio_endio(bio, -EIO);
> + return;
> + }
> +
>   if (bio->bi_rw & (REQ_FLUSH | REQ_FUA)) {
>   spin_lock_irq(q->queue_lock);
>   where = ELEVATOR_INSERT_FLUSH;
> @@ -1714,9 +1719,6 @@ generic_make_request_checks(struct bio *bio)
>*/
>   blk_partition_remap(bio);
>  
> - if (bio_integrity_enabled(bio) && bio_integrity_prep(bio))
> - goto end_io;
> -
  Umm, why did you move this hunk?

>   if (bio_check_eod(bio, nr_sectors))
>   goto end_io;

> diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
> index 780d4c6..0144fbb 100644
> --- a/include/uapi/linux/fs.h
> +++ b/include/uapi/linux/fs.h
> @@ -69,6 +69,7 @@ struct inodes_stat_t {
>  #define MS_REMOUNT   32  /* Alter flags of a mounted FS */
>  #define MS_MANDLOCK  64  /* Allow mandatory locks on an FS */
>  #define MS_DIRSYNC   128 /* Directory modifications are synchronous */
> +#define MS_SNAP_STABLE   256 /* Snapshot pages during writeback, if needed */
>  #define MS_NOATIME   1024 /* Do not update access times. */
>  #define MS_NODIRATIME   2048 /* Do not update directory access times */
>  #define MS_BIND  4096
  Please don't mix MS_SNAP_STABLE flag among flags passed by mount(2)
syscall. I think putting it at 1 << 27 might be acceptable. I remember
Al Viro saying something along the lines that kernel internal superblock
flags should be separated from those passed from userspace into a special
superblock entry but that's a different story I guess.

> diff --git a/mm/Kconfig b/mm/Kconfig
> index 278e3ab..7901d83 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -258,6 +258,19 @@ config BOUNCE
>   def_bool y
>   depends on BLOCK && MMU && (ZONE_DMA || HIGHMEM)
>  
> +# On the 'tile' arch, USB OHCI needs the bounce pool since tilegx will often
> +# have more than 4GB of memory, but we don't currently use the IOTLB to present
> +# a 32-bit address to OHCI.  So we need to use a bounce pool instead.
> +#
> +# We also use the bounce pool to provide stable page writes for jbd.  jbd
> +# initiates buffer writeback without locking the page or setting PG_writeback,
> +# and fixing that behavior (a second time; jbd2 doesn't have this problem) is
> +# a major rework effort.  Instead, use the bounce buffer to snapshot pages
> +# (until jbd goes away).  The only jbd user is ext3.
> +config NEED_BOUNCE_POOL
> + bool
> + default y if (TILE && USB_OHCI_HCD) || (BLK_DEV_INTEGRITY && JBD)
> +
>  config NR_QUICK
>   int
>   depends on QUICKLIST
> diff --git a/mm/bounce.c b/mm/bounce.c
> index 0420867..a5b30f9 100644
> --- a/mm/bounce.c
> +++ b/mm/bounce.c
> @@ -178,8 +178,44 @@ static void 


[PATCH 4/6] block: Optionally snapshot page contents to provide stable pages during write

2013-01-14 Thread Darrick J. Wong
This provides a band-aid to provide stable page writes on jbd without needing
to backport the fixed locking and page writeback bit handling schemes of jbd2.
The band-aid works by using bounce buffers to snapshot page contents instead of
waiting.

For those wondering about the ext3 bandage -- fixing the jbd locking (which was
done as part of ext4dev years ago) is a lot of surgery, and setting
PG_writeback on data pages when we actually hold the page lock dropped ext3
performance by nearly an order of magnitude.  If we're going to migrate iscsi
and raid to use stable page writes, the complaints about high latency will
likely return.  We might as well centralize their page snapshotting thing to
one place.

Tested-by: Andy Lutomirski 
Signed-off-by: Darrick J. Wong 
---
 arch/tile/Kconfig   |6 --
 block/blk-core.c|8 +---
 fs/ext3/super.c |1 +
 include/uapi/linux/fs.h |1 +
 mm/Kconfig  |   13 +
 mm/bounce.c |   47 +++
 mm/page-writeback.c |4 
 7 files changed, 67 insertions(+), 13 deletions(-)


diff --git a/arch/tile/Kconfig b/arch/tile/Kconfig
index 875d008..c671fda 100644
--- a/arch/tile/Kconfig
+++ b/arch/tile/Kconfig
@@ -410,12 +410,6 @@ config TILE_USB
  Provides USB host adapter support for the built-in EHCI and OHCI
  interfaces on TILE-Gx chips.
 
-# USB OHCI needs the bounce pool since tilegx will often have more
-# than 4GB of memory, but we don't currently use the IOTLB to present
-# a 32-bit address to OHCI.  So we need to use a bounce pool instead.
-config NEED_BOUNCE_POOL
-   def_bool USB_OHCI_HCD
-
 source "drivers/pci/hotplug/Kconfig"
 
 endmenu
diff --git a/block/blk-core.c b/block/blk-core.c
index c973249..277134c 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1474,6 +1474,11 @@ void blk_queue_bio(struct request_queue *q, struct bio *bio)
 */
	blk_queue_bounce(q, &bio);
 
+   if (bio_integrity_enabled(bio) && bio_integrity_prep(bio)) {
+   bio_endio(bio, -EIO);
+   return;
+   }
+
if (bio->bi_rw & (REQ_FLUSH | REQ_FUA)) {
spin_lock_irq(q->queue_lock);
where = ELEVATOR_INSERT_FLUSH;
@@ -1714,9 +1719,6 @@ generic_make_request_checks(struct bio *bio)
 */
blk_partition_remap(bio);
 
-   if (bio_integrity_enabled(bio) && bio_integrity_prep(bio))
-   goto end_io;
-
if (bio_check_eod(bio, nr_sectors))
goto end_io;
 
diff --git a/fs/ext3/super.c b/fs/ext3/super.c
index 6e50223..4ba2683 100644
--- a/fs/ext3/super.c
+++ b/fs/ext3/super.c
@@ -2065,6 +2065,7 @@ static int ext3_fill_super (struct super_block *sb, void *data, int silent)
test_opt(sb,DATA_FLAGS) == EXT3_MOUNT_JOURNAL_DATA ? "journal":
test_opt(sb,DATA_FLAGS) == EXT3_MOUNT_ORDERED_DATA ? "ordered":
"writeback");
+   sb->s_flags |= MS_SNAP_STABLE;
 
return 0;
 
diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
index 780d4c6..0144fbb 100644
--- a/include/uapi/linux/fs.h
+++ b/include/uapi/linux/fs.h
@@ -69,6 +69,7 @@ struct inodes_stat_t {
 #define MS_REMOUNT 32  /* Alter flags of a mounted FS */
 #define MS_MANDLOCK64  /* Allow mandatory locks on an FS */
 #define MS_DIRSYNC 128 /* Directory modifications are synchronous */
+#define MS_SNAP_STABLE 256 /* Snapshot pages during writeback, if needed */
 #define MS_NOATIME 1024/* Do not update access times. */
 #define MS_NODIRATIME  2048/* Do not update directory access times */
 #define MS_BIND4096
diff --git a/mm/Kconfig b/mm/Kconfig
index 278e3ab..7901d83 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -258,6 +258,19 @@ config BOUNCE
def_bool y
depends on BLOCK && MMU && (ZONE_DMA || HIGHMEM)
 
+# On the 'tile' arch, USB OHCI needs the bounce pool since tilegx will often
+# have more than 4GB of memory, but we don't currently use the IOTLB to present
+# a 32-bit address to OHCI.  So we need to use a bounce pool instead.
+#
+# We also use the bounce pool to provide stable page writes for jbd.  jbd
+# initiates buffer writeback without locking the page or setting PG_writeback,
+# and fixing that behavior (a second time; jbd2 doesn't have this problem) is
+# a major rework effort.  Instead, use the bounce buffer to snapshot pages
+# (until jbd goes away).  The only jbd user is ext3.
+config NEED_BOUNCE_POOL
+   bool
+   default y if (TILE && USB_OHCI_HCD) || (BLK_DEV_INTEGRITY && JBD)
+
 config NR_QUICK
int
depends on QUICKLIST
diff --git a/mm/bounce.c b/mm/bounce.c
index 0420867..a5b30f9 100644
--- a/mm/bounce.c
+++ b/mm/bounce.c
@@ -178,8 +178,44 @@ static void bounce_end_io_read_isa(struct bio *bio, int err)
__bounce_end_io_read(bio, isa_page_pool, err);
 }
 
+#ifdef CONFIG_NEED_BOUNCE_POOL
+static int 


Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

2013-01-08 Thread OGAWA Hirofumi
"Darrick J. Wong"  writes:

> On Fri, Dec 28, 2012 at 06:48:35AM +0900, OGAWA Hirofumi wrote:
>> "Darrick J. Wong"  writes:
>> 
>> >> I think this flag should be separated into "FS provides stable pages" and
>> >> "FS needs bounce buffer for stable pages".
>> >> 
>> >> My fs (I guess btrfs does as well) provides stable pages in a better way,
>> >> and doesn't need to wait on writeback flags either. What it needs is just
>> >> a way to avoid all that stable page machinery.
>> >
>> > How does your fs (are we talking about vfat?) provide stable pages?
>> 
>> Basically it copies the data to another page before modifying it, but only if
>> the page has the writeback flag set. (This is a new fs, still in development.)
>
> Aha, we're talking about tux3.  Since (afaik) there aren't any other
> filesystems requesting this flag and tux3 uses the device bdi, should I just
> create a patch and send it to you so it'll be included when tux3 goes in?

If the core doesn't provide it, no need to waste your time. I will.

Thanks.
-- 
OGAWA Hirofumi 


Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

2013-01-07 Thread Darrick J. Wong
On Fri, Dec 28, 2012 at 06:48:35AM +0900, OGAWA Hirofumi wrote:
> "Darrick J. Wong"  writes:
> 
> >> I think this flag should be separated into "FS provides stable pages" and
> >> "FS needs bounce buffer for stable pages".
> >> 
> >> My fs (I guess btrfs does as well) provides stable pages in a better way,
> >> and doesn't need to wait on writeback flags either. What it needs is just
> >> a way to avoid all that stable page machinery.
> >
> > How does your fs (are we talking about vfat?) provide stable pages?
> 
> Basically it copies the data to another page before modifying it, but only if
> the page has the writeback flag set. (This is a new fs, still in development.)

Aha, we're talking about tux3.  Since (afaik) there aren't any other
filesystems requesting this flag and tux3 uses the device bdi, should I just
create a patch and send it to you so it'll be included when tux3 goes in?

--D

> 
> > btrfs creates its own bdi and doesn't set the "stable pages required"
> > flag, so it already skips all the stable page stuff.
> 
> I see.
> -- 
> OGAWA Hirofumi 


Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

2012-12-27 Thread OGAWA Hirofumi
"Darrick J. Wong"  writes:

>> I think this flag should be separated into "FS provides stable pages" and
>> "FS needs bounce buffer for stable pages".
>> 
>> My fs (I guess btrfs does as well) provides stable pages in a better way,
>> and doesn't need to wait on writeback flags either. What it needs is just
>> a way to avoid all that stable page machinery.
>
> How does your fs (are we talking about vfat?) provide stable pages?

Basically it copies the data to another page before modifying it, but only if
the page has the writeback flag set. (This is a new fs, still in development.)
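
For illustration, here is a minimal sketch of that copy-before-modify scheme.
Every name below is hypothetical (this is not tux3's actual code), and the
mapping update is elided:

static struct page *fs_page_for_modify(struct page *page)
{
	struct page *copy;

	/* No I/O in flight -- safe to modify in place. */
	if (!PageWriteback(page))
		return page;

	/* Snapshot the page so the in-flight write stays stable. */
	copy = alloc_page(GFP_NOFS);
	if (!copy)
		return NULL;	/* caller falls back to waiting */
	copy_highpage(copy, page);

	/* ... swap the copy into the mapping and mark it dirty ... */
	return copy;
}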

> btrfs creates its own bdi and doesn't set the "stable pages required"
> flag, so it already skips all the stable page stuff.

I see.
-- 
OGAWA Hirofumi 


Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

2012-12-27 Thread Darrick J. Wong
On Fri, Dec 28, 2012 at 04:14:49AM +0900, OGAWA Hirofumi wrote:
> "Darrick J. Wong"  writes:
> 
> > diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
> > index 780d4c6..0144fbb 100644
> > --- a/include/uapi/linux/fs.h
> > +++ b/include/uapi/linux/fs.h
> > @@ -69,6 +69,7 @@ struct inodes_stat_t {
> >  #define MS_REMOUNT 32  /* Alter flags of a mounted FS */
> >  #define MS_MANDLOCK64  /* Allow mandatory locks on an FS */
> >  #define MS_DIRSYNC 128 /* Directory modifications are synchronous */
> > +#define MS_SNAP_STABLE 256 /* Snapshot pages during writeback, if 
> > needed */
> >  #define MS_NOATIME 1024/* Do not update access times. */
> >  #define MS_NODIRATIME  2048/* Do not update directory access times 
> > */
> >  #define MS_BIND4096
> 
> I think this flag should be separated into "FS provides stable pages" and
> "FS needs bounce buffer for stable pages".
> 
> My fs (I guess btrfs does as well) provides stable pages in a better way,
> and doesn't need to wait on writeback flags either. What it needs is just
> a way to avoid all that stable page machinery.

How does your fs (are we talking about vfat?) provide stable pages?

btrfs creates its own bdi and doesn't set the "stable pages required" flag, so
it already skips all the stable page stuff.

--D
> 
> Thanks.
> -- 
> OGAWA Hirofumi 


Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

2012-12-27 Thread OGAWA Hirofumi
"Darrick J. Wong"  writes:

> diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
> index 780d4c6..0144fbb 100644
> --- a/include/uapi/linux/fs.h
> +++ b/include/uapi/linux/fs.h
> @@ -69,6 +69,7 @@ struct inodes_stat_t {
>  #define MS_REMOUNT   32  /* Alter flags of a mounted FS */
>  #define MS_MANDLOCK  64  /* Allow mandatory locks on an FS */
>  #define MS_DIRSYNC   128 /* Directory modifications are synchronous */
> +#define MS_SNAP_STABLE   256 /* Snapshot pages during writeback, if 
> needed */
>  #define MS_NOATIME   1024/* Do not update access times. */
>  #define MS_NODIRATIME2048/* Do not update directory access times 
> */
>  #define MS_BIND  4096

I think this flag should be separated into "FS provides stable pages" and
"FS needs bounce buffer for stable pages".

My fs (I guess btrfs does as well) provides stable pages in a better way,
and doesn't need to wait on writeback flags either. What it needs is just
a way to avoid all that stable page machinery.
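
A rough sketch of how that split could look: MS_STABLE_PROVIDED is an
invented name and value (not part of the posted series), and the function
body only approximates the shape this series gives wait_for_stable_page():

#define MS_STABLE_PROVIDED	512	/* hypothetical: FS snapshots pages itself */

void wait_for_stable_page(struct page *page)
{
	struct address_space *mapping = page_mapping(page);

	if (!bdi_cap_stable_pages_required(mapping->backing_dev_info))
		return;
	/* Either flag means the page gets snapshotted elsewhere, so the
	 * writer does not have to stall on PG_writeback here. */
	if (mapping->host->i_sb->s_flags &
	    (MS_SNAP_STABLE | MS_STABLE_PROVIDED))
		return;
	wait_on_page_writeback(page);
}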

Thanks.
-- 
OGAWA Hirofumi 


Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

2012-12-17 Thread Darrick J. Wong
On Mon, Dec 17, 2012 at 11:23:59AM +0100, Jan Kara wrote:
> On Thu 13-12-12 00:08:11, Darrick J. Wong wrote:
> > Several complaints have been received regarding long file write latencies 
> > when
> > memory pages must be held stable during writeback.  Since it might not be
> > acceptable to stall programs for the entire duration of a page write (which 
> > may
> > take many milliseconds even on good hardware), enable a second strategy 
> > wherein
> > pages are snapshotted as part of submit_bio; the snapshot can be held stable
> > while writes continue.
> > 
> > This provides a band-aid to provide stable page writes on jbd without 
> > needing
> > to backport the fixed locking scheme in jbd2.  A mount option is added to 
> > ext4
> > to allow administrators to enable it there.
> > 
> > For those wondering about the ext3 bandage -- fixing the jbd locking (which 
> > was
> > done as part of ext4dev years ago) is a lot of surgery, and setting
> > PG_writeback on data pages when we actually hold the page lock dropped ext3
> > performance by nearly an order of magnitude.  If we're going to migrate 
> > iscsi
> > and raid to use stable page writes, the complaints about high latency will
> > likely return.  We might as well centralize their page snapshotting thing to
> > one place.
>   Umm, I kind of like this solution for ext3...

:)

> > diff --git a/mm/Kconfig b/mm/Kconfig
> > index a3f8ddd..78db0e1 100644
> > --- a/mm/Kconfig
> > +++ b/mm/Kconfig
> > @@ -224,6 +224,16 @@ config BOUNCE
> > def_bool y
> > depends on BLOCK && MMU && (ZONE_DMA || HIGHMEM)
> >  
> > +# On the 'tile' arch, USB OHCI needs the bounce pool since tilegx will 
> > often
> > +# have more than 4GB of memory, but we don't currently use the IOTLB to 
> > present
> > +# a 32-bit address to OHCI.  So we need to use a bounce pool instead.
> > +#
> > +# We also use the bounce pool to provide stable page writes for users that
> > +# don't (or can't) afford the wait latency.
> > +config NEED_BOUNCE_POOL
> > +   bool
> > +   default y if (TILE && USB_OHCI_HCD) || (BLK_DEV_INTEGRITY && (EXT3_FS 
> > || EXT4_FS))
> > +
>   This means that NEED_BOUNCE_POOL is going to be enabled on pretty much
> any distro kernel...

I want to drop EXT4_FS from that default line.  Then we only need it in the
case where EXT4_USE_FOR_EXT23=y hasn't taken the place of EXT3_FS=y.

<shrug> Has anyone actually done that?

Heh, even *I* haven't done that.

> >  config NR_QUICK
> > int
> > depends on QUICKLIST
> > diff --git a/mm/bounce.c b/mm/bounce.c
> > index 0420867..fa11935 100644
> > --- a/mm/bounce.c
> > +++ b/mm/bounce.c
> > @@ -178,6 +178,38 @@ static void bounce_end_io_read_isa(struct bio *bio, 
> > int err)
> > __bounce_end_io_read(bio, isa_page_pool, err);
> >  }
> >  
> > +#ifdef CONFIG_NEED_BOUNCE_POOL
> > +static int might_snapshot_stable_page_write(struct bio **bio_orig)
> > +{
> > +   return bio_data_dir(*bio_orig) == WRITE;
> > +}
> ... so might_snapshot_stable_page_write() will be true for each
> write. And thus blk_queue_bounce() becomes considerably more expensive?
> Also calling should_snapshot_stable_pages() for every page seems to be
> stupid since its result is going to be the same for all the pages in the
> bio (well, I could imagine setups where it won't be but do we want to
> support them?).
> 
> So cannot we just make a function like should_snapshot_stable_pages() to
> test whether we really need the bouncing, use it in blk_queue_bounce() and
> then pass the information to __blk_queue_bounce() if needed?

Yes.  I'd actually considered simply calling should_snapshot() on the first
bio_vec page and passing that information through, but thought I should play it
safe for the first revision.

However, a bio targets a single block device, so this seems like a safe
optimization.
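
Concretely, the reworked entry point might look something like this; an
assumed sketch, not the final patch: should_snapshot_stable_pages() and the
pool names come from the posted code, while the extra __blk_queue_bounce()
argument is the part being discussed here:

void blk_queue_bounce(struct request_queue *q, struct bio **bio_orig)
{
	int must_snapshot = 0;
	mempool_t *pool;

	if (!(q->bounce_gfp & GFP_DMA)) {
		/*
		 * One check per bio: a bio targets a single block device,
		 * so the first page's mapping speaks for all of them.
		 */
		if (bio_data_dir(*bio_orig) == WRITE)
			must_snapshot = should_snapshot_stable_pages(
						bio_page(*bio_orig), WRITE);
		if (queue_bounce_pfn(q) >= blk_max_pfn && !must_snapshot)
			return;
		pool = page_pool;
	} else {
		BUG_ON(!isa_page_pool);
		pool = isa_page_pool;
	}

	__blk_queue_bounce(q, bio_orig, pool, must_snapshot);
}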

--D
> 
>   Honza
> 
> > +static int should_snapshot_stable_pages(struct page *page, int rw)
> > +{
> > +   struct backing_dev_info *bdi;
> > +   struct address_space *mapping = page_mapping(page);
> > +
> > +   if (!mapping)
> > +   return 0;
> > +   bdi = mapping->backing_dev_info;
> > +   if (!bdi_cap_stable_pages_required(bdi))
> > +   return 0;
> > +
> > +   return mapping->host->i_sb->s_flags & MS_SNAP_STABLE &&
> > +  rw == WRITE;
> > +}
> > +#else
> > +static int might_snapshot_stable_page_write(struct bio **bio_orig)
> > +{
> > +   return 0;
> > +}
> > +
> > +static int should_snapshot_static_pages(struct page *page, int rw)
> > +{
> > +   return 0;
> > +}
> > +#endif /* CONFIG_NEED_BOUNCE_POOL */
> > +
> >  static void __blk_queue_bounce(struct request_queue *q, struct bio 
> > **bio_orig,
> >mempool_t *pool)
> >  {
> > @@ -192,7 +224,8 @@ static void __blk_queue_bounce(struct request_queue *q, 
> > struct bio **bio_orig,
> > /*
> >  * is destination page below bounce pfn?
> >  */
> > -   if 

Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

2012-12-17 Thread Darrick J. Wong
On Mon, Dec 17, 2012 at 12:13:36AM +0800, Zheng Liu wrote:
> On Thu, Dec 13, 2012 at 12:08:11AM -0800, Darrick J. Wong wrote:
> > diff --git a/mm/bounce.c b/mm/bounce.c
> > index 0420867..fa11935 100644
> > --- a/mm/bounce.c
> > +++ b/mm/bounce.c
> > @@ -178,6 +178,38 @@ static void bounce_end_io_read_isa(struct bio *bio, 
> > int err)
> > __bounce_end_io_read(bio, isa_page_pool, err);
> >  }
> >  
> > +#ifdef CONFIG_NEED_BOUNCE_POOL
> > +static int might_snapshot_stable_page_write(struct bio **bio_orig)
> > +{
> > +   return bio_data_dir(*bio_orig) == WRITE;
> > +}
> > +
> > +static int should_snapshot_stable_pages(struct page *page, int rw)
> > +{
> > +   struct backing_dev_info *bdi;
> > +   struct address_space *mapping = page_mapping(page);
> > +
> > +   if (!mapping)
> > +   return 0;
> > +   bdi = mapping->backing_dev_info;
> > +   if (!bdi_cap_stable_pages_required(bdi))
> > +   return 0;
> > +
> > +   return mapping->host->i_sb->s_flags & MS_SNAP_STABLE &&
> > +  rw == WRITE;
> > +}
> > +#else
> > +static int might_snapshot_stable_page_write(struct bio **bio_orig)
> > +{
> > +   return 0;
> > +}
> > +
> > +static int should_snapshot_static_pages(struct page *page, int rw)
>   ^^
>   It should be _stable_.

Good catch!  Thank you!

--D
> 
> Regards,
> - Zheng
> > +{
> > +   return 0;
> > +}
> > +#endif /* CONFIG_NEED_BOUNCE_POOL */
> > +
> >  static void __blk_queue_bounce(struct request_queue *q, struct bio 
> > **bio_orig,
> >mempool_t *pool)
> >  {
> > @@ -192,7 +224,8 @@ static void __blk_queue_bounce(struct request_queue *q, 
> > struct bio **bio_orig,
> > /*
> >  * is destination page below bounce pfn?
> >  */
> > -   if (page_to_pfn(page) <= queue_bounce_pfn(q))
> > +   if (page_to_pfn(page) <= queue_bounce_pfn(q) &&
> > +   !should_snapshot_stable_pages(page, rw))
> > continue;
> >  
> > /*
> > @@ -284,7 +317,8 @@ void blk_queue_bounce(struct request_queue *q, struct 
> > bio **bio_orig)
> >  * don't waste time iterating over bio segments
> >  */
> > if (!(q->bounce_gfp & GFP_DMA)) {
> > -   if (queue_bounce_pfn(q) >= blk_max_pfn)
> > +   if (queue_bounce_pfn(q) >= blk_max_pfn &&
> > +   !might_snapshot_stable_page_write(bio_orig))
> > return;
> > pool = page_pool;
> > } else {


Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

2012-12-17 Thread Darrick J. Wong
On Fri, Dec 14, 2012 at 06:06:50PM -0800, Andy Lutomirski wrote:
> On Fri, Dec 14, 2012 at 6:01 PM, Darrick J. Wong
>  wrote:
> > On Fri, Dec 14, 2012 at 05:12:37PM -0800, Andy Lutomirski wrote:
> >> It survived.  I hit at least one mm bug, but I really don't think it's
> >> a problem with your code.  (I have not tried this workload on Linux
> >> 3.7 at all before.  It normally runs on 3.5.)  The box in question is
> >
> > Would you mind sending along the bug report so I can make sure?
> 
http://marc.info/?l=linux-mm&m=135553342803210&w=2

Hm, this looks like a hugepages thing, which (afaik) doesn't touch fs code at
all.  Looks like this patchset is in the clear.

> >
> >> ext4 on LVM on dm-crypt on (hardware) RAID 5 on hpsa, which should not
> >> need stable pages.
> >>
> >> The majority of the data written (that wasn't unlinked before it was
> >> dropped from cache) was checksummed when written and verified later.
> >> Most of this data was written using mmap.  This workload hammers the
> >> vm concurrently in several threads, and it frequently stalls when
> >> stable pages are enabled, so it's probably exercising the code
> >> decently well.
> >
> > Did you observe any change in performance?
> 
> No.  But I'm comparing to 3.5 + butchery to remove stable pages.  With
> stable pages on, this workload performs terribly.  (It's a soft
> real-time thing, as you can possibly guess from my domain name, and
> various latency monitoring things go nuts when stable pages are
> active.)

Well, I guess that's good. :)

> Actually, performance appears to be improved, probably due to
> https://lkml.org/lkml/2012/12/14/14, which I tested concurrently.
> 
> >
> >> Feel free to add Tested-by: Andy Lutomirski 
> >
> > Will do!  Thanks for the testing!
> 
> My pleasure.  When these changes go in to an upstream kernel, they'll
> represent a significant reduction in how much our kernel differs from
> kernel.org's :)  Thanks for fixing this.

No problem!

--D


Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

2012-12-17 Thread Jan Kara
On Thu 13-12-12 00:08:11, Darrick J. Wong wrote:
> Several complaints have been received regarding long file write latencies when
> memory pages must be held stable during writeback.  Since it might not be
> acceptable to stall programs for the entire duration of a page write (which 
> may
> take many milliseconds even on good hardware), enable a second strategy 
> wherein
> pages are snapshotted as part of submit_bio; the snapshot can be held stable
> while writes continue.
> 
> This provides a band-aid to provide stable page writes on jbd without needing
> to backport the fixed locking scheme in jbd2.  A mount option is added to ext4
> to allow administrators to enable it there.
> 
> For those wondering about the ext3 bandage -- fixing the jbd locking (which 
> was
> done as part of ext4dev years ago) is a lot of surgery, and setting
> PG_writeback on data pages when we actually hold the page lock dropped ext3
> performance by nearly an order of magnitude.  If we're going to migrate iscsi
> and raid to use stable page writes, the complaints about high latency will
> likely return.  We might as well centralize their page snapshotting thing to
> one place.
  Umm, I kind of like this solution for ext3...

> diff --git a/mm/Kconfig b/mm/Kconfig
> index a3f8ddd..78db0e1 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -224,6 +224,16 @@ config BOUNCE
>   def_bool y
>   depends on BLOCK && MMU && (ZONE_DMA || HIGHMEM)
>  
> +# On the 'tile' arch, USB OHCI needs the bounce pool since tilegx will often
> +# have more than 4GB of memory, but we don't currently use the IOTLB to 
> present
> +# a 32-bit address to OHCI.  So we need to use a bounce pool instead.
> +#
> +# We also use the bounce pool to provide stable page writes for users that
> +# don't (or can't) afford the wait latency.
> +config NEED_BOUNCE_POOL
> + bool
> + default y if (TILE && USB_OHCI_HCD) || (BLK_DEV_INTEGRITY && (EXT3_FS 
> || EXT4_FS))
> +
  This means that NEED_BOUNCE_POOL is going to be enabled on pretty much
any distro kernel...

>  config NR_QUICK
>   int
>   depends on QUICKLIST
> diff --git a/mm/bounce.c b/mm/bounce.c
> index 0420867..fa11935 100644
> --- a/mm/bounce.c
> +++ b/mm/bounce.c
> @@ -178,6 +178,38 @@ static void bounce_end_io_read_isa(struct bio *bio, int 
> err)
>   __bounce_end_io_read(bio, isa_page_pool, err);
>  }
>  
> +#ifdef CONFIG_NEED_BOUNCE_POOL
> +static int might_snapshot_stable_page_write(struct bio **bio_orig)
> +{
> + return bio_data_dir(*bio_orig) == WRITE;
> +}
... so might_snapshot_stable_page_write() will be true for each
write. And thus blk_queue_bounce() becomes considerably more expensive?
Also calling should_snapshot_stable_pages() for every page seems to be
stupid since its result is going to be the same for all the pages in the
bio (well, I could imagine setups where it won't be but do we want to
support them?).

So cannot we just make a function like should_snapshot_stable_pages() to
test whether we really need the bouncing, use it in blk_queue_bounce() and
then pass the information to __blk_queue_bounce() if needed?

Honza

> +static int should_snapshot_stable_pages(struct page *page, int rw)
> +{
> + struct backing_dev_info *bdi;
> + struct address_space *mapping = page_mapping(page);
> +
> + if (!mapping)
> + return 0;
> + bdi = mapping->backing_dev_info;
> + if (!bdi_cap_stable_pages_required(bdi))
> + return 0;
> +
> + return mapping->host->i_sb->s_flags & MS_SNAP_STABLE &&
> +rw == WRITE;
> +}
> +#else
> +static int might_snapshot_stable_page_write(struct bio **bio_orig)
> +{
> + return 0;
> +}
> +
> +static int should_snapshot_static_pages(struct page *page, int rw)
> +{
> + return 0;
> +}
> +#endif /* CONFIG_NEED_BOUNCE_POOL */
> +
>  static void __blk_queue_bounce(struct request_queue *q, struct bio 
> **bio_orig,
>  mempool_t *pool)
>  {
> @@ -192,7 +224,8 @@ static void __blk_queue_bounce(struct request_queue *q, 
> struct bio **bio_orig,
>   /*
>* is destination page below bounce pfn?
>*/
> - if (page_to_pfn(page) <= queue_bounce_pfn(q))
> + if (page_to_pfn(page) <= queue_bounce_pfn(q) &&
> + !should_snapshot_stable_pages(page, rw))
>   continue;
>  
>   /*
> @@ -284,7 +317,8 @@ void blk_queue_bounce(struct request_queue *q, struct bio 
> **bio_orig)
>* don't waste time iterating over bio segments
>*/
>   if (!(q->bounce_gfp & GFP_DMA)) {
> - if (queue_bounce_pfn(q) >= blk_max_pfn)
> + if (queue_bounce_pfn(q) >= blk_max_pfn &&
> + !might_snapshot_stable_page_write(bio_orig))
>   return;
>   pool = page_pool;
>   } else {
> diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> 


Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

2012-12-16 Thread Zheng Liu
On Thu, Dec 13, 2012 at 12:08:11AM -0800, Darrick J. Wong wrote:
> diff --git a/mm/bounce.c b/mm/bounce.c
> index 0420867..fa11935 100644
> --- a/mm/bounce.c
> +++ b/mm/bounce.c
> @@ -178,6 +178,38 @@ static void bounce_end_io_read_isa(struct bio *bio, int 
> err)
>   __bounce_end_io_read(bio, isa_page_pool, err);
>  }
>  
> +#ifdef CONFIG_NEED_BOUNCE_POOL
> +static int might_snapshot_stable_page_write(struct bio **bio_orig)
> +{
> + return bio_data_dir(*bio_orig) == WRITE;
> +}
> +
> +static int should_snapshot_stable_pages(struct page *page, int rw)
> +{
> + struct backing_dev_info *bdi;
> + struct address_space *mapping = page_mapping(page);
> +
> + if (!mapping)
> + return 0;
> + bdi = mapping->backing_dev_info;
> + if (!bdi_cap_stable_pages_required(bdi))
> + return 0;
> +
> + return mapping->host->i_sb->s_flags & MS_SNAP_STABLE &&
> +rw == WRITE;
> +}
> +#else
> +static int might_snapshot_stable_page_write(struct bio **bio_orig)
> +{
> + return 0;
> +}
> +
> +static int should_snapshot_static_pages(struct page *page, int rw)
  ^^
  It should be _stable_.

Regards,
- Zheng
> +{
> + return 0;
> +}
> +#endif /* CONFIG_NEED_BOUNCE_POOL */
> +
>  static void __blk_queue_bounce(struct request_queue *q, struct bio 
> **bio_orig,
>  mempool_t *pool)
>  {
> @@ -192,7 +224,8 @@ static void __blk_queue_bounce(struct request_queue *q, 
> struct bio **bio_orig,
>   /*
>* is destination page below bounce pfn?
>*/
> - if (page_to_pfn(page) <= queue_bounce_pfn(q))
> + if (page_to_pfn(page) <= queue_bounce_pfn(q) &&
> + !should_snapshot_stable_pages(page, rw))
>   continue;
>  
>   /*
> @@ -284,7 +317,8 @@ void blk_queue_bounce(struct request_queue *q, struct bio 
> **bio_orig)
>* don't waste time iterating over bio segments
>*/
>   if (!(q->bounce_gfp & GFP_DMA)) {
> - if (queue_bounce_pfn(q) >= blk_max_pfn)
> + if (queue_bounce_pfn(q) >= blk_max_pfn &&
> + !might_snapshot_stable_page_write(bio_orig))
>   return;
>   pool = page_pool;
>   } else {


Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

2012-12-14 Thread Andy Lutomirski
On Fri, Dec 14, 2012 at 6:01 PM, Darrick J. Wong
 wrote:
> On Fri, Dec 14, 2012 at 05:12:37PM -0800, Andy Lutomirski wrote:
>> It survived.  I hit at least one mm bug, but I really don't think it's
>> a problem with your code.  (I have not tried this workload on Linux
>> 3.7 at all before.  It normally runs on 3.5.)  The box in question is
>
> Would you mind sending along the bug report so I can make sure?

http://marc.info/?l=linux-mm&m=135553342803210&w=2

>
>> ext4 on LVM on dm-crypt on (hardware) RAID 5 on hpsa, which should not
>> need stable pages.
>>
>> The majority of the data written (that wasn't unlinked before it was
>> dropped from cache) was checksummed when written and verified later.
>> Most of this data was written using mmap.  This workload hammers the
>> vm concurrently in several threads, and it frequently stalls when
>> stable pages are enabled, so it's probably exercising the code
>> decently well.
>
> Did you observe any change in performance?

No.  But I'm comparing to 3.5 + butchery to remove stable pages.  With
stable pages on, this workload performs terribly.  (It's a soft
real-time thing, as you can possibly guess from my domain name, and
various latency monitoring things go nuts when stable pages are
active.)

Actually, performance appears to be improved, probably due to
https://lkml.org/lkml/2012/12/14/14, which I tested concurrently.

>
>> Feel free to add Tested-by: Andy Lutomirski 
>
> Will do!  Thanks for the testing!

My pleasure.  When these changes go in to an upstream kernel, they'll
represent a significant reduction in how much our kernel differs from
kernel.org's :)  Thanks for fixing this.

--Andy


Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

2012-12-14 Thread Darrick J. Wong
On Fri, Dec 14, 2012 at 05:12:37PM -0800, Andy Lutomirski wrote:
> On Thu, Dec 13, 2012 at 6:10 PM, Darrick J. Wong
>  wrote:
> > On Thu, Dec 13, 2012 at 05:48:06PM -0800, Andy Lutomirski wrote:
> >> On 12/13/2012 12:08 AM, Darrick J. Wong wrote:
> >> > Several complaints have been received regarding long file write 
> >> > latencies when
> >> > memory pages must be held stable during writeback.  Since it might not be
> >> > acceptable to stall programs for the entire duration of a page write 
> >> > (which may
> >> > take many milliseconds even on good hardware), enable a second strategy 
> >> > wherein
> >> > pages are snapshotted as part of submit_bio; the snapshot can be held 
> >> > stable
> >> > while writes continue.
> >> >
> >> > This provides a band-aid to provide stable page writes on jbd without 
> >> > needing
> >> > to backport the fixed locking scheme in jbd2.  A mount option is added 
> >> > to ext4
> >> > to allow administrators to enable it there.
> >>
> >> I'm a bit confused as to what it has to do with ext3.  Wouldn't this be
> >> useful as a mount option everywhere, though?
> >
> > ext3 requires snapshots; the rest are ok with either strategy.
> >
> > *If* snapshotting is generally liked, then yes I'll go redo it as a vfs 
> > mount
> > option.
> >
> >> If this becomes widely used, would it be better to snapshot on
> >> wait_for_stable_page instead of on io submission?
> >
> > That really depends on how long you can afford to wait and how much free
> > memory you have. :)  It's all a big tradeoff between write latency and
> > consumption of memory pages and bandwidth, and one that I doubt I'm 
> > qualified
> > to make for everyone.
> >
> >> FWIW, I'm about to pound pretty hard on this whole patchset on a box
> >> that doesn't need stable pages.  I'll let you know how it goes.
> >
> > Yay!
> >
> > --D
> 
> It survived.  I hit at least one mm bug, but I really don't think it's
> a problem with your code.  (I have not tried this workload on Linux
> 3.7 at all before.  It normally runs on 3.5.)  The box in question is

Would you mind sending along the bug report so I can make sure?

> ext4 on LVM on dm-crypt on (hardware) RAID 5 on hpsa, which should not
> need stable pages.
> 
> The majority of the data written (that wasn't unlinked before it was
> dropped from cache) was checksummed when written and verified later.
> Most of this data was written using mmap.  This workload hammers the
> vm concurrently in several threads, and it frequently stalls when
> stable pages are enabled, so it's probably exercising the code
> decently well.

Did you observe any change in performance?

> Feel free to add Tested-by: Andy Lutomirski l...@amacapital.net

Will do!  Thanks for the testing!

--D


Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

2012-12-14 Thread Andy Lutomirski
On Thu, Dec 13, 2012 at 6:10 PM, Darrick J. Wong
darrick.w...@oracle.com wrote:
> On Thu, Dec 13, 2012 at 05:48:06PM -0800, Andy Lutomirski wrote:
>> On 12/13/2012 12:08 AM, Darrick J. Wong wrote:
>> > Several complaints have been received regarding long file write latencies 
>> > when
>> > memory pages must be held stable during writeback.  Since it might not be
>> > acceptable to stall programs for the entire duration of a page write 
>> > (which may
>> > take many milliseconds even on good hardware), enable a second strategy 
>> > wherein
>> > pages are snapshotted as part of submit_bio; the snapshot can be held 
>> > stable
>> > while writes continue.
>> >
>> > This provides a band-aid to provide stable page writes on jbd without 
>> > needing
>> > to backport the fixed locking scheme in jbd2.  A mount option is added to 
>> > ext4
>> > to allow administrators to enable it there.
>>
>> I'm a bit confused as to what it has to do with ext3.  Wouldn't this be
>> useful as a mount option everywhere, though?
>
> ext3 requires snapshots; the rest are ok with either strategy.
>
> *If* snapshotting is generally liked, then yes I'll go redo it as a vfs mount
> option.
>
>> If this becomes widely used, would it be better to snapshot on
>> wait_for_stable_page instead of on io submission?
>
> That really depends on how long you can afford to wait and how much free
> memory you have. :)  It's all a big tradeoff between write latency and
> consumption of memory pages and bandwidth, and one that I doubt I'm qualified
> to make for everyone.
>
>> FWIW, I'm about to pound pretty hard on this whole patchset on a box
>> that doesn't need stable pages.  I'll let you know how it goes.
>
> Yay!
>
> --D

It survived.  I hit at least one mm bug, but I really don't think it's
a problem with your code.  (I have not tried this workload on Linux
3.7 at all before.  It normally runs on 3.5.)  The box in question is
ext4 on LVM on dm-crypt on (hardware) RAID 5 on hpsa, which should not
need stable pages.

The majority of the data written (that wasn't unlinked before it was
dropped from cache) was checksummed when written and verified later.
Most of this data was written using mmap.  This workload hammers the
vm concurrently in several threads, and it frequently stalls when
stable pages are enabled, so it's probably exercising the code
decently well.
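
For reference, the shape of such a workload is roughly the following: an
editorial sketch in plain C, not the actual test harness (the scratch file
name and the toy checksum are invented for illustration).

/*
 * Sketch of a write-then-verify mmap workload: dirty a page through a
 * shared mapping, checksum it at write time, force writeback, verify.
 * With stable pages enforced, the msync() is where writers stall.
 */
#include <assert.h>
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SZ 4096

/* Toy additive checksum; a real workload would use something stronger. */
static uint32_t sum(const unsigned char *p, size_t n)
{
	uint32_t s = 0;
	while (n--)
		s += *p++;
	return s;
}

int main(void)
{
	int fd = open("scratch.dat", O_RDWR | O_CREAT | O_TRUNC, 0600);
	assert(fd >= 0);
	assert(ftruncate(fd, SZ) == 0);

	unsigned char *p = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
				MAP_SHARED, fd, 0);
	assert(p != MAP_FAILED);

	memset(p, 0xab, SZ);			/* dirty the page via mmap */
	uint32_t before = sum(p, SZ);

	assert(msync(p, SZ, MS_SYNC) == 0);	/* force writeback */
	assert(sum(p, SZ) == before);		/* verify afterwards */

	munmap(p, SZ);
	close(fd);
	return 0;
}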

Feel free to add Tested-by: Andy Lutomirski l...@amacapital.net

--Andy


Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

2012-12-14 Thread Darrick J. Wong
On Fri, Dec 14, 2012 at 02:33:34PM +1100, Dave Chinner wrote:
> On Thu, Dec 13, 2012 at 06:10:49PM -0800, Darrick J. Wong wrote:
> > On Thu, Dec 13, 2012 at 05:48:06PM -0800, Andy Lutomirski wrote:
> > > On 12/13/2012 12:08 AM, Darrick J. Wong wrote:
> > > > Several complaints have been received regarding long file write 
> > > > latencies when
> > > > memory pages must be held stable during writeback.  Since it might not 
> > > > be
> > > > acceptable to stall programs for the entire duration of a page write 
> > > > (which may
> > > > take many milliseconds even on good hardware), enable a second strategy 
> > > > wherein
> > > > pages are snapshotted as part of submit_bio; the snapshot can be held 
> > > > stable
> > > > while writes continue.
> > > > 
> > > > This provides a band-aid to provide stable page writes on jbd without 
> > > > needing
> > > > to backport the fixed locking scheme in jbd2.  A mount option is added 
> > > > to ext4
> > > > to allow administrators to enable it there.
> > > 
> > > I'm a bit confused as to what it has to do with ext3.  Wouldn't this be
> > > useful as a mount option everywhere, though?
> > 
> > ext3 requires snapshots; the rest are ok with either strategy.
> > 
> > *If* snapshotting is generally liked, then yes I'll go redo it as a vfs 
> > mount
> > option.
> 
> It's copying every single IO, right? If so, then please don't
> propagate any further than is necessary to fix the broken
> filesystems...

Yup.  I wasn't intending this flag for general service, though I /am/ curious
to hear if anyone sees any substantive performance difference with snapshots.
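
A crude way to ballpark that per-page cost from userspace -- an editorial
sketch, not a measurement from this thread -- is to time the extra 4 KiB
memcpy the snapshot strategy adds for every page written:

/*
 * Time N page-sized copies to estimate the extra memory bandwidth the
 * snapshot strategy consumes.  The sink printout keeps the copies from
 * being optimized away.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define PAGE 4096
#define N (256L * 1024)			/* 1 GiB of copies in total */

int main(void)
{
	unsigned char *src = malloc(PAGE), *dst = malloc(PAGE);
	if (!src || !dst)
		return 1;
	memset(src, 1, PAGE);

	struct timespec a, b;
	clock_gettime(CLOCK_MONOTONIC, &a);
	for (long i = 0; i < N; i++) {
		src[i & (PAGE - 1)] ^= 1;	/* vary the source a little */
		memcpy(dst, src, PAGE);
	}
	clock_gettime(CLOCK_MONOTONIC, &b);

	double secs = (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
	printf("%.1f ns per 4 KiB copy (~%.2f GiB/s), sink=%u\n",
	       secs / N * 1e9, (double)N * PAGE / secs / (1 << 30),
	       (unsigned)dst[0]);
	return 0;
}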

--D
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> da...@fromorbit.com


Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

2012-12-13 Thread Dave Chinner
On Thu, Dec 13, 2012 at 06:10:49PM -0800, Darrick J. Wong wrote:
> On Thu, Dec 13, 2012 at 05:48:06PM -0800, Andy Lutomirski wrote:
> > On 12/13/2012 12:08 AM, Darrick J. Wong wrote:
> > > Several complaints have been received regarding long file write latencies 
> > > when
> > > memory pages must be held stable during writeback.  Since it might not be
> > > acceptable to stall programs for the entire duration of a page write 
> > > (which may
> > > take many milliseconds even on good hardware), enable a second strategy 
> > > wherein
> > > pages are snapshotted as part of submit_bio; the snapshot can be held 
> > > stable
> > > while writes continue.
> > > 
> > > This provides a band-aid to provide stable page writes on jbd without 
> > > needing
> > > to backport the fixed locking scheme in jbd2.  A mount option is added to 
> > > ext4
> > > to allow administrators to enable it there.
> > 
> > I'm a bit confused as to what it has to do with ext3.  Wouldn't this be
> > useful as a mount option everywhere, though?
> 
> ext3 requires snapshots; the rest are ok with either strategy.
> 
> *If* snapshotting is generally liked, then yes I'll go redo it as a vfs mount
> option.

It's copying every single IO, right? If so, then please don't
propagate any further than is necessary to fix the broken
filesystems...

Cheers,

Dave.
-- 
Dave Chinner
da...@fromorbit.com


Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

2012-12-13 Thread Darrick J. Wong
On Thu, Dec 13, 2012 at 05:48:06PM -0800, Andy Lutomirski wrote:
> On 12/13/2012 12:08 AM, Darrick J. Wong wrote:
> > Several complaints have been received regarding long file write latencies 
> > when
> > memory pages must be held stable during writeback.  Since it might not be
> > acceptable to stall programs for the entire duration of a page write (which 
> > may
> > take many milliseconds even on good hardware), enable a second strategy 
> > wherein
> > pages are snapshotted as part of submit_bio; the snapshot can be held stable
> > while writes continue.
> > 
> > This provides a band-aid to provide stable page writes on jbd without 
> > needing
> > to backport the fixed locking scheme in jbd2.  A mount option is added to 
> > ext4
> > to allow administrators to enable it there.
> 
> I'm a bit confused as to what it has to do with ext3.  Wouldn't this be
> useful as a mount option everywhere, though?

ext3 requires snapshots; the rest are ok with either strategy.

*If* snapshotting is generally liked, then yes I'll go redo it as a vfs mount
option.

> If this becomes widely used, would it be better to snapshot on
> wait_for_stable_page instead of on io submission?

That really depends on how long you can afford to wait and how much free
memory you have. :)  It's all a big tradeoff between write latency and
consumption of memory pages and bandwidth, and one that I doubt I'm qualified
to make for everyone.
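
To make that tradeoff concrete, the two strategies look roughly like this.
This is an editorial toy model in userspace C, not the kernel code; toy_page
and both functions are invented for illustration.

#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Toy stand-in for struct page: payload plus a writeback flag. */
struct toy_page {
	unsigned char data[PAGE_SIZE];
	bool under_writeback;		/* device may still be reading it */
};

/*
 * Strategy 1, "wait": the would-be writer blocks until the in-flight
 * write completes.  No extra memory, but the writer eats the full
 * device latency -- the stalls complained about in this thread.
 */
void toy_wait_for_stable_page(struct toy_page *p,
			      void (*wait_for_io)(struct toy_page *))
{
	while (p->under_writeback)
		wait_for_io(p);	/* kernel analogue: sleep on PG_writeback */
}

/*
 * Strategy 2, "snapshot": submission copies the page and the device
 * writes the copy, so the writer never blocks.  Costs one page of
 * memory and one memcpy of bandwidth per page written.
 */
struct toy_page *toy_snapshot_for_io(const struct toy_page *p)
{
	struct toy_page *copy = malloc(sizeof(*copy));
	if (copy)
		memcpy(copy->data, p->data, PAGE_SIZE);
	return copy;			/* NULL: fall back to waiting */
}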

> FWIW, I'm about to pound pretty hard on this whole patchset on a box
> that doesn't need stable pages.  I'll let you know how it goes.

Yay!

--D


Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

2012-12-13 Thread Andy Lutomirski
On 12/13/2012 12:08 AM, Darrick J. Wong wrote:
> Several complaints have been received regarding long file write latencies when
> memory pages must be held stable during writeback.  Since it might not be
> acceptable to stall programs for the entire duration of a page write (which 
> may
> take many milliseconds even on good hardware), enable a second strategy 
> wherein
> pages are snapshotted as part of submit_bio; the snapshot can be held stable
> while writes continue.
> 
> This provides a band-aid to provide stable page writes on jbd without needing
> to backport the fixed locking scheme in jbd2.  A mount option is added to ext4
> to allow administrators to enable it there.

I'm a bit confused as to what it has to do with ext3.  Wouldn't this be
useful as a mount option everywhere, though?

If this becomes widely used, would it be better to snapshot on
wait_for_stable_page instead of on io submission?

FWIW, I'm about to pound pretty hard on this whole patchset on a box
that doesn't need stable pages.  I'll let you know how it goes.

--Andy


[PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

2012-12-13 Thread Darrick J. Wong
Several complaints have been received regarding long file write latencies when
memory pages must be held stable during writeback.  Since it might not be
acceptable to stall programs for the entire duration of a page write (which may
take many milliseconds even on good hardware), enable a second strategy wherein
pages are snapshotted as part of submit_bio; the snapshot can be held stable
while writes continue.

This provides a band-aid to provide stable page writes on jbd without needing
to backport the fixed locking scheme in jbd2.  A mount option is added to ext4
to allow administrators to enable it there.

For those wondering about the ext3 bandage -- fixing the jbd locking (which was
done as part of ext4dev years ago) is a lot of surgery, and setting
PG_writeback on data pages when we actually hold the page lock dropped ext3
performance by nearly an order of magnitude.  If we're going to migrate iscsi
and raid to use stable page writes, the complaints about high latency will
likely return.  We might as well centralize this page snapshotting in one
place.

Signed-off-by: Darrick J. Wong darrick.w...@oracle.com
---
 arch/tile/Kconfig   |6 --
 block/blk-core.c|8 +---
 fs/ext3/super.c |1 +
 fs/ext4/super.c |7 ++-
 include/uapi/linux/fs.h |1 +
 mm/Kconfig  |   10 ++
 mm/bounce.c |   38 --
 mm/page-writeback.c |4 
 8 files changed, 63 insertions(+), 12 deletions(-)


diff --git a/arch/tile/Kconfig b/arch/tile/Kconfig
index 875d008..c671fda 100644
--- a/arch/tile/Kconfig
+++ b/arch/tile/Kconfig
@@ -410,12 +410,6 @@ config TILE_USB
  Provides USB host adapter support for the built-in EHCI and OHCI
  interfaces on TILE-Gx chips.
 
-# USB OHCI needs the bounce pool since tilegx will often have more
-# than 4GB of memory, but we don't currently use the IOTLB to present
-# a 32-bit address to OHCI.  So we need to use a bounce pool instead.
-config NEED_BOUNCE_POOL
-   def_bool USB_OHCI_HCD
-
 source "drivers/pci/hotplug/Kconfig"
 
 endmenu
diff --git a/block/blk-core.c b/block/blk-core.c
index 3c95c4d..85e7705 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1433,6 +1433,11 @@ void blk_queue_bio(struct request_queue *q, struct bio *bio)
 */
	blk_queue_bounce(q, &bio);
 
+   if (bio_integrity_enabled(bio) && bio_integrity_prep(bio)) {
+   bio_endio(bio, -EIO);
+   return;
+   }
+
if (bio->bi_rw & (REQ_FLUSH | REQ_FUA)) {
spin_lock_irq(q->queue_lock);
where = ELEVATOR_INSERT_FLUSH;
@@ -1673,9 +1678,6 @@ generic_make_request_checks(struct bio *bio)
 */
blk_partition_remap(bio);
 
-   if (bio_integrity_enabled(bio) && bio_integrity_prep(bio))
-   goto end_io;
-
if (bio_check_eod(bio, nr_sectors))
goto end_io;
 
diff --git a/fs/ext3/super.c b/fs/ext3/super.c
index 5366393..d251793 100644
--- a/fs/ext3/super.c
+++ b/fs/ext3/super.c
@@ -2068,6 +2068,7 @@ static int ext3_fill_super (struct super_block *sb, void *data, int silent)
test_opt(sb,DATA_FLAGS) == EXT3_MOUNT_JOURNAL_DATA ? "journal":
test_opt(sb,DATA_FLAGS) == EXT3_MOUNT_ORDERED_DATA ? "ordered":
"writeback");
+   sb->s_flags |= MS_SNAP_STABLE;
 
return 0;
 
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 80928f7..64d7bc8 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -1222,7 +1222,7 @@ enum {
Opt_inode_readahead_blks, Opt_journal_ioprio,
Opt_dioread_nolock, Opt_dioread_lock,
Opt_discard, Opt_nodiscard, Opt_init_itable, Opt_noinit_itable,
-   Opt_max_dir_size_kb,
+   Opt_max_dir_size_kb, Opt_snap_stable,
 };
 
 static const match_table_t tokens = {
@@ -1297,6 +1297,7 @@ static const match_table_t tokens = {
{Opt_init_itable, "init_itable"},
{Opt_noinit_itable, "noinit_itable"},
{Opt_max_dir_size_kb, "max_dir_size_kb=%u"},
+   {Opt_snap_stable, "snapshot_stable"},
{Opt_removed, "check=none"},/* mount option from ext2/3 */
{Opt_removed, "nocheck"},   /* mount option from ext2/3 */
{Opt_removed, "reservation"},   /* mount option from ext2/3 */
@@ -1478,6 +1479,7 @@ static const struct mount_opts {
{Opt_jqfmt_vfsv0, QFMT_VFS_V0, MOPT_QFMT},
{Opt_jqfmt_vfsv1, QFMT_VFS_V1, MOPT_QFMT},
{Opt_max_dir_size_kb, 0, MOPT_GTE0},
+   {Opt_snap_stable, 0, 0},
{Opt_err, 0, 0}
 };
 
@@ -1549,6 +1551,9 @@ static int handle_mount_opt(struct super_block *sb, char *opt, int token,
return -1;
*journal_ioprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, arg);
return 1;
+   case Opt_snap_stable:
+   sb->s_flags |= MS_SNAP_STABLE;
+   return 1;
}
 
for (m = ext4_mount_opts; m->token != Opt_err; m++) {
diff --git 
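
The diff is truncated here in this archive copy; per the diffstat above, the
include/uapi/linux/fs.h, mm/Kconfig, mm/bounce.c, and mm/page-writeback.c
hunks are missing.  For orientation only, the snapshot half of the bounce
path amounts to something like the sketch below -- an editorial
reconstruction from the patch description, not the posted code:

/*
 * Editorial sketch, NOT the truncated mm/bounce.c hunk: for each segment
 * of a write bio, allocate a bounce page, copy the payload into it, and
 * point the bio at the copy.  The original page can then be redirtied
 * freely while the device writes the stable copy; the bio's completion
 * handler must free the copies back to the pool.
 */
static void snapshot_bio_sketch(struct bio *bio, mempool_t *pool)
{
	struct bio_vec *from;
	int i;

	bio_for_each_segment(from, bio, i) {
		struct page *copy = mempool_alloc(pool, GFP_NOIO);
		void *src = kmap_atomic(from->bv_page);
		void *dst = kmap_atomic(copy);

		memcpy(dst + from->bv_offset, src + from->bv_offset,
		       from->bv_len);
		kunmap_atomic(dst);
		kunmap_atomic(src);
		from->bv_page = copy;	/* device now sees the snapshot */
	}
}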
