Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO sg-chaining

2007-08-08 Thread Jens Axboe
On Tue, Aug 07 2007, FUJITA Tomonori wrote:
> On Tue, 7 Aug 2007 08:55:49 +0200
> Jens Axboe <[EMAIL PROTECTED]> wrote:
> 
> > On Mon, Aug 06 2007, FUJITA Tomonori wrote:
> > > On Tue, 31 Jul 2007 23:12:26 +0300
> > > Boaz Harrosh <[EMAIL PROTECTED]> wrote:
> > > 
> > > > The tested Kernels:
> > > > 
> > > > 1. Jens's sglist-arch
> > > >   I was not able to pass all tests with this kernel. For some reason,
> > > >   when commands bigger than 256 pages are queued, the machine runs out
> > > >   of memory and kills the test. After the test is killed, the system
> > > >   is left with 10M of memory and can hardly reboot.
> > > >   I added some prints at the queuecommand entry in scsi_debug.c
> > > >   and I can see that I receive the expected large sg_count and bufflen,
> > > >   but unlike the other tests I get a different pointer from scsi_sglist().
> > > >   In the other tests, since nothing else is happening on this machine
> > > >   during the test, the sglist pointer is always the same: a command comes
> > > >   in, memory is allocated, scsi_debug does nothing, the memory is freed,
> > > >   and the command returns.
> > > >   I suspect an sglist leak or an allocation bug.
> > > 
> > > Ok, I found the leak.
> > > 
> > > 
> > > From 011c05c2e514d1db4834147ed83526473711b0a3 Mon Sep 17 00:00:00 2001
> > > From: FUJITA Tomonori <[EMAIL PROTECTED]>
> > > Date: Mon, 6 Aug 2007 16:16:24 +0900
> > > Subject: [PATCH] fix sg chaining leak
> > > 
> > > Signed-off-by: FUJITA Tomonori <[EMAIL PROTECTED]>
> > > ---
> > >  drivers/scsi/scsi_lib.c |1 -
> > >  1 files changed, 0 insertions(+), 1 deletions(-)
> > > 
> > > diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
> > > index 5884b1b..25988b9 100644
> > > --- a/drivers/scsi/scsi_lib.c
> > > +++ b/drivers/scsi/scsi_lib.c
> > > @@ -48,7 +48,6 @@ static struct scsi_host_sg_pool scsi_sg_pools[] = {
> > >   SP(32),
> > >   SP(64),
> > >   SP(128),
> > > - SP(256),
> > >  };
> > >  #undef SP
> > 
> > Thanks Tomo! Trying to catch up with mails, will apply this one right
> > away.
> 
> You can add the following patch to your sglist branches:
> 
> 
> From abd73c05d5f08ee307776150e1deecac7a709b60 Mon Sep 17 00:00:00 2001
> From: FUJITA Tomonori <[EMAIL PROTECTED]>
> Date: Mon, 30 Jul 2007 23:01:32 +0900
> Subject: [PATCH] zfcp: sg chaining support

Thanks, applied!

-- 
Jens Axboe



Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO sg-chaining

2007-08-07 Thread FUJITA Tomonori
On Tue, 7 Aug 2007 08:55:49 +0200
Jens Axboe <[EMAIL PROTECTED]> wrote:

> On Mon, Aug 06 2007, FUJITA Tomonori wrote:
> > On Tue, 31 Jul 2007 23:12:26 +0300
> > Boaz Harrosh <[EMAIL PROTECTED]> wrote:
> > 
> > > The tested Kernels:
> > > 
> > > 1. Jens's sglist-arch
> > >   I was not able to pass all tests with this kernel. For some reason,
> > >   when commands bigger than 256 pages are queued, the machine runs out
> > >   of memory and kills the test. After the test is killed, the system
> > >   is left with 10M of memory and can hardly reboot.
> > >   I added some prints at the queuecommand entry in scsi_debug.c
> > >   and I can see that I receive the expected large sg_count and bufflen,
> > >   but unlike the other tests I get a different pointer from scsi_sglist().
> > >   In the other tests, since nothing else is happening on this machine
> > >   during the test, the sglist pointer is always the same: a command comes
> > >   in, memory is allocated, scsi_debug does nothing, the memory is freed,
> > >   and the command returns.
> > >   I suspect an sglist leak or an allocation bug.
> > 
> > Ok, I found the leak.
> > 
> > 
> > From 011c05c2e514d1db4834147ed83526473711b0a3 Mon Sep 17 00:00:00 2001
> > From: FUJITA Tomonori <[EMAIL PROTECTED]>
> > Date: Mon, 6 Aug 2007 16:16:24 +0900
> > Subject: [PATCH] fix sg chaining leak
> > 
> > Signed-off-by: FUJITA Tomonori <[EMAIL PROTECTED]>
> > ---
> >  drivers/scsi/scsi_lib.c |1 -
> >  1 files changed, 0 insertions(+), 1 deletions(-)
> > 
> > diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
> > index 5884b1b..25988b9 100644
> > --- a/drivers/scsi/scsi_lib.c
> > +++ b/drivers/scsi/scsi_lib.c
> > @@ -48,7 +48,6 @@ static struct scsi_host_sg_pool scsi_sg_pools[] = {
> > SP(32),
> > SP(64),
> > SP(128),
> > -   SP(256),
> >  };
> >  #undef SP
> 
> Thanks Tomo! Trying to catch up with mails, will apply this one right
> away.

You can add the following patch to your sglist branches:


From abd73c05d5f08ee307776150e1deecac7a709b60 Mon Sep 17 00:00:00 2001
From: FUJITA Tomonori <[EMAIL PROTECTED]>
Date: Mon, 30 Jul 2007 23:01:32 +0900
Subject: [PATCH] zfcp: sg chaining support

Signed-off-by: FUJITA Tomonori <[EMAIL PROTECTED]>
---
 drivers/s390/scsi/zfcp_def.h  |1 +
 drivers/s390/scsi/zfcp_qdio.c |6 ++
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/s390/scsi/zfcp_def.h b/drivers/s390/scsi/zfcp_def.h
index b36dfc4..0d80150 100644
--- a/drivers/s390/scsi/zfcp_def.h
+++ b/drivers/s390/scsi/zfcp_def.h
@@ -34,6 +34,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
diff --git a/drivers/s390/scsi/zfcp_qdio.c b/drivers/s390/scsi/zfcp_qdio.c
index 81daa82..60bc269 100644
--- a/drivers/s390/scsi/zfcp_qdio.c
+++ b/drivers/s390/scsi/zfcp_qdio.c
@@ -591,7 +591,7 @@ zfcp_qdio_sbals_from_segment(struct zfcp_fsf_req *fsf_req, unsigned long sbtype,
  */
 int
 zfcp_qdio_sbals_from_sg(struct zfcp_fsf_req *fsf_req, unsigned long sbtype,
-struct scatterlist *sg,int sg_count, int max_sbals)
+struct scatterlist *sgl, int sg_count, int max_sbals)
 {
int sg_index;
struct scatterlist *sg_segment;
@@ -607,9 +607,7 @@ zfcp_qdio_sbals_from_sg(struct zfcp_fsf_req *fsf_req, unsigned long sbtype,
sbale->flags |= sbtype;
 
/* process all segements of scatter-gather list */
-   for (sg_index = 0, sg_segment = sg, bytes = 0;
-sg_index < sg_count;
-sg_index++, sg_segment++) {
+   for_each_sg(sgl, sg_segment, sg_count, sg_index) {
retval = zfcp_qdio_sbals_from_segment(
fsf_req,
sbtype,
-- 
1.5.2.4
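
For context: once scatterlists can be chained, the last entry of a chunk
is a link to the next chunk rather than a data segment, so walking the
list with plain pointer arithmetic (sg_segment++) is no longer safe;
sg_next(), which for_each_sg() uses internally, follows the chain link.
A minimal, illustrative sketch of the pattern (the helper below is made
up for this note, it is not zfcp code):

#include <linux/scatterlist.h>

/* Walk a possibly chained scatterlist and sum the segment lengths. */
static unsigned int sum_sg_bytes(struct scatterlist *sgl, int sg_count)
{
	struct scatterlist *sg;
	unsigned int bytes = 0;
	int i;

	for_each_sg(sgl, sg, sg_count, i)
		bytes += sg->length;

	return bytes;
}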







Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO sg-chaining

2007-08-07 Thread Jens Axboe
On Mon, Aug 06 2007, FUJITA Tomonori wrote:
> On Tue, 31 Jul 2007 23:12:26 +0300
> Boaz Harrosh <[EMAIL PROTECTED]> wrote:
> 
> > The tested Kernels:
> > 
> > 1. Jens's sglist-arch
> >   I was not able to pass all tests with this kernel. For some reason,
> >   when commands bigger than 256 pages are queued, the machine runs out
> >   of memory and kills the test. After the test is killed, the system
> >   is left with 10M of memory and can hardly reboot.
> >   I added some prints at the queuecommand entry in scsi_debug.c
> >   and I can see that I receive the expected large sg_count and bufflen,
> >   but unlike the other tests I get a different pointer from scsi_sglist().
> >   In the other tests, since nothing else is happening on this machine
> >   during the test, the sglist pointer is always the same: a command comes
> >   in, memory is allocated, scsi_debug does nothing, the memory is freed,
> >   and the command returns.
> >   I suspect an sglist leak or an allocation bug.
> 
> Ok, I found the leak.
> 
> 
> From 011c05c2e514d1db4834147ed83526473711b0a3 Mon Sep 17 00:00:00 2001
> From: FUJITA Tomonori <[EMAIL PROTECTED]>
> Date: Mon, 6 Aug 2007 16:16:24 +0900
> Subject: [PATCH] fix sg chaining leak
> 
> Signed-off-by: FUJITA Tomonori <[EMAIL PROTECTED]>
> ---
>  drivers/scsi/scsi_lib.c |1 -
>  1 files changed, 0 insertions(+), 1 deletions(-)
> 
> diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
> index 5884b1b..25988b9 100644
> --- a/drivers/scsi/scsi_lib.c
> +++ b/drivers/scsi/scsi_lib.c
> @@ -48,7 +48,6 @@ static struct scsi_host_sg_pool scsi_sg_pools[] = {
>   SP(32),
>   SP(64),
>   SP(128),
> - SP(256),
>  };
>  #undef SP

Thanks Tomo! Trying to catch up with mails, will apply this one right
away.

-- 
Jens Axboe



Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO sg-chaining

2007-08-06 Thread FUJITA Tomonori
On Tue, 31 Jul 2007 23:12:26 +0300
Boaz Harrosh <[EMAIL PROTECTED]> wrote:

> The tested Kernels:
> 
> 1. Jens's sglist-arch
>   I was not able to pass all tests with this kernel. For some reason,
>   when commands bigger than 256 pages are queued, the machine runs out
>   of memory and kills the test. After the test is killed, the system
>   is left with 10M of memory and can hardly reboot.
>   I added some prints at the queuecommand entry in scsi_debug.c
>   and I can see that I receive the expected large sg_count and bufflen,
>   but unlike the other tests I get a different pointer from scsi_sglist().
>   In the other tests, since nothing else is happening on this machine
>   during the test, the sglist pointer is always the same: a command comes
>   in, memory is allocated, scsi_debug does nothing, the memory is freed,
>   and the command returns.
>   I suspect an sglist leak or an allocation bug.

Ok, I found the leak.


From 011c05c2e514d1db4834147ed83526473711b0a3 Mon Sep 17 00:00:00 2001
From: FUJITA Tomonori <[EMAIL PROTECTED]>
Date: Mon, 6 Aug 2007 16:16:24 +0900
Subject: [PATCH] fix sg chaining leak

Signed-off-by: FUJITA Tomonori <[EMAIL PROTECTED]>
---
 drivers/scsi/scsi_lib.c |1 -
 1 files changed, 0 insertions(+), 1 deletions(-)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 5884b1b..25988b9 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -48,7 +48,6 @@ static struct scsi_host_sg_pool scsi_sg_pools[] = {
SP(32),
SP(64),
SP(128),
-   SP(256),
 };
 #undef SP
 
-- 
1.5.2.4
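
For context: scsi_lib hands out scatterlists from a handful of mempools
sized in powers of two (the SP() entries above) and picks the smallest
pool that fits a command's segment count. With the chaining patches each
chunk of a chained list is presumably capped at the largest pool size,
which is why the table can end at SP(128). A rough, illustrative lookup
(not the actual scsi_lib code):

/* Map a segment count to one of the sized pools declared with SP()
 * above: 8, 16, 32, 64 or 128 entries. */
static unsigned int sg_pool_index(unsigned int nents)
{
	if (nents <= 8)
		return 0;
	if (nents <= 16)
		return 1;
	if (nents <= 32)
		return 2;
	if (nents <= 64)
		return 3;
	return 4;	/* 65..128 entries: the largest pool */
}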




Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO sg-chaining

2007-08-05 Thread FUJITA Tomonori
From: Boaz Harrosh <[EMAIL PROTECTED]>
Subject: Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO 
sg-chaining
Date: Tue, 31 Jul 2007 23:12:26 +0300

> Boaz Harrosh wrote:
> > FUJITA Tomonori wrote:
> >> From: Benny Halevy <[EMAIL PROTECTED]>
> >> Subject: Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and 
> >> Large IO sg-chaining
> >> Date: Wed, 25 Jul 2007 11:26:44 +0300
> >>
> >>>> However, I'm perfectly happy to go with whatever the empirical evidence
> >>>> says is best .. and hopefully, now we don't have to pick this once and
> >>>> for all time ... we can alter it if whatever is chosen proves to be
> >>>> suboptimal.
> >>> I agree.  This isn't a catholic marriage :)
> >>> We'll run some performance experiments comparing the sgtable chaining
> >>> implementation vs. a scsi_data_buff implementation pointing
> >>> at a possibly chained sglist and let's see if we can measure
> >>> any difference.  We'll send results as soon as we have them.
> >> I did some tests with your sgtable patchset and the approach to use
> >> separate buffer for sglists. As expected, there was no performance
> >> difference with small I/Os. I've not tried very large I/Os, which
> >> might give some difference.
> >>
> > 
> > Next week I will try to mount lots of scsi_debug devices and
> > use large parallel IO to try and find a difference. I will
> > test Jens's sglist-arch tree against above sglist-arch+scsi_sgtable
> > 
> 
> 
> I was able to run some tests; here are my results.
> 
> The results:
> PPT - is Pages Per Transfer (sg_count)
> 
> The numbers are accumulated time of 20 transfers of 32GB each,
> and the average of 4 such runs. (Lower time is better)
> Transfers are sg_dd into scsi_debug
> 
> Kernel         | total time 128-PPT | total time 2048-PPT
> ---------------|--------------------|---------------------
> sglist-arch    |       47.26        | Test Failed
> scsi_data_buff |       41.68        |       35.05
> scsi_sgtable   |       42.42        |       36.45
> 
> 
> The test:
> 1. scsi_debug
>   I loaded the scsi_debug module, which was converted and fixed for
>   chaining, with the following options:
>   $ modprobe scsi_debug virtual_gb=32 delay=0 dev_size_mb=32 fake_rw=1
>   
>   32 GB of virtual drive on 32M of memory with 0 delay
>   and read/write do nothing with the fake_rw=1.
>   After that I just enabled chained IO on the device
> 
>   So what I'm actually testing is only sg + scsi-ml request
>   queuing and sglist allocation/deallocation. Which is what I want
>   to test.
> 
> 2. sg_dd
>   In the test script (see prof_test_scsi_debug attached)
>   I use sg_dd in direct io mode to send a direct scsi-command
>   to above device.

Your script is doing:

sg_dd blk_sgio=1 dio=1 if=/dev/zero of=/dev/sdb ...

Does dio work for SG_IO? Only sg supports it, I think.

And sd_read is more appropriate than sg_dd for performance tests.

I'm not sure about the results. How can sglist-arch be slower than
scsi_data_buff and scsi_sgtable?


>   I did 2 tests, in both I transfer 32GB of data.
>   1st test with 128 (4K) pages IO size.
>   2nd test with 2048 pages IO size.
>   The second test will successfully run only if chaining is enabled
>   and working. Otherwise it will fail.
> 
> The tested Kernels:
> 
> 1. Jens's sglist-arch
>   I was not able to pass all tests with this kernel. For some reason,
>   when commands bigger than 256 pages are queued, the machine runs out
>   of memory and kills the test. After the test is killed, the system
>   is left with 10M of memory and can hardly reboot.

sglist-arch works for me. I hit a memory leak due to an sg bug (I used
sg instead of SG_IO), though that bug happens even with scsi_data_buff
and sgtable.


> 2. scsi_data_buff
>   This tree is what I posted last. It is basically: 
>   0. sglist-arch
>   1. revert of scsi-ml support for chaining.
>   2. sg-pools cleanup [PATCH AB1]

I don't think this is the proper way. As I said, we can implement the
scsi_data_buffer stuff cleanly on top of sglist. There is no need to
revert anything in sglist.

I don't think that the sg-pools cleanup is necessary. Thanks to sglist,
we can live without the sg segment size hack.

My scsi data buffer patch is at:

git://git.kernel.org/pub/scm/linux/kernel/git/tomo/linux-2.6-bidi.git sdbuffer

The differences are: I use a 'scsi_data_buffer' structure instead of
scsi_data_buff, and it sits cleanly on top of sglist.


I also put your scsi_data_buff and sgtable patchset:

git://git.kernel.org/pub/scm/linux/kernel/git/tomo/linux-2.6-bidi.git boaz-sdbuff
git://git.kernel.org/pub/scm/linux/kernel/git/tomo/linux-2.6-bidi.git boaz-sgtable


Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO sg-chaining

2007-07-31 Thread Boaz Harrosh
Boaz Harrosh wrote:
> FUJITA Tomonori wrote:
>> From: Benny Halevy <[EMAIL PROTECTED]>
>> Subject: Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large 
>> IO sg-chaining
>> Date: Wed, 25 Jul 2007 11:26:44 +0300
>>
>>>> However, I'm perfectly happy to go with whatever the empirical evidence
>>>> says is best .. and hopefully, now we don't have to pick this once and
>>>> for all time ... we can alter it if whatever is chosen proves to be
>>>> suboptimal.
>>> I agree.  This isn't a catholic marriage :)
>>> We'll run some performance experiments comparing the sgtable chaining
>>> implementation vs. a scsi_data_buff implementation pointing
>>> at a possibly chained sglist and let's see if we can measure
>>> any difference.  We'll send results as soon as we have them.
>> I did some tests with your sgtable patchset and the approach to use
>> separate buffer for sglists. As expected, there was no performance
>> difference with small I/Os. I've not tried very large I/Os, which
>> might give some difference.
>>
> 
> Next week I will try to mount lots of scsi_debug devices and
> use large parallel IO to try and find a difference. I will
> test Jens's sglist-arch tree against above sglist-arch+scsi_sgtable
> 


I was able to run some tests; here are my results.

The results:
PPT = Pages Per Transfer (sg_count)

The numbers are the accumulated time of 20 transfers of 32GB each,
averaged over 4 such runs (lower time is better).
Transfers are sg_dd into scsi_debug.

Kernel         | total time 128-PPT | total time 2048-PPT
---------------|--------------------|---------------------
sglist-arch    |       47.26        | Test Failed
scsi_data_buff |       41.68        |       35.05
scsi_sgtable   |       42.42        |       36.45


The test:
1. scsi_debug
  I loaded the scsi_debug module, which was converted and fixed for
  chaining, with the following options:
  $ modprobe scsi_debug virtual_gb=32 delay=0 dev_size_mb=32 fake_rw=1
  
  32 GB of virtual drive on 32M of memory, with 0 delay,
  and reads/writes that do nothing because of fake_rw=1.
  After that I just enabled chained IO on the device.

  So what I'm actually testing is only sg + scsi-ml request
  queuing and sglist allocation/deallocation. Which is what I want
  to test.

2. sg_dd
  In the test script (see prof_test_scsi_debug attached)
  I use sg_dd in direct io mode to send a direct scsi-command
  to above device.
  I did 2 tests, in both I transfer 32GB of data.
  1st test with 128 (4K) pages IO size.
  2nd test with 2048 pages IO size.
  The second test will successfully run only if chaining is enabled
  and working. Otherwise it will fail.

The tested Kernels:

1. Jens's sglist-arch
  I was not able to pass all tests with this kernel. For some reason,
  when commands bigger than 256 pages are queued, the machine runs out
  of memory and kills the test. After the test is killed, the system
  is left with 10M of memory and can hardly reboot.
  I added some prints at the queuecommand entry in scsi_debug.c
  and I can see that I receive the expected large sg_count and bufflen,
  but unlike the other tests I get a different pointer from scsi_sglist().
  In the other tests, since nothing else is happening on this machine
  during the test, the sglist pointer is always the same: a command comes
  in, memory is allocated, scsi_debug does nothing, the memory is freed,
  and the command returns.
  I suspect an sglist leak or an allocation bug.

2. scsi_data_buff
  This tree is what I posted last. It is basically: 
  0. sglist-arch
  1. revert of scsi-ml support for chaining.
  2. sg-pools cleanup [PATCH AB1]
  3. scsi-ml sglist-arch [PATCH B1]
  4. scsi_data_buff patch. scsi_lib.c (Last patch sent)
  5. scsi_data_buff patch for sr.c sd.c & scsi_error.c
  6. Plus converted libata and ide-scsi so the kernel can compile.
  7. convert of scsi_debug.c and fix for chaining.
  (see http://www.bhalevy.com/open-osd/download/scsi_data_buff)

  All Tests run

3. scsi_sgtable 
  This tree is what I posted as the patches that opened this thread.
  0. sglist-arch
  1. revert of scsi-ml support for chaining.
  2. sg-pools cleanup [PATCH AB1]
  3. sgtable [PATCH A2]
  4. chaining [PATCH A3]
  5. scsi_sgtable for sd, sr and scsi_error
  6. Converted libata and ide-scsi so the kernel can compile.
  7. convert of scsi_debug.c and fix for chaining.
  (see http://www.bhalevy.com/open-osd/download/scsi_sgtable/linux-block/)

  All Tests run

#!/bin/sh
sdx=sdb
#load the device with these params
modprobe scsi_debug virtual_gb=32 delay=0 dev_size_mb=32 fake_rw=1

# go set some live params
# $ cd /sys/bus/pseudo/drivers/scsi_debug
# $ echo 1 > fake_rw

# mess with sglist chaining
cd /sys/block/$sdx/queue
echo 4096 > max_segments
cat max_hw_sectors_kb  > max_sectors_kb
echo "max_hw_sectors_kb="$(ca

Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO sg-chaining

2007-07-26 Thread FUJITA Tomonori
From: Boaz Harrosh <[EMAIL PROTECTED]>
Subject: Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO 
sg-chaining
Date: Wed, 25 Jul 2007 22:22:20 +0300

> FUJITA Tomonori wrote:
> > From: Benny Halevy <[EMAIL PROTECTED]>
> > Subject: Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large 
> > IO sg-chaining
> > Date: Wed, 25 Jul 2007 11:26:44 +0300
> > 
> >>> However, I'm perfectly happy to go with whatever the empirical evidence
> >>> says is best .. and hopefully, now we don't have to pick this once and
> >>> for all time ... we can alter it if whatever is chosen proves to be
> >>> suboptimal.
> >> I agree.  This isn't a catholic marriage :)
> >> We'll run some performance experiments comparing the sgtable chaining
> >> implementation vs. a scsi_data_buff implementation pointing
> >> at a possibly chained sglist and let's see if we can measure
> >> any difference.  We'll send results as soon as we have them.
> > 
> > I did some tests with your sgtable patchset and the approach to use
> > separate buffer for sglists. As expected, there was no performance
> > difference with small I/Os. I've not tried very large I/Os, which
> > might give some difference.
> > 
> > The patchset to use separate buffer for sglists is available:
> > 
> > git://git.kernel.org/pub/scm/linux/kernel/git/tomo/linux-2.6-bidi.git 
> > simple-sgtable
> > 
> > 
> > Can you clean up your patchset and upload somewhere?
> 
> Sorry sure. Here is scsi_sgtable complete work over linux-block:
> http://www.bhalevy.com/open-osd/download/scsi_sgtable/linux-block
>  
> Here is just scsi_sgtable, no chaining, over scsi-misc + more
> drivers:
> http://www.bhalevy.com/open-osd/download/scsi_sgtable/scsi-misc
> 
> 
> Next week I will try to mount lots of scsi_debug devices and
> use large parallel IO to try and find a difference. I will
> test Jens's sglist-arch tree against above sglist-arch+scsi_sgtable
> 
> I have lots of reservations about Tomo's last patches. For me
> they are a regression. They use 3 allocations per command instead
> of 2. They use an extra pointer and an extra global slab_pool. And
> for what, for grouping some scsi_cmnd members in a substructure.
> If we want to go the "pointing" way, keeping our extra scatterlist
> and our base_2 count on most ARCHs. Than we can just use the 
> scsi_data_buff embedded inside scsi_cmnd. 

Yeah, I changed and tested the patch to embed one sgtable header
structure in struct scsi_cmnd for uni-directional transfers.

Somehow, I uploaded the old patchset to my bidi tree yesterday.


> The second scsi_data_buff for bidi can come either from an extra 
> slab_pool like in Tomo's patch - bidi can pay - or in scsi.c at 
> scsi_setup_command_freelist() the code can inspect Tomo's flag 
> to the request_queue QUEUE_FLAG_BIDI and will than allocate a 
> bigger scsi_cmnd in the free list.
> 
> I have coded that approach and it is very simple:
> http://www.bhalevy.com/open-osd/download/scsi_data_buff
> 
> They are over Jens's sglist-arch branch
> I have revised all scsi-ml places and it all compiles
> But is totally untested.

This is similar to the patch that I tested.


> I will add this branch to the above tests, but I suspect that
> they are identical in every way to current code.

With the large I/Os, there might be some difference.


Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO sg-chaining

2007-07-25 Thread Boaz Harrosh
FUJITA Tomonori wrote:
> From: Benny Halevy <[EMAIL PROTECTED]>
> Subject: Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large 
> IO sg-chaining
> Date: Wed, 25 Jul 2007 11:26:44 +0300
> 
>>> However, I'm perfectly happy to go with whatever the empirical evidence
>>> says is best .. and hopefully, now we don't have to pick this once and
>>> for all time ... we can alter it if whatever is chosen proves to be
>>> suboptimal.
>> I agree.  This isn't a catholic marriage :)
>> We'll run some performance experiments comparing the sgtable chaining
>> implementation vs. a scsi_data_buff implementation pointing
>> at a possibly chained sglist and let's see if we can measure
>> any difference.  We'll send results as soon as we have them.
> 
> I did some tests with your sgtable patchset and the approach to use
> separate buffer for sglists. As expected, there was no performance
> difference with small I/Os. I've not tried very large I/Os, which
> might give some difference.
> 
> The patchset to use separate buffer for sglists is available:
> 
> git://git.kernel.org/pub/scm/linux/kernel/git/tomo/linux-2.6-bidi.git 
> simple-sgtable
> 
> 
> Can you clean up your patchset and upload somewhere?

Sorry sure. Here is scsi_sgtable complete work over linux-block:
http://www.bhalevy.com/open-osd/download/scsi_sgtable/linux-block
 
Here is just scsi_sgtable, no chaining, over scsi-misc + more
drivers:
http://www.bhalevy.com/open-osd/download/scsi_sgtable/scsi-misc

Next week I will try to mount lots of scsi_debug devices and
use large parallel IO to try and find a difference. I will
test Jens's sglist-arch tree against above sglist-arch+scsi_sgtable

I have lots of reservations about Tomo's last patches. For me
they are a regression. They use 3 allocations per command instead
of 2. They use an extra pointer and an extra global slab_pool. And
for what? For grouping some scsi_cmnd members in a substructure.
If we want to go the "pointing" way, keeping our extra scatterlist
and our base-2 count on most ARCHs, then we can just use the
scsi_data_buff embedded inside scsi_cmnd.

The second scsi_data_buff for bidi can come either from an extra
slab_pool like in Tomo's patch - bidi can pay - or, in scsi.c at
scsi_setup_command_freelist(), the code can inspect Tomo's
request_queue flag QUEUE_FLAG_BIDI and then allocate a
bigger scsi_cmnd in the free list.
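
A rough sketch of the layout described above; field names are
illustrative, not the posted patch:

struct scsi_data_buff {			/* illustrative, not the real thing */
	struct scatterlist *sglist;	/* possibly chained */
	unsigned short sg_count;
	unsigned short sg_pool;		/* which sized mempool it came from */
	unsigned int length;		/* total transfer length in bytes */
};

/* In this scheme scsi_cmnd embeds one scsi_data_buff for the common
 * uni-directional case; only when the request_queue is flagged
 * QUEUE_FLAG_BIDI does the freelist hand out a bigger scsi_cmnd with
 * room for (or a pointer to) a second scsi_data_buff for the data-in
 * direction.
 */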

I have coded that approach and it is very simple:
http://www.bhalevy.com/open-osd/download/scsi_data_buff

They are on top of Jens's sglist-arch branch.
I have revised all the scsi-ml places and it all compiles,
but it is totally untested.

I will add this branch to the above tests, but I suspect that
they are identical in every way to current code.

For review here is the main scsi_data_buff patch:
--
From: Boaz Harrosh <[EMAIL PROTECTED]>
Date: Wed, 25 Jul 2007 20:19:14 +0300
Subject: [PATCH] SCSI: scsi_data_buff

  In preparation for bidi we abstract all IO members of scsi_cmnd
  that will need to duplicate into a substructure.
  - Group all IO members of scsi_cmnd into a scsi_data_buff
structure.
  - Adjust accessors to new members.
  - scsi_{alloc,free}_sgtable receive a scsi_data_buff instead of
scsi_cmnd. And work on it. (Supporting chaining like before)
  - Adjust scsi_init_io() and  scsi_release_buffers() for above
change.
  - Fix other parts of scsi_lib to members migration. Use accessors
where appropriate.

 Signed-off-by: Boaz Harrosh <[EMAIL PROTECTED]>
---
 drivers/scsi/scsi_lib.c  |   68 +++--
 include/scsi/scsi_cmnd.h |   34 +++---
 2 files changed, 46 insertions(+), 56 deletions(-)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index d62b184..2b8a865 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -714,16 +714,14 @@ static unsigned scsi_sgtable_index(unsigned nents)
return -1;
 }
 
-struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd *cmd, gfp_t gfp_mask)
+struct scatterlist *scsi_alloc_sgtable(struct scsi_data_buff *sdb, gfp_t gfp_mask)
 {
struct scsi_host_sg_pool *sgp;
struct scatterlist *sgl, *prev, *ret;
unsigned int index;
int this, left;
 
-   BUG_ON(!cmd->use_sg);
-
-   left = cmd->use_sg;
+   left = sdb->sg_count;
ret = prev = NULL;
do {
this = left;
@@ -747,7 +745,7 @@ struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd *cmd, gfp_t gfp_mask)
 * first loop through, set initial index and return value
 */
if (!ret) {
-   cmd->sg_pool = index;
+   sdb->sg_pool = index;
ret = sgl;
}
 
@@ -769,10

Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO sg-chaining

2007-07-25 Thread FUJITA Tomonori
From: Benny Halevy <[EMAIL PROTECTED]>
Subject: Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO 
sg-chaining
Date: Wed, 25 Jul 2007 11:26:44 +0300

> > However, I'm perfectly happy to go with whatever the empirical evidence
> > says is best .. and hopefully, now we don't have to pick this once and
> > for all time ... we can alter it if whatever is chosen proves to be
> > suboptimal.
> 
> I agree.  This isn't a catholic marriage :)
> We'll run some performance experiments comparing the sgtable chaining
> implementation vs. a scsi_data_buff implementation pointing
> at a possibly chained sglist and let's see if we can measure
> any difference.  We'll send results as soon as we have them.

I did some tests with your sgtable patchset and the approach to use
separate buffer for sglists. As expected, there was no performance
difference with small I/Os. I've not tried very large I/Os, which
might give some difference.

The patchset to use separate buffer for sglists is available:

git://git.kernel.org/pub/scm/linux/kernel/git/tomo/linux-2.6-bidi.git 
simple-sgtable


Can you clean up your patchset and upload somewhere?


Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO sg-chaining

2007-07-25 Thread Benny Halevy
James Bottomley wrote:
> On Tue, 2007-07-24 at 17:01 +0300, Benny Halevy wrote:
>> FUJITA Tomonori wrote:
>>> I should have said that, was the approach to use separate buffer for
>>> sglists instead of putting the sglists and the parameters in one
>>> buffer completely rejected?
>> I think that James should be asked this question.
>> My understanding was that he preferred allocating the sgtable
>> header along with the scatterlist array.
> 
> All I really cared about was insulating the drivers from future changes
> in this area.  It strikes me that for chained sglist implementations,
> this can all become a block layer responsibility, since more than SCSI
> will want to make use of it.

agreed.

> 
> Just remember, though that whatever is picked has to work in both memory
> constrained embedded systems as well as high end clusters.  It seems to
> me (but this isn't a mandate) that a single tunable sg element chunk
> size will accomplish this the best (as in get rid of the entire SCSI
> sglist sizing machinery) .

maybe :)
I'm not as familiar as you are with all the different uses of linux.
IMO, having a tunable is worse for the administrator and I'd prefer
an automatic mechanism that will dynamically adapt the allocation
size(s) to the actual use, very much like the one you have today.

> 
> However, I'm perfectly happy to go with whatever the empirical evidence
> says is best .. and hopefully, now we don't have to pick this once and
> for all time ... we can alter it if whatever is chosen proves to be
> suboptimal.

I agree.  This isn't a catholic marriage :)
We'll run some performance experiments comparing the sgtable chaining
implementation vs. a scsi_data_buff implementation pointing
at a possibly chained sglist and let's see if we can measure
any difference.  We'll send results as soon as we have them.

> 
>> There are pro's and con's either way.  In my opinion separating
>> the headers is better for mapping buffers that have a power of 2
>> #pages (which seems to be the typical case) since when you're
>> losing one entry in the sgtable for the header you'd waste a lot
>> more when you just cross the bucket boundary. E.g. for 64 pages
>> you need to allocate from the "64 to 127" bucket rather than the
>> "33 to 64" bucket).  Separated, one sgtable header structure
>> can just be embedded in struct scsi_cmnd for uni-directional transfers
>> (wasting some space when transferring no data, but saving space and
>> cycles in the common case vs. allocating it from a separate memory pool)
>> and the one for bidi read buffers can be allocated separately just for
>> bidi commands.
> 
> This is all opinion ... could someone actually run some performance
> tests?
> 
> James
> 
> 



Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO sg-chaining

2007-07-24 Thread James Bottomley
On Tue, 2007-07-24 at 17:01 +0300, Benny Halevy wrote:
> FUJITA Tomonori wrote:
> > I should have said that, was the approach to use separate buffer for
> > sglists instead of putting the sglists and the parameters in one
> > buffer completely rejected?
> 
> I think that James should be asked this question.
> My understanding was that he preferred allocating the sgtable
> header along with the scatterlist array.

All I really cared about was insulating the drivers from future changes
in this area.  It strikes me that for chained sglist implementations,
this can all become a block layer responsibility, since more than SCSI
will want to make use of it.

Just remember, though, that whatever is picked has to work in
memory-constrained embedded systems as well as high-end clusters.  It
seems to me (but this isn't a mandate) that a single tunable sg element
chunk size will accomplish this best (as in, get rid of the entire SCSI
sglist sizing machinery).
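
One way to read "a single tunable sg element chunk size" (a sketch under
that assumption only, not an existing interface): every command draws
its scatterlist in fixed-size chunks from one pool, and larger requests
simply chain more chunks instead of selecting among differently sized
pools.

/* Illustrative only: with a single chunk size, the only sizing decision
 * left is how many chunks to chain. Every chunk except the last gives
 * up one entry to the chain link, so k chunks hold
 * k * (sg_chunk_size - 1) + 1 data entries.
 */
static unsigned int sg_chunks_needed(unsigned int nents,
				     unsigned int sg_chunk_size)
{
	if (nents <= 1)
		return 1;
	return (nents - 1 + sg_chunk_size - 2) / (sg_chunk_size - 1);
}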

However, I'm perfectly happy to go with whatever the empirical evidence
says is best .. and hopefully, now we don't have to pick this once and
for all time ... we can alter it if whatever is chosen proves to be
suboptimal.

> There are pro's and con's either way.  In my opinion separating
> the headers is better for mapping buffers that have a power of 2
> #pages (which seems to be the typical case) since when you're
> losing one entry in the sgtable for the header you'd waste a lot
> more when you just cross the bucket boundary. E.g. for 64 pages
> you need to allocate from the "64 to 127" bucket rather than the
> "33 to 64" bucket).  Separated, one sgtable header structure
> can just be embedded in struct scsi_cmnd for uni-directional transfers
> (wasting some space when transferring no data, but saving space and
> cycles in the common case vs. allocating it from a separate memory pool)
> and the one for bidi read buffers can be allocated separately just for
> bidi commands.

This is all opinion ... could someone actually run some performance
tests?

James




Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO sg-chaining

2007-07-24 Thread Benny Halevy
FUJITA Tomonori wrote:
> I should have said that, was the approach to use separate buffer for
> sglists instead of putting the sglists and the parameters in one
> buffer completely rejected?

I think that James should be asked this question.
My understanding was that he preferred allocating the sgtable
header along with the scatterlist array.

There are pros and cons either way.  In my opinion separating
the headers is better for mapping buffers that have a power-of-2
number of pages (which seems to be the typical case), since when you
lose one entry in the sgtable to the header you waste a lot
more by just crossing the bucket boundary (e.g. for 64 pages
you need to allocate from the "64 to 127" bucket rather than the
"33 to 64" bucket).  Separated, one sgtable header structure
can just be embedded in struct scsi_cmnd for uni-directional transfers
(wasting some space when transferring no data, but saving space and
cycles in the common case vs. allocating it from a separate memory pool)
and the one for the bidi read buffers can be allocated separately just for
bidi commands.
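
A quick illustration of the bucket arithmetic (assuming the usual
power-of-two pool sizes and a header that costs the equivalent of one
sg entry; purely illustrative):

/* pool_for(n): smallest power-of-two pool that holds n entries */
static unsigned int pool_for(unsigned int entries)
{
	unsigned int size = 8;

	while (size < entries)
		size <<= 1;
	return size;
}

/*
 * Separate header:  pool_for(64)     == 64  -> fits the 64-entry pool.
 * Embedded header:  pool_for(64 + 1) == 128 -> spills into the next
 *                   pool, wasting almost half of the allocation.
 */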

Benny


Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO sg-chaining

2007-07-24 Thread FUJITA Tomonori
From: FUJITA Tomonori <[EMAIL PROTECTED]>
Subject: Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO 
sg-chaining
Date: Tue, 24 Jul 2007 20:12:47 +0900

> From: Boaz Harrosh <[EMAIL PROTECTED]>
> Subject: Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large 
> IO sg-chaining
> Date: Tue, 24 Jul 2007 13:01:34 +0300
> 
> > FUJITA Tomonori wrote:
> > > From: Boaz Harrosh <[EMAIL PROTECTED]>
> > > Subject: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large 
> > > IO sg-chaining
> > > Date: Tue, 24 Jul 2007 11:47:50 +0300
> > > 
> > >> As Jens said, there is nothing common to scsi_sgtable and 
> > >> sglists. Save the fact that it is a massive conflict at 
> > >> scsi-ml. They touch all the same places.
> > >>
> > >> Proposed is a simple way out. Two patchsets That produce the
> > >> same output at the end.
> > >>
> > >> One: scsi_sgtable_than_sg-chaining
> > >> Two: sg-chaining_than_scsi_sgtable
> > > 
> > > Hmm, I thought that I've already posted a scsi_sgtable patch working
> > > with sg-chaining together.
> > > 
> > > http://marc.info/?l=linux-scsi&m=118519987632758&w=2
> > > 
> > > I quoted from my mail:
> > > 
> > > ---
> > > I think that the main issue of integrating sgtable and sglist is how
> > > to put scatterlist to scsi_sgtable structure.
> > > 
> > > If we allocate a scsi_sgtable structure and sglists separately, the
> > > code is pretty simple. But probably it's not the best way from the
> > > perspective of performance.
> > > 
> > I was just answering your other mail when this came in so I'll answer
> > here. 
> > This Approach is exactly the scsi_data_buffer approach we both
> > had solutions for. At the time I was all for that approach because it
> > is safer and could be kept compatible to old drivers. (Less work for
> > me) But it was decided against it. So suggesting it again is plain
> > going back.
> 
> Well, the approach to shuffle the entire request setup around was
> rejected. But was the approach to use two data buffers for bidi
> completely rejected?

I should have asked: was the approach of using a separate buffer for
the sglists, instead of putting the sglists and the parameters in one
buffer, completely rejected?


Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO sg-chaining

2007-07-24 Thread FUJITA Tomonori
From: Boaz Harrosh <[EMAIL PROTECTED]>
Subject: Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO 
sg-chaining
Date: Tue, 24 Jul 2007 13:01:34 +0300

> FUJITA Tomonori wrote:
> > From: Boaz Harrosh <[EMAIL PROTECTED]>
> > Subject: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO 
> > sg-chaining
> > Date: Tue, 24 Jul 2007 11:47:50 +0300
> > 
> >> As Jens said, there is nothing common to scsi_sgtable and 
> >> sglists. Save the fact that it is a massive conflict at 
> >> scsi-ml. They touch all the same places.
> >>
> >> Proposed is a simple way out. Two patchsets That produce the
> >> same output at the end.
> >>
> >> One: scsi_sgtable_than_sg-chaining
> >> Two: sg-chaining_than_scsi_sgtable
> > 
> > Hmm, I thought that I've already posted a scsi_sgtable patch working
> > with sg-chaining together.
> > 
> > http://marc.info/?l=linux-scsi&m=118519987632758&w=2
> > 
> > I quoted from my mail:
> > 
> > ---
> > I think that the main issue of integrating sgtable and sglist is how
> > to put scatterlist to scsi_sgtable structure.
> > 
> > If we allocate a scsi_sgtable structure and sglists separately, the
> > code is pretty simple. But probably it's not the best way from the
> > perspective of performance.
> > 
> I was just answering your other mail when this came in so I'll answer
> here. 
> This Approach is exactly the scsi_data_buffer approach we both
> had solutions for. At the time I was all for that approach because it
> is safer and could be kept compatible to old drivers. (Less work for
> me) But it was decided against it. So suggesting it again is plain
> going back.

Well, the approach to shuffle the entire request setup around was
rejected. But was the approach to use two data buffers for bidi
completely rejected?


> > If we put sglists into the scsi_sgtable structure (like Boaz's patch
> > does) and allocate and chain sglists only for large I/Os, we would
> > have the better performance (especially for small I/Os). But we will
> > have more complicated code.
> 
> Only that my proposed solution is more simple more elegant and more
> robust than even Jens's solution without any sgtable at all. Actually 
> the sgtable does good to chaining. scsi_lib with sglist+sgtable is 
> smaller than Jens's sglist by itself.

Your patch chains sgtables instead of sglists. It uses more memory in
exchange for simpler code. But I think that it's a nice approach.


Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO sg-chaining

2007-07-24 Thread Boaz Harrosh
FUJITA Tomonori wrote:
> From: Boaz Harrosh <[EMAIL PROTECTED]>
> Subject: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO 
> sg-chaining
> Date: Tue, 24 Jul 2007 11:47:50 +0300
> 
>> As Jens said, there is nothing common to scsi_sgtable and 
>> sglists. Save the fact that it is a massive conflict at 
>> scsi-ml. They touch all the same places.
>>
>> Proposed is a simple way out. Two patchsets That produce the
>> same output at the end.
>>
>> One: scsi_sgtable_than_sg-chaining
>> Two: sg-chaining_than_scsi_sgtable
> 
> Hmm, I thought that I've already posted a scsi_sgtable patch working
> with sg-chaining together.
> 
> http://marc.info/?l=linux-scsi&m=118519987632758&w=2
> 
> I quoted from my mail:
> 
> ---
> I think that the main issue of integrating sgtable and sglist is how
> to put scatterlist to scsi_sgtable structure.
> 
> If we allocate a scsi_sgtable structure and sglists separately, the
> code is pretty simple. But probably it's not the best way from the
> perspective of performance.
> 
I was just answering your other mail when this came in, so I'll answer
here.
This approach is exactly the scsi_data_buffer approach we both
had solutions for. At the time I was all for that approach because it
is safer and could be kept compatible with old drivers (less work for
me). But it was decided against. So suggesting it again is plainly
going back.

Now that we have done the work, I must say that James was right. The
sgtable work is nice and elegant, and makes scsi-ml simpler, not
more complicated. And the bidi work looks beautifully trivial on top of
sgtables.

> If we put sglists into the scsi_sgtable structure (like Boaz's patch
> does) and allocate and chain sglists only for large I/Os, we would
> have the better performance (especially for small I/Os). But we will
> have more complicated code.

Only that my proposed solution is simpler, more elegant, and more
robust than even Jens's solution without any sgtable at all. Actually,
the sgtable benefits chaining: scsi_lib with sglist+sgtable is
smaller than Jens's sglist by itself.

> ---
> 
> From a quick look over your patchset, you chose the latter. And my
> patch uses the former.

This patchset is not new. I have sent that solution before.
(http://marc.info/?l=linux-scsi&m=117933686313822&w=2)
In fact, when first programming the sgtable work I had Jens's
work in mind, to make sure the two would fit and converge.

Please let's not go back to old solutions again.

Boaz


Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO sg-chaining

2007-07-24 Thread FUJITA Tomonori
From: Boaz Harrosh <[EMAIL PROTECTED]>
Subject: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO 
sg-chaining
Date: Tue, 24 Jul 2007 11:47:50 +0300

> As Jens said, there is nothing common to scsi_sgtable and 
> sglists. Save the fact that it is a massive conflict at 
> scsi-ml. They touch all the same places.
> 
> Proposed is a simple way out. Two patchsets That produce the
> same output at the end.
> 
> One: scsi_sgtable_than_sg-chaining
> Two: sg-chaining_than_scsi_sgtable

Hmm, I thought that I've already posted a scsi_sgtable patch working
with sg-chaining together.

http://marc.info/?l=linux-scsi&m=118519987632758&w=2

I quoted from my mail:

---
I think that the main issue of integrating sgtable and sglist is how
to put scatterlist to scsi_sgtable structure.

If we allocate a scsi_sgtable structure and sglists separately, the
code is pretty simple. But probably it's not the best way from the
perspective of performance.

If we put sglists into the scsi_sgtable structure (like Boaz's patch
does) and allocate and chain sglists only for large I/Os, we would
have the better performance (especially for small I/Os). But we will
have more complicated code.
---

From a quick look over your patchset, you chose the latter. And my
patch uses the former.
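
A compact sketch of the two layouts being contrasted here (the "former"
and the "latter"); the structures and field names are illustrative, not
taken from either patchset:

/* (a) the former: the header and the sglist are separate allocations */
struct sgtable_separate {
	unsigned short sg_count;
	unsigned short sg_pool;
	struct scatterlist *sglist;	/* taken from a sized mempool */
};

/* (b) the latter: the header and the first sglist chunk share one
 * allocation; large I/Os chain additional chunks off the last entry.
 */
struct sgtable_embedded {
	unsigned short sg_count;
	unsigned short sg_pool;
	struct scatterlist sglist[0];	/* pool-sized flexible array */
};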