Re: [PATCH] minimal SAS transport class

2005-08-28 Thread Stefan Richter

Jeff Garzik wrote:
> (host,string) could succeed in transporting both HCIL and non-HCIL
> target identifiers.


What is the meaning of "host" there?
--
Stefan Richter
-=-=-=-= =--- ===-=
http://arcgraph.de/sr/


Re: [PATCH] minimal SAS transport class

2005-08-28 Thread Luben Tuikov
On 08/26/05 21:39, Jeff Garzik wrote:
> Luben Tuikov wrote:
> 
>>On 08/26/05 14:48, Jeff Garzik wrote:
>>
>>
>>>>No host numbers, no routing information.  This is all
>>>>transparent to SCSI Core, and NONE of its business.
>>>
>>>Routing is an essential part of the SCSI core's duties.
>>
>>
>>[I'm not a big fan of reading mixed-message emails, but what can you do...]
>> 
>>
>>
>>>The SCSI core is the resource manager responsible for routing messages 
>>>[CDBs] to/from LLDs based on .  This 
>>>includes resolution of kernel-specific identifiers (device major/minor, 
>>>etc.) into .  This also includes direct use of 
>>
>>
>>That particular task belongs to sd.c.  How it does it is
>>sd.c's job.  Not SCSI Core's.
> 
> 
> No.  sd, sr, st, and sg all use the -common- infrastructure to execute 
> tasks and return results.  That common infrastructure is part of the 
> SCSI core.
> 
> The SCSI layer itself is a marriage between
> 
>   device classes -- sd, sr, st, sg
>   transport classes -- common per-transport code
>   drivers -- executes tasks via transport class
>   glue -- the myriad functions that tie the above 3 together

Yes, this is the current infrastructure, and it is quite messy, without
clear separation between the layers.

The SCSI layer should sit above any transport-specific layer and should
have no knowledge of the transport specifics.  Read SAM.
 
> All transport-specific knowledge that is common across hardware vendors 
> should be in the transport class.

First, there is no such thing as "transport-specific knowledge common
across vendors" -- maybe you mean same transport, different vendors?

Yes, this is what I'm driving at: same transport, same transport layer,
different vendors, *but* with clear separation between them:

FS/user >> Block/char >> Command set drivers >> SCSI Core >> transport layer >> 
LLDD/transport >> interconnect >> physical world.

"Transport layer" is *not* James B's transport class, because it, transport
class, falls short of representing the specific concepts that a transport
may have.  Case in point: SAS.

I'd suggest looking at those figures:
SAM4r02, page 2, Figure 1 and Figure 2.
SPC4r00, page 1, Figure 1.
SAS1r09, page 33, Figure 10 -- yellow highlight is what should be SCSI Core.

>  The SCSI core uses the transport 
> class to perform transport-specific actions.

And it absolutely should not, because the layering infrastructure would then
be broken -- otherwise SCSI Core would have to know about every transport
there is.  Transport-specific actions should be performed only by the
transport layer, which sits _below_ SCSI Core.

> The SCSI core is the common point for exporting bus topology via 
> transport classes.

Which is again: wrong.

A transport implementation sits below SCSI Core and exports
topology info in a unified way for all vendors implementing
a transport.

>>>Moving away from HCIL requires a lot of thought, including thinking 
>>>about userland app breakage -- a big deal in Linux.
>>
>>
>>I never contended that userspace should be moved away from HCIL.
> 
> 
> Then, by implication, SAS and FC must continue to maintain HCIL<->device 
> maps.

And I repeat: those should be done by SCSI Core, simply because:
- They (HCIL) were not _invented_ by SAS or FC or USB or Firewire, etc.
- They (HCIL) are crud which only SCSI Core requires.

> SAM is already mostly there.  ->queuecommand is already a pretty good 
> execute_task().

Jeff, *do not spread FUD*!  SCSI Core doesn't know about SAM to save its
life!

The situation hasn't changed one bit in the last 5 years!

There are no TMF implementations, the layering infrastructure is wrong, etc., etc.

>>Most easily this would be done by implementing a bunch of
>>new-way-to-do-it functions.  The request_queue wouldn't care,
>>and old LLDD can use the old interface, and new ones can use
>>the new interface.
> 
> 
> Disagree.  Just follow the TODO list Christoph outlined, plus figure out 

Christoph was asking me if that list was ok -- just to be clear.

> how to handle SG_IO and /dev/sg sanely.
> 
> We don't need yet more
> 
>   if (new way) {
>   ...
>   } else {
>   ...
>   }
> 
> code blocks :)

Hmm, no.  There will never be one such block.  The new way and
the old way will be completely unaware of each other.  Once a
transport layer starts registering SCSI domain devices with the
new SCSI Core, it will just "go" from there.

_Plus_ it will be so much _shorter_ and more straightforward.

> HCIL addressing gunk largely belongs in SPI transport class, along with 
> scsi_scan_host()

SPI transport _layer_ will handle finding devices on the SPI domain.
It then registers those with SCSI Core *in no different way* than
SAS or USB or FC or Firewire would register them.  Then SCSI Core
sticks to SPC and does LU discovery, request_queue setup (or whatever),
and announces the LUs to the Command set drivers.  (Look at the figures
I mentioned above.)
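
To make the shape of that argument concrete, here is a purely hypothetical
sketch of the kind of interface being described.  None of these names exist
in the kernel; they only illustrate the proposed split between a transport
layer and SCSI Core:

/* hypothetical -- illustrates the proposed layering, not existing code */
struct scsi_domain_device;	/* end device discovered by a transport layer */

/*
 * A transport layer (SPI, SAS, FC, USB, Firewire, ...) hands each device
 * it discovers on its domain up to SCSI Core.  SCSI Core then does the
 * SPC-level work: REPORT LUNS/INQUIRY, request_queue setup per LU, and
 * announcing the LUs to the command set drivers (sd, sr, st, sg).
 */
int scsi_core_register_domain_device(struct Scsi_Host *shost,
				     struct scsi_domain_device *dev);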

> [each transport class should build its own topo

Re: [RFC] SCSI EH document

2005-08-28 Thread Luben Tuikov
On 08/25/05 23:53, Tejun Heo wrote:
>  Hello, fellow SCSI/ATA developers.
> 
>  This is the first draft of SCSI EH document.  This document tries to
> describe how SCSI EH works and what chores should be done to maintain
> SCSI midlayer integrity.  It's intended that this document can be used
> as reference for implementing either fine-grained EH callbacks or
> single eh_strategy_handler() callback.

Very good stuff, Tejun!

I'll have to print it and read it.  At first glance, good job!

Thanks,
Luben

>  I'm pretty sure that I've screwed up in (hopefully) several places,
> so please correct me.  Also, I have several places where I'm not sure
> or have questions, those are marked with *VERIFY* and *QUESTION*
> respectively.  If you know the answer, please let me know.
> 
>  Thanks.
> 
> 
> SCSI EH
> ==
> 
> TABLE OF CONTENTS
> 
> [1] How SCSI commands travel through the midlayer and to EH
> [1-1] struct scsi_cmnd
> [1-2] How do scmd's get completed?
>   [1-2-1] Completing a scmd w/ scsi_done
>   [1-2-2] Completing a scmd w/ timeout
> [1-3] How EH takes over
> [2] How SCSI EH works
> [2-1] EH through fine-grained callbacks
>   [2-1-1] Overview
>   [2-1-2] Flow of scmds through EH
>   [2-1-3] Flow of control
> [2-2] EH through hostt->eh_strategy_handler()
>   [2-2-1] Pre hostt->eh_strategy_handler() SCSI midlayer conditions
>   [2-2-2] Post hostt->eh_strategy_handler() SCSI midlayer conditions
>   [2-2-3] Things to consider
> 
> 
> [1] How SCSI commands travel through the midlayer and to EH
> 
> [1-1] struct scsi_cmnd
> 
>  Each SCSI command is represented with struct scsi_cmnd (== scmd).  A
> scmd has two list_head's to link itself into lists.  The two are
> scmd->list and scmd->eh_entry.  The former is used for free list or
> per-device allocated scmd list and not of much interest to this EH
> discussion.  The latter is used for completion and EH lists.
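
(A minimal sketch of just those two fields, for orientation; the real
struct scsi_cmnd of course has many more members:)

#include <linux/list.h>

struct scsi_cmnd {
	struct list_head list;		/* free list / per-device scmd list */
	struct list_head eh_entry;	/* completion and EH lists */
	/* ... many other fields ... */
};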
> 
> 
> [1-2] How do scmd's get completed?
> 
>  Once LLDD gets hold of a scmd, either the LLDD will complete the
> command by calling scsi_done callback passed from midlayer when
> invoking hostt->queuecommand() or SCSI midlayer will time it out.
> 
> 
> [1-2-1] Completing a scmd w/ scsi_done
> 
>  For all non-EH commands, scsi_done() is the completion callback.  It
> does the following.
> 
>  1. Delete timeout timer.  If it fails, it means that timeout timer
> has expired and is going to finish the command.  Just return.
> 
>  2. Link scmd to per-cpu scsi_done_q using scmd->eh_entry
> 
>  3. Raise SCSI_SOFTIRQ
> 
>  SCSI_SOFTIRQ handler scsi_softirq calls scsi_decide_disposition() to
> determine what to do with the command.  scsi_decide_disposition()
> looks at the scmd->result value and sense data to determine what to do
> with the command.
> 
>  - SUCCESS
>   scsi_finish_command() is invoked for the command.  The
>   function does some maintenance chores and notifies completion by
>   calling scmd->done() callback, which, for fs requests, would
>   be HLD completion callback - sd:sd_rw_intr, sr:rw_intr,
>   st:st_intr.
> 
>  - NEEDS_RETRY
>  - ADD_TO_MLQUEUE
>   scmd is requeued to blk queue.
> 
>  - otherwise
>   scsi_eh_scmd_add(scmd, 0) is invoked for the command.  See
>   [1-3] for details of this function.
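
(A condensed sketch of the dispatch just described, using the names from the
text; this is an illustration of the flow, not the actual kernel code, and
the helper used for the requeue step is an assumption:)

/* rough shape of the per-command work done from SCSI_SOFTIRQ */
static void softirq_dispatch_one(struct scsi_cmnd *scmd)
{
	switch (scsi_decide_disposition(scmd)) {
	case SUCCESS:
		scsi_finish_command(scmd);	/* ends with scmd->done() */
		break;
	case NEEDS_RETRY:
	case ADD_TO_MLQUEUE:
		scsi_queue_insert(scmd, SCSI_MLQUEUE_DEVICE_BUSY); /* requeue */
		break;
	default:
		scsi_eh_scmd_add(scmd, 0);	/* hand off to EH, see [1-3] */
		break;
	}
}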
> 
> 
> [1-2-2] Completing a scmd w/ timeout
> 
>  The timeout handler is scsi_times_out().  When a timeout occurs, this
> function
> 
>  1. invokes optional hostt->eh_timedout() callback.  Return value can
> be one of
> 
> - EH_HANDLED
>   This indicates that eh_timedout() dealt with the timeout.  The
>   scmd is passed to __scsi_done() and thus linked into per-cpu
>   scsi_done_q.  Normal command completion described in [1-2-1]
>   follows.
> 
> - EH_RESET_TIMER
>   This indicates that more time is required to finish the
>   command.  Timer is restarted.  This action is counted as a
>   retry and only allowed scmd->allowed + 1(!) times.  Once the
>   limit is reached, EH_NOT_HANDLED action is taken.
> 
>   *NOTE* This action is racy as the LLDD could finish the scmd
>   after the timeout has expired but before it's added back.  In
>   such cases, scsi_done() would think that timeout has occurred
>   and return without doing anything.  We lose completion and the
>   command will time out again.
> 
> - EH_NOT_HANDLED
>   This is the same as when eh_timedout() callback doesn't exist.
>   Step #2 is taken.
> 
>  2. scsi_eh_scmd_add(scmd, SCSI_EH_CANCEL_CMD) is invoked for the
> command.  See [1-3] for more information.
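
(And the timeout path in the same condensed form; the callback and field
names follow the text and my recollection of that era, so treat them as
assumptions rather than exact signatures:)

/* rough shape of scsi_times_out() as described above */
static void times_out_sketch(struct scsi_cmnd *scmd)
{
	enum scsi_eh_timer_return rtn = EH_NOT_HANDLED;

	if (scmd->device->host->hostt->eh_timedout)
		rtn = scmd->device->host->hostt->eh_timedout(scmd);

	switch (rtn) {
	case EH_HANDLED:
		__scsi_done(scmd);	/* normal completion, see [1-2-1] */
		return;
	case EH_RESET_TIMER:
		/* restart; counted against the scmd->allowed + 1 limit */
		scsi_add_timer(scmd, scmd->timeout_per_command, times_out_sketch);
		return;
	case EH_NOT_HANDLED:
		break;
	}

	scsi_eh_scmd_add(scmd, SCSI_EH_CANCEL_CMD);
}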
> 
> 
> [1-3] How EH takes over
> 
>  scmds enter EH via scsi_eh_scmd_add(), which does the following.
> 
>  1. Turns on scmd->eh_eflags as requested.  It's 0 for error
> completions and SCSI_EH_CANCEL_CMD for timeouts.
> 
>  2. Links scmd->eh_entry to shost->eh_cmd_q
> 
>  3. Sets SHOST_RECOVERY bit in shost->shost_state
>
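
(Last one, for [1-3]; locking of shost->eh_cmd_q and waking the EH thread
are omitted, and the helpers shown simply follow the three steps in the
text -- the exact state-setting mechanism varies by kernel version:)

/* rough shape of scsi_eh_scmd_add() as described in [1-3] */
static void eh_scmd_add_sketch(struct scsi_cmnd *scmd, int eh_flag)
{
	struct Scsi_Host *shost = scmd->device->host;

	scmd->eh_eflags |= eh_flag;	/* 0 or SCSI_EH_CANCEL_CMD */
	list_add_tail(&scmd->eh_entry, &shost->eh_cmd_q);
	set_bit(SHOST_RECOVERY, &shost->shost_state);
}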

Re: [PATCH] minimal SAS transport class

2005-08-28 Thread Luben Tuikov
On 08/26/05 21:53, Jeff Garzik wrote:
> Luben Tuikov wrote:
> 
>>On 08/26/05 15:24, Jeff Garzik wrote:
>>
>>
>>>Luben Tuikov wrote:
>>>
>>>
>>>
>>>>Even simpler: the transport layer calls SCSI Core, saying: "Hey, here is
>>>>a pointer to struct scsi_domain_device.  If you want, you can send REPORT
>>>>LUNS and other things to it."
>>>
>>>
>>>For the SG_IO ioctl, /dev/sg and request_queue usage, SCSI core must map 
>>>an address (currently HCIL) into a scsi_domain_device pointer.  These 
>>
>>
>>The request queue is associated with the LU, not the scsi_domain_device.
>>When SCSI Core discovers the LU, it sets up the request queue for it,
>>etc.  Again this is the role of SCSI Core, not messing up with transport
>>specific stuff.
>>
>>
>>
>>>upper layer kernel elements rely on this "SCSI address", and rely on the 
>>>fact that SCSI core can route from a block device straight to a SCSI 
>>>LLD, using nothing more than this "SCSI address."
>>
>>
>>I don't get this.
> 
> 
> More basically...  An in-kernel C pointer, to a SCSI target device, is 
> not sufficient in all cases to address a target.  This plays out most 
> often in userland interfaces such as ioctls.

1. Why do I care about this stuff, when I'm so low in the layering
   infra?
2. I thought ioctls are bad.
3. So you're saying that there's an ioctl which addresses a "SCSI target
   device" by HCIL?  Which one is it please?
 
>>>That is the heart of the routing/addressing that the SCSI core must perform.
>>
>>
>>Disagree: now: scsi_device <--> request_queue, then: struct LU <--> 
>>request_queue.
>>
>>The LU points to the domain_device (as its parent). The domain_device
>>has a void *lldd_dev in it.  
> 
> 
> The current SCSI code largely already has this stuff.

No, it has no concept of those things.  I mean, look at how
scsi_target is treated and implemented...
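
For reference, a sketch of the object relationships being argued for above
(the names are purely illustrative of the proposal; these structures do not
exist in the current tree):

/* hypothetical object model -- illustrative only */
struct domain_device {
	struct domain_device *parent;	/* e.g. an expander above us, if any */
	void *lldd_dev;			/* the LLDD's per-device context */
};

struct scsi_lu {
	struct domain_device *dev;	/* the target device this LU lives on */
	struct request_queue *queue;	/* set up by SCSI Core at LU discovery */
};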

Before one starts writing code about something (scsi_target),
one should ask oneself "what is that something (scsi_target)?"
and "how does it play with the rest of the objects I want to
represent?".

You can search the archives of linux-scsi and you'll see how many
times I've asked about true "scsi_target" representation, and how many
times I've been rejected by the SCSI maintainers.

Now the need for such an object is even more dire, given that you have
_one_more_ SCSI transport.

It started with iSCSI...

> No specs, just a comment from IRC.

Oh, I see... this was one of those IRC sessions you had with James B,
which you talked about before.

I'd suggest sitting down with a fresh copy of SAM and SPC, reading them
20 times, then looking at this picture:
http://www.t10.org/scsi-3.gif
and then "seeing" how _a_ SCSI Core should behave.

> (host,string) could succeed in transporting both HCIL and non-HCIL 
> target identifiers.

Broken.

BTW, none of what I'm saying here has changed in the last 5 years.
It's all the same old stuff.  Now it's SAS; back then it was iSCSI.

What the maintainers here fail to see is that it is all SAM, and how it
all plays together with the transports and then with the interconnects.

The reason we don't see eye to eye is that we don't start from the same
base.  Some people here re-read every revision of SAM, SPC, etc., when it
comes out; others do not.

Luben






Re: libata: clustering on or off?

2005-08-28 Thread Jens Axboe
On Sun, Aug 28 2005, Jeff Garzik wrote:
> Jens Axboe wrote:
> >On Sun, Aug 28 2005, Arjan van de Ven wrote:
> >
> >>On Sun, 2005-08-28 at 05:42 -0400, Jeff Garzik wrote:
> >>
> >>>The constant ATA_SHT_USE_CLUSTERING in include/linux/libata.h controls
> >>>the use of SCSI layer's use_clustering feature, for a great many libata
> >>>drivers.
> >>>
> >>>The current setup has clustering disabled, which in theory causes the
> >>>block layer to do less work, at the expense of a greater number of
> >>>scatter/gather table entries used.
> >>>
> >>>Any opinions WRT turning on clustering for libata?
> >>
> >>in 2.4 clustering was expensive due to a large number of checks that
> >>were done (basically the number of fragments got recounted a gazilion
> >>times). In 2.6 Jens fixed that afaik to make it basically free...
> >>at which point it's a win always.
> 
> >Yeah, it wont cost any extra cycles,
> 
> A simple grep for QUEUE_FLAG_CLUSTER-related code shows that it -does- 
> cost extra cycles.

Well yes, "none" is not true of course.  But it's not a lot, not like the
extra iterations over the request mappings it used to cost.  So in by far
most cases, it should be a win overall.

> >>Imo clustering on the driver level should announce driver capabilities.
> >>If clustering for some arch/kernel makes it slower, that should be
> >>decided at a midlayer level and not in each driver; eg the midlayer
> >>would chose to ignore the drivers capabilities.
> >>So .. my opinion would be that libata should announce the capability (it
> >>seems the code/hw can do it). 
> >
> >
> >Agree, we should just remove the ability to control clustering, as it
> >really overlaps with the segment settings anyways.
> 
> OK, I guess the consensus is to use clustering :)
> 
> We'll see if anything blows up in 2.6.14...

;-)

-- 
Jens Axboe



Re: libata: clustering on or off?

2005-08-28 Thread Jeff Garzik

Jens Axboe wrote:
>On Sun, Aug 28 2005, Arjan van de Ven wrote:
>
>>On Sun, 2005-08-28 at 05:42 -0400, Jeff Garzik wrote:
>>
>>>The constant ATA_SHT_USE_CLUSTERING in include/linux/libata.h controls
>>>the use of SCSI layer's use_clustering feature, for a great many libata
>>>drivers.
>>>
>>>The current setup has clustering disabled, which in theory causes the
>>>block layer to do less work, at the expense of a greater number of
>>>scatter/gather table entries used.
>>>
>>>Any opinions WRT turning on clustering for libata?
>>
>>in 2.4 clustering was expensive due to a large number of checks that
>>were done (basically the number of fragments got recounted a gazilion
>>times). In 2.6 Jens fixed that afaik to make it basically free...
>>at which point it's a win always.
>
>Yeah, it wont cost any extra cycles,

A simple grep for QUEUE_FLAG_CLUSTER-related code shows that it -does-
cost extra cycles.

>>Imo clustering on the driver level should announce driver capabilities.
>>If clustering for some arch/kernel makes it slower, that should be
>>decided at a midlayer level and not in each driver; eg the midlayer
>>would chose to ignore the drivers capabilities.
>>So .. my opinion would be that libata should announce the capability (it
>>seems the code/hw can do it).
>
>Agree, we should just remove the ability to control clustering, as it
>really overlaps with the segment settings anyways.

OK, I guess the consensus is to use clustering :)

We'll see if anything blows up in 2.6.14...

Jeff





Re: libata: clustering on or off?

2005-08-28 Thread Jens Axboe
On Sun, Aug 28 2005, Christoph Hellwig wrote:
> On Sun, Aug 28, 2005 at 04:20:19PM +0200, Jens Axboe wrote:
> > Agree, we should just remove the ability to control clustering, as it
> > really overlaps with the segment settings anyways.
> 
> What are we going to do with iscsi then?  It really doesn't like segments
> over a page in size.  The best thing would probably be to switch networking
> to use sg lists and dma_map_sg, but that's not a trivial task.

Limit the segment size, then.  There are provisions for setting both length
and boundary limits; that should suffice.
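
For example, something along these lines in the iSCSI host's queue setup
(a sketch; the function the calls live in and the exact limits chosen are
assumptions, though the block-layer helpers themselves are real):

#include <linux/blkdev.h>

static void iscsi_limit_segments(struct request_queue *q)
{
	/* keep every segment within a single page */
	blk_queue_max_segment_size(q, PAGE_SIZE);
	blk_queue_segment_boundary(q, PAGE_SIZE - 1);
}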

-- 
Jens Axboe



Re: libata: clustering on or off?

2005-08-28 Thread Christoph Hellwig
On Sun, Aug 28, 2005 at 04:20:19PM +0200, Jens Axboe wrote:
> Agree, we should just remove the ability to control clustering, as it
> really overlaps with the segment settings anyways.

What are we going to do with iscsi then?  It really doesn't like segments
over a page in size.  The best thing would probably be to switch networking
to use sg lists and dma_map_sg, but that's not a trivial task.



Re: libata: clustering on or off?

2005-08-28 Thread Jens Axboe
On Sun, Aug 28 2005, Arjan van de Ven wrote:
> On Sun, 2005-08-28 at 05:42 -0400, Jeff Garzik wrote:
> > The constant ATA_SHT_USE_CLUSTERING in include/linux/libata.h controls
> > the use of SCSI layer's use_clustering feature, for a great many libata
> > drivers.
> > 
> > The current setup has clustering disabled, which in theory causes the
> > block layer to do less work, at the expense of a greater number of
> > scatter/gather table entries used.
> > 
> > Any opinions WRT turning on clustering for libata?
> 
> in 2.4 clustering was expensive due to a large number of checks that
> were done (basically the number of fragments got recounted a gazilion
> times). In 2.6 Jens fixed that afaik to make it basically free...
> at which point it's a win always.

Yeah, it won't cost any extra cycles, so there's no point in keeping it
turned off for that reason.

> Imo clustering on the driver level should announce driver capabilities.
> If clustering for some arch/kernel makes it slower, that should be
> decided at a midlayer level and not in each driver; eg the midlayer
> would chose to ignore the drivers capabilities.
> So .. my opinion would be that libata should announce the capability (it
> seems the code/hw can do it). 

Agree, we should just remove the ability to control clustering, as it
really overlaps with the segment settings anyways.

-- 
Jens Axboe



Re: Fw: [Bugme-new] [Bug 5117] New: Panic when accessing scsi-tapedrives with 4G-remap

2005-08-28 Thread Arjan van de Ven

> > OK. I booted my test i386 machine with highmem=384m and did some tests. I
> > also added a counter to st.c to count the highmem pages used for zero-copy
> > DMA. I could not get dd to use highmem but with tar that succeeded. No
> > extra messages were found in syslog during these tests.
> > 
> > BUT, at the same time I remembered that the system in the Bugzilla report
> > was Athlon64 running FC3 x86_64. The x86_64 kernel does not have highmem.
> 
> True. I do have a machine running FC4 x86_64, but it doesn't have 4G of  
> memory, so it doesn't do the 4G-remap that the bug report refers to.
> 
> Don't know if there is any way to make it do so (other than buy more memory).


Note that some (Tyan) amd64 motherboards have a really broken BIOS wrt
remap, and the only way to get those systems stable is to disable the
memory remap in that BIOS.  If you don't, all kinds of funky stuff can and
does happen.




Re: Fw: [Bugme-new] [Bug 5117] New: Panic when accessing scsi-tapedrives with 4G-remap

2005-08-28 Thread Willem Riede

On 08/28/2005 06:40:04 AM, Kai Makisara wrote:

> On Fri, 26 Aug 2005, Kai Makisara wrote:
> 
> > On Thu, 25 Aug 2005, Andrew Morton wrote:
> >
> > > Could this purely be a highmem problem? Is the zero-copy DMA feature of
> > > st.c known to work OK with x86 highmem?
> >
> > It is _not_ known _not_ to work ;-) I.e., I have received neither any
> > success nor any failure reports. I have not tested it because I don't have
> > any machine with enough memory (and I have not hacked a kernel to use
> > highmem with 512 MB of memory). I hope someone seeing this thread and
> > using highmem with tape can comment on this subject.
> >
> OK. I booted my test i386 machine with highmem=384m and did some tests. I
> also added a counter to st.c to count the highmem pages used for zero-copy
> DMA. I could not get dd to use highmem but with tar that succeeded. No
> extra messages were found in syslog during these tests.
> 
> BUT, at the same time I remembered that the system in the Bugzilla report
> was Athlon64 running FC3 x86_64. The x86_64 kernel does not have highmem.


True. I do have a machine running FC4 x86_64, but it doesn't have 4G of  
memory, so it doesn't do the 4G-remap that the bug report refers to.


Don't know if there is any way to make it do so (other than buy more memory).

Regards, Willem Riede.



Re: Fw: [Bugme-new] [Bug 5117] New: Panic when accessing scsi-tapedrives with 4G-remap

2005-08-28 Thread Kai Makisara
On Fri, 26 Aug 2005, Kai Makisara wrote:

> On Thu, 25 Aug 2005, Andrew Morton wrote:
> 
> > 
> > 
> > Begin forwarded message:
> > 
> > Date: Tue, 23 Aug 2005 12:53:38 -0700
> > From: [EMAIL PROTECTED]
> > To: [EMAIL PROTECTED]
> > Subject: [Bugme-new] [Bug 5117] New: Panic when accessing scsi-tapedrives 
> > with 4G-remap
> > 
> > 
> >  http://bugzilla.kernel.org/show_bug.cgi?id=5117
> > 
> > Summary: Panic when accessing scsi-tapedrives with 4G-remap
> >  Kernel Version: 2.6.12.5
> > 
> > 
> > Could this purely be a highmem problem?   Is the zero-copy DMA feature of
> > st.c known to work OK with x86 highmem?
> 
> It is _not_ known _not_ to work ;-) I.e., I have received neither any 
> success nor any failure reports. I have not tested it because I don't have 
> any machine with enough memory (and I have not hacked a kernel to use 
> highmem with 512 MB of memory). I hope someone seeing this thread and 
> using highmem with tape can comment on this subject.
> 
OK. I booted my test i386 machine with highmem=384m and did some tests. I 
also added a counter to st.c to count the highmem pages used for zero-copy 
DMA. I could not get dd to use highmem but with tar that succeeded. No 
extra messages were found in syslog during these tests.
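
(For the curious, the counter was of this general shape -- a sketch, not the
actual patch; it just checks each user page mapped for the transfer:)

#include <linux/mm.h>

static unsigned long st_highmem_pages;

static void st_count_highmem(struct page **pages, int nr_pages)
{
	int i;

	for (i = 0; i < nr_pages; i++)
		if (PageHighMem(pages[i]))
			st_highmem_pages++;
}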

BUT, at the same time I remembered that the system in the Bugzilla report 
was Athlon64 running FC3 x86_64. The x86_64 kernel does not have highmem.

-- 
Kai


Re: libata: clustering on or off?

2005-08-28 Thread Arjan van de Ven
On Sun, 2005-08-28 at 05:42 -0400, Jeff Garzik wrote:
> The constant ATA_SHT_USE_CLUSTERING in include/linux/libata.h controls
> the use of SCSI layer's use_clustering feature, for a great many libata
> drivers.
> 
> The current setup has clustering disabled, which in theory causes the
> block layer to do less work, at the expense of a greater number of
> scatter/gather table entries used.
> 
> Any opinions WRT turning on clustering for libata?

in 2.4 clustering was expensive due to a large number of checks that
were done (basically the number of fragments got recounted a gazillion
times). In 2.6 Jens fixed that afaik to make it basically free...
at which point it's always a win.

Imo clustering on the driver level should announce driver capabilities.
If clustering for some arch/kernel makes it slower, that should be
decided at the midlayer level and not in each driver; e.g. the midlayer
would choose to ignore the driver's capabilities.
So... my opinion would be that libata should announce the capability (it
seems the code/hw can do it).



libata: clustering on or off?

2005-08-28 Thread Jeff Garzik

The constant ATA_SHT_USE_CLUSTERING in include/linux/libata.h controls
the use of SCSI layer's use_clustering feature, for a great many libata
drivers.
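
(For reference, this is roughly how it appears in a driver's host template;
the sketch uses a made-up driver name and omits the other fields, and note
that ATA_SHT_USE_CLUSTERING currently expands to the "disabled" value, as
described below:)

static struct scsi_host_template example_ata_sht = {
	.module		= THIS_MODULE,
	.name		= "example_ata",
	/* ... queuecommand, sg_tablesize, etc. ... */
	.use_clustering	= ATA_SHT_USE_CLUSTERING,
};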

The current setup has clustering disabled, which in theory causes the
block layer to do less work, at the expense of a greater number of
scatter/gather table entries used.

Any opinions WRT turning on clustering for libata?

Jeff


