On Tue, Jan 25, 2005 at 09:02:34AM -0500, Mukker, Atul wrote:
> The megaraid driver is open source; do you see anything the driver can
> do to improve performance? We would greatly appreciate any feedback in
> this regard and will definitely incorporate it in the driver. The FW
> under Linux and Windows i[...]
On Tue, Jan 25, 2005 at 02:27:57PM, Christoph Hellwig wrote:
> It is not the driver per se, but the way the memory which is the I/O
> source/target is presented to the driver. In Linux there is a good
> chance it will have to use more scatter-gather elements to represent
> the same amount of data.
Note that a change made a few months ago after seeing issues [...]
Mukker, Atul wrote:
LSI would leave no stone unturned to make the performance better for
megaraid controllers under Linux. If you have hard data comparing the
performance of adapters from other vendors, please share it with us. We
would definitely strive to better it.
The megaraid [...]
> e.g. performance on megaraid controllers (very popular
> because a big PC vendor ships them) was always quite bad on
> Linux. Up to the point that specific IO workloads run half as
> fast on a megaraid compared to other controllers. I heard
> they do work better on Windows.
>
> Ideally th[...]
Steve Lord <[EMAIL PROTECTED]> writes:
>
> I realize this is one data point on one end of the scale, but I
> just wanted to make the point that there are cases where it
> does matter. Hopefully William's little change from last
> year has helped out a lot.
There are more datapoints:
e.g. perform[...]
James Bottomley wrote:
Well, the basic advice would be not to worry too much about
fragmentation from the point of view of I/O devices. They mostly all do
scatter gather (SG) onboard as an intelligent processing operation and
they're very good at it.
No one has ever really measured an effect we ca[...]
On Mon, 2005-01-24 at 13:49 -0200, Marcelo Tosatti wrote:
> So is it valid to affirm that on average an operation with one SG
> element pointing to a 1MB region is similar in speed to an operation
> with 16 SG elements each pointing to a 64K region, due to the
> efficient onboard SG processing[...]
On Mon, Jan 24, 2005 at 10:29:52AM -0200, Marcelo Tosatti wrote:
> Grant Grundler and James Bottomley have been working on this area,
> they might want to add some comments to this discussion.
>
> It seems HP (Grant et al.) has pursued using big pages on IA64 (64K)
> for this purpose.
Marcelo,
T[...]
On Mon, 2005-01-24 at 10:29 -0200, Marcelo Tosatti wrote:
> Since the pages which compose IO operations are most likely sparse
> (not physically contiguous), the driver+device has to perform
> scatter-gather IO on the pages.
>
> The idea is that if we can have larger memory blocks, scatter-gather [...]
James and Grant added to CC.
On Sat, 22 Jan 2005, Marcelo Tosatti wrote:
> > > I was thinking that it would be nice to have a set of high-order
> > > intensive workloads, and I wonder what are the most common high-order
> > > allocation paths which fail.
> > >
> >
> > Agreed. As I am not fully sure what workloads require high[...]
On Fri, 21 Jan 2005, Marcelo Tosatti wrote:
> On Thu, Jan 20, 2005 at 10:13:00AM, Mel Gorman wrote:
> >
>
> Hi Mel,
>
> I was thinking that it would be nice to have a set of high-order
> intensive workloads, and I wonder what are the most common high-order
> allocation paths which fail.
>
Changelog since V5
o Fixed up gcc-2.95 errors
o Fixed up whitespace damage

Changelog since V4
o No changes. Applies cleanly against 2.6.11-rc1 and 2.6.11-rc1-bk6. Applies
  with offsets to 2.6.11-rc1-mm1

Changelog since V3
o inlined get_pageblock_type() and set_pageblock_type()
o set_pageblock_ty[...]
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
On Sun, 16 Jan 2005, Marcelo Tosatti wrote:
> > No unfortunately. Do you know of a test I can use?
>
> Some STP reaim results show a significant performance increase in
> general, with a few small regressions.
>
> I think that depending on the type of access pattern of the
> application(s) there will [...]
On Sat, Jan 15, 2005 at 07:18:42PM, Mel Gorman wrote:
> On Fri, 14 Jan 2005, Marcelo Tosatti wrote:
>
> > On Thu, Jan 13, 2005 at 03:56:46PM, Mel Gorman wrote:
> > > The patch is against 2.6.11-rc1 and I'm willing to stand by its
> > > stability. I'm also confident it does its job pr[...]
> > That is possible but I haven't thought of a way of measuring the
> > cache colouring effects (if any). There is also the problem that the
> > additional complexity of the allocator will offset this benefit. The
> > two main loss points of the allocator are increased complexity and
> > the incre[...]
On Fri, 14 Jan 2005, Marcelo Tosatti wrote:
> On Thu, Jan 13, 2005 at 03:56:46PM, Mel Gorman wrote:
> > The patch is against 2.6.11-rc1 and I'm willing to stand by its
> > stability. I'm also confident it does its job pretty well so I'd like
> > it to be considered for inclusion.
>
> This [...]