Matthias-Christian Ott wrote:
Hi!
I have a question: Why do I get such debug messages:
BUG: using smp_processor_id() in preemptible [0001] code: khelper/892
caller is _pagebuf_lookup_pages+0x11b/0x362
[c03119c7] smp_processor_id+0xa3/0xb4
[c02ef802] _pagebuf_lookup_pages+0x11b/0x362
[c02ef802]
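That warning comes from the preemption debugging check: smp_processor_id() was called while the task could still migrate to another CPU. A minimal sketch of the usual pattern for such code (illustrative only, not the actual pagebuf fix):

#include <linux/smp.h>      /* get_cpu(), put_cpu() */

static void touch_per_cpu_state(void)
{
        int cpu = get_cpu();    /* disables preemption, returns current CPU */

        /* ... safe to use per-CPU data for 'cpu' here ... */

        put_cpu();              /* re-enables preemption */
}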
Mukker, Atul wrote:
LSI would leave no stone unturned to make performance better for megaraid
controllers under Linux. If you have hard data comparing performance against
adapters from other vendors, please share it with us. We would definitely
strive to better it.
The
James Bottomley wrote:
Well, the basic advice would be not to worry too much about
fragmentation from the point of view of I/O devices. They mostly all do
scatter gather (SG) onboard as an intelligent processing operation and
they're very good at it.
No one has ever really measured an effect we
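As a rough illustration of what "scatter gather onboard" means: the host hands the adapter a list of (address, length) segments and the controller walks it as a single transfer, so it does not care whether the buffer is contiguous in memory. The structure below is made up for illustration, not any particular driver's:

/* Illustrative only: one entry per physically contiguous chunk of the
 * buffer.  The controller DMAs each segment in turn. */
struct sg_entry {
        unsigned long long addr;   /* bus address of this segment  */
        unsigned int       len;    /* length of the segment, bytes */
};

struct sg_list {
        unsigned int    nents;     /* number of segments            */
        struct sg_entry ent[];     /* followed by nents entries     */
};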
Trever L. Adams wrote:
It is for a group. For the most part it is data access/retention. Writes
and such would be more similar to a desktop. I would use SATA if they
were (nearly) equally priced and there were awesome 1394 to SATA bridge
chips that worked well with Linux. So, right now, I am
On Fri, 29 Jun 2001, Steve Lord wrote:
Has anyone else seen a hang like this:
bdflush()
flush_dirty_buffers()
ll_rw_block()
submit_bh(buffer X)
generic_make_request()
__make_request()
create_bounce()
alloc_bounce_page()
alloc_page()
On Sat, 30 Jun 2001, Steve Lord wrote:
It looks to me as if all memory allocations of type GFP_BUFFER which happen
in generic_make_request downwards can hit the same type of deadlock, so
bounce buffers, the request functions of the raid and lvm paths can all
end up
Yes. 2.4.6-pre8 fixes that (not sure if it's up already).
It is up.
If the fix is to avoid page_launder in these cases then the number of
occurrences when an alloc_pages fails will go up.
I was attempting to come up with a way of making try_to_free_buffers
fail on buffers which
On Sat, 30 Jun 2001, Steve Lord wrote:
OK, sounds reasonable, time to go download and merge again I guess!
For 2.4.7 or so, I'll make a backwards-compatibility define (ie make
GFP_BUFFER be the same as the new GFP_NOIO, which is the historical
behaviour and the anally safe value
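The backwards-compatibility define described above would presumably be nothing more than an alias, along these lines (a sketch only; the real flag values are whatever the kernel headers define):

/* Old GFP_BUFFER callers keep compiling, but now get the no-I/O
 * allocation behaviour, which cannot recurse back into the block layer. */
#define GFP_BUFFER  GFP_NOIO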
XFS supports O_DIRECT on linux, has done for a while.
Steve
At work I had to sit through a meeting where I heard
the boss say "If Linux makes Sybase go through the page cache on
reads, maybe we'll just have to switch to Solaris. That's
a serious performance problem."
All I could say was I
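For anyone wondering what bypassing the page cache looks like from userspace, here is a minimal O_DIRECT sketch. The file name is made up, the 4096-byte alignment is an assumption about the device (O_DIRECT requires buffer, offset and length to be suitably aligned), and error handling is trimmed.

#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
        void *buf = NULL;
        int fd = open("/data/table.dat", O_RDONLY | O_DIRECT);

        if (fd >= 0 && posix_memalign(&buf, 4096, 4096) == 0) {
                /* O_DIRECT reads go straight to the device, skipping
                 * the page cache; 4096-byte alignment is assumed to
                 * satisfy the device's requirements here. */
                ssize_t n = read(fd, buf, 4096);
                (void)n;
                free(buf);
        }
        if (fd >= 0)
                close(fd);
        return 0;
}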
Hi,
So I only hope that the smart guys at SGI find a way to prepare the
patches the way Linus loves, because right now the file
"patch-2.4.5-xfs-1.0.1-core" (which contains the modifications to the
kernel, not the new files) is about 174090 bytes, which is a lot.
YA
But that is not a patch
I am chasing around in circles with an issue where buffers pointing at
highmem pages are getting put onto the buffer free list, and later
causing an oops in ext2 when getblk hands one of them out for metadata.
Say one thread is performing a truncate on an inode and is currently in
Hmm, we 'released' version 1 of XFS against a 2.4.2 base and packaged
it into a RedHat 7.1 kernel RPM; we also have a development CVS tree
currently running at 2.4.4. If you are running a production server
with what you describe below, you might want to switch to one of the
other two kernels I
Hi,
[ I'm not subscribed to linux-xfs, please cc me ]
We have managed to get a Debian potato system (with the 2.4 updates from
http://people.debian.org/~bunk/debian plus xfs-tools which we imported
from woody) to run 2.4.3-XFS.
Hans Reiser wrote:
> XFS used to have the performance problems that Alan described but fixed them
> in
> the linux port, yes?
>
> Hans
Hmm, we do things somewhat differently on linux, but I suspect most of it
is due to hardware getting faster underneath us.
Steve
system will work best for your application is to try it out. I have found
reiserfs to be very fast in some tests, especially those operating on lots of
small files, but contrary to some people's belief, XFS is good for a lot more
than just messing with Gbyte-long data files.
Steve Lord
granularity. Volume headers and such like would be one
possibility.
No, I don't have a magic bullet solution, but I do not think that just
increasing the granularity of the addressing is the correct answer,
and yes I do agree that just growing the buffer_head fields is not
perfect either.
Steve Lord
p.s
fsync look more like this:
down(&inode->i_sem);
filemap_fdatasync(ip->i_mapping);
fsync_inode_buffers(ip);
filemap_fdatawait(ip->i_mapping);
up(&inode->i_sem);
Steve Lord
On Friday, March 02, 2001 12:39:01 PM -0600 Steve Lord [EMAIL PROTECTED] wrote:
[ file_fsync syncs all dirty buffers on the FS ]
So it looks like fsync is going to cost more for bigger devices. Given the
O_SYNC changes Stephen Tweedie did, couldn't fsync look more like
On Friday, March 02, 2001 01:25:25 PM -0600 Steve Lord [EMAIL PROTECTED] wrote:
For why ide is beating scsi in this benchmark...make sure tagged queueing
is on (or increase the queue length?). For the xlog.c test posted, I
would expect scsi to get faster than ide as the size
Marcelo Tosatti wrote:
> On Wed, 14 Feb 2001, Steve Lord wrote:
>
> > A break in the on disk mapping of data could be used to stop readahead
> > I suppose, especially if getting that readahead page is going to
> > involve evicting other pages. I susp
On Wed, 14 Feb 2001, wrote:
I have been performing some IO tests under Linux on SCSI disks.
ext2 filesystem?
I noticed gaps between the commands and decided to investigate.
I am new to the kernel and do not profess to understand what
actually happens. My observations suggest
On Tue, 6 Feb 2001, Marcelo Tosatti wrote:
Think about a given number of pages which are physically contiguous on
disk -- you don't need to cache the block number for each page, you
just need to cache the physical block number of the first page of the
"cluster".
ranges are a hell
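A minimal sketch of that idea, purely illustrative (the structure and field names are invented, not anything in the kernel): one small descriptor covers a whole run of on-disk-contiguous pages, and each page's block number is derived from it instead of being cached per page.

/* Hypothetical cluster descriptor: pages [index, index + nr_pages) of
 * the file are contiguous on disk starting at start_block. */
struct page_cluster {
        unsigned long index;        /* first page's index in the file   */
        unsigned long start_block;  /* on-disk block of that first page */
        unsigned long nr_pages;     /* length of the contiguous run     */
};

/* Block number for page 'pg' inside the cluster, where blocks_per_page
 * is PAGE_SIZE / blocksize. */
static inline unsigned long cluster_block(struct page_cluster *pc,
                                          unsigned long pg,
                                          unsigned long blocks_per_page)
{
        return pc->start_block + (pg - pc->index) * blocks_per_page;
}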
We're trying to port some code that currently runs on SGI using the IRIX
direct I/O facility. From searching the web, it appears that a similar
feature either already is or will soon be available under Linux. Could
anyone fill me in on what the status is?
(I know about mapping block
Christoph Hellwig wrote:
> On Thu, Feb 01, 2001 at 08:14:58PM +0530, [EMAIL PROTECTED] wrote:
> >
> > That would require the vfs interfaces themselves (address space
> > readpage/writepage ops) to take kiobufs as arguments, instead of struct
> > page * . That's not the case right now, is it ?
>
In article [EMAIL PROTECTED] you wrote:
Hi,
On Thu, Feb 01, 2001 at 05:34:49PM +, Alan Cox wrote:
In the disk IO case, you basically don't get that (the only thing
which comes close is raid5 parity blocks). The data which the user
started with is the data sent out on the wire.
On Thu, Feb 01, 2001 at 02:56:47PM -0600, Steve Lord wrote:
And if you are writing to a striped volume via a filesystem which can do
its own I/O clustering, e.g. I throw 500 pages at LVM in one go and LVM
is striped on 64K boundaries.
But usually I want to have pages 0-63, 128-191, etc
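A back-of-the-envelope sketch of why those page runs come out non-contiguous (the parameters are assumptions for illustration, not LVM's code): with a fixed chunk size, consecutive chunks of the file rotate around the stripe members, so the pages destined for any one member form regularly spaced runs like the 0-63, 128-191 pattern above.

/* Illustrative only: which stripe member a byte offset lands on, for
 * nstripes devices and a chunk-byte stripe unit. */
static unsigned int stripe_member(unsigned long long off,
                                  unsigned int chunk,
                                  unsigned int nstripes)
{
        return (unsigned int)((off / chunk) % nstripes);
}

/* With two members, chunks 0, 2, 4, ... hit member 0 and chunks
 * 1, 3, 5, ... hit member 1, so one member only ever sees every
 * other run of pages from the file. */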
Hi!
We will be demonstrating XFS as the root file system for high availability
and clustering solutions in SGI systems at LinuxWorld New York from January
31 to February 2. Free XFS CDs will also be available at LinuxWorld.
What support does XFS provide for clustering?
Any information on XFS interoperability with current kernel nfsd?
You can NFS export XFS, though I have to say that this is not something we
test regularly, and you may find problems under high load.
Steve
Pedro