Re: Preempt & Xfs Question

2005-01-27 Thread Steve Lord
Matthias-Christian Ott wrote: Hi! I have a question: Why do I get such debug messages: BUG: using smp_processor_id() in preemptible [0001] code: khelper/892 caller is _pagebuf_lookup_pages+0x11b/0x362 [<c03119c7>] smp_processor_id+0xa3/0xb4 [<c02ef802>] _pagebuf_lookup_pages+0x11b/0x362 [<c02ef802>]
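
The warning is CONFIG_DEBUG_PREEMPT pointing out that smp_processor_id() was called while preemption was enabled, so the task could migrate and the returned CPU number go stale. As a hedged illustration of the general remedy (not the actual _pagebuf_lookup_pages fix), the caller is normally pinned with get_cpu()/put_cpu() around the per-CPU access; use_per_cpu_data() below is a placeholder:

    int cpu;

    cpu = get_cpu();        /* disables preemption, returns a stable CPU number */
    use_per_cpu_data(cpu);  /* placeholder for whatever needs the CPU id */
    put_cpu();              /* re-enables preemption */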

Re: [PATCH] Avoiding fragmentation through different allocator

2005-01-25 Thread Steve Lord
Mukker, Atul wrote: LSI would leave no stone unturned to make the performance better for megaraid controllers under Linux. If you have some hard data in relation to comparison of performance for adapters from other vendors, please share with us. We would definitely strive to better it. The

Re: [PATCH] Avoiding fragmentation through different allocator

2005-01-24 Thread Steve Lord
James Bottomley wrote: Well, the basic advice would be not to worry too much about fragmentation from the point of view of I/O devices. They mostly all do scatter gather (SG) onboard as an intelligent processing operation and they're very good at it. No one has ever really measured an effect we

Re: LVM2

2005-01-20 Thread Steve Lord
Trever L. Adams wrote: It is for a group. For the most part it is data access/retention. Writes and such would be more similar to a desktop. I would use SATA if they were (nearly) equally priced and there were awesome 1394 to SATA bridge chips that worked well with Linux. So, right now, I am

Re: Bounce buffer deadlock

2001-06-30 Thread Steve Lord
> > On Sat, 30 Jun 2001, Steve Lord wrote: > > > > OK, sounds reasonable, time to go download and merge again I guess! > > For 2.4.7 or so, I'll make a backwards-compatibility define (ie make > GFP_BUFFER be the same as the new GFP_NOIO, which is the historical >
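
The backwards-compatibility define described above amounts to a one-liner; a minimal sketch, assuming GFP_NOIO is the new flag name:

    /* Keep old GFP_BUFFER callers compiling while they migrate to GFP_NOIO. */
    #define GFP_BUFFER GFP_NOIO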

Re: Bounce buffer deadlock

2001-06-30 Thread Steve Lord
> Yes. 2.4.6-pre8 fixes that (not sure if its up already). It is up. > > > If the fix is to avoid page_launder in these cases then the number of > > occurrences when an alloc_pages fails will go up. > > > I was attempting to come up with a way of making try_to_free_buffers > > fail on

Re: Bounce buffer deadlock

2001-06-30 Thread Steve Lord
> > On Sat, 30 Jun 2001, Steve Lord wrote: > > > > It looks to me as if all memory allocations of type GFP_BUFFER which happen > > in generic_make_request downwards can hit the same type of deadlock, so > > bounce buffers, the request functions of the raid an
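
A sketch of the constraint being described, assuming the GFP_NOIO flag discussed in this thread: any allocation made on the path below generic_make_request() must not recurse into block I/O to reclaim memory, because that I/O may be stuck behind the very request being built.

    #include <linux/mm.h>

    /* Illustrative helper, not code from the thread: allocate a page on the
     * submission path (bounce buffers, raid/lvm request functions) without
     * allowing reclaim to issue further I/O or re-enter the filesystem. */
    static struct page *alloc_io_path_page(void)
    {
        return alloc_page(GFP_NOIO);    /* may sleep, but no __GFP_IO / __GFP_FS */
    }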

Re: Bounce buffer deadlock

2001-06-30 Thread Steve Lord
> > > On Fri, 29 Jun 2001, Steve Lord wrote: > > > > > Has anyone else seen a hang like this: > > > > bdflush() > > flush_dirty_buffers() > > ll_rw_block() > > submit_bh(buffer X) > > generic_make_request(

Bounce buffer deadlock

2001-06-29 Thread Steve Lord
Has anyone else seen a hang like this: bdflush() flush_dirty_buffers() ll_rw_block() submit_bh(buffer X) generic_make_request() __make_request() create_bounce() alloc_bounce_page() alloc_page()

Re: O_DIRECT please; Sybase 12.5

2001-06-29 Thread Steve Lord
XFS supports O_DIRECT on linux, has done for a while. Steve > At work I had to sit through a meeting where I heard > the boss say "If Linux makes Sybase go through the page cache on > reads, maybe we'll just have to switch to Solaris. That's > a serious performance problem." > All I could say
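
A minimal user-space sketch of what an O_DIRECT read looks like (the path is made up, and the exact alignment rules depend on the filesystem and kernel version; buffer, offset, and length typically need to be block-aligned):

    #define _GNU_SOURCE             /* for O_DIRECT */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        void *buf;
        ssize_t n;
        int fd = open("/mnt/xfs/datafile", O_RDONLY | O_DIRECT);  /* hypothetical file */

        if (fd < 0 || posix_memalign(&buf, 4096, 4096) != 0)
            return 1;

        n = read(fd, buf, 4096);    /* bypasses the page cache */
        printf("read %zd bytes\n", n);

        free(buf);
        close(fd);
        return 0;
    }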

Re: Announcing Journaled File System (JFS) release 1.0.0 available

2001-06-28 Thread Steve Lord
> Hi, > So I only hope that the smart guys at SGI find a way to prepare the > patches the way Linus loves because now the file > "patch-2.4.5-xfs-1.0.1-core" (which contains the modifs to the kernel > and not the new files) is about 174090 bytes which is a lot. > > YA > But that is not a

Busy buffers and try_to_free_pages

2001-06-07 Thread Steve Lord
I am chasing around in circles with an issue where buffers pointing at highmem pages are getting put onto the buffer free list, and later on causing oops in ext2 when it gets assigned them for metadata via getblk. Say one thread is performing a truncate on an inode and is currently in

Re: kernel oops with 2.4.3-xfs

2001-05-23 Thread Steve Lord
Hmm, we 'released' version 1 of XFS against a 2.4.2 base - and packaged it into a RedHat 7.1 Kernel RPM, we also have a development CVS tree currently running at 2.4.4. If you are running a production server with what you describe below, you might want to switch to one of the other two kernels I

Re: Oops with 2.4.3-XFS

2001-05-16 Thread Steve Lord
> Hi, > > [ I'm not subscribed to linux-xfs, please cc me ] > > We have managed to get a Debian potato system (with the 2.4 updates from > http://people.debian.org/~bunk/debian plus xfs-tools which we imported > from woody) to run 2.4.3-XFS.

Re: reiserfs, xfs, ext2, ext3

2001-05-09 Thread Steve Lord
Hans Reiser wrote: > XFS used to have the performance problems that Alan described but fixed them > in > the linux port, yes? > > Hans Hmm, we do things somewhat differently on linux, but I suspect most of it is due to hardware getting faster underneath us. Steve

Re: reiserfs, xfs, ext2, ext3

2001-05-09 Thread Steve Lord
system will work best for your application is to try it out. I have found reiserfs to be very fast in some tests, especially those operating on lots of small files, but contrary to some people's belief XFS is good for a lot more than just messing with Gbyte long data files. Steve Lord

Re: 64-bit block sizes on 32-bit systems

2001-03-27 Thread Steve Lord
granularity. Volume headers and such like would be one possibility. No I don't have a magic bullet solution, but I do not think that just increasing the granularity of the addressing is the correct answer, and yes I do agree that just growing the buffer_head fields is not perfect either. Steve Lord p.s

Re: scsi vs ide performance on fsync's

2001-03-02 Thread Steve Lord
> > > On Friday, March 02, 2001 01:25:25 PM -0600 Steve Lord <[EMAIL PROTECTED]> wrote: > > >> For why ide is beating scsi in this benchmark...make sure tagged queueing > >> is on (or increase the queue length?). For the xlog.c test posted, I > >

Re: scsi vs ide performance on fsync's

2001-03-02 Thread Steve Lord
> > > On Friday, March 02, 2001 12:39:01 PM -0600 Steve Lord <[EMAIL PROTECTED]> wrote: > > [ file_fsync syncs all dirty buffers on the FS ] > > > > So it looks like fsync is going to cost more for bigger devices. Given the > > O_SYNC changes Stephen Tw

Re: scsi vs ide performance on fsync's

2001-03-02 Thread Steve Lord
fsync look more like this: down(&inode->i_sem); filemap_fdatasync(ip->i_mapping); fsync_inode_buffers(ip); filemap_fdatawait(ip->i_mapping); up(&inode->i_sem); Steve Lord
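
Reconstructed as a complete function, assuming 2.4-era interfaces (the quoted 'ip' is taken to be the inode being synced, and the return-value handling is guesswork):

    #include <linux/fs.h>
    #include <linux/pagemap.h>

    /* Sketch of the per-inode fsync sequence quoted above. */
    static int example_fsync(struct inode *inode)
    {
        int err;

        down(&inode->i_sem);                    /* serialise against writers */
        filemap_fdatasync(inode->i_mapping);    /* start writeback of dirty pages */
        err = fsync_inode_buffers(inode);       /* flush associated metadata buffers */
        filemap_fdatawait(inode->i_mapping);    /* wait for the data writes */
        up(&inode->i_sem);
        return err;
    }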

Re: File IO performance

2001-02-14 Thread Steve Lord
Marcelo Tosatti wrote: > > On Wed, 14 Feb 2001, Steve Lord wrote: > > > > > A break in the on disk mapping of data could be used to stop readahead > > I suppose, especially if getting that readahead page is going to > > involve evicting other pages. I susp

Re: File IO performance

2001-02-14 Thread Steve Lord
> > On Wed, 14 Feb 2001, wrote: > > > I have been performing some IO tests under Linux on SCSI disks. > > ext2 filesystem? > > > I noticed gaps between the commands and decided to investigate. > > I am new to the kernel and do not profess to understand what > > actually happens. My

Re: [Kiobuf-io-devel] RFC: Kernel mechanism: Compound event wait

2001-02-06 Thread Steve Lord
> > On Tue, 6 Feb 2001, Marcelo Tosatti wrote: > > > Think about a given number of pages which are physically contiguous on > > disk -- you dont need to cache the block number for each page, you > > just need to cache the physical block number of the first page of the > > "cluster". > > ranges
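
A hedged sketch of the idea (the names are invented): a run of pages that sit contiguously on disk can be described by the block number of the first page plus a count, instead of caching one block number per page.

    /* Invented illustration, not kernel code from the thread. */
    struct disk_run {
        unsigned long start_block;  /* physical block of the first page */
        unsigned int  nr_pages;     /* pages that follow contiguously on disk */
    };

    /* Block number backing the i-th page of the run; blocks_per_page is
     * derived from the block size and page size. */
    static unsigned long run_block(const struct disk_run *run,
                                   unsigned int i,
                                   unsigned int blocks_per_page)
    {
        return run->start_block + (unsigned long)i * blocks_per_page;
    }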

Re: Direct (unbuffered) I/O status ...

2001-02-02 Thread Steve Lord
> We're trying to port some code that currently runs on SGI using the IRIX > direct I/O facility. From searching the web, it appears that a similar > feature either already is or will soon be available under Linux. Could > anyone fill me in on what the status is? > > (I know about mapping

Re: XFS file system Pre-Release

2001-02-01 Thread Steve Lord
> Hi! > > > We will be demonstrating XFS as the root file system for high availability > > and clustering solutions in SGI systems at LinuxWorld New York from January > > 31 to February 2. Free XFS CDs will also be available at LinuxWorld. > > What support does XFS provide for clustering? >

Re: [Kiobuf-io-devel] RFC: Kernel mechanism: Compound event wait /notify + callback chains

2001-02-01 Thread Steve Lord
> On Thu, Feb 01, 2001 at 02:56:47PM -0600, Steve Lord wrote: > > And if you are writing to a striped volume via a filesystem which can do > > it's own I/O clustering, e.g. I throw 500 pages at LVM in one go and LVM > > is striped on 64K boundaries. > > But usually I w
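
Worked through with the numbers in the message, assuming 4 KB pages (the text does not state the page size): a 64 KB stripe chunk holds 16 pages, so a 500-page request straddles 31 full chunks plus a 4-page remainder and has to be split into roughly 32 per-chunk pieces spread across the stripe members.

    /* Back-of-the-envelope arithmetic for the striping example above. */
    #define PAGE_BYTES       4096UL                       /* assumed page size */
    #define CHUNK_BYTES      (64UL * 1024)                /* stripe boundary from the message */
    #define PAGES_PER_CHUNK  (CHUNK_BYTES / PAGE_BYTES)   /* 16 */
    /* 500 / 16 = 31 full chunks, 500 % 16 = 4 leftover pages. */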

Re: [Kiobuf-io-devel] RFC: Kernel mechanism: Compound event wait /notify + callback chains

2001-02-01 Thread Steve Lord
> In article <[EMAIL PROTECTED]> you wrote: > > Hi, > > > On Thu, Feb 01, 2001 at 05:34:49PM +, Alan Cox wrote: > > In the disk IO case, you basically don't get that (the only thing > > which comes close is raid5 parity blocks). The data which the user > > started with is the data sent out

Re: [Kiobuf-io-devel] RFC: Kernel mechanism: Compound event wait /notify + callback chains

2001-02-01 Thread Steve Lord
Christoph Hellwig wrote: > On Thu, Feb 01, 2001 at 08:14:58PM +0530, [EMAIL PROTECTED] wrote: > > > > That would require the vfs interfaces themselves (address space > > readpage/writepage ops) to take kiobufs as arguments, instead of struct > > page * . That's not the case right now, is it ? >

Re: XFS file system Pre-Release

2001-01-29 Thread Steve Lord
> >Any information on XFS interoperability with current kernel nfsd? You can NFS export XFS, I would have to say that this is not something we test regularly and you may find problems under high load. Steve > > > Pedro
