Matthias-Christian Ott wrote:
Hi!
I have a question: Why do I get such debug messages:
BUG: using smp_processor_id() in preemptible [0001] code: khelper/892
caller is _pagebuf_lookup_pages+0x11b/0x362
[] smp_processor_id+0xa3/0xb4
[] _pagebuf_lookup_pages+0x11b/0x362
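That warning means smp_processor_id() was called with preemption enabled, so the task could migrate to another CPU between reading the id and using it. The usual fix is to pin the task while the per-CPU data is in use; a minimal kernel-code sketch (not a standalone program, and use_per_cpu_data() is a hypothetical stand-in for the real access):

```c
/* Kernel-code sketch: hold the CPU while using its id. */
int cpu;

cpu = get_cpu();        /* disables preemption, returns current CPU */
use_per_cpu_data(cpu);  /* hypothetical per-CPU access */
put_cpu();              /* re-enables preemption */
```

If the caller only wants the id for a statistic and a stale value is harmless, raw_smp_processor_id() avoids the debug check instead.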
Mukker, Atul wrote:
LSI would leave no stone unturned to make the performance better for
megaraid controllers under Linux. If you have some hard data in relation to
comparison of performance for adapters from other vendors, please share with
us. We would definitely strive to better it.
The megaraid
James Bottomley wrote:
Well, the basic advice would be not to worry too much about
fragmentation from the point of view of I/O devices. They mostly all do
scatter gather (SG) onboard as an intelligent processing operation and
they're very good at it.
No one has ever really measured an effect we ca
Trever L. Adams wrote:
It is for a group. For the most part it is data access/retention. Writes
and such would be more similar to a desktop. I would use SATA if they
were (nearly) equally priced and there were awesome 1394 to SATA bridge
chips that worked well with Linux. So, right now, I am lookin
>
> On Sat, 30 Jun 2001, Steve Lord wrote:
> >
> > OK, sounds reasonable, time to go download and merge again I guess!
>
> For 2.4.7 or so, I'll make a backwards-compatibility define (ie make
> GFP_BUFFER be the same as the new GFP_NOIO, which is the historical
> Yes. 2.4.6-pre8 fixes that (not sure if it's up already).
It is up.
>
> > If the fix is to avoid page_launder in these cases then the number of
> > occurrences when an alloc_pages fails will go up.
>
> > I was attempting to come up with a way of making try_to_free_buffers
> > fail on buffe
>
> On Sat, 30 Jun 2001, Steve Lord wrote:
> >
> > It looks to me as if all memory allocations of type GFP_BUFFER which happen
> > in generic_make_request downwards can hit the same type of deadlock, so
> > bounce buffers, the request functions of the raid and l
>
>
> On Fri, 29 Jun 2001, Steve Lord wrote:
>
> >
> > Has anyone else seen a hang like this:
> >
> > bdflush()
> > flush_dirty_buffers()
> > ll_rw_block()
> > submit_bh(buffer X)
> > generic_make_request(
Has anyone else seen a hang like this:
bdflush()
  flush_dirty_buffers()
    ll_rw_block()
      submit_bh(buffer X)
        generic_make_request()
          __make_request()
            create_bounce()
              alloc_bounce_page()
                alloc_page()
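The chain deadlocks because alloc_page() on the writeout path can recurse into memory reclaim, which tries to flush more dirty buffers through the same block layer that is waiting on the allocation. The remedy being discussed is an allocation flag that forbids the allocator from starting I/O; a kernel-code sketch of the pattern (not a standalone program):

```c
/* Kernel-code sketch: allocating a bounce page on the block I/O path.
 * GFP_NOIO tells the allocator it must not start new I/O to reclaim
 * memory, which is exactly the recursion shown in the trace above. */
struct page *page;

page = alloc_page(GFP_NOIO);
if (!page)
        return NULL;    /* caller retries later; it must not block on I/O */
```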
XFS supports O_DIRECT on Linux, and has done for a while.
Steve
> At work I had to sit through a meeting where I heard
> the boss say "If Linux makes Sybase go through the page cache on
> reads, maybe we'll just have to switch to Solaris. That's
> a serious performance problem."
> All I could say
> Hi,
> So I only hope that the smart guys at SGI find a way to prepare the
> patches the way Linus loves because now the file
> "patch-2.4.5-xfs-1.0.1-core" (which contains the modifs to the kernel
> and not the new files) is about 174090 bytes which is a lot.
>
> YA
>
But that is not a pa
I am chasing around in circles with an issue where buffers pointing at
highmem pages are getting put onto the buffer free list, and later on
causing oops in ext2 when it gets assigned them for metadata via getblk.
Say one thread is performing a truncate on an inode and is currently in
truncate_i
Hmm, we 'released' version 1 of XFS against a 2.4.2 base - and packaged
it into a RedHat 7.1 Kernel RPM; we also have a development CVS tree
currently running at 2.4.4. If you are running a production server
with what you describe below, you might want to switch to one of the
other two kernels I
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> Hi,
>
> [ I'm not subscribed to linux-xfs, please cc me ]
>
> We have managed to get a Debian potato system (with the 2.4 updates from
> http://people.debian.org/~bunk/debian plus xfs-tools which we imported
> from woody) to run 2.4.3-XFS.
>
Hans Reiser wrote:
> XFS used to have the performance problems that Alan described but fixed them
> in
> the linux port, yes?
>
> Hans
Hmm, we do things somewhat differently on linux, but I suspect most of it
is due to hardware getting faster underneath us.
Steve
-
To unsubscribe from this
which filesystem
will work best for your application is to try it out. I have found reiserfs
to be very fast in some tests, especially those operating on lots of small
files, but contrary to some people's belief, XFS is good for a lot more than
just messing with Gbyte long data files.
Steve Lord
granularity. Volume headers and such like would be one
possibility.
No I don't have a magic bullet solution, but I do not think that just
increasing the granularity of the addressing is the correct answer,
and yes I do agree that just growing the buffer_head fields is not
perfect either.
Steve Lord
>
>
> On Friday, March 02, 2001 01:25:25 PM -0600 Steve Lord <[EMAIL PROTECTED]> wrote:
>
> >> For why ide is beating scsi in this benchmark...make sure tagged queueing
> >> is on (or increase the queue length?). For the xlog.c test posted, I
> >>
>
>
> On Friday, March 02, 2001 12:39:01 PM -0600 Steve Lord <[EMAIL PROTECTED]> wrote:
>
> [ file_fsync syncs all dirty buffers on the FS ]
> >
> > So it looks like fsync is going to cost more for bigger devices. Given the
> > O_SYNC changes Stephen Tweedie did, couldn't fsync look more like this:
    down(&inode->i_sem);
    filemap_fdatasync(ip->i_mapping);
    fsync_inode_buffers(ip);
    filemap_fdatawait(ip->i_mapping);
    up(&inode->i_sem);
Steve Lord
Marcelo Tosatti wrote:
>
> On Wed, 14 Feb 2001, Steve Lord wrote:
>
>
>
> > A break in the on disk mapping of data could be used to stop readahead
> > I suppose, especially if getting that readahead page is going to
> > involve evicting other pages. I susp
>
> On Wed, 14 Feb 2001, wrote:
>
> > I have been performing some IO tests under Linux on SCSI disks.
>
> ext2 filesystem?
>
> > I noticed gaps between the commands and decided to investigate.
> > I am new to the kernel and do not profess to understand what
> > actually happens. My observat
>
> On Tue, 6 Feb 2001, Marcelo Tosatti wrote:
>
> > Think about a given number of pages which are physically contiguous on
> > disk -- you dont need to cache the block number for each page, you
> > just need to cache the physical block number of the first page of the
> > "cluster".
>
> ranges
> We're trying to port some code that currently runs on SGI using the IRIX
> direct I/O facility. From searching the web, it appears that a similar
> feature either already is or will soon be available under Linux. Could
> anyone fill me in on what the status is?
>
> (I know about mapping block
> Hi!
>
> > We will be demonstrating XFS as the root file system for high availability
> > and clustering solutions in SGI systems at LinuxWorld New York from January
> > 31 to February 2. Free XFS CDs will also be available at LinuxWorld.
>
> What support does XFS provide for clustering?
>
> On Thu, Feb 01, 2001 at 02:56:47PM -0600, Steve Lord wrote:
> > And if you are writing to a striped volume via a filesystem which can do
> > its own I/O clustering, e.g. I throw 500 pages at LVM in one go and LVM
> > is striped on 64K boundaries.
>
> But usuall
> In article <[EMAIL PROTECTED]> you wrote:
> > Hi,
>
> > On Thu, Feb 01, 2001 at 05:34:49PM +, Alan Cox wrote:
> > In the disk IO case, you basically don't get that (the only thing
> > which comes close is raid5 parity blocks). The data which the user
> > started with is the data sent out o
Christoph Hellwig wrote:
> On Thu, Feb 01, 2001 at 08:14:58PM +0530, [EMAIL PROTECTED] wrote:
> >
> > That would require the vfs interfaces themselves (address space
> > readpage/writepage ops) to take kiobufs as arguments, instead of struct
> > page * . That's not the case right now, is it ?
>
>
>Any information on XFS interoperability with current kernel nfsd?
You can NFS export XFS; I would have to say that this is not something we
test regularly and you may find problems under high load.
Steve
>
>
> Pedro
>