Re: Disk schedulers

2008-02-15 Thread Jeffrey E. Hundstad

Lukas Hejtmanek,

I have to say that I've heard this subject come up before; the summary 
answer seems to be that the kernel cannot guess the wishes of the user 
100% of the time.  If you have a low-priority I/O task, use ionice(1) to 
lower that task's priority so it doesn't nuke your high-priority task.


I have no personal stake in this answer, but I can report that for my 
high-I/O tasks it works like a charm.
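
For what it's worth, ionice(1) is just a thin front end for the
ioprio_set() system call, so a program can also drop its own I/O
priority directly.  Below is a minimal sketch of my own (not from this
thread), assuming the CFQ scheduler is in use (it is the scheduler that
honors these priorities); the IOPRIO_* values mirror
include/linux/ioprio.h, since glibc provides no wrapper:

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

/* Constants mirror include/linux/ioprio.h; defined here because glibc
 * has no ioprio_set() wrapper. */
#define IOPRIO_WHO_PROCESS   1   /* "who" is a single process id */
#define IOPRIO_CLASS_IDLE    3   /* disk time only when the disk is otherwise idle */
#define IOPRIO_CLASS_SHIFT   13
#define IOPRIO_PRIO_VALUE(cls, data) (((cls) << IOPRIO_CLASS_SHIFT) | (data))

int main(void)
{
    /* pid 0 means "the calling process"; the priority data is ignored
     * for the idle class. */
    if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0,
                IOPRIO_PRIO_VALUE(IOPRIO_CLASS_IDLE, 0)) == -1) {
        perror("ioprio_set");
        return 1;
    }

    /* ... kick off the low-priority I/O work here ... */
    return 0;
}

From the shell, the equivalent is simply "ionice -c3 some_command", or
"ionice -c3 -p <pid>" for a task that is already running.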


--
Jeffrey Hundstad

Lukas Hejtmanek wrote:

On Fri, Feb 15, 2008 at 03:42:58PM +0100, Jan Engelhardt wrote:
  

Also consider
- DMA (e.g. only UDMA2 selected)
- aging disk



It's not the case.

hdparm reports that udma5 is in use, if that is reliable with libata.

The disk is 3 months old and the kernel does not report any errors.  And it
has never been any different.

--
Lukáš Hejtmánek



Re: ext3 bug

2005-02-28 Thread Jeffrey E. Hundstad
linux-2.6.10 has some bio problems that are fixed in the current 
linux-2.6.11 release candidates.  The bio problems wreaked havoc with 
XFS, and there were people reporting ext3 problems with this bug as 
well.  I'd recommend trying the latest release candidate to see if your 
problem vanishes.

--
jeffrey hundstad
jmerkey wrote:
jmerkey wrote:
Jean-Marc Valin wrote:
On Monday 28 February 2005 at 08:31 -0700, jmerkey wrote:
 

I see this problem infrequently on systems that have low memory 
conditions and with heavy swapping.  I have not seen it on 2.6.9 but I 
have seen it on 2.6.10.

My machine has 1 GB RAM and I wasn't using much of it at that time (2GB
free on the swap), so I doubt that's the problem in my case.
Jean-Marc
 

Running the ext2 recover program seems to trigger some good bugs in 
2.6.10 with ext3 -- try it.  I was doing this to test some disk tools 
and I managed to cause these errors by forcing ext2 recovery on an ext3 
fs (which is probably something to be expected; the recovery tools need 
to get synchronized -- I have not tried with mc yet).  It doesn't happen 
every time, though.

Jeff

lde causes some problems with ext3 as well.  Just caused one on 
2.6.10.  Stale or poisoned cache blocks, perhaps?

Jeff



Re: linux-2.6.11-rc3: XFS internal error xfs_da_do_buf(1) at line 2176 of file fs/xfs/xfs_da_btree.c.

2005-02-07 Thread Jeffrey E. Hundstad
Anders Saaby wrote:
Is this system running SMP or UP?
On Monday 07 February 2005 16:38, Jeffrey E. Hundstad wrote:
 

I'm sorry for this truncated report... but it's all I've got.  If you
need .config or system configuration, etc., let me know and I'll send
'em ASAP.  I don't believe this is hardware-related; ide-smart shows
everything is fine.
From dmesg:
xfs_da_do_buf: bno 8388608
dir: inode 117526252
Filesystem "hda4": XFS internal error xfs_da_do_buf(1) at line 2176 of
file fs/x
   

 

UP


linux-2.6.11-rc3: XFS internal error xfs_da_do_buf(1) at line 2176 of file fs/xfs/xfs_da_btree.c.

2005-02-07 Thread Jeffrey E. Hundstad
I'm sorry for this truncated report... but it's all I've got.  If you 
need .config or system configuration, etc., let me know and I'll send 
'em ASAP.  I don't believe this is hardware-related; ide-smart shows 
everything is fine.

From dmesg:
xfs_da_do_buf: bno 8388608
dir: inode 117526252
Filesystem "hda4": XFS internal error xfs_da_do_buf(1) at line 2176 of 
file fs/x
fs/xfs_da_btree.c.  Caller 0xc01bda27
[] xfs_da_do_buf+0x65c/0x7b0
[] xfs_da_read_buf+0x47/0x60
[] __alloc_pages+0x2ad/0x3d0
[] cache_grow+0xe2/0x150
[] xfs_da_read_buf+0x47/0x60
[] xfs_da_node_lookup_int+0x7e/0x320
[] xfs_da_node_lookup_int+0x7e/0x320
[] xfs_dir2_node_lookup+0x36/0xa0
[] xfs_dir2_lookup+0xf7/0x110
[] xfs_ichgtime+0xf8/0xfa
[] xfs_readlink+0x96/0x2c0
[] xfs_dir_lookup_int+0x38/0x100
[] xfs_iaccess+0xc2/0x1c0
[] xfs_lookup+0x4d/0x90
[] linvfs_lookup+0x4e/0x80
[] real_lookup+0xae/0xd0
[] do_lookup+0x7e/0x90
[] link_path_walk+0x722/0xd50
[] path_lookup+0x7b/0x130
[] __user_walk+0x2f/0x60
[] vfs_stat+0x1d/0x50
[] sys_stat64+0x12/0x30
[] syscall_call+0x7/0xb
xfs_da_do_buf: bno 8388608
dir: inode 117526252
Filesystem "hda4": XFS internal error xfs_da_do_buf(1) at line 2176 of 
file fs/x
fs/xfs_da_btree.c.  Caller 0xc01bda27
[] xfs_da_do_buf+0x65c/0x7b0
[] xfs_da_read_buf+0x47/0x60
[] __alloc_pages+0x2ad/0x3d0
[] cache_grow+0xe2/0x150
[] xfs_da_read_buf+0x47/0x60
[] xfs_da_node_lookup_int+0x7e/0x320
[] xfs_da_node_lookup_int+0x7e/0x320
[] xfs_dir2_node_lookup+0x36/0xa0
[] xfs_dir2_lookup+0xf7/0x110
[] xfs_ichgtime+0xf8/0xfa
[] xfs_readlink+0x96/0x2c0
[] xfs_dir_lookup_int+0x38/0x100
[] xfs_iaccess+0xc2/0x1c0
[] xfs_lookup+0x4d/0x90
[] linvfs_lookup+0x4e/0x80
[] real_lookup+0xae/0xd0
[] do_lookup+0x7e/0x90
[] link_path_walk+0x722/0xd50
[] path_lookup+0x7b/0x130
[] __user_walk+0x2f/0x60
[] vfs_stat+0x1d/0x50
[] sys_stat64+0x12/0x30
[] syscall_call+0x7/0xb

From syslog:
[xfs_da_do_buf+1628/1968] xfs_da_do_buf+0x65c/0x7b0
[xfs_da_read_buf+71/96] xfs_da_read_buf+0x47/0x60
[__alloc_pages+685/976] __alloc_pages+0x2ad/0x3d0
[cache_grow+226/336] cache_grow+0xe2/0x150
[xfs_da_read_buf+71/96] xfs_da_read_buf+0x47/0x60
[xfs_da_node_lookup_int+126/800] xfs_da_node_lookup_int+0x7e/0x320
[xfs_da_node_lookup_int+126/800] xfs_da_node_lookup_int+0x7e/0x320
[xfs_dir2_node_lookup+54/160] xfs_dir2_node_lookup+0x36/0xa0
[xfs_dir2_lookup+247/272] xfs_dir2_lookup+0xf7/0x110
[xfs_ichgtime+248/250] xfs_ichgtime+0xf8/0xfa
[xfs_readlink+150/704] xfs_readlink+0x96/0x2c0
[xfs_dir_lookup_int+56/256] xfs_dir_lookup_int+0x38/0x100
[xfs_iaccess+194/448] xfs_iaccess+0xc2/0x1c0
[xfs_lookup+77/144] xfs_lookup+0x4d/0x90
[linvfs_lookup+78/128] linvfs_lookup+0x4e/0x80
[real_lookup+174/208] real_lookup+0xae/0xd0
[do_lookup+126/144] do_lookup+0x7e/0x90
[link_path_walk+1826/3408] link_path_walk+0x722/0xd50
[path_lookup+123/304] path_lookup+0x7b/0x130
[__user_walk+47/96] __user_walk+0x2f/0x60
[vfs_stat+29/80] vfs_stat+0x1d/0x50
[sys_stat64+18/48] sys_stat64+0x12/0x30
[syscall_call+7/11] syscall_call+0x7/0xb
[xfs_da_do_buf+1628/1968] xfs_da_do_buf+0x65c/0x7b0
[xfs_da_read_buf+71/96] xfs_da_read_buf+0x47/0x60
[__alloc_pages+685/976] __alloc_pages+0x2ad/0x3d0
[cache_grow+226/336] cache_grow+0xe2/0x150
[xfs_da_read_buf+71/96] xfs_da_read_buf+0x47/0x60
[xfs_da_node_lookup_int+126/800] xfs_da_node_lookup_int+0x7e/0x320
[xfs_da_node_lookup_int+126/800] xfs_da_node_lookup_int+0x7e/0x320
[xfs_dir2_node_lookup+54/160] xfs_dir2_node_lookup+0x36/0xa0
[xfs_dir2_lookup+247/272] xfs_dir2_lookup+0xf7/0x110
[xfs_ichgtime+248/250] xfs_ichgtime+0xf8/0xfa
[xfs_readlink+150/704] xfs_readlink+0x96/0x2c0
[xfs_dir_lookup_int+56/256] xfs_dir_lookup_int+0x38/0x100
[xfs_iaccess+194/448] xfs_iaccess+0xc2/0x1c0
[xfs_lookup+77/144] xfs_lookup+0x4d/0x90
[linvfs_lookup+78/128] linvfs_lookup+0x4e/0x80
[real_lookup+174/208] real_lookup+0xae/0xd0
[do_lookup+126/144] do_lookup+0x7e/0x90
[link_path_walk+1826/3408] link_path_walk+0x722/0xd50
[path_lookup+123/304] path_lookup+0x7b/0x130
[__user_walk+47/96] __user_walk+0x2f/0x60
[vfs_stat+29/80] vfs_stat+0x1d/0x50
[sys_stat64+18/48] sys_stat64+0x12/0x30
[syscall_call+7/11] syscall_call+0x7/0xb
--
jeffrey hundstad


Re: journaled filesystems -- known instability; Was: XFS: inode with st_mode == 0

2005-01-28 Thread Jeffrey E. Hundstad
Stephen C. Tweedie wrote:
Hi,
On Fri, 2005-01-28 at 20:15, Jeffrey E. Hundstad wrote:
 

Does linux-2.6.11-rc2 have both the linux-2.6.10-ac10 fix and the xattr 
fix?
   

 

Not sure about how much of -ac went in, but it has the xattr fix.
 

 

I've had my machine, which would crash daily if not hourly, stay up for 
10 days now.  This is with the linux-2.6.10-ac10 kernel.
   

Good to know.  Are you using xattrs extensively (e.g. for ACLs, SELinux,
or Samba 4)?
--Stephen
 

On the machines that were having problems we really weren't using them 
for anything.  I think I may have been running into the BIO problem that 
was fixed in 2.6.10-ac10.



Re: journaled filesystems -- known instability; Was: XFS: inode with st_mode == 0

2005-01-28 Thread Jeffrey E. Hundstad
Stephen C. Tweedie wrote:
Hi,
On Tue, 2005-01-25 at 15:09, Jeffrey Hundstad wrote:
 

Bad things happening to journaled filesystem machines
Oops in kjournald
   

 

I wonder if there are several problems.  Alan Cox claimed that there was 
a fix in linux-2.6.10-ac10 that might alleviate the problem.
   

I'm not sure --- there are a couple of bio/bh-related fixes in that
patch, but nothing against jbd/ext3 itself. 

 

Does linux-2.6.11-rc2 have both the linux-2.6.10-ac10 fix and the xattr 
fix?
   

Not sure about how much of -ac went in, but it has the xattr fix.
--Stephen
 

I've had my machine, which would crash daily if not hourly, stay up for 
10 days now.  This is with the linux-2.6.10-ac10 kernel.  I was wondering 
if anyone else is having similar results.



Re: journaled filesystems -- known instability; Was: XFS: inode with st_mode == 0

2005-01-20 Thread Jeffrey E. Hundstad
Jeffrey Hundstad wrote:
For more on this, look up the subjects:
 Bad things happening to journaled filesystem machines
 Oops in kjournald
and posts from the author:
 Anders Saaby
I also can't keep a recent 2.6 or 2.6*-ac* kernel up more than a few 
hours on a machine under real load.  Perhaps those of us with the 
problem need to talk to the powers that be to come up with a strategy 
for making a report they can use.  My guess is we're not sending 
something that can be used.

I have found two servers in my operation that seem to do quite well on 
linux-2.6.7.  So I believe the breakage was introduced after this point 
and before linux-2.6.8.1.

...so far I'm not seeing problems after two days with 
linux-2.6.10-ac10.  I'm still crossing my fingers and knocking on wood.

--
jeffrey hundstad


Re: LVM2

2005-01-20 Thread Jeffrey E. Hundstad
XFS is an SGI project.
http://oss.sgi.com/
I've been using it for quite a while and am quite happy with it; it is 
very fast and very fault tolerant.  The only warning I'd like to give 
about it is that some Linux developers seem to have a bad taste in their 
mouths when it comes to XFS; go figure.

--
jeffrey hundstad
Trever L. Adams wrote:
It is for a group. For the most part it is data access/retention. Writes
and such would be more similar to a desktop. I would use SATA if they
were (nearly) equally priced and there were awesome 1394 to SATA bridge
chips that worked well with Linux. So, right now, I am looking at ATA to
1394.
So, to get 2TB of RAID5 you have six 500 GB disks, right? So, will this
work within one LV? Or is it 2TB of disk space total? So, are volume
groups pretty fault tolerant if you have a bunch of RAID5 LVs below
them? This is my one worry about this.
Second, you mentioned file systems. We were talking about ext3. I have
never used any others in Linux (barring ext2, minixfs, and fat). I had
heard XFS from IBM was pretty good. I would rather not use reiserfs.
Any recommendations?
Trever
P.S. Why won't an LV support over 2TB?
S.P.S. I am not really worried about the boot and programs drive. They
will be spun down most of the time I am sure.
On Thu, 2005-01-20 at 22:40 +0100, Norbert van Nobelen wrote:
 

A logical volume in LVM will not handle more than 2TB. You can tie together 
the LVs in a volume group, thus going over the 2TB limit. Choose your 
filesystem well, though; some have a 2TB limit too.

Disk size: what are you doing with it? 500GB disks are ATA (maybe SATA). ATA 
is good for low-end servers or near-line storage; SATA can be used on a par 
with SCSI (I am going to suffer for this remark).

RAID5 in software works pretty well (it survived a failed disk, and recovered 
another failing RAID within a month). Hardware is better since you aren't left 
with a boot partition that usually sits on just one disk (you can mirror 
that yourself, of course).

Regards,
Norbert van Nobelen
On Thursday 20 January 2005 20:51, you wrote:
   

I recently saw Alan Cox say on this list that LVM won't handle more than
2 terabytes. Is this LVM2 or LVM? What is the maximum amount of disk
space LVM2 (or any other RAID/MIRROR-capable technology that is in
Linus's kernel) can handle? I am talking with various people and we are
looking at Samba on Linux to do several different namespaces (obviously
one tree), most averaging about 3 terabytes, but one would have in
excess of 20 terabytes. We are looking at using 320 to 500 gigabyte
drives in these arrays. (How? IEEE-1394. Which brings a question I will
ask in a second email.)
Is RAID 5 all that bad using this software method? Is RAID 5 available?
Trever Adams
--
"They that can give up essential liberty to obtain a little temporary
safety deserve neither liberty nor safety." -- Benjamin Franklin, 1759
   

--
"Assassination is the extreme form of censorship." -- George Bernard
Shaw (1856-1950)