No change in behavior even on low-memory systems. I confirmed it
running on a 1 Gig machine.
Thanks
--Chakri
On 9/28/07, Chakri n <[EMAIL PROTECTED]> wrote:
> Here is a snapshot of the vmstats when the problem happened. I believe
> this could help a little.
>
>
Here is a snapshot of the vmstats when the problem happened. I believe
this could help a little.
crash> kmem -V
NR_FREE_PAGES: 680853
NR_INACTIVE: 95380
NR_ACTIVE: 26891
NR_ANON_PAGES: 2507
NR_FILE_MAPPED: 1832
NR_FILE_PAGES: 119779
NR_FILE_DIR
It works on 2.6.23-rc8-mm2 without any problems.
The "dd" process does not hang any more.
Thanks for all the help.
Cheers
--Chakri
On 9/28/07, Peter Zijlstra <[EMAIL PROTECTED]> wrote:
> [ and one copy for the list too ]
>
> On Fri, 2007-09-28 at 02:20 -0700, Chakr
It's 2.6.23-rc6.
Thanks
--Chakri
On 9/28/07, Peter Zijlstra <[EMAIL PROTECTED]> wrote:
> On Fri, 2007-09-28 at 02:01 -0700, Chakri n wrote:
> > Thanks for explaining the adaptive logic.
> >
> > > However other devices will at that moment try to maintain a li
e calculations where it does not fall
back to sync mode.
Thanks
--Chakri
On 9/28/07, Peter Zijlstra <[EMAIL PROTECTED]> wrote:
> [ please don't top-post! ]
>
> On Fri, 2007-09-28 at 01:27 -0700, Chakri n wrote:
>
> > On 9/27/07, Peter Zijlstra <[EMAIL PROTECTED]>
Thanks.
The BDI dirty limits sound like a good idea.
Is there already a patch for this, which I could try?
I believe it works like this:
each BDI will have a limit. If the dirty_thresh exceeds the limit,
all I/O on the block device will be synchronous.
So, if I have sda & an NFS mount, th
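A rough user-space sketch of the per-BDI limit idea described above (this is
only a toy model of my understanding, not the actual kernel code; the device
names and page counts are made up):

/* Toy model: each backing device (BDI) gets its own share of the global
 * dirty threshold; a writer against a device that has exceeded its share
 * is throttled (forced into synchronous writeback) instead of dirtying
 * more pages. */
#include <stdio.h>

struct bdi {
    const char   *name;
    unsigned long dirty_pages;   /* pages currently dirty against this device */
    unsigned long dirty_thresh;  /* this device's share of the dirty limit    */
};

/* Decide whether a new write may simply dirty page cache, or must block
 * and write back (the "synchronous" behaviour described above). */
static int must_throttle(const struct bdi *dev)
{
    return dev->dirty_pages >= dev->dirty_thresh;
}

int main(void)
{
    struct bdi sda = { "sda", 1000, 40000 };  /* local disk: under its limit    */
    struct bdi nfs = { "nfs", 55000, 20000 }; /* hung NFS mount: over its limit */
    struct bdi *devs[] = { &sda, &nfs };

    for (int i = 0; i < 2; i++)
        printf("%s: %s\n", devs[i]->name,
               must_throttle(devs[i]) ? "writers throttled (sync writeback)"
                                      : "async buffered writes allowed");
    /* The point: only writers to the hung NFS BDI get throttled; a dd to
     * sda keeps its own headroom and is not blocked by the NFS server. */
    return 0;
}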
Hi,
In my testing, an unresponsive file system can hang all I/O in the system.
This is not seen in 2.4.
I started 20 threads doing I/O on an NFS share. They are just doing 4K
writes in a loop.
Now I stop the NFS server hosting the NFS share and start a
"dd" process to write a file on a local EXT3 file s
On 9/21/07, Trond Myklebust <[EMAIL PROTECTED]> wrote:
> No. The requirement for 'hard' mounts is not that the server be up all
> the time. The server can go up and down as it pleases: the client can
> happily recover from that.
>
> The requirement is rather that nobody remove it permanently before
stion logic?
Thanks
--Chakri
On 9/21/07, Trond Myklebust <[EMAIL PROTECTED]> wrote:
> On Fri, 2007-09-21 at 09:20 -0700, Chakri n wrote:
> > Thanks.
> >
> > I was using flock (BSD locking) and I think the problem should be
> > solved if I move my application to u
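For reference, a minimal sketch of what the switch to POSIX locks looks like
(fcntl() byte-range locking, which the NFS client can hand off to the
server's lock manager; the path is only an example):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/nfs/shared.lock", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct flock fl = {
        .l_type   = F_WRLCK,    /* exclusive write lock    */
        .l_whence = SEEK_SET,
        .l_start  = 0,
        .l_len    = 0,          /* 0 = lock the whole file */
    };

    if (fcntl(fd, F_SETLKW, &fl) < 0) {  /* block until the lock is granted */
        perror("fcntl(F_SETLKW)");
        return 1;
    }

    /* ... critical section: this lock is visible to other clients ... */

    fl.l_type = F_UNLCK;
    fcntl(fd, F_SETLK, &fl);             /* release the lock */
    close(fd);
    return 0;
}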
<[EMAIL PROTECTED]> wrote:
> On Thu, 2007-09-20 at 20:12 -0700, Chakri n wrote:
> > Thanks Trond, for clarifying this for me.
> >
> > I have seen similar behavior when a remote NFS server is not
> > available. Many processes end up waiting in nfs_release_page
ile system locking, so that a user can use
"mount --bind" on the local host and an NFS mount on remote nodes, but file &
record locking will be consistent between both nodes?
Thanks
--Chakri
On 9/20/07, Trond Myklebust <[EMAIL PROTECTED]> wrote:
> On Thu, 2007-09-20 at 17:2
Hi,
I am testing NFS on loopback, and it locks up the entire system with the
2.6.23-rc6 kernel.
I have mounted a local ext3 partition using loopback NFS (version 3)
and started my test program. The test program forks 20 threads,
allocates 10MB for each thread, and writes & reads a file on the loopback
NFS mount. Afte
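A simplified sketch of that test program (mount point, file names and loop
count are assumptions; the real program may differ; build with -lpthread):

/* 20 threads, each with a 10 MB buffer, writing to and reading back its own
 * file on the loopback NFS mount. */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define MNT      "/mnt/loopnfs"        /* assumed loopback NFS mount */
#define NTHREADS 20
#define BUFSZ    (10 * 1024 * 1024)    /* 10 MB per thread */

static void *worker(void *arg)
{
    long id = (long)arg;
    char path[256];
    char *buf = malloc(BUFSZ);

    if (!buf) return NULL;
    memset(buf, (int)id, BUFSZ);
    snprintf(path, sizeof(path), MNT "/thread-%ld", id);

    for (int iter = 0; iter < 1000; iter++) {
        int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); break; }
        if (write(fd, buf, BUFSZ) < 0) perror("write");
        lseek(fd, 0, SEEK_SET);
        if (read(fd, buf, BUFSZ) < 0) perror("read");
        close(fd);
    }
    free(buf);
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];

    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}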
To add to the pain, lsof or fuser hang on unresponsive shares.
I wrote my own wrapper to go through the "/proc/" file tables, find
any processes using the unresponsive mounts, and kill those
processes. This works well.
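Roughly what the wrapper does (a simplified sketch, not the exact code; the
mount point is an assumption, and the kill() is commented out so it only
reports). Reading the /proc/<pid>/fd symlinks with readlink() avoids
stat()ing files on the dead mount, which is presumably why this approach
does not hang the way lsof/fuser do:

#include <dirent.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#define STALE_MNT "/mnt/nfs"   /* assumed unresponsive mount */

int main(void)
{
    DIR *proc = opendir("/proc");
    struct dirent *de;

    if (!proc) { perror("opendir /proc"); return 1; }

    while ((de = readdir(proc)) != NULL) {
        pid_t pid = (pid_t)atoi(de->d_name);
        if (pid <= 0)
            continue;                        /* not a process directory */

        char fddir[64];
        snprintf(fddir, sizeof(fddir), "/proc/%d/fd", (int)pid);

        DIR *fds = opendir(fddir);
        if (!fds)
            continue;                        /* process exited or no access */

        struct dirent *fe;
        while ((fe = readdir(fds)) != NULL) {
            char fdlink[320], target[4096];
            ssize_t n;

            snprintf(fdlink, sizeof(fdlink), "%s/%s", fddir, fe->d_name);
            n = readlink(fdlink, target, sizeof(target) - 1);
            if (n <= 0)
                continue;
            target[n] = '\0';

            if (strncmp(target, STALE_MNT, strlen(STALE_MNT)) == 0) {
                printf("pid %d uses %s\n", (int)pid, target);
                /* kill(pid, SIGKILL);  <- what the wrapper actually does */
                break;
            }
        }
        closedir(fds);
    }
    closedir(proc);
    return 0;
}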
Also, it brings up another point: if the unresponsiveness problem cannot be
fixed for
The patches do not help. The system still panicked in the same place.
I am still trying to correlate the problem & fix to a specific patch.
Regards
--Chakri
On 8/5/07, Chakri n <[EMAIL PROTECTED]> wrote:
> Hi Malte,
>
> Thanks for the information.
>
> Based on your suggestion
http://linux-nfs.org/Linux-2.6.x/2.6.20-rc7/linux-2.6.20-008-fix_readdir_positive_dentry.dif
The systems have been running for the past 7 hours without any issues.
Hopefully this fixes it.
Regards
--Chakri
On 8/4/07, Malte Schröder <[EMAIL PROTECTED]> wrote:
> On Thu, 2 Aug 2007 14:27:04 -0700
Hi,
We are seeing this problem while unmounting file systems. It happens
once in a while.
I am able to grab the trace and core from linux-2.6.18-1.8.el5, but I
have observed the same problem with the linux-2.6.20.1 kernel.
Has this problem been fixed in a recent kernel?
BUG: Dentry f7498f70{i=12803e,n=clie