It is all true when the system _READS_, but not when it just _gets all
buffers and sits quiet_.
I mean that it is possible to find quiet states when the cluster can be
remounted without any harm. Even with reads, but more likely when there
has not been any activity on the file system for a while.
Of c
Not sure whether I would call that high. It is a 5G box after all.
It is always hard to figure out memory-related issues based on one output.
Call Oracle support. They will give you an oswatcher script that
monitors various stats, giving us a much better view into the running
system.
Alexei_Roudnev
While a fs node may only be reading, that does not mean the metadata
on disk is not being updated by some other node. Meaning, it needs to
take appropriate locks to perform the read... meaning it needs to have
a lock on the mastered lock resource... meaning it needs to be part of
the active cluster. The dlm
Luis.
Things can be worse because we can run 3 clusterware stacks at the same time
on the same Linux box:
- CRS (Oracle RAC)
- O2CB
- Heartbeat2
The problem is that each system makes independent decisions and an independent
selection of masters and slaves, and decides _to fence_ or _to suicide_
independently
Sunil, you do know it much better - isn't it a little strange? (so many ext3
inode cache objects, together with 3 GB of cached disk space).
But I don't see anything wrong below - big 'cached' memory means only 'you have
so much unused memory that the system cached files in it' and nothing more. Cac
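For example, to see how much of that memory is just reclaimable cache (a
sketch; works on any 2.6 box):

  # 'buffers' and 'cached' are reclaimable page cache, not leaked memory;
  # the '-/+ buffers/cache' line shows what applications actually use
  free -m
  grep -E '^(MemTotal|MemFree|Buffers|Cached)' /proc/meminfo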
Sunil,
First I want to make clear that I do think that Oracle Cluster File
System provides great value for Oracle Linux customers, and I do know that one
has to pay top dollar for equivalent functionality on other platforms, for
example Veritas Storage Foundation, and others offered
Of course it is a cluster operation.
As I said, the cluster has clients, such as the FS. A client can be in 3 modes:
- passive (no reason to fence, just don't allow it to switch modes)
- active read-only
- active write
Active write requires fencing in all cases; active read status can't transition
into active write
Alexei,
How can I relate the information in slabtop to the actual memory used by
buffers?
I see this on slabtop:
Active / Total Objects (% used): 603822 / 649643 (92.9%)
Active / Total Slabs (% used) : 47216 / 47216 (100.0%)
Active / Total Caches (% used) :
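The slab caches are accounted separately from buffers and page cache, so one
rough way to line the numbers up (a sketch, assuming a 2.6 kernel) is to
compare the slabtop totals against the Slab, Buffers and Cached fields in
/proc/meminfo:

  # slab memory vs buffers vs page cache, in kB
  grep -E '^(MemTotal|Buffers|Cached|Slab)' /proc/meminfo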
Fencing is not a fs operation but a cluster operation. The fs is only a
client
of the cluster stack.
Alexei_Roudnev wrote:
It all depends on the usage scenario.
Typical usage is, for example:
(1) Shared application home. Writes happen once a week during maintenance;
the rest of the time files are open
Did you run slabtop? It can show unreleased buffers in the system.
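For example (a sketch; -o prints a single snapshot, -s c sorts by cache size):

  # one-shot listing of the largest slab caches
  slabtop -o -s c | head -20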
- Original Message -
From: Luis Freitas
To: Alexei_Roudnev ; Brian Sieler ; ocfs2-users@oss.oracle.com
Sent: Monday, April 09, 2007 2:29 PM
Subject: Re: [Ocfs2-users] High on buffers and deep on swap
Alexe
It all depends on the usage scenario.
Typical usage is, for example:
(1) Shared application home. Writes happen once a week during maintenance;
the rest of the time files are opened for reading only. The few logfiles
can be redirected if required.
So, when the server sees a problem, it has NO pending IO for
I am saying that, by default (at least on SLES9 SP1), a system panic does not
cause an automatic reboot, so when some applications try to
fence themselves via panic, they stop the system instead of rebooting it. I saw
it on older SLES9 (and because I always set up these 2 variables, I did not
verify how OCFSv2
For io fencing to be graceful, one requires better hardware. Read: expensive.
As in, switches where one can choke off all the ios to the storage from
a specific node.
Read the following for a discussion on force umounts. In short, not
possible as yet.
http://lwn.net/Articles/192632/
Readonly
Alexei_Roudnev wrote:
Did you check the
/proc/sys/kernel/panic and /proc/sys/kernel/panic_on_oops
system variables?
No. Maybe I'm missing something here.
Are you saying that a panic/freeze/reboot is the expected/desirable
behavior? That nothing more graceful could be done, like to just
di
Alexei,
Yes, it seems to have no effect, which too is very strange. On 2.4,
vm.freepages had a very easy-to-notice effect.
There are other people on the list having problems with buffers not being
released, and some of them are forcing the kernel cache to be flushed with:
ech
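(Presumably the drop_caches interface - a guess at the cut-off command, and
note it only exists from kernel 2.6.16 onwards:)

  # flush page cache (1), dentries and inodes (2), or both (3)
  sync
  echo 3 > /proc/sys/vm/drop_caches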
Did you check the
/proc/sys/kernel/panic and /proc/sys/kernel/panic_on_oops
system variables?
- Original Message -
From: "David Miller" <[EMAIL PROTECTED]>
To:
Sent: Monday, April 02, 2007 9:01 AM
Subject: [Ocfs2-users] Catatonic nodes under SLES10
> Good afternoon all;
>
> I'm planning
One more tip - in SLES9, 'reboot on panic' is turned off by default, so to
make fencing work properly, you must turn it on (the panic and panic_on_oops
parameters under /proc/sys/kernel).
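Something like this (a sketch; the 30-second delay is just an example value):

  # reboot 30 seconds after a panic, and panic (instead of hanging) on an oops
  echo 30 > /proc/sys/kernel/panic
  echo 1 > /proc/sys/kernel/panic_on_oops
  # to make it persistent, the equivalents in /etc/sysctl.conf are:
  #   kernel.panic = 30
  #   kernel.panic_on_oops = 1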
- Original Message -
From: "Sunil Mushran" <[EMAIL PROTECTED]>
To: "enohi ibekwe" <[EMAIL PROTECTED]>
Cc:
Sent: Fr
It's not just an issue; it is really an OCFSv2 killer:
- in 99% of cases, it is not a split-brain condition but just a short (20 - 30
second) network interruption. Systems can (in most cases) see each other over
the network or through the voting disk, so they can communicate one way or
another;
- in 90% of cases syste
Did you try the vm.swappiness parameter?
(/proc/sys/vm/swappiness)
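For example (a sketch; the value 10 is only an illustration, not a
recommendation):

  # 0-100; higher values make the kernel swap application pages more eagerly
  cat /proc/sys/vm/swappiness
  echo 10 > /proc/sys/vm/swappiness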
- Original Message -
From: Brian Sieler
To: 'Luis Freitas' ; ocfs2-users@oss.oracle.com
Sent: Sunday, April 08, 2007 10:52 PM
Subject: RE: [Ocfs2-users] High on buffers and deep on swap
Luis, yes I am exp
mkdir /u01 or /u02
As in, it appears you are missing the mount directory.
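I.e., something like (the device name below is a guess - use whatever device
your mount command refers to):

  mkdir -p /u01 /u02
  mount -t ocfs2 /dev/sdb1 /u01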
Zosen Wang wrote:
I try to install a 2-node RAC on Linux 2.6.9-22.EL by using Jeffery
Hunter's paper. I am getting a problem mounting the ocfs2 file system.
The following is the mount command error output:
[EMAIL PROTECTED] ~]
May sound basic, but do you have /u01 and /u02 created?
Zosen Wang wrote:
> I try to install a 2-node RAC on Linux 2.6.9-22.EL by using Jeffery
> Hunter's paper. I am getting a problem mounting the ocfs2 file system.
> The following is the mount command error output:
> [EMAIL PROTECTED] ~]# mount -t ocf