On Tue, 2008-10-07 at 16:57 -0400, Caron, Chris wrote:
> I just have a quick question regarding the amount of storage occupied
> by the journals. Is there a common ratio to determine how much space
> will be occupied?
>
The default journal size is 128MB.
> I'm doing very rough math and experimenting
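For what it's worth, the rough arithmetic is just journals x journal size.
A minimal sketch, assuming the 128MB default and a hypothetical four-node
filesystem (the cluster name, device, and journal count below are made up):

  # one journal is needed per node that will mount the filesystem;
  # with -j 4 journals at the default -J 128 (MB) that is roughly
  # 4 x 128MB = 512MB reserved before any data blocks
  gfs_mkfs -p lock_dlm -t mycluster:mygfs -j 4 -J 128 /dev/myvg/mylv

So usable space is roughly the LV size minus (journals x journal size),
minus some additional metadata overhead.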
I just have a quick question regarding the amount of storage occupied by
the journals. Is there a common ratio to determine how much space will
be occupied?
I'm doing very rough math and experimenting with different conditions,
but I can't seem to get a common mechanism for predicting the usable
storage.
So, is clvmd running fine on both nodes? If it's not, you're not going to be
able to do anything with the shared storage. After you have verified it's
running, do a vgscan. If you get any errors, you have to fix those first
before you can move on to worrying about the LV issues.
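A minimal sketch of that check, assuming a RHEL-style init script and run
on each node (paths and names are just placeholders):

  # make sure clvmd is actually up on this node
  service clvmd status
  # rescan the volume groups; any errors here have to be fixed before
  # touching the clustered LVs
  vgscan
  vgdisplay -v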
I am far from a clus
Hi Mark,
This is just an experimental cluster for now, not production, so two nodes
are sufficient (as long as it doesn't significantly alter the setup, which
I don't think it does). I have two multi-pathed iSCSI targets for storage,
one each on two separate boxes. I have got this going previously on
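In case it helps, this is roughly how I'd sanity-check the two multipathed
iSCSI targets from each node (the portal addresses are placeholders):

  # discover and log in to both iSCSI boxes
  iscsiadm -m discovery -t sendtargets -p 192.168.1.10
  iscsiadm -m discovery -t sendtargets -p 192.168.1.11
  iscsiadm -m node --login
  # confirm both paths show up under device-mapper multipath
  multipath -ll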
And for another follow-up in the interest of full disclosure, I don't recall
the specifics, but it seems dlm_recvd was eating up all the CPU cycles on
one of the machines, and others seemed to follow suit shortly thereafter.
Sorry for the flood!
Shawn
More info:
All filesystems mounted using noatime,nodiratime,noquota.
All filesystems report the same data from gfs_tool gettune:
ilimit1 = 100
ilimit1_tries = 3
ilimit1_min = 1
ilimit2 = 500
ilimit2_tries = 10
ilimit2_min = 3
demote_secs = 300
incore_log_blocks = 1024
jindex_refresh_secs = 60
dep
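For reference, those values were pulled with gfs_tool and can be changed
the same way; a quick sketch, with the mount point and new value as
placeholders:

  # dump the current tunables for a mounted GFS filesystem
  gfs_tool gettune /mnt/gfs
  # change one of them, e.g. hold glocks longer before demoting them
  gfs_tool settune /mnt/gfs demote_secs 600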
Problem:
It seems that IO on one machine in the cluster (not always the same
machine) will hang and all processes accessing clustered LVs will
block. Other machines will follow suit shortly thereafter until the
machine that first exhibited the problem is rebooted (via fence_drac
manually). No mes
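For the record, the manual reboot is just the fence agent run by hand; an
equivalent through the cluster tooling would be something like this, with
the node name made up:

  # fence the hung node using whatever agent cluster.conf defines
  # (fence_drac in this case)
  fence_node node01.example.com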
> >
> > I can't see a way around some significant downtime even with that, and
> > there is no way they will give me the option to be down from a planned
> > perspective.
>
> So, out of nowhere straight into production, without a performance or
> user acceptance testing period? And they won't allow
Hello all,
Situation:
We have a 2-node cluster (we don't use GFS). Only one node has an
active service.
The other node is only there in case the first node crashes (the
application automatically restarts on the healthy node).
This service has a file system resource that is a mirrored LV across 2
stora
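As a sketch of the kind of mirrored LV being described (the volume group,
size, device names, and in-memory mirror log are all assumptions on my
part):

  # one mirror leg on each storage box; --mirrorlog core keeps the log
  # in memory so only the two PVs are needed
  lvcreate -m 1 --mirrorlog core -L 100G -n datalv datavg \
      /dev/mapper/mpath0 /dev/mapper/mpath1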
Hopefully the following provides some relief ...
1. Enable the lock trimming tunable. It is particularly relevant if NFS-GFS
is used by development-type workloads (editing, compiling, builds,
etc.) and/or after a filesystem backup. Unlike fast statfs, this tunable is
per-node based (you don't need
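If the tunable meant here is glock_purge (my assumption; the mount point
and percentage are just examples), setting it per node looks roughly like:

  # ask GFS to trim about 50% of unused glocks on each scan; run on
  # every node, and re-apply after each mount since settune values do
  # not persist across remounts
  gfs_tool settune /mnt/gfs glock_purge 50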
Did this patch ever get merged in?
https://www.redhat.com/archives/linux-cluster/2008-August/msg00026.html
--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
Jakub Suchy wrote:
Leo Pleiman wrote:
The kbase article can be found at http://kbase.redhat.com/faq/FAQ_51_11755.shtm
It has a link to Cisco's web site enumerating 5 possible solutions.
http://www.cisco.com/en/US/products/hw/switches/ps708/products_tech_note09186a008059a9df.shtml
Hello,
I am