e had. I know this isn't the
best solution for our needs, but given the lack of funding, this seemed like a
good idea at the time.
Thanks for the help!
Randy
On 02/14/2011 09:03 AM, Digimer wrote:
On 02/14/2011 08:53 AM, Randy Brown wrote:
Hello,
I am running a 2 node cluster being used as a NAS head for a Lefthand
Networks iSCSI SAN to provide NFS mounts out to my network. Things have
been OK for a while, but I recently lost one of the nodes as a result of
a patching problem. In an effort to recreate the failed node, I imaged
I am using a two node cluster as the front end to our iscsi san. The
cluster serves NFS file systems to the network. I'm seeing load averages on
the active cluster node of 7 or 8 at times, which brings performance to a crawl.
Here is a sample of top on the active node:
top - 10:46:21 up 1 day, 2:48,
Any thoughts?
Thanks,
Randy
Randy Brown wrote:
Great. Thanks. I'll give that a shot. I just need to find a time when I can
umount and mount that file system to see if it worked.
Randy
Corey Kovacs wrote:
I had a similar problem. Reducing my rsize back to 8k from 32k
fixed it.
Problem root was
Regards,
Corey
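Since rsize/wsize are just NFS mount options, the change only needs an edited
mount line (or fstab entry) and a remount. A minimal sketch, with a made-up
server name and export path:

  # reduce the NFS read/write block size back to 8k
  mount -t nfs -o rw,rsize=8192,wsize=8192 nfs-server:/export /mnt/data

That also matches Randy's note above about needing to umount and mount the
file system again for the new sizes to take effect.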
We are seeing a very strange issue with KDE with all kernels later than
2.6.18-53.1.21 on our cluster nodes. When users attempt to log into the
machine using KDE, it never makes it to the KDE splash screen. The
machine just hangs on a solid blue screen. I can then restart the
Xserver and log
> about binding failover resources to interfaces). I've not seen a response
> yet, so I'm most curious to see if you'll get any.
>
> Gordan
>
> On Wed, 12 Mar 2008, Randy Brown wrote:
>
> > I am using a two node cluster with Centos 5 with up to date patche
"I have an interface with an IP of 10.10.20.101" should be "I have an
interface with an IP of 140.90.20.101"
Randy Brown wrote:
For example: I have an interface with an IP of 10.10.20.101 and
created a resource for IP address 140.90.20.100. Then a service
called nf
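For illustration, the IP resource described here would be a single line in
cluster.conf, something like the sketch below (rgmanager's ip.sh binds the
address to whichever local interface sits on the matching subnet):

  <ip address="140.90.20.100" monitor_link="1"/>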
Subject: Re: [Linux-cluster] Two node NFS cluster serving multiple networks
Sounds very similar to what I'm trying to achieve (see the other thread
about binding failover resources to interfaces). I've not seen a response
yet, so I'm most curious to see if you'll get any.
I am using a two node cluster with Centos 5 with up to date patches. We
have three different networks to which I would like to serve nfs mounts
from this cluster. Can this even be done? I have interfaces available
for each network on each node.
Thanks
Randy
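One plausible way to do it, sketched below with made-up addresses, is to give
the failover service one floating IP per network and let rgmanager bind each
address to the interface on the matching subnet:

  <service name="nfs-multi" autostart="1">
    <ip address="10.10.20.100" monitor_link="1"/>
    <ip address="172.16.5.100" monitor_link="1"/>
    <ip address="192.168.9.100" monitor_link="1"/>
    <!-- fs and nfsexport resources would nest here -->
  </service>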
ng I did
cleared it without me realizing it. As long as it's working. :)
I'm still pretty "green" when it comes to clustering and SANs and
sincerely appreciate the quality responses and willingness to help on
this list.
Randy
James Parsons wrote:
Randy Brown wrote:
I
I forgot: I'm using Centos 5 with the latest patches and kernel.
I am using an APC Masterswitch Plus as my fencing device. I am seeing
this in my logs now when fencing occurs:
Dec 31 11:36:26 nfs1-cluster fenced[3848]: agent "fence_apc" reports:
Traceback (most recent call last):
  File "/sbin/fence_apc", line 829, in ?
    main()
  File "/sbin/fence_apc",
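The agent can also be run by hand to reproduce the traceback outside of
fenced; a sketch, with placeholder address, credentials and outlet number:

  # query the outlet state directly through the fence agent
  fence_apc -a 10.0.0.50 -l apc -p apc -n 1 -o status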
/fs/rfcdata frisky.nws.noaa.gov(rw,wdelay,no_root_squash,no_subtree_check,anonuid=65534,anongid=65534)
Eric Kerin wrote:
On Mon, 2007-12-31 at 09:56 -0500, Randy Brown wrote:
The umask for this user is 022. I believe I have the export configured
correctly. Here is th
I am using Centos 5 Cluster suite for an NFS cluster. I am able to
mount the filesystems exported by the cluster to other machines in our
network. The problem arises when I try to copy or move files to this
file system as a non-root user.
Here is the result of trying to copy a file:
[cliffor
Correction: "but the nfs service will failover" should read "but the
nfs service will not failover" Sorry.
Randy
I just ran `yum update` on one of the nodes in my two node cluster and
now the nfs service won't relocate to the updated node. Here are the
versions of relevant packages on each node:
Node 1 (updated node)
[EMAIL PROTECTED] ~]# rpm -qa |grep -e cman -e lvm -e gfs -e rgmanager
-e kernel
kmod-
I am running a two node cluster using Centos 5 that is basically being
used as a NAS head for our iscsi based storage. Here are the related
rpms and their versions I am using:
kmod-gfs-0.1.16-5.2.6.18_8.1.14.el5
kmod-gfs-0.1.16-6.2.6.18_8.1.15.el5
system-config-lvm-1.0.22-1.0.el5
cman-2.0.64-1.
Randy Brown wrote:
I have tried both suggestions. I added the line:
node.session.initial_login_retry_max = 8
I even made the value as high as 20 with no change during boot up. Then
I added NETWORKDELAY=20 to /etc/sysconfig/network and I still see:
iscsiadm: Could not
On Mon, 22 Oct 2007, Randy Brown wrote:
Thanks, I'll try upping the retries. I am assuming this is the same
thing as increasing the time value here:
No. Timeouts and retries are separate settings. The problem is usually
that the iSCSI subsystem tries to access the SAN b
There isn't a line like that in my iscsi.conf file. Can I simply add it?
Randy
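For reference, in open-iscsi style configs the retry count and the timeouts
are separate knobs, roughly like the sketch below (the exact file name and
defaults vary by version):

  # /etc/iscsi/iscsid.conf
  node.conn[0].timeo.login_timeout = 15        # how long each login attempt waits
  node.session.initial_login_retry_max = 8     # how many times the login is retried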
t the priorities back too. That was just for testing purposes.
Randy
[EMAIL PROTECTED] wrote:
On Mon, 22 Oct 2007, Randy Brown wrote:
If I boot both nodes with none of the clustering components (cman,
clvmd, gfs, or rgmanager) starting at boot, I can restart the iscsi
service then start cman, clvmd
I have a two node cluster configured which is going to be used as a NAS
head for our iscsi based storage. I think I have it working, for the
most part (still some fencing issues, but that will be in another
post). I am using Centos 5 and its associated clustering software.
If I boot both no
In a two node cluster I believe you want to use the two_node settings
in cluster.conf.
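That is, presumably the standard cman attributes:

  <cman two_node="1" expected_votes="1"/>

With two_node="1" the cluster stays quorate on a single vote, so one node can
keep running when its peer is down.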
Randy
Celso K. Webber wrote:
Hello all,
Sorry for asking this basic question, I've checked the Cluster FAQ and
did not find anything related to this.
I have set up a 2-node Cluster, with quorum/qdiskd configured, and
everyt
Do you have a host-based firewall running?
Randy
Jordi Prats wrote:
Hi,
I'm trying to start a two node cluster, but I can't start cman. If I
run the join command manually it returns 141 (I don't know what this
means). How can I find out why it does not start? /var/log/messages
does not give
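On the firewall angle: cman/openais talks multicast UDP on ports 5404 and
5405, and a host firewall that drops those will keep a node from joining. A
quick sketch for testing (other cluster daemons need their own ports as well):

  # open the openais/cman ports, or stop iptables entirely while testing
  iptables -I INPUT -p udp --dport 5404:5405 -j ACCEPT
  service iptables status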
ing
to get nfs file systems mounted on a remote host. :)
Thanks,
Randy
Lon Hohberger wrote:
On Wed, Aug 22, 2007 at 11:37:11AM -0400, Lon Hohberger wrote:
On Mon, Aug 20, 2007 at 04:39:05PM -0400, Randy Brown wrote:
Right. That's the way I understood it to be. Using ext3 wou
m between nodes.
From: [EMAIL PROTECTED] on behalf of Randy Brown
Sent: Mon 8/20/2007 16:39
To: linux clustering
Subject: Re: [Linux-cluster] Please correct me if I'm wrong, but...
Right. That's the way I understood it to be. Using ext3 would require us to
umount and remount the
users.
Randy
Lon Hohberger wrote:
On Mon, Aug 20, 2007 at 03:26:08PM -0400, Randy Brown wrote:
in order to configure a two-node high availability NFS failover cluster,
I need to use GFS, correct?
You can use EXT3; it's just that the file system can only be mounted on
one node at a time.
Wit
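To make that concrete, a failover NFS service wrapped around an EXT3 volume
looks roughly like the cluster.conf sketch below (device, paths, addresses
and names are all placeholders):

  <service autostart="1" name="nfssvc">
    <ip address="140.90.20.100" monitor_link="1"/>
    <fs name="nfsdata" device="/dev/sdb1" fstype="ext3"
        mountpoint="/export/data" force_unmount="1">
      <nfsexport name="exports">
        <nfsclient name="lan" target="10.10.20.0/24" options="rw"/>
      </nfsexport>
    </fs>
  </service>

Because EXT3 is only ever mounted on one node, rgmanager handles the umount
on the old node and the mount on the new one as part of relocating the service.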
s was dropped with RHEL4 Update 4 and not restored with Update 5. This would have made this build of this configuration so much easier. Oh well.
From: [EMAIL PROTECTED] on behalf of Randy Brown
Sent: Mon 8/20/2007 15:26
To: linux clustering
Subject: [Linux-clus
in order to configure a two-node high availability NFS failover cluster,
I need to use GFS, correct?
I am wanting to configure two machines in a cluster and use them as a
NAS head for an ISCSI based storage unit providing NFS file systems to
the machines on our network. I'd like to have the
I am trying to configure two matching servers in a high availability
cluster to work as a NAS head for NFS mounts from our ISCSI based
network storage. Has anyone done this or is anyone doing this? I am
struggling with getting the NFS exports configured so machines outside
the cluster can mou