Could you please share the parameters you tuned, and perhaps a brief
explanation of your thinking? I have hideously slow backups, too, and
haven't been successful in improving them through tuning.
--
Brandon
On Wed, Nov 19, 2008 at 3:56 PM, Tom Lanyon <[EMAIL PROTECTED]> wrote:
> On 19/11/2008, a
lable) and run a traditional backup (using, say, rdiff-backup) in a
shorter time than I can now, where I'm running it straight off a live GFS
volume?
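A dry-run sketch of what that could look like, assuming the replica LUN is a point-in-time copy no cluster node has mounted (device path, mount point, and backup destination are hypothetical; `lockproto=lock_nolock` mounts a GFS volume without cluster locking and is only safe on such an isolated copy):

```shell
#!/bin/sh
# Dry-run sketch: echo the commands instead of executing them.
DRYRUN=echo
REPLICA=/dev/mapper/replica-lun   # hypothetical replica device
MNT=/mnt/backup-snapshot          # hypothetical mount point
# Mount the replica read-only without cluster locking; safe only
# because no cluster node can see this copy of the volume:
$DRYRUN mount -t gfs -o ro,lockproto=lock_nolock "$REPLICA" "$MNT"
$DRYRUN rdiff-backup "$MNT" /backup/volume1
$DRYRUN umount "$MNT"
```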
--
Brandon
On Thu, Oct 16, 2008 at 10:50 AM, Wendy Cheng <[EMAIL PROTECTED]>wrote:
> Brandon Young wrote:
>
>> Hi all,
Hi all,
I currently have a GFS deployment consisting of eight servers and several
GFS volumes. One of my GFS servers is a dedicated backup server with a
second replica SAN attached to it through a second HBA. My approach to
backups has been with tools such as rsync and rdiff-backup, run on a nightly
schedule.
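A dry-run sketch of that kind of nightly wrapper (the paths and the four-week retention window are examples, not the poster's actual configuration):

```shell
#!/bin/sh
# Dry-run sketch: echo the commands instead of executing them.
DRYRUN=echo
SRC=/mnt/gfs/volume1    # hypothetical GFS mount
DEST=/backup/volume1    # hypothetical backup destination
$DRYRUN rdiff-backup "$SRC" "$DEST"
# Prune increments older than four weeks:
$DRYRUN rdiff-backup --remove-older-than 4W "$DEST"
```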
I have occasionally run into this problem, too. I have found that sometimes
I can work around the problem by chkconfig'ing clvmd, cman, and rgmanager off,
rebooting, then manually starting cman, rgmanager, clvmd (in that order).
Usually, after that, I am able to fence the node(s) and they will rejoin
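The workaround described above, as a dry-run sketch (service names are the standard RHCS init scripts mentioned in the thread; exact names may vary by release):

```shell
#!/bin/sh
# Dry-run sketch: echo the commands instead of executing them.
DRYRUN=echo
# Disable cluster services so the node boots without trying to join:
$DRYRUN chkconfig cman off
$DRYRUN chkconfig clvmd off
$DRYRUN chkconfig rgmanager off
$DRYRUN reboot
# After the reboot, start them by hand in the order given above:
$DRYRUN service cman start
$DRYRUN service rgmanager start
$DRYRUN service clvmd start
```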
I use this method of fencing on my cluster. With RHCS, there is a supplied
fencing script for DRAC cards. The trick is you have to enable telnet on
the DRAC cards for the supplied script to work (you can either do this
through the web interface, or install the Dell Management Software and issue
s
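For reference, a sketch of how the supplied fence_drac agent is typically wired into cluster.conf once telnet is enabled on the card (all names, addresses, and credentials here are hypothetical placeholders, not values from the thread):

```xml
<!-- Hypothetical cluster.conf excerpt -->
<fencedevices>
  <fencedevice agent="fence_drac" name="drac-node1"
               ipaddr="10.0.0.101" login="root" passwd="secret"/>
</fencedevices>
<!-- ...and inside the matching <clusternode> entry: -->
<fence>
  <method name="1">
    <device name="drac-node1"/>
  </method>
</fence>
```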
Yeah, similar question to the first responder ... Is your intent to have
shared disk space between all the ESX servers? To support live migrations,
etc? If so, then ESX server has a built-in filesystem called vmfs, which
can be shared by all the servers in the farm to store VM images, etc. We
us
In my GFS cluster, I use DRAC cards as the fencing device for each node.
Yesterday, I had a situation where the DRAC card on a particular node had
failed, and would not allow remote logins, etc, but it still returned
pings. I don't know how long the card had been dead, and I only noticed
because I
Unplug the heartbeat cable.
On Wed, May 7, 2008 at 6:19 AM, Chris Picton <[EMAIL PROTECTED]> wrote:
> On Tue, 06 May 2008 17:35:04 -0400, Lon Hohberger wrote:
>
> > On Tue, 2008-05-06 at 14:37 -0600, Gary Romo wrote:
> >>
> >> Is there a command that you can run to test/verify that fencing is
>
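Besides pulling the cable, RHCS provides the fence_node command, which asks the fence daemon to fence the named node using whatever agent cluster.conf configures for it. A dry-run sketch (node name hypothetical); note the target really is fenced, so only test against a node you can afford to lose:

```shell
#!/bin/sh
# Dry-run sketch: echo the command instead of executing it.
DRYRUN=echo
# Fences the named node via its configured agent -- this is a real
# fence, not a simulation, so the victim node will be power-cycled.
$DRYRUN fence_node node2.example.com
```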
Run 'partprobe' on each cluster node, and try restarting clvmd on each node.
Note that you should unmount the filesystem before restarting clvmd ...
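As a dry-run sketch, the per-node sequence would look something like this (the GFS mount point is a hypothetical placeholder):

```shell
#!/bin/sh
# Dry-run sketch: echo the commands instead of executing them.
DRYRUN=echo
# Re-read the partition tables so the kernel sees the new layout:
$DRYRUN partprobe
# Unmount GFS filesystems before bouncing clvmd:
$DRYRUN umount /mnt/gfs/volume1   # hypothetical mount point
$DRYRUN service clvmd restart
```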
On Tue, May 6, 2008 at 8:40 AM, Kumar, T Santhosh (TCS) <[EMAIL PROTECTED]>
wrote:
>
>
> Linux hostname 2.6.18-53.1.14.el5 #1 SMP Tue Feb 19 07:18:46 ES