Hi,
On 20/11/17 04:23, 성백재 wrote:
Hello, List.
We are developing storage systems using 10 NVMes (current test set).
Using MD RAID10 + CLVM/GFS2 over four hosts achieves 22 GB/s (max, on
reads).
However, a GFS2 DLM problem occurred. The problem is that each host
frequently reports “dlm:
Hi,
On 29/08/17 12:26, Gionatan Danti wrote:
On 29-08-2017 13:13 Steven Whitehouse wrote:
Whatever kind of storage is being used with GFS2, it needs to act as
if there was no cache or as if there is a common cache between all
nodes - what we want to avoid is caches which are specific
Hi,
On 29/08/17 12:07, Gionatan Danti wrote:
On 29-08-2017 12:59 Steven Whitehouse wrote:
Yes, it definitely needs to be set to cache=none mode. Barrier passing
is only one issue, and as you say it is down to the cache coherency,
since the block layer is not aware of the caching
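As a hedged illustration of the cache=none advice above (device path and VM parameters are hypothetical), this is how the setting appears on a QEMU command line:

```shell
# Sketch: cache=none makes QEMU open the shared block device with O_DIRECT,
# bypassing the host page cache, so no node keeps stale private cached data.
# Device path, memory size, and disk bus are placeholder values.
qemu-system-x86_64 -m 2048 \
    -drive file=/dev/cluster_vg/vm01_disk,format=raw,cache=none,if=virtio
```

The same effect is reached in libvirt by setting the disk driver's cache attribute to none.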
On 29/08/17 11:54, Gionatan Danti wrote:
Hi Steven,
On 29-08-2017 11:45 Steven Whitehouse wrote:
Yes, there is some additional overhead due to the clustering. You can
however usually organise things so that the overheads are minimised as
you mentioned above by being careful about
Hi,
On 26/08/17 07:11, Gionatan Danti wrote:
Hi list,
I am evaluating how to refresh my "standard" cluster configuration and
GFS2 clearly is on the table ;)
GOAL: to have a 2-node HA cluster running DRBD (active/active), GFS2
(to store disk image) and KVM (as hypervisor). The cluster had
Hi,
On 19/07/17 00:39, Digimer wrote:
On 2017-07-18 07:25 PM, Kristián Feldsam wrote:
Hello, today I see GFS2 errors in the log and nothing about them is on the net,
so I am writing to this mailing list.
node2 19.07.2017 01:11:55 kernel kernerr vmscan: shrink_slab:
Hi,
On 11/04/16 13:29, Daniel Dehennin wrote:
Hello,
My OpenNebula cluster has a 4TB GFS2 logical volume supported by two
physical volumes (2TB each).
The result is that nearly all I/O goes to a single PV.
Now I'm looking for a way to convert the linear LV to a striped one and
only found the
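A hedged sketch of the linear-to-striped conversion being asked about (VG/LV names are hypothetical; direct conversion needs a reasonably recent LVM2 and may take several lvconvert invocations, older versions need a copy instead):

```shell
# Recent LVM2: restripe the linear LV in place across both PVs.
lvconvert --type striped --stripes 2 --stripesize 64k vg_data/lv_gfs2

# Older LVM2 fallback: create a new striped LV, copy the data across
# (with the filesystem unmounted), then swap the names.
lvcreate -i 2 -I 64 -L 4T -n lv_gfs2_new vg_data
dd if=/dev/vg_data/lv_gfs2 of=/dev/vg_data/lv_gfs2_new bs=4M
lvrename vg_data lv_gfs2 lv_gfs2_old
lvrename vg_data lv_gfs2_new lv_gfs2
```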
Hi,
On 08/04/16 10:21, Daniel Dehennin wrote:
Hello,
On our virtualisation infrastructure we have a 4TB GFS2 over a SAN.
For the last week or two we have been facing read I/O issues: 5k or 6k IOPS with
an average block size of 5kB.
I'm looking for the possibilities and didn't find anything yet, so
Hi,
On 15/02/16 09:20, Daniel Dehennin wrote:
Hello,
We have been running into trouble for several days on our GFS2 (log attached):
- we ran the FS for some time without trouble (since 2014-11-03)
- the FS was grown from 3TB to 4TB nearly 6 months ago
- it seems to happen only on one node “nebula3”
-
Hi,
On 13/11/15 08:13, Milos Jakubicek wrote:
Hi,
can somebody from the devel team at RedHat share some thoughts on what
are actually the development plans for GFS2 in the future?
I mean: will it mainly be small and big performance improvements like
in the past couple of years, or are
Hi,
On 10/11/15 04:56, Dil Lee wrote:
Hi,
I have a CentOS 6.5 cluster that is connected to a Fibre Channel SAN in star
topology. All nodes/SAN storages have a single-pair fibre connection and
no multipathing. The possibility of a hardware issue has been eliminated
because read/write between all other
Hi,
On 30/06/15 20:37, Daniel Dehennin wrote:
Hello,
We are experiencing slow VMs on our OpenNebula architecture:
- two Dell PowerEdge M620
+ Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz
+ 96GB RAM
+ 2x146GB SAS drives
- 2TB SAN LUN to store qcow2 images with GFS2 over cLVM
We made
Hi,
On 19/02/15 13:50, Megan . wrote:
Good Morning!
We have an 11 node Centos 6.6 cluster configuration. We are using it
to share SAN mounts between servers (GFS2 via iscsi with LVM). We
have a requirement for 33 GFS2 mounts shared on the cluster (crazy,
I know). Are there any
Hi,
How did you generate the image in the first place? I don't know if we've
ever really tested GFS2 with a qcow device underneath it - normally even
in virt clusters the storage for GFS2 would be a real shared block
device. Was this perhaps just a single node?
Have you checked the image
..., then you could run fsck on a snapshot in order to avoid so much downtime.
An odd file size should not, in and of itself cause any problems with
removing a file, so it will only be an issue if other on-disk metadata
is incorrect,
Steve.
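The snapshot approach suggested above can be sketched as follows (VG, LV, and snapshot names are hypothetical, as is the snapshot size, which only needs to absorb writes made while the check runs):

```shell
# Snapshot the LV backing the filesystem, then run a read-only fsck
# against the snapshot instead of unmounting the live filesystem.
lvcreate --snapshot --size 10G --name gfs2_snap /dev/vg_san/lv_gfs2
fsck.gfs2 -n /dev/vg_san/gfs2_snap    # -n: answer 'no' to all, check only
lvremove -f /dev/vg_san/gfs2_snap     # discard the snapshot afterwards
```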
On Mon, Dec 15, 2014 at 09:23:47AM +, Steven
without unmounting. So the short
answer to your question is no,
Steve.
On Mon, Dec 15, 2014 at 09:59:02AM +, Steven Whitehouse wrote:
Hi,
On 15/12/14 09:54, Vladimir Melnik wrote:
Hi!
The qcow2 isn't underneath it; we can assume it's an ordinary file on a
filesystem. Its size
Hi,
On 24/04/14 17:29, Alan Brown wrote:
On 30/03/14 12:34, Steven Whitehouse wrote:
Well that is not entirely true. We have done a great deal of
investigation into this issue. We do test quotas (among many other
things) on each release to ensure that they are working. Our tests have
all
Hi,
On Fri, 2014-03-28 at 22:07 +, Alan Brown wrote:
On 28/03/14 19:31, Fabio M. Di Nitto wrote:
Are there any known issues, guidelines, or recommendations for having
a single RHCS cluster with different OS releases on the nodes?
Only one answer.. don't do it. It's not supported
Hi,
On Tue, 2014-03-11 at 09:47 +, stephen.ran...@stfc.ac.uk wrote:
The storage is a separate Hitachi SAN connected by 4Gig fibre channel,
which itself does not report any problems when the crash happens. With
the quota switched off, all is fine.
Are you exporting that GFS2 filesystem
Hi,
On Tue, 2014-01-28 at 19:58 -0500, Dan Riley wrote:
On Jan 20, 2014, at 5:21 AM, Jürgen Ladstätter i...@innova-studios.com
wrote:
is anyone running gfs2 with kernel version 2.6.32-431 yet? 358 was unusable
due to bugs, 279 was working quite well. Anyone tested the new 431? Is it
Hi,
On Fri, 2014-01-17 at 09:10 -0200, Juan Pablo Lorier wrote:
Hi,
I've been using gfs2 on top of a 24 TB lvm volume used as a file server
for several months now and I see a lot of io related to glock_workqueue
when there's file transfers. The threads even get to be the top ones in
read
Hi,
On Tue, 2013-11-26 at 10:19 +0200, Vladimir Melnik wrote:
Dear colleagues,
Your advice will be greatly appreciated.
I have another small GFS2 cluster. 2 nodes connected to the same
iSCSI-target.
Tonight something happened and now both nodes can’t work with the
Hi,
On Tue, 2013-11-26 at 13:13 +0200, Vladimir Melnik wrote:
On Tue, Nov 26, 2013 at 09:59:34AM +, Steven Whitehouse wrote:
Looking at the logs, I see that it looks like recovery has got stuck for
one of the nodes, since the log is complaining that it has taken a long
time for kslowd
Hi,
On Tue, 2013-11-26 at 13:53 +0200, Vladimir Melnik wrote:
On Tue, Nov 26, 2013 at 01:13:48PM +0200, Vladimir Melnik wrote:
The other question is also what caused the node to try and fence the
other one in the first place? That is not immediately clear from the
logs.
It seems that
Hi,
On Tue, 2013-11-26 at 14:29 +0200, Vladimir Melnik wrote:
On Tue, Nov 26, 2013 at 12:15:31PM +, Steven Whitehouse wrote:
Well the logs appear to suggest that one of the nodes at least has been
fenced at some stage.
I have to admit that fencing hasn't been enabled in this cluster
Hi,
On Fri, 2013-09-27 at 20:30 +, Hofmeister, James (HP ESSN BCS Linux
ERT) wrote:
I am not looking for a deep analysis of this problem, just a search
for known issues… I have not found a duplicate in my Google and
bugzilla searches.
The trace looks to me as if the unlinked inodes
Hi,
On Wed, 2013-09-25 at 16:25 +0200, Pavel Herrmann wrote:
Hi
I am trying to build a two-node cluster for samba, but I'm having some GFS2
issues.
The nodes themselves run as virtual machines in KVM (on different hosts), use
gentoo kernel 3.10.7 (not sure what exact version of vanilla
Hi,
On Tue, 2013-09-24 at 11:29 +0200, Olivier Desport wrote:
Hello,
I've installed a two nodes GFS2 cluster on Debian 7. The nodes are
connected to the datas by iSCSI and multipathing with a 10 Gb/s link. I
can write a 1g file with dd at 500 Mbytes/s. I export with NFS (on a 10
Gb/s
Hi,
On Tue, 2013-09-24 at 08:57 -0400, Thom Gardner wrote:
OK, I don't know much about increasing NFS performance, but I do have
some things for you to consider that may actually help anyway:
In general we (the cluster support group at RedHat) have started
recommending that you just not
Hi,
On Thu, 2013-09-05 at 11:24 -0400, Schaefer, Micah wrote:
Hello,
I am running a cluster with two nodes. Each node is importing an iSCSI
block device. Using clustered logical volume management, they are sharing
several logical volumes that are formatted with GFS2.
I have attempted
Hi,
On Tue, 2013-04-30 at 08:44 -0400, rhu...@bidmc.harvard.edu wrote:
A couple of years ago, I staged a test environment using RHEL 5u1 with
a few KVM guests that were provisioned with a direct LUN for use with
Cluster Suite and resilient storage (GFS2). For whatever reason (on
reflection,
Hi,
On Wed, 2013-03-13 at 10:09 -0700, Scooter Morris wrote:
Hi all,
We're seeing gfs2 crashes since we've upgraded to RHEL 6.4. The
traceback is:
There is a fix available for that, bug #908398. Please open a ticket
with our support team and quote that bug number and they should be
Hi,
On Wed, 2013-01-30 at 12:31 +0100, Kristian Grønfeldt Sørensen wrote:
Hi,
I'm setting up a two-node cluster sharing a single GFS2 filesystem
backed by a dual-primary DRBD-device (DRBD on top of LVM, so no CLVM
involved).
I am experiencing more or less the same as the OP in this
Hi,
On Thu, 2013-01-31 at 00:29 +0800, Zama Ques wrote:
I am facing a few issues while creating a GFS2 file system. GFS2 file
system creation is successful, but it is failing while trying to mount the
file system.
It is failing with the following error :
===
[root@eser~]#
Hi,
On Thu, 2013-01-03 at 18:00 +0800, Zama Ques wrote:
Hi All ,
Need a few clarifications regarding GFS.
I need to create a shared file system for our servers. The servers will
write to the shared file system at the same time, and there is no requirement
for a cluster.
Planning
Hi,
On Mon, 2012-11-12 at 15:24 +0900, Antonio Castellano wrote:
Hi,
I'd like to know about the status of the bug number 831330 and its schedule.
Our system is complaining about it and I don't have enough permissions to
access its bugzilla related page. It is urgent.
This is the link
Hi,
On Wed, 2012-10-31 at 14:07 -0500, james pedia wrote:
Noticed this thread for the same issue at:
https://www.redhat.com/archives/linux-cluster/2012-September/msg00084.html:
I think I hit the same issue:
(CentOS6.3)
# uname -r
2.6.32-279.el6.x86_64
Hi,
On Wed, 2012-10-24 at 17:27 +0200, Andrew Holway wrote:
On Oct 24, 2012, at 5:01 PM, Heiko Nardmann wrote:
On 24.10.2012 16:38, Andrew Holway wrote:
Hello,
I've been doing some testing.
I have an iSCSI device that I have set up with CLVM.
I have 4 physical hosts, all with
Hi,
On Thu, 2012-09-20 at 16:25 +0200, Andrew Holway wrote:
It seems that my node004 is the problem.
I cannot kill the iozone processes and I find this in the logs.
This looks like there is some problem with the i/o stack below the level
of GFS2. What kind of storage are you using? If this
Hi,
On Sun, 2012-09-09 at 17:31 -0400, Jason Henderson wrote:
On Sep 8, 2012 9:44 AM, Bob Peterson rpete...@redhat.com wrote:
- Original Message -
| A question on the inode numbers in the hangalyzer output.
|
| In the glock dump for node2 you have these lines:
| G: s:SH
Hi,
On Fri, 2012-09-07 at 18:38 +, Chip Burke wrote:
My problem is that on a single node of the cluster I can mount a GFS2
volume, however as soon as I try to write to the volume, access to
GFS2 freezes on all nodes (Simple ls hangs even). The hang finally
clears up with the original two
Hi,
On Sun, 2012-09-02 at 02:11 +0200, Kveri wrote:
Hello,
we're using gfs2 on drbd; we created the cluster in an incomplete state (only 1
node). When doing dd if=/dev/zero of=/gfs_partition/file we get filesystem
freezes every 1-2 minutes for 10-20 seconds, I mean every filesystem on that
Hi,
On Wed, 2012-08-29 at 06:37 -0500, I-Viramuthu, Siva wrote:
Hello,
After the reboot, I am unable to join the fence group; it
always says waiting...
Any ideas...
We'll need a bit more info... do you have a cluster.conf you can share
and are there any
Hi,
On Thu, 2012-08-23 at 22:35 +0200, Bart Verwilst wrote:
Unmounting and remounting made the filesystem writeable again.
I then ran a gfs2_fsck on the device, which gave me
The output from fsck doesn't really give any clues as to the cause.
The reclaiming of unlinked inodes is a
your change to 3.2.x? I will
then test this, and try to push it upstream to -stable and/or ubuntu
LTS..
Thanks a lot in advance!
Kind regards,
Bart
Steven Whitehouse wrote on 21.08.2012 14:37:
Hi,
On Tue, 2012-08-21 at 14:23 +0200, Bart Verwilst wrote:
Hi Steven
.
Steven Whitehouse wrote on 22.08.2012 10:44:
Hi,
On Wed, 2012-08-22 at 09:35 +0200, Bart Verwilst wrote:
Hi Steven,
I'm not sure if this is enough to fix it in 3.2:
--- inode.c.orig 2012-08-22 07:28:15.675859475 +
+++ inode.c 2012-08-22 07:33:05.895865014 +
Hi,
On Tue, 2012-08-21 at 12:08 +0200, Bart Verwilst wrote:
As yet another reply to my own post, i found this on the node where it
hangs ( this time it's vm01, and /var/lib/libvirt/sanlock that's hanging
):
[ 1219.640653] GFS2: fsid=: Trying to join cluster lock_dlm,
kvm:sanlock
[
the report. I wonder whether your
distro kernel has this patch:
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=718b97bd6b03445be53098e3c8f896aeebc304aa
Thats the most likely thing that I can see that has been fixed recently,
Steve.
Steven Whitehouse wrote on
down the
problem,
Steve.
Steven Whitehouse wrote on 21.08.2012 13:27:
Hi,
On Tue, 2012-08-21 at 13:03 +0200, Bart Verwilst wrote:
Hi Steven,
There is no drbd in the mix ( which is why i changed the title of
the
bugreport now ). I'm only using plain iSCSI. The original posted
Hi,
On Tue, 2012-07-10 at 12:11 +0200, Ali Bendriss wrote:
Hi,
On Tue, 2012-07-10 at 10:45 +0200, Ali Bendriss wrote:
Hello,
It looks like recent versions of GFS2 use the standard Linux quota
tools,
but I've tried the mainstream quota-tools (ver 4.00)
Hi,
On Tue, 2012-07-10 at 10:45 +0200, Ali Bendriss wrote:
Hello,
It looks like recent versions of GFS2 use the standard Linux quota
tools,
but I've tried the mainstream quota-tools (ver 4.00) without success.
Which version should be used?
thanks
The quota tools should work with
Hi,
On Mon, 2012-06-04 at 11:24 +0200, Nicolas Ecarnot wrote:
Hi,
I had a 2-node cluster running just fine under Ubuntu server 11.10, with
cman, corosync, GFS2, OCFS2, clvm, ctdb, samba, winbind.
So I decided to upgrade :)
Under Precise (12.04), my OCFS2 partition is still working
Hi,
On Tue, 2012-04-03 at 14:14 +0100, Alan Brown wrote:
Real Dumb Question[tm] time
Has anyone tried putting bcache/flashcache in front of shared storage in
a GFS2 cluster (on each node, of course)
Did it work?
Should it work?
Is it safe?
Are there ways of making it safe?
Hi,
On Fri, 2012-03-02 at 06:05 -0800, Scooter Morris wrote:
Hi all,
We're seeing a problem with file append using cat: cat file
on a 4 node cluster with gfs2 where the file's mtime doesn't get
updated. This looks exactly the same as in Bug 496716, except that
bug was supposed to have
Hi,
On Thu, 2012-02-23 at 11:56 -0500, Greg Mortensen wrote:
Hi.
I'm testing a two-node virtual-host CentOS 6.2 (2.6.32-220.4.2.el6.x86_64)
GFS2 cluster running on the following hardware:
Two physical hosts, running VMware ESXi 5.0.0
EqualLogic PS6000XV iSCSI SAN
I have exported a
Hi,
On Fri, 2012-02-10 at 18:32 +0100, Laszlo Beres wrote:
Hello,
is there anybody here running Tibco EMS with GFS2? We've got some
nasty thing recently and I'm interested in others' experiences.
Thanks,
Laszlo
I'd be very interested to know what has not worked for you. Please open
a
Of Steven Whitehouse
Sent: Monday, February 13, 2012 4:38 AM
To: linux clustering
Subject: Re: [Linux-cluster] Tibco EMS on GFS2
Hi,
On Fri, 2012-02-10 at 18:32 +0100, Laszlo Beres wrote:
Hello,
is there anybody here running Tibco EMS with GFS2? We've got some
nasty thing recently
Hi,
On Mon, 2012-02-13 at 15:06 +0100, Laszlo Beres wrote:
Hi Steven,
On Mon, Feb 13, 2012 at 10:38 AM, Steven Whitehouse swhit...@redhat.com
wrote:
I'd be very interested to know what has not worked for you. Please open
a ticket with our support team if you are a Red Hat customer. We
Hi,
On Mon, 2012-02-13 at 15:50 +0100, emmanuel segura wrote:
I seen in the redhat DocS
The first thing you can do is mount with the fs options
noatime,nodiratime
Yes, that is a very good idea. It might not solve the particular issue
here, but in general it will improve performance
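The noatime advice above looks like this in practice (device path and mount point are hypothetical); skipping atime updates means reads no longer trigger inode writes, and hence fewer exclusive glocks:

```shell
# One-off mount with access-time updates disabled.
mount -o noatime,nodiratime -t gfs2 /dev/cluster_vg/shared /mnt/shared

# Or persistently, as an /etc/fstab entry:
# /dev/cluster_vg/shared  /mnt/shared  gfs2  noatime,nodiratime  0 0
```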
Hi,
On Mon, 2012-02-13 at 16:03 +0100, emmanuel segura wrote:
:-) Thanks
2012/2/13 Adam Drew ad...@redhat.com
You don't have to use a partition. I was just providing a
syntax example. GFS2 is typically deployed over LVM.
In fact CLVM is a requirement for
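The GFS2-over-CLVM layering described above can be sketched as follows; the PV device, VG/LV names, cluster name, and journal count are all hypothetical and must match the actual cluster configuration:

```shell
# Prepare a clustered volume group on the shared device (clvmd running).
pvcreate /dev/sdb
vgcreate --clustered y cluster_vg /dev/sdb
lvcreate -L 500G -n shared cluster_vg

# -p lock_dlm: cluster locking; -t <clustername>:<fsname> must match
# cluster.conf; -j: one journal per node that will mount the filesystem.
mkfs.gfs2 -p lock_dlm -t mycluster:shared -j 2 /dev/cluster_vg/shared
```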
Hi,
On Wed, 2012-01-25 at 16:00 -0500, Digimer wrote:
Hi all,
EL6 DLM question. Beginning adventures in DLM tuning... :)
I am trying to optimize DLM for use on nodes which can hit the disks
hard and fast. Specifically, clustered LVM with many LVs hosting many
VMs where the VMs are
Hi,
On Thu, 2012-01-05 at 13:54 -0800, Wes Modes wrote:
Howdy, y'all. I'm trying to set up GFS in a cluster on CentOS systems
running on VMware. The GFS FS is on a Dell EqualLogic SAN.
I keep running into the same problem despite many differently-flavored
attempts to set up GFS. The problem
Hi,
On Thu, 2012-01-05 at 10:07 -0700, Dax Kelson wrote:
Looking in older Red Hat Magazine article by Matthew O'Keefe such as:
http://www.redhat.com/magazine/008jun05/features/gfs/
http://www.redhat.com/magazine/008jun05/features/gfs_nfs/
There are references to large GFS clusters.
For
Hi,
On Fri, 2011-12-30 at 19:30 +, yvette hirth wrote:
Digimer wrote:
For GFS2, one of the easiest performance wins is to set
'noatime,nodiratime' in the mount options to avoid requiring locks to
update the access times on files when you only read them.
i've found that noatime
Hi,
On Fri, 2011-12-30 at 21:37 +0100, Stevo Slavić wrote:
Pulling the cables between shared storage and foo01, foo01 gets
fenced. Here is some info from foo02 about shared storage and dlm
debug (lock file seems to remain locked)
root@foo02:/data/activemq_data# ls -li
total 276
66467
Hi,
On Fri, 2011-12-30 at 14:39 +0100, Stevo Slavić wrote:
Hello RedHat Linux cluster community,
I'm in process of configuring shared filesystem storage master/slave
Apache ActiveMQ setup. For it to work, it requires reliable
distributed locking - master is node that holds exclusive lock on
Hi,
On Fri, 2011-12-16 at 18:31 +0530, SATHYA - IT wrote:
Hi,
We had configured a clustered file server with (samba + ctdb + gfs2).
GFS2 partition is mounted with ACL option. When we create a folder in
GFS2 partition, irrespective of user being provided with full
permission for the
Hi,
On Thu, 2011-12-15 at 13:06 +1100, yu song wrote:
Gents,
beauty!! it is great to see your ideas.
I found a doc from redhat kb, which has the following statement
Multi-Site Disaster Recovery Clusters
A multi-site cluster established for disaster recovery comprises two
Hi,
On Mon, 2011-12-12 at 04:37 +, Jankowski, Chris wrote:
Yu,
GFS2 or any other filesystems being replicated are not aware at all of
the block replication taking place in the storage layer. This is
entirely transparent to the OS and filesystems clustered or not.
Replication
Hi,
On Wed, 2011-11-16 at 11:03 +, Alan Brown wrote:
Bob Peterson wrote:
I've taken a close look at the image file you created.
This appears to be a normal, everyday GFS2 file system
except there is a section of 16 blocks (or 0x10 in hex)
that are completely destroyed near the
Hi,
On Wed, 2011-11-16 at 12:00 +, Alan Brown wrote:
On Wed, 16 Nov 2011, Steven Whitehouse wrote:
The problem is the blocks following that, such as the master directory
which contains all the system files. If enough of that has been
destroyed, it would make it very tricky
Hi,
On Wed, 2011-11-16 at 19:44 +0530, Rajagopal Swaminathan wrote:
Greetings,
On Wed, Nov 16, 2011 at 7:13 PM, Bob Peterson rpete...@redhat.com wrote:
- Original Message -
The latest/greatest upstream fsck.gfs2 has the ability to recreate
pretty much any and all damaged system
Hi,
On Wed, 2011-11-09 at 09:57 -0500, Nicolas Ross wrote:
(...)
On some services, there are document directories that are huge, not that
much in size (about 35 gigs), but in number of files, around one million.
One service even has 3 data directories with that many files each.
Hi,
On Wed, 2011-11-09 at 14:57 +, Alan Brown wrote:
Nicolas Ross wrote:
Get me right: there are millions of files, but no more than a few
hundred per directory. They are spread out, split on the database id,
2 characters at a time. So a file named 1234567.jpg would end up in a
Hi,
On Sat, 2011-11-05 at 14:17 -0400, berg...@merctech.com wrote:
In the message dated: Fri, 04 Nov 2011 14:05:34 EDT,
The pithy ruminations from Nicolas Ross on
[Linux-cluster] Ext3/ext4 in a clustered environement were:
= Hi !
=
[SNIP!]
=
= On some services, there are
Hi,
On Tue, 2011-11-01 at 11:29 +0200, sagar.shi...@tieto.com wrote:
Hi,
Following is my setup –
RedHat 6.0, 64-bit
Cluster configuration using Luci.
I had set up a 2-node load-balancing cluster with a MySQL service
active on both nodes using different
Hi,
On Tue, 2011-08-30 at 12:51 +0200, Davide Brunato wrote:
Hello,
I've a Red Hat 5.7 2-node cluster for electronic mail services where the
mailboxes (maildir format)
are stored on GFS2 volume. The volume contains about 750 files for ~740
GB of disk space
occupation. Previously the
, another for shared applications and
data: there is 5 shares.
Also, did you mount with noatime, nodiratime?
Yes, I'm mounting with these options.
Jordi Renye
LCFIB - UPC
Were the tests being run directly on gfs2, or via Samba in this case?
Steve.
On 22/07/2011 12:32, Steven
Hi,
On Fri, 2011-07-22 at 12:08 +0200, Jordi Renye wrote:
Hi,
We have configured redhat cluster RHEL 6.1 with two nodes.
We have seen that performance of GFS2 on writing is
half of ext3 partition.
For example, time of commands:
time cp -Rp /usr /gfs2partition/usr
0.681u 47.082s
up via NFS.
Is there anything else I should know about GFS2 limitations?
Is there a book GFS: The Missing Manual? :)
Thanks
Colin
On Mon, 2011-07-11 at 13:05 +0100, J. Bruce Fields wrote:
On Mon, Jul 11, 2011 at 11:43:58AM +0100, Steven Whitehouse wrote:
Hi
Hi,
On Mon, 2011-07-11 at 09:30 +0100, Alan Brown wrote:
On 08/07/11 22:09, J. Bruce Fields wrote:
With default mount options, the linux NFS client (like most NFS clients)
assumes that a file has a most one writer at a time. (Applications that
need to do write-sharing over NFS need to
Hi,
On Wed, 2011-07-06 at 11:15 -0500, Paras pradhan wrote:
Hi,
My GFS2 linux cluster has three nodes. Two at the data center and one
at the DR site. If the nodes at DR site break/turnoff, all the
services move to DR node. But if the 2 nodes at the data center lost
communication with the
Hi,
On Tue, 2011-06-21 at 09:57 -0400, Nicolas Ross wrote:
8 node cluster, fibre channel HBAs and disks accessed through a QLogic fabric.
I've got hit 3 times with this error on different nodes :
GFS2: fsid=CyberCluster:GizServer.1: fatal: filesystem consistency error
GFS2:
Hi,
On Thu, 2011-06-09 at 12:45 +0300, Budai Laszlo wrote:
Hi,
I would like to know if it is possible to remove a journal from GFS. I
have tried to google for it but did not find anything conclusive. I've
read the documentation at the following address:
Hi,
On Thu, 2011-06-02 at 10:34 +0100, Alan Brown wrote:
GFS2 seems horribly prone to fragmentation.
I have a filesystem which has been written to once (data archive,
migrated from a GFS1 filesystem to a clean GFS2 fs) and a lot of the
files are composed of hundreds of extents - most of
Hi,
On Thu, 2011-06-02 at 11:47 +0100, Alan Brown wrote:
Steven Whitehouse wrote:
The thing to check is what size the extents are...
filefrag doesn't show this.
Yes it does. You need the -v flag
the on-disk layout is
designed so that you should have a metadata block separating each
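The -v flag mentioned above makes filefrag list each extent with its length, which is what reveals whether a file's hundreds of extents are pathologically small (the path here is hypothetical):

```shell
# Per-extent detail: each line shows the extent's logical/physical offsets
# and its length in filesystem blocks.
filefrag -v /mnt/gfs2/archive/bigfile.dat

# Summary only: total extent count for the file.
filefrag /mnt/gfs2/archive/bigfile.dat
```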
Hi,
On Wed, 2011-05-18 at 16:14 +0100, Alan Brown wrote:
Bob, Steve, Dave,
Is there any progress on tuning the size of the tables (RHEL5) to allow
larger values and see if they help things as far as caching goes?
There is a bz open, and you should ask for that to be linked to one of
your
Hi,
On Wed, 2011-05-18 at 18:34 +0100, Alan Brown wrote:
Steven Whitehouse wrote:
Hi,
On Wed, 2011-05-18 at 16:14 +0100, Alan Brown wrote:
Bob, Steve, Dave,
Is there any progress on tuning the size of the tables (RHEL5) to allow
larger values and see if they help things as far
Hi,
On Fri, 2011-05-13 at 18:21 -0400, Bob Peterson wrote:
- Original Message -
| On 12/05/11 00:32, Ramiro Blanco wrote:
|
| https://bugzilla.redhat.com/show_bug.cgi?id=683155
| Can't access that one: You are not authorized to access bug
| #683155
|
| There's no reason this
clustering linux-cluster@redhat.com
Sent: Tuesday, May 10, 2011 1:06:38 AM
Subject: Re: [Linux-cluster] gfs2 setting quota problem
Hi,
Steven Whitehouse wrote:
Hi,
On Mon, 2011-05-09 at 15:32 +0200, mr wrote:
Hello,
I'm having problem to init gfs2 quota on my existing FS
Hi,
On Mon, 2011-05-09 at 15:32 +0200, mr wrote:
Hello,
I'm having problem to init gfs2 quota on my existing FS.
I have 2TB gfs2 FS which is being used in 50%. I have decided to set up
quotas. Setting warning and limit levels seemed OK - no errors (although
I had to reset all my existing
Hi,
On Tue, 2011-04-19 at 20:47 +0600, Muhammad Ammad Shah wrote:
Hello,
I am using RHEL 5.3 and formatted the shared volume using GFS. How can I know
whether it's GFS version 1 or GFS version 2?
Thanks,
Muhammad Ammad Shah
Well you should be
Hi,
On Fri, 2011-04-15 at 11:00 -0400, Nicolas Ross wrote:
Hi !
We are slowly migrating services to our new cluster... We currently have an 8
node cluster with a 16 1TB disk enclosure in 7 x RAID1 pairs, with 2 global
spares.
The first 2 arrays are in one VG, which is in turn separated into
...@redhat.com] On Behalf Of Steven Whitehouse
Sent: 30 mars 2011 07:48
To: linux clustering
Subject: Re: [Linux-cluster] GFS2 cluster node is running very slow
Hi,
On Wed, 2011-03-30 at 01:34 -0400, David Hill wrote:
Hi guys,
I’ve found this in /sys/kernel/debug/gfs2
and the remaining
nodes can't find one of those?
Either way, those would be the first two thing that I'd look into in
order to track this down,
Steve.
-Original Message-
From: linux-cluster-boun...@redhat.com
[mailto:linux-cluster-boun...@redhat.com] On Behalf Of Steven Whitehouse
Sent: 31
Hi,
On Thu, 2011-03-31 at 17:15 +0100, Alan Brown wrote:
Bob, Steve et al,
Which EL test kernel post 2.6.18.247 is stable enough for use in a
production system for a few days?
I'm seeing massive slowdowns on lots of 2-100Mb writes (someone's
mirroring a ftp archive) and want to see if
Hi,
On Wed, 2011-03-30 at 01:34 -0400, David Hill wrote:
Hi guys,
I’ve found this in /sys/kernel/debug/gfs2/fsname/glocks
H: s:EX f:tW e:0 p:22591 [jsvc] gfs2_inplace_reserve_i+0x451/0x69a
[gfs2]
H: s:EX f:tW e:0 p:22591 [jsvc] gfs2_inplace_reserve_i+0x451/0x69a
[gfs2]
H:
Hi,
On Wed, 2011-03-23 at 01:34 -0400, Valeriu Mutu wrote:
Hi Steve,
Thanks for the reply.
On Mon, Mar 21, 2011 at 11:11:31AM +, Steven Whitehouse wrote:
Note that I've used the same setup for the GFS2 and ext3 tests: same
machine, same networking config, same storage array
Hi,
On Fri, 2011-03-18 at 01:42 -0400, Valeriu Mutu wrote:
Hi,
Has anyone done any GFS2 metadata performance benchmarks? If so, what have
you found? Also, what performance tuning would be recommended to increase the
metadata performance of a GFS2 filesystem?
I recently ran 'fdtree'
Hi,
On Thu, 2011-03-17 at 17:17 +0200, C.D. wrote:
Hello,
sorry guys to resurrect an old thread, but I have to say I can confirm
that, too. I have a libvirt setup with multipathed FC SAN devices and
KVM guests running on top of it. The physical machine is HP 465c G7 (2
x 12 Core
1 - 100 of 272 matches