Hi,
On 20/11/17 04:23, 성백재 wrote:
Hello, List.
We are developing storage systems using 10 NVMe drives (current test set).
Using MD RAID10 + CLVM/GFS2 over four hosts achieves 22 GB/s (maximum, on
reads).
However, a GFS2 DLM problem occurred: each host frequently reports “dlm: g
Hi,
On 29/08/17 12:26, Gionatan Danti wrote:
On 29-08-2017 13:13 Steven Whitehouse wrote:
Whatever kind of storage is being used with GFS2, it needs to act as
if there was no cache or as if there is a common cache between all
nodes - what we want to avoid is caches which are specific to
Hi,
On 29/08/17 12:07, Gionatan Danti wrote:
On 29-08-2017 12:59 Steven Whitehouse wrote:
Yes, it definitely needs to be set to cache=none mode. Barrier passing
is only one issue, and as you say it is down to the cache coherency,
since the block layer is not aware of the caching
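For illustration, a minimal sketch of what cache=none looks like on the qemu command line when a guest image lives on GFS2 (the paths, image name and memory size are hypothetical):
# Hypothetical example: attach a raw image stored on a GFS2 mount with host
# caching disabled, so writes go straight to the shared block device instead
# of sitting in a node-local page cache.
qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=/mnt/gfs2/images/vm01.img,format=raw,if=virtio,cache=none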
On 29/08/17 11:54, Gionatan Danti wrote:
Hi Steven,
On 29-08-2017 11:45 Steven Whitehouse wrote:
Yes, there is some additional overhead due to the clustering. You can
however usually organise things so that the overheads are minimised as
you mentioned above by being careful about the
Hi,
On 26/08/17 07:11, Gionatan Danti wrote:
Hi list,
I am evaluating how to refresh my "standard" cluster configuration and
GFS2 clearly is on the table ;)
GOAL: to have a 2-node HA cluster running DRBD (active/active), GFS2
(to store disk image) and KVM (as hypervisor). The cluster had to
Hi,
On 19/07/17 00:39, Digimer wrote:
On 2017-07-18 07:25 PM, Kristián Feldsam wrote:
Hello, today I am seeing GFS2 errors in the log and nothing about them on the net,
so I am writing to this mailing list.
node2 19.07.2017 01:11:55 kernel kernerr vmscan: shrink_slab:
gfs2_glock_shrink_scan+
Hi,
On 11/04/16 13:29, Daniel Dehennin wrote:
Hello,
My OpenNebula cluster has a 4 TB GFS2 logical volume backed by two
physical volumes (2 TB each).
The result is that nearly all I/O goes to a single PV.
Now I'm looking for a way to convert the linear LV to a striped one and
only found the possibi
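As a rough sketch only (the volume group, LV names and size are made up), one way forward is to create a new LV striped across both PVs and migrate the data onto it, since a plain linear LV cannot simply be re-striped in place:
# Create a new LV striped across the 2 PVs with a 64 KiB stripe size
# (assumes enough free extents in the VG; in practice this usually means
# adding PVs first)
lvcreate --stripes 2 --stripesize 64k --size 4T --name lv_gfs2_striped vg_one
# Then copy the data over while the filesystem is quiesced and switch the
# mount (and /etc/fstab) to the new LV.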
Hi,
On 08/04/16 10:21, Daniel Dehennin wrote:
Hello,
On our virtualisation infrastructure we have a 4 TB GFS2 over a SAN.
For the past week or two we have been facing read I/O issues: 5k or 6k IOPS with
an average block size of 5 kB.
I'm looking at the possibilities and haven't found anything yet, so my
Hi,
On 15/02/16 09:20, Daniel Dehennin wrote:
Hello,
We have been running into trouble for several days on our GFS2 (log attached):
- we ran the FS for some time without trouble (since 2014-11-03)
- the FS was grown from 3 TB to 4 TB nearly 6 months ago
- it seems to happen only on one node, “nebula3”
- I
Hi,
On 13/11/15 08:13, Milos Jakubicek wrote:
Hi,
can somebody from the devel team at Red Hat share some thoughts on what
the development plans for GFS2 are going forward?
I mean: will it mainly be small and big performance improvements like
in the past couple of years, or are there
Hi,
On 10/11/15 04:56, Dil Lee wrote:
Hi,
I have a CentOS 6.5 cluster that is connected to a Fibre Channel SAN in star
topology. All nodes/SAN_storages have a single-pair fibre connection and
no multipathing. The possibility of a hardware issue has been eliminated
because read/write between all other
Hi,
On 30/06/15 20:37, Daniel Dehennin wrote:
Hello,
We are experiencing slow VMs on our OpenNebula architecture:
- two Dell PowerEdge M620
+ Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz
+ 96GB RAM
+ 2x 146 GB SAS drives
- 2TB SAN LUN to store qcow2 images with GFS2 over cLVM
We made som
Hi,
On 19/02/15 13:50, Megan . wrote:
Good Morning!
We have an 11-node CentOS 6.6 cluster configuration. We are using it
to share SAN mounts between servers (GFS2 via iSCSI with LVM). We
have a requirement for 33 GFS2 mounts shared on the cluster (crazy,
I know). Are there any limitations
ve correct results without unmounting. So the short
answer to your question is no,
Steve.
On Mon, Dec 15, 2014 at 09:59:02AM +0000, Steven Whitehouse wrote:
Hi,
On 15/12/14 09:54, Vladimir Melnik wrote:
Hi!
The qcow2 isn't underneath it; we can assume it's an ordinary file on a
files
bility to snapshot the storage,
then you could run fsck on a snapshot in order to avoid so much downtime.
An odd file size should not, in and of itself cause any problems with
removing a file, so it will only be an issue if other on-disk metadata
is incorrect,
Steve.
On Mon, Dec 15, 20
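A minimal sketch of the snapshot-then-fsck idea mentioned above, assuming the filesystem sits on an LVM volume (device names and the snapshot size are hypothetical):
# Take an LVM snapshot of the volume holding the GFS2 filesystem
lvcreate --snapshot --size 20G --name gfs2_snap /dev/vg0/gfs2_lv
# Check the snapshot read-only, so the live filesystem can stay in service
fsck.gfs2 -n /dev/vg0/gfs2_snap
# Drop the snapshot afterwards
lvremove /dev/vg0/gfs2_snap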
Hi,
How did you generate the image in the first place? I don't know if we've
ever really tested GFS2 with a qcow device underneath it - normally even
in virt clusters the storage for GFS2 would be a real shared block
device. Was this perhaps just a single node?
Have you checked the image wit
Hi,
On 24/04/14 17:29, Alan Brown wrote:
On 30/03/14 12:34, Steven Whitehouse wrote:
Well that is not entirely true. We have done a great deal of
investigation into this issue. We do test quotas (among many other
things) on each release to ensure that they are working. Our tests have
all
Hi,
On Fri, 2014-03-28 at 22:07 +, Alan Brown wrote:
> On 28/03/14 19:31, Fabio M. Di Nitto wrote:
>
> >
> > Are there any known issues, guidelines, or recommendations for having
> > a single RHCS cluster with different OS releases on the nodes?
> > Only one answer.. don't do it. It's not su
Hi,
On Tue, 2014-03-11 at 09:47 +, stephen.ran...@stfc.ac.uk wrote:
> The storage is a separate Hitachi SAN connected by 4Gig fibre channel,
> which itself does not report any problems when the crash happens. With
> the quota switched off, all is fine.
>
Are you exporting that GFS2 filesystem
Hi,
On Tue, 2014-01-28 at 19:58 -0500, Dan Riley wrote:
> On Jan 20, 2014, at 5:21 AM, Jürgen Ladstätter
> wrote:
>
> > is anyone running gfs2 with kernel version 2.6.32-431 yet? 358 was unusable
> > due to bugs, 279 was working quite well. Anyone tested the new 431? Is it
> > stable enough f
Hi,
On Tue, 2014-01-21 at 13:16 +0100, Moullé Alain wrote:
> Hi,
>
> As far as I know the final stack option 4 (with Pacemaker and the quorum API
> of corosync instead of cman) will be the stack delivered on RHEL7 -
> is that still right?
>
> So my question is about GFS2 : will it be working together wit
Hi,
On Fri, 2014-01-17 at 09:10 -0200, Juan Pablo Lorier wrote:
> Hi,
>
> I've been using gfs2 on top of a 24 TB LVM volume used as a file server
> for several months now and I see a lot of I/O related to glock_workqueue
> when there are file transfers. The threads even get to be the top ones in
> re
Hi,
On Tue, 2013-11-26 at 14:29 +0200, Vladimir Melnik wrote:
> On Tue, Nov 26, 2013 at 12:15:31PM +0000, Steven Whitehouse wrote:
> > Well the logs appear to suggest that one of the nodes at least has been
> > fenced at some stage.
>
> I have to admit that fencing hasn
Hi,
On Tue, 2013-11-26 at 13:53 +0200, Vladimir Melnik wrote:
> On Tue, Nov 26, 2013 at 01:13:48PM +0200, Vladimir Melnik wrote:
> > > The other question is also what caused the node to try and fence the
> > > other one in the first place? That is not immediately clear from the
> > > logs.
> > It
Hi,
On Tue, 2013-11-26 at 13:13 +0200, Vladimir Melnik wrote:
> On Tue, Nov 26, 2013 at 09:59:34AM +0000, Steven Whitehouse wrote:
> > Looking at the logs, I see that it looks like recovery has got stuck for
> > one of the nodes, since the log is complaining that it has taken a lo
Hi,
On Tue, 2013-11-26 at 10:19 +0200, Vladimir Melnik wrote:
> Dear colleagues,
>
>
>
> Your advices will be greatly appreciated.
>
>
>
> I have another small GFS2 cluster. 2 nodes connected to the same
> iSCSI-target.
>
>
>
> Tonight something has happened and now both nodes can’t work
Hi,
On Fri, 2013-09-27 at 20:30 +, Hofmeister, James (HP ESSN BCS Linux
ERT) wrote:
> I am not looking for a deep analysis of this problem, just a search
> for known issues… I have not found a duplicate in my Google and
> bugzilla searches.
>
The trace looks to me as if the unlinked inodes (h
Hi,
On Thu, 2013-09-26 at 14:05 -0400, David Teigland wrote:
> On Thu, Sep 26, 2013 at 12:24:49PM -0500, Goldwyn Rodrigues wrote:
> > This folds ops_results and error into one. This enables the
> > error code to trickle all the way to the calling function and the gfs2
> > mount fails if older dlm_
Hi,
On Wed, 2013-09-25 at 16:25 +0200, Pavel Herrmann wrote:
> Hi
>
> I am trying to build a two-node cluster for samba, but I'm having some GFS2
> issues.
>
> The nodes themselves run as virtual machines in KVM (on different hosts), use
> gentoo kernel 3.10.7 (not sure what exact version of van
Hi,
On Tue, 2013-09-24 at 08:57 -0400, Thom Gardner wrote:
> OK, I don't know much about increasing NFS performance, but I do have
> some things for you to consider that may actually help anyway:
>
> In general we (the cluster support group at RedHat) have started
> recommending that you just not
Hi,
On Tue, 2013-09-24 at 11:29 +0200, Olivier Desport wrote:
> Hello,
>
> I've installed a two-node GFS2 cluster on Debian 7. The nodes are
> connected to the data by iSCSI and multipathing with a 10 Gb/s link. I
> can write a 1 GB file with dd at 500 Mbytes/s. I export with NFS (on a 10
> Gb
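As a point of comparison, a direct-I/O write test along these lines (the path is hypothetical) takes the page cache out of the throughput measurement:
# Write a 1 GB test file with O_DIRECT so the figure reflects the storage path
dd if=/dev/zero of=/mnt/gfs2/testfile bs=1M count=1024 oflag=direct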
Hi,
On Thu, 2013-09-05 at 11:24 -0400, Schaefer, Micah wrote:
> Hello,
> I am running a cluster with two nodes. Each node is importing an iSCSI
> block device. Using clustered logical volume management, they are sharing
> several logical volumes that are formatted with GFS2.
>
> I have atte
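For context, formatting one of those clustered LVs for a two-node cluster typically looks something like this (the cluster, filesystem and device names are hypothetical):
# lock_dlm locking, 2 journals (one per node), lock table ClusterName:FSName
mkfs.gfs2 -p lock_dlm -t mycluster:shared_vol -j 2 /dev/clustervg/lv_shared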
Hi,
On Tue, 2013-04-30 at 08:44 -0400, rhu...@bidmc.harvard.edu wrote:
> A couple of years ago, I staged a test environment using RHEL 5u1 with
> a few KVM guests that were provisioned with a direct LUN for use with
> Cluster Suite and resilient storage (GFS2). For whatever reason (on
> reflectio
Hi,
On Wed, 2013-03-13 at 10:09 -0700, Scooter Morris wrote:
> Hi all,
> We're seeing gfs2 crashes since we've upgraded to RHEL 6.4. The
> traceback is:
>
There is a fix available for that, bug #908398. Please open a ticket
with our support team and quote that bug number and they should be
Hi,
On Thu, 2013-01-31 at 00:29 +0800, Zama Ques wrote:
>
>
> I am facing a few issues while creating a GFS2 file system. GFS2 file
> system creation is successful, but it is failing while trying to mount the
> file system.
>
> It is failing with the following error :
>
> ===
>
> [root@eser~]# /e
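Assuming the cluster manager and DLM are already running, the mount itself is normally just the following (device and mount point are hypothetical):
# Mount the freshly created GFS2 filesystem; lock_dlm needs the cluster
# services up on the node first
mount -t gfs2 /dev/vg_data/lv_gfs2 /mnt/gfs2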
Hi,
On Wed, 2013-01-30 at 12:31 +0100, Kristian Grønfeldt Sørensen wrote:
> Hi,
>
> I'm setting up a two-node cluster sharing a single GFS2 filesystem
> backed by a dual-primary DRBD-device (DRBD on top of LVM, so no CLVM
> involved).
>
> I am experiencing more or less the same as the OP in this
Hi,
On Thu, 2013-01-03 at 18:00 +0800, Zama Ques wrote:
> Hi All ,
>
>
> Need few clarification regarding GFS.
>
>
> I need to create a shared file system for our servers. The servers will
> write to the shared file system at the same time and there is no requirement
> for a cluster.
>
Hi,
On Mon, 2012-11-12 at 15:24 +0900, Antonio Castellano wrote:
> Hi,
>
> I'd like to know the status of bug number 831330 and its schedule.
> Our system is complaining about it and I don't have enough permissions to
> access its Bugzilla page. It is urgent.
>
> This is the
Hi,
On Wed, 2012-10-31 at 14:07 -0500, james pedia wrote:
> Noticed this thread for the same issue at:
>
>
> https://www.redhat.com/archives/linux-cluster/2012-September/msg00084.html:
>
>
> I think I hit the same issue:
>
>
> (CentOS6.3)
> # uname -r
> 2.6.32-279.el6.x86_64
>
>
> gfs2-ut
Hi,
On Wed, 2012-10-24 at 17:27 +0200, Andrew Holway wrote:
> On Oct 24, 2012, at 5:01 PM, Heiko Nardmann wrote:
>
> > On 24.10.2012 16:38, Andrew Holway wrote:
> >> Hello,
> >>
> >> I've been doing some testing.
> >> I have an iSCSI device that I have set up with CLVM.
> >> I have 4 physical
Hi,
On Thu, 2012-09-20 at 16:25 +0200, Andrew Holway wrote:
> It seems that my node004 is the problem.
>
> I cannot kill the iozone processes and I find this in the logs.
>
This looks like there is some problem with the i/o stack below the level
of GFS2. What kind of storage are you using? If th
Hi,
On Fri, 2012-09-07 at 18:38 +, Chip Burke wrote:
> My problem is that on a single node of the cluster I can mount a GFS2
> volume; however, as soon as I try to write to the volume, access to
> GFS2 freezes on all nodes (even a simple ls hangs). The hang finally
> clears up with the original t
Hi,
On Sun, 2012-09-09 at 17:31 -0400, Jason Henderson wrote:
>
> On Sep 8, 2012 9:44 AM, "Bob Peterson" wrote:
> >
> > - Original Message -
> > | A question on the inode numbers in the hangalyzer output.
> > |
> > | In the glock dump for node2 you have these lines:
> > | G: s:SH n:2/81
Hi,
On Sun, 2012-09-02 at 02:11 +0200, Kveri wrote:
> Hello,
>
> we're using gfs2 on drbd; we created the cluster in an incomplete state (only 1
> node). When doing dd if=/dev/zero of=/gfs_partition/file we get filesystem
> freezes every 1-2 minutes for 10-20 seconds, I mean every filesystem on that
Hi,
On Wed, 2012-08-29 at 06:37 -0500, I-Viramuthu, Siva wrote:
> Hello,
>
> After the reboot, I am unable to join the fence group; it
> always says waiting...
>
> Any idea...
>
We'll need a bit more info... do you have a cluster.conf you can share
and are there any m
Hi,
On Thu, 2012-08-23 at 22:35 +0200, Bart Verwilst wrote:
> Umounting and remounting made the filesystem writeable again.
>
> I've then ran a gfs2_fsck on the device, which gave me
>
The output from fsck doesn't really give any clues as to the cause.
The reclaiming of unlinked inodes is a fa
the bug which
it fixes). If you copy me in, then I can ACK it,
Steve.
> Steven Whitehouse wrote on 22.08.2012 10:44:
> > Hi,
> >
> > On Wed, 2012-08-22 at 09:35 +0200, Bart Verwilst wrote:
> >> Hi Steven,
> >>
> >> I'm not sure if this is enou
rent between 3.2.28 and 3.3-rc1,
> > and I do not dare to hack myself a diff file that incorporates your
> > change, fearing it will probably be less stable than it is already.
> > :)
> >
> > Would it be too much to ask to backport your change to 3.2.x? I will
> >
am
which has that patch in it, then that might also help narrow down the
problem,
Steve.
> Steven Whitehouse wrote on 21.08.2012 13:27:
> > Hi,
> >
> > On Tue, 2012-08-21 at 13:03 +0200, Bart Verwilst wrote:
> >> Hi Steven,
> >>
> >> There is no d
> Bart
>
Ah, I see, sorry. I misunderstood the report. I wonder whether your
distro kernel has this patch:
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=718b97bd6b03445be53098e3c8f896aeebc304aa
That's the most likely thing that I can see that has been fixed rece
'm finally getting somewhere with this :P
>
> Anything i can do to help Steven?
>
> Kind regards,
>
> Bart Verwilst
>
Can you reproduce this without drbd in the mix? That should remove one
complication and make this easier to track down.
I'll take a look and see what
Hi,
On Tue, 2012-08-21 at 12:08 +0200, Bart Verwilst wrote:
> As yet another reply to my own post, I found this on the node where it
> hangs ( this time it's vm01, and /var/lib/libvirt/sanlock that's hanging
> ):
>
>
> [ 1219.640653] GFS2: fsid=: Trying to join cluster "lock_dlm",
> "kvm:sanl
Hi,
On Wed, 2012-08-08 at 09:28 -0700, Scooter Morris wrote:
> Bob,
> Thanks for the information and the pointer, that really helps.
>
> -- scooter
>
> On 08/08/2012 05:50 AM, Bob Peterson wrote:
> > - Original Message -
> > | Hi All,
> > | We have a RedHat 6.2 cluster with 4 n
Hi,
On Tue, 2012-07-10 at 12:11 +0200, Ali Bendriss wrote:
> > Hi,
>
> >
>
> > On Tue, 2012-07-10 at 10:45 +0200, Ali Bendriss wrote:
>
> > > Hello,
>
> > >
>
> > > It looks like recent versions of GFS2 use the standard Linux quota
>
> > > tools,
>
> > >
>
> > > but I've tried the mains
Hi,
On Tue, 2012-07-10 at 10:45 +0200, Ali Bendriss wrote:
> Hello,
>
> It looks like recent versions of GFS2 use the standard Linux quota
> tools,
>
> but I've tried the mainstream quota-tools (ver 4.00) without success.
>
> Which version should be used?
>
> thanks
>
The quota tools should
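To illustrate, assuming the filesystem is mounted with quota enforcement enabled, the standard quota tools are driven much as on any local filesystem (user name and mount point are hypothetical):
# Report current usage and limits on the GFS2 mount
repquota /mnt/gfs2
# Set block limits for a user: ~10 GB soft, ~12 GB hard, no inode limits
setquota -u someuser 10485760 12582912 0 0 /mnt/gfs2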
Hi,
On Mon, 2012-06-04 at 11:24 +0200, Nicolas Ecarnot wrote:
> Hi,
>
> I had a 2-node cluster running just fine under Ubuntu Server 11.10, with
> cman, corosync, GFS2, OCFS2, clvm, ctdb, samba, winbind.
>
> So I decided to upgrade :)
>
> Under Precise (12.04), my OCFS2 partition is still work
Hi,
On Tue, 2012-04-03 at 14:14 +0100, Alan Brown wrote:
> Real Dumb Question[tm] time
>
> Has anyone tried putting bcache/flashcache in front of shared storage in
> a GFS2 cluster (on each node, of course)
>
> Did it work?
>
> Should it work?
>
> Is it safe?
>
> Are there ways of making
Hi,
On Fri, 2012-03-02 at 06:05 -0800, Scooter Morris wrote:
> Hi all,
> We're seeing a problem with file append using cat: "cat >> file"
> on a 4 node cluster with gfs2 where the file's mtime doesn't get
> updated. This looks exactly the same as in Bug 496716, except that
> bug was supposed
Hi,
On Thu, 2012-02-23 at 11:56 -0500, Greg Mortensen wrote:
> Hi.
>
> I'm testing a two-node virtual-host CentOS 6.2 (2.6.32-220.4.2.el6.x86_64)
> GFS2 cluster running on the following hardware:
>
> Two physical hosts, running VMware ESXi 5.0.0
> EqualLogic PS6000XV iSCSI SAN
>
> I have export
Hi,
On Mon, 2012-02-13 at 16:03 +0100, emmanuel segura wrote:
> :-) Thanks
>
> 2012/2/13 Adam Drew
> You don't have to use a partition. I was just providing a
> syntax example. GFS2 is typically deployed over LVM.
>
>
In fact CLVM is a requirement for support (
Hi,
On Mon, 2012-02-13 at 15:50 +0100, emmanuel segura wrote:
> I saw in the Red Hat docs
>
> The first thing you can do is mount with the fs options
> noatime,nodiratime
>
>
Yes, that is a very good idea. It might not solve the particular issue
here, but in general it will improve performance
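For example, an /etc/fstab entry along these lines (device and mount point are made up) applies both options at mount time:
# Hypothetical fstab entry: GFS2 mounted without access-time updates
/dev/clustervg/lv_gfs2  /mnt/gfs2  gfs2  noatime,nodiratime  0 0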
Hi,
On Mon, 2012-02-13 at 15:06 +0100, Laszlo Beres wrote:
> Hi Steven,
>
> On Mon, Feb 13, 2012 at 10:38 AM, Steven Whitehouse
> wrote:
>
> > I'd be very interested to know what has not worked for you. Please open
> > a ticket with our support team if yo
> you had with EMS/GFS2 configuration?
>
> -Original Message-
> From: linux-cluster-boun...@redhat.com
> [mailto:linux-cluster-boun...@redhat.com] On Behalf Of Steven Whitehouse
> Sent: Monday, February 13, 2012 4:38 AM
> To: linux clustering
> Subject: Re: [Linux-cl
Hi,
On Fri, 2012-02-10 at 18:32 +0100, Laszlo Beres wrote:
> Hello,
>
> is there anybody here running Tibco EMS with GFS2? We've got some
> nasty thing recently and I'm interested in others' experiences.
>
> Thanks,
>
> Laszlo
>
I'd be very interested to know what has not worked for you. Pleas
Hi,
On Wed, 2012-01-25 at 16:00 -0500, Digimer wrote:
> Hi all,
>
> EL6 DLM question. Beginning adventures in DLM tuning... :)
>
> I am trying to optimize DLM for use on nodes which can hit the disks
> hard and fast. Specifically, clustered LVM with many LVs hosting many
> VMs where the VMs
Hi,
On Thu, 2012-01-05 at 13:54 -0800, Wes Modes wrote:
> Howdy, y'all. I'm trying to set up GFS in a cluster on CentOS systems
> running on VMware. The GFS FS is on a Dell EqualLogic SAN.
>
> I keep running into the same problem despite many differently-flavored
> attempts to set up GFS. The prob
Hi,
On Thu, 2012-01-05 at 10:07 -0700, Dax Kelson wrote:
> Looking in older Red Hat Magazine article by Matthew O'Keefe such as:
>
> http://www.redhat.com/magazine/008jun05/features/gfs/
> http://www.redhat.com/magazine/008jun05/features/gfs_nfs/
>
> There are references to large GFS clusters.
>
Hi,
On Fri, 2011-12-30 at 14:39 +0100, Stevo Slavić wrote:
> Hello RedHat Linux cluster community,
>
> I'm in the process of configuring a shared-filesystem-storage master/slave
> Apache ActiveMQ setup. For it to work, it requires reliable
> distributed locking - the master is the node that holds an exclusive loc
Hi,
On Fri, 2011-12-30 at 21:37 +0100, Stevo Slavić wrote:
> Pulling the cables between shared storage and foo01, foo01 gets
> fenced. Here is some info from foo02 about shared storage and dlm
> debug (lock file seems to remain locked)
>
> root@foo02:-//data/activemq_data#ls -li
> total 276
> 66
Hi,
On Fri, 2011-12-30 at 19:30 +, yvette hirth wrote:
> Digimer wrote:
>
> > For GFS2, one of the easiest performance wins is to set
> > 'noatime,nodiratime' in the mount options to avoid requiring locks to
> > update the access times on files when you only read them.
>
> I've found that "n
Hi,
On Fri, 2011-12-16 at 18:31 +0530, SATHYA - IT wrote:
> Hi,
>
>
>
> We had configured a clustered file server with (samba + ctdb + gfs2).
> The GFS2 partition is mounted with the ACL option. When we create a folder in
> the GFS2 partition, irrespective of the user being provided with full
> permission for
Hi,
On Thu, 2011-12-15 at 13:06 +1100, yu song wrote:
> Gents,
>
> beauty!! it is great to see your ideas.
>
> I found a doc in the Red Hat KB, which has the following statement
>
> "
>
> Multi-Site Disaster Recovery Clusters
> A multi-site cluster established for disaster recovery comprises two
Hi,
On Mon, 2011-12-12 at 04:37 +, Jankowski, Chris wrote:
> Yu,
>
>
>
> GFS2 or any other filesystems being replicated are not aware at all of
> the block replication taking place in the storage layer. This is
> entirely transparent to the OS and filesystems, clustered or not.
> Replica
Hi,
On Wed, 2011-11-16 at 11:42 -0500, Michael Bubb wrote:
> Hello -
>
> We are experiencing extreme I/O slowness on a gfs2 volume on a SAN.
>
> We have a:
>
> Netezza TF24
> IBM V7000 SAN
> IBM Bladecenter with 3 HS22 blades
> Stand alone HP DL380 G7 server
>
>
> The 3 blades and the HP DL38
Hi,
On Wed, 2011-11-16 at 19:44 +0530, Rajagopal Swaminathan wrote:
> Greetings,
>
> On Wed, Nov 16, 2011 at 7:13 PM, Bob Peterson wrote:
> > - Original Message -
> > The latest/greatest upstream fsck.gfs2 has the ability to recreate
> > pretty much any and all damaged system structures
Hi,
On Wed, 2011-11-16 at 12:00 +, Alan Brown wrote:
> On Wed, 16 Nov 2011, Steven Whitehouse wrote:
>
> > The problem is the blocks following that, such as the master directory
> > which contains all the system files. If enough of that has been
> > destroyed, it woul
Hi,
On Wed, 2011-11-16 at 11:03 +, Alan Brown wrote:
> Bob Peterson wrote:
>
> > I've taken a close look at the image file you created.
> > This appears to be a normal, everyday GFS2 file system
> > except there is a section of 16 blocks (or 0x10 in hex)
> > that are completely destroyed near
Hi,
On Fri, 2011-11-11 at 17:04 +0800, Sherlock Zhang wrote:
> Hi Bob
> Thank you for your advice,
> I have dumped the FS head.
> #dd if=/dev/sdc of=/home/disk.dump bs=4096 count=1000
> #file disk.dump
> disk.dump: Linux GFS2 Filesystem (blocksize 4096, lockproto
> fsck_nolock)
> and I also try
Hi,
On Wed, 2011-11-09 at 14:57 +, Alan Brown wrote:
> Nicolas Ross wrote:
>
> > Get me right, there are millions of files, but no more than a few
> > hundred per directory. They are spread out, split on the database id,
> > 2 characters at a time. So a file name 1234567.jpg would end up i
Hi,
On Wed, 2011-11-09 at 09:57 -0500, Nicolas Ross wrote:
> (...)
>
> > On some services, there are document directories that are huge, not that
> > much in size (about 35 gigs), but in number of files, around one million.
> > One service even has 3 data directories with that many files each.
Hi,
On Sat, 2011-11-05 at 14:17 -0400, berg...@merctech.com wrote:
> In the message dated: Fri, 04 Nov 2011 14:05:34 EDT,
> The pithy ruminations from "Nicolas Ross" on
> <[Linux-cluster] Ext3/ext4 in a clustered environement> were:
> => Hi !
> =>
>
> [SNIP!]
>
> =>
> => On some service
Hi,
On Tue, 2011-11-01 at 11:29 +0200, sagar.shi...@tieto.com wrote:
> Hi,
>
>
>
> Following is my setup –
>
>
>
> Red Hat 6.0 -> 64-bit
>
> Cluster configuration using LUCI.
>
>
>
> I had set up a 2-node load-balancing cluster with the MySQL service
> active on both nodes using
Hi,
On Tue, 2011-08-30 at 16:25 +0200, Davide Brunato wrote:
> Hi,
>
> Steven Whitehouse wrote:
> > Hi,
> >
> > On Tue, 2011-08-30 at 12:51 +0200, Davide Brunato wrote:
> >> Hello,
> >>
> >> I've a Red Hat 5.7 2-node cluster for electro
Hi,
On Tue, 2011-08-30 at 12:51 +0200, Davide Brunato wrote:
> Hello,
>
> I've a Red Hat 5.7 2-node cluster for electronic mail services where the
> mailboxes (maildir format)
> are stored on a GFS2 volume. The volume contains about 750 files occupying ~740
> GB of disk space.
> Previous
run directly on gfs2, or via Samba in this case?
Steve.
>
> On 22/07/2011 12:32, Steven Whitehouse wrote:
> > Hi,
> >
> > On Fri, 2011-07-22 at 12:08 +0200, Jordi Renye wrote:
> >> Hi,
> >>
> >> We have configured redhat cluster RHEL 6.1 wi
Hi,
On Fri, 2011-07-22 at 12:08 +0200, Jordi Renye wrote:
> Hi,
>
> We have configured a Red Hat cluster on RHEL 6.1 with two nodes.
> We have seen that the write performance of GFS2 is
> half that of an ext3 partition.
>
> For example, the timings of these commands:
>
> time cp -Rp /usr /gfs2partition/usr
> 0.681u 47
> > filesystems, one issue that also springs to mind is commercial backup
> > systems that support GFS2 but don't support backing up via NFS.
> >
> > Is there anything else I should know about GFS2 limitations?
> > Is there a book "GFS: The Missing Manual"? :)
>
Hi,
On Mon, 2011-07-11 at 09:30 +0100, Alan Brown wrote:
> On 08/07/11 22:09, J. Bruce Fields wrote:
>
> > With default mount options, the linux NFS client (like most NFS clients)
> > assumes that a file has a most one writer at a time. (Applications that
> > need to do write-sharing over NFS ne
Hi,
On Fri, 2011-07-08 at 17:41 +0200, Javi Polo wrote:
> Hello everyone!
>
> I've set up a cluster in order to use GFS2. The cluster works really well ;)
> Then, I've exported the GFS2 filesystem via NFS to share with machines
> outside the cluster, and in a read fashion it works OK, but as soo
Hi,
On Wed, 2011-07-06 at 11:15 -0500, Paras pradhan wrote:
> Hi,
>
>
> My GFS2 Linux cluster has three nodes: two at the data center and one
> at the DR site. If the nodes at the DR site break/turn off, all the
> services move to the DR node. But if the 2 nodes at the data center lost
> communication wi
Hi,
On Tue, 2011-06-21 at 09:57 -0400, Nicolas Ross wrote:
> 8-node cluster, Fibre Channel HBAs and disks accessed through a QLogic fabric.
>
> I've been hit 3 times by this error on different nodes:
>
> GFS2: fsid=CyberCluster:GizServer.1: fatal: filesystem consistency error
> GFS2: fsid=CyberCl
Hi,
On Thu, 2011-06-09 at 12:45 +0300, Budai Laszlo wrote:
> Hi,
>
> I would like to know if it is possible to remove a journal from GFS. I
> have tried to Google for it, but did not find anything conclusive. I've
> read the documentation on the following address:
> http://docs.redhat.com/docs/e
Hi,
On Thu, 2011-06-02 at 11:47 +0100, Alan Brown wrote:
> Steven Whitehouse wrote:
>
> > The thing to check is what size the extents are...
>
> filefrag doesn't show this.
>
Yes it does. You need the -v flag
> > the on-disk layout is
> > designed so
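For reference, the extent sizes show up when filefrag is given the -v flag (the file path here is hypothetical):
# Print every extent of the file, including its length in blocks
filefrag -v /mnt/gfs2/archive/datafile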
Hi,
On Thu, 2011-06-02 at 10:34 +0100, Alan Brown wrote:
> GFS2 seems horribly prone to fragmentation.
>
> I have a filesystem which has been written to once (data archive,
> migrated from a GFS1 filesystem to a clean GFS2 fs) and a lot of the
> files are composed of hundreds of extents - most
Hi,
On Wed, 2011-05-18 at 18:34 +0100, Alan Brown wrote:
> Steven Whitehouse wrote:
> > Hi,
> >
> > On Wed, 2011-05-18 at 16:14 +0100, Alan Brown wrote:
> >> Bob, Steve, Dave,
> >>
> >> Is there any progress on tuning the size of the tables (RHE
Hi,
On Wed, 2011-05-18 at 16:14 +0100, Alan Brown wrote:
> Bob, Steve, Dave,
>
> Is there any progress on tuning the size of the tables (RHEL5) to allow
> larger values and see if they help things as far as caching goes?
>
There is a bz open, and you should ask for that to be linked to one of
y
t it will stay much longer in RHEL - until the end of the release, of
course) and use exclusively the system quota-tools package,
Steve.
> Abhijith Das wrote:
> > - Original Message -
> >
> >> From: "mr"
> >> To: "lin
Hi,
On Fri, 2011-05-13 at 18:21 -0400, Bob Peterson wrote:
> - Original Message -
> | On 12/05/11 00:32, Ramiro Blanco wrote:
> |
> | >> https://bugzilla.redhat.com/show_bug.cgi?id=683155
> | > Can't access that one: "You are not authorized to access bug
> | > #683155"
> |
> | There's no
Hi,
On Mon, 2011-05-09 at 15:32 +0200, mr wrote:
> Hello,
> I'm having a problem initialising gfs2 quota on my existing FS.
>
> I have a 2 TB gfs2 FS which is 50% used. I have decided to set up
> quotas. Setting warning and limit levels seemed OK - no errors (although
> I had to reset all my exis
Hi,
On Tue, 2011-04-19 at 20:47 +0600, Muhammad Ammad Shah wrote:
> Hello,
>
>
> I am using RHEL 5.3 and formatted the shared volume using gfs. How can I know
> whether it's GFS version 1 or GFS version 2?
>
>
>
> Thanks,
> Muhammad Ammad Shah
>
Well you sh
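One quick way to check, assuming the shared volume is visible as a block device (the device path is hypothetical), is to ask blkid or file what the superblock says:
# Either command reports the filesystem type recorded in the superblock
blkid /dev/vg_shared/lv_data     # prints TYPE="gfs" or TYPE="gfs2"
file -s /dev/vg_shared/lv_data   # e.g. "Linux GFS2 Filesystem (blocksize ...)"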