- Original Message -
| You know, that's a good point. We don't use GFS2 for any non-clustered
| fs right now, but why not? Are you saying I can do an online gfs2_grow
| even with lock_nolock?
|
| -Jeff
Hi Jeff,
Yes, you should be able to.
Regards,
Bob Peterson
Red Hat File Systems
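For the record, an online grow on a lock_nolock mount follows the same steps as on a clustered one; a hedged sketch (the LVM volume and mount point names below are made up for illustration, not from the thread):

```shell
# Grow a mounted GFS2 filesystem online (works with lock_nolock too):
# extend the underlying block device first, then run gfs2_grow against
# the mount point while the filesystem stays mounted.
lvextend -L +50G /dev/vg_data/lv_gfs2   # grow the logical volume
gfs2_grow /mnt/gfs2                     # grow the mounted filesystem
df -h /mnt/gfs2                         # confirm the larger size
```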
> Yes, ls -l will always take longer because it is not just accessing
the directory, but also every inode in the directory. As a result the
I/O pattern will generally be poor.
I know and accept that. It's common to most filesystems, but the access
time is particularly pronounced with GFS2 (pres
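The per-inode cost described above is easy to observe on any filesystem; a minimal sketch (the scratch directory is an assumption for illustration):

```shell
# Demonstrates why "ls -l" is slower than plain "ls": the long listing
# stat()s every entry, which on GFS2 means taking a glock per inode
# rather than performing a single directory read.
mkdir -p /tmp/lstest
cd /tmp/lstest
touch f{1..1000}
time ls >/dev/null      # one directory scan
time ls -l >/dev/null   # directory scan plus 1000 stat() calls
```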
> -Original Message-
> From: linux-cluster-boun...@redhat.com
[mailto:linux-cluster-boun...@redhat.com]
> On Behalf Of Bob Peterson
> Sent: Tuesday, February 15, 2011 11:24 AM
> To: linux clustering
> Subject: Re: [Linux-cluster] Cluster with shared storage on low budget
>
> - Original
> For the GFS2 glocks, that doesn't matter - all of the glocks are held
in a single hash table no matter how many filesystems there are.
Given nearly 4 million glocks currently on one of the boxes in a quiet
state (and nearly 6 million if everything was on one node), is the
existing hash table
Hi,
On Wed, 2011-02-16 at 19:41 +, Alan Brown wrote:
> > Directories of the size (number of entries) which you have indicated
> should not be causing a problem as lookup should still be quite fast at
> that scale.
>
> Perhaps, but even so 4000 file directories usually take over a minute to
>
> -Original Message-
> From: linux-cluster-boun...@redhat.com
[mailto:linux-cluster-boun...@redhat.com]
> On Behalf Of Nikola Savic
> Sent: Tuesday, February 15, 2011 3:09 PM
> To: linux clustering
> Subject: Re: [Linux-cluster] Cluster with shared storage on low budget
>
> Jeff Sturm wrot
Hi,
On Wed, 2011-02-16 at 19:36 +, Alan Brown wrote:
> > A faster way to just grab lock numbers is to grep for gfs2
> in /proc/slabinfo as that will show how many are allocated at any one
> time.
>
> True, but it doesn't show how many are used per fs.
>
For the GFS2 glocks, that doesn't matt
> Directories of the size (number of entries) which you have indicated
should not be causing a problem as lookup should still be quite fast at
that scale.
Perhaps, but even so 4000 file directories usually take over a minute to
"ls -l", while 85k file/directories take 5 mins (20-40 mins on a ba
> A faster way to just grab lock numbers is to grep for gfs2
in /proc/slabinfo as that will show how many are allocated at any one
time.
True, but it doesn't show how many are used per fs.
FWIW, here are current stats on each cluster node (it's evening and
lightly loaded)
gfs2_quotad
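The /proc/slabinfo approach can be scripted; a sketch (note that /proc/slabinfo is usually readable only by root, and the counts are summed across all mounted GFS2 filesystems rather than broken out per fs, which is exactly the limitation noted above):

```shell
# Print active vs total gfs2 slab objects (glocks, inodes, ...).
# Degrades to a message when no gfs2 slabs are visible.
out=$(grep '^gfs2' /proc/slabinfo 2>/dev/null)
if [ -n "$out" ]; then
    # slabinfo column 2 = active objects, column 3 = total allocated
    printf '%s\n' "$out" |
        awk '{printf "%-24s active=%-8s total=%s\n", $1, $2, $3}'
else
    echo "no gfs2 slabs visible (none mounted, or /proc/slabinfo needs root)"
fi
```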
Hi,
On Wed, 2011-02-16 at 19:07 +, Alan Brown wrote:
> Steve:
>
> To add some interest (and give you numbers to work with as far as dlm
> config tuning goes), here are a selection of real world lock figures
> from our file cluster (cat $d | wc -l)
>
> /sys/kernel/debug/dlm/WwwHome-gfs2_loc
Steve:
To add some interest (and give you numbers to work with as far as dlm
config tuning goes), here are a selection of real world lock figures
from our file cluster (cat $d | wc -l)
/sys/kernel/debug/dlm/WwwHome-gfs2_locks 162299 (webserver exports)
/sys/kernel/debug/dlm/soft2-gfs2_locks
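The `cat $d | wc -l` survey above can be wrapped in a loop over every lockspace; a sketch assuming debugfs is mounted at /sys/kernel/debug (`mount -t debugfs none /sys/kernel/debug`):

```shell
# Per-lockspace DLM lock counts, one line per GFS2 filesystem.
# Prints a note instead of failing when no lockspaces exist.
found=0
for d in /sys/kernel/debug/dlm/*_locks; do
    [ -e "$d" ] || continue                 # glob did not match anything
    printf '%-50s %s\n' "$d" "$(wc -l < "$d")"
    found=1
done
[ "$found" -eq 1 ] || echo "no DLM lockspaces found on this node"
```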
On Wed, Feb 16, 2011 at 02:12:30PM +, Alan Brown wrote:
> > You can set it via the configfs interface:
>
> Given 24Gb ram, 100 filesystems, several hundred million of files
> and the usual user habits of trying to put 100k files in a
> directory:
>
> Is 24Gb enough or should I add more memory
On Tue, Feb 15, 2011 at 09:07:31PM +0100, Marc Grimme wrote:
> Hi Steve,
> I think lately I observed a very similar behavior with RHEL5 and gfs2.
> It was a gfs2 filesystem that had about 2 million files with a sum of 2GB in a
> directory. When I did a du -shx . in this directory it took about 5 minutes
Hi,
On Wed, 2011-02-16 at 14:12 +, Alan Brown wrote:
> > You can set it via the configfs interface:
>
> Given 24Gb ram, 100 filesystems, several hundred million of files and
> the usual user habits of trying to put 100k files in a directory:
>
> Is 24Gb enough or should I add more memory? (
> You can set it via the configfs interface:
Given 24Gb ram, 100 filesystems, several hundred million of files and
the usual user habits of trying to put 100k files in a directory:
Is 24Gb enough or should I add more memory? (96Gb is easy, beyond that
is harder)
What would you consider safe
Hi,
On Wed, 2011-02-16 at 12:02 +, Alan Brown wrote:
> > There is a config option to increase the resource table size though,
> so perhaps you could try that?
>
> ..details?
>
>
You can set it via the configfs interface:
echo "4096" > /sys/kernel/config/dlm/cluster/rsbtbl_size
It doesn'
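A fuller version of that tuning step, as a hedged sketch: the configfs value only affects lockspaces created after it is set, so (to my understanding) it needs to be written before the GFS2 filesystems are mounted.

```shell
# Inspect, then raise, the DLM resource (rsb) hash table size via
# configfs. Set this BEFORE mounting the filesystems: existing
# lockspaces keep the table size they were created with.
cat /sys/kernel/config/dlm/cluster/rsbtbl_size    # current value
echo 4096 > /sys/kernel/config/dlm/cluster/rsbtbl_size
```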
> There is a config option to increase the resource table size though,
so perhaps you could try that?
..details?
--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
Hi,
Hello to all!
Could you please guide me on how to set up clustering as well as a load
balancer for a testing environment? What are the basic requirements?
I have three CentOS machines; Apache, MySQL and Postfix are running on
them.
--
*Regards,*
Anuj Si
Hi,
On Tue, 2011-02-15 at 21:07 +0100, Marc Grimme wrote:
> Hi Steve,
> I think lately I observed a very similar behavior with RHEL5 and gfs2.
> It was a gfs2 filesystem that had about 2 million files with a sum of 2GB in a
> directory. When I did a du -shx . in this directory it took about 5 minutes
>
- Original Message -
> From: "Shariq Siddiqui"
> To: linux4ora...@yahoogroups.com, linux-cluster@redhat.com
> Sent: Wednesday, 16 February, 2011 12:04:00 PM
> Subject: [Linux-cluster] RAW Devices performance issue
> Dear All,
>
> I am going to install Oracle RAC on two Servers, With s
On Wed, Feb 16, 2011 at 5:04 PM, Shariq Siddiqui
wrote:
>
>
> Dear All,
>
> I am going to install Oracle RAC on two Servers, With shared SAN storage
> (Servers and Storage is IBM)
> OS = RHEL 5u5 x64 bit
>
> And we used multipathing mechanism and created multipathing devices.
> i.e. /dev/mapper/m