On Tue, Aug 31, 2010 at 05:36:42PM -0400, Boris Epstein wrote:
> Nope, Red Hat backports the necessary bits from the newer kernels into
> their 2.6.18 "stable" release, so you should be all set.
>
> Ray
This is int
On Tue, Aug 31, 2010 at 05:12:24PM -0400, Boris Epstein wrote:
> On Tue, Aug 10, 2010 at 11:53 AM, Ray Van Dolson wrote:
> > On Tue, Aug 10, 2010 at 11:48:17AM -0400, Boris Epstein wrote:
> >> Hi all,
> >>
> >> If you have had experience hosting GFS/GFS2 on CentOS machines, could
> >> you share your
On Tue, Aug 10, 2010 at 11:53 AM, Ray Van Dolson wrote:
> On Tue, Aug 10, 2010 at 11:48:17AM -0400, Boris Epstein wrote:
>> Hi all,
>>
>> If you have had experience hosting GFS/GFS2 on CentOS machines, could
>> you share your general impression on it? Was it reliable? Fast? Any
>> issues or concern
On Tue, Aug 10, 2010 at 11:48:17AM -0400, Boris Epstein wrote:
> Hi all,
>
> If you have had experience hosting GFS/GFS2 on CentOS machines, could
> you share your general impression on it? Was it reliable? Fast? Any
> issues or concerns?
I've only run GFS2 on RHEL5. It's been quite reliable, but
Hi all,
If you have had experience hosting GFS/GFS2 on CentOS machines, could
you share your general impression on it? Was it reliable? Fast? Any
issues or concerns?
Also, how feasible is it to start it on just one machine and then grow
it out if necessary?
Thanks.
Boris.
On Mon, Jul 19, 2010 at 11:41:07AM -0400, Fred Wittekind wrote:
> Two web servers, both virtualized with CentOS Xen servers as host
> (residing on two different physical servers).
> GFS used to store home directories containing web document roots.
>
> Shared block device used by GFS is an iSCSI
Two web servers, both virtualized with CentOS Xen servers as host
(residing on two different physical servers).
GFS used to store home directories containing web document roots.
Shared block device used by GFS is an iSCSI target with the iSCSI
initiator residing on the Dom-0, and presented to Do
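(For anyone wondering what that looks like in practice, a minimal sketch
of the Dom-0 side; the device and guest names here are made up, not from
the original post:

  # /etc/xen/webserver1, disk line only: /dev/sdc is the iSCSI-backed
  # block device as seen on the Dom-0, xvdb is how the DomU will see it
  disk = [ 'phy:/dev/sdc,xvdb,w' ]

The guest then runs GFS against /dev/xvdb like any other shared block
device.)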
On Sun, 2009-05-03 at 06:09 -0700, Nifty Cluster Mitch wrote:
> On Wed, Apr 29, 2009 at 07:01:17PM +0800, Hairul Ikmal Mohamad Fuzi wrote:
> >
> > Hi all,
> >
> > We are running CentOS 5.2 64bit as our file server.
> > Currently, we use GFS (with CLVM underneath it) as our filesystem
> > (for o
On Wed, Apr 29, 2009 at 07:01:17PM +0800, Hairul Ikmal Mohamad Fuzi wrote:
>
> Hi all,
>
> We are running CentOS 5.2 64bit as our file server.
> Currently, we use GFS (with CLVM underneath it) as our filesystem
> (for our multiple 2TB SAN volume exports) since we plan to add more
> file servers
Filipe Brandenburger wrote:
>
> In general, having directories with a huge number of files tends to be
> a bad idea; you will most likely hit performance bottlenecks with
> specific filesystems or tools. If possible, try to change the
> application to create two or three levels of directories usi
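(A minimal sketch of that kind of two-level layout; the filename is made
up, and this assumes names are long enough to take a prefix from:

  # file abcdef12.dat goes into ab/cd/ instead of one flat directory
  f=abcdef12.dat
  d=${f:0:2}/${f:2:2}
  mkdir -p "$d" && mv "$f" "$d/"

If the prefix characters are hex digits, the fan-out at each level is
bounded at 256 subdirectories.)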
Hi,
On Wed, Apr 29, 2009 at 08:35, William L. Maltby wrote:
> One thing to keep in mind is that ls must sort the file list.
Not only sorting, but usually "ls" ends up trying to find out if the
file is a directory, which uses a "stat" syscall for each of the
files.
This is always expensive on re
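(A quick way to see the difference; the path is made up:

  ls -f /srv/bigdir | wc -l           # -f: no sorting, no per-file stat
  time ls -l /srv/bigdir > /dev/null  # -l stats every entry

On a directory with hundreds of thousands of files the second form can
be dramatically slower, especially on a cluster filesystem where stat
may involve lock traffic.)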
On Apr 29, 2009, at 8:35, William L. Maltby wrote:
> One thing to keep in mind is that ls must sort the file list. If the
> system load is high and memory is short, you may be getting into a swap
> situation. I suggest trying the test when the system is lightly loaded
> to see if the results di
On Wed, 2009-04-29 at 19:01 +0800, Hairul Ikmal Mohamad Fuzi wrote:
> Hi all,
>
> We are running CentOS 5.2 64bit as our file server.
> Currently, we use GFS (with CLVM underneath it) as our filesystem
> (for our multiple 2TB SAN volume exports) since we plan to add more
> file servers (serving
Hi,
independently of the results you have seen, it is generally reasonable to
tune a GFS filesystem as described here:
http://kbase.redhat.com/faq/docs/DOC-6533
especially:
mount with noatime, and
gfs_tool settune glock_purge 50
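For example (the device and mount point names below are made up):

  mount -o noatime /dev/clustervg/gfslv /mnt/gfs
  gfs_tool settune /mnt/gfs glock_purge 50   # purge 50% of unused glocks

or, to make noatime permanent, in /etc/fstab:

  /dev/clustervg/gfslv  /mnt/gfs  gfs  defaults,noatime  0 0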
Regards Marc.
On Wednesday 29 April 2009 13:01:17 Hairul Ikmal Mohamad Fu
Hi all,
We are running CentOS 5.2 64bit as our file server.
Currently, we use GFS (with CLVM underneath it) as our filesystem
(for our multiple 2TB SAN volume exports) since we plan to add more
file servers (serving the same contents) later on.
The issue we are facing at the moment is we found o
Dear List,
I have one last little problem with setting up a cluster. My GFS
mount will hang as soon as I do an iptables restart on one of the
nodes...
>>
>>> Undoubtedly someone else with more experience with GFS will give you an
>>> answer, but to me this makes me thi
on 2-17-2009 3:00 AM Sven Kaptein | MARS websolutions spake the following:
>>> Dear List,
>>>
>>> I have one last little problem with setting up a cluster. My GFS
>>> mount will hang as soon as I do an iptables restart on one of the
>>> nodes...
>
>> Undoubtedly someone else with more experien
On Tuesday 17 February 2009, Sven Kaptein | MARS websolutions wrote:
> > Undoubtedly someone else with more experience with GFS will give you an
> > answer, but to me this makes me think ip_conntrack stuff gets cleared
> > out and sessions have to reestablish themselves.
> >
> > Ray
>
> Ray,
>
> Th
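(If the cause really is the conntrack table being flushed on restart, one
way around it is to let cluster traffic through regardless of connection
state. A sketch of /etc/sysconfig/iptables fragments, where the node
subnet is made up and the ports are the RHEL5-era cluster suite defaults:

  -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
  -A INPUT -s 192.168.10.0/24 -p udp --dport 5404:5405 -j ACCEPT  # openais/cman
  -A INPUT -s 192.168.10.0/24 -p tcp --dport 21064 -j ACCEPT      # dlm

With per-node accept rules in place, losing conntrack state on a restart
should no longer break the dlm sessions.)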
>> Dear List,
>>
>> I have one last little problem with setting up a cluster. My GFS
>> mount will hang as soon as I do an iptables restart on one of the
>> nodes...
> Undoubtedly someone else with more experience with GFS will give you an
> answer, but to me this makes me think ip_conntrack
On Fri, Feb 13, 2009 at 06:36:22PM +0100, MARS websolutions wrote:
> Dear List,
>
> I have one last little problem with setting up a cluster. My GFS
> mount will hang as soon as I do an iptables restart on one of the
> nodes...
Undoubtedly someone else with more experience with GFS will give
Dear List,
I have one last little problem with setting up a cluster. My GFS
mount will hang as soon as I do an iptables restart on one of the
nodes...
First, let me describe my setup:
- 4 nodes, all running an updated CentOS 5.2 installation
- 1 Dell MD3000i iSCSI SAN
- All
Hi,
I found on the list that I can improve the performance of GFS with small
files if I adjust the rsbtbl_size/lkbtbl_size values.
But I also found that this has to be done after loading the dlm module, but
before the lockspace is created. What does "before the lockspace is cr
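(In practice "before the lockspace is created" means after the cluster
stack is up but before the first GFS mount, since mounting is what
creates the lockspace. A sketch, assuming the configfs interface that
RHEL5-era dlm exposes; the values and names are illustrative only:

  service cman start
  # the dlm configfs directory appears once the cluster stack is up;
  # set the table sizes before anything mounts GFS
  echo 1024 > /sys/kernel/config/dlm/cluster/rsbtbl_size
  echo 1024 > /sys/kernel/config/dlm/cluster/lkbtbl_size
  mount -t gfs /dev/clustervg/gfslv /mnt/gfs
)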
Hi,
My setup:
Two-node cluster (only to create a shared GFS file system) with manual fencing,
running on CentOS 4 update 5 for Oracle Apps.
The shared GFS partition is mounted on both nodes (active-active).
Whenever I run the df -h command there is some delay before it prints my shared
GFS partition; it is
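(On GFS, df is slow because statfs has to gather counts cluster-wide. A
commonly suggested mitigation, assuming a GFS version new enough to have
the tunable; the mount point here is made up:

  gfs_tool settune /mnt/gfs statfs_fast 1  # trade exact statfs numbers for speed
)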
Jay Leafey wrote:
> Another alternative that we are examining is using OCFS2 (Oracle Cluster
> File System 2) and iSCSI for the shared storage with Heartbeat for
> service management. This combination looks to be a bit "lighter" than
> the Cluster Suite and GFS, but I'm hoping to confirm or dispro
Mag Gam wrote:
Hello:
I am planning to implement GFS for my university as a summer project. I
have 10 servers each with SAN disks attached. I will be reading and
writing many files for professor's research projects. Each file can be
anywhere from 1k to 120GB (fluid dynamic research images). T
So, how do you have your setup?
How many nodes? I need something stable so I will look into GFSv1, but may
consider GFSv2 later on.
On Thu, May 29, 2008 at 5:16 AM, Karanbir Singh <[EMAIL PROTECTED]>
wrote:
> Mag Gam wrote:
> > I am planning to implement GFS for my university as a summer proje
Mag Gam wrote:
> I am planning to implement GFS for my university as a summer project. I
> have 10 servers each with SAN disks attached.
GFS works well; GFS2 is at the moment in technology-preview mode only,
but it's still worth looking at.
--
Karanbir Singh : http://www.karan.org/ : [EMAIL PROT
Hello:
I am planning to implement GFS for my university as a summer project. I have
10 servers each with SAN disks attached. I will be reading and writing many
files for professor's research projects. Each file can be anywhere from 1k
to 120GB (fluid dynamic research images). The 10 servers will b
No problem Scott, thanks for the reply, you're the only one that even
tried :). Our userbase here has become accustomed to being able to
check their quota from any machine they are on, and apparently not being
able to do so is just horrible from my boss's standpoint. If
there is no way t
> Subject: Re: [CentOS] GFS + quotas
>
> The gfs_quota command does NOT exist on clients that are mounting the
> cluster via NFS. On a standard NFS export from a Linux ext3 file
> system, when you run the quota command from a client, it makes an RPC
> call to the NFS s
Sorry, misread your requirement.
On Tue, May 13, 2008 at 12:55 PM, Doug Tucker <[EMAIL PROTECTED]> wrote:
> The gfs_quota command does NOT exist on clients that are mounting the
> cluster via NFS. On a standard NFS export from a Linux ext3 file
> system, when you run the quota command from a client
The gfs_quota command does NOT exist on clients that are mounting the
cluster via NFS. On a standard NFS export from a Linux ext3 file
system, when you run the quota command from a client, it makes an RPC
call to the NFS server, and the NFS server returns the quota on the
mounted file system...with gf
Use the gfs_quota command.
man gfs_quota
gfs_quota [OPTION]
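For example, run on one of the cluster nodes themselves rather than on an
NFS client (the mount point and user name below are made up):

  gfs_quota list -f /mnt/gfs                      # all quota values on the filesystem
  gfs_quota get -u someuser -f /mnt/gfs           # one user's usage and limits
  gfs_quota limit -u someuser -l 500 -f /mnt/gfs  # set a 500 MB limit

By default the sizes are in megabytes.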
On Mon, May 12, 2008 at 6:54 PM, Doug Tucker <[EMAIL PROTECTED]> wrote:
> I have 2 machines in a cluster using GFS, that many clients mount via
> NFS. We use quotas extensively here; is there a way from a client
> machine to ch
I have 2 machines in a cluster using GFS, that many clients mount via
NFS. We use quotas extensively here; is there a way from a client
machine to check a user's quota? The standard quota command on client
machines does not work like it does when checking a non-GFS NFS-mounted
file system. The quotas
Manish Kathuria wrote:
> Are the RPMs for the latest GFS kernel module
> (GFS-kernel-2.6.9-72.2.0.8) to be used with kernel version 2.6.9-55.0.9.EL
> available ? I tried to compile the Source RPMs available from the Red
> Hat site but the modules can't be loaded because of invalid module
> format arisi
Are the RPMs for the latest GFS kernel module
(GFS-kernel-2.6.9-72.2.0.8) to be used with kernel version 2.6.9-55.0.9.EL
available ? I tried to compile the Source RPMs available from the Red
Hat site but the modules can't be loaded because of invalid module
format arising from version magic issues. The
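(A quick way to confirm a version-magic mismatch; the module path is made
up:

  modinfo -F vermagic /path/to/gfs.ko
  uname -r

The vermagic string must match the running kernel's version and build
flags, or insmod/modprobe will refuse the module with "invalid module
format".)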
Hi,
try http://www.centos.org/docs/5/html/Global_File_System/ and
http://www.centos.org/docs/5/html/Global_File_System/s1-sysreq-rhcs.html
regards
-
Tomáš Ruprich [EMAIL PROTECTED]
OKV UIKT
Hi,
Does CentOS 5 support the GFS filesystem and the Red Hat cluster suite? I did not find anything in
the FAQs.
Regards
Joachim Backes <[EMAIL PROTECTED]>
University of Kaiserslautern,Computer Center [RHRK],
Systems and Operations, High Performance Computing,
D-67653 Kaiserslautern, PO Box 30
Tru Huynh wrote:
> On Mon, Jul 23, 2007 at 04:07:57PM +0100, James Fidell wrote:
> ...
>> lvcreate -m 1 ... /dev/sdb /dev/sdc /dev/sdd
>
> or use pvcreate /dev/md0 (md raid1 mirror of sda/sdb/sdc)?
AIUI, MD isn't cluster-{aware,safe} though, so I could end up with all
the servers that can see th
On Mon, Jul 23, 2007 at 04:07:57PM +0100, James Fidell wrote:
...
>
> lvcreate -m 1 ... /dev/sdb /dev/sdc /dev/sdd
or use pvcreate /dev/md0 (md raid1 mirror of sda/sdb/sdc)?
>
> where sd[bc] are the mirrored (iSCSI) PVs in the VG and sdd is the log.
> I have this working and can write data to t
I have a (CentOS4.5) cluster in which the servers mount a GFS partition
which is an LVM2 logical volume created as a mirror of two iSCSI-
connected drives (with a third for the log). The LV was created using a
command along the lines of:
lvcreate -m 1 ... /dev/sdb /dev/sdc /dev/sdd
where sd[bc
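(For reference, a fuller sketch of that kind of setup; the volume group
name and size are made up, standing in for the options elided by the
"..." above:

  pvcreate /dev/sdb /dev/sdc /dev/sdd
  vgcreate -c y sanvg /dev/sdb /dev/sdc /dev/sdd  # -c y: clustered VG for CLVM
  # two mirror legs on sdb/sdc, mirror log on sdd
  lvcreate -m 1 -L 200G -n gfslv sanvg /dev/sdb /dev/sdc /dev/sdd
)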