glusterfs is a distributed file system, fair enough: easy to
maintain and very friendly to the user.
Still, comparing it against a raw (local) file system, as
I do via a local mount point backed by a single-brick
volume, would be a valid route to see what glusterfs does
when most of the
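A minimal sketch of that comparison setup (hostname and brick path are hypothetical; this assumes glusterd is already running):

    gluster volume create testvol server1:/data/brick1   # single brick, no replication
    gluster volume start testvol
    mount -t glusterfs server1:/testvol /mnt/gluster     # FUSE mount of the volume
    # benchmark /mnt/gluster against /data/brick1 directly to see
    # what overhead the GlusterFS layer adds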
Hello, all.
I'm trying to get glusterfs working on two machines (so that I can have
replicated storage on both of them) and I'm stuck on getting glusterd
working.
The two machines are Debian 6.0 (Squeeze) and I'm using the glusterfs
packages from the backports repo (3.2.4-1~bpo60+1).
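For reference, a minimal sketch of a two-node replicated setup (hostnames and brick paths are hypothetical), assuming glusterd is running on both machines:

    gluster peer probe node2                  # run from node1; builds the trusted pool
    gluster volume create repvol replica 2 \
        node1:/export/brick node2:/export/brick
    gluster volume start repvol
    mount -t glusterfs node1:/repvol /mnt/repvol   # clients can mount from either node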
We are setting up a 180-node cluster for weather modeling: 2 storage
servers with 32 GB RAM each, QDR InfiniBand interconnect.
When we run iozone with 1 GB per thread (128 KB block size) from 32 clients (2
iozone threads per client), the run succeeds.
However, the run fails for 64 clients and we start
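For context, an iozone cluster run of that shape might look like the sketch below (the machine-list file is hypothetical; each line in it names a client host, its working directory, and the iozone path):

    # 32 clients x 2 threads = 64 threads, 1 GB per thread, 128 KB records
    iozone -+m machines.txt -t 64 -s 1g -r 128k -i 0 -i 1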
Can you explain where glusterfs is being used? Is this lockup
happening on a VM running on a file-disk-image on top of gluster?
Is gluster itself causing this timeout?
On Wed, May 9, 2012 at 6:59 PM, chyd c...@ihep.ac.cn wrote:
Hi all,
I'm encountering a lockup problem many times when
Pasting an email from bugzilla-announce:
Red Hat Bugzilla (bugzilla.redhat.com) will be unavailable on May 22nd starting
at 6 p.m. EDT [2200 UTC] to perform an upgrade from Bugzilla 3.6 to Bugzilla
4.2. We are hoping to be complete in no more than 3 hours barring any problems.
Any services
Are there any free disk space requirements that should be observed for clients
(native/NFS) that mount GlusterFS volumes? Does the io-cache translator use
local disk storage for caching? As an example, a GlusterFS volume (1GB) is
mounted on a client at /home/mnt/volA (/home/ is on partition 1).
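As far as I know, the io-cache translator keeps its cache in memory (bounded per volume), not on local disk. A sketch of tuning that bound, assuming the volume name volA:

    gluster volume set volA performance.cache-size 256MB
    gluster volume info volA    # the option appears under 'Options Reconfigured'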
Hi,
I am using gluster on a physical machine (CPU: 2x Xeon E5620, MEM: 24 GB, 1 Gbps
network link, CentOS 6.0, Linux 2.6.32-71.el6.x86_64). When reading or writing
small numbers of files, the system is fine. But when too many files are
accessed concurrently, the problem sometimes occurs.
It's a kernel bug that is already fixed in RHEL 6.2; try the 2.6.32-220
kernel and it will work fine for you.
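A quick way to check and pick up the fix (assuming the CentOS 6.2 repositories are enabled):

    uname -r              # confirm the running kernel version
    yum update kernel     # pulls in a 2.6.32-220.* kernel on CentOS 6.2
    reboot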
On Fri, May 11, 2012 at 12:24 PM, 程耀东 c...@ihep.ac.cn wrote:
Hi,
I am using gluster on a physical machine (CPU: 2x Xeon E5620, MEM: 24 GB,
1 Gbps network link, CentOS 6.0, Linux
Here is a basic question:
Are we more likely to avoid a split-brain scenario if we have more than 2
bricks total? IOW, one brick each on at least three servers?
Thanks.
-Geoff
--
Geoff Galitz, ggal...@shutterstock.com
WebOps Engineer, Europe
Shutterstock Images
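For what it's worth, a three-way replica is created like the sketch below (hostnames and brick paths are hypothetical); with three copies, a majority can usually still be established when one server is cut off, which makes split-brain less likely than with replica 2:

    gluster volume create gv0 replica 3 \
        server1:/export/brick server2:/export/brick server3:/export/brick
    gluster volume start gv0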
On 05/10/2012 07:14 PM, Emmanuel Seyman wrote:
root@titane:~# gluster peer status
Run 'glusterd --debug' on titane, and see what the logs say.
-Amar
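A sketch of that debug run (paths are the defaults; stop the daemon first so the foreground instance can bind):

    /etc/init.d/glusterd stop
    glusterd --debug                 # runs in the foreground with debug-level logging
    # logs are also written under /var/log/glusterfs/ for later inspection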
___
Gluster-users mailing list
Gluster-users@gluster.org