Hi,
I am trying to run SPEC SFS 2014 on a GlusterFS deployment. I am mounting
GlusterFS on my client using sudo mount -t glusterfs myserver:/bfsvolume
/mnt/gluster and am using GlusterFS version 3.6.2 on both clients and
servers. The benchmark starts and passes the initialization phase (writing file
On 04/09/2015 07:33 PM, Jeff Darcy wrote:
I was under the impression that gluster replication was synchronous, so the
appserver would not return to the client until the created file was
replicated to the other server. But this does not seem to be the case,
because sleeping a little bit
Hi all,
I have a 14-node cluster with two volumes: one replica 7 and one stripe 7.
Last night node 10 added a bad peers file and stopped.
I corrected it using the data in glusterd.info and removed the bad
peers file.
Now all the nodes are stopped with no glusterd service, and node 10 fails
to start.
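For anyone hitting the same thing, a hedged sketch of where glusterd keeps
this state on a stock install (worth checking before editing anything by hand):

    cat /var/lib/glusterd/glusterd.info   # this node's UUID and operating-version
    ls /var/lib/glusterd/peers/           # one file per peer, named by the peer's UUID
    gluster peer status                   # what glusterd currently believes about the pool

A peers file whose uuid= line doesn't match any real node is the kind of
stale entry that can keep glusterd from starting.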
I was under the impression that gluster replication was synchronous, so the
appserver would not return to the client until the created file was
replicated to the other server. But this does not seem to be the case,
because sleeping a little bit always seems to make the read failures go
Hi All,
Getting back to you.
I have updated all the nodes: CentOS 6.6 and the latest Gluster release
on all of them.
All nodes are OK except node 10, which does not start the glusterd daemon.
Many thanks in advance.
Hi all,
I have a 14-node cluster with two volumes: one replica 7 and one stripe 7.
This
We have a Gluster replicated setup with 2 servers. Each server also runs an
app server that functions as a client of the gluster files. Client access
to the appservers is load balanced using round robin.
Sometimes, when a client creates a new file and then immediately tries to
read it, the read
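A hedged repro sketch of the symptom, assuming each app server reads through
its own local gluster mount (hostnames and paths here are placeholders):

    # write through appserver1's mount, then immediately read through appserver2's
    ssh app1 'echo hello > /mnt/gluster/newfile.txt'
    ssh app2 'cat /mnt/gluster/newfile.txt'   # intermittently: No such file or directory

With round-robin balancing, the create and the follow-up read can land on
different mounts, which is what exposes the window.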
- Original Message -
From: Punit Dambiwal hypu...@gmail.com
To: Vijay Bellur vbel...@redhat.com
Cc: gluster-users@gluster.org
Sent: Wednesday, April 8, 2015 9:55:38 PM
Subject: Re: [Gluster-users] Glusterfs performance tweaks
Hi Vijay,
If I run the same command directly on the
On 04/09/2015 07:26 PM, Eric Mortensen wrote:
Yes, that is correct: the read is from a different mount (different
brick/server). The error from the app server when reading is that the
file was not found. (The actual error from the OS, which leads to the
404 error from the app server, is not visible, I
Jeff: I don't really understand how a write-behind translator could keep data
in memory before flushing to the replication module if the replication is
synchronous. Or put another way, from whose perspective is the replication
synchronous? The gluster daemon or the creating client?
That's
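If write-behind turns out to be involved, a hedged sketch of how to take it
out of the picture for a test (the volume name is a placeholder; the option
is a standard gluster volume option):

    gluster volume set myvolume performance.write-behind off   # send writes straight to replication
    gluster volume info myvolume                               # check under 'Options Reconfigured'

This trades write latency for a smaller window between a write returning and
the data being readable through another mount.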
I think we also need version-based documentation, something similar to
http://guides.emberjs.com/v1.11.0/
--
regards
Aravinda
http://aravindavk.in
On 04/09/2015 07:24 PM, Sankarshan Mukhopadhyay wrote:
On Mon, Mar 23, 2015 at 12:31 PM, Shravan Chandrashekar
schan...@redhat.com wrote:
Hence,
Hi all,
Thanks for the helpful responses, everyone; I will look at the mounting
options suggested.
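(For anyone following along, a hedged sketch of the cache-related FUSE mount
options that usually come up in these threads; server and volume names are
placeholders:

    sudo mount -t glusterfs -o attribute-timeout=0,entry-timeout=0 \
        myserver:/myvolume /mnt/gluster

Setting both timeouts to 0 stops the kernel from caching attributes and
directory entries, at the cost of extra round trips.)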
Just a quick question to confirm my understanding. When we all say
replication is synchronous, does that mean that each of the filesystem
operations on appserver1 that writes a chunk of bytes to the
Hi Sander,
It sounds to me like this triggered self-healing, which does a scan of
the bricks. Depending on the number of files on the brick, it can use a lot of
CPU.
Do the logs say anything useful?
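A hedged sketch of how to check whether self-heal is what's running (volume
name is a placeholder):

    gluster volume heal myvolume info         # entries still pending heal
    gluster volume heal myvolume statistics   # self-heal crawl statistics

If the pending count drops over time, the CPU load should subside once the
crawl finishes.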
Grtz,
Jiri Hoogeveen
On 09 Apr 2015, at 14:18, Sander Zijlstra
On Mon, Mar 23, 2015 at 12:31 PM, Shravan Chandrashekar
schan...@redhat.com wrote:
Hence, we started by curating content from various sources including
gluster.org static HTML documentation, the glusterfs GitHub repository,
various blog posts and the Community wiki. We also felt the need to improve
LS,
We have a GlusterFS cluster which consists of 4 nodes with one brick each and a
distributed-replicated volume of 72 TB.
Today I extended the cluster to 8 machines and added new bricks to the volume,
so it now contains 8 bricks.
I haven’t started the rebalance yet, to limit the impact during
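When you do kick it off, a hedged sketch of the usual sequence (volume name
is a placeholder; the add-brick step is the one already done above):

    gluster volume rebalance myvolume start
    gluster volume rebalance myvolume status   # per-node files scanned/rebalanced

Running it off-peak keeps the client impact down while the data moves.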
Ok, that made a lot of sense. I guess what I was expecting was that the
writes were (close to) immediately consistent, but Gluster is rather
designed to be eventually consistent.
Thanks for explaining all that.
Eric
On Thu, Apr 9, 2015 at 5:45 PM, Jeff Darcy jda...@redhat.com wrote:
Jeff: I
Hi Ben,
- Scheduler {noop or deadline}: *noop*
- No read-ahead: *yes*
- No RAID: *yes, no RAID*
- Make sure the kernel sees them as SSDs: *yes*
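For reference, a hedged sketch of how those settings are usually applied per
disk (assuming the bricks sit on /dev/sdb; the device name is a placeholder):

    echo noop > /sys/block/sdb/queue/scheduler    # select the noop I/O scheduler
    echo 0 > /sys/block/sdb/queue/read_ahead_kb   # disable kernel read-ahead
    cat /sys/block/sdb/queue/rotational           # 0 = kernel treats it as an SSD

These don't persist across reboots unless set via a udev rule or rc.local.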
On Thu, Apr 9, 2015 at 10:04 AM, Punit Dambiwal hypu...@gmail.com wrote:
Hi Ben,
Yes... I am using 2*10G (bonding, LACP)...
[root@cpu02 ~]#
On 04/09/2015 04:39 PM, Eric Mortensen wrote:
We have a Gluster replicated setup with 2 servers. Each server also
runs an app server that functions as a client of the gluster files.
Client access to the appservers is load balanced using round robin.
Sometimes, when a client creates a new
Ok, that made a lot of sense. I guess what I was expecting was that the
writes were (close to) immediately consistent, but Gluster is rather
designed to be eventually consistent.
All distributed file systems are, to some extent; we just try to be
clearer than most about what the guarantees
^ that's even if it's just a typo fix.
This could be as simple as: `git pull`
Instead of generating, having a ground-up static site can solve this for us.
This is actually not true. The Middleman/Ruby framework is wired up
so that it builds the site when you push the change. For a simple