Hi,
I have seen that Gluster performance is dead slow on small files... even
though I am using SSDs, the performance is bad. I am getting better
performance on my SAN with plain SATA disks...
I am using a distributed-replicated GlusterFS volume with replica count=2... I have
all SSD disks on the b
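Since this comes up a lot: small-file workloads on Gluster are usually bound by per-file lookup and metadata round-trips over the network rather than by disk speed, which is why SSD bricks alone may not help. A minimal sketch of a test that exercises exactly that path, assuming a hypothetical FUSE mount at /mnt/gv0 (path and counts are placeholders):

mkdir -p /mnt/gv0/smallfile-test
# create 10000 4 KB files; each create costs several network round-trips
time bash -c 'for i in $(seq 1 10000); do
  dd if=/dev/zero of=/mnt/gv0/smallfile-test/f$i bs=4k count=1 2>/dev/null
done'
# read them back; lookups dominate here, not raw throughput
time bash -c 'for i in $(seq 1 10000); do cat /mnt/gv0/smallfile-test/f$i >/dev/null; done'

Comparing that timing against the same loop run on the SAN gives a more telling number than raw dd throughput.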
On 2015-02-12 02:55, Atin Mukherjee wrote:
> On 02/12/2015 12:36 AM, Ernie Dunbar wrote:
>
>> I nuked the entire partition with mkfs, just to be *sure*, and I still get
>> the error message: volume create: gv0: failed: /brick1/gv0 is already part
>> of a volume. Clearly, there's some bit of
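For context on this recurring error: glusterd stamps the brick directory with a trusted.glusterfs.volume-id extended attribute and keeps a .glusterfs directory inside it, so the check can still trip after a reformat if the directory (or a parent carrying the xattr) survived. A sketch of how one might inspect and, only if the old volume is really gone, clear the leftovers, using the /brick1/gv0 path from the error message:

# show any gluster xattrs on the brick path and its parent
getfattr -m . -d -e hex /brick1/gv0
getfattr -m . -d -e hex /brick1

# clear them before recreating the volume
setfattr -x trusted.glusterfs.volume-id /brick1/gv0
setfattr -x trusted.gfid /brick1/gv0
rm -rf /brick1/gv0/.glusterfs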
Hi,
We had an outage in our gluster setup and I had to rm -rf /var/lib/gluster/*.
I rebuilt the setup from scratch, but the volumes have both op-version and
client-op-version set to 2, while the op-version in glusterd.info is 30501.
The main issue is that when I try to set a property it fails only for ce
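A sketch of where these values live on disk and how they can be compared; the last command raises the cluster op-version explicitly, but that volume-set key may not exist on older releases, so treat it as version-dependent:

# cluster-wide operating version recorded by glusterd
grep operating-version /var/lib/glusterd/glusterd.info

# per-volume op-version / client-op-version as stored by glusterd
grep -E 'op-version' /var/lib/glusterd/vols/*/info

# on recent enough releases the cluster op-version can be bumped explicitly
gluster volume set all cluster.op-version 30501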
Sorry to top-post, Roundcube doesn't do quoting properly.
The answer to your question is yes, both NFS1 and NFS2 had /brick1
formatted. I found some information about the brick in /
Also, here are my logs from the last time I tried to create the volume
on NFS1:
[2015-02-11 18:59:58.394420] E
Am I the only one experiencing this? Do you guys have proper statistics?
On Wed, Feb 11, 2015 at 1:29 PM, Rumen Telbizov wrote:
> Hello everyone,
>
> I have the following situation. I put some read and write load on my test
> GlusterFS setup as follows:
>
> # dd if=/dev/zero of=file2 bs=1M count
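The quoted command is cut off in the digest; a typical pair of commands for this kind of load test would look roughly like the following (file name, block size and count are placeholders, and the direct flags are only there to bypass the client page cache):

# sequential write load on the mounted volume
dd if=/dev/zero of=file2 bs=1M count=1024 oflag=direct
# sequential read of the same file
dd if=file2 of=/dev/null bs=1M iflag=direct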
Hi,
one of our gluster nodes (gluster03-mi) seems to have a duplicate UUID:
[root@gluster01-mi peers]# gluster peer status
Number of Peers: 7
Hostname: gluster03-mi
Uuid: 9c62532f-7901-4d16-8eda-0ff9a5dfccca
State: Peer in Cluster (Connected)
Hostname: gluster04-mi
Uuid: 52755abd-f7f5-41b0-a0f8-2600e7
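A sketch of how one might track down where the duplicate entry comes from; glusterd keeps one file per known peer under /var/lib/glusterd/peers/ (named after the peer's UUID) and the node's own UUID in glusterd.info, so a host listed twice in peer status usually means a stale peer file is still present on one of the nodes:

# list which peer file claims which hostname, on each node
grep -H hostname /var/lib/glusterd/peers/*

# the node's own identity as gluster03-mi sees it
cat /var/lib/glusterd/glusterd.info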
Hi,
I was wondering whether turning on the performance.flush-behind option is dangerous
in terms of data integrity. Reading the documentation, it seems to me that I
could benefit from it, especially since I have a lot of small files, but I would
like to stay on the safe side. So if anyone could tell me
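For what it's worth, flush-behind is a write-behind translator option that lets flush/close calls return before pending writes have reached the bricks, which is exactly the integrity trade-off being asked about. A sketch of how it can be checked and toggled per volume (VOLNAME is a placeholder):

# volume info only lists options that have been explicitly changed;
# anything not shown is at its default
gluster volume info VOLNAME | grep flush-behind

gluster volume set VOLNAME performance.flush-behind on
gluster volume set VOLNAME performance.flush-behind off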
Hello,
I am testing GlusterFS under Apache+Joomla to store the DocumentRoot. I'm
using CentOS 7 and the gluster client 3.4.
After installing components/plugins, when I browse the site I get a lot of
messages like:
[2015-02-12 15:14:11.327225] I
[dht-common.c:1000:dht_lookup_everywhere_done] 0-volume1-dh
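The quoted messages are informational (the leading 'I') from the DHT translator rather than errors; if they are only noise, one option is to raise the client log level so INFO messages stop being written. A sketch, reusing the volume name that appears in the log line:

# log only WARNING and above on clients of this volume
gluster volume set volume1 diagnostics.client-log-level WARNING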
On 11 Feb 2015, at 19:06, Ernie Dunbar wrote:
> I nuked the entire partition with mkfs, just to be *sure*, and I still get
> the error message:
>
> volume create: gv0: failed: /brick1/gv0 is already part of a volume
>
> Clearly, there's some bit of data being kept somewhere else besides in
> /
Hi, everybody
I have one question.
There are too many duplicate error log entries on my gluster client server:
** /var/log/glusterfs/gluster.log
[2015-02-12 08:42:43.476938] W [rpc-clnt-ping.c:145:rpc_clnt_ping_cbk]
gluster_01-client-1: socket or ib related error
[2015-02-12 08:42:43.47704
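That warning is the client's periodic ping to a brick failing, so the usual first step is to confirm that every brick process is running and that the client can actually reach its port. A sketch (hostname and port below are placeholders; use whatever gluster volume status reports):

# list brick processes and the TCP ports they listen on
gluster volume status

# from the client, probe one brick port
nc -vz gluster01 49152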
Hi!
I've recently added another node to a gluster volume (glusterfs v3.4.6)
because it was getting full, but I still receive the following warning
in the logs (/var/log/glusterfs/VOLUME-NAME.log) every 2-3 days:
[2015-02-09 09:06:55.625497] W [dht-diskusage.c:232:dht_is_subvol_filled
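dht_is_subvol_filled fires when one of the distribute subvolumes has crossed the cluster.min-free-disk threshold, and adding a brick does not move existing data by itself, so a rebalance is the usual follow-up. A sketch, reusing the VOLUME-NAME placeholder from the log path (the min-free-disk value shown is the commonly cited default, not something from this thread):

# spread existing data onto the newly added brick(s)
gluster volume rebalance VOLUME-NAME start
gluster volume rebalance VOLUME-NAME status

# threshold that triggers the warning (default is around 10%)
gluster volume set VOLUME-NAME cluster.min-free-disk 10%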
On 02/12/2015 12:36 AM, Ernie Dunbar wrote:
>
>
> I nuked the entire partition with mkfs, just to be *sure*, and I still
> get the error message:
>
> volume create: gv0: failed: /brick1/gv0 is already part of a volume
>
> Clearly, there's some bit of data being kept somewhere else besides
On 02/12/2015 02:07 PM, ML mail wrote:
Thanks Pranith for the details. So with that option one would be
trading data consistency for performance. I am now interested to hear
about Nico's new tests with this option disabled...
On Thursday, February 12, 2015 9:23 AM, Pranith Kumar Karampuri
Thanks Pranith for the details. So with that option one would be trading data
consistency for performance. I am now interested to hear about Nico's new
tests with this option disabled...
On Thursday, February 12, 2015 9:23 AM, Pranith Kumar Karampuri
wrote:
On 02/12/2015 01:17
On 02/12/2015 01:17 PM, ML mail wrote:
Dear Pranith
I would be interested to know what the cluster.ensure-durability off
option does. Could you explain or point me to the documentation?
By default the replication translator does fsyncs on the files at certain
times so that it doesn't lose data when
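For reference, a sketch of how the option under discussion can be inspected and changed (VOLNAME is a placeholder); as the thread notes, turning it off trades durability of recently written data for speed, because the replicas stop issuing those extra fsyncs:

# shows the option only if it has been changed from its default
gluster volume info VOLNAME | grep ensure-durability

gluster volume set VOLNAME cluster.ensure-durability off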