On 6 November 2015 at 17:22, Krutika Dhananjay wrote:
> Sure. So far I've just been able to figure that GlusterFS counts blocks in
> multiples of 512B while XFS seems to count them in multiples of 4.0KB.
> Let me again try creating sparse files on xfs, sharded and non-sharded
> gluster volumes an
Sure. So far I've just been able to figure out that GlusterFS counts blocks in
multiples of 512B while XFS seems to count them in multiples of 4.0KB.
Let me try again to create sparse files on xfs and on sharded and non-sharded
gluster volumes and compare the results. I'll let you know what I find.
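For instance, a rough comparison could be made like this (the mount points below
are only examples, not our actual setup):

# truncate -s 1G /mnt/xfs/sparse.img        <-- sparse file directly on XFS
# truncate -s 1G /mnt/gluster/sparse.img    <-- same file on a gluster mount
# stat -c 'size=%s blocks=%b blksize=%B' /mnt/xfs/sparse.img /mnt/gluster/sparse.img
# du -h --apparent-size /mnt/xfs/sparse.img /mnt/gluster/sparse.img

%b is st_blocks (always counted in 512B units by stat) and %B is the block size
reported for it, which should make the difference in accounting visible.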
-Krutika
CC'd him only now.
- Original Message -
> From: "Krutika Dhananjay"
> To: "Lindsay Mathieson"
> Cc: "gluster-users"
> Sent: Friday, November 6, 2015 11:05:27 AM
> Subject: Re: [Gluster-users] File Corruption with shards - 100% reproducable
> CC'ing Raghavendra Talur, who is managing
CC'ing Raghavendra Talur, who is managing the 3.7.6 release.
-Krutika
- Original Message -
> From: "Lindsay Mathieson"
> To: "Krutika Dhananjay"
> Cc: "gluster-users"
> Sent: Thursday, November 5, 2015 7:17:35 PM
> Subject: Re: [Gluster-users] File Corruption with shards - 100% repr
Hi,
Yesterday, while debugging a production setup, I found multiple
directories with the same GFID (a rename/lookup race) and some files
without the GFID xattr (brick crash/hard reboot during create).
I wrote a script to detect these:
https://gist.github.com/aravindavk/29f673f13c2f8963447e
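To spot-check a single directory by hand, the GFID xattr can be read directly
(the brick path below is only an example):

# getfattr -n trusted.gfid -e hex /bricks/brick1/path/to/dir    <-- prints the GFID, or errors out if the xattr is missing

Comparing the value across bricks/directories shows up the duplicates.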
>> [glusterd-store.c:4243:glusterd_resolve_all_bricks] 0-glusterd:
>> resolve brick failed in restore
The above log is the culprit here. Generally this function fails when
GlusterD fails to resolve the host associated with a brick. Did any of the
nodes undergo an IP change during the upgrade process?
Did you upgrade all the nodes too?
Are any of your nodes unreachable?
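A few things worth checking on the affected node (the paths below are the usual
glusterd working directory; adjust if your installation uses a different one):

# gluster peer status                                  <-- every peer should show "Peer in Cluster (Connected)"
# grep -H hostname /var/lib/glusterd/peers/*           <-- hostnames/IPs glusterd has stored for each peer
# grep -H hostname /var/lib/glusterd/vols/*/bricks/*   <-- hosts the brick entries are resolved against

If the hostnames/IPs in those files no longer map to a known peer, brick
resolution fails exactly like this at restore time.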
Adding gluster-users for glusterd error.
On 11/06/2015 12:00 AM, Stefano Danzi wrote:
After upgrading oVirt from 3.5 to 3.6, glusterd fails to start when the
host boots.
Starting the service manually after boot works fine.
glu
Thomas,
You seem to be on the right track; that is the easiest way to replace a brick
without any hassle.
Here is the set of steps I usually follow:
# mkdir -p /bricks/brick1    <-- mountpoint of the new brick
# mkdir -p /bricks/brick1/.glusterfs/00/00
# cd /bricks/brick1/.glusterfs/00/00
# ln -s ../../.
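The continuation of those steps, as I usually run them, is roughly the following
(the volume-id value is read off a healthy brick; the old brick path is only an
example):

# ln -s ../../.. 00000000-0000-0000-0000-000000000001       <-- root directory's gfid entry, pointing back at the brick root
# getfattr -d -m. -e hex /bricks/old-brick | grep volume-id  <-- read trusted.glusterfs.volume-id from a healthy brick
# setfattr -n trusted.glusterfs.volume-id -v 0x<value-from-above> /bricks/brick1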
On 11/06/2015 06:06 AM, Gmail wrote:
> Does anyone run Gluster on IPv6???
We are already working on this. A patch [1] to fix all IPv6 issues is
already under review. The plan is to get complete support into the 3.8
release. I will keep you posted once it gets merged into mainline.
[1] http://review.g
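In the meantime, the knob that work is built around is the address-family option
in glusterd's own volfile; I cannot promise 3.7.5 honours it everywhere, so treat
this as a sketch only:

option transport.address-family inet6    <-- added inside the "volume management" block of /etc/glusterfs/glusterd.vol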
Does anyone run Gluster on IPv6???
-Bishoy
> On Oct 30, 2015, at 1:14 PM, Gmail wrote:
>
> Hello,
>
> I’m trying to use IPv6 with Gluster 3.7.5, but when I do peer probe, I get
> the following error:
>
> peer probe: failed: Probe returned with Transport endpoint is not connected
>
> and the
Hi Jeremy
I have found that glusterfs 3.7.5 dies when I disconnect from the client
if glusterfs was started from the command line, but it is stable when started
with a systemd unit file. You might try using an upstart unit instead of
init.d scripts.
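A minimal unit of the kind I mean would look roughly like this (paths are the
usual package defaults, so adjust for your install):

[Unit]
Description=GlusterFS management daemon
After=network.target

[Service]
Type=forking
PIDFile=/var/run/glusterd.pid
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
KillMode=process

[Install]
WantedBy=multi-user.target

Saved as /etc/systemd/system/glusterd.service and activated with
"systemctl enable glusterd && systemctl start glusterd".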
Cheers,
Wade.
On 6/11/2015 9:43 AM, Jeremy Koerb
Hi all,
I recently upgraded to gluster 3.7.5 on a 6 node dist-repl cluster (one
brick each), along with about 10 clients. I was on 3.7.4 previously and it
was stable. With 3.7.5 (running as a daemon, installed via
*ppa:gluster/glusterfs-3.7*), the glusterd process shuts down unexpectedly
after less
Hi,
A small update: since nothing else worked, I broke down and changed the
replacement system's IP and hostname to those of the broken system,
replaced its UUID with that of the downed machine, and probed it back
into the gluster cluster. I had to restart glusterd several times to make
the other syst
Any news?
glusterfs version is 3.7.5
On 30/10/2015 17:51, Marco Lorenzo Crociani wrote:
Hi Susant,
here the stats:
[root@s20 brick1]# stat .* *
  File: `.'
  Size: 78           Blocks: 0           IO Block: 4096   directory
Device: 811h/2065d   Inode: 2481712637   Links: 7
Access: (0755/drwxr-xr-
On 11/05/2015 10:13 AM, Surya K Ghatty wrote:
> All... I need your help! I am trying to set up a highly available
> Active-Active Ganesha configuration on two glusterfs nodes based on the
> instructions here:
>
> https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Configuring%20HA%20NFS%20Ser
All... I need your help! I am trying to set up a highly available
Active-Active Ganesha configuration on two glusterfs nodes based on the
instructions here:
https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Configuring%20HA%20NFS%20Server/
and
http://www.slideshare.net/SoumyaKoduri/high-
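As far as I understand from those guides, once ganesha-ha.conf is in place the
cluster side is driven by these two commands (the volume name below is just an
example):

# gluster nfs-ganesha enable                    <-- brings up the pacemaker/corosync HA cluster described in the guide
# gluster volume set myvol ganesha.enable on    <-- exports the volume through NFS-Ganesha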
On 5 November 2015 at 21:55, Krutika Dhananjay wrote:
> Although I do not have experience with VM live migration, IIUC, it has to
> do with a different server (and as a result a new glusterfs client
> process) taking over the operations and mgmt of the VM.
>
That sounds very plausible.
> I
On 5 November 2015 at 21:19, Krutika Dhananjay wrote:
> Just to be sure, did you rerun the test on the already broken file
> (test.bin) which was written to when strict-write-ordering had been off?
> Or did you try the new test with strict-write-ordering on a brand new file?
>
Very strange. I tr
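(For reference, the option being discussed is set per volume like this; the
volume name is only an example:)

# gluster volume set myvol performance.strict-write-ordering on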
Hi all,
We have a 4-node distributed/replicated setup (2 x 2) with gluster version
3.6.4.
Yesterday one node went down due to a power failure; as expected, everything
kept working well. But after we brought the failed node back up, gluster
started, also as expected, its self-healing process. Th
Hi,
Although I do not have experience with VM live migration, IIUC, it has to do
with a different server (and as a result a new glusterfs client process) taking
over the operations and mgmt of the VM.
If this is a correct assumption, then I think this could be the result of the
same cachin
OK. I am not sure what it is that we're doing differently. I tried the steps
you shared and here's what I got:
[root@dhcp35-215 bricks]# gluster volume info
Volume Name: rep
Type: Replicate
Volume ID: 3fd45a4b-0d02-4a44-b74a-41592d48e102
Status: Started
Number of Bricks: 1 x 3 = 3
Transpo