Grant,
Thanks for the reminder. Just forked, uploaded and submitted the PR.
Chuck
On Fri, 2016-03-18 at 10:27 -0700, Grant Ridder wrote:
> Charles,
>
> Thanks for the write-up! I see near the end you mention "Until the
> collector is officially accepted into the Diamond project" but i
> don't
If you want the existing files in your volume to get sharded, you would
need to (see the sketch below):
a. enable sharding on the volume and configure the block size, both of which
   you have already done,
b. cp the file(s) into the same volume with temporary names,
c. once done, you can rename the temporary paths back to their
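A minimal sketch of those steps, assuming a volume named myvol mounted at
/mnt/myvol and a single large file to re-copy (volume name, mount point and
file names are all hypothetical):

    # (a) enable sharding and set the shard block size on the volume
    gluster volume set myvol features.shard on
    gluster volume set myvol features.shard-block-size 64MB

    # (b) copy the existing file to a temporary name on the same volume,
    #     so that the new copy is written through the shard translator
    cp /mnt/myvol/disk.img /mnt/myvol/disk.img.sharded

    # (c) rename the temporary path back over the original
    mv /mnt/myvol/disk.img.sharded /mnt/myvol/disk.img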
Hi Anuradha,
The issue is resolved, but we have one more issue similar to this one, in
which the file is not getting synced even after following the steps
mentioned in the link you shared in the previous mail.
The problem is that the split-brain command is not showing any split-brain
entries.
Hi,
Thank you for your reply.
I found a link about how to use a server to replace an old one, as below.
http://www.gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server
My question is the following:
1. I found that your recovery step is a l
And for 256b inode:
(597904 - 33000) / (1066036 - 23) == 530 bytes per inode.
So I still consider 1k to be a good estimate for an average workload.
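A quick way to reproduce that estimate, assuming the used-space figures above
are in KB (e.g. as reported by df -k):

    awk 'BEGIN { print (597904 - 33000) / (1066036 - 23) }'   # ~0.53 KB, i.e. roughly 530 bytes per inode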
Regards,
Oleksandr.
On Thursday, 17 March 2016, 09:58:14 EET, Ravishankar N wrote:
> Looks okay to me Oleksandr. You might want to make a github gi
>> > I’ve sent the logs directly as they push this message over the size limit.
Where have you sent the logs? I could not find them. Could you send the glusterd
logs so that we can start analyzing this issue?
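If it helps, on a default installation the glusterd log is this file on each
node (the path can differ if a custom log location was configured):

    ls -l /var/log/glusterfs/etc-glusterfs-glusterd.vol.log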
Thanks,
Regards,
Gaurav
- Original Message -
From: "Atin Mukherjee"
To: "tommy y
Hi Anuradha,
But in this case I need to do a tail on each file, which is a time-consuming
process, and on the other hand I can't pause my module until these files are
healed.
Anyhow, I need the output of the split-brain command to resolve this problem.
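For reference, these are the commands in question (volume name hypothetical):

    gluster volume heal <volname> info               # entries still pending heal
    gluster volume heal <volname> info split-brain   # entries reported as split-brain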
Regards,
Abhishek
On Wed, Mar 16, 2016 at 6:21 PM, ABHISHEK P
On Wed, Mar 16, 2016 at 02:24:00PM +0100, Charles Williams wrote:
> On Wed, 2016-03-16 at 14:07 +0100, Niels de Vos wrote:
> > On Wed, Mar 16, 2016 at 10:51:52AM +0100, Charles Williams wrote:
> > > Hey all,
> > >
> > > Finally took the time to hammer out a Diamond metrics collector.
> > > It's
>
Yes, gluster does handle this. /How/ it handles this I don't have time
to write this morning. As for the source code, look at the afr
translator (xlators/cluster/afr/src).
On 03/19/2016 09:26 AM, 袁仲 wrote:
I have two peers with two bricks on each of them, and each two
bricks(on different pee
On 03/16/2016 10:57 PM, Oleksandr Natalenko wrote:
OK, I've repeated the test with the following hierarchy:
* 10 top-level folders with 10 second-level folders each;
* 10 000 files in each second-level folder.
So, this composes 10×10×10 000 = 1M files and 10×10 = 100 folders.
Initial brick used space: 33
Thanks Oleksandr! I'll update
http://gluster.readthedocs.org/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
with a link to your gist.
On 03/18/2016 04:24 AM, Oleksandr Natalenko wrote:
Ravi,
here is the summary: [1]
Regards,
Oleksandr.
[1] https://gist.github.com/e8265ca07f7
I have two peers with two bricks on each of them; each pair of bricks (one on
each peer) makes up a replicate volume, and the two replicate volumes make
up a distribute volume.
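For illustration, a 2×2 layout like that could be created with something along
these lines (peer hostnames and brick paths are hypothetical):

    # replica 2 pairs the bricks in the order given; the two pairs are then distributed
    gluster volume create vmvol replica 2 \
        peer1:/bricks/b1 peer2:/bricks/b1 \
        peer1:/bricks/b2 peer2:/bricks/b2
    gluster volume start vmvol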
I mount the volume on the client and start a VM (virtual machine).
When a peer restarts (shutdown -r now), my question is:
- Original Message -
> From: "Anuradha Talur"
> To: "ABHISHEK PALIWAL"
> Cc: gluster-users@gluster.org, gluster-de...@gluster.org
> Sent: Wednesday, March 16, 2016 5:32:26 PM
> Subject: Re: [Gluster-users] gluster volume heal info split brain command not
> showing files in split-brain
>> Could I run some glusterfs command on good node to recover the replicate
>> volume, if I don't copy the files ,including glusterd.info and other
>> files,from good node to new node.
Running a glusterfs command is not enough to recover the replicate volume. For
recovery you need to follow fol
I am using 2.3, but I can test 2.4 if there are CentOS packages.
Or I can try to build RPM packages from the latest Ganesha if it is easy to build.
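In case it helps, a rough sketch of a from-source build, assuming the usual
nfs-ganesha cmake workflow and that the gluster FSAL is still switched on via
USE_FSAL_GLUSTER (both assumptions on my side):

    git clone --recursive https://github.com/nfs-ganesha/nfs-ganesha.git
    cd nfs-ganesha && mkdir build && cd build
    # assumption: USE_FSAL_GLUSTER enables the GlusterFS backend
    cmake -DUSE_FSAL_GLUSTER=ON ../src
    make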
On Fri, Mar 18, 2016 at 7:03 AM, Jiffin Tony Thottan
wrote:
>
>
> On 17/03/16 23:17, Serkan Çoban wrote:
>>
>> Hi Jiffin,
>> Will these patches land in
- Original Message -
> From: "Venkatesh Gopal"
> To: gluster-users@gluster.org
> Sent: Friday, March 11, 2016 8:42:45 AM
> Subject: [Gluster-users] Using GlusterFS with SoftLayer Endurance
>
> I have a Softlayer Endurance storage volume and have mounted it using NFS..
>
> mount -t nfs4
On Fri, Mar 18, 2016 at 1:41 AM, Anuradha Talur wrote:
>
>
> - Original Message -
> > From: "ABHISHEK PALIWAL"
> > To: "Anuradha Talur"
> > Cc: gluster-users@gluster.org, gluster-de...@gluster.org
> > Sent: Thursday, March 17, 2016 4:00:58 PM
> > Subject: Re: [Gluster-users] gluster vol
I am out of the office until 03/23/2016.
Please open a SID for any DEV/TEST linux work.
Note: This is an automated response to your message "Gluster-users Digest,
Vol 95, Issue 21" sent on 3/16/2016 7:00:01 AM.
This is the only notification you will receive while this person is away.
- Original Message -
> From: "ABHISHEK PALIWAL"
> To: "Anuradha Talur"
> Cc: gluster-users@gluster.org, gluster-de...@gluster.org
> Sent: Thursday, March 17, 2016 4:00:58 PM
> Subject: Re: [Gluster-users] gluster volume heal info split brain command not
> showing files in split-brain
>
https://joejulian.name/blog/dht-misses-are-expensive/
On 03/16/2016 01:14 PM, Mark Selby wrote:
I used rsync to copy files (10TB) from a local disk to a replicated
gluster volume. I DID NOT use the --inplace option during the copy.
Someone mentioned this may have a long-term adverse read performa
Hi Jiffin,
Will these patches land in 3.7.9?
Serkan
On Tue, Feb 16, 2016 at 2:47 PM, Jiffin Tony Thottan
wrote:
>
> Hi Serkan,
>
> I had moved the previous gfapi-side changes to ganesha and included all those changes
> in a single patch https://review.gerrithub.io/#/c/263180/
>
> I will try to get it reviewed
On 17/03/16 23:17, Serkan Çoban wrote:
Hi Jiffin,
Will these patches land in 3.7.9?
Hi Serkan,
I moved all the changes to ganesha [1] and they got merged upstream (ganesha
V2.4-dev-9).
I missed backporting it to 2.3, so which ganesha build are you using?
[1] https://review.gerrithub.io/#/c/263180/
OK, I've repeated the test with the following hierarchy:
* 10 top-level folders with 10 second-level folders each;
* 10 000 files in each second-level folder.
So, this composes 10×10×10 000 = 1M files and 10×10 = 100 folders.
Initial brick used space: 33 M
Initial inodes count: 24
After test:
* each brick
I used rsync to copy files (10TB) from a local disk to a replicated
gluster volume. I DID NOT use the --inplace option during the copy.
Someone mentioned this may have a long-term adverse read performance
impact because there will be an extra hard link that would lead to
extra FS ops during rea
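For what it's worth, a minimal sketch of the two rsync variants being compared
(source and destination paths are hypothetical); by default rsync writes each
file to a temporary name and renames it into place, while --inplace updates the
destination file directly:

    rsync -a /data/ /mnt/glustervol/data/             # default: temp file + rename
    rsync -a --inplace /data/ /mnt/glustervol/data/   # write in place, no rename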
Hi Anuradha,
Please confirm whether this is a bug in glusterfs or whether we need to do
something at our end, because this problem is stopping our development.
Regards,
Abhishek
On Thu, Mar 17, 2016 at 1:54 PM, ABHISHEK PALIWAL
wrote:
> Hi Anuradha,
>
> But in this case I need to do tail on each file which
Ravi, I will definitely arrange the results into a short, handy
document and post it here.
Also, @JoeJulian on IRC suggested that I perform this test on XFS bricks
with inode sizes of 256b and 1k:
===
22:38 <@JoeJulian> post-factum: Just wondering what 256 byte inodes
might look like for tha
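For reference, the XFS inode size is fixed at mkfs time, so a test like this
means re-creating the bricks; a minimal sketch (device path hypothetical):

    mkfs.xfs -f -i size=1024 /dev/vg0/brick1   # 1k inodes; use size=256 for the 256b case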
Did you ever come to a resolution on this problem?
We're running a 3-node CentOS cluster using GlusterFS, and we're getting these
same messages in the etc-glusterfs-glusterd.vol.log file on just one node.
>> Could I run some glusterfs command on good node to recover the replicate
>> volume, if I don't copy the files ,including glusterd.info and other
>> files,from good node to new node.
Running a glusterfs command is not enough to recover the replicate volume. For
recovery you need to follow foll
Hello,
I have three gluster nodes in a replica 3 setup. I am able to read and
write if the gluster volume -- vol1 -- is mounted on one of the three
nodes, but if I mount the same volume on a client that isn't one of the
gluster nodes, the volume is mounted as read-only. I am currently using
the g
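For reference, a native FUSE mount of that volume from the external client
would look something like this (server hostname and mount point hypothetical):

    mount -t glusterfs node1:/vol1 /mnt/vol1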