Hi jayakrishnan,
The old implementation was not finally accepted for inclusion in the main
glusterfs tree, so it was rewritten into what is now known as ec.
disperse is an alias for ec. They are the same. The algorithm
implemented is Reed-Solomon. It's really similar to ida (we could say
that
You can survive losing 2 servers out of 5 by using disperse volumes
with a 3+2 configuration. But in your case, web hosting, with lots of
small files and random read/write, it is not recommended.
Maybe you can test the workload and give it a try. Other than disperse
volumes I don't know of a solution
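The 3+2 fault-tolerance claim above can be checked with a quick back-of-the-envelope calculation (a sketch only; the 300GB brick size is taken from elsewhere in this thread and the variable names are mine):

```shell
# 3+2 disperse (5 bricks, redundancy 2, Reed-Solomon as in ec):
# the volume survives up to REDUNDANCY bricks going down, and usable
# capacity is (BRICKS - REDUNDANCY) * brick size.
BRICKS=5
REDUNDANCY=2
BRICK_GB=300   # assumed brick size, per the 300GB disks in this thread
USABLE_GB=$(( (BRICKS - REDUNDANCY) * BRICK_GB ))
echo "survives ${REDUNDANCY} failed servers, ${USABLE_GB}GB usable"
```

The corresponding create command would be along the lines of `gluster volume create <vol> disperse 5 redundancy 2 <five bricks>`; check `gluster volume create` help on your version before relying on the exact syntax.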
On 02/17/2016 11:44 AM, songxin wrote:
> Hi,
> The version of glusterfs on A node and B node are both 3.7.6.
> The time on B node is the same after rebooting because B node has no RTC.
> Does it cause the problem?
>
> If I run " gluster volume start gv0 force " the glusterfsd can be
> started but
On 02/17/2016 12:23 PM, Atin Mukherjee wrote:
>
>
> On 02/17/2016 12:08 PM, songxin wrote:
>>
>> Hi,
>> But I also don't know why glusterfsd can't be started by glusterd after B
>> node rebooted. The version of glusterfs on A node and B node are both
>> 3.7.6. Can you explain this for me please?
On 02/17/2016 12:08 PM, songxin wrote:
>
> Hi,
> But I also don't know why glusterfsd can't be started by glusterd after B
> node rebooted. The version of glusterfs on A node and B node are both
> 3.7.6. Can you explain this for me please?
It's because GlusterD has failed to start on Node B.
Hi,
But I also don't know why glusterfsd can't be started by glusterd after B node
rebooted. The version of glusterfs on A node and B node are both 3.7.6. Can you
explain this for me please?
Thanks,
Xin
At 2016-02-17 14:30:21, "Anuradha Talur" wrote:
>
>
> ----- Original Message -----
> From: "songxin"
> To: "Atin Mukherjee"
> Cc: "Anuradha Talur" , gluster-users@gluster.org
> Sent: Wednesday, February 17, 2016 11:44:14 AM
> Subject: Re:Re: [Gluster-users] question about sync
Yes, I meant the algo. I guess it is the same algo (IDA). Correct
me if I am wrong.
Best regards
JK
On Wed, Feb 17, 2016 at 2:03 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
>
>
> On 02/17/2016 11:31 AM, jayakrishnan mm wrote:
>
> Dear Pranith,
>
> Thanks for the
Hi,
The version of glusterfs on A node and B node are both 3.7.6.
The time on B node is the same after rebooting because B node has no RTC. Does it
cause the problem?
If I run "gluster volume start gv0 force" the glusterfsd can be started, but
"gluster volume start gv0" doesn't work.
The file
On 02/17/2016 11:31 AM, jayakrishnan mm wrote:
Dear Pranith,
Thanks for the reply. So GlusterFS 3.7.6 (which is the version I am
using) already contains the full disperse volume functionality?
But where is the IDA implementation?
You mean the algo? You should take a look at
Dear Pranith,
Thanks for the reply. So GlusterFS 3.7.6 (which is the version I am
using) already contains the full disperse volume functionality?
But where is the IDA implementation?
Best Regards
JK
On Wed, Feb 17, 2016 at 1:23 PM, Pranith Kumar Karampuri <
pkara...@redhat.com>
On 02/17/2016 09:42 AM, jayakrishnan mm wrote:
Dear Xavier,
I am trying to understand the disperse translator and its usage.
From
https://lists.gnu.org/archive/html/gluster-devel/2014-01/txttzloLYIJOh.txt
, I see there are four components, namely gfsys, dfc, ida and heal
which
Dear Xavier,
I am trying to understand the disperse translator and its usage.
From
https://lists.gnu.org/archive/html/gluster-devel/2014-01/txttzloLYIJOh.txt
, I see there are four components, namely gfsys, dfc, ida and heal,
which need to be compiled with the GlusterFS main source code.
On 02/17/2016 08:23 AM, songxin wrote:
> Hi,
> Thank you for your immediate and detailed reply. And I have a few more
> questions about glusterfs.
> A node IP is 128.224.162.163.
> B node IP is 128.224.162.250.
> 1. After rebooting B node and starting the glusterd service, the glusterd log is
> as below.
Hi,
Thank you for your immediate and detailed reply. And I have a few more questions
about glusterfs.
A node IP is 128.224.162.163.
B node IP is 128.224.162.250.
1. After rebooting B node and starting the glusterd service, the glusterd log is as
below.
...
[2015-12-07 07:54:55.743966] I [MSGID: 101190]
Hi guys,
I need to build backend storage for website hosting. The storage needs to
be highly available and easily expandable. I have 5 servers with 300GB
disks for that purpose. I would like to create a replicated volume with a
size of 100GB. I want a configuration which will accept 2 boxes
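For a requirement like the one above (5 servers, 300GB disks, tolerating 2 boxes down), a rough capacity comparison between plain replication and a 3+2 disperse layout looks like this (an illustrative sketch, not a sizing guide; the variable names are mine):

```shell
BRICK_GB=300   # assumed brick size, per the 300GB disks in this thread
# Plain replication surviving 2 failed boxes needs 3 full copies (replica 3):
REPLICA=3
echo "replica ${REPLICA}: $(( BRICK_GB ))GB usable per ${REPLICA} bricks"
# A 3+2 disperse volume also survives 2 failures, but pays only the
# parity overhead instead of storing whole copies:
echo "disperse 3+2: $(( 3 * BRICK_GB ))GB usable per 5 bricks"
```

As the thread notes, though, raw capacity is not the whole story: dispersed volumes perform poorly for small-file, random read/write workloads like web hosting.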
On 02/16/2016 08:06 PM, Oleksandr Natalenko wrote:
Hmm, OK. I've rechecked 3.7.8 with the following patches (latest
revisions):
===
Soumya Koduri (3):
gfapi: Use inode_forget in case of handle objects
inode: Retire the inodes from the lru list in inode_table_destroy
rpc:
On 02/16/2016 02:12 AM, Amir Alavi wrote:
Hi,
I was wondering if anyone could help me with Gluster replicated volume
and replication manager. What are they? How do they work? Benefits and
drawbacks? I've checked out your website but I still don't quite
understand the architecture of these two
Hi guys
Thanks a lot for your help.
I will now update our servers to glusterfs 3.7.8 and then add the 3rd
server as an arbiter.
I will update you after that.
Thanks a lot
Dominique
Become part of modern working in Glarnerland at www.digitalglarus.ch!
Read news on Twitter:
Hmm, OK. I've rechecked 3.7.8 with the following patches (latest
revisions):
===
Soumya Koduri (3):
gfapi: Use inode_forget in case of handle objects
inode: Retire the inodes from the lru list in inode_table_destroy
rpc: Fix for rpc_transport_t leak
===
Here is Valgrind
On 02/12/2016 11:27 AM, Soumya Koduri wrote:
On 02/11/2016 08:33 PM, Oleksandr Natalenko wrote:
And "API" test.
I used a custom API app [1] and did brief file manipulations through it
(create/remove/stat).
Then I performed drop_caches, finished API [2] and got the following
Valgrind log
Hi all!
I attended FOSDEM this year and was part of the team that put up the
Gluster stand. I've written up a small report on the experience at
https://kshlm.in/fosdem16/ .
I'll be writing up a report on my DevConf experience as well (soon enough).
~kaushal
Hi Serkan,
I have moved the previous gfapi-side changes out to ganesha and included all
those changes in a single patch: https://review.gerrithub.io/#/c/263180/
I will try to get it reviewed and merge the patch as soon as possible.
With Regards,
Jiffin
On 14/02/16 21:54, Serkan Çoban wrote:
Thanks for the
Hello,
is it safe to create disaster recovery backups from the bricks'
filesystem instead of the gluster storage? It would be much faster. Could
some data be missing or wrong? I've tried to find an answer to
that in the documentation but couldn't find anything.
Cheers
Kim
Hello,
I have a problem with a two-node setup: the nodes are replica 2 servers and
also clients.
Actions on node1:
gluster peer probe node2
gluster volume create fs replica 2 transport tcp node1:/fs node2:/fs force
gluster volume start fs
mount -t glusterfs node1:/fs /test
Actions on node2:
----- Original Message -----
> From: "songxin"
> To: gluster-users@gluster.org
> Sent: Tuesday, February 16, 2016 3:59:50 PM
> Subject: [Gluster-users] question about sync replicate volume after
> rebooting one node
>
> Hi,
> I have a question about how to sync volume
I updated from 3.7.6 to 3.7.8.
I am on CentOS 7.2.
[root@compute1 ~]# gluster volume status
Status of volume: vol_cinder
Gluster process TCP Port RDMA Port Online Pid
--
Brick
Hi,
I have a question about how to sync a volume between two bricks after one node
is rebooted.
There are two nodes, A node and B node. A node's IP is 128.124.10.1 and B
node's IP is 128.124.10.2.
The operation steps on A node are as below:
1. gluster peer probe 128.124.10.2
2. mkdir -p /data/brick/gv0
On Tue, Feb 16, 2016 at 08:35:05AM +, ousmane sanogo wrote:
> Hello, I updated my gluster node yesterday.
> I am using Cinder (OpenStack) with gluster,
> and I have this warning after the update:
>
> warning:
> /var/lib/glusterd/vols/vol_cinder/vol_cinder.172.16.10.2.glusterfs-cinder.vol
> saved as
Hello, I updated my gluster node yesterday.
I am using Cinder (OpenStack) with gluster,
and I have this warning after the update:
warning:
/var/lib/glusterd/vols/vol_cinder/vol_cinder.172.16.10.2.glusterfs-cinder.vol
saved as