Re: [Gluster-users] Concurrent writes management.

2014-07-04 Thread COCHE Sébastien
Thank you for your feedback...

Sébastien Coché, 
Architecte Infrastructure (DIP)
SIGMA Informatique - www.sigma.fr
8 rue Newton - CS 84533 - 44245 LA CHAPELLE SUR ERDRE CEDEX
Tél : (+33) 2.53.48.92.57 - Mob : 06 22 25 03 74

-----Original Message-----
From: Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
Sent: Thursday, July 3, 2014 04:44
To: COCHE Sébastien; Niels de Vos
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Concurrent writes management.


On 07/01/2014 03:38 PM, COCHE Sébastien wrote:
> OK, but how can I know if an application uses fcntl locks?
Use strace and check whether it uses fcntl locks.
> A few common examples: the Apache web server, the KVM hypervisor,
> PostgreSQL/MySQL databases. What happens if two split-brained nodes run the
> same database file or the same virtual machine disk?
If they are already in split-brain, the damage is already done. This is a
distributed-filesystem problem, not the application's.

Pranith
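The strace check suggested above can be sketched as follows (the `python3` one-liner is a stand-in for the application under test; for a running service you would attach with `strace -p <pid>` instead, where `<pid>` is its process id):

```shell
# Trace only fcntl syscalls; F_SETLK / F_SETLKW entries in the output mean
# the program takes POSIX advisory (fcntl) locks.
strace -e trace=fcntl \
  python3 -c 'import fcntl; f=open("/tmp/demo.lock","w"); fcntl.lockf(f, fcntl.LOCK_EX)'
```

Grepping the attached trace for `F_SETLK` is usually the quickest way to answer this question for a database or hypervisor process.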
> How can I know if those solutions use fcntl locks?
>
> Sorry for my insistence :-/
>
> Sébastien Coché,
> Architecte Infrastructure (DIP)
> SIGMA Informatique - www.sigma.fr
> 8 rue Newton - CS 84533 - 44245 LA CHAPELLE SUR ERDRE CEDEX Tél : 
> (+33) 2.53.48.92.57 - Mob : 06 22 25 03 74
>
>
> -----Original Message-----
> From: Niels de Vos [mailto:nde...@redhat.com]
> Sent: Tuesday, July 1, 2014 11:43
> To: COCHE Sébastien
> Cc: Pranith Kumar Karampuri; gluster-users@gluster.org
> Subject: Re: [Gluster-users] Concurrent writes management.
>
> On Tue, Jul 01, 2014 at 07:28:15AM +, COCHE Sébastien wrote:
>> Does it mean that if I use the Gluster FUSE driver or an NFS client, fcntl
>> locks are managed and no data corruption can happen?
> It is always good practice to use read/write fcntl locks when multiple
> processes or threads use the same file. These locks are handled correctly for
> files located on Gluster volumes. The application developer is responsible
> for implementing these locks; Gluster does not magically/transparently add
> these (I don't think any filesystem can do that).
>
> Cheers,
> Niels
>
>> Sébastien Coché,
>> Architecte Infrastructure (DIP)
>> SIGMA Informatique - www.sigma.fr<http://www.sigma.fr/>
>> 8 rue Newton - CS 84533 - 44245 LA CHAPELLE SUR ERDRE CEDEX Tél :
>> (+33) 2.53.48.92.57 - Mob : 06 22 25 03 74
>>
>> From: Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
>> Sent: Monday, June 30, 2014 18:08
>> To: COCHE Sébastien
>> Cc: gluster-users@gluster.org
>> Subject: Re: [Gluster-users] Concurrent writes management.
>>
>>
>> On 06/30/2014 09:26 PM, COCHE Sébastien wrote:
>> Thank you for your response.
>> I understand that the file exists only once on the volume. But it can be
>> opened for writing by many nodes (clients) at the same time.
>> What happens in that case?
>> Nothing bad will happen to the filesystem. But the file may not be
>> meaningful if the applications writing to it don't synchronize overlapping
>> concurrent writes with fcntl locks.
>>
>> Pranith
>>
>>
>>
>> Sébastien Coché,
>> Architecte Infrastructure (DIP)
>> SIGMA Informatique - www.sigma.fr<http://www.sigma.fr/>
>> 8 rue Newton - CS 84533 - 44245 LA CHAPELLE SUR ERDRE CEDEX Tél :
>> (+33) 2.53.48.92.57 - Mob : 06 22 25 03 74
>>
>> From: Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
>> Sent: Monday, June 30, 2014 17:49
>> To: COCHE Sébastien; gluster-users@gluster.org<mailto:gluster-users@gluster.org>
>> Subject: Re: [Gluster-users] Concurrent writes management.
>>
>>
>> On 06/30/2014 07:49 PM, COCHE Sébastien wrote:
>> Hello
>>
>> I have a question regarding concurrent writes.
>> How are those writes managed? Is there a risk of data corruption?
>> Is there a lock mechanism against corruption? If so, how does it work?
>> I have already looked at the forums and documents but did not find a
>> deep-dive explanation.
>> For plain distribute volumes there is only one file in the volume with
>> the data. All operations on the file happen just as they do on a normal
>> filesystem. For replicated/distributed-replicated volumes, the replication
>> feature takes internal locks to avoid any inconsistencies.
>> Please check
>> https://github.com/gluster/glusterfs/blob/master/doc/features/afr-v1.md to
>> learn more about it.
>>
>> Pranith
>>
>>
>>
>> Thanks for your feedback.
>> Sorry for my poor English ;-)
>>
>> Sebastien
>>
>>
>>
>>
>>

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Concurrent writes management.

2014-07-01 Thread COCHE Sébastien
OK, but how can I know if an application uses fcntl locks?
A few common examples: the Apache web server, the KVM hypervisor,
PostgreSQL/MySQL databases.
What happens if two split-brained nodes run the same database file or the same
virtual machine disk?
How can I know if those solutions use fcntl locks?

Sorry for my insistence :-/

Sébastien Coché, 
Architecte Infrastructure (DIP)
SIGMA Informatique - www.sigma.fr
8 rue Newton - CS 84533 - 44245 LA CHAPELLE SUR ERDRE CEDEX
Tél : (+33) 2.53.48.92.57 - Mob : 06 22 25 03 74


-----Original Message-----
From: Niels de Vos [mailto:nde...@redhat.com]
Sent: Tuesday, July 1, 2014 11:43
To: COCHE Sébastien
Cc: Pranith Kumar Karampuri; gluster-users@gluster.org
Subject: Re: [Gluster-users] Concurrent writes management.

On Tue, Jul 01, 2014 at 07:28:15AM +0000, COCHE Sébastien wrote:
> Does it mean that if I use the Gluster FUSE driver or an NFS client, fcntl
> locks are managed and no data corruption can happen?

It is always good practice to use read/write fcntl locks when multiple
processes or threads use the same file. These locks are handled correctly for
files located on Gluster volumes. The application developer is responsible for
implementing these locks; Gluster does not magically/transparently add these (I
don't think any filesystem can do that).

Cheers,
Niels
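As a concrete illustration of the advice above, a minimal sketch of a writer taking an fcntl lock around its write (the `/tmp/shared.log` path is a stand-in; on Gluster it would be a file under the volume's mount point):

```shell
# Each cooperating writer takes an exclusive fcntl (POSIX) lock before
# appending, so overlapping concurrent writes cannot interleave mid-record.
python3 - <<'EOF'
import fcntl, os

path = "/tmp/shared.log"  # stand-in for a file on the Gluster mount
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
fcntl.lockf(fd, fcntl.LOCK_EX)   # blocks until no other writer holds the lock
os.write(fd, b"one atomic record\n")
fcntl.lockf(fd, fcntl.LOCK_UN)
os.close(fd)
EOF
```

Run the same snippet from several clients and each record arrives intact; without the lock, writes to overlapping regions may interleave.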

> 
> Sébastien Coché,
> Architecte Infrastructure (DIP)
> SIGMA Informatique - www.sigma.fr<http://www.sigma.fr/>
> 8 rue Newton - CS 84533 - 44245 LA CHAPELLE SUR ERDRE CEDEX Tél : 
> (+33) 2.53.48.92.57 - Mob : 06 22 25 03 74
> 
> From: Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
> Sent: Monday, June 30, 2014 18:08
> To: COCHE Sébastien
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] Concurrent writes management.
> 
> 
> On 06/30/2014 09:26 PM, COCHE Sébastien wrote:
> Thank you for your response.
> I understand that the file exists only once on the volume. But it can be
> opened for writing by many nodes (clients) at the same time.
> What happens in that case?
> Nothing bad will happen to the filesystem. But the file may not be meaningful
> if the applications writing to it don't synchronize overlapping concurrent
> writes with fcntl locks.
> 
> Pranith
> 
> 
> 
> Sébastien Coché,
> Architecte Infrastructure (DIP)
> SIGMA Informatique - www.sigma.fr<http://www.sigma.fr/>
> 8 rue Newton - CS 84533 - 44245 LA CHAPELLE SUR ERDRE CEDEX Tél : 
> (+33) 2.53.48.92.57 - Mob : 06 22 25 03 74
> 
> From: Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
> Sent: Monday, June 30, 2014 17:49
> To: COCHE Sébastien; gluster-users@gluster.org<mailto:gluster-users@gluster.org>
> Subject: Re: [Gluster-users] Concurrent writes management.
> 
> 
> On 06/30/2014 07:49 PM, COCHE Sébastien wrote:
> Hello
> 
> I have a question regarding concurrent writes.
> How are those writes managed? Is there a risk of data corruption?
> Is there a lock mechanism against corruption? If so, how does it work?
> I have already looked at the forums and documents but did not find a
> deep-dive explanation.
> For plain distribute volumes there is only one file in the volume with
> the data. All operations on the file happen just as they do on a normal
> filesystem. For replicated/distributed-replicated volumes, the replication
> feature takes internal locks to avoid any inconsistencies.
> Please check
> https://github.com/gluster/glusterfs/blob/master/doc/features/afr-v1.md to
> learn more about it.
> 
> Pranith
> 
> 
> 
> Thanks for your feedback.
> Sorry for my poor English ;-)
> 
> Sebastien
> 
> 
> 
> 
> 


Re: [Gluster-users] Concurrent writes management.

2014-07-01 Thread COCHE Sébastien
Does it mean that if I use the Gluster FUSE driver or an NFS client, fcntl
locks are managed and no data corruption can happen?

Sébastien Coché,
Architecte Infrastructure (DIP)
SIGMA Informatique - www.sigma.fr<http://www.sigma.fr/>
8 rue Newton - CS 84533 - 44245 LA CHAPELLE SUR ERDRE CEDEX
Tél : (+33) 2.53.48.92.57 - Mob : 06 22 25 03 74

From: Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
Sent: Monday, June 30, 2014 18:08
To: COCHE Sébastien
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Concurrent writes management.


On 06/30/2014 09:26 PM, COCHE Sébastien wrote:
Thank you for your response.
I understand that the file exists only once on the volume. But it can be
opened for writing by many nodes (clients) at the same time.
What happens in that case?
Nothing bad will happen to the filesystem. But the file may not be meaningful
if the applications writing to it don't synchronize overlapping concurrent
writes with fcntl locks.

Pranith



Sébastien Coché,
Architecte Infrastructure (DIP)
SIGMA Informatique - www.sigma.fr<http://www.sigma.fr/>
8 rue Newton - CS 84533 - 44245 LA CHAPELLE SUR ERDRE CEDEX
Tél : (+33) 2.53.48.92.57 - Mob : 06 22 25 03 74

From: Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
Sent: Monday, June 30, 2014 17:49
To: COCHE Sébastien; gluster-users@gluster.org<mailto:gluster-users@gluster.org>
Subject: Re: [Gluster-users] Concurrent writes management.


On 06/30/2014 07:49 PM, COCHE Sébastien wrote:
Hello

I have a question regarding concurrent writes.
How are those writes managed? Is there a risk of data corruption?
Is there a lock mechanism against corruption? If so, how does it work?
I have already looked at the forums and documents but did not find a deep-dive
explanation.
For plain distribute volumes there is only one file in the volume with the
data. All operations on the file happen just as they do on a normal
filesystem. For replicated/distributed-replicated volumes, the replication
feature takes internal locks to avoid any inconsistencies.
Please check
https://github.com/gluster/glusterfs/blob/master/doc/features/afr-v1.md to
learn more about it.

Pranith



Thanks for your feedback.
Sorry for my poor English ;-)

Sebastien






Re: [Gluster-users] Concurrent writes management.

2014-06-30 Thread COCHE Sébastien
Thank you for your response.
I understand that the file exists only once on the volume. But it can be
opened for writing by many nodes (clients) at the same time.
What happens in that case?


Sébastien Coché,
Architecte Infrastructure (DIP)
SIGMA Informatique - www.sigma.fr<http://www.sigma.fr/>
8 rue Newton - CS 84533 - 44245 LA CHAPELLE SUR ERDRE CEDEX
Tél : (+33) 2.53.48.92.57 - Mob : 06 22 25 03 74

From: Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
Sent: Monday, June 30, 2014 17:49
To: COCHE Sébastien; gluster-users@gluster.org
Subject: Re: [Gluster-users] Concurrent writes management.


On 06/30/2014 07:49 PM, COCHE Sébastien wrote:
Hello

I have a question regarding concurrent writes.
How are those writes managed? Is there a risk of data corruption?
Is there a lock mechanism against corruption? If so, how does it work?
I have already looked at the forums and documents but did not find a deep-dive
explanation.
For plain distribute volumes there is only one file in the volume with the
data. All operations on the file happen just as they do on a normal
filesystem. For replicated/distributed-replicated volumes, the replication
feature takes internal locks to avoid any inconsistencies.
Please check
https://github.com/gluster/glusterfs/blob/master/doc/features/afr-v1.md to
learn more about it.

Pranith


Thanks for your feedback.
Sorry for my poor English ;-)

Sebastien





[Gluster-users] Concurrent writes management.

2014-06-30 Thread COCHE Sébastien
Hello

I have a question regarding concurrent writes.
How are those writes managed? Is there a risk of data corruption?
Is there a lock mechanism against corruption? If so, how does it work?
I have already looked at the forums and documents but did not find a deep-dive
explanation.

Thanks for your feedback.
Sorry for my poor English ;-)

Sebastien

Re: [Gluster-users] Glusterfs Rack-Zone Awareness feature...

2014-04-22 Thread COCHE Sébastien
Sorry if my question is not clear.

When I create a new replicated volume using only 2 nodes, I use this command:
'gluster volume create vol_name replica 2 transport tcp
server1:/export/brick1/1 server2:/export/brick1/1'

server1 and server2 are in 2 different datacenters.

Now, if I want to expand the Gluster volume using 2 new servers (e.g. server3
and server4), I use these commands:

'gluster volume add-brick vol_name server3:/export/brick1/1'

'gluster volume add-brick vol_name server4:/export/brick1/1'

'gluster volume rebalance vol_name fix-layout start'

'gluster volume rebalance vol_name start'

How does the rebalance command work?

How can I be sure that replicated data are not stored on servers hosted in the
same datacenter?
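Not answered directly in this thread, but worth noting: with Gluster's ordering-based placement, a replica-2 volume pairs consecutive bricks as listed on the command line, and add-brick on such a volume expects bricks in multiples of the replica count. A hedged sketch (server names reused from the question; verify against your Gluster version's documentation):

```shell
# Adding both bricks in one command makes server3 and server4 a replica pair;
# listing one server per datacenter keeps the new pair split across sites.
gluster volume add-brick vol_name server3:/export/brick1/1 server4:/export/brick1/1
gluster volume rebalance vol_name fix-layout start
gluster volume rebalance vol_name start
```

Rebalance then migrates existing files onto the new replica pair according to the recomputed layout; it does not change which bricks form a pair.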



Sébastien



-----Original Message-----
From: Jeff Darcy [mailto:jda...@redhat.com]
Sent: Friday, April 18, 2014 18:52
To: COCHE Sébastien
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Glusterfs Rack-Zone Awareness feature...



> I do not understand why it could be a problem to place the data's
> replicas on a different node group.
> If a group of nodes becomes unavailable (due to a datacenter failure, for
> example) the volume should remain online, using the second group.


I'm not sure what you're getting at here.  If you're talking about initial 
placement of replicas, we can place all members of each replica set in 
different node groups (e.g. racks).  If you're talking about adding new replica 
members when a previous one has failed, then the question is *when*.

Re-populating a new replica can be very expensive.  It's not worth starting if 
the previously failed replica is likely to come back before you're done.

We provide the tools (e.g. replace-brick) to deal with longer term or even 
permanent failures, but we don't re-replicate automatically.  Is that what 
you're talking about?

Re: [Gluster-users] Glusterfs Rack-Zone Awareness feature...

2014-04-18 Thread COCHE Sébastien
Hi Jeff,

Thanks for your feedback.
I do not understand why it could be a problem to place the data's replicas on
a different node group.
If a group of nodes becomes unavailable (due to a datacenter failure, for
example) the volume should remain online, using the second group.

Regards


-----Original Message-----
From: Jeff Darcy [mailto:jda...@redhat.com]
Sent: Tuesday, April 15, 2014 16:37
To: COCHE Sébastien
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Glusterfs Rack-Zone Awareness feature...

> I have a little question.
> I have read the GlusterFS documentation looking for replica placement
> management. I want to be able to localize replicas on nodes hosted in
> 2 datacenters (dual-building).
> CouchBase provides the feature I'm looking for in GlusterFS: "Rack-Zone
> Awareness".
> https://blog.couchbase.com/announcing-couchbase-server-25
> "Rack-Zone Awareness - This feature will allow logical groupings of
> Couchbase Server nodes (where each group is physically located on a
> rack or an availability zone). Couchbase Server will automatically
> allocate replica copies of data on servers that belong to a group
> different from where the active data lives. This significantly
> increases reliability in case an entire rack becomes unavailable. This
> is of particular importance for customers running deployments in public
> clouds."

> Do you know if GlusterFS provides a similar feature?
> If not, do you plan to develop it in the near future?

There are two parts to the answer. Rack-aware placement in general is part of 
the "data classification" feature planned for the 3.6 release. 

http://www.gluster.org/community/documentation/index.php/Features/data-classification

With this feature, files can be placed according to various policies using any 
of several properties associated with objects or physical locations. Rack-aware 
placement would use the physical location of a brick. Tiering would use the 
performance properties of a brick and the access time/frequency of an object. 
Multi-tenancy would use the tenant identity for both bricks and objects. And so 
on. It's all essentially the same infrastructure. 

For replication decisions in particular, there needs to be another piece. Right 
now, the way we use N bricks with a replication factor of R is to define N/R 
replica sets each containing R members. This is sub-optimal in many ways. We 
can still compare the "value" or "fitness" of two replica sets for storing a 
particular object, but our options are limited to the replica sets as defined 
last time bricks were added or removed. The differences between one choice and 
another effectively get smoothed out, and the load balancing after a failure is 
less than ideal. To do this right, we need to use more (overlapping) 
combinations of bricks. Some of us have discussed ways that we can do this 
without sacrificing the modularity of having distribution and replication as 
two separate modules, but there's no defined plan or date for that feature 
becoming available. 

BTW, note that using *too many* combinations can also be a problem. Every time 
an object is replicated across a certain set of storage locations, it creates a 
coupling between those locations. Before long, all locations are coupled 
together, so that *any* failure of R-1 locations anywhere in the system will 
result in data loss or unavailability. Many systems, possibly including 
Couchbase Server, have made this mistake and become *less* reliable as a 
result.  Emin Gün Sirer does a better job describing the problem - and 
solutions - than I do, here:

http://hackingdistributed.com/2014/02/14/chainsets/

[Gluster-users] Glusterfs Rack-Zone Awareness feature...

2014-04-15 Thread COCHE Sébastien
HI all,

 

I have a little question.

I have read the GlusterFS documentation looking for replica placement
management. I want to be able to localize replicas on nodes hosted in 2
datacenters (dual-building).

CouchBase provides the feature I'm looking for in GlusterFS: "Rack-Zone
Awareness".

https://blog.couchbase.com/announcing-couchbase-server-25 
 

"Rack-Zone Awareness - This feature will allow logical groupings of Couchbase 
Server nodes (where each group is physically located on a rack or an 
availability zone). Couchbase Server will automatically allocate replica copies 
of data on servers that belong to a group different from where the active data 
lives. This significantly increases reliability in case an entire rack becomes 
unavailable. This is of particularly importance for customers running 
deployments in public clouds."

 

Do you know if GlusterFS provides a similar feature?

If not, do you plan to develop it in the near future?

 

Thanks in advance.

 

Sébastien Coché

 


Re: [Gluster-users] How to expand Gluster's volume after xfs filesystem resize ?

2013-11-12 Thread COCHE Sébastien
Hello,

 

You are right, I found my mistake. I forgot to extend the second server's
partition.

I installed the latest version (3.4) on Centos 6.4.

In order to benchmark GlusterFS without being limited by the disk subsystem, I
created a ramdrive (15 GB) on two servers.

The major constraint is that I need to rebuild the brick on the second node
whenever I restart the server (it is not a production environment).

So the procedure I used to extend the volume is:

-  Remove the brick

-  Extend the ramdrive (kernel parameter in /etc/grub.conf) (this is the
file I forgot to update on the second node)

-  Recreate the XFS filesystem

-  Add the newly created filesystem as a new brick

 

After correcting my error, the Gluster filesystem showed me the right size.

 

Thank you for your help.

 

Best regards

 

Sébastien

 

From: Mark Morlino [mailto:m...@gina.alaska.edu]
Sent: Friday, November 8, 2013 18:45
To: COCHE Sébastien
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] How to expand Gluster's volume after xfs
filesystem resize ?

 

Which version of Gluster are you using? I have been able to do this with 3.3
and 3.4 on CentOS. With a replica 2 volume, I have just run lvextend with the
-r option on both bricks to grow the LV and XFS filesystem at the same time.
The clients see the new size without having to do anything else specifically
in Gluster to resize the volume.

 

On Fri, Nov 8, 2013 at 6:27 AM, COCHE Sébastien  wrote:

Hi all,

 

I am testing Gluster's features and how to perform operational tasks.

I created a Gluster cluster composed of 2 nodes.

I created a volume based on an XFS filesystem (and LVM), and started a
replicated Gluster volume.

 

I would like to expand the volume size by :

-  Expanding LV

-  Expanding xfs filesystem

-  Expanding gluster volume

 

My problem is that I did not find a Gluster command to take the new
filesystem size into account.

The filesystem shows me the new size, while the Gluster volume still sees the
old size.

I tried the command 'gluster volume rebalance…' but this command only works
for striped volumes or for replicated volumes with more than 1 brick.

 

How can I expand the Gluster volume ?

 

Thank you very much

 

Best regards

 

Sébastien Coché

 



[Gluster-users] How to expand Gluster's volume after xfs filesystem resize ?

2013-11-08 Thread COCHE Sébastien
Hi all,

 

I am testing Gluster's features and how to perform operational tasks.

I created a Gluster cluster composed of 2 nodes.

I created a volume based on an XFS filesystem (and LVM), and started a
replicated Gluster volume.

 

I would like to expand the volume size by :

-  Expanding LV

-  Expanding xfs filesystem

-  Expanding gluster volume

 

My problem is that I did not find a Gluster command to take the new
filesystem size into account.

The filesystem shows me the new size, while the Gluster volume still sees the
old size.

I tried the command 'gluster volume rebalance...' but this command only works
for striped volumes or for replicated volumes with more than 1 brick.

 

How can I expand the Gluster volume ?

 

Thank you very much

 

Best regards

 

Sébastien Coché

 


Re: [Gluster-users] setting up a dedicated NIC for replication

2013-11-05 Thread COCHE Sébastien
Do you know where I could find documents that explain the network flows in
depth?
I did not see in the GlusterFS documentation that replication is driven by
the client.
For example, slide 7 of this presentation is very confusing:
http://fr.slideshare.net/anu-bhaskar/introducing-gluster?from_search=1


-----Original Message-----
From: James [mailto:purplei...@gmail.com]
Sent: Tuesday, November 5, 2013 16:52
To: COCHE Sébastien
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] setting up a dedicated NIC for replication

On Tue, Nov 5, 2013 at 10:18 AM, COCHE Sébastien  wrote:
> Thanks for your reply.
> That means that clients generate as many writes as there are cluster nodes
> :-( I saw on some forums and blogs that some people create a dedicated
> network for the Gluster interconnect.
> But they did not explain how (the detailed configuration).
>
> Sébastien
The short answer is that you don't want to.
Here's the link:

http://supercolony.gluster.org/pipermail/gluster-users/2013-October/037727.html

HTH,
James

Re: [Gluster-users] setting up a dedicated NIC for replication

2013-11-05 Thread COCHE Sébastien
Thanks for your reply.
That means that clients generate as many writes as there are cluster nodes :-(
I saw on some forums and blogs that some people create a dedicated network
for the Gluster interconnect.
But they did not explain how (the detailed configuration).

Sébastien

-----Original Message-----
From: James [mailto:purplei...@gmail.com]
Sent: Tuesday, November 5, 2013 14:21
To: COCHE Sébastien
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] setting up a dedicated NIC for replication

On Tue, Nov 5, 2013 at 4:12 AM, COCHE Sébastien  wrote:
> I am looking for how to configure a dedicated network (VLAN) for
> Gluster's replication.


Most of the replication happens on the client side directly to the nodes.
There is some inter-host communication, but it's not as much as you might
expect.
Have a test run of it first, without any fancy networking. There's also at
least one other recent mailing list post about this question.
Hope it helps.

Cheers,
James

Re: [Gluster-users] setting up a dedicated NIC for replication

2013-11-05 Thread COCHE Sébastien
Hi all,

 

I am evaluating GlusterFS for storing virtual machines (KVM).

I am looking for how to configure a dedicated network (VLAN) for Gluster's
replication.

Because the configuration is based on only one DNS name, I don't know how to
configure Gluster's nodes in order to:

-  Use the production network for hypervisor communications

-  Use the replication/heartbeat network for Gluster node
communications

 

Can you help me ?

 

Best regards

Sébastien Coché

 
