[Gluster-users] errors after removing bricks

2013-04-18 Thread Eyal Marantenboim
Hi,

I had a 4-node replicated setup.
After removing one of the nodes with:

gluster volume remove-brick images_1 replica 3 vmhost5:/exports/1
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
Remove Brick commit force successful


I started to see high CPU usage on 2 of the remaining boxes, and this
error in glustershd.log:

[2013-04-18 12:40:47.160559] E
[afr-self-heald.c:685:_link_inode_update_loc] 0-images_1-replicate-0: inode
link failed on the inode (----)
[2013-04-18 12:41:55.784510] I
[afr-self-heald.c:1082:afr_dir_exclusive_crawl] 0-images_1-replicate-0:
Another crawl is in progress for images_1-client-1

Any ideas?
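
Meanwhile, here's what I plan to check next (just a sketch; I'm assuming the
self-heal commands behave the same way after a remove-brick):

# list files the self-heal daemon still considers pending
gluster volume heal images_1 info
# kick off a full self-heal across the remaining replicas
gluster volume heal images_1 full
# confirm the remaining bricks and self-heal daemons are online
gluster volume status images_1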

[Gluster-users] Removing brick from replicated

2013-04-15 Thread Eyal Marantenboim
Hi,

We have 4 nodes/bricks in a replicated setup:

Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: vmhost2:/exports/1
Brick2: vmhost3:/exports/1
Brick3: vmhost5:/exports/1
Brick4: vmhost6:/exports/1

I'm trying to remove one of them (vmhost5):
gluster volume remove-brick images_1 replica 3 vmhost5:/exports/1 start
Remove Brick start unsuccessful

gluster volume remove-brick images_1 replica 3 vmhost5:/exports/1 status
Volume images_1 is not a distribute volume or contains only 1 brick.
Not performing rebalance

gluster volume info images_1
Volume Name: images_1
Type: Distributed-Replicate
Volume ID: 30c9c2d8-94c9-4b59-b647-0dc593d903d3
Status: Started
Number of Bricks: 1 x 3 = 4
Transport-type: tcp
Bricks:
Brick1: vmhost2:/exports/1
Brick2: vmhost3:/exports/1
Brick3: vmhost5:/exports/1
Brick4: vmhost6:/exports/1

A few questions that hopefully someone can help me with:

Why does it say Type: Distributed-Replicate and Number of Bricks: 1 x 3 = 4
when, after executing the remove-brick command, I got unsuccessful?

What am I doing wrong? :)

Thanks!
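
P.S. From what I can tell, the start/status sub-commands drive a rebalance,
which only applies to distribute volumes; on a pure replica set the brick
apparently has to be dropped in one step instead. A sketch of what I'm going
to try next (untested):

# shrink the replica count from 4 to 3 and drop vmhost5 in one step;
# no data migration should be needed since every brick holds a full copy
gluster volume remove-brick images_1 replica 3 vmhost5:/exports/1 force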

Re: [Gluster-users] Performance for KVM images (qcow)

2013-04-11 Thread Eyal Marantenboim
Thanks for your help!
I'll need to figure out a way to recreate the volume as replica 2 without
messing with the running VMs.
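
Something like this is what I have in mind (untested, and the new volume name
is only illustrative; as far as I understand, with replica 2 the bricks pair
up in the order listed, so vmhost2+vmhost3 and vmhost5+vmhost6 would form the
two replica sets):

# 4 bricks with replica 2 = distribute over 2 mirrored pairs
gluster volume create images_2 replica 2 transport tcp \
    vmhost2:/exports/1 vmhost3:/exports/1 \
    vmhost5:/exports/1 vmhost6:/exports/1
gluster volume start images_2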

On Wed, Apr 10, 2013 at 7:25 PM, Bryan Whitehead dri...@megahappy.net wrote:

 I just want to say that trying to run a VM on almost any glusterfs
 fuse mount is going to suck when using 1G nic cards.

 That said, your setup looks fine, except you need to change replica 4
 to replica 2. I'm assuming you want redundancy and speed.
 Replicating to all 4 bricks is probably not what you wanted. By
 setting replica to 2 you'll be able to lose any 1 of the 4 bricks and
 still run without missing data.

 gluster volume create replica 2 ... list all 4 bricks



 On Tue, Apr 9, 2013 at 3:49 AM, Stephan von Krawczynski
 sk...@ithnet.com wrote:
  On Tue, 09 Apr 2013 03:13:10 -0700
  Robert Hajime Lanning lann...@lanning.cc wrote:
 
  On 04/09/13 01:17, Eyal Marantenboim wrote:
   Hi Bryan,
  
   We have 1G nics on all our servers.
   Do you think that changing our design to distribute-replicate will
   improve the performance?
  
   Anything in the gluster performance settings that you think I should
 change?
 
  With GlusterFS, almost all the processing is on the client side.  This
  includes replication.  So, when you have replica 4, the client will be
  duplicating all transactions 4 times, synchronously.  Your 1G ethernet
  just became 256M.
 
  Let me add that nobody in their right mind does it this way. Obviously one
  would give the client more physical network cards, ideally as many as there
  are replicas, and do the subnetting accordingly.
 
  --
  Regards,
  Stephan


Re: [Gluster-users] Performance for KVM images (qcow)

2013-04-09 Thread Eyal Marantenboim
Hi Bryan,

We have 1G nics on all our servers.
Do you think that changing our design to distribute-replicate will improve
the performance?

Anything in the gluster performance settings that you think I should change?

Thanks!


On Tue, Apr 9, 2013 at 6:00 AM, Bryan Whitehead dri...@megahappy.net wrote:

 This looks like you are replicating every file to all bricks?

 What is tcp running on? 1G nics? 10G? IPoIB (40-80G)?

 I think you want to have Distribute-Replicate: 4 bricks with replica = 2.

 Unless you are running at least 10G nics you are going to have serious
 IO issues in your KVM/qcow2 VM's.

 On Mon, Apr 8, 2013 at 7:11 AM, Eyal Marantenboim
 e...@theserverteam.com wrote:
  Hi,
 
 
  We have a set of 4 gluster nodes, all in a replicated design.
 
  We use it to store our qcow2 images for KVM. These images have variable
  IO, though most of them are read-only.
 
 
  I tried to find some documentation re. performance optimization, but it's
  either unclear to me, or I couldn't find much, so I copied from the
  internet and tried to adjust the config to our needs; I'm sure it's not
  optimized.
 
 
  We're using 3.3.1 on top of XFS.
 
  The qcow images are about 30GB (a couple are 100GB).
 
 
  Can someone please tell me what would be the best parameters to look at
  for performance?
 
 
  here is volume info:

  Volume Name: images
  Type: Replicate
  Volume ID:
  Status: Started
  Number of Bricks: 1 x 4 = 4
  Transport-type: tcp
  Bricks:
  Brick1: vmhost2:/exports/1
  Brick2: vmhost3:/exports/1
  Brick3: vmhost5:/exports/1
  Brick4: vmhost6:/exports/1
  Options Reconfigured:
  performance.cache-max-file-size: 1GB
  nfs.disable: on
  performance.cache-size: 4GB
  performance.cache-refresh-timeout: 1
  performance.write-behind-window-size: 2MB
  performance.read-ahead: on
  performance.write-behind: on
  performance.io-cache: on
  performance.stat-prefetch: on
  performance.quick-read: on
  performance.io-thread-count: 64
  performance.flush-behind: on
  features.quota-timeout: 1800
  features.quota: off

  Thanks in advance.
 
 

[Gluster-users] Performance for KVM images (qcow)

2013-04-08 Thread Eyal Marantenboim
Hi,


We have a set of 4 gluster nodes, all in a replicated design.

We use it to store our qcow2 images for KVM. These images have variable
IO, though most of them are read-only.


I tried to find some documentation re. performance optimization, but it's
either unclear to me, or I couldn't find much, so I copied from the
internet and tried to adjust the config to our needs; I'm sure it's not
optimized.


We're using 3.3.1 on top of XFS.

The qcow images are about 30GB (a couple are 100GB).


Can someone please tell me what would be the best parameters to look at
for performance?


here is volume info:

Volume Name: images
Type: Replicate
Volume ID:
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: vmhost2:/exports/1
Brick2: vmhost3:/exports/1
Brick3: vmhost5:/exports/1
Brick4: vmhost6:/exports/1
Options Reconfigured:
performance.cache-max-file-size: 1GB
nfs.disable: on
performance.cache-size: 4GB
performance.cache-refresh-timeout: 1
performance.write-behind-window-size: 2MB
performance.read-ahead: on
performance.write-behind: on
performance.io-cache: on
performance.stat-prefetch: on
performance.quick-read: on
performance.io-thread-count: 64
performance.flush-behind: on
features.quota-timeout: 1800
features.quota: off
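
For what it's worth, the options above were all applied per volume with
gluster volume set, e.g.:

# each option under "Options Reconfigured" was set like this
gluster volume set images performance.cache-size 4GB
# and the current values show up in
gluster volume info images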


Thanks in advance.