Re: [Gluster-users] bonding question

2014-09-29 Thread Reinis Rozitis
Ok, I mean this is a network-based solution, but I think 100MB/sec is
possible with one NIC too.

I'm just wondering, maybe my bonding isn't working right.


You should test with multiple clients/dd streams.

http://serverfault.com/questions/569060/link-aggregation-lacp-802-3ad-max-throughput/
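A single TCP stream will only ever hash to one slave NIC in an 802.3ad bond, so a quick way to exercise the bond is to run several dd writers in parallel. A minimal sketch (the mount path is an assumption; point it at your own client mount and raise count well beyond the page cache size for a real test):

```shell
#!/bin/sh
# Parallel dd streams: with 802.3ad bonding each flow hashes to one
# slave NIC, so several concurrent streams are needed to exercise
# more than one link. MOUNT is a placeholder path.
MOUNT=${MOUNT:-/tmp/bond-test}
mkdir -p "$MOUNT"

for i in 1 2 3 4; do
  dd if=/dev/zero of="$MOUNT/stream$i" bs=1M count=64 conv=fsync 2>/dev/null &
done
wait

# Show what was written.
du -sh "$MOUNT"
```

While it runs, watch the per-slave traffic counters (e.g. in /proc/net/bonding/bond0 or with ifstat) to see whether more than one link is actually carrying data.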

rr 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs high memory usage

2014-05-29 Thread Reinis Rozitis

 PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
 1892 root  20   0 10.2g 4.7g 1900 S   15 61.1   8980:27 glusterfs



10 GBytes is too much for a process and we want to know if there is
anything we can do to solve this situation.


In general VIRT (virtual) is not the memory you should be looking at, but
rather RES (resident), which is 4.7G here and is the actual memory the process is using.


You can clearly see it from:

Mem:   8125496k total,  5948512k used,  2176984k free


E.g. your server has only 8GB of physical RAM with 2GB free, so there is no way
a process can actually be eating 10 gigs (that figure only exists in virtual address space).


The bad thing, though, is that you have quite a large swap and it's being used:


Swap: 14878048k total,  5731656k used, 9146392k free,   109640k cached


I would suggest running:

sysctl -w vm.swappiness=0

(and then setting vm.swappiness = 0 in sysctl.conf to make it permanent) so the
system avoids swapping as much as possible.



p.s. to understand more about Linux memory management you might want to look at
this presentation: https://www.youtube.com/watch?v=twQKAoq2OPE

rr



Re: [Gluster-users] Reading directly from brick

2011-09-06 Thread Reinis Rozitis
Simple answer - no, it's not ever safe to do writes to an active Gluster 
backend.


The question was about reads though, and then the answer is: it is perfectly fine
(and faster) to read directly from the filesystem (in replicated setups), as long
as you keep in mind that by doing so you lose Gluster's autoheal
feature. E.g. if one of the gluster nodes goes down and a file is
written in the meantime, then when the server comes back up and you access the file
directly it won't show up, while it would when accessed via the gluster mount
point (you can work around this by manually triggering the self-heal).
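For the pre-3.3 releases this thread is about, the usual way to trigger self-heal by hand was simply to stat() every file through the fuse mount. A minimal sketch (the mount path in the example is an assumption):

```shell
#!/bin/sh
# Walk a replica mount and stat every file; on older GlusterFS
# (pre-3.3) the client checks, and heals if needed, each file it
# looks up through the fuse mount point.
trigger_heal() {
    find "$1" -noleaf -print0 | xargs --null stat >/dev/null
}

# Example invocation against an assumed mount point:
# trigger_heal /mnt/gluster
```

Newer releases replace this with 'gluster volume heal <volname> full'.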



I've heard that reads from glusterfs are around 20 times slower than from 
ext3:


"20 times" might be fetched out of thin air but of course there is a 
significant overhead of serving a file from a gluster which basically 
involves network operations and additional meta data checks versus fetching 
the file directly from iron.



rr 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Stripe+replicate

2011-08-12 Thread Reinis Rozitis

Hello,
is this ( http://gluster.org/pipermail/gluster-users/2011-July/008223.html )
still true for the 3.3.0 beta, or should I check out GIT?

Also, while it is possible to create them manually in the client volfile, will more
complex striped+replicated+distributed setups (for example a stripe over 6 or more
nodes, each stripe having 3 replicas, distributed over 12 servers) be supported, or
is it better to stay away from something like that?


What's the suggested way to store large ~500 GB files reliably, so that a failed
replica that has to be resynced doesn't bring the cluster down?



thx in advance
rr 




[Gluster-users] Replacing a downed brick

2011-07-25 Thread Reinis Rozitis

Hello,
while playing around with the new elastic glusterfs system (via 'glusterd'; previously
I have been using glusterfs with a static configuration) I have stumbled upon the
following problem:


1. I have a test system with 12 nodes in a replicated/distributed way (replica 
count 3):

Volume Name: storage
Type: Distributed-Replicate
Status: Started
Number of Bricks: 4 x 3 = 12
Transport-type: tcp


2. One of the brick systems/servers had a simulated hardware failure (disks
were wiped) and was reinstalled from scratch.


3. When the server ('glusterd') came back up, the rest of the bricks logged
something like:

Jul 25 17:10:45 snode182  GlusterFS[3371]: [2011-07-25 17:10:45.435786] C [glusterd-rpc-ops.c:748:glusterd3_1_cluster_lock_cbk] 0-: 
Lock response received from unknown peer: 4ecec354-1d02-4709-8f1e-607a735dbe62


Obviously the peer UUID in glusterd.info (because of the full "crash/reinstall")
differs from the UUID recorded in the cluster configuration.


Peer status shows:

Hostname: 10.0.0.149
Uuid: f9ea651e-68da-40fa-80d9-6bee7779aa97
State: Peer Rejected (Connected)

4. While the info commands work fine, anything that involves changing volume settings
returns that the volume doesn't exist (from the logs this seems to be coming from the
reinstalled node):


[2011-07-25 17:08:54.579631] E 
[glusterd-op-sm.c:1237:glusterd_op_stage_set_volume] 0-: Volume storage does 
not exist
[2011-07-25 17:08:54.579769] E [glusterd-op-sm.c:7107:glusterd_op_ac_stage_op] 
0-: Validate failed: -1



So my question is how to correctly reintroduce the box to the glusterfs cluster 
since:

1. I can't run 'peer probe 10.0.0.149' because gluster says the peer is already in
the cluster.
2. I can't remove the peer because it is part of a volume.
3. I can't remove the brick from the volume because gluster asks me to remove 3 bricks
(i.e. the whole replica set, which would also mean data loss).
4. I imagine that replace-brick won't work even if I fake the new node with a different
IP/hostname (since the source brick will be down), or will it replicate from the live
ones?




I tried just manually changing the UUID back to the one listed on the rest of the
nodes (peer status), but apparently that was not enough (the node itself didn't see
any other servers and wasn't able to sync volume information from the remote brick(s),
complaining that they are not its friends).


Then, when I manually copied in all the peers/* files from a running brick node and
restarted glusterd, the node reverted to the 'Peer in Cluster' state.
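For reference, the steps above can be sketched roughly like this (the paths and state-dir location are assumptions based on the 3.1/3.2 defaults, where glusterd keeps its state under /etc/glusterd; later releases use /var/lib/glusterd):

```shell
#!/bin/sh
# restore_uuid: rewrite the UUID line in glusterd.info so the
# reinstalled node presents the identity the cluster still expects.
restore_uuid() {
    info_file=$1 old_uuid=$2
    sed -i "s/^UUID=.*/UUID=$old_uuid/" "$info_file"
}

# On the reinstalled node (run as root; paths are assumptions):
# restore_uuid /etc/glusterd/glusterd.info <uuid-from-healthy-peer-status>
# scp healthy-node:/etc/glusterd/peers/* /etc/glusterd/peers/
# /etc/init.d/glusterd restart
```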



Is this the way?
Or am I doing something totally wrong?


wbr
rr



Re: [Gluster-users] Reading directly from gluster folder ?

2011-07-20 Thread Reinis Rozitis

I'm curious, however - in what way would you like to use this method?


It is useful if you have a replica/mirror setup without distribution: then you can, for
example, set up webserver(s) on the node(s) themselves and serve directly from the
filesystem, which is faster than going through a fuse mountpoint (of course there are
drawbacks regarding self-heal, but you can work around those too).


rr 

