[Gluster-users] glusterd dead but subsys locked

2014-02-24 Thread Mingfan Lu
I have tried the latest glusterfs 3.4.2.
I can start the service with *service glusterd start* and all volumes come up,
but when I run *service glusterd status* it reports that glusterd is stopped:
glusterd dead but subsys locked
I found that /var/lock/subsys/glusterd exists while all brick processes
are still alive.

I don't think I'm hitting
https://bugzilla.redhat.com/show_bug.cgi?id=960476
since in my case the lock file is present.
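
A minimal check sequence on a RHEL/CentOS-style init, assuming the standard
service script (commands are illustrative):

# pidof glusterd                   (is the management daemon actually gone?)
# ls -l /var/lock/subsys/glusterd  (the lock the init script complains about)
# service glusterd start           (restarting glusterd does not touch the
                                    already-running brick processes)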

Any comments?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Rebalance times in 3.2.5 vs 3.4.2

2014-02-24 Thread Viktor Villafuerte
Hi all,

I have a distributed-replicated volume with 2 servers (replicas) and am
trying to add another pair of replicas: 1 x (1x1) => 2 x (1x1)

I have about 23G of data which I copy onto the first replica, check
everything, then add the other set of replicas and eventually run
rebalance fix-layout and migrate-data.
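
For reference, the sequence in question is roughly the following; host, brick
and volume names are placeholders:

# gluster volume add-brick VOLNAME serverC:/brick serverD:/brick
# gluster volume rebalance VOLNAME fix-layout start
# gluster volume rebalance VOLNAME start      (data migration, 3.4 syntax)
# gluster volume rebalance VOLNAME status     (progress / file counts)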

Now on

Gluster v3.2.5 this took about 30 mins (to rebalance + migrate-data)

on

Gluster v3.4.2 this has been running for almost 4 hours and it's still
not finished


As I may have to do this in production, where the amount of data is
significantly larger than 23G, I'm looking at about three weeks of wait
to rebalance :)

Now my question is whether this is how it's meant to be. I can see that v3.4.2
gives me more information about the rebalance process etc., but that surely
cannot explain the enormous time difference.

Is this normal/expected behaviour? If so, I will have to stick with
v3.2.5, as it is far quicker.

Please, let me know if there is any 'well known' option/way/secret to
speed the rebalance up on v3.4.2.


thanks



-- 
Regards

Viktor Villafuerte
Optus Internet Engineering
t: 02 808-25265
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Stopped dd write test, but gluster still busy

2014-02-24 Thread Gerald Brandt

Hi,

I decided I would hammer my GlusterFS NFS mounts with:

# watch -n 0 dd if=/dev/zero of=ddfile bs=8k count=200

I stopped the writes long ago, yet my gluster server is still going along.

MDD | md0  | busy  0% | read 259 | write 315 | KiB/r 438 | KiB/w 507 | MBr/s 22.20 | MBw/s 31.20 | avq  0.00 | avio 0.00 ms
MDD | md2  | busy  0% | read 405 | write 214 | KiB/r 409 | KiB/w 497 | MBr/s 32.40 | MBw/s 20.80 | avq  0.00 | avio 0.00 ms
DSK | sdk  | busy 18% | read 132 | write 110 | KiB/r 415 | KiB/w 485 | MBr/s 10.72 | MBw/s 10.43 | avq 22.13 | avio 3.54 ms
DSK | sdl  | busy 16% | read 135 | write 107 | KiB/r 414 | KiB/w 496 | MBr/s 10.92 | MBw/s 10.38 | avq 20.79 | avio 3.16 ms
DSK | sde  | busy 15% | read 135 | write 110 | KiB/r 409 | KiB/w 488 | MBr/s 10.80 | MBw/s 10.50 | avq 25.55 | avio 2.96 ms
DSK | sdd  | busy 15% | read  53 | write 111 | KiB/r 434 | KiB/w 479 | MBr/s  4.50 | MBw/s 10.40 | avq 15.65 | avio 4.27 ms
DSK | sdf  | busy 14% | read  52 | write 109 | KiB/r 437 | KiB/w 489 | MBr/s  4.44 | MBw/s 10.41 | avq 25.47 | avio 4.12 ms
DSK | sdj  | busy 13% | read  54 | write 108 | KiB/r 424 | KiB/w 493 | MBr/s  4.47 | MBw/s 10.40 | avq 25.53 | avio 3.85 ms
DSK | sdg  | busy 12% | read  51 | write 109 | KiB/r 441 | KiB/w 489 | MBr/s  4.40 | MBw/s 10.41 | avq 24.37 | avio 3.67 ms
DSK | sdh  | busy 12% | read  51 | write 110 | KiB/r 441 | KiB/w 484 | MBr/s  4.40 | MBw/s 10.40 | avq 22.06 | avio 3.48 ms
NET | transport | tcpi 217035 | tcpo 469958 | udpi 2 | udpo 2 | tcpao 0 | tcppo 6 | tcprs 84 | tcpie 0 | tcpor 2 | udpnp 0 | udpip 0
NET | network   | ipi 217128  | ipo 191986  | ipfrw 0 | deliv 217039 | icmpi 2 | icmpo 0
NET | eth4 24%  | pcki  53820 | pcko 100866 | si 6391 Kbps | so 244 Mbps | coll 0 | mlti 0 | erri 0 | erro 0 | drpi 0 | drpo 0
NET | eth2 24%  | pcki 122557 | pcko 123208 | si  243 Mbps | so 241 Mbps | coll 0 | mlti 5 | erri 0 | erro 0 | drpi 0 | drpo 0
NET | eth3 24%  | pcki  53846 | pcko 100256 | si 6380 Kbps | so 242 Mbps | coll 0 | mlti 0 | erri 0 | erro 0 | drpi 0 | drpo 0
NET | eth1 24%  | pcki 121621 | pcko 123520 | si  241 Mbps | so 242 Mbps | coll 0 | mlti 6 | erri 0 | erro 0 | drpi 0 | drpo 0


What's happening?  Why are there big reads when I was only doing a write 
test?  Why is the ethernet still busy?
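
A sketch of where to look while this is going on (VOLNAME is a placeholder):

# gluster volume status VOLNAME clients    (which clients still hold connections)
# gluster volume profile VOLNAME start
# gluster volume profile VOLNAME info      (per-brick op counts -- shows whether
                                            the remaining traffic is reads, writes
                                            or self-heal/lookups)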


Looks like I'm back to trying DRBD.

Gerald

--
Gerald Brandt
Majentis Technologies
g...@majentis.com
204-229-6595
www.majentis.com

Re: [Gluster-users] Best Practices for different failure scenarios?

2014-02-24 Thread BGM
Thanks Vijay,
I will dig into it.
Bernhard

Sent from my iPad

> On 24.02.2014, at 17:26, Vijay Bellur  wrote:
> 
> On 02/21/2014 10:27 PM, BGM wrote:
>>>> It might be very helpful to have a wiki next to this mailing list,
>>>> where all the good experience, all the proved solutions for "situations"
>>>> that are brought up here, could be gathered in a more
>>>> permanent and straight way.
>>> 
>>> +1. It would be very useful to evolve an operations guide for GlusterFS.
>>> 
>>>> .
>>>> To your questions I would add:
>>>> what's best practice in setting options for performance and/or integrity...
>>>> (yeah, well, for which use case under which conditions)
>>>> a mailinglist is very helpful for adhoc probs and questions,
>>>> but it would be nice to distill the knowledge into a permanent, searchable
>>>> form.
>>>> .
>>>> sure anybody could set up a wiki, but...
>>>> it would need the acceptance and participation of an active group
>>>> to get best results.
>>>> so IMO the appropriate place would be somewhere close to gluster.org?
>>>> .
>>> 
>>> Would be happy to carry this in doc/ folder of glusterfs.git and 
>>> collaborate on it if a lightweight documentation format like markdown or 
>>> asciidoc is used for evolving this guide.
>> 
>> I haven't worked with either of them,
>> on the very first glance asciidoc looks easier to me.
>> (assuming it is either or ?)
>> and (sorry for being flat, i m op not dev ;-) you suggest everybody sets up 
>> a git from where you
>> pull, right?
> 
> No need to setup a git on your own. We use the development workflow [1] for 
> submitting patches to documentation too.
> 
>> well, wouldn't a wiki be much easier? both, to contribute to and to access 
>> the information?
>> (like wiki.debian.org?)
>> The git based solution might be easier to start off with,
>> but would it reach a big enough community?
> 
> Documentation in markdown or asciidoc is rendered well by github. One of the 
> chapters in our admin guide does get rendered like this [2].
> 
>> Wouldn't a wiki also have a better PR/marketing effect (by being easier to 
>> access)?
>> just a thought...
> 
> We can roll out the content from git in various formats (like pdf, html etc.) 
> as both asciidoc/markdown can be converted to various formats. The advantage 
> of a git based workflow is that it becomes easy to review changes through 
> tools like gerrit and can also help in keeping false content/spam out of the 
> way.
> 
> Having said that, feel free to use tools of your choice. We can just go ahead 
> and use whatever is easy for most of us :). At the end of the day, evolving 
> this guide is more important than the tools that we choose to use in the 
> process.
> 
> Cheers,
> Vijay
> 
> [1] 
> http://www.gluster.org/community/documentation/index.php/Development_Work_Flow
> 
> [2] 
> https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_setting_volumes.md
> 
> 
>> Bernhard
>> 
>>> 
>>> -Vijay
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] NFS write speed 'ramps' up

2014-02-24 Thread Gerald Brandt

Hi,

I'm doing some testing, and seeing strange results for NFS writing.

I have a bonded (round robin) 2x1GigE link between my client and my server.

When I do 'dd if=/dev/zero of=ddfile bs=8k count=200', the writes 
start out slow, using only 12% of each interface.  Then over time (the 
time varies) it ramps up to use about 88% of each interface.
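
One thing that may explain the ramp (an assumption, not verified here) is the
client-side page cache absorbing the first part of the write before writeback
kicks in. Asking dd to flush or bypass the cache gives an end-to-end number:

# dd if=/dev/zero of=ddfile bs=8k count=200 conv=fdatasync   (fsync before dd exits)
# dd if=/dev/zero of=ddfile bs=1M count=2000 oflag=direct    (skip the page cache;
                                                              count is illustrative)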


Is there something in Linux, Gluster, or ??? that slows down initial writes?

Gerald

ps: on a side note, NFS reads are REALLY slow... roughly 43 MB/s over the 
bonded link, which is about 12% of each interface.


--
Gerald Brandt
Majentis Technologies
g...@majentis.com
204-229-6595
www.majentis.com

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] mounting a gluster volume

2014-02-24 Thread Joe Julian
GlusterFS listens on all addresses, so it's simply a matter of having your 
hostnames resolve to the IP address you want each particular node to be 
reached on.
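
For example (a sketch, with made-up addresses), the same hostnames can resolve
differently on the servers and on the clients:

# /etc/hosts on the servers -- back-end / crosslink addresses
10.10.10.1   hostA
10.10.10.2   hostB

# /etc/hosts on the clients -- front-end addresses
192.168.1.1  hostA
192.168.1.2  hostB

The volume is defined with the hostnames, so each side connects to whichever
address those names resolve to locally.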

On February 24, 2014 7:36:17 AM PST, Bernhard Glomm 
 wrote:
>Hi all
>
>I have a replica 2 glustervolume.
>between hostA and hostB
>both hosts are connected through a private network/crosslink
>which has addresses in a separate network range.
>
>The servers have another set of interfaces facing the client side -
>(on a different network address range)
>Is there a way that a client can mount a glustervolume
>without enabling ipforward on the server?
>
>TIA
>
>Bernhard
>
>
>
>
>___
>Gluster-users mailing list
>Gluster-users@gluster.org
>http://supercolony.gluster.org/mailman/listinfo/gluster-users

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] mounting a gluster volume

2014-02-24 Thread Paul Cuzner
Having a front-end network and a back-end network is a common approach to 
handling "legacy" protocols like SMB and NFS. 

Your front-end network provides one entry point, and the back-end network would 
support the node inter-connects and gluster client connections. 

As it stands today, I'm not aware of a simple way to make native gluster mounts 
pass through the front-end network without the ip-forwarding approach. 

Perhaps others on this list can enlighten us both! 

- Original Message -

> From: "Bernhard Glomm" 
> To: gluster-users@gluster.org
> Sent: Tuesday, 25 February, 2014 4:36:17 AM
> Subject: [Gluster-users] mounting a gluster volume

> Hi all

> I have a replica 2 glustervolume.
> between hostA and hostB
> both hosts are connected through a private network/crosslink
> which has addresses in a separate network range.
> The servers have another set of interfaces facing the client side -
> (on a different network address range)
> Is there a way that a client can mount a glustervolume
> without enabling ipforward on the server?

> TIA

> Bernhard

> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Very slow ls - WARNING

2014-02-24 Thread harry mangalam
FYI, another data point that echoes Franco's experience.

I turned this option (cluster.readdir-optimize) on after reading this thread, 
and in fact the 'ls' performance seemed to improve quite a bit, but after a few 
days, this morning all 85 of our compute nodes reported no files on the mount 
point, which was... disconcerting to a number of users.

The filesystem was still mounted and the data was intact, but 'ls' reported 
nothing, which makes it somewhat less than useful.

After turning that option off and remounting, all the clients see their files 
again, albeit listed more slowly.
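
For anyone wanting to reproduce the before/after, the toggle and remount were
along these lines (the mount point is a placeholder):

# gluster volume set gl cluster.readdir-optimize on     (the setting that led to
                                                         the empty listings)
# gluster volume set gl cluster.readdir-optimize off    (back to slow but correct)
# umount /mnt/gl && mount -t glusterfs bs1:/gl /mnt/gl  (remount on each client)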

The config is gluster 3.4.2 on amd64/SL6.4 and is now


 $ gluster volume info gl
 
Volume Name: gl
Type: Distribute
Volume ID: 21f480f7-fc5a-4fd8-a084-3964634a9332
Status: Started
Number of Bricks: 8
Transport-type: tcp,rdma
Bricks:
Brick1: bs2:/raid1
Brick2: bs2:/raid2
Brick3: bs3:/raid1
Brick4: bs3:/raid2
Brick5: bs4:/raid1
Brick6: bs4:/raid2
Brick7: bs1:/raid1
Brick8: bs1:/raid2
Options Reconfigured:
cluster.readdir-optimize: off
performance.write-behind-window-size: 1MB
performance.flush-behind: on
performance.cache-size: 268435456
nfs.disable: on
performance.io-cache: on
performance.quick-read: on
performance.io-thread-count: 64
auth.allow: 10.2.*.*,10.1.*.*


hjm



On Sunday, February 23, 2014 04:11:28 AM Franco Broi wrote:
> All the client filesystems core-dumped. Lost a lot of production time.
> 
> I've disabled the cluster.readdir-optimize option and remounted all the
> filesystems. 
> From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org]
> on behalf of Franco Broi [franco.b...@iongeo.com] Sent: Friday, February
> 21, 2014 10:57 PM
> To: Vijay Bellur
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] Very slow ls
> 
> Amazingly setting cluster.readdir-optimize has fixed the problem, ls is
> still slow but there's no long pause on the last readdir call.
> 
> What does this option do and why isn't it enabled by default?
> ___
> From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org]
> on behalf of Franco Broi [franco.b...@iongeo.com] Sent: Friday, February
> 21, 2014 7:25 PM
> To: Vijay Bellur
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] Very slow ls
> 
> On 21 Feb 2014 22:03, Vijay Bellur  wrote:
> > On 02/18/2014 12:42 AM, Franco Broi wrote:
> > > On 18 Feb 2014 00:13, Vijay Bellur  wrote:
> > >  > On 02/17/2014 07:00 AM, Franco Broi wrote:
> > >  > > I mounted the filesystem with trace logging turned on and can see
> > >  > > that
> > >  > > after the last successful READDIRP there is a lot of other
> > >  > > connections
> > >  > > being made the clients repeatedly which takes minutes to complete.
> > >  > 
> > >  > I did not observe anything specific which points to clients
> > >  > repeatedly
> > >  > reconnecting. Can you point to the appropriate line numbers for this?
> > >  > 
> > >  > Can you also please describe the directory structure being referred
> > >  > here?
> > > 
> > > I was tailing the log file while the readdir script was running and
> > > could see respective READDIRP calls for each readdir, after the last
> > > call all the rest of the stuff in the log file was returning nothing but
> > > took minutes to complete. This particular example was a directory
> > > containing a number of directories, one for each of the READDIRP calls
> > > in the log file.
> > 
> > One possible tuning that can possibly help:
> > 
> > volume set  cluster.readdir-optimize on
> > 
> > Let us know if there is any improvement after enabling this option.
> 
> I'll give it a go but I think this is a bug and not a performance issue.
> I've filed a bug report on bugzilla.
> > Thanks,
> > Vijay
> 
> 
> ___

Re: [Gluster-users] upgrading from gluster-3.2.6 to gluster-3.4.2

2014-02-24 Thread Dmitry Kopelevich

Vijay,

Thanks for the response. I don't think I had the /var/lib/glusterd 
directory before installing the RPMs (unless this directory would be 
created by the 3.2.x version).


I think my problem is indeed related to IPoIB. I have it set up, but in 
the gluster setup I used hostnames corresponding to an Ethernet 
connection, not IB. Is there a simple method to change the names of the 
server nodes in the gluster configuration?
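
One approach that is sometimes suggested (unverified here -- back up
/var/lib/glusterd on every node and stop glusterd everywhere before touching
anything; old-name/new-name are placeholders):

# service glusterd stop                          (on every server)
# cp -a /var/lib/glusterd /var/lib/glusterd.bak
# grep -rl old-name /var/lib/glusterd | xargs sed -i 's/old-name/new-name/g'
# service glusterd start                         (again on every server)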


Thanks,

Dmitry

Dmitry Kopelevich
Associate Professor
Chemical Engineering Department
University of Florida
Gainesville, FL 32611

Phone:   (352)-392-4422
Fax: (352)-392-9513
E-mail:  dkopelev...@che.ufl.edu

On 02/24/2014 11:40 AM, Vijay Bellur wrote:

On 02/19/2014 01:21 AM, Dmitry Kopelevich wrote:



According to the installation guidelines, installation from rpms should
automatically copy the files from /etc/glusterd to /var/lib/glusterd.
This didn't happen for me -- the directory /var/lib/glusterd contained
only empty subdirectories. But the content of /etc/glusterd directory
has moved to /etc/glusterd/glusterd.


Did /var/lib/glusterd per chance exist before installing the RPMs? If 
it does, then contents of /etc/glusterd do not get copied over to 
/var/lib/glusterd.




So, I decided to manually copy files from /etc/glusterd/glusterd to
/var/lib/glusterd and follow step 5 of the installation guidelines
(which was supposed to be skipped when installing from rpms):

glusterd --xlator-option *.upgrade=on -N

This didn't work (error message: glusterd: No match)

Then I tried specifying explicitly the name of my volume:

glusterd --xlator-option .upgrade=on -N

This led to the following messages in file 
etc-glusterfs-glusterd.vol.log:


[2014-02-18 17:22:27.146449] I [glusterd.c:961:init] 0-management: Using
/var/lib/glusterd as working directory
[2014-02-18 17:22:27.149097] I [socket.c:3480:socket_init]
0-socket.management: SSL support is NOT enabled
[2014-02-18 17:22:27.149126] I [socket.c:3495:socket_init]
0-socket.management: using system polling thread
[2014-02-18 17:22:29.282665] I
[glusterd-store.c:1339:glusterd_restore_op_version] 0-glusterd:
retrieved op-version: 1
[2014-02-18 17:22:29.283478] E
[glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown key:
brick-0
[2014-02-18 17:22:29.283513] E
[glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown key:
brick-1
[2014-02-18 17:22:29.283534] E
[glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown key:
brick-2
...
and so on for all other bricks.


These messages related to bricks are benign and can be ignored.



After that, files nfs.log, glustershd.log, and
etc-glusterfs-glusterd.vol.log get filled with a large number of warning
messages and nothing else seems to happen. The following messages appear
to be relevant:

- Files nfs.log, glustershd.log:

2014-02-18 15:58:01.889847] W [rdma.c:1079:gf_rdma_cm_event_handler]
0-data-volume-client-2: cma event RDMA_CM_EVENT_ADDR_ERROR, error -2
(me: peer:)


Do you also have IPoIB in your setup? RDMA-CM in 3.4.x releases does 
need IPoIB to function properly. [1]


Raghavendra - can you please help here?



(the name of my volume is data-volume and its transport type is RDMA)

- File etc-glusterfs-glusterd.vol.log

[2014-02-18 17:22:33.322565] W [socket.c:514:__socket_rwv] 0-management:
readv failed (No data available)

Also, for some reason the time stamps in the log files are incorrect.


Starting with 3.4, time stamps in the log files are in UTC by default.

Thanks,
Vijay

[1] 
https://github.com/gluster/glusterfs/blob/master/doc/features/rdma-cm-in-3.4.0.txt


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] upgrading from gluster-3.2.6 to gluster-3.4.2

2014-02-24 Thread Vijay Bellur

On 02/19/2014 01:21 AM, Dmitry Kopelevich wrote:



According to the installation guidelines, installation from rpms should
automatically copy the files from /etc/glusterd to /var/lib/glusterd.
This didn't happen for me -- the directory /var/lib/glusterd contained
only empty subdirectories. But the content of /etc/glusterd directory
has moved to /etc/glusterd/glusterd.


Did /var/lib/glusterd per chance exist before installing the RPMs? If it 
does, then contents of /etc/glusterd do not get copied over to 
/var/lib/glusterd.




So, I decided to manually copy files from /etc/glusterd/glusterd to
/var/lib/glusterd and follow step 5 of the installation guidelines
(which was supposed to be skipped when installing from rpms):

glusterd --xlator-option *.upgrade=on -N

This didn't work (error message: glusterd: No match)
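
A possible explanation for the "No match" error (an assumption, not confirmed
in this thread): csh/tcsh prints exactly that when an unquoted glob such as
*.upgrade=on matches no files in the current directory. Quoting the option
keeps the shell from expanding it:

# glusterd --xlator-option '*.upgrade=on' -N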

Then I tried specifying explicitly the name of my volume:

glusterd --xlator-option .upgrade=on -N

This led to the following messages in file etc-glusterfs-glusterd.vol.log:

[2014-02-18 17:22:27.146449] I [glusterd.c:961:init] 0-management: Using
/var/lib/glusterd as working directory
[2014-02-18 17:22:27.149097] I [socket.c:3480:socket_init]
0-socket.management: SSL support is NOT enabled
[2014-02-18 17:22:27.149126] I [socket.c:3495:socket_init]
0-socket.management: using system polling thread
[2014-02-18 17:22:29.282665] I
[glusterd-store.c:1339:glusterd_restore_op_version] 0-glusterd:
retrieved op-version: 1
[2014-02-18 17:22:29.283478] E
[glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown key:
brick-0
[2014-02-18 17:22:29.283513] E
[glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown key:
brick-1
[2014-02-18 17:22:29.283534] E
[glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown key:
brick-2
...
and so on for all other bricks.


These messages related to bricks are benign and can be ignored.



After that, files nfs.log, glustershd.log, and
etc-glusterfs-glusterd.vol.log get filled with a large number of warning
messages and nothing else seems to happen. The following messages appear
to be relevant:

- Files nfs.log, glustershd.log:

2014-02-18 15:58:01.889847] W [rdma.c:1079:gf_rdma_cm_event_handler]
0-data-volume-client-2: cma event RDMA_CM_EVENT_ADDR_ERROR, error -2
(me: peer:)


Do you also have IPoIB in your setup? RDMA-CM in 3.4.x releases does 
need IPoIB to function properly. [1]


Raghavendra - can you please help here?



(the name of my volume is data-volume and its transport type is RDMA)

- File etc-glusterfs-glusterd.vol.log

[2014-02-18 17:22:33.322565] W [socket.c:514:__socket_rwv] 0-management:
readv failed (No data available)

Also, for some reason the time stamps in the log files are incorrect.


Starting with 3.4, time stamps in the log files are in UTC by default.

Thanks,
Vijay

[1] 
https://github.com/gluster/glusterfs/blob/master/doc/features/rdma-cm-in-3.4.0.txt

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Best Practices for different failure scenarios?

2014-02-24 Thread Vijay Bellur

On 02/21/2014 10:27 PM, BGM wrote:

It might be very helpful to have a wiki next to this mailing list,
where all the good experience, all the proved solutions for "situations"
that are brought up here, could be gathered in a more
permanent and straight way.


+1. It would be very useful to evolve an operations guide for GlusterFS.


.
To your questions I would add:
what's best practice in setting options for performance and/or integrity...
(yeah, well, for which use case under which conditions)
a mailinglist is very helpful for adhoc probs and questions,
but it would be nice to distill the knowledge into a permanent, searchable form.
.
sure anybody could set up a wiki, but...
it would need the acceptance and participation of an active group
to get best results.
so IMO the appropriate place would be somewhere close to gluster.org?
.


Would be happy to carry this in doc/ folder of glusterfs.git and collaborate on 
it if a lightweight documentation format like markdown or asciidoc is used for 
evolving this guide.


I haven't worked with either of them,
on the very first glance asciidoc looks easier to me.
(assuming it is either or ?)
and (sorry for being flat, i m op not dev ;-) you suggest everybody sets up a 
git from where you
pull, right?


No need to setup a git on your own. We use the development workflow [1] 
for submitting patches to documentation too.



well, wouldn't a wiki be much easier? both, to contribute to and to access the 
information?
(like wiki.debian.org?)
The git based solution might be easier to start off with,
but would it reach a big enough community?


Documentation in markdown or asciidoc is rendered well by github. One of 
the chapters in our admin guide does get rendered like this [2].



Wouldn't a wiki also have a better PR/marketing effect (by being easier to 
access)?
just a thought...


We can roll out the content from git in various formats (like pdf, html 
etc.) as both asciidoc/markdown can be converted to various formats. The 
advantage of a git based workflow is that it becomes easy to review 
changes through tools like gerrit and can also help in keeping false 
content/spam out of the way.


Having said that, feel free to use tools of your choice. We can just go 
ahead and use whatever is easy for most of us :). At the end of the day, 
evolving this guide is more important than the tools that we choose to 
use in the process.


Cheers,
Vijay

[1] 
http://www.gluster.org/community/documentation/index.php/Development_Work_Flow


[2] 
https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_setting_volumes.md




Bernhard



-Vijay





___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] gluster NFS proces takes 100% cpu

2014-02-24 Thread Gerald Brandt

Hi,

I've set up a 2 brick replicate system, using bonded GigE.

eth0 - management
eth1 & eth2 - bonded 192.168.20.x
eth3 & eth4 - bonded 192.168.10.x

I created the replicated volume over the 192.168.10 interfaces.

# gluster volume info

Volume Name: raid5
Type: Replicate
Volume ID: 02b24ff0-e55c-4f92-afa5-731fd52d0e1a
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: filer-1:/gluster-exported/raid5/data
Brick2: filer-2:/gluster-exported/raid5/data
Options Reconfigured:
performance.nfs.stat-prefetch: on
performance.nfs.io-cache: on
performance.nfs.read-ahead: on
performance.nfs.io-threads: on
nfs.trusted-sync: on
performance.cache-size: 13417728
performance.io-thread-count: 64
performance.write-behind-window-size: 4MB
performance.io-cache: on
performance.read-ahead: on

I attached an NFS client across the 192.168.20 interface.  The NFS works fine.  
Under load, though, I get 100% CPU usage of the nfs process and lose 
connectivity.
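
A sketch of how to see what the busy process is doing while the load is applied
(volume name taken from the output above):

# gluster volume status raid5 nfs        (the gluster NFS server and its PID)
# gluster volume profile raid5 start
# gluster volume profile raid5 info      (per-brick latency and op counts under load)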

My plan was to replicate across the 192.168.10 bond as well as do gluster 
mounts.  The NFS mount on 192.168.20 was to keep NFS traffic off the gluster 
link.

Is this a supported configuration?  Does anyone else do this?

Gerald

--
Gerald Brandt
Majentis Technologies
g...@majentis.com
204-229-6595
www.majentis.com

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] nfs.trusted-sync behaviour

2014-02-24 Thread Steve Dainard
Just posted on #gluster and didn't get a response, so I thought I'd post
here as well:

Is anyone familiar with nfs.trusted-sync behaviour? Specifically, in a
multi-node replica cluster, does NFS send an ack back to the client when data
is received in memory on whichever host is providing NFS services, or does
it send the ack once that data has been replicated into the memory of all the
replica member nodes?

I suppose the question could also be: does the data have to be on disk on
one of the nodes before it is replicated to the other nodes?
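
For reference, the option in question is set per volume and shows up under
"Options Reconfigured" in volume info (VOLNAME is a placeholder):

# gluster volume set VOLNAME nfs.trusted-sync on
# gluster volume info VOLNAME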

Thanks,

Steve Dainard
IT Infrastructure Manager
Miovision | Rethink Traffic

Blog | LinkedIn | Twitter | Facebook
--
 Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, ON,
Canada | N2C 1L3
This e-mail may contain information that is privileged or confidential. If
you are not the intended recipient, please delete the e-mail and any
attachments and notify us immediately.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] mounting a gluster volume

2014-02-24 Thread Bernhard Glomm
Hi all

I have a replica 2 glustervolume.
between hostA and hostB
both hosts are connected through a private network/crosslink
which has addresses in a separate network range.

The servers have another set of interfaces facing the client side -
(on a different network address range)
Is there a way that a client can mount a glustervolume
without enabling ipforward on the server?

TIA

Bernhard
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Shuffling data to modify brick FS

2014-02-24 Thread Andrew Smith

I have a system with 4 bricks, each on an independent server.
I have found, unhappily, that I didn’t configure my bricks with
enough metadata space. I can only increase the size of the metadata
by rebuilding the filesystem. So,

I wish to 

1) Move all the data off of a brick,
2) Rebuild the FS on that brick
3) Add it back
4) Repeat for other 3 bricks.

My problem is that the only option to move data off of a brick 
seems to involve moving the data to a single target drive.
Since my drives are about 60% full, none of the other drives
can accommodate the entire data set of the removed drive. 
So, what I want to do is something like:

1) Rebalance my 4-brick system onto 3 bricks
2) Rebuild FS on retired brick
3) Add back refreshed brick and migrate
4) Repeat for other bricks.

I can’t figure out how to do this from the docs, which seem
to only include the case where a brick is replaced.
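
One sequence that may match step 1 -- draining a brick onto the remaining
bricks rather than onto a single replacement -- is remove-brick in its
start/status/commit form (a sketch; names are placeholders, and it's worth
confirming the migration shows completed before committing):

# gluster volume remove-brick VOLNAME server1:/brick1 start
# gluster volume remove-brick VOLNAME server1:/brick1 status
# gluster volume remove-brick VOLNAME server1:/brick1 commit
  (rebuild the filesystem on server1:/brick1 here)
# gluster volume add-brick VOLNAME server1:/brick1
# gluster volume rebalance VOLNAME start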

Thanks
Andy
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Fwd: [CentOS-devel] Update on stats of SIG plans

2014-02-24 Thread Lalatendu Mohanty


FYI,

For folks who are interested in a CentOS variant with all the required 
GlusterFS packages, i.e. the CentOS Storage SIG.


Thanks,
Lala

 Original Message 
Subject:[CentOS-devel] Update on stats of SIG plans
Date:   Fri, 21 Feb 2014 19:45:47 +
From:   Karanbir Singh 
Reply-To:   The CentOS developers mailing list. 
To: centos-de...@centos.org



hi,

I am sure people are wondering what the state of the SIG's is at this
point. And this is a quick recap of stuff from my perspective.

We have 8 SIG's that are under active consideration and planning.

Every SIG proposal needs to go via the CentOS Board for inclusion and setup.

The CentOS Board is starting to get a regular meeting schedule going,
and will meet every Wednesday ( minutes for every meeting will be posted
on the website, Jim is working out the mechanics for that ).

Starting with the Board meeting of the 5th March, we will consider SIG
plans, no more than 2 a meeting, at every other board meeting. These
meetings will be held in public, and the SIG's being considered will be
notified in advance so they can come and be a part of the conversations
( and any followup can happen immediately after the meeting ).

In the coming days, I will be reaching out to the people who nominated
themselves to be SIG coordinators, for the SIG's I've offered to help
sponsor and start working on the proposals. The proposals will be on the
wiki.centos.org site and I'll try to post updates to this list (
centos-devel ) so others can chime in and we incorporate wider, public
viewpoints on the proposal.

- KB

--
Karanbir Singh
+44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh
GnuPG Key : http://www.karan.org/publickey.asc
___
CentOS-devel mailing list
centos-de...@centos.org
http://lists.centos.org/mailman/listinfo/centos-devel



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users