[Gluster-users] glusterfs-3.4.2 released

2014-01-03 Thread Gluster Build System

RPM: http://bits.gluster.org/pub/gluster/glusterfs/3.4.2/

SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.2.tar.gz

This release was made from jenkins-release-55

-- Gluster Build System
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster Munin Monitoring Plugin

2014-01-03 Thread John Mark Walker
Don't forget glubix - https://forge.gluster.org/glubix

-JM
 On Jan 3, 2014 2:15 AM, Vijay Bellur vbel...@redhat.com wrote:

 On 12/31/2013 10:32 PM, The Figuras wrote:

 Has anyone come across a decent Munin plugin?


 The ones I've seen in the past seem dated now.

 Recently, I came across this Nagios plugin [1] and it seems to work
 with GlusterFS 3.4.1.

 -Vijay

 [1] http://exchange.nagios.org/directory/Plugins/System-Metrics/File-System/GlusterFS-checks/details


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Kernel panic when using FUSE

2014-01-03 Thread Pruner, Anne (Anne)
Hi,
I've run into a kernel bug, triggered by using gluster through 
FUSE.  I'm wondering if any of you have encountered a similar problem and 
worked around it by using an updated version of the kernel, gluster, or FUSE.  
Here's the bug:

  KERNEL: /usr/lib/debug/lib/modules/2.6.32-220.el6.x86_64/vmlinux
DUMPFILE: /var/crash/127.0.0.1-2013-12-21-13:55:32/vmcore  [PARTIAL DUMP]
CPUS: 64
DATE: Sat Dec 21 13:52:33 2013
  UPTIME: 16 days, 23:17:25
LOAD AVERAGE: 3.22, 3.36, 3.57
   TASKS: 5598
NODENAME: uca-amm3.cnda.avaya.com
 RELEASE: 2.6.32-220.el6.x86_64
 VERSION: #1 SMP Wed Nov 9 08:03:13 EST 2011
 MACHINE: x86_64  (2892 Mhz)
  MEMORY: 32 GB
   PANIC: kernel BUG at fs/inode.c:322!
 PID: 259
 COMMAND: kswapd1
TASK: 880433108b00  [THREAD_INFO: 88043310e000]
 CPU: 8
   STATE: TASK_RUNNING (PANIC)

crash> bt
PID: 259    TASK: 880433108b00  CPU: 8   COMMAND: kswapd1
#0 [88043310f7b0] machine_kexec at 81031fcb
#1 [88043310f810] crash_kexec at 810b8f72
#2 [88043310f8e0] oops_end at 814f04b0
#3 [88043310f910] die at 8100f26b
#4 [88043310f940] do_trap at 814efda4
#5 [88043310f9a0] do_invalid_op at 8100ce35
#6 [88043310fa40] invalid_op at 8100bedb
[exception RIP: clear_inode+248]
RIP: 81190fb8  RSP: 88043310faf0  RFLAGS: 00010202
RAX:   RBX: 8800245ce980  RCX: 0001
RDX:   RSI: 0001  RDI: 8800245ce980
RBP: 88043310fb00   R8:    R9: 000d
R10: 0002  R11: 0001  R12: 81fbf340
R13:   R14: 8800888014f8  R15: 0001
ORIG_RAX:   CS: 0010  SS: 0018
#7 [88043310fb08] generic_delete_inode at 811916f6
#8 [88043310fb38] iput at 81190612
#9 [88043310fb58] dentry_iput at 8118d170
#10 [88043310fb78] d_kill at 8118d2d1
#11 [88043310fb98] __shrink_dcache_sb at 8118d666
#12 [88043310fc38] shrink_dcache_memory at 8118d7e9
#13 [88043310fc98] shrink_slab at 811299aa
#14 [88043310fcf8] balance_pgdat at 8112c75d
#15 [88043310fe28] kswapd at 8112caf6
#16 [88043310fee8] kthread at 81090886
#17 [88043310ff48] kernel_thread at 8100c14a

This bug is reported here: 
http://fuse.996288.n3.nabble.com/Kernel-panic-when-using-fuse-with-distributed-filesystem-while-doing-heavy-i-o-td10373.html,
which shows that it has been filed as two different Bugzilla reports: 
https://bugzilla.redhat.com/show_bug.cgi?id=644085 and 
https://bugzilla.kernel.org/show_bug.cgi?id=15927.  All of these reports 
mention using FUSE under heavy load on the 2.6 kernel (which we are using).   
There's another entry on the Red Hat Network, but I can't see it (I don't 
have a subscription): https://access.redhat.com/site/solutions/72383


Thanks,
Anne

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Kernel panic when using FUSE

2014-01-03 Thread Ben Turner
----- Original Message -----
 From: Anne Pruner (Anne) pru...@avaya.com
 To: gluster-users@gluster.org
 Sent: Friday, January 3, 2014 8:51:25 AM
 Subject: [Gluster-users] Kernel panic when using FUSE
 
 Hi,
 
 I've run into a kernel bug, triggered by using gluster through FUSE.
 Wondering if any of you have encountered a similar problem and have worked
 around it by using an updated version of the kernel, gluster, or FUSE.
 
 [crash backtrace and Bugzilla links snipped; see the original message above]

It looks like this was fixed in kernel-2.6.32-220.13.1.el6.x86_64.rpm for 6.2 
and kernel-2.6.32-279.el6.x86_64.rpm for 6.3.  Updating should resolve this for 
you.
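
A minimal sketch of the update on each affected node (assuming the standard 
RHEL update channels are available):

   yum update kernel   # pulls kernel-2.6.32-220.13.1.el6 or later on 6.2
   reboot              # boot into the fixed kernel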

-b
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] glusterfs-3.4.2 released

2014-01-03 Thread John Mark Walker
3.4.2 is here! Please allow some time for the builds to work their way over 
to the download server. 

An announcement will be coming out shortly. 

-JM


- Original Message -
 
 RPM: http://bits.gluster.org/pub/gluster/glusterfs/3.4.2/
 
 SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.2.tar.gz
 
 This release was made from jenkins-release-55
 
 -- Gluster Build System
 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] glusterfs-3.4.2 released

2014-01-03 Thread Kaleb KEITHLEY

On 01/03/2014 10:08 AM, John Mark Walker wrote:

3.4.2 is here! Please give some time for the builds to work their way over to 
the download server.

An announcement will be coming out shortly.


Builds of 3.4.2 for several distributions are already available at

  http://download.gluster.org/pub/gluster/glusterfs/LATEST/

a.k.a.

  http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.2/

Packages for RHEL/CentOS, Pidora, and OpenSuSE 13.1 are there now. 
Packages for SLES11sp3 coming shortly.


For Fedora 18, 19, and 20, enable the updates-testing repo. Packages 
are there now, although the mirrors may take a while to sync. After 
seven days of testing they will be promoted to the stable updates 
repo. If you have a FAS account, you can give the packages a karma boost 
to promote them sooner.
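
For the impatient, a minimal sketch with the yum-era Fedora tooling (the 
package glob is an assumption; adjust to the glusterfs subpackages you have 
installed):

   yum --enablerepo=updates-testing update 'glusterfs*'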


No doubt packages are in the works for Debian and Ubuntu; watch for an 
announcement.


--

Kaleb

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Gluster Community members visiting FOSDEM and DevConf in early February 2014

2014-01-03 Thread Niels de Vos
Hello all,

on the weekend of 1-2 February, FOSDEM [1] will take place in Brussels, Belgium.
Several users and Gluster community members will be attending the event. Some
of us are trying to arrange a Gluster Community table with some promotional
materials, demos, Q&A and face-to-face chats.

If you are interested in joining us, please let us know by responding to this
email with some details, or add your note to the TitanPad[2]. In case you want
to discuss a specific topic or would like to see a certain GlusterFS
use-case/application, tell us in advance and we'll try to be prepared for it.

Part of the same group of people will be going to Brno, Czech Republic for
DevConf [3] the weekend after FOSDEM. We might be able to arrange a workshop or
presentation there too. However, we would like to hear from others what they
prefer to see/hear/do, so please post your wishes!

I hope to see a bunch of you in Brussels,
Niels


[1] https://fosdem.org
[2] http://titanpad.com/gluster-at-fosdem2014
[3] http://devconf.cz
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] .glusterfs

2014-01-03 Thread Dave Nirenberg
Hi All,

This is probably a simple question, but I have been unable to find any 
official documentation on it yet: what exactly does the .glusterfs directory 
do, and what controls how much disk space it uses?

I am using 3.3 in my environment and the .glusterfs directory has grown to 
93GB, whereas the actual directory content is under 2GB total. I got these 
disk usage stats using 'du'. Does this sound right?

Thanks in advance,
Dave
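
P.S. The stats came from separate du invocations along these lines (the 
brick path here is hypothetical):

   du -sh /bricks/b1/.glusterfs            # reports 93GB
   du -sh --exclude=.glusterfs /bricks/b1  # reports under 2GB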
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 2 Replicas on 3 servers

2014-01-03 Thread Joe Julian
No. Your brick count must be a multiple of your replica count. 

In the case of replica 2 volumes, each pair of bricks, in the order defined 
at volume creation, makes a replicated subvolume that is handed to the 
distribute translator.
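
For example, a sketch (hostnames and brick paths hypothetical): three 
servers with two bricks each can carry a replica 2 volume if the pairs are 
chained around the ring:

    gluster volume create gv0 replica 2 \
        server1:/bricks/b1 server2:/bricks/b1 \
        server2:/bricks/b2 server3:/bricks/b2 \
        server3:/bricks/b3 server1:/bricks/b3

Each consecutive pair of bricks becomes one replicated subvolume, so every 
brick has a mirror on a neighbouring server and the brick count (6) remains 
a multiple of the replica count (2).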

Chris Clarke ccla...@centiq.co.uk wrote:
Hi,

I've seen the other post recently on 3 servers and 2 replicas, but it
doesn't answer the essential point of whether you can have a replica count
of 2 across 3 servers when each server holds a single brick.

I'm looking to replicate the behaviour of an IBM GPFS filesystem, where I
have a 3-node cluster with a replication factor of 2.  As GPFS is block
based, each block of data is stored on two of the 3 servers, i.e.
block 1 may be on servers 1 and 3, block 2 may be on servers 1 and 2,
block 3 may be on 2 and 3...

Is it possible to do a similar thing with Gluster?  I'd also love to be
able to extend the filesystem by adding extra servers and rebalancing the
file distribution across them, while keeping the replication factor of 2,
so I can always stand the loss of a single node without data loss.

Is any of this achievable?

Thanks

Chris

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] .glusterfs

2014-01-03 Thread Justin Dossey
Dave,

Check out Joe Julian's excellent explanation at
http://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/ .

The .glusterfs directory should be approximately the same size as the
volume content, as it is just hard links.  You may have some issue there if
you're seeing a huge discrepancy.  It might be worth trying something like

find .glusterfs -type f -links 1

on each brick to see if you have any DHT linkfiles pointing to files which
have been removed on the brick(s).
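
To see how the hard links pair up on a brick, a quick sketch (brick path and 
file name are hypothetical):

   ino=$(stat -c %i /bricks/b1/some/file)   # inode of a regular file on the brick
   find /bricks/b1/.glusterfs -inum "$ino"  # its GFID-named hard link

Both paths share one inode, which is why du only charges the space once when 
it traverses the whole brick in a single run.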


On Fri, Jan 3, 2014 at 7:45 AM, Dave Nirenberg
dave.nirenb...@verizon.netwrote:

 Hi All,

 This is probably a simple question, but I have been unable to find any
 official documentation on it yet:  what exactly does the .glusterfs
 directory do, and what controls how much disk space it uses?

 I am using 3.3 in my environment and the .glusterfs directory has grown to
 93GB, whereas the actual directory content is under 2GB total.  I got these
 disk usage stats using 'du'.  Does this sound right?

 Thanks in advance,
 Dave





-- 
Justin Dossey
CTO, PodOmatic
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster Community members visiting FOSDEM and DevConf in early February 2014

2014-01-03 Thread James
On Fri, Jan 3, 2014 at 10:40 AM, Niels de Vos nde...@redhat.com wrote:
 Hello all,

 on the weekend of 1-2 February FOSDEM[1] will take place in Brussels, Belgium.
 Several users and Gluster community members will be attending the event. Some
 of us are trying to arrange a Gluster Community table with some promotional
 materials, demo's, Q+A and face-to-face chats.

 If you are interested in joining us, please let us know by responding to this
 email with some details, or add your note to the TitanPad[2]. In case you want
 to discuss a specific topic or would like to see a certain GlusterFS
 use-case/application, tell us in advance and we'll try to be prepared for it.

I'd love to attend if someone can cover my travel expenses. I'm happy
to give a talk too. I've got all the Vagrant stuff mostly done, and I
should have a public release of it with some new puppet-gluster code
within the week.

Cheers,
James


 Part of the same group of people will be going to Brno, Czech Republic for
 DevConf [3] the weekend after FOSDEM. We might be able to arrange a workshop 
 or
 presentation there too. However, we would like to hear from others what they
 prefer to see/hear/do, so please post your wishes!

 I hope to see a bunch of you in Brussels,
 Niels


 [1] https://fosdem.org
 [2] http://titanpad.com/gluster-at-fosdem2014
 [3] http://devconf.cz
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Happy New Year! GlusterFS 3.4.2 is Out

2014-01-03 Thread John Mark Walker
Originally posted at
http://www.gluster.org/2014/01/happy-new-year-glusterfs-3-4-2-has-hit-store-shelves/

As we ring in the new year, we also ring in a new release, GlusterFS
3.4.2, available at your local download server
(http://www.gluster.org/download)!
This is a maintenance release, fixing a few bugs, which you can read
about in the release notes
(https://forge.gluster.org/gluster-docs-project/pages/GlusterFS_342_Release_Notes).

Download the new release: http://gluster.org/download

Release notes:
https://forge.gluster.org/gluster-docs-project/pages/GlusterFS_342_Release_Notes

In addition to bug fixes, you'll notice that we're welcoming new
distributions to the Gluster Community: OpenSuSE
(http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.2/OpenSuSE13.1/) and
SLES (http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.2/SLES11sp3/).
We hope our SuSE friends will try them and let us know how it goes. If
you're counting at home, that brings the number of supported platforms to
*9* on the client and server (NetBSD packages aren't on the download
server, because its users get updated GlusterFS builds via pkgsrc:
http://pkgsrc.se/filesystems/glusterfs).

The most notable bug fixes/changes:

   - Libgfapi support for Ganesha NFS integration
     (https://github.com/nfs-ganesha/nfs-ganesha/wiki/ReleaseNotes_2.0#wiki-GLUSTER)
   - Updating extras/Ubuntu with latest upstart configs
     (BUG 1047007: https://bugzilla.redhat.com/show_bug.cgi?id=1047007)
   - gfapi.py: support dynamic loading of versioned libraries
   - cluster/dht: Ignore ENOENT errors for unlink of linkfiles
   - cli: Throw a warning during replace-brick
   - mgmt/glusterd: Fix a memory leak in glusterd_is_local_addr()
   - glusterd: submit RPC requests without holding big lock
   - protocol/client: handle network disconnect/reconnect properly
   - gfapi: use native STACK_WIND for read _async() calls
   - mgmt/glusterd: add option to specify a different base-port
   - Disable eager-locks on NetBSD for 3.4 branch
   - nfs/mount3: fix crash in subdir resolution

Use and enjoy - and share with your friends.

Happy hacking,
JM, Gluster Community Leader
___
Announce mailing list
annou...@gluster.org
http://supercolony.gluster.org/mailman/listinfo/announce
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] fractured/split glusterfs - 2 up, 2 down for an hour

2014-01-03 Thread harry mangalam
This is a distributed-only glusterfs on 4 servers with 2 bricks each on an 
IPoIB network.

Thanks to a misconfigured autoupdate script, when 3.4.2 was released today, 
my gluster servers tried to update themselves.  2 succeeded but then failed 
to restart; the other 2 failed to update and kept running.

Not realizing the sequence of events, I restarted the 2 that had failed to 
restart, which gave my fs 2 servers running 3.4.1 and 2 running 3.4.2.  

When I realized this after about 30 minutes, I shut everything down, updated 
the 2 remaining servers to 3.4.2, and restarted.  But now I'm getting lots of 
reports of file errors of the type 'endpoints not connected' and the like:

[2014-01-04 01:31:18.593547] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-gl-client-2: remote operation failed: Transport endpoint is not connected. Path: /bio/fishm/test_cuffdiff.sh (00000000-0000-0000-0000-000000000000)
[2014-01-04 01:31:18.594928] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-gl-client-2: remote operation failed: Transport endpoint is not connected. Path: /bio/fishm/test_cuffdiff.sh (00000000-0000-0000-0000-000000000000)
[2014-01-04 01:31:18.595818] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-gl-client-2: remote operation failed: Transport endpoint is not connected. Path: /bio/fishm/.#test_cuffdiff.sh (14c3b612-e952-4aec-ae18-7f3dbb422dcc)
[2014-01-04 01:31:18.597381] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-gl-client-2: remote operation failed: Transport endpoint is not connected. Path: /bio/fishm/test_cuffdiff.sh (00000000-0000-0000-0000-000000000000)
[2014-01-04 01:31:18.598212] W [client-rpc-fops.c:814:client3_3_statfs_cbk] 0-gl-client-2: remote operation failed: Transport endpoint is not connected
[2014-01-04 01:31:18.598236] W [dht-diskusage.c:45:dht_du_info_cbk] 0-gl-dht: failed to get disk info from gl-client-2
[2014-01-04 01:31:19.912210] W [socket.c:514:__socket_rwv] 0-gl-client-2: readv failed (No data available)
[2014-01-04 01:31:22.912717] W [socket.c:514:__socket_rwv] 0-gl-client-2: readv failed (No data available)
[2014-01-04 01:31:25.913208] W [socket.c:514:__socket_rwv] 0-gl-client-2: readv failed (No data available)

The servers at the same time provided the following error 'E' messages:
Fri Jan 03 17:46:42 [0.20 0.12 0.13]  root@biostor1:~
1008 $ grep ' E ' /var/log/glusterfs/bricks/raid1.log |grep '2014-01-03' 
[2014-01-03 06:11:36.251786] E [server-helpers.c:751:server_alloc_frame] (-->/usr/lib64/libgfrpc.so.0(rpcsvc_notify+0x103) [0x3161e090d3] (-->/usr/lib64/libgfrpc.so.0(rpcsvc_handle_rpc_call+0x245) [0x3161e08f85] (-->/usr/lib64/glusterfs/3.4.1/xlator/protocol/server.so(server3_3_lookup+0xa0) [0x7fa60e577170]))) 0-server: invalid argument: conn
[2014-01-03 06:11:36.251813] E [rpcsvc.c:450:rpcsvc_check_and_reply_error] 0-rpcsvc: rpc actor failed to complete successfully
[2014-01-03 17:48:44.236127] E [rpc-transport.c:253:rpc_transport_load] 0-rpc-transport: /usr/lib64/glusterfs/3.4.1/rpc-transport/rdma.so: cannot open shared object file: No such file or directory
[2014-01-03 19:15:26.643378] E [rpc-transport.c:253:rpc_transport_load] 0-rpc-transport: /usr/lib64/glusterfs/3.4.2/rpc-transport/rdma.so: cannot open shared object file: No such file or directory


The missing/misbehaving files /are/ accessible on the individual bricks but 
not through gluster.

 This is a distributed-only setup, not replicated, so it seems like the 

gluster volume heal volume

is appropriate.

Do the gluster wizards agree? 
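
For reference, the basic health checks first (the volume name gl is taken 
from the client logs above):

   gluster peer status       # do all four servers still see each other?
   gluster volume status gl  # is every brick process online?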

---
Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
[m/c 2225] / 92697 Google Voice Multiplexer: (949) 478-4487
415 South Circle View Dr, Irvine, CA, 92697 [shipping]
MSTB Lat/Long: (33.642025,-117.844414) (paste into Google Maps)
---
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] How to tune glusterfs performance

2014-01-03 Thread 张兵
Hi all:

How can I tune glusterfs performance? Are there any effective parameters 
that can be optimized to improve write performance? The Linux kernel NFS 
server supports an asynchronous mode, which can improve performance. Can the 
GlusterFS native client be switched to an asynchronous mode?
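
A sketch of write-side knobs one might experiment with (volume name 
hypothetical; option defaults vary by release):

   gluster volume set gv0 performance.write-behind on
   gluster volume set gv0 performance.write-behind-window-size 4MB
   gluster volume set gv0 performance.io-thread-count 32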
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users