[Gluster-users] Community Meeting 2017-05-10

2017-05-18 Thread Kaushal M
Once again, I couldn't send out this mail quickly enough. Sorry for that.

The minutes and logs for this meeting are available at the links below.

Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-05-10/gluster_community_meeting_2017-05-10.2017-05-10-15.00.html
Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2017-05-10/gluster_community_meeting_2017-05-10.2017-05-10-15.00.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-05-10/gluster_community_meeting_2017-05-10.2017-05-10-15.00.log.html

The meeting pad has been archived at
https://github.com/gluster/glusterfs/wiki/Community-Meeting-2017-05-10
.

The next meeting is on 24th May. The meeting pad is available at
https://bit.ly/gluster-community-meetings to add updates and topics
for discussion.

~kaushal

======================================================
#gluster-meeting: Gluster Community Meeting 2017-05-10
======================================================


Meeting started by kshlm at 15:00:50 UTC. The full logs are available at
https://meetbot.fedoraproject.org/gluster-meeting/2017-05-10/gluster_community_meeting_2017-05-10.2017-05-10-15.00.log.html
.



Meeting summary
---------------
* Roll Call  (kshlm, 15:05:28)

* Github issues  (kshlm, 15:10:06)

* Coverity progress  (kshlm, 15:23:51)

* Good build?  (kshlm, 15:30:41)
  * LINK:
    https://software.intel.com/en-us/articles/intel-c-compiler-170-for-linux-release-notes-for-intel-parallel-studio-xe-2017
    (kkeithley, 15:38:55)

* External Monitoring of Gluster performance / metrics  (kshlm,
  15:40:32)

* What is the status on getting gluster-block into Fedora?  (kshlm,
  15:53:31)

Meeting ended at 16:04:13 UTC.




Action Items
------------





Action Items, by person
-----------------------
* **UNASSIGNED**
  * (none)




People Present (lines said)
---------------------------
* kshlm (85)
* JoeJulian (21)
* kkeithley (21)
* jdarcy (17)
* BatS9 (12)
* amye (12)
* vbellur (8)
* zodbot (5)
* sanoj (5)
* ndevos (4)
* rafi (1)




Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] increase cache-invalidation performance settings?

2017-05-18 Thread Dan Ragle
In setting up my v3.10.1 cluster I found that using the newer 
cache-invalidation features seemed to help performance in at least some 
ways, but I haven't seen a lot of discussion about them since their 
initial introduction.


features.cache-invalidation: on
features.cache-invalidation-timeout: 600
performance.cache-invalidation: on
performance.md-cache-timeout: 600
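(For reference, these were applied with the usual volume-set commands; the
volume name "myvol" below is just a placeholder, so treat this as a sketch
rather than an exact transcript:

# gluster volume set myvol features.cache-invalidation on
# gluster volume set myvol features.cache-invalidation-timeout 600
# gluster volume set myvol performance.cache-invalidation on
# gluster volume set myvol performance.md-cache-timeout 600

The values currently in effect can be double-checked with
"gluster volume get myvol all | grep -E 'cache-invalidation|md-cache'".)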

Has anyone noticed any gotchas in using these parameters? Also, has 
there been any talk of increasing the allowed values for the 
md-cache-timeout and cache-invalidation-timeout settings?


TIA,

Dan
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Failure while upgrading gluster to 3.10.1

2017-05-18 Thread Atin Mukherjee
On Thu, 18 May 2017 at 23:40, Atin Mukherjee  wrote:

> On Wed, 17 May 2017 at 12:47, Pawan Alwandi  wrote:
>
>> Hello Atin,
>>
>> I realized that these
>> http://gluster.readthedocs.io/en/latest/Upgrade-Guide/upgrade_to_3.10/
>> instructions only work for upgrades from 3.7, while we are running 3.6.2.
>> Are there instructions/suggestions you have for us to upgrade from
>> version 3.6?
>>
>> I believe upgrade from 3.6 to 3.7 and then to 3.10 would work, but I see
>> similar errors reported when I upgraded to 3.7 too.
>>
>> For what it's worth, I was able to set the op-version (gluster v set all
>> cluster.op-version 30702) but that doesn't seem to help.
>>
>> [2017-05-17 06:48:33.700014] I [MSGID: 100030] [glusterfsd.c:2338:main]
>> 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.7.20
>> (args: /usr/sbin/glusterd -p /var/run/glusterd.pid)
>> [2017-05-17 06:48:33.703808] I [MSGID: 106478] [glusterd.c:1383:init]
>> 0-management: Maximum allowed open file descriptors set to 65536
>> [2017-05-17 06:48:33.703836] I [MSGID: 106479] [glusterd.c:1432:init]
>> 0-management: Using /var/lib/glusterd as working directory
>> [2017-05-17 06:48:33.708866] W [MSGID: 103071]
>> [rdma.c:4594:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event
>> channel creation failed [No such device]
>> [2017-05-17 06:48:33.709011] W [MSGID: 103055] [rdma.c:4901:init]
>> 0-rdma.management: Failed to initialize IB Device
>> [2017-05-17 06:48:33.709033] W [rpc-transport.c:359:rpc_transport_load]
>> 0-rpc-transport: 'rdma' initialization failed
>> [2017-05-17 06:48:33.709088] W [rpcsvc.c:1642:rpcsvc_create_listener]
>> 0-rpc-service: cannot create listener, initing the transport failed
>> [2017-05-17 06:48:33.709105] E [MSGID: 106243] [glusterd.c:1656:init]
>> 0-management: creation of 1 listeners failed, continuing with succeeded
>> transport
>> [2017-05-17 06:48:35.480043] I [MSGID: 106513]
>> [glusterd-store.c:2068:glusterd_restore_op_version] 0-glusterd: retrieved
>> op-version: 30600
>> [2017-05-17 06:48:35.605779] I [MSGID: 106498]
>> [glusterd-handler.c:3640:glusterd_friend_add_from_peerinfo] 0-management:
>> connect returned 0
>> [2017-05-17 06:48:35.607059] I [rpc-clnt.c:1046:rpc_clnt_connection_init]
>> 0-management: setting frame-timeout to 600
>> [2017-05-17 06:48:35.607670] I [rpc-clnt.c:1046:rpc_clnt_connection_init]
>> 0-management: setting frame-timeout to 600
>> [2017-05-17 06:48:35.607025] I [MSGID: 106498]
>> [glusterd-handler.c:3640:glusterd_friend_add_from_peerinfo] 0-management:
>> connect returned 0
>> [2017-05-17 06:48:35.608125] I [MSGID: 106544]
>> [glusterd.c:159:glusterd_uuid_init] 0-management: retrieved UUID:
>> 7f2a6e11-2a53-4ab4-9ceb-8be6a9f2d073
>>
>
>> Final graph:
>>
>> +--+
>>   1: volume management
>>   2: type mgmt/glusterd
>>   3: option rpc-auth.auth-glusterfs on
>>   4: option rpc-auth.auth-unix on
>>   5: option rpc-auth.auth-null on
>>   6: option rpc-auth-allow-insecure on
>>   7: option transport.socket.listen-backlog 128
>>   8: option event-threads 1
>>   9: option ping-timeout 0
>>  10: option transport.socket.read-fail-log off
>>  11: option transport.socket.keepalive-interval 2
>>  12: option transport.socket.keepalive-time 10
>>  13: option transport-type rdma
>>  14: option working-directory /var/lib/glusterd
>>  15: end-volume
>>  16:
>>
>> +--+
>> [2017-05-17 06:48:35.609868] I [MSGID: 101190]
>> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread
>> with index 1
>> [2017-05-17 06:48:35.610839] W [socket.c:596:__socket_rwv] 0-management:
>> readv on 192.168.0.7:24007 failed (No data available)
>> [2017-05-17 06:48:35.611907] E [rpc-clnt.c:370:saved_frames_unwind] (-->
>> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x1a3)[0x7fd6c2d70bb3]
>> (-->
>> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(saved_frames_unwind+0x1cf)[0x7fd6c2b3a2df]
>> (-->
>> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd6c2b3a3fe]
>> (-->
>> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x89)[0x7fd6c2b3ba39]
>> (-->
>> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_notify+0x160)[0x7fd6c2b3c380]
>> ) 0-management: forced unwinding frame type(GLUSTERD-DUMP) op(DUMP(1))
>> called at 2017-05-17 06:48:35.609965 (xid=0x1)
>> [2017-05-17 06:48:35.611928] E [MSGID: 106167]
>> [glusterd-handshake.c:2091:__glusterd_peer_dump_version_cbk] 0-management:
>> Error through RPC layer, retry again later
>> [2017-05-17 06:48:35.611944] I [MSGID: 106004]
>> [glusterd-handler.c:5201:__glusterd_peer_rpc_notify] 0-management: Peer
>> <192.168.0.7> (<5ec54b4f-f60c-48c6-9e55-95f2bb58f633>), in state <Peer in Cluster>, has disconnected from glusterd.
>> [2017-05-17 06:48:35.612024] W
>> 

Re: [Gluster-users] Failure while upgrading gluster to 3.10.1

2017-05-18 Thread Atin Mukherjee
On Wed, 17 May 2017 at 12:47, Pawan Alwandi  wrote:

> Hello Atin,
>
> I realized that these
> http://gluster.readthedocs.io/en/latest/Upgrade-Guide/upgrade_to_3.10/
> instructions only work for upgrades from 3.7, while we are running 3.6.2.
> Are there instructions/suggestions you have for us to upgrade from
> version 3.6?
>
> I believe upgrade from 3.6 to 3.7 and then to 3.10 would work, but I see
> similar errors reported when I upgraded to 3.7 too.
>
> For what it's worth, I was able to set the op-version (gluster v set all
> cluster.op-version 30702) but that doesn't seem to help.
>
> [2017-05-17 06:48:33.700014] I [MSGID: 100030] [glusterfsd.c:2338:main]
> 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.7.20
> (args: /usr/sbin/glusterd -p /var/run/glusterd.pid)
> [2017-05-17 06:48:33.703808] I [MSGID: 106478] [glusterd.c:1383:init]
> 0-management: Maximum allowed open file descriptors set to 65536
> [2017-05-17 06:48:33.703836] I [MSGID: 106479] [glusterd.c:1432:init]
> 0-management: Using /var/lib/glusterd as working directory
> [2017-05-17 06:48:33.708866] W [MSGID: 103071]
> [rdma.c:4594:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event
> channel creation failed [No such device]
> [2017-05-17 06:48:33.709011] W [MSGID: 103055] [rdma.c:4901:init]
> 0-rdma.management: Failed to initialize IB Device
> [2017-05-17 06:48:33.709033] W [rpc-transport.c:359:rpc_transport_load]
> 0-rpc-transport: 'rdma' initialization failed
> [2017-05-17 06:48:33.709088] W [rpcsvc.c:1642:rpcsvc_create_listener]
> 0-rpc-service: cannot create listener, initing the transport failed
> [2017-05-17 06:48:33.709105] E [MSGID: 106243] [glusterd.c:1656:init]
> 0-management: creation of 1 listeners failed, continuing with succeeded
> transport
> [2017-05-17 06:48:35.480043] I [MSGID: 106513]
> [glusterd-store.c:2068:glusterd_restore_op_version] 0-glusterd: retrieved
> op-version: 30600
> [2017-05-17 06:48:35.605779] I [MSGID: 106498]
> [glusterd-handler.c:3640:glusterd_friend_add_from_peerinfo] 0-management:
> connect returned 0
> [2017-05-17 06:48:35.607059] I [rpc-clnt.c:1046:rpc_clnt_connection_init]
> 0-management: setting frame-timeout to 600
> [2017-05-17 06:48:35.607670] I [rpc-clnt.c:1046:rpc_clnt_connection_init]
> 0-management: setting frame-timeout to 600
> [2017-05-17 06:48:35.607025] I [MSGID: 106498]
> [glusterd-handler.c:3640:glusterd_friend_add_from_peerinfo] 0-management:
> connect returned 0
> [2017-05-17 06:48:35.608125] I [MSGID: 106544]
> [glusterd.c:159:glusterd_uuid_init] 0-management: retrieved UUID:
> 7f2a6e11-2a53-4ab4-9ceb-8be6a9f2d073
>
> Final graph:
>
> +--+
>   1: volume management
>   2: type mgmt/glusterd
>   3: option rpc-auth.auth-glusterfs on
>   4: option rpc-auth.auth-unix on
>   5: option rpc-auth.auth-null on
>   6: option rpc-auth-allow-insecure on
>   7: option transport.socket.listen-backlog 128
>   8: option event-threads 1
>   9: option ping-timeout 0
>  10: option transport.socket.read-fail-log off
>  11: option transport.socket.keepalive-interval 2
>  12: option transport.socket.keepalive-time 10
>  13: option transport-type rdma
>  14: option working-directory /var/lib/glusterd
>  15: end-volume
>  16:
>
> +--+
> [2017-05-17 06:48:35.609868] I [MSGID: 101190]
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 1
> [2017-05-17 06:48:35.610839] W [socket.c:596:__socket_rwv] 0-management:
> readv on 192.168.0.7:24007 failed (No data available)
> [2017-05-17 06:48:35.611907] E [rpc-clnt.c:370:saved_frames_unwind] (-->
> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x1a3)[0x7fd6c2d70bb3]
> (-->
> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(saved_frames_unwind+0x1cf)[0x7fd6c2b3a2df]
> (-->
> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd6c2b3a3fe]
> (-->
> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x89)[0x7fd6c2b3ba39]
> (-->
> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_notify+0x160)[0x7fd6c2b3c380]
> ) 0-management: forced unwinding frame type(GLUSTERD-DUMP) op(DUMP(1))
> called at 2017-05-17 06:48:35.609965 (xid=0x1)
> [2017-05-17 06:48:35.611928] E [MSGID: 106167]
> [glusterd-handshake.c:2091:__glusterd_peer_dump_version_cbk] 0-management:
> Error through RPC layer, retry again later
> [2017-05-17 06:48:35.611944] I [MSGID: 106004]
> [glusterd-handler.c:5201:__glusterd_peer_rpc_notify] 0-management: Peer
> <192.168.0.7> (<5ec54b4f-f60c-48c6-9e55-95f2bb58f633>), in state <Peer in Cluster>, has disconnected from glusterd.
> [2017-05-17 06:48:35.612024] W
> [glusterd-locks.c:681:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.7.20/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4b)
> [0x7fd6bdc4912b]
> 

Re: [Gluster-users] URGENT - Cheat on quorum

2017-05-18 Thread lemonnierk
> If you know what you are getting into, then `gluster v set  
> cluster.quorum-type none` should give you the desired result, i.e. allow 
> write access to the volume.

Thanks a lot! We won't be needing it now, but I'll write that in the wiki
just in case.

We realised that the problem was the CacheCade SSDs; they all died together
on every node. But that makes sense, since they were used in exactly the same
way through gluster.
We've disabled those for now, and once the heal finishes we'll replace them
one by one.
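
(In case it helps anyone searching the archives later: heal progress can be
watched with commands along these lines -- VOLNAME is a placeholder:

# gluster volume heal VOLNAME info
# gluster volume heal VOLNAME statistics heal-count

The first lists the entries still pending heal per brick, the second just
prints the pending counts.)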

signature.asc
Description: Digital signature
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] URGENT - Cheat on quorum

2017-05-18 Thread Ravishankar N

On 05/18/2017 07:18 PM, lemonni...@ulrar.net wrote:

> Hi,
>
> We are having huge hardware issues (oh joy ..) with RAID cards.
> On a replica 3 volume, we have 2 nodes down. Can we somehow tell
> gluster that its quorum is 1, to get some amount of service back
> while we try to fix the other nodes or install new ones?
If you know what you are getting into, then `gluster v set  
cluster.quorum-type none` should give you the desired result, i.e. allow 
write access to the volume.
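
A rough sketch of that, including putting the quorum back once the other
nodes are healthy again (VOLNAME is a placeholder):

# gluster volume set VOLNAME cluster.quorum-type none
  ... run degraded, repair/replace the failed nodes, let heal finish ...
# gluster volume set VOLNAME cluster.quorum-type auto

'auto' is the usual client-quorum setting for replica 3, so re-enabling it
once the peers are back avoids leaving the volume exposed to split-brain.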

> Thanks


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] URGENT - Cheat on quorum

2017-05-18 Thread lemonnierk
Hi,


We are having huge hardware issues (oh joy ..) with RAID cards.
On a replica 3 volume, we have 2 nodes down. Can we somehow tell
gluster that its quorum is 1, to get some amount of service back
while we try to fix the other nodes or install new ones?

Thanks


signature.asc
Description: Digital signature
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] 120k context switches on GlsuterFS nodes

2017-05-18 Thread Joe Julian
On the other hand,  tracking that stat between versions with a known test 
sequence may be valuable for watching for performance issues or improvements. 
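
A rough sketch of capturing both sides for such a run (volume name is a
placeholder):

# gluster volume profile VOLNAME start
  (run the known test sequence while logging "vmstat 5" output)
# gluster volume profile VOLNAME info > profile-after-test.txt
# gluster volume profile VOLNAME stop

The profile info gives per-FOP latency and call counts that can then be
lined up against the context-switch numbers from vmstat.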

On May 17, 2017 10:03:28 PM PDT, Ravishankar N  wrote:
>On 05/17/2017 11:07 PM, Pranith Kumar Karampuri wrote:
>> + gluster-devel
>>
>> On Wed, May 17, 2017 at 10:50 PM, mabi wrote:
>>
>> I don't know exactly what kind of context-switches it was but what
>> I know is that it is the "cs" number under "system" when you run
>> vmstat.
>>
>Okay, that could be due to the syscalls themselves or pre-emptive
>multitasking in case there aren't enough cpu cores. I think the spike in
>numbers is due to more users accessing the files at the same time, as
>you observed, translating into more syscalls. You can try capturing the
>gluster volume profile info the next time it occurs and correlate it with
>the cs count. If you don't see any negative performance impact, I think
>you don't need to be bothered much by the numbers.
>
>HTH,
>Ravi
>>
>>
>> Also I use the percona linux monitoring template for cacti
>> (https://www.percona.com/doc/percona-monitoring-plugins/LATEST/cacti/linux-templates.html)
>> which monitors context switches too. If that's of any use, interrupts
>> were also quite high during that time, with peaks up to 50k interrupts.
>>
>>
>>
>>>  Original Message 
>>> Subject: Re: [Gluster-users] 120k context switches on GlsuterFS nodes
>>> Local Time: May 17, 2017 2:37 AM
>>> UTC Time: May 17, 2017 12:37 AM
>>> From: ravishan...@redhat.com
>>> To: mabi, Gluster Users
>>>
>>>
>>> On 05/16/2017 11:13 PM, mabi wrote:
 Today I even saw up to 400k context switches for around 30
 minutes on my two nodes replica... Does anyone else have so high
 context switches on their GlusterFS nodes?

 I am wondering what is "normal" and if I should be worried...




>  Original Message 
> Subject: 120k context switches on GlsuterFS nodes
> Local Time: May 11, 2017 9:18 PM
> UTC Time: May 11, 2017 7:18 PM
> From: m...@protonmail.ch 
> To: Gluster Users 
> 
>
> Hi,
>
> Today I noticed that for around 50 minutes my two GlusterFS
> 3.8.11 nodes had a very high amount of context switches, around
> 120k. Usually the average is more around 1k-2k. So I checked
> what was happening and there were just more users accessing
> (downloading) their files at the same time. These are
> directories with typical cloud files, which means files of any
> size ranging from a few kB to several MB, and a lot of them, of course.
>
> Now I have never seen such a high number of context switches in my
> entire life, so I wanted to ask whether this is normal or to be
> expected. I do not find any signs of errors or warnings in any
> log files.
>
>>>
>>> What context switch are you referring to (syscalls context-switch
>>> on the bricks)? How did you measure this?
>>> -Ravi
>>>
> My volume is a replicated volume on two nodes with ZFS as
> filesystem behind and the volume is mounted using FUSE on the
> client (the cloud server). On that cloud server the glusterfs
> process was using quite a lot of system CPU but that server
> (VM) only has 2 vCPUs so maybe I should increase the number of
> vCPUs...
>
> Any ideas or recommendations?
>
>
>
> Regards,
> M.



 ___
 Gluster-users mailing list
 Gluster-users@gluster.org 
 http://lists.gluster.org/mailman/listinfo/gluster-users
 
>>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>> -- 
>> Pranith

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Can't add-brick to an encrypted volume without the master key

2017-05-18 Thread Mark Wadham

Hi,

I followed this guide for setting up an encrypted volume:

https://github.com/gluster/glusterfs-specs/blob/master/done/GlusterFS%203.5/Disk%20Encryption.md

I started with 3 nodes (EC2) and this all worked fine.  My understanding 
from the article is that the master key does not need to be present on 
the glusterfs nodes, and as such is only known to the client machines.


My issue comes when trying to make this solution resilient: after 
terminating a node and having it respawned by the ASG, I’m then apparently 
unable to add the brick from the new node into the existing volume.


It fails with:

# gluster volume add-brick gv0 replica 3 glusterfs2:/data/brick/gv0
volume add-brick: failed: Commit failed on glusterfs2. Please check log 
file for details.


The log on glusterfs2 shows:

# cat gv0-add-brick-mount.log
[2017-05-18 07:55:24.712211] I [MSGID: 100030] [glusterfsd.c:2454:main] 
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 
3.8.11 (args: /usr/sbin/glusterfs --volfile /tmp/gv0.tcp-fuse.vol 
--client-pid -6 -l /var/log/glusterfs/gv0-add-brick-mount.log 
/tmp/mntnwGall)
[2017-05-18 07:55:24.891563] E [crypt.c:4306:master_set_master_vol_key] 
0-gv0-crypt: FATAL: can not open file with master key
[2017-05-18 07:55:24.891591] E [MSGID: 101019] 
[xlator.c:433:xlator_init] 0-gv0-crypt: Initialization of volume 
'gv0-crypt' failed, review your volfile again
[2017-05-18 07:55:24.891603] E [MSGID: 101066] 
[graph.c:324:glusterfs_graph_init] 0-gv0-crypt: initializing translator 
failed
[2017-05-18 07:55:24.891608] E [MSGID: 101176] 
[graph.c:673:glusterfs_graph_activate] 0-graph: init failed
[2017-05-18 07:55:24.891987] W [glusterfsd.c:1327:cleanup_and_exit] 
(-->/usr/sbin/glusterfs(glusterfs_volumes_init+0xfd) [0x7fead0b9e72d] 
-->/usr/sbin/glusterfs(glusterfs_process_volfp+0x172) [0x7fead0b9e5d2] 
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7fead0b9db4b] ) 0-: 
received signum (1), shutting down
[2017-05-18 07:55:24.892018] I [fuse-bridge.c:5788:fini] 0-fuse: 
Unmounting '/tmp/mntnwGall'.
[2017-05-18 07:55:24.893023] W [glusterfsd.c:1327:cleanup_and_exit] 
(-->/lib64/libpthread.so.0(+0x7dc5) [0x7feacf509dc5] 
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7fead0b9dcd5] 
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7fead0b9db4b] ) 0-: 
received signum (15), shutting down



This seems to suggest that the master key needs to be present on the 
glusterfs nodes themselves in order to add a brick, but this wasn’t the 
case when I set the cluster up. When I set it up, though, I did create the 
volume before enabling encryption.


What’s going on here?  Do the glusterfs nodes actually need the master 
key in order to work?


Thanks,
Mark
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS+heketi+Kubernetes snapshots fail

2017-05-18 Thread Mohammed Rafi K C


On 05/18/2017 10:04 AM, Pranith Kumar Karampuri wrote:
> +Snapshot maintainer. I think he is away for a week or so. You may
> have to wait a bit more.
>
> On Wed, May 10, 2017 at 2:39 AM, Chris Jones wrote:
>
> Hi All,
>
> This was discussed briefly on IRC, but got no resolution. I have a
> Kubernetes cluster running heketi and GlusterFS 3.10.1. When I try
> to create a snapshot, I get:
>
> snapshot create: failed: Commit failed on localhost. Please check
> log file for details.
>
> glusterd log: http://termbin.com/r8s3
>

I'm not able to open the URL. Could you please paste it on a different
domain?
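
In the meantime, one generic thing worth ruling out (not a conclusion from
your logs, just a common cause of "Commit failed" on snapshot create): every
brick has to live on a thinly provisioned LV. Something like this on each
node shows whether that is the case:

# lvs -o lv_name,vg_name,pool_lv,data_percent

Any brick LV with an empty pool_lv column is not thin-provisioned and can't
be snapshotted by gluster.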

> brick log: http://termbin.com/l0ya
>
> lvs output: http://termbin.com/bwug
>
> "gluster snapshot config" output: http://termbin.com/4t1k
>
> As you can see, there's not a lot of helpful output in the log
> files. I'd be grateful if somebody could help me interpret what's
> there.
>
> Chris
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org 
> http://lists.gluster.org/mailman/listinfo/gluster-users
> 
>
>
>
>
> -- 
> Pranith
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users