Re: [Gluster-users] Kubernetes v1.7.3 and GlusterFS Plugin

2017-08-10 Thread Christopher Schmidt
Hi, does anyone have a working way to install GlusterFS 3.11 on CentOS?

(e.g., centos-release-gluster311 does not exist)
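
For the archives: a sketch of one way to get packages at the time, using the
CentOS Storage SIG (the 3.11 repo URL below follows the usual SIG buildlogs
layout and is an assumption, not something verified in this thread):

```
# option 1: stay on the SIG release stream that has a release package (3.10)
yum -y install centos-release-gluster310
yum -y install glusterfs-fuse

# option 2: point yum at the Storage SIG build repo for 3.11 by hand
cat > /etc/yum.repos.d/gluster-311.repo <<'EOF'
[centos-gluster311-test]
name=CentOS-7 - Gluster 3.11 (test)
baseurl=https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.11/
gpgcheck=0
enabled=1
EOF
yum -y install glusterfs-fuse
```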

Christopher Schmidt <fakod...@gmail.com> wrote on Thu., 10 Aug. 2017 at
20:01:

> Yes, I tried to, but I didn't find a 3.11 centos-release-gluster package
> for CentOS.
>
> Humble Devassy Chirammal <humble.deva...@gmail.com> wrote on Thu., 10
> Aug. 2017, 19:17:
>
>> As another solution: if you update the system where you run the
>> application container to the latest glusterfs (3.11), this will be fixed
>> as well, since that version supports this mount option.
>>
>> --Humble
>>
>>
>> On Thu, Aug 10, 2017 at 10:39 PM, Christopher Schmidt <fakod...@gmail.com
>> > wrote:
>>
>>> Ok, thanks.
>>>
>>> Humble Devassy Chirammal <humble.deva...@gmail.com> wrote on Thu., 10
>>> Aug. 2017, 19:04:
>>>
>>>> On Thu, Aug 10, 2017 at 10:25 PM, Christopher Schmidt <
>>>> fakod...@gmail.com> wrote:
>>>>
>>>>> Just created the container from here:
>>>>> https://github.com/gluster/gluster-containers/tree/master/CentOS
>>>>>
>>>>> And used stock Kubernetes 1.7.3, hence the included volume plugin and
>>>>> Heketi version 4.
>>>>>
>>>>>
>>>> Regardless of the glusterfs client version, this is supposed to work.
>>>> One patch went into the 1.7.3 tree that could have broken it.
>>>> I am checking on it and will get back as soon as I have an update.
>>>>
>>>>
>>>>
>>>>> Humble Devassy Chirammal <humble.deva...@gmail.com> wrote on Thu.,
>>>>> 10 Aug. 2017, 18:49:
>>>>>
>>>>>> Thanks. It's the same option. Can you let me know your glusterfs
>>>>>> client package version?
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Aug 10, 2017 at 8:34 PM, Christopher Schmidt <
>>>>>> fakod...@gmail.com> wrote:
>>>>>>
>>>>>>>
>>>>>>> A short excerpt from kubectl describe pod ...
>>>>>>>
>>>>>>> Events:
>>>>>>>   FirstSeen LastSeen Count From SubObjectPath Type Reason Message
>>>>>>>   --------- -------- ----- ---- ------------- ---- ------ -------
>>>>>>>   5h 54s 173 kubelet, k8s-bootcamp-rbac-np-worker-6263f70 Warning
>>>>>>> FailedMount (combined from similar events): MountVolume.SetUp
>>>>>>> failed for volume "pvc-fa4b2621-7dad-11e7-8a44-062df200059f" : 
>>>>>>> glusterfs:
>>>>>>> mount failed: mount failed: exit status 1
>>>>>>> Mounting command: mount
>>>>>>> Mounting arguments: 159.100.242.235:vol_7a312f660490387c94cfaf84bce81bca
>>>>>>> /var/lib/kubelet/pods/fa4c6540-7dad-11e7-8a44-062df200059f/volumes/
>>>>>>> kubernetes.io~glusterfs/pvc-fa4b2621-7dad-11e7-8a44-062df200059f
>>>>>>> glusterfs [auto_unmount log-level=ERROR 
>>>>>>> log-file=/var/lib/kubelet/plugins/
>>>>>>> kubernetes.io/glusterfs/pvc-fa4b2621-7dad-11e7-8a44-062df200059f/es-data-log-distributed-0-glusterfs.log
>>>>>>> backup-volfile-servers=159.100.240.237:159.100.242.156
>>>>>>> :159.100.242.235]
>>>>>>> Output: /usr/bin/fusermount-glusterfs: mount failed: Invalid argument
>>>>>>> Mount failed. Please check the log file for more details.
>>>>>>>
>>>>>>>  the following error information was pulled from the glusterfs log
>>>>>>> to help diagnose this issue:
>>>>>>> [2017-08-10 15:02:12.260168] W [glusterfsd.c:1095:cleanup_and_exit]
>>>>>>> (-->/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f3e2477c62d]
>>>>>>> (-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x8064) [0x7f3e24e43064]
>>>>>>> (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x4073bd]))) 0-:
>>>>>>> received signum (15), shutting down
>>>>>>> [2017-08-10 15:02:12.260189] I [fuse-bridge.c:5475:fini] 0-fuse:
>>>>>>> Unmounting
>>>>>>> '/var/lib/kubelet/pods/fa4c6540-7dad-11e7-8a44-062df200059f/volumes/
>>>>>>> kubernetes.io~glusterfs/pvc-fa4b2621-7dad-11e7-8a44-062df200059f'.
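
As later replies in this thread suggest, the "Invalid argument" above is
consistent with a FUSE client that does not understand the auto_unmount
option (new in glusterfs 3.11). A quick check on the affected node; a
sketch, reusing the volume from the log above:

```
# which client does kubelet actually invoke?
glusterfs --version

# reproduce the mount outside of kubelet; if it fails the same way,
# the client predates auto_unmount
mkdir -p /mnt/glustertest
mount -t glusterfs -o auto_unmount,log-level=ERROR \
  159.100.242.235:vol_7a312f660490387c94cfaf84bce81bca /mnt/glustertest

# the same mount without auto_unmount should succeed on an older client
mount -t glusterfs \
  159.100.242.235:vol_7a312f660490387c94cfaf84bce81bca /mnt/glustertest
```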

Re: [Gluster-users] Kubernetes v1.7.3 and GlusterFS Plugin

2017-08-10 Thread Christopher Schmidt
Yes, I tried to, but I didn't find a 3.11 centos-release-gluster package
for CentOS.

Humble Devassy Chirammal <humble.deva...@gmail.com> wrote on Thu., 10
Aug. 2017, 19:17:

> As another solution: if you update the system where you run the
> application container to the latest glusterfs (3.11), this will be fixed
> as well, since that version supports this mount option.
>
> --Humble
>
>
> On Thu, Aug 10, 2017 at 10:39 PM, Christopher Schmidt <fakod...@gmail.com>
> wrote:
>
>> Ok, thanks.
>>
>> Humble Devassy Chirammal <humble.deva...@gmail.com> wrote on Thu., 10
>> Aug. 2017, 19:04:
>>
>>> On Thu, Aug 10, 2017 at 10:25 PM, Christopher Schmidt <
>>> fakod...@gmail.com> wrote:
>>>
>>>> Just created the container from here:
>>>> https://github.com/gluster/gluster-containers/tree/master/CentOS
>>>>
>>>> And used stock Kubernetes 1.7.3, hence the included volume plugin and
>>>> Heketi version 4.
>>>>
>>>>
>>> Regardless of the glusterfs client version, this is supposed to work. One
>>> patch went into the 1.7.3 tree that could have broken it.
>>> I am checking on it and will get back as soon as I have an update.
>>>
>>>
>>>
>>>> Humble Devassy Chirammal <humble.deva...@gmail.com> wrote on Thu.,
>>>> 10 Aug. 2017, 18:49:
>>>>
>>>>> Thanks. It's the same option. Can you let me know your glusterfs
>>>>> client package version?
>>>>>
>>>>>
>>>>>
>>>>> On Thu, Aug 10, 2017 at 8:34 PM, Christopher Schmidt <
>>>>> fakod...@gmail.com> wrote:
>>>>>
>>>>>>
>>>>>> A short excerpt from kubectl describe pod ...
>>>>>>
>>>>>> Events:
>>>>>>   FirstSeen LastSeen Count From SubObjectPath Type Reason Message
>>>>>>   --------- -------- ----- ---- ------------- ---- ------ -------
>>>>>>   5h 54s 173 kubelet, k8s-bootcamp-rbac-np-worker-6263f70 Warning
>>>>>> FailedMount (combined from similar events): MountVolume.SetUp failed
>>>>>> for volume "pvc-fa4b2621-7dad-11e7-8a44-062df200059f" : glusterfs: mount
>>>>>> failed: mount failed: exit status 1
>>>>>> Mounting command: mount
>>>>>> Mounting arguments: 159.100.242.235:vol_7a312f660490387c94cfaf84bce81bca
>>>>>> /var/lib/kubelet/pods/fa4c6540-7dad-11e7-8a44-062df200059f/volumes/
>>>>>> kubernetes.io~glusterfs/pvc-fa4b2621-7dad-11e7-8a44-062df200059f
>>>>>> glusterfs [auto_unmount log-level=ERROR 
>>>>>> log-file=/var/lib/kubelet/plugins/
>>>>>> kubernetes.io/glusterfs/pvc-fa4b2621-7dad-11e7-8a44-062df200059f/es-data-log-distributed-0-glusterfs.log
>>>>>> backup-volfile-servers=159.100.240.237:159.100.242.156
>>>>>> :159.100.242.235]
>>>>>> Output: /usr/bin/fusermount-glusterfs: mount failed: Invalid argument
>>>>>> Mount failed. Please check the log file for more details.
>>>>>>
>>>>>>  the following error information was pulled from the glusterfs log to
>>>>>> help diagnose this issue:
>>>>>> [2017-08-10 15:02:12.260168] W [glusterfsd.c:1095:cleanup_and_exit]
>>>>>> (-->/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f3e2477c62d]
>>>>>> (-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x8064) [0x7f3e24e43064]
>>>>>> (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x4073bd]))) 0-:
>>>>>> received signum (15), shutting down
>>>>>> [2017-08-10 15:02:12.260189] I [fuse-bridge.c:5475:fini] 0-fuse:
>>>>>> Unmounting
>>>>>> '/var/lib/kubelet/pods/fa4c6540-7dad-11e7-8a44-062df200059f/volumes/
>>>>>> kubernetes.io~glusterfs/pvc-fa4b2621-7dad-11e7-8a44-062df200059f'.
>>>>>>
>>>>>>   5h 32s 151 kubelet, k8s-bootcamp-rbac-np-worker-6263f70 Warning
>>>>>> FailedMount Unable to mount volumes for pod
>>>>>> "es-data-log-distributed-0_monitoring(fa4c6540-7dad-11e7-8a44-062df200059f)":
>>>>>> timeout expired waiting for volumes to attach/mount for pod
>>>>>> "monitoring"/"es-data-log-distributed-0". list of unattached/unmounted
>>>>>> volumes=[es-data]
>>>>>>   5h 32s 163 kubelet, k8s-bootcamp-r

Re: [Gluster-users] Kubernetes v1.7.3 and GlusterFS Plugin

2017-08-10 Thread Christopher Schmidt
Ok, thanks.

Humble Devassy Chirammal <humble.deva...@gmail.com> wrote on Thu., 10
Aug. 2017, 19:04:

> On Thu, Aug 10, 2017 at 10:25 PM, Christopher Schmidt <fakod...@gmail.com>
> wrote:
>
>> Just created the container from here:
>> https://github.com/gluster/gluster-containers/tree/master/CentOS
>>
>> And used stock Kubernetes 1.7.3, hence the included volume plugin and
>> Heketi version 4.
>>
>>
> Regardless of the glusterfs client version, this is supposed to work. One
> patch went into the 1.7.3 tree that could have broken it.
> I am checking on it and will get back as soon as I have an update.
>
>
>
>> Humble Devassy Chirammal <humble.deva...@gmail.com> wrote on Thu., 10
>> Aug. 2017, 18:49:
>>
>>> Thanks. It's the same option. Can you let me know your glusterfs
>>> client package version?
>>>
>>>
>>>
>>> On Thu, Aug 10, 2017 at 8:34 PM, Christopher Schmidt <fakod...@gmail.com
>>> > wrote:
>>>
>>>>
>>>> A short excerpt from kubectl describe pod ...
>>>>
>>>> Events:
>>>>   FirstSeen LastSeen Count From SubObjectPath Type Reason Message
>>>>   --------- -------- ----- ---- ------------- ---- ------ -------
>>>>   5h 54s 173 kubelet, k8s-bootcamp-rbac-np-worker-6263f70 Warning
>>>> FailedMount (combined from similar events): MountVolume.SetUp failed
>>>> for volume "pvc-fa4b2621-7dad-11e7-8a44-062df200059f" : glusterfs: mount
>>>> failed: mount failed: exit status 1
>>>> Mounting command: mount
>>>> Mounting arguments: 159.100.242.235:vol_7a312f660490387c94cfaf84bce81bca
>>>> /var/lib/kubelet/pods/fa4c6540-7dad-11e7-8a44-062df200059f/volumes/
>>>> kubernetes.io~glusterfs/pvc-fa4b2621-7dad-11e7-8a44-062df200059f
>>>> glusterfs [auto_unmount log-level=ERROR log-file=/var/lib/kubelet/plugins/
>>>> kubernetes.io/glusterfs/pvc-fa4b2621-7dad-11e7-8a44-062df200059f/es-data-log-distributed-0-glusterfs.log
>>>> backup-volfile-servers=159.100.240.237:159.100.242.156:159.100.242.235]
>>>> Output: /usr/bin/fusermount-glusterfs: mount failed: Invalid argument
>>>> Mount failed. Please check the log file for more details.
>>>>
>>>>  the following error information was pulled from the glusterfs log to
>>>> help diagnose this issue:
>>>> [2017-08-10 15:02:12.260168] W [glusterfsd.c:1095:cleanup_and_exit]
>>>> (-->/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f3e2477c62d]
>>>> (-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x8064) [0x7f3e24e43064]
>>>> (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x4073bd]))) 0-:
>>>> received signum (15), shutting down
>>>> [2017-08-10 15:02:12.260189] I [fuse-bridge.c:5475:fini] 0-fuse:
>>>> Unmounting
>>>> '/var/lib/kubelet/pods/fa4c6540-7dad-11e7-8a44-062df200059f/volumes/
>>>> kubernetes.io~glusterfs/pvc-fa4b2621-7dad-11e7-8a44-062df200059f'.
>>>>
>>>>   5h 32s 151 kubelet, k8s-bootcamp-rbac-np-worker-6263f70 Warning
>>>> FailedMount Unable to mount volumes for pod
>>>> "es-data-log-distributed-0_monitoring(fa4c6540-7dad-11e7-8a44-062df200059f)":
>>>> timeout expired waiting for volumes to attach/mount for pod
>>>> "monitoring"/"es-data-log-distributed-0". list of unattached/unmounted
>>>> volumes=[es-data]
>>>>   5h 32s 163 kubelet, k8s-bootcamp-rbac-np-worker-6263f70 Warning
>>>> FailedSync Error syncing pod
>>>>
>>>> Humble Devassy Chirammal <humble.deva...@gmail.com> wrote on Thu.,
>>>> 10 Aug. 2017 at 16:55:
>>>>
>>>>> Are you seeing an issue or an error message which says the auto_unmount
>>>>> option is not valid?
>>>>> Can you please let me know the issue you are seeing with 1.7.3?
>>>>>
>>>>> --Humble
>>>>>
>>>>>
>>>>> On Thu, Aug 10, 2017 at 3:33 PM, Christopher Schmidt <
>>>>> fakod...@gmail.com> wrote:
>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> I am testing K8s 1.7.3 together with GlusterFS and have some issues.
>>>>>>
>>>>>> Is this correct?
>>>>>> - Kubernetes v1.7.3 ships with a GlusterFS plugin that depends on
>>>>>> GlusterFS 3.11.
>>>>>> - GlusterFS 3.11 is not recommended for production, so 3.10 should be
>>>>>> used.
>>>>>>
>>>>>> This effectively means I cannot use any K8s 1.7.x version, right?
>>>>>> Or is there anything else I could do?
>>>>>>
>>>>>> best Christopher
>>>>>>
>>>>>>
>>>>>>
>>>>>> ___
>>>>>> Gluster-users mailing list
>>>>>> Gluster-users@gluster.org
>>>>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>>>>>
>>>>>
>>>>>
>>>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Kubernetes v1.7.3 and GlusterFS Plugin

2017-08-10 Thread Christopher Schmidt
Just created the container from here:
https://github.com/gluster/gluster-containers/tree/master/CentOS

And used stock Kubernetes 1.7.3, hence the included volume plugin and
Heketi version 4.
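
One way to confirm which bits are actually in play; a sketch, with the pod
name and namespace as placeholders:

```
# server side: inside the gluster pod built from gluster-containers
kubectl -n <namespace> exec -it <gluster-pod> -- gluster --version

# client side: on the node where the application pod is scheduled,
# since kubelet invokes that node's own mount.glusterfs
glusterfs --version
```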

Humble Devassy Chirammal <humble.deva...@gmail.com> wrote on Thu., 10
Aug. 2017, 18:49:

> Thanks. It's the same option. Can you let me know your glusterfs client
> package version?
>
>
>
> On Thu, Aug 10, 2017 at 8:34 PM, Christopher Schmidt <fakod...@gmail.com>
> wrote:
>
>>
>> A short excerpt from kubectl describe pod ...
>>
>> Events:
>>   FirstSeen LastSeen Count From SubObjectPath Type Reason Message
>>   --------- -------- ----- ---- ------------- ---- ------ -------
>>   5h 54s 173 kubelet, k8s-bootcamp-rbac-np-worker-6263f70 Warning
>> FailedMount (combined from similar events): MountVolume.SetUp failed for
>> volume "pvc-fa4b2621-7dad-11e7-8a44-062df200059f" : glusterfs: mount
>> failed: mount failed: exit status 1
>> Mounting command: mount
>> Mounting arguments: 159.100.242.235:vol_7a312f660490387c94cfaf84bce81bca
>> /var/lib/kubelet/pods/fa4c6540-7dad-11e7-8a44-062df200059f/volumes/
>> kubernetes.io~glusterfs/pvc-fa4b2621-7dad-11e7-8a44-062df200059f
>> glusterfs [auto_unmount log-level=ERROR log-file=/var/lib/kubelet/plugins/
>> kubernetes.io/glusterfs/pvc-fa4b2621-7dad-11e7-8a44-062df200059f/es-data-log-distributed-0-glusterfs.log
>> backup-volfile-servers=159.100.240.237:159.100.242.156:159.100.242.235]
>> Output: /usr/bin/fusermount-glusterfs: mount failed: Invalid argument
>> Mount failed. Please check the log file for more details.
>>
>>  the following error information was pulled from the glusterfs log to
>> help diagnose this issue:
>> [2017-08-10 15:02:12.260168] W [glusterfsd.c:1095:cleanup_and_exit]
>> (-->/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f3e2477c62d]
>> (-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x8064) [0x7f3e24e43064]
>> (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x4073bd]))) 0-:
>> received signum (15), shutting down
>> [2017-08-10 15:02:12.260189] I [fuse-bridge.c:5475:fini] 0-fuse:
>> Unmounting
>> '/var/lib/kubelet/pods/fa4c6540-7dad-11e7-8a44-062df200059f/volumes/
>> kubernetes.io~glusterfs/pvc-fa4b2621-7dad-11e7-8a44-062df200059f'.
>>
>>   5h 32s 151 kubelet, k8s-bootcamp-rbac-np-worker-6263f70 Warning
>> FailedMount Unable to mount volumes for pod
>> "es-data-log-distributed-0_monitoring(fa4c6540-7dad-11e7-8a44-062df200059f)":
>> timeout expired waiting for volumes to attach/mount for pod
>> "monitoring"/"es-data-log-distributed-0". list of unattached/unmounted
>> volumes=[es-data]
>>   5h 32s 163 kubelet, k8s-bootcamp-rbac-np-worker-6263f70 Warning
>> FailedSync Error syncing pod
>>
>> Humble Devassy Chirammal <humble.deva...@gmail.com> wrote on Thu., 10
>> Aug. 2017 at 16:55:
>>
>>> Are you seeing an issue or an error message which says the auto_unmount
>>> option is not valid?
>>> Can you please let me know the issue you are seeing with 1.7.3?
>>>
>>> --Humble
>>>
>>>
>>> On Thu, Aug 10, 2017 at 3:33 PM, Christopher Schmidt <fakod...@gmail.com
>>> > wrote:
>>>
>>>> Hi all,
>>>>
>>>> I am testing K8s 1.7.3 together with GlusterFS and have some issues.
>>>>
>>>> Is this correct?
>>>> - Kubernetes v1.7.3 ships with a GlusterFS plugin that depends on
>>>> GlusterFS 3.11.
>>>> - GlusterFS 3.11 is not recommended for production, so 3.10 should be
>>>> used.
>>>>
>>>> This effectively means I cannot use any K8s 1.7.x version, right?
>>>> Or is there anything else I could do?
>>>>
>>>> best Christopher
>>>>
>>>>
>>>>
>>>> ___
>>>> Gluster-users mailing list
>>>> Gluster-users@gluster.org
>>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>>>
>>>
>>>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Kubernetes v1.7.3 and GlusterFS Plugin

2017-08-10 Thread Christopher Schmidt
A short excerpt from kubectl describe pod ...

Events:
  FirstSeen LastSeen Count From SubObjectPath Type Reason Message
  --------- -------- ----- ---- ------------- ---- ------ -------
  5h 54s 173 kubelet, k8s-bootcamp-rbac-np-worker-6263f70 Warning
FailedMount (combined from similar events): MountVolume.SetUp failed for
volume "pvc-fa4b2621-7dad-11e7-8a44-062df200059f" : glusterfs: mount
failed: mount failed: exit status 1
Mounting command: mount
Mounting arguments: 159.100.242.235:vol_7a312f660490387c94cfaf84bce81bca
/var/lib/kubelet/pods/fa4c6540-7dad-11e7-8a44-062df200059f/volumes/
kubernetes.io~glusterfs/pvc-fa4b2621-7dad-11e7-8a44-062df200059f glusterfs
[auto_unmount log-level=ERROR log-file=/var/lib/kubelet/plugins/
kubernetes.io/glusterfs/pvc-fa4b2621-7dad-11e7-8a44-062df200059f/es-data-log-distributed-0-glusterfs.log
backup-volfile-servers=159.100.240.237:159.100.242.156:159.100.242.235]
Output: /usr/bin/fusermount-glusterfs: mount failed: Invalid argument
Mount failed. Please check the log file for more details.

 the following error information was pulled from the glusterfs log to help
diagnose this issue:
[2017-08-10 15:02:12.260168] W [glusterfsd.c:1095:cleanup_and_exit]
(-->/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f3e2477c62d]
(-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x8064) [0x7f3e24e43064]
(-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x4073bd]))) 0-:
received signum (15), shutting down
[2017-08-10 15:02:12.260189] I [fuse-bridge.c:5475:fini] 0-fuse: Unmounting
'/var/lib/kubelet/pods/fa4c6540-7dad-11e7-8a44-062df200059f/volumes/
kubernetes.io~glusterfs/pvc-fa4b2621-7dad-11e7-8a44-062df200059f'.

  5h 32s 151 kubelet, k8s-bootcamp-rbac-np-worker-6263f70 Warning
FailedMount Unable to mount volumes for pod
"es-data-log-distributed-0_monitoring(fa4c6540-7dad-11e7-8a44-062df200059f)":
timeout expired waiting for volumes to attach/mount for pod
"monitoring"/"es-data-log-distributed-0". list of unattached/unmounted
volumes=[es-data]
  5h 32s 163 kubelet, k8s-bootcamp-rbac-np-worker-6263f70 Warning
FailedSync Error
syncing pod

Humble Devassy Chirammal <humble.deva...@gmail.com> wrote on Thu., 10
Aug. 2017 at 16:55:

> Are you seeing an issue or an error message which says the auto_unmount
> option is not valid?
> Can you please let me know the issue you are seeing with 1.7.3?
>
> --Humble
>
>
> On Thu, Aug 10, 2017 at 3:33 PM, Christopher Schmidt <fakod...@gmail.com>
> wrote:
>
>> Hi all,
>>
>> I am testing K8s 1.7.3 together with GlusterFS and have some issues.
>>
>> Is this correct?
>> - Kubernetes v1.7.3 ships with a GlusterFS plugin that depends on
>> GlusterFS 3.11.
>> - GlusterFS 3.11 is not recommended for production, so 3.10 should be used.
>>
>> This effectively means I cannot use any K8s 1.7.x version, right?
>> Or is there anything else I could do?
>>
>> best Christopher
>>
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Kubernetes v1.7.3 and GlusterFS Plugin

2017-08-10 Thread Christopher Schmidt
Hi all,

I am testing K8s 1.7.3 together with GlusterFS and have some issues.

Is this correct?
- Kubernetes v1.7.3 ships with a GlusterFS plugin that depends on
GlusterFS 3.11.
- GlusterFS 3.11 is not recommended for production, so 3.10 should be used.

This effectively means I cannot use any K8s 1.7.x version, right?
Or is there anything else I could do?

best Christopher
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Performance Translators Documentation

2017-07-11 Thread Christopher Schmidt
Hi,

I had some issues (org.apache.lucene.index.CorruptIndexException) with
Lucene (i.e., Elasticsearch) working on a GlusterFS volume under Kubernetes.

For testing I switched off all performance translators...

And I wonder whether there is documentation somewhere on what they are and
what they do?

best Chris
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS and Kafka

2017-05-29 Thread Christopher Schmidt
I was stupid enough to copy an additional newline from the email, so sorry
for the noise.
It works so far.

Thanks for getting that solved, best Chris

Raghavendra Talur <rta...@redhat.com> wrote on Mon., 29 May 2017 at
13:18:

>
>
> On 29-May-2017 3:49 PM, "Christopher Schmidt" <fakod...@gmail.com> wrote:
>
> Hi Raghavendra Talur,
>
> this does not work for me, most certainly because I forgot something.
> So I just put the file in the folder, make it executable, and create a
> volume? That's all?
>
> If I do this, there is no /var/lib/glusterd/hooks/1/create/post/log
> file and the performance translator is still on.
>
> Any idea?
>
>
> Most certainly SELinux. I think you will have to set the context on the
> file to the same as the others in the hooks dir.
>
> You can test temporarily with:
> setenforce 0
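
A sketch of that SELinux fix, assuming the hook file sits at the path used
in the steps quoted below:

```
# compare the label of the new hook with its neighbours
ls -Z /var/lib/glusterd/hooks/1/create/post/

# restore the default context defined by the policy for that directory
restorecon -v /var/lib/glusterd/hooks/1/create/post/S29disable-perf.sh

# or, for a quick test only, switch SELinux to permissive mode
setenforce 0
```
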
>
>
> Raghavendra Talur <rta...@redhat.com> schrieb am Fr., 26. Mai 2017 um
> 07:34 Uhr:
>
>> On Thu, May 25, 2017 at 8:39 PM, Joe Julian <j...@julianfamily.org> wrote:
>> > Maybe hooks?
>>
>> Yes, we were thinking of the same :)
>>
>> Christopher,
>> Gluster has a hook-scripts facility: admins can write scripts and set them
>> to be run on certain events in Gluster. We have an event for volume
>> creation.
>> Here are the steps for using hook scripts.
>>
>> 1. deploy the gluster pods and create a cluster as you have already done.
>> 2. on the Kubernetes nodes that are running gluster pods (make sure
>> they are running now, because we want to write into the bind mount),
>> create a new file in the location /var/lib/glusterd/hooks/1/create/post/
>> 3. the name of the file could be S29disable-perf.sh; the important part is
>> that the name starts with a capital S followed by a number.
>> 4. I tried out a sample script with content as below
>>
>> ```
>> #!/bin/bash
>>
>> # post-create hook: disable write-behind on every newly created volume
>> PROGNAME="Sdisable-perf"
>> OPTSPEC="volname:,gd-workdir:"
>> VOL=
>> CONFIGFILE=
>> LOGFILEBASE=
>> PIDDIR=
>> GLUSTERD_WORKDIR=
>>
>> function parse_args () {
>>     # getopt in long-options-only mode needs -o '' and --name
>>     ARGS=$(getopt -o '' -l "$OPTSPEC" --name "$PROGNAME" -- "$@")
>>     eval set -- "$ARGS"
>>
>>     while true; do
>>         case $1 in
>>             --volname)
>>                 shift
>>                 VOL=$1
>>                 ;;
>>             --gd-workdir)
>>                 shift
>>                 GLUSTERD_WORKDIR=$1
>>                 ;;
>>             *)
>>                 shift
>>                 break
>>                 ;;
>>         esac
>>         shift
>>     done
>> }
>>
>> function disable_perf_xlators () {
>>     local volname=$1
>>     gluster volume set "$volname" performance.write-behind off
>>     echo "executed and return is $?" >> /var/lib/glusterd/hooks/1/create/post/log
>> }
>>
>> echo "starting" >> /var/lib/glusterd/hooks/1/create/post/log
>> parse_args "$@"
>> disable_perf_xlators "$VOL"
>> ```
>> 5. set execute permissions on the file
>>
>> I tried this out and it worked for me. Let us know if that helps!
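
A minimal way to exercise the hook after step 5; a sketch with illustrative
volume name and brick paths (gluster volume get assumes glusterfs 3.8+):

```
chmod +x /var/lib/glusterd/hooks/1/create/post/S29disable-perf.sh

# create a throwaway volume; the post-create hook should fire
gluster volume create testvol replica 3 \
  host1:/bricks/testvol host2:/bricks/testvol host3:/bricks/testvol
cat /var/lib/glusterd/hooks/1/create/post/log

# confirm the option was applied
gluster volume get testvol performance.write-behind
```
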
>>
>> Thanks,
>> Raghavendra Talur
>>
>>
>>
>> >
>> >
>> > On May 25, 2017 6:48:04 AM PDT, Christopher Schmidt <fakod...@gmail.com
>> >
>> > wrote:
>> >>
>> >> Hi Humble,
>> >>
>> >> thanks for that, it is really appreciated.
>> >>
>> >> In the meantime, using K8s 1.5, what can I do to disable the
>> >> performance translator that doesn't work with Kafka? Maybe something
>> >> while building the GlusterFS container for Kubernetes?
>> >>
>> >> Best Christopher
>> >>
>> >>> Humble Chirammal <hchir...@redhat.com> wrote on Thu., 25 May 2017,
>> >>> 09:36:
>> >>>
>> >>> On Thu, May 25, 2017 at 12:57 PM, Raghavendra Talur <
>> rta...@redhat.com>
>> >>> wrote:
>> >>>>
>> >>>> On Thu, May 25, 2017 at 11:21 AM, Christopher Schmidt
>> >>>> <fakod...@gmail.com> wrote:
>> >>>> > So this change of the Gluster Volume Plugin will make it into K8s
>> 1.7
>> >>>> > or
>> >>>> > 1.8. Unfortunately too late for me.
>> >>>> >
>> >>>> > Does anyone know how t

Re: [Gluster-users] GlusterFS and Kafka

2017-05-29 Thread Christopher Schmidt
Hi Raghavendra Talur,

this does not work for me, most certainly because I forgot something.
So I just put the file in the folder, make it executable, and create a
volume? That's all?

If I do this, there is no /var/lib/glusterd/hooks/1/create/post/log
file and the performance translator is still on.

Any idea?

Raghavendra Talur <rta...@redhat.com> wrote on Fri., 26 May 2017 at
07:34:

> On Thu, May 25, 2017 at 8:39 PM, Joe Julian <j...@julianfamily.org> wrote:
> > Maybe hooks?
>
> Yes, we were thinking of the same :)
>
> Christopher,
> Gluster has a hook-scripts facility: admins can write scripts and set them
> to be run on certain events in Gluster. We have an event for volume
> creation.
> Here are the steps for using hook scripts.
>
> 1. deploy the gluster pods and create a cluster as you have already done.
> 2. on the Kubernetes nodes that are running gluster pods (make sure
> they are running now, because we want to write into the bind mount),
> create a new file in the location /var/lib/glusterd/hooks/1/create/post/
> 3. the name of the file could be S29disable-perf.sh; the important part is
> that the name starts with a capital S followed by a number.
> 4. I tried out a sample script with content as below
>
> ```
> #!/bin/bash
>
> # post-create hook: disable write-behind on every newly created volume
> PROGNAME="Sdisable-perf"
> OPTSPEC="volname:,gd-workdir:"
> VOL=
> CONFIGFILE=
> LOGFILEBASE=
> PIDDIR=
> GLUSTERD_WORKDIR=
>
> function parse_args () {
>     # getopt in long-options-only mode needs -o '' and --name
>     ARGS=$(getopt -o '' -l "$OPTSPEC" --name "$PROGNAME" -- "$@")
>     eval set -- "$ARGS"
>
>     while true; do
>         case $1 in
>             --volname)
>                 shift
>                 VOL=$1
>                 ;;
>             --gd-workdir)
>                 shift
>                 GLUSTERD_WORKDIR=$1
>                 ;;
>             *)
>                 shift
>                 break
>                 ;;
>         esac
>         shift
>     done
> }
>
> function disable_perf_xlators () {
>     local volname=$1
>     gluster volume set "$volname" performance.write-behind off
>     echo "executed and return is $?" >> /var/lib/glusterd/hooks/1/create/post/log
> }
>
> echo "starting" >> /var/lib/glusterd/hooks/1/create/post/log
> parse_args "$@"
> disable_perf_xlators "$VOL"
> ```
> 5. set execute permissions on the file
>
> I tried this out and it worked for me. Let us know if that helps!
>
> Thanks,
> Raghavendra Talur
>
>
>
> >
> >
> > On May 25, 2017 6:48:04 AM PDT, Christopher Schmidt <fakod...@gmail.com>
> > wrote:
> >>
> >> Hi Humble,
> >>
> >> thanks for that, it is really appreciated.
> >>
> >> In the meantime, using K8s 1.5, what can I do to disable the
> >> performance translator that doesn't work with Kafka? Maybe something
> >> while building the GlusterFS container for Kubernetes?
> >>
> >> Best Christopher
> >>
> >>> Humble Chirammal <hchir...@redhat.com> wrote on Thu., 25 May 2017,
> >>> 09:36:
> >>>
> >>> On Thu, May 25, 2017 at 12:57 PM, Raghavendra Talur <rta...@redhat.com
> >
> >>> wrote:
> >>>>
> >>>> On Thu, May 25, 2017 at 11:21 AM, Christopher Schmidt
> >>>> <fakod...@gmail.com> wrote:
> >>>> > So this change of the Gluster Volume Plugin will make it into K8s
> 1.7
> >>>> > or
> >>>> > 1.8. Unfortunately too late for me.
> >>>> >
> >>>> > Does anyone know how to disable performance translators by default?
> >>>>
> >>>> Humble,
> >>>>
> >>>> Do you know of any way Christopher can proceed here?
> >>>
> >>>
> >>> I am trying to get it in the 1.7 branch and will provide an update here
> >>> as soon as it's available.
> >>>>
> >>>>
> >>>> >
> >>>> >
> >>>> >> Raghavendra Talur <rta...@redhat.com> wrote on Wed., 24 May 2017,
> >>>> >> 19:30:
> >>>> >>
> >>>> >> On Wed, May 24, 2017 at 4:08 PM, Christopher Schmidt
> >>>> >> <fakod...@gmail.com>
> >>>> >> wrote:
> >>>> >> >
> >>>> >> >
> >>>> >> > Vijay Bellur <vbel...@redhat.com> wrote on Wed., 24 May 2017 at
> >>>> >> > 05:53:

Re: [Gluster-users] GlusterFS and Kafka

2017-05-25 Thread Christopher Schmidt
Hi Humble,

thanks for that, it is really appreciated.

In the meantime, using K8s 1.5, what can I do to disable the performance
translator that doesn't work with Kafka? Maybe something while building
the GlusterFS container for Kubernetes?

Best Christopher

Humble Chirammal <hchir...@redhat.com> wrote on Thu., 25 May 2017, 09:36:

> On Thu, May 25, 2017 at 12:57 PM, Raghavendra Talur <rta...@redhat.com>
> wrote:
>
>> On Thu, May 25, 2017 at 11:21 AM, Christopher Schmidt
>> <fakod...@gmail.com> wrote:
>> > So this change of the Gluster Volume Plugin will make it into K8s 1.7 or
>> > 1.8. Unfortunately too late for me.
>> >
>> > Does anyone know how to disable performance translators by default?
>>
>> Humble,
>>
>> Do you know of any way Christopher can proceed here?
>>
>
> I am trying to get it in the 1.7 branch and will provide an update here as
> soon as it's available.
>
>>
>> >
>> >
>> > Raghavendra Talur <rta...@redhat.com> wrote on Wed., 24 May 2017,
>> > 19:30:
>> >>
>> >> On Wed, May 24, 2017 at 4:08 PM, Christopher Schmidt <
>> fakod...@gmail.com>
>> >> wrote:
>> >> >
>> >> >
>> >> > Vijay Bellur <vbel...@redhat.com> wrote on Wed., 24 May 2017 at
>> >> > 05:53:
>> >> >>
>> >> >> On Tue, May 23, 2017 at 1:39 AM, Christopher Schmidt
>> >> >> <fakod...@gmail.com>
>> >> >> wrote:
>> >> >>>
>> >> >>> OK, seems that this works now.
>> >> >>>
>> >> >>> A couple of questions:
>> >> >>> - What do you think, are all these options necessary for Kafka?
>> >> >>
>> >> >>
>> >> >> I am not entirely certain what subset of options will make it work
>> as I
>> >> >> do
>> >> >> not understand the nature of failure with  Kafka and the default
>> >> >> gluster
>> >> >> configuration. It certainly needs further analysis to identify the
>> list
>> >> >> of
>> >> >> options necessary. Would it be possible for you to enable one option
>> >> >> after
>> >> >> the other and determine the configuration that works?
>> >> >>
>> >> >>
>> >> >>>
>> >> >>> - You wrote that there have to be kinds of application profiles. So
>> to
>> >> >>> find out, which set of options work is currently a matter of
>> testing
>> >> >>> (and
>> >> >>> hope)? Or are there any experiences for MongoDB / PostgreSQL /
>> >> >>> Zookeeper
>> >> >>> etc.?
>> >> >>
>> >> >>
>> >> >> Application profiles are work in progress. We have a few that are
>> >> >> focused
>> >> >> on use cases like VM storage, block storage etc. at the moment.
>> >> >>
>> >> >>>
>> >> >>> - I am using Heketi and Dynamic Storage Provisioning together with
>> >> >>> Kubernetes. Can I set these volume options somehow by default or by
>> >> >>> volume
>> >> >>> plugin?
>> >> >>
>> >> >>
>> >> >>
>> >> >> Adding Raghavendra and Michael to help address this query.
>> >> >
>> >> >
>> >> > For me it would be sufficient to disable some (or all) translators,
>> for
>> >> > all
>> >> > volumes that'll be created, somewhere here:
>> >> > https://github.com/gluster/gluster-containers/tree/master/CentOS
>> >> > This is the container used by the GlusterFS DaemonSet for Kubernetes.
>> >>
>> >> Work is in progress to provide such an option at the volume plugin level. We
>> >> currently have a patch[1] in review for Heketi that allows users to
>> >> set Gluster options using heketi-cli instead of going into a Gluster
>> >> pod. Once this is in, we can add options in storage-class of
>> >> Kubernetes that pass down Gluster options for every volume created in
>> >> that storage-class.
>> >>
>> >> [1] https://github.com/heketi/heketi/pull/751
>> >>
>> >> Thanks,
>> >> Raghavendra Talur
>> >>
>> 
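
For reference: once a change along the lines of [1] is available, per-volume
options could be passed at creation time. A sketch, assuming the patch lands
as a --gluster-volume-options flag on heketi-cli (the flag name is an
assumption, not confirmed in this thread):

```
# hypothetical usage once heketi passes gluster options through
heketi-cli volume create --size=10 \
  --gluster-volume-options="performance.write-behind off"
```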

Re: [Gluster-users] GlusterFS and Kafka

2017-05-24 Thread Christopher Schmidt
So this change of the Gluster Volume Plugin will make it into K8s 1.7 or
1.8. Unfortunately too late for me.

Does anyone know how to disable performance translators by default?


Raghavendra Talur <rta...@redhat.com> wrote on Wed., 24 May 2017, 19:30:

> On Wed, May 24, 2017 at 4:08 PM, Christopher Schmidt <fakod...@gmail.com>
> wrote:
> >
> >
> > Vijay Bellur <vbel...@redhat.com> wrote on Wed., 24 May 2017 at 05:53:
> >>
> >> On Tue, May 23, 2017 at 1:39 AM, Christopher Schmidt <
> fakod...@gmail.com>
> >> wrote:
> >>>
> >>> OK, seems that this works now.
> >>>
> >>> A couple of questions:
> >>> - What do you think, are all these options necessary for Kafka?
> >>
> >>
> >> I am not entirely certain what subset of options will make it work as I
> do
> >> not understand the nature of failure with  Kafka and the default gluster
> >> configuration. It certainly needs further analysis to identify the list
> of
> >> options necessary. Would it be possible for you to enable one option
> after
> >> the other and determine the configuration that works?
> >>
> >>
> >>>
> >>> - You wrote that there have to be kinds of application profiles. So to
> >>> find out, which set of options work is currently a matter of testing
> (and
> >>> hope)? Or are there any experiences for MongoDB / PostgreSQL /
> Zookeeper
> >>> etc.?
> >>
> >>
> >> Application profiles are work in progress. We have a few that are
> focused
> >> on use cases like VM storage, block storage etc. at the moment.
> >>
> >>>
> >>> - I am using Heketi and Dynamic Storage Provisioning together with
> >>> Kubernetes. Can I set these volume options somehow by default or by
> volume
> >>> plugin?
> >>
> >>
> >>
> >> Adding Raghavendra and Michael to help address this query.
> >
> >
> > For me it would be sufficient to disable some (or all) translators, for
> all
> > volumes that'll be created, somewhere here:
> > https://github.com/gluster/gluster-containers/tree/master/CentOS
> > This is the container used by the GlusterFS DaemonSet for Kubernetes.
>
> Work is in progress to provide such an option at the volume plugin level. We
> currently have a patch[1] in review for Heketi that allows users to
> set Gluster options using heketi-cli instead of going into a Gluster
> pod. Once this is in, we can add options in storage-class of
> Kubernetes that pass down Gluster options for every volume created in
> that storage-class.
>
> [1] https://github.com/heketi/heketi/pull/751
>
> Thanks,
> Raghavendra Talur
>
> >
> >>
> >>
> >> -Vijay
> >>
> >>
> >>
> >>>
> >>>
> >>> Thanks for your help... really appreciated. Christopher
> >>>
> >>> Vijay Bellur <vbel...@redhat.com> wrote on Mon., 22 May 2017 at
> >>> 16:41:
> >>>>
> >>>> Looks like a problem with caching. Can you please try disabling all
> >>>> performance translators? The following configuration commands would
> disable
> >>>> performance translators in the gluster client stack:
> >>>>
> >>>> gluster volume set <volname> performance.quick-read off
> >>>> gluster volume set <volname> performance.io-cache off
> >>>> gluster volume set <volname> performance.write-behind off
> >>>> gluster volume set <volname> performance.stat-prefetch off
> >>>> gluster volume set <volname> performance.read-ahead off
> >>>> gluster volume set <volname> performance.readdir-ahead off
> >>>> gluster volume set <volname> performance.open-behind off
> >>>> gluster volume set <volname> performance.client-io-threads off
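
After setting these, the effective values can be verified per volume; a
sketch, assuming glusterfs 3.8+ for gluster volume get:

```
# list the performance.* options currently in effect for the volume
gluster volume get <volname> all | grep ^performance
```
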
> >>>>
> >>>> Thanks,
> >>>> Vijay
> >>>>
> >>>>
> >>>>
> >>>> On Mon, May 22, 2017 at 9:46 AM, Christopher Schmidt
> >>>> <fakod...@gmail.com> wrote:
> >>>>>
> >>>>> Hi all,
> >>>>>
> >>>>> has anyone ever successfully deployed a Kafka (Cluster) on GlusterFS
> >>>>> volumes?
> >>>>>
> >>>>> In my case it's a Kafka Kubernetes-StatefulSet and a Heketi GlusterFS.
> >>>>> Needless to say, I am getting a lot of filesystem-related
> >>>>> exceptions like this one:
> >>>>>
> >>>>> Failed to read `log header` from file channel
> >>>>> `sun.nio.ch.FileChannelImpl@67afa54a`. Expected to read 12 bytes,
> but
> >>>>> reached end of file after reading 0 bytes. Started read from position
> >>>>> 123065680.
> >>>>>
> >>>>> I reduced the number of exceptions with the
> >>>>> log.flush.interval.messages=1 option, but not all of them...
> >>>>>
> >>>>> best Christopher
> >>>>>
> >>>>>
> >>>>> ___
> >>>>> Gluster-users mailing list
> >>>>> Gluster-users@gluster.org
> >>>>> http://lists.gluster.org/mailman/listinfo/gluster-users
> >>>>
> >>>>
> >
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS and Kafka

2017-05-24 Thread Christopher Schmidt
Vijay Bellur <vbel...@redhat.com> wrote on Wed., 24 May 2017 at 05:53:

> On Tue, May 23, 2017 at 1:39 AM, Christopher Schmidt <fakod...@gmail.com>
> wrote:
>
>> OK, seems that this works now.
>>
>> A couple of questions:
>> - What do you think, are all these options necessary for Kafka?
>>
>
> I am not entirely certain what subset of options will make it work as I do
> not understand the nature of failure with  Kafka and the default gluster
> configuration. It certainly needs further analysis to identify the list of
> options necessary. Would it be possible for you to enable one option after
> the other and determine the configuration that works?
>

I've done a short test. Disabling write-behind (see
http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Developer-guide/write-behind/)
seems to be enough, at least for Kafka.


>
>
>
>> - You wrote that there have to be kinds of application profiles. So to
>> find out, which set of options work is currently a matter of testing (and
>> hope)? Or are there any experiences for MongoDB / PostgreSQL / Zookeeper
>> etc.?
>>
>
> Application profiles are work in progress. We have a few that are focused
> on use cases like VM storage, block storage etc. at the moment.
>
>
>> - I am using Heketi and Dynamic Storage Provisioning together with
>> Kubernetes. Can I set these volume options somehow by default or by volume
>> plugin?
>>
>
>
> Adding Raghavendra and Michael to help address this query.
>
> -Vijay
>
>
>
>
>>
>> Thanks for your help... really appreciated. Christopher
>>
>> Vijay Bellur <vbel...@redhat.com> wrote on Mon., 22 May 2017 at
>> 16:41:
>>
>>> Looks like a problem with caching. Can you please try disabling all
>>> performance translators? The following configuration commands would disable
>>> performance translators in the gluster client stack:
>>>
>>> gluster volume set <volname> performance.quick-read off
>>> gluster volume set <volname> performance.io-cache off
>>> gluster volume set <volname> performance.write-behind off
>>> gluster volume set <volname> performance.stat-prefetch off
>>> gluster volume set <volname> performance.read-ahead off
>>> gluster volume set <volname> performance.readdir-ahead off
>>> gluster volume set <volname> performance.open-behind off
>>> gluster volume set <volname> performance.client-io-threads off
>>>
>>> Thanks,
>>> Vijay
>>>
>>>
>>>
>>> On Mon, May 22, 2017 at 9:46 AM, Christopher Schmidt <fakod...@gmail.com
>>> > wrote:
>>>
>>>> Hi all,
>>>>
>>>> has anyone ever successfully deployed a Kafka (Cluster) on GlusterFS
>>>> volumes?
>>>>
>>>> In my case it's a Kafka Kubernetes-StatefulSet and a Heketi GlusterFS.
>>>> Needless to say, I am getting a lot of filesystem-related
>>>> exceptions like this one:
>>>>
>>>> Failed to read `log header` from file channel
>>>> `sun.nio.ch.FileChannelImpl@67afa54a`. Expected to read 12 bytes, but
>>>> reached end of file after reading 0 bytes. Started read from position
>>>> 123065680.
>>>>
>>>> I reduced the number of exceptions with
>>>> the log.flush.interval.messages=1 option, but not all of them...
>>>>
>>>> best Christopher
>>>>
>>>>
>>>> ___
>>>> Gluster-users mailing list
>>>> Gluster-users@gluster.org
>>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>>>
>>>
>>>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS and Kafka

2017-05-24 Thread Christopher Schmidt
Vijay Bellur <vbel...@redhat.com> wrote on Wed., 24 May 2017 at 05:53:

> On Tue, May 23, 2017 at 1:39 AM, Christopher Schmidt <fakod...@gmail.com>
> wrote:
>
>> OK, seems that this works now.
>>
>> A couple of questions:
>> - What do you think, are all these options necessary for Kafka?
>>
>
> I am not entirely certain what subset of options will make it work as I do
> not understand the nature of failure with  Kafka and the default gluster
> configuration. It certainly needs further analysis to identify the list of
> options necessary. Would it be possible for you to enable one option after
> the other and determine the configuration that works?
>
>
>
>> - You wrote that there have to be kinds of application profiles. So to
>> find out, which set of options work is currently a matter of testing (and
>> hope)? Or are there any experiences for MongoDB / PostgreSQL / Zookeeper
>> etc.?
>>
>
> Application profiles are work in progress. We have a few that are focused
> on use cases like VM storage, block storage etc. at the moment.
>
>
>> - I am using Heketi and Dynamic Storage Provisioning together with
>> Kubernetes. Can I set these volume options somehow by default or by volume
>> plugin?
>>
>
>
> Adding Raghavendra and Michael to help address this query.
>

For me it would be sufficient to disable some (or all) translators, for all
volumes that'll be created, somewhere here:
https://github.com/gluster/gluster-containers/tree/master/CentOS
This is the container used by the GlusterFS DaemonSet for Kubernetes.


>
> -Vijay
>
>
>
>
>>
>> Thanks for your help... really appreciated. Christopher
>>
>> Vijay Bellur <vbel...@redhat.com> wrote on Mon., 22 May 2017 at
>> 16:41:
>>
>>> Looks like a problem with caching. Can you please try disabling all
>>> performance translators? The following configuration commands would disable
>>> performance translators in the gluster client stack:
>>>
>>> gluster volume set <volname> performance.quick-read off
>>> gluster volume set <volname> performance.io-cache off
>>> gluster volume set <volname> performance.write-behind off
>>> gluster volume set <volname> performance.stat-prefetch off
>>> gluster volume set <volname> performance.read-ahead off
>>> gluster volume set <volname> performance.readdir-ahead off
>>> gluster volume set <volname> performance.open-behind off
>>> gluster volume set <volname> performance.client-io-threads off
>>>
>>> Thanks,
>>> Vijay
>>>
>>>
>>>
>>> On Mon, May 22, 2017 at 9:46 AM, Christopher Schmidt <fakod...@gmail.com
>>> > wrote:
>>>
>>>> Hi all,
>>>>
>>>> has anyone ever successfully deployed a Kafka (Cluster) on GlusterFS
>>>> volumes?
>>>>
>>>> In my case it's a Kafka Kubernetes-StatefulSet and a Heketi GlusterFS.
>>>> Needless to say, I am getting a lot of filesystem-related
>>>> exceptions like this one:
>>>>
>>>> Failed to read `log header` from file channel
>>>> `sun.nio.ch.FileChannelImpl@67afa54a`. Expected to read 12 bytes, but
>>>> reached end of file after reading 0 bytes. Started read from position
>>>> 123065680.
>>>>
>>>> I reduced the number of exceptions with
>>>> the log.flush.interval.messages=1 option, but not all of them...
>>>>
>>>> best Christopher
>>>>
>>>>
>>>> ___
>>>> Gluster-users mailing list
>>>> Gluster-users@gluster.org
>>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>>>
>>>
>>>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS and Kafka

2017-05-23 Thread Christopher Schmidt
Well, a "turning off caching" warning is OK.
But without doing anything, Kafka definitely doesn't work, which is somewhat
strange because it is a normal (IMHO) JVM process written in Scala. I am
wondering whether there are issues with other tools too.

Vijay Bellur <vbel...@redhat.com> wrote on Tue., 23 May 2017 at 05:48:

> On Mon, May 22, 2017 at 11:49 AM, Joe Julian <j...@julianfamily.org> wrote:
>
>> This may be asking too much, but can you explain why or how it's even
>> possible to bypass the cache like this, Vijay?
>>
>
> This is a good question and the answers to that is something I should have
> elaborated a bit more in my previous response. As far as the why is
> concerned, gluster's client stack is configured by default to provide
> reasonable performance and not  be very strongly consistent for
> applications that need the most accurate metadata for their functioning.
> Turning off the performance translators provide more stronger consistency
> and we have seen applications that rely on a high degree of consistency
> work better with that configuration. It is with this backdrop I suggested
> performance translators be turned off from the client stack for Kafka.
>
> On how it is possible, the translator model of gluster helps us to enable
> or disable optional functionality from the stack. There is no single
> configuration that can accommodate workloads with varying profiles and
> having a modular architecture is a plus for gluster - the storage stack can
> be tuned to suit varying application profiles. We are exploring the
> possibility of providing custom profiles (collections of options) for
> popular applications to make it easier for all of us. Note that disabling
> performance translators in gluster is similar to turning off caching with
> the NFS client. In parallel we are also looking to alter the behavior of
> performance translators to provide as much consistency as possible by
> default.
>
> Thanks,
> Vijay
>
>>
>>
>> On May 22, 2017 7:41:40 AM PDT, Vijay Bellur <vbel...@redhat.com> wrote:
>>>
>>> Looks like a problem with caching. Can you please try disabling all
>>> performance translators? The following configuration commands would disable
>>> performance translators in the gluster client stack:
>>>
>>> gluster volume set <volname> performance.quick-read off
>>> gluster volume set <volname> performance.io-cache off
>>> gluster volume set <volname> performance.write-behind off
>>> gluster volume set <volname> performance.stat-prefetch off
>>> gluster volume set <volname> performance.read-ahead off
>>> gluster volume set <volname> performance.readdir-ahead off
>>> gluster volume set <volname> performance.open-behind off
>>> gluster volume set <volname> performance.client-io-threads off
>>>
>>> Thanks,
>>> Vijay
>>>
>>>
>>>
>>> On Mon, May 22, 2017 at 9:46 AM, Christopher Schmidt <fakod...@gmail.com
>>> > wrote:
>>>
>>>> Hi all,
>>>>
>>>> has anyone ever successfully deployed a Kafka (Cluster) on GlusterFS
>>>> volumes?
>>>>
>>>> In my case it's a Kafka Kubernetes-StatefulSet and a Heketi GlusterFS.
>>>> Needless to say, I am getting a lot of filesystem-related
>>>> exceptions like this one:
>>>>
>>>> Failed to read `log header` from file channel
>>>> `sun.nio.ch.FileChannelImpl@67afa54a`. Expected to read 12 bytes, but
>>>> reached end of file after reading 0 bytes. Started read from position
>>>> 123065680.
>>>>
>>>> I reduced the number of exceptions with
>>>> the log.flush.interval.messages=1 option, but not all of them...
>>>>
>>>> best Christopher
>>>>
>>>>
>>>> ___
>>>> Gluster-users mailing list
>>>> Gluster-users@gluster.org
>>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>>>
>>>
>>>
>> --
>> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] GlusterFS and Kafka

2017-05-22 Thread Christopher Schmidt
Hi all,

has anyone ever successfully deployed a Kafka (Cluster) on GlusterFS
volumes?

In my case it's a Kafka Kubernetes-StatefulSet and a Heketi GlusterFS.
Needless to say, I am getting a lot of filesystem-related exceptions
like this one:

Failed to read `log header` from file channel
`sun.nio.ch.FileChannelImpl@67afa54a`. Expected to read 12 bytes, but
reached end of file after reading 0 bytes. Started read from position
123065680.

I reduced the number of exceptions with the log.flush.interval.messages=1
option, but not all of them...

best Christopher
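
For reference, the broker setting mentioned above forces a flush after every
message, at a significant throughput cost; a sketch, assuming a stock Kafka
layout under /opt/kafka:

```
# server.properties: fsync the log after every message
echo "log.flush.interval.messages=1" >> /opt/kafka/config/server.properties
```
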
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users