Re: [Gluster-users] GlusterFS and Kafka

2017-05-25 Thread Raghavendra Talur
On Thu, May 25, 2017 at 8:39 PM, Joe Julian  wrote:
> Maybe hooks?

Yes, we were thinking of the same :)

Christopher,
Gluster has a hook-scripts facility: admins can write scripts and set
them to be run on certain events in Gluster, and volume creation is one
such event.
Here are the steps for using hook scripts.

1. Deploy the gluster pods and create a cluster as you have already done.
2. On the kubernetes nodes that are running gluster pods (make sure
they are running now, because we want to write into the bind mount),
create a new file under /var/lib/glusterd/hooks/1/create/post/
3. The name of the file could be S29disable-perf.sh; the important part is
that the name starts with a capital S.
4. I tried out a sample script with the content below.

```
#!/bin/bash
# Hook script: disable the write-behind performance xlator on every
# newly created volume. Place it in /var/lib/glusterd/hooks/1/create/post/
# and make it executable.

PROGNAME="Sdisable-perf"
OPTSPEC="volname:,gd-workdir:"
VOL=
CONFIGFILE=
LOGFILEBASE=
PIDDIR=
GLUSTERD_WORKDIR=

function parse_args () {
    # glusterd invokes create/post hooks with --volname=<vol> --gd-workdir=<dir>
    ARGS=$(getopt -o '' -l "$OPTSPEC" --name "$PROGNAME" -- "$@")
    eval set -- "$ARGS"

    while true; do
        case $1 in
            --volname)
                shift
                VOL=$1
                ;;
            --gd-workdir)
                shift
                GLUSTERD_WORKDIR=$1
                ;;
            *)
                shift
                break
                ;;
        esac
        shift
    done
}

function disable_perf_xlators () {
    volname=$1
    gluster volume set "$volname" performance.write-behind off
    echo "executed and return is $?" >> /var/lib/glusterd/hooks/1/create/post/log
}

echo "starting" >> /var/lib/glusterd/hooks/1/create/post/log
parse_args "$@"
disable_perf_xlators "$VOL"
```
5. Set execute permissions on the file.
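For example, assuming the file name used above:

```
chmod +x /var/lib/glusterd/hooks/1/create/post/S29disable-perf.sh
```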

I tried this out and it worked for me. Let us know if that helps!
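To confirm the hook fired, a quick check after creating a new volume (the volume name below is just a placeholder; the hook also appends to its own log file):

```
# "Options Reconfigured" should now list performance.write-behind: off
gluster volume info <volname>
cat /var/lib/glusterd/hooks/1/create/post/log
```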

Thanks,
Raghavendra Talur



>
>
> On May 25, 2017 6:48:04 AM PDT, Christopher Schmidt 
> wrote:
>>
>> Hi Humble,
>>
>> thanks for that, it is really appreciated.
>>
>> In the meanwhile, using K8s 1.5, what can I do to disable the performance
>> translator that doesn't work with Kafka? Maybe something while generating
>> the Glusterfs container for Kubernetes?
>>
>> Best Christopher
>>
>> Humble Chirammal  schrieb am Do., 25. Mai 2017,
>> 09:36:
>>>
>>> On Thu, May 25, 2017 at 12:57 PM, Raghavendra Talur 
>>> wrote:

 On Thu, May 25, 2017 at 11:21 AM, Christopher Schmidt
  wrote:
 > So this change of the Gluster Volume Plugin will make it into K8s 1.7
 > or
 > 1.8. Unfortunately too late for me.
 >
 > Does anyone know how to disable performance translators by default?

 Humble,

 Do you know of any way Christopher can proceed here?
>>>
>>>
>>> I am trying to get it in 1.7 branch, will provide an update here as soon
>>> as its available.


 >
 >
 > Raghavendra Talur  schrieb am Mi., 24. Mai 2017,
 > 19:30:
 >>
 >> On Wed, May 24, 2017 at 4:08 PM, Christopher Schmidt
 >> 
 >> wrote:
 >> >
 >> >
 >> > Vijay Bellur  schrieb am Mi., 24. Mai 2017 um
 >> > 05:53
 >> > Uhr:
 >> >>
 >> >> On Tue, May 23, 2017 at 1:39 AM, Christopher Schmidt
 >> >> 
 >> >> wrote:
 >> >>>
 >> >>> OK, seems that this works now.
 >> >>>
 >> >>> A couple of questions:
 >> >>> - What do you think, are all these options necessary for Kafka?
 >> >>
 >> >>
 >> >> I am not entirely certain what subset of options will make it work
 >> >> as I
 >> >> do
 >> >> not understand the nature of failure with  Kafka and the default
 >> >> gluster
 >> >> configuration. It certainly needs further analysis to identify the
 >> >> list
 >> >> of
 >> >> options necessary. Would it be possible for you to enable one
 >> >> option
 >> >> after
 >> >> the other and determine the configuration that ?
 >> >>
 >> >>
 >> >>>
 >> >>> - You wrote that there have to be kind of application profiles.
 >> >>> So to
 >> >>> find out, which set of options work is currently a matter of
 >> >>> testing
 >> >>> (and
 >> >>> hope)? Or are there any experiences for MongoDB / ProstgreSQL /
 >> >>> Zookeeper
 >> >>> etc.?
 >> >>
 >> >>
 >> >> Application profiles are work in progress. We have a few that are
 >> >> focused
 >> >> on use cases like VM storage, block storage etc. at the moment.
 >> >>
 >> >>>
 >> >>> - I am using Heketi and Dynamik Storage Provisioning together
 >> >>> with
 >> >>> Kubernetes. Can I set this volume options somehow by default or
 >> >>> by
 >> >>> volume
 >> >>> plugin?
 >> >>
 >> >>
 >> >>
 >> >> Adding Raghavendra and Michael to help address this query.

Re: [Gluster-users] Failure while upgrading gluster to 3.10.1

2017-05-25 Thread Atin Mukherjee
On Thu, 25 May 2017 at 19:11, Pawan Alwandi  wrote:

> Hello Atin,
>
> Yes, glusterd on other instances are up and running.  Below is the
> requested output on all the three hosts.
>
> Host 1
>
> # gluster peer status
> Number of Peers: 2
>
> Hostname: 192.168.0.7
> Uuid: 5ec54b4f-f60c-48c6-9e55-95f2bb58f633
> State: Peer in Cluster (Disconnected)
>

Glusterd is disconnected here.

>
>
> Hostname: 192.168.0.6
> Uuid: 83e9a0b9-6bd5-483b-8516-d8928805ed95
> State: Peer in Cluster (Disconnected)
>

Same as above

Can you please check what the glusterd log has to say about these
disconnects?
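In case it helps, the glusterd log usually lives under /var/log/glusterfs; the file name varies by release, so these paths are an assumption (adjust for your install):

```
# newer releases
grep -i disconnect /var/log/glusterfs/glusterd.log
# older releases name it after the volfile
grep -i disconnect /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
```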


>
> # gluster volume status
> Status of volume: shared
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick 192.168.0.5:/data/exports/shared      49152     0          Y       2105
> NFS Server on localhost                     2049      0          Y       2089
> Self-heal Daemon on localhost               N/A       N/A        Y       2097
>

Volume status output does show all the bricks are up, so I'm not sure why
you are seeing the volume as read only. Can you please provide the mount
log?
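For a fuse mount the client log is usually named after the mount point under /var/log/glusterfs; for example, if the volume is mounted at /mnt/shared (an assumption, substitute your actual mount point):

```
tail -n 100 /var/log/glusterfs/mnt-shared.log
```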


>
> Task Status of Volume shared
>
> --
> There are no active volume tasks
>
> Host 2
>
> # gluster peer status
> Number of Peers: 2
>
> Hostname: 192.168.0.7
> Uuid: 5ec54b4f-f60c-48c6-9e55-95f2bb58f633
> State: Peer in Cluster (Connected)
>
> Hostname: 192.168.0.5
> Uuid: 7f2a6e11-2a53-4ab4-9ceb-8be6a9f2d073
> State: Peer in Cluster (Connected)
>
>
> # gluster volume status
> Status of volume: shared
> Gluster process                          Port   Online  Pid
> ------------------------------------------------------------
> Brick 192.168.0.5:/data/exports/shared   49152  Y       2105
> Brick 192.168.0.6:/data/exports/shared   49152  Y       2188
> Brick 192.168.0.7:/data/exports/shared   49152  Y       2453
> NFS Server on localhost                  2049   Y       2194
> Self-heal Daemon on localhost            N/A    Y       2199
> NFS Server on 192.168.0.5                2049   Y       2089
> Self-heal Daemon on 192.168.0.5          N/A    Y       2097
> NFS Server on 192.168.0.7                2049   Y       2458
> Self-heal Daemon on 192.168.0.7          N/A    Y       2463
>
> Task Status of Volume shared
>
> --
> There are no active volume tasks
>
> Host 3
>
> # gluster peer status
> Number of Peers: 2
>
> Hostname: 192.168.0.5
> Uuid: 7f2a6e11-2a53-4ab4-9ceb-8be6a9f2d073
> State: Peer in Cluster (Connected)
>
> Hostname: 192.168.0.6
> Uuid: 83e9a0b9-6bd5-483b-8516-d8928805ed95
> State: Peer in Cluster (Connected)
>
> # gluster volume status
> Status of volume: shared
> Gluster process                          Port   Online  Pid
> ------------------------------------------------------------
> Brick 192.168.0.5:/data/exports/shared   49152  Y       2105
> Brick 192.168.0.6:/data/exports/shared   49152  Y       2188
> Brick 192.168.0.7:/data/exports/shared   49152  Y       2453
> NFS Server on localhost                  2049   Y       2458
> Self-heal Daemon on localhost            N/A    Y       2463
> NFS Server on 192.168.0.6                2049   Y       2194
> Self-heal Daemon on 192.168.0.6          N/A    Y       2199
> NFS Server on 192.168.0.5                2049   Y       2089
> Self-heal Daemon on 192.168.0.5          N/A    Y       2097
>
> Task Status of Volume shared
>
> --
> There are no active volume tasks
>
>
>
>
>
>
> On Wed, May 24, 2017 at 8:32 PM, Atin Mukherjee 
> wrote:
>
>> Are the other glusterd instances up? Output of gluster peer status &
>> gluster volume status please?
>>
>> On Wed, May 24, 2017 at 4:20 PM, Pawan Alwandi  wrote:
>>
>>> Thanks Atin,
>>>
>>> So I got gluster downgraded to 3.7.9 on host 1 and now have the
>>> glusterfs and glusterfsd processes come up.  But I see the volume is
>>> mounted read only.
>>>
>>> I see these being logged every 3s:
>>>
>>> [2017-05-24 10:45:44.440435] W [socket.c:852:__socket_keepalive]
>>> 0-socket: failed to set keep idle -1 on socket 17, Invalid argument
>>> [2017-05-24 10:45:44.440475] E [socket.c:2966:socket_connect]
>>> 0-management: Failed to set keep-alive: Invalid argument
>>> [2017-05-24 10:45:44.440734] W [socket.c:852:__socket_keepalive]
>>> 0-socket: failed to set keep idle -1 on socket 20, Invalid argument
>>> [2017-05-24 10:45:44.440754] E [socket.c:2966:socket_connect]
>>> 0-management: Failed to set keep-alive: Invalid argument
>>> [2017-05-24 10:45:44.441354] E [rpc-clnt.c:362:saved_frames_unwind] (-->
>>> 

Re: [Gluster-users] GlusterFS and Kafka

2017-05-25 Thread Joe Julian
Maybe hooks?

On May 25, 2017 6:48:04 AM PDT, Christopher Schmidt  wrote:
>Hi Humble,
>
>thanks for that, it is really appreciated.
>
>In the meanwhile, using K8s 1.5, what can I do to disable the
>performance
>translator that doesn't work with Kafka? Maybe something while
>generating
>the Glusterfs container for Kubernetes?
>
>Best Christopher
>
>Humble Chirammal  wrote on Thu., May 25, 2017, 09:36:
>
>> On Thu, May 25, 2017 at 12:57 PM, Raghavendra Talur
>
>> wrote:
>>
>>> On Thu, May 25, 2017 at 11:21 AM, Christopher Schmidt
>>>  wrote:
>>> > So this change of the Gluster Volume Plugin will make it into K8s
>1.7 or
>>> > 1.8. Unfortunately too late for me.
>>> >
>>> > Does anyone know how to disable performance translators by
>default?
>>>
>>> Humble,
>>>
>>> Do you know of any way Christopher can proceed here?
>>>
>>
>> I am trying to get it in 1.7 branch, will provide an update here as
>soon
>> as its available.
>>
>>>
>>> >
>>> >
>>> > Raghavendra Talur  schrieb am Mi., 24. Mai
>2017,
>>> 19:30:
>>> >>
>>> >> On Wed, May 24, 2017 at 4:08 PM, Christopher Schmidt <
>>> fakod...@gmail.com>
>>> >> wrote:
>>> >> >
>>> >> >
>>> >> > Vijay Bellur  schrieb am Mi., 24. Mai 2017
>um
>>> 05:53
>>> >> > Uhr:
>>> >> >>
>>> >> >> On Tue, May 23, 2017 at 1:39 AM, Christopher Schmidt
>>> >> >> 
>>> >> >> wrote:
>>> >> >>>
>>> >> >>> OK, seems that this works now.
>>> >> >>>
>>> >> >>> A couple of questions:
>>> >> >>> - What do you think, are all these options necessary for
>Kafka?
>>> >> >>
>>> >> >>
>>> >> >> I am not entirely certain what subset of options will make it
>work
>>> as I
>>> >> >> do
>>> >> >> not understand the nature of failure with  Kafka and the
>default
>>> >> >> gluster
>>> >> >> configuration. It certainly needs further analysis to identify
>the
>>> list
>>> >> >> of
>>> >> >> options necessary. Would it be possible for you to enable one
>option
>>> >> >> after
>>> >> >> the other and determine the configuration that ?
>>> >> >>
>>> >> >>
>>> >> >>>
>>> >> >>> - You wrote that there have to be kind of application
>profiles. So
>>> to
>>> >> >>> find out, which set of options work is currently a matter of
>>> testing
>>> >> >>> (and
>>> >> >>> hope)? Or are there any experiences for MongoDB / ProstgreSQL
>/
>>> >> >>> Zookeeper
>>> >> >>> etc.?
>>> >> >>
>>> >> >>
>>> >> >> Application profiles are work in progress. We have a few that
>are
>>> >> >> focused
>>> >> >> on use cases like VM storage, block storage etc. at the
>moment.
>>> >> >>
>>> >> >>>
>>> >> >>> - I am using Heketi and Dynamik Storage Provisioning together
>with
>>> >> >>> Kubernetes. Can I set this volume options somehow by default
>or by
>>> >> >>> volume
>>> >> >>> plugin?
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >> Adding Raghavendra and Michael to help address this query.
>>> >> >
>>> >> >
>>> >> > For me it would be sufficient to disable some (or all)
>translators,
>>> for
>>> >> > all
>>> >> > volumes that'll be created, somewhere here:
>>> >> >
>https://github.com/gluster/gluster-containers/tree/master/CentOS
>>> >> > This is the container used by the GlusterFS DaemonSet for
>Kubernetes.
>>> >>
>>> >> Work is in progress to give such option at volume plugin level.
>We
>>> >> currently have a patch[1] in review for Heketi that allows users
>to
>>> >> set Gluster options using heketi-cli instead of going into a
>Gluster
>>> >> pod. Once this is in, we can add options in storage-class of
>>> >> Kubernetes that pass down Gluster options for every volume
>created in
>>> >> that storage-class.
>>> >>
>>> >> [1] https://github.com/heketi/heketi/pull/751
>>> >>
>>> >> Thanks,
>>> >> Raghavendra Talur
>>> >>
>>> >> >
>>> >> >>
>>> >> >>
>>> >> >> -Vijay
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >>>
>>> >> >>>
>>> >> >>> Thanks for you help... really appreciated.. Christopher
>>> >> >>>
>>> >> >>> Vijay Bellur  schrieb am Mo., 22. Mai
>2017 um
>>> >> >>> 16:41
>>> >> >>> Uhr:
>>> >> 
>>> >>  Looks like a problem with caching. Can you please try by
>disabling
>>> >>  all
>>> >>  performance translators? The following configuration
>commands
>>> would
>>> >>  disable
>>> >>  performance translators in the gluster client stack:
>>> >> 
>>> >>  gluster volume set  performance.quick-read off
>>> >>  gluster volume set  performance.io-cache off
>>> >>  gluster volume set  performance.write-behind off
>>> >>  gluster volume set  performance.stat-prefetch off
>>> >>  gluster volume set  performance.read-ahead off
>>> >>  gluster volume set  performance.readdir-ahead off
>>> >>  gluster volume set  performance.open-behind off
>>> >>  gluster volume set  performance.client-io-threads
>off
>>> >> 
>>> >>  Thanks,
>>> >>  Vijay
>>> >> 
>>> >> 
>>> >> 
>>> >>  On Mon, May 22, 2017 at 9:46 AM, Christopher 

Re: [Gluster-users] GlusterFS and Kafka

2017-05-25 Thread Humble Chirammal
Hi Christopher,

We are experimenting with a few other options to get rid of this issue. We will
provide an update as soon as we have it.

On Thu, May 25, 2017 at 7:18 PM, Christopher Schmidt 
wrote:

> Hi Humble,
>
> thanks for that, it is really appreciated.
>
> In the meanwhile, using K8s 1.5, what can I do to disable the performance
> translator that doesn't work with Kafka? Maybe something while generating
> the Glusterfs container for Kubernetes?
>
> Best Christopher
>
> Humble Chirammal  wrote on Thu., May 25, 2017, 09:36:
>
>> On Thu, May 25, 2017 at 12:57 PM, Raghavendra Talur 
>> wrote:
>>
>>> On Thu, May 25, 2017 at 11:21 AM, Christopher Schmidt
>>>  wrote:
>>> > So this change of the Gluster Volume Plugin will make it into K8s 1.7
>>> or
>>> > 1.8. Unfortunately too late for me.
>>> >
>>> > Does anyone know how to disable performance translators by default?
>>>
>>> Humble,
>>>
>>> Do you know of any way Christopher can proceed here?
>>>
>>
>> I am trying to get it in 1.7 branch, will provide an update here as soon
>> as its available.
>>
>>>
>>> >
>>> >
>>> > Raghavendra Talur  schrieb am Mi., 24. Mai 2017,
>>> 19:30:
>>> >>
>>> >> On Wed, May 24, 2017 at 4:08 PM, Christopher Schmidt <
>>> fakod...@gmail.com>
>>> >> wrote:
>>> >> >
>>> >> >
>>> >> > Vijay Bellur  schrieb am Mi., 24. Mai 2017 um
>>> 05:53
>>> >> > Uhr:
>>> >> >>
>>> >> >> On Tue, May 23, 2017 at 1:39 AM, Christopher Schmidt
>>> >> >> 
>>> >> >> wrote:
>>> >> >>>
>>> >> >>> OK, seems that this works now.
>>> >> >>>
>>> >> >>> A couple of questions:
>>> >> >>> - What do you think, are all these options necessary for Kafka?
>>> >> >>
>>> >> >>
>>> >> >> I am not entirely certain what subset of options will make it work
>>> as I
>>> >> >> do
>>> >> >> not understand the nature of failure with  Kafka and the default
>>> >> >> gluster
>>> >> >> configuration. It certainly needs further analysis to identify the
>>> list
>>> >> >> of
>>> >> >> options necessary. Would it be possible for you to enable one
>>> option
>>> >> >> after
>>> >> >> the other and determine the configuration that ?
>>> >> >>
>>> >> >>
>>> >> >>>
>>> >> >>> - You wrote that there have to be kind of application profiles.
>>> So to
>>> >> >>> find out, which set of options work is currently a matter of
>>> testing
>>> >> >>> (and
>>> >> >>> hope)? Or are there any experiences for MongoDB / ProstgreSQL /
>>> >> >>> Zookeeper
>>> >> >>> etc.?
>>> >> >>
>>> >> >>
>>> >> >> Application profiles are work in progress. We have a few that are
>>> >> >> focused
>>> >> >> on use cases like VM storage, block storage etc. at the moment.
>>> >> >>
>>> >> >>>
>>> >> >>> - I am using Heketi and Dynamik Storage Provisioning together with
>>> >> >>> Kubernetes. Can I set this volume options somehow by default or by
>>> >> >>> volume
>>> >> >>> plugin?
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >> Adding Raghavendra and Michael to help address this query.
>>> >> >
>>> >> >
>>> >> > For me it would be sufficient to disable some (or all) translators,
>>> for
>>> >> > all
>>> >> > volumes that'll be created, somewhere here:
>>> >> > https://github.com/gluster/gluster-containers/tree/master/CentOS
>>> >> > This is the container used by the GlusterFS DaemonSet for
>>> Kubernetes.
>>> >>
>>> >> Work is in progress to give such option at volume plugin level. We
>>> >> currently have a patch[1] in review for Heketi that allows users to
>>> >> set Gluster options using heketi-cli instead of going into a Gluster
>>> >> pod. Once this is in, we can add options in storage-class of
>>> >> Kubernetes that pass down Gluster options for every volume created in
>>> >> that storage-class.
>>> >>
>>> >> [1] https://github.com/heketi/heketi/pull/751
>>> >>
>>> >> Thanks,
>>> >> Raghavendra Talur
>>> >>
>>> >> >
>>> >> >>
>>> >> >>
>>> >> >> -Vijay
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >>>
>>> >> >>>
>>> >> >>> Thanks for you help... really appreciated.. Christopher
>>> >> >>>
>>> >> >>> Vijay Bellur  schrieb am Mo., 22. Mai 2017 um
>>> >> >>> 16:41
>>> >> >>> Uhr:
>>> >> 
>>> >>  Looks like a problem with caching. Can you please try by
>>> disabling
>>> >>  all
>>> >>  performance translators? The following configuration commands
>>> would
>>> >>  disable
>>> >>  performance translators in the gluster client stack:
>>> >> 
>>> >>  gluster volume set  performance.quick-read off
>>> >>  gluster volume set  performance.io-cache off
>>> >>  gluster volume set  performance.write-behind off
>>> >>  gluster volume set  performance.stat-prefetch off
>>> >>  gluster volume set  performance.read-ahead off
>>> >>  gluster volume set  performance.readdir-ahead off
>>> >>  gluster volume set  performance.open-behind off
>>> >>  gluster volume set  performance.client-io-threads off
>>> >> 
>>> >>  

Re: [Gluster-users] Fwd: Re: VM going down

2017-05-25 Thread Alessandro Briosi
On 25/05/2017 15:24, Joe Julian wrote:
> You'd want to see the client log. I'm not sure where proxmox
> configures those to go.
>
> On May 24, 2017 11:57:33 PM PDT, Alessandro Briosi 
> wrote:
>
> On 19/05/2017 17:27, Alessandro Briosi wrote:
>> On 12/05/2017 12:09, Alessandro Briosi wrote:
 You probably should open a bug so that we have all the troubleshooting
 and debugging details in one location. Once we find the problem we can
 move the bug to the right component.
   https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS

 HTH,
 Niels
>>> The thing is that when the VM is down and I check the logs there's 
>>> nothing.
>>> Then when I start the VM the logs get populated with the seek error.
>>>
>>> Anyway I'll open a bug for this.
>>
>> Ok, as it happened again I have opened a bug:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1452766
>>
>> I now have started the vm with gdb (maybe I can find more
>> information)
>>
>> In the logs I still have "No such file or directory" which at
>> this point seems to be the culprit of this (?)
>>
>> Alessandro
>
> It happened again and now I have at least a gdb log which tells me
> where the error is.
>
> I've attached the log to the bug.
>
> Logs strangely do not report any error, though the 2 VM disk files
> seem to be going through a heal process:
>
> Brick srvpve1g:/data/brick1/brick
> /images/101/vm-101-disk-2.qcow2 - Possibly undergoing heal
>
> /images/101/vm-101-disk-1.qcow2 - Possibly undergoing heal
>
> Status: Connected
> Number of entries: 2
>
> Brick srvpve2g:/data/brick1/brick
> /images/101/vm-101-disk-2.qcow2 - Possibly undergoing heal
>
> /images/101/vm-101-disk-1.qcow2 - Possibly undergoing heal
>
> Status: Connected
> Number of entries: 2
>
> Brick srvpve3g:/data/brick1/brick
> /images/101/vm-101-disk-2.qcow2 - Possibly undergoing heal
>
> /images/101/vm-101-disk-1.qcow2 - Possibly undergoing heal
>
> Status: Connected
> Number of entries: 2
>
>
> I really have no clue on why this is happening.
> Thanks for your help.
>
> Alessandro
>
>
> -- 
> Sent from my Android device with K-9 Mail. Please excuse my brevity. 

It's starting to get a bit frustrating, as the VM has now crashed for the
fourth time...

I'm considering moving the disks to local storage until the
problem is solved.


Best regards.
/Alessandro Briosi/
 
*METAL.it Nord S.r.l.*
Via Maioliche 57/C - 38068 Rovereto (TN)
Tel.+39.0464.430130 - Fax +39.0464.437393
www.metalit.com

 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS and Kafka

2017-05-25 Thread Christopher Schmidt
Hi Humble,

thanks for that, it is really appreciated.

In the meanwhile, using K8s 1.5, what can I do to disable the performance
translator that doesn't work with Kafka? Maybe something while generating
the Glusterfs container for Kubernetes?

Best Christopher

Humble Chirammal  wrote on Thu., May 25, 2017, 09:36:

> On Thu, May 25, 2017 at 12:57 PM, Raghavendra Talur 
> wrote:
>
>> On Thu, May 25, 2017 at 11:21 AM, Christopher Schmidt
>>  wrote:
>> > So this change of the Gluster Volume Plugin will make it into K8s 1.7 or
>> > 1.8. Unfortunately too late for me.
>> >
>> > Does anyone know how to disable performance translators by default?
>>
>> Humble,
>>
>> Do you know of any way Christopher can proceed here?
>>
>
> I am trying to get it in 1.7 branch, will provide an update here as soon
> as its available.
>
>>
>> >
>> >
>> > Raghavendra Talur  schrieb am Mi., 24. Mai 2017,
>> 19:30:
>> >>
>> >> On Wed, May 24, 2017 at 4:08 PM, Christopher Schmidt <
>> fakod...@gmail.com>
>> >> wrote:
>> >> >
>> >> >
>> >> > Vijay Bellur  schrieb am Mi., 24. Mai 2017 um
>> 05:53
>> >> > Uhr:
>> >> >>
>> >> >> On Tue, May 23, 2017 at 1:39 AM, Christopher Schmidt
>> >> >> 
>> >> >> wrote:
>> >> >>>
>> >> >>> OK, seems that this works now.
>> >> >>>
>> >> >>> A couple of questions:
>> >> >>> - What do you think, are all these options necessary for Kafka?
>> >> >>
>> >> >>
>> >> >> I am not entirely certain what subset of options will make it work
>> as I
>> >> >> do
>> >> >> not understand the nature of failure with  Kafka and the default
>> >> >> gluster
>> >> >> configuration. It certainly needs further analysis to identify the
>> list
>> >> >> of
>> >> >> options necessary. Would it be possible for you to enable one option
>> >> >> after
>> >> >> the other and determine the configuration that ?
>> >> >>
>> >> >>
>> >> >>>
>> >> >>> - You wrote that there have to be kind of application profiles. So
>> to
>> >> >>> find out, which set of options work is currently a matter of
>> testing
>> >> >>> (and
>> >> >>> hope)? Or are there any experiences for MongoDB / ProstgreSQL /
>> >> >>> Zookeeper
>> >> >>> etc.?
>> >> >>
>> >> >>
>> >> >> Application profiles are work in progress. We have a few that are
>> >> >> focused
>> >> >> on use cases like VM storage, block storage etc. at the moment.
>> >> >>
>> >> >>>
>> >> >>> - I am using Heketi and Dynamik Storage Provisioning together with
>> >> >>> Kubernetes. Can I set this volume options somehow by default or by
>> >> >>> volume
>> >> >>> plugin?
>> >> >>
>> >> >>
>> >> >>
>> >> >> Adding Raghavendra and Michael to help address this query.
>> >> >
>> >> >
>> >> > For me it would be sufficient to disable some (or all) translators,
>> for
>> >> > all
>> >> > volumes that'll be created, somewhere here:
>> >> > https://github.com/gluster/gluster-containers/tree/master/CentOS
>> >> > This is the container used by the GlusterFS DaemonSet for Kubernetes.
>> >>
>> >> Work is in progress to give such option at volume plugin level. We
>> >> currently have a patch[1] in review for Heketi that allows users to
>> >> set Gluster options using heketi-cli instead of going into a Gluster
>> >> pod. Once this is in, we can add options in storage-class of
>> >> Kubernetes that pass down Gluster options for every volume created in
>> >> that storage-class.
>> >>
>> >> [1] https://github.com/heketi/heketi/pull/751
>> >>
>> >> Thanks,
>> >> Raghavendra Talur
>> >>
>> >> >
>> >> >>
>> >> >>
>> >> >> -Vijay
>> >> >>
>> >> >>
>> >> >>
>> >> >>>
>> >> >>>
>> >> >>> Thanks for you help... really appreciated.. Christopher
>> >> >>>
>> >> >>> Vijay Bellur  schrieb am Mo., 22. Mai 2017 um
>> >> >>> 16:41
>> >> >>> Uhr:
>> >> 
>> >>  Looks like a problem with caching. Can you please try by disabling
>> >>  all
>> >>  performance translators? The following configuration commands
>> would
>> >>  disable
>> >>  performance translators in the gluster client stack:
>> >> 
>> >>  gluster volume set  performance.quick-read off
>> >>  gluster volume set  performance.io-cache off
>> >>  gluster volume set  performance.write-behind off
>> >>  gluster volume set  performance.stat-prefetch off
>> >>  gluster volume set  performance.read-ahead off
>> >>  gluster volume set  performance.readdir-ahead off
>> >>  gluster volume set  performance.open-behind off
>> >>  gluster volume set  performance.client-io-threads off
>> >> 
>> >>  Thanks,
>> >>  Vijay
>> >> 
>> >> 
>> >> 
>> >>  On Mon, May 22, 2017 at 9:46 AM, Christopher Schmidt
>> >>   wrote:
>> >> >
>> >> > Hi all,
>> >> >
>> >> > has anyone ever successfully deployed a Kafka (Cluster) on
>> GlusterFS
>> >> > volumes?
>> >> >
>> >> > In my case it's a Kafka Kubernetes-StatefulSet and a

Re: [Gluster-users] Fwd: Re: VM going down

2017-05-25 Thread Alessandro Briosi
On 25/05/2017 15:24, Joe Julian wrote:
> You'd want to see the client log. I'm not sure where proxmox
> configures those to go.

This is all the content of glusterfs/cli.log (previous file cli.log.1 is
from 5 days ago)

[2017-05-25 06:21:30.736837] I [cli.c:728:main] 0-cli: Started running
gluster with version 3.8.11
[2017-05-25 06:21:30.787152] I [MSGID: 101190]
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2017-05-25 06:21:30.787186] I [socket.c:2403:socket_event_handler]
0-transport: disconnecting now
[2017-05-25 06:21:30.825593] I [input.c:31:cli_batch] 0-: Exiting with: 0
[2017-05-25 06:21:40.067379] I [cli.c:728:main] 0-cli: Started running
gluster with version 3.8.11
[2017-05-25 06:21:40.130303] I [MSGID: 101190]
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2017-05-25 06:21:40.130384] I [socket.c:2403:socket_event_handler]
0-transport: disconnecting now
[2017-05-25 06:21:41.268839] I [input.c:31:cli_batch] 0-: Exiting with: 0

Alessandro
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Failure while upgrading gluster to 3.10.1

2017-05-25 Thread Pawan Alwandi
Hello Atin,

Yes, glusterd on other instances are up and running.  Below is the
requested output on all the three hosts.

Host 1

# gluster peer status
Number of Peers: 2

Hostname: 192.168.0.7
Uuid: 5ec54b4f-f60c-48c6-9e55-95f2bb58f633
State: Peer in Cluster (Disconnected)

Hostname: 192.168.0.6
Uuid: 83e9a0b9-6bd5-483b-8516-d8928805ed95
State: Peer in Cluster (Disconnected)

# gluster volume status
Status of volume: shared
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.0.5:/data/exports/shared      49152     0          Y       2105
NFS Server on localhost                     2049      0          Y       2089
Self-heal Daemon on localhost               N/A       N/A        Y       2097

Task Status of Volume shared
--
There are no active volume tasks

Host 2

# gluster peer status
Number of Peers: 2

Hostname: 192.168.0.7
Uuid: 5ec54b4f-f60c-48c6-9e55-95f2bb58f633
State: Peer in Cluster (Connected)

Hostname: 192.168.0.5
Uuid: 7f2a6e11-2a53-4ab4-9ceb-8be6a9f2d073
State: Peer in Cluster (Connected)


# gluster volume status
Status of volume: shared
Gluster process                          Port   Online  Pid
------------------------------------------------------------
Brick 192.168.0.5:/data/exports/shared   49152  Y       2105
Brick 192.168.0.6:/data/exports/shared   49152  Y       2188
Brick 192.168.0.7:/data/exports/shared   49152  Y       2453
NFS Server on localhost                  2049   Y       2194
Self-heal Daemon on localhost            N/A    Y       2199
NFS Server on 192.168.0.5                2049   Y       2089
Self-heal Daemon on 192.168.0.5          N/A    Y       2097
NFS Server on 192.168.0.7                2049   Y       2458
Self-heal Daemon on 192.168.0.7          N/A    Y       2463

Task Status of Volume shared
--
There are no active volume tasks

Host 3

# gluster peer status
Number of Peers: 2

Hostname: 192.168.0.5
Uuid: 7f2a6e11-2a53-4ab4-9ceb-8be6a9f2d073
State: Peer in Cluster (Connected)

Hostname: 192.168.0.6
Uuid: 83e9a0b9-6bd5-483b-8516-d8928805ed95
State: Peer in Cluster (Connected)

# gluster volume status
Status of volume: shared
Gluster process                          Port   Online  Pid
------------------------------------------------------------
Brick 192.168.0.5:/data/exports/shared   49152  Y       2105
Brick 192.168.0.6:/data/exports/shared   49152  Y       2188
Brick 192.168.0.7:/data/exports/shared   49152  Y       2453
NFS Server on localhost                  2049   Y       2458
Self-heal Daemon on localhost            N/A    Y       2463
NFS Server on 192.168.0.6                2049   Y       2194
Self-heal Daemon on 192.168.0.6          N/A    Y       2199
NFS Server on 192.168.0.5                2049   Y       2089
Self-heal Daemon on 192.168.0.5          N/A    Y       2097

Task Status of Volume shared
--
There are no active volume tasks






On Wed, May 24, 2017 at 8:32 PM, Atin Mukherjee  wrote:

> Are the other glusterd instances up? Output of gluster peer status &
> gluster volume status please?
>
> On Wed, May 24, 2017 at 4:20 PM, Pawan Alwandi  wrote:
>
>> Thanks Atin,
>>
>> So I got gluster downgraded to 3.7.9 on host 1 and now have the glusterfs
>> and glusterfsd processes come up.  But I see the volume is mounted read
>> only.
>>
>> I see these being logged every 3s:
>>
>> [2017-05-24 10:45:44.440435] W [socket.c:852:__socket_keepalive]
>> 0-socket: failed to set keep idle -1 on socket 17, Invalid argument
>> [2017-05-24 10:45:44.440475] E [socket.c:2966:socket_connect]
>> 0-management: Failed to set keep-alive: Invalid argument
>> [2017-05-24 10:45:44.440734] W [socket.c:852:__socket_keepalive]
>> 0-socket: failed to set keep idle -1 on socket 20, Invalid argument
>> [2017-05-24 10:45:44.440754] E [socket.c:2966:socket_connect]
>> 0-management: Failed to set keep-alive: Invalid argument
>> [2017-05-24 10:45:44.441354] E [rpc-clnt.c:362:saved_frames_unwind] (-->
>> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x1a3)[0x7f767c46d483]
>> (--> 
>> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(saved_frames_unwind+0x1cf)[0x7f767c2383af]
>> (--> 
>> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f767c2384ce]
>> (--> 
>> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7e)[0x7f767c239c8e]
>> (--> 
>> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_notify+0x88)[0x7f767c23a4a8]
>> ) 0-management: forced unwinding frame type(GLUSTERD-DUMP) op(DUMP(1))
>> called at 2017-05-24 10:45:44.440945 (xid=0xbf)
>> [2017-05-24 10:45:44.441505] W 

Re: [Gluster-users] Fwd: Re: VM going down

2017-05-25 Thread Joe Julian
You'd want to see the client log. I'm not sure where proxmox configures those 
to go.
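On a stock fuse mount the client log usually lands under /var/log/glusterfs, named after the mount point; since Proxmox may put it elsewhere, a look for recently written logs is a reasonable first step (path is an assumption):

```
ls -lt /var/log/glusterfs/*.log | head
```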

On May 24, 2017 11:57:33 PM PDT, Alessandro Briosi  wrote:
>On 19/05/2017 17:27, Alessandro Briosi wrote:
>> On 12/05/2017 12:09, Alessandro Briosi wrote:
 You probably should open a bug so that we have all the
>troubleshooting
 and debugging details in one location. Once we find the problem we
>can
 move the bug to the right component.
   https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS

 HTH,
 Niels
>>> The thing is that when the VM is down and I check the logs there's
>nothing.
>>> Then when I start the VM the logs get populated with the seek error.
>>>
>>> Anyway I'll open a bug for this.
>>
>> Ok, as it happened again I have opened a bug:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1452766
>>
>> I now have started the vm with gdb (maybe I can find more
>information)
>>
>> In the logs I still have "No such file or directory" which at this
>> point seems to be the culprit of this (?)
>>
>> Alessandro
>
>It happened again and now I have at least a gdb log which tells me
>where
>the error is.
>
>I've attached the log to the bug.
>
>Logs strangely do not report any error, though the 2 VM disk files seem
>to be going through a heal process:
>
>Brick srvpve1g:/data/brick1/brick
>/images/101/vm-101-disk-2.qcow2 - Possibly undergoing heal
>
>/images/101/vm-101-disk-1.qcow2 - Possibly undergoing heal
>
>Status: Connected
>Number of entries: 2
>
>Brick srvpve2g:/data/brick1/brick
>/images/101/vm-101-disk-2.qcow2 - Possibly undergoing heal
>
>/images/101/vm-101-disk-1.qcow2 - Possibly undergoing heal
>
>Status: Connected
>Number of entries: 2
>
>Brick srvpve3g:/data/brick1/brick
>/images/101/vm-101-disk-2.qcow2 - Possibly undergoing heal
>
>/images/101/vm-101-disk-1.qcow2 - Possibly undergoing heal
>
>Status: Connected
>Number of entries: 2
>
>
>I really have no clue on why this is happening.
>Thanks for your help.
>
>Alessandro

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Distributed re-balance issue

2017-05-25 Thread Nithya Balachandran
On 24 May 2017 at 22:54, Mahdi Adnan  wrote:

> Well yes and no, when i start the re-balance and check it's status, it
> just tells me it completed the re-balance, but it really did not move any
> data and the volume is not evenly distributed.
>
> right now brick6 is full, brick 5 is going to be full in few hours or so.
>

An update on this - on further analysis it looked like the rebalance was
actually happening and files were being migrated. However, as the files were
large (2TB) and the rebalance status does not update the Rebalanced or Size
columns until a file migration is complete, it looked like nothing was
happening.
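For what it's worth, the per-node progress can still be polled while a large file is in flight; the counters simply don't move until a file completes (volume name taken from the volume info later in this thread):

```
# counters update only after each in-flight file finishes migrating
gluster volume rebalance ctvvols status
```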


> --
>
> Respectfully
> *Mahdi A. Mahdi*
>
> --
> *From:* Nithya Balachandran 
> *Sent:* Wednesday, May 24, 2017 8:16:53 PM
> *To:* Mahdi Adnan
> *Cc:* Mohammed Rafi K C; gluster-users@gluster.org
>
> *Subject:* Re: [Gluster-users] Distributed re-balance issue
>
>
>
> On 24 May 2017 at 22:45, Nithya Balachandran  wrote:
>
>>
>>
>> On 24 May 2017 at 21:55, Mahdi Adnan  wrote:
>>
>>> Hi,
>>>
>>>
>>> Thank you for your response.
>>>
>>> I have around 15 files, each is 2TB qcow.
>>>
>>> One brick reached 96% so i removed it with "brick remove" and waited
>>> until it goes for around 40% and stopped the removal process with brick
>>> remove stop.
>>>
>>> The issue is brick1 drain it's data to brick6 only, and when brick6
>>> reached around 90% i did the same thing as before and it drained the data
>>> to brick1 only.
>>>
>>> now brick6 reached 99% and i have only a few gigabytes left which will
>>> fill in the next half hour or so.
>>>
>>> attached are the logs for all 6 bricks.
>>>
>>> Hi,
>>
>> Just to clarify, did you run a rebalance (gluster volume rebalance 
>> start) or did you only run remove-brick  ?
>>
>> On re-reading your original email, I see you did run a rebalance. Did it
> complete? Also which bricks are full at the moment?
>
>
>>
>> --
>>>
>>> Respectfully
>>> *Mahdi A. Mahdi*
>>>
>>> --
>>> *From:* Nithya Balachandran 
>>> *Sent:* Wednesday, May 24, 2017 6:45:10 PM
>>> *To:* Mohammed Rafi K C
>>> *Cc:* Mahdi Adnan; gluster-users@gluster.org
>>> *Subject:* Re: [Gluster-users] Distributed re-balance issue
>>>
>>>
>>>
>>> On 24 May 2017 at 20:02, Mohammed Rafi K C  wrote:
>>>


 On 05/23/2017 08:53 PM, Mahdi Adnan wrote:

 Hi,


 I have a distributed volume with 6 bricks, each has 5TB, and it's
 hosting large qcow2 VM disks (I know it's not reliable but it's not important
 data)

 I started with 5 bricks and then added another one, started the re
 balance process, everything went well, but now im looking at the bricks
 free space and i found one brick is around 82% while others ranging from
 20% to 60%.

 The brick with the highest utilization is hosting more qcow2 disks than the
 other bricks, and whenever I start a rebalance it just completes in 0 seconds
 without moving any data.


 How much is your average file size in the cluster? And number of files
 (roughly) .


 What will happen with the brick became full ?

 Once brick usage goes beyond 90%, new files won't be created on that
 brick, but existing files can grow.


 Can i move data manually from one brick to the other ?


 No, it is not recommended; even though gluster will try to find the
 file, it may break.


 Why re balance not distributing data evenly on all bricks ?


 Rebalance works based on layout, so we need to see how the layouts are
 distributed. If one of your bricks has higher capacity, it will have a larger
 layout.


>>>
>>>
 That is correct. As Rafi said, the layout matters here. Can you please
 send across all the rebalance logs from all the 6 nodes?


>>> Nodes running CentOS 7.3

 Gluster 3.8.11


 Volume info;
 Volume Name: ctvvols
 Type: Distribute
 Volume ID: 1ecea912-510f-4079-b437-7398e9caa0eb
 Status: Started
 Snapshot Count: 0
 Number of Bricks: 6
 Transport-type: tcp
 Bricks:
 Brick1: ctv01:/vols/ctvvols
 Brick2: ctv02:/vols/ctvvols
 Brick3: ctv03:/vols/ctvvols
 Brick4: ctv04:/vols/ctvvols
 Brick5: ctv05:/vols/ctvvols
 Brick6: ctv06:/vols/ctvvols
 Options Reconfigured:
 nfs.disable: on
 performance.readdir-ahead: on
 transport.address-family: inet
 performance.quick-read: off
 performance.read-ahead: off
 performance.io-cache: off
 performance.stat-prefetch: off
 performance.low-prio-threads: 32
 network.remote-dio: enable
 cluster.eager-lock: enable
 cluster.quorum-type: none
 cluster.server-quorum-type: server
 cluster.data-self-heal-algorithm: full
 cluster.locking-scheme: granular

Re: [Gluster-users] GlusterFS and Kafka

2017-05-25 Thread Raghavendra Talur
On Thu, May 25, 2017 at 11:21 AM, Christopher Schmidt
 wrote:
> So this change of the Gluster Volume Plugin will make it into K8s 1.7 or
> 1.8. Unfortunately too late for me.
>
> Does anyone know how to disable performance translators by default?

Humble,

Do you know of any way Christopher can proceed here?

>
>
> Raghavendra Talur  wrote on Wed., May 24, 2017, 19:30:
>>
>> On Wed, May 24, 2017 at 4:08 PM, Christopher Schmidt 
>> wrote:
>> >
>> >
>> > Vijay Bellur  schrieb am Mi., 24. Mai 2017 um 05:53
>> > Uhr:
>> >>
>> >> On Tue, May 23, 2017 at 1:39 AM, Christopher Schmidt
>> >> 
>> >> wrote:
>> >>>
>> >>> OK, seems that this works now.
>> >>>
>> >>> A couple of questions:
>> >>> - What do you think, are all these options necessary for Kafka?
>> >>
>> >>
>> >> I am not entirely certain what subset of options will make it work as I
>> >> do
>> >> not understand the nature of failure with  Kafka and the default
>> >> gluster
>> >> configuration. It certainly needs further analysis to identify the list
>> >> of
>> >> options necessary. Would it be possible for you to enable one option
>> >> after
>> >> the other and determine the configuration that ?
>> >>
>> >>
>> >>>
>> >>> - You wrote that there have to be kind of application profiles. So to
>> >>> find out, which set of options work is currently a matter of testing
>> >>> (and
>> >>> hope)? Or are there any experiences for MongoDB / ProstgreSQL /
>> >>> Zookeeper
>> >>> etc.?
>> >>
>> >>
>> >> Application profiles are work in progress. We have a few that are
>> >> focused
>> >> on use cases like VM storage, block storage etc. at the moment.
>> >>
>> >>>
>> >>> - I am using Heketi and Dynamik Storage Provisioning together with
>> >>> Kubernetes. Can I set this volume options somehow by default or by
>> >>> volume
>> >>> plugin?
>> >>
>> >>
>> >>
>> >> Adding Raghavendra and Michael to help address this query.
>> >
>> >
>> > For me it would be sufficient to disable some (or all) translators, for
>> > all
>> > volumes that'll be created, somewhere here:
>> > https://github.com/gluster/gluster-containers/tree/master/CentOS
>> > This is the container used by the GlusterFS DaemonSet for Kubernetes.
>>
>> Work is in progress to give such option at volume plugin level. We
>> currently have a patch[1] in review for Heketi that allows users to
>> set Gluster options using heketi-cli instead of going into a Gluster
>> pod. Once this is in, we can add options in storage-class of
>> Kubernetes that pass down Gluster options for every volume created in
>> that storage-class.
>>
>> [1] https://github.com/heketi/heketi/pull/751
>>
>> Thanks,
>> Raghavendra Talur
>>
>> >
>> >>
>> >>
>> >> -Vijay
>> >>
>> >>
>> >>
>> >>>
>> >>>
>> >>> Thanks for you help... really appreciated.. Christopher
>> >>>
>> >>> Vijay Bellur  schrieb am Mo., 22. Mai 2017 um
>> >>> 16:41
>> >>> Uhr:
>> 
>>  Looks like a problem with caching. Can you please try by disabling
>>  all
>>  performance translators? The following configuration commands would
>>  disable
>>  performance translators in the gluster client stack:
>> 
>>  gluster volume set  performance.quick-read off
>>  gluster volume set  performance.io-cache off
>>  gluster volume set  performance.write-behind off
>>  gluster volume set  performance.stat-prefetch off
>>  gluster volume set  performance.read-ahead off
>>  gluster volume set  performance.readdir-ahead off
>>  gluster volume set  performance.open-behind off
>>  gluster volume set  performance.client-io-threads off
>> 
>>  Thanks,
>>  Vijay
>> 
>> 
>> 
>>  On Mon, May 22, 2017 at 9:46 AM, Christopher Schmidt
>>   wrote:
>> >
>> > Hi all,
>> >
>> > has anyone ever successfully deployed a Kafka (Cluster) on GlusterFS
>> > volumes?
>> >
>> > In my case it's a Kafka Kubernetes-StatefulSet and a Heketi
>> > GlusterFS.
>> > Needless to say that I am getting a lot of filesystem related
>> > exceptions like this one:
>> >
>> > Failed to read `log header` from file channel
>> > `sun.nio.ch.FileChannelImpl@67afa54a`. Expected to read 12 bytes,
>> > but
>> > reached end of file after reading 0 bytes. Started read from
>> > position
>> > 123065680.
>> >
>> > I limited the amount of exceptions with the
>> > log.flush.interval.messages=1 option, but not all...
>> >
>> > best Christopher
>> >
>> >
>> > ___
>> > Gluster-users mailing list
>> > Gluster-users@gluster.org
>> > http://lists.gluster.org/mailman/listinfo/gluster-users
>> 
>> 
>> >
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Fwd: Re: VM going down

2017-05-25 Thread Alessandro Briosi
On 19/05/2017 17:27, Alessandro Briosi wrote:
> On 12/05/2017 12:09, Alessandro Briosi wrote:
>>> You probably should open a bug so that we have all the troubleshooting
>>> and debugging details in one location. Once we find the problem we can
>>> move the bug to the right component.
>>>   https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
>>>
>>> HTH,
>>> Niels
>> The thing is that when the VM is down and I check the logs there's nothing.
>> Then when I start the VM the logs get populated with the seek error.
>>
>> Anyway I'll open a bug for this.
>
> Ok, as it happened again I have opened a bug:
> https://bugzilla.redhat.com/show_bug.cgi?id=1452766
>
> I now have started the vm with gdb (maybe I can find more information)
>
> In the logs I still have "No such file or directory" which at this
> point seems to be the culprit of this (?)
>
> Alessandro

It happened again and now I have at least a gdb log which tells me where
the error is.

I've attached the log to the bug.

Logs strangely do not report any error, though the 2 VM disk files seem
to be going through a heal process:

Brick srvpve1g:/data/brick1/brick
/images/101/vm-101-disk-2.qcow2 - Possibly undergoing heal

/images/101/vm-101-disk-1.qcow2 - Possibly undergoing heal

Status: Connected
Number of entries: 2

Brick srvpve2g:/data/brick1/brick
/images/101/vm-101-disk-2.qcow2 - Possibly undergoing heal

/images/101/vm-101-disk-1.qcow2 - Possibly undergoing heal

Status: Connected
Number of entries: 2

Brick srvpve3g:/data/brick1/brick
/images/101/vm-101-disk-2.qcow2 - Possibly undergoing heal

/images/101/vm-101-disk-1.qcow2 - Possibly undergoing heal

Status: Connected
Number of entries: 2


I really have no clue on why this is happening.
Thanks for your help.

Alessandro
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users