Re: unsupported volume type after update to 1.1.3

2016-04-24 Thread David McCormick
> On Tue, Feb 23, 2016 at 9:00 AM, Mark Turansky wrote:
>
>> There is no recycler for glusterfs, so "no volume plugin matched" would
>> occur when the volume is being reclaimed by the cluster after its release
>> from a claim.

Hi

This is a problem for me too, as I have provisioned a limited number of
glusterfs volumes on our pilot platform at IG, and I don't think that manually
recycling failed volumes is a workable solution for me.

I've come up with an interim workaround whilst we wait for the plugin to be
developed, or the more exciting fully automatic provisioning to arrive.  The
temporary solution is to run a Docker container that picks up gluster volumes
that are in the Failed state, wipes their files and returns them to the pool
of available volumes.  I've used the glusterfs-centos image as a base and
written a simple shell script to perform the recycling process.  The container
is intended to be deployed by cluster-admins and is available on Docker Hub at
https://hub.docker.com/r/davemccormickig/gluster-recycler/, with the project
files on GitHub at https://github.com/davemccormickig/gluster-recycler.
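
The core of the recycling loop is roughly the following (a simplified sketch,
not the actual script from the repository; the gluster host, mount point and
oc output flags are illustrative and may need adjusting for your version):

#!/bin/bash
# Sketch of a gluster PV recycler: wipe Failed glusterfs PVs and re-create them.
GLUSTER_HOST=gluster-server.example.com   # assumption: one of your gluster endpoints
MNT=/mnt/recycle

while true; do
  for pv in $(oc get pv -o name | cut -d/ -f2); do
    phase=$(oc get pv "$pv" -o jsonpath='{.status.phase}')
    [ "$phase" = "Failed" ] || continue

    # Wipe the released volume so the next claimant starts clean.
    gvol=$(oc get pv "$pv" -o jsonpath='{.spec.glusterfs.path}')
    mkdir -p "$MNT"
    mount -t glusterfs "$GLUSTER_HOST:$gvol" "$MNT"
    rm -rf "$MNT"/*
    umount "$MNT"

    # Re-create the PV (minus claimRef and status) so it becomes Available again.
    oc get pv "$pv" -o json \
      | python -c 'import json,sys; o=json.load(sys.stdin); o["spec"].pop("claimRef",None); o.pop("status",None); print(json.dumps(o))' \
      > /tmp/"$pv".json
    oc delete pv "$pv"
    oc create -f /tmp/"$pv".json
  done
  sleep 300
done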

I hope this is useful to other admins trialling glusterfs volumes with their
OpenShift clusters.

regards



Dave




Re: unsupported volume type after update to 1.1.3

2016-02-23 Thread Mark Turansky
Hm.  This is the worst kind of good news for a developer: I'm glad it works,
but I don't know why; at least the Windows reboot trick worked again.

On Tue, Feb 23, 2016 at 5:29 PM, Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com> wrote:

> Hmm, I had to restart origin-node on the scheduled node, and now the pod
> is running.
>


Re: unsupported volume type after update to 1.1.3

2016-02-23 Thread Philippe Lafoucrière
Hmm, I had to restart origin-node on the scheduled node, and now the pod is
running.
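
For reference, on a systemd-based Origin node the restart is simply the
following (the exact service name depends on how the node was installed,
e.g. origin-node vs. atomic-openshift-node):

# on the affected node
sudo systemctl restart origin-node

# then watch the pod get scheduled and start
oc get pods -n openshift-infra -w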


Re: unsupported volume type after update to 1.1.3

2016-02-23 Thread Philippe Lafoucrière
>
> For now, yes.  We're looking at ways to make dynamic provisioning more
> widely available, even outside of a cloud environment.  We'd prefer to not
> implement more recyclers and instead make more provisioners.
>

Ok thanks, the PV is Bound again:

status:
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  capacity:
    storage: 20Gi
  phase: Bound

Anyway, the pods seem to be waiting for it:

NAME                         READY     STATUS             RESTARTS   AGE
hawkular-cassandra-1-7im2u   0/1       Pending            0          42m
hawkular-metrics-n4iv3       0/1       CrashLoopBackOff   9          42m
heapster-m66tt               0/1       Pending            0          42m



And describe doesn't give more info:


Name:   hawkular-cassandra-1-7im2u
Namespace:  openshift-infra
Image(s):   docker.io/openshift/origin-metrics-cassandra:latest
Node:   node-1
Labels:
metrics-infra=hawkular-cassandra,name=hawkular-cassandra-1,type=hawkular-cassandra
Status: Pending
Reason:
Message:
IP:
Controllers:ReplicationController/hawkular-cassandra-1
Containers:
  hawkular-cassandra-1:
Container ID:
Image:  docker.io/openshift/origin-metrics-cassandra:latest
Image ID:
Command:
  /opt/apache-cassandra/bin/cassandra-docker.sh
  --cluster_name=hawkular-metrics
  --data_volume=/cassandra_data
  --internode_encryption=all
  --require_node_auth=true
  --enable_client_encryption=true
  --require_client_auth=true
  --keystore_file=/secret/cassandra.keystore
  --keystore_password_file=/secret/cassandra.keystore.password
  --truststore_file=/secret/cassandra.truststore
  --truststore_password_file=/secret/cassandra.truststore.password
  --cassandra_pem_file=/secret/cassandra.pem
QoS Tier:
  cpu:  BestEffort
  memory:   BestEffort
State:  Waiting
Ready:  False
Restart Count:  0
Environment Variables:
  CASSANDRA_MASTER: true
  POD_NAMESPACE:openshift-infra (v1:metadata.namespace)
Volumes:
  cassandra-data:
Type:   PersistentVolumeClaim (a reference to a
PersistentVolumeClaim in the same namespace)
ClaimName:  metrics-cassandra-1
ReadOnly:   false
  hawkular-cassandra-secrets:
Type:   Secret (a secret that should populate this volume)
SecretName: hawkular-cassandra-secrets
  cassandra-token-sciym:
Type:   Secret (a secret that should populate this volume)
SecretName: cassandra-token-sciym
Events:
  FirstSeen   LastSeen   Count   From                   SubobjectPath   Type     Reason      Message
  ---------   --------   -----   ----                   -------------   ----     ------      -------
  43m         43m        1       {default-scheduler }                   Normal   Scheduled   Successfully assigned hawkular-cassandra-1-7im2u to node-1


Thanks
Philippe


Re: unsupported volume type after update to 1.1.3

2016-02-23 Thread Mark Turansky
On Tue, Feb 23, 2016 at 9:25 AM, Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com> wrote:

> On Tue, Feb 23, 2016 at 9:00 AM, Mark Turansky 
> wrote:
>
>> There is no recycler for glusterfs, so "no volume plugin matched" would
>> occur when the volume is being reclaimed by the cluster after its release
>> from a claim.
>>
>
> Yes, the PVC was probably removed when the metrics-deploy-template was used
> to replace cassandra, heapster, etc.
> So I have to manually "recycle" the PV? (i.e. delete and recreate it)
>


For now, yes.  We're looking at ways to make dynamic provisioning more
widely available, even outside of a cloud environment.  We'd prefer to not
implement more recyclers and instead make more provisioners.
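
Concretely, a manual recycle of the PV from earlier in this thread would look
something like this (a sketch; <gluster-host> stands for one of the endpoints
behind the glusterfs-cluster service):

# Export the PV spec for re-use (or use your original definition file).
oc get pv pv-storage-1 -o yaml > pv-storage-1.yaml
# Edit pv-storage-1.yaml: remove the spec.claimRef block and the status section.

# Wipe the data on the underlying gluster volume.
mount -t glusterfs <gluster-host>:pv-staging-gemnasium-20G-2 /mnt/recycle
rm -rf /mnt/recycle/*
umount /mnt/recycle

# Delete and re-create the PV so it returns to the Available pool.
oc delete pv pv-storage-1
oc create -f pv-storage-1.yaml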


Re: unsupported volume type after update to 1.1.3

2016-02-23 Thread Philippe Lafoucrière
On Tue, Feb 23, 2016 at 9:00 AM, Mark Turansky  wrote:

> There is no recycler for glusterfs, so "no volume plugin matched" would
> occur when the volume is being reclaimed by the cluster after its release
> from a claim.
>

Yes, the PVC was probably removed when the metrics-deploy-template was used
to replace cassandra, heapster, etc.
So I have to manually "recycle" the PV? (i.e. delete and recreate it)


Re: unsupported volume type after update to 1.1.3

2016-02-23 Thread Mark Turansky
Hi Philippe,

Has the claim for this volume been deleted?

There is no recycler for glusterfs, so "no volume plugin matched" would
occur when the volume is being reclaimed by the cluster after its release
from a claim.

Mark

On Tue, Feb 23, 2016 at 8:46 AM, Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com> wrote:

> Hi,
>
> We have a volume with status = "Failed" after upgrading to 1.1.3.
> All our volumes are mounted through glusterfs, and all the others are
> fine, the issue is just with one of them:
>
> Name:   pv-storage-1
> Labels: 
> Status: Failed
> Claim:  openshift-infra/metrics-cassandra-1
> Reclaim Policy: Recycle
> Access Modes:   RWO,RWX
> Capacity:   20Gi
> Message:no volume plugin matched
> Source:
> Type:   Glusterfs (a Glusterfs mount on the host that
> shares a pod's lifetime)
> EndpointsName:  glusterfs-cluster
> Path:   pv-staging-gemnasium-20G-2
> ReadOnly:   false
>
>
> /sbin/mount.glusterfs is available on all nodes, and I can mount the
> volume by hand (everything was working fine before the update).
>
> Any ideas on how to fix this?
> Thanks
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users