[Gluster-users] Heketi error: Server busy. Retry operation later.

2018-12-06 Thread Guillermo Alvarado
Hello, yesterday I tweeted my frustration and @YanivKaul
  suggested that I write to this list:

I installed OpenShift and I am using the INDEPENDENT MODE of GlusterFS to
provide persistent and dynamic storage. These are the vars I am using in
the OpenShift Ansible inventory:
openshift_storage_glusterfs_namespace=app-storage
openshift_storage_glusterfs_storageclass=true
openshift_storage_glusterfs_storageclass_default=false
openshift_storage_glusterfs_block_deploy=true
openshift_storage_glusterfs_block_host_vol_size=600
openshift_storage_glusterfs_block_storageclass=true
openshift_storage_glusterfs_block_storageclass_default=false
openshift_storage_glusterfs_is_native=false
openshift_storage_glusterfs_heketi_is_native=true
openshift_storage_glusterfs_heketi_executor=ssh
openshift_storage_glusterfs_heketi_ssh_port=22
openshift_storage_glusterfs_heketi_ssh_user=ocpadmin
openshift_storage_glusterfs_heketi_ssh_sudo=true
openshift_storage_glusterfs_heketi_ssh_keyfile="/home/ocpadmin/.ssh/id_rsa"
openshift_storage_glusterfs_registry_namespace=infra-storage
openshift_storage_glusterfs_registry_block_deploy=true
openshift_storage_glusterfs_registry_block_host_vol_size=600
openshift_storage_glusterfs_registry_block_storageclass=true
openshift_storage_glusterfs_registry_block_storageclass_default=true
openshift_storage_glusterfs_registry_is_native=false
openshift_storage_glusterfs_registry_heketi_is_native=true
openshift_storage_glusterfs_registry_heketi_executor=ssh
openshift_storage_glusterfs_registry_heketi_ssh_port=22
openshift_storage_glusterfs_registry_heketi_ssh_user=ocpadmin
openshift_storage_glusterfs_registry_heketi_ssh_sudo=true
openshift_storage_glusterfs_registry_heketi_ssh_keyfile="/home/ocpadmin/.ssh/id_rsa"
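
(For reference, the objects created by these vars can be checked with standard
oc commands once the playbook has run; the namespaces come from the inventory
above and the exact object names may differ on other installs:)

$ oc get storageclass
$ oc get pods -n app-storage      # heketi and block-provisioner pods; gluster itself runs externally here
$ oc get pods -n infra-storage    # heketi pod for the registry-side heketi
$ oc get svc -n app-storage       # the heketi service referenced by the resturl below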

When I try to create a PVC with the following StorageClass:

$ oc describe sc glusterfs-storage
Name:  glusterfs-storage
IsDefaultClass:No
Annotations:   
Provisioner:   kubernetes.io/glusterfs
Parameters:    resturl=http://heketi-storage.app-storage.svc:8080,restuser=admin,secretName=heketi-storage-admin-secret,secretNamespace=app-storage
AllowVolumeExpansion:  
MountOptions:  
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events:
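
(A minimal PVC against this class looks roughly like the following; the claim
name and the requested size are illustrative:)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim               # illustrative name
spec:
  storageClassName: glusterfs-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi               # illustrative size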

With that class I am able to create the PVC, but with the following block classes:

$ oc describe sc glusterfs-registry-block
Name:  glusterfs-registry-block
IsDefaultClass:Yes
Annotations:   storageclass.kubernetes.io/is-default-class=true
Provisioner:   gluster.org/glusterblock
Parameters:    chapauthenabled=true,hacount=3,restsecretname=heketi-registry-admin-secret-block,restsecretnamespace=infra-storage,resturl=http://heketi-registry.infra-storage.svc:8080,restuser=admin
AllowVolumeExpansion:  
MountOptions:  
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events:


and with this one:


$ oc describe sc glusterfs-storage-block
Name:  glusterfs-storage-block
IsDefaultClass:No
Annotations:   
Provisioner:   gluster.org/glusterblock
Parameters:    chapauthenabled=true,hacount=3,restsecretname=heketi-storage-admin-secret-block,restsecretnamespace=app-storage,resturl=http://heketi-storage.app-storage.svc:8080,restuser=admin
AllowVolumeExpansion:  
MountOptions:  
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events:


I am getting this error message when I try to create a PVC: *Failed to
provision volume with StorageClass "glusterfs-registry-block": failed to
create volume: heketi block volume creation failed: [heketi] failed to
create volume: Server busy. Retry operation later.*

Heketi is containerized and I am only trying to create one volume at a time, so
I do not understand why I am getting that message.
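
(As far as I understand, heketi answers "Server busy. Retry operation later."
when its operation throttle believes operations are still in flight, for
example stale pending operations left in its database. A rough way to check,
assuming the dc name matches the resturl above and that the pod sets
HEKETI_ADMIN_KEY the way openshift-ansible usually does:)

$ oc logs -n infra-storage dc/heketi-registry --tail=50
$ oc rsh -n infra-storage dc/heketi-registry
sh-4.2$ heketi-cli --user admin --secret "$HEKETI_ADMIN_KEY" \
        server operations info    # needs heketi >= 7; reports in-flight/stale operations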

Thanks in advance
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Community Meeting, Tomorrow - Dec 5 at 15:00 UTC

2018-12-06 Thread Amye Scavarda
The chat logs aren't meaningful for this, but we can look into options for
logging longer term.
Was anyone actually using the logs?
- amye

On Wed, Dec 5, 2018 at 11:43 PM Vijay Bellur  wrote:

> Thank you, Amye!
>
> Do we perhaps have the entire chat log somewhere? Or do we need to look
> for alternatives after the botbot.me change?
>
> Regards,
> Vijay
>
> On Wed, Dec 5, 2018 at 8:51 PM Amye Scavarda  wrote:
>
>> So the agenda for today's meeting has some questions, but unfortunately,
>> we don't have the maintainers group online to be able to assist.
>> Here's what's come up:
>>
>> - gluster-prometheus-exporter: the git tag still refers to 0.3dev; when will a
>> final first release be available? Which component metrics are targeted for that
>> release and the next?
>> - gluster-block: benchmark reporting: we couldn't find anything online; what
>> expectations should we have?
>> - glusterd2: is there any timeline for when it will be production ready?
>> - glusterfs: is running GlusterFS on high-spec hardware a waste of resources,
>> or are there plans for optimizations? NVDIMM support? NVMe drives, etc.?
>> - geo-replication: implementation and monitoring seem to be challenging; what
>> is the vision?
>> - heketi: what features are coming up?
>>
>> Putting this on the mailing list for larger visibility.
>> - amye
>>
>> On Tue, Dec 4, 2018 at 10:39 AM Amye Scavarda  wrote:
>>
>>> https://bit.ly/gluster-community-meetings has an agenda, but should we
>>> postpone these to January?
>>>
>>> If we were to replace the Community Meetings with a video call to
>>> present current work, would that be more effective? Would you rather just
>>> do office hours for a particular feature?
>>>
>>> Drop a note on the list about what you'd like to see happen with the community
>>> meetings and the overall Gluster community. :D
>>>
>>> - amye
>>>
>>>
>>> --
>>> Amye Scavarda | a...@redhat.com | Gluster Community Lead
>>>
>>
>>
>> --
>> Amye Scavarda | a...@redhat.com | Gluster Community Lead
>
>

-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users