Re: [Openstack-operators] OpenStack mitaka using swift as backend for glance

2016-07-19 Thread Michael Stang
Hi Lucas,
 
yes, I used this ID from the command for both, and I also have the proxy at 8080 on the
controller. Here is the final configuration I ended up with; maybe it helps you:
 
In the glance-api.conf:
 

[glance_store]
stores = file,http,swift
default_store = swift
filesystem_store_datadir = /var/lib/glance/images/
swift_store_create_container_on_put = True
swift_store_container = glance
swift_store_large_object_size = 20480
swift_store_large_object_chunk_size = 200
swift_enable_snet = False
default_swift_reference = ref1
swift_store_config_file = /etc/glance/glance-swift-store.conf

 
and in the glance-swift-store.conf:
 

[ref1]
user_domain_id = 
project_domain_id = 
auth_version = 3
auth_address = http://controller:5000/v3
user = service:glance
key = 
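A quick way to catch missing or empty entries in this file (a sketch; the required-key list below is an assumption based on the Keystone v3 settings discussed in this thread, not an authoritative schema):

```shell
# check_swift_ref FILE: print each expected key that is absent from, or
# has an empty value in, the glance swift reference file.
check_swift_ref() {
    for k in auth_version auth_address user key user_domain_id project_domain_id; do
        grep -Eq "^[[:space:]]*$k[[:space:]]*=[[:space:]]*[^[:space:]]" "$1" || echo "$k"
    done
}
```

For example, `check_swift_ref /etc/glance/glance-swift-store.conf` printing `user_domain_id` would point straight at an empty value of the kind that caused the original error.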




Kind regards,

Michael

 

 

> Lucas Di Paola wrote on 19 July 2016 at 22:49:
>  Hi Michael, 
>   
>  I am having exactly the same issue, getting the error that you mentioned in
> your first email. I tried replacing project_domain_id's and user_domain_id's
> value using the , but I had no luck, still getting the same error. 
>  Did you add the ID obtained from running the following openstack command?
> "openstack domain list". In that case, for both parameters, did you use the
> same ID? 
>   
>  +----+---------+---------+----------------+
>  | ID | Name    | Enabled | Description    |
>  +----+---------+---------+----------------+
>  |    | default | True    | Default Domain |
>  +----+---------+---------+----------------+
>   
>  Regarding your issue, do you have the Proxy Server listening on port 8080
> on the controller node?
>   
> 
>  2016-07-19 11:21 GMT-03:00 Michael Stang <michael.st...@dhbw-mannheim.de>:
>> >Hi Sam, Hi all,
> > 
> >fixed the problem.  I had used
> > 
> >project_domain_id = default
> >user_domain_id = default
> > 
> >instead I had to use
> > 
> >project_domain_id = 
> >user_domain_id = 
> > 
> > to make it work. Now I can store images over glance in swift. The only
> > problem I have now is that every user can upload images but only "glance"
> > can delete them; when I try this as another user in the project, the
> > glance-api.log shows
> > 
> > ClientException: Object HEAD failed:
> > http://controller:8080/v1/glance/  403 Forbidden
> > 
> > 
> > don't know if something is still wrong or if this might be a bug?
> > 
> > 
> >Kind regards,
> >Michael
> > 
> > 
> > 
> > 
> > 
> > > Michael Stang <michael.st...@dhbw-mannheim.de> wrote on 18 July 2016 at 08:44:
> > > 
> > > 
> > > Hi Sam,
> > >  
> > > thank you for your answer.
> > >  
> > > I had a look; the swift store endpoint is listed 3 times in
> > > keystone: publicurl, admin and internal endpoint. To try, I also set it in
> > > the glance-api.conf:
> > >  
> > > swift_store_endpoint = http://controller:8080/v1/
> > >  
> > > I also tried
> > >  
> > > swift_store_endpoint = http://controller:8080/v1/AUTH_%(tenant_id)s
> > >  
> > > but both gave me the same result as before. Is this the right endpoint
> > > url for swift? In which config file and with what option do I have to
> > > enter it in the glance configuration?
> > >  
> > >  
> > > Thank you and kind regards,
> > > Michael
> > >  
> > >  
> > > 
> > > > Sam Morrison <sorri...@gmail.com> wrote on 18 July 2016 at 01:41:
> > > > 
> > > >  Hi Michael,
> > > >   
> > > >  This would indicate that glance can’t find the swift endpoint in
> > > > the keystone catalog.
> > > >   
> > > >  You can either add it to the catalog or specify the swift url in
> > > > the config.
> > > >   
> > > >  Cheers,
> > > >  Sam
> > > >   
> > > > 
> > > > 
> > > > > On 15 Jul 2016, at 9:07 PM, Michael Stang <michael.st...@dhbw-mannheim.de> wrote:
> > > > > 
> > > > >  Hi everyone,
> > > > >   
> > > > >  I tried to setup swift as backend for glance in our new
> > > > > mitaka installation. I used this in the glance-api.conf
> > > > >   
> > > > > 
> > > > >  [glance_store]
> > > > >  stores = swift
> > > > >  default_store = swift
> > > > >  swift_store_create_container_on_put = True
> > > > >  swift_store_region = RegionOne
> > > > >  default_swift_reference = ref1
> > > > >  swift_store_config_file = /etc/glance/glance-swift-store.conf
> > > > > 
> > > > >   
> > > > > 
> > > > >  and in the glance-swift-store.conf this
> > > > > 
> > > > >  [ref1]
> > > > >  auth_ver

Re: [Openstack-operators] PCI Passthrough issues

2016-07-19 Thread Blair Bethwaite
Thanks for the confirmation Joe!

On 20 July 2016 at 12:19, Joe Topjian  wrote:
> Hi Blair,
>
> We only updated qemu. We're running the version of libvirt from the Kilo
> cloudarchive.
>
> We've been in production with our K80s for around two weeks now and have had
> several users report success.
>
> Thanks,
> Joe
>
> On Tue, Jul 19, 2016 at 5:06 PM, Blair Bethwaite 
> wrote:
>>
>> Hilariously (or not!) we finally hit the same issue last week once
>> folks actually started trying to do something (other than build and
>> load drivers) with the K80s we're passing through. This
>>
>> https://devtalk.nvidia.com/default/topic/850833/pci-passthrough-kvm-for-cuda-usage/
>> is the best discussion of the issue I've found so far, haven't tracked
>> down an actual bug yet though. I wonder whether it has something to do
>> with the memory size of the device, as we've been happy for a long
>> time with other NVIDIA GPUs (GRID K1, K2, M2070, ...).
>>
>> Jon, when you grabbed Mitaka Qemu, did you also update libvirt? We're
>> just working through this and have tried upgrading both but are
>> hitting some issues with Nova and Neutron on the compute nodes,
>> thinking it may be libvirt related, but debug isn't helping much yet.
>>
>> Cheers,
>>
>> On 8 July 2016 at 00:54, Jonathan Proulx  wrote:
>> > On Thu, Jul 07, 2016 at 11:13:29AM +1000, Blair Bethwaite wrote:
>> > :Jon,
>> > :
>> > :Awesome, thanks for sharing. We've just run into an issue with SRIOV
>> > :VF passthrough that sounds like it might be the same problem (device
>> > :disappearing after a reboot), but haven't yet investigated deeply -
>> > :this will help with somewhere to start!
>> >
>> > :By the way, the nouveau mention was because we had missed it on some
>> > :K80 hypervisors recently and seen passthrough apparently work, but
>> > :then the NVIDIA drivers would not build in the guest as they claimed
>> > :they could not find a supported device (despite the GPU being visible
>> > :on the PCI bus).
>> >
>> > Definitely sage advice!
>> >
>> > :I have also heard passing mention of requiring qemu
>> > :2.3+ but don't have any specific details of the related issue.
>> >
>> > I didn't do a bisection but with qemu 2.2 (from ubuntu cloudarchive
>> > kilo) I was sad and with 2.5 (from ubuntu cloudarchive mitaka but
>> > installed on a kilo hypervisor) I am working.
>> >
>> > Thanks,
>> > -Jon
>> >
>> >
>> > :Cheers,
>> > :
>> > :On 7 July 2016 at 08:13, Jonathan Proulx  wrote:
>> > :> On Wed, Jul 06, 2016 at 12:32:26PM -0400, Jonathan D. Proulx wrote:
>> > :> :
>> > :> :I do have an odd remaining issue where I can run cuda jobs in the vm
>> > :> :but snapshots fail and after pause (for snapshotting) the pci device
>> > :> :can't be reattached (which is where i think it deletes the snapshot
>> > :> :it took).  Got same issue with 3.16 and 4.4 kernels.
>> > :> :
>> > :> :Not very well categorized yet, but I'm hoping it's because the VM I
>> > :> :was hacking on had it's libvirt.xml written out with the older qemu
>> > :> :maybe?  It had been through a couple reboots of the physical system
>> > :> :though.
>> > :> :
>> > :> :Currently building a fresh instance and bashing more keys...
>> > :>
>> > :> After an ugly bout of bashing I've solved my failing snapshot issue,
>> > :> which I'll post here in hopes of saving someone else.
>> > :>
>> > :> Short version:
>> > :>
>> > :> add "/dev/vfio/vfio rw," to
>> > /etc/apparmor.d/abstractions/libvirt-qemu
>> > :> add "ulimit -l unlimited" to /etc/init/libvirt-bin.conf
>> > :>
>> > :> Longer version:
>> > :>
>> > :> What was happening.
>> > :>
>> > :> * send snapshot request
>> > :> * instance pauses while snapshot is pending
>> > :> * instance attempts to resume
>> > :> * fails to reattach pci device
>> > :>   * nova-compute.log
>> > :> Exception during message handling: internal error: unable to
>> > execute QEMU command 'device_add': Device initialization failed
>> > :>
>> > :>   * qemu/.log
>> > :> vfio: failed to open /dev/vfio/vfio: Permission denied
>> > :> vfio: failed to setup container for group 48
>> > :> vfio: failed to get group 48
>> > :> * snapshot disappears
>> > :> * instance resumes but without passed through device (hard reboot
>> > :> reattaches)
>> > :>
>> > :> seeing permission denied I thought it would be an easy fix, but:
>> > :>
>> > :> # ls -l /dev/vfio/vfio
>> > :> crw-rw-rw- 1 root root 10, 196 Jul  6 14:05 /dev/vfio/vfio
>> > :>
>> > :> so I'm guessing I'm in apparmor hell; I tried adding "/dev/vfio/vfio
>> > :> rw," to /etc/apparmor.d/abstractions/libvirt-qemu, rebooted the
>> > :> hypervisor, and tried again, which got me a different libvirt error
>> > :> set:
>> > :>
>> > :> VFIO_MAP_DMA: -12
>> > :> vfio_dma_map(0x5633a5fa69b0, 0x0, 0xa, 0x7f4e7be0) = -12
>> > (Cannot allocate memory)
>> > :>
>> > :> kern.log (and thus dmesg) showing:
>> > :> vfio_pin_pages: RLIMIT_MEMLOCK (65536) exceeded
>> > :>
>> > :> Getting rid of this one required inserting 'ulimit -l unlimite

Re: [Openstack-operators] PCI Passthrough issues

2016-07-19 Thread Joe Topjian
Hi Blair,

We only updated qemu. We're running the version of libvirt from the Kilo
cloudarchive.

We've been in production with our K80s for around two weeks now and have
had several users report success.

Thanks,
Joe

On Tue, Jul 19, 2016 at 5:06 PM, Blair Bethwaite 
wrote:

> Hilariously (or not!) we finally hit the same issue last week once
> folks actually started trying to do something (other than build and
> load drivers) with the K80s we're passing through. This
>
> https://devtalk.nvidia.com/default/topic/850833/pci-passthrough-kvm-for-cuda-usage/
> is the best discussion of the issue I've found so far, haven't tracked
> down an actual bug yet though. I wonder whether it has something to do
> with the memory size of the device, as we've been happy for a long
> time with other NVIDIA GPUs (GRID K1, K2, M2070, ...).
>
> Jon, when you grabbed Mitaka Qemu, did you also update libvirt? We're
> just working through this and have tried upgrading both but are
> hitting some issues with Nova and Neutron on the compute nodes,
> thinking it may be libvirt related, but debug isn't helping much yet.
>
> Cheers,
>
> On 8 July 2016 at 00:54, Jonathan Proulx  wrote:
> > On Thu, Jul 07, 2016 at 11:13:29AM +1000, Blair Bethwaite wrote:
> > :Jon,
> > :
> > :Awesome, thanks for sharing. We've just run into an issue with SRIOV
> > :VF passthrough that sounds like it might be the same problem (device
> > :disappearing after a reboot), but haven't yet investigated deeply -
> > :this will help with somewhere to start!
> >
> > :By the way, the nouveau mention was because we had missed it on some
> > :K80 hypervisors recently and seen passthrough apparently work, but
> > :then the NVIDIA drivers would not build in the guest as they claimed
> > :they could not find a supported device (despite the GPU being visible
> > :on the PCI bus).
> >
> > Definitely sage advice!
> >
> > :I have also heard passing mention of requiring qemu
> > :2.3+ but don't have any specific details of the related issue.
> >
> > I didn't do a bisection but with qemu 2.2 (from ubuntu cloudarchive
> > kilo) I was sad and with 2.5 (from ubuntu cloudarchive mitaka but
> > installed on a kilo hypervisor) I am working.
> >
> > Thanks,
> > -Jon
> >
> >
> > :Cheers,
> > :
> > :On 7 July 2016 at 08:13, Jonathan Proulx  wrote:
> > :> On Wed, Jul 06, 2016 at 12:32:26PM -0400, Jonathan D. Proulx wrote:
> > :> :
> > :> :I do have an odd remaining issue where I can run cuda jobs in the vm
> > :> :but snapshots fail and after pause (for snapshotting) the pci device
> > :> :can't be reattached (which is where i think it deletes the snapshot
> > :> :it took).  Got same issue with 3.16 and 4.4 kernels.
> > :> :
> > :> :Not very well categorized yet, but I'm hoping it's because the VM I
> > :> :was hacking on had it's libvirt.xml written out with the older qemu
> > :> :maybe?  It had been through a couple reboots of the physical system
> > :> :though.
> > :> :
> > :> :Currently building a fresh instance and bashing more keys...
> > :>
> > :> After an ugly bout of bashing I've solved my failing snapshot issue,
> > :> which I'll post here in hopes of saving someone else.
> > :>
> > :> Short version:
> > :>
> > :> add "/dev/vfio/vfio rw," to  /etc/apparmor.d/abstractions/libvirt-qemu
> > :> add "ulimit -l unlimited" to /etc/init/libvirt-bin.conf
> > :>
> > :> Longer version:
> > :>
> > :> What was happening.
> > :>
> > :> * send snapshot request
> > :> * instance pauses while snapshot is pending
> > :> * instance attempts to resume
> > :> * fails to reattach pci device
> > :>   * nova-compute.log
> > :> Exception during message handling: internal error: unable to
> execute QEMU command 'device_add': Device initialization failed
> > :>
> > :>   * qemu/.log
> > :> vfio: failed to open /dev/vfio/vfio: Permission denied
> > :> vfio: failed to setup container for group 48
> > :> vfio: failed to get group 48
> > :> * snapshot disappears
> > :> * instance resumes but without passed through device (hard reboot
> > :> reattaches)
> > :>
> > :> seeing permission denied I thought it would be an easy fix, but:
> > :>
> > :> # ls -l /dev/vfio/vfio
> > :> crw-rw-rw- 1 root root 10, 196 Jul  6 14:05 /dev/vfio/vfio
> > :>
> > :> so I'm guessing I'm in apparmor hell; I tried adding "/dev/vfio/vfio
> > :> rw," to /etc/apparmor.d/abstractions/libvirt-qemu, rebooted the
> > :> hypervisor, and tried again, which got me a different libvirt error
> > :> set:
> > :>
> > :> VFIO_MAP_DMA: -12
> > :> vfio_dma_map(0x5633a5fa69b0, 0x0, 0xa, 0x7f4e7be0) = -12
> (Cannot allocate memory)
> > :>
> > :> kern.log (and thus dmesg) showing:
> > :> vfio_pin_pages: RLIMIT_MEMLOCK (65536) exceeded
> > :>
> > :> Getting rid of this one required inserting 'ulimit -l unlimited' into
> > :> /etc/init/libvirt-bin.conf in the 'script' section:
> > :>
> > :> 
> > :> script
> > :> [ -r /etc/default/libvirt-bin ] && . /etc/default/libvirt-bin
> > :> ulimit -l unlimited
> > :>  

Re: [Openstack-operators] PCI Passthrough issues

2016-07-19 Thread Blair Bethwaite
Hilariously (or not!) we finally hit the same issue last week once
folks actually started trying to do something (other than build and
load drivers) with the K80s we're passing through. This
https://devtalk.nvidia.com/default/topic/850833/pci-passthrough-kvm-for-cuda-usage/
is the best discussion of the issue I've found so far, haven't tracked
down an actual bug yet though. I wonder whether it has something to do
with the memory size of the device, as we've been happy for a long
time with other NVIDIA GPUs (GRID K1, K2, M2070, ...).

Jon, when you grabbed Mitaka Qemu, did you also update libvirt? We're
just working through this and have tried upgrading both but are
hitting some issues with Nova and Neutron on the compute nodes,
thinking it may be libvirt related, but debug isn't helping much yet.
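Since the qemu (2.2 vs 2.5) and libvirt versions keep coming up in this thread, a quick sketch for recording what a compute node is actually running (package names assume Ubuntu cloudarchive; adjust for your distro):

```shell
# Record the virtualization stack versions on a hypervisor, so
# passthrough behaviour can be correlated with qemu/libvirt versions.
qemu-system-x86_64 --version
virsh version        # reports library, daemon and running hypervisor versions
dpkg -l | awk '/^ii/ && /qemu|libvirt/ {print $2, $3}'
```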

Cheers,

On 8 July 2016 at 00:54, Jonathan Proulx  wrote:
> On Thu, Jul 07, 2016 at 11:13:29AM +1000, Blair Bethwaite wrote:
> :Jon,
> :
> :Awesome, thanks for sharing. We've just run into an issue with SRIOV
> :VF passthrough that sounds like it might be the same problem (device
> :disappearing after a reboot), but haven't yet investigated deeply -
> :this will help with somewhere to start!
>
> :By the way, the nouveau mention was because we had missed it on some
> :K80 hypervisors recently and seen passthrough apparently work, but
> :then the NVIDIA drivers would not build in the guest as they claimed
> :they could not find a supported device (despite the GPU being visible
> :on the PCI bus).
>
> Definitely sage advice!
>
> :I have also heard passing mention of requiring qemu
> :2.3+ but don't have any specific details of the related issue.
>
> I didn't do a bisection but with qemu 2.2 (from ubuntu cloudarchive
> kilo) I was sad and with 2.5 (from ubuntu cloudarchive mitaka but
> installed on a kilo hypervisor) I am working.
>
> Thanks,
> -Jon
>
>
> :Cheers,
> :
> :On 7 July 2016 at 08:13, Jonathan Proulx  wrote:
> :> On Wed, Jul 06, 2016 at 12:32:26PM -0400, Jonathan D. Proulx wrote:
> :> :
> :> :I do have an odd remaining issue where I can run cuda jobs in the vm
> :> :but snapshots fail and after pause (for snapshotting) the pci device
> :> :can't be reattached (which is where i think it deletes the snapshot
> :> :it took).  Got same issue with 3.16 and 4.4 kernels.
> :> :
> :> :Not very well categorized yet, but I'm hoping it's because the VM I
> :> :was hacking on had it's libvirt.xml written out with the older qemu
> :> :maybe?  It had been through a couple reboots of the physical system
> :> :though.
> :> :
> :> :Currently building a fresh instance and bashing more keys...
> :>
> :> After an ugly bout of bashing I've solved my failing snapshot issue,
> :> which I'll post here in hopes of saving someone else.
> :>
> :> Short version:
> :>
> :> add "/dev/vfio/vfio rw," to  /etc/apparmor.d/abstractions/libvirt-qemu
> :> add "ulimit -l unlimited" to /etc/init/libvirt-bin.conf
> :>
> :> Longer version:
> :>
> :> What was happening.
> :>
> :> * send snapshot request
> :> * instance pauses while snapshot is pending
> :> * instance attempts to resume
> :> * fails to reattach pci device
> :>   * nova-compute.log
> :> Exception during message handling: internal error: unable to execute 
> QEMU command 'device_add': Device initialization failed
> :>
> :>   * qemu/.log
> :> vfio: failed to open /dev/vfio/vfio: Permission denied
> :> vfio: failed to setup container for group 48
> :> vfio: failed to get group 48
> :> * snapshot disappears
> :> * instance resumes but without passed through device (hard reboot
> :> reattaches)
> :>
> :> seeing permission denied I thought it would be an easy fix, but:
> :>
> :> # ls -l /dev/vfio/vfio
> :> crw-rw-rw- 1 root root 10, 196 Jul  6 14:05 /dev/vfio/vfio
> :>
> :> so I'm guessing I'm in apparmor hell; I tried adding "/dev/vfio/vfio
> :> rw," to /etc/apparmor.d/abstractions/libvirt-qemu, rebooted the
> :> hypervisor, and tried again, which got me a different libvirt error
> :> set:
> :>
> :> VFIO_MAP_DMA: -12
> :> vfio_dma_map(0x5633a5fa69b0, 0x0, 0xa, 0x7f4e7be0) = -12 (Cannot 
> allocate memory)
> :>
> :> kern.log (and thus dmesg) showing:
> :> vfio_pin_pages: RLIMIT_MEMLOCK (65536) exceeded
> :>
> :> Getting rid of this one required inserting 'ulimit -l unlimited' into
> :> /etc/init/libvirt-bin.conf in the 'script' section:
> :>
> :> 
> :> script
> :> [ -r /etc/default/libvirt-bin ] && . /etc/default/libvirt-bin
> :> ulimit -l unlimited
> :> exec /usr/sbin/libvirtd $libvirtd_opts
> :> end script
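The RLIMIT_MEMLOCK (65536) failure quoted above can be checked directly; this sketch compares the shell's locked-memory limit with the one the running libvirtd actually has (the limit vfio_pin_pages enforces):

```shell
# Show the current shell's locked-memory limit, then libvirtd's.
ulimit -l
pid=$(pidof libvirtd 2>/dev/null)
[ -n "$pid" ] && grep -i 'locked memory' "/proc/$pid/limits" || true
```

After the `ulimit -l unlimited` change takes effect, the libvirtd line should read "unlimited".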
> :>
> :>
> :> -Jon
> :>
> :> ___
> :> OpenStack-operators mailing list
> :> OpenStack-operators@lists.openstack.org
> :> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> :
> :
> :
> :--
> :Cheers,
> :~Blairo
>
> --



-- 
Cheers,
~Blairo


[Openstack-operators] [scientific][scientific-wg] Reminder: Scientific WG Meeting July 20th 0900 UTC

2016-07-19 Thread Stig Telfer
Hi all - 

We have a Scientific WG IRC meeting on Wednesday at 0900 UTC on channel 
#openstack-meeting.

The agenda is available here[1] and full IRC meeting details are here[2].

This week if possible I’d like to review recent WG activities, in particular 
outputs for our user stories activity area.

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_July_20th_2016
[2] http://eavesdrop.openstack.org/#Scientific_Working_Group


Re: [Openstack-operators] OpenStack mitaka using swift as backend for glance

2016-07-19 Thread Lucas Di Paola
Hi Michael,

I am having exactly the same issue, getting the error that you mentioned in
your first email. I tried replacing project_domain_id's and user_domain_id's
value using the , but I had no luck, still getting the same error.
Did you add the ID obtained from running the following openstack command,
"openstack domain list"? In that case, for both parameters, did you use the same ID?

+----+---------+---------+----------------+
| ID | Name    | Enabled | Description    |
+----+---------+---------+----------------+
|    | default | True    | Default Domain |
+----+---------+---------+----------------+
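For what it's worth, the ID column can be pulled non-interactively (a sketch; assumes admin credentials are loaded in the environment):

```shell
# Print only the ID of the 'default' domain, ready to paste into
# project_domain_id / user_domain_id.
openstack domain list -f value -c ID -c Name | awk '$2 == "default" {print $1}'
```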

Regarding your issue, do you have the Proxy Server listening on port 8080
on the controller node?
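A quick sketch for answering that question (run the first command on the controller, the second from the glance host; "controller" is the hostname used in the configs in this thread):

```shell
# On the controller: is anything listening on the swift proxy port?
ss -lnt '( sport = :8080 )'
# From the glance node: can we reach it?
nc -zv -w 2 controller 8080
```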


2016-07-19 11:21 GMT-03:00 Michael Stang :

> Hi Sam, Hi all,
>
> fixed the problem.  I had used
>
> project_domain_id = default
> user_domain_id = default
>
> instead I had to use
>
> project_domain_id = 
> user_domain_id = 
>
> to make it work. Now I can store images over glance in swift. The only
> problem I have now is that every user can upload images but only "glance"
> can delete them; when I try this as another user in the project, the
> glance-api.log shows
>
>  ClientException: Object HEAD failed: 
> http://controller:8080/v1/glance/
>  403 Forbidden
>
>
> don't know if something is still wrong or if this might be a bug?
>
>
> Kind regards,
> Michael
>
>
>
>
>
> Michael Stang wrote on 18 July 2016 at 08:44:
>
>
> Hi Sam,
>
> thank you for your answer.
>
> I had a look; the swift store endpoint is listed 3 times in
> keystone: publicurl, admin and internal endpoint. To try, I also set it in
> the glance-api.conf:
>
> swift_store_endpoint = http://controller:8080/v1/
>
> I also tried
>
> swift_store_endpoint = http://controller:8080/v1/AUTH_%(tenant_id)s
>
> but both gave me the same result as before. Is this the right endpoint url
> for swift? In which config file and with what option do I have to enter it
> in the glance configuration?
>
>
> Thank you and kind regards,
> Michael
>
>
>
> Sam Morrison wrote on 18 July 2016 at 01:41:
>
> Hi Michael,
>
> This would indicate that glance can’t find the swift endpoint in the
> keystone catalog.
>
> You can either add it to the catalog or specify the swift url in the
> config.
>
> Cheers,
> Sam
>
>
> On 15 Jul 2016, at 9:07 PM, Michael Stang 
> wrote:
>
> Hi everyone,
>
> I tried to setup swift as backend for glance in our new mitaka
> installation. I used this in the glance-api.conf
>
>
> [glance_store]
> stores = swift
> default_store = swift
> swift_store_create_container_on_put = True
> swift_store_region = RegionOne
> default_swift_reference = ref1
> swift_store_config_file = /etc/glance/glance-swift-store.conf
>
>
> and in the glance-swift-store.conf this
>
> [ref1]
> auth_version = 3
> project_domain_id = default
> user_domain_id = default
> auth_address = http://controller:35357
> user = services:swift
> key = x
>
> When I now try to upload an image, it gets the status "killed" and this is
> in the glance-api.log
>
> 2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils
> [req-0ec16fa5-a605-47f3-99e9-9ab231116f04 de9463239010412d948df4020e9be277
> 669e037b13874b6c8712b1fd10c219f0 - - -] Failed to upload image
> 6de45d08-b420-477b-a665-791faa232379
> 2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils Traceback
> (most recent call last):
> 2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils File
> "/usr/lib/python2.7/dist-packages/glance/api/v1/upload_utils.py", line 110,
> in upload_data_to_store
> 2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils
> context=req.context)
> 2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils File
> "/usr/lib/python2.7/dist-packages/glance_store/backend.py", line 344, in
> store_add_to_backend
> 2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils
> verifier=verifier)
> 2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils File
> "/usr/lib/python2.7/dist-packages/glance_store/capabilities.py", line 226,
> in op_checker
> 2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils return
> store_op_fun(store, *args, **kwargs)
> 2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils File
> "/usr/lib/python2.7/dist-packages/glance_store/_drivers/swift/store.py",
> line 532, in add
> 2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils
> allow_reauth=need_chunks) as manager:
> 2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils File
> "/usr/lib/python2.7/dist-packages/glance_store/_drivers/swift/store.py",
> line 1170, in get_manager_for_store
> 2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils store,
> store_location, context, allow_reauth)
> 2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils Fi

Re: [Openstack-operators] OpenStack mitaka using swift as backend for glance

2016-07-19 Thread Michael Stang
Hi Sam, Hi all,
 
fixed the problem.  I had used
 
project_domain_id = default
user_domain_id = default
 
instead I had to use
 
project_domain_id = 
user_domain_id = 
 
to make it work. Now I can store images over glance in swift. The only problem I
have now is that every user can upload images but only "glance" can delete
them; when I try this as another user in the project, the glance-api.log shows
 
 ClientException: Object HEAD failed:
http://controller:8080/v1/glance/  403 Forbidden
 
 
don't know if something is still wrong or if this might be a bug?
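One way to investigate the 403 (a sketch with hypothetical names; it assumes the container ACLs are involved, which is not confirmed, and the project/user names follow the service credentials used elsewhere in this thread):

```shell
# Inspect the glance container's ACLs with the service credentials;
# the "Read ACL:" / "Write ACL:" lines show who may modify objects.
swift --os-username glance --os-project-name service stat glance
# Possible workaround (widens write access to all users in 'service'):
# swift post glance -w 'service:*'
```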
 
 
Kind regards,
Michael
 
 
 
 

> Michael Stang wrote on 18 July 2016 at 08:44:
> 
>  Hi Sam,
>   
>  thank you for your answer.
>   
>  I had a look; the swift store endpoint is listed 3 times in keystone:
> publicurl, admin and internal endpoint. To try, I also set it in the
> glance-api.conf:
>   
>  swift_store_endpoint = http://controller:8080/v1/
>   
>  I also tried
>   
>  swift_store_endpoint = http://controller:8080/v1/AUTH_%(tenant_id)s
>   
>  but both gave me the same result as before. Is this the right endpoint url for
> swift? In which config file and with what option do I have to enter it in the
> glance configuration?
>   
>   
>  Thank you and kind regards,
>  Michael
>   
>   
> 
> > Sam Morrison wrote on 18 July 2016 at 01:41:
> > 
> >   Hi Michael,
> >
> >   This would indicate that glance can’t find the swift endpoint in the
> > keystone catalog.
> >
> >   You can either add it to the catalog or specify the swift url in the
> > config.
> >
> >   Cheers,
> >   Sam
> >
> > 
> > 
> > > On 15 Jul 2016, at 9:07 PM, Michael Stang <michael.st...@dhbw-mannheim.de> wrote:
> > > 
> > >   Hi everyone,
> > >
> > >   I tried to setup swift as backend for glance in our new mitaka
> > > installation. I used this in the glance-api.conf
> > >
> > > 
> > >   [glance_store]
> > >   stores = swift
> > >   default_store = swift
> > >   swift_store_create_container_on_put = True
> > >   swift_store_region = RegionOne
> > >   default_swift_reference = ref1
> > >   swift_store_config_file = /etc/glance/glance-swift-store.conf
> > > 
> > >
> > > 
> > >   and in the glance-swift-store.conf this
> > > 
> > >   [ref1]
> > >   auth_version = 3
> > >   project_domain_id = default
> > >   user_domain_id = default
> > >   auth_address = http://controller:35357/
> > >   user = services:swift
> > >   key = x
> > > 
> > >   When I now try to upload an image, it gets the status "killed" and
> > > this is in the glance-api.log
> > > 
> > >   2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils
> > > [req-0ec16fa5-a605-47f3-99e9-9ab231116f04 de9463239010412d948df4020e9be277
> > > 669e037b13874b6c8712b1fd10c219f0 - - -] Failed to upload image
> > > 6de45d08-b420-477b-a665-791faa232379
> > >   2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils
> > > Traceback (most recent call last):
> > >   2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils File
> > > "/usr/lib/python2.7/dist-packages/glance/api/v1/upload_utils.py", line
> > > 110, in upload_data_to_store
> > >   2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils
> > > context=req.context)
> > >   2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils File
> > > "/usr/lib/python2.7/dist-packages/glance_store/backend.py", line 344, in
> > > store_add_to_backend
> > >   2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils
> > > verifier=verifier)
> > >   2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils File
> > > "/usr/lib/python2.7/dist-packages/glance_store/capabilities.py", line 226,
> > > in op_checker
> > >   2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils
> > > return store_op_fun(store, *args, **kwargs)
> > >   2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils File
> > > "/usr/lib/python2.7/dist-packages/glance_store/_drivers/swift/store.py",
> > > line 532, in add
> > >   2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils
> > > allow_reauth=need_chunks) as manager:
> > >   2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils File
> > > "/usr/lib/python2.7/dist-packages/glance_store/_drivers/swift/store.py",
> > > line 1170, in get_manager_for_store
> > >   2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils
> > > store, store_location, context, allow_reauth)
> > >   2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils File
> > > "/usr/lib/python2.7/dist-packages/glance_store/_drivers/swift/connection_manager.py",
> > > line 64, in __init__
> > >   2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.u

[Openstack-operators] Operators Meetup to plan Mid-Cycle

2016-07-19 Thread David Medberry
For the last two weeks there appears to have been no operators meetup to continue
planning the mid-cycle. Is it completely planned now?

(I've been in IRC at 1400 on Tuesdays at #openstack-operators to crickets.)