Re: [Openstack-operators] [all] OpenStack voting by the numbers

2015-07-29 Thread David Medberry
Nice writeup maish! very nice.

On Wed, Jul 29, 2015 at 10:27 AM, Maish Saidel-Keesing mais...@maishsk.com
wrote:

 Some of my thoughts on the Voting process.


 http://technodrone.blogspot.com/2015/07/openstack-summit-voting-by-numbers.html

 Guess which category has the most number of submissions??
 ;)

 --
 Best Regards,
 Maish Saidel-Keesing

 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [all] OpenStack voting by the numbers

2015-07-29 Thread Maish Saidel-Keesing

Some of my thoughts on the Voting process.

http://technodrone.blogspot.com/2015/07/openstack-summit-voting-by-numbers.html

Guess which category has the most number of submissions??
;)

--
Best Regards,
Maish Saidel-Keesing

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [os-ansible-deployment][openstack-ansible] Ceph / OpenStack Integration

2015-07-29 Thread Matt Thompson
Hi All,

We've got an open blueprint [1] on the os-ansible-deployment project to add
the ability to configure Cinder / Glance / Nova to use an existing Ceph
storage backend.  The implementation [2] is in flight and has seen some
thorough reviews already, but we welcome anyone with Ceph / OpenStack
experience to have a look and to let us know if there is anything that can
be improved upon.

For those without Ceph experience, we'd also appreciate help with the
testing of the different components to identify any edge cases.  While we
currently do not include playbooks to deploy a Ceph storage cluster in
os-ansible-deployment itself, we've been using Sebastien's roles [3] which
have been working well and are relatively painless to get going.

If you've got any questions on getting the review [2] tested or getting a
Ceph storage cluster set up for testing purposes, please swing by
#openstack-ansible on Freenode and let us know.  We've got core members in
different timezones, so there's usually someone around to answer questions.

Lastly, thanks to Serge (svg) for all the work he's put into the
implementation [2] -- we hope to get this merged soon!  :)

Thanks,
Matt (mattt)


[1]
https://blueprints.launchpad.net/openstack-ansible/+spec/ceph-block-devices
[2] https://review.openstack.org/#/c/181957
[3] https://github.com/ceph/ceph-ansible
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Compressed Images

2015-07-29 Thread Matt Joyce
It might be a function of apache compression ?

On July 29, 2015 3:38:25 PM EDT, Joe Topjian j...@topjian.net wrote:
Hello,

In the Create An Image page of Horizon, it says the following:

Currently only images available via an HTTP URL are supported. The
image
location must be accessible to the Image Service. Compressed image
binaries
are supported (.zip and .tar.gz.)

Either I have something misconfigured, the text does not apply to *all*
images, or the text is wrong.

If I upload a QCOW2 image that has been zipped, gzip'd, or tar'd and
gzip'd, the image is saved but instances fail to boot because of no
bootable device.

Does Glance need configured a certain way to accept compressed files?
Is
there something on Horizon's side that needs configured? Do I need to
use a
different disk format other than QCOW2 when creating the image?

Thanks,
Joe




___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [os-ansible-deployment][openstack-ansible] Release review/bug list for tomorrow's meeting

2015-07-29 Thread Jesse Pretorius
Hi everyone,


We need some support for reviews and bug updates, ideally before the next
meeting at 16:00 UTC tomorrow as per
https://wiki.openstack.org/wiki/Meetings/openstack-ansible#Agenda_for_next_meeting


The following reviews are in-flight and are important for the upcoming
releases, and therefore there is a need for more reviews and in some cases
backports once the master patches have landed:
https://review.openstack.org/#/q/starredby:%22Jesse+Pretorius%22+project:stackforge/os-ansible-deployment,n,z


The upcoming releases (this weekend) are:

Kilo: https://launchpad.net/openstack-ansible/+milestone/11.1.0

Juno: https://launchpad.net/openstack-ansible/+milestone/10.1.11


I’d appreciate it if everyone could take a look at the reviews – some of
which need to be rebased/changed - and the bugs that are not yet ‘In
Progress’ and decide whether the solutions for the bugs will make it or not
in time for the release.

We’ll discuss everything in the list in the meeting and decide on the best
course of action.

-- 
Jesse Pretorius
IRC: odyssey4me
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Which Puppet Modules To Use?

2015-07-29 Thread Richard Raseley
On 07/29/2015 06:01 AM, Eren Türkay wrote:
 There are a number of puppet modules for deploying OpenStack. So far, I've 
 seen
 modules in puppetlabs, stackforge, and mirantis. Which one do you guys use and
 suggest? I know that every module is merged into big tent [0] but still, I'm
 curious as modules outside of the big tent seem to be developed as well.
Here at Puppet, we make use of the modules in the official OpenStack
Puppet Modules[0] project (formerly part of StackForge). As I understand
it Fuel is aligning towards using these as well. The ones under the
'puppetlabs' name are either deprecated or are intended to function as a
working example of a composition layer (the Puppet code you write to
'glue' the other modules together).

Regards,

Richard Raseley

Systems Operations Engineer @ Puppet Labs

[0] - https://forge.puppetlabs.com/openstack



signature.asc
Description: OpenPGP digital signature
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ceph backed 'boot from volume' downloading image

2015-07-29 Thread Josh Durgin

Hi Caius,

This has existed in the rbd cinder driver since volume-to-image was added:

https://github.com/openstack/cinder/blob/stable/kilo/cinder/volume/drivers/rbd.py#L823

Cinder falls back to doing the full copy if glance doesn't report the
location, or it's not marked as raw format.

If glance doesn't have show_image_direct_url = True, or cinder doesn't
have glance_api_version = 2, cinder won't be able to do the clone. See

http://ceph.com/docs/master/rbd/rbd-openstack/#configure-openstack-to-use-ceph

for more details.

Josh


On 07/29/2015 07:36 AM, Caius Howcroft wrote:

Hi,

We (bloomberg) are preparing to roll out kilo into production and one
thing is causing a lot of grief. I wonder if anyone else has
encountered it.

We run BCPC (https://github.com/bloomberg/chef-bcpc) which is ceph
backed. When we boot an instance from volume the cinder create volume
from image function (
https://github.com/openstack/cinder/blob/stable/kilo/cinder/volume/drivers/rbd.py#L850)
ends up pulling the entire image through the glance API, so lots of
tenants doing this creates quite a bit of load on our API nodes.

We were confused why it did this, when its way more efficient to go
directly via rbd clone, we created a patch and tested and it seems to
work just fine (and an order of magnitude faster)
https://github.com/bloomberg/chef-bcpc/pull/742

So, the question is: what are other ceph backed installations doing ?



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ceph backed 'boot from volume' downloading image

2015-07-29 Thread Caius Howcroft
ahh thank you, we will dig through our config again and see of
something isn't right.

On Wed, Jul 29, 2015 at 2:31 PM, Josh Durgin jdur...@redhat.com wrote:
 Hi Caius,

 This has existed in the rbd cinder driver since volume-to-image was added:

 https://github.com/openstack/cinder/blob/stable/kilo/cinder/volume/drivers/rbd.py#L823

 Cinder falls back to doing the full copy if glance doesn't report the
 location, or it's not marked as raw format.

 If glance doesn't have show_image_direct_url = True, or cinder doesn't
 have glance_api_version = 2, cinder won't be able to do the clone. See

 http://ceph.com/docs/master/rbd/rbd-openstack/#configure-openstack-to-use-ceph

 for more details.

 Josh


 On 07/29/2015 07:36 AM, Caius Howcroft wrote:

 Hi,

 We (bloomberg) are preparing to roll out kilo into production and one
 thing is causing a lot of grief. I wonder if anyone else has
 encountered it.

 We run BCPC (https://github.com/bloomberg/chef-bcpc) which is ceph
 backed. When we boot an instance from volume the cinder create volume
 from image function (

 https://github.com/openstack/cinder/blob/stable/kilo/cinder/volume/drivers/rbd.py#L850)
 ends up pulling the entire image through the glance API, so lots of
 tenants doing this creates quite a bit of load on our API nodes.

 We were confused why it did this, when its way more efficient to go
 directly via rbd clone, we created a patch and tested and it seems to
 work just fine (and an order of magnitude faster)
 https://github.com/bloomberg/chef-bcpc/pull/742

 So, the question is: what are other ceph backed installations doing ?





-- 
Caius Howcroft
@caiushowcroft
http://www.linkedin.com/in/caius

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Make libguestfs available on pypi

2015-07-29 Thread Kris G. Lindgren
We are packaging nova in a venv so that we can run some kilo code on top of 
some cent6 nodes (default python install is 2.6) (additionally we are working 
on replacing the cent6 nodes with a newer os, but when you have a large number 
of machines - things take time).  We are using python27 software collections 
and pretty much everything is working.  But the issue is that libguestfs is not 
able to be installed in a venv via normal means (pip install).  I would like to 
make the request that libguestfs get added to pypi.

The following bug has already been created over a year ago [1], and it looks 
like most of the work on the libguestfs side is already done [2].  It seems 
something about a complaint of licensing per the bug report.

[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1075594
[2] - 
https://github.com/libguestfs/libguestfs/commit/fcbfc4775fa2a44020974073594a745ca420d614



Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Which Puppet Modules To Use?

2015-07-29 Thread Eren Türkay
Hello,

There are a number of puppet modules for deploying OpenStack. So far, I've seen
modules in puppetlabs, stackforge, and mirantis. Which one do you guys use and
suggest? I know that every module is merged into big tent [0] but still, I'm
curious as modules outside of the big tent seem to be developed as well.

Regards,

[0] https://wiki.openstack.org/wiki/Puppet

-- 
Eren Türkay, System Administrator
https://skyatlas.com/ | +90 850 885 0357

Yildiz Teknik Universitesi Davutpasa Kampusu
Teknopark Bolgesi, D2 Blok No:107
Esenler, Istanbul Pk.34220



signature.asc
Description: OpenPGP digital signature
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [app-catalog] IRC Meeting Thursday July 30th at 17:00UTC

2015-07-29 Thread Christopher Aedo
Hello! Our next OpenStack App Catalog meeting will take place this
Thursday July 20th at 17:00 UTC in #openstack-meeting-3

The agenda can be found here:
https://wiki.openstack.org/wiki/Meetings/app-catalog

Please add agenda items if there's anything specific you would like to
discuss.  For this weeks meeting my primary intention is to discuss
the roadmap, everything we'd like to accomplish before the next
summit, and determine who all will be helping get it done.

Please join us if you can!

-Christopher

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] docker for icehouse heat pluign

2015-07-29 Thread pra devOPS
Hi All:

I have installed the docker pluign on icehouse on Centos 7.

I had installed heat and docker plugin , While I am restarting the heat
after installing the docker pluing .

openstack-heat-engine does not start, I get below in the engine.log.

Failed to import module
heat.engine.plugins.heat_docker.resources.docker_container
No Module oslog.log.


Will installing oslo.log heps ?

Or whats the error about heat_docker container.

Any help on this is much appreciated.

Thanks,
Dev
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [tags] Meeting this week

2015-07-29 Thread Maish Saidel-Keesing
I will probably be in flight at that time - so I am sorry but I Will not 
be able to join.




On 07/28/15 22:28, Tom Fifield wrote:

Hi all,

I think it's probably a good idea to have a meeting in our scheduled 
slot 1400 UTC on Thurs 30th July.


I'll actually be in Beijing at the time, but I've planned to be there, 
but it something goes wrong, it would be great if someone could run 
the meeting. I think a good discussion topic is what you'd like to do 
for the mid-cycle ops event as we'll likely have a 90 minute in-person 
session.



Regards,


Tom

On 16/07/15 21:11, Tom Fifield wrote:

OK, if there isn't soon an outpouring of support for this meeting, I
think it's best cancelled :)


On 16/07/15 18:37, Maish Saidel-Keesing wrote:

I would prefer to defer today's meeting

On 07/16/15 11:17, Tom Fifield wrote:

Hi,

According to the logs from last week, which are sadly in yet another
directory: http://eavesdrop.openstack.org/meetings/_operator_tags/ 
, we

do have a meeting this week, but the only agenda item (Jamespage 
markbaker - thoughts on packaging) didn't pan out since markbaker 
wasn't

available.

Is there interest for a meeting, and any proposed topics? ops:ha?

Regards,


Tom



On 16/07/15 16:10, Maish Saidel-Keesing wrote:

Are we having a meeting today at 14:00 UTC?

On 06/29/15 07:39, Tom Fifield wrote:

Hi,

As noted last meeting, we didn't get even half way through out 
agenda,

so we will meet this week as well.

So, join us this Thursday Jul 2nd 1400 UTC in #openstack-meeting on
freenode
(http://www.timeanddate.com/worldclock/fixedtime.html?iso=20150702T1400 


)

To kick off with agenda item #4:
https://etherpad.openstack.org/p/ops-tags-June-2015

Previous meeting notes can be found at:
http://eavesdrop.openstack.org/meetings/ops_tags/2015/


Regards,


Tom






___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


--
Best Regards,
Maish Saidel-Keesing

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] ssh inside instance

2015-07-29 Thread aishwarya.adyanthaya
Hi,

I've launched two instances from my openstack dashboard. Firstly I created 
instance one, where I generated a key through the ssh-keygen command, pasting 
the public key contents to import key in the access and security. Using this 
key I launched second instance.

I want to be able to ssh the second instance from my first instance. Could 
someone tell me how to work it out?

Thank you,
Aishwarya Adyanthaya



This message is for the designated recipient only and may contain privileged, 
proprietary, or otherwise confidential information. If you have received it in 
error, please notify the sender immediately and delete the original. Any other 
use of the e-mail by you is prohibited. Where allowed by local law, electronic 
communications with Accenture and its affiliates, including e-mail and instant 
messaging (including content), may be scanned by our systems for the purposes 
of information security and assessment of internal compliance with Accenture 
policy.
__

www.accenture.com
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] RE : RE : Can't launch docker instance, Unexpected vif_type=binding_failed.

2015-07-29 Thread Asmaa Chebba
1. ml2_conf.ini in controller:
[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = 
neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

2. ml2_conf.ini in compute2:
[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = 
neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
local_ip = 192.168.2.5
enable_tunneling = True
[agent]
tunnel_types = gre

3. nova-compute.log in compute2:
2015-07-29 11:13:50.857 5166 WARNING novadocker.virt.docker.driver [-] 
[instance: 52c3d98f-f99c-4d42-969b-16308cbf9db4] Cannot setup network: 
Unexpected vif_type=binding_failed
2015-07-29 11:13:50.857 5166 TRACE novadocker.virt.docker.driver [instance: 
52c3d98f-f99c-4d42-969b-16308cbf9db4] Traceback (most recent call last):
2015-07-29 11:13:50.857 5166 TRACE novadocker.virt.docker.driver [instance: 
52c3d98f-f99c-4d42-969b-16308cbf9db4]   File 
/usr/local/lib/python2.7/dist-packages/novadocker/virt/docker/driver.py, line 
367, in _start_container
2015-07-29 11:13:50.857 5166 TRACE novadocker.virt.docker.driver [instance: 
52c3d98f-f99c-4d42-969b-16308cbf9db4] self.plug_vifs(instance, network_info)
2015-07-29 11:13:50.857 5166 TRACE novadocker.virt.docker.driver [instance: 
52c3d98f-f99c-4d42-969b-16308cbf9db4]   File 
/usr/local/lib/python2.7/dist-packages/novadocker/virt/docker/driver.py, line 
187, in plug_vifs
2015-07-29 11:13:50.857 5166 TRACE novadocker.virt.docker.driver [instance: 
52c3d98f-f99c-4d42-969b-16308cbf9db4] self.vif_driver.plug(instance, vif)
2015-07-29 11:13:50.857 5166 TRACE novadocker.virt.docker.driver [instance: 
52c3d98f-f99c-4d42-969b-16308cbf9db4]   File 
/usr/local/lib/python2.7/dist-packages/novadocker/virt/docker/vifs.py, line 
63, in plug
2015-07-29 11:13:50.857 5166 TRACE novadocker.virt.docker.driver [instance: 
52c3d98f-f99c-4d42-969b-16308cbf9db4] _(Unexpected vif_type=%s) % 
vif_type)
2015-07-29 11:13:50.857 5166 TRACE novadocker.virt.docker.driver [instance: 
52c3d98f-f99c-4d42-969b-16308cbf9db4] NovaException: Unexpected 
vif_type=binding_failed
2015-07-29 11:13:50.857 5166 TRACE novadocker.virt.docker.driver [instance: 
52c3d98f-f99c-4d42-969b-16308cbf9db4]
2015-07-29 11:13:51.050 5166 ERROR nova.compute.manager [-] [instance: 
52c3d98f-f99c-4d42-969b-16308cbf9db4] Instance failed to spawn
2015-07-29 11:13:51.050 5166 TRACE nova.compute.manager [instance: 
52c3d98f-f99c-4d42-969b-16308cbf9db4] Traceback (most recent call last):
2015-07-29 11:13:51.050 5166 TRACE nova.compute.manager [instance: 
52c3d98f-f99c-4d42-969b-16308cbf9db4]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 2267, in 
_build_resources
2015-07-29 11:13:51.050 5166 TRACE nova.compute.manager [instance: 
52c3d98f-f99c-4d42-969b-16308cbf9db4] yield resources
2015-07-29 11:13:51.050 5166 TRACE nova.compute.manager [instance: 
52c3d98f-f99c-4d42-969b-16308cbf9db4]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 2137, in 
_build_and_run_instance
2015-07-29 11:13:51.050 5166 TRACE nova.compute.manager [instance: 
52c3d98f-f99c-4d42-969b-16308cbf9db4] block_device_info=block_device_info)
2015-07-29 11:13:51.050 5166 TRACE nova.compute.manager [instance: 
52c3d98f-f99c-4d42-969b-16308cbf9db4]   File 
/usr/local/lib/python2.7/dist-packages/novadocker/virt/docker/driver.py, line 
404, in spawn
2015-07-29 11:13:51.050 5166 TRACE nova.compute.manager [instance: 
52c3d98f-f99c-4d42-969b-16308cbf9db4] self._start_container(container_id, 
instance, network_info)
2015-07-29 11:13:51.050 5166 TRACE nova.compute.manager [instance: 
52c3d98f-f99c-4d42-969b-16308cbf9db4]   File 
/usr/local/lib/python2.7/dist-packages/novadocker/virt/docker/driver.py, line 
376, in _start_container
2015-07-29 11:13:51.050 5166 TRACE nova.compute.manager [instance: 
52c3d98f-f99c-4d42-969b-16308cbf9db4] instance_id=instance['name'])
2015-07-29 11:13:51.050 5166 TRACE nova.compute.manager [instance: 
52c3d98f-f99c-4d42-969b-16308cbf9db4] InstanceDeployFailure: Cannot setup 
network: Unexpected vif_type=binding_failed
2015-07-29 11:13:51.050 5166 TRACE nova.compute.manager [instance: 
52c3d98f-f99c-4d42-969b-16308cbf9db4]
2015-07-29 11:13:51.051 5166 AUDIT nova.compute.manager 
[req-156ad821-5880-4560-83f1-e9a2efb4b4c6 None] [instance: 
52c3d98f-f99c-4d42-969b-16308cbf9db4] Terminating instance
2015-07-29 11:13:51.186 5166 INFO nova.scheduler.client.report [-] 
Compute_service record updated for ('compute2', 'compute2')

4. the log file for neutron server:

2015-07-29 11:13:45.726 2282 INFO neutron.wsgi 
[req-67398b60-ab27-49aa-8d9d-24c750252d7c None] 192.168.1.2 - - [29/Jul/2015 
11:13:45] GET 

[Openstack-operators] Chef cookbook

2015-07-29 Thread aishwarya.adyanthaya
Hi,

I've integrated chef with openstack and needed to work on the cookbooks. I 
wanted to upload a docker and mysql cookbook in specific to the chef-dashboard.

I've gone through the supermarket of the chef.io but I just ended up confused. 
Moreover, when I tried uploading the cookbook it either gave me an error. Could 
some give me the steps as to how to proceed from the point of bringing the 
cookbooks to the node, uploading it and adding it to the run list.

Thank you!



This message is for the designated recipient only and may contain privileged, 
proprietary, or otherwise confidential information. If you have received it in 
error, please notify the sender immediately and delete the original. Any other 
use of the e-mail by you is prohibited. Where allowed by local law, electronic 
communications with Accenture and its affiliates, including e-mail and instant 
messaging (including content), may be scanned by our systems for the purposes 
of information security and assessment of internal compliance with Accenture 
policy.
__

www.accenture.com
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ceph backed 'boot from volume' downloading image

2015-07-29 Thread matt
Caius why not submit that patch to openstack review for cinder?  I'm sure
more than a few of us would be glad to voice our desire for it or something
much like it to land in liberty.

-Matt

On Wed, Jul 29, 2015 at 10:36 AM, Caius Howcroft caius.howcr...@gmail.com
wrote:

 Hi,

 We (bloomberg) are preparing to roll out kilo into production and one
 thing is causing a lot of grief. I wonder if anyone else has
 encountered it.

 We run BCPC (https://github.com/bloomberg/chef-bcpc) which is ceph
 backed. When we boot an instance from volume the cinder create volume
 from image function (

 https://github.com/openstack/cinder/blob/stable/kilo/cinder/volume/drivers/rbd.py#L850
 )
 ends up pulling the entire image through the glance API, so lots of
 tenants doing this creates quite a bit of load on our API nodes.

 We were confused why it did this, when its way more efficient to go
 directly via rbd clone, we created a patch and tested and it seems to
 work just fine (and an order of magnitude faster)
 https://github.com/bloomberg/chef-bcpc/pull/742

 So, the question is: what are other ceph backed installations doing ?

 Caius


 --
 Caius Howcroft
 @caiushowcroft
 http://www.linkedin.com/in/caius

 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Ceph backed 'boot from volume' downloading image

2015-07-29 Thread Caius Howcroft
Hi,

We (bloomberg) are preparing to roll out kilo into production and one
thing is causing a lot of grief. I wonder if anyone else has
encountered it.

We run BCPC (https://github.com/bloomberg/chef-bcpc) which is ceph
backed. When we boot an instance from volume the cinder create volume
from image function (
https://github.com/openstack/cinder/blob/stable/kilo/cinder/volume/drivers/rbd.py#L850)
ends up pulling the entire image through the glance API, so lots of
tenants doing this creates quite a bit of load on our API nodes.

We were confused why it did this, when its way more efficient to go
directly via rbd clone, we created a patch and tested and it seems to
work just fine (and an order of magnitude faster)
https://github.com/bloomberg/chef-bcpc/pull/742

So, the question is: what are other ceph backed installations doing ?

Caius


-- 
Caius Howcroft
@caiushowcroft
http://www.linkedin.com/in/caius

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators