[Openstack-operators] Neutron L2-GW operators experience [l2-gateway]

2016-08-15 Thread Saverio Proto
Hello all,

we want to bridge one of our tenant networks with a physical network
where we have some hardware appliances.

We can't easily use provider networks, because our compute nodes are
connected over an L3 network, so there is no shared L2 segment where
we can bridge the VMs regardless of the compute node they are
scheduled to.

Googling for the best way to address this issue, I found that
somebody had already started on this solution:

https://wiki.openstack.org/wiki/Neutron/L2-GW

This looks like exactly what I need, but there is not much
documentation or packaging around it, and I wonder whether it is
production ready.
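From the wiki, the workflow we would aim for looks roughly like this
(untested on our side; the CLI comes from the networking-l2gw
extension, and the device, interface and network names are just
examples from our plan):

  neutron l2-gateway-create --device name=hw-switch-1,interface_names=eth1 appliance-gw
  neutron l2-gateway-connection-create appliance-gw tenant-net --default-segmentation-id 100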

I found that Ubuntu has packages here:
http://ubuntu-cloud.archive.canonical.com/ubuntu/pool/main/n/networking-l2gw/

but I can't really tell from the version number whether these packages
are meant for Liberty or Mitaka.

Does anyone on the list have experience with this Neutron feature?

thank you

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Change Dashboard Splash Logo and Top-left logo

2016-08-15 Thread Saverio Proto
On Ubuntu we just replace these two files:

/usr/share/openstack-dashboard/openstack_dashboard/static/dashboard/img/logo.png
/usr/share/openstack-dashboard/openstack_dashboard/static/dashboard/img/logo-splash.png

We make sure with Puppet that our version of these two files is in place.
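A minimal sketch of the Puppet side (the branding module name and the
source paths are ours, adjust them to your manifests):

  $img_dir = '/usr/share/openstack-dashboard/openstack_dashboard/static/dashboard/img'

  file { "${img_dir}/logo.png":
    ensure => file,
    source => 'puppet:///modules/our_branding/logo.png',
    notify => Service['apache2'],
  }

  file { "${img_dir}/logo-splash.png":
    ensure => file,
    source => 'puppet:///modules/our_branding/logo-splash.png',
    notify => Service['apache2'],
  }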

I hope this helps.

Saverio


2016-08-14 9:55 GMT+02:00 William Josefsson :
> Hi,
>
> I'm trying to do a simple customization to my Liberty/CentOS7
> dashboard changing my Splash-Logo and after login, the small top-left
> Logo.
>
> I have tried to follow the online customization guide and created my
> own custom.css in:
>
> # 
> /usr/share/openstack-dashboard/openstack_dashboard/static/dashboard/scss/custom.css
>
> h1.brand a {
> background: url(../img/small_logo.png) top left no-repeat;
> }
> #splash .login {
> background: url(../img/logo_splash.png) no-repeat center 35px;
> }
>
> # I have added import of custom.css
> # 
> /usr/share/openstack-dashboard/openstack_dashboard/templates/_stylesheets.html
>
> {% compress css %}
>  media='screen' rel='stylesheet' />
>  media='screen' rel='stylesheet' />
>  type='text/scss' media='screen' rel='stylesheet' />
>  type='text/scss' media='screen' rel='stylesheet' />
> {% endcompress %}
>
>
> For some strange reason, only the splash logo appears customized. The
> top-left Small logo after login still remains unchanged. My image is
> 2.2K.
>
> Can anyone please hint on why the small logo does not appear, and how
> to fix? thx will
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] can we lock RPC version with upgrade_levels on neutron-openvswitch-agent ?

2016-08-05 Thread Saverio Proto
Hello,

we are doing the Kilo to Liberty upgrade pet by pet.

We already successfully upgraded Keystone and Glance.

Now I have started the Nova pet upgrade. For the controller node it was OK.

As soon as I upgraded the compute nodes I had a problem with Neutron.

I can't lock the neutron-openvswitch-agent to an earlier RPC version,
so I get this in my logs:

RemoteError: Remote error: UnsupportedVersion Endpoint does not
support RPC version 1.5. Attempted method:
get_devices_details_list_and_failed_devices

Must I also upgrade the Neutron server to Liberty before upgrading
the compute nodes?

In nova.conf there is the [upgrade_levels] section, but it looks like
it is ignored by the neutron-openvswitch-agent. Is there no similar
setting in neutron.conf?
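For comparison, on the Nova side we pin the compute RPC during the
upgrade with something like this in nova.conf on the already-upgraded
controller (a sketch of our config):

  [upgrade_levels]
  compute = kilo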

thanks !

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] HowTo upgrade in production LBaaS V1 to LBaaS V2

2016-08-03 Thread Saverio Proto
Hello !

LBaaS V1 is deprecated in Liberty.

I am aware of this documentation:
http://docs.openstack.org/mitaka/networking-guide/adv-config-lbaas.html

We are running a public cloud, and some users have deployed LBaaS V1
(with Heat templates).

How do we migrate to LBaaS V2 without deleting user resources? Is
there a proper way to migrate what is already running?

Does anyone have experience to share about this kind of upgrade/migration?

thanks !

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [puppet] - controller with mysql and rabbitmq have problem with puppet dependencies

2016-07-07 Thread Saverio Proto
So it is puppetlabs-rabbitmq 5.4.0 that introduced this problem.
I rolled back to 5.3.1.

Here is the commit that introduced the problem:
https://github.com/puppetlabs/puppetlabs-rabbitmq/commit/df71b47413fa2e941850a42e82ba83d7f680529d
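For now the workaround is simply to pin the older version in the
Puppetfile that r10k reads, roughly like this (the staging line depends
on which of the two forks you settle on):

  mod 'puppetlabs-rabbitmq', '5.3.1'
  mod 'nanliu-staging'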

Saverio

2016-07-07 13:18 GMT+02:00 Saverio Proto <ziopr...@gmail.com>:
> Hello,
>
> I have a pet server hosting both mysql and rabbitmq for OpenStack.
> I guess this is common for many people running an OpenStack controller pet.
>
> I just figured out I have a strange problem with my puppet modules 
> dependencies:
>
> puppetlabs-rabbitmq requires:
> puppet-staging
>
> puppetlabs-mysql requires:
> nanliu-staging
>
> these 'staging' modules have a name collision; r10k will install them
> into modules/staging/, so the last one in the Puppetfile wins.
>
> I know nanliu-staging is the old one... but which one of the two should I stick to?
>
> Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [puppet] - controller with mysql and rabbitmq have problem with puppet dependencies

2016-07-07 Thread Saverio Proto
Hello,

I have a pet server hosting both mysql and rabbitmq for OpenStack.
I guess this is common for many people running an OpenStack controller pet.

I just figured out I have a strange problem with my puppet modules dependencies:

puppetlabs-rabbitmq requires:
puppet-staging

puppetlabs-mysql requires:
nanliu-staging

these 'staging' modules have a name collision; r10k will install them
into modules/staging/, so the last one in the Puppetfile wins.

I know nanliu-staging is the old one... but which one of the two should I stick to?

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Next Ops Midcycle NYC August 25-26

2016-06-23 Thread Saverio Proto
Hello there :)

Is there anyone from the OpenStack Foundation or from Bloomberg who can
help out with this?

I am sharing this for anyone who needs a visa.

for Austin we had something like this:
https://www.openstack.org/summit/austin-2016/austin-and-travel/
https://openstackfoundation.formstack.com/forms/visa_form_austin_summit

Anyone who needs to apply for a visa will need 'US point of contact
information'.

Basically, if the organizer of the Ops Midcycle is officially the
OpenStack Foundation or Bloomberg, I need to enter the following info
in my visa application:

Organization name
Address in the US
Phone number
Email

It must be a phone number and an email where, in case there is a check,
somebody can confirm "yes of course, this guy exists and is coming to the
conference" :)

How do we sort this out?

thank you

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Next Ops Midcycle NYC August 25-26

2016-06-22 Thread Saverio Proto
Have we already created an etherpad?

I hope I did this correctly:
https://etherpad.openstack.org/p/NYC-ops-meetup

Saverio

2016-06-22 15:58 GMT+02:00 Mark Voelker :
> Hi Ops,
>
> FYI for those that may not be aware, that’s also the week of OpenStack East.  
> OpenStack East runs August 23-24 also in New York City (about ~15-20 minutes 
> away from Civic Hall by MTA at the Playstation Theater).  If you’re coming to 
> town for the Ops Midcycle, you may want to make a week of it.  Earlybird 
> pricing for OpenStack East is still available but prices increase tomorrow:
>
> http://www.openstackeast.com/
>
> At Your Service,
>
> Mark T. Voelker
> (wearer of many hats, one of which is OpenStack East steering committee 
> member)
>
>
>
>> On Jun 21, 2016, at 11:36 AM, Jonathan D. Proulx  wrote:
>>
>> Hi All,
>>
>> The Ops Meetups Team has selected[1] New York City as the location of the
>> next mid-cycle meetup on August 25 and 26 2016 at Civic Hall[2]
>>
>> Many thanks to Bloomberg for sponsoring the location.  And thanks to
>> BestBuy as well for their offer of the Seattle location.  The choice
>> was very close and hopefully their offer will stand for our next North
>> American meet-up.
>>
>> There's quite a bit of work to do to make this all happen in the
>> next couple of months so it's still a great time to join the Ops
>> Meetups Team[3] and help out.
>>
>> -Jon
>>
>> --
>>
>> [1] 
>> http://eavesdrop.openstack.org/meetings/ops_meetups_team/2016/ops_meetups_team.2016-06-21-14.02.html
>> [2] http://civichall.org/about-civic-hall/
>> [3] https://wiki.openstack.org/wiki/Ops_Meetups_Team
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Shared Storage for compute nodes

2016-06-21 Thread Saverio Proto
Hello Michael,

have a look at Openstack Manila and CephFS

Cheers

Saverio


2016-06-21 11:42 GMT+02:00 Michael Stang :

> I think I have asked my question not correctly, it is not for the cinder
> backend, I meant the shared storage for the instances which is shared by
> the compute nodes. Or can cinder also be used for this? Sorry if I ask
> stupid questions, OpenStack is still new for me ;-)
>
> Regards,
> Michael
>
>
> Matt Jarvis  hat am 21. Juni 2016 um 10:21
> geschrieben:
>
>
> If you look at the user survey (
> https://www.openstack.org/user-survey/survey-2016-q1/landing ) you can
> see what the current landscape looks like in terms of deployments. Ceph is
> by far the most commonly used storage backend for Cinder.
>
> On 21 June 2016 at 08:27, Michael Stang 
> wrote:
>
> Hi,
>
> I wonder what is the recommendation for a shared storage for the compute
> nodes? At the moment we are using an iSCSI device which is served to all
> compute nodes with multipath, the filesystem is OCFS2. But this makes it a
> little unflexible in my opinion, because you have to decide how many
> compute nodes you will have in the future.
>
> So is there any suggestion which kind of shared storage to use for the
> compute nodes and what filesystem?
>
> Thanky,
> Michael
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
> DataCentred Limited registered in England and Wales no. 05611763
>
>
>
> Viele Grüße
>
> Michael Stang
> Laboringenieur, Dipl. Inf. (FH)
>
> Duale Hochschule Baden-Württemberg Mannheim
> Baden-Wuerttemberg Cooperative State University Mannheim
> ZeMath Zentrum für mathematisch-naturwissenschaftliches Basiswissen
> Fachbereich Informatik, Fakultät Technik
> Coblitzallee 1-9
> 68163 Mannheim
>
> Tel.: +49 (0)621 4105 - 1367
> michael.st...@dhbw-mannheim.de
> http://www.dhbw-mannheim.de
>
>
>
>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Shared Storage for compute nodes

2016-06-21 Thread Saverio Proto
Hello Michael,

a very widely adopted solution is to use Ceph with rbd volumes:

http://docs.openstack.org/liberty/config-reference/content/ceph-rados.html
http://docs.ceph.com/docs/master/rbd/rbd-openstack/

You can find more options here, under "Volume drivers":
http://docs.openstack.org/liberty/config-reference/content/section_volume-drivers.html
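For reference, the Cinder side boils down to something like this in
cinder.conf (a sketch; the pool, the user and the libvirt secret UUID
are placeholders from our setup):

  [DEFAULT]
  enabled_backends = ceph

  [ceph]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  volume_backend_name = ceph
  rbd_pool = volumes
  rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_user = cinder
  rbd_secret_uuid = <libvirt secret uuid>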

Saverio


2016-06-21 9:27 GMT+02:00 Michael Stang :
> Hi,
>
> I wonder what is the recommendation for a shared storage for the compute
> nodes? At the moment we are using an iSCSI device which is served to all
> compute nodes with multipath, the filesystem is OCFS2. But this makes it a
> little unflexible in my opinion, because you have to decide how many compute
> nodes you will have in the future.
>
> So is there any suggestion which kind of shared storage to use for the
> compute nodes and what filesystem?
>
> Thanky,
> Michael
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Upgrade OpenStack Juno to Mitaka

2016-06-15 Thread Saverio Proto
Hello,

first of all I suggest you read this article:
http://superuser.openstack.org/articles/openstack-upgrading-tutorial-11-pitfalls-and-solutions

> What is the best way to perform an upgrade from Juno to Mitaka?

I would go for the in-place upgrade, but I have always upgraded without
skipping a release, which AFAIK is not supported.

The main problem I see is that database migrations are supported when
you upgrade to the next release, but if you jump from Juno to Mitaka I
have no idea how the database upgrade could be done.

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Mid-Cycle Ops Meetup venue choice - please make your voice heard!

2016-06-15 Thread Saverio Proto
Hello all,

I will need a visa to come to the US for the Mid-Cycle Ops Meetup.

The process to obtain a visa can take up to 8 weeks, and I cannot
apply until the dates and venue are decided.

Please set a date at least 8 weeks ahead, or the few people who cannot
apply for a visa in time will not be able to join.

thank you

Saverio


2016-06-14 18:16 GMT+02:00 Edgar Magana :
> Chris,
>
>
>
> Awesome locations! Looking forward to have the final one and the date to do
> the booking.
>
>
>
> Edgar
>
>
>
> From: Chris Morgan 
> Date: Tuesday, June 14, 2016 at 8:09 AM
> To: OpenStack Operators 
> Subject: [Openstack-operators] Mid-Cycle Ops Meetup venue choice - please
> make your voice heard!
>
>
>
> [DISCLAIMER AT BOTTOM OF EMAIL]
>
>
>
> Hello Everyone,
>
>   There are two possible venues for the next OpenStack Operators Mid-Cycle
> meetup. They both seem suitable and the details are listed here :
>
>
>
> https://etherpad.openstack.org/p/ops-meetup-venue-discuss
>
>
>
> To guide the decision making process and since time is drawing short for
> planning an August event, the Ops Meetups Team meeting today on IRC decided
> to try putting this issue to a poll. Please record *likely* attendance
> preferences for Seattle, NYC (or neither) here :
>
>
>
> http://doodle.com/poll/e4heruzps4g94syf
>
>
>
> The poll is not binding :)
>
>
>
> The Ops Meetups Team is hoping to see a good number of responses within the
> next SEVEN DAYS.
>
>
>
> Thanks for your attention.
>
>
>
> Chris Morgan
>
>
>
> Disclaimer: I work for Bloomberg LP, we have offered to be the host in NYC.
> However, both proposals seem great to me
>
>
>
> --
>
> Chris Morgan 
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [sahara] - Hadoop on Openstack and swift storage only: how to set --os-storage-url in core-site.xml ?

2016-06-09 Thread Saverio Proto
Hello !

I have run some tests with Hadoop on our OpenStack cloud. The idea is
to run MapReduce examples using the Swift storage instead of HDFS.
We have a Ceph backend for Cinder volumes, so HDFS on top of that does
not really fit.

I managed to configure hadoop to access swift with
swift://container.provider syntax:

https://github.com/zioproto/hadoop-swift-tutorial
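For reference, the part of core-site.xml that already works for us in
the normal same-tenant case looks roughly like this (property names
come from the hadoop-openstack driver; the values are placeholders, and
"provider" must match the name used in the swift://container.provider
URL):

  <property>
    <name>fs.swift.impl</name>
    <value>org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem</value>
  </property>
  <property>
    <name>fs.swift.service.provider.auth.url</name>
    <value>https://keystone.example.org:5000/v2.0/tokens</value>
  </property>
  <property>
    <name>fs.swift.service.provider.tenant</name>
    <value>mytenant</value>
  </property>
  <property>
    <name>fs.swift.service.provider.username</name>
    <value>myuser</value>
  </property>
  <property>
    <name>fs.swift.service.provider.password</name>
    <value>secret</value>
  </property>
  <property>
    <name>fs.swift.service.provider.public</name>
    <value>true</value>
  </property>

I do not see any property there that would let me override the storage
URL, which is exactly what I am missing.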

Now I am blocked, because I have some very large containers with
scientific datasets that I would like to access from Hadoop.

If you go back in the mailing list to the thread "Swift ACL's together
with Keystone (v3) integration" the problem is well explained.

I have read permission on a container that belongs to a different
tenant, and with the swift command line I need to pass the
--os-storage-url argument.

But how do I specify the --os-storage-url in the core-site.xml Hadoop
configuration file?
I have not found any documentation that explains this!

https://github.com/openstack/sahara/blob/master/sahara/swift/resources/conf-template.xml

Are the Sahara people reading this mailing list?

Thanks, any feedback is appreciated!

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Neutron database upgrade kilo-liberty and parallel Alembic migration branches

2016-06-01 Thread Saverio Proto
Hello,

reading this documentation page:

http://docs.openstack.org/mitaka/networking-guide/migration-neutron-database.html

I don't get what it means to have two parallel migration branches.

Do you have to choose one of the branches? If yes, how?


Or does it just mean that some operations can be safely run while the
Neutron server API is running (e.g. neutron-db-manage upgrade
--expand), while other operations (e.g. neutron-db-manage upgrade
--contract) require the Neutron server to be stopped?
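For context, the sequence I have in mind is roughly this (a sketch; the
config file paths are the Ubuntu defaults for an ML2 deployment):

  # while neutron-server can keep running
  neutron-db-manage --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade --expand

  # with neutron-server stopped
  neutron-db-manage --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade --contract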

Does anyone have upgrade notes to share that can clarify the procedure?

Thank you

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [kolla] Moving from distro packages to containers (or virtualenvs...)

2016-05-18 Thread Saverio Proto
About Docker:
testing this Docker setup has been on my TODO list for a long time:

https://github.com/dguerri/dockerstack

It looks very well done, but I think it is not very well known.

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Swift ACL's together with Keystone (v3) integration

2016-05-12 Thread Saverio Proto
Hello,

I got this working; I was actually missing the storage URL in my client config.

Related to this setup I am experiencing a weird problem with large
objects when reading from another tenant.

When I read objects from a container where I have only read-only
access, I am not able to read objects larger than 10 GB.

I used the rclone client, which has a very convenient option, --dump-bodies.

When I list the objects in the container I receive a JSON structure
with all the objects in the container. All objects that I can read
have metadata that makes sense. Some files that are bigger than 10 GB
have a size of 0 bytes.

example:

{"hash": "d41d8cd98f00b204e9800998ecf8427e", "last_modified":
"2016-05-10T15:12:44.233710", "bytes": 0, "name":
"eng/googlebooks-eng-all-3gram-20120701-li.gz", "content_type":
"application/octet-stream"}

or sometimes the byte size is just random and wrong.

When I try to read these objects I get a 403.

I tried both with swiftclient and rclone and I have the same problem.

Of course, if I use a container in my own tenant where I have read and
write access, I can successfully read and write all the large objects.
This only happens when reading a large object shared across tenants.

Did you perhaps try working with such large objects in your setup? Does
it work for you?

Saverio



2016-05-03 14:34 GMT+02:00 Wijngaarden, Pieter van
<pieter.van.wijngaar...@philips.com>:
> Hi Saverio,
>
>
>
> Yes, in the end I was able to get it working! The issue was related to my
> proxy server pipeline config (filter:authtoken). I did not find pointers to
> updated documentation though.
>
>
>
> When I had updated the [filter:authtoken] configuration in
> /etc/swift/proxy-server.conf, everything worked. In my case the values
> auth_uri and auth_url were not configured correctly:
>
>
>
> [filter:authtoken]
> paste.filter_factory = keystonemiddleware.auth_token:filter_factory
> auth_uri = https://<keystone host>:5443
> auth_url = http://<keystone host>:35357
> auth_plugin = password
> project_name = service
> project_domain_id = default
> user_domain_id = default
> username = swift
> password = X
>
> I don’t know why that meant that regular token validation worked, but
> cross-tenant did not
>
>
>
> (unfortunately it’s a test cluster so I don’t have history on what it was
> before I changed it :( )
>
>
>
> What works for me now (using python-swiftclient) is the following. I hope
> that the text formatting survives in the email:
>
>
>
> 1.  A user with complete ownership over the account (say account X)
> executes
>
> a.  swift post <container> --read-acl '<project_id>:<user_id>'
>
> b.  or
>
> c.  swift post <container> --read-acl '<project_id>:*'
>
> 2.  A user in the other account can now list the container and
> get objects in the container by doing:
>
> a.  swift list <container> --os-storage-url <storage URL of account X>
> --os-auth-token <token>
>
> b.  or
>
> c.  swift download <container> <object> --os-storage-url <storage URL of account X>
> --os-auth-token <token>
>
> Note that you can review the full storage URL for an account by doing swift
> stat -v.
>
> In this case, the user in step 2 is not able to do anything else in account
> X besides do object listing in the container and get its objects, which is
> what I was aiming for. What does not work for me is if I set the read-acl to
> '<project_id>' only, even though that should work according to the
> documentation. If you want to allow all users in another project read access
> to a container, use '<project_id>:*' as the read-ACL.
>
>
>
> I hope this helps!
>
>
>
> With kind regards,
>
> Pieter van Wijngaarden
>
>
>
>
>
> -Original Message-
> From: Saverio Proto [mailto:ziopr...@gmail.com]
> Sent: dinsdag 3 mei 2016 12:44
> To: Wijngaarden, Pieter van <pieter.van.wijngaar...@philips.com>
> Cc: openstack-operators@lists.openstack.org
> Subject: Re: [Openstack-operators] Swift ACL's together with Keystone (v3)
> integration
>
> Hello Pieter,
>
> I did run into the same problem today. Did you find pointers to more updated
> documentation? Were you able to configure the cross-tenant read ACL?
>
> thank you
>
> Saverio
>
> 2016-04-20 13:48 GMT+02:00 Wijngaarden, Pieter van
> <pieter.van.wijngaar...@philips.com>:
>> Hi all,
>>
>> I’m playing around with a Swift cluster (Liberty) and cannot get the
>> Swift ACL’s to work. My objective is to give users from one project
>> (and thus Swift account?) selective access to specific containers in
>> another project.

Re: [Openstack-operators] Swift ACL's together with Keystone (v3) integration

2016-05-03 Thread Saverio Proto
Hello Pieter,

I ran into the same problem today. Did you find pointers to more
up-to-date documentation? Were you able to configure the cross-tenant
read ACL?

thank you

Saverio


2016-04-20 13:48 GMT+02:00 Wijngaarden, Pieter van
:
> Hi all,
>
> I’m playing around with a Swift cluster (Liberty) and cannot get the Swift
> ACL’s to work. My objective is to give users from one project (and thus
> Swift account?) selective access to specific containers in another project.
>
> According to
> http://docs.openstack.org/developer/swift/middleware.html#keystoneauth, the
> swift/keystoneauth plugin should support cross-tenant (now cross-project)
> ACL’s by setting the read-acl of a container to something like:
>
> swift post <container> --read-acl '<project_id>:<user_id>'
>
> Using a project name instead of a UUID should be supported if all projects
> are in the default domain.
>
> But if I set this for a user in a different project / different swift
> account, it doesn’t seem to work. The last reference to Swift container
> ACL’s from the archives is somewhere in 2011..
>
> I have found a few Swift ACL examples / tutorials online, but they are all
> outdated or appear to use special / proprietary middleware. Does anybody
> have (or can anybody create) an example that is up-to-date for OpenStack
> Liberty or later, and shows container ACL’s together with Keystone
> integration?
>
> What I would like to do:
> - I have a bunch of users and projects in Keystone, and thus a bunch of
> (automatically created) Swift accounts
> - I would like to allow one specific user in a project (say project X) to
> access a container from a different project (Y)
> - And/or, I would like to allow all users in project X to access one
> specific container in project Y.
> Both these options should include listing the objects in the container, but
> exclude listing all containers in the other account.
>
> I hope there is someone who can help, thanks a lot in advance!
>
> With kind regards,
> Pieter van Wijngaarden
> System Architect
> Digital Pathology Solutions
> Philips Healthcare
>
> Veenpluis 4-6, Building QY-2.006, 5684 PC Best
> Tel: +31 6 2958 6736, Email: pieter.van.wijngaar...@philips.com
>
>
>
>
>   
> The information contained in this message may be confidential and legally
> protected under applicable law. The message is intended solely for the
> addressee(s). If you are not the intended recipient, you are hereby notified
> that any use, forwarding, dissemination, or reproduction of this message is
> strictly prohibited and may be unlawful. If you are not the intended
> recipient, please contact the sender by return e-mail and destroy all copies
> of the original message.
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] neutron operators please check this bug: Check if namespace exists before exec commands

2016-04-29 Thread Saverio Proto
Hello operators,

I am running Kilo and I found this bug:
https://bugs.launchpad.net/neutron/+bug/1573073

Because Neutron tries to perform operations over and over again on
namespaces that do not exist anymore, this kills the CPU of our
network node.

I wrote a patch for kilo that fixes the problem:
https://gist.github.com/zioproto/9aba7336b18e9769a85d55d520540456

Now, the patch is also for review upstream:
https://review.openstack.org/#/c/309050/

But from the review it looks like the patch is unlikely to be accepted
upstream, because the bug should not be reproducible in master.

I don't have a cluster running master or Mitaka. Can anyone test
whether this bug occurs in Mitaka or master?

thanks a lot for the help.

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] nova snapshots should dump all RAM to hypervisor disk ?

2016-04-24 Thread Saverio Proto
> We are in an even worst situation: we have flavors with 256GB of ram
> but only 100GB on the local hard disk, which means that we cannot
> snapshot VMs with this flavor.
>
> If there is any way to avoid saving the content of the ram to disk (or
> maybe there is a way to snapshot the ram to, e.g., ceph), we would be
> very happy.

Hello Antonio,

I received new feedback in the OpenStack patch review
(https://review.openstack.org/#/c/295865/) pointing me to this:

https://github.com/openstack/nova/blob/82a684fb1ae1dd1bd49e2a8792a2456b4d3ab037/nova/conf/workarounds.py#L72

So it looks like live snapshots are disabled because of an old, buggy
libvirt. This process of dumping all the RAM to disk is not bad
design, but a necessary workaround for libvirt not being stable. It
makes sense now.

At the moment I am running libvirt version 1.2.12-0ubuntu14.4~cloud0.
Maybe I can disable the workaround and try to get faster snapshots?
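If I read that workarounds file correctly, trying it would mean
something like this in nova.conf on the compute nodes (untested on our
side):

  [workarounds]
  disable_libvirt_livesnapshot = False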

Does any other operator have feedback about this?

Thank you

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] nova snapshots should dump all RAM to hypervisor disk ?

2016-04-22 Thread Saverio Proto
Hello Operators,

One of the users of our cluster opened a ticket about a snapshot
corner case: it is not possible to snapshot an instance that is booted
from volume while the instance is paused. So I wrote this patch, and
from the discussion you can see that I learned a lot about snapshots:
https://review.openstack.org/#/c/295865/

While discussing the patch I found something that seemed totally
strange to me, so I want to check with the community whether this is
the expected behavior.

Scenario:
Openstack Kilo
libvirt
rbd storage for the images
instance booted from image

Now, the developers pointed out that when I snapshot an active
instance, nova does a "managedSave":
https://libvirt.org/html/libvirt-libvirt-domain.html#virDomainManagedSave

I thought there was a misunderstanding, because I did not see the
point of dumping the whole content of the RAM to disk.

I was surprised, checking on the hypervisor where the instance is
scheduled, to really see two temporary files created during the
snapshot process.

As soon as you click "snapshot" you will see this file:

/var/lib/libvirt/qemu/save/instance-0001cf8c.save

This file has the size of the RAM of the instance. In my case I had to
wait for 32 GB of RAM to be written to disk.

Once that is finished, this second process starts:

qemu-img convert -O raw
rbd:volumes/ee3a84c3-b870-4669-8847-6b9ac93a8eac_disk:id=cinder:conf=/etc/ceph/ceph.conf
/var/lib/nova/instances/snapshots/

OK, this convert is also slow, but that is already fixed in Mitaka:
Problem description -
http://www.sebastien-han.fr/blog/2015/10/05/openstack-nova-snapshots-on-ceph-rbd/
Patches that should solve the problem:
https://review.openstack.org/#/c/205282/
https://review.openstack.org/#/c/188244/
Merged for Mitaka -
https://blueprints.launchpad.net/nova/+spec/rbd-instance-snapshots


As a result you get a file whose name looks like a UUID in this
other folder:

ls /var/lib/nova/instances/snapshots/tmpWsKqvl/
51574e9140204c0f89c7d86fcf741579

So this means that when we take a snapshot of an active instance, we
dump the whole RAM into a temp file.

This has an impact for us because we have flavors with 32 GB of RAM.
Because our instances are completely rbd backed, we have small disks
on the compute nodes. Also, it takes time to dump 32 GB of RAM to disk
for nothing!

So, is calling managedSave the intended behavior? Or should nova just
make a call to libvirt to make sure that filesystem caches are written
to disk before snapshotting?

I tracked this call in git, and it looks like nova has been implemented
this way since 2012.

Please, operators, tell me that I configured something wrong and this is
not really how snapshots are implemented :) Or explain why the dump of
the whole RAM is needed :)

Any feedback is appreciated !!

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] how I re-compile UCA packages to carry my local patches

2016-04-04 Thread Saverio Proto
Hello there,

With the help of James I was finally able to properly compile Ubuntu
packages that carry local patches in my Kilo installation.

I documented the whole process here:

https://github.com/zioproto/ubuntu-cloud-archive-vagrant-vm
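In short, the flow boils down to something like this (simplified;
'nova' is just an example package, and the exact steps are in the repo):

  apt-get source nova                # fetch the UCA source package
  cd nova-*/
  export QUILT_PATCHES=debian/patches
  quilt import ../my-local-fix.patch # add the local patch to the series
  dch -i                             # bump the version with a local suffix
  debuild -us -uc                    # rebuild the .deb packages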

I hope this is useful for everyone. The notes I wrote can still be
improved. The goal is to make the process easy for everyone, so that
we can easily build a workflow to propose patches upstream to the
Ubuntu packages.

Thank you !

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [nova] how to recover instances with different hypervisor-id on the same compute node

2016-03-24 Thread Saverio Proto
When this kind of corner case happens I usually hack the database manually:
go into the nova database with mysql and set the new ID for those two instances.
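Something along these lines (a sketch only: the UUIDs are the two stuck
instances from your output, and the host/node values should be copied
from one of the instances that already shows up under hypervisor ID 92;
take a database backup first):

  UPDATE instances
     SET host = '<nova-compute service host>',
         node = '<hypervisor hostname as reported now>'
   WHERE uuid IN ('9688dcef-2836-496f-8b70-099638b73096',
                  'f7373cd6-96a0-4643-9137-732ea5353e94');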

my 2 cents

Saverio

2016-03-23 5:17 GMT+01:00 Rahul Sharma :
> Hi All,
>
> Due to a hostname change, we ended up having a new hypervisor-id for one of
> our compute nodes. It was already running two instances and then, we didn't
> catch the updated hypervisor-id. This ended up with new instances spawning
> up with new hypervisor-id.
>
> Now, the instances with older hypervisor-id are not seen in "virsh list",
> rebooting those instances keep them rebooting.
>
> [root (admin)]# nova hypervisor-servers compute-90
> +--+---+---+-+
> | ID   | Name  | Hypervisor ID |
> Hypervisor Hostname |
> +--+---+---+-+
> | 9688dcef-2836-496f-8b70-099638b73096 | instance-0712 | 40|
> compute-90.test.edu |
> | f7373cd6-96a0-4643-9137-732ea5353e94 | instance-0b74 | 40|
> compute-90.test.edu |
> | c8926585-a260-45cd-b008-71df2124b364 | instance-1270 | 92|
> compute-90.test.edu |
> | a0aa3f5f-d49b-43a6-8465-e7865bb68d57 | instance-18de | 92|
> compute-90.test.edu |
> | d729f9f4-fcae-4abe-803c-e9474e533a3b | instance-16e0 | 92|
> compute-90.test.edu |
> | 30a6a05d-a170-4105-9987-07a875152907 | instance-17e4 | 92|
> compute-90.test.edu |
> | 6e0fa25b-569d-4e9e-b57d-4c182c1c23ea | instance-18f8 | 92|
> compute-90.test.edu |
> | 5964f6cc-eec3-493a-81fe-7fb616c89a8f | instance-18fa | 92|
> compute-90.test.edu |
> +--+---+---+-+
>
> [root@compute-90]# virsh list
>  IdName   State
> 
>  112   instance-1270  running
>  207   instance-18de  running
>  178   instance-16e0  running
>  189   instance-17e4  running
>  325   instance-18f8  running
>  336   instance-18fa  running
>
> Instances not visible: instance-0712 and instance-0b74
>
> Is there a way to recover from this step? I can delete the old services(nova
> service-delete ), but I am confused whether it will lead to loss of
> already running instances with old hypervisor-id? Is there a way I can
> update the state of those instances to use hypervisor-id as 92 instead of
> 40? Kindly do let me know if you have any suggestions.
>
> Thanks.
>
> Rahul Sharma
> MS in Computer Science, 2016
> College of Computer and Information Science, Northeastern University
> Mobile:  801-706-7860
> Email: rahulsharma...@gmail.com
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [puppet] [cinder] - some snapshots are there, but they are hidden in the snapshot-list

2016-03-24 Thread Saverio Proto
Hello there,

because I need to tune osapi_max_limit I wrote this puppet patch

https://review.openstack.org/#/c/296931/

but I am very bad at coding :) please review :)
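In the meantime we simply set the option directly with the
cinder_config resource from puppet-cinder; a minimal sketch (the value
is just an example, and cinder-api needs a restart afterwards):

  cinder_config { 'DEFAULT/osapi_max_limit':
    value => 5000,
  }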

Saverio



2016-03-23 11:17 GMT+01:00 Saverio Proto <ziopr...@gmail.com>:
> It looks like not all snapshots are listed because I am hitting the
> osapi_max_limit
>
> # The maximum number of items that a collection resource
> # returns in a single response (integer value)
> #osapi_max_limit=1000
>
> Saverio
>
>
> 2016-03-23 9:29 GMT+01:00 Saverio Proto <ziopr...@gmail.com>:
>> Hello,
>>
>> I upgraded to Kilo and I see the very same bug.
>>
>> What is the right way to fill a bug to Cinder, is it here ?
>> https://bugs.launchpad.net/cinder
>>
>> Will it be considered if the Bug is in Kilo ?
>>
>> thank you
>>
>> Saverio
>>
>>
>> 2016-03-16 13:42 GMT+01:00 Saverio Proto <ziopr...@gmail.com>:
>>> Hello there,
>>>
>>> I could not believe my eyes today.
>>> We are talking about Cinder in Juno.
>>> So we have a use case where we have to be sure we do snapshots
>>> everyday, so we have a script that tries to do the snapshot in this
>>> way:
>>>
>>> while [ `/usr/bin/cinder snapshot-list | grep $DISPLAY_NAME | wc -l` -eq 0 
>>> ]; do
>>> /usr/bin/cinder snapshot-create --force True --display-name
>>> $DISPLAY_NAME $VOLUME_ID >> $LOG_FILE 2>&1
>>> sleep 5
>>> done
>>>
>>> Looking over the log file we noticed that every snapshot required 3 or
>>> 4 tries to go through. Digging into the logs I found that I have some
>>> snapshots that are valid, but they do not appear in the snapshot-list.
>>>
>>> This is what I mean:
>>>
>>> $ cinder snapshot-list | grep d0a750fd-37dd-43ee-87d8-aacbcacc5dc4
>>> $ cinder snapshot-show d0a750fd-37dd-43ee-87d8-aacbcacc5dc4
>>> ++--+
>>> |  Property  |Value
>>>  |
>>> ++--+
>>> | created_at |
>>> 2016-03-15T23:30:21.00  |
>>> |display_description | None
>>>  |
>>> |display_name| backup--130-20160316-00:30 |
>>> | id |
>>> d0a750fd-37dd-43ee-87d8-aacbcacc5dc4 |
>>> |  metadata  |  {}
>>>  |
>>> |  os-extended-snapshot-attributes:progress  | 100%
>>>  |
>>> | os-extended-snapshot-attributes:project_id |
>>> bdf747f88fee4b5a9faca3da7c26754c   |
>>> |size| 2048
>>>  |
>>> |   status   |  available
>>>  |
>>> | volume_id  |
>>> 34d9a7a5-0383-4e59-8293-e9f9d4989490 |
>>> ++--+
>>> $
>>>
>>> Very same thing with the latest openstack client
>>>
>>> macsp:~ proto$ openstack snapshot list | grep
>>> d0a750fd-37dd-43ee-87d8-aacbcacc5dc4
>>> macsp:~ proto$ openstack snapshot show d0a750fd-37dd-43ee-87d8-aacbcacc5dc4
>>> ++--+
>>> | Field  | Value
>>>  |
>>> ++--+
>>> | created_at |
>>> 2016-03-15T23:30:21.00   |
>>> | description| None
>>>  |
>>> | id |
>>> d0a750fd-37dd-43ee-87d8-aacbcacc5dc4 |
>>> | name   | backup--130-20160316-00:30 |
>>> | os-extended-snapshot-attributes:progress   | 100%
>>>  |
>>> | os-extended-snapshot-attributes:project_id |
>>> bdf747f88fee4b5a9faca3da7c26754c |
>>> | properties |
>>>  |
>>> | size   | 2048
>>>  |
>>> | status | available
>>>  |
>>> | volume_id  |
>>> 34d9a7a5-0383-4e59-8293-e9f9d4989490 |
>>> ++--+
>>>
>>> I checked the entry in the mysql database for this snapshots, they
>>> look exactly identical to other entries where the snapshot appear.
>>>
>>> Is this a old Juno bug ?
>>>
>>> thank you
>>>
>>> Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [cinder] - some snapshots are there, but they are hidden in the snapshot-list

2016-03-23 Thread Saverio Proto
Hello,

I upgraded to Kilo and I see the very same bug.

What is the right way to file a bug against Cinder? Is it here?
https://bugs.launchpad.net/cinder

Will it be considered if the bug is in Kilo?

thank you

Saverio


2016-03-16 13:42 GMT+01:00 Saverio Proto <ziopr...@gmail.com>:
> Hello there,
>
> I could not believe my eyes today.
> We are talking about Cinder in Juno.
> So we have a use case where we have to be sure we do snapshots
> everyday, so we have a script that tries to do the snapshot in this
> way:
>
> while [ `/usr/bin/cinder snapshot-list | grep $DISPLAY_NAME | wc -l` -eq 0 ]; do
>     /usr/bin/cinder snapshot-create --force True --display-name $DISPLAY_NAME $VOLUME_ID >> $LOG_FILE 2>&1
>     sleep 5
> done
>
> Looking over the log file we noticed that every snapshot required 3 or
> 4 tries to go through. Digging into the logs I found that I have some
> snapshots that are valid, but they do not appear in the snapshot-list.
>
> This is what I mean:
>
> $ cinder snapshot-list | grep d0a750fd-37dd-43ee-87d8-aacbcacc5dc4
> $ cinder snapshot-show d0a750fd-37dd-43ee-87d8-aacbcacc5dc4
> +---------------------------------------------+--------------------------------------+
> | Property                                    | Value                                |
> +---------------------------------------------+--------------------------------------+
> | created_at                                  | 2016-03-15T23:30:21.00               |
> | display_description                         | None                                 |
> | display_name                                | backup--130-20160316-00:30           |
> | id                                          | d0a750fd-37dd-43ee-87d8-aacbcacc5dc4 |
> | metadata                                    | {}                                   |
> | os-extended-snapshot-attributes:progress    | 100%                                 |
> | os-extended-snapshot-attributes:project_id  | bdf747f88fee4b5a9faca3da7c26754c     |
> | size                                        | 2048                                 |
> | status                                      | available                            |
> | volume_id                                   | 34d9a7a5-0383-4e59-8293-e9f9d4989490 |
> +---------------------------------------------+--------------------------------------+
> $
>
> Very same thing with the latest openstack client
>
> macsp:~ proto$ openstack snapshot list | grep
> d0a750fd-37dd-43ee-87d8-aacbcacc5dc4
> macsp:~ proto$ openstack snapshot show d0a750fd-37dd-43ee-87d8-aacbcacc5dc4
> +---------------------------------------------+--------------------------------------+
> | Field                                       | Value                                |
> +---------------------------------------------+--------------------------------------+
> | created_at                                  | 2016-03-15T23:30:21.00               |
> | description                                 | None                                 |
> | id                                          | d0a750fd-37dd-43ee-87d8-aacbcacc5dc4 |
> | name                                        | backup--130-20160316-00:30           |
> | os-extended-snapshot-attributes:progress    | 100%                                 |
> | os-extended-snapshot-attributes:project_id  | bdf747f88fee4b5a9faca3da7c26754c     |
> | properties                                  |                                      |
> | size                                        | 2048                                 |
> | status                                      | available                            |
> | volume_id                                   | 34d9a7a5-0383-4e59-8293-e9f9d4989490 |
> +---------------------------------------------+--------------------------------------+
>
> I checked the entries in the mysql database for these snapshots; they
> look exactly identical to other entries where the snapshot does appear.
>
> Is this an old Juno bug ?
>
> thank you
>
> Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [nova] create image from paused VM refused with http 409

2016-03-22 Thread Saverio Proto
I am not a developer but I tried my best !
https://review.openstack.org/#/c/295865/

I applied the patch in my staging system and it fixes the problem :)

Saverio


2016-03-22 13:46 GMT+01:00 Saverio Proto <ziopr...@gmail.com>:
> I found the problem. It happens only when the instance is booted from volume.
>
> I think api.py should be patched at the function
> snapshot_volume_backed (line 2250) in a similar way as
> https://review.openstack.org/#/c/116789/
>
> Looks like the bug is also there in master, in file nova/compute/api.py line 2296.
>
> I will try to submit a patch with Gerrit
>
> Saverio
>
> 2016-03-22 13:33 GMT+01:00 Saverio Proto <ziopr...@gmail.com>:
>> Hello there,
>>
>> I used to do this in Juno, and now I upgraded to Kilo and it is not
>> working anymore.
>>
>> macsp:~ proto$ openstack server image create --name test
>> 81da19c6-efbe-4002-b4e8-5ce352ffdd14
>> Cannot 'createImage' instance 81da19c6-efbe-4002-b4e8-5ce352ffdd14
>> while it is in vm_state paused (HTTP 409) (Request-ID:
>> req-0e7f339d-a236-4584-a44c-49daed7558ee)
>>
>> Is this a change of behaviour ?
>>
>> I also found this:
>>
>> https://review.openstack.org/#/c/116789/
>>
>> Should not this work ? my nova-api --version is 2015.1.2
>>
>> Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [nova] create image from paused VM refused with http 409

2016-03-22 Thread Saverio Proto
Hello there,

I used to do this in Juno, and now I upgraded to Kilo and it is not
working anymore.

macsp:~ proto$ openstack server image create --name test
81da19c6-efbe-4002-b4e8-5ce352ffdd14
Cannot 'createImage' instance 81da19c6-efbe-4002-b4e8-5ce352ffdd14
while it is in vm_state paused (HTTP 409) (Request-ID:
req-0e7f339d-a236-4584-a44c-49daed7558ee)

Is this a change of behaviour ?

I also found this:

https://review.openstack.org/#/c/116789/

Shouldn't this work? My nova-api --version is 2015.1.2

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] RAID / stripe block storage volumes

2016-03-06 Thread Saverio Proto
> In our environments, we offer two types of storage. Tenants can either use
> Ceph/RBD and trade speed/latency for reliability and protection against
> physical disk failures, or they can launch instances that are realized as
> LVs on an LVM VG that we create on top of a RAID 0 spanning all but the OS
> disk on the hypervisor. This lets the users elect to go all-in on speed and
[..CUT..]

Hello Ned,

how do you implement this? What is the user experience of having
two types of storage like?

We generally have Ceph/RBD as the storage backend; however, we have a
use case where we need LVM because latency is important.

To cope with our use case we have different flavors: by setting a
flavor key on a specific flavor you can force the VM to be scheduled
to a specific host aggregate. Then we have one host aggregate for
hypervisors supporting the LVM storage and another host aggregate for
hypervisors running the default Ceph/RBD backend.
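Concretely, the scheduling part looks roughly like this on our side
(the names are just examples, and it relies on the
AggregateInstanceExtraSpecsFilter being enabled in nova.conf):

  nova aggregate-create lvm-hosts
  nova aggregate-set-metadata lvm-hosts storagetype=lvm
  nova flavor-key m1.large.lvm set aggregate_instance_extra_specs:storagetype=lvm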

However, let's say the user just creates a Cinder volume in Horizon.
In that case the volume is created on Ceph/RBD. Is there a solution to
support multiple storage backends at the same time and let the user
decide in Horizon which one to use?

Thanks.

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Horizon bug fixed in Liberty, how should we ask a backport to Kilo ?

2016-03-04 Thread Saverio Proto
Thanks for merging the patch so quickly! I will try today to build
new Ubuntu packages and ask for feedback at UCA.

> But yes, thank you for pointing this out. Bug fixes are mostly
> backported to Liberty release, in some rare cases Kilo might get a fix
> as well.

This is an endless discussion :)
But please look here at page 20:
https://www.openstack.org/assets/survey/Public-User-Survey-Report.pdf

Most production systems are on old releases.
Kilo is not that old! And I am really curious to see the results of
the next survey.
We would like to avoid having each installation run custom packages.

I think this email thread was a good example of how we can improve
operator-to-developer feedback in order to get better stable branches.
Operators should not be afraid of proposing a cherry-pick to a stable
branch, especially if it fixes a non-corner-case problem and is easy
to review.

my 2 cents :)

Cheers,

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Horizon bug fixed in Liberty, how should we ask a backport to Kilo ?

2016-03-03 Thread Saverio Proto
Thanks Matt! I logged in and now I see the cherry-pick button. I
should always use the web interface while logged in :)
Thank you

Saverio


2016-03-03 16:51 GMT+01:00 Matt Fischer <tadow...@gmail.com>:
> The backport is pretty easy. You click on Cherry pick and if there's no
> conflict it just works. Like so:
>
> https://review.openstack.org/#/c/287928/
>
> It still needs to go through the review process so you will need to ping
> some horizon developers in IRC.
>
> Getting that packaged may take longer.
>
> On Mar 3, 2016 8:43 AM, "Saverio Proto" <ziopr...@gmail.com> wrote:
>>
>> Hello there,
>>
>> in Manchester we had this interesting discussion about asking to
>> backport bugfixes in order to avoid to build own packages.
>>
>> https://etherpad.openstack.org/p/MAN-ops-Upgrade-patches-packaging
>>
>> We run Openstack with Ubuntu and we use the Ubuntu Cloud Archive.
>>
>> We are upgrading our production system to from Juno to Kilo and we run
>> into this bug
>>
>> https://review.openstack.org/#/c/185403/
>>
>> This is the commit that fixes the bug (Horizon repository):
>> git show 8201d65c
>>
>> I remember someone saying that this bug can be tagged somewhere to
>> request the backport to Kilo ... how do I do this ??
>>
>> I have put James in Cc: because he was super kind in Manchester, and
>> he showed me how to correctly package for Ubuntu.
>> Once the change is merged into the stable/kilo branch I will try to kindly
>> ask the people from Ubuntu Cloud Archive to refresh the Kilo packages.
>> Or even better I will try to help them with the packaging and testing.
>>
>> Let's try to start this workflow to see how long it takes ?? Do you
>> guys consider this example a good follow up to the discussion we had
>> in Manchester ?
>>
>> If we give up, we will have to build our own Horizon packages for Kilo
>> even before going to production ;)
>>
>> thanks !
>>
>> Saverio
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Horizon bug fixed in Liberty, how should we ask a backport to Kilo ?

2016-03-03 Thread Saverio Proto
Hello there,

in Manchester we had this interesting discussion about asking to
backport bugfixes in order to avoid to build own packages.

https://etherpad.openstack.org/p/MAN-ops-Upgrade-patches-packaging

We run Openstack with Ubuntu and we use the Ubuntu Cloud Archive.

We are upgrading our production system to from Juno to Kilo and we run
into this bug

https://review.openstack.org/#/c/185403/

This is the commit that fixes the bug (Horizon repository):
git show 8201d65c

I remember someone saying that this bug can be tagged somewhere to
request the backport to Kilo ... how do I do this ??

I have put James in Cc: because he was super kind in Manchester, and
he showed me how to correctly package for Ubuntu.
Once the change is merged into the stable/kilo branch I will try to kindly
ask the people from Ubuntu Cloud Archive to refresh the Kilo packages.
Or even better I will try to help them with the packaging and testing.

Let's try to start this workflow to see how long it takes ?? Do you
guys consider this example a good follow up to the discussion we had
in Manchester ?

If we give up, we will have to build our own Horizon packages for Kilo
even before going to production ;)

thanks !

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Snapshots taking long time

2016-03-02 Thread Saverio Proto
Hello Andreas,

What kind of snapshot are you doing?

1) Snapshot of an instance running on an ephemeral volume?
2) Snapshot of an instance booted from volume?
3) Snapshot of a volume?

In case 1, the ephemeral volume lives in the volume pool with the name
<instance uuid>_disk. When you snapshot, this must be read to disk, and
then an image is generated and uploaded to the Glance pool.
This is slow, but the patch to make this faster, entirely within Ceph,
has already been merged for Mitaka.
Look here:
https://etherpad.openstack.org/p/MAN-ops-Ceph
under
"Instance (ephemeral disk) snap CoW directly to Glance pool"


You might also want to measure how fast your Ceph can take a snapshot,
without OpenStack.

Assuming that your Ceph pool for volumes is called volumespool, try to
make a snapshot by hand, bypassing OpenStack, using the rbd CLI:

rbd -p volumespool snap create volume-<volume id>@mytestsnapshotname

Does it take a long time as well?

Saverio


2016-03-03 8:01 GMT+01:00 Andreas Vallin :
> We are currently installing a new openstack cluster (Liberty) with
> openstack-ansible and an already existing ceph cluster. We have both images
> and volumes located in ceph with rbd. My current problem is that snapshots
> take a very long time and I can see that snapshots are temporary created
> under /var/lib/nova/instances/snapshots/tmp on the compute node, I thought
> that this would not be needed when using ceph? The instance that I am
> creating a snapshot of uses a raw image that is protected. What can cause
> this behavior?
>
> Thanks,
> Andreas
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] best practice to manage multiple Data center using openstack

2016-01-29 Thread Saverio Proto
Hello Jeff,

Your question is very general.

As a general answer, I can suggest using a configuration management
system such as Puppet or Ansible to take care of the servers. It is
easier to keep servers in different datacenters running the same
package versions this way.

I hope this helps.

Saverio


2016-01-29 8:49 GMT+01:00 XueSong Ma :
> Hi:
> Can anyone tell me the best way to manage multiple DCs using one
> OpenStack system, or the good reasons for doing so?
> We have several OpenStack environments (multiple regions), but it is
> really difficult to manage them: Python code, software updates, etc.
> Thanks a lot!
>
> Jeff
>
>
>
>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] how to get glance images for a specific tenant with the openstack client ?

2016-01-27 Thread Saverio Proto
> We have an image promotion process that does this for us.  The command I use
> to get images from a specific tenant is:
>
> glance --os-image-api-version 1 image-list --owner=
>
> I'm sure using the v1 API will make some cringe, but I haven't found
> anything similar in the v2 API.
>

I used this solution, and it worked very nicely for me.

Also, 'openstack image list --long' and then grepping for the project
ID does the job.

In the long term I would like to use only python-openstackclient.
However, having to pipe output into grep is kind of slow for large
setups. How do I properly ask the developers to include something like:

openstack image list --project <project>

in the next release cycle?
Should I write a spec for this kind of change?

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] how to get glance images for a specific tenant with the openstack client ?

2016-01-25 Thread Saverio Proto
Hello there,

I need to delete some users and tenants from my public cloud. Before
deleting the users and tenants from Keystone, I need to delete all the
resources in those tenants.

I am stuck listing the Glance images uploaded in a specific tenant.
I cannot find a way; I always get either all the images in the
system, or just the ones of the active OS_TENANT_NAME.

openstack help image list
usage: openstack image list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN]
[--max-width ] [--noindent]
[--quote {all,minimal,none,nonnumeric}]
[--public | --private | --shared]
[--property 

[Openstack-operators] how to use the new openstack client to nova snapshot ??

2016-01-19 Thread Saverio Proto
Hello there,

I am trying to stick to the new openstack client CLI, but sometimes I
get completely lost.

So with python-novaclient I used to do instance snapshots like this:

nova image-create <instance> <snapshot name>


I just cannot understand how to do the same with the new client. Could
someone explain ?

openstack image create 

thanks

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [log] LOG.exception should contain an exception message or not?

2016-01-15 Thread Saverio Proto
I think the stack trace is useful when debugging, even as an operator.

Instead of changing the logs to satisfy the tools, it is the tools
that should be able to parse the logs correctly.

Saverio

2016-01-05 14:08 GMT+01:00 Akihiro Motoki :
> Hi,
>
> # cross-posting to -dev and -operators ML
>
> In the current most OpenStack implementation,
> when we use LOG.exception, we don't pass an exception message to 
> LOG.exception:
>
>   LOG.exception(_LE("Error while processing VIF ports"))
>
> IIUC it is because the exception message will be logged at the end of
> a corresponding stacktrace.
>
> We will get a line like (Full log: http://paste.openstack.org/show/483018/):
>
>   ERROR ... Error while processing VIF ports
>
> [Problem]
>
> Many logging tools are still line-oriented (though logstash or fluentd
> can handle multiple lines).
> An ERROR line only contains a summary message without the actual failure 
> reason.
> This makes difficult for line-oriented logging tools to classify error logs.
>
> [Proposal]
>
> My proposal is to pass an exception message to LOG.exception like:
>
>   except <SomeException> as exc:
>  LOG.exception(_LE("Error while processing VIF ports: %s"), exc)
>
> This alllows line-oriented logging tools to classify error logs more easily.
>
>
> Thought?
>
> Thanks,
> Akihiro
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] cinder-api with rbd driver ignores ceph.conf

2015-12-08 Thread Saverio Proto
Hello there,

Yesterday I finally found a fast way to backport the rbd driver patch
to the Juno glance_store.

I found this repository with the right patch I was looking for:
https://github.com/vumrao/glance_store.git (branch rbd_default_features)

I reworked the patch on top of stable/juno:
https://github.com/zioproto/glance_store/commit/564129f865e10e7fcd5378a0914847323139f901

and I created my Ubuntu packages.

Now everything works. I am testing the deb packages in my staging
cluster. Both cinder and glance now honor the ceph.conf default
features, and all volumes and images are created in the Ceph backend
with the object map.
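To double-check on the Ceph side, a quick verification sketch (the
volume and image names are placeholders):

rbd -p volumes info volume-<uuid> | grep -E 'features|flags'
rbd -p images info <image_uuid> | grep -E 'features|flags'
# with "rbd default features = 13" we expect layering, exclusive-lock,
# object-map, and no "object map invalid" flag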

If anyone is running Juno and wants to enable this feature, we have
packages published here:
http://ubuntu.mirror.cloud.switch.ch/engines/packages/

Saverio



2015-11-26 11:36 GMT+01:00 Saverio Proto <ziopr...@gmail.com>:
> Hello,
>
> I think it is worth to update the list on this issue, because a lot of
> operators are running Juno, and might want to enable the object map
> feature in their rbd backend.
>
> our cinder backport seems to work great.
>
> however, most of volumes are CoW from glance images. Glance uses as
> well the rbd backend.
>
> This means that if glance images do not have the rbd object map
> features, the cinder volumes will have flags "object map invalid".
>
> So, we are now trying to backport this feature of the rbd driver in
> glance as well.
>
> Saverio
>
>
>
> 2015-11-24 13:12 GMT+01:00 Saverio Proto <ziopr...@gmail.com>:
>> Hello there,
>>
>> we were able finally to backport the patch to Juno:
>> https://github.com/zioproto/cinder/tree/backport-ceph-object-map
>>
>> we are testing this version. Everything good so far.
>>
>> this will require in your ceph.conf
>> rbd default format = 2
>> rbd default features = 13
>>
>> if anyone is willing to test this on his Juno setup I can also share
>> .deb packages for Ubuntu
>>
>> Saverio
>>
>>
>>
>> 2015-11-16 16:21 GMT+01:00 Saverio Proto <ziopr...@gmail.com>:
>>> Thanks,
>>>
>>> I tried to backport this patch to Juno but it is not that trivial for
>>> me. I have 2 tests failing, about volume cloning and create a volume
>>> without layering.
>>>
>>> https://github.com/zioproto/cinder/commit/0d26cae585f54c7bda5ba5b423d8d9ddc87e0b34
>>> https://github.com/zioproto/cinder/commits/backport-ceph-object-map
>>>
>>> I guess I will stop trying to backport this patch and wait for the
>>> upgrade to Kilo of our Openstack installation to have the feature.
>>>
>>> If anyone ever backported this feature to Juno it would be nice to
>>> know, so I can use the patch to generate deb packages.
>>>
>>> thanks
>>>
>>> Saverio
>>>
>>> 2015-11-12 17:55 GMT+01:00 Josh Durgin <jdur...@redhat.com>:
>>>> On 11/12/2015 07:41 AM, Saverio Proto wrote:
>>>>>
>>>>> So here is my best guess.
>>>>> Could be that I am missing this patch ?
>>>>>
>>>>>
>>>>> https://github.com/openstack/cinder/commit/6211d8fa2033c2a607c20667110c5913cf60dd53
>>>>
>>>>
>>>> Exactly, you need that patch for cinder to use rbd_default_features
>>>> from ceph.conf instead of its own default of only layering.
>>>>
>>>> In infernalis and later version of ceph you can also add object map to
>>>> existing rbd images via the 'rbd feature enable' and 'rbd object-map
>>>> rebuild' commands.
>>>>
>>>> Josh
>>>>
>>>>> proto@controller:~$ apt-cache policy python-cinder
>>>>> python-cinder:
>>>>>Installed: 1:2014.2.3-0ubuntu1.1~cloud0
>>>>>Candidate: 1:2014.2.3-0ubuntu1.1~cloud0
>>>>>
>>>>>
>>>>> Thanks
>>>>>
>>>>> Saverio
>>>>>
>>>>>
>>>>>
>>>>> 2015-11-12 16:25 GMT+01:00 Saverio Proto <ziopr...@gmail.com>:
>>>>>>
>>>>>> Hello there,
>>>>>>
>>>>>> I am investigating why my cinder is slow deleting volumes.
>>>>>>
>>>>>> you might remember my email from few days ago with subject:
>>>>>> "cinder volume_clear=zero makes sense with rbd ?"
>>>>>>
>>>>>> so it comes out that volume_clear has nothing to do with the rbd driver.
>>>>>>
>>>>>> cinder was not guilty, it was re

[Openstack-operators] Juno neutron - Tenant Network with multiple routers, how to nat/filter ?

2015-11-27 Thread Saverio Proto
Hello,

I have a cloud user that is trying to implement the following topology

ext_net <|R1|>  internal_net  <|R2|>  dbservers_network

where
- internal_net: 10.0.2.0/24
- dbservers_net: 10.0.3.0/24

Now according to the documentation:
http://docs.openstack.org/admin-guide-cloud/networking_adv-features.html

My user was able to set up the necessary static routes on R1 to reach
the dbservers_network, and on R2 to have a default route via R1.
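For reference, this is roughly how the static route on R1 can be set
from the CLI (the nexthop is a placeholder for R2's address on
internal_net; a similar route entry can provide R2's default, assuming
the extension accepts 0.0.0.0/0 as a destination):

neutron router-update R1 --routes type=dict list=true destination=10.0.3.0/24,nexthop=<R2_addr_on_internal_net>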

However, it seems impossible to manipulate the NAT rules on R1 and R2.
R1, for example, will SNAT traffic only for source IPs in 10.0.2.0/24,
making it impossible for hosts in the dbservers_network to reach the
Internet.

To see the configuration, I can, as an operator, run iptables commands
inside the namespaces on the network node. But what can users do?
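For the record, this is roughly what I run on the network node to
inspect the NAT rules (the router UUID is a placeholder):

ip netns exec qrouter-<router_uuid> iptables -t nat -S
ip netns exec qrouter-<router_uuid> iptables -t nat -L POSTROUTING -n -v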

So far I am left with the feeling that it is not possible to have
two-hop topologies where hosts two hops away from the external gateway
can exchange traffic with the Internet. Is this really the case?

thanks !

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] How do I install specific versions of openstack/puppet-keystone

2015-11-26 Thread Saverio Proto
> Can you get R10k to NOT install dependencies listed in metadata etc.?

In my experience r10k will not try to install any dependencies.

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] cinder-api with rbd driver ignores ceph.conf

2015-11-26 Thread Saverio Proto
Hello,

I think it is worth updating the list on this issue, because a lot of
operators are running Juno and might want to enable the object map
feature in their rbd backend.

Our cinder backport seems to work great.

However, most volumes are CoW clones of glance images, and Glance uses
the rbd backend as well.

This means that if the glance images do not have the rbd object map
feature, the cinder volumes will carry the flag "object map invalid".

So we are now trying to backport this feature of the rbd driver to
glance as well.

Saverio



2015-11-24 13:12 GMT+01:00 Saverio Proto <ziopr...@gmail.com>:
> Hello there,
>
> we were able finally to backport the patch to Juno:
> https://github.com/zioproto/cinder/tree/backport-ceph-object-map
>
> we are testing this version. Everything good so far.
>
> this will require in your ceph.conf
> rbd default format = 2
> rbd default features = 13
>
> if anyone is willing to test this on his Juno setup I can also share
> .deb packages for Ubuntu
>
> Saverio
>
>
>
> 2015-11-16 16:21 GMT+01:00 Saverio Proto <ziopr...@gmail.com>:
>> Thanks,
>>
>> I tried to backport this patch to Juno but it is not that trivial for
>> me. I have 2 tests failing, about volume cloning and create a volume
>> without layering.
>>
>> https://github.com/zioproto/cinder/commit/0d26cae585f54c7bda5ba5b423d8d9ddc87e0b34
>> https://github.com/zioproto/cinder/commits/backport-ceph-object-map
>>
>> I guess I will stop trying to backport this patch and wait for the
>> upgrade to Kilo of our Openstack installation to have the feature.
>>
>> If anyone ever backported this feature to Juno it would be nice to
>> know, so I can use the patch to generate deb packages.
>>
>> thanks
>>
>> Saverio
>>
>> 2015-11-12 17:55 GMT+01:00 Josh Durgin <jdur...@redhat.com>:
>>> On 11/12/2015 07:41 AM, Saverio Proto wrote:
>>>>
>>>> So here is my best guess.
>>>> Could be that I am missing this patch ?
>>>>
>>>>
>>>> https://github.com/openstack/cinder/commit/6211d8fa2033c2a607c20667110c5913cf60dd53
>>>
>>>
>>> Exactly, you need that patch for cinder to use rbd_default_features
>>> from ceph.conf instead of its own default of only layering.
>>>
>>> In infernalis and later version of ceph you can also add object map to
>>> existing rbd images via the 'rbd feature enable' and 'rbd object-map
>>> rebuild' commands.
>>>
>>> Josh
>>>
>>>> proto@controller:~$ apt-cache policy python-cinder
>>>> python-cinder:
>>>>Installed: 1:2014.2.3-0ubuntu1.1~cloud0
>>>>Candidate: 1:2014.2.3-0ubuntu1.1~cloud0
>>>>
>>>>
>>>> Thanks
>>>>
>>>> Saverio
>>>>
>>>>
>>>>
>>>> 2015-11-12 16:25 GMT+01:00 Saverio Proto <ziopr...@gmail.com>:
>>>>>
>>>>> Hello there,
>>>>>
>>>>> I am investigating why my cinder is slow deleting volumes.
>>>>>
>>>>> you might remember my email from few days ago with subject:
>>>>> "cinder volume_clear=zero makes sense with rbd ?"
>>>>>
>>>>> so it comes out that volume_clear has nothing to do with the rbd driver.
>>>>>
>>>>> cinder was not guilty, it was really ceph rbd slow itself to delete big
>>>>> volumes.
>>>>>
>>>>> I was able to reproduce the slowness just using the rbd client.
>>>>>
>>>>> I was also able to fix the slowness just using the rbd client :)
>>>>>
>>>>> This is fixed in ceph hammer release, introducing a new feature.
>>>>>
>>>>>
>>>>> http://www.sebastien-han.fr/blog/2015/07/06/ceph-enable-the-object-map-feature/
>>>>>
>>>>> Enabling the object map feature rbd is now super fast to delete large
>>>>> volumes.
>>>>>
>>>>> However how I am in trouble with cinder. Looks like my cinder-api
>>>>> (running juno here) ignores the changes in my ceph.conf file.
>>>>>
>>>>> cat cinder.conf | grep rbd
>>>>>
>>>>> volume_driver=cinder.volume.drivers.rbd.RBDDriver
>>>>> rbd_user=cinder
>>>>> rbd_max_clone_depth=5
>>>>> rbd_ceph_conf=/etc/ceph/ceph.conf
>>>>> rbd_flatten_volume_from_snapshot=False
>>>>> rbd_pool=volumes
>>>>> 

Re: [Openstack-operators] How do I install specific versions of openstack/puppet-keystone

2015-11-25 Thread Saverio Proto
Hello,

You can use r10k.

Go into an empty folder and create a file called Puppetfile with this content:

mod 'openstack-ceilometer'
mod 'openstack-cinder'
mod 'openstack-glance'
mod 'openstack-heat'
mod 'openstack-horizon'
mod 'openstack-keystone'
mod 'openstack-neutron'
mod 'openstack-nova'
mod 'openstack-openstack_extras'
mod 'openstack-openstacklib'
mod 'openstack-vswitch'

Then type the commands:
gem install r10k
r10k puppetfile install -v

Look at the r10k documentation for how to specify a version number for the modules.
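As a minimal sketch (the version number below is hypothetical), pinning
a module means changing its line in the Puppetfile and re-running r10k:

# in the Puppetfile, e.g.:
#   mod 'openstack-keystone', '6.1.0'
r10k puppetfile install -v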

Saverio



2015-11-25 18:43 GMT+01:00 Oleksiy Molchanov :
> Hi,
>
> You can provide --version parameter to 'puppet module install' or even use
> puppet-librarian with puppet in standalone mode. This tool is solving all
> your issues described.
>
> BR,
> Oleksiy.
>
> On Wed, Nov 25, 2015 at 6:16 PM, Russell Cecala 
> wrote:
>>
>> Hi,
>>
>> I am struggling with setting up OpenStack via the OpenStack community
>> puppet modules.  For example
>> https://github.com/openstack/puppet-keystone/tree/stable/kilo
>>
>> If I do what the README.md file says to do ...
>>
>> example% puppet module install puppetlabs/keystone
>>
>> What release of the module would I get?  Do I get Liberty, Kilo, Juno?
>> And what if I needed to be able to install the Liberty version on one
>> system
>> but need the Juno version for yet another system?  How can I ensure the
>> the right dependencies like cprice404-inifile and puppetlabs-mysql get
>> installed?
>>
>> Thanks
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Neutron getting stuck creating namespaces

2015-11-24 Thread Saverio Proto
Hello Xav,

What version of OpenStack are you running?

thank you

Saverio


2015-11-23 20:04 GMT+01:00 Xav Paice :
> Hi,
>
> Over the last few months we've had a few incidents where the process to
> create network namespaces (Neutron, OVS) on the network nodes gets 'stuck'
> and prevents not only the router it's trying to create from finishing, but
> all further namespace operations too.
>
> This has usually finished up with either us rebooting the node pretty fast
> afterwards, or the node rebooting itself.
>
> It looks very much like we're affected by
> https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1403152 but the notes
> say it's fixed in the kernel we're running.  I've asked the clever person
> who checked it to make some extra notes in the bug report.
>
> It looks very much like when we have a bunch of load on the box the thing is
> more likely to trigger - I was wondering if other ops have a max ratio of
> routers per network node?  I would have thought our current max of 150
> routers per node would be pretty light, but with the dhcp namespaces as well
> that's ~450 namespaces on a box and maybe that's an issue?
>
> Thanks
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Neutron getting stuck creating namespaces

2015-11-24 Thread Saverio Proto
Hello Xav,

We also had problems with namespaces in Juno, maybe a little different
from what you describe.

We are running about 250 namespaces on our network node. When we
reboot the network node we observe that some namespaces have qr-* and
qg-* interfaces missing.

We believe that is because the Neutron control plane in Juno performs
very badly. This is probably fixed in Kilo.

To work around it, after the network node is up and running, we reset
the routers whose namespaces have missing interfaces:

  neutron router-update <router_id> --admin-state-up false
  sleep 5
  neutron router-update <router_id> --admin-state-up true
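A rough sketch of how the affected routers can be spotted on the
network node (not our exact script):

for ns in $(ip netns list | awk '{print $1}' | grep '^qrouter-'); do
  ip netns exec "$ns" ip -o link | grep -q 'qr-' || echo "$ns has no qr- interface"
done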

Saverio





2015-11-24 9:51 GMT+01:00 Xav Paice <xavpa...@gmail.com>:
> Neutron is Juno, on Trusty boxes with the 3.19 LTS kernel.  We're in the
> process of updating to Kilo, and onwards to Liberty.
>
> On 24 November 2015 at 21:24, Saverio Proto <ziopr...@gmail.com> wrote:
>>
>> Hello Xav,
>>
>> what version of Openstack are you running ?
>>
>> thank you
>>
>> Saverio
>>
>>
>> 2015-11-23 20:04 GMT+01:00 Xav Paice <xavpa...@gmail.com>:
>> > Hi,
>> >
>> > Over the last few months we've had a few incidents where the process to
>> > create network namespaces (Neutron, OVS) on the network nodes gets
>> > 'stuck'
>> > and prevents not only the router it's trying to create from finishing,
>> > but
>> > all further namespace operations too.
>> >
>> > This has usually finished up with either us rebooting the node pretty
>> > fast
>> > afterwards, or the node rebooting itself.
>> >
>> > It looks very much like we're affected by
>> > https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1403152 but the
>> > notes
>> > say it's fixed in the kernel we're running.  I've asked the clever
>> > person
>> > who checked it to make some extra notes in the bug report.
>> >
>> > It looks very much like when we have a bunch of load on the box the
>> > thing is
>> > more likely to trigger - I was wondering if other ops have a max ratio
>> > of
>> > routers per network node?  I would have thought our current max of 150
>> > routers per node would be pretty light, but with the dhcp namespaces as
>> > well
>> > that's ~450 namespaces on a box and maybe that's an issue?
>> >
>> > Thanks
>> >
>> > ___
>> > OpenStack-operators mailing list
>> > OpenStack-operators@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>> >
>
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] cinder-api with rbd driver ignores ceph.conf

2015-11-24 Thread Saverio Proto
Hello there,

We were finally able to backport the patch to Juno:
https://github.com/zioproto/cinder/tree/backport-ceph-object-map

We are testing this version; everything looks good so far.

This requires the following in your ceph.conf:
rbd default format = 2
rbd default features = 13
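(13 is the rbd feature bitmask: layering=1 + exclusive-lock=4 +
object-map=8.) A quick sanity check, outside of cinder, that the
defaults are picked up (the test image name is arbitrary, assuming
write access to the volumes pool):

rbd create --size 1024 volumes/feature-test
rbd info volumes/feature-test | grep features
rbd rm volumes/feature-test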

If anyone is willing to test this on their Juno setup, I can also share
.deb packages for Ubuntu.

Saverio



2015-11-16 16:21 GMT+01:00 Saverio Proto <ziopr...@gmail.com>:
> Thanks,
>
> I tried to backport this patch to Juno but it is not that trivial for
> me. I have 2 tests failing, about volume cloning and create a volume
> without layering.
>
> https://github.com/zioproto/cinder/commit/0d26cae585f54c7bda5ba5b423d8d9ddc87e0b34
> https://github.com/zioproto/cinder/commits/backport-ceph-object-map
>
> I guess I will stop trying to backport this patch and wait for the
> upgrade to Kilo of our Openstack installation to have the feature.
>
> If anyone ever backported this feature to Juno it would be nice to
> know, so I can use the patch to generate deb packages.
>
> thanks
>
> Saverio
>
> 2015-11-12 17:55 GMT+01:00 Josh Durgin <jdur...@redhat.com>:
>> On 11/12/2015 07:41 AM, Saverio Proto wrote:
>>>
>>> So here is my best guess.
>>> Could be that I am missing this patch ?
>>>
>>>
>>> https://github.com/openstack/cinder/commit/6211d8fa2033c2a607c20667110c5913cf60dd53
>>
>>
>> Exactly, you need that patch for cinder to use rbd_default_features
>> from ceph.conf instead of its own default of only layering.
>>
>> In infernalis and later version of ceph you can also add object map to
>> existing rbd images via the 'rbd feature enable' and 'rbd object-map
>> rebuild' commands.
>>
>> Josh
>>
>>> proto@controller:~$ apt-cache policy python-cinder
>>> python-cinder:
>>>Installed: 1:2014.2.3-0ubuntu1.1~cloud0
>>>Candidate: 1:2014.2.3-0ubuntu1.1~cloud0
>>>
>>>
>>> Thanks
>>>
>>> Saverio
>>>
>>>
>>>
>>> 2015-11-12 16:25 GMT+01:00 Saverio Proto <ziopr...@gmail.com>:
>>>>
>>>> Hello there,
>>>>
>>>> I am investigating why my cinder is slow deleting volumes.
>>>>
>>>> you might remember my email from few days ago with subject:
>>>> "cinder volume_clear=zero makes sense with rbd ?"
>>>>
>>>> so it comes out that volume_clear has nothing to do with the rbd driver.
>>>>
>>>> cinder was not guilty, it was really ceph rbd slow itself to delete big
>>>> volumes.
>>>>
>>>> I was able to reproduce the slowness just using the rbd client.
>>>>
>>>> I was also able to fix the slowness just using the rbd client :)
>>>>
>>>> This is fixed in ceph hammer release, introducing a new feature.
>>>>
>>>>
>>>> http://www.sebastien-han.fr/blog/2015/07/06/ceph-enable-the-object-map-feature/
>>>>
>>>> Enabling the object map feature rbd is now super fast to delete large
>>>> volumes.
>>>>
>>>> However how I am in trouble with cinder. Looks like my cinder-api
>>>> (running juno here) ignores the changes in my ceph.conf file.
>>>>
>>>> cat cinder.conf | grep rbd
>>>>
>>>> volume_driver=cinder.volume.drivers.rbd.RBDDriver
>>>> rbd_user=cinder
>>>> rbd_max_clone_depth=5
>>>> rbd_ceph_conf=/etc/ceph/ceph.conf
>>>> rbd_flatten_volume_from_snapshot=False
>>>> rbd_pool=volumes
>>>> rbd_secret_uuid=secret
>>>>
>>>> But when I create a volume with cinder, The options in ceph.conf are
>>>> ignored:
>>>>
>>>> cat /etc/ceph/ceph.conf | grep rbd
>>>> rbd default format = 2
>>>> rbd default features = 13
>>>>
>>>> But the volume:
>>>>
>>>> rbd image 'volume-78ca9968-77e8-4b68-9744-03b25b8068b1':
>>>>  size 102400 MB in 25600 objects
>>>>  order 22 (4096 kB objects)
>>>>  block_name_prefix: rbd_data.533f4356fe034
>>>>  format: 2
>>>>  features: layering
>>>>  flags:
>>>>
>>>>
>>>> so my first question is:
>>>>
>>>> does anyone use cinder with rbd driver and object map feature enabled
>>>> ? Does it work for anyone ?
>>>>
>>>> thank you
>>>>
>>>> Saverio
>>>
>>>
>>> ___
>>> OpenStack-operators mailing list
>>> OpenStack-operators@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>
>>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Running mixed stuff Juno & Kilo , Was: cinder-api with rbd driver ignores ceph.conf

2015-11-18 Thread Saverio Proto
Belmiro, are you running cinder-api and cinder-volume on a controller
node together with the other Juno services, or was cinder on a
dedicated server or container?

thanks

Saverio


2015-11-18 13:19 GMT+01:00 Belmiro Moreira
<moreira.belmiro.email.li...@gmail.com>:
> Hi Saverio,
> we always upgrade one component at a time.
> Cinder was one of the first components that we upgraded to kilo,
> meaning that other components (glance, nova, ...) were running Juno.
>
> We didn't have any problem with this setup.
>
> Belmiro
> CERN
>
> On Tue, Nov 17, 2015 at 6:01 PM, Saverio Proto <ziopr...@gmail.com> wrote:
>>
>> Hello there,
>>
>> I need to quickly find a workaround to be able to use ceph object map
>> features for cinder volumes with rbd backend.
>>
>> However, upgrading everything from Juno to Kilo will require a lot of
>> time for testing and updating all my puppet modules.
>>
>> Do you think it is feasible to start updating just cinder to Kilo ?
>> Will it work with the rest of the Juno components ?
>>
>> Has someone here experience in running mixed components between Juno and
>> Kilo ?
>>
>> thanks
>>
>> Saverio
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Running mixed stuff Juno & Kilo , Was: cinder-api with rbd driver ignores ceph.conf

2015-11-17 Thread Saverio Proto
Hello there,

I need to quickly find a workaround to be able to use the Ceph object
map feature for cinder volumes with the rbd backend.

However, upgrading everything from Juno to Kilo will require a lot of
time for testing and updating all my puppet modules.

Do you think it is feasible to start by updating just cinder to Kilo?
Will it work with the rest of the Juno components?

Does anyone here have experience running mixed Juno and Kilo components?

thanks

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] cinder-api with rbd driver ignores ceph.conf

2015-11-16 Thread Saverio Proto
Thanks,

I tried to backport this patch to Juno, but it is not that trivial for
me. I have two tests failing, concerning volume cloning and creating a
volume without layering.

https://github.com/zioproto/cinder/commit/0d26cae585f54c7bda5ba5b423d8d9ddc87e0b34
https://github.com/zioproto/cinder/commits/backport-ceph-object-map

I guess I will stop trying to backport this patch and wait for the
upgrade of our OpenStack installation to Kilo to get the feature.

If anyone has ever backported this feature to Juno it would be nice to
know, so I can use the patch to generate deb packages.

thanks

Saverio

2015-11-12 17:55 GMT+01:00 Josh Durgin <jdur...@redhat.com>:
> On 11/12/2015 07:41 AM, Saverio Proto wrote:
>>
>> So here is my best guess.
>> Could be that I am missing this patch ?
>>
>>
>> https://github.com/openstack/cinder/commit/6211d8fa2033c2a607c20667110c5913cf60dd53
>
>
> Exactly, you need that patch for cinder to use rbd_default_features
> from ceph.conf instead of its own default of only layering.
>
> In infernalis and later version of ceph you can also add object map to
> existing rbd images via the 'rbd feature enable' and 'rbd object-map
> rebuild' commands.
>
> Josh
>
>> proto@controller:~$ apt-cache policy python-cinder
>> python-cinder:
>>Installed: 1:2014.2.3-0ubuntu1.1~cloud0
>>Candidate: 1:2014.2.3-0ubuntu1.1~cloud0
>>
>>
>> Thanks
>>
>> Saverio
>>
>>
>>
>> 2015-11-12 16:25 GMT+01:00 Saverio Proto <ziopr...@gmail.com>:
>>>
>>> Hello there,
>>>
>>> I am investigating why my cinder is slow deleting volumes.
>>>
>>> you might remember my email from few days ago with subject:
>>> "cinder volume_clear=zero makes sense with rbd ?"
>>>
>>> so it comes out that volume_clear has nothing to do with the rbd driver.
>>>
>>> cinder was not guilty, it was really ceph rbd slow itself to delete big
>>> volumes.
>>>
>>> I was able to reproduce the slowness just using the rbd client.
>>>
>>> I was also able to fix the slowness just using the rbd client :)
>>>
>>> This is fixed in ceph hammer release, introducing a new feature.
>>>
>>>
>>> http://www.sebastien-han.fr/blog/2015/07/06/ceph-enable-the-object-map-feature/
>>>
>>> Enabling the object map feature rbd is now super fast to delete large
>>> volumes.
>>>
>>> However how I am in trouble with cinder. Looks like my cinder-api
>>> (running juno here) ignores the changes in my ceph.conf file.
>>>
>>> cat cinder.conf | grep rbd
>>>
>>> volume_driver=cinder.volume.drivers.rbd.RBDDriver
>>> rbd_user=cinder
>>> rbd_max_clone_depth=5
>>> rbd_ceph_conf=/etc/ceph/ceph.conf
>>> rbd_flatten_volume_from_snapshot=False
>>> rbd_pool=volumes
>>> rbd_secret_uuid=secret
>>>
>>> But when I create a volume with cinder, The options in ceph.conf are
>>> ignored:
>>>
>>> cat /etc/ceph/ceph.conf | grep rbd
>>> rbd default format = 2
>>> rbd default features = 13
>>>
>>> But the volume:
>>>
>>> rbd image 'volume-78ca9968-77e8-4b68-9744-03b25b8068b1':
>>>  size 102400 MB in 25600 objects
>>>  order 22 (4096 kB objects)
>>>  block_name_prefix: rbd_data.533f4356fe034
>>>  format: 2
>>>  features: layering
>>>  flags:
>>>
>>>
>>> so my first question is:
>>>
>>> does anyone use cinder with rbd driver and object map feature enabled
>>> ? Does it work for anyone ?
>>>
>>> thank you
>>>
>>> Saverio
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] cinder-api with rbd driver ignores ceph.conf

2015-11-12 Thread Saverio Proto
Hello there,

I am investigating why my cinder is slow at deleting volumes.

You might remember my email from a few days ago with the subject:
"cinder volume_clear=zero makes sense with rbd ?"

It turns out that volume_clear has nothing to do with the rbd driver.

Cinder was not guilty; ceph rbd itself is really slow at deleting big volumes.

I was able to reproduce the slowness just using the rbd client.

I was also able to fix the slowness just using the rbd client :)

This is fixed in the Ceph Hammer release, which introduces a new feature.

http://www.sebastien-han.fr/blog/2015/07/06/ceph-enable-the-object-map-feature/

With the object map feature enabled, rbd is now super fast at deleting large volumes.
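A rough way to see the difference on the Ceph side (the test image name
and size are arbitrary): create a large empty image and time its
removal. Without the object map, rbd has to issue deletes for all 25600
objects; with it, the removal returns almost immediately.

rbd create --size 102400 --image-format 2 volumes/deltest
time rbd rm volumes/deltest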

However, now I am in trouble with cinder. It looks like my cinder-api
(running Juno here) ignores the changes in my ceph.conf file.

cat cinder.conf | grep rbd

volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_user=cinder
rbd_max_clone_depth=5
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot=False
rbd_pool=volumes
rbd_secret_uuid=secret

But when I create a volume with cinder, the options in ceph.conf are ignored:

cat /etc/ceph/ceph.conf | grep rbd
rbd default format = 2
rbd default features = 13

But the volume:

rbd image 'volume-78ca9968-77e8-4b68-9744-03b25b8068b1':
size 102400 MB in 25600 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.533f4356fe034
format: 2
features: layering
flags:


So my first question is:

does anyone use cinder with the rbd driver and the object map feature
enabled? Does it work for anyone?

thank you

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] cinder-api with rbd driver ignores ceph.conf

2015-11-12 Thread Saverio Proto
So here is my best guess:
could it be that I am missing this patch?

https://github.com/openstack/cinder/commit/6211d8fa2033c2a607c20667110c5913cf60dd53

proto@controller:~$ apt-cache policy python-cinder
python-cinder:
  Installed: 1:2014.2.3-0ubuntu1.1~cloud0
  Candidate: 1:2014.2.3-0ubuntu1.1~cloud0


Thanks

Saverio



2015-11-12 16:25 GMT+01:00 Saverio Proto <ziopr...@gmail.com>:
> Hello there,
>
> I am investigating why my cinder is slow deleting volumes.
>
> you might remember my email from few days ago with subject:
> "cinder volume_clear=zero makes sense with rbd ?"
>
> so it comes out that volume_clear has nothing to do with the rbd driver.
>
> cinder was not guilty, it was really ceph rbd slow itself to delete big 
> volumes.
>
> I was able to reproduce the slowness just using the rbd client.
>
> I was also able to fix the slowness just using the rbd client :)
>
> This is fixed in ceph hammer release, introducing a new feature.
>
> http://www.sebastien-han.fr/blog/2015/07/06/ceph-enable-the-object-map-feature/
>
> Enabling the object map feature rbd is now super fast to delete large volumes.
>
> However how I am in trouble with cinder. Looks like my cinder-api
> (running juno here) ignores the changes in my ceph.conf file.
>
> cat cinder.conf | grep rbd
>
> volume_driver=cinder.volume.drivers.rbd.RBDDriver
> rbd_user=cinder
> rbd_max_clone_depth=5
> rbd_ceph_conf=/etc/ceph/ceph.conf
> rbd_flatten_volume_from_snapshot=False
> rbd_pool=volumes
> rbd_secret_uuid=secret
>
> But when I create a volume with cinder, The options in ceph.conf are ignored:
>
> cat /etc/ceph/ceph.conf | grep rbd
> rbd default format = 2
> rbd default features = 13
>
> But the volume:
>
> rbd image 'volume-78ca9968-77e8-4b68-9744-03b25b8068b1':
> size 102400 MB in 25600 objects
> order 22 (4096 kB objects)
> block_name_prefix: rbd_data.533f4356fe034
> format: 2
> features: layering
> flags:
>
>
> so my first question is:
>
> does anyone use cinder with rbd driver and object map feature enabled
> ? Does it work for anyone ?
>
> thank you
>
> Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] cinder volume_clear=zero makes sense with rbd ?

2015-11-04 Thread Saverio Proto
Hello there,

I am using cinder with rbd, and most volumes are created from glance
images stored on rbd as well. Thanks to Ceph's copy-on-write cloning,
only the blocks that differ from the original parent image are really
written.

Today I am debugging why deleting cinder volumes gets very slow in my
production system. It looks like the problem happens only at scale; I
can't reproduce it on my small test cluster.

I read through the cinder.conf reference and found this default value:
volume_clear = zero.

Is this parameter evaluated when cinder works with rbd ?

This means that every time we delete a volume, we first overwrite all
blocks with zeros using a "dd"-like operation and only then really
delete it. This default is designed with the LVM backend in mind: we
don't want the next user to get a dirty raw block device from which
they could potentially read someone else's data.

But what happens when we are using Ceph rbd as the cinder backend? Our
volumes are CoW clones of Glance images most of the time, so we only
write to Ceph the blocks that differ from the original image. I hope
cinder is not writing zeros to all the rbd objects before actually
deleting the volumes.

Does anybody have any advice on the volume_clear setting to use with
rbd? Or, even better, how can I make sure that the volume_clear setting
is not evaluated at all when using the rbd backend?
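As a quick check (the path below is from the Ubuntu packages and may
differ on other distributions), one can grep the rbd driver to see
whether it references volume_clear at all:

grep -n "volume_clear\|clear_volume" /usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py
# no output would suggest the option is simply ignored by this driver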

thank you

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

