Re: [Openstack-operators] Problems with AggregateMultiTenancyIsolation while migrating an instance

2018-05-31 Thread Massimo Sgaravatto
Thanks a lot !!

On Wed, May 30, 2018 at 8:06 PM, Matt Riedemann  wrote:

> On 5/30/2018 9:41 AM, Matt Riedemann wrote:
>
>> Thanks for your patience in debugging this Massimo! I'll get a bug
>> reported and patch posted to fix it.
>>
>
> I'm tracking the problem with this bug:
>
> https://bugs.launchpad.net/nova/+bug/1774205
>
> I found that this has actually been fixed since Pike:
>
> https://review.openstack.org/#/c/449640/
>
> But I've got a patch up for another related issue, and a functional test
> to avoid regressions which I can also use when backporting the fix to
> stable/ocata.
>
> --
>
> Thanks,
>
> Matt
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Problems with AggregateMultiTenancyIsolation while migrating an instance

2018-05-30 Thread Massimo Sgaravatto
The problem is indeed with the tenant_id

When I create a VM, tenant_id is ee1865a76440481cbcff08544c7d580a
(SgaraPrj1), as expected

But when, as admin, I run the "nova migrate" command to migrate the very
same instance, the tenant_id is 56c3f5c047e74a78a71438c4412e6e13 (admin) !
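
For what it's worth, my understanding (also from Matt's earlier comments)
is that the scheduler should take the project_id from the request spec
stored when the instance was created. A quick way to double-check what is
recorded there (assuming the Ocata layout, where the spec is kept as JSON
in the nova_api.request_specs table; please correct me if that's wrong) is
something like:

# mysql nova_api -e "SELECT spec FROM request_specs WHERE instance_uuid='<instance uuid>'\G"

and then look at the project_id field inside the JSON.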

Cheers, Massimo

On Wed, May 30, 2018 at 1:01 AM, Matt Riedemann  wrote:

> On 5/29/2018 3:07 PM, Massimo Sgaravatto wrote:
>
>> The VM that I am trying to migrate was created when the Cloud was already
>> running Ocata
>>
>
> OK, I'd add the tenant_id variable in scope to the log message here:
>
> https://github.com/openstack/nova/blob/stable/ocata/nova/scheduler/filters/aggregate_multitenancy_isolation.py#L50
>
> And make sure when it fails, it matches what you'd expect. If it's None or
> '' or something weird then we have a bug.
>
> --
>
> Thanks,
>
> Matt
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Problems with AggregateMultiTenancyIsolation while migrating an instance

2018-05-29 Thread Massimo Sgaravatto
The VM that I am trying to migrate was created when the Cloud was already
running Ocata

Cheers, Massimo

On Tue, May 29, 2018 at 9:47 PM, Matt Riedemann  wrote:

> On 5/29/2018 12:44 PM, Jay Pipes wrote:
>
>> Either that, or the wrong project_id is being used when attempting to
>> migrate? Maybe the admin project_id is being used instead of the original
>> project_id who launched the instance?
>>
>
> Could be, but we should be pulling the request spec from the database
> which was created when the instance was created. There is some shim code
> from Newton which will create an essentially fake request spec on-demand
> when doing a move operation if the instance was created before newton,
> which could go back to that bug I was referring to.
>
> Massimo - can you clarify if this is a new server created in your Ocata
> test environment that you're trying to move? Or is this a server created
> before Ocata?
>
> --
>
> Thanks,
>
> Matt
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Problems with AggregateMultiTenancyIsolation while migrating an instance

2018-05-29 Thread Massimo Sgaravatto
I have a small testbed OpenStack cloud (running Ocata) where I am trying to
debug a problem with Nova scheduling.


In short: I see different behaviors when I create a new VM and when I try
to migrate a VM


Since I want to partition the Cloud so that each project uses only certain
compute nodes, I created one host aggregate per project (see also this
thread:
http://lists.openstack.org/pipermail/openstack-operators/2018-February/014831.html
)


The host-aggregate for my project is:

# nova aggregate-show 52
+----+-----------+-------------------+--------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------+
| Id | Name      | Availability Zone | Hosts                                                        | Metadata                                                                                       | UUID                                 |
+----+-----------+-------------------+--------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------+
| 52 | SgaraPrj1 | nova              | 'compute-01.cloud.pd.infn.it', 'compute-02.cloud.pd.infn.it' | 'availability_zone=nova', 'filter_tenant_id=ee1865a76440481cbcff08544c7d580a', 'size=normal'   | 675f6291-6997-470d-87e1-e9ea199a379f |
+----+-----------+-------------------+--------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------+

The same compute nodes are shared by other projects (for which specific
host aggregates, like this one, have been created).
The other compute node (I have only 3 compute nodes in this small testbed)
is dedicated to other projects (for which specific host aggregates exist).
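
For reference, the aggregate above was created with the usual commands,
roughly as follows (the project id is the one shown in the metadata, and
the other metadata keys are set the same way):

# nova aggregate-create SgaraPrj1 nova
# nova aggregate-add-host SgaraPrj1 compute-01.cloud.pd.infn.it
# nova aggregate-add-host SgaraPrj1 compute-02.cloud.pd.infn.it
# nova aggregate-set-metadata SgaraPrj1 filter_tenant_id=ee1865a76440481cbcff08544c7d580a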


This is what I have in nova.conf wrt scheduling filters:

enabled_filters = AggregateInstanceExtraSpecsFilter,AggregateMultiTenancyIsolation,RetryFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,AggregateRamFilter,AggregateCoreFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter



If I try to create a VM, I see from the scheduler log [*] that
the AggregateMultiTenancyIsolation selects only 2 compute nodes, as
expected.


But if I then try to migrate the very same VM, it reports that no valid
host was found:

# nova migrate afaf2a2d-7ff8-4e52-a89a-031ee079a9ba
ERROR (BadRequest): No valid host was found. No valid host found for cold
migrate (HTTP 400) (Request-ID: req-45b8afd5-9683-40a6-8416-295563e37e34)


And according to the scheduler log the problem is with the
AggregateMultiTenancyIsolation filter, which returned 0 hosts (while I
would have expected one):

2018-05-29 11:12:56.375 19428 INFO nova.scheduler.host_manager [req-45b8afd5-9683-40a6-8416-295563e37e34 9bd03f63fa9d4beb8de31e6c2f2c8d12 56c3f5c047e74a78a71438c4412e6e13 - - -] Host filter ignoring hosts: compute-02.cloud.pd.infn.it
2018-05-29 11:12:56.375 19428 DEBUG nova.filters [req-45b8afd5-9683-40a6-8416-295563e37e34 9bd03f63fa9d4beb8de31e6c2f2c8d12 56c3f5c047e74a78a71438c4412e6e13 - - -] Starting with 2 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:70
2018-05-29 11:12:56.376 19428 DEBUG nova.filters [req-45b8afd5-9683-40a6-8416-295563e37e34 9bd03f63fa9d4beb8de31e6c2f2c8d12 56c3f5c047e74a78a71438c4412e6e13 - - -] Filter AggregateInstanceExtraSpecsFilter returned 2 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:104
2018-05-29 11:12:56.377 19428 DEBUG nova.scheduler.filters.aggregate_multitenancy_isolation [req-45b8afd5-9683-40a6-8416-295563e37e34 9bd03f63fa9d4beb8de31e6c2f2c8d12 56c3f5c047e74a78a71438c4412e6e13 - - -] (compute-01.cloud.pd.infn.it, compute-01.cloud.pd.infn.it) ram: 12797MB disk: 48128MB io_ops: 0 instances: 0 fails tenant id on aggregate host_passes /usr/lib/python2.7/site-packages/nova/scheduler/filters/aggregate_multitenancy_isolation.py:50
2018-05-29 11:12:56.378 19428 DEBUG nova.scheduler.filters.aggregate_multitenancy_isolation [req-45b8afd5-9683-40a6-8416-295563e37e34 9bd03f63fa9d4beb8de31e6c2f2c8d12 56c3f5c047e74a78a71438c4412e6e13 - - -] (compute-03.cloud.pd.infn.it, compute-03.cloud.pd.infn.it) ram: 8701MB disk: -4096MB io_ops: 0 instances: 0 fails tenant id on aggregate host_passes /usr/lib/python2.7/site-packages/nova/scheduler/filters/aggregate_multitenancy_isolation.py:50
2018-05-29 11:12:56.378 19428 INFO nova.filters [req-45b8afd5-9683-40a6-8416-295563e37e34 9bd03f63fa9d4beb8de31e6c2f2c8d12 56c3f5c047e74a78a71438c4412e6e13 - - -] Filter AggregateMultiTenancyIsolation returned 0 hosts



I am confused ...
Any hints ?

Thanks, Massimo

[*]


2018-05-29 11:09:54.328 19428 DEBUG nova.filters [req-1a838e77-8042-4550-b157-4943445119a2 ab573ba3ea014b778193b6922e6d ee1865a76440481cbcff08544c7d580a - - -] Filter

Re: [Openstack-operators] Recipe to transfer the ownership of an instance

2018-04-23 Thread Massimo Sgaravatto
Thanks for the hint ! :-)

Cheers, Massimo

On Mon, Apr 23, 2018 at 1:38 PM, Saverio Proto <ziopr...@gmail.com> wrote:

> Hello Massimo,
>
> what we suggest to our users, is to migrate a volume, and to create a
> new VM from that volume.
> https://help.switch.ch/engines/documentation/migrating-resources/
>
> the bad thing is that the new VM has a new IP address, so eventually
> DNS records have to be updated by the users.
>
> Cheers,
>
> Saverio
>
>
> 2018-04-23 10:17 GMT+02:00 Massimo Sgaravatto <
> massimo.sgarava...@gmail.com>:
> > As far as I understand there is not a clean way to transfer the
> ownership of
> > an instance from a user to another one (the implementation of the
> blueprint
> > https://blueprints.launchpad.net/nova/+spec/transfer-instance-ownership
> was
> > abandoned).
> >
> >
> > Is there at least a receipt (i.e. what needs to be changed in the
> database)
> > that operators can follow to implement such use case ?
> >
> > Thanks, Massimo
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Recipe to transfer the ownership of an instance

2018-04-23 Thread Massimo Sgaravatto
As far as I understand there is no clean way to transfer the ownership
of an instance from one user to another (the implementation of the
blueprint
https://blueprints.launchpad.net/nova/+spec/transfer-instance-ownership was
abandoned).


Is there at least a recipe (i.e. what needs to be changed in the database)
that operators can follow to implement such a use case ?

Thanks, Massimo
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova] Default scheduler filters survey

2018-04-20 Thread Massimo Sgaravatto
enabled_filters =
AggregateInstanceExtraSpecsFilter,AggregateMultiTenancyIsolation,RetryFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,AggregateRamFilter,AggregateCoreFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter

Cheers, Massimo

On Wed, Apr 18, 2018 at 10:20 PM, Simon Leinen 
wrote:

> Artom Lifshitz writes:
> > To that end, we'd like to know what filters operators are enabling in
> > their deployment. If you can, please reply to this email with your
> > [filter_scheduler]/enabled_filters (or
> > [DEFAULT]/scheduler_default_filters if you're using an older version)
> > option from nova.conf. Any other comments are welcome as well :)
>
> We have the following enabled on our semi-public (academic community)
> cloud, which runs on Newton:
>
> AggregateInstanceExtraSpecsFilter
> AvailabilityZoneFilter
> ComputeCapabilitiesFilter
> ComputeFilter
> ImagePropertiesFilter
> PciPassthroughFilter
> RamFilter
> RetryFilter
> ServerGroupAffinityFilter
> ServerGroupAntiAffinityFilter
>
> (sorted alphabetically) Recently we've also been trying
>
> AggregateImagePropertiesIsolation
>
> ...but it looks like we'll replace it with our own because it's a bit
> awkward to use for our purpose (scheduling Windows instance to licensed
> compute nodes).
> --
> Simon.
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] AggregateMultiTenancyIsolation with multiple (many) projects

2018-02-06 Thread Massimo Sgaravatto
Thanks for your answer.
As far as I understand, CellsV2 is present in Pike and later. I need to
implement this use case in an Ocata-based OpenStack cloud.

Thanks, Massimo

2018-02-06 10:26 GMT+01:00 Flint WALRUS <gael.ther...@gmail.com>:

> Aren’t CellsV2 more adapted to what you’re trying to do?
> Le mar. 6 févr. 2018 à 06:45, Massimo Sgaravatto <
> massimo.sgarava...@gmail.com> a écrit :
>
>> Hi
>>
>> I want to partition my OpenStack cloud so that:
>>
>> - Project p1, p2, .., pn can use only compute nodes C1, C2, ... Cx
>> - Projects pn+1.. pm can use only compute nodes Cx+1 ... Cy
>>
>> I read that CERN addressed this use case implementing the
>> ProjectsToAggregateFilter but, as far as I understand, this in-house
>> developments eventually wasn't pushed upstream.
>>
>> So I am trying to rely on the  AggregateMultiTenancyIsolation filter to
>> create  2 host aggregates:
>>
>> - the first one including C1, C2, ... Cx and with filter_tenant_id=p1,
>> p2, .., pn
>> - the second one including Cx+1 ... Cy and with filter_tenant_id=pn+1..
>> pm
>>
>>
>> But if I try to specify the long list of projects, I get:a "Value ... is
>> too long" error message [*].
>>
>> I can see two workarounds for this problem:
>>
>> 1) Create an host aggregate per project:
>>
>> HA1 including CA1, C2, ... Cx and with filter_tenant_id=p1
>> HA2 including CA1, C2, ... Cx and with filter_tenant_id=p2
>> etc
>>
>> 2) Use the AggregateInstanceExtraSpecsFilter, creating two aggregates
>> and having each flavor visible only by a set of projects, and tagged with a
>> specific string that should match the value specified in the correspondent
>> host aggregate
>>
>> Is this correct ? Can you see better options ?
>>
>> Thanks, Massimo
>>
>>
>>
>> [*]
>> # nova aggregate-set-metadata 1 filter_tenant_id=
>> ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019,
>> a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397,
>> d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071,
>> ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f,
>> 2b92483138dc4a61b1133c8c177ff298
>> ERROR (BadRequest): Invalid input for field/attribute filter_tenant_id.
>> Value: ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019,
>> a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397,
>> d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071,
>> ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f,
>> 2b92483138dc4a61b1133c8c177ff298. u'ee1865a76440481cbcff08544c7d580a,
>> 1c63ef80901b46bebc9e1e8e10d85019,a22db12575694c9e9f8650dde73ef565,
>> b4a1039f2f7c419596662950d8a1d397,d8ce7a449d8e4d68bbf2c40314da357e,
>> e81df4c0b493439abb8b85bfd4cbe071,ee1865a76440481cbcff08544c7d580a,
>> 29d6233c0b2741e2a30324079f05aa9f,2b92483138dc4a61b1133c8c177ff298' is
>> too long (HTTP 400) (Request-ID: req-b971d686-72e5-4c54-aaa1-
>> fef5eb7c7001)
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] AggregateMultiTenancyIsolation with multiple (many) projects

2018-02-05 Thread Massimo Sgaravatto
Hi

I want to partition my OpenStack cloud so that:

- Project p1, p2, .., pn can use only compute nodes C1, C2, ... Cx
- Projects pn+1.. pm can use only compute nodes Cx+1 ... Cy

I read that CERN addressed this use case by implementing the
ProjectsToAggregateFilter but, as far as I understand, this in-house
development eventually wasn't pushed upstream.

So I am trying to rely on the  AggregateMultiTenancyIsolation filter to
create  2 host aggregates:

- the first one including C1, C2, ... Cx and with filter_tenant_id=p1, p2,
.., pn
- the second one including Cx+1 ... Cy and with filter_tenant_id=pn+1.. pm


But if I try to specify the long list of projects, I get a "Value ... is
too long" error message [*].

I can see two workarounds for this problem:

1) Create a host aggregate per project:

HA1 including C1, C2, ... Cx and with filter_tenant_id=p1
HA2 including C1, C2, ... Cx and with filter_tenant_id=p2
etc

2) Use the AggregateInstanceExtraSpecsFilter, creating two aggregates and
making each flavor visible only to a set of projects and tagged with a
specific string that must match the value specified in the corresponding
host aggregate
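
Just to make option 2) concrete, I mean something along these lines (all
names are made up):

# nova aggregate-set-metadata <aggregate-id> group=prj-group1
# nova flavor-key <flavor-name> set aggregate_instance_extra_specs:group=prj-group1
# nova flavor-access-add <flavor-id> <project-id>

i.e. the AggregateInstanceExtraSpecsFilter would match the flavor extra
spec against the aggregate metadata, and the flavor access list would
restrict which projects can use that flavor.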

Is this correct ? Can you see better options ?

Thanks, Massimo



[*]
# nova aggregate-set-metadata 1
filter_tenant_id=ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019,a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397,d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071,ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f,2b92483138dc4a61b1133c8c177ff298
ERROR (BadRequest): Invalid input for field/attribute filter_tenant_id.
Value:
ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019,a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397,d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071,ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f,2b92483138dc4a61b1133c8c177ff298.
u'ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019,a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397,d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071,ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f,2b92483138dc4a61b1133c8c177ff298'
is too long (HTTP 400) (Request-ID:
req-b971d686-72e5-4c54-aaa1-fef5eb7c7001)
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Problems changing access list for flavors using the dashboard

2017-12-19 Thread Massimo Sgaravatto
Dear Operators

I have a Mitaka OpenStack installation and in that deployment I am able to
modify the access list of flavors using the dashboard without problems. I
am even able to turn a public flavor into a private one (specifying, using
the dashboard, the list of projects allowed to use the flavor).
After this operation I see that the flavor id changed: I think this is
because the dashboard, at least in that version of OpenStack, basically
"simulates" the modification of a flavor by deleting the existing flavor
and creating a new one with the same name.


I have another OpenStack installation where I have just performed the
upgrade from Mitaka to Ocata (going through Newton just for the db
migrations).

In this Ocata installation I have problems changing the flavor access list
using the dashboard. Basically what I noticed is that:

1) Adding a new project to a private flavor using the dashboard sometimes
works, sometimes doesn't work. In the latter case the error message I see
in the horizon log file is something like:

Recoverable error: Flavor access already exists for flavor 25 and project
47e60c9b8dd9426997c2406df45a06f7 combination. (HTTP 409) (Request-ID:
req-19a1a805-ca3a-4b9f-aec9-88c8efcd7c2d)

But the specified project isn't the one I tried to add !

Is this a known issue ? Otherwise there is probably something wrong in my
installation ...

2) When the addition of a project to a private flavor works, the flavor id
doesn't change. Is this expected ?

3) Modification of a public flavor to a private one using the dashboard
never works. Is this expected ?

4) No problems adding/removing projects to a private flavor using the CLI
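
To be explicit, by CLI I mean the usual commands, which all behave as
expected:

$ nova flavor-access-add <flavor-id> <project-id>
$ nova flavor-access-remove <flavor-id> <project-id>
$ nova flavor-access-list --flavor <flavor-id>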


Thanks for your help

Cheers, Massimo
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Cinder quota per volume types in the dashboard

2017-09-26 Thread Massimo Sgaravatto
ok, I will

2017-09-26 14:43 GMT+02:00 Arne Wiebalck <arne.wieba...@cern.ch>:

> Massimo,
>
> Following Rob’s comment on https://bugs.launchpad.net/horizon/+bug/1717342,
> would you
> be willing to write up a blueprint? Mateusz would then prepare our code
> and submit it to
> gerrit as a partial implementation (as we only have the user facing part,
> not the admin panel).
>
> Cheers,
>  Arne
>
>
> On 25 Sep 2017, at 10:46, Arne Wiebalck <arne.wieba...@cern.ch> wrote:
>
> Ah, nice, wasn’t aware. Mateusz is one of the Horizon experts here at CERN
> I was referring to :)
>
> On 25 Sep 2017, at 10:41, Massimo Sgaravatto <massimo.sgarava...@gmail.com>
> wrote:
>
> Just found that there is already this one:
>
> https://bugs.launchpad.net/horizon/+bug/1717342
>
> 2017-09-25 10:28 GMT+02:00 Saverio Proto <ziopr...@gmail.com>:
>
>> Yes I am interested. Are you going to push them to gerrit ?
>> Should we open a bug to track this change into Horizon ?
>>
>> massimo do you want to open the bug on Launchpad ? So if Arne pushed
>> the patches on gerrit we can link them to the bug. I pointed
>> robcresswell to this thread, he is reading us.
>>
>> thanks !
>>
>> Saverio
>>
>> 2017-09-25 10:13 GMT+02:00 Arne Wiebalck <arne.wieba...@cern.ch>:
>> > Massimo, Saverio,
>> >
>> > We faced the same issue and have created patches for Horizon to display
>> > - the per volume quota in the volume request panel, and also
>> > - additional information about the volume type (like IOPS and
>> throughput limits, intended usage etc.)
>> >
>> > The patches will need some polishing before being sent upstream (I’ll
>> need
>> > need to cross-check with our Horizon experts), but we use them in prod
>> since
>> > quite a while and are happy to already share patch files if you’re
>> interested.
>> >
>> > Cheers,
>> >  Arne
>> >
>> >
>> >
>> >> On 25 Sep 2017, at 09:58, Saverio Proto <ziopr...@gmail.com> wrote:
>> >>
>> >> I am pinging on IRC robcresswell from the Horizon project. He is still
>> >> PTL I think.
>> >> If you are on IRC please join #openstack-horizon.
>> >>
>> >> We should ask the Horizon PTL how to get this feature request into
>> >> implementation.
>> >>
>> >> With the command line interface, can you already see the two different
>> >> quotas for the two different volume types ? Can you paste an example
>> >> output from the CLI ?
>> >>
>> >> thank you
>> >>
>> >> Saverio
>> >>
>> >>
>> >> 2017-09-25 9:55 GMT+02:00 Massimo Sgaravatto <
>> massimo.sgarava...@gmail.com>:
>> >>> We are currently running Mitaka (preparing to update to Ocata). I see
>> the
>> >>> same behavior on an Ocata based testbed
>> >>>
>> >>> Thanks, Massimo
>> >>>
>> >>> 2017-09-25 9:50 GMT+02:00 Saverio Proto <ziopr...@gmail.com>:
>> >>>>
>> >>>> Hello Massimo,
>> >>>>
>> >>>> what is your version of Openstack ??
>> >>>>
>> >>>> thank you
>> >>>>
>> >>>> Saverio
>> >>>>
>> >>>> 2017-09-25 9:13 GMT+02:00 Massimo Sgaravatto
>> >>>> <massimo.sgarava...@gmail.com>:
>> >>>>> Hi
>> >>>>>
>> >>>>>
>> >>>>> In our OpenStack cloud we have two backends for Cinder (exposed
>> using
>> >>>>> two
>> >>>>> volume types), and we set different quotas for these two volume
>> types.
>> >>>>>
>> >>>>> The problem happens when a user, using the dashboard, tries to
>> create a
>> >>>>> volume using a volume type for which the project quota is over:
>> >>>>>
>> >>>>> - the reported error message simply reports "unable to create
>> volume",
>> >>>>> without mentioning that the problem is with quota
>> >>>>>
>> >>>>> - (by default) the dashboard only shows the overall Cinder quota
>> (and
>> >>>>> not
>> >>>>> the quota per volume)
>> >>>>>
>> >>>>>
>> >>>>> Do you know if it possible in some to expose on the dashboard the
>> cinder
>> >>>>> quota per volume type ?
>> >>>>>
>> >>>>>
>> >>>>> Thanks, Massimo
>> >>>>>
>> >>>>>
>> >>>>>
>> >>>>>
>> >>>>>
>> >>>>>
>> >>>>> ___
>> >>>>> OpenStack-operators mailing list
>> >>>>> OpenStack-operators@lists.openstack.org
>> >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac
>> k-operators
>> >>>>>
>> >>>
>> >>>
>> >>
>> >> ___
>> >> OpenStack-operators mailing list
>> >> OpenStack-operators@lists.openstack.org
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac
>> k-operators
>> >
>> > --
>> > Arne Wiebalck
>> > CERN IT
>> >
>>
>
>
> --
> Arne Wiebalck
> CERN IT
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Cinder quota per volume types in the dashboard

2017-09-25 Thread Massimo Sgaravatto
Just found that there is already this one:

https://bugs.launchpad.net/horizon/+bug/1717342

2017-09-25 10:28 GMT+02:00 Saverio Proto <ziopr...@gmail.com>:

> Yes I am interested. Are you going to push them to gerrit ?
> Should we open a bug to track this change into Horizon ?
>
> massimo do you want to open the bug on Launchpad ? So if Arne pushed
> the patches on gerrit we can link them to the bug. I pointed
> robcresswell to this thread, he is reading us.
>
> thanks !
>
> Saverio
>
> 2017-09-25 10:13 GMT+02:00 Arne Wiebalck <arne.wieba...@cern.ch>:
> > Massimo, Saverio,
> >
> > We faced the same issue and have created patches for Horizon to display
> > - the per volume quota in the volume request panel, and also
> > - additional information about the volume type (like IOPS and throughput
> limits, intended usage etc.)
> >
> > The patches will need some polishing before being sent upstream (I’ll
> need
> > need to cross-check with our Horizon experts), but we use them in prod
> since
> > quite a while and are happy to already share patch files if you’re
> interested.
> >
> > Cheers,
> >  Arne
> >
> >
> >
> >> On 25 Sep 2017, at 09:58, Saverio Proto <ziopr...@gmail.com> wrote:
> >>
> >> I am pinging on IRC robcresswell from the Horizon project. He is still
> >> PTL I think.
> >> If you are on IRC please join #openstack-horizon.
> >>
> >> We should ask the Horizon PTL how to get this feature request into
> >> implementation.
> >>
> >> With the command line interface, can you already see the two different
> >> quotas for the two different volume types ? Can you paste an example
> >> output from the CLI ?
> >>
> >> thank you
> >>
> >> Saverio
> >>
> >>
> >> 2017-09-25 9:55 GMT+02:00 Massimo Sgaravatto <
> massimo.sgarava...@gmail.com>:
> >>> We are currently running Mitaka (preparing to update to Ocata). I see
> the
> >>> same behavior on an Ocata based testbed
> >>>
> >>> Thanks, Massimo
> >>>
> >>> 2017-09-25 9:50 GMT+02:00 Saverio Proto <ziopr...@gmail.com>:
> >>>>
> >>>> Hello Massimo,
> >>>>
> >>>> what is your version of Openstack ??
> >>>>
> >>>> thank you
> >>>>
> >>>> Saverio
> >>>>
> >>>> 2017-09-25 9:13 GMT+02:00 Massimo Sgaravatto
> >>>> <massimo.sgarava...@gmail.com>:
> >>>>> Hi
> >>>>>
> >>>>>
> >>>>> In our OpenStack cloud we have two backends for Cinder (exposed using
> >>>>> two
> >>>>> volume types), and we set different quotas for these two volume
> types.
> >>>>>
> >>>>> The problem happens when a user, using the dashboard, tries to
> create a
> >>>>> volume using a volume type for which the project quota is over:
> >>>>>
> >>>>> - the reported error message simply reports "unable to create
> volume",
> >>>>> without mentioning that the problem is with quota
> >>>>>
> >>>>> - (by default) the dashboard only shows the overall Cinder quota (and
> >>>>> not
> >>>>> the quota per volume)
> >>>>>
> >>>>>
> >>>>> Do you know if it possible in some to expose on the dashboard the
> cinder
> >>>>> quota per volume type ?
> >>>>>
> >>>>>
> >>>>> Thanks, Massimo
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>> ___
> >>>>> OpenStack-operators mailing list
> >>>>> OpenStack-operators@lists.openstack.org
> >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/
> openstack-operators
> >>>>>
> >>>
> >>>
> >>
> >> ___
> >> OpenStack-operators mailing list
> >> OpenStack-operators@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
> > --
> > Arne Wiebalck
> > CERN IT
> >
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Cinder quota per volume types in the dashboard

2017-09-25 Thread Massimo Sgaravatto
Yes, from the CLI I have the info. E.g. in the following example I have
three volume types (ceph, iscsi-infnpd, gluster), and for two of them a
quota was set.

Thanks, Massimo

[sgaravat@lxsgaravat ~]$ cinder quota-usage ${OS_PROJECT_ID}
+------------------------+--------+----------+-------+
| Type                   | In_use | Reserved | Limit |
+------------------------+--------+----------+-------+
| backup_gigabytes       | 0      | 0        | 1000  |
| backups                | 0      | 0        | 10    |
| gigabytes              | 120    | 0        | 300   |
| gigabytes_ceph         | 85     | 0        | 100   |
| gigabytes_gluster      | 0      | 0        | -1    |
| gigabytes_iscsi-infnpd | 35     | 0        | 200   |
| per_volume_gigabytes   | 0      | 0        | 1000  |
| snapshots              | 0      | 0        | 10    |
| snapshots_ceph         | 0      | 0        | -1    |
| snapshots_gluster      | 0      | 0        | -1    |
| snapshots_iscsi-infnpd | 0      | 0        | -1    |
| volumes                | 8      | 0        | 20    |
| volumes_ceph           | 6      | 0        | -1    |
| volumes_gluster        | 0      | 0        | -1    |
| volumes_iscsi-infnpd   | 2      | 0        | -1    |
+------------------------+--------+----------+-------+
[sgaravat@lxsgaravat ~]$


2017-09-25 9:58 GMT+02:00 Saverio Proto <ziopr...@gmail.com>:

> I am pinging on IRC robcresswell from the Horizon project. He is still
> PTL I think.
>  If you are on IRC please join #openstack-horizon.
>
> We should ask the Horizon PTL how to get this feature request into
> implementation.
>
> With the command line interface, can you already see the two different
> quotas for the two different volume types ? Can you paste an example
> output from the CLI ?
>
> thank you
>
> Saverio
>
>
> 2017-09-25 9:55 GMT+02:00 Massimo Sgaravatto <massimo.sgarava...@gmail.com
> >:
> > We are currently running Mitaka (preparing to update to Ocata). I see the
> > same behavior on an Ocata based testbed
> >
> > Thanks, Massimo
> >
> > 2017-09-25 9:50 GMT+02:00 Saverio Proto <ziopr...@gmail.com>:
> >>
> >> Hello Massimo,
> >>
> >> what is your version of Openstack ??
> >>
> >> thank you
> >>
> >> Saverio
> >>
> >> 2017-09-25 9:13 GMT+02:00 Massimo Sgaravatto
> >> <massimo.sgarava...@gmail.com>:
> >> > Hi
> >> >
> >> >
> >> > In our OpenStack cloud we have two backends for Cinder (exposed using
> >> > two
> >> > volume types), and we set different quotas for these two volume types.
> >> >
> >> > The problem happens when a user, using the dashboard, tries to create
> a
> >> > volume using a volume type for which the project quota is over:
> >> >
> >> > - the reported error message simply reports "unable to create volume",
> >> > without mentioning that the problem is with quota
> >> >
> >> > - (by default) the dashboard only shows the overall Cinder quota (and
> >> > not
> >> > the quota per volume)
> >> >
> >> >
> >> > Do you know if it possible in some to expose on the dashboard the
> cinder
> >> > quota per volume type ?
> >> >
> >> >
> >> > Thanks, Massimo
> >> >
> >> >
> >> >
> >> >
> >> >
> >> >
> >> > ___
> >> > OpenStack-operators mailing list
> >> > OpenStack-operators@lists.openstack.org
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/
> openstack-operators
> >> >
> >
> >
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Cinder quota per volume types in the dashboard

2017-09-25 Thread Massimo Sgaravatto
We are currently running Mitaka (preparing to update to Ocata). I see the
same behavior on an Ocata based testbed

Thanks, Massimo

2017-09-25 9:50 GMT+02:00 Saverio Proto <ziopr...@gmail.com>:

> Hello Massimo,
>
> what is your version of Openstack ??
>
> thank you
>
> Saverio
>
> 2017-09-25 9:13 GMT+02:00 Massimo Sgaravatto <massimo.sgarava...@gmail.com
> >:
> > Hi
> >
> >
> > In our OpenStack cloud we have two backends for Cinder (exposed using two
> > volume types), and we set different quotas for these two volume types.
> >
> > The problem happens when a user, using the dashboard, tries to create a
> > volume using a volume type for which the project quota is over:
> >
> > - the reported error message simply reports "unable to create volume",
> > without mentioning that the problem is with quota
> >
> > - (by default) the dashboard only shows the overall Cinder quota (and not
> > the quota per volume)
> >
> >
> > Do you know if it possible in some to expose on the dashboard the cinder
> > quota per volume type ?
> >
> >
> > Thanks, Massimo
> >
> >
> >
> >
> >
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Cinder quota per volume types in the dashboard

2017-09-25 Thread Massimo Sgaravatto
Hi


In our OpenStack cloud we have two backends for Cinder (exposed using two
volume types), and we set different quotas for these two volume types.

The problem happens when a user, using the dashboard, tries to create a
volume using a volume type for which the project quota is exhausted:

- the error message simply says "unable to create volume", without
mentioning that the problem is with the quota

- (by default) the dashboard only shows the overall Cinder quota (and not
the quota per volume type)


Do you know if it is possible somehow to expose the Cinder quota per
volume type on the dashboard ?


Thanks, Massimo
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Gluster storage for Cinder: migrating from Gluster to NFS driver

2017-07-01 Thread Massimo Sgaravatto
Hi

I have an iSCSI storage system which provides Cinder block storage to an
OpenStack cloud (running Mitaka) using the gluster cinder driver.

We are now preparing the update Mitaka --> Newton --> Ocata

Since the cinder gluster driver is not supported anymore in Ocata, the idea
is to expose that storage using the Cinder NFS driver: I can export the
same gluster volumes using both the gluster and NFS protocols, so no
changes would be needed on the storage side.
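
For the record, what I have in mind on the Cinder side is the standard NFS
backend, something along these lines (just a sketch; the backend name and
paths are the usual placeholders):

[nfs]
volume_backend_name = nfs
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares
nfs_mount_point_base = /var/lib/cinder/mnt

with /etc/cinder/nfs_shares listing the gluster server NFS exports (e.g.
<gluster-server>:/<volume>).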

The question is how to migrate the existing volumes, created using the
Gluster driver.

Do I have to create new volumes (using the NFS driver) and copy the content
of the "gluster" volumes onto them, or are there smarter options ?

Thanks, Massimo
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Problems with ec2-service on Ocata

2017-06-08 Thread Massimo Sgaravatto
Looks like setting:

enable_proxy_headers_parsing=true

in nova.conf helped.
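
For the record, if I recall correctly this option lives in the
[oslo_middleware] section of nova.conf:

[oslo_middleware]
enable_proxy_headers_parsing = true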

Actually it still doesn't work, but for other reasons (Expecting to find
domain in project. The server could not comply with the request since it is
either malformed or otherwise incorrect. The client is assumed to be in
error)

Cheers, Massimo

2017-06-08 9:40 GMT+02:00 Massimo Sgaravatto <massimo.sgarava...@gmail.com>:

> I am indeed using a HAProxy which also acts as SSL proxy.
>
> And, indeed I have the same problem using the nova CLI:
>
> # nova list
> ERROR (ConnectFailure): Unable to establish connection to
> http://cloud-areapd-test.pd.infn.it:8774/v2.1/: ('Connection aborted.',
> BadStatusLine("''",))
>
> while the openstack cli works (i.e. "openstack server list" works).
>
> I tried to set:
>
> compute_link_prefix= https://cloud-areapd-test.pd.infn.it:8774/v2.
>
> as you suggested (I hope I got your comment right), but this didn't help
> ...
>
> Cheers, Massimo
>
> 2017-06-07 19:21 GMT+02:00 Sean Dague <s...@dague.net>:
>
>> Are you using a tls proxy in front of Nova API? if so, you need to
>> adjust the osapi compute_link_prefix -
>> https://docs.openstack.org/ocata/config-reference/compute/api.html to be
>> the https url, otherwise it will autodetect as http. The ec2-service (or
>> novaclient) is probably doing link following from returned links, and
>> thus fails hitting the http ones.
>>
>> -Sean
>>
>> On 06/07/2017 12:18 PM, Massimo Sgaravatto wrote:
>> > Hi
>> >
>> > We are trying to configure the ec2-service on a Ocata OpenStack
>> > installation.
>> >
>> > If I try a euca-describe-images it works, but if I try to get the list
>> > of instances (euca-describe-instances) it fails.
>> > Looking at the log [*], it looks like to me that it initially uses the
>> > correct nova endpoint:
>> >
>> > https://cloud-areapd-test.pd.infn.it:8774/v2.1
>> >
>> > but then it tries to use:
>> >
>> > http://cloud-areapd-test.pd.infn.it:8774/v2.1
>> >
>> > i.e. http instead of https, and the connection fails, as expected.
>> > I am not able to understand why it tries to use that endpoint ...
>> >
>> > Any hints ?
>> >
>> > Thanks, Massimo
>> >
>> >
>> > [*]
>> > 2017-06-07 18:10:10.371 16470 DEBUG ec2api.wsgi.server [-] (16470)
>> > accepted ('192.168.60.24', 45185) server
>> > /usr/lib/python2.7/site-packages/eventlet/wsgi.py:867
>> > 2017-06-07 18:10:10.549 16470 DEBUG ec2api.api
>> > [req-7aa79c03-bf95-4e4d-9795-0c7d2d2b84a2
>> > 30de175a645a4258984bdb89cbf436f5 b9629ae5c480455397cfaa5ab0c2db43 - -
>> -]
>> > action: DescribeInstances __call__
>> > /usr/lib/python2.7/site-packages/ec2api/api/__init__.py:286
>> > 2017-06-07 18:10:10.565 16470 DEBUG novaclient.v2.client
>> > [req-7aa79c03-bf95-4e4d-9795-0c7d2d2b84a2
>> > 30de175a645a4258984bdb89cbf436f5 b9629ae5c480455397cfaa5ab0c2db43 - -
>> -]
>> > REQ: curl -g -i --cacert
>> > "/etc/grid-security/certificates/INFN-CA-2015.pem" -X GET
>> > https://cloud-areapd-test.pd.infn.it:8774/v2.1 -H "User-Agent:
>> > python-novaclient" -H "Accept: application/json" -H
>> > "X-OpenStack-Nova-API-Version: 2.1" -H "X-Auth-Token:
>> > {SHA1}9f9eb3c7cea14ac54b243338281afa0a59b3d06b" _http_log_request
>> > /usr/lib/python2.7/site-packages/keystoneclient/session.py:216
>> > 2017-06-07 18:10:11.320 16470 DEBUG novaclient.v2.client
>> > [req-7aa79c03-bf95-4e4d-9795-0c7d2d2b84a2
>> > 30de175a645a4258984bdb89cbf436f5 b9629ae5c480455397cfaa5ab0c2db43 - -
>> -]
>> > RESP: [302] Content-Type: text/plain; charset=utf8 Location:
>> > http://cloud-areapd-test.pd.infn.it:8774/v2.1/ X-Compute-Request-Id:
>> > req-6ed38429-784b-4fc9-a80d-f886b106ba6e Content-Length: 0 Date: Wed,
>> 07
>> > Jun 2017 16:10:11 GMT Connection: close
>> > RESP BODY: Omitted, Content-Type is set to text/plain; charset=utf8.
>> > Only application/json responses have their bodies logged.
>> >  _http_log_response
>> > /usr/lib/python2.7/site-packages/keystoneclient/session.py:256
>> > 2017-06-07 18:10:11.323 16470 ERROR ec2api.api
>> > [req-7aa79c03-bf95-4e4d-9795-0c7d2d2b84a2
>> > 30de175a645a4258984bdb89cbf436f5 b9629ae5c480455397cfaa5ab0c2db43 - -
>> -]
>> > Unexpected ConnectFailure raised: Unable to establish connection to
>> > http://cloud-areapd-test.pd.infn.it:

[Openstack-operators] Problems with ec2-service on Ocata

2017-06-07 Thread Massimo Sgaravatto
Hi

We are trying to configure the ec2-service on a Ocata OpenStack
installation.

If I try a euca-describe-images it works, but if I try to get the list of
instances (euca-describe-instances) it fails.
Looking at the log [*], it looks to me like it initially uses the correct
nova endpoint:

https://cloud-areapd-test.pd.infn.it:8774/v2.1

but then it tries to use:

http://cloud-areapd-test.pd.infn.it:8774/v2.1

i.e. http instead of https, and the connection fails, as expected.
I am not able to understand why it tries to use that endpoint ...

Any hints ?

Thanks, Massimo


[*]
2017-06-07 18:10:10.371 16470 DEBUG ec2api.wsgi.server [-] (16470) accepted
('192.168.60.24', 45185) server
/usr/lib/python2.7/site-packages/eventlet/wsgi.py:867
2017-06-07 18:10:10.549 16470 DEBUG ec2api.api
[req-7aa79c03-bf95-4e4d-9795-0c7d2d2b84a2 30de175a645a4258984bdb89cbf436f5
b9629ae5c480455397cfaa5ab0c2db43 - - -] action: DescribeInstances __call__
/usr/lib/python2.7/site-packages/ec2api/api/__init__.py:286
2017-06-07 18:10:10.565 16470 DEBUG novaclient.v2.client
[req-7aa79c03-bf95-4e4d-9795-0c7d2d2b84a2 30de175a645a4258984bdb89cbf436f5
b9629ae5c480455397cfaa5ab0c2db43 - - -] REQ: curl -g -i --cacert
"/etc/grid-security/certificates/INFN-CA-2015.pem" -X GET
https://cloud-areapd-test.pd.infn.it:8774/v2.1 -H "User-Agent:
python-novaclient" -H "Accept: application/json" -H
"X-OpenStack-Nova-API-Version: 2.1" -H "X-Auth-Token:
{SHA1}9f9eb3c7cea14ac54b243338281afa0a59b3d06b" _http_log_request
/usr/lib/python2.7/site-packages/keystoneclient/session.py:216
2017-06-07 18:10:11.320 16470 DEBUG novaclient.v2.client
[req-7aa79c03-bf95-4e4d-9795-0c7d2d2b84a2 30de175a645a4258984bdb89cbf436f5
b9629ae5c480455397cfaa5ab0c2db43 - - -] RESP: [302] Content-Type:
text/plain; charset=utf8 Location:
http://cloud-areapd-test.pd.infn.it:8774/v2.1/ X-Compute-Request-Id:
req-6ed38429-784b-4fc9-a80d-f886b106ba6e Content-Length: 0 Date: Wed, 07
Jun 2017 16:10:11 GMT Connection: close
RESP BODY: Omitted, Content-Type is set to text/plain; charset=utf8. Only
application/json responses have their bodies logged.
 _http_log_response
/usr/lib/python2.7/site-packages/keystoneclient/session.py:256
2017-06-07 18:10:11.323 16470 ERROR ec2api.api
[req-7aa79c03-bf95-4e4d-9795-0c7d2d2b84a2 30de175a645a4258984bdb89cbf436f5
b9629ae5c480455397cfaa5ab0c2db43 - - -] Unexpected ConnectFailure raised:
Unable to establish connection to
http://cloud-areapd-test.pd.infn.it:8774/v2.1/
2017-06-07 18:10:11.323 16470 ERROR ec2api.api Traceback (most recent call
last):
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] nova_cell0 database connection string

2017-05-26 Thread Massimo Sgaravatto
Hi

I am reading the RDO installation guide for Ocata. In the nova section [*]
it is explained how to create the nova_cell0 database, but I can't find how
to set the relevant connection string in the nova configuration file.
Any hints ?
Thanks, Massimo
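
P.S.: for what it's worth, my current understanding (which may well be
wrong) is that the cell0 connection string is not set in nova.conf at all,
but is recorded in the API database by nova-manage, along these lines
(placeholder credentials):

# nova-manage cell_v2 map_cell0 --database_connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova_cell0
# nova-manage cell_v2 create_cell --name=cell1 --verbose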


[*]
https://docs.openstack.org/ocata/install-guide-rdo/nova-controller-install.html
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Mixed env for nova (ceph for some compute nodes, local disk for the rest): qcow2 or raw images ?

2017-04-05 Thread Massimo Sgaravatto
Hi George

Thanks for your feedback

Yes, it makes sense, but most of our users upload their own images.
Educating them to upload the image twice and setting the metadata on these
images won't be easy at all

Thanks, Massimo

2017-04-05 16:23 GMT+02:00 George Mihaiescu <lmihaie...@gmail.com>:

> Hi Massimo,
>
> You can upload the images twice, in both qcow2 and raw format, then create
> a host aggregate for your "local-disk" compute nodes and set its metadata
> to match the property you'll set on your qcow2 images.
>
> When somebody will start a qcow2 version of the image, it will be
> scheduled on your compute nodes with local disk and pull the qcow2 image
> from Glance.
>
> Does it make sense?
>
> George
>
> On Wed, Apr 5, 2017 at 10:05 AM, Massimo Sgaravatto <
> massimo.sgarava...@gmail.com> wrote:
>
>> Hi
>>
>> Currently in our Cloud we are using a gluster storage for cinder and
>> glance.
>> For nova we are using a shared file system (implemented using gluster)
>> for part of the compute nodes; the rest of the compute nodes use the local
>> disk.
>>
>> We are now planning the replacement of gluster with ceph. The idea is
>> therefore to use ceph for cinder, glance. Ceph would be used for nova but
>> just for a set of compute nodes  (the other compute nodes would keep using
>> the local disk).
>>
>> In such configuration I see a problem with the choice of the best format
>> type
>> for images.
>>
>> As far as I understand (please correct me if am wrong) the ideal setup
>> would be using raw images for VMs targeted to compute nodes using ceph, and
>> qcow2 images for VMs targeted to compute nodes using the local disk for
>> nova.
>> In fact starting a VM using a qcow2 image on a compute node using ceph
>> for nova works but it is quite inefficient since the qcow2 image must be
>> first downloaded in /var/lib/nova/instances/_base and then converted into
>> raw. This also means that some space is needed on the local disk.
>>
>> And if you start a VM using a raw image on a a compute node using the
>> local disk for nova, the raw image (usually quite big) must be downloaded
>> on the compute node, and this is less efficient wrt a qcow2 image. It is
>> true that the qcow2 is then converted into raw, but I think that most of
>> the time is taken in downloading the image.
>>
>> Did I get it right ?
>> Any advice ?
>>
>> Thanks, Massimo
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Mixed env for nova (ceph for some compute nodes, local disk for the rest): qcow2 or raw images ?

2017-04-05 Thread Massimo Sgaravatto
Hi

Currently in our Cloud we are using a gluster storage for cinder and glance.
For nova we are using a shared file system (implemented using gluster) for
part of the compute nodes; the rest of the compute nodes use the local disk.

We are now planning the replacement of gluster with ceph. The idea is
therefore to use ceph for cinder, glance. Ceph would be used for nova but
just for a set of compute nodes  (the other compute nodes would keep using
the local disk).

In such a configuration I see a problem with the choice of the best image
format.

As far as I understand (please correct me if I am wrong) the ideal setup
would be to use raw images for VMs targeted to compute nodes using ceph,
and qcow2 images for VMs targeted to compute nodes using the local disk
for nova.
In fact starting a VM using a qcow2 image on a compute node using ceph for
nova works but it is quite inefficient since the qcow2 image must be first
downloaded in /var/lib/nova/instances/_base and then converted into raw.
This also means that some space is needed on the local disk.
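
To be concrete, producing and uploading a raw copy of an image would be
something like the following (made-up names):

$ qemu-img convert -f qcow2 -O raw myimage.qcow2 myimage.raw
$ openstack image create --disk-format raw --container-format bare --file myimage.raw myimage-raw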

And if you start a VM using a raw image on a compute node using the local
disk for nova, the raw image (usually quite big) must be downloaded on the
compute node, and this is less efficient than with a qcow2 image. It is
true that the qcow2 image then has to be converted into raw, but I think
that most of the time is spent downloading the image.

Did I get it right ?
Any advice ?

Thanks, Massimo
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Migrating glance images to a new backend

2017-03-28 Thread Massimo Sgaravatto
First  of all thanks for your help

This is a private cloud which is right now using gluster as backend.
Most of the images are private (i.e. usable only within the project),
uploaded by the end-users.
Most of these images were saved in qcow2 format ...


The ceph cluster is still being benchmarked. I am testing the integration
between ceph and openstack (and studying the migration) on a small
openstack testbed.

 Having the glance service running during the migration is not strictly
needed, i.e. we can plan a scheduled downtime of the service

Thanks again, Massimo


2017-03-28 5:24 GMT+02:00 Fei Long Wang <feil...@catalyst.net.nz>:

> Hi Massimo,
>
> Though I don't have experience on the migration, but as the glance RBD
> driver maintainer and image service maintainer of our public cloud
> (Catalyst Cloud based in NZ), I'm happy to provide some information. Before
> I talk more, would you mind sharing some information of your environment?
>
> 1. Are you using CoW of Ceph?
>
> 2. Are you using multi locations?
>
> show_multiple_locations=True
>
> 3. Are you expecting to migrate all the images in a maintenance time
> window or you want to keep the glance service running for end user during
> the migration?
>
> 4. Is it a public cloud?
>
>
> On 25/03/17 04:55, Massimo Sgaravatto wrote:
>
> Hi
>
> In our Mitaka cloud we are currently using Gluster as storage backend for
> Glance and Cinder.
> We are now starting the migration to ceph: the idea is then to dismiss
> gluster when we have done.
>
> I have a question concerning Glance.
>
> I have understood (or at least I hope so) how to add ceph as store backend
> for Glance so that new images will use ceph while the previously created
> ones on the file backend will be still usable.
>
> My question is how I can migrate the images from the file backend to ceph
> when I decide to dismiss the gluster based storage.
>
> The only documentation I found is this one:
>
> https://dmsimard.com/2015/07/18/migrating-glance-images-to-
> a-different-backend/
>
>
> Could you please confirm that there aren't other better (simpler)
> approaches for such image migration ?
>
> Thanks, Massimo
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
> --
> Cheers & Best regards,
> Feilong Wang (王飞龙)
> --
> Senior Cloud Software Engineer
> Tel: +64-48032246 <+64%204-803%202246>
> Email: flw...@catalyst.net.nz
> Catalyst IT Limited
> Level 6, Catalyst House, 150 Willis Street, Wellington
> --
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Migrating glance images to a new backend

2017-03-24 Thread Massimo Sgaravatto
Hi

In our Mitaka cloud we are currently using Gluster as storage backend for
Glance and Cinder.
We are now starting the migration to ceph: the idea is then to dismiss
gluster when we are done.

I have a question concerning Glance.

I have understood (or at least I hope so) how to add ceph as a store
backend for Glance so that new images will use ceph while the previously
created ones on the file backend will still be usable.
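
To be explicit, what I mean is a glance-api.conf [glance_store] section
along these lines (a sketch, assuming the standard rbd options; pool and
user names are just the usual placeholders):

[glance_store]
stores = file,http,rbd
default_store = rbd
filesystem_store_datadir = /var/lib/glance/images/
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf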

My question is how I can migrate the images from the file backend to ceph
when I decide to dismiss the gluster based storage.

The only documentation I found is this one:

https://dmsimard.com/2015/07/18/migrating-glance-images-to-a-different-backend/


Could you please confirm that there aren't better (simpler) approaches for
such an image migration ?

Thanks, Massimo
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] User_id Based Policy Enforcement

2017-01-15 Thread Massimo Sgaravatto
Maybe this is related to:

https://bugs.launchpad.net/nova/+bug/1539351

?

In our Mitaka installation we had to keep using v2.0 API to be able to use
user_id in the policy file ...

I don't know if there are better solutions ...

Cheers, Massimo

2017-01-15 8:44 GMT+01:00 Hamza Achi :

> Hello,
>
> According to this Nova-spec of Newton release [1], user_id:%(user_id)s
> syntax should work to constrain some operations to user_id instead of
> project_id. Like deleting and rebuilding VMs.
>
> But it is not working, users within the same project can delete,
> rebuild..the VMs of each other. i added these rules in
> /etc/nova/policy.json (i used devstack stable/newton branch):
>
> "admin_required": "role:admin or is_admin:1",
> "owner" : "user_id:%(user_id)s",
> "admin_or_owner": "rule:admin_required or rule:owner",
> "compute:delete": "rule:admin_or_owner",
> "compute:resize": "rule:admin_or_owner",
> "compute:rebuild": "rule:admin_or_owner",
> "compute:reboot": "rule:admin_or_owner",
> "compute:start": "rule:admin_or_owner",
> "compute:stop": "rule:admin_or_owner"
>
>
> Can you please point out what i am missing ?
>
> Thank you,
> Hamza
>
>
> [1] https://specs.openstack.org/openstack/nova-specs/specs/
> newton/implemented/user-id-based-policy-enforcement.html
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] How to tune scheduling for "Insufficient compute resources" (race conditions ?)

2016-12-01 Thread Massimo Sgaravatto
Thanks a lot George

It looks like this indeed helps !

Cheers, Massimo

2016-11-30 16:04 GMT+01:00 George Mihaiescu <lmihaie...@gmail.com>:

> Try changing the following in nova.conf and restart the nova-scheduler:
>
> scheduler_host_subset_size = 10
> scheduler_max_attempts = 10
>
> Cheers,
> George
>
> On Wed, Nov 30, 2016 at 9:56 AM, Massimo Sgaravatto <
> massimo.sgarava...@gmail.com> wrote:
>
>> Hi all
>>
>> I have a problem with scheduling in our Mitaka Cloud,
>> Basically when there are a lot of requests for new instances, some of
>> them fail because "Failed to compute_task_build_instances: Exceeded maximum
>> number of retries". And the failures are because "Insufficient compute
>> resources: Free memory 2879.50 MB < requested
>>  8192 MB" [*]
>>
>> But there are compute nodes with enough memory that could serve such
>> requests.
>>
>> In the conductor log I also see messages reporting that "Function
>> 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted
>> interval by xxx sec" [**]
>>
>>
>> My understanding is that:
>>
>> - VM a is scheduled to a certain compute node
>> - the scheduler chooses the same compute node for VM b before the info
>> for that compute node is updated (so the 'size' of VM a is not taken into
>> account)
>>
>> Does this make sense or am I totally wrong ?
>>
>> Any hints about how to cope with such scenarios, besides increasing
>>  scheduler_max_attempts ?
>>
>> scheduler_default_filters is set to:
>>
>> scheduler_default_filters = AggregateInstanceExtraSpecsFil
>> ter,AggregateMultiTenancyIsolation,RetryFilter,AvailabilityZ
>> oneFilter,RamFilter,CoreFilter,AggregateRamFilter,
>> AggregateCoreFilter,ComputeFilter,ComputeCapabilitiesFilter,
>> ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGr
>> oupAffinityFilter
>>
>>
>> Thanks a lot, Massimo
>>
>> [*]
>>
>> 2016-11-30 15:10:20.233 25140 WARNING nova.scheduler.utils
>> [req-ec8c0bdc-b413-4cab-b925-eb8f11212049 840c96b6fb1e4972beaa3d30ade10cc7
>> d27fe2becea94a3e980fb9f66e2f29
>> 1a - - -] Failed to compute_task_build_instances: Exceeded maximum number
>> of retries. Exceeded max scheduling attempts 5 for instance
>> 314eccd0-fc73-446f-8138-7d8d3c
>> 8644f7. Last exception: Insufficient compute resources: Free memory
>> 2879.50 MB < requested 8192 MB.
>> 2016-11-30 15:10:20.233 25140 WARNING nova.scheduler.utils
>> [req-ec8c0bdc-b413-4cab-b925-eb8f11212049 840c96b6fb1e4972beaa3d30ade10cc7
>> d27fe2becea94a3e980fb9f66e2f29
>> 1a - - -] [instance: 314eccd0-fc73-446f-8138-7d8d3c8644f7] Setting
>> instance to ERROR state.
>>
>>
>> [**]
>>
>> 2016-11-30 15:10:48.873 25128 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 9.08 sec
>> 2016-11-30 15:10:54.372 25142 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 9.33 sec
>> 2016-11-30 15:10:54.375 25140 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 9.32 sec
>> 2016-11-30 15:10:54.376 25129 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 9.30 sec
>> 2016-11-30 15:10:54.381 25138 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 9.24 sec
>> 2016-11-30 15:10:54.381 25139 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 9.28 sec
>> 2016-11-30 15:10:54.382 25143 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 9.24 sec
>> 2016-11-30 15:10:54.385 25141 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 9.11 sec
>> 2016-11-30 15:11:01.964 25128 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 3.09 sec
>> 2016-11-30 15:11:05.503 25142 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 1.13 sec
>> 2016-11-30 15:11:05.506 25138 W

Re: [Openstack-operators] How to tune scheduling for "Insufficient compute resources" (race conditions ?)

2016-11-30 Thread Massimo Sgaravatto
Hi Belmiro

We are indeed running two nova-schedulers, to have some HA.
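
To make it less likely that the two schedulers pick the same compute node,
the tuning I have in mind looks roughly like this in nova.conf (a rough,
untested sketch; option names as I understand them for Mitaka, and the
values are purely illustrative):

[DEFAULT]
# Pick randomly among the N best weighted hosts instead of always the
# single top-weighted one, so concurrent schedulers collide less often.
scheduler_host_subset_size = 5
# Allow a few more reschedules before an instance is put into ERROR.
scheduler_max_attempts = 10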

Thanks, Massimo

2016-11-30 16:18 GMT+01:00 Belmiro Moreira <
moreira.belmiro.email.li...@gmail.com>:

> How many nova-schedulers are you running?
> You can hit this issue when multiple nova-schedulers select the same
> compute node for different instances.
>
> Belmiro
>
> On Wed, Nov 30, 2016 at 3:56 PM, Massimo Sgaravatto <
> massimo.sgarava...@gmail.com> wrote:
>
>> Hi all
>>
>> I have a problem with scheduling in our Mitaka Cloud.
>> Basically, when there are a lot of requests for new instances, some of
>> them fail because "Failed to compute_task_build_instances: Exceeded maximum
>> number of retries". And the failures are because "Insufficient compute
>> resources: Free memory 2879.50 MB < requested
>>  8192 MB" [*]
>>
>> But there are compute nodes with enough memory that could serve such
>> requests.
>>
>> In the conductor log I also see messages reporting that "Function
>> 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted
>> interval by xxx sec" [**]
>>
>>
>> My understanding is that:
>>
>> - VM a is scheduled to a certain compute node
>> - the scheduler chooses the same compute node for VM b before the info
>> for that compute node is updated (so the 'size' of VM a is not taken into
>> account)
>>
>> Does this make sense or am I totally wrong?
>>
>> Any hints about how to cope with such scenarios, besides increasing
>> scheduler_max_attempts?
>>
>> scheduler_default_filters is set to:
>>
>> scheduler_default_filters = AggregateInstanceExtraSpecsFil
>> ter,AggregateMultiTenancyIsolation,RetryFilter,AvailabilityZ
>> oneFilter,RamFilter,CoreFilter,AggregateRamFilter,
>> AggregateCoreFilter,ComputeFilter,ComputeCapabilitiesFilter,
>> ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGr
>> oupAffinityFilter
>>
>>
>> Thanks a lot, Massimo
>>
>> [*]
>>
>> 2016-11-30 15:10:20.233 25140 WARNING nova.scheduler.utils
>> [req-ec8c0bdc-b413-4cab-b925-eb8f11212049 840c96b6fb1e4972beaa3d30ade10cc7
>> d27fe2becea94a3e980fb9f66e2f29
>> 1a - - -] Failed to compute_task_build_instances: Exceeded maximum number
>> of retries. Exceeded max scheduling attempts 5 for instance
>> 314eccd0-fc73-446f-8138-7d8d3c
>> 8644f7. Last exception: Insufficient compute resources: Free memory
>> 2879.50 MB < requested 8192 MB.
>> 2016-11-30 15:10:20.233 25140 WARNING nova.scheduler.utils
>> [req-ec8c0bdc-b413-4cab-b925-eb8f11212049 840c96b6fb1e4972beaa3d30ade10cc7
>> d27fe2becea94a3e980fb9f66e2f29
>> 1a - - -] [instance: 314eccd0-fc73-446f-8138-7d8d3c8644f7] Setting
>> instance to ERROR state.
>>
>>
>> [**]
>>
>> 2016-11-30 15:10:48.873 25128 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 9.08 sec
>> 2016-11-30 15:10:54.372 25142 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 9.33 sec
>> 2016-11-30 15:10:54.375 25140 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 9.32 sec
>> 2016-11-30 15:10:54.376 25129 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 9.30 sec
>> 2016-11-30 15:10:54.381 25138 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 9.24 sec
>> 2016-11-30 15:10:54.381 25139 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 9.28 sec
>> 2016-11-30 15:10:54.382 25143 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 9.24 sec
>> 2016-11-30 15:10:54.385 25141 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 9.11 sec
>> 2016-11-30 15:11:01.964 25128 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 3.09 sec
>> 2016-11-30 15:11:05.503 25142 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval 

[Openstack-operators] How to tune scheduling for "Insufficient compute resources" (race conditions ?)

2016-11-30 Thread Massimo Sgaravatto
Hi all

I have a problem with scheduling in our Mitaka Cloud.
Basically, when there are a lot of requests for new instances, some of them
fail because "Failed to compute_task_build_instances: Exceeded maximum
number of retries". And the failures are because "Insufficient compute
resources: Free memory 2879.50 MB < requested
 8192 MB" [*]

But there are compute nodes with enough memory that could serve such
requests.

In the conductor log I also see messages reporting that "Function
'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted
interval by xxx sec" [**]


My understanding is that:

- VM a is scheduled to a certain compute node
- the scheduler chooses the same compute node for VM b before the info for
that compute node is updated (so the 'size' of VM a is not taken into
account)

Does this make sense or am I totally wrong?

Any hints about how to cope with such scenarios, besides increasing
scheduler_max_attempts?

scheduler_default_filters is set to:

scheduler_default_filters =
AggregateInstanceExtraSpecsFilter,AggregateMultiTenancyIsolation,RetryFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,AggregateRamFilter,AggregateCoreFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter


Thanks a lot, Massimo

[*]

2016-11-30 15:10:20.233 25140 WARNING nova.scheduler.utils
[req-ec8c0bdc-b413-4cab-b925-eb8f11212049 840c96b6fb1e4972beaa3d30ade10cc7
d27fe2becea94a3e980fb9f66e2f29
1a - - -] Failed to compute_task_build_instances: Exceeded maximum number
of retries. Exceeded max scheduling attempts 5 for instance
314eccd0-fc73-446f-8138-7d8d3c
8644f7. Last exception: Insufficient compute resources: Free memory 2879.50
MB < requested 8192 MB.
2016-11-30 15:10:20.233 25140 WARNING nova.scheduler.utils
[req-ec8c0bdc-b413-4cab-b925-eb8f11212049 840c96b6fb1e4972beaa3d30ade10cc7
d27fe2becea94a3e980fb9f66e2f29
1a - - -] [instance: 314eccd0-fc73-446f-8138-7d8d3c8644f7] Setting instance
to ERROR state.


[**]

2016-11-30 15:10:48.873 25128 WARNING oslo.service.loopingcall [-] Function
'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted
interval by 9.08 sec
2016-11-30 15:10:54.372 25142 WARNING oslo.service.loopingcall [-] Function
'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted
interval by 9.33 sec
2016-11-30 15:10:54.375 25140 WARNING oslo.service.loopingcall [-] Function
'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted
interval by 9.32 sec
2016-11-30 15:10:54.376 25129 WARNING oslo.service.loopingcall [-] Function
'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted
interval by 9.30 sec
2016-11-30 15:10:54.381 25138 WARNING oslo.service.loopingcall [-] Function
'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted
interval by 9.24 sec
2016-11-30 15:10:54.381 25139 WARNING oslo.service.loopingcall [-] Function
'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted
interval by 9.28 sec
2016-11-30 15:10:54.382 25143 WARNING oslo.service.loopingcall [-] Function
'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted
interval by 9.24 sec
2016-11-30 15:10:54.385 25141 WARNING oslo.service.loopingcall [-] Function
'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted
interval by 9.11 sec
2016-11-30 15:11:01.964 25128 WARNING oslo.service.loopingcall [-] Function
'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted
interval by 3.09 sec
2016-11-30 15:11:05.503 25142 WARNING oslo.service.loopingcall [-] Function
'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted
interval by 1.13 sec
2016-11-30 15:11:05.506 25138 WARNING oslo.service.loopingcall [-] Function
'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted
interval by 1.12 sec
2016-11-30 15:11:05.509 25139 WARNING oslo.service.loopingcall [-] Function
'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted
interval by 1.13 sec
2016-11-30 15:11:05.512 25141 WARNING oslo.service.loopingcall [-] Function
'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted
interval by 1.13 sec
2016-11-30 15:11:05.525 25143 WARNING oslo.service.loopingcall [-] Function
'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted
interval by 1.14 sec
2016-11-30 15:11:05.526 25140 WARNING oslo.service.loopingcall [-] Function
'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted
interval by 1.15 sec
2016-11-30 15:11:05.529 25129 WARNING oslo.service.loopingcall [-] Function
'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted
interval by 1.15 sec
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Missing Glance metadef_resource_types table

2016-11-17 Thread Massimo Sgaravatto
Thanks

Actually, I have just realized that db_sync prints an error in the api.log:


2016-11-17 17:20:21.895 43516 CRITICAL glance [-] ValueError: Tables
"task_info,tasks" have non utf8 collation, please make sure all tables are
CHARSET=utf8
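
For the record, the fix I am thinking of (an untested sketch; it assumes
the mysql client credentials are already set up, and a database backup
should be taken first) is to convert the two tables reported above to
utf8 and then re-run the sync plus the metadefs load:

# convert the tables flagged by db_sync in the glance_prod database
mysql -e "ALTER TABLE glance_prod.task_info CONVERT TO CHARACTER SET utf8;"
mysql -e "ALTER TABLE glance_prod.tasks CONVERT TO CHARACTER SET utf8;"
# re-run the migration and load the metadata definitions
su -s /bin/sh -c "glance-manage db_sync" glance
su -s /bin/sh -c "glance-manage db_load_metadefs" glance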



2016-11-17 18:21 GMT+01:00 Abel Lopez <alopg...@gmail.com>:

> This is a manual step to load them. If your installation was complete, you
> should have a bunch of JSON files in /etc/glance/metadefs.
> You need to load them with glance-manage db_load_metadefs.
>
> > On Nov 17, 2016, at 9:12 AM, Massimo Sgaravatto <
> massimo.sgarava...@gmail.com> wrote:
> >
> > Hi
> >
> > We have just done the update Kilo --> Liberty --> Mitaka of our Cloud.
> > We went through Liberty just for the migration of the databases.
> >
> > In the updated (to Mitaka) installation, if I click "Update metadata" on
> > an image, I am told:
> >
> > Error: Unable to retrieve the namespaces.
> > In the glance api log file I see:
> >
> > 2016-11-17 15:33:30.313 11829 ERROR glance.api.v2.metadef_namespaces
> [req-81b31f76-f665-4933-805e-3fd1d5e7c045 a1f915a7a36c471d87d6702255016df4
> 36b1
> > ddb5dab8404dbe7fc359ec95ecf5 - - -] (pymysql.err.ProgrammingError)
> (1146, u"Table 'glance_prod.metadef_resource_types' doesn't exist") [SQL:
> u'SELECT metadef_resource_types.name AS metadef_resource_types_name,
> metadef_namespace_resource_types.namespace_id AS
> metadef_namespace_resource_types_name
> > space_id \nFROM metadef_resource_types INNER JOIN
> metadef_namespace_resource_types ON metadef_resource_types.id =
> metadef_namespace_resource_types.resource_type_id \nWHERE
> metadef_resource_types.name IN (%s)'] [parameters:
> (u'OS::Glance::Image',)]
> >
> >
> > Indeed I don't have this table in my glance database:
> >
> > mysql> show tables;
> > +-----------------------+
> > | Tables_in_glance_prod |
> > +-----------------------+
> > | image_locations       |
> > | image_members         |
> > | image_properties      |
> > | image_tags            |
> > | images                |
> > | migrate_version       |
> > | task_info             |
> > | tasks                 |
> > +-----------------------+
> > 8 rows in set (0.00 sec)
> >
> > I didn't have any errors running db_sync, apart from some deprecation
> > messages (and the install guide says that they can be ignored):
> >
> > su -s /bin/sh -c "glance-manage db_sync" glance
> > Option "verbose" from group "DEFAULT" is deprecated for removal.  Its
> value may be silently ignored in the future.
> > /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:1056:
> OsloDBDeprecationWarning: EngineFacade is deprecated; please use
> oslo_db.sqlalchemy.enginefacade
> >   expire_on_commit=expire_on_commit, _conf=conf)
> >
> > Any hints?
> >
> > Thanks, Massimo
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators