Re: [openstack-dev] [nova] How to debug no valid host failures with placement

2018-08-01 Thread Andrey Volkov
Hi,

It seems you first need to check what placement knows about the resources
of your cloud.
This can be done either with the REST API [1] or with osc-placement [2].
For osc-placement you could use:

pip install osc-placement
openstack allocation candidate list --resource DISK_GB=20 \
  --resource MEMORY_MB=2048 --resource VCPU=1 --os-placement-api-version 1.10

You can also explore the placement state with other commands such as
"openstack resource provider list", "openstack resource provider inventory
list", and "openstack resource provider usage show".
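
If you prefer to hit the REST API [1] directly, here is a rough sketch using
a keystoneauth1 session (the auth values are placeholders for your
environment):

# Rough sketch: ask placement for allocation candidates directly.
from keystoneauth1.identity import v3
from keystoneauth1 import session

auth = v3.Password(auth_url='http://controller:5000/v3',
                   username='admin', password='secret',
                   project_name='admin',
                   user_domain_id='default', project_domain_id='default')
sess = session.Session(auth=auth)

resp = sess.get('/allocation_candidates',
                endpoint_filter={'service_type': 'placement'},
                params={'resources': 'DISK_GB:20,MEMORY_MB:2048,VCPU:1'},
                headers={'OpenStack-API-Version': 'placement 1.10'})
# An empty 'allocation_requests' list here means "no valid host".
print(resp.json())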

[1] https://developer.openstack.org/api-ref/placement/
[2] https://docs.openstack.org/osc-placement/latest/index.html

On Wed, Aug 1, 2018 at 6:16 PM Ben Nemec  wrote:

> Hi,
>
> I'm having an issue with no valid host errors when starting instances
> and I'm struggling to figure out why.  I thought the problem was disk
> space, but I changed the disk_allocation_ratio and I'm still getting no
> valid host.  The host does have plenty of disk space free, so that
> shouldn't be a problem.
>
> However, I'm not even sure it's disk that's causing the failures because
> I can't find any information in the logs about why the no valid host is
> happening.  All I get from the scheduler is:
>
> "Got no allocation candidates from the Placement API. This may be a
> temporary occurrence as compute nodes start up and begin reporting
> inventory to the Placement service."
>
> While in placement I see:
>
> 2018-08-01 15:02:22.062 20 DEBUG nova.api.openstack.placement.requestlog
> [req-0a830ce9-e2af-413a-86cb-b47ae129b676
> fc44fe5cefef43f4b921b9123c95e694 b07e6dc2e6284b00ac7070aa3457c15e -
> default default] Starting request: 10.2.2.201 "GET
> /placement/allocation_candidates?limit=1000&resources=DISK_GB%3A20%2CMEMORY_MB%3A2048%2CVCPU%3A1"
>
> __call__
>
> /usr/lib/python2.7/site-packages/nova/api/openstack/placement/requestlog.py:38
> 2018-08-01 15:02:22.103 20 INFO nova.api.openstack.placement.requestlog
> [req-0a830ce9-e2af-413a-86cb-b47ae129b676
> fc44fe5cefef43f4b921b9123c95e694 b07e6dc2e6284b00ac7070aa3457c15e -
> default default] 10.2.2.201 "GET
> /placement/allocation_candidates?limit=1000&resources=DISK_GB%3A20%2CMEMORY_MB%3A2048%2CVCPU%3A1"
>
> status: 200 len: 53 microversion: 1.25
>
> Basically it just seems to be logging that it got a request, but there's
> no information about what it did with that request.
>
> So where do I go from here?  Is there somewhere else I can look to see
> why placement returned no candidates?
>
> Thanks.
>
> -Ben


-- 
Thanks,

Andrey Volkov,
Software Engineer, Mirantis, Inc.


[openstack-dev] [nova][placement] Scheduler VM distribution

2018-04-19 Thread Andrey Volkov
Hello,

From my understanding, we have a race between the scheduling
process and the host weight update.

I made a simple experiment. In an environment with 50 fake hosts,
40 VMs were booted; they should have been placed one per host.
The hosts are equal to each other in terms of inventory.

img=6fedf6a1-5a55-4149-b774-b0b4dccd2ed1
flavor=1
for i in {1..40}; do
nova boot --flavor $flavor --image $img --nic none vm-$i;
sleep 1;
done

The resulting distribution was:

mysql> select resource_provider_id, count(*) from allocations where
resource_class_id = 0 group by 1;

+----------------------+----------+
| resource_provider_id | count(*) |
+----------------------+----------+
|                    1 |        2 |
|                   18 |        2 |
|                   19 |        3 |
|                   20 |        3 |
|                   26 |        2 |
|                   29 |        2 |
|                   33 |        3 |
|                   36 |        2 |
|                   41 |        1 |
|                   49 |        3 |
|                   51 |        2 |
|                   52 |        3 |
|                   55 |        2 |
|                   60 |        3 |
|                   61 |        2 |
|                   63 |        2 |
|                   67 |        3 |
+----------------------+----------+
17 rows in set (0.00 sec)

And the question is:
if we have atomic resource allocation, what is the reason
to use compute_nodes.* for weight calculation?
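
To make the race concrete, here is a toy illustration (this is not nova
code) of two requests being weighed against the same stale compute_nodes
snapshot while the allocation itself stays atomic:

host_usage = {'host-a': 0, 'host-b': 0}   # authoritative allocations
snapshot = dict(host_usage)               # stale view used by the weighers

def schedule_one():
    # Weigh hosts on the snapshot, which has not been refreshed yet.
    target = min(snapshot, key=snapshot.get)
    # The allocation itself is atomic, but by now it is too late to matter.
    host_usage[target] += 1
    return target

print(schedule_one(), schedule_one())   # host-a host-a
print(host_usage)                       # {'host-a': 2, 'host-b': 0}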

Here is a log of the behavior I described: http://ix.io/18cw

-- 
Thanks,

Andrey Volkov,
Software Engineer, Mirantis, Inc.


Re: [openstack-dev] [nova] [placement] Modeling SR-IOV with nested resource providers

2017-09-07 Thread Andrey Volkov
Ed,

Thanks for the response.

I'm also interested in how those models can be used, from two points of view.

First, how can I request the desired configuration? I thought about
some anti-affinity logic based on traits in Placement, but probably
that's not a task for Placement. The solution Jay Pipes proposed [1] is to
make several requests to /allocation_candidates and then combine a new
request from the responses.

Second, how complicated would it be to update the resource provider
structure if some conditions change (e.g. a port is connected to a
different switch)? I agree that a simple structure is preferable here;
for me, having PFs as resource providers and VFs as inventories with tags
(the third option in the previous post) is closer to reality than
hierarchical resource providers. What do you think?

FYI Eric Fried started an etherpad about generic device management [2].

[1] http://paste.openstack.org/show/620456/
[2]
https://etherpad.openstack.org/p/nova-ptg-queens-generic-device-management


On Wed, Sep 6, 2017 at 11:17 PM, Ed Leafe  wrote:

> On Sep 5, 2017, at 10:02 AM, Andrey Volkov  wrote:
>
> > For example, I have SR-IOV PF with four ports (P_i), two of them are
> > connected to one switch (SW_1) and other two to another (SW_2). I
> > would like to get VFs from distinct ports connected to distinct
> > switches (more details can be found in spec [1]), how it can be
> > modeled with nested resource providers?
>
> You should make it as complicated as it needs to be, but no more. The
> first model nests one deep, while the second goes two levels deep, yet they
> both provide the same granularity for accessing the VFs, so I would go for
> the first. And I’m not sure that we will be able to get the “inherited”
> traits used in the second model implemented any time soon.
>
> -- Ed Leafe



-- 
Thanks,

Andrey Volkov,
Software Engineer, Mirantis, Inc.


[openstack-dev] [nova] [placement] Modeling SR-IOV with nested resource providers

2017-09-05 Thread Andrey Volkov
For example, I have an SR-IOV PF with four ports (P_i), two of them
connected to one switch (SW_1) and the other two to another (SW_2). I
would like to get VFs from distinct ports connected to distinct
switches (more details can be found in the spec [1]). How can this be
modeled with nested resource providers?

Several possible solutions I see:

1)
   compute node
   +- SR-IOV PF (traits: P1, SW1)
   |  +- VF1
   |  +- VF2
   +- SR-IOV PF (traits: P2, SW1)
   |  +- VF3
   |  +- VF4
   +- SR-IOV PF (traits: P3, SW2)
   |  +- VF5
   |  +- VF6
   +- SR-IOV PF (traits: P4, SW2)
      +- VF7
      +- VF8


2)
   compute node
   +- SR-IOV PF (traits: SW1)
   |  +- SR-IOV PF (traits: P1)
   |  |  +- VF1
   |  |  +- VF2
   |  +- SR-IOV PF (traits: P2)
   |     +- VF3
   |     +- VF4
   +- SR-IOV PF (traits: SW2)
      +- SR-IOV PF (traits: P3)
      |  +- VF5
      |  +- VF6
      +- SR-IOV PF (traits: P4)
         +- VF7
         +- VF8


3) Use tags for inventories, so the problem can be solved without complex
structures.

Are the described options applicable, or are there others that could solve
the issue?

[1] https://review.openstack.org/#/c/182242/


-- 
Thanks,

Andrey Volkov,
Software Engineer, Mirantis, Inc.


Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] - RFC cross project request id tracking

2017-05-15 Thread Andrey Volkov

> The last time this came up, some people were concerned that trusting 
> request-id on the wire was concerning to them because it's coming from 
> random users.

TBH I don't see why a validated request-id value can't be logged on the
callee service's side, probably because I missed some previous context.
Could you please give an example of such concerns?

With the service user approach I see two blockers:
- A callee service needs to know whether the caller is a "special" user or not.
- Until all services use a service user, we won't get the complete trace.
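
For what it's worth, the strong format validation mentioned below is cheap
to do; a minimal sketch:

import re
import uuid

# Accept only the canonical "req-<uuid>" form; anything else gets replaced
# by a locally generated request id.
REQUEST_ID_RE = re.compile(
    r'^req-[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}$')

def is_valid_request_id(value):
    return bool(REQUEST_ID_RE.match(value))

print(is_valid_request_id('req-' + str(uuid.uuid4())))   # True
print(is_valid_request_id('anything-else'))              # False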

Sean Dague writes:

> One of the things that came up in a logging Forum session is how much 
> effort operators are having to put into reconstructing flows for things 
> like server boot when they go wrong, as every time we jump a service 
> barrier the request-id is reset to something new. The back and forth 
> between Nova / Neutron and Nova / Glance would be definitely well served 
> by this. Especially if this is something that's easy to query in elastic 
> search.
>
> The last time this came up, some people were concerned that trusting 
> request-id on the wire was concerning to them because it's coming from 
> random users. We're going to assume that's still a concern by some. 
> However, since the last time that came up, we've introduced the concept 
> of "service users", which are a set of higher priv services that we are 
> using to wrap user requests between services so that long running 
> request chains (like image snapshot). We trust these service users 
> enough to keep on trucking even after the user token has expired for 
> this long run operations. We could use this same trust path for 
> request-id chaining.
>
> So, the basic idea is, services will optionally take an inbound 
> X-OpenStack-Request-ID which will be strongly validated to the format 
> (req-$uuid). They will continue to always generate one as well. When the 
> context is built (which is typically about 3 more steps down the paste 
> pipeline), we'll check that the service user was involved, and if not, 
> reset the request_id to the local generated one. We'll log both the 
> global and local request ids. All of these changes happen in 
> oslo.middleware, oslo.context, oslo.log, and most projects won't need 
> anything to get this infrastructure.
>
> The python clients, and callers, will then need to be augmented to pass 
> the request-id in on requests. Servers will effectively decide when they 
> want to opt into calling other services this way.
>
> This only ends up logging the top line global request id as well as the 
> last leaf for each call. This does mean that full tree construction will 
> take more work if you are bouncing through 3 or more servers, but it's a 
> step which I think can be completed this cycle.
>
> I've got some more detailed notes, but before going through the process 
> of putting this into an oslo spec I wanted more general feedback on it 
> so that any objections we didn't think about yet can be raised before 
> going through the detailed design.
>
>   -Sean

-- 
Thanks,

Andrey Volkov,
Software Engineer, Mirantis, Inc.



Re: [openstack-dev] Hierarchical quotas at the PTG?

2017-02-13 Thread Andrey Volkov

Hi Matt,

> But it's unclear to some (at least me) what those
> issues are.


I believe the issues were resolved. I did a retrospective, going through the
bugs and repository history to see what problems cinder encountered with quotas.

- First, hierarchical quotas were implemented in DbQuotaDriver but
  after some time were moved to a separate driver. It's definitely
  worth leaving the choice to the operator to determine which quota
  driver is better for their case.

- Second, in the nested quotas implementation the sum of subproject usages
  is saved in the parent node: there is a special column (allocated) in the
  quota limits table that contains the calculated value. I believe that was
  done for the sake of performance. The implementation proposed for nova
  takes a different approach that doesn't cache usages in the DB, and
  performance testing for that approach was done. (A toy contrast of the
  two approaches is sketched after this list.)

- Third, there was an issue with getting projects from keystone due to
  policy. A service user was used to get projects from keystone because an
  ordinary user's rights may not be enough. Additional complexity came from
  subtle conditions for quota management. The approach proposed for nova
  also uses a service user but doesn't touch anything related to quota
  management.
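
A toy contrast of the two approaches (this is not the actual cinder or nova
code, just an illustration of the trade-off):

# Per-project usage and a simple two-level hierarchy.
usages = {'root': 2, 'child-a': 5, 'child-b': 3}
children = {'root': ['child-a', 'child-b'], 'child-a': [], 'child-b': []}

# Cinder-style: the sum of subproject usages is cached in the parent's
# "allocated" column and must be kept up to date on every change.
allocated = {'root': 8}

# Nova-style: derive the subtree usage when the quota check runs; nothing
# to keep in sync, at the cost of counting every time.
def subtree_usage(project):
    return sum(usages[c] + subtree_usage(c) for c in children[project])

print(allocated['root'])        # 8, read from the cached column
print(subtree_usage('root'))    # 8, computed on demand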

> Has anyone already planned on talking about hierarchical quotas at the
> PTG, like the architecture work group?

I would like to discuss the following topics at the PTG:

- The hierarchical quotas approach in nova.

- Storing quotas in the keystone service and migrating limits from nova.

- (Possibly) quotas at the flavor level.

It's my first PTG, so I'm not sure how this is usually done; anyway,
I'll prepare some information to share with the community.

> Is there still a group working on that and can provide some guidance
> here?

There are a couple of developers at Mirantis interested in this topic;
as Boris mentioned, some work has been done.

Matt Riedemann writes:

> Operators want hierarchical quotas [1]. Nova doesn't have them yet and
> we've been hesitant to invest scarce developer resources in them since
> we've heard that the implementation for hierarchical quotas in Cinder
> has some issues. But it's unclear to some (at least me) what those
> issues are.
>
> Has anyone already planned on talking about hierarchical quotas at the
> PTG, like the architecture work group?
>
> I know there was a bunch of razzle dazzle before the Austin summit about
> quotas, but I have no idea what any of that led to. Is there still a
> group working on that and can provide some guidance here?
>
> [1]
> http://lists.openstack.org/pipermail/openstack-operators/2017-January/012450.html

--
Thanks,

Andrey Volkov,
Software Engineer, Mirantis, Inc.



[openstack-dev] [nova] Live migration with claim

2017-02-09 Thread Andrey Volkov

Hi,

I started to review the patch series [1] that addresses the issue with
live migration resources. While doing that I made some notes that may
be useful for reviewers. I would like to share those notes and ask the
community to look at them critically and check whether I'm wrong in my
conclusions.

** How does nova do live migration (LM)?

*** Components of LM workflow

In the LM process the following components are involved:
- nova-api
  Migration params are determined and validated at this level, most
  importantly:
  - instance - source VM
  - host - target hostname
  - block_migration
  - force
- conductor
  Some orchestration is done at this level:
  - migration object creation
  - LiveMigrationTask building and execution
  - scheduler call
  - check_can_live_migrate_destination - an RPC request to the destination
    compute node to check that the destination environment is appropriate.
    From the destination node a check_can_live_migrate_source call is made
    to check that rollback is possible.
  - migration call to the source compute node
- scheduler
  The scheduler is involved in LM only if the destination host is not
  specified (empty). In that case, the scheduler's select_destinations
  function picks an appropriate host, and the conductor also calls
  check_can_live_migrate_destination on the picked host.
- compute source node
  This is where the migration starts and ends.
  - a pre_live_migration call to the destination node is made first
  - control is transferred to the underlying driver for the migration
  - a migration monitor is started
  - post_live_migration or rollback is done
- compute destination node
  Calls from the conductor and the source node are processed here, and
  check_can_live_migrate_source is called on the source node.

*** Common calls diagram

http://amadev.ru/static/lm_diagram.png

*** Calls list for the libvirt case

The following list of calls can be used as reference.
  
- nova.api.openstack.compute.migrate_server.MigrateServerController._migrate_live
- nova.compute.api.API.live_migrate
- nova.conductor.api.ComputeTaskAPI.live_migrate_instance
- nova.conductor.manager.ComputeTaskManager._live_migrate
- nova.conductor.manager.ComputeTaskManager._build_live_migrate_task
- nova.conductor.tasks.live_migrate.LiveMigrationTask._execute
- nova.conductor.tasks.live_migrate.LiveMigrationTask._find_destination
- nova.scheduler.manager.SchedulerManager.select_destinations
- nova.conductor.tasks.live_migrate.LiveMigrationTask._call_livem_checks_on_host
- nova.compute.manager.ComputeManager.check_can_live_migrate_destination
- nova.compute.manager.ComputeManager.live_migration
- nova.compute.manager.ComputeManager._do_live_migration
- nova.compute.manager.pre_live_migration
- nova.virt.libvirt.driver.LibvirtDriver._live_migration_operation
- nova.virt.libvirt.guest.Guest.migrate
- libvirt: domain.migrateToURI{,2,3}
- nova.compute.manager.ComputeManager.post_live_migration_at_destination

** What is the problem with LM?

Nova doesn't claim resources during LM, so we can get into a situation
with wrong scheduling until the next periodic update_available_resource
is done. There is a good description of this in bug [2].

** What changes were made in the patch?

A new live_migration_claim was added to the ResourceTracker, similar to
the resize and rebuild claims.
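
To illustrate the claim idea itself, here is a toy sketch (this is not the
code in [1], just the pattern): the destination reserves resources
atomically during the pre-check and frees them again if the migration is
rejected.

import threading

class ToyResourceTracker:
    def __init__(self, free_mb):
        self.free_mb = free_mb
        self._lock = threading.Lock()

    def live_migration_claim(self, instance_mb):
        # Reserve memory atomically so a concurrent migration or boot
        # cannot oversubscribe the host before the periodic audit runs.
        with self._lock:
            if instance_mb > self.free_mb:
                raise RuntimeError('not enough memory on destination')
            self.free_mb -= instance_mb
        return ToyClaim(self, instance_mb)

class ToyClaim:
    def __init__(self, tracker, mb):
        self.tracker, self.mb = tracker, mb

    def abort(self):
        # Rollback path, e.g. when check_can_live_migrate_source fails.
        with self.tracker._lock:
            self.tracker.free_mb += self.mb

rt = ToyResourceTracker(free_mb=4096)
claim = rt.live_migration_claim(2048)   # during the destination pre-check
claim.abort()                           # migration refused, resources freed
print(rt.free_mb)                       # 4096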

It was decided to initiate the live_migration_claim within
check_can_live_migrate_destination on the destination node. To make that
happen, the migration object (created in the conductor) and the resource
limits for the destination node (obtained from the scheduler) must be
passed to check_can_live_migrate_destination, which is why the conductor
call and the compute RPC API were changed.

The overall intention of this patch is to take into account the amount of
resources on the destination node, which can be a foundation for future LM
improvements related to NUMA, SR-IOV, and huge pages.

[1] https://review.openstack.org/#/c/244489/
[2] https://bugs.launchpad.net/nova/+bug/1289064

-- 
Thanks,

Andrey Volkov,
Software Engineer, Mirantis, Inc.



Re: [openstack-dev] [nova] Let's kill quota classes (again)

2016-12-15 Thread Andrey Volkov

> We're moving quotas to the API and we're going to stop doing the 
> reservation/commit/rollback race dance between API and compute nodes per 
> this spec:
>
> https://specs.openstack.org/openstack/nova-specs/specs/ocata/approved/cells-count-resources-to-check-quota-in-api.html

It's a very nice spec. If we can remove all the quota_usages and
reservations stuff, it will be great. As I understand it, a lot of places
need to be changed; I'd be happy to take part and help.

Matt Riedemann writes:

> On 12/15/2016 3:11 AM, Andrey Volkov wrote:
>> Hi,
>>
>> I totally agree with Matt than `os-quota-class-sets` is inconsistent.
>> It has that hardcoded default class can't be changed.
>> API call is documented neither Nova nor Cinder (has the same API for
>> quotas).
>>
>> With defaults in the configuration I have some concerns:
>> - As it was mentioned before, possibly we need to update configs in
>> several places.
>
> We're moving quotas to the API and we're going to stop doing the 
> reservation/commit/rollback race dance between API and compute nodes per 
> this spec:
>
> https://specs.openstack.org/openstack/nova-specs/specs/ocata/approved/cells-count-resources-to-check-quota-in-api.html
>
> So that would mean you really only need the default quota configuration 
> on the API node, so I don't think this is as much of a problem after 
> that change.
>
>> - To make changes be applied we need to restart service, possibly SIGHUP
>> can help
>>   but I'm not sure.
>
> I'd think we could make these mutable config options so we could pickup 
> the changes without restarting the service.
>
>>
>> Alternatives I see are:
>> - Update `os-quota-sets` and give it possibility to work with defaults.
>> - Use external centralized quota service on which the work's doing actively.
>>   Please see [1] spec for limits in keystone and doc [2] having information
>>   how it can be applied in Nova and Cinder.
>>
>> [1] https://review.openstack.org/#/c/363765/
>> [2]
>> https://docs.google.com/document/d/1AqmmRvd_e-4Hw2oLbnBf5jBtjLgMj-kqAaQfofp_NYI/edit#
>>
>>

-- 
Thanks,

Andrey Volkov,
Software Engineer, Mirantis, Inc.



Re: [openstack-dev] [nova] Let's kill quota classes (again)

2016-12-15 Thread Andrey Volkov
Hi,

I totally agree with Matt that `os-quota-class-sets` is inconsistent.
It has a hardcoded default class that can't be changed.
The API call is documented in neither Nova nor Cinder (which has the same
API for quotas).

With defaults in the configuration I have some concerns:
- As was mentioned before, we may need to update configs in several
places.
- To make changes apply we need to restart the service; possibly SIGHUP
can help, but I'm not sure (a sketch of a mutable-option approach is below).
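
If the defaults live in the config, something like oslo.config's mutable
options might avoid the restart; a sketch (I haven't checked which releases
support this, so treat it as an assumption):

from oslo_config import cfg

CONF = cfg.CONF
CONF.register_opts([
    # mutable=True means the value can be re-read at runtime when the
    # service calls CONF.mutate_config_files(), e.g. from a SIGHUP handler.
    cfg.IntOpt('instances', default=10, mutable=True,
               help='Default instance quota applied to projects'),
], group='quota')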

Alternatives I see are:
- Update `os-quota-sets` and give it the ability to work with defaults.
- Use an external centralized quota service, on which work is actively
  being done. Please see the spec [1] for limits in keystone and the doc
  [2] with information on how it can be applied in Nova and Cinder.

[1] https://review.openstack.org/#/c/363765/
[2]
https://docs.google.com/document/d/1AqmmRvd_e-4Hw2oLbnBf5jBtjLgMj-kqAaQfofp_NYI/edit#


On Thu, Dec 15, 2016 at 6:32 AM, joehuang  wrote:

> If we don't update the default quota configuration at the same time for
> Nova services in different
> physical nodes, then there is a chance for the quota control in dis-order
> period: for example,
> 30 cores qutoa limit in one node, 20 cores quota limit in the other node.
>
> Best Regards
> Chaoyi Huang (joehuang)
>
> 
> From: Matt Riedemann [mrie...@linux.vnet.ibm.com]
> Sent: 15 December 2016 10:42
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Let's kill quota classes (again)
>
> On 7/18/2016 6:36 PM, Sean Dague wrote:
> > On 07/14/2016 08:07 AM, Kevin L. Mitchell wrote:
> >> The original concept of quota classes was to allow the default quotas
> >> applied to a tenant to be a function of the type of tenant.  That is,
> >> say you have a tiered setup, where you have gold-, silver-, and
> >> bronze-level customers, with gold having lots of free quota and bronze
> >> having a small amount of quota.  Rather than having to set the quotas
> >> individually for each tenant you created, the idea is that you set the
> >> _class_ of the tenant, and have quotas associated with the classes.
> >> This also has the advantage that, if someone levels up (or down) to
> >> another class of service, all you do is change the tenant's class, and
> >> the new quotas apply immediately.
> >>
> >> (By the way, the turnstile integration was not part of turnstile itself;
> >> there's a turnstile plugin to allow it to integrate with nova that's
> >> quota_class-aware, so you could also apply rate limits this way.)
> >>
> >> Personally, it wouldn't break my heart if quota classes went away; I
> >> think this level of functionality, if it seems reasonable to include,
> >> should become part of a more unified quota API (which we're still
> >> struggling to come up with anyway) so that everyone gets the benefit…or
> >> perhaps shares the pain? ;)  Anyway, I'm not aware of anyone using this
> >> functionality, though it might be worth asking about on the operators
> >> list—for curiosity's sake, if nothing else.  It would be interesting to
> >> see if anyone would be interested in the original idea, even if the
> >> current implementation doesn't make sense :)
> >
> > We've already dropped the hook turnstile was using, so I don't see any
> > reason not to drop this bit as well. I don't think it will work for
> > anyone with the current code.
> >
> > I agree that this probably makes way more sense in common quota code
> > then buried inside of Nova.
> >
> > -Sean
> >
>
> Following up on this, I missed the boat for Ocata, but got to talking
> with melwitt about this again today and while I had it all in my head
> again I've written a spec for Pike to deprecate the os-quota-class-sets
> API:
>
> https://review.openstack.org/#/c/411035/
>
> This essentially means no more custom quota classes (they aren't
> functional today anyway), and no more controlling global default quota
> limits via the REST API - that has to be done via the configuration
> (after the microversion).
>
> --
>
> Thanks,
>
> Matt Riedemann
>



-- 
Thanks,

Andrey Volkov,
Software Engineer, Mirantis, Inc.


Re: [openstack-dev] [keystone][nova]Quotas: Store resources and limits in the Keystone

2016-12-14 Thread Andrey Volkov

-- 
Thanks,

Andrey Volkov,
Software Engineer, Mirantis, Inc.



[openstack-dev] [nova] hierarchy quota spec

2016-11-08 Thread Andrey Volkov
Hi,

I'd like the community to look at the hierarchical quota spec [1].
It uses a slightly different approach than the previously proposed nested
quota spec [2] and allows quota overbooking. I think it's possible to do
this in parallel with the cells-quota-api-db spec [3].

I'd be happy to get some comments and suggestions about it.
There is a PoC [4] for this spec where the code can be viewed.

[1] https://review.openstack.org/#/c/394422/
[2]
https://review.openstack.org/#/c/160605/3/specs/liberty/approved/nested-quota-driver-api.rst
[3]
http://git.openstack.org/cgit/openstack/nova-specs/tree/specs/ocata/approved/cells-quota-api-db.rst
[4] https://review.openstack.org/#/c/391072/

-- 
Thanks,

Andrey Volkov,
Software Engineer, Mirantis, Inc.


Re: [openstack-dev] [nova] Functional tests

2016-11-01 Thread Andrey Volkov
Hi Paul,

For me, it looks like a keystone auth error; AFAIK neither cinder nor
keystone is run in nova functional tests.
The test "nova.tests.functional.test_servers.ServerTestV220.
test_attach_detach_vol_to_shelved_offloaded_server" calls
os-volume_attachments, and there is no mock for cinder.volumes.get.
Possibly you should mock this call.
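
A rough sketch of the kind of mock I mean; the patch target
(nova.volume.cinder.API.get) is an assumption, so check the traceback for
the exact call path and adjust it:

import fixtures

FAKE_VOLUME = {'id': '9a695496-44aa-4404-b2cc-ccab2501f87e',
               'status': 'available', 'attach_status': 'detached',
               'size': 1}

def stub_cinder(test):
    """Call this from the functional test's setUp().

    'test' is the test case instance; the patch target below is assumed
    and should be replaced with whatever call actually fails for you.
    """
    test.useFixture(fixtures.MockPatch(
        'nova.volume.cinder.API.get', return_value=FAKE_VOLUME))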

On Tue, Nov 1, 2016 at 12:53 PM, Carlton, Paul (Cloud Services) <
paul.carlt...@hpe.com> wrote:

> Hi
>
>
> I've inherited a series of changes from a co-worker who has moved on and
> have
>
> rebased them but now I'm hitting some issues with functional tests which I
> can't
>
> figure out how to resolve.  The changes are https://review.openstack.org/#
> /c/268053
>
> and https://review.openstack.org/#/c/326899.  The former causes an
> existing related
>
> test to fail due to a cinder error and the latter introduces a new api
> version and
>
> using this seems to break existing functionality.  Any suggestions as to
> how I might
>
> debug these issues?
>
>
> Thanks
>
> Paul Carlton
> Software Engineer
> Cloud Services
> Hewlett Packard Enterprise
> BUK03:T242
> Longdown Avenue
> Stoke Gifford
> Bristol BS34 8QZ
>
> Office: +44 (0) 1173 162189
> Mobile:+44 (0)7768 994283
> Email:paul.carl...@hpe.com


-- 
Thanks,

Andrey Volkov,
Software Engineer, Mirantis, Inc.


[openstack-dev] [nova][neutron] neutron port duplication

2016-07-22 Thread Andrey Volkov
Hi, nova and neutron teams,

While booting a new instance, nova requests a port for that instance from
neutron.
It's possible to have a situation where neutron doesn't respond due to a
timeout or connection break and nova retries the port creation. This
results in duplicated ports for the instance [1].

To solve this issue, different methods can be applied:
- Transactional port creation in neutron (where it's possible to roll back
if the client doesn't accept the answer).
- Idempotent port creation (where the client provides some id and the
server does get_or_create on this id).
- Getting the port on the client before the next retry attempt (idempotent
port creation on the client side) -- see the sketch below.
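
A sketch of the third option; "client" here is a hypothetical neutron client
wrapper, not the real python-neutronclient API:

def get_or_create_port(client, network_id, device_id):
    # Look for a port left over from a previous attempt that timed out
    # after neutron had already created it, keyed by a deterministic
    # attribute (here device_id, i.e. the instance uuid).
    existing = client.list_ports(network_id=network_id, device_id=device_id)
    if existing:
        return existing[0]
    return client.create_port(network_id=network_id, device_id=device_id)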

Questions to the community:
- Am I right in my thinking? Does the problem exist? Maybe there is already
a tool that can solve it?
- Which method is better to apply to solve the problem, if it exists?

[1] https://bugs.launchpad.net/nova/+bug/1603909


-- 
Thanks,

Andrey Volkov,
Software Engineer, Mirantis, Inc.


Re: [openstack-dev] Does nova-compute interact with nova Db?

2016-06-21 Thread Andrey Volkov
Hi,

nova-compute hasn't direct access to DB only scheduler, conductor and API
can use it.
See schema: http://docs.openstack.org/developer/nova/architecture.html.

I think for your case you could write some script (ansible, puppet?) to
collect data and nova-manage command to update DB.

On Tue, Jun 21, 2016 at 5:07 PM, KHAN, RAO ADNAN  wrote:

> I want to collect an *extra_resource* info from compute node(s) and add it
> in to the compute_nodes table. I like to understand how nova-compute
> interacts with DB currently.
>
>
>
> Thanks,
>
>
>
> *Rao Adnan Khan*
>
> AT&T Integrated Cloud (AIC) Development | SE
> Software Development & Engineering (SD&E)
>
> Emai: rk2...@att.com
>
> Cell phone: 972-342-5638
>
>
>


-- 
Thanks,

Andrey Volkov,
Software Engineer, Mirantis


Re: [openstack-dev] [nova] [api] [placement] strategy for placement api structure

2016-06-17 Thread Andrey Volkov
> The code I've written in the WIP tries to break with many of code trends
> that require readers to guess.
>
> [1]
> http://specs.openstack.org/openstack/nova-specs/specs/newton/approved/generic-resource-pools.html
> [2]
> http://specs.openstack.org/openstack/nova-specs/specs/mitaka/implemented/resource-classes.html
> [3] https://review.openstack.org/#/c/329149/ and its descendants [4]
> https://gabbi.readthedocs.io/
> [5]
> http://specs.openstack.org/openstack/api-wg/guidelines/testing.html#proposals
> [6] http://flask.pocoo.org/
> [7]
> https://review.openstack.org/#/c/329151/10/nova/api/openstack/placement/handlers/resource_provider.py
> [8] https://pypi.python.org/pypi/selector
> [9] https://review.openstack.org/#/c/329386/
>
> --
> Chris Dent   (╯°□°)╯︵┻━┻http://anticdent.org/
> freenode: cdent         tw: @anticdent
-- 
Kind Regards,
Andrey Volkov,
Software Engineer, Mirantis, Inc.

Tel.: +7 (916) 86 88 942
Skype: amadev_alt