Re: [openstack-dev] [trove] datastore migration issues

2013-12-20 Thread Vipul Sabhaya
I am fine with requiring the deployer to update default values if they
don't make sense for their given deployment.  However, not having any value
for older/existing instances when the code requires one is not good.  So
let's create a default datastore of mysql, with a default version, and set
that as the datastore for older instances.  A deployer can then run
trove-manage to update the default record created.
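Roughly, the migration's backfill step could look something like the sketch
below (the table and column names are guesses on my part, purely
illustrative -- this is not the actual Trove migration):

    from sqlalchemy import MetaData, Table, select

    def upgrade(migrate_engine):
        # Backfill datastore_version_id for instances created before
        # datastores existed, pointing them at the default 'mysql'
        # version inserted earlier in the migration.
        meta = MetaData(bind=migrate_engine)
        instances = Table('instances', meta, autoload=True)
        versions = Table('datastore_versions', meta, autoload=True)

        default_id = select([versions.c.id]).where(
            versions.c.name == 'mysql').execute().scalar()

        instances.update().where(
            instances.c.datastore_version_id == None).values(
            datastore_version_id=default_id).execute()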


On Thu, Dec 19, 2013 at 6:14 PM, Tim Simpson wrote:

>  I second Rob and Greg- we need to not allow the instance table to have
> nulls for the datastore version ID. I can't imagine that as Trove grows and
> evolves, that edge case is something we'll always remember to code and test
> for, so let's cauterize things now by no longer allowing it at all.
>
>  The fact that the migration scripts can't, to my knowledge, accept
> parameters for what the dummy datastore name and version should be isn't
> great, but I think it would be acceptable enough to make the provided
> default values sensible and ask operators who don't like it to manually
> update the database.
>
>  - Tim
>
>
>
>  --
> *From:* Robert Myers [myer0...@gmail.com]
> *Sent:* Thursday, December 19, 2013 9:59 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [trove] datastore migration issues
>
>   I think that we need to be good citizens and at least add dummy data.
> Because it is impossible to know who all is using this, the list you have
> is probably not complete. Trove has been available for quite some time, and
> not all of those users will be listening on this thread. Basically, anytime
> you have a database migration that adds a required field, you *have* to
> alter the existing rows. If we don't, we're basically telling everyone who
> upgrades that we, the 'Database as a Service' team, don't care about data
> integrity in our own product :)
>
>  Robert
>
>
> On Thu, Dec 19, 2013 at 9:25 AM, Greg Hill wrote:
>
>>  We did consider doing that, but decided it wasn't really any different
>> from the other options as it required the deployer to know to alter that
>> data.  That would require the fewest code changes, though.  It was also my
>> understanding that mysql variants were a possibility as well (percona and
>> mariadb), which is what brought on the objection to just defaulting in
>> code.  Also, we can't derive the version being used, so we *could* fill it
>> with a dummy version and assume mysql, but I don't feel like that solves
>> the problem or the objections to the earlier solutions.  And then we also
>> have bogus data in the database.
>>
>>   Since there's no perfect solution, I'm really just hoping to gather
>> consensus among people who are running existing trove installations and
>> have yet to upgrade to the newer code about what would be easiest for them.
>>  My understanding is that list is basically HP and Rackspace, and maybe
>> Ebay?, but the hope was that bringing the issue up on the list might
>> confirm or refute that assumption and drive the conversation to a suitable
>> workaround for those affected, which hopefully isn't that many
>> organizations at this point.
>>
>>  The options are basically:
>>
>>  1. Put the onus on the deployer to correct existing records in the
>> database.
>> 2. Have the migration script put dummy data in the database which you
>> have to correct.
>> 3. Put the onus on the deployer to fill out the values in the config.
>>
>>  Greg
>>
>>  On Dec 18, 2013, at 8:46 PM, Robert Myers  wrote:
>>
>>  There is the database migration for datastores. We should add a
>> function to backfill the existing data with either dummy data or set it
>> to 'mysql', as that was the only possibility before datastores.
>> On Dec 18, 2013 3:23 PM, "Greg Hill"  wrote:
>>
>>> I've been working on fixing a bug related to migrating existing
>>> installations to the new datastore code:
>>>
>>>  https://bugs.launchpad.net/trove/+bug/1259642
>>>
>>>  The basic gist is that existing instances won't have any data in the
>>> datastore_version_id field in the database unless we somehow populate that
>>> data during migration, and not having that data populated breaks a lot of
>>> things (including the ability to list instances or delete or resize old
>>> instances).  It's impossible to populate that data in an automatic, generic
>>> way, since it's highly vendor-dependent on what database and version they
>>> currently support, and there's not enough data in the older schema to
>>> populate the new tables automatically.
>>>
>>>  So far, we've come up with some non-optimal solutions:
>>>
>>>  1. The first iteration was to assume 'mysql' as the database manager
>>> on instances without a datastore set.
>>> 2. The next iteration was to make the default value be configurable in
>>> trove.conf, but default to 'mysql' if it wasn't set.
>>> 3. It was then proposed that we could just use the 'default_datastore'
>>> value from the config, which ma

Re: [openstack-dev] [trove] Delivering datastore logs to customers

2013-12-20 Thread Vipul Sabhaya
Yep agreed, this is a great idea.

We really only need two API calls to get this going:
- List available logs to ‘save’
- Save a log (to swift)

Some additional points to consider:
- We don’t need to create a record of every Log ‘saved’ in Trove.  These
entries, treated as a Trove resource aren’t useful, since you don’t
actually manipulate that resource.
- Deletes of Logs shouldn’t be part of the Trove API, if the user wants to
delete them, just use Swift.
- A deployer should be able to choose which logs can be ‘saved’ by their
users
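On the guest side, the 'save a log (to swift)' call above could be little
more than an upload of the requested file into the user's container -- a
minimal sketch, assuming python-swiftclient and an already-authenticated
connection (names are illustrative):

    def save_log_to_swift(swift_conn, container, log_name, log_path):
        # swift_conn: an authenticated swiftclient.client.Connection.
        # Upload the requested datastore log into the user's container;
        # anything beyond that (listing, deleting) is plain Swift.
        swift_conn.put_container(container)
        with open(log_path, 'rb') as log_file:
            swift_conn.put_object(container, log_name, contents=log_file)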


On Wed, Dec 18, 2013 at 2:02 PM, Michael Basnight wrote:

> I think this is a good idea and I support it. In today's meeting [1] there
> were some questions, and I encourage them to get brought up here. My only
> question is in regard to the "tail" of a file we discussed in IRC. After
> talking about it w/ other trovesters, I think it doesn't make sense to tail
> the log for most datastores. I can't imagine finding anything useful in,
> say, a Java application's last 100 lines (especially if a stack trace was
> present). But I don't want to derail, so let's try to focus on the "deliver
> to swift" option first.
>
> [1]
> http://eavesdrop.openstack.org/meetings/trove/2013/trove.2013-12-18-18.13.log.txt
>
> On Wed, Dec 18, 2013 at 5:24 AM, Denis Makogon wrote:
>
>> Greetings, OpenStack DBaaS community.
>>
>> I'd like to start discussion around a new feature in Trove. The
>> feature I would like to propose covers manipulating  database log files.
>>
>
>> Main idea. Give the user the ability to retrieve a database log file
>> for any purpose.
>>
>> Goals to achieve. Suppose we have an application (a binary application,
>> without source code) which requires a DB connection to perform data
>> manipulations, and a user would like to develop or debug that application;
>> logs would also be useful for auditing. Trove itself provides access only
>> for CRUD operations inside the database, so the user cannot access the
>> instance directly and analyze its log files. Therefore, Trove should be
>> able to provide a way for a user to download the database log for
>> analysis.
>>
>>
>> Log manipulations are designed to let the user perform log
>> investigations. Since Trove is a PaaS-level project, its user cannot
>> interact with the compute instance directly, only with the database
>> through the provided API (database operations).
>>
>> I would like to propose the following API operations:
>>
>>1. Create DBLog entries.
>>2. Delete DBLog entries.
>>3. List DBLog entries.
>>
>> Possible API, models, server, and guest configurations are described at
>> wiki page. [1]
>>
>> [1] https://wiki.openstack.org/wiki/TroveDBInstanceLogOperation
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Michael Basnight
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Delivering datastore logs to customers

2013-12-20 Thread Denis Makogon
Vipul, agreed.

The Trove server side could store a mapping of available log files and their
paths per datastore.
Also, I agree with ignoring the DBLog model, since it's really useless in
terms of future manipulations.
And the deployer would be able to define which logs are available to users
by setting an allow parameter per log type.

Example (each set of allow parameters is configured per datastore):

    allow_commit_log = True
    allow_bin_log = False

    mapping = {}
    if allow_bin_log:
        mapping['bin_log'] = bin_log_path  # the path for this datastore's binlog



2013/12/20 Vipul Sabhaya 

> Yep agreed, this is a great idea.
>
> We really only need two API calls to get this going:
> - List available logs to ‘save’
> - Save a log (to swift)
>
> Some additional points to consider:
> - We don’t need to create a record of every Log ‘saved’ in Trove.  These
> entries, treated as a Trove resource aren’t useful, since you don’t
> actually manipulate that resource.
> - Deletes of Logs shouldn’t be part of the Trove API, if the user wants to
> delete them, just use Swift.
> - A deployer should be able to choose which logs can be ‘saved’ by their
> users
>
>
> On Wed, Dec 18, 2013 at 2:02 PM, Michael Basnight wrote:
>
>> I think this is a good idea and I support it. In today's meeting [1] there
>> were some questions, and I encourage them to get brought up here. My only
>> question is in regard to the "tail" of a file we discussed in IRC. After
>> talking about it w/ other trovesters, I think it doesn't make sense to tail
>> the log for most datastores. I can't imagine finding anything useful in,
>> say, a Java application's last 100 lines (especially if a stack trace was
>> present). But I don't want to derail, so let's try to focus on the "deliver
>> to swift" option first.
>>
>> [1]
>> http://eavesdrop.openstack.org/meetings/trove/2013/trove.2013-12-18-18.13.log.txt
>>
>> On Wed, Dec 18, 2013 at 5:24 AM, Denis Makogon wrote:
>>
>>> Greetings, OpenStack DBaaS community.
>>>
>>> I'd like to start discussion around a new feature in Trove. The
>>> feature I would like to propose covers manipulating  database log files.
>>>
>>>
>>
>>> Main idea. Give the user the ability to retrieve a database log file
>>> for any purpose.
>>>
>>> Goals to achieve. Suppose we have an application (a binary application,
>>> without source code) which requires a DB connection to perform data
>>> manipulations, and a user would like to develop or debug that application;
>>> logs would also be useful for auditing. Trove itself provides access only
>>> for CRUD operations inside the database, so the user cannot access the
>>> instance directly and analyze its log files. Therefore, Trove should be
>>> able to provide a way for a user to download the database log for
>>> analysis.
>>>
>>>
>>> Log manipulations are designed to let the user perform log
>>> investigations. Since Trove is a PaaS-level project, its user cannot
>>> interact with the compute instance directly, only with the database
>>> through the provided API (database operations).
>>>
>>> I would like to propose the following API operations:
>>>
>>>1. Create DBLog entries.
>>>2. Delete DBLog entries.
>>>3. List DBLog entries.
>>>
>>> Possible API, models, server, and guest configurations are described at
>>> wiki page. [1]
>>>
>>> [1] https://wiki.openstack.org/wiki/TroveDBInstanceLogOperation
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Michael Basnight
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][nova] oslo common.service vs. screen and devstack

2013-12-20 Thread Oleg Gelbukh
I'd +1 Clint on this. I believe that the only right way to handle SIGHUP
for a process running in the foreground is to terminate.
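Something as simple as the sketch below would cover it (purely illustrative
and not the actual oslo code; the restart callback stands in for whatever
the service launcher does on SIGHUP today):

    import os
    import signal
    import sys

    def install_sighup_handler(restart_cb):
        # Terminate on SIGHUP when attached to a terminal (the screen /
        # foreground case), otherwise keep the new restart behaviour.
        def _handler(signo, frame):
            if os.isatty(sys.stdin.fileno()):
                sys.exit(0)
            restart_cb()
        signal.signal(signal.SIGHUP, _handler)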

--
Best regards,
Oleg Gelbukh


On Fri, Dec 20, 2013 at 10:54 AM, Clint Byrum  wrote:

> Excerpts from Sean Dague's message of 2013-12-19 16:33:12 -0800:
> > So a few people had been reporting recently that unstack no longer stops
> > nova processes, which I only got around to looking at today. It turns
> > out the new common.service stack from oslo takes SIGHUP and treats it as
> > a restart. Which isn't wrong, but is new, and is incompatible with
> > screen (the way we use it). Because we use -X stuff, the resulting -X
> > quit sends SIGHUP to the child processes.
> >
> > So the question is, are we definitely in a state now where nova services
> > can and do want to support SIGHUP as restart?
> >
> > If so, is there interest in being able to disable that behavior at start
> > time, so we can continue with a screen based approach as well?
> >
> > If not, we'll need to figure out another way to approach the shutdown in
> > devstack. Which is fine, just work that wasn't expected.
> >
>
> Perhaps if the process is running in the foreground, as it does in
> devstack, it should still terminate on SIGHUP rather than restart.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Support for Django 1.6

2013-12-20 Thread Matthias Runge
On 12/19/2013 04:45 PM, Thomas Goirand wrote:
> Hi,
> 
> Sid has Django 1.6. Is it planned to add support for it? I currently
> don't know what to do with the Horizon package, as it's currently
> broken... :(
> 
> Thomas
Yes, there are two patches available, one for horizon[1] and one for
django_openstack_auth[2]

If both are in, we can start gating on django-1.6 as well.

[1] https://review.openstack.org/#/c/58947/
[2] https://review.openstack.org/#/c/58561/

Matthias

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] VM diagnostics - V3 proposal

2013-12-20 Thread Oleg Gelbukh
Hi everyone,

I'm sorry for being late to the thread, but what about baremetal driver?
Should it support the get_diagnostics() as well?

--
Best regards,
Oleg Gelbukh


On Thu, Dec 19, 2013 at 8:21 PM, Vladik Romanovsky <
vladik.romanov...@enovance.com> wrote:

> Ah, I think I've responded too fast, sorry.
>
> meter-list provides a list of various measurements that are being done per
> resource.
> sample-list provides a list of samples per every meter: ceilometer
> sample-list --meter cpu_util -q resource_id=vm_uuid
> These samples can be aggregated over a period of time per every meter and
> resource:
> ceilometer statistics -m cpu_util -q
> 'timestamp>START;timestamp<=END;resource_id=vm_uuid' --period 3600
>
> Vladik
>
>
>
> - Original Message -
> > From: "Daniel P. Berrange" 
> > To: "Vladik Romanovsky" 
> > Cc: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>, "John
> > Garbutt" 
> > Sent: Thursday, 19 December, 2013 10:37:27 AM
> > Subject: Re: [openstack-dev] [nova] VM diagnostics - V3 proposal
> >
> > On Thu, Dec 19, 2013 at 03:47:30PM +0100, Vladik Romanovsky wrote:
> > > I think it was:
> > >
> > > ceilometer sample-list -m cpu_util -q 'resource_id=vm_uuid'
> >
> > Hmm, a standard devstack deployment of ceilometer doesn't seem to
> > record any performance stats at all - just shows me the static
> > configuration parameters :-(
> >
> >  ceilometer meter-list -q 'resource_id=296b22c6-2a4d-4a8d-a7cd-2d73339f9c70'
> > +---------------------+-------+----------+--------------------------------------+----------------------------------+----------------------------------+
> > | Name                | Type  | Unit     | Resource ID                          | User ID                          | Project ID                       |
> > +---------------------+-------+----------+--------------------------------------+----------------------------------+----------------------------------+
> > | disk.ephemeral.size | gauge | GB       | 296b22c6-2a4d-4a8d-a7cd-2d73339f9c70 | 96f9a624a325473daf4cd7875be46009 | ec26984024c1438e8e2f93dc6a8c5ad0 |
> > | disk.root.size      | gauge | GB       | 296b22c6-2a4d-4a8d-a7cd-2d73339f9c70 | 96f9a624a325473daf4cd7875be46009 | ec26984024c1438e8e2f93dc6a8c5ad0 |
> > | instance            | gauge | instance | 296b22c6-2a4d-4a8d-a7cd-2d73339f9c70 | 96f9a624a325473daf4cd7875be46009 | ec26984024c1438e8e2f93dc6a8c5ad0 |
> > | instance:m1.small   | gauge | instance | 296b22c6-2a4d-4a8d-a7cd-2d73339f9c70 | 96f9a624a325473daf4cd7875be46009 | ec26984024c1438e8e2f93dc6a8c5ad0 |
> > | memory              | gauge | MB       | 296b22c6-2a4d-4a8d-a7cd-2d73339f9c70 | 96f9a624a325473daf4cd7875be46009 | ec26984024c1438e8e2f93dc6a8c5ad0 |
> > | vcpus               | gauge | vcpu     | 296b22c6-2a4d-4a8d-a7cd-2d73339f9c70 | 96f9a624a325473daf4cd7875be46009 | ec26984024c1438e8e2f93dc6a8c5ad0 |
> > +---------------------+-------+----------+--------------------------------------+----------------------------------+----------------------------------+
> >
> >
> > If the admin user can't rely on ceilometer guaranteeing availability of
> > the performance stats at all, then I think having an API in nova to report
> > them is in fact justifiable. In fact it is probably justifiable no matter
> > what, as a fallback way to check what VMs are doing in the face of failure
> > of ceilometer / part of the cloud infrastructure.
> >
> > Daniel
> > --
> > |: http://berrange.com              -o- http://www.flickr.com/photos/dberrange/ :|
> > |: http://libvirt.org               -o-             http://virt-manager.org :|
> > |: http://autobuild.org             -o-    http://search.cpan.org/~danberr/ :|
> > |: http://entangle-photo.org        -o-       http://live.gnome.org/gtk-vnc :|
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Next two weekly meetings cancelled ?

2013-12-20 Thread Sergey Lukjanov
Both Mon and Tue 1500 UTC works for me


On Fri, Dec 20, 2013 at 8:43 AM, Nikolay Starodubtsev <
nstarodubt...@mirantis.com> wrote:

> I'm on holidays till 9th January. And I don't think I'll have an internet
> access all the time on holidays.
> p.s. By the way, I'll prefer Friday's evenings as new meeting time if it
> is available - it only one day when I don't do my BJJ classes. Or we can
> move to 1900-2000UTC. it looks fine for me. Or move to early Europe morning.
>
>
>
> Nikolay Starodubtsev
>
> Software Engineer
>
> Mirantis Inc.
>
>
> Skype: dark_harlequine1
>
>
> 2013/12/19 Sylvain Bauza 
>
>>  On 19/12/2013 13:57, Dina Belova wrote:
>>
>> I have Christmas holidays till 12th January... So I don't really know I
>> if I will be available 6th Jan.
>>
>>
>> Oh ok. Who else are still on vacation these times ?
>> We can do our next meeting on 12th Jan, but I'm concerned with the
>> delivery of Climate 0.1 which would be one week after.
>>
>> -Sylvain
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Radomir Dopieralski
On 20/12/13 00:17, Jay Pipes wrote:
> On 12/19/2013 04:55 AM, Radomir Dopieralski wrote:
>> On 14/12/13 16:51, Jay Pipes wrote:
>>
>> [snip]
>>
>>> Instead of focusing on locking issues -- which I agree are very
>>> important in the virtualized side of things where resources are
>>> "thinner" -- I believe that in the bare-metal world, a more useful focus
>>> would be to ensure that the Tuskar API service treats related group
>>> operations (like "deploy an undercloud on these nodes") in a way that
>>> can handle failures in a graceful and/or atomic way.
>>
>> Atomicity of operations can be achieved by introducing critical sections.
>> You basically have two ways of doing that, optimistic and pessimistic.
>> Pessimistic critical section is implemented with a locking mechanism
>> that prevents all other processes from entering the critical section
>> until it is finished.
> 
> I'm familiar with the traditional non-distributed software concept of a
> mutex (or in Windows world, a critical section). But we aren't dealing
> with traditional non-distributed software here. We're dealing with
> highly distributed software where components involved in the
> "transaction" may not be running on the same host or have much awareness
> of each other at all.

Yes, that is precisely why you need to have a single point where they
can check if they are not stepping on each other's toes. If you don't,
you get race conditions and non-deterministic behavior. The only
difference with traditional, non-distributed software is that since the
components involved are communicating over a, relatively slow, network,
you have a much, much greater chance of actually having a conflict.
Scaling the whole thing to hundreds of nodes practically guarantees trouble.

> And, in any case (see below), I don't think that this is a problem that
> needs to be solved in Tuskar.
>
>> Perhaps you have some other way of making them atomic that I can't
>> think of?
> 
> I should not have used the term atomic above. I actually do not think
> that the things that Tuskar/Ironic does should be viewed as an atomic
> operation. More below.

OK, no operations performed by Tuskar need to be atomic, noted.

>>> For example, if the construction or installation of one compute worker
>>> failed, adding some retry or retry-after-wait-for-event logic would be
>>> more useful than trying to put locks in a bunch of places to prevent
>>> multiple sysadmins from trying to deploy on the same bare-metal nodes
>>> (since it's just not gonna happen in the real world, and IMO, if it did
>>> happen, the sysadmins/deployers should be punished and have to clean up
>>> their own mess ;)
>>
>> I don't see why they should be punished, if the UI was assuring them
>> that they are doing exactly the thing that they wanted to do, at every
>> step, and in the end it did something completely different, without any
>> warning. If anyone deserves punishment in such a situation, it's the
>> programmers who wrote the UI in such a way.
> 
> The issue I am getting at is that, in the real world, the problem of
> multiple users of Tuskar attempting to deploy an undercloud on the exact
> same set of bare metal machines is just not going to happen. If you
> think this is actually a real-world problem, and have seen two sysadmins
> actively trying to deploy an undercloud on bare-metal machines at the
> same time, unbeknownst to each other, then I feel bad for the
> sysadmins that found themselves in such a situation, but I feel its
> their own fault for not knowing about what the other was doing.

How can it be their fault, when at every step of their interaction with
the user interface, the user interface was assuring them that they are
going to do the right thing (deploy a certain set of nodes), but when
they finally hit the confirmation button, did a completely different
thing (deployed a different set of nodes)? The only fault I see is in
them using such software. Or are you suggesting that they should
implement the lock themselves, through e-mails or some other means of
communication?

Don't get me wrong, the deploy button is just one easy example of this
problem. We have it all over the user interface. Even such a simple
operation, as retrieving a list of node ids, and then displaying the
corresponding information to the user has a race condition in it -- what
if some of the nodes get deleted after we get the list of ids, but
before we make the call to get node details about them? This should be
done as an atomic operation that either locks, or fails if there was a
change in the middle of it, and since the calls are to different
systems, the only place where you can set a lock or check if there was a
change, is the tuskar-api. And no, trying to get again the information
about a deleted node won't help -- you can keep retrying for years, and
the node will still remain deleted. This is all over the place. And,
saying that "this is the user's fault" doesn't help.
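To make the optimistic variant concrete, the kind of check-and-fail I have
in mind is nothing heavier than this (an illustrative SQLAlchemy sketch
assuming a version column; it is not actual tuskar-api code):

    import sqlalchemy as sa

    def update_if_unchanged(conn, table, row_id, expected_version, values):
        # Optimistic critical section: apply the update only if the row's
        # version still matches what we read earlier. Zero rows updated
        # means somebody else changed it and the caller has to retry or
        # report a conflict instead of silently doing the wrong thing.
        stmt = table.update().where(
            sa.and_(table.c.id == row_id,
                    table.c.version == expected_version)
        ).values(dict(values, version=expected_version + 1))
        return conn.execute(stmt).rowcount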

> Trying to make a comple

Re: [openstack-dev] [Climate] Next two weekly meetings cancelled ?

2013-12-20 Thread Sylvain Bauza
Well, 2000 UTC means midnight for you guys. Not really safe for family
concerns :-)

Maybe you meant 2000 local time, so 1600 UTC?

I can propose Fridays 1500 UTC (so 19:00 your time ;-)) as an
alternative (both meeting channels are free at this time).


Let's vote: +1 for Fridays 1500 UTC.

On 20/12/2013 05:43, Nikolay Starodubtsev wrote:
I'm on holidays till 9th January. And I don't think I'll have an 
internet access all the time on holidays.
p.s. By the way, I'll prefer Friday's evenings as new meeting time if 
it is available - it only one day when I don't do my BJJ classes. Or 
we can move to 1900-2000UTC. it looks fine for me. Or move to early 
Europe morning.



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1



2013/12/19 Sylvain Bauza


On 19/12/2013 13:57, Dina Belova wrote:

I have Christmas holidays till 12th January... So I don't really
know I if I will be available 6th Jan.



Oh ok. Who else are still on vacation these times ?
We can do our next meeting on 12th Jan, but I'm concerned with the
delivery of Climate 0.1 which would be one week after.

-Sylvain

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] VM diagnostics - V3 proposal

2013-12-20 Thread Daniel P. Berrange
On Fri, Dec 20, 2013 at 12:56:47PM +0400, Oleg Gelbukh wrote:
> Hi everyone,
> 
> I'm sorry for being late to the thread, but what about baremetal driver?
> Should it support the get_diagnostics() as well?

Of course, where practical, every driver should aim to support every
method in the virt driver class API.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Default ephemeral filesystem

2013-12-20 Thread Daniel P. Berrange
On Fri, Dec 20, 2013 at 09:21:54AM +1300, Robert Collins wrote:
> The default ephemeral filesystem in Nova is ext3 (for Linux). However
> ext3 is IMNSHO a pretty poor choice given ext4's existence. I can
> totally accept that other fs's like xfs might be contentious - but is
> there any reason not to make ext4 the default?
> 
> I'm not aware of any distro that doesn't have ext4 support - even RHEL
> defaults to ext4 in RHEL5.
> 
> The reason I'm raising this is that making a 1TB ext3 ephemeral volume
> does (way) over 5GB of writes due to zeroing all the inode tables, but
>> an ext4 one does less than 1% of the IO - 14m vs 7 seconds in my brief
> testing. (We were investigating why baremetal deploys were slow :)).

I've no objection to changing the default in this way. I would suggest
though that we make the choice of ephemeral filesystem configurable
per-instance. I can well imagine people wanting to be able to choose
xfs or btrfs instead of ext4.

My suggestion would be to support a glance image metadata property to
let users specify the filesystem that is suitable for use with their
image.
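E.g. something along these lines, where the property name is just for
illustration (it is not an existing key):

  glance image-update <image-id> --property ephemeral_format=xfs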

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Next two weekly meetings cancelled ?

2013-12-20 Thread Sergey Lukjanov
+1 for Fridays 1500 UTC


On Fri, Dec 20, 2013 at 1:26 PM, Sylvain Bauza wrote:

>  Well, 2000UTC means midnight for you, guys. Not really safe for family
> concerns :-)
> Maybe you were meaning 2000 local time, so 1600 UTC ?
>
> I can propose Fridays 1500 UTC (so 19:00 your time ;-)) as an alternative
> (both meeting channels are free this time)
>
> Let's vote : +1 for Fridays 1500 UTC.
>
> On 20/12/2013 05:43, Nikolay Starodubtsev wrote:
>
> I'm on holidays till 9th January. And I don't think I'll have an internet
> access all the time on holidays.
> p.s. By the way, I'll prefer Friday's evenings as new meeting time if it
> is available - it only one day when I don't do my BJJ classes. Or we can
> move to 1900-2000UTC. it looks fine for me. Or move to early Europe morning.
>
>
>
> Nikolay Starodubtsev
>
> Software Engineer
>
> Mirantis Inc.
>
>
>  Skype: dark_harlequine1
>
>
> 2013/12/19 Sylvain Bauza 
>
>>  On 19/12/2013 13:57, Dina Belova wrote:
>>
>> I have Christmas holidays till 12th January... So I don't really know I
>> if I will be available 6th Jan.
>>
>>
>>  Oh ok. Who else are still on vacation these times ?
>> We can do our next meeting on 12th Jan, but I'm concerned with the
>> delivery of Climate 0.1 which would be one week after.
>>
>> -Sylvain
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bugs] definition of triaged

2013-12-20 Thread Thierry Carrez
Robert Collins wrote:
> On 16 December 2013 23:56, Thierry Carrez  wrote:
>> I like the first and third parts. Not really convinced with the second
>> part, though. You'll have a lot of "Confirmed" bugs without proposed
>> approach (like 99% of them) so asking core to read them all and scan
>> them for a proposed approach sounds like a waste of time. There seems to
> 
> So, I'm trying to reconcile:
>  - the goal of moving bugs into 'triaged'
>- which really is:
>  - keeping a pipeline of low hanging fruit as an onramp
>  - apparently some documentation needs too, though I'd rather push
> /all/ those into DocImpact tags on changes, + those bugs that are
> solely documentation issues.
>  - the goal of identifying critical bugs rapidly
>  - the goal of steering bugs to the right subteam (e.g. vendor interests)

Agree on your 3 goals. But I would argue that, in our setting, the value
of the second and third goal is much higher than the value of the first
one. We need *some* easy/analyzed bugs in the onramp pipeline, but we
don't need all of them.

> [...]
> I'm *entirely* happy with saying that anyone with the experience to do
> it can move things up to Triaged - I see no problem there, but there
> is a huge problem if we have any step in the process's inner loop that
> requires O(bugs) tasks.
> [...]

I completely agree. I felt like *your* approach for the second phase was
O(bugs), which is why I disagreed with it :)

You proposed:

Daily tasks - second layer - -core current and previous members
1. Assess the proposed approach in Confirmed+High[1] bugs
1.1. If good, move to Triaged
1.2  If not, suggest what would make the approach good[2]
2. If appropriate add low-hanging-fruit tag

IIUC that means going through each and every Confirmed+High[1] bug to
check if there is a proposed approach in them, and move them to
"Triaged" if it's any good. This is O(confirmedbugs).

My proposal (and the current state of things) is like this:

Anyone can propose an approach to a random bug and set the bug to
Triaged when he does. Anyone.

This is not an O(bugs) effort, this is 0 effort for your core members.

I assert there is not enough value in "assessing the proposed approach"
for it to be worth core time. Anyone should be able to propose a
solution and, if they are sure enough about it, set the bug to Triaged.
That creates enough Triaged bugs for your on-ramp pipeline. Code review
will be there to catch the corner case bad solution.

Core developers are just human beings who sign up to do a lot of code
reviews, not deities who need to vouch for every proposed solution in every
bug before a human may be tempted to solve it.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Default ephemeral filesystem

2013-12-20 Thread Robert Collins
That's certainly a logical extension to the system, but orthogonal to
fixing a bad default, IMO.

Admins can already configure a filesystem per OS.
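For reference, that is the virt_mkfs option in nova.conf; from memory it is
roughly of the form below, but check the sample config for the exact syntax:

  virt_mkfs = linux=mkfs.ext4 -L %(fs_label)s -F %(target)s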

-Rob

On 20 December 2013 23:00, Daniel P. Berrange  wrote:
> On Fri, Dec 20, 2013 at 09:21:54AM +1300, Robert Collins wrote:
>> The default ephemeral filesystem in Nova is ext3 (for Linux). However
>> ext3 is IMNSHO a pretty poor choice given ext4's existence. I can
>> totally accept that other fs's like xfs might be contentious - but is
>> there any reason not to make ext4 the default?
>>
>> I'm not aware of any distro that doesn't have ext4 support - even RHEL
>> defaults to ext4 in RHEL5.
>>
>> The reason I'm raising this is that making a 1TB ext3 ephemeral volume
>> does (way) over 5GB of writes due to zeroing all the inode tables, but
>> an ext4 one does less than 1% of the IO - 14m vs 7 seconds in my brief
>> testing. (We were investigating why baremetal deploys were slow :)).
>
> I've no objection to changing the default in this way. I would suggest
> though that we make the choice of ephemeral filesystem configurable
> per-instance. I can well imagine people wanting to be able to choose
> xfs or btrfs instead of ext4.
>
> My suggestion would be to support a glance image metadata property to
> let users specify the filesystem that is suitable for use with their
> image.
>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o- http://virt-manager.org :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Next two weekly meetings cancelled ?

2013-12-20 Thread Dina Belova
+1


On Fri, Dec 20, 2013 at 1:57 PM, Sergey Lukjanov wrote:

> +1 for Fridays 1500 UTC
>
>
> On Fri, Dec 20, 2013 at 1:26 PM, Sylvain Bauza wrote:
>
>>  Well, 2000UTC means midnight for you, guys. Not really safe for family
>> concerns :-)
>> Maybe you were meaning 2000 local time, so 1600 UTC ?
>>
>> I can propose Fridays 1500 UTC (so 19:00 your time ;-)) as an alternative
>> (both meeting channels are free this time)
>>
>> Let's vote : +1 for Fridays 1500 UTC.
>>
>> On 20/12/2013 05:43, Nikolay Starodubtsev wrote:
>>
>> I'm on holidays till 9th January. And I don't think I'll have an internet
>> access all the time on holidays.
>> p.s. By the way, I'll prefer Friday's evenings as new meeting time if it
>> is available - it only one day when I don't do my BJJ classes. Or we can
>> move to 1900-2000UTC. it looks fine for me. Or move to early Europe morning.
>>
>>
>>
>> Nikolay Starodubtsev
>>
>> Software Engineer
>>
>> Mirantis Inc.
>>
>>
>>  Skype: dark_harlequine1
>>
>>
>> 2013/12/19 Sylvain Bauza 
>>
>>>  On 19/12/2013 13:57, Dina Belova wrote:
>>>
>>> I have Christmas holidays till 12th January... So I don't really know I
>>> if I will be available 6th Jan.
>>>
>>>
>>>  Oh ok. Who else are still on vacation these times ?
>>> We can do our next meeting on 12th Jan, but I'm concerned with the
>>> delivery of Climate 0.1 which would be one week after.
>>>
>>> -Sylvain
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Savanna Technical Lead
> Mirantis Inc.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [storyboard] Storyboard sprint around FOSDEM

2013-12-20 Thread Thierry Carrez
Hi everyone,

In case you're not familiar with it, Storyboard[1] is a cross-project
task tracking tool that we are building to replace our usage of
Launchpad bugs & blueprints.

We plan to have a 2-day sprint just before FOSDEM in Brussels to make
the few design and architectural hard calls that are needed to bring
this from POC state to a dogfoodable, continuously-deployed system.

We already have 4/6 people signed up, so if you're interested in joining,
please reply to this thread ASAP so that we can book the relevant space.

Date/Location: January 30-31 in Brussels, Belgium

(FOSDEM[2] is February 1-2 in the same city, so you can combine the two)

[1] http://git.openstack.org/cgit/openstack-infra/storyboard
[2] http://fosdem.org/

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Display NoneType fields correctly in python-cinderclient output

2013-12-20 Thread Shrirang Phadke
Hi All,



I am fixing Bug #1261713: Volume type 'None' gives indistinguishable CLI
output (https://bugs.launchpad.net/cinder/+bug/1261713).



Currently, if any value in the cinder-api response is a NoneType (i.e. NULL),
then python-cinderclient shows the entry as None in the output. This is
because the JSON null is parsed to the Python NoneType.

Please check the following cinder list output:
http://paste.openstack.org/show/55648/



In the given output, JSON correctly shows "volume_type" as null, but
python-cinderclient converts the JSON null to the Python NoneType, which is
sometimes confusing, as mentioned in the bug above.

So instead we should show the null entries as either a hyphen (-) or just a
space ( ) in the output of python-cinderclient.



Expected output of python-cinderclient in case of NoneType values for Name
and Volume_Type would be as follows:

http://paste.openstack.org/show/55654/
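The client-side change itself could be as small as a formatting helper
along these lines (an illustrative sketch, not the final patch):

    def _format_nullable(value, placeholder='-'):
        # Render NoneType (JSON null) fields as a hyphen so that a real
        # 'None' string stays distinguishable from a missing value.
        return placeholder if value is None else value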







Regards,

Shrirang

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Horizon and Tuskar-UI codebase merge

2013-12-20 Thread Ladislav Smola

and +1 also from me :-)

Seems like this is the way we want to go. So, what are the next steps?
It seems this has to be done in cooperation between the PTLs of Horizon and
TripleO, and probably ttx?


Thank you,
Ladislav


On 12/19/2013 05:29 PM, Lyle, David wrote:

So after a lot of consideration, my opinion is the two code bases should stay 
in separate repos under the Horizon Program, for a few reasons:
-Adding a large chunk of code for an incubated project is likely going to cause 
the Horizon delivery some grief due to dependencies and packaging issues at the 
distro level.
-The code in Tuskar-UI is currently in a large state of flux/rework.  The 
Tuskar-UI code needs to be able to move quickly and at times drastically, this 
could be detrimental to the stability of Horizon.  And conversely, the 
stability needs of Horizon and be detrimental to the speed at which Tuskar-UI 
can change.
-Horizon Core can review changes in the Tuskar-UI code base and provide 
feedback without the code needing to be integrated in Horizon proper.  
Obviously, with an eye to the code bases merging in the long run.

As far as core group organization, I think the current Tuskar-UI core should 
maintain their +2 for only Tuskar-UI.  Individuals who make significant review 
contributions to Horizon will certainly be considered for Horizon core in time. 
 I agree with Gabriel's suggestion of adding Horizon Core to tuskar-UI core.  
The idea being that Horizon core is looking for compatibility with Horizon 
initially and working toward a deeper understanding of the Tuskar-UI code base. 
 This will help ensure the integration process goes as smoothly as possible 
when Tuskar/TripleO comes out of incubation.

I look forward to being able to merge the two code bases, but I don't think the 
time is right yet and Horizon should stick to only integrating code into 
OpenStack Dashboard that is out of incubation.  We've made exceptions in the 
past, and they tend to have unfortunate consequences.

-David



-Original Message-
From: Jiri Tomasek [mailto:jtoma...@redhat.com]
Sent: Thursday, December 19, 2013 4:40 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Horizon and Tuskar-UI codebase merge

On 12/19/2013 08:58 AM, Matthias Runge wrote:

On 12/18/2013 10:33 PM, Gabriel Hurley wrote:


Adding developers to Horizon Core just for the purpose of reviewing
an incubated umbrella project is not the right way to do things at
all.  If my proposal of two separate groups having the +2 power in
Gerrit isn't technically feasible then a new group should be created
for management of umbrella projects.

Yes, I totally agree.

Having two separate projects with separate cores should be possible
under the umbrella of a program.

Tuskar differs somewhat from other projects to be included in horizon,
because other projects contributed a view on their specific feature.
Tuskar provides an additional dashboard and is talking with several apis
below. It's a something like a separate dashboard to be merged here.

When having both under the horizon program umbrella, my concern is that
both projects wouldn't be coupled as tightly as I would like.

Esp. I'd love to see an automatic merge of horizon commits to a
(combined) tuskar and horizon repository, thus making sure, tuskar will
work in a fresh (updated) horizon environment.

Please correct me if I am wrong, but I think this is not an issue.
Currently Tuskar-UI is run from a Horizon fork. In the local Horizon fork we
create a symlink to the local tuskar-ui clone, and to run Horizon with
Tuskar-UI we simply start the Horizon server. This means that Tuskar-UI runs
on the latest version of Horizon (if you pull regularly, of course).


Matthias

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Jirka


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Ladislav Smola
May I propose we keep the conversation Icehouse related? I don't think we
can make any sort of locking mechanism in I.

Though it would be worth creating some wiki page that would present it all
in some consistent manner. I am kind of lost in these emails. :-)

So, what do you think are the biggest issues for the Icehouse tasks we have?

1. GET operations?
I don't think we need to be atomic here. We basically join resources from
multiple APIs together. I think it's perfectly fine that something may be
deleted in the process. Even right now we join together only things that
exist, and we can handle it when something is not there. There is no need
for locking or retrying here AFAIK.


2. Heat stack create, update
This is locked for the duration of the operation, so nobody can mess with
it while it is updating or creating. Once we pack all the operations that
are now done on the side into this, we should be alright, and that should
be doable in I. So we should push towards this, rather than building some
temporary locking solution in Tuskar-API.


3. Reservation of resources
As we can deploy only one stack now, I think multiple users shouldn't be a
problem there. If somebody deletes resources from the 'free pool' while a
deploy is in progress, it will fail with 'Not enough free resources', and I
guess that is fine.
Also, I'm not sure how it works now, but it should be possible to deploy
smartly, so the stack keeps working even with a smaller amount of
resources. Then we would just heat stack-update with the numbers it ended
up with, and it would switch to OK status without changing anything.

So, are there any other critical sections you see?

I know we did this the wrong way in the previous Tuskar-API, and I think we
are avoiding that now. And we will avoid it in the future, by simply not
doing this kind of thing until there is a proper way to do it.


Thanks,
Ladislav


On 12/20/2013 10:13 AM, Radomir Dopieralski wrote:

On 20/12/13 00:17, Jay Pipes wrote:

On 12/19/2013 04:55 AM, Radomir Dopieralski wrote:

On 14/12/13 16:51, Jay Pipes wrote:

[snip]


Instead of focusing on locking issues -- which I agree are very
important in the virtualized side of things where resources are
"thinner" -- I believe that in the bare-metal world, a more useful focus
would be to ensure that the Tuskar API service treats related group
operations (like "deploy an undercloud on these nodes") in a way that
can handle failures in a graceful and/or atomic way.

Atomicity of operations can be achieved by introducing critical sections.
You basically have two ways of doing that, optimistic and pessimistic.
Pessimistic critical section is implemented with a locking mechanism
that prevents all other processes from entering the critical section
until it is finished.

I'm familiar with the traditional non-distributed software concept of a
mutex (or in Windows world, a critical section). But we aren't dealing
with traditional non-distributed software here. We're dealing with
highly distributed software where components involved in the
"transaction" may not be running on the same host or have much awareness
of each other at all.

Yes, that is precisely why you need to have a single point where they
can check if they are not stepping on each other's toes. If you don't,
you get race conditions and non-deterministic behavior. The only
difference with traditional, non-distributed software is that since the
components involved are communicating over a, relatively slow, network,
you have a much, much greater chance of actually having a conflict.
Scaling the whole thing to hundreds of nodes practically guarantees trouble.


And, in any case (see below), I don't think that this is a problem that
needs to be solved in Tuskar.


Perhaps you have some other way of making them atomic that I can't
think of?

I should not have used the term atomic above. I actually do not think
that the things that Tuskar/Ironic does should be viewed as an atomic
operation. More below.

OK, no operations performed by Tuskar need to be atomic, noted.


For example, if the construction or installation of one compute worker
failed, adding some retry or retry-after-wait-for-event logic would be
more useful than trying to put locks in a bunch of places to prevent
multiple sysadmins from trying to deploy on the same bare-metal nodes
(since it's just not gonna happen in the real world, and IMO, if it did
happen, the sysadmins/deployers should be punished and have to clean up
their own mess ;)

I don't see why they should be punished, if the UI was assuring them
that they are doing exactly the thing that they wanted to do, at every
step, and in the end it did something completely different, without any
warning. If anyone deserves punishment in such a situation, it's the
programmers who wrote the UI in such a way.

The issue I am getting at is that, in the real world, the problem of
multiple users of Tuskar attempting to deploy an undercloud on the exact
same set of bare metal machines is just not goin

Re: [openstack-dev] [Ceilometer][Oslo] Consuming Notifications in Batches

2013-12-20 Thread Nadya Privalova
Hi John,

Your ideas look very interesting to me. As I understand it, notification
messages will be kept in the MQ for some time (while the batch basket is
being filled), right? I'm concerned about the additional load that this
will put on the MQ (Rabbit).
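Just to make sure I understand the proposal, the batching loop would
conceptually be something like this, right (purely illustrative, not the
oslo.messaging executor API)?

    import time

    def consume_in_batches(fetch_message, dispatch_batch,
                           batch_size=100, batch_timeout=1.0):
        # Collect up to batch_size notifications, or whatever arrives
        # within batch_timeout seconds, then hand the whole batch to the
        # dispatcher; messages stay unacknowledged on the queue meanwhile.
        batch = []
        deadline = time.time() + batch_timeout
        while len(batch) < batch_size:
            remaining = deadline - time.time()
            if remaining <= 0:
                break
            msg = fetch_message(timeout=remaining)  # returns None on timeout
            if msg is None:
                break
            batch.append(msg)
        if batch:
            # The dispatcher returns the notifications that need requeueing,
            # which is the error-handling question in point 3.
            return dispatch_batch(batch)
        return []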

Thanks,
Nadya


On Fri, Dec 20, 2013 at 3:31 AM, Herndon, John Luke wrote:

> Hi Folks,
>
> The Rackspace-HP team has been putting a lot of effort into performance
> testing event collection in the ceilometer storage drivers[0]. Based on
> some results of this testing, we would like to support batch consumption
> of notifications, as it will greatly improve insertion performance. Batch
> consumption in this case means waiting for a certain number of
> notifications to arrive before sending to the storage
> driver.
>
> I'd like to get feedback from the community about this feature, and how we
> are planning to implement it. Here is what I’m currently thinking:
>
> 1) This seems to fit well into oslo.messaging - batching may be a feature
> that other projects will find useful. After reviewing the changes that
> sileht has been working on in oslo.messaging, I think the right way to
> start off is to create a new executor that builds up a batch of
> notifications, and sends the batch to the dispatcher. We’d also add a
> timeout, so if a certain amount of time passes and the batch isn’t filled
> up, the notifications will be dispatched anyway. I’ve started a
> blueprint for this change and am filling in the details as I go along [1].
>
> 2) In ceilometer, initialize the notification listener with the batch
> executor instead of the eventlet executor (this should probably be
> configurable)[2]. We can then send the entire batch of notifications to
> the storage driver to be processed as events, while maintaining the
> current method for converting notifications into samples.
>
> 3) Error handling becomes more difficult. The executor needs to know if
> any of the notifications should be requeued. I think the right way to
> solve this is to return a list of notifications to requeue from the
> handler. Any better ideas?
>
> Is this the right approach to take? I'm not an oslo.messaging expert, so
> if there is a proper way to implement this change, I'm all ears!
>
> Thanks, happy holidays!
> -john
>
> 0: https://etherpad.openstack.org/p/ceilometer-data-store-scale-testing
> 1:
> https://blueprints.launchpad.net/oslo.messaging/+spec/bulk-consume-messages
> 2: https://blueprints.launchpad.net/ceilometer/+spec/use-bulk-notification
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Radomir Dopieralski
On 20/12/13 12:25, Ladislav Smola wrote:
> May I propose we keep the conversation Icehouse related. I don't think
> we can make any sort of locking
> mechanism in I.

By getting rid of tuskar-api and putting all the logic higher up, we are
forfeiting the ability to ever create it. That worries me. I hate to
remove potential solutions from my toolbox, even when the problems they
solve may as well never materialize.

> Though it would be worth of creating some WikiPage that would present it
> whole in some consistent
> manner. I am kind of lost in these emails. :-)
> 
> So, what do you thing are the biggest issues for the Icehouse tasks we
> have?
> 
> 1. GET operations?
> I don't think we need to be atomic here. We basically join resources
> from multiple APIs together. I think
> it's perfectly fine that something will be deleted in the process. Even
> right now we join together only things
> that exists. And we can handle when something is not. There is no need
> of locking or retrying here AFAIK.
> 2. Heat stack create, update
> This is locked in the process of the operation, so nobody can mess with
> it while it is updating or creating.
> Once we will pack all operations that are now aside in this, we should
> be alright. And that should be doable in I.
> So we should push towards this, rather then building some temporary
> locking solution in Tuskar-API.
> 
> 3. Reservation of resources
> As we can deploy only one stack now, so I think it shouldn't be a
> problem with multiple users there. When
> somebody will delete the resources from 'free pool' in the process, it
> will fail with 'Not enough free resources'
> I guess that is fine.
> Also not sure how it's now, but it should be possible to deploy smartly,
> so the stack will be working even
> with smaller amount of resources. Then we would just heat stack-update
> with numbers it ended up with,
> and it would switch to OK status without changing anything.
> 
> So, are there any other critical sections you see?

It's hard for me to find critical sections in a system that doesn't
exist, is not documented and will be designed as we go. Perhaps you are
right and I am just panicking, and we won't have any such critical
sections, or can handle the ones we do without any need for
synchronization. You probably have a much better idea how the whole
system will look like. Even then, I think it still makes sense to keep
that door open an leave ourselves the possibility of implementing
locking/sessions/serialization/counters/any other synchronization if we
need them, unless there is a horrible cost involved. Perhaps I'm just
not aware of the cost?

As far as I know, Tuskar is going to have more than just GETs and Heat
stack operations. I seem to remember stuff like resource classes, roles,
node profiles, node discovery, etc. How will updates to those be handled
and how will they interact with the Heat stack updates? Will every
change trigger a heat stack update immediately and force a refresh for
all open tuskar-ui pages?

Every time we have a number of operations batched together -- such
as in any of those wizard dialogs, for which we've had so many
wireframes already and of which I expect to see more -- we will have a
critical section. That critical section doesn't begin when the "OK"
button is pressed; it starts when the dialog is first displayed, because
the user is making decisions based on the information that is presented
to her or him there. If by the time the user finishes the wizard and presses
OK the situation has changed, you risk doing something other than what
the user intended. Will we need to implement such interface elements,
and thus need synchronization mechanisms for them?
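
To illustrate the kind of mechanism I have in mind (purely a sketch, not a
proposal for any particular API -- the client calls and the 'version' field
are assumptions, nothing more), an optimistic check on submit could look like:

    class StaleDataError(Exception):
        pass

    def submit_wizard(api, resource_id, changes, seen_version):
        """Apply the wizard's changes only if nothing changed since the
        dialog was first displayed (optimistic concurrency)."""
        current = api.get(resource_id)          # hypothetical client call
        if current['version'] != seen_version:
            # Someone else modified the resource while the user was deciding.
            raise StaleDataError('please reload the dialog and try again')
        api.update(resource_id, changes, expected_version=seen_version)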

I simply don't know. And when I'm not sure, I like to have an option.

As I said, perhaps I just don't understand that there is a large cost
involved in keeping the logic inside tuskar-api instead of somewhere
else. Perhaps that cost is significant enough to justify this difficult
decision and limit our options. In the discussion so far I haven't seen
anything like that pointed out, but maybe it's just so obvious that
everybody takes it for granted and I'm the only one who can't see it. In
that case I will rest my case.
-- 
Radomir Dopieralski



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Next two weekly meetings cancelled ?

2013-12-20 Thread Nikolay Starodubtsev
+1



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1


2013/12/20 Dina Belova 

> +1
>
>
> On Fri, Dec 20, 2013 at 1:57 PM, Sergey Lukjanov 
> wrote:
>
>> +1 for Fridays 1500 UTC
>>
>>
>> On Fri, Dec 20, 2013 at 1:26 PM, Sylvain Bauza wrote:
>>
>>>  Well, 2000UTC means midnight for you, guys. Not really safe for family
>>> concerns :-)
>>> Maybe you were meaning 2000 local time, so 1600 UTC ?
>>>
>>> I can propose Fridays 1500 UTC (so 19:00 your time ;-)) as an
>>> alternative (both meeting channels are free this time)
>>>
>>> Let's vote : +1 for Fridays 1500 UTC.
>>>
>>> Le 20/12/2013 05:43, Nikolay Starodubtsev a écrit :
>>>
>>> I'm on holidays till 9th January. And I don't think I'll have an
>>> internet access all the time on holidays.
>>> p.s. By the way, I'll prefer Friday's evenings as new meeting time if it
>>> is available - it only one day when I don't do my BJJ classes. Or we can
>>> move to 1900-2000UTC. it looks fine for me. Or move to early Europe morning.
>>>
>>>
>>>
>>> Nikolay Starodubtsev
>>>
>>> Software Engineer
>>>
>>> Mirantis Inc.
>>>
>>>
>>>  Skype: dark_harlequine1
>>>
>>>
>>> 2013/12/19 Sylvain Bauza 
>>>
  Le 19/12/2013 13:57, Dina Belova a écrit :

 I have Christmas holidays till 12th January... So I don't really know I
 if I will be available 6th Jan.


  Oh ok. Who else are still on vacation these times ?
 We can do our next meeting on 12th Jan, but I'm concerned with the
 delivery of Climate 0.1 which would be one week after.

 -Sylvain

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Sincerely yours,
>> Sergey Lukjanov
>> Savanna Technical Lead
>> Mirantis Inc.
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
>
> Best regards,
>
> Dina Belova
>
> Software Engineer
>
> Mirantis Inc.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Next two weekly meetings cancelled ?

2013-12-20 Thread Sylvain Bauza
Once that's agreed, let's go back to the initial question: when do we
resume weekly meetings?


Is Friday 10th Jan 1500 UTC OK for you ?

Note : Other committers, YorikSar, jd__, scroiset and f_rossigneux 
haven't yet replied.

-Sylvain

Le 20/12/2013 13:28, Nikolay Starodubtsev a écrit :

+1


Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1



2013/12/20 Dina Belova


+1


On Fri, Dec 20, 2013 at 1:57 PM, Sergey Lukjanov wrote:

+1 for Fridays 1500 UTC


On Fri, Dec 20, 2013 at 1:26 PM, Sylvain Bauza wrote:

Well, 2000UTC means midnight for you, guys. Not really
safe for family concerns :-)
Maybe you were meaning 2000 local time, so 1600 UTC ?

I can propose Fridays 1500 UTC (so 19:00 your time ;-)) as
an alternative (both meeting channels are free this time)

Let's vote : +1 for Fridays 1500 UTC.

Le 20/12/2013 05:43, Nikolay Starodubtsev a écrit :

I'm on holidays till 9th January. And I don't think I'll
have an internet access all the time on holidays.
p.s. By the way, I'll prefer Friday's evenings as new
meeting time if it is available - it only one day when I
don't do my BJJ classes. Or we can move to 1900-2000UTC.
it looks fine for me. Or move to early Europe morning.


Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1



2013/12/19 Sylvain Bauza

Le 19/12/2013 13:57, Dina Belova a écrit :

I have Christmas holidays till 12th January... So I
don't really know I if I will be available 6th Jan.



Oh ok. Who else are still on vacation these times ?
We can do our next meeting on 12th Jan, but I'm
concerned with the delivery of Climate 0.1 which
would be one week after.

-Sylvain

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org


http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org  

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,

Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 


Best regards,

Dina Belova

Software Engineer

Mirantis Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] team meeting minutes Dec 19

2013-12-20 Thread Sergey Lukjanov
Thanks everyone who have joined Savanna meeting.

Here are the logs from the meeting:

Minutes: 
savanna.2013-12-19-18.05.html
Log: 
savanna.2013-12-19-18.05.log.html

P.S. we are canceling our next two weekly meetings - Dec 26 and Jan 2.

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Next two weekly meetings cancelled ?

2013-12-20 Thread Nikolay Starodubtsev
It's okay for me.



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1


2013/12/20 Sylvain Bauza 

>  Once that agreed, let's go back to the initial question : when do we
> resume weekly meetings ?
>
> Is Friday 10th Jan 1500 UTC OK for you ?
>
> Note : Other committers, YorikSar, jd__, scroiset and f_rossigneux haven't
> yet replied.
> -Sylvain
>
> Le 20/12/2013 13:28, Nikolay Starodubtsev a écrit :
>
> +1
>
>
>
> Nikolay Starodubtsev
>
> Software Engineer
>
> Mirantis Inc.
>
>
>  Skype: dark_harlequine1
>
>
> 2013/12/20 Dina Belova 
>
>> +1
>>
>>
>> On Fri, Dec 20, 2013 at 1:57 PM, Sergey Lukjanov 
>> wrote:
>>
>>> +1 for Fridays 1500 UTC
>>>
>>>
>>> On Fri, Dec 20, 2013 at 1:26 PM, Sylvain Bauza 
>>> wrote:
>>>
  Well, 2000UTC means midnight for you, guys. Not really safe for
 family concerns :-)
 Maybe you were meaning 2000 local time, so 1600 UTC ?

 I can propose Fridays 1500 UTC (so 19:00 your time ;-)) as an
 alternative (both meeting channels are free this time)

 Let's vote : +1 for Fridays 1500 UTC.

 Le 20/12/2013 05:43, Nikolay Starodubtsev a écrit :

 I'm on holidays till 9th January. And I don't think I'll have an
 internet access all the time on holidays.
 p.s. By the way, I'll prefer Friday's evenings as new meeting time if
 it is available - it only one day when I don't do my BJJ classes. Or we can
 move to 1900-2000UTC. it looks fine for me. Or move to early Europe 
 morning.



 Nikolay Starodubtsev

 Software Engineer

 Mirantis Inc.


  Skype: dark_harlequine1


 2013/12/19 Sylvain Bauza 

>  Le 19/12/2013 13:57, Dina Belova a écrit :
>
> I have Christmas holidays till 12th January... So I don't really know
> I if I will be available 6th Jan.
>
>
>  Oh ok. Who else are still on vacation these times ?
> We can do our next meeting on 12th Jan, but I'm concerned with the
> delivery of Climate 0.1 which would be one week after.
>
> -Sylvain
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>>   --
>>>  Sincerely yours,
>>> Sergey Lukjanov
>>> Savanna Technical Lead
>>> Mirantis Inc.
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>>  --
>>
>> Best regards,
>>
>> Dina Belova
>>
>> Software Engineer
>>
>> Mirantis Inc.
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [trove] Dropping connectivity from guestagent to Trove back-end

2013-12-20 Thread Denis Makogon
Good day, OpenStack DBaaS community.


I'd like to start a conversation about dropping connectivity between the
in-VM guestagent and the Trove back-end.

Since Trove has a conductor service which interacts with agents via the MQ
service, we could let it handle any operations that require the back-end and
are initiated by the guestagent.

Today the conductor handles instance status notifications and backup
status notifications, but the guest still has one more operation that
requires back-end connectivity - database root-enabled reporting [1]. Once
that is dealt with, we could finally drop the connection [2].

Since not every database has a root entity (some have no ACL at all), it
would be incorrect to report root enablement on the server side, because the
server side (trove-taskmanager) should stay as generic as possible.

My first suggestion was to extend the conductor API [3] to let the conductor
write the report to the Trove back-end. Until Trove supports multiple
datastore (database) types, the current patch works fine [4], but once Trove
delivers, say, a database without ACLs, it would be confusing if, after
instance provisioning, the user finds out that root was somehow enabled even
though the database has no ACLs at all.
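
To make the suggestion concrete, a rough sketch of such a conductor-side
handler might look like the following (the cast, method and model names are
assumptions for illustration, not Trove's actual API):

    # The guest would cast a small RPC to the conductor instead of writing to
    # the Trove database itself, e.g.:
    #     self._cast('report_root', instance_id=CONF.guest_id, user='root')
    # and the conductor would persist the marker on the guest's behalf:

    class ConductorManager(object):

        def __init__(self, root_history_model):
            # whatever model the root extension already uses for the marker
            self.root_history = root_history_model

        def report_root(self, context, instance_id, user):
            """Record that root was enabled for the given instance."""
            self.root_history.create(context, instance_id, user)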

My point is that the Trove conductor should handle every database-specific
(datastore-specific, in Trove terms) operation that requires a back-end
connection, while the Trove server side (taskmanager) stays generic and
performs preparation tasks that are independent of the datastore type.

 [1] https://github.com/openstack/trove/blob/master/bin/trove-guestagent#L52

[2] https://bugs.launchpad.net/trove/+bug/1257489

[3] https://review.openstack.org/#/c/59410/5

[4] https://review.openstack.org/#/c/59410/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Radomir Dopieralski
On 20/12/13 13:04, Radomir Dopieralski wrote:

[snip]

I have just learned that tuskar-api stays, so my whole ranting is just a
waste of all our time. Sorry about that.

-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA][Tempest][Ceilometer] Pollster's testing strategy

2013-12-20 Thread Nadya Privalova
Hi guys!

For the QA and Tempest folks, a brief description of Ceilometer's polling:
Ceilometer has several agents that, once per 'interval', ask Nova, Glance and
other services for their metrics. We need to test this functionality. The
interval is defined in the pipeline.yaml file and is 10 minutes by default.

I'd like to discuss the strategy for testing pollsters in Tempest. Right now
we would need to wait 10 minutes to verify that the pollsters work correctly,
which, as I understand it, is not acceptable.
I see the following solutions here:
1. Add something like 'if tempest then interval = 5 sec'. This change would
have to go into the gating setup, AFAIU.
2. Add additional functionality to Ceilometer: run_all_pollsters_on_demand.
I think this may be useful beyond Tempest.

All your comments will be highly appreciated,

Thanks,
Nadya
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

2013-12-20 Thread Alan Kavanagh
Cheers Gao. My only comment here is about how complex this gets and how many
attributes we expect the scheduler to take as input. The more variables you
schedule on, the more complex the beast becomes, and from experience you end up
having cross dependencies.

I can see power being an item of concern, but don't you think we could solve
that one with the Nova Cells parent being aware of the power consumption costs
at "time-T" and then just forwarding the Nova API call to the appropriate child
which has, say, the lowest power consumption cost?

Also, on a priority scale, some DC providers (speaking as one of the DC
providers here) will not have power cost in their top, say, five criteria for
scheduling. So I agree it's definitely interesting, but if you consider
scheduling inside a large DC in the same geographical region and DC site,
scheduling for power consumption becomes null and void. ;-(

BR
Alan



From: Gao, Fengqian [mailto:fengqian@intel.com]
Sent: December-19-13 11:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

Yes, Alan, you got me.
By providing power/temperature to the scheduler and setting a threshold or
different weights, the scheduler can boot the VM on the most suitable node.

Thanks

--fengqian

From: Alan Kavanagh [mailto:alan.kavan...@ericsson.com]
Sent: Friday, December 20, 2013 11:58 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

Cheers Gao

It definitely makes sense to collect additional metrics such as power and
temperature, and make those available for whatever selective decisions you want
to take. However, I am just wondering if you could realistically feed those
metrics in as variables for scheduling; this is the main part I feel is
questionable. I assume you would use temperature and/or power etc. to gauge
whether you want to schedule another VM on a given node once a given temperature
threshold is reached. Is this the main case you are thinking of, Gao?

Alan

From: Gao, Fengqian [mailto:fengqian@intel.com]
Sent: December-18-13 10:23 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

Hi, Alan,
I think it is better for the nova-scheduler if we gather more information. In
today's DCs, power and temperature are very important factors to consider.
CPU/memory utilization is not enough to describe a node's status; power and
inlet temperature should also be taken into account.

Best Wishes

--fengqian

From: Alan Kavanagh [mailto:alan.kavan...@ericsson.com]
Sent: Thursday, December 19, 2013 2:14 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

Hi Gao

Why do you think it would be important to have these two additional metrics,
"power and temperature", for Nova to base scheduling on?

Alan

From: Gao, Fengqian [mailto:fengqian@intel.com]
Sent: December-18-13 1:00 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

Hi, all,
I am planning to extend the bp
https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling with
power and temperature. In other words, power and temperature can be collected
and used by the nova-scheduler just like CPU utilization.
I have a question here. As you know, IPMI is used to get power and temperature,
and baremetal implements IPMI functions in Nova. But the baremetal driver is
being split out of Nova, so if I want to change something in the IPMI code,
which part should I choose now: Nova or Ironic?


Best wishes

--fengqian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Imre Farkas

On 12/20/2013 12:25 PM, Ladislav Smola wrote:

2. Heat stack create, update
This is locked in the process of the operation, so nobody can mess with
it while it is updating or creating.
Once we will pack all operations that are now aside in this, we should
be alright. And that should be doable in I.
So we should push towards this, rather then building some temporary
locking solution in Tuskar-API.


It's not just an issue of locking: the goal of Tuskar's Provision
button is not only a single stack creation. After Heat's job is done,
the overcloud needs to be properly configured: Keystone needs to be
initialized, the services need to be registered, etc. I don't think
Horizon wants to add a background worker to handle such operations.


Imre

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Ladislav Smola

On 12/20/2013 01:04 PM, Radomir Dopieralski wrote:

On 20/12/13 12:25, Ladislav Smola wrote:

May I propose we keep the conversation Icehouse related. I don't think
we can make any sort of locking
mechanism in I.

By getting rid of tuskar-api and putting all the logic higher up, we are
forfeiting the ability to ever create it. That worries me. I hate to
remove potential solutions from my toolbox, even when the problems they
solve may as well never materialize.



Well, I expect that there will be decisions about whether we should hold
a feature because it's not ready, or make some temporary hack that makes
it work.

I am just a little worried about having temporary hacks in a stable version,
because then the update to the next version will be hard, and we will most
likely have to support these hacks for backwards compatibility.

I wouldn't say we are forfeiting the ability to create it. I would say we
are forfeiting the ability to create hacked-together temporary solutions
that might go against how upstream wants to do it. That is a good thing, I
think. :-)



Though it would be worth of creating some WikiPage that would present it
whole in some consistent
manner. I am kind of lost in these emails. :-)

So, what do you thing are the biggest issues for the Icehouse tasks we
have?

1. GET operations?
I don't think we need to be atomic here. We basically join resources
from multiple APIs together. I think
it's perfectly fine that something will be deleted in the process. Even
right now we join together only things
that exists. And we can handle when something is not. There is no need
of locking or retrying here AFAIK.
2. Heat stack create, update
This is locked in the process of the operation, so nobody can mess with
it while it is updating or creating.
Once we will pack all operations that are now aside in this, we should
be alright. And that should be doable in I.
So we should push towards this, rather then building some temporary
locking solution in Tuskar-API.

3. Reservation of resources
As we can deploy only one stack now, so I think it shouldn't be a
problem with multiple users there. When
somebody will delete the resources from 'free pool' in the process, it
will fail with 'Not enough free resources'
I guess that is fine.
Also not sure how it's now, but it should be possible to deploy smartly,
so the stack will be working even
with smaller amount of resources. Then we would just heat stack-update
with numbers it ended up with,
and it would switch to OK status without changing anything.

So, are there any other critical sections you see?

It's hard for me to find critical sections in a system that doesn't
exist, is not documented and will be designed as we go. Perhaps you are
right and I am just panicking, and we won't have any such critical
sections, or can handle the ones we do without any need for
synchronization. You probably have a much better idea how the whole
system will look like. Even then, I think it still makes sense to keep
that door open an leave ourselves the possibility of implementing
locking/sessions/serialization/counters/any other synchronization if we
need them, unless there is a horrible cost involved. Perhaps I'm just
not aware of the cost?


Well yeah I guess for some J features, we might need to do
something like this. I have no idea right now. So the doors are
always open. :-)



As far as I know, Tuskar is going to have more than just GETs and Heat
stack operations. I seem to remember stuff like resource classes, roles,
node profiles, node discovery, etc. How will updates to those be handled
and how will they interact with the Heat stack updates? Will every
change trigger a heat stack update immediately and force a refresh for
all open tuskar-ui pages?


resource classes: it's definitely J, and we are not yet sure what it will
look like.

node profiles: it's a Nova flavor in I, and it will stay that way because
of the scheduler. From the start we will have just one flavor, and even if
we had more flavors, I don't see issues here.
This heavily relies on how we are going to build the Heat template, but
adding flavors should be separate from creating or updating a Heat template.

creating and updating the Heat template: it seems we will be doing this in
Tuskar-API -- do you see any potential problems here?

node discovery: it will be in Ironic and should also be a separate
operation, so I don't see problems here.



Every time we will have a number of operations batched together -- such
as in any of those wizard dialogs, for which we've had so many
wireframes already, and which I expect to see more -- we will have a
critical section. That critical section doesn't begin when the "OK"
button is pressed, it starts when the dialog is first displayed, because
the user is making decisions based on the information that is presented
to her or him there. If by the time he finished the wizard and presses
OK the situation has changed, you are risking doing something else than
the user

Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Ladislav Smola

On 12/20/2013 02:06 PM, Radomir Dopieralski wrote:

On 20/12/13 13:04, Radomir Dopieralski wrote:

[snip]

I have just learned that tuskar-api stays, so my whole ranting is just a
waste of all our time. Sorry about that.



Hehe. :-)

Ok after the last meeting we are ready to say what goes to Tuskar-API.

Who wants to start that thread? :-)



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Ladislav Smola

On 12/20/2013 02:37 PM, Imre Farkas wrote:

On 12/20/2013 12:25 PM, Ladislav Smola wrote:

2. Heat stack create, update
This is locked in the process of the operation, so nobody can mess with
it while it is updating or creating.
Once we will pack all operations that are now aside in this, we should
be alright. And that should be doable in I.
So we should push towards this, rather then building some temporary
locking solution in Tuskar-API.


It's not the issue of locking, but the goal of Tuskar with the 
Provision button is not only a single stack creation. After Heat's job 
is done, the overcloud needs to be properly configured: Keystone needs 
to be initialized, the services need to be registered, etc. I don't 
think Horizon wants to add a background worker to handle such operations.




Yes, that is a valid point. I hope we will be able to pack it all into the
Heat template in I. This could be the way:
https://blueprints.launchpad.net/heat/+spec/hot-software-config

It seems the consensus is: it belongs in Heat; we are just not able to do it
that way yet.

So the question is whether we should try to solve it in Tuskar-API
temporarily, or rather focus on Heat.




Imre

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Next two weekly meetings cancelled ?

2013-12-20 Thread Sergey Lukjanov
It's ok for me too.

On Friday, December 20, 2013, Nikolay Starodubtsev wrote:

> It's okay for me.
>
>
>
> Nikolay Starodubtsev
>
> Software Engineer
>
> Mirantis Inc.
>
>
> Skype: dark_harlequine1
>
>
> 2013/12/20 Sylvain Bauza 
>
>  Once that agreed, let's go back to the initial question : when do we
> resume weekly meetings ?
>
> Is Friday 10th Jan 1500 UTC OK for you ?
>
> Note : Other committers, YorikSar, jd__, scroiset and f_rossigneux haven't
> yet replied.
> -Sylvain
>
> Le 20/12/2013 13:28, Nikolay Starodubtsev a écrit :
>
> +1
>
>
>
> Nikolay Starodubtsev
>
> Software Engineer
>
> Mirantis Inc.
>
>
>  Skype: dark_harlequine1
>
>
> 2013/12/20 Dina Belova 
>
> +1
>
>
> On Fri, Dec 20, 2013 at 1:57 PM, Sergey Lukjanov 
> wrote:
>
> +1 for Fridays 1500 UTC
>
>
> On Fri, Dec 20, 2013 at 1:26 PM, Sylvain Bauza wrote:
>
>  Well, 2000UTC means midnight for you, guys. Not really safe for family
> concerns :-)
> Maybe you were meaning 2000 local time, so 1600 UTC ?
>
> I can propose Fridays 1500 UTC (so 19:00 your time ;-)) as an alternative
> (both meeting channels are free this time)
>
> Let's vote : +1 for Fridays 1500 UTC.
>
> Le 20/12/2013 05:43, Nikolay Starodubtsev a écrit :
>
> I'm on holidays
>
>

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][Oslo] Consuming Notifications in Batches

2013-12-20 Thread Herndon, John Luke
Hi Nadya,

Yep, that’s right, the notifications stick around on the server until they
are acknowledged so there is extra overhead involved. I only have experience
with rabbitmq, so I can’t speak for other transports, but we have used this
strategy internally for other purposes, and have reached > 10k
messages/second on a single consumer using batch message consumption (i.e.,
consume N messages, process them, then ack all N at once). We’ve found that
being able to acknowledge the entire batch of messages at a time leads to a
huge performance increase. This is another motivating factor for moving
towards batches. But to your point, making this configurable is the right
way to go just in case other transports don’t react as well.
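
To make the batching idea concrete, here is a minimal standalone sketch of a
consume-N-then-ack loop. All of the names -- listener.poll(),
storage.record_events(), msg.ack(), msg.requeue() -- are illustrative
stand-ins, not oslo.messaging or Ceilometer APIs:

    import time

    def consume_in_batches(listener, storage, batch_size=100, timeout=1.0):
        """Collect up to batch_size messages (or until timeout), then bulk-insert."""
        while True:
            batch = []
            deadline = time.time() + timeout
            while len(batch) < batch_size:
                remaining = deadline - time.time()
                if remaining <= 0:
                    break
                # hypothetical helper: return one message, or None on timeout
                msg = listener.poll(timeout=remaining)
                if msg is not None:
                    batch.append(msg)
            if not batch:
                continue
            try:
                # one bulk write per batch instead of one write per message
                storage.record_events(batch)
            except Exception:
                for msg in batch:
                    msg.requeue()  # redeliver the whole batch on failure
            else:
                for msg in batch:
                    msg.ack()  # on AMQP this can map to a single basic.ack with multiple=true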

Thanks,
-john


From:  Nadya Privalova 
Reply-To:  "OpenStack Development Mailing List (not for usage questions)"

Date:  Fri, 20 Dec 2013 15:25:55 +0400
To:  "OpenStack Development Mailing List (not for usage questions)"

Subject:  Re: [openstack-dev] [Ceilometer][Oslo] Consuming Notifications in
Batches

Hi John,

Your ideas look very interesting to me. As I understand it, notification
messages will be kept in the MQ for some time (while the batch basket is being
filled), right? I'm concerned about the additional load that this will put on
the MQ (Rabbit).

Thanks,
Nadya


On Fri, Dec 20, 2013 at 3:31 AM, Herndon, John Luke 
wrote:
> Hi Folks,
> 
> The Rackspace-HP team has been putting a lot of effort into performance
> testing event collection in the ceilometer storage drivers[0]. Based on
> some results of this testing, we would like to support batch consumption
> of notifications, as it will greatly improve insertion performance. Batch
> consumption in this case means waiting for a certain number of
> notifications to arrive before sending to the storage
> driver.
> 
> I'd like to get feedback from the community about this feature, and how we
> are planning to implement it. Here is what I’m currently thinking:
> 
> 1) This seems to fit well into oslo.messaging - batching may be a feature
> that other projects will find useful. After reviewing the changes that
> sileht has been working on in oslo.messaging, I think the right way to
> start off is to create a new executor that builds up a batch of
> notifications, and sends the batch to the dispatcher. We’d also add a
> timeout, so if a certain amount of time passes and the batch isn’t filled
> up, the notifications will be dispatched anyway. I’ve started a
> blueprint for this change and am filling in the details as I go along [1].
> 
> 2) In ceilometer, initialize the notification listener with the batch
> executor instead of the eventlet executor (this should probably be
> configurable)[2]. We can then send the entire batch of notifications to
> the storage driver to be processed as events, while maintaining the
> current method for converting notifications into samples.
> 
> 3) Error handling becomes more difficult. The executor needs to know if
> any of the notifications should be requeued. I think the right way to
> solve this is to return a list of notifications to requeue from the
> handler. Any better ideas?
> 
> Is this the right approach to take? I'm not an oslo.messaging expert, so
> if there is a proper way to implement this change, I'm all ears!
> 
> Thanks, happy holidays!
> -john
> 
> 0: https://etherpad.openstack.org/p/ceilometer-data-store-scale-testing
> 1:
> https://blueprints.launchpad.net/oslo.messaging/+spec/bulk-consume-messages
> 2: https://blueprints.launchpad.net/ceilometer/+spec/use-bulk-notification
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___ OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Process for proposing patches attached to launchpad bugs?

2013-12-20 Thread Dolph Mathews
In the past, I've been able to get authors of bug fixes attached to
Launchpad bugs to sign the CLA and submit the patch through gerrit...
although, in one case it took quite a bit of time (and thankfully it wasn't
a critical fix or anything).

This scenario just came up again (example: [1]), so I'm asking
preemptively... what if the author is unwilling / unable to sign the CLA
and propose the patch through gerrit, or it's a critical bug fix and waiting on an
author to go through the CLA process is undesirable for the community?
Obviously that's a bit of a fail on our part, but what's the most
appropriate & expedient way to handle it?

Can we propose the patch to gerrit ourselves?

If so, who should appear as the --author of the commit? Who should appear
as Co-Authored-By, especially when the committer helps the patch
evolve further in review?

Alternatively, am I going about this all wrong?

Thanks!

[1]: https://bugs.launchpad.net/keystone/+bug/1198171/comments/8

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Dropping connectivity from guestagent to Trove back-end

2013-12-20 Thread Tim Simpson
Hi Denis,

The plan from the start with Conductor has been to remove any guest connections 
to the database. So any lingering ones are omissions which should be dealt with.

>> Since not each database have root entity (not even ACL at all) it would be 
>> incorrect to report about root enabling on server-side because 
>> server-side(trove-taskmanager) should stay common as it possible.

I agree that in the case of the root call Conductor should have another RPC 
method that gets called by the guest to inform it that the root entity was set.

I also agree that any code that can stay as common as possible between 
datastores should. However I don't agree that trove-taskmanager (by which I 
assume you mean the daemon) has to only be for common functionality.

Thanks,

Tim


From: Denis Makogon [dmako...@mirantis.com]
Sent: Friday, December 20, 2013 7:04 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [trove] Dropping connectivity from guestagent to Trove 
back-end


Good day, OpenStack DBaaS community.



I'd like to start conversation about dropping connectivity from In-VM 
guestagent and Trove back-end.

Since Trove has conductor service which interacts with agents via MQ 
service, we could let it deal with any back-end required operations initialized 
by guestagent.

Now conductor deals with instance status notifications and backup status 
notifications. But guest still have one more operation which requires back-end 
connectivity – database root-enabled reporting [1]. After dealing with it we 
could finally drop connectivity [2].

Since not each database have root entity (not even ACL at all) it would be 
incorrect to report about root enabling on server-side because 
server-side(trove-taskmanager) should stay common as it possible.

My first suggestion was to extend conductor API [3] to let conductor write 
report to Trove back-end. Until Trove would reach state when it would support 
multiple datastore (databases) types current patch would work fine [4], but 
when Trove would deliver, suppose, Database (without ACL) it would be confusing 
when after instance provisioning user will find out that some how root was 
enabled, but Database doesn't have any ACL at all.

My point is that Trove Conductor must handle every database (datastore in 
terms of Trove) specific operations which are required back-end connection. And 
Trove server-side (taskmanager) must stay generic and perform preparation 
tasks, which are independent from datastore type.


[1] https://github.com/openstack/trove/blob/master/bin/trove-guestagent#L52

[2] https://bugs.launchpad.net/trove/+bug/1257489

[3] https://review.openstack.org/#/c/59410/5

[4] https://review.openstack.org/#/c/59410/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

2013-12-20 Thread Pradipta Banerjee
On 12/19/2013 12:30 AM, Devananda van der Veen wrote:
> On Tue, Dec 17, 2013 at 10:00 PM, Gao, Fengqian wrote:
>
> Hi, all,
>
> I am planning to extend bp
> https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling
> with power and temperature. In other words, power and temperature can be
> collected and used for nova-scheduler just as CPU utilization.
>
This is a good idea and has definite use cases where one might want to optimize
provisioning based on power consumption.
>
> I have a question here. As you know, IPMI is used to get power and
> temperature and baremetal implements IPMI functions in Nova. But baremetal
> driver is being split out of nova, so if I want to change something to the
> IPMI, which part should I choose now? Nova or Ironic?
>
>  
>
>  
> Hi!
>
> A few thoughts... Firstly, new features should be geared towards Ironic, not
> the nova baremetal driver as it will be deprecated soon
> (https://blueprints.launchpad.net/nova/+spec/deprecate-baremetal-driver). That
> being said, I actually don't think you want to use IPMI for what you're
> describing at all, but maybe I'm wrong.
>
> When scheduling VMs with Nova, in many cases there is already an agent running
> locally, eg. nova-compute, and this agent is already supplying information to
> the scheduler. I think this is where the facilities for gathering
> power/temperature/etc (eg, via lm-sensors) should be placed, and it can
> reported back to the scheduler along with other usage statistics.
+1

Using lm-sensors or equivalent seems better.
Have a look at the following blueprint
https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking
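
As a rough illustration of that kind of local collection (purely a sketch --
the reporting hook and field name at the end are assumptions, not Nova's
resource tracker API), temperatures can already be read from the kernel
without going through IPMI:

    import glob

    def read_temperatures():
        """Return {zone_name: degrees_celsius} from the kernel's thermal zones."""
        temps = {}
        for zone in glob.glob('/sys/class/thermal/thermal_zone*'):
            try:
                with open(zone + '/type') as f:
                    name = f.read().strip()
                with open(zone + '/temp') as f:
                    temps[name] = int(f.read().strip()) / 1000.0  # millidegrees C
            except (IOError, ValueError):
                continue
        return temps

    # e.g. fold the result into whatever extra resources dict the compute
    # agent already reports to the scheduler (hypothetical field name):
    # resources['temperature'] = read_temperatures()
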
>
> If you think there's a compelling reason to use Ironic for this instead of
> lm-sensors, please clarify.
>
> Cheers,
> Devananda
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
Regards,
Pradipta

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][nova] oslo common.service vs. screen and devstack

2013-12-20 Thread Doug Hellmann
On Fri, Dec 20, 2013 at 1:54 AM, Clint Byrum  wrote:

> Excerpts from Sean Dague's message of 2013-12-19 16:33:12 -0800:
> > So a few people had been reporting recently that unstack no longer stops
> > nova processes, which I only got around to looking at today. It turns
> > out the new common.service stack from oslo takes SIGHUP and treats it as
> > a restart. Which isn't wrong, but is new, and is incompatible with
> > screen (the way we use it). Because we use -X stuff, the resulting -X
> > quit sends SIGHUP to the child processes.
> >
> > So the question is, are we definitely in a state now where nova services
> > can and do want to support SIGHUP as restart?
> >
> > If so, is there interest in being able to disable that behavior at start
> > time, so we can continue with a screen based approach as well?
> >
> > If not, we'll need to figure out another way to approach the shutdown in
> > devstack. Which is fine, just work that wasn't expected.
> >
>
> Perhaps if the process is running in the foreground, as it does in
> devstack, it should still terminate on SIGHUP rather than restart.
>

It looks like the changes to ServiceLauncher.wait() that introduced this
behavior are related to
https://blueprints.launchpad.net/oslo/+spec/cfg-reload-config-files where
we wanted a service to be able to re-read its configuration files on a
signal. HUP seems appropriate for a production
 environment, but probably not for development.

I opened https://bugs.launchpad.net/oslo/+bug/1263122 to track the fix.

Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][nova] oslo common.service vs. screen and devstack

2013-12-20 Thread Sean Dague
On 12/20/2013 09:55 AM, Doug Hellmann wrote:
> 
> 
> 
> On Fri, Dec 20, 2013 at 1:54 AM, Clint Byrum  > wrote:
> 
> Excerpts from Sean Dague's message of 2013-12-19 16:33:12 -0800:
> > So a few people had been reporting recently that unstack no longer
> stops
> > nova processes, which I only got around to looking at today. It turns
> > out the new common.service stack from oslo takes SIGHUP and treats
> it as
> > a restart. Which isn't wrong, but is new, and is incompatible with
> > screen (the way we use it). Because we use -X stuff, the resulting -X
> > quit sends SIGHUP to the child processes.
> >
> > So the question is, are we definitely in a state now where nova
> services
> > can and do want to support SIGHUP as restart?
> >
> > If so, is there interest in being able to disable that behavior at
> start
> > time, so we can continue with a screen based approach as well?
> >
> > If not, we'll need to figure out another way to approach the
> shutdown in
> > devstack. Which is fine, just work that wasn't expected.
> >
> 
> Perhaps if the process is running in the foreground, as it does in
> devstack, it should still terminate on SIGHUP rather than restart.
> 
> 
> It looks like the changes to ServiceLauncher.wait() that introduced this
> behavior are related to
> https://blueprints.launchpad.net/oslo/+spec/cfg-reload-config-files
> where we wanted a service to be able to re-read its configuration files
> on a signal. HUP seems appropriate for a production
> environment, but probably not for development.
> 
> I opened https://bugs.launchpad.net/oslo/+bug/1263122 to track the fix.

So as Clint said, SIGHUP is only appropriate to do that *if* the process
is daemonized. If it's in the foreground it's not.

So that logic needs to be better.
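
One possible shape for that check, as a rough sketch (illustrative only, not
the actual oslo-incubator service code; treating "stdin is a tty" as the
definition of foreground is just one heuristic):

    import os
    import signal
    import sys

    def install_sighup_handler(reload_callback):
        """Only treat SIGHUP as 'reload config' when daemonized."""
        if os.isatty(sys.stdin.fileno()):
            # Foreground process (e.g. under screen in devstack): keep the
            # default behavior, so SIGHUP terminates the service.
            return
        signal.signal(signal.SIGHUP, lambda signo, frame: reload_callback())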

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Dropping connectivity from guestagent to Trove back-end

2013-12-20 Thread Denis Makogon
Thanks for response, Tim.

As I said, it would be a confusing situation if a database which has no ACL
were deployed by Trove with root enabled - this looks very strange, since the
user is allowed to check whether root is enabled. I think in this case the
conductor should be _the_ place that contains datastore-specific logic which
requires back-end connectivity.

It would be nice to have consistent instance states for each datastore type
and version.

Are there any objections to letting the conductor deal with it?



Best regards,
Denis Makogon


2013/12/20 Tim Simpson 

>  Hi Denis,
>
>  The plan from the start with Conductor has been to remove any guest
> connections to the database. So any lingering ones are omissions which
> should be dealt with.
>
>  >> Since not each database have root entity (not even ACL at all) it
> would be incorrect to report about root enabling on server-side because
> server-side(trove-taskmanager) should stay common as it possible.
>
>  I agree that in the case of the root call Conductor should have another
> RPC method that gets called by the guest to inform it that the root entity
> was set.
>
>  I also agree that any code that can stay as common as possible between
> datastores should. However I don't agree that trove-taskmanager (by which I
> assume you mean the daemon) has to only be for common functionality.
>
>  Thanks,
>
>  Tim
>
>  --
> *From:* Denis Makogon [dmako...@mirantis.com]
> *Sent:* Friday, December 20, 2013 7:04 AM
> *To:* OpenStack Development Mailing List
> *Subject:* [openstack-dev] [trove] Dropping connectivity from guestagent
> to Trove back-end
>
>    Good day, OpenStack DBaaS community.
>
>
>  I'd like to start conversation about dropping connectivity from
> In-VM guestagent and Trove back-end.
>
> Since Trove has conductor service which interacts with agents via MQ
> service, we could let it deal with any back-end required operations
> initialized by guestagent.
>
> Now conductor deals with instance status notifications and backup
> status notifications. But guest still have one more operation which
> requires back-end connectivity - database root-enabled reporting [1]. After
> dealing with it we could finally drop connectivity [2].
>
> Since not each database have root entity (not even ACL at all) it would be
> incorrect to report about root enabling on server-side because
> server-side(trove-taskmanager) should stay common as it possible.
>
> My first suggestion was to extend conductor API [3] to let conductor
> write report to Trove back-end. Until Trove would reach state when it would
> support multiple datastore (databases) types current patch would work fine
> [4], but when Trove would deliver, suppose, Database (without ACL) it would
> be confusing when after instance provisioning user will find out that some
> how root was enabled, but Database doesn't have any ACL at all.
>
> My point is that Trove Conductor must handle every database (datastore
> in terms of Trove) specific operations which are required back-end
> connection. And Trove server-side (taskmanager) must stay generic and
> perform preparation tasks, which are independent from datastore type.
>
>  [1]
> https://github.com/openstack/trove/blob/master/bin/trove-guestagent#L52
>
> [2] https://bugs.launchpad.net/trove/+bug/1257489
>
> [3] https://review.openstack.org/#/c/59410/5
>
> [4] https://review.openstack.org/#/c/59410/
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] VM diagnostics - V3 proposal

2013-12-20 Thread Matt Riedemann



On Friday, December 20, 2013 3:57:15 AM, Daniel P. Berrange wrote:

On Fri, Dec 20, 2013 at 12:56:47PM +0400, Oleg Gelbukh wrote:

Hi everyone,

I'm sorry for being late to the thread, but what about baremetal driver?
Should it support the get_diagnostics() as well?


Of course, where practical, every driver should aim to support every
method in the virt driver class API.

Regards,
Daniel


Although isn't the baremetal driver moving to Ironic, or is there an 
Ironic driver moving into Nova?  I'm a bit fuzzy on what's going on 
there.  The point is, if we're essentially halting feature development on 
the nova baremetal driver, I'd hold off on implementing get_diagnostics 
there for now.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][Oslo] Consuming Notifications in Batches

2013-12-20 Thread Doug Hellmann
On Thu, Dec 19, 2013 at 6:31 PM, Herndon, John Luke wrote:

> Hi Folks,
>
> The Rackspace-HP team has been putting a lot of effort into performance
> testing event collection in the ceilometer storage drivers[0]. Based on
> some results of this testing, we would like to support batch consumption
> of notifications, as it will greatly improve insertion performance. Batch
> consumption in this case means waiting for a certain number of
> notifications to arrive before sending to the storage
> driver.
>
> I'd like to get feedback from the community about this feature, and how we
> are planning to implement it. Here is what I’m currently thinking:
>
> 1) This seems to fit well into oslo.messaging - batching may be a feature
> that other projects will find useful. After reviewing the changes that
> sileht has been working on in oslo.messaging, I think the right way to
> start off is to create a new executor that builds up a batch of
> notifications, and sends the batch to the dispatcher. We’d also add a
> timeout, so if a certain amount of time passes and the batch isn’t filled
> up, the notifications will be dispatched anyway. I’ve started a
> blueprint for this change and am filling in the details as I go along [1].
>

IIRC, the executor is meant to differentiate between threading, eventlet,
other async implementations, or other methods for dealing with the I/O. It
might be better to implement the batching at the dispatcher level instead.
That way no matter what I/O processing is in place, the batching will occur.
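
For example (a sketch only, with made-up names rather than the real
oslo.messaging dispatcher interface), a batching wrapper at the dispatcher
level could look like:

    import threading
    import time

    class BatchingDispatcher(object):
        """Buffer notifications and flush them to an inner dispatcher in bulk."""

        def __init__(self, inner, batch_size=100, timeout=1.0):
            self.inner = inner
            self.batch_size = batch_size
            self.timeout = timeout
            self._buffer = []
            self._lock = threading.Lock()
            self._last_flush = time.time()

        def dispatch(self, notification):
            with self._lock:
                self._buffer.append(notification)
                expired = time.time() - self._last_flush >= self.timeout
                if len(self._buffer) < self.batch_size and not expired:
                    return
                batch, self._buffer = self._buffer, []
                self._last_flush = time.time()
            # hypothetical bulk entry point on the wrapped dispatcher
            self.inner.dispatch_batch(batch)
            # NOTE: a real implementation would also need a timer to flush a
            # partially filled buffer when no new notifications arrive.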


> 2) In ceilometer, initialize the notification listener with the batch
> executor instead of the eventlet executor (this should probably be
> configurable)[2]. We can then send the entire batch of notifications to
> the storage driver to be processed as events, while maintaining the
> current method for converting notifications into samples.
>
> 3) Error handling becomes more difficult. The executor needs to know if
> any of the notifications should be requeued. I think the right way to
> solve this is to return a list of notifications to requeue from the
> handler. Any better ideas?
>

Which "handler" do you mean?

Doug



>
> Is this the right approach to take? I'm not an oslo.messaging expert, so
> if there is a proper way to implement this change, I'm all ears!
>
> Thanks, happy holidays!
> -john
>
> 0: https://etherpad.openstack.org/p/ceilometer-data-store-scale-testing
> 1:
> https://blueprints.launchpad.net/oslo.messaging/+spec/bulk-consume-messages
> 2: https://blueprints.launchpad.net/ceilometer/+spec/use-bulk-notification
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Dropping connectivity from guestagent to Trove back-end

2013-12-20 Thread Tim Simpson
Because the ability to check if root is enabled is in an extension which would 
not be in effect for a datastore with no ACL support, the user would not be 
able to see that the marker for root enabled was set in the Trove 
infrastructure database either way.

By the way- I double checked the code, and I was wrong- the guest agent was 
*not* telling the database to update the root enabled flag. Instead, the API 
extension had been updating the database all along after contacting the guest. 
Sorry for making this thread more confusing.

It seems like if you follow my one (hopefully last) suggestion on this pull 
request, it will solve the issue you're tackling: 
https://review.openstack.org/#/c/59410/5

Thanks,

Tim


From: Denis Makogon [dmako...@mirantis.com]
Sent: Friday, December 20, 2013 8:58 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [trove] Dropping connectivity from guestagent to 
Trove back-end

Thanks for response, Tim.

As i said, it would be confusing situation when database which has no ACL would 
be deployed by Trove with root enabled - this looks very strange since user 
allowed to check if root enabled. I think in this case Conductor should be 
_that_ place which should contain datastore specific logic, which requires 
back-end connectivity.

It would be nice to have consistent instance states for each datastore types 
and version.

Are there any objections about letting conductor deal with it ?



Best regards,
Denis Makogon


2013/12/20 Tim Simpson
Hi Denis,

The plan from the start with Conductor has been to remove any guest connections 
to the database. So any lingering ones are omissions which should be dealt with.

>> Since not each database have root entity (not even ACL at all) it would be 
>> incorrect to report about root enabling on server-side because 
>> server-side(trove-taskmanager) should stay common as it possible.

I agree that in the case of the root call Conductor should have another RPC 
method that gets called by the guest to inform it that the root entity was set.

I also agree that any code that can stay as common as possible between 
datastores should. However I don't agree that trove-taskmanager (by which I 
assume you mean the daemon) has to only be for common functionality.

Thanks,

Tim


From: Denis Makogon [dmako...@mirantis.com]
Sent: Friday, December 20, 2013 7:04 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [trove] Dropping connectivity from guestagent to Trove 
back-end


Good day, OpenStack DBaaS community.



I'd like to start conversation about dropping connectivity from In-VM 
guestagent and Trove back-end.

Since Trove has conductor service which interacts with agents via MQ 
service, we could let it deal with any back-end required operations initialized 
by guestagent.

Now conductor deals with instance status notifications and backup status 
notifications. But guest still have one more operation which requires back-end 
connectivity – database root-enabled reporting [1]. After dealing with it we 
could finally drop connectivity [2].

Since not each database have root entity (not even ACL at all) it would be 
incorrect to report about root enabling on server-side because 
server-side(trove-taskmanager) should stay common as it possible.

My first suggestion was to extend conductor API [3] to let conductor write 
report to Trove back-end. Until Trove would reach state when it would support 
multiple datastore (databases) types current patch would work fine [4], but 
when Trove would deliver, suppose, Database (without ACL) it would be confusing 
when after instance provisioning user will find out that some how root was 
enabled, but Database doesn't have any ACL at all.

My point is that Trove Conductor must handle every database (datastore in 
terms of Trove) specific operations which are required back-end connection. And 
Trove server-side (taskmanager) must stay generic and perform preparation 
tasks, which are independent from datastore type.


[1] https://github.com/openstack/trove/blob/master/bin/trove-guestagent#L52

[2] https://bugs.launchpad.net/trove/+bug/1257489

[3] https://review.openstack.org/#/c/59410/5

[4] https://review.openstack.org/#/c/59410/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] datastore migration issues

2013-12-20 Thread Greg Hill
Thanks for the input.  I'll go ahead with this plan then.
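
For reference, a rough sketch of what the backfill migration could look like
(sqlalchemy-migrate style; the table and column names are assumed for
illustration and may not match Trove's actual schema):

    import uuid

    from sqlalchemy import MetaData, Table

    LEGACY_DATASTORE = 'mysql'
    LEGACY_VERSION = 'unknown-legacy-version'  # deployer fixes this via trove-manage

    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        datastores = Table('datastores', meta, autoload=True)
        versions = Table('datastore_versions', meta, autoload=True)
        instances = Table('instances', meta, autoload=True)

        ds_id = str(uuid.uuid4())
        version_id = str(uuid.uuid4())

        # Create a default 'mysql' datastore and version for pre-datastore rows.
        datastores.insert().values(id=ds_id, name=LEGACY_DATASTORE,
                                   default_version_id=version_id).execute()
        versions.insert().values(id=version_id, datastore_id=ds_id,
                                 name=LEGACY_VERSION).execute()

        # Point older instances at the default record so the field is never null.
        instances.update().where(
            instances.c.datastore_version_id == None  # noqa: E711
        ).values(datastore_version_id=version_id).execute()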

Greg

On Dec 20, 2013, at 2:06 AM, Vipul Sabhaya wrote:

I am fine with requiring the deployer to update default values, if they don’t 
make sense for their given deployment.  However, not having any value for 
older/existing instances, when the code requires it is not good.  So let’s 
create a default datastore of mysql, with a default version, and set that as 
the datastore for older instances.  A deployer can then run trove-manage to 
update the default record created.


On Thu, Dec 19, 2013 at 6:14 PM, Tim Simpson wrote:
I second Rob and Greg- we need to not allow the instance table to have nulls 
for the datastore version ID. I can't imagine that as Trove grows and evolves, 
that edge case is something we'll always remember to code and test for, so 
let's cauterize things now by no longer allowing it at all.

The fact that the migration scripts can't, to my knowledge, accept parameters 
for what the dummy datastore name and version should be isn't great, but I 
think it would be acceptable enough to make the provided default values 
sensible and ask operators who don't like it to manually update the database.

- Tim




From: Robert Myers [myer0...@gmail.com]
Sent: Thursday, December 19, 2013 9:59 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [trove] datastore migration issues

I think that we need to be good citizens and at least add dummy data. Because 
it is impossible to know who all is using this, the list you have is probably 
complete. But Trove has been available for quite some time and all these users 
will not be listening on this thread. Basically anytime you have a database 
migration that adds a required field you *have* to alter the existing rows. If 
we don't we're basically telling everyone who upgrades that we the 'Database as 
a Service' team don't care about data integrity in our own product :)

Robert


On Thu, Dec 19, 2013 at 9:25 AM, Greg Hill <greg.h...@rackspace.com> wrote:
We did consider doing that, but decided it wasn't really any different from the 
other options as it required the deployer to know to alter that data.  That 
would require the fewest code changes, though.  It was also my understanding 
that mysql variants were a possibility as well (percona and mariadb), which is 
what brought on the objection to just defaulting in code.  Also, we can't 
derive the version being used, so we *could* fill it with a dummy version and 
assume mysql, but I don't feel like that solves the problem or the objections 
to the earlier solutions.  And then we also have bogus data in the database.

Since there's no perfect solution, I'm really just hoping to gather consensus 
among people who are running existing trove installations and have yet to 
upgrade to the newer code about what would be easiest for them.  My 
understanding is that list is basically HP and Rackspace, and maybe Ebay?, but 
the hope was that bringing the issue up on the list might confirm or refute 
that assumption and drive the conversation to a suitable workaround for those 
affected, which hopefully isn't that many organizations at this point.

The options are basically:

1. Put the onus on the deployer to correct existing records in the database.
2. Have the migration script put dummy data in the database which you have to 
correct.
3. Put the onus on the deployer to fill out values in the config value

Greg

On Dec 18, 2013, at 8:46 PM, Robert Myers <myer0...@gmail.com> wrote:


There is the database migration for datastores. We should add a function to  
back fill the existing data with either a dummy data or set it to 'mysql' as 
that was the only possibility before data stores.

On Dec 18, 2013 3:23 PM, "Greg Hill" <greg.h...@rackspace.com> wrote:
I've been working on fixing a bug related to migrating existing installations 
to the new datastore code:

https://bugs.launchpad.net/trove/+bug/1259642

The basic gist is that existing instances won't have any data in the 
datastore_version_id field in the database unless we somehow populate that data 
during migration, and not having that data populated breaks a lot of things 
(including the ability to list instances or delete or resize old instances).  
It's impossible to populate that data in an automatic, generic way, since it's 
highly vendor-dependent on what database and version they currently support, 
and there's not enough data in the older schema to populate the new tables 
automatically.

So far, we've come up with some non-optimal solutions:

1. The first iteration was to assume 'mysql' as the database manager on 
instances without a datastore set.
2. The next iteration was to make the default value be configurable in 
trove.conf, but default to 'mysql' if it wasn't set.
3. It was then proposed that 

[openstack-dev] [Glance][Oslo] Pulling glance.store out of glance. Where should it live?

2013-12-20 Thread Flavio Percoco

Greetings,

In the last Glance meeting, it was proposed to pull out glance's
stores[0] code into its own package. There are a couple of other
scenarios where using this code is necessary and it could also be
useful for other consumers outside OpenStack itself.

That being said, it's not clear where this new library should live:

   1) Oslo: it's the place for common-code incubation, although this
   code has been pretty stable in the last release.

   2) glance.stores under Image program: As said in #1, the API has
   been pretty stable - and it falls perfectly into what Glance's
   program covers.

[0] https://github.com/openstack/glance/tree/master/glance/store/

Thoughts?

Cheers,
FF

--
@flaper87
Flavio Percoco


pgpSaOGX3Qu0u.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Dropping connectivity from guesagent to Trove back-end

2013-12-20 Thread Denis Makogon
Unfortunately, Trove cannot manage its own extensions, so if, for example, I
provisioned a Cassandra instance, it would still be possible to check whether
root is enabled.
Proof:
https://github.com/openstack/trove/blob/master/trove/extensions/mysql/service.py
There are no checks for datastore_type; the service just loads the root model.
Since my patch creates the root model, the next API call (the root check) will
load it.


2013/12/20 Tim Simpson 

>  Because the ability to check if root is enabled is in an extension which
> would not be in effect for a datastore with no ACL support, the user would
> not be able to see that the marker for root enabled was set in the Trove
> infrastructure database either way.
>
>  By the way- I double checked the code, and I was wrong- the guest agent
> was *not* telling the database to update the root enabled flag. Instead,
> the API extension had been updating the database all along after contacting
> the guest. Sorry for making this thread more confusing.
>
>  It seems like if you follow my one (hopefully last) suggestion on this
> pull request, it will solve the issue you're tackling:
> https://review.openstack.org/#/c/59410/5
>
>  Thanks,
>
>  Tim
>
>  --
> *From:* Denis Makogon [dmako...@mirantis.com]
> *Sent:* Friday, December 20, 2013 8:58 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [trove] Dropping connectivity from
> guesagent to Trove back-end
>
>Thanks for response, Tim.
>
>  As i said, it would be confusing situation when database which has no ACL
> would be deployed by Trove with root enabled - this looks very strange
> since user allowed to check if root enabled. I think in this case Conductor
> should be _that_ place which should contain datastore specific logic, which
> requires back-end connectivity.
>
> It would be nice to have consistent instance states for each datastore
> types and version.
>
>  Are there any objections about letting conductor deal with it ?
>
>
>
> Best regards,
> Denis Makogon
>
>
> 2013/12/20 Tim Simpson 
>
>>  Hi Denis,
>>
>>  The plan from the start with Conductor has been to remove any guest
>> connections to the database. So any lingering ones are omissions which
>> should be dealt with.
>>
>>  >> Since not each database have root entity (not even ACL at all) it
>> would be incorrect to report about root enabling on server-side because
>> server-side(trove-taskmanager) should stay common as it possible.
>>
>>   I agree that in the case of the root call Conductor should have
>> another RPC method that gets called by the guest to inform it that the root
>> entity was set.
>>
>>  I also agree that any code that can stay as common as possible between
>> datastores should. However I don't agree that trove-taskmanager (by which I
>> assume you mean the daemon) has to only be for common functionality.
>>
>>  Thanks,
>>
>>  Tim
>>
>>  --
>> *From:* Denis Makogon [dmako...@mirantis.com]
>> *Sent:* Friday, December 20, 2013 7:04 AM
>> *To:* OpenStack Development Mailing List
>> *Subject:* [openstack-dev] [trove] Dropping connectivity from guesagent
>> to Trove back-end
>>
>> Good day, OpenStack DBaaS community.
>>
>>
>>  I'd like to start conversation about dropping connectivity from
>> In-VM guestagent and Trove back-end.
>>
>> Since Trove has conductor service which interacts with agents via MQ
>> service, we could let it deal with any back-end required operations
>> initialized by guestagent.
>>
>> Now conductor deals with instance status notifications and backup
>> status notifications. But guest still have one more operation which
>> requires back-end connectivity - database root-enabled reporting [1]. After
>> dealing with it we could finally drop connectivity [2].
>>
>> Since not each database have root entity (not even ACL at all) it would
>> be incorrect to report about root enabling on server-side because
>> server-side(trove-taskmanager) should stay common as it possible.
>>
>> My first suggestion was to extend conductor API [3] to let conductor
>> write report to Trove back-end. Until Trove would reach state when it would
>> support multiple datastore (databases) types current patch would work fine
>> [4], but when Trove would deliver, suppose, Database (without ACL) it would
>> be confusing when after instance provisioning user will find out that some
>> how root was enabled, but Database doesn't have any ACL at all.
>>
>> My point is that Trove Conductor must handle every database
>> (datastore in terms of Trove) specific operations which are required
>> back-end connection. And Trove server-side (taskmanager) must stay generic
>> and perform preparation tasks, which are independent from datastore type.
>>
>>  [1]
>> https://github.com/openstack/trove/blob/master/bin/trove-guestagent#L52
>>
>> [2] https://bugs.launchpad.net/trove/+bug/1257489
>>
>> [3] https://review.openstac

Re: [openstack-dev] [storyboard] Storyboard sprint around FOSDEM

2013-12-20 Thread Anita Kuno
I can't attend.

I just booked my flight for SaltConf in Salt Lake City, which conflicts.

Would have liked to have been there.

I hope you have a great sprint and FOSDEM.

Thanks,
Anita.

On 12/20/2013 05:26 AM, Thierry Carrez wrote:
> Hi everyone,
> 
> In case you're not familiar with it, Storyboard[1] is a cross-project
> task tracking tool that we are building to replace our usage of
> Launchpad bugs & blueprints.
> 
> We plan to have a 2-day sprint just before FOSDEM in Brussels to make
> the few design and architectural hard calls that are needed to bring
> this from POC state to a dogfoodable, continuously-deployed system.
> 
> We already have 4/6 people signed up, so if you're interested to join,
> please reply to thread ASAP so that we can book relevant space.
> 
> Date/Location: January 30-31 in Brussels, Belgium
> 
> (FOSDEM[2] is February 1-2 in the same city, so you can combine the two)
> 
> [1] http://git.openstack.org/cgit/openstack-infra/storyboard
> [2] http://fosdem.org/
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][Oslo] Consuming Notifications in Batches

2013-12-20 Thread Julien Danjou
On Thu, Dec 19 2013, Herndon, John Luke wrote:

Hi John,

> The Rackspace-HP team has been putting a lot of effort into performance
> testing event collection in the ceilometer storage drivers[0]. Based on
> some results of this testing, we would like to support batch consumption
> of notifications, as it will greatly improve insertion performance. Batch
> consumption in this case means waiting for a certain number of
> notifications to arrive before sending to the storage
> driver. 

I think that is overall a good idea. And in my mind it could also have
bigger consequences than you would think. When we start using
notifications instead of RPC calls for sending the samples, we may be
able to leverage that too.

Anyway, my main concern here is that I am not very enthusiastic about
using the executor to do that. I wonder if there is a way to ask the
broker for as many messages as it has, up to a limit?

You would have 100 messages waiting in the notifications.info queue, and
you would be able to tell oslo.messaging that you want to read up to
10 messages at a time. If the underlying protocol (e.g. AMQP) supports
that as well, it would be more efficient too.
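
For illustration, here is the broker-side half of that idea done with plain
kombu, which the rabbit driver uses underneath (the batch size, queue name and
record_events() callback are placeholders; this is not the oslo.messaging
executor itself):

    import socket

    from kombu import Connection, Queue


    def consume_in_batches(url, queue_name, record_events,
                           batch_size=10, timeout=5.0):
        """Collect up to batch_size notifications (or whatever arrived
        before the timeout), hand the whole batch to record_events(),
        then ack or requeue everything at once."""
        batch = []

        def collect(body, message):
            batch.append((body, message))

        with Connection(url) as conn:
            with conn.Consumer([Queue(queue_name)], callbacks=[collect],
                               accept=['json']) as consumer:
                # Ask the broker to push at most batch_size unacked messages.
                consumer.qos(prefetch_count=batch_size)
                while True:
                    try:
                        while len(batch) < batch_size:
                            # Timeout is per drain call; good enough here.
                            conn.drain_events(timeout=timeout)
                    except socket.timeout:
                        pass  # partial batch: flush whatever we have
                    if not batch:
                        continue
                    try:
                        record_events([body for body, _ in batch])
                        for _, message in batch:
                            message.ack()
                    except Exception:
                        for _, message in batch:
                            message.requeue()
                    del batch[:]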

-- 
Julien Danjou
/* Free Software hacker * independent consultant
   http://julien.danjou.info */


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Tempest][Ceilometer] Pollster's testing strategy

2013-12-20 Thread Julien Danjou
On Fri, Dec 20 2013, Nadya Privalova wrote:

Hi Nadya,

> For QA and Tempest guys brief description of Ceilometer's pollstering.
> Ceilometer has several agents that once in 'interval' asks Nova, Glance and
> other services about their metrics. We need to test this functionality,
> 'Interval' is defined in pipeline.yaml file and is 10 minutes by default.
>
> I'd like to discuss the strategy of pollster's testing in tempest. Now we
> need to wait 10 min to test the correctness of pollsters' work, as I
> understand it is not appropriate.
> I see the following solutions here:
> 1. Add smth like 'if tempest then interval = 5 sec'. This change should go
> to gating AFAIU
> 2. Add additional functionality in Ceilometer: run_all_pollsters_on_demand.
> I think this may be useful not only in tempest.

I think having 2. is a good idea. It could be used in any environment
for debugging or having almost-real-time feedback.
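
Roughly, the shape of option 2 would be something like the following (names
are made up for illustration; this is not Ceilometer's actual agent code):

    class PollingAgent(object):
        """Sketch of a polling agent with an on-demand trigger."""

        def __init__(self, pollsters, publisher, interval=600):
            self.pollsters = pollsters    # assumed pollster plugins
            self.publisher = publisher    # assumed pipeline publisher
            self.interval = interval      # normal pipeline.yaml interval

        def run_all_pollsters_on_demand(self):
            # Run one polling cycle immediately, outside the timer, e.g.
            # triggered by an RPC call from Tempest or a debugging CLI.
            for pollster in self.pollsters:
                samples = pollster.get_samples()   # assumed interface
                self.publisher.publish(samples)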

-- 
Julien Danjou
# Free Software hacker # independent consultant
# http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][nova] oslo common.service vs. screen and devstack

2013-12-20 Thread Sean Dague
On 12/20/2013 09:59 AM, Sean Dague wrote:

> So as Clint said, SIGHUP is only appropriate to do that *if* the process
> is daemonized. If it's in the foreground it's not.
> 
> So that logic needs to be better.

This is basically a blocker for adding any upgrade testing from
something later than havana. Grenade upstream is still functioning
because the service code wasn't merged into nova until after havana was cut.

However there is a desire to do more interesting upgrade patterns, and
without the ability to shutdown nova services on master in the gate,
that's going to hit us pretty hard.

So I'd like to get this fixed soon. As digging us out of this later is
going to be way more expensive.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Dropping connectivity from guesagent to Trove back-end

2013-12-20 Thread Tim Simpson
I think you're addressing a different problem which is that the extensions for 
MySQL shouldn't always apply to every single datastore. However I think we 
should proceed on the assumption that this will be fixed.

Btw, last time we tried to address it there was a week of awful, three hour 
meetings and we couldn't reach consensus.

Thanks,

Tim

From: Denis Makogon [dmako...@mirantis.com]
Sent: Friday, December 20, 2013 9:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [trove] Dropping connectivity from guesagent to 
Trove back-end

Unfortunately, Trove cannot manage it's own extensions, so if, suppose, i would 
try to get provisioned cassandra instance i would be still possible to check if 
root enabled.
Prof: 
https://github.com/openstack/trove/blob/master/trove/extensions/mysql/service.py
There are no checks for datastore_type, service just loads root model and 
that's it, since my patch create root model, next API call (root check) will 
load this model.


2013/12/20 Tim Simpson <tim.simp...@rackspace.com>
Because the ability to check if root is enabled is in an extension which would 
not be in effect for a datastore with no ACL support, the user would not be 
able to see that the marker for root enabled was set in the Trove 
infrastructure database either way.

By the way- I double checked the code, and I was wrong- the guest agent was 
*not* telling the database to update the root enabled flag. Instead, the API 
extension had been updating the database all along after contacting the guest. 
Sorry for making this thread more confusing.

It seems like if you follow my one (hopefully last) suggestion on this pull 
request, it will solve the issue you're tackling: 
https://review.openstack.org/#/c/59410/5

Thanks,

Tim


From: Denis Makogon [dmako...@mirantis.com]
Sent: Friday, December 20, 2013 8:58 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [trove] Dropping connectivity from guesagent to 
Trove back-end

Thanks for response, Tim.

As i said, it would be confusing situation when database which has no ACL would 
be deployed by Trove with root enabled - this looks very strange since user 
allowed to check if root enabled. I think in this case Conductor should be 
_that_ place which should contain datastore specific logic, which requires 
back-end connectivity.

It would be nice to have consistent instance states for each datastore types 
and version.

Are there any objections about letting conductor deal with it ?



Best regards,
Denis Makogon


2013/12/20 Tim Simpson <tim.simp...@rackspace.com>
Hi Denis,

The plan from the start with Conductor has been to remove any guest connections 
to the database. So any lingering ones are omissions which should be dealt with.

>> Since not each database have root entity (not even ACL at all) it would be 
>> incorrect to report about root enabling on server-side because 
>> server-side(trove-taskmanager) should stay common as it possible.

I agree that in the case of the root call Conductor should have another RPC 
method that gets called by the guest to inform it that the root entity was set.

I also agree that any code that can stay as common as possible between 
datastores should. However I don't agree that trove-taskmanager (by which I 
assume you mean the daemon) has to only be for common functionality.

Thanks,

Tim


From: Denis Makogon [dmako...@mirantis.com]
Sent: Friday, December 20, 2013 7:04 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [trove] Dropping connectivity from guesagent to Trove 
back-end


Good day, OpenStack DBaaS community.



I'd like to start conversation about dropping connectivity from In-VM 
guestagent and Trove back-end.

Since Trove has conductor service which interacts with agents via MQ 
service, we could let it deal with any back-end required operations initialized 
by guestagent.

Now conductor deals with instance status notifications and backup status 
notifications. But guest still have one more operation which requires back-end 
connectivity – database root-enabled reporting [1]. After dealing with it we 
could finally drop connectivity [2].

Since not each database have root entity (not even ACL at all) it would be 
incorrect to report about root enabling on server-side because 
server-side(trove-taskmanager) should stay common as it possible.

My first suggestion was to extend conductor API [3] to let conductor write 
report to Trove back-end. Until Trove would reach state when it would support 
multiple datastore (databases) types current patch would work fine [4], but 
when Trove would deliver, suppose, Database (without ACL) it would be confusing 
when after instance provisioning user will find out that some 

Re: [openstack-dev] [QA][Tempest][Ceilometer] Pollster's testing strategy

2013-12-20 Thread Nadya Privalova
Thanks, Julien!

Will create blueprint on Monday and will start implementation.


On Fri, Dec 20, 2013 at 7:51 PM, Julien Danjou  wrote:

> On Fri, Dec 20 2013, Nadya Privalova wrote:
>
> Hi Nadya,
>
> > For QA and Tempest guys brief description of Ceilometer's pollstering.
> > Ceilometer has several agents that once in 'interval' asks Nova, Glance
> and
> > other services about their metrics. We need to test this functionality,
> > 'Interval' is defined in pipeline.yaml file and is 10 minutes by default.
> >
> > I'd like to discuss the strategy of pollster's testing in tempest. Now we
> > need to wait 10 min to test the correctness of pollsters' work, as I
> > understand it is not appropriate.
> > I see the following solutions here:
> > 1. Add smth like 'if tempest then interval = 5 sec'. This change should
> go
> > to gating AFAIU
> > 2. Add additional functionality in Ceilometer:
> run_all_pollsters_on_demand.
> > I think this may be useful not only in tempest.
>
> I think having 2. is a good idea. It could be used in any environment
> for debugging or having almost-real-time feedback.
>
> --
> Julien Danjou
> # Free Software hacker # independent consultant
> # http://julien.danjou.info
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Dropping connectivity from guesagent to Trove back-end

2013-12-20 Thread Denis Makogon
Tim, let's have conductor report that root was enabled until we are able
to teach Trove how to manage its own extensions. Extensions are too
complicated, as you said, so it really would be better to give conductor
the ability to handle root reporting.

Best regards,
Denis Makogon


2013/12/20 Tim Simpson 

>  I think you're addressing a different problem which is that the
> extensions for MySQL shouldn't always apply to every single datastore.
> However I think we should proceed on the assumption that this will be
> fixed.
>
>  Btw, last time we tried to address it there was a week of awful, three
> hour meetings and we couldn't reach consensus.
>
>  Thanks,
>
>  Tim
>  --
> *From:* Denis Makogon [dmako...@mirantis.com]
> *Sent:* Friday, December 20, 2013 9:44 AM
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [trove] Dropping connectivity from
> guesagent to Trove back-end
>
>   Unfortunately, Trove cannot manage it's own extensions, so if, suppose,
> i would try to get provisioned cassandra instance i would be still possible
> to check if root enabled.
> Prof:
> https://github.com/openstack/trove/blob/master/trove/extensions/mysql/service.py
>  There are no checks for datastore_type, service just loads root model and
> that's it, since my patch create root model, next API call (root check)
> will load this model.
>
>
> 2013/12/20 Tim Simpson 
>
>>  Because the ability to check if root is enabled is in an extension
>> which would not be in effect for a datastore with no ACL support, the user
>> would not be able to see that the marker for root enabled was set in the
>> Trove infrastructure database either way.
>>
>>  By the way- I double checked the code, and I was wrong- the guest agent
>> was *not* telling the database to update the root enabled flag. Instead,
>> the API extension had been updating the database all along after contacting
>> the guest. Sorry for making this thread more confusing.
>>
>>  It seems like if you follow my one (hopefully last) suggestion on this
>> pull request, it will solve the issue you're tackling:
>> https://review.openstack.org/#/c/59410/5
>>
>>  Thanks,
>>
>>  Tim
>>
>>  --
>> *From:* Denis Makogon [dmako...@mirantis.com]
>> *Sent:* Friday, December 20, 2013 8:58 AM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [trove] Dropping connectivity from
>> guesagent to Trove back-end
>>
>> Thanks for response, Tim.
>>
>>  As i said, it would be confusing situation when database which has no
>> ACL would be deployed by Trove with root enabled - this looks very strange
>> since user allowed to check if root enabled. I think in this case Conductor
>> should be _that_ place which should contain datastore specific logic, which
>> requires back-end connectivity.
>>
>> It would be nice to have consistent instance states for each datastore
>> types and version.
>>
>>  Are there any objections about letting conductor deal with it ?
>>
>>
>>
>> Best regards,
>> Denis Makogon
>>
>>
>> 2013/12/20 Tim Simpson 
>>
>>>  Hi Denis,
>>>
>>>  The plan from the start with Conductor has been to remove any guest
>>> connections to the database. So any lingering ones are omissions which
>>> should be dealt with.
>>>
>>>  >> Since not each database have root entity (not even ACL at all) it
>>> would be incorrect to report about root enabling on server-side because
>>> server-side(trove-taskmanager) should stay common as it possible.
>>>
>>>   I agree that in the case of the root call Conductor should have
>>> another RPC method that gets called by the guest to inform it that the root
>>> entity was set.
>>>
>>>  I also agree that any code that can stay as common as possible between
>>> datastores should. However I don't agree that trove-taskmanager (by which I
>>> assume you mean the daemon) has to only be for common functionality.
>>>
>>>  Thanks,
>>>
>>>  Tim
>>>
>>>  --
>>> *From:* Denis Makogon [dmako...@mirantis.com]
>>> *Sent:* Friday, December 20, 2013 7:04 AM
>>> *To:* OpenStack Development Mailing List
>>> *Subject:* [openstack-dev] [trove] Dropping connectivity from guesagent
>>> to Trove back-end
>>>
>>> Good day, OpenStack DBaaS community.
>>>
>>>
>>>  I'd like to start conversation about dropping connectivity from
>>> In-VM guestagent and Trove back-end.
>>>
>>> Since Trove has conductor service which interacts with agents via MQ
>>> service, we could let it deal with any back-end required operations
>>> initialized by guestagent.
>>>
>>> Now conductor deals with instance status notifications and backup
>>> status notifications. But guest still have one more operation which
>>> requires back-end connectivity - database root-enabled reporting [1]. After
>>> dealing with it we could finally drop connectivity [2].
>>>
>>> Since not each database have root entit

Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Jay Dobies



On 12/20/2013 08:40 AM, Ladislav Smola wrote:

On 12/20/2013 02:06 PM, Radomir Dopieralski wrote:

On 20/12/13 13:04, Radomir Dopieralski wrote:

[snip]

I have just learned that tuskar-api stays, so my whole ranting is just a
waste of all our time. Sorry about that.



Hehe. :-)

Ok after the last meeting we are ready to say what goes to Tuskar-API.

Who wants to start that thread? :-)


I'm writing something up, but I won't have anything worth showing until 
after the New Year (sounds so far away when I say it that way; it's 
simply that I'm on vacation starting today until the 6th).









___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Dropping connectivity from guesagent to Trove back-end

2013-12-20 Thread Ed Cranford
Conductor was the first phase of
https://wiki.openstack.org/wiki/Trove/guest_agent_communication whose
proposed future phases include turning conductor into a source of truth for
trove to ask about instances, and then using its own datastore separate
from the host db anyway.
The purpose of the root history table is to keep information in a place
even an instance with root cannot reach, so we essentially have a warranty
seal on the instance. The thinking was that if that status was kept on the
instance, intrepid users could potentially enable root, muck about, and
then manually remove root. By putting that row in a table outside the
instance there's no question.

Phase 2 of the document above is to make conductor the source of truth for
information about an instance, so taskman will start asking conductor
instead of fetching the database information directly. So I think the next
step for removing this is to give conductor a method taskman can call to
get the root status from the extant table.

Phase 3 then seeks to give conductor its own datastore away from the
original database; I think that's the right time to migrate the root
history table, too.


On Fri, Dec 20, 2013 at 9:44 AM, Denis Makogon wrote:

> Unfortunately, Trove cannot manage it's own extensions, so if, suppose, i
> would try to get provisioned cassandra instance i would be still possible
> to check if root enabled.
> Prof:
> https://github.com/openstack/trove/blob/master/trove/extensions/mysql/service.py
> There are no checks for datastore_type, service just loads root model and
> that's it, since my patch create root model, next API call (root check)
> will load this model.
>
>
>
> 2013/12/20 Tim Simpson 
>
>>  Because the ability to check if root is enabled is in an extension
>> which would not be in effect for a datastore with no ACL support, the user
>> would not be able to see that the marker for root enabled was set in the
>> Trove infrastructure database either way.
>>
>>  By the way- I double checked the code, and I was wrong- the guest agent
>> was *not* telling the database to update the root enabled flag. Instead,
>> the API extension had been updating the database all along after contacting
>> the guest. Sorry for making this thread more confusing.
>>
>>  It seems like if you follow my one (hopefully last) suggestion on this
>> pull request, it will solve the issue you're tackling:
>> https://review.openstack.org/#/c/59410/5
>>
>>  Thanks,
>>
>>  Tim
>>
>>  --
>> *From:* Denis Makogon [dmako...@mirantis.com]
>> *Sent:* Friday, December 20, 2013 8:58 AM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [trove] Dropping connectivity from
>> guesagent to Trove back-end
>>
>>Thanks for response, Tim.
>>
>>  As i said, it would be confusing situation when database which has no
>> ACL would be deployed by Trove with root enabled - this looks very strange
>> since user allowed to check if root enabled. I think in this case Conductor
>> should be _that_ place which should contain datastore specific logic, which
>> requires back-end connectivity.
>>
>> It would be nice to have consistent instance states for each datastore
>> types and version.
>>
>>  Are there any objections about letting conductor deal with it ?
>>
>>
>>
>> Best regards,
>> Denis Makogon
>>
>>
>> 2013/12/20 Tim Simpson 
>>
>>>  Hi Denis,
>>>
>>>  The plan from the start with Conductor has been to remove any guest
>>> connections to the database. So any lingering ones are omissions which
>>> should be dealt with.
>>>
>>>  >> Since not each database have root entity (not even ACL at all) it
>>> would be incorrect to report about root enabling on server-side because
>>> server-side(trove-taskmanager) should stay common as it possible.
>>>
>>>   I agree that in the case of the root call Conductor should have
>>> another RPC method that gets called by the guest to inform it that the root
>>> entity was set.
>>>
>>>  I also agree that any code that can stay as common as possible between
>>> datastores should. However I don't agree that trove-taskmanager (by which I
>>> assume you mean the daemon) has to only be for common functionality.
>>>
>>>  Thanks,
>>>
>>>  Tim
>>>
>>>  --
>>> *From:* Denis Makogon [dmako...@mirantis.com]
>>> *Sent:* Friday, December 20, 2013 7:04 AM
>>> *To:* OpenStack Development Mailing List
>>> *Subject:* [openstack-dev] [trove] Dropping connectivity from guesagent
>>> to Trove back-end
>>>
>>> Good day, OpenStack DBaaS community.
>>>
>>>
>>>  I'd like to start conversation about dropping connectivity from
>>> In-VM guestagent and Trove back-end.
>>>
>>> Since Trove has conductor service which interacts with agents via MQ
>>> service, we could let it deal with any back-end required operations
>>> initialized by guestagent.
>>>
>>> Now conductor deals with instance status notifications and backup
>>> status notifications. B

Re: [openstack-dev] [oslo][nova] oslo common.service vs. screen and devstack

2013-12-20 Thread Sean Dague
On 12/20/2013 10:56 AM, Sean Dague wrote:
> On 12/20/2013 09:59 AM, Sean Dague wrote:
> 
>> So as Clint said, SIGHUP is only appropriate to do that *if* the process
>> is daemonized. If it's in the foreground it's not.
>>
>> So that logic needs to be better.
> 
> This is basically a blocker for adding any upgrade testing from
> something later than havana. Grenade upstream is still functioning
> because the service code wasn't merged into nova until after havana was cut.
> 
> However there is a desire to do more interesting upgrade patterns, and
> without the ability to shutdown nova services on master in the gate,
> that's going to hit us pretty hard.
> 
> So I'd like to get this fixed soon. As digging us out of this later is
> going to be way more expensive.

Work around here for review - https://review.openstack.org/#/c/63444/

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] oslo.messaging and Rabbit HA configs

2013-12-20 Thread John Wood
Hello folks,

I would like to configure oslo.messaging to work with an HA Rabbit cluster and 
was curious about the correct configuration to use.

I am setting the following values in my private network:

ampq_durable_queues = True
rabbit_userid=guest
rabbit_password=guest
rabbit_hosts=192.168.50.8:5672, 192.168.50.9:5672
rabbit_ha_queues = True
rabbit_port=5672
transport_url = rabbit://guest@192.168.50.8:5672,guest@192.168.50.9:5672/


...but when I try to remove the 192.168.50.8 Rabbit node I get this error:

2013-12-20 16:12:18.014 24700 ERROR oslo.messaging._drivers.impl_rabbit [-] 
AMQP server on 192.168.50.8:5672 is unreachable: [Errno 113] No route to host. 
Trying again in 1 seconds.
2013-12-20 16:12:19.016 24700 INFO oslo.messaging._drivers.impl_rabbit [-] 
Reconnecting to AMQP server on 192.168.50.8:5672


I was expecting the node at 192.168.50.9 to be used if 192.168.50.8 became 
unavailable.

Another note is that it appears the transport_url configs are not utilized by 
the impl_rabbit.py module, hence setting the 'rabbit_hosts' separately.

Thanks in advance,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Clint Byrum
Excerpts from Radomir Dopieralski's message of 2013-12-20 01:13:20 -0800:
> On 20/12/13 00:17, Jay Pipes wrote:
> > On 12/19/2013 04:55 AM, Radomir Dopieralski wrote:
> >> On 14/12/13 16:51, Jay Pipes wrote:
> >>
> >> [snip]
> >>
> >>> Instead of focusing on locking issues -- which I agree are very
> >>> important in the virtualized side of things where resources are
> >>> "thinner" -- I believe that in the bare-metal world, a more useful focus
> >>> would be to ensure that the Tuskar API service treats related group
> >>> operations (like "deploy an undercloud on these nodes") in a way that
> >>> can handle failures in a graceful and/or atomic way.
> >>
> >> Atomicity of operations can be achieved by intoducing critical sections.
> >> You basically have two ways of doing that, optimistic and pessimistic.
> >> Pessimistic critical section is implemented with a locking mechanism
> >> that prevents all other processes from entering the critical section
> >> until it is finished.
> > 
> > I'm familiar with the traditional non-distributed software concept of a
> > mutex (or in Windows world, a critical section). But we aren't dealing
> > with traditional non-distributed software here. We're dealing with
> > highly distributed software where components involved in the
> > "transaction" may not be running on the same host or have much awareness
> > of each other at all.
> 
> Yes, that is precisely why you need to have a single point where they
> can check if they are not stepping on each other's toes. If you don't,
> you get race conditions and non-deterministic behavior. The only
> difference with traditional, non-distributed software is that since the
> components involved are communicating over a, relatively slow, network,
> you have a much, much greater chance of actually having a conflict.
> Scaling the whole thing to hundreds of nodes practically guarantees trouble.
> 

Radomir, what Jay is suggesting is that it seems pretty unlikely that
two individuals would be given a directive to deploy OpenStack into a
single pool of hardware at such a scale where they will both use the
whole thing.

Worst case, if it does happen, they both run out of hardware, one
individual deletes their deployment, the other one resumes. This is the
optimistic position and it will work fine. Assuming you are driving this
all through Heat (which, AFAIK, Tuskar still uses Heat) there's even a
blueprint to support you that I'm working on:

https://blueprints.launchpad.net/heat/+spec/retry-failed-update

Even if both operators put the retry in a loop, one would actually
finish at some point.

> > Trying to make a complex series of related but distributed actions --
> > like the underlying actions of the Tuskar -> Ironic API calls -- into an
> > atomic operation is just not a good use of programming effort, IMO.
> > Instead, I'm advocating that programming effort should instead be spent
> > coding a workflow/taskflow pipeline that can gracefully retry failed
> > operations and report the state of the total taskflow back to the user.
> 
> Sure, there are many ways to solve any particular synchronisation
> problem. Let's say that we have one that can actually be solved by
> retrying. Do you want to retry infinitely? Would you like to increase
> the delays between retries exponentially? If so, where are you going to
> keep the shared counters for the retries? Perhaps in tuskar-api, hmm?
> 

I don't think a sane person would retry more than maybe once without
checking with the other operators.

> Or are you just saying that we should pretend that the nondeteministic
> bugs appearing due to the lack of synchronization simply don't exist?
> They cannot be easily reproduced, after all. We could just close our
> eyes, cover our ears, sing "lalalala" and close any bug reports with
> such errors with "could not reproduce on my single-user, single-machine
> development installation". I know that a lot of software companies do
> exactly that, so I guess it's a valid business practice, I just want to
> make sure that this is actually the tactic that we are going to take,
> before commiting to an architectural decision that will make those bugs
> impossible to fix.
> 

OpenStack is non-deterministic. Deterministic systems are rigid and unable
to handle failure modes of any kind of diversity. We tend to err toward
pushing problems back to the user and giving them tools to resolve the
problem. Avoiding spurious problems is important too, no doubt. However,
what Jay has been suggesting is that the situation a pessimistic locking
system would avoid is entirely user created, and thus lower priority
than say, actually having a complete UI for deploying OpenStack.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] packet forwarding

2013-12-20 Thread Abbass MAROUNI
Hello,

Is it true that traffic from one OpenStack virtual network to another
has to pass through an OpenStack router? (Using Open vSwitch as the L2.)

I'm trying to use a VM as a router between 2 OpenStack virtual networks but
for some reason I'm not able to.

Appreciate any insights,


Best regards,
Abbass
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Clint Byrum
Excerpts from Ladislav Smola's message of 2013-12-20 05:48:40 -0800:
> On 12/20/2013 02:37 PM, Imre Farkas wrote:
> > On 12/20/2013 12:25 PM, Ladislav Smola wrote:
> >> 2. Heat stack create, update
> >> This is locked in the process of the operation, so nobody can mess with
> >> it while it is updating or creating.
> >> Once we will pack all operations that are now aside in this, we should
> >> be alright. And that should be doable in I.
> >> So we should push towards this, rather then building some temporary
> >> locking solution in Tuskar-API.
> >
> > It's not the issue of locking, but the goal of Tuskar with the 
> > Provision button is not only a single stack creation. After Heat's job 
> > is done, the overcloud needs to be properly configured: Keystone needs 
> > to be initialized, the services need to be registered, etc. I don't 
> > think Horizon wants to add a background worker to handle such operations.
> >
> 
> Yes, that is a valid point. I hope we will be able to pack it all to 
> Heat Template in I. This could be the way 
> https://blueprints.launchpad.net/heat/+spec/hot-software-config
> 
> Seems like the consensus is: It belongs to Heat. We are just not able to 
> do it that way now.
> 
> So there is a question, whether we should try to solve it in Tuskar-API 
> temporarily. Or rather focus on the Heat.
> 

Interestingly enough, what Imre has just mentioned isn't necessarily
covered by hot-software-config. That blueprint is specifically about
configuring machines, but not API's.

I think we actually need multi-cloud to support what Imre is talking
about. These are API operations that need to follow the entire stack
bring-up, but happen in a different cloud (the new one).

Assuming single servers instead of loadbalancers and stuff for simplicity:


resources:
  keystone:
type: OS::Nova::Server
  glance:
type: OS::Nova::Server
  nova:
type: OS::Nova::Server
  cloud-setup:
type: OS::Heat::Stack
properties:
  cloud-endpoint: str_join [ 'https://', get_attribute [ 'keystone', 
'first_ip' ], ':35357/' ]
  cloud-credentials: get_parameter ['something']
  template:
keystone-catalog:
  type: OS::Keystone::Catalog
  properties:
endpoints:
  - type: Compute
publicUrl: str_join [ 'https://', get_attribute [ 'nova', 
'first_ip' ], ':8447/' ]
  - type: Image
publicUrl: str_join [ 'https://', get_attribute [ 'glance', 
'first_ip' ], ':12345/' ]

What I mean is, you want the Heat stack to be done not when the hardware
is up, but when the API's have been orchestrated.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Diversity as a requirement for incubation

2013-12-20 Thread Bryan D. Payne
+1
-bryan


On Wed, Dec 18, 2013 at 10:22 PM, Jay Pipes  wrote:

> On 12/18/2013 12:34 PM, Doug Hellmann wrote:
>
>> I have more of an issue with a project failing *after* becoming
>> integrated than during incubation. That's why we have the incubation
>> period to begin with. For the same reason, I'm leaning towards allowing
>> projects into incubation without a very diverse team, as long as there
>> is some recognition that they won't be able to graduate in that state,
>> no matter the technical situation.
>>
>
> This precisely sums up my view as well.
>
> Best,
> -jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

2013-12-20 Thread Alan Kavanagh
One additional item, Gao, and apologies as I was thinking of power on two
fronts here. I assume then that the limit here is per compute node within a
given DC site, so yes, I can see some small benefits there for sure. However, I
still have a hard time seeing power as one of the main attributes I would need
to schedule on, but sure, I can see some value in this.
Let me know when you flesh this out in the blueprint; I would be willing to
support it and take some items for dev.

BR
Alan

From: Alan Kavanagh [mailto:alan.kavan...@ericsson.com]
Sent: December-20-13 8:21 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

Cheers Gao. So my only comment here is how complex and how many attributes are 
we expecting the scheduler to take as input. Similarly the more variables you 
schedule on the more complex the beast becomes and from experience you end up 
having cross dependencies.

I can see power being an item of concern, but don't you think that we could
solve that one with the Nova Cells parent being aware of the power consumption
costs at "time-T" and then just forwarding the Nova API call to the appropriate
child which has, say, the least power consumption cost?

Also, on a priority scale, some DC providers (speaking as one of the DC
providers here) will not have power cost in their top, say, 5 list for
scheduling. So I agree it's definitely interesting, but if you consider
scheduling inside a large DC in the same geographical region and DC site,
scheduling for power consumption becomes null and void. ;-(

BR
Alan



From: Gao, Fengqian [mailto:fengqian@intel.com]
Sent: December-19-13 11:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

Yes, Alan, you got me.
Provide power/temperature to the scheduler, set thresholds or different
weights, and the scheduler can then boot the VM on the most suitable node.

Thanks

--fengqian

From: Alan Kavanagh [mailto:alan.kavan...@ericsson.com]
Sent: Friday, December 20, 2013 11:58 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

Cheers Gao

It definitely makes sense to collect additional metrics such as power and 
temperature, and make that available for selective decisions you would want to 
take. However, I am just wondering if you could realistically feed those 
metrics as variables for scheduling, this is the main part I feel is 
questionable. I assume then you would use temperature &|| power etc to gauge if 
you want to schedule another VM on a given node when a given temperature 
threshold is reached. Is this the main case you are thinking of Gao?

Alan

From: Gao, Fengqian [mailto:fengqian@intel.com]
Sent: December-18-13 10:23 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

Hi, Alan,
I think, for nova-scheduler, it is better if we gather more information. And in
today's DC, power and temperature are very important factors to consider.
CPU/memory utilization is not enough to describe a node's status; power and
inlet temperature should be taken into account.

Best Wishes

--fengqian

From: Alan Kavanagh [mailto:alan.kavan...@ericsson.com]
Sent: Thursday, December 19, 2013 2:14 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

Hi Gao

What is the reason why you see it would be important to have these two 
additional metrics "power and temperature" for Nova to base scheduling on?

Alan

From: Gao, Fengqian [mailto:fengqian@intel.com]
Sent: December-18-13 1:00 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

Hi, all,
I am planning to extend bp 
https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling with 
power and temperature. In other words, power and temperature can be collected 
and used for nova-scheduler just as CPU utilization.
I have a question here. As you know, IPMI is used to get power and temperature,
and baremetal implements the IPMI functions in Nova. But the baremetal driver
is being split out of Nova, so if I want to change something in the IPMI code,
which part should I choose now? Nova or Ironic?
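
Either way, the raw numbers would come from ipmitool on the node. For
illustration only (subcommand support, especially DCMI power readings, varies
by BMC and ipmitool version, so treat the exact commands as an assumption):

    import subprocess


    def _ipmi(args):
        # Runs ipmitool locally; a real collector would parse the output
        # into samples rather than returning the raw text.
        return subprocess.check_output(['ipmitool'] + args)


    def get_inlet_temperatures():
        # e.g. "Inlet Temp | 04h | ok | 7.1 | 21 degrees C"
        return _ipmi(['sdr', 'type', 'Temperature'])


    def get_power_reading():
        # DCMI power reading, where the platform supports it.
        return _ipmi(['dcmi', 'power', 'reading'])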


Best wishes

--fengqian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] VM diagnostics - V3 proposal

2013-12-20 Thread Oleg Gelbukh
Matt,

My understanding is that there will be a nova.virt.baremetal.ironic driver
in Nova which will talk to Ironic API to manage bare-metal instances. So,
Ironic will be actually providing the diagnostics data about bm instance
via its API eventually.

Hope someone will correct me if I'm wrong.

--
Best regards,
Oleg Gelbukh


On Fri, Dec 20, 2013 at 7:12 PM, Matt Riedemann
wrote:

>
>
> On Friday, December 20, 2013 3:57:15 AM, Daniel P. Berrange wrote:
>
>> On Fri, Dec 20, 2013 at 12:56:47PM +0400, Oleg Gelbukh wrote:
>>
>>> Hi everyone,
>>>
>>> I'm sorry for being late to the thread, but what about baremetal driver?
>>> Should it support the get_diagnostics() as well?
>>>
>>
>> Of course, where practical, every driver should aim to support every
>> method in the virt driver class API.
>>
>> Regards,
>> Daniel
>>
>
> Although isn't the baremetal driver moving to ironic, or there is an
> ironic driver moving into nova?  I'm a bit fuzzy on what's going on there.
>  Point is, if we're essentially halting feature development on the nova
> baremetal driver I'd hold off on implementing get_diagnostics there for now.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][Oslo] Consuming Notifications in Batches

2013-12-20 Thread Herndon, John Luke

On Dec 20, 2013, at 8:10 AM, Doug Hellmann  wrote:

> 
> 
> 
> On Thu, Dec 19, 2013 at 6:31 PM, Herndon, John Luke  
> wrote:
> Hi Folks,
> 
> The Rackspace-HP team has been putting a lot of effort into performance
> testing event collection in the ceilometer storage drivers[0]. Based on
> some results of this testing, we would like to support batch consumption
> of notifications, as it will greatly improve insertion performance. Batch
> consumption in this case means waiting for a certain number of
> notifications to arrive before sending to the storage
> driver.
> 
> I'd like to get feedback from the community about this feature, and how we
> are planning to implement it. Here is what I’m currently thinking:
> 
> 1) This seems to fit well into oslo.messaging - batching may be a feature
> that other projects will find useful. After reviewing the changes that
> sileht has been working on in oslo.messaging, I think the right way to
> start off is to create a new executor that builds up a batch of
> notifications, and sends the batch to the dispatcher. We’d also add a
> timeout, so if a certain amount of time passes and the batch isn’t filled
> up, the notifications will be dispatched anyway. I’ve started a
> blueprint for this change and am filling in the details as I go along [1].
> 
> IIRC, the executor is meant to differentiate between threading, eventlet, 
> other async implementations, or other methods for dealing with the I/O. It 
> might be better to implement the batching at the dispatcher level instead. 
> That way no matter what I/O processing is in place, the batching will occur.
> 

I thought about doing it in the dispatcher. One problem I see is handling 
message acks. It looks like the current executors are built around single 
messages and re-queueing single messages if problems occur. If we build up a 
batch in the dispatcher, either the executor has to wait for the whole batch to 
be committed (which wouldn’t work in the case of the blocking executor, or 
would leave a lot of green threads hanging around in the case of the eventlet 
executor), or the executor has to be modified to allow acking to be handled out 
of band. So, I was thinking it would be cleaner to write a new executor that is 
responsible for acking/requeueing the entire batch. Maybe I’m missing something?
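
As a sketch of what that new executor's loop could look like (not
oslo.messaging's real executor API; poll(), acknowledge() and requeue() stand
in for whatever the driver exposes, and the handler is assumed to return the
messages that need requeueing, as proposed in item 3 of the original message):

    def run_batches(listener, handler, batch_size=10, timeout=5.0):
        # Batch-aware executor loop: the executor owns acking/requeueing
        # for the whole batch, out of band from the dispatcher.
        while True:
            batch = []
            while len(batch) < batch_size:
                msg = listener.poll(timeout=timeout)   # assumed driver call
                if msg is None:
                    break                              # flush a partial batch
                batch.append(msg)
            if not batch:
                continue
            failed = handler(batch) or []              # messages to requeue
            for msg in batch:
                if msg in failed:
                    msg.requeue()
                else:
                    msg.acknowledge()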

> 
> 2) In ceilometer, initialize the notification listener with the batch
> executor instead of the eventlet executor (this should probably be
> configurable)[2]. We can then send the entire batch of notifications to
> the storage driver to be processed as events, while maintaining the
> current method for converting notifications into samples.
> 
> 3) Error handling becomes more difficult. The executor needs to know if
> any of the notifications should be requeued. I think the right way to
> solve this is to return a list of notifications to requeue from the
> handler. Any better ideas?
> 
> Which "handler" do you mean?

Ah, sorry - handler is whichever method is registered to receive the batch from 
the dispatcher. In ceilometer’s case, this would be process_notifications I 
think.

> Doug
> 
>  
> 
> Is this the right approach to take? I'm not an oslo.messaging expert, so
> if there is a proper way to implement this change, I'm all ears!
> 
> Thanks, happy holidays!
> -john
> 
> 0: https://etherpad.openstack.org/p/ceilometer-data-store-scale-testing
> 1:
> https://blueprints.launchpad.net/oslo.messaging/+spec/bulk-consume-messages
> 2: https://blueprints.launchpad.net/ceilometer/+spec/use-bulk-notification
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-
John Herndon
HP Cloud
john.hern...@hp.com





smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] packet forwarding

2013-12-20 Thread Randy Tuttle
In general, you'd need a router to pass from one VLAN to another, and that
is still true in OS. However, for your case where you have a VM running
some routing software, it's quite possible (likely) that the iptables rules
on the host machine are stopping your VM from forwarding out, since the
source address of the packet is not one the host knows belongs to that guest.

Randy


On Fri, Dec 20, 2013 at 11:50 AM, Abbass MAROUNI <
abbass.maro...@virtualscale.fr> wrote:

> Hello,
>
> Is it true that a traffic from one OpenStack virtual network to another
> have to pass by an OpenStack router ? (using an OpenVirtual switch as the
> L2 ).
>
> I'm trying ti use a VM as a router between 2 OpenStack virtual networks
> but for some reason I'm not able.
>
> Appreciate any insights,
>
>
> Best regards,
> Abbass
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Dropping connectivity from guesagent to Trove back-end

2013-12-20 Thread Tim Simpson
>> whose proposed future phases include turning conductor into a source of 
>> truth for trove to ask about instances, and then using its own datastore 
>> separate from the host db anyway.

IIRC this was to support such ideas as storing the heartbeat or service status 
somewhere besides the Trove database. So let's say that instead of having to 
constantly update the heartbeat table from the guest, it was possible to ask 
Rabbit for the last time the guest tried to receive a message and use that as 
the heartbeat timestamp instead. This is what Conductor was meant to support - 
the ability to not force a guest to send back heartbeat info to a database if 
there was an RPC-technology-dependent way to get that info which Conductor 
knew about.

I don't agree with the idea that all information on a guest should live only in 
Conductor. Under this logic we'd have no backup information in the Trove 
database we could use when listing backups and would have to call Conductor 
instead.  I don't see what that buys us.

Similarly with the RootHistory object, it lives in the database right now which 
works fine because anytime Root is enabled it's done by Trove code which has 
access to that database anyway. Moving root history to Conductor will 
complicate things without giving us any benefit.

Thanks,

Tim


From: Ed Cranford [ed.cranf...@gmail.com]
Sent: Friday, December 20, 2013 10:13 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [trove] Dropping connectivity from guesagent to 
Trove back-end

Conductor was the first phase of 
https://wiki.openstack.org/wiki/Trove/guest_agent_communication whose proposed 
future phases include turning conductor into a source of truth for trove to ask 
about instances, and then using its own datastore separate from the host db 
anyway.
The purpose of the root history table is to keep information in a place even an 
instance with root cannot reach, so we essentially have a warranty seal on the 
instance. The thinking at the time was that if that status were kept on the 
instance, intrepid users could potentially enable root, muck about, and then 
manually remove root. By putting that row in a table outside the instance 
there's no question.

Phase 2 of the document above is to make conductor the source of truth for 
information about an instance, so taskman will start asking conductor instead 
of fetching the database information directly. So I think the next step for 
removing this is to give conductor a method taskman can call to get the root 
status from the extant table.

Phase 3 then seeks to give conductor its own datastore away from the original 
database; I think that's the right time to migrate the root history table, too.


On Fri, Dec 20, 2013 at 9:44 AM, Denis Makogon
<dmako...@mirantis.com> wrote:
Unfortunately, Trove cannot manage its own extensions, so if, say, I were to 
provision a Cassandra instance, it would still be possible to check whether 
root is enabled.
Proof: 
https://github.com/openstack/trove/blob/master/trove/extensions/mysql/service.py
There are no checks for datastore_type; the service just loads the root model 
and that's it. Since my patch creates the root model, the next API call (root 
check) will load this model.



2013/12/20 Tim Simpson <tim.simp...@rackspace.com>
Because the ability to check if root is enabled is in an extension which would 
not be in effect for a datastore with no ACL support, the user would not be 
able to see that the marker for root enabled was set in the Trove 
infrastructure database either way.

By the way- I double checked the code, and I was wrong- the guest agent was 
*not* telling the database to update the root enabled flag. Instead, the API 
extension had been updating the database all along after contacting the guest. 
Sorry for making this thread more confusing.

It seems like if you follow my one (hopefully last) suggestion on this pull 
request, it will solve the issue you're tackling: 
https://review.openstack.org/#/c/59410/5

Thanks,

Tim


From: Denis Makogon [dmako...@mirantis.com]
Sent: Friday, December 20, 2013 8:58 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [trove] Dropping connectivity from guesagent to 
Trove back-end

Thanks for response, Tim.

As I said, it would be a confusing situation if a database which has no ACL 
were deployed by Trove with root enabled - this looks very strange, since the 
user is allowed to check whether root is enabled. I think in this case 
Conductor should be _the_ place that contains datastore-specific logic which 
requires back-end connectivity.

It would be nice to have consistent instance states for each datastore type 
and version.

Are there any objections about letting conductor deal with it ?



Best regards,
Denis Makogon


2013/12/20 Tim Simpson 
mailto:tim.simp...@racks

Re: [openstack-dev] [Ceilometer][Oslo] Consuming Notifications in Batches

2013-12-20 Thread Herndon, John Luke

On Dec 20, 2013, at 8:48 AM, Julien Danjou  wrote:

> On Thu, Dec 19 2013, Herndon, John Luke wrote:
> 
> Hi John,
> 
>> The Rackspace-HP team has been putting a lot of effort into performance
>> testing event collection in the ceilometer storage drivers[0]. Based on
>> some results of this testing, we would like to support batch consumption
>> of notifications, as it will greatly improve insertion performance. Batch
>> consumption in this case means waiting for a certain number of
>> notifications to arrive before sending to the storage
>> driver. 
> 
> I think that is overall a good idea. And in my mind it could also have
> bigger consequences than you would think. When we start using
> notifications instead of RPC calls for sending the samples, we may be
> able to leverage that too.
Cool, glad to hear it!

> Anyway, my main concern here is that I am not very enthusiastic about
> using the executor to do that. I wonder if there is not a way to ask the
> broker for as many messages as it has, up to a limit?
> 
> You would have 100 messages waiting in the notifications.info queue, and
> you would be able to tell oslo.messaging that you want to read up to
> 10 messages at a time. If the underlying protocol (e.g. AMQP) can
> support that too, it would be more efficient.

Yeah, I like this idea. As far as I can tell, AMQP doesn’t support grabbing 
more than a single message at a time, but we could definitely have the broker 
store up the batch before sending it along. Other protocols may support bulk 
consumption. My one concern with this approach is error handling. Currently the 
executors treat each notification individually. So let’s say the broker hands 
over 100 messages at a time. When the client is done processing the messages, 
the broker needs to know whether message 25 had an error or not. We would 
somehow need to communicate back to the broker which messages failed. I think 
this may take some refactoring of the executors/dispatchers. What do you think?

> 
> -- 
> Julien Danjou
> /* Free Software hacker * independent consultant
>   http://julien.danjou.info */

-
John Herndon
HP Cloud
john.hern...@hp.com





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Gerrit review refs now supported by diskimage-builder's source-repositories element

2013-12-20 Thread Chris Jones
Hi

As of just now (review 63021) the source-repositories element in
diskimage-builder can fetch git repos from gerrit reviews.

I figured it'd be worth mentioning here because it's super useful if you
want to test the code from one or more gerrit reviews, in a TripleO
environment.

A quick example, let's say you're using our devtest.sh script to build your
local environment and you want to try out patch set 9 of Yuiko Takada's
latest nova bug fix, all you need to do is:

export DIB_REPOLOCATION_nova=https://review.openstack.org/openstack/nova
export DIB_REPOREF_nova=refs/changes/56/53056/9
./scripts/devtest.sh

Bam!

(FWIW, the same env vars work if you're calling disk-image-create directly)

-- 
Cheers,

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][Oslo] Consuming Notifications in Batches

2013-12-20 Thread Julien Danjou
On Fri, Dec 20 2013, Herndon, John Luke wrote:

> Yeah, I like this idea. As far as I can tell, AMQP doesn’t support grabbing
> more than a single message at a time, but we could definitely have the
> broker store up the batch before sending it along. Other protocols may
> support bulk consumption. My one concern with this approach is error
> handling. Currently the executors treat each notification individually. So
> let’s say the broker hands 100 messages at a time. When client is done
> processing the messages, the broker needs to know if message 25 had an error
> or not. We would somehow need to communicate back to the broker which
> messages failed. I think this may take some refactoring of
> executors/dispatchers. What do you think?

Yeah, it definitely means changing the messaging API a bit to handle
such a case. But in the end it will be a good thing to support it,
whether or not it is natively supported by the broker.

For brokers where it's not possible, it may be simple enough to have a
"get_one_notification_nb()" method that would either return a
notification or None if there's none to read, and would
consequently have to be _non-blocking_.

So if the transport is smart we write:

  # Return up to max_number_of_notifications_to_read
  notifications = transport.get_notifications(
      conf.max_number_of_notifications_to_read)
  storage.record(notifications)

Otherwise we do:

  notifications = []
  for i in range(conf.max_number_of_notifications_to_read):
      notification = transport.get_one_notification_nb()
      if notification:
          notifications.append(notification)
      else:
          break
  storage.record(notifications)

So it's just about having the right primitive in oslo.messaging; we can
then build on top of that, wherever that ends up living.

-- 
Julien Danjou
/* Free Software hacker * independent consultant
   http://julien.danjou.info */


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][Oslo] Consuming Notifications in Batches

2013-12-20 Thread Herndon, John Luke

On Dec 20, 2013, at 10:47 AM, Julien Danjou  wrote:

> On Fri, Dec 20 2013, Herndon, John Luke wrote:
> 
>> Yeah, I like this idea. As far as I can tell, AMQP doesn’t support grabbing
>> more than a single message at a time, but we could definitely have the
>> broker store up the batch before sending it along. Other protocols may
>> support bulk consumption. My one concern with this approach is error
>> handling. Currently the executors treat each notification individually. So
>> let’s say the broker hands 100 messages at a time. When client is done
>> processing the messages, the broker needs to know if message 25 had an error
>> or not. We would somehow need to communicate back to the broker which
>> messages failed. I think this may take some refactoring of
>> executors/dispatchers. What do you think?
> 
> Yeah, it definitely needs to change the messaging API a bit to handle
> such a case. But in the end that will be a good thing to support such a
> case, it being natively supported by the broker or not.
> 
> For brokers where it's not possible, it may be simple enough to have a
> "get_one_notification_nb()" method that would either return a
> notification or None if there's none to read, and would that
> consequently have to be _non-blocking_.
> 
> So if the transport is smart we write:
> 
>  # Return up to max_number_of_notifications_to_read
>  notifications =
>  transport.get_notificatations(conf.max_number_of_notifications_to_read)
>  storage.record(notifications)
> 
> Otherwise we do:
> 
>  for i in range(conf.max_number_of_notifications_to_read):
>  notification = transport.get_one_notification_nb():
>  if notification:
>  notifications.append(notification)
>  else:
>  break
>   storage.record(notifications)
> 
> So it's just about having the right primitive in oslo.messaging, we can
> then build on top of that wherever that is.
> 

I think this will work. I was considering putting in a timeout so the broker 
would not send off all of the messages immediately, and implementing it using 
blocking calls. If the consumer consumes faster than the publishers are 
publishing, this just degenerates into single-notification batches, so it may be 
beneficial to wait for more messages to arrive before sending off the batch. If 
the batch fills up before the timeout is reached, then the batch would be sent 
off immediately.
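
Roughly what I have in mind (just a sketch of the shape of it - the helper 
names here are made up, this is not the oslo.messaging API):

import time

def consume_batches(get_one_blocking, record_batch,
                    max_batch_size=100, batch_timeout=5.0):
    # Sketch only: get_one_blocking(timeout) is assumed to return a
    # notification, or None if the timeout expires with nothing to read.
    while True:
        batch = []
        deadline = time.time() + batch_timeout
        while len(batch) < max_batch_size:
            remaining = deadline - time.time()
            if remaining <= 0:
                break
            notification = get_one_blocking(timeout=remaining)
            if notification is not None:
                batch.append(notification)
        if batch:
            # Hand the whole batch to the dispatcher/storage driver at once.
            record_batch(batch)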

> -- 
> Julien Danjou
> /* Free Software hacker * independent consultant
>   http://julien.danjou.info */

-
John Herndon
HP Cloud
john.hern...@hp.com





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] [Oslo] Add APP-NAME (RFC5424) for Oslo syslog logging

2013-12-20 Thread Bogdan Dobrelya

*Preamble*
Hi stackers, I was trying to implement correct APP-NAME tags for remote 
logging in Fuel for Openstack, and faced the 
https://bugs.launchpad.net/nova/+bug/904307 issue. There are no logging 
options in Python 2.6/2.7 to address this APP-NAME in logging formats or 
configs (log_format, log_config(_append)).


Just look at the log file names, and you will understand me:
cinder-cinder.api.extensions.log
cinder-cinder.db.sqlalchemy.session.log
cinder-cinder.log
cinder-cinder.openstack.common.rpc.common.log
cinder-eventlet.wsgi.server.log
cinder-keystoneclient.middleware.auth_token.log
glance-eventlet.wsgi.server.log
glance-glance.api.middleware.cache.log
glance-glance.api.middleware.cache_manage.log
glance-glance.image_cache.log
glance-keystoneclient.middleware.auth_token.log
keystone-root.log
nova-keystoneclient.middleware.auth_token.log
nova-nova.api.openstack.compute.extensions.log
nova-nova.api.openstack.extensions.log
nova-nova.ec2.wsgi.server.log
nova-nova.log
nova-nova.metadata.wsgi.server.log
nova-nova.network.driver.log
nova-nova.osapi_compute.wsgi.server.log
nova-nova.S3.log
quantum-eventlet.wsgi.server.log
quantum-keystoneclient.middleware.auth_token.log
quantum-quantum.api.extensions.log
quantum-quantum.manager.log
quantum-quantum.openstack.common.rpc.amqp.log
quantum-quantum.plugins.openvswitch.ovs_quantum_plugin.log

But I actually want to see something like this:
cinder-api.log
cinder-volume.log
glance-api.log
glance-manage.log
glance-registry.log
keystone-all.log
nova-api.log
nova-conductor.log
nova-consoleauth.log
nova-objectstore.log
nova-scheduler.log
...and so on.

In other words, logging should honor RFC 3164 & RFC 5424; here are some quotes:
"The MSG part has two fields known as the TAG field and the CONTENT
field. The value in the TAG field will be the name of the program or 
process that generated the message. The CONTENT contains the details of 
the message..."

"The APP-NAME field SHOULD identify the device or application that
originated the message..."

I see two solutions for this issue.

*Solution 1*
One possible solution is to use a new key for log_format (i.e. 
%(binary_name)s) to provide the application/service name in log records.
The implementation could look like patch set 4: 
https://review.openstack.org/#/c/63094/4

And the log_format could be like this:
log_format=%(asctime)s %(binary_name)s %(levelname)s: %(name)s: %(message)s
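
For illustration only (this is not the patch itself), one way such a key could 
be provided is a logging filter that derives the name from the running 
executable:

import logging
import os
import sys

class BinaryNameFilter(logging.Filter):
    # Adds a %(binary_name)s key to every record, e.g. "nova-api".
    def filter(self, record):
        record.binary_name = os.path.basename(sys.argv[0])
        return True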

The patch is applicable to other OpenStack services which have not moved 
to Oslo yet.
I tested it with the nova services, and all of them can start with a 
log_format using %(binary_name)s except nova-api. It looks like 
keystoneclient/middleware/auth_token.py is unhappy with this patch; see 
the trace http://paste.openstack.org/show/55519/


*Solution 2*
The only other option I can suggest is to backport 'ident' from 
Python 3.3, see http://hg.python.org/cpython/rev/6baa90fa2b6d
The implementation could look like this: 
https://review.openstack.org/#/c/63094
To ensure we have APP-NAME in the message, we can set use_syslog = true 
and check the results.
If we're using log_config_append, the formatters and handlers could look 
like this:

[formatter_normal]
format = %(levelname)s: %(message)s
[handler_production]
class = openstack.common.log.RFCSysLogHandler
level = INFO
formatter = normal
args = ('/dev/log', handlers.SysLogHandler.LOG_LOCAL6)

The patch is also applicable to other OpenStack services which have not 
moved to Oslo yet.
For syslog logging, the application/service/process name (aka APP-NAME, 
see RFC 5424) would be added before the MSG part, right after it has been 
formatted, and there is no need for any special log_format settings.
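
As a rough sketch of the idea (see the review above for the real code), such a 
handler would just prepend the APP-NAME to the formatted record, emulating the 
'ident' argument from Python 3.3:

import logging.handlers
import os
import sys

class RFCSysLogHandler(logging.handlers.SysLogHandler):
    # Sketch only: prepend an RFC 5424 style APP-NAME to the MSG part.
    def __init__(self, *args, **kwargs):
        self.binary_name = os.path.basename(sys.argv[0])
        logging.handlers.SysLogHandler.__init__(self, *args, **kwargs)

    def format(self, record):
        msg = logging.handlers.SysLogHandler.format(self, record)
        return self.binary_name + ' ' + msg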


*Conclusion*
I vote for implementing solution 2 for Oslo logging, and for those 
OpenStack services which don't use Oslo for logging yet. It would not 
require any changes outside of the OpenStack modules, so it looks like a 
good compromise for backporting the 'ident' feature for APP-NAME tags from 
Python 3.3. What do you think?


P.S. Sorry for spamming, fuel-dev, have a nice weekend :-)
--
Best regards,
Bogdan Dobrelya,
Researcher TechLead, Mirantis, Inc.
+38 (066) 051 07 53
Skype bogdando_at_yahoo.com
Irc #bogdando
38, Lenina ave.
Kharkov, Ukraine
www.mirantis.com
www.mirantis.ru
bdobre...@mirantis.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Dropping connectivity from guesagent to Trove back-end

2013-12-20 Thread Ed Cranford
Fair enough, original scope for conductor was just heartbeats
anyway--backups were more of an added bonus if anything to reduce that db
dependency.
Denis' patch at present just makes taskmanager take care of it, and it's
simple enough to do that way.


On Fri, Dec 20, 2013 at 11:16 AM, Tim Simpson wrote:

>  >> whose proposed future phases include turning conductor into a source
> of truth for trove to ask about instances, and then using its own datastore
> separate from the host db anyway.
>
> IIRC this was to support such ideas as storing the heart beat or service
> status somewhere besides the Trove database. So let's say that instead of
> having to constantly update the heart beat table from the guest it was
> possible to ask Rabbit when the last time the guest tried to receive a
> message and use that as the heartbeat timestamp instead. This is what
> Conductor was meant to support - the ability to not force a guest to have
> to send back heart beat info to a database if there was an RPC technology
> dependent way to get that info which Conductor knew about.
>
>  I don't agree with the idea that all information on a guest should live
> only in Conductor. Under this logic we'd have no backup information in the
> Trove database we could use when listing backups and would have to call
> Conductor instead.  I don't see what that buys us.
>
>  Similarly with the RootHistory object, it lives in the database right
> now which works fine because anytime Root is enabled it's done by Trove
> code which has access to that database anyway. Moving root history to
> Conductor will complicate things without giving us any benefit.
>
>  Thanks,
>
>  Tim
>
>   --
> *From:* Ed Cranford [ed.cranf...@gmail.com]
> *Sent:* Friday, December 20, 2013 10:13 AM
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [trove] Dropping connectivity from
> guesagent to Trove back-end
>
>   Conductor was the first phase of
> https://wiki.openstack.org/wiki/Trove/guest_agent_communication whose
> proposed future phases include turning conductor into a source of truth for
> trove to ask about instances, and then using its own datastore separate
> from the host db anyway.
>  The purpose of the root history table is to keep information in a place
> even an instance with root cannot reach, so we essentially have a warranty
> seal on the instance. The thinking at was if that status was kept on the
> instance, intrepid users could potentially enable root, muck about, and
> then manually remove root. By putting that row in a table outside the
> instance there's no question.
>
> Phase 2 of the document above is to make conductor the source of truth for
> information about an instance, so taskman will start asking conductor
> instead of fetching the database information directly. So I think the next
> step for removing this is to give conductor a method taskman can call to
> get the root status from the extant table.
>
>  Phase 3 then seeks to give conductor its own datastore away from the
> original database; I think that's the right time to migrate the root
> history table, too.
>
>
> On Fri, Dec 20, 2013 at 9:44 AM, Denis Makogon wrote:
>
>>  Unfortunately, Trove cannot manage it's own extensions, so if, suppose,
>> i would try to get provisioned cassandra instance i would be still possible
>> to check if root enabled.
>> Prof:
>> https://github.com/openstack/trove/blob/master/trove/extensions/mysql/service.py
>>  There are no checks for datastore_type, service just loads root model
>> and that's it, since my patch create root model, next API call (root check)
>> will load this model.
>>
>>
>>
>> 2013/12/20 Tim Simpson 
>>
>>>  Because the ability to check if root is enabled is in an extension
>>> which would not be in effect for a datastore with no ACL support, the user
>>> would not be able to see that the marker for root enabled was set in the
>>> Trove infrastructure database either way.
>>>
>>>  By the way- I double checked the code, and I was wrong- the guest
>>> agent was *not* telling the database to update the root enabled flag.
>>> Instead, the API extension had been updating the database all along after
>>> contacting the guest. Sorry for making this thread more confusing.
>>>
>>>  It seems like if you follow my one (hopefully last) suggestion on this
>>> pull request, it will solve the issue you're tackling:
>>> https://review.openstack.org/#/c/59410/5
>>>
>>>  Thanks,
>>>
>>>  Tim
>>>
>>>  --
>>> *From:* Denis Makogon [dmako...@mirantis.com]
>>> *Sent:* Friday, December 20, 2013 8:58 AM
>>> *To:* OpenStack Development Mailing List (not for usage questions)
>>> *Subject:* Re: [openstack-dev] [trove] Dropping connectivity from
>>> guesagent to Trove back-end
>>>
>>> Thanks for response, Tim.
>>>
>>>  As i said, it would be confusing situation when database which has no
>>> ACL would be deployed by Trove with root enabled -

Re: [openstack-dev] [Neutron] Neutron Tempest code sprint - 2nd week of January, Montreal, QC, Canada

2013-12-20 Thread Shiv Haris

Please add my name to the list. Thanks.

-Shiv Haris

-Original Message-
From: Anita Kuno [mailto:ante...@anteaya.info] 
Sent: Wednesday, December 18, 2013 1:18 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Neutron Tempest code sprint - 2nd week 
of January, Montreal, QC, Canada

Okay time for a recap.

What: Neutron Tempest code sprint
Where: Montreal, QC, Canada
When: January 15, 16, 17 2014
Location: I am about to sign the contract for Salle du Parc at 3625 Parc 
avenue, a room in a residence of McGill University.
Time: 9am - 5pm

I am expecting to see the following people in Montreal in January:
Mark McClain
Salvatore Orlando
Sean Dague
Matt Trenish
Jay Pipes
Sukhdev Kapur
Miguel Lavelle
Oleg Bondarev
Rossella Sblendido
Emilien Macchi
Sylvain Afchain
Nicolas Planel
Kyle Mestery
Dane Leblanc
Sumit Naiksatam
Henry Gessau
Don Kehn
Carl Baldwin
Justin Hammond
Anita Kuno

If you are on the above list and can't attend, please email me so I have an 
up-to-date list. If you are planning on attending and I don't have your name 
listed, please email me without delay so that I can add you and you get done 
what you need to get done to attend.

I have the contract for the room and will be signing it and sending it in with 
the room deposit tomorrow. Monty has about 6 more hours to get back to me on 
this, then I just have to go ahead and do it.

Caterer is booked and I will be doing menu selection over the holidays.
I can post the intended, _the intended_ menu once I have decided. Soup, salad, 
sandwich - not glamourous but hopefully filling. If the menu on the day isn't 
the same as what I post, please forgive me. Unforeseen circumstances may crop 
up and I will do my best to get you fed. One person has identified they have a 
specific food request, if there are any more out there, please email me now. 
This covers breakfast, lunch and tea/coffee all day.

Henry Gessau will be social convener for dinners. If you have some restaurant 
suggestions, please contact Henry. Organization of dinners will take place once 
we congregate in our meeting room.

T-shirts: we decided that the code quality of Neutron was a higher priority 
than t-shirts.

One person required a letter of invitation for visa purposes and received it. I 
hope the visa has been granted.

Individual arrangements for hotels seem to be going well from what I have been 
hearing. A few people will be staying at Le Nouvel Hotel; thanks for finding 
that one, Rossella.

Weather: well you got me on this one. This winter is colder than we have had in 
some time and more snow too. So it will be beautiful but bring or buy warm 
clothes. A few suggestions:
* layer your clothes (t-shirt, turtleneck, sweatshirt)
* boots with removable liners (this is my boot of choice:
http://amzn.to/19ddJve) remove the liners at the end of each day to dry them
* warm coat
* toque (wool unless you are allergic) I'm seeing them for $35, don't pay that 
much, you should be able to get something warm for $15 or less
* warm socks (cotton socks and wool over top)- keep your feet dry
* mitts (mitts keep my fingers warmer than gloves)
* scarf
If the weather is making you panic, talk to me and I will see about bringing 
some of my extra accessories with me. The style might not be you but you will 
be warm.

Remember, don't lick the flagpole. It doesn't matter what your friends tell you.

That's all I can think of, if I missed something, email me.

Oh, and best to consider me offline from Jan.2 until the code sprint.
Make sure you have all the information you need prior to that time.

See you in Montreal,
Anita.


On 11/19/2013 11:31 AM, Rossella Sblendido wrote:
> Hi all,
> 
> sorry if this is a bit OT now.
> I contacted some hotels to see if we could get a special price if we 
> book many rooms. According to my research the difference in price is not much.
> Also, as Anita was saying, booking for everybody is more complicated.
> So I decided to book a room for myself.
> I share the name of the hotel, in case you want to stay in the same 
> place 
> http://www.lenouvelhotel.com/.
> It's close to the meeting room and the price is one of the best I have 
> found.
> 
> cheers,
> 
> Rossella
> 
> 
> 
> 
> On Sat, Nov 16, 2013 at 7:39 PM, Anita Kuno  wrote:
> 
>>  On 11/16/2013 01:14 PM, Anita Kuno wrote:
>>
>> On 11/16/2013 12:37 PM, Sean Dague wrote:
>>
>> On 11/15/2013 10:36 AM, Russell Bryant wrote:
>>
>>  On 11/13/2013 11:10 AM, Anita Kuno wrote:
>>
>>  Neutron Tempest code sprint
>>
>> In the second week of January in Montreal, Quebec, Canada there will 
>> be a Neutron Tempest code sprint to improve the status of Neutron 
>> tests

[openstack-dev] [Climate] PTL Candidacy

2013-12-20 Thread Dina Belova
Howdy, guys!

I’d like to announce my candidacy for Climate (Reservation-as-a-Service)
PTL.

I’ve been working with OpenStack for about two years, since Diablo, and have
much experience working with different customers on different projects.
For the last six months I’ve been all about community work - since the really
early ideas of Climate. I proposed the idea of a global reservation capability
for OpenStack - for both virtual and physical resources, not just some of them.
I also created the architecture proposal for Climate, which was discussed with
our community and agreed upon back when Climate was a ‘baby’.

I’m leading the subteam that is working on implementing the virtual
reservation capability. I took significant part in the core features that are
important for every project - the overall structure of the DB layer, the REST
API, the base logic of the internal Climate part, and the plugin mechanism that
allows extensions to be implemented for every resource type to make it
reservable. I’m a top contributor and reviewer for Climate and spend much time
defining its future direction and keeping Climate extensible and relevant to
the current OpenStack ecosystem. I now chair half of our team’s IRC meetings to
keep our two subteams balanced and properly represented, and manage our
Launchpad project to represent every side of it. I was also the initiator of
the Climate presentation during the OpenStack Icehouse summit in Hong Kong this
fall and prepared much of the material for it. I have experience with release
cycles, release management and other infrastructure-specific things.

I think the PTL role is not only about reviews or code writing; it’s more about
presenting the project to the outside world. It’s about endless communication,
both internally with people contributing to Climate and externally, to avoid
overlaps and conflicts between contributors and between Climate and other
projects. I believe a PTL should think not only about Climate itself, but
about its place in the whole OpenStack ecosystem and what it may look like in
the future.

As for Icehouse, the nearest milestone ahead of us, I defined our scope
for the first 0.1 Climate release and believe we will have it in Jan 2014.
We would definitely like to find the appropriate OpenStack Program (or
create a new one) and become incubated within it. Icehouse will be about
close integration with other OpenStack projects to support reservation of
different resources - not only the compute hosts and virtual machines proposed
for our first release, but also volumes, network resources, etc. Finally, we
would like to propose an architecture for integration with Heat and the
reservation of its stacks, as the most complicated virtual resource.
Integration with Horizon is about creating a better way for our users to
communicate with Climate, and we definitely hope to propose a solution for that.

It was a great moment when different companies and people decided to unite
and create this project with its special role and become a part of the great
OpenStack community. I believe we’ll do even more in the future :)

Thanks!
Dina

-

Best regards,

Dina Belova

Software Engineer

Mirantis Inc
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Process for proposing patches attached to launchpad bugs?

2013-12-20 Thread Russell Bryant
On 12/20/2013 09:32 AM, Dolph Mathews wrote:
> In the past, I've been able to get authors of bug fixes attached to
> Launchpad bugs to sign the CLA and submit the patch through gerrit...
> although, in one case it took quite a bit of time (and thankfully it
> wasn't a critical fix or anything).
> 
> This scenario just came up again (example: [1]), so I'm asking
> preemptively... what if the author is unwilling / unable to sign the
> CLA and propose it through gerrit, or it's a critical bug fix and waiting
> on an author to go through the CLA process is undesirable for the
> community? Obviously that's a bit of a fail on our part, but what's the
> most appropriate & expedient way to handle it?
> 
> Can we propose the patch to gerrit ourselves?
> 
> If so, who should appear as the --author of the commit? Who should
> appear as Co-Authored-By, especially when the committer helps the patch
> evolve further in review?
> 
> Alternatively, am I going about this all wrong?
> 
> Thanks!
> 
> [1]: https://bugs.launchpad.net/keystone/+bug/1198171/comments/8

It's not your code, so you really can't propose it without them having
signed the CLA, or propose it as your own.

Ideally, have someone else who hasn't looked at the patch fix the same bug.

From a quick look, it seems likely that this fix is small and straightforward
enough that the clean new implementation is going to end up
looking very similar.  Still, I think it's the right thing to do.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][Oslo] Consuming Notifications in Batches

2013-12-20 Thread Gordon Sim

On 12/20/2013 05:27 PM, Herndon, John Luke wrote:


On Dec 20, 2013, at 8:48 AM, Julien Danjou 
wrote:

Anyway, my main concern here is that I am not very enthusiastic
about using the executor to do that. I wonder if there is not a way
to ask the broker for as many messages as it has, up to a
limit?

You would have 100 messages waiting in the notifications.info
queue, and you would be able to tell to oslo.messaging that you
want to read up to 10 messages at a time. If the underlying
protocol (e.g. AMQP) can support that too, it would be more
efficient too.


Yeah, I like this idea. As far as I can tell, AMQP doesn’t support
grabbing more than a single message at a time, but we could
definitely have the broker store up the batch before sending it
along.


AMQP (in all its versions) allows for a subscription with a 
configurable amount of 'prefetch', which means the broker can send lots 
of messages without waiting for the client to request them one at a time.
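
For illustration, this is what prefetch looks like with a bare pika (0.9.x era) 
client - this is not how the oslo.messaging/kombu code is wired up, and the 
queue name is just an example:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Let the broker push up to 100 unacknowledged messages to this consumer.
channel.basic_qos(prefetch_count=100)

def on_message(ch, method, properties, body):
    # ...buffer the notification into a batch here...
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(on_message, queue='notifications.info')
channel.start_consuming()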


That's not quite the same as the batching I think you are looking for, 
but it does allow the broker to do its own batching. My guess is the 
rabbit driver is already using basic.consume rather than basic.get 
anyway(?), so the broker is free to batch as it sees fit.  (I haven't 
actually dug into the kombu code to verify that however, perhaps someone 
else here can confirm?)


However you still need the client to have some way of batching up the 
messages and then processing them together.



Other protocols may support bulk consumption. My one concern
with this approach is error handling. Currently the executors treat
each notification individually. So let’s say the broker hands 100
messages at a time. When client is done processing the messages, the
broker needs to know if message 25 had an error or not. We would
somehow need to communicate back to the broker which messages failed.
I think this may take some refactoring of executors/dispatchers. What
do you think?


I have some related questions that I haven't yet satisfactorily 
answered. The extra context here may be useful in doing so.


(1) What are the expectations around message delivery guarantees for 
insertion into a store? I.e. if there is a failure, is it ok to get 
duplicate entries for notifications? (I'm assuming losing notifications 
is not acceptable).


(2) What would you want the broker to do with the failed messages? What 
sort of things might fail? Is it related to the message content itself? 
Or is it failures suspected to be of a temporal nature?


(3) How important is ordering ? If a failure causes some notifications 
to be inserted out of order is that a problem at all?


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][Oslo] Consuming Notifications in Batches

2013-12-20 Thread Gordon Sim

On 12/20/2013 07:13 PM, Gordon Sim wrote:

AMQP (in all it's versions) allows for a subscription with a
configurable amount of 'prefetch', which means the broker can send lots
of messages without waiting for the client to request them one at a time.


Just as an aside, the impl_qpid.py driver currently explicitly restricts 
the broker to sending one at a time. Probably not what we want for the 
notifications at any rate (more justifiable perhaps for the 'invoke on 
one of a group of servers' case).


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Blueprint Bind dnsmasq in qrouter- namespace

2013-12-20 Thread Martinx - ジェームズ
Hello Stackers!

I agree with "one namespace approach", if it is better for IPv6 (or even
for IPv4 and for operators).

And also, I think that with IPv6, we must do what is better for IPv6
networks... If things need to be changed, let's do it!

BTW, one namespace with all the required services in it makes more sense
to me too; this way, OpenStack can focus on "namespace = tenant router",
with DHCP, DHCPv6, RA, filtering, IPv4 NAT, etc. in it... Just like a "real
world router"...  OpenStack's approach of presenting the Linux namespace as a
router to tenants is awesome by itself!

Operators can learn the new way of doing things; with IPv6, it can be
simpler! No NAT tables, pure routing, fewer namespaces to deal with, and VXLAN
seems to work better with IPv6 (the nephos6 PDF has some notes about
it)...

I'm wondering about starting millions of tiny Docker Instances, each one
with its own public IPv6 address! This will be epic!   :-D

What about a Floating IP for IPv6?! I think we can provide an "IPv6 Floating
IP" (without any kind of NAT, of course), so this "Floating IPv6" address
will appear *within* the attached Instance, instead of within its namespace
router, as it is with IPv4 (a NAT rule at the namespace router). What do
you guys think about this idea? This way, the namespace router will be used
to configure/deliver more IPv6 addresses for each Instance.

Another idea: the Tenant IPv6 Namespace Router should provide a way (I
think) to deliver a range of IPv6 addresses (if possible), not only 1 per
Instance. This way, an Instance can have hundreds of web sites (Apache,
Nginx), each one with its own public IP (I prefer this Apache setup:
IP-Based ), because
I really like the idea of 1 public IP for each website, but not 1 Instance
for each website (perhaps with Docker it will be okay to have 1 Instance
per website).

Sorry to throw out lots of subjects; I don't want to hijack the thread, but the
namespaces do lots of things anyway...   =P

NOTE: Can I start testing IPv6 tenant networks with Neutron 2014.1~b1 from
Ubuntu 14.04?!

Cheers!
Thiago


On 19 December 2013 23:31, Shixiong Shang wrote:

> Hi, Ian:
>
> The use case brought up by the Comcast team today during the IPv6 sub-team
> meeting actually proved the point I made here, rather than arguing against it.
> If I didn’t explain it clearly in my previous email, here it is.
>
> I was questioning the design with two namespaces and I believe we can use
> a SINGLE namespace as the common container to host two services, i.e. DHCP
> and ROUTING. If your use case needs DHCP instance, but not ROUTING, then
> just launch dnsmasq in THE namespace with qr- interface; If your use case
> needs default GW, then add qg- interface in THE namespace. Whether it is
> called qdhcp or qrouter, I don’t care. It is just a label.
>
> People follow the routine of using it, simply because this is what OpenStack
> offers. But my question is, why? And why NOT design the system in such a
> way that the qg- and qr- interfaces collocate in the same namespace?
>
> Because we intentionally separated the services, the system has become
> clumsy and less efficient. As you can see in the IPv6 case, we are forced to
> deal with two namespaces now. It just doesn’t make any sense.
>
> Shixiong
>
>
>
>
>
>
> On Dec 19, 2013, at 7:27 PM, Ian Wells  wrote:
>
> Per the discussions this evening, we did identify a reason why you might
> need a dhcp namespace for v6 - because networks don't actually have to have
> routers.  It's clear you need an agent in the router namespace for RAs and
> another one in the DHCP namespace for when the network's not connected to a
> router, though.
>
> We've not pinned down all the API details yet, but the plan is to
> implement an RA agent first, responding to subnets that router is attached
> to (which is very close to what Randy and Shixiong have already done).
> --
> Ian.
>
>
> On 19 December 2013 14:01, Randy Tuttle  wrote:
>
>> First, dnsmasq is not being "moved". Instead, it's a different instance
>> for the attached subnet in the qrouter namespace. If it's not in the
>> qrouter namespace, the default gateway (the local router interface) will be
>> the qdhcp namespace interface. That will create a blackhole for
>> traffic from the VM. As you know, routing tables and NAT all occur in qrouter
>> namespace. So we want the RA to contain the local interface as default
>> gateway in qrouter namespace
>>
>> Randy
>>
>> Sent from my iPhone
>>
>> On Dec 19, 2013, at 4:05 AM, Xuhan Peng  wrote:
>>
>> I am reading through the blueprint created by Randy to bind dnsmasq into
>> qrouter- namespace:
>>
>>
>> https://blueprints.launchpad.net/neutron/+spec/dnsmasq-bind-into-qrouter-namespace
>>
>> I don't think I can follow the reason why we need to change the
>> namespace which contains the dnsmasq process and the device it listens on from
>> qdhcp- to qrouter-. Why the original namespace design conflicts with the
>> Router Advertisement

[openstack-dev] Nova and Neutron Hyper-V patches

2013-12-20 Thread Alessandro Pilotti
Hi guys,

We have a couple of bug fix patches that already received a +2 review and have 
been waiting for some time for a second +2/approval.

Could some core reviewers please help in getting them reviewed and possibly merged?

Nova

https://review.openstack.org/#/c/55449/
https://review.openstack.org/#/c/55975/

Neutron

https://review.openstack.org/#/c/57521/


Thanks!!

Alessandro
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Oslo] Pulling glance.store out of glance. Where should it live?

2013-12-20 Thread Doug Hellmann
On Fri, Dec 20, 2013 at 10:42 AM, Flavio Percoco  wrote:

> Greetings,
>
> In the last Glance meeting, it was proposed to pull out glance's
> stores[0] code into its own package. There are a couple of other
> scenarios where using this code is necessary and it could also be
> useful for other consumers outside OpenStack itself.
>
> That being said, it's not clear where this new library should live in:
>
>1) Oslo: it's the place for common code, incubation, although this
>code has been pretty stable in the last release.
>
>2) glance.stores under Image program: As said in #1, the API has
>been pretty stable - and it falls perfectly into what Glance's
>program covers.
>

Either makes sense. If the glance team is going to continue maintaining
the code, it may make more sense to create a repo managed by glance-core.

One note, unless glance is using a namespace package, the name for the
library can't be glance.store, unfortunately. It wouldn't be difficult to
make that sort of structure work, though, so if you like the name it would
just mean some changes to glance and its packaging.
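
For example, a minimal sketch of the setuptools namespace package approach 
(illustrative only, not a worked-out proposal):

# setup.py for a hypothetical standalone glance.store distribution.
from setuptools import setup, find_packages

setup(
    name='glance.store',
    version='0.1.0',
    namespace_packages=['glance'],
    packages=find_packages(),
)

# and glance/__init__.py in both distributions would contain only:
# __import__('pkg_resources').declare_namespace(__name__)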

Doug



>
> [0] https://github.com/openstack/glance/tree/master/glance/store/
>
> Thoughts?
>
> Cheers,
> FF
>
> --
> @flaper87
> Flavio Percoco
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][nova] oslo common.service vs. screen and devstack

2013-12-20 Thread Doug Hellmann
On Fri, Dec 20, 2013 at 11:22 AM, Sean Dague  wrote:

> On 12/20/2013 10:56 AM, Sean Dague wrote:
> > On 12/20/2013 09:59 AM, Sean Dague wrote:
> > 
> >> So as Clint said, SIGHUP is only appropriate to do that *if* the process
> >> is daemonized. If it's in the foreground it's not.
> >>
> >> So that logic needs to be better.
> >
> > This is basically a blocker for adding any upgrade testing from
> > something later than havana. Grenade upstream is still functioning
> > because the service code wasn't merged into nova until after havana was
> cut.
> >
> > However there is a desire to do more interesting upgrade patterns, and
> > without the ability to shutdown nova services on master in the gate,
> > that's going to hit us pretty hard.
> >
> > So I'd like to get this fixed soon. As digging us out of this later is
> > going to be way more expensive.
>
> Work around here for review - https://review.openstack.org/#/c/63444/


That fix has landed in oslo, and I'm working on a patch to copy it into
nova now.
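
For context, the guard is roughly shaped like this (my own illustration of the
heuristic, not the actual patch):

import os
import signal
import sys

def _is_daemon():
    # Heuristic: a daemonized process has no controlling terminal
    # (and has typically been reparented to init).
    try:
        return not sys.stdin.isatty()
    except (AttributeError, ValueError):
        return True

def install_sighup_handler(restart_callback):
    # Only treat SIGHUP as "reload/restart" when daemonized; for a
    # foreground process (e.g. under screen in devstack) leave the
    # default disposition alone so the service simply exits.
    if _is_daemon():
        signal.signal(signal.SIGHUP,
                      lambda signo, frame: restart_callback())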

Doug



>
>
> -Sean
>
> --
> Sean Dague
> Samsung Research America
> s...@dague.net / sean.da...@samsung.com
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][Oslo] Consuming Notifications in Batches

2013-12-20 Thread Dan Dyer

On 12/20/2013 11:18 AM, Herndon, John Luke wrote:

On Dec 20, 2013, at 10:47 AM, Julien Danjou  wrote:


On Fri, Dec 20 2013, Herndon, John Luke wrote:


Yeah, I like this idea. As far as I can tell, AMQP doesn't support grabbing
more than a single message at a time, but we could definitely have the
broker store up the batch before sending it along. Other protocols may
support bulk consumption. My one concern with this approach is error
handling. Currently the executors treat each notification individually. So
let's say the broker hands 100 messages at a time. When client is done
processing the messages, the broker needs to know if message 25 had an error
or not. We would somehow need to communicate back to the broker which
messages failed. I think this may take some refactoring of
executors/dispatchers. What do you think?

Yeah, it definitely needs to change the messaging API a bit to handle
such a case. But in the end that will be a good thing to support such a
case, it being natively supported by the broker or not.

For brokers where it's not possible, it may be simple enough to have a
"get_one_notification_nb()" method that would either return a
notification or None if there's none to read, and would that
consequently have to be _non-blocking_.

So if the transport is smart we write:

  # Return up to max_number_of_notifications_to_read
  notifications =
  transport.get_notificatations(conf.max_number_of_notifications_to_read)
  storage.record(notifications)

Otherwise we do:

  for i in range(conf.max_number_of_notifications_to_read):
  notification = transport.get_one_notification_nb():
  if notification:
  notifications.append(notification)
  else:
  break
   storage.record(notifications)

So it's just about having the right primitive in oslo.messaging, we can
then build on top of that wherever that is.


I think this will work. I was considering putting in a timeout so the broker 
would not send off all of the messages immediately, and implement using 
blocking calls. If the consumer consumes faster than the publishers are 
publishing, this just becomes single-notification batches. So it may be 
beneficial to wait for more messages to arrive before sending off the batch. If 
the batch is full before the timeout is reached, then the batch would be sent 
off.


--
Julien Danjou
/* Free Software hacker * independent consultant
   http://julien.danjou.info */

-
John Herndon
HP Cloud
john.hern...@hp.com





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

A couple of things that I think need to be emphasized here:
1. The mechanism needs to be configurable, so if you are more worried 
about reliability than performance you are able to turn off bulk 
loading.
2. The cache size should also be configurable, so that you can limit 
your exposure to lost messages.
3. While you can have the message queue hold the messages until you 
acknowledge them, it seems like this adds a lot of complexity to the 
interaction. You will need to be able to propagate this information all 
the way back from the storage driver.
4. Any integration that is dependent on a specific configuration of the 
rabbit server is brittle, since we have seen a lot of variation between 
services on this. I would prefer to control the behavior on the 
collection side.


So in general, I would prefer a mechanism that pulls the data in a 
default manner, caches on the collection side based on configuration 
that allows you to determine your own risk level, and then manages 
retries in the storage driver or at the cache controller level.
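
Something along these lines is what I'm picturing (hypothetical option names, 
sketched with oslo.config):

from oslo.config import cfg

batch_opts = [
    cfg.BoolOpt('enable_batching', default=True,
                help='Turn off to dispatch one notification at a time.'),
    cfg.IntOpt('batch_size', default=100,
               help='Maximum number of notifications held before dispatch.'),
    cfg.FloatOpt('batch_timeout', default=5.0,
                 help='Seconds to wait before flushing a partial batch.'),
]

cfg.CONF.register_opts(batch_opts, group='notification')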


Dan Dyer
HP cloud
dan.d...@hp.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][Oslo] Consuming Notifications in Batches

2013-12-20 Thread Doug Hellmann
On Fri, Dec 20, 2013 at 12:15 PM, Herndon, John Luke wrote:

>
> On Dec 20, 2013, at 8:10 AM, Doug Hellmann 
> wrote:
>
>
>
>
> On Thu, Dec 19, 2013 at 6:31 PM, Herndon, John Luke 
> wrote:
>
>> Hi Folks,
>>
>> The Rackspace-HP team has been putting a lot of effort into performance
>> testing event collection in the ceilometer storage drivers[0]. Based on
>> some results of this testing, we would like to support batch consumption
>> of notifications, as it will greatly improve insertion performance. Batch
>> consumption in this case means waiting for a certain number of
>> notifications to arrive before sending to the storage
>> driver.
>>
>> I¹d like to get feedback from the community about this feature, and how we
>> are planning to implement it. Here is what I’m currently thinking:
>>
>> 1) This seems to fit well into oslo.messaging - batching may be a feature
>> that other projects will find useful. After reviewing the changes that
>> sileht has been working on in oslo.messaging, I think the right way to
>> start off is to create a new executor that builds up a batch of
>> notifications, and sends the batch to the dispatcher. We’d also add a
>> timeout, so if a certain amount of time passes and the batch isn’t filled
>> up, the notifications will be dispatched anyway. I’ve started a
>> blueprint for this change and am filling in the details as I go along [1].
>>
>
> IIRC, the executor is meant to differentiate between threading, eventlet,
> other async implementations, or other methods for dealing with the I/O. It
> might be better to implement the batching at the dispatcher level instead.
> That way no matter what I/O processing is in place, the batching will occur.
>
>
> I thought about doing it in the dispatcher. One problem I see is handling
> message acks. It looks like the current executors are built around single
> messages and re-queueing single messages if problems occur. If we build up a
> batch in the dispatcher, either the executor has to wait for the whole
> batch to be committed (which wouldn’t work in the case of the blocking
> executor, or would leave a lot of green threads hanging around in the case
> of the eventlet executor), or the executor has to be modified to allow
> acking to be handled out of band. So, I was thinking it would be cleaner to
> write a new executor that is responsible for acking/requeueing the entire
> batch. Maybe I’m missing something?
>

No, you're right. Were you going to use eventlet again for the new
executor?



>
>
>> 2) In ceilometer, initialize the notification listener with the batch
>> executor instead of the eventlet executor (this should probably be
>> configurable)[2]. We can then send the entire batch of notifications to
>> the storage driver to be processed as events, while maintaining the
>> current method for converting notifications into samples.
>>
>> 3) Error handling becomes more difficult. The executor needs to know if
>> any of the notifications should be requeued. I think the right way to
>> solve this is to return a list of notifications to requeue from the
>> handler. Any better ideas?
>>
>
> Which "handler" do you mean?
>
>
> Ah, sorry - handler is whichever method is registered to receive the batch
> from the dispatcher. In ceilometer’s case, this would be
> process_notifications I think.
>
>
> Doug
>
>
>
>>
>> Is this the right approach to take? I¹m not an oslo.messaging expert, so
>> if there is a proper way to implement this change, I¹m all ears!
>>
>> Thanks, happy holidays!
>> -john
>>
>> 0: https://etherpad.openstack.org/p/ceilometer-data-store-scale-testing
>> 1:
>>
>> https://blueprints.launchpad.net/oslo.messaging/+spec/bulk-consume-messages
>> 2:
>> https://blueprints.launchpad.net/ceilometer/+spec/use-bulk-notification
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> -
> John Herndon
> HP Cloud
> john.hern...@hp.com
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Live upgrades and major rpc versions

2013-12-20 Thread Russell Bryant
Greetings,

Bumping the major rpc versions allows us to drop old backwards
compatibility code.  However, we have to do this in such a way that
doesn't break live upgrades.  We've expected live upgrades for CD to
work for a while, and we're also expecting to be able to support it from
Havana to Icehouse.

The approach for bumping major rpc versions in the past has been like this:

Step 1) https://review.openstack.org/#/c/53944/

Step 2) https://review.openstack.org/#/c/54493/

The approach outlined in the commit message for step 1 discusses how
this approach works with live upgrades in a CD environment.  However,
making changes like this in the middle of a release cycle breaks the
live upgrade from the N-1 to N release.

(Yes, these changes broke Havana->Icehouse live upgrades, but that has
since been resolved with some other patches.  This discussion is how we
avoid breaking it in the future.)

To support N-1 to N live upgrades, I propose that we use the same change
structure, but split it over a release boundary.  A practical example
for the conductor service:

Step 1) https://review.openstack.org/#/c/52218/

This patch adds a new revision of the conductor rpc API, 2.0.  I say we
merge a change like this just before the Icehouse release.  The way it's
written is very low risk to the release since it leaves most important
existing code (1.X) untouched.

Step 2) https://review.openstack.org/#/c/52219/

Once master is open for J development, merge a patch like this one as
step 2.  At this point, we would drop all support for 1.X.  It's no
longer needed because in J we're only trying to support upgrades from
Icehouse, and Icehouse supported 2.0.
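
To illustrate the shape of it (a simplified sketch, not the actual conductor
rpcapi code): during the release where both majors are supported, the client
checks a version cap and falls back to the 1.x call; that fallback branch is
what gets deleted in step 2.

class ConductorAPI(object):
    # Simplified sketch of the client-side pattern only.

    def __init__(self, version_cap='2.0'):
        # The cap would come from configuration so a deployer can pin
        # to the old major while old services are still running.
        self.version_cap = version_cap

    def _can_send_2_0(self):
        return int(self.version_cap.split('.')[0]) >= 2

    def instance_update(self, context, instance_uuid, updates):
        if self._can_send_2_0():
            return self._call(context, 'instance_update', version='2.0',
                              instance_uuid=instance_uuid, updates=updates)
        # 1.x-compatible fallback; dropped once only 2.0 peers remain.
        return self._call(context, 'instance_update', version='1.38',
                          instance_uuid=instance_uuid, updates=updates)

    def _call(self, context, method, version, **kwargs):
        raise NotImplementedError('transport-specific')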

Using this approach I think we can support live upgrades from N-1 to N
while still being able to drop some backwards compatibility code each
release cycle.

Once we get the details worked out, I'd like to capture the process on
the release checklist wiki page for Nova.

https://wiki.openstack.org/wiki/Nova/ReleaseChecklist

Thoughts?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nova and Neutron Hyper-V patches

2013-12-20 Thread Joe Gordon
Please read
http://lists.openstack.org/pipermail/openstack-dev/2013-September/015264.html


On Fri, Dec 20, 2013 at 11:59 AM, Alessandro Pilotti <
apilo...@cloudbasesolutions.com> wrote:

>  Hi guys,
>
>  We have a couple of bug fix patches that already received a +2 review
> waiting since some time for a second +2a.
>
>  Can some core rev please help in getting them reviewed and possibly
> merged?
>
>  Nova
>
>  https://review.openstack.org/#/c/55449/
> https://review.openstack.org/#/c/55975/
>
>  Neutron
>
>  https://review.openstack.org/#/c/57521/
>
>
>  Thanks!!
>
>  Alessandro
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Gerrit review refs now supported by diskimage-builder's source-repositories element

2013-12-20 Thread Roman Podoliaka
Hi Chris,

This is super useful for testing patches on review! Thank you!

Roman

On Fri, Dec 20, 2013 at 7:35 PM, Chris Jones  wrote:
> Hi
>
> As of just now (review 63021) the source-repositories element in
> diskimage-builder can fetch git repos from gerrit reviews.
>
> I figured it'd be worth mentioning here because it's super useful if you
> want to test the code from one or more gerrit reviews, in a TripleO
> environment.
>
> A quick example, let's say you're using our devtest.sh script to build your
> local environment and you want to try out patch set 9 of Yuiko Takada's
> latest nova bug fix, all you need to do is:
>
> export DIB_REPOLOCATION_nova=https://review.openstack.org/openstack/nova
> export DIB_REPOREF_nova=refs/changes/56/53056/9
> ./scripts/devtest.sh
>
> Bam!
>
> (FWIW, the same env vars work if you're calling disk-image-create directly)
>
> --
> Cheers,
>
> Chris
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Live upgrades and major rpc versions

2013-12-20 Thread Dan Smith
> Using this approach I think we can support live upgrades from N-1 to N
> while still being able to drop some backwards compatibility code each
> release cycle.

Agreed. We've been kinda slack about bumping the RPC majors for a while,
which means we end up with a lot of cruft and comments like "#NOTE:
Remove this in Grizzly" still in the code. Major numbers are free, we
should use them more :)

> Thoughts?

I wonder if it's worth also saying something in the checklist about "if
the API hasn't changed in this release, no need to bump". Just so we
don't get to RPC version 23.0 for the console API, or something else
that doesn't change much.

I don't have a feeling for how this would translate to objects, if at
all, so I'll reserve judgment on that for the moment. Mechanically, it's
a very similar thing, but we've not had enough churn in the object APIs
to know if being this proactive is at all warranted.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][Oslo] Consuming Notifications in Batches

2013-12-20 Thread Herndon, John Luke

On Dec 20, 2013, at 12:13 PM, Gordon Sim  wrote:

> On 12/20/2013 05:27 PM, Herndon, John Luke wrote:
>> 
>> On Dec 20, 2013, at 8:48 AM, Julien Danjou 
>> wrote:
>>> Anyway, my main concern here is that I am not very enthusiast
>>> about using the executor to do that. I wonder if there is not a way
>>> to ask the broker to get as many as message as it has up to a
>>> limit?
>>> 
>>> You would have 100 messages waiting in the notifications.info
>>> queue, and you would be able to tell to oslo.messaging that you
>>> want to read up to 10 messages at a time. If the underlying
>>> protocol (e.g. AMQP) can support that too, it would be more
>>> efficient too.
>> 
>> Yeah, I like this idea. As far as I can tell, AMQP doesn’t support
>> grabbing more than a single message at a time, but we could
>> definitely have the broker store up the batch before sending it
>> along.
> 
> AMQP (in all its versions) allows for a subscription with a configurable 
> amount of 'prefetch', which means the broker can send lots of messages 
> without waiting for the client to request them one at a time.
> 
> That's not quite the same as the batching I think you are looking for, but it 
> does allow the broker to do its own batching. My guess is the rabbit driver 
> is already using basic.consume rather than basic.get anyway(?), so the broker 
> is free to batch as it sees fit.  (I haven't actually dug into the kombu code 
> to verify that however, perhaps someone else here can confirm?)
> 
Yeah, that should help out the performance a bit, but we will still need to 
work out the batching logic. I think basic.consume is likely the best way to 
go, and it should be straightforward to implement the timeout mechanism I’m 
looking for in this case. Thanks for the tip :).
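
For what it's worth, a rough sketch of the kind of batching I have in mind,
using kombu directly rather than oslo.messaging (exchange/queue names and
sizes are made up; this is not the proposed implementation):

    import socket
    import time

    from kombu import Connection, Exchange, Queue

    exchange = Exchange('ceilometer', type='topic')            # assumed name
    queue = Queue('notifications.info', exchange,
                  routing_key='notifications.info')

    def consume_batch(conn, batch_size=10, batch_timeout=5.0):
        # Collect up to batch_size messages, or whatever arrives before
        # batch_timeout expires. The prefetch lets the broker push the whole
        # batch without waiting for per-message acks.
        batch = []

        def on_message(body, message):
            batch.append((body, message))

        with conn.Consumer(queue, callbacks=[on_message]) as consumer:
            consumer.qos(prefetch_count=batch_size)
            deadline = time.time() + batch_timeout
            while len(batch) < batch_size:
                remaining = deadline - time.time()
                if remaining <= 0:
                    break
                try:
                    conn.drain_events(timeout=remaining)
                except socket.timeout:
                    break
        return batch

    with Connection('amqp://guest:guest@localhost//') as conn:
        batch = consume_batch(conn)
        # The storage insert would happen here; ack only after it succeeds,
        # so a crash requeues the whole batch instead of losing it.
        for body, message in batch:
            message.ack()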

> However you still need the client to have some way of batching up the 
> messages and then processing them together.
> 
>> Other protocols may support bulk consumption. My one concern
>> with this approach is error handling. Currently the executors treat
>> each notification individually. So let’s say the broker hands 100
>> messages at a time. When client is done processing the messages, the
>> broker needs to know if message 25 had an error or not. We would
>> somehow need to communicate back to the broker which messages failed.
>> I think this may take some refactoring of executors/dispatchers. What
>> do you think?
> 
> I have some related questions that I haven't yet satisfactorily answered. 
> The extra context here may be useful in doing so.
> 
> (1) What are the expectations around message delivery guarantees for 
> insertion into a store? I.e. if there is a failure, is it ok to get duplicate 
> entries for notifications? (I'm assuming losing notifications is not 
> acceptable).
I think there is probably a tolerance for duplicates but you’re right, missing 
a notification is unacceptable. Can anyone weigh in on how big of a deal 
duplicates are for meters? Duplicates aren’t really unique to the batching 
approach, though. If a consumer dies after it’s inserted a message into the 
data store but before the message is acked, the message will be requeued and 
handled by another consumer resulting in a duplicate. 

> (2) What would you want the broker to do with the failed messages? What sort 
> of things might fail? Is it related to the message content itself? Or is it 
> failures suspected to be of a temporal nature?
There will be situations where the message can’t be parsed, and those messages 
can’t just be thrown away. My current thought is that ceilometer could provide 
some sort of mechanism for sending messages that are invalid to an external 
data store (like a file, or a different topic on the amqp server) where a 
living, breathing human can look at them and try to parse out any meaningful 
information. Other errors might be “database not available”, in which case 
re-queuing the message is probably the right way to go. If the consumer process 
crashes, all of the unacked messages need to be requeued and handled by a 
different consumer. Any other error cases?

> (3) How important is ordering ? If a failure causes some notifications to be 
> inserted out of order is that a problem at all?
From an event point of view, I don’t think this is a problem since the events 
have a generated timestamp.

> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-
John Herndon
HP Cloud
john.hern...@hp.com





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][nova] oslo common.service vs. screen and devstack

2013-12-20 Thread Robert Collins
Ok so this is interesting. I think this new feature will have caused
bugs in anything using this idiom:


def foo(bar=CONF):

because that is only evaluated at import time - any later reevaluation
of the config settings won't propagate into code. (This is why we
recently avoided that idiom in Ironic, but part of that review was
looking at where else it was used).
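
A minimal illustration of the pitfall, assuming a hypothetical oslo.config
option named 'timeout' (the option and function names are made up):

    from oslo.config import cfg

    CONF = cfg.CONF
    CONF.register_opt(cfg.IntOpt('timeout', default=10))   # hypothetical opt

    # Unsafe: CONF.timeout is read once, when the module is imported, so a
    # config reload (e.g. on SIGHUP) never reaches callers of this function.
    def wait_unsafe(timeout=CONF.timeout):
        return timeout

    # Safer: defer the lookup to call time so reloaded values are picked up.
    def wait_safe(timeout=None):
        if timeout is None:
            timeout = CONF.timeout
        return timeout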

So - something for reviewers to watch out for - default parameter
values taken from config settings in daemons are no longer 'safe' (they never
really were, but now they're clearly always unsafe).

-Rob

On 21 December 2013 09:09, Doug Hellmann  wrote:
>
>
>
> On Fri, Dec 20, 2013 at 11:22 AM, Sean Dague  wrote:
>>
>> On 12/20/2013 10:56 AM, Sean Dague wrote:
>> > On 12/20/2013 09:59 AM, Sean Dague wrote:
>> > 
>> >> So as Clint said, SIGHUP is only appropriate to do that *if* the
>> >> process
>> >> is daemonized. If it's in the foreground it's not.
>> >>
>> >> So that logic needs to be better.
>> >
>> > This is basically a blocker for adding any upgrade testing from
>> > something later than havana. Grenade upstream is still functioning
>> > because the service code wasn't merged into nova until after havana was
>> > cut.
>> >
>> > However there is a desire to do more interesting upgrade patterns, and
>> > without the ability to shutdown nova services on master in the gate,
>> > that's going to hit us pretty hard.
>> >
>> > So I'd like to get this fixed soon. As digging us out of this later is
>> > going to be way more expensive.
>>
>> Work around here for review - https://review.openstack.org/#/c/63444/
>
>
> That fix has landed in oslo, and I'm working on a patch to copy it into nova
> now.
>
> Doug
>
>
>>
>>
>>
>> -Sean
>>
>> --
>> Sean Dague
>> Samsung Research America
>> s...@dague.net / sean.da...@samsung.com
>> http://dague.net
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Dropping connectivity from guesagent to Trove back-end

2013-12-20 Thread Denis Makogon
All points are good, but the RootHistory object is now being created by the
guestagent during the mysql datastore prepare call. Delegating all
responsibilities to Conductor would give us the following benefits:
1. Breaking guest -> back-end connectivity.
2. Keeping taskmanager generic (as already said, we need to take into account
non-ACL datastores).
3. Letting conductor do its job - executing tasks which require
back-end connectivity.

My current patch does not give us points [1] and [2] since trove cannot
manage its own extensions.
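
For illustration only, a rough sketch of what routing the root-history write
through Conductor could look like; the class and method names below are
assumptions, not Trove's actual interfaces:

    class ConductorGuestAPI(object):
        """Guest-side proxy: a fire-and-forget RPC cast, no DB access."""

        def __init__(self, rpc_cast):
            self._cast = rpc_cast            # injected RPC cast callable

        def report_root(self, context, instance_id, user):
            self._cast(context, 'report_root',
                       instance_id=instance_id, user=user)

    class ConductorManager(object):
        """Server side: the only component that touches the Trove database."""

        def __init__(self, root_history_model):
            self._root_history = root_history_model   # e.g. RootHistory

        def report_root(self, context, instance_id, user):
            self._root_history.create(context, instance_id, user)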


2013/12/20 Ed Cranford 

> Fair enough, original scope for conductor was just heartbeats
> anyway--backups were more of an added bonus if anything to reduce that db
> dependency.
> Denis' patch at present just makes taskmanager take care of it, and it's
> simple enough to do that way.
>
>
> On Fri, Dec 20, 2013 at 11:16 AM, Tim Simpson 
> wrote:
>
>>  >> whose proposed future phases include turning conductor into a source
>> of truth for trove to ask about instances, and then using its own datastore
>> separate from the host db anyway.
>>
>> IIRC this was to support such ideas as storing the heart beat or service
>> status somewhere besides the Trove database. So let's say that instead of
>> having to constantly update the heart beat table from the guest it was
>> possible to ask Rabbit when the last time the guest tried to receive a
>> message and use that as the heartbeat timestamp instead. This is what
>> Conductor was meant to support - the ability to not force a guest to have
>> to send back heart beat info to a database if there was an RPC technology
>> dependent way to get that info which Conductor knew about.
>>
>>  I don't agree with the idea that all information on a guest should live
>> only in Conductor. Under this logic we'd have no backup information in the
>> Trove database we could use when listing backups and would have to call
>> Conductor instead.  I don't see what that buys us.
>>
>>  Similarly with the RootHistory object, it lives in the database right
>> now which works fine because anytime Root is enabled it's done by Trove
>> code which has access to that database anyway. Moving root history to
>> Conductor will complicate things without giving us any benefit.
>>
>>  Thanks,
>>
>>  Tim
>>
>>   --
>> *From:* Ed Cranford [ed.cranf...@gmail.com]
>> *Sent:* Friday, December 20, 2013 10:13 AM
>>
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [trove] Dropping connectivity from
>> guesagent to Trove back-end
>>
>>   Conductor was the first phase of
>> https://wiki.openstack.org/wiki/Trove/guest_agent_communication whose
>> proposed future phases include turning conductor into a source of truth for
>> trove to ask about instances, and then using its own datastore separate
>> from the host db anyway.
>>  The purpose of the root history table is to keep information in a place
>> even an instance with root cannot reach, so we essentially have a warranty
>> seal on the instance. The thinking at was if that status was kept on the
>> instance, intrepid users could potentially enable root, muck about, and
>> then manually remove root. By putting that row in a table outside the
>> instance there's no question.
>>
>> Phase 2 of the document above is to make conductor the source of truth
>> for information about an instance, so taskman will start asking conductor
>> instead of fetching the database information directly. So I think the next
>> step for removing this is to give conductor a method taskman can call to
>> get the root status from the extant table.
>>
>>  Phase 3 then seeks to give conductor its own datastore away from the
>> original database; I think that's the right time to migrate the root
>> history table, too.
>>
>>
>> On Fri, Dec 20, 2013 at 9:44 AM, Denis Makogon wrote:
>>
>>>  Unfortunately, Trove cannot manage its own extensions, so if, for
>>> example, I provisioned a Cassandra instance, it would still be possible
>>> to check whether root is enabled.
>>> Proof:
>>> https://github.com/openstack/trove/blob/master/trove/extensions/mysql/service.py
>>>  There are no checks for datastore_type; the service just loads the root
>>> model. Since my patch creates the root model, the next API call (root
>>> check) will load this model.
>>>
>>>
>>>
>>> 2013/12/20 Tim Simpson 
>>>
  Because the ability to check if root is enabled is in an extension
 which would not be in effect for a datastore with no ACL support, the user
 would not be able to see that the marker for root enabled was set in the
 Trove infrastructure database either way.

  By the way- I double checked the code, and I was wrong- the guest
 agent was *not* telling the database to update the root enabled flag.
 Instead, the API extension had been updating the database all along after
 contacting the guest. Sorry for making this thread more confusing.

  It seems like if you follow my one (

Re: [openstack-dev] [Neutron] Neutron Tempest code sprint - 2nd week of January, Montreal, QC, Canada

2013-12-20 Thread Mark McClain
Edgar-

I’m a bit concerned about Fawad joining the sprint.  He’s a new contributor who 
has never landed a patch in Neutron or Tempest.  Closing the testing gaps with 
experienced devs is the goal of the Montreal sprint and I do not think we’ll 
have manpower to onboard new contributors (3 days is a really short time).  
While I’m happy to welcome more workers there, I do not want to waste his time 
or PLUMgrid’s resources.

mark

On Dec 19, 2013, at 7:02 PM, Edgar Magana  wrote:

> Anita,
> 
> Fawad and Myself will be also attending.
> 
> BTW. Fawad will require an invitation letter for visa. He will email you 
> directly with that request.
> 
> Thanks,
> 
> Edgar
> 
> 
> On Wed, Dec 18, 2013 at 1:17 PM, Anita Kuno  wrote:
> Okay time for a recap.
> 
> What: Neutron Tempest code sprint
> Where: Montreal, QC, Canada
> When: January 15, 16, 17 2014
> Location: I am about to sign the contract for Salle du Parc at 3625 Parc
> avenue, a room in a residence of McGill University.
> Time: 9am - 5am
> 
> I am expecting to see the following people in Montreal in January:
> Mark McClain
> Salvatore Orlando
> Sean Dague
> Matt Trenish
> Jay Pipes
> Sukhdev Kapur
> Miguel Lavelle
> Oleg Bondarev
> Rossella Sblendido
> Emilien Macchi
> Sylvain Afchain
> Nicolas Planel
> Kyle Mestery
> Dane Leblanc
> Sumit Naiksatam
> Henry Gessau
> Don Kehn
> Carl Baldwin
> Justin Hammond
> Anita Kuno
> 
> If you are on the above list and can't attend, please email me so I have
> an up-to-date list. If you are planning on attending and I don't have
> your name listed, please email me without delay so that I can add you
> and you get done what you need to get done to attend.
> 
> I have the contract for the room and will be signing it and sending it
> in with the room deposit tomorrow. Monty has about 6 more hours to get
> back to me on this, then I just have to go ahead and do it.
> 
> Caterer is booked and I will be doing menu selection over the holidays.
> I can post the intended, _the intended_ menu once I have decided. Soup,
> salad, sandwich - not glamourous but hopefully filling. If the menu on
> the day isn't the same as what I post, please forgive me. Unforeseen
> circumstances may crop up and I will do my best to get you fed. One
> person has identified they have a specific food request, if there are
> any more out there, please email me now. This covers breakfast, lunch
> and tea/coffee all day.
> 
> Henry Gessau will be social convener for dinners. If you have some
> restaurant suggestions, please contact Henry. Organization of dinners
> will take place once we congregate in our meeting room.
> 
> T-shirts: we decided that the code quality of Neutron was a higher
> priority than t-shirts.
> 
> One person required a letter of invitation for visa purposes and
> received it. I hope the visa has been granted.
> 
> Individuals arrangements for hotels seem to be going well from what I
> have been hearing. A few people will be staying at Le Nouvel Hotel,
> thanks for finding that one, Rosella.
> 
> Weather: well you got me on this one. This winter is colder than we have
> had in some time and more snow too. So it will be beautiful but bring or
> buy warm clothes. A few suggestions:
> * layer your clothes (t-shirt, turtleneck, sweatshirt)
> * boots with removable liners (this is my boot of choice:
> http://amzn.to/19ddJve) remove the liners at the end of each day to dry them
> * warm coat
> * toque (wool unless you are allergic) I'm seeing them for $35, don't
> pay that much, you should be able to get something warm for $15 or less
> * warm socks (cotton socks and wool over top)- keep your feet dry
> * mitts (mitts keep my fingers warmer than gloves)
> * scarf
> If the weather is making you panic, talk to me and I will see about
> bringing some of my extra accessories with me. The style might not be
> you but you will be warm.
> 
> Remember, don't lick the flagpole. It doesn't matter what your friends
> tell you.
> 
> That's all I can think of, if I missed something, email me.
> 
> Oh, and best to consider me offline from Jan.2 until the code sprint.
> Make sure you have all the information you need prior to that time.
> 
> See you in Montreal,
> Anita.
> 
> 
> On 11/19/2013 11:31 AM, Rossella Sblendido wrote:
> > Hi all,
> >
> > sorry if this is a bit OT now.
> > I contacted some hotels to see if we could get a special price if we book
> > many rooms. According to my research the difference in price is not much.
> > Also, as Anita was saying, booking for everybody is more complicated.
> > So I decided to book a room for myself.
> > I share the name of the hotel, in case you want to stay in the same place
> > http://www.lenouvelhotel.com/

Re: [openstack-dev] [Climate] PTL Candidacy

2013-12-20 Thread Sergey Lukjanov
Confirmed.

https://wiki.openstack.org/wiki/Climate/PTL_Elections_Icehouse#Candidates


On Fri, Dec 20, 2013 at 11:01 PM, Dina Belova  wrote:

> Howdy, guys!
>
> I’d like to announce my candidacy for Climate (Reservation-as-a-Service)
> PTL.
>
> I’ve been working with OpenStack for about two years, since Diablo, and have
> a lot of experience working with different customers on different projects.
> For the last six months I’ve been focused entirely on community work - since
> the really early ideas of Climate. I proposed the idea of a global
> reservation capability for OpenStack - for both virtual and physical
> resources, not just some of them. I also created the architecture proposal
> for Climate, which was discussed with our community and agreed on back when
> Climate was still a ‘baby’.
>
> I’m leading the subteam that is working on implementing the virtual
> reservations capability. I participated significantly in core features that
> are important for every project - the overall structure of the DB layer, the
> REST API, the base logic for the internal Climate part, and the plugin
> mechanism that allows extensions to be implemented for every resource type
> to make it reservable. I’m a top contributor and reviewer for Climate and
> spend much time defining its future directions of development and keeping
> Climate extensible and relevant to the current OpenStack ecosystem. I now
> chair half of our team’s IRC meetings to keep our two subteams balanced and
> equally represented, and I manage our Launchpad project to represent every
> side of it. I was also the initiator of the Climate presentation during the
> OpenStack Icehouse summit in Hong Kong this fall and prepared much of the
> material for it. I have experience with release cycles, release management
> and other infrastructure-specific things.
>
> I think being PTL is not only about reviews or writing code; it’s more about
> presenting the project to the outside world. It’s about endless
> communication, both internally with people contributing to Climate and
> externally to avoid overlaps and conflicts between contributors and between
> Climate and other projects. I believe a PTL should think not only about
> Climate itself, but about its place in the whole OpenStack ecosystem and
> what it may look like in the future.
>
> As for Icehouse, the nearest milestone ahead of us, I defined our scope
> for the first 0.1 Climate release and believe we will ship it in January
> 2014. We would definitely like to find the appropriate OpenStack Program (or
> create a new one) and become incubated within it. Icehouse will be about
> close integration with other OpenStack projects to support reservation of
> different resources - not only the compute hosts and virtual machines
> proposed for our first release, but also volumes, network resources, etc.
> Finally, we would like to propose an architecture for integration with Heat
> and the reservation of its stacks, as the most complicated virtual resource.
> Integration with Horizon is also about creating a better way for our users
> to interact with Climate, and we definitely hope to propose a solution for
> that.
>
> It was a great moment when different companies and people decided to unite
> and create this project with its special role and become a part of the great
> OpenStack community. I believe we’ll do even more in the future :)
>
> Thanks!
> Dina
>
> -
>
> Best regards,
>
> Dina Belova
>
> Software Engineer
>
> Mirantis Inc
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][Oslo] Consuming Notifications in Batches

2013-12-20 Thread Julien Danjou
On Fri, Dec 20 2013, Herndon, John Luke wrote:

> I think there is probably a tolerance for duplicates but you’re right,
> missing a notification is unacceptable. Can anyone weigh in on how big of a
> deal duplicates are for meters? Duplicates aren’t really unique to the
> batching approach, though. If a consumer dies after it’s inserted a message
> into the data store but before the message is acked, the message will be
> requeued and handled by another consumer resulting in a duplicate.

Duplicates can be a problem for metering: if you see the same event twice,
it's possible you will think it happened twice.

As for event storage, it won't be a problem if you use a good storage
driver that can enforce a unique constraint; you'll just drop the duplicate
and log the fact that this should not have happened, or something like that.
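
As a sketch (assuming a SQL backend and that events carry a message_id), a
unique constraint is enough to let the driver swallow a redelivery; the table
and column names below are illustrative, not the actual Ceilometer schema:

    from sqlalchemy import (Column, DateTime, Integer, String,
                            UniqueConstraint)
    from sqlalchemy.exc import IntegrityError
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Event(Base):
        __tablename__ = 'event'
        __table_args__ = (UniqueConstraint('message_id'),)

        id = Column(Integer, primary_key=True)
        message_id = Column(String(64), nullable=False)
        event_type = Column(String(255), nullable=False)
        generated = Column(DateTime, nullable=False)

    def record_event(session, event):
        # A redelivered notification violates the unique constraint; we roll
        # back, log it, and move on instead of storing a duplicate row.
        try:
            session.add(event)
            session.commit()
        except IntegrityError:
            session.rollback()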

> There will be situations where the message can’t be parsed, and those
> messages can’t just be thrown away. My current thought is that ceilometer
> could provide some sort of mechanism for sending messages that are invalid
> to an external data store (like a file, or a different topic on the amqp
> server) where a living, breathing human can look at them and try to parse
> out any meaningful information. Other errors might be “database not
> available”, in which case re-queuing the message is probably the right way to
> go. If the consumer process crashes, all of the unacked messages need to be
> requeued and handled by a different consumer. Any other error cases?

Sounds good to me.

-- 
Julien Danjou
# Free Software hacker # independent consultant
# http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][Oslo] Consuming Notifications in Batches

2013-12-20 Thread Herndon, John Luke

On Dec 20, 2013, at 1:12 PM, Dan Dyer  wrote:

> On 12/20/2013 11:18 AM, Herndon, John Luke wrote:
>> On Dec 20, 2013, at 10:47 AM, Julien Danjou  wrote:
>> 
>>> On Fri, Dec 20 2013, Herndon, John Luke wrote:
>>> 
 Yeah, I like this idea. As far as I can tell, AMQP doesn’t support grabbing
 more than a single message at a time, but we could definitely have the
 broker store up the batch before sending it along. Other protocols may
 support bulk consumption. My one concern with this approach is error
 handling. Currently the executors treat each notification individually. So
 let’s say the broker hands 100 messages at a time. When client is done
 processing the messages, the broker needs to know if message 25 had an 
 error
 or not. We would somehow need to communicate back to the broker which
 messages failed. I think this may take some refactoring of
 executors/dispatchers. What do you think?
>>> Yeah, it definitely needs to change the messaging API a bit to handle
>>> such a case. But in the end that will be a good thing to support such a
>>> case, it being natively supported by the broker or not.
>>> 
>>> For brokers where it's not possible, it may be simple enough to have a
>>> "get_one_notification_nb()" method that would either return a
>>> notification or None if there's none to read, and would that
>>> consequently have to be _non-blocking_.
>>> 
>>> So if the transport is smart we write:
>>> 
>>>  # Return up to max_number_of_notifications_to_read
>>>  notifications =
>>>  transport.get_notificatations(conf.max_number_of_notifications_to_read)
>>>  storage.record(notifications)
>>> 
>>> Otherwise we do:
>>> 
>>>  for i in range(conf.max_number_of_notifications_to_read):
>>>  notification = transport.get_one_notification_nb():
>>>  if notification:
>>>  notifications.append(notification)
>>>  else:
>>>  break
>>>   storage.record(notifications)
>>> 
>>> So it's just about having the right primitive in oslo.messaging, we can
>>> then build on top of that wherever that is.
>>> 
>> I think this will work. I was considering putting in a timeout so the broker 
>> would not send off all of the messages immediately, and implement using 
>> blocking calls. If the consumer consumes faster than the publishers are 
>> publishing, this just becomes single-notification batches. So it may be 
>> beneficial to wait for more messages to arrive before sending off the batch. 
>> If the batch is full before the timeout is reached, then the batch would be 
>> sent off.
>> 
>>> -- 
>>> Julien Danjou
>>> /* Free Software hacker * independent consultant
>>>   http://julien.danjou.info */
>> -
>> John Herndon
>> HP Cloud
>> john.hern...@hp.com
>> 
>> 
>> 
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> A couple of things that I think need to be emphasized here:
> 1. the mechanism needs to be configurable, so if you are more worried about 
> reliability than performance you would be able to turn off bulk loading
Definitely will be configurable, but I don’t think batching is going to be any 
less reliable than individual inserts. Can you expand on what is concerning?
> 2. the caching size should also be configurable, so that we can limit your 
> exposure to lost messages
Agreed.
> 3. while you can have the message queue hold the messages until you 
> acknowledge them, it seems like this adds a lot of complexity to the 
> interaction. you will need to be able to propagate this information all the 
> way back from the storage driver.
This is actually a pretty standard use case for AMQP, we have done it several 
times on in-house projects. The basic.ack call lets you acknowledge a whole 
batch of messages at once. Yes, we do have to figure out how to propagate the 
error cases back up to the broker, but I don’t think it will be so complicated 
that it’s not worth doing.
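
As a sketch of what I mean (raw pika rather than oslo.messaging; the queue
name, batch size and store() helper are made up): acknowledging the last
delivery tag with multiple=True confirms everything delivered up to that
point in a single call.

    import pika

    def store(bodies):
        """Placeholder for the storage-driver insert (hypothetical)."""
        pass

    connection = pika.BlockingConnection(
        pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    channel.basic_qos(prefetch_count=100)

    batch = []
    for method, properties, body in channel.consume('notifications.info'):
        batch.append((method.delivery_tag, body))
        if len(batch) == 100:
            store([b for _, b in batch])
            # Cumulative ack: one call acknowledges the whole batch.
            channel.basic_ack(delivery_tag=batch[-1][0], multiple=True)
            batch = []
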
> 4. any integration that is dependent on a specific configuration on the 
> rabbit server is brittle, since we have seen a lot of variation between 
> services on this. I would prefer to control the behavior on the collection 
> side
Hm, I don’t understand…?
> So in general, I would prefer a mechanism that pulls the data in a default 
> manner, caches on the collection side based on configuration that allows you 
> to determine your own risk level and then manager retries in the storage 
> driver or at the cache controller level.
If you’re caching on the collector and the collector dies, then you’ve lost the 
whole batch of messages.  Then you have to invent some way of persisting the 
messages to disk until they've been committed to the db and removing them 
afterwards. We originally talked about implementing a batching layer in the 
storage driver, but dragondm pointed out that the message queue is already 
hanging on to t

Re: [openstack-dev] [Ceilometer][Oslo] Consuming Notifications in Batches

2013-12-20 Thread Julien Danjou
On Fri, Dec 20 2013, Herndon, John Luke wrote:

> I think this will work. I was considering putting in a timeout so the broker
> would not send off all of the messages immediately, and implement using
> blocking calls. If the consumer consumes faster than the publishers are
> publishing, this just becomes single-notification batches. So it may be
> beneficial to wait for more messages to arrive before sending off the batch.
> If the batch is full before the timeout is reached, then the batch would be
> sent off.

I don't think you want to wait for other messages if you only picked one,
even with a timeout. It's better to record this one right away; while
you do that, messages will potentially queue up in the queue, so on your next
call you'll pick more than one anyway.

Otherwise, yeah that should work fine.

-- 
Julien Danjou
# Free Software hacker # independent consultant
# http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] [Metadatarepository] Metadata repository initiative status

2013-12-20 Thread Georgy Okrokvertskhov
Hi,


The metadata repository meeting occurred this Tuesday in the #openstack-glance
channel. The main item discussed was an API for the new metadata
functions and where this API should appear. During the discussion it was
agreed that the main functionality will be storage for different
objects and the metadata associated with them. Initially, all objects will
have a specific type which defines specific attributes in the metadata. There
will also be a common set of attributes for all objects stored in Glance.

During the discussion there was input from different projects (Heat,
Murano, Solum) on what kind of objects should be stored for each project and
what kind of functionality is minimally required.

Here is a list of potential objects:
Heat:
  - HOT template
    Potential Attributes: version, tag, keywords, etc.
    Required Features:
      - Object and metadata versioning
      - Search by specific attribute\attributes value

Murano:
  - Murano files
      - UI definition
      - workflow definition
      - HOT templates
      - Scripts
    Required Features:
      - Object and metadata versioning
      - Search by specific attribute

Solum:
  - Solum Language Packs
    Potential Attributes: name, build_toolchain, OS, language platform, versions
    Required Features:
      - Object and metadata versioning
      - Search by specific attribute


After a discussion it was concluded that the best way would be to add a new
API endpoint, /artifacts. This endpoint will be used to work with an object’s
common attributes, while type-specific attributes and methods will be
accessible through the /artifact/object-type endpoint. The /artifacts endpoint
will be used for filtering objects by searching for specific attribute
values. Type-specific attribute search should also be possible via the
/artifacts endpoint.
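
Purely as an illustration of the proposed behaviour (the API does not exist
yet, so the URLs, filters and response shape below are assumptions, not an
agreed design), a client-side sketch might look like:

    import requests

    GLANCE = 'http://glance.example.com:9292/v2'      # assumed endpoint
    HEADERS = {'X-Auth-Token': 'TOKEN'}

    # Filter on common attributes across all artifact types...
    common = requests.get(GLANCE + '/artifacts', headers=HEADERS,
                          params={'name': 'my-template', 'version': '1.0'})

    # ...or use a type-specific endpoint for type-specific attributes.
    hot = requests.get(GLANCE + '/artifacts/hot_template', headers=HEADERS,
                       params={'keywords': 'autoscaling'})
    print(common.json(), hot.json())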

For each object type there will be a separate table for attributes in a
database.

It is currently expected that the metadata repository API will be implemented
inside Glance within the v2 API, without changing the existing API for images.
In the future, the v3 Glance API can fold the image-related API into the
common artifacts API.

The new artifacts API will reuse as much as possible of the existing Glance
functionality. Most of the stored objects will be non-binary, so it is
necessary to check how the Glance code handles this.

Action items:

All project teams should start submitting BPs for new functionality in Glance.
These BPs will be discussed on the ML and at the Glance weekly meetings.

Related Resources:

Etherpad for Artifacts API design:
https://etherpad.openstack.org/p/MetadataRepository-ArtifactRepositoryAPI

Heat templates repo BP for Heat:

https://blueprints.launchpad.net/heat/+spec/heat-template-repo

Initial API discussion Etherpad:

https://etherpad.openstack.org/p/MetadataRepository-API



Thanks
Georgy
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] cliff moved to launchpad and stackforge

2013-12-20 Thread Doug Hellmann
The cliff library source repository is now hosted on stackforge (
https://git.openstack.org/cgit/stackforge/cliff/) and the bug tracker has
moved to launchpad (https://launchpad.net/python-cliff).

Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack][cinder] Driver certification ideas

2013-12-20 Thread John Griffith
Hey Everyone,

So we merged the super simple driver cert test script into devstack a
while back.  For those that aren't familiar you can check it out here
[1].  First iteration of this is simply a do it yourself config and
run that goes through the same volume-tests that every cinder patch
runs through the gate.

There's definitely room for growth here but this seems like a
reasonable first step.  The remaining question here is how do we want
to use this?  I've made a couple of suggestions that I'd like to review
and get some feedback on.  To be clear this can obviously evolve over
time, but I'd like to start somewhat simple, try it out and build off
of it depending on how things go.  So with that here's a couple of
options I've been considering:

1. File a bug in launchpad:
This bug would be for tracking purposes, it would be something like
"no cert results available for driver-X".  This would require that the
driver maintainer download/install devstack, configure their driver
and backend and then run the supplied script.

The next question is what to do with the results, there are some options here:
  a. Take the resultant tgz file and post it into the bug report as an
attachment.  Assuming everything passes the bug can then be marked as
closed/resolved.
  b. Create a repo (or even a directory in the Cinder tree) that
includes results files.  That way the bug is logged and a gerrit
commit referencing the bug id is submitted and reviewed very similar
to how we handle source changes.

Option 'a' is path of least resistance, however it becomes a very
manual process and it's somewhat ugly.  Option 'b' fits more with how
we operate anyway, and provides some automation, and it also leaves a
record of the cert process in the tree that makes visibility and
tracking much easier.



2.  Create a web/wiki page specifically for this information:
This would basically be a matrix of the drivers, and the current status
of the cert results for the current iteration.  It would be something
like a row for every driver in the tree and a column for "last cycle"
and "current cycle".  We'd basically set it up so that the
"current-cycle" entries are all listed as "not submitted" after the
milestone is cut.  The current entries in that column would roll back
to the "last-cycle" column.  Then the driver maintainer could
run/update the matrix at any time during that cycle.

The only thing with this is again it's very manual in terms of
tracking, might be a bit confusing (this may make perfect sense to me,
but seems like gibberish to others :)), and we'd want to have a
repository to store the results files for people to reference.

I'm open to ideas/suggestions here keeping in mind that the initial
purpose is to provide publicly viewable information as to the
integration status of the drivers in the Cinder project.  This would
help people building OpenStack clouds to make sure that the backend
devices they may be choosing actually implement all of the base
features and that they actually work.  Vendors can of course choose
not to participate, that just tells consumers "beware, vendor-a
doesn't necessarily care all that much, or doesn't have time to test
this".

Anyway, hopefully this makes sense, if more clarification is needed I
can try and clean up my descriptions a bit.

Thanks,
John

[1]: 
https://github.com/openstack-dev/devstack/blob/master/driver_certs/cinder_driver_cert.sh

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] packet forwarding

2013-12-20 Thread Pedro Roque Marques
There are at least 3 types of solutions I'm aware of:
1) Using VLANs and physical or virtual-machine appliances that route packets 
between VLANs.
Tutorial:
http://developer.rackspace.com/blog/neutron-networking-vlan-provider-networks.html

2) Using an L2 overlay and virtual machines that route packets between VLANs. 
(e.g. OVS + neutron virtual-router)
Tutorial:
http://developer.rackspace.com/blog/neutron-networking-simple-flat-network.html

3) Using an L3 overlay that implements a distributed router. (e.g. OpenContrail)
Unfortunately I don't know of a tutorial that is as nice as the ones above... 
but you can glean some useful information from:
https://github.com/dsetia/devstack/blob/master/contrail/README and
http://pedrormarques.wordpress.com/2013/11/14/using-devstack-plus-opencontrail/

  Pedro.

On Dec 20, 2013, at 8:50 AM, Abbass MAROUNI  
wrote:

> Hello,
> 
> Is it true that traffic from one OpenStack virtual network to another has
> to pass through an OpenStack router? (using Open vSwitch as the L2).
> 
> I'm trying to use a VM as a router between 2 OpenStack virtual networks but 
> for some reason I'm not able to.
> 
> Appreciate any insights,
> 
> 
> Best regards,
> Abbass 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

