Re: [openstack-dev] How to get VXLAN Endpoint IP without agent

2013-10-21 Thread P Balaji-B37839
Though we can configure this in the nova.conf file, we have to make sure that the
tunnel interface IP address of every compute node matches what is in nova.conf.
It is still painful to make sure that the IP addresses are configured properly.

We want to come up with a blueprint to avoid this manual configuration: since the
interfaces are configured through a DHCP server, the compute node tunnel IP address
would be stored in Neutron and retrieved from there.

If anyone has deployed a VXLAN setup with Neutron, please share your experience
with the VXLAN 'local_ip' configuration of the compute node.

Regards,
Balaji.P

From: Ravi Chunduru [mailto:ravi...@gmail.com]
Sent: Thursday, October 17, 2013 12:41 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] How to get VXLAN Endpoint IP without agent

I guess the intention is to make VXLAN work without the Quantum agent. That means
you are using an external OpenFlow controller to manage the OVS switches.

In such a case, there is a need to specifically get the compute node IP from
the VM data interface network (and not the management or OpenStack network
interface IP).

I can think of two solutions:
1) There must be an onboarding process for each compute node that can inform
your controller of the compute node's local_ip.
2) Make sure OVS uses the VM data interface network to connect to the controller.

I don't prefer 2, as OVS management traffic should not be on the VM data network.

For solution #1, it's a pain to load the compute node's local IP onto the OpenFlow
controller, but it can be automated using Puppet etc.

(I have checked the nova database for the compute node, but it stores the
management network interface IP, not the data network IP. That makes sense, as
the endpoints are on the management network.)

Alternatively, we can propose a blueprint to include a local_ip option in
nova.conf on the compute node, since it is needed by VXLAN-based overlay
networks.
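In nova.conf, such an option might look like this (purely illustrative -- the option name below is hypothetical, part of the proposal rather than an existing setting):

```ini
[DEFAULT]
# Hypothetical option: IP address on the VM data network to advertise
# as this compute node's VXLAN tunnel endpoint
vxlan_local_ip = 192.0.2.11
```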

Hope it helps.

Thanks,
-Ravi.

On Tue, Oct 15, 2013 at 3:45 AM, B Veera-B37207 
mailto:b37...@freescale.com>> wrote:
Hi,

The VXLAN endpoint IP is configured in
'/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini' as 'local_ip'.
When the Open vSwitch agent starts, this local IP is populated into the Neutron
database.
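For reference, the relevant fragment looks roughly like this (section and option names as in the Havana-era OVS plugin, and the IP is illustrative; check the sample config shipped with your release):

```ini
[OVS]
enable_tunneling = True
local_ip = 192.0.2.10

[AGENT]
tunnel_types = vxlan
```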

Is there any way to get local_ip of compute node without any agent running?

Thanks in advance.

Regards,
Veera.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Ravi


Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-21 Thread Thomas Goirand
On 10/22/2013 02:12 AM, Jeremy Stanley wrote:
> On 2013-10-22 01:45:13 +0800 (+0800), Thomas Goirand wrote:
> [...]
>> The main problem I was facing was that troveclient has a few files
>> stating that HP was the sole copyright holder, when it clearly was
>> not (since I have discussed a bit with some the dev team in
>> Portland, IIRC some of them are from Rackspace...).
> [...]
>> So, for me, the clean and easy way to fix this problem is to have a
>> simple copyright-holder.txt file, containing a list of company or
>> individuals. It doesn't really matter if some entities forget to write
>> themselves in. After all, that'd be their fault, no?
> [...]
> 
> I don't really see the difference here at all. You propose going
> from...
> 
> A) copyright claims in headers of files, which contributors
> might forget to update
> 
> ...to...
> 
> B) copyright claims in one file, which contributors might also
> forget to update
> 
> I don't understand how adding a file full of duplicate information
> to each project is going to solve your actual concern.

My idea was that in case B it's easier to fix/patch a single
file than lots of them, and also that the existence of the file itself
is an invitation for copyright holders to add themselves, while a
copyright header in the source code isn't that explicit.

Though I agree, of course, that in both cases contributors might
forget to add themselves...

Thomas




Re: [openstack-dev] Gerrit tools

2013-10-21 Thread Flavio Percoco

On 21/10/13 15:55 +, Joshua Harlow wrote:

I am using gerritlib in the curses ui; seems to work nicely.

The only thing I don't like so much is that, from what I can tell, it silences
connection and other errors.

See _run() method in 
https://github.com/openstack-infra/gerritlib/blob/master/gerritlib/gerrit.py
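For illustration, a thin wrapper can surface those errors to the caller instead of letting them be silently logged (a sketch only; `client` stands in for a gerritlib `Gerrit` instance and the `bulk_query` usage is an assumption to verify against gerritlib itself):

```python
# Sketch: re-raise connection/query errors from a gerritlib-style client
# instead of swallowing them the way _run() does.

def safe_query(client, query):
    """Run a Gerrit query, surfacing any failure to the caller."""
    try:
        return client.bulk_query(query)
    except Exception as exc:
        raise RuntimeError("gerrit query %r failed: %s" % (query, exc)) from exc

# Demo with a stub client whose connection always fails:
class _BrokenClient:
    def bulk_query(self, query):
        raise OSError("ssh connection refused")

try:
    safe_query(_BrokenClient(), "status:open")
except RuntimeError as err:
    result = str(err)  # the error reaches the caller with context
```

The point is only that the caller, not a logger, decides what to do with a failed connection.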


I started migrating some of the OpenStack tools to python-gerrit a
couple of months ago; I still have that code somewhere on my laptop, I
hope.

I think gerritlib's API could be improved a lot based on python-gerrit;
also, query combinations in gerritlib are a bit limited due to how
they're expressed there. At least that was the case the last time I checked.

I'll take some time in the next few weeks to work on that.

@Joshua, do you mind taking a look at python-gerrit and providing some
feedback, or using it? :D

Cheers,
FF



Otherwise pretty easy to use.

Sent from my really tiny device...


On Oct 21, 2013, at 4:46 AM, "Sean Dague"  wrote:


On 10/21/2013 04:04 AM, Flavio Percoco wrote:

On 20/10/13 05:01 +, Joshua Harlow wrote:
I created some gerrit tools that I think others might find useful.

https://github.com/harlowja/gerrit_view



I worked on this Python library for Gerrit[0] a couple of months ago and
I've been using it for this gerrit-cli[1] tool. I was wondering if you'd
like to migrate your Gerrit queries and make them use python-gerrit
instead? I can do that for you.

[0] https://github.com/FlaPer87/python-gerrit
[1] https://github.com/FlaPer87/gerrit-cli

BTW, Big +1 for the curses UI!


Also realize that OpenStack maintains gerritlib - 
https://github.com/openstack-infra/gerritlib

Which anyone can contribute to (and which is the code behind every message posted
back to Gerrit by bot users). It would actually be nice to enhance gerritlib if
there are enough features missing that are in python-gerrit.

   -Sean

--
Sean Dague
http://dague.net





--
@flaper87
Flavio Percoco



Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-21 Thread Thomas Goirand
On 10/22/2013 04:45 AM, Mark McLoughlin wrote:
> By "improve clarity", you mean "compile an accurate list of all
> copyright holders"? Why is this useful information?
> 
> Sure, we could also "improve clarity" by compiling a list of all the
> cities in the world where some OpenStack code has been authored ... but
> *why*?

Mark, I haven't asked for things to be 100% accurate. I know that's not
possible. I've asked that we make sure headers aren't 90% wrong, which
was my gut feeling when writing the trove debian/copyright file and
seeing only HP in the headers...

> The key thing for Debian to understand is that all OpenStack
> contributors agree to license their code under the terms of the Apache
> License. I don't see why a list of copyright holders would clarify the
> licensing situation any further.
> 
> Mark.

Well, I just want to write things correctly; I have been in
situations where that wasn't easily possible, and I wanted to fix this once
and for all by opening the topic on this list. It is as simple as that.
There's no need for the discussion to go *that* far. Nobody is
discussing the fact that OpenStack is free software.

Thomas




Re: [openstack-dev] Gerrit tools

2013-10-21 Thread Flavio Percoco

On 21/10/13 15:41 +0200, Chmouel Boudjnah wrote:


On Mon, Oct 21, 2013 at 3:03 PM, Flavio Percoco  wrote:

   Also realize that OpenStack maintains gerritlib - https://github.com/openstack-infra/gerritlib

   Which anyone can contribute to (and which is the code behind every
   message posted back to Gerrit by bot users). It would actually be nice
   to enhance gerritlib if there are enough features missing that are in
   python-gerrit.


   Yup, that's part of the plan, python-gerrit rewrites a lot of stuff,
   though.



It seems that gerritlib is using SSH commands. Isn't the plan to have
Gerrit with the full REST API enabled in the future, without needing to
spawn SSH commands for every call?



The last time I checked the REST API, I think there were some pieces
missing; I could be wrong, though. I'll take another look.
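As a sketch of what REST consumption looks like: Gerrit's REST endpoints prefix every JSON response with the magic line ")]}'" to defeat XSSI, so a client has to strip it before parsing (the `/changes/?q=...` endpoint shape is from Gerrit's REST documentation; availability depends on the server's Gerrit version):

```python
import json

def parse_gerrit_response(body):
    """Strip Gerrit's XSSI guard line and decode the JSON payload."""
    prefix, _, payload = body.partition("\n")
    if prefix != ")]}'":
        payload = body  # tolerate responses without the guard line
    return json.loads(payload)

# A canned response such as GET /changes/?q=status:open might return:
canned = ")]}'\n[{\"_number\": 42, \"status\": \"NEW\"}]"
changes = parse_gerrit_response(canned)
```

With that in place, a REST client needs only an HTTP library rather than spawning an SSH command per call.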

Cheers,
FF


Chmouel.






--
@flaper87
Flavio Percoco



Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-21 Thread Thomas Goirand
On 10/22/2013 08:09 AM, Monty Taylor wrote:
> b) Thomas should put in debian/copyright what is in our headers, and
> should consider them, as they are in our source tarballs, to be correct
> c) If Thomas, or anyone else, considers our header attribution to be
> incorrect, he or she should submit a patch or suggest that someone else
> submit a patch to the file in question indicating that he or she feels
> that there is incorrect content in that file

ACK.

Thomas




Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-21 Thread Mark McLoughlin
On Tue, 2013-10-22 at 14:09 +0800, Thomas Goirand wrote:
> On 10/22/2013 04:55 AM, Mark McLoughlin wrote:
> > Talk to the Trove developers and politely ask them whether the copyright
> > notices in their code reflects what they see as the reality.
> > 
> > I'm sure it would help them if you pointed out to them some significant
> > chunks of code from the commit history which don't appear to have been
> > written by a HP employee.
> 
> I did this already. Though I raised the topic on this list (as
> opposed to contacting the Trove maintainers privately) for a
> broader scope, to make sure it doesn't happen again and again.
> 
> > Simply adding a Rackspace copyright notice to a file or two which has
> > had a significant contribution by someone from Rackspace would be enough
> > to resolve your concerns completely.
> 
> But how to make sure that there's no *other* copyright holders, and that
> my debian/copyright is right? Currently, there's no way...

I've never seen a project where copyright headers weren't occasionally
missing some copyright holders. I suspect Debian has managed just fine
with those projects and can manage just fine with OpenStack's copyright
headers too.

Mark.




Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-21 Thread Thomas Goirand
On 10/22/2013 04:48 AM, Mark McLoughlin wrote:
> On Tue, 2013-10-22 at 01:55 +0800, Thomas Goirand wrote:
>> On 10/21/2013 09:28 PM, Mark McLoughlin wrote:
>>> In other words, what exactly is a list of copyright holders good for?
>>
>> At least avoid pain and reject when uploading to the Debian NEW queue...
> 
> I'm sorry, that is downstream Debian pain.

I agree, it is painful, though it is a necessary pain. Debian has always
been strict about copyright. This should be seen as freeness QA, making sure
everything is 100% free, rather than as an annoyance.

> It shouldn't be inflicted on
> upstream unless it is generally a useful thing.

There's no other way to do things, unfortunately. How would I make sure
a piece of software is free, and released under the correct license, if
upstream doesn't declare it properly? There have been some cases with
packages I wanted to upload where there was just:

Classifier: License :: OSI Approved :: MIT License

in *.egg-info/PKG-INFO, and that's it. If the upstream authors hadn't fixed
this by adding a clear LICENSE file (with the correct text of the
MIT License, which is confusing because there have been many variants), then
the package couldn't have gotten in. Luckily, the upstream authors of that
Python module fixed it, and the package was re-uploaded and validated by the
FTP masters.

I'm not saying this was the case for Trove (the exactness of the
copyright holder list in debian/copyright is less of an issue); I'm
just trying to make you understand that you can't simply ignore the
issue and say "I don't care, that's Debian's problem". That simply
doesn't work (unless you would prefer OpenStack packages to *not* be in
Debian, which I'm sure isn't the case here).

Thomas




Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-21 Thread Thomas Goirand
On 10/22/2013 05:06 AM, Michael Basnight wrote:
> so if this is sufficient, ill fix the copyright headers.

Please do (and backport that to 2013.2...)! :)

Thomas




Re: [openstack-dev] [Neutron] Common requirements for services' discussion

2013-10-21 Thread Sumit Naiksatam
Hi All,

This is a reminder for the next IRC meeting on Tuesday (Oct 22nd) 15.30 UTC
(8.30 AM PDT) on the #openstack-meeting-alt channel.

The proposed agenda is:
* Service insertion and chaining
* Service agents
* Service VMs - mechanism
* Service VMs - policy
* Extensible APIs for services
and anything else you may want to discuss in this context.

Meeting wiki page (has pointer to the first meeting logs):
https://wiki.openstack.org/wiki/Meetings/AdvancedServices

Thanks,
~Sumit.

On Thu, Oct 17, 2013 at 12:02 AM, Sumit Naiksatam
wrote:

> Hi All,
>
> We will have the "advanced services" and the common requirements IRC
> meeting on Tuesdays 15.30 UTC (8.30 AM PDT) on the #openstack-meeting-alt 
> channel.
> The meeting time was chosen to accommodate requests by folks in Asia and
> will hopefully suit most people involved. Please note that this is the
> alternate meeting channel.
>
> The agenda will be a continuation of discussion from the previous meeting
> with some additional agenda items based on the sessions already proposed
> for the summit. The current discussion is being captured in this etherpad:
> https://etherpad.openstack.org/p/NeutronAdvancedServices
>
> Hope you can make it and participate.
>
> Thanks,
> ~Sumit.
>
>
> On Mon, Oct 14, 2013 at 8:15 PM, Sumit Naiksatam  > wrote:
>
>> Thanks all for attending the IRC meeting today for the Neutron "advanced
>> services" discussion. We have an etherpad for this:
>> https://etherpad.openstack.org/p/NeutronAdvancedServices
>>
>> It was also felt that we need to have more ongoing discussions, so we
>> will have follow up meetings. We will try to propose a more convenient time
>> for everyone involved for a meeting next week. Meanwhile, we can continue
>> to use the mailing list, etherpad, and/or comment on the specific proposals.
>>
>> Thanks,
>> ~Sumit.
>>
>>
>> On Tue, Oct 8, 2013 at 8:30 PM, Sumit Naiksatam > > wrote:
>>
>>> Hi All,
>>>
>>> We had a VPNaaS meeting yesterday and it was felt that we should have a
>>> separate meeting to discuss the topics common to all services. So, in
>>> preparation for the Icehouse summit, I am proposing an IRC meeting on Oct
>>> 14th 22:00 UTC (immediately after the Neutron meeting) to discuss common
>>> aspects related to the FWaaS, LBaaS, and VPNaaS.
>>>
>>> We will begin with service insertion and chaining discussion, and I hope
>>> we can collect requirements for other common aspects such as service
>>> agents, services instances, etc. as well.
>>>
>>> Etherpad for service insertion & chaining can be found here:
>>>
>>> https://etherpad.openstack.org/icehouse-neutron-service-insertion-chaining
>>>
>>> Hope you all can join.
>>>
>>> Thanks,
>>> ~Sumit.
>>>
>>>
>>>
>>
>


Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-21 Thread Thomas Goirand
On 10/22/2013 04:55 AM, Mark McLoughlin wrote:
> Talk to the Trove developers and politely ask them whether the copyright
> notices in their code reflects what they see as the reality.
> 
> I'm sure it would help them if you pointed out to them some significant
> chunks of code from the commit history which don't appear to have been
> written by a HP employee.

I did this already. Though I raised the topic on this list (as
opposed to contacting the Trove maintainers privately) for a
broader scope, to make sure it doesn't happen again and again.

> Simply adding a Rackspace copyright notice to a file or two which has
> had a significant contribution by someone from Rackspace would be enough
> to resolve your concerns completely.

But how to make sure that there's no *other* copyright holders, and that
my debian/copyright is right? Currently, there's no way...

Thomas




Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-21 Thread Mark McLoughlin
On Tue, 2013-10-22 at 01:09 +0100, Monty Taylor wrote:

> The last thing we need to do is validate in any manner that somehow the
> CLA makes our Apache Licensed Free Software more Free or more Valid than
> if we did not have our useless CLA.

Agree with this. My simplified way of thinking about this is that the
terms of the CLA are the same as the terms of the license and the CLA is
just a way of getting contributors to explicitly say that all code they
contribute will be under the terms of the license.

As Richard Fontana pointed out on this list before, Signed-off-by could
serve a similar process (while supporting the notion of a patch having
multiple authors) with much less hassle, confusion and false security.

Mark.




[openstack-dev] [Mistral] Team meeting minutes - 10/21

2013-10-21 Thread Renat Akhmerov
I'd like to thank everyone who joined our meeting and who generally got 
interested in the project.

Here's the log and meeting minutes:
Log: 
http://eavesdrop.openstack.org/meetings/mistral/2013/mistral.2013-10-21-16.00.log.html
Minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2013/mistral.2013-10-21-16.00.html

The project has just started, please join us and/or leave your feedback.

Renat Akhmerov
Mirantis Inc.


Re: [openstack-dev] question regarding vmdk file format for baremetal provisioning

2013-10-21 Thread Ravikanth Samprathi
The VMDK I have is from a VM that has not been partitioned; does that mean
it will not work? Meaning, I cannot use an off-the-shelf kernel and ramdisk
together with the qcow2 image converted from the VMDK?
Thanks
Ravi



On Mon, Oct 21, 2013 at 8:18 PM, Robert Collins
wrote:

> You'll need a format qemu-img supports; diskimage-builder outputs
> qcow2 by default, for instance.
>
> Also note it must be a partition image, not a full disk image.
>
> Cheers,
> Rob
>
> On 22 October 2013 16:15, Ravikanth Samprathi  wrote:
> > Hi
> > Am using a vmdk file for provisioning baremetal. But the nova failed with
> > nova-compute log having the following message:
> > ERROR nova.compute.manager . Unexpected error while running command
> > qemu-img convert error while reading sector 131072  Invalid
> > argument.
> >
> > Can someone provide some help/pointers.
> > Thanks
> > Ravi
> >
> >
>
>
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
>
>


Re: [openstack-dev] question regarding disk image builder

2013-10-21 Thread Ravikanth Samprathi
Thanks, Rob.

On the console, I see cloud-init starting up and it prints something like
this



ci-info lo0 : 127.0.0.1 255.0.0.0
ci-info eth0 : - -
ci-info eth1 : - -



Soon afterwards, I see this message repeat for 120s



util.py [WARNING]:

'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed: url error
[[Errno 101] Network is unreachable]

On Mon, Oct 21, 2013 at 8:19 PM, Robert Collins
wrote:

> You need to disable file injection in nova, and you need to have link
> up on the interfaces, or dhcp-all-interfaces will skip them.
>
> What do you see in the node console specifically?
>
> -Rob
>
> On 22 October 2013 16:11, Ravikanth Samprathi  wrote:
> > Hi Folks
> >
> > I'm using DiB to create an ubuntu 12.04 image where dhcp is enabled on
> all
> > interfaces, and .ssh/authorized_keys is copied
> >
> >
> >
> > $ bin/disk-image-create -a amd64 -o u1204.amd64.custom ubuntu
> local-config
> > dhcp-all-interfaces
> >
> >
> >
> > I see DiB printing these messages indicating it worked as expected
> >
> >
> >
> > dib-run-parts Mon Oct 21 21:10:41 UTC 2013 Running
> > /tmp/in_target.d/install.d/50-dhcp-all-interfaces
> >
> > + dirname /tmp/in_target.d/install.d/50-dhcp-all-interfaces
> >
> > + SCRIPTDIR=/tmp/in_target.d/install.d
> >
> > + [ -d /etc/init ]
> >
> > + install -D -g root -o root -m 0755
> >
> > /tmp/in_target.d/install.d/generate-interfaces-file.sh
> >
> > /usr/local/sbin/generate-interfaces-file.sh
> >
> > + install -D -g root -o root -m 0755
> >
> > /tmp/in_target.d/install.d/dhcp-all-interfaces.conf
> >
> > /etc/init/dhcp-all-interfaces.conf
> >
> > dib-run-parts Mon Oct 21 21:10:41 UTC 2013 50-dhcp-all-interfaces
> completed
> >
> >
> >
> > dib-run-parts Mon Oct 21 21:10:41 UTC 2013 62-ssh-key completed
> >
> >
> >
> > But when the baremetal node boots up, the interfaces still don't have
> dhcp
> > enabled. I see this in the BM node's console when cloud-init starts.
> >
> >
> >
> > How can I troubleshoot this?
> >
> > Thanks
> > Ravi
> >
> >
> >
>
>
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
>
>


Re: [openstack-dev] question regarding vmdk file format for baremetal provisioning

2013-10-21 Thread Robert Collins
You'll need a format qemu-img supports; diskimage-builder outputs
qcow2 by default, for instance.

Also note it must be a partition image, not a full disk image.
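For example, a VMDK image can typically be converted with qemu-img before it is loaded into Glance (filenames illustrative; requires qemu-img from the qemu-utils package):

```shell
# Convert the VMDK to qcow2, then sanity-check the result
qemu-img convert -f vmdk -O qcow2 image.vmdk image.qcow2
qemu-img info image.qcow2
```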

Cheers,
Rob

On 22 October 2013 16:15, Ravikanth Samprathi  wrote:
> Hi
> Am using a vmdk file for provisioning baremetal. But the nova failed with
> nova-compute log having the following message:
> ERROR nova.compute.manager . Unexpected error while running command
> qemu-img convert error while reading sector 131072  Invalid
> argument.
>
> Can someone provide some help/pointers.
> Thanks
> Ravi
>
>
>



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] question regarding disk image builder

2013-10-21 Thread Robert Collins
You need to disable file injection in nova, and you need to have link
up on the interfaces, or dhcp-all-interfaces will skip them.

What do you see in the node console specifically?

-Rob

On 22 October 2013 16:11, Ravikanth Samprathi  wrote:
> Hi Folks
>
> I'm using DiB to create an ubuntu 12.04 image where dhcp is enabled on all
> interfaces, and .ssh/authorized_keys is copied
>
>
>
> $ bin/disk-image-create -a amd64 -o u1204.amd64.custom ubuntu local-config
> dhcp-all-interfaces
>
>
>
> I see DiB printing these messages indicating it worked as expected
>
>
>
> dib-run-parts Mon Oct 21 21:10:41 UTC 2013 Running
> /tmp/in_target.d/install.d/50-dhcp-all-interfaces
>
> + dirname /tmp/in_target.d/install.d/50-dhcp-all-interfaces
>
> + SCRIPTDIR=/tmp/in_target.d/install.d
>
> + [ -d /etc/init ]
>
> + install -D -g root -o root -m 0755
>
> /tmp/in_target.d/install.d/generate-interfaces-file.sh
>
> /usr/local/sbin/generate-interfaces-file.sh
>
> + install -D -g root -o root -m 0755
>
> /tmp/in_target.d/install.d/dhcp-all-interfaces.conf
>
> /etc/init/dhcp-all-interfaces.conf
>
> dib-run-parts Mon Oct 21 21:10:41 UTC 2013 50-dhcp-all-interfaces completed
>
>
>
> dib-run-parts Mon Oct 21 21:10:41 UTC 2013 62-ssh-key completed
>
>
>
> But when the baremetal node boots up, the interfaces still don't have dhcp
> enabled. I see this in the BM node's console when cloud-init starts.
>
>
>
> How can I troubleshoot this?
>
> Thanks
> Ravi
>
>
>



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



[openstack-dev] question regarding vmdk file format for baremetal provisioning

2013-10-21 Thread Ravikanth Samprathi
Hi
I am using a VMDK file for provisioning bare metal, but Nova failed, with the
nova-compute log containing the following message:
ERROR nova.compute.manager . Unexpected error while running command
qemu-img convert error while reading sector 131072  Invalid
argument.

Can someone provide some help/pointers?
Thanks
Ravi


[openstack-dev] question regarding disk image builder

2013-10-21 Thread Ravikanth Samprathi
Hi Folks

I'm using DiB to create an Ubuntu 12.04 image where DHCP is enabled on all
interfaces and .ssh/authorized_keys is copied in.



$ bin/disk-image-create -a amd64 -o u1204.amd64.custom ubuntu local-config
dhcp-all-interfaces



I see DiB printing these messages indicating it worked as expected



dib-run-parts Mon Oct 21 21:10:41 UTC 2013 Running
/tmp/in_target.d/install.d/50-dhcp-all-interfaces

+ dirname /tmp/in_target.d/install.d/50-dhcp-all-interfaces

+ SCRIPTDIR=/tmp/in_target.d/install.d

+ [ -d /etc/init ]

+ install -D -g root -o root -m 0755

/tmp/in_target.d/install.d/generate-interfaces-file.sh

/usr/local/sbin/generate-interfaces-file.sh

+ install -D -g root -o root -m 0755

/tmp/in_target.d/install.d/dhcp-all-interfaces.conf

/etc/init/dhcp-all-interfaces.conf

dib-run-parts Mon Oct 21 21:10:41 UTC 2013 50-dhcp-all-interfaces completed



dib-run-parts Mon Oct 21 21:10:41 UTC 2013 62-ssh-key completed



But when the baremetal node boots up, the interfaces still don't have dhcp
enabled. I see this in the BM node's console when cloud-init starts.



How can I troubleshoot this?
Thanks
Ravi


[openstack-dev] [Heat] Incorporate Auditing Support / Usage Notifications

2013-10-21 Thread Angus Salkeld

Hi all

There has been some interest in Heat generating usage notifications
http://summit.openstack.org/cfp/details/87

so I thought I'd implement the blueprint:
https://blueprints.launchpad.net/heat/+spec/send-notification

I'd like to ask for suggestions on the content of the notifications.
I have added a section for Heat here (please tell me if this is the
wrong place): https://wiki.openstack.org/wiki/SystemUsageData

My plan was to generate the notification on each stack state change
so as the wiki page says:
orchestration.stack.{create,update,delete,suspend,resume}.{start,error,end}

 start maps to IN_PROGRESS
 end maps to COMPLETE
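The mapping above can be sketched as follows (names are illustrative, not Heat's actual code; the error suffix is assumed to correspond to a FAILED status):

```python
# Hypothetical helper composing the notification event_type described above
# from a stack action and its resulting status.
SUFFIX = {
    "IN_PROGRESS": "start",  # action has begun
    "COMPLETE": "end",       # action finished successfully
    "FAILED": "error",       # action failed
}

def event_type(action, status):
    return "orchestration.stack.%s.%s" % (action.lower(), SUFFIX[status])

# e.g. a CREATE entering IN_PROGRESS would emit
# "orchestration.stack.create.start"
```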

If you have any other needs of the notification please respond.

Thanks
Angus



Re: [openstack-dev] [Nova] What validation feature is necessary for Nova v3 API

2013-10-21 Thread Kenichi Oomichi

Hi Doug,

Thanks again.

-Original Message-
From: Doug Hellmann [mailto:doug.hellm...@dreamhost.com] 
Sent: Tuesday, October 22, 2013 6:51 AM
To: Ohmichi, Kenichi
Cc: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Nova] What validation feature is necessary for 
Nova v3 API
>
> On Mon, Oct 21, 2013 at 7:14 AM, Kenichi Oomichi  
> wrote:
>>
>> Some validation features seem necessary as basic features for Nova APIs.
>> so I am trying to pick necessary features for WSME on the following
>> inline messages.
>>
>> Could you check them?
>>
>>> -Original Message-
>>> From: Doug Hellmann [mailto:doug.hellm...@dreamhost.com]
>>> Sent: Thursday, October 17, 2013 3:51 AM
>>> To: OpenStack Development Mailing List
>>> Subject: Re: [openstack-dev] [Nova] What validation feature is necessary 
>>> for Nova v3 API
>
> For discussing, I have investigated all validation ways of current Nova v3
> API parameters. There are 79 API methods, and 49 methods use API 
> parameters
> of a request body. Totally, they have 148 API parameters. (details: [1])
>
> Necessary features, what I guess now, are the following:
>
> << Basic Validation Feature >>
> Through this investigation, it seems that we need some basic validation
> features such as:
> * Type validation
>   str(name, ..), int(vcpus, ..), float(rxtx_factor), dict(metadata, ..),
>   list(networks, ..), bool(conbine, ..), None(availability_zone)
> * String length validation
>   1 - 255
> * Value range validation
>   value >= 0(rotation, ..), value > 0(vcpus, ..),
>   value >= 1(os-multiple-create:min_count, os-multiple-create:max_count)
>>
>> Ceilometer has class BoundedInt.
>> (https://github.com/openstack/ceilometer/blob/master/ceilometer/api/controllers/v2.py#L79)
>> This class seems useful for the above value range validation.
>> Can we implement this feature on WSME?
>> Or should we implement this on Oslo?
>
> I think it makes sense to add some of these validation features directly
> to WSME unless they are OpenStack-specific.

I see. I will start to implement these features for WSME.

BTW, the WSME project on Launchpad does not currently have a "Blueprint" page.
Is it OK to register these features as a bug,
or will you open the "Blueprint" page?
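As a rough sketch of the value-range feature under discussion, here is a BoundedInt-style validator in plain Python, modeled on the Ceilometer class linked above. WSME's actual extension API may differ; the class below is illustrative only.

```python
class BoundedInt(object):
    """Validate that a value is an integer within [min, max]."""

    def __init__(self, min=None, max=None):
        self.min = min
        self.max = max

    def validate(self, value):
        # bool is a subclass of int in Python, so reject it explicitly.
        if not isinstance(value, int) or isinstance(value, bool):
            raise ValueError('%r is not an integer' % (value,))
        if self.min is not None and value < self.min:
            raise ValueError('%d is less than the minimum %d' % (value, self.min))
        if self.max is not None and value > self.max:
            raise ValueError('%d is greater than the maximum %d' % (value, self.max))
        return value


# e.g. vcpus requires value > 0, i.e. a minimum of 1
vcpus = BoundedInt(min=1)
```

The same shape would cover rotation (value >= 0) and the
os-multiple-create min_count/max_count parameters (value >= 1).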


>>> * Data format validation
>>>  * Pattern:
>>>uuid(volume_id, ..), boolean(on_shared_storage, ..), base64encoded(contents),
>>>ipv4(access_ip_v4, fixed_ip), ipv6(access_ip_v6)
>>
>> This feature also seems implementable by enhancing the above string
>> validation.
>
> Yes, I could see having different types for each of those things.
> I believe there is already a boolean type.

Good to know. I will implement the other type validations by referring to the
boolean type.
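For illustration, the pattern checks listed above (uuid, base64, ipv4/ipv6) can be sketched with the Python 3 standard library; the eventual WSME types would wrap equivalent logic, and the helper names below are assumptions, not existing WSME or Nova code.

```python
import base64
import binascii
import ipaddress
import uuid


def is_uuid(value):
    # e.g. volume_id
    try:
        uuid.UUID(value)
        return True
    except (ValueError, AttributeError, TypeError):
        return False


def is_base64(value):
    # e.g. contents; validate=True rejects non-alphabet characters
    try:
        base64.b64decode(value, validate=True)
        return True
    except (binascii.Error, TypeError, ValueError):
        return False


def is_ipv4(value):
    # e.g. access_ip_v4, fixed_ip
    try:
        ipaddress.IPv4Address(value)
        return True
    except ValueError:
        return False


def is_ipv6(value):
    # e.g. access_ip_v6
    try:
        ipaddress.IPv6Address(value)
        return True
    except ValueError:
        return False
```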


>Thanks for doing this analysis. It looks like with a little bit of work on 
>WSME, we will have a nice library of reusable validators.

Thanks, your comment motivates me :-)


Thanks
Ken'ichi Ohmichi




Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-21 Thread Michael Basnight

On Oct 21, 2013, at 5:09 PM, Monty Taylor wrote:
> On 10/21/2013 10:44 PM, Clint Byrum wrote:
>> Excerpts from Mark McLoughlin's message of 2013-10-21 13:45:21 -0700:
>>> On Mon, 2013-10-21 at 10:28 -0700, Clint Byrum wrote:
 Excerpts from Robert Collins's message of 2013-10-20 02:25:43 -0700:
> On 20 October 2013 02:35, Monty Taylor  wrote:
> 
>> However, even as a strong supporter of accurate license headers, I would
>> like to know more about the FTP masters issue. I dialog with them, as
>> folks who deal with this issue and its repercussions WAY more than any of
>> us might be really nice.
> 
> Debian takes its responsibilities under copyright law very seriously.
> The integrity of the debian/copyright metadata is checked on the first
> upload for a package (and basically not thereafter, which is either
> convenient or pragmatic or a massive hole in rigour depending on your
> point of view). The goal is to ensure that a) the package is in the
> right repository in Debian (main vs nonfree) and b) that Debian can
> redistribute it and c) that downstreams of Debian who decide to use
> the package can confidently do so. Files with differing redistribution
> licenses that aren't captured in debian/copyright are an issue for c);
> files with different authors and the same redistribution licence
> aren't a problem for a/b/c *but* the rules the FTP masters enforce
> don't make that discrimination: the debian/copyright file needs to be
> a concordance of both copyright holders and copyright license.
> 
> Personally, I think it should really only be a concordance of
> copyright licenses, and the holders shouldn't be mentioned, but that's
> not the current project view.
> 
 
 The benefit to this is that by at least hunting down project leadership
 and getting an assertion and information about the copyright holder
 situation, a maintainer tends to improve clarity upstream.
>>> 
>>> By "improve clarity", you mean "compile an accurate list of all
>>> copyright holders"? Why is this useful information?
>>> 
>>> Sure, we could also "improve clarity" by compiling a list of all the
>>> cities in the world where some OpenStack code has been authored ... but
>>> *why*?
>>> 
>> 
>> If you don't know who the copyright holders are, you cannot know that
>> the license being granted is actually enforceable. What if the Trove
>> developers just found some repo lying out in the world and slapped an
> Apache license on it? We aren't going to do an exhaustive investigation,
>> but we want to know _who_ granted said license.
> 
> You know I think you're great, but this argument doesn't hold up.
> 
> If the trove developers found some repo in the world and slapped an
> apache license AND said:
> 
> Copyright 2012 Michael Basnight
> 
> in the header, and Thomas put that in debian/copyright, the Debian FTP
> masters would very happily accept it.

I endorse this message. 

But seriously, the Trove team will take some time tomorrow and add copyrights 
to the files appropriately. Then I'll be sure to ping zigo.





Re: [openstack-dev] [Nova] VMWare Mine Sweeper, Congrats!

2013-10-21 Thread Tracy Jones
Yes - we are going to change that.   I know it's annoying. 

Sent from my iPhone

> On Oct 21, 2013, at 5:14 PM, Michael Still  wrote:
> 
> This is super cool. Thanks!
> 
> One piece of feedback -- would it be possible to get the results as
> something other than a tarball? Downloading the entire tarball to read
> one log is slightly annoying.
> 
> Thanks,
> Michael
> 
> On Sat, Oct 19, 2013 at 9:29 AM, Sreeram Yerrapragada
>  wrote:
>> We had some infrastructure issues in the morning and went back to silent
>> mode. I just re-triggered the tempest run for your patchset. Also note that
>> until we stabilize our CI infrastructure you will only see postings
>> from VMware Minesweeper for passed builds. For failed builds we will
>> update the review manually.
>> 
>> Thanks
>> Sreeram
>> 
>> 
>> From: "Yaguang Tang" 
>> To: "OpenStack Development Mailing List" 
>> Sent: Friday, October 18, 2013 8:59:19 AM
>> Subject: Re: [openstack-dev] [Nova] VMWare Mine Sweeper, Congrats!
>> 
>> 
>> How can I enable or trigger Mine Sweeper for VMware-related patches? I have
>> updated a patch for the VMware driver today
>> (https://review.openstack.org/#/c/51793/) but haven't seen any posted
>> results.
>> 
>> 
>> 2013/10/18 Sean Dague 
>>> 
>>> On 10/17/2013 02:29 PM, Dan Smith wrote:
> 
> This system is running tempest against a VMWare deployment and posting
> the results publicly.  This is really great progress.  It will go a long
> way in helping reviewers be more confident in changes to this driver.
 
 
 This is huge progress, congrats and thanks to the VMware team for making
 this happen! There is really no substitute for the value this will
 provide for overall quality.
>>> 
>>> 
>>> Agreed. Nice job guys! It's super cool to now see SmokeStack and Mine
>>> Sweeper posting back on patches.
>>> 
>>> Tip of the hat to the VMWare team for pulling this together so quickly.
>>> 
>>>-Sean
>>> 
>>> --
>>> Sean Dague
>>> http://dague.net
>>> 
>>> 
>> 
>> 
>> 
>> 
>> --
>> Tang Yaguang
>> 
>> Canonical Ltd. | www.ubuntu.com | www.canonical.com
>> Mobile:  +86 152 1094 6968
>> gpg key: 0x187F664F
>> 
>> 
> 
> 
> 
> -- 
> Rackspace Australia
> 



Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-21 Thread Monty Taylor


On 10/21/2013 10:44 PM, Clint Byrum wrote:
> Excerpts from Mark McLoughlin's message of 2013-10-21 13:45:21 -0700:
>> On Mon, 2013-10-21 at 10:28 -0700, Clint Byrum wrote:
>>> Excerpts from Robert Collins's message of 2013-10-20 02:25:43 -0700:
 On 20 October 2013 02:35, Monty Taylor  wrote:

> However, even as a strong supporter of accurate license headers, I would
> like to know more about the FTP masters issue. I dialog with them, as
> folks who deal with this issue and its repercussions WAY more than any of
> us might be really nice.

 Debian takes its responsibilities under copyright law very seriously.
 The integrity of the debian/copyright metadata is checked on the first
 upload for a package (and basically not thereafter, which is either
 convenient or pragmatic or a massive hole in rigour depending on your
 point of view). The goal is to ensure that a) the package is in the
 right repository in Debian (main vs nonfree) and b) that Debian can
 redistribute it and c) that downstreams of Debian who decide to use
 the package can confidently do so. Files with differing redistribution
 licenses that aren't captured in debian/copyright are an issue for c);
 files with different authors and the same redistribution licence
 aren't a problem for a/b/c *but* the rules the FTP masters enforce
 don't make that discrimination: the debian/copyright file needs to be
 a concordance of both copyright holders and copyright license.

 Personally, I think it should really only be a concordance of
 copyright licenses, and the holders shouldn't be mentioned, but that's
 not the current project view.

>>>
>>> The benefit to this is that by at least hunting down project leadership
>>> and getting an assertion and information about the copyright holder
>>> situation, a maintainer tends to improve clarity upstream.
>>
>> By "improve clarity", you mean "compile an accurate list of all
>> copyright holders"? Why is this useful information?
>>
>> Sure, we could also "improve clarity" by compiling a list of all the
>> cities in the world where some OpenStack code has been authored ... but
>> *why*?
>>
> 
> If you don't know who the copyright holders are, you cannot know that
> the license being granted is actually enforceable. What if the Trove
> developers just found some repo lying out in the world and slapped an
> Apache license on it? We aren't going to do an exhaustive investigation,
> but we want to know _who_ granted said license.

You know I think you're great, but this argument doesn't hold up.

If the trove developers found some repo in the world and slapped an
apache license AND said:

Copyright 2012 Michael Basnight

in the header, and Thomas put that in debian/copyright, the Debian FTP
masters would very happily accept it.

I think that authors should attribute their work, because I think that
they should care. However, if they don't, that's fine. There is SOME
attribution in the file, and that attribution itself is correct. HP did
write some of the file. Rackspace also did but did not bother to claim
having done so.

debian/copyright should reflect what's in the files - it's what the
project is stating through the mechanisms that we have available to us.
I appreciate Thomas trying to be more precise here, but I think it's
actually too far. If you think that there is a bug in the copyright
header, you need to contact the project, via email, bug or patch, and
fix it. At THAT point, you can fix the debian/copyright file.

Until then, you need to declare to Debian what we are declaring to you.

>>>  Often things
>>> that are going into NEW are, themselves, new to the world, and often
>>> those projects have not done the due diligence to state their license
>>> and take stock of their copyright owners.
>>
>> I think OpenStack has done plenty of due diligence around the licensing
>> of its code and that all copyright holders agree to license their code
>> under those terms.
>>
>>> I think that is one reason
>>> the process survives despite perhaps going further than is necessary to
>>> maintain Debian's social contract integrity.
>>
>> This is related to some "social contract"? Please explain.
>>
> 
> http://www.debian.org/social_contract
> 
>>> I think OpenStack has taken enough care to ensure works are attributable
>>> to their submitters that Debian should have a means to accept that
>>> this project is indeed licensed as such. Perhaps a statement detailing
>>> the process OpenStack uses to ensure this can be drafted and included
>>> in each repository. It is not all that dissimilar to what MySQL did by
>>> stating the OpenSource linking exception for libmysqlclient's
>>> GPL license explicitly in a file that is now included with the tarballs.
>>
>> You objected to someone else on this thread conflating copyright
>> ownership and licensing. Now you do the same. There is absolutely no
>> ambiguity about OpenStack's license.

Re: [openstack-dev] [Nova] VMWare Mine Sweeper, Congrats!

2013-10-21 Thread Michael Still
This is super cool. Thanks!

One piece of feedback -- would it be possible to get the results as
something other than a tarball? Downloading the entire tarball to read
one log is slightly annoying.

Thanks,
Michael

On Sat, Oct 19, 2013 at 9:29 AM, Sreeram Yerrapragada
 wrote:
> We had some infrastructure issues in the morning and went back to silent
> mode. I just re-triggered the tempest run for your patchset. Also note that
> until we stabilize our CI infrastructure you will only see postings
> from VMware Minesweeper for passed builds. For failed builds we will
> update the review manually.
>
> Thanks
> Sreeram
>
> 
> From: "Yaguang Tang" 
> To: "OpenStack Development Mailing List" 
> Sent: Friday, October 18, 2013 8:59:19 AM
> Subject: Re: [openstack-dev] [Nova] VMWare Mine Sweeper, Congrats!
>
>
> How can I enable or trigger Mine Sweeper for VMware-related patches? I have
> updated a patch for the VMware driver today
> (https://review.openstack.org/#/c/51793/) but haven't seen any posted
> results.
>
>
> 2013/10/18 Sean Dague 
>>
>> On 10/17/2013 02:29 PM, Dan Smith wrote:

 This system is running tempest against a VMWare deployment and posting
 the results publicly.  This is really great progress.  It will go a long
 way in helping reviewers be more confident in changes to this driver.
>>>
>>>
>>> This is huge progress, congrats and thanks to the VMware team for making
>>> this happen! There is really no substitute for the value this will
>>> provide for overall quality.
>>
>>
>> Agreed. Nice job guys! It's super cool to now see SmokeStack and Mine
>> Sweeper posting back on patches.
>>
>> Tip of the hat to the VMWare team for pulling this together so quickly.
>>
>> -Sean
>>
>> --
>> Sean Dague
>> http://dague.net
>>
>>
>
>
>
>
> --
> Tang Yaguang
>
> Canonical Ltd. | www.ubuntu.com | www.canonical.com
> Mobile:  +86 152 1094 6968
> gpg key: 0x187F664F
>
>
>



-- 
Rackspace Australia



[openstack-dev] [infra] Meeting Tuesday October 22nd at 19:00 UTC

2013-10-21 Thread Elizabeth Krumbach Joseph
The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting tomorrow, Tuesday October 22nd, at 19:00 UTC in
#openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com



Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-21 Thread Vipul Sabhaya
On Mon, Oct 21, 2013 at 2:04 PM, Michael Basnight wrote:

>
> On Oct 21, 2013, at 1:40 PM, Tim Simpson wrote:
>
> >>> 2. I also think a datastore_version alone should be sufficient since
> the associated datastore type will be implied:
> >
> >> When i brought this up it was generally discussed as being confusing.
> Id like to use type and rely on having a default (or active) version behind
> the scenes.
> >
> > Can't we do both? If a user wants a specific version, most likely they
> had to enumerate all datastore_versions, spot it in a list, and grab the
> guid. Why force them to also specify the datastore_type when we can easily
> determine what that is?
>
> Fair enough.
>
>
It's not intuitive to the user to specify a version alone.
You don't boot a 'version' of something without specifying what that
something is.  I would rather they only specified the datastore_type, and
not have them specify a version at all.


> >
> >>> 4. Additionally, in the current pull request to implement this it is
> possible to avoid passing a version, but only if no more than one version
> of the datastore_type exists in the database.
> >>>
> >>> I think instead the datastore_type row in the database should also
> have a "default_version_id" property, that an operator could update to the
> most recent version or whatever other criteria they wish to use, meaning
> the call could become this simple:
> >
> >> Since we have determined from this email thread that we have an active
> status, and that > 1 version can be active, we have to think about the
> precedence of active vs default. My question would be, if we have a
> default_version_id and a active version, what do we choose on behalf of the
> user? If there is > 1 active version and a user does not specify the
> version, the api will error out, unless a default is defined. We also need
> a default_type in the config so the existing APIs can maintain
> compatibility. We can re-discuss this for v2 of the API.
> >
> > Imagine that an operator sets up Trove and only has one active version.
> They then somehow fumble setting up the default_version, but think they
> succeeded as the API works for users the way they expect anyway. Then they
> go to add another active version and suddenly their users get error
> messages.
> >
> > Using only the "default_version" field of the datastore_type to
> > define a default would honor the principle of least surprise.
>
> Are you saying you must have a default version defined to have > 1 active
> versions?
>
>
I think it makes sense to have an 'active' flag on every version -- and a
default flag for the version that should be used when the user doesn't
specify one.  It also makes sense to require the deployer to set this
accurately; if no default exists, instance provisioning errors out.
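A minimal sketch of the resolution logic described here, assuming each version record carries 'active' and 'default' flags; the field names and structure are illustrative, not Trove's actual schema.

```python
def resolve_version(versions, requested=None):
    """Pick the datastore version to provision, or raise ValueError."""
    active = [v for v in versions if v.get('active')]
    if requested is not None:
        # An explicitly requested version must be active.
        match = [v for v in active if v['name'] == requested]
        if not match:
            raise ValueError('version %r is not active' % requested)
        return match[0]
    if len(active) == 1:
        # Only one active version: no ambiguity, no default needed.
        return active[0]
    # More than one active version: the deployer-set default decides.
    defaults = [v for v in active if v.get('default')]
    if len(defaults) != 1:
        raise ValueError('no unambiguous default version; please specify one')
    return defaults[0]
```

Under this scheme a misconfigured default only surfaces once a second
active version is added, which matches the surprise scenario Tim raised.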




Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects

2013-10-21 Thread Jeremy Stanley
On 2013-10-21 14:44:10 -0700 (-0700), Clint Byrum wrote:
[...]
> I assume the other CLAs have the same basic type of license being
> granted to the OpenStack Foundation.
[...]

For the record, there are only two other CLAs in place for OpenStack
source code contributions. One is the Corporate CLA which is in
addition to the Individual CLA you linked (the CCLA is signed by an
employer of a contributor, but the contributor still agrees to the
ICLA as well). The other is the United States Government CLA, which
I believe relies on works for the USG being released into the USA
Public Domain automatically (though I could be wrong on this
point--someone who is more familiar with it might correct me there).

There is also a System CLA you'll see linked in Gerrit, but that's
an implementation detail so we can get around our automation needing
to agree to something before being able to submit code reviews for
things like translations and requirements updates.
-- 
Jeremy Stanley



Re: [openstack-dev] [Nova] What validation feature is necessary for Nova v3 API

2013-10-21 Thread Doug Hellmann
On Mon, Oct 21, 2013 at 7:14 AM, Kenichi Oomichi
wrote:

>
> Hi Doug,
>
> Thank you for your advice.
>
> Some validation features seem necessary as basic features for Nova APIs.
> so I am trying to pick necessary features for WSME on the following
> inline messages.
>
> Could you check them?
>
> > -Original Message-
> > From: Doug Hellmann [mailto:doug.hellm...@dreamhost.com]
> > Sent: Thursday, October 17, 2013 3:51 AM
> > To: OpenStack Development Mailing List
> > Subject: Re: [openstack-dev] [Nova] What validation feature is necessary
> for Nova v3 API
> >>>
> >>> For discussion, I have investigated all the validation used by current
> Nova v3
> >>> API parameters. There are 79 API methods, and 49 methods use API
> parameters
> >>> of a request body. In total, they have 148 API parameters. (details:
> [1])
> >>>
> >>> Necessary features, what I guess now, are the following:
> >>>
> >>> << Basic Validation Feature >>
> >>> Through this investigation, it seems that we need some basic validation
> >>> features such as:
> >>> * Type validation
> >>>   str(name, ..), int(vcpus, ..), float(rxtx_factor), dict(metadata,
> ..),
> >>>   list(networks, ..), bool(conbine, ..), None(availability_zone)
> >>> * String length validation
> >>>   1 - 255
> >>> * Value range validation
> >>>   value >= 0(rotation, ..), value > 0(vcpus, ..),
> >>>   value >= 1(os-multiple-create:min_count,
> os-multiple-create:max_count)
>
> Ceilometer has class BoundedInt.
> (
> https://github.com/openstack/ceilometer/blob/master/ceilometer/api/controllers/v2.py#L79
> )
> This class seems useful for the above value range validation.
> Can we implement this feature on WSME?
> Or should we implement this on Oslo?
>

I think it makes sense to add some of these validation features directly to
WSME unless they are OpenStack-specific.


>
> Also we would be able to implement the string length validation with
> similar code.
>

Yes, I think you're right.


>
>
> >>> * Data format validation
> >>>  * Pattern:
> >>>uuid(volume_id, ..), boolean(on_shared_storage, ..),
> base64encoded(contents),
> >>>ipv4(access_ip_v4, fixed_ip), ipv6(access_ip_v6)
>
> This feature also seems implementable by enhancing the above string
> validation.
>

Yes, I could see having different types for each of those things. I believe
there is already a boolean type.


>
> >>>  * Allowed list:
> >>>'active' or 'error'(state), 'parent' or 'child'(cells.type),
> >>>'MANUAL' or 'AUTO'(os-disk-config:disk_config), ...
>
> WSME has this feature(wtypes.Enum) already.
>

Yes


>
> >>>  * Allowed string:
> >>>not contain '!' and '.'(cells.name),
> >>>contain [a-zA-Z0-9_.- ] only(flavor.name, flavor.id)
>
> This feature also seems implementable.
>

Yes


>
> >>> * Mandatory validation
> >>>  * Required: server.name, flavor.name, ..
> >>>  * Optional: flavor.ephemeral, flavor.swap, ..
>
> WSME has this feature(mandatory argument) already.
>

Yes

Thanks for doing this analysis. It looks like with a little bit of work on
WSME, we will have a nice library of reusable validators.
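A plain-Python sketch of the features confirmed above -- mandatory attributes, string length limits, and allowed-list (Enum) validation -- showing the semantics only; the real Nova v3 code would express these through WSME's own types (wsattr with mandatory, wtypes.Enum) rather than hand-rolled checks.

```python
# Allowed list for the server "state" parameter discussed above.
ALLOWED_STATES = ('active', 'error')


def validate_server(body):
    # Mandatory validation: server.name must be present and non-empty.
    name = body.get('name')
    if not name:
        raise ValueError('name is mandatory')
    # String length validation: 1 - 255 characters.
    if len(name) > 255:
        raise ValueError('name must be at most 255 characters')
    # Allowed-list validation: state must be one of the enumerated values.
    state = body.get('state', 'active')
    if state not in ALLOWED_STATES:
        raise ValueError('state must be one of %s' % (ALLOWED_STATES,))
    return body
```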

Doug


>
>
> Thanks
> Ken'ichi Ohmichi
>
>


Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-21 Thread Clint Byrum
Excerpts from Mark McLoughlin's message of 2013-10-21 13:45:21 -0700:
> On Mon, 2013-10-21 at 10:28 -0700, Clint Byrum wrote:
> > Excerpts from Robert Collins's message of 2013-10-20 02:25:43 -0700:
> > > On 20 October 2013 02:35, Monty Taylor  wrote:
> > > 
> > > > However, even as a strong supporter of accurate license headers, I would
> > > > like to know more about the FTP masters issue. I dialog with them, as
> > > folks who deal with this issue and its repercussions WAY more than any of
> > > > us might be really nice.
> > > 
> > > Debian takes its responsibilities under copyright law very seriously.
> > > The integrity of the debian/copyright metadata is checked on the first
> > > upload for a package (and basically not thereafter, which is either
> > > convenient or pragmatic or a massive hole in rigour depending on your
> > > point of view). The goal is to ensure that a) the package is in the
> > > right repository in Debian (main vs nonfree) and b) that Debian can
> > > redistribute it and c) that downstreams of Debian who decide to use
> > > the package can confidently do so. Files with differing redistribution
> > > licenses that aren't captured in debian/copyright are an issue for c);
> > > files with different authors and the same redistribution licence
> > > aren't a problem for a/b/c *but* the rules the FTP masters enforce
> > > don't make that discrimination: the debian/copyright file needs to be
> > > a concordance of both copyright holders and copyright license.
> > > 
> > > Personally, I think it should really only be a concordance of
> > > copyright licenses, and the holders shouldn't be mentioned, but that's
> > > not the current project view.
> > > 
> > 
> > The benefit to this is that by at least hunting down project leadership
> > and getting an assertion and information about the copyright holder
> > situation, a maintainer tends to improve clarity upstream.
> 
> By "improve clarity", you mean "compile an accurate list of all
> copyright holders"? Why is this useful information?
> 
> Sure, we could also "improve clarity" by compiling a list of all the
> cities in the world where some OpenStack code has been authored ... but
> *why*?
> 

If you don't know who the copyright holders are, you cannot know that
the license being granted is actually enforceable. What if the Trove
developers just found some repo lying out in the world and slapped an
Apache license on it? We aren't going to do an exhaustive investigation,
but we want to know _who_ granted said license.

> >  Often things
> > that are going into NEW are, themselves, new to the world, and often
> > those projects have not done the due diligence to state their license
> > and take stock of their copyright owners.
> 
> I think OpenStack has done plenty of due diligence around the licensing
> of its code and that all copyright holders agree to license their code
> under those terms.
> 
> > I think that is one reason
> > the process survives despite perhaps going further than is necessary to
> > maintain Debian's social contract integrity.
> 
> This is related to some "social contract"? Please explain.
> 

http://www.debian.org/social_contract

> > I think OpenStack has taken enough care to ensure works are attributable
> > to their submitters that Debian should have a means to accept that
> > this project is indeed licensed as such. Perhaps a statement detailing
> > the process OpenStack uses to ensure this can be drafted and included
> > in each repository. It is not all that dissimilar to what MySQL did by
> > stating the OpenSource linking exception for libmysqlclient's
> > GPL license explicitly in a file that is now included with the tarballs.
> 
> You objected to someone else on this thread conflating copyright
> ownership and licensing. Now you do the same. There is absolutely no
> ambiguity about OpenStack's license.
> 

I'm not sure that was me, but I would object to conflating it, yes. They
are not the same thing, but they are related. Only a copyright holder
can grant a copyright license.

> Our CLA process for new contributors is documented here:
> 
>   
> https://wiki.openstack.org/wiki/How_To_Contribute#Contributors_License_Agreement
> 
> The key thing for Debian to understand is that all OpenStack
> contributors agree to license their code under the terms of the Apache
> License. I don't see why a list of copyright holders would clarify the
> licensing situation any further.
> 

So Debian has a rule that statements like these need to be delivered to
their users along with the end-user binaries (it relates to the social
contract and the guidelines attached to the contract).

https://review.openstack.org/static/cla.html

Article 2 is probably sufficient to say that it only really matters that
all of the copyrighted material came from people who signed the CLA,
and that the "Project Manager" (OpenStack Foundation) grants the license
on the code. I assume the other CLAs have the same basic type of
license being granted to the OpenStack Foundation.

Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-21 Thread Andrey Shestakov
On Mon, Oct 21, 2013 at 11:40 PM, Tim Simpson  wrote:
>
> >> 4. Additionally, in the current pull request to implement this it is 
> >> possible to avoid passing a version, but only if no more than one version 
> >> of the datastore_type exists in the database.
> >>
> >> I think instead the datastore_type row in the database should also have a 
> >> "default_version_id" property, that an operator could update to the most 
> >> recent version or whatever other criteria they wish to use, meaning the 
> >> call could become this simple:
>
> > Since we have determined from this email thread that we have an active 
> > status, and that > 1 version can be active, we have to think about the 
> > precedence of active vs default. My question would be, if we have a 
> > default_version_id and a active version, what do we choose on behalf of the 
> > user? If there is > 1 active version and a user does not specify the 
> > version, the api will error out, unless a default is defined. We also need 
> > a default_type in the config so the existing APIs can maintain 
> > compatibility. We can re-discuss this for v2 of the API.
>
> Imagine that an operator sets up Trove and only has one active version. They 
> then somehow fumble setting up the default_version, but think they succeeded 
> as the API works for users the way they expect anyway. Then they go to add 
> another active version and suddenly their users get error messages.
>
> Using only the "default_version" field of the datastore_type to define a
> default would honor the principle of least surprise.

What if the default version is inactive? That would create more error cases.

Also, I think we should use the default version only if a type contains
more than one active version. And the default version should itself be
active; otherwise, error out.



Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-21 Thread Michael Basnight

On Oct 21, 2013, at 1:57 PM, Nikhil Manchanda wrote:

> 
> The image approach works fine if Trove only supports deploying a single
> datastore type (mysql in your case). As soon as we support
> deploying more than 1 datastore type, Trove needs to have some knowledge
> of which guestagent manager classes to load. Hence the need
> for having a datastore type API.
> 
> The argument for needing to keep track of the version is
> similar. Potentially a version increment -- especially of the major
> version -- may require for a different guestagent manager. And Trove
> needs to have this information.

It is also true that we don't want to impose the _need_ for custom images
for the datastores. You can, quite easily, deploy mysql or redis on a vanilla
image.




Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-21 Thread Michael Basnight

On Oct 21, 2013, at 1:55 PM, Mark McLoughlin wrote:

> On Tue, 2013-10-22 at 01:45 +0800, Thomas Goirand wrote:
>> On 10/20/2013 09:00 PM, Jeremy Stanley wrote:
>>> On 2013-10-20 22:20:25 +1300 (+1300), Robert Collins wrote:
>>> [...]
 OTOH registering one's nominated copyright holder on the first
 patch to a repository is probably a sustainable overhead. And it's
 probably amenable to automation - a commit hook could do it locally
 and a check job can assert that it's done.
>>> 
>>> I know the Foundation's got work underway to improve the affiliate
>>> map from the member database, so it might be possible to have some
>>> sort of automated job which proposes changes to a copyright holders
>>> list in each project by running a query with the author and date of
>>> each commit looking for new affiliations. That seems like it would
>>> be hacky, fragile and inaccurate, but probably still more reliable
>>> than expecting thousands of contributors to keep that information up
>>> to date when submitting patches?
>> 
>> My request wasn't to go *THAT* far. The main problem I was facing was
>> that troveclient has a few files stating that HP was the sole copyright
>> holder, when it clearly was not (since I have discussed a bit with some
>> of the dev team in Portland, IIRC some of them are from Rackspace...).
> 
> Talk to the Trove developers and politely ask them whether the copyright
> notices in their code reflect what they see as the reality.
> 
> I'm sure it would help them if you pointed out to them some significant
> chunks of code from the commit history which don't appear to have been
> written by an HP employee.
> 
> Simply adding a Rackspace copyright notice to a file or two which has
> had a significant contribution by someone from Rackspace would be enough
> to resolve your concerns completely.
> 
> i.e. if you spot an inaccuracy in the copyright headers, just make it
> easy for people to fix it and I'm sure they will.

++ to this. I'd like to do what is best for OpenStack, but I don't want to make 
it impossible for the Debian FTP masters to approve Trove :) so if this is 
sufficient, I'll fix the copyright headers.
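For what it's worth, the automated affiliation scan Jeremy floated earlier in the thread (quoted above) could start from something as simple as parsing `git log` authorship. This is a rough, hypothetical sketch; mapping authors to corporate affiliations is deliberately left out, since that would need the Foundation's member database:

```python
# Rough sketch of the automated copyright-holder scan discussed in this
# thread: walk `git log` output and collect the years in which each author
# committed. Affiliation lookup (the hard, fragile part) is not attempted.

import subprocess
from collections import defaultdict

def parse_log(lines):
    """Parse lines of the form 'Author Name|YYYY' into {author: {years}}."""
    seen = defaultdict(set)
    for line in lines:
        if "|" not in line:
            continue
        author, year = line.rsplit("|", 1)
        seen[author].add(year)
    return dict(seen)

def authors_by_year(repo_path="."):
    """Run git log in repo_path and return {author: set of commit years}."""
    out = subprocess.check_output(
        ["git", "log", "--format=%an|%ad", "--date=format:%Y"],
        cwd=repo_path, text=True)
    return parse_log(out.splitlines())
```

Even this trivial pass would have flagged the troveclient situation: multiple distinct authors while only one copyright holder appears in the headers.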





Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-21 Thread Michael Basnight

On Oct 21, 2013, at 1:40 PM, Tim Simpson wrote:

>>> 2. I also think a datastore_version alone should be sufficient since the 
>>> associated datastore type will be implied:
> 
>> When i brought this up it was generally discussed as being confusing. Id 
>> like to use type and rely on having a default (or active) version behind the 
>> scenes.
> 
> Can't we do both? If a user wants a specific version, most likely they had to 
> enumerate all datastore_versions, spot it in a list, and grab the guid. Why 
> force them to also specify the datastore_type when we can easily determine 
> what that is?

Fair enough.

> 
>>> 4. Additionally, in the current pull request to implement this it is 
>>> possible to avoid passing a version, but only if no more than one version 
>>> of the datastore_type exists in the database.
>>> 
>>> I think instead the datastore_type row in the database should also have a 
>>> "default_version_id" property, that an operator could update to the most 
>>> recent version or whatever other criteria they wish to use, meaning the 
>>> call could become this simple:
> 
>> Since we have determined from this email thread that we have an active 
>> status, and that > 1 version can be active, we have to think about the 
>> precedence of active vs default. My question would be, if we have a 
> > default_version_id and an active version, what do we choose on behalf of the 
>> user? If there is > 1 active version and a user does not specify the 
>> version, the api will error out, unless a default is defined. We also need a 
>> default_type in the config so the existing APIs can maintain compatibility. 
>> We can re-discuss this for v2 of the API.
> 
> Imagine that an operator sets up Trove and only has one active version. They 
> then somehow fumble setting up the default_version, but think they succeeded 
> as the API works for users the way they expect anyway. Then they go to add 
> another active version and suddenly their users get error messages.
> 
> Using only the "default_version" field of the datastore_type to define a 
> default would honor the principle of least surprise.

Are you saying you must have a default version defined to have > 1 active 
versions?




Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-21 Thread Steve Baker
On 10/22/2013 08:45 AM, Mike Spreitzer wrote:
> Steve Baker  wrote on 10/15/2013 06:48:53 PM:
>
> > I've just written some proposals to address Heat's HOT software
> > configuration needs, and I'd like to use this thread to get some
> feedback:
> > https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config
> >
> https://wiki.openstack.org/wiki/Heat/Blueprints/native-tools-bootstrap-config
> >
> > Please read the proposals and reply to the list with any comments or
> > suggestions.
>
> Can you confirm whether I have got the big picture right?  I think
> some of my earlier remarks were mistaken.
>
> You propose to introduce the concept of component and recognize
> software configuration as a matter of invoking components --- with a
> DAG of data dependencies among the component invocations.  While this
> is similar to what today's heat engine does for resources, you do NOT
> propose that the heat engine will get in the business of invoking
> components.  Rather: each VM will run a series of component
> invocations, and in-VM mechanisms will handle the cross-component
> synchronization and data communication. 
This is basically correct, except that in-VM mechanisms won't know much
about cross-component synchronization and data communication. They will
just execute whatever components are available to be executed, and
report back values to heat-engine by signalling to waitconditions.
>  You propose to add a bit of sugaring for the wait condition & handle
> mechanism, and the heat engine will do the de-sugaring. 
Yes, I think improvements can be made on what I proposed, such as every
component signalling when it is complete, and optionally including a
return value in that signal.
>  Each component is written in one of a few supported configuration
> management (CM) frameworks, and essentially all component invocations
> on a given VM invoke components of the same CM framework (with
> possible exceptions for one or two really basic ones).
Rather than being limited to a few supported CM tools, I like the idea
of some kind of provider mechanism so that users or heat admins can add
support for new CM tools. This implies that it needs to be possible to
add a component type without requiring custom python that runs on heat
engine.
> The heat engine gains the additional responsibility of making sure
> that the appropriate CM framework(s) is(are) bootstrapped in each VM.
Maybe. Or it might be up to the user to invoke images that already have
the CM tools installed, or the user can provide a custom component
provider which installs the tool in the way that they want.

As for the cross-component synchronization and data communication
question, at this stage I'm not comfortable with bringing something like
zookeeper into the mix as a general solution for inter-component
communication. If the heat engine handles resource dependencies and
zookeeper handles software-configuration dependencies, the state of the
stack would be split between two different co-ordination mechanisms.

We've put quite some effort into the heat engine to co-ordinate resource
dependencies. Wait conditions are currently cumbersome to use, but by
exposing software-configuration state in terms of resource dependencies
they do enable the heat engine to be the central source of state for the
entire stack, including the progress of software config.

If wait conditions can become palatable to use (or completely
transparent) then to me that addresses the main concerns about using
them in the short term. Longer term I'd consider something like Marconi
to replace metadata polling and wait condition signalling but it is too
early to be having that conversation.
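For readers following along, the dependency co-ordination being discussed boils down to ordering a DAG of component invocations. Here is a minimal standalone sketch (plain Python, not Heat code) using a topological sort; actual waitcondition signalling and value passing are out of scope:

```python
# Sketch of ordering a DAG of component invocations by their data
# dependencies, as discussed above. A real in-VM agent would additionally
# block on values signalled back via waitconditions; here we only compute
# a valid execution order.

from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def invocation_order(deps):
    """deps: component name -> set of components it depends on.

    Returns one valid execution order (dependencies before dependents).
    Raises graphlib.CycleError if the component graph has a cycle.
    """
    return list(TopologicalSorter(deps).static_order())
```

For example, a stack where an app component depends on both a db and a web component, each of which depends on a base component, yields an order with `base` first and `app` last.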


Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-21 Thread Nikhil Manchanda

The image approach works fine if Trove only supports deploying a single
datastore type (mysql in your case). As soon as we support
deploying more than 1 datastore type, Trove needs to have some knowledge
of which guestagent manager classes to load. Hence the need
for having a datastore type API.

The argument for needing to keep track of the version is
similar. Potentially a version increment -- especially of the major
version -- may require a different guestagent manager. And Trove
needs to have this information.

Hope this helps,

Cheers,
-Nikhil


Kevin Conway writes:

> What is the major motivation not to simply use a glance image named "MySQL
> 5.5" or "MongoDB 2.4"?
>
> Wouldn't that give service providers all the flexibility they need for
> providing different types? For example, I could offer a simple "MySQL"
> image that creates a MySQL instance. If all my users use the one "MySQL"
> image then I can update that image to deploy the latest version (or any
> version that I, as the service provider, want to deploy). Alternatively,
> my users could have a choice of versions if I roll a "MySQL 5.1" and
> "MySQL 5.5" image.
>
> Want to deactivate a version: delete the image. Want to offer a new
> version: create a new image.
>
> It seems like this is parallel to a NOVA deploy offering multiple versions
> of the same OS (Ubuntu 12 vs Ubuntu 13). Images work nicely for that. Why
> couldn't they work for us?
>
> On 10/21/13 3:12 PM, "Michael Basnight"  wrote:
>
>>
>>On Oct 18, 2013, at 12:30 PM, Tim Simpson wrote:
>>
>>> 1. I think since we have two fields in the instance object we should
>>>make a new object for datastore and avoid the name prefixing, like this:
>>
>>I agree with this.
>>
>>> 2. I also think a datastore_version alone should be sufficient since
>>>the associated datastore type will be implied:
>>
>>When i brought this up it was generally discussed as being confusing. Id
>>like to use type and rely on having a default (or active) version behind
>>the scenes.
>>
>>> 3. Additionally, while a datastore_type should have an ID in the Trove
> >>infrastructure database, it should also be possible to pass just the name
>>>of the datastore type to the instance call, such as "mysql" or "mongo".
>>>Maybe we could allow this in addition to the ID? I think this form
>>>should actually use the argument "type", and the id should then be
>>>passed as "type_id" instead.
>>
>>Id prefer this honestly.
>>
>>> 4. Additionally, in the current pull request to implement this it is
>>>possible to avoid passing a version, but only if no more than one
>>>version of the datastore_type exists in the database.
>>>
>>> I think instead the datastore_type row in the database should also have
>>>a "default_version_id" property, that an operator could update to the
>>>most recent version or whatever other criteria they wish to use, meaning
>>>the call could become this simple:
>>
>>Since we have determined from this email thread that we have an active
>>status, and that > 1 version can be active, we have to think about the
>>precedence of active vs default. My question would be, if we have a
>default_version_id and an active version, what do we choose on behalf of
>>the user? If there is > 1 active version and a user does not specify the
>>version, the api will error out, unless a default is defined. We also
>>need a default_type in the config so the existing APIs can maintain
>>compatibility. We can re-discuss this for v2 of the API.



Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-21 Thread Mark McLoughlin
On Tue, 2013-10-22 at 01:45 +0800, Thomas Goirand wrote:
> On 10/20/2013 09:00 PM, Jeremy Stanley wrote:
> > On 2013-10-20 22:20:25 +1300 (+1300), Robert Collins wrote:
> > [...]
> >> OTOH registering one's nominated copyright holder on the first
> >> patch to a repository is probably a sustainable overhead. And it's
> >> probably amenable to automation - a commit hook could do it locally
> >> and a check job can assert that it's done.
> > 
> > I know the Foundation's got work underway to improve the affiliate
> > map from the member database, so it might be possible to have some
> > sort of automated job which proposes changes to a copyright holders
> > list in each project by running a query with the author and date of
> > each commit looking for new affiliations. That seems like it would
> > be hacky, fragile and inaccurate, but probably still more reliable
> > than expecting thousands of contributors to keep that information up
> > to date when submitting patches?
> 
> My request wasn't to go *THAT* far. The main problem I was facing was
> that troveclient has a few files stating that HP was the sole copyright
> holder, when it clearly was not (since I have discussed a bit with some
> of the dev team in Portland, IIRC some of them are from Rackspace...).

Talk to the Trove developers and politely ask them whether the copyright
notices in their code reflect what they see as the reality.

I'm sure it would help them if you pointed out to them some significant
chunks of code from the commit history which don't appear to have been
written by an HP employee.

Simply adding a Rackspace copyright notice to a file or two which has
had a significant contribution by someone from Rackspace would be enough
to resolve your concerns completely.

i.e. if you spot an inaccuracy in the copyright headers, just make it
easy for people to fix it and I'm sure they will.

Mark.




Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-21 Thread Mark McLoughlin
On Tue, 2013-10-22 at 01:55 +0800, Thomas Goirand wrote:
> On 10/21/2013 09:28 PM, Mark McLoughlin wrote:
> > In other words, what exactly is a list of copyright holders good for?
> 
> At least avoid pain and reject when uploading to the Debian NEW queue...

I'm sorry, that is downstream Debian pain. It shouldn't be inflicted on
upstream unless it is generally a useful thing.

Mark.




Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-21 Thread Mark McLoughlin
On Mon, 2013-10-21 at 10:28 -0700, Clint Byrum wrote:
> Excerpts from Robert Collins's message of 2013-10-20 02:25:43 -0700:
> > On 20 October 2013 02:35, Monty Taylor  wrote:
> > 
> > > However, even as a strong supporter of accurate license headers, I would
> > > like to know more about the FTP masters issue. I dialog with them, as
> > > folks who deal with this issue and its repercutions WAY more than any of
> > > us might be really nice.
> > 
> > Debian takes its responsibilities under copyright law very seriously.
> > The integrity of the debian/copyright metadata is checked on the first
> > upload for a package (and basically not thereafter, which is either
> > convenient or pragmatic or a massive hole in rigour, depending on your
> > point of view). The goal is to ensure that a) the package is in the
> > right repository in Debian (main vs nonfree) and b) that Debian can
> > redistribute it and c) that downstreams of Debian who decide to use
> > the package can confidently do so. Files with differing redistribution
> > licenses that aren't captured in debian/copyright are an issue for c);
> > files with different authors and the same redistribution licence
> > aren't a problem for a/b/c *but* the rules the FTP masters enforce
> > don't make that discrimination: the debian/copyright file needs to be
> > a concordance of both copyright holders and copyright license.
> > 
> > Personally, I think it should really only be a concordance of
> > copyright licenses, and the holders shouldn't be mentioned, but thats
> > not the current project view.
> > 
> 
> The benefit to this is that by at least hunting down project leadership
> and getting an assertion and information about the copyright holder
> situation, a maintainer tends to improve clarity upstream.

By "improve clarity", you mean "compile an accurate list of all
copyright holders"? Why is this useful information?

Sure, we could also "improve clarity" by compiling a list of all the
cities in the world where some OpenStack code has been authored ... but
*why*?

>  Often things
> that are going into NEW are, themselves, new to the world, and often
> those projects have not done the due diligence to state their license
> and take stock of their copyright owners.

I think OpenStack has done plenty of due diligence around the licensing
of its code and that all copyright holders agree to license their code
under those terms.

> I think that is one reason
> the process survives despite perhaps going further than is necessary to
> maintain Debian's social contract integrity.

This is related to some "social contract"? Please explain.

> I think OpenStack has taken enough care to ensure works are attributable
> to their submitters that Debian should have a means to accept that
> this project is indeed licensed as such. Perhaps a statement detailing
> the process OpenStack uses to ensure this can be drafted and included
> in each repository. It is not all that dissimilar to what MySQL did by
> stating the OpenSource linking exception for libmysqlclient's
> GPL license explicitly in a file that is now included with the tarballs.

You objected to someone else on this thread conflating copyright
ownership and licensing. Now you do the same. There is absolutely no
ambiguity about OpenStack's license.

Our CLA process for new contributors is documented here:

  
https://wiki.openstack.org/wiki/How_To_Contribute#Contributors_License_Agreement

The key thing for Debian to understand is that all OpenStack
contributors agree to license their code under the terms of the Apache
License. I don't see why a list of copyright holders would clarify the
licensing situation any further.

Mark.




Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-21 Thread Tim Simpson
>> 2. I also think a datastore_version alone should be sufficient since the 
>> associated datastore type will be implied:

>When i brought this up it was generally discussed as being confusing. Id like 
>to use type and rely on having a default (or active) version behind the scenes.

Can't we do both? If a user wants a specific version, most likely they had to 
enumerate all datastore_versions, spot it in a list, and grab the guid. Why 
force them to also specify the datastore_type when we can easily determine what 
that is?
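Tim's "do both" suggestion could be sketched roughly as follows. The request shapes, version guids, and lookup table here are illustrative assumptions, not the actual Trove API:

```python
# Hypothetical sketch of accepting either form of a create request: a
# datastore type (whose version is resolved server-side) or a version guid
# alone, from which the type is implied. The guids are made up.

VERSIONS = {
    # version guid -> (datastore type, version name)
    "b00fd-5.5": ("mysql", "5.5"),
    "c4a77-2.4": ("mongodb", "2.4"),
}

def datastore_from_request(body):
    """Return (datastore_type, version_guid_or_None) for a request body."""
    ds = body.get("datastore", {})
    if "version" in ds:
        dtype, _name = VERSIONS[ds["version"]]
        # If both were supplied, they must agree.
        if "type" in ds and ds["type"] != dtype:
            raise ValueError("datastore type does not match version")
        return dtype, ds["version"]
    # Type-only request: version is resolved to a default server-side.
    return ds["type"], None
```

Since the user most likely found the guid by enumerating datastore_versions anyway, inferring the type costs nothing and saves them a field.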

>> 4. Additionally, in the current pull request to implement this it is 
>> possible to avoid passing a version, but only if no more than one version of 
>> the datastore_type exists in the database.
>>
>> I think instead the datastore_type row in the database should also have a 
>> "default_version_id" property, that an operator could update to the most 
>> recent version or whatever other criteria they wish to use, meaning the call 
>> could become this simple:

> Since we have determined from this email thread that we have an active 
> status, and that > 1 version can be active, we have to think about the 
> precedence of active vs default. My question would be, if we have a 
> default_version_id and an active version, what do we choose on behalf of the 
> user? If there is > 1 active version and a user does not specify the version, 
> the api will error out, unless a default is defined. We also need a 
> default_type in the config so the existing APIs can maintain compatibility. 
> We can re-discuss this for v2 of the API.

Imagine that an operator sets up Trove and only has one active version. They 
then somehow fumble setting up the default_version, but think they succeeded as 
the API works for users the way they expect anyway. Then they go to add another 
active version and suddenly their users get error messages.

Using only the "default_version" field of the datastore_type to define a 
default would honor the principle of least surprise.



From: Michael Basnight [mbasni...@gmail.com]
Sent: Monday, October 21, 2013 3:12 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

On Oct 18, 2013, at 12:30 PM, Tim Simpson wrote:

> 1. I think since we have two fields in the instance object we should make a 
> new object for datastore and avoid the name prefixing, like this:

I agree with this.

> 2. I also think a datastore_version alone should be sufficient since the 
> associated datastore type will be implied:

When i brought this up it was generally discussed as being confusing. Id like 
to use type and rely on having a default (or active) version behind the scenes.

> 3. Additionally, while a datastore_type should have an ID in the Trove 
> infrastructure database, it should also be possible to pass just the name of 
> the datastore type to the instance call, such as "mysql" or "mongo". Maybe we 
> could allow this in addition to the ID? I think this form should actually use 
> the argument "type", and the id should then be passed as "type_id" instead.

Id prefer this honestly.

> 4. Additionally, in the current pull request to implement this it is possible 
> to avoid passing a version, but only if no more than one version of the 
> datastore_type exists in the database.
>
> I think instead the datastore_type row in the database should also have a 
> "default_version_id" property, that an operator could update to the most 
> recent version or whatever other criteria they wish to use, meaning the call 
> could become this simple:

Since we have determined from this email thread that we have an active status, 
and that > 1 version can be active, we have to think about the precedence of 
active vs default. My question would be, if we have a default_version_id and an 
active version, what do we choose on behalf of the user? If there is > 1 active 
version and a user does not specify the version, the api will error out, unless 
a default is defined. We also need a default_type in the config so the existing 
APIs can maintain compatibility. We can re-discuss this for v2 of the API.



Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-21 Thread Kevin Conway
What is the major motivation not to simply use a glance image named "MySQL
5.5" or "MongoDB 2.4"?

Wouldn't that give service providers all the flexibility they need for
providing different types? For example, I could offer a simple "MySQL"
image that creates a MySQL instance. If all my users use the one "MySQL"
image then I can update that image to deploy the latest version (or any
version that I, as the service provider, want to deploy). Alternatively,
my users could have a choice of versions if I roll a "MySQL 5.1" and
"MySQL 5.5" image.

Want to deactivate a version: delete the image. Want to offer a new
version: create a new image.

It seems like this is parallel to a NOVA deploy offering multiple versions
of the same OS (Ubuntu 12 vs Ubuntu 13). Images work nicely for that. Why
couldn't they work for us?
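As a rough illustration of the image-naming convention described above, version discovery could be nothing more than parsing image names. The image records below stand in for what a real glance listing would return; this is an assumption for the sketch, not glanceclient code:

```python
# Sketch of the image-naming idea: enumerate datastore versions purely
# from image names like "MySQL 5.5", with no extra Trove tables. The
# image dicts mimic the shape of a glance image listing.

def versions_from_images(images, datastore):
    """Return version strings parsed from names like '<datastore> <ver>'."""
    found = []
    prefix = datastore + " "
    for image in images:
        name = image["name"]
        if name.startswith(prefix):
            found.append(name[len(prefix):])
    return sorted(found)
```

Deactivating a version is then just deleting (or renaming) an image, exactly as Kevin describes; the counter-argument in this thread is that the guestagent still needs type/version metadata to pick a manager class.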

On 10/21/13 3:12 PM, "Michael Basnight"  wrote:

>
>On Oct 18, 2013, at 12:30 PM, Tim Simpson wrote:
>
>> 1. I think since we have two fields in the instance object we should
>>make a new object for datastore and avoid the name prefixing, like this:
>
>I agree with this.
>
>> 2. I also think a datastore_version alone should be sufficient since
>>the associated datastore type will be implied:
>
>When i brought this up it was generally discussed as being confusing. Id
>like to use type and rely on having a default (or active) version behind
>the scenes.
>
>> 3. Additionally, while a datastore_type should have an ID in the Trove
>>infrastructure database, it should also be possible to pass just the name
>>of the datastore type to the instance call, such as "mysql" or "mongo".
>>Maybe we could allow this in addition to the ID? I think this form
>>should actually use the argument "type", and the id should then be
>>passed as "type_id" instead.
>
>Id prefer this honestly.
>
>> 4. Additionally, in the current pull request to implement this it is
>>possible to avoid passing a version, but only if no more than one
>>version of the datastore_type exists in the database.
>> 
>> I think instead the datastore_type row in the database should also have
>>a "default_version_id" property, that an operator could update to the
>>most recent version or whatever other criteria they wish to use, meaning
>>the call could become this simple:
>
>Since we have determined from this email thread that we have an active
>status, and that > 1 version can be active, we have to think about the
>precedence of active vs default. My question would be, if we have a
>default_version_id and an active version, what do we choose on behalf of
>the user? If there is > 1 active version and a user does not specify the
>version, the api will error out, unless a default is defined. We also
>need a default_type in the config so the existing APIs can maintain
>compatibility. We can re-discuss this for v2 of the API.


Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-21 Thread Angus Salkeld

On 21/10/13 15:45 -0400, Mike Spreitzer wrote:
> Steve Baker  wrote on 10/15/2013 06:48:53 PM:
>> I've just written some proposals to address Heat's HOT software
>> configuration needs, and I'd like to use this thread to get some
>> feedback:
>> https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config
>> https://wiki.openstack.org/wiki/Heat/Blueprints/native-tools-bootstrap-config
>>
>> Please read the proposals and reply to the list with any comments or
>> suggestions.
>
> Can you confirm whether I have got the big picture right?  I think some of
> my earlier remarks were mistaken.
>
> You propose to introduce the concept of component and recognize software
> configuration as a matter of invoking components --- with a DAG of data
> dependencies among the component invocations.  While this is similar to
> what today's heat engine does for resources, you do NOT propose that the
> heat engine will get in the business of invoking components.  Rather: each
> VM will run a series of component invocations, and in-VM mechanisms will
> handle the cross-component synchronization and data communication.  You
> propose to add a bit of sugaring for the wait condition & handle mechanism,
> and the heat engine will do the de-sugaring.  Each component is written in
> one of a few supported configuration management (CM) frameworks, and
> essentially all component invocations on a given VM invoke components of
> the same CM framework (with possible exceptions for one or two really
> basic ones).  The heat engine gains the additional responsibility of
> making sure that the appropriate CM framework(s) is(are) bootstrapped in
> each VM.  Beyond that, the heat engine gains no additional responsibilities.
>
> Have I got that right?


I hope so; I don't want Heat to get into the business of two different
dependency systems.

-Angus



> Thanks,
> Mike








Re: [openstack-dev] [Trove] Testing of new service types support

2013-10-21 Thread Illia Khudoshyn
Michael, Tim,

Nice to see you guys agreed. But what should I do now? Dive into
trove-integration? I guess there will be no use for mocked tests, because I
haven't actually written a single line of server-side code. All the fun is
in the guest agent.

Thanks

>> For the api stuff, sure thats fine. i just think the overall coverage of
>> the review will be quite low if we are only testing the API via fake code.
>
> We're in agreement here, I think. I will say though that if the people
> working on Mongo want to test it early, and go beyond simply using the
> client to manually confirm stuff, it should be possible to run the existing
> tests by building a different image and running a subset, such as
> "--group=dbaas.guest.shutdown". IIRC those tests don't do much other than
> make an instance, see it turn to ACTIVE, and delete it. It would be a
> worthwhile spot test to see if it adheres to the bare-minimum Trove API.
>
> 
> From: Michael Basnight [mbasni...@gmail.com ]
> Sent: Monday, October 21, 2013 12:19 PM
> To: OpenStack Development Mailing List
> Subject: Re: [openstack-dev] [Trove] Testing of new service types support
>
> On Oct 21, 2013, at 10:02 AM, Tim Simpson wrote:
>
> > Can't we say that about nearly any feature though? In theory we could
> put a hold on any tests for feature work saying it
> > will need to be redone when Tempest integrated is finished.
> >
> > Keep in mind what I'm suggesting here is a fairly trivial change to get
> some validation via the existing fake mode / integration tests at a fairly
> small cost.
>
> Of course we can do the old tests. And for this it might be the best
> thing. The problem i see is that we cant do real integration tests w/o this
> work, and i dont want to integrate a bunch of different service_types w/o
> tests that actually spin them up and run the guest, which is where 80% of
> the "new" code lives for a new service_type. Otherwise we are running
> fake-guest stuff that is not a good representation.
>
> For the API stuff, sure, that's fine. I just think the overall coverage of
> the review will be quite low if we are only testing the API via fake code.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


-- 
Best regards,

Illia Khudoshyn,
Software Engineer, Mirantis, Inc.

38, Lenina ave. Kharkov, Ukraine

www.mirantis.com
www.mirantis.ru

Skype: gluke_work
ikhudos...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Minutes from today's meeting

2013-10-21 Thread Kurt Griffiths
Folks,

Today the Marconi team held their regularly scheduled meeting in 
#openstack-meeting-alt @ 1600 UTC. We discussed progress on the new storage 
sharding feature which will let Marconi scale to very large deployments, and 
provide a solid foundation for implementing queue "flavors" depending on 
community demand for such a feature.

The team also discussed versioning, and it was determined to target a v1.1 
release of the API for the Icehouse integrated release. We also discussed the 
possibility of using extensions to prototype v2 features in the longer term, 
which would have the nice side-effect of opening up Marconi to vendors for 
customization.

Summary: http://goo.gl/2jxevN
Log: http://goo.gl/QQYrPx

Please join the conversation in #openstack-marconi, and help define the future 
of the OpenStack Queue Service.

---
@kgriffs
Kurt Griffiths



Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-21 Thread Michael Basnight

On Oct 18, 2013, at 12:30 PM, Tim Simpson wrote:

> 1. I think since we have two fields in the instance object we should make a 
> new object for datastore and avoid the name prefixing, like this:

I agree with this.

> 2. I also think a datastore_version alone should be sufficient since the 
> associated datastore type will be implied:

When I brought this up it was generally discussed as being confusing. I'd like 
to use type and rely on having a default (or active) version behind the scenes.

> 3. Additionally, while a datastore_type should have an ID in the Trove 
> infastructure database, it should also be possible to pass just the name of 
> the datastore type to the instance call, such as "mysql" or "mongo". Maybe we 
> could allow this in addition to the ID? I think this form should actually use 
> the argument "type", and the id should then be passed as "type_id" instead.

I'd prefer this, honestly.

> 4. Additionally, in the current pull request to implement this it is possible 
> to avoid passing a version, but only if no more than one version of the 
> datastore_type exists in the database. 
> 
> I think instead the datastore_type row in the database should also have a 
> "default_version_id" property, that an operator could update to the most 
> recent version or whatever other criteria they wish to use, meaning the call 
> could become this simple:

Since we have determined from this email thread that we have an active status, 
and that > 1 version can be active, we have to think about the precedence of 
active vs. default. My question would be: if we have a default_version_id and an 
active version, what do we choose on behalf of the user? If there is > 1 active 
version and a user does not specify the version, the API will error out unless 
a default is defined. We also need a default_type in the config so the existing 
APIs can maintain compatibility. We can re-discuss this for v2 of the API.
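To make the precedence question concrete, here is one possible resolution order
as a small sketch. This is purely hypothetical - `resolve_version` and its
arguments are illustration names, not Trove code; only `default_version_id` and
the active status come from the discussion above:

```python
class NoVersionError(Exception):
    """Raised when a version cannot be inferred for a datastore type."""

def resolve_version(requested, active_versions, default_version_id=None):
    """Pick a datastore version for an instance-create request.

    requested          -- version id passed by the user, or None
    active_versions    -- ids of versions marked ACTIVE for the type
    default_version_id -- operator-chosen default, or None
    """
    if requested is not None:
        return requested              # an explicit version always wins
    if default_version_id is not None:
        return default_version_id     # operator default next
    if len(active_versions) == 1:
        return active_versions[0]     # single active version is unambiguous
    raise NoVersionError("> 1 active version and no default; user must choose")
```

Under this ordering, the API only errors out when there really is no way to
decide on the user's behalf.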




Re: [openstack-dev] Havana neutron security groups config issue

2013-10-21 Thread Leandro Reox
We tried that a few minutes ago, and removing nova-network doesn't make any
difference. I'm starting to think that Neutron security groups are not
working with Docker containers.


On Mon, Oct 21, 2013 at 4:15 PM, Aaron Rosen  wrote:

> Hrm, your config files look good to me. From your iptables-save output it
> looks like you have nova-network running as well. I wonder if that is
> overwriting the rules that the agents are installing. Can you try removing
> nova-network and see if that changes anything?
>
> Aaron
>
>
> On Mon, Oct 21, 2013 at 10:45 AM, Leandro Reox wrote:
>
>> Aaron,
>>
>> Here you are all the info: all the nova.confs (compute, controller), all
>> the agent logs, iptables output, etc. BTW, as I said, we're testing this
>> setup with Docker containers, just to be clear regarding your last
>> recommendation about the libvirt VIF driver (which we already have in the conf).
>>
>> Here it is: http://pastebin.com/RMgQxFyN
>>
>> Any clues ?
>>
>>
>> Best
>> Lean
>>
>>
>> On Fri, Oct 18, 2013 at 8:06 PM, Aaron Rosen  wrote:
>>
>>> Is anything showing up in the agents log on the hypervisors? Also, can
>>> you confirm you have this setting in your nova.conf:
>>>
>>>
>>> libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
>>>
>>>
>>>
>>> On Fri, Oct 18, 2013 at 1:14 PM, Leandro Reox wrote:
>>>
 Aaron, I fixed the config issues by moving the neutron opts up to the
 default section. But now I'm having this issue:

 I can launch instances normally, but it seems that the rules are not getting
 applied anywhere; I have full access to the Docker containers. If I do
 iptables -t nat -L and iptables -L, no rules seem to be applied to any
 flow.

 I see the calls on the nova-api normally ... , but no rule applied


 2013-10-18 16:10:09.873 31548 DEBUG neutronclient.client [-]
 RESP:{'date': 'Fri, 18 Oct 2013 20:10:07 GMT', 'status': '200',
 'content-length': '2331', 'content-type': 'application/json;
 charset=UTF-8', 'content-location': '
 http://172.16.124.16:9696/v2.0/security-groups.json'}
 {"security_groups": [{"tenant_id": "df26f374a7a84eddb06881c669ffd62f",
 "name": "default", "description": "default", "security_group_rules":
 [{"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null,
 "protocol": null, "ethertype": "IPv4", "tenant_id":
 "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
 "port_range_min": null, "id": "131f26d3-6b7b-47ef-9abf-fd664e59a972",
 "security_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb"},
 {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null,
 "protocol": null, "ethertype": "IPv6", "tenant_id":
 "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
 "port_range_min": null, "id": "93a8882b-adcd-489a-89e4-694f5955",
 "security_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb"},
 {"remote_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb", "direction":
 "ingress", "remote_ip_prefix": null, "protocol": null, "ethertype": "IPv4",
 "tenant_id": "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
 "port_range_min": null, "id": "fb15316c-efd0-4a70-ae98-23f260f0d76d",
 "security_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb"},
 {"remote_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb", "direction":
 "ingress", "remote_ip_prefix": null, "protocol": null, "ethertype": "IPv6",
 "tenant_id": "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
 "port_range_min": null, "id": "fc524bb9-b015-42b0-bdab-cd64db2763a6",
 "security_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb"}], "id":
 "2391ac97-447e-45b7-97f2-cd8fbcafb0cb"}, {"tenant_id":
 "df26f374a7a84eddb06881c669ffd62f", "name": "culo", "description": "",
 "security_group_rules": [{"remote_group_id": null, "direction": "egress",
 "remote_ip_prefix": null, "protocol": null, "ethertype": "IPv6",
 "tenant_id": "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
 "port_range_min": null, "id": "2c23f70a-691b-4601-87a0-2ec092488746",
 "security_group_id": "fe569b17-d6e0-4b1e-bae3-1132e748190c"},
 {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null,
 "protocol": null, "ethertype": "IPv4", "tenant_id":
 "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
 "port_range_min": null, "id": "7a445e16-81c1-45c1-8efd-39ce3bcd9ca6",
 "security_group_id": "fe569b17-d6e0-4b1e-bae3-1132e748190c"}], "id":
 "fe569b17-d6e0-4b1e-bae3-1132e748190c"}]}
  http_log_resp
 /usr/lib/python2.7/dist-packages/neutronclient/common/utils.py:179
 2013-10-18 16:10:09.959 31548 INFO nova.osapi_compute.wsgi.server
 [req-87c41dc0-d90a-47b9-bfa8-bd7921a26609 223f36a9e1fc44659ac93479cb508902
 df26f374a7a84eddb06881c669ffd62f] 172.16.124.10 "GET
 /v2/df26f374a7a84eddb06881c669ffd62f/servers/detail HTTP/1.1" status: 200
 len: 187

Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-21 Thread Mike Spreitzer
Steve Baker  wrote on 10/15/2013 06:48:53 PM:

> I've just written some proposals to address Heat's HOT software 
> configuration needs, and I'd like to use this thread to get some 
feedback:
> https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config
> 
https://wiki.openstack.org/wiki/Heat/Blueprints/native-tools-bootstrap-config

> 
> Please read the proposals and reply to the list with any comments or
> suggestions.

Can you confirm whether I have got the big picture right?  I think some of 
my earlier remarks were mistaken.

You propose to introduce the concept of component and recognize software 
configuration as a matter of invoking components --- with a DAG of data 
dependencies among the component invocations.  While this is similar to 
what today's heat engine does for resources, you do NOT propose that the 
heat engine will get in the business of invoking components.  Rather: each 
VM will run a series of component invocations, and in-VM mechanisms will 
handle the cross-component synchronization and data communication.  You 
propose to add a bit of sugaring for the wait-condition & handle mechanism, 
and the heat engine will do the de-sugaring.  Each component is written in 
one of a few supported configuration management (CM) frameworks, and 
essentially all component invocations on a given VM invoke components of 
the same CM framework (with possible exceptions for one or two really 
basic ones).  The heat engine gains the additional responsibility of 
making sure that the appropriate CM framework(s) is(are) bootstrapped in 
each VM.  The heat engine gains no additional responsibilities.

Have I got that right?
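(If I have, then at the level of a single VM the ordering problem reduces to a
topological sort over the component DAG - a hypothetical sketch, purely to
check my understanding, not anything from the proposal itself:)

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical component DAG for one VM: each component lists the
# components whose outputs it consumes.
deps = {
    "install_packages": [],
    "write_config": ["install_packages"],
    "start_service": ["write_config"],
    "register_with_lb": ["start_service"],
}

# The in-VM mechanism would invoke components in an order where every
# dependency has run before its dependents.
order = list(TopologicalSorter(deps).static_order())
print(order)
# ['install_packages', 'write_config', 'start_service', 'register_with_lb']
```

The cross-VM case is the same picture, except the edges are realized by wait
conditions and data passing rather than by an in-process sort.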

Thanks,
Mike


Re: [openstack-dev] [Trove] Testing of new service types support

2013-10-21 Thread Tim Simpson
>> For the API stuff, sure, that's fine. I just think the overall coverage of the 
>> review will be quite low if we are only testing the API via fake code.

We're in agreement here, I think. I will say though that if the people working 
on Mongo want to test it early, and go beyond simply using the client to 
manually confirm stuff, it should be possible to run the existing tests by 
building a different image and running a subset, such as 
"--group=dbaas.guest.shutdown". IIRC those tests don't do much other than make 
an instance, see it turn to ACTIVE, and delete it. It would be a worthwhile 
spot test to see if it adheres to the bare-minimum Trove API.


From: Michael Basnight [mbasni...@gmail.com]
Sent: Monday, October 21, 2013 12:19 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Trove] Testing of new service types support

On Oct 21, 2013, at 10:02 AM, Tim Simpson wrote:

> Can't we say that about nearly any feature though? In theory we could put a 
> hold on any tests for feature work saying it
> will need to be redone when Tempest integration is finished.
>
> Keep in mind what I'm suggesting here is a fairly trivial change to get some 
> validation via the existing fake mode / integration tests at a fairly small 
> cost.

Of course we can do the old tests, and for this it might be the best thing. The 
problem I see is that we can't do real integration tests w/o this work, and I 
don't want to integrate a bunch of different service_types w/o tests that 
actually spin them up and run the guest, which is where 80% of the "new" code 
lives for a new service_type. Otherwise we are running fake-guest stuff that is 
not a good representation.

For the API stuff, sure, that's fine. I just think the overall coverage of the 
review will be quite low if we are only testing the API via fake code.



Re: [openstack-dev] Havana neutron security groups config issue

2013-10-21 Thread Aaron Rosen
Hrm, your config files look good to me. From your iptables-save output it
looks like you have nova-network running as well. I wonder if that is
overwriting the rules that the agents are installing. Can you try removing
nova-network and see if that changes anything?

Aaron


On Mon, Oct 21, 2013 at 10:45 AM, Leandro Reox wrote:

> Aaron,
>
> Here you are all the info, all the nova.confs (compute, controller) , all
> the agent logs, iptables output etc ... btw as i said we're testing this
> setup with docker containers , just to be clear regarding your last
> recommedation about libvirt vif driver (that we alreade have on the conf )
>
> Here it is: http://pastebin.com/RMgQxFyN
>
> Any clues ?
>
>
> Best
> Lean
>
>
> On Fri, Oct 18, 2013 at 8:06 PM, Aaron Rosen  wrote:
>
>> Is anything showing up in the agents log on the hypervisors? Also, can
>> you confirm you have this setting in your nova.conf:
>>
>>
>> libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
>>
>>
>>
>> On Fri, Oct 18, 2013 at 1:14 PM, Leandro Reox wrote:
>>
>>> Aaron, I fixed the config issues by moving the neutron opts up to the
>>> default section. But now I'm having this issue:
>>>
>>> I can launch instances normally, but it seems that the rules are not getting
>>> applied anywhere; I have full access to the Docker containers. If I do
>>> iptables -t nat -L and iptables -L, no rules seem to be applied to any flow.
>>>
>>> I see the calls on the nova-api normally ... , but no rule applied
>>>
>>>
>>> 2013-10-18 16:10:09.873 31548 DEBUG neutronclient.client [-]
>>> RESP:{'date': 'Fri, 18 Oct 2013 20:10:07 GMT', 'status': '200',
>>> 'content-length': '2331', 'content-type': 'application/json;
>>> charset=UTF-8', 'content-location': '
>>> http://172.16.124.16:9696/v2.0/security-groups.json'}
>>> {"security_groups": [{"tenant_id": "df26f374a7a84eddb06881c669ffd62f",
>>> "name": "default", "description": "default", "security_group_rules":
>>> [{"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null,
>>> "protocol": null, "ethertype": "IPv4", "tenant_id":
>>> "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
>>> "port_range_min": null, "id": "131f26d3-6b7b-47ef-9abf-fd664e59a972",
>>> "security_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb"},
>>> {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null,
>>> "protocol": null, "ethertype": "IPv6", "tenant_id":
>>> "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
>>> "port_range_min": null, "id": "93a8882b-adcd-489a-89e4-694f5955",
>>> "security_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb"},
>>> {"remote_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb", "direction":
>>> "ingress", "remote_ip_prefix": null, "protocol": null, "ethertype": "IPv4",
>>> "tenant_id": "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
>>> "port_range_min": null, "id": "fb15316c-efd0-4a70-ae98-23f260f0d76d",
>>> "security_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb"},
>>> {"remote_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb", "direction":
>>> "ingress", "remote_ip_prefix": null, "protocol": null, "ethertype": "IPv6",
>>> "tenant_id": "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
>>> "port_range_min": null, "id": "fc524bb9-b015-42b0-bdab-cd64db2763a6",
>>> "security_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb"}], "id":
>>> "2391ac97-447e-45b7-97f2-cd8fbcafb0cb"}, {"tenant_id":
>>> "df26f374a7a84eddb06881c669ffd62f", "name": "culo", "description": "",
>>> "security_group_rules": [{"remote_group_id": null, "direction": "egress",
>>> "remote_ip_prefix": null, "protocol": null, "ethertype": "IPv6",
>>> "tenant_id": "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
>>> "port_range_min": null, "id": "2c23f70a-691b-4601-87a0-2ec092488746",
>>> "security_group_id": "fe569b17-d6e0-4b1e-bae3-1132e748190c"},
>>> {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null,
>>> "protocol": null, "ethertype": "IPv4", "tenant_id":
>>> "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
>>> "port_range_min": null, "id": "7a445e16-81c1-45c1-8efd-39ce3bcd9ca6",
>>> "security_group_id": "fe569b17-d6e0-4b1e-bae3-1132e748190c"}], "id":
>>> "fe569b17-d6e0-4b1e-bae3-1132e748190c"}]}
>>>  http_log_resp
>>> /usr/lib/python2.7/dist-packages/neutronclient/common/utils.py:179
>>> 2013-10-18 16:10:09.959 31548 INFO nova.osapi_compute.wsgi.server
>>> [req-87c41dc0-d90a-47b9-bfa8-bd7921a26609 223f36a9e1fc44659ac93479cb508902
>>> df26f374a7a84eddb06881c669ffd62f] 172.16.124.10 "GET
>>> /v2/df26f374a7a84eddb06881c669ffd62f/servers/detail HTTP/1.1" status: 200
>>> len: 1878 time: 0.6089120
>>>
>>>
>>>
>>>
>>> On Fri, Oct 18, 2013 at 5:07 PM, Aaron Rosen  wrote:
>>>
 Do you have [default] at the top of your nova.conf? Could you pastebin
 your nova.conf  for us to see.
  On Oct 18, 2013 12:31 PM, "Leandro Reox" 
 wrote:

> Yes it is, but i found that is not reading the parameter from th

Re: [openstack-dev] [TripleO] Tuskar UI - Resource Class Creation Wireframes - updated

2013-10-21 Thread Liz Blanchard
Hi Jarda,

Below you will find my comments and questions on the latest version of the 
Resource Class Creation wireframes.

Please let me know if you have any questions.

Thanks,
Liz
On Oct 16, 2013, at 12:31 PM, Jaromir Coufal  wrote:

> Hey folks,
> 
> I am sending an updated version of wireframes for Resource Class Creation. 
> Thanks everybody for your feedback, I tried to cover most of your concerns 
> and I am sending updated version for your reviews. If you have any concerns, 
> I am happy to discuss it with you.
> 
> http://people.redhat.com/~jcoufal/openstack/tuskar/2013-10-16_tuskar_resource_class_creation_wireframes.pdf

1) Will the user be able to click on any of the wizard steps in the menu at the 
top?

2) There shouldn't be a "Back" button on the first step of the wizard. The user 
will never have an opportunity to go back from here.

3) First class should be selected by default. Especially if the field that 
changes below is just the description.

4) Rather than labeling the class description with the class name, it should be 
"Class Description:".

5) The "Assist" checkbox labeling is confusing. Perhaps "Assist with proper 
halving of resources" would be better?

6) If the user unselects the "Assist" checkbox, it would be great if that 
section could collapse to save space. Alternatively, it would reappear if the 
user selects the checkbox again.

7) How come the user can't click the back button from the 2nd page? It looks 
greyed out like the "Hardware Profile" button.

8) I think we need a clearer design for when a table is empty. Maybe even a 
small message within the table along the lines of "There are currently no 
items."

9) Rather than "Yes" and "No" in the confirmation dialog, I think it would make 
it more clear to the user if the action they were taking is used. For example 
"Start Over" or "Enable Assistant" would be more descriptive. 

10) Is the Node Profile name going to be reflected in the tab name above? If 
so, it might be nice to fill in the field for "Profile Name" to be "Node 
Profile 1" by default. Then it could change as the user changes it in the field.

11) It would be better to name the "Add Row" link more specifically to the 
action. Probably "Add Requirement" in this case.

12) Is the "Associated Images" field supposed to be a drop down? Or should 
there be a Browse button? I'm just wondering why it has the helper text "Choose 
an image".

13) Would the image have an extension associated with it? If so, it might be 
good to show different examples here (Ex. QCOW2, ISO, IMG)

14) Are you sure we should select Nodes to assign to this resource class by 
default? It would be nice to ask some sample users this type of thing.

15) I think we can combine the label of "4 Matching Available Nodes" and the 
select action. This way, it would be clear that the user would be selecting the 
4 matching nodes...

16) The filter/search should be aligned closer to the table that it is 
filtering.

17) Where does the "L2-default_group" name come from in this list?

18) The filter description should probably be shortened to read "Current 
Filter: group 2". Also, I think the number of results might make sense to be on 
a different level. This might start to feel more organized if the search/filter 
control comes down to this level so that it's closer to the table.

19) If the user unselects the "Select all available" after filtering, it should 
still unselect all 4 matching nodes. In your example you've shown that only 2 
of the 4 are unselected, and then in screen 29 the user is in a weird state where 
they have unselected all matching nodes, but the table still shows that 2 nodes 
are selected. I think instead it might make sense to have a "Select 
All/Unselect All" action at the table level.

> 
> Thanks
> -- Jarda
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] Distributed Virtual Router Discussion

2013-10-21 Thread Artem Dmytrenko
Hi Swaminathan.

I work for a virtual networking startup called Midokura and I'm very interested 
in joining the discussion. We currently have a distributed router implementation 
using the existing Neutron API. Could you clarify why distributed and centrally 
located routing implementations need to be distinguished? Another question: are 
you proposing a distributed routing implementation for tenant routers, or 
for the router connecting the virtual cloud to the external network? The reason 
I'm asking is that our company would also like to propose 
a router implementation that would eliminate single-point uplink failures. We 
have submitted a couple blueprints on that topic 
(https://blueprints.launchpad.net/neutron/+spec/provider-router-support, 
https://blueprints.launchpad.net/neutron/+spec/bgp-dynamic-routing) and would 
appreciate an opportunity to collaborate on making it a reality.

Note that the images in your document are badly corrupted - maybe my questions 
could already be answered by your diagrams. Could you update your document with 
legible diagrams?

Looking forward to further discussing this topic with you!

Sincerely,
Artem Dmytrenko


On Mon, 10/21/13, Vasudevan, Swaminathan (PNB Roseville) 
 wrote:

 Subject: [openstack-dev] Distributed Virtual Router Discussion
 To: "yong sheng gong (gong...@unitedstack.com)", "cloudbe...@gmail.com",
 "OpenStack Development Mailing List (openstack-dev@lists.openstack.org)"
 Date: Monday, October 21, 2013, 12:18 PM

 Hi Folks,
 I am currently working on a blueprint for Distributed Virtual Router.
 If anyone interested in being part of the discussion please let me know.
 I have put together a first draft of my blueprint and have posted it on
 Launchpad for review.
 https://blueprints.launchpad.net/neutron/+spec/neutron-ovs-dvr

 Thanks.

 Swaminathan Vasudevan
 Systems Software Engineer (TC)
 HP Networking
 Hewlett-Packard
 8000 Foothills Blvd
 M/S 5541
 Roseville, CA - 95747
 tel: 916.785.0937
 fax: 916.785.1815
 email: swaminathan.vasude...@hp.com



Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-21 Thread Jeremy Stanley
On 2013-10-22 01:45:13 +0800 (+0800), Thomas Goirand wrote:
[...]
> The main problem I was facing was that troveclient has a few files
> stating that HP was the sole copyright holder, when it clearly was
> not (since I have discussed a bit with some of the dev team in
> Portland, IIRC some of them are from Rackspace...).
[...]
> So, for me, the clean and easy way to fix this problem is to have a
> simple copyright-holder.txt file, containing a list of company or
> individuals. It doesn't really matter if some entities forget to write
> themselves in. After all, that'd be their fault, no?
[...]

I don't really see the difference here at all. You propose going
from...

A) copyright claims in headers of files, which contributors
might forget to update

...to...

B) copyright claims in one file, which contributors might also
forget to update

I don't understand how adding a file full of duplicate information
to each project is going to solve your actual concern. We could
automatically generate it based on the contents of the copyright
headers in other files (in which case it will be no more accurate
than they are), or we could manually maintain it using the same
mechanisms we do for the contents of the copyright headers in other
files (resulting in at best the same end result, and at worst a new
conflicting set of data to reconcile).
-- 
Jeremy Stanley



Re: [openstack-dev] Towards OpenStack Disaster Recovery

2013-10-21 Thread Ronen Kat
From:   Caitlin Bestler 
To: openstack-dev@lists.openstack.org,
Date:   21/10/2013 06:55 PM
Subject:Re: [openstack-dev] Towards OpenStack Disaster Recovery

>>
>> Hi all,
>> We (IBM and Red Hat) have begun discussions on enabling Disaster
Recovery
>> (DR) in OpenStack.
>>
>> We have created a wiki page with our initial thoughts:
>> https://wiki.openstack.org/wiki/DisasterRecovery
>> We encourage others to contribute to this wiki.
>>
>What wasn't clear to me on first read is what the intended scope is.
>Exactly what is being failed over? An entire multi-tenant data-center?
>Specific tenants? Or specific enumerated sets of VMs for one tenant?

The exact set could range from a single VM (with its associated resources:
images, volumes, etc.) to a set of entities associated with a user.
The data-center itself (including its metadata and configuration) is
considered the equivalent of the "hardware" - in case of disaster, you
recover "what is running", not the infrastructure.

Thanks for pointing out that the scope should be emphasized at the top...


Regards,
__
Ronen I. Kat, PhD
Storage Research
IBM Research - Haifa
Phone: +972.3.7689493
Email: ronen...@il.ibm.com




Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-21 Thread Thomas Goirand
On 10/21/2013 09:28 PM, Mark McLoughlin wrote:
> In other words, what exactly is a list of copyright holders good for?

At least to avoid pain and rejections when uploading to the Debian NEW queue...

Thomas Goirand (zigo)




Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-21 Thread Thomas Goirand
On 10/20/2013 09:38 PM, Jeremy Stanley wrote:
> Part of the issue is that historically the project has held a
> laissez faire position that claiming copyright on contributions is
> voluntary, and that if you don't feel your modifications to a
> particular file are worthy of copyright (due to triviality or
> whatever) then there was no need to update a copyright statement for
> new holders or years.

I don't really mind the above way, as long as there's an easy way for me
to write my debian/copyright file, which isn't the case ATM. Currently,
it's close to second-guessing, which is what needs to be fixed. A
copyright-holder.txt file would fix it, and I'm guessing that its
existence alone would push companies to add themselves in...

Thomas Goirand (zigo)




[openstack-dev] [Neutron] IPv6 & DHCP options for dnsmasq

2013-10-21 Thread Sean M. Collins
Hi,

Looking at the code for the linux DHCP agent, there's a comment
about trying to figure out how to indicate other options (ra-only,
slaac, ra-nameservers, and ra-stateless).

https://github.com/openstack/neutron/blob/master/neutron/agent/linux/dhcp.py#L330

I decided to take a crack at creating a blueprint, as well as some code.

https://blueprints.launchpad.net/neutron/+spec/dnsmasq-mode-keyword

I don't know if adding the "dhcp_mode" attribute to Subnets should be
considered an API extension (and the code should be converted to an API
extension) or if we're simply specifying behavior that was originally undefined.

The motivation is to help Neutron work with IPv6 - which is a must-have
for Comcast.
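For anyone following along, these are the IPv6 mode keywords dnsmasq accepts in
--dhcp-range, as I read the dnsmasq(8) man page. A rough illustration - the
prefixes below are documentation placeholders, not anything from the blueprint:

```
# Router advertisements only, no DHCPv6 (hosts use pure SLAAC):
dhcp-range=2001:db8::,ra-only

# SLAAC for addresses, stateless DHCPv6 for other options (DNS, etc.):
dhcp-range=2001:db8::,ra-stateless

# Stateful DHCPv6 leases from a range, while also setting the RA bits
# that tell hosts to do SLAAC as well:
dhcp-range=2001:db8::100,2001:db8::1ff,slaac,64,12h
```

A "dhcp_mode" attribute on the Subnet would presumably map more or less
directly onto one of these keywords.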

-- 
Sean M. Collins




[openstack-dev] Disable async network allocation

2013-10-21 Thread Day, Phil
Hi Folks,

I'm trying to track down a couple of obscure issues in network port creation 
where it would be really useful if I could disable the async network allocation 
so that everything happens in the context of a single eventlet rather than two 
(and also rule out if there is some obscure eventlet threading issue in here). 
I thought it was configurable - but I don't see anything obvious in the code 
to go back to the old (slower) approach of doing network allocation in-line in 
the main create thread?

One of the issues I'm trying to track is Neutron occasionally creating more 
than one port - I suspect a retry mechanism in httplib2 is sending the port 
create request multiple times if Neutron is slow to reply, resulting in 
Neutron processing it multiple times. It looks like only the Neutron client has 
chosen to use httplib2 rather than httplib - anyone got any insight here?
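On the duplicate-port theory: any client-side retry of a non-idempotent create
will do this whenever the first request actually reached the server and only
the reply was lost. A toy sketch of the failure mode (nothing below is httplib2
or Neutron code - the names are made up for illustration):

```python
created_ports = []       # stands in for Neutron's database
_reply_lost = [True]     # make the first reply "time out" exactly once

def slow_create_port():
    """Server-side create: always creates the port, but the first
    reply is lost, so the client sees a timeout."""
    created_ports.append("port-%d" % len(created_ports))
    if _reply_lost[0]:
        _reply_lost[0] = False
        raise TimeoutError("reply lost - but the port WAS created")

def create_with_retry(retries=2):
    """Naive client retry loop around a non-idempotent POST."""
    for _ in range(retries):
        try:
            slow_create_port()
            return
        except TimeoutError:
            continue     # the retry re-sends the whole create

create_with_retry()
print(len(created_ports))   # 2 - one port per attempt, not one port
```

The usual fix is either an idempotency token the server can deduplicate on, or
retrying only requests known to be idempotent.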

Sometimes, of course, the Neutron timeout results in the create request being 
re-scheduled onto another node (which can in turn generate its own set of port 
create requests). It's the thread behavior around how the timeout exception 
is handled that I'm slightly nervous of (some of the retries seem to occur 
after the original network thread should have terminated).

Thanks
Phil


Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-21 Thread Thomas Goirand
On 10/20/2013 09:00 PM, Jeremy Stanley wrote:
> On 2013-10-20 22:20:25 +1300 (+1300), Robert Collins wrote:
> [...]
>> OTOH registering one's nominated copyright holder on the first
>> patch to a repository is probably a sustainable overhead. And it's
>> probably amenable to automation - a commit hook could do it locally
>> and a check job can assert that it's done.
> 
> I know the Foundation's got work underway to improve the affiliate
> map from the member database, so it might be possible to have some
> sort of automated job which proposes changes to a copyright holders
> list in each project by running a query with the author and date of
> each commit looking for new affiliations. That seems like it would
> be hacky, fragile and inaccurate, but probably still more reliable
> than expecting thousands of contributors to keep that information up
> to date when submitting patches?

My request wasn't to go *THAT* far. The main problem I was facing was
that troveclient has a few files stating that HP was the sole copyright
holder, when it clearly was not (I discussed this a bit with some of
the dev team in Portland; IIRC some of them are from Rackspace...).

Just writing HP as copyright holder to please the FTP masters, because it
would match some of the source content, seemed wrong to me, which
is why I raised the topic. Also, they didn't like that I listed the
authors (from a "git log" output) in the copyright files.

So, for me, the clean and easy way to fix this problem is to have a
simple copyright-holder.txt file, containing a list of companies or
individuals. It doesn't really matter if some entities forget to add
themselves. After all, that would be their fault, no? The point is, at
least I'd have an upstream source file to show the FTP masters, one
that has a chance of being a bit more accurate than second-guessing
through "git log" or reading a few source code files which give a
distorted view of reality.
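For what it's worth, a first cut at such a file could be scripted from git history rather than maintained by hand. A sketch - the domain-to-company mapping here is hypothetical, and real affiliation data would have to come from somewhere like the member database:

```python
# Sketch: derive a candidate copyright-holders list from commit author
# emails. The domain -> company mapping below is made up for illustration.

from collections import Counter

def holders_from_authors(author_emails, domain_map):
    """Count commits per holder, falling back to the bare email domain."""
    counts = Counter()
    for email in author_emails:
        domain = email.rsplit("@", 1)[-1].lower()
        counts[domain_map.get(domain, domain)] += 1
    return counts

# In a real run the emails would come from: git log --format='%ae'
authors = ["alice@hp.com", "bob@rackspace.com", "carol@hp.com"]
mapping = {"hp.com": "Hewlett-Packard", "rackspace.com": "Rackspace"}
print(holders_from_authors(authors, mapping))
# -> Counter({'Hewlett-Packard': 2, 'Rackspace': 1})
```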

Any thoughts?

Thomas Goirand (zigo)

P.S: I asked the FTP masters to write in this thread, though it seems
nobody had time to do so...




Re: [openstack-dev] Havana neutron security groups config issue

2013-10-21 Thread Leandro Reox
Aaron,

Here is all the info: all the nova.confs (compute, controller), all
the agent logs, iptables output, etc. BTW, as I said, we're testing this
setup with docker containers - just to be clear regarding your last
recommendation about the libvirt vif driver (which we already have in the conf).

Here it is: http://pastebin.com/RMgQxFyN

Any clues ?


Best
Lean


On Fri, Oct 18, 2013 at 8:06 PM, Aaron Rosen  wrote:

> Is anything showing up in the agents log on the hypervisors? Also, can you
> confirm you have this setting in your nova.conf:
>
>
> libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
>
>
>
> On Fri, Oct 18, 2013 at 1:14 PM, Leandro Reox wrote:
>
>> Aaron, I fixed the config issues by moving the neutron opts up to the
>> default section. But now I'm having this issue:
>>
>> I can launch instances normally, but it seems that the rules are not getting
>> applied anywhere; I have full access to the docker containers. If I do
>> iptables -t nat -L and iptables -L, no rules seem to be applied to any flow.
>>
>> I see the calls on the nova-api normally, but no rule is applied.
>>
>>
>> 2013-10-18 16:10:09.873 31548 DEBUG neutronclient.client [-]
>> RESP:{'date': 'Fri, 18 Oct 2013 20:10:07 GMT', 'status': '200',
>> 'content-length': '2331', 'content-type': 'application/json;
>> charset=UTF-8', 'content-location': '
>> http://172.16.124.16:9696/v2.0/security-groups.json'}
>> {"security_groups": [{"tenant_id": "df26f374a7a84eddb06881c669ffd62f",
>> "name": "default", "description": "default", "security_group_rules":
>> [{"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null,
>> "protocol": null, "ethertype": "IPv4", "tenant_id":
>> "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
>> "port_range_min": null, "id": "131f26d3-6b7b-47ef-9abf-fd664e59a972",
>> "security_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb"},
>> {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null,
>> "protocol": null, "ethertype": "IPv6", "tenant_id":
>> "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
>> "port_range_min": null, "id": "93a8882b-adcd-489a-89e4-694f5955",
>> "security_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb"},
>> {"remote_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb", "direction":
>> "ingress", "remote_ip_prefix": null, "protocol": null, "ethertype": "IPv4",
>> "tenant_id": "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
>> "port_range_min": null, "id": "fb15316c-efd0-4a70-ae98-23f260f0d76d",
>> "security_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb"},
>> {"remote_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb", "direction":
>> "ingress", "remote_ip_prefix": null, "protocol": null, "ethertype": "IPv6",
>> "tenant_id": "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
>> "port_range_min": null, "id": "fc524bb9-b015-42b0-bdab-cd64db2763a6",
>> "security_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb"}], "id":
>> "2391ac97-447e-45b7-97f2-cd8fbcafb0cb"}, {"tenant_id":
>> "df26f374a7a84eddb06881c669ffd62f", "name": "culo", "description": "",
>> "security_group_rules": [{"remote_group_id": null, "direction": "egress",
>> "remote_ip_prefix": null, "protocol": null, "ethertype": "IPv6",
>> "tenant_id": "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
>> "port_range_min": null, "id": "2c23f70a-691b-4601-87a0-2ec092488746",
>> "security_group_id": "fe569b17-d6e0-4b1e-bae3-1132e748190c"},
>> {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null,
>> "protocol": null, "ethertype": "IPv4", "tenant_id":
>> "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
>> "port_range_min": null, "id": "7a445e16-81c1-45c1-8efd-39ce3bcd9ca6",
>> "security_group_id": "fe569b17-d6e0-4b1e-bae3-1132e748190c"}], "id":
>> "fe569b17-d6e0-4b1e-bae3-1132e748190c"}]}
>>  http_log_resp
>> /usr/lib/python2.7/dist-packages/neutronclient/common/utils.py:179
>> 2013-10-18 16:10:09.959 31548 INFO nova.osapi_compute.wsgi.server
>> [req-87c41dc0-d90a-47b9-bfa8-bd7921a26609 223f36a9e1fc44659ac93479cb508902
>> df26f374a7a84eddb06881c669ffd62f] 172.16.124.10 "GET
>> /v2/df26f374a7a84eddb06881c669ffd62f/servers/detail HTTP/1.1" status: 200
>> len: 1878 time: 0.6089120
>>
>>
>>
>>
>> On Fri, Oct 18, 2013 at 5:07 PM, Aaron Rosen  wrote:
>>
>>> Do you have [default] at the top of your nova.conf? Could you pastebin
>>> your nova.conf  for us to see.
>>>  On Oct 18, 2013 12:31 PM, "Leandro Reox" 
>>> wrote:
>>>
Yes it is, but I found that it is not reading the parameter from
nova.conf. I forced it in the code in /network/manager.py and it finally picked
up the argument, but then it traces back complaining about neutron_url, and if
I fix that it fails on the next neutron parameter, like timeout:

 File "/usr/local/lib/python2.7/dist-packages/oslo/config/cfg.py", line
 1648, in __getattr__
 2013-10-18 15:21:04.397 30931 TRACE nova.api.openstack raise
 NoSuchOptError(name)
 2013-10-18 15:

Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-21 Thread Clint Byrum
Excerpts from Robert Collins's message of 2013-10-20 02:25:43 -0700:
> On 20 October 2013 02:35, Monty Taylor  wrote:
> 
> > However, even as a strong supporter of accurate license headers, I would
> > like to know more about the FTP masters issue. I dialog with them, as
> > folks who deal with this issue and its repercutions WAY more than any of
> > us might be really nice.
> 
> Debian takes its responsibilities under copyright law very seriously.
> The integrity of the debian/copyright metadata is checked on the first
> upload for a package (and basically not thereafter, which is either
> convenient or pragmatic or a massive hole in rigour, depending on your
> point of view). The goal is to ensure that a) the package is in the
> right repository in Debian (main vs nonfree) and b) that Debian can
> redistribute it and c) that downstreams of Debian who decide to use
> the package can confidently do so. Files with differing redistribution
> licenses that aren't captured in debian/copyright are an issue for c);
> files with different authors and the same redistribution licence
> aren't a problem for a/b/c *but* the rules the FTP masters enforce
> don't make that discrimination: the debian/copyright file needs to be
> a concordance of both copyright holders and copyright license.
> 
> Personally, I think it should really only be a concordance of
> copyright licenses, and the holders shouldn't be mentioned, but that's
> not the current project view.
> 

The benefit to this is that by at least hunting down project leadership
and getting an assertion and information about the copyright holder
situation, a maintainer tends to improve clarity upstream. Often things
that are going into NEW are, themselves, new to the world, and often
those projects have not done the due diligence to state their license
and take stock of their copyright owners. I think that is one reason
the process survives despite perhaps going further than is necessary to
maintain Debian's social contract integrity.

I think OpenStack has taken enough care to ensure works are attributable
to their submitters that Debian should have a means to accept that
this project is indeed licensed as such. Perhaps a statement detailing
the process OpenStack uses to ensure this can be drafted and included
in each repository. It is not all that dissimilar to what MySQL did by
stating the OpenSource linking exception for libmysqlclient's
GPL license explicitly in a file that is now included with the tarballs.



Re: [openstack-dev] [Trove] Testing of new service types support

2013-10-21 Thread Michael Basnight

On Oct 21, 2013, at 10:02 AM, Tim Simpson wrote:

> Can't we say that about nearly any feature though? In theory we could put a
> hold on any tests for feature work, saying they
> will need to be redone when Tempest integration is finished.
> 
> Keep in mind what I'm suggesting here is a fairly trivial change to get some 
> validation via the existing fake mode / integration tests at a fairly small 
> cost.

Of course we can do the old tests, and for this it might be the best thing. The
problem I see is that we can't do real integration tests without this work, and
I don't want to integrate a bunch of different service_types without tests that
actually spin them up and run the guest, which is where 80% of the "new" code
lives for a new service_type. Otherwise we are running fake-guest stuff that is
not a good representation.

For the API stuff, sure, that's fine. I just think the overall coverage of the
review will be quite low if we are only testing the API via fake code.




Re: [openstack-dev] Towards OpenStack Disaster Recovery

2013-10-21 Thread Alex Glikson
Hi Caitlin,

Caitlin Bestler wrote on 21/10/2013 06:51:36 PM:
> On 10/21/2013 2:34 AM, Avishay Traeger wrote:
> >
> > Hi all,
> > We (IBM and Red Hat) have begun discussions on enabling Disaster 
Recovery
> > (DR) in OpenStack.
> >
> > We have created a wiki page with our initial thoughts:
> > https://wiki.openstack.org/wiki/DisasterRecovery
> > We encourage others to contribute to this wiki.
> >
> What wasn't clear to me on first read is what the intended scope is.
> Exactly what is being failed over? An entire multi-tenant data-center?
> Specific tenants? Or specific enumerated sets of VMs for one tenant?

Our assumption is that an entire DC is failing, while only a (potentially
small) subset of VMs etc. needs to be protected/recovered.

Regards,
Alex



Re: [openstack-dev] Announce of Rally - benchmarking system for OpenStack

2013-10-21 Thread Tim Bell

It is not just the development effort but also those users who rely on tempest
to probe their production environments. If there were a second project, they'd
have to configure the endpoints, user accounts, etc. in both systems.

In my view, this cannot be done at the gate (it would take too long), but a
regularly scheduled run would give us an early indication. A typical example is
Rackspace running two weeks behind trunk... if problems could be identified
before then, it would encourage those sites that are in a position to do full CI
testing from trunk to catch the problems earlier.

Tim

> 
> And that's really what I mean about integrating better. Whenever possible 
> figuring out how functionality could be added to existing
> projects, especially when that means they are enhanced not only for your use 
> case, but for other use cases that those projects have
> wanted for a while (seriously, I'd love to have statistically valid run time 
> statistics for tempest that show us when we go off the rails, like
> we did last week for a few days, and quantify long term variability and 
> trends in the stack). It's harder in the short term to do that,
> because it means compromises along the way, but the long term benefit to 
> OpenStack is much greater than another project which
> duplicates effort from a bunch of existing projects.
> 
>   -Sean
> 
> --
> Sean Dague
> http://dague.net
> 


Re: [openstack-dev] [Trove] Testing of new service types support

2013-10-21 Thread Tim Simpson
Can't we say that about nearly any feature though? In theory we could put a
hold on any tests for feature work, saying they
will need to be redone when Tempest integration is finished.

Keep in mind what I'm suggesting here is a fairly trivial change to get some 
validation via the existing fake mode / integration tests at a fairly small 
cost.


From: Michael Basnight [mbasni...@gmail.com]
Sent: Monday, October 21, 2013 11:45 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Trove] Testing of new service types support

Top posting…

I'd like to see these in the tempest tests. I'm just getting started integrating
trove into tempest for testing, and there are some prerequisites that I'm
working through with the infra team. Progress is being made though. I'd rather
not see them go into 2 different test suites if we can just get them into the
tempest tests. Let's hope the stars line up so that you can start testing in
tempest. :)

On Oct 21, 2013, at 9:25 AM, Illia Khudoshyn wrote:

> Hi Tim,
>
> Thanks for a quick reply. I'll go with updating run_tests.py for now. Hope, 
> Andrey Shestakov's changes arrive soon.
>
> Best wishes.
>
>
>
> On Mon, Oct 21, 2013 at 7:01 PM, Tim Simpson  
> wrote:
> Hi Illia,
>
> You're correct; until the work on establishing datastore types and versions 
> as a first class Trove concept is finished, which will hopefully be soon (see 
> Andrey Shestakov's pull request), testing non-MySQL datastore types will be 
> problematic.
>
> A short term, fake-mode only solution could be accomplished fairly quickly as 
> follows: run the fake mode tests a third time in Tox with a new configuration 
> which allows for MongoDB.
>
> If you look at tox.ini, you'll see that the integration tests run in fake 
> mode twice already:
>
> >> {envpython} run_tests.py
> >> {envpython} run_tests.py --test-config=etc/tests/xml.localhost.test.conf
>
> The second invocation causes the trove-client to be used in XML mode, 
> effectively testing the XML client.
>
> (Tangent: currently running the tests twice takes some time, even in fake 
> mode- however it will cost far less time once the following pull request is 
> merged: https://review.openstack.org/#/c/52490/)
>
> If you look at run_tests.py, you'll see that on line 104 it accepts a trove 
> config file. If the run_tests.py script is updated to allow this value to be 
> specified optionally via the command line, you could create a variation on 
> "etc/trove/trove.conf.test" which specifies MongoDB. You'd then invoke 
> run_tests.py with a "--group=" argument to run some subset of the tests 
> supported by the current Mongo DB code in fake mode.
>
> Of course, this will do nothing to test the guest agent changes or confirm 
> that the end to end system actually works, but it could help test a lot of 
> incidental API and infrastructure database code.
>
> As for real mode tests, I think we should wait until the datastore type / 
> version code is finished, at which point I know we'll all be eager to add 
> additional tests for these new datastores. Of course in the short term it 
> should be possible for you to  change the code locally to build a Mongo DB 
> image as well as a Trove config file to support this and then just run some 
> subset of tests that works with Mongo.
>
> Thanks,
>
> Tim
>
>
> From: Illia Khudoshyn [ikhudos...@mirantis.com]
> Sent: Monday, October 21, 2013 9:42 AM
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [Trove] Testing of new service types support
>
> Hi all,
>
> I'm done implementing the very first bits of MongoDB support in Trove
> along with unit tests, and have faced an issue with properly testing it.
>
> It is well known that right now only one service type per installation is 
> supported by Trove (it is set in config). All testing infrastructure, 
> including Trove-integration codebase and jenkins jobs, seem to rely on that 
> service type as well. So it seems to be impossible to run all existing tests 
> AND some additional tests for MongoDB service type in one pass, at least 
> until Trove client will allow to pass service type (I know that there is 
> ongoing work in this area).
>
> Please note that all of the above is about functional and integration
> testing -- there are no issues with unit tests.
>
> So the question is, should I first submit the code to Trove and then proceed 
> with updating Trove-integration or just put aside all that MongoDB stuff 
> until client (and -integration) will be ready?
>
> PS AFAIK, there is some work on adding Cassandra and Riak (or Redis?) support 
> to Trove. These guys will likely face this issue as well.
>
> --
> Best regards,
> Illia Khudoshyn,
> Software Engineer, Mirantis, Inc.
>
> 38, Lenina ave. Kharkov, Ukraine
> www.mirantis.com
> www.mirantis.ru
>
> Skype: gluke_work
> ikhudos...@mirantis.com
>

Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-21 Thread Stefano Maffulli
On 10/20/2013 06:00 AM, Jeremy Stanley wrote:
> I know the Foundation's got work underway to improve the affiliate
> map from the member database, so it might be possible to have some
> sort of automated job which proposes changes to a copyright holders
> list in each project by running a query with the author and date of
> each commit looking for new affiliations. That seems like it would
> be hacky, fragile and inaccurate, but probably still more reliable
> than expecting thousands of contributors to keep that information up
> to date when submitting patches?

To solve the problem for future contributions (if we agree there is a
problem), wouldn't it be simpler to add one line to the commit saying
something like "Copyright ownership by: Small Corp"? This could be
semi-automated for hackers (they would only need to keep it current). We could
even automatically check the validity of that assertion at the gate
against the (to be built) database of Corporate CLAs.
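A gate-side check of such a line could be quite small. A sketch, where the trailer name, the parsing, and the CLA lookup are all hypothetical:

```python
# Sketch of a gate check for a hypothetical "Copyright-ownership:" commit
# trailer. The trailer name and the CLA data below are made up.

import re

KNOWN_CCLA_COMPANIES = {"Small Corp", "Hewlett-Packard"}  # stand-in data

def check_commit_message(message):
    """Return (ok, detail) for the copyright-ownership assertion."""
    match = re.search(r"^Copyright-ownership:\s*(.+)$", message, re.MULTILINE)
    if not match:
        return (False, "no Copyright-ownership trailer")
    holder = match.group(1).strip()
    if holder not in KNOWN_CCLA_COMPANIES:
        return (False, "no corporate CLA on file for %s" % holder)
    return (True, holder)

msg = "Fix widget frobbing\n\nCopyright-ownership: Small Corp\n"
print(check_commit_message(msg))  # -> (True, 'Small Corp')
```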

For past contributions and to solve immediately the issue with
troveclient I guess we can use the data we have from activity
board/gitdm/stackalytics. You can contact me offline, of course.

/stef


-- 
Ask and answer questions on https://ask.openstack.org



Re: [openstack-dev] [Trove] Testing of new service types support

2013-10-21 Thread Michael Basnight
Top posting…

I'd like to see these in the tempest tests. I'm just getting started integrating
trove into tempest for testing, and there are some prerequisites that I'm
working through with the infra team. Progress is being made though. I'd rather
not see them go into 2 different test suites if we can just get them into the
tempest tests. Let's hope the stars line up so that you can start testing in
tempest. :)

On Oct 21, 2013, at 9:25 AM, Illia Khudoshyn wrote:

> Hi Tim,
> 
> Thanks for a quick reply. I'll go with updating run_tests.py for now. Hope, 
> Andrey Shestakov's changes arrive soon.
> 
> Best wishes.
> 
> 
> 
> On Mon, Oct 21, 2013 at 7:01 PM, Tim Simpson  
> wrote:
> Hi Illia,
> 
> You're correct; until the work on establishing datastore types and versions 
> as a first class Trove concept is finished, which will hopefully be soon (see 
> Andrey Shestakov's pull request), testing non-MySQL datastore types will be 
> problematic.
> 
> A short term, fake-mode only solution could be accomplished fairly quickly as 
> follows: run the fake mode tests a third time in Tox with a new configuration 
> which allows for MongoDB. 
> 
> If you look at tox.ini, you'll see that the integration tests run in fake 
> mode twice already:
> 
> >> {envpython} run_tests.py
> >> {envpython} run_tests.py --test-config=etc/tests/xml.localhost.test.conf
> 
> The second invocation causes the trove-client to be used in XML mode, 
> effectively testing the XML client. 
> 
> (Tangent: currently running the tests twice takes some time, even in fake 
> mode- however it will cost far less time once the following pull request is 
> merged: https://review.openstack.org/#/c/52490/)
> 
> If you look at run_tests.py, you'll see that on line 104 it accepts a trove 
> config file. If the run_tests.py script is updated to allow this value to be 
> specified optionally via the command line, you could create a variation on 
> "etc/trove/trove.conf.test" which specifies MongoDB. You'd then invoke 
> run_tests.py with a "--group=" argument to run some subset of the tests 
> supported by the current Mongo DB code in fake mode.
> 
> Of course, this will do nothing to test the guest agent changes or confirm 
> that the end to end system actually works, but it could help test a lot of 
> incidental API and infrastructure database code.
> 
> As for real mode tests, I think we should wait until the datastore type / 
> version code is finished, at which point I know we'll all be eager to add 
> additional tests for these new datastores. Of course in the short term it 
> should be possible for you to  change the code locally to build a Mongo DB 
> image as well as a Trove config file to support this and then just run some 
> subset of tests that works with Mongo.
> 
> Thanks,
> 
> Tim 
> 
> 
> From: Illia Khudoshyn [ikhudos...@mirantis.com]
> Sent: Monday, October 21, 2013 9:42 AM
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [Trove] Testing of new service types support
> 
> Hi all,
> 
> I'm done implementing the very first bits of MongoDB support in Trove
> along with unit tests, and have faced an issue with properly testing it.
> 
> It is well known that right now only one service type per installation is 
> supported by Trove (it is set in config). All testing infrastructure, 
> including Trove-integration codebase and jenkins jobs, seem to rely on that 
> service type as well. So it seems to be impossible to run all existing tests 
> AND some additional tests for MongoDB service type in one pass, at least 
> until Trove client will allow to pass service type (I know that there is 
> ongoing work in this area).
> 
> Please note that all of the above is about functional and integration
> testing -- there are no issues with unit tests.
> 
> So the question is, should I first submit the code to Trove and then proceed 
> with updating Trove-integration or just put aside all that MongoDB stuff 
> until client (and -integration) will be ready?
> 
> PS AFAIK, there is some work on adding Cassandra and Riak (or Redis?) support 
> to Trove. These guys will likely face this issue as well.
> 
> -- 
> Best regards,
> Illia Khudoshyn,
> Software Engineer, Mirantis, Inc.
>  
> 38, Lenina ave. Kharkov, Ukraine
> www.mirantis.com
> www.mirantis.ru
>  
> Skype: gluke_work
> ikhudos...@mirantis.com
> 
> 
> 
> 
> 
> -- 
> Best regards,
> Illia Khudoshyn,
> Software Engineer, Mirantis, Inc.
>  
> 38, Lenina ave. Kharkov, Ukraine
> www.mirantis.com
> www.mirantis.ru
>  
> Skype: gluke_work
> ikhudos...@mirantis.com




Re: [openstack-dev] [Heat] Plugin to use Docker containers a resources in a template

2013-10-21 Thread Russell Bryant
On 10/17/2013 09:06 PM, Sam Alba wrote:
> Hi all,
> 
> I've been recently working on a Docker plugin for Heat that makes it
> possible to use Docker containers as resources.
> 
> I've just opened the repository:
> https://github.com/dotcloud/openstack-heat-docker

Related to this discussion, we'll have a session in the Nova track about
docker.  I'll be sure to not schedule it on top of the Heat track so
that we can discuss this as a part of the future of docker in OpenStack.

-- 
Russell Bryant



Re: [openstack-dev] Gerrit tools

2013-10-21 Thread Joshua Harlow
The nice thing about the current SSH mechanism is that it delivers "push"
notifications over SSH; it would be great if the REST API supported that,
instead of pull-style polling over REST.

Sent from my really tiny device...

On Oct 21, 2013, at 6:48 AM, "Chmouel Boudjnah" 
mailto:chmo...@enovance.com>> wrote:


On Mon, Oct 21, 2013 at 3:03 PM, Flavio Percoco 
mailto:fla...@redhat.com>> wrote:
Also realize that OpenStack maintains gerritlib - 
https://github.com/openstack-infra/gerritlib

Which anyone can contribute to (and which is the code behind every message
posted back to gerrit by the bot users). It would actually be nice to enhance
gerritlib if there are enough features in python-gerrit that it is missing.

Yup, that's part of the plan, python-gerrit rewrites a lot of stuff,
though.


It seems that gerritlib is using SSH commands; isn't the plan to have a
Gerrit with the full REST API enabled in the future, without needing to
spawn SSH commands for every call?

Chmouel.
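For context, the SSH push channel being compared here is Gerrit's `stream-events` command (`ssh -p 29418 <host> gerrit stream-events`), which emits one JSON object per line. A minimal consumer sketch; the sample event is heavily abbreviated, not a full Gerrit payload:

```python
# Parse one line of Gerrit stream-events output. Each line is a JSON
# object with a "type" field; the sample below is heavily abbreviated.

import json

def handle_event(line):
    """Return a short description for comment-added events, else None."""
    event = json.loads(line)
    if event.get("type") == "comment-added":
        return "{} on {}".format(event["type"], event["change"]["id"])
    return None

sample = '{"type": "comment-added", "change": {"id": "I1234abcd"}}'
print(handle_event(sample))  # -> comment-added on I1234abcd
```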


Re: [openstack-dev] [Trove] Testing of new service types support

2013-10-21 Thread Illia Khudoshyn
Hi Tim,

Thanks for the quick reply. I'll go with updating run_tests.py for now. Hope
Andrey Shestakov's changes arrive soon.

Best wishes.



On Mon, Oct 21, 2013 at 7:01 PM, Tim Simpson wrote:

>  Hi Illia,
>
>  You're correct; until the work on establishing datastore types and
> versions as a first class Trove concept is finished, which will hopefully
> be soon (see Andrey Shestakov's pull request), testing non-MySQL datastore
> types will be problematic.
>
>  A short term, fake-mode only solution could be accomplished fairly
> quickly as follows: run the fake mode tests a third time in Tox with a
> new configuration which allows for MongoDB.
>
>  If you look at tox.ini, you'll see that the integration tests run in
> fake mode twice already:
>
>  >> {envpython} run_tests.py
> >> {envpython} run_tests.py --test-config=etc/tests/xml.localhost.test.conf
>
>  The second invocation causes the trove-client to be used in XML mode,
> effectively testing the XML client.
>
>  (Tangent: currently running the tests twice takes some time, even in
> fake mode- however it will cost far less time once the following pull
> request is merged: https://review.openstack.org/#/c/52490/)
>
>  If you look at run_tests.py, you'll see that on line 104 it accepts a
> trove config file. If the run_tests.py script is updated to allow this
> value to be specified optionally via the command line, you could create a
> variation on "etc/trove/trove.conf.test" which specifies MongoDB. You'd
> then invoke run_tests.py with a "--group=" argument to run some subset of
> the tests supported by the current Mongo DB code in fake mode.
>
>  Of course, this will do nothing to test the guest agent changes or
> confirm that the end to end system actually works, but it could help test a
> lot of incidental API and infrastructure database code.
>
>  As for real mode tests, I think we should wait until the datastore type
> / version code is finished, at which point I know we'll all be eager to add
> additional tests for these new datastores. Of course in the short term it
> should be possible for you to change the code locally to build a Mongo DB
> image as well as a Trove config file to support this and then just run some
> subset of tests that works with Mongo.
>
>  Thanks,
>
>  Tim
>
>
>  --
> *From:* Illia Khudoshyn [ikhudos...@mirantis.com]
> *Sent:* Monday, October 21, 2013 9:42 AM
> *To:* OpenStack Development Mailing List
> *Subject:* [openstack-dev] [Trove] Testing of new service types support
>
>   Hi all,
>
>  I'm done implementing the very first bits of MongoDB support in
> Trove along with unit tests, and have faced an issue with properly testing it.
>
>  It is well known that right now only one service type per installation
> is supported by Trove (it is set in config). All testing infrastructure,
> including Trove-integration codebase and jenkins jobs, seem to rely on that
> service type as well. So it seems to be impossible to run all existing
> tests AND some additional tests for MongoDB service type in one pass, at
> least until Trove client will allow to pass service type (I know that there
> is ongoing work in this area).
>
>  Please note that all of the above is about functional and integration
> testing -- there are no issues with unit tests.
>
>  So the question is, should I first submit the code to Trove and then
> proceed with updating Trove-integration or just put aside all that MongoDB
> stuff until client (and -integration) will be ready?
>
>  PS AFAIK, there is some work on adding Cassandra and Riak (or Redis?)
> support to Trove. These guys will likely face this issue as well.
>
>  --
>
> Best regards,
>
> Illia Khudoshyn,
> Software Engineer, Mirantis, Inc.
>
>
>
> 38, Lenina ave. Kharkov, Ukraine
>
> www.mirantis.com 
>
> www.mirantis.ru
>
>
>
> Skype: gluke_work
>
> ikhudos...@mirantis.com
>
>
>


-- 

Best regards,

Illia Khudoshyn,
Software Engineer, Mirantis, Inc.



38, Lenina ave. Kharkov, Ukraine

www.mirantis.com 

www.mirantis.ru



Skype: gluke_work

ikhudos...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Distributed Virtual Router Discussion

2013-10-21 Thread Vasudevan, Swaminathan (PNB Roseville)
Hi Folks,
I am currently working on a blueprint for Distributed Virtual Router.
If anyone interested in being part of the discussion please let me know.
I have put together a first draft of my blueprint and have posted it on 
Launchpad for review.
https://blueprints.launchpad.net/neutron/+spec/neutron-ovs-dvr


Thanks.

Swaminathan Vasudevan
Systems Software Engineer (TC)


HP Networking
Hewlett-Packard
8000 Foothills Blvd
M/S 5541
Roseville, CA - 95747
tel: 916.785.0937
fax: 916.785.1815
email: swaminathan.vasude...@hp.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Team Meeting minutes - 10/21

2013-10-21 Thread Alexander Tivelkov
Hi,

Thanks everyone who has joined Murano IRC meeting.
These are the meeting minutes and the action items:
http://eavesdrop.openstack.org/meetings/murano/2013/murano.2013-10-21-15.00.html
Complete logs can be found here:
http://eavesdrop.openstack.org/meetings/murano/2013/murano.2013-10-21-15.00.log.html

--
Kind Regards,
Alexander Tivelkov
Principal Software Engineer

OpenStack Platform Product division
Mirantis, Inc

+7(495) 640 4904, ext 0236
+7-926-267-37-97(cell)
Vorontsovskaya street 35 B, building 3,
Moscow, Russia.
ativel...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Testing of new service types support

2013-10-21 Thread Tim Simpson
Hi Illia,

You're correct; until the work on establishing datastore types and versions as 
a first class Trove concept is finished, which will hopefully be soon (see 
Andrey Shestakov's pull request), testing non-MySQL datastore types will be 
problematic.

A short term, fake-mode only solution could be accomplished fairly quickly as 
follows: run the fake mode tests a third time in Tox with a new configuration 
which allows for MongoDB.

If you look at tox.ini, you'll see that the integration tests run in fake mode 
twice already:

>> {envpython} run_tests.py
>> {envpython} run_tests.py --test-config=etc/tests/xml.localhost.test.conf

The second invocation causes the trove-client to be used in XML mode, 
effectively testing the XML client.

(Tangent: currently, running the tests twice takes some time, even in fake mode; 
however, it will cost far less time once the following pull request is merged: 
https://review.openstack.org/#/c/52490/)

If you look at run_tests.py, you'll see that on line 104 it accepts a trove 
config file. If the run_tests.py script is updated to allow this value to be 
specified optionally via the command line, you could create a variation on 
"etc/trove/trove.conf.test" which specifies MongoDB. You'd then invoke 
run_tests.py with a "--group=" argument to run some subset of the tests supported 
by the current MongoDB code in fake mode.
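To illustrate, here is a minimal sketch of how run_tests.py could accept the
Trove config path optionally on the command line. The `--conf` and `--group`
flag names and the default path are assumptions for illustration, not the
script's actual interface:

```python
import argparse

def parse_args(argv):
    # Hypothetical flags; run_tests.py's real interface may differ.
    parser = argparse.ArgumentParser(description="Run Trove fake-mode tests")
    parser.add_argument("--conf", default="etc/trove/trove.conf.test",
                        help="Trove config file to load (e.g. a MongoDB variant)")
    parser.add_argument("--group", default=None,
                        help="Run only the named subset of tests")
    # Ignore any extra args the real script already understands.
    return parser.parse_known_args(argv)[0]

args = parse_args(["--conf", "etc/trove/trove.conf.mongodb.test",
                   "--group", "mongodb"])
print(args.conf)   # path that would be handed to the fake-mode bootstrap
print(args.group)  # subset of tests to run
```

With something like this in place, a third tox invocation could point at a
MongoDB variant of the config without touching the existing two runs.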

Of course, this will do nothing to test the guest agent changes or confirm that 
the end to end system actually works, but it could help test a lot of 
incidental API and infrastructure database code.

As for real mode tests, I think we should wait until the datastore type / 
version code is finished, at which point I know we'll all be eager to add 
additional tests for these new datastores. Of course in the short term it 
should be possible for you to change the code locally to build a Mongo DB image 
as well as a Trove config file to support this and then just run some subset of 
tests that works with Mongo.

Thanks,

Tim



From: Illia Khudoshyn [ikhudos...@mirantis.com]
Sent: Monday, October 21, 2013 9:42 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Trove] Testing of new service types support

Hi all,

I've finished implementing the very first bits of MongoDB support in Trove 
along with unit tests and faced an issue with properly testing it.

It is well known that right now only one service type per installation is 
supported by Trove (it is set in config). All testing infrastructure, including 
Trove-integration codebase and jenkins jobs, seem to rely on that service type 
as well. So it seems to be impossible to run all existing tests AND some 
additional tests for the MongoDB service type in one pass, at least until the 
Trove client allows passing a service type (I know that there is ongoing work 
in this area).

Please note that all of the above is about functional and integration testing 
-- there are no issues with unit tests.

So the question is, should I first submit the code to Trove and then proceed 
with updating Trove-integration, or just put aside all that MongoDB stuff until 
the client (and -integration) are ready?

PS AFAIK, there is some work on adding Cassandra and Riak (or Redis?) support 
to Trove. These guys will likely face this issue as well.

--

Best regards,

Illia Khudoshyn,
Software Engineer, Mirantis, Inc.



38, Lenina ave. Kharkov, Ukraine

www.mirantis.com

www.mirantis.ru



Skype: gluke_work

ikhudos...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gerrit tools

2013-10-21 Thread Joshua Harlow
Neat, didn't know about this library :)

Thx for the +1!

Sent from my really tiny device...

> On Oct 21, 2013, at 1:08 AM, "Flavio Percoco"  wrote:
> 
>> On 20/10/13 05:01 +, Joshua Harlow wrote:
>> I created some gerrit tools that I think others might find useful.
>> 
>> https://github.com/harlowja/gerrit_view
> 
> 
> I worked on this Python library for Gerrit[0] a couple of months ago and
> I've been using it for this gerrit-cli[1] tool. I was wondering if you'd
> like to migrate your Gerrit queries and make them use python-gerrit
> instead? I can do that for you.
> 
> [0] https://github.com/FlaPer87/python-gerrit
> [1] https://github.com/FlaPer87/gerrit-cli
> 
> BTW, Big +1 for the curses UI!
> 
> Cheers,
> FF
> 
> -- 
> @flaper87
> Flavio Percoco
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gerrit tools

2013-10-21 Thread Joshua Harlow
I am using gerritlib in the curses ui; seems to work nicely.

The only thing I don't like so much is that, from what I can tell, it silences 
connection and other errors.

See _run() method in 
https://github.com/openstack-infra/gerritlib/blob/master/gerritlib/gerrit.py

Otherwise pretty easy to use.
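One way to work around the silencing without touching gerritlib itself is to
wrap calls so failures are logged and re-raised. This is just a sketch;
`query` below is a placeholder, not a real gerritlib function:

```python
import functools
import logging

log = logging.getLogger("gerrit")

def surface_errors(fn):
    """Wrap a call that might otherwise swallow exceptions (as the
    _run() loop linked above appears to) so failures are logged and
    re-raised instead of silently dropped."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except Exception:
            log.exception("gerrit call %s failed", fn.__name__)
            raise
    return wrapper

@surface_errors
def query(client, terms):
    # Placeholder standing in for a real gerritlib query; here it
    # simulates a dropped SSH connection.
    raise ConnectionError("ssh connection dropped")
```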

Sent from my really tiny device...

> On Oct 21, 2013, at 4:46 AM, "Sean Dague"  wrote:
> 
>> On 10/21/2013 04:04 AM, Flavio Percoco wrote:
>>> On 20/10/13 05:01 +, Joshua Harlow wrote:
>>> I created some gerrit tools that I think others might find useful.
>>> 
>>> https://github.com/harlowja/gerrit_view
>> 
>> 
>> I worked on this Python library for Gerrit[0] a couple of months ago and
>> I've been using it for this gerrit-cli[1] tool. I was wondering if you'd
>> like to migrate your Gerrit queries and make them use python-gerrit
>> instead? I can do that for you.
>> 
>> [0] https://github.com/FlaPer87/python-gerrit
>> [1] https://github.com/FlaPer87/gerrit-cli
>> 
>> BTW, Big +1 for the curses UI!
> 
> Also realize that OpenStack maintains gerritlib - 
> https://github.com/openstack-infra/gerritlib
> 
> Which anyone can contribute to (and which is the code behind every message 
> posted back to Gerrit by bot users). It would actually be nice to enhance 
> gerritlib if enough of the features in python-gerrit are missing from it.
> 
>-Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Towards OpenStack Disaster Recovery

2013-10-21 Thread Caitlin Bestler

On 10/21/2013 2:34 AM, Avishay Traeger wrote:


Hi all,
We (IBM and Red Hat) have begun discussions on enabling Disaster Recovery
(DR) in OpenStack.

We have created a wiki page with our initial thoughts:
https://wiki.openstack.org/wiki/DisasterRecovery
We encourage others to contribute to this wiki.


What wasn't clear to me on first read is what the intended scope is.
Exactly what is being failed over? An entire multi-tenant data-center?
Specific tenants? Or specific enumerated sets of VMs for one tenant?




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] A prototype for cross-vm synchronization and communication

2013-10-21 Thread Steven Hardy
On Fri, Oct 18, 2013 at 02:45:01PM -0400, Lakshminaraya Renganarayana wrote:

> The prototype is implemented in Python and Ruby is used for chef
> interception.

Where can we find the code?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-10-21 Thread Khanh-Toan Tran
I'm not sure it's a good moment for this, but I would like to re-open the topic 
a little bit.

Just a small idea: is it OK if we use a file, or a database as a central point 
to store the policies 
and their associated aggregates? The Scheduler reads it first, then calls the 
scheduler drivers 
listed in the policy file for the associated aggregates. In this case we can 
get the list of 
filters and targeted aggregates before actually running the filters. Thus we 
avoid the loop 
filter -> aggregate -> policy -> filter ->.

Moreover, the admin does not need to populate the flavors' extra_specs or 
associate them with the aggregates, effectively avoiding defining two different 
policies in two flavors whose VMs are eventually hosted in the same aggregate.

The downside of this method is that it is not API-accessible: in the current 
state we do not have a policy management system. I would like a policy 
management system with a REST API, but still, it is no worse than using the 
nova config.
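A minimal sketch of what such a central policy store could look like. The JSON
format, the policy names, and the filter names are all hypothetical -- this is
not an existing Nova feature -- but it shows how the scheduler could resolve
the filter list per aggregate up front, before running any filter:

```python
import json

# Hypothetical policy file: each policy names the scheduler filters to
# run and the host aggregates it applies to.
POLICY_FILE = """
{
  "policies": [
    {"name": "performance", "aggregates": ["ssd-hosts"],
     "filters": ["RamFilter", "CoreFilter"]},
    {"name": "energy-saving", "aggregates": ["low-power-hosts"],
     "filters": ["RamFilter"]}
  ]
}
"""

def filters_for_aggregate(policy_doc, aggregate):
    """Resolve the filter list before running any filter, avoiding the
    filter -> aggregate -> policy -> filter loop described above."""
    for policy in policy_doc["policies"]:
        if aggregate in policy["aggregates"]:
            return policy["filters"]
    return ["RamFilter"]  # assumed global default when no policy matches

doc = json.loads(POLICY_FILE)
print(filters_for_aggregate(doc, "ssd-hosts"))  # ['RamFilter', 'CoreFilter']
```

The same lookup would work unchanged if the mapping lived in a database table
behind a REST API instead of a file.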

Best regards,

Toan

Alex Glikson GLIKSON at il.ibm.com 
Wed Aug 21 17:25:30 UTC 2013
Just to update those who are interested in this feature but were not able 
to follow the recent commits, we made good progress converging towards a 
simplified design, based on combination of aggregates and flavors (both of 
which are API-driven), addressing some of the concerns expressed in this 
thread (at least to certain extent).
The current design and possible usage scenario has been updated at 
https://wiki.openstack.org/wiki/Nova/MultipleSchedulerPolicies 
Comments are welcome (as well as code reviews at 
https://review.openstack.org/#/c/37407/).

Thanks, 
Alex




From:   Joe Gordon 
To: OpenStack Development Mailing List 
, 
Date:   27/07/2013 01:22 AM
Subject:Re: [openstack-dev] [Nova] support for multiple active 
scheduler   policies/drivers






On Wed, Jul 24, 2013 at 6:18 PM, Alex Glikson  wrote:
Russell Bryant  wrote on 24/07/2013 07:14:27 PM:

> 
> I really like your point about not needing to set things up via a config
> file.  That's fairly limiting since you can't change it on the fly via
> the API.


True. As I pointed out in another response, the ultimate goal would be to 
have policies as 'first class citizens' in Nova, including a DB table, 
API, etc. Maybe even a separate policy service? But in the meantime, it 
seems that the approach with config file is a reasonable compromise in 
terms of usability, consistency and simplicity. 

I do like your idea of making policies first class citizens in Nova, but I 
am not sure doing this in nova is enough.  Wouldn't we need similar things 
in Cinder and Neutron?Unfortunately this does tie into how to do good 
scheduling across multiple services, which is another rabbit hole all 
together.

I don't like the idea of putting more logic in the config file, as it is 
the config files are already too complex, making running any OpenStack 
deployment  require some config file templating and some metadata magic 
(like heat).   I would prefer to keep things like this in aggregates, or 
something else with a REST API.  So why not build a tool on top of 
aggregates to push the appropriate metadata into the aggregates.  This 
will give you a central point to manage policies, that can easily be 
updated on the fly (unlike config files).   In the long run I am 
interested in seeing OpenStack itself have a strong solution for for 
policies as a first class citizen, but I am not sure if your proposal is 
the best first step to do that.
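The suggestion of a tool on top of aggregates could be sketched roughly as
follows using python-novaclient's aggregates API. The `sched_policy` metadata
key is a made-up example, and the stub-friendly structure is for illustration
only:

```python
def policy_metadata(policy_name, extra=None):
    # Build the metadata such a tool would push into an aggregate;
    # the "sched_policy" key name is an assumption, not a Nova convention.
    md = {"sched_policy": policy_name}
    md.update(extra or {})
    return md

def push_policy(nova, aggregate_name, policy_name):
    """Push policy metadata to a named aggregate.

    `nova` is assumed to be an authenticated python-novaclient Client;
    aggregates.list() and aggregates.set_metadata() are its real API."""
    for agg in nova.aggregates.list():
        if agg.name == aggregate_name:
            nova.aggregates.set_metadata(agg, policy_metadata(policy_name))
            return True
    return False

print(policy_metadata("gold"))  # {'sched_policy': 'gold'}
```

Because this goes through the aggregates REST API, the policy mapping can be
updated on the fly -- unlike a config-file approach.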


 

Regards, 
Alex 

> -- 
> Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-21 Thread Zane Bitter

On 18/10/13 20:24, John Davidge -X (jodavidg - AAP3 INC at Cisco) wrote:

It looks like this discussion involves many of the issues faced when
developing the Curvature & Donabe frameworks, which were presented at the
Portland Summit - slides and video here:

http://www.openstack.org/summit/portland-2013/session-videos/presentation/interactive-visual-orchestration-with-curvature-and-donabe

Much of the work on the Donabe side revolved around defining a simple
JSON-based API for describing the sorts of virtual application templates
being discussed. All of the code for both Curvature and Donabe has
recently been made open source and is available here:

http://ciscosystems.github.io/curvature/

http://ciscosystems.github.io/donabe/


Hey John,
Congrats on getting this stuff open-sourced, BTW (I know it's been out 
for a while now).


Can you be more specific about the parts that are relevant to this 
discussion? I'd be interested to know how Donabe handles configuring the 
software on Nova servers for a start.



It looks like some of the ground covered by these projects can be helpful
to this discussion.


Yep, it would be great to get input from any folks in the community who 
have experience with this problem.


cheers,
Zane.



John Davidge
jodav...@cisco.com




-- Forwarded message --
From: Thomas Spatzier 
Date: Wed, Oct 9, 2013 at 12:40 AM
Subject: Re: [openstack-dev] [Heat] HOT Software orchestration
proposal for workflows
To: OpenStack Development Mailing List 


Excerpts from Clint Byrum's message


From: Clint Byrum 
To: openstack-dev ,
Date: 09.10.2013 03:54
Subject: Re: [openstack-dev] [Heat] HOT Software orchestration
proposal for workflows

Excerpts from Stan Lagun's message of 2013-10-08 13:53:45 -0700:

Hello,


That is why it is necessary to have some central coordination service which
would handle deployment workflow and perform specific actions (create VMs
and other OpenStack resources, do something on that VM) on each stage
according to that workflow. We think that Heat is the best place for such
a service.



I'm not so sure. Heat is part of the Orchestration program, not
workflow.



I agree. HOT so far was thought to be a format for describing templates in
a structural, declarative way. Adding workflows would stretch it quite a
bit. Maybe we should see what aspects make sense to be added to HOT, and
then how to do workflow like orchestration in a layer on top.


Our idea is to extend HOT DSL by adding workflow definition capabilities
as an explicit list of resources, components' states and actions. States
may depend on each other, so that you can reach state X only after you've
reached states Y and Z that X depends on. The goal is to go from the
initial state to some final state "Deployed".



We also would like to add some mechanisms to HOT for declaratively doing
software component orchestration in Heat, e.g. saying that one component
depends on another one, or needs input from another one once it has been
deployed etc. (I BTW started to write a wiki page, which is admittedly far
from complete, but I would be happy to work on it with interested folks -
https://wiki.openstack.org/wiki/Heat/Software-Configuration-Provider).
However, we must be careful not to make such features too complicated so
nobody will be able to use it any more. That said, I believe we could make
HOT cover some levels of complexity, but not all. And then maybe workflow
based orchestration on top is needed.



Orchestration is not workflow, and HOT is an orchestration templating
language, not a workflow language. Extending it would just complect two
very different (though certainly related) tasks.

I think the appropriate thing to do is actually to join up with the
TaskFlow project and consider building it into a workflow service or tools
(it is just a library right now).


There is such a state graph for each of our deployment entities (service,
VMs, other things). There is also an action that must be performed on each
state.


Heat does its own translation of the orchestration template into a
workflow right now, but we have already discussed using TaskFlow to
break up the orchestration graph into distributable jobs. As we get more
sophisticated on updates (rolling/canary for instance) we'll need to
be able to reason about the process without having to glue all the
pieces together.
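As a rough illustration of breaking an orchestration graph into distributable
jobs, here is a generic topological batching sketch -- not Heat's or
TaskFlow's actual code; the resource names are invented:

```python
def execution_batches(deps):
    """deps maps resource -> set of resources it depends on.
    Returns batches in execution order; every resource in a batch can be
    dispatched in parallel (e.g. as independent TaskFlow jobs)."""
    remaining = {r: set(d) for r, d in deps.items()}
    batches = []
    while remaining:
        # Resources with no unmet dependencies are ready to run.
        ready = [r for r, d in remaining.items() if not d]
        if not ready:
            raise ValueError("dependency cycle in orchestration graph")
        batches.append(sorted(ready))
        for r in ready:
            del remaining[r]
        for d in remaining.values():
            d.difference_update(ready)
    return batches

deps = {"server": {"network"}, "network": set(),
        "volume": set(), "attach": {"server", "volume"}}
print(execution_batches(deps))
# -> [['network', 'volume'], ['server'], ['attach']]
```

The batch structure is what makes reasoning about rolling or canary updates
tractable: each batch is a natural checkpoint.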


We propose to extend HOT DSL with workflow definition capabilities where
you can describe step-by-step instructions to install a service and properly
handle errors on each step.

We already have experience in implementing the DSL, workflow description,
and a processing mechanism for complex deployments, and believe we'll all
benefit by re-using this experience and existing code, having properly
discussed and agreed on abstraction layers and the distribution of
responsibilities between OS components. There is an idea of implementing
part of the workflow processing mechanism as a part of Convection


Re: [openstack-dev] [Heat] Plugin to use Docker containers a resources in a template

2013-10-21 Thread Zane Bitter

On 18/10/13 03:06, Sam Alba wrote:

Hi all,

I've been recently working on a Docker plugin for Heat that makes it
possible to use Docker containers as resources.

I've just opened the repository:
https://github.com/dotcloud/openstack-heat-docker


Cool, nice work. Thanks for sharing! :)

I agree that we shouldn't see this as a replacement for a Nova driver 
(mainly because it doesn't take advantage of Keystone for authenticating 
the user, nor abstract the pool of available hosts away from the user), 
but it is a really interesting concept to play around with. I too would 
definitely welcome it in Heat's /contrib directory where it can be 
subject to continuous testing to make sure that any changes in Heat 
don't break it.


So, here's a crazy, half-baked idea that I almost posted to the list 
last week: we've been discussing adding software configurations to the 
HOT format, to allow users (amongst other things) to deploy multiple 
independent software configurations to the same Nova VM... when we do so 
should we deploy each config in a Linux container?


Discuss.



It's now possible to do that via Nova (since there is now a Docker
driver for it). But the idea here is not to replace the Nova driver
with this Heat plugin, the idea is just to propose a different path.

Basically, Docker itself has a REST API[1] with all the features needed to
deploy and manage containers; the Nova driver uses this API. However,
having the Nova API in front of it makes it hard to bring all Docker
features to the user: basically, everything has to fit into the Nova
etc... And a lot of Docker features are not available yet; I admit
that some of them will be hard to support (docker Env variables,
Volumes, etc... how should they fit in Nova?).

The idea of this Docker plugin for Heat is to use the whole Docker API
directly from a template. All possible parameters for creating a
container from the Docker API[2] can be defined from the template.
This allows more flexibility.

Since this approach is a bit different from the normal OpenStack
workflow (for instance, Nova's role is to abstract all computing units
right now), I am interested in getting feedback on this.

Obviously, I'll keep maintaining the Docker driver for Nova and I'm
also working on putting together some new features I'll propose for
the next release.


[1] http://docs.docker.io/en/latest/api/docker_remote_api_v1.5/
[2] 
http://docs.docker.io/en/latest/api/docker_remote_api_v1.5/#create-a-container
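As a sketch of how directly a template's parameters could map onto the
create-container call, here is a hypothetical helper that builds the request
body for POST /containers/create. Field names follow the v1.5 docs linked
above, but treat the details as illustrative rather than authoritative:

```python
import json

def create_container_payload(image, cmd=None, env=None, volumes=None):
    """Build the JSON body for POST /containers/create (Docker Remote
    API v1.5 style). A Heat plugin could pass template properties
    straight through like this instead of squeezing them into Nova."""
    body = {"Image": image}
    if cmd:
        body["Cmd"] = cmd
    if env:
        # The API expects env as a list of KEY=value strings.
        body["Env"] = ["%s=%s" % (k, v) for k, v in sorted(env.items())]
    if volumes:
        body["Volumes"] = {path: {} for path in volumes}
    return json.dumps(body)

print(create_container_payload("ubuntu",
                               cmd=["/bin/sh", "-c", "env"],
                               env={"DB_HOST": "10.0.0.5"}))
```

This is exactly the kind of parameter (Env, Volumes) that has no natural home
in the Nova API today, which is the motivation for the direct-plugin path.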




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] A prototype for cross-vm synchronization and communication

2013-10-21 Thread Lakshminaraya Renganarayana

Thomas Spatzier wrote on 10/21/2013 08:29:47 AM:

> you mentioned an example in your original post, but I did not find it. Can
> you add the example?

Hi Thomas,

Here is the example I used earlier:

For example, consider
a two VM app, with VMs vmA, vmB, and a set of software components (ai's and
bi's)
to be installed on them:

vmA = base-vmA + a1 + a2 + a3
vmB = base-vmB + b1 + b2 + b3

Let us say that software component b1 of vmB requires a config value
produced by
software component a1 of vmA. How to declaratively model this dependence?
Clearly,
modeling a dependence between just base-vmA and base-vmB is not enough.
However,
defining a dependence between the whole of vmA and vmB is too coarse. It
would be ideal
to be able to define a dependence at the granularity of software
components, i.e.,
vmB.b1 depends on vmA.a1. Of course, it would also be good to capture what
value
is passed between vmB.b1 and vmA.a1, so that the communication can be
facilitated
by the orchestration engine.
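The component-granularity dependence above could be modeled like this -- a
hypothetical sketch, not the prototype's code. Declaring that vmB.b1 depends
on vmA.a1 is enough for an engine to derive a correct deployment order:

```python
# Dependencies are declared between software components, not whole VMs;
# component names follow the vmA/vmB example above.
components = {
    "vmA.a1": [], "vmA.a2": [], "vmA.a3": [],
    "vmB.b1": ["vmA.a1"],   # b1 consumes a config value produced by a1
    "vmB.b2": [], "vmB.b3": [],
}

def deploy_order(comps):
    """Return an order in which each component comes after everything it
    depends on (simple depth-first topological sort)."""
    order, seen = [], set()
    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in comps[name]:
            visit(dep)
        order.append(name)
    for name in sorted(comps):
        visit(name)
    return order

order = deploy_order(components)
print(order.index("vmA.a1") < order.index("vmB.b1"))  # True
```

Attaching the passed value (e.g. a1's output attribute) to the edge is the
remaining piece, which is what the zookeeper-backed communication handles at
runtime.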


Thanks,
LN

>
> Lakshminaraya Renganarayana  wrote on 18.10.2013
> 20:57:43:
> > From: Lakshminaraya Renganarayana 
> > To: OpenStack Development Mailing List
> ,
> > Date: 18.10.2013 21:01
> > Subject: Re: [openstack-dev] [Heat] A prototype for cross-vm
> > synchronization and communication
> >
> > Just wanted to add a couple of clarifications:
> >
> > 1. the cross-vm dependences are captured via the read/writes of
> > attributes in resources and in software components (described in
> > metadata sections).
> >
> > 2. these dependences are then realized via blocking-reads and writes
> > to zookeeper, which realizes the cross-vm synchronization and
> > communication of values between the resources.
> >
> > Thanks,
> > LN
> >
> >
> > Lakshminaraya Renganarayana/Watson/IBM@IBMUS wrote on 10/18/2013
02:45:01
> PM:
> >
> > > From: Lakshminaraya Renganarayana/Watson/IBM@IBMUS
> > > To: OpenStack Development Mailing List
> 
> > > Date: 10/18/2013 02:48 PM
> > > Subject: [openstack-dev] [Heat] A prototype for cross-vm
> > > synchronization and communication
> > >
> > > Hi,
> > >
> > > In the last Openstack Heat meeting there was good interest in
> > > proposals for cross-vm synchronization and communication and I had
> > > mentioned the prototype I have built. I had also promised that I
> > > will post an outline of the prototype ... Here it is. I might have
> > > missed some details, please feel free to ask / comment and I would
> > > be happy to explain more.
> > > ---
> > > Goal of the prototype: Enable cross-vm synchronization and
> > > communication using high-level declarative description (no wait-
> > > conditions) Use chef as the CM tool.
> > >
> > > Design rationale / choices of the prototype (note that these were
> > > made just for the prototype and I am not proposing them to be the
> > > choices for Heat/HOT):
> > >
> > > D1: No new construct in Heat template
> > > => use metadata sections
> > > D2: No extensions to core Heat engine
> > > => use a pre-processor that will produce a Heat template that the
> > > standard Heat engine can consume
> > > D3: Do not require chef recipes to be modified
> > > => use a convention of accessing inputs/outputs from chef node[][]
> > > => use ruby meta-programming to intercept reads/writes to node[][]
> > > forward values
> > > D4: Use a standard distributed coordinator (don't reinvent)
> > > => use zookeeper as a coordinator and as a global data space for
> > communciation
> > >
> > > Overall, the flow is the following:
> > > 1. User specifies a Heat template with details about software config
> > > and dependences in the metadata section of resources (see step S1
> below).
> > > 2. A pre-processor consumes this augmented heat template and
> > > produces another heat template with user-data sections with cloud-
> > > init scripts and also sets up a zookeeper instance with enough
> > > information to coordinate between the resources at runtime to
> > > realize the dependences and synchronization (see step S2)
> > > 3. The generated heat template is fed into standard heat engine to
> > > deploy. After the VMs are created the cloud-init script kicks in.
> > > The cloud init script installs chef solo and then starts the
> > > execution of the roles specified in the metadata section. During
> > > this execution of the recipes the coordination is realized (see
> > > steps S2 and S3 below).
> > >
> > > Implementation scheme:
> > > S1. Use metadata section of each resource to describe  (see
> > attached example)
> > > - a list of roles
> > > - inputs to and outputs from each role and their mapping to resource
> > > attrs (any attr)
> > > - convention: these inputs/outputs will be through chef node attrs
node
> [][]
> > >
> > > S2. Dependence analysis and cloud init script generation
> > >
> > > Dependence analysis:
> > > - resolve every reference that can be statically resolved using
> > > Heat's functions (this step just uses Heat's current dependence
> > > analysis -- Thanks to Zane Bitter for helping me understand this)

Re: [openstack-dev] [Heat] A prototype for cross-vm synchronization and communication

2013-10-21 Thread Lakshminaraya Renganarayana

Hi Stan,

Thanks for the comments. As you have observed, the prototype that I have
built is tied to Chef. I just wanted to describe it here for reference
and not as a proposal for the general implementation. What I would like to
work on is a more general solution that is agnostic to (or works with any)
underlying CM tool (such as chef, puppet, saltstack, murano, etc.).

Regarding identifying reads/writes: I was thinking that we could come up
with a general syntax + semantics of explicitly defining the reads/writes
of Heat components. I think we can extend Steve Baker's recent proposal, to
include the inputs/outputs in software component definitions. Your
experience with the Unified Agent would be valuable for this. I would be
happy to collaborate with you!

Thanks,
LN


Stan Lagun  wrote on 10/21/2013 10:03:58 AM:

> From: Stan Lagun 
> To: OpenStack Development Mailing List

> Date: 10/21/2013 10:18 AM
> Subject: Re: [openstack-dev] [Heat] A prototype for cross-vm
> synchronization and communication
>
> Hi Lakshminarayanan,

> Seems like a solid plan.
> I'm probably wrong here but ain't this too tied to chef? I believe
> the solution should equally be suitable for chef, puppet, SaltStack,
> Murano, or maybe all I need is just a plain bash script execution.
> It may be difficult to intercept script reads the way it is possible
> with chef's node[][]. In Murano we have a generic agent that could
> integrate all such deployment platforms using common syntax. Agent
> specification can be found here:
> https://wiki.openstack.org/wiki/Murano/UnifiedAgent
> and it can be helpful or at least can be a source for design ideas.

> I'm very positive on adopting such a solution in Heat. There would
> be a significant amount of work to abstract all underlying
> technologies (chef, Zookeper etc) so that they become pluggable and
> replaceable without introducing hard-coded dependencies for the Heat
> and bringing everything to production quality level. We could
> collaborate on bringing such solution to the Heat if it would be
> accepted by Heat's core team and community
>
>

> On Fri, Oct 18, 2013 at 10:45 PM, Lakshminaraya Renganarayana <
> lren...@us.ibm.com> wrote:
> Hi,
>
> In the last Openstack Heat meeting there was good interest in
> proposals for cross-vm synchronization and communication and I had
> mentioned the prototype I have built. I had also promised that I
> will post an outline of the prototype ... Here it is. I might have
> missed some details, please feel free to ask / comment and I would
> be happy to explain more.
> ---
> Goal of the prototype: Enable cross-vm synchronization and
> communication using high-level declarative description (no wait-
> conditions) Use chef as the CM tool.
>
> Design rationale / choices of the prototype (note that these were
> made just for the prototype and I am not proposing them to be the
> choices for Heat/HOT):
>
> D1: No new construct in Heat template
> => use metadata sections
> D2: No extensions to core Heat engine
> => use a pre-processor that will produce a Heat template that the
> standard Heat engine can consume
> D3: Do not require chef recipes to be modified
> => use a convention of accessing inputs/outputs from chef node[][]
> => use ruby meta-programming to intercept reads/writes to node[][]
> forward values
> D4: Use a standard distributed coordinator (don't reinvent)
> => use zookeeper as a coordinator and as a global data space for
communciation
>
> Overall, the flow is the following:
> 1. User specifies a Heat template with details about software config
> and dependences in the metadata section of resources (see step S1 below).
> 2. A pre-processor consumes this augmented heat template and
> produces another heat template with user-data sections with cloud-
> init scripts and also sets up a zookeeper instance with enough
> information to coordinate between the resources at runtime to
> realize the dependences and synchronization (see step S2)
> 3. The generated heat template is fed into standard heat engine to
> deploy. After the VMs are created the cloud-init script kicks in.
> The cloud init script installs chef solo and then starts the
> execution of the roles specified in the metadata section. During
> this execution of the recipes the coordination is realized (see
> steps S2 and S3 below).
>
> Implementation scheme:
> S1. Use metadata section of each resource to describe  (see attached
example)
> - a list of roles
> - inputs to and outputs from each role and their mapping to resource
> attrs (any attr)
> - convention: these inputs/outputs will be through chef node attrs node
[][]
>
> S2. Dependence analysis and cloud init script generation
>
> Dependence analysis:
> - resolve every reference that can be statically resolved using
> Heat's functions (this step just uses Heat's current dependence
> analysis -- Thanks to Zane Bitter for helping me understand this)
> - flag all unresolved references as values resolved at run-time at
> communicated
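The blocking-read/write coordination from D4 and step 3 of the prototype flow
could be sketched as follows. This uses an in-memory, threaded stand-in for
zookeeper purely to show the semantics -- it is not the prototype's actual
implementation, and the key name is invented:

```python
import threading

class CoordinationSpace:
    """In-memory stand-in for the zookeeper data space: writers publish
    values, readers block until the value is present."""
    def __init__(self):
        self._data = {}
        self._cond = threading.Condition()

    def write(self, key, value):
        with self._cond:
            self._data[key] = value
            self._cond.notify_all()

    def blocking_read(self, key, timeout=5.0):
        with self._cond:
            ok = self._cond.wait_for(lambda: key in self._data, timeout)
            if not ok:
                raise TimeoutError("no writer produced %s" % key)
            return self._data[key]

space = CoordinationSpace()
# "vmA.a1" publishes its output attribute from another thread...
threading.Timer(0.1, space.write, args=("a1.db_port", 5432)).start()
# ...while "vmB.b1" blocks until the value arrives.
print(space.blocking_read("a1.db_port"))  # 5432
```

In the real prototype the same pattern runs across VMs, with zookeeper
providing the shared data space and the blocking watch.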

[openstack-dev] [Trove] Testing of new service types support

2013-10-21 Thread Illia Khudoshyn
Hi all,

I've finished implementing the very first bits of MongoDB support in Trove
along with unit tests and faced an issue with properly testing it.

It is well known that right now only one service type per installation is
supported by Trove (it is set in the config). All of the testing infrastructure,
including the Trove-integration codebase and the Jenkins jobs, seems to rely on
that service type as well. So it seems impossible to run all the existing
tests AND some additional tests for the MongoDB service type in one pass, at
least until the Trove client allows passing a service type (I know there
is ongoing work in this area).

Please note that all of the above is about functional and integration
testing -- there are no issues with the unit tests.

So the question is: should I first submit the code to Trove and then
proceed with updating Trove-integration, or just put aside all that MongoDB
stuff until the client (and -integration) are ready?

PS AFAIK, there is some work on adding Cassandra and Riak (or Redis?)
support to Trove. These guys will likely face this issue as well.

-- 

Best regards,

Illia Khudoshyn,
Software Engineer, Mirantis, Inc.



38, Lenina ave. Kharkov, Ukraine

www.mirantis.com 

www.mirantis.ru



Skype: gluke_work

ikhudos...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-21 Thread Andrey Shestakov
2. It can be confusing because it is not clear which type a version 
belongs to (we could possibly add a "type" field to version).
Also, if you have a default type, then a specified version is treated as a 
version of the default type (no lookup in version.datastore_type_id),
but I think we can do a lookup in version.datastore_type_id before picking 
the default.


4. If a default version is needed, then it should be specified in the DB, 
because switching between versions can be frequent, and restarting the 
service to reload the config every time is not good.


On 10/21/2013 05:12 PM, Tim Simpson wrote:

Thanks for the feedback Andrey.

>> 2. Got this case in irc, and decided to pass type and version 
together to avoid confusion.
I don't understand how allowing the user to only pass the version 
would confuse anyone. Could you elaborate?


>> 3. Names of types and maybe versions can be good, but in the irc conversation this case was 
rejected; I can't remember the exact reason.

Hmm. Does anyone remember the reason for this?

>> 4. Actually, the "active" field in a version marks it as the default in its type.
>> Specifying a default version in the config can be useful if you have more than 
one active version in the default type.
If 'active' is allowed to be set for multiple rows of the 
'datastore_versions' table then it isn't a good substitute for the 
functionality I'm seeking, which is to allow operators to specify a 
*single* default version for each datastore_type in the database. I 
still think we should add a 'default_version_id' field to the 
'datastore_types' table.


Thanks,

Tim


*From:* Andrey Shestakov [ashesta...@mirantis.com]
*Sent:* Monday, October 21, 2013 7:15 AM
*To:* OpenStack Development Mailing List
*Subject:* Re: [openstack-dev] [Trove] How users should specify a 
datastore type when creating an instance


1. Good point.
2. Got this case in irc, and decided to pass type and version together 
to avoid confusion.
3. Names of types and maybe versions can be good, but in the irc 
conversation this case was rejected; I can't remember the exact reason.

4. Actually, the "active" field in a version marks it as the default in its type.
Specifying a default version in the config can be useful if you have more than 
one active version in the default type.
But how many versions are active in a type depends on the operator's 
configuration. And what if the "default version in config" is marked as 
inactive?


On 10/18/2013 10:30 PM, Tim Simpson wrote:

Hello fellow Trovians,

There has been some good work recently to figure out a way to specify 
a specific datastore  when using Trove. This is essential to 
supporting multiple datastores from the same install of Trove.


I have an issue with some elements of the proposed solution though, 
so I decided I'd start a thread here so we could talk about it.


As a quick refresher, here is the blueprint for this work (there are 
some gists appended to the end, but I figured the mailing list would 
be an easier venue for discussion):

https://wiki.openstack.org/wiki/Trove/trove-versions-types

One issue I have is with the way the instance create call will change 
to support different data stores. For example, here is the post call:


"""
{
  "instance" : {
  "flavorRef" : "2",
  "name" : "as",
  "datastore_type" : "e60153d4-8ac4-414a-ad58-fe2e0035704a",
  "datastore_version" : "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b",
  "volume" : { "size" : "1" }
}
}
"""

1. I think since we have two fields in the instance object we should 
make a new object for datastore and avoid the name prefixing, like this:


"""
{
 "instance" : {
  "flavorRef" : "2",
  "name" : "as",
  "datastore": {
"type" : "e60153d4-8ac4-414a-ad58-fe2e0035704a",
"version" : "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
  }
  "volume" : { "size" : "1" }
}
}
"""

2. I also think a datastore_version alone should be sufficient since 
the associated datastore type will be implied:


"""
{
  "instance" : {
  "flavorRef" : "2",
  "name" : "as",
  "datastore": {
"version" : "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
  }
  "volume" : { "size" : "1" }
}
}
"""

3. Additionally, while a datastore_type should have an ID in the 
Trove infrastructure database, it should also be possible to pass just 
the name of the datastore type to the instance call, such as "mysql" 
or "mongo". Maybe we could allow this in addition to the ID? I think 
this form should actually use the argument "type", and the id should 
then be passed as "type_id" instead.


"""
{
  "instance" : {
  "flavorRef" : "2",
  "name" : "as",
  "datastore": {
"type" : "mysql",
"version" : "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
  }
  "volume" : { "size" : "1" }
}
}

"""

4. Additionally, in the current pull request to implement this it is 
possible to avoid passing a version, but only if no more than one 
version of the datastore_type exists in the database.


I think i

Re: [openstack-dev] [ceilometer] [qa] Ceilometer ERRORS in normal runs

2013-10-21 Thread Neal, Phil
Sean, we currently have a BP out there to investigate basic tempest
integration and I think this might fall under the same umbrella. 
Unfortunately I've not been able to free up my development time 
for it, but I've assigned it out to someone who can take a look and 
report back.

https://blueprints.launchpad.net/tempest/+spec/basic-tempest-integration-for-ceilometer

- Phil
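The whitelist-based log check Sean describes below could be sketched roughly as follows. This is an illustrative stand-in, not the actual Tempest implementation: the log format and whitelist entries here are invented for the example.

```python
import re

# Illustrative whitelist of ERROR messages that are known and tolerated.
# Real entries would be per-service regexes maintained in the QA tooling.
WHITELIST = [
    re.compile(r"Connection to memcached lost"),
]

def unexpected_errors(log_lines):
    """Return ERROR lines that are not matched by any whitelist pattern."""
    errors = []
    for line in log_lines:
        if " ERROR " not in line:
            continue
        if any(pat.search(line) for pat in WHITELIST):
            continue
        errors.append(line)
    return errors

log = [
    "2013-10-19 14:51:51 INFO collector started",
    "2013-10-19 14:51:52 ERROR Connection to memcached lost",
    "2013-10-19 14:51:53 ERROR Unhandled exception in dispatcher",
]
print(unexpected_errors(log))
```

A gate job would fail when this returns a non-empty list, which is why the whitelist has to shrink over time rather than grow.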

> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: Sunday, October 20, 2013 7:39 AM
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [ceilometer] [qa] Ceilometer ERRORS in normal
> runs
> 
> Dave Kranz has been building a system so that we can ensure that during
> a Tempest run services don't spew ERRORs in the logs. Eventually, we're
> going to gate on this, because there is nothing that Tempest does to the
> system that should cause any OpenStack service to ERROR or stack trace
> (Errors should actually be exceptional events that something is wrong
> with the system, not regular events).
> 
> Ceilometer is currently one of the largest offenders in dumping ERRORs
> in the gate -
> http://logs.openstack.org/68/52768/1/check/check-tempest-devstack-vm-
> full/76f83a4/console.html#_2013-10-19_14_51_51_271
> (that item isn't in our whitelist yet, so you'll see a lot of it at the
> end of every run)
> 
> and
> http://logs.openstack.org/68/52768/1/check/check-tempest-devstack-vm-
> full/76f83a4/logs/screen-ceilometer-collector.txt.gz?level=TRACE
> for full details
> 
> This seems like something is wrong in the integration, and would be
> really helpful if we could get ceilometer eyes on this one to put ceilo
> into a non erroring state.
> 
>   -Sean
> 
> --
> Sean Dague
> http://dague.net
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [qa][keystone] Adding client library related tests to tempest

2013-10-21 Thread Steven Hardy
On Mon, Oct 21, 2013 at 08:14:03AM -0400, Sean Dague wrote:
> On 10/21/2013 05:19 AM, Steven Hardy wrote:
> 
> >>
> >>Definitely agree we should have plenty of end-to-end tests in the
> >>gate, it's the reason we've got the scenario tests, to do exactly
> >>this kind of thorough testing.
> >
> >Ok, it seems like a potential solution which may keep all involved happy
> >would be:
> >- Add new API tests which provide full coverage of the documented
> >   interfaces to trusts
> >- Add a few scenario tests which provide end-to-end testing of the most
> >   important interfaces (these will use the client API)
> >
> >The scenario tests could just be those in my patches, moved from client_lib
> >to scenario/identity?
> 
> If there is a rush on a short term landing of code, making it a
> scenario test is a fine approach. And API tests for trust would be
> *highly* appreciated.

Ok, if we can land API tests then I guess the scenario/client tests can
wait until after the summit discussions (ayoung has indicated he's OK with
that plan in https://review.openstack.org/#/c/51558/)

I've raised this BP to track adding the API tests, I'll work on getting
those together:

https://blueprints.launchpad.net/tempest/+spec/keystone-trust-api

Thanks,

Steve



Re: [openstack-dev] Announce of Rally - benchmarking system for OpenStack

2013-10-21 Thread Sean Dague

On 10/20/2013 02:36 PM, Alex Gaynor wrote:

There's several issues involved in doing automated regression checking
for benchmarks:

- You need a platform which is stable. Right now all our CI runs on
virtualized instances, and I don't think there's any particular
guarantee it'll be the same underlying hardware, further virtualized
systems tend to be very noisy and not give you the stability you need.
- You need your benchmarks to be very high precision, if you really want
to rule out regressions of more than N% without a lot of false positives.
- You need more than just checks on individual builds, you need long
term trend checking - 100 1% regressions are worse than a single 50%
regression.

Alex


Agreed on all these points. However, I think none of them changes where the 
load generation scripts should be developed.


They mostly speak to ensuring that we've got a repeatable hardware 
environment for running the benchmark, and that we've got the right kind 
of data collection and analysis to make it statistically valid.


Point #1 is hard, as it really does require bare metal. But let's put 
that aside for now, as I think there may be clouds being made 
available with which we could solve that.


But the rest of this is just software. If we had performance metering 
available in either the core servers or as part of Tempest we could get 
appropriate data. Then you'd need a good statistics engine to provide 
statistically relevant processing of that data. Not just line graphs, but 
real error bars and confidence intervals based on large numbers of runs. 
I've seen way too many line graphs arguing one point or another about 
config changes that turns out have error bars far beyond the results 
that are being seen. Any system that doesn't expose that isn't really 
going to be useful.


Actual performance regressions are going to be *really* hard to find in 
the gate, just because of the rate of code change that we have, and the 
variability we've seen on the guests.


Honestly, the statistics engine that actually just took in our existing 
large sets of data and got baseline variability would be a great step 
forward (that's new invention, no one has that right now). I'm sure we 
can figure out a good way to take the load generation into Tempest to be 
consistent with our existing validation and scenario tests. The metering 
could easily be proposed as a nova extension (ala coverage). And that 
seems to leave you with a setup tool, to pull this together in arbitrary 
environments.
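The kind of statistics engine described above starts with something much simpler than a framework: given many runs of the same job, report the mean with an honest confidence interval rather than a bare line graph. A minimal sketch, with made-up sample run times:

```python
import math
import statistics

def run_time_summary(samples, z=1.96):
    """Mean run time with an approximate 95% confidence interval.

    `samples` is a list of job durations; on noisy virtualized workers
    the interval, not the mean alone, is the honest part of the answer.
    """
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)          # sample standard deviation
    half_width = z * stdev / math.sqrt(len(samples))
    return mean, (mean - half_width, mean + half_width)

# Hypothetical tempest run times (minutes) across a set of gate runs.
samples = [42.0, 44.5, 41.2, 47.8, 43.1, 45.9, 42.7, 44.0]
mean, (lo, hi) = run_time_summary(samples)
print(f"mean={mean:.1f} min, 95% CI=({lo:.1f}, {hi:.1f})")
```

A regression would only be worth flagging when a new run falls outside an interval like this, which is exactly the "error bars far beyond the results" point made above.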


And that's really what I mean about integrating better. Whenever 
possible figuring out how functionality could be added to existing 
projects, especially when that means they are enhanced not only for your 
use case, but for other use cases that those projects have wanted for a 
while (seriously, I'd love to have statistically valid run time 
statistics for tempest that show us when we go off the rails, like we 
did last week for a few days, and quantify long term variability and 
trends in the stack). It's harder in the short term to do that, because 
it means compromises along the way, but the long term benefit to 
OpenStack is much greater than another project which duplicates effort 
from a bunch of existing projects.


-Sean

--
Sean Dague
http://dague.net



Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-21 Thread Tim Simpson
Thanks for the feedback Andrey.

>> 2. Got this case in irc, and decided to pass type and version together to 
>> avoid confusion.
I don't understand how allowing the user to only pass the version would confuse 
anyone. Could you elaborate?

>> 3. Names of types and maybe versions can be good, but in the irc conversation 
>> this case was rejected; I can't remember the exact reason.
Hmm. Does anyone remember the reason for this?

>> 4. Actually, the "active" field in a version marks it as the default in its type.
>> Specifying a default version in the config can be useful if you have more than one 
>> active version in the default type.
If 'active' is allowed to be set for multiple rows of the 'datastore_versions' 
table then it isn't a good substitute for the functionality I'm seeking, which 
is to allow operators to specify a *single* default version for each 
datastore_type in the database. I still think we should add a 
'default_version_id' field to the 'datastore_types' table.

Thanks,

Tim


From: Andrey Shestakov [ashesta...@mirantis.com]
Sent: Monday, October 21, 2013 7:15 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Trove] How users should specify a datastore type 
when creating an instance

1. Good point.
2. Got this case in irc, and decided to pass type and version together to avoid 
confusion.
3. Names of types and maybe versions can be good, but in the irc conversation 
this case was rejected; I can't remember the exact reason.
4. Actually, the "active" field in a version marks it as the default in its type.
Specifying a default version in the config can be useful if you have more than one 
active version in the default type.
But how many versions are active in a type depends on the operator's configuration. And 
what if the "default version in config" is marked as inactive?

On 10/18/2013 10:30 PM, Tim Simpson wrote:
Hello fellow Trovians,

There has been some good work recently to figure out a way to specify a 
specific datastore  when using Trove. This is essential to supporting multiple 
datastores from the same install of Trove.

I have an issue with some elements of the proposed solution though, so I 
decided I'd start a thread here so we could talk about it.

As a quick refresher, here is the blueprint for this work (there are some 
gists appended to the end, but I figured the mailing list would be an easier 
venue for discussion):
https://wiki.openstack.org/wiki/Trove/trove-versions-types

One issue I have is with the way the instance create call will change to 
support different data stores. For example, here is the post call:

"""
{
  "instance" : {
  "flavorRef" : "2",
  "name" : "as",
  "datastore_type" : "e60153d4-8ac4-414a-ad58-fe2e0035704a",
  "datastore_version" : "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b",
  "volume" : { "size" : "1" }
}
}
"""

1. I think since we have two fields in the instance object we should make a new 
object for datastore and avoid the name prefixing, like this:

"""
{
 "instance" : {
  "flavorRef" : "2",
  "name" : "as",
  "datastore": {
"type" : "e60153d4-8ac4-414a-ad58-fe2e0035704a",
"version" : "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
  }
  "volume" : { "size" : "1" }
}
}
"""

2. I also think a datastore_version alone should be sufficient since the 
associated datastore type will be implied:

"""
{
  "instance" : {
  "flavorRef" : "2",
  "name" : "as",
  "datastore": {
"version" : "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
  }
  "volume" : { "size" : "1" }
}
}
"""

3. Additionally, while a datastore_type should have an ID in the Trove 
infrastructure database, it should also be possible to pass just the name of the 
datastore type to the instance call, such as "mysql" or "mongo". Maybe we could 
allow this in addition to the ID? I think this form should actually use the 
argument "type", and the id should then be passed as "type_id" instead.

"""
{
  "instance" : {
  "flavorRef" : "2",
  "name" : "as",
  "datastore": {
"type" : "mysql",
"version" : "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
  }
  "volume" : { "size" : "1" }
}
}

"""

4. Additionally, in the current pull request to implement this it is possible 
to avoid passing a version, but only if no more than one version of the 
datastore_type exists in the database.

I think instead the datastore_type row in the database should also have a 
"default_version_id" property, that an operator could update to the most recent 
version or whatever other criteria they wish to use, meaning the call could 
become this simple:

"""
{
  "instance" : {
  "flavorRef" : "2",
  "name" : "as",
  "datastore": {
"type" : "mysql"
  }
  "volume" : { "size" : "1" }
}
}
"""

Thoughts?

Thanks,

Tim
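The lookup Tim is proposing, an explicit version when given, otherwise the type's default_version_id, could look roughly like this. The table layout below is a guess at the schema under discussion, not Trove's actual code; the IDs are the shortened examples from the thread.

```python
# Hypothetical in-memory stand-ins for the datastore_types /
# datastore_versions tables discussed in this thread.
TYPES = {
    "mysql": {"id": "e60153d4", "default_version_id": "94ed1f9f"},
}
VERSIONS = {
    "94ed1f9f": {"id": "94ed1f9f", "datastore_type_id": "e60153d4"},
}

def resolve_version(type_name, version_id=None):
    """Pick an explicit version, or fall back to the type's default."""
    ds_type = TYPES[type_name]
    if version_id is None:
        version_id = ds_type["default_version_id"]
        if version_id is None:
            raise ValueError("no version given and no default configured")
    version = VERSIONS[version_id]
    if version["datastore_type_id"] != ds_type["id"]:
        raise ValueError("version does not belong to datastore type")
    return version

print(resolve_version("mysql")["id"])  # falls back to the type's default
```

The belongs-to check is what makes point 2 (version alone implies the type) and point 4 (per-type default) consistent with each other.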




Re: [openstack-dev] [Heat] A prototype for cross-vm synchronization and communication

2013-10-21 Thread Stan Lagun
Hi Lakshminarayanan,

Seems like a solid plan.
I'm probably wrong here, but isn't this too tied to Chef? I believe the
solution should be equally suitable for Chef, Puppet, SaltStack, Murano, or
even plain bash script execution. It may be difficult
to intercept script reads the way it is possible with Chef's node[][]. In
Murano we have a generic agent that can integrate all such deployment
platforms using a common syntax. The agent specification can be found here:
https://wiki.openstack.org/wiki/Murano/UnifiedAgent and it may be helpful,
or at least a source of design ideas.

I'm very positive on adopting such a solution in Heat. There would be a
significant amount of work to abstract all the underlying technologies (Chef,
ZooKeeper, etc.) so that they become pluggable and replaceable without
introducing hard-coded dependencies in Heat, and to bring everything to
production quality. We could collaborate on bringing such a solution into
Heat if it is accepted by Heat's core team and the community.


On Fri, Oct 18, 2013 at 10:45 PM, Lakshminaraya Renganarayana <
lren...@us.ibm.com> wrote:

> Hi,
>
> In the last Openstack Heat meeting there was good interest in proposals
> for cross-vm synchronization and communication and I had mentioned the
> prototype I have built. I had also promised that I will post an outline of
> the prototype ... Here it is. I might have missed some details, please feel
> free to ask / comment and I would be happy to explain more.
> ---
> Goal of the prototype: Enable cross-vm synchronization and communication
> using high-level declarative description (no wait-conditions) Use chef as
> the CM tool.
>
> Design rationale / choices of the prototype (note that these were made
> just for the prototype and I am not proposing them to be the choices for
> Heat/HOT):
>
> D1: No new construct in Heat template
>  => use metadata sections
> D2: No extensions to core Heat engine
>  => use a pre-processor that will produce a Heat template that the
> standard Heat engine can consume
> D3: Do not require chef recipes to be modified
>  => use a convention of accessing inputs/outputs from chef node[][]
>  => use ruby meta-programming to intercept reads/writes to node[][]
> forward values
> D4: Use a standard distributed coordinator (don't reinvent)
>  => use zookeeper as a coordinator and as a global data space for
> communication
>
> Overall, the flow is the following:
> 1. User specifies a Heat template with details about software config and
> dependences in the metadata section of resources (see step S1 below).
> 2. A pre-processor consumes this augmented heat template and produces
> another heat template with user-data sections with cloud-init scripts and
> also sets up a zookeeper instance with enough information to coordinate
> between the resources at runtime to realize the dependences and
> synchronization (see step S2)
> 3. The generated heat template is fed into standard heat engine to deploy.
> After the VMs are created the cloud-init script kicks in. The cloud init
> script installs chef solo and then starts the execution of the roles
> specified in the metadata section. During this execution of the recipes the
> coordination is realized (see steps S2 and S3 below).
>
> Implementation scheme:
> S1. Use metadata section of each resource to describe  (see attached
> example)
>  - a list of roles
>  - inputs to and outputs from each role and their mapping to resource
> attrs (any attr)
>  - convention: these inputs/outputs will be through chef node attrs
> node[][]
>
> S2. Dependence analysis and cloud init script generation
>
>  Dependence analysis:
>  - resolve every reference that can be statically resolved using Heat's
> functions (this step just uses Heat's current dependence analysis -- Thanks
> to Zane Bitter for helping me understand this)
>  - flag all unresolved references as values resolved at run-time at
> communicated via the coordinator
>
>  Use cloud-init in user-data sections:
>  - automatically generate a script that would bootstrap chef and will run
> the roles/recipes in the order specified in the metadata section
>  - generate dependence info for zookeeper to coordinate at runtime
>
> S3. Coordinate synchronization and communication at run-time
>  - intercept reads and writes to node[][]
>  - if it is a remote read, get it from Zookeeper
>  - execution will block till the value is available
>  - if write is for a value required by a remote resource, write the value
> to Zookeeper
>
> The prototype is implemented in Python and Ruby is used for chef
> interception.
>
> There are alternatives for many of the choices I have made for the
> prototype:
>  - zookeeper can be replaced with any other service that provides a data
> space and distributed coordination
>  - chef can be replaced by any other CM tool (a little bit of design /
> convention needed for other CM tools because of the interception used in
> the prototype to catch reads/writes to no
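The blocking reads and writes of step S3 can be sketched without ZooKeeper using an in-memory stand-in for the global data space. The prototype itself uses ZooKeeper for this; the class and key names below are illustrative only.

```python
import threading

class DataSpace:
    """Minimal stand-in for the ZooKeeper-backed global data space:
    reads block until some writer publishes the attribute (step S3)."""

    def __init__(self):
        self._values = {}
        self._cond = threading.Condition()

    def write(self, key, value):
        with self._cond:
            self._values[key] = value
            self._cond.notify_all()

    def read(self, key, timeout=5.0):
        with self._cond:
            if not self._cond.wait_for(lambda: key in self._values, timeout):
                raise TimeoutError(key)
            return self._values[key]

space = DataSpace()
# The "database VM" publishes its address once its recipe computes it...
threading.Thread(target=space.write, args=("db/ip", "10.0.0.5")).start()
# ...while the "web VM" recipe blocks on the read until it is available.
print(space.read("db/ip"))
```

Swapping this class for a ZooKeeper node watch (or any other coordination service) is exactly the pluggability point raised earlier in the thread.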

[openstack-dev] Neutron - an issue regarding what API to follow

2013-10-21 Thread Akihiro Motoki
Hi,

The API document is the official one, and the wiki is used during
development.
It may be better to add a note to the wiki page to avoid such confusion.

I am not sure what confused you. Could you give me an example?

Thanks,
Akihiro

2013年10月21日月曜日 GROSZ, Maty (Maty)
maty.gr...@alcatel-lucent.com
:

>  Hey *,
>
> I got a little confused about which API we should follow regarding the Neutron
> VPN service…
>
> There is this wiki page https://wiki.openstack.org/wiki/Neutron/VPNaaS that
> covers the VPN APIs, whereas the formal Neutron API documentation,
> http://docs.openstack.org/api/openstack-network/2.0/content/vpnaas_ext_ops_service.html,
> describes a different API version and URL structure.
>
> Generally, my decision is to always follow the formal API documentation.
> But in this case I am a little confused…
>
> Can anyone help? What are the actual APIs?
>
> Thanks,
>
> Maty.
>


Re: [openstack-dev] Gerrit tools

2013-10-21 Thread Chmouel Boudjnah
On Mon, Oct 21, 2013 at 3:03 PM, Flavio Percoco  wrote:

> Also realize that OpenStack maintains gerritlib -
>> https://github.com/openstack-**infra/gerritlib
>>
>> Which anyone can contribute to (and is the code that every message posted
>> back to Gerrit by a bot user). It would actually be nice to enhance
>> gerritlib if there were enough features missing that are in python-gerrit.
>>
>
> Yup, that's part of the plan, python-gerrit rewrites a lot of stuff,
> though.



It seems that gerritlib uses SSH commands; isn't the plan to have Gerrit
with the full REST API enabled in the future, without needing to spawn SSH
commands for every call?

Chmouel.
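For reference, Gerrit's REST API prefixes its JSON responses with an XSSI guard line that clients must strip before decoding. A minimal sketch of handling such a response; the query shown in the comment and the sample payload are illustrative, not real review data:

```python
import json

MAGIC_PREFIX = ")]}'"  # Gerrit's XSSI-protection prefix on JSON responses

def parse_gerrit_json(body):
    """Strip the XSSI guard and decode the JSON payload."""
    if body.startswith(MAGIC_PREFIX):
        body = body[len(MAGIC_PREFIX):]
    return json.loads(body)

# A trimmed-down example of what GET /changes/?q=status:open might return.
raw = ")]}'\n[{\"_number\": 52007, \"subject\": \"Fix all-tenants handling\"}]"
changes = parse_gerrit_json(raw)
print(changes[0]["subject"])
```

A REST-based gerritlib would replace each spawned `gerrit query` SSH call with one HTTP request plus this parsing step.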


Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-21 Thread Mark McLoughlin
On Sat, 2013-10-19 at 08:24 -0400, Monty Taylor wrote:
> 
> On 10/19/2013 05:49 AM, Michael Still wrote:
> > On Sat, Oct 19, 2013 at 7:52 PM, Clint Byrum  wrote:
> > 
> >> I suggest that we just put Copyright headers back in the source files.
> >> That will make Debian's licensecheck work fairly automatically. A single
> >> file that tries to do exactly what debian/copyright would do seems a bit
> >> odd.
> >
> > The problem here is that the copyright headers were wrong. They aren't
> > religiously added to, and sometimes people have tried to "gift" code
> > to the Foundation by saying the code is copyright the Foundation,
> > which isn't always true. So, we can't lean on these headers for
> > accurate statements of copyright.
> 
> This is correct. As with many things that are harder for us than for
> other people, we have >=1000 developers and the history thus-far has
> been for people to be rather antagonistic and annoyed when someone tries
> to suggest proper copyright attribution.
> 
> What we CAN say is that every single commit is Apache licensed. Our CLA
> and enforcement of it, sad as this statement makes me, ensures that we
> know that.

Right. We work hard to ensure that all copyright holders license their
contribution under the Apache License. A CLA isn't the only way of doing
this, but it's what we do now.

> I'm not sure what to do re: FTP masters. Could someone expand for me
> like I'm an idiot what the goal they are trying to achieve is? I _think_
> that they're trying to make sure that the code is free software and that
> it is annotated somewhere that we know this to be true, yeah? Is there
> an additional thing being attempted?

In other words, what exactly is a list of copyright holders good for? It
doesn't affect the license and there's no requirement under the Apache
License for Debian to credit the copyright holders.

It would be really silly of us to burden ourselves with maintaining
lists of copyright holders if no-one can explain how it helps anything.

Mark.




Re: [openstack-dev] [novaclient]should administrator can see all servers of all tenants by default?

2013-10-21 Thread Christopher Yeoh
On Mon, Oct 21, 2013 at 1:32 AM, Lingxian Kong  wrote:

> two questions here:
> 1. whther '--all-tenants' should be with '--tenant' or not.
> 2. can admin see other tenant's server using its name instead of id?
>
>
I think a name search as well as id makes sense, though that change lies
entirely within
python-novaclient and could potentially take a long time and could be
avoided by passing 'all_tenants 0'.

btw I have submitted a series of patches (IMO some cleanup is required as
well) which address
the tenant_id/all_tenants issue:

https://review.openstack.org/#/c/52007/
https://review.openstack.org/#/c/52864/
https://review.openstack.org/#/c/52919/

Chris.
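A client-side name lookup of the kind discussed above amounts to filtering the server list the client already fetches; a rough sketch, where the dict fields stand in for the API's server representation and are assumptions for illustration:

```python
def find_servers(servers, name=None, server_id=None):
    """Filter a server listing by exact id or substring name match,
    in the spirit of how novaclient narrows lookup results."""
    matches = []
    for server in servers:
        if server_id is not None and server["id"] == server_id:
            matches.append(server)
        elif name is not None and name in server["name"]:
            matches.append(server)
    return matches

# Hypothetical result of an admin listing servers with all_tenants=1.
servers = [
    {"id": "a1", "name": "web-1", "tenant_id": "t1"},
    {"id": "b2", "name": "db-1", "tenant_id": "t2"},
    {"id": "c3", "name": "web-2", "tenant_id": "t2"},
]
print([s["id"] for s in find_servers(servers, name="web")])
```

Because names are not unique across tenants, a name search can return several matches where an id lookup returns at most one, which is the main design wrinkle.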



> 2013/10/16 Robert Collins 
>
>> I think that would be fine: --tenant FOO implying 'show me results
>> from FOO if I have access to that' makes total sense to me.
>>
>> On 16 October 2013 17:52, Christopher Yeoh  wrote:
>> >
>> > --all-tenants would only be turned on if --tenant was specified, not a
>> > general default. Do you see that causing any problems for non trivial
>> > clouds?
>> >
>> > Chris
>> >
>> >
>> > On Tue, Oct 15, 2013 at 7:26 PM, Robert Collins <
>> robe...@robertcollins.net>
>> > wrote:
>> >>
>> >> Please don't invert the bug though: if --all-tenants becomes the
>> >> default nova server behaviour in v3, please ensure there is a
>> >> --no-all-tenants to unbreak it for non-trivial clouds.
>> >>
>> >> Thanks!
>> >> -Rob
>> >>
>> >> On 15 October 2013 20:54, Lingxian Kong  wrote:
>> >> > then, what's the conclusion that we can begin to start?
>> >> >
>> >> >
>> >> > 2013/10/15 Christopher Yeoh 
>> >> >>
>> >> >> On Tue, Oct 15, 2013 at 10:25 AM, Caitlin Bestler
>> >> >>  wrote:
>> >> >>>
>> >> >>> On 10/14/2013 8:37 AM, Ben Nemec wrote:
>> >> 
>> >>  I agree that this needs to be fixed.  It's very counterintuitive,
>> if
>> >>  nothing else (which is also my argument against requiring
>> all-tenants
>> >>  for admin users in the first place).  The only question for me is
>> >>  whether to fix it in novaclient or in Nova itself.
>> >> >>>
>> >> >>>
>> >> >>> If it is fixed in novaclient, then any unscrupulous tenant would be
>> >> >>> able
>> >> >>> to unfix it in novaclient themselves and gain the same information
>> >> >>> about
>> >> >>> other tenants that the bug is allowing.
>> >> >>>
>> >> >>> So if the intent is to protect leakage of information across tenant
>> >> >>> lines
>> >> >>> then the correct solution is a real lock (i.e. in Nova) rather
>> >> >>> than just a screen door "lock".
>> >> >>>
>> >> >>
>> >> >> The novaclient fix for V2 would be simply to automatically pass
>> >> >> all-tenants where needed. It would not give a non admin user any
>> extra
>> >> >> privileges even if they modified novaclient.
>> >> >>
>> >> >> Chris
>> >> >>
>> >> >> ___
>> >> >> OpenStack-dev mailing list
>> >> >> OpenStack-dev@lists.openstack.org
>> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >> >>
>> >> >
>> >> >
>> >> >
>> >> > --
>> >> > 
>> >> > Lingxian Kong
>> >> > Huawei Technologies Co.,LTD.
>> >> > IT Product Line CloudOS PDU
>> >> > China, Xi'an
>> >> > Mobile: +86-18602962792
>> >> > Email: konglingx...@huawei.com; anlin.k...@gmail.com
>> >> >
>> >> > ___
>> >> > OpenStack-dev mailing list
>> >> > OpenStack-dev@lists.openstack.org
>> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >> >
>> >>
>> >>
>> >>
>> >> --
>> >> Robert Collins 
>> >> Distinguished Technologist
>> >> HP Converged Cloud


[openstack-dev] [nova] resource tracking

2013-10-21 Thread Gary Kotton
Hi,
I have encountered a few issues regarding resource tracking and hopefully
someone can help clarify. From what I understand, we ignore the actual disk
and memory usage reported by the hypervisor and instead calculate usage from
the allocated instances. For example, if the hypervisor has cached images or
volume support, that disk usage is ignored.
I have posted https://review.openstack.org/#/c/52900/ but I am not 100% sure
that this is the correct way of addressing this (here we need to ensure that
the hypervisor is not also counting the disk space used by instances, and
similarly for memory).
Thanks
Gary
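The discrepancy Gary describes can be illustrated with a toy comparison of the two accounting strategies (numbers and function names here are illustrative only, not Nova's actual resource tracker code):

```python
# Sketch of the two disk-accounting strategies discussed above.

def usage_from_allocations(instances):
    """Resource-tracker style: sum what was *allocated* to instances."""
    return sum(inst["disk_gb"] for inst in instances)

def usage_from_hypervisor(total_gb, free_gb):
    """Hypervisor style: what the host actually reports as used."""
    return total_gb - free_gb

instances = [{"disk_gb": 10}, {"disk_gb": 20}]
allocated = usage_from_allocations(instances)             # 30 GB
actual = usage_from_hypervisor(total_gb=100, free_gb=55)  # 45 GB

# The 15 GB gap is exactly the kind of usage (image cache, volumes, ...)
# that allocation-based tracking never sees -- and naively adding the two
# numbers would double-count the instances' own disk, which is why the
# hypervisor-reported figure must exclude instance disk before being used.
print(allocated, actual, actual - allocated)  # 30 45 15
```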


Re: [openstack-dev] Gerrit tools

2013-10-21 Thread Flavio Percoco

On 21/10/13 07:44 -0400, Sean Dague wrote:

On 10/21/2013 04:04 AM, Flavio Percoco wrote:

On 20/10/13 05:01 +, Joshua Harlow wrote:

I created some gerrit tools that I think others might find useful.

https://github.com/harlowja/gerrit_view



I worked on this Python library for Gerrit[0] a couple of months ago and
I've been using it for this gerrit-cli[1] tool. I was wondering if you'd
like to migrate your Gerrit queries and make them use python-gerrit
instead? I can do that for you.

[0] https://github.com/FlaPer87/python-gerrit
[1] https://github.com/FlaPer87/gerrit-cli

BTW, Big +1 for the curses UI!


Also realize that OpenStack maintains gerritlib - 
https://github.com/openstack-infra/gerritlib


Which anyone can contribute to (and is the code that every bot uses to 
post messages back to gerrit). It would actually be nice to enhance 
gerritlib if there are enough features in python-gerrit that it is 
missing.


Yup, that's part of the plan, python-gerrit rewrites a lot of stuff,
though.
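For context on what these tools build on: Gerrit's ssh query interface ("gerrit query --format=JSON") emits one JSON object per line, followed by a trailing stats record. A minimal parser over a hand-made sample (the change data below is invented for illustration) might look like:

```python
import json

# Hand-made sample of "gerrit query --format=JSON" output: one JSON
# object per change, plus a final record with type "stats".
SAMPLE = """\
{"project":"openstack/nova","subject":"Fix X","status":"NEW","number":"52900"}
{"project":"openstack/heat","subject":"Fix Y","status":"MERGED","number":"52901"}
{"type":"stats","rowCount":2,"runTimeMilliseconds":12}
"""

def parse_query_output(text):
    """Split gerrit query output into (list of changes, stats record)."""
    changes, stats = [], None
    for line in text.splitlines():
        record = json.loads(line)
        if record.get("type") == "stats":
            stats = record
        else:
            changes.append(record)
    return changes, stats

changes, stats = parse_query_output(SAMPLE)
print(len(changes), stats["rowCount"])  # 2 2
```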

Cheers,
FF

--
@flaper87
Flavio Percoco



[openstack-dev] Neutron - an issue regarding what API to follow

2013-10-21 Thread GROSZ, Maty (Maty)
Hey *,

I am a little confused about which API we should follow for the Neutron VPN
service.
There is this wiki page https://wiki.openstack.org/wiki/Neutron/VPNaaS that
covers the VPN APIs, whereas the formal Neutron API documentation,
http://docs.openstack.org/api/openstack-network/2.0/content/vpnaas_ext_ops_service.html,
describes a different API version and URL structure.

Generally, my rule is always to follow the formal API documentation, but in
this case I am a little confused.

Can anyone help? What are the actual APIs?

Thanks,

Maty.


Re: [openstack-dev] [Heat] A prototype for cross-vm synchronization and communication

2013-10-21 Thread Thomas Spatzier
Hi Lakshmi,

you mentioned an example in your original post, but I did not find it. Can
you add the example?

Lakshminaraya Renganarayana  wrote on 18.10.2013
20:57:43:
> From: Lakshminaraya Renganarayana 
> To: OpenStack Development Mailing List
,
> Date: 18.10.2013 21:01
> Subject: Re: [openstack-dev] [Heat] A prototype for cross-vm
> synchronization and communication
>
> Just wanted to add a couple of clarifications:
>
> 1. the cross-vm dependences are captured via the read/writes of
> attributes in resources and in software components (described in
> metadata sections).
>
> 2. these dependences are then realized via blocking-reads and writes
> to zookeeper, which realizes the cross-vm synchronization and
> communication of values between the resources.
>
> Thanks,
> LN
>
>
> Lakshminaraya Renganarayana/Watson/IBM@IBMUS wrote on 10/18/2013 02:45:01
PM:
>
> > From: Lakshminaraya Renganarayana/Watson/IBM@IBMUS
> > To: OpenStack Development Mailing List

> > Date: 10/18/2013 02:48 PM
> > Subject: [openstack-dev] [Heat] A prototype for cross-vm
> > synchronization and communication
> >
> > Hi,
> >
> > In the last Openstack Heat meeting there was good interest in
> > proposals for cross-vm synchronization and communication and I had
> > mentioned the prototype I have built. I had also promised that I
> > will post an outline of the prototype ... Here it is. I might have
> > missed some details, please feel free to ask / comment and I would
> > be happy to explain more.
> > ---
> > Goal of the prototype: Enable cross-vm synchronization and
> > communication using a high-level declarative description (no
> > wait-conditions). Use chef as the CM tool.
> >
> > Design rationale / choices of the prototype (note that these were
> > made just for the prototype and I am not proposing them to be the
> > choices for Heat/HOT):
> >
> > D1: No new construct in Heat template
> > => use metadata sections
> > D2: No extensions to core Heat engine
> > => use a pre-processor that will produce a Heat template that the
> > standard Heat engine can consume
> > D3: Do not require chef recipes to be modified
> > => use a convention of accessing inputs/outputs from chef node[][]
> > => use ruby meta-programming to intercept reads/writes to node[][]
> > and forward values
> > D4: Use a standard distributed coordinator (don't reinvent)
> > => use zookeeper as a coordinator and as a global data space for
> > communication
> >
> > Overall, the flow is the following:
> > 1. User specifies a Heat template with details about software config
> > and dependences in the metadata section of resources (see step S1
below).
> > 2. A pre-processor consumes this augmented heat template and
> > produces another heat template with user-data sections with cloud-
> > init scripts and also sets up a zookeeper instance with enough
> > information to coordinate between the resources at runtime to
> > realize the dependences and synchronization (see step S2)
> > 3. The generated heat template is fed into standard heat engine to
> > deploy. After the VMs are created the cloud-init script kicks in.
> > The cloud init script installs chef solo and then starts the
> > execution of the roles specified in the metadata section. During
> > this execution of the recipes the coordination is realized (see
> > steps S2 and S3 below).
> >
> > Implementation scheme:
> > S1. Use metadata section of each resource to describe  (see
> attached example)
> > - a list of roles
> > - inputs to and outputs from each role and their mapping to resource
> > attrs (any attr)
> > - convention: these inputs/outputs will be through chef node attrs node
[][]
> >
> > S2. Dependence analysis and cloud init script generation
> >
> > Dependence analysis:
> > - resolve every reference that can be statically resolved using
> > Heat's functions (this step just uses Heat's current dependence
> > analysis -- Thanks to Zane Bitter for helping me understand this)
> > - flag all unresolved references as values resolved at run-time at
> > communicated via the coordinator
> >
> > Use cloud-init in user-data sections:
> > - automatically generate a script that would bootstrap chef and will
> > run the roles/recipes in the order specified in the metadata section
> > - generate dependence info for zookeeper to coordinate at runtime
> >
> > S3. Coordinate synchronization and communication at run-time
> > - intercept reads and writes to node[][]
> > - if it is a remote read, get it from Zookeeper
> > - execution will block till the value is available
> > - if write is for a value required by a remote resource, write the
> > value to Zookeeper
> >
> > The prototype is implemented in Python and Ruby is used for chef
> > interception.
> >
> > There are alternatives for many of the choices I have made for
theprototype:
> > - zookeeper can be replaced with any other service that provides a
> > data space and distributed coordination
> > - chef can be replaced by any other CM tool (a little bit of design
> > / convent
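The blocking-read behaviour of step S3 can be sketched in-process, with a threading.Event standing in for a ZooKeeper watch (a deliberate simplification; the key and value below are made up):

```python
import threading

class CoordinationStore:
    """Toy stand-in for the ZooKeeper data space described in S3:
    reads block until some other party has written the value."""

    def __init__(self):
        self._data = {}
        self._events = {}
        self._lock = threading.Lock()

    def _event(self, key):
        with self._lock:
            return self._events.setdefault(key, threading.Event())

    def write(self, key, value):
        event = self._event(key)
        with self._lock:
            self._data[key] = value
        event.set()  # wake any blocked readers

    def blocking_read(self, key, timeout=5):
        if not self._event(key).wait(timeout):
            raise TimeoutError("no writer produced %r" % key)
        with self._lock:
            return self._data[key]

store = CoordinationStore()

# An "app server" thread blocks reading the db endpoint...
result = {}
reader = threading.Thread(
    target=lambda: result.update(db=store.blocking_read("db/endpoint")))
reader.start()

# ...until the "db server" writes it (in the prototype this is the
# intercepted write to node[][] forwarded to ZooKeeper).
store.write("db/endpoint", "10.0.0.5:3306")
reader.join()
print(result["db"])  # 10.0.0.5:3306
```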

Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-21 Thread Joe Gordon
On Sun, Oct 20, 2013 at 6:38 AM, Jeremy Stanley  wrote:

> On 2013-10-20 20:57:56 +0800 (+0800), Thomas Goirand wrote:
> > Well, good luck finding all the copyright holders for such a large and
> > old project. It's not really practical in this case, unfortunately.
>
> To a great extent, the same goes for projects a quarter the size and
> age of the Linux kernel--doesn't mean we shouldn't try to fix that
> though. In our case, we at least have names and (possibly stale)
> contact information for all the people who claim to have authored
> contributions, so I suspect we're in a somewhat better position to
> do something about it.
>


Although we may be in a better position to find all the copyright owners,
it appears that many projects skirt the issue by making the copyright owner
an open ended group:

http://ftp-master.metadata.debian.org/changelogs//main/p/python-django/python-django_1.5.4-1_copyright

http://ftp-master.metadata.debian.org/changelogs//main/r/rails-4.0/rails-4.0_4.0.0+dfsg-1_copyright
(I don't think one person actually owns the copyright on rails)



>
> Part of the issue is that historically the project has held a
> laissez faire position that claiming copyright on contributions is
> voluntary, and that if you don't feel your modifications to a
> particular file are worthy of copyright (due to triviality or
> whatever) then there was no need to update a copyright statement for
> new holders or years. So the assumption there was that copyrights
> which an author wanted to assert were claimed in the files they
> touched, and if they didn't update the copyright statement on a
> change that was their prerogative.
>
> I think we collectively know that this isn't really how copyright
> works in most Berne Convention countries, but I also don't think
> reviewers would object to any copyright holder adding a separate
> commit to update valid copyright claims on a particular file which
> they previously neglected to document.
> --
> Jeremy Stanley
>


Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-21 Thread Andrey Shestakov

1. Good point.
2. This case came up in IRC, and we decided to pass the type and version 
together to avoid confusion.
3. Accepting the names of types (and maybe versions) could be good, but 
this was rejected in an IRC conversation; I can't remember the exact reason.

4. Actually, the "active" field on a version marks it as the default within 
its type.
Specifying a default version in the config can be useful if you have more 
than one active version in the default type.
But how many active versions a type has depends on the operator's 
configuration. And what if the "default version in config" is marked as 
inactive?


On 10/18/2013 10:30 PM, Tim Simpson wrote:

Hello fellow Trovians,

There has been some good work recently to figure out a way to specify 
a specific datastore  when using Trove. This is essential to 
supporting multiple datastores from the same install of Trove.


I have an issue with some elements of the proposed solution though, so 
I decided I'd start a thread here so we could talk about it.


As a quick refresher, here is the blueprint for this work (there are some 
gists appended to the end, but I figured the mailing list would be an 
easier venue for discussion):

https://wiki.openstack.org/wiki/Trove/trove-versions-types

One issue I have is with the way the instance create call will change 
to support different data stores. For example, here is the post call:


"""
{
  "instance" : {
    "flavorRef" : "2",
    "name" : "as",
    "datastore_type" : "e60153d4-8ac4-414a-ad58-fe2e0035704a",
    "datastore_version" : "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b",
    "volume" : { "size" : "1" }
  }
}
"""

1. I think since we have two fields in the instance object we should 
make a new object for datastore and avoid the name prefixing, like this:


"""
{
  "instance" : {
    "flavorRef" : "2",
    "name" : "as",
    "datastore": {
      "type" : "e60153d4-8ac4-414a-ad58-fe2e0035704a",
      "version" : "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
    },
    "volume" : { "size" : "1" }
  }
}
"""

2. I also think a datastore_version alone should be sufficient since 
the associated datastore type will be implied:


"""
{
  "instance" : {
    "flavorRef" : "2",
    "name" : "as",
    "datastore": {
      "version" : "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
    },
    "volume" : { "size" : "1" }
  }
}
"""

3. Additionally, while a datastore_type should have an ID in the Trove 
infrastructure database, it should also be possible to pass just the 
name of the datastore type to the instance call, such as "mysql" or 
"mongo". Maybe we could allow this in addition to the ID? I think this 
form should actually use the argument "type", and the ID should then 
be passed as "type_id" instead.


"""
{
  "instance" : {
    "flavorRef" : "2",
    "name" : "as",
    "datastore": {
      "type" : "mysql",
      "version" : "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
    },
    "volume" : { "size" : "1" }
  }
}

"""

4. Additionally, in the current pull request to implement this it is 
possible to avoid passing a version, but only if no more than one 
version of the datastore_type exists in the database.


I think instead the datastore_type row in the database should also 
have a "default_version_id" property that an operator could update to 
the most recent version (or by whatever other criteria they wish), 
meaning the call could become this simple:


"""
{
  "instance" : {
    "flavorRef" : "2",
    "name" : "as",
    "datastore": {
      "type" : "mysql"
    },
    "volume" : { "size" : "1" }
  }
}
"""
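Server-side, the resolution rules from points 2 and 4 could be sketched roughly like this (the registry layout and helper names are hypothetical, not Trove's actual schema):

```python
# Toy datastore registry illustrating the resolution rules discussed
# above: a version alone implies its type (point 2), a type alone falls
# back to its default_version_id (point 4).  IDs are shortened samples.

DATASTORES = {
    "mysql": {"default_version_id": "94ed1f9f",
              "versions": {"94ed1f9f": "5.5", "77aa0f00": "5.1"}},
}

def resolve(datastore):
    """Return (type_name, version_id) from a request's datastore dict."""
    version = datastore.get("version")
    if version:
        # Point 2: the version uniquely identifies its type.
        for name, ds in DATASTORES.items():
            if version in ds["versions"]:
                return name, version
        raise ValueError("unknown version %s" % version)
    # Point 4: fall back to the type's configured default version.
    name = datastore["type"]
    return name, DATASTORES[name]["default_version_id"]

print(resolve({"version": "94ed1f9f"}))  # ('mysql', '94ed1f9f')
print(resolve({"type": "mysql"}))        # ('mysql', '94ed1f9f')
```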

Thoughts?

Thanks,

Tim




Re: [openstack-dev] [qa][keystone] Adding client library related tests to tempest

2013-10-21 Thread Sean Dague

On 10/21/2013 05:19 AM, Steven Hardy wrote:



Definitely agree we should have plenty of end-to-end tests in the
gate, it's the reason we've got the scenario tests, to do exactly
this kind of thorough testing.


Ok, it seems like a potential solution which may keep all involved happy
would be:
- Add new API tests which provide full coverage of the documented
   interfaces to trusts
- Add a few scenario tests which provide end-to-end testing of the most
   important interfaces (these will use the client API)

The scenario tests could just be those in my patches, moved from client_lib
to scenario/identity?


If there is a rush on a short term landing of code, making it a scenario 
test is a fine approach. And API tests for trust would be *highly* 
appreciated.


I still think the conversation about what's needed for Keystone in 
Icehouse is worth the summit conversation. I think I'd rather back it up 
from just the client_lib conversation, and actually talk about Keystone 
special needs out of the gate pipeline, and what kind of split of 
functionality on the tempest/devstack side vs. the unit test side makes 
sense.


So if there isn't a rush, I'd take it to summit, and lets come up with a 
more holistic plan there.


-Sean

--
Sean Dague
http://dague.net



Re: [openstack-dev] Announce of Rally - benchmarking system for OpenStack

2013-10-21 Thread Joe Gordon
On Sun, Oct 20, 2013 at 12:18 PM, Tim Bell  wrote:

>
> Is it easy ? No... it is hard, whether in an integrated test suite or on
> its own
> Can it be solved ?  Yes, we have done incredible things with the current
> QA infrastructure
> Should it be split off from other testing ? No, I want EVERY commit to
> have this check. Performance through benchmarking at scale is fundamental.
>
> Integration with the current process means all code has to pass the bar...
> a new project makes it optional and therefore makes the users do the
> debugging... just the kind of thing that drives those users away...



Tim, we already have a very basic gating test to check for degraded
performance (large-ops test in tempest).  Does it have all the issues
listed below? Yes, this doesn't detect a minor performance degradation, it
doesn't work on the RAX cloud (slower VMs), et cetera. But it's a start.

After debugging the issues with nova-networking / rootwrap (
https://bugs.launchpad.net/oslo/+bug/1199433,
https://review.openstack.org/#/c/38000/) that caused nova to timeout when
booting just 50 instances, I added a test to boot up n (n=150 in gate)
instances at once using the fake virt driver.  We are now gating on the
nova-network version and are getting ready to enable gate on the neutron
version too.

The tests are pretty fast too:

gate-tempest-devstack-vm-large-ops SUCCESS in 13m 44s
gate-tempest-devstack-vm-neutron-large-ops SUCCESS in 16m 09s (non-voting)

best,
Joe



>


> Tim
>
> > -Original Message-
> > From: Robert Collins [mailto:robe...@robertcollins.net]
> > Sent: 20 October 2013 21:03
> > To: OpenStack Development Mailing List
> > Subject: Re: [openstack-dev] Announce of Rally - benchmarking system for
> OpenStack
> >
> > On 21 October 2013 07:36, Alex Gaynor  wrote:
> > > There's several issues involved in doing automated regression checking
> > > for
> > > benchmarks:
> > >
> > > - You need a platform which is stable. Right now all our CI runs on
> > > virtualized instances, and I don't think there's any particular
> > > guarantee it'll be the same underlying hardware, further virtualized
> > > systems tend to be very noisy and not give you the stability you need.
> > > - You need your benchmarks to be very high precision, if you really
> > > want to rule out regressions of more than N% without a lot of false
> positives.
> > > - You need more than just checks on individual builds, you need long
> > > term trend checking - 100 1% regressions are worse than a single 50%
> regression.
> >
> > Let me offer a couple more key things:
> >  - you need a platform that is representative of your deployments:
> > 1000 physical hypervisors have rather different checkin patterns than
> > 1 qemu hypervisor.
> >  - you need a workload that is representative of your deployments:
> > 1 VM's spread over 500 physical hypervisors routing traffic through
> one neutron software switch will have rather different load
> > characteristics than 5 qemu vm's in a kvm vm hosted all in one
> configuration.
> >
> > neither the platform - # of components, their configuration, etc, nor
> the workload in devstack-gate are representative of production
> > deployments of any except the most modest clouds. Thats fine -
> devstack-gate to date has been about base functionality, not digging down
> > into race conditions.
> >
> > I think having a dedicated tool aimed at:
> >  - setting up *many different* production-like environments and running
> >  - many production-like workloads and
> >  - reporting back which ones work and which ones don't
> >
> > makes a huge amount of sense.
> >
> > from the reports from that tool we can craft targeted unit test or
> isolated functional tests to capture the problem and prevent it
> > worsening or regressing (once fixed). See for instance Joe Gordons'
> > fake hypervisor which is great for targeted testing.
> >
> > That said, I also agree with the sentiment expressed that the
> workload-driving portion of Rally doesn't seem different enough to Tempest
> > to warrant being separate; it seems to me that Rally could be built like
> this:
> >
> > - a thing that does deployments spread out over a phase space of
> configurations
> > - instrumentation for deployments that permit the data visibility needed
> to analyse problems
> > - tests for tempest that stress a deployment
> >
> > So the single-button-push Rally would:
> >  - take a set of hardware
> >  - in a loop
> >  - deploy a configuration, run Tempest, report data
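The single-button-push loop sketched above could be driven by something as simple as the following (all names and bodies are illustrative stubs, not real Rally or Tempest APIs):

```python
# Stub driver for the deploy/test/report loop described above.

def deploy(config):
    """Pretend to stand up a cloud for one configuration point."""
    return {"cloud": "endpoint-for-%s" % config}

def run_tempest(cloud):
    """Pretend to run the Tempest suite against a deployed cloud."""
    return {"passed": True, "duration_s": 820}

def run_matrix(configs):
    """Loop over the configuration phase space, collecting results."""
    report = {}
    for config in configs:
        cloud = deploy(config)
        report[config] = run_tempest(cloud["cloud"])
    return report

report = run_matrix(["1-node-kvm", "3-node-neutron"])
print(sorted(report))  # ['1-node-kvm', '3-node-neutron']
```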
> >
> > That would reuse Tempest and still be a single button push data
> gathering thing, and if Tempest isn't capable of generating enough
> > concurrency/load [for a single test - ignore parallel execution of
> different tests] then that seems like something we should fix in Tempest,
> > because concurrency/race conditions are things we need tests for in
> devstack-gate.
> >
> > -Rob
> >
> > --
> > Robert Collins 
> > Distinguished Technologist
> > HP Converged Cloud
> >