Re: [Openstack] Openstack Baremetal Provisioning

2012-07-03 Thread Trinath Somanchi
Hi-

Thanks a lot for the reply... This helped me understand a bit more.

I have this kind of setup in mind.



  [ ESSEX-1 ]        <--->   [ ESSEX-2 ]            <--->   [ TILERA HW ]

    x86 HW                     x86 HW                         non-x86 HW

    Nova-Compute               Proxy-Nova-Compute             New hardware device





So, changing the config in nova.conf on the ESSEX-2 server to the bare-metal and
Tilera-specific options is what turns its nova-compute into the proxy nova-compute.
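
As I understand it, a minimal sketch of that proxy-side nova.conf fragment would
be something like this (the tile-monitor path is only a placeholder, not from a
tested setup):

    --connection_type=baremetal
    --baremetal_driver=tilera
    --tile_monitor=/usr/local/TileraMDE/bin/tile-monitor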

I have a doubt here. I understand how nova-compute brings up VMs on ESSEX-2 when
ESSEX-2 runs an ordinary nova-compute. But now we have the proxy nova-compute on
ESSEX-2 and there are no VMs there; instead we have a new hardware board to boot up.

How will the commands from nova-compute in ESSEX-1 differ in order to bring up the
TILERA hardware board using the proxy nova-compute in ESSEX-2?

Please kindly help me understand this scenario.

Thanks a lot for the reply...
--
Trinath S.





On Tue, Jul 3, 2012 at 12:54 AM, John Paul Walters  wrote:

> Hi Trinath,
>
> It's not clear whether the tilera board you're referring to is one of the
> PCI versions or a stand-alone board (is it in Essex-1?).  We've never
> setup/tested anything other than the TILEmpower stand-alone board with our
> bare-metal provisioning service.  That said, in order to make the
> nova-compute on Essex-2 a proxy, you need to configure nova.conf as I
> described earlier: set the connection_type=baremetal, set the
> baremetal_driver=tilera, and set your path to tile-monitor appropriately.
>  That's what makes it a proxy node, which is otherwise run as a regular
> nova-compute.
>
> JP
>
>
> On Jul 2, 2012, at 12:42 PM, Trinath Somanchi wrote:
>
> Hi-
>
> Thanks a lot for the reply JP.
>
> The information provided is of most value for me.
>
> I have a doubt here.
>
> I will install Nova-compute in a server say Essex-1 and another server say
> Essex-2.
>
> I have a tilera board too in the setup.
>
> Can you please guide me on how to start this tilera board using
> Nova-compute in Essex-1 machine. and How Nova-compute in Essex-2 can be
> made as Proxy-Nova-compute. I mean what changes to Nova-compute makes Proxy
> nova-compute.
>
> On Mon, Jul 2, 2012 at 9:32 PM, John Paul Walters wrote:
>
>> Hi Trinath,
>>
>> Our baremetal experts are on vacation for the next week or so, so I'll
>> take a stab at answering in their absence.  First, just to be clear, right
>> now the baremetal work that's present in Essex supports ONLY the Tilera
>> architecture.  We're working with the NTT folks to add additional support,
>> but it's not in Essex.  We've tested on TILEmpower rack-mountable units.
>>  You'll need a baremetal proxy (x86) machine that will run nova-compute and
>> handle the provisioning of resources.  Most of the nova.conf options are
>> shown at:
>>
>>
>> http://docs.openstack.org/essex/openstack-compute/admin/content/compute-options-reference.html
>>
>> But it appears that there's at least one omission:  you'll need to set
>> your --connection_type=baremetal on the proxy node.  Probably the most
>> important options are: --baremetal_driver=tilera,
>> --tile_monitor=<path to tile-monitor>.  I would suggest that you have a
>> look at the link above under the baremetal section to see what other
>> options might apply to your environment.
>>
>> http://wiki.openstack.org/HeterogeneousTileraSupport
>>
>> Note that you'll need to set up tftp so that the Tilera boards can pick
>> up a boot rom. You'll also need to create a tilera-specific file system.
>>
>> I hope this helps.
>>
>> best,
>> JP
>>
>>
>> On Jul 2, 2012, at 8:05 AM, Trinath Somanchi wrote:
>>
>> Hi-
>>
>> Please help me in understanding and bringing up this kind of setup
>>
>> Kindly please help me in this regard.
>>
>> I have checked nova.conf and found bare metal provisioning support
>> options.
>>
>> Please help me understand on how modifying nova.conf with the respective
>> options can help bringing up tilera like machines up either from command
>> line or from GUI.
>>
>> Thanks in advance..
>>
>> --
>> Trinath S
>>
>> On Mon, Jul 2, 2012 at 12:13 PM, Trinath Somanchi <
>> trinath.soman...@gmail.com> wrote:
>>
>>> Hi-
>>>
>>> As explained in the email, With respect to the link,
>>>
>>> http://wiki.openstack.org/GeneralBareMetalProvisioningFramework
>>>
>>> Can you kindly guide/brief me on
>>> https://github.com/usc-isi/essex-baremetal-support (Stable/Essex)
>>>
>>> I mean Install/Config/Testing of the Provisioning support.
>>>
>>> Thanking you,
>>>
>>> --
>>> Regards,
>>> --
>>> Trinath Somanchi,
>>> +91 9866 235 130
>>>
>>>
>>
>>
>> --
>> Regards,
>> --
>> Trinath Somanchi,
>> +91 9866 235 130
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
>>
>>
>
>
> --
> Regard

Re: [Openstack] multi_host not working

2012-07-03 Thread Marnus van Niekerk

On 02/07/2012 16:33, Razique Mahroua wrote:

I've put a small section here
http://docs.openstack.org/diablo/openstack-compute/admin/content/multi-host.html


Using this I have made progress, except that I had to use "nova-manage 
network delete 10.10.11.128/26" to delete the network and then add it 
back with the --multi_host=T option.
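
Roughly the sequence I used was the following (the create flags are from memory
and may differ slightly between releases, so treat this as a sketch):

    nova-manage network delete 10.10.11.128/26
    nova-manage network create --label=private --fixed_range_v4=10.10.11.128/26 \
        --num_networks=1 --network_size=64 --multi_host=T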


I can now see the bridge created and assigned an address on each compute 
node, but all of the VMs get stuck after the bootloader - they never 
boot any further.


What else could be wrong?  Should nova-api be running on each compute node?

Tx
Marnus

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] multi_host not working

2012-07-03 Thread Marnus van Niekerk

On 03/07/2012 09:53, Marnus van Niekerk wrote:
I can now see the bridge created and assigned an address on each 
compute node, but all of the VMs get stuck after the bootloader - they 
never boot any further.


Sorry, they do actually boot after a while, but without any networking:

cloud-init-nonet waiting 120 seconds for a network device.
cloud-init-nonet gave up waiting for a network device.
ci-info: lo: 1 127.0.0.1   255.0.0.0   .
ci-info: eth0  : 1 .   .   fa:16:3e:39:5f:02
route_info failed
Waiting for network configuration...
Waiting up to 60 more seconds for network configuration...
Booting system without full network configuration...





___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Is there special setting to attach volume to instance on Nexenta server?

2012-07-03 Thread romi zhang
Hi,

 

I have already set up Nexenta on an independent server, and nova-volume runs on
another server with the Nexenta driver configured in nova.conf to provide the
volume service to the OpenStack system.

I can create volumes fine from the command line or the dashboard, and Nexenta
also creates the corresponding zvol, but the problem is that I can never attach
the volume to an instance. Here are the environment details and the issues I ran into:

 

1.   The output of "iscsiadm -m session" on the compute node (volume-0001
was created from the command line and appears in the Nexenta zvol list):

root@nc01:/home/romi# iscsiadm -m session

tcp: [21] 192.168.1.42:3260,1
iqn.1986-03.com.sun:01:005008c802ff.4fb2f97dvolume-0001

tcp: [5] 192.168.1.42:3260,2 iqn.1986-03.com.sun:01:005008c802ff.4fb2f97d

2.   When I use the command line to attach the volume to the instance, the
nova-volume service logs this error:

Command: sudo iscsiadm -m node -T
"iqn.1986-03.com.sun:01:005008c802ff.4fb2f97d"volume-0001 -p
192.168.1.42:3260 --rescan

2012-06-26 18:00:37 TRACE nova.rpc.amqp Stderr: 'iscsiadm: No portal
found.\n'

3.   When I run "iscsiadm -m node -T
"iqn.1986-03.com.sun:01:005008c802ff.4fb2f97d"volume-0001 -p
192.168.1.42:3260 --rescan" manually on the compute node, the output looks fine:

Rescanning session [sid: 21, target:
iqn.1986-03.com.sun:01:005008c802ff.4fb2f97dvolume-0001, portal:
192.168.1.42,3260]

 

So I cannot tell what is wrong. Is there a special setting needed on the
Nexenta server?

Appreciate if someone could help.

Regards,

 

Romi

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Re: Is there special setting to attach volume to instance on Nexenta server?

2012-07-03 Thread romi zhang
My nexenta configuration in nova.conf on nova-volume server is:

 

#nova-volume

--routing_source_ip=$my_ip

 

--volume_driver=nova.volume.nexenta.volume.NexentaDriver

--nexenta_host=192.168.1.42

--nexenta_iscsi_target_portal_port=3260

--nexenta_rest_port=80

--nexenta_user=admin

--nexenta_password=nexenta

--nexenta_volume=nova-volumes

--nexenta_target_prefix="iqn.1986-03.com.sun:01:005008c802ff.4fb2f97d"

 

--use_local_volumes = false

 

Regards,

 

Romi

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] PyPI uploads for client libs live

2012-07-03 Thread Chmouel Boudjnah
On Tue, Jul 3, 2012 at 12:50 AM, Monty Taylor  wrote:
> At the moment, the only people with permission to upload tags is the
> openstack-release team. However, since we're letting client libs manage
> their own versions, I kinda think we should give PTLs the right on their
> own project - so Vish would get tag access to python-novaclient, Brian
> to python-glanceclient, etc.

+1

Chmouel.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] How do I stop image-create from using /tmp?

2012-07-03 Thread Daniel P. Berrange
On Mon, Jul 02, 2012 at 12:09:55PM -0700, Johannes Erdfelt wrote:
> On Mon, Jul 02, 2012, Daniel P. Berrange  wrote:
> > On Mon, Jul 02, 2012 at 08:17:08AM -0700, Johannes Erdfelt wrote:
> > > Not using /tmp for large files is a good reason for practical reasons
> > > (distributions moving to ramfs for /tmp).
> > > 
> > > But please don't start throwing around warnings that all uses of /tmp
> > > are a security risk without backing that up.
> > 
> > I stand by my point that in general usage of /tmp is a risk because
> for every experienced developer who can get things right, there are
> > hordes of others who get it wrong & eventually one such bug will
> > slip through the review net. Since there are rarely compelling reasons
> > for the use of /tmp, avoiding it by default is a good defensive choice.
> 
> So your argument isn't that using /tmp is inherently insecure, it's that
> using something not shared is safer?
> 
> It seems to me that we're just as likely to have a review slip through
> that uses /tmp insecurely as a review slipping through that uses /tmp at
> all.

We already run a bunch of PEP8 checks across the code on every
commit. It ought to be within the realm of practicality to add a
rule that blacklists any use of mkdtemp() which does not pass
an explicit directory. Most places in Nova don't actually use
it directly, but instead call nova.utils.tempdir() which could
again be made to default to '/var/lib/nova/tmp' or equivalent.
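
As a crude illustration of the kind of rule I mean (the real check would sit
alongside the existing pep8/hacking checks; this is only a grep-level sketch):

    # flag any mkdtemp()/mkstemp() call that does not pass an explicit dir=
    grep -rnE 'mk[ds]temp\(' nova/ | grep -v 'dir='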


> Ultimately, the most compelling reason for using /tmp is that it's easy,
> it's standard and developers have been trained to use it for a long
> time.

These are all reasons against use of /tmp - precisely because it is
so convenient/easy, developers use it without ever thinking about the
possible consequences of accidental misuse.

> There is no well-defined alternative, either in LSB or in practice (or
> in either that blog post or your email).

It is fairly common for apps to use /var/cache/<appname> or
/var/lib/<appname>.

> Since we can't trust developers to use /tmp securely, or avoid using
> /tmp at all, then why not use filesystem namespaces to setup a process
> specific non-shared /tmp?

That is possible, but I simply disagree with your point that we
can't stop using /tmp. It is entirely possible to stop using it
IMHO.


Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Openstack and Google Compute Engine

2012-07-03 Thread Simon G.
Firstly, I'm just curious about their technology. I'm unable to make any
representative benchmark and that's why I was asking you about it. I'm just
a student working on my master's thesis, interested in cloud computing,
Openstack etc., so I can't afford such a huge deployment :).

Secondly, I don't think we should avoid comparing GCE to Openstack. I understand
that right now the cloud (Openstack, Amazon, ...) is just an easy-to-use, managed
and scalable datacenter. It allows users to create VMs, upload their images and
easily grow their (limited) demands, but don't you think that HPC is the right
direction? I've always thought that the cloud's ultimate goal is to provide
easy-to-use HPC infrastructure, where users could do what they can do right now
in the clouds (Amazon, Openstack), but also what they couldn't do in a typical
datacenter. They should be able to run an instance, run compute-heavy software,
and if they need more resources, just add them. If a cloud is unable to provide
the necessary resources, they should be able to move their app to a bigger cloud
and do what they need. Openstack should be prepared for such large deployments.
It should also be prepared for HPC use cases. Or if it's not prepared yet, that
should be Openstack's goal.

I know that clouds are fulfilling the current need for a scalable datacenter,
but they should also fulfill future needs. Apps are getting faster and faster:
more often they do image processing, voice recognition and data mining, and it
should be the clouds' goal to provide an easy way to create such advanced apps,
not just a simple web server that can be scaled up by adding a few VMs and a
load balancer to redirect requests. The infrastructure should be prepared even
for deployments as large as Google's. It should also be optimized to support
heavy computation. In the future it should be as efficient as grids (or almost
as efficient), because ease of use has already been achieved. If, right now,
it's easy to deploy a VM into the cloud, the next step should be to optimize
the infrastructure to increase performance.

I've always thought about clouds in that way. Maybe I was wrong. Maybe the
cloud should do only what it's doing right now and let other technologies
handle HPC.

Cheers,

On Tue, Jul 3, 2012 at 12:55 AM, Paul McMillan wrote:

> It's KVM on Redhat with a fairly custom guest kernel, including optimized
> drivers for their network encapsulation. Auth is handled using their
> existing OAuth2.0 infrastructure.
>
> As Matt said, their offering is fairly different from EC2 (and Openstack),
> competing more with compute-heavy providers, rather than amazon-like
> application-host offerings.
>
> Their beta is currently only available to customers that they expect will
> run real jobs. Expect a phone call and a conversation about your
> application and current compute use before your organization gets an invite.
>
> One neat thing about their product is that they provide dedicated spindles
> on ephemeral disks for instances with more than 2 cores.
>
> Their user tooling looks very nice. There are probably features worth
> borrowing there.
>
> -Paul
>
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>


-- 
Simon
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] VM network adapter hotplug

2012-07-03 Thread heut2008
Hi Dan,

I'd like to implement this feature for Quantum; can you assign it to
me? The milestone target is set to folsom-3, so I have a month to
implement it, is that right?
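
(For reference, at the hypervisor level this maps roughly to libvirt's
interface hot-attach; a minimal illustration, assuming a bridge named br100
and the usual instance domain naming:

    virsh attach-interface instance-00000001 bridge br100 --model virtio
)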

2012/7/2 Dan Wendlandt :
> Hi Irena,
>
> We've talked about adding this capability (blueprint here:
> https://blueprints.launchpad.net/quantum/+spec/nova-quantum-interface-creation)
> and its mentioned in this bug
> (https://bugs.launchpad.net/quantum/+bug/1019909), but I do not know of
> anyone actively working on this.  If you'd like to work on it, we can
> definitely help provide guidence.
>
> Dan
>
> p.s. I believe xenserver supports an equivalent mechanism
>
> On Mon, Jul 2, 2012 at 6:39 AM, Irena Berezovsky 
> wrote:
>>
>> Hi,
>>
>> I tried to find a way to add a network adapter to running VM without
>> needing to restart it but could not find an API to apply it.
>>
>> As I understand KVM allows such functionality :
>> https://fedoraproject.org/wiki/Features/KVM_NIC_Hotplug
>>
>> Is it supported or considered for Folsom?
>>
>>
>>
>> Thanks a lot,
>>
>> Irena
>>
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
>
>
>
> --
> ~~~
> Dan Wendlandt
> Nicira, Inc: www.nicira.com
> twitter: danwendlandt
> ~~~
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Re: Is there special setting to attach volume to instance on Nexenta server?

2012-07-03 Thread Yuriy Taraday
Try removing the quotes from the nexenta_target_prefix flag. They seem
to be the source of this problem.
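
I.e. something like this (the same value you already have, just without the
quotes):

    --nexenta_target_prefix=iqn.1986-03.com.sun:01:005008c802ff.4fb2f97d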

Kind regards, Yuriy.


On Tue, Jul 3, 2012 at 12:45 PM, romi zhang  wrote:
> My nexenta configuration in nova.conf on nova-volume server is:
>
>
>
> #nova-volume
>
> --routing_source_ip=$my_ip
>
>
>
> --volume_driver=nova.volume.nexenta.volume.NexentaDriver
>
> --nexenta_host=192.168.1.42
>
> --nexenta_iscsi_target_portal_port=3260
>
> --nexenta_rest_port=80
>
> --nexenta_user=admin
>
> --nexenta_password=nexenta
>
> --nexenta_volume=nova-volumes
>
> --nexenta_target_prefix="iqn.1986-03.com.sun:01:005008c802ff.4fb2f97d"
>
>
>
> --use_local_volumes = false
>
>
>
> Regards,
>
>
>
> Romi
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] best practices for merging common into specific projects

2012-07-03 Thread Thierry Carrez
Gabriel Hurley wrote:
> On a more fundamental level, did I miss some tremendous reason why we have 
> this "merge from common" pattern instead of making OpenStack Common a 
> standard python dependency just like anything else? Especially with the work 
> Monty has recently done on versioning and packaging the client libs from 
> Jenkins, I can't see a reason to keep following this "update common and merge 
> to everything else" pattern at all...

This discussion should probably wait for markmc to come back, since he
set up most of this framework in the first place. He would certainly
produce a more compelling rationale than I can :)

IIRC the idea was to have openstack-common APIs "in incubation" until
some of them are stable enough that we can apply backward compatibility
for them at the level expected from any other decent Python library.
When we reach this point, those stable modules would be "out of
incubation" and released in a real openstack-common library. Unstable
APIs would stay "in incubation" and still use the merge model.

My understanding is that we are not yet at this point, especially as we
tweak/enrich openstack-common modules to make them acceptable by all the
projects...

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] best practices for merging common into specific projects

2012-07-03 Thread Thierry Carrez
Thierry Carrez wrote:
> Gabriel Hurley wrote:
>> On a more fundamental level, did I miss some tremendous reason why we have 
>> this "merge from common" pattern instead of making OpenStack Common a 
>> standard python dependency just like anything else? Especially with the work 
>> Monty has recently done on versioning and packaging the client libs from 
>> Jenkins, I can't see a reason to keep following this "update common and 
>> merge to everything else" pattern at all...
> 
> This discussion should probably wait for markmc to come back, since he
> set up most of this framework in the first place. He would certainly
> produce a more compelling rationale than I can :)

Actually http://wiki.openstack.org/CommonLibrary explains it quite well.
In particular:

"openstack-common also provides a process for incubating APIs which,
while they are shared between multiple OpenStack projects, have not yet
matured to meet the [library inclusion] criteria described above."

"Incubation shouldn't be seen as a long term option for any API - it is
merely a stepping stone to inclusion into the openstack-common library
proper."

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Single global dependency list

2012-07-03 Thread Thierry Carrez
Joshua Harlow wrote:
> Ack, please don't keep on
> adding to the copy-around-stuff scheme. Please :-)

You might have misread what Monty proposes. openstack-requires would
actually duplicate the openstack-common copy-around mechanism
(update.py) in an even more permanent way (openstack-common will become
a library at the end of time, whereas openstack-requires won't).

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] VM network adapter hotplug

2012-07-03 Thread Salvatore Orlando
Hi Yaguang,

Folsom-3 will be release on August 16th. If nothing changes wrt Folsom-2
all code should be in by August 14th, so it's about 40 days from today.
Out of curiosity, does Launchpad allow you to assign the bug to yourself?

Regards,
Salvatore

On 3 July 2012 10:10, heut2008  wrote:

> Hi Dan,
>
> I'd like to implement this feature for Quantum; can you assign it to
> me? The milestone target is set to folsom-3, so I have a month to
> implement it, is that right?
>
> 2012/7/2 Dan Wendlandt :
> > Hi Irena,
> >
> > We've talked about adding this capability (blueprint here:
> >
> https://blueprints.launchpad.net/quantum/+spec/nova-quantum-interface-creation
> )
> > and its mentioned in this bug
> > (https://bugs.launchpad.net/quantum/+bug/1019909), but I do not know of
> > anyone actively working on this.  If you'd like to work on it, we can
> > definitely help provide guidence.
> >
> > Dan
> >
> > p.s. I believe xenserver supports an equivalent mechanism
> >
> > On Mon, Jul 2, 2012 at 6:39 AM, Irena Berezovsky 
> > wrote:
> >>
> >> Hi,
> >>
> >> I tried to find a way to add a network adapter to running VM without
> >> needing to restart it but could not find an API to apply it.
> >>
> >> As I understand KVM allows such functionality :
> >> https://fedoraproject.org/wiki/Features/KVM_NIC_Hotplug
> >>
> >> Is it supported or considered for Folsom?
> >>
> >>
> >>
> >> Thanks a lot,
> >>
> >> Irena
> >>
> >>
> >> ___
> >> Mailing list: https://launchpad.net/~openstack
> >> Post to : openstack@lists.launchpad.net
> >> Unsubscribe : https://launchpad.net/~openstack
> >> More help   : https://help.launchpad.net/ListHelp
> >>
> >
> >
> >
> > --
> > ~~~
> > Dan Wendlandt
> > Nicira, Inc: www.nicira.com
> > twitter: danwendlandt
> > ~~~
> >
> >
> > ___
> > Mailing list: https://launchpad.net/~openstack
> > Post to : openstack@lists.launchpad.net
> > Unsubscribe : https://launchpad.net/~openstack
> > More help   : https://help.launchpad.net/ListHelp
> >
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] VM network adapter hotplug

2012-07-03 Thread heut2008
I can assign the bug to myself, but I can't assign the blueprint.

2012/7/3 Salvatore Orlando :
> Hi Yaguang,
>
> Folsom-3 will be release on August 16th. If nothing changes wrt Folsom-2 all
> code should be in by August 14th, so it's about 40 days from today.
> Out of curiosity, does Launchpad allow you to assign the bug to yourself?
>
> Regards,
> Salvatore
>
>
> On 3 July 2012 10:10, heut2008  wrote:
>>
>> Hi Dan,
>>
>> I'd like to implement this feature for Quantum; can you assign it to
>> me? The milestone target is set to folsom-3, so I have a month to
>> implement it, is that right?
>>
>> 2012/7/2 Dan Wendlandt :
>> > Hi Irena,
>> >
>> > We've talked about adding this capability (blueprint here:
>> >
>> > https://blueprints.launchpad.net/quantum/+spec/nova-quantum-interface-creation)
>> > and its mentioned in this bug
>> > (https://bugs.launchpad.net/quantum/+bug/1019909), but I do not know of
>> > anyone actively working on this.  If you'd like to work on it, we can
>> > definitely help provide guidence.
>> >
>> > Dan
>> >
>> > p.s. I believe xenserver supports an equivalent mechanism
>> >
>> > On Mon, Jul 2, 2012 at 6:39 AM, Irena Berezovsky 
>> > wrote:
>> >>
>> >> Hi,
>> >>
>> >> I tried to find a way to add a network adapter to running VM without
>> >> needing to restart it but could not find an API to apply it.
>> >>
>> >> As I understand KVM allows such functionality :
>> >> https://fedoraproject.org/wiki/Features/KVM_NIC_Hotplug
>> >>
>> >> Is it supported or considered for Folsom?
>> >>
>> >>
>> >>
>> >> Thanks a lot,
>> >>
>> >> Irena
>> >>
>> >>
>> >> ___
>> >> Mailing list: https://launchpad.net/~openstack
>> >> Post to : openstack@lists.launchpad.net
>> >> Unsubscribe : https://launchpad.net/~openstack
>> >> More help   : https://help.launchpad.net/ListHelp
>> >>
>> >
>> >
>> >
>> > --
>> > ~~~
>> > Dan Wendlandt
>> > Nicira, Inc: www.nicira.com
>> > twitter: danwendlandt
>> > ~~~
>> >
>> >
>> > ___
>> > Mailing list: https://launchpad.net/~openstack
>> > Post to : openstack@lists.launchpad.net
>> > Unsubscribe : https://launchpad.net/~openstack
>> > More help   : https://help.launchpad.net/ListHelp
>> >
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>
>

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] PyPI uploads for client libs live

2012-07-03 Thread Thierry Carrez
Monty Taylor wrote:
> At the moment, the only people with permission to upload tags is the
> openstack-release team. However, since we're letting client libs manage
> their own versions, I kinda think we should give PTLs the right on their
> own project - so Vish would get tag access to python-novaclient, Brian
> to python-glanceclient, etc.

Ideally it would be a two-side approval process (PTL +
openstack-release), because openstack-release shouldn't be able to tag
without PTL approval, and openstack-release should still be kept in the
loop before a tag is pushed by PTLs (there are multiple reasons why a
few hours delay before tagging would be a good idea, and the
openstack-release people actually keep track of those).

That said, we don't have that approval mechanism available yet (same
issue with the core projects release tagging) so in the mean time we
should probably let the small set of individuals with an understanding
of the issues (PTLs + openstack-release) have the power to do it. Within
that group, we can have a soft two-side approval process (based on IRC
pings) to check "everything is OK" before triggering a release.

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] How do I stop image-create from using /tmp?

2012-07-03 Thread John Garbutt
Sorry to go back in the thread, but just wanted to ask a possibly dumb question.

> Daniel P. Berrange wrote:
> In the particular case of the qemu-img command described in earlier in this
> thread, I'm not convinced we need a new option. Instead of using /tmp
> when extracting a snapshot from an existing disk image, it could just use the
> path where the source image already resides. ie the existing
> FLAGS.instances_path directory, which can be assumed to be a suitably large
> persistent data store.

Would that not be a bad idea for those having FLAGS.instances_path on a shared 
file system, like gluster?

Cheers,
John

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] How do I stop image-create from using /tmp?

2012-07-03 Thread Daniel P. Berrange
On Tue, Jul 03, 2012 at 11:01:11AM +0100, John Garbutt wrote:
> Sorry to go back in the thread, but just wanted to ask a possibly dumb 
> question.
> 
> > Daniel P. Berrange wrote:
> > In the particular case of the qemu-img command described in earlier in this
> > thread, I'm not convinced we need a new option. Instead of using /tmp
> > when extracting a snapshot from an existing disk image, it could just use 
> > the
> > path where the source image already resides. ie the existing
> > FLAGS.instances_path directory, which can be assumed to be a suitably large
> > persistent data store.
> 
> Would that not be a bad idea for those having FLAGS.instances_path on
> a shared file system, like gluster?

Well it would mean more I/O to that filesystem yes. Whether this is bad or
not depends on whether there is an alternative. If users of gluster also
have a large local scratch space area then, we could make it possible to
use that, if they don't have a local scratch space, then this is a
reasonable usage.

This would suggest there's a potential use case for a new config parameter
FLAGS.local_scratch_path, whose default value matches FLAGS.instances_path
if not set.
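
In nova.conf terms that would be something along the lines of (purely
hypothetical, the flag does not exist today):

    --local_scratch_path=/var/lib/nova/tmp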

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] How do I stop image-create from using /tmp?

2012-07-03 Thread Thierry Carrez
Daniel P. Berrange wrote:
> On Mon, Jul 02, 2012 at 12:09:55PM -0700, Johannes Erdfelt wrote:
>>
>> It seems to me that we're just as likely to have a review slip through
>> that uses /tmp insecurely as a review slipping through that uses /tmp at
>> all.

With my Vulnerability Management team hat on, looking at the types of
vulnerabilities we actually let go through in our reviews, I would
disagree with that. Not all the core developers have the security
mindset built into them. And spotting usage of /tmp is always easier
than spotting insecure usage of /tmp.

> It is fairly common for apps to use /var/cache/<appname> or
> /var/lib/<appname>.
> 
>> Since we can't trust developers to use /tmp securely, or avoid using
>> /tmp at all, then why not use filesystem namespaces to setup a process
>> specific non-shared /tmp?
> 
> That is possible, but I simply disagree with your point that we
> can't stop using /tmp. It is entirely possible to stop using it
> IMHO.

+1. Always using application-specific, unshared temp space
(/var/cache/<appname>, /var/lib/<appname>/tmp...) is a good security
strengthening mechanism that should help us avoid /some/ vulnerabilities
in the future.

-- 
Thierry Carrez (ttx)
OpenStack Vulnerability Management team

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] How do I stop image-create from using /tmp?

2012-07-03 Thread John Garbutt
> From: Daniel P. Berrange [mailto:berra...@redhat.com]
> Sent: 03 July 2012 11:09
> This would suggest there's a potential use case for a new config parameter
> FLAGS.local_scratch_path, whose default value matches
> FLAGS.instances_path if not set.

+1

Cheers,
John
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Nova Pacemaker Resource Agents

2012-07-03 Thread Christian Parpart
Hey,

that's great, but how do you handle RabbitMQ in-between?

I kind of achieved it w/o OCF agents by using Pacemaker's native upstart
support; however, OCFs are much nicer, and I'd still be interested in how
you solved the RabbitMQ issue.

Best regards,
Christian Parpart.

On Mon, Jul 2, 2012 at 7:38 PM, Sébastien Han wrote:

> Hi everyone,
>
> For those of you who want to achieve HA in nova. I wrote some resource
> agents according to the OCF specification. The RAs available are:
>
>- nova-scheduler
>- nova-api
>- novnc
>- nova-consoleauth
>- nova-cert
>
> The how-to is available here:
> http://www.sebastien-han.fr/blog/2012/07/02/openstack-nova-components-ha/ and
> the RAs on my Github https://github.com/leseb/OpenStack-ra
>
> Those RAs mainly re-use the structure of the resource agent written by
> Martin Gerhard Loschwitz from Hastexo.
>
> Hope it helps!
>
> Cheers.
>
> ~Seb
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Openstack Baremetal Provisioning

2012-07-03 Thread Trinath Somanchi
Hi-

With respect to the web link
http://wiki.openstack.org/HeterogeneousTileraSupport,

"

For supporting non-x86 architecture (ex. TILERA), Proxy Compute Node should
be designed.
An x86 Proxy Compute Node is connected to the TILEmpower boards through
network. A Proxy Compute Node may handle multiple TILEmpower boards.
TILEmpower boards are connected to the network such that a cloud user can
ssh into them directly after an instance starts on the TILEmpower board. A
TILEmpower board is configured to be tftp-bootable or nfs-bootable. Proxy
Compute Node behaves as the tftp/nfs server for the TILEmpower boards. *After
Proxy Compute node receives instance images from the image server, it wakes
up a TILEmpower board and controls their booting.* Once a TILEmpower board
is booted, Proxy Compute Node doesn't do anything except
terminating/rebooting/power-down/power-up of the board. Once Tilera
instance is running, user can access the TILEmpower board, not Proxy
Compute Node, through ssh. Here, we assume that Proxy Compute Node can
power on/off TILEmpower boards remotely using PDU(Power Distribute Unit).
The block diagram shown below describes the procedure in detail...

"

In a general environment, nova-compute, with the help of nova-scheduler,
boots an image instance on the Server-2/ESSEX-2/nova-agent server.

Can anyone help me understand the highlighted lines above (the part marked
with asterisks)?

Thanking you...

--
Trinath S



On Tue, Jul 3, 2012 at 12:27 PM, Trinath Somanchi <
trinath.soman...@gmail.com> wrote:

> Hi-
>
> Thanks a lot for the reply... This helped me understand a bit more.
>
> I have this kind of setup in mind.
>
>
>
>   [ ESSEX-1 ]        <--->   [ ESSEX-2 ]            <--->   [ TILERA HW ]
>
>     x86 HW                     x86 HW                         non-x86 HW
>
>     Nova-Compute               Proxy-Nova-Compute             New hardware device
>
>
>
>
>
> So, change the config in nova.conf in ESSEX-2 server to bare-metal and
> tilera specific config makes the nova-compute to proxy-nova-compute.
>
> I have a doubt here, I have an idea on using Nova-compute to bringing up
> VM's in ESSEX-2 when its another Nova-compute. But not we have
> Proxy-Nova-compute in the ESSEX-2 and we have no VM's here rather we have
> new Hardware board to bootup.
>
> How will the Commands from Nova-compute in ESSEX-1 differ to bring up
> TILERA HW board using the Proxy-Nova-compute in ESSEX-2 ?
>
> Please kindly help me understand this scenario.
>
> Thanks a lot for the reply...
> --
> Trinath S.
>
>
>
>
>
>
> On Tue, Jul 3, 2012 at 12:54 AM, John Paul Walters wrote:
>
>> Hi Trinath,
>>
>> It's not clear whether the tilera board you're referring to is one of the
>> PCI versions or a stand-alone board (is it in Essex-1?).  We've never
>> setup/tested anything other than the TILEmpower stand-alone board with our
>> bare-metal provisioning service.  That said, in order to make your
>> nova-compute on Essex-2, you need to configure nova.conf as I described
>> earlier: set the connection_type=baremetal, set the
>> baremetal_driver=tilera, and set your path to tile-monitor appropriately.
>>  That's what makes it a proxy node, which is otherwise run as a regular
>> nova-compute.
>>
>> JP
>>
>>
>> On Jul 2, 2012, at 12:42 PM, Trinath Somanchi wrote:
>>
>> Hi-
>>
>> Thanks a lot for the reply JP.
>>
>> The information provided is of most value for me.
>>
>> I have a doubt here.
>>
>> I will install Nova-compute in a server say Essex-1 and another server
>> say Essex-2.
>>
>> I have a tilera board too in the setup.
>>
>> Can you please guide me on how to start this tilera board using
>> Nova-compute in Essex-1 machine. and How Nova-compute in Essex-2 can be
>> made as Proxy-Nova-compute. I mean what changes to Nova-compute makes Proxy
>> nova-compute.
>>
>> On Mon, Jul 2, 2012 at 9:32 PM, John Paul Walters wrote:
>>
>>> Hi Trinath,
>>>
>>> Our baremetal experts are on vacation for the next week or so, so I'll
>>> take a stab at answering in their absence.  First, just to be clear, right
>>> now the baremetal work that's present in Essex supports ONLY the Tilera
>>> architecture.  We're working with the NTT folks to add additional support,
>>> but it's not in Essex.  We've tested on TILEmpower rack-mountable units.
>>>  You'll need a baremetal proxy (x86) machine that will run nova-compute and
>>> handle the provisioning of resources.  Most of the nova.conf options are
>>> shown at:
>>>
>>>
>>> http://docs.openstack.org/essex/openstack-compute/admin/content/compute-options-reference.html
>>>
>>> But it appears that there's at least one omission:  you'll need to set
>>> your --connection_type=baremetal on the proxy node.  Probably the most
>>> important options are: --baremetal_driver=tilera,
>>> --tile_monitor=<path to tile-monitor>.  I would suggest that you have a
>>> look at the link above under the baremetal section to see what other
>>> options might apply to your environment.
>>>
>>> http://wiki.openstack.org/HeterogeneousTileraSupport
>>>
>>> Note th

Re: [Openstack] Nova Pacemaker Resource Agents

2012-07-03 Thread Sébastien Han
Hi,

Managing a resource via LSB only checks the PID. If the PID exists the
service is running, but that's not enough, because it doesn't mean the
service is truly functional. OCF agents, however, offer more features such as
finer-grained monitoring (scripting).
I'm not sure I understand your question about RabbitMQ, but if the
question was "How do you monitor the connection of each service to
RabbitMQ?", here is the answer:

The RA monitors the connection state (ESTABLISHED) between the service
(nova-scheduler, nova-cert, nova-consoleauth) and rabbit-MQ according to
the PID of the process.
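
Conceptually the check boils down to something like this (illustrative only;
the pid file path and the AMQP port are assumptions here, the real scripts are
in the OpenStack-ra repository on my Github):

    pid=$(cat /var/run/nova/nova-scheduler.pid)
    netstat -ntp 2>/dev/null | grep ESTABLISHED | grep ":5672 " | grep -q "$pid/" \
        && echo "AMQP connection up" || echo "AMQP connection down"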

By the way, did you start with the floating IP OCF agent?

Cheers.


On Tue, Jul 3, 2012 at 12:45 PM, Christian Parpart  wrote:

> Hey,
>
> that's great, but how do you handle RabbitMQ in-between?
>
> I kind of achieved it w/o OCF agents by using Pacemaker's native upstart
> support; however, OCFs are much nicer, and I'd still be interested in how
> you solved the RabbitMQ issue.
>
> Best regards,
> Christian Parpart.
>
> On Mon, Jul 2, 2012 at 7:38 PM, Sébastien Han wrote:
>
>> Hi everyone,
>>
>> For those of you who want to achieve HA in nova. I wrote some resource
>> agents according to the OCF specification. The RAs available are:
>>
>>- nova-scheduler
>>- nova-api
>>- novnc
>>- nova-consoleauth
>>- nova-cert
>>
>> The how-to is available here:
>> http://www.sebastien-han.fr/blog/2012/07/02/openstack-nova-components-ha/ and
>> the RAs on my Github https://github.com/leseb/OpenStack-ra
>>
>> Those RAs mainly re-use the structure of the resource agent written by
>> Martin Gerhard Loschwitz from Hastexo.
>>
>> Hope it helps!
>>
>> Cheers.
>>
>> ~Seb
>>
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
>>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Keystone API docs - Create User JSON

2012-07-03 Thread Antonio Manuel Muñiz Martín
Hi.

I think there is an error in the Keystone API docs [1].
The parameter "password" in the JSON request for create an user,
should be "password" and not "OS-KSADM:password".
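
In other words, a request body that works looks roughly like this (field
values are placeholders):

    {
        "user": {
            "name": "some-user",
            "email": "some-user@example.org",
            "enabled": true,
            "password": "secret"
        }
    }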

Regards,
Antonio.

[1] 
http://docs.openstack.org/api/openstack-identity-service/2.0/content/POST_addUser_v2.0_users_Admin_API_Service_Developer_Operations-d1e1356.html
-- 
Antonio Manuel Muñiz Martín
Software Developer at klicap - ingeniería del puzle

work phone + 34 954 894 322
www.klicap.es | blog.klicap.es

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Reminder: Project & release status meeting - 21:00 UTC

2012-07-03 Thread Thierry Carrez
On the Project & release status meeting on today:

Only a few hours left before we cut milestone-proposed branches for
Folsom-2. We'll review what's left to do, defer stuff that won't make
it, get the PTL sign-off and refine the F2-targeted bug lists.

Feel free to add extra topics to the agenda:
[1] http://wiki.openstack.org/Meetings/ProjectMeeting

All PTLs should be present (if you can't make it, please name a
substitute on [1]). Everyone else is welcome to attend.

The meeting will be held at 21:00 UTC on the #openstack-meeting channel
on Freenode IRC. You can look up how this time translates locally at:
[2] http://www.timeanddate.com/worldclock/fixedtime.html?iso=20120703T21

See you there,

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] How do I stop image-create from using /tmp?

2012-07-03 Thread Boris Filippov
2012/7/3 John Garbutt :
>> From: Daniel P. Berrange [mailto:berra...@redhat.com]
>> Sent: 03 July 2012 11:09
>> This would suggest there's a potential use case for a new config parameter
>> FLAGS.local_scratch_path, whose default value matches
>> FLAGS.instances_path if not set.
>
> +1
>
> Cheers,
> John
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp

So, should I bring that config parameter back?

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] new locations for some things

2012-07-03 Thread Doug Hellmann


On Jul 2, 2012, at 7:02 PM, Monty Taylor  wrote:

> Secondly, in addition to the normal per-commit tarballs, we're now
> publishing tarballs of the form "$project-$branch.tar.gz" which will get
> overwritten with each commit - that way, if you need to track trunk from
> a pip-requires file, (such as ceilometer, which needs to track nova
> trunk) you can simply plop in something like
> http://tarballs.openstack.org/nova/nova-master.tar.gz  - and it'll work
> for both pip installs AND easy_install/distutils based installs. Yay!

Yay indeed!!

Thank you from the entire ceilometer team!

Doug

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] How do I stop image-create from using /tmp?

2012-07-03 Thread Scott Moser
On Mon, 2 Jul 2012, Jay Pipes wrote:

> On 07/02/2012 01:32 PM, Mark Lehrer wrote:
> >> just did an "ln -s /some/dir/with/space /tmp" and that does solve
> >
> > I added an option to /etc/init/nova_compute.conf to specify the tmp
> > space, so the start line looks like this:
> >
> > exec su -s /bin/sh -c "export TMPDIR=/var/tmp; exec nova-compute
> > --flagfile 
> >
> > Not a great solution either since package updates over-write this setting.
>
> Yeah, that's exactly what we found as well -- Chef would overwrite the
> upstart script, and we couldn't for the life of us figure out how to get
> Chef to set the TMPDIR environment variable for the *user running nova-compute*.

I'm not familiar with Chef, or how you were using it, so I can't help
there.  I won't argue against a specific flag, but I do think it's mostly
useless.  I think this is largely bikeshed, and for some reason I feel
compelled to argue in favor of blue.

However, going the flag route then seems more dangerous to me.  Any
code would then need to know that it should reference this flag for temp file
creation.  That problem is already solved with TMPDIR and mkdtemp and
friends.  Using the standard path, you get that for free.  If things are
*not* respecting TMPDIR, they should.

And, for what it's worth (not much), a better way of modifying the
/etc/init/nova_compute.conf file above would be to add:
 env TMPDIR=/var/tmp

(see man 5 init).
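
i.e. the job ends up looking roughly like this (fragment only; the exec line
is paraphrased from Mark's earlier message with the inline export dropped, and
the flagfile path is a placeholder):

    # /etc/init/nova_compute.conf
    env TMPDIR=/var/tmp
    exec su -s /bin/sh -c "exec nova-compute --flagfile=/etc/nova/nova.conf" nova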

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] best practices for merging common into specific projects

2012-07-03 Thread Doug Hellmann


On Jul 3, 2012, at 5:31 AM, Thierry Carrez  wrote:

> Gabriel Hurley wrote:
>> On a more fundamental level, did I miss some tremendous reason why we have 
>> this "merge from common" pattern instead of making OpenStack Common a 
>> standard python dependency just like anything else? Especially with the work 
>> Monty has recently done on versioning and packaging the client libs from 
>> Jenkins, I can't see a reason to keep following this "update common and 
>> merge to everything else" pattern at all...
> 
> This discussion should probably wait for markmc to come back, since he
> set up most of this framework in the first place. He would certainly
> produce a more compelling rationale than I can :)
> 
> IIRC the idea was to have openstack-common APIs "in incubation" until
> some of them are stable enough that we can apply backward compatibility
> for them at the level expected from any other decent Python library.
> When we reach this point, those stable modules would be "out of
> incubation" and released in a real openstack-common library. Unstable
> APIs would stay "in incubation" and still use the merge model.
> 
> My understanding is that we are not yet at this point, especially as we
> tweak/enrich openstack-common modules to make them acceptable by all the
> projects...
> 

Ideally when we reach that point the libraries will be released as individual 
components instead of a monolithic shared library, too. 

Doug


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Single global dependency list

2012-07-03 Thread Doug Hellmann


On Jul 2, 2012, at 6:40 PM, Monty Taylor  wrote:

> Hey all!
> 
> One of the tasks from the last ODS was to implement a single global
> dependency list. Turns out the more you think about it, the more
> important it is... because of the way we use devstack as part of the
> gate, we actually _currently_ have a de facto global dependency list,
> it's just not declared anywhere. (oops)
> 
> Anyway - the original thought was to put the depends in
> openstack-common. We'd use update.py to copy the depends in to the
> project, so that projects could align on their own timeframe.
> Additionally, we'd make the copy only copy in the versions from
> openstack-common for package that were already listed in the target
> project, so that we wouldn't add django to python-swiftclient, for instance.
> 
> The mechanics of that all work and are ready - but then bcwaldon pointed
> out that it didn't make a ton of sense for them to go in
> openstack-common, since that has its own lifecycle and is a place for
> common code to go - not just a catch all place.
> 
> To that end, I took the code we had written for the update logic and put
> it, along with the depends lists, into its own repo. I think we're ready
> to start actually moving forward with it - but we've run up against the
> hardest problem we ever have:
> 
> naming
> 
> openstack-depends already got vetoed on IRC because it makes people
> think of adult diapers. I'm proposing openstack-requires, since the
> files we're talking about are actually python requirements files.
> 
> Any objections?

+0 on the name

As an alternative, how about combining the requirements file with the other 
packaging related stuff from openstack-common and calling the result 
openstack-packaging? 

Doug


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] PyPI uploads for client libs live

2012-07-03 Thread Doug Hellmann

On Jul 3, 2012, at 5:57 AM, Thierry Carrez  wrote:

> Monty Taylor wrote:
>> At the moment, the only people with permission to upload tags is the
>> openstack-release team. However, since we're letting client libs manage
>> their own versions, I kinda think we should give PTLs the right on their
>> own project - so Vish would get tag access to python-novaclient, Brian
>> to python-glanceclient, etc.
> 
> Ideally it would be a two-side approval process (PTL +
> openstack-release), because openstack-release shouldn't be able to tag
> without PTL approval, and openstack-release should still be kept in the
> loop before a tag is pushed by PTLs (there are multiple reasons why a
> few hours delay before tagging would be a good idea, and the
> openstack-release people actually keep track of those).
> 
> That said, we don't have that approval mechanism available yet (same
> issue with the core projects release tagging) so in the mean time we
> should probably let the small set of individuals with an understanding
> of the issues (PTLs + openstack-release) have the power to do it. Within
> that group, we can have a soft two-side approval process (based on IRC
> pings) to check "everything is OK" before triggering a release.

Could we use gerrit's 2-step approval like we do for other projects and combine 
it with a fancy tag detector like was just added for DocImpact?

Doug


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Heterogeneous hardware support

2012-07-03 Thread John Paul Walters
Hi,

I'm not sure that I fully understand the security angle that you're getting at 
here, but you and Jay are right that we're focusing on adding heterogeneity to 
Openstack.  Right now we support large shared memory x86 machines, like SGI 
UVs, GPUs, and Tilera systems.  The blueprints you linked to probably need some 
updating, but they capture the gist of what we're up to.  So far, we've been 
focusing on HPC-style workloads.  But if you  had a security application that 
ran on one of the Tilera boxes, I see no reason why you couldn't use Openstack 
to provision it.

We'd be interested in hearing if there's anything in particular that our 
heterogeneous support is lacking that would enable, for example, cloud WAF or 
other security services.

best,
JP


On Jul 3, 2012, at 1:22 AM, balaji patnala wrote:

> Hi Jay,
>  
> Thanks for information.
>  
> It is observed that there is some work going on by the UCIS team for the
> "Folsom" release on heterogeneous support.
>  
> Please find the below link:
>  
> i) http://wiki.openstack.org/ScheduleHeterogeneousInstances
> ii)
> http://wiki.openstack.org/HeterogeneousArchitectureScheduler
> 
> which discusses about  support for Heterogeneous platform support for Open 
> Stack.
>  
> Also, in one of the documents published by Rackspace, it is said that
> heterogeneous platform support can be achieved by having multiple "Zones" in
> the cloud. But I suspect this approach will have more performance impact and
> also more complex networking issues.
>  
> It is observed that the Essex release does not support Zones, and I came to
> know from the mailing list that this support will be available in the Folsom
> release as "Cells".
>  
> I think that this heterogeneous platform support will enable more options
> for service providers as well as users for L2/L3 service applications like
> security services, WAF, LB, etc. on different platforms in the cloud network.
>  
> I am sorry if the queries below didn't give clear information.
> 
> On Tue, Jul 3, 2012 at 12:04 AM, Jay Pipes  wrote:
> On 07/02/2012 12:09 PM, balaji patnala wrote:
> > Hi Jay,
> >
> > As you know that L2 and L3 services could be the next step of offering
> > as Services from Cloud providers. Security Applications like
> > WAF,IPS,Firewall,VPN etc can be offered as services. These security
> > applications can be run in VMs on Heterogeneous hardware like Freescale
> > and any other platforms.
> 
> I'm not sure if heterogeneous hardware supported is specifically related
> to security, but ...
> 
> > Open Stack must support for the Heterogeneous hardware to enable
> > different hardware platforms apart from x86.
> 
> There's nothing about OpenStack, in general, that is specific to x86
> hardware. It's Python, so if the Linux distribution of your preference
> runs on some other hardware architecture, have at it. The images you
> deploy will need to be tailored to the hardware architecture, of course,
> but that isn't the realm of what OpenStack services do -- that's up to
> the deployer.
> 
> > Please share with us if you have any examples of similar deployments in
> > Clouds.
> 
> ISI is the group most actively working on heterogeneous architecture
> support, but I don't believe they are focusing on security-related
> things at all.
> 
> Best,
> -jay
> 
> > On Mon, Jul 2, 2012 at 5:59 PM, Jay Pipes  > > wrote:
> >
> > On 07/02/2012 01:23 AM, balaji patnala wrote:
> > > Hi,
> > >
> > > Does open stack [Essex] release support Heterogeneous hardware for
> > > creating VMs with Security Applications?
> > > If not, what is the road map for this. Please let me know.
> >
> > Could you please elaborate on what you mean by "creating VMs with
> > security applications"?
> >
> > Thanks,
> > -jay
> >
> > ___
> > Mailing list: https://launchpad.net/~openstack
> > 
> > Post to : openstack@lists.launchpad.net
> > 
> > Unsubscribe : https://launchpad.net/~openstack
> > 
> > More help   : https://help.launchpad.net/ListHelp
> >
> >
> 
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] how can give vnc console access to public

2012-07-03 Thread sarath zacharia
Hi,

 We configured the cloud infrastructure using OpenStack and it is
working fine as a private cloud, and the dashboard is accessible publicly.
All of the dashboard functions can be used from outside except the VNC console.
When we access the VNC console it doesn't show any error, but no VNC
screen appears.
What do we have to do to make VNC connections work from outside?
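
(For reference, exposing the console publicly usually comes down to the
noVNC proxy settings in nova.conf. This is only a sketch using the usual
Essex-era flag names; the addresses are placeholders and need to match
your deployment:)

    # nova.conf on the compute nodes (verify flag names for your release)
    vnc_enabled=true
    vncserver_listen=0.0.0.0
    vncserver_proxyclient_address=COMPUTE_NODE_MGMT_IP
    novncproxy_base_url=http://PUBLIC_DASHBOARD_IP:6080/vnc_auto.html

The nova-novncproxy service then needs to be reachable on that public
address and port.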


Thanking You
with regards

Sarath Zacharia
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Single global dependency list

2012-07-03 Thread Monty Taylor


On 07/03/2012 08:43 AM, Doug Hellmann wrote:
> 
> 
> On Jul 2, 2012, at 6:40 PM, Monty Taylor 
> wrote:
> 
>> Hey all!
>> 
>> One of the tasks from the last ODS was to implement a single
>> global dependency list. Turns out the more you think about it, the
>> more important it is... because of the way we use devstack as part
>> of the gate, we actually _currently_ have a de facto global
>> dependency list, it's just not declared anywhere. (oops)
>> 
>> Anyway - the original thought was to put the depends in 
>> openstack-common. We'd use update.py to copy the depends in to the 
>> project, so that projects could align on their own timeframe. 
>> Additionally, we'd make the copy only copy in the versions from 
>> openstack-common for package that were already listed in the
>> target project, so that we wouldn't add django to
>> python-swiftclient, for instance.
>> 
>> The mechanics of that all work and are ready - but then bcwaldon
>> pointed out that it didn't make a ton of sense for them to go in 
>> openstack-common, since that has its own lifecycle and is a place
>> for common code to go - not just a catch all place.
>> 
>> To that end, I took the code we had written for the update logic
>> and put it, along with the depends lists, into its own repo. I
>> think we're ready to start actually moving forward with it - but
>> we've run up against the hardest problem we every have:
>> 
>> naming
>> 
>> openstack-depends already got vetoed on IRC because it makes
>> people think of adult diapers. I'm proposing openstack-requires,
>> since the files we're talking about are actually python
>> requirements files.
>> 
>> Any objections?
> 
> +0 on the name
> 
> As an alternative, how about combining the requirements file with the
> other packaging related stuff from openstack-common and calling the
> result openstack-packaging?

Interesting. Which other packaging stuff? Do you mean the stuff in
openstack/common/setup.py?


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Single global dependency list

2012-07-03 Thread Monty Taylor


On 07/02/2012 08:43 PM, Joshua Harlow wrote:
> Ack, please don’t keep on adding on to the copy around stuff scheme.
> Plese :-)

You know - I was a huge opponent to the copy around stuff scheme when it
started. It raised all of my hackles and I got all upset about things
not being done the "right" way.

Turns out that we have a complex project with some interesting
requirements, and sometimes having an automated way to inject filtered
sets of code from a common source isn't as terrible a thing as you might
think (especially considering how half-baked ALL of Python's library and
dependency management stuff is. oy - I'm amazed I have hair left...)

> Is the devstack dependency list complete, when I created the anvil one,
> I found more than one hole...

Hi! This has nothing to do with the devstack dependency list. This list
is two lists, actually - one is the superset of all of the
pip-requires files in all of the projects, the other is the superset of
all of the test-requires files in all of the projects.

> Also the devstack one is in a micro-format, maybe we can move to say, YAML?

The purpose of this is to drive pip installations of packages for
virtualenvs. That already has a required format that we have to use.
(again, this has nothing to do with devstack)

> How about hosting these requires on some website (with versions there)?

The appropriate contents still need to exist in each project's
pip-requires and test-requires files for unit testing, so you'll still
have to copy code into the projects.

> Github already provides this for the anvil dependency list, maybe that
>  (or something) similar is sufficent?

It's not even close.

It's possible that I was not clear on the purpose of what this is trying
to accomplish.

The underlying problem is that each project declares a set of
python-level dependencies. This is important so that the base unit of
release, the source tarball, is itself both complete, and also testable,
since we have to be able to have python install the necessary depends to
run unittests. I know there are people who would prefer we did that from
OS packages, but that cat is out of the bag and _way_ out of scope for
this particular discussion. Suffice it to say, each project needs a list
of depends that it can hand to pip (or in the case of the client libs,
additionally express in setup.py so that when the client lib is
installed via pip as a dependency of something else, all of its depends
come as well)

So thing number one is that there will be a python requirements.txt
format file in each project. The problem at that point is keeping them in
sync so that we don't have version drift, which is a thing we decided at
ODS that we wanted to do.

When I mentioned devstack in my earlier email, what I was referring to
is the fact that, since devstack installs each of the projects in
sequence, although we do not at the moment have a de jure global
dependency list, in practice we run all of our integration testing on
installs on a single machine - which means there _is_ a single set of
packages and versions that everything is required to run from - we just
don't declare it.

Back to the specific implementation. There are three main ways we've
come up with to solve having this list. The first had to do with git
submodules. We discarded this because most people are still against the
use of submodules in the project, and also because it makes testing
coordinated changes quite difficult here. (if you want to increase the
version of webob that the project uses, do you need to land that change
in the depend list and support it in multiple projects that are
consuming that list) The other problem is that if we just made the tools
dir a submodule (or something similar) there would be no way to exclude
certain depends from certain projects. The total list is quite large,
and for many of our projects (swift and all of the client libs) it would
introduce a massive additional unnecessary installation burden for unit
testing.

The second and third both involve copying, although the mechanism could
be different. In one version, we would do as you suggest above and post
the global depends file up on a website somewhere. Then, at runtime, the
project could do a wget of the file, perform the merge into the local
requirements list, and then go on about its business. It means that the
merge code would run hundreds of times more, the projects would lose the
ability to control when they sync up with the global depends, and it
would make it much more brittle to use any of our projects. So we
thought that since we've already gone down the hell-hole of copying code
from openstack-common, copying requirements file contents at the same
time won't kill us.
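
(To make the mechanics concrete, the merge step could look roughly like
this. It is a sketch only, not the actual update.py logic, and the
package-name parsing is deliberately crude:)

    # Sketch: pin versions from the global list only for packages the
    # project already lists; comments and unknown packages pass through.
    import re

    def _name(line):
        # crude extraction of the package name from a requirements line
        return re.split(r'[<>=!\s]', line.strip(), 1)[0].lower()

    def merge(global_lines, project_lines):
        pinned = dict((_name(l), l.strip()) for l in global_lines
                      if l.strip() and not l.startswith('#'))
        merged = []
        for line in project_lines:
            if line.strip() and not line.startswith('#') \
                    and _name(line) in pinned:
                merged.append(pinned[_name(line)])  # take the global version
            else:
                merged.append(line.rstrip())        # leave everything else
        return merged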

Is that more helpful?

Thanks!
Monty

> On 7/2/12 3:40 PM, "Monty Taylor"  > wrote:
> 
> Hey all!
> 
> One of the tasks from the last ODS was to implement a single global
> dependency list. Turns out the more you think about it, the more
> impor

[Openstack] [OpenStack][Nova] Live Migration + NFSv4 - Permission issues

2012-07-03 Thread Leander Bessa Beernaert
Hello all,

I've been trying to get the live migration to work according to the guide
http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-live-migrations.html.

So far i've setup 2 compute nodes and 1 controller node. They all share the
/var/lib/nova/instances dir. I've already verified that the nova user id is
the same across all the servers.

Currently i'm running into this error when i launch an instance:
http://paste.openstack.org/show/19221/

It's certainly a permission issue, so i tried adding the group "nova" to
the user "libvirt-qemu". However, it still doesn't work. To which user must
i give the nova group permission in order to be able to write in that
directory?

Regards,
Leander
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] PyPI uploads for client libs live

2012-07-03 Thread Monty Taylor


On 07/03/2012 08:47 AM, Doug Hellmann wrote:
> 
> On Jul 3, 2012, at 5:57 AM, Thierry Carrez 
> wrote:
> 
>> Monty Taylor wrote:
>>> At the moment, the only people with permission to upload tags is
>>> the openstack-release team. However, since we're letting client
>>> libs manage their own versions, I kinda think we should give PTLs
>>> the right on their own project - so Vish would get tag access to
>>> python-novaclient, Brian to python-glanceclient, etc.
>> 
>> Ideally it would be a two-side approval process (PTL + 
>> openstack-release), because openstack-release shouldn't be able to
>> tag without PTL approval, and openstack-release should still be
>> kept in the loop before a tag is pushed by PTLs (there are multiple
>> reasons why a few hours delay before tagging would be a good idea,
>> and the openstack-release people actually keep track of those).
>> 
>> That said, we don't have that approval mechanism available yet
>> (same issue with the core projects release tagging) so in the mean
>> time we should probably let the small set of individuals with an
>> understanding of the issues (PTLs + openstack-release) have the
>> power to do it. Within that group, we can have a soft two-side
>> approval process (based on IRC pings) to check "everything is OK"
>> before triggering a release.
> 
> Could we use gerrit's 2-step approval like we do for other projects
> and combine it with a fancy tag detector like was just added for
> DocImpact?

I have an outstanding bug/feature request for gerrit to allow for the
review of and approval or disapproval of tags.

That being said - if we wanted to go the route you're talking about in
the mean time, instead of actually using the git tag route we could have
an additional commit with a magical text in the commit message which, on
merging, would cause a tag to be created... We've got guys on the team
who hack gerrit now though - lemme get some feedback on how much it
would take to actually get proper tag reviewing.

Monty

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] best practices for merging common into specific projects

2012-07-03 Thread James E. Blair
Andrew Bogott  writes:

> I.  As soon as a patch drops into common, the patch author should
> submit merge-from-common patches to all affected projects.
> A.  (This should really be done by a bot, but that's not going to
> happen overnight)

Actually, I think with our current level of tooling, we can have Jenkins
do this (run by Zuul as a post-merge job on openstack-common).

I very much believe that the long-term goal should be to make
openstack-common a library -- so nothing I say here should be construed
against that.  But as long as it's in an incubation phase, if doing
something like this would help move things along, we can certainly
implement it, and fairly easily.

Note that a naive implementation might generate quite a bit of review
spam if several small changes land to openstack-common (there would then
be changes*projects number of reviews in gerrit).  We have some code
laying around which might be useful here that looks for an existing open
change and amends it; at least that would let us have at most only one
open merge-from-common-change per-project.

Okay, that's all on that; I don't want to derail the main conversation,
and I'd much rather it just be a library if we're close to being ready
for that.

-Jim

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Setting VM passwords when not running on Xen

2012-07-03 Thread Day, Phil
Hi Folks,

Is anyone else looking at how to support images that need a password rather 
than an ssh key (windows) on hypervisors that don't support set_admin_password 
(e.g. libvirt) ?

Thanks
Phil
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Nova Pacemaker Resource Agents

2012-07-03 Thread Christian Parpart
On Tue, Jul 3, 2012 at 1:35 PM, Sébastien Han wrote:

> Hi,
>
> Managing a resource via LSB only checks the PID. If the PID exists the
> service is running but it's not enough because it doesn't mean that the
> service is truly functional. However, OCF agents offer more features like
> fine monitoring (scripting).
> I'm not sure I understand your question about Rabbit-MQ, but if the
> question was: "How do you monitor the connection of each service to
> Rabbit-MQ?", here is the answer:
>
> The RA monitors the connection state (ESTABLISHED) between the service
> (nova-scheduler, nova-cert, nova-consoleauth) and rabbit-MQ according to
> the PID of the process.
>
> By the way, did you start with the floating IP OCF agent?
>
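
(A rough illustration of the ESTABLISHED-connection check described
above; a sketch, not the actual OCF agent code. It assumes 'netstat -ntp'
output and that the check runs with enough privilege to see the PID
column:)

    import subprocess

    def connected_to_rabbit(pid, amqp_port=5672):
        # Does the process with this PID hold an ESTABLISHED TCP
        # connection to the AMQP port?
        output = subprocess.check_output(['netstat', '-ntp'])
        for line in output.decode('ascii', 'replace').splitlines():
            fields = line.split()
            if ('ESTABLISHED' in fields
                    and any(f.endswith(':%d' % amqp_port) for f in fields)
                    and fields[-1].startswith('%d/' % pid)):
                return True
        return False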

Hey,

and yes, I did start already and have an initial version of it, but since
I have not yet actually put it into Pacemaker anywhere, I have not shared
it yet.
But you may feel free in checking: http://trapni.de/~trapni/FloatingIP
In case you do improvements to this script, please share :-)

Cheers,
Christian.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Single global dependency list

2012-07-03 Thread Eric Windisch
I have to agree with others that copying files around is not ideal, and I can 
see the maintenance of this getting more involved as Nova becomes more coupled 
with common.

> > Additionally, we'd make the copy only copy in the versions from
> > openstack-common for package that were already listed in the target
> > project, so that we wouldn't add django to python-swiftclient, for instance.
> 

 


This seems to be a reasonable argument against using git submodules, but I'm 
afraid we might be losing more than we're gaining here.

Just because python-swiftclient depends on openstack-common, and django-using 
code exists there, doesn't mean that django needs to be installed for 
python-swiftclient. We might do better to use git submodules and solve the 
dependency problem, than continuing down this copy-everything path.

Alternatively, speed up the movement from incubation to library.

Regards,
Eric Windisch
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Openstack and Google Compute Engine

2012-07-03 Thread Matt Joyce
On Tue, Jul 3, 2012 at 2:01 AM, Simon G.  wrote:

> Secondly, I don't think we shouldn't compare GCE to Openstack. I
> understand that right now cloud (Openstack, Amazon, ...) is just easy in
> use, managed and scalable datacenter. It allows users to create VMs, upload
> their images, easily increase their (limited) demands, but don't you think
> that HPC is the right direction? I've always thought that final cloud's
> goal is to provide easy in use HPC infrastructure. Where users could do
> what they can do right now in the clouds (Amazon, Openstack), but also
> could do what they couldn't do in typical datacenter. They should run
> instance, run compute-heavy software and if they need more resources, they
> just add them. if cloud is unable to provide necessary resources, they
> should move their app to bigger cloud and do what they need. Openstack
> should be prepared for such large deployment. It should also be prepared
> for HPC use cases. Or if it's not prepared yet, it should be Openstack's
> goal.
>

HPC in the cloud operates more like a grid computing solution.  With things
like Amazon HPC or HPC under openstack the idea is to allocate entire
physical systems to a user on the fly.  Traditionally to date that has been
done with m1.full style instances.  In many ways bare metal provisioning is
a better option here than a hypervisor.  And for many people who do work in
an HPC environment bare metal really is the only solution that makes
sense.

The reality is that HPC use cases lose a lot of the underlying benefits of
cloud infrastructure.  So they really are something of an edge case at the
moment.  I believe that bare metal provisioning from within openstack could
be a bit of a game changer in HPC, and that it could be useful in a wide
variety of areas.  But, ultimately, I believe that HPC usage in no way
reflects general computing needs.  And that really sums it up.  Most folks
do not need or want HPC.  Most folks with HPC needs don't want a hypervisor
slowing down their memory access.


> I know that clouds are fulfilling current needs for scalable datacenter,
> but it should also fulfill future needs. Apps are faster and faster. More
> often they do image processing, voice recognition, data mining and it
> should be clouds' goal to provide an easy way to create such advanced apps,
> not just simple web server which could be scaled up, by adding few VMs and
> load balancer to redirect requests. Infrastructure should be prepared even
> for such large deployment like that in google. It should also be optimized
> and support heavy computations. In the future it should be as efficient as
> grids (or almost as efficient), because ease of use has already been
> achieved. If, right now, it's easy to deploy VM into the cloud, the next
> step should be to optimize infrastructure to increase performance.
>

Apps are actually slower and slower.  The hardware is faster.  The
Applications themselves abstract more and more and thus slow down.  As for
what you do on your instances, that's entirely your own thing herr user.
Some large data and some serious compute use cases simply don't lend
themselves to cloud today.  Hypervisors are limiting in so far as they give
up some speed to provide the ability to share resources better.  If you
have no desire to share resources then virt machines become something of an
impediment to you.  So I don't see this as being accurate for some use
cases.

There are also other external limiting factors.  People don't just turn on
a dime.  Many of the scientific and industrial applications of computing
power are built around software stacks that have grown over time, and for a
long time.  Those stacks can't be made to easily adopt the benefits of a
new technology.  Sometimes the reason not to use cloud as a platform is
entirely related to your inability to modify an existing software suite
enough to make it worthwhile.  I have seen this before at super computing
facilities.


> I've always thought about clouds in that way. Maybe I was wrong. Maybe
> cloud should do only what it's doing right now and let to others
> technologies handle HPC.
>

I think many, in the HPC environment, argue this is probably true.  I don't
necessarily agree.  GCE obviously proves a point.  Sharing resources means
that you don't have to run your own super computer.  You can simply rent
enough of a compute environment to solve your problem at will.  And odds
are the environment will be pretty up to date.  For many use cases cloud
environments are just dandy.  And HPC offerings in IaaS providers are
getting better all the time.  For low funded research, citizen science, and
a million other small fries out there there is certainly a value in
lowering the barrier to entry in this technology.

That being said, I think that private HPC will never go away if only
because of data retention rules and law.  Much research deals with data
that must be either safeguarded or simply classified and placed in an
envir

Re: [Openstack] Sending userdata during server create via api's

2012-07-03 Thread Scott Moser
On Mon, 2 Jul 2012, Ed Shaw wrote:

> Hi,
>
> I've posted on this previously but have yet to be pointed in the right 
> direction - so I'm posting again.  Examples or docs appreciated.
>
> I'm trying to pass user_data on server create using the xml (or JSON) api.
>
> My userdata looks like...
> "#!/bin/bash
> #
> #Purpose : Setup the initial image

Other than the leading ", that looks fine.  I suspect you did not mean the
leading quote.
>
> set -e -x
> export DEBIAN_FRONTEND=noninteractive
>
> apt-get update && apt-get upgrade -y
> ...
>
> I am base64 UTF-8 encoding the string and I've tried sending it as a message 
> part, a query string on the url and as a post parameter. This works from the 
> Horizon UI, but I get...
>
> 2012-06-18 19:58:18,610 - __init__.py[WARNING]: Unhandled non-multipart 
> userdata ''"

Well, the above is indicating that there is no user-data available to the
instance.  Ie, it is "".  I actually just committed a *removal* of this
warning in cloud-init as I had previously thought it just an annoying
message.

You can verify from inside the instance that there is no user-data, with
'ec2metadata --user-data' or 'wget http://169.254.169.254/latest/user-data'

> when I try to pass via xml. The only thing I haven't tried is a different 
> extension namespace on the user_data element if passing it that way, but I 
> can't see any docs on this.

Sorry, without digging/learning more, I can't help more.
Have you tried the python-novaclient:
  nova boot --user_data my.user.data.file.txt ...

Ie, try verifying that it is working with known working tools on your
openstack first.  Alternatively, if the ec2 api is available, that looks
like:
   euca-run-instances --user-data-file my.user.data.file.txt ...
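
(If it helps to rule out encoding problems, here is a minimal sketch of
building the JSON request body by hand. The 'user_data' attribute mirrors
the one in the XML quoted below; the name, image and flavor references
are placeholders:)

    import base64
    import json

    # Read the script as-is and base64-encode it (no surrounding quotes).
    with open('my.user.data.file.txt', 'rb') as f:
        encoded = base64.b64encode(f.read()).decode('utf-8')

    body = {'server': {'name': 'server8',
                       'imageRef': 'IMAGE_REF_URL_OR_UUID',
                       'flavorRef': '1',
                       'user_data': encoded}}
    print(json.dumps(body))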

> Here is an example of one of the configurations I tried...
>
>
> <server xmlns="http://docs.openstack.org/compute/api/v1.1" name="server8"
> imageRef="http://192.168.75.70:8774/30ddcb35079f406eae98857515cbf1d2/images/57443c48-eb29-48f6-853a-b8bc7d5dde05"
>  flavorRef="1" 
> user_data="IyEvYmluL2Jhc2gNCiMNCiNBdXRob3IgOiBFZCBzaGF3DQojRGF0ZSA6IDE0IEp1biAxMg0KI1B1cnBvc2UgOiBTZXR1cCB0aGUgaW5pdGlhbCBpbWFnZQ0KI0NvbW1lbnRzIDoNCiMNCiMgTGFzdCBFZGl0dGVkIGJ5OiBlZHNoYXcNCg0Kc2V0IC1lIC14DQpleHBvcnQgREVCSUFOX0ZST05URU5EPW5vbmludGVyYWN0aXZlDQoNCmFwdC1nZXQgdXBkYXRlICYmIGFwdC1nZXQgdXBncmFkZSAteQ0KYXB0LWdldCAteSBpbnN0YWxsIGFwYWNoZTINCg0KI1NlbmQgdG8gY29uc29sZSB3ZSBmaW5pc2hlZCBydW5uaW5nLg0KZWNobyAiTkVYSkNPTkZJRzogSW5zdGFuY2Ugc2V0dXAgc3VjY2Vzc2Z1bHkgZXhlY3V0ZWQuIiA"/>

I don't see anything obviously wrong, but I'd try using the known working
tools first.  I think python-novaclient even has a --debug to dump what
it is sending.

Scott

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Single global dependency list

2012-07-03 Thread Monty Taylor


On 07/03/2012 10:09 AM, Eric Windisch wrote:
> I have to agree with others that copying files around is not ideal, and
> I can see the maintenance of this getting more involved as Nova becomes
> more coupled with common.
> 
>>> Additionally, we'd make the copy only copy in the versions from
>>> openstack-common for package that were already listed in the target
>>> project, so that we wouldn't add django to python-swiftclient, for
>>> instance.
>  
> This seems to be a reasonable argument against using git submodules, but
> I'm afraid we might be losing more than we're gaining here.
> 
> Just because python-swiftclient depends on openstack-common, and
> django-using code exists there, doesn't mean that django needs to be
> installed for python-swiftclient. We might do better to use git
> submodules and solve the dependency problem, than continuing down this
> copy-everything path.

We're explicitly NOT doing a copy-everything path. That's the whole
point. We're only copying the needed depends from the master list.

git submodules actually make the problem worse, not better.

> Alternatively, speed up the movement from incubation to library.

Yeah - that's kind of the reason that bcwaldon was saying this shouldn't
be in openstack-common. openstack-common wants to be a library, and then
we're back at not having an appropriate place for the master list.

Monty

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] PyPI uploads for client libs live

2012-07-03 Thread Doug Hellmann
On Tue, Jul 3, 2012 at 9:54 AM, Monty Taylor  wrote:

>
>
> On 07/03/2012 08:47 AM, Doug Hellmann wrote:
> >
> > On Jul 3, 2012, at 5:57 AM, Thierry Carrez 
> > wrote:
> >
> >> Monty Taylor wrote:
> >>> At the moment, the only people with permission to upload tags is
> >>> the openstack-release team. However, since we're letting client
> >>> libs manage their own versions, I kinda think we should give PTLs
> >>> the right on their own project - so Vish would get tag access to
> >>> python-novaclient, Brian to python-glanceclient, etc.
> >>
> >> Ideally it would be a two-side approval process (PTL +
> >> openstack-release), because openstack-release shouldn't be able to
> >> tag without PTL approval, and openstack-release should still be
> >> kept in the loop before a tag is pushed by PTLs (there are multiple
> >> reasons why a few hours delay before tagging would be a good idea,
> >> and the openstack-release people actually keep track of those).
> >>
> >> That said, we don't have that approval mechanism available yet
> >> (same issue with the core projects release tagging) so in the mean
> >> time we should probably let the small set of individuals with an
> >> understanding of the issues (PTLs + openstack-release) have the
> >> power to do it. Within that group, we can have a soft two-side
> >> approval process (based on IRC pings) to check "everything is OK"
> >> before triggering a release.
> >
> > Could we use gerrit's 2-step approval like we do for other projects
> > and combine it with a fancy tag detector like was just added for
> > DocImpact?
>
> I have an outstanding bug/feature request for gerrit to allow for the
> review of and approval or disapproval of tags.
>

Ah, I actually meant the "magical text" feature you mention below, but
approving tags sounds like a good thing to have.


>
> That being said - if we wanted to go the route you're talking about in
> the mean time, instead of actually using the git tag route we could have
> an additional commit with a magical text in the commit message which, on
> merging, would cause a tag to be created... We've got guys on the team
> who hack gerrit now though - lemme get some feedback on how much it
> would take to actually get proper tag reviewing.
>

Sounds good.

Doug
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] How do I stop image-create from using /tmp?

2012-07-03 Thread Johannes Erdfelt
On Tue, Jul 03, 2012, Daniel P. Berrange  wrote:
> > It seems to me that we're just as likely to have a review slip through
> > that uses /tmp insecurely as a review slipping through that uses /tmp at
> > all.
> 
> We already run a bunch of PEP8 checks across the code on every
> commit. It ought to be with the realm of practicality to add a
> rule that blacklists any use of mkdtemp() which does not pass
> an explicit directory. Most places in Nova don't actually use
> it directly, but instead call nova.utils.tempdir() which could
> again be made to default to '/var/lib/nova/tmp' or equivalent.
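
(For concreteness, a helper along those lines might look roughly like
this. It is a sketch, not the actual nova.utils.tempdir code, and the
default path is an assumption:)

    import contextlib
    import shutil
    import tempfile

    @contextlib.contextmanager
    def tempdir(**kwargs):
        # Default to a service-owned directory instead of the shared /tmp.
        kwargs.setdefault('dir', '/var/lib/nova/tmp')
        tmpdir = tempfile.mkdtemp(**kwargs)
        try:
            yield tmpdir
        finally:
            shutil.rmtree(tmpdir, ignore_errors=True)

Callers would then write "with tempdir() as d:" and never touch /tmp
directly.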

As a recap, the security problem with /tmp is that developers make
mistakes and use it incorrectly, and reviewers also make mistakes and
don't always catch the developer mistakes.

I don't necessarily disagree with that.

I do disagree that the fix for the problem is to believe that a PEP8-style
check can ensure that every possible way to use /tmp incorrectly is
caught.

You're effectively trying to solve the halting problem.

You can probably catch most incorrect uses, but I don't want to be the
person to argue that we can catch most of the problem.

> > Since we can't trust developers to use /tmp securely, or avoid using
> > /tmp at all, then why not use filesystem namespaces to setup a process
> > specific non-shared /tmp?
> 
> That is possible, but I simply disagree with your point that we
> can't stop using /tmp. It is entirely possible to stop using it
> IMHO.

It's impossible to stop using /tmp:
A) People will continue submitting code that uses /tmp and since
   reviewers make mistakes, those will make it through the review
   process
B) It's not possible to write a program that analyzes another program to
   reliably ensure it doesn't use /tmp at all

If that's the case, then just making sure that all uses of /tmp are safe
will solve the problem.

Filesystem namespaces can do that by bind mounting /tmp to somewhere not
shared, and thusly safe.

Not to mention that any policy that requires not using /tmp will make
more work for reviewers. Being a nova-core reviewer has shown that people
all too often don't read HACKING or other documentation.

I don't think fighting human nature will be effective. I do think moving
humans into an area where their innate nature won't hurt them will
be much more effective.

JE


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Nova] Live Migration + NFSv4 - Permission issues

2012-07-03 Thread Sébastien Han
Which permissions did you set on /var/lib/nova/instances?


On Tue, Jul 3, 2012 at 3:48 PM, Leander Bessa Beernaert  wrote:

> Hello all,
>
> I've been trying to get the live migration to work according to the guide
> http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-live-migrations.html.
>
> So far i've setup 2 compute nodes and 1 controller node. They all share
> the /var/lib/nova/instances dir. I've already verified that the nova user
> id is the same across all the servers.
>
> Currently i'm running into this error when i launch an instance:
> http://paste.openstack.org/show/19221/
>
> It's certainly a permission issue, so i tried adding the group "nova" to
> the user "libvirt-qemu". However, it still doesn't work. To which user must
> i give the nova group permission in order to be able to write in that
> directory?
>
> Regards,
> Leander
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Nova] Live Migration + NFSv4 - Permission issues

2012-07-03 Thread Leander Bessa Beernaert
Currently it's using the default permission. Everything belongs to user
"nova" and the group "nova".

On Tue, Jul 3, 2012 at 4:23 PM, Sébastien Han wrote:

> Which permissions did you set on /var/lib/nova/instances?
>
>
> On Tue, Jul 3, 2012 at 3:48 PM, Leander Bessa Beernaert <
> leande...@gmail.com> wrote:
>
>> Hello all,
>>
>> I've been trying to get the live migration to work according to the guide
>> http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-live-migrations.html.
>>
>> So far i've setup 2 compute nodes and 1 controller node. They all share
>> the /var/lib/nova/instances dir. I've already verified that the nova user
>> id is the same across all the servers.
>>
>> Currently i'm running into this error when i launch an instance:
>> http://paste.openstack.org/show/19221/
>>
>> It's certainly a permission issue, so i tried adding the group "nova" to
>> the user "libvirt-qemu". However, it still doesn't work. To which user must
>> i give the nova group permission in order to be able to write in that
>> directory?
>>
>> Regards,
>> Leander
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
>>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Nova] Live Migration + NFSv4 - Permission issues

2012-07-03 Thread Leander Bessa Beernaert
Here's an output from ls -l:
drwxr-xr-x 3 nova nova   4096 Jul  3 14:10 instances

On Tue, Jul 3, 2012 at 4:25 PM, Leander Bessa Beernaert  wrote:

> Currently it's using the default permission. Everything belongs to user
> "nova" and the group "nova".
>
>
> On Tue, Jul 3, 2012 at 4:23 PM, Sébastien Han wrote:
>
>> Which permissions did you set on /var/lib/nova/instances?
>>
>>
>> On Tue, Jul 3, 2012 at 3:48 PM, Leander Bessa Beernaert <
>> leande...@gmail.com> wrote:
>>
>>> Hello all,
>>>
>>> I've been trying to get the live migration to work according to the
>>> guide
>>> http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-live-migrations.html.
>>>
>>> So far i've setup 2 compute nodes and 1 controller node. They all share
>>> the /var/lib/nova/instances dir. I've already verified that the nova user
>>> id is the same across all the servers.
>>>
>>> Currently i'm running into this error when i launch an instance:
>>> http://paste.openstack.org/show/19221/
>>>
>>> It's certainly a permission issue, so i tried adding the group "nova" to
>>> the user "libvirt-qemu". However, it still doesn't work. To which user must
>>> i give the nova group permission in order to be able to write in that
>>> directory?
>>>
>>> Regards,
>>> Leander
>>>
>>> ___
>>> Mailing list: https://launchpad.net/~openstack
>>> Post to : openstack@lists.launchpad.net
>>> Unsubscribe : https://launchpad.net/~openstack
>>> More help   : https://help.launchpad.net/ListHelp
>>>
>>>
>>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Setting VM passwords when not running on Xen

2012-07-03 Thread John Garbutt
This seemed to crop up quite a lot in different sessions at the Design summit. 
I am certainly interested in a standard way to inject information into VMs.

What I think we need is a cross hypervisor two-way guest communication channel 
that is fairly transparent to the user of that VM (i.e. ideally not a network 
connection).

If I understand things correctly, we currently have these setup ideas:

* Config Drive (not supported by XenAPI, and not a two-way transport)

* Cloud-Init / Metadata service (depends on DHCP(?), and not a two-way 
transport)

But to set the password, we ideally want two way communication. We currently 
have these:

* XenAPI guest plugin (XenServer specific, uses XenStore, but two way, 
no networking assumed )

* Serial port (used by http://wiki.libvirt.org/page/Qemu_guest_agent 
but not supported on XenServer)

I like the idea of building a common interface (maybe write out to a known file 
system location) for the above two hypervisor specific mechanisms. The agent 
should be able to pick which mechanism works. Then on top of that, we could 
write a common agent that can be shared for all the different hypervisors. You 
could also fallback to the metadata service and config drive when no two way 
communication is available.

I would love this Guest Agent to be an OpenStack project that can then be up 
streamed into many Linux distribution cloud images.

Sadly, I don't have any time to work on this right now, but hopefully that will 
change in the near future.

Cheers,
John

From: openstack-bounces+john.garbutt=eu.citrix@lists.launchpad.net 
[mailto:openstack-bounces+john.garbutt=eu.citrix@lists.launchpad.net] On 
Behalf Of Day, Phil
Sent: 03 July 2012 3:07
To: openstack@lists.launchpad.net (openstack@lists.launchpad.net) 
(openstack@lists.launchpad.net)
Subject: [Openstack] Setting VM passwords when not running on Xen

Hi Folks,

Is anyone else looking at how to support images that need a password rather 
than an ssh key (windows) on hypervisors that don't support set_admin_password 
(e.g. libvirt) ?

Thanks
Phil
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Nova] Live Migration + NFSv4 - Permission issues

2012-07-03 Thread Marnus van Niekerk
Have you tried setting the ownership of /var/lib/nova/instances to the 
nova user?


sudo chown -R nova:nova /var/lib/nova/instances

M

On 03/07/2012 15:48, Leander Bessa Beernaert wrote:

Hello all,

I've been trying to get the live migration to work according to the 
guide 
http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-live-migrations.html 
.


So far i've setup 2 compute nodes and 1 controller node. They all 
share the /var/lib/nova/instances dir. I've already verified that the 
nova user id is the same across all the servers.


Currently i'm running into this error when i launch an instance: 
http://paste.openstack.org/show/19221/


It's certainly a permission issue, so i tried adding the group "nova" 
to the user "libvirt-qemu". However, it still doesn't work. To which 
user must i give the nova group permission in order to be able to 
write in that directory?


Regards,
Leander


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Openstack and Google Compute Engine

2012-07-03 Thread John Paul Walters
Matt,

I agree with almost everything that you're saying, except to add that we hope 
to change things.  I hope that our work at ISI is moving in that direction.  
But you're right, hypervisors add some overhead, network performance isn't 
always great, etc.  Things are changing, albeit slowly, but I'm optimistic that 
we'll get there.  Amazon's #70-something ranked supercomputer is evidence that 
the cloud can compete, at least as far as stunt computers (HPL) go.  That the 
cloud enables you to customize your environment has proven to be a very 
powerful motivation for the folks that we work with.  

That reminds me, we had intended to set up a monthly (or so) HPC telecon after 
the most recent design summit.  I'd like to follow up with that.  I'll send a 
separate email to get that going, for those that are interested.

best,
JP  



On Jul 3, 2012, at 10:12 AM, Matt Joyce wrote:

> 
> On Tue, Jul 3, 2012 at 2:01 AM, Simon G.  wrote:
> Secondly, I don't think we shouldn't compare GCE to Openstack. I understand 
> that right now cloud (Openstack, Amazon, ...) is just easy in use, managed 
> and scalable datacenter. It allows users to create VMs, upload their images, 
> easily increase their (limited) demands, but don't you think that HPC is the 
> right direction? I've always thought that final cloud's goal is to provide 
> easy in use HPC infrastructure. Where users could do what they can do right 
> now in the clouds (Amazon, Openstack), but also could do what they couldn't 
> do in typical datacenter. They should run instance, run compute-heavy 
> software and if they need more resources, they just add them. if cloud is 
> unable to provide necessary resources, they should move their app to bigger 
> cloud and do what they need. Openstack should be prepared for such large 
> deployment. It should also be prepared for HPC use cases. Or if it's not 
> prepared yet, it should be Openstack's goal.
> 
> HPC in the cloud operates more like a grid computing solution.  With things 
> like Amazon HPC or HPC under openstack the idea is to allocate entire 
> physical systems to a user on the fly.  Traditionally to date that has been 
> done with m1.full style instances.  In many ways bare metal provisioning is a 
> better option here than a hypervisor.  And for many people who do work in an 
> HPC environment bare metal really is the only solution that makes sense.  
> 
> The reality is that HPC use cases lose a lot of the underlying benefits of 
> cloud infrastructure.  So they really are something of an edge case at the 
> moment.  I believe that bare metal provisioning from within openstack could 
> be a bit of a game changer in HPC, and that it could be useful in a wide 
> variety of areas.  But, ultimately I believe the usage that HPC in no way 
> reflects general computing needs.  And that really sums it up.  Most folks do 
> not need or want HPC.  Most folks with HPC needs don't want a hypervisor 
> slowing down their memory access.
>  
> I know that clouds are fulfilling current needs for scalable datacenter, but 
> it should also fulfill future needs. Apps are faster and faster. More often 
> they do image processing, voice recognition, data mining and it should be 
> clouds' goal to provide an easy way to create such advanced apps, not just 
> simple web server which could be scaled up, by adding few VMs and load 
> balancer to redirect requests. Infrastructure should be prepared even for 
> such large deployment like that in google. It should also be optimized and 
> support heavy computations. In the future it should be as efficient as grids 
> (or almost as efficient), because ease of use has already been achieved. If, 
> right now, it's easy to deploy VM into the cloud, the next step should be to 
> optimize infrastructure to increase performance. 
> 
> Apps are actually slower and slower.  The hardware is faster.  The 
> Applications themselves abstract more and more and thus slow down.  As for 
> what you do on your instances, that's entirely your own thing herr user.  
> Some large data and some serious compute use cases simply don't lend 
> themselves to cloud today.  Hypervisors are limiting in so far as they give 
> up some speed to provide the ability to share resources better.  If you have 
> no desire to share resources then virt machines become something of an 
> impediment to you.  So I don't see this as being accurate for some use cases.
> 
> There are also other external limiting factors.  People don't just turn on a 
> dime.  Many of the scientific and industrial applications of computing power 
> are built around software stacks that have grown over time, and for a long 
> time.  Those stacks can't be made to easily adopt the benefits of a new 
> technology.  Sometimes the reason not to use cloud as a platform is entirely 
> related to your inability to modify an existing software suite enough to make 
> it worthwhile.  I have seen this before at super computing facilities.
>  
> I've

Re: [Openstack] [OpenStack][Nova] Live Migration + NFSv4 - Permission issues

2012-07-03 Thread Leander Bessa Beernaert
Still the same problem :S

On Tue, Jul 3, 2012 at 4:46 PM, Marnus van Niekerk  wrote:

> Have you tried setting the ownership of /var/lib/nova/instances to the
> nova user?
>
> sudo chown -R nova:nova /var/lib/nova/instances
>
> M
>
>
> On 03/07/2012 15:48, Leander Bessa Beernaert wrote:
>
>> Hello all,
>>
>> I've been trying to get the live migration to work according to the guide
>> http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-live-migrations.html.
>>
>> So far i've setup 2 compute nodes and 1 controller node. They all share
>> the /var/lib/nova/instances dir. I've already verified that the nova user
>> id is the same across all the servers.
>>
>> Currently i'm running into this error when i launch an instance:
>> http://paste.openstack.org/show/19221/
>>
>> It's certainly a permission issue, so i tried adding the group "nova" to
>> the user "libvirt-qemu". However, it still doesn't work. To which user must
>> i give the nova group permission in order to be able to write in that
>> directory?
>>
>> Regards,
>> Leander
>>
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
>
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [OSSA 2012-008] Arbitrary file injection/corruption through directory traversal issues (CVE-2012-3360, CVE-2012-3361)

2012-07-03 Thread Thierry Carrez
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

OpenStack Security Advisory: 2012-008
CVE: 2012-3360, 2012-3361
Date: July 3, 2012
Title: Arbitrary file injection/corruption through directory traversal
issues
Impact: Critical
Reporter: Matthias Weckbecker (SUSE Security team), Pádraig Brady (Red
Hat)
Products: Nova
Affects: All versions

Description:
Matthias Weckbecker from SUSE Security team reported a vulnerability
in Nova compute nodes handling of file injection in disk images. By
requesting files to be injected in malicious paths, a remote
authenticated user could inject files in arbitrary locations on the
host file system, potentially resulting in full compromise of the
compute node. Only Essex and later setups running the OpenStack API
over libvirt-based hypervisors are affected.

Upon further inspection of the code, Pádraig Brady from Red Hat found
an additional vulnerability. By crafting a malicious image and
requesting an instance based on it, a remote authenticated user may
corrupt arbitrary files on the host filesystem, potentially resulting
in a denial of service. This affects all setups.

Fixes:
Folsom:
https://github.com/openstack/nova/commit/2427d4a99bed35baefd8f17ba422cb7aae8dcca7
Essex:
https://github.com/openstack/nova/commit/b0feaffdb2b1c51182b8dce41b367f3449af5dd9
Diablo: see patch at https://review.openstack.org/9268

References:
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2012-3360
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2012-3361
https://bugs.launchpad.net/nova/+bug/1015531

Notes:
This fix will be included in the folsom-2 development milestone
(published this week) and in future Essex and Diablo releases.

- -- 
Thierry Carrez (ttx)
OpenStack Vulnerability Management Team
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.11 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iQIcBAEBCAAGBQJP8xhQAAoJEFB6+JAlsQQjxrwP/0riLbaI8tCRfKeR6I2ATXIU
1QTOjn6TzQOhUNKwP63OzeUmu1xg7gI/XscWYLgxPYetsysao7YUgsy7PcVSznQh
Ii7LM7WnrxpanP3SOOM4qJQ4d3MZvP8qP0R9hQ1XAtdE9T4yB3aDvzf+XVXFFLad
nnF9meI5xPe+Ws70BH0rTo2XNcTTukpnNxOwYC4Sayx0cHvMCjLMr6RWOoPCftDd
WFDOeJNuSEh1NcDwt6qgPCQMLBS/+WavnQFf6EuBdjkASAtONDYblkxyYPRSsf8y
xYDVjrYUcJ5YeDwI2vbqKCP9EMuwb0JSfep767OIbupgIMm7rTjW+vEsns4e2d1m
2WovMHlV9ar7zpTIeqjAYE/BzUlRaOa7+JRJwy8F2awbu5oQUeOLq8XeAyo5Ag8C
zjYMut/OuHEdqMQY+eLqtPVcaNg801wXEfgdn8zuE41qXkk6yyAFJJUPlkBeMqiE
8cHEeJJwBDP5deHJIESzraeOUTFBXXoABhxdehAa708y4BWGt0/EG5SeHg38HoZs
gODHzZ5D+rgRYZsMV3JanAoB27QH4LQfPc1WLCM20wJSppZXq4KjngNA9trV68Na
+LKR+/EAZvOmpJMsymhuTgc9uRNRTlhC85NGquBzK2TZtlfJzI/qADV7fQPnWVQZ
JJcGXBOJw/J7rCmBIDuQ
=/7QJ
-END PGP SIGNATURE-

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Setting VM passwords when not running on Xen

2012-07-03 Thread Scott Moser
On Tue, 3 Jul 2012, Day, Phil wrote:

> Hi Folks,
>
> Is anyone else looking at how to support images that need a password
> rather than an ssh key (windows) on hypervisors that don't support
> set_admin_password (e.g. libvirt) ?

I'm completely ignorant about windows.
Please forgive me.

Is it for some reason not possible to have code that runs on first
instance boot that reads the metadata service (or config drive) and sets
the password appropriately?

Or, is there something that makes this actually a nova problem, something
that cannot reasonably be solved in the same way such things are solved in
other operating systems.

Is there no way that you could pass in a public key that would be used for
authentication to RDP or whatever you'd do?  (grave ignorance of windows).


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Adding OpenVZ support via libvirt to Nova

2012-07-03 Thread David Shrewsbury
Hey all,

I'm currently working on supporting OpenVZ through the libvirt driver in
Nova.
I can successfully spin up and tear down OpenVZ containers using a slightly
modified version of devstack (yay!), but I've come across an issue with the
networking part of it that I could use some advice on from libvirt experts
out
there.

When my containers are spawned, they do not have their networking
set up completely. I have to manually use vzctl to set the IP address:

  sudo vzctl set 101 --ipadd 10.0.0.2 --save

I need to do the same for the nameserver and hostname. Nova generates
this libvirt XML for the container's network interface:

    [the <interface> XML was stripped from the archived message]

I'd obviously like to avoid making calls to 'vzctl' to finish setting up the
networking, so if it can be done by modifying the libvirt XML, I'd like
to do that, but can't seem to figure it out, and don't want to break
functionality by making stupid changes. I don't consider myself to
be either a libvirt or OpenVZ expert.

I did get an XML configuration like this to automagically set the
IP address (that XML was also stripped from the archived message):

However, I don't know if this will break anything if I change Nova
to output this for OpenVZ.

So, any advice from libvirt experts?

-Dave
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] CY12-Q2 Community Analysis — OpenStack vs OpenNebula vs Eucalyptus vs CloudStack

2012-07-03 Thread Zhong TIAN
John, great job compiling the data together! Keep up the great work in
taking the pulse of the open source movement in the cloud.

I would second Tim's observation about additional forums that Open Stack 
uses for its developer and user community.

I would also suggest adding one particularly vibrant community to the study, 
the Launchpad Answers forum, by searching for OpenStack or choosing a number of 
OpenStack projects:
https://answers.launchpad.net


Kindest regards, 
Tian, Zhong PhD.
Senior Technical Staff Member, Open Standards; Member of IBM Academy of 
Technology 
Emerging Technology Institute (ETI) 
China Software Development Lab 
Email: ti...@cn.ibm.com 
Tel: +86-10-8245-2792 
Fax: +86-10-8245-0855 
Mobile: +86-186-1071-4866 (new) 

Building 28 (Ring), ZhongGuanCun Software Park, No.8 Dongbeiwang West 
Road, Haidian District, Beijing, China, 100193 



From:   Tim Bell 
To: Atul Jha , "Qingye Jiang (John)" 
, "openstack@lists.launchpad.net" 
, 
Date:   07/02/2012 05:27 PM
Subject:Re: [Openstack] CY12-Q2 Community Analysis — OpenStack vs 
OpenNebula vs Eucalyptus vs CloudStack
Sent by:openstack-bounces+tianz=cn.ibm@lists.launchpad.net



The following may also be worth scanning:

- http://forums.openstack.org/
- Mailing lists on http://wiki.openstack.org/MailingLists (although quite 
a
few of them are quiet so would not affect the numbers much)

Tim


> -Original Message-
> From: openstack-bounces+tim.bell=cern...@lists.launchpad.net
> [mailto:openstack-bounces+tim.bell=cern...@lists.launchpad.net] On 
Behalf
> Of Atul Jha
> Sent: 02 July 2012 08:50
> To: Qingye Jiang (John); openstack@lists.launchpad.net
> Subject: Re: [Openstack] CY12-Q2 Community Analysis - OpenStack vs
> OpenNebula vs Eucalyptus vs CloudStack
> 
> Hi,
> You should also add https://answers.launchpad.net/openstack
> 
> Cheers!!
> 
> Atul Jha
> 
> From: openstack-bounces+atul.jha=csscorp@lists.launchpad.net
> [openstack-bounces+atul.jha=csscorp@lists.launchpad.net] on behalf
> of Qingye Jiang (John) [qji...@gmail.com]
> Sent: Monday, July 02, 2012 7:50 AM
> To: openstack@lists.launchpad.net
> Subject: [Openstack] CY12-Q2 Community Analysis - OpenStack vs
> OpenNebula vs Eucalyptus vs CloudStack
> 
> Hi all,
> 
> I would like to let you know that I have just finished an analysis on 
the
4 open
> source projects (OpenStack, OpenNebula, Eucalyptus,
> CloudStack) from a community activity perspective. The analysis report
could
> be found from my personal blog at http://www.qyjohn.net/?p=2233 (with a
> lot of figures).
> 
> Best regards,
> 
> Qingye Jiang (John)
> 
> 
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
> http://www.csscorp.com/common/email-disclaimer.php
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
[attachment "smime.p7s" deleted by Zhong TIAN/China/IBM] 
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Anyone using instance metadata?

2012-07-03 Thread Scott Moser
Hi,
I'm looking at nova, and the compute API has 3 methods:
   delete_instance_metadata
   update_instance_metadata
   get_instance_metadata

I know that
 * python nova client has
   * the ability to specify --meta=KEY=VALUE on instance creation
   * a top level subcommand 'meta' which allows set and delete of
 metadata keys (but no support for querying current value).
 * content specified on instance creation is injected into the
   instance's root filesystem at '/meta.js'.  Or, if config_drive
   is given the config drive will have /meta.js
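
(For concreteness, the client-side calls look roughly like this; a sketch
assuming python-novaclient's v1_1 interface, with placeholder credentials
and IDs:)

    from novaclient.v1_1 import client

    nova = client.Client('USER', 'PASSWORD', 'TENANT',
                         'http://KEYSTONE_HOST:5000/v2.0/')

    # Set metadata at boot time (the --meta option on the command line).
    server = nova.servers.create('test-vm', 'IMAGE_ID', '1',
                                 meta={'role': 'webserver'})

    # Update and delete metadata on a running instance.
    nova.servers.set_meta(server, {'role': 'database'})
    nova.servers.delete_meta(server, ['role'])

    # There is no 'meta get' subcommand, but the server object carries
    # the current values.
    print(nova.servers.get(server.id).metadata)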

What I'm missing is:
 * There is an 'update' for this content, implying that it is at least
   partially dynamic in intent, but the filesystem data passing mechanism
   is clearly *not* dynamic.
 * This data is not made available in the metadata server
   (http://169.254.169.254) where it *could* be dynamic.

So, I'm confused on the intent of this metadata.  I can't decide if it is
really just supposed to be "tags" or more of a user-data replacement.

In EC2, you can store arbitrary key/value pairs on an instance-id,
ami-id, or anything else, but those values are not readable without
credentials.  Here, they're readable inside the instance from the
filesystem.

Anyone able to comment on the original intent of 'metadata'?
It seems to me that if we expose this in the metadata server, then it will
be a very useful feature, but one that overlaps confusingly with
user-data.
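
For reference, here is roughly how those pieces are driven from python-novaclient
today (a quick sketch against the Essex-era v1.1 client API, written from memory
and untested; credentials, image and flavor IDs are placeholders):

# Rough sketch of exercising instance metadata via python-novaclient (v1.1 API).
from novaclient.v1_1 import client

nova = client.Client("admin", "secret", "demo-project",
                     "http://keystone.example.com:5000/v2.0/")

# Equivalent of `--meta role=webserver` at boot time.
server = nova.servers.create(name="md-test", image="IMAGE_UUID", flavor="1",
                             meta={"role": "webserver"})

# Equivalent of the 'meta' subcommand: set and delete keys after boot.
nova.servers.set_meta(server, {"role": "database"})
nova.servers.delete_meta(server, ["role"])

# The current values are visible through the API (the part the CLI is missing);
# inside the guest the same data only shows up as the static /meta.js file.
print(nova.servers.get(server.id).metadata)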

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Setting VM passwords when not running on Xen

2012-07-03 Thread Day, Phil
Thanks John,

One approach we were wondering about is to have an agent in Windows which:


o   Generates a random password and sets it for the admin account

o   Gets the public ssh key from the metadata service

o   Encrypts the password with the public key

o   Pushes the encrypted password back to the metadata server (requires the 
metadata server to support Push)

The user can then get the encrypted password from the API and decrypt it with 
their private key
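
To make that flow concrete, here is a rough sketch of the guest-side logic (this
assumes the EC2-compatible metadata paths nova already serves and PyCrypto for
the RSA step; the final POST is hypothetical, since today's metadata service has
no push support):

# Sketch of the proposed agent flow; setting the Windows account password itself
# is left out, and the push URL at the end is imaginary.
import base64, os, urllib2
from Crypto.PublicKey import RSA
from Crypto.Cipher import PKCS1_OAEP

MD = "http://169.254.169.254/latest/meta-data"

# 1. Generate a random password (and set it on the admin account).
password = base64.b64encode(os.urandom(18))

# 2. Fetch the instance's public ssh key from the metadata service.
ssh_pub = urllib2.urlopen(MD + "/public-keys/0/openssh-key").read()

# 3. Encrypt the password with that public key.
cipher = PKCS1_OAEP.new(RSA.importKey(ssh_pub))
blob = base64.b64encode(cipher.encrypt(password))

# 4. Push the encrypted blob back -- hypothetical endpoint, not implemented today.
urllib2.urlopen(urllib2.Request("http://169.254.169.254/openstack/password",
                                data=blob))

The user would then fetch the blob via the API and decrypt it with their private
key (e.g. openssl rsautl -decrypt -oaep).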

The advantage would be that the clear text password never leaves the VM, so 
there are fewer security concerns about Nova having access to clear text 
passwords.

It would also seem to be a small change in the metadata service and no change 
in the API layer - not sure if there are concerns about what a VM could break 
if it updates its own metadata, but I guess we could also limit what values can 
be set.

Thoughts ?

Phil



From: John Garbutt [mailto:john.garb...@citrix.com]
Sent: 03 July 2012 16:41
To: Day, Phil; openstack@lists.launchpad.net (openstack@lists.launchpad.net) 
(openstack@lists.launchpad.net)
Subject: RE: Setting VM passwords when not running on Xen

This seemed to crop up quite a lot in different sessions at the Design summit. 
I am certainly interested in a standard way to inject information into VMs.

What I think we need is a cross hypervisor two-way guest communication channel 
that is fairly transparent to the user of that VM (i.e. ideally not a network 
connection).

If I understand things correctly, we currently have these setup ideas:

* Config Drive (not supported by XenAPI, but not a two way transport)

* Cloud-Init / Metadata service (depends on DHCP(?), and not a two-way 
transport)

But to set the password, we ideally want two way communication. We currently 
have these:

* XenAPI guest plugin (XenServer specific, uses XenStore, but two way, 
no networking assumed )

* Serial port (used by http://wiki.libvirt.org/page/Qemu_guest_agent 
but not supported on XenServer)

I like the idea of building a common interface (maybe write out to a known file 
system location) for the above two hypervisor specific mechanisms. The agent 
should be able to pick which mechanism works. Then on top of that, we could 
write a common agent that can be shared for all the different hypervisors. You 
could also fallback to the metadata service and config drive when no two way 
communication is available.
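
To illustrate the shape of that, here is a very rough sketch of what transport
selection in such an agent could look like (the probe paths, binary names and
port naming are assumptions for illustration, not an existing implementation):

# Rough sketch of a transport-picking guest agent; detection heuristics are
# assumptions, not an existing OpenStack agent.
import os, subprocess, urllib2

class XenStoreTransport(object):
    """Two-way channel via XenStore (XenServer/XenAPI guests)."""
    @staticmethod
    def available():
        devnull = open(os.devnull, "w")
        return os.path.exists("/proc/xen") and \
            subprocess.call(["which", "xenstore-read"], stdout=devnull) == 0

    def read(self, key):
        return subprocess.check_output(["xenstore-read", "data/" + key])

class VirtioSerialTransport(object):
    """Two-way channel via a virtio-serial port (KVM/libvirt guests)."""
    PORT = "/dev/virtio-ports/org.openstack.agent.0"  # assumed naming

    @staticmethod
    def available():
        return os.path.exists(VirtioSerialTransport.PORT)

    def read(self, key):
        with open(self.PORT, "r+") as port:
            port.write("GET %s\n" % key)
            return port.readline()

class MetadataFallback(object):
    """One-way fallback: the metadata service / config drive."""
    @staticmethod
    def available():
        return True

    def read(self, key):
        url = "http://169.254.169.254/latest/meta-data/" + key
        return urllib2.urlopen(url).read()

def pick_transport():
    # First transport that reports itself available wins.
    for cls in (XenStoreTransport, VirtioSerialTransport, MetadataFallback):
        if cls.available():
            return cls()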

I would love this Guest Agent to be an OpenStack project that can then be up 
streamed into many Linux distribution cloud images.

Sadly, I don't have any time to work on this right now, but hopefully that will 
change in the near future.

Cheers,
John

From: 
openstack-bounces+john.garbutt=eu.citrix@lists.launchpad.net
 
[mailto:openstack-bounces+john.garbutt=eu.citrix@lists.launchpad.net]
 On Behalf Of Day, Phil
Sent: 03 July 2012 3:07
To: openstack@lists.launchpad.net 
(openstack@lists.launchpad.net) 
(openstack@lists.launchpad.net)
Subject: [Openstack] Setting VM passwords when not running on Xen

Hi Folks,

Is anyone else looking at how to support images that need a password rather 
than an ssh key (windows) on hypervisors that don't support set_admin_password 
(e.g. libvirt) ?

Thanks
Phil
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift] [Storage node] Lots of timeouts in load test after several hours around 1,000,000 operations

2012-07-03 Thread Kuo Hugo
I found that running the updater and replicator could improve this issue.

In my original practice, to get the best performance, I only started the main
workers (account-server, container-server, object-server), and kept
uploading / downloading / deleting objects over 100 times.

Issues:

1. XFS or Swift consumes lots of memory for some reason. Does anyone know
what is being cached (or buffered; cached usage is not too much though) in
memory in this practice? After running the container/object replicators, that
memory is all released. I'm curious about the contents in memory. Is it all
object metadata or something else?

2. Plenty of 10s timeouts in the proxy-server's log, due to timing out while
getting the final status of an object PUT from the storage node.
At the beginning, the object workers complain about a 3s timeout for updating
the container (async later), but there are not too many complaints. As more and
more PUT / GET / DELETE operations run, more and more timeouts happen.
It seems that the updater can improve this issue.
Is this behavior related to the amount of data in the pickles?


Thanks
Hugo


2012/7/2 Kuo Hugo 

> Hi all ,
>
> I did several loading tests for swift in recent days.
>
> I'm facing an issue ... Hope you can share your considerations with me ...
>
> My environment:
> Swift-proxy with Tempauth in one server : 4 cores/32G rams
>
> Swift-object + Swift-account + Swift-container in storage node * 3 , each
> for : 8 cores/32G rams   2TB SATA HDD * 7
>
> =
> bench.conf :
>
> [bench]
> auth = http://172.168.1.1:8082/auth/v1.0
> user = admin:admin
> key = admin
> concurrency = 200
> object_size = 4048
> num_objects = 10
> num_gets = 10
> delete = yes
> =
>
> After 70 rounds .
>
> PUT operations get lots of failures , but GET still works properly
> *ERROR log:*
> Jul  1 04:35:03 proxy-server ERROR with Object server
> 192.168.100.103:36000/DISK6 re: Trying to get final status of PUT to
> /v1/AUTH_admin/af5862e653054f7b803d8cf1728412d2_6/24fc2f997bcc4986a86ac5ff992c4370:
> Timeout (10s) (txn: txd60a2a729bae46be9b667d10063a319f) (client_ip:
> 172.168.1.2)
> Jul  1 04:34:32 proxy-server ERROR with Object server
> 192.168.100.103:36000/DISK2 re: Expect: 100-continue on
> /AUTH_admin/af5862e653054f7b803d8cf1728412d2_19/35993faa53b849a89f96efd732652e31:Timeout
>  (10s)
>
>
> And kernel starts to report failed message as below
> *kernel failed log:*
> 7 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.020736] w83795
> 0-002f: Failed to read from register 0x03c, err -6
>76667 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.052654]
> w83795 0-002f: Failed to read from register 0x015, err -6
>76668 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.080613]
> w83795 0-002f: Failed to read from register 0x03c, err -6
>76669 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.112583]
> w83795 0-002f: Failed to read from register 0x016, err -6
>76670 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.144517]
> w83795 0-002f: Failed to read from register 0x03c, err -6
>76671 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.176468]
> w83795 0-002f: Failed to read from register 0x017, err -6
>76672 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.208455]
> w83795 0-002f: Failed to read from register 0x03c, err -6
>76673 Jul  1 16:37:51 angryman-storage-03 kernel: [350840.240410]
> w83795 0-002f: Failed to read from register 0x01b, err -6
>76674 Jul  1 16:37:51 angryman-storage-03 kernel: [350840.272Jul  1
> 17:05:28 angryman-storage-03 kernel: imklog 6.2.0, log source  =
> /proc/kmsg started.
>
> PUTs become slower and slower , from 1,200/s to 200/s ...
>
> I'm not sure if this is a bug or a limitation of XFS. If it's a limit of
> XFS, how can I improve it?
>
> An additional question: XFS seems to consume lots of memory; does anyone
> know the reason for this behavior?
>
>
> Appreciate ...
>
>
> --
> +Hugo Kuo+
> tonyt...@gmail.com
> + 886 935004793
>
>


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Setting VM passwords when not running on Xen

2012-07-03 Thread John Garbutt
Interesting idea, that seems reasonable.

The password is encrypted when it leaves the VM in the XenServer case too (if I 
have understood the code correctly).

My only concerns are thinking about the more general solution:

* It only works on boot, so it is harder to change the password if you forget it.

* I guess it leaves people who are dependent on Config drive stuck

* We are making more changes to an API we don't really own

* How does the VM know to trust it is not an "evil" metadata service, 
but I guess the same applies to injecting the SSH keys

Cheers,
John

From: Day, Phil [mailto:philip@hp.com]
Sent: 03 July 2012 6:06
To: John Garbutt; openstack@lists.launchpad.net (openstack@lists.launchpad.net) 
(openstack@lists.launchpad.net)
Subject: RE: Setting VM passwords when not running on Xen

Thanks John,

One approach we were wondering about is to have an agent in Windows which:


o   Generates a random password and sets it for the admin account

o   Gets the public ssh key from the metadata service

o   Encrypts the password with the public key

o   Pushes the encrypted password back to the metadata server (requires the 
metadata server to support Push)

The user can then get the encrypted password from the API and decrypt it with 
their private key

The advantage would be that the clear text password never leaves the VM, so 
there are fewer security concerns about Nova having access to clear text 
passwords.

It would also seem to be a small change in the metadata service and no change 
in the API layer - not sure if there are concerns about what a VM could break 
if it updates its own metadata, but I guess we could also limit what values can 
be set.

Thoughts ?

Phil



From: John Garbutt [mailto:john.garb...@citrix.com]
Sent: 03 July 2012 16:41
To: Day, Phil; openstack@lists.launchpad.net (openstack@lists.launchpad.net) 
(openstack@lists.launchpad.net)
Subject: RE: Setting VM passwords when not running on Xen

This seemed to crop up quite a lot in different sessions at the Design summit. 
I am certainly interested in a standard way to inject information into VMs.

What I think we need is a cross hypervisor two-way guest communication channel 
that is fairly transparent to the user of that VM (i.e. ideally not a network 
connection).

If I understand things correctly, we currently have these setup ideas:

* Config Drive (not supported by XenAPI, but not a two way transport)

* Cloud-Init / Metadata service (depends on DHCP(?), and not a two-way 
transport)

But to set the password, we ideally want two way communication. We currently 
have these:

* XenAPI guest plugin (XenServer specific, uses XenStore, but two way, 
no networking assumed )

* Serial port (used by http://wiki.libvirt.org/page/Qemu_guest_agent 
but not supported on XenServer)

I like the idea of building a common interface (maybe write out to a known file 
system location) for the above two hypervisor specific mechanisms. The agent 
should be able to pick which mechanism works. Then on top of that, we could 
write a common agent that can be shared for all the different hypervisors. You 
could also fallback to the metadata service and config drive when no two way 
communication is available.

I would love this Guest Agent to be an OpenStack project that can then be up 
streamed into many Linux distribution cloud images.

Sadly, I don't have any time to work on this right now, but hopefully that will 
change in the near future.

Cheers,
John

From: 
openstack-bounces+john.garbutt=eu.citrix@lists.launchpad.net
 
[mailto:openstack-bounces+john.garbutt=eu.citrix@lists.launchpad.net]
 On Behalf Of Day, Phil
Sent: 03 July 2012 3:07
To: openstack@lists.launchpad.net 
(openstack@lists.launchpad.net) 
(openstack@lists.launchpad.net)
Subject: [Openstack] Setting VM passwords when not running on Xen

Hi Folks,

Is anyone else looking at how to support images that need a password rather 
than an ssh key (windows) on hypervisors that don't support set_admin_password 
(e.g. libvirt) ?

Thanks
Phil
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [metering] ceilometer dev docs on readthedocs.org

2012-07-03 Thread Doug Hellmann
I've set up the ceilometer development documentation build on RTD at
http://ceilometer.readthedocs.org/en/latest/index.html

Doug
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] best practices for merging common into specific projects

2012-07-03 Thread Joshua Harlow
I think that's a good little explanation as to why we have openstack-common, 
but when did it become a good reason to copy code around via an inclusion 
mechanism?

Lots of code in packages (outside of OpenStack, on PyPI and elsewhere) is also
in 'incubation' (in fact, what code isn't in perpetual incubation?); that's why
you still have version numbers.

I just worry about including code that isn't versioned into other projects,
and I don't see the benefit of that when you can just have a package that has
that code as well.

On 7/3/12 2:35 AM, "Thierry Carrez"  wrote:

Thierry Carrez wrote:
> Gabriel Hurley wrote:
>> On a more fundamental level, did I miss some tremendous reason why we have 
>> this "merge from common" pattern instead of making OpenStack Common a 
>> standard python dependency just like anything else? Especially with the work 
>> Monty has recently done on versioning and packaging the client libs from 
>> Jenkins, I can't see a reason to keep following this "update common and 
>> merge to everything else" pattern at all...
>
> This discussion should probably wait for markmc to come back, since he
> set up most of this framework in the first place. He would certainly
> produce a more compelling rationale than I can :)

Actually http://wiki.openstack.org/CommonLibrary explains it quite well.
In particular:

"openstack-common also provides a process for incubating APIs which,
while they are shared between multiple OpenStack projects, have not yet
matured to meet the [library inclusion] criteria described above."

"Incubation shouldn't be seen as a long term option for any API - it is
merely a stepping stone to inclusion into the openstack-common library
proper."

--
Thierry Carrez (ttx)
Release Manager, OpenStack

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] best practices for merging common into specific projects

2012-07-03 Thread Dan Prince


- Original Message -
> From: "Russell Bryant" 
> To: andrewbog...@gmail.com
> Cc: "Andrew Bogott" , openstack@lists.launchpad.net
> Sent: Monday, July 2, 2012 3:26:56 PM
> Subject: Re: [Openstack] best practices for merging common into specific  
> projects
> 
> On 07/02/2012 03:16 PM, Andrew Bogott wrote:
> > Background:
> > 
> > The openstack-common project is subject to a standard
> > code-review
> > process (and, soon, will also have Jenkins testing gates.)  Sadly,
> > patches that are merged into openstack-common are essentially
> > orphans.
> > Bringing those changes into actual use requires yet another step, a
> > 'merge from common' patch where the code changes in common are
> > copied
> > into a specific project (e.g. nova.)
> > Merge-from-common patches are generated via an automated
> > process.
> > Specific projects express dependencies on specific common
> > components via
> > a config file, e.g. 'nova/openstack-common.conf'.  The actual file
> > copy
> > is performed by 'openstack-common/update.py,' and its behavior is
> > governed by the appropriate openstack-common.conf file.
> 
> More background:
> 
> http://wiki.openstack.org/CommonLibrary
> 
> > Questions:
> > 
> > When should changes from common be merged into other projects?
> > What should a 'merge-from-common' patch look like?
> > What code-review standards should core committers observe when
> > reviewing merge-from-common patches?
> > 
> > Proposals:
> > 
> > I.  As soon as a patch drops into common, the patch author
> > should
> > submit merge-from-common patches to all affected projects.
> > A.  (This should really be done by a bot, but that's not going
> > to
> > happen overnight)
> 
> All of the APIs in openstack-common right now are considered to be in
> incubation, meaning that breaking changes could be made.  I don't
> think
> automated merges are good for anything in incubation.
> 
> Automation would be suitable for stable APIs.  Once an API is no
> longer
> in incubation, we should be looking at how to make releases and treat
> it
> as a proper library.  The copy/paste madness should be limited to
> APIs
> still in incubation.
> 
> There are multiple APIs close or at the point where I think we should
> be
> able to commit to them.  I'll leave the specifics for a separate
> discussion, but I think moving on this front is key to reducing the
> pain
> we are seeing with code copying.
> 
> > II. In the event that I. is not observed, merge-from-common
> > patches
> > will contain bits from multiple precursor patches.  That is not
> > only OK,
> > but encouraged.
> > A.  Keeping projects in sync with common is important!
> > B.  Asking producers of merge-from-common patches to understand
> > the
> > full diff will discourage the generation of such merges.
> 
> I don't see this as much different as any other patches to nova (or
> whatever project is using common).  It should be a proper patch
> series.
>  If the person pulling it in doesn't understand the merge well enough
>  to
> produce the patch series with proper commit messages, then they are
> the
> wrong person to be doing the merge in the first place.

I went on a bit of a rant about this on IRC yesterday. While I agree a patch 
series is appropriate for many new features and bug fixes I don't think it 
should be required for keeping openstack-common in sync. Especially since we 
don't merge tests from openstack-common which would help verify that the person 
doing the merges doesn't mess up the order of the patchset. If we were to 
include the tests from openstack-common in each project this could change my 
mind.

If someone wants to split openstack-common changes into patchsets that might be 
Okay in small numbers. If you are merging say 5-10 changes from openstack 
common into all the various openstack projects that could translate into a 
rather large number of reviews (25+) for things that have been already reviewed 
once.  For me using patchsets to keep openstack-common in sync just causes 
thrashing of Jenkins, SmokeStack, etc. for things that have already been gated. 
Seems like an awful waste of review/CI time. In my opinion patchsets are the 
way to go with getting things into openstack-common... but not when syncing to 
projects.

Hopefully this situation is short lived however and we start using a proper 
library sooner rather than later.


> 
> > III.Merge-from-common patches should be the product of a single
> > unedited run of update.py.
> 
> Disagree, see above.
> 
> > A.  If a merge-from-common patch breaks pep8 or a test in nova,
> > don't fix the patch; fix the code in common.
> 
> Agreed.
> 
> > IV.Merge-from-common patches are 'presumed justified'.  That
> > means:
> > A. Reviewers of merge-from-common patches should consider test
> > failures and pep8 breakages, and obvious functional problems.
> > B. Reviewers of merge-from-common patches should not consider
>

Re: [Openstack] Openstack and Google Compute Engine

2012-07-03 Thread Tim Bell

HPC is often used as a general term, but it actually covers many different
facets depending on the computing model.

CERN is at the centre of a server grid of 100,000s of servers called WLCG 
(http://wlcg.web.cern.ch) for analyzing the data from the Large Hadron 
Collider. The servers are located at over 200 sites worldwide in a tiered 
structure.

However, we're doing High Throughput Computing (HTC) rather than HPC. HTC has
a large number of programs running at the same time which have no need to talk
to each other. Thus, it is more like a large scale batch farm than a massively
parallel machine.

While we lose a little memory and I/O performance, virtualization brings major
benefits in ease of management of the thousands of servers. We expect to recover
the few percent of overhead over the lifetime of the machines through more
flexibility in scheduling repairs and placing the workload, such as
overcommitting a hypervisor when one of the VMs is waiting for a tape to be
mounted.

Analysing 25PB/year for the next 20 years, we have a pretty intensive compute
and I/O load. However, when we take the total cost of ownership, people
included, we expect a significant efficiency gain from the use of a private
cloud. Some more details at http://cern.ch/go/NH9w

The massively parallel processing use cases with Crays/BlueGenes may not 
benefit from private clouds but many of the research sites will.

Tim

On 3 Jul 2012, at 16:12, Matt Joyce wrote:


On Tue, Jul 3, 2012 at 2:01 AM, Simon G. <semy...@gmail.com> wrote:
Secondly, I don't think we shouldn't compare GCE to Openstack. I understand 
that right now cloud (Openstack, Amazon, ...) is just easy in use, managed and 
scalable datacenter. It allows users to create VMs, upload their images, easily 
increase their (limited) demands, but don't you think that HPC is the right 
direction? I've always thought that final cloud's goal is to provide easy in 
use HPC infrastructure. Where users could do what they can do right now in the 
clouds (Amazon, Openstack), but also could do what they couldn't do in typical 
datacenter. They should run instance, run compute-heavy software and if they 
need more resources, they just add them. if cloud is unable to provide 
necessary resources, they should move their app to bigger cloud and do what 
they need. Openstack should be prepared for such large deployment. It should 
also be prepared for HPC use cases. Or if it's not prepared yet, it should be 
Openstack's goal.

HPC in the cloud operates more like a grid computing solution.  With things 
like Amazon HPC or HPC under openstack the idea is to allocate entire physical 
systems to a user on the fly.  Traditionally to date that has been done with 
m1.full style instances.  In many ways bare metal provisioning is a better 
option here than a hypervisor.  And for many people who do work in an HPC 
environment bare metal really is the only solution that makes sense.

The reality is that HPC use cases lose a lot of the underlying benefits of 
cloud infrastructure.  So they really are something of an edge case at the 
moment.  I believe that bare metal provisioning from within openstack could be 
a bit of a game changer in HPC, and that it could be useful in a wide variety 
of areas.  But, ultimately I believe that HPC usage in no way reflects 
general computing needs.  And that really sums it up.  Most folks do not need 
or want HPC.  Most folks with HPC needs don't want a hypervisor slowing down 
their memory access.

I know that clouds are fulfilling current needs for scalable datacenter, but it 
should also fulfill future needs. Apps are faster and faster. More often they 
do image processing, voice recognition, data mining and it should be clouds' 
goal to provide an easy way to create such advanced apps, not just simple web 
server which could be scaled up, by adding few VMs and load balancer to 
redirect requests. Infrastructure should be prepared even for such large 
deployment like that in google. It should also be optimized and support heavy 
computations. In the future it should be as efficient as grids (or almost as 
efficient), because ease of use has already been achieved. If, right now, it's 
easy to deploy VM into the cloud, the next step should be to optimize 
infrastructure to increase performance.

Apps are actually slower and slower.  The hardware is faster.  The Applications 
themselves abstract more and more and thus slow down.  As for what you do on 
your instances, that's entirely your own thing herr user.  Some large data and 
some serious compute use cases simply don't lend themselves to cloud today.  
Hypervisors are limiting in so far as they give up some speed to provide the 
ability to share resources better.  If you have no desire to share resources 
then virt machines become something of an impediment to you.  So I don't see 
this as being accurate for some use cases.

There are also other external limiting factors.  People don

Re: [Openstack] Anyone using instance metadata?

2012-07-03 Thread Vishvananda Ishaya
Metadata is supposed to be user "tags" that are associated with a guest
that are available via the api. We discussed displaying these tags inside
the guest as well.

The main difference between user-data and metadata is that metadata is
available to the api, whereas user-data is only available in the guest.

Vish

On Jul 3, 2012, at 10:05 AM, Scott Moser wrote:

> Hi,
> I'm looking at nova, and the compute API has 3 methods:
>   delete_instance_metadata
>   update_instance_metadata
>   get_instance_metadata
> 
> I know that
> * python nova client has
>   * the ability to specify --meta=KEY=VALUE on instance creation
>   * a top level subcommand 'meta' which allows set and delete of
> metadata keys (but no support for querying current value).
> * content specified on instance creation is injected into the
>   instance's root filesystem at '/meta.js'.  Or, if config_drive
>   is given the config drive will have /meta.js
> 
> What I'm missing is:
> * There is an 'update' for this content, implying that it is at least
>   partially dynamic in intent, but the filesystem data passing mechanism
>   is clearly *not* dynamic.
> * This data is not made available in the metadata server
>   (http://169.254.169.254) where it *could* be dynamic.
> 
> So, I'm confused on the intent of this metadata.  I can't decide if it is
> really just supposed to be "tags" or more of a user-data replacement.
> 
> In EC2, you can store arbitrary key/value pairs on an instance-id,
> ami-id, or anything else, but those values are not readable without
> credentials.  Here, they're readable inside the instance from the
> filesystem.
> 
> Anyone able to comment on the original intent of 'metadata'?
> It seems to me that if we expose this in the metadata server, then it will
> be a very useful feature, but one that overlaps confusingly with
> user-data.
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Setting VM passwords when not running on Xen

2012-07-03 Thread Vishvananda Ishaya
I like the security of this idea, but it would also require that metadata is 
available outside the vm, which it isn't. What about creating a security group 
that opens a specific port, and running a little webserver on that port in the 
guest that makes the key available? That would mean you don't need to modify 
the metadata server at all.
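
Roughly something like this (a sketch; the group name, port, and the idea of
serving the encrypted blob are all just illustrative):

# Control-plane side: open the port with a security group (python-novaclient).
from novaclient.v1_1 import client

nova = client.Client("admin", "secret", "demo", "http://keystone:5000/v2.0/")
group = nova.security_groups.create("password-fetch", "expose encrypted password")
nova.security_group_rules.create(group.id, ip_protocol="tcp",
                                 from_port=8080, to_port=8080, cidr="0.0.0.0/0")

# Guest side: tiny webserver that hands out the encrypted password blob once.
import BaseHTTPServer

ENCRYPTED_BLOB = "..."  # produced by the agent as in Phil's sketch

class Handler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(ENCRYPTED_BLOB)

BaseHTTPServer.HTTPServer(("0.0.0.0", 8080), Handler).handle_request()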

Vish

On Jul 3, 2012, at 10:05 AM, Day, Phil wrote:

> Thanks John,
>  
> One approach we were wondering about is to have an agent in Windows which:
>  
> o   Generates a random password and sets it for the admin account
> o   Gets the public ssh key from the metadata service
> o   Encrypts the password with the public key
> o   Pushes the encrypted password back to the metadata server (requires the 
> metadata server to support Push)
>  
> The user can then get the encrypted password from the API and decrypt it with 
> their private key
>  
> The advantage would be that the clear text password never leaves the VM, so 
> there are fewer security concerns about Nova having access to clear text 
> passwords.
>  
> It would also seem to be a small change in the metadata service and no change 
> in the API layer – not sure if there are concerns about what a VM could break 
> if it updates its own metadata, but I guess we could also limit what values 
> can be set.
>  
> Thoughts ?
>  
> Phil
>  
>  
>  
> From: John Garbutt [mailto:john.garb...@citrix.com] 
> Sent: 03 July 2012 16:41
> To: Day, Phil; openstack@lists.launchpad.net (openstack@lists.launchpad.net) 
> (openstack@lists.launchpad.net)
> Subject: RE: Setting VM passwords when not running on Xen
>  
> This seemed to crop up quite a lot in different sessions at the Design 
> summit. I am certainly interested in a standard way to inject information 
> into VMs.
>  
> What I think we need is a cross hypervisor two-way guest communication 
> channel that is fairly transparent to the user of that VM (i.e. ideally not a 
> network connection).
>  
> If I understand things correctly, we currently have these setup ideas:
> · Config Drive (not supported by XenAPI, but not a two way transport)
> · Cloud-Init / Metadata service (depends on DHCP(?), and not a 
> two-way transport)
>  
> But to set the password, we ideally want two way communication. We currently 
> have these:
> · XenAPI guest plugin (XenServer specific, uses XenStore, but two 
> way, no networking assumed )
> · Serial port (used by http://wiki.libvirt.org/page/Qemu_guest_agent 
> but not supported on XenServer)
>  
> I like the idea of building a common interface (maybe write out to a known 
> file system location) for the above two hypervisor specific mechanisms. The 
> agent should be able to pick which mechanism works. Then on top of that, we 
> could write a common agent that can be shared for all the different 
> hypervisors. You could also fallback to the metadata service and config drive 
> when no two way communication is available.
>  
> I would love this Guest Agent to be an OpenStack project that can then be up 
> streamed into many Linux distribution cloud images.
>  
> Sadly, I don’t have any time to work on this right now, but hopefully that 
> will change in the near future.
>  
> Cheers,
> John
>  
> From: openstack-bounces+john.garbutt=eu.citrix@lists.launchpad.net 
> [mailto:openstack-bounces+john.garbutt=eu.citrix@lists.launchpad.net] On 
> Behalf Of Day, Phil
> Sent: 03 July 2012 3:07
> To: openstack@lists.launchpad.net (openstack@lists.launchpad.net) 
> (openstack@lists.launchpad.net)
> Subject: [Openstack] Setting VM passwords when not running on Xen
>  
> Hi Folks,
>  
> Is anyone else looking at how to support images that need a password rather 
> than an ssh key (windows) on hypervisors that don’t support 
> set_admin_password (e.g. libvirt) ?
>  
> Thanks
> Phil  
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Anyone using instance metadata?

2012-07-03 Thread Scott Moser
On Tue, 3 Jul 2012, Vishvananda Ishaya wrote:

> Metadata is supposed to be user "tags" that are associated with a guest
> that are available via the api. We discussed displaying these tags inside
> the guest as well.

Am I reading it wrong? It seems like it *is* available inside the guest.
At very least with config drive on i know it is.

> The main difference between user-data and metadata is that metadata is
> available to the api, whereas user-data is only available in the guest.

So to avoid confusion, if the intent was tags, I think we should disable
the 'meta.js' file injection, and get over the screams now.

half and half is just confusing.

Thoughts?

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [keystone] Congrats to Adam Young - now in Keystone Core

2012-07-03 Thread Joseph Heck
Congrats to Adam Young - now a member of Keystone Core. For those of you who 
don't know, Adam drove the initial LDAP backend implementation for the new 
keystone architecture, and has been the driving force (technically and code) 
behind getting PKI enabled within Keystone for signed tokens as we step things 
forward.

Thanks Adam, and as a sincere congratulations, we'll be giving you more work 
:-) Seriously, though - great work! We all appreciate it!

-joe

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Setting VM passwords when not running on Xen

2012-07-03 Thread Scott Moser
On Tue, 3 Jul 2012, John Garbutt wrote:

> This seemed to crop up quite a lot in different sessions at the Design 
> summit. I am certainly interested in a standard way to inject information 
> into VMs.
>
> What I think we need is a cross hypervisor two-way guest communication 
> channel that is fairly transparent to the user of that VM (i.e. ideally not a 
> network connection).
>
> If I understand things correctly, we currently have these setup ideas:
>
> * Config Drive (not supported by XenAPI, but not a two way transport)
>
> * Cloud-Init / Metadata service (depends on DHCP(?), and not a 
> two-way transport)

cloud-init does not require dhcp.  It explicitly supports the passing of
network interface definitions into it in Ubuntu 12.04.  Ie, config-drive
with static networking passed in works as it should.

> But to set the password, we ideally want two way communication. We currently 
> have these:
>
> * XenAPI guest plugin (XenServer specific, uses XenStore, but two 
> way, no networking assumed )
>
> * Serial port (used by http://wiki.libvirt.org/page/Qemu_guest_agent 
> but not supported on XenServer)
>
> I like the idea of building a common interface (maybe write out to a known 
> file system location) for the above two hypervisor specific mechanisms. The 
> agent should be able to pick which mechanism works. Then on top of that, we 
> could write a common agent that can be shared for all the different 
> hypervisors. You could also fallback to the metadata service and config drive 
> when no two way communication is available.
>
> I would love this Guest Agent to be an OpenStack project that can then be up 
> streamed into many Linux distribution cloud images.

The only thing I don't like about this is there is so very little need for
long-lived communication between the hypervisor and the guest.  I
personally think that cloud-init gets this right. Its goal is to do what
it needs to do and get out of the way.  It should provide enough to
bootstrap you to a more intelligent "management framework", such as
puppet, chef, or juju; there are literally dozens of these things that work
perfectly well without a hypervisor-specific transport.  They're widely
available, work "right now", and don't care if they're running on bare
metal, xen, kvm, microsoft virtual server... They just assume that you can
connect to them via network, which is a really well defined and tested
thing.

Its 2012, do we really need to design for the case where IP is broken or
not available?

I'm not arguing that "guest agents" are not necessary, I'm arguing that
there is extremely little need for them to be hypervisor aware.

(yes, there is value in the host being able to request the guest to freeze
its filesystem after receiving an API "freeze" request.  However, even
*that* could happen over IP).

>
> Sadly, I don't have any time to work on this right now, but hopefully that 
> will change in the near future.
>
> Cheers,
> John
>
> From: openstack-bounces+john.garbutt=eu.citrix@lists.launchpad.net 
> [mailto:openstack-bounces+john.garbutt=eu.citrix@lists.launchpad.net] On 
> Behalf Of Day, Phil
> Sent: 03 July 2012 3:07
> To: openstack@lists.launchpad.net (openstack@lists.launchpad.net) 
> (openstack@lists.launchpad.net)
> Subject: [Openstack] Setting VM passwords when not running on Xen
>
> Hi Folks,
>
> Is anyone else looking at how to support images that need a password rather 
> than an ssh key (windows) on hypervisors that don't support 
> set_admin_password (e.g. libvirt) ?
>
> Thanks
> Phil
>

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Anyone using instance metadata?

2012-07-03 Thread Vishvananda Ishaya

On Jul 3, 2012, at 11:51 AM, Scott Moser wrote:

> On Tue, 3 Jul 2012, Vishvananda Ishaya wrote:
> 
>> Metadata is supposed to be user "tags" that are associated with a guest
>> that are available via the api. We discussed displaying these tags inside
>> the guest as well.
> 
> Am I reading it wrong? It seems like it *is* available inside the guest.
> At very least with config drive on i know it is.

The config drive was a later addition because we thought it might be useful.
The plan was to add it to the metadata server once we had a /openstack 
available.
> 
>> The main difference between user-data and metadata is that metadata is
>> available to the api, whereas user-data is only available in the guest.
> 
> So to avoid confusion, if the intent was tags, I think we should disable
> the 'meta.js' file injection, and get over the screams now.
> 
> half and half is just confusing.
> 
> Thoughts?

Seems much more useful in metadata server than in config drive, but we should
probably keep the same semantics we have been discussing

i.e. the same values are in both places. Config drive is information as it was 
during
launch and metadata is current information.

Vish



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Anyone using instance metadata?

2012-07-03 Thread Scott Moser
On Tue, 3 Jul 2012, Vishvananda Ishaya wrote:

>
> The config drive was a later addition because we thought it might be useful.
> The plan was to add it to the metadata server once we had a /openstack 
> available.
> >
> >> The main difference between user-data and metadata is that metadata is
> >> available to the api, whereas user-data is only available in the guest.
> >
> > So to avoid confusion, if the intent was tags, I think we should disable
> > the 'meta.js' file injection, and get over the screams now.
> >
> > half and half is just confusing.
> >
> > Thoughts?
>
> Seems much more useful in metadata server than in config drive, but we should
> probably keep the same semantics we have been discussing
>
> i.e. the same values are in both places. Config drive is information as it 
> was during
> launch and metadata is current information.

That does make sense.  The only thing that is confusing is its similarity to
user-data.

putting it in the MD then leads us to the other thread/request
([Openstack] Setting VM passwords when not running on Xen) of having
it be writable from the instance.



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Single global dependency list

2012-07-03 Thread Gabriel Hurley
So, I understand the rationales, and I think of those three options the one 
chosen is the most reasonable. I'm gonna just come out and say I hate this 
whole idea, but let's set this aside for a minute 'cuz I have a genuine 
question:

What will the process be for merging changes to this requirements list? Having 
yet another roadblock to getting your contribution merged is a huge developer 
disincentive. We're really making it exceptionally hard for new contributors, 
and frustrating even for the old hands.

So, with the goal of making the coordinated management of the projects 
possible, what can we do to respect developers?

- Gabriel

> -----Original Message-----
> From: openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net
> [mailto:openstack-
> bounces+gabriel.hurley=nebula@lists.launchpad.net] On Behalf Of
> Monty Taylor
> Sent: Tuesday, July 03, 2012 7:54 AM
> To: Eric Windisch
> Cc: openstack@lists.launchpad.net
> Subject: Re: [Openstack] Single global dependency list
> 
> 
> 
> On 07/03/2012 10:09 AM, Eric Windisch wrote:
> > I have to agree with others that copying files around is not ideal,
> > and I can see the maintenance of this getting more involved as Nova
> > becomes more coupled with common.
> >
> >>> Additionally, we'd make the copy only copy in the versions from
> >>> openstack-common for package that were already listed in the target
> >>> project, so that we wouldn't add django to python-swiftclient, for
> >>> instance.
> >
> > This seems to be a reasonable argument against using git submodules,
> > but I'm afraid we might be losing more than we're gaining here.
> >
> > Just because python-swiftclient depends on openstack-common, and
> > django-using code exists there, doesn't mean that django needs to be
> > installed for python-swiftclient. We might do better to use git
> > submodules and solve the dependency problem, than continuing down
> this
> > copy-everything path.
> 
> We're explicitly NOT doing a copy-everything path. That's the whole point.
> We're only copying the needed depends from the master list.
> 
> git submodules actually make the problem worse, not better.
> 
> > Alternatively, speed up the movement from incubation to library.
> 
> Yeah - that's kind of the reason that bcwaldon was saying this shouldn't be in
> openstack-common. openstack-common wants to be a library, and then
> we're back at not having an appropriate place for the master list.
> 
> Monty
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] best practices for merging common into specific projects

2012-07-03 Thread Gabriel Hurley
The notion that copying code is any protection against APIs that may change is 
a red herring. It's the exact same effect as pegging a version of a dependency 
(whether it's a commit hash or a real version number), except now you have code 
duplication. An unstable upgrade path is an unstable upgrade path, and copying 
the code into the project doesn't alleviate the pain for the project if the 
upstream library decides to change its APIs.

Also, we're really calling something used by more or less all the core projects 
"incubated"? ;-) Seems like it's past the proof-of-concept phase now, at least 
for many parts of common. Questions of API stability are an issue unto 
themselves.

Anyhow, I'm +1 on turning it into a real library of its own, as a couple people 
suggested already.

- Gabriel

> -----Original Message-----
> From: openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net
> [mailto:openstack-
> bounces+gabriel.hurley=nebula@lists.launchpad.net] On Behalf Of
> James E. Blair
> Sent: Tuesday, July 03, 2012 6:56 AM
> To: andrewbog...@gmail.com
> Cc: openstack@lists.launchpad.net
> Subject: Re: [Openstack] best practices for merging common into specific
> projects
> 
> Andrew Bogott  writes:
> 
> > I.  As soon as a patch drops into common, the patch author should
> > submit merge-from-common patches to all affected projects.
> > A.  (This should really be done by a bot, but that's not going to
> > happen overnight)
> 
> Actually, I think with our current level of tooling, we can have Jenkins do 
> this
> (run by Zuul as a post-merge job on openstack-common).
> 
> I very much believe that the long-term goal should be to make openstack-
> common a library -- so nothing I say here should be construed against that.
> But as long as it's in an incubation phase, if doing something like this would
> help move things along, we can certainly implement it, and fairly easily.
> 
> Note that a naive implementation might generate quite a bit of review spam
> if several small changes land to openstack-common (there would then be
> changes*projects number of reviews in gerrit).  We have some code laying
> around which might be useful here that looks for an existing open change
> and amends it; at least that would let us have at most only one open merge-
> from-common-change per-project.
> 
> Okay, that's all on that; I don't want to derail the main conversation, and 
> I'd
> much rather it just be a library if we're close to being ready for that.
> 
> -Jim
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [metering] ceilometer dev docs on readthedocs.org

2012-07-03 Thread Loic Dachary
On 07/03/2012 07:46 PM, Doug Hellmann wrote:
> I've set up the ceilometer development documentation build on RTD at 
> http://ceilometer.readthedocs.org/en/latest/index.html
>
Hi,

I've updated https://launchpad.net/ceilometer to list this link.

Cheers


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] best practices for merging common into specific projects

2012-07-03 Thread Andrew Bogott

On 7/3/12 1:59 PM, Gabriel Hurley wrote:

The notion that copying code is any protection against APIs that may change is 
a red herring. It's the exact same effect as pegging a version of a dependency 
(whether it's a commit hash or a real version number), except now you have code 
duplication. An unstable upgrade path is an unstable upgrade path, and copying 
the code into the project doesn't alleviate the pain for the project if the 
upstream library decides to change its APIs.

Also, we're really calling something used by more or less all the core projects 
"incubated"? ;-) Seems like it's past the proof-of-concept phase now, at least 
for many parts of common. Questions of API stability are an issue unto themselves.

Anyhow, I'm +1 on turning it into a real library of its own, as a couple people 
suggested already.

 - Gabriel


I feel like I should speak up since I started this fight in the first 
place :)


Like most people in this thread, I too long for an end to the weird 
double-commit process that we're using now.  So I'm happy to set aside 
my original Best Practices proposal until there's some consensus 
regarding how much longer we're going to use that process.  Presumably 
opinions about how to handle merge-from-common commits will vary in the 
meantime, but that's something we can live with.


In terms of promoting common into a real project, though, I want to 
raise another option that's guaranteed to be unpopular:  We make 
openstack-common a git-submodule that is automatically checked out 
within the directory tree of each other project.  Then each commit to 
common would need to be gated by the full set of tests on every project 
that includes common.


I haven't thought deeply about the pros and cons of code submodule vs. 
python project, but I want to bring up the option because it's the 
system that I'm the most familiar with, and one that's been discussed a 
bit off and on.


-Andrew

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Nova Pacemaker Resource Agents

2012-07-03 Thread Sébastien Han
Ok thanks! I will have a look :D

We keep in touch ;)


On Tue, Jul 3, 2012 at 4:09 PM, Christian Parpart  wrote:

> On Tue, Jul 3, 2012 at 1:35 PM, Sébastien Han wrote:
>
>> Hi,
>>
>> Managing a resource via LSB only checks the PID. If the PID exists, the
>> service is considered running, but that's not enough because it doesn't mean
>> the service is truly functional. However, OCF agents offer more features like
>> fine-grained monitoring (scripting).
>> I'm not sure I understand your question about Rabbit-MQ, but if the
>> question was: "How do you monitor the connection of each service to
>> Rabbit-MQ?", here is the answer:
>>
>> The RA monitors the connection state (ESTABLISHED) between the service
>> (nova-scheduler, nova-cert, nova-consoleauth) and rabbit-MQ according to
>> the PID of the process.
>>
>> By the way, did you start with the floating IP OCF agent?
>>
>
> Hey,
>
> and yes, I did start already and have an initial version of it, but since I
> have not yet actually put it into Pacemaker anywhere, I have not shared it yet.
> But you may feel free in checking: http://trapni.de/~trapni/FloatingIP
> In case you do improvements to this script, please share :-)
>
> Cheers,
> Christian.
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Single global dependency list

2012-07-03 Thread Monty Taylor
It's a good and valid question and I don't really know. In this case,
I'm merely the pack-horse who was told "global synchronized dependencies
lists!" (not that I'm not the evil person cooking up schemes)

That said - most patches from new contributors don't actually come with
new library dependencies. If they are adding a new depend, I think it's
reasonable to expect it to be slightly harder to get that landed.

I do think that we need an answer to "who approves changes to this
list". Getting stuff merged to openstack-common is often hard because
it's a smaller list of people who work on it. I'd hate to see this be
only PTLs. However, things like "let's upgrade webob" seem to _actually_
need more eyes than it seems like at the time.

meh.

On 07/03/2012 03:12 PM, Gabriel Hurley wrote:
> So, I understand the rationales, and I think of those three options the one 
> chosen is the most reasonable. I'm gonna just come out and say I hate this 
> whole idea, but let's set this aside for a minute 'cuz I have a genuine 
> question:
> 
> What will the process be for merging changes to this requirements list? 
> Having yet another roadblock to getting your contribution merged is a huge 
> developer disincentive. We're really making it exceptionally hard for new 
> contributors, and frustrating even for the old hands.
> 
> So, with the goal of making the coordinated management of the projects 
> possible, what can we do to respect developers?
> 
> - Gabriel
> 
>> -----Original Message-----
>> From: openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net
>> [mailto:openstack-
>> bounces+gabriel.hurley=nebula@lists.launchpad.net] On Behalf Of
>> Monty Taylor
>> Sent: Tuesday, July 03, 2012 7:54 AM
>> To: Eric Windisch
>> Cc: openstack@lists.launchpad.net
>> Subject: Re: [Openstack] Single global dependency list
>>
>>
>>
>> On 07/03/2012 10:09 AM, Eric Windisch wrote:
>>> I have to agree with others that copying files around is not ideal,
>>> and I can see the maintenance of this getting more involved as Nova
>>> becomes more coupled with common.
>>>
> Additionally, we'd make the copy only copy in the versions from
> openstack-common for package that were already listed in the target
> project, so that we wouldn't add django to python-swiftclient, for
> instance.
>>>
>>> This seems to be a reasonable argument against using git submodules,
>>> but I'm afraid we might be losing more than we're gaining here.
>>>
>>> Just because python-swiftclient depends on openstack-common, and
>>> django-using code exists there, doesn't mean that django needs to be
>>> installed for python-swiftclient. We might do better to use git
>>> submodules and solve the dependency problem, than continuing down
>> this
>>> copy-everything path.
>>
>> We're explicitly NOT doing a copy-everything path. That's the whole point.
>> We're only copying the needed depends from the master list.
>>
>> git submodules actually make the problem worse, not better.
>>
>>> Alternatively, speed up the movement from incubation to library.
>>
>> Yeah - that's kind of the reason that bcwaldon was saying this shouldn't be 
>> in
>> openstack-common. openstack-common wants to be a library, and then
>> we're back at not having an appropriate place for the master list.
>>
>> Monty
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
> 
> 
> 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Anyone using instance metadata?

2012-07-03 Thread Joshua Harlow
+1 for getting over screams earlier rather than later

On 7/3/12 11:51 AM, "Scott Moser"  wrote:

On Tue, 3 Jul 2012, Vishvananda Ishaya wrote:

> Metadata is supposed to be user "tags" that are associated with a guest
> that are available via the api. We discussed displaying these tags inside
> the guest as well.

Am I reading it wrong? It seems like it *is* available inside the guest.
At very least with config drive on i know it is.

> The main difference between user-data and metadata is that metadata is
> available to the api, whereas user-data is only available in the guest.

So to avoid confusion, if the intent was tags, I think we should disable
the 'meta.js' file injection, and get over the screams now.

half and half is just confusing.

Thoughts?

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Nova and asynchronous instance launching

2012-07-03 Thread Chris Behrens
There wasn't a blueprint, but you can see the change here:

https://review.openstack.org/#/c/7542/

Bandwidth is updated in a DB table outside of notifications.  Notifications 
just pull the last data received and send it.  With rapid state changes, I 
would expect that bandwidth_usage would mostly not be different in the messages… 
unless a bandwidth update in the background happens to sneak in during the 
middle of the events.

In any case… these state change events are noted by 'compute.instance.update'.  
For actions like 'rebuild', you'll get an 'exists' message when the action 
starts… but then you'll also see some instance.update events as the states 
switch.

At least this is how I understand it.  Besides the code, your best resource for 
information about notification payloads, etc is this:

http://wiki.openstack.org/SystemUsageData
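
If it helps, a bare-bones listener for that stream looks roughly like this
(assuming the default rabbit notifier settings, control_exchange=nova and
notification_topics=notifications, plus kombu; the broker URL and queue name
are placeholders):

# Minimal consumer of nova notifications via kombu; prints the state transitions
# carried by compute.instance.update events.
import json
from kombu import Connection, Exchange, Queue

nova_exchange = Exchange("nova", type="topic", durable=False)
notifications = Queue("usage-listener", exchange=nova_exchange,
                      routing_key="notifications.info")

def on_message(body, message):
    # Messages are JSON; kombu usually hands us a dict already.
    event = body if isinstance(body, dict) else json.loads(body)
    if event.get("event_type") == "compute.instance.update":
        payload = event["payload"]
        print(payload.get("state"), payload.get("old_task_state"),
              payload.get("new_task_state"))
    message.ack()

with Connection("amqp://guest:guest@localhost//") as conn:
    with conn.Consumer(notifications, callbacks=[on_message]):
        while True:
            conn.drain_events()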

- Chris


On Jul 2, 2012, at 4:38 AM, Day, Phil wrote:

> Hi Chris,
> 
> Thanks for the pointer on the new notification on state change stuff, I'd 
> missed that change.
> 
> Is there a blueprint or some such which describes the change ?   
> 
> In particular I'm trying to understand how the bandwidth_usage values fit in 
> here.  It seems that during a VM creation there would normally be a number of 
> fairly rapid state changes, so re-calculating the bandwidth_usage figures 
> might be quite expensive just to log a change in task_state from, say, 
> "Networking" to "Block Device Mapping". I was kind of expecting that to 
> be more part of the "compute.exists" messages than the update.
> 
> Do we have something that catalogues the various notification messages and 
> their payloads ?
> 
> Thanks,
> Phil
> 
> 
> 
> -Original Message-
> From: Chris Behrens [mailto:cbehr...@codestud.com] 
> Sent: 02 July 2012 00:14
> To: Day, Phil
> Cc: Jay Pipes; Huang Zhiteng; openstack@lists.launchpad.net
> Subject: Re: [Openstack] Nova and asynchronous instance launching
> 
> 
> 
> On Jul 1, 2012, at 3:04 PM, "Day, Phil"  wrote:
> 
>> Rather than adding debug statements could we please add additional 
>> notification events (for example a notification event whenever task_state 
>> changes)
>> 
> 
> This has been in trunk for a month or maybe a little longer.
> 
> FYI
> 
> - Chris

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Single global dependency list

2012-07-03 Thread Gabriel Hurley
Agreed on all points, and I know you're not evil, Monty. ;-) (mostly)

You're totally right that this particular case won't stymie new contributors, 
but as we've seen for changes to common--and sometimes even to the client 
libraries or devstack--reviewers are in short supply and getting the change you 
need in one of the "gate" projects merged can often add days of impedance to 
otherwise fruitful work. It's bitten me plenty of times.

So the need for balance is critical. Being able to vet the impact of a change 
on every project consuming it is difficult for either automated systems or 
human reviewers, so we do our best.

Perhaps the simplest answer for now is devising a reasonable set of automated 
gate tests for this "os-requires" module that humans can trust, and working to 
expand the circle of reviewers on these centralized projects that have the 
power to block everyone yet are so easy to ignore...
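
For what it's worth, here is a minimal sketch of what syncing pins from such a master 
list into a project could look like, copying only the versions for packages the project 
already lists (the file names and layout are assumptions, not the actual tooling):

    import re

    def pkg_name(line):
        # 'webob>=1.0.8' -> 'webob'
        return re.split(r'[<>=!]', line.strip(), maxsplit=1)[0].lower()

    def sync_pins(master_path, project_path):
        # Build {package: pinned line} from the master list, then rewrite the
        # project's requirements, touching only packages it already names.
        with open(master_path) as f:
            master = dict((pkg_name(l), l.strip()) for l in f
                          if l.strip() and not l.startswith('#'))
        synced = []
        with open(project_path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith('#'):
                    synced.append(line)
                else:
                    synced.append(master.get(pkg_name(line), line))
        return '\n'.join(synced)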

All the best,

- Gabriel

> -Original Message-
> From: Monty Taylor [mailto:mord...@inaugust.com]
> Sent: Tuesday, July 03, 2012 12:49 PM
> To: Gabriel Hurley
> Cc: Eric Windisch; openstack@lists.launchpad.net
> Subject: Re: [Openstack] Single global dependency list
> 
> It's a good and valid question and I don't really know. In this case, I'm 
> merely
> the pack-horse who was told "global synchronized dependencies lists!" (not
> that I'm not the evil person cooking up schemes)
> 
> That said - most patches from new contributors don't actually come with new
> library dependencies. If they are adding a new depend, I think it's reasonable
> to expect it to be slightly harder to get that landed.
> 
> I do think that we need an answer to "who approves changes to this list".
> Getting stuff merged to openstack-common is often hard because it's a
> smaller list of people who work on it. I'd hate to see this be only PTLs.
> However, things like "let's upgrade webob" seem to _actually_ need more
> eyes than it seems like at the time.
> 
> meh.
> 
> On 07/03/2012 03:12 PM, Gabriel Hurley wrote:
> > So, I understand the rationales, and I think of those three options the one
> chosen is the most reasonable. I'm gonna just come out and say I hate this
> whole idea, but let's set this aside for a minute 'cuz I have a genuine
> question:
> >
> > What will the process be for merging changes to this requirements list?
> Having yet another roadblock to getting your contribution merged is a huge
> developer disincentive. We're really making it exceptionally hard for new
> contributors, and frustrating even for the old hands.
> >
> > So, with the goal of making the coordinated management of the projects
> possible, what can we do to respect developers?
> >
> > - Gabriel
> >
> >> -Original Message-
> >> From: openstack-
> bounces+gabriel.hurley=nebula@lists.launchpad.net
> >> [mailto:openstack-
> >> bounces+gabriel.hurley=nebula@lists.launchpad.net] On Behalf Of
> >> Monty Taylor
> >> Sent: Tuesday, July 03, 2012 7:54 AM
> >> To: Eric Windisch
> >> Cc: openstack@lists.launchpad.net
> >> Subject: Re: [Openstack] Single global dependency list
> >>
> >>
> >>
> >> On 07/03/2012 10:09 AM, Eric Windisch wrote:
> >>> I have to agree with others that copying files around is not ideal,
> >>> and I can see the maintenance of this getting more involved as Nova
> >>> becomes more coupled with common.
> >>>
> > Additionally, we'd make the copy only copy in the versions from
> > openstack-common for package that were already listed in the
> > target project, so that we wouldn't add django to
> > python-swiftclient, for instance.
> >>>
> >>> This seems to be a reasonable argument against using git submodules,
> >>> but I'm afraid we might be losing more than we're gaining here.
> >>>
> >>> Just because python-swiftclient depends on openstack-common, and
> >>> django-using code exists there, doesn't mean that django needs to be
> >>> installed for python-swiftclient. We might do better to use git
> >>> submodules and solve the dependency problem, than continuing down
> >> this
> >>> copy-everything path.
> >>
> >> We're explicitly NOT doing a copy-everything path. That's the whole
> point..
> >> We're only copying the needed depends from the master list.
> >>
> >> git submodules actually make the problem worse, not better.
> >>
> >>> Alternatively, speed up the movement from incubation to library.
> >>
> >> Yeah - that's kind of the reason that bcwaldon was saying this
> >> shouldn't be in openstack-common. openstack-common wants to be a
> >> library, and then we're back at not having an appropriate place for the
> master list.
> >>
> >> Monty
> >>
> >> ___
> >> Mailing list: https://launchpad.net/~openstack
> >> Post to : openstack@lists.launchpad.net
> >> Unsubscribe : https://launchpad.net/~openstack
> >> More help   : https://help.launchpad.net/ListHelp
> >
> >
> >
> 



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

Re: [Openstack] best practices for merging common into specific projects

2012-07-03 Thread Joshua Harlow
I 150% agree that it's a red herring; that's why I wonder what it really offers 
besides a 'façade' and/or the feeling that what you are using isn't a package, 
when in concept it really is. Except now you have lost all the benefits of using 
version numbers and having dependency versions (with history), so the 
'façade' seems pretty weak, imho.

+1 for library that follows the normal packaging methodology

On 7/3/12 11:59 AM, "Gabriel Hurley"  wrote:

The notion that copying code is any protection against APIs that may change is 
a red herring. It's the exact same effect as pegging a version of a dependency 
(whether it's a commit hash or a real version number), except now you have code 
duplication. An unstable upgrade path is an unstable upgrade path, and copying 
the code into the project doesn't alleviate the pain for the project if the 
upstream library decides to change its APIs.

Also, we're really calling something used by more or less all the core projects 
"incubated"? ;-) Seems like it's past the proof-of-concept phase now, at least 
for many parts of common. Questions of API stability are an issue unto 
themselves.

Anyhow, I'm +1 on turning it into a real library of its own, as a couple people 
suggested already.

- Gabriel

> -Original Message-
> From: openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net
> [mailto:openstack-
> bounces+gabriel.hurley=nebula@lists.launchpad.net] On Behalf Of
> James E. Blair
> Sent: Tuesday, July 03, 2012 6:56 AM
> To: andrewbog...@gmail.com
> Cc: openstack@lists.launchpad.net
> Subject: Re: [Openstack] best practices for merging common into specific
> projects
>
> Andrew Bogott  writes:
>
> > I.  As soon as a patch drops into common, the patch author should
> > submit merge-from-common patches to all affected projects.
> > A.  (This should really be done by a bot, but that's not going to
> > happen overnight)
>
> Actually, I think with our current level of tooling, we can have Jenkins do 
> this
> (run by Zuul as a post-merge job on openstack-common).
>
> I very much believe that the long-term goal should be to make openstack-
> common a library -- so nothing I say here should be construed against that.
> But as long as it's in an incubation phase, if doing something like this would
> help move things along, we can certainly implement it, and fairly easily.
>
> Note that a naive implementation might generate quite a bit of review spam
> if several small changes land to openstack-common (there would then be
> changes*projects number of reviews in gerrit).  We have some code lying
> around which might be useful here that looks for an existing open change
> and amends it; at least that would let us have at most only one open merge-
> from-common-change per-project.
>
> Okay, that's all on that; I don't want to derail the main conversation, and 
> I'd
> much rather it just be a library if we're close to being ready for that.
>
> -Jim
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] PKI Token Generation

2012-07-03 Thread Adam Young
The discussion during the Keystone meeting today had a couple of key 
points I'd like to address.



The current token length is 32 characters.  An example:
 e50d580692d644cfb8bec0246aede2c2

With PKI-signed tokens, they will be much longer:

MIICgAYJKoZIhvcNAQcCoIICcTCCAm0CAQExCTAHBgUrDgMCGjCCAWEGCSqGSIb3\
DQEHAaCCAVIEggFOeyJhY2Nlc3MiOiB7InRva2VuIjogeyJleHBpcmVzIjogIjIw\
MTItMDYtMDJUMTQ6NDc6MzRaIiwgImlkIjogInBsYWNlaG9sZGVyIiwgInRlbmFu\
dCI6IHsiZW5hYmxlZCI6IHRydWUsICJkZXNjcmlwdGlvbiI6IG51bGwsICJuYW1l\
IjogInRlbmFudF9uYW1lMSIsICJpZCI6ICJ0ZW5hbnRfaWQxIn19LCAidXNlciI6\
IHsidXNlcm5hbWUiOiAidXNlcl9uYW1lMSIsICJyb2xlc19saW5rcyI6IFsicm9s\
ZTEiLCJyb2xlMiJdLCAiaWQiOiAidXNlcl9pZDEiLCAicm9sZXMiOiBbeyJuYW1l\
IjogInJvbGUxIn0sIHsibmFtZSI6ICJyb2xlMiJ9XSwgIm5hbWUiOiAidXNlcl9u\
YW1lMSJ9fX0NCjGB9zCB9AIBATBUME8xFTATBgNVBAoTDFJlZCBIYXQsIEluYzER\
MA8GA1UEBxMIV2VzdGZvcmQxFjAUBgNVBAgTDU1hc3NhY2h1c2V0dHMxCzAJBgNV\
BAYTAlVTAgEBMAcGBSsOAwIaMA0GCSqGSIb3DQEBAQUABIGAUcweczLJw0SMQhli\
qVSFTWnPKzCnh9qaAxY+29YKFIGYmsX4x+Eh+3D4-xND0gqpdh-CD-Fe7dwsAP4K\
YHCj4W13Z0EyucgXiIbdY+XBaUInYowNmBqMRzOXMO8UGOjYMEgFvRJApb6sS4PN\
wlctpz0dJe2rTELD28EmckkApeU="

However, nothing in the API comments on the token length.  You cannot 
assume that even under the current scheme they will be 32 characters long.
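
To make that concrete, here is a toy comparison (not keystone code) of a uuid-style 
token against a base64-encoded JSON payload; any CMS signature only makes the second 
one longer:

    import base64
    import json
    import uuid

    # A uuid-style token is 32 hex characters, while anything that encodes a
    # signed JSON payload grows with the payload, so clients should treat the
    # token as an opaque, variable-length string.
    uuid_token = uuid.uuid4().hex
    print(len(uuid_token))          # 32

    payload = json.dumps({'access': {'token': {'id': 'placeholder'},
                                     'user': {'roles': ['role1', 'role2']}}})
    blob = base64.b64encode(payload.encode('utf-8'))
    print(len(blob))                # already well over 100 before any signature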


The code for just the token generation has been split from the 
auth_token changes.  You can see it here:


https://github.com/admiyo/keystone/tree/pki-token-generation

It is not up for code review yet as there are still a few changes required.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] best practices for merging common into specific projects

2012-07-03 Thread James E. Blair
Dan Prince  writes:

> If someone wants to split openstack-common changes into patchsets that
> might be Okay in small numbers. If you are merging say 5-10 changes
> from openstack common into all the various openstack projects that
> could translate into a rather large number of reviews (25+) for things
> that have been already reviewed once.  For me using patchsets to keep
> openstack-common in sync just causes thrashing of Jenkins, SmokeStack,
> etc. for things that have already been gated. Seems like an awful
> waste of review/CI time. In my opinion patchsets are the way to go
> with getting things into openstack-common... but not when syncing to
> projects.
>
> Hopefully this situation is short lived however and we start using a
> proper library sooner rather than later.

Indeed, as a real library, it could be incorporated into the devstack
gate tests like the rest of the libraries, which would simultaneously
gate changes to it using all of the projects that reference it.

-Jim

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] coding standards (was: review for implement dhcp agent for quantum)

2012-07-03 Thread Duncan McGreggor
On Tue, Jul 3, 2012 at 5:39 PM, Dan Wendlandt  wrote:
> Lately, Quantum reviewers have been doing their best to enforce python style
> guidelines above and beyond the programmatically enforced pep8 checks.  This
> has happened for many recent reviews, so Mark isn't being singled out here,

My objection isn't to Mark being singled-out -- my objection is to
*anyone* engaging in this level of nit-pickery. This is death to
projects.

This is coming from a guy who's incredibly anal about his code and
coding standards, too. I've been coding in Python for over a decade,
adhering to PEP8 for a considerable period of that time, am a member
of the notoriously picky Twisted project, and even I was surprised by
the flood of review comments -- a high number of which contributed
nothing to the improved readability, maintainability, or functionality
of this code under review.

There were definitely some good points/comments. But there was a lot
in there that you had to wade through before you saw them.

> though admittedly there's a lot of code previously accepted to the codebase
> that wasn't held to such a high bar.  This attention to style guidelines is
> generally a good thing,

I *strongly* disagree.

/Attention/ to style guidelines is a huge boon to open source
projects. But /this/ attention seems beyond the pale, like a good idea
was taken too far and the intent of the guidelines has been lost.

> though I understand that it can be frustrating,
> especially for new developers unfamiliar with the rules (I personally like
> garyk's comment about how he felt dealing with PEP-257, see:
> http://www.youtube.com/watch?v=lYU-SeVofHs)

But that's just it: I'm *not* a new developer! I'm a seasoned Python
hacker, PSF member, obsessive-compulsive neat-freak with code, etc.,
etc. I haven't ever seen this level of zealous syntax pursuit in any
well-functioning open source project.

> As long as reviewer comments are inline with items covered in
> https://github.com/openstack/quantum/blob/master/HACKING.rst,

I may have missed something, but a lot of the comments I saw did not
reference something particular in the HACKING file, nor were many of
these marked as CONSIDER ...

> then I
> consider them fair game for reviews.  If they go beyond that, they should
> generally be expressed as a "CONSIDER".
>
> If we're unhappy with what is or is not enforced,

I'm definitely unhappy with what is being enforced and how.

But even more: if reviews devolve to this level of non-code minutiae,
how long do you think you will have the hearts and minds of
enthusiastic contributing coders?

What about sponsoring organizations? If the review process consumes
multiple days -- not due to anything functional or checkable, but
rather to somewhat arbitrary linguistic preferences -- and prevents
contributors from actually getting their *day* jobs done, don't you
imagine loss of inertia?

This is the sort of thing that encourages private forks and community
abandonment. It might be worth reviewing the comments over the last
few days -- in detail -- and doing so in that light ...

> we should have a
> discussion on the ML and update HACKING.rst correspondingly.

> Sound reasonable?

It does indeed.

d

> Dan
>
>
> On Tue, Jul 3, 2012 at 10:08 PM, Duncan McGreggor 
> wrote:
>>
>> Honestly?
>>
>> This seems a bit overboard to me, Maru. Mark's code is passing pep8 for
>> me.
>>
>> That should be enough.
>>
>> d
>>
>> On Tue, Jul 3, 2012 at 4:49 PM, Maru Newby (Code Review)
>>  wrote:
>> > Maru Newby has posted comments on this change.
>> >
>> > Change subject: implement dhcp agent for quantum
>> > ..
>> >
>> >
>> > Patch Set 4: I would prefer that you didn't merge this
>> >
>> > (33 inline comments)
>> >
>> > Nice cleanup.  As per my last review, minor docstring issues remain.
>> > assert_bridge_exists also requires attention.
>> >
>> > 
>> > File quantum/agent/dhcp_agent.py
>> > Line 65: """The DhcpAgent daemon runloop."""
>> > Unnecessary docstring
>> >
>> > Line 81: """This method polls the Quantum database and returns a
>> > represenation
>> > PEP257 - prescribe rather than describe
>> >
>> > Line 122: """Returns a dict containing the sets of networks that
>> > are new,
>> > PEP257
>> >
>> > Line 134: # We'll first get the networks that have subnets added
>> > or deleted.
>> > Avoid use of personal pronouns in docstrings:
>> >
>> > We'll first get => Get
>> >
>> > Line 142: # Now update with the networks that have had
>> > allocations added/deleted.
>> > Now update => Update
>> >
>> > Line 143: # change candidates are the net_id portion of the
>> > symmetric diff
>> > change => Change
>> >
>> > Line 158: """This method will invoke an action on a DHCP driver
>> > instance."""
>> > PEP257
>> >
>> > Line 177: # We need to manipulate the state so the action
>> > 

[Openstack] [RFC] Add more host checks to the compute filter

2012-07-03 Thread Jim Fehlig
Hi Daniel,

Attached is a patch that implements filtering on (architecture,
hypervisor_type, vm_mode) tuple as was discussed in this previous patch

https://review.openstack.org/#/c/9110/

CC'ing Chuck since he is the author of the ArchFilter patch.

Feedback appreciated before sending this off to gerrit.

Regards,
Jim
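
For readers skimming the attached patch, the core of the new check is roughly the 
following (a simplified sketch using plain dicts; this is not the actual filter code 
from the patch):

    # Rough sketch of the capability check described above, with made-up
    # dict-based host capabilities and instance properties.
    def host_passes(capabilities, instance_properties):
        """Reject hosts whose advertised capabilities conflict with the
        architecture / hypervisor_type / vm_mode requested for the instance."""
        for key in ('architecture', 'hypervisor_type', 'vm_mode'):
            wanted = instance_properties.get(key)
            offered = capabilities.get(key)
            # If the instance (or host) doesn't specify a value, don't filter on it.
            if wanted is not None and offered is not None and wanted != offered:
                return False
        return True

Per the diffstat below, the patch folds an equivalent check into the existing 
ComputeFilter rather than keeping a separate ArchFilter.
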
>From bc96fdf618a2b9426f4c5db59fc087f849ac9873 Mon Sep 17 00:00:00 2001
From: Jim Fehlig 
Date: Mon, 25 Jun 2012 15:54:43 -0600
Subject: [PATCH] Add more host checks to the compute filter

As discussed in a previous version of this patch [1], this change adds
checks in the ComputeFilter to verify hosts can support the
(architecture, hypervisor_type, vm_mode) tuple specified in the instance
properties.

Adding these checks to the compute filter seems consistent with its
definition [2]:

"ComputeFilter - checks that the capabilities provided by the compute service
satisfy the extra specifications, associated with the instance type."

[1] https://review.openstack.org/#/c/9110/
[2] https://github.com/openstack/nova/blob/master/doc/source/devref/filter_scheduler.rst

Change-Id: I1fcd7f9c706184701ca02f7d1672541d26c07f31
---
 nova/compute/api.py|4 +-
 .../versions/108_instance_hypervisor_type.py   |   46 ++
 nova/db/sqlalchemy/models.py   |1 +
 nova/scheduler/filters/arch_filter.py  |   44 --
 nova/scheduler/filters/compute_filter.py   |   56 ++-
 nova/tests/scheduler/test_host_filters.py  |  160 +---
 6 files changed, 211 insertions(+), 100 deletions(-)

diff --git a/nova/compute/api.py b/nova/compute/api.py
index 1e3ebf1..008bdd6 100644
--- a/nova/compute/api.py
+++ b/nova/compute/api.py
@@ -323,7 +323,9 @@ class API(base.Base):
 return value
 
 options_from_image = {'os_type': prop('os_type'),
-  'vm_mode': prop('vm_mode')}
+  'architecture': prop('architecture'),
+  'vm_mode': prop('vm_mode'),
+  'hypervisor_type': prop('hypervisor_type')}
 
 # If instance doesn't have auto_disk_config overridden by request, use
 # whatever the image indicates
diff --git a/nova/db/sqlalchemy/migrate_repo/versions/108_instance_hypervisor_type.py b/nova/db/sqlalchemy/migrate_repo/versions/108_instance_hypervisor_type.py
new file mode 100644
index 0000000..f68a6a4
--- /dev/null
+++ b/nova/db/sqlalchemy/migrate_repo/versions/108_instance_hypervisor_type.py
@@ -0,0 +1,46 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright 2012 OpenStack LLC.
+#
+#Licensed under the Apache License, Version 2.0 (the "License"); you may
+#not use this file except in compliance with the License. You may obtain
+#a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+#Unless required by applicable law or agreed to in writing, software
+#distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#License for the specific language governing permissions and limitations
+#under the License.
+
+from sqlalchemy import Column, Integer, MetaData, String, Table
+
+
+def upgrade(migrate_engine):
+    meta = MetaData()
+    meta.bind = migrate_engine
+
+    # add column:
+    instances = Table('instances', meta,
+                      Column('id', Integer(), primary_key=True, nullable=False)
+                      )
+    hypervisor_type = Column('hypervisor_type',
+                             String(length=255, convert_unicode=False,
+                                    assert_unicode=None, unicode_error=None,
+                                    _warn_on_bytestring=False),
+                             nullable=True)
+
+    instances.create_column(hypervisor_type)
+
+
+def downgrade(migrate_engine):
+    meta = MetaData()
+    meta.bind = migrate_engine
+
+    # drop column:
+    instances = Table('instances', meta,
+                      Column('id', Integer(), primary_key=True, nullable=False)
+                      )
+
+    instances.drop_column('hypervisor_type')
diff --git a/nova/db/sqlalchemy/models.py b/nova/db/sqlalchemy/models.py
index 3359891..30f23e6 100644
--- a/nova/db/sqlalchemy/models.py
+++ b/nova/db/sqlalchemy/models.py
@@ -253,6 +253,7 @@ class Instance(BASE, NovaBase):
 
     os_type = Column(String(255))
     architecture = Column(String(255))
+    hypervisor_type = Column(String(255))
     vm_mode = Column(String(255))
     uuid = Column(String(36))
 
diff --git a/nova/scheduler/filters/arch_filter.py b/nova/scheduler/filters/arch_filter.py
deleted file mode 100644
index 1f11d07..0000000
--- a/nova/scheduler/filters/arch_filter.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# Copyright (c) 2011-2012 OpenStack, LLC
-# Copyright (c) 2012 Canonical Ltd
-# All Rights Reserved.
-#
-#Licensed under the Apache Licen

Re: [Openstack] Anyone using instance metadata?

2012-07-03 Thread Steve Baker
Hi Vish

On Wed, Jul 4, 2012 at 6:28 AM, Vishvananda Ishaya
 wrote:
> Metadata is supposed to be user "tags" that are associated with a guest
> that are available via the api. We discussed displaying these tags inside
> the guest as well.

I've just been looking into what is already in place to implement the
CreateTags, DeleteTags, DescribeTags API and I also came across the
*_instance_metadata compute API.

http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/Using_Tags.html#Using_Tags_API

The tags API can add tags to a number of resource types, but currently
there only seem to be metadata tables for instances and volumes.

Would there be interest in me working on a change to implement
CreateTags, DeleteTags, DescribeTags for instances and volumes?

Later changes could add new metadata tables for the other taggable
resource types.
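
To sketch what that could look like (purely illustrative: the lookup helpers below are 
stubs, and the update_instance_metadata signature is an assumption based on the 
*_instance_metadata API mentioned above):

    def lookup_instance(context, ec2_id):
        """Stub: resolve an EC2-style id ('i-...') to an instance object."""
        raise NotImplementedError

    def update_volume_metadata(context, volume_id, metadata):
        """Stub: volumes would get their own metadata handling, per the thread."""
        raise NotImplementedError

    def create_tags(context, compute_api, resource_ids, tags):
        # CreateTags fans out to per-resource metadata updates.
        for resource_id in resource_ids:
            if resource_id.startswith('i-'):
                instance = lookup_instance(context, resource_id)
                compute_api.update_instance_metadata(context, instance, tags)
            elif resource_id.startswith('vol-'):
                update_volume_metadata(context, resource_id, tags)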

cheers

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] OpenStack "G" naming poll

2012-07-03 Thread Thierry Carrez
Yes, it's that time of the year again... time for us to choose the name
of the next OpenStack release !

This time, it's cities and counties in California (San Diego, CA being the
location of the G design summit).

I set up a poll with the available options (based on our current rules
of naming) at:

https://launchpad.net/~openstack/+poll/g-release-naming

Poll is accessible to all members of ~openstack group in Launchpad, and
ends next Tuesday, 21:30 UTC. Please cast your vote!

I'm aware that a subversive movement wants to try to amend the rules so
that another name option becomes available. Since we can't stop (or
modify) the poll now that it's been launched, if that movement reaches
critical mass, we may organize a second round of polling :)

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] best practices for merging common into specific projects

2012-07-03 Thread Eric Windisch
git submodules don't have to be linked to the head of a branch. Instead of 
double-committing (for every commit), we can do a single commit in each project 
to change the git reference of the submodule. This would not be too far from 
the existing behavior, except that it would minimize the double commits. 

-- 
Eric Windisch


On Tuesday, July 3, 2012 at 15:47 PM, Andrew Bogott wrote:

> On 7/3/12 1:59 PM, Gabriel Hurley wrote:
> > The notion that copying code is any protection against APIs that may change 
> > is a red herring. It's the exact same effect as pegging a version of a 
> > dependency (whether it's a commit hash or a real version number), except 
> > now you have code duplication. An unstable upgrade path is an unstable 
> > upgrade path, and copying the code into the project doesn't alleviate the 
> > pain for the project if the upstream library decides to change its APIs.
> > 
> > Also, we're really calling something used by more or less all the core 
> > projects "incubated"? ;-) Seems like it's past the proof-of-concept phase 
> > now, at least for many parts of common. Questions of API stability are an 
> > issue unto themselves.
> > 
> > Anyhow, I'm +1 on turning it into a real library of its own, as a couple 
> > people suggested already.
> > 
> > - Gabriel
> 
> I feel like I should speak up since I started this fight in the first 
> place :)
> 
> Like most people in this thread, I too long for an end to the weird 
> double-commit process that we're using now. So I'm happy to set aside 
> my original Best Practices proposal until there's some consensus 
> regarding how much longer we're going to use that process. Presumably 
> opinions about how to handle merge-from-common commits will vary in the 
> meantime, but that's something we can live with.
> 
> In terms of promoting common into a real project, though, I want to 
> raise another option that's guaranteed to be unpopular: We make 
> openstack-common a git-submodule that is automatically checked out 
> within the directory tree of each other project. Then each commit to 
> common would need to be gated by the full set of tests on every project 
> that includes common.
> 
> I haven't thought deeply about the pros and cons of code submodule vs. 
> python project, but I want to bring up the option because it's the 
> system that I'm the most familiar with, and one that's been discussed a 
> bit off and on.
> 
> -Andrew
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net (mailto:openstack@lists.launchpad.net)
> Unsubscribe : https://launchpad.net/~openstack
> More help : https://help.launchpad.net/ListHelp
> 
> 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] best practices for merging common into specific projects

2012-07-03 Thread Andrew Bogott

On 7/3/12 5:47 PM, Eric Windisch wrote:
git submodules don't have to be linked to the head of a branch. 
Instead of double-commiting (for every commit), we can do a single 
commit in each project to change the git reference of the submodule. 
This would not be too far from the existing behavior, except that it 
would minimize the double commits.


Oh, I guess I left out an important part of my vision, which is that 
there would be a commit hook in common which moves the submodule 
reference in the parent projects anytime a patch is merged in common.  
So, in short: once a patch passed review for inclusion in common, that 
patch would automatically go live in all other project heads simultaneously.
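
Something like the following (a hypothetical sketch; the project list, submodule path, 
and hook mechanics are all assumptions) is roughly what that hook would need to do:

    import subprocess

    # Hypothetical post-merge hook: once a change merges to openstack-common,
    # advance each consuming project's submodule pointer to the new sha and
    # commit that single bump.
    PROJECTS = ['nova', 'glance', 'keystone']   # illustrative only
    SUBMODULE = 'openstack/common'              # illustrative only

    def bump_submodule(project, new_sha):
        sub_path = '%s/%s' % (project, SUBMODULE)
        subprocess.check_call(['git', '-C', sub_path, 'fetch', 'origin'])
        subprocess.check_call(['git', '-C', sub_path, 'checkout', new_sha])
        subprocess.check_call(['git', '-C', project, 'add', SUBMODULE])
        subprocess.check_call(['git', '-C', project, 'commit', '-m',
                               'Bump openstack-common to %s' % new_sha])

    def on_common_merge(new_sha):
        for project in PROJECTS:
            bump_submodule(project, new_sha)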



--
Eric Windisch

On Tuesday, July 3, 2012 at 15:47 PM, Andrew Bogott wrote:


On 7/3/12 1:59 PM, Gabriel Hurley wrote:
The notion that copying code is any protection against APIs that may 
change is a red herring. It's the exact same effect as pegging a 
version of a dependency (whether it's a commit hash or a real 
version number), except now you have code duplication. An unstable 
upgrade path is an unstable upgrade path, and copying the code into 
the project doesn't alleviate the pain for the project if the 
upstream library decides to change its APIs.


Also, we're really calling something used by more or less all the 
core projects "incubated"? ;-) Seems like it's past the 
proof-of-concept phase now, at least for many parts of common. 
Questions of API stability are an issue unto themselves.


Anyhow, I'm +1 on turning it into a real library of its own, as a 
couple people suggested already.


- Gabriel


I feel like I should speak up since I started this fight in the first
place :)

Like most people in this thread, I too long for an end to the weird
double-commit process that we're using now. So I'm happy to set aside
my original Best Practices proposal until there's some consensus
regarding how much longer we're going to use that process. Presumably
opinions about how to handle merge-from-common commits will vary in the
meantime, but that's something we can live with.

In terms of promoting common into a real project, though, I want to
raise another option that's guaranteed to be unpopular: We make
openstack-common a git-submodule that is automatically checked out
within the directory tree of each other project. Then each commit to
common would need to be gated by the full set of tests on every project
that includes common.

I haven't thought deeply about the pros and cons of code submodule vs.
python project, but I want to bring up the option because it's the
system that I'm the most familiar with, and one that's been discussed a
bit off and on.

-Andrew

___
Mailing list: https://launchpad.net/~openstack 

Post to : openstack@lists.launchpad.net 

Unsubscribe : https://launchpad.net/~openstack 


More help : https://help.launchpad.net/ListHelp




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [CI] Retriggering Jenkins from Gerrit

2012-07-03 Thread James E. Blair
Hi,

As mentioned in the thread "Jenkins and transient failures", we've had
an unusually high number of transient failures in Jenkins lately.  We've
done several things in response to that:

1) Monty identified a problem with our pypi mirror which was the cause
of many of the errors, and corrected it.

2) Monty is continuing to work on the single dependency list which
should allow us to switch to using our local pypi mirror exclusively,
further reducing transient network errors, as well as significantly
speeding up test run time.

3) Several transient errors were caused by failed fetches from Gerrit.
While consulting with the Gerrit authors about tuning, they discovered a
bug in Gerrit where a 5 minute timeout was being interpreted as a 5
millisecond timeout.  I have updated our gerrit configuration to work
around that.

4) Clark Boylan implemented automatic retrying for the git fetches that
we use with Jenkins.


I hope that we'll get to the point where we have almost no transient
network errors when testing, but we know it will never be perfect, so at
the CI meeting we discussed how best to implement retriggering with
Zuul.  Clark added a comment filter that will retrigger Jenkins if you
leave a comment that matches a regex.

We currently run two kinds of jobs in Jenkins, the check job and the
gate job.  The check jobs run immediately when a patchset is uploaded
and vote +/-1.  The gate jobs run on approval, queue up across all
projects and vote +/-2 (if they fail, jobs behind them in the gate
pipeline may need to run again).


To retrigger the initial Jenkins check job, just leave a comment on the
review in Gerrit with only the text "recheck".

To retrigger the Jenkins merge gate job, leave a comment with only the
text "reverify", or if you are a core reviewer, just leave another
"Approved" vote.  (Don't leave a "reverify" comment if the change hasn't
been approved yet; it still won't be merged and will slow Jenkins down.)
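
In other words, the comment filter is just matching review comments against a couple of 
patterns, along these lines (an illustrative sketch, not the actual Zuul configuration):

    import re

    # The real filter lives in the CI config; these patterns just mirror the
    # "recheck" / "reverify" comments described above.
    RECHECK = re.compile(r'^\s*recheck\s*$')
    REVERIFY = re.compile(r'^\s*reverify\s*$')

    def retrigger_for(comment):
        if RECHECK.match(comment):
            return 'check'   # re-run the +/-1 check jobs
        if REVERIFY.match(comment):
            return 'gate'    # re-run the +/-2 gate jobs (approved changes only)
        return None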

-Jim

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack "G" naming poll

2012-07-03 Thread Brian Waldon
TL;DR - Screw the rules, let's call the next release 'Grizzly'

As California is rather lacking in the 'municipality names starting with a G 
that we should use for an OpenStack release' department, I have had to look 
*slightly* outside the ruleset to find a suitable 'G' release name - that name 
being 'Grizzly'. The rules clearly state that a release name must represent a 
city or county near the corresponding design summit and be comprised of a 
single word of ten characters or less - the problem here being that 'Grizzly' 
is actually 'Grizzly Flats.' Having already polled a small subset of the 
community, I feel like there would be enough support for 'Grizzly' to win if it 
were on the ballot. As I'm more interested in selecting a suitable name than 
accurately representing some arbitrary territory, I'd love to either 
permanently amend the rules to make this acceptable or grant an exception in 
this one case. As Thierry said, if this reaches critical mass, we will figure 
out what to do. Otherwise, I'll shut up and deal with 'Gazelle'.

Brian


On Jul 3, 2012, at 3:20 PM, Thierry Carrez wrote:

> Yes, it's that time of the year again... time for us to choose the name
> of the next OpenStack release !
> 
> This time, cities and counties in California (San Diego, CA being the
> location of the G design summit)
> 
> I set up a poll with the available options (based on our current rules
> of naming) at:
> 
> https://launchpad.net/~openstack/+poll/g-release-naming
> 
> Poll is accessible to all members of ~openstack group in Launchpad, and
> ends next Tuesday, 21:30 UTC. Please cast your vote!
> 
> I'm aware that a subversive movement wants to try to amend the rules so
> that another name option becomes available. Since we can't stop (or
> modify) the poll now that it's been launched, if that movement reaches
> critical mass, we may organize a second round of polling :)
> 
> -- 
> Thierry Carrez (ttx)
> Release Manager, OpenStack
> 
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] best practices for merging common into specific projects

2012-07-03 Thread Gabriel Hurley
I’m pretty -1 on triggering changes in other projects from common. That’s gonna 
result in broken code (whether subtle or obvious) no matter how good your 
gates are.

At least as an external library you can freeze a version requirement until such 
time as you see fit to properly update that code and *ensure* compatibility in 
your project.

Or if your project likes ridin’ trunk, then don’t pin a version and you’ve got 
the same effect as an automatic trigger.


-  Gabriel

From: openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net 
[mailto:openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net] On 
Behalf Of Andrew Bogott
Sent: Tuesday, July 03, 2012 3:54 PM
To: Eric Windisch; openstack@lists.launchpad.net
Subject: Re: [Openstack] best practices for merging common into specific 
projects

On 7/3/12 5:47 PM, Eric Windisch wrote:
git submodules don't have to be linked to the head of a branch. Instead of 
double-commiting (for every commit), we can do a single commit in each project 
to change the git reference of the submodule. This would not be too far from 
the existing behavior, except that it would minimize the double commits.

Oh, I guess I left out an important part of my vision, which is that there 
would be a commit hook in common which moves the submodule reference in the 
parent projects anytime a patch is merged in common.  So, in short: once a 
patch passed review for inclusion in common, that patch would automatically go 
live in all other project heads simultaneously.


--
Eric Windisch


On Tuesday, July 3, 2012 at 15:47 PM, Andrew Bogott wrote:
On 7/3/12 1:59 PM, Gabriel Hurley wrote:
The notion that copying code is any protection against APIs that may change is 
a red herring. It's the exact same effect as pegging a version of a dependency 
(whether it's a commit hash or a real version number), except now you have code 
duplication. An unstable upgrade path is an unstable upgrade path, and copying 
the code into the project doesn't alleviate the pain for the project if the 
upstream library decides to change its APIs.

Also, we're really calling something used by more or less all the core projects 
"incubated"? ;-) Seems like it's past the proof-of-concept phase now, at least 
for many parts of common. Questions of API stability are an issue unto 
themselves.

Anyhow, I'm +1 on turning it into a real library of its own, as a couple people 
suggested already.

- Gabriel

I feel like I should speak up since I started this fight in the first
place :)

Like most people in this thread, I too long for an end to the weird
double-commit process that we're using now. So I'm happy to set aside
my original Best Practices proposal until there's some consensus
regarding how much longer we're going to use that process. Presumably
opinions about how to handle merge-from-common commits will vary in the
meantime, but that's something we can live with.

In terms of promoting common into a real project, though, I want to
raise another option that's guaranteed to be unpopular: We make
openstack-common a git-submodule that is automatically checked out
within the directory tree of each other project. Then each commit to
common would need to be gated by the full set of tests on every project
that includes common.

I haven't thought deeply about the pros and cons of code submodule vs.
python project, but I want to bring up the option because it's the
system that I'm the most familiar with, and one that's been discussed a
bit off and on.

-Andrew

___
Mailing list: 
https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : 
https://launchpad.net/~openstack
More help : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack "G" naming poll

2012-07-03 Thread Paul McMillan

On 07/03/2012 04:50 PM, Brian Waldon wrote:

TL;DR - Screw the rules, let's call the next release 'Grizzly'


Do it!




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack "G" naming poll

2012-07-03 Thread Nathanael Burton
+1 for Grizzly
On Jul 3, 2012 8:02 PM, "Brian Waldon"  wrote:

> TL;DR - Screw the rules, let's call the next release 'Grizzly'
>
> As California is rather lacking in the 'municipality names starting with a
> G that we should use for an OpenStack release' department, I have had to
> look *slightly* outside the ruleset to find a suitable 'G' release name -
> that name being 'Grizzly'. The rules clearly state that a release name must
> represent a city or county near the corresponding design summit and be
> comprised of a single word of ten characters or less - the problem here
> being that 'Grizzly' is actually 'Grizzly Flats.' Having already polled a
> small subset of the community, I feel like there would be enough support
> for 'Grizzly' to win if it were on the ballot. As I'm more interested in
> selecting a suitable name than accurately representing some arbitrary
> territory, I'd love to either permanently amend the rules to make this
> acceptable or grant an exception in this one case. As Thierry said, if this
> reaches critical mass, we will figure out what to do. Otherwise, I'll shut
> up and deal with '*Gazelle*'.
>
> Brian
>
>
> On Jul 3, 2012, at 3:20 PM, Thierry Carrez wrote:
>
> Yes, it's that time of the year again... time for us to choose the name
> of the next OpenStack release !
>
> This time, cities and counties in California (San Diego, CA being the
> location of the G design summit)
>
> I set up a poll with the available options (based on our current rules
> of naming) at:
>
> https://launchpad.net/~openstack/+poll/g-release-naming
>
> Poll is accessible to all members of ~openstack group in Launchpad, and
> ends next Tuesday, 21:30 UTC. Please cast your vote!
>
> I'm aware that a subversive movement wants to try to amend the rules so
> that another name option becomes available. Since we can't stop (or
> modify) the poll now that it's been launched, if that movement reaches
> critical mass, we may organize a second round of polling :)
>
> --
> Thierry Carrez (ttx)
> Release Manager, OpenStack
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack "G" naming poll

2012-07-03 Thread Gabriel Hurley
+1 on "close enough to an arbitrary territory and also a great name". ;-)

Also, the Grizzly is the California state animal:  
http://www.statesymbolsusa.org/California/animal_grizzly_bear.html

Food for thought.


-  Gabriel

From: openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net 
[mailto:openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net] On 
Behalf Of Brian Waldon
Sent: Tuesday, July 03, 2012 4:50 PM
To: openstack@lists.launchpad.net (openstack@lists.launchpad.net)
Cc: Thierry Carrez
Subject: Re: [Openstack] OpenStack "G" naming poll

TL;DR - Screw the rules, let's call the next release 'Grizzly'

As California is rather lacking in the 'municipality names starting with a G 
that we should use for an OpenStack release' department, I have had to look 
*slightly* outside the ruleset to find a suitable 'G' release name - that name 
being 'Grizzly'. The rules clearly state that a release name must represent a 
city or county near the corresponding design summit and be comprised of a 
single word of ten characters or less - the problem here being that 'Grizzly' 
is actually 'Grizzly Flats.' Having already polled a small subset of the 
community, I feel like there would be enough support for 'Grizzly' to win if it 
were on the ballot. As I'm more interested in selecting a suitable name than 
accurately representing some arbitrary territory, I'd love to either 
permanently amend the rules to make this acceptable or grant an exception in 
this one case. As Thierry said, if this reaches critical mass, we will figure 
out what to do. Otherwise, I'll shut up and deal with 'Gazelle'.

Brian


On Jul 3, 2012, at 3:20 PM, Thierry Carrez wrote:


Yes, it's that time of the year again... time for us to choose the name
of the next OpenStack release !

This time, cities and counties in California (San Diego, CA being the
location of the G design summit)

I set up a poll with the available options (based on our current rules
of naming) at:

https://launchpad.net/~openstack/+poll/g-release-naming

Poll is accessible to all members of ~openstack group in Launchpad, and
ends next Tuesday, 21:30 UTC. Please cast your vote!

I'm aware that a subversive movement wants to try to amend the rules so
that another name option becomes available. Since we can't stop (or
modify) the poll now that it's been launched, if that movement reaches
critical mass, we may organize a second round of polling :)

--
Thierry Carrez (ttx)
Release Manager, OpenStack


___
Mailing list: https://launchpad.net/~openstack
Post to : 
openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack "G" naming poll

2012-07-03 Thread Monty Taylor
tl;dr - Screw the rules, I agree

Let's at least add it to the poll.

Also - I think we should further amend the rules such that we select the
NEXT release by the summit for the current release. That means two things:

At the g summit, we'd tell everyone where the next summit is:
At the g summit, we'd vote and announce the name of h
We wouldn't have to spend half the cycle saying "h, or whatever" when we
mean "we're going to defer that crazy idea until next time"
I wouldn't have had to use the letter g by itself twice just above here.

On 07/03/2012 06:50 PM, Brian Waldon wrote:
> TL;DR - Screw the rules, let's call the next release 'Grizzly'
> 
> As California is rather lacking in the 'municipality names starting with
> a G that we should use for an OpenStack release' department, I have had
> to look *slightly* outside the ruleset to find a suitable 'G' release
> name - that name being 'Grizzly'. The rules clearly state that a release
> name must represent a city or county near the corresponding design
> summit and be comprised of a single word of ten characters or less - the
> problem here being that 'Grizzly' is actually 'Grizzly Flats.' Having
> already polled a small subset of the community, I feel like there would
> be enough support for 'Grizzly' to win if it were on the ballot. As I'm
> more interested in selecting a suitable name than accurately
> representing some arbitrary territory, I'd love to either permanently
> amend the rules to make this acceptable or grant an exception in this
> one case. As Thierry said, if this reaches critical mass, we will figure
> out what to do. Otherwise, I'll shut up and deal with '/Gazelle/'.
> 
> Brian
> 
> 
> On Jul 3, 2012, at 3:20 PM, Thierry Carrez wrote:
> 
>> Yes, it's that time of the year again... time for us to choose the name
>> of the next OpenStack release !
>>
>> This time, cities and counties in California (San Diego, CA being the
>> location of the G design summit)
>>
>> I set up a poll with the available options (based on our current rules
>> of naming) at:
>>
>> https://launchpad.net/~openstack/+poll/g-release-naming
>>
>> Poll is accessible to all members of ~openstack group in Launchpad, and
>> ends next Tuesday, 21:30 UTC. Please cast your vote!
>>
>> I'm aware that a subversive movement wants to try to amend the rules so
>> that another name option becomes available. Since we can't stop (or
>> modify) the poll now that it's been launched, if that movement reaches
>> critical mass, we may organize a second round of polling :)
>>
>> -- 
>> Thierry Carrez (ttx)
>> Release Manager, OpenStack
>>
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
> 
> 
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
> 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] best practices for merging common into specific projects

2012-07-03 Thread Timothy Daly

I've been following along at home a bit.  I can totally see where it's 
desirable to have well thought out APIs that you can commit to supporting and 
encourage other people to use.  And that you sometimes have expedient code that 
you aren't as comfortable with.  

What I don't get is how using a different mechanism to make libraries makes the 
code any less of a library.  Just make it a library using normal packaging 
methodology, call it version 0.1, and put a README that says "we're not real 
comfortable with this API yet".  That accomplishes the same thing, but it's a 
lot less hairy.


Cheers,
Tim


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack "G" naming poll

2012-07-03 Thread Brian Waldon

On Jul 3, 2012, at 5:21 PM, Monty Taylor wrote:

> tl;dr - Screw the rules, I agree
> 
> Let's at least add it to the poll.
> 
> Also - I think we should further amend the rules such that we select the
> NEXT release by the summit for the current release. That means two things:
> 
> At the g summit, we'd tell everyone where the next summit is:
> At the g summit, we'd vote and announce the name of h
> We wouldn't have to spend half the cycle saying "h, or whatever" when we
> mean "we're going to defer that crazy idea until next time"
> I wouldn't have had to use the letter g by itself twice just above here.

Fantastic idea. 

I haven't been involved in choosing the next location, so I'm not sure how hard 
it would be to choose it that far in advance. Maybe somebody can comment on how 
doable this is?
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] coding standards (was: review for implement dhcp agent for quantum)

2012-07-03 Thread Monty Taylor


On 07/03/2012 05:07 PM, Duncan McGreggor wrote:
> On Tue, Jul 3, 2012 at 5:39 PM, Dan Wendlandt  wrote:
>> Lately, Quantum reviewers have been doing their best to enforce python style
>> guidelines above and beyond the programmatically enforced pep8 checks.  This
>> has happened for many recent reviews, so Mark isn't being singled out here,
> 
> My objection isn't to Mark being singled-out -- my objection is to
> *anyone* engaging in this level of nit-pickery. This is death to
> projects.
> 
> This is coming from a guy who's incredibly anal about his code and
> coding standards, too. I've been coding in Python for over a decade,
> adhering to PEP8 for a considerable period of that time, am a member
> of the notoriously picky Twisted project, and even I was surprised by
> the flood of review comments -- a high number of which contributed
> nothing to the improved readability, maintaiability, or functionality
> of this code under review.
> 
> There were definitely some good points/comments. But there was a lot
> in there that you had to wade through the rest, before you saw them.

I actually am going to need to side with Duncan here, although I'm also
going to slightly disagree -- but hopefully we're all used to that by now.

Duncan is right - nitpickery can be quite deadly, but I think what's
worse is when it's vague, not codified, and not checkable.

With pep8, there is a clear document, and there is a tool that a dev can
use to simply check his code. It's not like pylint, where it's literally
impossible to write code which satisfies all of the warnings - it is
completely possible to write code which is pep8 clean (as we all know,
since we are all required to do so)

But the best part about having a tool (other than my single-minded
devotion to automated gating) isn't that we can use it to gate - it's
that a dev can use it locally to verify things before sending them in
for review... and that's great. The death cycle is really about the lag
time. If you write some stuff, then run pep8 - or even nova's hacking.py
- and it tells you things like "Hey Duncan, I don't like it when you
write methods that have the word "is" in the name" - you may think it's
ridiculous, but the feedback cycle is quick and deterministic and it's
not nearly as frustrating.

I think this is why the extra pedanticness in nova has worked out ok
without killing people. The rules are in HACKING and are clear, but
they're also in tools/hacking.py - and we use them as part of the pep8
gate. Because the code is clean to begin with, they're not very onerous
to deal with... they're also simple and deterministic enough, because
someone had to code a flipping check for them.

Once there is a predictable and quick feedback cycle that can be locally
tested, a developer can train himself to write the code that way in the
first place - and they also don't feel like they're being picked on.

SO - I'd recommend a middle ground here - if you want to add additional
strictness in style checking, do what nova did with hacking.py ... we'll
happily add it to the gate if you like. However... just remember that
we're not here to write python style guidelines, or to write python
programs enforcing those guidelines (not even those of us on the CI
team) ... so if you find yourself spending weeks on a new version of
hacking.py, something has probably gone wrong.
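
As a concrete illustration, here is a toy check in the pep8-plugin style that tools 
like hacking.py build on -- a function that takes a line and returns an offset plus a 
message; the rule and its N-code are invented for the example:

    import re

    TODO_RE = re.compile(r'#\s*TODO(?!\()')

    def check_todo_has_owner(physical_line):
        """N999: TODO comments should name an owner, e.g. '# TODO(duncan): ...'."""
        match = TODO_RE.search(physical_line)
        if match:
            return match.start(), 'N999: use TODO(name) in comments'

Because a check like this runs locally in seconds, the feedback loop stays quick and 
deterministic instead of surfacing as review comments days later.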

>> though admittedly there's a lot of code previously accepted to the codebase
>> that wasn't held to such a high bar.  This attention to style guidelines is
>> generally a good thing,
> 
> I *strongly* disagree.
> 
> /Attention/ to style guidelines is a huge boon to open source
> projects. But /this/ attention seems beyond the pale, like a good idea
> was taken too far and the intent of the guidelines has been lost.
> 
>> though I understand that it can be frustrating,
>> especially for new developers unfamiliar with the rules (I personally like
>> garyk's comment about how he felt dealing with PEP-257, see:
>> http://www.youtube.com/watch?v=lYU-SeVofHs)
> 
> But that's just it: I'm *not* a new developer! I'm a seasoned Python
> hacker, PSF member, obsessive-compulsive neat-freak with code. etc.,
> etc. I haven't ever seen this level of zealous syntax pursuit in any
> well-functioning open source project.
> 
>> As long as reviewer comments are inline with items covered in
>> https://github.com/openstack/quantum/blob/master/HACKING.rst,
> 
> I may have missed something, but a lot of the comments I saw did not
> reference something particular in the HACKING file, nor were many of
> these marked as CONSIDER ...
> 
>> then I
>> consider them fair game for reviews.  If they go beyond that, they should be
>> generally be expressed as a "CONSIDER".
>>
>> If we're unhappy with what is or is not enforced,
> 
> I'm definitely unhappy with what is being enforced and how.
> 
> But even more: if reviews devolve to this level of non-code minutiae,
> how long do you think you will have the hearts and minds of
> enthusiastic contributing coders?
