Re: [openstack-dev] [nova] a question about instance snapshot

2014-03-11 Thread laserjetyang
Live snapshot has some issues on KVM, and I think that is a limitation of the
KVM hypervisor. For VMware, live snapshot is quite mature, so I think starting
with VMware live snapshot is a good approach.


On Tue, Mar 11, 2014 at 1:37 PM, Qin Zhao chaoc...@gmail.com wrote:

 Hi Jay,
 When users move from old tools to new cloud tools, they also hope the new
 tools can inherit some good, well-known capabilities. Assuming users will
 simply change their habits can be dangerous (e.g. removing the Windows Start
 button). Live snapshot is indeed a very useful hypervisor feature, and it has
 been widely used for years (especially on VMware). I think it is not harmful
 to the existing Nova structure and workflow, and it would make it easier for
 more people to adopt OpenStack.


 On Tue, Mar 11, 2014 at 6:15 AM, Jay Pipes jaypi...@gmail.com wrote:

 On Mon, 2014-03-10 at 15:52 -0600, Chris Friesen wrote:
  On 03/10/2014 02:58 PM, Jay Pipes wrote:
   On Mon, 2014-03-10 at 16:30 -0400, Shawn Hartsock wrote:
   While I understand the general argument about pets versus cattle, the
   question is, would you be willing to poke a few holes in the strict
   cattle abstraction for the sake of pragmatism? Few shops are going
   to make the direct transition in one move. Poking a hole in the cattle
   abstraction allowing them to keep a pet VM might be very valuable to
   some shops making a migration.
  
   Poking holes in cattle aside, my experience with shops that prefer the
   pets approach is that they are either:
  
 * Not managing their servers themselves at all and just relying on some
   IT operations organization to manage everything for them, including all
   aspects of backing up their data as well as failing over and balancing
   servers, or,
 * Hiding behind rationales of needing to be secure or needing 100%
   uptime or needing no customer disruption in order to avoid any change
   to the status quo. This is because the incentives inside legacy IT
   application development and IT operations groups are typically towards
   not rocking the boat in order to satisfy unrealistic expectations and
   outdated interface agreements that are forced upon them by management
   chains that haven't crawled out of the waterfall project management funk
   of the 1980s.
  
   Adding pet-based features to Nova would, IMO, just perpetuate the above
   scenarios and incentives.
 
  What about the cases where it's not a preference but rather just the
  inertia of pre-existing systems and procedures?

 You mean what I wrote in the second bullet point above?

  If we can get them in the door with enough support for legacy stuff,
  then they might be easier to convince to do things the cloud way in
  the future.

 Yes, fair point, and that's what Shawn was saying as well. Just noting
 that in my experience, the second part of the above sentence just
 doesn't happen. Once you bring them over and offer them the tools from
 their legacy environment, they aren't interested in changing. :)

  If we stick with the hard-line cattle-only approach we run the risk of
  alienating them completely since redoing everything at once is generally
  not feasible.

 Yes, I understand that. I'm actually fine with including functionality
 like memory snapshotting, but only if under no circumstances does it
 negatively impact the service of compute to other tenants/users and will
 not negatively impact the scaling factor of Nova either.

 I'm just not as optimistic as you are that once legacy IT folks have
 their old tools, they will consider changing their habits. ;)

 Best,
 -jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Qin Zhao

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] a question about instance snapshot

2014-03-11 Thread Bohai (ricky)
 -Original Message-
 From: Jay Pipes [mailto:jaypi...@gmail.com]
 Sent: Tuesday, March 11, 2014 3:20 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] a question about instance snapshot

 On Mon, 2014-03-10 at 12:13 -0400, Shawn Hartsock wrote:
  We have very strong interest in pursing this feature in the VMware
  driver as well. I would like to see the revert instance feature
  implemented at least.
 
  When I used to work in multi-discipline roles involving operations it
  would be common for us to snapshot a vm, run through an upgrade
  process, then revert if something did not upgrade smoothly. This
  ability alone can be exceedingly valuable in long-lived virtual
  machines.
 
  I also have some comments from parties interested in refactoring how
  the VMware drivers handle snapshots but I'm not certain how much that
  plays into this live snapshot discussion.

 I think the reason that there isn't much interest in doing this kind of thing
 is because the worldview that VMs are pets is antithetical to the worldview
 that VMs are cattle, and Nova tends to favor the latter (where DRS/DPM on
 vSphere tends to favor the former).

 There's nothing about your scenario above of being able to revert an instance
 to a particular state that isn't possible with today's Nova.
 Snapshotting an instance, doing an upgrade of software on the instance, and
 then restoring from the snapshot if something went wrong (reverting) is
 already fully possible to do with the regular Nova snapshot and restore
 operations. The only difference is that the live-snapshot stuff would include
 saving the memory view of a VM in addition to its disk state.
 And that, at least in my opinion, is only needed when you are treating VMs
 like pets and not cattle.


Hi Jay,

I read every word in your reply and respect what you said.

But I can't agree that memory snapshot is a feature for pets and not for
cattle.
I think it is a useful feature regardless of how you view the instance.

The world doesn't care how we view the instance; in fact, almost all of the
mainstream hypervisors already support memory snapshots.
If it were a dispensable feature that no users needed, I can't understand why
the hypervisors would provide it without exception.

In the "OpenStack Operations Guide", the "Live snapshots" section has the
following words:
 "To ensure that important services have written their contents to disk (such
 as databases), we recommend you read the documentation for those applications
 to determine what commands to issue to have them sync their contents to disk.
 If you are unsure how to do this, the safest approach is to simply stop these
 running services normally."

This just pushes all of the responsibility for guaranteeing the consistency of
the instance to the end user.
That is not convenient, and I doubt whether it is appropriate.


Best regards to you.
Ricky

 Best,
 -jay

  On Mon, Mar 10, 2014 at 12:04 AM, Bohai (ricky) bo...@huawei.com
 wrote:
   -Original Message-
   From: Alex Xu [mailto:x...@linux.vnet.ibm.com]
   Sent: Sunday, March 09, 2014 10:04 PM
   To: OpenStack Development Mailing List (not for usage questions)
   Subject: Re: [openstack-dev] [nova] a question about instance
   snapshot
  
    Hi Jeremy, the discussion is here:
    http://lists.openstack.org/pipermail/openstack-dev/2013-August/013688.html
  
  
   I have a great interest in the topic too.
   I read the link you provided, and there is a little confusion for me.
    I agree with the security considerations in the discussion, and a memory
    snapshot can't easily be used for cloning an instance.
   
    But I think it is safe to use it for instance revert,
    and reverting an instance to a checkpoint is valuable for the user.
    Why don't we use it for instance revert as a first step?
  
   Best regards to you.
   Ricky
  
   Thanks
   Alex
    On 2014-03-07 10:29, Liuji (Jeremy) wrote:
 Hi, all

 Current OpenStack does not seem to support snapshotting an instance with
 its memory and device states.
 I searched the blueprints and found two related blueprints, listed below,
 but these blueprints failed to get into the branch.

 [1]: https://blueprints.launchpad.net/nova/+spec/live-snapshots
 [2]: https://blueprints.launchpad.net/nova/+spec/live-snapshot-vms

 In blueprint [1], there is a comment:
 "We discussed this pretty extensively on the mailing list and in a design
 summit session. The consensus is that this is not a feature we would like
 to have in nova." --russellb
 But I can't find the discussion mail about it. I hope to learn why we think
 so.
 Without memory snapshot, we can't provide a feature for the user to revert
 an instance to a checkpoint.

 Anyone who knows the history can help me or give me a hint how to find the
 discussion mail?

 I am a newbie for openstack and I apologize if I am missing something very
   

[openstack-dev] Cinder: AttributeError: No such RPC function 'create_export'

2014-03-11 Thread Deepak C Shetty

Hi All,
	I am using devstack with cinder git head @ 
f888e412b0d0fdb0426045a9c55e0be0390f842c


I am seeing the below error while trying to do a cinder migrate for the
glusterfs backend. I don't think it's backend specific, though, as the
failure is in the common RPC layer of code.


http://paste.fedoraproject.org/84189/45169021/

Any pointers to get past this is appreciated.

thanx,
deepak

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder: AttributeError: No such RPC function 'create_export'

2014-03-11 Thread Swapnil Kulkarni
Hi Deepak,

I believe migrate_volume is not implemented in the glusterfs driver, which
causes the above error. I have seen similar errors earlier. I am currently
implementing migrate_volume and testing it; I will push it upstream once it is
successfully tested.

Best Regards,
Swapnil Kulkarni
irc : coolsvap
swapnilkulkarni2...@gmail.com
+91-87960 10622(c)
http://in.linkedin.com/in/coolsvap
*It's better to SHARE*


On Tue, Mar 11, 2014 at 12:53 PM, Deepak C Shetty deepa...@redhat.comwrote:

 Hi All,
 I am using devstack with cinder git head @
 f888e412b0d0fdb0426045a9c55e0be0390f842c

 I am seeing the below error while trying to do cinder migrate for
 glusterfs backend. I don't think its backend specific tho' as the failure
 is in the common rpc layer of code.

 http://paste.fedoraproject.org/84189/45169021/

 Any pointers to get past this is appreciated.

 thanx,
 deepak

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Murano][TOSCA] Murano team contrib. to Heat TOSCA activities

2014-03-11 Thread Thomas Spatzier
Randall Burt randall.b...@rackspace.com wrote on 10/03/2014 19:51:58:

 From: Randall Burt randall.b...@rackspace.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 10/03/2014 19:55
 Subject: Re: [openstack-dev] [Heat][Murano][TOSCA] Murano team
 contrib. to Heat TOSCA activities


 On Mar 10, 2014, at 1:26 PM, Georgy Okrokvertskhov
 gokrokvertsk...@mirantis.com
  wrote:

  Hi,
 
   Thomas and Zane initiated a good discussion about Murano DSL and
  TOSCA initiatives in Heat. I think it will be beneficial for both teams
  to contribute to TOSCA.

 Wasn't TOSCA developing a simplified version in order to converge with
HOT?

Right, we are currently developing a simple profile of TOSCA (basically a
subset and cleanup of the v1.0 full feature set) and a YAML rendering for
that simple profile. We are working on aligning this as best as we can with
HOT, but there will be some differences. E.g. there will be additional
elements in TOSCA YAML that are not present in HOT (at least today). We
will be able to translate the topology portion of TOSCA models into HOT via
the heat-translator that Sahdev has kicked off. Over time, I could see some
of the advanced features we only have in TOSCA YAML today being adopted
by HOT, but let's see what makes sense step by step. I could also well
imagine TOSCA YAML as a layer for a portable format above HOT that gets
bound to plain HOT and/or other constructs (Mistral, Murano ...) during
deployment.

Note that TOSCA will continue to be a combination of a declarative model
(topology) and an imperative model (workflows). The imperative model is
optional, so if you don't require special flows for an application you can
just go with the declarative approach.
The imperative part could be passed to e.g. Mistral (or Murano?), i.e.
TOSCA having the concept of workflows (we call them plans) does not
necessarily mean to pull this into Heat, but to distribute work to
different components in the orchestration program.


   While Mirantis is working on the organizational part for OASIS, I
  would like to understand the current view on the relationship between
  TOSCA and HOT.
   It looks like TOSCA can cover all aspects of declarative
  components (HOT templates) and imperative workflows (which can be
  covered by Murano). What do you think about that?

I'm looking forward to having you join the TOSCA work and contribute your
experience!


 Aren't workflows covered by Mistral? How would this be different
 than including mistral support in Heat?

See my comment above: I don't see the concept of flows in a model like
TOSCA require us pushing workflows into Heat, but we could just push one
portion (declarative model) to Heat and the other part to Mistral and find
a way that the flows can access e.g. stack information in Heat.


   I think the TOSCA format can be used as a description of applications,
  and heat-translator can actually convert TOSCA descriptions to both
  HOT and Murano files which can then be used for actual application
  deployment. Both Heat and Murano workflows can coexist in the
  Orchestration program and cover both declarative template and
  imperative workflow use cases.
 
  --
  Georgy Okrokvertskhov
  Architect,
  OpenStack Platform Products,
  Mirantis
  http://www.mirantis.com
  Tel. +1 650 963 9828
  Mob. +1 650 996 3284
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron]A Question about creating instance with duplication sg_name

2014-03-11 Thread Xurong Yang
Neutron allows creating duplicate security groups with the same name,
so an exception happens when creating an instance with a duplicate security
group name. The code follows:

security_groups = kwargs.get('security_groups', [])
security_group_ids = []

# TODO(arosen) Should optimize more to do direct query for security
# group if len(security_groups) == 1
if len(security_groups):
    search_opts = {'tenant_id': instance['project_id']}
    user_security_groups = neutron.list_security_groups(
        **search_opts).get('security_groups')

    for security_group in security_groups:
        name_match = None
        uuid_match = None
        for user_security_group in user_security_groups:
            if user_security_group['name'] == security_group:
                if name_match:  # --- exception happens here
                    raise exception.NoUniqueMatch(
                        _("Multiple security groups found matching"
                          " '%s'. Use an ID to be more specific.") %
                        security_group)
                name_match = user_security_group['id']


So it may be improper to create an instance using the security group name
parameter. Any response is appreciated.
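
For illustration, here is a minimal sketch (not part of the original report;
the credentials, endpoint and the 'web' group name are placeholders) of how a
caller could avoid the NoUniqueMatch by resolving the security group name to
an ID with python-neutronclient and passing the ID to nova instead:

from neutronclient.v2_0 import client as neutron_client
from novaclient.v1_1 import client as nova_client

auth_url = 'http://localhost:5000/v2.0'
neutron = neutron_client.Client(username='demo', password='secret',
                                tenant_name='demo', auth_url=auth_url)
nova = nova_client.Client('demo', 'secret', 'demo', auth_url)

# Resolve the (possibly duplicated) name to a unique security group ID.
groups = neutron.list_security_groups(name='web')['security_groups']
if len(groups) != 1:
    raise RuntimeError("security group name 'web' is ambiguous or missing")
sg_id = groups[0]['id']

# Boot with the ID instead of the name, so the name can never be ambiguous.
nova.servers.create(name='vm1', image='IMAGE_UUID', flavor='1',
                    security_groups=[sg_id])
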
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-11 Thread Huang Zhiteng
On Tue, Mar 11, 2014 at 11:38 AM, Zhangleiqiang
zhangleiqi...@huawei.com wrote:
 Hi all,



 Besides the soft-delete state for volumes, I think there is a need for
 introducing another fake delete state for volumes which have snapshots.



 Currently OpenStack refuses delete requests for volumes which have
 snapshots. However, we have no method to limit users to only using a
 specific snapshot rather than the original volume, because the original
 volume is always visible to the users.



 So I think we can permit users to delete volumes which have snapshots, and
 mark the volume with a fake delete state. When all of the snapshots of the
 volume have been deleted, the original volume will be removed
 automatically.

Can you describe the actual use case for this?  I'm not sure I follow
why an operator would want to limit the owner of the volume to only using
a specific version of a snapshot.  It sounds like you are adding another
layer.  If that's the case, the problem should be solved at the upper
layer instead of in Cinder.




 Any thoughts? Welcome any advices.







 --

 zhangleiqiang



 Best Regards



 From: John Griffith [mailto:john.griff...@solidfire.com]
 Sent: Thursday, March 06, 2014 8:38 PM


 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume delete
 protection







 On Thu, Mar 6, 2014 at 9:13 PM, John Garbutt j...@johngarbutt.com wrote:

 On 6 March 2014 08:50, zhangyu (AI) zhangy...@huawei.com wrote:
 It seems to be an interesting idea. In fact, a China-based public IaaS,
 QingCloud, has provided a similar feature
 to their virtual servers. Within 2 hours after a virtual server is
 deleted, the server owner can decide whether
 or not to cancel this deletion and re-cycle that deleted virtual server.

 People make mistakes, while such a feature helps in urgent cases. Any idea
 here?

 Nova has soft_delete and restore for servers. That sounds similar?

 John



 -Original Message-
 From: Zhangleiqiang [mailto:zhangleiqi...@huawei.com]
 Sent: Thursday, March 06, 2014 2:19 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Nova][Cinder] Feature about volume delete
 protection

 Hi all,

 Currently OpenStack provides the delete volume function to the user,
 but it seems there is no protection against a user's accidental delete
 operation.

 As we know, the data in a volume may be very important and valuable,
 so it is better to provide a method for the user to avoid accidentally
 deleting a volume.

 For example:
 We can provide a safe delete for the volume.
 The user can specify how long the volume will be delay-deleted (actually
 deleted) when deleting the volume.
 Before the volume is actually deleted, the user can cancel the delete
 operation and get the volume back.
 After the specified time, the volume will be actually deleted by the
 system.

 Any thoughts? Welcome any advices.

 Best regards to you.


 --
 zhangleiqiang

 Best Regards



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 I think a soft-delete for Cinder sounds like a neat idea.  You should file a
 BP that we can target for Juno.



 Thanks,

 John




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Regards
Huang Zhiteng

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-11 Thread laserjetyang
I think the workflow management might be a better place to solve your
problem, if I understood correctly


On Tue, Mar 11, 2014 at 4:29 PM, Huang Zhiteng winsto...@gmail.com wrote:

 On Tue, Mar 11, 2014 at 11:38 AM, Zhangleiqiang
 zhangleiqi...@huawei.com wrote:
  Hi all,
 
 
 
  Besides the soft-delete state for volumes, I think there is need for
  introducing another fake delete state for volumes which have snapshot.
 
 
 
  Current Openstack refuses the delete request for volumes which have
  snapshot. However, we will have no method to limit users to only use the
  specific snapshot other than the original volume ,  because the original
  volume is always visible for the users.
 
 
 
  So I think we can permit users to delete volumes which have snapshots,
 and
  mark the volume as fake delete state. When all of the snapshots of the
  volume have already deleted, the original volume will be removed
  automatically.
 
 Can you describe the actual use case for this?  I not sure I follow
 why operator would like to limit the owner of the volume to only use
 specific version of snapshot.  It sounds like you are adding another
 layer.  If that's the case, the problem should be solved at upper
 layer instead of Cinder.
 
 
 
 
  Any thoughts? Welcome any advices.
 
 
 
 
 
 
 
  --
 
  zhangleiqiang
 
 
 
  Best Regards
 
 
 
  From: John Griffith [mailto:john.griff...@solidfire.com]
  Sent: Thursday, March 06, 2014 8:38 PM
 
 
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume delete
  protection
 
 
 
 
 
 
 
  On Thu, Mar 6, 2014 at 9:13 PM, John Garbutt j...@johngarbutt.com
 wrote:
 
  On 6 March 2014 08:50, zhangyu (AI) zhangy...@huawei.com wrote:
  It seems to be an interesting idea. In fact, a China-based public IaaS,
  QingCloud, has provided a similar feature
  to their virtual servers. Within 2 hours after a virtual server is
  deleted, the server owner can decide whether
  or not to cancel this deletion and re-cycle that deleted virtual
 server.
 
  People make mistakes, while such a feature helps in urgent cases. Any
 idea
  here?
 
  Nova has soft_delete and restore for servers. That sounds similar?
 
  John
 
 
 
  -Original Message-
  From: Zhangleiqiang [mailto:zhangleiqi...@huawei.com]
  Sent: Thursday, March 06, 2014 2:19 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: [openstack-dev] [Nova][Cinder] Feature about volume delete
  protection
 
  Hi all,
 
  Current openstack provide the delete volume function to the user.
  But it seems there is no any protection for user's delete operation
 miss.
 
  As we know the data in the volume maybe very important and valuable.
  So it's better to provide a method to the user to avoid the volume
 delete
  miss.
 
  Such as:
  We can provide a safe delete for the volume.
  User can specify how long the volume will be delay deleted(actually
  deleted) when he deletes the volume.
  Before the volume is actually deleted, user can cancel the delete
  operation and find back the volume.
  After the specified time, the volume will be actually deleted by the
  system.
 
  Any thoughts? Welcome any advices.
 
  Best regards to you.
 
 
  --
  zhangleiqiang
 
  Best Regards
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  I think a soft-delete for Cinder sounds like a neat idea.  You should
 file a
  BP that we can target for Juno.
 
 
 
  Thanks,
 
  John
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Regards
 Huang Zhiteng

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder: AttributeError: No such RPC function 'create_export'

2014-03-11 Thread Deepak C Shetty

Swapnil,
The failure is not in the gluster-specific part of the code.
IIUC it's in the rpc/dispatcher area, so it shouldn't be gluster specific.

On 03/11/2014 01:06 PM, Swapnil Kulkarni wrote:

Hi Deepak,

I believe the migrate_volume is not implemented in glusterfs which
causes above error. I have seen similar errors earlier. Currently
implementing the migrate volume and testing it. I will push it upstream
once successfully tested.

Best Regards,
Swapnil Kulkarni
irc : coolsvap
swapnilkulkarni2...@gmail.com mailto:swapnilkulkarni2...@gmail.com
+91-87960 10622(c)
http://in.linkedin.com/in/coolsvap
*It's better to SHARE*


On Tue, Mar 11, 2014 at 12:53 PM, Deepak C Shetty deepa...@redhat.com
mailto:deepa...@redhat.com wrote:

Hi All,
 I am using devstack with cinder git head @
f888e412b0d0fdb0426045a9c55e0be0390f842c

I am seeing the below error while trying to do cinder migrate for
glusterfs backend. I don't think its backend specific tho' as the
failure is in the common rpc layer of code.

http://paste.fedoraproject.org/84189/45169021/

Any pointers to get past this is appreciated.

thanx,
deepak

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-11 Thread Miguel Angel Ajo Pelayo

I have included on the etherpad the option to write a sudo
plugin (or several) specific to OpenStack.


And this is a test with Shed Skin; I suppose that in more complicated
dependency scenarios it should perform better.

[majopela@redcylon tmp]$ cat <<EOF > test.py
 import sys
 print "hello world"
 sys.exit(0)
 EOF

[majopela@redcylon tmp]$ time python test.py
hello world

real    0m0.016s
user    0m0.015s
sys     0m0.001s


[majopela@redcylon tmp]$ shedskin test.py
*** SHED SKIN Python-to-C++ Compiler 0.9.4 ***
Copyright 2005-2011 Mark Dufour; License GNU GPL version 3 (See LICENSE)

[analyzing types..]
100% 
[generating c++ code..]
[elapsed time: 1.59 seconds]
[majopela@redcylon tmp]$ make 
g++  -O2 -march=native -Wno-deprecated  -I. 
-I/usr/lib/python2.7/site-packages/shedskin/lib /tmp/test.cpp 
/usr/lib/python2.7/site-packages/shedskin/lib/sys.cpp 
/usr/lib/python2.7/site-packages/shedskin/lib/re.cpp 
/usr/lib/python2.7/site-packages/shedskin/lib/builtin.cpp -lgc -lpcre  -o test
[majopela@redcylon tmp]$ time ./test
hello world

real    0m0.003s
user    0m0.000s
sys     0m0.002s


- Original Message -
 We had this same issue with the dhcp-agent. Code was added that paralleled
 the initial sync here: https://review.openstack.org/#/c/28914/ that made
 things a good bit faster if I remember correctly. Might be worth doing
 something similar for the l3-agent.
 
 Best,
 
 Aaron
 
 
 On Mon, Mar 10, 2014 at 5:07 PM, Joe Gordon  joe.gord...@gmail.com  wrote:
 
 
 
 
 
 
 On Mon, Mar 10, 2014 at 3:57 PM, Joe Gordon  joe.gord...@gmail.com  wrote:
 
 
 
 I looked into the python to C options and haven't found anything promising
 yet.
 
 
 I tried Cython and RPython on a trivial hello world app, but got similar
 startup times to standard python.
 
 The one thing that did work was adding a '-S' when starting python.
 
 -S Disable the import of the module site and the site-dependent manipulations
 of sys.path that it entails.
 
 Using 'python -S' didn't appear to help in devstack
 
 #!/usr/bin/python -S
 # PBR Generated from u'console_scripts'
 
 import sys
 import site
 site.addsitedir('/mnt/stack/oslo.rootwrap/oslo/rootwrap')
 
 
 
 
 
 
 I am not sure if we can do that for rootwrap.
 
 
 jogo@dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
 hello world
 
 real 0m0.021s
 user 0m0.000s
 sys 0m0.020s
 jogo@dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
 hello world
 
 real 0m0.021s
 user 0m0.000s
 sys 0m0.020s
 jogo@dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
 hello world
 
 real 0m0.010s
 user 0m0.000s
 sys 0m0.008s
 
 jogo@dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
 hello world
 
 real 0m0.010s
 user 0m0.000s
 sys 0m0.008s
 
 
 
 On Mon, Mar 10, 2014 at 3:26 PM, Miguel Angel Ajo Pelayo 
 mangel...@redhat.com  wrote:
 
 
 Hi Carl, thank you, good idea.
 
 I started reviewing it, but I will do it more carefully tomorrow morning.
 
 
 
 - Original Message -
  All,
  
  I was writing down a summary of all of this and decided to just do it
  on an etherpad. Will you help me capture the big picture there? I'd
  like to come up with some actions this week to try to address at least
  part of the problem before Icehouse releases.
  
  https://etherpad.openstack.org/p/neutron-agent-exec-performance
  
  Carl
  
  On Mon, Mar 10, 2014 at 5:26 AM, Miguel Angel Ajo  majop...@redhat.com 
  wrote:
   Hi Yuri  Stephen, thanks a lot for the clarification.
   
    I'm not familiar with unix domain sockets at a low level, but I wonder
    if authentication could be achieved just with permissions (only users in
    group neutron or group rootwrap accessing this service).
   
   I find it an interesting alternative, to the other proposed solutions,
   but
   there are some challenges associated with this solution, which could make
   it
   more complicated:
   
   1) Access control, file system permission based or token based,
   
   2) stdout/stderr/return encapsulation/forwarding to the caller,
   if we have a simple/fast RPC mechanism we can use, it's a matter
   of serializing a dictionary.
   
   3) client side implementation for 1 + 2.
   
    4) It would need to accept new domain socket connections in green threads
    to avoid spawning a new process to handle a new connection.
   
   The advantages:
   * we wouldn't need to break the only-python-rule.
   * we don't need to rewrite/translate rootwrap.
   
   The disadvantages:
   * it needs changes on the client side (neutron + other projects),
   
   
   Cheers,
   Miguel Ángel.
   
   
   
   On 03/08/2014 07:09 AM, Yuriy Taraday wrote:
   
   On Fri, Mar 7, 2014 at 5:41 PM, Stephen Gran
stephen.g...@theguardian.com mailto: stephen.g...@theguardian.com 
   wrote:
   
   Hi,
   
   Given that Yuriy says explicitly 'unix socket', I dont think he
   means 'MQ' when he says 'RPC'. I think he just means a daemon
   listening on a unix socket for execution requests. This seems like
   a reasonably sensible idea to me.
   
   
   Yes, 
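
To make the daemon idea discussed above a bit more concrete, here is a minimal
sketch (an illustration only, not the actual rootwrap proposal; the socket
path and the JSON-over-unix-socket protocol are assumptions) showing points 1
and 2 from the list above: file-system-permission-based access control and
returning stdout/stderr/returncode as a serialized dictionary.

import json
import os
import socket
import subprocess

SOCK_PATH = '/var/run/rootwrap-exec.sock'  # assumed location

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
if os.path.exists(SOCK_PATH):
    os.unlink(SOCK_PATH)
server.bind(SOCK_PATH)
# Access control: only the socket owner and e.g. a 'rootwrap' group
# may connect.
os.chmod(SOCK_PATH, 0o660)
server.listen(5)

while True:
    conn, _ = server.accept()
    # One newline-terminated JSON request per connection, for example:
    # {"cmd": ["ip", "netns", "list"]}
    request = json.loads(conn.makefile().readline())
    # A real implementation would validate request['cmd'] against the
    # rootwrap filter definitions before running anything.
    proc = subprocess.Popen(request['cmd'], stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, err = proc.communicate()
    # Encapsulate stdout/stderr/returncode as a serialized dictionary.
    reply = {'stdout': out.decode(), 'stderr': err.decode(),
             'returncode': proc.returncode}
    conn.sendall(json.dumps(reply).encode())
    conn.close()
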

Re: [openstack-dev] [Neutron][L3] FFE request: L3 HA VRRP

2014-03-11 Thread Mathieu Rohon
+1

On Mon, Mar 10, 2014 at 10:51 AM, Miguel Angel Ajo majop...@redhat.com wrote:
 +1  (Voting here to work around my previous top-posting).


 On 03/09/2014 01:22 PM, Nir Yechiel wrote:

 +1

 I see it as one of the main current gaps and I believe that this is
 something that can promote Neutron as stable and production ready.
 Based on Édouard's comment below, having this enabled in Icehouse as
 experimental makes a lot of sense to me.

 - Original Message -

 +1

 - Original Message -

 +1

 On Fri, Mar 7, 2014 at 2:42 AM, Édouard Thuleau thul...@gmail.com
 wrote:

 +1
  I thought it should merge as experimental for Icehouse, to let the
  community try it and stabilize it during the Juno release. And for the Juno
  release, we will be able to announce it as stable.

  Furthermore, the next work will be to distribute the l3 stuff at the edge
  (compute), called DVR, but this VRRP work will still be needed for that [1].
  So if we merge L3 HA VRRP as experimental in I to be stable in J, we could
  also propose an experimental DVR solution for J and a stable one for K.

 [1]

 https://docs.google.com/drawings/d/1GGwbLa72n8c2T3SBApKK7uJ6WLTSRa7erTI_3QNj5Bg/edit

 Regards,
 Édouard.


 On Thu, Mar 6, 2014 at 4:27 PM, Sylvain Afchain
 sylvain.afch...@enovance.com wrote:


 Hi all,

 I would like to request a FFE for the following patches of the L3 HA
 VRRP
 BP :

 https://blueprints.launchpad.net/neutron/+spec/l3-high-availability

 https://review.openstack.org/#/c/64553/
 https://review.openstack.org/#/c/66347/
 https://review.openstack.org/#/c/68142/
 https://review.openstack.org/#/c/70700/

 These should be low risk since HA is not enabled by default.
 The server side code has been developed as an extension which
 minimizes
 risk.
 The agent side code introduces a bit more changes but only to filter
 whether to apply the
 new HA behavior.

 I think it's a good idea to have this feature in Icehouse, perhaps
 even
 marked as experimental,
 especially considering the demand for HA in real world deployments.

 Here is a doc to test it :



 https://docs.google.com/document/d/1P2OnlKAGMeSZTbGENNAKOse6B2TRXJ8keUMVvtUCUSM/edit#heading=h.xjip6aepu7ug

 -Sylvain


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]Question about Ephemeral/swap disk and flavor

2014-03-11 Thread ChangBo Guo
2014-03-11 16:28 GMT+08:00 Chen CH Ji jiche...@cn.ibm.com:

 Hi
  We are trying to fix some ephemeral disk related bugs and are also
  considering our private driver implementation for ephemeral disks, so this
  is not a usage question; instead, it's a development question.

   I have some questions related to ephemeral_gb in the flavor and the nova
  boot option --ephemeral, and I failed to find descriptions in the
  existing documents:

1. If I boot a new instance using flavor with ephemeral_gb defined
as 20 G, and no --ephemeral specified, does the new instance have a 20G
additional ephemeral disk after booted?

 ephemeral_gb means the maximum value allowed; you need the --ephemeral option
to provide the actual value, otherwise no ephemeral disk is created.  The swap
option has the same behavior as ephemeral_gb.


2. The flavor has 20 G ephemeral disk defined, but specified two 5 G
ephemeral disk through --ephemeral option in nova boot cmd, what will
happen to the new instance? Get one 20G additional ephemeral? Or two 5 G
ephemeral disks instead?
3. If the answer to question 2 is two 5 G ephemeral disks, what will
happen if resize this instance to a flavor with ephemeral_gb defined as
40 G?
4.   I think the ephemeral_gb in flavor means maximum disk size you
can specified, is this understanding correct?
5.   Also, above question can be applied to swap disk


 Best Regards!

 Kevin (Chen) Ji 纪 晨

 Engineer, zVM Development, CSTL
 Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
 Phone: +86-10-82454158
 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
 Beijing 100193, PRC


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
ChangBo Guo(gcb)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Python 3.3 patches (using six)

2014-03-11 Thread li zheming
+1


2014-03-11 5:54 GMT+08:00 David Kranz dkr...@redhat.com:

 There are a number of patches up for review that make various changes to
 use six apis instead of Python 2 constructs. While I understand the
 desire to get a head start on getting Tempest to run in Python 3, I'm not
 sure it makes sense to do this work piecemeal until we are near ready to
 introduce a py3 gate job. Many contributors will not be aware of what all
 the differences are and py2-isms will creep back in resulting in more
 overall time spent making these changes and reviewing. Also, the core
 review team is busy trying to do stuff important to the icehouse release
 which is barely more than 5 weeks away. IMO we should hold off on various
 kinds of cleanup patches for now.

  -David

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]Question about Ephemeral/swap disk and flavor

2014-03-11 Thread ChangBo Guo
2014-03-11 17:01 GMT+08:00 ChangBo Guo glongw...@gmail.com:




 2014-03-11 16:28 GMT+08:00 Chen CH Ji jiche...@cn.ibm.com:

  Hi
  We are trying to fix some eph disk related bugs and also
 considering our private driver implementation based for eph disk ,so this
 is not a usage question ,instead, it's a development question

  I have some questions related ephemeral_gb in flavor and nova
 boot option --ephemeral, and I failed to find out descriptions in
 existing documents:

1. If I boot a new instance using flavor with ephemeral_gb defined
as 20 G, and no --ephemeral specified, does the new instance have a 20G
additional ephemeral disk after booted?

 ephemeral_gb means the maximum value allowed ,  need  option --ephemeral
 to provide actual value, otherwise, no ephemeral disk is created.  The swap
 option has same behavior with ephemeral_gb

Update: without the --ephemeral option, the size in the flavor will be
used.


2. The flavor has 20 G ephemeral disk defined, but specified two 5 G
ephemeral disk through --ephemeral option in nova boot cmd, what will
happen to the new instance? Get one 20G additional ephemeral? Or two 5 G
ephemeral disks instead?
3. If the answer to question 2 is two 5 G ephemeral disks, what
will happen if resize this instance to a flavor with ephemeral_gb 
 defined
as 40 G?
4.   I think the ephemeral_gb in flavor means maximum disk size you
can specified, is this understanding correct?
5.   Also, above question can be applied to swap disk


 Best Regards!

 Kevin (Chen) Ji 纪 晨

 Engineer, zVM Development, CSTL
 Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
 Phone: +86-10-82454158
 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
 Beijing 100193, PRC


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 ChangBo Guo(gcb)




-- 
ChangBo Guo(gcb)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-11 Thread Zhangleiqiang
 From: Huang Zhiteng [mailto:winsto...@gmail.com]
 Sent: Tuesday, March 11, 2014 4:29 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume delete
 protection
 
 On Tue, Mar 11, 2014 at 11:38 AM, Zhangleiqiang
 zhangleiqi...@huawei.com wrote:
  Hi all,
 
 
 
  Besides the soft-delete state for volumes, I think there is need for
  introducing another fake delete state for volumes which have snapshot.
 
 
 
  Current Openstack refuses the delete request for volumes which have
  snapshot. However, we will have no method to limit users to only use
  the specific snapshot other than the original volume ,  because the
  original volume is always visible for the users.
 
 
 
  So I think we can permit users to delete volumes which have snapshots,
  and mark the volume as fake delete state. When all of the snapshots
  of the volume have already deleted, the original volume will be
  removed automatically.
 
 Can you describe the actual use case for this?  I not sure I follow why 
 operator
 would like to limit the owner of the volume to only use specific version of
 snapshot.  It sounds like you are adding another layer.  If that's the case, 
 the
 problem should be solved at upper layer instead of Cinder.

For example, suppose a tenant's volume quota is five, and the tenant already
has 5 volumes and 1 snapshot. If the data in the base volume of the snapshot
is corrupted, the user will need to create a new volume from the snapshot, but
this operation will fail because there are already 5 volumes, and the original
volume cannot be deleted either.

 
 
 
 
  Any thoughts? Welcome any advices.
 
 
 
 
 
 
 
  --
 
  zhangleiqiang
 
 
 
  Best Regards
 
 
 
  From: John Griffith [mailto:john.griff...@solidfire.com]
  Sent: Thursday, March 06, 2014 8:38 PM
 
 
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume
  delete protection
 
 
 
 
 
 
 
  On Thu, Mar 6, 2014 at 9:13 PM, John Garbutt j...@johngarbutt.com
 wrote:
 
  On 6 March 2014 08:50, zhangyu (AI) zhangy...@huawei.com wrote:
  It seems to be an interesting idea. In fact, a China-based public
  IaaS, QingCloud, has provided a similar feature to their virtual
  servers. Within 2 hours after a virtual server is deleted, the server
  owner can decide whether or not to cancel this deletion and re-cycle
  that deleted virtual server.
 
  People make mistakes, while such a feature helps in urgent cases. Any
  idea here?
 
  Nova has soft_delete and restore for servers. That sounds similar?
 
  John
 
 
 
  -Original Message-
  From: Zhangleiqiang [mailto:zhangleiqi...@huawei.com]
  Sent: Thursday, March 06, 2014 2:19 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: [openstack-dev] [Nova][Cinder] Feature about volume delete
  protection
 
  Hi all,
 
  Current openstack provide the delete volume function to the user.
  But it seems there is no any protection for user's delete operation miss.
 
  As we know the data in the volume maybe very important and valuable.
  So it's better to provide a method to the user to avoid the volume
  delete miss.
 
  Such as:
  We can provide a safe delete for the volume.
  User can specify how long the volume will be delay deleted(actually
  deleted) when he deletes the volume.
  Before the volume is actually deleted, user can cancel the delete
  operation and find back the volume.
  After the specified time, the volume will be actually deleted by the
  system.
 
  Any thoughts? Welcome any advices.
 
  Best regards to you.
 
 
  --
  zhangleiqiang
 
  Best Regards
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  I think a soft-delete for Cinder sounds like a neat idea.  You should
  file a BP that we can target for Juno.
 
 
 
  Thanks,
 
  John
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 --
 Regards
 Huang Zhiteng
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][novaclient] How to get user's credentials for using novaclient API?

2014-03-11 Thread Dmitry Mescheryakov
Hello Nader,

You should use python-keystoneclient [1] to obtain the token. You can
find example usage in helper script [2].

Dmitry

[1] https://github.com/openstack/python-keystoneclient
[2] https://github.com/openstack/savanna/blob/master/tools/get_auth_token.py#L74
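
A rough sketch of what this looks like in code (the credentials and endpoint
below are placeholders, and exact keyword arguments can differ between client
versions):

from keystoneclient.v2_0 import client as ks_client
from novaclient.v1_1 import client as nova_client

auth_url = 'http://localhost:5000/v2.0'

# Authenticate once against Keystone to obtain a scoped token for the user.
keystone = ks_client.Client(username='demo', password='secret',
                            tenant_name='demo', auth_url=auth_url)
token_id = keystone.auth_token  # the token_id needed by the snippet quoted below

# Alternatively, let novaclient authenticate itself from the same credentials.
nova = nova_client.Client('demo', 'secret', 'demo', auth_url)
print([server.name for server in nova.servers.list()])
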



2014-03-10 21:25 GMT+04:00 Nader Lahouti nader.laho...@gmail.com:
 Hi All,


 I have a question regarding using novaclient API.


 I need to use it for getting a list of instances for an user/project.

 In order to do that I tried to use:


 from novaclient.v1_1 import client

 nc = client.Client(username,token_id, project_id, auth_url,insecure,cacert)

 nc.servers.list()


 (However, the comment in the code/documentation says a different thing,
 which as far as I tried didn't work.)

 client = Client(USERNAME, PASSWORD, PROJECT_ID, AUTH_URL)


 so it seems token_id has to be provided.

 I can get the token_id using keystone REST API
 (http://localhost:5000/v2.0/tokens …-d ' the credentials …username and
 password'.

 And my question is: how do I get credentials for a user in the code when
 using Keystone's REST API? Is there any API to get such info?


 Appreciate your comments.


 Regards,

 Nader.



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder: AttributeError: No such RPC function 'create_export'

2014-03-11 Thread Swapnil Kulkarni
Hi Deepak,

When you say you are using glusterfs as the backend, you mean you are using
the glusterfs driver, is that correct?

Best Regards,
Swapnil Kulkarni
irc : coolsvap


On Tue, Mar 11, 2014 at 2:17 PM, Deepak C Shetty deepa...@redhat.comwrote:

 Swapnil,
 The failure is not in the glsuter specific part of code
 IIUC its in the rpc/dispatcher area.. so shouldn't be gluster specific


 On 03/11/2014 01:06 PM, Swapnil Kulkarni wrote:

 Hi Deepak,

 I believe the migrate_volume is not implemented in glusterfs which
 causes above error. I have seen similar errors earlier. Currently
 implementing the migrate volume and testing it. I will push it upstream
 once successfully tested.

 Best Regards,
 Swapnil Kulkarni
 irc : coolsvap
 swapnilkulkarni2...@gmail.com mailto:swapnilkulkarni2...@gmail.com
 +91-87960 10622(c)
 http://in.linkedin.com/in/coolsvap
 *It's better to SHARE*



 On Tue, Mar 11, 2014 at 12:53 PM, Deepak C Shetty deepa...@redhat.com
 mailto:deepa...@redhat.com wrote:

 Hi All,
  I am using devstack with cinder git head @
 f888e412b0d0fdb0426045a9c55e0be0390f842c


 I am seeing the below error while trying to do cinder migrate for
 glusterfs backend. I don't think its backend specific tho' as the
 failure is in the common rpc layer of code.

 http://paste.fedoraproject.org/84189/45169021/

 Any pointers to get past this is appreciated.

 thanx,
 deepak

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] os-cloud-config ssh access to cloud

2014-03-11 Thread Dmitry Mescheryakov
For what it's worth, in Sahara (formerly Savanna) we inject the second
key via userdata. I.e. we add
echo ${public_key} >> ${user_home}/.ssh/authorized_keys

to the other stuff we do in userdata.
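
As an illustration only (the key, home directory, image and flavor values are
made up, not taken from Sahara's code), the same idea expressed through
novaclient, passing the script as userdata at boot time:

from novaclient.v1_1 import client as nova_client

public_key = 'ssh-rsa AAAA... admin-key'   # placeholder second key
user_home = '/home/cloud-user'             # placeholder home directory

# cloud-init runs this script inside the instance on first boot, appending
# the extra public key to authorized_keys.
userdata = ('#!/bin/sh\n'
            'echo %(key)s >> %(home)s/.ssh/authorized_keys\n'
            % {'key': public_key, 'home': user_home})

nova = nova_client.Client('demo', 'secret', 'demo',
                          'http://localhost:5000/v2.0')
nova.servers.create(name='node-1', image='IMAGE_UUID', flavor='1',
                    key_name='primary-key',   # first key, injected by Nova
                    userdata=userdata)        # second key via cloud-init
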

Dmitry

2014-03-10 17:10 GMT+04:00 Jiří Stránský ji...@redhat.com:
 On 7.3.2014 14:50, Imre Farkas wrote:

 On 03/07/2014 10:30 AM, Jiří Stránský wrote:

 Hi,

 there's one step in cloud initialization that is performed over SSH --
 calling keystone-manage pki_setup. Here's the relevant code in
 keystone-init [1], here's a review for moving the functionality to
 os-cloud-config [2].

 The consequence of this is that Tuskar will need passwordless ssh key to
 access overcloud controller. I consider this suboptimal for two reasons:

 * It creates another security concern.

 * AFAIK nova is only capable of injecting one public SSH key into
 authorized_keys on the deployed machine, which means we can either give
 it Tuskar's public key and allow Tuskar to initialize overcloud, or we
 can give it admin's custom public key and allow admin to ssh into
 overcloud, but not both. (Please correct me if i'm mistaken.) We could
 probably work around this issue by having Tuskar do the user key
 injection as part of os-cloud-config, but it's a bit clumsy.


 This goes outside the scope of my current knowledge, i'm hoping someone
 knows the answer: Could pki_setup be run by combining powers of Heat and
 os-config-refresh? (I presume there's some reason why we're not doing
 this already.) I think it would help us a good bit if we could avoid
 having to SSH from Tuskar to overcloud.


 Yeah, it came up a couple times on the list. The current solution is
 because if you have an HA setup, the nodes can't decide on its own,
 which one should run pki_setup.
 Robert described this topic and why it needs to be initialized
 externally during a weekly meeting in last December. Check the topic
 'After heat stack-create init operations (lsmola)':

 http://eavesdrop.openstack.org/meetings/tripleo/2013/tripleo.2013-12-17-19.02.log.html


 Thanks for the reply Imre. Yeah i vaguely remember that meeting :)

 I guess to do HA init we'd need to pick one of the controllers and run the
 init just there (set some parameter that would then be recognized by
 os-refresh-config). I couldn't find if Heat can do something like this on
 it's own, probably we'd need to deploy one of the controller nodes with
 different parameter set, which feels a bit weird.

 Hmm so unless someone comes up with something groundbreaking, we'll probably
 keep doing what we're doing. Having the ability to inject multiple keys to
 instances [1] would help us get rid of the Tuskar vs. admin key issue i
 mentioned in the initial e-mail. We might try asking a fellow Nova developer
 to help us out here.


 Jirka

 [1] https://bugs.launchpad.net/nova/+bug/917850


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-11 Thread Huang Zhiteng
On Tue, Mar 11, 2014 at 5:09 PM, Zhangleiqiang zhangleiqi...@huawei.com wrote:
 From: Huang Zhiteng [mailto:winsto...@gmail.com]
 Sent: Tuesday, March 11, 2014 4:29 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume delete
 protection

 On Tue, Mar 11, 2014 at 11:38 AM, Zhangleiqiang
 zhangleiqi...@huawei.com wrote:
  Hi all,
 
 
 
  Besides the soft-delete state for volumes, I think there is need for
  introducing another fake delete state for volumes which have snapshot.
 
 
 
  Current Openstack refuses the delete request for volumes which have
  snapshot. However, we will have no method to limit users to only use
  the specific snapshot other than the original volume ,  because the
  original volume is always visible for the users.
 
 
 
  So I think we can permit users to delete volumes which have snapshots,
  and mark the volume as fake delete state. When all of the snapshots
  of the volume have already deleted, the original volume will be
  removed automatically.
 
 Can you describe the actual use case for this?  I not sure I follow why 
 operator
 would like to limit the owner of the volume to only use specific version of
 snapshot.  It sounds like you are adding another layer.  If that's the case, 
 the
 problem should be solved at upper layer instead of Cinder.

 For example, one tenant's volume quota is five, and has 5 volumes and 1 
 snapshot already. If the data in base volume of the snapshot is corrupted, 
 the user will need to create a new volume from the snapshot, but this 
 operation will be failed because there are already 5 volumes, and the 
 original volume cannot be deleted, too.

Hmm, how likely is it that the snapshot is still sane when the base volume
is corrupted?  Even if this case is possible, I don't see the 'fake
delete' proposal as the right way to solve the problem.  IMO, it
simply violates what the quota system is designed for and complicates
quota metrics calculation (there would be an actual quota which is only
visible to the admin/operator and an end-user facing quota).  Why not
contact the operator to bump the upper limit of the volume quota instead?
 
 
 
 
  Any thoughts? Welcome any advices.
 
 
 
 
 
 
 
  --
 
  zhangleiqiang
 
 
 
  Best Regards
 
 
 
  From: John Griffith [mailto:john.griff...@solidfire.com]
  Sent: Thursday, March 06, 2014 8:38 PM
 
 
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume
  delete protection
 
 
 
 
 
 
 
  On Thu, Mar 6, 2014 at 9:13 PM, John Garbutt j...@johngarbutt.com
 wrote:
 
  On 6 March 2014 08:50, zhangyu (AI) zhangy...@huawei.com wrote:
  It seems to be an interesting idea. In fact, a China-based public
  IaaS, QingCloud, has provided a similar feature to their virtual
  servers. Within 2 hours after a virtual server is deleted, the server
  owner can decide whether or not to cancel this deletion and re-cycle
  that deleted virtual server.
 
  People make mistakes, while such a feature helps in urgent cases. Any
  idea here?
 
  Nova has soft_delete and restore for servers. That sounds similar?
 
  John
 
 
 
  -Original Message-
  From: Zhangleiqiang [mailto:zhangleiqi...@huawei.com]
  Sent: Thursday, March 06, 2014 2:19 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: [openstack-dev] [Nova][Cinder] Feature about volume delete
  protection
 
  Hi all,
 
  Current openstack provide the delete volume function to the user.
  But it seems there is no any protection for user's delete operation miss.
 
  As we know the data in the volume maybe very important and valuable.
  So it's better to provide a method to the user to avoid the volume
  delete miss.
 
  Such as:
  We can provide a safe delete for the volume.
  User can specify how long the volume will be delay deleted(actually
  deleted) when he deletes the volume.
  Before the volume is actually deleted, user can cancel the delete
  operation and find back the volume.
  After the specified time, the volume will be actually deleted by the
  system.
 
  Any thoughts? Welcome any advices.
 
  Best regards to you.
 
 
  --
  zhangleiqiang
 
  Best Regards
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  I think a soft-delete for Cinder sounds like a neat idea.  You should
  file a BP that we can target for Juno.
 
 
 
  Thanks,
 
  John
 
 
 
 
  

Re: [openstack-dev] [nova][novaclient] How to get user's credentials for using novaclient API?

2014-03-11 Thread ChangBo Guo
Another helpful article about your question :-)

http://www.ibm.com/developerworks/cloud/library/cl-openstack-pythonapis/index.html


2014-03-11 17:15 GMT+08:00 Dmitry Mescheryakov dmescherya...@mirantis.com:

 Hello Nader,

 You should use python-keystoneclient [1] to obtain the token. You can
 find example usage in the helper script [2].

 Dmitry

 [1] https://github.com/openstack/python-keystoneclient
 [2]
 https://github.com/openstack/savanna/blob/master/tools/get_auth_token.py#L74
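
For illustration, a minimal sketch of that approach with the 2014-era clients
and Keystone v2.0; the credentials and URLs are placeholders, not anything
specific to your deployment:

    from keystoneclient.v2_0 import client as ks_client
    from novaclient.v1_1 import client as nova_client

    keystone = ks_client.Client(username='demo', password='secret',
                                tenant_name='demo',
                                auth_url='http://localhost:5000/v2.0')
    token_id = keystone.auth_token   # token usable against other services

    # novaclient can also authenticate itself from the same credentials,
    # as the docstring quoted below suggests:
    nova = nova_client.Client('demo', 'secret', 'demo',
                              'http://localhost:5000/v2.0')
    print(nova.servers.list())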



 2014-03-10 21:25 GMT+04:00 Nader Lahouti nader.laho...@gmail.com:
  Hi All,
 
 
  I have a question regarding using novaclient API.
 
 
  I need to use it for getting a list of instances for a user/project.
 
  In order to do that, I tried to use:
 
 
  from novaclient.v1_1 import client
 
  nc = client.Client(username,token_id, project_id,
 auth_url,insecure,cacert)
 
  nc.servers.list()
 
 
  (However, the comment in the code/documentation says a different thing,
  which, as far as I tried, didn't work:
 
  client = Client(USERNAME, PASSWORD, PROJECT_ID, AUTH_URL)
 
 
  so it seems token_id has to be provided.
 
  I can get the token_id using keystone REST API
  (http://localhost:5000/v2.0/tokens ...-d ' the credentials ...username and
  password'.
 
  And my question is: how to get credentials for a user in the code when
  using Keystone's REST API? Is there any API to get such info?
 
 
  Appreciate your comments.
 
 
  Regards,
 
  Nader.
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
ChangBo Guo(gcb)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-11 Thread Alexis Lee
Roman Podoliaka said on Mon, Mar 10, 2014 at 03:04:06PM -0700:
 So we have homework to do: find out what projects use soft-deletes
 for. I assume that soft-deletes are only used internally and
 aren't exposed to API users, but let's check that. At the same time,
 all new projects should avoid using soft-deletes from the start.

On that homework, deleted records can be interesting when aggregating
over time. For example, nodes where over 100 instances went to ERROR
this month or nodes that hosted flavor FLAVOR this month.
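
To make that concrete, a rough sketch (purely illustrative, not an existing
tool) of such an aggregate run directly against Nova's database; it only
gives meaningful numbers while deleted rows are retained, i.e. with soft
deletion. The DSN is a placeholder and the column names are from Nova's
schema of this era:

    import datetime
    from sqlalchemy import create_engine, MetaData, Table, func, select

    engine = create_engine('mysql://nova:nova@localhost/nova')  # placeholder DSN
    meta = MetaData(bind=engine)
    instances = Table('instances', meta, autoload=True)

    since = datetime.datetime.utcnow() - datetime.timedelta(days=30)
    # ERROR'd instances per host over the last month, *including* rows that
    # have since been soft-deleted; hard deletion would silently drop them.
    query = (select([instances.c.host, func.count()])
             .where(instances.c.vm_state == 'error')
             .where(instances.c.updated_at >= since)
             .group_by(instances.c.host))
    for host, errors in engine.execute(query):
        print(host, errors)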

Operators might have written plugins to test these business concerns, so
although Ceilometer might be a better place to get that information, the
transition should be considered.


Alexis
-- 
Nova Engineer, HP Cloud.  AKA lealexis, lxsli.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] tgt restart fails in Cinder startup start: job failed to start

2014-03-11 Thread Roey Chen
Forwarding the answer to the relevant mailing lists:

---

Hi,

Hope this could help,

I've encountered this issue myself not too long ago on an Ubuntu 12.04 host;
it didn't happen again after tuning the kernel semaphore limit
parameters [1]:

Adding this [2] line to `/etc/sysctl.conf` seems to do the trick.


- Roey


[1] http://paste.openstack.org/show/73086/
[2] http://paste.openstack.org/show/73082/


From: Sukhdev Kapur [mailto:sukhdevka...@gmail.com]
Sent: Monday, March 10, 2014 5:56 PM
To: Dane Leblanc (leblancd)
Cc: OpenStack Development Mailing List (not for usage questions); 
openstack-in...@lists.openstack.org; openstack...@lists.openstack.org
Subject: Re: [OpenStack-Infra] tgt restart fails in Cinder startup start: job 
failed to start

I see the same issue. This issue has crept in during the latest flurry of 
check-ins. I started noticing this issue a day or two before the Icehouse 
Feature Freeze deadline.

I tried restarting tgt as well, but, it does not help.

However, rebooting the VM helps clear it up.

Has anybody else seen it as well? Does anybody have a solution for it?

Thanks
-Sukhdev




On Mon, Mar 10, 2014 at 8:37 AM, Dane Leblanc (leblancd) 
lebla...@cisco.com wrote:
I don't know if anyone can give me some troubleshooting advice with this issue.

I'm seeing an occasional problem whereby after several DevStack 
unstack.sh/stack.sh cycles, the tgt daemon (tgtd) 
fails to start during Cinder startup.  Here's a snippet from the stack.sh log:

2014-03-10 07:09:45.214 | Starting Cinder
2014-03-10 07:09:45.215 | + return 0
2014-03-10 07:09:45.216 | + sudo rm -f /etc/tgt/conf.d/stack.conf
2014-03-10 07:09:45.217 | + _configure_tgt_for_config_d
2014-03-10 07:09:45.218 | + [[ ! -d /etc/tgt/stack.d/ ]]
2014-03-10 07:09:45.219 | + is_ubuntu
2014-03-10 07:09:45.220 | + [[ -z deb ]]
2014-03-10 07:09:45.221 | + '[' deb = deb ']'
2014-03-10 07:09:45.222 | + sudo service tgt restart
2014-03-10 07:09:45.223 | stop: Unknown instance:
2014-03-10 07:09:45.619 | start: Job failed to start
jenkins@neutronpluginsci:~/devstack$ 2014-03-10 07:09:45.621 | + exit_trap
2014-03-10 07:09:45.622 | + local r=1
2014-03-10 07:09:45.623 | ++ jobs -p
2014-03-10 07:09:45.624 | + jobs=
2014-03-10 07:09:45.625 | + [[ -n '' ]]
2014-03-10 07:09:45.626 | + exit 1

If I try to restart tgt manually without success:

jenkins@neutronpluginsci:~$ sudo service tgt restart
stop: Unknown instance:
start: Job failed to start
jenkins@neutronpluginsci:~$ sudo tgtd
librdmacm: couldn't read ABI version.
librdmacm: assuming: 4
CMA: unable to get RDMA device list
(null): iser_ib_init(3263) Failed to initialize RDMA; load kernel modules?
(null): fcoe_init(214) (null)
(null): fcoe_create_interface(171) no interface specified.
jenkins@neutronpluginsci:~$

The config in /etc/tgt is:

jenkins@neutronpluginsci:/etc/tgt$ ls -l
total 8
drwxr-xr-x 2 root root 4096 Mar 10 07:03 conf.d
lrwxrwxrwx 1 root root   30 Mar 10 06:50 stack.d -> 
/opt/stack/data/cinder/volumes
-rw-r--r-- 1 root root   58 Mar 10 07:07 targets.conf
jenkins@neutronpluginsci:/etc/tgt$ cat targets.conf
include /etc/tgt/conf.d/*.conf
include /etc/tgt/stack.d/*
jenkins@neutronpluginsci:/etc/tgt$ ls conf.d
jenkins@neutronpluginsci:/etc/tgt$ ls /opt/stack/data/cinder/volumes
jenkins@neutronpluginsci:/etc/tgt$

I don't know if there's any missing Cinder config in my DevStack localrc files. 
Here's one that I'm using:

MYSQL_PASSWORD=nova
RABBIT_PASSWORD=nova
SERVICE_TOKEN=nova
SERVICE_PASSWORD=nova
ADMIN_PASSWORD=nova
ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-cpu,n-cond,cinder,c-sch,c-api,c-vol,n-sch,n-novnc,n-xvnc,n-cauth,horizon,rabbit
enable_service mysql
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-l3
enable_service q-dhcp
enable_service q-meta
enable_service q-lbaas
enable_service neutron
enable_service tempest
VOLUME_BACKING_FILE_SIZE=2052M
Q_PLUGIN=cisco
declare -a Q_CISCO_PLUGIN_SUBPLUGINS=(openvswitch nexus)
declare -A 
Q_CISCO_PLUGIN_SWITCH_INFO=([10.0.100.243]=admin:Cisco12345:22:neutronpluginsci:1/9)
NCCLIENT_REPO=git://github.com/CiscoSystems/ncclient.git
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-eth1
TENANT_VLAN_RANGE=810:819
ENABLE_TENANT_VLANS=True
API_RATE_LIMIT=False
VERBOSE=True
DEBUG=True
LOGFILE=/opt/stack/logs/stack.sh.log
USE_SCREEN=True
SCREEN_LOGDIR=/opt/stack/logs

Here are links to a log showing another localrc file that I use, and the 
corresponding stack.sh log:

http://128.107.233.28:8080/job/neutron/1390/artifact/vpnaas_console_log.txt
http://128.107.233.28:8080/job/neutron/1390/artifact/vpnaas_stack_sh_log.txt

Does anyone have any advice on how to debug this, or recover from this (beyond 
rebooting the node)? Or am I missing any Cinder config?

Thanks in advance for any help on this!!!
Dane



___
OpenStack-Infra mailing list

Re: [openstack-dev] [heat]Policy on upgades required config changes

2014-03-11 Thread Steven Hardy
Hi Keith  Clint,

On Tue, Mar 11, 2014 at 05:05:21AM +, Keith Bray wrote:
 I want to echo Clint's responses... We do run close to Heat master here at
 Rackspace, and we'd be happy to set up a non-voting job to notify when a
 review would break Heat on our cloud if that would be beneficial.  Some of
 the breaks we have seen have been things that simply weren't caught in
 code review (a human intensive effort), were specific to the way we
 configure Heat for large-scale cloud use, applicable to the entire Heat
 project, and not necessarily service provider specific.

I appreciate the feedback and I've certainly learned something during
this process and will endeavor to provide uniformly backwards compatible
changes in future.  I certainly agree we can do things better next time :)

Hopefully you can appreciate that the auth related features I've been
working on have been a large and difficult undertaking, and that once the
transitional pain has passed they will bring considerable benefits for both
users and deployers.

One frustration I have is the lack of review feedback for most of the
instance-users and v3 keystone work (except for a small and dedicated
subset of the heat-core team, thanks!).  So my feedback to you is: if you're
running close to master, we really really need your help during the review
process, to avoid post-merge stress for everyone :)

Re gate CI - it sounds like a great idea, voting and non-voting feedback is
hugely valuable in addition to human reviewer feedback, so hopefully we can
work towards getting such tests in place.

Anyway, apologies again for any inconvenience, hopefully all is working OK
now with the fallback patch I provided.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron]A Question about creating instance with duplication sg_name

2014-03-11 Thread Lingxian Kong
Hi Xurong:

If Neutron is used for security-group functionality, do not come back to
Nova for that. The security-group support in Nova is just for backward
compatibility, IMHO.


2014-03-11 16:20 GMT+08:00 Xurong Yang ido...@gmail.com:

 It's allowed to create duplicate security groups with the same name,
 so an exception happens when creating an instance with a duplicated sg name.
 code following:
 
 security_groups = kwargs.get('security_groups', [])
 security_group_ids = []

 # TODO(arosen) Should optimize more to do direct query for security
 # group if len(security_groups) == 1
 if len(security_groups):
 search_opts = {'tenant_id': instance['project_id']}
 user_security_groups = neutron.list_security_groups(
 **search_opts).get('security_groups')

 for security_group in security_groups:
 name_match = None
 uuid_match = None
 for user_security_group in user_security_groups:
 if user_security_group['name'] == security_group:
 if name_match:---exception happened here
 raise exception.NoUniqueMatch(
 _(Multiple security groups found matching
'%s'. Use an ID to be more specific.) %
security_group)

 name_match = user_security_group['id']
   

 So it may be improper to create an instance with the sg name parameter.
 Any response is appreciated.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
*---*
*Lingxian Kong*
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com; anlin.k...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder: AttributeError: No such RPC function 'create_export'

2014-03-11 Thread Deepak C Shetty
I think you are referring to backend-assisted migration; I am referring 
to the generic one (with the support put forth by Avishay of IBM)


The generic flow of migration should work as long as the backend provides 
support for

1) create volume
2) attach/detach volume

It may not be ideal, but it should work, using 'dd' to copy the data 
between the source and destination volumes. I am currently looking at this 
generic migration only.
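
For clarity, a very rough illustration (not Cinder's actual code) of the block
copy that the generic path boils down to once both volumes are attached on the
same host; the device paths are placeholders:

    import subprocess

    src_dev = '/dev/mapper/src-volume'   # placeholder: attached source volume
    dst_dev = '/dev/mapper/dst-volume'   # placeholder: attached new volume
    subprocess.check_call(['dd', 'if=%s' % src_dev, 'of=%s' % dst_dev,
                           'bs=1M', 'conv=fsync'])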


On 03/11/2014 02:50 PM, Swapnil Kulkarni wrote:

Hi Deepak,

When you say you are using glusterfs as the backend, you are using the glusterfs
driver, is that correct?

Best Regards,
Swapnil Kulkarni
irc : coolsvap


On Tue, Mar 11, 2014 at 2:17 PM, Deepak C Shetty deepa...@redhat.com
wrote:

Swapnil,
 The failure is not in the gluster-specific part of the code;
IIUC it's in the rpc/dispatcher area, so it shouldn't be gluster specific.


On 03/11/2014 01:06 PM, Swapnil Kulkarni wrote:

Hi Deepak,

I believe migrate_volume is not implemented in the glusterfs driver, which
causes the above error. I have seen similar errors earlier. I am currently
implementing migrate_volume and testing it. I will push it
upstream
once it is successfully tested.

Best Regards,
Swapnil Kulkarni
irc : coolsvap
swapnilkulkarni2...@gmail.com
+91-87960 10622(c)
http://in.linkedin.com/in/coolsvap
*It's better to SHARE*



On Tue, Mar 11, 2014 at 12:53 PM, Deepak C Shetty
deepa...@redhat.com wrote:

 Hi All,
  I am using devstack with cinder git head @
 f888e412b0d0fdb0426045a9c55e0be0390f842c


 I am seeing the below error while trying to do cinder
migrate for
 glusterfs backend. I don't think its backend specific tho'
as the
 failure is in the common rpc layer of code.

http://paste.fedoraproject.org/84189/45169021/


 Any pointers to get past this is appreciated.

 thanx,
 deepak

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat]Policy on upgades required config changes

2014-03-11 Thread Sean Dague
On 03/04/2014 12:39 PM, Steven Hardy wrote:
 Hi all,
 
 As some of you know, I've been working on the instance-users blueprint[1].
 
 This blueprint implementation requires three new items to be added to the
 heat.conf, or some resources (those which create keystone users) will not
 work:
 
 https://review.openstack.org/#/c/73978/
 https://review.openstack.org/#/c/76035/
 
 So on upgrade, the deployer must create a keystone domain and domain-admin
 user, add the details to heat.conf, as already been done in devstack[2].
 
 The changes requried for this to work have already landed in devstack, but
 it was discussed to day and Clint suggested this may be unacceptable
 upgrade behavior - I'm not sure so looking for guidance/comments.
 
 My plan was/is:
 - Make devstack work
 - Talk to tripleo folks to assist in any transition (what prompted this
   discussion)
 - Document the upgrade requirements in the Icehouse release notes so the
   wider community can upgrade from Havana.
 - Try to give a heads-up to those maintaining downstream heat deployment
   tools (e.g stackforge/puppet-heat) that some tweaks will be required for
   Icehouse.
 
 However some have suggested there may be an openstack-wide policy which
 requires peoples old config files to continue working indefinitely on
 upgrade between versions - is this right?  If so where is it documented?

This is basically enforced in code in grenade; the language for this
actually got lost in the project requirements discussion in the TC. I'll
bring that back in the post-graduation requirements discussion we're
having again.

The issue is - Heat still doesn't materially participate in grenade.
Heat is substantially far behind the other integrated projects in its
integration with the upstream testing. Only monday did we finally start
gating on a real unit of work for Heat (the heat-slow jobs). If I was
letter grading projects right now on upstream testing I'd give Nova an
A, Neutron a C (still no full run, no working grenade), and Heat a D.

So in short. Heat did the wrong thing. You should be able to use your
configs from the last release. This is what all the mature projects in
OpenStack do. In the event that you *have* to make a change like that it
requires an UpgradeImpact tag in the commit. And those should be limited
really aggressively. This is the whole point of the deprecation cycle.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-11 Thread Zhangleiqiang
 From: Huang Zhiteng [mailto:winsto...@gmail.com]
 Sent: Tuesday, March 11, 2014 5:37 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume delete
 protection
 
 On Tue, Mar 11, 2014 at 5:09 PM, Zhangleiqiang zhangleiqi...@huawei.com
 wrote:
  From: Huang Zhiteng [mailto:winsto...@gmail.com]
  Sent: Tuesday, March 11, 2014 4:29 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume
  delete protection
 
  On Tue, Mar 11, 2014 at 11:38 AM, Zhangleiqiang
  zhangleiqi...@huawei.com wrote:
   Hi all,
  
  
  
   Besides the soft-delete state for volumes, I think there is a need
   for introducing another "fake delete" state for volumes which have
  snapshots.
  
  
  
   Currently, OpenStack refuses the delete request for volumes which have
   snapshots. However, we have no way to limit users to only
   use the specific snapshot rather than the original volume, because
   the original volume is always visible to the users.
  
  
  
   So I think we can permit users to delete volumes which have
   snapshots, and mark the volume as being in a "fake delete" state. When all of
   the snapshots of the volume have been deleted, the original
   volume will be removed automatically.
  
  Can you describe the actual use case for this?  I'm not sure I follow
  why an operator would like to limit the owner of the volume to only use
  a specific snapshot.  It sounds like you are adding another
  layer.  If that's the case, the problem should be solved at an upper layer
 instead of in Cinder.
 
  For example, one tenant's volume quota is five, and it already has 5 volumes and 1
 snapshot. If the data in the base volume of the snapshot is corrupted, the
 user will need to create a new volume from the snapshot, but this operation
 will fail because there are already 5 volumes, and the original volume
 cannot be deleted either.
 
 Hmm, how likely is it that the snapshot is still sane when the base volume is
 corrupted?  

If the snapshot of the volume is COW, then the snapshot will still be sane when the 
base volume is corrupted.

 Even if this case is possible, I don't see the 'fake delete' proposal
 as the right way to solve the problem.  IMO, it simply violates what the quota
 system is designed for and complicates quota metrics calculation (there would
 be an actual quota which is only visible to the admin/operator and a separate
 end-user-facing quota).  Why not contact the operator to bump the upper limit of
 the volume quota instead?

I had some misunderstanding about Cinder's snapshots.
Fake delete is common if there is a chained-snapshot or snapshot-tree 
mechanism. However, in Cinder only a volume can have a snapshot taken; a snapshot 
cannot be snapshotted again.

I agree with your "bump the upper limit" method.

Thanks for your explanation.


  
  
  
  
   Any thoughts? Any advice is welcome.
  
  
  
  
  
  
  
   --
  
   zhangleiqiang
  
  
  
   Best Regards
  
  
  
   From: John Griffith [mailto:john.griff...@solidfire.com]
   Sent: Thursday, March 06, 2014 8:38 PM
  
  
   To: OpenStack Development Mailing List (not for usage questions)
   Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume
   delete protection
  
  
  
  
  
  
  
   On Thu, Mar 6, 2014 at 9:13 PM, John Garbutt j...@johngarbutt.com
  wrote:
  
   On 6 March 2014 08:50, zhangyu (AI) zhangy...@huawei.com wrote:
   It seems to be an interesting idea. In fact, a China-based public
   IaaS, QingCloud, has provided a similar feature for their virtual
   servers. Within 2 hours after a virtual server is deleted, the
   server owner can decide whether or not to cancel this deletion and
   recycle that deleted virtual server.
  
   People make mistakes, and such a feature helps in urgent cases.
   Any ideas here?
  
   Nova has soft_delete and restore for servers. That sounds similar?
  
   John
  
  
  
   -Original Message-
   From: Zhangleiqiang [mailto:zhangleiqi...@huawei.com]
   Sent: Thursday, March 06, 2014 2:19 PM
   To: OpenStack Development Mailing List (not for usage questions)
   Subject: [openstack-dev] [Nova][Cinder] Feature about volume
   delete protection
  
   Hi all,
  
   Currently, OpenStack provides a delete-volume function to the user,
   but there seems to be no protection against an accidental 
   delete.
  
   As we know, the data in a volume may be very important and valuable,
   so it's better to provide a method for the user to avoid deleting
   a volume by mistake.
  
   Such as:
   We can provide a "safe delete" for the volume.
   The user can specify how long the volume will be delay-deleted
   (i.e. actually
   deleted) when he deletes the volume.
   Before the volume is actually deleted, the user can cancel the delete
   operation and get the volume back.
   After the specified time, the volume will actually be deleted by
   the system.
  
   Any thoughts? Any advice is welcome.
  
   Best regards to you.
 

Re: [openstack-dev] [heat]Policy on upgades required config changes

2014-03-11 Thread Steven Hardy
On Tue, Mar 11, 2014 at 07:04:32AM -0400, Sean Dague wrote:
 On 03/04/2014 12:39 PM, Steven Hardy wrote:
  Hi all,
  
  As some of you know, I've been working on the instance-users blueprint[1].
  
  This blueprint implementation requires three new items to be added to the
  heat.conf, or some resources (those which create keystone users) will not
  work:
  
  https://review.openstack.org/#/c/73978/
  https://review.openstack.org/#/c/76035/
  
  So on upgrade, the deployer must create a keystone domain and domain-admin
  user, add the details to heat.conf, as already been done in devstack[2].
  
  The changes requried for this to work have already landed in devstack, but
  it was discussed to day and Clint suggested this may be unacceptable
  upgrade behavior - I'm not sure so looking for guidance/comments.
  
  My plan was/is:
  - Make devstack work
  - Talk to tripleo folks to assist in any transition (what prompted this
discussion)
  - Document the upgrade requirements in the Icehouse release notes so the
wider community can upgrade from Havana.
  - Try to give a heads-up to those maintaining downstream heat deployment
tools (e.g stackforge/puppet-heat) that some tweaks will be required for
Icehouse.
  
  However some have suggested there may be an openstack-wide policy which
  requires peoples old config files to continue working indefinitely on
  upgrade between versions - is this right?  If so where is it documented?
 
 This is basically enforced in code in grenade, the language for this
 actually got lost in the project requirements discussion in the TC, I'll
 bring that back in the post graduation requirements discussion we're
 having again.
 
 The issue is - Heat still doesn't materially participate in grenade.
 Heat is substantially far behind the other integrated projects in its
 integration with the upstream testing. Only monday did we finally start
 gating on a real unit of work for Heat (the heat-slow jobs). If I was
 letter grading projects right now on upstream testing I'd give Nova an
 A, Neutron a C (still no full run, no working grenade), and Heat a D.

Thanks for this, I know we have a lot more work to do in tempest, but
evidently grenade integration is something we should prioritize as soon as
possible.  Any volunteers out there? :)

 So in short. Heat did the wrong thing. You should be able to use your
 configs from the last release. This is what all the mature projects in
 OpenStack do. In the event that you *have* to make a change like that it
 requires an UpgradeImpact tag in the commit. And those should be limited
 really aggressively. This is the whole point of the deprecation cycle.

Ok, got that message loud and clear now, thanks ;)

Do you have a link to docs which describe the deprecation cycle and
openstack-wide policy for introducing backwards incompatible changes?

The thing I'm still not clear on is: if we want to eventually require
a specific config option, and we can't just have an upgrade requirement to
add it as I was expecting, is it enough to just output a warning for one
release cycle and then require it?
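
For comparison, the pattern other projects seem to use is to keep a working
fallback for one cycle and log a deprecation warning when the new option is
unset, rather than failing outright. A hedged sketch only - the option name
and wording are illustrative, not necessarily Heat's actual code:

    from oslo.config import cfg
    import logging

    LOG = logging.getLogger(__name__)

    cfg.CONF.register_opts([
        cfg.StrOpt('stack_user_domain', default=None,
                   help='Keystone domain used for heat stack users.'),
    ])

    if not cfg.CONF.stack_user_domain:
        LOG.warning('stack_user_domain is not set; falling back to the legacy '
                    'behaviour. This fallback is deprecated and will be '
                    'removed in a future release.')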

Then I guess my question is how do we rationalize the requirements of
trunk-chasing downstream users wrt the time based releases as part of the
deprecation cycle policy?

i.e. if we branch stable/icehouse and then I immediately post a patch removing
the deprecated fallback path, it may still break downstream users who don't
care about the stable-branch process and I have no way of knowing (other
than, as in this case, finding out too late when they shout at me..).

Thanks for contributing to the discussion, hopefully it's not only me who's
somewhat confused by the process, and the requirement to satisfy two quite
different sets of release constraints for downstream deployers.

Perhaps we need a wiki page similar to the StableBranch page which spells
out the requirements for projects wrt trunk-chasing deployers, unless one
exists already?.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder: AttributeError: No such RPC function 'create_export'

2014-03-11 Thread Deepak C Shetty

I spoke with Avishay on IRC and he gave me this link...

https://review.openstack.org/#/c/76471/

So this is a known issue and the fix is in the works ^^

thanx,
deepak

On 03/11/2014 12:53 PM, Deepak C Shetty wrote:

Hi All,
I am using devstack with cinder git head @ 
f888e412b0d0fdb0426045a9c55e0be0390f842c


I am seeing the below error while trying to do cinder migrate for 
glusterfs backend. I don't think it's backend specific tho', as the 
failure is in the common rpc layer of code.


http://paste.fedoraproject.org/84189/45169021/

Any pointers to get past this is appreciated.

thanx,
deepak

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] New API methods for App Catalog UI

2014-03-11 Thread Alexander Tivelkov
Hi Georgy,

There was already a discussion of these APIs [1] some time ago:
the draft API has been proposed here [2], an etherpad for
discussion and feedback was created [3], and the direction was already
approved in the blueprint [4]. As far as I know, the work on this set
of APIs has already begun.
Please align your vision with this spec.
We may discuss it today at the weekly meeting on IRC.


[1] http://lists.openstack.org/pipermail/openstack-dev/2014-March/028886.html
[2] http://docs.muranorepositoryapi.apiary.io
[3] https://etherpad.openstack.org/p/muranorepository-api
[4] https://blueprints.launchpad.net/murano/+spec/murano-repository-api-v2
--
Regards,
Alexander Tivelkov


On Mon, Mar 10, 2014 at 7:21 PM, Georgy Okrokvertskhov
gokrokvertsk...@mirantis.com wrote:
 Hi,

 Murano is moving towards App Catalog functionality and in order to support
 this new aspect in the UI we need to add new API methods to cover App
 Catalog operations. Currently the vision for App Catalog API is the
 following:
 1) All App create operations will be covered by metadata repository API
 which will eventually be a part of Glance Artifacts functionality. New
 application creation will be technically a creation of a new artifact and
 uploading it to metadata repository. The sharing and distribution aspects
 will be covered by the same artifact repository functionality.

 2) App Listing and App Catalog rendering will be covered by a new Murano
 API. The reason for that is to keep UI thin and keep package representation
 aspects out of the general artifacts repository.

 The list of new API functions is available here:
 https://etherpad.openstack.org/p/MuranoAppCatalogAPI

 This is a first draft to cover minimal UI rendering requirements.

 Thanks
 Georgy

 --
 Georgy Okrokvertskhov
 Architect,
 OpenStack Platform Products,
 Mirantis
 http://www.mirantis.com
 Tel. +1 650 963 9828
 Mob. +1 650 996 3284

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][Nova][Docker] Devstack with docker driver

2014-03-11 Thread urgensherpa
Hello!

I can run docker containers and push them to docker.io, but I failed to push
to the local Glance and get the same error mentioned here.
Could you please shed some more light on how you resolved it? I started
setting up OpenStack and Docker using devstack.
Here is my localrc:
FLOATING_RANGE=192.168.140.0/27
FIXED_RANGE=10.11.12.0/24
FIXED_NETWORK_SIZE=256
FLAT_INTERFACE=eth1
ADMIN_PASSWORD=g
MYSQL_PASSWORD=g
RABBIT_PASSWORD=g
SERVICE_PASSWORD=g
SERVICE_TOKEN=g
SCHEDULER=nova.scheduler.filter_scheduler.FilterScheduler
VIRT_DRIVER=docker
SCREEN_LOGDIR=$DEST/logs/screen
---
The machine I'm testing on is a VMware VM running Ubuntu 13.01, with two NICs:
eth0 connected to the internet and eth1 to the local network.
---





--
View this message in context: 
http://openstack.10931.n7.nabble.com/Openstack-Nova-Docker-Devstack-with-docker-driver-tp28361p34845.html
Sent from the Developer mailing list archive at Nabble.com.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Suggestions for alarm improvements

2014-03-11 Thread Gordon Chung
I've created a bp to discuss whether moving alarming into the pipeline is 
feasible and can cover all the use cases for alarms. If we can find a 
solution that is a bit leaner than what we have and still provides the same 
functionality coverage, I don't see why we shouldn't try it. It very well may be 
that what we have is the best solution.

https://blueprints.launchpad.net/ceilometer/+spec/alarm-pipelines

cheers,
gordon chung
openstack, ibm software standards
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Reminder - Weekly Project Meeting today at 21:00 UTC

2014-03-11 Thread Sean Dague
For today's weekly project meeting I'll be standing in for Thierry.
Agenda is here
https://wiki.openstack.org/wiki/Meetings/ProjectMeeting#Weekly_Project_meeting

I expect the bulk of the meeting will be checking in on where we stand
on FFEs that were granted, as those were all supposed to be in by the
meeting today.

For folks in the US (in all the places which do DST), remember, the
meeting time is in UTC, so now an hour later for us all.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Tempest review and development priorities until release

2014-03-11 Thread Sean Dague
Tempest has no feature freeze in the same way as the core projects, in a
lot of ways some of our most useful effort happens right now, as
projects shore up features within the tempest code.

That being said, the review queue remains reasonably large, so I would
like to focus review attention on items that will make a material impact
on the quality of the Icehouse release.

That means I'd like to *stop* doing patches and reviews that are
internal refactorings. We can start doing those again in Juno. I know
there were some client refactorings, and hacking cleanups in flight.
Those should wait until Icehouse is released.

From my perspective the top priorities for things to be reviewed /
developed are:
 * Heat related tests (especially on the heat slow job) as we're now
gating with that, but still only have 1 real test
 * Changes to get us Neutron full support (I actually think the tempest
side is complete, but just in case)
 * Unit tests of Tempest function (so we know that we are doing the
things we think)
 * Bugs in Tempest itself
 * The Keystone multi auth patches (so we can actually test v3)
 * Any additional positive API / scenario tests for *integrated*
projects (incubated projects are currently best effort).

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6] IRC meeting today?

2014-03-11 Thread Shixiong Shang
Do we have an IRC meeting today? Didn't see anybody in the chat room... :(

Shixiong


Shixiong Shang

!--- Stay Hungry, Stay Foolish ---!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Sahara (ex. Savanna) project renaming process [savanna]

2014-03-11 Thread Sergey Lukjanov
All launchpad projects have been renamed, keeping full path redirects.
It means that you can still reference the bugs and blueprints under
the savanna launchpad project and they will be redirected to the new
sahara project.

All savanna repositories will be renamed to sahara ones on Wednesday,
March 12, between 12:00 and 12:30 UTC [0]


[0] http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140312T12am=30

On Sun, Mar 9, 2014 at 3:08 PM, Sergey Lukjanov slukja...@mirantis.com wrote:
 Matt,

 thanks for moving etherpad notes to the blueprints. I've added some
 notes and details to them and added some assignments to the blueprints
 where we have no choice.

 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-ci -
 Sergey Kolekonov
 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-guestagent
 - Dmitry Mescheryakov

 Thanks.

 On Sat, Mar 8, 2014 at 5:08 PM, Matthew Farrellee m...@redhat.com wrote:
 On 03/07/2014 04:50 PM, Sergey Lukjanov wrote:

 Hey folks,

 we're now starting working on the project renaming. You can find
 details in the etherpad [0]. We'll move all work items to the
 blueprints, one blueprint per sub-project to well track progress and
 work items. The general blueprint is [1], it'll depend on all other
 blueprints and it's currently consists of general renaming tasks.

 Current plan is to assign each subproject blueprint to volunteer.
 Please, contact me and Matthew Farrellee if you'd like to take the
 renaming bp.

 Please, share your ideas/suggestions in ML or etherpad.

 [0] https://etherpad.openstack.org/p/savanna-renaming-process
 [1] https://blueprints.launchpad.net/openstack?searchtext=savanna-renaming

 Thanks.

 P.S. Please, prepend email topics with [sahara] and append [savanna]
 to the end of topic (like in this email) for the transition period.


 savann^wsahara team,

 i've separated out most of the activities that can happen in parallel,
 aligned them on repository boundaries, and filed blueprints for the efforts.
 now we need community members to take ownership (be the assignee) of the
 blueprints. taking ownership means you'll be responsible for the renaming in
 the repository, coordinating with other owners and getting feedback from the
 community about important questions (such as compatibility requirements).

 to take ownership, just go to the blueprint and assign it to yourself. if
 there is already an assignee, reach out to that person and offer them
 assistance.

 blueprints up for grabs -

 what: savanna^wsahara ci
 blueprint:
 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-ci
 comments: this should be taken by someone already familiar with the ci. i'd
 nominate skolekonov

 what: saraha puppet modules
 blueprint:
 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-puppet
 comments: this should be taken by someone who can validate the changes. i'd
 nominate sbadia or dizz

 what: sahara extras
 blueprint:
 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-extra
 comments: this could be taken by anyone

 what: sahara dib image elements
 blueprint:
 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-image-elements
 comments: this could be taken by anyone

 what: sahara python client
 blueprint:
 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-client
 comments: this should be done by someone w/ experience in the client. i'd
 nominate tmckay

 what: sahara horizon plugin
 blueprint:
 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-dashboard
 comments: this will require experience and care. i'd nominate croberts

 what: sahara guestagent
 blueprint:
 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-guestagent
 comments: i'd nominate dmitrymex

 what: sahara section of openstack wiki
 blueprint:
 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-wiki
 comments: this could be taken by anyone

 what: sahara service
 blueprint:
 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-service
 comments: this requires experience, care and is a lot of work. i'd nominate
 alazarev & aignatov to tag-team it



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.



-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [3rd party testing] QA meeting today at 14:00 EST / 18:00 UTC

2014-03-11 Thread Jeremy Stanley
On 2014-03-11 04:29:04 + (+), trinath.soman...@freescale.com wrote:
 +1
 
 Attending

Note that the announcement was for yesterday. Nobody showed up with
questions, so it ended very early.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron]A Question about creating instance with duplication sg_name

2014-03-11 Thread mar...@redhat.com
On 11/03/14 10:20, Xurong Yang wrote:
 It's allowed to create duplicate security groups with the same name,
 so an exception happens when creating an instance with a duplicated sg name.

Hi Xurong - fyi there is a review open which raises this particular
point at https://review.openstack.org/#/c/79270/2 (together with
associated bug).

imo we shouldn't be using 'name' to distinguish security groups - that's
what the UUID is for,

thanks, marios
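
To illustrate that, a small sketch of resolving security groups with
python-neutronclient and insisting on the UUID when the name is ambiguous;
the credentials, URL, tenant id and group name are placeholders:

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username='demo', password='secret',
                                    tenant_name='demo',
                                    auth_url='http://localhost:5000/v2.0')
    sgs = neutron.list_security_groups(
        tenant_id='TENANT_UUID')['security_groups']
    matches = [sg for sg in sgs if sg['name'] == 'web']
    if len(matches) != 1:
        raise ValueError("security group name 'web' is ambiguous or missing; "
                         "pass the UUID instead")
    sg_id = matches[0]['id']   # use the UUID from here on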

 code following:
 
 security_groups = kwargs.get('security_groups', [])
 security_group_ids = []
 
 # TODO(arosen) Should optimize more to do direct query for security
 # group if len(security_groups) == 1
 if len(security_groups):
 search_opts = {'tenant_id': instance['project_id']}
 user_security_groups = neutron.list_security_groups(
 **search_opts).get('security_groups')
 
 for security_group in security_groups:
 name_match = None
 uuid_match = None
 for user_security_group in user_security_groups:
 if user_security_group['name'] == security_group:
 if name_match:---exception happened here
 raise exception.NoUniqueMatch(
 _(Multiple security groups found matching
'%s'. Use an ID to be more specific.) %
security_group)
 
 name_match = user_security_group['id']
   
 
 So it may be improper to create an instance with the sg name parameter.
 Any response is appreciated.
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Sahara (ex. Savanna) project renaming process [savanna]

2014-03-11 Thread Sergey Lukjanov
RE blueprints assignments - it looks like all bps have initial assignments.

On the main service code renaming, Alex I. is the contact person;
I'll help him with some setup stuff.

Additionally, you can find a bunch of my patches for external renaming
related changes -
https://review.openstack.org/#/q/status:open+topic:savanna-sahara+-savanna,n,z
and internal changes -
https://review.openstack.org/#/q/status:open+topic:savanna-sahara+savanna,n,z
(only open changes).

Thanks.

On Tue, Mar 11, 2014 at 5:33 PM, Sergey Lukjanov slukja...@mirantis.com wrote:
 All launchpad projects have been renamed, keeping full path redirects.
 It means that you can still reference the bugs and blueprints under
 the savanna launchpad project and they will be redirected to the new
 sahara project.

 All savanna repositories will be renamed to sahara ones on Wednesday,
 March 12 between 12:00 to 12:30 UTC [0]


 [0] http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140312T12am=30

 On Sun, Mar 9, 2014 at 3:08 PM, Sergey Lukjanov slukja...@mirantis.com 
 wrote:
 Matt,

 thanks for moving etherpad notes to the blueprints. I've added some
 notes and details to them and add some assignments to the blueprints
 where we have no choice.

 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-ci -
 Sergey Kolekonov
 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-guestagent
 - Dmitry Mescheryakov

 Thanks.

 On Sat, Mar 8, 2014 at 5:08 PM, Matthew Farrellee m...@redhat.com wrote:
 On 03/07/2014 04:50 PM, Sergey Lukjanov wrote:

 Hey folks,

 we're now starting working on the project renaming. You can find
 details in the etherpad [0]. We'll move all work items to the
 blueprints, one blueprint per sub-project to well track progress and
 work items. The general blueprint is [1], it'll depend on all other
 blueprints and it's currently consists of general renaming tasks.

 Current plan is to assign each subproject blueprint to volunteer.
 Please, contact me and Matthew Farrellee if you'd like to take the
 renaming bp.

 Please, share your ideas/suggestions in ML or etherpad.

 [0] https://etherpad.openstack.org/p/savanna-renaming-process
 [1] https://blueprints.launchpad.net/openstack?searchtext=savanna-renaming

 Thanks.

 P.S. Please, prepend email topics with [sahara] and append [savanna]
 to the end of topic (like in this email) for the transition period.


 savann^wsahara team,

 i've separated out most of the activities that can happen in parallel,
 aligned them on repository boundaries, and filed blueprints for the efforts.
 now we need community members to take ownership (be the assignee) of the
 blueprints. taking ownership means you'll be responsible for the renaming in
 the repository, coordinating with other owners and getting feedback from the
 community about important questions (such as compatibility requirements).

 to take ownership, just go to the blueprint and assign it to yourself. if
 there is already an assignee, reach out to that person and offer them
 assistance.

 blueprints up for grabs -

 what: savanna^wsahara ci
 blueprint:
 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-ci
 comments: this should be taken by someone already familiar with the ci. i'd
 nominate skolekonov

 what: saraha puppet modules
 blueprint:
 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-puppet
 comments: this should be taken by someone who can validate the changes. i'd
 nominate sbadia or dizz

 what: sahara extras
 blueprint:
 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-extra
 comments: this could be taken by anyone

 what: sahara dib image elements
 blueprint:
 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-image-elements
 comments: this could be taken by anyone

 what: sahara python client
 blueprint:
 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-client
 comments: this should be done by someone w/ experience in the client. i'd
 nominate tmckay

 what: sahara horizon plugin
 blueprint:
 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-dashboard
 comments: this will require experience and care. i'd nominate croberts

 what: sahara guestagent
 blueprint:
 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-guestagent
 comments: i'd nominate dmitrymex

 what: sahara section of openstack wiki
 blueprint:
 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-wiki
 comments: this could be taken by anyone

 what: sahara service
 blueprint:
 https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-service
 comments: this requires experience, care and is a lot of work. i'd nominate
 alazarev  aignatov to tag team it



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.



Re: [openstack-dev] [qa] Tempest review and development priorities until release

2014-03-11 Thread Kenichi Oomichi

Hi Sean,

 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: Tuesday, March 11, 2014 10:06 PM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [qa] Tempest review and development priorities until 
 release
 
 Tempest has no feature freeze in the same way as the core projects, in a
 lot of ways some of our most useful effort happens right now, as
 projects shore up features within the tempest code.
 
 That being said, the review queue remains reasonably large, so I would
 like to focus review attention on items that will make a material impact
 on the quality of the Icehouse release.
 
 That means I'd like to *stop* doing patches and reviews that are
 internal refactorings. We can start doing those again in Juno. I know
 there were some client refactorings, and hacking cleanups in flight.
 Those should wait until Icehouse is released.
 
 From my perspective the top priorities for things to be reviewed /
 developed are:
  * Heat related tests (especially on the heat slow job) as we're now
 gating with that, but still only have 1 real test
  * Changes to get us Neutron full support (I actually think the tempest
 side is complete, but just in case)
  * Unit tests of Tempest function (so we know that we are doing the
 things we think)
  * Bugs in Tempest itself
  * The Keystone multi auth patches (so we can actually test v3)
  * Any additional positive API / scenario tests for *integrated*
 projects (incubated projects are currently best effort).

I got it, and I'd like to clarify whether one task is acceptable or not.

In most test cases, Tempest does not check the API response body (API attributes).
Now I am working on improving API attribute test coverage for the Nova API [1].
I think the task is useful for backward compatibility and for finding some
latent bugs (API sample files, etc.). In addition, this improvement is necessary
to prove the concept of the Nova v2.1 API, because we need to check that the v2.1 API
does not cause backward incompatibility issues.
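
As a purely illustrative sketch (not an actual Tempest test) of what an
attribute check adds over a bare status-code check, so that a v2 -> v2.1
change which drops or renames a field gets caught:

    # In a real Tempest test the body would come from the servers client,
    # e.g. resp, server = self.client.get_server(server_id); faked here.
    server = {'id': 'abc', 'name': 'vm-1', 'status': 'ACTIVE',
              'addresses': {}, 'flavor': {'id': '1'}, 'image': {'id': '2'}}

    documented_attrs = ('id', 'name', 'status', 'addresses', 'flavor', 'image')
    missing = [a for a in documented_attrs if a not in server]
    assert not missing, 'response body is missing attributes: %s' % missing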

Can we continue this improvement?
Of course, I will do review for the above areas(Heat, etc) also.


Thanks
Ken'ichi Ohmichi

---
[1]: https://blueprints.launchpad.net/tempest/+spec/nova-api-attribute-test

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] IRC meeting today?

2014-03-11 Thread Collins, Sean
It starts at 10AM EST, due to daylight savings. See you in a couple minutes

Sean M. Collins

From: Shixiong Shang [sparkofwisdom.cl...@gmail.com]
Sent: Tuesday, March 11, 2014 9:15 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][IPv6] IRC meeting today?

Do we have IRC meeting today? Didn’t see anybody in the chat room…..:(

Shixiong


Shixiong Shang

!--- Stay Hungry, Stay Foolish ---!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] test_launch_instance_post questions

2014-03-11 Thread Abishek Subramanian (absubram)
Hi,

Can I please get some help with this UT?
I am having a little issue with the nics argument -
nics = [{"net-id": netid, "v4-fixed-ip": ""}]


I wish to add a second network to this argument, but somehow
the UT only picks up the first network.

Any guidance will be appreciated.
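
For what it's worth, the list comprehension itself does produce one dict per
network id, so if only one shows up the second id is probably not reaching
this code (e.g. it never makes it into the POST data / test fixtures). A
quick standalone check, with placeholder ids:

    netids = ['net-1', 'net-2']   # placeholder network ids
    nics = [{"net-id": netid, "v4-fixed-ip": ""} for netid in netids]
    assert nics == [{"net-id": "net-1", "v4-fixed-ip": ""},
                    {"net-id": "net-2", "v4-fixed-ip": ""}]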


Thanks!


On 3/6/14 12:06 PM, Abishek Subramanian (absubram) absub...@cisco.com
wrote:

Hi,

I had a couple of questions regarding this UT and the
JS template that it ends up using.
Hopefully someone can point me in the right direction
and help me understand this a little better.

I see that for this particular UT, we have a total of 3 networks
in the network_list (the second network is supposed to be disabled
though).
For the nic argument needed by the nova/server_create API though we
only pass the first network's net_id.

I am trying to modify this unit test so as to be able to accept 2
network_ids 
instead of just one. This should be possible, yes?
We can have two nics in an instance instead of just one?
However, I always see that when the test runs,
in code it only finds the first network from the list.

This line of code -

 if netids:
     nics = [{"net-id": netid, "v4-fixed-ip": ""}
             for netid in netids]

There's always just one net-id in this dictionary even though I've added
a new network in the neutron test_data. Can someone please help me
figure out what I might be doing wrong?

How does the JS code in horizon.instances.js file work?
I assume this is where the network list is obtained from?
How does this translate in the unit test environment?



Thanks!
Abishek


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] savanna/sahara graduation review [savanna]

2014-03-11 Thread Sergey Lukjanov
Hey folks,

please, note that today will be our project graduation review on TC
meeting - https://wiki.openstack.org/wiki/Governance/TechnicalCommittee#Meeting

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Tempest review and development priorities until release

2014-03-11 Thread Sean Dague
On 03/11/2014 09:48 AM, Kenichi Oomichi wrote:
 
 Hi Sean,
 
 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: Tuesday, March 11, 2014 10:06 PM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [qa] Tempest review and development priorities 
 until release

 Tempest has no feature freeze in the same way as the core projects, in a
 lot of ways some of our most useful effort happens right now, as
 projects shore up features within the tempest code.

 That being said, the review queue remains reasonably large, so I would
 like to focus review attention on items that will make a material impact
 on the quality of the Icehouse release.

 That means I'd like to *stop* doing patches and reviews that are
 internal refactorings. We can start doing those again in Juno. I know
 there were some client refactorings, and hacking cleanups in flight.
 Those should wait until Icehouse is released.

 From my perspective the top priorities for things to be reviewed /
 developed are:
  * Heat related tests (especially on the heat slow job) as we're now
 gating with that, but still only have 1 real test
  * Changes to get us Neutron full support (I actually think the tempest
 side is complete, but just in case)
  * Unit tests of Tempest function (so we know that we are doing the
 things we think)
  * Bugs in Tempest itself
   * The Keystone multi auth patches (so we can actually test v3)
  * Any additional positive API / scenario tests for *integrated*
 projects (incubated projects are currently best effort).
 
 I got it, and I'd like to clarify whether one task is acceptable or not.
 
 In most test cases, Tempest does not check the API response body (API attributes).
 Now I am working on improving API attribute test coverage for the Nova API [1].
 I think the task is useful for backward compatibility and for finding some
 latent bugs (API sample files, etc.). In addition, this improvement is necessary
 to prove the concept of the Nova v2.1 API, because we need to check that the v2.1 API
 does not cause backward incompatibility issues.
 
 Can we continue this improvement?
 Of course, I will do review for the above areas(Heat, etc) also.

Yes, absolutely.

I would count the API response checks under "Additional positive API /
scenario tests for integrated projects". I should have been clear that it
also means enhancements of those tests that ensure they are properly
checking things.

I think these are the kind of changes that help ensure a solid Icehouse
release.

Thanks Kenichi!

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat]Policy on upgades required config changes

2014-03-11 Thread Steven Dake

On 03/11/2014 04:04 AM, Sean Dague wrote:

On 03/04/2014 12:39 PM, Steven Hardy wrote:

Hi all,

As some of you know, I've been working on the instance-users blueprint[1].

This blueprint implementation requires three new items to be added to the
heat.conf, or some resources (those which create keystone users) will not
work:

https://review.openstack.org/#/c/73978/
https://review.openstack.org/#/c/76035/

So on upgrade, the deployer must create a keystone domain and domain-admin
user, add the details to heat.conf, as already been done in devstack[2].

The changes requried for this to work have already landed in devstack, but
it was discussed to day and Clint suggested this may be unacceptable
upgrade behavior - I'm not sure so looking for guidance/comments.

My plan was/is:
- Make devstack work
- Talk to tripleo folks to assist in any transition (what prompted this
   discussion)
- Document the upgrade requirements in the Icehouse release notes so the
   wider community can upgrade from Havana.
- Try to give a heads-up to those maintaining downstream heat deployment
   tools (e.g stackforge/puppet-heat) that some tweaks will be required for
   Icehouse.

However some have suggested there may be an openstack-wide policy which
requires peoples old config files to continue working indefinitely on
upgrade between versions - is this right?  If so where is it documented?

This is basically enforced in code in grenade, the language for this
actually got lost in the project requirements discussion in the TC, I'll
bring that back in the post graduation requirements discussion we're
having again.

The issue is - Heat still doesn't materially participate in grenade.
Heat is substantially far behind the other integrated projects in its
integration with the upstream testing. Only Monday did we finally start
gating on a real unit of work for Heat (the heat-slow jobs). If I were
letter grading projects right now on upstream testing I'd give Nova an
A, Neutron a C (still no full run, no working grenade), and Heat a D.

Sean,

I agree the Heat community hasn't done a bang-up job of getting 
integrated with Tempest.  We only have 50 functional tests implemented.  
The community clearly needs to do more and provide better functional 
coverage with Heat.


It is inappropriate to say "Only Monday did we finally start gating"
because that was a huge move in the right direction.  It took a lot of
effort and should not be so easily dismissed.  Clearly the community,
and especially the core developers, are making an effort.  Keep in mind
we have to balance upstream development work, answering user questions,
staying on top of a 5 page review queue, maintaining relationships with, and
keeping track of, the various integrated projects which are consuming Heat as
a building block, plus all of the demands of our day jobs.


We just don't have enough bandwidth on the core team to tackle writing 
all of the tempest test cases ourselves.  We have made an effort to 
distribute this work to the overall heat community via wishlist bugs in 
Heat which several new folks have picked up.  I hope to see our coverage 
improve over time, especially with more advanced scenario tests through 
this effort.


Regards
-steve


So in short. Heat did the wrong thing. You should be able to use your
configs from the last release. This is what all the mature projects in
OpenStack do. In the event that you *have* to make a change like that it
requires an UpgradeImpact tag in the commit. And those should be limited
really aggressively. This is the whole point of the deprecation cycle.
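
For the config-rename case, the mechanism behind that deprecation cycle is
oslo.config's deprecated option support. A minimal sketch (the option names
below are invented for illustration, they are not Heat's actual options):

from oslo.config import cfg

# The renamed option still honours the old name/group from last release's
# config file and is reported as deprecated for one cycle.
opts = [
    cfg.StrOpt('auth_endpoint',
               default='http://localhost:5000/v2.0',
               deprecated_name='keystone_url',
               deprecated_group='DEFAULT',
               help='Keystone endpoint used by the service.'),
]

CONF = cfg.CONF
CONF.register_opts(opts, group='clients_keystone')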

-Sean



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Tempest review and development priorities until release

2014-03-11 Thread Kenichi Oomichi

 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: Tuesday, March 11, 2014 11:02 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [qa] Tempest review and development priorities 
 until release
 
 On 03/11/2014 09:48 AM, Kenichi Oomichi wrote:
 
  -Original Message-
  From: Sean Dague [mailto:s...@dague.net]
  Sent: Tuesday, March 11, 2014 10:06 PM
  To: OpenStack Development Mailing List
  Subject: [openstack-dev] [qa] Tempest review and development priorities 
  until release
 
  Tempest has no feature freeze in the same way as the core projects, in a
  lot of ways some of our most useful effort happens right now, as
  projects shore up features within the tempest code.
 
  That being said, the review queue remains reasonably large, so I would
  like to focus review attention on items that will make a material impact
  on the quality of the Icehouse release.
 
  That means I'd like to *stop* doing patches and reviews that are
  internal refactorings. We can start doing those again in Juno. I know
  there were some client refactorings, and hacking cleanups in flight.
  Those should wait until Icehouse is released.
 
  From my perspective the top priorities for things to be reviewed /
  developed are:
   * Heat related tests (especially on the heat slow job) as we're now
  gating with that, but still only have 1 real test
   * Changes to get us Neutron full support (I actually think the tempest
  side is complete, but just in case)
   * Unit tests of Tempest function (so we know that we are doing the
  things we think)
   * Bugs in Tempest itself
   * The Keystone multi auth patches (so we can actually test v3)
   * Any additional positive API / scenario tests for *integrated*
  projects (incubated projects are currently best effort).
 
  I got it, and I'd like to clarify whether one task is acceptable or not.
 
  In most test cases, Tempest does not check the API response body (API
  attributes).
  Now I am working on improving API attribute test coverage for the Nova API [1].
  I think this task is useful for backward compatibility and for finding some
  latent bugs (API sample files etc). In addition, this improvement is
  necessary to prove the concept of the Nova v2.1 API, because we need to
  check that the v2.1 API does not cause backward incompatibility issues.
 
  Can we continue this improvement?
  Of course, I will also review the above areas (Heat, etc).
 
 Yes, absolutely.
 
 I would count the API response checks under the "Additional positive API /
 scenario tests for integrated projects" item. I should have been clear that it
 also means enhancements to those tests that ensure they are properly
 checking things.
 
 I think these are the kind of changes that help ensure a solid Icehouse
 release.

I am encouraged by your words.
Thank you, Sean!


Thanks
Ken'ichi Ohmichi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] nominating Ildikó Váncsa and Nadya Privalova to ceilometer-core

2014-03-11 Thread Mehdi Abaakouk
On Mon, Mar 10, 2014 at 05:15:08AM -0400, Eoghan Glynn wrote:
 
 Folks,
 
 Time for some new blood on the ceilometer core team.
 
  * Ildikó co-authored the complex query API extension with Balazs Gibizer
and showed a lot of tenacity in pushing this extensive blueprint
through gerrit over multiple milestones.

+1 

  * Nadya has shown much needed love to the previously neglected HBase
driver bringing it much closer to feature parity with the other
supported DBs, and has also driven the introduction of ceilometer
coverage in Tempest.

+1

Cheers,

-- 
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat]Policy on upgades required config changes

2014-03-11 Thread Sean Dague
On 03/11/2014 10:15 AM, Steven Dake wrote:
 On 03/11/2014 04:04 AM, Sean Dague wrote:
 On 03/04/2014 12:39 PM, Steven Hardy wrote:
 Hi all,

 As some of you know, I've been working on the instance-users blueprint[1].

 This blueprint implementation requires three new items to be added to the
 heat.conf, or some resources (those which create keystone users) will not
 work:

 https://review.openstack.org/#/c/73978/
 https://review.openstack.org/#/c/76035/

 So on upgrade, the deployer must create a keystone domain and domain-admin
 user, add the details to heat.conf, as already been done in devstack[2].

 The changes required for this to work have already landed in devstack, but
 it was discussed today and Clint suggested this may be unacceptable
 upgrade behavior - I'm not sure, so I'm looking for guidance/comments.

 My plan was/is:
 - Make devstack work
 - Talk to tripleo folks to assist in any transition (what prompted this
   discussion)
 - Document the upgrade requirements in the Icehouse release notes so the
   wider community can upgrade from Havana.
 - Try to give a heads-up to those maintaining downstream heat deployment
   tools (e.g stackforge/puppet-heat) that some tweaks will be required for
   Icehouse.

 However some have suggested there may be an openstack-wide policy which
 requires peoples old config files to continue working indefinitely on
 upgrade between versions - is this right?  If so where is it documented?
 This is basically enforced in code in grenade, the language for this
 actually got lost in the project requirements discussion in the TC, I'll
 bring that back in the post graduation requirements discussion we're
 having again.

 The issue is - Heat still doesn't materially participate in grenade.
 Heat is substantially far behind the other integrated projects in its
 integration with the upstream testing. Only Monday did we finally start
 gating on a real unit of work for Heat (the heat-slow jobs). If I were
 letter grading projects right now on upstream testing I'd give Nova an
 A, Neutron a C (still no full run, no working grenade), and Heat a D.
 Sean,
 
 I agree the Heat community hasn't done a bang-up job of getting
 integrated with Tempest.  We only have 50 functional tests implemented. 
 The community clearly needs to do more and provide better functional
 coverage with Heat.
 
 It is inappropriate to say "Only Monday did we finally start gating"
 because that was a huge move in the right direction.  It took a lot of
 effort and should not be so easily dismissed.  Clearly the community,
 and especially the core developers, are making an effort.  Keep in mind
 we have to balance upstream development work, answering user questions,
 staying on top of a 5 page review queue, maintaining relationships with, and
 keeping track of, the various integrated projects which are consuming Heat as
 a building block, plus all of the demands of our day jobs.

I agree it was a huge step in the right direction. It's not clear to me
why expressing that this was very recent was inappropriate.

Recent conversations have made me realize that a lot of the Heat core
team doesn't realize that Heat's participation in upstream gating is
below average, so I decided to be blunt about it. Because it was only
after being blunt about that with the Neutron team in Hong Kong did we
get any real motion on it (Neutron has seen huge gains this cycle).

All the integrated projects have the same challenges.

Upstream QA is really important. It not only protects heat from itself,
it protects it from changes in other projects.

 We just don't have enough bandwidth on the core team to tackle writing
 all of the tempest test cases ourselves.  We have made an effort to
 distribute this work to the overall heat community via wishlist bugs in
 Heat which several new folks have picked up.  I hope to see our coverage
 improve over time, especially with more advanced scenario tests through
 this effort.

Bandwidth is a problem for everyone. It's a matter of priorities. The
fact that realistic upstream gating is considered wishlist priority
from a Heat perspective is something I find troubling.

Putting the investment into realistic scenarios in Tempest / gate is
going to be a huge timesaving for the Heat team. It will ensure Heat is
functioning at every commit (not just releases), it will protect Heat
from chasing breaking issues in Keystone or Nova, and it will mean that
we'll expose more subtle issues that only come with being able to do
data analysis on 10k runs.

I get it's never fun to hear that a project is below average on a metric
that's important to the OpenStack community. But if we aren't honest and
open about these things they never change.

-Sean

--
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [nova] a question about instance snapshot

2014-03-11 Thread Kashyap Chamarthy
On Fri, Mar 07, 2014 at 02:29:04AM +, Liuji (Jeremy) wrote:
 Hi, all
 
 Current OpenStack does not seem to support snapshotting an instance with memory
 and device state.  I searched the blueprints and found two related
 blueprints, listed below.  But these blueprints failed to get into the
 branch.
 
 [1]: https://blueprints.launchpad.net/nova/+spec/live-snapshots [2]:
 https://blueprints.launchpad.net/nova/+spec/live-snapshot-vms
 
 In blueprint [1], there is a comment: "We discussed this pretty
 extensively on the mailing list and in a design summit session.  The
 consensus is that this is not a feature we would like to have in nova.
 --russellb"  But I can't find the discussion mail about it. I'd like to
 know why we think so.  Without memory snapshots, we can't provide
 the feature for users to revert an instance to a checkpoint.

I agree, it's a useful feature.

Speaking from a libvirt/QEMU standpoint, with recent upstream versions,
it's entirely possible to do a live memory and disk snapshot in a single
operation. I think it's a matter of someone adding wiring up the support
in Nova.

In libvirt's parlance, it's called an external 'system checkpoint' snapshot,
i.e. the guest's disk state will be saved in one file, and its RAM and
device state will be saved in another new file.

  NOTE: 'system checkpoint' meaning - it captures VM state and disk
  state; VM state meaning - it captures memory and device state (but
  _not_ disk state).

 
I just did a quick test using libvirt's virsh:

1. Start the guest:

  $ virsh start ostack-controller
  Domain ostack-controller started

2. List its block device in use:

  $ virsh domblklist ostack-controller
  Target     Source
  ------------------------------------------------------
  vda        /var/lib/libvirt/images/ostack-controller.qcow2

3. Take a LIVE external system checkpoint snapshot, specifying both disk
   file _and_ memory file:

  $ virsh snapshot-create-as --domain ostack-controller snap1 \
--diskspec vda,file=/export/vmimages/disk-snap.qcow2,snapshot=external \
--memspec file=/export/vmimages/mem-snap.qcow2,snapshot=external \
--atomic
  Domain snapshot snap1 created

  NOTE: Once the above command is issued, the original disk image of
ostack-controller will become the backing_file and the new overlay
image specified (disk-snap.qcow2) will be used to track the new
changes. Here on, libvirt will use this overlay for further
write operations (while using the original image as a read-only
backing_file).

4. List the snapshot: 

  $ virsh snapshot-list ostack-controller
  Name      Creation Time                 State
  ------------------------------------------------------
  snap1     2014-03-11 20:01:54 +0530     running

5. Optionally, check if the snapshot file we specified (disk-snap.qcow2)
   is indeed the new overlay


These are the versions I used to test the above:

  $ uname -r; rpm -q qemu-system-x86 libvirt
  3.13.4-200.fc20.x86_64
  qemu-system-x86-1.7.0-5.fc21.x86_64
  libvirt-1.2.3-1.fc20.x86_64
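
For completeness, the same live external snapshot can also be driven from
Python via the libvirt bindings. A minimal sketch follows; the snapshot XML
and flag names are standard libvirt conventions, the domain name and file
paths are just the ones from the virsh example above, and this is not the
Nova wiring being discussed:

import libvirt

SNAPSHOT_XML = """
<domainsnapshot>
  <name>snap1</name>
  <memory snapshot='external' file='/export/vmimages/mem-snap.qcow2'/>
  <disks>
    <disk name='vda' snapshot='external'>
      <source file='/export/vmimages/disk-snap.qcow2'/>
    </disk>
  </disks>
</domainsnapshot>
"""

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('ostack-controller')

# LIVE keeps the guest running while memory is saved; ATOMIC asks libvirt to
# either fully succeed or leave the domain untouched.
flags = (libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_LIVE |
         libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_ATOMIC)
snap = dom.snapshotCreateXML(SNAPSHOT_XML, flags)
print(snap.getName())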


Hope that helps.

-- 
/kashyap

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] os-cloud-config ssh access to cloud

2014-03-11 Thread Adam Young

On 03/11/2014 05:25 AM, Dmitry Mescheryakov wrote:

For what it's worth in Sahara (former Savanna) we inject the second
key by userdata. I.e. we add
echo ${public_key} >> ${user_home}/.ssh/authorized_keys

to the other stuff we do in userdata.

Dmitry
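
For illustration, driving that userdata trick through python-novaclient might
look roughly like the sketch below (client call style for that era; the key
path and names are assumptions, and the guest still needs cloud-init or
similar to execute the userdata):

from novaclient.v1_1 import client

# Hypothetical second public key appended at boot via userdata; the target
# user's home directory is an assumption for the example.
USERDATA = """#!/bin/bash
echo "ssh-rsa AAAA...extra-key" >> /home/cloud-user/.ssh/authorized_keys
"""

nova = client.Client('admin', 'password', 'demo',
                     'http://keystone.example.com:5000/v2.0')
nova.servers.create(name='sahara-node',
                    image='<image-id>',
                    flavor='<flavor-id>',
                    key_name='primary-key',  # key injected by Nova itself
                    userdata=USERDATA)       # second key added by cloud-init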

2014-03-10 17:10 GMT+04:00 Jiří Stránský ji...@redhat.com:

On 7.3.2014 14:50, Imre Farkas wrote:

On 03/07/2014 10:30 AM, Jiří Stránský wrote:

Hi,

there's one step in cloud initialization that is performed over SSH --
calling keystone-manage pki_setup. Here's the relevant code in
keystone-init [1], here's a review for moving the functionality to
os-cloud-config [2].


You really should not be doing this.  I should never have written
pki_setup:  it is a developer's tool:  use a real CA and a real certificate.




The consequence of this is that Tuskar will need passwordless ssh key to
access overcloud controller. I consider this suboptimal for two reasons:

* It creates another security concern.

* AFAIK nova is only capable of injecting one public SSH key into
authorized_keys on the deployed machine, which means we can either give
it Tuskar's public key and allow Tuskar to initialize overcloud, or we
can give it admin's custom public key and allow admin to ssh into
overcloud, but not both. (Please correct me if i'm mistaken.) We could
probably work around this issue by having Tuskar do the user key
injection as part of os-cloud-config, but it's a bit clumsy.


This goes outside the scope of my current knowledge, i'm hoping someone
knows the answer: Could pki_setup be run by combining powers of Heat and
os-config-refresh? (I presume there's some reason why we're not doing
this already.) I think it would help us a good bit if we could avoid
having to SSH from Tuskar to overcloud.


Yeah, it came up a couple of times on the list. The current solution is
because if you have an HA setup, the nodes can't decide on their own
which one should run pki_setup.
Robert described this topic and why it needs to be initialized
externally during a weekly meeting in last December. Check the topic
'After heat stack-create init operations (lsmola)':

http://eavesdrop.openstack.org/meetings/tripleo/2013/tripleo.2013-12-17-19.02.log.html


Thanks for the reply Imre. Yeah i vaguely remember that meeting :)

I guess to do HA init we'd need to pick one of the controllers and run the
init just there (set some parameter that would then be recognized by
os-refresh-config). I couldn't find if Heat can do something like this on
it's own, probably we'd need to deploy one of the controller nodes with
different parameter set, which feels a bit weird.

Hmm so unless someone comes up with something groundbreaking, we'll probably
keep doing what we're doing. Having the ability to inject multiple keys to
instances [1] would help us get rid of the Tuskar vs. admin key issue i
mentioned in the initial e-mail. We might try asking a fellow Nova developer
to help us out here.


Jirka

[1] https://bugs.launchpad.net/nova/+bug/917850


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Edit subnet in workflows - ip_version hidden?

2014-03-11 Thread Abishek Subramanian (absubram)
Hi,

I had a question regarding the
dashboards/project/networks/subnets/workflows.py
file and in particular the portion of the ip_version field.

It is marked as a hidden input field for the update subnet class with this
note.

# NOTE(amotoki): When 'disabled' attribute is set for the ChoiceField
# and ValidationError is raised for POST request, the initial value of
# the ip_version ChoiceField is not set in the re-displayed form
# As a result, 'IPv4' is displayed even when IPv6 is used if
# ValidationError is detected. In addition 'required=True' check
complains
# when re-POST since the value of the ChoiceField is not set.
# Thus now I use HiddenInput for the ip_version ChoiceField as a work
# around.



Can I get a little more context to this please?
I'm not sure I understand why it says this field is always displayed as
IPv4.
Is this still the case? Adding some debug logs, I seem to see that the
ip_version is correctly being detected as 4 or 6 as the case may be.

Thanks!



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-11 Thread Tim Bell

If the deleted column is removed, how would the 'undelete' functionality be
provided? This saves operators when user accidents occur, since restoring the
whole database to a point in time affects the other tenants too.

Tim

 Hi all,
 
  I've never understood why we treat the DB as a LOG (keeping deleted == 0 
  records around) when we should just use a LOG (or
 similar system) to begin with instead.
 
 I can't agree more with you! Storing deleted records in tables is hardly 
 usable, bad for performance (as it makes tables and indexes
 larger) and it probably covers a very limited set of use cases (if
 any) of OpenStack users.
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Edit subnet in workflows - ip_version hidden?

2014-03-11 Thread Radomir Dopieralski
On 11/03/14 15:52, Abishek Subramanian (absubram) wrote:
 Hi,
 
 I had a question regarding the
 dashboards/project/networks/subnets/workflows.py
 file and in particular the portion of the ip_version field.
 
 It is marked as a hidden input field for the update subnet class with this
 note.
 
 # NOTE(amotoki): When 'disabled' attribute is set for the ChoiceField
 # and ValidationError is raised for POST request, the initial value of
 # the ip_version ChoiceField is not set in the re-displayed form
 # As a result, 'IPv4' is displayed even when IPv6 is used if
 # ValidationError is detected. In addition 'required=True' check
 complains
 # when re-POST since the value of the ChoiceField is not set.
 # Thus now I use HiddenInput for the ip_version ChoiceField as a work
 # around.
 
 
 
 Can I get a little more context to this please?
 I'm not sure I understand why it says this field always is displayed as
 IPv4.
 Is this still the case? Adding some debug logs I seem to see that the
 ipversion is correctly being detected as 4 or 6 as the case may be.

Some browsers (Chrome, iirc) will not submit the values from form fields
that are disabled. That means, that when re-displaying this form
(after an error in any other field, for example), that field's value
will be missing, and the browser will happily display the first option,
which is ipv4.

Another solution could perhaps be to use readonly instead of disabled.

-- 
Radomir Dopieralski


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-11 Thread Tom Creighton
When the Designate team had their mini-summit, they had an open Google Hangout 
for remote participants.  We could even have an open conference bridge if you 
are not partial to video conferencing.  With the issue of inclusion solved, 
let’s focus on a date that is good for the team!

Cheers,

Tom Creighton


On Mar 10, 2014, at 4:10 PM, Edgar Magana emag...@plumgrid.com wrote:

 Eugene,
 
 I have a few arguments why I believe this is not 100% inclusive:
   • Is the foundation involved in this process? How? What is the budget?
 Who is responsible from the foundation side?
   • If somebody has already made travel arrangements, it won't be possible to
 make changes at no cost.
   • Staying extra days in a different city could impact anyone's budget.
   • As an OpenStack developer, I want to understand why the summit is not
 enough for deciding the next steps for each project. If that is the case, I
 would prefer to make changes to the organization of the summit instead of
 creating mini-summits all around!
 I could continue, but I think these are good enough.
 
 I could agree with your point about previous summits being distracting for
 developers; this is why this time the OpenStack foundation is trying very
 hard to allocate specific days for the conference and specific days for the
 summit.
 The point on which I totally agree with you is that we SHOULD NOT have sessions
 about work that will be done no matter what!  Those are just a waste of good
 time that could be invested in very interesting discussions about topics that
 are still not clear.
 I would recommend that you express this opinion to Mark. He is the right guy
 to decide which sessions will bring interesting discussions and which ones
 will be just declarations of intent.
 
 Thanks,
 
 Edgar
 
 From: Eugene Nikanorov enikano...@mirantis.com
 Reply-To: OpenStack List openstack-dev@lists.openstack.org
 Date: Monday, March 10, 2014 10:32 AM
 To: OpenStack List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?
 
 Hi Edgar,
 
 I'm neutral to the suggestion of mini summit at this point. 
 Why do you think it will exclude developers? 
 If we keep it 1-3 days prior to OS Summit in Atlanta (e.g. in the same city) 
 that would allow anyone who joins OS Summit to save on extra travelling.
 The OS Summit itself is too distracting to have really productive discussions,
 unless you're missing the sessions and spending the time discussing.
 For instance, design sessions are basically only good for declarations of intent,
 but not for real discussion of a complex topic at a meaningful level of detail.
 
 What would be your suggestions to make this more inclusive? 
 I think the time and place is the key here - hence Atlanta and few days prior 
 OS summit.
 
 Thanks,
 Eugene.
 
 
 
 On Mon, Mar 10, 2014 at 10:59 PM, Edgar Magana emag...@plumgrid.com wrote:
 Team,
 
 I found that having a mini-summit with a very short notice means excluding
 a lot of developers of such an interesting topic for Neutron.
 The OpenStack summit is the opportunity for all developers to come
 together and discuss the next steps, there are many developers that CAN
 NOT afford another trip for a special summit. I am personally against
 that and I do support Mark's proposal of having all the conversation over
 IRC and mailing list.
 
 Please, do not start excluding people that won't be able to attend another
 face-to-face meeting besides the summit. I believe that these are the
 little things that make an open source community weak if we do not control
 it.
 
 Thanks,
 
 Edgar
 
 
 On 3/6/14 9:51 PM, Mark McClain mmccl...@yahoo-inc.com wrote:
 
 
 On Mar 6, 2014, at 4:31 PM, Jay Pipes jaypi...@gmail.com wrote:
 
  On Thu, 2014-03-06 at 21:14 +, Youcef Laribi wrote:
  +1
 
  I think if we can have it before the Juno summit, we can take
  concrete, well thought-out proposals to the community at the summit.
 
  Unless something has changed starting at the Hong Kong design summit
  (which unfortunately I was not able to attend), the design summits have
  always been a place to gather to *discuss* and *debate* proposed
  blueprints and design specs. It has never been about a gathering to
  rubber-stamp proposals that have already been hashed out in private
  somewhere else.
 
 You are correct that is the goal of the design summit.  While I do think
 it is wise to discuss the next steps with LBaaS at this point in time, I
 am not a proponent of in person mini-design summits.  Many contributors
 to LBaaS are distributed all over the globe, and scheduling a mini
 summit with short notice will exclude valuable contributors to the team.
 I¹d prefer to see an open process with discussions on the mailing list
 and specially scheduled IRC meetings to discuss the ideas.
 
 mark
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

[openstack-dev] [neutron] Difficult to understand message when using incorrect role against object in Neutron

2014-03-11 Thread Sudipta Biswas3
Hi all,

I'm hitting a scenario where a user runs an action against an object in
neutron for which they don't have the authority to perform the
action (perhaps their role allows read of the object, but not update). The
following is returned to the user when such an action is performed:
"The resource could not be found."  This can be confusing to users.  For
example, a basic user may not have the privilege to edit a network,
attempts doing so, and ends up getting the "resource not found" message,
even though they have read privileges.

This is a confusing message because the object they just read in is now
reported as not existing. This is not true; the root issue is that
they do not have authority over it. One can argue that for security reasons,
we should state that the object does not exist. However, it creates an odd
scenario where you have certain roles that can read an object, but then
not create/update/delete it.

I have filed a community bug for the same: 
https://bugs.launchpad.net/neutron/+bug/1290895

I'm proposing that we change the message to "The resource could not be
found or the user's role does not have sufficient privileges to run the
operation."

I'm sending to the mailing list to see if there are any discussion points 
against making this change.

Thanks,
Sudipto
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Private flavors

2014-03-11 Thread Baldassin, Santiago B
Hi everyone,

I'm writing to you because I noticed that horizon throws an error when a
private flavor is created and the current project is added to the flavor
access list. The problem is that when a non-public flavor is created, nova
adds the current project to the flavor access list. So when horizon adds the
current project, nova throws an exception saying that the project is already
added to the flavor.

I created the following bug to document the problem: 
https://bugs.launchpad.net/horizon/+bug/1286297

I think that when a private flavor is created, horizon should not try to add
the current project, since it was already added by nova. Moreover, we should
include a message explaining that if the flavor is private, the current project
will be added to the access list automatically.
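
A rough sketch of that guard (function and field names here are assumptions
for illustration, not the actual workflow code):

from openstack_dashboard import api


def grant_flavor_access(request, flavor, data):
    """Grant tenants access to a newly created flavor, skipping the
    creator's project for private flavors since nova adds it itself."""
    tenant_ids = set(data.get('flavor_access', []))
    if not data.get('is_public', True):
        # Re-adding the creating project would make nova raise a Conflict.
        tenant_ids.discard(request.user.project_id)
    for tenant_id in tenant_ids:
        api.nova.add_tenant_to_flavor(request, flavor.id, tenant_id)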

Thoughts?


Santiago B. Baldassin
ASDC Argentina
Software Development Center
Email: santiago.b.baldas...@intel.com
Save a tree. Print only when necessary.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Edit subnet in workflows - ip_version hidden?

2014-03-11 Thread Abishek Subramanian (absubram)
Thanks Radomir.

Yes, I've changed it to readonly. But I just wanted to double check
that I didn't end up breaking something elsewhere :)

Although - how up to date is this code?

These are the actual lines of code -

# NOTE(amotoki): When 'disabled' attribute is set for the ChoiceField
# and ValidationError is raised for POST request, the initial value of
# the ip_version ChoiceField is not set in the re-displayed form
# As a result, 'IPv4' is displayed even when IPv6 is used if
# ValidationError is detected. In addition 'required=True' check
complains
# when re-POST since the value of the ChoiceField is not set.
# Thus now I use HiddenInput for the ip_version ChoiceField as a work
# around.
ip_version = forms.ChoiceField(choices=[(4, 'IPv4'), (6, 'IPv6')],
                               #widget=forms.Select(
                               #    attrs={'disabled': 'disabled'}),
                               widget=forms.HiddenInput(),
                               label=_("IP Version"))




I don't think ip_version even has an attribute or an option to be set to
'disabled'.
Is this from an old version where the create side got fixed but the update
side was forgotten about?


On 3/11/14 11:30 AM, Radomir Dopieralski openst...@sheep.art.pl wrote:

On 11/03/14 15:52, Abishek Subramanian (absubram) wrote:
 Hi,
 
 I had a question regarding the
 dashboards/project/networks/subnets/workflows.py
 file and in particular the portion of the ip_version field.
 
 It is marked as a hidden input field for the update subnet class with
this
 note.
 
 # NOTE(amotoki): When 'disabled' attribute is set for the ChoiceField
 # and ValidationError is raised for POST request, the initial value
of
 # the ip_version ChoiceField is not set in the re-displayed form
 # As a result, 'IPv4' is displayed even when IPv6 is used if
 # ValidationError is detected. In addition 'required=True' check
 complains
 # when re-POST since the value of the ChoiceField is not set.
 # Thus now I use HiddenInput for the ip_version ChoiceField as a
work
 # around.
 
 
 
 Can I get a little more context to this please?
 I'm not sure I understand why it says this field always is displayed as
 IPv4.
 Is this still the case? Adding some debug logs I seem to see that the
 ipversion is correctly being detected as 4 or 6 as the case may be.

Some browsers (Chrome, iirc) will not submit the values from form fields
that are disabled. That means, that when re-displaying this form
(after an error in any other field, for example), that field's value
will be missing, and the browser will happily display the first option,
which is ipv4.

Another solution could be perhaps using readonly instead of disabled.

-- 
Radomir Dopieralski


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][QOS]How is the BP about ml-qos going?

2014-03-11 Thread Collins, Sean
On Mon, Mar 10, 2014 at 11:13:47PM EDT, Yuzhou (C) wrote:
 Hi stackers,
 
    The bp about ml2-qos has been in code review for a long time.
  Why hasn't the QoS implementation been merged into neutron master?

For a while, I did not believe that this API extension would ever 
get merged, so I continued to do improvements and bug fixes and push
them to the Comcast GitHub repo for Neutron, to support our deployment,
but I did not update the reviews in Gerrit.

I recently revived the reviews - and have pushed some updates. I hope to
get this merged for the J release, and have scheduled a summit session
for Atlanta to discuss.

 Can anyone who knows the history help me, or give me a hint on how to find the
 discussion mail?

Search for posts tagged [QoS] - that should get most of them.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat]Policy on upgades required config changes

2014-03-11 Thread Steven Dake

On 03/11/2014 07:35 AM, Sean Dague wrote:

On 03/11/2014 10:15 AM, Steven Dake wrote:

On 03/11/2014 04:04 AM, Sean Dague wrote:

On 03/04/2014 12:39 PM, Steven Hardy wrote:

Hi all,

As some of you know, I've been working on the instance-users blueprint[1].

This blueprint implementation requires three new items to be added to the
heat.conf, or some resources (those which create keystone users) will not
work:

https://review.openstack.org/#/c/73978/
https://review.openstack.org/#/c/76035/

So on upgrade, the deployer must create a keystone domain and domain-admin
user, add the details to heat.conf, as already been done in devstack[2].

The changes required for this to work have already landed in devstack, but
it was discussed today and Clint suggested this may be unacceptable
upgrade behavior - I'm not sure, so I'm looking for guidance/comments.

My plan was/is:
- Make devstack work
- Talk to tripleo folks to assist in any transition (what prompted this
   discussion)
- Document the upgrade requirements in the Icehouse release notes so the
   wider community can upgrade from Havana.
- Try to give a heads-up to those maintaining downstream heat deployment
   tools (e.g stackforge/puppet-heat) that some tweaks will be required for
   Icehouse.

However some have suggested there may be an openstack-wide policy which
requires peoples old config files to continue working indefinitely on
upgrade between versions - is this right?  If so where is it documented?

This is basically enforced in code in grenade, the language for this
actually got lost in the project requirements discussion in the TC, I'll
bring that back in the post graduation requirements discussion we're
having again.

The issue is - Heat still doesn't materially participate in grenade.
Heat is substantially far behind the other integrated projects in its
integration with the upstream testing. Only Monday did we finally start
gating on a real unit of work for Heat (the heat-slow jobs). If I were
letter grading projects right now on upstream testing I'd give Nova an
A, Neutron a C (still no full run, no working grenade), and Heat a D.

Sean,

I agree the Heat community hasn't done a bang-up job of getting
integrated with Tempest.  We only have 50 functional tests implemented.
The community clearly needs to do more and provide better functional
coverage with Heat.

It is inappropriate to say "Only Monday did we finally start gating"
because that was a huge move in the right direction.  It took a lot of
effort and should not be so easily dismissed.  Clearly the community,
and especially the core developers, are making an effort.  Keep in mind
we have to balance upstream development work, answering user questions,
staying on top of a 5 page review queue, maintaining relationships with, and
keeping track of, the various integrated projects which are consuming Heat as
a building block, plus all of the demands of our day jobs.

I agree it was a huge step in the right direction. It's not clear to me
why expressing that this was very recent was inappropriate.

Recent conversations have made me realize that a lot of the Heat core
team doesn't realize that Heat's participation in upstream gating is
below average, so I decided to be blunt about it. Because it was only
after being blunt about that with the Neutron team in Hong Kong did we
get any real motion on it (Neutron has seen huge gains this cycle).

All the integrated projects have the same challenges.

Upstream QA is really important. It not only protects heat from itself,
it protects it from changes in other projects.


We just don't have enough bandwidth on the core team to tackle writing
all of the tempest test cases ourselves.  We have made an effort to
distribute this work to the overall heat community via wishlist bugs in
Heat which several new folks have picked up.  I hope to see our coverage
improve over time, especially with more advanced scenario tests through
this effort.

Bandwidth is a problem for everyone. It's a matter of priorities. The
fact that realistic upstream gating is considered wishlist priority in
from a Heat perspective is something I find troubling.

Sean,

Unfortunately the root of the problem is there is no way to track in one 
place the suggested test cases for projects.  The Tempest community 
doesn't want test cases in the tempest launchpad tracker. At one point 
we were told to track the work using etherpads, which is absolutely 
ridiculous.


So we must resort to using wishlist priority.  In all cases, a user bug
that has a negative impact on the operation of Heat is higher priority than
implementing functional testing.  I get that if we had functional
testing, maybe that bug wouldn't have been filed in the first place.
However, we are in a situation where we already have the bugs, and they
already need to be addressed.


If the test cases were stored in tempest launchpad, they could be
properly prioritized from an upstream-testing POV.  The purpose of the
Heat launchpad tracker is to 

Re: [openstack-dev] No route matched for POST

2014-03-11 Thread Vijay B
Hi Aaron!

Yes, attaching the code diffs of the client and server. The diff
0001-Frist-commit-to-add-tag-create-CLI.patch needs to be applied on
python-neutronclient's master branch, and the diff
0001-Adding-a-tag-extension.patch needs to be applied on neutron's
stable/havana branch. After restarting q-svc, please run the CLI `neutron
tag-create --name tag1 --key key1 --value val1` to test it out.  Thanks for
offering to take a look at this!

Regards,
Vijay
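
For reference, a bare-bones Havana-era extension skeleton along these lines
looks roughly like the following. This is illustrative only; the resource
names and attribute map are assumptions, not the attached patch:

# neutron/extensions/tag.py -- illustrative skeleton only.
from neutron.api import extensions
from neutron.api.v2 import base
from neutron import manager

RESOURCE_ATTRIBUTE_MAP = {
    'tags': {
        'id': {'allow_post': False, 'allow_put': False, 'is_visible': True},
        'name': {'allow_post': True, 'allow_put': True, 'is_visible': True},
        'key': {'allow_post': True, 'allow_put': True, 'is_visible': True},
        'value': {'allow_post': True, 'allow_put': True, 'is_visible': True},
    },
}


class Tag(extensions.ExtensionDescriptor):

    @classmethod
    def get_name(cls):
        return "Tags"

    @classmethod
    def get_alias(cls):
        return "tags"

    @classmethod
    def get_description(cls):
        return "Arbitrary key/value tags"

    @classmethod
    def get_namespace(cls):
        return "http://docs.openstack.org/ext/tags/api/v1.0"

    @classmethod
    def get_updated(cls):
        return "2014-03-11T00:00:00-00:00"

    @classmethod
    def get_resources(cls):
        plugin = manager.NeutronManager.get_plugin()
        params = RESOURCE_ATTRIBUTE_MAP['tags']
        controller = base.create_resource('tags', 'tag', plugin, params)
        # The collection name passed here must match the URL the client
        # POSTs to (/v2.0/tags), otherwise the router answers with
        # "No route matched for POST".
        return [extensions.ResourceExtension('tags', controller)]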


On Mon, Mar 10, 2014 at 10:10 PM, Aaron Rosen aaronoro...@gmail.com wrote:

 Hi Vijay,

 I think you'd have to post you're code for anyone to really help you.
 Otherwise we'll just be taking shots in the dark.

 Best,

 Aaron


 On Mon, Mar 10, 2014 at 7:22 PM, Vijay B os.v...@gmail.com wrote:

 Hi,

 I'm trying to implement a new extension API in neutron, but am running
  into a "No route matched for POST" error on the neutron service.

 I have followed the instructions in the link
 https://wiki.openstack.org/wiki/NeutronDevelopment#API_Extensions when
 trying to implement this extension.

 The extension doesn't depend on any plug in per se, akin to security
 groups.

 I have defined a new file in neutron/extensions/, called Tag.py, with a
 class Tag extending class extensions.ExtensionDescriptor, like the
 documentation requires. Much like many of the other extensions already
 implemented, I define my new extension as a dictionary, with fields like
 allow_post/allow_put etc, and then pass this to the controller. I still
 however run into a no route matched for POST error when I attempt to fire
 my CLI to create a tag. I also edited the ml2 plugin file
 neutron/plugins/ml2/plugin.py to add tags to
 _supported_extension_aliases, but that didn't resolve the issue.

  It looks like I'm missing something quite minor, causing the new
 extension to not get registered, but I'm not sure what.

 I can provide more info/patches if anyone would like to take a look, and
 it would be very much appreciated if someone could help me out with this.

 Thanks!
 Regards,
 Vijay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




0001-Adding-a-tag-extension.patch
Description: Binary data


0001-Frist-commit-to-add-tag-create-CLI.patch
Description: Binary data
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] testr help

2014-03-11 Thread Doug Hellmann
On Mon, Mar 10, 2014 at 7:20 PM, Zane Bitter zbit...@redhat.com wrote:

 On 10/03/14 16:04, Clark Boylan wrote:

 On Mon, Mar 10, 2014 at 11:31 AM, Zane Bitter zbit...@redhat.com wrote:

 Thanks Clark for this great write-up. However, I think the solution to
 the
 problem in question is richer commands and better output formatting, not
 discarding information.


 On 07/03/14 16:30, Clark Boylan wrote:


 But running tests in parallel introduces some fun problems. Like where
 do you send logging and stdout output. If you send it to the console
 it will be interleaved and essentially useless. The solution to this
 problem (for which I am probably to blame) is to have each test
 collect the logging, stdout, and stderr associated to that test and
 attach it to that tests subunit reporting. This way you get all of the
 output associated with a single test attached to that test and don't
 have crazy interleaving that needs to be demuxed. The capturing of



 This is not really a problem unique to parallel test runners. Printing to
 the console is just not a great way to handle stdout/stderr in general
 because it messes up the output of the test runner, and nose does exactly
 the same thing as testr in collecting them - except that nose combines
 messages from the 3 sources and prints the output for human consumption,
 rather than in separate groups surrounded by lots of {{{random braces}}}.

  Except nose can make them all the same file descriptor and let
 everything multiplex together. Nose isn't demuxing arbitrary numbers
 of file descriptors from arbitrary numbers of processes.


 Can't each subunit process do the same thing?

 As a user, here's how I want it to work:
  - Each subunit process works like nose - multiplexing the various streams
 of output together and associating it with a particular test - except that
 nothing is written to the console but instead returned to testr in subunit
 format.
  - testr reads the subunit data and saves it to the test repository.
  - testr prints a report to the console based on the data it just
 received/saved.

 How it actually seems to work:
  - A magic pixie creates a TestCase class with a magic incantation to
 capture your stdout/stderr/logging without breaking other test runners.
  - Or they don't! You're hosed. The magic incantation is undocumented.
  - You change all of your TestCases to inherit from the class with the
 magic pixie dust.
  - Each subunit process associates the various streams of output (if you
 set it up to) with a particular test, but keeps them separate so that if
 you want to figure out the order of events you have to direct them all to
 the same channel - which, in practice, means you can only use logging
 (since some of the events you are interested in probably already exist in
 the code as logs).
   - when you want to debug a test, you have to do all the tedious logging
 setup if it doesn't already exist in the file. It probably won't, because
 flake8 would have made you delete it unless it's being used already.
  - testr reads the subunit data and saves it to the test repository.
  - testr prints a report to the console based on the data it just
 received/saved, though parts of it look like a raw data dump.

 While there may be practical reasons why it currently works like the
 latter, I would submit that there is no technical reason it could not work
 like the former. In particular, there is nothing about the concept of
 running the tests in parallel that would prevent it, just as there is
 nothing about what nose does that would prevent two copies of nose from
 running at the same time on different sets of tests.


  this data is toggleable in the test suite using environment variables
 and is off by default so that when you are not using testr you don't
 get this behavior [0]. However we seem to have neglected log capture
 toggles.



 Oh wow, there is actually a way to get the stdout and stderr? Fantastic!
 Why
 on earth are these disabled?

  See above, testr has to deal with multiple writers to stdout and
 stderr, you really don't want them all going to the same place when
 using testr (which is why stdout and stderr are captured when running
 testr but not otherwise).


 Ah, OK, I think I understand now. testr passes the environment variables
 automatically, so you only have to know the magic incantation at the time
 you're writing the test, not when you're running it.
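
For concreteness, the "magic incantation" being referred to is roughly the
following base-class setup. This is a sketch assuming the conventional
OS_STDOUT_CAPTURE / OS_STDERR_CAPTURE / OS_LOG_CAPTURE environment variables;
exact names and details vary per project:

import logging
import os

import fixtures
import testtools


def _enabled(name):
    return os.environ.get(name) in ('True', 'true', '1', 'yes')


class BaseTestCase(testtools.TestCase):
    def setUp(self):
        super(BaseTestCase, self).setUp()
        if _enabled('OS_STDOUT_CAPTURE'):
            stdout = self.useFixture(fixtures.StringStream('stdout')).stream
            self.useFixture(fixtures.MonkeyPatch('sys.stdout', stdout))
        if _enabled('OS_STDERR_CAPTURE'):
            stderr = self.useFixture(fixtures.StringStream('stderr')).stream
            self.useFixture(fixtures.MonkeyPatch('sys.stderr', stderr))
        if _enabled('OS_LOG_CAPTURE'):
            # Captured log output ends up attached to the test's details,
            # i.e. the per-test blob that testr stores in the repository.
            self.useFixture(fixtures.FakeLogger(level=logging.DEBUG))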


  Please, please, please don't turn off the logging too. That's the only
 tool
 left for debugging now that stdout goes into a black hole.

  Logging goes into the same black hole today, I am suggesting that we
 make this toggleable like we have made stdout and stderr capturing
 toggleable. FWIW this isn't a black hole it is all captured on disk
 and you can refer back to it at any time (the UI around doing this
 could definitely be better though).


 Hmm, now that you mention it, I remember Clint did the setup work in Heat
 to get the logging working. So maybe we 

Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-11 Thread Mike Wilson
Hangouts worked well at the nova mid-cycle meetup. Just make sure you have
your network situation sorted out beforehand. Bandwidth and firewalls are
what come to mind immediately.

-Mike


On Tue, Mar 11, 2014 at 9:34 AM, Tom Creighton
tom.creigh...@rackspace.comwrote:

 When the Designate team had their mini-summit, they had an open Google
 Hangout for remote participants.  We could even have an open conference
 bridge if you are not partial to video conferencing.  With the issue of
 inclusion solved, let's focus on a date that is good for the team!

 Cheers,

 Tom Creighton


 On Mar 10, 2014, at 4:10 PM, Edgar Magana emag...@plumgrid.com wrote:

  Eugene,
 
   I have a few arguments why I believe this is not 100% inclusive:
 * Is the foundation involved in this process? How? What is the
  budget? Who is responsible from the foundation side?
 * If somebody has already made travel arrangements, it won't be
  possible to make changes at no cost.
 * Staying extra days in a different city could impact anyone's
  budget.
 * As an OpenStack developer, I want to understand why the summit is
  not enough for deciding the next steps for each project. If that is the
  case, I would prefer to make changes to the organization of the summit
  instead of creating mini-summits all around!
   I could continue, but I think these are good enough.
 
  I could agree with your point about previous summits being distractive
 for developers, this is why this time the OpenStack foundation is trying
 very hard to allocate specific days for the conference and specific days
 for the summit.
   The point on which I totally agree with you is that we SHOULD NOT have
 session about work that will be done no matter what!  Those are just a
 waste of good time that could be invested in very interesting discussions
 about topics that are still not clear.
  I would recommend that you express this opinion to Mark. He is the right
 guy to decide which sessions will bring interesting discussions and which
 ones will be just a declaration of intents.
 
  Thanks,
 
  Edgar
 
  From: Eugene Nikanorov enikano...@mirantis.com
  Reply-To: OpenStack List openstack-dev@lists.openstack.org
  Date: Monday, March 10, 2014 10:32 AM
  To: OpenStack List openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?
 
  Hi Edgar,
 
  I'm neutral to the suggestion of mini summit at this point.
  Why do you think it will exclude developers?
  If we keep it 1-3 days prior to OS Summit in Atlanta (e.g. in the same
 city) that would allow anyone who joins OS Summit to save on extra
 travelling.
   The OS Summit itself is too distracting to have really productive
  discussions, unless you're missing the sessions and spending the time discussing.
   For instance, design sessions are basically only good for declarations of
  intent, but not for real discussion of a complex topic at a meaningful
  level of detail.
 
  What would be your suggestions to make this more inclusive?
  I think the time and place is the key here - hence Atlanta and few days
 prior OS summit.
 
  Thanks,
  Eugene.
 
 
 
  On Mon, Mar 10, 2014 at 10:59 PM, Edgar Magana emag...@plumgrid.com
 wrote:
  Team,
 
  I found that having a mini-summit with a very short notice means
 excluding
  a lot of developers of such an interesting topic for Neutron.
  The OpenStack summit is the opportunity for all developers to come
  together and discuss the next steps, there are many developers that CAN
  NOT afford another trip for a special summit. I am personally against
  that and I do support Mark's proposal of having all the conversation
 over
  IRC and mailing list.
 
  Please, do not start excluding people that won't be able to attend
 another
  face-to-face meeting besides the summit. I believe that these are the
  little things that make an open source community weak if we do not
 control
  it.
 
  Thanks,
 
  Edgar
 
 
  On 3/6/14 9:51 PM, Mark McClain mmccl...@yahoo-inc.com wrote:
 
  
  On Mar 6, 2014, at 4:31 PM, Jay Pipes jaypi...@gmail.com wrote:
  
   On Thu, 2014-03-06 at 21:14 +, Youcef Laribi wrote:
   +1
  
   I think if we can have it before the Juno summit, we can take
   concrete, well thought-out proposals to the community at the summit.
  
   Unless something has changed starting at the Hong Kong design summit
   (which unfortunately I was not able to attend), the design summits
 have
   always been a place to gather to *discuss* and *debate* proposed
   blueprints and design specs. It has never been about a gathering to
   rubber-stamp proposals that have already been hashed out in private
   somewhere else.
  
  You are correct that is the goal of the design summit.  While I do
 think
  it is wise to discuss the next steps with LBaaS at this point in time,
 I
  am not a proponent of in person mini-design summits.  Many contributors
  to LBaaS are distributed all over the globe, and scheduling a mini
  summit with short notice will exclude 

[openstack-dev] [Ceilometer]Collector's performance

2014-03-11 Thread Nadya Privalova
Hi team!

Last week we were working on notification problem in ceilometer during
tempest tests creation. Tests for notification passed successfully on
Postgres but failed on MySQL. This made us start investigations and this
email contains some results.

As it turned out, tempest as it is amounts to performance testing
for Ceilometer. It contains 2057 tests. In almost all tests, OpenStack
resources are created and deleted: images, instances, volumes. E.g.
during instance creation nova sends 9 notifications. And all the tests are
running in parallel for about 40 minutes.
From the ceilometer-collector logs we may find a very useful message:

2014-03-10 09:42:41.356
http://logs.openstack.org/36/64136/20/check/check-tempest-dsvm-full/e361520/logs/screen-ceilometer-collector.txt.gz#_2014-03-10_09_42_41_356
22845 DEBUG ceilometer.dispatcher.database
[req-16ea95c5-6454-407a-9c64-94d5ef900c9e - - - - -] metering data
storage.objects.outgoing.bytes for b7a490322e65422cb1129b13b49020e6 @
2014-03-10T09:34:31.090107:

So the collector starts to process_metering_data in the dispatcher only at 9:42,
but nova sent it at 9:34. To look at the whole picture, please take a look at
picture [1]. It illustrates the time difference based on this message in the logs.
Besides, I decided to take a look at the difference between when the RPC publisher
sends the message and when the collector receives it. To create this
plot I've parsed lines like the one below from the anotification log:

2014-03-10 09:25:49.333
http://logs.openstack.org/36/64136/20/check/check-tempest-dsvm-full/e361520/logs/screen-ceilometer-anotification.txt.gz#_2014-03-10_09_25_49_333
22833 DEBUG ceilometer.openstack.common.rpc.amqp [-] UNIQUE_ID is
683dd3f130534b9fbb5606aef862b83d.


After that I found the corresponding id in collector log:

2014-03-10 09:25:49.352
http://logs.openstack.org/36/64136/20/check/check-tempest-dsvm-full/e361520/logs/screen-ceilometer-collector.txt.gz#_2014-03-10_09_25_49_352
22845 DEBUG ceilometer.openstack.common.rpc.amqp [-] received
{u'_context_domain': None, u'_context_request_id':
u'req-0a5fafe6-e097-4f90-a68a-a91da1cff22c',

u'args': {u'data': [...,
 u'message_id': u'f7ad63fc-a835-11e3-8223-bc764e205385',
u'counter_type': u'gauge'}]}, u'_context_read_only': False,
u'_unique_id': u'683dd3f130534b9fbb5606aef862b83d',

u'_context_user_identity': u'- - - - -', u'_context_instance_uuid':
None, u'_context_show_deleted': False, u'_context_tenant': None,
u'_context_auth_token': 'SANITIZED',

} _safe_log
/opt/stack/new/ceilometer/ceilometer/openstack/common/rpc/common.py:280

So in the example above we see a time difference of only 20 milliseconds. But
it grows very quickly :( To see this, please take a look at picture [2].
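
For anyone who wants to reproduce the correlation, here is a rough sketch of
the parsing (the log formats and the timestamp slice are assumptions based on
the excerpts above, and the file names are just the local copies of the logs):

import re
from datetime import datetime

TS_FMT = '%Y-%m-%d %H:%M:%S.%f'
UNIQ_RE = re.compile(r"UNIQUE_ID is (\w+)|u'_unique_id': u'(\w+)'")


def unique_id_times(path):
    """Map unique_id -> timestamp of the matching line in one service log."""
    times = {}
    with open(path) as log:
        for line in log:
            match = UNIQ_RE.search(line)
            if match:
                uid = match.group(1) or match.group(2)
                times[uid] = datetime.strptime(line[:23], TS_FMT)
    return times


published = unique_id_times('screen-ceilometer-anotification.txt')
received = unique_id_times('screen-ceilometer-collector.txt')
for uid, sent_at in sorted(published.items()):
    if uid in received:
        print(uid, (received[uid] - sent_at).total_seconds())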

To summarize the pictures:
1. Picture 1: Y axis: number of seconds between nova creating the notification
and the collector retrieving the message. X axis: timestamp
2. Picture 2: Y axis: number of seconds between the publisher publishing the
message and the collector retrieving the message. X axis: timestamp

These pictures are almost the same, and it makes me think that the collector
cannot keep up with a large volume of messages. What do you think about it? Do
you agree, or do you need more evidence, e.g. the number of messages in rabbit
or anything else?
Let's discuss this under the [Ceilometer] topic first; I will create a new thread
about the testing strategy in tempest later, because in these circumstances we
are forced to drop the notification tests we created, and we cannot reduce the
polling interval because that would make everything even worse.

[1]: http://postimg.org/image/r4501bdyb/
[2]: http://postimg.org/image/yy5a1ste1/

Thanks for your attention,
Nadya
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] os-cloud-config ssh access to cloud

2014-03-11 Thread Clint Byrum
Excerpts from Adam Young's message of 2014-03-11 07:50:58 -0700:
 On 03/11/2014 05:25 AM, Dmitry Mescheryakov wrote:
  For what it's worth in Sahara (former Savanna) we inject the second
  key by userdata. I.e. we add
  echo ${public_key}  ${user_home}/.ssh/authorized_keys
 
  to the other stuff we do in userdata.
 
  Dmitry
 
  2014-03-10 17:10 GMT+04:00 Jiří Stránský ji...@redhat.com:
  On 7.3.2014 14:50, Imre Farkas wrote:
  On 03/07/2014 10:30 AM, Jiří Stránský wrote:
  Hi,
 
  there's one step in cloud initialization that is performed over SSH --
  calling keystone-manage pki_setup. Here's the relevant code in
  keystone-init [1], here's a review for moving the functionality to
  os-cloud-config [2].
 
 You really should not be doing this.  I should never have written
 pki_setup:  it is a developer's tool:  use a real CA and a real certificate.
 

This alludes to your point, but also says that keystone-manage can be used:

http://docs.openstack.org/developer/keystone/configuration.html#certificates-for-pki

Seems that some time should be spent making this more clear if for some
reason pki_setup is weak for production use cases. My brief analysis
of the code says that the weakness is that the CA should generally be
kept apart from the CSRs, so that a compromise of a node does not let
an attacker generate certificates for their own keystone service. This
seems like a low-probability attack vector, as compromise of the keystone
machines also means write access to the token backend, and thus no need
to generate one's own tokens (you can just steal all the existing tokens).

I'd like to see it called out in the section above though, so that
users can know what risk they're accepting when they use what looks like a
recommended tool. Another thing would be to log copious warnings when
pki_setup is run that it is not for production usage. That should be
sufficient to scare some diligent deployers into reading the docs closely
and mitigating the risk.
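
A minimal sketch of the kind of warning that could be emitted; the function
name and logging setup here are purely illustrative assumptions, not
keystone's actual code:

import logging

LOG = logging.getLogger(__name__)

def pki_setup(keystone_user, keystone_group):
    # Warn loudly before doing anything else, so the message ends up in every
    # deployer's console and log file.
    LOG.warning("keystone-manage pki_setup is intended for development and "
                "testing only; for production, use certificates issued by a "
                "real CA and keep the CA key off the service nodes.")
    # ... certificate/CA generation would follow here ...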

Anyway, shaking fist at users and devs in -dev for using tools in the
documentation probably _isn't_ going to convince anyone to spend more
time setting up PKI tokens.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] MySQL 5.6 disk-image-builder element

2014-03-11 Thread Lowery, Mathew
My colleague, Ranjitha Vemula, just submitted a trove-integration patch
set to add a MySQL 5.6 disk-image-builder element. Two major hurdles were
faced with this patch set.

1) The manager

The resulting MySQL 5.6 image can be registered using mysql as the
datastore, mysql as the manager, and
trove.guestagent.datastore.mysql.manager.Manager as the class--in other
words, all the same config as MySQL 5.5 except a different image. To
repeat, no trove changes are required.

Since there is no official Ubuntu package for MySQL 5.6, the official
mysql.com Debian package was used.

Several assumptions made by the MySQL 5.5 manager (specifically paths) had
to be worked around.

The following are hard-coded in the my.cnf template and the default values
from MySQL's Debian package for these paths don't match those in the
manager.
* basedir
* pid-file

The following are referenced using absolute paths (that don't match
mysql.com's Debian package).
* /usr/sbin/mysqld

For all of the above path mismatches, a combination of symlinking and
startup-script sed edits was used. Regarding the use of absolute paths to
binaries, the manager sometimes uses binaries from the PATH and sometimes
uses absolute paths. This should probably be consistent one way or the
other, although using the PATH would add flexibility to the manager.
Regarding my.cnf template, should there be a way (e.g. database) to inject
some fundamental path mapping between the image layout and the manager?


2) disk-image-builder elements for multiple versions of a single datastore

The following layout was chosen (after debating whether logic should
instead be added to the existing ubuntu-mysql element):
trove-integration/scripts/files/elements/ubuntu-mysql-5.6/install.d/10-mysql

Paired with Viswa Vurtharkar's patch set
(https://review.openstack.org/#/c/72804/), this element can be
kick-started using:
DATASTORE_VERSION=-5.6 PACKAGES=  ./redstack kick-start mysql

In my understanding, D.I.B. elements should be pretty dumb and the caller
should worry about composing them so this setup seems like the best
approach to me but it leaves ubuntu-mysql untouched. A point made by
hub_cap is that now ubuntu-mysql, similar to ubuntu-guest, would imply
things common to all MySQL images but as of right now, it is as it was
before: a MySQL 5.5 image. So there's that to discuss.

Feedback is appreciated.
Mat


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-11 Thread Mike Wilson
Undeleting things is an important use case in my opinion. We do this in our
environment on a regular basis. In that light I'm not sure that it would be
appropriate just to log the deletion and get rid of the row. I would like
to see it go to an archival table where it is easily restored.

-Mike


On Mon, Mar 10, 2014 at 3:44 PM, Joshua Harlow harlo...@yahoo-inc.comwrote:

  Sounds like a good idea to me.

  I've never understood why we treat the DB as a LOG (keeping deleted == 0
 records around) when we should just use a LOG (or similar system) to begin
 with instead.

  Does anyone use the feature of switching deleted == 1 back to deleted =
 0? Has this worked out for u?

  Seems like some of the feedback on
 https://etherpad.openstack.org/p/operators-feedback-mar14 also suggests
 that this has been a operational pain-point for folks (Tool to delete
 things properly suggestions and such...).

   From: Boris Pavlovic bpavlo...@mirantis.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Monday, March 10, 2014 at 1:29 PM
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org,
 Victor Sergeyev vserge...@mirantis.com
 Subject: [openstack-dev] [all][db][performance] Proposal: Get rid of soft
 deletion (step by step)

   Hi stackers,

  (It's proposal for Juno.)

  Intro:

  Soft deletion means that records from DB are not actually deleted, they
 are just marked as a deleted. To mark record as a deleted we put in
 special table's column deleted record's ID value.

  Issue 1: Indexes  Queries
 We have to add in every query AND deleted == 0 to get non-deleted
 records.
 It produce performance issue, cause we should add it in any index one
 extra column.
 As well it produce extra complexity in db migrations and building queries.

  Issue 2: Unique constraints
 Why we store ID in deleted and not True/False?
  The reason is that we would like to be able to create real DB unique
 constraints and avoid race conditions on insert operation.

  Sample: we Have table (id, name, password, deleted) we would like to put
 in column name only unique value.

  Approach without UC: if count(`select  where name = name`) == 0:
 insert(...)
 (race cause we are able to add new record between )

  Approach with UC: try: insert(...) except Duplicate: ...

  So to add UC we have to add them on (name, deleted). (to be able to make
 insert/delete/insert with same name)

  As well it produce performance issues, because we have to use Complex
 unique constraints on 2  or more columns. + extra code  complexity in db
 migrations.
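
A minimal sketch of the two approaches, assuming a SQLAlchemy model with
(id, name, deleted) columns and a composite unique constraint on
(name, deleted); this is illustration only, not code from any particular
project:

from sqlalchemy import Column, Integer, String, UniqueConstraint, create_engine
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Instance(Base):
    __tablename__ = 'instances'
    __table_args__ = (UniqueConstraint('name', 'deleted'),)
    id = Column(Integer, primary_key=True)
    name = Column(String(255), nullable=False)
    deleted = Column(Integer, nullable=False, default=0)  # set to id on soft delete

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

def create_racy(session, name):
    # "Approach without UC": check-then-insert. Two concurrent callers can both
    # pass the check and insert duplicate live rows.
    if session.query(Instance).filter_by(name=name, deleted=0).count() == 0:
        session.add(Instance(name=name))
        session.commit()

def create_safe(session, name):
    # "Approach with UC": let the database constraint reject duplicates.
    try:
        session.add(Instance(name=name))
        session.commit()
    except IntegrityError:
        session.rollback()  # a live row with this name already exists

with Session(engine) as session:
    create_safe(session, 'vm-1')
    create_safe(session, 'vm-1')  # hits the constraint and rolls back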

  Issue 3: Garbage collector

  It is really hard to make garbage collector that will have good
 performance and be enough common to work in any case for any project.
 Without garbage collector DevOps have to cleanup records by hand, (risk to
 break something). If they don't cleanup DB they will get very soon
 performance issue.

  To put in a nutshell most important issues:
 1) Extra complexity to each select query  extra column in each index
 2) Extra column in each Unique Constraint (worse performance)
 3) 2 Extra column in each table: (deleted, deleted_at)
 4) Common garbage collector is required


  To resolve all these issues we should just remove soft deletion.

  One of approaches that I see is in step by step removing deleted
 column from every table with probably code refactoring.  Actually we have 3
 different cases:

  1) We don't use soft deleted records:
 1.1) Do .delete() instead of .soft_delete()
 1.2) Change query to avoid adding extra deleted == 0 to each query
 1.3) Drop deleted and deleted_at columns

  2) We use soft deleted records for internal stuff e.g. periodic tasks
 2.1) Refactor code somehow: E.g. store all required data by periodic task
 in some special table that has: (id, type, json_data) columns
 2.2) On delete add record to this table
 2.3-5) similar to 1.1, 1.2, 13

  3) We use soft deleted records in API
 3.1) Deprecated API call if it is possible
 3.2) Make proxy call to ceilometer from API
 3.3) On .delete() store info about records in (ceilometer, or somewhere
 else)
 3.4-6) similar to 1.1, 1.2, 1.3

 This is not ready RoadMap, just base thoughts to start the constructive
 discussion in the mailing list, so %stacker% your opinion is very
 important!


  Best regards,
 Boris Pavlovic


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack/GSoC

2014-03-11 Thread Davanum Srinivas
Hi,

Mentors:
* Please click on My Dashboard then Connect with organizations and
request a connection as a mentor (on the GSoC web site -
http://www.google-melange.com/)

Students:
* Please see the Application template you will need to fill in on the GSoC site.
  http://www.google-melange.com/gsoc/org2/google/gsoc2014/openstack
* Please click on My Dashboard then Connect with organizations and
request a connection

Both Mentors and Students:
Let's meet on the #openstack-gsoc channel on Thursday at 9:00 AM EDT / 13:00
UTC for about 30 mins to meet and greet, since the application deadline
is next week. If this time is not convenient, please send me a note
and I'll arrange another time, say on Friday.
http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140313T09p1=43am=30

We need to get an idea of how many slots we need to apply for based on
really strong applications with properly fleshed out project ideas and
mentor support. Hoping the meeting on IRC will nudge the students and
mentors to work towards that goal.

Thanks,
dims

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] os-cloud-config ssh access to cloud

2014-03-11 Thread Clint Byrum
Excerpts from Jiří Stránský's message of 2014-03-10 06:10:46 -0700:
 On 7.3.2014 14:50, Imre Farkas wrote:
  On 03/07/2014 10:30 AM, Jiří Stránský wrote:
  Hi,
 
  there's one step in cloud initialization that is performed over SSH --
  calling keystone-manage pki_setup. Here's the relevant code in
  keystone-init [1], here's a review for moving the functionality to
  os-cloud-config [2].
 
  The consequence of this is that Tuskar will need passwordless ssh key to
  access overcloud controller. I consider this suboptimal for two reasons:
 
  * It creates another security concern.
 
  * AFAIK nova is only capable of injecting one public SSH key into
  authorized_keys on the deployed machine, which means we can either give
  it Tuskar's public key and allow Tuskar to initialize overcloud, or we
  can give it admin's custom public key and allow admin to ssh into
  overcloud, but not both. (Please correct me if i'm mistaken.) We could
  probably work around this issue by having Tuskar do the user key
  injection as part of os-cloud-config, but it's a bit clumsy.
 
 
  This goes outside the scope of my current knowledge, i'm hoping someone
  knows the answer: Could pki_setup be run by combining powers of Heat and
  os-config-refresh? (I presume there's some reason why we're not doing
  this already.) I think it would help us a good bit if we could avoid
  having to SSH from Tuskar to overcloud.
 
  Yeah, it came up a couple times on the list. The current solution is
  because if you have an HA setup, the nodes can't decide on its own,
  which one should run pki_setup.
  Robert described this topic and why it needs to be initialized
  externally during a weekly meeting in last December. Check the topic
  'After heat stack-create init operations (lsmola)':
  http://eavesdrop.openstack.org/meetings/tripleo/2013/tripleo.2013-12-17-19.02.log.html
 
 Thanks for the reply Imre. Yeah i vaguely remember that meeting :)
 
 I guess to do HA init we'd need to pick one of the controllers and run 
 the init just there (set some parameter that would then be recognized by 
 os-refresh-config). I couldn't find if Heat can do something like this 
  on its own, probably we'd need to deploy one of the controller nodes 
 with different parameter set, which feels a bit weird.
 
 Hmm so unless someone comes up with something groundbreaking, we'll 
 probably keep doing what we're doing. Having the ability to inject 
 multiple keys to instances [1] would help us get rid of the Tuskar vs. 
 admin key issue i mentioned in the initial e-mail. We might try asking a 
 fellow Nova developer to help us out here.
 

I think the long term idea is to run a separate CA and use Barbican for
key distribution, as that is precisely what it is designed to do.

For now SSH'ing in one time to bootstrap a cloud seems an acceptable
risk, and the scope of that SSH key can be ratcheted down to just running
pki_setup, which may be a good idea.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] MySQL 5.6 disk-image-builder element

2014-03-11 Thread Lowery, Mathew
My colleague, Ranjitha Vemula, just submitted a trove-integration patch
set to add a MySQL 5.6 disk-image-builder element. Two major hurdles were
faced with this patch set.

1) The manager

The resulting MySQL 5.6 image can be registered using mysql as the
datastore, mysql as the manager, and
trove.guestagent.datastore.mysql.manager.Manager as the class--in other
words, all the same config as MySQL 5.5 except a different image. To
repeat, no trove changes are required.

Since there is no official Ubuntu package for MySQL 5.6, the official
mysql.com Debian package was used.

Several assumptions made by the MySQL 5.5 manager (specifically paths) had
to be worked around.

The following are hard-coded in the my.cnf template and the default values
from MySQL's Debian package for these paths don't match those in the
manager.
* basedir
* pid-file

The following are referenced using absolute paths (that don't match
mysql.com's Debian package).
* /usr/sbin/mysqld

For all of the above path mismatches, a combination of symlinking and
startup-script sed edits was used. Regarding the use of absolute paths to
binaries, the manager sometimes uses binaries from the PATH and sometimes
uses absolute paths. This should probably be consistent one way or the
other, although using the PATH would add flexibility to the manager.
Regarding my.cnf template, should there be a way (e.g. database) to inject
some fundamental path mapping between the image layout and the manager?


2) disk-image-builder elements for multiple versions of a single datastore

The following layout was chosen (after debating whether logic should
instead be added to the existing ubuntu-mysql element):
trove-integration/scripts/files/elements/ubuntu-mysql-5.6/install.d/10-mysql

Paired with Viswa Vurtharkar's patch set
(https://review.openstack.org/#/c/72804/), this element can be
kick-started using:
DATASTORE_VERSION=-5.6 PACKAGES=  ./redstack kick-start mysql

In my understanding, D.I.B. elements should be pretty dumb and the caller
should worry about composing them so this setup seems like the best
approach to me but it leaves ubuntu-mysql untouched. A point made by
hub_cap is that now ubuntu-mysql, similar to ubuntu-guest, would imply
things common to all MySQL images but as of right now, it is as it was
before: a MySQL 5.5 image. So there's that to discuss.

Feedback is appreciated.
Mat




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Developer documentation

2014-03-11 Thread Collins, Sean
I put together another review that starts to document the HTTP API layer
and structure.

https://review.openstack.org/#/c/79675/

I think it's pretty dense - there's a ton of terminology and concepts
about WSGI and python that I sort of skim over - it's probably not
newbie friendly just yet - comments and suggestions welcome - especially
on how to introduce WSGI and everything else without making someone's
head explode.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] MySQL 5.6 disk-image-builder element

2014-03-11 Thread Clint Byrum
Excerpts from Lowery, Mathew's message of 2014-03-11 10:33:12 -0700:
 My colleague, Ranjitha Vemula, just submitted a trove-integration patch
 set to add a MySQL 5.6 disk-image-builder element. Two major hurdles were
 faced with this patch set.

snip

 In my understanding, D.I.B. elements should be pretty dumb and the caller
 should worry about composing them so this setup seems like the best
 approach to me but it leaves ubuntu-mysql untouched. A point made by
 hub_cap is that now ubuntu-mysql, similar to ubuntu-guest, would imply
 things common to all MySQL images but as of right now, it is as it was
 before: a MySQL 5.5 image. So there's that to discuss.

Yes and no. Yes, you should allow users to compose their images by listing
elements. However, you can also compose your element from other elements
automatically by using the element-deps file in the element's root.

I'd suggest copying everything that is common to the two elements
into an ubuntu-mysql-common element, and having both ubuntu-mysql and
ubuntu-mysql-5.6 list ubuntu-mysql-common in element-deps.
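
As a rough sketch of what that could look like (the paths below are
assumptions about the trove-integration tree, not an actual patch), the shared
element and the two element-deps files could be laid out like this:

import os

ELEMENTS_ROOT = "trove-integration/scripts/files/elements"  # assumed root

def add_element(name, deps=()):
    # Create <name>/install.d and, if needed, an element-deps file listing one
    # dependency per line, which is how diskimage-builder pulls in other elements.
    path = os.path.join(ELEMENTS_ROOT, name)
    os.makedirs(os.path.join(path, "install.d"), exist_ok=True)
    if deps:
        with open(os.path.join(path, "element-deps"), "w") as f:
            f.write("\n".join(deps) + "\n")

add_element("ubuntu-mysql-common")                          # shared install bits
add_element("ubuntu-mysql", deps=["ubuntu-mysql-common"])
add_element("ubuntu-mysql-5.6", deps=["ubuntu-mysql-common"])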

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer]Collector's performance

2014-03-11 Thread Ildikó Váncsa
Hi Nadya,

You mentioned multiple DB backends in your mail. Which one did you use to 
perform these tests or did you get the same/similar performance results in case 
of both?

Best Regards,
Ildiko

From: Nadya Privalova [mailto:nprival...@mirantis.com]
Sent: Tuesday, March 11, 2014 6:05 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Ceilometer]Collector's performance

Hi team!
Last week we were working on notification problem in ceilometer during tempest 
tests creation. Tests for notification passed successfully on Postgres but 
failed on MySQL. This made us start investigations and this email contains some 
results.
As it turned out, tempest as it is is something like performance-testing for 
Ceilometer. It contains 2057 tests. Almost in all test OpenStack resources are 
being created and deleted: images, instances, volumes. E.g. during instance 
creation nova sends 9 notifications. And all the tests are running in parallel 
for about 40 minutes.
From ceilometer-collector logs we may found very useful message:

2014-03-10 
09:42:41.356http://logs.openstack.org/36/64136/20/check/check-tempest-dsvm-full/e361520/logs/screen-ceilometer-collector.txt.gz#_2014-03-10_09_42_41_356
 22845 DEBUG ceilometer.dispatcher.database 
[req-16ea95c5-6454-407a-9c64-94d5ef900c9e - - - - -] metering data 
storage.objects.outgoing.bytes for b7a490322e65422cb1129b13b49020e6 @ 
2014-03-10T09:34:31.090107:
So collector starts to process_metering_data in dispatcher only in 9:42 but 
nova sent it in 9:34. To look at whole picture please take look at picture [1]. 
It illustrates time difference based on this message in logs.
Besides, I decided to take a look on difference between the RPC-publisher sends 
the message and the collector receives the message. To create this plot I've 
parsed the lines like below from anotifications log:


2014-03-10 
09:25:49.333http://logs.openstack.org/36/64136/20/check/check-tempest-dsvm-full/e361520/logs/screen-ceilometer-anotification.txt.gz#_2014-03-10_09_25_49_333
 22833 DEBUG ceilometer.openstack.common.rpc.amqp [-] UNIQUE_ID is 
683dd3f130534b9fbb5606aef862b83d.





After that I found the corresponding id in collector log:

2014-03-10 
09:25:49.352http://logs.openstack.org/36/64136/20/check/check-tempest-dsvm-full/e361520/logs/screen-ceilometer-collector.txt.gz#_2014-03-10_09_25_49_352
 22845 DEBUG ceilometer.openstack.common.rpc.amqp [-] received 
{u'_context_domain': None, u'_context_request_id': 
u'req-0a5fafe6-e097-4f90-a68a-a91da1cff22c',




u'args': {u'data': [...,
 u'message_id': u'f7ad63fc-a835-11e3-8223-bc764e205385', u'counter_type': 
u'gauge'}]}, u'_context_read_only': False, u'_unique_id': 
u'683dd3f130534b9fbb5606aef862b83d',




u'_context_user_identity': u'- - - - -', u'_context_instance_uuid': None, 
u'_context_show_deleted': False, u'_context_tenant': None, 
u'_context_auth_token': 'SANITIZED',




} _safe_log 
/opt/stack/new/ceilometer/ceilometer/openstack/common/rpc/common.py:280
So in the example above we see time-difference only in 20 milliseconds. But it 
grows very quickly :( To see it please take a look on picture [2].
To summarize pictures:
1. Picture 1: Axis Y: amount of seconds between nova creates notification and 
the collector retrieves the message. Axis X: timestamp
2. Picture 2: Axis Y: amount of seconds between the publisher publishes the 
message and the collector retrieves the message. Axis X: timestamp
These pictures are almost the same and it makes me think that collector cannot 
manage with big amount of messages. What do you think about it? Do you agree or 
you need more evidences, e.g. amount of messages in rabbit or amth else?
Let's discuss that in [Ceilometer] topic first, I will create a new thread 
about testing strategy in tempest later. Because in this circumstances we 
forced to refuse from created notification tests and cannot reduce time for 
polling because it will make everything even worst.

[1]: http://postimg.org/image/r4501bdyb/
[2]: http://postimg.org/image/yy5a1ste1/

Thanks for your attention,
Nadya
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [SWIFT] SWIFT object caching (HOT content)

2014-03-11 Thread Clay Gerrard
At the HK summit, the topic of hot content came up and seemed to break down
into two parts.

1) developing a caching storage tier for hot content that would allow
proxies to more quickly serve small data requests with even higher rates of
concurrent access.
2) developing a mechanism to programmatically/automatically (or even
explicitly) identify hot content that should be cached or expired from
the caching storage tier.

Much progress has been made during this development/release cycle on
storage policies [1] which would seem to offer a semantic building block
for the caching storage tier - but to my knowledge no one is actively
working on the details of a caching storage policy (besides maybe a
high-replica ring backed with SSDs), or the second (harder?) part of
identifying which data should be cached or for how long.

I glanced at those blueprints and I'm not sure they line up entirely with
the current thinking on hot content - it would probably be a good idea to
revisit the topic at the upcoming summit in ATL.  I believe proposals are open. [2]

-Clay

1. https://blueprints.launchpad.net/swift/+spec/storage-policies
2. http://summit.openstack.org/


On Mon, Mar 10, 2014 at 10:09 PM, Anbu a...@enovance.com wrote:

 Hi,
 I came across this blueprint
 https://blueprints.launchpad.net/swift/+spec/swift-proxy-caching and a
 related etherpad https://etherpad.openstack.org/p/swift-kt about SWIFT
 object caching.
 I would like to contribute in this and I would also like to know if
 anybody has made any progress in this area.
 If anyone is aware of a discussion that has happened/happening in this,
 kindly point me to it.

 Thank you,
 Babu

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer]Collector's performance

2014-03-11 Thread Nadya Privalova
Ildiko,

Thanks for the question, I forgot to write about it. The results are for MySQL;
the link to the logs is
http://logs.openstack.org/36/64136/20/check/check-tempest-dsvm-full/e361520/.
But I guess the Postgres results look the same, because it failed during the
last test run (https://review.openstack.org/#/c/64136/). Will check tomorrow
anyway.

Nadya


On Tue, Mar 11, 2014 at 10:01 PM, Ildikó Váncsa
ildiko.van...@ericsson.comwrote:

  Hi Nadya,



 You mentioned multiple DB backends in your mail. Which one did you use to
 perform these tests or did you get the same/similar performance results in
 case of both?



 Best Regards,

 Ildiko



 *From:* Nadya Privalova [mailto:nprival...@mirantis.com]
 *Sent:* Tuesday, March 11, 2014 6:05 PM
 *To:* OpenStack Development Mailing List
 *Subject:* [openstack-dev] [Ceilometer]Collector's performance



 Hi team!

 Last week we were working on notification problem in ceilometer during
 tempest tests creation. Tests for notification passed successfully on
 Postgres but failed on MySQL. This made us start investigations and this
 email contains some results.

 As it turned out, tempest as it is is something like performance-testing
 for Ceilometer. It contains 2057 tests. Almost in all test OpenStack
 resources are being created and deleted: images, instances, volumes. E.g.
 during instance creation nova sends 9 notifications. And all the tests are
 running in parallel for about 40 minutes.

 From ceilometer-collector logs we may found very useful message:

 2014-03-10 09:42:41.356 
 http://logs.openstack.org/36/64136/20/check/check-tempest-dsvm-full/e361520/logs/screen-ceilometer-collector.txt.gz#_2014-03-10_09_42_41_356
  22845 DEBUG ceilometer.dispatcher.database 
 [req-16ea95c5-6454-407a-9c64-94d5ef900c9e - - - - -] metering data 
 storage.objects.outgoing.bytes for b7a490322e65422cb1129b13b49020e6 @ 
 2014-03-10T09:34:31.090107:

 So collector starts to process_metering_data in dispatcher only in 9:42
 but nova sent it in 9:34. To look at whole picture please take look at
 picture [1]. It illustrates time difference based on this message in logs.

 Besides, I decided to take a look on difference between the RPC-publisher
 sends the message and the collector receives the message. To create this
 plot I've parsed the lines like below from anotifications log:



 2014-03-10 09:25:49.333 
 http://logs.openstack.org/36/64136/20/check/check-tempest-dsvm-full/e361520/logs/screen-ceilometer-anotification.txt.gz#_2014-03-10_09_25_49_333
  22833 DEBUG ceilometer.openstack.common.rpc.amqp [-] UNIQUE_ID is 
 683dd3f130534b9fbb5606aef862b83d.





  After that I found the corresponding id in collector log:

 2014-03-10 09:25:49.352 
 http://logs.openstack.org/36/64136/20/check/check-tempest-dsvm-full/e361520/logs/screen-ceilometer-collector.txt.gz#_2014-03-10_09_25_49_352
  22845 DEBUG ceilometer.openstack.common.rpc.amqp [-] received 
 {u'_context_domain': None, u'_context_request_id': 
 u'req-0a5fafe6-e097-4f90-a68a-a91da1cff22c',



 u'args': {u'data': [...,
  u'message_id': u'f7ad63fc-a835-11e3-8223-bc764e205385', u'counter_type': 
 u'gauge'}]}, u'_context_read_only': False, u'_unique_id': 
 u'683dd3f130534b9fbb5606aef862b83d',



 u'_context_user_identity': u'- - - - -', u'_context_instance_uuid': None, 
 u'_context_show_deleted': False, u'_context_tenant': None, 
 u'_context_auth_token': 'SANITIZED',



 } _safe_log 
 /opt/stack/new/ceilometer/ceilometer/openstack/common/rpc/common.py:280

 So in the example above we see time-difference only in 20 milliseconds.
 But it grows very quickly :( To see it please take a look on picture [2].

 To summarize pictures:

 1. Picture 1: Axis Y: amount of seconds between nova creates notification
 and the collector retrieves the message. Axis X: timestamp

 2. Picture 2: Axis Y: amount of seconds between the publisher publishes
 the message and the collector retrieves the message. Axis X: timestamp

 These pictures are almost the same and it makes me think that collector
 cannot manage with big amount of messages. What do you think about it? Do
 you agree or you need more evidences, e.g. amount of messages in rabbit or
 amth else?

 Let's discuss that in [Ceilometer] topic first, I will create a new thread
 about testing strategy in tempest later. Because in this circumstances we
 forced to refuse from created notification tests and cannot reduce time for
 polling because it will make everything even worst.



 [1]: http://postimg.org/image/r4501bdyb/
 [2]: http://postimg.org/image/yy5a1ste1/



 Thanks for your attention,

 Nadya

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-11 Thread Joe Gordon
On Tue, Mar 11, 2014 at 10:24 AM, Mike Wilson geekinu...@gmail.com wrote:

 Undeleting things is an important use case in my opinion. We do this in
 our environment on a regular basis. In that light I'm not sure that it
 would be appropriate just to log the deletion and git rid of the row. I
 would like to see it go to an archival table where it is easily restored.


Although we want to *support* hard deletion, we still want to support the
current behavior as well (soft deletion, where the operator can prune
deleted rows periodically).


 -Mike


 On Mon, Mar 10, 2014 at 3:44 PM, Joshua Harlow harlo...@yahoo-inc.comwrote:

  Sounds like a good idea to me.

  I've never understood why we treat the DB as a LOG (keeping deleted ==
 0 records around) when we should just use a LOG (or similar system) to
 begin with instead.

  Does anyone use the feature of switching deleted == 1 back to deleted =
 0? Has this worked out for u?

  Seems like some of the feedback on
 https://etherpad.openstack.org/p/operators-feedback-mar14 also suggests
 that this has been a operational pain-point for folks (Tool to delete
 things properly suggestions and such...).

   From: Boris Pavlovic bpavlo...@mirantis.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Monday, March 10, 2014 at 1:29 PM
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org,
 Victor Sergeyev vserge...@mirantis.com
 Subject: [openstack-dev] [all][db][performance] Proposal: Get rid of
 soft deletion (step by step)

   Hi stackers,

  (It's proposal for Juno.)

  Intro:

  Soft deletion means that records from DB are not actually deleted, they
 are just marked as a deleted. To mark record as a deleted we put in
 special table's column deleted record's ID value.

  Issue 1: Indexes  Queries
 We have to add in every query AND deleted == 0 to get non-deleted
 records.
 It produce performance issue, cause we should add it in any index one
 extra column.
 As well it produce extra complexity in db migrations and building
 queries.

  Issue 2: Unique constraints
 Why we store ID in deleted and not True/False?
  The reason is that we would like to be able to create real DB unique
 constraints and avoid race conditions on insert operation.

  Sample: we Have table (id, name, password, deleted) we would like to
 put in column name only unique value.

  Approach without UC: if count(`select  where name = name`) == 0:
 insert(...)
 (race cause we are able to add new record between )

  Approach with UC: try: insert(...) except Duplicate: ...

  So to add UC we have to add them on (name, deleted). (to be able to
 make insert/delete/insert with same name)

  As well it produce performance issues, because we have to use Complex
 unique constraints on 2  or more columns. + extra code  complexity in db
 migrations.

  Issue 3: Garbage collector

  It is really hard to make garbage collector that will have good
 performance and be enough common to work in any case for any project.
 Without garbage collector DevOps have to cleanup records by hand, (risk
 to break something). If they don't cleanup DB they will get very soon
 performance issue.

  To put in a nutshell most important issues:
 1) Extra complexity to each select query  extra column in each index
 2) Extra column in each Unique Constraint (worse performance)
 3) 2 Extra column in each table: (deleted, deleted_at)
 4) Common garbage collector is required


  To resolve all these issues we should just remove soft deletion.

  One of approaches that I see is in step by step removing deleted
 column from every table with probably code refactoring.  Actually we have 3
 different cases:

  1) We don't use soft deleted records:
 1.1) Do .delete() instead of .soft_delete()
 1.2) Change query to avoid adding extra deleted == 0 to each query
 1.3) Drop deleted and deleted_at columns

  2) We use soft deleted records for internal stuff e.g. periodic tasks
 2.1) Refactor code somehow: E.g. store all required data by periodic task
 in some special table that has: (id, type, json_data) columns
 2.2) On delete add record to this table
 2.3-5) similar to 1.1, 1.2, 13

  3) We use soft deleted records in API
 3.1) Deprecated API call if it is possible
 3.2) Make proxy call to ceilometer from API
 3.3) On .delete() store info about records in (ceilometer, or somewhere
 else)
 3.4-6) similar to 1.1, 1.2, 1.3

 This is not ready RoadMap, just base thoughts to start the constructive
 discussion in the mailing list, so %stacker% your opinion is very
 important!


  Best regards,
 Boris Pavlovic


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-11 Thread Johannes Erdfelt
On Tue, Mar 11, 2014, Mike Wilson geekinu...@gmail.com wrote:
 Undeleting things is an important use case in my opinion. We do this in our
 environment on a regular basis. In that light I'm not sure that it would be
 appropriate just to log the deletion and git rid of the row. I would like
 to see it go to an archival table where it is easily restored.

I'm curious, what are you undeleting and why?

JE


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][Nova][Docker] Devstack with docker driver

2014-03-11 Thread Daniel Kuffner
Hi,
what is the error reported by docker?
Can you post the docker registry log?
What version of docker do you use?
I assume you use devstack master branch?

thank you,
Daniel

On Tue, Mar 11, 2014 at 1:19 PM, urgensherpa sherpa.ur...@outlook.com wrote:
 Hello!,

 i can run docker containers and push it to docker io but i failed to push it
 for local glance.and get the same error mentioned here.
 Could you please show some more light on  how you resolved it. i started
 settingup openstack and docker using devstack.
 here is my localrc
 FLOATING_RANGE=192.168.140.0/27
 FIXED_RANGE=10.11.12.0/24
 FIXED_NETWORK_SIZE=256
 FLAT_INTERFACE=eth1
 ADMIN_PASSWORD=g
 MYSQL_PASSWORD=g
 RABBIT_PASSWORD=g
 SERVICE_PASSWORD=g
 SERVICE_TOKEN=g
 SCHEDULER=nova.scheduler.filter_scheduler.FilterScheduler
 VIRT_DRIVER=docker
 SCREEN_LOGDIR=$DEST/logs/screen
 ---
 the machine im testing is on vmware ubuntu 13.01 with two nics  assuming
 eth0 connected to internet and eth1 to local network.
 ---





 --
 View this message in context: 
 http://openstack.10931.n7.nabble.com/Openstack-Nova-Docker-Devstack-with-docker-driver-tp28361p34845.html
 Sent from the Developer mailing list archive at Nabble.com.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Developer documentation

2014-03-11 Thread Brandon Logan
As someone who has just spent the time to learn the Neutron code, this would 
have been quite helpful when I started.  I'll add on to this when it is merged 
in.  Awesome job!

Thanks,
Brandon Logan

From: Collins, Sean [sean_colli...@cable.comcast.com]
Sent: Tuesday, March 11, 2014 12:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] Developer documentation

I put together another review that starts to document the HTTP API layer
and structure.

https://review.openstack.org/#/c/79675/

I think it's pretty dense - there's a ton of terminology and concepts
about WSGI and python that I sort of skim over - it's probably not
newbie friendly just yet - comments and suggestions welcome - especially
on how to introduce WSGI and everything else without making someone's
head explode.

--
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] a question about instance snapshot

2014-03-11 Thread Jay Pipes
On Tue, 2014-03-11 at 06:35 +, Bohai (ricky) wrote:
  -Original Message-
  From: Jay Pipes [mailto:jaypi...@gmail.com]
  Sent: Tuesday, March 11, 2014 3:20 AM
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [nova] a question about instance snapshot
 
  On Mon, 2014-03-10 at 12:13 -0400, Shawn Hartsock wrote:
   We have very strong interest in pursing this feature in the VMware
   driver as well. I would like to see the revert instance feature
   implemented at least.
  
   When I used to work in multi-discipline roles involving operations it
   would be common for us to snapshot a vm, run through an upgrade
   process, then revert if something did not upgrade smoothly. This
   ability alone can be exceedingly valuable in long-lived virtual
   machines.
  
   I also have some comments from parties interested in refactoring how
   the VMware drivers handle snapshots but I'm not certain how much that
   plays into this live snapshot discussion.
 
  I think the reason that there isn't much interest in doing this kind of 
  thing is
  because the worldview that VMs are pets is antithetical to the worldview 
  that
  VMs are cattle, and Nova tends to favor the latter (where DRS/DPM on
  vSphere tends to favor the former).
 
  There's nothing about your scenario above of being able to revert an 
  instance
  to a particular state that isn't possible with today's Nova.
  Snapshotting an instance, doing an upgrade of software on the instance, and
  then restoring from the snapshot if something went wrong (reverting) is
  already fully possible to do with the regular Nova snapshot and restore
  operations. The only difference is that the live-snapshot
  stuff would include saving the memory view of a VM in addition to its disk 
  state.
  And that, at least in my opinion, is only needed when you are treating VMs 
  like
  pets and not cattle.
 
 
 Hi Jay,
 
 I read every words in your reply and respect what you said.
 
 But I can't agree with you that memory snapshot is a feature for pets, not for 
 cattle.
 I think it's a feature whatever what do you look the instance as.
 
 The world doesn't care about what we look the instance as, in fact, currently 
 almost all the
 mainstream hypervisors have supported the memory snapshot.
 If it's just a dispensable feature and no users need it, I can't understand 
 why
 the hypervisors provide it without exception.
 
 In the document  OPENSTACK OPERATIONS GUIDE section  Live snapshots has 
 the
 below words:
  To ensure that important services have written their contents to disk (such 
 as, databases),
 we recommend you read the documentation for those applications to determine 
 what commands
 to issue to have them sync their contents to disk. If you are unsure how to 
 do this,
  the safest approach is to simply stop these running services normally.
 
 This just pushes all the responsibility to guarantee the consistency of the 
 instance to the end user.
 It's absolutely not convenient and I doubt whether it's appropriate.

Hi Ricky,

I guess we will just have to disagree about the relative usefulness of
this kind of thing for users of the cloud (and not users of traditional
managed hosting) :) Like I said, if it does not affect the performance
of other tenants' instances, I'm fine with adding the functionality in a
way that is generic (not hypervisor-specific).

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-11 Thread Matt Riedemann



On 3/10/2014 11:20 AM, Dmitry Borodaenko wrote:

On Fri, Mar 7, 2014 at 8:55 AM, Sean Dague s...@dague.net wrote:

On 03/07/2014 11:16 AM, Russell Bryant wrote:

On 03/07/2014 04:19 AM, Daniel P. Berrange wrote:

On Thu, Mar 06, 2014 at 12:20:21AM -0800, Andrew Woodward wrote:

I'd Like to request A FFE for the remaining patches in the Ephemeral
RBD image support chain

https://review.openstack.org/#/c/59148/
https://review.openstack.org/#/c/59149/

are still open after their dependency
https://review.openstack.org/#/c/33409/ was merged.

These should be low risk as:
1. We have been testing with this code in place.
2. It's nearly all contained within the RBD driver.

This is needed as it implements an essential functionality that has
been missing in the RBD driver and this will become the second release
it's been attempted to be merged into.


Add me as a sponsor.


OK, great.  That's two.

We have a hard deadline of Tuesday to get these FFEs merged (regardless
of gate status).



As alt release manager, FFE approved based on Russell's approval.

The merge deadline for Tuesday is the release meeting, not end of day.
If it's not merged by the release meeting, it's dead, no exceptions.


Both commits were merged, thanks a lot to everyone who helped land
this in Icehouse! Especially to Russel and Sean for approving the FFE,
and to Daniel, Michael, and Vish for reviewing the patches!



There was a bug reported today [1] that looks like a regression in this 
new code, so we need people involved in this looking at it as soon as 
possible because we have a proposed revert in case we need to yank it 
out [2].


[1] https://bugs.launchpad.net/nova/+bug/1291014
[2] 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bug/1291014,n,z


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-11 Thread Tim Bell

Typical cases are user error where someone accidentally deletes an item from a 
tenant. The image guys have a good structure where images become unavailable 
and are recoverable for a certain period of time. A regular periodic task 
cleans up deleted items after a configurable number of seconds to avoid 
constant database growth.

My preference would be to follow this model universally (an archive table is a 
nice way to do it without disturbing production).
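
A rough sketch of that kind of time-based cleanup (the table and column names
are assumptions, not any particular project's schema):

import datetime

import sqlalchemy as sa

def purge_deleted(engine, table_name, max_age_seconds):
    # Permanently remove rows that were soft-deleted more than
    # max_age_seconds ago.
    meta = sa.MetaData()
    table = sa.Table(table_name, meta, autoload_with=engine)
    cutoff = (datetime.datetime.utcnow()
              - datetime.timedelta(seconds=max_age_seconds))
    with engine.begin() as conn:
        result = conn.execute(
            table.delete().where(
                sa.and_(table.c.deleted != 0, table.c.deleted_at < cutoff)))
        return result.rowcount  # rows permanently removed

# e.g. called from a periodic task: purge_deleted(engine, 'images', 3600)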

Tim


 On Tue, Mar 11, 2014, Mike Wilson geekinu...@gmail.com wrote:
  Undeleting things is an important use case in my opinion. We do this
  in our environment on a regular basis. In that light I'm not sure that
  it would be appropriate just to log the deletion and git rid of the
  row. I would like to see it go to an archival table where it is easily 
  restored.
 
 I'm curious, what are you undeleting and why?
 
 JE
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-11 Thread Joe Gordon
On Tue, Mar 11, 2014 at 12:43 PM, Tim Bell tim.b...@cern.ch wrote:


 Typical cases are user error where someone accidentally deletes an item
 from a tenant. The image guys have a good structure where images become
 unavailable and are recoverable for a certain period of time. A regular
 periodic task cleans up deleted items after a configurable number of
 seconds to avoid constant database growth.

 My preference would be to follow this model universally (an archive table
 is a nice way to do it without disturbing production).


That was the goal of the shadow table; if it doesn't support that now, then
it's a bug.
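
For reference, a simplified sketch of the archive-to-shadow-table idea (real
implementations, e.g. nova's archive_deleted_rows, differ in detail; this
assumes shadow_<table> has the same columns as <table>):

import sqlalchemy as sa

def archive_deleted_rows(engine, table_name, limit=1000):
    meta = sa.MetaData()
    table = sa.Table(table_name, meta, autoload_with=engine)
    shadow = sa.Table('shadow_' + table_name, meta, autoload_with=engine)
    with engine.begin() as conn:
        # Copy up to `limit` soft-deleted rows into the shadow table, then
        # remove them from the production table in the same transaction.
        rows = conn.execute(
            sa.select(table).where(table.c.deleted != 0).limit(limit)
        ).mappings().all()
        if rows:
            conn.execute(shadow.insert(), [dict(r) for r in rows])
            ids = [r['id'] for r in rows]
            conn.execute(table.delete().where(table.c.id.in_(ids)))
    return len(rows)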



 Tim


  On Tue, Mar 11, 2014, Mike Wilson geekinu...@gmail.com wrote:
   Undeleting things is an important use case in my opinion. We do this
   in our environment on a regular basis. In that light I'm not sure that
   it would be appropriate just to log the deletion and git rid of the
   row. I would like to see it go to an archival table where it is easily
 restored.
 
  I'm curious, what are you undeleting and why?
 
  JE
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [python-openstacksdk] Minutes from 11 Mar meeting

2014-03-11 Thread Brian Curtin
We just wrapped up our weekly meeting, and the minutes and log are available.

Minutes: 
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-03-11-19.00.html

Minutes (text):
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-03-11-19.00.txt

Log: 
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-03-11-19.00.log.html

We're starting to get into code, so a lot of the discussion was around
the direction of the current example
(https://review.openstack.org/#/c/79435/) and some of the library
choices. There is some research to be done and more reviews to be had,
but it's off to a good start.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat]Policy on upgades required config changes

2014-03-11 Thread Zane Bitter

On 11/03/14 01:05, Keith Bray wrote:

We do run close to Heat master here at
Rackspace, and we'd be happy to set up a non-voting job to notify when a
review would break Heat on our cloud if that would be beneficial.  Some of
the breaks we have seen have been things that simply weren't caught in
code review (a human intensive effort), were specific to the way we
configure Heat for large-scale cloud use, applicable to the entire Heat
project, and not necessarily service provider specific.


+1, thanks Keith, that sounds like a great idea. It's obviously not 
possible to test every configuration, but testing a typical large 
operator configuration would be a big plus for the project.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-11 Thread Jay Pipes
On Tue, 2014-03-11 at 14:18 -0500, Matt Riedemann wrote:
 
 On 3/10/2014 11:20 AM, Dmitry Borodaenko wrote:
  On Fri, Mar 7, 2014 at 8:55 AM, Sean Dague s...@dague.net wrote:
  On 03/07/2014 11:16 AM, Russell Bryant wrote:
  On 03/07/2014 04:19 AM, Daniel P. Berrange wrote:
  On Thu, Mar 06, 2014 at 12:20:21AM -0800, Andrew Woodward wrote:
  I'd Like to request A FFE for the remaining patches in the Ephemeral
  RBD image support chain
 
  https://review.openstack.org/#/c/59148/
  https://review.openstack.org/#/c/59149/
 
  are still open after their dependency
  https://review.openstack.org/#/c/33409/ was merged.
 
  These should be low risk as:
  1. We have been testing with this code in place.
  2. It's nearly all contained within the RBD driver.
 
  This is needed as it implements an essential functionality that has
  been missing in the RBD driver and this will become the second release
  it's been attempted to be merged into.
 
  Add me as a sponsor.
 
  OK, great.  That's two.
 
  We have a hard deadline of Tuesday to get these FFEs merged (regardless
  of gate status).
 
 
  As alt release manager, FFE approved based on Russell's approval.
 
  The merge deadline for Tuesday is the release meeting, not end of day.
  If it's not merged by the release meeting, it's dead, no exceptions.
 
  Both commits were merged, thanks a lot to everyone who helped land
  this in Icehouse! Especially to Russel and Sean for approving the FFE,
  and to Daniel, Michael, and Vish for reviewing the patches!
 
 
 There was a bug reported today [1] that looks like a regression in this 
 new code, so we need people involved in this looking at it as soon as 
 possible because we have a proposed revert in case we need to yank it 
 out [2].
 
 [1] https://bugs.launchpad.net/nova/+bug/1291014
 [2] 
 https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bug/1291014,n,z

Note that I have identified the source of the problem and am pushing a
patch shortly with unit tests.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-11 Thread Matt Riedemann



On 3/11/2014 3:11 PM, Jay Pipes wrote:

On Tue, 2014-03-11 at 14:18 -0500, Matt Riedemann wrote:


On 3/10/2014 11:20 AM, Dmitry Borodaenko wrote:

On Fri, Mar 7, 2014 at 8:55 AM, Sean Dague s...@dague.net wrote:

On 03/07/2014 11:16 AM, Russell Bryant wrote:

On 03/07/2014 04:19 AM, Daniel P. Berrange wrote:

On Thu, Mar 06, 2014 at 12:20:21AM -0800, Andrew Woodward wrote:

I'd Like to request A FFE for the remaining patches in the Ephemeral
RBD image support chain

https://review.openstack.org/#/c/59148/
https://review.openstack.org/#/c/59149/

are still open after their dependency
https://review.openstack.org/#/c/33409/ was merged.

These should be low risk as:
1. We have been testing with this code in place.
2. It's nearly all contained within the RBD driver.

This is needed as it implements an essential functionality that has
been missing in the RBD driver and this will become the second release
it's been attempted to be merged into.


Add me as a sponsor.


OK, great.  That's two.

We have a hard deadline of Tuesday to get these FFEs merged (regardless
of gate status).



As alt release manager, FFE approved based on Russell's approval.

The merge deadline for Tuesday is the release meeting, not end of day.
If it's not merged by the release meeting, it's dead, no exceptions.


Both commits were merged, thanks a lot to everyone who helped land
this in Icehouse! Especially to Russel and Sean for approving the FFE,
and to Daniel, Michael, and Vish for reviewing the patches!



There was a bug reported today [1] that looks like a regression in this
new code, so we need people involved in this looking at it as soon as
possible because we have a proposed revert in case we need to yank it
out [2].

[1] https://bugs.launchpad.net/nova/+bug/1291014
[2]
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bug/1291014,n,z


Note that I have identified the source of the problem and am pushing a
patch shortly with unit tests.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



My concern is how much else out there assumes nova is working with the 
glance v2 API, because there was a nova blueprint [1] to make nova work 
with the glance v2 API but that never landed in Icehouse, so I'm worried 
about whack-a-mole type problems here, especially since there is no 
tempest coverage for testing multiple image location support via nova.


[1] https://blueprints.launchpad.net/nova/+spec/use-glance-v2-api

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-11 Thread Tim Bell
Can we therefore agree that no removal of the deleted column is permitted if there 
is no implementation of shadow tables?

Tim

From: Joe Gordon [mailto:joe.gord...@gmail.com]
Sent: 11 March 2014 20:57
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft 
deletion (step by step)



On Tue, Mar 11, 2014 at 12:43 PM, Tim Bell 
tim.b...@cern.ch wrote:

Typical cases are user error where someone accidentally deletes an item from a 
tenant. The image guys have a good structure where images become unavailable 
and are recoverable for a certain period of time. A regular periodic task 
cleans up deleted items after a configurable number of seconds to avoid 
constant database growth.

My preference would be to follow this model universally (an archive table is a 
nice way to do it without disturbing production).

That was the goal of the shadow table, if it doesn't support that now then its 
a bug.


Tim


 On Tue, Mar 11, 2014, Mike Wilson 
 geekinu...@gmail.com wrote:
  Undeleting things is an important use case in my opinion. We do this
  in our environment on a regular basis. In that light I'm not sure that
  it would be appropriate just to log the deletion and git rid of the
  row. I would like to see it go to an archival table where it is easily 
  restored.

 I'm curious, what are you undeleting and why?

 JE


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] MuranoPL questions?

2014-03-11 Thread Joshua Harlow
I guess I might be a bit biased to programming; so maybe I'm not the target 
audience.

I'm not exactly against DSL's, I just think that DSL's need to be really really 
proven to become useful (in general this applies to any language that 'joe' 
comp-sci student can create). Its not that hard to just make one, but the real 
hard part is making one that people actually like and use and survives the test 
of time. That’s why I think its just nicer to use languages that have stood the 
test of time already (if we can), creating a new DSL (muranoPL seems to be 
slightly more than a DSL imho) means creating a new language that has not stood 
the test of time (in terms of lifetime, battle tested, supported over years) so 
that’s just the concern I have.

Of course we have to accept innovation and I hope that the DSL/s makes it 
easier/simpler, I just tend to be a bit more pragmatic maybe in this area.

Here's hoping for the best! :-)

-Josh

From: Renat Akhmerov rakhme...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, March 10, 2014 at 8:36 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] MuranoPL questions?

Although being a little bit verbose it makes a lot of sense to me.

@Joshua,

Even assuming Python could be sandboxed, and whatever else is needed to be 
able to use it as a DSL (for something like Mistral, Murano or Heat) is done, why 
do you think Python would be a better alternative for people who know 
neither these new DSLs nor Python itself? Especially given the fact that 
Python has A LOT of things that they’d never use. I know many people who have 
been programming in Python for a while, and they admit they don’t know all the 
nuances of Python and actually use only 30-40% of its capabilities, even 
outside domain-specific development. So narrowing the feature set that a language 
provides and limiting it to a certain domain vocabulary is what helps people 
solve tasks of that specific domain much more easily and in the most expressive, 
natural way, without having to learn the tons and tons of details that a general 
purpose language (GPL, hah :) ) provides (btw, the reason thick books get written).

I agree with Stan: if you begin to use a technology you’ll have to learn 
something anyway, be it the TaskFlow API and principles or a DSL. A well-designed 
DSL just encapsulates the essential principles of the system it is used for. By 
learning the DSL you’re learning the system itself, as simple as that.

Renat Akhmerov
@ Mirantis Inc.



On 10 Mar 2014, at 05:35, Stan Lagun sla...@mirantis.com wrote:

 I'd be very interested in knowing the resource controls u plan to add. 
 Memory, CPU...
We haven't discussed it yet. Any suggestions are welcomed

 I'm still trying to figure out where something like 
 https://github.com/istalker2/MuranoDsl/blob/master/meta/com.mirantis.murano.demoApp.DemoInstance/manifest.yaml
  would be beneficial, why not  just spend effort sand boxing lua, python... 
 Instead of spending effort on creating a new language and then having to 
 sandbox it as well... Especially if u picked languages that are made to be  
 sandboxed from the start (not python)...

1. See my detailed answer in Mistral thread why haven't we used any of those 
languages. There are many reasons besides sandboxing.

2. You don't need to sandbox MuranoPL. Sandboxing means restricting some 
operations. In MuranoPL ALL operations (including operators in expressions, 
functions, methods, etc.) are just those that you explicitly provided. So there 
is nothing to restrict. There are no builtins that could throw an AccessViolationError.

3. Most of the value of MuranoPL comes not from the workflow code but from 
class declarations. In all OOP languages classes are just a convenient way to 
organize your code. There are classes that represent real-life objects and 
classes that are nothing more than data structures, DTOs, etc. In Murano, classes 
in MuranoPL are deployable entities, like Heat resources: application components, 
services, etc. In the dashboard UI the user works with those entities. He (in the UI!) 
creates instances of those classes, fills in their property values, binds objects 
together (assigns one object to a property of another). And this is done without 
even a single MuranoPL line being executed! That is possible because everything 
in MuranoPL is subject to declaration, and because it is just plain YAML 
anyone can easily extract those declarations from MuranoPL classes.
Now suppose it was Python instead of MuranoPL. Then you would have to parse 
*.py files to get a list of declared classes (without executing anything). 
Suppose that you managed to solve this somehow. Probably you wrote a regexp that 
finds all class declarations in text files. Are you 
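
To make that point concrete, a rough sketch of the extraction (the Name/Properties keys here are illustrative, not the exact MuranoPL manifest schema):

import yaml

def extract_declarations(path):
    # Because the class is plain YAML, a UI can read its declarations
    # without executing any workflow code.
    with open(path) as f:
        klass = yaml.safe_load(f)
    return {
        'name': klass.get('Name'),
        'properties': list(klass.get('Properties', {})),
    }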

Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-11 Thread Dmitry Borodaenko
On Tue, Mar 11, 2014 at 1:31 PM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:
 There was a bug reported today [1] that looks like a regression in this
 new code, so we need people involved in this looking at it as soon as
 possible because we have a proposed revert in case we need to yank it
 out [2].

 [1] https://bugs.launchpad.net/nova/+bug/1291014
 [2] 
 https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bug/1291014,n,z

 Note that I have identified the source of the problem and am pushing a
 patch shortly with unit tests.

 My concern is how much elsewhere assumes nova is working with the glance v2
 API, because there was a nova blueprint [1] to make nova work with the glance
 v2 API but that never landed in Icehouse, so I'm worried about whack-a-mole
 type problems here, especially since there is no tempest coverage for
 testing multiple image location support via nova.

 [1] https://blueprints.launchpad.net/nova/+spec/use-glance-v2-api

As I mentioned in the bug comments, the code that made the assumption
about glance v2 API actually landed in September 2012:
https://review.openstack.org/13017

The multiple image location patch simply made use of a method that was
already there for more than a year.

-DmitryB

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer]Collector's performance

2014-03-11 Thread Gordon Chung
i did notice the collector service was only ever writing one db connection 
at a time. i've opened a bug for that here: 
https://bugs.launchpad.net/ceilometer/+bug/1291054

i am curious as to why postgresql passes but not mysql? is postgres 
actually faster or are its default configurations set up better?

cheers,
gordon chung
openstack, ibm software standards
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] No route matched for POST

2014-03-11 Thread Vijay B
Hi Aaron!

I was able to get over the route issue - to begin with, it turns out there was
a nasty rogue single-space indent in the file (the peril of not using a good
IDE). Apart from that, stepping through the api/extensions.py code showed
that I shouldn't be overriding the get_plugin_interface() method in tag.py
- because I don't have a plugin associated with this, and would use the
current plugin. Also, there was an attributes variable I was using wrongly
- I had imported it as attr and had to use that.

So the next steps for me would be to go ahead in implementing the logic to
check/write to the db. Hopefully I'll get that to work quicker.

Thanks a lot again!

Regards,
Vijay


On Tue, Mar 11, 2014 at 9:42 AM, Vijay B os.v...@gmail.com wrote:

 Hi Aaron!

 Yes, attaching the code diffs of the client and server. The diff
 0001-Frist-commit-to-add-tag-create-CLI.patch needs to be applied on
 python-neutronclient's master branch, and the diff
 0001-Adding-a-tag-extension.patch needs to be applied on neutron's
 stable/havana branch. After restarting q-svc, please run the CLI `neutron
 tag-create --name tag1 --key key1 --value val1` to test it out.  Thanks for
 offering to take a look at this!

 Regards,
 Vijay


 On Mon, Mar 10, 2014 at 10:10 PM, Aaron Rosen aaronoro...@gmail.com wrote:

 Hi Vijay,

  I think you'd have to post your code for anyone to really help you.
 Otherwise we'll just be taking shots in the dark.

 Best,

 Aaron


 On Mon, Mar 10, 2014 at 7:22 PM, Vijay B os.v...@gmail.com wrote:

 Hi,

 I'm trying to implement a new extension API in neutron, but am running
 into a No route matched for POST on the neutron service.

 I have followed the instructions in the link
 https://wiki.openstack.org/wiki/NeutronDevelopment#API_Extensions when
 trying to implement this extension.

 The extension doesn't depend on any plug in per se, akin to security
 groups.

 I have defined a new file in neutron/extensions/, called Tag.py, with a
 class Tag extending class extensions.ExtensionDescriptor, like the
 documentation requires. Much like many of the other extensions already
 implemented, I define my new extension as a dictionary, with fields like
 allow_post/allow_put etc, and then pass this to the controller. I still
 however run into a no route matched for POST error when I attempt to fire
 my CLI to create a tag. I also edited the ml2 plugin file
 neutron/plugins/ml2/plugin.py to add tags to
 _supported_extension_aliases, but that didn't resolve the issue.
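 
  For reference, the kind of skeleton being described looks roughly like this
  (a sketch following the wiki pattern above; the method names mirror other
  in-tree extensions of that era and the attribute map contents are
  illustrative, so double-check them against your branch):
 
  from neutron.api import extensions
  from neutron.api.v2 import base
  from neutron import manager
 
  # Illustrative attribute map; real validation/default settings will differ.
  RESOURCE_ATTRIBUTE_MAP = {
      'tags': {
          'id': {'allow_post': False, 'allow_put': False, 'is_visible': True},
          'name': {'allow_post': True, 'allow_put': True, 'is_visible': True,
                   'default': ''},
          'key': {'allow_post': True, 'allow_put': True, 'is_visible': True},
          'value': {'allow_post': True, 'allow_put': True, 'is_visible': True},
      },
  }
 
  class Tag(extensions.ExtensionDescriptor):
 
      @classmethod
      def get_name(cls):
          return "Tag"
 
      @classmethod
      def get_alias(cls):
          # Must also be listed in the plugin's supported extension aliases.
          return "tag"
 
      @classmethod
      def get_description(cls):
          return "Simple tagging extension"
 
      @classmethod
      def get_namespace(cls):
          return "http://docs.openstack.org/ext/tag/api/v1.0"
 
      @classmethod
      def get_updated(cls):
          return "2014-03-11T00:00:00-00:00"
 
      @classmethod
      def get_resources(cls):
          # Reuse the current core plugin rather than requiring a new one.
          plugin = manager.NeutronManager.get_plugin()
          controller = base.create_resource(
              'tags', 'tag', plugin, RESOURCE_ATTRIBUTE_MAP['tags'])
          return [extensions.ResourceExtension('tags', controller)]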

  It looks like I'm missing something quite minor, causing the new
  extension to not get registered, but I'm not sure what.

 I can provide more info/patches if anyone would like to take a look, and
 it would be very much appreciated if someone could help me out with this.

 Thanks!
 Regards,
 Vijay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-11 Thread Joshua Harlow
The question I don't understand is why this process has to
involve the database to begin with?

If you want to archive images per se, then on deletion just export the image to a
'backup tape' (for example) and store enough of the metadata on that
'tape' to re-insert it if that is really desired, and then delete it from
the database (or do the export... asynchronously). The same could be said
for VMs, although likely not all resources (networks, etc.) make sense
to handle this way.

So instead of setting deleted = 1 and waiting for a cleaner, just save the resource (if
possible) plus enough metadata on some other system ('backup tape', alternate
storage location, hdfs, ceph...) and leave it there unless it's really
needed. Making the database more complex (and all associated code) to
achieve this same goal seems like a hack that just needs to be addressed
with a better way to do archiving.
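
A rough sketch of that idea (the archive_store object and its method names are illustrative; session would be e.g. a SQLAlchemy session):

import json

def delete_resource(session, archive_store, resource):
    # Save enough metadata somewhere cheap ('backup tape', object store,
    # hdfs, ceph, ...) so the row can be re-inserted if undelete is needed.
    archive_store.put('archive/%s.json' % resource.id,
                      json.dumps(resource.to_dict()))
    # Then really delete it, instead of setting deleted = 1 and waiting
    # for a cleaner.
    session.delete(resource)
    session.commit()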

In a cloudy world of course people would be able to recreate everything
they need on-demand so who needs undelete anyway ;-)

My 0.02 cents.

-Original Message-
From: Tim Bell tim.b...@cern.ch
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Tuesday, March 11, 2014 at 11:43 AM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all][db][performance] Proposal: Get rid of
soft deletion (step by step)


Typical cases are user error where someone accidentally deletes an item
from a tenant. The image guys have a good structure where images become
unavailable and are recoverable for a certain period of time. A regular
periodic task cleans up deleted items after a configurable number of
seconds to avoid constant database growth.

My preference would be to follow this model universally (an archive table
is a nice way to do it without disturbing production).

Tim


 On Tue, Mar 11, 2014, Mike Wilson geekinu...@gmail.com wrote:
  Undeleting things is an important use case in my opinion. We do this
  in our environment on a regular basis. In that light I'm not sure that
  it would be appropriate just to log the deletion and get rid of the
  row. I would like to see it go to an archival table where it is
easily restored.
 
 I'm curious, what are you undeleting and why?
 
 JE
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Service VM: irc discussion?

2014-03-11 Thread Isaku Yamahata
Hi. Sorry about that.

Tuesdays at 23:00 UTC is correct.
I mixed up the time because of daylight saving time.
Next week (March 18) it will be held at the correct time.

Again, sorry about that.
thanks,

On Tue, Mar 11, 2014 at 04:15:27PM -0700,
Stephen Wong s3w...@midokura.com wrote:

 Hi Isaku,
 
 Seems like you had the meeting at 22:00 UTC instead of 23:00 UTC?
 
 [15:01] yamahata hello? is anybody there for servicevm meeting?
 [15:02] yamahata #startmeeting neutron/servicevm
 [15:02] openstack Meeting started Tue Mar 11 22:02:14 2014 UTC and is due
 to finish in 60 minutes.  The chair is yamahata. Information about MeetBot
 at http://wiki.debian.org/MeetBot.
 [snip]
 [15:24] yamahata #endmeeting
 [15:24] *** openstack sets the channel topic to  (Meeting topic: project).
 [15:24] openstack Meeting ended Tue Mar 11 22:24:08 2014 UTC.
  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
 
 To clarify, are you looking at Tuesdays at 22:00 UTC or 23:00 UTC?
 
 Thanks,
 - Stephen
 
 
 
 On Wed, Mar 5, 2014 at 9:57 AM, Isaku Yamahata isaku.yamah...@gmail.com wrote:
 
  Since I received some mails privately, I'd like to start weekly IRC
  meeting.
  The first meeting will be
 
Tuesdays 23:00UTC from March 11, 2014
#openstack-meeting
https://wiki.openstack.org/wiki/Meetings/ServiceVM
If you have topics to discuss, please add to the page.
 
  Sorry if the time is inconvenient for you. The schedule will also be
  discussed, and the meeting time would be changed from the 2nd one.
 
  Thanks,
 
  On Mon, Feb 10, 2014 at 03:11:43PM +0900,
  Isaku Yamahata isaku.yamah...@gmail.com wrote:
 
   As the first patch for service vm framework is ready for review[1][2],
   it would be a good idea to have IRC meeting.
   Anyone interested in it? How about schedule?
  
   Schedule candidate
   Monday  22:00UTC-, 23:00UTC-
   Tuesday 22:00UTC-, 23:00UTC-
    (Although the slot of the advanced services meeting[3] could be reused,
 it doesn't work for me because my timezone is UTC+9.)
  
   topics for
   - discussion/review on the patch
   - next steps
   - other open issues?
  
   [1] https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms
   [2] https://review.openstack.org/#/c/56892/
   [3] https://wiki.openstack.org/wiki/Meetings/AdvancedServices
   --
   Isaku Yamahata isaku.yamah...@gmail.com
 
  --
  Isaku Yamahata isaku.yamah...@gmail.com
 

-- 
Isaku Yamahata isaku.yamah...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-11 Thread Matt Riedemann



On 3/11/2014 5:11 PM, Dmitry Borodaenko wrote:

On Tue, Mar 11, 2014 at 1:31 PM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:

There was a bug reported today [1] that looks like a regression in this
new code, so we need people involved in this looking at it as soon as
possible because we have a proposed revert in case we need to yank it
out [2].

[1] https://bugs.launchpad.net/nova/+bug/1291014
[2] 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bug/1291014,n,z


Note that I have identified the source of the problem and am pushing a
patch shortly with unit tests.


My concern is how much elsewhere assumes nova is working with the glance v2
API, because there was a nova blueprint [1] to make nova work with the glance
v2 API but that never landed in Icehouse, so I'm worried about whack-a-mole
type problems here, especially since there is no tempest coverage for
testing multiple image location support via nova.

[1] https://blueprints.launchpad.net/nova/+spec/use-glance-v2-api


As I mentioned in the bug comments, the code that made the assumption
about glance v2 API actually landed in September 2012:
https://review.openstack.org/13017

The multiple image location patch simply made use of a method that was
already there for more than a year.

-DmitryB

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Yeah, I pointed that out today in IRC also.

So kudos to Jay for getting a patch up quickly, and a really nice one at 
that with extensive test coverage.


What I'd like to see in Juno is a tempest test that covers the multiple 
image locations code since it seems we obviously don't have that today. 
 How hard is something like that with an API test?


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning noVNC from github.com/kanaka

2014-03-11 Thread Sukhdev Kapur
I have noticed that even cloning devstack has failed a few times within the last
couple of hours - it had been running fairly smoothly before.

-Sukhdev



On Tue, Mar 11, 2014 at 5:05 PM, Sukhdev Kapur sukhdevka...@gmail.com wrote:

 [adding openstack-dev list as well ]

 I have noticed that this has started hitting my builds within the last few
 hours. I have seen the exact same failures on almost 10 builds.
 Looks like something has happened within the last few hours - perhaps the
 load?

 -Sukhdev



 On Tue, Mar 11, 2014 at 4:28 PM, Dane Leblanc (leblancd) 
 lebla...@cisco.com wrote:

  Apologies if this is the wrong audience for this question…



 I’m seeing intermittent failures running stack.sh whereby ‘git clone
 https://github.com/kanaka/noVNC.git /opt/stack/noVNC’ is returning
 various errors.  Below are 2 examples.



 Is this a known issue? Are there any localrc settings which might help
 here?



 Example 1:



 2014-03-11 15:00:33.779 | + is_service_enabled n-novnc

 2014-03-11 15:00:33.780 | + return 0

 2014-03-11 15:00:33.781 | ++ trueorfalse False

 2014-03-11 15:00:33.782 | + NOVNC_FROM_PACKAGE=False

 2014-03-11 15:00:33.783 | + '[' False = True ']'

 2014-03-11 15:00:33.784 | + NOVNC_WEB_DIR=/opt/stack/noVNC

 2014-03-11 15:00:33.785 | + git_clone 
 https://github.com/kanaka/noVNC.git/opt/stack/noVNC master

 2014-03-11 15:00:33.786 | + GIT_REMOTE=
 https://github.com/kanaka/noVNC.git

 2014-03-11 15:00:33.788 | + GIT_DEST=/opt/stack/noVNC

 2014-03-11 15:00:33.789 | + GIT_REF=master

 2014-03-11 15:00:33.790 | ++ trueorfalse False False

 2014-03-11 15:00:33.791 | + RECLONE=False

 2014-03-11 15:00:33.792 | + [[ False = \T\r\u\e ]]

 2014-03-11 15:00:33.793 | + echo master

 2014-03-11 15:00:33.794 | + egrep -q '^refs'

 2014-03-11 15:00:33.795 | + [[ ! -d /opt/stack/noVNC ]]

 2014-03-11 15:00:33.796 | + [[ False = \T\r\u\e ]]

 2014-03-11 15:00:33.797 | + git_timed clone
 https://github.com/kanaka/noVNC.git /opt/stack/noVNC

 2014-03-11 15:00:33.798 | + local count=0

 2014-03-11 15:00:33.799 | + local timeout=0

 2014-03-11 15:00:33.801 | + [[ -n 0 ]]

 2014-03-11 15:00:33.802 | + timeout=0

 2014-03-11 15:00:33.803 | + timeout -s SIGINT 0 git clone
 https://github.com/kanaka/noVNC.git /opt/stack/noVNC

 2014-03-11 15:00:33.804 | Cloning into '/opt/stack/noVNC'...

 2014-03-11 15:03:13.694 | error: RPC failed; result=56, HTTP code = 200

 2014-03-11 15:03:13.695 | fatal: The remote end hung up unexpectedly

 2014-03-11 15:03:13.697 | fatal: early EOF

 2014-03-11 15:03:13.698 | fatal: index-pack failed

 2014-03-11 15:03:13.699 | + [[ 128 -ne 124 ]]

 2014-03-11 15:03:13.700 | + die 596 'git call failed: [git clone'
 https://github.com/kanaka/noVNC.git '/opt/stack/noVNC]'

 2014-03-11 15:03:13.701 | + local exitcode=0

 2014-03-11 15:03:13.702 | [Call Trace]

 2014-03-11 15:03:13.703 | ./stack.sh:736:install_nova

 2014-03-11 15:03:13.705 | /var/lib/jenkins/devstack/lib/nova:618:git_clone

 2014-03-11 15:03:13.706 |
 /var/lib/jenkins/devstack/functions-common:543:git_timed

 2014-03-11 15:03:13.707 |
 /var/lib/jenkins/devstack/functions-common:596:die

 2014-03-11 15:03:13.708 | [ERROR]
 /var/lib/jenkins/devstack/functions-common:596 git call failed: [git clone
 https://github.com/kanaka/noVNC.git /opt/stack/noVNC]





 Example 2:



 2014-03-11 14:12:58.472 | + is_service_enabled n-novnc

 2014-03-11 14:12:58.473 | + return 0

 2014-03-11 14:12:58.474 | ++ trueorfalse False

 2014-03-11 14:12:58.475 | + NOVNC_FROM_PACKAGE=False

 2014-03-11 14:12:58.476 | + '[' False = True ']'

 2014-03-11 14:12:58.477 | + NOVNC_WEB_DIR=/opt/stack/noVNC

 2014-03-11 14:12:58.478 | + git_clone https://github.com/kanaka/noVNC.git 
 /opt/stack/noVNC master

 2014-03-11 14:12:58.479 | + GIT_REMOTE=https://github.com/kanaka/noVNC.git

 2014-03-11 14:12:58.480 | + GIT_DEST=/opt/stack/noVNC

 2014-03-11 14:12:58.481 | + GIT_REF=master

 2014-03-11 14:12:58.482 | ++ trueorfalse False False

 2014-03-11 14:12:58.483 | + RECLONE=False

 2014-03-11 14:12:58.484 | + [[ False = \T\r\u\e ]]

 2014-03-11 14:12:58.485 | + echo master

 2014-03-11 14:12:58.486 | + egrep -q '^refs'

 2014-03-11 14:12:58.487 | + [[ ! -d /opt/stack/noVNC ]]

 2014-03-11 14:12:58.488 | + [[ False = \T\r\u\e ]]

 2014-03-11 14:12:58.489 | + git_timed clone 
 https://github.com/kanaka/noVNC.git /opt/stack/noVNC

 2014-03-11 14:12:58.490 | + local count=0

 2014-03-11 14:12:58.491 | + local timeout=0

 2014-03-11 14:12:58.492 | + [[ -n 0 ]]

 2014-03-11 14:12:58.493 | + timeout=0

 2014-03-11 14:12:58.494 | + timeout -s SIGINT 0 git clone 
 https://github.com/kanaka/noVNC.git /opt/stack/noVNC

 2014-03-11 14:12:58.495 | Cloning into '/opt/stack/noVNC'...

 2014-03-11 14:14:02.315 | error: The requested URL returned error: 403 while 
 accessing https://github.com/kanaka/noVNC.git/info/refs

 2014-03-11 14:14:02.316 | fatal: HTTP request failed

 2014-03-11 14:14:02.317 | + [[ 128 -ne 124 ]]

 2014-03-11 14:14:02.318 | + die 596 'git call failed: [git clone' 

Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-03-11 Thread W Chan
I want to propose the following changes to implement the local executor and
the removal of the local engine.  As mentioned before, oslo.messaging includes
a fake driver that uses a simple queue.  An example of the use of this
fake driver is demonstrated in test_executor.  The fake driver requires
that both the consumer and publisher of the queue run in the same process
so the queue is in scope.  Currently, the api/engine and the executor are
launched in separate processes.

Here are the proposed changes.
1) Rewrite the launch script to be more generic, with an option to launch all
components (i.e. API, engine, executor) in the same process over separate
threads, or to launch each individually.
2) Move the transport to a global variable, similar to the global _engine,
shared by the different components.
3) Modify the engine and the executor to use a factory method to get the
global transport (a rough sketch follows below).

This doesn't change how the workflows are being processed.  It just changes
how the services are launched.
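
A rough sketch of 2) and 3) (get_transport() and the fake driver are from oslo.messaging; the module-level global and factory wiring here are only illustrative):

from oslo.config import cfg
from oslo import messaging

_TRANSPORT = None

def get_transport():
    # Lazily create a single shared transport. With the fake driver
    # ('fake://') the queue lives in-process, so the engine (publisher)
    # and the executor (consumer) must share this same object.
    global _TRANSPORT
    if _TRANSPORT is None:
        _TRANSPORT = messaging.get_transport(cfg.CONF, 'fake://')
    return _TRANSPORT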

Thoughts?
Winson
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron]A Question about creating instance with duplication sg_name

2014-03-11 Thread Xurong Yang
Hi,Lingxian  marios
Thanks for the response. Yes, personally speaking, we should be using UUIDs
instead of 'name', as with network_id and port_id, since the name (unlike the
key) can't differentiate security groups. I don't know how other folks see it;
maybe we need to fix this.
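
As a rough sketch of what a caller can do in the meantime (based on the nova lookup quoted further down; the helper name is illustrative):

def pick_security_group(neutron, tenant_id, wanted):
    # Resolve a user-supplied value to exactly one security group id,
    # accepting either an id or a unique name; otherwise fail loudly and
    # ask for the UUID.
    groups = neutron.list_security_groups(
        tenant_id=tenant_id)['security_groups']
    matches = [g for g in groups if wanted in (g['id'], g['name'])]
    if len(matches) != 1:
        raise ValueError("'%s' is ambiguous or unknown; pass the security"
                         " group UUID instead" % wanted)
    return matches[0]['id']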

thanks,Xurong


2014-03-11 21:33 GMT+08:00 mar...@redhat.com mandr...@redhat.com:

 On 11/03/14 10:20, Xurong Yang wrote:
  It's allowed to create duplicate sgs with the same name,
  so an exception happens when creating an instance with a duplicated sg name.

 Hi Xurong - fyi there is a review open which raises this particular
 point at https://review.openstack.org/#/c/79270/2 (together with
 associated bug).

 imo we shouldn't be using 'name' to distinguish security groups - that's
 what the UUID is for,

 thanks, marios

  code following:
  
  security_groups = kwargs.get('security_groups', [])
  security_group_ids = []
 
  # TODO(arosen) Should optimize more to do direct query for security
  # group if len(security_groups) == 1
  if len(security_groups):
      search_opts = {'tenant_id': instance['project_id']}
      user_security_groups = neutron.list_security_groups(
          **search_opts).get('security_groups')
 
  for security_group in security_groups:
      name_match = None
      uuid_match = None
      for user_security_group in user_security_groups:
          if user_security_group['name'] == security_group:
              if name_match:  # --- exception happened here
                  raise exception.NoUniqueMatch(
                      _("Multiple security groups found matching"
                        " '%s'. Use an ID to be more specific.") %
                      security_group)
              name_match = user_security_group['id']

 
  so it may be improper to create an instance with the sg name parameter.
  I'd appreciate any response.
 
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Service VM: irc discussion?

2014-03-11 Thread Stephen Wong
Hi Isaku,

Seems like you had the meeting at 22:00 UTC instead of 23:00 UTC?

[15:01] yamahata hello? is anybody there for servicevm meeting?
[15:02] yamahata #startmeeting neutron/servicevm
[15:02] openstack Meeting started Tue Mar 11 22:02:14 2014 UTC and is due
to finish in 60 minutes.  The chair is yamahata. Information about MeetBot
at http://wiki.debian.org/MeetBot.
[snip]
[15:24] yamahata #endmeeting
[15:24] *** openstack sets the channel topic to  (Meeting topic: project).
[15:24] openstack Meeting ended Tue Mar 11 22:24:08 2014 UTC.
 Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)

To clarify, are you looking at Tuesdays at 22:00 UTC or 23:00 UTC?

Thanks,
- Stephen



On Wed, Mar 5, 2014 at 9:57 AM, Isaku Yamahata isaku.yamah...@gmail.com wrote:

 Since I received some mails privately, I'd like to start weekly IRC
 meeting.
 The first meeting will be

   Tuesdays 23:00UTC from March 11, 2014
   #openstack-meeting
   https://wiki.openstack.org/wiki/Meetings/ServiceVM
   If you have topics to discuss, please add to the page.

 Sorry if the time is inconvenient for you. The schedule will also be
 discussed, and the meeting time would be changed from the 2nd one.

 Thanks,

 On Mon, Feb 10, 2014 at 03:11:43PM +0900,
 Isaku Yamahata isaku.yamah...@gmail.com wrote:

  As the first patch for service vm framework is ready for review[1][2],
  it would be a good idea to have IRC meeting.
  Anyone interested in it? How about schedule?
 
  Schedule candidate
  Monday  22:00UTC-, 23:00UTC-
  Tuesday 22:00UTC-, 23:00UTC-
   (Although the slot of the advanced services meeting[3] could be reused,
    it doesn't work for me because my timezone is UTC+9.)
 
  topics for
  - discussion/review on the patch
  - next steps
  - other open issues?
 
  [1] https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms
  [2] https://review.openstack.org/#/c/56892/
  [3] https://wiki.openstack.org/wiki/Meetings/AdvancedServices
  --
  Isaku Yamahata isaku.yamah...@gmail.com

 --
 Isaku Yamahata isaku.yamah...@gmail.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning noVNC from github.com/kanaka

2014-03-11 Thread Sukhdev Kapur
Hey Monty,

The issue is: when we are using stack.sh, how do we use a cache dir as opposed
to going out to github?
Is there an option which can be set to make use of this feature?

-Sukhdev



On Tue, Mar 11, 2014 at 4:42 PM, Monty Taylor mord...@inaugust.com wrote:

 Honestly not being snarky here ... The reason is that github is quite
 flaky. We try very hard to never touch it in infra. And by try, I mean we
 NEVER clone from it live, and if we absolutely can't avoid it for some
 reason, we clone into a cache dir.

 On Mar 11, 2014 4:28 PM, Dane Leblanc (leblancd) lebla...@cisco.com
 wrote:
 
  Apologies if this is the wrong audience for this question…
 
 
 
  I’m seeing intermittent failures running stack.sh whereby ‘git clone
 https://github.com/kanaka/noVNC.git /opt/stack/noVNC’ is returning
 various errors.  Below are 2 examples.
 
 
 
  Is this a known issue? Are there any localrc settings which might help
 here?
 
 
 
  [snip]

Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning noVNC from github.com/kanaka

2014-03-11 Thread Joshua Harlow
https://status.github.com/messages

* 'GitHub.com is operating normally, despite an ongoing DDoS attack. The 
mitigations we have in place are proving effective in protecting us and we're 
hopeful that we've got this one resolved.'

If you were cloning from github.com and not http://git.openstack.org then you 
were likely seeing some of the DDoS attack in action.

From: Sukhdev Kapur sukhdevka...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, March 11, 2014 at 4:08 PM
To: Dane Leblanc (leblancd) lebla...@cisco.com
Cc: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, openstack-in...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning 
noVNC from github.com/kanaka

I have noticed that even cloning devstack has failed a few times within the last 
couple of hours - it had been running fairly smoothly before.

-Sukhdev



On Tue, Mar 11, 2014 at 5:05 PM, Sukhdev Kapur sukhdevka...@gmail.com wrote:
[adding openstack-dev list as well ]

I have noticed that this has started hitting my builds within the last few hours. I 
have seen the exact same failures on almost 10 builds.
Looks like something has happened within the last few hours - perhaps the load?

-Sukhdev



On Tue, Mar 11, 2014 at 4:28 PM, Dane Leblanc (leblancd) lebla...@cisco.com wrote:
Apologies if this is the wrong audience for this question…

I’m seeing intermittent failures running stack.sh whereby ‘git clone 
https://github.com/kanaka/noVNC.git /opt/stack/noVNC’ is returning various 
errors.  Below are 2 examples.

Is this a known issue? Are there any localrc settings which might help here?

[snip]
