[openstack-dev] [Cinder] Question about synchronized decoration usage in cinder-volume

2014-04-26 Thread Zhangleiqiang (Trump)
Hi, all:

I find almost all of the @utils.synchronized decoration usage in 
cinder-volume (cinder.volume.manager / cinder.volume.drivers.*) with an 
external=True param. Such as 
cinder.volume.manager.VolumeManager:attach_volume:

    def attach_volume(self, context, volume_id, instance_uuid, host_name,
                      mountpoint, mode):
        """Updates db to show volume is attached."""
        @utils.synchronized(volume_id, external=True)
        def do_attach():
            ...

However, in docstring of common.lockutils.synchronized, I find param 
external is used for multi-workers scenario:

:param external: The external keyword argument denotes whether this lock
    should work across multiple processes. This means that if two different
    workers both run a method decorated with @synchronized('mylock',
    external=True), only one of them will execute at a time.
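
To illustrate the distinction (a minimal sketch, assuming the oslo-incubator
lockutils semantics of the time: without external, only green threads inside
one worker process are serialized; with external=True a file lock under
lock_path additionally serializes separate worker processes):

    # module path as in Icehouse-era cinder (oslo-incubator copy)
    from cinder.openstack.common import lockutils

    @lockutils.synchronized('mylock')
    def in_process_only():
        # Serialized within one process; two worker processes could
        # still run this concurrently.
        pass

    @lockutils.synchronized('mylock', external=True)
    def cross_process():
        # Additionally holds a file-based lock, so at most one worker
        # process on this host runs this at a time.
        pass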

I have two questions about it.
1. As far as I know, cinder-api supports multi-worker mode but 
cinder-volume does not. So I wonder why the external=True param is used 
here?
2. Specific to cinder.volume.manager.VolumeManager:attach_volume, all 
operations in the do_attach method are database related. As said in [1], 
operations on the database will block the main thread of a service, so my 
second question is why this method needs to be synchronized at all.

Thanks.

[1] 
http://docs.openstack.org/developer/cinder/devref/threading.html#mysql-access-and-eventlet
--
zhangleiqiang (Trump)

Best Regards



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nova compute error

2014-04-26 Thread abhishek jain
Hi Ricky

Thanks. You are right.
I'm getting the following error after running the command:

 sudo ip link add qvbb2fc7c52-ae type veth peer name qvob2fc7c52-ae
 RTNETLINK answers: Operation not supported

 Below is the description of my system:

uname -a
Linux t4240-ubuntu1310 3.8.13-rt9-QorIQ-SDK-V1.4 #3 SMP Wed Apr 23 12:11:58
CDT 2014 ppc64 ppc64 ppc64 GNU/Linux

Please help regarding this.


On Sat, Apr 26, 2014 at 2:13 PM, Bohai (ricky) bo...@huawei.com wrote:

  It seems that the command “ip link add qvbb2fc7c52-ae type veth peer
 name qvob2fc7c52-ae” failed.

 Maybe you can try it manually and confirm whether it’s the reason.



 Best regards to you.

 Ricky



 *From:* abhishek jain [mailto:ashujain9...@gmail.com]
 *Sent:* Saturday, April 26, 2014 3:25 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* [openstack-dev] nova compute error



 Hi all..

 I'm getting the following nova-compute error and thus am not able to boot
 VMs on my compute node...


 nova-compute service stopped and started giving the following error...
 2014-04-25 15:32:00.112 6501 TRACE
 nova.openstack.common.threadgroup cmd=' '.join(cmd))
 2014-04-25 15:32:00.112 6501 TRACE nova.openstack.common.threadgroup
 ProcessExecutionError: Unexpected error while running command.
 2014-04-25 15:32:00.112 6501 TRACE nova.openstack.common.threadgroup
 Command: ip link add qvbb2fc7c52-ae type veth peer name qvob2fc7c52-ae
 2014-04-25 15:32:00.112 6501 TRACE nova.openstack.common.threadgroup Exit
 code: 2
 2014-04-25 15:32:00.112 6501 TRACE nova.openstack.common.threadgroup
 Stdout: ''
 2014-04-25 15:32:00.112 6501 TRACE nova.openstack.common.threadgroup
 Stderr: 'RTNETLINK answers: Operation not supported\n'
 2014-04-25 15:32:00.112 6501 TRACE nova.openstack.common.threadgroup

 I'm able to run other services such as nova and q-agt on the compute node,
 and the compute node is reflected on the controller node and vice versa.

 Please help me regarding this.

 Thanks

 Abhishek Jain

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to implement and configure a new Neutron vpnaas driver from scratch?

2014-04-26 Thread Julio Carlos Barrera Juez
I'm trying to configure a VPNaaS plugin in single-provider mode, but I'm not
able to achieve this goal. I'm using a devstack installation and I'm
editing the */etc/neutron/neutron.conf* file, modifying this line:

...
service_provider=VPN:cisco_csr:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default
...

and */etc/neutron/vpn_agent.ini*, modifying this line:

*...*

*vpn_device_driver=neutron.services.vpn.device_drivers.ipsec.IPsecDriver*
*...*

I'm not sure if this configuration is OK. I have some doubts:

- Is this configuration valid, given that the plugin is available on the
Python module path?
- Where are the log files located to check valid neutron configuration?
- What services should I restart each time I change this configuration?

Thank you very much.


Julio C. Barrera Juez
Office phone: +34 93 357 99 27
Distributed Applications and Networks Area (DANA)
i2CAT Foundation, Barcelona, Spain
http://dana.i2cat.net


On 24 April 2014 16:14, Paul Michali (pcm) p...@cisco.com wrote:

  Not sure I quite understand the question, but configuring VPNaaS in
 single-provider mode is the same from a user's perspective (see
 api.openstack.org).

  To bring up a cloud that uses a different vendor's service and device
 driver, you need to modify neutron.conf to select the vendor's service
 driver (as the default driver), instead of the reference driver, and in
 vpn_agent.ini you select the vendor's device driver (instead of or in
 addition to the reference implementation, doesn't matter, as it pairs with
 the service driver).
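
  For instance, a hedged sketch of those two edits for the Cisco driver
 (the class paths below are my reading of the Icehouse in-tree
 cisco_ipsec.py modules; verify them against your tree):

     # /etc/neutron/neutron.conf -- the vendor service driver as default
     [service_providers]
     service_provider=VPN:cisco:neutron.services.vpn.service_drivers.cisco_ipsec.CiscoCsrIPsecVPNDriver:default

     # /etc/neutron/vpn_agent.ini -- the paired vendor device driver
     [vpnagent]
     vpn_device_driver=neutron.services.vpn.device_drivers.cisco_ipsec.CiscoCsrIPsecDriver

  After changing them, restart neutron-server and the VPN agent so both
 pick up the new drivers.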

  HTHs,


  PCM (Paul Michali)

  MAIL . p...@cisco.com
 IRC ... pcm_ (irc.freenode.com)
 TW  @pmichali
 GPG Key ... 4525ECC253E31A83
 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



  On Apr 24, 2014, at 3:13 AM, Julio Carlos Barrera Juez 
 juliocarlos.barr...@i2cat.net wrote:

  OK, thank you guys. I understood that it is not possible to configure
 and make any VPNaaS plugin work in a multi-provider environment. That
 doesn't matter for now, because it works in single-provider mode. I knew
 about the Cisco implementation, but I don't know how to configure it,
 because I didn't find enough documentation about that topic. I need some
 help with the basics of configuring a VPNaaS plugin in single-provider
 mode, because I only found information about it in 3rd-party blog posts,
 etc.

  What are the basic steps?

  Thank you again.


  Julio C. Barrera Juez
 Office phone: +34 93 357 99 27
 Distributed Applications and Networks Area (DANA)
 i2CAT Foundation, Barcelona, Spain
 http://dana.i2cat.net


 On 18 April 2014 10:50, Bo Lin l...@vmware.com wrote:

  Hi Julio,
 +1 for Paul's response. Multiple-provider VPNaaS support is delayed. But
 you can take https://review.openstack.org/#/c/74156/ and
 https://review.openstack.org/#/c/74144/ as examples to write your own
 vpnaas driver without multi-provider support. If you have any questions, or
 problems in your code that keep it from working, just upload your code onto
 the review board and we can find out how to solve it :).

  Thanks!
 ---Bo


  --
  *From: *Paul Michali (pcm) p...@cisco.com

 *To: *OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
  *Sent: *Friday, April 11, 2014 2:15:18 AM

 *Subject: *Re: [openstack-dev] How to implement and configure a new
 Neutron vpnaas driver from scratch?

  By not working do you mean you cannot get the plugin to work in a
 multi-provider environment? Multi-provider solutions have been tabled until
 Juno, where more discussion is occurring on what is the best way to support
 different service providers.

  However, you should be able to get the plugin to work as the sole VPN
 service provider, which is what the Cisco solution does currently. You can
 look at how I've done that in the cisco_ipsec.py modules in the
 service_drivers and device_drivers directories, under neutron/services/vpn/.


  Regards,

   PCM (Paul Michali)

  MAIL . p...@cisco.com
 IRC ... pcm_ (irc.freenode.com)
 TW  @pmichali
 GPG Key ... 4525ECC253E31A83
 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83


Re: [openstack-dev] [nova] Proposal: remove the server groups feature

2014-04-26 Thread Christopher Yeoh
On Fri, 25 Apr 2014 18:28:38 -0400
Jay Pipes jaypi...@gmail.com wrote:
 
  i) This is a feature that was discussed in at least one if not two
  Design Summits and went through a long review period, it wasn't one
  of those changes that merged in 24 hours before people could take a
  good look at it.
 
 Completely understood. That still doesn't mean we can't propose to get
 rid of it early instead of letting it sit around when an alternate
 implementation would be better for the user of OpenStack.

Long term stability is also very important though - the balance
between perfect and good enough. I think that raising the bar on what
we allow in the first place is really the key here (and that as
discussed previously may involve new features being considered
experimental for a period of time).

 
Whatever you feel about the implementation,  it is now in the API
  and we should assume that people have started coding against it.
 
 Sure, maybe. AFAIK, it's only in the v2 API, though, not in the v3 API
 (sorry, I made a mistake about that in my original email). Is there a
 reason it wasn't added to the v3 API?
 

We did have a pretty strong rule for most of the Icehouse
development cycle to only merge new API features if the change was
added either first to the V3 API or at the same time as the V2 API.
However this (almost unintentionally) ended up getting relaxed whilst
all the V2 vs V3 API discussions were occurring. As a result there are
some features that were merged into V2 that we definitely need to now
add to the V3 API in Juno.

Since the V3 API is still experimental we have some flexibility, but
transition pain for those moving from V2 to V3 is still going to be a
factor in terms of what we want to support.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nova compute error

2014-04-26 Thread Hai Bo
Maybe this link will help you:
http://superuser.com/questions/232807/iproute2-not-functioning-rtnetlink-answers-operation-not-supported
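
In case it's useful, here is a minimal sketch of reproducing the failing
step outside of nova (assumptions: iproute2 and modprobe are present, and
you run it as root; the device names are arbitrary test names):

    import subprocess

    def kernel_supports_veth():
        # Load the veth module if it is not built in (no-op otherwise).
        subprocess.call(['modprobe', 'veth'])
        rc = subprocess.call(['ip', 'link', 'add', 'veth-test0',
                              'type', 'veth', 'peer', 'name', 'veth-test1'])
        if rc == 0:
            subprocess.call(['ip', 'link', 'delete', 'veth-test0'])
            return True
        # "RTNETLINK answers: Operation not supported" usually means the
        # running kernel was built without CONFIG_VETH.
        return False

    print(kernel_supports_veth())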

Best regards to you.
Ricky


On Sat, Apr 26, 2014 at 6:29 PM, abhishek jain ashujain9...@gmail.com wrote:

 Hi Ricky

 Thanks. You are right.
 I'm getting the following error after running the command:

  sudo ip link add qvbb2fc7c52-ae type veth peer name qvob2fc7c52-ae

  RTNETLINK answers: Operation not supported

  Below is the description of my system..

 uname -a
 Linux t4240-ubuntu1310 3.8.13-rt9-QorIQ-SDK-V1.4 #3 SMP Wed Apr 23
 12:11:58 CDT 2014 ppc64 ppc64 ppc64 GNU/Linux

 Please help regarding this.


 On Sat, Apr 26, 2014 at 2:13 PM, Bohai (ricky) bo...@huawei.com wrote:

  It seems that the command “ip link add qvbb2fc7c52-ae type veth peer
 name qvob2fc7c52-ae” failed.

 Maybe you can try it manually and confirm whether it’s the reason.



 Best regards to you.

 Ricky



 *From:* abhishek jain [mailto:ashujain9...@gmail.com]
 *Sent:* Saturday, April 26, 2014 3:25 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* [openstack-dev] nova compute error



 Hi all..

 I'm getting the following nova-compute error and thus am not able to boot
 VMs on my compute node...


 nova-compute service stopped and started giving the following error...
 2014-04-25 15:32:00.112 6501 TRACE
 nova.openstack.common.threadgroup cmd=' '.join(cmd))
 2014-04-25 15:32:00.112 6501 TRACE nova.openstack.common.threadgroup
 ProcessExecutionError: Unexpected error while running command.
 2014-04-25 15:32:00.112 6501 TRACE nova.openstack.common.threadgroup
 Command: ip link add qvbb2fc7c52-ae type veth peer name qvob2fc7c52-ae
 2014-04-25 15:32:00.112 6501 TRACE nova.openstack.common.threadgroup Exit
 code: 2
 2014-04-25 15:32:00.112 6501 TRACE nova.openstack.common.threadgroup
 Stdout: ''
 2014-04-25 15:32:00.112 6501 TRACE nova.openstack.common.threadgroup
 Stderr: 'RTNETLINK answers: Operation not supported\n'
 2014-04-25 15:32:00.112 6501 TRACE nova.openstack.common.threadgroup

 I'm able to run other services such as nova and q-agt on compute node and
 also the compute node is reflected on the controller node and vice versa.

 Please help me regarding this.

 Thanks

 Abhishek Jain

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: remove the server groups feature

2014-04-26 Thread Hai Bo
On Sat, Apr 26, 2014 at 5:15 AM, Jay Pipes jaypi...@gmail.com wrote:

 Hi Stackers,

 When recently digging in to the new server group v3 API extension
 introduced in Icehouse, I was struck with a bit of cognitive dissonance
 that I can't seem to shake. While I understand and support the idea
 behind the feature (affinity and anti-affinity scheduling hints), I
 can't help but feel the implementation is half-baked and results in a
 very awkward user experience.

 The use case here is very simple:

 Alice wants to launch an instance and make sure that the instance does
 not land on a compute host that contains other instances of that type.

 The current user experience is that the user creates a server group
 like so:

 nova server-group-create $GROUP_NAME --policy=anti-affinity

 and then, when the user wishes to launch an instance and make sure it
 doesn't land on a host with another of that instance type, the user does
 the following:

 nova boot --group $GROUP_UUID ...

 There are myriad problems with the above user experience and
 implementation. Let me explain them.

 1. The user isn't creating a server group when they issue a nova
 server-group-create call. They are creating a policy and calling it a
 group. Cognitive dissonance results from this mismatch.

 2. There's no way to add an existing server to this group. What this
 means is that the user needs to effectively have pre-considered their
 environment and policy before ever launching a VM. To realize why this
 is a problem, consider the following:

  - User creates three VMs that consume high I/O utilization
  - User then wants to launch three more VMs of the same kind and make
 sure they don't end up on the same hosts as the others

 No can do, since the first three VMs weren't started using a --group
 scheduler hint.

 3. There's no way to remove members from the group

 4. There's no way to manually add members to the server group

 5. The act of telling the scheduler to place instances near or away from
 some other instances has been hidden behind the server group API, which
 means that users doing a nova help boot will see a --group option that
 doesn't make much sense, as it doesn't describe the scheduling policy
 activity.

 Proposal
 

 I propose to scrap the server groups API entirely and replace it with a
 simpler way to accomplish the same basic thing.

 Create two new options to nova boot:

  --near-tag TAG
 and
  --not-near-tag TAG


Hi Jay,

I have a little question: will it support multiple tags per server?
We might want a server to be near some servers and, at the same time, not
near other servers.

Best regards to you.
Ricky



 The first would tell the scheduler to place the new VM near other VMs
 having a particular tag. The latter would tell the scheduler to place
 the new VM *not* near other VMs with a particular tag.

 What is a tag? Well, currently, since the Compute API doesn't have a
 concept of a single string tag, the tag could be a key=value pair that
 would be matched against the server extra properties.
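
 For illustration, a hypothetical invocation (neither flag exists today;
 the tag here is a key=value pair as just described):

     nova boot --flavor m1.xlarge --image $IMAGE_UUID \
          --not-near-tag workload=high-io my-new-vm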

 Once a real user-controlled simple string tags system is added to the
 Compute API, a tag would be just that, a simple string that may be
 attached or detached from some object (in this case, a server object).

 How does this solve all the issues highlighted above? In order, it
 solves the issues like so:

 1. There's no need to have any server group object any more. Servers
 have a set of tags (key/value pairs in v2/v3 API) that may be used to
 identify a type of server. The activity of launching an instance would
 now have options for the user to indicate their affinity preference,
 which removes the cognitive dissonance that happens due to the user
 needing to know what a server group is (a policy, not a group).

 2. Since there is no more need to maintain a separate server group
 object, if a user launched 3 instances and then wanted to make sure that
 3 new instances don't end up on the same hosts, all the user needs to do
 is tag the existing instances with a tag, and issue a call to:

  nova boot --not-near-tag $TAG ...

 and the affinity policy is applied properly.

 3. Removal of members of the server group is no longer an issue.
 Simply untag a server to remove it from the set of servers you wish to
 use in applying some affinity policy

 4. Likewise, since there's no server group object, in order to relate an
 existing server to another is to simply place a tag on the server.

 5. The act of applying affinity policies is now directly related to the
 act of launching instances, which is where it should be.

 I'll type up a real blueprint spec for this, but wanted to throw the
 idea out there, since it's something that struck me recently when I
 tried to explain the new server groups feature.

 Thoughts and feedback welcome,
 -jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] Gerrit downtime and upgrade on 2014-04-28

2014-04-26 Thread Yuriy Taraday
On Fri, Apr 25, 2014 at 11:41 PM, Zaro zaro0...@gmail.com wrote:

 Do you mean making it default to WIP on every patchset that gets
 uploaded?


No. I mean carrying WIP forward to all new patch sets once it is set, just
like Code-Review -2 is handled by default.

Gerrit 2.8 does allow you to carry the same label score forward[1] if
 it's either a trivial rebase or no code has changed.  We plan to set
 these options for the 'Code-Review' label, but not the Workflow label.

 [1]
 https://gerrit-review.googlesource.com/Documentation/config-labels.html


It looks like the copyMinScore option for the Workflow label will do what
I'm talking about.
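
For reference, a hedged sketch of the Gerrit 2.8 project.config stanza that
should do it (assuming the Workflow label uses a negative score for WIP, as
in OpenStack's setup):

    [label "Workflow"]
        copyMinScore = true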

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-26 Thread lihuiba
Hmm, I totally see the value of doing this. Not sure that there could be
the same kinds of liveness guarantees with non-shared-storage, but I
am certainly happy to see a proof of concept in this area! :)
By liveness, if you mean the downtime of migration, our current results
show that liveness is guaranteed with non-shared storage. Some preliminary
work has been published at the SOSE14 conference and can be found at
http://www.vmthunder.org/dlsm_sose2014_final.pdf   We have since made
some improvements to it, and the work is still under development. We
are planning to write a new paper and submit it to another conference
this summer.




 how about zero-copying?

It would be an implementation detail within nova.image.api.copy()
function (and the aforementioned image bits mover library) :)

IMHO, (pre-)copying and zero-copying are different in nature, and it's
not necessary to mask that difference behind a single interface. With two
sets of interfaces, programmers (users of the copying service) will be
reminded of the up-front cost of (pre-)copying, or the risk of runtime
network congestion with zero-copying.
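
As a hypothetical illustration of such a split (the names are invented for
this sketch, not an existing nova API):

    class ImageTransferAPI(object):
        # Hypothetical interface: two explicit entry points instead of
        # hiding the transfer strategy behind one call.

        def copy(self, from_uri, to_uri):
            # Pre-copy: pay the full transfer cost up front; return a
            # task the caller can wait on.
            raise NotImplementedError

        def map(self, from_uri, to_path):
            # Zero-copy: expose the remote image at to_path and fetch
            # blocks on demand, at the risk of runtime congestion.
            raise NotImplementedError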



At 2014-04-23 23:02:29,Jay Pipes jaypi...@gmail.com wrote:
On Wed, 2014-04-23 at 13:56 +0800, lihuiba wrote:
 For live migration, we use shared storage so I don't think it's quite
 the same as getting/putting image bits from/to arbitrary locations.
 With a good zero-copy transfer lib, live migration support can be 
 extended to non-shared storage, or cross-datacenter. It's a kind of
 value.

Hmm, I totally see the value of doing this. Not sure that there could be
the same kinds of liveness guarantees with non-shared-storage, but I
am certainly happy to see a proof of concept in this area! :)

 task = image_api.copy(from_path_or_uri, to_path_or_uri)
 # do some other work
 copy_task_result = task.wait()
 +1  looks cool!
 how about zero-copying?

It would be an implementation detail within nova.image.api.copy()
function (and the aforementioned image bits mover library) :)

The key here is to leak as little implementation detail out of the
nova.image.api module

Best,
-jay

 At 2014-04-23 07:21:27,Jay Pipes jaypi...@gmail.com wrote:
 Hi Vincent, Zhi, Huiba, sorry for delayed response. See comments inline.
 
 On Tue, 2014-04-22 at 10:59 +0800, Sheng Bo Hou wrote:
  I actually support the idea Huiba has proposed, and I am thinking of
  how to optimize the large data transfer(for example, 100G in a short
  time) as well. 
  I registered two blueprints in nova-specs, one is for an image upload
  plug-in to upload the image to
  glance(https://review.openstack.org/#/c/84671/), the other is a data
  transfer plug-in(https://review.openstack.org/#/c/87207/) for data
  migration among nova nodes. I would like to see other transfer
  protocols, like FTP, bitTorrent, p2p, etc, implemented for data
  transfer in OpenStack besides HTTP. 
  
  Data transfer may have many use cases. I summarize them into two
  catalogs. Please feel free to comment on it. 
  1. The machines are located in one network, e.g. one domain, one
  cluster, etc. The characteristic is the machines can access each other
  directly via the IP addresses(VPN is beyond consideration). In this
  case, data can be transferred via iSCSI, NFS, and definitive zero-copy
  as Zhiyan mentioned. 
  2. The machines are located in different networks, e.g. two data
  centers, two firewalls, etc. The characteristic is the machines can
  not access each other directly via the IP addresses(VPN is beyond
  consideration). The machines are isolated, so they can not be
  connected with iSCSI, NFS, etc. In this case, data have to go via the
  protocols, like HTTP, FTP, p2p, etc. I am not sure whether zero-copy
  can work for this case. Zhiyan, please help me with this doubt. 
  
  I guess for data transfer, including image downloading, image
  uploading, live migration, etc, OpenStack needs to taken into account
  the above two catalogs for data transfer.
 
 For live migration, we use shared storage so I don't think it's quite
 the same as getting/putting image bits from/to arbitrary locations.
 
   It is hard to say that one protocol is better than another, and one
  approach prevails another(BitTorrent is very cool, but if there is
  only one source and only one target, it would not be that faster than
  a direct FTP). The key is the use
  case(FYI:http://amigotechnotes.wordpress.com/2013/12/23/file-transmission-with-different-sharing-solution-on-nas/).
 
 Right, a good solution would allow for some flexibility via multiple
 transfer drivers.
 
  Jay Pipes has suggested we figure out a blueprint for a separate
  library dedicated to the data(byte) transfer, which may be put in oslo
  and used by any projects in need (Hoping Jay can come in:-)). Huiba,
  Zhiyan, everyone else, do you think we come up with a blueprint about
  the data transfer in oslo can work?
 
 Yes, so I believe the most appropriate solution is to create a library
 -- in oslo or a standalone library like taskflow -- 

Re: [openstack-dev] [neutron] status of VPNaaS and FWaaS APIs in Icehouse

2014-04-26 Thread Sumit Naiksatam
The so-called multivendor work now depends on the flavors
framework.  Patches were presented in the Icehouse release to enable
multivendor support using the service-type framework; however, there
were concerns about the use of that framework, and hence those patches
were not approved in time.

A patch was presented in FWaaS to address the router/service insertion
issue, however that was blocked as well.

With the new blueprint process in place, the process on all of the
above patches has pretty much been reset.

Thanks,
~Sumit.

On Fri, Apr 25, 2014 at 6:48 AM, Akihiro Motoki mot...@da.jp.nec.com wrote:
 We need to correct the previous reply.
 
 Both should still be considered experimental, because
 the multivendor work was NOT completed in Icehouse.
 
 We can use only one service backend for each service, and
 there is no way to choose a backend when creating a service instance.

 In addition, the FWaaS API does not provide a way to specify
 the router which a firewall instance is applied to.
 That will be addressed in the service insertion blueprint.

 Akihiro

 (2014/04/25 6:55), McCann, Jack wrote:
 Thanks Mark.

 What steps are necessary to promote these APIs beyond experimental?

 - Jack

 -Original Message-
 From: Mark McClain [mailto:mmccl...@yahoo-inc.com]
 Sent: Thursday, April 24, 2014 11:07 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [neutron] status of VPNaaS and FWaaS APIs in
 Icehouse


 On Apr 23, 2014, at 6:20 PM, McCann, Jack jack.mcc...@hp.com wrote:

 Are VPNaaS and FWaaS APIs still considered experimental in Icehouse?

 For VPNaaS, [1] says This extension is experimental for the Havana 
 release.
 For FWaaS, [2] says The Firewall-as-a-Service (FWaaS) API is an 
 experimental
 API...


  Thanks for asking.  Both should still be considered experimental because
  the multivendor work was completed in Icehouse.


 [1] 
 http://docs.openstack.org/api/openstack-network/2.0/content/vpnaas_ext.html
 [2] http://docs.openstack.org/admin-guide-cloud/content/fwaas.html



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder][Manila]

2014-04-26 Thread Alun Champion
I'm sure this has been discussed; I just couldn't find any reference to
it, so perhaps someone can point me to the discussion/rationale.
Is there any reason why there needs to be another service to present a
control plane to storage? Obviously object storage is different, as it
presents a data-plane API, but from a control-plane perspective I'm
confused why there needs to be another service; surely control planes
are pretty similar, and the underlying networking issues for iSCSI
would be similar for NFS/CIFS.
Trove is looking to be a general-purpose data container
(control-plane) service for traditional RDBMS, NoSQL, key-value, etc.,
so why is the Cinder API not suitable for providing a general-purpose
storage container (control-plane) service?

Creating separate services will complicate other services, e.g. Trove.

Thoughts?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][globalization] Need input on how to proceed .

2014-04-26 Thread Jay S. Bryant
All,

I am looking for feedback on how to complete implementation of i18n
support for Cinder.  I need to open a new BluePrint for Juno as soon as
the cinder-specs process is available.  In the mean time I would like to
start working on this and need feedback on the scope I should undertake
with this.

First, the majority of the code for i18n support went in with Icehouse.
There is just a small change that is needed to actually enable Lazy
Translation again.  I want to get this enabled as soon as possible to
get plenty of runtime on the code for Icehouse.

The second change is to add an explicit import of '_' to all of our
files, to be consistent with other projects. [1]  This is also the safer
way to implement i18n.  My plan is to integrate the change as part of
the i18n work.  Unfortunately this will touch many of the files in
Cinder.

That brings me to the item I need feedback on.  It appears that Nova is
moving forward with the plan to remove translation of debug messages, as
there was a recent patch submitted to enable a check for translated DEBUG
messages.  Given that, would it be an appropriate time, while adding the
explicit import of '_', to also remove translation of debug messages?  It
is going to make the commit for enabling Lazy Translation much bigger, but
it would also take out several work items that need to be addressed at
once.  I am willing to undertake the effort if I have support for the
changes.
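
For reference, a minimal sketch of what a converted module could look like
(the module path is the Icehouse-era oslo-incubator location; adjust if it
has moved). It shows both the explicit import and the untranslated-DEBUG
rule:

    import logging

    # Explicit, per-module import instead of relying on the '_' builtin
    # installed by gettextutils.install().
    from cinder.openstack.common.gettextutils import _

    LOG = logging.getLogger(__name__)

    def report_missing(volume_id):
        # Operator-facing messages stay translated...
        LOG.warning(_('Volume %s not found.'), volume_id)
        # ...while DEBUG messages would be left untranslated.
        LOG.debug('lookup miss for volume %s', volume_id)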

Please let me know your thoughts.

Thanks!
Jay
(jungleboyj on freenode)

[1] https://bugs.launchpad.net/cinder/+bug/1306275


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] PGP keysigning party for Juno summit in Atlanta?

2014-04-26 Thread Clint Byrum
Just a friendly reminder to add yourself to this list if you are
interested in participating in the key signing in Atlanta:

https://wiki.openstack.org/wiki/OpenPGP_Web_of_Trust/Juno_Summit

Now that we have more visibility about schedules, I think we should try
to find a time slot. Does anybody have an idea already? If not I think
we should just pick a break time period and get it done.

Excerpts from Thomas Goirand's message of 2014-03-29 23:32:55 -0700:
 On 03/30/2014 10:00 AM, Mark Atwood wrote:
  Hi!
  
  Are there plans for a PGP keysigning party at the Juno Summit in
  Atlanta, similar to the one at the Icehouse summit in Hong Kong?
  
  Inspired by the URL at
  https://wiki.openstack.org/wiki/OpenPGP_Web_of_Trust/Icehouse_Summit
  I looked for 
  https://wiki.openstack.org/wiki/OpenPGP_Web_of_Trust/Juno_Summit
  to discover that that wiki page does not yet exist and I do not have
  permission to create it.
  
  ..m
 
 If there's none, then we should do one.
 
 One thing about the last key signing party is that I didn't really like
 the photocopy method. IMO, it'd be much nicer to use a file, posted
 somewhere, containing all participants' fingerprints. To check that
 file's validity together, we check its sha256 sum (someone reads it out
 loud, while everyone checks their own copy). And everyone, individually,
 checks for their own PGP fingerprint inside the file. Then we just need
 to validate entries in this file (with matching ID documents).
 
 Otherwise, there's the question of the trustworthiness of the photocopy
 machine and such... Not that I don't trust Jimmy (I do...)! :)
 
 Plus, having a text file with all fingerprints in it is more convenient:
 you can just cut/paste the whole fingerprint and do gpg --recv-keys on it
 directly (and not just the key ID, which is unsafe because it is prone to
 brute-force). That file can be posted anywhere, provided that we check
 its sha256 sum.
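 
 A minimal sketch of that check (the file name is arbitrary; everyone runs
 this locally and compares the digest read out loud):
 
     import hashlib
 
     with open('juno-keysigning-fingerprints.txt', 'rb') as f:
         print(hashlib.sha256(f.read()).hexdigest())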
 
 I would happily organize this, if someone can find a *quiet* room with
 decent network. Who can take care of the place and time?
 
 Of course, we will need the fingerprints of every participant in
 advance, so the wiki page would be useful as well. I therefore created
 the wiki page:
 https://wiki.openstack.org/wiki/OpenPGP_Web_of_Trust/Juno_Summit
 
 Please add yourself. We'll see if I can make it to Atlanta, and organize
 something later on.
 
 Cheers,
 
 Thomas Goirand (zigo)
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Manila]

2014-04-26 Thread Swartzlander, Ben
 -Original Message-
 From: Alun Champion [mailto:p...@achampion.net] 
 Sent: Saturday, April 26, 2014 7:19 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Cinder][Manila]

 I'm sure this has been discussed I just couldn't find any reference to it, 
 perhaps someone can point me to the discussion/rationale.
 Is there any reason why there needs to be another service to present a 
 control-plane to storage? Obviously object storage is
 different as that is presenting a data-plane API but from a control-plane I'm 
 confused why there needs to be another service,
 surely control-planes are pretty similar and the underlying networking issues 
 for iSCSI would be similar for NFS/CIFS.
 Trove is looking to be a general purpose data container
 (control-plane) service for traditional RDBMS, NoSQL, KeyValue, etc., why is 
 the Cinder API not suitable for providing a general
 purpose storage container (control-plane) service?

 Creating separate services will complicate other services, e.g. Trove.

 Thoughts?

There are good arguments on both sides of this question. There is substantial 
overlap between Cinder and Manila in their API constructs and backends (they 
both deal with storage, after all). In the long run it's entirely possible that 
the 2 projects could be merged.

However there are also some very important differences. In particular Cinder 
knows almost nothing about networking, but Manila needs to know a great deal 
about individual tenant networks in order to deliver NAS storage to tenants. 
Cinder can rely on hypervisors to do some of the hard work of translating block 
protocols and managing attaching/detaching whereas Manila routes around the 
hypervisor entirely and connects guest VMs with storage directly. The most 
important reason Manila ended up as a separate project from Cinder was because 
the Cinder team didn't want the distraction of dealing with some of the very 
hard technical problems that needed solving for Manila to be successful.

After working on Manila for the past year and struggling with a lot of hard 
technical decisions I think it was the right decision to split the projects. If 
Manila had remained a subproject of Cinder then it either wouldn't have 
received near the attention it needed or it would have sucked attention away 
from a lot of important issues that the Cinder team is dealing with.

If there's a future where Manila and Cinder merge back together then I'm pretty 
sure it's quite far away. The best thing we can do is strive to make both 
projects successful and keep asking these hard questions.

-Ben Swartzlander (Manila PTL)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] looking to add support for server groups to heat...any comments?

2014-04-26 Thread Jay Lau
Just noticed this email, I have already filed a blueprint related to this
topic https://blueprints.launchpad.net/heat/+spec/vm-instance-group-support

My idea is: can we add a new field, such as PlacementPolicy, to
AutoScalingGroup? If the value is affinity, then when the heat engine
creates the AutoScalingGroup, it will first create a server group with the
affinity policy; then, when creating VM instances for the AutoScalingGroup,
the heat engine will pass the server group id as a scheduler hint, so as to
make sure all the VM instances in the AutoScalingGroup are created with the
affinity policy.

resources:
  WorkloadGroup:
type: AWS::AutoScaling::AutoScalingGroup
properties:
  AvailabilityZones: [nova]
  LaunchConfigurationName: {Ref: LaunchConfig}
  PlacementPolicy: [affinity] 
  MaxSize: 3
  MinSize: 2



2014-04-26 5:27 GMT+08:00 Zane Bitter zbit...@redhat.com:

 On 25/04/14 16:07, Chris Friesen wrote:

 On 04/25/2014 12:00 PM, Zane Bitter wrote:

 On 25/04/14 13:50, Chris Friesen wrote:


  In the nova boot command we pass the group uuid like this:

 --hint group=e4cf5dea-4831-49a1-867d-e263f2579dd0

 If we were to make use of the scheduler hints, how would that look?
 Something like this?  (I'm not up to speed on my YAML, so forgive me if
 this isn't quite right.)  And how would this look if we wanted to
 specify other scheduler hints as well?

cirros_server1:
  type: OS::Nova::Server
  properties:
name: cirros1
image: 'cirros'
flavor: 'm1.tiny'
scheduler_hints: {group: { get_resource: my_heat_group }}


 Something like that (I don't think you need the quotes around group).
 Or, equivalently:

cirros_server1:
  type: OS::Nova::Server
  properties:
name: cirros1
image: 'cirros'
flavor: 'm1.tiny'
scheduler_hints:
  group: { get_resource: my_heat_group }


 Okay...assuming it works like that then that looks fine to me.


 Cool, +1 for that then.


  If we go this route then the changes are confined to a single new file.
   Given that, do we need a blueprint or can I just submit the code for
 review once I port it to the current codebase?


 I guess wearing my PTL hat I ought to say that you should still raise a
 blueprint (no real content necessary though, or just link to this thread).

 Wearing my core team hat, I personally couldn't care less either way ;)
 The change is self-explanatory and you've already done a good job of
 consulting on the changes before submitting them.

 cheers,
 Zane.
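
  (For context, the single new file Chris mentions would be a resource
 plugin along these lines: a hedged sketch based on the heat plugin pattern
 of the time, not the actual patch; the client and method names here are
 assumptions to be verified against the tree:

     from heat.engine import properties
     from heat.engine import resource

     class NovaServerGroup(resource.Resource):
         # Sketch only: verify property schema and client helpers
         # against the actual heat plugin API.
         properties_schema = {
             'policies': properties.Schema(
                 properties.Schema.LIST,
                 default=['anti-affinity'],
             ),
         }

         def handle_create(self):
             group = self.nova().server_groups.create(
                 name=self.physical_resource_name(),
                 policies=self.properties['policies'])
             self.resource_id_set(group.id)

         def handle_delete(self):
             if self.resource_id:
                 self.nova().server_groups.delete(self.resource_id)

     def resource_mapping():
         return {'OS::Nova::ServerGroup': NovaServerGroup}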


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] looking to add support for server groups to heat...any comments?

2014-04-26 Thread Mike Spreitzer
Jay Lau jay.lau@gmail.com wrote on 04/26/2014 11:41:25 PM:

 Just noticed this email, I have already filed a blueprint related to
 this topic https://blueprints.launchpad.net/heat/+spec/vm-instance-
 group-support

 My idea is that can we add a new field such as PlacementPolicy to
 AutoScalingGroup? If the value is affinity, then when heat engine 
 create the AutoScalingGroup, it will first create a server group 
 with affinity policy, then when create VM instance for the 
 AutoScalingGroup, heat engine will transfer the server group id as 
 scheduler hints so as to make sure all the VM instances in the 
 AutoScalingGroup can be created with affinity policy.
 
 resources:
   WorkloadGroup:
 type: AWS::AutoScaling::AutoScalingGroup
 properties:
   AvailabilityZones: [nova]
   LaunchConfigurationName: {Ref: LaunchConfig}
   PlacementPolicy: [affinity] 
   MaxSize: 3
   MinSize: 2

Remember that Heat has two resource types named 
something::something::AutoScalingGroup, plus OS::Heat::InstanceGroup, 
and also OS::Heat::ResourceGroup.  Two of those four kinds of group are 
almost (remember the indirection through ScaledResource) certainly groups 
of Compute instances; the other two are more open about their elements.  I 
think the thing you are clearly saying that might be new in this 
discussion is the proposal to make one or more of those four kinds of 
groups use Nova's server group feature.  The relevant kind of group would 
then gain all the properties, attributes, privileges, and responsibilities 
of a server group.  At least, that makes sense to me.  The biggest 
questions in my mind are which kinds of groups should get this treatment 
and how to cope with the cases where the element is NOT a Compute 
instance.  A sub-case of that last issue is: the scaled element is a 
composite thing that includes one or more Compute instances.

Regards,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: remove the server groups feature

2014-04-26 Thread Jay Lau
I think server group is an important feature especially when working with
heat auto scaling group, there is already some discussion for this
http://markmail.org/message/jl5wlx3nr3g53ko5

The current server group feature does support adding/deleting a VM instance
to/from a server group, but it seems unable to manage existing VM
instances; this could be enhanced, though.

The server group feature need two steps to create the VM instance:
1) Create a server group with policy
2) Create VMs for the server group

What Jay Pipes proposed is using resource tags directly:
1) Create VMs with a resource tag to specify the policy.

I think that those two directions are very similar, but what Jay Pipes
proposed does not name an explicit resource group; the group is implicitly
specified by the resource tag.

Just some of my understanding. Thanks!



2014-04-27 1:25 GMT+08:00 Vishvananda Ishaya vishvana...@gmail.com:


 On Apr 25, 2014, at 2:25 PM, Chris Behrens cbehr...@codestud.com wrote:

 
  On Apr 25, 2014, at 2:15 PM, Jay Pipes jaypi...@gmail.com wrote:
 
  Hi Stackers,
 
  When recently digging in to the new server group v3 API extension
  introduced in Icehouse, I was struck with a bit of cognitive dissonance
  that I can't seem to shake. While I understand and support the idea
  behind the feature (affinity and anti-affinity scheduling hints), I
  can't help but feel the implementation is half-baked and results in a
  very awkward user experience.
 
  I agree with all you said about this.
 
  Proposal
  
 
  I propose to scrap the server groups API entirely and replace it with a
  simpler way to accomplish the same basic thing.
 
  Create two new options to nova boot:
 
  --near-tag TAG
  and
  --not-near-tag TAG
 
  The first would tell the scheduler to place the new VM near other VMs
  having a particular tag. The latter would tell the scheduler to place
  the new VM *not* near other VMs with a particular tag.
 
  What is a tag? Well, currently, since the Compute API doesn't have a
  concept of a single string tag, the tag could be a key=value pair that
  would be matched against the server extra properties.
 
  You can actually already achieve this behavior… although with a little
 more work. There’s the Affinity filter, which allows you to specify a
 same_host/different_host scheduler hint where you explicitly specify the
 instance uuids you want…  (the extra work is having to know the instance
 uuids).

 It was my understanding from previous discussions that having the concept
 of a group was necessary for future scheduling decisions, especially
 involving live migration. The uuids you need to be far from at launch time
 won’t necessarily be the ones you need to be far from when a migration is
 performed. Server groups handle this case, although Jay’s proposal of
 near/far-from tags would also solve this as long as the near-to/far-from
 data was saved in the instance record. My only concern here is removing an
 API we just added, so a smoother transition would be preferable.

 Vish

 
  But yeah, I think this makes more sense to me.
 
  - Chris
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: remove the server groups feature

2014-04-26 Thread Mike Spreitzer
Jay Lau jay.lau@gmail.com wrote on 04/27/2014 12:31:01 AM:

 I think server group is an important feature especially when working
 with heat auto scaling group, there is already some discussion for this 
 http://markmail.org/message/jl5wlx3nr3g53ko5

 The current server group feature does support add/delete a VM 
 instance to/from the server group but seems not able to manage 
 existing VM instances, but this can be enhanced.

 The server group feature need two steps to create the VM instance:
 1) Create a server group with policy
 2) Create VMs for the server group

 What Jay Pipes proposed is using resource tags directly:
 1) Create VMs with a resource tag to specify the policy.

No, the tag contributes to the grouping; the policy is identified by the 
choice of command-line switch (--near-tag vs. --not-near-tag).

 I think that those two directions are very similar, but what Jay 
 Pipes proposed does not specify the resource group and seems the 
 resource group was implicitly specified in resource tag. 

In short, the proposal from Jay Pipes *does* have groups but their 
membership is declared in a different way than in the current server 
groups feature.  Jay's proposal is not really about *removing* server 
groups but rather it is a proposal to change their API.

Regards,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev