Re: [openstack-dev] [tc] Who is allowed to vote for TC candidates

2015-05-05 Thread Allamaraju, Subbu
Thierry - Most operators are busy fighting operational battles, scaling out, etc. 
It is often an all-hands-on-deck job. I don't think we should measure only by 
contributors getting work done. The work is often silent, and it lags behind 
the dev cycle.

Subbu

> On May 4, 2015, at 9:25 AM, Thierry Carrez  wrote:
> 
> But in the end, it all boils down to contributors that get the
> work done and therefore make it go in one direction or another.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hierarchical administrative boundary [keystone]

2014-05-13 Thread Allamaraju, Subbu
Hi Arvind,

This seems to cover one of the use cases listed at 
https://wiki.openstack.org/wiki/Blueprint-VPC. Other things to isolate between VPCs 
include shared resources like networks, images, roles, and other configuration. 

Subbu

On May 8, 2014, at 7:55 PM, Tiwari, Arvind  wrote:

> Hi All,
>  
> Below is my proposal to address the VPC use case using a hierarchical 
> administrative boundary. This topic is scheduled in the Hierarchical 
> Multitenancy session of the Atlanta design summit.
>  
> https://wiki.openstack.org/wiki/Hierarchical_administrative_boundary
>  
> Please take a look.
>  
> Thanks,
> Arvind
>  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] useful deployment related talks

2014-05-12 Thread Allamaraju, Subbu
We should perhaps clarify that these etherpads are about deployment/operations 
of OpenStack discussed under 
https://wiki.openstack.org/wiki/Summit/Juno/Etherpads#Ops, and not Fuel.

Subbu

On May 12, 2014, at 2:33 PM, Vladimir Kuklin  wrote:

> Guys
> 
> It would be awesome if we shared links to useful talks/meetups related to 
> Fuel and deployment/operations of OpenStack. 
> 
> These are for setting optimal config options and upgrades:
> 
> https://etherpad.openstack.org/p/juno-summit-ops-upgradesdeployment
> https://etherpad.openstack.org/p/juno-summit-ops-reasonabledefaults
> 
> -- 
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 45bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com
> www.mirantis.ru
> vkuk...@mirantis.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [db][all] (Proposal) Restorable & Delayed deletion of OS Resources

2014-03-16 Thread Allamaraju, Subbu
Hi Boris,

I just read the other thread. As Jay asked in [1], have you considered 
precautions in the UI instead? That should take care of mistakes with manual 
deletes.

Thx
Subbu

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-March/029784.html

On Mar 16, 2014, at 10:37 PM, Boris Pavlovic  wrote:

> Subbu, 
> 
> No, it's not too late. It's just a proposal for Juno. 
> 
> First of all, you should keep in mind that in your case it's probably 
> automated. In the case of web hosting it's done by end users (so it's not 
> automated). 
> 
> If you spend some time and read the discussion about removing "soft deletion" [1] 
> you'll see that restoring "deleted" stuff is a popular thing. So the goal of 
> this proposal is to define a standard approach for restoring that won't use "soft 
> deletion".  
> 
> 
> [1] http://osdir.com/ml/openstack-dev/2014-03/msg00947.html
> 
> Best regards,
> Boris Pavlovic
> 
> 
> On Mon, Mar 17, 2014 at 9:23 AM, Allamaraju, Subbu  wrote:
> Hope this is not too late to ask this question, but isn't all this extra code 
> just to guard against fat-finger mistakes?
> 
> IME, most provisioning in the cloud happens via automated tools, and it seems 
> counter-productive to design a feature for manual operations.
> 
> Thx,
> Subbu
> 
> On Mar 13, 2014, at 12:42 PM, Boris Pavlovic  wrote:
> 
> > Hi stackers,
> >
> > As a result of discussion:
> > [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion 
> > (step by step)
> > http://osdir.com/ml/openstack-dev/2014-03/msg00947.html
> >
> > I understood that there should be another proposal about how we should 
> > implement Restorable & Delayed Deletion of OpenStack Resources in a common way, 
> > without these hacks with soft deletion in the DB.  It is actually very 
> > simple; take a look at this document:
> >
> > https://docs.google.com/document/d/1WGrIgMtWJqPDyT6PkPeZhNpej2Q9Mwimula8S8lYGV4/edit?usp=sharing
> >
> >
> > Best regards,
> > Boris Pavlovic


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
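
To make the trade-off in the thread above concrete, here is a minimal, hypothetical 
sketch of the "restorable & delayed deletion" idea (a deleted resource is parked in 
a trash area and only purged after a delay), as opposed to a soft-delete column in 
every table. All names are illustrative; the actual design is in the Google document 
linked above.

    import datetime

    PURGE_DELAY = datetime.timedelta(hours=24)

    class TrashBin:
        """Parks deleted resources so they can be restored until the delay expires."""

        def __init__(self):
            self._items = {}  # resource_id -> (resource, purge_at)

        def schedule_delete(self, resource_id, resource):
            # The resource disappears from normal listings immediately,
            # but no row is "soft deleted" in place.
            purge_at = datetime.datetime.utcnow() + PURGE_DELAY
            self._items[resource_id] = (resource, purge_at)

        def restore(self, resource_id):
            # Undo is possible until purge_expired() runs past the deadline.
            resource, _purge_at = self._items.pop(resource_id)
            return resource

        def purge_expired(self):
            # Run periodically; the real, irreversible delete happens here.
            now = datetime.datetime.utcnow()
            for rid, (_res, purge_at) in list(self._items.items()):
                if purge_at <= now:
                    del self._items[rid]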


Re: [openstack-dev] [db][all] (Proposal) Restorable & Delayed deletion of OS Resources

2014-03-16 Thread Allamaraju, Subbu
Hope this is not too late to ask this question, but isn't all this extra code just 
to guard against fat-finger mistakes?

IME, most provisioning in the cloud happens via automated tools, and it seems 
counter-productive to design a feature for manual operations.

Thx,
Subbu

On Mar 13, 2014, at 12:42 PM, Boris Pavlovic  wrote:

> Hi stackers, 
> 
> As a result of discussion:
> [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion 
> (step by step) 
> http://osdir.com/ml/openstack-dev/2014-03/msg00947.html
> 
> I understood that there should be another proposal about how we should 
> implement Restorable & Delayed Deletion of OpenStack Resources in a common way, 
> without these hacks with soft deletion in the DB.  It is actually very simple; 
> take a look at this document: 
> 
> https://docs.google.com/document/d/1WGrIgMtWJqPDyT6PkPeZhNpej2Q9Mwimula8S8lYGV4/edit?usp=sharing
> 
> 
> Best regards,
> Boris Pavlovic 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] role of Domain in VPC definition

2014-02-16 Thread Allamaraju, Subbu
Harshad,

This is great. At least there is consensus on what it is and what it is not. I 
would leave it to others to discuss the merits of an AWS-compatible VPC API for 
Icehouse.

Perhaps this is a good topic to discuss at the Juno design summit.

Subbu

On Feb 16, 2014, at 10:15 AM, Harshad Nakil  wrote:

> As said, I am not disagreeing with you or Ravi or JC. I also agree that the
> OpenStack VPC implementation will benefit from these proposals.
> What I am saying is that it is not required for AWS VPC API compatibility at
> this point, which is what our blueprint is all about. We are not
> defining THE "VPC".


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] role of Domain in VPC definition

2014-02-16 Thread Allamaraju, Subbu
Harshad,

But the key question that Ravi brought up remains. A project is a very 
small administrative container for managing policies and resources for VPCs. We've 
been experimenting with VPCs on OpenStack (with some mods) at work for nearly a 
year, and came across cases where hundreds/thousands of apps in an equal number of 
projects needed to share resources and policies, and the project-to-VPC mapping 
did not cut it. 

I was wondering if there was prior discussion around mapping the AWS VPC 
model to OpenStack concepts like projects and domains. Thanks for any pointers.

Subbu

On Feb 16, 2014, at 8:01 AM, Harshad Nakil  wrote:

> Yes, [1] can be done without [2] and [3]. 
> As you are well aware, [2] is now merged with the group policy discussions. 
> IMHO an all-or-nothing approach will not get us anywhere. 
> By the time we line up all our ducks in a row, new features/ideas/blueprints 
> will keep emerging.  
> 
> Regards
> -Harshad
> 
> 
> On Feb 16, 2014, at 2:30 AM, Salvatore Orlando  wrote:
> 
>> It seems this work item is made of several blueprints, some of which are not 
>> yet approved. This is true at least for the Neutron blueprint regarding 
>> policy extensions.
>> 
>> Since I first looked at this spec I've been wondering why nova has been 
>> selected as an endpoint for network operations rather than Neutron, but this is 
>> probably a design/implementation detail, whereas JC here is looking at the 
>> general approach.
>> 
>> Nevertheless, my only point here is that it seems that features like this 
>> need an "all-or-none" approval.
>> For instance, could the VPC feature be considered functional if blueprint 
>> [1] is implemented, but not [2] and [3]?
>> 
>> Salvatore
>> 
>> [1] https://blueprints.launchpad.net/nova/+spec/aws-vpc-support
>> [2] 
>> https://blueprints.launchpad.net/neutron/+spec/policy-extensions-for-neutron
>> [3] https://blueprints.launchpad.net/keystone/+spec/hierarchical-multitenancy
>> 
>> 
>> On 11 February 2014 21:45, Martin, JC  wrote:
>> Ravi,
>> 
>> It seems that the following Blueprint
>> https://wiki.openstack.org/wiki/Blueprint-aws-vpc-support
>> 
>> has been approved.
>> 
>> However, I cannot find a discussion with regard to the merit of using 
>> project vs. domain, or other mechanism for the implementation.
>> 
>> I have an issue with this approach as it prevents tenants within the same 
>> domain who share the same VPC from having projects.
>> 
>> As an example, if you are a large organization on AWS, it is likely that you 
>> have a large VPC that will be shared by multiple projects. With this 
>> proposal, we lose that capability, unless I missed something.
>> 
>> JC
>> 
>> On Dec 19, 2013, at 6:10 PM, Ravi Chunduru  wrote:
>> 
>> > Hi,
>> >   We had some internal discussions on the role of Domains and VPCs. I would 
>> > like to expand on that and understand the community's thinking on Keystone 
>> > domains and VPCs.
>> >
>> > Is VPC equivalent to Keystone Domain?
>> >
>> > If so, as a public cloud provider - I create a Keystone domain and give it 
>> > to an organization which wants a virtual private cloud.
>> >
>> > Now the question is, if that organization wants to have department-wise 
>> > allocation of resources, it is becoming difficult to visualize that with 
>> > existing v3 keystone constructs.
>> >
>> > Currently, it looks like each department of an organization cannot have 
>> > its own resource management within the organization's VPC (LDAP-based 
>> > user management, network management, or dedicating computes, etc.). For us, 
>> > an OpenStack project does not match the requirements of a department of an 
>> > organization.
>> >
>> > I hope you guessed what we wanted - a Domain must have VPCs, and a VPC must 
>> > have projects.
>> >
>> > I would like to know how the community sees the VPC model in OpenStack.
>> >
>> > Thanks,
>> > -Ravi.
>> >
>> >


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VPC Proposal

2014-02-16 Thread Allamaraju, Subbu
Harshad,

Thanks for clarifying.

> We started looking at this as some of our customers/partners were interested in 
> getting AWS API compatibility. We have had this blueprint and code review pending 
> for a long time now. We will know based on this thread whether the community is 
> interested. But I assumed that the community was interested, as the blueprint was 
> approved and the code review has had no -1(s) for a long time now.

Makes sense. I would leave it to others on this list to chime in if there is 
sufficient interest or not.

> To clarify, an incremental path from an AWS compatible API to an 
> OpenStack model is not clear.
>  
> In my mind an AWS compatible API does not need a new OpenStack model. As more 
> discussion happens on JC's proposal and the implementation becomes clear, we will 
> know how incremental the path is. But at a high level there are two major 
> differences:
> 1. A new first-class object will be introduced, which affects all components.
> 2. More than one project can be supported within a VPC.
> But it does not change the AWS API(s). So even in JC's model, if you want the AWS API 
> then we will have to keep the VPC-to-project mapping 1:1, since the API will not 
> take both a VPC ID and a project ID.
> 
> More users who want to migrate from AWS, or IaaS providers who want to compete 
> with AWS, should be interested in this compatibility.

IMHO that's a tough sell. Though an AWS compatible API does not need an 
OpenStack abstraction, we would end up with two independent ways of doing 
similar things. That would be OpenStack repeating itself! 

Subbu
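
To make the 1:1 mapping point above concrete: the EC2-style VPC calls identify 
resources only by VPC or subnet ID, and the account is implied by the credentials, 
so there is nowhere in the request to carry a separate OpenStack project ID. A rough 
sketch using boto's VPC bindings (CIDR values illustrative):

    from boto.vpc import VPCConnection

    # Credentials (environment or boto config) imply the account/tenant;
    # no project identifier appears anywhere in the calls themselves.
    conn = VPCConnection()

    vpc = conn.create_vpc('10.0.0.0/16')                # returns a VPC with an id
    subnet = conn.create_subnet(vpc.id, '10.0.0.0/24')  # scoped only by vpc.id

    # Mapping this onto OpenStack therefore forces either a 1:1 VPC-to-project
    # mapping, or a new first-class VPC object that owns several projects.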



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VPC Proposal

2014-02-15 Thread Allamaraju, Subbu
Harshad,

Curious to know: is there broad interest in an AWS compatible API in the 
community? To clarify, an incremental path from an AWS compatible API to 
an OpenStack model is not clear.

Subbu

On Feb 15, 2014, at 10:04 PM, Harshad Nakil  wrote:

> 
> I agree with the problem as defined by you, and it will require more fundamental 
> changes.
> Meanwhile, many users will benefit from AWS VPC API compatibility.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VPC Proposal

2014-02-15 Thread Allamaraju, Subbu
True. The domain hierarchy isn't useful for capturing resource sharing across a 
VPC. For instance, if a VPC admin would like to scope certain networks or 
images to the projects managed within a VPC, there isn't an abstraction for that today.

Subbu
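
A purely hypothetical sketch of the missing abstraction (nothing like this exists 
in Keystone or Neutron as of this thread): a VPC that owns a set of member projects 
and scopes shared resources to exactly those projects, a granularity that sits 
between "private to one project" and "public".

    from dataclasses import dataclass, field

    @dataclass
    class Vpc:
        name: str
        projects: set = field(default_factory=set)          # member project IDs
        shared_networks: set = field(default_factory=set)   # networks scoped to the VPC
        shared_images: set = field(default_factory=set)     # images scoped to the VPC

        def network_visible_to(self, project_id, network_id):
            # Visible to every member project, invisible outside the VPC.
            return project_id in self.projects and network_id in self.shared_networks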

On Feb 14, 2014, at 11:42 AM, Martin, JC  wrote:

> Arvind,
> 
> Thanks for point me to the blueprint. I'll add it to the related blueprints.
> 
> I think this could be part of the solution, but in addition to defining 
> administrative boundaries, we need to change the way object sharing works. 
> Today, there are only two levels: project-private or public. You can share 
> objects between projects, but there is no single model across OpenStack to 
> define resource scope; each component has a slightly different model. The VPC 
> implementation will also have to address that.
> 
> JC
> 
> On Feb 14, 2014, at 11:26 AM, "Tiwari, Arvind"  wrote:
> 
>> Hi JC,
>> 
>> I have proposed a BP to address VPC using a domain hierarchy and hierarchical 
>> administrative boundaries.
>> 
>> https://blueprints.launchpad.net/keystone/+spec/hierarchical-administrative-boundary
>> 
>> 
>> Thanks,
>> Arvind
>> -Original Message-
>> From: Martin, JC [mailto:jch.mar...@gmail.com] 
>> Sent: Friday, February 14, 2014 12:09 PM
>> To: OpenStack Development Mailing List
>> Subject: [openstack-dev] VPC Proposal
>> 
>> 
>> There is a Blueprint targeted for Icehouse-3 that is aiming to implement the 
>> AWS VPC API. I don't think that this blueprint provides the necessary 
>> constructs to really implement a VPC, and it does not take into account 
>> domains or the proposed multi-tenant hierarchy. In addition, I could not find a 
>> discussion about this topic leading to the approval.
>> 
>> For this reason, I wrote an 'umbrella' blueprint to hopefully start the 
>> discussion on how to really implement VPC, and eventually split it into 
>> multiple real blueprints for each area.
>> 
>> Please, provide feedback on the following document, and on the best way to 
>> move this forward.
>> 
>> https://wiki.openstack.org/wiki/Blueprint-VPC
>> 
>> Thanks,
>> 
>> JC.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Access-key like authentication with password-rotation

2014-01-19 Thread Allamaraju, Subbu
At work we're currently looking at related use cases, and access keys are 
useful even without Keystone actually managing passwords. The only issue with 
https://blueprints.launchpad.net/keystone/+spec/access-key-authentication is 
that it requires client-side code changes, which is a non-starter in many 
cases. HP Cloud has a similar API (http://docs.hpcloud.com/api/identity) in 
their public cloud - but it too requires client code changes.

Cheers
Subbu
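
For context, here is the shape of a Keystone v3 password authentication request 
today next to a hypothetical access-key request of the kind the blueprint implies. 
The password payload is the documented v3 API; the "access_key" method is an 
assumption about the blueprint's direction, not an existing Keystone API. Clients 
that only know how to build the first payload are exactly the client-side change 
problem mentioned above.

    # What a client sends today to POST /v3/auth/tokens (documented Keystone v3 API).
    password_auth = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": "svc-deployer",
                        "domain": {"id": "default"},
                        "password": "s3cret",  # the long-lived secret sitting in config files
                    }
                }
            }
        }
    }

    # Hypothetical access-key variant (NOT an existing Keystone method; just an
    # illustration). The client still has to learn to build this payload, which
    # is the client-side change that is hard to roll out broadly.
    access_key_auth = {
        "auth": {
            "identity": {
                "methods": ["access_key"],
                "access_key": {"id": "AK12345EXAMPLE", "secret": "rotatable-secret"},
            }
        }
    }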

On Jan 16, 2014, at 2:48 AM, Tristan Cacqueray  
wrote:

> Hi,
> 
> I'd like to check in on this authentication mechanism.
> Keystone should have some kind of apiKey in order to prevent developers
> from storing their credentials (username/password) in clear-text
> configuration files.
> 
> There are two blueprints that can tackle this feature, yet they
> are both in need of approval:
> 
> https://blueprints.launchpad.net/keystone/+spec/access-key-authentication
> https://blueprints.launchpad.net/keystone/+spec/password-rotation
> 
> 
> I believe the access-key-authentication has been superseded by the
> password-rotation. Meaning:
> * The user creates a secondary password.
> * He can use this new password to authenticate API requests
>  with the credential_id + password.
> * He won't be able to log in to Horizon as it will try to authenticate
>  with the user_id + password (Keystone will match those against the
>  "default_credential_id").
> * API requests like password changes should be denied if the user didn't
>  use his "default_credential_id".
> 
> Did I get this right ?
> 
> 
> Best regards,
> Tristan
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] First steps towards amqp 1.0

2014-01-19 Thread Allamaraju, Subbu
Hope this thread isn't dead.

Mike - thanks for highlighting some really key issues at scale.

On a related note, can someone from the Ceilometer team comment on the 
store-and-forward requirement? Currently, scaling RabbitMQ is non-trivial. Though 
cells help make the problem smaller, as Paul Mathews points out in the video below, 
cells don't make the problems go away. Looking at the experience in the 
community, Qpid isn't an option either.

Cheers,
Subbu


On Dec 9, 2013, at 4:36 PM, Mike Wilson  wrote:

> This is the first time I've heard of the dispatch router, I'm really excited 
> now that I've looked at it a bit. Thx Gordon and Russell for bringing this 
> up. I'm very familiar with the scaling issues associated with any kind of 
> brokered messaging solution. We grew an OpenStack installation to about 7,000 
> nodes and started having significant scaling issues with the qpid broker. 
> We've talked about our problems at a couple summits in a fair amount of 
> detail[1][2]. I won't bother repeating the information in this thread.
> 
> I really like the idea of separating the logic of routing away from the 
> message emitter. Russell mentioned the 0mq matchmaker; we essentially ditched 
> the qpid broker for direct communication via 0mq and its matchmaker. It 
> still has a lot of problems which dispatch seems to address. For example, in 
> Ceilometer we have store-and-forward behavior as a requirement. This kind of 
> communication requires a broker but 0mq doesn't really officially support 
> one, which means we would probably end up with some broker as part of 
> OpenStack. Matchmaker is also a fairly basic implementation of what is 
> essentially a directory. For any sort of serious production use case you end 
> up sprinkling JSON files all over the place or maintaining a Redis backend. I 
> feel like the matchmaker needs a bunch more work to make modifying the 
> directory simpler for operations. I would rather put that work into a 
> separate project like dispatch than have to maintain essentially a one-off in 
> OpenStack's codebase.
> 
> I wonder how this fits into messaging from a driver perspective in OpenStack, 
> or even how this fits into oslo.messaging? Right now we have topics for 
> binaries (compute, network, consoleauth, etc.), hostname.service_topic for 
> nodes, fanout queue per node (not sure if kombu also has this) and different 
> exchanges per project. If we can abstract the routing from the emission of 
> the message all we really care about is emitter, endpoint, messaging pattern 
> (fanout, store and forward, etc.). Also not sure if there's a dispatch 
> analogue in the rabbit world; if not, we need to have some mapping of concepts 
> etc. between impls.
> 
> So many questions, but in general I'm really excited about this and eager to 
> contribute. For sure I will start playing with this in Bluehost's 
> environments that haven't been completely 0mqized. I also have some lingering 
> concerns about qpid in general. Beyond scaling issues I've run into some 
> other terrible bugs that motivated our move away from it. Again, these are 
> mentioned in our presentations at summits and I'd be happy to talk more about 
> them in a separate discussion. I've also been able to talk to some other 
> qpid+openstack users who have seen the same bugs. Another large installation 
> that comes to mind is Qihoo 360 in China. They run a few thousand nodes with 
> qpid for messaging and are familiar with the snags we run into.
> 
> Gordon,
> 
> I would really appreciate it if you could watch those two talks and comment. The 
> bugs are probably separate from the dispatch router discussion, but it does 
> dampen my enthusiasm a bit not knowing how to fix issues beyond scale :-(. 
> 
> -Mike Wilson
> 
> [1] 
> http://www.openstack.org/summit/portland-2013/session-videos/presentation/using-openstack-in-a-traditional-hosting-environment
> [2] 
> http://www.openstack.org/summit/openstack-summit-hong-kong-2013/session-videos/presentation/going-brokerless-the-transition-from-qpid-to-0mq
> 
> 
> 
> 
> On Mon, Dec 9, 2013 at 4:29 PM, Mark McLoughlin  wrote:
> On Mon, 2013-12-09 at 16:05 +0100, Flavio Percoco wrote:
> > Greetings,
> >
> > As $subject mentions, I'd like to start discussing the support for
> > AMQP 1.0[0] in oslo.messaging. We already have rabbit and qpid drivers
> > for earlier (and different!) versions of AMQP, the proposal would be
> > to add an additional driver for a _protocol_ not a particular broker.
> > (Both RabbitMQ and Qpid support AMQP 1.0 now).
> >
> > By targeting a clear mapping on to a protocol, rather than a specific
> > implementation, we would simplify the task in the future for anyone
> > wishing to move to any other system that spoke AMQP 1.0. That would no
> > longer require a new driver, merely different configuration and
> > deployment. That would then allow openstack to more easily take
> > advantage of any emerging innovations in this space.
> 
> Sounds sane to me.
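

To ground the addressing schemes Mike describes (per-binary topics, host-directed 
topics, fanout), here is a rough sketch using oslo.messaging's RPC API (shown with 
the modern oslo_messaging import name; topics and method names are illustrative). 
Whatever sits behind the transport URL, whether a broker, 0mq plus a matchmaker 
directory, or a dispatch router, is exactly the abstraction boundary under discussion.

    from oslo_config import cfg
    import oslo_messaging

    # The transport URL decides the backend: rabbit://..., qpid://..., zmq://...
    transport = oslo_messaging.get_transport(cfg.CONF)

    base = oslo_messaging.Target(topic='compute')
    client = oslo_messaging.RPCClient(transport, base)
    ctxt = {}  # request context

    # Topic call: any worker listening on the 'compute' topic may answer.
    client.call(ctxt, 'get_capabilities')

    # Host-directed call: the "hostname.service_topic" case.
    client.prepare(server='node-17').call(ctxt, 'reserve_resources', request_id='r-42')

    # Fanout cast: every worker on the topic receives it, nobody replies.
    client.prepare(fanout=True).cast(ctxt, 'refresh_cache')

    # With the zmq driver, the "directory" Mike mentions is the matchmaker,
    # e.g. a ring file mapping topics to hosts: {"compute": ["node-1", "node-2"]}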