Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-09-06 Thread Doug Hellmann
Excerpts from joehuang's message of 2016-09-06 02:12:45 +:
> 
> > A full rewrite of the library that doesn't take into consideration the 
> > existing
> > deployed technologies is not going to be of any help, IMHO. The reason being
> > that upgradability would be broken and that's a no-go. I believe Clint was
> > trying to make the same point when he brought up the choice of backends.
> 
> +1.
> 
> That's why I proposed to provide a plugin mechanism in the Nova/Cinder API
> layer: this abstraction layer can hide the differences between messaging
> libraries, and ensure the

oslo.messaging is the abstraction layer you're looking for. Put the new
features there.

Doug

> existing implementation keeps working, with a safe fallback during upgrade
> if necessary. This makes the upgrade easier to manage and provides a cleaner,
> more stable interface for step-by-step improvement.
> 
> Best Regards
> Chaoyi Huang (joehuang)
> 
> 
> From: Flavio Percoco [fla...@redhat.com]
> Sent: 05 September 2016 20:52
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all][massively 
> distributed][architecture]Coordination between actions/WGs
> 
> On 05/09/16 18:55 +0700, Ian Wells wrote:
> >On 5 September 2016 at 17:08, Flavio Percoco <fla...@redhat.com> wrote:
> >
> >> We should probably start by asking ourselves who's really being bitten by
> >> the
> >> messaging bus right now? Large (and please, let's not bikeshed on what a
> >> Large
> >> Cloud is) Clouds? Small Clouds? New Clouds? Everyone?
> >> Then we can start asking ourselves things like: Would a change of the
> >> API/underlying technology help them? Why? How? What technology exactly and
> >> why?
> >> What technology would make their lives simpler and why?
> >>
> >
> >Well, as far as RabbitMQ goes, then I would certainly say in deployment
> >it's not a pleasant thing to work with.  Even if you consider it good
> >enough day to day (which is debatable) then consider its upgradeability -
> >it's impractical to keep it running as you upgrade it, is my
> >understanding.  It would also seem to be a big factor in our scale
> >limitations - I wonder if we could do without such complexities as cells if
> >we had something a bit more performant (with perhaps a more lax operating
> >model).
> >
> >But this is not about blaming Rabbit for all our problems.  The original
> >statement was that RPC is a bad pattern to use in occasionally unreliable
> >distributed systems, and Rabbit in no way forces us to use RPC patterns.
> >That we don't see the RPC pattern's problems so clearly is because a fault
> >happening at just the right time in a call sequence to show up the problem
> >rarely happens, and testing such a fault using injection is not practical -
> >but it does happen in reality and things do go weird when it happens.
> >
> >The proposal was to create a better interface in oslo for a comms model
> >(that we could implement - and regardless of how we chose to implement it -
> >and that would encourage people to code for the corner cases) and then
> >encourage people to move across.
> >
> >I'm not saying this research/work is not useful/important (in fact, I've
> >> been
> >> advocating for it for almost 2 years now) but I do want us to be more
> >> careful
> >> and certainly I don't think this change should be anything but transparent
> >> for
> >> every deployment out there.
> >>
> >
> >That is a perfectly reasonable thing to ask.  I presume by transparent you
> >mean that the standard upgrade approaches will work.
> >
> >To answer this topic more directly. As much as being opinionated would help
> >> drive focus and provide a better result here, I believe we are not
> >> there yet
> >> and I also believe a backend agnostic API would be more beneficial to begin
> >> with. We're not going to move 98% of the OpenStack deployments out there
> >> off of
> >> rabbitmq just like that.
> >>
> >
> >Again, this originally wasn't about Rabbit, or having a choice of
> >backends.  One backend would do if that backend were perfect for the job.
> >There are other reasons for doing this that would hopefully make OpenStack
> >more robust.
> 
> I did not mean to say Rabbit is to blame, if anything I meant to say that 
> things
> have gotten better from the Rabbit side. My point is that OPs/Deployments must
> be taken into consideration on this refactor.
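Doug's point above is that the abstraction layer already exists: services code against oslo.messaging's Target/RPCClient/RPC server API, and the driver behind it is a deployment detail. A minimal sketch of that round trip, based on the Newton-era API (the topic, endpoint and method names are illustrative only):

# Sketch only: callers program against Target/RPCClient, never against a
# specific broker; the driver is chosen by the deployment's configuration.
import oslo_messaging as messaging
from oslo_config import cfg

class DemoEndpoint(object):
    def ping(self, ctxt, payload):
        # runs inside the server-side service process
        return 'pong: %s' % payload

transport = messaging.get_transport(cfg.CONF)   # rabbit, zmq, amqp1, ...
target = messaging.Target(topic='demo-topic', server='node-1')

server = messaging.get_rpc_server(transport, target, [DemoEndpoint()],
                                  executor='blocking')
# server.start() would be called by the service; a client elsewhere does:
client = messaging.RPCClient(transport, messaging.Target(topic='demo-topic'))
# client.call({}, 'ping', payload='hello')   # blocking RPC, backend-agnostic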

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-09-05 Thread joehuang

> A full rewrite of the library that doesn't take into consideration the 
> existing
> deployed technologies is not going to be of any help, IMHO. The reason being
> that upgradability would be broken and that's a no-go. I believe Clint was
> trying to make the same point when he brought up the choice of backends.

+1.

That's why I proposed to provide a plugin mechanism in the Nova/Cinder API
layer: this abstraction layer can hide the differences between messaging
libraries, and ensure the existing implementation keeps working, with a safe
fallback during upgrade if necessary. This makes the upgrade easier to manage
and provides a cleaner, more stable interface for step-by-step improvement.

Best Regards
Chaoyi Huang (joehuang)
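As a purely hypothetical illustration of the API-layer plugin point described above (these class and method names do not exist in Nova or Cinder; they only sketch the shape of the idea):

# Hypothetical sketch: an API-layer dispatch plugin that hides which messaging
# mechanism carries a validated request to the backend, so a deployment can
# fall back to the existing oslo.messaging path during an upgrade if the new
# path misbehaves.
class ComputeDispatchPlugin(object):
    def cast_create_server(self, context, request):
        raise NotImplementedError()

class OsloMessagingDispatch(ComputeDispatchPlugin):
    """Today's behaviour: hand the request to an oslo.messaging RPCClient."""
    def __init__(self, rpc_client):
        self._client = rpc_client

    def cast_create_server(self, context, request):
        self._client.cast(context, 'create_server', request=request)

class NewBusDispatch(ComputeDispatchPlugin):
    """Placeholder for whatever improved transport comes next."""
    def cast_create_server(self, context, request):
        raise NotImplementedError('future transport goes here')

# The API service would load exactly one implementation from configuration,
# which is what makes a step-by-step migration (and rollback) manageable.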


From: Flavio Percoco [fla...@redhat.com]
Sent: 05 September 2016 20:52
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][massively 
distributed][architecture]Coordination between actions/WGs

On 05/09/16 18:55 +0700, Ian Wells wrote:
>On 5 September 2016 at 17:08, Flavio Percoco <fla...@redhat.com> wrote:
>
>> We should probably start by asking ourselves who's really being bitten by
>> the
>> messaging bus right now? Large (and please, let's not bikeshed on what a
>> Large
>> Cloud is) Clouds? Small Clouds? New Clouds? Everyone?
>> Then we can start asking ourselves things like: Would a change of the
>> API/underlying technology help them? Why? How? What technology exactly and
>> why?
>> What technology would make their lives simpler and why?
>>
>
>Well, as far as RabbitMQ goes, then I would certainly say in deployment
>it's not a pleasant thing to work with.  Even if you consider it good
>enough day to day (which is debatable) then consider its upgradeability -
>it's impractical to keep it running as you upgrade it, is my
>understanding.  It would also seem to be a big factor in our scale
>limitations - I wonder if we could do without such complexities as cells if
>we had something a bit more performant (with perhaps a more lax operating
>model).
>
>But this is not about blaming Rabbit for all our problems.  The original
>statement was that RPC is a bad pattern to use in occasionally unreliable
>distributed systems, and Rabbit in no way forces us to use RPC patterns.
>That we don't see the RPC pattern's problems so clearly is because a fault
>happening at just the right time in a call sequence to show up the problem
>rarely happens, and testing such a fault using injection is not practical -
>but it does happen in reality and things do go weird when it happens.
>
>The proposal was to create a better interface in oslo for a comms model
>(that we could implement - and regardless of how we chose to implement it -
>and that would encourage people to code for the corner cases) and then
>encourage people to move across.
>
>I'm not saying this research/work is not useful/important (in fact, I've
>> been
>> advocating for it for almost 2 years now) but I do want us to be more
>> careful
>> and certainly I don't think this change should be anything but transparent
>> for
>> every deployment out there.
>>
>
>That is a perfectly reasonable thing to ask.  I presume by transparent you
>mean that the standard upgrade approaches will work.
>
>To answer this topic more directly. As much as being opinionated would help
>> drive focus and provide a better result here, I believe we are not
>> there yet
>> and I also believe a backend agnostic API would be more beneficial to begin
>> with. We're not going to move 98% of the OpenStack deployments out there
>> off of
>> rabbitmq just like that.
>>
>
>Again, this originally wasn't about Rabbit, or having a choice of
>backends.  One backend would do if that backend were perfect for the job.
>There are other reasons for doing this that would hopefully make OpenStack
>more robust.

I did not mean to say Rabbit is to blame, if anything I meant to say that things
have gotten better from the Rabbit side. My point is that OPs/Deployments must
be taken into consideration on this refactor.

A full rewrite of the library that doesn't take into consideration the existing
deployed technologies is not going to be of any help, IMHO. The reason being
that upgradability would be broken and that's a no-go. I believe Clint was
trying to make the same point when he brought up the choice of backends.

As I mentioned in my previous email, I'm all for having a better messaging API
that is backend agnostic, even if we end up using a single backend in the Z^2
release.

Hope it's clearer now,
Flavio

--
@flaper87
Flavio Percoco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-09-05 Thread Clint Byrum
Excerpts from Flavio Percoco's message of 2016-09-05 12:08:04 +0200:
> On 02/09/16 10:56 -0700, Clint Byrum wrote:
> >Excerpts from Ken Giusti's message of 2016-09-02 11:05:51 -0400:
> >> On Thu, Sep 1, 2016 at 4:53 PM, Ian Wells  wrote:
> >> > On 1 September 2016 at 06:52, Ken Giusti  wrote:
> >> >>
> >> >> On Wed, Aug 31, 2016 at 3:30 PM, Ian Wells  
> >> >> wrote:
> >> >
> >> >> > I have opinions about other patterns we could use, but I don't want to
> >> >> > push
> >> >>
> >> >> > my solutions here, I want to see if this is really as much of a 
> >> >> > problem
> >> >> > as
> >> >> > it looks and if people concur with my summary above.  However, the 
> >> >> > right
> >> >> > approach is most definitely to create a new and more fitting set of 
> >> >> > oslo
> >> >> > interfaces for communication patterns, and then to encourage people to
> >> >> > move
> >> >> > to the new ones from the old.  (Whether RabbitMQ is involved is 
> >> >> > neither
> >> >> > here
> >> >> > nor there, as this is really a question of Oslo APIs, not their
> >> >> > implementation.)
> >> >> >
> >> >>
> >> Hmm... maybe.   Message bus technology is varied, and so is its
> >> >> behavior.  There are brokerless, point-to-point backends supported by
> >> >> oslo.messaging [1],[2] which will exhibit different
> >> >> capabilities/behaviors from the traditional broker-based
> >> >> store-and-forward backend (e.g. message acking end-to-end vs to the
> >> >> intermediary).
> >> >
> >> >
> >> > The important thing is that you shouldn't have to look behind the 
> >> > curtain.
> >> > We can offer APIs that are driven by the implementation (designed for 
> >> > test,
> >> > and trivial to implement correctly given handy open source projects we 
> >> > know
> >> > and trust) and the choice of design will therefore be dependent on the
> >> > backend mechanisms we consider for use to implement the API.  APIs are
> >> > always a point of negotiation between what the caller needs and what can 
> >> > be
> >> > implemented in a practical amount of time.  But *I do not care* whether
> >> > you're using rabbits or carrier pigeons just so long as what you have
> >> > documented that the API promises me is actually true.  I *do not expect* 
> >> > to
> >> > have to read RabbitMQ or AMQP documentation to work out what behaviour I
> >> > should expect for my messaging.  And its behaviour should be consistent 
> >> > if I
> >> > have a choice of messaging backends.
> >> >
> >>
> >> And I agree totally - this is the way it _should_ be.  And to get
> >> there we do have to address the ambiguities in the existing API, as
> >> well as extend it so applications can explicitly state their service
> >> needs.
> >>
> >> My point is that the API also has to be _backend_ agnostic.  That
> >> really hasn't been the priority it should be IMHO.  The current API as
> >> it stands leaks too much of the backend behavior past the API.
> >>
> >> For example here's where we are with the current API: a majority of
> >> deployments are broker based - applications using oslo.messaging  have
> >> come to rely _indirectly_ on the behavioral side effects of using a
> >> broker backend.  In fact RabbitMQ's operational characteristics have
> >> become the de-facto "correct" behavior.  Any other backend that
> >> doesn't exhibit exactly the same behavior as RabbitMQ is considered
> >> buggy.   Consider qpidd for example - simple differences in default
> >> queue lifecycle and default flow control settings resulted in
> >> messaging behavior different from RabbitMQ.  These were largely
> >> considered bugs in qpidd.  I think this played a large part in the
> >> lack of adoption of qpidd.
> >>
> >> And qpidd is the same type of messaging backend as rabbitmq - a
> >> broker.  Imagine what deployers are going to hit when they attempt to
> >> use a completely different technology - non-brokered backends like
> >> Zeromq or message routing.
> >>
> >> Improving the API as you describe will go a long way to solving this
> >> situation.  And believe me I agree 100% that this API work needs to be
> >> done.
> >>
> >> But the API redesign should be done in a backend-agnostic manner.  We
> >> (the oslo.messaging devs) have to ensure that _required_ API features
> >> cannot be tied to any one backend implementation.  For example things
> >> like notification pools are trivial to support for broker backends,
> >> but hard/impossible for point to point distributed technologies.  It
> >> must be clear to the application devs that using those optional
> >> features that cannot be effectively implemented for a given backend
> >> basically forces the deployer's hand.
> >>
> >> My point is yes we need to improve that API but it should be done in a
> >> backend agnostic way. There are currently features/behaviors that
> >> essentially require a broker back end.  We should avoid making such
> >> features mandatory elements of the API and ensure that the API users
> >> are well aware of the consequences for deployers when using such features.

Re: [openstack-dev] [all][massively distributed][architecture] Coordination between actions/WGs

2016-09-05 Thread Arkady_Kanevsky
Please drive new multi-project requirements through use cases of the Product WG.
Thanks,
Arkady

-Original Message-
From: joehuang [mailto:joehu...@huawei.com]
Sent: Tuesday, August 23, 2016 9:01 PM
To: OpenStack Development Mailing List (not for usage questions) ; 
openstack-operators
Cc: discovery-...@inria.fr
Subject: Re: [openstack-dev] [all][massively distributed][architecture] 
Coordination between actions/WGs

Hello, Adrien,

How about a different focus for each working group? For example, the "massively 
distributed" working group can focus on identifying the use cases, challenges, and 
issues in current OpenStack for supporting such fog/edge computing scenarios, 
including the use cases/scenarios from ETSI mobile edge computing 
(http://www.etsi.org/technologies-clusters/technologies/mobile-edge-computing, 
https://portal.etsi.org/portals/0/tbpages/mec/docs/mobile-edge_computing_-_introductory_technical_white_paper_v1%2018-09-14.pdf).
For the "architecture" working group, how about focusing on discussing technology 
solutions/proposals to address these issues/challenges?

We have discussed/exchanged ideas a lot before/in/after the Austin summit. As 
Tricircle has worked in the multisite area for several cycles, a lot of use 
cases/challenges/issues have also been identified; the Tricircle proposal 
could be one basis for discussion in the "architecture" working group, and other 
proposals are also welcome.

Best Regards
Chaoyi Huang (joehuang)


From: lebre.adr...@free.fr [lebre.adr...@free.fr]
Sent: 23 August 2016 18:17
To: OpenStack Development Mailing List; openstack-operators
Cc: discovery-...@inria.fr
Subject: [openstack-dev] [all][massively distributed][architecture] 
Coordination between actions/WGs

Hi Folks,

During the last summit, we suggested to create a new working group that deals 
with the massively distributed use case:
How can OpenStack be "slightly" revised to operate Fog/Edge Computing 
infrastructures, i.e. infrastructures composed of several sites.
The first meeting we held in Austin showed us that additional material was 
needed to better understand the scope as well as the actions we can perform 
in this working group.

After exchanging with different persons and institutions, we have identified 
several actions that we would like to achieve and that make the creation of 
such a working group relevant from our point of view.

Among the list of possible actions, we would like to identify major scalability 
issues and clarify intra-site vs inter-site exchanges between the different 
services of OpenStack in a multi-site context (i.e. with the vanilla OpenStack 
code).
Such information will enable us to better understand how and where each service 
should be deployed and whether it should be revised.

We have started an action with the Performance WG with the ultimate goal of 
analysing how OpenStack behaves performance-wise, as well as the 
interactions between the various services, in such a context.

Meanwhile, during this summer we saw Clint's proposal for the 
Architecture WG.

Although we are very excited about this WG (we are convinced it will be 
valuable for the whole community), we are wondering whether the actions we 
envision in the Massively distributed WG might overlap with the ones 
(scalability, multi-site operations ...) that could be performed in the 
Architecture WG.

The goal of this email is to:

(i) understand whether the fog/edge computing use case is in the scope of the 
Architecture WG.

(ii) if not, whether it makes sense to create a working group that focuses on 
scalability and multi-site challenges (folks from Orange Labs and British 
Telecom, for instance, already told us that they are interested in such a 
use-case).

(iii) what is the best way to coordinate our efforts with the actions performed 
in other WGs such as the Performance and Architecture ones (e.g., actions 
performed/decisions taken in the Architecture WG can have impacts on the 
massively distributed WG and thus drive the way we should perform actions to 
progress to the Fog/Edge Computing target)


Based on the feedback, we will create dedicated wiki pages for the 
massively distributed WG.
Remarks/comments welcome.

Ad_rien_
Further information regarding the Fog/Edge Computing use-case we target is 
available at http://beyondtheclouds.github.io

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-09-05 Thread Chris Dent

On Mon, 5 Sep 2016, Flavio Percoco wrote:


A full rewrite of the library that doesn't take into consideration
the existing deployed technologies is not going to be of any help,
IMHO. The reason being that upgradability would be broken and that's a
no-go. I believe Clint was trying to make the same point when he
brought up the choice of backends.


As I understood some of the proposals the idea was to transcend
backwards compatibility limitations by having two solutions
available concurrently. Services could migrate to the new way as
they are able.

This sort of solution is sometimes required when upgradability is
making it impossible to fix a big problem. We should probably
consider it more often.

--
Chris Dent   ┬─┬ノ( º _ ºノ)   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-09-05 Thread Flavio Percoco

On 05/09/16 18:55 +0700, Ian Wells wrote:

On 5 September 2016 at 17:08, Flavio Percoco  wrote:


We should probably start by asking ourselves who's really being bitten by
the
messaging bus right now? Large (and please, let's not bikeshed on what a
Large
Cloud is) Clouds? Small Clouds? New Clouds? Everyone?
Then we can start asking ourselves things like: Would a change of the
API/underlying technology help them? Why? How? What technology exactly and
why?
What technology would make their lives simpler and why?



Well, as far as RabbitMQ goes, then I would certainly say in deployment
it's not a pleasant thing to work with.  Even if you consider it good
enough day to day (which is debatable) then consider its upgradeability -
it's impractical to keep it running as you upgrade it, is my
understanding.  It would also seem to be a big factor in our scale
limitations - I wonder if we could do without such complexities as cells if
we had something a bit more performant (with perhaps a more lax operating
model).

But this is not about blaming Rabbit for all our problems.  The original
statement was that RPC is a bad pattern to use in occasionally unreliable
distributed systems, and Rabbit in no way forces us to use RPC patterns.
That we don't see the RPC pattern's problems so clearly is because a fault
happening at just the right time in a call sequence to show up the problem
rarely happens, and testing such a fault using injection is not practical -
but it does happen in reality and things do go weird when it happens.

The proposal was to create a better interface in oslo for a comms model
(that we could implement - and regardless of how we chose to implement it -
and that would encourage people to code for the corner cases) and then
encourage people to move across.

I'm not saying this research/work is not useful/important (in fact, I've

been
advocating for it for almost 2 years now) but I do want us to be more
careful
and certainly I don't think this change should be anything but transparent
for
every deployment out there.



That is a perfectly reasonable thing to ask.  I presume by transparent you
mean that the standard upgrade approaches will work.

To answer this topic more directly. As much as being opinionated would help

drive focus and provide a better result here, I believe we are not
there yet
and I also believe a backend agnostic API would be more beneficial to begin
with. We're not going to move 98% of the OpenStack deployments out there
off of
rabbitmq just like that.



Again, this originally wasn't about Rabbit, or having a choice of
backends.  One backend would do if that backend were perfect for the job.
There are other reasons for doing this that would hopefully make OpenStack
more robust.


I did not mean to say Rabbit is to blame, if anything I meant to say that things
have gotten better from the Rabbit side. My point is that OPs/Deployments must
be taken into consideration on this refactor.

A full rewrite of the library that doesn't take into consideration the existing
deployed technologies is not going to be of any help, IMHO. The reason being
that upgradability would be broken and that's a no-go. I believe Clint was
trying to make the same point when he brought up the choice of backends.

As I mentioned in my previous email, I'm all for having a better messaging API
that is backend agnostic, even if we end up using a single backend in the Z^2
release.

Hope it's clearer now,
Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-09-05 Thread Ian Wells
On 5 September 2016 at 17:08, Flavio Percoco  wrote:

> We should probably start by asking ourselves who's really being bitten by
> the
> messaging bus right now? Large (and please, let's not bikeshed on what a
> Large
> Cloud is) Clouds? Small Clouds? New Clouds? Everyone?
> Then we can start asking ourselves things like: Would a change of the
> API/underlying technology help them? Why? How? What technology exactly and
> why?
> What technology would make their lives simpler and why?
>

Well, as far as RabbitMQ goes, then I would certainly say in deployment
it's not a pleasant thing to work with.  Even if you consider it good
enough day to day (which is debatable) then consider its upgradeability -
it's impractical to keep it running as you upgrade it, is my
understanding.  It would also seem to be a big factor in our scale
limitations - I wonder if we could do without such complexities as cells if
we had something a bit more performant (with perhaps a more lax operating
model).

But this is not about blaming Rabbit for all our problems.  The original
statement was that RPC is a bad pattern to use in occasionally unreliable
distributed systems, and Rabbit in no way forces us to use RPC patterns.
That we don't see the RPC pattern's problems so clearly is because a fault
happening at just the right time in a call sequence to show up the problem
rarely happens, and testing such a fault using injection is not practical -
but it does happen in reality and things do go weird when it happens.

The proposal was to create a better interface in oslo for a comms model
(that we could implement - and regardless of how we chose to implement it -
and that would encourage people to code for the corner cases) and then
encourage people to move across.

I'm not saying this research/work is not useful/important (in fact, I've
> been
> advocating for it for almost 2 years now) but I do want us to be more
> careful
> and certainly I don't think this change should be anything but transparent
> for
> every deployment out there.
>

That is a perfectly reasonable thing to ask.  I presume by transparent you
mean that the standard upgrade approaches will work.

To answer this topic more directly. As much as being opinionated would help
> drive focus and provide a better result here, I believe we are not
> there yet
> and I also believe a backend agnostic API would be more beneficial to begin
> with. We're not going to move 98% of the OpenStack deployments out there
> off of
> rabbitmq just like that.
>

Again, this originally wasn't about Rabbit, or having a choice of
backends.  One backend would do if that backend were perfect for the job.
There are other reasons for doing this that would hopefully make OpenStack
more robust.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-09-05 Thread Flavio Percoco

On 02/09/16 10:56 -0700, Clint Byrum wrote:

Excerpts from Ken Giusti's message of 2016-09-02 11:05:51 -0400:

On Thu, Sep 1, 2016 at 4:53 PM, Ian Wells  wrote:
> On 1 September 2016 at 06:52, Ken Giusti  wrote:
>>
>> On Wed, Aug 31, 2016 at 3:30 PM, Ian Wells  wrote:
>
>> > I have opinions about other patterns we could use, but I don't want to
>> > push
>>
>> > my solutions here, I want to see if this is really as much of a problem
>> > as
>> > it looks and if people concur with my summary above.  However, the right
>> > approach is most definitely to create a new and more fitting set of oslo
>> > interfaces for communication patterns, and then to encourage people to
>> > move
>> > to the new ones from the old.  (Whether RabbitMQ is involved is neither
>> > here
>> > nor there, as this is really a question of Oslo APIs, not their
>> > implementation.)
>> >
>>
>> Hmm... maybe.   Message bus technology is varied, and so is its
>> behavior.  There are brokerless, point-to-point backends supported by
>> oslo.messaging [1],[2] which will exhibit different
>> capabilities/behaviors from the traditional broker-based
>> store-and-forward backend (e.g. message acking end-to-end vs to the
>> intermediary).
>
>
> The important thing is that you shouldn't have to look behind the curtain.
> We can offer APIs that are driven by the implementation (designed for test,
> and trivial to implement correctly given handy open source projects we know
> and trust) and the choice of design will therefore be dependent on the
> backend mechanisms we consider for use to implement the API.  APIs are
> always a point of negotiation between what the caller needs and what can be
> implemented in a practical amount of time.  But *I do not care* whether
> you're using rabbits or carrier pigeons just so long as what you have
> documented that the API promises me is actually true.  I *do not expect* to
> have to read RabbitMQ or AMQP documentation to work out what behaviour I
> should expect for my messaging.  And its behaviour should be consistent if I
> have a choice of messaging backends.
>

And I agree totally - this is the way it _should_ be.  And to get
there we do have to address the ambiguities in the existing API, as
well as extend it so applications can explicitly state their service
needs.

My point is that the API also has to be _backend_ agnostic.  That
really hasn't been the priority it should be IMHO.  The current API as
it stands leaks too much of the backend behavior past the API.

For example here's where we are with the current API: a majority of
deployments are broker based - applications using oslo.messaging  have
come to rely _indirectly_ on the behavioral side effects of using a
broker backend.  In fact RabbitMQ's operational characteristics have
become the de-facto "correct" behavior.  Any other backend that
doesn't exhibit exactly the same behavior as RabbitMQ is considered
buggy.   Consider qpidd for example - simple differences in default
queue lifecycle and default flow control settings resulted in
messaging behavior different from RabbitMQ.  These were largely
considered bugs in qpidd.  I think this played a large part in the
lack of adoption of qpidd.

And qpidd is the same type of messaging backend as rabbitmq - a
broker.  Imagine what deployers are going to hit when they attempt to
use a completely different technology - non-brokered backends like
Zeromq or message routing.

Improving the API as you describe will go a long way to solving this
situation.  And believe me I agree 100% that this API work needs to be
done.

But the API redesign should be done in a backend-agnostic manner.  We
(the oslo.messaging devs) have to ensure that _required_ API features
cannot be tied to any one backend implementation.  For example things
like notification pools are trivial to support for broker backends,
but hard/impossible for point to point distributed technologies.  It
must be clear to the application devs that using those optional
features that cannot be effectively implemented for a given backend
basically forces the deployer's hand.

My point is yes we need to improve that API but it should be done in a
backend agnostic way. There are currently features/behaviors that
essentially require a broker back end.  We should avoid making such
features mandatory elements of the API and ensure that the API users
are well aware of the consequences for deployers when using such
features.



All of what you say is true.

However, I want us to also consider the cost of being so modular at the
RPC level.

Yes it's nice that we have RabbitMQ and ZeroMQ as options, but do we
actually need these options? Could we just migrate to ZeroMQ, or HTTP/2,
gRPC, thrift, etc.? Then we could tell deployers "good news, you don't
need that component anymore, we factored it out" rather than "hey look
here, more deployment choices, good luck!"


Based on the last OPs 

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-09-02 Thread Clint Byrum
Excerpts from Ken Giusti's message of 2016-09-02 11:05:51 -0400:
> On Thu, Sep 1, 2016 at 4:53 PM, Ian Wells  wrote:
> > On 1 September 2016 at 06:52, Ken Giusti  wrote:
> >>
> >> On Wed, Aug 31, 2016 at 3:30 PM, Ian Wells  wrote:
> >
> >> > I have opinions about other patterns we could use, but I don't want to
> >> > push
> >>
> >> > my solutions here, I want to see if this is really as much of a problem
> >> > as
> >> > it looks and if people concur with my summary above.  However, the right
> >> > approach is most definitely to create a new and more fitting set of oslo
> >> > interfaces for communication patterns, and then to encourage people to
> >> > move
> >> > to the new ones from the old.  (Whether RabbitMQ is involved is neither
> >> > here
> >> > nor there, as this is really a question of Oslo APIs, not their
> >> > implementation.)
> >> >
> >>
> >> Hmm... maybe.   Message bus technology is varied, and so is its
> >> behavior.  There are brokerless, point-to-point backends supported by
> >> oslo.messaging [1],[2] which will exhibit different
> >> capabilities/behaviors from the traditional broker-based
> >> store-and-forward backend (e.g. message acking end-to-end vs to the
> >> intermediary).
> >
> >
> > The important thing is that you shouldn't have to look behind the curtain.
> > We can offer APIs that are driven by the implementation (designed for test,
> > and trivial to implement correctly given handy open source projects we know
> > and trust) and the choice of design will therefore be dependent on the
> > backend mechanisms we consider for use to implement the API.  APIs are
> > always a point of negotiation between what the caller needs and what can be
> > implemented in a practical amount of time.  But *I do not care* whether
> > you're using rabbits or carrier pigeons just so long as what you have
> > documented that the API promises me is actually true.  I *do not expect* to
> > have to read RabbitMQ or AMQP documentation to work out what behaviour I
> > should expect for my messaging.  And its behaviour should be consistent if I
> > have a choice of messaging backends.
> >
> 
> And I agree totally - this is the way it _should_ be.  And to get
> there we do have to address the ambiguities in the existing API, as
> well as extend it so applications can explicitly state their service
> needs.
> 
> My point is that the API also has to be _backend_ agnostic.  That
> really hasn't been the priority it should be IMHO.  The current API as
> it stands leaks too much of the backend behavior past the API.
> 
> For example here's where we are with the current API: a majority of
> deployments are broker based - applications using oslo.messaging  have
> come to rely _indirectly_ on the behavioral side effects of using a
> broker backend.  In fact RabbitMQ's operational characteristics have
> become the de-facto "correct" behavior.  Any other backend that
> doesn't exhibit exactly the same behavior as RabbitMQ is considered
> buggy.   Consider qpidd for example - simple differences in default
> queue lifecycle and default flow control settings resulted in
> messaging behavior different from RabbitMQ.  These were largely
> considered bugs in qpidd.  I think this played a large part in the
> lack of adoption of qpidd.
> 
> And qpidd is the same type of messaging backend as rabbitmq - a
> broker.  Imagine what deployers are going to hit when they attempt to
> use a completely different technology - non-brokered backends like
> Zeromq or message routing.
> 
> Improving the API as you describe will go a long way to solving this
> situation.  And believe me I agree 100% that this API work needs to be
> done.
> 
> But the API redesign should be done in a backend-agnostic manner.  We
> (the oslo.messaging devs) have to ensure that _required_ API features
> cannot be tied to any one backend implementation.  For example things
> like notification pools are trivial to support for broker backends,
> but hard/impossible for point to point distributed technologies.  It
> must be clear to the application devs that using those optional
> features that cannot be effectively implemented for a given backend
> basically forces the deployer's hand.
> 
> My point is yes we need to improve that API but it should be done in a
> backend agnostic way. There are currently features/behaviors that
> essentially require a broker back end.  We should avoid making such
> features mandatory elements of the API and ensure that the API users
> are well aware of the consequences for deployers when using such
> features.
> 

All of what you say is true.

However, I want us to also consider the cost of being so modular at the
RPC level.

Yes it's nice that we have RabbitMQ and ZeroMQ as options, but do we
actually need these options? Could we just migrate to ZeroMQ, or HTTP/2,
gRPC, thrift, etc.? Then we could tell deployers "good news, you don't
need that component anymore, we factored it out" rather than "hey look
here, more deployment choices, good luck!"
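For reference, the per-deployment choice Clint is weighing is currently expressed through oslo.messaging's transport URL in each service's configuration; roughly as below (hosts, credentials and the zmq scheme shown here are placeholders):

# Sketch: the backend is selected by configuration, not by application code.
# e.g. in nova.conf / cinder.conf:
#
#   [DEFAULT]
#   transport_url = rabbit://openstack:secret@msg-host:5672/
#   # or, with the ZeroMQ driver installed:
#   # transport_url = zmq://msg-host:9501/
#
# which each service resolves at startup via oslo.messaging:
import oslo_messaging as messaging
from oslo_config import cfg

transport = messaging.get_transport(cfg.CONF)   # driver picked from the URL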

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-09-02 Thread Ken Giusti
On Thu, Sep 1, 2016 at 4:53 PM, Ian Wells  wrote:
> On 1 September 2016 at 06:52, Ken Giusti  wrote:
>>
>> On Wed, Aug 31, 2016 at 3:30 PM, Ian Wells  wrote:
>
>> > I have opinions about other patterns we could use, but I don't want to
>> > push
>>
>> > my solutions here, I want to see if this is really as much of a problem
>> > as
>> > it looks and if people concur with my summary above.  However, the right
>> > approach is most definitely to create a new and more fitting set of oslo
>> > interfaces for communication patterns, and then to encourage people to
>> > move
>> > to the new ones from the old.  (Whether RabbitMQ is involved is neither
>> > here
>> > nor there, as this is really a question of Oslo APIs, not their
>> > implementation.)
>> >
>>
>> Hmm... maybe.   Message bus technology is varied, and so is its
>> behavior.  There are brokerless, point-to-point backends supported by
>> oslo.messaging [1],[2] which will exhibit different
>> capabilities/behaviors from the traditional broker-based
>> store-and-forward backend (e.g. message acking end-to-end vs to the
>> intermediary).
>
>
> The important thing is that you shouldn't have to look behind the curtain.
> We can offer APIs that are driven by the implementation (designed for test,
> and trivial to implement correctly given handy open source projects we know
> and trust) and the choice of design will therefore be dependent on the
> backend mechanisms we consider for use to implement the API.  APIs are
> always a point of negotiation between what the caller needs and what can be
> implemented in a practical amount of time.  But *I do not care* whether
> you're using rabbits or carrier pigeons just so long as what you have
> documented that the API promises me is actually true.  I *do not expect* to
> have to read RabbitMQ or AMQP documentation to work out what behaviour I
> should expect for my messaging.  And its behaviour should be consistent if I
> have a choice of messaging backends.
>

And I agree totally - this is the way it _should_ be.  And to get
there we do have to address the ambiguities in the existing API, as
well as extend it so applications can explicitly state their service
needs.

My point is that the API also has to be _backend_ agnostic.  That
really hasn't been the priority it should be IMHO.  The current API as
it stands leaks too much of the backend behavior past the API.

For example here's where we are with the current API: a majority of
deployments are broker based - applications using oslo.messaging  have
come to rely _indirectly_ on the behavioral side effects of using a
broker backend.  In fact RabbitMQ's operational characteristics have
become the de-facto "correct" behavior.  Any other backend that
doesn't exhibit exactly the same behavior as RabbitMQ is considered
buggy.   Consider qpidd for example - simple differences in default
queue lifecycle and default flow control settings resulted in
messaging behavior different from RabbitMQ.  These were largely
considered bugs in qpidd.  I think this played a large part in the
lack of adoption of qpidd.

And qpidd is the same type of messaging backend as rabbitmq - a
broker.  Imagine what deployers are going to hit when they attempt to
use a completely different technology - non-brokered backends like
Zeromq or message routing.

Improving the API as you describe will go a long way to solving this
situation.  And believe me I agree 100% that this API work needs to be
done.

But the API redesign should be done in a backend-agnostic manner.  We
(the oslo.messaging devs) have to ensure that _required_ API features
cannot be tied to any one backend implementation.  For example things
like notification pools are trivial to support for broker backends,
but hard/impossible for point to point distributed technologies.  It
must be clear to the application devs that using those optional
features that cannot be effectively implemented for a given backend
basically forces the deployer's hand.

My point is yes we need to improve that API but it should be done in a
backend agnostic way. There are currently features/behaviors that
essentially require a broker back end.  We should avoid making such
features mandatory elements of the API and ensure that the API users
are well aware of the consequences for deployers when using such
features.
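For readers less familiar with the feature Ken mentions: notification listener pools are exposed roughly as below (sketch based on the Newton-era API; topic and pool names are made up). Listeners sharing a pool split the stream between them, while each distinct pool receives its own copy, which is cheap to provide with a broker's shared queues but has no natural equivalent on a brokerless, point-to-point backend.

# Sketch of oslo.messaging notification pools.
import oslo_messaging as messaging
from oslo_config import cfg

class Endpoint(object):
    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        print(event_type, payload)

transport = messaging.get_notification_transport(cfg.CONF)
targets = [messaging.Target(topic='notifications')]

monitor_a = messaging.get_notification_listener(
    transport, targets, [Endpoint()], pool='monitor')
monitor_b = messaging.get_notification_listener(
    transport, targets, [Endpoint()], pool='monitor')  # shares load with monitor_a
auditor = messaging.get_notification_listener(
    transport, targets, [Endpoint()], pool='audit')    # receives every message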


>> All the more reason to have explicit delivery guarantees and well
>> understood failure scenarios defined by the API.
>
> And on this point we totally agree.
>
> I think the point of an API is to subdivide who carries which
> responsibilities - the caller for handling exceptional cases and the
> implementer for having predictable behaviour.  Documentation is the means of
> agreement.
>
> Sorry to state basic good practice - I'm sure we do all accept that this is
> good behaviour - but with a component that's this central to what we do and
> so frequently used by so many people I think it's worth reiterating.

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-09-01 Thread joehuang
Hi, Adrien,

+1, 

and the comments in 
https://etherpad.openstack.org/p/massively-distributed_WG_description have been 
updated.

- Security management over the WAN: how to manage inter-site communications 
and edge clouds securely.

- Fault tolerance issues: each edge cloud should be able to run independently; a 
crash or isolation of one (or several) sites should not impact other DCs. 

- Maintainability: each edge cloud installation/upgrade/patch should be 
manageable independently; we don't have to upgrade all edge clouds at the same 
time.
ad_rien_: why not? I would rather reformulate as: Appropriate/automatic 
mechanisms should enable the upgrade of the different sites in a consistent 
way (considering that upgrading the complete infrastructure can take a 
significant amount of time while facing crash and disconnection issues). 

- Service operability: resources like VMs, containers, volumes, etc. in each 
edge cloud can still be manipulated locally even if the link to other clouds is 
temporarily broken.
ad_rien: could you please clarify/reword the above sentence? It is not clear to 
me whether there is (or is not) a difference from the maintainability aspect 
described above. 
joehuang: updated as above

- Easy integration: need to support easy integration of multiple vendors across 
hundreds or thousands of edge clouds.
ad_rien: same here, could you please clarify what you mean by multiple 
vendors? Do you mean being able to ''merge/federate'' DCs from Huawei, Orange and 
Rackspace, for example? This looks like a peering-agreement challenge; I'm 
not sure whether it is a technical challenge. 
joehuang: in the telecom industry, multi-vendor interoperability is a basic 
requirement, even for edge clouds. So the interface between edge clouds should 
be interoperable, and comparatively stable/easy to certify and integrate. A 
binary RPC that varies a lot from version to version is not good for multi-vendor 
certification and integration; that's why standards are required in the telecom 
industry.

- Consistency: eventually consistent information (stable status) should be 
achieved for the distributed system.
ad_rien: let's reword as: Consistency: the states of the system should be 
globally consistent. This means that if one project/vm/... is created on one 
site, the states of the other sites should be consistent to avoid, for instance, 
double assignment of IDs/IPs/...

Best Regards
Chaoyi Huang(joehuang)


From: lebre.adr...@free.fr [lebre.adr...@free.fr]
Sent: 01 September 2016 20:47
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][massively 
distributed][architecture]Coordination between actions/WGs

May I suggest opening one (or more) thread(s) with the correct subject(s)?

There are at least three points discussed here:

- one related to the proposal of the Massively distributed group
- one related to the Architecture WG with the communication issue between 
services (RPC+Rabbit MQ, REST API..)
- one that mainly focuses on TriCircle.

While all of them are interesting, it is a bit tedious to follow them in one 
large thread.

Regarding the Massively distributed WG (which was the initial topic ;)), I 
replied to some comments that have been done in the pad and I added a new 
action to discuss the single vs multi-endpoint questions.

Finally, regarding the comparison between proposals (the link that has been added 
at the end), I think it is a good idea, but it should be done after (or at 
least while) analyzing the current OpenStack ecosystem. As has been 
written in some comments of the TriCircle Big Tent application, it is 
important to first identify the pros/cons of the federation proposal before we go 
ahead holus-bolus.

My two cents
Ad_rien_

- Mail original -
> De: "joehuang" <joehu...@huawei.com>
> À: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org>
> Envoyé: Jeudi 1 Septembre 2016 11:18:17
> Objet: Re: [openstack-dev] [all][massively 
> distributed][architecture]Coordination between actions/WGs
>
> > What is the REST API for tricircle?
> > When looking at the github I see:
> > ''Documentation: TBD''
> > Getting a feel for its REST API would really be helpful in
> > determine how
> > much of a proxy/request router it is vs being an actual API. I
> > don't
> > really want/like a proxy/request router (if that wasn't obvious,
> > ha).
>
> For Nova API-GW/Cinder API-GW, Nova API/Cinder API will be accepted
> and forwarded.
> For Neutron, with the Tricircle Neutron plugin, it's the Neutron API;
> just like any other Neutron plugin, it doesn't change the Neutron API.
>
> Tricircle reuses tempest test cases to ensure the accepted API is kept
> consistent
> with Nova/Cinder/Neutron. So no special documentation for these

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-09-01 Thread Ian Wells
On 1 September 2016 at 06:52, Ken Giusti  wrote:

> On Wed, Aug 31, 2016 at 3:30 PM, Ian Wells  wrote:
>
> > I have opinions about other patterns we could use, but I don't want to
push

> > my solutions here, I want to see if this is really as much of a problem
> as
> > it looks and if people concur with my summary above.  However, the right
> > approach is most definitely to create a new and more fitting set of oslo
> > interfaces for communication patterns, and then to encourage people to
> move
> > to the new ones from the old.  (Whether RabbitMQ is involved is neither
> here
> > nor there, as this is really a question of Oslo APIs, not their
> > implementation.)
> >
>
> Hmm... maybe.   Message bus technology is varied, and so is its
> behavior.  There are brokerless, point-to-point backends supported by
> oslo.messaging [1],[2] which will exhibit different
> capabilities/behaviors from the traditional broker-based
> store-and-forward backend (e.g. message acking end-to-end vs to the
> intermediary).
>

The important thing is that you shouldn't have to look behind the curtain.
We can offer APIs that are driven by the implementation (designed for test,
and trivial to implement correctly given handy open source projects we know
and trust) and the choice of design will therefore be dependent on the
backend mechanisms we consider for use to implement the API.  APIs are
always a point of negotiation between what the caller needs and what can be
implemented in a practical amount of time.  But *I do not care* whether
you're using rabbits or carrier pigeons just so long as what you have
documented that the API promises me is actually true.  I *do not expect* to
have to read RabbitMQ or AMQP documentation to work out what behaviour I
should expect for my messaging.  And its behaviour should be consistent if
I have a choice of messaging backends.

> All the more reason to have explicit delivery guarantees and well
> understood failure scenarios defined by the API.

And on this point we totally agree.

I think the point of an API is to subdivide who carries which
responsibilities - the caller for handling exceptional cases and the
implementer for having predictable behaviour.  Documentation is the means
of agreement.

Sorry to state basic good practice - I'm sure we do all accept that this is
good behaviour - but with a component that's this central to what we do and
so frequently used by so many people I think it's worth reiterating.
-- 
Ian.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-09-01 Thread Alec Hothan (ahothan)

This topic of oslo messaging issues has been going on for a long time and the 
main issue is not the transport itself (each transport has its own limitations) 
but the code using oslo messaging (e.g. pieces of almost every openstack 
service). It is relatively easy to write code using oslo messaging that works 
with devstack or a small-scale deployment; it is much less easy to write such 
code that works under the conditions of operations at scale: frequent lack of 
an appropriate test platform, limitations in existing testing tools and to top 
it all, "fuzzy" oslo messaging API definition makes the handling of abnormal 
conditions and load conditions very unpredictable and inconsistent across 
components.
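A small example of the fuzziness Alec describes, using the current API (sketch; the topic, method and arguments are made up):

# Sketch: when an RPC call times out, the caller cannot tell whether the
# request was never delivered, was delivered and is still being processed,
# or completed and only the reply was lost.  The API offers no way to ask,
# so every service invents its own (rarely tested) recovery behaviour.
import oslo_messaging as messaging
from oslo_config import cfg

transport = messaging.get_transport(cfg.CONF)
client = messaging.RPCClient(transport,
                             messaging.Target(topic='demo-topic'),
                             timeout=30)
try:
    client.call({}, 'resize_instance', instance_id='uuid-1234')
except messaging.MessagingTimeout:
    # Retry?  Roll back?  Mark the resource as ERROR?  Each caller guesses.
    pass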

You can't solve this by just "fixing" the oslo messaging layer or by swapping 
to another transport (you'll just open up another can of worms)

As suggested by Ian below, the only practical way to fix this is to define a 
new set of APIs that is much more strictly defined, have openstack code migrate 
to these new APIs and test adequately.
That is clearly very difficult to do with resources moving away from "stable 
and mature" services and attracted by the latest buzzwords (such as containers).

On the original topic of this thread, having geographical distribution will 
certainly introduce a new set of issues at scale.


  Alec

 






On 9/1/16, 6:52 AM, "Ken Giusti"  wrote:

>On Wed, Aug 31, 2016 at 3:30 PM, Ian Wells  wrote:
>> On 31 August 2016 at 10:12, Clint Byrum  wrote:
>>>
>>> Excerpts from Duncan Thomas's message of 2016-08-31 12:42:23 +0300:
>>> > On 31 August 2016 at 11:57, Bogdan Dobrelya 
>>> > wrote:
>>> >
>>> > > I agree that RPC design pattern, as it is implemented now, is a major
>>> > > blocker for OpenStack in general. It requires a major redesign,
>>> > > including handling of corner cases, on both sides, *especially* RPC
>>> > > call
>>> > > clients. Or may be it just have to be abandoned to be replaced by a
>>> > > more
>>> > > cloud friendly pattern.
>>> >
>>> >
>>> > Is there a writeup anywhere on what these issues are? I've heard this
>>> > sentiment expressed multiple times now, but without a writeup of the
>>> > issues
>>> > and the design goals of the replacement, we're unlikely to make progress
>>> > on
>>> > a replacement - even if somebody takes the heroic approach and writes a
>>> > full replacement themselves, the odds of getting community by-in are
>>> > very
>>> > low.
>>>
>>> Right, this is exactly the sort of thing I'd like to gather a group of
>>> design-minded folks around in an Architecture WG. Oslo is busy with the
>>> implementations we have now, but I'm sure many oslo contributors would
>>> like to come up for air and talk about the design issues, and come up
>>> with a current design, and some revisions to it, or a whole new one,
>>> that can be used to put these summit hallway rumors to rest.
>>
>>
>> I'd say the issue is comparatively easy to describe.  In a call sequence:
>>
>> 1. A sends a message to B
>> 2. B receives the message
>> 3. B acts upon message
>> 4. B responds to message
>> 5. A receives response
>> 6. A acts upon response
>>
>> ... you can have a fault at any point in that message flow (consider crashes
>> or program restarts).  If you ask for something to happen, you wait for a
>> reply, and you don't get one, what does it mean?  The operation may have
>> happened, with or without success, or it may not have gotten to the far end.
>> If you send the message, does that mean you'd like it to cause an action
>> tomorrow?  A year from now?  Or perhaps you'd like it to just not happen?
>> Do you understand what Oslo promises you here, and do you think every person
>> who ever wrote an RPC call in the whole OpenStack solution also understood
>> it?
>>
>
>Precisely - IMHO it's a shortcoming of the current o.m. RPC (and
>Notification) API in that it does not let the API user explicitly set
>the desired delivery guarantee when publishing.  Right now it's
>implied that the delivery guarantee is "At Most Once" but that's
>mostly not precisely defined in any meaningful way.
>
>Any messaging API should be explicit regarding what delivery
>guarantee(s) are possible.  In addition, an API should allow the user
>to designate the importance of a message on a per-send basis:  can
>this message be dropped?  can this message be duplicated?  At what
>point in time does the message become invalid (already offered for RPC
>via timeout, but not Notifications IIRC), etc
>
>And well-understood failure modes... things always fail...
>
>
>> I have opinions about other patterns we could use, but I don't want to push
>> my solutions here, I want to see if this is really as much of a problem as
>> it looks and if people concur with my summary above.  However, the right
>> approach is most definitely to create a new and more fitting set of oslo
>> interfaces for communication 

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-09-01 Thread Ken Giusti
On Wed, Aug 31, 2016 at 3:30 PM, Ian Wells  wrote:
> On 31 August 2016 at 10:12, Clint Byrum  wrote:
>>
>> Excerpts from Duncan Thomas's message of 2016-08-31 12:42:23 +0300:
>> > On 31 August 2016 at 11:57, Bogdan Dobrelya 
>> > wrote:
>> >
>> > > I agree that RPC design pattern, as it is implemented now, is a major
>> > > blocker for OpenStack in general. It requires a major redesign,
>> > > including handling of corner cases, on both sides, *especially* RPC
>> > > call
>> > > clients. Or may be it just have to be abandoned to be replaced by a
>> > > more
>> > > cloud friendly pattern.
>> >
>> >
>> > Is there a writeup anywhere on what these issues are? I've heard this
>> > sentiment expressed multiple times now, but without a writeup of the
>> > issues
>> > and the design goals of the replacement, we're unlikely to make progress
>> > on
>> > a replacement - even if somebody takes the heroic approach and writes a
>> > full replacement themselves, the odds of getting community by-in are
>> > very
>> > low.
>>
>> Right, this is exactly the sort of thing I'd like to gather a group of
>> design-minded folks around in an Architecture WG. Oslo is busy with the
>> implementations we have now, but I'm sure many oslo contributors would
>> like to come up for air and talk about the design issues, and come up
>> with a current design, and some revisions to it, or a whole new one,
>> that can be used to put these summit hallway rumors to rest.
>
>
> I'd say the issue is comparatively easy to describe.  In a call sequence:
>
> 1. A sends a message to B
> 2. B receives messages
> 3. B acts upon message
> 4. B responds to message
> 5. A receives response
> 6. A acts upon response
>
> ... you can have a fault at any point in that message flow (consider crashes
> or program restarts).  If you ask for something to happen, you wait for a
> reply, and you don't get one, what does it mean?  The operation may have
> happened, with or without success, or it may not have gotten to the far end.
> If you send the message, does that mean you'd like it to cause an action
> tomorrow?  A year from now?  Or perhaps you'd like it to just not happen?
> Do you understand what Oslo promises you here, and do you think every person
> who ever wrote an RPC call in the whole OpenStack solution also understood
> it?
>

Precisely - IMHO it's a shortcoming of the current o.m. RPC (and
Notification) API that it does not let the API user explicitly set
the desired delivery guarantee when publishing.  Right now the
implied delivery guarantee is "at most once", but even that is
mostly not defined in any precise, meaningful way.

Any messaging API should be explicit regarding what delivery
guarantee(s) are possible.  In addition, an API should allow the user
to designate the importance of a message on a per-send basis:  Can
this message be dropped?  Can this message be duplicated?  At what
point in time does the message become invalid (already offered for RPC
via the timeout, but not for Notifications IIRC)?  Etc.

And well-understood failure modes... things always fail...
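
To make the per-send idea concrete, here is a hypothetical sketch of what an
explicit, per-send delivery contract could look like. None of these names
(Guarantee, DeliveryOptions, the send() wrapper) exist in oslo.messaging
today; they are invented purely to illustrate the shape of the API being
argued for.

    # Hypothetical sketch only -- none of these names exist in oslo.messaging.
    import enum
    import time


    class Guarantee(enum.Enum):
        AT_MOST_ONCE = "at-most-once"    # may be dropped, never duplicated
        AT_LEAST_ONCE = "at-least-once"  # never dropped, may be duplicated


    class DeliveryOptions(object):
        """Per-send delivery contract stated explicitly by the caller."""

        def __init__(self, guarantee, droppable=False, duplicatable=False, ttl=None):
            self.guarantee = guarantee
            self.droppable = droppable        # may the bus discard this under load?
            self.duplicatable = duplicatable  # may the receiver see it twice?
            # After this point in time the message is no longer meaningful.
            self.expires_at = time.time() + ttl if ttl else None


    def send(client, ctxt, method, opts, **kwargs):
        """Illustrative wrapper: refuse to send a message that has already expired."""
        if opts.expires_at and time.time() > opts.expires_at:
            raise RuntimeError("message expired before it was ever sent")
        # A real implementation would hand `opts` down to the driver so the
        # backend (broker or brokerless) can honour or reject the contract.
        return client.call(ctxt, method, **kwargs)

The interesting part is not the code but the contract: the caller, not the
driver, states whether the message may be dropped, may be duplicated, and
when it stops being meaningful.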


> I have opinions about other patterns we could use, but I don't want to push
> my solutions here, I want to see if this is really as much of a problem as
> it looks and if people concur with my summary above.  However, the right
> approach is most definitely to create a new and more fitting set of oslo
> interfaces for communication patterns, and then to encourage people to move
> to the new ones from the old.  (Whether RabbitMQ is involved is neither here
> nor there, as this is really a question of Oslo APIs, not their
> implementation.)
>

Hmm... maybe.   Message bus technology is varied, and so is its
behavior.  There are brokerless, point-to-point backends supported by
oslo.messaging [1],[2] which will exhibit different
capabilities/behaviors from the traditional broker-based
store-and-forward backend (e.g. message acking end-to-end vs to the
intermediary).

All the more reason to have explicit delivery guarantees and well
understood failure scenarios defined by the API.

[1] http://docs.openstack.org/developer/oslo.messaging/zmq_driver.html
[2] http://docs.openstack.org/developer/oslo.messaging/AMQP1.0.html
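
To illustrate the caller-side ambiguity in the sequence quoted above, here is
a minimal sketch using the oslo.messaging RPC client API as it exists today
(Target, RPCClient, prepare(timeout=...), MessagingTimeout). The topic,
server name, 'resize_volume' method and arguments are made up for
illustration; only the o.m. calls themselves are real.

    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_transport(cfg.CONF)
    target = oslo_messaging.Target(topic='volume', server='cinder-vol-1')
    client = oslo_messaging.RPCClient(transport, target)

    try:
        # Step 1: A sends a message to B and waits for the reply.
        client.prepare(timeout=30).call({}, 'resize_volume',
                                        volume_id='vol-42', new_size=20)
    except oslo_messaging.MessagingTimeout:
        # Steps 2-5 may or may not have happened: the volume may now be
        # resized, untouched, or half-done; the reply may simply have been
        # lost.  The caller cannot tell, and retrying blindly is only safe
        # if the server side made the operation idempotent.
        pass

Nothing in the current API forces the caller to decide what a timeout means,
which is exactly the gap being discussed.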


> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Ken Giusti  (kgiu...@gmail.com)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-09-01 Thread Duncan Thomas
On 31 August 2016 at 22:30, Ian Wells  wrote:

> On 31 August 2016 at 10:12, Clint Byrum  wrote:
>
>> Excerpts from Duncan Thomas's message of 2016-08-31 12:42:23 +0300:
>> > Is there a writeup anywhere on what these issues are? I've heard this
>> > sentiment expressed multiple times now, but without a writeup of the
>> issues
>> > and the design goals of the replacement, we're unlikely to make
>> progress on
>> > a replacement - even if somebody takes the heroic approach and writes a
>> > full replacement themselves, the odds of getting community by-in are
>> very
>> > low.
>>
>> Right, this is exactly the sort of thing I'd like to gather a group of
>> design-minded folks around in an Architecture WG. Oslo is busy with the
>> implementations we have now, but I'm sure many oslo contributors would
>> like to come up for air and talk about the design issues, and come up
>> with a current design, and some revisions to it, or a whole new one,
>> that can be used to put these summit hallway rumors to rest.
>>
>
> I'd say the issue is comparatively easy to describe.  In a call sequence:
>
> 1. A sends a message to B
> 2. B receives messages
> 3. B acts upon message
> 4. B responds to message
> 5. A receives response
> 6. A acts upon response
>
> ... you can have a fault at any point in that message flow (consider
> crashes or program restarts).  If you ask for something to happen, you wait
> for a reply, and you don't get one, what does it mean?  The operation may
> have happened, with or without success, or it may not have gotten to the
> far end.  If you send the message, does that mean you'd like it to cause an
> action tomorrow?  A year from now?  Or perhaps you'd like it to just not
> happen?  Do you understand what Oslo promises you here, and do you think
> every person who ever wrote an RPC call in the whole OpenStack solution
> also understood it?
>
>

Thank you for the explanation. Sometimes it is best to state the
apparently obvious just so that everybody is on the same page.

There are some pieces in cinder that attempt to work around some of these
limitations already, added with the recent H/A cinder-volume work.

-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-09-01 Thread Ken Giusti
On Wed, Aug 31, 2016 at 6:02 PM, Clint Byrum  wrote:
> Excerpts from Ian Wells's message of 2016-08-31 12:30:45 -0700:
>> On 31 August 2016 at 10:12, Clint Byrum  wrote:
>>
>> > Excerpts from Duncan Thomas's message of 2016-08-31 12:42:23 +0300:
>> > > On 31 August 2016 at 11:57, Bogdan Dobrelya 
>> > wrote:
>> > >
>> > > > I agree that RPC design pattern, as it is implemented now, is a major
>> > > > blocker for OpenStack in general. It requires a major redesign,
>> > > > including handling of corner cases, on both sides, *especially* RPC
>> > call
>> > > > clients. Or may be it just have to be abandoned to be replaced by a
>> > more
>> > > > cloud friendly pattern.
>> > >
>> > >
>> > > Is there a writeup anywhere on what these issues are? I've heard this
>> > > sentiment expressed multiple times now, but without a writeup of the
>> > issues
>> > > and the design goals of the replacement, we're unlikely to make progress
>> > on
>> > > a replacement - even if somebody takes the heroic approach and writes a
>> > > full replacement themselves, the odds of getting community by-in are very
>> > > low.
>> >
>> > Right, this is exactly the sort of thing I'd like to gather a group of
>> > design-minded folks around in an Architecture WG. Oslo is busy with the
>> > implementations we have now, but I'm sure many oslo contributors would
>> > like to come up for air and talk about the design issues, and come up
>> > with a current design, and some revisions to it, or a whole new one,
>> > that can be used to put these summit hallway rumors to rest.
>> >
>>
>> I'd say the issue is comparatively easy to describe.  In a call sequence:
>>
>> 1. A sends a message to B
>> 2. B receives messages
>> 3. B acts upon message
>> 4. B responds to message
>> 5. A receives response
>> 6. A acts upon response
>>
>> ... you can have a fault at any point in that message flow (consider
>> crashes or program restarts).  If you ask for something to happen, you wait
>> for a reply, and you don't get one, what does it mean?  The operation may
>> have happened, with or without success, or it may not have gotten to the
>> far end.  If you send the message, does that mean you'd like it to cause an
>> action tomorrow?  A year from now?  Or perhaps you'd like it to just not
>> happen?  Do you understand what Oslo promises you here, and do you think
>> every person who ever wrote an RPC call in the whole OpenStack solution
>> also understood it?
>>
>> I have opinions about other patterns we could use, but I don't want to push
>> my solutions here, I want to see if this is really as much of a problem as
>> it looks and if people concur with my summary above.  However, the right
>> approach is most definitely to create a new and more fitting set of oslo
>> interfaces for communication patterns, and then to encourage people to move
>> to the new ones from the old.  (Whether RabbitMQ is involved is neither
>> here nor there, as this is really a question of Oslo APIs, not their
>> implementation.)
>
> I think it's about time we get some Architecture WG meetings started,
> and put "Document RPC design" on the agenda.
>

+1 I'm certainly interested in helping out here.



> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Ken Giusti  (kgiu...@gmail.com)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-09-01 Thread Pablo Chacin
+1000 for this proposal.

On Thu, Sep 1, 2016 at 2:47 PM, <lebre.adr...@free.fr> wrote:

> May I suggest to open one (or more) thread(s) with the correct subject(s)?
>
> There are at least three points discussed here:
>
> - one related to the proposal of the Massively distributed group
> - one related to the Architecture WG with the communication issue between
> services (RPC+Rabbit MQ, REST API..)
> - one that mainly focuses on TriCircle.
>
> While all of them are interesting it is a bit tedious to follow them in
> one large thread.
>
> Regarding the Massively distributed WG (which was the initial topic ;)), I
> replied to some comments that have been done in the pad and I added a new
> action to discuss the single vs multi-endpoint questions.
>
> Finally regarding the comparison between proposal (the link that has been
> added at the end), I think it is a good idea but that should be done after
> (or at least meanwhile) analyzing the current OpenStack ecosystem. As it
> has been written in some comments of the TriCircle Big Tent application, it
> is  important to first identify pro/cons of the federation proposal before
> we go ahead holus-bolus.
>
> My two cents
> Ad_rien_
>
> - Mail original -
> > De: "joehuang" <joehu...@huawei.com>
> > À: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> > Envoyé: Jeudi 1 Septembre 2016 11:18:17
> > Objet: Re: [openstack-dev] [all][massively 
> > distributed][architecture]Coordination
> between actions/WGs
> >
> > > What is the REST API for tricircle?
> > > When looking at the github I see:
> > > ''Documentation: TBD''
> > > Getting a feel for its REST API would really be helpful in
> > > determine how
> > > much of a proxy/request router it is vs being an actual API. I
> > > don't
> > > really want/like a proxy/request router (if that wasn't obvious,
> > > ha).
> >
> > For Nova API-GW/Cinder API-GW, Nova API/Cinder API will be accepted
> > and forwarded.
> > For Neutron with Tricircle Neutron plugin in Tricircle, it's Neutron
> > API,
> > just like any other Neutron plugin, doesn't change Neutron API.
> >
> > Tricircle reuse tempest test cases to ensure the API accepted kept
> > consistent
> > with Nova/Cinder/Neutron. So no special documentation for these
> > APIs(if we
> > provide, documentation inconsistency will be introduced)
> >
> > Except that, Tricircle Admin API provides its own API to manage
> > bottom
> > OpenStack instance, the documentation is in review:
> > https://review.openstack.org/#/c/356291/
> >
> > > Looking at say:
> > > https://github.com/openstack/tricircle/blob/master/
> tricircle/nova_apigw/controllers/server.py
> > > That doesn't inspire me so much, since that appears to be more of a
> > > fork/join across many different clients, and creating a nova like
> > > API
> > > out of the joined results of those clients (which feels sort of
> > > ummm,
> > > wrong). This is where I start to wonder about what the right API is
> > > here, and trying to map 1 `create_server` top-level API onto M
> > > child
> > > calls feels a little off (because that mapping will likely never be
> > > correct due to the nature of the child clouds, ie u have to assume
> > > a
> > > very strict homogenous nature to even get close to this working).
> >
> > > Where there other alternative ways of doing this that were
> > > discussed?
> >
> > > Perhaps even a new API that doesn't try to 1:1 map onto child
> > > calls,
> > > something along the line of make an API that more directly suits
> > > what
> > > this project is trying to do (vs trying to completely hide that
> > > there M
> > > child calls being made underneath).
> >
> > > I get the idea of becoming a uber-openstack-API and trying to unify
> > > X
> > > other other openstacks under that API with this uber-API but it
> > > just
> > > feels like the wrong way to tackle this.
> >
> > > -Josh
> >
> > This is an interesting phenomenon here: cloud operators and end users
> > often asked for single endpoint for the multi-site cloud. But for
> > technology guys often think multi-region mode(each region with
> > separate
> > endpoint) is not an issue for end user.
> > During the Tricircle big-tent application
> > https://review.openstack.org/#/c/338796/
> > , Anne Gentle c

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-09-01 Thread lebre . adrien
May I suggest opening one (or more) thread(s) with the correct subject(s)? 

There are at least three points discussed here: 

- one related to the proposal of the Massively distributed group
- one related to the Architecture WG with the communication issue between 
services (RPC+Rabbit MQ, REST API..)
- one that mainly focuses on TriCircle. 

While all of them are interesting it is a bit tedious to follow them in one 
large thread.

Regarding the Massively distributed WG (which was the initial topic ;)), I 
replied to some comments that had been made in the pad and added a new 
action to discuss the single vs. multi-endpoint question. 

Finally, regarding the comparison between proposals (the link that has been added 
at the end), I think it is a good idea, but it should be done after (or at 
least while) analyzing the current OpenStack ecosystem. As has been 
written in some comments of the TriCircle Big Tent application, it is 
important to first identify the pros/cons of the federation proposal before we go 
ahead holus-bolus.

My two cents
Ad_rien_ 

- Mail original -
> De: "joehuang" <joehu...@huawei.com>
> À: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org>
> Envoyé: Jeudi 1 Septembre 2016 11:18:17
> Objet: Re: [openstack-dev] [all][massively 
> distributed][architecture]Coordination between actions/WGs
> 
> > What is the REST API for tricircle?
> > When looking at the github I see:
> > ''Documentation: TBD''
> > Getting a feel for its REST API would really be helpful in
> > determine how
> > much of a proxy/request router it is vs being an actual API. I
> > don't
> > really want/like a proxy/request router (if that wasn't obvious,
> > ha).
> 
> For Nova API-GW/Cinder API-GW, Nova API/Cinder API will be accepted
> and forwarded.
> For Neutron with Tricircle Neutron plugin in Tricircle, it's Neutron
> API,
> just like any other Neutron plugin, doesn't change Neutron API.
> 
> Tricircle reuse tempest test cases to ensure the API accepted kept
> consistent
> with Nova/Cinder/Neutron. So no special documentation for these
> APIs(if we
> provide, documentation inconsistency will be introduced)
> 
> Except that, Tricircle Admin API provides its own API to manage
> bottom
> OpenStack instance, the documentation is in review:
> https://review.openstack.org/#/c/356291/
> 
> > Looking at say:
> > https://github.com/openstack/tricircle/blob/master/tricircle/nova_apigw/controllers/server.py
> > That doesn't inspire me so much, since that appears to be more of a
> > fork/join across many different clients, and creating a nova like
> > API
> > out of the joined results of those clients (which feels sort of
> > ummm,
> > wrong). This is where I start to wonder about what the right API is
> > here, and trying to map 1 `create_server` top-level API onto M
> > child
> > calls feels a little off (because that mapping will likely never be
> > correct due to the nature of the child clouds, ie u have to assume
> > a
> > very strict homogenous nature to even get close to this working).
> 
> > Where there other alternative ways of doing this that were
> > discussed?
> 
> > Perhaps even a new API that doesn't try to 1:1 map onto child
> > calls,
> > something along the line of make an API that more directly suits
> > what
> > this project is trying to do (vs trying to completely hide that
> > there M
> > child calls being made underneath).
> 
> > I get the idea of becoming a uber-openstack-API and trying to unify
> > X
> > other other openstacks under that API with this uber-API but it
> > just
> > feels like the wrong way to tackle this.
> 
> > -Josh
> 
> This is an interesting phenomenon here: cloud operators and end users
> often asked for single endpoint for the multi-site cloud. But for
> technology guys often think multi-region mode(each region with
> separate
> endpoint) is not an issue for end user.
> During the Tricircle big-tent application
> https://review.openstack.org/#/c/338796/
> , Anne Gentle commented "Rackspace
> public cloud has had multiple endpoints for regions for years now. I
> know
> from supporting end users for years we had to document it, and
> explain it
> often, but end-users worked with it."  In another comment, "I want to
> be sure
> I'm clear that I want many of the problems solved that you mention in
> your
>  application. In my view, Tricircle has so far been a bit of an
>  isolated effort that
>  I hadn't heard of until now. Hence the amount of discussion and
>  further work
> we may need to get to t

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-09-01 Thread Thierry Carrez
Clint Byrum wrote:
> [...]
> I think it's about time we get some Architecture WG meetings started,
> and put "Document RPC design" on the agenda.

+1
Anything blocking you? Let me know where/if I can help.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-09-01 Thread joehuang
I proposed to add a plugin mechanism in the Nova/Cinder API layer to remove
the inconsistency worry, but it'll take a long time to get consensus
community-wide. So Tricircle will be divided into two independent and
decoupled projects; only the project which deals with networking
automation will try to become a big-tent project, and the Nova/Cinder API-GW
will be removed from the scope of the big-tent application and put
into another project: 
https://docs.google.com/presentation/d/1kpVo5rsL6p_rq9TvkuczjommJSsisDiKJiurbhaQg7E

TricircleNetworking: Dedicated to cross-Neutron networking automation in
multi-region OpenStack deployments; runs with or without TricircleGateway.
It will try to become a big-tent project in the current application,
https://review.openstack.org/#/c/338796/.

TricircleGateway: Dedicated to providing an API gateway for those who need a
single Nova/Cinder API endpoint in a multi-region OpenStack deployment;
runs with or without TricircleNetworking. It will live as a non-big-tent,
non-official-OpenStack project, just like Tricircle's status today.
It will pursue big-tent status only if consensus can be achieved in the OpenStack
community, including the Arch WG and TCs; then we can decide how to get it on board
in OpenStack. A new repository application is needed for this project.

If you want to use other APIs to manage edge clouds, in the end we have to support
all operations and attributes provided in OpenStack, and it will grow into
a collection of API sets which includes everything in the OpenStack APIs. Can we
simplify and ignore some features which are already supported in
Nova/Cinder/Neutron? That is an open question.

Best Regards
Chaoyi Huang (joehuang)

From: Joshua Harlow [harlo...@fastmail.com]
Sent: 01 September 2016 12:17
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][massively 
distributed][architecture]Coordination between actions/WGs

joehuang wrote:
> I just pointed out the issues for RPC which is used between API cell and
> child cell if we deploy child cells in edge clouds. For this thread is
> about massively distributed cloud, so the RPC issues inside current
> Nova/Cinder/Neutron are not the main focus(it could be another important
> and interesting topic), for example, how to guarantee the reliability
> for rpc message:

+1 although I'd like to also discuss this, but so be it, perhaps a
different topic :)

>
>  > Cells is a good enhancement for Nova scalability, but there are
> some issues
>  > in deployment Cells for massively distributed edge clouds:
>  >
>  > 1) using RPC for inter-data center communication will bring the
> difficulty
>  > in inter-dc troubleshooting and maintenance, and some critical
> issue in
>  > operation. No CLI or restful API or other tools to manage a child
> cell
>  > directly. If the link between the API cell and child cells is
> broken, then
>  > the child cell in the remote edge cloud is unmanageable, no
> matter locally
>  > or remotely.
>  >
>  > 2). The challenge in security management for inter-site RPC
> communication.
>  > Please refer to the slides[1] for the challenge 3: Securing
> OpenStack over
>  > the Internet, Over 500 pin holes had to be opened in the firewall
> to allow
>  > this to work – Includes ports for VNC and SSH for CLIs. Using RPC
> in cells
>  > for edge cloud will face same security challenges.
>  >
>  > 3)only nova supports cells. But not only Nova needs to support
> edge clouds,
>  > Neutron, Cinder should be taken into account too. How about
> Neutron to
>  > support service function chaining in edge clouds? Using RPC? how
> to address
>  > challenges mentioned above? And Cinder?
>  >
>  > 4). Using RPC to do the production integration for hundreds of
> edge cloud is
>  > quite challenge idea, it's basic requirements that these edge
> clouds may
>  > be bought from multi-vendor, hardware/software or both.
>  > That means using cells in production for massively distributed
> edge clouds
>  > is quite bad idea. If Cells provide RESTful interface between API
> cell and
>  > child cell, it's much more acceptable, but it's still not enough,
> similar
>  > in Cinder, Neutron. Or just deploy lightweight OpenStack instance
> in each
>  > edge cloud, for example, one rack. The question is how to manage
> the large
>  > number of OpenStack instance and provision service.
>  >
>  >
> 
> [1]https://www.openstack.org/assets/presentation-media/OpenStack-2016-Austin-D-NFV-vM.pdf
>
>
> Th

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread Joshua Harlow

joehuang wrote:

I just pointed out the issues for RPC which is used between API cell and
child cell if we deploy child cells in edge clouds. For this thread is
about massively distributed cloud, so the RPC issues inside current
Nova/Cinder/Neutron are not the main focus(it could be another important
and interesting topic), for example, how to guarantee the reliability
for rpc message:


+1 although I'd like to also discuss this, but so be it, perhaps a 
different topic :)




 > Cells is a good enhancement for Nova scalability, but there are
some issues
 > in deployment Cells for massively distributed edge clouds:
 >
 > 1) using RPC for inter-data center communication will bring the
difficulty
 > in inter-dc troubleshooting and maintenance, and some critical
issue in
 > operation. No CLI or restful API or other tools to manage a child
cell
 > directly. If the link between the API cell and child cells is
broken, then
 > the child cell in the remote edge cloud is unmanageable, no
matter locally
 > or remotely.
 >
 > 2). The challenge in security management for inter-site RPC
communication.
 > Please refer to the slides[1] for the challenge 3: Securing
OpenStack over
 > the Internet, Over 500 pin holes had to be opened in the firewall
to allow
 > this to work – Includes ports for VNC and SSH for CLIs. Using RPC
in cells
 > for edge cloud will face same security challenges.
 >
 > 3)only nova supports cells. But not only Nova needs to support
edge clouds,
 > Neutron, Cinder should be taken into account too. How about
Neutron to
 > support service function chaining in edge clouds? Using RPC? how
to address
 > challenges mentioned above? And Cinder?
 >
 > 4). Using RPC to do the production integration for hundreds of
edge cloud is
 > quite challenge idea, it's basic requirements that these edge
clouds may
 > be bought from multi-vendor, hardware/software or both.
 > That means using cells in production for massively distributed
edge clouds
 > is quite bad idea. If Cells provide RESTful interface between API
cell and
 > child cell, it's much more acceptable, but it's still not enough,
similar
 > in Cinder, Neutron. Or just deploy lightweight OpenStack instance
in each
 > edge cloud, for example, one rack. The question is how to manage
the large
 > number of OpenStack instance and provision service.
 >
 >

[1]https://www.openstack.org/assets/presentation-media/OpenStack-2016-Austin-D-NFV-vM.pdf


That's also my suggestion to collect all candidate proposals, and
discuss these proposals and compare their cons. and pros. in the
Barcelona summit.

I propose to use Nova/Cinder/Neutron restful API for inter-site
communication for edge clouds, and provide Nova/Cinder/Neutron API as
the umbrella for all edge clouds. This is the pattern of Tricircle:
https://github.com/openstack/tricircle/



What is the REST API for tricircle?

When looking at the GitHub repo I see:

''Documentation: TBD''

Getting a feel for its REST API would really be helpful in determining how 
much of a proxy/request router it is vs. being an actual API. I don't 
really want/like a proxy/request router (if that wasn't obvious, ha).


Looking at say:

https://github.com/openstack/tricircle/blob/master/tricircle/nova_apigw/controllers/server.py

That doesn't inspire me so much, since that appears to be more of a 
fork/join across many different clients, creating a Nova-like API 
out of the joined results of those clients (which feels sort of ummm, 
wrong). This is where I start to wonder about what the right API is 
here, and trying to map 1 `create_server` top-level API onto M child 
calls feels a little off (because that mapping will likely never be 
correct due to the nature of the child clouds, i.e. you have to assume a 
very strict homogeneous nature to even get close to this working).
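
For readers who haven't looked at the code, here is an illustrative sketch of
the fork/join shape being described. It is not the actual Tricircle code; the
child_clients list, the per-client create_server() call and the region
attribute are hypothetical, invented only to show the pattern and why joining
the results is awkward.

    import concurrent.futures

    def create_server_everywhere(child_clients, name, image, flavor):
        # Fan one top-level request out to M child clouds, then join the results.
        def create_in(child):
            # Hypothetical per-region client; each child cloud may accept
            # different flavors/images, which is exactly the concern above.
            return child.create_server(name=name, image=image, flavor=flavor)

        results, errors = [], []
        with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
            futures = {pool.submit(create_in, c): c for c in child_clients}
            for fut, child in futures.items():
                try:
                    results.append((child.region, fut.result()))
                except Exception as exc:
                    # Partial failure: some clouds created the server, some did not.
                    errors.append((child.region, exc))
        # The awkward part: when only some regions succeeded, the joined result
        # cannot honestly be presented as a single Nova server record.
        return results, errors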


Were there other alternative ways of doing this that were discussed?

Perhaps even a new API that doesn't try to 1:1 map onto child calls, 
something along the lines of making an API that more directly suits what 
this project is trying to do (vs. trying to completely hide that there are M 
child calls being made underneath).


I get the idea of becoming an uber-OpenStack-API and trying to unify X 
other OpenStacks under this uber-API, but it just 
feels like the wrong way to tackle this.


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread joehuang
Some evaluation aspects were added to the etherpad 
https://etherpad.openstack.org/p/massively-distributed_WG_description for 
massively distributed edge clouds, so we can evaluate each proposal. Your 
comments on these considerations are welcome:

- Security management over the WAN: how to manage inter-site communication and 
edge clouds securely.
- Fail-safe: each edge cloud should be able to run independently; a crash in one 
edge cloud should not impact the running and operation of other edge clouds.
- Maintainability: installation/upgrade/patching of each edge cloud should be 
manageable independently; there should be no need to upgrade all edge clouds at 
the same time.
- Manageable: no islands even if some links are broken.
- Easy integration: need to support easy integration of hundreds or thousands of 
edge clouds, possibly from multiple vendors.
- Consistency: eventually consistent information (stable status) should be 
achievable across the distributed system.

I also prepared a skeleton for the candidate proposals discussion: 
https://etherpad.openstack.org/p/massively-distributed_WG_candidate_proposals_ocata,
and linked it into the etherpad mentioned above.

Considering that Tricircle is moving to split into two projects, 
TricircleNetworking and TricircleGateway 
(https://docs.google.com/presentation/d/1kpVo5rsL6p_rq9TvkuczjommJSsisDiKJiurbhaQg7E),
I listed these two sub-projects in the etherpad; they can work 
together or separately.

Best Regards
Chaoyi Huang(joehuang)


From: lebre.adr...@free.fr [lebre.adr...@free.fr]
Sent: 01 September 2016 1:36
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][massively 
distributed][architecture]Coordination between actions/WGs

As promised, I just wrote a first draft at 
https://etherpad.openstack.org/p/massively-distributed_WG_description
I will try to add more content tomorrow in particular pointers towards 
articles/ETSI specifications/use-cases.

Comments/remarks welcome.
Ad_rien_

PS: Chaoyi, your proposal for f2f sessions in Barcelona sounds good. It is 
probably a bit too ambitious for one summit because the point 3 ''Gaps in 
OpenStack'' looks to me a major action that will probably last more than just 
one summit but I think you gave the right directions !

- Mail original -
> De: "joehuang" <joehu...@huawei.com>
> À: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org>
> Envoyé: Mercredi 31 Août 2016 08:48:01
> Objet: Re: [openstack-dev] [all][massively 
> distributed][architecture]Coordination between actions/WGs
>
> Hello, Joshua,
>
> According to Peter's message, "However that still leaves us with the
> need to manage a stack of servers in thousands of telephone
> exchanges, central offices or even cell-sites, running multiple work
> loads in a distributed fault tolerant manner", the number of edge
> clouds may even at thousands level.
>
> These clouds may be disjoint, but some may need to provide
> inter-connection for the tenant's network, for example, to support
> database cluster distributed in several clouds, the inter-connection
> for data replication is needed.
>
> There are different thoughts, proposals or projects to tackle the
> challenge, architecture level discussion is necessary to see if
> these design and proposals can fulfill the demands. If there are
> lots of proposals, it's good to compare the pros. and cons, and
> which scenarios the proposal work, which scenario the proposal can't
> work very well.
>
> So I suggest to have at least two successive dedicated design summit
> sessions to discuss about that f2f, all thoughts, proposals or
> projects to tackle these kind of problem domain could be collected
> now, the topics to be discussed could be as follows :
>
> 0. Scenario
> 1, Use cases
> 2, Requirements in detail
> 3, Gaps in OpenStack
> 4, Proposal to be discussed
>
> Architecture level proposal discussion
> 1, Proposals
> 2, Pros. and Cons. comparation
> 3, Challenges
> 4, next step
>
> Best Regards
> Chaoyi Huang(joehuang)
> ________________
> From: Joshua Harlow [harlo...@fastmail.com]
> Sent: 31 August 2016 13:13
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all][massively
> distributed][architecture]Coordination between actions/WGs
>
> joehuang wrote:
> > Cells is a good enhancement for Nova scalability, but there are
> > some issues in deployment Cells for massively distributed edge
> > clouds:
> >
> > 1) using RPC for inter-data center communication will bring the
> > difficulty in inter-dc troubleshooting and maintenance, and some
> > critical issue in operation. No CLI or rest

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread joehuang
I just pointed out the issues for RPC as used between the API cell and child 
cells if we deploy child cells in edge clouds. Since this thread is about 
massively distributed clouds, the RPC issues inside the current 
Nova/Cinder/Neutron are not the main focus (that could be another important and 
interesting topic), for example, how to guarantee the reliability of RPC 
messages:

> Cells is a good enhancement for Nova scalability, but there are some issues
>  in deployment Cells for massively distributed edge clouds:
>
> 1) using RPC for inter-data center communication will bring the difficulty
> in inter-dc troubleshooting and maintenance, and some critical issue in
> operation.  No CLI or restful API or other tools to manage a child cell
> directly. If the link between the API cell and child cells is broken, then
> the child cell in the remote edge cloud is unmanageable, no matter locally
> or remotely.
>
> 2). The challenge in security management for inter-site RPC communication.
> Please refer to the slides[1] for the challenge 3: Securing OpenStack over
> the Internet, Over 500 pin holes had to be opened in the firewall to allow
> this to work – Includes ports for VNC and SSH for CLIs. Using RPC in cells
> for edge cloud will face same security challenges.
>
> 3)only nova supports cells. But not only Nova needs to support edge clouds,
> Neutron, Cinder should be taken into account too. How about Neutron to
> support service function chaining in edge clouds? Using RPC? how to address
> challenges mentioned above? And Cinder?
>
> 4). Using RPC to do the production integration for hundreds of edge cloud is
> quite challenge idea, it's basic requirements that these edge clouds may
> be bought from multi-vendor, hardware/software or both.
> That means using cells in production for massively distributed edge clouds
> is quite bad idea. If Cells provide RESTful interface between API cell and
> child cell, it's much more acceptable, but it's still not enough, similar
> in Cinder, Neutron. Or just deploy lightweight OpenStack instance in each
> edge cloud, for example, one rack. The question is how to manage the large
> number of OpenStack instance and provision service.
>
> [1]https://www.openstack.org/assets/presentation-media/OpenStack-2016-Austin-D-NFV-vM.pdf

That's also my suggestion: collect all candidate proposals, then discuss these 
proposals and compare their pros and cons at the Barcelona summit.

I propose to use the Nova/Cinder/Neutron RESTful APIs for inter-site communication 
for edge clouds, and to provide the Nova/Cinder/Neutron API as the umbrella for all 
edge clouds. This is the pattern of Tricircle: 
https://github.com/openstack/tricircle/
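
To make the proposed pattern a little more concrete, here is a minimal sketch
of driving a child edge cloud through its public REST APIs instead of RPC,
assuming python-novaclient and keystoneauth1; the endpoint URL, credentials,
region name and the image/flavor IDs are all made up for illustration.

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client as nova_client

    # Hypothetical edge-cloud endpoint and credentials, for illustration only.
    auth = v3.Password(auth_url='https://edge-pop-01.example.net:5000/v3',
                       username='admin', password='secret', project_name='admin',
                       user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)

    # Talking to the child cloud over its REST API: standard auth, standard
    # endpoints, standard troubleshooting tools -- no RPC bus spanning the WAN.
    nova = nova_client.Client('2', session=sess, region_name='edge-pop-01')
    server = nova.servers.create(name='edge-vm-1',
                                 image='6f2084e0-7f68-4a27-9a2f-0123456789ab',
                                 flavor='42')
    print(server.id, server.status)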

If there are other proposals, please don't hesitate to share them and let's compare.

Best Regards
Chaoyi Huang(joehuang)


From: Duncan Thomas [duncan.tho...@gmail.com]
Sent: 01 September 2016 2:03
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][massively 
distributed][architecture]Coordination between actions/WGs

On 31 August 2016 at 18:54, Joshua Harlow <harlo...@fastmail.com> wrote:
Duncan Thomas wrote:
On 31 August 2016 at 11:57, Bogdan Dobrelya <bdobre...@mirantis.com> wrote:

I agree that RPC design pattern, as it is implemented now, is a major
blocker for OpenStack in general. It requires a major redesign,
including handling of corner cases, on both sides, *especially* RPC call
clients. Or may be it just have to be abandoned to be replaced by a more
cloud friendly pattern.



Is there a writeup anywhere on what these issues are? I've heard this
sentiment expressed multiple times now, but without a writeup of the
issues and the design goals of the replacement, we're unlikely to make
progress on a replacement - even if somebody takes the heroic approach
and writes a full replacement themselves, the odds of getting community
by-in are very low.

+2 to that, there are a bunch of technologies that could replace the 
rabbit+rpc, aka, gRPC, then there is http2 and thrift and ... so a writeup IMHO 
would help at least clear the waters a little bit, and explain the blocker of 
the current RPC design pattern (which is multidimensional because most people 
are probably thinking RPC == rabbit when it's actually more than that now, ie 
zeromq and amqp1.0 and ...) and try to centralize on a better replacement.


Is anybody who dislikes the current pattern(s) and implementation(s) 
volunteering to start this documentation? I really am not aware of the issues, 
and I'd like to begin to understand them.
__
OpenStack Development Mailing List (not for us

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread James Bottomley
On Tue, 2016-08-30 at 03:08 +, joehuang wrote:
> Hello, Jay,
> 
> Sorry, I don't know why my mail-agent(Microsoft Outlook Web App) did 
> not carry the thread message-id information in the reply.  I'll check 
> and avoid to create a new thread for reply in existing thread.

It's a common problem with Outlook.  Microsoft created their own
threading standards for email which are adopted by no one.  Whenever
you get these headers in your email:

Thread-topic: 
Thread-index: 

And not these:

In-reply-to:
References: 

It usually means Exchange has decided the other end is a Microsoft
entity and it doesn't need to use the internet-standard reply types. 

Unfortunately, this isn't fixable in outlook because Exchange (the MTA)
not outlook (the MUA) does the threading.  There are some thoughts
floating around the internet on how to fix exchange; if you're lucky
and you have exchange 2003, this might fix it:

https://support.microsoft.com/en-us/kb/908027

James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread Clint Byrum
Excerpts from Ian Wells's message of 2016-08-31 12:30:45 -0700:
> On 31 August 2016 at 10:12, Clint Byrum  wrote:
> 
> > Excerpts from Duncan Thomas's message of 2016-08-31 12:42:23 +0300:
> > > On 31 August 2016 at 11:57, Bogdan Dobrelya 
> > wrote:
> > >
> > > > I agree that RPC design pattern, as it is implemented now, is a major
> > > > blocker for OpenStack in general. It requires a major redesign,
> > > > including handling of corner cases, on both sides, *especially* RPC
> > call
> > > > clients. Or may be it just have to be abandoned to be replaced by a
> > more
> > > > cloud friendly pattern.
> > >
> > >
> > > Is there a writeup anywhere on what these issues are? I've heard this
> > > sentiment expressed multiple times now, but without a writeup of the
> > issues
> > > and the design goals of the replacement, we're unlikely to make progress
> > on
> > > a replacement - even if somebody takes the heroic approach and writes a
> > > full replacement themselves, the odds of getting community by-in are very
> > > low.
> >
> > Right, this is exactly the sort of thing I'd like to gather a group of
> > design-minded folks around in an Architecture WG. Oslo is busy with the
> > implementations we have now, but I'm sure many oslo contributors would
> > like to come up for air and talk about the design issues, and come up
> > with a current design, and some revisions to it, or a whole new one,
> > that can be used to put these summit hallway rumors to rest.
> >
> 
> I'd say the issue is comparatively easy to describe.  In a call sequence:
> 
> 1. A sends a message to B
> 2. B receives messages
> 3. B acts upon message
> 4. B responds to message
> 5. A receives response
> 6. A acts upon response
> 
> ... you can have a fault at any point in that message flow (consider
> crashes or program restarts).  If you ask for something to happen, you wait
> for a reply, and you don't get one, what does it mean?  The operation may
> have happened, with or without success, or it may not have gotten to the
> far end.  If you send the message, does that mean you'd like it to cause an
> action tomorrow?  A year from now?  Or perhaps you'd like it to just not
> happen?  Do you understand what Oslo promises you here, and do you think
> every person who ever wrote an RPC call in the whole OpenStack solution
> also understood it?
> 
> I have opinions about other patterns we could use, but I don't want to push
> my solutions here, I want to see if this is really as much of a problem as
> it looks and if people concur with my summary above.  However, the right
> approach is most definitely to create a new and more fitting set of oslo
> interfaces for communication patterns, and then to encourage people to move
> to the new ones from the old.  (Whether RabbitMQ is involved is neither
> here nor there, as this is really a question of Oslo APIs, not their
> implementation.)

I think it's about time we get some Architecture WG meetings started,
and put "Document RPC design" on the agenda.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread Ian Wells
On 31 August 2016 at 10:12, Clint Byrum  wrote:

> Excerpts from Duncan Thomas's message of 2016-08-31 12:42:23 +0300:
> > On 31 August 2016 at 11:57, Bogdan Dobrelya 
> wrote:
> >
> > > I agree that RPC design pattern, as it is implemented now, is a major
> > > blocker for OpenStack in general. It requires a major redesign,
> > > including handling of corner cases, on both sides, *especially* RPC
> call
> > > clients. Or may be it just have to be abandoned to be replaced by a
> more
> > > cloud friendly pattern.
> >
> >
> > Is there a writeup anywhere on what these issues are? I've heard this
> > sentiment expressed multiple times now, but without a writeup of the
> issues
> > and the design goals of the replacement, we're unlikely to make progress
> on
> > a replacement - even if somebody takes the heroic approach and writes a
> > full replacement themselves, the odds of getting community by-in are very
> > low.
>
> Right, this is exactly the sort of thing I'd like to gather a group of
> design-minded folks around in an Architecture WG. Oslo is busy with the
> implementations we have now, but I'm sure many oslo contributors would
> like to come up for air and talk about the design issues, and come up
> with a current design, and some revisions to it, or a whole new one,
> that can be used to put these summit hallway rumors to rest.
>

I'd say the issue is comparatively easy to describe.  In a call sequence:

1. A sends a message to B
2. B receives messages
3. B acts upon message
4. B responds to message
5. A receives response
6. A acts upon response

... you can have a fault at any point in that message flow (consider
crashes or program restarts).  If you ask for something to happen, you wait
for a reply, and you don't get one, what does it mean?  The operation may
have happened, with or without success, or it may not have gotten to the
far end.  If you send the message, does that mean you'd like it to cause an
action tomorrow?  A year from now?  Or perhaps you'd like it to just not
happen?  Do you understand what Oslo promises you here, and do you think
every person who ever wrote an RPC call in the whole OpenStack solution
also understood it?

I have opinions about other patterns we could use, but I don't want to push
my solutions here, I want to see if this is really as much of a problem as
it looks and if people concur with my summary above.  However, the right
approach is most definitely to create a new and more fitting set of oslo
interfaces for communication patterns, and then to encourage people to move
to the new ones from the old.  (Whether RabbitMQ is involved is neither
here nor there, as this is really a question of Oslo APIs, not their
implementation.)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread Duncan Thomas
On 31 August 2016 at 18:54, Joshua Harlow  wrote:

> Duncan Thomas wrote:
>
>> On 31 August 2016 at 11:57, Bogdan Dobrelya > > wrote:
>>
>> I agree that RPC design pattern, as it is implemented now, is a major
>> blocker for OpenStack in general. It requires a major redesign,
>> including handling of corner cases, on both sides, *especially* RPC
>> call
>> clients. Or may be it just have to be abandoned to be replaced by a
>> more
>> cloud friendly pattern.
>>
>>
>>
>> Is there a writeup anywhere on what these issues are? I've heard this
>> sentiment expressed multiple times now, but without a writeup of the
>> issues and the design goals of the replacement, we're unlikely to make
>> progress on a replacement - even if somebody takes the heroic approach
>> and writes a full replacement themselves, the odds of getting community
>> by-in are very low.
>>
>
> +2 to that, there are a bunch of technologies that could replace the
> rabbit+rpc, aka, gRPC, then there is http2 and thrift and ... so a writeup
> IMHO would help at least clear the waters a little bit, and explain the
> blocker of the current RPC design pattern (which is multidimensional
> because most people are probably thinking RPC == rabbit when it's actually
> more than that now, ie zeromq and amqp1.0 and ...) and try to centralize on
> a better replacement.
>
>
Is anybody who dislikes the current pattern(s) and implementation(s)
volunteering to start this documentation? I really am not aware of the
issues, and I'd like to begin to understand them.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread lebre . adrien
As promised, I just wrote a first draft at 
https://etherpad.openstack.org/p/massively-distributed_WG_description
I will try to add more content tomorrow in particular pointers towards 
articles/ETSI specifications/use-cases.

Comments/remarks welcome. 
Ad_rien_

PS: Chaoyi, your proposal for f2f sessions in Barcelona sounds good. It is 
probably a bit too ambitious for one summit, because point 3, ''Gaps in 
OpenStack'', looks to me like a major action that will probably last more than just 
one summit, but I think you gave the right directions!

- Mail original -
> De: "joehuang" <joehu...@huawei.com>
> À: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org>
> Envoyé: Mercredi 31 Août 2016 08:48:01
> Objet: Re: [openstack-dev] [all][massively 
> distributed][architecture]Coordination between actions/WGs
> 
> Hello, Joshua,
> 
> According to Peter's message, "However that still leaves us with the
> need to manage a stack of servers in thousands of telephone
> exchanges, central offices or even cell-sites, running multiple work
> loads in a distributed fault tolerant manner", the number of edge
> clouds may even at thousands level.
> 
> These clouds may be disjoint, but some may need to provide
> inter-connection for the tenant's network, for example, to support
> database cluster distributed in several clouds, the inter-connection
> for data replication is needed.
> 
> There are different thoughts, proposals or projects to tackle the
> challenge, architecture level discussion is necessary to see if
> these design and proposals can fulfill the demands. If there are
> lots of proposals, it's good to compare the pros. and cons, and
> which scenarios the proposal work, which scenario the proposal can't
> work very well.
> 
> So I suggest to have at least two successive dedicated design summit
> sessions to discuss about that f2f, all  thoughts, proposals or
> projects to tackle these kind of problem domain could be collected
> now,  the topics to be discussed could be as follows :
> 
>0. Scenario
>1, Use cases
>2, Requirements  in detail
>3, Gaps in OpenStack
>4, Proposal to be discussed
> 
>   Architecture level proposal discussion
>1, Proposals
>2, Pros. and Cons. comparation
>3, Challenges
>4, next step
> 
> Best Regards
> Chaoyi Huang(joehuang)
> ________________
> From: Joshua Harlow [harlo...@fastmail.com]
> Sent: 31 August 2016 13:13
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all][massively
> distributed][architecture]Coordination between actions/WGs
> 
> joehuang wrote:
> > Cells is a good enhancement for Nova scalability, but there are
> > some issues in deployment Cells for massively distributed edge
> > clouds:
> >
> > 1) using RPC for inter-data center communication will bring the
> > difficulty in inter-dc troubleshooting and maintenance, and some
> > critical issue in operation. No CLI or restful API or other tools
> > to manage a child cell directly. If the link between the API cell
> > and child cells is broken, then the child cell in the remote edge
> > cloud is unmanageable, no matter locally or remotely.
> >
> > 2). The challenge in security management for inter-site RPC
> > communication. Please refer to the slides[1] for the challenge 3:
> > Securing OpenStack over the Internet, Over 500 pin holes had to be
> > opened in the firewall to allow this to work – Includes ports for
> > VNC and SSH for CLIs. Using RPC in cells for edge cloud will face
> > same security challenges.
> >
> > 3)only nova supports cells. But not only Nova needs to support edge
> > clouds, Neutron, Cinder should be taken into account too. How
> > about Neutron to support service function chaining in edge clouds?
> > Using RPC? how to address challenges mentioned above? And Cinder?
> >
> > 4). Using RPC to do the production integration for hundreds of edge
> > cloud is quite challenge idea, it's basic requirements that these
> > edge clouds may be bought from multi-vendor, hardware/software or
> > both.
> >
> > That means using cells in production for massively distributed edge
> > clouds is quite bad idea. If Cells provide RESTful interface
> > between API cell and child cell, it's much more acceptable, but
> > it's still not enough, similar in Cinder, Neutron. Or just deploy
> > lightweight OpenStack instance in each edge cloud, for example,
> > one rack. The question is how to manage the large number of
> > OpenStack instance and provision s

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread Clint Byrum
Excerpts from Duncan Thomas's message of 2016-08-31 12:42:23 +0300:
> On 31 August 2016 at 11:57, Bogdan Dobrelya  wrote:
> 
> > I agree that RPC design pattern, as it is implemented now, is a major
> > blocker for OpenStack in general. It requires a major redesign,
> > including handling of corner cases, on both sides, *especially* RPC call
> > clients. Or may be it just have to be abandoned to be replaced by a more
> > cloud friendly pattern.
> >
> 
> 
> Is there a writeup anywhere on what these issues are? I've heard this
> sentiment expressed multiple times now, but without a writeup of the issues
> and the design goals of the replacement, we're unlikely to make progress on
> a replacement - even if somebody takes the heroic approach and writes a
> full replacement themselves, the odds of getting community by-in are very
> low.

Right, this is exactly the sort of thing I'd like to gather a group of
design-minded folks around in an Architecture WG. Oslo is busy with the
implementations we have now, but I'm sure many oslo contributors would
like to come up for air and talk about the design issues, and come up
with a current design, and some revisions to it, or a whole new one,
that can be used to put these summit hallway rumors to rest.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread Joshua Harlow

Duncan Thomas wrote:

On 31 August 2016 at 11:57, Bogdan Dobrelya > wrote:

I agree that RPC design pattern, as it is implemented now, is a major
blocker for OpenStack in general. It requires a major redesign,
including handling of corner cases, on both sides, *especially* RPC call
clients. Or may be it just have to be abandoned to be replaced by a more
cloud friendly pattern.



Is there a writeup anywhere on what these issues are? I've heard this
sentiment expressed multiple times now, but without a writeup of the
issues and the design goals of the replacement, we're unlikely to make
progress on a replacement - even if somebody takes the heroic approach
and writes a full replacement themselves, the odds of getting community
by-in are very low.


+2 to that, there are a bunch of technologies that could replace 
rabbit+rpc, e.g. gRPC, then there is HTTP/2 and Thrift and ... so a 
writeup IMHO would help at least clear the waters a little bit, 
explain the blockers of the current RPC design pattern (which is 
multidimensional, because most people are probably thinking RPC == rabbit 
when it's actually more than that now, i.e. zeromq and amqp1.0 and ...), 
and try to centralize on a better replacement.


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread Duncan Thomas
On 31 August 2016 at 11:57, Bogdan Dobrelya  wrote:


> I agree that RPC design pattern, as it is implemented now, is a major
> blocker for OpenStack in general. It requires a major redesign,
> including handling of corner cases, on both sides, *especially* RPC call
> clients. Or may be it just have to be abandoned to be replaced by a more
> cloud friendly pattern.
>


Is there a writeup anywhere on what these issues are? I've heard this
sentiment expressed multiple times now, but without a writeup of the issues
and the design goals of the replacement, we're unlikely to make progress on
a replacement - even if somebody takes the heroic approach and writes a
full replacement themselves, the odds of getting community buy-in are very
low.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread Jay Pipes

On 08/31/2016 01:57 AM, Bogdan Dobrelya wrote:

I agree that RPC design pattern, as it is implemented now, is a major
blocker for OpenStack in general. It requires a major redesign,
including handling of corner cases, on both sides, *especially* RPC call
clients. Or maybe it just has to be abandoned and replaced by a more
cloud-friendly pattern.


++

-jay



Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread Bogdan Dobrelya
On 31.08.2016 03:52, joehuang wrote:
> Cells is a good enhancement for Nova scalability, but there are some issues 
> in deployment Cells for massively distributed edge clouds: 
> 
> 1) using RPC for inter-data center communication will bring the difficulty in 
> inter-dc troubleshooting and maintenance, and some critical issue in 
> operation. No CLI or restful API or other tools to manage a child cell 
> directly. If the link between the API cell and child cells is broken, then 
> the child cell in the remote edge cloud is unmanageable, no matter locally or 
> remotely. 
> 
> 2). The challenge in security management for inter-site RPC communication. 
> Please refer to the slides[1] for the challenge 3: Securing OpenStack over 
> the Internet, Over 500 pin holes had to be opened in the firewall to allow 
> this to work – Includes ports for VNC and SSH for CLIs. Using RPC in cells 
> for edge cloud will face same security challenges.
> 
> 3)only nova supports cells. But not only Nova needs to support edge clouds, 
> Neutron, Cinder should be taken into account too. How about Neutron to 
> support service function chaining in edge clouds? Using RPC? how to address 
> challenges mentioned above? And Cinder? 
> 
> 4). Using RPC to do the production integration for hundreds of edge cloud is 
> quite challenge idea, it's basic requirements that these edge clouds may be 
> bought from multi-vendor, hardware/software or both. 
> 
> That means using cells in production for massively distributed edge clouds is 
> quite bad idea. If Cells provide RESTful interface between API cell and child 
> cell, it's much more acceptable, but it's still not enough, similar in 
> Cinder, Neutron. Or just deploy lightweight OpenStack instance in each edge 
> cloud, for example, one rack. The question is how to manage the large number 
> of OpenStack instance and provision service.
> 
> [1]https://www.openstack.org/assets/presentation-media/OpenStack-2016-Austin-D-NFV-vM.pdf

I agree that RPC design pattern, as it is implemented now, is a major
blocker for OpenStack in general. It requires a major redesign,
including handling of corner cases, on both sides, *especially* RPC call
clients. Or maybe it just has to be abandoned and replaced by a more
cloud-friendly pattern.
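
To make those corner cases concrete: a caller over an unreliable link has to 
decide, on every call, what a timeout means and whether a retry is safe. A 
minimal sketch of the kind of handling an RPC call client ends up needing, 
assuming oslo.messaging and hypothetical method/argument names:

    # Sketch only: the failure handling an RPC *call* client has to own.
    import oslo_messaging as messaging

    def resize_volume(client, ctxt, volume_id, size, attempts=3):
        for attempt in range(1, attempts + 1):
            try:
                return client.call(ctxt, 'resize',
                                   volume_id=volume_id, size=size)
            except messaging.MessagingTimeout:
                # The request may or may not have been processed remotely;
                # a blind retry is only safe if 'resize' is idempotent.
                if attempt == attempts:
                    raise
            except messaging.MessagingException:
                # Delivery-level failure; whether a retry is safe depends on
                # the driver and on where the message was lost.
                if attempt == attempts:
                    raise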

> 
> Best Regards
> Chaoyi Huang(joehuang)
> 
> 
> From: Andrew Laski [and...@lascii.com]
> Sent: 30 August 2016 21:03
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [all][massively 
> distributed][architecture]Coordination between actions/WGs
> 
> On Tue, Aug 30, 2016, at 05:36 AM, lebre.adr...@free.fr wrote:
>> Dear all
>>
>> Sorry my lack of reactivity, I 've been out for the few last days.
>>
>> According to the different replies, I think we should enlarge the
>> discussion and not stay on the vCPE use-case, which is clearly specific
>> and represents only one use-case among the ones we would like to study.
>> For instance we are in touch with NRENs in France and Poland that are
>> interested to deploy up to one rack in each of their largest PoP in order
>> to provide a distributed IaaS platform  (for further informations you can
>> give a look to the presentation we gave during the last summit [1] [2]).
>>
>> The two questions were:
>> 1./ Understand whether the fog/edge computing use case is in the scope of
>> the Architecture WG and if not, do we need a massively distributed WG?
> 
> Besides the question of which WG this might fall under is the question
> of how any of the work groups are going to engage with the project
> communities. There is a group of developers pushing forward on cellsv2
> in Nova there should be some level of engagement between them and
> whomever is discussing the fog/edge computing use case. To me it seems
> like there's some level of overlap between the efforts even if cellsv2
> is not a full solution. But whatever conversations are taking place
> about fog/edge or large scale distributed use cases seem  to be
> happening in channels that I am not aware of, and I haven't heard any
> other cells developers mention them either.
> 
> So let's please find a way for people who are interested in these use
> cases to talk to the developers who are working on similar things.
> 
> 
>> 2./ How can we coordinate our actions with the ones performed in the
>> Architecture WG?
>>
>> Regarding 1./, according to the different reactions, I propose to write a
>> first draft in an etherpard to present the main goal of the Massively
>> distributed WG and how people interested by such discussions can interact
>> (I will paste the link to the etherpad by tomorrow

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread joehuang
Hello, Joshua,

According to Peter's message, "However that still leaves us with the need to 
manage a stack of servers in thousands of telephone exchanges, central offices 
or even cell-sites, running multiple work loads in a distributed fault tolerant 
manner", the number of edge clouds may even be at the thousands level. 

These clouds may be disjoint, but some may need to provide inter-connection for 
the tenant's network; for example, to support a database cluster distributed 
across several clouds, inter-connection for data replication is needed.

There are different thoughts, proposals or projects to tackle the challenge; 
architecture-level discussion is necessary to see whether these designs and 
proposals can fulfill the demands. If there are lots of proposals, it's good to 
compare the pros and cons, and to see in which scenarios a proposal works and 
in which it can't work very well. 

So I suggest having at least two successive dedicated design summit sessions 
to discuss this f2f. All thoughts, proposals or projects to tackle this kind of 
problem domain could be collected now; the topics to be discussed 
could be as follows:

   0. Scenarios
   1. Use cases
   2. Requirements in detail
   3. Gaps in OpenStack
   4. Proposals to be discussed

  Architecture-level proposal discussion
   1. Proposals
   2. Pros and cons comparison
   3. Challenges
   4. Next steps

Best Regards
Chaoyi Huang(joehuang)

From: Joshua Harlow [harlo...@fastmail.com]
Sent: 31 August 2016 13:13
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][massively 
distributed][architecture]Coordination between actions/WGs

joehuang wrote:
> Cells is a good enhancement for Nova scalability, but there are some issues 
> in deployment Cells for massively distributed edge clouds:
>
> 1) using RPC for inter-data center communication will bring the difficulty in 
> inter-dc troubleshooting and maintenance, and some critical issue in 
> operation. No CLI or restful API or other tools to manage a child cell 
> directly. If the link between the API cell and child cells is broken, then 
> the child cell in the remote edge cloud is unmanageable, no matter locally or 
> remotely.
>
> 2). The challenge in security management for inter-site RPC communication. 
> Please refer to the slides[1] for the challenge 3: Securing OpenStack over 
> the Internet, Over 500 pin holes had to be opened in the firewall to allow 
> this to work – Includes ports for VNC and SSH for CLIs. Using RPC in cells 
> for edge cloud will face same security challenges.
>
> 3)only nova supports cells. But not only Nova needs to support edge clouds, 
> Neutron, Cinder should be taken into account too. How about Neutron to 
> support service function chaining in edge clouds? Using RPC? how to address 
> challenges mentioned above? And Cinder?
>
> 4). Using RPC to do the production integration for hundreds of edge cloud is 
> quite challenge idea, it's basic requirements that these edge clouds may be 
> bought from multi-vendor, hardware/software or both.
>
> That means using cells in production for massively distributed edge clouds is 
> quite bad idea. If Cells provide RESTful interface between API cell and child 
> cell, it's much more acceptable, but it's still not enough, similar in 
> Cinder, Neutron. Or just deploy lightweight OpenStack instance in each edge 
> cloud, for example, one rack. The question is how to manage the large number 
> of OpenStack instance and provision service.
>
> [1]https://www.openstack.org/assets/presentation-media/OpenStack-2016-Austin-D-NFV-vM.pdf
>
> Best Regards
> Chaoyi Huang(joehuang)
>

Very interesting questions,

I'm starting to think that the API you want isn't really nova, neutron,
or cinder at this point, though. At some point it feels like the effort
you are spending on things like service chaining (there is a South Park
episode I almost linked here, but decided I probably shouldn't) would
almost be better served by a top-level API that knows how to communicate
with the more isolated silos (edge clouds, I guess you are calling them).

It just starts to feel that the architecture you want and the one I see
being built are quite different, and I haven't seen the latter shift to
something different, so maybe it's time to turn the problem on its head
and accept that a solution may/will have to figure out how to unify a
bunch of disjoint clouds (as best you can)?

I know I can say that I'd like such a thing as well, because though
GoDaddy doesn't have hundreds of edge clouds, it is approaching more
than a handful of disjoint clouds (across the world), and a way to join
them behind something that can unify them (across just nova) as much as
it can would be welcome.

-Josh



__

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-30 Thread Joshua Harlow

joehuang wrote:

Cells is a good enhancement for Nova scalability, but there are some issues in 
deploying Cells for massively distributed edge clouds:

1) Using RPC for inter-data-center communication will bring difficulty in 
inter-DC troubleshooting and maintenance, and some critical issues in operation. 
There is no CLI, RESTful API or other tool to manage a child cell directly. If 
the link between the API cell and child cells is broken, then the child cell in 
the remote edge cloud is unmanageable, whether locally or remotely.

2) The challenge of security management for inter-site RPC communication. 
Please refer to the slides[1] for challenge 3, "Securing OpenStack over the 
Internet": over 500 pinholes had to be opened in the firewall to allow this to 
work, including ports for VNC and SSH for CLIs. Using RPC in cells for edge 
clouds will face the same security challenges.

3) Only Nova supports cells. But Nova is not the only project that needs to 
support edge clouds; Neutron and Cinder should be taken into account too. How 
is Neutron to support service function chaining in edge clouds? Using RPC? How 
would the challenges mentioned above be addressed? And Cinder?

4) Using RPC to do the production integration for hundreds of edge clouds is 
quite a challenging idea; it's a basic requirement that these edge clouds may 
be bought from multiple vendors, for hardware, software or both.

That means using cells in production for massively distributed edge clouds is 
quite a bad idea. If Cells provided a RESTful interface between the API cell 
and child cells, it would be much more acceptable, but still not enough, and 
similarly for Cinder and Neutron. Or just deploy a lightweight OpenStack 
instance in each edge cloud, for example one rack. The question then is how to 
manage the large number of OpenStack instances and provision services.

[1]https://www.openstack.org/assets/presentation-media/OpenStack-2016-Austin-D-NFV-vM.pdf

Best Regards
Chaoyi Huang(joehuang)



Very interesting questions,

I'm starting to think that the API you want isn't really nova, neutron, 
or cinder at this point, though. At some point it feels like the effort 
you are spending on things like service chaining (there is a South Park 
episode I almost linked here, but decided I probably shouldn't) would 
almost be better served by a top-level API that knows how to communicate 
with the more isolated silos (edge clouds, I guess you are calling them).

It just starts to feel that the architecture you want and the one I see 
being built are quite different, and I haven't seen the latter shift to 
something different, so maybe it's time to turn the problem on its head 
and accept that a solution may/will have to figure out how to unify a 
bunch of disjoint clouds (as best you can)?

I know I can say that I'd like such a thing as well, because though 
GoDaddy doesn't have hundreds of edge clouds, it is approaching more 
than a handful of disjoint clouds (across the world), and a way to join 
them behind something that can unify them (across just nova) as much as 
it can would be welcome.
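
A minimal sketch of what "joining them behind something" could start as, 
assuming openstacksdk and a clouds.yaml with purely hypothetical cloud names; 
a real unifying layer would need answers for identity, quota and failure 
domains, but the fan-out itself is cheap:

    # Sketch only: fan a read-only Nova query out across disjoint clouds.
    import openstack

    EDGE_CLOUDS = ['edge-ams', 'edge-sea', 'edge-syd']  # hypothetical names

    def list_all_servers():
        inventory = {}
        for name in EDGE_CLOUDS:
            try:
                conn = openstack.connect(cloud=name)
                inventory[name] = [s.name for s in conn.compute.servers()]
            except Exception as exc:
                # An unreachable edge site must degrade the view, not break it.
                inventory[name] = 'unreachable: %s' % exc
        return inventory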


-Josh





Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-30 Thread joehuang
Cells is a good enhancement for Nova scalability, but there are some issues in 
deploying Cells for massively distributed edge clouds:

1) Using RPC for inter-data-center communication will bring difficulty in 
inter-DC troubleshooting and maintenance, and some critical issues in operation. 
There is no CLI, RESTful API or other tool to manage a child cell directly. If 
the link between the API cell and child cells is broken, then the child cell in 
the remote edge cloud is unmanageable, whether locally or remotely.

2) The challenge of security management for inter-site RPC communication. 
Please refer to the slides[1] for challenge 3, "Securing OpenStack over the 
Internet": over 500 pinholes had to be opened in the firewall to allow this to 
work, including ports for VNC and SSH for CLIs. Using RPC in cells for edge 
clouds will face the same security challenges.

3) Only Nova supports cells. But Nova is not the only project that needs to 
support edge clouds; Neutron and Cinder should be taken into account too. How 
is Neutron to support service function chaining in edge clouds? Using RPC? How 
would the challenges mentioned above be addressed? And Cinder?

4) Using RPC to do the production integration for hundreds of edge clouds is 
quite a challenging idea; it's a basic requirement that these edge clouds may 
be bought from multiple vendors, for hardware, software or both.

That means using cells in production for massively distributed edge clouds is 
quite a bad idea. If Cells provided a RESTful interface between the API cell 
and child cells, it would be much more acceptable, but still not enough, and 
similarly for Cinder and Neutron. Or just deploy a lightweight OpenStack 
instance in each edge cloud, for example one rack. The question then is how to 
manage the large number of OpenStack instances and provision services.

[1]https://www.openstack.org/assets/presentation-media/OpenStack-2016-Austin-D-NFV-vM.pdf
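
To illustrate why a RESTful management path changes the operational picture: an 
HTTP endpoint on a child cell can be reached through one well-known port, probed 
with curl, and given explicit timeouts, none of which holds for a broker-backed 
RPC link. A minimal sketch from the API-cell side, assuming purely hypothetical 
status URLs exposed by each child cell:

    # Sketch only: a broken WAN link shows up as an explicit per-cell HTTP
    # error instead of an opaque RPC timeout inside the API cell.
    import requests

    CHILD_CELLS = {
        'edge-cell-01': 'https://edge01.example.net:8774/cell/status',
        'edge-cell-02': 'https://edge02.example.net:8774/cell/status',
    }

    def check_cells(timeout=5):
        report = {}
        for cell, url in CHILD_CELLS.items():
            try:
                resp = requests.get(url, timeout=timeout)
                resp.raise_for_status()
                report[cell] = resp.json()
            except requests.RequestException as exc:
                report[cell] = {'reachable': False, 'error': str(exc)}
        return report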

Best Regards
Chaoyi Huang(joehuang)


From: Andrew Laski [and...@lascii.com]
Sent: 30 August 2016 21:03
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all][massively 
distributed][architecture]Coordination between actions/WGs

On Tue, Aug 30, 2016, at 05:36 AM, lebre.adr...@free.fr wrote:
> Dear all
>
> Sorry my lack of reactivity, I 've been out for the few last days.
>
> According to the different replies, I think we should enlarge the
> discussion and not stay on the vCPE use-case, which is clearly specific
> and represents only one use-case among the ones we would like to study.
> For instance we are in touch with NRENs in France and Poland that are
> interested to deploy up to one rack in each of their largest PoP in order
> to provide a distributed IaaS platform  (for further informations you can
> give a look to the presentation we gave during the last summit [1] [2]).
>
> The two questions were:
> 1./ Understand whether the fog/edge computing use case is in the scope of
> the Architecture WG and if not, do we need a massively distributed WG?

Besides the question of which WG this might fall under is the question
of how any of the work groups are going to engage with the project
communities. There is a group of developers pushing forward on cellsv2
in Nova; there should be some level of engagement between them and
whomever is discussing the fog/edge computing use case. To me it seems
like there's some level of overlap between the efforts even if cellsv2
is not a full solution. But whatever conversations are taking place
about fog/edge or large-scale distributed use cases seem to be
happening in channels that I am not aware of, and I haven't heard any
other cells developers mention them either.

So let's please find a way for people who are interested in these use
cases to talk to the developers who are working on similar things.


> 2./ How can we coordinate our actions with the ones performed in the
> Architecture WG?
>
> Regarding 1./, according to the different reactions, I propose to write a
> first draft in an etherpard to present the main goal of the Massively
> distributed WG and how people interested by such discussions can interact
> (I will paste the link to the etherpad by tomorrow).
>
> Regarding 2./,  I mentioned the Architecture WG because we do not want to
> develop additional software layers like Tricircle or other solutions (at
> least for the moment).
> The goal of the WG is to conduct studies and experiments to identify to
> what extent current mechanisms can satisfy the needs of such a massively
> distributed use-cases and what are the missing elements.
>
> I don't want to give to many details in the present mail in order to stay
> as consice as possible (details will be given in the proposal).
>
> Best regards,
> Adrien
>
> [1] https://youtu.be/1oaNwDP661A?t=583 (please just watch the use-case
> introduction ;  the distribution of the DB  was one possible revi

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-30 Thread Andrew Laski


On Tue, Aug 30, 2016, at 09:55 AM, lebre.adr...@free.fr wrote:
> 
> 
> - Mail original -
> > De: "Andrew Laski" <and...@lascii.com>
> > À: openstack-dev@lists.openstack.org
> > Envoyé: Mardi 30 Août 2016 15:03:35
> > Objet: Re: [openstack-dev] [all][massively 
> > distributed][architecture]Coordination between actions/WGs
> > 
> > 
> > 
> > On Tue, Aug 30, 2016, at 05:36 AM, lebre.adr...@free.fr wrote:
> > > Dear all
> > > 
> > > Sorry my lack of reactivity, I 've been out for the few last days.
> > > 
> > > According to the different replies, I think we should enlarge the
> > > discussion and not stay on the vCPE use-case, which is clearly
> > > specific
> > > and represents only one use-case among the ones we would like to
> > > study.
> > > For instance we are in touch with NRENs in France and Poland that
> > > are
> > > interested to deploy up to one rack in each of their largest PoP in
> > > order
> > > to provide a distributed IaaS platform  (for further informations
> > > you can
> > > give a look to the presentation we gave during the last summit [1]
> > > [2]).
> > > 
> > > The two questions were:
> > > 1./ Understand whether the fog/edge computing use case is in the
> > > scope of
> > > the Architecture WG and if not, do we need a massively distributed
> > > WG?
> > 
> > Besides the question of which WG this might fall under is the
> > question
> > of how any of the work groups are going to engage with the project
> > communities. There is a group of developers pushing forward on
> > cellsv2
> > in Nova there should be some level of engagement between them and
> > whomever is discussing the fog/edge computing use case. To me it
> > seems
> > like there's some level of overlap between the efforts even if
> > cellsv2
> > is not a full solution. But whatever conversations are taking place
> > about fog/edge or large scale distributed use cases seem  to be
> > happening in channels that I am not aware of, and I haven't heard any
> > other cells developers mention them either.
> > 
> 
> I can only agree !
> Actually we organised an informal exchange with Sylvain Bauza in July in
> order to get additional information regarding the Cell V2
> architecture/implementation.  From our point of view, such changes in the
> code can help us toward our ultimate goal of managing remote DCs in an
> efficient manner (i.e by mitigating for instance the inter-sites
> traffic). 
> 
> 
> > So let's please find a way for people who are interested in these use
> > cases to talk to the developers who are working on similar things.
> 
> What is your proposal ? any particular ideas in mind?  

I am generally aware of things that are discussed in the weekly Nova IRC
meeting, on the ML with a [Nova] tag, and in proposed specs. Using those
forums as part of these discussions would be my recommendation. Or at
the very least use those forums to advertise that there is discussion
happening elsewhere.

The reality is that in order for any discussion to turn into tangible
work it needs to end up as a proposed spec. That can be the start of a
discussion or a summary of a discussion but it really needs to be a part
of the lifecycle of any discussion. Often from there it can branch out
into ML discussions or summit discussions. But specs are a good contact
point between Nova developers and people who have use cases for Nova. It
is important to note that spec proposals should be backed by someone
willing to do the work, which doesn't necessarily need to be the person
proposing the spec.


> 
> Ad_rien_
> 
> > 
> > 
> > > 2./ How can we coordinate our actions with the ones performed in
> > > the
> > > Architecture WG?
> > > 
> > > Regarding 1./, according to the different reactions, I propose to
> > > write a
> > > first draft in an etherpard to present the main goal of the
> > > Massively
> > > distributed WG and how people interested by such discussions can
> > > interact
> > > (I will paste the link to the etherpad by tomorrow).
> > > 
> > > Regarding 2./,  I mentioned the Architecture WG because we do not
> > > want to
> > > develop additional software layers like Tricircle or other
> > > solutions (at
> > > least for the moment).
> > > The goal of the WG is to conduct studies and experiments to
> > > identify to
> > > what extent current mechanisms can satisfy the needs of such a
> > > massi

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-30 Thread lebre . adrien


- Mail original -
> De: "Andrew Laski" <and...@lascii.com>
> À: openstack-dev@lists.openstack.org
> Envoyé: Mardi 30 Août 2016 15:03:35
> Objet: Re: [openstack-dev] [all][massively 
> distributed][architecture]Coordination between actions/WGs
> 
> 
> 
> On Tue, Aug 30, 2016, at 05:36 AM, lebre.adr...@free.fr wrote:
> > Dear all
> > 
> > Sorry my lack of reactivity, I 've been out for the few last days.
> > 
> > According to the different replies, I think we should enlarge the
> > discussion and not stay on the vCPE use-case, which is clearly
> > specific
> > and represents only one use-case among the ones we would like to
> > study.
> > For instance we are in touch with NRENs in France and Poland that
> > are
> > interested to deploy up to one rack in each of their largest PoP in
> > order
> > to provide a distributed IaaS platform  (for further informations
> > you can
> > give a look to the presentation we gave during the last summit [1]
> > [2]).
> > 
> > The two questions were:
> > 1./ Understand whether the fog/edge computing use case is in the
> > scope of
> > the Architecture WG and if not, do we need a massively distributed
> > WG?
> 
> Besides the question of which WG this might fall under is the
> question
> of how any of the work groups are going to engage with the project
> communities. There is a group of developers pushing forward on
> cellsv2
> in Nova there should be some level of engagement between them and
> whomever is discussing the fog/edge computing use case. To me it
> seems
> like there's some level of overlap between the efforts even if
> cellsv2
> is not a full solution. But whatever conversations are taking place
> about fog/edge or large scale distributed use cases seem  to be
> happening in channels that I am not aware of, and I haven't heard any
> other cells developers mention them either.
> 

I can only agree!
Actually we organised an informal exchange with Sylvain Bauza in July in order 
to get additional information regarding the Cell V2 
architecture/implementation. From our point of view, such changes in the code 
can help us toward our ultimate goal of managing remote DCs in an efficient 
manner (i.e. by mitigating, for instance, the inter-site traffic). 


> So let's please find a way for people who are interested in these use
> cases to talk to the developers who are working on similar things.

What is your proposal? Any particular ideas in mind?

Ad_rien_

> 
> 
> > 2./ How can we coordinate our actions with the ones performed in
> > the
> > Architecture WG?
> > 
> > Regarding 1./, according to the different reactions, I propose to
> > write a
> > first draft in an etherpard to present the main goal of the
> > Massively
> > distributed WG and how people interested by such discussions can
> > interact
> > (I will paste the link to the etherpad by tomorrow).
> > 
> > Regarding 2./,  I mentioned the Architecture WG because we do not
> > want to
> > develop additional software layers like Tricircle or other
> > solutions (at
> > least for the moment).
> > The goal of the WG is to conduct studies and experiments to
> > identify to
> > what extent current mechanisms can satisfy the needs of such a
> > massively
> > distributed use-cases and what are the missing elements.
> > 
> > I don't want to give to many details in the present mail in order
> > to stay
> > as consice as possible (details will be given in the proposal).
> > 
> > Best regards,
> > Adrien
> > 
> > [1] https://youtu.be/1oaNwDP661A?t=583 (please just watch the
> > use-case
> > introduction ;  the distribution of the DB  was one possible
> > revision of
> > Nova and according to the cell V2 changes it is probably now
> > deprecated).
> > [2] https://hal.inria.fr/hal-01320235
> > 
> > - Mail original -
> > > De: "Peter Willis" <p3t3rw11...@gmail.com>
> > > À: "OpenStack Development Mailing List (not for usage questions)"
> > > <openstack-dev@lists.openstack.org>
> > > Envoyé: Mardi 30 Août 2016 11:24:00
> > > Objet: Re: [openstack-dev] [all][massively
> > > distributed][architecture]Coordination between actions/WGs
> > > 
> > > 
> > > 
> > > Colleagues,
> > > 
> > > 
> > > An interesting discussion, the only question appears to be
> > > whether
> > > vCPE is a suitable use case as the others do appear to be cloud
> > > use
> > > cases. L

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-30 Thread Andrew Laski


On Tue, Aug 30, 2016, at 05:36 AM, lebre.adr...@free.fr wrote:
> Dear all 
> 
> Sorry my lack of reactivity, I 've been out for the few last days.
> 
> According to the different replies, I think we should enlarge the
> discussion and not stay on the vCPE use-case, which is clearly specific
> and represents only one use-case among the ones we would like to study.
> For instance we are in touch with NRENs in France and Poland that are
> interested to deploy up to one rack in each of their largest PoP in order
> to provide a distributed IaaS platform  (for further informations you can
> give a look to the presentation we gave during the last summit [1] [2]).
> 
> The two questions were: 
> 1./ Understand whether the fog/edge computing use case is in the scope of
> the Architecture WG and if not, do we need a massively distributed WG? 

Besides the question of which WG this might fall under is the question
of how any of the work groups are going to engage with the project
communities. There is a group of developers pushing forward on cellsv2
in Nova; there should be some level of engagement between them and
whomever is discussing the fog/edge computing use case. To me it seems
like there's some level of overlap between the efforts even if cellsv2
is not a full solution. But whatever conversations are taking place
about fog/edge or large-scale distributed use cases seem to be
happening in channels that I am not aware of, and I haven't heard any
other cells developers mention them either.

So let's please find a way for people who are interested in these use
cases to talk to the developers who are working on similar things.


> 2./ How can we coordinate our actions with the ones performed in the
> Architecture WG? 
> 
> Regarding 1./, according to the different reactions, I propose to write a
> first draft in an etherpard to present the main goal of the Massively
> distributed WG and how people interested by such discussions can interact
> (I will paste the link to the etherpad by tomorrow). 
> 
> Regarding 2./,  I mentioned the Architecture WG because we do not want to
> develop additional software layers like Tricircle or other solutions (at
> least for the moment). 
> The goal of the WG is to conduct studies and experiments to identify to
> what extent current mechanisms can satisfy the needs of such a massively
> distributed use-cases and what are the missing elements.  
> 
> I don't want to give to many details in the present mail in order to stay
> as consice as possible (details will be given in the proposal).
> 
> Best regards, 
> Adrien 
> 
> [1] https://youtu.be/1oaNwDP661A?t=583 (please just watch the use-case
> introduction ;  the distribution of the DB  was one possible revision of
> Nova and according to the cell V2 changes it is probably now deprecated). 
> [2] https://hal.inria.fr/hal-01320235
> 
> - Mail original -
> > De: "Peter Willis" <p3t3rw11...@gmail.com>
> > À: "OpenStack Development Mailing List (not for usage questions)" 
> > <openstack-dev@lists.openstack.org>
> > Envoyé: Mardi 30 Août 2016 11:24:00
> > Objet: Re: [openstack-dev] [all][massively 
> > distributed][architecture]Coordination between actions/WGs
> > 
> > 
> > 
> > Colleagues,
> > 
> > 
> > An interesting discussion, the only question appears to be whether
> > vCPE is a suitable use case as the others do appear to be cloud use
> > cases. Lots of people assume CPE == small residential devices
> > however CPE covers a broad spectrum of appliances. Some of our
> > customers' premises are data centres, some are HQs, some are
> > campuses, some are branches. For residential CPE we use the
> > Broadband Forum's CPE Wide Area Network management protocol
> > (TR-069), which may be easier to modify to handle virtual
> > machines/containers etc. than to get OpenStack to scale to millions
> > of nodes. However that still leaves us with the need to manage a
> > stack of servers in thousands of telephone exchanges, central
> > offices or even cell-sites, running multiple work loads in a
> > distributed fault tolerant manner.
> > 
> > 
> > Best Regards,
> > Peter.
> > 
> > 
> > On Tue, Aug 30, 2016 at 4:48 AM, joehuang < joehu...@huawei.com >
> > wrote:
> > 
> > 
> > Hello, Jay,
> > 
> > > The Telco vCPE and Mobile "Edge cloud" (hint: not a cloud) use
> > > cases
> > 
> > Do you mean Mobile Edge Computing for Mobile "Edge cloud"? If so,
> > it's cloud. The introduction slides [1] can help you to learn the
> > use cases quickly, there are lots of material in ETSI website[2].
> >

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-30 Thread Thierry Carrez
lebre.adr...@free.fr wrote:
> [...]
> According to the different replies, I think we should enlarge the discussion 
> and not stay on the vCPE use-case, which is clearly specific and represents 
> only one use-case among the ones we would like to study. For instance we are 
> in touch with NRENs in France and Poland that are interested to deploy up to 
> one rack in each of their largest PoP in order to provide a distributed IaaS 
> platform  (for further informations you can give a look to the presentation 
> we gave during the last summit [1] [2]).

+1

I think working on supporting more distributed clouds is worthwhile
because the technology would enable a lot of new use cases. Centering
the discussion on the Telco industry's specific vCPE use case is
unnecessarily limiting...

> [...]
> Regarding 2./,  I mentioned the Architecture WG because we do not want to 
> develop additional software layers like Tricircle or other solutions (at 
> least for the moment). 
> The goal of the WG is to conduct studies and experiments to identify to what 
> extent current mechanisms can satisfy the needs of such a massively 
> distributed use-cases and what are the missing elements. 

Agreed that a bottom-up, incremental improvement strategy sounds more
likely to succeed in an established project like OpenStack (compared to
a big-bang top-down re-architecture).

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-30 Thread lebre . adrien
Dear all 

Sorry for my lack of reactivity, I've been out for the last few days.

According to the different replies, I think we should enlarge the discussion 
and not stay on the vCPE use-case, which is clearly specific and represents 
only one use-case among the ones we would like to study. For instance we are in 
touch with NRENs in France and Poland that are interested in deploying up to one 
rack in each of their largest PoPs in order to provide a distributed IaaS 
platform (for further information you can take a look at the presentation we 
gave during the last summit [1] [2]).

The two questions were: 
1./ Understand whether the fog/edge computing use case is in the scope of the 
Architecture WG and if not, do we need a massively distributed WG? 
2./ How can we coordinate our actions with the ones performed in the 
Architecture WG? 

Regarding 1./, according to the different reactions, I propose to write a first 
draft in an etherpad to present the main goal of the Massively distributed WG 
and how people interested in such discussions can interact (I will paste the 
link to the etherpad by tomorrow). 

Regarding 2./, I mentioned the Architecture WG because we do not want to 
develop additional software layers like Tricircle or other solutions (at least 
for the moment). 
The goal of the WG is to conduct studies and experiments to identify to what 
extent current mechanisms can satisfy the needs of such massively distributed 
use-cases and what the missing elements are.

I don't want to give too many details in the present mail in order to stay as 
concise as possible (details will be given in the proposal).

Best regards, 
Adrien 

[1] https://youtu.be/1oaNwDP661A?t=583 (please just watch the use-case 
introduction; the distribution of the DB was one possible revision of Nova 
and, according to the cell V2 changes, it is probably now deprecated). 
[2] https://hal.inria.fr/hal-01320235

- Mail original -
> De: "Peter Willis" <p3t3rw11...@gmail.com>
> À: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org>
> Envoyé: Mardi 30 Août 2016 11:24:00
> Objet: Re: [openstack-dev] [all][massively 
> distributed][architecture]Coordination between actions/WGs
> 
> 
> 
> Colleagues,
> 
> 
> An interesting discussion, the only question appears to be whether
> vCPE is a suitable use case as the others do appear to be cloud use
> cases. Lots of people assume CPE == small residential devices
> however CPE covers a broad spectrum of appliances. Some of our
> customers' premises are data centres, some are HQs, some are
> campuses, some are branches. For residential CPE we use the
> Broadband Forum's CPE Wide Area Network management protocol
> (TR-069), which may be easier to modify to handle virtual
> machines/containers etc. than to get OpenStack to scale to millions
> of nodes. However that still leaves us with the need to manage a
> stack of servers in thousands of telephone exchanges, central
> offices or even cell-sites, running multiple work loads in a
> distributed fault tolerant manner.
> 
> 
> Best Regards,
> Peter.
> 
> 
> On Tue, Aug 30, 2016 at 4:48 AM, joehuang < joehu...@huawei.com >
> wrote:
> 
> 
> Hello, Jay,
> 
> > The Telco vCPE and Mobile "Edge cloud" (hint: not a cloud) use
> > cases
> 
> Do you mean Mobile Edge Computing for Mobile "Edge cloud"? If so,
> it's cloud. The introduction slides [1] can help you to learn the
> use cases quickly, there are lots of material in ETSI website[2].
> 
> [1]
> http://www.etsi.org/images/files/technologies/MEC_Introduction_slides__SDN_World_Congress_15-10-14.pdf
> [2]
> http://www.etsi.org/technologies-clusters/technologies/mobile-edge-computing
> 
> And when we talk about massively distributed cloud, vCPE is only one
> of the scenarios( now in argue - ing ), but we can't forget that
> there are other scenarios like vCDN, vEPC, vIMS, MEC, IoT etc.
> Architecture level discussion is still necessary to see if current
> design and new proposals can fulfill the demands. If there are lots
> of proposals, it's good to compare the pros. and cons, and which
> scenarios the proposal work, which scenario the proposal can't work
> very well.
> 
> ( Hope this reply in the thread :) )
> 
> Best Regards
> Chaoyi Huang(joehuang)
> ________
> From: Jay Pipes [ jaypi...@gmail.com ]
> Sent: 29 August 2016 18:48
> To: openstack-dev@lists.openstack.org
> 
> 
> Subject: Re: [openstack-dev] [all][massively
> distributed][architecture]Coordination between actions/WGs
> 
> On 08/27/2016 11:16 AM, HU, BIN wrote:
> > The challenge in OpenStack is how to enable the innovation built on
> > top of OpenSt

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-30 Thread Peter Willis
Colleagues,

An interesting discussion; the only question appears to be whether vCPE is
a suitable use case, as the others do appear to be cloud use cases. Lots of
people assume CPE == small residential devices; however, CPE covers a broad
spectrum of appliances. Some of our customers' premises are data centres,
some are HQs, some are campuses, some are branches. For residential CPE we
use the Broadband Forum's CPE Wide Area Network management protocol
(TR-069), which may be easier to modify to handle virtual
machines/containers etc. than to get OpenStack to scale to millions of
nodes. However, that still leaves us with the need to manage a stack of
servers in thousands of telephone exchanges, central offices or even
cell-sites, running multiple workloads in a distributed, fault-tolerant
manner.

Best Regards,
Peter.

On Tue, Aug 30, 2016 at 4:48 AM, joehuang <joehu...@huawei.com> wrote:

> Hello, Jay,
>
> > The Telco vCPE and Mobile "Edge cloud" (hint: not a cloud) use cases
>
> Do you mean Mobile Edge Computing for Mobile "Edge cloud"? If so, it's
> cloud. The introduction slides [1]  can help you to learn the use cases
> quickly, there are lots of material in ETSI website[2].
>
> [1] http://www.etsi.org/images/files/technologies/MEC_
> Introduction_slides__SDN_World_Congress_15-10-14.pdf
> [2] http://www.etsi.org/technologies-clusters/technologies/mobile-edge-
> computing
>
> And when we talk about massively distributed cloud, vCPE is only one of
> the scenarios( now in argue - ing ), but we can't forget that there are
> other scenarios like  vCDN, vEPC, vIMS, MEC, IoT etc. Architecture level
> discussion is still necessary to see if current design and new proposals
> can fulfill the demands. If there are lots of proposals, it's good to
> compare the pros. and cons, and which scenarios the proposal work, which
> scenario the proposal can't work very well.
>
> ( Hope this reply in the thread :) )
>
> Best Regards
> Chaoyi Huang(joehuang)
> 
> From: Jay Pipes [jaypi...@gmail.com]
> Sent: 29 August 2016 18:48
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [all][massively 
> distributed][architecture]Coordination
> between actions/WGs
>
> On 08/27/2016 11:16 AM, HU, BIN wrote:
> > The challenge in OpenStack is how to enable the innovation built on top
> of OpenStack.
>
> No, that's not the challenge for OpenStack.
>
> That's like saying the challenge for gasoline is how to enable the
> innovation of a jet engine.
>
> > So telco use cases is not only the innovation built on top of OpenStack.
> Instead, telco use cases, e.g. Gluon (NFV networking), vCPE Cloud, Mobile
> Cloud, Mobile Edge Cloud, brings the needed requirement for innovation in
> OpenStack itself. If OpenStack don't address those basic requirements,
>
> That's the thing, Bin, those are *not* "basic" requirements. The Telco
> vCPE and Mobile "Edge cloud" (hint: not a cloud) use cases are asking
> for fundamental architectural and design changes to the foundational
> components of OpenStack. Instead of Nova being designed to manage a
> bunch of hardware in a relatively close location (i.e. a datacenter or
> multiple datacenters), vCPE is asking for Nova to transform itself into
> a micro-agent that can be run on an Apple Watch and do things in
> resource-constrained environments that it was never built to do.
>
> And, honestly, I have no idea what Gluon is trying to do. Ian sent me
> some information a while ago on it. I read it. I still have no idea what
> Gluon is trying to accomplish other than essentially bypassing Neutron
> entirely. That's not "innovation". That's subterfuge.
>
> > the innovation will never happen on top of OpenStack.
>
> Sure it will. AT and BT and other Telcos just need to write their own
> software that runs their proprietary vCPE software distribution
> mechanism, that's all. The OpenStack community shouldn't be relied upon
> to create software that isn't applicable to general cloud computing and
> cloud management platforms.
>
> > An example is - self-driving car is built on top of many technologies,
> such as sensor/camera, AI, maps, middleware etc. All innovations in each
> technology (sensor/camera, AI, map, etc.) bring together the innovation of
> self-driving car.
>
> Yes, indeed, but the people who created the self-driving car software
> didn't ask the people who created the cameras to write the software for
> them that does the self-driving.
>
> > WE NEED INNOVATION IN OPENSTACK in order to enable the innovation built
> on top of OpenStack.
>
> You are defining "innovation" in an odd way, IMHO. "Innovation" for the
>

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-29 Thread joehuang
Hello, Jay,

> The Telco vCPE and Mobile "Edge cloud" (hint: not a cloud) use cases 

Do you mean Mobile Edge Computing for Mobile "Edge cloud"? If so, it's a cloud. 
The introduction slides [1] can help you learn the use cases quickly; there 
is lots of material on the ETSI website[2].

[1] 
http://www.etsi.org/images/files/technologies/MEC_Introduction_slides__SDN_World_Congress_15-10-14.pdf
[2] http://www.etsi.org/technologies-clusters/technologies/mobile-edge-computing

And when we talk about massively distributed clouds, vCPE is only one of the 
scenarios (now being argued over), but we can't forget that there are other 
scenarios like vCDN, vEPC, vIMS, MEC, IoT etc. Architecture-level discussion 
is still necessary to see if the current design and new proposals can fulfill 
the demands. If there are lots of proposals, it's good to compare the pros and 
cons, and to see in which scenarios a proposal works and in which it can't 
work very well. 

(Hope this reply lands in the thread :) )

Best Regards
Chaoyi Huang(joehuang)

From: Jay Pipes [jaypi...@gmail.com]
Sent: 29 August 2016 18:48
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all][massively 
distributed][architecture]Coordination between actions/WGs

On 08/27/2016 11:16 AM, HU, BIN wrote:
> The challenge in OpenStack is how to enable the innovation built on top of 
> OpenStack.

No, that's not the challenge for OpenStack.

That's like saying the challenge for gasoline is how to enable the
innovation of a jet engine.

> So telco use cases is not only the innovation built on top of OpenStack. 
> Instead, telco use cases, e.g. Gluon (NFV networking), vCPE Cloud, Mobile 
> Cloud, Mobile Edge Cloud, brings the needed requirement for innovation in 
> OpenStack itself. If OpenStack don't address those basic requirements,

That's the thing, Bin, those are *not* "basic" requirements. The Telco
vCPE and Mobile "Edge cloud" (hint: not a cloud) use cases are asking
for fundamental architectural and design changes to the foundational
components of OpenStack. Instead of Nova being designed to manage a
bunch of hardware in a relatively close location (i.e. a datacenter or
multiple datacenters), vCPE is asking for Nova to transform itself into
a micro-agent that can be run on an Apple Watch and do things in
resource-constrained environments that it was never built to do.

And, honestly, I have no idea what Gluon is trying to do. Ian sent me
some information a while ago on it. I read it. I still have no idea what
Gluon is trying to accomplish other than essentially bypassing Neutron
entirely. That's not "innovation". That's subterfuge.

> the innovation will never happen on top of OpenStack.

Sure it will. AT&T and BT and other Telcos just need to write their own
software that runs their proprietary vCPE software distribution
mechanism, that's all. The OpenStack community shouldn't be relied upon
to create software that isn't applicable to general cloud computing and
cloud management platforms.

> An example is - self-driving car is built on top of many technologies, such 
> as sensor/camera, AI, maps, middleware etc. All innovations in each 
> technology (sensor/camera, AI, map, etc.) bring together the innovation of 
> self-driving car.

Yes, indeed, but the people who created the self-driving car software
didn't ask the people who created the cameras to write the software for
them that does the self-driving.

> WE NEED INNOVATION IN OPENSTACK in order to enable the innovation built on 
> top of OpenStack.

You are defining "innovation" in an odd way, IMHO. "Innovation" for the
vCPE use case sounds a whole lot like "rearchitect your entire software
stack so that we don't have to write much code that runs on set-top boxes."

Just being honest,
-jay

> Thanks
> Bin
> -Original Message-
> From: Edward Leafe [mailto:e...@leafe.com]
> Sent: Saturday, August 27, 2016 10:49 AM
> To: OpenStack Development Mailing List (not for usage questions) 
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [all][massively 
> distributed][architecture]Coordination between actions/WGs
>
> On Aug 27, 2016, at 12:18 PM, HU, BIN <bh5...@att.com> wrote:
>
>>> From telco perspective, those are the areas that allow innovation, and 
>>> provide telco customers with new types of services.
>>
>> We need innovation, starting from not limiting ourselves from bringing new 
>> idea and new use cases, and bringing those impossibility to reality.
>
> There is innovation in OpenStack, and there is innovation in things built on 
> top of OpenStack. We are simply trying to keep the two layers from getting 
> confused.
>
>
> -- Ed Leafe
>
>
>
>
>
>
> 

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-29 Thread joehuang
Hello, Jay,

Sorry, I don't know why my mail agent (Microsoft Outlook Web App) did not carry 
the thread Message-ID information in the reply. I'll check and avoid creating 
a new thread when replying in an existing thread.

Best Regards
Chaoyi Huang ( joehuang)


From: Jay Pipes [jaypi...@gmail.com]
Sent: 29 August 2016 18:34
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all][massively 
distributed][architecture]Coordination between actions/WGs

On 08/28/2016 09:02 PM, joehuang wrote:
> Hello, Bin,
>
> Understand your expectation. In the Tricircle big-tent application: 
> https://review.openstack.org/#/c/338796/, a proposal was also given to add a 
> plugin mechanism in the Nova/Cinder API layer, just like Neutron supports a 
> plugin mechanism in its API layer; that boosts innovation by allowing different 
> backend implementations to be supported, from ODL to OVN to Open Contrail.
>
> Mobile edge computing, NFV networking, distributed edge clouds etc. are some 
> new scenarios for OpenStack. I suggest having at least two successive 
> dedicated design summit sessions to discuss this f2f; the topics to be 
> discussed could be:
>
> 1. Use cases
> 2. Requirements in detail
> 3. Gaps in OpenStack
> 4. Proposals to be discussed
>
> Architecture-level proposal discussion
> 1. Proposals
> 2. Pros and cons comparison
> 3. Challenges
> 4. Next steps
>
>
> Looking forward to your thoughts.
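
The Neutron-style pluggability referred to in the quoted proposal boils down to 
entry-point-driven driver loading. A minimal sketch, assuming stevedore and 
entirely hypothetical entry-point names, of how an API-layer backend could be 
selected through configuration alone:

    # Sketch only: load a pluggable backend by name via a stevedore namespace.
    from stevedore import driver

    def load_volume_backend(backend_name='default'):
        mgr = driver.DriverManager(
            namespace='cinder.api.backend_drivers',  # hypothetical namespace
            name=backend_name,
            invoke_on_load=True,
        )
        return mgr.driver  # the API layer delegates calls to this object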

We could also have a design summit session on how to use a mail user
agent that doesn't create a new mailing list thread when you're responding
to an existing thread. We could also include a topic about top-posting.

-jay



Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-29 Thread Joshua Harlow




From a brief look, it seems like vCPE is more along the lines of the
customer having a "thin" device on their premise and their (now
virtual) network functions, e.g. firewall, living in the provider's data
center over a private link created by that thin device. So having a
hypervisor on a customer premise is probably not what most telecoms
would consider vCPE [1].

But in my (limited) example, I'm not talking about managing that thin
device, I am thinking of a hypervisor or two instead in a customer
premise, or remote location, that is controlled by some (magic?)
remote nova, and yeah, would have access to glance, etc, to deploy
instances, basically as a way of avoiding running an OpenStack control
plane there. But not so much in the way of managing upgrades of the
software on that virtual machine on that hypervisor or anything, just
acting as IaaS.


So what/who is the cloud user in this case? It almost seems like there 
isn't much of a user (in the sense of a customer, like say myself) 
involved in this equation. Instead there is really the provider's user 
that is issuing these commands and not much else? Is the user the 
provider themselves (so they can take advantage of the under-utilized 
resources on the customer premise)?




Thanks,
Curtis.

[1]: Pg. 48 - 
http://innovation.verizon.com/content/dam/vic/PDF/Verizon_SDN-NFV_Reference_Architecture.pdf





Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-29 Thread Curtis
On Mon, Aug 29, 2016 at 2:15 PM, Joshua Harlow wrote:
> Curtis wrote:
>>
>> On Mon, Aug 29, 2016 at 1:27 PM, gordon chung  wrote:
>>>
>>> just to clarify, what 'innovation' do you believe is required to enable
>>> you
>>> to build on top of OpenStack. what are the feature gaps you are
>>> proposing?
>>> let's avoid defining "the cloud" since that will give you 1000 different
>>> answers if you ask 1000 different people.*
>>
>>
>> One idea I hear fairly often is having a couple of hypervisors in say
>> a single store or some other customer premise, but not wanting to also
>> run an OpenStack control plane there. If we are talking about a
>> hypervisor level, not some other unknown but smaller IoTs...uh thing,
>> does that make more sense from a OpenStack + vCPE context? Or do some
>> think that is out of scope for OpenStack's mission as well?
>>
>> Thanks,
>> Curtis.
>>
>
> 
>
> So is that like making a customer premise have the equivalent of dumb
> terminals (maybe we can call them 'semi-smart' terminals) where those things
> basically can be remote controlled (aka the VMs on them can be upgraded or
> downgraded or deleted or ...) by the corporate (or other) entity that is
> controlling those terminals?
>
> If that's the case, then I don't exactly call that a cloud (in my classical
> sense), but more of a software delivery (and remote-control) strategy (and
> using nova to do this for u?).
>
> But then I don't know all the 3 or 4 letter acronyms so who knows, I might
> be incorrect with the above assumption :-P

From a brief look, it seems like vCPE is more along the lines of the
customer having a "thin" device on their premise and their (now
virtual) network functions, e.g. firewall, living in the provider's data
center over a private link created by that thin device. So having a
hypervisor on a customer premise is probably not what most telecoms
would consider vCPE [1].

But in my (limited) example, I'm not talking about managing that thin
device, I am thinking of a hypervisor or two instead in a customer
premise, or remote location, that is controlled by some (magic?)
remote nova, and yeah, would have access to glance, etc, to deploy
instances, basically as a way of avoiding running an OpenStack control
plane there. But not so much in the way of managing upgrades of the
software on that virtual machine on that hypervisor or anything, just
acting as IaaS.

Thanks,
Curtis.

[1]: Pg. 48 - 
http://innovation.verizon.com/content/dam/vic/PDF/Verizon_SDN-NFV_Reference_Architecture.pdf


>
> -Josh
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Blog: serverascode.com



Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-29 Thread Joshua Harlow

Curtis wrote:

On Mon, Aug 29, 2016 at 1:27 PM, gordon chung wrote:

just to clarify, what 'innovation' do you believe is required to enable you
to build on top of OpenStack. what are the feature gaps you are proposing?
let's avoid defining "the cloud" since that will give you 1000 different
answers if you ask 1000 different people.*


One idea I hear fairly often is having a couple of hypervisors in say
a single store or some other customer premise, but not wanting to also
run an OpenStack control plane there. If we are talking about a
hypervisor level, not some other unknown but smaller IoTs...uh thing,
does that make more sense from an OpenStack + vCPE context? Or do some
think that is out of scope for OpenStack's mission as well?

Thanks,
Curtis.





So is that like making a customer premise have the equivalent of dumb 
terminals (maybe we can call them 'semi-smart' terminals) where those 
things basically can be remote controlled (aka the VMs on them can be 
upgraded or downgraded or deleted or ...) by the corporate (or other) 
entity that is controlling those terminals?


If that's the case, then I don't exactly call that a cloud (in my 
classical sense), but more of a software delivery (and remote-control) 
strategy (and using nova to do this for you?).


But then I don't know all the 3 or 4 letter acronyms so who knows, I 
might be incorrect with the above assumption :-P


-Josh





Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-29 Thread Curtis
On Mon, Aug 29, 2016 at 1:27 PM, gordon chung <g...@live.ca> wrote:
> just to clarify, what 'innovation' do you believe is required to enable you
> to build on top of OpenStack. what are the feature gaps you are proposing?
> let's avoid defining "the cloud" since that will give you 1000 different
> answers if you ask 1000 different people.*

One idea I hear fairly often is having a couple of hypervisors in say
a single store or some other customer premise, but not wanting to also
run an OpenStack control plane there. If we are talking about a
hypervisor level, not some other unknown but smaller IoTs...uh thing,
does that make more sense from a OpenStack + vCPE context? Or do some
think that is out of scope for OpenStack's mission as well?

Thanks,
Curtis.

>
> * actually you'll get 100 answers and the rest will say: "i don't know."
>
>
> On 29/08/16 12:23 PM, HU, BIN wrote:
>
> Please see inline [BH526R].
>
> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Monday, August 29, 2016 3:48 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [all][massively
> distributed][architecture]Coordination between actions/WGs
>
> On 08/27/2016 11:16 AM, HU, BIN wrote:
>
> The challenge in OpenStack is how to enable the innovation built on top of
> OpenStack.
>
> No, that's not the challenge for OpenStack.
>
> That's like saying the challenge for gasoline is how to enable the
> innovation of a jet engine.
>
> [BH526R] True. 87 gas or diesel certainly cannot be used in any jet engine.
> While Jet A-1 and Jet B fuel are widely used for jet engine today,
> innovation of a new generation of jet engine may require an innovation of
> new type of aviation fuel.
>
> So telco use cases is not only the innovation built on top of
> OpenStack. Instead, telco use cases, e.g. Gluon (NFV networking), vCPE
> Cloud, Mobile Cloud, Mobile Edge Cloud, brings the needed requirement
> for innovation in OpenStack itself. If OpenStack don't address those
> basic requirements,
>
> That's the thing, Bin, those are *not* "basic" requirements. The Telco vCPE
> and Mobile "Edge cloud" (hint: not a cloud) use cases are asking for
> fundamental architectural and design changes to the foundational components
> of OpenStack. Instead of Nova being designed to manage a bunch of hardware
> in a relatively close location (i.e. a datacenter or multiple datacenters),
> vCPE is asking for Nova to transform itself into a micro-agent that can be
> run on an Apple Watch and do things in resource-constrained environments
> that it was never built to do.
>
> [BH526R] So we have 2 choices here - either to explicitly exclude telco
> requirement from OpenStack, and clearly indicate that telco needs to work on
> its own "telco stack"; or to allow telco to innovate within OpenStack
> through perhaps a new type of "telco nova" and/or "telco Neutron". Which way
> do you suggest?
>
> And, honestly, I have no idea what Gluon is trying to do. Ian sent me some
> information a while ago on it. I read it. I still have no idea what Gluon is
> trying to accomplish other than essentially bypassing Neutron entirely.
> That's not "innovation". That's subterfuge.
>
> [BH526R] Thank you for recognizing you don't know Gluon. Certainly the
> perception of "bypassing Neutron entirely" is incorrect. You are very
> welcome to join our project and meeting so that you can understand more of
> what Gluon is. We are also happy to set up specific meetings with you to
> discuss it too. Just let me know which way you prefer. We are looking for you to
> participate in the Gluon project and meetings.
>
> [BH526R] On the other hand, I also try to understand why "bypassing Neutron
> entirely" is not an innovation. Neutron is not perfect. (I don't mean Gluon
> here, but) if there is an innovation that can replace Neutron entirely,
> everyone should be happy. Just like automobile bypassed carriage wagon
> entirely.
>
> the innovation will never happen on top of OpenStack.
>
> Sure it will. AT&T and BT and other Telcos just need to write their own
> software that runs their proprietary vCPE software distribution mechanism,
> that's all. The OpenStack community shouldn't be relied upon to create
> software that isn't applicable to general cloud computing and cloud
> management platforms.
>
> [BH526R] If I understand correctly, this suggestion excludes telco from
> OpenStack entirely. That's fine.
>
> An example is - self-driving car is built on top of many technologies, such
> as sensor/camera, AI, maps, middleware etc. All innovations in each
> technology (sensor/camera, AI, map, etc.) bring together the innovation of self-driving car.

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-29 Thread gordon chung
just to clarify, what 'innovation' do you believe is required to enable you to 
build on top of OpenStack. what are the feature gaps you are proposing? let's 
avoid defining "the cloud" since that will give you 1000 different answers if 
you ask 1000 different people.*

* actually you'll get 100 answers and the rest will say: "i don't know."

On 29/08/16 12:23 PM, HU, BIN wrote:


Please see inline [BH526R].

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Monday, August 29, 2016 3:48 AM
To: openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [all][massively 
distributed][architecture]Coordination between actions/WGs

On 08/27/2016 11:16 AM, HU, BIN wrote:


The challenge in OpenStack is how to enable the innovation built on top of 
OpenStack.



No, that's not the challenge for OpenStack.

That's like saying the challenge for gasoline is how to enable the innovation 
of a jet engine.

[BH526R] True. 87 gas or diesel certainly cannot be used in any jet engine. 
While Jet A-1 and Jet B fuel are widely used for jet engine today, innovation 
of a new generation of jet engine may require an innovation of new type of 
aviation fuel.



So telco use cases is not only the innovation built on top of
OpenStack. Instead, telco use cases, e.g. Gluon (NFV networking), vCPE
Cloud, Mobile Cloud, Mobile Edge Cloud, brings the needed requirement
for innovation in OpenStack itself. If OpenStack don't address those
basic requirements,



That's the thing, Bin, those are *not* "basic" requirements. The Telco vCPE and 
Mobile "Edge cloud" (hint: not a cloud) use cases are asking for fundamental 
architectural and design changes to the foundational components of OpenStack. 
Instead of Nova being designed to manage a bunch of hardware in a relatively 
close location (i.e. a datacenter or multiple datacenters), vCPE is asking for 
Nova to transform itself into a micro-agent that can be run on an Apple Watch 
and do things in resource-constrained environments that it was never built to 
do.

[BH526R] So we have 2 choices here - either to explicitly exclude telco 
requirement from OpenStack, and clearly indicate that telco needs to work on 
its own "telco stack"; or to allow telco to innovate within OpenStack through 
perhaps a new type of "telco nova" and/or "telco Neutron". Which way do you 
suggest?

And, honestly, I have no idea what Gluon is trying to do. Ian sent me some 
information a while ago on it. I read it. I still have no idea what Gluon is 
trying to accomplish other than essentially bypassing Neutron entirely. That's 
not "innovation". That's subterfuge.

[BH526R] Thank you for recognizing you don't know Gluon. Certainly the 
perception of "bypassing Neutron entirely" is incorrect. You are very welcome 
to join our project and meeting so that you can understand more of what Gluon 
is. We are also happy to set up specific meetings with you to discuss it too. 
Just let me know which way you prefer. We are looking for you to participate in 
the Gluon project and meetings.

[BH526R] On the other hand, I also try to understand why "bypassing Neutron 
entirely" is not an innovation. Neutron is not perfect. (I don't mean Gluon 
here, but) if there is an innovation that can replace Neutron entirely, 
everyone should be happy. Just like automobile bypassed carriage wagon entirely.



the innovation will never happen on top of OpenStack.



Sure it will. AT&T and BT and other Telcos just need to write their own 
software that runs their proprietary vCPE software distribution mechanism, 
that's all. The OpenStack community shouldn't be relied upon to create software 
that isn't applicable to general cloud computing and cloud management platforms.

[BH526R] If I understand correctly, this suggestion excludes telco from 
OpenStack entirely. That's fine.



An example is - self-driving car is built on top of many technologies, such as 
sensor/camera, AI, maps, middleware etc. All innovations in each technology 
(sensor/camera, AI, map, etc.) bring together the innovation of self-driving 
car.



Yes, indeed, but the people who created the self-driving car software didn't 
ask the people who created the cameras to write the software for them that does 
the self-driving.

[BH526R] It's actually the other way around. Furthermore, camera/sensor 
industry does see the need, and VC's funding has been dramatically increased to 
invest in camera/sensor, map, AI areas. And the startups in those areas are the 
fastest growing areas. Those investments and innovations accelerate the 
maturity of self-driving cars.



WE NEED INNOVATION IN OPENSTACK in order to enable the innovation built on top 
of OpenStack.



You are defining "innovation" in an odd way, IMHO. "Innovation" for the vCPE 
use case sounds a whole lot like &quo

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-29 Thread HU, BIN

Please see inline [BH526R].

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Monday, August 29, 2016 3:48 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all][massively 
distributed][architecture]Coordination between actions/WGs

On 08/27/2016 11:16 AM, HU, BIN wrote:
> The challenge in OpenStack is how to enable the innovation built on top of 
> OpenStack.

No, that's not the challenge for OpenStack.

That's like saying the challenge for gasoline is how to enable the innovation 
of a jet engine.

[BH526R] True. 87 gas or diesel certainly cannot be used in any jet engine. 
While Jet A-1 and Jet B fuel are widely used for jet engine today, innovation 
of a new generation of jet engine may require an innovation of new type of 
aviation fuel.

> So telco use cases is not only the innovation built on top of 
> OpenStack. Instead, telco use cases, e.g. Gluon (NFV networking), vCPE 
> Cloud, Mobile Cloud, Mobile Edge Cloud, brings the needed requirement 
> for innovation in OpenStack itself. If OpenStack don't address those 
> basic requirements,

That's the thing, Bin, those are *not* "basic" requirements. The Telco vCPE and 
Mobile "Edge cloud" (hint: not a cloud) use cases are asking for fundamental 
architectural and design changes to the foundational components of OpenStack. 
Instead of Nova being designed to manage a bunch of hardware in a relatively 
close location (i.e. a datacenter or multiple datacenters), vCPE is asking for 
Nova to transform itself into a micro-agent that can be run on an Apple Watch 
and do things in resource-constrained environments that it was never built to 
do.

[BH526R] So we have 2 choices here - either to explicitly exclude telco 
requirement from OpenStack, and clearly indicate that telco needs to work on 
its own "telco stack"; or to allow telco to innovate within OpenStack through 
perhaps a new type of "telco nova" and/or "telco Neutron". Which way do you 
suggest?

And, honestly, I have no idea what Gluon is trying to do. Ian sent me some 
information a while ago on it. I read it. I still have no idea what Gluon is 
trying to accomplish other than essentially bypassing Neutron entirely. That's 
not "innovation". That's subterfuge.

[BH526R] Thank you for recognizing you don't know Gluon. Certainly the 
perception of "bypassing Neutron entirely" is incorrect. You are very welcome 
to join our project and meeting so that you can understand more of what Gluon 
is. We are also happy to set up specific meetings with you to discuss it too. 
Just let me know which way you prefer. We are looking for you to participate in 
the Gluon project and meetings.

[BH526R] On the other hand, I also try to understand why "bypassing Neutron 
entirely" is not an innovation. Neutron is not perfect. (I don't mean Gluon 
here, but) if there is an innovation that can replace Neutron entirely, 
everyone should be happy. Just like automobile bypassed carriage wagon entirely.

> the innovation will never happen on top of OpenStack.

Sure it will. AT&T and BT and other Telcos just need to write their own 
software that runs their proprietary vCPE software distribution mechanism, 
that's all. The OpenStack community shouldn't be relied upon to create software 
that isn't applicable to general cloud computing and cloud management platforms.

[BH526R] If I understand correctly, this suggestion excludes telco from 
OpenStack entirely. That's fine.

> An example is - self-driving car is built on top of many technologies, such 
> as sensor/camera, AI, maps, middleware etc. All innovations in each 
> technology (sensor/camera, AI, map, etc.) bring together the innovation of 
> self-driving car.

Yes, indeed, but the people who created the self-driving car software didn't 
ask the people who created the cameras to write the software for them that does 
the self-driving.

[BH526R] It's actually the other way around. Furthermore, camera/sensor 
industry does see the need, and VC's funding has been dramatically increased to 
invest in camera/sensor, map, AI areas. And the startups in those areas are the 
fastest growing areas. Those investments and innovations accelerate the 
maturity of self-driving cars.

> WE NEED INNOVATION IN OPENSTACK in order to enable the innovation built on 
> top of OpenStack.

You are defining "innovation" in an odd way, IMHO. "Innovation" for the vCPE 
use case sounds a whole lot like "rearchitect your entire software stack so 
that we don't have to write much code that runs on set-top boxes."

[BH526R] Certainly that is a misunderstanding. "Rearchitect" may be needed. 
However, if the "telco Nova" and "telco Neutron" concepts and components can be 
allowed for us telcos to innovate within OpenStack, we will write the code and 
do the rest of the work. (But prior

Re: [openstack-dev] [all][massively distributed][architecture] Coordination between actions/WGs

2016-08-29 Thread Zane Bitter

On 24/08/16 20:37, Jay Pipes wrote:

On 08/24/2016 04:26 AM, Peter Willis wrote:

Colleagues,

I'd like to confirm that scalability and multi-site operations are key
to BT's NFV use cases e.g. vCPE, vCDN, vEPC, vIMS, MEC, IoT, where we
will have compute highly distributed around the network (from thousands
to millions of sites). BT would therefore support a Massively
Distributed WG and/or work on scalability and multi-site operations in
the Architecture WG.


Love all the TLAs.


I think you mean ETLAs ;)

It seems to be an unfortunate occupational hazard of working in 
networking that over time one loses the ability to communicate with 
people using, you know, words. (I used to work in networking, but the 
good news is I'm still optimistic the damage is reversible ;)



I've asked this before to numerous Telco product managers and engineers,
but I've yet to get a solid answer from any of them, so I'll repeat the
question here...

How is vCPE a *cloud* use case?

From what I understand, the v[E]CPE use case is essentially that Telcos
want to have the set-top boxen/routers that are running cable television
apps (i.e. AT&T U-verse or Verizon FiOS-like things for US-based
customers) and home networking systems (broadband connectivity to a
local central office or point of presence, etc) be able run on virtual
machines to make deployment and management of new applications easier.
Since all those home routers and set-top boxen are essentially just
Linux boxes, the infrastructure seems to be there to make this a
cost-savings reality for Telcos. [1]


So I just heard of this today and looked it up. And unsurprisingly the 
explanations were mostly unclear and sometimes conflicting. (If you want 
to marvel at a rare instance of perfection in the genre of complete 
gibberish, check out http://www.telco.com/index.php?page=vcpe) However, 
I didn't come away with the same understanding as you.


My understanding is that they're taking stuff which used to run on edge 
devices (i.e. your home router or set-top box) and instead running them 
in the cloud:


http://www.nec.com/en/global/solutions/tcs/vcpe/index.html
http://searchsdn.techtarget.com/definition/vCPE-virtual-customer-premise-equipment

Basically as last-mile networks get faster, the bottleneck is no longer 
edge network bandwidth but the flexibility of the edge devices. So the 
idea, as I understand it, is to not run _more_ on them but to run _less_ 
and make use of the network bandwidth available to move a bunch of 
services into the cloud where they can be more flexible.


(Honestly, this sounds like the most cloud-y use case since those 
thermostats where you can't turn on the air conditioning without asking 
Google what they think about it first.)


Where I'm guessing this differs from other cloud use cases is that you 
want the newly-virtualised services running as close as possible to the 
edge. The user is essentially making the provider part of their layer 2 
network, so there's a number of drawbacks to having all of the 
virtualised services running in a single centralised cloud:


- It'd add a ton of latency at a point where applications aren't expecting it.
- It'd start pushing some of your local traffic over the core network, 
where bandwidth is still very much scarce.
- It's really hard to keep a large number of layer 2 networks segregated 
from each other all the way through the core network (Ethernet gives you 
only 4094 to play with).


So I'd imagine that what they want to do is run a small cluster of Nova 
compute servers in e.g. your local telephone exchange, plus keep very 
tight control over how the workloads running on them are connected to 
actual physical networks. Then think about how many telephone exchanges 
there are in, say, Britain and it's obvious why they are interested in 
ensuring OpenStack can cope with massively distributed architectures.
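
(As a rough illustration of that last point: one way to pin workloads to a 
particular exchange with nothing more than today's Nova constructs is a host 
aggregate per exchange, exposed as an availability zone. The sketch below uses 
python-novaclient; the endpoint, credentials, host names and IDs are all 
invented, and it is only meant to show the shape of the idea, not a 
recommended deployment.)

    # Hypothetical sketch: group the few compute nodes sitting in one telephone
    # exchange into their own availability zone, then boot a virtualised CPE
    # function pinned to that exchange. Names, credentials and IDs are made up.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client

    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_name='Default',
                       project_domain_name='Default')
    nova = client.Client('2.1', session=session.Session(auth=auth))

    # One aggregate (exposed as an availability zone) per exchange.
    agg = nova.aggregates.create('exchange-cambridge-01', 'az-exchange-cambridge-01')
    nova.aggregates.add_host(agg, 'compute-cam-01')
    nova.aggregates.add_host(agg, 'compute-cam-02')

    # Boot a (hypothetical) vCPE workload close to the subscriber it serves.
    flavor = nova.flavors.find(name='m1.small')
    nova.servers.create(name='vcpe-fw-customer-42',
                        image='8c1b7a2e-5f66-4f4f-9d2f-0a3d2f6b7c90',  # invented ID
                        flavor=flavor,
                        availability_zone='az-exchange-cambridge-01')

Some higher-level orchestration would then decide which exchange's zone a 
given request should land in.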


Hopefully somebody who had heard of this stuff before today will jump in 
and correct all of the incorrect assumptions I have made. Remember: use 
your words! :P


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-29 Thread Jay Pipes

On 08/27/2016 11:16 AM, HU, BIN wrote:

The challenge in OpenStack is how to enable the innovation built on top of 
OpenStack.


No, that's not the challenge for OpenStack.

That's like saying the challenge for gasoline is how to enable the 
innovation of a jet engine.



So telco use cases is not only the innovation built on top of OpenStack. 
Instead, telco use cases, e.g. Gluon (NFV networking), vCPE Cloud, Mobile 
Cloud, Mobile Edge Cloud, brings the needed requirement for innovation in 
OpenStack itself. If OpenStack don't address those basic requirements,


That's the thing, Bin, those are *not* "basic" requirements. The Telco 
vCPE and Mobile "Edge cloud" (hint: not a cloud) use cases are asking 
for fundamental architectural and design changes to the foundational 
components of OpenStack. Instead of Nova being designed to manage a 
bunch of hardware in a relatively close location (i.e. a datacenter or 
multiple datacenters), vCPE is asking for Nova to transform itself into 
a micro-agent that can be run on an Apple Watch and do things in 
resource-constrained environments that it was never built to do.


And, honestly, I have no idea what Gluon is trying to do. Ian sent me 
some information a while ago on it. I read it. I still have no idea what 
Gluon is trying to accomplish other than essentially bypassing Neutron 
entirely. That's not "innovation". That's subterfuge.



the innovation will never happen on top of OpenStack.


Sure it will. AT&T and BT and other Telcos just need to write their own 
software that runs their proprietary vCPE software distribution 
mechanism, that's all. The OpenStack community shouldn't be relied upon 
to create software that isn't applicable to general cloud computing and 
cloud management platforms.



An example is - self-driving car is built on top of many technologies, such as 
sensor/camera, AI, maps, middleware etc. All innovations in each technology 
(sensor/camera, AI, map, etc.) bring together the innovation of self-driving 
car.


Yes, indeed, but the people who created the self-driving car software 
didn't ask the people who created the cameras to write the software for 
them that does the self-driving.



WE NEED INNOVATION IN OPENSTACK in order to enable the innovation built on top 
of OpenStack.


You are defining "innovation" in an odd way, IMHO. "Innovation" for the 
vCPE use case sounds a whole lot like "rearchitect your entire software 
stack so that we don't have to write much code that runs on set-top boxes."


Just being honest,
-jay


Thanks
Bin
-Original Message-
From: Edward Leafe [mailto:e...@leafe.com]
Sent: Saturday, August 27, 2016 10:49 AM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [all][massively 
distributed][architecture]Coordination between actions/WGs

On Aug 27, 2016, at 12:18 PM, HU, BIN <bh5...@att.com> wrote:


From telco perspective, those are the areas that allow innovation, and provide 
telco customers with new types of services.


We need innovation, starting from not limiting ourselves from bringing new idea 
and new use cases, and bringing those impossibility to reality.


There is innovation in OpenStack, and there is innovation in things built on 
top of OpenStack. We are simply trying to keep the two layers from getting 
confused.


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-29 Thread Jay Pipes

On 08/28/2016 09:02 PM, joehuang wrote:

Hello, Bin,

Understand your expectation. In the Tricircle big-tent application: 
https://review.openstack.org/#/c/338796/, a proposal was also given to add a 
plugin mechanism in the Nova/Cinder API layer, just as Neutron supports a plugin 
mechanism in its API layer, which boosts innovation by allowing different 
backend implementations to be supported, from ODL to OVN to Open Contrail.

Mobile edge computing, NFV networking, distributed edge cloud, etc. are some new 
scenarios for OpenStack. I suggest having at least two successive dedicated 
design summit sessions to discuss this f2f; the topics to be discussed 
could be:

1, Use cases
2, Requirements  in detail
3, Gaps in OpenStack
4, Proposal to be discussed

Architecture-level proposal discussion
1, Proposals
2, Pros and cons comparison
3, Challenges
4, next step


Looking forward to your thoughts.


We could also have a design summit session on how to use a mail user 
agent that doesn't create new mailing list thread when you're responding 
to an existing thread. We could also include a topic about top-posting.


-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-28 Thread joehuang
Hello, Bin,

Understand your expectation. In the Tricircle big-tent application: 
https://review.openstack.org/#/c/338796/, a proposal was also given to add a 
plugin mechanism in the Nova/Cinder API layer, just as Neutron supports a plugin 
mechanism in its API layer, which boosts innovation by allowing different 
backend implementations to be supported, from ODL to OVN to Open Contrail.

Mobile edge computing, NFV networking, distributed edge cloud, etc. are some new 
scenarios for OpenStack. I suggest having at least two successive dedicated 
design summit sessions to discuss this f2f; the topics to be discussed 
could be:

1, Use cases
2, Requirements  in detail
3, Gaps in OpenStack
4, Proposal to be discussed

Architecture-level proposal discussion
1, Proposals
2, Pros and cons comparison
3, Challenges
4, next step


Looking forward to your thoughts.


Best Regards
Chaoyi Huang(joehuang)


From: HU, BIN [bh5...@att.com]
Sent: 28 August 2016 2:16
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev][all][massively 
distributed][architecture]Coordination  between actions/WGs

The challenge in OpenStack is how to enable the innovation built on top of 
OpenStack.

So telco use cases is not only the innovation built on top of OpenStack. 
Instead, telco use cases, e.g. Gluon (NFV networking), vCPE Cloud, Mobile 
Cloud, Mobile Edge Cloud, brings the needed requirement for innovation in 
OpenStack itself. If OpenStack don't address those basic requirements, the 
innovation will never happen on top of OpenStack.

An example is - self-driving car is built on top of many technologies, such as 
sensor/camera, AI, maps, middleware etc. All innovations in each technology 
(sensor/camera, AI, map, etc.) bring together the innovation of self-driving 
car.

WE NEED INNOVATION IN OPENSTACK in order to enable the innovation built on top 
of OpenStack.

Thanks
Bin
-Original Message-
From: Edward Leafe [mailto:e...@leafe.com]
Sent: Saturday, August 27, 2016 10:49 AM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [all][massively 
distributed][architecture]Coordination between actions/WGs

On Aug 27, 2016, at 12:18 PM, HU, BIN <bh5...@att.com> wrote:

>> From telco perspective, those are the areas that allow innovation, and 
>> provide telco customers with new types of services.
>
> We need innovation, starting from not limiting ourselves from bringing new 
> idea and new use cases, and bringing those impossibility to reality.

There is innovation in OpenStack, and there is innovation in things built on 
top of OpenStack. We are simply trying to keep the two layers from getting 
confused.


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-27 Thread HU, BIN
The challenge in OpenStack is how to enable the innovation built on top of 
OpenStack.

So telco use cases is not only the innovation built on top of OpenStack. 
Instead, telco use cases, e.g. Gluon (NFV networking), vCPE Cloud, Mobile 
Cloud, Mobile Edge Cloud, brings the needed requirement for innovation in 
OpenStack itself. If OpenStack don't address those basic requirements, the 
innovation will never happen on top of OpenStack.

An example is - self-driving car is built on top of many technologies, such as 
sensor/camera, AI, maps, middleware etc. All innovations in each technology 
(sensor/camera, AI, map, etc.) bring together the innovation of self-driving 
car.

WE NEED INNOVATION IN OPENSTACK in order to enable the innovation built on top 
of OpenStack.

Thanks
Bin
-Original Message-
From: Edward Leafe [mailto:e...@leafe.com] 
Sent: Saturday, August 27, 2016 10:49 AM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [all][massively 
distributed][architecture]Coordination between actions/WGs

On Aug 27, 2016, at 12:18 PM, HU, BIN <bh5...@att.com> wrote:

>> From telco perspective, those are the areas that allow innovation, and 
>> provide telco customers with new types of services.
> 
> We need innovation, starting from not limiting ourselves from bringing new 
> idea and new use cases, and bringing those impossibility to reality.

There is innovation in OpenStack, and there is innovation in things built on 
top of OpenStack. We are simply trying to keep the two layers from getting 
confused.


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-27 Thread Edward Leafe
On Aug 27, 2016, at 12:18 PM, HU, BIN  wrote:

>> From telco perspective, those are the areas that allow innovation, and 
>> provide telco customers with new types of services.
> 
> We need innovation, starting from not limiting ourselves from bringing new 
> idea and new use cases, and bringing those impossibility to reality.

There is innovation in OpenStack, and there is innovation in things built on 
top of OpenStack. We are simply trying to keep the two layers from getting 
confused.


-- Ed Leafe







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-27 Thread HU, BIN
IMHO, I wouldn't limit ourselves.

If we expand our sight to view vCPE in its entirety, not any standalone VNF, it 
could be a cloud of vCPEs. It could be an enterprise cloud on top of enterprise 
vCPEs, or a community cloud across several organizations including vCPEs within 
residential communities.

There is another concept of "mobile cloud" where a cloud infrastructure is 
formed on top of mobile devices. Sounds crazy? Well, no one believed 
self-driving car could become reality so soon.

From telco perspective, those are the areas that allow innovation, and provide 
telco customers with new types of services.

We need innovation, starting from not limiting ourselves from bringing new idea 
and new use cases, and bringing those impossibility to reality.

Thanks
Bin
-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Saturday, August 27, 2016 2:47 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all][massively 
distributed][architecture]Coordination between actions/WGs

On 08/25/2016 06:38 PM, joehuang wrote:
> Hello, Ed,
>
> Just as Peter mentioned,  "BT's NFV use cases e.g. vCPE, vCDN, vEPC, vIMS, 
> MEC, IoT, where we will have compute highly distributed around the network 
> (from thousands to millions of sites) ".  vCPE is only one use case, but not 
> all. And the hardware facility to run "vCDN, vEPC, vIMS, MEC" is not in 
> set-box or single hardware, even in current non-cloud way, it includes lots 
> of blades, rack servers, chasises, or racks.

Note that I have only questioned the use case of vCPE (and IoT) as "cloud use 
cases". content deliver networks, evolved packet core, and IP multimedia 
subsystem services are definitely cloud use cases, IMHO, since they belong as 
VNFs managed in a shared datacenter infrastructure.

> A whitepaper was just created "Accelerating NFV Delivery with 
> OpenStack" https://www.openstack.org/telecoms-and-nfv/

Nothing in the whitepaper above has anything to do with vCPE.

> So it's part of a cloud architecture,

No, it's not. vCPE is definitely not a "cloud architecture".

 > the challenge is how OpenStack to run "regardless of size" and in "massively 
 > distributed" manner.

No, that is not OpenStack's challenge.

It is the Telco industry's challenge to create purpose-built Telco software 
delivery mechanisms, just like it's the enterprise database and middleware 
industry's challenge to create RDBMS systems to meet the modern 
micro-service-the-world landscape in which we live.

Asking the OpenStack community to solve a very specific Telco application 
delivery need is like asking the OpenStack community to write a relational 
database system that works best on 10 million IoT devices. It's just not in our 
list of problem domains to tackle.

Best,
-jay

> Best Regards
> Chaoyi Huang (joehuang)
> 
> From: Ed Leafe [e...@leafe.com]
> Sent: 25 August 2016 22:03
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all][massively 
> distributed][architecture] Coordination between actions/WGs
>
> On Aug 24, 2016, at 8:42 PM, joehuang <joehu...@huawei.com> wrote:
>>
>> Funny point of view. Let's look at the mission of OpenStack:
>>
>> "to produce the ubiquitous Open Source Cloud Computing platform that 
>> enables building interoperable public and private clouds regardless 
>> of size, by being simple to implement and massively scalable while serving 
>> the cloud users'
>> needs."
>>
>> It mentioned that "regardless of size", and you also mentioned "cloud to me:
>> lots of hardware consolidation".
>
> If it isn't part of a cloud architecture, then it isn't part of OpenStack's 
> mission. The 'size' qualifier relates to everything from massive clouds like 
> CERN and Walmart down to small private clouds. It doesn't mean 'any sort of 
> computing platform'; the focus is clear that we are an "Open Source Cloud 
> Computing platform".
>
>
> -- Ed Leafe
>
>
>
>
>
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [all][massively distributed][architecture] Coordination between actions/WGs

2016-08-27 Thread Davanum Srinivas
LOL Thierry!

On Sat, Aug 27, 2016 at 8:44 AM, Thierry Carrez  wrote:
> Jay Pipes wrote:
>> [...]
>> However, I have not heard vCPE described in that way. v[E]CPE is all
>> about enabling a different kind of application delivery for Telco
>> products/services. Instead of sending the customer new hardware -- or
>> installing a giant monolith application with feature toggles all over
>> the place -- the Telco delivers to the customer a set-top box that has
>> the ability to pull virtual machine images with an application that the
>> customer desires.
>
> I'll defer to your acute knowledge of all those cryptic acronyms. On the
> flip side, that means next time I wonder what (for example) vIMS could
> mean, I'll ask you. (would be great if it was an attempt at virtualizing
> and cloning dims)
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture] Coordination between actions/WGs

2016-08-27 Thread Thierry Carrez
Jay Pipes wrote:
> [...]
> However, I have not heard vCPE described in that way. v[E]CPE is all
> about enabling a different kind of application delivery for Telco
> products/services. Instead of sending the customer new hardware -- or
> installing a giant monolith application with feature toggles all over
> the place -- the Telco delivers to the customer a set-top box that has
> the ability to pull virtual machine images with an application that the
> customer desires.

I'll defer to your acute knowledge of all those cryptic acronyms. On the
flip side, that means next time I wonder what (for example) vIMS could
mean, I'll ask you. (would be great if it was an attempt at virtualizing
and cloning dims)

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-27 Thread Jay Pipes

On 08/25/2016 06:38 PM, joehuang wrote:

Hello, Ed,

Just as Peter mentioned,  "BT's NFV use cases e.g. vCPE, vCDN, vEPC, vIMS, MEC, IoT, where we 
will have compute highly distributed around the network (from thousands to millions of sites) 
".  vCPE is only one use case, but not all. And the hardware facility to run "vCDN, vEPC, 
vIMS, MEC" is not in set-box or single hardware, even in current non-cloud way, it includes 
lots of blades, rack servers, chasises, or racks.


Note that I have only questioned the use case of vCPE (and IoT) as 
"cloud use cases". content deliver networks, evolved packet core, and IP 
multimedia subsystem services are definitely cloud use cases, IMHO, 
since they belong as VNFs managed in a shared datacenter infrastructure.



A whitepaper was just created "Accelerating NFV Delivery with OpenStack" 
https://www.openstack.org/telecoms-and-nfv/


Nothing in the whitepaper above has anything to do with vCPE.


So it's part of a cloud architecture,


No, it's not. vCPE is definitely not a "cloud architecture".

> the challenge is how OpenStack to run "regardless of size" and in 
"massively distributed" manner.


No, that is not OpenStack's challenge.

It is the Telco industry's challenge to create purpose-built Telco 
software delivery mechanisms, just like it's the enterprise database and 
middleware industry's challenge to create RDBMS systems to meet the 
modern micro-service-the-world landscape in which we live.


Asking the OpenStack community to solve a very specific Telco 
application delivery need is like asking the OpenStack community to 
write a relational database system that works best on 10 million IoT 
devices. It's just not in our list of problem domains to tackle.


Best,
-jay


Best Regards
Chaoyi Huang (joehuang)

From: Ed Leafe [e...@leafe.com]
Sent: 25 August 2016 22:03
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][massively distributed][architecture] 
Coordination between actions/WGs

On Aug 24, 2016, at 8:42 PM, joehuang <joehu...@huawei.com> wrote:


Funny point of view. Let's look at the mission of OpenStack:

"to produce the ubiquitous Open Source Cloud Computing platform that enables
building interoperable public and private clouds regardless of size, by being
simple to implement and massively scalable while serving the cloud users'
needs."

It mentioned that "regardless of size", and you also mentioned "cloud to me:
lots of hardware consolidation".


If it isn’t part of a cloud architecture, then it isn’t part of OpenStack’s mission. 
The ‘size’ qualifier relates to everything from massive clouds like CERN and Walmart 
down to small private clouds. It doesn’t mean ‘any sort of computing platform’; the 
focus is clear that we are an "Open Source Cloud Computing platform”.


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture] Coordination between actions/WGs

2016-08-27 Thread Jay Pipes

On 08/25/2016 11:08 AM, Thierry Carrez wrote:

Jay Pipes wrote:

[...]
How is vCPE a *cloud* use case?

From what I understand, the v[E]CPE use case is essentially that Telcos
want to have the set-top boxen/routers that are running cable television
apps (i.e. AT&T U-verse or Verizon FiOS-like things for US-based
customers) and home networking systems (broadband connectivity to a
local central office or point of presence, etc) be able run on virtual
machines to make deployment and management of new applications easier.
Since all those home routers and set-top boxen are essentially just
Linux boxes, the infrastructure seems to be there to make this a
cost-savings reality for Telcos. [1]

The problem is that that isn't remotely a cloud use case. Or at least,
it doesn't describe what I think of as cloud.
[...]


My read on that is that they want to build a cloud using the computing
power in those set-top boxes and be able to distribute workloads to them
(in an API/cloudy manner). So yes, essentially nova-compute nodes on
those set-top boxes. It feels like that use case fits your description
of "cloud", only their datacenter ends up being distributed in their
customers homes (and conveniently using your own electricity/cooling) ?


That would indeed be interesting, even if far-fetched. [1]

However, I have not heard vCPE described in that way. v[E]CPE is all 
about enabling a different kind of application delivery for Telco 
products/services. Instead of sending the customer new hardware -- or 
installing a giant monolith application with feature toggles all over 
the place -- the Telco delivers to the customer a set-top box that has 
the ability to pull virtual machine images with an application that the 
customer desires.


What vCPE is about is co-opting the term "cloud" to mean changing the 
delivery mechanism for Telco software. [2]


Like you said on April 1st, Thierry, "on the Internet of Things, nobody 
knows you're a fridge".


The problem with vCPE is that it's essentially playing an April Fool's 
joke on the cloud management software industry. "In vCPE, nobody knows 
you're not actually a cloud, but instead you're a $5 whitelabel router 
sitting underneath a pile of sweaters in a closet."


Best,
-jay

[1] I look forward to the OpenStack Cloud powered by 10 million Apple 
Watches. Actually no, I don't. That sounds like a nightmare to me.


[2] To be perfectly clear, I have nothing against Telcos wanting to 
change their method of software delivery. Go for it! Embrace modern 
delivery mechanisms. But, that ain't cloud and it ain't OpenStack, IMHO.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-25 Thread joehuang
Hello, Ed,

Just as Peter mentioned, "BT's NFV use cases e.g. vCPE, vCDN, vEPC, vIMS, MEC, 
IoT, where we will have compute highly distributed around the network (from 
thousands to millions of sites)". vCPE is only one use case, but not all. And 
the hardware facility to run "vCDN, vEPC, vIMS, MEC" is not a set-top box or a 
single piece of hardware; even in the current non-cloud way, it includes lots 
of blades, rack servers, chassis, or racks.

A whitepaper was just created "Accelerating NFV Delivery with OpenStack" 
https://www.openstack.org/telecoms-and-nfv/  

So it's part of a cloud architecture; the challenge is how OpenStack can run 
"regardless of size" and in a "massively distributed" manner.

Best Regards
Chaoyi Huang (joehuang)

From: Ed Leafe [e...@leafe.com]
Sent: 25 August 2016 22:03
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][massively distributed][architecture] 
Coordination between actions/WGs

On Aug 24, 2016, at 8:42 PM, joehuang <joehu...@huawei.com> wrote:
>
> Funny point of view. Let's look at the mission of OpenStack:
>
> "to produce the ubiquitous Open Source Cloud Computing platform that enables
> building interoperable public and private clouds regardless of size, by being
> simple to implement and massively scalable while serving the cloud users'
> needs."
>
> It mentioned that "regardless of size", and you also mentioned "cloud to me:
> lots of hardware consolidation".

If it isn’t part of a cloud architecture, then it isn’t part of OpenStack’s 
mission. The ‘size’ qualifier relates to everything from massive clouds like 
CERN and Walmart down to small private clouds. It doesn’t mean ‘any sort of 
computing platform’; the focus is clear that we are an "Open Source Cloud 
Computing platform”.


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture] Coordination between actions/WGs

2016-08-25 Thread Thierry Carrez
lebre.adr...@free.fr wrote:
> [...]
> The goal of this email is to : 
> 
> (i) understand whether the fog/edge computing use case is in the scope of 
> the Architecture WG. 
> 
> (ii) if not, whether it makes sense to create a working group that focus 
> on scalability and multi-site challenges (Folks from Orange Labs and British 
> Telecom for instance already told us that they are interesting by such a 
> use-case).
> 
> (iii) what is the best way to coordinate our efforts with the actions 
> performed in other WGs such as the Performance and Architecture ones (e.g., 
> actions performed/decisions taken in the Architecture WG can have impacts on 
> the massively distributed WG and thus  drive the way we should perform 
> actions to progress to the Fog/Edge Computing target)

I think the two groups are complementary. The massively-distributed WG
needs to gather the parties interested in working in that, identify the
challenges and paint a picture of what the way forward could look like.

If only incremental changes or optional features are needed to achieve
the goal, I'd say the Arch WG doesn't really need to get involved. You
just need to push those features in the various impacted projects, with
some inter-project work coordination. But if the only way to achieve
those goals is to change to general architecture of OpenStack (for
example by needing Tricircle on a top cell in every OpenStack cloud),
then validation of the plan and assessment of how that could be rolled
out OpenStack-wide would involve the Arch WG (and ultimately probably
the TC).

The former approach is a lot easier than the latter :)

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture] Coordination between actions/WGs

2016-08-25 Thread Thierry Carrez
Jay Pipes wrote:
> [...]
> How is vCPE a *cloud* use case?
> 
> From what I understand, the v[E]CPE use case is essentially that Telcos
> want to have the set-top boxen/routers that are running cable television
> apps (i.e. AT&T U-verse or Verizon FiOS-like things for US-based
> customers) and home networking systems (broadband connectivity to a
> local central office or point of presence, etc) be able run on virtual
> machines to make deployment and management of new applications easier.
> Since all those home routers and set-top boxen are essentially just
> Linux boxes, the infrastructure seems to be there to make this a
> cost-savings reality for Telcos. [1]
> 
> The problem is that that isn't remotely a cloud use case. Or at least,
> it doesn't describe what I think of as cloud.
> [...]

My read on that is that they want to build a cloud using the computing
power in those set-top boxes and be able to distribute workloads to them
(in an API/cloudy manner). So yes, essentially nova-compute nodes on
those set-top boxes. It feels like that use case fits your description
of "cloud", only their datacenter ends up being distributed in their
customers homes (and conveniently using your own electricity/cooling) ?

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture] Coordination between actions/WGs

2016-08-25 Thread Ed Leafe
On Aug 24, 2016, at 8:42 PM, joehuang  wrote:
> 
> Funny point of view. Let's look at the mission of OpenStack:
> 
> "to produce the ubiquitous Open Source Cloud Computing platform that enables
> building interoperable public and private clouds regardless of size, by being
> simple to implement and massively scalable while serving the cloud users'
> needs."
> 
> It mentioned that "regardless of size", and you also mentioned "cloud to me:
> lots of hardware consolidation".

If it isn’t part of a cloud architecture, then it isn’t part of OpenStack’s 
mission. The ‘size’ qualifier relates to everything from massive clouds like 
CERN and Walmart down to small private clouds. It doesn’t mean ‘any sort of 
computing platform’; the focus is clear that we are an "Open Source Cloud 
Computing platform”.


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture] Coordination between actions/WGs

2016-08-24 Thread joehuang
> But v[E]CPE just isn't cloud to me. And trying to morph Nova or other 
> OpenStack services that were designed to run as services in a datacenter 
> just isn't something I feel the OpenStack community needs to be focusing 
> on. Run Nova (or Kubernetes, or Mesos, or any other cloud management 
> platform) in the datacenter. Run a custom software application on the 
> set-top boxes that can communicate with datacenter cloud services. Let's 
> please not commandeer one of them to fit a model it was never built for.

Funny point of view. Let's look at the mission of OpenStack:

"to produce the ubiquitous Open Source Cloud Computing platform that enables
building interoperable public and private clouds regardless of size, by being
simple to implement and massively scalable while serving the cloud users'
needs."

It mentioned that "regardless of size", and you also mentioned "cloud to me:
lots of hardware consolidation".

May we call an edge site which includes 10 pieces of hardware a cloud? If 10 is 
OK, how about 5 or 3?

From the OpenStack mission, it should be able to run "regardless of size", but 
from your definition, what hardware count is the threshold for OpenStack, 
especially Nova, to run so that it could be called a cloud?

I proposed to introduce a plugin mechanism in the Nova/Cinder API layer, just 
like what has been done in Neutron, so that Nova/Cinder can also run in 
small-size scenarios.

The plugin mechanism can also be used for the Tricircle project. During the 
Tricircle big-tent project application [1], TC members worried that the Nova 
API gateway/Cinder API gateway would reimplement some Nova/Cinder APIs. If 
Nova/Cinder can provide a plugin mechanism in their API layers like Neutron 
did, then innovation can be introduced to reach the OpenStack mission 
"regardless of size".

Just as Jay implied (that's my imagining, forgive me if it's wrong :) ), one 
implementation to fit all sizes is impossible.

[1] Tricircle big-tent project application 
https://review.openstack.org/#/c/338796/

Best Regards
Chaoyi Huang (joehuang)


From: Jay Pipes [jaypi...@gmail.com]
Sent: 25 August 2016 8:37
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all][massively distributed][architecture] 
Coordination between actions/WGs

On 08/24/2016 04:26 AM, Peter Willis wrote:
> Colleagues,
>
> I'd like to confirm that scalability and multi-site operations are key
> to BT's NFV use cases e.g. vCPE, vCDN, vEPC, vIMS, MEC, IoT, where we
> will have compute highly distributed around the network (from thousands
> to millions of sites). BT would therefore support a Massively
> Distributed WG and/or work on scalability and multi-site operations in
> the Architecture WG.

Love all the TLAs.

I've asked this before to numerous Telco product managers and engineers,
but I've yet to get a solid answer from any of them, so I'll repeat the
question here...

How is vCPE a *cloud* use case?

 From what I understand, the v[E]CPE use case is essentially that Telcos
want to have the set-top boxen/routers that are running cable television
apps (i.e. AT&T U-verse or Verizon FiOS-like things for US-based
customers) and home networking systems (broadband connectivity to a
local central office or point of presence, etc) be able run on virtual
machines to make deployment and management of new applications easier.
Since all those home routers and set-top boxen are essentially just
Linux boxes, the infrastructure seems to be there to make this a
cost-savings reality for Telcos. [1]

The problem is that that isn't remotely a cloud use case. Or at least,
it doesn't describe what I think of as cloud.

Cloud to me means:

* Lots of hardware consolidated in datacenters for efficiency and
security/management
* Software-as-a-Service interface, meaning the service is driven by
users/tenants (i.e. no IT helpdesk to call to provision something) and
provided over the Internet to a dumb device or browser
* (HTTP) API-driven access to launch compute, storage and network
resources from large pools of those resources

Furthermore, applications written for the cloud (i.e. cloud-native apps)
are built from the ground up to assume and tolerate failure, to be as
close to shared-nothing as possible, to not need to be aware of where
they are running or what particular server they are running on and to
rely on well-defined APIs between (micro-)services.

v[E]CPE describes a purpose-built Telco application that doesn't meet
any of the above definitions of what "cloud" is all about.

vCPE also doesn't look like a cloud-native application either: A single
customer's vCPE software application is not capable of running on more
than one machine at a time (since obviously it's running on the
customer's set-top box or router).

Look, I'm all about designing Nova and other cloud services to function
well in distributed environments where multiple datacenters are running a 
single OpenStack deployment spread over many regions.

Re: [openstack-dev] [all][massively distributed][architecture] Coordination between actions/WGs

2016-08-24 Thread Jay Pipes

On 08/24/2016 04:26 AM, Peter Willis wrote:

Colleagues,

I'd like to confirm that scalability and multi-site operations are key
to BT's NFV use cases e.g. vCPE, vCDN, vEPC, vIMS, MEC, IoT, where we
will have compute highly distributed around the network (from thousands
to millions of sites). BT would therefore support a Massively
Distributed WG and/or work on scalability and multi-site operations in
the Architecture WG.


Love all the TLAs.

I've asked this before to numerous Telco product managers and engineers, 
but I've yet to get a solid answer from any of them, so I'll repeat the 
question here...


How is vCPE a *cloud* use case?

From what I understand, the v[E]CPE use case is essentially that Telcos 
want to have the set-top boxen/routers that are running cable television 
apps (i.e. AT&T U-verse or Verizon FiOS-like things for US-based 
customers) and home networking systems (broadband connectivity to a 
local central office or point of presence, etc) be able to run on virtual 
machines to make deployment and management of new applications easier. 
Since all those home routers and set-top boxen are essentially just 
Linux boxes, the infrastructure seems to be there to make this a 
cost-savings reality for Telcos. [1]


The problem is that that isn't remotely a cloud use case. Or at least, 
it doesn't describe what I think of as cloud.


Cloud to me means:

* Lots of hardware consolidated in datacenters for efficiency and 
security/management
* Software-as-a-Service interface, meaning the service is driven by 
users/tenants (i.e. no IT helpdesk to call to provision something) and 
provided over the Internet to a dumb device or browser
* (HTTP) API-driven access to launch compute, storage and network 
resources from large pools of those resources
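
To make that last bullet concrete, here is a minimal sketch (placeholder
endpoint, token, and image/flavor identifiers -- nothing from a real
deployment) of what API-driven access to compute resources looks like
against the Compute HTTP API:

    import requests

    # Placeholders: a real tenant gets these from Keystone and the image/flavor catalogs.
    NOVA_ENDPOINT = "https://cloud.example.com:8774/v2.1"
    TOKEN = "placeholder-keystone-token"

    # Ask the Compute API to boot a server out of the shared resource pool.
    resp = requests.post(
        NOVA_ENDPOINT + "/servers",
        headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
        json={
            "server": {
                "name": "demo-instance",
                "imageRef": "70a599e0-31e7-49b7-b260-868f441e862b",  # placeholder image UUID
                "flavorRef": "1",                                    # placeholder flavor id
            }
        },
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["server"]["id"])

The point being that the whole lifecycle is driven by the tenant, over HTTP,
against a pool of resources the tenant never has to see or touch.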


Furthermore, applications written for the cloud (i.e. cloud-native apps) 
are built from the ground up to assume and tolerate failure, to be as 
close to shared-nothing as possible, to not need to be aware of where 
they are running or what particular server they are running on and to 
rely on well-defined APIs between (micro-)services.
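
And as a trivial illustration of "assume and tolerate failure": a cloud-native
component calling a peer service wraps the call in timeouts and retries rather
than assuming the peer is always up. A generic sketch (plain Python, with a
hypothetical peer URL):

    import time
    import requests

    def call_peer(url, attempts=5, timeout=2.0):
        """Call another (micro-)service, assuming it may be slow, restarting, or gone."""
        delay = 0.5
        for attempt in range(1, attempts + 1):
            try:
                resp = requests.get(url, timeout=timeout)
                resp.raise_for_status()
                return resp.json()
            except requests.RequestException:
                if attempt == attempts:
                    raise  # out of attempts; let the caller degrade gracefully
                time.sleep(delay)
                delay *= 2  # exponential backoff before the next retry

    # Hypothetical peer endpoint; any stateless replica of the service will do.
    # items = call_peer("http://catalog.internal.example:8080/v1/items")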


v[E]CPE describes a purpose-built Telco application that doesn't meet 
any of the above definitions of what "cloud" is all about.


vCPE also doesn't look like a cloud-native application either: A single 
customer's vCPE software application is not capable of running on more 
than one machine at a time (since obviously it's running on the 
customer's set-top box or router).


Look, I'm all about designing Nova and other cloud services to function 
well in distributed environments where multiple datacenters are running 
a single OpenStack deployment spread over many regions.


But v[E]CPE just isn't cloud to me. And trying to morph Nova or other 
OpenStack services, which were designed to run as services in a datacenter, 
into something else just isn't something I feel the OpenStack community 
needs to be focusing on. Run Nova (or Kubernetes, or Mesos, or any other 
cloud management platform) in the datacenter. Run a custom software 
application on the set-top boxes that can communicate with datacenter cloud 
services. Let's please not commandeer one of them to fit a model it was 
never built for.


My two cents,
-jay

[1] I say cost-savings because instead of shipping the customer a new 
piece of hardware when they purchase a new service (or sending out a 
lineman), the telco now simply sends a command to the customer's 
router or set-top box to launch a VM that provides that service. One 
could say that the Telcos could just as easily ship a monolithic 
software application that would run on the set-top box and allow 
features to be toggled on and off, and I'm pretty sure many telco 
software applications are already like this, but deploying and managing 
monolithic applications is more complicated and expensive than shipping 
a VM image that contains a custom-built service that the customer can 
use at will.
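
Purely for illustration, that "command to the customer's box" could be as
simple as an on-box agent booting a service VM through libvirt. The domain
XML, image path, and service name below are invented; I'm not claiming any
telco actually does it this way:

    import libvirt

    # Invented example: domain XML for the service VM the customer just purchased.
    SERVICE_DOMAIN_XML = """
    <domain type='kvm'>
      <name>parental-controls-svc</name>
      <memory unit='MiB'>256</memory>
      <vcpu>1</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>
          <source file='/var/lib/images/parental-controls.qcow2'/>
          <target dev='vda' bus='virtio'/>
        </disk>
      </devices>
    </domain>
    """

    def enable_service():
        # Connect to the hypervisor running locally on the set-top box/router.
        conn = libvirt.open("qemu:///system")
        try:
            # Boot a transient VM that provides the newly enabled feature.
            dom = conn.createXML(SERVICE_DOMAIN_XML, 0)
            return dom.name()
        finally:
            conn.close()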


p.s. I also don't think it's a good idea to run nova-compute on a fitbit 
watch.




[openstack-dev] [all][massively distributed][architecture] Coordination between actions/WGs

2016-08-24 Thread Peter Willis
Colleagues,

I'd like to confirm that scalability and multi-site operations are key to
BT's NFV use cases e.g. vCPE, vCDN, vEPC, vIMS, MEC, IoT, where we will
have compute highly distributed around the network (from thousands to
millions of sites). BT would therefore support a Massively Distributed WG
and/or work on scalability and multi-site operations in the Architecture WG.

Best Regards,
Peter Willis.
BT Research


Re: [openstack-dev] [all][massively distributed][architecture] Coordination between actions/WGs

2016-08-23 Thread joehuang
Hello, Adrien,

How about a different focus for each working group? For example, the "massively 
distributed" working group could focus on identifying the use cases, challenges, 
and issues in current OpenStack for supporting such fog/edge computing scenarios, 
including the use cases/scenarios from ETSI mobile edge computing 
(http://www.etsi.org/technologies-clusters/technologies/mobile-edge-computing, 
https://portal.etsi.org/portals/0/tbpages/mec/docs/mobile-edge_computing_-_introductory_technical_white_paper_v1%2018-09-14.pdf).
The "architecture" working group, in turn, could focus on discussing technology 
solutions/proposals to address those issues/challenges.

We have discussed/exchanged ideas a lot before, during, and after the Austin 
summit. As Tricircle has worked in the multi-site area for several cycles, a lot 
of use cases/challenges/issues have also been identified; the Tricircle proposal 
could be one basis for discussion in the "architecture" working group, and other 
proposals are also welcome.

Best Regards
Chaoyi Huang (joehuang)


From: lebre.adr...@free.fr [lebre.adr...@free.fr]
Sent: 23 August 2016 18:17
To: OpenStack Development Mailing List; openstack-operators
Cc: discovery-...@inria.fr
Subject: [openstack-dev] [all][massively distributed][architecture] 
Coordination between actions/WGs

Hi Folks,

During the last summit, we suggested creating a new working group that deals 
with the massively distributed use case:
how can OpenStack be "slightly" revised to operate Fog/Edge Computing 
infrastructures, i.e. infrastructures composed of several sites?
The first meeting we held in Austin showed us that additional material was 
needed to better understand the scope as well as the actions we can perform 
in this working group.

After exchanging with various people and institutions, we have identified 
several actions that we would like to carry out and that make the creation of 
such a working group relevant from our point of view.

Among the list of possible actions, we would like to identify major scalability 
issues and clarify intra-site vs inter-site exchanges between the different 
services of OpenStack in a multi-site context (i.e. with the vanilla OpenStack 
code).
Such information will enable us to better understand how and where each service 
should be deployed and whether it should be revised.

We have started an action with the Performance WG, with the ultimate goal of 
analysing how OpenStack behaves from a performance perspective, as well as the 
interactions between the various services in such a context.

Meanwhile, over the summer we saw Clynt's proposal for the 
Architecture WG.

Although we are very excited about this WG (we are convinced it will be 
valuable for the whole community), we are wondering
whether the actions we envision for the Massively Distributed WG would 
overlap with the ones (scalability, multi-site operations, ...) that could be 
performed in the Architecture WG.

The goal of this email is to:

(i) understand whether the fog/edge computing use case is in the scope of 
the Architecture WG.

(ii) if not, whether it makes sense to create a working group that focuses on 
scalability and multi-site challenges (folks from Orange Labs and British 
Telecom, for instance, have already told us that they are interested in such a 
use case).

(iii) what is the best way to coordinate our efforts with the actions 
performed in other WGs such as the Performance and Architecture ones (e.g., 
actions performed/decisions taken in the Architecture WG can have an impact on 
the Massively Distributed WG and thus drive the way we should perform actions 
to progress toward the Fog/Edge Computing target).


Depending on the feedback, we will create dedicated wiki pages for the 
Massively Distributed WG.
Remarks/comments welcome.

Ad_rien_
Further information regarding the Fog/Edge Computing use-case we target is 
available at http://beyondtheclouds.github.io



[openstack-dev] [all][massively distributed][architecture] Coordination between actions/WGs

2016-08-23 Thread lebre . adrien
Hi Folks, 

During the last summit, we suggested creating a new working group that deals 
with the massively distributed use case:
how can OpenStack be "slightly" revised to operate Fog/Edge Computing 
infrastructures, i.e. infrastructures composed of several sites? 
The first meeting we held in Austin showed us that additional material was 
needed to better understand the scope as well as the actions we can perform 
in this working group. 

After exchanging with various people and institutions, we have identified 
several actions that we would like to carry out and that make the creation of 
such a working group relevant from our point of view. 

Among the list of possible actions, we would like to identify major scalability 
issues and clarify intra-site vs inter-site exchanges between the different 
services of OpenStack in a multi-site context (i.e. with the vanilla OpenStack 
code). 
Such information will enable us to better understand how and where each service 
should be deployed and whether it should be revised.  

We have started an action with the Performance WG, with the ultimate goal of 
analysing how OpenStack behaves from a performance perspective, as well as the 
interactions between the various services in such a context. 

Meanwhile, over the summer we saw Clynt's proposal for the 
Architecture WG.

Although we are very excited about this WG (we are convinced it will be 
valuable for the whole community), we are wondering
whether the actions we envision for the Massively Distributed WG would 
overlap with the ones (scalability, multi-site operations, ...) that could be 
performed in the Architecture WG.

The goal of this email is to:

(i) understand whether the fog/edge computing use case is in the scope of 
the Architecture WG. 

(ii) if not, whether it makes sense to create a working group that focuses on 
scalability and multi-site challenges (folks from Orange Labs and British 
Telecom, for instance, have already told us that they are interested in such a 
use case).

(iii) what is the best way to coordinate our efforts with the actions 
performed in other WGs such as the Performance and Architecture ones (e.g., 
actions performed/decisions taken in the Architecture WG can have an impact on 
the Massively Distributed WG and thus drive the way we should perform actions 
to progress toward the Fog/Edge Computing target).


Depending on the feedback, we will create dedicated wiki pages for the 
Massively Distributed WG. 
Remarks/comments welcome. 

Ad_rien_
Further information regarding the Fog/Edge Computing use-case we target is 
available at http://beyondtheclouds.github.io
