Re: Proposing Lean OpenWhisk

2018-07-19 Thread david . breitgand
Hi Dominic, 

Lean OpenWhisk is not supposed to run on IoT devices, such as sensors and 
actuators, directly. It's supposed to run on a Gateway node that controls the 
sensors and actuators connected to it. Think AWS GreenGrass or Azure Functions 
on IoT Edge; this is the same use case. Data from a sensor, say a temperature 
reading, will be sent to the Gateway via MQTT, HTTP, or another protocol, and a 
provider at the Gateway (say, an MQTT feed, which is outside the OW core and 
this proposal) will fire a trigger previously created via a feed action for 
this type of feed, which in turn causes an action to run.
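
Just to make the data path concrete, here is a minimal, purely illustrative 
sketch of what such a gateway-side MQTT feed provider might do. It is outside 
the proposal, and the Paho MQTT client, broker URL, topic, trigger name and 
credentials are all placeholders, not part of our code:

// Illustrative only: subscribe to a sensor topic on the gateway and fire a
// previously created OpenWhisk trigger through the standard REST API.
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}
import java.util.Base64
import org.eclipse.paho.client.mqttv3.{IMqttMessageListener, MqttClient, MqttMessage}

object GatewayMqttFeed {
  private val apiHost = "https://gateway.local"   // lean OW controller on the gateway
  private val auth = Base64.getEncoder.encodeToString("user:key".getBytes("UTF-8"))
  private val http = HttpClient.newHttpClient()

  // POST to /namespaces/_/triggers/<name> fires the trigger; the rule bound to
  // it then runs the action on the gateway.
  private def fireTrigger(trigger: String, payload: String): Unit = {
    val req = HttpRequest.newBuilder()
      .uri(URI.create(s"$apiHost/api/v1/namespaces/_/triggers/$trigger"))
      .header("Authorization", s"Basic $auth")
      .header("Content-Type", "application/json")
      .POST(HttpRequest.BodyPublishers.ofString(payload))
      .build()
    http.send(req, HttpResponse.BodyHandlers.ofString())
  }

  def main(args: Array[String]): Unit = {
    val mqtt = new MqttClient("tcp://localhost:1883", "ow-gateway-feed")
    mqtt.connect()
    // Each temperature reading published by a sensor fires the trigger.
    val onReading: IMqttMessageListener = (_: String, msg: MqttMessage) =>
      fireTrigger("temperatureTrigger",
        s"""{"reading": "${new String(msg.getPayload, "UTF-8")}"}""")
    mqtt.subscribe("sensors/temperature", onReading)
  }
}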

This proposal is strictly about making OW a better fit for small Gateway form 
factors. 

It's true that there are some other tools we need to provide to make OW@Edge a 
feasible option for developers, but they are outside of the core and this 
specific proposal and merit a separate discussion.

Cheers.

-- david

On 2018/07/16 11:40:35, Dominic Kim  wrote: 
> Dear David.
> 
> This is an awesome idea!!
> 
> Is this to control IoT devices programmatically?
> If yes, there would be many different types of IoT devices, especially in
> terms of their capabilities, such as lighting sensors, thermometers, robot
> cleaners, and so on.
> 
> Then do you have anything in mind to take care of heterogeneous sets of
> edge nodes?
> There is a possibility that some actions should only run on thermometers,
> while others should run on lighting sensors.
> 
> If you are trying to install a "one-for-all" OpenWhisk cluster rather than
> having separate OpenWhisk clusters for each device type, how will you
> manage heterogeneous container pools and properly assign relevant actions
> to them?
> 
> 
> Best regards,
> Dominic
> 
> 
> 2018-07-16 20:24 GMT+09:00 Markus Thoemmes :
> 
> > Hi David,
> >
> > please send your PR for sure! IIRC we made the Loadbalancer pluggable
> > specifically for this use-case. Sounds like a great addition to our
> > possible deployment topologies.
> >
> > Shameless plug: Would you review the architecture I proposed here:
> > https://lists.apache.org/thread.html/29289006d190b2c68451f7625c13bb8020cc8e9928db66f1b0def18e@%3Cdev.openwhisk.apache.org%3E
> >
> > In theory, this could make your proposal even leaner in the future. Don't
> > take this to mean we should hold this back, though; we can absolutely go forward
> > with your implementation first. Just wanted to quickly verify this use-case
> > will also work with what we might plan for in the future.
> >
> > Cheers,
> > Markus
> >
> >
> 


Re: Proposing Lean OpenWhisk

2018-07-18 Thread TzuChiao Yeh
Hi David,

Definitely makes sense :) We may have an alternative option (i.e., a "native"
function mode with certain limitations) once more of the details are worked
out. I agree that moving to the edge will be a long-term process, given the
uncertainty.

Anyway, AFAIK plenty of industry and academic groups rely on OpenWhisk to
explore edge computing use cases. Looking forward to seeing this get merged!

On Wed, Jul 18, 2018 at 7:43 PM David Breitgand  wrote:

> Hi Tzu,
>
> You are right about GreenGrass. AFAIK, they are not using Docker in their
> solution. BTW, this brings about some limitations: e.g., they run Python
> lambdas in GreenGrass, while OpenWhisk at the edge will be able to run any
> container, just like it happens in the cloud, which makes it a polyglot
> capability.
>
> Azure Functions on IoT Edge uses containers. So, the approaches differ :)
> In general, I agree: containers are there for isolation. If edge is viewed
> as a cloud extension, then a typical use case might be migrating user's
> containers from the cloud to edge to save bandwidth, for example. This
> includes migrating a serverless workload to the edge more or less as is.
> So, at the moment we just want to lay a first brick to enable this.
>
> Concerning the cold start, I agree that this is a problem and it's more
> pronounced at the edge than in the cloud. But if we set this problem aside
> for a moment, we still get two of the three benefits that you emphasize:
> autonomy and lower bandwidth, just by allowing OW to run at the edge.
>
> I agree that considering alternatives to containers when putting
> serverless at the edge makes a lot of sense in the long run (or maybe even
> medium term) and will be happy to discuss this.
>
> Cheers.
>
>
> -- david
>
>
>
>
> From:   TzuChiao Yeh 
> To: dev@openwhisk.apache.org
> Date:   17/07/2018 05:49 PM
> Subject:    Re: Proposing Lean OpenWhisk
>
>
>
> Hi David,
>
> Looks cool! Glad to see OpenWhisk taking a step toward the edge use case.
> 
> Simple question: have you considered removing the Docker containers
> altogether (i.e., giving up that isolation)?
> 
> Since it is closed source, I'm not sure how AWS Greengrass does it, but it
> seems no Docker is installed at all.
> 
> Edge computing offers several advantages:
> 1. bandwidth reduction.
> 2. lower latency.
> 3. offline computing capability (not for all scenarios, but this is indeed
> what AWS Greengrass claims).
> 
> We can set aside for now the use cases that require ultra-low latency (e.g.,
> interactive AR/VR, speech translation). But even for general use cases, the
> cold-start problem in serverless undermines the low-latency benefit: the RTT
> from device to cloud is only about 100-200 ms, while container
> creation/deletion takes much longer. Besides, (some) edge devices are not
> offered as an IaaS service, so we may not even need multi-tenancy, or could
> accept weaker isolation. What do you think?
>
> Thanks,
> Tzu-Chiao Yeh (@tz70s)
>
>
> On Tue, Jul 17, 2018 at 9:43 PM David Breitgand 
> wrote:
>
> > Sure. Will do directly on Wiki.
> > Cheers.
> >
> > -- david
> >
> >
> >
> >
> > From:   "Markus Thoemmes" 
> > To: dev@openwhisk.apache.org
> > Date:   17/07/2018 04:31 PM
> > Subject:    Re: Proposing Lean OpenWhisk
> >
> >
> >
> > Hi David,
> >
> > I absolutely agree, this should not be held back. It'd be great if you
> > could chime in on the discussion I opened on the new proposal regarding
> > your use-case though. It might be nice to verify a similar topology as you
> > are proposing is still implementable or maybe even easier to implement
> > when moving to a new architecture, just so we have all requirements to it
> > on the table.
> >
> > I agree it's entirely orthogonal though and your proposal can be
> > implemented/merged independent of that.
> >
> > Cheers,
> > Markus
> >
> >
> >
> >
> >
> >
>
>
>
>
>

-- 
Tzu-Chiao Yeh (@tz70s)


Re: Proposing Lean OpenWhisk

2018-07-18 Thread David Breitgand
Hi Tzu, 

You are right about GreenGrass. AFAIK, they are not using Docker in their 
solution. BTW, this brings about some limitations: e.g., they run Python 
lambdas in GreenGrass, while OpenWhisk at the edge will be able to run any 
container, just like it happens in the cloud, which makes it a polyglot 
capability. 

Azure Functions on IoT Edge uses containers. So, the approaches differ :) 
In general, I agree: containers are there for isolation. If edge is viewed 
as a cloud extension, then a typical use case might be migrating user's 
containers from the cloud to edge to save bandwidth, for example. This 
includes migrating a serverless workload to the edge more or less as is. 
So, at the moment we just want to lay a first brick to enable this. 

Concerning the cold start, I agree that this is a problem and it's more 
pronounced at the edge than in the cloud. But if we set this problem aside 
for a moment, we still get two of the three benefits that you emphasize: 
autonomy and lower bandwidth, just by allowing OW to run at the edge.

I agree that considering alternatives to containers when putting 
serverless at the edge makes a lot of sense in the long run (or maybe even 
medium term) and will be happy to discuss this.

Cheers.
 

-- david 




From:   TzuChiao Yeh 
To: dev@openwhisk.apache.org
Date:   17/07/2018 05:49 PM
Subject:    Re: Proposing Lean OpenWhisk



Hi David,

Looks cool! Glad to see OpenWhisk taking a step toward the edge use case.

Simple question: have you considered removing the Docker containers
altogether (i.e., giving up that isolation)?

Since it is closed source, I'm not sure how AWS Greengrass does it, but it
seems no Docker is installed at all.

Edge computing offers several advantages:
1. bandwidth reduction.
2. lower latency.
3. offline computing capability (not for all scenarios, but this is indeed
what AWS Greengrass claims).

We can set aside for now the use cases that require ultra-low latency (e.g.,
interactive AR/VR, speech translation). But even for general use cases, the
cold-start problem in serverless undermines the low-latency benefit: the RTT
from device to cloud is only about 100-200 ms, while container
creation/deletion takes much longer. Besides, (some) edge devices are not
offered as an IaaS service, so we may not even need multi-tenancy, or could
accept weaker isolation. What do you think?

Thanks,
Tzu-Chiao Yeh (@tz70s)


On Tue, Jul 17, 2018 at 9:43 PM David Breitgand  
wrote:

> Sure. Will do directly on Wiki.
> Cheers.
>
> -- david
>
>
>
>
> From:   "Markus Thoemmes" 
> To: dev@openwhisk.apache.org
> Date:   17/07/2018 04:31 PM
> Subject:    Re: Proposing Lean OpenWhisk
>
>
>
> Hi David,
>
> I absolutely agree, this should not be held back. It'd be great if you
> could chime in on the discussion I opened on the new proposal regarding
> your use-case though. It might be nice to verify a similar topology as you
> are proposing is still implementable or maybe even easier to implement
> when moving to a new architecture, just so we have all requirements to it
> on the table.
>
> I agree it's entirely orthogonal though and your proposal can be
> implemented/merged independent of that.
>
> Cheers,
> Markus
>
>
>
>
>
>






Re: Proposing Lean OpenWhisk

2018-07-17 Thread TzuChiao Yeh
Hi David,

Looks cool! Glad to see OpenWhisk taking a step toward the edge use case.

Simple question: have you considered removing the Docker containers
altogether (i.e., giving up that isolation)?

Since it is closed source, I'm not sure how AWS Greengrass does it, but it
seems no Docker is installed at all.

Edge computing offers several advantages:
1. bandwidth reduction.
2. lower latency.
3. offline computing capability (not for all scenarios, but this is indeed
what AWS Greengrass claims).

We can set aside for now the use cases that require ultra-low latency (e.g.,
interactive AR/VR, speech translation). But even for general use cases, the
cold-start problem in serverless undermines the low-latency benefit: the RTT
from device to cloud is only about 100-200 ms, while container
creation/deletion takes much longer. Besides, (some) edge devices are not
offered as an IaaS service, so we may not even need multi-tenancy, or could
accept weaker isolation. What do you think?

Thanks,
Tzu-Chiao Yeh (@tz70s)


On Tue, Jul 17, 2018 at 9:43 PM David Breitgand  wrote:

> Sure. Will do directly on Wiki.
> Cheers.
>
> -- david
>
>
>
>
> From:   "Markus Thoemmes" 
> To: dev@openwhisk.apache.org
> Date:   17/07/2018 04:31 PM
> Subject:    Re: Proposing Lean OpenWhisk
>
>
>
> Hi David,
>
> I absolutely agree, this should not be held back. It'd be great if you
> could chime in on the discussion I opened on the new proposal regarding
> your use-case though. It might be nice to verify a similar topology as you
> are proposing is still implementable or maybe even easier to implement
> when moving to a new architecture, just so we have all requirements to it
> on the table.
>
> I agree it's entirely orthogonal though and your proposal can be
> implemented/merged independent of that.
>
> Cheers,
> Markus
>
>
>
>
>
>


Re: Proposing Lean OpenWhisk

2018-07-17 Thread David Breitgand
Sure. Will do directly on Wiki.
Cheers.

-- david 




From:   "Markus Thoemmes" 
To: dev@openwhisk.apache.org
Date:   17/07/2018 04:31 PM
Subject:    Re: Proposing Lean OpenWhisk



Hi David,

I absolutely agree, this should not be held back. It'd be great if you 
could chime in on the discussion I opened on the new proposal regarding 
your use-case though. It might be nice to verify a similar topology as you 
are proposing is still implementable or maybe even easier to implement 
when moving to a new architecture, just so we have all requirements to it 
on the table.

I agree it's entirely orthogonal though and your proposal can be 
implemented/merged independent of that.

Cheers,
Markus







Re: Proposing Lean OpenWhisk

2018-07-17 Thread Markus Thoemmes
Hi David,

I absolutely agree, this should not be held back. It'd be great if you could 
chime in on the discussion I opened on the new proposal regarding your use-case 
though. It might be nice to verify a similar topology as you are proposing is 
still implementable or maybe even easier to implement when moving to a new 
architecture, just so we have all requirements to it on the table.

I agree it's entirely orthogonal though and your proposal can be 
implemented/merged independent of that.

Cheers,
Markus



Re: Proposing Lean OpenWhisk

2018-07-17 Thread David Breitgand
Hi Markus, 

Thanks for the prompt response and the pointer to your proposal. Indeed, 
there is a synergy between this Lean OW proposal and the more far-reaching 
changes that you suggest. The similarity is that the Invoker's role is 
essentially taken over by the Controller, with no Kafka in between, routing 
directly to the warm containers (which are collocated with the Controller in 
the case of an IoT gateway where lean OW will be deployed).

I believe the main difference is that in Lean OpenWhisk we are not proposing 
any changes to the code of the core project. It's simply about using SPI to 
dynamically load the right load balancer and queue object, and running the 
Controller together with the Invoker as one executable, without changing the 
Invoker's code.

So I believe the lean OW design will cause very little friction with the 
current code base and will be easy to merge. Later on, when the new design 
matures, the same SPI pattern can be used to extend the Controller's load 
balancer to talk HTTP to the action containers directly, as you suggest.
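
To illustrate the pattern, here is a deliberately simplified sketch; the types 
and signatures below are made up and much smaller than the real OpenWhisk 
interfaces, it only shows the shape of the idea:

// Sketch only: simplified stand-ins for the real OpenWhisk interfaces, showing
// an SPI-pluggable load balancer plus an in-memory queue instead of Kafka,
// with the invoker living in the same process as the controller.
import java.util.concurrent.ConcurrentLinkedQueue
import scala.concurrent.Future

final case class ActivationMessage(action: String, args: String)  // simplified
final case class ActivationResult(response: String)               // simplified

// What the controller programs against; the concrete class is chosen via SPI.
trait LeanLoadBalancer {
  def publish(msg: ActivationMessage): Future[ActivationResult]
}

// Lean implementation: no Kafka. Messages go through an in-memory queue and are
// handed straight to the invoker logic running in the same JVM as the controller.
class InProcessLoadBalancer(invoke: ActivationMessage => Future[ActivationResult])
    extends LeanLoadBalancer {

  private val queue = new ConcurrentLinkedQueue[ActivationMessage]()  // stands in for the Kafka topic

  override def publish(msg: ActivationMessage): Future[ActivationResult] = {
    queue.offer(msg)
    invoke(queue.poll())  // the "consumer" is just a direct method call into the invoker
  }
}

Keeping the queue behind its own SPI means the controller code stays the same 
whether it talks to Kafka or to the in-memory queue.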

Hope this makes sense.

Cheers.

-- david & Pavel 

On 2018/07/16 11:24:47, "Markus Thoemmes"  
wrote: 
> Hi David,
> 
> please send your PR for sure! IIRC we made the Loadbalancer pluggable 
> specifically for this use-case. Sounds like a great addition to our 
> possible deployment topologies.
> 
> Shameless plug: Would you review the architecture I proposed here: 
> https://lists.apache.org/thread.html/29289006d190b2c68451f7625c13bb8020cc8e9928db66f1b0def18e@%3Cdev.openwhisk.apache.org%3E
> 
> In theory, this could make your proposal even leaner in the future. Don't 
> take this to mean we should hold this back, though; we can absolutely go 
> forward with your implementation first. Just wanted to quickly verify this 
> use-case will also work with what we might plan for in the future.
> 
> Cheers,
> Markus
> 
> 

-- david 




Re: Proposing Lean OpenWhisk

2018-07-16 Thread Rodric Rabbah
Glad to see this work reaching maturity. As Markus noted I think there could be 
more convergence down the line too. 

Looking forward to the PR. 

-r

> On Jul 16, 2018, at 4:17 AM, David Breitgand  wrote:
> 
> Hi, 
> 
> Pavel and I are working on lean OpenWhisk. The idea is to allow efficient 
> use of OpenWhisk with small form-factor compute nodes (e.g., IoT gateways). 
> The proposal is to get rid of Kafka and compile the controller and invoker 
> together into a "lean" controller-invoker (we have a Gradle project for 
> that), with the controller calling the invoker's methods directly through an 
> in-memory queue object (also loaded via SPI) and a new pluggable load 
> balancer selected via SPI through a configuration file property. Here is a 
> blog providing some more detail and pointing to our fork: 
> https://medium.com/@davidbr_9022/lean-openwhisk-open-source-faas-for-edge-computing-fb823c6bbb9b
> 
> An important thing is that we are _not_ proposing a new project. Rather we 
> want to submit a PR to allow for a lean distro of OpenWhisk that uses the 
> same OpenWhisk code base and consumes everything downstream by virtue of 
> exploiting SPI.
> 
> Thoughts?
> 
> Thanks.
> 
> -- david 
> ===
> David Breitgand, Ph. D. 
> Senior Researcher, IBM Research -- Haifa, Israel 
> Tel: +972-4-829-1007 | Mobile: +972 54 7277-881 
> "Ambition is the path to success, persistence is the vehicle you arrive in", 
> William Eardley IV 
> ==
> 


Re: Proposing Lean OpenWhisk

2018-07-16 Thread Dominic Kim
Dear David.

This is an awesome idea!!

Is this to control IoT devices programmatically?
If yes, there would be many different types of IoT devices, especially in
terms of their capabilities, such as lighting sensors, thermometers, robot
cleaners, and so on.

Then do you have anything in mind to take care of heterogeneous sets of
edge nodes?
There is a possibility that some actions should only run on thermometers,
while others should run on lighting sensors.

If you are trying to install a "one-for-all" OpenWhisk cluster rather than
having separate OpenWhisk clusters for each device type, how will you
manage heterogeneous container pools and properly assign relevant actions
to them?


Best regards,
Dominic


2018-07-16 20:24 GMT+09:00 Markus Thoemmes :

> Hi David,
>
> please send your PR for sure! IIRC we made the Loadbalancer pluggable
> specifically for this use-case. Sounds like a great addition to our
> possible deployment topologies.
>
> Shameless plug: Would you review the architecture I proposed here:
> https://lists.apache.org/thread.html/29289006d190b2c68451f7625c13bb8020cc8e9928db66f1b0def18e@%3Cdev.openwhisk.apache.org%3E
>
> In theory, this could make your proposal even leaner in the future. Don't
> take this to mean we should hold this back, though; we can absolutely go forward
> with your implementation first. Just wanted to quickly verify this use-case
> will also work with what we might plan for in the future.
>
> Cheers,
> Markus
>
>


Re: Proposing Lean OpenWhisk

2018-07-16 Thread Markus Thoemmes
Hi David,

please send your PR for sure! IIRC we made the Loadbalancer pluggable 
specifically for this use-case. Sounds like a great addition to our possible 
deployment topologies.

Shameless plug: Would you review the architecture I proposed here: 
https://lists.apache.org/thread.html/29289006d190b2c68451f7625c13bb8020cc8e9928db66f1b0def18e@%3Cdev.openwhisk.apache.org%3E

In theory, this could make your proposal even leaner in the future. Don't take 
this to mean we should hold this back, though; we can absolutely go forward with 
your implementation first. Just wanted to quickly verify this use-case will also 
work with what we might plan for in the future.

Cheers,
Markus



Proposing Lean OpenWhisk

2018-07-16 Thread David Breitgand
Hi, 

Pavel and I are working on lean OpenWhisk. The idea is to allow efficient 
use of OpenWhisk with small form-factor compute nodes (e.g., IoT gateways). 
The proposal is to get rid of Kafka and compile the controller and invoker 
together into a "lean" controller-invoker (we have a Gradle project for 
that), with the controller calling the invoker's methods directly through an 
in-memory queue object (also loaded via SPI) and a new pluggable load 
balancer selected via SPI through a configuration file property. Here is a 
blog providing some more detail and pointing to our fork: 
https://medium.com/@davidbr_9022/lean-openwhisk-open-source-faas-for-edge-computing-fb823c6bbb9b
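
To give a flavor of the SPI-based selection, here is a simplified, purely 
illustrative sketch; the real SpiLoader in the code base is richer, and the 
config key and class name below are examples, not the actual ones:

// Simplified illustration of the SPI idea: the concrete implementation class is
// named in a configuration property and instantiated reflectively at start-up,
// so the lean distro can swap in its own load balancer (and queue) without
// touching core code. Config key and class name are illustrative.
import com.typesafe.config.ConfigFactory

object LeanSpiLoader {
  def main(args: Array[String]): Unit = {
    // e.g. in application.conf:
    //   whisk.spi.LoadBalancerProvider = "lean.InProcessLoadBalancerProvider"
    val config = ConfigFactory.load()
    val className = config.getString("whisk.spi.LoadBalancerProvider")
    val provider = Class.forName(className).getDeclaredConstructor().newInstance()
    println(s"Loaded load balancer provider: ${provider.getClass.getName}")
  }
}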

An important thing is that we are _not_ proposing a new project. Rather we 
want to submit a PR to allow for a lean distro of OpenWhisk that uses the 
same OpenWhisk code base and consumes everything downstream by virtue of 
exploiting SPI.

Thoughts?

Thanks.

-- david 
===
David Breitgand, Ph. D. 
Senior Researcher, IBM Research -- Haifa, Israel 
Tel: +972-4-829-1007 | Mobile: +972 54 7277-881 
"Ambition is the path to success, persistence is the vehicle you arrive 
in", William Eardley IV 
==