On 18.7.2018 14:41, Markus Thoemmes wrote:
Hi Martin,

thanks for the great questions :)

I'm thinking about scalability and the edge case. When there are not enough
containers and new controllers are being created, and all of them redirect
traffic to the controllers with containers, doesn't that mean overloading the
available containers a lot? I'm curious how we throttle the traffic in this case.
True, the first few requests will overload the controller that owns the very first 
container. That controller will request new containers immediately, which will then be 
distributed to all existing Controllers by the ContainerManager. An interesting wrinkle 
here is that you'd want the overloading requests to be completed by the Controllers that 
sent them to the "single-owning-Controller".

Ah, got it. So it's a pretty common scenario: scaling out controllers and containers. I thought this was a case where we had reached the limit of created containers and no more containers could be created.


What we could do here is:

1. Controller0 owns ContainerA1.
2. Controller1 relays requests for A to Controller0.
3. Controller0 has more requests than it can handle, so it requests additional containers. All 
requests coming from Controller1 are answered with a predefined message (for example 
"HTTP 503 overloaded" with a specific header, say "X-Return-To-Sender-By: 
Controller0").
4. Controller1 recognizes this as "okay, I'll wait for containers to appear", 
which will eventually happen (because Controller0 has already requested them), so it can 
route and complete those requests on its own.
5. Controller1 no longer relays requests to Controller0 but requests 
containers itself (acknowledging that Controller0 is already overloaded).
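To make the relay/return-to-sender handshake above concrete, here is a minimal sketch of how Controller1 might react to Controller0's response. All names (the `Controller` class, `RELAY_HEADER`, the return strings) are illustrative assumptions for this thread, not actual OpenWhisk identifiers:

```python
# Sketch of the "return to sender" handling described in the steps above.
# The header name matches the example in the proposal; everything else is
# hypothetical.

RELAY_HEADER = "X-Return-To-Sender-By"

class Controller:
    def __init__(self, name):
        self.name = name
        self.overloaded_peers = set()  # peers that bounced a relayed request

    def handle_relay_response(self, peer, status, headers):
        """Decide what to do with the response to a request relayed to `peer`."""
        if status == 503 and headers.get(RELAY_HEADER) == peer.name:
            # Peer is overloaded: remember that, stop relaying to it, and
            # wait for the containers it already requested to be distributed
            # to us, then complete the request ourselves.
            self.overloaded_peers.add(peer.name)
            return "wait-for-own-container"
        # Any other response means the peer completed the request for us.
        return "completed-by-peer"
```

For example, once Controller1 sees the 503 with the marker header, it records Controller0 as overloaded and switches to requesting containers itself, as in step 5.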

Yeah, I think it makes sense.


I guess the other approach would be to block creating new controllers
when there are no containers available as long as we don't want to
overload the existing containers. And keep the overflowing workload
in Kafka as well.
Right, the second possibility is to use a pub/sub (not necessarily Kafka) queue 
between Controllers. Controller0 subscribes to a topic for action A because it 
owns a container for it. Controller1 doesn't own a container (yet) and 
publishes a message as overflow to topic A. The wrinkle in this case is that 
Controller0 can't complete the request itself but needs to send the result back to 
Controller1 (where the client's HTTP connection is open).
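The overflow flow above can be sketched with a toy in-memory broker. The `Broker` class, topic name, and reply-callback mechanism are assumptions purely for illustration; a real deployment would use Kafka or a similar system, and the reply path would be a network call back to Controller1:

```python
# Toy pub/sub sketch of the overflow approach: Controller1 publishes an
# overflow request to topic "action-A-overflow"; Controller0 (which owns a
# container) consumes it, but must route the result back to Controller1,
# which still holds the client's HTTP connection.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subs[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subs[topic]:
            handler(message)

def demo():
    broker = Broker()
    results = {}

    # Controller1 side: the client's connection lives here, so the
    # result must come back to this callback to be answered.
    def controller1_reply(request_id, result):
        results[request_id] = result  # respond to the waiting client

    # Controller0 side: owns ContainerA1, so it subscribes to the topic.
    def controller0_handler(message):
        result = "ran " + message["action"] + " in ContainerA1"
        message["reply"](message["id"], result)  # send back to sender

    broker.subscribe("action-A-overflow", controller0_handler)

    # Controller1 overflows a request for action A into the topic.
    broker.publish("action-A-overflow",
                   {"id": "req-1", "action": "A", "reply": controller1_reply})
    return results
```

The key design point this illustrates is the extra hop: unlike the relay approach, the owning controller here does execute the request, but the response still has to travel back to whichever controller accepted the client connection.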

Does that make sense?

I was rather thinking about blocking the creation of Controller1 in this case and responding to the client that the system is overloaded. But the first approach seems better because it's a pretty common use case (not reaching the limit of created containers).

Thanks!
Martin


Cheers,
Markus

