Hi Radek,
I thought about this, but in that case I would have to scale the number of containers manually, whereas I would like to leverage all the resources on the Mesos slaves.
That way, all I would need to do to increase throughput is add slave nodes to the cluster, and that's it.
Regards,
Uladzimir

On Mar 27, 2017 11:58 PM, Radek Gruchalski <ra...@gruchalski.com> wrote:

Well, you are storing the incoming events in Kafka, so the data is not lost. That's one. Instead of scheduling a container per image, start a number of containers, make each of them a member of a consumer group, and you're sorted. By building a framework, you're making your life really difficult.
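(The consumer-group approach works because Kafka divides a topic's partitions among the members of a group, so adding containers spreads the load automatically. A self-contained Go sketch of the range-style split, purely illustrative, not the real Kafka client, with `assignRange` as a made-up helper name:)

```go
package main

import "fmt"

// assignRange splits partition IDs 0..partitions-1 as evenly as possible
// across the members of a consumer group, in the spirit of Kafka's range
// assignor. Earlier members get one extra partition when the split is uneven.
func assignRange(partitions, members int) [][]int {
	out := make([][]int, members)
	per := partitions / members
	extra := partitions % members
	p := 0
	for m := 0; m < members; m++ {
		n := per
		if m < extra {
			n++ // first `extra` members absorb the remainder
		}
		for i := 0; i < n; i++ {
			out[m] = append(out[m], p)
			p++
		}
	}
	return out
}

func main() {
	// 12 partitions spread over 3 consumer containers: 4 each.
	for m, parts := range assignRange(12, 3) {
		fmt.Println("consumer", m, "owns partitions", parts)
	}
}
```

(Adding a fourth container and letting it join the group would trigger a rebalance, shrinking each member's share, which is exactly the "add capacity to add throughput" behavior being discussed.)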
On March 27, 2017 at 10:41:50 PM, vvshvv (vvs...@gmail.com) wrote:
Sorry, I did not reply to the whole mailing list.
Regards,
Uladzimir
On Mar 27, 2017 11:39 PM, vvshvv <vvs...@gmail.com> wrote:
When a user uploads an image to a REST endpoint, it should be processed in a separate Docker container. I could implement a custom Mesos framework that consumes upload events from a Kafka topic and runs tasks on the slaves (in containers), but that is somewhat difficult: when a message arrives, the cluster may have no resources available to process it, so the message has to wait in a queue until Mesos offers some. And while messages are waiting in an in-memory queue, the process can crash and I would lose all pending events.
At the same time, I am aware of Marathon, but as I understand it, it is suited only to long-running tasks, and I am not sure it would handle a large number of jobs.
Regards,
Uladzimir
On Mar 27, 2017 11:28 PM, vvshvv <vvs...@gmail.com> wrote:
It seems you did not understand what I meant. My processing includes metadata extraction and resizing, and each job should have an isolated environment (limited CPU and memory). The processing itself will be in either Golang or even C++.