You'd preserve the order with a statically partitioned routing table using
consistent hashing as well.
On Wed, Jul 22, 2015 at 11:43 AM, Gabriel Volpe
wrote:
> So, if this is the case, maybe Akka streams is not the solution to my
> problem. It's working well with actors. I just wanted to try it
So, if this is the case, maybe Akka streams is not the solution to my
problem. It's working well with actors. I just wanted to try it to see
whether it fit better or not.
Yes, as I explained before, I create actors dynamically depending on the
incoming data. Using this approach I can preserve the order of the events
with the same ID.
I thought I needed to create dynamic flows at one point too. Then I
realized that if I just "tag" my incoming data with what I need and make a
smart enough detached stage, I can control the pressure and order as
necessary. Also, at one point I did need to fork my data (into a predefined
number of slots).
In my opinion, there is little value in creating the slots dynamically, as
you will most likely have either:
A) more slots than CPUs, which means you won't get any performance
improvement from creating them dynamically,
or B) fewer slots than CPUs, in which case you can statically create them
and use consistent hashing to route to them.
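The static routing idea above can be sketched in plain Scala. This is a minimal illustration, not anyone's actual code; the names `SlotRouter`, `numSlots`, and `slotFor` are assumptions for the example:

```scala
// Minimal sketch of static slot routing: a fixed number of slots, with each
// event ID consistently hashed to the same slot. If each slot is consumed by
// a single sequential consumer, per-ID ordering is preserved.
object SlotRouter {
  // Fixed slot count; an assumption for illustration (e.g. one per CPU core).
  val numSlots: Int = 4

  // floorMod keeps the result in [0, numSlots) even for negative hash codes,
  // and the same ID always yields the same slot.
  def slotFor(id: Long): Int =
    java.lang.Math.floorMod(id.hashCode, numSlots)
}
```

Because `slotFor` is a pure function of the ID, two events with the same ID can never end up on different slots, which is the ordering guarantee discussed in this thread.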
AFAIK there is currently no way of dynamically changing the size of the
pool. What you can do is assign a fixed number of 'slots' which get
dynamically turned on/off.
You could look at
https://github.com/akka/akka/blob/releasing-akka-stream-and-http-experimental-1.0/akka-http-core/src/main/scal
I think it's not that simple. I need to control the back-pressure only
between the Controller and the Processors that are created dynamically.
On Monday, July 20, 2015 at 16:35:07 (UTC+1), Jim Hazen wrote:
>
> If you already have the solution it shouldn't be hard to put Streams in
> front
>
> Sounds to me like you want a pooled set of streams for processing your
> events, with the restriction that events with the same ID must go into the
> same stream?
Yes, that's exactly what I wanna do because I need to preserve the order of
the events with the same ID.
> If that's the case, a FlexiRoute with a fixed set of downstreams should
> do the job.
If you already have the solution it shouldn't be hard to put Streams in front
of it to help manage back pressure. Call your code from within the sink and
apply whatever flow controls you need to the source ahead of it.
Sounds to me like you want a pooled set of streams for processing your
events, with the restriction that events with the same ID must go into the
same stream?
If that's the case, a FlexiRoute with a fixed set of downstreams should do
the job. Then all you need is a way of consistently selecting the same
downstream for a given event ID, e.g. by hashing the ID.
I don't understand how the FlexiRoute could be the solution. Could you
expand the case further?
Actually, I'm solving the problem using only Akka Actors and it's quite
easy. I have a Controller Actor which has a Map[Long, ActorRef]
corresponding to the Event ID and the Processor Actor (child).
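The Controller/Processor pattern described above can be modeled in plain Scala to show why it preserves per-ID order. This is a hedged stand-in, not the actual actor code: the `Map` below plays the role of the `Map[Long, ActorRef]`, and each `Queue` stands in for a child Processor's mailbox; `Controller`, `Event`, `dispatch`, and `processed` are illustrative names:

```scala
import scala.collection.mutable

// Illustrative event shape matching the JSON in this thread.
final case class Event(id: Long, message: String)

// Plain-Scala model of the Controller actor: one "Processor" per event ID,
// created on demand. Per-ID order is preserved because each ID owns exactly
// one queue and dispatch appends sequentially.
class Controller {
  private val processors = mutable.Map.empty[Long, mutable.Queue[String]]

  // Route an event to the processor owning its ID, creating it if missing.
  def dispatch(e: Event): Unit =
    processors.getOrElseUpdate(e.id, mutable.Queue.empty[String]) += e.message

  // Messages seen so far for one ID, in arrival order.
  def processed(id: Long): List[String] =
    processors.get(id).map(_.toList).getOrElse(Nil)
}
```

In the real solution the values would be `ActorRef`s and Akka's mailbox ordering (FIFO between the same sender/receiver pair) provides the same guarantee.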
I think you might want to use a FlexiRoute
http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0/scala/stream-customize.html#Using_FlexiRoute
And in it you'd decide "which way the element should go".
You'd have 2 outputs from it, one being the async path, and the other one
the other path.
Hi all, this is my case:
I have a queue (Rabbit MQ) producing Json data with the following format:
{ "id": 123, "message": "hello" }
{ "id": 876, "message": "shutdown" }
{ "id": 123, "message": "bye" }
And I have a corresponding Consumer. What I want to do is to distribute the
load depending on the ID.