What persistenceIds would you like to use in the end?
Based on a counter? PO-1, PO-2, ... increasing for each routee?
That would be strange in a long-running system where worker nodes are added
and removed over time.
Why not use Cluster Sharding instead? Wouldn't it be more natural to have a
separate e
I was wondering if anyone has thought of making the BackoffSupervisor
extensible so the consumer can change the behavior of the child life-cycle?
I had a case where I wanted to control which Props was used to create the
child-actor in a Shard based on the first message. The application this was
Hello! I'm hoping the wisdom of the Akka community can give me some
insight! I'm trying to use a ClusterClient to invoke an actor in a cluster
(v. 2.4.17), but the messages are always timing out. Here's my setup:
a) An Akka cluster running on AWS ECS (where each Akka node is a Dockerized
app
OK, thank you for your answer.
On Tuesday, March 7, 2017 at 08:20:35 UTC+1, Alexandre Delegue wrote:
>
> Hi,
>
> Executing this code
>
> Source.range(1, 4)
>     .mapAsync(1, e -> CompletableFuture.supplyAsync(() -> {
>         System.out.println("Num " + e);
>         return e;
>     }
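The quoted snippet above pairs Akka Streams' `mapAsync(1, ...)` with `CompletableFuture.supplyAsync`. A minimal plain-JDK sketch of what parallelism 1 means (each future is created and awaited before the next element is processed, so element order is preserved) — this stand-in uses only `CompletableFuture`, not Akka Streams:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class MapAsyncSketch {
    public static void main(String[] args) {
        // Hypothetical stand-in for Source.range(1, 4).mapAsync(1, ...):
        // with parallelism 1, one future at a time is created and awaited,
        // so both the side effects and the results keep element order.
        List<Integer> results = new ArrayList<>();
        for (int e = 1; e <= 4; e++) {
            final int elem = e;
            Integer r = CompletableFuture.supplyAsync(() -> {
                System.out.println("Num " + elem);
                return elem;
            }).join(); // block until this element's future completes
            results.add(r);
        }
        System.out.println(results); // prints [1, 2, 3, 4]
    }
}
```

With a higher parallelism argument, `mapAsync` starts several futures concurrently but still emits results in upstream order; this sketch only models the sequential `parallelism = 1` case.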
Hi Konrad,
At first view, I could not find any error... but please take into account
that I am a Scala beginner :)
Now I am trying to test the code, and I am having a problem when I execute
this line:
val sslConfig: AkkaSSLConfig = AkkaSSLConfig.get(system)
java.lang.BootstrapMe
Thanks for all the replies. When I first came across event sourcing, it
clearly said one stores events and changes in the state, but not the state
itself. So in my case, I get messages from Kafka, and each message is, say,
JSON with 50 fields like this: {key1: val1, key2: val2, ... key50: val50}
a
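The "store changes, not state" idea from the message above can be sketched in plain Java: given the previous and current versions of such a 50-field message, the event to persist is just the fields whose values changed. The helper name and field names below are illustrative, not from any library:

```java
import java.util.HashMap;
import java.util.Map;

public class FieldDiff {
    // Hypothetical event payload: only the fields that changed since the
    // previous message, rather than the full 50-field state.
    static Map<String, String> changedFields(Map<String, String> prev,
                                             Map<String, String> next) {
        Map<String, String> event = new HashMap<>();
        for (Map.Entry<String, String> e : next.entrySet()) {
            if (!e.getValue().equals(prev.get(e.getKey()))) {
                event.put(e.getKey(), e.getValue());
            }
        }
        return event;
    }

    public static void main(String[] args) {
        Map<String, String> prev = new HashMap<>();
        prev.put("key1", "val1");
        prev.put("key2", "val2");
        Map<String, String> next = new HashMap<>(prev);
        next.put("key2", "val2b"); // only key2 changed
        System.out.println(changedFields(prev, next)); // prints {key2=val2b}
    }
}
```

Replaying the stored diffs in order, on top of an initial state (or a snapshot), reconstructs the current state — which is the usual event-sourcing recovery path.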
debug info:
00:12:07.670 [Camel (CamelRestTest) thread #3 - ShutdownTask] DEBUG
o.e.jetty.util.component.Container - Container
org.eclipse.jetty.server.Server@42d53760 -
SelectChannelConnector@localhost:8181 as connector
00:12:07.670 [Camel (CamelRestTest) thread #3 - ShutdownTask] DEBUG
o.e.j
Hi,
I am following the book and cloned the code from
https://github.com/RayRoestenburg/akka-in-action
When executing `sbt assembly`, the following error appeared:
23:09:59.383 [ConsumerTest-akka.actor.default-dispatcher-7] DEBUG
o.a.camel.impl.DefaultCamelContext - Warming up route id:
akka://Con
Hi Brice,
we previously recommended the stream variant
(`Source.single(request).mapAsync(1)`) but it turned out that this is not
a good idea as materialization has some cost and the API is more complex
than using `Http.singleRequest()` directly. So, using
`Http.singleRequest()` is the right w
You're welcome!
Happy hAkking :)
Rafał
On Tuesday, March 14, 2017 at 09:03:05 UTC+1, Arno Haase wrote:
>
> Rafał,
>
> snapshotting these big amounts of data in the schedulers feels like a
> bad trade-off, especially since this is not a core feature of the
> system, but I guess I wil
Hi All,
I have Kafka as my live streaming source of data (This data isn't really
events but rather just messages with a state) and I want to insert this
data into Cassandra but I have the following problem.
Cassandra uses a last-write-wins strategy, using timestamps to resolve
conflicting writes.
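The last-write-wins rule mentioned above can be sketched in plain Java (names here are illustrative, not Cassandra's API): for a given cell, the write carrying the highest timestamp wins, regardless of arrival order — which is exactly why out-of-order messages from a stream can silently lose to an earlier-arriving write with a newer timestamp:

```java
import java.util.HashMap;
import java.util.Map;

public class LastWriteWins {
    // Hypothetical cell: a value plus its write timestamp (microseconds,
    // as Cassandra uses for write timestamps).
    static final class Write {
        final String value;
        final long timestampMicros;
        Write(String value, long timestampMicros) {
            this.value = value;
            this.timestampMicros = timestampMicros;
        }
    }

    // Last-write-wins: keep the write with the highest timestamp for a key;
    // a late-arriving write with an older timestamp is discarded.
    static void apply(Map<String, Write> table, String key, Write w) {
        Write old = table.get(key);
        if (old == null || w.timestampMicros > old.timestampMicros) {
            table.put(key, w);
        }
    }

    public static void main(String[] args) {
        Map<String, Write> table = new HashMap<>();
        apply(table, "k", new Write("newer", 100L));
        apply(table, "k", new Write("older-but-arrived-later", 50L));
        System.out.println(table.get("k").value); // prints "newer"
    }
}
```

Note the discard happens per cell, with no error reported, so the application only sees the winning value on subsequent reads.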
Hi Brice,
we changed the default behavior in akka-http. Now, all directives (i.e.
also `Directive0`) behave as `dynamic` did in spray before.
Johannes
On Tue, Mar 14, 2017 at 2:04 AM, kant kodali wrote:
> Hi!
>
> @Tal says the actor that writes to a Cassandra data store is different
> from an Actor/Streaming source that polls from Cassandra. And if I
> understand this correctly, you are saying the Actor that writes to Cassandra
> data store noti
To explain my issue clearly, I have provided sample code.
If I have an actor which looks like this:
class MasterActor extends Actor with ActorLogging with WorkerRouter with
AkkaProvider {
  private val router = createWorkerRouter()
  def receive = {
    case PurchaseOrderMsg(content,
Rafał,
snapshotting these big amounts of data in the schedulers feels like a
bad trade-off, especially since this is not a core feature of the
system, but I guess I will just do some tests.
Storing the data in Cassandra with a hash as partition key to distribute
load, and the timestamp and user a