I think this is what you're after:
https://doc.akka.io/docs/akka/current/routing.html#broadcast-messages
If you want to hide the decision logic from the sender, you can create a
custom actor that inspects the message and /either/ sends it to the
hashing router as-is /or/ sends it wrapped in a
Hi Patrik,
Thanks for the latest snapshot. I've been trying to play around with akka
typed by migrating the project I'm working on and previously I'd been stuck
at implementing stashing. Now with the StashBuffer it got simple.
Overall I'm really happy. It took a lot of time but I managed to migra
I use RestartSource.withBackoff to recover from broker outages.
https://doc.akka.io/docs/akka/current/stream/stream-error.html#delayed-restarts-with-a-backoff-stage
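The idea behind RestartSource.withBackoff can be sketched in plain Java without Akka: retry the failing task with an exponentially growing pause, capped at a maximum. This is only an illustration of the backoff policy, not Akka's API; the method name, signature and the attempt cap are my own:

```java
import java.time.Duration;
import java.util.function.Supplier;

public class Backoff {
    // Retry `task`, doubling the pause between attempts from minBackoff
    // up to a cap of maxBackoff -- the same shape of policy that
    // RestartSource.withBackoff applies when it recreates a stream.
    public static <T> T retryWithBackoff(Supplier<T> task,
                                         Duration minBackoff,
                                         Duration maxBackoff,
                                         int maxAttempts) {
        Duration delay = minBackoff;
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return task.get();
            } catch (RuntimeException e) {
                last = e;
                try {
                    Thread.sleep(delay.toMillis());  // back off before retrying
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw new RuntimeException(ie);
                }
                Duration doubled = delay.multipliedBy(2);
                delay = doubled.compareTo(maxBackoff) > 0 ? maxBackoff : doubled;
            }
        }
        throw last;  // give up after maxAttempts failures
    }
}
```

Unlike this sketch, RestartSource keeps retrying indefinitely (and can randomize the delay); the doubling-with-a-cap behaviour is the part they share.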
Hope that helps.
Michał
On Friday, 12 January 2018 22:41:04 UTC, Sean Rohead wrote:
>
> I am using akka-stream-kafka 0.18. I have a
If you need exactly once semantics against your target database, the
common pattern is to store your last processed offset in that database
transactionally together with your output records, instead of committing
back to kafka. On startup you'd read the last offset from your database
and seek t
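The store-offset-with-output pattern can be sketched with an in-memory class standing in for the target database. The class name and methods are hypothetical; in real code the two writes would be a single database transaction rather than a synchronized block:

```java
import java.util.ArrayList;
import java.util.List;

public class OffsetStore {
    // Stand-in for a database holding the output records plus the last
    // processed Kafka offset, written together atomically (here: one lock;
    // in real code: one DB transaction).
    private final List<String> records = new ArrayList<>();
    private long lastOffset = -1L;  // -1 means "nothing processed yet"

    public synchronized void commit(String record, long offset) {
        records.add(record);
        lastOffset = offset;  // stored in the same "transaction" as the record
    }

    // On startup the consumer reads this and seeks to lastProcessedOffset() + 1,
    // so a record is never written twice even after a crash.
    public synchronized long lastProcessedOffset() {
        return lastOffset;
    }

    public synchronized List<String> allRecords() {
        return new ArrayList<>(records);
    }
}
```

Because the record and the offset either both commit or both roll back, a crash between them cannot leave the two out of sync, which is what gives you effectively-once delivery into that database.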
I drafted an implementation outline in kafka-streams to address the
problem of sliding-window reordering (to cater for late messages within
the time window), it also caters for de-duplication:
https://stackoverflow.com/questions/43939534/apache-kafka-order-windowed-messages-based-on-their-value
ORMap and ORMultiMap don't mention delta support. On the contrary, docs
for Maps say:
When a data entry is changed the full state of that entry is
replicated to other nodes, i.e. when you update a map the whole map is
replicated. Therefore, instead of using one ORMap with 1000 elements
it is
http://developer.lightbend.com/docs/alpakka/current/ started giving a 404.
I looked if it moved somewhere but couldn't find it.
http://doc.akka.io/docs/akka/2.5.2/java/common/other-modules.html
https://github.com/akka/alpakka
both point to the broken link.
Has it moved or is it a temporary ou
Hi Patrik,
I think indeed that is what we're seeing. It happened in production and
we can easily reproduce it by bringing down network connections one at a
time.
However, I wonder if this is something that could be solved in akka
itself, as opposed to trying to implement a custom downing pro
Hi Santanu,
I'm not sure I know enough about your requirements to advise.
In microservices for each piece of data I'd have only one of the
services be the "source of truth" for that data and others subscribe to
events from that microservice.
In your original case, however, it's not clear if
Sounds like what could help in your organization is using a higher-level
abstraction, such as Lagom.
You don't need to be comfortable with actors to use it and it does
provide a wrapper around reactive-kafka (or akka-stream-kafka as it's
currently known) with exponential backoff restarts:
ht
Hi Santanu,
If data consistency is key, please start by thinking carefully about
what you mean by consistency. How strong do your consistency guarantees
actually need to be?
I see no reason not to build your system using akka, but be aware that
message-driven distributed systems will generally
I've not tried that, but it reminded me of something in the docs that says:
*Important*: Using setups involving Network Address Translation, Load
Balancers or Docker containers violates assumption 1, unless
additional steps are taken in the network configuration to allow
symmetric communicatio
Hi Igmar,
You can do a map on the source and pass the new returned Source as the
second parameter to Flow.fromSinkAndSource instead of the original source.
Something like this (untested):
final Source<X, NotUsed> source = Source.range(1, 100).map(v -> new X());
final Sink<X, CompletionStage<Done>> sink = Sink.foreach(v ->
Syste
Hi Marc,
Sounds very interesting but I couldn't find evidence of Aeron directly
supporting RDMA in the links you provided or otherwise. Can you please
point me to your sources?
I found this github ticket
https://github.com/real-logic/Aeron/issues/220 but it's still open.
Thanks,
Michal
I think you got your imports wrong. Given the call to .asJava(), I'm
guessing you've imported Consumer from the scaladsl.
If you use javadsl you'll get akka.stream.javadsl.Source, not
akka.stream.scaladsl.Source in the Tuple2.
committablePartitionedSource() is giving you a Source of tuples,
Hi Thibault,
Looks like the docs are out of date.
If you read the compilation error message carefully, you'll notice
Jackson.marshaller is giving you a
Marshaller
while completeWithFuture is expecting an
akka.http.javadsl.marshalling.Marshaller
Instead, try complete*OK*WithFuture
Hi Thibault,
I do this kind of thing with Source.fromIterator(...) passing it an
iterator obtained from a java stream.
In your case, it would look something like:
Source<UUID, NotUsed> source = Source.fromIterator( () ->
Stream.generate( ()->
UUID.randomUUID()
).iterator()
).take(n);
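The plain-Java core of that snippet, without Akka, looks like this; `limit(n)` plays the role of `.take(n)` on the source:

```java
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;
import java.util.UUID;
import java.util.stream.Stream;

public class UuidIterators {
    // An infinite java.util.stream of random UUIDs, exposed as the kind of
    // Iterator you would hand to Source.fromIterator, truncated to n elements.
    public static Set<UUID> firstN(int n) {
        Iterator<UUID> it = Stream.generate(UUID::randomUUID)
                                  .limit(n)      // equivalent of .take(n)
                                  .iterator();
        Set<UUID> out = new HashSet<>();
        while (it.hasNext()) {
            out.add(it.next());
        }
        return out;
    }
}
```

One caveat that applies in the Akka case too: a java.util.stream's iterator can only be consumed once, which is why the snippet passes a *factory* (`() -> ...iterator()`) to Source.fromIterator, so each materialization gets a fresh iterator.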
th
Hi there,
Just to chip in for balance.
We're using akka http and the DSL in a real world business app.
The fact we don't have hundreds of routes or don't have to call 10s of
actors per request doesn't make our app any less a "real world business
app".
Just that ours is small. And we're crea
Both http servers (jetty and akka-http) will try to bind to the
specified port (9001) and the OS will only allow the first one to
succeed, leading to the exception you are seeing in the process that is
second to start.
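The bind conflict is easy to reproduce with two plain ServerSockets, no Jetty or akka-http needed; the second bind to an already-bound port throws a BindException (a subclass of IOException):

```java
import java.io.IOException;
import java.net.ServerSocket;

public class PortClash {
    // Bind one socket, then try to bind a second socket to the same port.
    // Returns true when the OS rejects the second bind, which is the
    // behaviour the exception in the second-started server comes from.
    public static boolean secondBindFails() {
        try (ServerSocket first = new ServerSocket(0)) {  // 0 = pick any free port
            int port = first.getLocalPort();
            try (ServerSocket second = new ServerSocket(port)) {
                return false;  // would mean the OS allowed a double bind
            } catch (IOException expected) {
                return true;   // BindException: "Address already in use"
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```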
If you must have them serve requests on the same port, then the way I
see
While there is little I can say about the speed of decompressing an
individual file, you can design your pipeline so that multiple files are
decompressed in parallel, but you probably thought of that already ;)
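As a sketch of that design: each file is still decompressed sequentially, but the files are spread across threads. This uses java.util.zip and a parallel stream rather than Akka Streams (where mapAsync with a dedicated dispatcher would play the same role); class and method names are mine:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.stream.Collectors;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class ParallelGunzip {
    // Helper to produce test input: gzip a string in memory.
    static byte[] gzip(String s) {
        try (ByteArrayOutputStream bos = new ByteArrayOutputStream()) {
            try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
                gz.write(s.getBytes(StandardCharsets.UTF_8));
            }
            return bos.toByteArray();
        } catch (IOException e) { throw new RuntimeException(e); }
    }

    // Decompressing a single file is inherently sequential...
    static String gunzip(byte[] compressed) {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(compressed));
             ByteArrayOutputStream bos = new ByteArrayOutputStream()) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = gz.read(buf)) != -1) bos.write(buf, 0, n);
            return new String(bos.toByteArray(), StandardCharsets.UTF_8);
        } catch (IOException e) { throw new RuntimeException(e); }
    }

    // ...but many files can be decompressed concurrently. Output order
    // still matches input order, as collect() preserves encounter order.
    public static List<String> decompressAll(List<byte[]> files) {
        return files.parallelStream()
                    .map(ParallelGunzip::gunzip)
                    .collect(Collectors.toList());
    }
}
```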
Thanks,
Michal
On 06/04/17 03:37, Sean Callahan wrote:
I doubt there is really an
Indeed. This is the relevant bit of docs I believe
(http://doc.akka.io/docs/akka/2.4.17/common/cluster.html#Membership):
The node identifier internally also contains a UID that uniquely
identifies this actor system instance at that hostname:port. Akka uses
the UID to be able to reliably trigger
Hi Unmesh,
If you configure multiple seed nodes, then at least one of them has
to be up for new (or restarted) members to join.
In our deployment we have a pretty static membership (we don't add nodes
dynamically), so we set all nodes to be seed nodes, no harm in that.
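For reference, an all-nodes-as-seed-nodes setup looks something like the following in application.conf. System name and hostnames are placeholders; the `akka.tcp://` scheme matches classic remoting as used in Akka 2.5:

```hocon
akka.cluster.seed-nodes = [
  "akka.tcp://MySystem@host1:2552",
  "akka.tcp://MySystem@host2:2552",
  "akka.tcp://MySystem@host3:2552"
]
```

Each entry must use the exact actor system name and the host/port that node binds to; a joining node tries the seed nodes in order until one responds.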
Hope that helps,
Mi
Hi Unmesh,
AFAIK, the crashed node has to be downed (whether manually or
automatically) for the cluster to reach convergence.
Only once there are no unreachable nodes observed by any member can the
leader resume its duties and allow the new member (your re-started
instance) to join.
For t
Hello,
Lagom has a nice feature when subscribing to kafka topics that upon
failure it re-creates the flow with exponential backoff.
I see this is implemented by wrapping the KafkaSubscriberActor props
with BackoffSupervisor props:
https://github.com/lagom/lagom/blob/master/service/scaladsl/
ra or Cassandra driver?
Thanks,
kant
On Mon, Mar 13, 2017 at 1:57 AM, 'Michal Borowiecki' via Akka User
List <mailto:akka-user@googlegroups.com>
wrote:
Also, there's a setting called
"cassandra-journal.pubsub-minimum-interval", which if set will
cau
Hi Ryan,
Kafka's log compaction is not an accurate analogy, as it simply works by
preserving the last message with a given key, removing previous messages
with that key. That's not the same as the concept of snapshots in event
sourcing.
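Compaction's keep-last-per-key semantics, in miniature (class and method names are mine):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class Compaction {
    // Replay a log of (key, value) records and keep only the most recent
    // value per key -- what Kafka log compaction converges to. Note that
    // earlier values are simply dropped, not folded into a snapshot of
    // accumulated state, which is why it differs from event-sourcing snapshots.
    public static Map<String, String> compact(String[][] log) {
        Map<String, String> latest = new LinkedHashMap<>();
        for (String[] record : log) {
            latest.put(record[0], record[1]);  // last write for a key wins
        }
        return latest;
    }
}
```

A snapshot, by contrast, is derived from *all* events up to a point; compaction only ever remembers one event per key.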
Cheers,
Michal
On 15/03/17 16:29, 'Ryan Tanner' via Akka
Also, there's a setting called
"cassandra-journal.pubsub-minimum-interval", which if set will cause the
journal to notify the persistence query side of new writes, so it can
only poll when needed instead of doing so periodically.
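In application.conf that would look something like this (the 1s value is only an example; leave the setting unset to keep the default poll-only behaviour):

```hocon
cassandra-journal {
  # If set, completed writes are announced to the persistence query side
  # via pub-sub, at most this often, so queries can pick up new events
  # immediately instead of waiting for the next periodic poll.
  pubsub-minimum-interval = 1s
}
```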
Cheers,
Michal
On 12/03/17 19:15, Patrik Nordwall wrote:
Tal de
Hi Dai,
Have you had a look at http://doc.akka.io/docs/akka/current/java/camel.html
yet?
Cheers,
Michal
--
>> Read the docs: http://akka.io/docs/
>> Check the FAQ:
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>> Search the archives: