Yep, weak/hybrid schema all the way.
So long as adding a field doesn't break downstream consumers you are fine;
otherwise you are #@$@ed with schema (think binary serializer)
On Mon, Apr 13, 2015 at 9:29 PM, Richard Rodseth rrods...@gmail.com wrote:
My favourite topic
https://groups.google.com/forum/#!topic/akka-user/mNVxRPRUDv0
Jay Kreps recently strongly endorsed Avro for use with Kafka.
http://t.co/l9uTFmb6OS
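The weak-schema advice in this thread can be sketched in plain Scala (the event and field names below are hypothetical, not from the original post): a field added in a later version carries a default, so payloads written before the field existed still decode, and unknown extra fields are simply ignored.

```scala
// v2 of the event added "currency"; the default keeps v1 payloads readable.
case class OrderPlaced(orderId: String, amount: BigDecimal, currency: String = "USD")

// Decoding from a loose field map (a stand-in for a parsed JSON object):
// missing new fields fall back to defaults, extra fields are ignored.
def decode(fields: Map[String, String]): OrderPlaced =
  OrderPlaced(
    orderId  = fields("orderId"),
    amount   = BigDecimal(fields("amount")),
    currency = fields.getOrElse("currency", "USD")
  )
```

A binary serializer with a fixed layout has no such escape hatch, which is the point being made above.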
On Mon, Apr 13, 2015 at 10:54 AM, Greg Young gregoryyou...@gmail.com
wrote:
For 1 use weak serialization (say json)
On
For 1 use weak serialization (say json)
On Monday, April 13, 2015 at 4:54:24 PM UTC+3, Chris Ridmann wrote:
Hello!
I've recently been experimenting with architectures that use CQRS /
Cluster Sharding / Event Sourcing / DDD, and I have some beginner questions.
1) What is the best way to
I have updated the ticket with the config I am using.
On Friday, April 10, 2015 at 2:52:47 AM UTC-7, Akka Team wrote:
I added a ticket: https://github.com/akka/akka/issues/17171
On Fri, Apr 10, 2015 at 10:23 AM, Endre Varga endre...@typesafe.com
wrote:
Wow! This is an
Scala 2.11.6, Akka 2.3.9, Spray 1.3.2.
When testing a particular workload, we ran into CPU spikes. The avg CPU
load is less than 5% but about 6-7 minutes into the run, we start to see
CPU spiking to near 100% lasting for several seconds. This repeats itself
every 6-7 minutes. We can't
This is using 1.0-M5
On Monday, April 13, 2015 at 9:32:38 PM UTC-7, Jeff wrote:
I am creating an ActorPublisher to encapsulate a kafka consumer. I am
trying to bulkhead the actor behind a custom dispatcher (since the kafka
consumer is blocking) with the following code:
val in =
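For the bulkheading described above, the usual approach is a dedicated thread-pool dispatcher in application.conf (the dispatcher name and pool sizes below are illustrative, not from the original post):

```
blocking-kafka-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    fixed-pool-size = 4
  }
  throughput = 1
}
```

The publisher is then created with Props(...).withDispatcher("blocking-kafka-dispatcher"), so the blocking consumer calls cannot starve the default dispatcher.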
Hi,
In Akka Persistence (here
http://doc.akka.io/docs/akka/snapshot/java/persistence.html#Batch_writes)
it is specified that batches are used to ensure atomic writes of events.
Can anyone please give an example of how to write batches in Persistence
with atomicity?
--
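In Akka 2.3 the persist method has an overload that takes a Seq of events, and the journal is expected to write such a batch atomically. The all-or-nothing semantics can be sketched in plain Scala (the Account example is made up for illustration, it is not from the docs):

```scala
final case class Account(balance: Int)

// Apply a batch of deposits/withdrawals atomically: either every event
// in the batch is applied, or the state is left completely untouched.
def applyBatch(state: Account, events: List[Int]): Either[String, Account] = {
  val candidate = events.foldLeft(state)((s, e) => Account(s.balance + e))
  if (candidate.balance < 0) Left("batch rejected: would overdraw")
  else Right(candidate)
}
```

A real journal plugin gives the same guarantee at the storage level: a failed batch leaves no partial events behind.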
Hi,
While implementing a FlexiMerge we stumbled on the following issue:
override def initialState =
  State[T](ReadPreferred(p.priority, p.second)) {
    (ctx, input, element) =>
      if (input == p.priority) // always true
        ctx.emit(element)
Is if(input eq p.priority) also true?
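The distinction being asked about is standard Scala: == delegates to equals (structural equality for case classes), while eq compares references. A minimal, self-contained illustration (the Port class is made up for the example):

```scala
case class Port(name: String)

val a = Port("priority")
val b = Port("priority")
val c = a

// == is structural for case classes; eq is reference identity.
val structurallyEqual = a == b   // true
val sameReference     = a eq b   // false: distinct instances
val aliasedReference  = c eq a   // true: same instance
```

Whether eq also holds in the FlexiMerge case depends on whether the stage hands back the very same port instance, which this sketch alone cannot answer.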
On Mon, Apr 13, 2015 at 7:11 PM, Johannes Plapp johannes.pl...@gmail.com
wrote:
Hi,
While implementing a FlexiMerge we stumbled on the following issue:
override def initialState =
State[T](ReadPreferred(p.priority, p.second)) {
Hi Reza,
There is no acknowledgement support built in to streams, since they model a
unidirectional stream and acknowledgements need a backchannel. To add that
feature, the processing stages should be modeled as BidiFlows instead of
Flows. That also means
Restart of a node with the same hostname and port will be simplified by this
feature: https://github.com/akka/akka/issues/16726
In summary, how it is implemented: when a new uid is seen in a join attempt
we can down the existing member, and thereby the restarted node will be able
to join in a later retried join
On Sat, Apr 11, 2015 at 2:37 PM, olle.martens...@gmail.com wrote:
I don't think that this is an issue now since you can use roles and
spread out the cluster-singletons that way.
That the cluster-singleton can be made to only start on a node with a
specific role is perfectly clear from the
To verify that it is running with the right configuration you can try
setting this config property:
akka.log-config-on-start = on
http://doc.akka.io/docs/akka/2.3.9/scala/logging.html#Auxiliary_logging_options
/Patrik
On Fri, Apr 10, 2015 at 5:59 AM, Stefan Schmidt stsme...@gmail.com wrote:
On Sat, Apr 11, 2015 at 1:36 AM, Andrey Ilinykh ailin...@gmail.com wrote:
Thank you! It works. But there is still a slim chance something goes wrong
(for example, the java process crashed). What is the reason not to allow the
same actor system to join multiple times? As far as I understand each actor
Sorry, I may not have made clear which ConnectionException I was expecting.
I was talking about the ConnectionException in Akka streams which extends
StreamTcpException. It is defined in
It turns out that the problem was something different.
The server was producing chunks larger than the default value for
max-chunk-size (1m). The client aborted with
Failure(akka.http.model.EntityStreamException: HTTP chunk size exceeds
the configured limit of 1048576 bytes). I did
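For reference, the limit that was hit is configurable. Assuming an akka-http 1.0 milestone, the relevant setting looks like this (the 5m value is just an example, sized to the server's actual chunks):

```
akka.http.parsing.max-chunk-size = 5m
```

Raising the limit only papers over the mismatch, of course; the cleaner fix is for the server to emit smaller chunks.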
Hello!
I've recently been experimenting with architectures that use CQRS / Cluster
Sharding / Event Sourcing / DDD, and I have some beginner questions.
1) What is the best way to handle changing the structure of events as
business requirements (or refactorings) change over time?
As a brief
Hi,
I am new to Akka and had some fundamental questions regarding the
effective configuration of Akka in a production environment.
1. In general, how many actors are advisable per set of threads for x cores?
E.g., for 8 cores, are 24 threads and maybe 50 actors a reasonable number?
I know
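On the sizing question above: actors themselves are lightweight (a few hundred bytes each, so thousands per system is normal), and the thing to size to cores is the dispatcher's thread pool rather than the actor count. The default dispatcher's parallelism is configured like this (values illustrative):

```
akka.actor.default-dispatcher {
  fork-join-executor {
    # threads ~= cores * parallelism-factor, clamped to [min, max]
    parallelism-min = 8
    parallelism-factor = 3.0
    parallelism-max = 24
  }
}
```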
Hi all,
I am trying to consume a Json based API using the Http client system.
My code is the following :
```scala
implicit val system = ActorSystem("client-system")
implicit val materializer = ActorFlowMaterializer()
val host = "127.0.0.1"
val httpClient =
```