Does a journal continue to grow in size forever, even if you're using
snapshots? Presumably, once a snapshot is stored, the events preceding it are
no longer needed. Are they kept? If so, is it possible to purge them?
Hi Dan!
Event sourcing systems like to think in terms of "never delete anything". You
basically get auditing for free, as well as the power to replay and analyse
the entire history of your application.
In such systems snapshots are only used to make recovery faster, not to drop
the preceding data.
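To illustrate the point, here is a toy sketch in plain Scala (no Akka involved; all names are made up for illustration): recovery starts from the latest snapshot and replays only the events after it, so older events stop mattering for recovery speed but are still retained in the journal.

```scala
// Toy event-sourcing recovery (hypothetical, NOT the Akka API):
// a snapshot lets recovery skip replaying the events before it,
// but those events stay in the journal for auditing and full replay.
case class Snapshot(seqNr: Long, state: Int)

object Recovery {
  // events: (sequenceNr, delta) pairs; state is just a running sum here
  def recover(events: Seq[(Long, Int)], snapshot: Option[Snapshot]): Int = {
    val (state0, seq0) =
      snapshot.map(s => (s.state, s.seqNr)).getOrElse((0, 0L))
    events.collect { case (seq, d) if seq > seq0 => d }
      .foldLeft(state0)(_ + _)
  }
}
```

Recovering from a snapshot taken at sequence number 3 yields the same state as replaying everything from scratch; the snapshot only shortens the replay.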
Hi akka team,
Thx for the fast reply!
We do not have Await.result() calls. We do have some db calls, writing data
to hbase, however it is in small parts (~20 records each write) and we have
no more than 5-6 actors writing in parallel.
We did 2 modifications:
1. We changed our internal scheduler
Hi Oren,
1. We changed our internal scheduler to send the heartbeat directly to the log
> instead of sending the heartbeat as a message.
> 2. We changed our first-in-line actor to use PinnedDispatcher instead of the
> default one.
>
It should be the other way around: the actors doing the blocking calls
should b
Hi Diego,
I don't know if there are commercially supported 3rd party journals. There
is a list of contributions that you might want to look at:
http://akka.io/community/ maybe some of them are supported.
-Endre
On Wed, Oct 29, 2014 at 5:34 PM, Diego Alvarez Zuluaga <
diego.alvarez.zulu...@gmail.
Hi Bryn,
On Wed, Oct 29, 2014 at 8:56 PM, Bryn Keller wrote:
> Hi Folks,
>
> We have an application that needs to run in an environment where it is not
> allowed to listen on any new sockets. It can connect to remote servers, but
> it can't be a server itself.
>
> Is there any way to use Akka in
Hi Adam,
This might or might not be associated with the components of the stream
> itself; in my current scenario I have a sink which is a reactive tcp
> connection.
>
What do you mean by a reactive tcp connection here? Is it an Akka TCP
stream?
> I don't seem to be getting any errors (at leas
On Thursday, October 30, 2014 11:15:50 AM UTC+1, Akka Team wrote:
>
> Hi Adam,
>
> This might or might not be associated with the components of the stream
>> itself; in my current scenario I have a sink which is a reactive tcp
>> connection.
>>
>
> What do you mean by a reactive tcp connection
Hi Adam,
> I'm not running Windows (MacOS), but how could these signals be handled? I
> was looking for some handlers in the value returned by run() (e.g.
> onComplete, onError), but it seems not much is there :)
>
Which version of Akka Stream are you using and which version of the
scaladsl? How
Hi Endre
We did use the PinnedDispatcher for the potential "blockers", but it still
occurred.
We used only log printing within the scheduled task (we didn't send any
message), and it stopped.
I don't think it's a scheduler-specific bug, because it happens only when the
BIG shutdown phenomenon occurs.
Sure :) I'm using Streams 0.9, scaladsl2, and I want to add failure
detection to this:
https://github.com/adamw/reactmq/blob/master/src/main/scala/com/reactmq/Sender.scala
Adam
On Thursday, October 30, 2014 12:04:23 PM UTC+1, Akka Team wrote:
>
> Hi Adam,
>
>
>> I'm not running Windows (MacOS),
Hello all,
I have been looking into this
library https://github.com/sclasen/akka-persistence-dynamodb/ and I got
couple of questions both about the library and the Akka Persistence in
general.
One of the questions that I have is this:
Is persistence counter going to ever reset? As far I underst
Hi Saparbek,
If sequenceNr is a Long (64-bit) you'd have to generate 1 event per
nanosecond for 292 years before it overflows. If your system is still
running in 292 years and you are able to generate 1 event per nanosecond,
let us know :)
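The arithmetic behind that claim can be checked quickly in plain Scala (object and value names are my own):

```scala
// How long a signed 64-bit sequence number lasts at one event per nanosecond.
object SeqNrLifetime {
  val nanosPerYear: Double = 365.25 * 24 * 60 * 60 * 1e9
  // Long.MaxValue is 2^63 - 1, i.e. about 9.22e18 sequence numbers
  val yearsUntilOverflow: Double = Long.MaxValue / nanosPerYear // ~292 years
}
```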
On Thu, Oct 30, 2014 at 12:48 PM, Saparbek Smailov
wrote
For the journals themselves I guess not, but if you need support then Typesafe
offers it for the Akka part of your system, and each database vendor does so
for their own database.
Cassandra is a DataStax-supported thing, Mongo do their own support, etc. So
it seems like that's what you may have to look into?
— k
On
Hi Adam,
You can use broadcast to split out the stream going into the Sink, and then
attach to that side of the graph an OnCompleteSink which takes a function
callback: Try[Unit] ⇒ Unit which is called during normal and failure
termination.
-Endre
On Thu, Oct 30, 2014 at 12:31 PM, Adam Warski w
There's an OnCompleteDrain :) (btw. - sink, drain, subscriber - a lot of
names ;) )
How do I use broadcast, though? I can see it can be a vertex in the graph,
but how do I add it?
So I'll end up with something like this:
/---> sink1 (output
stre
> You can delete them by reacting to a snapshot success by issuing a
> deleteMessages(toSequenceNr = lastSeqNr).
If I only care about recovery and not auditing, are there any disadvantages to
doing that? Would it be considered an anti-pattern? I'm just evaluating and
considering my options at
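The trade-off being asked about can be modelled with a toy journal in plain Scala (made-up names, not the Akka API): after a snapshot at sequence number N is confirmed, deleting events up to N leaves recovery intact, but permanently gives up replay and auditing of the history before the snapshot.

```scala
// Toy sketch: purging journal entries covered by a confirmed snapshot.
// Recovery from (snapshot + remaining events) is unaffected; what is lost
// is the ability to replay or audit the events before the snapshot.
object JournalPurge {
  type Event = (Long, String) // (sequenceNr, payload)

  def purgeUpTo(journal: Vector[Event], snapshotSeqNr: Long): Vector[Event] =
    journal.filter { case (seq, _) => seq > snapshotSeqNr }
}
```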
Thanks, Endre.
I'm not quite clear on what you're saying - I understand Akka wants to work
in peer-to-peer mode, but I don't understand this part:
This does not mean that you cannot use Akka in a Client-Server setup
>> though, but you will need to implement your own client-server connection.
>> T
Ok thanks.
We already have TypeSafe Support. We're looking for (Cassandra, MongoDB)
Commercial Support.
I sent an email to all of the companies that are listed in the Cassandra
Wiki, but so far I just received a response from DataStax.
Does TypeSafe/akka-team have any recommendation on using Ca
Personally I'd rather use Cassandra, because it doesn't have a SPOF like the
master-slave architecture that MongoDB has.
But I have no experience with either of those databases :/
On Thursday, October 30, 2014 2:49:36 PM UTC-5, Diego Alvarez Zuluaga wrote:
>
> Ok thanks.
>
> We already have TypeSafe Support.
Hello everybody,
I want to add Akka to an existing Scala web application (not Play), and
I'm wondering how this could best be achieved. The goal for now is not to
build a fully distributed and fault-tolerant system, but to introduce
Akka at some point in the application and let its use grow, allowing