Hi Steffen,
As documented, Distributed Data has limitations with regard to data size.
The crucial thing is to split the data up into many top-level entries. I'd
suggest you try with 1000-1 top-level entries.
Even with delta-CRDTs the replicator must sometimes transfer the full state,
meaning that the message size for that full-state transfer still matters.
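The splitting advice above can be sketched as a stable bucketing scheme: instead of one huge top-level entry, each key is assigned to one of many small buckets, and each bucket becomes its own top-level entry (with Akka Distributed Data, its own `LWWMapKey[Long, Long]`). The bucket count and names here are assumptions, not anything from the original post:

```scala
// Sketch: shard one big logical map across many small top-level entries so
// each replicated delta / full-state message stays small. In Akka each
// bucket would back its own LWWMapKey[Long, Long](bucketName(key)).
object DDataSharding {
  val NumBuckets = 1000 // assumption: tune so each entry stays well under the message-size limit

  // Stable, non-negative mapping from an entry's key to its bucket.
  def bucketFor(key: Long): Int =
    ((key.hashCode % NumBuckets) + NumBuckets) % NumBuckets

  // Name of the top-level entry this key lives in (hypothetical naming scheme).
  def bucketName(key: Long): String =
    s"lwwmap-${bucketFor(key)}"
}
```

All reads and writes for a given key then go through `bucketName(key)`, so no single replicated entry ever grows past roughly 1/1000th of the total data.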
Just a thought (I've never had this exact problem): I would try an
experiment with cascading Source.unfoldResource[Async] stages.
1. The first unfold stage would extract the CSV headers. This would use a
CSV resource description that's little more than a FileReader.
2. The second unfold stage would then emit the data rows, parsed using the
header information extracted in the first stage.
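The two-phase idea can be sketched without the streams plumbing: phase 1 drains a variable-length header block from the reader, phase 2 then yields data rows from the *same* reader, which is exactly what two cascading `Source.unfoldResource` stages would wrap. The `isHeader` predicate is a hypothetical stand-in; real files need their own rule for where the header ends:

```scala
import java.io.BufferedReader

// Sketch of the cascading-unfoldResource idea as plain reader logic:
// phase 1 consumes the header lines, phase 2 reads the remaining rows
// from the SAME reader, so no line is read twice.
object CsvPhases {
  // Hypothetical rule for recognising header lines -- an assumption here.
  def isHeader(line: String): Boolean = line.startsWith("#")

  // Phase 1 (first unfold stage): drain the header block and return it.
  def readHeader(reader: BufferedReader): List[String] = {
    val headers = scala.collection.mutable.ListBuffer.empty[String]
    reader.mark(8192) // remember position before each speculative read
    var line = reader.readLine()
    while (line != null && isHeader(line)) {
      headers += line
      reader.mark(8192)
      line = reader.readLine()
    }
    reader.reset() // push the first data line back for phase 2
    headers.toList
  }

  // Phase 2 (second unfold stage): the remaining lines are data rows.
  def readRows(reader: BufferedReader): List[String] =
    Iterator.continually(reader.readLine()).takeWhile(_ != null).toList
}
```

In the streams version, `readHeader` runs in the first stage's resource-creation step and the header travels downstream alongside the still-open reader, which the second stage unfolds row by row.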
1) No. If you want this, you have to do it yourself as the result of a
SaveSnapshotSuccess message.
2) By default, the actor will be offered the most recent snapshot and then
events that are younger than the snapshot. If there's no suitable snapshot,
all events are replayed.
Check the documentation for details.
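Point 1 above can be sketched as follows. Journal entries are not pruned automatically; you trigger the deletion yourself once the snapshot is confirmed. In a real Akka `PersistentActor` this match lives in `receiveCommand` and the call is the actor's own `deleteMessages`; the `SnapshotMeta` and `Journal` types below are stand-ins, not Akka types:

```scala
// Sketch of snapshot-triggered journal cleanup. Stand-in types mirror the
// shape of Akka's SaveSnapshotSuccess(metadata) and deleteMessages(toSeqNr).
final case class SnapshotMeta(sequenceNr: Long)
final case class SaveSnapshotSuccess(metadata: SnapshotMeta)

trait Journal { def deleteMessages(toSequenceNr: Long): Unit }

class CleanupOnSnapshot(journal: Journal) {
  // In a PersistentActor's receiveCommand this would be:
  //   case SaveSnapshotSuccess(md) => deleteMessages(md.sequenceNr)
  def handle(msg: Any): Unit = msg match {
    case SaveSnapshotSuccess(md) => journal.deleteMessages(md.sequenceNr)
    case _                       => () // other messages are untouched
  }
}
```

Deleting only up to the confirmed snapshot's sequence number keeps recovery correct: the snapshot plus the surviving newer events still reconstruct the full state.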
Hi all,
I have the following problem: I need to parse CSV files where the header
lines are not a fixed number, i.e. one file could have a header with three
lines and another with five lines, for example.
At the same time I need to parse the header to extract the information I
need to parse the rest of the file.
Hi,
In the context of the Cassandra plugin, I have the following queries:
1) As I understand it, a consequence of snapshotting would be implicit
pruning of entries in the journal - is this correct?
2) If yes, then we would face the typical Cassandra tombstones for the pruned
entries - now, before the tombstones
Hi,
we are investigating Akka Distributed Data for storing a (Long -> Long)
mapping in an LWWMap.
We plan for up to ~1 million entries and expect a write load of a few
hundred to a few thousand entries per day; in contrast, we expect a read
load of a few hundred requests per second. Therefore, we
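For this use case the semantics you get from an LWWMap are worth keeping in mind: on concurrent writes to the same key, the value with the newest timestamp wins on every replica. Akka's LWWMap implements this internally; the pure model below just illustrates the merge behaviour (names like `Stamped` and `LwwModel` are mine, not Akka API):

```scala
// Pure model of last-writer-wins semantics for a (Long -> Long) map:
// each key keeps the value with the newest timestamp, so replicas that
// merge in either order converge to the same result.
final case class Stamped(value: Long, timestamp: Long)

final case class LwwModel(entries: Map[Long, Stamped] = Map.empty) {
  def put(key: Long, value: Long, ts: Long): LwwModel = {
    val keep = entries.get(key) match {
      case Some(old) if old.timestamp >= ts => old // older write loses
      case _                                => Stamped(value, ts)
    }
    LwwModel(entries + (key -> keep))
  }

  def get(key: Long): Option[Long] = entries.get(key).map(_.value)

  // Replica merge: per-key last-writer-wins, commutative by construction.
  def merge(other: LwwModel): LwwModel =
    other.entries.foldLeft(this) { case (acc, (k, s)) => acc.put(k, s.value, s.timestamp) }
}
```

Given the read-heavy, write-light profile described above, reads served from the local replica are cheap, while the LWW rule quietly resolves the rare conflicting writes.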