Hi, I'm interested in your experiences with trimming the Akka persistence journal in Cassandra. We would generate a fairly large amount of journal data (up to 2 TB or so per month), so we'd like to keep only two weeks' worth of data in Cassandra and archive older data in HDFS.
Is this a recommended approach? How would the deletion of older data work? As far as I can tell, the journal schema is not well suited to processing or deleting data based on time. I also found a message by Brice Figureau (https://groups.google.com/d/msg/akka-user/wOSZ2fW43ts/0k0CzyUrBAAJ) saying that tombstones for deleted messages can be a problem; would that be an issue if we weren't doing constant reads of the journal?

Thanks in advance for any insights,
Maris
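For context, the kind of trimming I have in mind would go through Akka Persistence's built-in `deleteMessages` API rather than touching the Cassandra tables directly. A rough sketch (the `ArchiveCompleted` message and the actor itself are made up for illustration; `deleteMessages`, `DeleteMessagesSuccess`, and `DeleteMessagesFailure` are the real Akka Persistence API):

```scala
import akka.persistence.{DeleteMessagesFailure, DeleteMessagesSuccess, PersistentActor}

// Hypothetical trigger, sent by whatever job has finished copying
// events up to the given sequence number into HDFS.
case class ArchiveCompleted(upToSeqNr: Long)

class ArchivingEntity extends PersistentActor {
  override def persistenceId: String = "example-entity-1"

  override def receiveRecover: Receive = {
    case _ => // rebuild state from remaining events / snapshots
  }

  override def receiveCommand: Receive = {
    case ArchiveCompleted(upToSeqNr) =>
      // Logical delete: the Cassandra journal plugin marks these
      // messages deleted, which is where the tombstone concern
      // from Brice's message comes in.
      deleteMessages(upToSeqNr)

    case DeleteMessagesSuccess(toSeqNr) =>
      // journal trimmed up to toSeqNr; safe to record progress

    case DeleteMessagesFailure(cause, toSeqNr) =>
      // deletion failed; retry or alert
  }
}
```

My assumption is that something like this would run per persistent actor once the HDFS copy is confirmed, but I don't know how well that interacts with Cassandra's tombstones at our data volume, hence the question.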