On Mon, Jul 11, 2016 at 08:15 AM, Dmitriy Setrakyan wrote:
> My answers are inline…
>
> On Sat, Jul 9, 2016 at 3:04 AM, Dmitriy Setrakyan <dsetrak...@apache.org>
> wrote:
>
> > Thanks Sasha!
> >
> > Resending to the dev list.
> >
> > D.
> >
> > On Fri, Jul 8, 2016 at 2:02 PM, Alexandre Boudnik <alexan...@boudnik.org>
> > wrote:
> >
> >> Apache Ignite is a great platform, but it lacks certain capabilities
> >> which are common in the RDBMS world, such as:
> >>
> >> - Consistent online backup of the data on the entire cluster (or of a
> >>   specified set of caches)
>
> I think you mean data center replication here. It is not an easy feature to
> implement, and so far has been handled by commercial vendors of Ignite,
> e.g. GridGain.
Apache Geode (incubating) added DC replication a couple of months ago, so I
don't see why Ignite shouldn't?

Cos

> >> - Hierarchical snapshots for a specified set of caches
>
> What do you mean by hierarchical?
>
> >> - Transaction log
>
> Why does Ignite need it for in-memory transactions?
>
> >> - Restore cluster state as of a certain point in time
>
> Given that such restorability may introduce lots of memory overhead, does
> it really make sense for an in-memory cache?
>
> >> - Rolling forward from a snapshot with the ability to filter/modify
> >>   transactions
>
> Same as above.
>
> >> - Asynchronous replication based either on log shipment or snapshot
> >>   shipment
> >>   -- Between clusters
>
> This is the same as data center replication, no?
>
> >>   -- Continuous data export to, let's say, an RDBMS
>
> Don't we already support it with our write-through feature to a database?
>
> >> It is also a necessity to reduce cold-start time for huge clusters
> >> with strict SLAs.
>
> What part are you trying to speed up here? Are you talking about loading
> data from databases?
>
> >> I'll put some implementation ideas in JIRA later on. I believe that
> >> this list is far from being complete, but I want the community to
> >> discuss the abovementioned use cases.
> >>
> >> --Sasha
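[Editor's note] The transaction-log and point-in-time-restore items Sasha proposes (and Dmitriy questions) can be sketched with a plain-JDK toy: every mutation is appended to an ordered log before it touches the cache, and any past state can be rebuilt by replaying the log up to a timestamp. The class and method names (`TxLogSketch`, `restoreAsOf`) are hypothetical; this is a single-node illustration of the mechanism, not Ignite's API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy sketch of a write-ahead transaction log with point-in-time restore.
// Hypothetical names; not Ignite code.
class TxLogSketch {

    // One logged operation: a timestamped put (value == null means remove).
    record LogEntry(long ts, String key, String value) {}

    private final List<LogEntry> log = new ArrayList<>();
    private final Map<String, String> cache = new HashMap<>();
    private long clock = 0;

    // Every mutation is appended to the log before it hits the cache.
    public void put(String key, String value) {
        log.add(new LogEntry(++clock, key, value));
        cache.put(key, value);
    }

    // Rebuild the state as of a logical timestamp by replaying the log.
    // A filter/transform hook here would give the "rolling forward with
    // modified transactions" behavior from the proposal.
    public Map<String, String> restoreAsOf(long ts) {
        Map<String, String> state = new HashMap<>();
        for (LogEntry e : log) {
            if (e.ts() > ts) break; // log is append-only, hence ts-ordered
            if (e.value() == null) state.remove(e.key());
            else state.put(e.key(), e.value());
        }
        return state;
    }

    public static void main(String[] args) {
        TxLogSketch s = new TxLogSketch();
        s.put("a", "1"); // ts 1
        s.put("b", "2"); // ts 2
        s.put("a", "3"); // ts 3
        Map<String, String> asOf2 = s.restoreAsOf(2);
        System.out.println(asOf2.get("a") + " " + asOf2.get("b")); // prints "1 2"
        System.out.println(s.restoreAsOf(3).get("a"));             // prints "3"
    }
}
```

Dmitriy's memory-overhead concern shows up directly here: the log grows without bound unless it is truncated at snapshot boundaries, which is why the proposal pairs the log with periodic snapshots.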