> On Mar 25, 2019, at 12:48 PM, Mike Rhodes <couc...@dx13.co.uk> wrote:
>
> On Wed, 20 Mar 2019, at 22:47, Adam Kocoloski wrote:
>>
>> ## Option 1: Queue + Compaction
>>
>> One way to tackle this in FoundationDB is to have an intermediate
>> subspace reserved as a queue. Each transaction that modifies a database
>> would insert a versionstamped KV into the queue like
>>
>> Versionstamp = (DbName, EventType)
>>
>> Versionstamps are monotonically increasing and inserting versionstamped
>> keys is a conflict-free operation. We’d have a consumer of this queue
>> which is responsible for “log compaction”; i.e., the consumer would do
>> range reads on the queue subspace, toss out duplicate contiguous
>> “dbname”:“updated” events, and update a second index which would look
>> more like the _changes feed.
>
> I couldn't immediately see how we would clear out older entries from this
> potentially very large queue. For example, the worker processing the queue to
> deduplicate might issue range deletes after processing each "batch". Is this
> simple enough to do?
>
> Mike.
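For concreteness, the consumer loop being discussed might look something like the sketch below. This is a simulation over an in-memory dict standing in for the FoundationDB queue subspace (monotonic versionstamp keys, range read of a batch, dedup of contiguous "updated" events, index update, then a range clear of the processed batch). The key layout, helper names, and batch size are illustrative assumptions, not the real fdb API or CouchDB's implementation.

```python
# Queue subspace: versionstamp -> (db_name, event_type).
# Python dicts preserve insertion order, so this mimics an ordered KV range.
queue = {}
changes_index = {}   # db_name -> latest versionstamp (the _changes-like index)
_next_vs = 0         # stand-in for FDB's monotonic versionstamps


def enqueue(db_name, event_type):
    """Conflict-free append: every write gets a fresh versionstamp key."""
    global _next_vs
    queue[_next_vs] = (db_name, event_type)
    _next_vs += 1


def compact(batch_size=100):
    """One consumer pass: range-read a batch, toss duplicate contiguous
    (db, "updated") events, update the index, then range-clear the batch."""
    batch = list(queue.items())[:batch_size]
    prev = None
    for vs, (db, event) in batch:
        if event == "updated" and (db, event) == prev:
            continue  # duplicate contiguous update for the same db; skip
        changes_index[db] = vs
        prev = (db, event)
    # "Range delete" the processed prefix so the queue stays bounded.
    for vs, _ in batch:
        del queue[vs]


enqueue("db_a", "updated")
enqueue("db_a", "updated")   # contiguous duplicate, compacted away
enqueue("db_b", "updated")
enqueue("db_a", "updated")   # not contiguous with the earlier db_a events
compact()
```

After this pass, `changes_index` holds the latest versionstamp per database and the queue is empty, which is the batched range-delete pattern Mike describes.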
Yes, that’s the (implicit) idea. It’s simple to implement; what’s less clear to me is how well the storage servers handle that deletion load. I think the “range clears are cheap” statement refers largely to the transaction management system rather than the storage servers.

Adam