I was going to say that you can build a stream layer on top of a BookKeeper ledger. Actually, there is a streaming package that isn't really maintained, but you could consider using it as a starting point. Mahadev Konar developed it some time ago.
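To make that concrete, here is a rough sketch of what the write side of such a stream layer could look like with the plain BookKeeper Java client. This is only an illustration of the general idea, not the API of the package mentioned above; the class name and the 512KB flush threshold are assumptions.

    import java.io.IOException;
    import java.io.OutputStream;
    import org.apache.bookkeeper.client.LedgerHandle;

    // Illustrative sketch only: buffers writes and flushes them to the
    // underlying ledger as entries that stay well under the 1MB entry limit.
    public class LedgerOutputStream extends OutputStream {
        private static final int FLUSH_THRESHOLD = 512 * 1024; // assumed chunk size
        private final LedgerHandle lh;
        private final java.io.ByteArrayOutputStream buffer = new java.io.ByteArrayOutputStream();

        public LedgerOutputStream(LedgerHandle lh) {
            this.lh = lh;
        }

        @Override
        public void write(int b) throws IOException {
            buffer.write(b);
            if (buffer.size() >= FLUSH_THRESHOLD) {
                flush();
            }
        }

        @Override
        public void flush() throws IOException {
            if (buffer.size() == 0) {
                return;
            }
            try {
                lh.addEntry(buffer.toByteArray()); // one ledger entry per flushed chunk
            } catch (Exception e) {
                throw new IOException("addEntry failed", e);
            }
            buffer.reset();
        }

        @Override
        public void close() throws IOException {
            flush();
            try {
                lh.close();
            } catch (Exception e) {
                throw new IOException("ledger close failed", e);
            }
        }
    }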
-Flavio

> On 14 Dec 2015, at 14:19, Lucas Bradstreet <[email protected]> wrote:
>
> Hi Sijie,
>
> Thanks for your helpful reply. My testing has shown that you're correct:
> writes above 1MB fail with a NotEnoughBookies exception. I must have tested a
> compressed version of the data previously.
>
> Onyx maintains a state changelog made of small state updates, and we wish to
> occasionally compact these ledgers into a new ledger with a single state
> entry. This would allow for fast recovery of the latest state when nodes
> fail. Therefore latency isn't a big issue. We may consider chunking the
> writes into multiple entries to deal with this limitation.
>
> Thanks again,
>
> Lucas
>
>
> On 12 Dec 2015, at 5:02 AM, Sijie Guo <[email protected]> wrote:
>
>> Lucas,
>>
>> I think there is a hard limitation on entry size, which is 1MB. Did you
>> successfully write entries that are larger than 1MB?
>>
>> In BookKeeper, the entry is the unit of durability. It potentially has latency
>> impacts, as it has to fsync the full 1MB to disk before acknowledging. If your
>> traffic consists of constant multi-MB entries, that's probably fine. If your
>> traffic is mixed with small entries, those small entries' add latency might be
>> affected. Other than that, I didn't see too many concerns.
>>
>> Do you have any test results to share with the community about adding MB-sized
>> entries?
>>
>> - Sijie
>>
>> On Thu, Dec 10, 2015 at 6:57 AM, Lucas Bradstreet <[email protected]> wrote:
>> Hi all,
>>
>> Does anyone have any experience with large ledger entry writes (in the
>> multiple-MB range)? My testing has shown that these writes appear to
>> work, but I'm interested in whether there are any operational concerns
>> I should be aware of.
>>
>> Thank you,
>>
>> Lucas
>>
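For the chunking approach Lucas mentions above, the read side could reassemble a compacted state snapshot that was written as multiple sub-1MB entries along these lines. This is a minimal sketch; the ledger id is passed in, and the digest type and password are placeholders that would have to match whatever was used when the ledger was created.

    import java.io.ByteArrayOutputStream;
    import java.util.Enumeration;
    import org.apache.bookkeeper.client.BookKeeper;
    import org.apache.bookkeeper.client.BookKeeper.DigestType;
    import org.apache.bookkeeper.client.LedgerEntry;
    import org.apache.bookkeeper.client.LedgerHandle;

    public class SnapshotReader {
        public static byte[] readSnapshot(BookKeeper bk, long ledgerId) throws Exception {
            // Digest type and password must match the ones used at ledger creation.
            LedgerHandle lh = bk.openLedger(ledgerId, DigestType.CRC32, "passwd".getBytes());
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            try {
                long lac = lh.getLastAddConfirmed();
                if (lac >= 0) {
                    // Entries come back in order, so concatenating them
                    // rebuilds the original multi-MB payload.
                    Enumeration<LedgerEntry> entries = lh.readEntries(0, lac);
                    while (entries.hasMoreElements()) {
                        out.write(entries.nextElement().getEntry());
                    }
                }
            } finally {
                lh.close();
            }
            return out.toByteArray();
        }
    }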
