On Fri, 25 Jan 2019, at 09:58, Robert Samuel Newson wrote:

Thanks for sharing this Bob, and also thanks everybody who shared their
thoughts too.

I'm super excited, partly because we get to keep all our Couchy
goodness, and partly because FDB brings some really interesting
operational capabilities to the table that you'd normally spend a
decade trying to build from scratch. The level of testing that has gone
into FDB is astounding[1].

Things like seamless data migration, expanding storage, and rebalancing
shards and nodes are, as anybody who's dealt with large or long-lived
CouchDB clusters knows, Hard Problems today.

There's clearly a lot of work to be done -- it's early days -- and it
changes a lot of less-visible things like packaging, dependencies, and
cross-platform support, and brings a markedly different operations
model -- but I'm most excited about the opportunities this opens up for
us at the storage layer.

Handling k/v items larger than what FDB allows is already covered in
the forums[2], and the approach is similar to how we'd query multiple
docs from a CouchDB view today using an array-based complex/compound
key:

[0, ..] would give you all the docs in that view under key 0

except that in FDB that query would happen for a single CouchDB doc,
with a range read stitching the pieces back together. As with multiple
docs, there are some traps around managing that atomically at the
higher layer.
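
To make that a bit more concrete, here's a rough sketch using the FDB
Python bindings -- the ("docs", doc_id, chunk) key layout and the chunk
size are made up for illustration, not anything we've decided:

    import fdb

    fdb.api_version(600)
    db = fdb.open()

    CHUNK = 100000  # FDB caps a single value at 100 KB, so split above that

    @fdb.transactional
    def write_doc(tr, doc_id, body):
        # Drop any chunks from a previous revision, then write the new body
        # in order under ("docs", doc_id, 0), ("docs", doc_id, 1), ...
        tr.clear_range_startswith(fdb.tuple.pack(("docs", doc_id)))
        for n, i in enumerate(range(0, len(body), CHUNK)):
            tr[fdb.tuple.pack(("docs", doc_id, n))] = body[i:i + CHUNK]

    @fdb.transactional
    def read_doc(tr, doc_id):
        # One range read returns every chunk in key order; join to rebuild
        prefix = fdb.tuple.pack(("docs", doc_id))
        return b"".join(kv.value for kv in tr.get_range_startswith(prefix))

    # e.g. write_doc(db, "mydoc", doc_bytes); read_doc(db, "mydoc")

Each helper runs as a single FDB transaction (that's what
@fdb.transactional gives you), so a doc is never observed half-written
-- which is where the "managing that atomically" part would actually
live in the higher layer.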

I'm sure there are many more things like this we'll need to wrap our
heads around!

Especial thanks to the dual-hat-wearing IBM folk who have engaged with
the community so early in the process -- basically at the napkin
stage[3].

[1]: https://www.youtube.com/watch?v=4fFDFbi3toc
[2]: https://forums.foundationdb.org/t/intent-roadmap-to-handle-larger-value-sizes/126
[3]: https://www.computerhistory.org/atchm/the-two-napkin-protocol/ --
the famous napkin on which BGP, the modern internet's backbone routing
protocol, was first sketched.

A+
Dave
