Hello,

MongoDB has a file-storage convention called GridFS, implemented by its drivers, that is intended to handle large files. Since MongoDB has a hard limit of 16 MB per document, GridFS transparently splits a file into fixed-size chunks (255 kB by default) on write and transparently reassembles it on read. Metadata is stored alongside the chunks, so it supports features such as range queries (very useful in video/audio streaming scenarios; CouchDB supports range queries too). More information is available on this page:
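To make the range-query point concrete, here is a small sketch (my own illustration, not MongoDB code) of why fixed-size chunks make byte-range requests cheap: a range maps directly to a contiguous run of chunk indices, so only those chunks need to be fetched.

```python
CHUNK_SIZE = 255 * 1024  # GridFS default chunk size

def chunks_for_range(start, end):
    """Map a byte range [start, end) onto chunk indices.

    Returns (first_chunk, last_chunk, offset_in_first_chunk), so a
    reader can fetch only chunks first..last and skip the rest.
    """
    first = start // CHUNK_SIZE
    last = (end - 1) // CHUNK_SIZE
    return first, last, start % CHUNK_SIZE
```

For example, a request for the second half of a video never touches the chunks holding the first half.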

https://docs.mongodb.com/manual/core/gridfs/

I was wondering if something similar could be built on top of FoundationDB, and whether such an approach would solve the current issues with large attachments. In particular, it could make replication easier, since only small chunks would need to be replicated, and it would be easier to resume replication at a particular chunk.
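As a rough sketch of what I have in mind (a plain dict stands in for an ordered key-value store such as FoundationDB; the key layout and names are hypothetical, not any existing CouchDB or FDB API), chunks could be keyed by (file_id, chunk_index) so a whole file comes back from one ordered range read, and replication could resume from the last chunk index seen:

```python
CHUNK_SIZE = 64 * 1024  # kept well under FoundationDB's 100 kB value limit

class ChunkStore:
    """Toy in-memory model of GridFS-style chunking on an ordered KV store."""

    def __init__(self):
        self.kv = {}  # sorted key iteration emulates an FDB range read

    def put(self, file_id, data):
        # Split the blob into fixed-size chunks keyed by (file_id, index).
        for i in range(0, max(len(data), 1), CHUNK_SIZE):
            self.kv[(file_id, i // CHUNK_SIZE)] = data[i:i + CHUNK_SIZE]

    def get(self, file_id, from_chunk=0):
        # Reassemble by scanning keys in order; from_chunk lets a
        # replicator resume mid-file instead of restarting.
        keys = sorted(k for k in self.kv
                      if k[0] == file_id and k[1] >= from_chunk)
        return b"".join(self.kv[k] for k in keys)
```

Each chunk write would also fit comfortably inside a single FDB transaction, which is what I suspect makes this approach attractive compared to storing one huge attachment value.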

MongoDB stores this data in a dedicated "collection", which is not the CouchDB way. My thinking was that this could be opt-in: in addition to a document being able to have an attachment, we could introduce a new entity called largeAttachment that uses such a driver behind the scenes, and users would choose how best to store their data based on their needs and the performance characteristics of each storage method (field, attachment, largeAttachment).

I am just wondering whether the idea is broadly feasible in the next FDB-based version, or whether there is an obvious showstopper or challenge that would need to be addressed first.

Thank you!

Reddy
