> 
> Hi Cedric,
> 
> a short answer: some of what you're trying to do has been traditionally 
> handled by object databases - multiple images retrieving and updating objects 
> on a central store ensuring lots of good properties: see Gemstone.

Yes, and I’d like to avoid this centralized approach.

> 
> Another answer, since you're looking at content-addressable distributed file 
> systems. You can resolve the offline mode with object duplications (duplicate 
> all objects in the images using them), and, when coming back online, have a 
> merge approach to reconcile changes between versions of shared objects. 
> Content-based hashes as identifiers in a distributed store have very nice 
> properties for that purpose, because, if image C has modified object 
> 0xcdaff3, then that object has in fact become object 0xee345d for that image 
> (and the unmodified object is still 0xcdaff3 for images A and B).

You nailed it. This is what I’d like to reproduce.
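
Concretely, the behaviour described in the quote can be sketched in a few lines of Python (the payloads are made up, and a real image would hash a canonical serialization of the object graph, but the mechanism is the same):

```python
import hashlib

def content_id(obj_bytes: bytes) -> str:
    # The identifier is derived purely from the content, so any
    # modification yields a new identifier. Truncated to 6 hex
    # digits here only to echo the 0xcdaff3-style ids above.
    return hashlib.sha256(obj_bytes).hexdigest()[:6]

store = {}

def put(obj_bytes: bytes) -> str:
    cid = content_id(obj_bytes)
    store[cid] = obj_bytes  # idempotent: same content, same key
    return cid

# Images A, B and C all hold the same object at first.
original = put(b"balance: 100")

# Image C modifies its copy: the result is a *new* object...
modified = put(b"balance: 120")

# ...while the unmodified object is still addressable for A and B.
assert modified != original
assert store[original] == b"balance: 100"
```

Because storing is idempotent, replicating the same object to several nodes can never conflict; only the mapping from "logical object" to "current hash" needs merging.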

Existing implementations out there seem to use whatever nodes are available on 
the network to replicate the information.

I’d like to control which nodes it is replicated to: my own nodes (all my app 
instances + the nodes of people I’m exchanging information with + possibly 
friends of friends).

What hash function would you use?

To get something compatible with IPFS, I’d need something like multihash: 
https://github.com/multiformats/multihash 
It looks quite universal to me, being self-describing. But any existing hash 
function suitable for content addressing would do the job.
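
For what it's worth, the multihash framing itself is tiny; here is a Python sketch (the 0x12 code for sha2-256 is taken from the multiformats registry table, and this simplified version assumes codes and lengths that fit in a single varint byte):

```python
import hashlib

# Multihash function code for sha2-256, per the multiformats table.
SHA2_256 = 0x12

def multihash(data: bytes, code: int = SHA2_256) -> bytes:
    # Self-describing digest: <function-code><digest-length><digest>.
    # Values below 128 encode as a single varint byte, which covers
    # sha2-256 (code 0x12, length 0x20).
    digest = hashlib.sha256(data).digest()
    return bytes([code, len(digest)]) + digest

mh = multihash(b"hello")
print(mh.hex()[:4])  # '1220' -> sha2-256, 32-byte digest
```

The two-byte prefix is what makes the identifier self-describing: a store can later switch hash functions without ambiguity about how an existing key was produced.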


> I wouldn't be against a slightly higher granularity when dealing with object 
> transfers, however.

You mean at the Pharo level? Does higher granularity mean having more control 
over the exchange/merge?

Cheers,

Cédrick
