2017-10-27 11:28 GMT+02:00 Cédrick Béler <cdric...@gmail.com>:

>
> Hi Cedric,
>
> a short answer: some of what you're trying to do has been traditionally
> handled by object databases - multiple images retrieving and updating
> objects on a central store ensuring lots of good properties: see GemStone.
>
>
> Yes, I’d like to avoid this centralized approach.
>

I mean that they already have some of the mechanisms in place, even if
centralized.


>
>
> Another answer, since you're looking at content-addressable distributed
> file systems. You can resolve the offline mode with object duplications
> (duplicate all objects in the images using them), and, when coming back
> online, have a merge approach to reconcile changes between versions of
> shared objects. Content-based hashes as identifiers in a distributed store
> have very nice properties for that purpose, because, if image C has
> modified object 0xcdaff3, then that object has in fact become object
> 0xee345d for that image (and the unmodified object is still 0xcdaff3 for
> images A and B).
>
>
> You nailed it. This is what I’d like to reproduce.
>
> Existing implementations out there seem to use whatever nodes are on the
> network to replicate the information.
>

Yes, because that makes them decentralized :)
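The scheme described above can be illustrated with a minimal sketch; the in-memory store and function names below are invented for illustration, assuming a JSON serialization of objects:

```python
import hashlib
import json

def content_address(obj):
    """Derive an address from the object's serialized content."""
    payload = json.dumps(obj, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()[:8]  # shortened for display

store = {}

def put(obj):
    addr = content_address(obj)
    store[addr] = obj  # idempotent: same content, same address
    return addr

# Images A and B both hold the original object.
original = {"name": "Cedrick", "role": "researcher"}
addr_ab = put(original)

# Image C modifies its copy: the modified object gets a NEW address,
# while the original address still resolves for images A and B.
modified = dict(original, role="developer")
addr_c = put(modified)

assert addr_c != addr_ab
assert store[addr_ab] == original
```

Because the address is a pure function of the content, storing the same object twice is a no-op, and a merge step only has to reconcile the two distinct addresses.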


>
> I’d like to control the nodes where it is replicated. My nodes (all my app
> instances + nodes of the people I’m exchanging information with + possibly
> friends of friends).
>

We've done recent work on capability-based content addressing, but it's very
slow, so you use a two-level cryptosystem: the encrypted header contains the
key to decrypt the content, and the header can only be decrypted if your
private key has the right level of capabilities on the data item.
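A toy sketch of that two-level structure, assuming invented names throughout; the XOR "cipher" here is only a stand-in to make the structure runnable, never a real encryption scheme:

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stand-in for a real cipher: XOR with a hash-derived keystream.
    Do NOT use for real secrecy; it only illustrates the structure."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def seal(content: bytes, capability_keys):
    """Encrypt the content once under a fresh content key, then wrap
    that key in one header entry per capability allowed to read it."""
    content_key = os.urandom(32)
    body = keystream_xor(content_key, content)
    header = [keystream_xor(cap, content_key) for cap in capability_keys]
    return header, body

def open_item(header, body, capability_key: bytes, expected_hash: bytes):
    """Try each header entry; only the right capability recovers a
    content key that decrypts to the expected content hash."""
    for wrapped in header:
        content_key = keystream_xor(capability_key, wrapped)
        content = keystream_xor(content_key, body)
        if hashlib.sha256(content).digest() == expected_hash:
            return content
    raise PermissionError("no capability matches")

secret = b"object payload"
alice, bob = os.urandom(32), os.urandom(32)
hdr, body = seal(secret, [alice])
digest = hashlib.sha256(secret).digest()

assert open_item(hdr, body, alice, digest) == secret
```

The content is encrypted only once however many readers there are; granting or revoking access touches the small header, not the (possibly large) body, which is what makes the two-level design practical despite the per-reader cost.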


>
> What hash function would you use ?
>

Anything that is fast (bonus points if it uses special CPU instructions,
which you can't use from Pharo, of course ;)) and has the right
cryptographic properties. Unless you go into a specific cryptosystem, I'd
say the exact choice is not important.


>
> To get something compatible with ipfs, I’d need something like:
> https://github.com/multiformats/multihash
> It looks quite universal to me, being self-describing. But any (existing)
> hash method compatible with content hashing would do the job.
>

Interesting, but it looks like a minor issue in the overall scheme. It makes
some of your system robust to evolution in the hash used; but, since the
first bytes are not well distributed, can you use it safely to build a
multi-hash function system? Probably not.
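The multihash layout itself is easy to produce; a sketch supporting just two functions (for codes and lengths below 128, the varints are single bytes, so plain bytes suffice here):

```python
import hashlib

# Registered multihash codes for two common functions.
SHA1 = 0x11
SHA2_256 = 0x12

def multihash(data: bytes, code: int = SHA2_256) -> bytes:
    """Encode a digest in the multihash self-describing format:
    <function code><digest length><digest>."""
    if code == SHA2_256:
        digest = hashlib.sha256(data).digest()
    elif code == SHA1:
        digest = hashlib.sha1(data).digest()
    else:
        raise ValueError("unsupported code in this sketch")
    return bytes([code, len(digest)]) + digest

mh = multihash(b"hello")
assert mh[0] == 0x12   # the hash function is identified first...
assert mh[1] == 32     # ...then the digest length, then the digest
```

Note that the leading code and length bytes are a constant prefix per hash function, which is exactly the poor distribution of the first bytes mentioned above.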


>
>
> I wouldn't be against a slightly higher granularity when dealing with
> object transfers, however.
>
>
> You mean at the pharo level ? Higher granularity means having more control
> on the exchange/merge ?
>

No, just that the content-based address scheme is costly... and that a
practical implementation would probably look to provide addresses only to
large enough entities (a page containing objects, for example, or a large
object containing smaller ones), so that you don't create an address for
each character of the string object describing the name of a person, say.
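A sketch of that page-level granularity, with invented names and assuming a JSON serialization: many small objects share one content address, and individual objects are reached by a (page address, local index) pair:

```python
import hashlib
import json

def address(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()[:8]  # shortened for display

class Page:
    """Group many small objects under ONE content address,
    instead of addressing each tiny string or character."""

    def __init__(self):
        self.objects = []

    def add(self, obj):
        self.objects.append(obj)
        return len(self.objects) - 1  # local index inside the page

    def freeze(self):
        payload = json.dumps(self.objects, sort_keys=True).encode("utf-8")
        return address(payload)

page = Page()
i = page.add({"name": "Cédrick"})
j = page.add({"name": "Thierry"})
page_addr = page.freeze()

# One hash for the whole page; objects are referenced as (page, index).
ref = (page_addr, i)
```

Only one hash is computed per page instead of one per object, which amortizes the cost of content addressing over all the small objects the page contains.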

Regards,

Thierry


>
> Cheers,
>
> Cédrick
>
