> On 13 Jun 2019, at 09:00, Marcus Denker <marcus.den...@inria.fr> wrote:
> 
> 
> 
>> On 13 Jun 2019, at 08:10, Konrad Hinsen <konrad.hin...@fastmail.net> wrote:
>> 
>> Stephan Eggermont <step...@stack.nl> writes:
>> 
>>>> All of http://files.pharo.org/ ? So how many GB is that?
>>> 
>>> It is only a few thousand changes per release. There is no reason why that
>>> shouldn’t compress well
>> 
>> Did anybody try?
>> 
>> In IPFS, files are cut into blocks of roughly 256 KB. Blocks shared
>> between multiple files are stored only once. So if changes in Pharo
>> images from one version to the next happen mainly in one or two specific
>> places (such as beginning and end), they would be stored efficiently on
>> IPFS without any effort.
>> 
>> But again, the best way to know is to try it out.
>> 
> 
> Indeed. Another thing to try: if someone has a local IPFS node running, would
> it cache only those images that this machine or nearby machines requested?
> 
> This way we could have one “full” copy, and caching (e.g. in the “African
> classroom” example) would happen automatically.
> 
> Of course the downside is that one needs to speak the IPFS protocol, thus
> running a client (e.g. the Go client)… so really transparent use would only
> be possible if Pharo could implement the protocol itself…
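
To make the block-dedup point quoted above concrete, here is a minimal
Python sketch. It assumes fixed-size 256 KB chunks and SHA-256 content
hashes; real IPFS uses a configurable chunker and multihash-based CIDs,
so this is only a toy model of the idea, not the actual protocol:

import hashlib

CHUNK_SIZE = 256 * 1024  # the rough block size mentioned above

class BlockStore:
    """Stores each distinct chunk exactly once, keyed by its content hash."""
    def __init__(self):
        self.blocks = {}  # content hash -> chunk bytes

    def add(self, data):
        """Add a file; return its 'recipe', the list of its chunk hashes."""
        recipe = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            h = hashlib.sha256(chunk).hexdigest()
            self.blocks.setdefault(h, chunk)  # a shared chunk is stored once
            recipe.append(h)
        return recipe

store = BlockStore()
# Two stand-in "images" that differ in only one 256 KB region:
image_v1 = b"".join(bytes([i]) * CHUNK_SIZE for i in range(4))
image_v2 = image_v1[:2 * CHUNK_SIZE] + b"\xff" * CHUNK_SIZE \
    + image_v1[3 * CHUNK_SIZE:]
store.add(image_v1)
store.add(image_v2)
print(len(store.blocks))  # 5 blocks, not 8: v2 reuses 3 of v1's 4 blocks

Note that fixed-size chunking only pays off when a change does not shift
the rest of the file; an insertion near the start would change every
later block. That is exactly why the “block friendly” image format
question below matters.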

There are many other ideas to explore (based on my partial knowledge)… more
brainstorming than anything else:

- Could we have an image format that would be “block friendly”?
- How could one move resources out of the image into IPFS?

- Content addressing (as it is done in Git, too) and objects: how can they mix?
        — Could one have “content pointers” instead of “memory pointers”?
          (See the sketch right after this list.)
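
On the “content pointers” question, here is a toy Python sketch of the
idea: objects reference each other by the hash of their serialized form
(a Merkle DAG, as in Git and IPFS) rather than by memory address. All
names here are invented for illustration; nothing like this exists in
Pharo today:

import hashlib
import json

store = {}  # content hash -> serialized object (a stand-in for IPFS)

def put(obj):
    """Serialize obj; its SHA-256 digest becomes its 'content pointer'."""
    data = json.dumps(obj, sort_keys=True).encode()
    pointer = hashlib.sha256(data).hexdigest()
    store[pointer] = data
    return pointer

def get(pointer):
    """Dereference a content pointer by looking its hash up in the store."""
    return json.loads(store[pointer])

# A tiny object graph: the point holds content pointers, not memory pointers.
x = put(3)
y = put(4)
point = put({"class": "Point", "x": x, "y": y})

p = get(point)
print(get(p["x"]), get(p["y"]))  # 3 4
assert put(3) == x  # equal content yields the same pointer, automatically

One consequence: a whole object graph is then named by the hash of its
root, so such objects could live in IPFS (or any content-addressed
store) directly, which also bears on the “move resources out of the
image” question above.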

I am sure there is much more…


        Marcus
