On Wed, Dec 4, 2013 at 2:13 AM, Aymeric Vitte <vitteayme...@gmail.com> wrote:

> OK for the different records, but just to check that I understand correctly:
> when you fetch {chunk1, chunk2, etc.} or [chunk1, chunk2, etc.], does it do
> anything other than keep references to the chunks and store them again with
> (new?) references, assuming you didn't do anything with the chunks?
>

I believe you understand correctly, assuming a reasonable[1] IDB
implementation. Updating one record with multiple chunk references vs.
storing one record per chunk really comes down to personal preference.

[1] A conforming IDB implementation *could* store blobs by copying the data
into the record, which would be extremely slow. Gecko uses references (per
Jonas); Chromium will as well, so updating a record with [chunk1, chunk2,
...] shouldn't be significantly slower than updating a record not
containing Blobs. In Chromium's case there will be extra book-keeping going
on but no huge data copies.
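
To make the comparison concrete, here's a rough sketch of the two layouts in
TypeScript. The "chunks"/"files" store names, the out-of-line keys, and the
helper names are just assumptions for illustration, not anything from this
thread:

// Sketch only: assumes two object stores created without keyPaths
// ("chunks" and "files"), so keys are passed explicitly to put().

// (a) One record per chunk, keyed by [fileId, chunkIndex].
function appendChunkAsRecord(db: IDBDatabase, fileId: string,
                             index: number, chunk: Blob): void {
  const tx = db.transaction("chunks", "readwrite");
  tx.objectStore("chunks").put(chunk, [fileId, index]); // out-of-line key
}

// (b) One record per file, holding an array of Blob references; each
// append re-reads the record and writes it back with one more entry.
function appendChunkToRecord(db: IDBDatabase, fileId: string,
                             chunk: Blob): void {
  const tx = db.transaction("files", "readwrite");
  const store = tx.objectStore("files");
  const req = store.get(fileId);
  req.onsuccess = () => {
    const chunks: Blob[] = req.result || [];
    chunks.push(chunk);
    // If Blobs are stored by reference (Gecko today, Chromium per the
    // above), this put should not copy the existing chunk data.
    store.put(chunks, fileId);
  };
}

Either way, with (b) each append still re-serializes a growing list of
references, so what scales with the number of chunks is bookkeeping rather
than data copying, assuming the by-reference storage described above.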



>
> Regards
>
> Aymeric
>
> On 03/12/2013 22:12, Jonas Sicking wrote:
>
>> On Tue, Dec 3, 2013 at 11:55 AM, Joshua Bell <jsb...@google.com> wrote:
>>
>>> On Tue, Dec 3, 2013 at 4:07 AM, Aymeric Vitte <vitteayme...@gmail.com>
>>> wrote:
>>>
>>>> I am aware of [1], and really waiting for this to be available.
>>>>
>>>> So you are suggesting something like {id:file_id, chunk1:chunk1,
>>>> chunk2:chunk2, etc}?
>>>>
>>> No, because you'd still have to fetch, modify, and re-insert the value each
>>> time. Hopefully implementations store blobs by reference so that doesn't
>>> involve huge data copies, at least.
>>>
>> That's what the Gecko implementation does. When you read a Blob from
>> IndexedDB and then store the same Blob again, that will not copy any
>> of the Blob data, but will simply create another reference to the
>> already existing data.
>>
>> / Jonas
>>
>
> --
> Peersm : http://www.peersm.com
> node-Tor : https://www.github.com/Ayms/node-Tor
> GitHub : https://www.github.com/Ayms
>
>
