Back in this thread lately too...

I still don't see how you intend to solve the use case I gave:

- load 'something' progressively and get chunks (should be possible via XHR, but it is not); handle it via File, appending to the related Blob (not possible), and store it progressively in IndexedDB (not possible either); or do the reverse: store it progressively and retrieve it from IndexedDB as an (incomplete) Blob that keeps growing while it is being stored (again, not possible)
- URL.createObjectURL(something) (will not work)

where 'something' has a size of 1 GB (video for example)
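As a sketch of what the current APIs force instead, here is the buffering pattern in plain JavaScript (the XHR/IndexedDB calls are elided; the in-place "append to a live Blob" step is exactly what is missing, so it only appears as a comment):

```javascript
// Hypothetical sketch: today a large download must be fully buffered
// before a Blob can be built; nothing can consume it while it grows.
function accumulateChunks(chunks) {
  const buffered = [];
  let received = 0;
  for (const chunk of chunks) {
    // Each chunk "arrives" (e.g. from xhr progress events) as a Uint8Array.
    buffered.push(chunk);
    received += chunk.length;
    // Here we would like: blob.append(chunk), with an already created
    // object URL seeing the new bytes -- not possible with today's APIs.
  }
  // Only once everything has arrived can the complete payload be assembled
  // (and only then handed to new Blob(...) / URL.createObjectURL).
  const out = new Uint8Array(received);
  let offset = 0;
  for (const chunk of buffered) {
    out.set(chunk, offset);
    offset += chunk.length;
  }
  return out;
}
```

For a 1 GB video this means holding the whole file in memory before anything can play, which is the whole point of the complaint above.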

That's equivalent to pointing your browser at:

file:///something or http://abcd.com/something, where 'something' is a file on your computer that is still being downloaded, or an incomplete file on a web site.

That has worked very well for a long time.

File, XHR and IndexedDB should handle partial data; I thought I understood from messages last year that this omission was clearly identified as a mistake.

Regards,

Aymeric

On 11/08/2014 13:24, Anne van Kesteren wrote:
> On Fri, Aug 8, 2014 at 5:56 PM, Arun Ranganathan <a...@mozilla.com> wrote:
>> I strongly think we should leave FileReaderSync and FileReader alone. Also note
>> that FileReaderSync and XHR (sync) are not different, in that both don’t do
>> partial data. But we should have a stream api that evolves to read, and it
>> might be something off Blob itself.
> Seems fair.


>> Other than “chunks of bytes” which needs some normative backbone, is the basic
>> abstract model what you had in mind? If so, that might be worth getting into
>> File API and calling it done, because that’s a reusable abstract model for
>> Fetch and for FileSystem.
> Yeah that looks good. https://whatwg.github.io/streams/ defines chunks
> and such, but is not quite there yet. But it is what we want to build
> this upon eventually.


--
Peersm : http://www.peersm.com
torrent-live: https://github.com/Ayms/torrent-live
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms
