On Tue, May 8, 2007 1:17 am, Derick Rethans wrote:
> On Mon, 7 May 2007, Lukas Kahwe Smith wrote:
>
>> Stanislav Malyshev wrote:
>> > > can write this data to disk. So, you need 20MB. If serialize (and
>> > > of course unserialize) would be able to write directly to disk
>> > > (or read directly from disk), you only need 10MB.
>> >
>> > Actually having serialize/unserialize be able to write directly to
>> > a stream and read directly from a stream might be interesting; it
>> > would probably improve working with things like large sessions or
>> > caching large data substantially.
>>
>> Indeed, especially since this is the most common use case. Maybe it
>> should optionally also return an md5 of the written data.
>
> If we're to add this, make sure writes to the files are atomic.

Is this suggesting that the entire 80M upload has to be done in a
single operation?...

Or is the md5/sha1 computed chunk by chunk, in parallel with writing
the buffered data to disk?

Cuz if it's the former, I don't see that working out too well for
ginormous uploaded files...

Which people probably shouldn't be doing over HTTP anyway, but they
do, and that's the reality one has to deal with...

Apologies if I'm being alarmist and totally mis-reading this through
my ignorance.

-- 
Some people have a "gift" link here.
Know what I want?
I want you to buy a CD from some indie artist.
http://cdbaby.com/browse/from/lynch
Yeah, I get a buck. So?

-- 
PHP Internals - PHP Runtime Development Mailing List
To unsubscribe, visit: http://www.php.net/unsub.php
