Matthew Toseland wrote:
> On Thursday 11 September 2008 00:04, pmpp wrote:
>> Matthew Toseland wrote:
>>> On Wednesday 10 September 2008 18:42, pmpp wrote:
>>>> Matthew Toseland wrote:
>>>>> On Saturday 06 September 2008 23:21, pmpp wrote:
>>>>>
>>>>>> hi, I have made a small CIFS experiment based on Alfresco jlan-4.0 
>>>>>> (GPL): pure Java on non-Windows, 2 JNI DLLs on w32/64. Known 
>>>>>> problem: CIFS IO will block on file copy/seeking until the CHK gets 
>>>>>> to 100% in Freenet, because no pseudo-RandomAccessFile access to the 
>>>>>> current download is available with FCP.
>>>>>>
>>>>> Nice! Any filesystem interface is going to have the last problem. It's 
>>>>> not something we can easily fix: we really don't want to fetch stuff 
>>>>> from the beginning, because then the later parts will fall out first. 
>>>>> A plugin for Explorer (etc.) to show progress would be a possibility.
>>>>>   
>>>>> ------------------------------------------------------------------------
>>>> Thanks for your interest.
>>>>  - IMHO a plugin to show progress is not possible at the CIFS protocol 
>>>> level, and is useless in a multimedia appliance anyway.
>>>>  - You said "because then the later parts will fall out first": not 
>>>> sure; nowadays "files" are rarely fetched from the beginning, 
>>>> especially with binary containers like MPEG 
>>>> audio/AVI/QuickTime/Matroska: these streams are indexed to allow 
>>>> seeking. So an application opens a few header blocks, then jumps 
>>>> straight to the end of the file to check bounds, and after that it's a 
>>>> complex matter of random access and prebuffering in the middle.
>>>>  Please note that in advanced video compression schemes, losing data 
>>>> (not keyframes) in the input stream is tolerated.
>>>>
>>>>  - "Easily fix": why fix it? If some don't want their data to be 
>>>> randomly fetchable, why not say so in the metadata? This type of 
>>>> access only makes sense for an indexed, uncompressed, streamable 
>>>> binary structure, so why not allow another splitfileType (e.g. with 
>>>> compression off by default and FEC at the user's discretion), 
>>>> dedicated to random streaming of (volatile, because unpopular within 2 
>>>> months) multimedia crap?
>>>>  I already did a RAF experiment on <16 MB CHKs; it was useful to add 
>>>> such a type because things won't break outside the testing code/node.
>>>>
>>>>  Freenet at first glance seems to be a distributed remote storage; it 
>>>> would be nice to have some concepts of a real usable storage device in 
>>>> it, even if I know it's basically only a distributed, unreliable LRU 
>>>> cache (I did the same experiment in the past with chained Squid and a 
>>>> Samba plugin after reading 
>>>> http://www.squid-cache.org/mail-archive/squid-dev/200002/0002.html).
>>> Most of the time when somebody reads the file they will read it 
>>> sequentially, because either:
>>> 1) They want to watch the movie/play the song from the beginning to the 
>>> end, or
>>> 2) They want to copy it to another medium.
>>>
>>> IMHO different parts of a file falling out at different rates is 
>>> unacceptable. The whole file should either survive or not. And of 
>>> course for old/unpopular files, it may take a very long time to fetch 
>>> the file: the download queue is the correct metaphor here, not the 
>>> instantly accessible storage medium.
>>>
>>>
>>   Sorry, maybe a misunderstanding: I didn't mean to modify the way the 
>> download queue works so as to fetch data from Freenet sequentially. I 
>> was just thinking about an FCP option giving the client access to the 
>> block currently pointed at by a pseudo-RandomAccessFile (say, a 1 MB 
>> buffer forward-bounding a long pointer) as soon as it's available from 
>> the node. That would be very appropriate, especially when nearly all 
>> buckets are already in the cache/datastore.
> 
> Files are divided up into 2MB segments. It might be possible to send each 
> segment to the client when the segment is done; IIRC I originally called 
> this (not yet implemented) feature "ReturnType=chunked". It hasn't been 
> implemented yet because there is no demand. However, we should still 
> fetch blocks from a file in random order (although once a block has 
> succeeded in a segment, we tend to fetch the other blocks in that 
> segment).
> 
> Is this acceptable?
> 
> 
  Well, it seems to be: on a file <= 2 MB, IO will not take more than 3 
seconds if the buckets are in the cache/store. But if the file is much 
larger than 2 MB, then it's likely to be - very - useful; otherwise, even 
if the buckets are cached, the client app could freeze for minutes... most 
apps don't expect timeouts on a CIFS file and treat \\ UNC paths as disk 
files!
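To make the blocking concern concrete, here is a minimal sketch (class and method names are hypothetical, assuming only the 2MB splitfile segment size stated in the thread) of the segment arithmetic: with a whole-file return, a read at any offset waits for every segment to complete, while per-segment delivery only has to wait for the segment covering that offset.

```java
// Hypothetical helper; only the 2MB segment size comes from the thread.
public class SegmentMath {
    static final long SEGMENT_SIZE = 2L * 1024 * 1024;

    // Index of the segment covering a given byte offset.
    static long segmentFor(long offset) {
        return offset / SEGMENT_SIZE;
    }

    // Total number of segments in a file of the given length (last may be partial).
    static long segmentCount(long fileLength) {
        return (fileLength + SEGMENT_SIZE - 1) / SEGMENT_SIZE;
    }

    public static void main(String[] args) {
        long fileLength = 700L * 1024 * 1024; // e.g. a 700 MiB video
        System.out.println("segments: " + segmentCount(fileLength));      // 350
        System.out.println("EOF probe hits segment " + segmentFor(fileLength - 1)); // 349
    }
}
```

So a media player's end-of-file probe on a 700 MiB file touches only segment 349; with whole-file return it would have to wait for all 350.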

"ReturnType=chunked" sounds good and could give a starting point for a 
jlan plugin (a CIFS/NFS/FTP bundle), and even some kind of basic HTTP 
streaming :)
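As a rough sketch of what such a plugin might do with per-segment delivery (everything here is hypothetical, not existing Freenet or jlan code): track segments as they complete, possibly out of order since the node fetches blocks randomly, and expose the contiguous prefix that a sequential reader, e.g. a basic HTTP streaming client, can consume without blocking.

```java
import java.util.BitSet;

// Hypothetical assembler for a per-segment ("chunked") download.
public class ChunkedAssembler {
    static final long SEGMENT_SIZE = 2L * 1024 * 1024; // per the thread
    private final BitSet done;      // which segments have fully arrived
    private final long fileLength;
    private final int segments;

    ChunkedAssembler(long fileLength) {
        this.fileLength = fileLength;
        this.segments = (int) ((fileLength + SEGMENT_SIZE - 1) / SEGMENT_SIZE);
        this.done = new BitSet(segments);
    }

    // Called when the node reports a segment complete (any order).
    void segmentDone(int index) {
        done.set(index);
    }

    // Bytes readable sequentially from offset 0 without blocking.
    long contiguousFromStart() {
        int firstMissing = done.nextClearBit(0);
        if (firstMissing >= segments) return fileLength; // everything arrived
        return firstMissing * SEGMENT_SIZE;
    }
}
```

A CIFS read inside the contiguous prefix could then return immediately; reads beyond it would have to block (or time out), which is exactly the behaviour that must stay short for CIFS clients.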

Note: for good performance in a media appliance, it may be useful to 
prefetch the last 2MB segment ASAP, to fake to the client that the "whole 
file" is present. I think most media players don't really "read" the last 
bytes of the file, but most of them check the end of the file at opening.
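That prefetch heuristic could look like this minimal sketch (hypothetical, not actual node policy): request the first and last segments before the middle ones, so both the open-time header read and the end-of-file probe succeed quickly.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical fetch-priority helper: first and last segments jump the queue.
public class PrefetchOrder {
    static List<Integer> order(int segmentCount) {
        List<Integer> order = new ArrayList<>();
        if (segmentCount <= 0) return order;
        order.add(0);                                       // container headers
        if (segmentCount > 1) order.add(segmentCount - 1);  // EOF / index probe
        for (int i = 1; i < segmentCount - 1; i++)          // then the middle
            order.add(i);
        return order;
    }
}
```

For a 5-segment file this yields the order 0, 4, 1, 2, 3: the two segments a media player touches at open time arrive before any of the middle.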

