On Tue, 30 Aug 2005 14:36:29 +0100, Matthew Toseland wrote:
> On Tue, Aug 30, 2005 at 11:43:10AM +0100, Ian Clarke wrote:
>> What, if anything, prevents FCP from being backwards compatible? It
>> would be a shame if 3rd party apps like Frost had to be completely
>> redesigned unnecessarily.
>
> Everything. People will not have to redesign Frost completely, they will
> simply have to change the library it uses a bit - or the classes that
> deal with FCP.
>
> Specifics:
> - In Freenet 0.7, we use one connection per client, and mux the results.
>   This will not significantly complicate matters, but it will change the
>   code needed for parsing.
> - Request rate limiting. Any request submitted can be queued
>   indefinitely. The client can query the current status, and will in any
>   case be informed when it changes. Timeouts and automatic retries are
>   strongly discouraged and will be thwarted in any case by request
>   coalescing.
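For what it's worth, the muxing change needn't be painful on the client side. Here is a minimal sketch of how a client library could demultiplex responses on a single connection - the line-based framing (message name, Field=Value pairs, "EndMessage" terminator) and the "Identifier" field are my assumptions about the new protocol, not anything settled:

```python
# ASSUMED message framing: name line, Field=Value lines, "EndMessage".
# The "Identifier" field (one per outstanding request) is hypothetical.

def parse_messages(lines):
    """Yield (name, fields) tuples from an iterable of text lines."""
    name, fields = None, {}
    for line in lines:
        line = line.strip()
        if not line:
            continue
        if name is None:
            name = line
        elif line == "EndMessage":
            yield name, fields
            name, fields = None, {}
        elif "=" in line:
            key, value = line.split("=", 1)
            fields[key] = value

def demux(lines, handlers):
    """Route each parsed message to the per-request handler keyed by its
    Identifier field, so many requests can share one connection."""
    for name, fields in parse_messages(lines):
        handler = handlers.get(fields.get("Identifier"))
        if handler:
            handler(name, fields)
```

So Frost would only need to swap its per-request socket handling for one parser loop like this, plus a table of outstanding requests.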
So, what happens when you get a DNF? Will the node keep retrying forever
(a very bad idea, since both edition freesites and Frost would inevitably
cause plenty of DNFs even if Freenet had perfect routing - another reason
to implement updatable keys), or will the client have to retry by itself?

> - ALL splitfile handling, and at least most metadata handling, is done by
>   the node. The client no longer needs to implement its own splitfile
>   code, and in particular it no longer needs to call the FCP FEC API which
>   has been widely criticised as a monstrosity. It simply tells the node
>   that it wants a given file, whether it wants it written to disk or
>   streamed to the client, and how much status information it requires.

Can the client request that the file be streamed out of order? That is,
can a client request that each segment of the file be streamed to it as
soon as it is decoded/reassembled, together with a number giving the
offset of that piece of data in the original file? This is required to
allow streaming large files to clients without taking up a tremendous
amount of disk space for temporary files. Streaming is preferred because
it is guaranteed to work (the client was able to connect to Fred, after
all), whereas it cannot be guaranteed that the client and Fred share any
writable disk space - they could be separate machines on a home LAN,
Fred could be chroot-jailed, or whatever.

I'm assuming here that FEC still works (for large files) by dividing the
file into segments and FECcing each segment separately.

> Then the node downloads it, optionally continuing even after the client
> is terminated, and the node is restarted. Progress information is sent

How long does the node keep trying? Suppose part of the file is simply
not available - will the node try forever? And how long does it keep
large completed downloads around if no client asks for them? After all,
a buggy client could very well forget it ever asked the node for the
file.
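To be concrete about the out-of-order streaming I'm asking for above: if each segment arrives tagged with its offset, the client only has to buffer the segments that arrive ahead of the contiguous prefix, and can hand data to the application in file order as the gaps fill in. A sketch (the (offset, data) framing is my assumption, not the protocol):

```python
# Consume segments streamed out of order, each tagged with its byte
# offset in the original file. Only segments that arrived ahead of the
# contiguous prefix are buffered; everything else streams straight out.

def stream_in_order(chunks):
    """chunks: iterable of (offset, data) pairs in arrival order.
    Yields data in file order as soon as it becomes contiguous."""
    pending = {}       # offset -> data, for pieces received early
    next_offset = 0    # start of the gap we are waiting to fill
    for offset, data in chunks:
        pending[offset] = data
        while next_offset in pending:
            piece = pending.pop(next_offset)
            next_offset += len(piece)
            yield piece
```

Worst case this buffers one segment's worth of out-of-order blocks in memory, which is still far cheaper than a full temp file on disk.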
Maybe it would be a good idea to add a "You have downloaded files
waiting" link to the main page of the web interface?

> to the client when things happen, depending on the level of information
> asked for. A simple client can send a command, and then simply wait
> until the file is returned. A more complex client can queue a request
> and provide the user with continually updated progress bars for each
> queued file. The client in either case does not need to know about, for
> example, hierarchical metadata (the metadata for a splitfile itself so
> big that it has to be split), which will be required for largish files.
>>
>> Ian.
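P.S. The simple-vs-complex client distinction quoted above seems easy to support with one message stream. A sketch, assuming already-parsed (name, fields) messages - the message names ("SimpleProgress", "DataFound") are my guesses, not the real protocol:

```python
# Two client styles over the same hypothetical message stream.
# "SimpleProgress" and "DataFound" are assumed message names.

def wait_for_file(messages):
    """Simple client: ignore everything until the final message."""
    for name, fields in messages:
        if name == "DataFound":
            return fields
    return None

def watch_progress(messages, on_progress):
    """Complex client: report each progress update, then the result."""
    for name, fields in messages:
        if name == "SimpleProgress":
            on_progress(int(fields["Succeeded"]), int(fields["Total"]))
        elif name == "DataFound":
            return fields
    return None
```

The simple client just discards the progress messages it didn't ask for, so the node can default to sending them without breaking anyone.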
