On Sat, Jul 19, 2003 at 09:46:31PM -0500, Tom Kaitchuck wrote:
> Browsing Freenet just after the date rollover can be very annoying. It is 
> almost unusable for a period of time. Many DBR sites don't get their new 
> editions out soon enough, or don't have enough time to distribute them 
> before people start requesting the new versions. This is in part due to 
> variations in different people's clocks. Worse, a request that failed 
> because the content is not yet widely distributed will sit in the failure 
> table of various nodes for some time.
> 
> Ways this could be improved from the user's perspective:
> 1. Warn users to make sure their date/time is set correctly.

Added to README.
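
For anyone wondering why the clock matters so much: the DBR target key is
derived from the requester's clock, so a clock running a few minutes fast
asks for the next edition before anyone has inserted it. A rough Java
sketch of the idea - the one-day increment is the usual default, but the
exact hex period-prefix format here is my assumption, not necessarily
what Fred actually does:

    import java.util.Date;

    // Rough sketch, not Fred's actual code: a DBR target key name is
    // derived from the requester's clock. Assumes the usual one-day
    // increment and a hex period-start prefix (format is an assumption).
    public class DbrKey {
        static final long INCREMENT = 86400; // seconds in one period (a day)

        static String targetName(String docName, long nowSecs, long offset) {
            long periodStart =
                ((nowSecs - offset) / INCREMENT) * INCREMENT + offset;
            return Long.toHexString(periodStart) + "-" + docName;
        }

        public static void main(String[] args) {
            long now = new Date().getTime() / 1000;
            // A clock five minutes fast crosses into the next period
            // early, requesting an edition nobody has inserted yet.
            System.out.println(targetName("mysite", now, 0));
            System.out.println(targetName("mysite", now + 300, 0));
        }
    }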
> 2. Tell site owners to insert before the last minute.

They should.
> 3. Have tools that insert DBR sites use a high HTL.

With current routing, they should. If routing works, there should be no
"propagation" - data inserted at a reasonable HTL should be available
with one try at a reasonable HTL.

> 4. If a site fails, automatically try altdbrurl BEFORE prompting the user.
> 5. If we are using altdbrurl or a previous dated copy, make that apply to the 
> ENTIRE SITE. There is nothing worse than getting an error because today's 
> edition cannot be found, clicking on a link, and being told the same thing 
> again.

Don't do that. altdbrurl would take you to yesterday's edition, and you
would never see today's edition. Doing this automatically would be
really annoying.
> 
> More invasive changes:
> 6. Eliminate the random first step in routing. (Introduce other mechanisms to 
> replace its functionality.) This way the queries from the local node can be 
> routed normally (querying the best host, then retrying with the next best, 
> etc.). Although we probably shouldn't time out on our own node :)

Umm, no. Random first hop was introduced to avoid the network forking...
anyway, how would this improve getting DBRs?

> 7. To eliminate unnecessary network requests, don't retry automatically. Most 
> failed requests are caused by data that does not exist. If the user wants it, 
> they can click retry.

Very many failed requests are not "honest" DNFs: even if routing worked
perfectly, there could have been a problem with the node we routed to, so
some automatic retrying is justified. A separate question is whether the
initial HTL should be the maximum.
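
To make that concrete, here is a sketch of the alternative: retry by
escalating HTL rather than repeating the identical request. fetch() is a
hypothetical stand-in for the node's request API, and the ladder values
are illustrative, not real defaults:

    // Sketch only: fetch() is a hypothetical stand-in for the node's
    // request API (returns null on a DNF); the HTL ladder values are
    // illustrative, not actual defaults.
    public class EscalatingFetch {
        static byte[] fetch(String key, int htl) { return null; } // stub

        static byte[] fetchWithEscalation(String key) {
            int[] htlLadder = { 10, 15, 25 };
            for (int htl : htlLadder) {
                byte[] data = fetch(key, htl);
                if (data != null)
                    return data;
            }
            return null; // ladder exhausted; leave further retries to the user
        }

        public static void main(String[] args) {
            System.out.println(fetchWithEscalation("KSK@example") == null
                    ? "DNF after full ladder" : "found");
        }
    }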

> 8. So that the user does not have to click retry, make the initial HTL 
> proportional to how likely we believe the data is to be on the network. For 
> example, pages linked to from the main page should automatically be given the 
> maximum initial default HTL. (We are prefetching these anyway, as I recall.) Links 

Why? Hrrm, on second thoughts, maybe we should make the bookmark links HTL
25 (a sketch of this tiered-HTL idea follows after item 9 below).

> that go to CHKs/one-shot SSKs/CHK images should get a high initial default 
> HTL. Finally, links to future-edition sites, NIMs, and KSKs should get a low 
> initial default HTL. Note: the differences between the various HTLs should 
> not be huge, otherwise the scheme might become too self-reinforcing.
> 9. Improve routing ;)

In progress... :)
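
As promised above, a minimal sketch of item 8's heuristic. The categories
come straight from the proposal; the concrete HTL values are illustrative
and deliberately kept close together, per the self-reinforcement caveat:

    // Sketch of item 8: choose the initial HTL from how likely the
    // link target is to exist. Values are illustrative and kept close
    // together so the scheme doesn't become too self-reinforcing.
    public class InitialHtl {
        enum LinkKind {
            MAIN_PAGE_LINK,        // linked from the main page: maximum
            CHK_OR_ONESHOT_SSK,    // CHKs, one-shot SSKs, CHK images: high
            FUTURE_EDITION_NIM_KSK // future editions, NIMs, KSKs: low
        }

        static int initialHtl(LinkKind kind) {
            switch (kind) {
                case MAIN_PAGE_LINK:     return 25;
                case CHK_OR_ONESHOT_SSK: return 20;
                default:                 return 15; // FUTURE_EDITION_NIM_KSK
            }
        }

        public static void main(String[] args) {
            for (LinkKind k : LinkKind.values())
                System.out.println(k + " -> " + initialHtl(k));
        }
    }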

-- 
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.
