On Sat, Jun 14, 2003 at 12:10:31AM -0500, Tom Kaitchuck wrote:
> On Friday 13 June 2003 10:22 pm, fish wrote:
> > I just don't think that they should be >1meg,
> 
> Uh, just to clarify you mean >1meg right?
> 
> I agree with you. It doesn't seem to me that there are many nodes accepting 
> blocks >1meg anyway. If you are combining it into a container only to break 
> it up into chunks, I don't see the gain.
> 
> However, this could be implemented at the application level by combining all 
> the SMALL images (aside from the site's thumbnail) into one file. But leave 
> the HTML alone; this way each page is still independently referenceable and 
> unnecessary redundancy is eliminated. 
> 
> A trick that might be nice would be to try to get all the small images to 
> combine to a nice even size. That way you pad less.
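
The packing trick described above could look something like this. This is only an illustrative sketch: the 32 KiB target size and the first-fit-decreasing strategy are my assumptions for the example, not anything Freenet actually does.

```python
# Illustrative sketch: group small image files into containers close to a
# target size, so little padding is wasted. First-fit decreasing: place
# each file (largest first) into the first container it fits in.
# NOTE: the 32 KiB target is a made-up example figure.
TARGET = 32 * 1024

def pack(sizes, target=TARGET):
    """Group file sizes into bins of at most `target` bytes each.

    Returns a list of bins; each bin is a list of sizes. A file larger
    than `target` gets a bin of its own (it would be split anyway).
    """
    bins = []
    for size in sorted(sizes, reverse=True):
        for b in bins:
            if sum(b) + size <= target:
                b.append(size)
                break
        else:
            bins.append([size])
    return bins

# Hypothetical image sizes in bytes:
images = [1200, 18000, 4096, 9000, 700, 15000, 2500]
containers = pack(images)
# Total padding needed to round every container up to TARGET:
padding = sum(TARGET - sum(b) for b in containers)
```

The greedy largest-first ordering tends to leave the small files to fill in the gaps, which is exactly the "pad less" effect suggested above.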
> 
> Also, your comment about size and low-bandwidth users got me thinking: Is there 
> some way to have nodes bias the size of the data they store based on their 
> available bandwidth? And if so, how could routing, load distribution, and 
> utilization of low-bandwidth nodes be improved based on this?

No, that information is not available to routing. If large chunks are a
problem, we should impose a limit - we do not want only a few nodes to
cache a really large chunk.
> 
> Anyway that is the start of a whole new thread, so I'll post it to tech@ once 
> I think it through a little more.
-- 
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
GPG key lost in last few weeks, new key on keyservers
ICTHUS - Nothing is impossible. Our Boss says so.
