On Friday 01 Aug 2003 11:10, [EMAIL PROTECTED] wrote:

> then perhaps i misunderstood the approach.
>
> i believed the container compression has been cleared by introducing the
> code from fish; supporting jar and standard zip files?

Now I'm confused as well. That was my understanding, too.

> i thought that the discussion bzip2/gzip was started not for discussing the
> container compressor, but an *additional* *transparent* compression step
> while inserting through FCP:

That was my impression, too.

> some data the user wants to insert
>
>       V
> insertion tool (fishtools/FIW/...)
>
>       V
> normal oldsk00l FCP insert request triggered by insertion tool
>
>       V
> node input buffers
>
>       V
> node determines if data is suitable for compression (like html/txt/...; not
> already well compressed data like mp3/zip/...) preselected by mimetype or
> brute force
>
>       V
> compress if suitable
>
>       V
> upload into freenet (compressed data if compressed data has a smaller log2
> size than the original data; original data if no gain)
>
>       V
> return CHK@

That is precisely what my understanding was.
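The "compress if suitable" step in the flow above could be sketched roughly like this. This is only an illustration, not the node's actual code: the function name, the choice of zlib, and the return convention are all my own assumptions here.

```python
import zlib

def maybe_compress(original: bytes) -> tuple[bytes, bool]:
    """Hypothetical sketch of the node's decision step:
    compress the payload, but keep the compressed form only
    if it is actually smaller than the original."""
    compressed = zlib.compress(original, 9)
    if len(compressed) < len(original):
        return compressed, True   # gain: upload compressed data
    return original, False        # no gain: upload original data

# Example: repetitive text compresses well, so the compressed
# form is kept; tiny or already-compressed data is left alone.
payload, was_compressed = maybe_compress(b"some html text " * 100)
```

The same size comparison also covers the "brute force" case mentioned above: already well-compressed input (mp3/zip) simply comes out larger and is uploaded unchanged.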

> the problem i had here was the returned CHK hashkey.
> would it be the hash of the original data or of the compressed data?

Umm... Interesting question. My understanding is that the hash has to match 
the data as it is actually stored in the network. Therefore, it would be the 
hash of the compressed data.

> if it's the normal hash, then
> + 3rd party tools can recalculate the hash value by themselves
> - freenet does not know where to search for the data, because the hash of
> the original is != the compressed hash which we're looking for

Agreed, I don't think that would work. But I could be wrong.
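To make the problem concrete: a CHK is (roughly) derived from a hash of the stored bytes, so hashing the original gives a different key than hashing the compressed form, and a request keyed on the original's hash would never find the compressed data. A toy illustration, where `mock_chk` is a made-up stand-in for the real CHK derivation:

```python
import hashlib
import zlib

def mock_chk(stored: bytes) -> str:
    # Hypothetical stand-in for a real CHK: just a hash of the
    # bytes as they would actually sit in the network.
    return hashlib.sha1(stored).hexdigest()

original = b"some html text " * 100
compressed = zlib.compress(original, 9)

# The key derived from the original bytes does not match the key
# derived from the compressed bytes, so routing by the original's
# hash cannot locate the compressed data.
assert mock_chk(original) != mock_chk(compressed)
```

(Freenet's real CHK construction involves more than a bare SHA-1, but the mismatch argument only depends on the hash covering the stored bytes.)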

> if it's the compressed hash, then
> + the freenet protocol would know where to look for the data in the
> freenet, grab it, detect it's precompression, extract the original data and
> return the data just like it went into the insertion tool
> - 3rd party tools
> would have to emulate the node's compression step, killing portability and
> future compatibility, if they want to calculate the CHK hash by themselves

Is there any particular reason why the tool would do that on its own? The 
node already has to do it regardless (am I correct in thinking that?), so 
doing it twice is wasteful.

> but both would add additional obfuscation to the protocols, which is bad
> for obvious reasons
> 
> i'm glad it was just a discussion about the container compressor.... :)

I don't think it was...

Gordan
_______________________________________________
devl mailing list
[EMAIL PROTECTED]
http://hawk.freenetproject.org:8080/cgi-bin/mailman/listinfo/devl
