>> >I understand where you are coming from, and at the moment I am fairly
>> > evenly split between zip and bzip2.
>>
>> please try to stick to plain stupid zip if you have to.
>> not only because it's been around with the jre for some time, so we can
>> assume it's mostly bug-free, but mainly because i worry about 3rd party
>> freenet nodes.
>
>It would not affect 3rd party nodes at all, because the compression is a 
>matter of the client-side protocol: in order to extract the data from the 
>network, all clients would have to support the (de)compression algorithm. 
>Therefore either everybody uses it or nobody uses it. There is little scope 
>for compromise in between, because compressed files will not work for 
>people whose node lacks the codec. So, IMHO, it comes down to a choice 
>between better compression and the convenience of a built-in library.
>
>Gordan

>It's not in the node. It's in the client code. Which can work over FCP.
>So you can reimplement the node without reimplementing the client code.
>So it's no big deal.
[Toad]

then perhaps i misunderstood the approach.

i believed the container compression question had been settled by introducing the 
code from fish, which supports jar and standard zip files?

by the way, you are using jar-specific classes to access the container contents:
+           JarInputStream myJis=new JarInputStream(myIs);
+           String containerMeta="metadata";
+           // because we don't have access to a File object,
+           // we have to skip through until we hit our filename...
+           JarEntry ent=null;
but you also allow the zip mimetype for container digging... is it safe to use 
JarEntry and JarInputStream on plain zip files?!
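
as far as i can tell, yes: JarInputStream extends java.util.zip.ZipInputStream (and 
JarEntry extends ZipEntry), so it reads plain zip archives too; getManifest() simply 
returns null when there is no META-INF/MANIFEST.MF entry. a minimal sketch (class 
name and the file argument are mine, not from the patch):

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.jar.JarEntry;
import java.util.jar.JarInputStream;

// lists the entries of an archive that may be either a plain zip or a jar
public class ZipOrJarList {
    public static void main(String[] args) throws IOException {
        InputStream is = new FileInputStream(args[0]);
        JarInputStream jis = new JarInputStream(is);
        // null for a plain zip, the manifest object for a real jar
        System.out.println("manifest: " + jis.getManifest());
        JarEntry ent;
        while ((ent = jis.getNextJarEntry()) != null)
            System.out.println(ent.getName());
        jis.close();
    }
}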

---~~---

i thought the bzip2/gzip discussion was started not about the container compressor, 
but about an *additional*, *transparent* compression step while inserting through 
FCP:


some data the user wants to insert
        |
        V
insertion tool (fishtools/FIW/...)
        |
        V
normal oldsk00l FCP insert request triggered by insertion tool
        |
        V
node input buffers
        |
        V
node determines whether the data is suitable for compression (e.g. html/txt/...; not 
already well-compressed data like mp3/zip/...), preselected by mimetype or by brute force
        |
        V
compress if suitable
        |
        V
upload into freenet (the compressed data if it has a smaller log2 size - i.e. size 
padded up to the next power of two - than the original; the original data if no gain)
        |
        V
return CHK@
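
a rough sketch of the "compress if suitable" step in that pipeline (class and method 
names are mine, and gzip merely stands in for whatever codec the node would pick):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class TransparentCompress {
    // returns the gzipped data if that shrinks the log2 (power-of-two
    // padded) size, otherwise the original bytes untouched
    static byte[] maybeCompress(byte[] original) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        GZIPOutputStream gz = new GZIPOutputStream(baos);
        gz.write(original);
        gz.close();
        byte[] compressed = baos.toByteArray();
        return log2Size(compressed.length) < log2Size(original.length)
                ? compressed
                : original;
    }

    // size rounded up to the next power of two
    static long log2Size(long n) {
        long p = 1;
        while (p < n) p <<= 1;
        return p;
    }

    public static void main(String[] args) throws IOException {
        byte[] text = new byte[4096]; // all zeros: highly compressible
        System.out.println(maybeCompress(text).length + " bytes kept");
    }
}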


the problem i had here was the returned CHK hash key:
would it be the hash of the original data or of the compressed data?

if it's the hash of the original, then
+ 3rd party tools can recalculate the hash value by themselves
- freenet does not know where to search for the data, because the hash of the original 
is != the hash of the compressed data that was actually inserted

if it's the hash of the compressed data, then
+ the freenet protocol would know where to look for the data in freenet, grab it, 
detect that it was precompressed, extract the original data and return it just like it 
went into the insertion tool
- 3rd party tools would have to emulate the node's compression step exactly, killing 
portability and future compatibility, if they want to calculate the CHK hash by 
themselves

but both options would add additional obfuscation to the protocol, which is bad for 
obvious reasons.
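
to make the mismatch concrete (SHA-1 and raw deflate stand in here for the real CHK 
computation, which is more involved; class and helper names are mine):

import java.security.MessageDigest;
import java.util.zip.Deflater;

// a digest over the original bytes has nothing to do with a digest
// over the compressed bytes, so a key computed from one cannot be
// used to look up the other
public class HashMismatch {
    public static void main(String[] args) throws Exception {
        byte[] original = "some data the user wants to insert".getBytes("UTF-8");

        Deflater def = new Deflater();
        def.setInput(original);
        def.finish();
        byte[] buf = new byte[1024];
        int len = def.deflate(buf);
        byte[] compressed = new byte[len];
        System.arraycopy(buf, 0, compressed, 0, len);

        MessageDigest sha = MessageDigest.getInstance("SHA-1");
        System.out.println(hex(sha.digest(original)));   // what a 3rd party tool computes
        System.out.println(hex(sha.digest(compressed))); // what the node would insert under
    }

    static String hex(byte[] b) {
        StringBuilder sb = new StringBuilder();
        for (byte x : b) sb.append(String.format("%02x", x));
        return sb.toString();
    }
}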

i'm glad it was just a discussion about the container compressor.... :)


