>
> Is there a way to represent that something is compressed in a MIME type?
> For example:
>
> application/gzip:text/html
Not in MIME. That's why it's Content-Encoding and Content-Type in the HTTP
spec.
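For illustration (not from the original thread): in HTTP the two concerns
stay in separate headers, so gzipped HTML is sent as

    Content-Type: text/html
    Content-Encoding: gzip

with the type describing the decoded bytes and the encoding describing the
transformation that was applied to them.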
> Didn't someone suggest a while back putting an entire web page or web site
> including images into a single tar file? The file could then be gzipped
> solving the compression, prefetching, and uneven dropout issues.
I was offering prefetching as an alternative to just this idea.
On Thu, May 03, 2001 at 09:56:43AM -0400, Timm Murray wrote:
> I'm not about to do any scientific survey of it, but I know
> a lot of data could benefit (web pages, other text, all Freenet metadata,
> etc.).
> OTOH, a lot of data would not benefit. So, I say make it a CLI option
> to compress bef
On Fri, May 04, 2001 at 04:04:26PM -0700, Aaron Voisine wrote:
> Didn't someone suggest a while back putting an entire web page or web site
> including images into a single tar file? The file could then be gzipped
> solving the compression, prefetching, and uneven dropout issues.
Yeah, and then
The advantage of using a single file over pre-fetching is that the site
designer decides what should be fetched instead of the client trying to
guess. The files could still be streamed from within the tar file so
the whole thing doesn't need to be downloaded before displaying in the
browser.
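As a rough sketch of that streaming idea (illustrative code, not part of
Freenet; it assumes the standard tar layout of 512-byte headers with the
name at offset 0 and the size as octal text at offset 124):

    import java.io.*;
    import java.util.zip.GZIPInputStream;

    public class TarStreamSketch {
        public static void main(String[] args) throws IOException {
            // Decompress on the fly; each entry is usable as soon as its
            // bytes arrive, before the rest of the archive has downloaded.
            DataInputStream in = new DataInputStream(
                new GZIPInputStream(new FileInputStream(args[0])));
            byte[] header = new byte[512];
            while (true) {
                in.readFully(header);
                String name = new String(header, 0, 100).trim();
                if (name.length() == 0) break;  // zero block ends the tar
                long size = Long.parseLong(
                    new String(header, 124, 11).trim(), 8);  // octal size
                byte[] data = new byte[(int) size];
                in.readFully(data);  // hand this entry to the browser here
                in.skipBytes((int) ((512 - size % 512) % 512));  // padding
                System.out.println(name + ": " + size + " bytes");
            }
        }
    }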
Didn't someone suggest a while back putting an entire web page or web site
including images into a single tar file? The file could then be gzipped
solving the compression, prefetching, and uneven dropout issues.
l8r
Aaron
Let's just forget the wussy zlib. Let's integrate lzip instead.
Timm Murray
Life is like a perl script: Really short and messy.
> our metadata spec is not based on HTTP. Our metadata spec does not, in
> fact, have anything to do with the HTTP spec whatsoever.
Oh, stop being so damned short-sighted. The HTTP spec has good ideas
related to metadata; Content-Encoding is one of them. Just because we
aren't the W3C doesn't mean
> I hate to say it but you're wrong, Brandon. The content-type of a file
> indicates what the file really is, independent of its encoding. Read the
> HTTP spec some time. (Yes, I know, not everyone is a web browser; don't
> try that argument, but it's not just specifying web browsers.)
I see no rel
> > Does the ContentEncoding field add value for clients other than FProxy?
>
> Hell yes. Anything that wants to be able to decode the encoding and then
> hand off to a viewer.
In this sense, ContentEncoding adds no value whatsoever over
ContentDecoding.
> Actually, this brings me to something I was thinking about
> a while back. Right now, a person downloading a freesite must
> make a separate request for every image on the site. This could
> be improved by putting an entire freesite (or a chunk of one
> if it's really big) in a .tar.gz, with fi
toad wrote on 4/28/01 5:00 pm:
>And sticking stuff in ZIPs is not a good answer for pages which people
>would like to refer to and browse individually. The size bias means that
>it will significantly improve short-term survival of pages.
Actually, this brings me to something I was
Scott G. Miller wrote on 4/29/01 11:26 am:
>Analyze the data stored on Freenet and come up with a number on how much
>data is compressible versus how much isn't. I think you'll be amused.
I'm not about to do any scientific survey of it, but I know
a lot of data could benefit (web p
Stefan Reich wrote on 4/28/01 1:06 pm:
>Compressing takes longer than sending uncompressed data? You can't be not
>serious!
Quite serious. It depends on what you're compressing. There
is a certain amount of overhead added to a compressed
file. If the amount of savings in compressing do
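To make that overhead point concrete (my example, not from the thread):
gzip adds a fixed header and trailer, so a sufficiently small file can
come out larger than it went in.

    import java.io.*;
    import java.util.zip.GZIPOutputStream;

    public class TinyGzip {
        public static void main(String[] args) throws IOException {
            byte[] input = "Hello, Freenet!".getBytes();  // 15 bytes
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            GZIPOutputStream gz = new GZIPOutputStream(buf);
            gz.write(input);
            gz.close();  // flush the deflate stream, write the gzip trailer
            // Prints something like "15 -> 35"; the ~18 bytes of gzip
            // framing plus deflate block overhead outweigh any savings
            // on input this small.
            System.out.println(input.length + " -> " + buf.size());
        }
    }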
On Thu, May 03, 2001 at 02:13:09AM -0500, Brandon wrote:
>
> I just thought of a way to rephrase this discussion that might be more
> productive.
>
> Does the ContentEncoding field add value for clients other than FProxy?
Hell yes. Anything that wants to be able to decode the encoding and then
>
> This is backwards compatible with old clients. They simply can't tell the
> difference between a zip file and a text file which was automatically
> zipped on insert, which is much more reasonable than thinking that zip
> files are actually text files.
I hate to say it but you're wrong, Brandon.
On Thu, May 03, 2001 at 02:05:55AM -0500, Brandon wrote:
>
> Would this be better?
>
> ContentType=application/x-gzip
> ContentDecoding=text/html
Yes, that's much better. :-)
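A sketch of what a client would do with those two fields (the field names
are from the message above; the rest of the code is made up for
illustration):

    import java.io.*;
    import java.util.Properties;
    import java.util.zip.GZIPInputStream;

    public class DecodeSketch {
        // Returns a stream of the final content, undoing any insert-time
        // compression that the metadata declares.
        static InputStream open(Properties meta, InputStream raw)
                throws IOException {
            String type = meta.getProperty("ContentType");        // application/x-gzip
            String decoded = meta.getProperty("ContentDecoding"); // text/html
            if ("application/x-gzip".equals(type) && decoded != null)
                return new GZIPInputStream(raw);  // yields type `decoded`
            return raw;  // no encoding layer; bytes are already final
        }
    }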
I just thought of a way to rephrase this discussion that might be more
productive.
Does the ContentEncoding field add value for clients other than FProxy?
That's a much more concise and less confrontational way to express most
everything that I said in the last post. :-)
> No, I'm saying that FProxy should do the right thing w/r/to a web browser.
> Other clients may or may not need to perform automatic decompression at
> layer X. How can adding a feature to FProxy break all existing clients?
You're not proposing a feature addition to FProxy, you're proposing a n
On Wed, May 02, 2001 at 08:55:19PM -0500, Brandon wrote:
>
> > The correct behavior would be to send the compressed data along to the
> > browser without touching it, while setting the Transfer-Encoding HTTP
> > header to gzip. If the browser is incapable of handling that
> > Transfer-Encoding th
On Wed, May 02, 2001 at 07:57:04PM -0500, Brandon wrote:
>
> > > Why not just use the Content-Type field and set it to application/x-gzip?
> > > Why add another metadata field?
> >
> > Because that's actually a technically wrong use of Content-Type. I forget
> > which RFC I read about this in, bu
> The correct behavior would be to send the compressed data along to the
> browser without touching it, while setting the Transfer-Encoding HTTP
> header to gzip. If the browser is incapable of handling that
> Transfer-Encoding then you can unzip it yourself and send it along.
That's the right w
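A sketch of that logic on the FProxy side (hypothetical method and names;
writing the actual Transfer-Encoding response header is omitted):

    import java.io.*;
    import java.util.zip.GZIPInputStream;

    public class ProxyPassSketch {
        // Pass gzipped data through untouched when the browser's TE
        // request header advertises gzip support; otherwise unzip it
        // ourselves before sending it along.
        static void serve(String teHeader, InputStream gzipped,
                OutputStream toBrowser) throws IOException {
            InputStream body = (teHeader != null && teHeader.indexOf("gzip") >= 0)
                ? gzipped                        // browser will decode
                : new GZIPInputStream(gzipped);  // decode for old browsers
            byte[] buf = new byte[4096];
            int n;
            while ((n = body.read(buf)) != -1) toBrowser.write(buf, 0, n);
        }
    }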
On Wed, May 02, 2001 at 07:26:13PM -0500, Brandon wrote:
>
> > Why not just use the Content-Type field and set it to application/x-gzip?
> > Why add another metadata field?
>
> Because that's actually a technically wrong use of Content-Type. I forget
> which RFC I read about this in, but it was one of the HTTP or MIME ones.
> Content-Type is supposed to
I think Tavin gets what I was suggesting here. What do you think Ian?
For large text or html files to be both compressed and browsable,
zlib compression signified by a metadata field would be most
useful. We could start by having only text, html and various other
markup languages automatically com
On Sun, Apr 29, 2001 at 10:45:38AM -0700, Mr.Bad wrote:
> For small files, like plain text and HTML, you really don't get much
> bang for your buck by compressing them, at least w/r/t on-the-wire
> transfer time.
Why are text and HTML necessarily small? People might want to insert
books, HOWTOs e
On Sun, Apr 29, 2001 at 12:26:30AM -0500, Brandon wrote:
>
> > Another solution would be to have files compressed and decompressed
> > purely by clients, and have whether a file is compressed or not marked
> > in the file's metadata.
>
> That is a much better idea.
Wasn't this the proposal? It wa
> "TC" == Tavin Cole <[EMAIL PROTECTED]> writes:
TC> Sure. How would we keep track of it? The sourceforge
TC> bugtracker perhaps?
I was thinking maybe just someone sending a list to this mailing list,
but that's not a bad idea.
How about this: if "core developers"* send in some jo
Man, I should not write any postings before morning coffee... I understood
the exact opposite of what you're actually saying... %-|
Anyway, I feel the only sensible way of transparently compressing files is
_before_ encryption, and treating the file as a stream (the node just sees a
stream anyway)
> >
> > Again. If the problem is that data is falling out, the solution is not to
> > dance around that with compression. FIX THE REAL ISSUE. Also, any gains
> > from compression are a drop in the bucket compared to the latency of doing
> > a Freenet search... which compression does nothing for
> "SR" == Stefan Reich writes:
SR> Anyway, I feel the only sensible way of transparently
SR> compressing files is _before_ encryption, and treating the
SR> file as a stream (the node just sees a stream anyway). I
SR> thought this was pretty clear, and I'm a bit puzzled over th
> "SGM" == Scott G Miller <[EMAIL PROTECTED]> writes:
SGM> Me think that you guys just like coding too much.
Speaking of which... I'm thinking that we probably need a "job jar"
for Freenet. Like, a bunch of low- to medium-priority things that need
to get done, both on 0.3 and 0.4, that e
- Original Message -
From: <[EMAIL PROTECTED]>
> but we're not talking about compressing the packets, but about compressing
> the whole files before inserting them.
Why treat packets individually? We know which packets belong to the same
file - we can compress files as streams.
-Stefan
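A minimal sketch of exactly that, using java.util.zip (the class names are
standard JDK; everything else here is illustrative):

    import java.io.*;
    import java.util.zip.DeflaterOutputStream;

    public class StreamCompress {
        public static void main(String[] args) throws IOException {
            InputStream in = new FileInputStream(args[0]);
            // Compress on the way through; the whole file is never held
            // in memory, so the node downstream still just sees a stream.
            OutputStream out = new DeflaterOutputStream(
                new FileOutputStream(args[1]));
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) out.write(buf, 0, n);
            out.close();  // finishes the deflate stream
            in.close();
        }
    }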
- Original Message -
From: "Chris Anderson"
> > So, the equation becomes, for sufficiently small data values, compress
> > time + const xfer time vs. const xfer time.
> >
> > Of course, once again, feel free to prove me wrong.
> >
>
> No, I agree. In addition, there is usually a several K
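To put illustrative numbers on that equation (mine, not from the thread):
on a 56 kbit/s link, call it 7 KB/s, a 100 KB file takes about 14.3 s
uncompressed; at 50% compression it takes about 7.1 s plus the compression
time, so compressing wins whenever deflating 100 KB takes under ~7 s. For
a 2 KB page the whole transfer is ~0.3 s, so the most compression could
ever save is ~0.15 s, and per-file overhead eats much of that.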
> Another solution would be to have files compressed and decompressed
> purely by clients, and have whether a file is compressed or not marked
> in the file's metadata.
That is a much better idea.
- Original Message -
From: "Mr.Bad"
> SR> There's a reason why all modern modem protocols contain
> SR> compression after all.
>
> There's also a reason they have automatic control mechanisms to turn
> compression off. B-)
It would still be faster if they didn't.
Always a pleas
- Original Message -
From: "Mr.Bad"
> This seems like a problem in search of a solution. The stuff that
> needs compressing is already compressed. The stuff that isn't already
> compressed, like HTML files and text files, is small enough that
> This is a special case - text or HTML of a reasonable size that should be
> compressed to save space and hence increase its likelihood of survival on the
> network. It would not be sensible for it to occur automatically, unless if
> only
> with particular MIME types, but decoding support in free
On 28 Apr 2001, Mr.Bad wrote:
> > "CA" == Chris Anderson writes:
>
> CA> It's a simple eq, compress time + xfer time( compressed data )
> CA> vs xfer time( data ).
>
> It's an oversimplified eq. That's not necessarily true at all.
>
> CA> If you get 50% compression you save 5
>
> One solution would be to have a minimum compression threshold. Files
> under this threshold would be uncompressed, and files larger than this
> threshold would be compressed. This would result in there being space
> and bandwidth savings for big files, but not slowdowns from
> compressing small
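A sketch of that threshold rule (the cutoff value and the names are mine,
purely illustrative):

    import java.io.*;
    import java.util.zip.DeflaterOutputStream;

    public class ThresholdCompress {
        static final long MIN_COMPRESS_SIZE = 8 * 1024;  // illustrative

        // Wrap the insert stream in a compressor only when the file is
        // big enough for the savings to outweigh the per-file overhead.
        static OutputStream forInsert(File f, OutputStream raw)
                throws IOException {
            return (f.length() >= MIN_COMPRESS_SIZE)
                ? new DeflaterOutputStream(raw)
                : raw;
        }
    }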