Re: [PHP] Re: Why small > big?

2006-08-25 Thread Ivo F.A.C. Fokkema
 [SNIP]
 As for PNG:  As far as I know, the only issue with any realistic browser 
 (other than very old ones like IE2 or something) is that the alpha 
 channel is not supported.  As there is no alpha channel in JPEG, 
 there is no difference there.  Though I do not profess to be absolutely sure 
 that all browsers you might encounter manage PNG ok.

I personally use PNG all the time for smaller images. Only for high-color,
larger images do I use JPEG. Besides the unsupported alpha
transparency that you've already mentioned, I've never had any
problem (or heard anyone complain) with unsupported PNG images. And
that's for both indexed and RGB PNGs.




Re: [PHP] Re: Why small > big?

2006-08-25 Thread Ivo F.A.C. Fokkema
 [SNIP]
 Consider the example in this thread, where I left the quality at 100% and 
 reduced the image to less than 40 percent of the original, and the end 
 result was that I actually made a larger file. So, I believe that at 
 least this example shows that 100% is not a good quality value 
 setting for reducing images -- thus we know the high end is less than 
 100.

I know from experience that reducing the quality from 100% to 95%
sometimes cuts the file size in half, without much noticeable
change in quality...

HTH




Re: [PHP] Re: Why small > big?

2006-08-24 Thread Alex Turner

As I promised, here is the writeup with examples:

http://nerds-central.blogspot.com/2006/08/choosing-file-format-for-small-web.html

Cheers

AJ

tedd wrote:

[SNIP]

[PHP] Re: Why small > big?

2006-08-24 Thread tedd
 [SNIP]

[PHP] Re: Why small > big?

2006-08-23 Thread M. Sokolewicz

I'm not quite sure, but consider the following:

Considering that most JPEG images are stored with some form of 
compression (usually ~75%), the original image, in actual size, is 
about 1.33x bigger than its filesize suggests. When you make a 
thumbnail, you limit the number of pixels, but you are setting 
compression to 100% (besides that, you also use a truecolor palette, 
which adds to its size). So, for images which are scaled down by less 
than 25% (actually this will probably be more like 30-ish, due to 
palette differences) you'll actually see the thumbnail being bigger 
in *filesize* than the original (though smaller in memory size)


- tul

P.S. isn't error_reporting( FATAL | ERROR | WARNING ); supposed to be 
error_reporting( E_FATAL | E_ERROR | E_WARNING ); ??
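
For what it's worth, PHP defines neither FATAL/ERROR/WARNING nor E_FATAL;
the predefined level constants are E_ERROR, E_WARNING, E_NOTICE, E_ALL
and friends. So the line would need to be something like:

<?php
// Report fatal run-time errors and warnings only...
error_reporting(E_ERROR | E_WARNING);

// ...or, while developing, simply report everything.
error_reporting(E_ALL);
?>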


tedd wrote:

Hi gang:

I have a thumbnail script, which does what it is supposed to do. 
However, the thumbnail image generated is larger than the original 
image. How can that be?


Here's the script working:

http://xn--ovg.com/thickbox

And, here's the script:

<?php /* thumb from file */

/* some settings */
ignore_user_abort();
set_time_limit( 0 );
error_reporting( FATAL | ERROR | WARNING );

/* security check */
ini_set( 'register_globals', '0' );

/* start buffered output */
ob_start();

/* some checks */
if ( ! isset( $_GET['s'] ) ) die( 'Source image not specified' );

$filename = $_GET['s'];

// Set a maximum height and width
$width = 200;
$height = 200;

// Get new dimensions
list($width_orig, $height_orig) = getimagesize($filename);

if ($width && ($width_orig < $height_orig))
{
$width = ($height / $height_orig) * $width_orig;
}
else
{
$height = ($width / $width_orig) * $height_orig;
}

// Resample
$image_p = imagecreatetruecolor($width, $height);
$image = imagecreatefromjpeg($filename);
imagecopyresampled($image_p, $image, 0, 0, 0, 0, $width, $height, 
$width_orig, $height_orig);


// Output & Content type
header('Content-type: image/jpeg');
imagejpeg($image_p, null, 100);

/* end buffered output */
ob_end_flush();
?>
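
One way to see the effect being asked about is to write the thumbnail to
disk instead of the browser and compare file sizes directly (a rough
sketch along the lines of the script above; 'source.jpg' is a
placeholder filename and GD is assumed):

<?php
$src_file = 'source.jpg';
list($width_orig, $height_orig) = getimagesize($src_file);

// Scale the longest side down to 200 pixels.
$scale  = 200 / max($width_orig, $height_orig);
$width  = (int) round($width_orig * $scale);
$height = (int) round($height_orig * $scale);

$image_p = imagecreatetruecolor($width, $height);
$image   = imagecreatefromjpeg($src_file);
imagecopyresampled($image_p, $image, 0, 0, 0, 0, $width, $height, 
$width_orig, $height_orig);

// Quality 100 is what the script above uses; 75 is roughly GD's default.
imagejpeg($image_p, 'thumb_q100.jpg', 100);
imagejpeg($image_p, 'thumb_q75.jpg', 75);

echo 'original:      ' . filesize($src_file) . " bytes\n";
echo 'thumb at 100:  ' . filesize('thumb_q100.jpg') . " bytes\n";
echo 'thumb at 75:   ' . filesize('thumb_q75.jpg') . " bytes\n";
?>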

---

Thanks in advance for any comments, suggestions or answers.

tedd






[PHP] Re: Why small > big?

2006-08-23 Thread Alex Turner
M. Sokolewicz got it nearly correct.  However, the situation is a little 
more complex than he has discussed.


The % compression figure for jpeg is translated into the amount of 
information stored in the reverse cosine matrix.  The size of the 
compressed file is not proportional to the % you set in the compressor. 
 Thus 100% actually means store all the information in the reverse 
cosine matrix.  This is like storing the image in a 24 bit png, but with 
the compressor turned off.  So at 100% jpeg is quite inefficient.


The other issue is the amount of high frequency information in your 
images.  If you have a 2000x2000 image with most of the image dynamics 
at a 10 pixel frequency, and you reduce this to 200x200 then the JPEG 
compression algorithm will 'see' approximately the same amount of 
information in the image :-(  The reality is not quite as simple as this 
because of the way JPEG uses blocks etc, but it is an easy way of 
thinking about it.


What all this means is that as you reduce the size of an image, if you 
want it to retain some of the detail of the original but at a smaller 
size, there will be a point at which 8 or 24 bit PNG will become a 
better bet.
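
Roughly what that better bet looks like in GD terms (a sketch, assuming
GD was built with PNG support; the 256-colour palette is just a typical
choice):

<?php
// $image_p would be the resampled truecolor thumbnail from the script
// being discussed; a placeholder load is used here to keep this runnable.
$image_p = imagecreatefromjpeg('source.jpg');

// Reduce to an 8-bit palette (dithered, up to 256 colours)...
imagetruecolortopalette($image_p, true, 256);

// ...and write it as a PNG.  PNG compression is lossless, so there is no
// quality setting to pick; for 24-bit output just skip the palette step.
imagepng($image_p, 'thumb.png');
?>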


Clear as mud?

AJ

M. Sokolewicz wrote:

[SNIP]



[PHP] Re: Why small > big?

2006-08-23 Thread tedd

Alex:

Excuse for top posting:

You said: Clear as mud?

Well actually, it's simpler than I thought. After your reply, I did 
some reading on JPEG and found it's simply a transform, not unlike an 
FFT, where two-dimensional temporal data is transformed from the time 
domain to the frequency domain -- very interesting reading.


The reverse cosine matrix you mention is probably the discrete cosine 
transform (DCT) matrix where the x, y pixels of an image file have a 
z component representing color. From that you can translate the data 
into the frequency domain, which actually generates more data than 
the original.


However, the quality setting is where you make it back up in 
compression ratios, by trimming off higher frequencies which don't 
add much to the data. Unlike the FFT, the algorithm does not address 
phasing, which I found interesting.


However, the answer to my question deals with the quality statement. 
In the statement:


imagejpeg($image_p, null, 100);

I should have used something less than 100.

I've changed the figure to 25 and don't see any noticeable difference 
in quality of the thumbnail.


It seems to me there should be a table (or algorithm) somewhere that 
would recommend what quality to use when reducing the size of an 
image via this method. In this case, I reduced an image by 62 percent 
(to 38% of the original) with a quality setting of 25 and see no 
difference. I think this (the quality factor) is programmable.
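
Short of a general table, one can at least generate the numbers for a
given image; a throwaway sketch (GD assumed, filenames illustrative) that
dumps the same thumbnail at several quality settings for side-by-side
inspection:

<?php
// Any GD image will do; here a placeholder thumbnail is loaded from disk.
$image_p = imagecreatefromjpeg('thumb_source.jpg');

foreach (array(25, 50, 75, 90, 100) as $q) {
    $file = "preview_q{$q}.jpg";
    imagejpeg($image_p, $file, $q);
    echo $file . ': ' . filesize($file) . " bytes\n";
}
?>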


As for PNG images, I would probably agree (if I saw comparisons), but 
not all browsers accept them. I believe that at least one version of 
IE has problems with PNGs, right?


tedd

At 4:45 PM +0100 8/23/06, Alex Turner wrote:
[SNIP]

Re: [PHP] Re: Why small > big?

2006-08-23 Thread Alex Turner

Tedd,

Sorry for the sloppy language.  You are quite correct, the name is 
discrete cosine.  I get too relaxed sometimes!


As to the visual impact of a degree of compression, I don't think that 
you can automate this.  The issue surrounds the way the brain processes 
information.  When you see something, your brain processes the visual 
field and looks for patterns that it recognizes, and then your conscious 
mind becomes aware of the patterns, not actually the thing you are 
looking at.  Optical illusions can illustrate this point. For example 
where you see a bunch of blobs on a white background and then someone 
tells you it is a dog and you see the dog.  Once you see the dog you can 
no longer 'not see it'.  This is because of the way the brain processes 
patterns.


The trick to DCT is that in most 'organic' images - people, trees etc - 
the patterns for which your brain is looking actually occupy low 
frequencies.  However, the majority of the information which is encoded 
into the image is in high frequencies.  Consequently, by selectively 
removing the high frequencies, the image appears to the conscious mind 
to be the same whilst in reality it is degraded.


The snag comes when the pattern your brain is looking to match 
requires high frequencies.  The classic is an edge.  If one has an 
infinitely large white background with a single infinitely sharp line on 
it, you require infinite frequencies to encode it correctly (ten years 
ago I knew the proof for this; time and good wine have put a stop to 
that).  This is much like the sideband problem in radio transmission. 
If you encode an image in the spatial domain rather than in frequency 
space you don't get this problem (hence PNG permitting perfectly sharp 
lines).


So - back on topic.  If you take an image with sharp lines in it, then 
pass it through DCT twice (the process is symmetrical) but lose some of 
the high frequency data in the process (compression), the result is 
that the very high frequency components that encode the edge are 
stripped off.  Rather than (as one might like) this making the edge 
fuzzy, it produces what is called mosquito noise around the edges.


Because mosquito noise is nothing like what you are 'expecting' to see, 
the brain is very sensitive to it.


Thus, the amount you notice the compression of JPEG depends on the 
nature of the image you compress.


Now it gets nasty.  DCT scales as a power of n (where n is the size of 
the image) - there is a fast DCT process, like the F in FFT, but it is 
still non-linear.  This means that to make the encoding and decoding of 
JPEG reasonably quick, the image is split into blocks and each block is 
separately passed through the DCT process.  This is fine except that it 
produces errors from one block to the next as to where the edges are in 
HSV space.  Thus, as the compression is turned up, the edges of the 
blocks can become visible due to discontinuities in the color, hue and 
saturation at the borders.  This again is sensitive to the sort of image 
you are compressing.  For example, if it has a very flat (say black or 
white) background, then you will not notice it. Alternatively, if the 
image is tonally rich, like someone's face, you will notice it a lot.


Again, this effect means that it is not really possible to automate the 
process of figuring out what compression setting is optimum.
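
What can be automated is a byte-size target rather than a visual one; a
minimal sketch (GD assumed, names illustrative) that searches for the
highest quality fitting a given budget:

<?php
// Return the highest JPEG quality at which $img stays within $max_bytes.
// Note this judges only file size, never visual quality.
function quality_for_size($img, $max_bytes)
{
    $lo = 1;
    $hi = 100;
    while ($lo < $hi) {
        $mid = (int) ceil(($lo + $hi) / 2);
        ob_start();
        imagejpeg($img, null, $mid);        // render to memory, not disk
        $bytes = strlen(ob_get_clean());
        if ($bytes <= $max_bytes) {
            $lo = $mid;                     // fits: try a higher quality
        } else {
            $hi = $mid - 1;                 // too big: lower the quality
        }
    }
    return $lo;                             // 1 if even that is too big
}

$img = imagecreatefromjpeg('photo.jpg');    // placeholder image
echo quality_for_size($img, 50 * 1024);     // e.g. a 50 KB budget
?>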


As for PNG:  As far as I know, the only issue with any realistic browser 
(other than very old ones like IE2 or something) is that the alpha 
channel is not supported.  As there is no alpha channel in JPEG, 
there is no difference there.  Though I do not profess to be absolutely sure 
that all browsers you might encounter manage PNG ok.


Side Issues:
DCT is integer.  This means that if you have zero compression in the DCT 
process, then you get out what you put in (except if you get overflow, 
which can be avoided as far as I know).  This is not the case in FFT 
where floating point errors mean you always lose something.  Thus 
JPEG/100% should be at or near perfect (lossless) but does not actually 
compress.


Another area where FFT and DCT become very interesting is in moving 
picture processing.  You can filter video using FFT or DCT in ways that 
are hard or impossible using spatial filters.  This can be good for 
improving noisy or fuzzy 'avi' files etc.


Best wishes

AJ

PS - I'll stick the above on my nerd blog, nerds-central.blogspot.com; if 
you have any good links to suggest to expand the subject, please let me 
know and I shall add them.



tedd wrote:

[SNIP]