On Sat, 19 Mar 2005, Steven Boswell II wrote:

> >This makes me wonder: There is a difference
> >between a clean, high quality stream from a human
> >perspective and from the perspective of an encoder ...
> 
> Yes, there's a difference.  A human may not be
> able to tell the difference between luminosity
> levels 128 and 129, but the computer can, and if

        Thanks for reminding me about that section - I was going to respond
        to that question but was in a hurry and cleaned up the mailbox too
        soon.

        Yes indeed, there are data sets that look fine to the eye but will
        not look good once encoded.

        In fact the 'blocks in dark scenes' problem (which has been the subject
        of frequent discussion) is a perfect example.  The slight variations
        become more visible as the luma value decreases.  We might not see
        the difference between 125 and 129, but I believe we can see the
        difference between 16 and 20.
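
        A rough way to see why (treating the visibility of a step as relative
        to the level it sits on, Weber's-law style, and ignoring display
        gamma - so strictly a back-of-the-envelope estimate):

            (129 - 125) / 125  =  ~3%  relative change at the bright end
            ( 20 -  16) /  16  =  25%  relative change near black

        The same 4-step wobble is roughly eight times larger, relatively
        speaking, in the dark scene.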

> you have a screen that's all 128-level except for
> lots of 129-level dots spread throughout randomly,

        Or, as I've seen in the chroma, you'll have values like 125, 127, 129,
        etc. - close to 128 but not quite.  The encoder does its job and
        the result is a set of blocks that become noticeable as the scene
        gets dark.   A similar thing happens with the luma - in bright scenes
        a minor variation isn't noticeable, but as the brightness decreases
        those minor variations become visible.

        The raw DV file looks fine to the eye but the data needs to be 
        processed, sometimes quite heavily, to get the encoded result to look
        good.
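
        For the archives, a typical cleanup chain looks something like the
        following (purely illustrative - the file name and tolerance values
        are made up, so adjust for your own footage and check each tool's
        man page):

            # capture -> denoise -> encode, all in one pipe
            # (lav2yuv, y4mdenoise and mpeg2enc all ship with mjpegtools)
            lav2yuv capture.avi | y4mdenoise -t 2 | mpeg2enc -f 8 -o out.m2v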

> But it'll all look the same to a human
> being.  From this simple example, one can easily
> infer that something that looks the same to a  human being does not 
> necessarily look the same to the computer.

        In a similar way it should be pointed out that using a computer
        monitor for making adjustments to video streams is not a good idea
        at all.   For one thing, most computer monitors aren't calibrated.
        Something that looks good on a computer can be way out of spec and/or
        look terrible on a TV set.  A good first step is to calibrate the
        computer monitor to 6500K; a second step would be to adjust the
        response to be NTSC (or PAL)-like, but that might make normal computer
        activities look a bit different.

        The best way of all is to have a production/broadcast monitor, but
        those get very expensive in a hurry - a more economical approach is to
        use a regular TV and get it close to calibrated using color bars.
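
        mjpegtools even ships a test-pattern generator for exactly this, so
        something along these lines should produce bars you can play out to
        the TV through the normal chain (the frame count is just an example -
        check y4mcolorbars -h for the exact flags in your version):

            # ~30 seconds of SMPTE bars, encoded for playback on the TV
            y4mcolorbars -n 900 | mpeg2enc -f 8 -o bars.m2v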

> y4mdenoise run at very low tolerances ("-z 1 -t 1"
> or "-z 1 -t 2") qualifies as such processing.

        It's also quite a consumer of CPU cycles :-)  -t 3 seems to run more
        quickly and still does a nice job.
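
        For anyone curious, a quick way to compare the settings on your own
        machine (clip.avi is a placeholder - substitute a real capture):

            # time the low-tolerance run against -t 3 on the same clip
            # (bash's time keyword covers the whole pipeline)
            time lav2yuv clip.avi | y4mdenoise -z 1 -t 1 > /dev/null
            time lav2yuv clip.avi | y4mdenoise -t 3 > /dev/null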

        I'm not sure how Canopus does the denoising in the ADVC300, but it
        is quite effective without softening/blurring the image.  The output
        still needs to be run through y4mdenoise, BUT since the data is a
        little cleaner the y4mdenoise program runs more quickly.  Quite the
        winning combination :)

        Cheers,
        Steven Schultz


