Hi Greg,

> This is interesting.  I hadn't realized that the Sigma lens used something 
> other than the equidistant projection.  I thought this was the more normal 
> type, and our casual measurements seemed to indicate that it followed this 
> projection.  Perhaps I wasn't as careful as I ought to have been with my 
> observations.  The horizon on the Sigma is enough of a mess that it's 
> difficult to gauge things accurately at the outer rim of the circular image.

True, the horizon is a bit messy, but A LOT cleaner than on the Nikon FC-E8.

> Looking at the ever-helpful Wikipedia page on the topic 
> <http://en.wikipedia.org/wiki/Fisheye_lens#Mapping_function>, I see that the 
> equidistant and equisolid-angle projections are fairly similar except at the 
> outer reaches (near +/-90°).  Of course, errors could still be large assuming 
> the wrong projection if your only light source is out in that part of the 
> view.

Here's another good page with a helpful diagram (scroll down to 'fisheye'):
http://www.bobatkins.com/photography/technical/field_of_view.html
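To put numbers to those diagrams: for a lens of focal length f, the
equidistant mapping is r = f*theta, and the equisolid-angle one is
r = 2f*sin(theta/2). A quick sketch (Python, mine, just for
illustration — not Radiance code) of how far apart the two normalised
radii get across the field:

```python
import math

def r_equidistant(theta, f=1.0):
    # equidistant (-vta): radial distance grows linearly with angle
    return f * theta

def r_equisolid(theta, f=1.0):
    # equisolid-angle (Sigma 4.5mm): equal solid angle per pixel area
    return 2.0 * f * math.sin(theta / 2.0)

# Normalised so both mappings meet at the 90 deg rim; the gap in between
# is the positional error of treating an equisolid image as -vta.
for deg in (10, 30, 60, 90):
    theta = math.radians(deg)
    re = r_equidistant(theta) / r_equidistant(math.pi / 2)
    rs = r_equisolid(theta) / r_equisolid(math.pi / 2)
    print(f"{deg:3d} deg  equidistant {re:.3f}  equisolid {rs:.3f}")
```

The curves agree at the centre and the rim and differ by a few percent
of the image radius in between, which matches the Wikipedia figure.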

> In lieu of reprojecting the image with pinterp, which is as you say 
> unsupported, it is possible to apply a correction to the image values to 
> account for the difference in solid angle at each pixel.  Given that the 
> solid angle of the equisolid-angle projection is the same for each pixel, we 
> really only need the solid angle for the equidistant projection.  This can be 
> computed with a simple expression, which is sin(theta)/theta.

You see, this is where I get a little lost between 'projection',
'distortion', and 'vignetting'. They are, of course, different things.
Something like (don't quote me on it):
- projection: that would be 'equidistant' or 'equisolidangle'
- distortion: the deviation from the ideal 'projection'. Think
pincushion or barrel
- vignetting: drop-off in image brightness towards the image horizon

Would not the sin(theta)/theta correction account only for the pixel
brightness? In other words: the pixel 'location' on the photographic
plate/CCD chip would still be wrong, so that the Guth index in the UGR
formula would be off?
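For what it's worth, the two effects can be separated numerically. The
sin(theta)/theta term is purely a solid-angle (pixel weighting) factor,
not a positional one. For any radial mapping r(theta), the solid angle
seen per unit of image area is d(omega)/dA = sin(theta)/(r * dr/dtheta).
A quick check (Python, my own sketch) confirms that this is constant for
the equisolid-angle mapping and falls off as sin(theta)/theta for the
equidistant one:

```python
import math

def solid_angle_per_area(theta, r, dr):
    # d(omega)/dA for a radial mapping r(theta), with f = 1:
    # d(omega) = sin(theta) d(theta) d(phi),  dA = r dr d(phi)
    return math.sin(theta) / (r * dr)

for deg in (10, 45, 80):
    theta = math.radians(deg)
    # equidistant: r = theta, dr/dtheta = 1
    w_ed = solid_angle_per_area(theta, theta, 1.0)
    # equisolid-angle: r = 2*sin(theta/2), dr/dtheta = cos(theta/2)
    w_es = solid_angle_per_area(theta, 2 * math.sin(theta / 2),
                                math.cos(theta / 2))
    print(f"{deg:2d} deg  equidistant {w_ed:.4f}  equisolid {w_es:.4f}")
```

So sin(theta)/theta only rescales how much scene each pixel represents;
the pixel location (and hence the Guth index) is a separate correction,
exactly as you suspect.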

I can imagine that one could create a pcomb cal file that builds up a
new image from the corrected radial distance of the pixel in the
source image. If this gets too aliased (blocky), one could average over
the nearby pixels using the optional x,y offset that pcomb provides,
e.g. (spaces added for clarity):
ro=.5*ri(1) + .5*( ri(1,-1,0)+ri(1,1,0)+ri(1,0,-1)+ri(1,0,1) )/4 etc
which is effectively a box filter. Not tested! Don't try this at home!
One could fiddle with the pixel/off-pixel multipliers (both .5 in this
case) to see what would look best. I'm not sure how floating point
pixel coordinates are handled by pcomb. Are they just rounded off?
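Along those lines, a pcomb cal file for the resampling step might look
roughly like this. Completely untested, and the geometric assumptions
(square image, 180 deg circle centred and filling the frame) are mine:

```
{ equisolid -> equidistant resampling sketch for pcomb. UNTESTED! }
PI : 3.14159265358979;
cx = xmax/2; cy = ymax/2;             { image centre }
R  = xmax/2;                          { radius of the 90 deg rim }
rd = sqrt((x-cx)^2 + (y-cy)^2);       { dest radius (equidistant) }
th = (PI/2) * rd/R;                   { off-axis angle of dest pixel }
rs = R * sin(th/2) / sin(PI/4);       { source radius (equisolid) }
s  = if(rd-.5, rs/rd, (PI/4)/sin(PI/4));  { scale; limit value at centre }
ro = ri(1, s*(x-cx)+cx-x, s*(y-cy)+cy-y);
go = gi(1, s*(x-cx)+cx-x, s*(y-cy)+cy-y);
bo = bi(1, s*(x-cx)+cx-x, s*(y-cy)+cy-y);
```

If I read the pcomb man page right, the dx,dy offsets end up rounded to
the nearest source pixel, so the box-filter trick above could be layered
on top if the result comes out blocky.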

This approach would, of course, smudge out the luminance of the new,
constructed pixel to a certain extent, but considering that the lens
response function does this anyhow, it's probably a small price to
pay for a smooth-as-a-baby's-bottom image.

This would correct the pixel 'position', but here's where I get
confused: how would one then have to correct the pixel brightness
(vignetting?) to account for this re-projection of pixel locations,
while still maintaining the photometric integrity of the image as a
whole (vertical illuminance, say... Or UGR)?
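My current understanding (happy to be corrected) is that the luminance
itself is invariant under the re-projection: each relocated pixel keeps
its L value, and no brightness correction is needed beyond the
vignetting one. What changes is the solid angle each pixel stands for,
which is exactly what the downstream integration (Ev, UGR) has to use.
A small numeric sanity check (Python, my sketch): for a uniform
hemisphere of luminance L=1, summing L*cos(theta)*d(omega) ring by ring
gives Ev = pi under either projection, as long as each ring carries its
own solid angle:

```python
import math

def ev_uniform(theta_of_u, n=20000, L=1.0):
    # Ev = integral of L*cos(theta) d(omega) over the hemisphere,
    # summed ring by ring in normalised image radius u (0..1)
    ev = 0.0
    for i in range(n):
        t0 = theta_of_u(i / n)
        t1 = theta_of_u((i + 1) / n)
        tm = 0.5 * (t0 + t1)
        d_omega = 2 * math.pi * math.sin(tm) * (t1 - t0)  # ring solid angle
        ev += L * math.cos(tm) * d_omega
    return ev

equidist = lambda u: u * math.pi / 2                        # -vta
equisolid = lambda u: 2 * math.asin(u * math.sin(math.pi / 4))

print(ev_uniform(equidist))    # ~3.14159 (= pi)
print(ev_uniform(equisolid))   # ~3.14159 (= pi)
```

In other words: fix the positions, keep the luminances, and let the
per-projection solid angles do the photometric bookkeeping.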

> Does this help?
> -Greg
>
>> From: Axel Jacobs <[email protected]>
>> Date: February 3, 2012 6:27:59 AM PST
>>
>> Dear list,
>>
>> I've been experimenting with Radiance's findglare and glarendx, trying
>> to get UGRs from photographic HDRs. I'm using the Sigma 4.5mm on a
>> D200, which seems to be quite a popular choice amongst you.
>>
>> Unlike the FC-E8/Coolpix combo, which produces an equidistant
>> projection (-vta), the Sigma 4.5mm results in a 180deg equisolidangle
>> view. I gather from this post to the rad-gen list:
>> http://www.radiance-online.org/pipermail/radiance-general/2010-April/006709.html
>> that the NYT cart was based on a Sigma lens (4.5mm ?), operated at
>> F5.6. The code snippet in that post suggests that the HDRs were
>> vignetting corrected.
>>
>> An overall calibration of the image luminance can be carried out (I
>> think) by measuring the vertical illuminance at the lens when the
>> exposure-bracketed sequence is taken, and then running findglare and
>> glarendx -t ver_illu on the HDR, which should give a calibration
>> factor that can then be used to fiddle with the EXPOSURE= line. This
>> is probably more accurate than calibrating against spot meter
>> readings. So far, so good.
>>
>> What I don't seem to be able to find in the googleable literature, nor
>> in the HDR book, is any words of wisdom regarding the impact of the
>> lens projection on glare metrics. Radiance doesn't have an
>> equisolidangle view type, so using pinterp as detailed in this post:
>> http://www.radiance-online.org/pipermail/radiance-general/2011-August/008141.html
>> is not an option.
>>
>> It might be possible to utilise ImageMagick to re-project the JPGs
>> prior to running hdrgen, but I'd rather not go there.
>>
>> The deviation between equisolidangle and -vta is most noticeable for
>> high off-axis angles, which is also where glare sources have less of
>> an impact (Guth position index). I'm therefore wondering whether
>> people just tend to go with the vignetting-corrected and
>> luminance-calibrated HDR without worrying too much about re-projecting
>> the fisheye. The same question would apply to evalglare's DGP rating,
>> which relies on the HDR coming in -vta.  Has anybody looked into this?
>>
>> Cheers
>>
>> Axel
>
> _______________________________________________
> HDRI mailing list
> [email protected]
> http://www.radiance-online.org/mailman/listinfo/hdri
