On Sat 22-May-2010 at 02:30 -0000, polecat_butter wrote:

[Cc'd to hugin-ptx, scroll down]

Hello, I'm new to HDR and I've been learning how to use the PanoTools, in particular to create HDR panoramas. I've been using raw images with +/-2EV bracketing to produce the individual HDR component images for the panorama. However, my experience has been that tone mapping before stitching & blending with Hugin has led to color tone differences across the images.

So I have been trying to figure out how to start the Hugin process with the HDRs. It took me very little time to figure out that APSC doesn't do HDR files, even though the file I/O libraries will readily read them. The rub is in the routine DisplayImage.c/DisplayImage_ConvertToImageMap(), which takes the image map read from an input file by the library, and creates a monochrome array of float luminance values in the range of [0,1]. As an aside, though it knows about alpha, the routine glibly ignores the alpha channel. Continuing, the code in the routine only knows how to handle 32 bpp and 64 bpp, which it assumes are 8 and 16 bit integers for ARGB respectively.
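To make the described conversion concrete, here is a minimal sketch of what the 32 bpp path might look like: an 8-bit-per-channel ARGB buffer reduced to a float luminance map, with the alpha channel skipped as the original routine does. The function name and pixel layout are illustrative assumptions, not code from the actual APSC source.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative sketch (not actual APSC code): convert an 8-bit-per-channel
 * ARGB pixel buffer to a monochrome float luminance map in [0,1], roughly
 * what DisplayImage_ConvertToImageMap() does for the 32 bpp case.
 * The alpha channel at p[0] is ignored, matching the existing behaviour. */
static void argb8_to_luminance(const uint8_t *pixels, size_t npixels,
                               float *lum)
{
    for (size_t i = 0; i < npixels; i++) {
        const uint8_t *p = pixels + 4 * i;        /* A, R, G, B */
        unsigned sum = p[1] + p[2] + p[3];        /* L1 norm of R, G, B */
        lum[i] = (float)sum / (3.0f * 255.0f);    /* scale into [0,1] */
    }
}
```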

So I added code to handle an HDR image that is read as 128 bpp - which assumes a 32-bit single-precision float ARGB quad. As another aside, I wish the authors of this code had used the much better programming practice of using "4 * sizeof(int)" or "4 * sizeof(float)" instead of the magic numbers "8" and "16".
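A small sketch of the sizeof-based dispatch being suggested, covering the two existing cases plus the new 128 bpp HDR case. The enum and function names are made up for illustration; only the bit arithmetic reflects the formats described above.

```c
#include <stdint.h>

/* Illustrative sketch: dispatch on bits per pixel using sizeof-derived
 * constants instead of magic numbers. Names are hypothetical, not from
 * the actual APSC source. */
enum pixel_format { FMT_ARGB8, FMT_ARGB16, FMT_ARGB_FLOAT, FMT_UNKNOWN };

static enum pixel_format classify_bpp(unsigned bits_per_pixel)
{
    if (bits_per_pixel == 4 * 8 * sizeof(uint8_t))   /* 32 bpp: 8-bit ARGB */
        return FMT_ARGB8;
    if (bits_per_pixel == 4 * 8 * sizeof(uint16_t))  /* 64 bpp: 16-bit ARGB */
        return FMT_ARGB16;
    if (bits_per_pixel == 4 * 8 * sizeof(float))     /* 128 bpp: float ARGB */
        return FMT_ARGB_FLOAT;
    return FMT_UNKNOWN;
}
```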

The history of this software is that it was originally written in C#, and it was then ported anonymously to C. Various people have worked on it since, but it is fundamentally structured the same as the C# tool.

This lack of foresight comes back to bite us when the code is compiled for a 64-bit O/S, because it glibly typecasts back and forth between ints and "void *", which works well enough in 32-bit land but comes unglued in a 64-bit environment - this is why the compiler complains about the practice. It is better coding practice to use a pointer-sized integer type such as "uintptr_t" (or "size_t" for sizes).
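The 64-bit-safe pattern can be shown in a few lines: round-trip the pointer through an integer type guaranteed to be wide enough, rather than plain int. This is a generic illustration of the fix, not a patch hunk from APSC itself.

```c
#include <stdint.h>

/* Illustrative: casting a pointer to plain int truncates it on LP64
 * systems where int is 32 bits but pointers are 64 bits. Going through
 * uintptr_t preserves the full pointer value on both 32- and 64-bit
 * targets. */
static void *roundtrip(void *p)
{
    uintptr_t key = (uintptr_t)p;   /* wide enough by definition */
    return (void *)key;             /* recovers the original pointer */
}
```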

Anyway, the conversion of the ARGB quad to a luminance field uses the L1 norm, which is implemented without an absolute value operator because everything is cast to unsigned. I decided to continue this with HDR floats, hoping for the best. To normalize the luminance field over [0,1], I track the minimum and maximum L1 norm values, then offset by the minimum and scale by the maximum. This worked OK for a few test values, but fails in the case of large outliers. So I revised the code to also compute the mean value, and used the minimum of the max and 2 * the mean as the normalization divisor. This seems more reliable, though I have not traced through to determine what it does to the control point calculations.
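For readers following along, the outlier-resistant normalization described above can be sketched as below: offset by the minimum, divide by min(max, 2 * mean), and clamp anything above the divisor to 1. This is my reading of the description, with hypothetical names; it is not the actual patch.

```c
#include <stddef.h>

/* Illustrative sketch of the described normalization: values are shifted
 * by the minimum, divided by min(max, 2*mean) so a few huge HDR outliers
 * do not crush the usable range, and clamped to [0,1]. */
static void normalize_luminance(float *lum, size_t n)
{
    if (n == 0)
        return;

    float min = lum[0], max = lum[0];
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (lum[i] < min) min = lum[i];
        if (lum[i] > max) max = lum[i];
        sum += lum[i];
    }
    float mean = (float)(sum / n);

    float divisor = (2.0f * mean < max) ? 2.0f * mean : max;
    if (divisor <= 0.0f)
        divisor = 1.0f;   /* degenerate image: avoid division by zero */

    for (size_t i = 0; i < n; i++) {
        float v = (lum[i] - min) / divisor;
        lum[i] = (v > 1.0f) ? 1.0f : v;   /* clamp outliers to 1 */
    }
}
```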

I also patched the pointer conversion problems, which are generally in the hashing routines and that code gives me the willies. I generated a patch based on the 5155 version of APSC from the svn repository. I will try to append it below if I can in the hopes that it might be useful to someone. If this is not the right place for this, I'm sorry.

Thanks, this is a much-requested improvement. The patch applies and builds on 32-bit Linux, and the tools run, but I don't have any HDR photos to hand to test with.

The patch was a bit mangled by Yahoo, so I've pushed it to the autopano-sift-C repository so others can test.

Note that all the software related to the Hugin project has now switched from Subversion to Mercurial for version control; you can find it here:

  hg clone http://hugin.hg.sourceforge.net/hgroot/hugin/autopano-sift-C

--
Bruno

--
You received this message because you are subscribed to the Google Groups "hugin and 
other free panoramic software" group.
A list of frequently asked questions is available at: 
http://wiki.panotools.org/Hugin_FAQ
To post to this group, send email to hugin-ptx@googlegroups.com
To unsubscribe from this group, send email to 
hugin-ptx+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/hugin-ptx
