Good question, Steve. Thanks for posing it. Here are my (initial)
thoughts. In a nutshell, it comes down to the technological
differences between a spherical image and a flat image, when the end
result must be flat.

There are two sides to producing a panorama image: Taking & Making

On the Taking side, what you call a "normal" panorama is made by
rotating the camera around the center point of the tripod column. To
do it properly, you need to position the nodal point of the lens over
that rotational point. If you simply rotate around the tripod hole on
your camera body, you are going to have problems with objects that
are closer to your camera. As you mentioned, it also helps to have a
pano head. The HippoCam does not rotate, so there is no concern over
a nodal point. Instead, the camera travels in a plane across the 6x7
image circle, with some overlap to allow the stitching.
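To put a rough number on why rotating off the nodal point causes
trouble, here's a little sketch (my own toy numbers, not anything
from Fotodiox): when the pivot sits behind the entrance pupil, the
pupil translates sideways between frames, and near and far subjects
shift by different amounts.

```python
import math

# Illustrative sketch: parallax from pivoting about a point behind the
# lens's entrance pupil ("nodal point"). All figures below are assumed
# for illustration only.

def parallax_shift_deg(offset_mm, rotation_deg, subject_mm):
    """Apparent angular shift of a subject when the entrance pupil
    translates sideways because the camera pivots about a point
    offset_mm behind it."""
    translation = offset_mm * math.sin(math.radians(rotation_deg))
    return math.degrees(math.atan2(translation, subject_mm))

offset = 100.0      # pupil ~100 mm ahead of the tripod socket (assumed)
rotation = 30.0     # degrees between pano frames (assumed)
near = 1_000.0      # subject 1 m away
far = 100_000.0     # subject 100 m away

near_shift = parallax_shift_deg(offset, rotation, near)
far_shift = parallax_shift_deg(offset, rotation, far)
print(f"near subject shifts {near_shift:.2f} deg, far {far_shift:.4f} deg")
# The DIFFERENCE between the two shifts is what the stitcher cannot
# reconcile: near and far features no longer line up between frames.
```

With those assumed numbers the near subject shifts almost three
degrees while the distant one barely moves, which is exactly the
mismatch a stitcher has to fudge.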

Now think for a moment about the projected image quality. In your
"normal" method you are working with a lens that typically sacrifices
some quality at the corners. So a "normal" panorama image has
overlapping weak corners at each "seam" of the process. By using a
larger-format lens, the weak corners aren't even being sampled: the
APS-C sensor slides right across the middle of the 6x7 image circle.

That brings us to the making part. In the "normal" panorama process
you have to do two things: stitch and then distortion correct.

First, let's talk about the stitch.
Here's an example of a simple stitched image:
http://www.altostorm.com/images/corrector/sample_1_original.jpg
Two things:
First, as anyone who has produced a normal pano like this knows, it
didn't come out rectangular. The original image was a bowtie shape,
and you had to crop off pixels to get down to the USABLE rectilinear
area. In short, there is pixel "waste", a cost. You have in effect
used a much smaller part of your sensor (especially vertically) than
you started with.
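As a back-of-the-envelope sketch of that crop cost (the sag figure is
purely an assumption for illustration, not a measurement):

```python
# Rough sketch of the pixel "waste" when cropping a bowtie-shaped
# stitch down to its usable rectangle. Numbers are assumed.

def usable_fraction(height_px, sag_px):
    """Fraction of vertical resolution left after cropping off the
    sagging top and bottom edges of a bowtie-shaped stitch."""
    return (height_px - 2 * sag_px) / height_px

# e.g. a 4000 px tall stitch whose edges sag 600 px top and bottom
print(f"{usable_fraction(4000, 600):.0%} of vertical pixels survive")  # 70%
```

Nearly a third of the vertical resolution gone before you've even
started correcting anything, in that (made-up) example.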

Secondly, depending upon the focal length of the lens you used to
take the individual pano frames, problems often creep into parts of
the image at the blends. These are called "stitching artifacts", and
they can "give away" the fact that an impressive-looking image was
assembled from segments. That is generally not critical for
web-resolution images, but if you want to make bigger, wall-sized
prints, those artifacts have to be dealt with in some way.

The HippoCam stitching process is much easier technically because we
are not stitching spherically, but only "flat stitching". It is a
completely different process in Photoshop. In theory the pixels should
PERFECTLY OVERLAP from one frame to the next (as opposed to an
algorithm that must BLEND spherically distorted pixels in a pleasing
way). No stitching artifacts are introduced into the process. And you
throw away no pixels. Assuming the HippoCam is level, you should lose
very few pixels vertically and get to use almost the full 23.7mm of
sensor width in the vertical dimension.
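For a sense of how many flat-stitched frames that sliding takes,
here's a quick sketch (the 6x7 frame length, APS-C short side, and
overlap are my assumptions, not Fotodiox's figures):

```python
import math

# Sketch: how many exposures to tile the long side of a 6x7 frame
# with an APS-C sensor stepping sideways in portrait orientation.
# All dimensions below are assumptions for illustration.

def frames_needed(target_mm, step_mm, overlap_mm):
    """Number of exposures to cover target_mm of image circle when
    each frame adds step_mm of sensor, keeping overlap_mm between
    neighbouring frames for registration."""
    effective = step_mm - overlap_mm   # fresh coverage per frame
    return math.ceil((target_mm - overlap_mm) / effective)

target = 70.0     # long side of a 6x7 frame, mm (assumed)
step = 15.7       # APS-C short side, mm, camera in portrait (assumed)
overlap = 3.0     # a few mm of overlap for registration (assumed)
print(frames_needed(target, step, overlap))
```

So on those assumptions it's a handful of exposures per panorama,
each one sampled from the sweet spot of the image circle.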

Now, let's talk about the distortion correction phase.
How do we magically go from this:
http://www.altostorm.com/images/corrector/sample_1_original.jpg
to this:
http://www.altostorm.com/images/corrector/sample_1_corrected.jpg
???

Think about it. Either the pixels on the extreme right and left had
to stretch apart (did the software interpolate pixels to fill that
space in a way that made sense?) OR the center had to shrink to match
the outside edges, which again means "throwing away" pixel
information (which equals a loss of resolution). Anybody who has ever
tried to up-size a jpeg knows that doing so costs sharpness and
resolution. No algorithm can reproduce information it doesn't have.
The best it can do is guess, and the end result is something that is
very clearly inferior to our eyes.
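You can see the guessing in miniature with a toy example (my own,
using plain linear interpolation as a stand-in for whatever resampling
the correction software actually uses): stretch a row of pixels and
watch a hard one-pixel detail smear.

```python
# Sketch of why stretching pixels costs sharpness: interpolation can
# only invent in-between values, never recover detail. Toy data below.

def stretch_row(row, factor):
    """Resample a 1-D row of pixel values to factor times its length
    using linear interpolation (the kind of guess a distortion
    correction has to make when it spreads edge pixels apart)."""
    n = len(row)
    out_len = int(n * factor)
    out = []
    for i in range(out_len):
        x = i * (n - 1) / (out_len - 1)   # source coordinate
        lo = int(x)
        hi = min(lo + 1, n - 1)
        t = x - lo
        out.append(row[lo] * (1 - t) + row[hi] * t)
    return out

row = [0, 0, 255, 0, 0]        # a one-pixel-wide bright detail
print(stretch_row(row, 2.0))   # the crisp spike smears across neighbours
```

The single bright pixel comes out dimmer and spread over several
output pixels; no resampler, however clever, can put the original
edge back.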

If you eliminate the need for distortion correction in the first place
(as the HippoCam/RhinoCam method does) you eliminate the corresponding
loss of resolution.

We haven't even talked about DOF issues when comparing a spherical
image to a panorama made from taking images across a single flat image
plane.

All of this is just theory talking, however. Hopefully we'll see
whether it works in practice once I can do some comparison shots both
ways.

On Thu, Aug 22, 2013 at 12:05 PM, steve harley <[email protected]> wrote:
> on 2013-08-21 10:01 Darren Addy wrote
>
>> I think it helps to think of this project as being more of a view
>> camera, than a solid-bodied camera.
>
>
> i'm curious what qualities you expect from this that you wouldn't get from a
> careful normal multi-exposure, perhaps with a panoramic head … perspective
> correction?
>
>
>> Fotodiox says to
>> expose each segment letting the camera control the exposure.  I'm not
>> sure if that will be good advice or not. Normally one is counselled to
>> lock the exposure across segments of a panorama or exposure
>> differences will occur. So that is one area where I expect a little
>> experimentation to be my final guide.
>
>
> in my modest experience, varying exposure is almost impossible to correct
> for in a panoramic set; it doesn't merely shift the luminance values, it
> seems to change the color response a bit too
>
>
>
>
> --
> PDML Pentax-Discuss Mail List
> [email protected]
> http://pdml.net/mailman/listinfo/pdml_pdml.net
> to UNSUBSCRIBE from the PDML, please visit the link directly above and
> follow the directions.



-- 
"Photography is a Bastard left by Science on the Doorstep of Art" -
Peter Galassi
