Hi Jeffrey,

On 07.02.2011 23:10, Jeffrey Martin wrote:

I'm quite swamped with "real" work right now, so I don't have time to work on cpfind for the next few weeks.

Hi Kay,

a bunch of nice ideas!

comments below

On Monday, February 7, 2011 3:34:25 PM UTC+1, kfj wrote:

 If I could tell the CPG to scale all
    images to, say, 50 pixels per degree overall, I would achieve
    precisely the effect I want by specifying one single parameter. It'd
    make things so much easier for mixed-lens takes.


OK, with a minimum image width of, say, 320 (long side), this would be
great, I think.

The FOV normalisation is a nice idea, but it needs a bit more effort than a simple scaling.
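Roughly, I picture the per-image factor like this (just a naive sketch, not cpfind code; it takes pixels per degree as width/hfov, which is exactly the simplification that ignores the projection, and it respects the 320 px minimum long side suggested above):

def normalisation_scale(width_px, hfov_deg, target_ppd=50.0, min_long_side=320):
    """Rough per-image scale factor so each image ends up near target_ppd
    pixels per degree across its horizontal field of view.  Pixels per
    degree is taken as width/hfov, which ignores the lens projection."""
    current_ppd = width_px / hfov_deg
    scale = target_ppd / current_ppd
    # never shrink below the suggested minimum long side of 320 px
    if width_px * scale < min_long_side:
        scale = min_long_side / width_px
    return scale

# a 4000 px wide, 10 degree telephoto shot (400 px/deg) is scaled to 1/8;
# a 4000 px wide, 120 degree wide-angle shot (~33 px/deg) would even be
# slightly upscaled, so one would probably cap the factor at 1.0
print(normalisation_scale(4000, 10))    # 0.125
print(normalisation_scale(4000, 120))   # 1.5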

    I know that theoretically SURF and SIFT features are (quite) scale-
    insensitive, but my experience tells me that this truth only goes so
    far. I'm not sure how scale-insensitive the gradient-based detector in
    cpfind is. But I feel that implementing my proposition might be easy
    and it'd give us the opportunity to see if this might be a cheaply
    bought improvement in performance.

It should perform similarly to SURF in that respect.

    The idea is to run the process on smaller images and once the
    orientations are established, to replace the images with full scale
    versions and have all pto parameters that build on image coordinates
    rescaled. I wrote this because I wanted to work on the screen-sized
    images I carry with me on my laptop and apply the results to the full-
    scale images back home with the fat data corpus. It works, and
    surprisingly well. The scaled-down versions are usually a fair bit
    crisper than the full-sized images, so there is enough detail for the
    CPGs to work on - and since the feature detectors produce subpixel
    accuracy, the scaled-up pto often stitches without any need for
    further intervention - if you want you can run a global fine-tune on
    the CPs.

Note that a global fine-tune is not really the best thing, as most of the control points found by cpfind/panomatic/SIFT will be at some higher scale, and the fine-tune only uses a small window for correlation.
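For reference, the rescaling step described above is mostly mechanical. A minimal sketch of the idea in Python (not kfj's script and not going through hugin's scripting interface, just naive token substitution; it covers w/h/d/e on i-lines and the control point coordinates on c-lines, and skips crop rectangles, masks and the p-line output size, which would need the same factor):

def rescale_pto_line(line, scale):
    """Multiply the pixel-based fields of one pto line by 'scale'.
    Handles w/h/d/e on i-lines and x/y/X/Y on c-lines; angular fields
    (v, r, p, yaw) are left untouched, as are linked fields like 'd=0',
    which reference another image rather than holding a pixel value."""
    if line.startswith('i '):
        pixel_keys = ('w', 'h', 'd', 'e')
    elif line.startswith('c '):
        pixel_keys = ('x', 'y', 'X', 'Y')
    else:
        return line
    tokens = []
    for tok in line.split(' '):
        # a pixel field is a single-letter key followed directly by a number
        if len(tok) > 1 and tok[0] in pixel_keys and tok[1] in '-.0123456789':
            tokens.append(tok[0] + repr(float(tok[1:]) * scale))
        else:
            tokens.append(tok)
    return ' '.join(tokens)

# e.g. going from 1500 px wide previews back to 6000 px originals:
# rescale_pto_line('c n0 N1 x100.5 y200.25 X300 Y400 t0', 4.0)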

    The next idea, to look at the overlapping parts once the overlap has
    been roughly established, is also promising and has been previously
    exploited, though I'm not entirely sure where the code is.

The --multirow option of cpfind does that.

    Another
    interesting aspect along these lines is to warp the overlapping parts
    of two images to a common projection and run the CPGs on those warped
    partial images, to later retransform the CPs to original image
    coordinates. This has also been done, and I've experimented with it
    myself, but found the gain not so noteworthy as to make me want to
    investigate the matter more deeply -


Do you want some 2000-image panos to test it on? In that case the time
savings might be very significant. What I mean is, sure, for panos
containing 4 or 10 images this just won't matter, but for gigapixel
images it might save minutes or hours.

Have you tried the multirow mode of cpfind for this type of panorama? It was specifically designed for that, and should be faster than doing a miniature pano and then reusing the overlaps, as it first matches only the consecutive images, connects the strips, optimizes, and then looks for control points in the overlapping images.

I don't have such a large pano, so I don't have much experience with it, though.
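Just to illustrate where the saving comes from for a 2000-image pano: once the strips are connected and roughly optimized, only pairs whose fields of view can actually intersect need the full matching, which prunes nearly all of the roughly two million possible pairs. A crude sketch of such a pruning step (my own illustration, not the actual multirow code; it simply compares the angular distance between image centres with the sum of the half-FOVs):

import math
from itertools import combinations

def angular_distance(yaw1, pitch1, yaw2, pitch2):
    """Great-circle angle in degrees between two viewing directions."""
    y1, p1, y2, p2 = map(math.radians, (yaw1, pitch1, yaw2, pitch2))
    cos_d = (math.sin(p1) * math.sin(p2) +
             math.cos(p1) * math.cos(p2) * math.cos(y1 - y2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_d))))

def candidate_pairs(images, margin=0.9):
    """images: list of (yaw, pitch, hfov) tuples from the rough optimisation.
    Yields only the index pairs whose view cones could overlap; all other
    pairs never reach the expensive descriptor matching."""
    for i, j in combinations(range(len(images)), 2):
        yi, pi, fi = images[i]
        yj, pj, fj = images[j]
        if angular_distance(yi, pi, yj, pj) < margin * (fi + fj) / 2.0:
            yield i, j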

ciao
 Pablo
