[hugin-ptx] Re: cpfind ransac mode

2011-02-12 Thread kfj


On 11 Feb., 20:35, Pablo d'Angelo  wrote:

> The FOV normalisation is a nice idea, but it needs a bit more effort than
> a simple scaling.

As a first, very simple step, it should be enough to do the
following:

- pixels/degree fov is passed as parameter PPD
- image N has fov of FOV degrees across
- image N is WIDTH pixels wide

- pixels wanted across NPIX = FOV * PPD
- scaling factor FSCALE = NPIX / WIDTH
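
The steps above amount to one small function (a sketch only; `fov_scale_factor` and the 50 ppd default are illustrative, not part of any CPG):

```python
def fov_scale_factor(width_px, fov_deg, target_ppd=50.0):
    """Scale factor that brings an image to roughly target_ppd
    pixels per degree of horizontal field of view."""
    npix = fov_deg * target_ppd   # pixels wanted across: NPIX = FOV * PPD
    return npix / width_px        # FSCALE = NPIX / WIDTH

# A 3000-pixel fisheye covering 180 degrees gets blown up:
print(fov_scale_factor(3000, 180))   # 3.0
# A 3000-pixel rectilinear shot covering 15 degrees gets shrunk:
print(fov_scale_factor(3000, 15))    # 0.25
```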

Of course this does not take into account the variation in resolution
from center to edge, which is significant in wide-angle images, but as
a rule-of-thumb method to get an idea whether this path is promising,
it might be worth investigating. If cpfind has a facility to scale
individual images by individual factors, it should be cheap to
implement - otherwise it'd be more laborious. I fear it's the latter,
because I know of no CPG with per-image scaling, but it might be a
valuable feature anyway, and would set cpfind further apart from the
rest ;-)

I hope your 'real' work's grip will lessen eventually, so you can
dedicate some time again to cpfind!

> Note that a global finetune is not really the best thing, as most of
> the control points found by cpfind/panomatic/sift will be in some higher
> scale, and the finetune only uses a small window for correlation.

I think the finetuning could do with a bit of TLC anyway - or maybe an
alternative method to yield better results in some situations when the
usual method has difficulties. For finetuning CPs on the overlapping
edges of fisheye images, correlation may not be appropriate. Only
guessing - haven't looked at the code.

Kay

-- 
You received this message because you are subscribed to the Google Groups 
"Hugin and other free panoramic software" group.
A list of frequently asked questions is available at: 
http://wiki.panotools.org/Hugin_FAQ
To post to this group, send email to hugin-ptx@googlegroups.com
To unsubscribe from this group, send email to 
hugin-ptx+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/hugin-ptx


[hugin-ptx] Re: cpfind ransac mode

2011-02-11 Thread Pablo d'Angelo

Hi Jeffrey,

On 07.02.2011 23:10, Jeffrey Martin wrote:

I'm quite swamped with "real" work right now, so I don't have time to
work on cpfind for the next few weeks.



> Hi Kay,
>
> a bunch of nice ideas!
>
> comments below

> On Monday, February 7, 2011 3:34:25 PM UTC+1, kfj wrote:
>
> > If I could tell the CPG to scale all
> > images to, say, 50 pixels per degree overall, I would achieve
> > precisely the effect I want by specifying one single parameter. It'd
> > make things so much easier for mixed-lens takes.
>
> ok, with a minimum image width of say 320 (long side), this would be
> great, i think.


The FOV normalisation is a nice idea, but it needs a bit more effort than
a simple scaling.



> I know that theoretically SURF and SIFT features are (quite) scale-
> insensitive, but my experience tells me that this truth only goes so
> far. I'm not sure how scale-insensitive the gradient-based detector in
> cpfind is. But I feel that implementing my proposition might be easy
> and it'd give us the opportunity to see if this might be a cheaply
> bought improvement in performance.


It should perform similarly to SURF in that respect.


> The idea is to run the process on smaller images and once the
> orientations are established, to replace the images with full scale
> versions and have all pto parameters that build on image coordinates
> rescaled. I wrote this because I wanted to work on the screen-sized
> images I carry with me on my laptop and apply the results to the full-
> scale images back home with the fat data corpus. It works, and
> surprisingly well. The scaled-down versions are usually a fair bit
> crisper than the full-sized images, so there is enough detail for the
> CPGs to work on - and since the feature detectors produce subpixel
> accuracy, the scaled-up pto often stitches without any need for
> further intervention - if you want you can run a global fine-tune on
> the CPs.


Note that a global finetune is not really the best thing, as most of
the control points found by cpfind/panomatic/sift will be in some higher
scale, and the finetune only uses a small window for correlation.



> The next idea, to look at the overlapping parts once the overlap has
> been roughly established, is also promising and has been previously
> exploited, though I'm not entirely sure where the code is.


The --multirow option of cpfind does that.

> Another
> interesting aspect along these lines is to warp the overlapping parts
> of two images to a common projection and run the CPGs on those warped
> partial images, to later retransform the CPs to original image
> coordinates. This has also been done, and I've experimented with it
> myself, but found the gain not so noteworthy as to make me want to
> investigate the matter more deeply -


> do you want some 2000 image panos to test it on? in that case the time
> savings might be very significant. what I mean is, sure for panos
> containing 4 or 10 images this just won't matter but for gigapixel
> images it might save minutes or hours.


Have you tried the multirow mode of cpfind for this type of panorama?
It was specifically designed for that, and should be faster than doing
a miniature pano and then reusing the overlaps: it first matches only
consecutive images, connects the strips, optimizes, and only then looks
for control points in the overlapping images.
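
The payoff of matching only consecutive images first is easy to see from pair counts alone (a back-of-the-envelope sketch; cpfind's actual pair selection in --multirow mode is more involved):

```python
def all_pairs(n):
    # exhaustive matching: every image against every other
    return n * (n - 1) // 2

def consecutive_pairs(n):
    # multirow-style first pass: only neighbours in shooting order
    return n - 1

for n in (4, 10, 2000):
    print(n, all_pairs(n), consecutive_pairs(n))
# For 2000 images: 1999000 exhaustive pairs vs 1999 consecutive ones.
```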


I don't have such a large pano, so I don't have much experience with it, 
though.


ciao
 Pablo



[hugin-ptx] Re: cpfind ransac mode

2011-02-11 Thread Pablo d'Angelo

Hi Oskar,

On 11.02.2011 08:12, Oskar Sander wrote:

Maybe a stupid question, what happens when ypr are not the most
significant parameters like in a "mosaic"?Will any of the RANSAC
models work  (I would assume one has to use the old version if anything...)


The homography model should work relatively well, unless you are trying
to make a mosaic with fisheye images.


The RANSAC mode can be selected with the --ransacmode parameter, so you
don't need to go back to an older version. Note that cpfind will use
the homography model automatically for images with HFOV < 65°.


ciao
 Pablo



[hugin-ptx] Re: cpfind ransac mode

2011-02-11 Thread kfj


On 11 Feb., 08:12, Oskar Sander  wrote:
> Maybe a stupid question, what happens when ypr are not the most significant
> parameters like in a "mosaic"?    Will any of the RANSAC models work  (I
> would assume one has to use the old version if anything...)

RANSAC is basically a statistical method and will work on any data set
which contains mainly inliers and some outliers:

http://en.wikipedia.org/wiki/Ransac

As usual with statistics, you have to be careful. If you have a mosaic
and feed your statistical analysis with ypr values, your results will
most likely be disappointing. This is no fault of the method, then,
but of its application.
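
The principle can be shown in a few lines of toy code - plain line fitting here, not the geometric models cpfind actually uses:

```python
import random

def ransac_line(points, iters=200, tol=0.5, seed=0):
    """Fit y = a*x + b to points, tolerating outliers: repeatedly fit a
    minimal sample and keep the model that most points agree with."""
    rng = random.Random(seed)
    best, best_count = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate sample, cannot fit a line
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        count = sum(1 for x, y in points if abs(a * x + b - y) <= tol)
        if count > best_count:
            best, best_count = (a, b), count
    return best

# Mostly points on y = 2x + 1, plus a few gross outliers:
pts = [(x, 2 * x + 1) for x in range(20)] + [(3, 40), (7, -30), (11, 90)]
a, b = ransac_line(pts)
print(a, b)   # 2.0 1.0 - the outliers are simply outvoted
```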

Kay



Re: [hugin-ptx] Re: cpfind ransac mode

2011-02-10 Thread Oskar Sander
Maybe a stupid question, what happens when ypr are not the most significant
parameters like in a "mosaic"?Will any of the RANSAC models work  (I
would assume one has to use the old version if anything...)

/O




-- 
/O


[hugin-ptx] Re: cpfind ransac mode

2011-02-08 Thread kfj


On 7 Feb., 23:10, Jeffrey Martin <360cit...@gmail.com> wrote:

> just to start things out in a simple way, couldn't we (well, not me :)
> because I don't know how) just add a parameter to cpfind specifying how much
> to reduce source images (not only halfsize or fullsize as we have now)

A scaling factor would be nice indeed, but I reckon my pixels per
degrees of field of view idea would be even more useful. I suppose
we'll have to get Pablo's attention somehow ;-)

> > I have made experiments in this direction using up- and downscaling of
> > pto files. You can find a working prototype of my Python script here:
>
> >http://bazaar.launchpad.net/~kfj/+junk/script/view/head:/main/scale_p...
>
> > The idea is to run the process on smaller images and once the
> > orientations are established, to replace the images with full scale
> > versions and have all pto parameters that build on image coordinates
> > rescaled. I wrote this because I wanted to work on the screen-sized
> > images I carry with me on my laptop and apply the results to the full-
> > scale images back home with the fat data corpus. It works, and
> > surprisingly well. The scaled-down versions are usually a fair bit
> > crisper than the full-sized images, so there is enough detail for the
> > CPGs to work on - and since the feature detectors produce subpixel
> > accuracy, the scaled-up pto often stitches without any need for
> > further intervention - if you want you can run a global fine-tune on
> > the CPs.
>
> this always worries me. after optimizing something and you see it works,
> optimizing again always has a risk that it will all go to hell again ;) but
> maybe in real life this will not happen much?

My worries along these lines have greatly diminished ever since the
advent of the undo feature. It goes to hell? so what - it can go back
as well.

> > The next idea, to look at the overlapping parts once the overlap has
> > been roughly established, is also promising and has been previously
> > exploited, though I'm not entirely sure where the code is. Another
> > interesting aspect along these lines is to warp the overlapping parts
> > of two images to a common projection and run the CPGs on those warped
> > partial images, to later retransform the CPs to original image
> > coordinates. This has also been done, and I've experimented with it
> > myself, but found the gain not so noteworthy as to make me want to
> > investigate the matter more deeply -
>
> do you want some 2000 image panos to test it on?  in that case the time
> savings might be very significant. what I mean is, sure for panos containing
> 4 or 10 images this just won't matter but for gigapixel images it might save
> minutes or hours.

My last comment was not about the downscaled images but about
transforming overlapping parts of the images to a common projection.
What I did can easily be explained in standard hugin terms: you choose
rectilinear projection, rotate your panorama to put the center of the
overlap into the center of the panorama, crop the panorama to the
overlapping region and generate separate images instead of a panorama.
The two resulting 'warped' images would, ideally, be very similar, so
the CPGs should have an easy task of finding CPs. This is not about
saving time - it's about getting better-distributed and more precise
CPs. But, as I said, I didn't find the gains worth the effort. Thanks
for the offer of your test data, anyway. I may make a python plugin to
do the warped overlap extraction (it also makes for a nice acronym:
WOE) once the plugin interface is established - but the outlined
method above works just as well, only that the retransformation of the
CPs from the warped images to original image coordinates takes some
fiddling.
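
At the heart of the warped-overlap idea is just a rectilinear (gnomonic) projection and its inverse; retransforming the CPs means running the inverse on the warped coordinates. A bare-bones sketch, ignoring the lens distortion and pano rotation that hugin would also apply:

```python
import math

def gnomonic(yaw, pitch, f):
    """View direction (radians) -> rectilinear (gnomonic) image plane,
    focal length f in pixels, principal point at the origin."""
    x = f * math.tan(yaw)
    y = f * math.tan(pitch) / math.cos(yaw)
    return x, y

def gnomonic_inverse(x, y, f):
    """Rectilinear image plane -> view direction (radians)."""
    yaw = math.atan2(x, f)
    pitch = math.atan(y * math.cos(yaw) / f)
    return yaw, pitch

# Round trip: a CP found in the warped image maps back to the same ray.
x, y = gnomonic(0.3, 0.2, 1000.0)
print(gnomonic_inverse(x, y, 1000.0))   # ~ (0.3, 0.2)
```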

Kay



[hugin-ptx] Re: cpfind ransac mode

2011-02-07 Thread Jeffrey Martin
Hi Kay,

a bunch of nice ideas!

comments below

On Monday, February 7, 2011 3:34:25 PM UTC+1, kfj wrote:

> My idea is to pass a parameter to the CPG that defines magnification 
> of the images based on their field of view. Why that? I sometimes mix 
> images taken with different lenses: Fisheye shots for the 360x180 side 
> of things, and for those parts of the image where it doesn't matter so 
> much (sky, ocean etc.) - and rectilinear shots, sometimes even zoomed 
> for super-clear details, where it matters. In my experience, the CPGs 
> work best when they are presented with images which roughly share the 
> same amount of pixels per degree of fov. To achieve this with the 
> current CPGs, which, if at all, only allow scaling by a factor, I have 
> to blow up some images and shrink some, and then the CPG generation 
> works best - but it's laborious. If I could tell the CPG to scale all 
> images to, say, 50 pixels per degree overall, I would achieve 
> precisely the effect I want by specifying one single parameter. It'd 
> make things so much easier for mixed-lens takes. 
>

ok, with a minimum image width of say 320 (long side), this would be great, 
i think.

just to start things out in a simple way, couldn't we (well, not me :) 
because I don't know how) just add a parameter to cpfind specifying how much 
to reduce source images (not only halfsize or fullsize as we have now)

anyway, your idea is very interesting if in fact it would improve alignment 
of different FOV lenses. I fully agree that when using CPGs in real life, 
this doesn't work so well.
 

>
> I know that theoretically SURF and SIFT features are (quite) scale- 
> insensitive, but my experience tells me that this truth only goes so 
> far. I'm not sure how scale-insensitive the gradient-based detector in 
> cpfind is. But I feel that implementing my proposition might be easy 
> and it'd give us the opportunity to see if this might be a cheaply 
> bought improvement in performance. 
>

If it's so easy, by all means! I'll be happy to test it.
 

>
> Also, it'd instantly put an estimate on the images: if it's 1000 
> pixels from a consumer point-and-shoot camera, you may want to go 
> fullscale,


maybe only reduce to 1/2 ;)
 

> whereas 1000 pixels from an SLR sensor might as well be 
> scaled down for the CPG: The first 1000 pixels might represent 50 
> degrees, whereas the second might represent 15. Setting a default of, 
> say 50 pixels per degree, would make more sense for both images than 
> any fixed scaling. 
>
> > taking it a step further, could it be possible to first run cpfind
> > on very small versions of the images, and then after some
> > optimization, to run cpfind again on only the overlapping portions
> > of the image pairs at a higher sensitivity, to align the images more
> > precisely? i could imagine that that could increase the speed of
> > cpfind dramatically. would that be possible, or is this a bad idea?
>
> I have made experiments in this direction using up- and downscaling of 
> pto files. You can find a working prototype of my Python script here: 
>
> http://bazaar.launchpad.net/~kfj/+junk/script/view/head:/main/scale_pto.py
>  
>
> The idea is to run the process on smaller images and once the 
> orientations are established, to replace the images with full scale 
> versions and have all pto parameters that build on image coordinates 
> rescaled. I wrote this because I wanted to work on the screen-sized 
> images I carry with me on my laptop and apply the results to the full- 
> scale images back home with the fat data corpus. It works, and 
> surprisingly well. The scaled-down versions are usually a fair bit 
> crisper than the full-sized images, so there is enough detail for the 
> CPGs to work on - and since the feature detectors produce subpixel 
> accuracy, the scaled-up pto often stitches without any need for 
> further intervention - if you want you can run a global fine-tune on 
> the CPs. 
>

this always worries me. after optimizing something and you see it works, 
optimizing again always has a risk that it will all go to hell again ;) but 
maybe in real life this will not happen much?


 

>
> This method seems promising, and it's on my list of scripts to convert 
> into a plugin for the python plugin interface I've written to whet 
> everyone's appetite. Until then there's the pure python 
> implementation, and there is also a perl script by Bruno Postle, which 
> scales by factors of two: 
>
>
> http://www.google.com/url?sa=D&q=http://panotools.svn.sourceforge.net/viewvc/panotools/trunk/Panotools-Script/bin/ptohalve%3Fview%3Dmarkup%26pathrev%3D1291
>  
>
> The next idea, to look at the overlapping parts once the overlap has 
> been roughly established, is also promising and heas been previously 
> exploited, though I'm not entirely sure where the code is. Another 
> interesting aspect along 

[hugin-ptx] Re: cpfind ransac mode

2011-02-07 Thread kfj


On 7 Feb., 12:30, Jeffrey Martin <360cit...@gmail.com> wrote:
> I found that reducing source images to 25% (or less) of original size seemed
> to produce a good stitch. although I wasn't rendering the full size to
> check, it aligned a screen-size version of the pano perfectly. I wonder if
> cpfind should allow reducing image size by any arbitrary amount (not only
> half, or fullsize)? I mean, half of what? Autopano sift C has a setting that
> allows you to set the size the images are reduced to for analysis. In
> terms of speed, this is where the tweaking needs to happen, i think.

Good that you touch on this issue. I'd like to make a proposition
concerning the modification of image size. I agree that it would be
very helpful to have more precise control over the process by allowing
arbitrary scaling. But I'd go one step further. What would that be?

My idea is to pass a parameter to the CPG that defines magnification
of the images based on their field of view. Why that? I sometimes mix
images taken with different lenses: Fisheye shots for the 360x180 side
of things, and for those parts of the image where it doesn't matter so
much (sky, ocean etc.) - and rectilinear shots, sometimes even zoomed
for super-clear details, where it matters. In my experience, the CPGs
work best when they are presented with images which roughly share the
same amount of pixels per degree of fov. To achieve this with the
current CPGs, which, if at all, only allow scaling by a factor, I have
to blow up some images and shrink some, and then the CPG generation
works best - but it's laborious. If I could tell the CPG to scale all
images to, say, 50 pixels per degree overall, I would achieve
precisely the effect I want by specifying one single parameter. It'd
make things so much easier for mixed-lens takes.

insensitive, but my experience tells me that this truth only goes so
far. I'm not sure how scale-insensitive the gradient-based detector in
cpfind is. But I feel that implementing my proposition might be easy
and it'd give us the opportunity to see if this might be a cheaply
bought improvement in performance.

Also, it'd instantly put an estimate on the images: if it's 1000
pixels from a consumer point-and-shoot camera, you may want to go
fullscale, whereas 1000 pixels from an SLR sensor might as well be
scaled down for the CPG: The first 1000 pixels might represent 50
degrees, whereas the second might represent 15. Setting a default of,
say 50 pixels per degree, would make more sense for both images than
any fixed scaling.
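
In numbers, using the figures from the paragraph above (the helper and the 50 ppd target are illustrative):

```python
def ppd(width_px, hfov_deg):
    """Pixels per degree of horizontal field of view."""
    return width_px / hfov_deg

target = 50.0
compact = ppd(1000, 50)   # point-and-shoot: 20 px/deg, below target even at full size
slr = ppd(1000, 15)       # zoomed SLR crop: ~67 px/deg, above target
print(compact, target / compact)   # 20.0 2.5   -> keep at (or near) full scale
print(slr, target / slr)           # ~66.7 0.75 -> can be scaled down
```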

> taking it a step further, could it be possible to first run cpfind first on
> very small versions of the images, and then after some optimization, to run
> cpfind again on only the overlapping portions of the image pairs at a higher
> sensitivity, to align the images more precisely? i could imagine that that
> could increase the speed of cpfind dramatically. would that be possible, or
> is this a bad idea?

I have made experiments in this direction using up- and downscaling of
pto files. You can find a working prototype of my Python script here:

http://bazaar.launchpad.net/~kfj/+junk/script/view/head:/main/scale_pto.py

The idea is to run the process on smaller images and once the
orientations are established, to replace the images with full scale
versions and have all pto parameters that build on image coordinates
rescaled. I wrote this because I wanted to work on the screen-sized
images I carry with me on my laptop and apply the results to the full-
scale images back home with the fat data corpus. It works, and
surprisingly well. The scaled-down versions are usually a fair bit
crisper than the full-sized images, so there is enough detail for the
CPGs to work on - and since the feature detectors produce subpixel
accuracy, the scaled-up pto often stitches without any need for
further intervention - if you want you can run a global fine-tune on
the CPs.
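
The rescaling of the pto boils down to multiplying every pixel-based parameter (control-point coordinates, and likewise d/e shifts and crop rectangles) by the size ratio. A schematic fragment, not the actual scale_pto.py - it assumes plain 'c' control-point lines as hugin writes them:

```python
def scale_cp_line(line, factor):
    """Multiply the pixel coordinates x/y/X/Y of a pto
    control-point line by factor; leave other fields alone."""
    out = []
    for tok in line.split():
        if tok[:1] in ("x", "y", "X", "Y"):
            out.append(tok[0] + str(float(tok[1:]) * factor))
        else:
            out.append(tok)
    return " ".join(out)

# Going from quarter-size preview images back to full size:
print(scale_cp_line("c n0 N1 x100.5 y200.0 X300.0 Y400.0 t0", 4.0))
# c n0 N1 x402.0 y800.0 X1200.0 Y1600.0 t0
```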

This method seems promising, and it's on my list of scripts to convert
into a plugin for the python plugin interface I've written to whet
everyone's appetite. Until then there's the pure python
implementation, and there is also a perl script by Bruno Postle, which
scales by factors of two:

http://www.google.com/url?sa=D&q=http://panotools.svn.sourceforge.net/viewvc/panotools/trunk/Panotools-Script/bin/ptohalve%3Fview%3Dmarkup%26pathrev%3D1291

The next idea, to look at the overlapping parts once the overlap has
been roughly established, is also promising and has been previously
exploited, though I'm not entirely sure where the code is. Another
interesting aspect along these lines is to warp the overlapping parts
of two images to a common projection and run the CPGs on those warped
partial images, to later retransform the CPs to original image
coordinates. This has also been done, and I've experimented with it
myself, but found the gain not so noteworthy as to make me want to
investigate th

[hugin-ptx] Re: cpfind ransac mode

2011-02-07 Thread Jeffrey Martin
 
I found that reducing source images to 25% (or less) of original size seemed 
to produce a good stitch. although I wasn't rendering the full size to 
check, it aligned a screen-size version of the pano perfectly. I wonder if 
cpfind should allow reducing image size by any arbitrary amount (not only 
half, or fullsize)? I mean, half of what? Autopano sift C has a setting that 
allows you to set the size the images are reduced to for analysis. In 
terms of speed, this is where the tweaking needs to happen, i think.

taking it a step further, could it be possible to first run cpfind on 
very small versions of the images, and then after some optimization, to run 
cpfind again on only the overlapping portions of the image pairs at a higher 
sensitivity, to align the images more precisely? i could imagine that that 
could increase the speed of cpfind dramatically. would that be possible, or 
is this a bad idea?




Re: [hugin-ptx] Re: cpfind ransac mode

2011-01-25 Thread Jim Watters

On 2011-01-25 7:56 AM, Rogier Wolff wrote:

On Mon, Jan 24, 2011 at 11:35:46PM +0100, Pablo d'Angelo wrote:

Pre-rotating the images outside hugin is not a good idea. Even if hfov
and a/b/c parameters are applicable, what about the shift parameters
d/e? Left or right rotation makes a big difference here, and there is no
reliable way for hugin to figure that out.

Why not? When I get back from vacation I usually run a script on my
pictures collection that uses the exif info to rotate all portrait
images. This has the added advantage for the panorama photos that when
I load them into Hugin, and when I want to do manual control points,
they start out right-side-up. So: Why not rotate them?

If your camera has large d, e parameters then it makes a big difference!
Once the images have been rotated there is no way to know if they were rotated 
CW or CCW.

Landscape image: d=100, e=100.
Rotated portrait image: there is a big difference between d=-100, e=100
and d=100, e=-100.
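
The ambiguity can be written out as two different sign flips (a sketch; the actual orientation of PanoTools' d/e axes may differ, which is exactly the point - hugin cannot tell which rotation was applied):

```python
def shift_after_cw(d, e):
    # optical-centre shift after rotating the image 90 degrees clockwise
    return -e, d

def shift_after_ccw(d, e):
    # ... and after rotating it 90 degrees counter-clockwise
    return e, -d

print(shift_after_cw(100, 100))    # (-100, 100)
print(shift_after_ccw(100, 100))   # (100, -100)
```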



(The "hfov" parameter becomes a bit ambiguous, I agree. And it becomes
non-obvious that the 90 degree HFOV (landscape) and the 60 degree HFOV
(portrait) shots are from the same lens  Instead of using hfov, I
think we should move to 35mm-equiv-lens-length or diagonal-fov.)

Roger.
A portable lens profile is in the works. Thomas Sharpless has put together a 
wiki page that discusses the issues and some possible solutions. We want to 
change this, but we want to change it right, and only once.

http://wiki.panotools.org/Lens_Correction_in_PanoTools.

It is currently being discussed on the panotools-devel list on SourceForge.
https://lists.sourceforge.net/lists/listinfo/panotools-devel


--
Jim Watters
http://photocreations.ca



Re: [hugin-ptx] Re: cpfind ransac mode

2011-01-25 Thread Rogier Wolff
Hi Pablo, 

On Mon, Jan 24, 2011 at 11:35:46PM +0100, Pablo d'Angelo wrote:
> Pre-rotating the images outside hugin is not a good idea. Even if hfov 
> and a/b/c parameters are applicable, what about the shift parameters 
> d/e? Left or right rotation makes a big difference here, and there is no 
> reliable way for hugin to figure that out.

Why not? When I get back from vacation I usually run a script on my
pictures collection that uses the exif info to rotate all portrait
images. This has the added advantage for the panorama photos that when
I load them into Hugin, and when I want to do manual control points,
they start out right-side-up. So: Why not rotate them?

(The "hfov" parameter becomes a bit ambiguous, I agree. And it becomes
non-obvious that the 90 degree HFOV (landscape) and the 60 degree HFOV
(portrait) shots are from the same lens. Instead of using hfov, I
think we should move to 35mm-equiv-lens-length or diagonal-fov.)
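The appeal of diagonal-fov is that it is orientation-invariant. A small sketch (my own illustration, not hugin code; it assumes an ideal rectilinear lens on an exact 3:2 frame, so the 90/60 figures above come out as 90/~67 here):

```python
import math

def diag_fov(hfov_deg, width, height):
    """Diagonal field of view of a rectilinear lens, derived from its
    horizontal FOV and the frame dimensions (only the ratio matters)."""
    f = (width / 2) / math.tan(math.radians(hfov_deg) / 2)  # focal length in frame units
    half_diag = math.hypot(width, height) / 2
    return math.degrees(2 * math.atan(half_diag / f))

# A lens with 90 deg HFOV in landscape on a 3:2 frame (36x24) ...
f = 18.0  # (36/2) / tan(45 deg): the focal length a 90 deg HFOV implies
# ... shows a smaller HFOV in portrait, because the horizontal side is now 24:
hfov_portrait = math.degrees(2 * math.atan((24 / 2) / f))
# yet the diagonal FOV is identical in both orientations:
d_land = diag_fov(90.0, 36, 24)
d_port = diag_fov(hfov_portrait, 24, 36)
```

So one diagonal-fov value would describe both orientations of the same lens, which is exactly what the single hfov parameter cannot do.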

Roger.

-- 
** r.e.wo...@bitwizard.nl ** http://www.BitWizard.nl/ ** +31-15-2600998 **
**Delftechpark 26 2628 XH  Delft, The Netherlands. KVK: 27239233**
*-- BitWizard writes Linux device drivers for any device you may have! --*
Q: It doesn't work. A: Look buddy, doesn't work is an ambiguous statement. 
Does it sit on the couch all day? Is it unemployed? Please be specific! 
Define 'it' and what it isn't doing. - Adapted from lxrbot FAQ



[hugin-ptx] Re: cpfind ransac mode

2011-01-24 Thread Pablo d'Angelo

Hi Kay,

On 24.01.2011 18:12, kfj wrote:

> J. Martin asks
>
>> so your standard 6 shots around and 1 shot up pano (fullframe fisheye) where
>> the "up" shot has been rotated to landscape orientation by the camera - this
>> should not pose any problems for hugin?
>
> I'd say - not really a problem, but if the first six are portrait and
> the up shot landscape, if it's a manual lens (like my Samyang) I have
> to tell hugin every time it's the same, even if the image dimensions
> are the same as the other shots. More of a nuisance. I have to have two
> lens.ini files handy, or go through the 'do I want to accept the
> parameters even though the image dimensions are different' dialog
> every time.

So all your images are physically in landscape mode (i.e. width > 
height)? Then hugin should ask only once; otherwise it's a bug.

Pre-rotating the images outside hugin is not a good idea. Even if hfov 
and a/b/c parameters are applicable, what about the shift parameters 
d/e? Left or right rotation makes a big difference here, and there is no 
reliable way for hugin to figure that out.


ciao
 Pablo



Re: [hugin-ptx] Re: cpfind ransac mode

2011-01-24 Thread Bruno Postle

On Mon 24-Jan-2011 at 08:10 -0800, Jeffrey Martin wrote:

> sorry to take this thread off on a tangent, but can i clarify 
> bruno's statement?
>
> so your standard 6 shots around and 1 shot up pano (fullframe 
> fisheye) where the "up" shot has been rotated to landscape 
> orientation by the camera - this should not pose any problems for 
> hugin?


No camera I have seen actually rotates the image file saved in the 
camera, i.e. they all produce landscape images but add an EXIF tag 
indicating the position of the orientation sensor.


So if you feed these photos into Hugin it does the right thing: they 
all get the same lens number, but get the initial roll set ±90° 
depending on the EXIF data.


However if you are processing with a RAW converter, it might 
silently rotate a photo and save it in portrait format.  In this 
case Hugin would assign a different lens number.
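The orientation-tag handling Bruno describes can be sketched like this (the tag values come from the EXIF specification; the function name and the roll sign convention are my assumptions, not necessarily hugin's internals):

```python
# EXIF Orientation tag -> degrees to rotate the stored pixels clockwise
# so they display upright (values per the EXIF specification).
ORIENTATION_TO_CW_DEGREES = {
    1: 0,     # normal
    3: 180,   # upside down
    6: 90,    # rotate 90 deg clockwise to view
    8: 270,   # rotate 90 deg counter-clockwise to view (= 270 CW)
}

def initial_roll(orientation):
    """Initial roll (degrees) to assign a landscape-stored image.
    Unknown or missing tags are treated as upright; the sign
    convention here is an assumption, not hugin's documented one."""
    cw = ORIENTATION_TO_CW_DEGREES.get(orientation, 0)
    return cw - 360 if cw > 180 else cw   # fold 270 into -90
```

So both portrait shots get the same lens number as the landscape ones; only the initial roll differs, driven by the tag.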


(personally I think it would suck if I asked a RAW converter to 
process a series of photos identically but they ended up with 
different pixel dimensions).


> why is it HFOV and not FOV "in the narrow image dimension"? 
> wouldn't this be better?


See all the other recent threads about the panotools lens model.  
Yes, it isn't ideal.


--
Bruno



[hugin-ptx] Re: cpfind ransac mode

2011-01-24 Thread Jeffrey Martin
I tried this and it seems to work fine. 

Multirow is faster (nearly 3x, actually); "all images at once" was 
slower but gave the same result (for what I tested, anyway).

Pablo, does this mean that I should not heat my house using my CPU at 100% 
for the next months? :) Or should I still play around with sieve sizes and 
stuff?



[hugin-ptx] Re: cpfind ransac mode

2011-01-24 Thread Jeffrey Martin


On Monday, January 24, 2011 6:12:47 PM UTC+1, kfj wrote:
>
>
> but if the first six are portrait and 
> the up shot landscape, if it's a manual lens (like my Samyang) I have 
> to tell hugin every time it's the same, even if the image dimensions 
> are the same to the other shots. More a nuisance. I have to have two 
> lens.ini files handy, or go through the dialog every time of 'do I 
> want to accept the parameters even though the image dimensions are 
> different'. 
>

I would regard this as a major bug. The issue of camera/image rotation 
creates so many problems in panos. Why can't this be solved definitively 
without harassing the user? (If it is how kfj describes it, I would call it 
"harassment".) (PTGui is the same, BTW - differently rotated photos don't 
get treated gracefully.)



[hugin-ptx] Re: cpfind ransac mode

2011-01-24 Thread kfj


On 23 Jan., 21:53, Bruno Postle  wrote:

> It doesn't really work like this.  Hugin looks exclusively at the
> image dimensions, so both the landscape and portrait shots will have
> the same lens number and field of view.

Of course the final calculation does it right.
What I mean is that there is only one parameter in the pto, and that's
the horizontal field of view. Of course, if you have the pixel
dimensions you can arrive at a correct calculation, but if you mix
landscape and portrait shots of the same lens in a panorama, you have
images with two different hfov values. That's why Jeffrey Martin says:

> why is it HFOV and not FOV "in the narrow image dimension" ? wouldn't this
> be better?

And, having thought about it some more, I agree with him for two
reasons:

- with true fisheye images, the diagonal isn't inside the image. It
may even be silly to calculate a dfov for a fisheye - it might be
nominally more than 360 degrees. The narrow edge will be about 180
degrees in a fisheye, so no problem.

- the (current) lens correction polynomial is, if I'm not mistaken,
based on the small side of the image as well (taking half the diameter
there as unit radius), so hugin already uses this standard.

J. Martin asks

> so your standard 6 shots around and 1 shot up pano (fullframe fisheye) where
> the "up" shot has been rotated to landscape orientation by the camera - this
> should not pose any problems for hugin?

I'd say - not really a problem, but if the first six are portrait and
the up shot landscape, if it's a manual lens (like my Samyang) I have
to tell hugin every time it's the same, even if the image dimensions
are the same as the other shots. More of a nuisance. I have to have two
lens.ini files handy, or go through the 'do I want to accept the
parameters even though the image dimensions are different' dialog
every time.

Kay



[hugin-ptx] Re: cpfind ransac mode

2011-01-24 Thread Jeffrey Martin
I am strongly in favor of #1.

Only if there is no EXIF data, and the user cannot supply any additional 
data, should it fall back to #2. 

I think there is no other way :-)
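The fallback Jeffrey proposes amounts to something like this (a hypothetical sketch of the decision logic; only the `--ransacmode` option names come from cpfind, the function and parameters are mine):

```python
def choose_ransac_mode(hfov_from_exif, hfov_from_user):
    """Prefer the restrictive rpy model whenever a credible HFOV
    estimate exists (case 1); otherwise fall back to the default
    and accept some outliers (case 2)."""
    if hfov_from_exif is not None or hfov_from_user is not None:
        return "--ransacmode rpy"     # case 1: good HFOV estimate
    return "--ransacmode auto"        # case 2: no usable data
```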

Jeffrey


On Saturday, January 22, 2011 10:20:34 PM UTC+1, Pablo d'Angelo wrote:
>
> 1. User has a good estimate of the HFOV (EXIF Data or prior 
> calibrations) -> use cpfind --ransacmode rpy
> which makes cpfind virtually bullet proof to really bad mismatches.
>
> 2. Bad EXIF Data and user doesn't know about crop factors or the like -> 
> use cpfind --ransacmode auto (the default) or cpfind --ransacmode 
> homography, and accept some outliers.
>
> I hesitate to default to --ransacmode rpy, as this will probably lead to 
> quite some breakage for novice users, who enter bad crop factors.
>
> I find 2. a bit unsatisfying as it means that we will get suboptimal 
> results for many inexperienced users (and many experienced ones too, who 
> don't know about all the cpfind internals...).
>
> What's your opinion about that?
>
> - Should we add more default presets to the control point detector 
> preferences?
>
> - Try to automatically add --ransacmode rpy, if hugin could 
> successfully read HFOV from the EXIF data?
>
> - Try to robustly add HFOV to the RANSAC model? Maybe just trying a 
> range of initial HFOVs would be sufficient... However, I'm not sure if I 
> can do that with my limited time budget.
>
> Another way to reduce the problem would be to use a camera-crop factor 
> database, such as the one from Autopano PRO:
> http://www.autopano.net/wiki-en/Cameras.txt
>
> ciao
>   Pablo
>



Re: [hugin-ptx] Re: cpfind ransac mode

2011-01-24 Thread Jeffrey Martin
sorry to take this thread off on a tangent, but can i clarify bruno's 
statement?

so your standard 6 shots around and 1 shot up pano (fullframe fisheye) where 
the "up" shot has been rotated to landscape orientation by the camera - this 
should not pose any problems for hugin?

why is it HFOV and not FOV "in the narrow image dimension" ? wouldn't this 
be better?

On Sunday, January 23, 2011 9:53:11 PM UTC+1, bruno.postle wrote:
>
> On Sat 22-Jan-2011 at 23:57 -0800, kfj wrote:
> >
> > In theory this is a fine idea. But keep in mind one point that no one
> > ever addresses in this whole discussion: The treatment of FOV in hugin
> > is, if I am interpreting the mechanism right, fundamentally flawed.
> > The only thing that is asked for and processed seems to be the
> > HORIZONTAL field of view. Now if I make images with an APS-C sensor
> > and, in landscape mode, have a HFOV of 60 degrees, then do some shots
> > in portrait, suddenly the HFOV is 40 degrees.
>
> It doesn't really work like this.  Hugin looks exclusively at the 
> image dimensions, so both the landscape and portrait shots will have 
> the same lens number and field of view.
>
> If you have actually rotated the pixels of one photo in an image 
> editor then Hugin will see two different lens numbers and field of 
> views, but the field of view will be 'correct' and doesn't need 
> to be changed to get a good stitch.
>
> -- 
> Bruno
>



Re: [hugin-ptx] Re: cpfind ransac mode

2011-01-23 Thread Bruno Postle

On Sat 22-Jan-2011 at 23:57 -0800, kfj wrote:


> In theory this is a fine idea. But keep in mind one point that no one
> ever addresses in this whole discussion: The treatment of FOV in hugin
> is, if I am interpreting the mechanism right, fundamentally flawed.
> The only thing that is asked for and processed seems to be the
> HORIZONTAL field of view. Now if I make images with an APS-C sensor
> and, in landscape mode, have a HFOV of 60 degrees, then do some shots
> in portrait, suddenly the HFOV is 40 degrees.


It doesn't really work like this.  Hugin looks exclusively at the 
image dimensions, so both the landscape and portrait shots will have 
the same lens number and field of view.


If you have actually rotated the pixels of one photo in an image 
editor then Hugin will see two different lens numbers and field of 
views, but the field of view will be 'correct' and doesn't need 
to be changed to get a good stitch.


--
Bruno



[hugin-ptx] Re: cpfind ransac mode

2011-01-22 Thread kfj


On 22 Jan., 22:20, Pablo d'Angelo  wrote:

> I have thus implemented a new RANSAC model that can make use of the
> restrictions we have in panoramas, and also include prior information
> about lens type, HFOV and distortion etc. Basically, it tries to
> estimate roll,pitch and yaw for each image pair (using two control
> points), and checks if the remaining points are consistent, and repeats
> that a few times.

This is definitely the way to go. Since RANSAC bases its exclusion of
outliers on a consistency check which depends on feedback from a
model, if the model is wrong (i.e. assuming rectilinear images),
consistency of a set of points under consideration cannot be
established. Extending the tolerances here will only gloss over the
fact that the model is wrong, not the points fed into it. There is no
way to avoid having information about FOV and projection - if the EXIF
data don't yield it, the user must provide it.
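To make the model's role in RANSAC concrete, here is a deliberately simplified toy sketch (my own illustration, not cpfind's code): the model is a single yaw offset, so one match determines a candidate, and consistency of the remaining matches decides acceptance. cpfind's real rpy model estimates roll, pitch and yaw from two control points in the same spirit.

```python
import random

def ransac_yaw(matches, tol=2.0, iters=100, seed=42):
    """Toy RANSAC: estimate a pure yaw (horizontal) offset between two
    images from matched point bearings in degrees.
    matches: list of (bearing_in_img1, bearing_in_img2)."""
    rng = random.Random(seed)
    best = []
    for _ in range(iters):
        a, b = rng.choice(matches)                 # minimal sample: one match
        yaw = b - a                                # candidate model
        inliers = [(p, q) for p, q in matches if abs((q - p) - yaw) <= tol]
        if len(inliers) > len(best):
            best = inliers
    # refit the model on the consensus set
    yaw = sum(q - p for p, q in best) / len(best)
    return yaw, best

# three consistent matches (yaw ~ 30 deg) plus one gross outlier:
yaw, inliers = ransac_yaw([(0, 30.1), (10, 39.9), (20, 50.0), (5, 95.0)])
```

If the assumed model were wrong (say, the offset were not constant across the image, as with uncorrected fisheye distortion), no candidate would collect a large consensus set, which is exactly the failure mode described above.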

> Unfortunately, in some cases, the HFOV is not know very well (incomplete
> EXIF data, bad crop factors entered by the user etc...), so one cannot
> unconditionally recommend --ransacmode rpy.

You have to draw the line somewhere. If the EXIF data are missing, the
user is prompted to supply a sufficient subset of FOV, crop factor and
focal length, and the user enters wrong data, you can't make it right
for him/her. What users with no clue at all can do is make a panorama
as best they can and so arrive at an estimate of these data to use
from then on.

> 1. User has a good estimate of the HFOV (EXIF Data or prior
> calibrations) -> use cpfind --ransacmode rpy
> which makes cpfind virtually bullet proof to really bad mismatches.
>
> 2. Bad EXIF Data and user doesn't know about crop factors or the like ->
> use cpfind --ransacmode auto (the default) or cpfind --ransacmode
> homography, and accept some outliers.

I think that is a perfectly reasonable choice. And once case 2 has
produced a roughly correct output, the FOV has been established with
sufficient accuracy to use --ransacmode rpy.

> I hesitate to default to --ransacmode rpy, as this will probably lead to
> quite some breakage for novice users, who enter bad crop factors.

Quite right. I'd assume the inexperienced users aren't usually the
ones using fisheyes anyway.

> I find 2. a bit unsatisfying as it means that we will get suboptimal
> results for many inexperienced users (and many experienced ones too, who
> don't know about all the cpfind internals...).
>
> What's your opinion about that?

I feel that the problem here is in the presentation of these choices.
As long as the user, experienced or not, has to go all the way into
the CPG settings dialog to modify command line arguments, all but the
most confident and experienced users will hesitate to go down that
road. The treatment of CPGs, CPG parameters and their manipulation
needs a facelift. The capabilities of the CPGs themselves, be it your
creation cpfind or the other, patent-encumbered ones, are, imho,
perfectly sufficient.

> - Try to automatically add --ransacmode rpy, if hugin could
> successfully read HFOV from the EXIF data?

In theory this is a fine idea. But keep in mind one point that no one
ever addresses in this whole discussion: The treatment of FOV in hugin
is, if I am interpreting the mechanism right, fundamentally flawed.
The only thing that is asked for and processed seems to be the
HORIZONTAL field of view. Now if I make images with an APS-C sensor
and, in landscape mode, have a HFOV of 60 degrees, then do some shots
in portrait, suddenly the HFOV is 40 degrees. Hugin even insists on me
entering a 'different lens' instead of just calculating the diagonal
fov and realizing it's the same thing after all. So a 60 degree limit
does not necessarily work - if hfov in landscape is, say, 65 (like with
my ordinary zoom lens at 18mm) and 43 in portrait, all of a sudden
the same lens would once be treated as a fisheye and once as a
rectilinear. Please correct me if I'm mistaken.
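Kay's landscape/portrait numbers can be checked with a small calculation (my own sketch, assuming an ideal rectilinear lens on an exact 3:2 frame; real sensor dimensions and distortion explain why he sees 43 rather than the ~46 this gives):

```python
import math

def portrait_hfov(hfov_landscape_deg, aspect=3 / 2):
    """HFOV of the same rectilinear lens turned to portrait: the
    horizontal half-width shrinks by the aspect ratio while the
    focal length stays constant."""
    half = math.tan(math.radians(hfov_landscape_deg) / 2)
    return math.degrees(2 * math.atan(half / aspect))

# the zoom-at-18mm example: ~65 deg across in landscape ...
land = 65.0
port = portrait_hfov(land)   # ~46 deg on an exact 3:2 frame
# ... so a fixed 60 deg fisheye/rectilinear cutoff applied to HFOV
# alone would classify the two orientations of one lens differently.
```

This is exactly the threshold-crossing problem described above: the lens hasn't changed, only the side of the frame that "hfov" happens to measure.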

Kay
