[hugin-ptx] Re: Blending very large numbers of files
On 16 Dec, 18:49, Matthew Gates matthew...@gmail.com wrote:
> On 14 December 2010 22:48, kfj _...@yahoo.com wrote:
> Well, we have tarballs with the original plate jpegs split into small chunks. That's how the data arrived to us after some processing by another. The tarballs contain sub-directories x1 x2 x4 etc. x1 is the highest res. Don't worry about the rest of the directories, they are just lower resolution versions of the same data.

Okay, I had a first look at the tarball you linked to. If I interpret the data correctly, x64 is the lowest resolution and shows the complete plate, and the other directories offer tiles at various resolutions, with the highest resolution images contained in x1, where there are 64*64 = 4096 images.

The data don't look to me as if they were suffering from significant vignetting. The background of the image has a slight reddish tint towards the center, in the order of magnitude of 10 (out of 256), but I don't think that would translate into such a visible effect as can be seen on your sample stitch image. So I'm a bit puzzled - is this maybe just one plate where the problem we're trying to tackle isn't so visible?

At any rate, to get an idea of the overall quality of the data, it would be useful to have a set of x64 plate images, covering an area of - or even better, the whole sky. If you have the data uncompressed somewhere you could just lump them together (including the appropriate metadata) with a command like

tar cvf x64.tar S*/x64/*.jpg S*/x64/*.hhh

The resulting tar file would be smaller than one of the pyramids, since it would only contain 1792 images, one for each tarball. My reasoning here is that any vignette-like effect would be perfectly visible on the x64 image; the higher-res tiles would not offer much extra information for the purpose at hand.
The 1792 images at the lowest resolution would offer an idea of the overall problem and the need for processing and would therefore be more useful than an arbitrary pyramid with all resolutions - I suppose the x64 image is nothing but a compressed composite of the higher-res tiles anyway.

The tiles come with a cornucopia of metadata, most of which exceeds my admittedly narrow astronomic horizon - what I seem to have gleaned, though, is that the projection is gnomonic and localization of the individual images should be easy straight from the metadata. What I need for a trial stitch in hugin (from which we might be able to derive the vignetting data) is a translation of the astronomical nomenclature in the hhh files into hugin's system. Hugin uses a notion of roll, pitch and yaw. I suspect roll will be zero for the images, pitch would refer to the center of the image and be in degrees from the equator, and yaw to the center of the image in degrees from any reference point on the equator you care for. Maybe you could point me to which of the metadata to touch for the purpose and how to translate them, if necessary. I suspect the relevant data are in the 'Hour Angle' and 'Zenith distance' fields in the hhh file; hour angle could be translated straight into yaw, and zenith distance would be pitch + 90 degrees? I could then extract them from the hhh files. Alternatively, put small files with the images containing roll, pitch and yaw in degrees - then I wouldn't have to do the extraction myself ;-)

> The x64 directories in the .tgz files might be useful here, as they are full-plate at low res. We could maybe extract these, experiment with blending and even stitch them together into a panorama to check it. 1792 300x300 tiles isn't so bad. I can do the extraction of this data on the server and create a single archive file which should be a manageable size.

Sounds very good; I fully agree, hope to see the data soon!
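[Editorial note: the hour angle / zenith distance mapping hypothesized above could be sketched as below. This is a minimal sketch under assumptions from the thread, not a verified reading of the hhh format; one detail worth noting is that zenith distance is measured down from the zenith, so pitch would plausibly be its complement, 90° minus the zenith distance.]

```python
# Sketch: translate the hypothesized hhh angles into hugin's yaw/pitch/roll.
# Field choice and sign conventions are guesses from the thread, not a
# verified reading of the hhh metadata format.

def hhh_to_ypr(hour_angle_deg, zenith_distance_deg):
    """Map hour angle / zenith distance to (yaw, pitch, roll) in degrees.

    Assumptions: yaw is taken straight from the hour angle; zenith
    distance is measured down from the zenith, so pitch is its
    complement (90 - zd); roll is assumed zero for all plates.
    """
    yaw = hour_angle_deg % 360.0          # wrap into [0, 360)
    pitch = 90.0 - zenith_distance_deg    # zd = 0 means zenith, i.e. pitch 90
    roll = 0.0
    return yaw, pitch, roll

# Example: a plate 10 degrees from the zenith at hour angle 15 degrees
print(hhh_to_ypr(15.0, 10.0))  # -> (15.0, 80.0, 0.0)
```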
with regards Kay -- You received this message because you are subscribed to the Google Groups Hugin and other free panoramic software group. A list of frequently asked questions is available at: http://wiki.panotools.org/Hugin_FAQ To post to this group, send email to hugin-ptx@googlegroups.com To unsubscribe from this group, send email to hugin-ptx+unsubscr...@googlegroups.com For more options, visit this group at http://groups.google.com/group/hugin-ptx
[hugin-ptx] Re: Blending very large numbers of files
On 17 Dec, 08:09, Pablo d'Angelo pablo.dang...@web.de wrote:
> If you want to try the hugin vignetting correction, one needs to know corresponding pixels between the overlapping plates. There are two possibilities: 1. Add support for the input image (plate) format to hugin, and write a script that creates a PTO file from the metadata in the plate files. Then the standard workflow could be used (on the x64 images). The corresponding parameters could then be used for radiometric correction of the individual plates. If we add the TOAST projection to panotools, it would be possible to directly output TOAST; otherwise it's possible to produce corrected plate images.

I think that's a good path for now; the pto should be easy to make - I've had a look at the attached metadata and it does look like it's all there (wish all the other incoming images were only half as well documented - these astronomers sure are precise...)

> Looking at http://porpoisehead.net/images/dss_blend_needed.jpg I'm a bit sceptical how well it might work with hugin's vignetting correction though.

so am I

> Do you know if the position of the plate in the telescope's focal plane is available in the metadata? Do you have a document describing the metadata?

If my notion of astronomy isn't wrong, I suppose it's one plate at a time, right bang in the middle where the focal point of the mirror is. Center of image = optical axis, which should be normal to the image plane. Then leave the plate there for an hour with the telescope slowly rotating to make up for the earth's rotation, then next plate - as long as the night lasts and there are no clouds. Matthew, please correct me if I'm wrong.

> For a first experiment, a smaller subset of the mosaic would be sufficient, maybe a 5x5 grid. For this test, the alignment could be done in hugin to avoid the steps I have mentioned before. The full data will be a real challenge, even when using the tiny images.

I disagree.
At the lowest resolution, so at x64, the total amounts to a measly 161 megapixels; the transfer volume is much less, since the JPGs are quite cruelly compressed [this surprises me - I'm almost certain the true source data would not be JPG, and the pyramids at hand are already a few steps down the processing pipeline...] The real work is writing the script from the metadata. Also it seems to me that the problems are more pronounced in some areas than others. Best to have the full x64 set.

> The full mosaic would be much simpler if we had a reference image that covers the whole mosaic. It only needs to contain the background brightness, not all the stars etc. This would make the vignetting correction much simpler and probably yield a nicer result.

We would have that if we stitched together the x64 tiles, and since they are so precisely located, this stitch should come out really well.

with regards
Kay
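[Editorial note: the reference-image idea discussed above - a background-only image used to divide out a vignette-like gradient - could be prototyped along these lines. This is a sketch under assumptions (numpy available, background estimated as a coarse grid of per-block medians), not hugin's actual vignetting model.]

```python
import numpy as np

def background_grid(img, block=32):
    """Estimate a smooth background as per-block medians.

    Stars are sparse and bright, so a median over each block
    approximates the sky background; this stands in for the
    'reference image containing only the background brightness'.
    """
    h, w = img.shape
    gh, gw = h // block, w // block
    grid = img[:gh * block, :gw * block].reshape(gh, block, gw, block)
    return np.median(grid, axis=(1, 3))

def flatten(img, block=32):
    """Divide out the estimated background to remove a vignette-like gradient."""
    bg = background_grid(img, block)
    # Upsample the coarse grid back to image size (nearest neighbour).
    bg_full = np.repeat(np.repeat(bg, block, axis=0), block, axis=1)
    h, w = bg_full.shape
    return img[:h, :w] / np.maximum(bg_full, 1e-6)
```

A smoother interpolation of the background grid (bilinear instead of nearest neighbour) would reduce blocking artefacts; the median is just the simplest star-resistant estimator.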
[hugin-ptx] Re: have you heard about TOAST projection?
On 16 Dec, 15:50, Seb Perez-D sbprzd+...@gmail.com wrote:
> On Thu, Dec 16, 2010 at 15:48, Sebastien Perez-Duarte sebastien.perez-dua...@m4x.org wrote:
> A much better projection is the Peirce Quincuncial. It is also square, but is almost everywhere conformal. Sorry, I forgot the Wikipedia link! http://en.wikipedia.org/wiki/Peirce_quincuncial_projection

Hey, I really like this projection! You're right, it has more aesthetic appeal than TOAST.

Kay
Re: [hugin-ptx] Re: Blending very large numbers of files
On 17 December 2010 12:57, kfj _...@yahoo.com wrote:
> The tiles come with a cornucopia of metadata, most of which exceeds my admittedly narrow astronomic horizon - what I seem to have gleaned, though, is that the projection is gnomonic and localization of the individual images should be easy straight from the metadata. What I need for a trial stitch in hugin (from which we might be able to derive the vignetting data) is a translation of the astronomical nomenclature in the hhh files into hugin's system. Hugin uses a notion of roll, pitch and yaw. I suspect roll will be zero for the images, pitch would refer to the center of the image and be in degrees from the equator, and yaw to the center of the image in degrees from any reference point on the equator you care for. Maybe you could point me to which of the metadata to touch for the purpose and how to translate them, if necessary. I suspect the relevant data are in the 'Hour Angle' and 'Zenith distance' fields in the hhh file; hour angle could be translated straight into yaw, and zenith distance would be pitch + 90 degrees? I could then extract them from the hhh files. Alternatively, put small files with the images containing roll, pitch and yaw in degrees - then I wouldn't have to do the extraction myself ;-)

I hope Fabien will join in the conversation here. He's been dealing with the processing of these images into the TOAST projection and so is much more familiar with this stuff than me.

> > The x64 directories in the .tgz files might be useful here, as they are full-plate at low res. We could maybe extract these, experiment with blending and even stitch them together into a panorama to check it. 1792 300x300 tiles isn't so bad. I can do the extraction of this data on the server and create a single archive file which should be a manageable size.

> Sounds very good; I fully agree, hope to see the data soon!
Here's a tarball with the x64 images and their associated meta-data files: http://porpoisehead.net/misc/dss_low.tar.gz (~11 meg)

I'm assuming the N??? ones are for the Northern hemisphere and the S??? files are for the Southern hemisphere, but this should become apparent from the metadata analysis.

Matthew
[hugin-ptx] why is vertical line detection so hard?
ok, i will also take my thoughts from this thread https://groups.google.com/forum/#!topic/hugin-ptx/27hdrRBxVYY and expand on them in a new thread.

Why can't any (known) software do automatic vertical line detection (and thus levelling) of images?

I know it's a hard problem. But I suspect it is something that has 1) been solved already in some proprietary systems that we don't know about, and 2) not been important enough to have been solved in academia or other places, where we would have heard about it.

I'm interested to hear anybody's thoughts on this matter.

cheers!
Jeffrey
[hugin-ptx] Re: why is vertical line detection so hard?
On 17 Dec, 15:52, Jeffrey Martin 360cit...@gmail.com wrote:
> Why can't any (known) software do automatic vertical line detection (and thus levelling) of images?

So you're quite specific about _vertical_ line detection. Line detection per se isn't really a problem, and I think there's even software in the hugin bundle that uses line detection (doesn't the lens calibration tool do so?). The trouble is just - how would you know that any given line in any image is vertical and not of some other orientation? So you either need to make preliminary assumptions (i.e. images are roughly upright, and only lines within a certain angle range from 'up' are considered) - or, again, you need user input.

The current mechanism for levelling uses a statistical approach, but doesn't really look at the images. You could look at it as some sort of user input, though: the user will have tried to hold the camera upright, and the statistics make the best of it. Maybe if the current mechanism were applied first, and detected lines 'near assumed vertical' were then looked for, an automatic detection could be attempted.

with regards
Kay
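[Editorial note: the "angle range from 'up'" heuristic described above could look like this. A sketch only: it is a pure-Python filter over line segments supplied by any detector (e.g. a Hough transform); the detection step itself and the tolerance value are assumptions, not part of hugin.]

```python
import math

def near_vertical(segments, tol_deg=15.0):
    """Keep line segments within tol_deg of vertical.

    segments: iterable of (x1, y1, x2, y2) tuples, e.g. from a
    Hough-transform line detector. This encodes the preliminary
    assumption from the thread: images are roughly upright, so only
    lines close to 'up' are treated as vertical candidates.
    """
    kept = []
    for x1, y1, x2, y2 in segments:
        # Angle measured from the vertical axis, folded into [0, 90].
        ang = math.degrees(math.atan2(abs(x2 - x1), abs(y2 - y1)))
        if ang <= tol_deg:
            kept.append((x1, y1, x2, y2))
    return kept

# A vertical edge, a slightly tilted one, and a horizontal one:
print(near_vertical([(10, 0, 10, 50), (10, 0, 15, 50), (0, 10, 50, 10)]))
# -> [(10, 0, 10, 50), (10, 0, 15, 50)]
```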
Re: [hugin-ptx] Re: why is vertical line detection so hard?
Hi Kay,

I would like to have automatic leveling too. Assuming that someone is at least trying to get it right, all those assumptions are perfectly valid.

Jan

On Fri, Dec 17, 2010 at 4:45 PM, kfj _...@yahoo.com wrote:
> On 17 Dec, 15:52, Jeffrey Martin 360cit...@gmail.com wrote:
> > Why can't any (known) software do automatic vertical line detection (and thus levelling) of images?
>
> So you're quite specific about _vertical_ line detection. Line detection per se isn't really a problem, and I think there's even software in the hugin bundle that uses line detection (doesn't the lens calibration tool do so?). The trouble is just - how would you know that any given line in any image is vertical and not of some other orientation? So you either need to make preliminary assumptions (i.e. images are roughly upright, and only lines within a certain angle range from 'up' are considered) - or, again, you need user input. The current mechanism for levelling uses a statistical approach, but doesn't really look at the images. You could look at it as some sort of user input, though: the user will have tried to hold the camera upright, and the statistics make the best of it. Maybe if the current mechanism were applied first, and detected lines 'near assumed vertical' were then looked for, an automatic detection could be attempted.
[hugin-ptx] Re: Blending very large numbers of files
On Dec 17, 3:02 pm, Matthew Gates matthew...@gmail.com wrote:
> On 17 December 2010 12:57, kfj _...@yahoo.com wrote:
> > The tiles come with a cornucopia of metadata, most of which exceeds my admittedly narrow astronomic horizon - what I seem to have gleaned, though, is that the projection is gnomonic and localization of the individual images should be easy straight from the metadata. What I need for a trial stitch in hugin (from which we might be able to derive the vignetting data) is a translation of the astronomical nomenclature in the hhh files into hugin's system. Hugin uses a notion of roll, pitch and yaw. I suspect roll will be zero for the images, pitch would refer to the center of the image and be in degrees from the equator, and yaw to the center of the image in degrees from any reference point on the equator you care for. Maybe you could point me to which of the metadata to touch for the purpose and how to translate them, if necessary. I suspect the relevant data are in the 'Hour Angle' and 'Zenith distance' fields in the hhh file; hour angle could be translated straight into yaw, and zenith distance would be pitch + 90 degrees? I could then extract them from the hhh files. Alternatively, put small files with the images containing roll, pitch and yaw in degrees - then I wouldn't have to do the extraction myself ;-)
>
> I hope Fabien will join in the conversation here. He's been dealing with the processing of these images into the TOAST projection and so is much more familiar with this stuff than me.
>
> > > The x64 directories in the .tgz files might be useful here, as they are full-plate at low res. We could maybe extract these, experiment with blending and even stitch them together into a panorama to check it. 1792 300x300 tiles isn't so bad. I can do the extraction of this data on the server and create a single archive file which should be a manageable size.
>
> > Sounds very good; I fully agree, hope to see the data soon!
Hi, I'm Fabien from Stellarium, brought here by Matthew.

> Here's a tarball with the x64 images and their associated meta-data files: http://porpoisehead.net/misc/dss_low.tar.gz (~11 meg) I'm assuming the N??? ones are for the Northern hemisphere and the S??? files are for the Southern hemisphere, but this should become apparent from the metadata analysis.

Yes, N stands for North, S for South.

So I am not completely sure what we need as input: I think for what we need we can make the assumption that the plate projection is gnomonic. In this case I can extract the lon/lat positions of the 4 corners of each plate. Then what do we need to do with that? Can the .pto be generated from just that?

Fabien
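[Editorial note: for reference, the corner lon/lats under a gnomonic (tangent-plane) plate projection follow from the standard inverse gnomonic formulas (Snyder, "Map Projections: A Working Manual"). The function names and the square-plate/half-fov parameterization below are illustrative assumptions, not part of any of the tools discussed.]

```python
import math

def gnomonic_inverse(x, y, lon0, lat0):
    """Tangent-plane point (x, y) -> (lon, lat), all in radians.

    x, y are offsets on the plane tangent at (lon0, lat0), in units of
    tan(angular offset). Standard inverse gnomonic formulas.
    """
    rho = math.hypot(x, y)
    if rho == 0.0:
        return lon0, lat0
    c = math.atan(rho)
    lat = math.asin(math.cos(c) * math.sin(lat0)
                    + y * math.sin(c) * math.cos(lat0) / rho)
    lon = lon0 + math.atan2(
        x * math.sin(c),
        rho * math.cos(lat0) * math.cos(c) - y * math.sin(lat0) * math.sin(c))
    return lon, lat

def plate_corners(lon0_deg, lat0_deg, half_fov_deg):
    """lon/lat (degrees) of the 4 corners of a square gnomonic plate."""
    t = math.tan(math.radians(half_fov_deg))
    out = []
    for sx, sy in ((-1, -1), (1, -1), (1, 1), (-1, 1)):
        lon, lat = gnomonic_inverse(sx * t, sy * t,
                                    math.radians(lon0_deg),
                                    math.radians(lat0_deg))
        out.append((math.degrees(lon), math.degrees(lat)))
    return out
```

For small plates near the equator the corners land, as expected, close to center ± half_fov in both axes; the distortion grows with plate size and declination.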
[hugin-ptx] Re: Blending very large numbers of files
On 17 Dec, 15:02, Matthew Gates matthew...@gmail.com wrote:
> Here's a tarball with the x64 images and their associated meta-data files: http://porpoisehead.net/misc/dss_low.tar.gz (~11 meg)

Thank you. I have already had a go at the data, but no conclusive result. I'd like to discuss my assumptions about the metadata, while hugin is having a hard time swallowing a pto file with almost 900 images ;-)

obvious:
- The metadata are organized in 80-character blocks
- The leading 8 characters, followed by a '=' sign, contain the field name
- The next 30 characters are the value, followed by an optional comment

what I extracted:
- I have taken the field CRVAL1 (commented: Rectascension of the reference pixel)
- and the field CRVAL2 (Declination of same) as yaw and pitch, roll zero
- I have assumed the hfov to be approximately 6.6266 degrees, but I'm unsure here: it's half-guessed, combined from the fields
  PLTSCALE= 67.2000 /Detector: Plate Scale arcsec per mm
  PLTSIZEX=355.0 /Detector: Plate X dimension mm
  and this may well be wrong. Please help me with the field of view - I'm not sure if the whole plate dimension goes into the 300x300 tile or just some part of it.
- There are the fields CRPIX1 and CRPIX2, each valued 61.0, denoting the 'X Refernce and Y Refernce Pixel'. Does this mean that the rectascension and declination do not refer to the tile center? I need help with that, as well.

I batch-extracted the metadata with a python script which wrote out the pto file; the pto goes like

p w2000 h1000 f2
o nN002_00_00_x64.jpg w300 h300 f0 v6.62 r0.0 y15.701600 p83.455875
o nN003_00_00_x64.jpg w300 h300 f0 v6.62 r0.0 y53.398000 p83.397969
...

I did a trial stitch with a subset of about 100 images and it came out looking plausible, but with stars it's of course hard to tell if the overlap is right. Currently I have asked hugin to do a sample stitch of the whole northern data with my assumed positions and fovs; this should take a long while, if it succeeds at all.
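[Editorial note: the extraction described above could look roughly like this. A sketch only: the 80-character card parsing and the hfov arithmetic (plate scale in arcsec/mm times plate size in mm, divided by 3600) follow the assumptions stated in the mail; real hhh files may need more careful handling, and the function names are illustrative.]

```python
def parse_cards(header_bytes):
    """Parse FITS-style 80-character cards: 'KEYWORD = value /comment'."""
    fields = {}
    text = header_bytes.decode('ascii', errors='replace')
    for i in range(0, len(text) - 79, 80):
        card = text[i:i + 80]
        if '=' not in card[:10]:
            continue                     # comment / blank card
        key = card[:8].strip()
        value = card[9:].split('/', 1)[0].strip()
        fields[key] = value
    return fields

def hfov_from_plate(fields):
    """Assumed hfov: plate scale (arcsec/mm) * plate size (mm) / 3600."""
    return float(fields['PLTSCALE']) * float(fields['PLTSIZEX']) / 3600.0

def pto_image_line(name, fields, hfov):
    """Emit a hugin 'o' line using CRVAL1/2 as yaw/pitch, roll zero."""
    yaw = float(fields['CRVAL1'])
    pitch = float(fields['CRVAL2'])
    return 'o n%s w300 h300 f0 v%.4f r0.0 y%f p%f' % (name, hfov, yaw, pitch)

# With the two values quoted in the mail:
cards = ('PLTSCALE=                67.2000 /Detector: Plate Scale arcsec per mm'.ljust(80)
         + 'PLTSIZEX=                 355.0 /Detector: Plate X dimension mm'.ljust(80))
f = parse_cards(cards.encode('ascii'))
print(hfov_from_plate(f))  # 67.2 * 355 / 3600 = 6.6266... degrees, matching the guess
```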
I'll keep you posted; it would be nice if you could help with the metadata.

with regards
Kay
[hugin-ptx] Re: Blending very large numbers of files
On 17 Dec, 17:51, Fabien fabien.cher...@googlemail.com wrote:
> Hi, I'm Fabien from Stellarium brought here by Matthew.

Hi Fabien. Sorry, our posts crossed - I was busy writing up mine while yours came in.

> Yes N stands for North, S for South. So I am not completely sure what we need as input: I think for what we need we can make the assumption that the plate projection is gnomonic. In this case I can extract the lon/lat positions of the 4 corners of each plate. Then what do we need to do with that? Can the .pto be generated from just that?

As you can see from my previous post, I've already made some assumptions here. For the pto to work, our reference needs to be the center of the tile, and we need the horizontal field of view. I'll have to dig into the reference to see if hugin can take gnomonic source data - if not, it shouldn't be too hard to transform to rectilinear, though every transformation degrades the data. For my trial I've assumed rectilinear, which shouldn't be too far off for an initial test (or isn't it even the same, come to think of it?).

The precise field of view of the actual 300x300 tile is absolutely crucial to make the overlap correct. The other issue, concerning the precise location of the reference point, is just as crucial - if the reference is to another point than the center, the RA/Dec values have to be recalculated for the image center, which shouldn't be too hard. I'd need to know where (0,0) is, though - bottom left with y up?

Apart from that I see no problems - if hugin can't swallow the lot in one go, we can just make like two dozen strips and blend them in a second step - my hugin job is already done warping and the blender is half done loading the warped tiles. Once these initial technicalities are dealt with, we can proceed to the real issue, which is the detection and possible removal of assumed vignetting artefacts.
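[Editorial note: if the reference pixel does turn out to be off-center, the recalculation mentioned above could be approximated as below. A small-angle tangent-plane sketch only; the pixel conventions (origin corner, y direction) are exactly the open questions in the mail, so they appear here as explicit, assumed parameters.]

```python
import math

def recenter_radec(ra_deg, dec_deg, crpix_x, crpix_y,
                   width, height, deg_per_pix):
    """Shift RA/Dec given at (crpix_x, crpix_y) to the image center.

    Small-angle approximation: the RA offset is scaled by 1/cos(dec).
    Assumes x grows with RA and y with Dec from a bottom-left origin -
    exactly the convention still to be confirmed for the hhh files.
    """
    dx = (width / 2.0) - crpix_x    # pixels from reference to center
    dy = (height / 2.0) - crpix_y
    dec_c = dec_deg + dy * deg_per_pix
    ra_c = ra_deg + dx * deg_per_pix / math.cos(math.radians(dec_deg))
    return ra_c, dec_c

# CRPIX = (61, 61) on a 300x300 tile spanning ~6.6267 degrees:
print(recenter_radec(15.7016, 83.455875, 61.0, 61.0, 300, 300, 6.6267 / 300))
```

At dec ~ 83 degrees the 1/cos(dec) factor is large, so near the pole a proper tangent-plane (gnomonic) recalculation would be preferable to this linear shift.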
with regards
Kay