Re: [GRASS-user] Workflow of a classification project with orthophotos

2008-08-01 Thread Moritz Lennert

On 31/07/08 20:39, Nikos Alexandris wrote:

Any Open Source alternatives for image segmentation?


SAGA GIS has some segmentation algorithms included:
http://www.saga-gis.uni-goettingen.de/html/index.php

Moritz


Re: [GRASS-user] Workflow of a classification project with orthophotos

2008-08-01 Thread Maciej Sieczka

Nikos Alexandris wrote:

how do Open Source professionals do image normalisation for aerial
photos... let's say 300 photos? I cannot imagine that people sit down
and extract pseudoinvariant targets for 300 photos (unless they are
paid a lot for that).


Nikos,

Have you looked at OSSIM? Not that I'm sure it provides the tool, but
it seems likely.

--
Maciej Sieczka
www.sieczka.org


Re: [GRASS-user] Workflow of a classification project with orthophotos

2008-08-01 Thread Nikos Alexandris
On Fri, 2008-08-01 at 12:01 +0200, Maciej Sieczka wrote:
 Nikos Alexandris wrote:
  how do Open Source professionals do image normalisation for aerial
  photos... let's say 300 photos? I cannot imagine that people sit down
  and extract pseudoinvariant targets for 300 photos (unless they are
  paid a lot for that).
 
 Nikos,
 
 Have you looked at OSSIM? Not that I'm sure it provides the tool, but
 it seems likely.
 

Thanks for the suggestion, Maciej.

OSSIM sounds very promising (from what I've read so far). To date I
have never managed to get OSSIM running under Ubuntu. I've only seen it
on some Windows boxes, and partially running under Wine. It's probably
an adventure to compile it properly.



Re: [GRASS-user] Workflow of a classification project with orthophotos

2008-08-01 Thread Nikos Alexandris
On Fri, 2008-08-01 at 15:29 +0200, G. Allegri wrote:
 A colleague sent me these tips for running OSSIM on Ubuntu 7.10:
 
 
 http://trac.osgeo.org/ossim/wiki/Ubuntu-7.10Build
 
 After every "make; make install", run ldconfig and start another
 shell to continue the compilation.
 
 In /etc/ld.so.conf.d/, within a file ossim.conf, I've listed the
 following library paths:
 
 /usr/lib
 /usr/local/lib
 /home/sasha/GIS/ossim/ossim/lib
 --
 
 Yet I've never found the time to try it...
 
 Giovanni

I've been trying in the past, and again today... but no luck. It's not
an easy process. I wonder how this affects the status of OSSIM as an
OSGeo tool.

Shouldn't all OSGeo packages be installable on most major platforms?
Well, that's a question for another list.

Cheers, Nikos



Re: [GRASS-user] Workflow of a classification project with orthophotos

2008-07-31 Thread Nikos Alexandris
After examining the mosaic I found many large differences. I conclude
that the producer did not perform any radiometric or topographic
corrections. It is a collage rather than a mosaic :-)

Is this the way it should be?

Thank you,
Nikos




Re: [GRASS-user] Workflow of a classification project with orthophotos

2008-07-31 Thread Jonathan Greenberg

Nikos:

   Performing relative radiometric normalization is a *requirement* for
applying a single classification to multiple images (and for change
detection). Unfortunately, it is not an algorithm that is available (to
my knowledge), out of the box, on ANY remote sensing platform (GRASS,
ENVI, etc.). However, you can do the radiometric normalization yourself
-- the idea is that pixels in the overlap zone between two images which
are invariant (e.g. have not changed in structure, spectral properties
or, in more complex architectures like trees, sun angle) should be
linearly related to their counterparts in the other image. Assuming
this, you can manually choose a set of pseudoinvariant targets (pairs
of pixels which are at the same location and are not changing) between
the two images and calculate an orthogonal regression to generate gains
and offsets. One of those images therefore becomes your reference and
the other your target; the gains/offsets are applied to the target
image.
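
   For illustration -- not from the original post -- here is a minimal
numpy sketch of the gain/offset fit described above, assuming you
already have paired pixel values sampled at the pseudoinvariant
locations in the target and reference images (the sample arrays at the
bottom are made up):

import numpy as np

def orthogonal_regression(target_px, reference_px):
    """Fit reference = gain * target + offset by orthogonal (total
    least squares) regression on pseudoinvariant pixel pairs."""
    x = np.asarray(target_px, dtype=float)
    y = np.asarray(reference_px, dtype=float)
    mx, my = x.mean(), y.mean()
    # The principal axis of the joint scatter gives the orthogonal slope.
    cov = np.cov(x, y)
    eigvals, eigvecs = np.linalg.eigh(cov)
    vx, vy = eigvecs[:, np.argmax(eigvals)]
    gain = vy / vx
    offset = my - gain * mx
    return gain, offset

def apply_normalization(target_band, gain, offset):
    """Apply the gain/offset to the whole target band, clipped to 8-bit."""
    out = gain * target_band.astype(float) + offset
    return np.clip(out, 0, 255).astype(np.uint8)

# Hypothetical samples taken at the same locations in both images.
target_samples = [52, 60, 75, 90, 110, 140]
reference_samples = [48, 57, 71, 88, 105, 133]
gain, offset = orthogonal_regression(target_samples, reference_samples)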


   There are automated algorithms for doing the pseudoinvariant pixel
selection (search for "radiometric normalization remote sensing" on
Google Scholar), or, if you assume that the images do not change between
dates and are WELL rectified to one another, you can extract the ENTIRE
overlap zone between the two images and calculate the regressions based
on that. This last suggestion is probably the fastest, but it also
incurs the most error and I wouldn't necessarily recommend it.
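
   Purely as a sketch (again not from the original mail): the
whole-overlap variant, using an ordinary least-squares fit for brevity
instead of the orthogonal regression above, and assuming the two images
are already co-registered on the same grid with 0 marking nodata:

import numpy as np

def overlap_gain_offset(target_band, reference_band, nodata=0):
    """Least-squares fit over every pixel that is valid in both images
    -- the quick but noisier variant described above."""
    valid = (target_band != nodata) & (reference_band != nodata)
    x = target_band[valid].astype(float)
    y = reference_band[valid].astype(float)
    gain, offset = np.polyfit(x, y, 1)  # slope, intercept
    return gain, offset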


   This would be a VERY good algorithm to add to GRASS -- if anyone is
interested in coding this, I can help design the algorithm (including
which automated invariant target selection algorithms work best).


--j

Nikos Alexandris wrote:

After examining the mosaic I found many large differences. I conclude
that the producer did not perform any radiometric or topographic
corrections. It is a collage rather than a mosaic :-)

Is this the way it should be?

Thank you,
Nikos




--
Jonathan A. Greenberg, PhD
Postdoctoral Scholar
Center for Spatial Technologies and Remote Sensing (CSTARS)
University of California, Davis
One Shields Avenue
The Barn, Room 250N
Davis, CA 95616
Cell: 415-794-5043
AIM: jgrn307, MSN: [EMAIL PROTECTED], Gchat: jgrn307



Re: [GRASS-user] Workflow of a classification project with orthophotos

2008-07-31 Thread Nikos Alexandris
On Thu, 2008-07-31 at 11:17 -0700, Jonathan Greenberg wrote:
 Nikos:
 
 Performing relative radiometric normalization is a *requirement* for
 applying a single classification to multiple images (and for change
 detection). Unfortunately, it is not an algorithm that is available
 (to my knowledge), out of the box, on ANY remote sensing platform
 (GRASS, ENVI, etc.). However, you can do the radiometric normalization
 yourself -- the idea is that pixels in the overlap zone between two
 images which are invariant (e.g. have not changed in structure,
 spectral properties or, in more complex architectures like trees, sun
 angle) should be linearly related to their counterparts in the other
 image. Assuming this, you can manually choose a set of pseudoinvariant
 targets (pairs of pixels which are at the same location and are not
 changing) between the two images and calculate an orthogonal
 regression to generate gains and offsets. One of those images
 therefore becomes your reference and the other your target; the
 gains/offsets are applied to the target image.
 
 There are automated algorithms for doing the pseudoinvariant pixel
 selection (search for "radiometric normalization remote sensing" on
 Google Scholar), or, if you assume that the images do not change
 between dates and are WELL rectified to one another, you can extract
 the ENTIRE overlap zone between the two images and calculate the
 regressions based on that. This last suggestion is probably the
 fastest, but it also incurs the most error and I wouldn't necessarily
 recommend it.
 
 This would be a VERY good algorithm to add to GRASS -- if anyone is
 interested in coding this, I can help design the algorithm (including
 which automated invariant target selection algorithms work best).
 
 --j

Jonathan,

thank you very much for your reply. I've done my homework and have
already read previous posts of yours, as well as posts from other
people. I already know this process, as I performed it in a change
detection project [1].

It's a time-consuming process even for just 2 images. My real BIG
question is: how do Open Source professionals do image normalisation
for aerial photos... let's say 300 photos? I cannot imagine that people
sit down and extract pseudoinvariant targets for 300 photos (unless
they are paid a lot for that).

As I wrote, the mosaic that I work on is a MESS, and the producers do
not provide the original data, so I don't have any overlapping zones at
all :D So I can forget the normalisation anyway!

The next possible solution for mapping my forest gaps (see my first and
second mails) is, I think, to somehow extract only segments and then
identify the forest gaps visually. The segmentation would save me time,
since it's faster to recognise homogeneous gaps that way. Now I am
rather disappointed since I can't get i.smap to do this
segmentation-only task. And of course I cannot collect training samples
for 300 photos.
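
A rough sketch of the supervised i.smap chain in question (via the
GRASS Python scripting interface; all map names are hypothetical),
showing the training step that makes it impractical for 300 photos:

import grass.script as gs  # assumes this runs inside a GRASS session

# Group the three orthophoto bands.
gs.run_command("i.group", group="ortho", subgroup="rgb",
               input="ortho.red,ortho.green,ortho.blue")

# Generate SMAP signatures from a raster of digitised training areas.
gs.run_command("i.gensigset", trainingmap="training_areas",
               group="ortho", subgroup="rgb", signaturefile="smap_sig")

# Sequential maximum a posteriori classification/segmentation.
gs.run_command("i.smap", group="ortho", subgroup="rgb",
               signaturefile="smap_sig", output="forest_classes")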

Any Open Source alternatives for image segmentation?

.

[1] Details: I performed an empirical image normalisation, that is, a
regression-based normalisation, for burned area mapping with MODIS
satellite imagery (a pre-fire and a post-fire image), more or less the
way you describe it. I intend to participate in FOSS4G in South Africa
(although other difficulties do not allow me to attend the upcoming
conference). I have a step-by-step document of more than 120 pages, and
I don't know anybody with experience who would like to have a look at
it, so it's still under heavy correction :-)

P.S. If anyone is interested in having a look at my step-by-step
document, I invite them for a free vacation at my home in Central
Greece :D



Re: [GRASS-user] Workflow of a classification project with orthophotos

2008-07-23 Thread Nikos Alexandris
I am still struggling with this. In theory it sounds easy, but in
practice it's quite hard, considering that we don't have the raw data.
Any other ideas?

Thank you,
Nikos

On Wed, 2008-07-16 at 16:50 +0200, Nikos Alexandris wrote:
[...]

 My workflow
 
 1. Stretch the colour orthophotos (8-bit R, G and B bands) to the full
 0-255 range, either with GDAL, or import into GRASS' database and
 stretch there
 
 2. Visually identify the different groups of images taken more or less
 at the same time

This sounds quite difficult, because we don't have the metadata (i.e.
the date of acquisition) with which to group the tiles reasonably.

 I have some vectors of areas of interest which correspond to bigger
 administrative areas (the images are from west-central Germany; the
 groups are something like Koblenz, Trier, Simmern and more).
 
 3. Split the mosaic into groups of photos that show fewer colour
 differences among them
 
 4. Sampling
 
 5. Segmentation with i.smap
 
 6. Use r.texture, as I think it will boost the accuracy of the
 classification
 
 7. Classify
 
 8. Some handwork to improve sampling
 
 9. Re-run segmentation and classification
 
 10. Handwork to correct obvious errors
 
 11. Voila the power of GFOSS ;-)



[GRASS-user] Workflow of a classification project with orthophotos

2008-07-16 Thread Nikos Alexandris
Dear GRASSers,

I would like confirmation that my feet are on the ground when I try to
realise the following workflow with G-FOSS. I want to classify forest
gaps from orthophotos (...actually it's not my job, but I want to help
somebody who intended to do it all by hand, or to accept what others
say: that this task cannot be done unless one uses commercial
tools... !)

I have more than 300 tiles of a mosaic composed of aerial imagery.
Unfortunately it is a mixture of different acquisition dates and has
significant contrast differences in some regions.

My classification scheme would be: gaps, shadows of tree stands within
the gaps, water, vegetation, and urban surfaces.

I cannot perform the normalisation the way I know it (e.g. for 3-4
satellite images) on this number of pictures. First of all, there are
no overlapping areas, and I am not (practically) aware of any other
method to perform a colour balance.

Is anyone else struggling with normalisation and colour-balancing
issues without having the metadata (date of acquisition) or the raw
data?

My workflow

1. Stretch the colour orthophotos (8-bit R, G and B bands) to the full
0-255 range, either with GDAL, or import into GRASS' database and
stretch there (see the sketch after this list)

2. Visually identify the different groups of images taken more or less
at the same time

I have some vectors of areas of interest which correspond to bigger
administrative areas (the images are from west-central Germany; the
groups are something like Koblenz, Trier, Simmern and more).

3. Split the mosaic into groups of photos that show fewer colour
differences among them

4. Sampling

5. Segmentation with i.smap

6. Use r.texture, as I think it will boost the accuracy of the
classification

7. Classify

8. Some handwork to improve sampling

9. Re-run segmentation and classification

10. Handwork to correct obvious errors

11. Voila the power of GFOSS ;-)
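
A minimal sketch of the step-1 stretch, assuming the GDAL Python
bindings and 8-bit input (the file names are made up); a plain min/max
stretch can also be done with gdal_translate -scale, or inside GRASS
after import:

import numpy as np
from osgeo import gdal

def stretch_to_0_255(src_path, dst_path, low_pct=2, high_pct=98):
    """Percentile-based linear stretch of every band to the 0-255 range."""
    src = gdal.Open(src_path)
    driver = gdal.GetDriverByName("GTiff")
    dst = driver.Create(dst_path, src.RasterXSize, src.RasterYSize,
                        src.RasterCount, gdal.GDT_Byte)
    dst.SetGeoTransform(src.GetGeoTransform())
    dst.SetProjection(src.GetProjection())
    for b in range(1, src.RasterCount + 1):
        arr = src.GetRasterBand(b).ReadAsArray().astype(float)
        lo, hi = np.percentile(arr, (low_pct, high_pct))
        out = np.clip((arr - lo) / (hi - lo) * 255.0, 0, 255)
        dst.GetRasterBand(b).WriteArray(out.astype(np.uint8))
    dst.FlushCache()

stretch_to_0_255("ortho_tile_001.tif", "ortho_tile_001_stretched.tif")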


Cheers,
Nikos
