Re: [gdal-dev] [RFC] [GDAL] Idea for GSoC, 2014

2014-03-15 Thread Seth Price
Also, I wouldn't worry much about the multispectral part of the data. You're 
going to have more trouble with reliably finding the correct key point matches. 
Use RANSAC, also in OpenCV.
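For illustration, a minimal sketch of RANSAC-based match filtering, assuming OpenCV's C++ API (the helper name is hypothetical; this is not code from GDAL or the correlator project):

#include <opencv2/opencv.hpp>
#include <vector>

// Keep only the matches consistent with a single homography (RANSAC).
std::vector<cv::DMatch> filterWithRansac(const std::vector<cv::KeyPoint>& kp1,
                                         const std::vector<cv::KeyPoint>& kp2,
                                         const std::vector<cv::DMatch>& matches)
{
    if (matches.size() < 4)          // a homography needs at least 4 pairs
        return {};
    std::vector<cv::Point2f> p1, p2;
    for (const cv::DMatch& m : matches) {
        p1.push_back(kp1[m.queryIdx].pt);
        p2.push_back(kp2[m.trainIdx].pt);
    }
    std::vector<unsigned char> inlierMask;
    cv::findHomography(p1, p2, cv::RANSAC, 3.0, inlierMask);  // 3 px tolerance
    std::vector<cv::DMatch> inliers;
    for (size_t i = 0; i < matches.size(); ++i)
        if (inlierMask[i]) inliers.push_back(matches[i]);
    return inliers;
}

The surviving inlier matches are the ones a geo-referencer would then turn into ground control points.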

~Seth

via iPhone

> On Mar 15, 2014, at 12:50 PM, Kshitij Kansal  wrote:
> 
> Hello Again,
> 
> @Dimitriy - Currently GDALComputeMatchingPoints uses the SimpleSurf 
> algorithm for matching points. Are you proposing that I implement BRISK and 
> then give the user the option of using either it or SimpleSurf (already 
> implemented)? 
> This is indeed a very interesting thought, but the problem is that 
> GDALComputeMatchingPoints was developed for the correlator project, and I 
> feel the SimpleSurf algorithm implemented there won't work for my automatic 
> geo-referencer, since I would be handling multispectral imagery and large 
> datasets, which the current implementation does not support. So this would 
> require modifying SimpleSurf as well.
> I hope I have made my doubt clear. Please convey your views on this.
> 
> @Chaitanya - Compared to SURF, BRISK can definitely handle large imagery to 
> a great extent. But there is going to be some threshold up to which the 
> algorithm will work, because we must not forget that these algorithms were 
> developed for normal RGB images in computer vision, and their use in remote 
> sensing requires some modification. I will look into this in more detail 
> and then get back to you.
> 
> 
> Also, should I prepare my initial draft of the proposal based on this BRISK 
> idea only? 
> I have already started work in this direction and will soon post it, for 
> review.
> 
> With Regards,
> 
> Kshitij Kansal
> Lab For Spatial Informatics,
> IIIT Hyderabad
> 
> 
>> On Sat, Mar 15, 2014 at 12:29 AM, Chaitanya kumar CH 
>>  wrote:
>> Kshitij,
>> 
>> What is the performance of the proposed algorithms for very large rasters? 
>> If one of them is good with large images, that's a cleaner choice, without 
>> all the workarounds for scaling the rasters.
>> 
>> --
>> Best regards,
>> Chaitanya Kumar CH
>> 
>>> On 15-Mar-2014 12:22 am, "Dmitriy Baryshnikov"  wrote:
>>> Hi,
>>> 
>>> I think we need to decide it here, not create a lot of proposals. The 
>>> second idea is very interesting. Maybe it is worth creating some common 
>>> interface (or API) to which new methods (BRISK, SURF, SIFT, etc.) can be 
>>> added. You can develop your implementation of BRISK and demonstrate how 
>>> one can use it via such a common interface.
>>> E.g. in GDALComputeMatchingPoints add an enum for the algorithms, or use 
>>> the existing papszOptions.
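A minimal sketch of what such option-based dispatch could look like (the MATCHING_ALGORITHM option name and the return codes are hypothetical, not part of the existing GDALComputeMatchingPoints API):

#include "cpl_string.h"   // CSLFetchNameValueDef, EQUAL

// Hypothetical dispatch inside a matching-points entry point: pick the
// detector/descriptor from papszOptions instead of hard-coding SimpleSURF.
static int SelectMatchingAlgorithm(char **papszOptions)
{
    const char *pszAlg =
        CSLFetchNameValueDef(papszOptions, "MATCHING_ALGORITHM", "SURF");
    if (EQUAL(pszAlg, "BRISK"))
        return 1;   // hypothetical ALG_BRISK
    if (EQUAL(pszAlg, "SIFT"))
        return 2;   // hypothetical ALG_SIFT
    return 0;       // default: existing SimpleSURF path
}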
>>> Best regards,
>>> Dmitry
>>> On 14.03.2014 17:28, Kshitij Kansal wrote:
 Hello everyone
 
 Continuing the previous discussion, I would like to propose something, and 
 the community's suggestions are welcome/needed. I understand that this 
 thread is a little old, so let me remind you that it's regarding the 
 automatic geo-referencer idea. The idea is also proposed on the GDAL ideas 
 page (http://trac.osgeo.org/gdal/wiki/SummerOfCode). 
 
 Based on the previous discussions, what came out was that we can improve 
 the current implementation of SimpleSURF in GDAL, which was developed as 
 part of the 2012 GSoC GDAL Correlator project, to support large data and 
 multispectral imagery, and then apply this modified algorithm for 
 geo-referencing purposes. I have been in touch with Chaitanya, who is 
 willing to mentor this project, and there are some things on which we 
 would like the community's suggestions/response.
 
 There are basically two things that can be done regarding this project:
 
 1. As mentioned above, we can modify the SimpleSURF algorithm and make it 
 much better for geo-referencing purposes. A lot has already been discussed 
 on this and we have a fairly good idea of what is to be done.
 
 2. Alternatively, we can implement the BRISK algorithm [1] instead of SURF, 
 along with the FLANN matcher. The advantages it offers are that it is 
 fairly fast, gives comparable output, and works well with fairly large data 
 sets, so we would not need to segment the imagery as we would have to with 
 SURF. In addition, BRISK has no patent issues; we had a lot of problems 
 regarding patent issues with SIFT/SURF and discussed them at length on the 
 mailing list as well. (A rough sketch of the combination follows below.)
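Roughly, that BRISK + FLANN combination looks like the following in OpenCV's C++ API (a sketch only, assuming OpenCV 3.x; it is not the proposed GDAL integration, and the LSH parameters and ratio threshold are illustrative):

#include <opencv2/opencv.hpp>
#include <vector>

// Detect BRISK keypoints/descriptors and match them with FLANN
// (LSH index, since BRISK descriptors are binary).
void matchBrisk(const cv::Mat& img1, const cv::Mat& img2,
                std::vector<cv::DMatch>& good)
{
    cv::Ptr<cv::BRISK> brisk = cv::BRISK::create();
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    brisk->detectAndCompute(img1, cv::noArray(), kp1, desc1);
    brisk->detectAndCompute(img2, cv::noArray(), kp2, desc2);

    cv::FlannBasedMatcher matcher(cv::makePtr<cv::flann::LshIndexParams>(12, 20, 2));
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(desc1, desc2, knn, 2);

    // Lowe-style ratio test to drop ambiguous matches.
    for (const auto& pair : knn)
        if (pair.size() == 2 && pair[0].distance < 0.8f * pair[1].distance)
            good.push_back(pair[0]);
}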
 
 One thing that I feel can be done is that two proposals could be written, 
 one for each option, and the community can then decide which one is more 
 useful. Or we can decide it here itself? 
 
 Kindly provide your valuable comments and suggestions.
 
 With Regards,
 
 Kshitij Kansal
 Lab For Spatial Informatics,
 IIIT Hyderabad
 
 

Re: [gdal-dev] [RFC] [GDAL] Idea for GSoC, 2014

2014-03-15 Thread Seth Price
I have done something like this recently. You would be better off tearing out 
SURF & linking to OpenCV for all feature detection and extraction. Here is a 
link to the patch that OpenCV needs to support large & 16-bit imagery.

https://github.com/Itseez/opencv/pull/1932
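For illustration, a sketch of handing a GDAL band to OpenCV, with 16-bit data stretched down to the 8-bit input most stock detectors expect (a sketch only, assuming the GDAL and OpenCV C++ APIs; reading the whole band at once is for brevity, and a real implementation would tile large rasters):

#include "gdal_priv.h"
#include <opencv2/opencv.hpp>
#include <algorithm>

// Read band 1 of a (possibly 16-bit) dataset and convert it to the 8-bit
// single-channel image that OpenCV feature detectors expect.
cv::Mat ReadBandAs8Bit(const char *pszPath)
{
    GDALAllRegister();
    GDALDataset *poDS = static_cast<GDALDataset *>(GDALOpen(pszPath, GA_ReadOnly));
    if (poDS == nullptr)
        return cv::Mat();

    GDALRasterBand *poBand = poDS->GetRasterBand(1);
    const int nX = poBand->GetXSize();
    const int nY = poBand->GetYSize();

    cv::Mat raw(nY, nX, CV_16UC1);
    poBand->RasterIO(GF_Read, 0, 0, nX, nY,
                     raw.data, nX, nY, GDT_UInt16, 0, 0);

    // Stretch to 8 bits; a real georeferencer would use per-band statistics.
    double dfMin, dfMax;
    cv::minMaxLoc(raw, &dfMin, &dfMax);
    const double dfScale = 255.0 / std::max(1.0, dfMax - dfMin);
    cv::Mat img8;
    raw.convertTo(img8, CV_8UC1, dfScale, -dfMin * dfScale);

    GDALClose(poDS);
    return img8;
}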

~Seth

via iPhone

Re: [gdal-dev] New compression library based on gdal design

2014-03-01 Thread Seth Price
I am interested in seeing an OpenCL JPEG2000 decoder/encoder developed. I have 
a bit of experience writing OpenCL kernels. Please contact me if you need help.
~Seth

via iPhone

> On Mar 1, 2014, at 7:27 AM, Aaron Boxer  wrote:
> 
> Hello,
> 
> I recently started developing an open source JPEG2000 compression library 
> using OpenCL.
> I would like to base the library design on the very successful GDAL library 
> design.
> Can anyone recommend any resources to help me grok the high-level design of 
> GDAL?
> 
> Thanks so much,
> Aaron


Re: [gdal-dev] GDALSimpleSURF improvements

2012-12-09 Thread Seth Price
I just found some bugs and made some algorithm improvements. New code can be 
downloaded from the same location.
~Seth


via iPhone

On Dec 8, 2012, at 2:21 AM, Dmitry Baryshnikov  wrote:

> Hi Seth,
> 
> I'll test our code and try to patch GDAL if it is possible.
> 
> Regards,
> Dmitry
> 
> On 08.12.2012 3:02, Seth Price wrote:
>> I've worked the FLANN library into GDALSimpleSURF for a massive speedup
>> and made some other minor speed improvements mainly related to setting
>> constants as the right type and moving invariants outside of loops.
>> 
>> You can grab the updated source here. Please add it to whatever repository
>> is appropriate.
>> http://seth1.bluezone.usu.edu/flann_surf.tgz
>> 
>> Thanks,
>> Seth
>> 

[gdal-dev] GDALSimpleSURF improvements

2012-12-07 Thread Seth Price
I've worked the FLANN library into GDALSimpleSURF for a massive speedup
and made some other minor speed improvements mainly related to setting
constants as the right type and moving invariants outside of loops.
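For readers who have not used FLANN: the speedup comes from replacing the brute-force descriptor comparison with an approximate nearest-neighbour index. A rough sketch using OpenCV's cv::flann wrapper (illustrative only, not the exact code in the patch):

#include <opencv2/opencv.hpp>

// Approximate nearest-neighbour matching of float descriptors (e.g. SURF).
// 'train' and 'query' are row-per-descriptor CV_32F matrices.
void flannMatch(const cv::Mat& train, const cv::Mat& query,
                cv::Mat& indices, cv::Mat& dists)
{
    cv::flann::Index index(train, cv::flann::KDTreeIndexParams(4)); // 4 randomized kd-trees
    index.knnSearch(query, indices, dists, 2, cv::flann::SearchParams(32));
    // indices.at<int>(i, 0) is the best match for query row i;
    // compare dists columns 0 and 1 for a ratio test.
}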

You can grab the updated source here. Please add it to whatever repository
is appropriate.
http://seth1.bluezone.usu.edu/flann_surf.tgz

Thanks,
Seth



Re: [gdal-dev] [NEW] Multi-threaded warping

2012-06-10 Thread Seth Price
If you tell OpenCL to target the CPU, it will produce a multiprocessor 
implementation instead of GPU.
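At the API level, "targeting the CPU" just means requesting a CPU device when setting up the context; a bare-bones sketch (error handling omitted, and the header path differs on the Mac):

#include <CL/cl.h>   /* <OpenCL/opencl.h> on Mac OS X */

/* Pick a CPU device so the kernels compile to a multithreaded CPU binary. */
cl_device_id getCpuDevice(void)
{
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);
    /* CL_DEVICE_TYPE_GPU here instead would select the graphics card. */
    return device;
}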
~Seth


via iPhone

On Jun 10, 2012, at 3:52 PM, Even Rouault  wrote:

> On Sunday 10 June 2012 23:44:54, Yogesh Dahiya wrote:
>> As far as I know GDAL 1.9 has integrated OpenCL, so we can parallelize by
>> setting it right.
>> So what exactly is your addition?
>> By the way, I tried the OpenCL case and was able to get a 16x speedup over
>> the general case for lanczos on a 16000*16000 image.
> 
> (Replying to list too, as others might have the same question)
> 
> Yes indeed, GDAL 1.9 can use OpenCL and this implementation is of course
> still available.
> 
> Unfortunately, AFAIK, there is not yet any working open-source OpenCL
> implementation (and my experience with some proprietary OpenCL
> implementations has not always been convincing, like the GUI being totally
> unresponsive during the processing).
> 
> The new multi-threaded implementation just uses traditional multi-threading
> techniques that are available on all platforms where GDAL can run.

Re: [gdal-dev] Re: OpenCL, GDAL, and You

2010-12-17 Thread Seth Price

> I can't help for MacOS build system.

Good thing I'm here. :) Do you want an account on my machine for testing?

> It's me. You can find the culprit of any commit by doing a svn blame or
> looking at the log ;-)

Most of that clampToDst() code was brought over from GWKSetPixelValue(). I'm
not sure if it's a good idea to remove it. The original .cpp file should be
the same as the new OpenCL code. My goal is to make the code execution
identical so I don't need to do any post-processing of pixels when I copy
them out of OpenCL memory.

> Which error did you get ? You are just changing some whitespace change that
> seem to be equivalent to the trunk code. Or do I miss something ?

I was getting a weird build options error that seems to be caused by an
extra space. So you're right that it's a whitespace change, but it's
non-trivial.

~Seth

On Dec 17, 2010, at 3:54 PM, Even Rouault wrote:


On Friday 17 December 2010 23:41:53, Seth Price wrote:

I'm working on the trunk OpenCL build on my Mac now.

** First, on my Mac I get an error at the end of make:
[...] ./ogr/.libs/ogr_srs_xml.o ./ogr/.libs/ograssemblepolygon.o
./ogr/.libs/ogr2gmlgeometry.o ./ogr/.libs/gml2ogrgeometry.o ./ogr/.libs/ogr_expat.o
/opt/local/lib/libsqlite3.dylib -L/opt/local/lib -L/usr/local/lib
/usr/local/lib/libexpat.dylib /usr/local/lib/libjpeg.dylib /usr/local/lib/libtiff.dylib
/usr/local/lib/libpng12.dylib -lpthread -ldl /opt/local/lib/libcurl.dylib
/opt/local/lib/libidn.dylib -lssl -lcrypto -lz -lOpenCL -install_name
/usr/local/lib/libgdal.1.dylib -compatibility_version 16 -current_version 16.0
-Wl,-single_module
ld: library not found for -lOpenCL
collect2: ld returned 1 exit status
make[1]: *** [libgdal.la] Error 1
make: *** [check-lib] Error 2
seth:gdal-svn-trunk-2010.12.17 sprice$

To fix this, I changed "OPENCL_LIB  =   -lOpenCL" to
"OPENCL_LIB  =   -framework OpenCL" in GDALmake.opt.


I can't help for MacOS build system.



*  Why is USE_CLAMP_TO_DST_FLOAT in there? I would think that is
required even if it isn't on ATI. I'm not sure who inserted its use,
but I'm just wondering if the reasoning is documented somewhere.


It's me. You can find the culprit of any commit by doing a svn blame  
or looking

at the log ;-)

The reasoning was that I observed with my ATI card that float  
buffers didn't get
to be multiplied by FLT_MAX. The values weren't normalized between 0  
and 1,
but had directly the expected value. (in the float case, I also  
dropped the
rounding to nearest integer which is clearly not desired, and not in  
the CPU

implementation)

This is perhaps also needed for NVidia, but I have no way to check,  
so I just
put the condition you saw. If you can check this is unneeded, the  
condition

can go away.



* I had to make this change to alg/gdalwarpkernel_opencl.c to get it to
build without a build option error.

@@ -1168,7 +1168,7 @@

     //Assemble the compiler arg string for speed. All invariants should be defined here.
     sprintf(buffer, "-cl-fast-relaxed-math -Werror -D FALSE=0 -D TRUE=1 "
-            "%s "
+            "%s"
             "-D iSrcWidth=%d -D iSrcHeight=%d -D iDstWidth=%d -D iDstHeight=%d "
             "-D useUnifiedSrcDensity=%d -D useUnifiedSrcValid=%d "
             "-D useDstDensity=%d -D useDstValid=%d -D useImag=%d "
@@ -1176,9 +1176,9 @@
             "-D nXRadius=%d -D nYRadius=%d -D nFiltInitX=%d -D nFiltInitY=%d "
             "-D PI=%015.15lff -D outType=%s -D dstMinVal=%015.15lff -D dstMaxVal=%015.15lff "
             "-D useDstNoDataReal=%d -D vecf=%s %s -D doCubicSpline=%d "
-            "-D useUseBandSrcValid=%d -D iCoordMult=%d",
+            "-D useUseBandSrcValid=%d -D iCoordMult=%d ",
             /* FIXME: Is it really a ATI specific thing ? */
-            (warper->imageFormat == CL_FLOAT && warper->bIsATI) ? "-D USE_CLAMP_TO_DST_FLOAT=1" : "",
+            (warper->imageFormat == CL_FLOAT && warper->bIsATI) ? "-D USE_CLAMP_TO_DST_FLOAT=1 " : "",
             warper->srcWidth, warper->srcHeight, warper->dstWidth, warper->dstHeight,
             warper->useUnifiedSrcDensity, warper->useUnifiedSrcValid,
             warper->useDstDensity, warper->useDstValid, warper->imagWorkCL != NULL,


Which error did you get ? You are just changing some whitespace  
change that

seem to be equivalent to the trunk code. Or do I miss something ?



 After doing all of the above to make things compile, I don't get
the bug described below. I'm working off of the latest trunk daily.


Yeah, that's probably an issue with the ATI SDK.



~Seth

On Dec 8, 2010, at 1:12 PM, Even Roua

Re: [gdal-dev] Re: OpenCL, GDAL, and You

2010-12-17 Thread Seth Price

I'm working on the trunk OpenCL build on my Mac now.

** First, on my Mac I get an error at the end of make:
[...] ./ogr/.libs/ogr_srs_xml.o ./ogr/.libs/ograssemblepolygon.o
./ogr/.libs/ogr2gmlgeometry.o ./ogr/.libs/gml2ogrgeometry.o ./ogr/.libs/ogr_expat.o
/opt/local/lib/libsqlite3.dylib -L/opt/local/lib -L/usr/local/lib
/usr/local/lib/libexpat.dylib /usr/local/lib/libjpeg.dylib /usr/local/lib/libtiff.dylib
/usr/local/lib/libpng12.dylib -lpthread -ldl /opt/local/lib/libcurl.dylib
/opt/local/lib/libidn.dylib -lssl -lcrypto -lz -lOpenCL -install_name
/usr/local/lib/libgdal.1.dylib -compatibility_version 16 -current_version 16.0
-Wl,-single_module

ld: library not found for -lOpenCL
collect2: ld returned 1 exit status
make[1]: *** [libgdal.la] Error 1
make: *** [check-lib] Error 2
seth:gdal-svn-trunk-2010.12.17 sprice$

To fix this, I changed "OPENCL_LIB  =   -lOpenCL" to
"OPENCL_LIB  =   -framework OpenCL" in GDALmake.opt.


*  Why is USE_CLAMP_TO_DST_FLOAT in there? I would think that is  
required even if it isn't on ATI. I'm not sure who inserted its use,  
but I'm just wondering if the reasoning is documented somewhere.


* I had to make this change to alg/gdalwarpkernel_opencl.c to get  
it to build without a build option error.


@@ -1168,7 +1168,7 @@

     //Assemble the compiler arg string for speed. All invariants should be defined here.
     sprintf(buffer, "-cl-fast-relaxed-math -Werror -D FALSE=0 -D TRUE=1 "
-            "%s "
+            "%s"
             "-D iSrcWidth=%d -D iSrcHeight=%d -D iDstWidth=%d -D iDstHeight=%d "
             "-D useUnifiedSrcDensity=%d -D useUnifiedSrcValid=%d "
             "-D useDstDensity=%d -D useDstValid=%d -D useImag=%d "
@@ -1176,9 +1176,9 @@
             "-D nXRadius=%d -D nYRadius=%d -D nFiltInitX=%d -D nFiltInitY=%d "
             "-D PI=%015.15lff -D outType=%s -D dstMinVal=%015.15lff -D dstMaxVal=%015.15lff "
             "-D useDstNoDataReal=%d -D vecf=%s %s -D doCubicSpline=%d "
-            "-D useUseBandSrcValid=%d -D iCoordMult=%d",
+            "-D useUseBandSrcValid=%d -D iCoordMult=%d ",
             /* FIXME: Is it really a ATI specific thing ? */
-            (warper->imageFormat == CL_FLOAT && warper->bIsATI) ? "-D USE_CLAMP_TO_DST_FLOAT=1" : "",
+            (warper->imageFormat == CL_FLOAT && warper->bIsATI) ? "-D USE_CLAMP_TO_DST_FLOAT=1 " : "",
             warper->srcWidth, warper->srcHeight, warper->dstWidth, warper->dstHeight,
             warper->useUnifiedSrcDensity, warper->useUnifiedSrcValid,
             warper->useDstDensity, warper->useDstValid, warper->imagWorkCL != NULL,


 After doing all of the above to make things compile, I don't get  
the bug described below. I'm working off of the latest trunk daily.


~Seth


On Dec 8, 2010, at 1:12 PM, Even Rouault wrote:


Seth,

Thanks for your help.


It's more than a little strange that none of those image sizes work.
Perhaps it's a problem with the image format? Can you verify that the
given format should work?


The image format was CL_UNORM_INT8 (for GDT_Byte)



Looking at the spec, it might also be a problem with the 'sz'
argument. What value is that passing?


It's 1.

I managed to find the following workaround that enables gdalwarp to complete
(see http://trac.osgeo.org/gdal/changeset/21220 that basically passes a dummy
buffer instead of a NULL pointer).

However the visual result of the warping is really poor. I see 4  
"ghost"

images shifted.

For better understanding I've attached the source image  
(small_world_b1.tif)
and the result of bilinear resampling (but I get similar weird  
visual effects

with cubic, cubic spline or lanczos)

gdalwarp  -rb small_world_b1.tif out_bilinear.tif

Best regards,

Even




Re: [gdal-dev] Re: OpenCL, GDAL, and You

2010-12-07 Thread Seth Price
It's more than a little strange that none of those image sizes work.  
Perhaps it's a problem with the image format? Can you verify that the  
given format should work?


Looking at the spec, it might also be a problem with the 'sz'  
argument. What value is that passing?

~Seth



On Dec 7, 2010, at 12:01 AM, Even Rouault wrote:


On Tuesday 07 December 2010 02:05:51, Seth Price wrote:

Ah, the joys of multiple platform development.

The first two warnings should be fixable by replacing "-99.0" with
"-99.0f".

Your fix for read_imagef() should work fine.


ok, I'll commit that then



To fix that last error, try changing the image size from "1, 1" to  
"2,

2" or "4, 4". It shouldn't matter because the image is there only so
we have something to pass to the kernel (it's not used in this case).
The spec says that the values need to be greater or equal to 1, so
it's technically a SDK problem.


I tried 2,2 ; 4,4 ; 8,8 ; 16,16 ; 256,256; 1024,1024 and none of  
them work...



~Seth

http:

Re: [gdal-dev] Re: OpenCL, GDAL, and You

2010-12-06 Thread Seth Price

Ah, the joys of multiple platform development.

The first two warnings should be fixable by replacing "-99.0" with  
"-99.0f".


Your fix for read_imagef() should work fine.

To fix that last error, try changing the image size from "1, 1" to "2,  
2" or "4, 4". It shouldn't matter because the image is there only so  
we have something to pass to the kernel (it's not used in this case).  
The spec says that the values need to be greater or equal to 1, so  
it's technically a SDK problem.

~Seth


On Dec 6, 2010, at 4:12 PM, Even Rouault wrote:


Hi Seth,

I gave a try to Frank's integration of your work with my ATI Radeon HD 5400.
I got the ATI SDK 2.2 correctly installed with the latest ATI drivers (10-11).
The few OpenCL demos provided with the SDK I tried work on the GPU device.

(Note: "thanks" to the errors below, I've made a few cleanups in
http://trac.osgeo.org/gdal/changeset/21205 to improve error reporting and
clean-up in case of error. I also added missing initialization of 2 members
of the warper struct)

Then I tried a simple warp and I got the following error :

"""
0ERROR 1: Error: Failed to build program executable!
Build Log:
Warning: invalid option: -cl-fast-relaxed-math

Warning: invalid option: -Werror

/tmp/OCL3QLDj2.cl(193): warning: double-precision constant is  
represented as

 single-precision constant because double is not enabled
 return (float2)(-99.0, -99.0);
  ^

/tmp/OCL3QLDj2.cl(193): warning: double-precision constant is  
represented as

 single-precision constant because double is not enabled
 return (float2)(-99.0, -99.0);
 ^

/tmp/OCL3QLDj2.cl(195): error: bad argument type to opencl image op:  
expected

 sampler_t
 CLK_NORMALIZED_COORDS_TRUE |
 ^

1 error detected in the compilation of "/tmp/OCL3QLDj2.cl".

ERROR 1: Error at file gdalwarpkernel_opencl.c line 2228:
CL_BUILD_PROGRAM_FAILURE
""""

Hmm, then I looked at similar code in the neighbourhood and I came up with
the following change that solves the compilation. It is not committed yet;
does it look OK to you?

Index: alg/gdalwarpkernel_opencl.c
===================================================================
--- alg/gdalwarpkernel_opencl.c (revision 21205)
+++ alg/gdalwarpkernel_opencl.c (working copy)
@@ -724,13 +724,13 @@
     // Check & return when the thread group overruns the image size
     "if (nDstX >= iDstWidth || nDstY >= iDstHeight)\n"
         "return (float2)(-99.0, -99.0);\n"
+
+"const sampler_t samp =  CLK_NORMALIZED_COORDS_TRUE |\n"
+                        "CLK_ADDRESS_CLAMP_TO_EDGE |\n"
+                        "CLK_FILTER_LINEAR;\n"
+
+"float4  fSrcCoords = read_imagef(srcCoords, samp, fDst);\n"
-"float4  fSrcCoords = read_imagef(srcCoords,\n"
-                                  "CLK_NORMALIZED_COORDS_TRUE |\n"
-                                  "CLK_ADDRESS_CLAMP_TO_EDGE |\n"
-                                  "CLK_FILTER_LINEAR,\n"
-                                  "fDst);\n"
-
     "return (float2)(fSrcCoords.x, fSrcCoords.y);\n"
"}\n";


After solving the compilation error, I'm stuck with  :

"""
0ERROR 1: Error at file gdalwarpkernel_opencl.c line 1391:
CL_INVALID_IMAGE_SIZE
ERROR 1: Error at file gdalwarpkernel_opencl.c line 1391:  
CL_INVALID_IMAGE_SIZE
ERROR 1: Error at file gdalwarpkernel_opencl.c line 2292:  
CL_INVALID_IMAGE_SIZE
ERROR 1: Error at file gdalwarpkernel_opencl.c line 1391:  
CL_INVALID_IMAGE_SIZE

ERROR 1: OpenCL routines reported failure (-40) on line 2570.
"""

The relevant line is:
"""
   //Make a fake image so we don't have a NULL pointer
   (*srcImag) = clCreateImage2D(warper->context, CL_MEM_READ_ONLY,
&imgFmt,
1, 1, sz, NULL, &err);
   handleErr(err);
"""

Any ideas ? Did you try with ATI or NVidia cards ?

I've attached the output of CLInfo if it can help.

Best regards,

Even

On Monday 06 December 2010 15:50:04, Seth Price wrote:

Over the summer I rewrote the warper to use OpenCL. There was a 2x to
50x speedup. Here is a description of what I did:
http://osgeo-org.1803224.n2.nabble.com/gdal-dev-gdalwarp-OpenCL-Performance
-Week-9-td5341226.html

~Seth

On Dec 6, 2010, at 5:10 AM, Konstantin Baumann wrote:

Hi,

what benefit/improvement would the OpenCL integration bring to GDAL?
Additional functionality or a speedup of existing functions?
Probably only operations on images and/or rasters are supported;
reprojection/warping and filtering would be good candidates, right?
What concrete operations would be supported?

Kosta

-Original Message-

Re: [gdal-dev] Re: OpenCL, GDAL, and You

2010-12-06 Thread Seth Price
Over the summer I rewrote the warper to use OpenCL. There was a 2x to  
50x speedup. Here is a description of what I did:

http://osgeo-org.1803224.n2.nabble.com/gdal-dev-gdalwarp-OpenCL-Performance-Week-9-td5341226.html

~Seth



On Dec 6, 2010, at 5:10 AM, Konstantin Baumann wrote:


Hi,

what benefit/improvement would the OpenCL integration bring to GDAL?  
Additional functionality or a speedup of existing functions?  
Probably only operations on images and/or rasters are supported;  
reprojection/warping and filtering would be good candidates, right?  
What concrete operations would be supported?


Kosta

-Original Message-
From: gdal-dev-boun...@lists.osgeo.org [mailto:gdal-dev-boun...@lists.osgeo.org 
] On Behalf Of Frank Warmerdam (External)

Sent: Monday, December 06, 2010 1:43 AM
To: Seth Price
Cc: Philippe Vachon; gdal-dev; Wolf Bergenheim
Subject: [gdal-dev] Re: OpenCL, GRASS, GDAL, and You

On 10-09-23 10:09 AM, Seth Price wrote:
Hey all, I was just wondering if there was any progress in  
integrating

the OpenCL code into trunk in each project? I haven't heard anything,
but it would be a shame to just leave the code sit, or wait until the
code branches have significantly diverged.
~Seth



Seth,

Last week I went out and bought a new AMD/ATI machine in the hopes (among
other goals) that OpenCL would work on it with the ATI OpenCL SDK.
Unfortunately I have discovered that the ATI Radeon 4200 HD is not supported
for OpenCL stuff.  :-(

Nevertheless, with some persistence I was able to build the ATI SDK, and
configure GDAL to build against it.  So I have integrated OpenCL support in
trunk.  It is not enabled by default, but you can enable it with the
--with-opencl directive.  If the include files and libraries are in a
non-standard location you can also use the --with-opencl-include and
--with-opencl-lib directives to configure like this:

--with-opencl \
--with-opencl-include=/home/warmerda/pkg/ati-stream-sdk-v2.2-lnx64/include \
--with-opencl-lib="-L/home/warmerda/pkg/ati-stream-sdk-v2.2-lnx64/lib/x86_64 -lOpenCL" \

I ran into a few issues:

1) It seems the include file from ATI is <CL/cl.h>, not <OpenCL/OpenCL.h>
as it is on the Mac.  I've put a platform dependent ifdef but I don't know
what the situation will be on other machines.

2) In your get_device() function you were passing NULL in for the  
platform.
The online docs indicate this as an option but warn that behavior  
then is
platform dependent.  The ATI SDK just fails with an invalid platform  
error.

So I updated the code to fetch a platform id and use that.

3) On my system it falls back to using the CPU but it turns out the  
CPU
does not offer "image" support in my case.  I added some extra logic  
to

look for this capability so a better error could be reported.

4) I restructured things a bit so that the OpenCL warper case can return
CE_Warning to indicate to the high level warper that OpenCL should be
skipped and other mechanisms used.  That is what it does now if it fails
to find a suitable device, or some of the other specific checks.

5) I made a few changes to use CPLDebug instead of printf for debug  
output.


I haven't tried this yet on your Mac.  I avoided using the account you
kindly offered because I find the Mac is often a perverse build  
environment
and I didn't want to establish the "norm" based on it.  I might try  
it out

tonight though.

I have also not yet tried it on windows, and likely won't in the  
near future.
Perhaps someone else will pick up the ball there.  The code itself  
just
depends on having HAVE_OPENCL defined at least in the alg directory  
and

of course appropriate include and link options.

(cc:ed to the list so everyone is aware of the availability).

Best regards,
--
--- 
+--

I set the clouds in motion - turn up   | Frank Warmerdam, warmer...@pobox.com
light and sound - activate the windows | http://pobox.com/~warmerdam
and watch the world go round - Rush| Geospatial Programmer for  
Rent



Re: [gdal-dev] cubic / bilinear resampling with gdalwarp looks similar to nearest neighbour

2010-10-13 Thread Seth Price
Bilinear uses the nearest four pixels, bicubic uses the nearest 16  
pixels. That's what makes them linear and cubic. Do you have a source  
for another way to do it?
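Concretely, a bilinear sample is a distance-weighted blend of those four neighbours; a minimal sketch (ignoring edge clamping and nodata handling):

// Bilinear sample of a single-band float image at fractional position (x, y).
float bilinear(const float *img, int width, float x, float y)
{
    const int   x0 = (int)x,   y0 = (int)y;
    const float fx = x - x0,   fy = y - y0;
    const float tl = img[ y0      * width + x0    ];   // top-left
    const float tr = img[ y0      * width + x0 + 1];   // top-right
    const float bl = img[(y0 + 1) * width + x0    ];   // bottom-left
    const float br = img[(y0 + 1) * width + x0 + 1];   // bottom-right
    return (1 - fy) * ((1 - fx) * tl + fx * tr)
         +      fy  * ((1 - fx) * bl + fx * br);
}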


It would be possible to do averaging, *then* resample, but I'm not  
sure that's worth it here. (Weighted averaging would be better anyway.)

~Seth


On Oct 13, 2010, at 9:48 PM, Frank Warmerdam wrote:


Craig de Stigter wrote:
Apologies for the long delay. I've finally had opportunity to look  
into this again. I'm still unsure why exactly the bilinear and  
bicubic resampling produces such second-rate output in GDAL. I have  
a few questions:
1. Is there a fundamental reason why the bilinear resampling  
couldn't use all the pixels from the source region, rather than  
just the 4 corner pixels? A destination pixel would then be the  
weighted average of all the source pixels for each band.


Craig,

Well, it would no longer be bilinear.  It would be averaged.

2. Suppose a 1/6 sampling ratio, i.e. a 6x6 region (0, 0)-->(6, 6)  
in the source image corresponds to a 1x1 region in the output.

What source pixels are used by the bilinear resampling in GDAL?

Wikipedia   
suggests it should be the 'nearest' four pixels to the desired  
destination pixel, i.e. (2, 2), (2, 3), (3, 2), (3, 3). Is that  
what GDAL is doing, or is it using (0, 0), (0, 5), (5, 0), (5, 5) ?


It will use (2, 2), (2, 3), (3, 2), (3, 3).

3. Assuming a 'no' answer to (1), if I were to contribute a patch  
to make bilinear/bicubic resampling take all the source pixels into  
account, would it meet much opposition? It would probably make  
large-scale downsampling using these methods much slower, though  
its questionable whether anyone would complain given that the  
quality of current output is so poor...


I'm not adverse to your preparing a patch that implements an
"averaged" resampling kernel, but it doesn't replace bilinear.

Generally speaking the warper is not intended for dramatic changes
of resolution - it is intended for changes of geometry at roughly the
same resolution - for instance reprojection.

Average was already implemented as a resampling kernel in the
"build overviews" tool because it is aimed at dramatic changes of
resolution.

Best regards,
--
--- 
+--

I set the clouds in motion - turn up   | Frank Warmerdam, warmer...@pobox.com
light and sound - activate the windows | http://pobox.com/~warmerdam
and watch the world go round - Rush| Geospatial Programmer for  
Rent






Re: [gdal-dev] cubic / bilinear resampling with gdalwarp looks similar to nearest neighbour

2010-09-29 Thread Seth Price
The only way to 'force' it is to use cubicspline or lanczos. Everything
else has a hardcoded filter size. If those are too slow, I'd suggest using
the OpenCL code I wrote over the summer. It's many times faster. (It
should be in trunk soon if it isn't already.)

http://github.com/mailseth/OpenCL-integration-for-GRASS---GDAL
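For reference, the resampler is picked per invocation, so getting the wider kernels is just a matter of the -r flag, e.g. (a hypothetical downsampling command):

gdalwarp -r lanczos -ts 1000 1000 input.tif output_small.tif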

~Seth



On Wed, September 29, 2010 4:12 pm, Craig de Stigter wrote:
> Thanks Even
>
> However if I do the same thing with reprojection I get the same result.
> gdalwarp output looks sharp and ugly, imagemagick output looks nice.
>
> Is there any way to force the bilinear/cubic interpolation to notice more
> pixels? The other interpolation methods are much slower, I imagine even
> forcing cubic to notice all of the 5x5 pixels would still be faster.
>
> Regards
> Craig de Stigter
>
> On Thu, Sep 30, 2010 at 10:34 AM, Even Rouault
> > wrote:
>
>> Craig,
>>
>> The main reason is that you are using gdalwarp to reduce an image by a
>> large factor and not reproject it, which generally only involves image
>> resizing by small factors. The implementation of the bilinear and cubic
>> resampling algorithms is currently not designed for the pure resizing use
>> case and takes into account only a few source pixels (basically, if to
>> compute a destination pixel you would need to take into account a 5x5
>> source window, they will only use the 4 corners of that square, which is
>> usually fine since those are just the immediate neighbours of the source
>> pixel), whereas cubicspline and lanczos take into account more pixels
>> (all the pixels in the 5x5 square).
>>
>> Best regards,
>>
>> Even
>>
>> > On Wednesday 29 September 2010 22:27:01, Craig de Stigter wrote:
>> > Hi folks
>> >
>> > I filed this bug  on gdal a
>> few
>> > weeks back and haven't heard anything since.
>> >
>> > Would someone mind taking a look at it?
>> >
>> > Thanks
>> > Craig de Stigter
>>
>
>
>
> --
> Koordinates Ltd
> PO Box 1604, Shortland St, Auckland, New Zealand
> Phone +64-9-966 0433 Fax +64-9-969 0045
> Web http://www.koordinates.com


[gdal-dev] Week 10 OpenCL Integration

2010-08-03 Thread Seth Price

1) What did I do last week?
All my work can be seen here:
http://github.com/mailseth/OpenCL-integration-for-GRASS---GDAL

I put a bunch more effort into finding and fixing bugs. The latest bug  
(which I spent way too much time on) seems to be caused by error  
introduced in the GPU. I think I've set up the code to avoid it in the  
future.


I've implemented vectorization, and it runs much faster on both the  
GPU and CPU. You can expect a speedup over the original code from 2x  
using bilinear resampling to 40x using lanczos resampling. This is  
affected by how much manipulation and masks are used by GDAL's  
processing, of course.


What took most of my time this week was implementing a reduced X/Y  
translation matrix. Now the projection numbers are slightly  
interpolated with a greatly reduced GPU memory footprint (16x smaller  
for this matrix).


2) What I plan to do this week.
I think I'm pretty close to being done with OpenCL GDAL. The only  
thing that needs doing is a clean config script to handle compiling.  
There needs to be changes made for it to run on Windows, but I don't  
have a machine to test, so someone else will have to look at that.


Therefore I'll head over to r.sun and see if I can finish that up.

3) Do I have any problems or obstacles which will interfere with my  
work?
We need to finalize what's going on with pj_do_proj() in r.sun, but it
sounds like this is pretty much done. My family has a hiking trip the
7th through the 10th, so I'll either send my next report a few days early
or late.


~Seth


[gdal-dev] gdalwarp OpenCL Performance (Week 9)

2010-07-27 Thread Seth Price
I just finished the first performance tests of my gdalwarp OpenCL  
code. It's doing better than I expected. I used this command:
"time gdalwarp -q -r lanczos -t_srs '+proj=merc +a=6378137.0  
+b=6378137.0 +nadgri...@null +wktext +units=m' big_test.tif  
big_test.out.tif"


I can compile the OpenCL code two different ways. I can run OpenCL  
code on the CPU and distribute it across processors by selecting the  
CPU as the device. This compiles a multithreaded version of the code.  
By selecting the GPU device, the OpenCL code compiles to run on my Mac  
Pro's graphics card, a GeForce GTX 285. To test, I used a 80 MB RGB  
raster, with 8 bits per channel.


With the original lanczos resampler code I get 5:31, with OpenCL on my  
Mac Pro's 16 cores 0:39, and with OpenCL on my GTX 285 0:10. That's a  
36x speedup.


Using cubicspline resampling, the original code takes 0:59, the OpenCL  
CPU code takes 0:13, and the OpenCL GPU code takes 0:08. Still a  
significant speedup.


And with cubic resampling, the original code takes 0:19, OpenCL CPU  
takes 0:09, and OpenCL GPU takes 0:07. Still better than twice as fast.


Basically, the OpenCL GPU code in all cases is I/O bound. The GPU is  
laughing and requesting more difficult work.


I haven't tested all different types of data and commands. If anyone  
has any samples and warping commands for testing, now would be the  
time to send them to me. I don't know of any GPU bugs in the current  
code.


Here is my current code:
http://github.com/mailseth/OpenCL-integration-for-GRASS---GDAL

~Seth


[gdal-dev] Help with the makefile

2010-07-11 Thread Seth Price
I've been working on OpenCL for gdalwarp, and I have the majority of  
the code written. However, I'm going to need some help getting the  
compiler flags right. I need to detect an OpenCL install, then add the  
correct compiler flag(s) ("-framework OpenCL" when on the Mac).


How do I do this? Doing this correctly is way beyond my make
skills, and anything that I could do would be an ugly hack.


Thanks,
Seth


Re: [gdal-dev] GDAL Speed Optimization

2010-06-10 Thread Seth Price
I'm on a Mac, so I normally use Shark for profiling. It's included  
with Apple's developer tools.


I would definitely try running outside of a virtual machine  
environment. That might be your problem, but you won't know until you  
try.

~Seth


On Jun 10, 2010, at 2:46 AM, stefano.mora...@gmail.com wrote:

I'm working on a VirtualMachine running on a USB HDD, and my
customer uses an iMac laptop, so I can't change the disk.


The primary dataset is composed of a lot of JPEGs downloaded from the
internet and then cached to disk.
I think the performance slowdown appears after the creation of the
virtual dataset, and it is present only when the output bitmap has the
resolution of the printer (considering always the same geographic area).
All the output bitmaps, except the dataset's JPEGs, are in memory and the
working set of the application does not exceed 256 MB.


Can you suggest a profiler?

Thanks,
Stefano

On 10 Jun 2010, at 10:31, Seth Price wrote:
> Unfortunately, both nearest neighbor and bilinear are probably not CPU
> bound, so speeding up their processing won't help. They are probably I/O
> bound, so you might need faster disks. (Though I can test this later in my
> project.) Have you tried running a profiling tool on GDAL while you're
> running the warper?
>
> I've just started the OpenCL project, so it's still pretty immature. I'm
> still discussing the best way to integrate an OpenCL warper into existing
> code.
> ~Seth
>
> On Thu, June 10, 2010 2:26 am, Stefano Moratto wrote:
> > Seth,
> >
> > You have chosen a very interesting project.
> >
> > I use the following:
> >
> > hWarp := GDALAutoCreateWarpedVRT (hSized,
> >                                   nil,
> >                                   PChar(FMapSRSWkt),
> >                                   GDALResampleAlg_GRA_NearestNeighbour,
> >                                   0.5,
> >                                   nil);
> >
> > where hSized is a
> >
> > hSized = F (hDataset) ;
> >
> > F (hDataset) := Virtual Dataset of hDataset. I use it for in-memory
> > resizing. It is generated by (I do not write all the details but I think
> > it is clear):
> >
> > VRTAddSimpleSource (poVRTBand,
> >                     poSrcBand,
> >                     Round(anSrcWin[0]),
> >                     Round(anSrcWin[1]),
> >                     Round(anSrcWin[2]),
> >                     Round(anSrcWin[3]),
> >                     0, 0,
> >                     size.cx,
> >                     size.cy, 'Bilinear', 0.0 );
> >
> > and hDataset = GDALOpen( ... "openstreet.xml" )
> >
> > Hence I use
> > 1) 'Bilinear' for zooming
> > 2) NearestNeighbour for warping.
> >
> > What is the state of your project?
> >
> > Stefano
> >
> > On Thu, Jun 10, 2010 at 9:52 AM, Seth Price s...@pricepages.org> wrote:
> >> >1) hw accelerated functions as IPP or GPU (e.g CUDA)
> >>
> >> This is my (ongoing) Google Summer of Code project, except I'm using
> >> OpenCL. :D
> >>
> >> What resampling algorithm are you using?
> >> ~Seth
> >>
> >> On Thu, June 10, 2010 1:49 am, Stefano Moratto wrote:
> >> > I use GDAL in my traffic optimization CAD program.
> >> > It is a Win32 application written in DELPHI that uses GDAL "C" API - the
> >> > binding was autogenerated by my SWIG module for object pascal.
> >> >
> >> > I use GDAL to download tiles (jpeg) from OpenstreetMap and to compose a
> >> > bitmap of the area that is being viewed. The resulting bitmap (not
> >> > compressed) is warped and displayed.
> >> >
> >> > The performances are quite acceptable but when I try to print they are
> >> > not.
> >> >
> >> > The drawing to be printed has a resolution larger than the screen
> >&

Re: [gdal-dev] GDAL Speed Optimization

2010-06-10 Thread Seth Price
>1) hw accelerated functions as IPP or GPU (e.g CUDA)

This is my (ongoing) Google Summer of Code project, except I'm using
OpenCL. :D

What resampling algorithm are you using?
~Seth


On Thu, June 10, 2010 1:49 am, Stefano Moratto wrote:
> I use GDAL in my traffic optimization CAD program.
> It is a Win32 application written in DELPHI that uses GDAL "C" API - the
> binding was autogenerated by my SWIG module for object pascal.
>
> I use GDAL to download tiles (jpeg) from OpenstreetMap and to compose a
> bitmap of the area that is being viewed. The resulting bitmap (not
> compressed) is warped  and displayed.
>
> The performances are quite acceptable but when I try to print they are
> not.
>
> The drawing to be printed has a resolution larger than the screen (Screen:
> 1024x1024, Printer 4096 x 4094 in A4 and 9000 x 9000 in A3 approximately).
> I think that the bottleneck could be found in:
> 1) jpeg decompression.
> 2) bitmap interpolation (I use the low quality settings).
> 3) warping (I use an approximated warping function).
>
> An increase of performance could be achieved using
> 1) hw accelerated functions such as IPP or GPU (e.g. CUDA)
> 2) parallel algorithms that take advantage of a multicore CPU
>
> Has someone already approached these problems?
>
> Regards,
> Stefano
>
>
> --
> Dr.Eng. Stefano Moratto
> stefano.mora...@gmail.com
> stefano.mora...@csiat.it
> http://www.csiat.it - Traffic Optimization Software


Re: [gdal-dev] download free DEM data

2010-04-08 Thread Seth Price
This isn't the right list for requesting DEM data, but you could start  
here:

http://www2.jpl.nasa.gov/srtm/

~Seth


On Apr 8, 2010, at 8:47 AM, weixj2003ld wrote:


Where could I download DEM data? Please give me a web address.
Thanks for help in advance.




Re: [gdal-dev] CUDA PyCUDA and GDAL

2009-11-18 Thread Seth Price
I've been intending for a while to work on either CUDA or OpenCL with GDAL
& GRASS. I applied to do this for the Google Summer of  Code, but wasn't
accepted this past summer. I'll probably work on it someday just to make
sure my thesis work gets finished within budget.

However, I'm mostly interested in speeding up the resampling routines.
They should be able to get close to the theoretical maximum on CUDA. I
don't know about the routines which you mention without looking closer at
the code. For example, image reading is probably limited by the disk
speed, so it wouldn't be faster in CUDA. Translates are another operation
which doesn't involve much CPU time compared to disk time, so it would
also be difficult to speed it with CUDA. For these operations your best
option might be to replace your hard drive with a SSD.

I'm not familiar with image mosaics in GDAL, but I would guess that they
are heavy on the resampling when generating a quality final image. This is
something where each output pixel depends on the nearest ~16 input pixels.
It takes a lot of CPU time to process all those pixels, and it would
benefit from CUDA.

If you want, I could hunt down my GSoC application which would go into a
bit more detail.
~Seth

On Wed, November 18, 2009 2:46 pm, Shaun Kolomeitz wrote:
> I've heard a lot about the power of NVidia CUDA and am curious about
> ways in which we could leverage off this to increase the performance of
> 1) Image Mosaics 2) Translates and 3) Image Reading/rendering
> (especially highly compressed images).
> I also see that there is pyCUDA as well. Both of which I am unsure how
> (or if) you could use them to run (even portions of) GDAL ?
>
> If anyone has any pointers it would be nice to know.
>
> Many thanks,
> Shaun Kolomeitz
> Principal Project Officer
> Business and Asset Services
> Queensland Parks and Wildlife Service
>
>
> As of 26 March 2009 the Department of Natural Resources and
> Water/Environmental Protection Agency integrated to form the Department
> of Environment and Resource Management
>
> ++
> Think B4U Print
> 1 ream of paper = 6% of a tree and 5.4kg CO2 in the atmosphere
> 3 sheets of A4 paper = 1 litre of water
> ++
>
>


Re: [gdal-dev] Re: Problems with large raster sizes (WMS/TMS)

2009-11-13 Thread Seth Price
Jumping in here, and I may be misunderstanding, but I don't see how we
would have greater than 2 gigapixel images in the foreseeable future.
Doing the math: if we represent the entire earth as a single image, each
pixel would be 2 cm at the equator, and the whole image would be 6291456
TB in size (24-bit image).

I'm not even sure if it's physically possible to image the globe at better
than 2 cm.
~Seth

On Fri, November 13, 2009 11:35 am, Tamas Szekeres wrote:
> 2009/11/13 Frank Warmerdam :
>> I would prefer to limit ourselves to 2 gigapixels by 2 gigapixels images
>> for the time being.  I just can't yet see that larger images are of
>> sufficient interest to justify the complexity involved in properly
>> supporting larger images throughout GDAL.
>>
>> If we were to do so, I'd want it handled via an RFC.
>>
>
> This is true probably for most of the existing drivers. However for
> the server based raster data sources can have no such limit in the
> size when the supported level of the detail is increasing, may be
> earlier than we can expect.
>
> Best regards,
>
> Tamas


Re: [gdal-dev] Anti-Aliasing

2009-10-02 Thread Seth Price
Could you give an example of the problem? Anti-aliasing generally only
applies when you are going from vector line drawings to rasterized images.

If I was trying to smooth things I would first look at cubic or bilinear.
Any other resampling may enhance any noise in your image.
~Seth

On Fri, October 2, 2009 11:16 am, Chris Emberson wrote:
>
> I am looking for an anti-aliasing function to smooth out the pixellated
> effect between raster values. I have tried the various interpolation
> methods with gdalwarp and have also increased the resolution of the raster
> to try and minimise the effect, to no avail.
> Is this a function to be added soon?
>
> Thanks,
> Chris
>


Re: [gdal-dev] resampling techniques in GDAL

2009-06-16 Thread Seth Price
The reason that what you are suggesting isn't done is mainly speed and
simplicity. GDAL makes the assumption that you are warping from one
resolution to another, similar, resolution. The current scheme works well
for that.

However, I believe that Cubic Spline and Lanczos resampling both do what
you are referring to. They use variable kernel sizes depending on the
input/output image sizes.

In conclusion, if you want quality image resampling across large scales,
use a image manipulation package such as imagemagick. Their algorithms
incorporate your concerns and more. But GDAL's method works pretty well
for what it's used for.
~Seth


On Tue, June 16, 2009 5:52 pm, Gregory, Matthew wrote:
> Hi all,
>
> A recent post got me thinking about resampling techniques.  I've posted
> a graphic to help illustrate my question.
>
>   http://www.fsl.orst.edu/lemma/sandbox/resample.png
>
> In the graphic, assume the input image is the 3x3 grid with black
> outlines and blue and yellow dot centers.  Assume the red outlined
> pixels represent three different output resolutions of a single pixel
> each having its center at the black dot.
>
> Using bilinear interpolation, each of these three output resolutions get
> the same value (tested using gdalwarp) based on the four yellow cell
> centers.  I understand why this is happening and realize this is the
> expected behavior.
>
> My question, however, is whether or not there is a resampling technique
> (inside or outside GDAL) that uses the proportional weights and values
> of *all* input pixels touched by the output pixel.  At the finest
> resolution in the illustration, this would be equivalent a nearest
> neighbor resampling (ie. the output pixel is wholly contained within the
> input pixel) and at the coarsest resolution, all nine input pixels would
> contribute to the output value based on proportional area.
>
> This falls outside the traditional { nearest neighbor | bilinear
> interpolation | cubic convolution } resampling techniques and there may
> be a reason why this is a bad idea.  I can see that it might be
> prohibitively slow for large output pixel resolutions but, to my way of
> thinking, it would give a potentially more accurate representation of the
> underlying (finer-resolution) data.
>
> matt
> ___
> gdal-dev mailing list
> gdal-dev@lists.osgeo.org
> http://lists.osgeo.org/mailman/listinfo/gdal-dev
>
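
What Matthew describes is essentially an area-weighted average of the source
pixels under each output pixel (newer GDAL releases added an "average" warp
resampler along these lines). A minimal sketch of the idea in Python/NumPy,
with made-up coordinates purely for illustration:

    # A rough sketch of proportional-area resampling: one output pixel takes
    # the mean of every source pixel it touches, weighted by overlap area.
    import numpy as np

    def area_weighted_value(src, x0, y0, x1, y1):
        """Value of an output pixel covering the window [x0, x1) x [y0, y1),
        given in source-pixel coordinates; src is a 2-D array."""
        total, weight = 0.0, 0.0
        for row in range(int(np.floor(y0)), int(np.ceil(y1))):
            for col in range(int(np.floor(x0)), int(np.ceil(x1))):
                # Overlap between this source pixel and the output footprint.
                w = (min(x1, col + 1) - max(x0, col)) * \
                    (min(y1, row + 1) - max(y0, row))
                if w > 0:
                    total += w * src[row, col]
                    weight += w
        return total / weight

    src = np.arange(9, dtype=float).reshape(3, 3)
    print(area_weighted_value(src, 0.0, 0.0, 3.0, 3.0))  # mean of all nine -> 4.0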


___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev


Re: [gdal-dev] Distributed processing

2009-06-13 Thread Seth Price
I would personally use Condor to distribute the processes, but the  
learning curve may be steep depending on where you're coming from.

~Seth
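
Whatever scheduler is used, the conversion itself parallelises naturally per
source file. A minimal, hypothetical sketch of the per-file job using the
modern GDAL Python bindings and a local process pool; a cluster tool such as
Condor would distribute the same jobs across machines instead of cores:

    # Hypothetical sketch: convert each source file independently; file names
    # are placeholders.
    from multiprocessing import Pool
    from osgeo import gdal

    def convert(src_path):
        # Translate one source file to GeoTIFF; any per-file step could go here.
        dst_path = src_path.rsplit(".", 1)[0] + "_converted.tif"
        gdal.Translate(dst_path, src_path, format="GTiff")
        return dst_path

    if __name__ == "__main__":
        sources = ["tile_001.img", "tile_002.img"]  # placeholder file list
        with Pool(processes=4) as pool:
            for out in pool.imap_unordered(convert, sources):
                print("wrote", out)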

On Jun 12, 2009, at 5:05 AM, John Donovan wrote:


Hi,
We currently have a stand-alone app that converts, mosaicks and scales
GDAL-supported images to a proprietary format. It works well, but we
handle tens of thousands of source files at a time, which can slow the
process down to a crawl.

So we're investigating parallelising this process over several machines, and I
was wondering if anyone has any experience of this that they'd be willing to
share? It's still early days yet, so we're open to all suggestions.

Regards,
John Donovan - Programmer, Virtalis Ltd.


___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev


___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev


Re: [gdal-dev] Availability of Lanczos and cubicspline in gdaladdo

2009-03-30 Thread Seth Price
One reason that warping is different than overviews is that warping has
much more versatile resampling. For the current overview code, an integer
number of pixels are combined into one pixel, and there is no overlap
between resampling areas. For warping code, each destination pixel draws
from source pixels which may overlap with the neighboring destination
pixel. This is also complicated because the weights need to be
recalculated for each destination pixel. I believe that there are
complications due to various image bounds and NODATA masks in warping.

Due to these problems, the warping code is much slower and more complicated
than the overview code. Feel free to take a look, though. Be sure to post
anything you notice, and copy me in case I'm not paying attention to the
mailing list on that day.
~Seth

On Mon, March 30, 2009 3:45 pm, Benoit Andrieu wrote:
> Thanks for the answer,
>
>> > Hi list !
>> >
>> > I was wondering why the Lanczos and cubicspline are available in
>> > gdal_warp and not gdaladdo ?
>>
>> Benoît,
>>
>> The overview builder and warper use quite different mechanisms so there
>> is no close relationship between the resampling options available in
>> each
>> case.
>>
>
> Ok, I think the best thing to do for me is to look at the source code...
>
>> > The quality after downsizing images with gdalwarp is so perfect that I
>> > am now willing to include this in my overviews.
>> > Are there any chances to have this included in future releases, or are
>> > there any difficulties I am not aware of?
>>
>> It is my intention to add a cubic resampling option to the overview
>> building for 1.7.  I am not planning to add the other options.
>
> Are you not planning to because nobody is asking for it (poor me)?
> The difference in image quality on our datasets between merging with
> bilinear / cubicspline / Lanczos is really amazing!
> I am surprised to be the first one to notice such a difference.
> I will try to see if gdaladdo and gdalwarp give the same results. I have a
> doubt right now.
>
>> In theory it should be possible to programmatically connect warpers to
>> overview levels in cases where overview band objects have "proper"
>> dataset parents - which is the case with GeoTIFF hosted overviews.
>>
>> For instance, this C entry point could likely be used with hSrcDS being
>> the base dataset, and hDstDS being the dataset for the overviews.  One
>> problem with this plan is that it is likely that overview datasets do
>> not have proper geotransforms or coordinate systems.  So you might need
>> to push a geotransform onto the overview dataset (with
>> GDALSetGeoTransform())
>> before calling GDALReprojectImage().
>>
>> CPLErr CPL_DLL CPL_STDCALL
>> GDALReprojectImage( GDALDatasetH hSrcDS, const char *pszSrcWKT,
>>  GDALDatasetH hDstDS, const char *pszDstWKT,
>>  GDALResampleAlg eResampleAlg, double
>> dfWarpMemoryLimit,
>>  double dfMaxError,
>>  GDALProgressFunc pfnProgress, void *pProgressArg,
>>  GDALWarpOptions *psOptions );
>>
>> Generally the warp API resamplers are not suitable for use with a
>> destination dataset with a radically lower resolution than the source.  So
>> I'd suggest doing it such that each overview is generated from the next
>> higher resolution overview rather than always building from the base level.
>
> Ok.
> Do you mean that producing a 1m x 1m image from 20cm x 20cm images could
> give bad results?
>
> There is something I am not sure I understand.
> You are saying to use the warp library directly from the overviewing
> mechanism.
> Why can't I take the lanczos/cubicspline code, separate it from the warping
> code, and make a "library" callable from both the overviewing and warping
> code?
> I think the answer is a long one, so if it is too much to ask, just tell me
> to look at the code! ;)
>
>> Note that the overview levels still need to be pre-created using normal
>> mechanisms.
>
> Ok. Have to look.
>
> I'll take a look in the next days and will come back for help !! :)
>
> Regards,
>
> Benoît Andrieu
> b...@ixsea.com
> benoit.andr...@gmail.com
>
> ___
> gdal-dev mailing list
> gdal-dev@lists.osgeo.org
> http://lists.osgeo.org/mailman/listinfo/gdal-dev
>
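
A rough sketch of the GDALReprojectImage approach Frank describes above, via
the Python bindings; the file names, the 4x overview factor, and the
assumption that the reduced-resolution dataset has already been created are
all placeholders:

    # Sketch: give a pre-created, reduced-resolution dataset a proper
    # geotransform, then let the warper fill it from the full-resolution
    # source with Lanczos resampling.
    from osgeo import gdal

    src = gdal.Open("base.tif")
    dst = gdal.Open("base_overview.tif", gdal.GA_Update)  # 1/4-resolution copy

    gt = list(src.GetGeoTransform())
    gt[1] *= 4.0   # pixel width scaled by the overview factor
    gt[5] *= 4.0   # pixel height scaled by the overview factor
    dst.SetGeoTransform(gt)
    dst.SetProjection(src.GetProjection())

    gdal.ReprojectImage(src, dst, None, None, gdal.GRA_Lanczos)
    dst.FlushCache()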


___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev


Re: [gdal-dev] Availability of Lanczos and cubicspline in gdaladdo

2009-03-30 Thread Seth Price
I actually hope to be addressing performance in my GSoC project. I'm
interested in rewriting the GDAL resampling code in CUDA, so the graphics
card does the hard work. For example, instead of processing one pixel at a
time, the latest GeForce GTX 260 would be able to process 216 at once. I'm
hoping for CUDA to be 50 to 100 times faster.

Make sure you're using GDAL 1.6 or later; I recently rewrote the regular
Lanczos/cubicspline warping code to be much faster.
~Seth


On Mon, March 30, 2009 3:16 pm, Benoit Andrieu wrote:
> Sounds interesting !
> I am curious to know what you will try to do in your project !
>
> I am very interested in producing high-quality images for map serving and
> image merging.
> We are using GDAL / MapServer in our software suite. I tried GeoServer and
> found the results were very nice compared to MapServer.
> We did not continue with GeoServer after that because there were other
> artifacts.
>
> But now that I have seen Lanczos and cubicspline results on our images
> using GDAL, I am convinced that something can be done with MapServer to
> challenge GeoServer!! ;)
> Of course, performance is a really big problem with Lanczos/cubicspline...
>
> I'll take a look at the code.
>
> Regards,
>
> Benoît Andrieu
> b...@ixsea.com
> benoit.andr...@gmail.com
>
>> -Original Message-
>> From: "Seth Price" 
>> To: Benoît Andrieu 
>> Cc: gdal-dev@lists.osgeo.org
>> Date: 30/03/2009 20:38
>> Subject: Re: [gdal-dev] Availability of Lanczos and cubicspline in
>> gdaladdo
>>
>> The resampling code between gdal_warp and gdaladdo is completely
>> separate,
>> thus it is basically two different projects. For the Google Summer of
>> Code
>> application I'm about to submit I will be working on the resampling code
>> in GDAL's warper and GRASS. If I have time (and my application is
>> accepted!) I'll try to look into gdaladdo also.
>> ~Seth
>>
>>
>> On Mon, March 30, 2009 12:27 pm, Benoît Andrieu wrote:
>> > Hi list !
>> >
>> > I was wondering why the Lanczos and cubicspline are available in
>> > gdal_warp and not gdaladdo ?
>> >
>> > The quality after downsizing images with gdalwarp is so perfect that I
>> > am now willing to include this in my overviews.
>> > Are there any chances to have this included in future releases, or are
>> > there any difficulties I am not aware of?
>> >
>> > Benoît Andrieu
>> > b...@ixsea.com
>> > benoit.andr...@gmail.com
>> >
>> > ___
>> > gdal-dev mailing list
>> > gdal-dev@lists.osgeo.org
>> > http://lists.osgeo.org/mailman/listinfo/gdal-dev
>> >
>
>


___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev


Re: [gdal-dev] Availability of Lanczos and cubicspline in gdaladdo

2009-03-30 Thread Seth Price
The resampling code between gdal_warp and gdaladdo is completely separate,
thus it is basically two different projects. For the Google Summer of Code
application I'm about to submit I will be working on the resampling code
in GDAL's warper and GRASS. If I have time (and my application is
accepted!) I'll try to look into gdaladdo also.
~Seth


On Mon, March 30, 2009 12:27 pm, Benoît Andrieu wrote:
> Hi list !
>
> I was wondering why the Lanczos and cubicspline are available in gdal_warp
> and not gdaladdo ?
>
> The quality after downsizing images with gdalwarp is so perfect that I am
> now willing to include this in my overviews.
> Are there any chances to have this included in future releases, or are there
> any difficulties I am not aware of?
>
> Benoît Andrieu
> b...@ixsea.com
> benoit.andr...@gmail.com
>
> ___
> gdal-dev mailing list
> gdal-dev@lists.osgeo.org
> http://lists.osgeo.org/mailman/listinfo/gdal-dev
>


___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev


[gdal-dev] Removing nodata values

2008-12-29 Thread Seth Price
How can one remove the nodata values from a GeoTIFF? I'm working with a 3
band RGB image.

I find many instructions on how to add the values, but none on how to
remove them.
Thanks,
Seth
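
For readers landing on this thread later: GDAL 2.1 and newer can drop the
nodata flag per band, either with gdal_edit.py -unsetnodata or via the Python
bindings, roughly as sketched below (the file name is a placeholder):

    # Hypothetical sketch: remove the nodata metadata (not the pixel values)
    # from every band of a GeoTIFF opened in update mode.
    from osgeo import gdal

    ds = gdal.Open("rgb.tif", gdal.GA_Update)
    for i in range(1, ds.RasterCount + 1):
        ds.GetRasterBand(i).DeleteNoDataValue()   # requires GDAL >= 2.1
    ds.FlushCache()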

___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev


Re: [gdal-dev] GDAL/OGR 1.6.0 Release Candidate 2

2008-11-30 Thread Seth Price

Default compile fails due to the bug I reported here:
http://trac.osgeo.org/gdal/ticket/2628
~Seth


On Nov 30, 2008, at 10:33 AM, Frank Warmerdam wrote:


Folks,

There have been a few noteworthy errors identified and corrected since
RC1 was issued.  So I've decided to retract it and prepare an RC2.  RC2
is available at:

http://download.osgeo.org/gdal/gdal160RC2.zip - source as a zip
http://download.osgeo.org/gdal/gdal-1.6.0RC2.tar.gz - source as .tar.gz


On behalf of the GDAL project, I request everyone interested in a high
quality final release do some quick verification of this candidate release.
Depending on immediate feedback, I will call for a PSC vote promoting this as
a final release on Monday (which would take till Wednesday to complete).


Very detailed release news is available at:

http://trac.osgeo.org/gdal/wiki/Release/1.6.0-News

Best regards,
--
---+--
I set the clouds in motion - turn up   | Frank Warmerdam, [EMAIL PROTECTED]
light and sound - activate the windows | http://pobox.com/~warmerdam
and watch the world go round - Rush    | Geospatial Programmer for Rent



___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev


___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev


Re: [gdal-dev] Nearest Neighbour resampling

2008-11-21 Thread Seth Price
I would also be very interested in seeing the command which produced those
files. It looks like something I would expect to see as a result of
overview building after this patch (and average resampling):
https://trac.osgeo.org/gdal/ticket/2408

~Seth


On Thu, November 20, 2008 10:05 am, Frank Warmerdam wrote:
> Ian Elliott wrote:
>> Hi,
>>
>> An error is consistently occurring when I use the nearest neighbour
>> technique to resample an 8-bit paletted image.
>>
>> Please see this page for an example:
>> http://projects.exeter.ac.uk/msel/personnel/iae/gdal/
>> (The resampling error is shown in image c).
>>
>> A possible solution I have discovered is to use 'Majority' resampling,
>> but this is not available in gdal.
>>
>> Can anyone suggest why this is happening?
>
> Ian,
>
> I see the png files on the web site are RGB, and so apparently not the
> original source data.  I do not know why nearest neighbour resampling
> would
> give the effect you are seeing.  I think you will need to provide details
> on how you accomplished the downsampling (along with the original data)
> for me to say more.
>
> There is no majority resampler in gdalwarp or gdal_translate.  GDAL 1.6
> does include the MODE resampler for computing overviews with gdaladdo
> which
> I imagine is the same thing as the majority resampler.
>
> Best regards,
> --
> ---+--
> I set the clouds in motion - turn up   | Frank Warmerdam,
> [EMAIL PROTECTED]
> light and sound - activate the windows | http://pobox.com/~warmerdam
> and watch the world go round - Rush| Geospatial Programmer for Rent
>
> ___
> gdal-dev mailing list
> gdal-dev@lists.osgeo.org
> http://lists.osgeo.org/mailman/listinfo/gdal-dev
>
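
A minimal sketch of the MODE overview resampler Frank mentions, through the
Python bindings rather than gdaladdo; the file name and overview levels are
placeholders:

    # Build pyramid levels with majority-style (MODE) resampling, which keeps
    # palette/class values intact instead of averaging them.
    from osgeo import gdal

    ds = gdal.Open("classified.tif", gdal.GA_Update)
    ds.BuildOverviews("MODE", [2, 4, 8])   # roughly: gdaladdo -r mode classified.tif 2 4 8
    ds = None   # close the dataset and flush the overviews to disk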


___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev


Re: [gdal-dev] gdalwarp interpolation artifacts

2008-09-30 Thread Seth Price
The problem will be in all methods if I recall correctly, but to  
varying degrees. In the original resampling method, they all use  
kernels of varying sizes. When said kernel would sample against the  
edge of the image, it defaults to bilinear resampling (I think).  
Cubicspline has the largest kernel size, so it will have the largest  
artifact line. But cubic will also run up against the edge of the  
image, so it will also have artifacting, but the line will be narrower.


One thing you could try is changing the amount of memory that gdalwarp  
uses. Once it can fit more pixels in RAM, the chunks it works on will  
be larger.

~Seth
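
A rough sketch of the memory-limit workaround, using the modern Python
bindings rather than the 2008-era command line; the file names and sizes are
placeholders, and warpMemoryLimit is assumed to behave like gdalwarp's -wm
switch:

    # Give the warper a larger working buffer so it resamples bigger chunks,
    # which makes kernel/edge seams between chunks less frequent.
    from osgeo import gdal

    gdal.Warp(
        "upsampled.tif",
        "mosaic.tif",
        width=18000,
        height=9000,
        resampleAlg="cubicspline",
        warpMemoryLimit=512,   # interpreted like gdalwarp's -wm switch (~512 MB)
    )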


On Sep 30, 2008, at 11:33 AM, Roger André wrote:

Well, the good news is that the artifacts don't seem to be present  
when I use the plain "cubic" resampling method.  The bad news is  
that the the quality of that method isn't good enough for me to  
use.  I think my solution is going to have to be that I manually cut  
pieces out of the low-res mosaic that overlap one-another, then  
resample them to a size below what will cause the artifact (since I  
*think* it's a size related problem), cut out the edge artifacts,  
and then remosaic them into a full data set.


I'm afraid that if I spend a bunch of time now trying to get a newer
version of gdal working, I'm going to get into an install death-spiral
that I can't get out of.


Seth, can you comment as to whether you think the problem should  
actually only exist in the cubicspline method?  From reading the  
changelogs, it sounds like the error should be present in the other  
methods as well.


Thanks.
--



On Tue, Sep 30, 2008 at 9:37 AM, Roger André <[EMAIL PROTECTED]> wrote:
Hi Seth and Frank,

Thanks for the feedback.  I just finished reading the changelog link  
that Seth sent and all I can say is, "Wow, nice job!"


I did some further testing last night using a single GTOPO30 tile,  
and doing a 10x upsample.  The artifacts are present in it at  
columns 6000 and 12000, so I think we can rule out that  
gdal_merge.py is introducing the error.  I'm testing the cubic  
resampling operator this morning, in the hopes that it won't produce  
the error.  Since it seems to require a very large scaling  
difference, it also takes a bit of time to run the test.  I'll let  
you know what I find when it's done.


Regarding running the latest code from trunk, I'm not sure if I can  
do that.  I'm currently using the gdal implementation in the Linux  
version of FWTools-2.0.6 because I am unable to get all of the  
dependencies installed that I need for a source-compiled version of  
gdal to work properly on my system.  I think I remember seeing some  
traffic from Mateusz in the recent past that indicated he used  
FWTools to build against.  I'll see if I can figure out what he was  
doing.  If I recall correctly, Frank wasn't a huge fan of this  
technique though. ;)


Thanks again,

Roger
--


On Mon, Sep 29, 2008 at 10:34 PM, Seth Price <[EMAIL PROTECTED]>  
wrote:
I agree with Frank, this sounds like the bug that I reported (and  
fixed) here:

http://trac.osgeo.org/gdal/ticket/2327

(Although it became a log of my alterations to the warper kernel,  
which was well beyond the scope of the original bug.)

~Seth



On Sep 29, 2008, at 11:19 PM, Frank Warmerdam wrote:

On Tue, Sep 30, 2008 at 4:30 AM, Roger André <[EMAIL PROTECTED]> wrote:
Hi List,

I've hit the same problem twice now, and I'm pretty certain after  
Round 2
that I'm not introducing it.  Basically, I'm using gdal_merge.py to  
mosaic a

group of low-resolution, 32-bit floating point rasters together, then
running "gdalwarp -ts xxx yyy -r cubicspline" on the mosaic to get a  
higher
resolution image.  I'm finding afterwards that there are linear  
artifacts in
the mosaic which run vertically through the mosaic, which look like  
edge
artifacts, but which are not located near the edge of either an  
original
low-res tile, or the resulting high res one.  I've tested the  
interpolation
of a single low-res tile in the area where the artifact is present  
in the
mosaic and I do not get the same results - the interpolated tile is  
clear of

artifacts.

Is it possible that I'm hitting some sort of memory limit that is  
causing
only a certain number of columns to be interpolated at one time,  
rather than

the entire row?

Roger,

There have been some fixes to some of the gdalwarp interpolator
code in recent months, so one hint is to ensure you try the latest
"trunk" code to see if perhaps the problem is already fixed.

Second, you may want to investigate the gdal_merge.py product
carefully to ensure there aren't any bad pixels in it along the
boundaries.

Third you might try the cubic (instead of cubicspline) interpolator
to see if it gives cleaner results.

Re: [gdal-dev] gdalwarp interpolation artifacts

2008-09-29 Thread Seth Price
I agree with Frank, this sounds like the bug that I reported (and  
fixed) here:

http://trac.osgeo.org/gdal/ticket/2327

(Although it became a log of my alterations to the warper kernel,  
which was well beyond the scope of the original bug.)

~Seth


On Sep 29, 2008, at 11:19 PM, Frank Warmerdam wrote:


On Tue, Sep 30, 2008 at 4:30 AM, Roger André <[EMAIL PROTECTED]> wrote:

Hi List,

I've hit the same problem twice now, and I'm pretty certain after  
Round 2
that I'm not introducing it.  Basically, I'm using gdal_merge.py to  
mosaic a

group of low-resolution, 32-bit floating point rasters together, then
running "gdalwarp -ts xxx yyy -r cubicspline" on the mosaic to get  
a higher
resolution image.  I'm finding afterwards that there are linear  
artifacts in
the mosaic which run vertically through the mosaic, which look like  
edge
artifacts, but which are not located near the edge of either an  
original
low-res tile, or the resulting high res one.  I've tested the  
interpolation
of a single low-res tile in the area where the artifact is present  
in the
mosaic and I do not get the same results - the interpolated tile is  
clear of

artifacts.

Is it possible that I'm hitting some sort of memory limit that is  
causing
only a certain number of columns to be interpolated at one time,  
rather than

the entire row?


Roger,

There have been some fixes to some of the gdalwarp interpolator
code in recent months, so one hint is to ensure you try the latest
"trunk" code to see if perhaps the problem is already fixed.

Second, you may want to investigate the gdal_merge.py product
carefully to ensure there aren't any bad pixels in it along the
boundaries.

Third you might try the cubic (instead of cubicspline) interpolator
to see if it gives cleaner results.

Beyond that a ticket submitted on the smallest test case that
demonstrates the problem would be appreciated.

Best regards,
--
---+--
I set the clouds in motion - turn up   | Frank Warmerdam, [EMAIL PROTECTED]
light and sound - activate the windows | http://pobox.com/~warmerdam
and watch the world go round - Rush    | Geospatial Programmer for Rent

___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev


___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev