Re: [gdal-dev] OGR Field Types?

2015-06-04 Thread Brad Hards
On Fri, 5 Jun 2015 12:38:01 AM Stefan Keller wrote:
 We've finished the GeoCSV spec. 
Does that mean:  http://giswiki.hsr.ch/GeoCSV ?

Brad

___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev


Re: [gdal-dev] OGR Field Types?

2015-06-04 Thread Stefan Keller
Hi Brad

2015-06-05 3:18 GMT+02:00 Brad Hards br...@frogmouth.net wrote:
 We've finished the GeoCSV spec.
 Does that mean:  http://giswiki.hsr.ch/GeoCSV ?

Yes. Any comments are welcome.

Cheers, S.




[gdal-dev] Problem appending non-geometry tables in PostGis

2015-06-04 Thread Roger André
Hi All,

I'm having some trouble using ogr2ogr to do batch uploads to a PostGIS DB.
It appears that on tables which don't contain geometry, CreateLayer is
failing and not allowing data to be appended to an existing table.  Here is
the command that is being used to load each feature class:

ogr2ogr -f PostgreSQL -lco ENCODING=UTF-8 -append -update PG:host=foo
dbname=bar user=crak password=head ./nokia_here_asia/file.gdb CityPOINames

Below is my logging output.  The first set of data shows the asia data set
being loaded, which creates the PostGIS tables. The next set shows what
happens with the europe data set.  Although warnings are given for the
MapAdminArea, MapAdminLink and CityPOI layers, they are successfully
appended.  However, CityPOINames, MapAreaNames, and MapNameTrans all fail
with the CreateLayer error.  Those tables are different in that they lack
geometry.

Extracting from ./nokia_here_asia/file.gdb
Loading MapAdminArea feature_class
Loading MapAdminLink feature_class
Loading CityPOI feature_class
Loading CityPOINames feature_class
Loading MapAreaNames feature_class
Loading MapNameTrans feature_class

Extracting from ./nokia_here_europe/file.gdb
Loading MapAdminArea feature_class
WARNING: Layer creation options ignored since an existing layer is
 being appended to.
Loading MapAdminLink feature_class
WARNING: Layer creation options ignored since an existing layer is
 being appended to.
Loading CityPOI feature_class
WARNING: Layer creation options ignored since an existing layer is
 being appended to.
Loading CityPOINames feature_class
ERROR 1: Layer citypoinames already exists, CreateLayer failed.
Use the layer creation option OVERWRITE=YES to replace it.
ERROR 1: Terminating translation prematurely after failed
translation of layer CityPOINames (use -skipfailures to skip errors)

Loading MapAreaNames feature_class
ERROR 1: Layer mapareanames already exists, CreateLayer failed.
Use the layer creation option OVERWRITE=YES to replace it.
ERROR 1: Terminating translation prematurely after failed
translation of layer MapAreaNames (use -skipfailures to skip errors)

Loading MapNameTrans feature_class
ERROR 1: Layer mapnametrans already exists, CreateLayer failed.
Use the layer creation option OVERWRITE=YES to replace it.
ERROR 1: Terminating translation prematurely after failed
translation of layer MapNameTrans (use -skipfailures to skip errors)


I could create some giant CSV files and load these in one shot, but I'd
prefer it if I could handle the entire load using ogr2ogr.  Has anyone else
encountered this, or have suggestions for dealing with the problem?

Thanks,

Roger

Re: [gdal-dev] Problem appending non-geometry tables in PostGis

2015-06-04 Thread Saulteau Don
If you add -nlt NONE to the ogr2ogr command, does that work?

You might need to do that just for the non-spatial tables, because if you
do that with the spatial tables, they will only upload the attributes.
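A driver script along these lines might work (a sketch only; the layer lists,
paths, and connection string below are assumptions based on the thread, and the
spatial/non-spatial split would need to match your file.gdb):

```python
# Sketch: add -nlt NONE only for the attribute-only (non-spatial) tables.
spatial = ["MapAdminArea", "MapAdminLink", "CityPOI"]
non_spatial = ["CityPOINames", "MapAreaNames", "MapNameTrans"]

def build_cmd(gdb, layer):
    """Build the ogr2ogr argument list for one layer."""
    cmd = ["ogr2ogr", "-f", "PostgreSQL", "-append", "-update",
           "PG:host=foo dbname=bar user=crak password=head", gdb, layer]
    if layer in non_spatial:
        cmd += ["-nlt", "NONE"]   # treat the layer as having no geometry
    return cmd

for layer in spatial + non_spatial:
    # print the command; replace with subprocess.run(build_cmd(...)) to execute
    print(" ".join(build_cmd("./nokia_here_europe/file.gdb", layer)))
```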


Donovan

On Thu, Jun 4, 2015 at 5:19 PM, Roger André ran...@gmail.com wrote:

 Hi All,

 I'm having some trouble using ogr2ogr to do batch uploads to a PostGIS
 DB.  It appears that on tables which don't contain geometry, CreateLayer is
 failing and not allowing data to be appended to an existing table.

 [...]

[gdal-dev] ImportError: numpy.core.multiarray failed to import

2015-06-04 Thread Ronquillo, Edgar Nahum
Hello,
So I am trying to run the example called "Clip a Geotiff with Shapefile" from
this website: https://pcjericks.github.io/py-gdalogr-cookbook/raster_layers.html.
However, I get an error about numpy, although I do have numpy installed. I
believe it is something with GDAL. Here is my error output:

ImportError: numpy.core.multiarray failed to import

Traceback (most recent call last):
  File "C:\Users\r294505\Desktop\pythonTesting.py", line 1, in <module>
    from osgeo import gdal, gdalnumeric, ogr, osr
  File "C:\Python27\lib\site-packages\osgeo\gdalnumeric.py", line 1, in <module>
    from gdal_array import *
  File "C:\Python27\lib\site-packages\osgeo\gdal_array.py", line 26, in <module>
    _gdal_array = swig_import_helper()
  File "C:\Python27\lib\site-packages\osgeo\gdal_array.py", line 22, in swig_import_helper
    _mod = imp.load_module('_gdal_array', fp, pathname, description)
ImportError: numpy.core.multiarray failed to import

By the way, I am running 64-bit Windows 7, in case that is of any help. Any help
as to what could be causing this error would be appreciated.

Thank You

[gdal-dev] Call for discussion on RFC 26: GDAL Block Cache Improvements

2015-06-04 Thread Even Rouault
Hi,

I've updated an old RFC initiated by Tamas. The main idea, having a hash-set
based implementation as an alternative to the array-based one, remains. The
changes consist mainly of code restructuring, performance improvements to
reduce lock contention, and porting to the latest code base.

This is an RFC for GDAL 2.1.

Details at https://trac.osgeo.org/gdal/wiki/rfc26_blockcache

== Summary and rationale ==

GDAL maintains an in-memory cache for the raster blocks fetched from the
drivers and ensures that a second attempt to access the same block will be
served from the cache instead of the driver. This cache is maintained in a
per-band fashion, and an array is allocated holding a pointer for each block
(or sub-block). This approach is not sufficient with large raster dimensions
(or large virtual rasters, e.g. with the WMS/TMS driver), which may cause
out-of-memory errors in GDALRasterBand::InitBlockInfo, as raised in #3224.

For example, a band of a dataset at level 21 with GoogleMaps tiling requires
2097152x2097152 tiles of 256x256 pixels. This means that GDAL will try to
allocate an array of 32768x32768 = 1 billion elements (32768 = 2097152 / 64).
The size of this array is 4 GB on a 32-bit build, so it cannot be allocated at
all, and it is 8 GB on a 64-bit build (even if this is generally only a virtual
memory reservation rather than an actual allocation of physical pages, due to
the over-commit mechanism of the operating system). At dataset closing, those
1 billion cells have to be explored to discover any remaining cached blocks. In
reality, all the above figures must be multiplied by 3 for an RGB (or 4 for an
RGBA) dataset.

In the hash set implementation, memory allocation depends directly on the 
number of cached blocks. Typically with the default GDAL_CACHEMAX size of 40 
MB, only 640 blocks of 256x256 pixels can be simultaneously cached (for all 
datasets).
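The figures above can be checked with a few lines of Python (a sketch; it
assumes 4- and 8-byte pointers and, for the cacheable-block count, one byte per
pixel, matching the rationale above):

```python
# Sanity-check the RFC's arithmetic for GoogleMaps zoom level 21.
TILES = 2097152          # tiles per side at level 21
SUBBLOCK = 64            # blocks grouped per array cell (2097152 / 64 = 32768)

cells_per_side = TILES // SUBBLOCK
cells = cells_per_side ** 2                  # elements in the per-band array
print(cells)                                 # about 1 billion (2**30)

# Pointer array size: 4 bytes per pointer on 32-bit, 8 on 64-bit builds.
print(cells * 4 / 2**30, "GiB on 32-bit")    # 4.0
print(cells * 8 / 2**30, "GiB on 64-bit")    # 8.0

# Hash-set approach: memory scales with cached blocks instead.
cache_max = 40 * 1024 * 1024                 # default GDAL_CACHEMAX, 40 MB
block_bytes = 256 * 256                      # one 256x256 block, 1 byte/pixel
print(cache_max // block_bytes, "blocks")    # 640
```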

Best regards,

Even

-- 
Spatialys - Geospatial professional services
http://www.spatialys.com


Re: [gdal-dev] Read Ascii Grid and produce GeoTiff

2015-06-04 Thread Even Rouault
Le jeudi 04 juin 2015 15:58:49, Ronquillo, Edgar Nahum a écrit :
 Hello everyone,
 I have been looking around for a while now. I want to create a python
 script that can read an AsciiGrid file and be able to produce a GeoTiff
 out of it. I know it is possible with GDAL, I just don't seem to get it.
 There are some references out there about this, but nothing exact. Could
 someone please help me accomplish this, or link me to anything out there
 that can help me with this task?

Edgar,

these two lines should do it:

from osgeo import gdal
# Open the ASCII grid and copy it to a GeoTIFF in one step
gdal.GetDriverByName('GTiff').CreateCopy('out.tif', gdal.Open('in.asc'))

Even

-- 
Spatialys - Geospatial professional services
http://www.spatialys.com

[gdal-dev] Bindings

2015-06-04 Thread Ari Jolma

Hi,

I've been trying to find a way to make the common SWIG interface files
less concerned about languages and the whole system more flexible and
understandable (which I see as a prerequisite for further developments).


My conclusion now is that it is probably better to make the main files
(what are now gdal.i, ogr.i, etc.) language specific, and only the class
files (now ColorTable.i, MajorObject.i, etc.) and some other files
(typedefs.i etc.) common. That way each language could compose the module
as it likes. For example, in Perl I would like to get rid of Const, and a
language could put all classes into one module (gdal), etc.


This would at least require extracting remaining common material in 
gdal.i and ogr.i into new files.


I'll test this in my github fork - which I've mentioned a couple of 
times already. But it will probably take some time due to summer etc.


Any comments on this? This is again just internal reorganization and 
does not affect the APIs.


Ari



Re: [gdal-dev] Bytes still reachables in setFromUserInput function

2015-06-04 Thread Mathieu Coudert
Thanks Even for the prompt answer.
I would have expected a method in the OGR API to handle this without having
to call proj directly, but if that is the standard or default way to do it,
let's do it.

Cheers,

Mathieu


On Thu, Jun 4, 2015 at 4:20 PM, Even Rouault even.roua...@spatialys.com
wrote:

 Le jeudi 04 juin 2015 16:04:32, Mathieu Coudert a écrit :
  [...]

 Mathieu,

 This is not lost memory, but cached proj.4 definitions. You could
 explicitly call pj_clear_initcache() (from proj_api.h) at the end of your
 process to clear that cache.

 Even

 --
 Spatialys - Geospatial professional services
 http://www.spatialys.com


[gdal-dev] Read Ascii Grid and produce GeoTiff

2015-06-04 Thread Ronquillo, Edgar Nahum
Hello everyone,
I have been looking around for a while now. I want to create a Python script
that can read an AsciiGrid file and produce a GeoTiff out of it. I know it is
possible with GDAL, I just don't seem to get it. There are some references out
there about this, but nothing exact. Could someone please help me accomplish
this, or link me to anything out there that can help me with this task?

Any help would be appreciated!

Thank You

[gdal-dev] Bytes still reachables in setFromUserInput function

2015-06-04 Thread Mathieu Coudert
Hello,



I am using GDAL 1.11.2 on Linux CentOS, and here is one of my functions:

{
    OGRSpatialReference sr;

    if (sr.SetFromUserInput(proj.c_str()) != OGRERR_NONE)
        throw InvalidInput();

    char *wkt;
    sr.exportToWkt(&wkt);
    string srs(wkt);
    OGRFree(wkt);

    return srs;
}



This code works fine, but when I use Valgrind on it, I get the following
message:

[…] bytes still reachable at line X

where X is the line with the SetFromUserInput call
(please find the full stack trace below).

[...] are still reachable [...]

==22766==    at 0x4A06A2E: malloc (vg_replace_malloc.c:270)
==22766==    by 0x54EB3B7: pj_malloc (pj_malloc.c:19)
==22766==    by 0x54F67BB: pj_insert_initcache (pj_initcache.c:161)
==22766==    by 0x54EA388: get_init (pj_init.c:288)
==22766==    by 0x54EA57B: pj_init_ctx (pj_init.c:428)
==22766==    by 0x54EB154: pj_init_plus_ctx (pj_init.c:366)
==22766==    by 0x6A4978: OCTProj4Normalize (ogrct.cpp:309)
==22766==    by 0x6A3215: OGRSpatialReference::importFromEPSGA(int) (ogr_fromepsg.cpp:)
==22766==    by 0x6A381B: OGRSpatialReference::importFromEPSG(int) (ogr_fromepsg.cpp:2095)
==22766==    by 0x693E41: OGRSpatialReference::SetFromUserInput(char const*) (ogrspatialreference.cpp:1955)
==22766==    by 0x44D70B: GisKit::ProjToSRS(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) (GisKit.cpp:46)


Do you have any ideas?

I'd appreciate any feedback/suggestion.



Thanks a lot.



Regards,

Mathieu

Re: [gdal-dev] Bytes still reachables in setFromUserInput function

2015-06-04 Thread Even Rouault
Le jeudi 04 juin 2015 16:04:32, Mathieu Coudert a écrit :
 [...]

Mathieu,

This is not lost memory, but cached proj.4 definitions. You could explicitly
call pj_clear_initcache() (from proj_api.h) at the end of your process to
clear that cache.

Even

-- 
Spatialys - Geospatial professional services
http://www.spatialys.com

Re: [gdal-dev] Filesize too large when writing compressed float's to a Geotiff from Python

2015-06-04 Thread Even Rouault

 It makes sense that the order in which the data is written/stored affects
 the performance of the compression, but i don't get why it would be
 different for integers as compared to floats?

Floats are larger than Int8 and Int16, so for the same GDAL_CACHEMAX you can
cache fewer blocks, causing more temporary flushes of partial tiles/strips to
disk (partial = having data for only one or two bands, but not all 3), which
then need to be refetched when the data for the remaining bands arrives, and
then recompressed and rewritten.
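As a back-of-the-envelope illustration of that cache pressure (a sketch; it
assumes single-band 256x256 blocks and the default 40 MB GDAL_CACHEMAX):

```python
# How many 256x256 blocks fit in the default 40 MB block cache, per data
# type (bytes per sample). Larger samples mean fewer cacheable blocks,
# hence earlier flushes of partially-written tiles/strips.
cache_max = 40 * 1024 * 1024
block_pixels = 256 * 256

cacheable = {}
for name, size in [("Byte", 1), ("Int16", 2), ("Float32", 4), ("Float64", 8)]:
    cacheable[name] = cache_max // (block_pixels * size)
    print(f"{name:8s}: {cacheable[name]} blocks fit in the cache")
```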

 
 
 [...]

-- 
Spatialys - Geospatial professional services
http://www.spatialys.com

Re: [gdal-dev] Filesize too large when writing compressed float's to a Geotiff from Python

2015-06-04 Thread Rutger
Even,

Thanks for the suggestions; the first two work well. I'll have a look at
ds.WriteRaster, which seems an interesting way, since it also prevents
unnecessary looping over the bands.

Writing per block is what I usually do; maybe that's why I never noticed it
before. I now ran into it while fetching and writing a dataset from OpenDAP,
whereas I usually read blocks from GTiffs.

It makes sense that the order in which the data is written/stored affects
the performance of the compression, but I don't get why it would be
different for integers as compared to floats?


Regards,
Rutger




Even Rouault-2 wrote
 Le mercredi 03 juin 2015 15:21:07, Rutger a écrit :

 Rutger,

 the issue is that you write data band after band, whereas by default the
 GTiff driver creates pixel-interleaved datasets. So some blocks in the
 GTiff might be reread and rewritten several times as the data coming from
 the various bands comes in.

 Several fixes/workarounds:
 - if you've sufficient RAM to hold another copy of the uncompressed
 dataset, increase GDAL_CACHEMAX
 - or add options = ['INTERLEAVE=BAND'] in the Create() call to create a
 band-interleaved dataset
 - more involved fix: since there's no dataset WriteArray() in GDAL Python
 for now, you would have to iterate block by block and for each block
 write the corresponding region of each band.
 - you could also use Dataset.WriteRaster() if you can get a buffer from
 the numpy array

 Even






[gdal-dev] Need feedback regarding the approaches for swig bindings for librarified GDAL utilities

2015-06-04 Thread faza mahamood
Hi,

I am working on swig bindings for librarified GDAL utilities as part of my
Google Summer of Code project. I have already librarified gdalinfo utility.

https://github.com/fazam/gdal/blob/gdalinfo/gdal/apps/gdal_utils.h
https://github.com/fazam/gdal/blob/gdalinfo/gdal/apps/gdalinfo_lib.cpp

These are the two approaches for swig bindings, after discussing with my
mentor Even.

(1)To create a new module gdal_utils
If we go with this approach, the function calls for gdalinfo and ogrinfo
will be like
gdal_utils.gdalinfo()
gdal_utils.ogrinfo()

(2)To go with the existing modules gdal and ogr
In this approach the function calls for gdalinfo and ogrinfo will be like
gdal.Info()
ogr.Info()
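Purely for illustration (none of these functions existed in the bindings at the
time; the names are exactly the proposals under discussion, and the bodies are
stand-ins), the two approaches differ only in where the wrappers live:

```python
# Stand-ins contrasting the two proposed namespacings. The classes below
# mimic modules; they are illustrative only, not real GDAL API.
class gdal_utils:                       # approach (1): a new gdal_utils module
    @staticmethod
    def gdalinfo(ds):
        return "report for %s" % ds

class gdal:                             # approach (2): extend the gdal module
    @staticmethod
    def Info(ds):
        return "report for %s" % ds

print(gdal_utils.gdalinfo("in.tif"))    # approach (1) call style
print(gdal.Info("in.tif"))              # approach (2) call style
```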

We also need to decide on the names of the functions for all the librarified
utilities. Since there are already gdal.RasterizeLayer() and
gdal.ContourGenerate(), which do most of the work of the gdal_rasterize and
gdal_contour utilities, what should the functions for gdal_rasterize and
gdal_contour be named?
What should we name the function for ogr2ogr?

I'd appreciate any feedback/suggestion.

Regards,
Faza

Re: [gdal-dev] Bindings

2015-06-04 Thread Tamas Szekeres
Hi Ari,

Creating language-specific main files would be fine for me. We could also
add the language-specific extensions in the bottom section (like
gdal_csharp_extend.i) directly in the file.

We should, however, make sure to update all relevant files if a common
change is made in a language-specific file.

Best regards,

Tamas


2015-06-04 16:35 GMT+02:00 Ari Jolma ari.jo...@gmail.com:

 Hi,

 I've been trying to find a way to make the common SWIG interface files
 less concerned about languages and the whole system more flexible and
 understandable (which I see as a prerequisite for further developments).

 [...]

 Ari
