Trying to run this using a function relying on scipy.ndimage...
When running gdal_translate on the VRT, I get ImportError: No module named
scipy.ndimage
This comes after successfully importing numpy; scipy.ndimage will happily
import within the Python interpreter.
Any tips on how to track this down?
Hi
We have been looking into this problem some more.
So it seems we can reproduce this using PostGIS raster tiles of 100x100.
The raster2pgsql command was:
raster2pgsql bigtiff.tif -C -r -s 27700 -t 100x100 -P -I -M -Y | psql
The effect seems more pronounced using GDAL 2.2 (trunk) vs GDAL 2.1
Hi Ari,
I began some work to clarify my ideas here: www.github.com/JamesRamm/GeoAlg
Perhaps there is potential for merging with your project? For
neighbourhoods I provide two iterators - one simple block-based and one
'buffered'. The block can be the natural block or user-defined. I also
planned to
Forgive a newbie question, but I'm not so familiar with the RFC process:
- Is there any kind of timescale for this appearing in trunk, or what
needs to happen for that?
I am seeing a continuous, linear memory increase when reading sequential
windows of a large postgis dataset. This is the same regardless of my
GDAL_CACHEMAX settings
If I run the same code on an equivalent GTiff dataset, I do not see the
same increase in memory.
I can see that the postgis dataset
I added the following to the end of the Create method in
frmts/northwood/grddataset.cpp:
vsi_l_offset nFileSize = 1024 + nXSize * nYSize * 2;
if (VSIFTruncateL(poDS->fp, nFileSize) != 0) {
    CPLError(CE_Failure, CPLE_FileIO,
             "Failed to
jramm wrote
> gdalwarp -of NWT_GRD -ot Float32 -t_srs EPSG:4326 test.tif test.grd
>
> very quickly returns
Sorry, this was meant to read:
very quickly returns:
0ERROR 1: /home/jamesramm/test.grd, band 1: IReadBlock failed at X offset 0,
Y offset 0
ERROR 1: GetBlockRef failed at X block of
I have a strange error when reprojecting a dataset to NWT_GRD format.
e.g.
gdalwarp -of NWT_GRD -ot Float32 -t_srs EPSG:4326 test.tif test.grd
very quickly returns
Stepping through, I've traced this to a call to `GetLockedBlockRef` in
IRasterIO (rasterio.cpp) which is called when GDALWarp
I can see where some similarities with other new and existing GDAL work
could be a blocker on this, but I also think this adds a degree more
flexibility, allowing potentially any kind of complex processing to be
carried out without worrying about boilerplate.
It would be good to find
Even Rouault-2 wrote
> For GeoTIFF, the unit of query remains the block. So if a block is
> present, it computes how many pixels of the window of request intersect
> the block, and counts them as present.
>
> Imagine that you have a raster of dimensions 20x20, with tiles of size
>
Based on Even's work on FGDB vector, I have begun looking at raster data,
here: https://github.com/JamesRamm/fgdb_raster_spec
So far everything I've found is pretty much based on the output of the
dump_gdb script. I have two troubling areas currently blocking progress:
1. I can locate pixel
Two options:
1. Use the xml module in python to build up the VRT file. You would create a
VRTRasterBand node for each band of each file. Note you will need to figure
out the maximum extent and raster size (in pixels) first, so this will be
easiest if you enforce the same pixel size and projection
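A minimal sketch of option 1, using the standard-library xml module. The element names follow GDAL's VRT schema, but the data type, raster sizes, and file names below are placeholders:

```python
import xml.etree.ElementTree as ET

def build_vrt(xsize, ysize, sources):
    """Build a skeleton VRT document.

    sources: list of (filename, source_band) tuples, one per output band.
    """
    root = ET.Element("VRTDataset",
                      rasterXSize=str(xsize), rasterYSize=str(ysize))
    for band_no, (filename, src_band) in enumerate(sources, start=1):
        # One VRTRasterBand per source band, as described above.
        band = ET.SubElement(root, "VRTRasterBand",
                             dataType="Float32", band=str(band_no))
        src = ET.SubElement(band, "SimpleSource")
        ET.SubElement(src, "SourceFilename",
                      relativeToVRT="1").text = filename
        ET.SubElement(src, "SourceBand").text = str(src_band)
    return ET.tostring(root, encoding="unicode")

vrt_xml = build_vrt(512, 512, [("a.tif", 1), ("b.tif", 1)])
```

A real mosaic would also need GeoTransform, SrcRect/DstRect, and the maximum-extent computation mentioned above.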
Does GDAL offer any functionality (in the python bindings) to create a PAM
dataset (.aux.xml file) without a 'main' dataset attached?
This is a useful metadata container that we would like to use for other
formats (e.g. non spatial CSV file, which might share the same custom
metadata as a
I'm using VectorTranslate and VectorTranslateOptions in the Python bindings.
It seems like there is nothing equivalent to the ogr2ogr -clipsrc option in
VectorTranslateOptions.
Is there another way of passing this in?
Hi
Given a GDALDataset pointer, which has been opened using GDALOpenEx(...),
what is the best way to discover whether the dataset is raster or vector?
I have thought of checking the driver's metadata for GDAL_DCAP_RASTER or
GDAL_DCAP_VECTOR, but this could potentially return YES for both if it is e.g
This is my mistake.
I had not truly created a sparse raster! I keep forgetting that the GDAL
tools (gdal_translate) will still write the sparse blocks; the sparse
creation option just indicates that FillEmptyTiles should be skipped.
Creating the sparse file by hand correctly sets the
Hi
I am having a few problems with using this (via python).
I have a test dataset with a 16 x 16 blocksize. In the below examples, bnd =
ds.GetRasterBand(1)
First, I try getting the blocksizes using the metadata:
This gives:
None of the first 10 blocks are empty - this is strange as it is
As an update to this: I have finished write support for the GRD driver
for now, with ZMAX/ZMIN exposed as creation options; they will be
explicitly calculated in CreateCopy if not passed in - exactly as you
suggested, Even, thanks.
I'm waiting for permission to get an OSGeo user id so I can raise a
Hi
I have made changes to FillEmptyTiles so that if nodata is set, it will
always fill with nodata, otherwise 0.
I have attached the raw diff... I have no idea how to submit a change
request/review, etc.?
fillempty_nodata.diff
NoDataValue is set on a band level and FillEmptyTiles operates at a dataset
level.
I've never heard of a GeoTIFF with different nodata per band - is this
even possible, or would it be reasonable to take the nodata value of the
1st band and use it to initialise the empty tile blocks?
Then there
I'm building off trunk, so GDAL 2.1dev.
I'll get that stack trace...
--
View this message in context:
http://osgeo-org.1560.x6.nabble.com/Segfault-in-GDALWriteBlock-tp5264660p5264899.html
Sent from the GDAL - Dev mailing list archive at Nabble.com.
Hi I am getting a segfault in a call to GDALWriteBlock with this code:
Stepping through seems to suggest that the segfault is arising from line 171
of "geo_new.c" in libgeotiff:
I can't find precisely what it is I have done wrong to cause this in my
code... any ideas?
Hi
When writing GeoTIFFs, if I don't write blocks, they will automatically
be filled on close by FillEmptyTiles.
It appears that this will only fill with zeros - is it possible to make
it fill with the nodata value instead?
This is potentially a huge time saver when processing a large, fairly
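The behaviour being asked for can be sketched in plain Python (illustrative only, not the driver code): when flushing an unwritten block, fill with the band's nodata value if one is set, otherwise with 0.

```python
def empty_block(blk_w, blk_h, nodata=None):
    """Return a blk_h x blk_w block filled with nodata if set, else 0."""
    fill = nodata if nodata is not None else 0
    return [[fill] * blk_w for _ in range(blk_h)]
```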
Even Rouault-2 wrote
> Skimming through the code, it seems you must provide zmin / zmax to do the
> scaling of floating point elevations to integral values. So either it is
> user
> provided, or you do a prior statistics computation in CreateCopy() if the
> input dataset hasn't min/max
Hi
I have implemented the write support in the NWT_GRD format, having to make a
few compromises. I'd like to go through them and see if they are reasonable.
I have also attached the diff.
1. The existing read driver reports 4 bands for a GRD dataset. The first 3
are essentially 'virtual' - they
Hi
I am adding write support for the Northwood Grid (Vertical Mapper/MapInfo
format) driver.
This format only allows 32 bit float as the datatype, and to make things
more tricky it actually stores this on disk as either 16 or 32 bit ints,
using scaling rules.
The current read driver applies the
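The kind of zmin/zmax scaling involved can be sketched as follows. This is an illustrative reconstruction, not the actual NWT_GRD rules; it maps floats in [zmin, zmax] onto the unsigned 16-bit range and back:

```python
def to_uint16(value, zmin, zmax):
    """Scale a float in [zmin, zmax] to an integer in [0, 65535]."""
    return round((value - zmin) * 65535.0 / (zmax - zmin))

def from_uint16(raw, zmin, zmax):
    """Recover an approximate float from the stored integer."""
    return zmin + raw * (zmax - zmin) / 65535.0
```

The round trip loses precision proportional to (zmax - zmin) / 65535, which is presumably why the format also offers a 32-bit integer variant.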
is not entirely necessary -
mostly a shortcut when writing a whole dataset to another, but I've not
looked closely at it yet.
On 30 March 2016 at 16:36, Even Rouault-2 [via OSGeo.org] <
ml-node+s1560n5258858...@n6.nabble.com> wrote:
> On Wednesday 30 March 2016 17:17:11, jramm wrote:
Hi
I am implementing a write driver for the Vertical Mapper/Northwood Grid
format (NWT_GRD), which already has read support.
I have attached a diff of what I have got so far. Needless to say, it is not
complete or working :D.
I'm looking for information on development, specifically:
- Precisely
I was wondering if there would be any interest in a standard GDAL format for
HDF5 files?
The major problem I have with current formats supported by the HDF5 driver
is that they are very application-specific, with requirements on the
structure of groups and group names.
What I propose would make
We are storing large raster datasets (~500,000 x 500,000) in geotiff files.
The array data is typically very sparse (density is 0.02% or less) and
compression greatly reduces the geotiff size.
However, when processing the data we read it in chunks, which are
automatically decompressed... this
Hi
I am working on a project to manipulate huge rasters in a postgis database.
In order to achieve best performance, we are implementing most of our code
'server-side', e.g. as a postgresql extension.
It occurs to me that another GDAL postgis driver using SPI
Yes this would be incredibly useful. We typically process >100GB rasters and
so iterating in 'windows' is a must for us.
We have typically implemented a solution outside of GDAL, with a 'next'
function that looks something like this:
int RasterWindow::next()
{
    static int windowNoX = -1;
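The same windowed traversal can be sketched in Python as a generator (a hypothetical helper; names and the edge-clipping behaviour are my own choices, not the RasterWindow code above):

```python
def iter_windows(raster_w, raster_h, win_w, win_h):
    """Yield (x_off, y_off, width, height) windows covering the raster,
    clipping the last column/row of windows at the raster edge."""
    for y in range(0, raster_h, win_h):
        for x in range(0, raster_w, win_w):
            yield x, y, min(win_w, raster_w - x), min(win_h, raster_h - y)
```

Each window can then be read with a band's ReadAsArray-style call using the yielded offsets and sizes, keeping memory bounded regardless of raster size.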
I have a raster (.tif), which has an accompanying shapefile attribute table
(.dbf). Basically, every value in the raster is a key into the attribute
table, which has a number of fields (there is a 'Value' field which is the
'key' corresponding to the raster values, then a number of attributes).
I
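The value-to-attribute join described here can be sketched with plain dicts standing in for the .dbf records (the field names below are made up for illustration):

```python
# Hypothetical attribute table, keyed by the raster 'Value' field.
attribute_table = {
    1: {"Value": 1, "LandUse": "forest"},
    2: {"Value": 2, "LandUse": "water"},
}

def attribute_for_pixel(pixel_value, table, field):
    """Look up one attribute for a raster pixel value; None if absent."""
    row = table.get(pixel_value)
    return row[field] if row is not None else None
```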
You can get full binary distributions for windows (including) ogr_FileGDB.dll
from http://www.gisinternals.com/
They don't seem to have it in the MSVC 1800 releases, but it is in 1700 and 1600.
You will also need the FileGDB API dll from ESRI. You can get this from the
ESRI website (you will need to
I'm using GDAL 201 to iterate features in a geodatabase feature class and
add a new text field, whose value depends on the integer value in another
field. It gives a fairly large memory leak (approx 1GB for every 1m
features iterated). I'm not sure where the leak is... any ideas? The full
code: int
gdal_rasterize is limited to use just 10MB of memory (line ~640 of
gdalrasterize.cpp).
Is there any way to change this (without having to recompile)?
I'm noticing that changing the output data format from Byte to Int16 or even
Int32 drastically reduces performance. This must be because the strict
I have a number of vector datasets to rasterize.
I am finding that when the datasets are small, I have no problem starting
multiple gdal_rasterize processes (using python).
If my datasets are large (I'm trying to create rasters with millions of
pixels), I'm finding that I can only start one
Is there a way to programmatically query what dataset creation/layer
creation options are available for a driver? I'd like to be able to do
something like:
driver->listDCO(*options)
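One possible answer, sketched: GDAL drivers advertise their creation options as an XML document in the DMD_CREATIONOPTIONLIST metadata item (retrievable via GetMetadataItem in the bindings). The parsing below runs on a hand-written sample, not real driver output:

```python
import xml.etree.ElementTree as ET

# Hand-written illustration of a CreationOptionList document; a real one
# would come from drv.GetMetadataItem('DMD_CREATIONOPTIONLIST').
SAMPLE = """<CreationOptionList>
  <Option name="COMPRESS" type="string-select"/>
  <Option name="BLOCKXSIZE" type="int"/>
</CreationOptionList>"""

def list_creation_options(xml_text):
    """Return (name, type) pairs for each advertised creation option."""
    root = ET.fromstring(xml_text)
    return [(opt.get("name"), opt.get("type"))
            for opt in root.findall("Option")]
```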
I'm running gdal_rasterize, trying to make it create a new tif from polygons,
using an attribute field as the burn value. Here is my command:
gdal_rasterize -tr 5 5 -a SOP -l test -a_nodata 0 test.shp test.tif
I end up with an image that is all 0's. The shapefile is correct and can be
read by
On Wednesday 20 May 2015 15:57:38, jramm wrote:
Does GDAL have any support for writing ESRI-style raster attribute
tables?
These are the .vat.dbf files that often accompany geotiffs and other
formats
I wasn't particularly aware of those
Does GDAL have any support for writing ESRI-style raster attribute tables?
These are the .vat.dbf files that often accompany geotiffs and other
formats
It seems GDAL will always let you create a Raster Attribute Table (in
memory), but it may fail when writing to disk depending on whether the
format supports it?
I can't seem to find any information anywhere on which formats support
RATs. Is there a list I can refer to?
David, there should be no compression, but I will try setting it explicitly
in the driver...
I'm writing a custom processing program using GDAL in C.
I'm processing a raster of roughly 150 000 * 200 000 pixels in windows of
256 * 256 pixels.
I'm finding that traversing the raster and applying some basic processing to
each of the windows takes very little time; about 10 minutes on my