Even,
On Tue, Jul 17, 2012 at 10:20 AM, Even Rouault wrote:
>
>
> You could just remove the tweaking done with GDAL_CACHEMAX and -wm and leave
> them to their default values. In my experience, such a big value for
> GDAL_CACHEMAX is of no use (no need to use more than 500 MB), and when merging
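For comparison, a sketch of the same mosaic call with the cache left at more modest values along the lines Even suggests (the 500 MB figures are illustrative; $RES and $LIST are the variables from Margherita's script):

  # modest cache: ~500 MB block cache, ~500 MB warp memory
  gdalwarp --config GDAL_CACHEMAX 500 -wm 500 -srcnodata - -dstnodata - \
      -r bilinear -tr $RES $RES $LIST mosaik_$RES.tif -co TILED=YES -co BIGTIFF=YES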
Quoting Margherita Di Leo:
> On Mon, Jul 16, 2012 at 6:01 PM, Margherita Di Leo wrote:
>
> >
> > now I'm trying with:
> >
> > CACHE="--config GDAL_CACHEMAX 8000 -wm 2000"
> > gdalwarp $CACHE -srcnodata - -dstnodata - -r bilinear -tr $RES
> > $RES $LIST mosaik_$RES.tif -co TILED=YES
> >
> >
Margherita Di Leo writes:
>
> now I'm trying with:
>
> CACHE="--config GDAL_CACHEMAX 8000 -wm 2000"
> gdalwarp $CACHE -srcnodata - -dstnodata - -r bilinear -tr $RES $RES $LIST mosaik_$RES.tif -co TILED=YES
>
> It is taking long, of course, but somehow it is working and I don't get errors.
On Tue, Jul 17, 2012 at 8:40 AM, Margherita Di Leo wrote:
>
>
> On Mon, Jul 16, 2012 at 6:01 PM, Margherita Di Leo wrote:
>
>>
>> now I'm trying with:
>>
>> CACHE="--config GDAL_CACHEMAX 8000 -wm 2000"
>> gdalwarp $CACHE -srcnodata - -dstnodata - -r bilinear -tr $RES
>> $RES $LIST mosaik_$
On Mon, Jul 16, 2012 at 6:01 PM, Margherita Di Leo wrote:
>
> now I'm trying with:
>
> CACHE="--config GDAL_CACHEMAX 8000 -wm 2000"
> gdalwarp $CACHE -srcnodata - -dstnodata - -r bilinear -tr $RES
> $RES $LIST mosaik_$RES.tif -co TILED=YES
>
>
> It is taking long, of course, but somehow it
Hi Even,
> BigTIFF support likely works as expected. In that instance, the error
> comes from
> the huge value used for the -wm option of gdalwarp. Try to keep it below
> 2000.
> If I remember well, there must be a sanity check in the warping algorithm
> to
> prevent allocation of buffers above 2
Margherita,
> # GDAL cache in megabytes
> CACHE="--config GDAL_CACHEMAX 8000 -wm 8000"
>
BigTIFF support likely works as expected. In that instance, the error comes from
the huge value used for the -wm option of gdalwarp. Try to keep it below 2000.
If I remember well, there must be a sanity check in the warping algorithm to prevent allocation of buffers above 2 GB.
Hi,
I need to make a mosaic with gdalwarp using a lot of large files (ASTER
GDEM, Europe coverage). For this, I need BigTIFF support, but I'm not
sure it is working properly.
Here is my workflow:
# RES=0:00:01
RES=0.000278
BIGTIFF="-co BIGTIFF=YES"
# GDAL cache in megabytes
CACHE="--config G
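The preview cuts off here; assembled from the fragments quoted elsewhere in the thread, the full script presumably looks roughly like this (whether $BIGTIFF is actually passed to gdalwarp is not visible in the excerpts, and $LIST is assumed to hold the input file names):

  # RES=0:00:01
  RES=0.000278
  BIGTIFF="-co BIGTIFF=YES"
  # GDAL cache in megabytes
  CACHE="--config GDAL_CACHEMAX 8000 -wm 8000"
  # mosaic all inputs into a tiled (Big)TIFF
  gdalwarp $CACHE $BIGTIFF -srcnodata - -dstnodata - -r bilinear \
      -tr $RES $RES $LIST mosaik_$RES.tif -co TILED=YES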
Hello,
I'm facing a problem related to BigTiff support in experimental FWTools3
on Linux. BigTiff support works fine with all compiled programs (e.g.
gdal_translate, gdalwarp) but not with the python script
gdal_fillnodata.py. For instance, executing:
gdal_fillnodata.py -md 10 unfilled.tif
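One quick way to check whether the GDAL build behind the FWTools Python scripts was compiled with BigTIFF support is to look for BIGTIFF among the GTiff creation options; a sketch (the grep is only illustrative):

  gdalinfo --format GTiff | grep -i BIGTIFF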
Hi all
I've been testing.
Seems like something happens when I set tilesize (BLOCKXSIZE/BLOCKYSIZE) to
256 or bigger.
My test program (VisualBasic2010) produced an untiled file of 10x10
pixels in 324 seconds (uncompressed, no overviews). I can live with that.
With tilesize 128 it takes 158
It looks like WriteRaster [or some other component in the process] has
used all available memory and now has to do massive amounts of churning
between the last snippet of memory and your hard drive. Whether it's a
leak or a 'feature' of some library the author depended on is of course
moot. If you
On Thu, Jul 1, 2010 at 10:00 AM, Helge Morkemo wrote:
> Hi
> I'm creating a huge bigtiff file, it's 169000x151000 pixels (23.77
> Gigapixels, rgb) from .net using WriteRaster.
>
> In the beginning it seemed to be producing output with good performance, but
> it is getting slower and slower.
> The
Hi
I'm creating a huge BigTIFF file of 169000x151000 pixels (23.77
gigapixels, RGB) from .NET using WriteRaster.
In the beginning it seemed to be producing output with good performance, but
it is getting slower and slower.
The production time seems to be increasing in steps.
Is there a caching
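The question is cut off, but presumably asks about a caching setting; the main knob is GDAL_CACHEMAX, which, if I'm not mistaken, can also be set as an environment variable before launching the program. A sketch for a Windows/.NET setup (the value and program name are placeholders, not recommendations from the thread):

  rem enlarge the GDAL block cache to roughly 512 MB for this run
  set GDAL_CACHEMAX=512
  MyBigTiffWriter.exe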
Even, Christopher:
I have started building a new overview with the GDAL_CACHEMAX set to
65. Already, I can see the improvement.
It is a bit embarrassing to ask why GDAL_CACHEMAX did not show
such an improvement at 2048 or a larger value. I am seeing the
thrashing that you mentioned at 65, so I
2 things:
1) Christopher Schmidt is right in suggesting to increase GDAL_CACHEMAX (the
default is 40 MB). In your use case (building external overviews with
compression), the analysis of the algorithm (and a bit of experimentation to
confirm ;-)) shows that the minimum cache size is:
s
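The preview is cut off before the formula, but the practical takeaway discussed here is to give gdaladdo a larger block cache when building compressed external overviews; a sketch reusing the command quoted later in this list, with an illustrative cache value and overview levels:

  gdaladdo --config GDAL_CACHEMAX 500 -r nearest -ro \
      --config COMPRESS_OVERVIEW LZW --config BIGTIFF_OVERVIEW YES \
      Central_North.tif 2 4 8 16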
I should add that I picked up the 64-bit executable from Tamas' site
after the 32-bit executable, v.1.6.0, was taking hours to create
overviews.
Thanks.
= David
On Tue, Feb 9, 2010 at 2:32 PM, David Fogel wrote:
> The gdaladdo command is below (for a single overview):
>
> D:\SRTM\Central_North>
The gdaladdo command is below (for a single overview):
D:\SRTM\Central_North> "c:\Tamas\bin\gdal\apps\gdaladdo.exe" -r
nearest -ro --config BIGTIFF_OVERVIEW YES --config COMPRESS_OVERVIEW
LZW --config PREDICTOR 2 Central_North.tif 2
On Tue, Feb 9, 2010 at 2:26 PM, Even Rouault
wrote:
> And the
And the exact gdaladdo command line you're using? (I must be sure whether it builds
internal or external overviews.)
On Tuesday 09 February 2010 23:22:05, David Fogel wrote:
> Hi Even:
>
> The GeoTIFF file is described below (output from gdalinfo). The
> pre-compiled code is from Tamas' site:
> h
Hi Christopher,
And so I shall experiment with the GDAL_CACHEMAX setting. Perhaps I
can post back on that before the end of the day here (US West Coast;
PST);
Thanks.
= David
On Tue, Feb 9, 2010 at 2:13 PM, Christopher Schmidt
wrote:
> On Tue, Feb 09, 2010 at 01:57:26PM -0800, David Fogel wrot
Hi Even:
The GeoTIFF file is described below (output from gdalinfo). The
pre-compiled code is from Tamas' site:
http://vbkto.dyndns.org/sdk/
I am using the MSVC2008 version (scroll to the bottom of the page).
This ought to be v1.7.0 with some or all of the code for HFA files
folded into it.
Th
On Tue, Feb 09, 2010 at 01:57:26PM -0800, David Fogel wrote:
> Hi:
>
> In the process of moving data into GeoTIFF, I am re-creating pyramids
> (OVR files). The good news: It takes fewer than ten minutes to create
> an LZW Horizontal Differenced compressed 3.5 GB file on a Windows 7
> box using GD
Could you append the output of gdalinfo on the TIFF you make overviews on, and
the exact gdaladdo command you're using (mainly whether you use a particular resampling
algorithm)?
On Tuesday 09 February 2010 22:57:26, David Fogel wrote:
> Hi:
>
> In the process of moving data into GeoTIFF, I am re
Hi:
In the process of moving data into GeoTIFF, I am re-creating pyramids
(OVR files). The good news: It takes fewer than ten minutes to create
an LZW Horizontal Differenced compressed 3.5 GB file on a Windows 7
box using GDAL 1.7.0+ (via Tamas Szekeres' site; 64-bit precompiled).
The OVR files
On Thu, Jul 23, 2009 at 09:12:01PM -0400, Frank Warmerdam wrote:
>
> Enrico,
>
> I concur with Dave. The best place to get the latest libtiff4 with
> bigtiff support is CVS. Please note that the Aperio code is a distinct
> fork of libtiff and not supported or encouraged for use by the core
> li
bug but Even got my Python script
anyway.
Thanks a lot.
My best regards,
Ivan
> ---Original Message---
> From: Frank Warmerdam
> Subject: Re: [gdal-dev] Bigtiff question
> Sent: Mar 05 '09 17:07
>
> Lucena, Ivan wrote:
> > Yes, that runs a lot o
Even Rouault wrote:
What raster format would you suggest then?
I mean, based on those requirements:
- Multiband;
- Large files;
- Good performance *reading* the data in pixel space. Not the "band as
usual" ;)
How about rotating the axis so:
stored x axis = data time axis,
stored y axis = data
On Thursday 05 March 2009 20:45:49, Lucena, Ivan wrote:
> Even,
>
> > for the very poor performance when dealing with pixel interleaved GTiffs
> > with a large number of bands, I think you've hit ticket #2838 that has
> > been fixed 3 weeks ago in trunk and branches/1.6 (*). The perform
Even,
> for the very poor performance when dealing with pixel interleaved GTiffs with
> a large number of bands, I think you've hit ticket #2838 that has been fixed
> 3 weeks ago in trunk and branches/1.6 (*). The performance issue was about
> *reading* in such files, but sometimes when you wr
Ivan,
for the very poor performance when dealing with pixel interleaved GTiffs with
a large number of bands, I think you've hit ticket #2838, which was fixed
3 weeks ago in trunk and branches/1.6 (*). The performance issue was about
*reading* in such files, but sometimes when you write and t
Lucena, Ivan wrote:
Yes, that runs a lot of seeks to write just a few bytes here and there.
Ivan,
I would note that for pixel interleaved data, access is still a whole
strip/tile at a time which in your case likely means a whole scanline.
In no case does GDAL's GTiff driver seek along to updat
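If pixel-interleaved storage is not a hard requirement, one workaround is to let the GTiff driver store the bands separately at creation time, so an update touches only one band's blocks instead of whole pixel-interleaved scanlines; a sketch (file names are placeholders):

  gdal_translate -co INTERLEAVE=BAND -co TILED=YES -co BIGTIFF=YES input.tif output.tif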
Frank,
> > I am running a Python script that goes through a relatively large number of
> > single band raster files (320) and aggregates them into a big GeoTIFF (around
> > 7 GB), and I am facing three basic problems: *poor performance*, *wrong
> > results* and *loss of metadata*.
> >
> Ivan,
>
Lucena, Ivan wrote:
Hi there,
I am running a Python script that goes through a relatively large number of
single band raster files (320) and aggregates them into a big GeoTIFF (around
7 GB), and I am facing three basic problems: *poor performance*, *wrong
results* and *loss of metadata*.
I need that
Hi there,
I am running a Python script that goes through a relatively large number of
single band raster files (320) and aggregates them into a big GeoTIFF (around 7 GB),
and I am facing three basic problems: *poor performance*,
*wrong results* and *loss of metadata*.
I need that geotiff to be inter
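As a point of comparison, the same stacking can be sketched with the stock utilities (assuming a reasonably recent GDAL, inputs that share size and georeferencing, and placeholder file names):

  # put each input into its own band of a virtual stack, then materialize it
  gdalbuildvrt -separate stack.vrt band_*.tif
  gdal_translate -co BIGTIFF=YES -co TILED=YES stack.vrt stack.tif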
Jukka Rahkonen wrote:
Hi,
Is there a switch for telling gdaladdo to create external overview in BigTIFF
format? Would BigTIFF overviews even work? I tried to create external
overviews for such a big BigTIFF image that even jpeg compressed .ovr file
exceeded the 4 GB limit and creation failed t
Hi,
Is there a switch for telling gdaladdo to create external overviews in BigTIFF
format? Would BigTIFF overviews even work? I tried to create external
overviews for such a big BigTIFF image that even the jpeg-compressed .ovr file
exceeded the 4 GB limit, and creation therefore failed.
Regards,
-Ju
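The gdaladdo invocation quoted earlier in this list suggests the relevant switch is the BIGTIFF_OVERVIEW configuration option; a sketch for JPEG-compressed external overviews (file name and levels are placeholders):

  gdaladdo -ro --config BIGTIFF_OVERVIEW YES --config COMPRESS_OVERVIEW JPEG huge.tif 2 4 8 16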