Hi Even,
On Mon, 28 Nov 2022 at 16:14, Even Rouault
wrote:
> The Big reformat commit itself
> should be done with a dedicated github account, like we did with the
> "git mv gdal/* ." tree-reorganisation to avoid unfair commit statistics.
Git supports blame-ignore files [1], which make "git blame" skip the
listed revisions (via the blame.ignoreRevsFile config option or the
--ignore-revs-file flag).
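For reference, a minimal sketch of wiring that up (the SHA placeholder
stands for the actual reformat commit, which does not exist yet):

```shell
# Record the big-reformat commit so "git blame" skips it.
# GitHub's blame view also honours a .git-blame-ignore-revs file
# at the repository root.
echo "<sha-of-reformat-commit>" >> .git-blame-ignore-revs

# Per-clone: point git blame at the file so it is used automatically
git config blame.ignoreRevsFile .git-blame-ignore-revs
```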
Clive,
CPL_VSIL_USE_TEMP_FILE_FOR_RANDOM_WRITE is a configuration option /
environment variable, so you have to pass it with --config
CPL_VSIL_USE_TEMP_FILE_FOR_RANDOM_WRITE YES
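Since it is also an environment variable, it can equally be set from
Python before GDAL opens the dataset (a minimal sketch; the surrounding
GDAL calls are assumed):

```python
import os

# Set the GDAL configuration option as an environment variable so that
# random-write access on /vsis3/ goes through a local temporary file.
# This must be set before GDAL opens the output dataset.
os.environ["CPL_VSIL_USE_TEMP_FILE_FOR_RANDOM_WRITE"] = "YES"
```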
Even
On 29/11/2022 at 20:12, Clive Swan wrote:
Greetings,
I am trying to select band 1 and then use gdal_merge
Hi Marcin,
For writing the data to an S3 bucket, two options come to mind:
1) Write directly from GDAL, using the /vsis3/ virtual file system
2) Write the data locally from GDAL, and then copy to S3 using another
Python library (e.g. boto3, s3fs) or the AWS CLI
Option 2 is quite widely adopted.
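A minimal sketch of option 2 (bucket names and paths here are
hypothetical; assumes GDAL and the AWS CLI are installed and AWS
credentials are configured):

```shell
# 1) Write the output locally with GDAL (input read via /vsis3/)
gdal_translate -of COG /vsis3/my-input-bucket/input.tif /tmp/output.tif

# 2) Copy the local file to the destination bucket with the AWS CLI
aws s3 cp /tmp/output.tif s3://my-output-bucket/output.tif
```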
Greetings,
I am trying to select band 1 and then use gdal_merge to merge two tiff
files in an AWS bucket.
Getting odd errors.
gdal_merge.py -o
/vsis3/summer-outputs/3_dataupdated/rcp26-2020-coastal_flood-NA-false.tif
/vsis3/summer-outputs/3_data_ready_for_spectra/coastal-undefended-rcp26-2020.tif
I had some success with geolambda [1] a few years ago.
I used the S3 trigger [2].
[1] https://github.com/developmentseed/geolambda
[2] https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html
- Joe
---
Build the Metaverse w/ HDF.
On Tue, Nov 29, 2022 at 4:23 AM Marcin Niemyjski via gdal wrote:
Hello,
First of all, thank you for providing awesome open-source tools; my work
would not be possible without them ^^
I'm looking for an effective workflow to get data from an S3 bucket, then
generate a COG and VRT from it into another S3 bucket. Everything should
be working as containerized pyth