Re: [gdal-dev] gdal_merge
On 12/5/2022 6:31 AM, Clive Swan wrote: Greetings, I tried -ps 3600 7200, -ps 3600,7200, and -ps x=3600 y=7200, and just get errors. I don't see any option to select LZW or any compression?

The compression options are described on the GeoTIFF driver page; you need to add -co COMPRESS=LZW. Also consider DEFLATE: better compression and faster decompression, though slower to compress. You might want to look at the PREDICTOR creation option as well. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org https://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] odbc issue
A lighter-weight solution (in my view) would be to export the FileMaker data to .csv or Excel, then import that data into QGIS and save to a GeoPackage. On 9/17/2022 12:52 AM, Andreas Oxenstierna wrote: I would use this method https://community.claris.com/en/s/question/0D50H6h95LvSAI/quick-setup-for-filemaker-15-postgres-on-a-local-mac to transfer all FileMaker data to PostgreSQL, with the option to directly add geography locations as spatial geometries, 2D and 3D (PostGIS). Best Regards / Vänliga hälsningar Andreas Oxenstierna Senior Strategic Advisor
Re: [gdal-dev] How to deal with security related bug reports?
On 7/29/2021 3:29 PM, Even Rouault wrote: Fair point. I've added a commit with the following text: "However please refrain from publicly posting exploits with harmful consequences (data destruction, etc.). Only people with the GitHub handles from the [Project Steering Committee](https://gdal.org/community/index.html#project-steering-committee) (or people that they would explicitly allow) are allowed to ask you privately for such dangerous reproducers if that was needed."

That resolves my question. Thank you.
Re: [gdal-dev] How to deal with security related bug reports?
On 7/29/2021 11:20 AM, Even Rouault wrote: I've created https://github.com/OSGeo/gdal/pull/4152 with a SECURITY.md that largely uses Kurt's proposal. Even

I've read the SECURITY.md file, and maybe I'm running a little slow today, but I still don't understand how I would go about reporting a serious security bug and what will happen afterwards. Let's say I find a really serious vulnerability, something that might let me erase your file system, or perhaps run some code as root. It seems irresponsible to provide any level of detail about this in a public issue tracker beyond saying "contact me, I've found a major vulnerability". I realize this is a real problem for the development team, because you don't know whether I've really found something or I'm a troll out to waste your time. On the flip side, posting "the string xxx in a file read by driver yyy will allow me to do " in a public issue tracker is just asking for trouble. How am I supposed to proceed, and what response can I reasonably expect? On the plus side for a public issue tracker, if I'm a developer on a system that relies on gdal (e.g., QGIS), I can easily keep tabs on reported issues.
Re: [gdal-dev] Polygon operations
On 6/19/2021 2:52 PM, Andrew Bell wrote: The X and Y dimensions are assumed to lie on a plane. All intersection points are also assumed to lie on the same plane as the polygon. Z values are assigned after the fact. On Sat, Jun 19, 2021, 4:40 PM David Strip <qgis-u...@stripfamily.net> wrote: On 6/19/2021 1:34 PM, Andrew Bell wrote: These are done in 2D, without regard to the spatial reference.

This still doesn't answer the question about great circles. After some head-scratching and playing in QGIS, I realized that what Andrew is saying is that vertices are treated as Cartesian coordinates with lon/lat values. QGIS appears to always draw a straight line between any two vertices, regardless of the active projection. This leads to some unintuitive outcomes. Consider the map below in an Albers projection. The intersection of the two green lines is computed as the pink point. In EPSG:4326, the northern border of the US and the green line are coincident, and the intersection point lies on the visual intersection of the two lines. Densification of the line solves the problem, since each vertex is projected, creating the appearance of a curved line. And for a different use case, there is a Geodesic Densification plug-in to create great-circle lines between vertices.
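To make "vertices are treated as Cartesian coordinates" concrete, here is a small pure-Python sketch (my own illustration, not GDAL/GEOS code; the coordinates are made up) of what a planar intersection predicate does: it solves the 2D line equations on the raw lon/lat numbers, with no notion of great circles or projections.

```python
def planar_intersection(p1, p2, p3, p4):
    """Intersect segments p1-p2 and p3-p4 as plain 2D Cartesian lines,
    exactly as a planar geometry engine would treat lon/lat pairs."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:
        return None  # parallel (or collinear) in the plane
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

# A "straight" east-west line at 49 N crossed by a north-south line at
# 110 W: the planar answer lands exactly at (-110, 49), even though the
# great-circle path between the endpoints would bow away from 49 N.
print(planar_intersection((-120.0, 49.0), (-95.0, 49.0),
                          (-110.0, 45.0), (-110.0, 55.0)))
# -> (-110.0, 49.0)
```

Densifying the line, as described above, changes the answer only because it inserts projected vertices between the endpoints; the per-segment math stays planar.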
Re: [gdal-dev] Driver maintenance - long-term solution ?
Further evidence of the challenge of getting funds to support gdal from US gov't agencies: Someone at the USGS privately emailed me to confirm that like Sandia, they cannot support gdal through voluntary donations or in-kind contributions that are outside the scope of contracted project work. Over the years the USGS has contracted through open solicitations for specific projects that are returned to the community, but these contracts are driven by programmatic USGS requirements which may or may not have utility to the broader gdal community.
Re: [gdal-dev] Driver maintenance - long-term solution ?
I first encountered gdal while working at Sandia National Labs, one of the US DOE national labs. (This was over a decade ago - I'm now retired.) I have verified with colleagues still working there that although Sandia open-sources much of its current software and contributes source code to non-Sandia projects, there is no practical way that it can support a project with cash payments without violating its contract with the gov't. Further, it cannot contribute in-kind effort to a project unless those contributions are in direct support of the program that is funding the effort. The same rules almost certainly apply to any US gov't funded contractor. Perhaps someone at one of the US federal agencies that no doubt use gdal (e.g., NGA) can speak to whether the agency can make a direct voluntary contribution.
Re: [gdal-dev] Driver maintenance - long-term solution ?
On 1/13/2021 3:58 PM, Howard Butler wrote: License monkey business isn't viable in any way with GDAL. It would just create confusion and erode trust, which we can't get back if broken.

gdal wouldn't be the first project to change its license, though I really don't know enough about the consequences others have faced for doing so. Even the revered GPL is a moving target. If the alternative is a burned-out lead developer/maintainer and a dead project, that's not a desirable outcome either. I'm not sure I agree that changing the license would create confusion and erode trust. Assuming that we (whoever "we" are) actually have the legal right to change the license, let's play a hypothetical. The new license mandates fees from two classes of users:

1. Anyone incorporating gdal into a product that is a. not completely open source, and b. charges a license fee (perpetual or subscription), and c. has more than x active licenses (x = 500? 1000?)

2. Any for-profit organization utilizing gdal in-house for data analysis, conversion, on-line services, etc., in excess of y CPU hours per year (y = 1000? 5000?...)

3. Any organization that uses gdal indirectly through a free, open-source product (e.g., QGIS), or through a licensed product covered under 1) above, is exempt from 1) and 2).

(Standard caveat - I'm not a lawyer and I'm not proposing this as the actual language of the license. It is intended as a discussion of how we might describe firms that are obligated to pay a license fee. I have deliberately not suggested an actual fee. The number of licensees will be small, and I expect each license will be negotiated separately to suit the specific case.) I don't think I need to name names - you know who the big players are in categories 1 and 2. Only two in category 1 and none in category 2 stepped up with a large (relative to the ask) commitment in the previous barn raising.
By selecting appropriate values for x and y, the net result is that a very small number of large (and mostly extremely profitable) firms are covered by the paid license category, and they are easily identified. The value they derive from gdal far exceeds whatever might be asked of them to support one (or even several) full-time developers among themselves. Equally, the vast majority of users will have no question that they continue to operate in the free range. Given that this whole thing started with a suggestion that the only way to make users aware of deprecation of obsolete drivers was to make the drivers stop working, how many users will even be aware of a license change? For the very few companies right at the boundary, it's not as if the gdal license gods are going to audit whether they have x + 1 or x - 1 active licenses. At some point they will be big enough that either they will voluntarily recognize their obligation or someone will call them out on it. It's not as if gdal will have a legion of lawyers chasing after folks, any more than we currently chase those who fail to meet their obligations under the existing license.

The big organizations running 100,000,000s of CPU hours extracting information from imagery they're reading in COGs with GDAL need to be donating substantial resources into an organization that provides coordination. The last time I did a fund raise with gdalbarn.com I was called out for naming some of these organizations and expressing my disappointment that they couldn't find a way to participate or simply ignored the request. Maybe they will step forward this time around.

I don't see how another one-off barn-raising is a sustainable solution. I looked over the list, and none of the companies (and I'm sure it's plural) "running 100,000,000s of CPU hours" contributed last time around. Even if that were to change, how often do you want to go around with your beggar's bowl asking for alms?
Unless faced with a license that legally obligates them to contribute, history tells us they will not contribute anything near their fair share (or anything at all, for that matter).

Whether it is in a new foundation or an existing one like NumFOCUS, substantial resources need to be dumped into a pot, earmarked for supporting work that generates value for the project. Chasing new feature work to subsidize project maintenance activities is not sustainable in two directions: burnout for the maintainer and creeping feature-itis for the project.

Absolutely on the mark. Funding has to be provided directly for on-going maintenance.
Re: [gdal-dev] Driver maintenance - long-term solution ?
Kudos to Howard for his succinct summary of the situation and the call to action. While I have nowhere near his experience with open source, my experience with other volunteer organizations reveals a similar pattern. One person, or maybe a small number of people, carry the burden of keeping the organization running. This goes on for years until someone burns out. Sometimes new people step up before chaos sets in, but too often the organization begins a death spiral.

Open source broadly is facing something of a turning point, as commercial organizations have learned how to profit from open source but have not yet learned that they have to contribute to the commons. A particularly relevant example is the case of MongoDB, where cloud services were offering paid hosting while paying nothing to support the project. Gdal's situation strikes me as similar. Large commercial vendors are embedding gdal in their offerings, either directly in software delivered to users or as part of the infrastructure behind the services they provide. Some of these companies are very profitable and could well afford to pay their way. Unfortunately, it is often the case that the developer who is aware of this reliance on gdal is not in a position to convince his/her management to ante up for the "free" software.

What is the path forward? One path Howard suggests is establishing a foundation similar to that behind QGIS. Another alternative, probably far more controversial, is a license change. MongoDB created a license class directed at the cloud suppliers who were (morally) abusing the free license terms. Gdal could adopt a license that requires those bundling gdal into a commercial product or service to pay their way. As I said, this would no doubt be quite controversial. Then there's the case of "second-order" free riders. Gdal is critical technology underlying QGIS, another free, open-source project.
Should firms that contribute to the qgis foundation also contribute to gdal, or can they rely on the appropriate portion of their "dues" to be forwarded to gdal?
Re: [gdal-dev] Considering drivers removal ?
Bearing in mind that I use none of the drivers on Even's list, I find his suggestion and reasoning compelling. I especially agree with his comment that the only way to get anyone's attention is to break their workflow, if only temporarily. The main risk here is that a program that uses gdal (e.g., QGIS) might hide this from its users by setting the options in code. Of course, when it truly breaks, then that program will have to deal with unhappy users, so the burden will be on them, not on gdal devs (we can only hope).

As to the "cemetery", as Even calls it, this is in line with my thinking before I saw Even's message. Git maintains history, so anyone wanting an old driver can always revert to older versions (at the cost of losing new capabilities). I would suggest that instead of just noting this in the docs somewhere, an attempt to load a removed driver should result in a message that says "This driver was removed in GIT update " to make it easier to track down. A variation on this that would require a little more work is to replace each removed driver with a stub that prints an appropriate failure message. In most cases, this would be the same "this was removed" message, but in the rare case that someone else picks up maintenance of the driver, the message could be something like "This driver is independently maintained at ".
Re: [gdal-dev] Contour Line Thinning
Like Richard, I live and hike in a region with a substantial amount of steep, cliffy terrain, so the bunching of contour lines serves a useful purpose - "stay away". That said, the Swiss maps suggest a multi-step procedure. Assuming we have a DEM:

1. Compute the slope of the terrain.

2. Generate polygons corresponding to specific ranges of slope, say: Poly 1 - slope < 20%; Poly 2 - slope < 40%; and so on.

3. Generate contour layers: 100' contours; 50' contours that are not 100' contours; 10' contours that are not 50' or 100' contours.

4. Clip the 10' contour layer to the < 20% polygon, clip the 50' contour layer to the < 40% polygon, and so on.
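The selection logic implied by steps 3-4 can be sketched in plain Python (a toy illustration with made-up thresholds, not actual GDAL or QGIS calls): a contour survives thinning only if its interval is coarse enough for the local slope class.

```python
def keep_contour(elev_ft, slope_pct):
    """Decide whether a contour at this elevation survives thinning,
    given the local terrain slope in percent (toy thresholds)."""
    if elev_ft % 100 == 0:
        return True               # 100' contours are always kept
    if elev_ft % 50 == 0:
        return slope_pct < 40     # 50' contours only where slope < 40%
    if elev_ft % 10 == 0:
        return slope_pct < 20     # 10' contours only on gentle terrain
    return False

# On 30% terrain, only the 50' and 100' contours survive:
print([e for e in range(0, 201, 10) if keep_contour(e, 30)])
# -> [0, 50, 100, 150, 200]
```

In practice the same effect comes from the clip operations in step 4; the function above just makes the intended keep/drop rule explicit.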
Re: [gdal-dev] Slope obtained with GDAL has weird lines
On 3/17/2020 2:24 PM, Danilo da Rosa wrote: Do you think it would be a good idea to do some kind of interpolation to smooth the DEM file or the slope file? Do you have any recommendations on how to do that using gdal? The idea is to use the gdaldem color-relief command to generate a coloured and easy to read map. The problem is that these lines make the map more difficult to understand, which is a priority in this case.

As has already been pointed out, the root of your problem is that your DEM appears to have been created from contours with no interpolation, resulting in a series of terraces with steps between each level, giving the appearance that the terrain consists of flat areas bounded by 90° slopes. Interpolating the existing DEM would relieve this problem, but there is no reason to believe the interpolation represents the actual surface. You would be better off starting with a new DEM that accurately represents the region of interest. I used the SRTM downloader plug-in in QGIS to download a new DEM. I saved this DEM in a projected coordinate system to get away from the problems of dealing with degrees (an issue pointed out by others). I then used the slope tool (which actually just calls gdal) to create this slope image: The slope ranges from 0 (white) to about 30° (black). SRTM data is 1 arc-second, or about 30 meters, which is coarser than your previous data, but at least it's the actual surface. There are higher-resolution DEMs for much of the US on the US National Map. Maryland (where your patch appears to lie) is covered by 1/9 arc-second data, and parts are covered by 1 meter DEMs derived (I think) from LIDAR. You state your goal is a colored, easy to read map. I've seen articles about combining hillshade, elevation coloring, and slope, though I've more often seen just hillshade + elevation (the latter part being hypsometric tinting).
I think you will find it easier to achieve this goal in Qgis, which will allow you to experiment interactively with the various parameters in tinting, hill-shading, and combining the layers. For most of these operations Qgis uses gdal behind the curtain, so you can even see the gdal calls if you want to replicate the results from the command line.
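For readers curious about what the slope tool is doing underneath, here is a minimal central-difference slope calculation on a toy DEM. This is a bare-bones simplification of the same idea as gdaldem's Horn formula (which uses a weighted 3x3 window); the grid values and cell size here are made up for illustration.

```python
import math

def slope_degrees(dem, row, col, cellsize):
    """Slope at an interior cell from simple central differences.
    dem is a list of rows; cellsize is in the same units as the
    elevations (hence the advice above to use a projected CRS)."""
    dzdx = (dem[row][col + 1] - dem[row][col - 1]) / (2.0 * cellsize)
    dzdy = (dem[row + 1][col] - dem[row - 1][col]) / (2.0 * cellsize)
    return math.degrees(math.atan(math.hypot(dzdx, dzdy)))

# A plane rising 10 m per 30 m cell in x: slope is atan(1/3) ~ 18.4 deg.
dem = [[10 * c for c in range(3)] for _ in range(3)]
print(round(slope_degrees(dem, 1, 1, 30.0), 1))
# -> 18.4
```

Note that if the DEM were left in degrees of lon/lat, the rise/run ratio would mix meters with degrees, which is exactly the unit problem a projected coordinate system avoids.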
Re: [gdal-dev] .kml to .xlsx or .xls ( with Geometry Column Included)
My reading of the docs suggests this will only work for point geometries. However, there is a more general GEOMETRY=AS_WKT that should work for other geometries. On 11/27/2019 7:00 AM, Jeff McKenna wrote: > Another option is to convert from KML to CSV, which can be opened by > LibreOffice/Word etc. using the "GEOMETRY=AS_XY" switch, which > generates your X and Y columns magically: > > ogr2ogr -f CSV -lco GEOMETRY=AS_XY output.csv input.kml > > > -jeff
Re: [gdal-dev] "Banning" use of underflow/overflow with unsigned integer arithmetic ?
This strikes me as a really good idea. Even though the behavior is well-defined in C/C++, that doesn't mean it's desirable in all or even most cases. It's much easier to imagine use cases where overflow/underflow produces unexpected or inexplicable results than it is to think of ones where it produces a desirable result. Just my $.02 from a non-voting user of gdal and codes dependent on it.
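For anyone who hasn't been bitten by this, here is the behavior in question, simulated in Python by masking to 32 bits (my own illustration; in C/C++ the wraparound of unsigned types is well-defined by the standard, which is precisely why a compiler can't flag it as an error).

```python
MASK32 = 0xFFFFFFFF  # simulate a 32-bit unsigned int

def u32_sub(a, b):
    """a - b with C-style unsigned 32-bit wraparound."""
    return (a - b) & MASK32

# Perfectly legal C, rarely what anyone meant:
print(u32_sub(0, 1))    # -> 4294967295
print(u32_sub(10, 20))  # -> 4294967286
```

A size or index computation that silently wraps to ~4 billion is exactly the kind of "well-defined but inexplicable" result the proposal would ban.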
Re: [gdal-dev] Getting actual path to a layer
The docs say that the return value of GetName() should be sufficient to open the data source if passed to the same OGRSFDriver that this data source was opened with, but it need not be exactly the same string that was used to open the data source. Looking at your code snippet, I'm guessing that /home/nyall/dev/QGIS/tests is the current working directory and the return value is relative to that path, hence just the bare filename. I'd suggest testing with a file not located in the working directory and seeing if you get a full path (or a path relative to the working directory). Unfortunately the laptop I'm mailing from doesn't have GDAL/OGR installed, so I can't test this myself. David

On 6/10/2019 10:17 PM, Nyall Dawson wrote: Hi list, If I have a bunch of, say, shapefiles in a folder, OGR is happy to let me access these as a dataset, e.g.

da = r"/home/nyall/dev/QGIS/tests/testdata"
driver = ogr.GetDriverByName('ESRI Shapefile')
dataSource = driver.Open(da, 0)
for i in dataSource:
    print(i.GetName())

My question is -- GetName() here returns just the file base name (i.e. "lines" for "/path/lines.shp"). Is there any way to retrieve the actual layer file path for these? Nyall
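Until someone confirms an OGR call that exposes the underlying file path, one workaround for the shapefile-directory case in Nyall's snippet is simply to rebuild the path from the datasource name plus the layer name. This is a sketch under that assumption (the helper name is mine, and other drivers name their files differently):

```python
import posixpath

def shapefile_path(datasource_dir, layer_name):
    """Reconstruct the .shp path for a layer in a directory datasource.
    Only valid for the ESRI Shapefile directory case, where each layer
    corresponds to <layer_name>.shp inside the opened directory."""
    return posixpath.join(datasource_dir, layer_name + ".shp")

print(shapefile_path("/home/nyall/dev/QGIS/tests/testdata", "lines"))
# -> /home/nyall/dev/QGIS/tests/testdata/lines.shp
```

The datasource directory is the string you passed to driver.Open(), so no extra API is needed; the fragile part is the per-driver file-naming assumption.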
[gdal-dev] Code example(s?) lost in migration to Sphinx
Sorry if this has already been addressed - on the doc page for GDALRasterBand, method ReadBlock, the documentation states "The following code would efficiently compute a histogram of eight bit raster data. Note that the final block may be partial … data beyond the edge of the underlying raster band in these edge blocks is of an undetermined value." but the code is missing. Looking at the old docs, there is indeed a code example. Given the automated nature of the move to Sphinx, I wonder if this problem is occurring elsewhere in the docs. Code examples seem to be missing in GDALDataset::CreateLayer and GDALDataset::BuildOverviews, among other places. David
[gdal-dev] What is the correct way to read pixel data from a GDALRasterBand in C++?
The docs have moved (but apparently Google hasn't figured that out yet?) - GDALRasterBand is now here. Anyway, as you've figured out, the GDALRasterBand object is the heart of the matter. You have two ways to read the data: either the RasterIO method or the ReadBlock method. If your dataset is small enough to read into available memory, RasterIO is probably easier to use. ReadBlock can be more efficient, but your code has to handle the offsets for the origin of each block. The code example said to be in ReadBlock seems to have disappeared in the migration of the docs. Here's what it contained:

The following code would efficiently compute a histogram of eight bit raster data. Note that the final block may be partial ... data beyond the edge of the underlying raster band in these edge blocks is of an undetermined value.

    CPLErr GetHistogram( GDALRasterBand *poBand, int *panHistogram )
    {
        int   nXBlocks, nYBlocks, nXBlockSize, nYBlockSize;
        int   iXBlock, iYBlock;
        GByte *pabyData;

        memset( panHistogram, 0, sizeof(int) * 256 );
        CPLAssert( poBand->GetRasterDataType() == GDT_Byte );

        poBand->GetBlockSize( &nXBlockSize, &nYBlockSize );
        nXBlocks = (poBand->GetXSize() + nXBlockSize - 1) / nXBlockSize;
        nYBlocks = (poBand->GetYSize() + nYBlockSize - 1) / nYBlockSize;
        pabyData = (GByte *) CPLMalloc(nXBlockSize * nYBlockSize);

        for( iYBlock = 0; iYBlock < nYBlocks; iYBlock++ )
        {
            for( iXBlock = 0; iXBlock < nXBlocks; iXBlock++ )
            {
                int nXValid, nYValid;

                poBand->ReadBlock( iXBlock, iYBlock, pabyData );

                // Compute the portion of the block that is valid
                // for partial edge blocks.
                if( (iXBlock+1) * nXBlockSize > poBand->GetXSize() )
                    nXValid = poBand->GetXSize() - iXBlock * nXBlockSize;
                else
                    nXValid = nXBlockSize;

                if( (iYBlock+1) * nYBlockSize > poBand->GetYSize() )
                    nYValid = poBand->GetYSize() - iYBlock * nYBlockSize;
                else
                    nYValid = nYBlockSize;

                // Collect the histogram counts.
                for( int iY = 0; iY < nYValid; iY++ )
                {
                    for( int iX = 0; iX < nXValid; iX++ )
                    {
                        panHistogram[pabyData[iX + iY * nXBlockSize]] += 1;
                    }
                }
            }
        }

        CPLFree( pabyData );  // not in the original example, but needed
        return CE_None;       // likewise
    }
Re: [gdal-dev] How to read all metadata from GeoTIFF file?
If you're willing to use command line tools, there is a pair of tools that ship with libgeotiff for extracting metadata from a geotiff and importing into a tiff to make it a geotiff. Given a GeoTIFF file named original.tif, and a modified file (modified.tif) without the GeoTIFF tags, but still the same size and region: listgeo -no_norm original.tif > original.geo geotifcp -g original.geo modified.tif modified_geotiff.tif
Re: [gdal-dev] Removing layers from GeoSpatial PDF with ogr2ogr?
On 4/26/2018 3:33 PM, Tobias Wendorff wrote: > On Wed, 25.04.2018, 21:02, Even Rouault wrote: >> This is expected. When doing ogr2ogr you run into limitations >> of the read side and write side of the PDF driver, and running >> through OGR abstraction in the middle, so loss is expected in >> the case of PDF. > Even experienced PDF-orientated applications, like Ghostscript, > have problems with PDF layers. > > I had a similar problem in the past. I ended up unpacking the PDF > to an editable text-only format and removed the objects manually. > > Maybe you can find someone with Acrobat PDF (the editor one), who > can fix the files for you?

I'm working with the "new" USGS topo maps, which are distributed in geospatial PDF format. I'm trying to remove the orthoimage layer. There is a hack using Ghostscript to remove all bitmaps that works for this special case.
[gdal-dev] Removing layers from GeoSpatial PDF with ogr2ogr?
I'm trying to remove a layer from a geospatial pdf (specifically the orthoimage layer in USGS topos). ogrinfo reports 26 layers in the meta-data report, but only 12 layers with vector features. When I try to remove the image layer with this command ogr2ogr -f "PDF" map.pdf NM_Canada_Ojitos_20110201_TM_geo_enabled.pdf --config GDAL_PDF_LAYERS_OFF "Images.Orthoimage" the output file is missing the layers with the text labels. In addition, the styling of at least some of the layers is different. For example, the contours in the input have two different line weights, but the output is a single line weight. Are these differences a limitation of the driver, or do I need to add more parameters/options? gdal_translate creates an accurate representation in a pdf file, but the output is a bitmap within a pdf, not a layered vector file. ghostscript can be used to remove the image and maintain a vector file, but the layers are gone. And of course, the proprietary Adobe Acrobat can do the task.
Re: [gdal-dev] GDAL and cloud storage
Users with large storage needs and tight budgets might want to look into B2 from Backblaze. It's significantly cheaper than S3. The structure (buckets and files) is similar to S3, as is the API, so implementing access in GDAL is probably pretty straightforward starting from the S3 implementation. I don't use cloud storage for geo-data, so I won't be pursuing this route, but I do use Backblaze for backup and have been extremely satisfied. Full disclosure: my connection to Backblaze is pretty minor - one of the founders is a friend of a friend (i.e., almost no connection at all).
Re: [gdal-dev] RFC68: C++11 compilation mode - Call for vote on adoption
On 9/7/2017 9:59 AM, Joaquim Luis wrote: And since more people are probably confused as well: I find 16 copies of api-ms-win-crt-runtime-l1-1-0.dll on my machine, which has an updated Win10 + VS compilers. Among them, those installed by Firefox, TortoiseSVN, MikTex, VScode, OneDrive and others. Curiously I have none in the Windows directory, but I looked at another Win 10 computer that has no compilers installed, and found them only under System32, SysWOW64 and a Windows Avast subdir. Well, the usual Windows mess.

For a while now, the Windows DLL search order starts with the directory that the app was loaded from. This was in response to the "DLL hell" phenomenon in earlier editions, which arose when an app overrode a DLL version expected by a previously installed app. So, at the expense of extra space, it is now common to distribute the required DLLs with an app to control the version the app uses. The OS will not reload a DLL if one with the same name is already loaded, so someone who foolishly or maliciously redistributes a misnamed DLL can still screw you up.
Re: [gdal-dev] Minimum supported C and C++ standards
On 5/7/2016 11:10 AM, Kurt Schwehr wrote: This is why starting with zero features and working our way up with a white list gives examples of correct usage. It looks like a lot of GDAL development happens by copy-paste-tweak, so good examples are key. And for every issue, we have solutions that are valid C/C++03/C++11/C++14... the best solution is not necessarily in any particular one.

Amen.

auto poInstance = std::make_unique<GDALMyBigObj>(arg1, arg2, arg3);

This example is more powerful than just the elimination of opportunities for memory leaks. Kurt has snuck in the use of the GDALMyBigObj constructor, which makes the initialization of the object more transparent (and in fact ensures that it occurs). And if I correctly understand std::unique_ptr, by making poInstance const, we can go further and guarantee that this pointer isn't assigned to another pointer, so that the object will be destroyed at the end of the current scope. (If the pointer is moved to another std::unique_ptr, the scope of that pointer will control when the object is destroyed.) If the use of std::make_unique doesn't make it into the whitelist of features, we can still achieve a similar goal by using GDALMyBigObj directly. Recalling Kurt's concern about growing stack size, we don't want to declare a GDALMyBigObj in the local scope on the stack. But we can resolve this contradiction by allocating the buffer, or whatever the space hog is, within the constructor for GDALMyBigObj. If inheritance is deemed to be an encouraged style, we can even define a GDALBigObj class that encapsulates whatever is the current best practice for allocating large chunks of space, and GDALMyBigObj just inherits from that. In this case we can use whatever ugly C++ construct we want to allocate the space, since very few devs will ever have to look at the guts of this base class.
As an added benefit, if in the future there is a better way to allocate space, we just change the GDALBigObj class and the effect propagates through the inheritance. (Of course, depending on the use, inheriting from GDALBigObj might not be conceptually right, in which case we could just have a GDALBigObj object as a member var of GDALMyBigObj.)
Re: [gdal-dev] Minimum supported C and C++ standards
Even raises an important point about adopting the latest C++ standards. This point actually applies to C++ in general as well. In particular, C++ can be used to write some very powerful but tremendously opaque code. This problem is amplified by the inscrutable error messages that are produced when the error is buried in the definition or instantiation of a class. Having spent far too much time trying to read other people's code, I've come to value readability as the most important attribute of code (after correctness, of course). Even's example comparing

std::unique_ptr<GByte, decltype(&CPLFree)> Vals((GByte*)CPLCalloc(256, 1), CPLFree);

to

std::vector<GByte> Vals(256, 0);

shows clearly that new features do not necessarily create greater clarity in the code. Just because a newer standard enables new capabilities doesn't mean we have to adopt them all. Lambda expressions are an example of a capability I would discourage for reasons of readability. I have no problem when lambda functions are trivial:

[](int a, int b) { return a + b; }

This is the sort of code you wouldn't comment outside a lambda expression anyway. But when you start getting complex lambda expressions that are hard to comprehend (think of the sort of crazy stuff you see with Python comprehensions (so ironically named)), I consider it a premature optimization to use a lambda expression in place of a "normal" function that is well commented. Modern compilers can be extremely effective at optimizing stuff like this. And if this does become a bottleneck, then use a well-documented lambda expression if that's what it takes. Adoption of a newer standard is not the same thing as a wholesale endorsement of features that will make the code harder to read and maintain. This is where the issue of coding standards comes in. We use the coding standards to encourage those newly enabled forms we see as important and discourage/ban those that would damage readability (or correctness or efficiency).
___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] Starting a discussion on style and coding guidelines
On 5/5/2016 9:00 AM, Kurt Schwehr wrote: Thanks! I've integrated your derived class in the alternates section and Even's response about commenting on resize into the drawbacks line Can you provide (on the list would be best) a bit more on why the vector suggestion makes it less tempting to do pointer tricks? First I want to make clear that use of std::vector doesn't prevent using pointer tricks. At least for me (and many others I've worked with) the fact that something has been defined as a std::vector creates a mental barrier to using an interface other than that declared in std::vector. To a large degree the strength of this barrier will reflect the programmer's feelings towards object-orientation and all that goes with it. If you're an old-school C programmer who thinks all this new C++ "O-O crap" is for sissies, then the first thing you'll do is grab the address of element zero and use all the nasty, obscure coding style you've always used. (Are my prejudices showing? And for what it's worth, I certainly qualify as old school by virtue of age, but like to think I've learned something along the way.) If, on the other hand, you actually program in C++ (as opposed to using the C++ compiler on C code), you'll look at the object as a vector and use the interface. Here's what I mean by "pointer tricks" - imagine you have to do some operations on the diagonal of a square matrix. Consider the readability of the two snippets:

// I'm assuming we defined our array as a vector of vectors.
for (int i = 0; i < matrix_size; ++i) { do_something(the_matrix[i][i]); }

vs.

// Since I would never do this myself, a "real C programmer" would almost certainly write this differently, esp. the test for the end of the loop, but you get the idea.
for (int *i = the_matrix; i < the_matrix + matrix_size * matrix_size; i += matrix_size + 1) { do_something(*i); }

Naming the loop variable diag_element would help, but in the first example it's clear that do_something() is operating on a diagonal element. In the second example it's not clear what you're operating on at all. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] Starting a discussion on style and coding guidelines
On 5/4/2016 4:30 PM, Kurt Schwehr wrote: Drawbacks: 1. It is possible to change the size of the vector later on in the code. 2. Vector has some storage overhead and bookkeeping that has to be done (but often the compiler can probably optimize away most of that). TODO: References that explain this? 3. Resizing the array could break anything that was using a C-style array pointer to the vector’s data. Drawbacks one and three can be eliminated by deriving a class from vector that hides resize, so there really is only the single drawback of storage overhead and bookkeeping, which are often minor. Another benefit of the proposed approach is that it makes it less tempting to use inscrutable pointer tricks for referencing array entries. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] gdal_retile not generating alpha band
This sounds more like a problem with a missing NODATA value rather than transparency/alpha. As far as I can tell from the docs, there is no means to specify the NODATA value in the gdal_retile command, including the -co options. If the only place that true black occurs is in the regions with no input, you could use gdal_edit.py to add a NODATA value. On 5/3/2015 11:23 PM, Tom Wardrop wrote: Hi all, I retiled a whole lot of imagery over the weekend using the following gdal_retile command: gdal_retile.py -v -r bilinear -levels 8 -ps 2048 2048 -s_srs EPSG:28355 -co "TILED=YES" -co "ALPHA=YES" -targetDir ~/output ~/input/*.tif The result was good until I realised all the areas that should be transparent were instead black. It seems the problem is the source imagery didn’t have an alpha channel; it didn’t need one as there was no transparent data to represent. Because the source imagery didn’t fill the entire extent, gdal_retile had to create loads of transparent and partially-transparent tiles, which of course wound up black. I was under the impression that “-co ALPHA=YES” would instruct gdal_retile to generate an alpha channel for all the generated tiles. Is this a bug? If not, how do I otherwise instruct gdal_retile to generate an alpha channel when required? I’m working around this by using gdal_translate to re-save the source imagery with an (empty) alpha channel, so gdal_retile now outputs transparent tiles. Any feedback is appreciated. Cheers, Tom ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] GDAL slow to write GeoTIFF?
On 4/27/2015 9:11 AM, Even Rouault wrote: > Le lundi 27 avril 2015 16:55:24, jramm a écrit : >> > I'm writing a custom processing program using GDAL in C. >> > >> > I'm processing a raster of roughly 150 000 * 200 000 pixels in windows of >> > 256 * 256 pixels. > Is the TIFF tiled ? If it is not, this should help. And/or you could perhaps > try increasing GDAL_CACHEMAX too. > But for a given organization (tiled vs non-tiled), I'm also surprised you > would get such a difference between read and write scenarios. > Could the compression mode of the TIFF have any impact here? What compression mode are you using? ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
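Back-of-envelope arithmetic behind Even's "Is the TIFF tiled?" question (the raster dimensions come from the post; the strip layout assumed here, one row per strip with whole strips as the I/O unit, is an illustration of mine):

```python
# Pixels that must be read to fill one 256x256 processing window from a
# strip-organized TIFF vs. a tiled one. Assumes one row per strip and that
# whole strips/tiles are read at a time (typical libtiff behavior).

width, height = 150_000, 200_000      # raster size from the post
win = 256                             # processing window size

strip_pixels = width * win            # 256 full-width strips touched
tile_pixels = win * win               # one aligned 256x256 tile touched

print(strip_pixels // tile_pixels)    # how many times more data the
                                      # strip layout forces you to read
```

Under these assumptions the strip layout reads several hundred times more data per window, which is why tiling (and a larger GDAL_CACHEMAX) matters so much for windowed access.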
Re: [gdal-dev] reprojection issue
This result is not entirely unexpected, though it depends on the two projections. A square box in one projection will map onto something entirely different in many cases, often introducing skew which will change the aspect ratio and size of the resulting raster. If you view your reprojected dataset you will almost certainly find a skewed region surrounded by no data values. On 3/2/2015 9:01 AM, Robert Oehler wrote: Hi, I’m developing a raster library using GDAL. I use the Java bindings. I would like to do a reprojection of an image which I have in memory. I’m using an in-memory driver to create a dataset. I fill it with the properties of the image and its data. Now I’m doing Dataset warped = gdal.AutoCreateWarpedVRT(ds, src_proj.ExportToWkt(), dst_proj.ExportToWkt()); but the resulting raster is very different from the original. (720,720 --> 635,810) while the resulting bounding box is correct (compared to my own reprojection implementation) What am I doing wrong? I’ve seen gdal.reprojectImage(..) What is the difference? Is it preferable in my case ? Thank you Robert Oehler ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
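A toy affine illustration of the point above (the shear coefficients are invented purely to mimic a 720x720 raster landing in roughly 635x810 bounds; they are not the poster's actual projections): reprojection acts locally like an affine map, and the output grid must cover the warped footprint's axis-aligned bounds.

```python
# Hypothetical affine map x' = a*x + b*y, y' = c*x + d*y applied to the
# corners of a w x h box; the new axis-aligned bounds set the output size.

def warped_bounds(w, h, affine):
    """Axis-aligned bounding box of a w x h box under an affine map."""
    a, b, c, d = affine
    corners = [(0, 0), (w, 0), (0, h), (w, h)]
    xs = [a * x + b * y for x, y in corners]
    ys = [c * x + d * y for x, y in corners]
    return max(xs) - min(xs), max(ys) - min(ys)

print(warped_bounds(720, 720, (1, 0, 0, 1)))             # identity: unchanged
print(warped_bounds(720, 720, (0.88, 0.0, 0.2, 0.925)))  # shear: new shape
```

With the (made-up) shear, the square becomes roughly 634 x 810, so a changed raster size alongside a correct bounding box is exactly what AutoCreateWarpedVRT should produce.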
Re: [gdal-dev] Fwd: [SoC] Preparation of ideas pages for GSoC 2015 (deadline: 18th February!)
A few thoughts for the list: 1. A more robust algorithm for determining EPSG from WKT. 2. This list recently had an exchange regarding the use of gdalpolygonize (and the underlying algorithm) on country-scale data. The algorithm works well on the sort of data that is typical of ground cover classification or other raster sources that have a large number of small (compared to the image) regions. It doesn't work well on images that have a small number of very large regions. Perhaps development of an algorithm that is more suited for this case might be of interest. 3. Adding a polygon simplification algorithm to gdal/ogr. This would be especially useful on the output of gdalpolygonize. Not clear who could mentor these. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] new to gdal, converting projection, need some help
While you've shown us your output is in unprojected (lon/lat) WGS84 coordinates, you haven't told us about your input - only that its datum is NAD83. If the inputs are projected, you should expect to see skewing, since a rectangular area in projected coordinates in general does not map to a rectangle in lon/lat. As to size - you took 300 tiles each about 50-75M and made a single file. Your input is in the 20GB range, so 30GB is not out of hand. Are your input tiles compressed? Your output is NOT compressed, so that would go a long way towards explaining things. On 2/12/2015 5:58 AM, Andrew Simpson wrote: I'm new to learning GDAL (mapping in general). I have a bunch of tif files (300+) that are in NAD83, but my mapping application only uses WGS84. The tiles are all around 50-75Mb each I'm able to use gdalwarp to convert the projection properly to WGS84, but the images (tiles) come out skewed. I tried using gdalbuildvrt first to create the vrt, then gdalwarp with the following options: -co TILED=YES -co BIGTIFF=YES -of GTIFF -t_srs "+proj=longlat +datum=WGS84 +no_defs" input.vrt output.vrt Ok, so this mostly works, but I have 2 questions/issues: 1. what is the best way to convert the projection and eliminate the skewing? 2. The single .tif produced is now >30GB! significantly larger. why does this occur and how can I keep this file size down? Thanks for any info! Andrew Simpson ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
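The "not out of hand" size claim above is simple arithmetic. A quick sketch (the tile grid below is made up for illustration; the poster only gave a tile count and per-tile size range):

```python
# An uncompressed raster costs width * height * bands * bytes_per_sample
# bytes, no matter how small the compressed input tiles were.

def raw_size_gb(width, height, bands=3, bytes_per_sample=1):
    return width * height * bands * bytes_per_sample / 1024**3

# e.g. ~300 tiles arranged as a hypothetical 20 x 15 grid of 6000x6000-pixel
# 3-band byte tiles:
size = raw_size_gb(6000 * 20, 6000 * 15)
print(round(size, 1))   # about 30 GB uncompressed
```

Adding `-co COMPRESS=DEFLATE` (or LZW) to the gdalwarp command would bring that back down toward the input's compressed footprint.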
Re: [gdal-dev] gdal_polygonize.py TIF to JSON performance
On 1/13/2015 2:37 AM, Graeme B. Bell wrote: > Whenever you deal with national scale data for any country with coastline, > you frequently end up with an absolutely gigantic and horrifically complex > single polygon which depicts the coastline and all the rivers throughout the > country as a single continuous edge. This mega-polygon, so often present and > so often necessary, is very time-consuming for gdal_polygonise to produce and > the result is very painful for every GIS geometry package to handle. > > It would be great if the people behind gdal_polygonise could put some thought > into this extremely common situation for anyone working with country or > continent scale rasters to make sure that it is handled well. It has > certainly affected us a great deal when working with data at up to 2m > resolution for a country larger than the UK... This second use case is a very bad mismatch to the design of the polygonize algorithm, as you have discovered. If this is indeed a common use case (I have no basis to judge one way or the other), a very different algorithm would be far more appropriate. In many respects, the desired algorithm would also be easier to implement if the nature of the data is basically binary - in the country/region/polygon, or out. Matters get more complicated if we allow holes, disconnected regions, or have multiple regions to identify (but a small number). Perhaps one of the core project members could describe how to build a case to support this need and submit an enhancement request. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] gdal_polygonize.py TIF to JSON performance
I ran a test case on my Windows 7 laptop (i7, quad core (not that it matters), 2.4 GHz, 8G RAM). Input file was geotiff, 29847x33432, paletted 8-bit, 11 landcover classes. This dataset covers the city limits of Philadelphia, PA, so the polygon representing the Delaware River runs approximately from the lower left to the upper right corner, creating the same pathological case as the Thames in your dataset. However, unlike your dataset, there are relatively few other very large features, as would be the flood level contours in your data set. gdal_polygonize to shapefile took ~9 hours and the output files sum to 1.22GB, 1.13M polygonal features. gdal_polygonize to geojson took ~15 hours and the output file is just over 2GB. Not sure how useful this is in comparing your dataset that started this thread, but at least it's some point of comparison. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] gdal_polygonize.py TIF to JSON performance
Your team writes that the image is usually exported as a vector file, eg shapefile. Can they do this successfully for the 1.4GB image? If so, have you tried just converting the shapefile to geojson? Might be the simplest solution. If that doesn't work, you could try tiling, as you mention. As Even has already noted, the challenge to threading the code is rejoining the polygons at the boundaries. It's not an overwhelming problem, but it is a challenge and requires buffering the output rather than streaming it. You could do a poor-man's version of multi-threading:

1. Tile your input image. I would probably try something bigger than the 1024x1024 that you mention. Perhaps 4K x 4K, maybe 8K x 8K. Overlap the tiles by a pixel or two on all edges. For the initial experiment just a couple of adjacent tiles are sufficient.
2. Feed each tile to gdal_polygonize in as many processes as you have available processors.
3. Take the resulting polygon files and merge them into a single shapefile (or other equivalent format). You can do this with ogr2ogr or in qgis.
4. Dissolve using the classification value.
5. Split multipart polygons to single polygons.

I don't know anything about how the dissolve algorithm is written, so I can't predict its performance and how it will scale with image size and number of tiles. However, if it takes advantage of spatial indices, it could scale fairly well unless you have shapes (like roads) that tend to stretch from one tile boundary to the next. On 1/12/2015 3:07 AM, chris snow wrote: > Hi David, > > Thanks for your response. I have a little more information since > feeding your response to the project team: > > "The tif file is around 1.4GB as you noted and the data is similar to > that of the result of an image classification where each pixel value > is in a range between (say) 1-5. After a classification this image is > usually exported as a vector file (EVF or Shapefile) but in this case > we want to use geojson.
This has taken both Mark and myself weeks to > complete with gdal_polygonize as you noted. > > I think an obvious way to speed this up would be threading by breaking > the tiff file in tiles (say 1024x1024) and spreading these over the > available cores, then there would need to be a way to dissolve the > tile boundaries to complete the polygons as we would not want obvious > tile lines." > > Does this help? > > ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
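Step 1 of the workflow above can be sketched in a few lines of plain Python (tile size per the suggestion in the post, overlap of two pixels; the field names are my own). Each window could then be cut out with `gdal_translate -srcwin xoff yoff xsize ysize` and polygonized in its own process:

```python
# Compute overlapping tile windows for a width x height raster. The overlap
# ensures polygons touching a tile edge can later be dissolved across tiles.

def tile_windows(width, height, tile=4096, overlap=2):
    """Return (xoff, yoff, xsize, ysize) windows overlapping by `overlap` px."""
    windows = []
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            x0, y0 = max(0, x - overlap), max(0, y - overlap)
            x1 = min(width, x + tile + overlap)
            y1 = min(height, y + tile + overlap)
            windows.append((x0, y0, x1 - x0, y1 - y0))
    return windows

wins = tile_windows(10000, 6000)
print(len(wins))    # 3 columns x 2 rows of tiles
print(wins[0])      # edge tile: overlap only on its interior sides
```

Feeding these windows to a `multiprocessing.Pool` that shells out to gdal_polygonize is the "as many processes as you have processors" part of step 2.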
Re: [gdal-dev] gdal_polygonize.py TIF to JSON performance
I'm surprised at your colleague's experience. We've run some polygonize on large images and have never had this problem. The g2.2xlarge instance is overkill in the sense that the code is not multi-threaded, so the extra CPUs don't help. Also, as you have already determined, the image is read in small chunks, so you don't need large buffers for the image. But two weeks make no sense. In fact, your run shows that the job reaches 5% completion in a couple of hours. The reason for so many reads (though 2.3 seconds out of "a few hours" is negligible overhead) is that the algorithm operates on a pair of adjacent raster lines at a time. This allows processing of extremely large images with very modest memory requirements. It's been a while since I've looked at the code, but from my recollection, the algorithm should scale approximately linearly in the number of pixels and polygons in the image. Far more important to the run-time is the nature of the image itself. If the input is something like a satellite photo, your output can be orders of magnitude larger than the input image, as you can get a polygon for nearly every pixel. If the output format is a verbose format like KML or JSON, the number of bytes to describe each pixel is large. How big was the output in your colleague's run? The algorithm runs in two passes. If I'm reading the code right, the progress indicator is designed to show 10% at the end of the first pass. You will have a better estimate of the run-time on your VM by noting the elapsed time to 10%, then the elapsed time from 10% to 20%. Also, tell us more about the image. Is it a continuous scale raster - eg, a photo? One way to significantly reduce the output size (and hence runtime), as well as to get a more meaningful output in most cases, is to posterize the image into a small number of colors/tones. Then run a filter to remove isolated pixels or small groups of pixels. Polygonize run on this pre-processed image should perform better. 
Bear in mind that the algorithm is such that the first pass will be very similar in run-time for the unprocessed and pre-processed image. However, the second pass is more sensitive to the number of polygons and should improve for the posterized image. Hopefully Frank will weigh in where I've gotten it wrong or missed something. On 1/11/2015 10:11 AM, chris snow wrote: I have been informed by a colleague attempting to convert a 1.4GB TIF file using gdal_polygonize.py on a g2.2xlarge Amazon instance (8 vCPU, 15GB RAM) that the processing took over 2 weeks running constantly. I have also been told that the same conversion using commercial tooling was completed in a few minutes. As a result, I'm currently investigating to see if there is an opportunity for improving the performance of the gdal_polygonize.py TIF to JSON conversion. I have run a strace while attempting the same conversion, but stopped after a few hours (the gdal_polygonize.py status indicator was showing between 5% and 7.5% complete). The strace results are:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 99.40    2.348443           9    252474           read
   ...
  0.00    0.00               0         1           set_robust_list
------ ----------- ----------- --------- --------- ----------------
100.00    2.362624               256268       459   total

FYI - I performed my test inside a vagrant virtualbox guest with 30GB memory and 8 CPUs assigned to the guest. It appears that the input TIF file is read in small pieces at a time. I have shared the results here in case anyone else is looking at optimising the performance of the conversion or already has ideas where the code can be optimised. Best regards, Chris ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
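The two-scanline behavior described above (only a pair of adjacent raster lines in memory, hence the many small reads) can be sketched in plain Python. This is an illustration of the memory pattern, not the actual GDALPolygonize code: a streaming pass that labels 4-connected regions of equal value while keeping only the previous row's labels plus a union-find of label equivalences.

```python
# Count connected regions of equal pixel value, streaming row by row.
# Memory use is proportional to the row width and the number of labels,
# never to the whole image - the property the post attributes to polygonize.

def count_regions(rows):
    parent = {}                            # union-find over provisional labels

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    prev_labels, prev_row = [], []
    next_label = 0
    for row in rows:                       # only two rows "in memory" at once
        labels = []
        for x, v in enumerate(row):
            label = None
            if x > 0 and row[x - 1] == v:          # merge with left neighbor
                label = labels[x - 1]
            if prev_row and prev_row[x] == v:      # merge with upper neighbor
                if label is None:
                    label = prev_labels[x]
                else:
                    union(label, prev_labels[x])
            if label is None:                      # start a new region
                label = next_label
                parent[label] = label
                next_label += 1
            labels.append(label)
        prev_labels, prev_row = labels, row
    return len({find(l) for l in parent})

grid = [
    [1, 1, 2],
    [2, 1, 2],
    [2, 2, 2],
]
print(count_regions(grid))   # 2: one region of 1s, one connected region of 2s
```

The second pass in the real code, building polygon rings for each merged region, is what blows up on a single country-sized mega-polygon, which is consistent with the pathological cases discussed earlier in this thread.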
Re: [gdal-dev] shp - inconsistent extent
The linked file opens without warning or error in ArcGIS 10.2.2 and qgis 2.0.1 for me. Windows 7. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] Aligning Raster Over the Polygon
This is a pretty straightforward task in qgis. Just load the image and treat the four corners as ground control points, using the corresponding corners of the polygon. There's a tutorial here. On 11/18/2014 11:29 AM, Simen Langseth wrote: Dear GDAL Users: I have a raster image which has to be aligned over the polygon. Both the raster and polygon have the same coordinate system. The figure shows the raster image which has to be fitted over the RED polygon. https://drive.google.com/file/d/0B2RqG9tSAAIUc3J5ZDN6V01IcG8/view?usp=sharing How can I do it guys? Thanks. Simen ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] shapefile polygon/multipolygon ordering
On 11/14/2014 9:26 AM, mccorb wrote: > Two questions: > 1. Is it within the shapefile specification to have polygons that have holes > that have polygons? > 2. Does GDAL provide any options to coerce it to not re-order the polygons? > > thanks > Yes, polygons can have holes. I have no idea about question 2, but in fact this raises a different question: Does the shapefile spec say anything about rendering order? My quick read of the ESRI whitepaper suggests the answer is no. In general, I would suggest the solution is to have one shapefile for land masses, one for lakes, and one for islands. Rendering order can then be controlled by application. The one drawback of this approach is islands with lakes on them, which is a bit pathological, but does occur. In his book "Maphead", Ken Jennings (yes, the Jeopardy guy) cites some place where the nesting of water/island/water... goes three layers deep. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] LZW Compression on geotiffs
On 10/1/2014 12:02 PM, Jukka Rahkonen wrote: For comparison: Tiff as zipped: 347 MB; Tiff into png: 263 MB. If I have understood right, both zip and png are using the deflate algorithm, so there might be some place for improving deflate compression in GDAL. I was curious how png could achieve such better compression if it is using the same deflate algorithm. I wouldn't think different implementations would account for so much improvement. It turns out the png compression uses a "filtering" step ahead of compression. This is explained here. The filter is similar to a differential pulse code modulation, in which the pixel is represented as the difference from the pixels to the left, left upper diagonal, and above. This typically reduces the magnitude of the value to something close to zero, making the encoding more efficient. David ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
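The filtering effect described above is easy to demonstrate with the standard library (a toy of mine, not GDAL or libpng code): differencing each byte from its left neighbor, in the spirit of PNG's "Sub" filter and of TIFF's PREDICTOR=2 option, makes smooth data far more compressible.

```python
import random
import zlib

# A smooth, slowly varying "image row": a random walk with unit steps.
random.seed(42)
vals = [128]
for _ in range(16383):
    vals.append((vals[-1] + random.choice((-1, 1))) & 0xFF)
raw = bytes(vals)

# PNG-style "Sub" filter: each byte becomes the delta from its left neighbor,
# collapsing the stream to (mostly) just +1/-1 step values.
filtered = bytes((raw[i] - (raw[i - 1] if i else 0)) & 0xFF
                 for i in range(len(raw)))

plain_size = len(zlib.compress(raw, 9))
delta_size = len(zlib.compress(filtered, 9))
print(delta_size < plain_size)   # filtering makes deflate's job much easier
```

The raw walk wanders over most byte values and compresses poorly; the filtered stream has only a handful of distinct symbols and shrinks dramatically, which is essentially why PNG beat the unfiltered zipped TIFF in Jukka's comparison.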
Re: [gdal-dev] LZW Compression on geotiffs
I was sufficiently intrigued by this result that I tried on some 3-band aerial data I had handy. My data is from over Phila, PA. Here are the results for various compression/tiling combinations. It's quite different from yours:

93,784,427 out_none.tif
73,241,943 out_deflate.tif
59,786,628 out_deflate_tiled.tif
93,191,345 out_lzw.tif
78,036,699 out_lzw_tiled.tif
94,516,744 out_pack.tif
94,552,019 out_pack_tiled.tif

As Even noted, it's not surprising that some compression methods, esp. packbits which is a very simple compression, would not have much effect, and in fact actually increased file size. More surprising would be that LZW and deflate don't save space. In my case, tiling made a big difference. On 10/1/2014 8:32 AM, Even Rouault wrote: > Le mercredi 01 octobre 2014 16:26:15, Newcomb, Doug a écrit : >> Well, I went back and compiled with 1.11.1 with the external geotiff library and got the same result for LZW, tiling made no difference in size. >> As an interesting sideline, I noticed that I had not been compiling with LZMA compression previously and compiled gdal 1.11.1 with LZMA. I ran: >> gdal_translate -of “GTIFF” -co “COMPRESS=LZMA” on the original file and got a file of 149MB in size. >> I noticed that LZMA compression is not listed as one of the compression options for the GeoTIFF format in the online documentation. Has that capability been available long? > GDAL 1.8.0. Should likely be documented. Caution: GeoTIFF LZMA compression is something that is far from being standard. Probably only GDAL LZMA-enabled builds with internal libtiff (or with external libtiff 4.0.X built against liblzma) will be able to deal with that. > If that's an option for you, you could try lossless JPEG2000 compression. See http://www.gdal.org/frmt_jp2openjpeg.html and the "Lossless compression" paragraph.
>> Doug >> On Wed, Oct 1, 2014 at 9:45 AM, Even Rouault wrote: >>> Le mercredi 01 octobre 2014 15:41:06, Newcomb, Doug a écrit : Hi, I have a 4-band uncompressed geotiff (NAIP data for North Carolina, USA) that is 193 MB. I've been going over the compression options for gdal_translate geotiffs and I note the following: gdal_translate -of “GTIFF” -co “COMPRESS=PACKBITS” gives me a file that is 194MB in size; gdal_translate -of “GTIFF” -co “COMPRESS=LZW” gives me a file that is 232MB in size; gdal_translate -of “GTIFF” -co “COMPRESS=DEFLATE” gives me a file that is 177MB in size. Of the above lossless compression options for geotiff, it seems that DEFLATE is the only one actually compressing. I'm running gdal 1.11.0 on Ubuntu 12.04.3 64-bit compiled with ./configure --with-libz --with-png --with-libtiff=internal --with-geotiff=internal --with-jpeg=internal --with-openjpeg --with-pg=/usr/local/pgsql/bin/pg_config --with-ogr --with-xerces --with-geos --with-hdf4 --with-hdf5 --with-poppler=/usr/local/lib --with-podofo --with-spatialite --with-netcdf Is this expected behavior? >>> Doug, >>> Not completely surprising that lossless methods don't work well with aerial/satellite imagery. >>> You may also want to test with -co TILED=YES. >>> Even >>> -- >>> Spatialys - Geospatial professional services >>> http://www.spatialys.com ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
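Even's remark that lossless methods struggle with aerial/satellite imagery comes down to entropy, which a stdlib toy can show (illustration of mine; random bytes stand in for sensor noise, a constant buffer for flat data):

```python
import random
import zlib

random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(65536))  # noise-like data
flat = bytes(65536)                                         # constant field

noisy_ratio = len(zlib.compress(noisy, 9)) / len(noisy)
flat_ratio = len(zlib.compress(flat, 9)) / len(flat)
print(noisy_ratio)   # ~1.0: deflate gains essentially nothing on noise
print(flat_ratio)    # tiny: deflate collapses uniform data
```

Real imagery sits between these extremes, and which end it is nearer decides whether DEFLATE/LZW shrink the file or, as with Doug's LZW result, actually grow it once the codec's overhead is added.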
Re: [gdal-dev] gdal2tiles for 16bit data
On 9/22/2014 2:18 PM, Jan Tosovsky wrote: > On 2014-09-21 David Strip wrote: >> Looking back at the code, the actual output file is created with this line: >> self.out_drv.CreateCopy(tilefilename, dstile, strict=0) > Hmm, I have no idea how to pass this info into CreateCopy: > http://www.gdal.org/classGDALDriver.html This was bad advice. CreateCopy uses the same datatype as the raster it's copying, so it couldn't be the source of your problem (and there's no way to specify a datatype, since it's just copied). >> If this still doesn't fix it, the first thing to do is run gdalinfo against one of the tiles and let us know the result. > Thanks for the hint. I've checked the tiling source (correct file):
>
> Metadata:
>   AREA_OR_POINT=Area
> Image Structure Metadata:
>   INTERLEAVE=BAND
> Corner Coordinates:
> ...
> Band 1 Block=256x256 Type=Int16, ColorInterp=Gray
>   NoData Value=0
>
> While my tile is:
>
> Image Structure Metadata:
>   INTERLEAVE=PIXEL
> Corner Coordinates:
> ...
> Band 1 Block=256x8 Type=UInt16, ColorInterp=Gray
> Band 2 Block=256x8 Type=UInt16, ColorInterp=Undefined
>
> I've changed UInt16 to Int16, but in this case there are 'ERROR 1: Buffer too small' errors when tiles are produced (but they are somehow generated anyway). It fixes the type to Int16, but there are still some differences. I have no idea what this is about. It does appear to be some sort of numpy error msg from within the routine, but why it occurs and you can still get output is a mystery to me. > What exactly is 'Block=256x256', and is 'Block=256x8' in the second case correct? The block describes how the pixels are laid out in the file. The fact that the input and output block sizes are different is not an error. Likewise, interleave = band vs. pixel is a matter of internal organization and shouldn't matter. > Are other differences cosmetic or fatal ones?
gdal2tiles automatically creates a mask band (band2) for the tile, so the addition of the second band is not surprising or incorrect. This also accounts for the removal of the NoData Value > > Thanks, Jan > > ___ > gdal-dev mailing list > gdal-dev@lists.osgeo.org > http://lists.osgeo.org/mailman/listinfo/gdal-dev > ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] gdal2tiles for 16bit data
On 9/20/2014 9:02 AM, Jan Tosovsky wrote: But the final tif has most likely incorrect metadata as its reading via jai-imageio fails ArrayIndexOutOfBoundsException: 256 (Despite the Ok result when reading the tiling source image using the same method) Any idea? Thanks, Jan Adding an extra parameter seems to be fixing this: dstile = self.mem_drv.Create('', self.tilesize, self.tilesize, tilebands, gdal.GDT_UInt16) (applied to all 'mem_drv.Create' occurrences) Looking back at the code, the actual output file is created with this line: self.out_drv.CreateCopy(tilefilename, dstile, strict=0) This will also need to be set to the correct datatype. If this still doesn't fix it, the first thing to do is run gdalinfo against one of the tiles and let us know the result. If it really is producing bogus meta-data (which would surprise me), that will set off alarm bells for a number of readers of this mailing list. If, on the other hand, the meta-data is fine, it will prompt some thinking in other directions. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] gdal2tiles for 16bit data
A few lines before the ones you quoted from the script, dstile is set to a raster in memory: dstile = self.mem_drv.Create('', self.tilesize, self.tilesize, tilebands) This memory dataset driver defaults to a datatype of byte. You need to override this to the datatype of your choice. I'm not familiar enough with the Python interface to be able to tell you how to do that in Python, though. On 9/20/2014 6:20 AM, Jan Tosovsky wrote: Dear All, I am trying to produce GeoTiff tiles from my SRTM based GeoTiff: gdal2tiles.py -z 6-7 warped.tif D:\tiles-gdal The problem is that the final data is somehow converted from 16bit into 8bit, so useless for intended further processing. In the script there are the following lines responsible for r/w operations: data = ds.ReadRaster(rx, ry, rxsize, rysize, wxsize, wysize, band_list=list(range(1,self.dataBandsCount+1))) dstile.WriteRaster(wx, wy, wxsize, wysize, data, band_list=list(range(1,self.dataBandsCount+1))) Just a naive idea, could this be somehow edited to get the proper result? I believe there is a solution as all translate->merge->warp steps kept the 16bit intact. Thanks, Jan Win7/GDAL 1.10 ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] GetFeatureCount() bForce set to FALSE
On 9/8/2014 8:15 AM, Martin Landa wrote: Hi all, I was trying to change a default value of bForce in OGRVFKLayer::GetFeatureCount() to FALSE [1], but when I debug eg. ogrinfo, it still reports bForce as TRUE (1) Breakpoint 1, OGRVFKLayer::GetFeatureCount (this=0x71a7d0, bForce=1) at ogrvfklayer.cpp:162 Any idea what could be wrong? Thanks in advance, Martin [1] http://trac.osgeo.org/gdal/browser/trunk/gdal/ogr/ogrsf_frmts/vfk/ogr_vfk.h#L83 What you describe should work. (See this stackoverflow post). I do tend to agree with Even that changing the default seems like a bad practice, since existing code that relies on the original default could end up calling your new code and have unexpected behavior. Maybe I'm not understanding what your link to the repository is supposed to show, but when I look at the linked repository code, I see at line 83: int GetFeatureCount(int = TRUE); i.e., you have not changed the default to FALSE as you intended. If this is the code you're compiling, it would certainly explain your problem. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] How get EPSG code from an OGRSpatialReferenceH?
On 8/29/2014 11:45 AM, Even Rouault wrote: > Le vendredi 29 août 2014 05:20:16, David Strip a écrit : >> > It's my recollection from a question I posted here a little over a year >> > ago that except for a few special cases, autoIdentifyEPSG only works if >> > there is an authority node providing the EPSG. > Hu, no. That would make it really useless. AutoIdentifyEPSG() tries to add > EPSG authority and code on a very restricted set of common SRS that don't > have > it > Isn't that what I wrote? "few special cases" = "very restricted set" ? ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] SVG support
What is the source of your SVG file? According to the SVG driver page, only files produced with the Cloudmade Vector Stream Server will work. On 8/29/2014 9:25 AM, Scott Rowles wrote: Hi, I am using the 1.11 precompiled binaries with with libexpat included in the GDAL_DATA path. I am trying to read an SVG file, which according to the vector format support table should work. But it is failing and saying that it cannot open the file. Any thoughts on how to troubleshoot/resolve this? scott ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] How get EPSG code from an OGRSpatialReferenceH?
It's my recollection from a question I posted here a little over a year ago that except for a few special cases, autoIdentifyEPSG only works if there is an authority node providing the EPSG. Hopefully someone will give you an authoritative answer. I believe GeoTools has the capability to find the EPSG from the WKT, but I have no idea how to interface to that. On 8/28/2014 8:42 PM, Nik Sands wrote: > Hi devs, > > What is the correct way to extract the EPSG code from an OGRSpatialReferenceH? > > Currently I'm finding that the following works only for SRS of some images and not others: > > const char *charAuthType = OSRGetAttrValue(gdal.srcSRS, "AUTHORITY", 0); > const char *charSrsCode = OSRGetAttrValue(gdal.srcSRS, "AUTHORITY", 1); > > However, there is no "AUTHORITY" node in some SRSs so it doesn't work for those images. To cater for this, I'm trying to explicitly set the authority node using: > > OSRAutoIdentifyEPSG(gdal.srcSRS); > > But this fails with OGRERR_UNSUPPORTED_SRS (even when it is an SRS that GDAL recognises and uses well). > > So I'm stumped... how do I reliably determine the EPSG for an OGRSpatialReferenceH? > > Cheers, > Nik. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
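Nik's two OSRGetAttrValue calls amount to reading the top-level AUTHORITY node out of the WKT. For readers without GDAL at hand, a minimal pure-Python sketch of that lookup (the trimmed WKT string is a made-up example, and "the top-level node is the last AUTHORITY in the string" is a simplifying assumption about well-formed WKT1):

```python
import re

def authority_of(wkt):
    """Return (authority, code) for the outermost AUTHORITY node, or None.

    Simplification: in well-formed WKT1 the top-level AUTHORITY node is the
    last one in the string, so we just take the final match."""
    nodes = re.findall(r'AUTHORITY\["([^"]+)"\s*,\s*"([^"]+)"\]', wkt)
    return nodes[-1] if nodes else None

# A trimmed-down (hypothetical) WKT for WGS 84 / UTM zone 33N:
wkt = ('PROJCS["WGS 84 / UTM zone 33N",'
       'GEOGCS["WGS 84",AUTHORITY["EPSG","4326"]],'
       'AUTHORITY["EPSG","32633"]]')

print(authority_of(wkt))                            # ('EPSG', '32633')
print(authority_of('PROJCS["no authority here"]'))  # None
```

The None case is exactly the situation the thread describes: if the WKT never carried an AUTHORITY node, there is nothing to extract, which is why AutoIdentifyEPSG (or an external lookup) is needed at all.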
Re: [gdal-dev] Adding a "Commercial support" section on gdal.org?
As Frank wrote, this is a slippery issue. Personally I could be comfortable with anything from self-registration to the highly selective approach described by Frank. To me, the important issue is making clear to a reader of the list what exactly the list means and how to use that to interpret the skills of those on the list. One way to use this list is as a reward to significant contributors to the project. This would tend to point to those most familiar with the internals of the project, as well as those having a broad commitment to the project and the notion of an open source community. Of course this requires a voting process, presumably by the PSC, which can be burdensome and stressful, as Frank notes. While I have found this project community to be generally welcoming, open source projects somewhat deservedly have a reputation for being insular and hard to crack. (For a great read, check out this article. Worth reading just for a remarkably intolerant response from Linus Torvalds on the merits of C++). A vetted list of names carries an implied endorsement, which is valuable to the reader, but carries a risk for the committee that chooses the list. (I'm not talking risk in the legal sense, though that could occur, I suppose. More the reflection on how the community chooses whom to include or exclude.) At the other extreme, we allow anyone to register and hopefully provide some guidance in how to choose amongst them. For example, suggest that people search the archives of this mailing list to see how often the consultant participates. Put a star next to names with commit privileges, perhaps with the date they achieved this status, so you can tell how long they've been active. There are many ways to objectively identify the stronger contributors while remaining open. I am tempted to suggest even allowing endorsements, but policing that against spam, abuse, and fraud is probably more work than it's worth. 
My choice leans to an open list of self-registrants with some objective measures of their participation, but I'll probably be content with whatever the community decides. On 8/21/2014 11:02 AM, Frank Warmerdam wrote: Folks, This is a somewhat sticky area, which is why I started with just the self-registration mechanism on the OSGeo site in the past. A scenario that I could support would be a section somewhat like the postgis.net support list where being added to it needs to be voted on by the PSC. My criteria as a PSC member would be: - The organization has made significant contributions to the project (in code, docs, etc) - The organization has staff that I personally know to be competent GDAL/OGR developers. It is a slippery sort of thing of course. Subjective, and I would hate to be in the situation where I'm having to vote against an addition. If we were to pursue this I actually think an RFC with an initial list of entries, and some general principles would be appropriate (though additions wouldn't need an RFC - just an up/down vote). My perspective when consulting was that being active on the mailing list, and noting in my email signature that I was available for consulting, was enough to give me some profile with those looking for someone. PS. As a happy customer of Even's (at Planet Labs) I can strongly endorse him as a consultant! Best regards, Frank ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] having problem with polygonize
On 6/27/2014 9:26 PM, devendra dahal wrote: file1 = "my_raster.tif" OutFile = "wetlands.tif" rastDriver = gdal.GetDriverByName('Gtiff') # reading gdal driver rastDriver.Register() Look at this StackExchange topic. I think your problem is that you have the wrong driver name. It's GTiff (note the capital T). I'm a little surprised the Register didn't fail, but that's life. dst_ds = rastDriver.Create(OutFile, cols, rows, bands, GDT_Int16) dst_ds.SetGeoTransform(geot) dst_ds.SetProjection(proj) dst_ds.GetRasterBand(1).WriteArray(newArray) newArray = None I can't see anywhere that geot and proj are assigned values. This will probably be your next problem once you get past the error you're seeing. But it throws the error below. dst_ds = drv.CreateDataSource( dst_filename ) AttributeError: 'NoneType' object has no attribute 'CreateDataSource' You cut out the traceback that would have helped spot, or at least localize, the error. David Strip ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
[gdal-dev] GeoTiff crashing qgis
I have a piece of C++ code that is clipping regions from a very large (roughly 90,000 x 100,000 pixels, 11G compressed) Float32 gray-scale single band image and writing each as a separate GeoTiff (with LZW compression). In the sample that I've run, I've clipped four different regions. gdalinfo can read and provide a description of all four, however for only two of the images does gdalinfo provide summary statistics. I can open two of the images in qgis, while the two that fail to produce summary statistics cause qgis to hang when I try to open them. The two that I can open have block sizes of 247x8 (image size 247x311) and 583x3 (image size 583x602), while the two that fail have block size 1198x1 (image size 1198x809) and block size 1444x1 (image size 1444x1007). I'm not sure why the block size would have an impact, especially since the ones that fail are n x 1, which I would think would be the easier to read in less capable systems, but it's the only common factor I've found so far. All four images display correctly in the Windows Photo Viewer (Win 7). I'm a bit reluctant to show the whole code since it would take quite a few lines to get the whole picture down to the gdal calls. The heart of the write is done using GDALRasterBand::RasterIO, though. At the moment I'm at a loss as to where to start looking for the problem. Any suggestions are most welcome. David Strip ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] Misc. subjects : OSGeo Vienna code sprint, release plans, GDAL 2.0
On 3/31/2014 1:03 PM, Even Rouault wrote: > Hi Etienne, > > Thanks for your ideas. > >> Hi all, >> >> I have a few suggestions for gdal 2.0, based on my personal experience in >> learning to use, enhance and maintain gdal/ogr code. >> >> - replace cpl/csl/string/xml code with a mainstream, modern cross-platform >> toolkit such as QT, boost, etc. > QT is certainly a dependency we wouldn't want to draw. Too big for some > embedded usage, and it would make GDAL practically bound by the LGPL. > I guess standard C++ library classes, or perhaps boost, should do the job > for what you mention below. +1 for the idea of moving to either well-supported toolkits or more extensive use of the std library or features of C++11 > >> While cpl/csl classes are robust and "do the job", they are not well >> documented and not very intuitive for a new gdal coder. This is from my >> personal experience, some may not agree. >> They are also not used outside gdal, and as such do not benefit from >> enhancements as other toolkits do. > Well, at least, MapServer uses a few CPL functions : CPL minixml, > CPLMalloc/CPLFree, CPLReadDir, CPLFormFilename, CPLGetFilename, > CSLInsertString, etc.. To the extent that these remain necessary, making them a thin wrapper over whatever standard lib/toolkit is adopted would both improve the effective documentation and make them compatible with the library/toolkit (esp memory allocation) ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] Installing GDAL on Win7
On 3/24/2014 6:31 AM, Mike Flannigan wrote: > However DEFLATE is totally unreadable: > http://www.mflan.com/temp/deflate.jpg > in both Global Mapper and QGIS. Early in the qgis 2.x series, one of the popular pre-built Windows binaries was built without DEFLATE compression. However, this has been fixed, at least in the OSGEO4W downloads. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] NLCD images and north
On 12/19/2013 11:38 AM, David Strip wrote: > On 12/19/2013 2:18 AM, Jo Meder wrote: >> Can you clarify that last part for me please? Are you saying that the data >> should be aligned north up based on the .tfw file? The geotransform from >> GDAL also suggested there was no rotation. Or is it just that the data is >> correct for the projection it is in? Ignore this part of my previous reply. It's not correct and reflects confusion on my part. > The .tfw file is saying the data is indeed aligned north up, in the sense > that pixel (0,0) is due north of pixel (1,0). That is, the columns of > the array are aligned north-south. End region to ignore > Once again, open the raster in qgis. You get an apparently rotated patch > that is slightly tapered towards the top. This is how the projection > renders for this part of the world (Washington, DC). Now click > View->Decorations->North Arrow, click Enable North Arrow and click Set > Direction Automatically. Click OK. The resulting north arrow does not > point vertically, but rather is parallel to the edges of the patch, > again validating the north-up nature of the rasters. >> So far all the data I've used has been north up and seems to have been >> projected in UTM or something like it. I need to make things transparent to >> the user so I wonder if what I should be doing is reprojecting all the data >> to a custom projection for our world. The reprojection would handle the >> cases like this NLCD data. > Hopefully someone will chime in here with a suggestion how to address > your problem to make this easier for your users. Reprojecting your data > to a projection that doesn't have the visual effect of the Albers seems > to be the right way to go so as not to confuse your users, but what > projection to use is a complicated question that has a lot to do with > the use case and what areas of the world you're working in. 
Each > projection introduces its own forms of distortion (or alternatively, each > preserves specific characteristics). The choice of projection will > depend on which characteristics are most important to preserve, the > scale/extent of the maps your users work in, and the sorts of analyses > (formal or informal) that they will be performing. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] NLCD images and north
On 12/19/2013 2:18 AM, Jo Meder wrote: > Can you clarify that last part for me please? Are you saying that the data > should be aligned north up based on the .tfw file? The geotransform from GDAL > also suggested there was no rotation. Or is it just that the data is correct > for the projection it is in? The .tfw file is saying the data is indeed aligned north up, in the sense that pixel (0,0) is due north of pixel (1,0). That is, the columns of the array are aligned north-south. Once again, open the raster in qgis. You get an apparently rotated patch that is slightly tapered towards the top. This is how the projection renders for this part of the world (Washington, DC). Now click View->Decorations->North Arrow, click Enable North Arrow and click Set Direction Automatically. Click OK. The resulting north arrow does not point vertically, but rather is parallel to the edges of the patch, again validating the north-up nature of the rasters. > > So far all the data I've used has been north up and seems to have been > projected in UTM or something like it. I need to make things transparent to > the user so I wonder if what I should be doing is reprojecting all the data > to a custom projection for our world. The reprojection would handle the cases > like this NLCD data. Hopefully someone will chime in here with a suggestion how to address your problem to make this easier for your users. Reprojecting your data to a projection that doesn't have the visual effect of the Albers seems to be the right way to go so as not to confuse your users, but what projection to use is a complicated question that has a lot to do with the use case and what areas of the world you're working in. Each projection introduces its own forms of distortion (or alternatively, each preserves specific characteristics). 
The choice of projection will depend on which characteristics are most important to preserve, the scale/extent of the maps your users work in, and the sorts of analyses (formal or informal) that they will be performing. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] NLCD images and north
I downloaded some NLCD test data (roughly Washington DC, for what it's worth) using the NationalMap viewer and I see the apparent rotation, but actually it's not rotated. I suspect you are seeing the same phenomenon. Your download includes, among other files, a .tif file with the image, and a .tfw file that defines the "world" in terms of the projection defined in the .prj file. In my experiment the .tfw file shows no rotation (entries 2 and 3 are equal to 0). However, the projection is an Albers Equal Area projection, so the data region doesn't align with the tif's rectangular boundaries. If you open your file in something like qgis and set the projection to the appropriate UTM zone or similar projection, you'll probably get what you're expecting - a north-south rectangle containing your data. At least that's what happened when I tried it. (And of course that's what it should be according to the .tfw file) ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
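The "entries 2 and 3 are equal to 0" check above is mechanical; a small sketch of it, assuming the standard six-line world file layout (the sample values are made up for illustration, not the actual NLCD ones):

```python
def tfw_is_north_up(tfw_text):
    """A world file (.tfw) is six numbers, one per line:
    x pixel size, row rotation, column rotation, y pixel size (usually
    negative), x origin, y origin. Lines 2 and 3 are the rotation terms;
    both zero means the raster is axis-aligned ("north up") in its own CRS."""
    vals = [float(v) for v in tfw_text.split()]
    if len(vals) != 6:
        raise ValueError("not a valid world file")
    return vals[1] == 0.0 and vals[2] == 0.0

# Hypothetical .tfw contents, similar in shape to an NLCD one (30 m pixels):
sample = """30.0
0.0
0.0
-30.0
1551915.0
1919235.0
"""
print(tfw_is_north_up(sample))  # True
```

Note this only tells you the raster is axis-aligned in its projection's coordinate system; as the thread explains, an Albers-projected raster can still look rotated when displayed in another CRS.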
[gdal-dev] .csvt file doesn't allow space after comma
I am trying to use a .csv file in a qgis project. After not getting what I expected, I learned about .csvt files and wrote a file that looked like "String", "Integer" with a space following the comma. This did not work - both fields were still read as String. After removing the space I got a String field and an Integer field. The documentation states the list is comma-separated, which I suppose, strictly interpreted, could mean no white space. However, in general .csv files ignore whitespace following a comma. I'm posting this so that someone working with a more complex .csvt file that fails might have a chance of finding this when they search for a clue as to why their file isn't working. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
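A toy model of a strict comma-separated parser shows why the space matters (this mimics the behavior observed above; it is not OGR's actual .csvt parser, and the fall-back-to-String rule is an assumption drawn from the symptom described):

```python
# Field types a .csvt file may declare (subset, for illustration).
CSVT_TYPES = {"String", "Integer", "Real", "Date", "Time", "DateTime"}

def parse_csvt(line):
    """Toy strict parser: split on commas only, strip the surrounding
    quotes, and fall back to String for anything unrecognized. Note there
    is deliberately no whitespace stripping -- that is the point."""
    fields = []
    for tok in line.rstrip("\n").split(","):
        name = tok.strip('"')   # ' "Integer"' keeps its leading space
        fields.append(name if name in CSVT_TYPES else "String")
    return fields

print(parse_csvt('"String","Integer"'))   # ['String', 'Integer']
print(parse_csvt('"String", "Integer"'))  # ['String', 'String']
```

With the space, the second token is ` "Integer"`, which no longer matches any known type name, so the field silently degrades to String - exactly the symptom reported.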
[gdal-dev] GDALColorTable - can't delete it
I'm having trouble with the GDALColorTable in C++ on Win7 using Visual Studio 2010. With a function as simple as void foo() { GDALColorTable * ct = new GDALColorTable; delete ct; } I've also tried void foo() { GDALColorTable ct; } and void foo() { GDALColorTable * ct = new GDALColorTable; GDALDestroyColorTable((GDALColorTableH) ct); } I get either unhandled exception or stack corruption errors. I expect there's something simple I'm missing, but it's eluding me at the moment. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] gdalwarp multiple nodata
Just a word of caution about val-repl.py: I recently tried to use this script and found that it doesn't preserve all the properties of the input file, so be warned. In my case, the input file was a paletted geotiff and the output is grayscale - a very different beast for my purpose. It shouldn't be too hard to fix this, as long as you're aware of the problem. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] Does 64 bit gdal support deflate compression in geotiffs?
Sorry for wasting everyone's time. I just realized I could download the pre-built version from gisinternals and test against that. And the answer is yes, deflate is supported in the latest release build. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
[gdal-dev] Does 64 bit gdal support deflate compression in geotiffs?
The new 64 bit release of qgis (2.0.1 Dufur) fails when trying to load deflate compressed geotiffs. Further testing reveals that the bundled gdal utilities suffer the same problem. The version lists as 1.10.0, released 2013/04/24. No idea how it was built - this is a download of the pre-compiled Win7 qgis release. Before digging any deeper, I thought I'd ask if the 64bit version of gdal is supposed to support deflate. The 32 bit version does (that's how I compressed the files in the first place). ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] will ogr get coordinates in geographic coordinate system or projected coordinate system when using getPoint function
On 9/15/2013 3:39 AM, sepideh wrote: > I come to the point that coordinates are stored in the *PCS* not *GCS*, but > why my layer is stretched. > > Is it because I show GCS coordinates in a glOrtho projection or do you think > the problem is something else? > Your problem is that the coordinates are being graphed in a coordinate system that fills the viewport that you have defined. You can see this by grabbing the right edge of your openGL window and resizing. The entire display rubber-bands with the shape of the screen. I'm not that familiar with the structure of openGL programs, but your problem lies in the interaction of the viewport with the projection (glOrtho in this case). You need to get these two coordinated so that shape is preserved. I would start with a very simple program - plotting a square - and make sure that no matter how you resize the window, it remains square. Once you understand this, apply that logic to setting the viewport and projection for your shapefile program. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
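The square-stays-square exercise comes down to matching the glOrtho bounds to the viewport's aspect ratio. A sketch of that calculation (pure Python, no OpenGL required; feed the returned bounds to glOrtho's left/right/bottom/top arguments after each resize):

```python
def ortho_bounds(view_w, view_h, xmin, xmax, ymin, ymax):
    """Expand the data bounding box so its aspect ratio matches the
    viewport's, keeping it centered. One data unit then maps to the same
    number of pixels in x and y, so shapes are preserved on resize."""
    data_w, data_h = xmax - xmin, ymax - ymin
    view_aspect = view_w / view_h
    if data_w / data_h < view_aspect:
        # Viewport is relatively wider than the data: pad left/right.
        pad = (data_h * view_aspect - data_w) / 2
        return xmin - pad, xmax + pad, ymin, ymax
    # Viewport is relatively taller than the data: pad bottom/top.
    pad = (data_w / view_aspect - data_h) / 2
    return xmin, xmax, ymin - pad, ymax + pad

# A unit square in an 800x400 window gets padded horizontally:
print(ortho_bounds(800, 400, 0, 1, 0, 1))  # (-0.5, 1.5, 0, 1)
```

The same function works for a shapefile's bounding box: recompute the bounds in the resize handler, and the layer stops rubber-banding with the window.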
Re: [gdal-dev] TR: Load GDALDataset Into DIB
If this is pretty much a one-time thing, just reverse the order in which you read the dataset. If this is going to be a regular thing for different kinds of data sources, you need to read the y pixel height of the dataset. You do this using GetGeoTransform; the y height is the last element of the array. If this is negative, the band reads in the reverse order from a Windows DIB (i.e., the first row is the top row). If the value is positive, it's the same arrangement as the DIB. In general, the pixels can be non-square, which creates a problem reading them straight into a DIB. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
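The orientation test described above boils down to the sign of the last geotransform element. A minimal sketch, assuming the six-element array that GetGeoTransform() fills in (the sample values are hypothetical):

```python
def rows_match_dib_order(geotransform):
    """geotransform is the 6-element array from GetGeoTransform():
    (x_origin, x_pixel_size, x_rot, y_origin, y_rot, y_pixel_size).
    A negative y pixel size (the usual, north-up case) means row 0 is the
    TOP of the image -- the reverse of a bottom-up Windows DIB. A positive
    value means the row order already matches the DIB."""
    return geotransform[5] > 0

def pixels_are_square(geotransform):
    """Non-square pixels can't be copied straight into a DIB without
    resampling, so it's worth checking up front."""
    return abs(geotransform[1]) == abs(geotransform[5])

# Hypothetical north-up geotransform: 30 m pixels, y size negative.
gt = (440720.0, 30.0, 0.0, 3751320.0, 0.0, -30.0)
print(rows_match_dib_order(gt))  # False -> copy the rows in reverse
print(pixels_are_square(gt))     # True
```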
Re: [gdal-dev] How do I tell if an OGRSpatialReference has been initialized?
I had considered validate(), but decided against it for the reason you suggest - an initialized SRS might be in some weird format that fails validation. I've been using exportToWkt(), but was/am concerned that it might be possible to fail to export in odd-ball situations. Hence, my question about a direct check. Sounds like exportToWkt() is my best bet. Thanks to Frank and Etienne for the clarification. David ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
[gdal-dev] How do I tell if an OGRSpatialReference has been initialized?
Given an OGRSpatialReference class object, how do I tell if it's been initialized to anything? (i.e., clear() was called, or it was constructed with a null string and no further action was taken to set the SRS.) I've looked over the interface and can't spot anything that tells me it's in a clear state, but of course I might have missed it. Thanks David ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
[gdal-dev] OGRSpatialReference::autoIdentifyEPSG and the pcs.csv table
After an admittedly quick skim of the code base, it appears that the pcs.csv file is used only to go from an EPSG to the parameters for a spatial reference. If my reading is right, then it is not used to go from a WKT to an EPSG. The autoIdentifyEPSG works for a few special cases, but otherwise relies on the presence of an authority node for the projection's EPSG. Is that correct? If so, that would explain my earlier question about the WKT for Maryland State Plane not being recognized even though it's in the pcs.csv file. David ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] pcs.csv values don't match spatialreference.org
Since posting this, I've learned that the pcs.csv values are in degree.minutes, while the WKT is in decimal degrees, which explains the pcs.csv values. This still leaves the question as to why autoIdentifyEPSG won't return the value. On 6/26/2013 11:44 PM, David Strip wrote: I'm working with some files in Maryland State Plane (US foot), and autoIdentifyEPSG is failing. I tracked it down to what I suspect is the problem - my files and spatialreference.org show the standard parallels as 38.3 and 39.45, with a latitude of origin at 37. The pcs.csv file shows values of 38.18 and 39.27, with a latitude of origin at 37.4 (I don't actually know how to match the csv columns to the parameters, but these are the closest matching values). Why does the pcs.csv differ from spatialreference.org, and what is the appropriate strategy for fixing this? Thanks (and apologies for a cross-posting to stackexchange, which I posted before I realized the difference between the two sources) David ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
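The degree.minutes packing can be converted mechanically; a sketch, assuming the EPSG-style sexagesimal packing DDD.MMSSsss (degrees before the point, then two digits of minutes and two of seconds after it; the values are the ones from this thread). This is a simplified reading of the format, handling positive values with whole seconds only:

```python
def dms_to_decimal(v):
    """Convert a sexagesimal DDD.MMSS value to decimal degrees.
    E.g. 38.18 packs 38 degrees 18 minutes, i.e. 38.3 decimal degrees."""
    s = f"{v:.4f}"               # e.g. 38.18 -> "38.1800"
    deg_part, frac = s.split(".")
    minutes = int(frac[0:2])
    seconds = int(frac[2:4])
    return int(deg_part) + minutes / 60 + seconds / 3600

print(dms_to_decimal(38.18))  # 38.3  (38 deg 18 min)
print(dms_to_decimal(39.27))  # 39.45 (39 deg 27 min)
```

So 38.18/39.27 in pcs.csv and 38.3/39.45 in the WKT are the same standard parallels, just written in two notations.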
[gdal-dev] pcs.csv values don't match spatialreference.org
I'm working with some files in Maryland State Plane (US foot), and autoIdentifyEPSG is failing. I tracked it down to what I suspect is the problem - my files and spatialreference.org show the standard parallels as 38.3 and 39.45, with a latitude of origin at 37. the pcs.csv file shows values of 38.18 and 39.27, with a latitude of origin at 37.4 (I don't actually know how to match the csv columns to the parameters, but these are the closest matching values). Why does the pcs.csv differ from spatialreference.org, and what is the appropriate strategy for fixing this? Thanks (and apologies for a cross-posting to stackexchange, which I posted before I realized the difference between the two sources) David ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
[gdal-dev] How robust is OGRSpatialReference::GetAuthorityCode?
How well does OGRSpatialReference::GetAuthorityCode perform in the wild when handed a WKT? In particular, if I have satellite imagery from someplace in the world that was ortho-rectified in a "sane" manner to a national or state projection with a conforming WKT, will this function return an EPSG value? (I realize "robust" is somewhat ill-defined, but this is the best I could come up with in terms of my question. Sorry). ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] GDALRasterBand::RasterIO and FlushCache
On 5/31/2013 5:38 PM, Frank Warmerdam wrote: Note that the image cache will start discarding blocks on its own when it is full. I think the default cache is about 64MB. I don't think that flushing the cache should be part of normal application operations, though if you are very tight on memory you might want to alter the default size. Note that the cached blocks are discarded when the dataset is closed. Thanks - I didn't realize this was happening. My memory usage appeared to be growing, but I didn't look that closely. A little experimentation showed that max_cache = 40960K (at least in my build, which is pretty much a default setup), and indeed once the max cache is reached there is no growth in memory usage. David ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
[gdal-dev] GDALRasterBand::RasterIO and FlushCache
I'm reading a fairly large (at least to me) geotif (about 2GB) a line at a time. I noticed that memory usage increases as I read the image. It appears that what's going on is that the driver hangs onto these lines of the image, even though I'm providing the same buffer for it to read into each time. At this point I noticed the FlushCache method. Calling this after reading each scan line solves the problem of the growing memory usage. I assume this behavior of caching whatever is read is deliberate. I think it would be useful in the API examples to make this behavior clear and perhaps show the use of the FlushCache method. For example, the API Tutorial page shows this for how to read a line into a buffer: Reading Raster Data There are a few ways to read raster data, but the most common is via the GDALRasterBand::RasterIO() method. This method will automatically take care of data type conversion, up/down sampling and windowing. The following code will read the first scanline of data into a similarly sized buffer, converting it to floating point as part of the operation. In C++: float *pafScanline; int nXSize = poBand->GetXSize(); pafScanline = (float *) CPLMalloc(sizeof(float)*nXSize); poBand->RasterIO( GF_Read, 0, 0, nXSize, 1, pafScanline, nXSize, 1, GDT_Float32, 0, 0 ); The pafScanline buffer should be freed with CPLFree() when it is no longer used. I think it would be useful to point out, in addition to the need to CPLFree the buffer, that if you are using this example in a loop to read the image a line at a time, you should FlushCache every line (or periodically) to keep from accumulating the entire image in the cache (which is invisible to the user). It seems (to me) that it would be better to put this in the API Tutorial rather than as a code snippet in the Wiki, since the tutorial is where most people would look when starting out. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
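The cache behavior under discussion can be mimicked with a toy model (pure Python, no GDAL; the 4 KB blocks and the naive evict-oldest rule are stand-ins, not GDAL's actual LRU policy across open datasets):

```python
class ToyBlockCache:
    """Toy model of a raster block cache: blocks accumulate as you read,
    growth stops once a maximum is hit, and flush_cache() empties it.
    Illustrates why memory appears to grow during a scanline loop."""

    def __init__(self, max_blocks):
        self.max_blocks = max_blocks
        self.blocks = {}

    def read_block(self, key):
        if key not in self.blocks:
            if len(self.blocks) >= self.max_blocks:
                # Full: evict the oldest cached block before adding one.
                self.blocks.pop(next(iter(self.blocks)))
            self.blocks[key] = bytearray(4096)  # pretend block data
        return self.blocks[key]

    def flush_cache(self):
        self.blocks.clear()

cache = ToyBlockCache(max_blocks=10)
for row in range(25):          # read 25 "scanlines"
    cache.read_block(row)
print(len(cache.blocks))       # 10 -- growth stops at the cache limit
cache.flush_cache()
print(len(cache.blocks))       # 0
```

This matches both observations in the thread: memory grows while reading (until the cap, per Frank's reply), and an explicit flush drops it to zero immediately.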
Re: [gdal-dev] What's the best way to fix an incorrect projection in a geotiff
Thanks. Somehow I missed the "override" implications of -a_srs when I looked at gdal_translate. And yes, I believe the coordinates are OK in my file, just a bad SRS. On 5/28/2013 9:38 AM, Frank Warmerdam wrote: David, The simplest way of correcting the coordinate system might be something like: gdal_translate -a_srs EPSG:27700 wrong.tif right.tif This assumes the coordinates are ok. This does result in copying the whole image and repacking it but avoids any resampling that might occur with gdalwarp. Best regards, Frank On Tue, May 28, 2013 at 8:25 AM, David Strip <g...@stripfamily.net> wrote: I have a geotiff that contains data in Penn. State-Plane, but the coordinate system is listed as GCS WGS84. What's the best way to fix this, esp if I have multiple files with this problem? I can use listgeo to get a file of the all the tags, edit the coordinate system to the correct value, then use tiffcp on each file to strip out the tags and then use geotifcp to put back the correct ones, but I suspect there's a better way, perhaps using gdalwarp with the -to option set to override the input SRS, creating a trivial warp? Or maybe something simpler that I'm not seeing. thanks David -- ---+-- I set the clouds in motion - turn up | Frank Warmerdam, warmer...@pobox.com light and sound - activate the windows | http://pobox.com/~warmerdam and watch the world go round - Rush | Geospatial Software Developer ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
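For the "multiple files with this problem" part of the original question, Frank's one-liner generalizes to a small batch script; a sketch (the filenames and the EPSG code are hypothetical - substitute the code your files should actually carry, e.g. the right Pennsylvania State Plane zone):

```python
def fix_srs_commands(files, srs):
    """Build one gdal_translate command per file, stamping the given SRS
    with -a_srs (metadata rewrite only, no resampling). Returns the
    commands as argument lists ready for subprocess.run(cmd, check=True)."""
    cmds = []
    for f in files:
        fixed = f.replace(".tif", "_fixed.tif")
        cmds.append(["gdal_translate", "-a_srs", srs, f, fixed])
    return cmds

# Hypothetical file list and SRS; review before executing anything.
for cmd in fix_srs_commands(["a.tif", "b.tif"], "EPSG:2272"):
    print(" ".join(cmd))
```

Printing the commands first (instead of running them) makes it easy to eyeball the batch before committing; once satisfied, loop over the list with subprocess.run.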
[gdal-dev] What's the best way to fix an incorrect projection in a geotiff
I have a geotiff that contains data in Penn. State-Plane, but the coordinate system is listed as GCS WGS84. What's the best way to fix this, esp if I have multiple files with this problem? I can use listgeo to get a file of the all the tags, edit the coordinate system to the correct value, then use tiffcp on each file to strip out the tags and then use geotifcp to put back the correct ones, but I suspect there's a better way, perhaps using gdalwarp with the -to option set to override the input SRS, creating a trivial warp? Or maybe something simpler that I'm not seeing. thanks David ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] Is it possible to build pdb files in Release ?
The pdb file is generated by the /Zi flag. Add that to the list of flags in your release build. On 5/14/2013 10:55 AM, Mihaela Gaspar wrote: I built GDAL in both Debug and Release. The default behavior is to create .pdb files in Debug, but not in Release. I would like to get a pdb file in the Release build as well; is it possible? The only lines I see that have to do with pdb are: !IFNDEF DEBUG OPTFLAGS= $(CXX_ANALYZE_FLAGS) /nologo /MT /EHsc /Ox /D_CRT_SECURE_NO_DEPRECATE /D_CRT_NONSTDC_NO_DEPRECATE /DNDEBUG /Fd$(LIBDIR)\gdal$(VERSION).pdb /ITERATOR_DEBUG_LEVEL=0 !ELSE OPTFLAGS= $(CXX_ANALYZE_FLAGS) /nologo /MTd /EHsc /Zi /D_CRT_SECURE_NO_DEPRECATE /D_CRT_NONSTDC_NO_DEPRECATE /Fd$(LIBDIR)\gdal$(VERSION).pdb /DDEBUG /ITERATOR_DEBUG_LEVEL=2 !ENDIF It is not apparent to me how to get pdb files to be created, based on this. Thank you. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
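Concretely, the advice amounts to editing the release (!IFNDEF DEBUG) branch of the OPTFLAGS block quoted above so it also carries /Zi - a sketch against that quoted nmake.opt fragment, everything else left as-is:

```
# Release OPTFLAGS with /Zi added; the /Fd path naming the .pdb was
# already present, the compiler just wasn't emitting debug records.
OPTFLAGS= $(CXX_ANALYZE_FLAGS) /nologo /MT /EHsc /Ox /Zi /D_CRT_SECURE_NO_DEPRECATE /D_CRT_NONSTDC_NO_DEPRECATE /DNDEBUG /Fd$(LIBDIR)\gdal$(VERSION).pdb /ITERATOR_DEBUG_LEVEL=0
```

Depending on your setup, a DLL build may also need /DEBUG on the linker flags for the final .pdb to be written - worth checking in the same nmake.opt if /Zi alone doesn't do it.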
[gdal-dev] translating palettes in geotiff - followup
In my previous email, I laid out a process that seems to work for me. Now I realize that while this works for a single input file, how do I generalize it to work on a set of tiles? At first I naively thought I could just replace the SourceFileName and re-run the gdal_translate -of GTiff step, but since the location is embedded in the vrt, every tile would end up in the same place. How do I generalize this process so I can reuse the edits to the .vrt? On 4/23/2013 3:20 PM, David Strip wrote: From Even's advice, I was able to piece together this workflow. Given an input geotiff image.gtif: gdal_translate -of VRT image.gtif image.vrt Then open image.vrt in a text editor and look for the color table by searching for the <ColorTable> tag (actually you probably don't need to search, it's near the top). Replace the contents of the color table with the palette entries you want on output. Then below the color table look for <SimpleSource> and replace it with <ComplexSource>. Look for </SimpleSource> and replace it with </ComplexSource>. Add a new <LUT> line above </ComplexSource> that looks like <LUT>input_color_index:output_color_index,input_color_index:output_color_index,...</LUT> For example, if you want to map input palette entry 1 to output entry 10, input 3 to output 5, ...: <LUT>1:10,3:5,...</LUT> You will need to map EVERY input color that occurs in your image to an output color or else gdal_translate will interpolate a color for you, and you don't want that. Now that you've edited the file, run gdal_translate -of GTiff image.vrt mapped_image.gtif And you're done. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
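One way to reuse the hand-edited .vrt across a set of tiles is to treat it as a template and rewrite only the SourceFileName element per tile; a sketch (pure string manipulation, no GDAL needed, and the template's element layout is a trimmed-down assumption - real VRTs carry more elements, which pass through untouched):

```python
import re

def vrt_for_tile(template_vrt, tile_path):
    """Produce a per-tile copy of a hand-edited .vrt by swapping the
    contents of each SourceFileName element for the tile's path. Assumes
    the sources are written as <SourceFileName ...>path</SourceFileName>."""
    return re.sub(
        r"(<SourceFileName[^>]*>)[^<]+(</SourceFileName>)",
        lambda m: m.group(1) + tile_path + m.group(2),
        template_vrt,
    )

# Trimmed-down, hypothetical template (the edited ColorTable, LUT, etc.
# would appear here too and pass through unchanged):
template = ('<VRTDataset rasterXSize="512" rasterYSize="512">'
            '<VRTRasterBand dataType="Byte" band="1"><ComplexSource>'
            '<SourceFileName relativeToVRT="0">tile_000.tif</SourceFileName>'
            '</ComplexSource></VRTRasterBand></VRTDataset>')
print("tile_042.tif" in vrt_for_tile(template, "tile_042.tif"))  # True
```

Write each result out as its own .vrt and run gdal_translate -of GTiff on it with a per-tile output name, so the color-table edits are reused while each tile lands in its own file. This assumes all tiles share the template's raster dimensions; otherwise the size attributes would need rewriting too.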
Re: [gdal-dev] translating palettes in geotiff
From Even's advice, I was able to piece together this workflow. Given an input geotiff image.gtif: gdal_translate -of VRT image.gtif image.vrt Then open image.vrt in a text editor and look for the color table by searching for the <ColorTable> tag (actually you probably don't need to search, it's near the top). Replace the contents of the color palette with the palette entries you want on output. Then below the color table look for <SimpleSource> and replace it with <ComplexSource>. Look for </SimpleSource> and replace with </ComplexSource>. Add a new line above </ComplexSource> that looks like <LUT>input_color_index:output_color_index,input_color_index:output_color_index,...</LUT>. For example, if you want to map input palette entry 1 to output entry 10, input 3 to output 5, and so on: <LUT>1:10,3:5,...</LUT> You will need to map EVERY input color that occurs in your image to an output color or else gdal_translate will interpolate a color for you, and you don't want that. Now that you've edited the file, run gdal_translate -of gtiff image.vrt mapped_image.gtif And you're done. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
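Putting the steps above together, the edited band of the .vrt ends up looking roughly like this. This is only a sketch following the GDAL VRT schema; the palette entries, index mappings, and file name are made up for illustration:

```xml
<VRTRasterBand dataType="Byte" band="1">
  <ColorInterp>Palette</ColorInterp>
  <ColorTable>
    <!-- replace these entries with the palette you want on output -->
    <Entry c1="0" c2="0" c3="0" c4="255"/>
    <Entry c1="255" c2="0" c3="0" c4="255"/>
    <!-- ... one Entry per output palette index ... -->
  </ColorTable>
  <!-- was <SimpleSource> before editing -->
  <ComplexSource>
    <SourceFilename relativeToVRT="1">image.gtif</SourceFilename>
    <SourceBand>1</SourceBand>
    <!-- input_index:output_index pairs; list every index that occurs -->
    <LUT>1:10,3:5</LUT>
  </ComplexSource>
</VRTRasterBand>
```

Because the input file name lives only in the SourceFilename element, a per-tile run just needs that one element changed.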
Re: [gdal-dev] translating palettes in geotiff
On 4/23/2013 2:12 PM, Jukka Rahkonen wrote: The rgb2pct.py utility http://www.gdal.org/rgb2pct.html is doing a kind of similar thing with the -pct option. -Jukka Rahkonen- I looked at the reference, but don't see how this would work. It seems like I would have to first translate the geotiff to an RGB, but even then I don't see how the -pct option would let me do an arbitrary mapping of colors. Wouldn't the underlying algorithm look for a best match against the palette? ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] translating palettes in geotiff
On 4/23/2013 2:11 PM, Even Rouault wrote: Le mardi 23 avril 2013 21:52:39, David Strip a écrit : I have a paletted geotiff that I want to modify by mapping palette entries to a new palette. Thus, I have an input palette, an output palette, and a geotiff. I have a mapping (many to 1, if it matters) from the input palette to the output palette. Each pixel in the output geotiff will have a value equivalent to mapping the input geotiff's pixel value through the map. I've looked over the utilities and none seems to have this capability. It would be pretty easy, I think, to write code using gdal to do this, but if one of the utilities will handle this case, that's even better. One possibility would be to use a VRT file where you specify a <LUT> element to map the indices from the input palette to the indices of the output palette. See http://www.gdal.org/gdal_vrttut.html for the syntax. I was wondering whether a vrt would do the trick, but I'm not that familiar with them. If I'm reading this right, the path is something like: translate from geotiff to vrt, add the LUT, translate from vrt back to geotiff. If I'm going to be doing this a lot, with input files of the same size, all I need to do is create the .vrt as above, then edit the SourceFilename tag to point to the appropriate input file. Is that right? Many thanks ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
[gdal-dev] translating palettes in geotiff
I have a paletted geotiff that I want to modify by mapping palette entries to a new palette. Thus, I have an input palette, an output palette, and a geotiff. I have a mapping (many to 1, if it matters) from the input palette to the output palette. Each pixel in the output geotiff will have a value equivalent to mapping the input geotiff's pixel value through the map. I've looked over the utilities and none seems to have this capability. It would be pretty easy, I think, to write code using gdal to do this, but if one of the utilities will handle this case, that's even better. Thanks. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] dear god!!
the bottom of the message you sent (just like every message) has a link to the mailing list manager - http://lists.osgeo.org/mailman/listinfo/gdal-dev Follow that link and you'll find unsubscribe instructions at the bottom. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] Multiple pens in OGR feature style
On 12/10/2012 7:29 AM, Even Rouault wrote: Looking at addstylestring2kml() in ogr/ogrsf_frmts/libkml/ogrlibkmlstyle.cpp, I can see that only one PEN instance will be taken into account (looking at the code, I would have said that it would be the last occurrence...). And it seems that it is a limitation of LIBKML itself, since the Style class seems to accept only one line style. If I'm reading the docs right, KML doesn't support multi-pen lines. Google Earth extensions to KML, however, do allow a two-color line, so it makes sense that LIBKML would not allow this. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] gdalwarp produces all black output
gdalinfo -mm shows your input data file has all its values in the range 518 to 2396. When converted to a 16-bit tif, these all fall in a range that appears black on your screen. You need to rescale the data to fill the 16-bit range. This is easily done with gdal_translate, which is probably what you want to use anyway, since you're not changing the projection, only the data format. gdal_translate -ot Int16 -scale 518 2396 0 32767 n29.dt1 mosaic.tif ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
[gdal-dev] Deleting GDALColorTable *
If I do something like this: GDALColorTable * ct = raster_band->GetColorTable()->Clone(); ... bunch of stuff delete ct; Will the destructor be called on the color table pointed to by ct, or do I have to call GDALDestroyColorTable((GDALColorTableH) ct)? ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] gdalwarp question - probably pretty simple
On 11/22/2012 10:40 PM, David Strip wrote: I've got a geotiff which gdalinfo reports as 2 bands, with band 2 interpreted as alpha. The projection is Maryland State Plane. The color table is paletted, with NO_DATA = 0 I can view this file in OpenEV with no problem I call gdalwarp -t_srs "WGS84" input.tif output.tif When I open the resulting file in OpenEV, I see nothing, the screen appears to remain black. When I move the cursor around, it reports the value as NO DATA. I thought this might have something to do with the alpha band, so I added -dstalpha. This produces ERROR 6: SetColorTable() not supported for multi-sample images It still completes and produces a file, but gdalinfo doesn't report band 2 as alpha, and I still get a blank view in OpenEV. I suspect I'm missing something quite basic, but I'm not seeing any obvious command option. I can make this work if I first strip out the alpha band gdal_translate -b 1 input.tif output.tif But I'm still interested in hearing how I could make gdalwarp work directly. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
[gdal-dev] gdalwarp question - probably pretty simple
I've got a geotiff which gdalinfo reports as 2 bands, with band 2 interpreted as alpha. The projection is Maryland State Plane. The color table is paletted, with NO_DATA = 0 I can view this file in OpenEV with no problem I call gdalwarp -t_srs "WGS84" input.tif output.tif When I open the resulting file in OpenEV, I see nothing, the screen appears to remain black. When I move the cursor around, it reports the value as NO DATA. I thought this might have something to do with the alpha band, so I added -dstalpha. This produces ERROR 6: SetColorTable() not supported for multi-sample images It still completes and produces a file, but gdalinfo doesn't report band 2 as alpha, and I still get a blank view in OpenEV. I suspect I'm missing something quite basic, but I'm not seeing any obvious command option. Thanks ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] The GeoTransform Array
On 11/21/2012 5:32 AM, Mateusz Loskot wrote: On 21 November 2012 12:06, Even Rouault wrote: Selon David Strip : The GDAL API tutorial describes this array as: adfGeoTransform[0] /* top left x */ adfGeoTransform[1] /* w-e pixel resolution */ adfGeoTransform[2] /* rotation, 0 if image is "north up" */ adfGeoTransform[3] /* top left y */ adfGeoTransform[4] /* rotation, 0 if image is "north up" */ adfGeoTransform[5] /* n-s pixel resolution */ The GDAL Data model page says Xgeo = GT(0) + Xpixel*GT(1) + Yline*GT(2) Ygeo = GT(3) + Xpixel*GT(4) + Yline*GT(5) where the GT[i] are the coeffs described above. From this I conclude that the rotations are not sin/cos of the rotation, but rather the sin/cos times the appropriate pixel size. Is that right, or did I miss something? Yes, for a pure rotation. If the [1], [2], [4] and [5] have no particular relation, the matrix can represent a combination of scaling, rotation and shearing. See http://en.wikipedia.org/wiki/Transformation_matrix Isaac's "Improving the Documentation of Get/SetGeoTransform" post is worth checking too: http://lists.osgeo.org/pipermail/gdal-dev/2011-July/029449.html Best regards, Isaac's post is quite informative. However, combined with the Wikipedia entry and Even's comment, the post is incomplete as it does not represent the shear coefficients, but that's pretty straightforward to understand. I think the simplest summary is that the coeffs 1, 2, 4, and 5 are the terms of the matrix that we get from a concatenation of the rotation, shear, and scaling matrices. But that raises a new question about pixel resolution. If I read this carefully, what I conclude is that [1] is the pixel resolution of a transformed pixel in true E/W space. It is not the resolution in the x-direction in the original raster. Is that correct? If I want the resolution in the original raster, I have to solve for the underlying scale factors, resolutions, shear, and rotation angle. That's six unknowns and four equations.
___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
[gdal-dev] The GeoTransform Array
The GDAL API tutorial describes this array as: adfGeoTransform[0] /* top left x */ adfGeoTransform[1] /* w-e pixel resolution */ adfGeoTransform[2] /* rotation, 0 if image is "north up" */ adfGeoTransform[3] /* top left y */ adfGeoTransform[4] /* rotation, 0 if image is "north up" */ adfGeoTransform[5] /* n-s pixel resolution */ The GDAL Data model page says Xgeo = GT(0) + Xpixel*GT(1) + Yline*GT(2) Ygeo = GT(3) + Xpixel*GT(4) + Yline*GT(5) where the GT[i] are the coeffs described above. From this I conclude that the rotations are not sin/cos of the rotation, but rather the sin/cos times the appropriate pixel size. Is that right, or did I miss something? ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] Gdal - read starting left-bottom
On 11/15/2012 12:20 AM, netcadturgay wrote: my program reads a tif file starting at the left-top point. But I want to read starting at the left-bottom (LowerLeft). How can I solve this problem? Example: band.ReadRaster(0, 0, 500, 500, pixels, 500, 500, 0, 0); You will have to read one line at a time into the i_th row of the array. This looks like the Java interface. I'm not really familiar with java, but taking a stab: int[][] pixels = new int[500][500]; for (int i = 0; i < 500; i++) band.ReadRaster(0, i, 500, 1, pixels[499 - i]); Of course, this begs the question of why you want to do this in the first place? ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] Suggestion for API addition
On 11/11/2012 12:56 PM, Even Rouault wrote: GDALRasterBand::WriteCropToBand(int nCropSizeX, // width of the cropped region int nCropSizeY, // height of the cropped region int nXOffset, // offset to the left edge of the cropped region int nYOffset, // offset to the top of the cropped region void * pBuf, // pointer to the buffer (uncropped) GDALDataType eBufType, // data type in the buffer int nPixelSpace, // byte offset from one pixel to the next int nLineSpace) // byte offset from one line to the next { void * pData = (GByte *) pBuf + nYOffset * nLineSpace + nXOffset * nPixelSpace; this->RasterIO(GF_Write, 0, 0, this->nRasterXSize, this->nRasterYSize, pData, nCropSizeX, nCropSizeY, eBufType, nPixelSpace, nLineSpace); } The code is obviously not very complicated. The value is that when you're reading code you can tell immediately that what's happening is the buffer is being cropped and written. If written directly in terms of RasterIO the intent is buried in the computation of the parameters, making it less clear. Your implementation is more about rescaling than cropping ( http://en.wikipedia.org/wiki/Cropping_%28image%29 ). For me cropping implies at least working with a subwindow of the original image. Sorry, but I still think this is going to confuse people more than help them. Now it's my turn to be confused. This is a pixel-by-pixel copy of a subset of the input pixels. It is a subwindow of the original image, exactly as you say. I'm taking an m x n input image and copying p x q pixels (p <= m, q <= n) into p x q pixels in the band. The scale is unchanged. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] Suggestion for API addition
On 11/11/2012 9:25 AM, Even Rouault wrote: The value of a new function is code clarity. It's not just computing pData, it's that the nBufXSize and nBufYSize become the size of the cropped region, not the size of the actual buffer, I don't understand what you mean, sorry. Perhaps you should show the code. I understand that better than English ;-) No problem - your English is a whole lot better than my French :-) GDALRasterBand::WriteCropToBand(int nCropSizeX, // width of the cropped region int nCropSizeY, // height of the cropped region int nXOffset, // offset to the left edge of the cropped region int nYOffset, // offset to the top of the cropped region void * pBuf, // pointer to the buffer (uncropped) GDALDataType eBufType, // data type in the buffer int nPixelSpace, // byte offset from one pixel to the next int nLineSpace) // byte offset from one line to the next { void * pData = (GByte *) pBuf + nYOffset * nLineSpace + nXOffset * nPixelSpace; this->RasterIO(GF_Write, 0, 0, this->nRasterXSize, this->nRasterYSize, pData, nCropSizeX, nCropSizeY, eBufType, nPixelSpace, nLineSpace); } The code is obviously not very complicated. The value is that when you're reading code you can tell immediately that what's happening is the buffer is being cropped and written. If written directly in terms of RasterIO the intent is buried in the computation of the parameters, making it less clear. Not sure to understand that. The presence of nBufXSize, nBufYSize in your proposal allows subsampling/oversampling. Perhaps you meant something else with decimation/replication ? You're right. nPixelSpace and nLineSpace now become required, no default values will work. Not understanding that either. Sorry, not very clear. I agree that nBufXSize, nBufYSize are no longer necessary - they are read from the class variables. We could leave these vars in and continue to support decimation/replication, I suppose.
My comment about nPixelSpace, nLineSpace is that these will have to be supplied, because we count on them to step through the buffer. They can no longer have default values. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] copying a GDALColorTable
On 11/11/2012 11:59 AM, Mateusz Loskot wrote: IMO, for GDAL 2.0 the API could be improved and some symmetry introduced. For example, for GDALColorTable::Clone() two static methods could be added: static GDALColorTable* GDALColorTable::Create(GDALPaletteInterp=GPI_RGB); static void GDALColorTable::Destroy(GDALColorTable*); If we're building a list of suggestions, a copy constructor would allow more C++-like coding: GDALColorTable::GDALColorTable (const GDALColorTable &) and perhaps an assignment operator GDALColorTable::operator = (const GDALColorTable &) ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] copying a GDALColorTable
On 11/11/2012 11:22 AM, Even Rouault wrote: Le dimanche 11 novembre 2012 19:16:30, David Strip a écrit : If I'm reading the docs right, there is no copy constructor or assignment operator for a GDALColorTable. If I use GDALColorTable::Clone(), how do I release the copy when I'm done? Do I use CPLFree or can I use delete? delete should work, but the preferred way would be GDALDestroyColorTable() to avoid potential cross-heap issues on Windows in case your code and GDAL aren't built with the same compiler / compiler options. Thanks for a really prompt response. Perhaps the c++ class docs for Clone() could be updated to reflect this. And if I'm correct the call would be GDALDestroyColorTable((GDALColorTableH) pColorTable), casting the c++ pointer to a handle? ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
[gdal-dev] copying a GDALColorTable
If I'm reading the docs right, there is no copy constructor or assignment operator for a GDALColorTable. If I use GDALColorTable::Clone(), how do I release the copy when I'm done? Do I use CPLFree or can I use delete? Thanks. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] Suggestion for API addition
On 11/11/2012 3:53 AM, Even Rouault wrote: Le dimanche 11 novembre 2012 03:45:57, David Strip a écrit : The current GDALRasterBand::RasterIO signature makes it easy to read or write a contiguous subregion of the band. However, if you want to read or write the entire band into/from a contiguous sub-region of the buffer, it's not nearly as straightforward. You can do this by using the nLineSpace parameter, giving the size of the subregion for the buffer size parameters, and offsetting the address of the buffer. It works, but that makes the code a bit baroque and hard to read. How about a signature that indicates the entire band will be read, but moves the meaning of the offset to the buffer. Something like CPLErr GDALRasterBand::RasterIO ( GDALRWFlag eRWFlag, int nXSize, int nYSize, void * pData, int nBufXOff, int nBufYOff, int nBufXSize, int nBufYSize, GDALDataType eBufType, int nPixelSpace, int nLineSpace ) Obviously I can write a wrapper for this functionality that just calls RasterIO, setting the appropriate values for buffer size, etc, but I was thinking I might not be the only one who wants to crop a buffer while writing to the band, or reading a band into a sub-region of a buffer. This is just a personal opinion, but I'm not convinced this is a good idea. I think people will be more confused than currently. The meaning of the parameters of the existing RasterIO() requires some time to be comfortable with, but once you've finished the learning curve, it is pretty logical. Adding another method with the same name, almost the same parameters, but subtle differences in behaviour is not going to make users' lives easier. You're right that the actual signature and function name I provided would not be the right way to go. The substance of my question was whether there would be value to a function which crops a buffer while writing to the band. Rather than overloading RasterIO, the name could be writeCroppedBand or something.
Why not add an example code snippet of your use case in the documentation of the current API ? (if I've understood well, your proposal is just a convenient way of computing the right value for pData, likely pData = pBufferBase + nBufYOff * nLineSpace + nBufXOff * nPixelSpace) The value of a new function is code clarity. It's not just computing pData, it's that the nBufXSize and nBufYSize become the size of the cropped region, not the size of the actual buffer, and offsets are not visible in the existing signature (there is no nBufXOff, nBufYOff), they're buried in the computation of pData, which of course no longer points to the actual buffer. When the existing function is used for this purpose it becomes very difficult to see what is actually happening. Of course one can say that's what comments are for. Furthermore, if you want to operate on the whole band, you should also remove the nXSize and nYSize parameters that are useless. Yes, those should have been removed This cropping interpretation doesn't allow for decimation/replication. It requires the buffer size be large enough to copy the entire band into the sub-region of the buffer. Allowing for decimation/replication would require yet more parameters. Not sure to understand that. The presence of nBufXSize, nBufYSize in your proposal allows subsampling/oversampling. Perhaps you meant something else with decimation/replication ? You're right. nPixelSpace and nLineSpace now become required, no default values will work. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
[gdal-dev] Suggestion for API addition
The current GDALRasterBand::RasterIO signature makes it easy to read or write a contiguous subregion of the band. However, if you want to read or write the entire band into/from a contiguous sub-region of the buffer, it's not nearly as straightforward. You can do this by using the nLineSpace parameter, giving the size of the subregion for buffer size parameters, and offsetting the address of the buffer. It works, but that makes the code a bit baroque and hard to read. How about a signature that indicates the entire band will be read, but moves the meaning of the offset to the buffer. Something like CPLErr GDALRasterBand::RasterIO ( GDALRWFlag eRWFlag, int nXSize, int nYSize, void * pData, int nBufXOff, int nBufYOff, int nBufXSize, int nBufYSize, GDALDataType eBufType, int nPixelSpace, int nLineSpace ) Obviously I can write a wrapper for this functionality that just calls RasterIO, setting the appropriate values for buffer size, etc, but I was thinking I might not be the only one who wants to crop a buffer while writing to the band, or reading a band into a sub-region of a buffer. This cropping interpretation doesn't allow for decimation/replication. It requires the buffer size be large enough to copy the entire band into the sub-region of the buffer. Allowing for decimation/replication would require yet more parameters. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
[gdal-dev] another multi-band question
In a followup to my previous question about property differences across bands in a multiband images - The No Data Value and color table are both per-band properties. Do any file types (other than VRTs) support bands with differences in these properties? The file type I'm most familiar with is geotiff, which has these properties at the file (dataset) level. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] Images/datasets with bands with different properties
Thanks to Even and Jukka for their prompt and clear responses to my question David ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
[gdal-dev] Images/datasets with bands with different properties
I was looking at the API and it appears that, at least in principle, a multi-band dataset (and hence the associated input file) could have bands with different data types, since the type is a property of the band, not the dataset. Does this actually happen "in the wild"? If so, what file types support this scenario? Likewise, the bands apparently can have different size rasters (in terms of pixel counts), since the raster size is a property of the band. On the other hand, the dataset also has a raster size, as well as the overall properties of the raster, such as the geo-location of the origin and the size of the pixels. If the band doesn't match the dataset properties, what do we know about origin, pixel size, etc? If these situations don't exist in real datasets, then I'm not particularly concerned. In fact, I'd be happiest if someone could tell me to just ignore these issues because they don't happen in real life. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] GeoPDF (TM) vs. GeoSpatial PDF
On 6/3/2012 10:59 AM, George Demmy wrote: David Strip wrote: > When used in conjunction with the (free) TerraGo toolbar, the geoPDF > provides many more capabilities than a geo-spatial pdf, especially if the > pdf "modify" permission is set. As of version 6, which shipped recently, the TerraGo Toolbar offers continuous display of coordinates and other functionality for any geospatial PDF as long as it can grok the encoding. It supports OGC and ISO and uses proj4 via GDAL under the hood for projections. Like Reader, Toolbar has some more advanced functionality that is accessible if certain permission bits are set. While V6 of the toolbar (and in fact previous versions) provide continuous display of coordinates, a "true geoPDF" (which I believe means having the LGIDict) has far more coordinate conversion capability, as well as the ability to add "geo-marks", geo-located stamps (icons) added to the file. I verified this by downloading the latest version of the TerraGo Toolbar (15.0.0.591). I opened a geo-spatial pdf that I created with ArcMap 10.0.2. I can get the position display in Lat/Lon and MGRS at the bottom of the image, and in the Geolocator tool (from the toolbar), I can also see UTM13N, WGS84, USNG, and MGRS. I can't use the GeoMark tools at all. I then open a geoPDF (a new generation USGS topo map). In this case I can set the GeoLocator projection to display in any of a hundred or more predefined projections, or define my own projection parameters. I can also mark the map with GeoMarks. The ability to use GeoMarks is not a function of the pdf "modify" bit being set. It is set in the ArcMap-generated map, to which I can add non-geospatial marks with other pdf tools. I believe that the presence of the LGIDict is what enables these other features, but I could be wrong. That's something I've been trying to find out. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
[gdal-dev] GeoPDF (TM) vs. GeoSpatial PDF
Jukka's posting earlier today has made me aware that GDAL will now write a geospatial PDF. Is it also capable of writing a geoPDF (which is a trademark of TerraGo, but there is an OGC standard, so possibly it's legal to create them)? When used in conjunction with the (free) TerraGo toolbar, the geoPDF provides many more capabilities than a geo-spatial pdf, especially if the pdf "modify" permission is set. From the looks of it, the main difference between a geospatial PDF and a geoPDF is that the latter contains a dictionary object LGIdict which contains the projection, the coordinate transform to the page, and that sort of thing. In fact, it will support multiple data frames on the same page. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev
Re: [gdal-dev] NODATA value in rasters
On 4/13/2012 1:50 PM, Carl Godkin wrote: I think a lot of formats support the concept of NODATA. The GDAL API has functions for checking for this in the GDALRasterBand class, for instance, as double GDALRasterBand::GetNoDataValue() in the C++ or in the C API as GDALGetRasterNoDataValue(). I've found that this sometimes works, but not always. What I do is to read the raster data and change everything outside the minimum to maximum range (returned by two other API functions) to my own software's NODATA value. Hope that helps, carl Yes, that helps a lot. I'm not sure how I missed GDALRasterBand::GetNoDataValue() in my searches, but at least now I know how to address the problem I was dealing with. Thanks. ___ gdal-dev mailing list gdal-dev@lists.osgeo.org http://lists.osgeo.org/mailman/listinfo/gdal-dev