Hi Jukka,

One workaround is to use -skipfailures and compare afterwards which IDs in
the source data are not present in the target table. -skipfailures forces
the transaction size down to one row, so it can be painfully slow if the
output is GeoPackage, because initializing/committing a transaction in
SQLite is slow. With PostGIS as the output it is not as slow. A dry run
into some fast, non-transactional format like GeoJSON could be used as a
rough test, but that does not exercise the database constraints.
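For example, a run like this (the layer, column, and connection names are
just placeholders) loads with -skipfailures and then diffs the IDs on both
sides with ogrinfo:

    # placeholder names throughout; -skipfailures keeps going row by row
    ogr2ogr -f PostgreSQL PG:"dbname=target" source.gpkg roads -skipfailures

    # dump the id column from source and target and diff the two lists
    ogrinfo -q -sql "SELECT id FROM roads ORDER BY id" source.gpkg \
        | grep "id (" > src_ids.txt
    ogrinfo -q -sql "SELECT id FROM roads ORDER BY id" PG:"dbname=target" \
        | grep "id (" > dst_ids.txt
    diff src_ids.txt dst_ids.txt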

If you know your database and the constraints that you have, you should be
able to build quite good tests by running SQL against the source data. But
having a flexibly typed SQLite database as the source can make it harder to
write those SQL tests. That may even be the cause of your problems (a field
reserved for integers contains text, etc.).
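As an example of such a test (the table and column names are made up),
SQLite's typeof() can catch text hiding in an integer column before the
conversion is even attempted:

    -- hypothetical names: list rows where an "integer" column holds
    -- something other than an integer or NULL
    SELECT rowid, population, typeof(population)
    FROM places
    WHERE typeof(population) NOT IN ('integer', 'null');

Run it with the sqlite3 shell against the source database; a non-empty
result points at the rows that will fail the target's type constraints.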

I wonder why the database engines even continue after finding the first
error in a transaction. The whole transaction must be rolled back in any
case, so why not quit sooner rather than later?

Thanks for the input. It seems that -skipfailures is the parameter I can use to do what I need to.

My idea is to use a "staging" database on PG where I can run the single transaction with -skipfailures and then, if there are no errors, run a similar job with the same input data against a "production" database.
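Something along these lines, as a rough sketch (the database names are
placeholders, and treating any stderr output as a failure is an assumption
on my side):

    # load into staging first; collect per-row errors from stderr
    ogr2ogr -f PostgreSQL PG:"dbname=staging" input.sqlite -skipfailures 2> errors.log
    # only load production if the staging run reported nothing
    if [ ! -s errors.log ]; then
        ogr2ogr -f PostgreSQL PG:"dbname=production" input.sqlite
    fi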

Maybe not the cleanest solution, but still a good one for catching all the errors across all the tables.

Matteo