Re: [postgis-users] Longitude and latitude ranges

2011-08-25 Thread Ture Pålsson
2011/8/23 Jaime Casanova :

> db=# update transmitter_mv set punto = st_makepoint(tx_long, tx_lat);
> ERROR:  Coordinate values are out of range [-180 -90, 180 90] for GEOGRAPHY 
> type
> """
>
> any ideas why this is happening?

Perhaps this is a too-obvious question, but have you made sure that
you don't have some "bad" point, with latitude and/or longitude out of
range, that has somehow sneaked into your data set?
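
For example, a quick sanity check along these lines (a sketch using the table
and column names from your earlier query) would flag any offenders:

SELECT tx_long, tx_lat
FROM transmitter_mv
WHERE tx_long NOT BETWEEN -180 AND 180
   OR tx_lat  NOT BETWEEN -90  AND 90
   OR tx_long IS NULL
   OR tx_lat  IS NULL;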

  -- T
___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


Re: [postgis-users] "Erase" and "Intersect" performance questions - PostGIS way slower than ArcGIS

2011-08-25 Thread Martin Davis
For your query #1, it looks like you are computing the ST_Intersection 
twice.  Does Postgres optimize this away?  If not, you might want to use 
a subquery to avoid this expensive second computation.
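
Something along these lines, as an untested sketch built from the query in
the original post:

CREATE TABLE public.fresno_parcels_lt_intersect as
SELECT wkb_geometry,
       id_parcel,
       st_area(wkb_geometry) * 0.000247105381 as acres_lt_parcel,
       Landtype
FROM (
    -- compute the intersection once, in a subquery
    SELECT ST_Intersection(p.wkb_geometry, lt.wkb_geometry) as wkb_geometry,
           id_parcel,
           Landtype
    FROM fresno_parcels_unique_id as p, ca_landtypes_010211 as lt
    WHERE ST_Intersects(p.wkb_geometry, lt.wkb_geometry)
) as sub;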


I also agree with Chris, that query #2 is probably not doing what you 
want it to.  What you need to do is for each parcel, subtract the union 
of the water features covered by it.  This is still likely to be slow, 
however.
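
In SQL that might look roughly like this (an untested sketch; I'm assuming
the parcel table has an id_parcel key, and note that parcels touching no
water are omitted here and would need to be added back with a UNION):

CREATE TABLE fresno_parcels_minus_ca_water as
SELECT p.id_parcel,
       -- subtract the union of all water features touching this parcel
       ST_Difference(p.wkb_geometry, ST_Union(w.wkb_geometry)) as wkb_geometry
FROM fresno_parcels_clean as p
JOIN ca_water_final_082211 as w
  ON ST_Intersects(p.wkb_geometry, w.wkb_geometry)
GROUP BY p.id_parcel, p.wkb_geometry;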


A general comment is that ArcGIS is using a very different approach to 
compute erase and intersect (aka overlay).  It evaluates the entire set 
of geometries together, rather than piece-wise like the SQL query is 
doing.  This generally results in much better performance for large 
datasets, since there is less I/O and more efficient algorithms available.


By its nature, using SQL for spatial computation is most efficient for 
operations which can be carried out in a feature-wise manner.  
Unfortunately, overlay does not fall into this category (since there is 
a large amount of interaction between features).


Implementing a more efficient overlay algorithm in PostGIS is a nice 
challenge for the future...


On 8/24/2011 8:18 PM, Sheara Cohen wrote:


Hi all --

I have what is likely to sound like the newbie question it is. I am in 
the process of shifting some of our modeling workload from ArcGIS to 
PostGIS. While PostGIS seems much faster for most non-spatial 
operations, I'm finding the exact opposite for spatial operations like 
"erase," "intersect," etc. And I'm sure there is some basic thing I 
just don't know about how to write these scripts to get fast performance.


Below are details for two issues I have run into.

1."Intersect":  In ArcGIS, I used the intersect tool to return a 
polygon file from the intersection of two different polygon files. 
They were large input files -- one the size of the state of California 
and one the size of a county in California, both with between 200-300 
thousand records. In ArcGIS, this took 43 minutes. In PostGIS, I used 
the script below, and it took over 17 hours.


CREATE TABLE public.fresno_parcels_lt_intersect as
SELECT
    ST_Intersection(p.wkb_geometry, lt.wkb_geometry) as wkb_geometry,
    id_parcel,
    (st_area(ST_Intersection(p.wkb_geometry, lt.wkb_geometry))) * 
        0.000247105381 as acres_lt_parcel,
    Landtype
FROM fresno_parcels_unique_id as p, ca_landtypes_010211 as lt
WHERE ST_Intersects(p.wkb_geometry, lt.wkb_geometry);

2."Erase":  In ArcGIS, I used the erase tool to remove water features 
(polygons) from a county parcel file. Both files were large. The water 
features covered the state of California with 100K records, and the 
parcel file had almost 300K records. In ArcGIS, this took 17 minutes. 
In PostGIS, I had to cancel the run after 16 hours. I used the script 
below.


CREATE TABLE fresno_parcels_minus_ca_water as
SELECT ST_GeomFromWKB (ST_Difference (wkb_geometry (ca_water_final_082211), 
       wkb_geometry (fresno_parcels_clean)))
FROM ca_water_final_082211, fresno_parcels_clean;

I added a spatial index to all of the input files in PostGIS (CREATE 
INDEX <index_name> ON <table> USING gist (wkb_geometry)). Do any of you all 
have suggestions as to how to make these sorts of operations run more quickly?


___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


Re: [postgis-users] OT Understanding slow queries

2011-08-25 Thread Stephen Woodbridge

On 8/25/2011 1:21 PM, Ben Madin wrote:

Steve,

does this just apply to count(*), or is count(id) just as bad? I was
originally a MySQL user and count(*) could be very efficient there.



The best way to evaluate this is with EXPLAIN. But yes, both require the 
same full table scan.


MySQL maintains the row count as one of its table stats, so it just pulls 
that from the stats metadata. If you just want an approximate row count, 
PostgreSQL keeps estimates based on the last analyze. If you use pgadmin3 
you can see it in the table properties:


Rows (estimated)    1957620 
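
The same estimate is available in SQL from the system catalog, e.g. (the
table name here is a placeholder):

SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'your_table';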

This is one of the largest hurdles for MySQL users moving over to 
postgresql because it is so obvious and in your face. But the things I 
can do today with postgresql and with postgis, I could never do in MySQL.


-Steve

[snip history]
___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


Re: [postgis-users] Raster Points in Polygon (was Re: PostGIS Raster for Met Data)

2011-08-25 Thread Bborie Park

Hey Michael,

All the code I'm currently writing is for inclusion in the PostGIS 
Raster project.


Specialized C programs running against flat files will almost always be 
faster than PostGIS Raster.  But, if PostGIS Raster can return an answer 
in a reasonable time frame (<= 10 seconds ideally), my users will be happy.


Currently, my testing of a polygon filter against a daily coverage 
limited by a half month takes ~30 seconds.  But, this is because of the 
inefficiencies in ST_Intersection.


If you find any glaring performance inefficiencies, please do let us 
know so that we can see if some work could be done to speed things up.


-bborie

On 08/25/2011 04:46 PM, Michael Akinde wrote:

Hi,

Sounds interesting. Is your code online anywhere?

All of our source code is GPL and available at https://github.com/wdb. WDB is the 
backend database server for yr.no, Norway's largest weather site with ~3M visitors 
a week. It's heavily optimized for point retrieval from grids - which it does very 
effectively (an average yr.no weather page retrieves approximately 3000 points 
in <0.2s). We also recently implemented a NetCDF-Java interface to the database, 
to facilitate easy link up with WMS services (at least in theory).

Our plan now is to extend and optimize the functionality for polygon retrieval 
and point data, since the system is getting leveraged for a variety of 
different types of projects.

I don't think we'll be moving our data into PostGIS rasters (our experiments so 
far haven't shown any encouraging results performance-wise - perhaps not 
surprising, given that our current algorithms are essentially C code working on 
flat-files); but I hope that we can leverage the algorithms discussed in the 
thread to do the geo-spatial calculations for the more nasty queries using 
PostGIS. Pierre was kind enough to demonstrate that it can be done fairly 
easily - now I just have to figure out how to do it efficiently enough that our 
users will be happy.

st_intersects(raster, raster) sounds very interesting, but what would be the 
result from the function?

Regards,

Michael A.



--
Bborie Park
Programmer
Center for Vectorborne Diseases
UC Davis
530-752-8380
bkp...@ucdavis.edu
___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


Re: [postgis-users] Raster Points in Polygon (was Re: PostGIS Raster for Met Data)

2011-08-25 Thread Michael Akinde
Hi,

Sounds interesting. Is your code online anywhere?

All of our source code is GPL and available at https://github.com/wdb. WDB is 
the backend database server for yr.no, Norway's largest weather site with ~3M 
visitors a week. It's heavily optimized for point retrieval from grids - which 
it does very effectively (an average yr.no weather page retrieves approximately 
3000 points in <0.2s). We also recently implemented a NetCDF-Java interface to 
the database, to facilitate easy link up with WMS services (at least in theory).

Our plan now is to extend and optimize the functionality for polygon retrieval 
and point data, since the system is getting leveraged for a variety of 
different types of projects.

I don't think we'll be moving our data into PostGIS rasters (our experiments so 
far haven't shown any encouraging results performance-wise - perhaps not 
surprising, given that our current algorithms are essentially C code working on 
flat-files); but I hope that we can leverage the algorithms discussed in the 
thread to do the geo-spatial calculations for the more nasty queries using 
PostGIS. Pierre was kind enough to demonstrate that it can be done fairly 
easily - now I just have to figure out how to do it efficiently enough that our 
users will be happy.

st_intersects(raster, raster) sounds very interesting, but what would be the 
result from the function?

Regards,

Michael A.

- Original Message -
> Hey Michael,
> 
> I do something similar with meteorological/climate (temperature,
> precipitation, ndvi, gpp) datasets but the raw data is stored in
> rasters
> (one image per day per variable) for 50+ years. In the current system,
> the rasters are stored as massive tables with each row containing the
> observation date, grid cell value and grid cell coordinates. I'm in
> the
> process of testing the performance and storage requirements of PostGIS
> Raster with a subset of the rasters for a variety of tile sizes. I
> expect that I'll be moving over to using PostGIS Raster by the end of
> this year.
> 
> I plan on writing a two raster ST_Intersects(raster, raster) function
> as
> PostGIS Raster currently only has ST_Intersects(raster, geometry). The
> additional hope is that I can get the two raster ST_MapAlgebra(raster,
> raster) done by the time PostGIS 2.0 is branched.
> 
> -bborie
> 
> --
> Bborie Park
> Programmer
> Center for Vectorborne Diseases
> UC Davis
> 530-752-8380
> bkp...@ucdavis.edu
> ___
> postgis-users mailing list
> postgis-users@postgis.refractions.net
> http://postgis.refractions.net/mailman/listinfo/postgis-users
___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


Re: [postgis-users] "Erase" and "Intersect" performance questions - PostGIS way slower than ArcGIS

2011-08-25 Thread Chris Hermansen
No problem.

With respect to st_difference(), one of the recommended alternatives is to
st_union() all your polygons, then re-attribute them, then delete the ones
you don't want.  It's my impression that st_union() is pretty fast these
days so this might be a good approach for you.  I have also seen it
suggested that one st_union() all the polygon linework, then
re-polygonize.  This gets to be a fair bit of SQL, especially if the polygons
have interior rings.
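
As a rough sketch of that linework approach (the table name is made up, and
the original attributes still have to be re-joined afterwards):

SELECT (ST_Dump(ST_Polygonize(geom))).geom AS geom
FROM (
    -- node all the polygon boundaries together
    SELECT ST_Union(ST_Boundary(wkb_geometry)) AS geom
    FROM my_polygons
) AS noded;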

With respect to your performance issue, your spatial indexes only really
help in the case of relatively compact polygons.  If your land use polygons
include thousand-kilometer-long rights of way or double-line rivers, those
will drastically reduce the performance of your indexing because each of
them will && with many hundreds or thousands of other polygons and therefore
the actual intersection between them must be computed.  Offhand I don't know
what ArcGIS does to be faster at this.  Maybe someone else on this list has
some ideas.

If you can tile your big, ugly polygons, you will certainly improve your
performance dramatically.
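
One way to dice them, as an untested sketch (the table name, grid extent,
cell size and SRID are all placeholders):

SELECT ST_Intersection(b.wkb_geometry, g.cell) AS wkb_geometry
FROM big_ugly b,
     -- build a regular grid of 10 km cells
     (SELECT ST_MakeEnvelope(x, y, x + 10000, y + 10000, 900913) AS cell
      FROM generate_series(0, 990000, 10000) AS x,
           generate_series(0, 990000, 10000) AS y) AS g
WHERE b.wkb_geometry && g.cell;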

2011/8/25 Sheara Cohen 

> Hey Chris – Thanks for your response.
>
>
>
> With regard to ST_Difference, what I was trying to accomplish is a
> “deletion” or  “clipping out” of rivers and lakes from a parcel file. Both
> are vector files. Based on your description, it sounds like I’m using the
> wrong query. Do you know which one I should be using?
>
>
>
> And with regard to both the ST_Intersection and ST_Difference… yes, I am
> using vector files with HUGE, complex polygons that spread over very large
> areas (hundreds of miles). I can dice these in ArcGIS or something like that
> before throwing the files over the fence into Postgres. But I’m still
> confused by the fact that similar operations run so quickly on the exact
> same files in ArcGIS. I feel like there is something I’m totally missing
> about how to set up these queries correctly in PostGIS.
>
>
>
> If you or anyone have any last thoughts, I’d be really interested.
>
>
>
> Thanks again,
>
>
>
> ~ Sheara
>
>
>
> Sheara Cohen
> Planner
>
> C A L T H O R P E  A S S O C I A T E S
> 2095 ROSE STREET, SUITE 201, BERKELEY, CALIFORNIA, 94709 USA
> 510 809-1165 (direct) | 510-548-6800 x35 (main) | 510 548-6848 (fax)
> she...@calthorpe.com | www.calthorpe.com
>
>
>
> ___
> postgis-users mailing list
> postgis-users@postgis.refractions.net
> http://postgis.refractions.net/mailman/listinfo/postgis-users
>
>


-- 
Chris Hermansen
Vice President

TECO Natural Resource Group Limited
301 · 958 West 8th Avenue
Vancouver BC CANADA · V5Z 1E5
Tel +1.604.714.2878 · Cel +1.778.840.4625
___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


Re: [postgis-users] "Erase" and "Intersect" performance questions - PostGIS way slower than ArcGIS

2011-08-25 Thread Paul Ramsey
No thoughts, you're turning up expected results, I'd say. Arc is a
GIS, PostGIS is a database. I'm sure we could get much better
performance on your workload if we added the intersection operation
into the prepared geometry concept, but that would be quite a bit of
work.

P.

On Thu, Aug 25, 2011 at 2:56 PM, Sheara Cohen  wrote:
> Hey Chris – Thanks for your response.
>
>
>
> With regard to ST_Difference, what I was trying to accomplish is a
> “deletion” or  “clipping out” of rivers and lakes from a parcel file. Both
> are vector files. Based on your description, it sounds like I’m using the
> wrong query. Do you know which one I should be using?
>
>
>
> And with regard to both the ST_Intersection and ST_Difference… yes, I am
> using vector files with HUGE, complex polygons that spread over very large
> areas (hundreds of miles). I can dice these in ArcGIS or something like that
> before throwing the files over the fence into Postgres. But I’m still
> confused by the fact that similar operations run so quickly on the exact
> same files in ArcGIS. I feel like there is something I’m totally missing
> about how to set up these queries correctly in PostGIS.
>
>
>
> If you or anyone have any last thoughts, I’d be really interested.
>
>
>
> Thanks again,
>
>
>
> ~ Sheara
>
>
>
> Sheara Cohen
> Planner
>
> C A L T H O R P E  A S S O C I A T E S
> 2095 ROSE STREET, SUITE 201, BERKELEY, CALIFORNIA, 94709 USA
> 510 809-1165 (direct) | 510-548-6800 x35 (main) | 510 548-6848 (fax)
> she...@calthorpe.com | www.calthorpe.com
>
>
>
> ___
> postgis-users mailing list
> postgis-users@postgis.refractions.net
> http://postgis.refractions.net/mailman/listinfo/postgis-users
>
>
___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


Re: [postgis-users] "Erase" and "Intersect" performance questions - PostGIS way slower than ArcGIS

2011-08-25 Thread Sheara Cohen
Hey Chris - Thanks for your response.

 

With regard to ST_Difference, what I was trying to accomplish is a
"deletion" or  "clipping out" of rivers and lakes from a parcel file.
Both are vector files. Based on your description, it sounds like I'm
using the wrong query. Do you know which one I should be using?

 

And with regard to both the ST_Intersection and ST_Difference... yes, I
am using vector files with HUGE, complex polygons that spread over very
large areas (hundreds of miles). I can dice these in ArcGIS or something
like that before throwing the files over the fence into Postgres. But
I'm still confused by the fact that similar operations run so quickly on
the exact same files in ArcGIS. I feel like there is something I'm
totally missing about how to set up these queries correctly in PostGIS.

 

If you or anyone have any last thoughts, I'd be really interested.

 

Thanks again,

 

~ Sheara

 

Sheara Cohen
Planner

C A L T H O R P E  A S S O C I A T E S
2095 ROSE STREET, SUITE 201, BERKELEY, CALIFORNIA, 94709 USA
510 809-1165 (direct) | 510-548-6800 x35 (main) | 510 548-6848 (fax)
she...@calthorpe.com   | www.calthorpe.com

 

___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


Re: [postgis-users] OT Understanding slow queries

2011-08-25 Thread fork
Charles Galpin  lhsw.com> writes:

> My apologies

Hehe -- none necessary -- welcome to the light side. 

> I would consider Postgres on trunk if it has features you really need.
> 
> Sadly it's for immediate production use and I'm forced to use windows which
> limits my version choices a bit given my lack of skill under windows to build
> postgis :(

Yeah, so much for index only queries ;)

You ask interesting questions -- don't be a stranger.


___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


Re: [postgis-users] OT Understanding slow queries

2011-08-25 Thread Charles Galpin

On Aug 25, 2011, at 4:00 PM, fork wrote:

> One thing -- while we hope that you ask lots of questions on this list, would
> you mind not "top posting", and trimming out non-germane text?  Threading and
> trimming make a conversation MUCH easier to follow.

My apologies

> Also -- if you are developing an app that will be rolled out later or that is
> somewhat academic, I would consider Postgres on trunk if it has features you
> really need.  It is easy to build, just make sure you back up your data...

Sadly it's for immediate production use and I'm forced to use windows which 
limits my version choices a bit given my lack of skill under windows to build 
postgis :(

charles

___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


Re: [postgis-users] OT Understanding slow queries

2011-08-25 Thread fork
Charles Galpin  lhsw.com> writes:


> All of your feedback has been most helpful.  Yes this query is contrived but I
> *thought* it was representative of a worst case scenario that might be similar
> to future data sets and it's likely not.

I think we are all glad to help.  The count(*) assumption is reasonable enough
to be made by LOTS of people.  

One thing -- while we hope that you ask lots of questions on this list, would
you mind not "top posting", and trimming out non-germane text?  Threading and
trimming make a conversation MUCH easier to follow.

Also -- if you are developing an app that will be rolled out later or that is
somewhat academic, I would consider Postgres on trunk if it has features you
really need.  It is easy to build, just make sure you back up your data...



___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


Re: [postgis-users] Raster Points in Polygon (was Re: PostGIS Raster for Met Data)

2011-08-25 Thread Bryce L Nordgren
postgis-users-boun...@postgis.refractions.net wrote on 08/25/2011 05:25:56 
PM:

> > If I understand this right, SRID 900917 is a custom coordinate system
> > for the (i,j) pixel coordinates of the image? Wouldn't this mean that
> > you have to have a different SRID for every tile (raster) of the
> > image, since each tile starts with (1,1)?
> 
> No. We deal with meteorological data rather than data images, and 
> apart from topography data (which can be pretty massive), none of 
> our raw data is tiled.

Understood. I think our issue is with the term "tile". If you store your 
massive data in more than one (say N) database rows, then you require N 
SRIDs to represent the N transforms from the within-the-tile coordinates 
to real world coordinates. If, however, the within-the-tile coordinates 
are first transformed to the (i,j) coordinates from the original grid 
(e.g., the coordinates from before you loaded it into your database), then 
you only need a single SRID. I did not see such a transformation in my 
cursory examination of the code posted to the list.

Even without a formal definition, unique SRIDs should be used to 
distinguish within-the-tile coordinates from different tiles 
(database rows). You'll then need another SRID to represent "original 
grid" coordinates.

Bryce
___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


Re: [postgis-users] OT Understanding slow queries

2011-08-25 Thread Bergenroth, Brandon
> there is no way to get a count without seq scanning.  

True for a single table, in the given example, there was a join.

A possible outcome could have been a seq scan on the smaller table and a nested 
loop index lookup on the bigger table.  The optimizer must have thought this 
was more expensive than the full scan with hash join.



> -Original Message-
> From: postgis-users-boun...@postgis.refractions.net [mailto:postgis-
> users-boun...@postgis.refractions.net] On Behalf Of fork
> Sent: Thursday, August 25, 2011 1:56 PM
> To: postgis-users@postgis.refractions.net
> Subject: Re: [postgis-users] OT Understanding slow queries
> 
> Ben Madin  remoteinformation.com.au> writes:
> 
> > does this just apply to count(*), or is count(id) just as bad? I was
> > originally a MySQL user and count(*)
> > could be very efficient there.
> 
> My understanding is that Postgres does not keep record counts for any of its
> tables internally, so there is no way to get a count without seq scanning.  The
> trade off is that inserts and deletes are faster but a very common query is much
> slower.  I don't know if the planner could benefit in any way from the count
> being available, though.
> 
> The lists say to use a trigger on inserts and deletes to update a
> metadata table
> if you really do need to know how many elements are in it exactly, but
> that is a
> far less frequent need than you may think (for example an EXISTS can
> get you an
> answer to "are there any records" more quickly than a count(*)).  I
> think you
> can do a quick and rough estimate by doing some math with the table
> size on
> disk, but I never have.
> 
> It is unfortunate that the first (rather lame) "benchmark" anyone tries
> to do
> with a new database is run "select count(*) from x" -- I am sure lots
> of people
> have been turned off from PG because this particular query is slow
> compared to
> MySQL.
> 
> (MySQL always wins in the more hollywood competitions against PG, but
> fails in
> the long run, IMHO)
> 
> 
> 
> ___
> postgis-users mailing list
> postgis-users@postgis.refractions.net
> http://postgis.refractions.net/mailman/listinfo/postgis-users
___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


Re: [postgis-users] OT Understanding slow queries

2011-08-25 Thread Charles Galpin
I haven't noticed count(*) causing additional query time, but I was just using 
it to rule out overhead of pulling all those rows back. 

All of your feedback has been most helpful.  Yes this query is contrived but I 
*thought* it was representative of a worst case scenario that might be similar 
to future data sets and it's likely not.  Due to your prompting I did figure 
out why this particular problem I was seeing was slow and it was really just 
user stupidity.

The real use case is using geoserver to visualize this data. In the worst case 
scenario someone zooms out nice and far and effectively gets all the links in 
that join.  Now geoserver seems to be able to get all the links with something 
like "get * from links" and generate the tiles at this zoom level fairly 
quickly so I figured all the other overhead being equal, the join (sans the 
count(*) ) would be the worst case and although it is, I think you are right 
that it simply has to join across all those ids and there is no way to improve 
that if selecting all. 

I'll show a more reasonable case of what usually happens, but I'll explain the 
actual problem. I thought adding a distinct on (link_id) would speed up the join 
since the source_link table has many rows for each link_id, but it turns out 
this was what was making it slow (and I didn't realize that's what the real 
query was doing). I also thought for some reason an index would make distinct 
run quickly, since the index is effectively all unique values, right?

So here is what is typically going on for the query and after changing the 
"select distinct on (link_id) l.*" to "select l.*" it performs reasonably with 
an additional level of filtering with a bounding box. 

explain analyse SELECT 
"link_id",encode(ST_AsBinary(ST_Force_2D("the_geom")),'base64') as "the_geom" 
FROM (select l.* from links l, source_link ld where l.link_id = ld.link_id) as 
"vtable" 
  WHERE "the_geom" && ST_GeomFromText('POLYGON ((-74.79526213727112 
40.11142841966784, -74.79526213727112 40.2127052195, -74.66273955428638 
40.2127052195, -74.66273955428638 40.11142841966784, -74.79526213727112 
40.11142841966784))', 4326)
/*
'Hash Join  (cost=3722.92..74955.80 rows=58813 width=378) (actual 
time=412.729..5946.610 rows=44469 loops=1)'
'  Hash Cond: (ld.link_id = l.link_id)'
'  ->  Seq Scan on source_link ld  (cost=0.00..58282.29 rows=3179029 width=10) 
(actual time=0.026..2823.685 rows=3179029 loops=1)'
'  ->  Hash  (cost=3706.60..3706.60 rows=1306 width=378) (actual 
time=7.805..7.805 rows=1285 loops=1)'
'->  Bitmap Heap Scan on links l  (cost=74.42..3706.60 rows=1306 
width=378) (actual time=0.944..6.093 rows=1285 loops=1)'
'  Recheck Cond: (the_geom && 
'010320E61001000500E6D42993E5B252C0BF285549430E4440E6D42993E5B252C0ADE2B3EC391B44403DDB29536AAA52C0ADE2B3EC391B44403DDB29536AAA52C0BF285549430E4440E6D42993E5B252C0BF285549430E4440'::geometry)'
'  ->  Bitmap Index Scan on links_geom_idx  (cost=0.00..74.09 
rows=1306 width=0) (actual time=0.883..0.883 rows=1285 loops=1)'
'Index Cond: (the_geom && 
'010320E61001000500E6D42993E5B252C0BF285549430E4440E6D42993E5B252C0ADE2B3EC391B44403DDB29536AAA52C0ADE2B3EC391B44403DDB29536AAA52C0BF285549430E4440E6D42993E5B252C0BF285549430E4440'::geometry)'
'Total runtime: 5983.473 ms'

So what prompted my initial concern is solved.  This slowness caught me at a 
time when, in the back of my mind, I am contemplating our next data challenge 
which is why I looked into it.  If I run into problems with our other data set 
I'll be sure to cover all my options and give realistic queries if I ask for 
help. 

Thanks again for all the advice.

charles

- Original Message -
From: "Ben Madin" 
To: "PostGIS Users Discussion" 
Sent: Thursday, August 25, 2011 1:21:00 PM GMT -05:00 US/Canada Eastern
Subject: Re: [postgis-users] OT Understanding slow queries

Steve,

does this just apply to count(*), or is count(id) just as bad? I was originally 
a MySQL user and count(*) could be very efficient there.

cheers

Ben


On 26/08/2011, at 12:01 AM, Stephen Woodbridge wrote:

> The issue here is the count(*) which forces a full table scan in postgresql 
> as fork mentioned. You need to look at a real query, unless you are really 
> doing a count(*).
> 
> -Steve
> 
> On 8/25/2011 11:49 AM, Ben Madin wrote:
>> I'm no expert at this, but my understanding (which is limited) was
>> that you are asking for the whole table, so indexing doesn't really
>> get used (my understanding is that indexing helps to find the page
>> for a subset of data more quickly than scanning through the whole
>> lot).
>> 
>> Also, you might be able to get some speed up by using a different
>> join type (outer join and not null where clause)?
>> 
>> cheers
>> 
>> Ben
>> 
>> 
>> On 25/08/2011, at 9:41 PM, Charles Galpin wrote:
>> 
>>> If this is too off topic, please let me know and I'll sign up on a
>>> postgres list to get help

Re: [postgis-users] OT Understanding slow queries

2011-08-25 Thread Bborie Park



On 08/25/2011 10:56 AM, fork wrote:

Ben Madin  remoteinformation.com.au>  writes:


does this just apply to count(*), or is count(id) just as bad? I was

originally a MySQL user and count(*)

could be very efficient there.


My understanding is that Postgres does not keep record counts for any of its
tables internally, so there is no way to get a count without seq scanning.  The
trade off is that inserts and deletes are faster but a very common query is much
slower.  I don't know if the planner could benefit in any way from the count
being available, though.

The lists say to use a trigger on inserts and deletes to update a metadata table
if you really do need to know how many elements are in it exactly, but that is a
far less frequent need than you may think (for example an EXISTS can get you an
answer to "are there any records" more quickly than a count(*)).  I think you
can do a quick and rough estimate by doing some math with the table size on
disk, but I never have.

It is unfortunate that the first (rather lame) "benchmark" anyone tries to do
with a new database is run "select count(*) from x" -- I am sure lots of people
have been turned off from PG because this particular query is slow compared to
MySQL.

(MySQL always wins in the more hollywood competitions against PG, but fails in
the long run, IMHO)



I think PostgreSQL 9.2 will have index-only scans that should improve 
the performance of SELECT count(*) queries.


http://rhaas.blogspot.com/2011/08/index-only-scans-now-theres-patch.html

Granted, this is in trunk so it won't help for any production databases.

-bborie

--
Bborie Park
Programmer
Center for Vectorborne Diseases
UC Davis
530-752-8380
bkp...@ucdavis.edu
___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


Re: [postgis-users] Raster Points in Polygon (was Re: PostGIS Raster for Met Data)

2011-08-25 Thread Bborie Park


No. We deal with meteorological data rather than data images, and apart from 
topography data (which can be pretty massive), none of our raw data is tiled.

A raster for us typically represents one meteorological parameter (e.g., 
temperature, daily precipitation, etc.) for a given time and data source. A 
typical database of the kind we are working on now would hold such data for 
several different parameters over lengthy time periods (30-50 years * daily 
values * # parameters * # data sources) for about 3-5 TB of data. For a limited 
system we might have only one raster definition, but a typical database would 
probably have a handful or so.

A query using polygons as described above would, e.g., be to extract all the 
data points within a region (a county, for instance) and aggregate the results 
(typically avg, min or max) for each day or month over a thirty year period. 
The result is outputted in all sorts of interesting graphs interactively, so we 
need to do this in an effective manner. Not necessarily with sub-second 
response times (our users do understand that they are asking for a lot of 
data), but still snappily enough that people don't need to go out to lunch 
while waiting for their data. :-)

We already do simple polygons with our existing WDB system, but:
- The algorithm is imprecise (just something simple that was hacked together as 
needed).
- It doesn't handle more complex polygons.
- It isn't fast enough (not optimized at all).

Thus our current interest in PostGIS raster.



Hey Michael,

I do something similar with meteorological/climate (temperature, 
precipication, ndvi, gpp) datasets but the raw data is stored in rasters 
(one image per day per variable) for 50+ years.  In the current system, 
the rasters are stored as massive tables with each row containing the 
observation date, grid cell value and grid cell coordinates.  I'm in the 
process of testing the performance and storage requirements of PostGIS 
Raster with a subset of the rasters for a variety of tile sizes.  I 
expect that I'll be moving over to using PostGIS Raster by the end of 
this year.


I plan on writing a two raster ST_Intersects(raster, raster) function as 
PostGIS Raster currently only has ST_Intersects(raster, geometry).  The 
additional hope is that I can get the two raster ST_MapAlgebra(raster, 
raster) done by the time PostGIS 2.0 is branched.


-bborie

--
Bborie Park
Programmer
Center for Vectorborne Diseases
UC Davis
530-752-8380
bkp...@ucdavis.edu
___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


Re: [postgis-users] OT Understanding slow queries

2011-08-25 Thread fork
Ben Madin  remoteinformation.com.au> writes:

> does this just apply to count(*), or is count(id) just as bad? I was
> originally a MySQL user and count(*)
> could be very efficient there.

My understanding is that Postgres does not keep record counts for any of its
tables internally, so there is no way to get a count without seq scanning.  The
trade off is that inserts and deletes are faster but a very common query is much
slower.  I don't know if the planner could benefit in any way from the count
being available, though.

The lists say to use a trigger on inserts and deletes to update a metadata table
if you really do need to know how many elements are in it exactly, but that is a
far less frequent need than you may think (for example an EXISTS can get you an
answer to "are there any records" more quickly than a count(*)).  I think you
can do a quick and rough estimate by doing some math with the table size on
disk, but I never have.  
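
A minimal sketch of the trigger approach (the counts table, function and
trigger names are made up; 'links' is just an example table):

CREATE TABLE row_counts (tbl text PRIMARY KEY, n bigint NOT NULL);
INSERT INTO row_counts VALUES ('links', (SELECT count(*) FROM links));

CREATE OR REPLACE FUNCTION bump_row_count() RETURNS trigger AS $$
BEGIN
  IF TG_OP = 'INSERT' THEN
    UPDATE row_counts SET n = n + 1 WHERE tbl = TG_TABLE_NAME;
  ELSE  -- DELETE
    UPDATE row_counts SET n = n - 1 WHERE tbl = TG_TABLE_NAME;
  END IF;
  RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER links_rowcount AFTER INSERT OR DELETE ON links
  FOR EACH ROW EXECUTE PROCEDURE bump_row_count();

-- and the EXISTS idiom for "are there any records at all":
SELECT EXISTS (SELECT 1 FROM links);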

It is unfortunate that the first (rather lame) "benchmark" anyone tries to do
with a new database is run "select count(*) from x" -- I am sure lots of people
have been turned off from PG because this particular query is slow compared to
MySQL. 

(MySQL always wins in the more hollywood competitions against PG, but fails in
the long run, IMHO) 



___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


Re: [postgis-users] Longitude and latitude ranges

2011-08-25 Thread Jaime Casanova
On Mon, Aug 22, 2011 at 6:22 PM, Jaime Casanova  wrote:
> Hi,
>
> after some tries, i haven't managed to make this query use the GiST
> index that was created on columns transmitter_mv.punto nor
> rowlatlong.punto and it finishes using a seq scan on table
> transmitter_mv for every row in rowfreqlevel that satisfies the join
> and where conditions.
>
> Stephen Woodbridge, made me notice in
> http://postgis.refractions.net/pipermail/postgis-users/2011-August/030575.html
> that in the plan the POINT is being casted to geography so i decided
> to bite the bullet and use geography columns intead but when i tried
> to create the points from long/lat pairs i got this error (which i
> didn't get when the column was geometry)
> """
> db=# update transmitter_mv set punto = st_makepoint(tx_long, tx_lat);
> ERROR:  Coordinate values are out of range [-180 -90, 180 90] for GEOGRAPHY 
> type
> """
>

btw, this is obviously postgis 1.5 and postgres version (less obvious) is 9.0

-- 
Jaime Casanova         www.2ndQuadrant.com
Professional PostgreSQL: Soporte 24x7 y capacitación
___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


Re: [postgis-users] Raster Points in Polygon (was Re: PostGIS Raster for Met Data)

2011-08-25 Thread Michael Akinde
Hi,

- Original Message -
> I had to modify the coordinates of your raster and your polygon
> because the 900917 SRID does not exist by default in spatial_ref_sys,
> but this example returns the x and y coordinates (as well as the point
> geometry and the value) of every pixel intersecting a polygon.

Apologies for the hassle; it's of course a local SRID of the kind used by our 
application (in this case '+proj=ob_tran +o_proj=longlat +lon_0=-24 
+o_lat_p=23.5 +a=6367470.0 +no_defs').
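
For anyone wanting to reproduce the example, registering such a custom SRID
is just an insert into spatial_ref_sys, roughly like this (a sketch; the
srtext is left empty here):

INSERT INTO spatial_ref_sys (srid, auth_name, auth_srid, srtext, proj4text)
VALUES (900917, 'custom', 900917, '',
        '+proj=ob_tran +o_proj=longlat +lon_0=-24 +o_lat_p=23.5 +a=6367470.0 +no_defs');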
 
> I had to create the ST_PixelAsPoints(rast raster, band integer)
> function which is a derivate of the ST_PixelAsPolygons(rast raster,
> band integer) function available in this page:
> http://trac.osgeo.org/postgis/wiki/WKTRasterUsefulFunctions

Many, many thanks - this is pretty much exactly what I was looking for. :-)

I had hoped that there was a better "native" way of getting the points inside 
the polygon than outputting the points and running ST_Intersects over them. 
Hmm - well, at least it is easy enough to optimize ST_PixelAsPoints by 
constraining it to only return the points within the axis-aligned minimum 
bounding box of the polygon. Also, I expect simply constructing geom using the 
corner point and dX/dY would be a bit faster than using 
ST_Centroid(ST_PixelAsPolygon), so perhaps we can get a solution along these 
lines running at a reasonable clip.

Thanks again for pointing us at the referenced functions - this gives us 
something to work with.

Regards,

Michael A.
___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


Re: [postgis-users] Raster Points in Polygon (was Re: PostGIS Raster for Met Data)

2011-08-25 Thread Michael Akinde
Hi,

- Original Message -
> > I had to modify the coordinates of your raster and your polygon
> > because the 900917 SRID does not exist by default in spatial_ref_sys,
> > but this example returns the x and y coordinates (as well as the
> > point geometry and the value) of every pixel intersecting a
> > polygon.
> 
> If I understand this right, SRID 900917 is a custom coordinate system
> for the (i,j) pixel coordinates of the image? Wouldn't this mean that
> you have to have a different SRID for every tile (raster) of the
> image, since each tile starts with (1,1)?

No. We deal with meteorological data rather than data images, and apart from 
topography data (which can be pretty massive), none of our raw data is tiled.

A raster for us typically represents one meteorological parameter (e.g., 
temperature, daily precipitation, etc.) for a given time and data source. A 
typical database of the kind we are working on now would hold such data for 
several different parameters over lengthy time periods (30-50 years * daily 
values * # parameters * # data sources) for about 3-5 TB of data. For a limited 
system we might have only one raster definition, but a typical database would 
probably have a handful or so.

A query using polygons as described above would, e.g., be to extract all the 
data points within a region (a county, for instance) and aggregate the results 
(typically avg, min or max) for each day or month over a thirty year period. 
The result is outputted in all sorts of interesting graphs interactively, so we 
need to do this in an effective manner. Not necessarily with sub-second 
response times (our users do understand that they are asking for a lot of 
data), but still snappily enough that people don't need to go out to lunch 
while waiting for their data. :-)
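
For the curious, such a query might take roughly this shape (a sketch only:
the tables met_rasters(obs_date, rast) and regions(name, geom) are
hypothetical, and I'm assuming the ST_PixelAsPoints function discussed
earlier in the thread, returning a composite with geom and val fields):

SELECT date_trunc('month', obs_date) AS month,
       avg((pp).val) AS avg_val,
       min((pp).val) AS min_val,
       max((pp).val) AS max_val
FROM (
    -- expand each daily raster into per-pixel points
    SELECT obs_date, ST_PixelAsPoints(rast, 1) AS pp
    FROM met_rasters
    WHERE obs_date >= '1980-01-01' AND obs_date < '2010-01-01'
) AS pts,
     regions r
WHERE r.name = 'SomeCounty'
  AND ST_Intersects((pp).geom, r.geom)
GROUP BY 1
ORDER BY 1;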

We already do simple polygons with our existing WDB system, but:
- The algorithm is imprecise (just something simple that was hacked together as 
needed).
- It doesn't handle more complex polygons.
- It isn't fast enough (not optimized at all).

Thus our current interest in PostGIS raster.

Regards,

Michael Akinde

___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


Re: [postgis-users] OT Understanding slow queries

2011-08-25 Thread Ben Madin
Steve,

does this just apply to count(*), or is count(id) just as bad? I was originally 
a MySQL user and count(*) could be very efficient there.

cheers

Ben


On 26/08/2011, at 12:01 AM, Stephen Woodbridge wrote:

> The issue here is the count(*) which forces a full table scan in postgresql 
> as fork mentioned. You need to look at a real query, unless you are really 
> doing a count(*).
> 
> -Steve
> 
> On 8/25/2011 11:49 AM, Ben Madin wrote:
>> I'm no expert at this, but my understanding (which is limited) was
>> that you are asking for the whole table, so indexing doesn't really
>> get used (my understanding is that indexing helps to find the page
>> for a subset of data more quickly than scanning through the whole
>> lot).
>> 
>> Also, you might be able to get some speed up by using a different
>> join type (outer join and not null where clause)?
>> 
>> cheers
>> 
>> Ben
>> 
>> 
>> On 25/08/2011, at 9:41 PM, Charles Galpin wrote:
>> 
>>> If this is too off topic, please let me know and I'll sign up on a
>>> postgres list to get help. But this is related to my use of postgis
>>> and If anyone knows this stuff, it's you guys.
>>> 
>>> I have an example query that I expect to be much faster, but my
>>> main concern is we are about to do some visualization of historical
>>> congestion data which will require queries across much larger data
>>> sets - like 150 million records a day. We are about to test using
>>> partitions but the number per table will still be much larger than
>>> what I am dealing with now.
>>> 
>>> So here is a query I would think would be much faster than 43
>>> seconds for two tables, one with about 97k rows, and the other 3.2
>>> million.
>>> 
>>> explain select count(l.*) from links l, source_link ld where
>>> l.link_id = ld.link_id; /* 'Aggregate  (cost=174731.72..174731.73
>>> rows=1 width=32)' '  ->   Hash Join  (cost=13024.27..166784.14
>>> rows=3179029 width=32)' 'Hash Cond: (ld.link_id =
>>> l.link_id)' '->   Seq Scan on source_link ld
>>> (cost=0.00..58282.29 rows=3179029 width=10)' '->   Hash
>>> (cost=10963.12..10963.12 rows=96812 width=42)' '  ->
>>> Seq Scan on links l  (cost=0.00..10963.12 rows=96812 width=42)' */
>>> 
>>> Each table has an index on link_id, defined like this
>>> 
>>> CREATE INDEX links_link_id_idx ON links USING btree (link_id);
>>> 
>>> CREATE INDEX source_link_link_id_idx ON source_link USING btree
>>> (link_id);
>>> 
>>> Shouldn't this index prevent these sequential scans, or am I
>>> misreading this?  Should this really take 43 seconds?
>>> 
>>> thanks for any advice, charles
>>> 
>>> ___ postgis-users
>>> mailing list postgis-users@postgis.refractions.net
>>> http://postgis.refractions.net/mailman/listinfo/postgis-users
>> 
>> ___ postgis-users mailing
>> list postgis-users@postgis.refractions.net
>> http://postgis.refractions.net/mailman/listinfo/postgis-users
> 
> ___
> postgis-users mailing list
> postgis-users@postgis.refractions.net
> http://postgis.refractions.net/mailman/listinfo/postgis-users

___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


[postgis-users] ST_Equals with different geometry types

2011-08-25 Thread Jose Carlos Martínez Llario

Dear PostGIS list,
According to both the OGC and SQL-MM standards, this query should return 
true. JTS does.

I guess it's because the geometry types are different, but that shouldn't matter.
Is it a bug or the expected behavior?
cheers,
Jose

I'm running:
 POSTGIS="2.0.0SVN" GEOS="3.3.1dev-CAPI-1.7.1" PROJ="Rel. 4.6.1, 21 
August 2008" LIBXML="2.7.8" USE_STATS


test2=# SELECT st_equals(wkta, wktb) as relateab,
test2-#st_relate(wktb, wkta) as relateba
test2-# from (select
test2(# 'LINESTRING (0 0, 10 0)'::text as wkta,
test2(# 'MULTILINESTRING ((10 0, 5 0),(0 0, 5 0))'::text as wktb
test2(# ) as mitabla;
 relateab | relateba
----------+-----------
 f        | 1FFF0FFF2



___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


Re: [postgis-users] OT Understanding slow queries

2011-08-25 Thread Stephen Woodbridge
The issue here is the count(*) which forces a full table scan in 
postgresql as fork mentioned. You need to look at a real query, unless 
you are really doing a count(*).


-Steve

On 8/25/2011 11:49 AM, Ben Madin wrote:

I'm no expert at this, but my understanding (which is limited) was
that you are asking for the whole table, so indexing doesn't really
get used (my understanding is that indexing helps to find the page
for a subset of data more quickly than scanning through the whole
lot).

Also, you might be able to get some speed up by using a different
join type (outer join and not null where clause)?

cheers

Ben


On 25/08/2011, at 9:41 PM, Charles Galpin wrote:


If this is too off topic, please let me know and I'll sign up on a
postgres list to get help. But this is related to my use of postgis
and If anyone knows this stuff, it's you guys.

I have an example query that I expect to be much faster, but my
main concern is we are about to do some visualization of historical
congestion data which will require queries across much larger data
sets - like 150 million records a day. We are about to test using
partitions but the number per table will still be much larger than
what I am dealing with now.

So here is a query I would think would be much faster than 43
seconds for two tables, one with about 97k rows, and the other 3.2
million.

explain select count(l.*) from links l, source_link ld where
l.link_id = ld.link_id; /* 'Aggregate  (cost=174731.72..174731.73
rows=1 width=32)' '  ->   Hash Join  (cost=13024.27..166784.14
rows=3179029 width=32)' 'Hash Cond: (ld.link_id =
l.link_id)' '->   Seq Scan on source_link ld
(cost=0.00..58282.29 rows=3179029 width=10)' '->   Hash
(cost=10963.12..10963.12 rows=96812 width=42)' '  ->
Seq Scan on links l  (cost=0.00..10963.12 rows=96812 width=42)' */

Each table has an index on link_id, defined like this

CREATE INDEX links_link_id_idx ON links USING btree (link_id);

CREATE INDEX source_link_link_id_idx ON source_link USING btree
(link_id);

Shouldn't this index prevent these sequential scans, or am I
misreading this?  Should this really take 43 seconds?

thanks for any advice, charles

___ postgis-users
mailing list postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


___ postgis-users mailing
list postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


Re: [postgis-users] OT Understanding slow queries

2011-08-25 Thread Ben Madin
I'm no expert at this, but my understanding (which is limited) was that you are 
asking for the whole table, so indexing doesn't really get used (my 
understanding is that indexing helps to find the page for a subset of data more 
quickly than scanning through the whole lot).

Also, you might be able to get some speed up by using a different join type 
(outer join and not null where clause)?

cheers

Ben


On 25/08/2011, at 9:41 PM, Charles Galpin wrote:

> If this is too off topic, please let me know and I'll sign up on a postgres 
> list to get help. But this is related to my use of postgis and If anyone 
> knows this stuff, it's you guys.
> 
> I have an example query that I expect to be much faster, but my main concern 
> is we are about to do some visualization of historical congestion data which 
> will require queries across much larger data sets - like 150 million records 
> a day. We are about to test using partitions but the number per table will 
> still be much larger than what I am dealing with now.
> 
> So here is a query I would think would be much faster than 43 seconds for two 
> tables, one with about 97k rows, and the other 3.2 million.
> 
> explain select count(l.*) 
> from links l, source_link ld where l.link_id = ld.link_id;
> /*
> 'Aggregate  (cost=174731.72..174731.73 rows=1 width=32)'
> '  ->  Hash Join  (cost=13024.27..166784.14 rows=3179029 width=32)'
> 'Hash Cond: (ld.link_id = l.link_id)'
> '->  Seq Scan on source_link ld  (cost=0.00..58282.29 rows=3179029 
> width=10)'
> '->  Hash  (cost=10963.12..10963.12 rows=96812 width=42)'
> '  ->  Seq Scan on links l  (cost=0.00..10963.12 rows=96812 
> width=42)'
> */
> 
> Each table has an index on link_id, defined like this
> 
> CREATE INDEX links_link_id_idx
>  ON links
>  USING btree
>  (link_id);
> 
> CREATE INDEX source_link_link_id_idx
>  ON source_link
>  USING btree
>  (link_id);
> 
> Shouldn't this index prevent these sequential scans, or am I misreading this? 
>  Should this really take 43 seconds?
> 
> thanks for any advice,
> charles
> 
> ___
> postgis-users mailing list
> postgis-users@postgis.refractions.net
> http://postgis.refractions.net/mailman/listinfo/postgis-users

___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


Re: [postgis-users] OT Understanding slow queries

2011-08-25 Thread fork
Charles Galpin  lhsw.com> writes:

> explain select count(l.*) 
> from links l, source_link ld where l.link_id = ld.link_id;

Can you try this returning some sort of value (like the keys) instead of a
count(*)?  count(*) can be pretty slow in Postgres, sometimes (I think)
forcing a seq scan.

I am not particularly confident this will fix your problem, but it is worth
a shot.

I would also experiment with DISTINCT and LIMIT, after making sure ANALYZE
has been run appropriately.
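
For instance, something like this (a sketch using the table names from the
question):

ANALYZE links;
ANALYZE source_link;

EXPLAIN ANALYZE
SELECT l.link_id
FROM links l
JOIN source_link ld ON l.link_id = ld.link_id
LIMIT 100;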

___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


[postgis-users] OT Understanding slow queries

2011-08-25 Thread Charles Galpin
If this is too off topic, please let me know and I'll sign up on a postgres 
list to get help. But this is related to my use of postgis and If anyone knows 
this stuff, it's you guys.

I have an example query that I expect to be much faster, but my main concern is 
we are about to do some visualization of historical congestion data which will 
require queries across much larger data sets - like 150 million records a day. 
We are about to test using partitions but the number per table will still be 
much larger than what I am dealing with now.

So here is a query I would think would be much faster than 43 seconds for two 
tables, one with about 97k rows, and the other 3.2 million.

explain select count(l.*) 
from links l, source_link ld where l.link_id = ld.link_id;
/*
'Aggregate  (cost=174731.72..174731.73 rows=1 width=32)'
'  ->  Hash Join  (cost=13024.27..166784.14 rows=3179029 width=32)'
'Hash Cond: (ld.link_id = l.link_id)'
'->  Seq Scan on source_link ld  (cost=0.00..58282.29 rows=3179029 
width=10)'
'->  Hash  (cost=10963.12..10963.12 rows=96812 width=42)'
'  ->  Seq Scan on links l  (cost=0.00..10963.12 rows=96812 
width=42)'
*/

Each table has an index on link_id, defined like this

CREATE INDEX links_link_id_idx
  ON links
  USING btree
  (link_id);
  
CREATE INDEX source_link_link_id_idx
  ON source_link
  USING btree
  (link_id);

Shouldn't this index prevent these sequential scans, or am I misreading this?  
Should this really take 43 seconds?

thanks for any advice,
charles

___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


[postgis-users] A problem with make check and CUnit

2011-08-25 Thread Andrea Peri
Hi,
I have compiled PostGIS from SVN on Unix (RedHat ES6, 64-bit).

Now I have a problem in the "make check" phase with the CUnit library.

In the configure phase, CUnit was correctly found and set up, and the make 
phase reported no problems.

But when I try "make check", it fails to find the CUnit library, as reported 
here:

-- from log 
[postgres@artu postgis-svn]$ make check > ris_check.log
cu_surface.c:244: warning: 'check_tgeom' defined but not used
./cu_tester: error while loading shared libraries: libcunit.so.1: cannot
open shared object file: No such file or directory
make[2]: *** [check] Error 127
make[1]: *** [check] Error 2
make: *** [check] Error 1
[postgres@artu postgis-svn]$ more ris_check.log
for s in liblwgeom libpgcommon regress postgis loader utils raster topology; do \
echo " Making all in ${s}"; \
make -C ${s} all || exit 1; \
done;
 Making all in liblwgeom

--- end log 

I see that libcunit.so.1 is present in /usr/local/lib, and that path is 
declared in /etc/ld.so.conf.d/..

I guess "make check" ignores the ld.so.conf.d settings?

So, how can I tell "make check" where to find libcunit?

thx,

-- 
-
Andrea Peri
. . . . . . . . .
qwerty àèìòù
-
___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


Re: [postgis-users] How configure postgis to enable the rasters.

2011-08-25 Thread Andrea Peri
>Adding /usr/local/lib to PYTHONPATH won't help much since python knows
>where the GDAL python module is but can't find the GDAL library itself.
>
>Is /usr/local/lib in the linker path?  You may want to check
>/etc/ld.so.conf to see if /usr/local/lib is in there.  If not, add
>/usr/local/lib to /etc/ld.so.conf and then run ldconfig.  BUT, this may
>not be the best solution as you are running a 64-bit linux system.
>
>Since it looks like you compiled your own GDAL, you should reconfigure
>and recompile GDAL with something like:
>
>./configure --libdir=/usr/local/lib64 OTHER_CONFIG_FLAGS
>
>Reconfiguring and recompiling GDAL should work as I'm guessing that
>/usr/local/lib64 is in /etc/ld.so.conf but /usr/local/lib isn't.

Great !

Adding /usr/local/lib to ld.so.conf.d works perfectly!

thx a lot.

Andrea.




-- 
-
Andrea Peri
. . . . . . . . .
qwerty àèìòù
-
___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users