Re: [GENERAL] Development of an extension for PostgreSQL and PostGIS

2017-08-14 Thread Paul Ramsey
In order to get an LWGEOM from PostGIS you'll need to convert from the
serialized form (GSERIALIZED) which you can read all about in the
liblwgeom.h header. You'll be adding a hard dependency of course, but
hopefully you're OK with that.

If you're just hoping to build a compound type, as your example shows, you
can do that without a C extension, just read up on CREATE TYPE.
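
For instance, a minimal sketch of the composite-type route (the field list is a placeholder):

CREATE TYPE mytype AS (
    id   integer,
    geom geometry
);

-- composite values are then built with the ROW constructor:
SELECT ROW(1, ST_GeomFromText('POINT(0 0)', 4326))::mytype;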

For an alternate example of an extension with a lighter dependency on
PostGIS, check out pgpointcloud, which has its own structure for spatial
data (a point patch) and exchanges data with PostGIS via well-known-binary.
This removes the liblwgeom dependency, which means it's possible to compile
and use pgpointcloud without PostGIS installed, which is not entirely
uncommon.

P


On Mon, Aug 14, 2017 at 11:18 AM, Fabiana Zioti 
wrote:

> Hello.
>
> I will start developing an extension for PostgreSQL, alongside PostGIS, using
> the C language.
>
> If my new type were:
>
>
> CREATE TYPE mytype AS (.., .., .., geom geometry);
>
> Would the corresponding structure in C be something like this?
>
> #include "liblwgeom.h"
>
> struct mytype
> {
>     int32 id;
>     LWGEOM *lwgeom;
> };
>
>
> In the extension I will create new data types for PostgreSQL, but I would
> like to use the geometric objects that the PostGIS extension offers, such
> as POINT, LINE, POLYGON, etc., along with their input functions (WKT via
> ST_GeomFromText()), operators, indexes, etc.
>
> In this case, would importing the liblwgeom library be enough to develop an
> extension for PostgreSQL/PostGIS?
>
> Would you have any examples of such a project?
>
> Thanks in advance!!
>
>


Re: [GENERAL] Running out of memory the hard way ...

2017-02-08 Thread Paul Ramsey
On Wed, Feb 8, 2017 at 7:44 AM, Tom Lane  wrote:

> Albe Laurenz  writes:
> > Bill Moran wrote:
> >> What I feel is the best way to mitigate the situation, is to have some
> >> setting that limits the maximum RAM any backend can consume.
>
> > I'd delegate that problem to the operating system which, after all,
> > should know best of all how much memory a process uses.
>
> I've had some success using ulimit in the past, although it does have
> the disadvantage that you have to impose the same limit on every PG
> process.  (You set it before starting the postmaster and it inherits
> to every child process.)  If memory serves, limiting with the -v switch
> works better than -d or -m on Linux; but I might be misremembering.
> Conceivably we could add code to let the ulimit be set per-process,
> if the use-case were strong enough.
>
> To implement a limit inside PG, we'd have to add expensive bookkeeping
> to the palloc/pfree mechanism, and even that would be no panacea because
> it would fail to account for memory allocated directly from malloc.
> Hence, you could be pretty certain that it would be wildly inaccurate
> for sessions using third-party code such as PostGIS or Python.  An
> OS-enforced limit definitely sounds better from here.
>
>
Confirming what Tom said, with respect to the specific example in this
thread: a large proportion of the allocations in the memory-hungry bits of
PostGIS are in fact bare malloc, via the GEOS library.

P


Re: [GENERAL] Improve PostGIS performance with 62 million rows?

2017-01-09 Thread Paul Ramsey
> WITH dump AS (SELECT
> ST_DumpPoints(
> ST_Segmentize(
> ST_GeographyFromText('SRID=4326;LINESTRING(-150.008056
> 61.179167,-156.77 71.285833)'),
> 600
> )::geometry
> ) as pt
> ),
> pts AS (
> SELECT (pt).geom, (pt).path[1] as vert FROM dump
> )
> SELECT elevation
> FROM data
> INNER JOIN (SELECT
> ST_MakeLine(ARRAY[a.geom, b.geom]) as short_line
> FROM pts a
> INNER JOIN pts b
> ON a.vert=b.vert-1 AND b.vert>1) segments
> ON  ST_DWithin(location, segments.short_line, 600)
> ORDER BY elevation DESC limit 1;
>
>
> Then you could remove the useless and (potentially explosive if you have
> large number of dump points) inner join on points :
> "FROM pts a
> INNER JOIN pts b "
>
> You could simply use a window function to generate the segments, like in
> here
> <https://github.com/Remi-C/_utilities/blob/master/postgis/rc_DumpSegments.sql#L51>
> .
> The idea is to dump points, order them by path, and then link each point
> with the previous one (function lag).
> Assuming you don't want to use the available function,
> this would be something like :
>
>
>
> WITH segments AS (
> SELECT ST_MakeLine( lag((pt).geom , 1, NULL) OVER (ORDER BY (pt).path)
>   ,(pt).geom) AS short_line
> FROM ST_DumpPoints(
>   ST_Segmentize(
> ST_GeographyFromText('SRID=4326;LINESTRING(-150.008056
> 61.179167,-156.77 71.285833)'),
> 600
> )::geometry
> ) as pt
> )
> SELECT elevation
> FROM data ,segments
> WHERE segments.short_line IS NOT NULL --the first segment is null by
> design (lag function)
>   AND  ST_DWithin(location, segments.short_line, 600) = TRUE
> ORDER BY elevation DESC
> limit 1;
>
>
> I don't know if you can further improve this query after that,
> but I'd guess it would reduce your runtime and scale more safely.
>
>
> If you want to further improve your result,
> you'll have to reduce the number of rows in your index,
> that is, partition your table into several tables!
>
> This is not easy to do with current postgres partitioning methods as far
> as I know
> (partitioning is easy; automatic, efficient querying is hard).
>
> Another way would be to reduce your requirements, and consider that in some
> cases you may want less detail in the altimetry, which would allow you to
> use a Level of Detail approach.
>
> Congrats on the well-explained query/problem anyway!
> Cheers,
> Rémi-C
>
>
>
> Ooooh, nice use of a window function - that change right there cut the
> execution time in half! I was able to shave off a few hundredths of a second
> more by tweaking the ST_Segmentize length parameter up to 5,000 (still
> have to play with that number some), so execution time is now down to the
> sub-300ms range. If I reduce the radius I am looking around the line, I
> can additionally improve the time to around 200 ms, but I'm not sure that
> will be an option. Regardless, 300ms is rather impressive, I think. Thanks!
>
>
> 2017-01-05 23:09 GMT+01:00 Paul Ramsey :
>
>> Varying the segment length upwards might have a salutary effect for a
>> while, as the efficiency improvement of fewer inner loops battles with the
>> inefficiency of having more points selected by the index filter. Worth an
>> experiment.
>>
>> P
>>
>> On Thu, Jan 5, 2017 at 1:00 PM, Israel Brewster 
>> wrote:
>>
>>>
>>> On Jan 5, 2017, at 10:38 AM, Paul Ramsey 
>>> wrote:
>>>
>>> Yes, you did. You want a query that spits out a tupleset of geometries
>>> (one for each wee segment), and then you can join that set to your
>>> main table using st_dwithin() as the join clause.
>>> So start by ditching the main table and just work on a query that
>>> generates a pile of wee segments.
>>>
>>>
>>> Ahhh, I see you've done this sort of thing before (
>>> http://blog.cleverelephant.ca/2015/02/breaking-linestring-into-segments.html) :-)
>>>
>>> So following that advice I came up with the following query:
>>>
>>> WITH dump AS (SELECT
>>> ST_DumpPoints(
>>> ST_Segmentize(
>>> ST_GeographyFromText('SRID=4326;LINESTRING(-150.008056
>>> 61.179167,-156.77 71.285833)'),
>>> 600
>>> )::geometry
>>> ) as pt
>>> ),
>>> pts AS (
>>> SELECT (pt).geom, (pt).path[1] as vert FROM dump
>>> )
>>> SELECT elevation
>>> FROM data
>>> INNER JOIN (SELECT
>>> ST_MakeLine(ARRAY[a.geom, b.geom]) as short_line
>>> FROM pts a
>>> INNER

Re: [GENERAL] Improve PostGIS performance with 62 million rows?

2017-01-05 Thread Paul Ramsey
Varying the segment length upwards might have a salutary effect for a
while, as the efficiency improvement of fewer inner loops battles with the
inefficiency of having more points selected by the index filter. Worth an
experiment.
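
Concretely, the experiment is just the ST_Segmentize length parameter; 5,000 is
the value reported upthread:

SELECT ST_Segmentize(
    ST_GeographyFromText('SRID=4326;LINESTRING(-150.008056 61.179167,-156.77 71.285833)'),
    5000);  -- vs. 600: fewer, longer segments to loop over, but a coarser index filter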

P

On Thu, Jan 5, 2017 at 1:00 PM, Israel Brewster 
wrote:

>
> On Jan 5, 2017, at 10:38 AM, Paul Ramsey 
> wrote:
>
> Yes, you did. You want a query that spits out a tupleset of geometries
> (one for each wee segment), and then you can join that set to your
> main table using st_dwithin() as the join clause.
> So start by ditching the main table and just work on a query that
> generates a pile of wee segments.
>
>
> Ahhh, I see you've done this sort of thing before (
> http://blog.cleverelephant.ca/2015/02/breaking-linestring-into-segments.html) :-)
>
> So following that advice I came up with the following query:
>
> WITH dump AS (SELECT
> ST_DumpPoints(
> ST_Segmentize(
> ST_GeographyFromText('SRID=4326;LINESTRING(-150.008056
> 61.179167,-156.77 71.285833)'),
> 600
> )::geometry
> ) as pt
> ),
> pts AS (
> SELECT (pt).geom, (pt).path[1] as vert FROM dump
> )
> SELECT elevation
> FROM data
> INNER JOIN (SELECT
> ST_MakeLine(ARRAY[a.geom, b.geom]) as short_line
> FROM pts a
> INNER JOIN pts b
> ON a.vert=b.vert-1 AND b.vert>1) segments
> ON  ST_DWithin(location, segments.short_line, 600)
> ORDER BY elevation DESC limit 1;
>
> Which yields the following EXPLAIN ANALYZE (https://explain.depesz.com/s/RsTD):
>
> QUERY PLAN
> --------------------------------------------------------------------------------
>  Limit  (cost=11611706.90..11611706.91 rows=1 width=4) (actual
> time=1171.814..1171.814 rows=1 loops=1)
>CTE dump
>  ->  Result  (cost=0.00..5.25 rows=1000 width=32) (actual
> time=0.024..1.989 rows=1939 loops=1)
>CTE pts
>  ->  CTE Scan on dump  (cost=0.00..20.00 rows=1000 width=36) (actual
> time=0.032..4.071 rows=1939 loops=1)
>->  Sort  (cost=11611681.65..11611768.65 rows=34800 width=4) (actual
> time=1171.813..1171.813 rows=1 loops=1)
>  Sort Key: data.elevation DESC
>  Sort Method: top-N heapsort  Memory: 25kB
>  ->  Nested Loop  (cost=0.55..11611507.65 rows=34800 width=4)
> (actual time=0.590..1167.615 rows=28408 loops=1)
>->  Nested Loop  (cost=0.00..8357.50 rows=1665 width=64)
> (actual time=0.046..663.475 rows=1938 loops=1)
>  Join Filter: (a.vert = (b.vert - 1))
>  Rows Removed by Join Filter: 3755844
>  ->  CTE Scan on pts b  (cost=0.00..22.50 rows=333
> width=36) (actual time=0.042..0.433 rows=1938 loops=1)
>Filter: (vert > 1)
>Rows Removed by Filter: 1
>  ->  CTE Scan on pts a  (cost=0.00..20.00 rows=1000
> width=36) (actual time=0.000..0.149 rows=1939 loops=1938)
>->  Index Scan using location_gix on
> data  (cost=0.55..6968.85 rows=1 width=36) (actual time=0.085..0.256
> rows=15 loops=1938)
>  Index Cond: (location &&
> _st_expand((st_makeline(ARRAY[a.geom, b.geom]))::geography, '600'::double
> precision))
>  Filter: (((st_makeline(ARRAY[a.geom,
> b.geom]))::geography && _st_expand(location, '600'::double precision)) AND
> _st_dwithin(location, (st_makeline(ARRAY[a.geom,
> b.geom]))::geography, '600'::double precision, true))
>  Rows Removed by Filter: 7
>  Planning time: 4.318 ms
>  Execution time: 1171.994 ms
> (22 rows)
>
> So not bad. Went from 20+ seconds to a little over 1 second. Still
> noticeable for an end user, but definitely usable - and as mentioned,
> that's a worst-case scenario query. Thanks!
>
> Of course, if you have any suggestions for further improvement, I'm all
> ears :-)
> ---
> Israel Brewster
> Systems Analyst II
> Ravn Alaska
> 5245 Airport Industrial Rd
> Fairbanks, AK 99709
> (907) 450-7293
> ---
>
>
> On Thu, Jan 5, 2017 at 11:36 AM, Israel Brewster 
> wrote:
>
>> On Jan 5, 2017, at 8:50 AM, Paul Ramsey 
>> wrote:
>>
>>
>> The index filters using bounding boxes.  A long, diagonal route will have
>> a 

Re: [GENERAL] Improve PostGIS performance with 62 million rows?

2017-01-05 Thread Paul Ramsey
Yes, you did. You want a query that spits out a tupleset of geometries (one
for each wee segment), and then you can join that set to your main
table using st_dwithin() as the join clause.
So start by ditching the main table and just work on a query that generates
a pile of wee segments.

On Thu, Jan 5, 2017 at 11:36 AM, Israel Brewster 
wrote:

> On Jan 5, 2017, at 8:50 AM, Paul Ramsey  wrote:
>
>
> The index filters using bounding boxes.  A long, diagonal route will have
> a large bounding box, relative to the area you actually care about (within
> a narrow strip of the route). Use ST_Segmentize() to add points to your
> route, ST_DumpPoints() to dump those out as point and ST_MakeLine to
> generate new lines from those points, each line very short. The maximum
> index effectiveness will come when your line length is close to your buffer
> width.
>
> P
>
>
> Ok, I think I understand the concept. So attempting to follow your advice,
> I modified the query to be:
>
> SELECT elevation
> FROM data
> WHERE
> ST_DWithin(
> location,
> (SELECT ST_MakeLine(geom)::geography as split_line
>  FROM (SELECT
> (ST_DumpPoints(
> ST_Segmentize(
> ST_GeographyFromText('SRID=4326;LINESTRING(-150.008056
> 61.179167,-156.77 71.285833)'),
> 600
> )::geometry
> )).geom
> ) s1),
> 600
> )
> ORDER BY elevation DESC limit 1;
>
> It took some fiddling to find a syntax that PostgreSQL would accept, but
> eventually that's what I came up with. Unfortunately, far from improving
> performance, it killed it - in running the query, it went from 22 seconds
> to several minutes (EXPLAIN ANALYZE has yet to return a result). Looking at
> the query execution plan shows, at least partially, why:
>
> QUERY PLAN
> --------------------------------------------------------------------------------
>  Limit  (cost=17119748.98..17119748.98 rows=1 width=4)
>InitPlan 1 (returns $0)
>  ->  Aggregate  (cost=17.76..17.77 rows=1 width=32)
>->  Result  (cost=0.00..5.25 rows=1000 width=32)
>->  Sort  (cost=17119731.21..17171983.43 rows=20900890 width=4)
>  Sort Key: data.elevation DESC
>  ->  Seq Scan on data  (cost=0.00..17015226.76 rows=20900890
> width=4)
>Filter: st_dwithin(location, $0, '600'::double precision)
> (8 rows)
>
> So apparently it is now doing a sequential scan on data rather than using
> the index. And, of course, sorting 20 million rows is not trivial either.
> Did I do something wrong with forming the query?
>
> ---
> Israel Brewster
> Systems Analyst II
> Ravn Alaska
> 5245 Airport Industrial Rd
> Fairbanks, AK 99709
> (907) 450-7293
> ---
>
>
> On Thu, Jan 5, 2017 at 9:45 AM, Israel Brewster 
> wrote:
>
>> I have a database (PostgreSQL 9.6.1) containing 62,702,675 rows of
>> latitude (numeric), longitude(numeric), elevation(integer) data, along with
>> a PostGIS (2.3.0) geometry column (location), running on a CentOS 6.8 box
>> with 64GB RAM and a RAID10 SSD data drive. I'm trying to get the maximum
>> elevation along a path, for which purpose I've come up with the following
>> query (for one particular path example):
>>
>> SELECT elevation FROM data
>> WHERE ST_DWithin(location,
>> ST_GeographyFromText('SRID=4326;LINESTRING(-150.008056
>> 61.179167,-156.77 71.285833)'), 600)
>> ORDER BY elevation DESC LIMIT 1;
>>
>> The EXPLAIN ANALYZE output of this particular query (
>> https://explain.depesz.com/s/heZ) shows:
>>
>> QUERY PLAN
>> --------------------------------------------------------------------------------
>>  Limit  (cost=4.83..4.83 rows=1 width=4) (actual
>> time=22653.840..22653.842 rows=1 loops=1)
>>->  Sort  (cost=4.83..4.83 rows=1 width=4) (actual
>> time=22653.837..22653.837 rows=1 loops=1)
>>  Sort Key: elevation DESC
>>  Sort Method: top-N heapsort  Memory: 25kB
>>  ->  Index Scan using location_gix on data  (cost=0.42..4.82
>>

Re: [GENERAL] Improve PostGIS performance with 62 million rows?

2017-01-05 Thread Paul Ramsey
The index filters using bounding boxes.  A long, diagonal route will have a
large bounding box, relative to the area you actually care about (within a
narrow strip of the route). Use ST_Segmentize() to add points to your
route, ST_DumpPoints() to dump those out as point and ST_MakeLine to
generate new lines from those points, each line very short. The maximum
index effectiveness will come when your line length is close to your buffer
width.
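
Condensed, that pipeline looks something like this (a sketch using the route
from this thread; 600 is the segment length in meters, matched to the search
radius):

SELECT s.short_line
FROM (
    SELECT ST_MakeLine(lag(dp.geom) OVER (ORDER BY dp.path), dp.geom) AS short_line
    FROM ST_DumpPoints(
             ST_Segmentize(
                 ST_GeographyFromText('SRID=4326;LINESTRING(-150.008056 61.179167,-156.77 71.285833)'),
                 600)::geometry) AS dp
) s
WHERE s.short_line IS NOT NULL;  -- lag() leaves the first segment NULL

Joining that set to the main table with ST_DWithin(location, short_line, 600)
is what the rest of the thread works out.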

P

On Thu, Jan 5, 2017 at 9:45 AM, Israel Brewster 
wrote:

> I have a database (PostgreSQL 9.6.1) containing 62,702,675 rows of
> latitude (numeric), longitude(numeric), elevation(integer) data, along with
> a PostGIS (2.3.0) geometry column (location), running on a CentOS 6.8 box
> with 64GB RAM and a RAID10 SSD data drive. I'm trying to get the maximum
> elevation along a path, for which purpose I've come up with the following
> query (for one particular path example):
>
> SELECT elevation FROM data
> WHERE ST_DWithin(location,
> ST_GeographyFromText('SRID=4326;LINESTRING(-150.008056
> 61.179167,-156.77 71.285833)'), 600)
> ORDER BY elevation DESC LIMIT 1;
>
> The EXPLAIN ANALYZE output of this particular query (
> https://explain.depesz.com/s/heZ) shows:
>
> QUERY PLAN
> --------------------------------------------------------------------------------
>  Limit  (cost=4.83..4.83 rows=1 width=4) (actual time=22653.840..22653.842
> rows=1 loops=1)
>->  Sort  (cost=4.83..4.83 rows=1 width=4) (actual
> time=22653.837..22653.837 rows=1 loops=1)
>  Sort Key: elevation DESC
>  Sort Method: top-N heapsort  Memory: 25kB
>  ->  Index Scan using location_gix on data  (cost=0.42..4.82
> rows=1 width=4) (actual time=15.786..22652.041 rows=11081 loops=1)
>Index Cond: (location && '010220E61002002C11
> A8FE41C062C0DFC2BAF1EE964E40713D0AD7A39863C086C77E164BD25140'::geography)
>Filter: (('010220E61002002C11
> A8FE41C062C0DFC2BAF1EE964E40713D0AD7A39863C086C77E164BD25140'::geography
> && _st_expand(location, '600'::double precision)) AND
> _st_dwithin(location, '010220E61002002C11
> A8FE41C062C0DFC2BAF1EE964E40713D0AD7A39863C086C77E164BD25140'::geography,
> '600'::double precision, true))
>Rows Removed by Filter: 4934534
>  Planning time: 0.741 ms
>  Execution time: 22653.906 ms
> (10 rows)
>
> So it is using the index properly, but still takes a good 22 seconds to
> run, most of which appears to be in the Index Scan.
>
> Is there any way to improve this, or is this going to be about as good as
> it gets with the number of rows being dealt with? I was planning to use
> this for a real-time display - punch in a couple of points, get some
> information about the route between, including maximum elevation - but with
> it taking 22 seconds for the longer routes at least, that doesn't make for
> the best user experience.
>
> It's perhaps worth noting that the example above is most likely a worst
> case scenario. I would expect the vast majority of routes to be
> significantly shorter, and I want to say the shorter routes query much
> faster [testing needed]. That said, the faster the better, even for short
> routes :-)
> ---
> Israel Brewster
> Systems Analyst II
> Ravn Alaska
> 5245 Airport Industrial Rd
> Fairbanks, AK 99709
> (907) 450-7293
> ---
>
>
>
>
>
>


Re: [GENERAL] Index impact on update?

2017-01-04 Thread Paul Ramsey
You'd be better off forcing the table to write in bulk with something like

CREATE TABLE mynewtable AS
SELECT *, geography(ST_SetSRID(ST_MakePoint(lng, lat), 4326)) AS geog
FROM myoldtable;

Then index the new table, rename, etc. Bulk update will, in addition to
being slow, use 2x the amount of space on disk, as all the old tuples are
left behind from the update until you cluster or vacuum full the table.
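
The follow-up steps would look something like this (a sketch; names are
placeholders, and constraints, triggers, and grants on the old table have to
be recreated by hand):

CREATE INDEX mynewtable_geog_idx ON mynewtable USING GIST (geog);
ANALYZE mynewtable;
ALTER TABLE myoldtable RENAME TO myoldtable_old;
ALTER TABLE mynewtable RENAME TO myoldtable;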

P



On Wed, Jan 4, 2017 at 8:59 AM, Israel Brewster 
wrote:

> Short version:
> Do indexes impact the speed of an UPDATE, even when the indexed columns
> aren't changing?
>
> Details:
> I have a table containing geographical data (Latitude, longitude, and
> elevation) with 406,833,705 records. The Latitude and Longitude columns are
> indexed. In order to better utilize the data, I've been looking into
> PostGIS, and decided I wanted to add a "Location" column with PostGIS type
> "GEOGRAPHY(point)". I then tried to populate it from the existing
> latitude/longitude data using the following query:
>
> UPDATE data SET location=ST_GeographyFromText(
> 'SRID=4326;POINT('||lng::text||' '||lat::text||')');
>
> I expected this update to take quite a while, since it has 406 million
> rows to update, but at this point it's been over 19 hours since I started
> the query, and it still hasn't completed.
>
> I'm wondering if the presence of the indexes could be slowing things down
> even though the indexed columns aren't being updated? Would I be better off
> canceling the update query, dropping the indexes, and trying again? Or is
> more likely that the update query is "almost" done, and it would be better
> to just let it run its course? Or is there an even better option, such as
> perhaps exporting the data, adding the additional column in a text editor,
> and re-importing the data with a COPY command?
>
> Thanks for any feedback/advice you can offer!
> ---
> Israel Brewster
> Systems Analyst II
> Ravn Alaska
> 5245 Airport Industrial Rd
> Fairbanks, AK 99709
> (907) 450-7293
> ---
>
>
>
>
>
>


Re: [GENERAL] Extensions and privileges in public schema

2016-12-04 Thread Paul Ramsey
When you create the student user, remove their create privs in public.
Then create a scratch schema and grant them privs there.
Finally, alter the student user so that the scratch schema appears FIRST in
their search path. This will cause unqualified CREATE statements to create
in the scratch schema.
For full separation, give each student their own login and set the search
path to

"$user", public

That way each student gets their own private scratch area, and it is used
by default for their creates.
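
A sketch of those steps ("student" and "scratch" are placeholder names):

REVOKE CREATE ON SCHEMA public FROM PUBLIC;  -- no more accidental CREATEs in public
CREATE SCHEMA scratch;
GRANT USAGE, CREATE ON SCHEMA scratch TO student;
ALTER ROLE student SET search_path = scratch, public;

-- or, for the per-student variant:
ALTER ROLE student SET search_path = "$user", public;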

P


On Sun, Dec 4, 2016 at 1:10 PM, Lee Hachadoorian <
lee.hachadooria...@gmail.com> wrote:

> This question is specifically motivated by my use of the PostGIS
> extension, but since other extensions create functions and other supporting
> objects in public schema, I believe it is more general.
>
> I'm teaching a university-level class using PostGIS. I have created a
> scratch schema for students to create objects in. At the end of the term I
> can drop scratch and start fresh the following term.
>
> Students of course can also create objects in public schema, and often do
> unintentionally because they forget to schema-qualify their CREATE TABLE
> statements. This complicates things because I can't drop public schema
> without dropping various PostGIS (and other) tables and functions.
> Additionally, while I doubt the students would do something like drop a
> public function or supporting table (like spatial_ref_sys), it nonetheless
> seems like a poor idea for these database objects to be vulnerable.
>
> What is considered best practices in this case? Should PostGIS extension
> be kept in its own schema (as was suggested when I asked about this on
> GIS.SE)? If I do so, can I treat public schema the way I have been using
> scratch schema, i.e. could I drop and recreate clean public schema at end
> of term? Should I leave extensions in public but limit rights of public
> role in that schema (so that they don't unintentionally create tables
> there, or accidentally delete other objects)? Or do Postgres DBA's just not
> worry about the objects in public schema, and rely upon applications and
> login roles to interact with the database intelligently?
>
> To be clear, primary goal is to keep student created objects in one schema
> which can be dropped at the end of the term. But the question of preventing
> accidental creation/deletion of objects in public schema is possibly
> related, and the overall database organization might address both concerns.
>
> Best,
> --Lee
>
>
> --
> Lee Hachadoorian
> Assistant Professor of Instruction, Geography and Urban Studies
> Assistant Director, Professional Science Master's in GIS
> Temple University
>


Re: [GENERAL] migrating data from an old postgres version

2016-07-15 Thread Paul Ramsey

> On Jul 15, 2016, at 6:55 AM, Melvin Davidson  wrote:
> 
> 
> 
> On Fri, Jul 15, 2016 at 9:46 AM, Willy-Bas Loos  > wrote:
> Hi,
> 
> A coworker is getting a new laptop and he wants to migrate some data from his 
> old one to the new one. So he installed postgres 9.5 on the new one and is 
> asking me how to migrate the data from the old 8.4 database. This database 
> includes postgis.
> 
> I asked him to make a network connection so that he can make a dump from the 
> 9.5 machine, but that seems to be tricky for him.
> He is inserting a harddrive in his new laptop which has the data directory on 
> it.
> 
> So I went to look for an 8.4 installer, but it has been EOL for a while now, so 
> I found no official source for it. But I found an 8.4.12 installer on 
> filehorse.com 
> 
> Now, maybe this is overcautious, but being a Linux man myself, I dislike 
> installing unknown software. So is there an md5 signature for that installer 
> around?
> Or maybe a more official source for an 8.4 installer?
> 
> Cheers,


Make sure to pg_dump -Fc
(note the flags)
so that in case you need to do a hard restore (if you’re going from postgis 1.x 
to 2.x) you can run the dump file through the special cleaning script.

See postgis documentation notes on “hard upgrade” for full info on doing a 
clean 1->2 upgrade. 

If he’s already blown away his old environment then yeah, you’re going to have 
to exactly duplicate it first so you can run 8.4 *and* an old version of 
postgis (have to match major version numbers) and get a clean dump file out of 
it.

P.

Re: [GENERAL] Update statement results in Out of memory

2016-07-06 Thread Paul Ramsey
Running a multi-million row update will take a long time.
It's possible you've exposed a memory leak in ST_Buffer (the older
your version of GEOS, the more likely that is) but it's also possible
you're just running a really long update.
I find for batch processing purposes that creating fresh tables is far
preferable:

CREATE TABLE newtable AS SELECT ST_Buffer(geom) ... FROM oldtable;

If you still see memory issues with the above then you probably do
have a leak, *or* you're just running buffer on a sufficiently large
input geometry or with a large enough radius to blow up the memory
naturally.

P


On Wed, Jul 6, 2016 at 2:13 AM, Ivan Bianchi  wrote:
> Hello,
>
> I am trying to update a column using the PostGIS ST_Buffer function in a
> table of 4,257,769 rows, but after 6 hours, an out-of-memory error appears
> and the kernel starts killing processes until a Kernel Panic shows up.
>
> I have simplified the buffer target geometry and also added a gist index to
> that column.
>
> The statement is the following:
>>
>> psql -h host -U user -W -d database -c "UPDATE table SET buffer =
>> ST_Buffer(simplified_geometry, 0.005);"
>
>
> After reading and tunning the configuration, I still have the same result.
>
> Here's the initial memory stats:
>
>>               total     used     free   shared  buff/cache   available
>> Mem:            15G     1.5G      12G     503M        1.4G         13G
>> Swap:          7.8G       0B     7.8G
>
>
>
> I'm running out of ideas, as I think the postgresql.conf memory parameters
> are quite low for the machine specs. I understand I can split the process
> and paginate the rows, but I can't see why I can't deal with this full
> statement right now.
>
> Do you think this issue is related to the postgres memory parameter
> configuration? Why is it not respecting the shared_buffers or
> effective_cache_size parameters, and why does it keep growing?
>
>
> Here's some info:
>
> Machine specs
>
> Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz (8 cores)
> 16 GB of memory
> Fedora release 23 (Twenty Three)
> Kernel - 4.5.7-202.fc23.x86_64
>
> postgresql.conf
>
> effective_cache_size = 5GB
> shared_buffers = 3GB
> work_mem = 10MB
>
> maintenance_work_mem = 800MB
> wal_buffers = 16MB
>
> Kernel parameters
>
> vm.overcommit_memory=2
>
> kernel.shmmax = 8340893696
> kernel.shmall = 2036351
>
> Versions:
>
> PostgreSQL 9.5.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 5.3.1
> 20160406 (Red Hat 5.3.1-6), 64-bit
> POSTGIS="2.2.2 r14797" GEOS="3.5.0-CAPI-1.9.0 r4084" PROJ="Rel. 4.9.1, 04
> March 2015" GDAL="GDAL 2.0.2, released 2016/01/26" LIBXML="2.9.3"
> LIBJSON="0.12" RASTER
>
>
> Many thanks,
>
> --
> Ivan




[GENERAL] PostGIS 2.2.0 Released

2015-10-07 Thread Paul Ramsey
http://postgis.net/2015/10/07/postgis-2.2.0

PostGIS 2.2.0 is released! Over the last two years a number of interesting new 
features have been added, such as:

* True nearest-neighbor searching for all geometry and geography types
* New volumetric geometry support, including ST_3DDifference, ST_3DUnion and 
more
* Temporal data model support and functions like ST_ClosestPointOfApproach to 
support temporal queries
* Spatial clustering functions ST_ClusterIntersecting and ST_ClusterWithin
* Subdividing large geometries with ST_Subdivide
* Fast box clipping with ST_ClipByBox2D
* In-database raster processing with ST_Retile and ST_CreateOverview
* New high-speed native code address standardizer
* Visvalingam-Whyatt geometry simplification with ST_SimplifyVW
* Support for compressed “tiny well-known binary” format with ST_AsTWKB and 
ST_GeomFromTWKB

See the full list of changes in the news file: 
  http://svn.osgeo.org/postgis/tags/2.2.0/NEWS

Thanks to everyone who helped with testing and development over this cycle!

Yours,
Team PostGIS
 




Re: [GENERAL] relpages for pg_toast.* tables

2015-08-28 Thread Paul Ramsey
In the spirit of "asking the question leads you to the answer", while
ANALYZE doesn't update stats on toast tables, VACUUM does. So running
VACUUM ANALYZE on the parent table updates all the relevant "relpages"
slots, and the resulting size summary is a passable guesstimate.
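
The estimator in question looks roughly like this (a sketch, not the exact
query from the original mail; 'mytable' is a placeholder, and the result is
only as fresh as the last VACUUM ANALYZE):

SELECT c.relname,
       (c.relpages
        + COALESCE(t.relpages, 0)                          -- toast table
        + COALESCE((SELECT sum(i.relpages)                 -- indexes
                    FROM pg_index x JOIN pg_class i ON i.oid = x.indexrelid
                    WHERE x.indrelid = c.oid), 0)
        + COALESCE((SELECT sum(ti.relpages)                -- toast index
                    FROM pg_index tx JOIN pg_class ti ON ti.oid = tx.indexrelid
                    WHERE tx.indrelid = t.oid), 0)
       ) * current_setting('block_size')::bigint AS approx_bytes
FROM pg_class c
LEFT JOIN pg_class t ON t.oid = c.reltoastrelid
WHERE c.relname = 'mytable';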

P.

On Fri, Aug 28, 2015 at 6:16 AM, Paul Ramsey  wrote:
> I've been trying to write up a "pg_total_relation_size()" replacement
> that just uses the system tables (sum up relpages for tables, indexes
> and toast tables), thereby avoiding the overhead of running stat() on
> thousands of table files on the filesystem, but I've come up against what
> seems to be an insuperable problem.
>
> The `relpages` data for the toast tables doesn't seem to get updated.
> Not when I run a general 'ANALYZE' call, and when I specifically call
> ANALYZE on the toast tables, it tells me "WARNING:  skipping
> "pg_toast_4597532" --- cannot analyze non-tables or special system
> tables". Well, OK then.
>
> Presumably this is by design, but it makes it impossible to get a true
> guesstimate (as of latest ANALYZE) of size. Are there any known
> workarounds?
>
> Thanks,
> P




[GENERAL] relpages for pg_toast.* tables

2015-08-28 Thread Paul Ramsey
I've been trying to write up a "pg_total_relation_size()" replacement
that just uses the system tables (sum up relpages for tables, indexes
and toast tables), thereby avoiding the overhead of running stat() on
thousands of table files on the filesystem, but I've come up against what
seems to be an insuperable problem.

The `relpages` data for the toast tables doesn't seem to get updated.
Not when I run a general 'ANALYZE' call, and when I specifically call
ANALYZE on the toast tables, it tells me "WARNING:  skipping
"pg_toast_4597532" --- cannot analyze non-tables or special system
tables". Well, OK then.

Presumably this is by design, but it makes it impossible to get a true
guesstimate (as of latest ANALYZE) of size. Are there any known
workarounds?

Thanks,
P




[GENERAL] Less is More

2015-06-22 Thread Paul Ramsey
Hey all,
I have an odd performance quirk I wonder if anyone has any theories for… (not a 
one-time result, table all heated up, etc)


spain=# explain analyze select way from planet_osm_point;                       
                                  QUERY PLAN
--------------------------------------------------------------------------------
 Seq Scan on planet_osm_point  (cost=0.00..37240.97 rows=1748797 width=32) 
(actual time=0.044..813.521 rows=1748797 loops=1)
 Total runtime: 902.256 ms
(2 rows)

Time: 902.690 ms

spain=# explain analyze select * from planet_osm_point;
                                                          QUERY PLAN
--------------------------------------------------------------------------------
 Seq Scan on planet_osm_point  (cost=0.00..37240.97 rows=1748797 width=848) 
(actual time=0.051..241.413 rows=1748797 loops=1)
 Total runtime: 330.271 ms
(2 rows)

Time: 331.869 ms



Thanks,

-- 
http://postgis.net
http://cleverelephant.ca



[GENERAL] Re: [postgis-users] Memory management in postgres (with liblwgeom functions in particular)

2015-03-30 Thread Paul Ramsey
Igor,
Your supposition is all correct (you might want to ask your
questions on postgis-devel, where the developer density is higher).
lwalloc by default is just a facade over malloc, for standalone use.
But, when used in PostGIS, it is backed by palloc, which is in turn
the memory manager provided by PgSQL. The PgSQL memory manager is a
hierarchical memory manager that provides independent blocks of memory
for different levels of operation. So a SQL query will get a block,
and every palloc in that context will go into that block. Then when
the query is done, the whole block gets thrown away. The upshot is
that code that uses palloc in relatively short-lived contexts (like
queries) can be pretty lazy about memory management, since the whole
context is getting tossed at the end anyways.
P.

On Mon, Mar 30, 2015 at 8:11 AM, Igor Stassiy  wrote:
> Hello,
>
> I am developing a C++ extension (most of the code is C++) for postgres that
> links dynamically with liblwgeom, without linking to postgis. I call
> liblwgeom functions that serialize/deserialize LWGEOM* (and similar
> structures) that don't need a backend like GEOS.
>
> I wonder how is the memory freed when we call lwerror, as the transaction
> will be terminated (elog/ereport create a long jump), so if I call
> liblwgeom's functions from within C++, the stack will not be unwound, and even
> if I use smart pointers it wouldn't make a difference (right?).
>
> On the contrary, when the PostGIS module loads itself, in _PG_init it overrides
> memory allocation functions of liblwgeom with pg_alloc and pg_free. Which in
> turn call palloc and pfree. And in this case when we call lwerror, the
> memory that we allocated is freed automatically (right?).
>
> I guess (just a guess) it has something to do with the memory context and
> when a memory context is "closed" the entire memory allocated within would
> be freed. But lwalloc by default is malloc, so does Postgres do something
> extremely clever like overriding malloc with its palloc?
>
> Thank you,
> Igor
>




Re: [GENERAL] Postgres not using GiST index in a lateral join

2015-03-04 Thread Paul Ramsey
Stop writing so many subqueries, think in joins; the poor planner!

SELECT DISTINCT ON (a.id) a.id AS a_id, b.id AS b_id
FROM a
JOIN b
ON ST_Contains(b.shape, a.shape)
WHERE b.kind != 1

Also, the DISTINCT ON syntax (PgSQL custom) lets you winnow a result
set down to just one of the inputs.

P.


On Wed, Mar 4, 2015 at 6:36 AM, Igor Stassiy  wrote:
> Hello,
>
> I have a query plan optimization question. It is formatted nicely on
>
> http://stackoverflow.com/questions/28856452/postgres-not-using-gist-index-in-lateral-join
>
> But here is a copy for the archive:
>
> Here is the setup:
>
> CREATE EXTENSION postgis;
> DROP TABLE IF EXISTS A;
> DROP TABLE IF EXISTS B;
> CREATE TABLE A(shape Geometry, id INT);
> CREATE TABLE B(shape Geometry, id INT, kind INT);
> CREATE INDEX ON A USING GIST (shape);
> CREATE INDEX ON B USING GIST (shape);
>
> I am running the following commands:
>
> ANALYZE A;
> ANALYZE B;
>
> -- for each row in A, select exactly one row in B (if there is one)
> -- such that B contains geometry of A
> EXPLAIN SELECT * FROM (SELECT A.id as aid, (SELECT B.id FROM B WHERE
> ST_Contains(B.shape, A.shape) AND B.kind != 1 LIMIT 1) as bid FROM A) AS
> TMP;
>
> which gives me { "Plan": { "Node Type": "Seq Scan", "Relation Name": "A",
> "Startup Cost": 0.00, "Total Cost": 2606592.33, "Plan Rows": 549745, "Plan
> Width": 1646, "Plans": [ { "Node Type": "Limit", "Parent Relationship":
> "SubPlan", "Subplan Name": "SubPlan 1", "Startup Cost": 0.00, "Total Cost":
> 4.68, "Plan Rows": 1, "Plan Width": 8, "Plans": [ { "Node Type": "Seq Scan",
> "Parent Relationship": "Outer", "Relation Name": "B", "Startup Cost": 0.00,
> "Total Cost": 4.68, "Plan Rows": 1, "Plan Width": 8, "Filter": "((shape &&
> A.shape) AND _st_contains(shape, A.shape))" } ] } ] } }
>
>
> Note that there is a sequential scan inside the lateral join, however there
> is clearly an index available. However after setting
>
> set enable_seqscan=false;
>
> the index is being used. This actually affects runtime significantly (around
> 3 times faster), and it seems that postgres should figure things like that
> automatically. { "Plan": { "Node Type": "Seq Scan", "Relation Name": "A",
> "Startup Cost": 100.00, "Total Cost": 10004716493.85, "Plan Rows":
> 549450, "Plan Width": 1677, "Plans": [ { "Node Type": "Limit", "Parent
> Relationship": "SubPlan", "Subplan Name": "SubPlan 1", "Startup Cost": 0.00,
> "Total Cost": 8.52, "Plan Rows": 1, "Plan Width": 8, "Plans": [ { "Node
> Type": "Index Scan", "Parent Relationship": "Outer", "Scan Direction":
> "NoMovement", "Index Name": "B_shape_idx", "Relation Name": "B", "Startup
> Cost": 0.00, "Total Cost": 8.52, "Plan Rows": 1, "Plan Width": 8, "Index
> Cond": "(shape && A.shape)", "Filter": "_st_contains(shape, A.shape)" } ] }
> ] } }
>
> Is there any way to tell postgres to use the index in a less hacky way? Possibly
> by rewriting the query? From what I understand the use of set enable_... is
> not recommended in production.
>
> When you actually run the commands above it will give
>
> { "Plan": { "Node Type": "Seq Scan", "Relation Name": "a", "Alias": "a",
> "Startup Cost": 0.00, "Total Cost": 10372.75, "Plan Rows": 1230, "Plan
> Width": 36, "Plans": [ { "Node Type": "Limit", "Parent Relationship":
> "SubPlan", "Subplan Name": "SubPlan 1", "Startup Cost": 0.14, "Total Cost":
> 8.41, "Plan Rows": 1, "Plan Width": 4, "Plans": [ { "Node Type": "Index
> Scan", "Parent Relationship": "Outer", "Scan Direction": "NoMovement",
> "Index Name": "b_shape_idx", "Relation Name": "b", "Alias": "b", "Startup
> Cost": 0.14, "Total Cost": 8.41, "Plan Rows": 1, "Plan Width": 4, "Index
> Cond": "(shape && a.shape)", "Filter": "((kind <> 1) AND _st_contains(shape,
> a.shape))" } ] } ] } }
>
> Unfortunately I cannot provide data to reproduce the query plan results.
>
> Thanks,
> Igor




Re: [GENERAL] autovacuum worker running amok - and me too ;)

2015-03-04 Thread Paul Ramsey
Circling back on this one, I had a look at our analyze code. I found
one place where *maybe* we weren't freeing memory and freed it, but
analyzing a 2M record table I barely see any bump up in memory usage
(from 22M up to 24M at peak) during analyze. And the change I made
didn't appear to alter that (though the objects were probably all
small enough that they weren't being detoasted into copies in any
event). Though maybe with a really big table? (with really big
objects?) Though still, doesn't analyze just pull a limited sample
(30K approx max) so why would table size make any difference after a
certain point?

P.

On Tue, Mar 3, 2015 at 3:17 PM, Tom Lane  wrote:
> wambacher  writes:
>> My system has 24GB of real memory but after some hours one autovacuum worker
>> is using 80-90% of  memory, the OOM-Killer (out of memory killer) kills the
>> process with kill -9 and the postgresql-server is restarting because of that
>> problem.
>
>> i changed the base configuration to use very small buffers, restartetd the
>> server twice but the problem still exists.
>
>> i think, it's allways the same table and that table is huge: 111GB data and
>> 3 Indices with 4GB, 128 GB and 12 GB. It's the table planet_osm_ways from
>> openstreetmap. maybe that helps.
>
> Maybe you could reduce the statistics targets for that table.
>
> I think we've heard that the analyze functions for PostGIS data types are
> memory hogs, too --- maybe it's worth inquiring about that on the postgis
> mailing lists.
>
> regards, tom lane
>
>




Re: [GENERAL] Indexing large table of coordinates with GiST

2015-01-15 Thread Paul Ramsey
 On January 15, 2015 at 12:36:29 PM, Daniel Begin 
(jfd...@hotmail.com) wrote:

> Paul, the nodes distribution is all over the world but mainly over inhabited 
> areas. However, if I had to define a limit of some sort, I would use the 
> dateline. Concerning spatial queries, I will want to find nodes that are 
> within the boundary of irregular polygons (stored in another table). Is 
> querying on irregular polygons compatible with geohashing?


Well… yes you can, although the relative efficiency compared to r-tree will 
depend a bit on how the query polygons interact with the geohash split points. 
Also, if you’re planning to slam pretty large polygons through this process, 
expect it to be kind of slow. You’ll want to do some sharding, to spread the 
problem out over multiple nodes.
 

-- 
Paul Ramsey
http://cleverelephant.ca 
http://postgis.net






Re: [GENERAL] Indexing large table of coordinates with GiST

2015-01-15 Thread Paul Ramsey
As Remi notes, going with a pointcloud approach might be wiser, particularly if 
you aren’t storing much more about the points than coordinates and other lidar 
return information. Since you’re only working with points, depending on your 
spatial distribution (over poles? dateline?) you might just geohash them and 
index them with a btree instead. The index will work better than an rtree for 
points, efficiency-wise; however, you’ll still have a multi-billion record table, 
which could cause other slowdowns, depending on your plans for accessing this 
data once you’ve indexed it.
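
A sketch of the geohash/btree idea (ST_GeoHash wants lon/lat geometry; the
10-character precision and the prefix below are illustrative assumptions, not
values from this thread):

CREATE INDEX nodes_geohash_idx ON nodes (ST_GeoHash(geom, 10) text_pattern_ops);

-- nearby points share prefixes, so area lookups become btree prefix scans:
SELECT count(*) FROM nodes WHERE ST_GeoHash(geom, 10) LIKE 'bcd3u%';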

P.

-- 
Paul Ramsey
http://cleverelephant.ca
http://postgis.net

On January 15, 2015 at 8:44:03 AM, Rémi Cura (remi.c...@gmail.com) wrote:

Hey,
You may want to post this on postGIS list.

I take that so many rows mean either raster or point cloud.
If it is point cloud simply consider using pg_pointcloud.
A 6 billion point cloud is about 600 k lines for one of my data set.

If it is raster, you may consider using postgis raster type.
If you really want to keep that much geometry,
you may want to partition your data on a regular grid.
Cheers,
Rémi-C

2015-01-15 15:45 GMT+01:00 Andy Colson :
On 1/15/2015 6:44 AM, Daniel Begin wrote:
Hi, I'm trying to create an index on coordinates (geography type) over a
large table (4.5 billion records) using GiST...

CREATE INDEX nodes_geom_idx ON nodes USING gist (geom);

The command ran for 5 days until my computer stopped because of a power outage!
Before restarting the index creation, I am asking the community if there are
ways to shorten the time it took the first time :-)

Any idea?

Daniel




Set maintenance_work_mem as large as you can.

-Andy
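
Concretely (a sketch; the value is an assumption to size against available RAM,
and only needs to be raised in the session doing the build):

SET maintenance_work_mem = '2GB';
CREATE INDEX nodes_geom_idx ON nodes USING gist (geom);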






Re: [GENERAL] spgist index not getting used

2014-09-24 Thread Paul Ramsey
Yep, that was a typo (or, rather, an unpushed commit). And yep, the lack of a 
commutator was the problem.  Thanks so much, it’s a huge relief to see it 
turning over properly :) now, onwards to actually doing the PostGIS 
implementation.

(On a semi-related note: if the spgist example had been in contrib, so that it had 
to show a working instance of the SQL definitions of the operators and operator 
classes, it would have been much easier to get started. For an API like spgist 
that is really meant for extension, having the example in contrib rather than 
core would help a lot.)

Thanks again Tom!

P.


--  
http://postgis.net  
http://cleverelephant.ca


On September 24, 2014 at 8:24:02 PM, Tom Lane (t...@sss.pgh.pa.us) wrote:
> Paul Ramsey writes:
> > My C implementation is here 
> > https://github.com/pramsey/postgis/blob/spgist/postgis/gserialized_spgist_2d.c
> >   
> > My SQL binding calls are here 
> > https://github.com/pramsey/postgis/blob/spgist/postgis/gserialized_spgist_2d.sql
> >   
>  
> > Thanks to help from Andres Freund, I can now build an index based on my 
> > extension. However,  
> when I run a query using the operator(s) I have defined, the query never uses 
> my index,  
> it always sequence scans.
>  
> > explain analyze select * from somepoints where 
> > '(5898.82450178266,7990.24286679924)'::point  
> = pt;
>  
> (I assume that's a typo and you meant "... &=& pt")
>  
> As stated, this WHERE clause cannot be used with the index: indexable
> clauses in PG are always of the form "indexed_column operator something",
> and you've written it the other way round. I gather that you think the
> operator is commutative; but since you didn't declare that, the planner
> doesn't know it can flip the clause around. Try adding "commutator = &=&"
> to the declaration of the "point &=& point" operator.
>  
> regards, tom lane
>  





Re: [GENERAL] spgist index not getting used

2014-09-24 Thread Paul Ramsey
Still no go. I actually tried a bunch of different selectivity functions too, 
and the planner correctly used them to estimate the number of potentially 
returned rows, but in no case did the index actually kick in, no matter 
how selective I made the operator appear. 

P.


--  
http://postgis.net  
http://cleverelephant.ca


On September 24, 2014 at 4:32:35 PM, Peter Geoghegan 
(peter.geoghega...@gmail.com) wrote:
> On Wed, Sep 24, 2014 at 2:01 PM, Paul Ramsey wrote:
> > If I build an index on the same table using the internal quad-tree ops, and
> > use their operator, I do get an index scan.
>  
>  
> What about when enable_seqscan = off?
>  
> --
> Regards,
> Peter Geoghegan
>  





[GENERAL] spgist index not getting used

2014-09-24 Thread Paul Ramsey
Hi all,
I continue to bang along towards a binding of the spgist api from a run-time 
extension (postgis, in this case).
To avoid complication, I am actually not doing any postgis code at this point, 
just copying the internal point quadtree implementation and seeing if I can get 
it to turn over.

My C implementation is here 
https://github.com/pramsey/postgis/blob/spgist/postgis/gserialized_spgist_2d.c
My SQL binding calls are here 
https://github.com/pramsey/postgis/blob/spgist/postgis/gserialized_spgist_2d.sql

Thanks to help from Andres Freund, I can now build an index based on my 
extension. However, when I run a query using the operator(s) I have defined, 
the query never uses my index, it always sequence scans.

explain analyze select * from somepoints where 
'(5898.82450178266,7990.24286679924)'::point = pt;
                                                 QUERY PLAN
--------------------------------------------------------------------------------
 Seq Scan on somepoints  (cost=0.00..1887.00 rows=100 width=20) (actual 
time=26.675..26.675 rows=0 loops=1)
   Filter: ('(5898.82450178266,7990.24286679924)'::point = pt)
   Rows Removed by Filter: 10
 Total runtime: 26.743 ms


If I build an index on the same table using the internal quad-tree ops, and use 
their operator, I do get an index scan.

The database is recognizing that the index is there, and if I put a breakpoint 
on the spgist ‘config’ API function, I see it getting turned over as the query 
starts and the planner looks at things, but none of the other hooks get called, 
and the plan ends up being a sequence scan every time.

So, the system knows the index exists, it just thinks it is irrelevant to the 
query. The system also knows my operators exist, and uses them (in sequence 
scan mode). But even though they are bound into strategies declared for the 
operator class, the index is not getting used.

I’ve poked around looking at all the places I can in the system catalogue to 
try and find out what might differ between my index and the internal quad-tree, 
but no luck so far: they seem to be defined exactly the same.

Presumably I’ve again forgotten something simple-yet-crucial in my attempt to 
bind this access method to the point type: any suggestions for fixes or at 
least diagnostics I can run to get more clues?

Thanks,

P


-- 
Paul Ramsey
http://cleverelephant.ca
http://postgis.net

Re: [GENERAL] fmgr_oldstyle in extensions

2014-09-23 Thread Paul Ramsey
Oh vey, I knew after spending a half-day walking through the debugger it would 
turn out to be a trivial mistake on my part. This is always how it is when the 
Magic Fails to Work :) Thanks so much for the help! (and now I know a lot more 
about the function manager (well, the old one)).

P

-- 
Paul Ramsey
http://cleverelephant.ca
http://postgis.net

On September 23, 2014 at 10:02:33 AM, Andres Freund (and...@2ndquadrant.com) 
wrote:

You forgot to add a PG_FUNCTION_INFO_V1(yourfunc); for the function. 


[GENERAL] fmgr_oldstyle in extensions

2014-09-23 Thread Paul Ramsey
 
Hi all,

I’m trying to implement an spgist index in the PostGIS extension, which seems 
like it should work, but is thus far not working for what appear (to me) to be 
issues in the way spgist expects to pass pointers to user-defined functions.  

Right before anything happens, spgist calls a ‘config’ routine in the user 
code. This is the code from the example implementation (which is internal to 
pgsql, not a run-time add-on like postgis)  

 Datum  
 spg_quad_config(PG_FUNCTION_ARGS)
 {
 /* spgConfigIn *cfgin = (spgConfigIn *) PG_GETARG_POINTER(0); */
 spgConfigOut *cfg = (spgConfigOut *) PG_GETARG_POINTER(1); 

 cfg->prefixType = POINTOID;  
 cfg->labelType = VOIDOID; /* we don't need node labels */
 cfg->canReturnData = true;
 cfg->longValuesOK = false;
 PG_RETURN_VOID();
 }

It is called from the spgist core in the spgGetCache() function, via 

 FunctionCall2Coll(procinfo,
   index->rd_indcollation[0],
   PointerGetDatum(&in),
   PointerGetDatum(&cache->config));

The part the user function cares about is the pointer to cache->config. 

In the core code, the call stack to the user function goes like this:

postgres`spg_quad_config(fcinfo=0x7fff5f0faf90) + 18 at spgquadtreeproc.c:29
postgres`FunctionCall2Coll(flinfo=0x7ff59a804cc8, collation=0, 
arg1=140734788252568, arg2=140692835814944) + 150 at fmgr.c:1327
 postgres`spgGetCache(index=0x00010254be68) + 220 at spgutils.c:71

So, spgGetCache to FunctionCall2Coll to spg_quad_config.

In my user-defined version of the same thing (I just copied the C code and 
wrote a CREATE OPERATOR CLASS for it) the call stack is this

postgis-2.2.so`gserialized_spgist_quadtree_2d_config(fcinfo=0x7fff5f0fb398) 
+ 30 at gserialized_spgist_2d.c:60
postgres`fmgr_oldstyle(fcinfo=0x7fff5f0faf90) + 424 at fmgr.c:678
postgres`FunctionCall2Coll(flinfo=0x7ff59a039cc8, collation=0, 
arg1=140734788252568, arg2=140692827643424) + 150 at fmgr.c:1327
postgres`spgGetCache(index=0x00010254be68) + 220 at spgutils.c:71

So, spgGetCache to FunctionCall2Coll to fmgr_oldstyle to 
gserialized_spgist_quadtree_2d_config! 

On the way through fmgr_oldstyle things go very much awry and the 
gserialized_spgist_quadtree_2d_config doesn’t get called with a pointer to an 
fcinfo, but with a pointer to the block of memory occupied by the first 
argument, and from there things naturally stop working.

So, I’m wondering what I could have done that is causing my calls to route 
through fmgr_oldstyle instead of the usual path?

P.

--  
Paul Ramsey
http://cleverelephant.ca  
http://postgis.net






Re: [GENERAL] Analyze against a table with geometry columns runs out of memory

2014-05-08 Thread Paul Ramsey
Roxanne, you seem to have isolated the problem to a particular geometry column, 
which speaks to this being a PostGIS problem. Since the analyze code was 
re-written in 2.1, and your issue is coming up in a 2.0>2.1 upgrade, that 
further points to the issue potentially being a PostGIS problem. Unless the 
same data works in a PgSQL 9.2/PostGIS 2.1 combo, it seems clear that PgSQL 9.3 
is not the culprit here. (Though I would love to be reassured that 9.2/2.1 
combo also does not work, since that eliminates a bad interaction between 
9.3/2.1 as the issue.)

At some point in order to debug I’ll probably need a copy of the data, or 
access to a system that has the data and a dev environment. Please do file a 
ticket at http://trac.osgeo.org/postgis on this issue.

P

-- 
Paul Ramsey
http://cleverelephant.ca
http://postgis.net

On May 7, 2014 at 11:15:10 PM, Roxanne Reid-Bennett (r...@tara-lu.com) wrote:

Hello,  

We are working out the upgrade of our servers from Postgres 9.1 and  
Postgis 2.0 to Postgres 9.3 and Postgis 2.1  
After building the base stack, The System Admin restored the database  
from a backup. [I'll ask for more details if you need them]  

I have 3 tables with geometry columns in them that run the system out of
memory when they are autovacuumed, vacuumed, or analyzed. I
have isolated that the problem for one of the tables is related to a
geometry column. I have tables in the system that are much larger on  
disk with geometry columns in them that vacuum analyze just fine, so it  
isn't just that they have geometry columns. Two of the tables are  
related to each other, the other is a load of Government supplied data  
and completely separate in detail and concept for data.  

Using the smallest table... we looked at maintenance_work_mem and tried  
several runs with varying values [16MB, 64MB, 256MB, and 500MB]. Larger  
maintenance_work_mem allows the process to run longer before it starts  
gobbling up swap, but the process still spends most of it's time in  
"uninterruptible sleep (usually IO)" state and just eats up the swap  
until all of the memory is gone.  

Smallest table definition, config and log file entries, etc follow  
below. If I have failed to provide necessary or desired information,  
just ask.  

We have noted that the memory management was changed going into 9.3 -  
but we haven't been able to find anything that would indicate any known  
issues ... This problem caused us to take a hard look at the stack  
again, and we will be building a new stack anyway because we need a  
newer GEOS - but we are seriously considering dropping Postgres back to  
9.2.  

I am out of ideas on what else to try after maintenance_work_mem ...  
Does anybody have any suggestions/questions/observations for me?  

Thank you.  

Roxanne  
--  

VirtualBox: 4.1.24 Intel Xeon 2.13 GHz (8) 48 GB RAM
Virtual Box instance: 64 Bit 4 Processors Base Memory: 12Gb  

running Ubuntu 12.04.1 LTS  
Linux 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012  
x86_64 x86_64 x86_64 GNU/Linux  

Postgres: PostgreSQL 9.3.4 on x86_64-unknown-linux-gnu, compiled by gcc  
(Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit  
PostGis: POSTGIS="2.1.2 r12389" GEOS="3.3.3-CAPI-1.7.4" PROJ="Rel.  
4.8.0, 6 March 2012" GDAL="GDAL 1.9.2, released 2012/10/08"  
LIBXML="2.7.8" LIBJSON="UNKNOWN" TOPOLOGY RASTER  

Postgres.conf entries (of probable interest - if I didn't list it, it's  
probably defaulted):  

max_connections = 100  
shared_buffers = 4089196kB  
work_mem = 128MB  
maintenance_work_mem = 64MB  
checkpoint_segments = 64  
checkpoint_timeout = 30min  
checkpoint_completion_target = 0.75  
effective_cache_size = 4089196kB  
default_statistics_target = 200  
autovacuum_max_workers = 1 [this is normally set to 3]  

Analyzing the original table "activity" failed. Using a copy of the  
original table with no indexes, no foreign keys, no constraints also  
failed. However, dropping one of the two geometry columns (region) out  
of the copy allowed it to succeed. Taking a copy of just "region" which  
contains (Multi)Polygons and the primary key via "CREATE TABLE ... as  
(Select...)", from the original table "activity" to create  
temp.region... analyze runs out of memory. The following were run  
against temp.region.  
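
Spelled out, that isolation step is just (names as in this report):

CREATE TABLE temp.region AS
  SELECT activity_id, region FROM activity;
ANALYZE VERBOSE temp.region;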

smallest/shortest table definition from \d:  

Table "temp.region"  
Column | Type | Modifiers  
-+-+---  
activity_id | integer |  
region | geometry(Geometry,4326) |  

  
HQ4_Staging=# analyze verbose temp.region;  
INFO: 0: analyzing "temp.region"  
LOCATION: do_analyze_rel, analyze.c:335  
INFO: 0: "region": scanned 1022 of 1022 pages, containing 52990  
live rows and 0 dead rows; 52990 rows in sample, 52990 estimated total rows  
LOCATION: ac

Re: [GENERAL] round(real,integer)

2014-01-10 Thread Paul Ramsey
True (?) though I’m guessing the real test for most folks is whether printf renders 
it as expected. Anything else is icing on the cake, no?

P

-- 
Paul Ramsey
http://cleverelephant.ca
http://postgis.net

On January 10, 2014 at 1:09:24 PM, Tom Lane (t...@sss.pgh.pa.us) wrote:

to, say, 6 digits produces an exact 
decimal answer. Even if you're not exceeding 6 digits overall, it's 
unlikely that the answ

Re: [GENERAL] round(real,integer)

2014-01-10 Thread Paul Ramsey
What surprises do you think would come from a

round(real, integer) returns real

function? 

Just asking the question, I guess I can see the answer, since though round() is 
usually used to reduce precision, it’s also possible to use it to increase it 
arbitrarily… bah.

It does bug me a fair bit, so perhaps I’ve finally found my bite-size pgsql 
contribution project :)

P.

-- 
Paul Ramsey
http://cleverelephant.ca
http://postgis.net

On January 10, 2014 at 12:37:04 PM, Tom Lane (t...@sss.pgh.pa.us) wrote:

Paul Ramsey  writes:  
> History question:  
> Why does select round(3,3) work,   
>          select round(3.0,3) work,   
> but      select round(3.0::real,1) not work?  

The 2-argument round() function actually takes (numeric, integer).  

There's an implicit cast from int to numeric, but not an implicit  
cast from real to numeric (for that pair of types, the direction  
of implicit casting is from numeric to real).  

The choices we made for implicit casting behavior among the numeric  
datatypes are based on what it says in the SQL standard about exact and  
approximate numeric types. There's probably room for argument about  
the details, but it seems unlikely that we'd risk the breakage that'd  
ensue from rejiggering the casting rules at this stage.  

If this annoys you enough, a more realistic solution would be to add  
an additional round(float8, int) function. I think it'd have to return  
numeric though if you don't want surprises.  

regards, tom lane  
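
A user-level version of what Tom suggests is a one-liner; a sketch, returning
numeric as he recommends:

CREATE FUNCTION round(float8, int) RETURNS numeric AS
  'SELECT round($1::numeric, $2)'
LANGUAGE SQL IMMUTABLE STRICT;

SELECT round(3.0::real, 1);  -- now resolves, via the implicit real -> float8 cast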


[GENERAL] round(real,integer)

2014-01-10 Thread Paul Ramsey
History question:
Why does select round(3,3) work, 
         select round(3.0,3) work, 
but      select round(3.0::real,1) not work?

There's a utility cast in the integer case (described here 
http://www.postgresql.org/docs/9.3/static/typeconv-func.html), but not in the 
real case.
Is this on purpose, or just an oversight? 
Obviously one can work around it, but I'm sick of doing so, it makes my SQL 
examples ugly. I’d like to fix it, but not if there’s a reason it’s not 
supposed to be done.
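For reference, the workaround in question is the double cast:

select round(3.0::real::numeric, 1);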
P.

--  
Paul Ramsey
http://cleverelephant.ca  
http://postgis.net




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Upgrading from postgreSQL 9.1.6 to 9.3

2013-09-19 Thread Paul Ramsey
Recommendation: do one at a time. First postgis 1.5 to 2.1, then pg 9.1 to 9.3. 
P.

-- 
Paul Ramsey
http://cleverelephant.ca
http://postgis.net


On Thursday, September 19, 2013 at 4:34 PM, fburg...@radiantblue.com wrote:

> 
> 
> Are there any showstoppers/recommendations/experiences with upgrading from 
> Postgres 9.1.6 Postgis 1.5.3 to PostGres 9.3 and associated PostGIS version 
> XX on rhel 6.4 that will help the process go smoothly.
> 
> 
> 
> thanks 




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] earthdistance compass bearing

2013-06-18 Thread Paul Ramsey
The code for azimuth on a sphere isn't so gnarly you couldn't whip it up in 
plpgsql, 

http://trac.osgeo.org/postgis/browser/trunk/liblwgeom/lwgeodetic.c#L924
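
A minimal sketch of that calculation in plain SQL (spherical model, initial
bearing in radians clockwise from north; the function name is made up):

CREATE FUNCTION initial_bearing(lon1 float8, lat1 float8,
                                lon2 float8, lat2 float8)
RETURNS float8 AS $$
  SELECT atan2(
    sin(radians(lon2 - lon1)) * cos(radians(lat2)),
    cos(radians(lat1)) * sin(radians(lat2)) -
    sin(radians(lat1)) * cos(radians(lat2)) * cos(radians(lon2 - lon1))
  );
$$ LANGUAGE SQL IMMUTABLE;

-- e.g. degrees(initial_bearing(hotel_lon, hotel_lat, airport_lon, airport_lat))
-- gives the hotel-to-airport bearing in degrees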

P. 

-- 
Paul Ramsey
http://cleverelephant.ca
http://postgis.net


On Tuesday, June 18, 2013 at 11:16 AM, Jeff Herrin wrote:

> I don't need it to be too accurate. We're pushing hotel info into the GDS 
> (sabre, expedia, orbitz, etc). They require airport info relative to the 
> hotel. Example: DFW is 25 miles NW of the property. I thought about just 
> faking it...comparing the hotel's lat/long from the airports. I can probably 
> get N,S,E,W reliably enough, but i'm not sure at what point N becomes NW, 
> etc. That just seems like a really crude bad way to do it, but the 
> alternatives seem unnecessarily complex. I found some examples that use 
> bearing but they all take headings in degrees (which im not seeing in 
> earthdistance). I guess I'm going to have to either setup postGIS or brush up 
> on my trig.
> 
> thanks,
> altimage
> 
> From: "Steve Crawford"  (mailto:scrawf...@pinpointresearch.com)>
> To: "Jeff Herrin" mailto:j...@openhotel.com)>
> Cc: pgsql-general@postgresql.org (mailto:pgsql-general@postgresql.org)
> Sent: Tuesday, June 18, 2013 11:37:10 AM
> Subject: Re: [GENERAL] earthdistance compass bearing
> 
> On 06/18/2013 10:42 AM, Jeff Herrin wrote:
> > I'm trying to get a compass bearing (N,S,NW,etc) using earthdistance. I can 
> > successfully get the distance between 2 points using either the point or 
> > cube method, but I've been struggling with getting the bearing. Any tips?
> 
> 
> PostGIS has some functions that may be of use but might be overkill depending 
> on your use but I don't see anything in earthdistance.
> 
> What are you trying to solve?
> 
> It's one thing if you are looking for a one-degree-accurate 
> magnetic-variation-compensated great-circle heading for a 6,000km flight 
> using WGS84 projection (initial-heading, of course, as it will vary over the 
> course of your travel).
> 
> If you just want to be accurate to eight compass-points over a few 
> city-blocks then simple trig is probably more than sufficient.
> 
> Cheers,
> Steve





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What's a good way to improve this query?

2013-06-05 Thread Paul Ramsey
Well, your objects are larger than the page size, so you're getting them out of 
the toast tables, not directly out of main storage. You may also have your type 
declared as 'main' storage, which means it's zipped up, so it's being unzipped 
before you can access it, that's also an overhead.  

For metadata retrieval, the thing to do is store the metadata at the head of 
the object (which I'm not looking into pgraster to see if you do, but I'll 
assume for now) and then use PG_DETOAST_DATUM_SLICE in the metadata accessor 
function, so that you only pull the bytes you want, rather than detoasting the 
whole object just to get the header information.

You may be causing further pain by having all the metadata functions separate, 
so that in fact the object is being read 9 separate times by your different 
functions. It'll float into cache quickly enough, but the uncompression step at 
each access will still be there. You might want to stuff the query through a 
sampling profiler (OSX Shark!) and confirm, but I would guess you'll find a lot 
of cycles spinning in zlib for this query.
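
Short of fixing the C side, a user-level workaround sketch: pay the detoast
cost once and cache the metadata in a side table (the table name here is
made up):

CREATE TABLE my_raster_meta AS
SELECT rid,
       st_width(rast)    AS width,
       st_height(rast)   AS height,
       st_scalex(rast)   AS scalex,
       st_scaley(rast)   AS scaley,
       st_numbands(rast) AS numbands
FROM my_postgis_raster_table;

-- later metadata lookups never touch (or unzip) 'rast':
SELECT width, height, numbands FROM my_raster_meta WHERE rid = 1;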

Paul  

--  
Paul Ramsey
http://cleverelephant.ca
http://postgis.net


On Wednesday, June 5, 2013 at 11:22 AM, Jorge Arévalo wrote:

> Hello,  
>  
> I'm running this PostGIS Raster query
>  
> select  
> st_scalex(rast),  
> st_scaley(rast),  
> st_skewx(rast),  
> st_skewy(rast),  
> st_width(rast),  
> st_height(rast),  
> rid,  
> st_upperleftx(rast),  
> st_upperlefty(rast),  
> st_numbands(rast)  
> from  
> my_postgis_raster_table
>  
>  
>  
> I want to remark that, even when 'rast' is a complex type and can be really 
> big, I'm getting just metadata. Not the whole 'rast' column. Anyway, the 
> average dimensions of a 'rast' column in like 600x400 pixels (8 bits per 
> pixel). So, not so big (about 234 KB per rast object).  
>  
> My table has 1257 rows, and this query takes about 45 secs to execute (45646 
> msecs). I think it's too slow. I'm just getting metadata, not the whole 
> 'rast' object, as said.  
>  
> This is the explain analyze output
>  
> Seq Scan on my_postgis_raster_table (cost=0.00..198.85 rows=1257 width=36) 
> (actual time=86.867..51861.495 rows=1257 loops=1)
> Total runtime: 51863.919 ms
>  
>  
>  
> So, basically a sequential scan. As expected, I guess (I'm not a postgres 
> expert, so sorry if I'm talking nonsense)
>  
> I've calculated the effective transfer rate for this table
>  
> SELECT pg_size_pretty(CAST(pg_relation_size('my_postgis_raster_table') / 
> 45646 * 1000 as int8)) AS bytes_per_second;  
>  
> It works out to about 27KB per second. Isn't that a slow rate? Is there any kind of index I could create 
> to speed this query? Maybe use some kind of cache system?
>  
> Many thanks in advance,
>  
> --  
> Jorge Arevalo
> Freelance developer
>  
> http://www.krop.com/jorgearevalo
> http://about.me/jorgeas80
>  
> Enviado con Sparrow (http://www.sparrowmailapp.com/?sig)
>  
>  
>  
>  
> --  
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Strange dump/restore effect

2012-09-24 Thread Paul Ramsey
Try just loading the 1.5 dump directly into the 2.0 database without
the filtering step. It will be immensely noisy with lots of errors and
warnings, but with luck you should find your data is there waiting for
you when it's done.

P

On Sun, Sep 23, 2012 at 2:37 PM, Gražvydas Valeika  wrote:
> Hi all,
>
> I just migrated to 9.2 and observing stange thing.
>
> While restoring 9.2 database to another server's empty database I'm getting
> several errors while restoring views:
>
> pg_restore: creating RULE _RETURN
> pg_restore: [archiver (db)] Error while PROCESSING TOC:
> pg_restore: [archiver (db)] Error from TOC entry 5466; 2618 26660 RULE
> _RETURN postgres
> pg_restore: [archiver (db)] could not execute query: ERROR:  syntax error at
> or near ")"
> LINE 2: ALTER VIEW v_vehicle SET ();
>   ^
> Command was: CREATE RULE "_RETURN" AS ON SELECT TO v_vehicle DO INSTEAD
> SELECT v.vehicle_id, v.vehicle_code, v.home_location_id, v.vehicl...
> pg_restore: creating RULE _RETURN
> pg_restore: [archiver (db)] Error from TOC entry 5487; 2618 26835 RULE
> _RETURN postgres
> pg_restore: [archiver (db)] could not execute query: ERROR:  syntax error at
> or near ")"
> LINE 2: ALTER VIEW v_r_delivery SET ();
>
> There is lot of views, other are backuped/restored without problems.
> Those several views which are restored with errors are visible in new
> database as empty tables.
>
> Source database is produced by PostGIS 2.0 migration script from 9.1/PostGIS
> 1.5 database backup. Attempt to drop those strange views, recreate them and
> then run dump/restore doesn't change anything.
>
>
> Same backup/restore procedure worked fine in 9.1.
>
> Any ideas?
>
> Grazvydas
>
>


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgis 1.5.2 installation configure: WARNING: could not locate CUnit required for liblwgeom unit tests

2011-02-08 Thread Paul Ramsey

On 2011-02-07, at 11:27 AM, Edoardo Panfili wrote:

> On 07/02/11 18.55, Paul Ramsey wrote:
>> Well, maybe you could in-place upgrade if you left your PostGIS version
>> at the original and only upgraded the PostgreSQL part, but you aren't
>> doing that, you're also upgrading your PostGIS version.
>> 
>> pg_dump the database
>> create a new database on the new server, install postgis in it
>> pg_restore the database
> I am at the beginning with postigis. This is also my way to copy the data 
> from one machine to another. I have the same server in both the machines 
> (postgresql 8.4 + postgis 1.5), no errors during this process.
> 
>> ignore the many errors
> this sentence scares me a bit (for my future)... what kind of errors?

"error messages" would be a better term. In the procedure above you are 
installing a database dump, which will naturally include the PostGIS function 
and type definitions into a database in which you have *already* installed the 
PostGIS function and type definitions. Naturally you get a lot of noise as the 
functions in the dump try to install and hit the existing types and functions. 
But it's sound and fury signifying nothing.

P.

> 
> 
> Edoardo
> 
> 
> 
>> On 2011-02-07, at 9:49 AM, akp geek wrote:
>> 
>>> 
>>> Please pardon my ignorance. The reason I am worried about it is, when
>>> I tried to use pg_upgrade to migrate from 8.3 to 9.0.2. I am getting
>>> an error
>>> 
>>> Failed to load library: $libdir/liblwgeom
>>> ERROR: could not access file "$libdir/liblwgeom": No such file or
>>> directory
>>> 
>>> I am using pg_upgrade for upgrading
>>> 
>>> 
>>> Regards
>>> 
>>> On Mon, Feb 7, 2011 at 12:39 PM, Paul Ramsey
>>> mailto:pram...@cleverelephant.ca>> wrote:
>>> 
>>>It's not a dynlib, it's statically linked at build time, so have
>>>no fear. Stop thinking so much :)
>>>P
>>> 
>>>On 2011-02-07, at 9:38 AM, akp geek wrote:
>>> 
>>>>installation was successful. But it did not install the liblwgeom.so
>>>> 
>>>>Regards
>>>> 
>>>>On Mon, Feb 7, 2011 at 12:29 PM, Paul Ramsey
>>>>mailto:pram...@cleverelephant.ca>> wrote:
>>>> 
>>>>It's just a warning, continue happily onwards. Just means a
>>>>few unit tests won't be run.
>>>> 
>>>>P.
>>>> 
>>>> 
>>>>On 2011-02-07, at 9:27 AM, akp geek wrote:
>>>> 
>>>>>Hi All -
>>>>> 
>>>>>I am trying to install postgis 1.5.2 on solaris10. When I
>>>>>run the configure I get the following.
>>>>> 
>>>>>*configure: WARNING: could not locate CUnit required for
>>>>>liblwgeom unit tests*
>>>>> 
>>>>>is there some setting I need to do to make it work?
>>>>> 
>>>>> 
>>>>>$./configure --prefix=/opt/postgres/gis
>>>>>--with-geosconfig=/opt/postgres/gis/bin/geos-config
>>>>>--with-projdir=/opt/postgres/gis
>>>>>checking build system type... sparc-sun-solaris2.10
>>>>>checking host system type... sparc-sun-solaris2.10
>>>>>checking for gcc... gcc
>>>>>checking whether the C compiler works... ^C10.112.161.124$
>>>>>$./configure --prefix=/opt/postgres/gis
>>>>>--with-geosconfig=/opt/postgres/gis/bin/geos-config
>>>>>--with-projdir=/opt/postgres/gis
>>>>>checking build system type... sparc-sun-solaris2.10
>>>>>checking host system type... sparc-sun-solaris2.10
>>>>>checking for gcc... gcc
>>>>>checking whether the C compiler works... yes
>>>>>checking for C compiler default output file name... a.out
>>>>>checking for suffix of executables...
>>>>>checking whether we are cross compiling... no
>>>>>checking for suffix of object files... o
>>>>>checking whether we are using the GNU C compiler... yes
>>>>>checking whether gcc accepts -g... yes
>>>>>checking for gcc option to accept ISO C89... none needed
>>>>>checking for a sed that does not tru

Re: [GENERAL] postgis 1.5.2 installation configure: WARNING: could not locate CUnit required for liblwgeom unit tests

2011-02-07 Thread Paul Ramsey
I'm not sure you can in-place upgrade a postgis database...

On 2011-02-07, at 9:49 AM, akp geek wrote:

> 
> Please pardon my ignorance. The reason I am worried about it is, when I tried 
> to use pg_upgrade to migrate from 8.3 to 9.0.2. I am getting an error
> 
> Failed to load library: $libdir/liblwgeom
> ERROR:  could not access file "$libdir/liblwgeom": No such file or directory
> 
> I am using pg_upgrade for upgrading
> 
> 
> Regards
> 
> On Mon, Feb 7, 2011 at 12:39 PM, Paul Ramsey  
> wrote:
> It's not a dynlib, it's statically linked at build time, so have no fear. 
> Stop thinking so much :)
> P
> 
> On 2011-02-07, at 9:38 AM, akp geek wrote:
> 
>> installation was successful. But it did not install the liblwgeom.so 
>> 
>> Regards
>> 
>> On Mon, Feb 7, 2011 at 12:29 PM, Paul Ramsey  
>> wrote:
>> It's just a warning, continue happily onwards. Just means a few unit tests 
>> won't be run.
>> 
>> P.
>> 
>> 
>> On 2011-02-07, at 9:27 AM, akp geek wrote:
>> 
>>> Hi All -
>>> 
>>> I am trying to install postgis 1.5.2 on solaris10. When I run 
>>> the configure I get the following. 
>>> 
>>> configure: WARNING: could not locate CUnit required for 
>>> liblwgeom unit tests
>>> 
>>> is there some setting I need to do to make it work?
>>> 
>>> 
>>> $./configure --prefix=/opt/postgres/gis 
>>> --with-geosconfig=/opt/postgres/gis/bin/geos-config  
>>> --with-projdir=/opt/postgres/gis
>>> checking build system type... sparc-sun-solaris2.10
>>> checking host system type... sparc-sun-solaris2.10
>>> checking for gcc... gcc
>>> checking whether the C compiler works... ^C10.112.161.124$
>>> $./configure --prefix=/opt/postgres/gis 
>>> --with-geosconfig=/opt/postgres/gis/bin/geos-config  
>>> --with-projdir=/opt/postgres/gis
>>> checking build system type... sparc-sun-solaris2.10
>>> checking host system type... sparc-sun-solaris2.10
>>> checking for gcc... gcc
>>> checking whether the C compiler works... yes
>>> checking for C compiler default output file name... a.out
>>> checking for suffix of executables...
>>> checking whether we are cross compiling... no
>>> checking for suffix of object files... o
>>> checking whether we are using the GNU C compiler... yes
>>> checking whether gcc accepts -g... yes
>>> checking for gcc option to accept ISO C89... none needed
>>> checking for a sed that does not truncate output... /usr/5bin/sed
>>> checking for grep that handles long lines and -e... /usr/sfw/bin/ggrep
>>> checking for egrep... /usr/sfw/bin/ggrep -E
>>> checking for fgrep... /usr/sfw/bin/ggrep -F
>>> checking for ld used by gcc... /usr/ccs/bin/ld
>>> checking if the linker (/usr/ccs/bin/ld) is GNU ld... no
>>> checking for BSD- or MS-compatible name lister (nm)... /usr/ccs/bin/nm -p
>>> checking the name lister (/usr/ccs/bin/nm -p) interface... BSD nm
>>> checking whether ln -s works... yes
>>> checking the maximum length of command line arguments... 786240
>>> checking whether the shell understands some XSI constructs... yes
>>> checking whether the shell understands "+="... no
>>> checking for /usr/ccs/bin/ld option to reload object files... -r
>>> checking for objdump... no
>>> checking how to recognize dependent libraries... pass_all
>>> checking for ar... ar
>>> checking for strip... strip
>>> checking for ranlib... ranlib
>>> checking command to parse /usr/ccs/bin/nm -p output from gcc object... ok
>>> checking how to run the C preprocessor... gcc -E
>>> checking for ANSI C header files... yes
>>> checking for sys/types.h... yes
>>> checking for sys/stat.h... yes
>>> checking for stdlib.h... yes
>>> checking for string.h... yes
>>> checking for memory.h... yes
>>> checking for strings.h... yes
>>> checking for inttypes.h... yes
>>> checking for stdint.h... yes
>>> checking for unistd.h... yes
>>> checking for dlfcn.h... yes
>>> checking for objdir... .libs
>>> checking if gcc supports -fno-rtti -fno-exceptions... no
>>> checking for gcc option to produce PIC... -fPIC -DPIC
>>> checking if gcc PIC flag -fPIC -DPIC works... yes
>>> checking if gcc static flag -static works... no
>>> checking if gcc supports -c -o file.o... yes
>>> checking if gcc supports -c -o file.o... (

Re: [GENERAL] postgis 1.5.2 installation configure: WARNING: could not locate CUnit required for liblwgeom unit tests

2011-02-07 Thread Paul Ramsey
Well, maybe you could in-place upgrade if you left your PostGIS version at the 
original and only upgraded the PostgreSQL part, but you aren't doing that, 
you're also upgrading your PostGIS version.

pg_dump the database
create a new database on the new server, install postgis in it
pg_restore the database
ignore the many errors

P

On 2011-02-07, at 9:49 AM, akp geek wrote:

> 
> Please pardon my ignorance. The reason I am worried about it is, when I tried 
> to use pg_upgrade to migrate from 8.3 to 9.0.2. I am getting an error
> 
> Failed to load library: $libdir/liblwgeom
> ERROR:  could not access file "$libdir/liblwgeom": No such file or directory
> 
> I am using pg_upgrade for upgrading
> 
> 
> Regards
> 
> On Mon, Feb 7, 2011 at 12:39 PM, Paul Ramsey  
> wrote:
> It's not a dynlib, it's statically linked at build time, so have no fear. 
> Stop thinking so much :)
> P
> 
> On 2011-02-07, at 9:38 AM, akp geek wrote:
> 
>> installation was successful. But it did not install the liblwgeom.so 
>> 
>> Regards
>> 
>> On Mon, Feb 7, 2011 at 12:29 PM, Paul Ramsey  
>> wrote:
>> It's just a warning, continue happily onwards. Just means a few unit tests 
>> won't be run.
>> 
>> P.
>> 
>> 
>> On 2011-02-07, at 9:27 AM, akp geek wrote:
>> 
>>> Hi All -
>>> 
>>> I am trying to install postgis 1.5.2 on solaris10. When I run 
>>> the configure I get the following. 
>>> 
>>> configure: WARNING: could not locate CUnit required for 
>>> liblwgeom unit tests
>>> 
>>> is there some setting I need to do to make it work?
>>> 
>>> 
>>> $./configure --prefix=/opt/postgres/gis 
>>> --with-geosconfig=/opt/postgres/gis/bin/geos-config  
>>> --with-projdir=/opt/postgres/gis
>>> checking build system type... sparc-sun-solaris2.10
>>> checking host system type... sparc-sun-solaris2.10
>>> checking for gcc... gcc
>>> checking whether the C compiler works... ^C10.112.161.124$
>>> $./configure --prefix=/opt/postgres/gis 
>>> --with-geosconfig=/opt/postgres/gis/bin/geos-config  
>>> --with-projdir=/opt/postgres/gis
>>> checking build system type... sparc-sun-solaris2.10
>>> checking host system type... sparc-sun-solaris2.10
>>> checking for gcc... gcc
>>> checking whether the C compiler works... yes
>>> checking for C compiler default output file name... a.out
>>> checking for suffix of executables...
>>> checking whether we are cross compiling... no
>>> checking for suffix of object files... o
>>> checking whether we are using the GNU C compiler... yes
>>> checking whether gcc accepts -g... yes
>>> checking for gcc option to accept ISO C89... none needed
>>> checking for a sed that does not truncate output... /usr/5bin/sed
>>> checking for grep that handles long lines and -e... /usr/sfw/bin/ggrep
>>> checking for egrep... /usr/sfw/bin/ggrep -E
>>> checking for fgrep... /usr/sfw/bin/ggrep -F
>>> checking for ld used by gcc... /usr/ccs/bin/ld
>>> checking if the linker (/usr/ccs/bin/ld) is GNU ld... no
>>> checking for BSD- or MS-compatible name lister (nm)... /usr/ccs/bin/nm -p
>>> checking the name lister (/usr/ccs/bin/nm -p) interface... BSD nm
>>> checking whether ln -s works... yes
>>> checking the maximum length of command line arguments... 786240
>>> checking whether the shell understands some XSI constructs... yes
>>> checking whether the shell understands "+="... no
>>> checking for /usr/ccs/bin/ld option to reload object files... -r
>>> checking for objdump... no
>>> checking how to recognize dependent libraries... pass_all
>>> checking for ar... ar
>>> checking for strip... strip
>>> checking for ranlib... ranlib
>>> checking command to parse /usr/ccs/bin/nm -p output from gcc object... ok
>>> checking how to run the C preprocessor... gcc -E
>>> checking for ANSI C header files... yes
>>> checking for sys/types.h... yes
>>> checking for sys/stat.h... yes
>>> checking for stdlib.h... yes
>>> checking for string.h... yes
>>> checking for memory.h... yes
>>> checking for strings.h... yes
>>> checking for inttypes.h... yes
>>> checking for stdint.h... yes
>>> checking for unistd.h... yes
>>> checking for dlfcn.h... yes
>>> checking for objdir... .libs
>>> checking if gcc supports -fno-rtti -fno-exceptions... no
>>> checking for 

Re: [GENERAL] postgis 1.5.2 installation configure: WARNING: could not locate CUnit required for liblwgeom unit tests

2011-02-07 Thread Paul Ramsey
It's just a warning, continue happily onwards. Just means a few unit tests 
won't be run.

P.

On 2011-02-07, at 9:27 AM, akp geek wrote:

> Hi All -
> 
> I am trying to install postgis 1.5.2 on solaris10. When I run the 
> configure I get the following. 
> 
> configure: WARNING: could not locate CUnit required for liblwgeom 
> unit tests
> 
> is there some setting I need to do to make it work?
> 
> 
> $./configure --prefix=/opt/postgres/gis 
> --with-geosconfig=/opt/postgres/gis/bin/geos-config  
> --with-projdir=/opt/postgres/gis
> checking build system type... sparc-sun-solaris2.10
> checking host system type... sparc-sun-solaris2.10
> checking for gcc... gcc
> checking whether the C compiler works... ^C10.112.161.124$
> $./configure --prefix=/opt/postgres/gis 
> --with-geosconfig=/opt/postgres/gis/bin/geos-config  
> --with-projdir=/opt/postgres/gis
> checking build system type... sparc-sun-solaris2.10
> checking host system type... sparc-sun-solaris2.10
> checking for gcc... gcc
> checking whether the C compiler works... yes
> checking for C compiler default output file name... a.out
> checking for suffix of executables...
> checking whether we are cross compiling... no
> checking for suffix of object files... o
> checking whether we are using the GNU C compiler... yes
> checking whether gcc accepts -g... yes
> checking for gcc option to accept ISO C89... none needed
> checking for a sed that does not truncate output... /usr/5bin/sed
> checking for grep that handles long lines and -e... /usr/sfw/bin/ggrep
> checking for egrep... /usr/sfw/bin/ggrep -E
> checking for fgrep... /usr/sfw/bin/ggrep -F
> checking for ld used by gcc... /usr/ccs/bin/ld
> checking if the linker (/usr/ccs/bin/ld) is GNU ld... no
> checking for BSD- or MS-compatible name lister (nm)... /usr/ccs/bin/nm -p
> checking the name lister (/usr/ccs/bin/nm -p) interface... BSD nm
> checking whether ln -s works... yes
> checking the maximum length of command line arguments... 786240
> checking whether the shell understands some XSI constructs... yes
> checking whether the shell understands "+="... no
> checking for /usr/ccs/bin/ld option to reload object files... -r
> checking for objdump... no
> checking how to recognize dependent libraries... pass_all
> checking for ar... ar
> checking for strip... strip
> checking for ranlib... ranlib
> checking command to parse /usr/ccs/bin/nm -p output from gcc object... ok
> checking how to run the C preprocessor... gcc -E
> checking for ANSI C header files... yes
> checking for sys/types.h... yes
> checking for sys/stat.h... yes
> checking for stdlib.h... yes
> checking for string.h... yes
> checking for memory.h... yes
> checking for strings.h... yes
> checking for inttypes.h... yes
> checking for stdint.h... yes
> checking for unistd.h... yes
> checking for dlfcn.h... yes
> checking for objdir... .libs
> checking if gcc supports -fno-rtti -fno-exceptions... no
> checking for gcc option to produce PIC... -fPIC -DPIC
> checking if gcc PIC flag -fPIC -DPIC works... yes
> checking if gcc static flag -static works... no
> checking if gcc supports -c -o file.o... yes
> checking if gcc supports -c -o file.o... (cached) yes
> checking whether the gcc linker (/usr/ccs/bin/ld) supports shared 
> libraries... yes
> checking whether -lc should be explicitly linked in... yes
> checking dynamic linker characteristics... solaris2.10 ld.so
> checking how to hardcode library paths into programs... immediate
> checking whether stripping libraries is possible... no
> checking if libtool supports shared libraries... yes
> checking whether to build shared libraries... yes
> checking whether to build static libraries... yes
> checking for gcc... (cached) gcc
> checking whether we are using the GNU C compiler... (cached) yes
> checking whether gcc accepts -g... (cached) yes
> checking for gcc option to accept ISO C89... (cached) none needed
> checking how to run the C preprocessor... gcc -E
> checking for g++... g++
> checking whether we are using the GNU C++ compiler... yes
> checking whether g++ accepts -g... yes
> checking whether we are using the GNU C++ compiler... (cached) yes
> checking whether g++ accepts -g... (cached) yes
> checking how to run the C++ preprocessor... g++ -E
> checking for ld used by g++... /usr/ccs/bin/ld
> checking if the linker (/usr/ccs/bin/ld) is GNU ld... no
> checking whether the g++ linker (/usr/ccs/bin/ld) supports shared 
> libraries... yes
> checking for g++ option to produce PIC... -fPIC -DPIC
> checking if g++ PIC flag -fPIC -DPIC works... yes
> checking if g++ static flag -static works... no
> checking if g++ supports -c -o file.o... yes
> checking if g++ supports -c -o file.o... (cached) yes
> checking whether the g++ linker (/usr/ccs/bin/ld) supports shared 
> libraries... yes
> checking dynamic linker characteristics... solaris2.10 ld.so
> checking how to hardcode library paths into programs... immediate
> checking if g++ supports -Wall... yes
> checking

Re: [GENERAL] Comparison of Oracle and PostgreSQL full text search

2010-08-11 Thread Paul Ramsey
Did the FAA ever publish slides of those talks? Sure wish I could see them... :)

P.

On 2010-08-11, at 6:58 PM, Bruce Momjian  wrote:

> Greg Smith wrote:
>> Greg Williamson wrote:
>>> Our tests -- very much oriented at postGIS -- found Oracle to be between 5
>>> and 15% _faster_ depending on the specifics of the task. We decided to go
>>> with postgres given the price difference (several hundred thousand dollars 
>>> for
>>> Oracle in the configuration we needed vs. zip for postgres -- we already had
>>> trained postgres DBAs).
>>> 
>> 
>> Can always throw the licensing savings toward larger hardware too; $100K 
>> buys a pretty big server nowadays.  At the FAA's talk about their 
>> internal deployment of PostgreSQL:  
>> https://www.postgresqlconference.org/2010/east/talks/faa_airports_gis_and_postgresql
>> 
>> They were reporting that some of their difficult queries were 
>> dramatically faster on PostgreSQL; I vaguely recall one of them was 100X 
>> the speed it ran under Oracle Spatial.  It was crazy.  As always this 
>> sort of thing is very workload dependent.  There are certainly queries 
>> (such as some of the ones from the TPC-H that big DB vendors optimize 
>> for) that can be 100X faster on Oracle too.
> 
> The FAA reported something like that at PG East about Oracle vs.
> Postgres performance with GIS data.
> 
> -- 
>  Bruce Momjian  http://momjian.us
>  EnterpriseDB http://enterprisedb.com
> 
>  + It's impossible for everything to be true. +
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] locating cities within a radius of another

2010-07-21 Thread Paul Ramsey
create table cities (
  geog geography,
  name varchar,
  id integer primary key
);

insert into cities
  select
    Geography(ST_SetSRID(ST_MakePoint(lon, lat),4326)) as geog,
    name, id
  from mytable;

create index cities_gix on cities using gist ( geog );

select st_distance(a.geog, b.geog), b.name
from cities a, cities b
where a.name = 'New York';
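
To restrict to an actual radius and let the gist index do the work, add an
ST_DWithin filter (geography distances are in meters; the 100 km here is an
arbitrary example):

select b.name, st_distance(a.geog, b.geog) as meters
from cities a, cities b
where a.name = 'New York'
  and st_dwithin(a.geog, b.geog, 100000);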

On Wed, Jul 21, 2010 at 8:10 AM, Andy Colson  wrote:
> On 7/21/2010 8:01 AM, Geoffrey wrote:
>>
>> We need to locate all cities within a certain distance of a single city.
>> We have longitude and latitude data for all cities. I was thinking
>> postGIS was a viable solution, but I don't see a way to use our existing
>> data via postGIS.
>>
>> Is postGIS a viable solution, or should I be looking at a different
>> approach? Thanks for any suggestions or RTFM pointers.
>>
>
> I'd say PostGIS is a great option.
>
> Did you try:
>
> http://postgis.refractions.net/documentation/manual-1.5/ST_Distance_Sphere.html
>
>
> or do you mean, how do I turn my lat, long columns into a geomentry column?
>
>
> -Andy
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Large index operation crashes postgres

2010-03-26 Thread Paul Ramsey
Occam's razor says it's PostGIS. However, I'm concerned about how old
the code being run is. In particular, the library underneath PostGIS,
GEOS, had a *lot* of memory work done on it over the last year. I'd
like to see if things improve if you upgrade to GEOS 3.2.

On Fri, Mar 26, 2010 at 9:04 AM, Tom Lane  wrote:
> Frans Hals  writes:
>> Operation is now running for around 13 hrs.
>> Two postmaster processes above 1% memory usage are running.
>
>> One of them uses constantly 26.5% of memory.
>> The other one is growing:
>> After 1 hr        25%
>> After 9 hrs      59%
>> After 13 hrs    64%
>
> Well, it's pretty clear that you've found a memory leak, but that was
> what we thought before; this data doesn't move us any closer to a fix.
> In particular it's not possible to guess whether the leak should be
> blamed on Postgres or Postgis code.  Even if we knew that, I'm not
> sure we could fix the leak without tracing through actual execution.
>
> Can you generate a self-contained test case that exhibits similar bloat?
> I would think it's probably not very dependent on the specific data in
> the column, so a simple script that constructs a lot of random data
> similar to yours might be enough, if you would rather not show us your
> real data.
>
>                        regards, tom lane
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Large index operation crashes postgres

2010-03-24 Thread Paul Ramsey
Can you do?

alter table placex add column geometry_sector integer;
update placex set geometry_sector = geometry_sector(geometry);
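
Presumably followed by rebuilding the index on the precomputed column, so the
expensive function runs once in the UPDATE rather than inside the index build:

CREATE INDEX idx_placex_sector ON placex USING btree
  (geometry_sector, rank_address, osm_type, osm_id);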

P.

On Wed, Mar 24, 2010 at 1:15 PM, Frans Hals  wrote:
> Hi,
>
> I'm running a geo-database restored from a dump, where one of the most
> important indexes is still missing and so search is slow.
> Whenever I try to add the follwing index to the table "placex", one of
> the postmaster processes dies and the server restarts.
>
> I try:
> CREATE INDEX idx_placex_sector ON placex USING btree
> (geometry_sector(geometry), rank_address, osm_type, osm_id);
>
> The table counts around 50.000.000 rows.
> The first 20.000.000 are indexed in 20-30 minutes. Nice!
> Then indexing becomes slower and slower, soon taking ten minutes per
> 100.000 rows and steadily losing speed after that.
> When the job reaches something around row 25.000.000 postgres goes down:
>
> WARNING:  terminating connection because of crash of another server process
> DETAIL:  The postmaster has commanded this server process to roll back
> the current transaction and exit, because another server process
> exited abnormally and possibly corrupted shared memory.
>
> I have checked RAM and changed the HD with no success.
> Experimenting with a lot of different memory settings in the conf-file
> didn't help either.
> Is there anybody else who experienced this and found a way to create this 
> index?
> Server is postgres 8.3.9 with 4 GB dedicated RAM.
>
> geometry_sector function looks like this (postgis):
>
> DECLARE
>  NEWgeometry geometry;
> BEGIN
> -- RAISE WARNING '%',place;
>  NEWgeometry := place;
>  IF ST_IsEmpty(NEWgeometry) OR NOT ST_IsValid(NEWgeometry) OR
> ST_X(ST_Centroid(NEWgeometry))::text in ('NaN','Infinity','-Infinity')
> OR ST_Y(ST_Centroid(NEWgeometry))::text in
> ('NaN','Infinity','-Infinity') THEN
>    NEWgeometry := ST_buffer(NEWgeometry,0);
>    IF ST_IsEmpty(NEWgeometry) OR NOT ST_IsValid(NEWgeometry) OR
> ST_X(ST_Centroid(NEWgeometry))::text in ('NaN','Infinity','-Infinity')
> OR ST_Y(ST_Centroid(NEWgeometry))::text in
> ('NaN','Infinity','-Infinity') THEN
>      RETURN NULL;
>    END IF;
>  END IF;
>  RETURN (500-ST_X(ST_Centroid(NEWgeometry))::integer)*1000 +
> (500-ST_Y(ST_Centroid(NEWgeometry))::integer);
> END;
>
> The ST_Centroid it calls is a PostGIS C function located in
> /usr/lib/postgresql/8.3/lib/liblwgeom.
>
>
> Anybody out there has an idea what happens or better how to reach the
> 50.000.000?
>
> Thanks
> Frans
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Installation of Postgis/postgresql

2009-12-28 Thread Paul Ramsey
Nick,
You'll find GEOS questions get answered better on the GEOS mailing
list (http://lists.osgeo.org/mailman/listinfo/geos-devel), BUT:

If ./configure is not finding g++ then odds are, you don't have it
installed (type g++ on the commandline, see what happens). Use your
package manager to do a search for "g++" or "c++" and find out which
package hold it, then install it. Keep working on getting GEOS built
and installed, you can't proceed with PostGIS until you get past that
step.

Also build and install Proj4 (http://trac.osgeo.org/proj) before
proceeding with PostGIS.

Best,

Paul

Also, your PostGIS questions will be better answered on
http://postgis.refractions.net/mailman/listinfo/postgis-users

On Mon, Dec 28, 2009 at 6:24 PM, Nick  wrote:
> Trying to install postgis on a LINUX machine; I need to find the geos-
> config file, which seems to be nowhere to be found after downloading
> postgis, in the ./configure step. Is there anyone that can give me a
> clue on how to find this file? Ultimate goal is to get postgis working
> so we can use geom data type in the tables on postgresql, and to get
> php to be able to talk to postgresql when scripts are run using
> pg_connect function.
>
> during ./configure of postGIS errors out looking for geos-config
>
> during ./configure of geos errors out looking for g++
>
> to the best of my knowledge g++ is part of gcc and we have a current
> version of the gcc compiler on the computer
>
> Thanks,
> Nick
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] incorrect restore from pg_dumpall

2009-11-27 Thread Paul Ramsey
In order to restore a backup taken with pg_dumpall you'll want to
ensure that the postgis installed in your new system is identical to
the postgis in your old one. This is because the postgis function
definitions will be looking for a particular postgis library name...
the name of the library from your old database.

You can hack around this, and have your cake and eat it too, to an
extent, by symlinking the name of your old postgis to your new postgis
library.

P

On Fri, Nov 27, 2009 at 4:11 PM, Tomas Lanczos  wrote:
> Hello,
>
> I am trying to restore my databases stored by a pg_dumpall command in
> the Karmic Koala box. The restore command is the following:
>
> psql -f /media/disk/.../backup -U postgres
>
> I have a PostsgreSQL 8.4 installed from repositories with postgis1.4.1.
> I recognized that the tables with spatial geometries were not restored,
> but at the moment I am almost sure that is caused by the older
> version of postgis in the dumped database. What is a kind of mystery to
> me is that data in several tables were not restored although the table
> definitions were (meaning I got several tables without data), even
> though the data are there in the backup file (I checked physically). I
> really don't understand what's going on there; I have done the same many
> times before without any problem.
>
> Tomas
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] R-Trees in PostgreSQL

2009-11-02 Thread Paul Ramsey
Also for one-dimensional ranges, consider contrib/seg
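
For example, a sketch with the seg contrib module installed ('..' is seg's
range-literal syntax; the table is made up):

CREATE TABLE readings (r seg);
CREATE INDEX readings_r_idx ON readings USING gist (r);
INSERT INTO readings VALUES ('1.0 .. 2.0'), ('3.5 .. 4.0');
SELECT * FROM readings WHERE r && '1.5 .. 2.5'::seg;  -- overlaps, index-assisted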

P.

On Mon, Nov 2, 2009 at 4:41 PM, Jeff Davis  wrote:
> On Tue, 2009-11-03 at 00:25 +0100, Viktor Rosenfeld wrote:
>> I'd like to know what kind of functions I have to implement for a R-Tree
>> index on numeric columns,
>
> NUMERIC is scalar, so an R-Tree doesn't make much sense. You can install
> btree_gist (a contrib module) to be able to use numeric columns as part
> of a GiST index.
>
> If you have more complex spatial data, you should look into PostGIS.
>
> Regards,
>        Jeff Davis
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of memory error

2009-07-06 Thread Paul Ramsey
If you are on PostGIS < 1.3.4 there are substantial memory leaks in
intersects() for point/polygon cases. Upgrading to 1.3.6 is
recommended.

P

On Mon, Jul 6, 2009 at 1:39 PM, Paul Smith wrote:
> On Mon, Jul 6, 2009 at 3:34 PM, Tom Lane wrote:
>> Clearly a memory leak, but it's not so clear exactly what's causing it.
>> What's that intersects() function?  Can you put together a
>> self-contained test case?
>
> It's actually ST_Intersects from PostGIS (some of the PostGIS function
> names are still recognize without the leading "ST_").
>
> http://postgis.refractions.net/documentation/manual-1.3/ch06.html#id2574404
>
> # select postgis_version();
>            postgis_version
> ---
>  1.3 USE_GEOS=1 USE_PROJ=1 USE_STATS=1
>
> -Paul
>
> --
> Paul Smith
> http://www.pauladamsmith.com/
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is there a meaningful benchmark?

2009-03-20 Thread Paul Ramsey
http://blog.redfin.com/devblog/2007/11/elephant_versus_dolphin_which_is_faster_which_is_smarter.html

Here's another PgSQL/MySQL testimonial, with a spatial twist :)

P

On Fri, Mar 20, 2009 at 10:15 AM, Greg Smith  wrote:
> On Fri, 20 Mar 2009, Will Rutherdale (rutherw) wrote:
>
>> However, keeping the KISS principle in mind, you can create a benchmark
>> that simply sets up a sample database and forks off a bunch of processes
>> to do random updates for an hour, say.  Dead simple.
>
> There's a benchmark tool that does something like this that comes with
> PostgreSQL named pgbench.  A MySQL-oriented tool named sysbench also can do
> that, and it supports running against PostgreSQL as well--badly though, so
> it's hard to use that to do a fair comparison.
>
> Simple benchmarks tend to measure only one thing though, and it's often not
> what you think you're measuring.  For example, pgbench produces a
> transactions/per second number.  It's useful for comparing the relative
> performance between two PostgreSQL instances, and people think it gives you
> an idea of transactional performance.  What the actual magnitude of the
> result measures in many cases is instead how well the generated data set
> fits in cache.
>
> If you're doing something update heavy, a lot of the time what you actually
> will measure is how fast your disk can seek, process a disk commit done
> using fsync, or some combination of the two.  If you're not careful to make
> sure you're using the same level of disk commit guarantee on both
> installations, it's real easy to get bad benchmark results here. The intro
> to that subject from the PostgreSQL perspective is at
> http://www.postgresql.org/docs/8.3/static/wal-reliability.html
>
> On MySQL, the parameters that controls this behavior are described starting
> at
> http://dev.mysql.com/doc/refman/5.1/en/innodb-parameters.html#sysvar_innodb_flush_log_at_trx_commit
>
> For something with lots of disk commits, it's critical that you have both
> systems configured identically here.
>
> --
> * Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Custom Type Alignment

2008-12-20 Thread Paul Ramsey
Simple question on CREATE TYPE (I hope):

When using the "alignment" option, and setting it to "double", what
ends up double aligned? VARDATA()? The whole thing datum (so that the
alignment of VARDATA() is actually conditioned on the size of
VARHDRSZ) ?

Thanks,

Paul

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 8.0, UTF8, and CLIENT_ENCODING

2007-05-17 Thread Paul Ramsey

Thanks all for the information. Summary is:

- 8.0 wasn't very strict, and allowed the illegal values in, instead  
of mapping them over into UTF-8 space

- the values can be stripped with iconv -c
- 8.2 should be more strict

I'm in the midst of my upgrade to 8.2 now, hopefully the LATIN1->UTF8  
conversion will now map the odd characters cleanly into UTF space.


On 17-May-07, at 3:25 PM, Michael Glaesemann wrote:



On May 17, 2007, at 16:47 , PFC wrote:

and put that in the form. Instead of being mapped to 2-byte UTF8  
high-bit equivalents, they are going into the database directly  
as one-byte values > 127. That is, as illegal UTF8 values.


Sometimes you also get HTML entities in the mix. Who knows.
	All my web forms are UTF-8 back to back, it just works. Was I  
lucky ?
	Normally postgres rejects illegal UTF8 values, you wouldn't be  
able to insert them...


8.0 and earlier weren't quite as strict as they should have been. See  
the note at the end of the migration instructions in the release  
notes for 8.1.[1] That may have been part of the issue here.


Michael Glaesemann
grzm seespotcode net

[1](http://www.postgresql.org/docs/8.2/interactive/ 
release-8-1.html#AEN80196)



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] 8.0, UTF8, and CLIENT_ENCODING

2007-05-17 Thread Paul Ramsey


I have a small database (PgSQL 8.0, database encoding UTF8) that folks 
are inserting into via a web form. The form itself is declared 
ISO-8859-1 and, prior to inserting any data, pg_client_encoding is 
set to LATIN1.


Most of the high-bit characters are correctly translated from LATIN1 to 
UTF8. So for e-accent-egu I see the two-byte UTF8 value in the database.


Sometimes, in their wisdom, people cut'n'paste information out of MSWord 
and put that in the form. Instead of being mapped to 2-byte UTF8 
high-bit equivalents, they are going into the database directly as 
one-byte values > 127. That is, as illegal UTF8 values.


When I try to dump'n'restore this database into PgSQL 8.2, my data can't 
made the transit.


Firstly, is this "kinda sorta" encoding handling expected in 8.0, or did 
I do something wrong?


Secondly, anyone know any useful tools to pipe a stream through to strip 
out illegal UTF8 bytes, so I can pipe my dump through that rather than 
hand editing it?


Thanks,

Paul

--

  Paul Ramsey
  Refractions Research
  http://www.refractions.net
  [EMAIL PROTECTED]
  Phone: 250-383-3022
  Cell: 250-885-0632

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PostGIS Binary RPM for Red Hat Linux

2006-11-06 Thread Paul Ramsey
Any one you want. Using the latest won't hurt.

On 6-Nov-06, at 11:36 AM, Sandeep Kumar Jakkaraju wrote:

> What version of Postgis should i install for postgres 8.1.5 ??
>
> On 11/7/06, Paul Ramsey <[EMAIL PROTECTED]> wrote:
>> PgSQL 7.4 is still supported by the latest PostGIS versions.
>> As is PgSQL 8.1 and (when it comes) 8.2.
>> P
>
> [...]
>
> --
> Sandeep Kumar Jakkaraju
> WeBlog: http://jakkarajus.blogspot.com

Re: [GENERAL] PostGIS Binary RPM for Red Hat Linux

2006-11-06 Thread Paul Ramsey

PgSQL 7.4 is still supported by the latest PostGIS versions.

As is PgSQL 8.1 and (when it comes) 8.2.

P

On 6-Nov-06, at 8:07 AM, Joshua D. Drake wrote:


Sandeep Kumar Jakkaraju wrote:

Ya...

That was my next question ...
if i build the source (the tar.gz) on the postgis.org site ..
will it be compatible with... postgres-8.1.5


Probably not. You will need a postgis version for 7.4.

Joshua D. Drake




Thanks
Regards
Sandeep


On 11/6/06, Devrim GUNDUZ <[EMAIL PROTECTED]> wrote:


Hi,

On Mon, 2006-11-06 at 21:04 +0530, Sandeep Kumar Jakkaraju wrote:

The site u gave me has ..
has postgis rpm ..but it needs postgres-7.4 !! i need
postgis rp m for postgres-8.1.5


So use the SRPMs to build RPMs for you. I don't have 7.4+RHEL4  
around to

build you an  RPM for that. :(

Regards,
--
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/

--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/ 
donate



---(end of  
broadcast)---

TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Simulating sequences

2003-08-18 Thread Paul Ramsey
Vilson farias wrote:

Does PostgreSQL sequences deal with these kinds performance questions? Is it
clean, fast and secury to use sequences? Maybe I still can make up my boss
mind about this subject if I get good arguments here to justify the use of
sequences.
Sorry, but you just outlined a sequence replacement which is a big hunk 
of PL/PgSQL! How is that not completely specific to PostgreSQL? The 
PgSQL 'serial' type is close enough to other RDBMS autoincrement types 
that porting to a different DB should be trivial. Porting your PL/PgSQL, 
that will be hard (particularly if you decide to go to something like 
MySQL, which doesn't even support procedural languages).
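
The portable shape being argued for, sketched with a hypothetical table:

CREATE TABLE customer (
  id serial PRIMARY KEY,  -- integer column defaulting to nextval('customer_id_seq')
  name text
);
INSERT INTO customer (name) VALUES ('smith');
SELECT currval('customer_id_seq');  -- the id just assigned, safe within this session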

--
  __
     /
     | Paul Ramsey
 | Refractions Research
 | Email: [EMAIL PROTECTED]
 | Phone: (250) 885-0632
 \_
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] Upgrading to 7.3.4?

2003-07-31 Thread Paul Ramsey
Voot! Not necessary! :)
This is a minor version upgrade, so data can remain in place. For a 7.2 
-> 7.3 upgrade you would have to dump-and-restore.
My upgrade procedure is:
tar xvfz postgresql-7.3.4.tar.gz ; cd postgresql-7.3.4
./configure ; make ; make install
pg_ctl stop ; pg_ctl start

Freddy Menjívar M. wrote:
- Original Message -
From: "Bjørn T Johansen" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, July 30, 2003 3:53 PM
Subject: [GENERAL] Upgrading to 7.3.4?
 > If I upgrade from 7.3.3 to 7.3.4, is it enough to just copy the data
 > directory from 7.3.3 to 7.3.4 to "restore" my databases?
 > Regards
 
Try something like this
==
 
pg_dumpall > dbbackup             -- backup and stop
pg_ctl stop
mv %PGHOME% /usr/local/pgsql.old  -- move old pgsql program

cd /usr/local/src/postgresql-7.3.2  -- install new pgsql version
make install
initdb -D %PGHOME%/data           -- start and restore db
pg_ctl start
psql < dbbackup
 


--
  __
 /
 | Paul Ramsey
 | Refractions Research
 | Email: [EMAIL PROTECTED]
 | Phone: (250) 885-0632
 \_
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] pg_dump "all tables" in 7.3.X

2003-06-26 Thread Paul Ramsey
We are trying to do an "all tables" dump using the 7.3.3 pg_dump, but 
are getting no love.  The pg_dump command which worked before, in 7.2, 
no longer returns any tables:

  pg_dump -t "*" dbname

Is this by design, or by accident?

Paul

--
      __
 /
 | Paul Ramsey
 | Refractions Research
 | Email: [EMAIL PROTECTED]
 | Phone: (250) 885-0632
 \_
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Failure to install 7.3.3

2003-06-24 Thread Paul Ramsey
Red Hat (and most other RPM based distros) split libraries into 
"runtime" and "development" halves. In order to actually compile 
software against the libraries, you must install the development half. 
So, to compile against readline, you must also install the 
readline-devel RPM.

Daniel E. Fisher wrote:
I get an error during config I have redhat 7.2.  readline library and Zlib
library not found...they are installed with the latest version how do I fix
this?
-Dan
--
      __
 /
 | Paul Ramsey
 | Refractions Research
 | Email: [EMAIL PROTECTED]
 | Phone: (250) 885-0632
 \_
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] PostgreSQL On the Mac? OS9 or OSX?

2001-08-31 Thread Paul Ramsey

Indeed, on OSX.

http://www.pgmac.com

Israel Evans wrote:
> 
> In looking over the documentation, I haven't seen any evidence of PostgreSQL
> working on the Mac.  Is this the case, or am I missing something.  Has
> anyone heard of it ever being used on a mac?
> 
> Thanks,
> 
> ~Israel~
> 
> ---(end of broadcast)---
> TIP 3: if posting/reading through Usenet, please send an appropriate
> subscribe-nomail command to [EMAIL PROTECTED] so that your
> message can get through to the mailing list cleanly

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[GENERAL] Support for Geometric Types

1999-05-27 Thread Paul Ramsey


I have a couple questions regarding the maturity of the support for
geometric types:

- is there support for indexing geometric types so that things like '='
operators and some of the other geometric operators are a bit faster?
- are there any plans on extending some of the geometric operators to a
wider range of geometric types? ie: intersects with, contains, etc, to
work on paths and polygons.
- is the developer who originated the types and operators still attached
to the project? would there be any appetite for doing some extensions or
indexing improvements if funding could be found? (I may be able to bring
some funding to bear to support this kind of development)

Thanks,
Paul

-- 
  __
 /
     | Paul Ramsey
 | Refractions Research
 | Email: [EMAIL PROTECTED]
 | Phone: (250) 885-0632
 \_



[GENERAL] Booleans and Casting

1999-02-12 Thread Paul Ramsey


I have the following SQL statement, which doesn't work (though I wish it
did):

  select sum(value * ( plulabel = 'FO' )) from btmm group by gis_tag;

PG quite correctly complains that:

  ERROR:  There is more than one possible operator '*' for types
'float4' and 'bool' You will have to retype this query using an explicit
cast  

However, when I do this:

  select sum(value * int( plulabel = 'FO' )) from btmm group by gis_tag;

I still get an error:

  ERROR:  No such function 'int' with the specified attributes 

I can find no function at all to use to cast booleans to what I want,
which is 1 = true, 0 = false (i.e., the standard cast for boolean ->
number).

Any advice?
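
For the record, two common spellings (the CASE form works everywhere; the
explicit boolean-to-int cast exists on modern PostgreSQL):

select sum(case when plulabel = 'FO' then value else 0 end)
  from btmm group by gis_tag;

select sum(value * (plulabel = 'FO')::int)
  from btmm group by gis_tag;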


-- 
  __
 /
 | Paul Ramsey
 | Refractions Research
 | Email: [EMAIL PROTECTED]
 | Phone: (250) 885-0632
 \_