[GENERAL] Can't take base back up with Postgres 9.0 on Solaris 10

2010-10-28 Thread dan.m.harris

Hi everyone,

I'm trying to take a base backup of my primary DB ready for copying to my
standby. I've followed the instructions to configure my postgresql.conf at
http://www.postgresql.org/docs/9.0/static/continuous-archiving.html, but I
get one of two errors depending on how I try to run the backup:

bash-3.00$ bin/psql -c "SELECT pg_start_backup('basebackup', true)"
ERROR: function pg_start_backup(unknown, boolean) does not exist
HINT: No function matches the given name and argument types. You may need to
add explicit type casts.
bash-3.00$ bin/psql -c "SELECT pg_start_backup('basebackup')"
ERROR: WAL archiving is not active
HINT: archive_command must be defined before online backups can be made
safely.

Here are the lines I've changed in postgresql.conf:
listen_addresses = '*' # what IP address(es) to listen on;
port =  # (change requires restart)
wal_level = archive # minimal, archive, or hot_standby
archive_mode = on # allows archiving to be done
archive_command = 'scp %p
postg...@172.18.131.211:/postgres/postgres/9.0-pgdg/primary_archive' #
command to use to archive a logfile segment
max_wal_senders = 1 # max number of walsender processes
wal_keep_segments = 32 # in logfile segments, 16MB each; 0 disables


Could anyone tell me what the problem might be?

Thanks,
Dan 
-- 
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Can-t-take-base-back-up-with-Postgres-9-0-on-Solaris-10-tp3240364p3240364.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Can't take base back up with Postgres 9.0 on Solaris 10

2010-10-28 Thread Thom Brown
On 28 October 2010 12:42, dan.m.harris daniel.har...@metaswitch.com wrote:

 Hi everyone,

 I'm trying to take a base backup of my primary DB ready for copying to my
 standby. I've followed the instructions to configure my postgresql.conf at
 http://www.postgresql.org/docs/9.0/static/continuous-archiving.html, but I
 get one of two errors depending on how I try to run the backup:

 bash-3.00$ bin/psql -c "SELECT pg_start_backup('basebackup', true)"
 ERROR: function pg_start_backup(unknown, boolean) does not exist
 HINT: No function matches the given name and argument types. You may need to
 add explicit type casts.
 bash-3.00$ bin/psql -c "SELECT pg_start_backup('basebackup')"
 ERROR: WAL archiving is not active
 HINT: archive_command must be defined before online backups can be made
 safely.

 Here are the lines I've changed in postgresql.conf:
 listen_addresses = '*' # what IP address(es) to listen on;
 port =  # (change requires restart)
 wal_level = archive # minimal, archive, or hot_standby
 archive_mode = on # allows archiving to be done
 archive_command = 'scp %p
 postg...@172.18.131.211:/postgres/postgres/9.0-pgdg/primary_archive' #
 command to use to archive a logfile segment
 max_wal_senders = 1 # max number of walsender processes
 wal_keep_segments = 32 # in logfile segments, 16MB each; 0 disables

Go into psql and type SHOW wal_level;  What does it say?  If it
doesn't say 'archive', then you've either not restarted PostgreSQL
since you changed it, or you're looking at the wrong config file.

As for the first error, have you tried just typing that command whilst
in psql rather than passing it as a -c parameter?

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935



Re: [GENERAL] Can't take base back up with Postgres 9.0 on Solaris 10

2010-10-28 Thread dan.m.harris

Hi Thom, thanks for your reply. Here's my psql output:

bash-3.00$ bin/psql
psql (9.0.1, server 8.1.4)
WARNING: psql version 9.0, server version 8.1.
 Some psql features might not work.
Type "help" for help.

postgres=# SHOW wal_level;
ERROR:  unrecognized configuration parameter wal_level
postgres=# SELECT pg_start_backup('basebackup', true);
ERROR:  function pg_start_backup(unknown, boolean) does not exist
HINT:  No function matches the given name and argument types. You may need
to add explicit type casts.
postgres=# SELECT pg_start_backup('basebackup');
ERROR:  WAL archiving is not active
HINT:  archive_command must be defined before online backups can be made
safely.


The mismatched server version number doesn't look right! Note that this is a
Postgres 9.0 tarball I've extracted onto a server which already has Postgres
8.1.4 installed.

Any thoughts from anybody? Thanks.



Re: [GENERAL] Can't take base back up with Postgres 9.0 on Solaris 10

2010-10-28 Thread Thom Brown
On 28 October 2010 13:07, dan.m.harris daniel.har...@metaswitch.com wrote:

 Hi Thom, thanks for your reply. Here's my psql output:

 bash-3.00$ bin/psql
 psql (9.0.1, server 8.1.4)
 WARNING: psql version 9.0, server version 8.1.
         Some psql features might not work.
 Type "help" for help.

 postgres=# SHOW wal_level;
 ERROR:  unrecognized configuration parameter wal_level

It's not recognised because your 9.0.1 psql client is connecting to a
8.1.4 server which doesn't use such a parameter, hence the warning
you're receiving when you run psql.  This also explains why you can't
use the 2-parameter form of pg_start_backup as that wasn't introduced
until 8.4.

You need to connect to a 9.0 instance.
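
A quick sanity check after connecting is to ask the server itself what you
reached (a sketch; any of these can be run from the psql prompt):

```sql
-- confirm which server this psql session actually connected to
SELECT version();      -- full server version string
SHOW server_version;   -- e.g. 9.0.1 vs 8.1.4
SHOW port;             -- the port this server is listening on
```

If `server_version` reports 8.1.4, every 9.0-only parameter and function
will be missing, exactly as in the errors above.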

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935



Re: [GENERAL] exceptionally large UPDATE

2010-10-28 Thread Vick Khera
On Wed, Oct 27, 2010 at 10:26 PM, Ivan Sergio Borgonovo
m...@webthatworks.it wrote:
 I'm increasing maintenance_work_mem to 180MB just before recreating
 the gin index. Should it be more?


You can do this on a per-connection basis; no need to alter the config
file.  At the psql prompt (or via your script) just execute the query

SET maintenance_work_mem = '180MB';

If you've got the RAM, just use more of it.  I'd suspect your server
has plenty of it, so use it!  When I reindex, I often give it 1 or 2
GB.  If you can fit the whole table into that much space, you're going
to go really really fast.
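
As a sketch (the table and index names below are hypothetical), the
per-session bump and rebuild might look like:

```sql
-- raise the limit for this session only; the config file is untouched
SET maintenance_work_mem = '1GB';

-- rebuild the GIN index with the larger memory allowance
REINDEX INDEX my_table_gin_idx;

-- back to the server default for the rest of the session
RESET maintenance_work_mem;
```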

Also, if you are going to update that many rows you may want to
increase your checkpoint_segments.  Increasing that helps a *lot* when
you're loading big data, so I would expect updating big data may also
be helped.  I suppose it depends on how wide your rows are.  1.5
Million rows is really not all that big unless you have lots and lots
of text columns.



[GENERAL] PostGIS return multiple points

2010-10-28 Thread trevor1940

Hi

I have a PostGIS table and I wish to get the location/name of multiple
points at once the command for selecting one point is

select PolyName from MyPolygones where st_Contains(the_geom,
GeomFromText('POINT($LAT $LONG)', 4326));

where $LAT and $LONG are Perl variables.
So how can I do this if I have 100 points without hitting the database 100
times?



Re: [GENERAL] DB become enormous with continuos insert and update

2010-10-28 Thread Hfe80

Thanks to all.
I couldn't respond yesterday, but we had tried all your suggestions over the
past weeks.

Nothing really changed, but now we have finally resolved the problem!

It was a bug in version 8.1.
We solved all our problems simply by updating to 8.4!

It's incredible: the DB doesn't grow in size even after 1 million inserts and
updates.
The CPUs' average load has fallen drastically!

Before, the 16-core server had an incredible average load of 2 to 16.
Now the 16-core server has nothing to do! :)  Average load is 0.1. :)

NB: Thanks to our friends at http://www.nikos.it (nikos), who gave us the
right suggestion.


Re: [GENERAL] DB become enormous with continuos insert and update

2010-10-28 Thread Gabriele Bartolini
Hello,

 It was a bug in version 8.1.
 We solved all our problems simply by updating to 8.4!

Well... it was not a bug; it was actually a feature that had not yet been
implemented.

My first question about which version of PostgreSQL you were using in this
thread was exactly for that purpose, but ... you never replied to that.

Next time, please post the version you are using and possibly the
operating system.

 It's incredible: the DB doesn't grow in size even after 1 million inserts
 and updates.
 The CPUs' average load has fallen drastically!

Glad you fixed it.

Ciao,
Gabriele

-- 
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it - www.2ndQuadrant.it
 http://www.2ndquadrant.com/books/



Re: [GENERAL] Can't take base back up with Postgres 9.0 on Solaris 10

2010-10-28 Thread Adrian Klaver
On Thursday 28 October 2010 5:07:06 am dan.m.harris wrote:
 Hi Thom, thanks for your reply. Here's my psql output:

 bash-3.00$ bin/psql
 psql (9.0.1, server 8.1.4)
 WARNING: psql version 9.0, server version 8.1.
  Some psql features might not work.
 Type "help" for help.

 postgres=# SHOW wal_level;
 ERROR:  unrecognized configuration parameter wal_level
 postgres=# SELECT pg_start_backup('basebackup', true);
 ERROR:  function pg_start_backup(unknown, boolean) does not exist
 HINT:  No function matches the given name and argument types. You may need
 to add explicit type casts.
 postgres=# SELECT pg_start_backup('basebackup');
 ERROR:  WAL archiving is not active
 HINT:  archive_command must be defined before online backups can be made
 safely.


 The mismatched server version number doesn't look right! Note that this is
 a Postgres 9.0 tarball I've extracted onto a server which already has
 Postgres 8.1.4 installed.


Did you use the source version of 9.0 and do configure/make to install it?  If 
so, you might want to take a look at:

http://www.postgresql.org/docs/9.0/interactive/install-upgrading.html

Going between major versions (8.1 to 9.0 in this case) requires some additional 
steps. Besides the problem above that you ran into, there will probably be 
others even if you follow the upgrade procedure. There have been a lot of 
changes from 8.1 to 9.0, not the least of which is the dropping of implicit 
type casts in 8.3. You probably want to spend some time with the Release Notes: 
http://www.postgresql.org/docs/9.0/interactive/release.html

 Any thoughts from anybody? Thanks.




-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] PostGIS return multiple points

2010-10-28 Thread maarten
hey,

I haven't used postgis yet, however,

assuming the normal rules still apply and st_Contains returns
true/false:

SELECT ... WHERE st_Contains(point1) OR st_Contains(point2) OR ...

or using the IN statement:
SELECT ... WHERE true IN (st_Contains(point1),st_Contains(point2),...)

That should give you a list of all polynames.
The trick is figuring out what polyname goes with which point.  But I
leave that as an exercise to the reader, as it's tea time.
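
One way to keep the point-to-polygon pairing is to give each point a label
and join. A sketch (the VALUES alias, point ids, and coordinates are all made
up for illustration):

```sql
-- label each point, then join so every matching polygon row
-- comes back together with the id of the point it contains
SELECT pts.id, poly.PolyName
FROM (VALUES
        (1, GeomFromText('POINT(1.0 51.0)', 4326)),
        (2, GeomFromText('POINT(1.2 51.3)', 4326))
     ) AS pts(id, geom)
JOIN MyPolygones AS poly
  ON st_Contains(poly.the_geom, pts.geom);
```

The VALUES list can be generated from Perl in one statement, so the database
is hit only once for all 100 points.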

regards,
Maarten

On Thu, 2010-10-28 at 01:00 -0700, trevor1940 wrote:
 Hi
 
 I have a PostGIS table and I wish to get the location/name of multiple
 points at once the command for selecting one point is
 
 select PolyName from MyPolygones where st_Contains(the_geom,
 GeomFromText('point($LAT $LONG)4326');
 
 where $LAT $LONG are perl varables
 So how can i do this if iI have 100 points without hitting the database 100
 times?
 




[GENERAL] Printing command string passed to EXECUTE command in plpgsql (after argument resolution)

2010-10-28 Thread Allan Kamau
I am debugging a plpgsql function that contains a long SQL query with
several parameters, executed via the EXECUTE command. I would like to
output this command string with the actual parameter values substituted
into it, so I can obtain the actual query and run it directly (manually)
in psql (or another client such as pgAdmin3). Alternatively, is it
possible to output the command string as is, followed by the parameter
resolution details, such as $1='2', $2='abc', and so on?

Allan.



Re: [GENERAL] Can't take base back up with Postgres 9.0 on Solaris 10

2010-10-28 Thread dan.m.harris

I was just being a bit dull, I'm afraid - I've had a read of the psql help,
and by default it connects to port 5432, which is the port the PG 8.1.4
install on my server is listening on. I ran 'bin/psql -p ' instead, to
point at my PG 9.0 instance, and pg_start_backup worked fine.

Thanks to all!
Dan


Re: [GENERAL] Can't take base back up with Postgres 9.0 on Solaris 10

2010-10-28 Thread Adrian Klaver
On Thursday 28 October 2010 5:07:06 am dan.m.harris wrote:
 Hi Thom, thanks for your reply. Here's my psql output:

 bash-3.00$ bin/psql
 psql (9.0.1, server 8.1.4)
 WARNING: psql version 9.0, server version 8.1.
  Some psql features might not work.
 Type "help" for help.

 postgres=# SHOW wal_level;
 ERROR:  unrecognized configuration parameter wal_level
 postgres=# SELECT pg_start_backup('basebackup', true);
 ERROR:  function pg_start_backup(unknown, boolean) does not exist
 HINT:  No function matches the given name and argument types. You may need
 to add explicit type casts.
 postgres=# SELECT pg_start_backup('basebackup');
 ERROR:  WAL archiving is not active
 HINT:  archive_command must be defined before online backups can be made
 safely.


 The mismatched server version number doesn't look right! Note that this is
 a Postgres 9.0 tarball I've extracted onto a server which already has
 Postgres 8.1.4 installed.

 Any thoughts from anybody? Thanks.



In my previous response I was assuming you were replacing one version with the 
other. More brain cells have woken up and I realized you might also be talking 
about running two instances on the same machine at the same time. That is 
possible, but you will need to have each instance run on a different port, set 
in postgresql.conf (port = ?, where 5432 is the default). You will also need to 
make sure you are using the right program with the right cluster. Currently, 
when you run psql you are calling the one installed with 9.0.1, which is 
connecting to the 8.1.4 cluster, which at a guess is probably running on port 
5432. You have the choice of either specifying full paths to the appropriate 
binaries or creating symlinks.
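
For example (the 9.0 port number below is purely an illustrative choice),
the two postgresql.conf files might differ only in:

```
# postgresql.conf of the existing 8.1.4 cluster
port = 5432          # the default

# postgresql.conf of the new 9.0.1 cluster (hypothetical port)
port = 5433          # any free port other than 5432
```

The 9.0 psql would then be pointed at its own cluster with something like
bin/psql -p 5433.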

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Printing command string passed to EXECUTE command in plpgsql (after argument resolution)

2010-10-28 Thread Pavel Stehule
Hello

2010/10/28 Allan Kamau kamaual...@gmail.com:
 I am debugging a plpgsql function which contains a long sql query
 consisting of several parameters which is executed using EXECUTE
 command. I would like to output this command string including the
 actual values of the parameters contained within it so I can obtain
 the actual query and run it directly (manually) in psql (or other
 client such as pgAdmin3). Or at least is possible to output the
 command string as is, followed by the parameter resolution details
 such as $1='2', $2='abc' and so on.


If you have a reasonably modern version of PostgreSQL, you can use the
auto_explain contrib module. Just do:

load 'auto_explain';
set auto_explain.log_min_duration = 0;
set auto_explain.log_nested_statements = true;

select your_function();

then look in the log.

Regards

Pavel Stehule

Another solution is just to use RAISE NOTICE:

var := 'SELECT  ';
RAISE NOTICE '%', var;
EXECUTE var USING ..

Regards

Pavel Stehule



 Allan.

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




Re: [GENERAL] Printing command string passed to EXECUTE command in plpgsql (after argument resolution)

2010-10-28 Thread Leif Biberg Kristensen
On Thursday 28. October 2010 16.25.47 Allan Kamau wrote:
 I am debugging a plpgsql function which contains a long sql query
 consisting of several parameters which is executed using EXECUTE
 command. I would like to output this command string including the
 actual values of the parameters contained within it so I can obtain
 the actual query and run it directly (manually) in psql (or other
 client such as pgAdmin3). Or at least is possible to output the
 command string as is, followed by the parameter resolution details
 such as $1='2', $2='abc' and so on.

If I understand you correctly, you can assign the SQL string to a variable x, 
and then do a RAISE NOTICE 'Query is: %', x
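
A minimal sketch of that pattern (the table name and values are hypothetical,
and it is wrapped in a 9.0 DO block only for illustration; inside a plpgsql
function the body would be the same). quote_ident and quote_literal guard
against quoting mistakes while building the string:

```sql
DO $$
DECLARE
    x text;
BEGIN
    -- build the dynamic statement once, in a variable
    x := 'SELECT count(*) FROM ' || quote_ident('my_table')
      || ' WHERE name = ' || quote_literal('abc');

    -- print exactly the string that will be executed
    RAISE NOTICE 'Query is: %', x;

    EXECUTE x;
END;
$$;
```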

regards,
Leif B. Kristensen



Re: [GENERAL] How to merge data from two separate databases into one (maybe using xlogs)?

2010-10-28 Thread Daniel . Crespo
 On Wed, Oct 27, 2010 at 5:19 PM,  daniel.cre...@l-3com.com wrote:
  thinking about the possibility of shipping all xlogs of both databases
  and putting them into the final master (one of them), and replay them to
  have all data. Later, I would take care of the conflicts.
 
 
 Again, I recommend you look at Bucardo if you want multi-master
 replication.

Thanks for your response, Vick.

Unfortunately, I need to replicate DDL, so Bucardo is not an option. I was
thinking about having a way to replay xlogs from different servers into one,
to have all the data. I just don't know if it's possible. Does anybody know?

Thanks,
-Daniel



Re: [GENERAL] Printing command string passed to EXECUTE command in plpgsql (after argument resolution)

2010-10-28 Thread Allan Kamau
On Thu, Oct 28, 2010 at 5:47 PM, Leif Biberg Kristensen
l...@solumslekt.org wrote:
 On Thursday 28. October 2010 16.25.47 Allan Kamau wrote:
 I am debugging a plpgsql function which contains a long sql query
 consisting of several parameters which is executed using EXECUTE
 command. I would like to output this command string including the
 actual values of the parameters contained within it so I can obtain
 the actual query and run it directly (manually) in psql (or other
 client such as pgAdmin3). Or at least is possible to output the
 command string as is, followed by the parameter resolution details
 such as $1='2', $2='abc' and so on.

 If I understand you correctly, you can assign the SQL string to a variable x,
 and then do a RAISE NOTICE 'Query is: %', x

 regards,
 Leif B. Kristensen



RAISE NOTICE could work, but I would have to write another command
string, using % in place of the $n parameter placeholders: one string
for RAISE NOTICE and another for EXECUTE. This may potentially
introduce differences (due to human error) between the output of RAISE
NOTICE and the command string actually executed after parameter
resolution during the call to EXECUTE.

Pavel's suggestion to use the auto_explain contrib module may be a
workable solution.

Allan.



Re: [GENERAL] Printing command string passed to EXECUTE command in plpgsql (after argument resolution)

2010-10-28 Thread Pavel Stehule
 RAISE NOTICE could work, but I would have to write another command
 string, using % in place of the $n parameter placeholders: one string
 for RAISE NOTICE and another for EXECUTE. This may potentially
 introduce differences (due to human error) between the output of RAISE
 NOTICE and the command string actually executed after parameter
 resolution during the call to EXECUTE.


You can simply minimize these risks:

CREATE OR REPLACE FUNCTION notice(text, boolean)
RETURNS text AS $$
BEGIN
  IF $2 THEN
RAISE NOTICE '%', $1;
  END IF;
  RETURN $1;
END;
$$ LANGUAGE plpgsql;

and then you can use it in EXECUTE

EXECUTE notice('SELECT ', true) USING ...

Regards

Pavel Stehule

 Pavel's suggestion to use 'auto_explain' contrib module may be one of
 the probable solutions.

 Allan.





Re: [GENERAL] Should PQconsumeInput/PQisBusy be expensive to use?

2010-10-28 Thread Michael Clark
Hello all.

Thanks a lot for the responses, they are appreciated.

I think I now understand the folly of my loop, and how that was negatively
impacting my test.

I tried the suggestion Alex and Tom made to change my loop with a select()
and my results are now very close to the non-async version.

The main reason for looking at this API is not to support async in our
applications, that is being achieved architecturally in a PG agnostic way.
 It is to give our PG agnostic layer the ability to cancel queries.
(Admittedly the queries I mention in these emails are not candidates for
cancelling...).

Again, thanks so much for the help.
Michael.


On Wed, Oct 27, 2010 at 6:10 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Michael Clark codingni...@gmail.com writes:
  In doing some experiments I found that using
  PQsendQueryParams/PQconsumeInput/PQisBusy/PQgetResult produces slower
  results than simply calling PQexecParams.

 Well, PQconsumeInput involves at least one extra kernel call (to see
 whether data is available) so I don't know why this surprises you.
 The value of those functions is if your application can do something
 else useful while it's waiting.  If the data comes back so fast that
 you can't afford any extra cycles expended on the client side, then
 you don't have any use for those functions.

 However, if you do have something useful to do, the problem with
 this example code is that it's not doing that, it's just spinning:

  while ( ((consume_result = PQconsumeInput(self.db)) == 1) &&
  ((is_busy_result = PQisBusy(self.db)) == 1) )
  ;

 That's a busy-wait loop, so it's no wonder you're eating cycles there.
 You want to sleep, or more likely do something else productive, when
 there is no data available from the underlying socket.  Usually the
 idea is to include libpq's socket in the set of files being watched
 by select() or poll(), and dispatch off to something that absorbs
 the data whenever you see some data is available to read.

regards, tom lane



Re: [GENERAL] Should PQconsumeInput/PQisBusy be expensive to use?

2010-10-28 Thread A.M.

On Oct 28, 2010, at 11:08 AM, Michael Clark wrote:

 Hello all.
 
 Thanks a lot for the responses, they are appreciated.
 
 I think I now understand the folly of my loop, and how that was negatively
 impacting my test.
 
 I tried the suggestion Alex and Tom made to change my loop with a select()
 and my results are now very close to the non-async version.
 
 The main reason for looking at this API is not to support async in our
 applications, that is being achieved architecturally in a PG agnostic way.
 It is to give our PG agnostic layer the ability to cancel queries.
 (Admittedly the queries I mention in these emails are not candidates for
 cancelling...).

Hm- I'm not sure how the async API will allow you to cancel queries. In 
PostgreSQL, query canceling is implemented by opening a second connection and 
passing specific data which is received from the first connection (effectively 
sending a cancel signal to the connection instead of a specific query). This 
implementation is necessitated by the fact that the PostgreSQL backend isn't 
asynchronous.

Even if you cancel the query, you still need to consume the socket input. Query 
cancellation is available for libpq both in sync and async modes.

Cheers,
M


Re: [GENERAL] Should PQconsumeInput/PQisBusy be expensive to use?

2010-10-28 Thread Michael Clark
On Thu, Oct 28, 2010 at 11:15 AM, A.M. age...@themactionfaction.com wrote:


 On Oct 28, 2010, at 11:08 AM, Michael Clark wrote:

  Hello all.
 
  Thanks a lot for the responses, they are appreciated.
 
  I think I now understand the folly of my loop, and how that was
 negatively
  impacting my test.
 
  I tried the suggestion Alex and Tom made to change my loop with a
 select()
  and my results are now very close to the non-async version.
 
  The main reason for looking at this API is not to support async in our
  applications, that is being achieved architecturally in a PG agnostic
 way.
  It is to give our PG agnostic layer the ability to cancel queries.
  (Admittedly the queries I mention in these emails are not candidates for
  cancelling...).

 Hm- I'm not sure how the async API will allow you to cancel queries. In
 PostgreSQL, query canceling is implemented by opening a second connection
 and passing specific data which is received from the first connection
 (effectively sending a cancel signal to the connection instead of a specific
 query). This implementation is necessitated by the fact that the PostgreSQL
 backend isn't asynchronous.

 Even if you cancel the query, you still need to consume the socket input.
 Query cancellation is available for libpq both in sync and async modes.


Oh.  I misunderstood that.

I guess I can have one thread performing the query using the non async PG
calls, then from another thread issue the cancellation.  Both threads
accessing the same PGconn ?

I am glad I added that extra bit of info in my reply, and that you caught
it!

Thank you!
Michael.


[GENERAL] earthdistance or PostGIS for find * within point and radius

2010-10-28 Thread John Cheng
I have a database with the earthdistance contrib module installed and
I need to find records whose long  latitude are within a given
distance of a zip code. From the documentation on earthdistance, I
believe it is certainly possible to do a find points within a
distance of another point using the ll_to_earth, earth_box, and
earth_distance functions. What would be the advantages to installing
and using PostGIS instead?

I would like to avoid adding another dependency to the system and I
think earthdistance will be able to do what I need. However, I'll
certainly try PostGIS if it has a significant performance advantage.
Can anyone talk a bit about the pros and cons of earthdistance and
PostGIS?
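
For reference, the earthdistance pattern described above might be sketched
like this (the table, columns, centre coordinates, and 10 km radius are all
hypothetical; earth_box gives a fast, slightly oversized bounding-box filter
that earth_distance then tightens):

```sql
-- hypothetical: rows within 10 km of a zip code's centre point
SELECT name
FROM   places
WHERE  earth_box(ll_to_earth(45.0, -122.6), 10000)
       @> ll_to_earth(lat, lon)
  AND  earth_distance(ll_to_earth(45.0, -122.6),
                      ll_to_earth(lat, lon)) < 10000;
```

With a GiST index on ll_to_earth(lat, lon), the earth_box condition can use
the index, which is usually what matters for performance at this scale.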

--
---
John L Cheng



Re: [GENERAL] Why Select Count(*) from table - took over 20 minutes?

2010-10-28 Thread Ozz Nixon
How/where do I query this?

My script does not need a 100% accurate count - just a recently valid count - 
so I can verify the web crawlers are still crawling :-)

On Oct 27, 2010, at 7:15 AM, Merlin Moncure wrote:

 pg_class.reltuples




Re: [GENERAL] PostGIS return multiple points

2010-10-28 Thread ludwig

Just some thoughts:
- create a multipoint with 100 vertices instead of a single point and query 
once with ST_Intersects
- prepare the single-point query and execute the prepared query 100 times with 
the changing coordinates
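
The first idea might look like this (a sketch; the coordinates are made up,
and note that a single multipoint query tells you which polygons were hit,
but not by which point):

```sql
SELECT PolyName
FROM   MyPolygones
WHERE  ST_Intersects(the_geom,
         GeomFromText('MULTIPOINT(1.0 51.0, 1.2 51.3)', 4326));
```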

Ludwig 


----- Original message -----
From: trevor1940
Sent: 28.10.10 10:00
To: pgsql-general@postgresql.org
Subject: [GENERAL] PostGIS return multiple points

 Hi
 
 I have a PostGIS table and I wish to get the location/name of multiple
 points at once. The command for selecting one point is
 
 select PolyName from MyPolygones where st_Contains(the_geom,
 GeomFromText('POINT($LAT $LONG)', 4326));
 
 where $LAT and $LONG are Perl variables.
 So how can I do this if I have 100 points without hitting the database 100
 times?



Re: [GENERAL] Why Select Count(*) from table - took over 20 minutes?

2010-10-28 Thread Merlin Moncure
On Thu, Oct 28, 2010 at 11:36 AM, Ozz Nixon ozzni...@gmail.com wrote:
 How/where do I query this?

 My script does not need a 100% accurate count - just a recently valid count - 
 so I can verify the web crawlers are still crawling :-)

you can do this:
select reltuples from pg_class where relname = 'your_table' and relkind = 'r';

that will give you an accurate count as of the last analyze, which is
going to be driven by table usage and/or manual analyze.   Probably
much better in your particular case is to do this:

select * from pg_stat_all_tables where relname = 'your_table';

and look at the n_tup_ins, del, etc.  and make sure they are changing
(those numbers are reset when server resets, fyi).
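
The counter comparison Merlin describes can be scripted; a minimal sketch (the snapshot dicts stand in for rows fetched from pg_stat_all_tables with whatever driver you use):

```python
def crawlers_active(prev, curr):
    """Return True if any write counter advanced between two snapshots
    of pg_stat_all_tables for the crawled table."""
    keys = ("n_tup_ins", "n_tup_upd", "n_tup_del")
    # Counters reset when the server restarts, so treat any change
    # (including an apparent decrease) as activity.
    return any(curr[k] != prev[k] for k in keys)

prev = {"n_tup_ins": 100, "n_tup_upd": 5, "n_tup_del": 0}
curr = {"n_tup_ins": 180, "n_tup_upd": 5, "n_tup_del": 0}
print(crawlers_active(prev, curr))  # True: inserts advanced
```

Polling this every few minutes is enough to confirm the crawlers are still writing, without ever running a full count(*).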

merlin



Re: [GENERAL] Should PQconsumeInput/PQisBusy be expensive to use?

2010-10-28 Thread Daniel Verite
A.M. wrote:

 In PostgreSQL, query canceling is implemented by opening a
 second connection and passing specific data which is received
 from the first connection

With libpq's PQcancel(), a second connection is not necessary.

Best regards,
-- 
Daniel
PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org



Re: [GENERAL] Should PQconsumeInput/PQisBusy be expensive to use?

2010-10-28 Thread Daniel Verite
Michael Clark wrote:

 I guess I can have one thread performing the query using the non async PG
 calls, then from another thread issue the cancellation.  Both threads
 accessing the same PGconn ?

Yes. See http://www.postgresql.org/docs/9.0/static/libpq-cancel.html

Best regards,
-- 
Daniel
PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org



[GENERAL] moving database objects from one schema to other

2010-10-28 Thread akp geek
Hi all -

  I would like to know if there is another way of moving
objects from one schema to another. Right now I take a backup and
import whatever database objects I want to move using pg_restore, but I
am finding that difficult for functions and sequences.

Thanks for your help

Regards


Re: [GENERAL] moving database objects from one schema to other

2010-10-28 Thread Thom Brown
On 28 October 2010 17:11, akp geek akpg...@gmail.com wrote:
 Hi all -
               I would like to know if there is any other way of moving
 objects from one schema to other schema? right now the way I am doing it is
 , take the backup and importing whatever the database objects I wanted to
 move, I am doing that using pg_restore. But I am finding it difficult for
 functions, sequences
 Thanks for your help
 Regards

Yes, you just need to assign a new schema:

ALTER TABLE table_name SET SCHEMA new_schema;
ALTER FUNCTION function_name(argument_types) SET SCHEMA new_schema;

etc...
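
Since ALTER ... SET SCHEMA must be issued once per object, a small generator helps when moving many objects at once. A hedged sketch (the object names are made up for illustration, and function names must include their argument signature; check the ALTER syntax supported by your PostgreSQL version):

```python
def set_schema_statements(objects, new_schema):
    """Build ALTER ... SET SCHEMA statements for (kind, name) pairs.
    For functions, 'name' must include the argument type list."""
    templates = {
        "TABLE": "ALTER TABLE {name} SET SCHEMA {schema};",
        "SEQUENCE": "ALTER SEQUENCE {name} SET SCHEMA {schema};",
        "FUNCTION": "ALTER FUNCTION {name} SET SCHEMA {schema};",
    }
    return [templates[kind].format(name=name, schema=new_schema)
            for kind, name in objects]

# Hypothetical objects to move into a "sales" schema:
for stmt in set_schema_statements(
        [("TABLE", "orders"),
         ("SEQUENCE", "orders_id_seq"),
         ("FUNCTION", "order_total(integer)")],
        "sales"):
    print(stmt)
```

Feed the generated statements to psql inside a transaction so the move is all-or-nothing.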

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935



Re: [GENERAL] moving database objects from one schema to other

2010-10-28 Thread Pavel Stehule
2010/10/28 akp geek akpg...@gmail.com:
 Hi all -
               I would like to know if there is any other way of moving
 objects from one schema to other schema? right now the way I am doing it is
 , take the backup and importing whatever the database objects I wanted to
 move, I am doing that using pg_restore. But I am finding it difficult for
 functions, sequences
 Thanks for your help
 Regards

hello

use a statement: ALTER TABLE tablename SET SCHEMA new_schema

Regards

Pavel Stehule



Re: [GENERAL] PostGIS return multiple points

2010-10-28 Thread Szymon Guz
On 28 October 2010 10:00, trevor1940 anto...@nsom.org.uk wrote:


 Hi

 I have a PostGIS table and I wish to get the location/name of multiple
 points at once the command for selecting one point is

 select PolyName from MyPolygones where st_Contains(the_geom,
 GeomFromText('point($LAT $LONG)4326');

 where $LAT $LONG are perl varables
 So how can i do this if iI have 100 points without hitting the database 100
 times?


Hi,
you could create one query using MULTIPOINT, something like:

ST_MPointFromText('MULTIPOINT($LAT1 $LONG1, $LAT2 $LONG2 ... $LAT100
$LONG100 )', 4326)

now you could find all polygons:

SELECT DISTINCT PolyName
FROM MyPolygones
WHERE ST_INTERSECTS(the_geom,
ST_MPointFromText('MULTIPOINT($LAT1 $LONG1, $LAT2 $LONG2 ... $LAT100
$LONG100 )', 4326)
);

where:
ST_INTERSECTS returns true if geometries have any common point
ST_MPointFromText creates a shiny new MULTIPOINT geometry
containing all the points

regards
Szymon
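
The MULTIPOINT text itself has to be assembled from the coordinate pairs. A minimal sketch in Python (the OP uses Perl; the table and function names come from the thread, and the lat/long ordering follows the OP's 'point($LAT $LONG)' usage, though WKT conventionally puts X/longitude first):

```python
def multipoint_wkt(points):
    """Build a MULTIPOINT WKT string from (lat, lon) pairs.
    Coordinate order here mirrors the thread's 'point($LAT $LONG)';
    verify against your data, since WKT is normally 'X Y' (lon lat)."""
    coords = ", ".join(f"{lat} {lon}" for lat, lon in points)
    return f"MULTIPOINT({coords})"

pts = [(51.5, -0.12), (48.85, 2.35)]
wkt = multipoint_wkt(pts)
print(wkt)  # MULTIPOINT(51.5 -0.12, 48.85 2.35)

# Pass the WKT as a bound parameter to the single query:
sql = ("SELECT DISTINCT PolyName FROM MyPolygones "
       "WHERE ST_Intersects(the_geom, ST_MPointFromText(%s, 4326));")
```

One round trip to the server instead of 100, with the WKT passed as a parameter rather than interpolated into the SQL.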


Re: [GENERAL] moving database objects from one schema to other

2010-10-28 Thread akp geek
Thanks a lot.

Regards

On Thu, Oct 28, 2010 at 12:23 PM, Pavel Stehule pavel.steh...@gmail.comwrote:

 2010/10/28 akp geek akpg...@gmail.com:
  Hi all -
I would like to know if there is any other way of moving
  objects from one schema to other schema? right now the way I am doing it
 is
  , take the backup and importing whatever the database objects I wanted to
  move, I am doing that using pg_restore. But I am finding it difficult for
  functions, sequences
  Thanks for your help
  Regards

 hello

 use a statement: ALTER TABLE tablename SET SCHEMA new_schema

 Regards

 Pavel Stehule



Re: [GENERAL] exceptionally large UPDATE

2010-10-28 Thread Ivan Sergio Borgonovo
On Thu, 28 Oct 2010 08:58:34 -0400
Vick Khera vi...@khera.org wrote:

 On Wed, Oct 27, 2010 at 10:26 PM, Ivan Sergio Borgonovo
 m...@webthatworks.it wrote:
  I'm increasing maintenance_work_mem to 180MB just before
  recreating the gin index. Should it be more?
 
 
 You can do this on a per-connection basis; no need to alter the
 config file.  At the psql prompt (or via your script) just execute
 the query
 
 SET maintenance_work_mem = '180MB';

 If you've got the RAM, just use more of it.  I'd suspect your server
 has plenty of it, so use it!  When I reindex, I often give it 1 or
 2 GB.  If you can fit the whole table into that much space, you're
 going to go really really fast.

 Also, if you are going to update that many rows you may want to
 increase your checkpoint_segments.  Increasing that helps a *lot*
 when you're loading big data, so I would expect updating big data
 may also be helped.  I suppose it depends on how wide your rows
 are.  1.5 Million rows is really not all that big unless you have
 lots and lots of text columns.

Actually I'm pretty happy with the performance of the DB under normal
circumstances. I never investigated too much whether I could squeeze it
more. But when I have to deal with such huge updates the
performance is painful.

You made me start to wonder if I could improve performance even
under normal load.

But right now I've got to take care of this huge (well, the use of huge
is just relative to the performance I'm getting right now) update.

The things I've touched compared to the stock configuration were:
max_connections = 100 
shared_buffers = 240M
work_mem = 42MB
maintenance_work_mem = 180MB #(generally it is 40MB)
# these were touched as of autovacuum suggestion
max_fsm_pages = 437616
max_fsm_relations = 1200
checkpoint_segments = 35
random_page_cost = 3.0
default_statistics_target = 30
log_min_duration_statement = 1000

The box is running apache; the total average text length for each row
should be around 1Kb on the largest table.

What I'm planning to do is:
max_connections = 5
shared_buffers = 240M
work_mem = 90MB
maintenance_work_mem = 1GB
max_fsm_pages = 437616
max_fsm_relations = 1200
checkpoint_segments = 70
default_statistics_target = 30
#log_min_duration_statement = 1000

Any improvement?


-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




[GENERAL] share lock when only one user connected?

2010-10-28 Thread Ivan Sergio Borgonovo
I'm running this query when I'm the only user and this should be the
only thing running.

update catalog_items
  set Code=s.codice, CodeAlt=s.codicealt, BrandID=s.marca,
  Name=initcap(s.nome), Description=s.desc_l1,
  qty=coalesce(s.quantita, 0),
  ListPrice=coalesce(s.prezzoListino, 0) ,
  valIva=s.valIva, ivaInc=s.ivaInc,
  StatusID=coalesce(s.stato, 1) ,
  Weight=s.peso, dataInserimento=s.dataInserimento, daPub=s.daPub,
  BestSeller=s.bestSeller, posInMag=s.posInMag , ISBN=s.ISBN,
  dataPub=coalesce(s.dataPubblicazione, datapub),
  updated=now()
  from (
select a.id, a.codice, a.codicealt,
a.marca, a.nome, a.desc_l1, a.quantita, a.prezzoListino,
a.valIva, a.ivaInc, a.stato, a.peso, a.dataInserimento, a.daPub,
a.bestSeller, a.posInMag, a.ISBN, ip.dataPubblicazione
from
  import.Articoli a
  left join import.ArticoliPropTipo_1 ip on a.id=ip.idArt
  where a.action=8
) as s
where s.id=catalog_items.ItemID
;

And I get

DETAIL:  Process 7188 waits for ShareLock on transaction 110562621;
blocked by process 7244. Process 7244 waits for ShareLock on
transaction 110562544; blocked by process 7188.

On that table 2 triggers are defined:

create or replace function FT1IDX_catalog_items_update() returns
trigger as
$$
declare
_Name varchar(64);
begin
  select into _Name Name from catalog_brands
   where BrandID=new.BrandID;
   new.FT1IDX:=
 GetFTIDX('pg_catalog.english', new.Code, new.CodeAlt, new.ISBN,
 new.Name, new.Authors, _Name);
  return new;
end;
$$ language plpgsql volatile;

create or replace function FT1IDX_catalog_brands_update() returns
trigger as
$$
begin
  if(TG_OP='DELETE') then
update catalog_items set
  FT1IDX=
   GetFTIDX('pg_catalog.english', Code, CodeAlt, ISBN, Name, Authors, '')
where BrandID=old.BrandID;
return old;
  else
if(coalesce(new.Name,'') <> coalesce(old.Name,'')) then
  update catalog_items set
FT1IDX=
  GetFTIDX('pg_catalog.english', Code, CodeAlt, ISBN, Name, Authors, 
new.Name) where BrandID=new.BrandID;
end if;
  end if;
return new;
end;
$$ language plpgsql volatile;

What could it be? how can I fix it?



-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




Re: [GENERAL] Should PQconsumeInput/PQisBusy be expensive to use?

2010-10-28 Thread A.M.

On Oct 28, 2010, at 12:04 PM, Daniel Verite wrote:

   A.M. wrote:
 
 In PostgreSQL, query canceling is implemented by opening a
 second connection and passing specific data which is received
 from the first connection
 
 With libpq's PQCancel(), a second connection is not necessary.

To clarify, PQcancel() opens a new socket to the backend and sends the cancel 
message. (The server's socket address is passed as part of the cancel structure 
to PQcancel.)

http://git.postgresql.org/gitweb?p=postgresql.git;a=blob;f=src/interfaces/libpq/fe-connect.c;h=8f318a1a8cc5bf2d49b2605dd76581609cf9be32;hb=HEAD#l2964

The point is that a query can be cancelled from anywhere really and 
cancellation will not use the original connection socket.

Cheers,
M


Re: [GENERAL] share lock when only one user connected?

2010-10-28 Thread Tom Lane
Ivan Sergio Borgonovo m...@webthatworks.it writes:
 I'm running this query when I'm the only user and this should be the
 only thing running.

 And I get

 DETAIL:  Process 7188 waits for ShareLock on transaction 110562621;
 blocked by process 7244. Process 7244 waits for ShareLock on
 transaction 110562544; blocked by process 7188.

It's pretty hard to believe that you haven't got two processes ...
maybe there's something using dblink down in there somewhere?

Also, if you're using a recent PG release, the postmaster log should
contain additional information about the deadlock, like the specific
queries involved on both sides.

regards, tom lane



Re: [GENERAL] How to merge data from two separate databases into one (maybe using xlogs)?

2010-10-28 Thread John R Pierce

On 10/28/10 7:49 AM, daniel.cre...@l-3com.com wrote:

Unfortunately, I need to replicate DDLs, therefore Bucardo is not an
option. I was thinking about having a way to replay xlogs from different
servers into one, to have all the data. I just don't know if it's
possible. Does anybody know?


you want multimaster replication WITH multimaster DDL changes?


wow.

how do you resolve conflicts?





Re: [GENERAL] share lock when only one user connected?

2010-10-28 Thread Ivan Sergio Borgonovo
On Thu, 28 Oct 2010 13:57:18 -0400
Tom Lane t...@sss.pgh.pa.us wrote:

 Ivan Sergio Borgonovo m...@webthatworks.it writes:
  I'm running this query when I'm the only user and this should be
  the only thing running.
 
  And I get
 
  DETAIL:  Process 7188 waits for ShareLock on transaction
  110562621; blocked by process 7244. Process 7244 waits for
  ShareLock on transaction 110562544; blocked by process 7188.
 
 It's pretty hard to believe that you haven't got two processes ...
 maybe there's something using dblink down in there somewhere?
 
 Also, if you're using a recent PG release, the postmaster log
 should contain additional information about the deadlock, like the
 specific queries involved on both sides.

I can't think of any other process running concurrently... but yeah
I'm surprised as well, and no, there is no dblink running.

I may be wrong in assuming there is no other process running and
I'll check further, but at least I'm pretty sure there is no query
writing to that table other than the update.
These deadlocks start to happen when the involved rows number above 80K,
and the chance they happen grows with the number of rows involved (not
a big surprise).
The chance decreases if I drop the triggers. This may just be related
to the odds that two queries clash, since leaving the triggers in
place increases the execution time.

There is nothing else on the log other than:
2010-10-26 18:12:09 CEST ERROR:  deadlock detected
2010-10-26 18:12:09 CEST DETAIL:  Process 9729 waits for ShareLock
on transaction 110146905; blocked by process 11082. Process 11082
waits for ShareLock on transaction 110145470; blocked by process
9729.

Followed by the update statement...

How am I going to find which statements were locking each other?

thanks

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




[GENERAL] Full Text Search - Slow on common words

2010-10-28 Thread sub3

Hi,

I have a small web page set up to search within my domain based on keywords. 
One of the queries is:
  SELECT page.id, ts_rank_cd('{1.0, 1.0, 1.0, 1.0}', contFTI, q) FROM page, 
to_tsquery('steve') as q WHERE contFTI @@ q 

My problem is: when someone puts in a commonly seen word, the system slows
down and takes a while because of the large amount of data being returned
(retrieved from the table) and processed by the ts_rank_cd function.

How does everyone else handle something like this?  I can only think of 2
possible solutions:
- change the query to search for the same terms at least twice in the same
document (can I do that?)
- limit any searches to x results before ranking and tell the user their
search criteria is too generic.

Is there a better solution that I am missing?

Thanks,
Steve






Re: [GENERAL] Full Text Search - Slow on common words

2010-10-28 Thread Reid Thompson
On Thu, 2010-10-28 at 12:08 -0700, sub3 wrote:
 Hi,
 
 I have a small web page set up to search within my domain based on keywords. 
 One of the queries is:
   SELECT page.id ts_rank_cd('{1.0, 1.0, 1.0, 1.0}',contFTI,q) FROM page, 
 to_tsquery('steve') as q WHERE contFTI @@ q 
 
 My problem is: when someone puts in a commonly seen word, the system slows
 down and takes a while because of the large amount of data being returned
 (retrieved from the table)  processed by the rand_cd function.
 
 How does everyone else handle something like this?  I can only think of 2
 possible solutions:
 - change the query to search for the same terms at least twice in the same
 document (can I do that?)
 - limit any searches to x results before ranking  tell the user their
 search criteria is too generic.
 
 Is there a better solution that I am missing?
 

if the keyword is that common, is it really a keyword?  Exclude it.


Re: [GENERAL] How to merge data from two separate databases into one (maybe using xlogs)?

2010-10-28 Thread Daniel . Crespo
 you want multimaster replication WITH multimaster DDL changes?
 
 
 wow.
 
 how do you resolve conflicts?
 

I don't know how to get to that point yet. The question is: Can I use
xlogs from two different servers and replay them all in one? Depending
on how it is done, I can tell you how would I resolve the conflicts.



Re: [GENERAL] Full Text Search - Slow on common words

2010-10-28 Thread Dann Corbit
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Reid Thompson
Sent: Thursday, October 28, 2010 12:57 PM
To: st...@subwest.com
Cc: Reid Thompson; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Full Text Search - Slow on common words


On Thu, 2010-10-28 at 12:08 -0700, sub3 wrote:
 Hi,

 I have a small web page set up to search within my domain based on keywords.
 One of the queries is:
   SELECT page.id ts_rank_cd('{1.0, 1.0, 1.0, 1.0}',contFTI,q) FROM page,
 to_tsquery('steve') as q WHERE contFTI @@ q

 My problem is: when someone puts in a commonly seen word, the system slows
 down and takes a while because of the large amount of data being returned
 (retrieved from the table)  processed by the rand_cd function.

 How does everyone else handle something like this?  I can only think of 2
 possible solutions:
 - change the query to search for the same terms at least twice in the same
 document (can I do that?)
 - limit any searches to x results before ranking  tell the user their
 search criteria is too generic.

 Is there a better solution that I am missing?


if the keyword is that common, is it really a keyword?  Exclude it.



This general idea is called a stopword list.  You create a list of words that 
are so common that searching on them is counter-productive.

http://en.wikipedia.org/wiki/Stop_words
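
Filtering user input against such a list before building the tsquery can be sketched like this (the word list here is an illustrative subset, not PostgreSQL's shipped per-language stop-word files):

```python
# Illustrative stop-word subset; PostgreSQL's text search configurations
# ship their own per-language lists under tsearch_data/.
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in"}

def filter_terms(query):
    """Drop stop words from a raw search string; return remaining terms."""
    return [w for w in query.lower().split() if w not in STOP_WORDS]

print(filter_terms("the life of steve"))  # ['life', 'steve']
```

If nothing survives the filter, the application can tell the user their query is too generic instead of handing PostgreSQL a query that matches most of the table.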




Re: [GENERAL] How to merge data from two separate databases into one (maybe using xlogs)?

2010-10-28 Thread John R Pierce

On 10/28/10 12:55 PM, daniel.cre...@l-3com.com wrote:

you want multimaster replication WITH multimaster DDL changes?


wow.

how do you resolve conflicts?


I don't know how to get to that point yet. The question is: Can I use
xlogs from two different servers and replay them all in one? Depending
on how it is done, I can tell you how would I resolve the conflicts.


xlogs just contain blocks by block number, not high-level SQL 
operations that could be resolved.






[GENERAL] Replication Poll

2010-10-28 Thread Joshua D. Drake
Hey,

Based on the discussion here:

http://www.commandprompt.com/blogs/joshua_drake/2010/10/users_versus_customers_-_you_dont_need_no_stinking_replication/
http://thebuild.com/blog/2010/10/28/small-postgresql-installations-and-9-0-replication/
http://thebuild.com/blog/2010/10/27/users-want-functionality-not-features/

I have created a Replication Poll. Curious what the general populace
thinks:

https://www.postgresqlconference.org/content/replication-poll

You don't have to log in, but that would obviously help with validity of
results.

Sincerely,

JD
-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt




[GENERAL] MySQL - PostgreSQL conversion issue

2010-10-28 Thread Scott Newton
Hi

I have the following rather complicated SQL which works under MySQL but fails 
under PostgreSQL 8.3. The SQL is generated by A2Billing 
(http://www.asterisk2billing.org/cgi-bin/trac.cgi). The issue is the following 
part of the SQL:

as tf on tf.dnid=substr(cdr.dnid,1,length(tf.dnid))

where tf.dnid is a bigint and cdr.dnid is varchar(40). The error returned is 
ERROR:  function length(bigint) does not exist at character 2521
2010-10-29 13:34:27 NZDT HINT:  No function matches the given name and 
argument types. You might need to add explicit type casts.

I have tried adding explicit casts:

tf.dnid = substr(cdr.dnid, 1, length(cast(td.dnid) as varchar(40)))
which complains with
ERROR:  operator does not exist: bigint = text at character 2502

and

tf.dnid = cast(substr(cdr.dnid, 1, length(cast(td.dnid) as varchar(40))) as 
bigint)
which complains with
ERROR:  invalid input syntax for integer: "dnid"

What would be the best way to convert this to work under PostgreSQL?

The complete log entry is given below:

Thanks

2010-10-29 13:34:27 NZDT ERROR:  function length(bigint) does not exist at 
character 2521
2010-10-29 13:34:27 NZDT HINT:  No function matches the given name and 
argument types. You might need to add explicit type casts.
2010-10-29 13:34:27 NZDT STATEMENT:  create temporary table pnl_report  as 
select  
id,name,call_count,time_minutes,tall_free_buy_cost,pay_phone_buy_cost,orig_only,credits,orig_total,

tall_free_sell_cost,pay_phone_sell_cost,term_only,charges,term_total,   
first_use,discount,
net_revenue,(net_revenue-orig_total) as profit, (net_revenue-
orig_total)/net_revenue*100 as  margin
from(
select main_id as id, 
name,call_count,time_minutes,tall_free_buy_cost,pay_phone_buy_cost,orig_only,credits,orig_cost+credits
 
as orig_total,

tall_free_sell_cost,pay_phone_sell_cost,term_only,charges,term_cost+charges 
as term_total,
first_use,discount,
((term_cost+charges))*( 1-discount/100)as 
net_revenue
from(
 select  t1.id_group as 
main_id,cg.name,call_count,time_minutes,tall_free_buy_cost,pay_phone_buy_cost,
orig_cost-tall_free_buy_cost-pay_phone_buy_cost  as 
orig_only,orig_cost,
case when credits is null then 0 else credits end as credits,0 
as total,
tall_free_sell_cost,pay_phone_sell_cost,term_cost-
tall_free_sell_cost-pay_phone_sell_cost as term_only,term_cost,
case when charges is null then 0 else  charges end as  
charges,
first_use,discount

 from
 (
  select  id_group,count(*) as call_count ,sum(sessiontime) / 60 as 
time_minutes,
sum( case when tall_free=0 then 0 else 
real_sessiontime/60*tf_cost end) as tall_free_buy_cost,
sum( case when pay_phone=0 then 0 else 
real_sessiontime/60*tf_cost end) as pay_phone_buy_cost,
sum(buycost) as orig_cost,
sum( case when tall_free=0 then 0 else 
real_sessiontime/60*tf_sell_cost end) as tall_free_sell_cost,
sum( case when pay_phone=0 then 0 else 
real_sessiontime/60*tf_sell_cost end) as pay_phone_sell_cost,
sum(sessionbill) as term_cost,
sum(discount*sessionbill)/sum(sessionbill) as discount
  from (
   select cc.id_group,
   
cdr.sessiontime,cdr.dnid,cdr.real_sessiontime,sessionbill,buycost,cc.discount,
   case when tf.cost is null then 0 else tf.cost end as 
tf_cost,
case when tf.sell_cost is null then 0 else tf.sell_cost 
end as tf_sell_cost,
case when tf.dnid_type is null then 0 when 
tf.dnid_type=1 then 1 else 0 end as tall_free,
case when tf.dnid_type is null then 0 when 
tf.dnid_type=2 then 1 else 0 end as pay_phone
from cc_call cdr left join cc_card cc on cdr.card_id=cc.id 
left join 
(select 'dnid' as dnid, 0.1 as sell_cost,0.1 as cost,0 
as dnid_type union  select 8887798764,0.02,0.06 ,1  
) as tf on tf.dnid=substr(cdr.dnid,1,length(tf.dnid))
  where
sessiontime>0 and CURRENT_TIMESTAMP - interval '6 hours' <= 
cdr.starttime 
order by cdr.starttime desc
   ) as a group by id_group
 ) as t1  left join cc_card_group as cg on cg.id=id_group left join 
pnl_report_sub1 as t2 on t1.id_group=t2.id_group 
   left join pnl_report_sub2  as t3 on t1.id_group=t3.id_group
   left join  pnl_report_sub3 as t4 on t1.id_group=t4.id_group
 )as result
)as final
 
2010-10-29 13:34:27 NZDT ERROR:  relation pnl_report does not exist
2010-10-29 13:34:27 NZDT STATEMENT:  SELECT  
name,call_count,time_minutes,tall_free_buy_cost,pay_phone_buy_cost,orig_only,credits,orig_total,


Re: [GENERAL] MySQL - PostgreSQL conversion issue

2010-10-28 Thread Scott Marlowe
On Thu, Oct 28, 2010 at 6:44 PM, Scott Newton
scott.new...@vadacom.co.nz wrote:
 Hi

 I have the following rather complicated SQL which works under MySQL but fails
 under PostgreSQL 8.3. The SQL is generated by A2Billing
 (http://www.asterisk2billing.org/cgi-bin/trac.cgi). The issue is the following
 part of the SQL:

 as tf on tf.dnid=substr(cdr.dnid,1,length(tf.dnid))

 where tf.dnid is a bigint and cdr.dnid is varchar(40). The error returned is
 ERROR:  function length(bigint) does not exist at character 2521
 2010-10-29 13:34:27 NZDT HINT:  No function matches the given name and
 argument types. You might need to add explicit type casts.

This is usually a precedence thing, i.e. you're trying to join to a
set that doesn't exist yet at that point in the query.  Newer versions
of MySQL will also choke on such queries, I believe.  I didn't have
time to go through your whole query, but that's what to look for.



Re: [GENERAL] MySQL - PostgreSQL conversion issue

2010-10-28 Thread Dann Corbit
 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of Scott Marlowe
 Sent: Thursday, October 28, 2010 6:31 PM
 To: Scott Newton
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] MySQL - PostgreSQL conversion issue
 
 On Thu, Oct 28, 2010 at 6:44 PM, Scott Newton
 scott.new...@vadacom.co.nz wrote:
  Hi
 
  I have the following rather complicated SQL which works under MySQL
 but fails
  under PostgreSQL 8.3. The SQL is generated by A2Billing
  (http://www.asterisk2billing.org/cgi-bin/trac.cgi). The issue is the
 following
  part of the SQL:
 
  as tf on tf.dnid=substr(cdr.dnid,1,length(tf.dnid))
 
  where tf.dnid is a bigint and cdr.dnid is varchar(40). The error
 returned is
  ERROR:  function length(bigint) does not exist at character 2521
  2010-10-29 13:34:27 NZDT HINT:  No function matches the given name
 and
  argument types. You might need to add explicit type casts.
 
 This is usually a precedence thing, i.e. you're trying to join to a
 set that doesn't exist yet to that part of the query.  Newer versions
 of mysql will also choke on such queries I believe as well.  Didn't
 have time to go through your whole query but that's what to look for.

If the operation is a substring, and if tf.dnid is a bigint, then the query is 
absurd.
The data length of a bigint will be 8 bytes.  Length has no meaning since 
tf.dnid is not a string. Also, assignment of a substring to a bigint is a 
rather silly operation.

Probably what the o.p. wants is just:
tf.dnid = cdr.dnid::bigint
But that assumes the types are bigint and varchar, and that the o.p. wants to 
store the number contained in the string into the big integer.

I would hate to see the rest of the design.  I am ill from this tiny fragment.



Re: [GENERAL] MySQL - PostgreSQL conversion issue

2010-10-28 Thread Tom Lane
Scott Newton scott.new...@vadacom.co.nz writes:
 I have the following rather complicated SQL which works under MySQL but fails 
 under PostgreSQL 8.3. The SQL is generated by A2Billing 
 (http://www.asterisk2billing.org/cgi-bin/trac.cgi). The issue is the 
 following 
 part of the SQL:

 as tf on tf.dnid=substr(cdr.dnid,1,length(tf.dnid))

 where tf.dnid is a bigint and cdr.dnid is varchar(40).

[ blanch... ]  That's not the worst SQL code I've ever seen, but it
might be in the top ten.  They're apparently trying to see whether
tf.dnid, taken as a string, matches a prefix of cdr.dnid --- but what if
you have say '123' versus '1234foo'?  This will match, but most likely
it shouldn't.  They need to acquire a clue, and a better data
representation.

Anyway, you've got two different typing violations there, so you need
two casts to fix it:

as tf on tf.dnid::text = substr(cdr.dnid,1,length(tf.dnid::text))

(salt to taste with CAST and/or varchar if you prefer, but it's
all the same)

regards, tom lane
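
The false positive Tom describes is easy to reproduce; a sketch of the prefix test that the join condition performs:

```python
def prefix_match(tf_dnid, cdr_dnid):
    """Mimic: tf.dnid::text = substr(cdr.dnid, 1, length(tf.dnid::text))."""
    s = str(tf_dnid)
    return s == cdr_dnid[:len(s)]

# '123' matches '1234foo' even though they are different numbers:
print(prefix_match(123, "1234foo"))              # True -- the false positive
print(prefix_match(8887798764, "88877987641"))   # True -- the intended match
print(prefix_match(123, "456789"))               # False
```

Any dnid that happens to start with the digits of another dnid will collide, which is why Tom suggests a better data representation rather than just adding casts.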



Re: [GENERAL] MySQL - PostgreSQL conversion issue

2010-10-28 Thread Scott Newton
On Fri, 29 Oct 2010 14:40:51 Tom Lane wrote:
 [ blanch... ]  That's not the worst SQL code I've ever seen, but it
 might be in the top ten.  They're apparently trying to see whether
 tf.dnid, taken as a string, matches a prefix of cdr.dnid --- but what if
 you have say 123 versus 1234foo?  This will match, but most likely
 it shouldn't.  They need to acquire a clue, and a better data
 representation.

Agreed - it's not very pretty.

 Anyway, you've got two different typing violations there, so you need
 two casts to fix it:
 
 as tf on tf.dnid::text = substr(cdr.dnid,1,length(tf.dnid::text))

Unfortunately not:
from cc_call cdr left join cc_card cc on cdr.card_id=cc.id left join 
(select 'dnid' as dnid, 0.1 as sell_cost,0.1 as cost,0 
as dnid_type union  select 8887798764,0.02,0.06 ,1  
) as tf on 
tf.dnid::text=substr(cdr.dnid,1,length(tf.dnid::text))

still gives

2010-10-29 14:58:09 NZDT ERROR:  invalid input syntax for integer: "dnid"

Thanks

-- 
Regards
Scott Newton
Software Engineer @ Vadacom Limited
Ph: (09) 9690600 x 280



Re: [GENERAL] MySQL - PostgreSQL conversion issue

2010-10-28 Thread Alex Hunsaker
On Thu, Oct 28, 2010 at 20:02, Scott Newton scott.new...@vadacom.co.nz wrote:
 On Fri, 29 Oct 2010 14:40:51 Tom Lane wrote:
 Anyway, you've got two different typing violations there, so you need
 two casts to fix it:

 Unfortunately not:
 from cc_call cdr left join cc_card cc on cdr.card_id=cc.id left join
                        (select 'dnid' as dnid, 0.1 as sell_cost,0.1 as cost,0
 as dnid_type union  select 8887798764,0.02,0.06 ,1
                        ) as tf on
 tf.dnid::text=substr(cdr.dnid,1,length(tf.dnid::text))

 still gives

 2010-10-29 14:58:09 NZDT ERROR:  invalid input syntax for integer: "dnid"

You can fix that by casting 8887798764 as text (or by making dnid an
int).  The columns for a union need to be of the same type.
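Applied to the fragment quoted above, that fix might look like this (a
sketch: the numeric literal is cast to text so both UNION branches resolve
to the same type, and the now-redundant ::text casts in the join condition
are dropped):

```sql
-- Both branches of the UNION now yield text, so nothing is coerced to integer.
... left join (
    select 'dnid' as dnid, 0.1 as sell_cost, 0.1 as cost, 0 as dnid_type
    union
    select '8887798764'::text, 0.02, 0.06, 1
) as tf on tf.dnid = substr(cdr.dnid, 1, length(tf.dnid))
```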



[GENERAL] Can Postgres Not Do This Safely ?!?

2010-10-28 Thread Karl Pickett
Hello Postgres Hackers,

We have a simple 'event log' table that is insert only (by multiple
concurrent clients).  It has an integer primary key.  We want to do
incremental queries of this table every 5 minutes or so, i.e. select
* from events where id > LAST_ID_I_GOT to insert into a separate
reporting database.  The problem is, this simple approach has a race
that will forever skip uncommitted events.  I.e., if 5000 was
committed sooner than 4999, and we get 5000, we will never go back and
get 4999 when it finally commits.  How can we solve this?  Basically
it's a phantom row problem but it spans transactions.

I looked at checking the internal 'xmin' column, but the docs say that
is 32-bit, while something like txid_current_snapshot() returns a 64-bit
value.  I don't get it.  All I want is to make sure I skip over any
rows that are newer than the oldest currently running transaction.
Has nobody else run into this before?
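The problematic polling pattern described above might be sketched like this
(LAST_ID_I_GOT stands for the high-water mark the reporting job remembers
between runs; 5000 is an illustrative value):

```sql
-- Runs every ~5 minutes.  If the insert that produced id 5000 commits
-- before the one that produced id 4999, this poll can return 5000, the
-- job records 5000 as its new high-water mark, and the later-committing
-- row 4999 is never picked up.
SELECT * FROM events WHERE id > 5000 ORDER BY id;  -- 5000 = LAST_ID_I_GOT
```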

Thank you very much.

-- 
Karl Pickett



[GENERAL] Adivice on master - master replication.

2010-10-28 Thread Gregory Machin
Hi
I'm new to PostgreSQL.
I have been asked to set up application-level HA. The picture is: I
will have 2 web servers behind 2 network load balancers, so should
one fail, the NLBs will redirect traffic to the remaining server. I
need a similar scenario for the PostgreSQL servers.

From my reading I need a synchronised master-master setup, so both servers
need to be hot, such that the web servers are unaware of which server
they are communicating with; should one PostgreSQL server go down there
would be no interruption to the web servers.

The process needs to be handled automagically, at least the failover,
where one server takes over the failed server's workload.

Everything I have looked at has a single point of failure, generally
the management application that looks after the replication.

How do I achieve this? I'm looking for a comprehensive howto.

Thanks
G



Re: [GENERAL] Adivice on master - master replication.

2010-10-28 Thread Shoaib Mir
On Fri, Oct 29, 2010 at 9:42 AM, Gregory Machin g...@linuxpro.co.za wrote:


 From my reading I need a synchronised master-master setup, so both servers
 need to be hot, such that the web servers are unaware of which server
 they are communicating with; should one PostgreSQL server go down there
 would be no interruption to the web servers.


I have used rubyrep for such a scenario in the past, but that was for a
small database so it didn't give me much trouble.

http://www.rubyrep.org/

-- 
Shoaib Mir
http://shoaibmir.wordpress.com/


[GENERAL] How to Convert Integer to Serial

2010-10-28 Thread venkat
Dear All,

  I want to convert the integer datatype to the serial datatype. Is that
possible? Please let me know.

Thanks and Regards,

Ven


Re: [GENERAL] How to Convert Integer to Serial

2010-10-28 Thread venkat
Dear All,

  I got the solution... Here is my query

ALTER TABLE DemoTable  ALTER gid TYPE INT2;

Thanks again


On Fri, Oct 29, 2010 at 10:18 AM, venkat ven.tammin...@gmail.com wrote:

 Dear All,

   I want to convert integer datatype to serial datatype.. is that
 possible.Please let me know.

 Thanks and Regards,

 Ven



Re: [GENERAL] Adivice on master - master replication.

2010-10-28 Thread John R Pierce

On 10/28/10 3:42 PM, Gregory Machin wrote:

Hi
I'm new to PostgreSQL.
I have been asked to set up application-level HA. The picture is: I
will have 2 web servers behind 2 network load balancers, so should
one fail, the NLBs will redirect traffic to the remaining server. I
need a similar scenario for the PostgreSQL servers.

 From my reading I need a synchronised master-master setup, so both servers
need to be hot, such that the web servers are unaware of which server
they are communicating with; should one PostgreSQL server go down there
would be no interruption to the web servers.

The process needs to be handled automagically, at least the failover,
where one server takes over the failed server's workload.

Everything I have looked at has a single point of failure, generally
the management application that looks after the replication.

How do I achieve this? I'm looking for a comprehensive howto.


the traditional method is an HA cluster with master/standby, sharing 
storage via a fully redundant, dual-pathed SAN.  all ethernet 
connections are dual-pathed too, with redundant network switches.  the 
master and standby run a cluster management package, such as Linux 
Heartbeat or Veritas Cluster Server, which has distributed management 
capability.  for robustness, you use a fencing system on the shared 
storage so only one server is physically capable of accessing the shared 
storage (this prevents a class of failures where the other server also 
thinks it's active and tries to mount and access the database storage).  
the cluster manager also shares a single IP that always reaches the 
current master server, and your web farm uses that IP for all database 
operations.








Re: [GENERAL] [SQL] How to Convert Integer to Serial

2010-10-28 Thread Shoaib Mir
On Fri, Oct 29, 2010 at 3:48 PM, venkat ven.tammin...@gmail.com wrote:

 Dear All,

   I want to convert integer datatype to serial datatype.. is that
 possible.Please let me know.


I don't think ALTER TABLE will let you do that... so the best way to
achieve the same is:

- Create a sequence for the column (set the start of the sequence
according to the values already in the column)
- Now set this sequence as the default value for the specific column you
need to ALTER, like this:

ALTER TABLE tablename ALTER COLUMN columnname
SET DEFAULT nextval('newsequence'::regclass);
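Spelled out, those two steps might look like this (a sketch: tablename,
columnname, and newsequence are the placeholder names from the steps
above):

```sql
-- 1. Create the sequence and start it just past the column's current maximum.
CREATE SEQUENCE newsequence;
SELECT setval('newsequence', (SELECT max(columnname) FROM tablename));

-- 2. Attach it as the column default so future inserts draw from it.
ALTER TABLE tablename ALTER COLUMN columnname
    SET DEFAULT nextval('newsequence'::regclass);
```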

-- 
Shoaib Mir
http://shoaibmir.wordpress.com/


Re: [GENERAL] [SQL] How to Convert Integer to Serial

2010-10-28 Thread Shoaib Mir
On Fri, Oct 29, 2010 at 3:58 PM, venkat ven.tammin...@gmail.com wrote:

 Dear All,

   I got the solution... Here is my query

 ALTER TABLE DemoTable  ALTER gid TYPE INT2;



Are you sure that you have converted here to a SERIAL type?

-- 
Shoaib Mir
http://shoaibmir.wordpress.com/


Re: [GENERAL] [SQL] How to Convert Integer to Serial

2010-10-28 Thread venkat
ohhh, sorry... it was the wrong post... sorry again...

On Fri, Oct 29, 2010 at 10:30 AM, Shoaib Mir shoaib...@gmail.com wrote:

 On Fri, Oct 29, 2010 at 3:48 PM, venkat ven.tammin...@gmail.com wrote:

 Dear All,

   I want to convert integer datatype to serial datatype.. is that
 possible.Please let me know.


 I don't think ALTER TABLE will let you do that... so the best way to
 achieve the same is:

 - Create a sequence for the column (set the start of sequence according to
 the values you got already in the column)
 - Now set this sequence as the default value for that specific column which
 you need to ALTER like this:

 ALTER TABLE tablename ALTER COLUMN columnname
 SET DEFAULT nextval('newsequence'::regclass);

 --
 Shoaib Mir
 http://shoaibmir.wordpress.com/



Re: [GENERAL] How to Convert Integer to Serial

2010-10-28 Thread John R Pierce

On 10/28/10 9:48 PM, venkat wrote:

Dear All,

  I want to convert integer datatype to serial datatype.. is that 
possible.Please let me know.




SERIAL is not actually a data type; it's simply an integer with an 
associated sequence.  What Shoaib Mir gave will add this sequence to 
the existing integer.
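As an illustration, a SERIAL column is roughly shorthand for an integer
column plus an owned sequence (a sketch; t and id are placeholder names):

```sql
-- This:
CREATE TABLE t (id serial);

-- is roughly equivalent to:
CREATE SEQUENCE t_id_seq;
CREATE TABLE t (id integer NOT NULL DEFAULT nextval('t_id_seq'));
ALTER SEQUENCE t_id_seq OWNED BY t.id;
```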






Re: [GENERAL] [SQL] How to Convert Integer to Serial

2010-10-28 Thread Shoaib Mir
On Fri, Oct 29, 2010 at 4:05 PM, venkat ven.tammin...@gmail.com wrote:

 ALTER TABLE Demotable ALTER COLUMN gid
 SET DEFAULT nextval('serial'::regclass);


 It is saying that "serial" does not exist... is that true?



Please avoid sending emails to personal email addresses; use the
community mailing lists instead.

For the error you mentioned, please follow the steps from my last
email, as you are doing it the wrong way.

Create a sequence first and then assign that sequence to the column.
You cannot assign SERIAL like this; it has to be a sequence name in
there.

Shoaib Mir
http://shoaibmir.wordpress.com/


Re: [GENERAL] [SQL] How to Convert Integer to Serial

2010-10-28 Thread venkat
Yes Done... Thanks alot...

On Fri, Oct 29, 2010 at 10:40 AM, Shoaib Mir shoaib...@gmail.com wrote:

 On Fri, Oct 29, 2010 at 4:05 PM, venkat ven.tammin...@gmail.com wrote:

 ALTER TABLE Demotable ALTER COLUMN gid
 SET DEFAULT nextval('serial'::regclass);


 It is saying that ..serial does not exist...is that true ?



 Please avoid sending emails to the personal email addresses but use the
 community mailing lists for that.

 For the error you mentioned please follow the steps I mentioned in my last
 email as you are doing it the wrong way.

 Create a sequence first and then you can assign that sequence to this
 column. You can not assign SERIAL like this, it has to be a sequence name in
 there.

 Shoaib Mir
 http://shoaibmir.wordpress.com/