[GENERAL] How to store and load images in PostgreSQL db?

2011-05-25 Thread MarkB
Hello, 

I recently started using PostgresDAC 9.0 and had no trouble getting things
to work until I tried to find out whether it is possible to store (and
load) images in the PostgreSQL db from Delphi 7. From what I have read, a
bytea field seems to be the best option.
Now I am struggling to find out how to save and load data to this field from
Delphi.
My test code is fairly simple:

  PSQLTable1.Insert;
  BlobField := TBlobField(PSQLTable1.FieldByName('Picture_Bytea'));
  BlobField.LoadFromFile('picture.bmp');
  PSQLTable1.Post;

(PSQLTable1 is on the form.)
This works fine until the Post procedure is called, at which point a db
error near "\" appears.

Instead of using a table I would rather use a Query, but I can't get that
to work at all.

  Query := TPSQLQuery.Create(nil);
  Query.DataBase := Db;
  BlobField := TBlobField.Create(nil);
  try
    BlobField.LoadFromFile('picture.bmp');
    Query.SQL.Text := Format('insert into tblImage (Picture_Bytea) values (%s)',
      [QuotedStr(BlobField.Value)]);
    Query.Open;
  finally
    Query.Free;
    BlobField.Free;
  end;

Apparently it is not allowed to create a TBlobField this way, and I don't
think it's a good idea to format a bytea field as a string (%s) either.

Sorry for my newbie question. I hope someone will be able to point me in the
right direction to get this working as I have been searching all over the
internet, but have not been able to find a solution. Even finding a correct
forum to post this question is difficult as there does not seem to be a
PostgreSQL forum for Delphi users.

Thanks in advance!



[GENERAL] No control over max.num. WAL files

2011-05-25 Thread Rafael Martinez

Hello

I am trying to move a postgres cluster with 90 databases and around
140GB of data between two servers (from 8.3.12 to 8.3.15).

I am using 'pg_dumpall | psql' in the process and everything works ok
until our pg_xlog partition gets full.

According to the documentation [1] we can expect a maximum of
(3 * checkpoint_segments + 1) segment files in pg_xlog.

In our system this will be (3 * 128 + 1) = 385 WAL files; at 16MB per
segment, that is ~6GB.

We have taken this into account + some extra space.

Our pg_xlog partition is ~8GB and under the restore process 486 WAL
files were created in this partition. The partition got full and
everything crashed.

Our question is: How can we get 486 WAL files generated in our pg_xlog
partition if the documentation says that in the worst case we will get
385 WAL files?

These are the relevant parameters we have changed in postgresql.conf:

 archive_mode  | off
 checkpoint_segments   | 128
 default_statistics_target | 100
 maintenance_work_mem  | 512MB
 max_fsm_pages | 80
 max_fsm_relations | 8000
 shared_buffers| 10GB
 wal_buffers   | 512kB
 wal_sync_method   | fdatasync
 work_mem  | 16MB

And these the relevant error messages:

PANIC:  could not write to file pg_xlog/xlogtemp.25133: No space left
on device
LOG:  WAL writer process (PID 25133) was terminated by signal 6: Aborted
LOG:  terminating any other active server processes
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back
the current transaction and exit, because another server process exited
abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and
repeat your command.
STATEMENT:  CREATE INDEX attachments_textsearch ON attachments USING gin
(textsearchable);
FATAL:  the database system is in recovery mode
LOG:  all server processes terminated; reinitializing
LOG:  database system was interrupted; last known up at 2011-05-20
17:46:18 CEST
LOG:  database system was not properly shut down; automatic recovery in
progress
LOG:  redo starts at 12/6FD38F70
FATAL:  the database system is in recovery mode
LOG:  could not open file pg_xlog/0000000100000013000000B0 (log file
19, segment 176): No such file or directory
LOG:  redo done at 13/ACE8
LOG:  autovacuum launcher started
LOG:  database system is ready to accept connections


As you can see the last SQL statement before the crash is:
CREATE INDEX attachments_textsearch ON attachments USING gin
(textsearchable);

Maybe the problem is related to this?

Any ideas?, thanks in advance.

[1] http://www.postgresql.org/docs/8.3/interactive/wal-configuration.html

regards,
-- 
 Rafael Martinez Guerrero
 Center for Information Technology
 University of Oslo, Norway

 PGP Public Key: http://folk.uio.no/rafael/



Re: [GENERAL] Postgres 8.3.5 - ECPG and the use of descriptors and cursors in multi-threaded programs

2011-05-25 Thread Leif Jensen
   Hello,

   Thank you for the suggestion; it seems the way to go. I have implemented this
using both a variable descriptor and a prepared statement (execquery) in my program,
and it works nicely except in one specific situation.

   What I didn't mention previously is that we are sometimes using 2
connections in the same thread: 1 for reading some tables (doing SELECTs), and 1
for writing other tables (doing INSERTs/UPDATEs) for each record from the
first, after some complex operations on the data.

   In this case, when I deallocate the execquery (and descriptor), I get an error
from the ecpg lib saying: "-230:26000 invalid statement name"

   Debugging into the ecpglib, I see that when 'get_connection()' is called 
(from ECPGdeallocate()) with NULL as parameter, it returns the wrong connection 
and then uses this and the query name in a call to 'find_prepared_statement()' 
which of course doesn't find any because of the mismatch of name and 
connection, hence the error message.

   Is it really not possible to use 2 separate connections within 1 thread at
the same time? Or is it an error in the ecpg library?

   Please help,

 Leif


- Bosco Rama postg...@boscorama.com wrote:

 Leif Jensen wrote:
  
  This seems to be working most of the time, but looking at the generated C
  code from the ecpg compiler and the associated library functions, we are
  not sure whether we should put mutex locks around the 'select' part to
  avoid several threads using the same execdesc at the same time.
  
  We have made sure that each thread uses their own and only their own
  database connection, but are unsure whether the ecpg library functions are
  able to handle multiple use of the static name execdesc?
 
 You are most probably trashing memory by using the same descriptor name in
 multiple threads.  However, given that you have already spent the effort to
 have the connections 'thread-dedicated' I think that rather than creating a
 critical path through an area that is intentionally supposed to be multi-
 threaded, I'd be inclined to use the connection name (or some derivation of
 it) as the name of the descriptor.  I haven't used descriptors in ecpg so I
 don't know if the syntax works, but you could try:
 
 exec sql char *dname = _thisDbConn;  // Or some derivation
 
 EXEC SQL AT :_thisDbConn ALLOCATE DESCRIPTOR :dname;
 ...
 EXEC SQL AT :_thisDbConn FETCH IN execcurs INTO SQL DESCRIPTOR :dname;
 ...
 EXEC SQL DEALLOCATE DESCRIPTOR :dname;
 
 Just a thought.
 
 Bosco.



Re: [GENERAL] No control over max.num. WAL files

2011-05-25 Thread Andrew Sullivan
On Wed, May 25, 2011 at 01:30:35PM +0200, Rafael Martinez wrote:
 I am using 'pg_dumpall | psql' in the process and everything works ok
 until our pg_xlog partition gets full.
 
 According to the documentation [1] we can expect a maximum of
 (3 * checkpoint_segments + 1 segment files) WAL files in pg_xlog.

That isn't what the documentation says.  It says this:

If, due to a short-term peak of log output rate, there are more
than 3 * checkpoint_segments + 1 segment files, the unneeded
segment files will be deleted instead of recycled until the system
gets back under this limit.

Note that "unneeded".  Obviously, you need more than that, probably
because you're restoring the database in one transaction (so none of
the files can be flushed).

 In our system this will be (3 * 128 + 1) = 385 WAL files (~6GB)
 
 We have taken this into account + some extra space.
 
 Our pg_xlog partition is ~8GB and under the restore process 486 WAL
 files were created in this partition. The partition got full and
 everything crashed.

Disk is cheap.  8G is hardly anything any more; I'd buy some more disk for WAL.

A


-- 
Andrew Sullivan
a...@crankycanuck.ca



Re: [GENERAL] No control over max.num. WAL files

2011-05-25 Thread Rafael Martinez

On 05/25/2011 02:08 PM, Andrew Sullivan wrote:
 On Wed, May 25, 2011 at 01:30:35PM +0200, Rafael Martinez wrote:

Thanks for your answer.


 According to the documentation [1] we can expect a maximum of
 (3 * checkpoint_segments + 1 segment files) WAL files in pg_xlog.

 That isn't what the documentation says.  It says this:

 If, due to a short-term peak of log output rate, there are more
 than 3 * checkpoint_segments + 1 segment files, the unneeded
 segment files will be deleted instead of recycled until the system
 gets back under this limit.

 Note that "unneeded".  Obviously, you need more than that, probably
 because you're restoring the database in one transaction (so none of
 the files can be flushed).


We are not restoring the database in one transaction, so it is not clear
to me why we need all these extra WAL files around.

If this is the case, I think the documentation is not clear enough and
can lead to misunderstandings.

If we cannot calculate roughly how much disk we need for WAL files, how
can we be sure we won't get into another situation that needs even more
space than the space we have allocated?

[..]

 Disk is cheap.  8G is hardly anything any more; I'd buy some more disk for 
 WAL.


That is not the problem; our data is in a SAN. Our problem is: if what
you are saying is true, how much disk space is enough?

regards,
-- 
 Rafael Martinez Guerrero
 Center for Information Technology
 University of Oslo, Norway

 PGP Public Key: http://folk.uio.no/rafael/



Re: [GENERAL] No control over max.num. WAL files

2011-05-25 Thread Simon Riggs
On Wed, May 25, 2011 at 1:08 PM, Andrew Sullivan a...@crankycanuck.ca wrote:

 Note that unneeded.  Obviously, you need more than that, probably
 because you're restoring the database in one transaction (so none of
 the files can be flushed).

That's the way SQLServer and Oracle work, but not PostgreSQL. We can
clear down WAL files even during a long running transaction.

For us, "unneeded" means prior to the second-to-last checkpoint record.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [GENERAL] No control over max.num. WAL files

2011-05-25 Thread Scott Marlowe
On Wed, May 25, 2011 at 6:37 AM, Simon Riggs si...@2ndquadrant.com wrote:
 On Wed, May 25, 2011 at 1:08 PM, Andrew Sullivan a...@crankycanuck.ca wrote:

 Note that "unneeded".  Obviously, you need more than that, probably
 because you're restoring the database in one transaction (so none of
 the files can be flushed).

 That's the way SQLServer and Oracle work, but not PostgreSQL. We can
 clear down WAL files even during a long running transaction.

 For us, "unneeded" means prior to the second-to-last checkpoint record.

I wonder if the OP is outrunning his checkpoint writing?



Re: [GENERAL] No control over max.num. WAL files

2011-05-25 Thread Andrew Sullivan
On Wed, May 25, 2011 at 01:37:47PM +0100, Simon Riggs wrote:
 
 That's the way SQLServer and Oracle work, but not PostgreSQL. We can
 clear down WAL files even during a long running transaction.
 
 For us, "unneeded" means prior to the second-to-last checkpoint record.

Well, they're obviously not getting cleared down, so they must be
needed.  I know how Postgres is supposed to work in these cases, but
in my experience you cannot rely on the OP's calculation to provide
you with a true maximum.  Pathological conditions result in a lot of
WAL segments hanging around.

What I really suspect is that this has to do with the way I/O
scheduling works, particularly in the presence of the bgwriter.  But I
don't feel comfortable suggesting particular reasons for what I've
experienced in production.  What I _can_ tell you is that, when I've
had to do large restores like this, I wanted plenty of overhead for
WAL.  ISTR dedicating 40G to WAL one time for a case like this.

A

-- 
Andrew Sullivan
a...@crankycanuck.ca



Re: [GENERAL] No control over max.num. WAL files

2011-05-25 Thread Scott Marlowe
On Wed, May 25, 2011 at 6:47 AM, Andrew Sullivan a...@crankycanuck.ca wrote:
 On Wed, May 25, 2011 at 01:37:47PM +0100, Simon Riggs wrote:

 That's the way SQLServer and Oracle work, but not PostgreSQL. We can
 clear down WAL files even during a long running transaction.

 For us, "unneeded" means prior to the second-to-last checkpoint record.

 Well, they're obviously not getting cleared down, so they must be
 needed.  I know how Postgres is supposed to work in these cases, but
 in my experience you cannot rely on the OP's calculation to provide
 you with a true maximum.  Pathological conditions result in a lot of
 WAL segments hanging around.

 What I really suspect is that this has to do with the way I/O
 scheduling works, particularly in the presence of the bgwriter.  But I
 don't feel comfortable suggesting particular reasons for what I've
 experienced in production.  What I _can_ tell you is that, when I've
 had to do large restores like this, I wanted plenty of overhead for
 WAL.  ISTR dedicating 40G to WAL one time for a case like this.

I have one db server on a SAN right now with 20G allocated for WAL and
500G for the data/base dir, and I have no problems with my WAL ever
coming close to filling up.  But if I did, I'd just shut down the db,
move pg_xlog to the data/base dir on the 500G drive set, restart,
restore, shut down, move pg_xlog back, restart and go.



Re: [GENERAL] No control over max.num. WAL files

2011-05-25 Thread Craig Ringer

These are the relevant parameters we have changed in postgresql.conf:

  archive_mode  | off
  checkpoint_segments   | 128
  default_statistics_target | 100
  maintenance_work_mem  | 512MB
  max_fsm_pages | 80
  max_fsm_relations | 8000
  shared_buffers| 10GB
  wal_buffers   | 512kB
  wal_sync_method   | fdatasync
  work_mem  | 16MB



Are you sure you don't also have WAL archiving enabled? That can easily 
lead to too many WAL segments being kept if the archiving process can't 
keep up or fails.


It'd really help if you could show part of your postgresql logs from
during the restore.


--
Craig Ringer

Tech-related writing at http://soapyfrogs.blogspot.com/



Re: [GENERAL] No control over max.num. WAL files

2011-05-25 Thread Rafael Martinez

On 05/25/2011 02:55 PM, Craig Ringer wrote:
 These are the relevant parameters we have changed in postgresql.conf:

   archive_mode  | off
   checkpoint_segments   | 128
   default_statistics_target | 100
   maintenance_work_mem  | 512MB
   max_fsm_pages | 80
   max_fsm_relations | 8000
   shared_buffers| 10GB
   wal_buffers   | 512kB
   wal_sync_method   | fdatasync
   work_mem  | 16MB
 
 
 Are you sure you don't also have WAL archiving enabled? That can easily
 lead to too many WAL segments being kept if the archiving process can't
 keep up or fails.
 

I am sure. I have double checked.

# grep archive_mode postgresql.conf
archive_mode = off  

# SHOW archive_mode;
 archive_mode
--------------
 off
(1 row)


 It'd really help if you could show part of your postgresql logs from the
 during restore.
 

This happens when restoring a 30GB database. It looks like it stops when
creating a GIN index on a 4217 MB table. See my first post for the
relevant information from the log file (no relevant information/errors
appear before this).

STATEMENT:  CREATE INDEX attachments_textsearch ON attachments USING gin
(textsearchable);

Where 'textsearchable' is a tsvector column. In the source database,
this index is 7989 MB.

regards,
-- 
 Rafael Martinez Guerrero
 Center for Information Technology
 University of Oslo, Norway

 PGP Public Key: http://folk.uio.no/rafael/



Re: [GENERAL] No control over max.num. WAL files

2011-05-25 Thread Rafael Martinez

On 05/25/2011 02:47 PM, Andrew Sullivan wrote:
 On Wed, May 25, 2011 at 01:37:47PM +0100, Simon Riggs wrote:

 That's the way SQLServer and Oracle work, but not PostgreSQL. We can
 clear down WAL files even during a long running transaction.

 For us, "unneeded" means prior to the second-to-last checkpoint record.
 
 Well, they're obviously not getting cleared down, so they must be
 needed .. 

I wonder if full_page_writes has something to do with this.

From the documentation:

"If full_page_writes is set (as is the default), there is another
factor to consider. To ensure data page consistency, the first
modification of a data page after each checkpoint results in logging the
entire page content ..."


It looks like we get this 'problem' when creating a GIN index on a
tsvector column of a 4217 MB table (this takes longer than
checkpoint_timeout).

-- 
 Rafael Martinez Guerrero
 Center for Information Technology
 University of Oslo, Norway

 PGP Public Key: http://folk.uio.no/rafael/



Re: [GENERAL] How to store and load images in PostgreSQL db?

2011-05-25 Thread David Johnston
It may be as simple as making sure that:

bytea_output = escape

is present in postgresql.conf

9.0 changed the default to 'hex'.

No idea on the Table/Query aspect but at least in JDBC query parameters are
generic (i.e., ?) and you tell the API what type of value is being sent -
letting the API do the appropriate conversions.  Occasionally putting an
explicit cast on the parameter (i.e., ?::int) can be handy if you are
dealing with, say, strings in code but the parameter needs to be something
else.

If necessary you could decode your image into some textual representation
and then encode it back into binary once it is in the server (using
appropriate PostgreSQL functions).
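
For instance, a minimal sketch of that round trip (the table/column names
are the ones from the question; the hex literal is just a stand-in for real
image bytes built in client code):

  -- client sends the file contents as a hex string
  INSERT INTO tblImage (Picture_Bytea)
  VALUES (decode('424d7602', 'hex'));

  -- read it back as text; the client turns the hex back into bytes
  SELECT encode(Picture_Bytea, 'hex') FROM tblImage;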

David J.

 (PSQLTable1 is on the form.)
 This works fine until the Post procedure is called, at which point a db
 error near "\" appears.
 
 Instead of using a table I would rather use a Query, but I can't get to
work
 that at all.
 
   Query := TPSQLQuery.Create(nil);
   Query.DataBase := Db;
   BlobField := TBlobField.Create(nil);
   try
 BlobField.LoadFromFile('picture.bmp');
 Query.SQL.Text := Format ('insert into tblImage (Picture_Bytea)
values
 (%s)', [QuotedStr(BlobField.Value)]);
 Query.Open;
   finally
 Query.Free;
 BlobField.Free;
   end;
 
 Apparantly it is not allowed to create a TBlobField this way and I don't
think
 it's a good idea to format a bytea-field as a string (%s) either.
 
 Sorry for my newbie question. I hope someone will be able to point me in
 the right direction to get this working as I have been searching all over
the
 internet, but have not been able to find a solution. Even finding a
correct
 forum to post this question is difficult as there does not seem to be a
 PostgreSQL forum for Delphi users.
 
 Thanks in advance!
 




[GENERAL] temp files getting me down

2011-05-25 Thread Ben Chobot
I'm running 9.0.3, and recently started seeing temp files being created. This
is a problem because it's making a bunch of dirty buffers that have to be
flushed to disk and my poor little disk isn't up to the task. I'm not sure why,
though, because this is the EXPLAIN ANALYZE output for the queries that are
creating them:


   
                                            QUERY PLAN
------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..15180.05 rows=4000 width=109) (actual time=159.462..174.694 rows=4000 loops=1)
   ->  Index Scan using pending_replication_items on replication_queue  (cost=0.00..37247114.20 rows=9814757 width=109) (actual time=159.459..169.061 rows=4000 loops=1)
 Total runtime: 177.437 ms
(3 rows)


My understanding with temp files is that they are generated when sorts need to 
happen. But given that the index is doing all the sorting for me, I'm at a 
loss. Why else might I get temp files?
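
One way to see them as they are created (a sketch; log_temp_files has
existed since 8.3 and is an assumption of mine, not something from this
thread):

  -- log every temporary file a backend creates, together with its size
  SET log_temp_files = 0;

  -- sorts/hashes needing more memory than this spill to temp files
  SHOW work_mem;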


Re: [GENERAL] Postgres 8.3.5 - ECPG and the use of descriptors and cursors in multi-threaded programs

2011-05-25 Thread Bosco Rama
Leif Jensen wrote:
 
 Is it really not possible to use 2 separate connection within 1 thread
 at the same time ? or is it an error in the ecpg library ?

It should be entirely possible to run multiple connections in a single
thread as long as you manage the 'AT connName' clauses properly.

Though, IIRC, using an 'AT connName' clause on any sort of 'deallocate'
statement generates an error in ecpg:

  ecpg -o test.c test.pgc
  test.pgc:35: ERROR: AT option not allowed in DEALLOCATE statement

This happens when trying to deallocate a query or a prepared statement.
I don't use descriptors but the error message indicates it's _any_ sort
of deallocate.

So, it would appear that you can allocate on a connection but not
deallocate from one.  :-(

I wonder if Tom or Michael can shed some light on this one?

Bosco.



[GENERAL] Midpoint between two long/lat points? (earthdistance?)

2011-05-25 Thread Carlo Stonebanks
I need to calculate the long/lat midpoint of a line demarcated by two
long/lat points.

 

The points will be very close, but there is the 180 degree problem to
consider, so a simple average won't work.

 

Does anyone know of a function or have a formula that will work using geo
long/lat values? I don't see anything obvious in the earthdistance module.



Re: [GENERAL] Midpoint between two long/lat points? (earthdistance?)

2011-05-25 Thread Rick Genter
On Wed, May 25, 2011 at 9:47 AM, Carlo Stonebanks 
stonec.regis...@sympatico.ca wrote:

  I need to calculate the long/lat values between a line demarcated by two
 long/lat points.



 The points will be very close, but there is the 180 degree problem to
 consider, so a simple average won’t work.



 Does anyone know of a function or have a formula that will work using geo
 long/lat values? I don’t see anything obvious in the earthdistance module.


The simplest way to deal with the 180 degree problem is to remember that
you can add 360 degrees to a long and get a value that should continue to
work. So, assuming West is negative, -175 (175 degrees West) is the same
as -175+360 = 185 (185 degrees East). Then you don't have to worry about
wraparound. If the result is > 180, subtract 360.
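
In SQL that normalization might look like this (a sketch, not from the
thread; it assumes degrees with West negative and points close enough that
latitude can be a plain average; the function name is made up):

  CREATE OR REPLACE FUNCTION lon_midpoint(float8, float8) RETURNS float8 AS $$
    -- shift one longitude by 360 when the pair straddles the dateline,
    -- average, then fold the result back into (-180, 180]
    SELECT CASE WHEN m > 180 THEN m - 360 ELSE m END FROM (
      SELECT ( (CASE WHEN $2 - $1 > 180 THEN $1 + 360 ELSE $1 END)
             + (CASE WHEN $1 - $2 > 180 THEN $2 + 360 ELSE $2 END) ) / 2.0 AS m
    ) s;
  $$ LANGUAGE sql IMMUTABLE;

  SELECT lon_midpoint(-175, 170);  -- 177.5, the short way across the dateline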
-- 
Rick Genter
rick.gen...@gmail.com


[GENERAL] Connecting to Postgres using Windows 7

2011-05-25 Thread Geoffrey Becker
I'm relatively new to postgres.  I've got a Visual Basic (VB)
application that I would like to connect to a Postgres database using
ODBC.  Both the VB application and postgres are on my laptop and both
work beautifully independently of each other.  Trouble is, I have a
Windows 7 64-bit OS and therefore I have been unable to get VB to connect
to the database.  My code is as follows:


Public Class Form1

    Dim cCon As ADODB.Connection

    Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click

        On Error GoTo EH

        cCon.ConnectionString = "DSN=MyDatabase;uid=MyUserID;pwd=MyPassword"

        cCon.Open()
        MsgBox("O.K.", vbInformation, "Connection Message")
        Exit Sub
EH:
        MsgBox(Err.Description, vbCritical, "Error Message")
    End Sub

    Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load

        cCon = New ADODB.Connection
    End Sub
End Class

When I try to connect, all I get is a "Server does not exist or access
is denied" error.  I've tried configuring ODBC using odbcad32.exe as it
seems that is necessary on a 64-bit OS, but I'm not sure if that's even
the right way to go.


Any help i could get would be much appreciated.

Thanks,

Geoff



[GENERAL] PostgreSQL 8.4.8 bringing my website down every evening

2011-05-25 Thread Alexander Farber
Hello fellow PostgreSQL-users,

I run a Drupal 7 (+Facebook app) website
with a multiplayer flash game and use
postgresql-server-8.4.8-1PGDG.rhel5 +
CentOS 5.6 64 bit on a Quad-Core/4GB machine.

I generally like using PostgreSQL even though
I'm not an experienced DB-user, but in recent
weeks it has given me a lot of headache, bringing
my website to a halt every evening (when
most players visit the website for a game).

I think this is a result of having more users
and having written a few more statistics scripts
for them (I use PHP with persistent connections;
I use only local PostgreSQL connections).

I suspect if I could configure
PostgreSQL accordingly, it would run ok again.

During crashes when/if I manage to ssh into
my server it is barely usable and I see lots
of postmaster processes.

I have the following settings in pg_hba.conf:

local   all all   md5
hostall all 127.0.0.1/32  md5

And the following changes in postgresql.conf:

max_connections = 512
shared_buffers = 32MB
log_destination = 'stderr'
log_directory = 'pg_log'
log_filename = 'postgresql-%a.log'
logging_collector = on
log_rotation_age = 1d
log_rotation_size = 0
log_truncate_on_rotation = on

My Apache httpd.conf:
<IfModule prefork.c>
StartServers   10
MinSpareServers12
MaxSpareServers   50
ServerLimit  300
MaxClients   300
MaxRequestsPerChild  4000
</IfModule>

I look into
/var/lib/pgsql/data/pg_log/postgresql-Wed.log
but don't see anything alarming there.

WARNING:  nonstandard use of \\ in a string literal at character 220
HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
WARNING:  nonstandard use of \\ in a string literal at character 142
HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
WARNING:  nonstandard use of \\ in a string literal at character 204
HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
etc.

Does anybody please have any advice?

Do I have to apply any shared memory/etc. settings
to the CentOS Linux system? When I used OpenBSD some
years ago, there were specific instructions to apply to
its kernel/sysctl.conf in the postgresql port readme.

Thank you
Alex



Re: [GENERAL] Preventing OOM kills

2011-05-25 Thread Marco Colombo

On 05/25/2011 03:01 AM, John R Pierce wrote:

On 05/24/11 5:50 PM, Andrej wrote:

Add more RAM? Look at tunables for other processes on
the machine? At the end of the day making the kernel shoot
anything out of despair shouldn't be the done thing.


somehow, 'real' unix has neither an OOM killer nor does it flat out die
under heavy loads, it just degrades gracefully. I've seen Solaris and
AIX and BSD servers happily chugging along with load factors in the
100s, significant portions of memory paging, etc, without completely
crumbling to a halt. Sometimes I wonder why Linux even pretends to
support virtual memory, as you sure don't want it to be paging.




http://developers.sun.com/solaris/articles/subprocess/subprocess.html

Some operating systems (such as Linux, IBM AIX, and HP-UX) have a 
feature called memory overcommit (also known as lazy swap allocation). 
In a memory overcommit mode, malloc() does not reserve swap space and 
always returns a non-NULL pointer, regardless of whether there is enough 
VM on the system to support it or not.


The memory overcommit feature has advantages and disadvantages.

(the page goes on with some interesting info) [*]

It appears by your definition that neither Linux, AIX nor HP-UX is
'real' Unix. Oh, wait, FreeBSD overcommits too, so it can't be 'real' either.


/me wonders now what a 'real' Unix is. :) Must be something related with 
'true' SysV derivatives. If memory serves me well, that's where the word 
'thrashing' originated, right? Actually in my experience nothing 
'thrashes' better than a SysV, Solaris included.


The solution for the OP problem is to keep the system from reaching OOM 
state in the first place. That is necessary even with overcommitting 
turned off. PG not performing its job because malloc() keeps failing 
isn't really a solution.
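
For reference, the usual Linux switch (described in the PostgreSQL
kernel-resources docs) is a one-line sysctl; whether flipping it actually
helps here is exactly what is being debated above:

  # /etc/sysctl.conf -- make malloc() fail instead of inviting the OOM killer
  vm.overcommit_memory = 2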


.TM.

[*] One missing piece is that overcommitting actually prevents or delays
the OOM state. The article does mention that "system memory can be used
more flexibly and efficiently" without really elaborating further. It
means that, given the same amount of memory (RAM+swap), a non-overcommitting
system reaches OOM way before an overcommitting one. Also it is rarely a
good idea, when running low on memory, to switch to an allocation policy
that is _less_ efficient, memory-wise.




Re: [GENERAL] PostgreSQL 8.4.8 bringing my website down every evening

2011-05-25 Thread Thom Brown
On 25 May 2011 18:58, Alexander Farber alexander.far...@gmail.com wrote:

 Hello fellow PostgreSQL-users,

 I run a Drupal 7 (+Facebook app) website
 with a multiplayer flash game and use
 postgresql-server-8.4.8-1PGDG.rhel5 +
 CentOS 5.6 64 bit on a Quad-Core/4GB machine.

 I generally like using PostgreSQL eventhough
 I'm not an experienced DB-user, but in the recent
 weeks it gives me a lot of headache bringing
 my website to a halt every evening (when
 most players visit the website for a game).

 I think this is result of having more users
 and having written few more statistics scripts
 for them (I use PHP with persistent connections;
 I use only local PostgreSQL-connections).

 I suspect if I could configure
 PostgreSQL accordingly, it would run ok again.

 During crashes when/if I manage to ssh into
 my server it is barely usable and I see lots
 of postmaster processes.

 I have the following settings in pg_hba.conf:

 local   all all   md5
 hostall all 127.0.0.1/32  md5

 And the following changes in postgresql.conf:

 max_connections = 512
 shared_buffers = 32MB
 log_destination = 'stderr'
 log_directory = 'pg_log'
 log_filename = 'postgresql-%a.log'
 logging_collector = on
 log_rotation_age = 1d
 log_rotation_size = 0
 log_truncate_on_rotation = on

 My Apache httpd.conf:
 <IfModule prefork.c>
 StartServers   10
 MinSpareServers12
 MaxSpareServers   50
 ServerLimit  300
 MaxClients   300
 MaxRequestsPerChild  4000
 </IfModule>

 I look into
 /var/lib/pgsql/data/pg_log/postgresql-Wed.log
 but don't see anything alarming there.

 WARNING:  nonstandard use of \\ in a string literal at character 220
 HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
 WARNING:  nonstandard use of \\ in a string literal at character 142
 HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
 WARNING:  nonstandard use of \\ in a string literal at character 204
 HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
 etc.

 Does anybody please have any advice?

 Do I have to apply any shared memory/etc. settings
 to CentOS Linux system? When I used OpenBSD some
 years ago, there where specific instructions to apply to
 its kernel/sysctl.conf in the postgresql port readme.


Well, your shared_buffers setting is likely far too low.  How much memory do
you have available in your system?

And the instructions I think you are looking for are here:
http://www.postgresql.org/docs/current/static/kernel-resources.html

What have you got checkpoint_segments set to?  Are there any warnings in
your log about checkpoints occurring too frequently?

And a way to reduce the I/O impact of checkpoint spikes is to smooth them
out by increasing checkpoint_completion_target to 0.8.
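
As a postgresql.conf sketch of that suggestion (0.8 is the value mentioned
above; log_checkpoints is an extra, optional line to see how often
checkpoints really happen):

  checkpoint_completion_target = 0.8   # spread checkpoint writes over the interval
  log_checkpoints = on                 # report each checkpoint in the server log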

And do you know how many connections are in use during the times where it's
locked up?  If you're reaching your connection limit, it will start
rejecting connections.  A way to solve this problem is either to increase
your max_connections setting further, or introduce connection pooling,
either with something like pgBouncer (
http://pgfoundry.org/projects/pgbouncer/) or using Apache's connection
pooling if you know how to set it up.

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [GENERAL] PostgreSQL 8.4.8 bringing my website down every evening

2011-05-25 Thread tv
 Hello fellow PostgreSQL-users,

 I run a Drupal 7 (+Facebook app) website
 with a multiplayer flash game and use
 postgresql-server-8.4.8-1PGDG.rhel5 +
 CentOS 5.6 64 bit on a Quad-Core/4GB machine.

 I generally like using PostgreSQL eventhough
 I'm not an experienced DB-user, but in the recent
 weeks it gives me a lot of headache bringing
 my website to a halt every evening (when
 most players visit the website for a game).

 I think this is result of having more users
 and having written few more statistics scripts
 for them (I use PHP with persistent connections;
 I use only local PostgreSQL-connections).

 I suspect if I could configure
 PostgreSQL accordingly, it would run ok again.

 During crashes when/if I manage to ssh into
 my server it is barely usable and I see lots
 of postmaster processes.

 I have the following settings in pg_hba.conf:

 local   all all   md5
 hostall all 127.0.0.1/32  md5

 And the following changes in postgresql.conf:

 max_connections = 512
 shared_buffers = 32MB
 log_destination = 'stderr'
 log_directory = 'pg_log'
 log_filename = 'postgresql-%a.log'
 logging_collector = on
 log_rotation_age = 1d
 log_rotation_size = 0
 log_truncate_on_rotation = on

Decrease max_connections and use connection pooling if possible (e.g.
pgbouncer). Each connection represents a separate postgres process, so you
may get up to 512 processes, and that many active processes kill
performance.

Set it to something like 25 and use connection pooling to handle the rest.
You may increase the number until the server is 'saturated' - beyond that
point there's no point in adding more connections.

Then increase the shared_buffers. Go with something like 512MB if there's
enough RAM.

 I look into
 /var/lib/pgsql/data/pg_log/postgresql-Wed.log
 but don't see anything alarming there.

 WARNING:  nonstandard use of \\ in a string literal at character 220
 HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
 WARNING:  nonstandard use of \\ in a string literal at character 142
 HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
 WARNING:  nonstandard use of \\ in a string literal at character 204
 HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
 etc.

 Does anybody please have any advice?

It has nothing to do with the performance issue; this is related to
incorrectly escaped strings. Modify the app so that strings are properly
escaped (put an E in front of the string, so you get something like
E'string').

Or just turn off the warning (escape_string_warning=off). See this

http://www.postgresql.org/docs/9.0/interactive/sql-syntax-lexical.html#SQL-SYNTAX-STRINGS-ESCAPE

 Do I have to apply any shared memory/etc. settings
 to CentOS Linux system? When I used OpenBSD some
 years ago, there where specific instructions to apply to
 its kernel/sysctl.conf in the postgresql port readme.

There still are such instructions. See this:

http://www.postgresql.org/docs/9.0/interactive/kernel-resources.html#SYSVIPC

But if the db starts after increasing the shared_buffers, then you
probably don't need to update this.

Tomas




Re: [GENERAL] temp files getting me down

2011-05-25 Thread Ben Chobot
On May 25, 2011, at 9:31 AM, Ben Chobot wrote:

 I'm running 9.0.3, and recently started getting temp files being created. 
 This is a problem because it's making a bunch of dirty buffers that have to 
 be flushed to disk and my poor little disk isn't up to the task. I'm not sure 
 why though, because this is the explain verbose for the queries that are 
 creating them:
 
 
   
                                             QUERY PLAN
 ------------------------------------------------------------------------------------------------------
  Limit  (cost=0.00..15180.05 rows=4000 width=109) (actual time=159.462..174.694 rows=4000 loops=1)
    ->  Index Scan using pending_replication_items on replication_queue  (cost=0.00..37247114.20 rows=9814757 width=109) (actual time=159.459..169.061 rows=4000 loops=1)
  Total runtime: 177.437 ms
  (3 rows)
 
 
 My understanding with temp files is that they are generated when sorts need 
 to happen. But given that the index is doing all the sorting for me, I'm at a 
 loss. Why else might I get temp files?

Just to close the loop, the problem was (apparently) due to table bloat. After 
I clustered the table, the problems went away immediately. I'd still like to 
understand what was happening, but at least my problem is solved.




Re: [GENERAL] PostgreSQL 8.4.8 bringing my website down every evening

2011-05-25 Thread Steve Crawford

On 05/25/2011 10:58 AM, Alexander Farber wrote:

Hello fellow PostgreSQL-users,

I run a Drupal 7 (+Facebook app) website
with a multiplayer flash game and use
postgresql-server-8.4.8-1PGDG.rhel5 +
CentOS 5.6 64 bit on a Quad-Core/4GB machine.

I generally like using PostgreSQL eventhough
I'm not an experienced DB-user, but in the recent
weeks it gives me a lot of headache bringing
my website to a halt every evening (when
most players visit the website for a game).

I think this is result of having more users
and having written few more statistics scripts
for them (I use PHP with persistent connections;
I use only local PostgreSQL-connections).

I suspect if I could configure
PostgreSQL accordingly, it would run ok again.

During crashes when/if I manage to ssh into
my server it is barely usable and I see lots
of postmaster processes.

I have the following settings in pg_hba.conf:

local   all all   md5
hostall all 127.0.0.1/32  md5

And the following changes in postgresql.conf:

max_connections = 512
shared_buffers = 32MB
log_destination = 'stderr'
log_directory = 'pg_log'
log_filename = 'postgresql-%a.log'
logging_collector = on
log_rotation_age = 1d
log_rotation_size = 0
log_truncate_on_rotation = on

My Apache httpd.conf:
<IfModule prefork.c>
StartServers   10
MinSpareServers12
MaxSpareServers   50
ServerLimit  300
MaxClients   300
MaxRequestsPerChild  4000
</IfModule>

I look into
/var/lib/pgsql/data/pg_log/postgresql-Wed.log
but don't see anything alarming there.

WARNING:  nonstandard use of \\ in a string literal at character 220
HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
WARNING:  nonstandard use of \\ in a string literal at character 142
HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
WARNING:  nonstandard use of \\ in a string literal at character 204
HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
etc.

Does anybody please have any advice?

Do I have to apply any shared memory/etc. settings
to CentOS Linux system? When I used OpenBSD some
years ago, there where specific instructions to apply to
its kernel/sysctl.conf in the postgresql port readme.

Thank you
Alex

Start by reading 
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server and 
http://www.postgresql.org/docs/current/static/kernel-resources.html.


It's impossible to give specific advice given the information provided. 
With persistent connections, you will likely see lots of PostgreSQL 
processes since there will be one per established connection. But are 
they idle or doing something? And if they are doing something, is the 
bottleneck disk, memory or CPU?


As to general advice, if you are limiting Apache connections to 300, I'm 
not sure why you need 512 max connections to the DB unless there are a 
lot of simultaneous non-web processes hitting the DB.


I doubt that most of those connections are simultaneously in use. A
connection pooler like pgbouncer may be in your future. Pgbouncer is
pretty easy to set up and maintain.


If most of the queries are simple reads that can be cached, something 
like memcached can provide huge benefits.


Your shared_buffers looks way too low. Read the Tuning Guide noted above.
You will probably want something closer to 1GB (though probably a bit
less due to the memory use of Apache, OS, etc.). The kernel-resources
article has info on adjusting the kernel settings.


Bad query design or need for indexes can be non-issues at low-load but 
damaging under high-use. Enable more query logging - especially log 
queries that exceed some threshold. You might start at a couple seconds 
and adjust from there. See log_min_duration_statement.
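
For example, starting at two seconds (a postgresql.conf sketch; the
threshold is in milliseconds):

  log_min_duration_statement = 2000   # log any statement slower than 2s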


Cheers,
Steve




[GENERAL] Access to postgres conversion

2011-05-25 Thread akp geek
Dear all -

I would like to know if any one has migrated database from MS
access to Postgres . We use postgres 9.0.2 on solaris . Are there any open
source tools that you have used to do this task. Can you please share your
experiences ?


Regards


Re: [GENERAL] PostgreSQL 8.4.8 bringing my website down every evening

2011-05-25 Thread Alexander Farber
Thank you for your replies,

I've reverted httpd.conf to

  StartServers   8
  MinSpareServers5
  MaxSpareServers   20
  ServerLimit  256
  MaxClients   256

and have changed postgresql.conf to:

  shared_buffers = 512MB
  # for Apache + my game daemon + cron jobs
  max_connections = 260

Do you think I need to reconfigure CentOS 5.6
for the bigger shared memory too, or
will it adapt by itself?

I'm still studying the docs.

Also I've installed the pgbouncer package and
will read up on it too, but I already wonder what
its behaviour is if it is configured for 100 connections
and a 101st comes in?

; total number of clients that can connect
max_client_conn = 100
default_pool_size = 20

Regards
Alex

# rpm -qa|grep -i pg
perl-DBD-Pg-1.49-2.el5_3.1
pgbouncer-1.3.4-1.rhel5
pgdg-centos-8.4-2
php53-pgsql-5.3.3-1.el5_6.1
postgresql-8.4.8-1PGDG.rhel5
postgresql-devel-8.4.8-1PGDG.rhel5
postgresql-libs-8.4.8-1PGDG.rhel5
postgresql-server-8.4.8-1PGDG.rhel5

# uname -rm
2.6.18-238.9.1.el5 x86_64



Re: [GENERAL] PostgreSQL 8.4.8 bringing my website down every evening

2011-05-25 Thread Alexander Farber
# sysctl kernel.shmmax
kernel.shmmax = 68719476736
# sysctl kernel.shmall
kernel.shmall = 4294967296


On Wed, May 25, 2011 at 9:54 PM, Alexander Farber
alexander.far...@gmail.com wrote:
  shared_buffers = 512MB

 Do you think I need to reconfigure CentOS 5.6
 for the bigger shared memory too or
 will it adapt by itself?



Re: [GENERAL] Dumping schemas using pg_dump without extensions (9.1 Beta)

2011-05-25 Thread Tom Lane
Adrian Schreyer ams...@cam.ac.uk writes:
 I am trying to backup a single schema only, without any other database
 objects such as extensions. pg_dump however always includes
 extensions, even with the --schema=schema option specified (see below
 for example). Is there a workaround for this?

I've applied a patch to fix this behavior.  Thanks for the report!

regards, tom lane



Re: [GENERAL] Access to postgres conversion

2011-05-25 Thread John R Pierce

On 05/25/11 12:42 PM, akp geek wrote:

Dear all -

I would like to know if any one has migrated database from 
MS access to Postgres . We use postgres 9.0.2 on solaris . Are there 
any open source tools that you have used to do this task. Can you 
please share your experiences ?


how many tables is this database?  is there more to it than just
tables?  (Access isn't really a database, it's a data-centric rapid
application development system).  postgres won't do the reports, forms,
macros, etc (ok, it has user-definable procedures/functions, but they
aren't in vbasic, and they don't work anything like access programs do).

if it's just a few tables, it might just be easiest to dump those tables
as CSV files, then import them one by one into equivalent tables in
postgres via COPY FROM.
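
a minimal sketch of that per-table step (table layout and file path are
invented for the example; note that COPY FROM a file runs on the server
and needs superuser, \copy in psql being the client-side equivalent):

  -- create a matching table, then load the CSV exported from Access
  CREATE TABLE customers (id integer, name text, city text);
  COPY customers FROM '/tmp/customers.csv' WITH CSV HEADER;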






Re: [GENERAL] Access to postgres conversion

2011-05-25 Thread akp geek
It's 10 tables, that's all. No reports. I will follow your suggestion.
Thanks for the help.

On Wed, May 25, 2011 at 4:25 PM, John R Pierce pie...@hogranch.com wrote:

 On 05/25/11 12:42 PM, akp geek wrote:

 Dear all -

I would like to know if any one has migrated database from MS
 access to Postgres . We use postgres 9.0.2 on solaris . Are there any open
 source tools that you have used to do this task. Can you please share your
 experiences ?


 how many tables is this database?  is there more to it than just tables?
  (Access isn't really a database, its a data-centric rapid application
 development system).  postgres won't do the reports, forms, macros, etc (ok,
 it has user definable procedures/functions, but they aren't in vbasic, and
 they don't work anything like access programs do).

 if its just a few tables, it might just be easiest to dump those tables as
 CSV files, then import them one by one into equivalent tables in postgres
 via COPY FROM






Re: [GENERAL] PostgreSQL 8.4.8 bringing my website down every evening

2011-05-25 Thread Tomas Vondra
Dne 25.5.2011 21:54, Alexander Farber napsal(a):
 Thank you for your replies,
 
 I've reverted httpd.conf to
 
   StartServers   8
   MinSpareServers5
   MaxSpareServers   20
   ServerLimit  256
   MaxClients   256
 
 and have changed postgresql.conf to:
 
   shared_buffers = 512MB
   # for Apache + my game daemon + cron jobs
   max_connections = 260
 
 Do you think I need to reconfigure CentOS 5.6
 for the bigger shared memory too or
 will it adapt by itself?

If the database started up fine, then you probably don't need to mess
with the kernel parameters. There are two limits here

SHMMAX - max size of a single memory segment
SHMALL - max size of all the memory segments (sum)

So if you've increased the shared buffers and the database starts fine,
there's a slight chance that something else will fail because it needs
its own segment and the sum exceeds SHMALL.

But if everything works fine, it's probably fine. When something fails
you'll know where to look.

You can see the list of segments using ipcs -m

 I'm still studying the docs.
 
 Also I've installed the pgbouncer package and
 will read on it too, but I already wonder what is
 its behaviour if configured for 100 connections
 and a 101st comes in?

Say you have max_client_conn = 2 and pool_size = 1, and then three
clients come.

client 1: connects to pgbouncer, gets a db connection from the pool,
  starts a transaction and works

client 2: connects to pgbouncer, asks for a db connection but has to
  wait until client 1 finishes (because there's only 1
  connection in the pool)

client 3: can't connect to the pgbouncer, gets "ERROR:  no more
  connections allowed" (there's max_client_conn = 2)

So in your case the 101st client is rejected. But those connections are
much cheaper (compared to the real db connections), so you may create
more of them. So if you have ServerLimit=256, you may set
max_client_conn to 256 to handle the peak.

 ; total number of clients that can connect
 max_client_conn = 100
 default_pool_size = 20
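
Put together as a pgbouncer.ini sketch using the numbers from this thread
(256 to cover Apache's ServerLimit; the pool size is only a starting guess):

  ; clients pgbouncer itself will accept
  max_client_conn = 256
  ; real server connections kept per database/user pair
  default_pool_size = 25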

BTW as Steve Crawford already pointed out, it's impossible to give
reliable advice without more information. So it may happen that this
won't fix the actual cause.

You need to find out whether the number of connections really is the
problem. Maybe there's some poorly performing SQL that causes all this,
and fixing that one problem might solve all this.

Or maybe there's some concurrency issue (e.g. all the sessions updating
the same row). In that case the number of connections is a symptom
rather than a cause.

Are those connections idle? What does a vmstat / iostat show? Have you
enabled logging of slow queries?

Tomas



Re: [GENERAL] temp files getting me down

2011-05-25 Thread Merlin Moncure
On Wed, May 25, 2011 at 2:14 PM, Ben Chobot be...@silentmedia.com wrote:
 On May 25, 2011, at 9:31 AM, Ben Chobot wrote:

 I'm running 9.0.3, and recently started getting temp files being created. 
 This is a problem because it's making a bunch of dirty buffers that have to 
 be flushed to disk and my poor little disk isn't up to the task. I'm not 
 sure why though, because this is the explain verbose for the queries that 
 are creating them:


                                                                              
                                             QUERY PLAN
 ------------------------------------------------------------------------------------------------------
  Limit  (cost=0.00..15180.05 rows=4000 width=109) (actual time=159.462..174.694 rows=4000 loops=1)
    ->  Index Scan using pending_replication_items on replication_queue  (cost=0.00..37247114.20 rows=9814757 width=109) (actual time=159.459..169.061 rows=4000 loops=1)
  Total runtime: 177.437 ms
  (3 rows)


 My understanding with temp files is that they are generated when sorts need 
 to happen. But given that the index is doing all the sorting for me, I'm at 
 a loss. Why else might I get temp files?

 Just to close the loop, the problem was (apparently) due to table bloat. 
 After I clustered the table, the problems went away immediately. I'd still 
 like to understand what was happening, but at least my problem is solved.

are you sure this is the query that caused the temp files? can we see the query?

merlin



Re: [GENERAL] temp files getting me down

2011-05-25 Thread Ben Chobot
On May 25, 2011, at 2:57 PM, Merlin Moncure wrote:

 On Wed, May 25, 2011 at 2:14 PM, Ben Chobot be...@silentmedia.com wrote:
 On May 25, 2011, at 9:31 AM, Ben Chobot wrote:
 
 I'm running 9.0.3, and recently started getting temp files being created. 
 This is a problem because it's making a bunch of dirty buffers that have to 
 be flushed to disk and my poor little disk isn't up to the task. I'm not 
 sure why though, because this is the explain verbose for the queries that 
 are creating them:
 
 
 
                                             QUERY PLAN
 ------------------------------------------------------------------------------------------------------
  Limit  (cost=0.00..15180.05 rows=4000 width=109) (actual time=159.462..174.694 rows=4000 loops=1)
    ->  Index Scan using pending_replication_items on replication_queue  (cost=0.00..37247114.20 rows=9814757 width=109) (actual time=159.459..169.061 rows=4000 loops=1)
  Total runtime: 177.437 ms
  (3 rows)
 
 
 My understanding with temp files is that they are generated when sorts need 
 to happen. But given that the index is doing all the sorting for me, I'm at 
 a loss. Why else might I get temp files?
 
 Just to close the loop, the problem was (apparently) due to table bloat. 
 After I clustered the table, the problems went away immediately. I'd still 
 like to understand what was happening, but at least my problem is solved.
 
 are you sure this is the query that caused the temp files? can we see the 
 query?

Well, the query itself was calling a plpgsql function, and the function itself 
was doing:

DECLARE
    row formatted_replication_queue%ROWTYPE;
BEGIN
    for row in select * from formatted_replication_queue where
        distributor_id IS NULL AND
        ('{{%,%}}'::varchar[] @> ARRAY[source_site, dest_site]::varchar[] OR
         '{{%,%}}'::varchar[] @> ARRAY['%', dest_site]::varchar[] OR
         '{{%,%}}'::varchar[] @> ARRAY[source_site, '%']::varchar[] OR
         '{{%,%}}'::varchar[] @> ARRAY['%', '%']::varchar[])
        ORDER BY update_time ASC
        limit 4000
        for update
    LOOP
        UPDATE replication_queue SET distributor_id = 't32'
            WHERE filehash = row.filehash;
        RETURN NEXT row;
    END LOOP;
    RETURN;
END

Doing that select manually didn't seem to be causing the same issues. 
formatted_replication_queue is a simple view that reformats some columns but 
does no sorting.


Re: [GENERAL] full_page_writes makes no difference?

2011-05-25 Thread Jeff Davis
On Wed, 2011-05-04 at 00:17 -0400, Tian Luo wrote:
 So, nbytes should always be multiples of XLOG_BLCKSZ, which in the
 default case, is 8192.
 
 My question is, if it always writes full pages no matter whether
 full_page_writes is on or off, what is the difference?

Most I/O systems and filesystems can end up writing only part of a page
(in this case, 8192 bytes) in the event of a power failure, which is
called a "torn page". That causes problems for postgresql, because the
page is left as a mix of old and new data -- in other words, corrupt.

The solution is full page writes, which means that when a data page is
modified for the first time after a checkpoint, it logs the entire
contents of the page (except the free space) to WAL, and can use that as
a starting point during recovery. This results in extra WAL data for
safety, but it's unnecessary if your filesystem + IO system guarantee
that there will be no torn pages (and that's the only safe time to turn
it off).

So, to answer your question, the difference is that full_page_writes=off
means less total WAL data, which means fewer 8192-byte writes in the
long run (you have to test long enough to go through a checkpoint to see
this difference, however). PostgreSQL will never issue write() calls
with 17 bytes, or some other odd number, regardless of the
full_page_writes setting.

I can see how the name is slightly misleading, but it has to do with
whether to write this "extra information" to WAL (where the "extra
information" happens to be full data pages in this case); not whether
to write the WAL itself in full pages.
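
For what it's worth, a minimal sketch of checking and flipping the
setting -- it can be changed with a config reload, no restart needed:

SHOW full_page_writes;

-- in postgresql.conf, only if your storage stack guarantees no torn pages:
--   full_page_writes = off

-- then, as superuser:
SELECT pg_reload_conf();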

Regards,
Jeff Davis


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Access to postgres conversion

2011-05-25 Thread Tommy

Hi.

You could try "Access to PostgreSQL" from Bullzip.com.  I liked the dump 
file that this creates.  Small, neat, and it is free!


Tommy.


It's 10 tables, that's all. No reports. I will follow your 
suggestion. Thanks for the help.


On Wed, May 25, 2011 at 4:25 PM, John R Pierce pie...@hogranch.com wrote:


On 05/25/11 12:42 PM, akp geek wrote:

Dear all -

   I would like to know if any one has migrated
database from MS access to Postgres . We use postgres 9.0.2 on
solaris . Are there any open source tools that you have used
to do this task. Can you please share your experiences ?


how many tables is this database?  is there more to it than just
tables?  (Access isn't really a database, it's a data-centric rapid
application development system).  postgres won't do the reports,
forms, macros, etc (ok, it has user-definable
procedures/functions, but they aren't in vbasic, and they don't
work anything like access programs do).

if it's just a few tables, it might just be easiest to dump those
tables as CSV files, then import them one by one into equivalent
tables in postgres via COPY FROM.
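
for instance, a rough sketch (table and file names here are made up;
plain COPY reads a file on the *server* and needs superuser):

CREATE TABLE customers (id integer, name text);
COPY customers FROM '/tmp/customers.csv' WITH CSV HEADER;

or, for a client-side file, psql's \copy does the same thing over the
wire:

\copy customers from 'customers.csv' with csv header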



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general






[GENERAL] General Postgres performance tips when using ARRAY

2011-05-25 Thread bubba postgres
So, what are the gotchas around manipulating arrays in stored procs?
It seems reasonable that array_cat etc. would cause the creation of a new
array, but does mutating an existing array also create a copy?


Re: [GENERAL] General Postgres performance tips when using ARRAY

2011-05-25 Thread Merlin Moncure
On Wed, May 25, 2011 at 9:17 PM, bubba postgres
bubba.postg...@gmail.com wrote:

 So, what are the gotchas around manipulating arrays in stored procs?
 It seems reasonable that array_cat etc. would cause the creation of a new
 array, but does mutating an existing array also create a copy?

Never, ever, if at all possible, build arrays with array_cat, ||
operator, etc.   Try not to work with arrays iteratively.  It will be
very slow.  You have better options:

1. subquery array constructor:
array_var := array(select bar from foo where ...);

2. array_agg()
select into array_var array_agg(bar) from foo where ...

3. values array constructor:
array_var := array[1, 2, 3];

don't forget, in recent postgres, you can make arrays of composite
types as well, and also nest:
complex_type := array(select row(bar, array(select baz from bat where
..))  from foo);

For expanding arrays prefer unnest(), and check out the coming 9.1
foreach feature
(http://www.postgresql.org/docs/9.1/static/plpgsql-control-structures.html#PLPGSQL-FOREACH-ARRAY).
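
to make the difference concrete, a sketch (table foo with an int
column bar is made up for illustration):

-- slow: every || copies the whole array built so far, so building
-- n elements costs O(n^2)
create or replace function build_slow() returns int[] as $$
declare
  a int[] := '{}';
  r record;
begin
  for r in select bar from foo loop
    a := a || r.bar;
  end loop;
  return a;
end;
$$ language plpgsql;

-- fast: a single pass, no intermediate copies
create or replace function build_fast() returns int[] as $$
begin
  return array(select bar from foo);
end;
$$ language plpgsql;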

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Connecting to Postgres using Windows 7

2011-05-25 Thread Craig Ringer

On 05/25/2011 09:49 PM, Geoffrey Becker wrote:


When I try to connect, all I get is a "Server does not exist or access
is denied" error. I've tried configuring ODBC using odbcad32.exe as it
seems that is necessary on a 64-bit OS, but I'm not sure if that's even
the right way to go.


*which* odbcad32.exe?

Due to a nasty historical quirk Microsoft is stuck with for backward 
compat, there are both 64-bit and 32-bit versions of odbcad32.exe.


http://support.microsoft.com/kb/942976

http://robertoschiabel.wordpress.com/2008/02/28/windows-x64-32bit-odbc-vs-64bit-odbc/
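
From memory -- worth double-checking against those links -- the two
live at:

  %systemroot%\System32\odbcad32.exe    (the 64-bit version)
  %systemroot%\SysWOW64\odbcad32.exe    (the 32-bit version)

i.e. the 32-bit tool is the one in SysWOW64, which is exactly backwards
from what everyone expects.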

(Is this in the PgODBC documentation? It should be.)

--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] temp files getting me down

2011-05-25 Thread Merlin Moncure
On Wed, May 25, 2011 at 6:44 PM, Ben Chobot be...@silentmedia.com wrote:
 On May 25, 2011, at 2:57 PM, Merlin Moncure wrote:

 On Wed, May 25, 2011 at 2:14 PM, Ben Chobot be...@silentmedia.com wrote:
 On May 25, 2011, at 9:31 AM, Ben Chobot wrote:

 I'm running 9.0.3, and recently started getting temp files being created. 
 This is a problem because it's making a bunch of dirty buffers that have 
 to be flushed to disk and my poor little disk isn't up to the task. I'm 
 not sure why though, because this is the explain verbose for the queries 
 that are creating them:


                                                                            
    QUERY PLAN
 -
 Limit  (cost=0.00..15180.05 rows=4000 width=109) (actual 
 time=159.462..174.694 rows=4000 loops=1)
    ->  Index Scan using pending_replication_items on replication_queue  
 (cost=0.00..37247114.20 rows=9814757 width=109) (actual 
 time=159.459..169.061 rows=4000 loops=1)
 Total runtime: 177.437 ms
 (3 rows)


 My understanding with temp files is that they are generated when sorts 
 need to happen. But given that the index is doing all the sorting for me, 
 I'm at a loss. Why else might I get temp files?

 Just to close the loop, the problem was (apparently) due to table bloat. 
 After I clustered the table, the problems went away immediately. I'd still 
 like to understand what was happening, but at least my problem is solved.

 are you sure this is the query that caused the temp files? can we see the 
 query?

 Well, the query itself was calling a plpgsql function, and the function 
 itself was doing:

 DECLARE
        row formatted_replication_queue%ROWTYPE;
 BEGIN
        for row in select * from formatted_replication_queue where
        distributor_id IS NULL AND
        ('{{%,%}}'::varchar[] @> ARRAY[source_site, dest_site]::varchar[] OR
        '{{%,%}}'::varchar[] @> ARRAY['%', dest_site]::varchar[] OR
        '{{%,%}}'::varchar[] @> ARRAY[source_site, '%']::varchar[] OR
        '{{%,%}}'::varchar[] @> ARRAY['%', '%']::varchar[])
        ORDER BY update_time ASC
        limit 4000
        for update
        LOOP
                UPDATE replication_queue SET distributor_id = 't32' WHERE 
 filehash = row.filehash;
                RETURN NEXT row;
        END LOOP;
        RETURN;
 END

 Doing that select manually didn't seem to be causing the same issues. 
 formatted_replication_queue is a simple view that reformats some columns but 
 does no sorting.

um, right. how about for starters querying the view outside of the
function and see what plan we get...

also you still haven't posted the original query. one major gotcha
with plpgsql is the function executing more times than you think.
posting the calling query will help, or simple logging from inside the
function (raise notice, etc).
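
e.g. a minimal sketch: log_temp_files takes a size in kB (0 = log every
temp file) and can be set per-session by a superuser:

SET log_temp_files = 0;
-- now run the suspect query and watch the server log for lines like:
--   LOG:  temporary file: path "base/pgsql_tmp/...", size ...

-- and inside the function, something like:
-- RAISE NOTICE 'iteration at %', clock_timestamp();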

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL 8.4.8 bringing my website down every evening

2011-05-25 Thread Craig Ringer

On 05/26/2011 02:53 AM, t...@fuzzy.cz wrote:


Decrease the max_connections, use connection pooling if possible (e.g.
pgbouncer). Each connection represents a separate postgres process, so you
may get up to 512 processes, and that many active processes kill
performance.


... and this is why it'd be great to see pooling-by-default in Pg, be it 
integrated PgPool or something else. For every person making the effort 
to ask on the mailing list, how many give up and go away?


Yes, I know it's not exactly easy to integrate pooling, and that there 
are real disagreements about pooling vs admission control.


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] max_connections proposal

2011-05-25 Thread Craig Ringer
There might be a very cheap and simple way to help reduce the number of 
people running into problems because they set massive max_connections 
values that their server cannot cope with instead of using pooling.


In the default postgresql.conf, change:

max_connections = 100   # (change requires restart)
# Note:  Increasing max_connections costs ~400 bytes of shared memory
# per connection slot, plus lock space (see max_locks_per_transaction).

to:

max_connections = 100   # (change requires restart)
# WARNING: If you're about to increase max_connections above 100, you
# should probably be using a connection pool instead. See:
# http://wiki.postgresql.org/max_connections
#
# Note:  Increasing max_connections costs ~400 bytes of shared memory
# per connection slot, plus lock space (see max_locks_per_transaction).
#


... where wiki.postgresql.org/max_connections (which doesn't yet exist) 
explains the throughput costs of too many backends and the advantages of 
configuring a connection pool instead.


Sure, this somewhat contravenes the "users don't read - ever" principle, 
but we can hope that _some_ people will read a comment immediately 
beside the directive they're modifying.


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL 8.4.8 bringing my website down every evening

2011-05-25 Thread Merlin Moncure
On Wed, May 25, 2011 at 2:40 PM, Thom Brown t...@linux.com wrote:
 On 25 May 2011 18:58, Alexander Farber alexander.far...@gmail.com wrote:

 Hello fellow PostgreSQL-users,

 I run a Drupal 7 (+Facebook app) website
 with a multiplayer flash game and use
 postgresql-server-8.4.8-1PGDG.rhel5 +
 CentOS 5.6 64 bit on a Quad-Core/4GB machine.

 I generally like using PostgreSQL even though
 I'm not an experienced DB-user, but in recent
 weeks it has given me a lot of headaches, bringing
 my website to a halt every evening (when
 most players visit the website for a game).

 I think this is a result of having more users
 and of having written a few more statistics scripts
 for them (I use PHP with persistent connections;
 I use only local PostgreSQL-connections).

 I suspect if I could configure
 PostgreSQL accordingly, it would run ok again.

 During crashes, when/if I manage to ssh into
 my server, it is barely usable and I see lots
 of postmaster processes.

 I have the following settings in pg_hba.conf:

 local   all         all                               md5
 host    all         all         127.0.0.1/32          md5

 And the following changes in postgresql.conf:

 max_connections = 512
 shared_buffers = 32MB
 log_destination = 'stderr'
 log_directory = 'pg_log'
 log_filename = 'postgresql-%a.log'
 logging_collector = on
 log_rotation_age = 1d
 log_rotation_size = 0
 log_truncate_on_rotation = on

 My Apache httpd.conf:
 IfModule prefork.c
 StartServers       10
 MinSpareServers    12
 MaxSpareServers   50
 ServerLimit      300
 MaxClients       300
 MaxRequestsPerChild  4000
 /IfModule

 I look into
 /var/lib/pgsql/data/pg_log/postgresql-Wed.log
 but don't see anything alarming there.

 WARNING:  nonstandard use of \\ in a string literal at character 220
 HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
 WARNING:  nonstandard use of \\ in a string literal at character 142
 HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
 WARNING:  nonstandard use of \\ in a string literal at character 204
 HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
 etc.

 Does anybody please have any advice?

 Do I have to apply any shared memory/etc. settings
 to CentOS Linux system? When I used OpenBSD some
 years ago, there were specific instructions to apply to
 its kernel/sysctl.conf in the postgresql port readme.

 Well, your shared_buffers setting is likely to be far too low.  How much memory do
 you have available in your system?

I doubt this will help.  For many systems, most even, especially those
not doing a lot of writing, the number of shared buffers is
irrelevant.   The first step to solving the problem is determining
what the problem is.

during high load:
1. cpu bound? check cpu usage in top
2. i/o bound? check wait% in top
3. scaling issues? # active connections over 20 or so can be
dangerous.  consider installing a pooler (my favorite is pgbouncer).
also monitor vmstat for context switches. (a quick check of active
connections is sketched below.)
4. lousy queries? enable log_min_duration_statement and take note
of queries running over 20-50ms
5. something else? when are your backups running?  what else is
happening at that time?
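
for #3, a rough sketch of checking active connections during a slowdown
(this uses the pre-9.2 pg_stat_activity layout, where the query text
column is current_query):

SELECT count(*) AS total,
       sum(CASE WHEN current_query <> '<IDLE>' THEN 1 ELSE 0 END) AS active
FROM pg_stat_activity;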

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] max_connections proposal

2011-05-25 Thread Merlin Moncure
On Wed, May 25, 2011 at 10:58 PM, Craig Ringer
cr...@postnewspapers.com.au wrote:
 There might be a very cheap and simple way to help reduce the number of
 people running into problems because they set massive max_connections values
 that their server cannot cope with instead of using pooling.

 In the default postgresql.conf, change:

 max_connections = 100                   # (change requires restart)
 # Note:  Increasing max_connections costs ~400 bytes of shared memory
 # per connection slot, plus lock space (see max_locks_per_transaction).

 to:

 max_connections = 100                   # (change requires restart)
 # WARNING: If you're about to increase max_connections above 100, you
 # should probably be using a connection pool instead. See:
 #     http://wiki.postgresql.org/max_connections
 #
 # Note:  Increasing max_connections costs ~400 bytes of shared memory
 # per connection slot, plus lock space (see max_locks_per_transaction).
 #


 ... where wiki.postgresql.org/max_connections (which doesn't yet exist)
 explains the throughput costs of too many backends and the advantages of
 configuring a connection pool instead.

 Sure, this somewhat contravenes the users don't read - ever principle, but
 we can hope that _some_ people will read a comment immediately beside the
 directive they're modifying.

+1 on this idea, although I'm not so sure it's a good idea to point to
the wiki.  Also, all other .conf explanation is in the standard docs,
so maybe this should be too.

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] temp files getting me down

2011-05-25 Thread Ben Chobot

On May 25, 2011, at 7:36 PM, Merlin Moncure wrote:

 On Wed, May 25, 2011 at 6:44 PM, Ben Chobot be...@silentmedia.com wrote:
 
 
 Well, the query itself was calling a plpgsql function, and the function 
 itself was doing:
 
 DECLARE
row formatted_replication_queue%ROWTYPE;
 BEGIN
for row in select * from formatted_replication_queue where
distributor_id IS NULL AND
 ('{{%,%}}'::varchar[] @> ARRAY[source_site, dest_site]::varchar[] OR
 '{{%,%}}'::varchar[] @> ARRAY['%', dest_site]::varchar[] OR
 '{{%,%}}'::varchar[] @> ARRAY[source_site, '%']::varchar[] OR
 '{{%,%}}'::varchar[] @> ARRAY['%', '%']::varchar[])
ORDER BY update_time ASC
limit 4000
for update
LOOP
UPDATE replication_queue SET distributor_id = 't32' WHERE 
 filehash = row.filehash;
RETURN NEXT row;
END LOOP;
RETURN;
 END
 
 Doing that select manually didn't seem to be causing the same issues. 
 formatted_replication_queue is a simple view that reformats some columns but 
 does no sorting.
 
 um, right. how about for starters querying the view outside of the
 function and see what plan we get...

Sorry, I thought I'd mentioned that I had run this select outside the function 
but was unable to replicate the temp file creation that way. That's how I got 
the explain analyze plan. 

 also you still haven't posted the original query. one major gotcha
 with plpgsql is the function executing more times than you think.
 posting the calling query will help, or simple logging from inside the
 function (raise notice, etc).

Well, the calling query was merely select * from f(4000,'t32'); It wasn't in a 
join, and the args were constants. It's a simple "give me more work from the 
work queue" function.

I really don't think it was being called more than I thought, because I saw 
each query was taking about 3 minutes. During those 3 minutes, I observed a 
temp file building up. Then it would go away right as the query finished. 

I'll be lax about staying on top of bloat for this database and if it returns 
to previous levels of bloat, perhaps the problem will return. 
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] OHI to Sponsor PG West 2011

2011-05-25 Thread Andy Lurie
May 24, 2011

Open Hosting to Sponsor PG West 2011 Conference

PRESS RELEASE – Open Hosting Inc., Vienna, VA: Open Hosting Inc.
(“OHI”) announces the sponsorship of PG West 2011.  The PostgreSQL
Conference West will be held on September 27-30th at the San Jose (CA)
Convention Center.  PostgreSQL, with over 15 years of active
development, is the most advanced open source database system.

“PG West 2011 offers the entire Postgres community the opportunity to
come together to collaborate and share in the latest product
enhancements. OHI is committed to supporting the global community of
PostgreSQL developers,” said Dave Cummings, CEO of Open
Hosting Inc.

“PG West 2011 is very pleased to have Open Hosting Inc. as a sponsor
again.  The PostgreSQL community appreciates the continued support of
organizations like OHI,” says Joshua Drake, Conference Organizer.

For more information and registration information, visit
https://www.postgresqlconference.org/

Open Hosting is a trusted and reliable provider of cloud computing
solutions based in Vienna, VA.  Since 2003, we’ve been pioneers in our
field and partners in innovation for developers and businesses alike.
Clients worldwide trust us to store, host and manage their cloud
hosting solutions because of our exceptional functionality, technical
expertise and our outstanding support and performance.
- END -



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Midpoint between two long/lat points? (earthdistance?)

2011-05-25 Thread Merlin Moncure
On Wed, May 25, 2011 at 12:47 PM, Carlo Stonebanks
stonec.regis...@sympatico.ca wrote:
 I need to calculate the long/lat midpoint of a line demarcated by two
 long/lat points.



 The points will be very close, but there is the 180 degree problem to
 consider, so a simple average won’t work.



 Does anyone know of a function or have a formula that will work using geo
 long/lat values? I don’t see anything obvious in the earthdistance module.

Converted from the javascript here: http://en.wikipedia.org/wiki/Atan2

btw i'm not a huge fan of the earthdistance module either -- it's easier to
just rig functions to do what you want.

merlin

create or replace function midpoint(
  lat1 float8,
  lon1 float8,
  lat2 float8,
  lon2 float8,
  lat_mid out float8,
  lon_mid out float8) returns record as
$$
  select
    -- great-circle midpoint formula; 57.2957795 = degrees per radian
    atan2(sin(lat1) + sin(lat2),
          sqrt((cos(lat1) + bx) * (cos(lat1) + bx) + by * by)) * 57.2957795,
    (lon1 + atan2(by, cos(lat1) + bx)) * 57.2957795
  from
  (
    select
      lat1,
      lat2,
      lon1,
      cos(lat2) * cos(dlon) as bx,
      cos(lat2) * sin(dlon) as by
    from
    (
      select
        $1 * 0.0174532925 as lat1,   -- 0.0174532925 = radians per degree
        $3 * 0.0174532925 as lat2,
        $2 * 0.0174532925 as lon1,
        ($4 - $2) * 0.0174532925 as dlon
    ) q
  ) q;
$$ language sql immutable;
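
quick sanity check -- two points straddling the 180th meridian, where a
naive average of the longitudes would wrongly give 0:

select * from midpoint(0, 179, 0, -179);

 lat_mid | lon_mid
---------+---------
       0 |     180

(give or take float rounding.)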

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general