Re: [GENERAL] Overlapping ranges

2014-06-19 Thread Alban Hertroys
On 19 Jun 2014, at 1:47, Jason Long mailing.li...@octgsoftware.com wrote:

 I have a large table of access logs to an application.  
 
 What I want is to find all rows that overlap in startdate and enddate with any
 other rows.
 
 The query below seems to work, but does not finish unless I specify a
 single id.  
 
 select distinct a1.id
 from t_access a1, 
t_access a2 
 where tstzrange(a1.startdate, a1.enddate) &&
  tstzrange(a2.startdate, a2.enddate)

You’re comparing overlapping records twice there; you compare all records in a1 
to all records in a2. You’ll want to skip the records that you already compared.
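
A minimal sketch of what that could look like, assuming id is a unique key on t_access (the extra predicate is an assumption, not part of the original query):

    select distinct a1.id
    from t_access a1,
         t_access a2
    where a1.id <> a2.id
      and tstzrange(a1.startdate, a1.enddate) &&
          tstzrange(a2.startdate, a2.enddate);

Using a1.id < a2.id instead of <> compares each pair only once, but then only one id from each overlapping pair is returned.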

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Insert query hangs what could be the reason

2014-06-19 Thread M Tarkeshwar Rao
Hi,

An insert query hangs; what could be the reason, and is there any way to find out?
Is there any timeout feature for a query that can be set at the client or server end?

Regards
Tarkeshwar


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Best backup strategy for production systems

2014-06-19 Thread Oliver
Hi,
thank you very much for your reply.
OK, I've read the official documentation about backup again, slowly this time ;-)
Is it correct to use the same location for archived WAL files and base
backups? It will be on a different filesystem from $PGDATA.
Can OmniPITR be configured without having hot_standby? I have PostgreSQL
configured with archiving, writing WAL files to a different
filesystem/path.
About the many WAL files generated: reading the documentation, I think I made
an error:

The archive command is only invoked on completed WAL segments. Hence, if
your server generates only little WAL traffic (or has slack periods where
it does so), there could be a long delay between the completion of a
transaction and its safe recording in archive storage. To put a limit on
how old unarchived data can be, you can set archive_timeout
(http://www.postgresql.org/docs/current/static/runtime-config-wal.html#GUC-ARCHIVE-TIMEOUT)
to force the server to switch to a new WAL segment file at least that often.
Note that archived files that are archived early due to a forced switch are
still the same length as completely full files. It is therefore unwise to
set a very short archive_timeout — it will bloat your archive storage.
archive_timeout settings of a minute or so are usually reasonable.

So I modified my archive_timeout parameter to 60, and I understand now that
it is creating a 16MB WAL file every minute, correct? Even when the segment
isn't full (because there is no activity in the database), it will create a
16MB WAL file every minute, and because of that my archiving filesystem
filled up quickly. Correct? I've now set the parameter back to its original
value, 0, so it is disabled.
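
For reference, a minimal postgresql.conf sketch of the trade-off described above; the gzip-style archive_command and the archive path are only illustrative assumptions, not the poster's actual setup:

    archive_mode = on
    archive_timeout = 60     # force a segment switch at most every 60 seconds; 0 disables forced switches
    archive_command = 'gzip < %p > /mnt/wal_archive/%f.gz'   # %p = path of the segment, %f = its file name
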
About WAL files and their archived copies: I must delete both manually, right?
There isn't any option in postgresql.conf to automatically delete WAL files
older than a given age, is there? (Apart from archive_command.) Do you use
Linux? Could you pass me the archive_command or script that you use for
copying/gzipping the files?
Thanks beforehand.

Cheers...


2014-06-17 14:52 GMT+01:00 François Beausoleil franc...@teksol.info:

 Hi!

 On 2014-06-17 at 08:31, Oliver ofab...@gmail.com wrote:

  Hi,
  I'm a newbie with PostgreSQL. I've set up my first PostgreSQL instance;
 it is empty now, with only the default postgres DB.
  It is under Linux, with 2 filesystems, one for data and another for
 archiving (I've enabled archiving as it will be for production).
  Could someone recommend a strategy for backups, scripts and so on?
  Can a base backup be done with the system up (postgres running)?
  Would it be OK if I do a base backup each week and an archive backup each
 day?
  As I've not configured backups (and archive deletion), I've had my
 first problem: my archiving filesystem (FS) is full and the
 archiver process reports a failure copying the last WAL file (normal, as the
 archiving FS is full).
  Please recommend what I should do now. Should I create another
 network FS for base backups and archive backups? Once I have my first
 base backup, could I then delete the archived files?
  My archiving FS has 20GB; I don't understand how, on a system without
 load (it will be for production, but it has no databases yet, only
 postgres), it filled the FS in a few days. Is that normal?
  Thanks beforehand.

 Welcome to PostgreSQL!

 The PostgreSQL manual has a great section on backup and restore:
 http://www.postgresql.org/docs/current/static/backup.html

 I found value in « Instant PostgreSQL Backup and Restore How-To » at
 http://www.packtpub.com/how-to-postgresql-backup-and-restore/book

 Regarding your questions:

 * Yes, base backups can be made while the server is up and running.
 PostgreSQL has a tool named pg_basebackup to do just that
 http://www.postgresql.org/docs/current/static/app-pgbasebackup.html. I
 personally use OmniPITR to handle my base backups and continuous archiving
 https://github.com/omniti-labs/omnipitr . There also exists WAL-E
 https://github.com/wal-e/wal-e which backs up your data to S3 / Rackspace
 CloudFiles automatically.

 * Your WAL files are of no value once you have a new base backup: the new
 base backup includes all previous WAL files. You can think of a base backup
 as a snapshot. WAL files describe changes to the last snapshot. Depending
 on your rate of change, you can delete obsolete WAL files that are older
 than « a few days » than the last base backup. I personally keep 3 weeks of
 WAL files, 2 weeks of base backups.

 * The vacuum daemon will vacuum databases regularly, and checkpoints will
 also occur on a schedule, even on a system without activity. Those
 processes will generate some amount of WAL archives. WAL archives compress
 very well: 16MB to 4MB is very typical on my system.

 * My database is too big to do pg_dump (3 TiB), so I don't, but I have
 weekly base backups, plus the WAL archives which I keep for three weeks.

 Hope that helps!
 François Beausoleil




Re: [GENERAL] Insert query hangs what could be the reason

2014-06-19 Thread dinesh kumar
On Thu, Jun 19, 2014 at 1:59 PM, M Tarkeshwar Rao 
m.tarkeshwar@ericsson.com wrote:

 Hi,

 Insert query hangs what could be the reason. Is there any way to find out?
 Any timeout feature there with query which can be set at client or server
 end?


It might be due to concurrent primary key/unique key modifications. Try
enabling log_lock_waits in postgresql.conf, which gives you more information
about locks.
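
A minimal postgresql.conf sketch of the setting being suggested; deadlock_timeout is shown only because it controls when the lock-wait message fires, and the values are illustrative:

    log_lock_waits = on      # log a message whenever a session waits on a lock longer than deadlock_timeout
    deadlock_timeout = 1s    # threshold for deadlock detection and for the lock-wait log message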

Regards,
Dinesh
manojadinesh.blogspot.com


 Regards
 Tarkeshwar


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] pg_restore: custom archiver unexpected end of file , ERROR: missing data for column

2014-06-19 Thread Khangelani Gama
 Adrian, you might be right. I have just tried pg_restore on the same
 server (8.4.2), and I get the same errors related to XML. Can someone
 advise whether the following XML file, as an example, is in the correct state or
 not:

 Hi all

I have a question: why is it that pg_dump does not fail when dumping the
database, but the restore of that dump fails?
I am dumping the database from one server and restoring it on the same server.
Since there are data issues inside the database, why is pg_dump not
failing as well?
This is PostgreSQL 8.4.2.

Please assist. This is frustrating; the application developers need answers
from us.


Regards,
Khangelani





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_restore: custom archiver unexpected end of file , ERROR: missing data for column

2014-06-19 Thread Alban Hertroys
On 19 June 2014 13:11, Khangelani Gama kg...@argility.com wrote:
 Adrian you might be right I have just tried to pg_restore on the same
 server(8.4.2), I get the same errors related to xml. Can someone
 advise if the following XML file as an example is in the correct state or
 not:

  Hi all

 I have a question, why is that pg_dump does not fail when dumping the
 database but only fails when restoring the database?
 Dumping the database from same server and restoring it on the same server.
 Since there is data issues inside the database why is the pg_dump not
 failing as well?

Because then you wouldn't be able to get your data out of your
database once some corruption occurred. You would be forced to fix the
issue on your live database.

Now you can edit the dump and attempt to restore it until it succeeds
on a different system, after which you know for certain that your data
matches at least your integrity constraints.

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_restore: custom archiver unexpected end of file , ERROR: missing data for column

2014-06-19 Thread Khangelani Gama
-Original Message-
From: Alban Hertroys [mailto:haram...@gmail.com]
Sent: Thursday, June 19, 2014 1:54 PM
To: Khangelani Gama
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg_restore: custom archiver unexpected end of file ,
ERROR: missing data for column

On 19 June 2014 13:11, Khangelani Gama kg...@argility.com wrote:
 Adrian you might be right I have just tried to pg_restore on the same
 server(8.4.2), I get the same errors related to xml. Can someone
 advise if the following XML file as an example is in the correct
 state or
 not:

  Hi all

 I have a question, why is that pg_dump does not fail when dumping the
 database but only fails when restoring the database?
 Dumping the database from same server and restoring it on the same server.
 Since there is data issues inside the database why is the pg_dump not
 failing as well?

Because then you wouldn't be able to get your data out of your database once
some corruption occurred. You would be forced to fix the issue on your live
database.

Now you can edit the dump and attempt to restore it until it succeeds on a
different system, after which you know for certain that your data matches at
least your integrity constraints.

That makes sense to me, thanks. Is there an easier or better way to
edit the binary dump file? For example, how can I fix a certain row or column of
a pg_dump -Fc file?  I am sorry, I have never done this before... please
give me a clue on how I can do it.





--
If you can't see the forest for the trees, Cut the trees and you'll see
there is no forest.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_restore: custom archiver unexpected end of file , ERROR: missing data for column

2014-06-19 Thread Adrian Klaver

On 06/19/2014 05:03 AM, Khangelani Gama wrote:

-Original Message-
From: Alban Hertroys [mailto:haram...@gmail.com]
Sent: Thursday, June 19, 2014 1:54 PM
To: Khangelani Gama
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg_restore: custom archiver unexpected end of file ,
ERROR: missing data for column

On 19 June 2014 13:11, Khangelani Gama kg...@argility.com wrote:

Adrian you might be right I have just tried to pg_restore on the same
server(8.4.2), I get the same errors related to xml. Can someone
advise if the following XML file as an example is in the correct
state or
not:


  Hi all

I have a question, why is that pg_dump does not fail when dumping the
database but only fails when restoring the database?
Dumping the database from same server and restoring it on the same server.
Since there is data issues inside the database why is the pg_dump not
failing as well?


Because then you wouldn't be able to get your data out of your database once
some corruption occurred. You would be forced to fix the issue on your live
database.

Now you can edit the dump and attempt to restore it until it succeeds on a
different system, after which you know for certain that your data matches at
least your integrity constraints.

That makes sense to me, thanks Is there is an easier or better way to
edit the binary dump file? Like how I can I fix a certain row or a column of
pg_dump -Fc file?  I am sorry I have never done this beforeplease
give a clue on how I can do it.



You can think of the binary dump as a 'frozen' version of your database. 
Generally you restore to another database, but it is also possible to 
restore to a text file:


http://www.postgresql.org/docs/9.3/interactive/app-pgrestore.html

So if you use:

-f filename
--file=filename

Specify output file for generated script, or for the listing when 
used with -l. Default is the standard output.



instead of a database name you will create a text version of the 
database dump. A word of caution, the text based version will be 
considerably bigger than the binary compressed version. As I recall you 
said the database in question is large so you might not want to convert 
the whole thing. The same switches that work when restoring to a database 
also apply when restoring to a file. So it is possible to select one or 
more tables and/or their data and restore only that. The text file that is 
generated can then be inspected. What cannot be done is reincorporating the 
changes back into the binary dump after making them. What you do after 
making the changes depends on the relationships between the changed 
tables and the other tables in the database. That would require more 
information.
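
For example, a hedged sketch of the kind of invocation described above (file and table names are assumptions):

    # write a plain-SQL version of just one table from a custom-format dump
    pg_restore -f problem_table.sql -t problem_table mydb.dump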


Another thought: since pg_dump uses COPY (unless you specify INSERTs), you 
might want to look at the COPY command and see if that is a better way 
of seeing what is being retrieved from the table:


http://www.postgresql.org/docs/9.3/interactive/sql-copy.html

IMPORTANT:

COPY with a file name instructs the PostgreSQL server to directly read 
from or write to a file. The file must be accessible to the server and 
the name must be specified from the viewpoint of the server. When 
PROGRAM is specified, the server executes the given command, and reads 
from its standard input, or writes to its standard output. The command 
must be specified from the viewpoint of the server, and be executable by 
the postgres user. When STDIN or STDOUT is specified, data is 
transmitted via the connection between the client and the server



With COPY you can use a query so it is possible to restrict the data you 
retrieve to a subset of the total.
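
A small sketch of that idea (table and column names are assumptions):

    -- runs on the server; the output file is written on the database server,
    -- so it must be a path the postgres user can write to
    COPY (SELECT * FROM problem_table WHERE id BETWEEN 1000 AND 2000)
        TO '/tmp/problem_rows.csv' WITH CSV HEADER;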

--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] pros/cons of using synchronous commit=off - AWS in particular

2014-06-19 Thread Larry J Prikockis
so from the much-loved 
https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server page, we 
have this:



PostgreSQL can only safely use a write cache if it has a battery backup. 
See WAL reliability for an essential introduction to this topic. No, 
really; go read that right now, it's vital to understand that if you 
want your database to work right.

...
For situations where a small amount of data loss is acceptable in return 
for a large boost in how many updates you can do to the database per 
second, consider switching synchronous commit off. This is particularly 
useful in the situation where you do not have a battery-backed write 
cache on your disk controller, because you could potentially get 
thousands of commits per second instead of just a few hundred.

...


My question is: does it make sense to switch synchronous commit off for 
EBS-backed EC2 instances running PostgreSQL at Amazon? Has anyone done 
any benchmarking of this change on AWS? Since EBS is a black box to us 
as end users, I have no clue what type of caching (volatile or not) may 
be going on behind the scenes.
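
For what it's worth, a minimal sketch of the setting in question; it can also be flipped per session or per transaction, which makes benchmarking the difference on a given EBS volume fairly easy (the values are illustrative, not a recommendation):

    # postgresql.conf: applies to all sessions
    synchronous_commit = off

    -- or per session / per transaction, e.g. from psql:
    SET synchronous_commit = off;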



--
Larry J. Prikockis
System Administrator
240-965-4597 (direct)
lprikoc...@vecna.com
http://www.vecna.com



Vecna Technologies, Inc.
6404 Ivy Lane Suite 500
Greenbelt, MD 20770
Phone: (240) 965-4500
Fax: (240) 547-6133

Better Technology, Better World (TM)
The contents of this message may be privileged and confidential. 
Therefore, if this message has been received in error, please delete it 
without reading it. Your receipt of this message is not intended to 
waive any applicable privilege. Please do not disseminate this message 
without the permission of the author.



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re : [GENERAL] Global value/global variable?

2014-06-19 Thread Edson Richter
Yes, it's similar, but consider that I'm using a connection pool, so I've no control over when the connections are established/destroyed. I think I need something with a scope broader than session...

Regards,
Edson Richter

-- Original message --
From: John McKown
Date: 19/06/2014 8h44
To: Edson Richter
Subject: Re: [GENERAL] Global value/global variable?

A psql session is shown below:

tsh009=# \set lpar '\'LIH1\''
tsh009=# select * from capped where lpar=:lpar limit 5;
 lpar |       started       |        ended
------+---------------------+---------------------
 LIH1 | 2014-06-09 21:57:13 | 2014-06-09 22:21:21
 LIH1 | 2014-06-09 22:42:12 | 2014-06-09 23:06:22
 LIH1 | 2014-06-09 23:22:12 | 2014-06-09 23:39:12
 LIH1 | 2014-06-09 23:52:12 | 2014-06-10 00:01:21
 LIH1 | 2014-06-10 01:07:11 | 2014-06-10 01:07:51
(5 rows)

Is this close to what you are looking for?

On Wed, Jun 18, 2014 at 9:50 PM, Edson Richter edsonrich...@hotmail.com wrote:

Is it possible to define a global value/variable in PostgreSQL in a way that I can use it in any query/view/function? For example, I have a connection string I use for dblink connections in several places (especially inside views).
Then, if I want to change the connection string, I have to change every view manually. If I could set a kind of global variable, I would just use it inside every view - then, at my application startup (or even at PostgreSQL startup, if I can set this string at the postgresql.conf level), I would set this global variable to point to the current string.
Your enlightenment will be really welcome.

Regards,
Edson

--
There is nothing more pleasant than traveling and meeting new people!
Genghis Khan

Maranatha!
John McKown




Re: [GENERAL] How to store fixed size images?

2014-06-19 Thread sunpeng
Thank you, Jeff!
peng


On Wed, Jun 18, 2014 at 12:15 AM, Jeff Janes jeff.ja...@gmail.com wrote:

 On Mon, Jun 16, 2014 at 6:10 PM, sunpeng blueva...@gmail.com wrote:
  We have many small (mostly fixed-size) images; how should we store them? There
  are two options:
  1. Store images in folders, managed by the OS file system, and only store the path in
  postgresql
  2. Store images as bytea in postgresql
  How do you usually store images?

 I use method 1, because the library/modules I made use of only
 implemented that method.  I'd prefer to use method 2, but not enough
 to write the code for doing it when there was existing code.  The
 problem with 1 is now you have two streams of data to back up, and the
 data itself is no longer transactional with its metadata.  A potential
 problem with 2 is that it will run into problems if any of the data is
 more than a small fraction of RAM.  So the images must be always
 small.  If they are just usually small, that isn't good enough.
 Another problem with bytea is the encoding issues.  Good up-to-date
 drivers will handle that for you (mostly) transparently, but there are
 lots of drivers that are not good, or not up-to-date.
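
 A minimal sketch of what option 2 might look like (table and column names are
 assumptions, not from the thread):

     CREATE TABLE image (
         id    serial PRIMARY KEY,
         name  text NOT NULL,
         data  bytea NOT NULL    -- the image bytes themselves
     );

     -- insert through a parameterized statement so the driver handles bytea encoding:
     -- INSERT INTO image (name, data) VALUES ($1, $2);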

 Cheers,

 Jeff



[GENERAL] python modul pre-import to avoid importing each time

2014-06-19 Thread Rémi Cura
Hey List,

I use plpython with postgis and 2 python modules (numpy and shapely).
Sadly, importing such modules in a plpython function is very slow (several
hundred milliseconds).

I also don't know if this overhead is applied each time the function is
called in the same session.

Is there a way to pre-import those modules once and for all,
so that the Python functions are faster?

Thanks,

Cheers,
Rémi-C


Re: [GENERAL] Overlapping ranges

2014-06-19 Thread Steve Crawford

On 06/18/2014 04:47 PM, Jason Long wrote:

I have a large table of access logs to an application.

What I want is to find all rows that overlap in startdate and enddate with any
other rows.

The query below seems to work, but does not finish unless I specify a
single id.

select distinct a1.id
from t_access a1,
 t_access a2
where tstzrange(a1.startdate, a1.enddate) &&
   tstzrange(a2.startdate, a2.enddate)





I'm not sure what you mean by "specify a single id", but a couple of comments.

1. This query will return all ids since there is no constraint to 
prevent a1 from finding the matching record in a2 which will, of course, 
overlap. You need to add something like ...and a1.id != a2.id...


2. Even without the above issue there is a great potential to have this 
query run a very long time - especially if the indexes are such that 
each row on a1 requires scanning all rows in a2. I'd test it on a small 
table to make sure it gives the results you want and read up on what 
indexes are most appropriate to help speed it up. (I can't help much 
here as I haven't yet experimented enough with indexing on range types.)


Cheers,
Steve



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to store fixed size images?

2014-06-19 Thread Andy Colson

On 06/16/2014 08:10 PM, sunpeng wrote:

We have many small (mostly fixed-size) images; how should we store them? There are 
two options:
1. Store images in folders, managed by the OS file system, and only store the path in 
postgresql
2. Store images as bytea in postgresql
How do you usually store images?
Thanks!

peng


I think it depends on how you are going to use them.  I, for example, have lots 
of images that are served on a web page; after benchmarks I found it was faster 
to store them on the filesystem and let Apache serve them directly.

-Andy


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_restore: custom archiver unexpected end of file , ERROR: missing data for column

2014-06-19 Thread Khangelani Gama
-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
Sent: Thursday, June 19, 2014 3:34 PM
To: Khangelani Gama; Alban Hertroys
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg_restore: custom archiver unexpected end of file ,
ERROR: missing data for column

On 06/19/2014 05:03 AM, Khangelani Gama wrote:
 -Original Message-
 From: Alban Hertroys [mailto:haram...@gmail.com]
 Sent: Thursday, June 19, 2014 1:54 PM
 To: Khangelani Gama
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] pg_restore: custom archiver unexpected end of
 file ,
 ERROR: missing data for column

 On 19 June 2014 13:11, Khangelani Gama kg...@argility.com wrote:
 Adrian you might be right I have just tried to pg_restore on the
 same server(8.4.2), I get the same errors related to xml. Can
 someone advise if the following XML file as an example is in the
 correct state or
 not:

   Hi all

 I have a question, why is that pg_dump does not fail when dumping the
 database but only fails when restoring the database?
 Dumping the database from same server and restoring it on the same
 server.
 Since there is data issues inside the database why is the pg_dump not
 failing as well?

 Because then you wouldn't be able to get your data out of your
 database once some corruption occurred. You would be forced to fix the
 issue on your live database.

 Now you can edit the dump and attempt to restore it until it succeeds
 on a different system, after which you know for certain that your data
 matches at least your integrity constraints.

 That makes sense to me, thanks Is there is an easier or better way
 to edit the binary dump file? Like how I can I fix a certain row or a
 column of pg_dump -Fc file?  I am sorry I have never done this
 beforeplease give a clue on how I can do it.


You can think of the binary dump as a 'frozen' version of your database.
Generally you restore to another database, but it is also possible to
restore to a text file:

http://www.postgresql.org/docs/9.3/interactive/app-pgrestore.html

So if you use:

-f filename
--file=filename

 Specify output file for generated script, or for the listing when used
with -l. Default is the standard output.


instead of a database name you will create a text version of the
database dump. A word of caution, the text based version will be
considerably bigger than the binary compressed version. As I recall you
said the database in question is large so you might not want to convert
the whole thing. The same switches that work when restoring to a database
also apply when restoring to a file. So it is possible to select one or
more tables and/or their data and restore only that. The text file that is
generated can then be inspected. What cannot be done is reincorporating the
changes back into the binary dump after making them. What you do after
making the changes depends on the relationships between the changed
tables and the other tables in the database. That would require more
information.

Another thought: since pg_dump uses COPY (unless you specify INSERTs), you
might want to look at the COPY command and see if that is a better way
of seeing what is being retrieved from the table:

http://www.postgresql.org/docs/9.3/interactive/sql-copy.html

IMPORTANT:

COPY with a file name instructs the PostgreSQL server to directly read
from or write to a file. The file must be accessible to the server and
the name must be specified from the viewpoint of the server. When
PROGRAM is specified, the server executes the given command, and reads
from its standard input, or writes to its standard output. The command
must be specified from the viewpoint of the server, and be executable by
the postgres user. When STDIN or STDOUT is specified, data is
transmitted via the connection between the client and the server


With COPY you can use a query so it is possible to restrict the data you
retrieve to a subset of the total.


Many thanks. I think I will begin with a pg_dump that uses INSERTs and see
what I get.

-- 
Adrian Klaver
adrian.kla...@aklaver.com





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to store fixed size images?

2014-06-19 Thread Alan Hodgson
On Thursday, June 19, 2014 10:21:56 AM Andy Colson wrote:
 On 06/16/2014 08:10 PM, sunpeng wrote:
   We have many small (mostly fixed-size) images; how should we store them? There
   are two options: 1. Store images in folders, managed by the OS file system,
   and only store the path in postgresql; 2. Store images as bytea in postgresql.
   How do you usually store images?
  Thanks!
  
  peng
 
 I think it depends on how you are going to use them.  I, for example, have
 lots of images that are served on a web page, after benchmarks I found it
 was faster to store them on filesystem and let apache serve them directly.
 
 -Andy

That will always be the (much) faster option. There's basically no CPU 
overhead: the web server can tell the kernel to copy the image from the 
filesystem cache directly into a network buffer, and off it goes. Even Apache 
can serve at line speed like that.

It's a lot easier to manage the images if they're in the database, though, 
especially if you run off multiple web servers. If CPU overhead is actually an 
issue, you can eliminate most of the speed hit by sticking a caching proxy 
server like Varnish in front of your site, or by offloading the image serving 
to 
a pass-through CDN. Just make sure images get a new URL path if they change 
content.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Best backup strategy for production systems

2014-06-19 Thread John R Pierce

On 6/19/2014 3:14 AM, Oliver wrote:
About wal files and archiving of them, I must delete both manually, 
isn't it? There isn't any option for automatically delete wal files 
with a given age in the postgresql.conf, isn't it? (Away of 
archive_command). Do you use Linux? Could you pass me your 
archive_command or script that you use for copying/gzipping the files?

Thanks beforehand.


you need ALL the archived WAL files since the start of the last base 
backup, or none of them are useful.




--
john r pierce  37N 122W
somewhere on the middle of the left coast



[GENERAL] \COPY from CSV ERROR: unterminated CSV quoted field

2014-06-19 Thread ogromm
Hi,

I get the error "unterminated CSV quoted field" when I try to copy text that
contains a newline, then \., then another newline.


For example:

CREATE TABLE test (text TEXT);
\COPY test FROM 'test.csv' WITH DELIMITER ',' CSV HEADER;

test.csv:
Text
some text
\.
more text


Can anyone please give me a clue how to get rid of this problem?

Thanks in advance.

ogromm



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/COPY-from-CSV-ERROR-unterminated-CSV-quoted-field-tp5807700.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] max_connections reached in postgres 9.3.3

2014-06-19 Thread Vasudevan, Ramya
Merlin, Thank you for the response.

On the waiting queries - when we reached 1500 connections, we had 759 
connections that were in an active state (116 COMMIT, 238 INSERT, 176 UPDATE, 57 
AUTHENTICATION, 133 BIND). These active INSERTs and UPDATEs also include the 
80-90 waiting sessions (we checked pg_stat_activity for the 'waiting' state, and 
pg_locks for granted=f). The blocking and the waiting queries were simple 
one-row updates/inserts/deletes. These shouldn't be blocking each other normally 
(unless, we think, there was a problem writing to the disk). Correct me if I am 
wrong.

1) log_min_duration_statement  
 We have it set to 200ms and are capturing all slow queries.

2) Detailed system performance diagnostics during the load event. We need to know 
precise values for user, system, iowait.
 First occurrence of the 'max_connections reached' problem: it seems to have 
started about 10:40.

At 10:42, we ran out of connections:
Jun 17 10:42:21 FATAL:  remaining connection slots are reserved for 
non-replication superuser connections

At 10:48:
 We dropped the disk cache (used 
http://www.evanhoffman.com/evan/2012/09/25/problems-with-postgresql-9-0-on-centos-6-3-with-a-2-6-32-series-kernel/
 as a reference). However, that didn't help.

vmstat information (with user cpu, system cpu, idle % and i/o wait):
---
At 10:40:19, user cpu started climbing, and cs (context switches) and in 
(interrupts) climbed up as well. No change really in swap, bi (bytes in), bo 
(bytes out)
procs ---memory-- ---swap-- -io --system-- -cpu-- ---timestamp---
 r  b  swpd  free     buff    cache      si  so  bi    bo     in     cs     us  sy  id  wa  st
 7  00  939888   145808  774588544   00   1524  77221177  32761   26  6   67  1   0   2014-06-17  10:40:06  PDT
14 00  951436   145816  774589952   00   1128  1396   22584  42339   34  6   59  1   0   2014-06-17  10:40:07  PDT
12 10  916132   145816  774591360   00   2036  1956   23132  35945   27  6   65  2   0   2014-06-17  10:40:08  PDT
 7  00  925444   145816  774593920   00   1604  3156   23722  78796   35  8   55  1   0   2014-06-17  10:40:09  PDT
15 20  946200   145816  774595840   00   1420  46024051  56856   35  8   57  1   0   2014-06-17  10:40:10  PDT
 6  20  979356   145816  774597376   00   1708  5016   23379  33239   31  6   61  1   0   2014-06-17  10:40:11  PDT
 6  00  1005784  145816  774615616   00   1980  6792   23584  36025   30  6   62  2   0   2014-06-17  10:40:12  PDT
 9  10  1004352  145816  774618880   00   1760  9784   24826  63949   35  7   57  2   0   2014-06-17  10:40:13  PDT
18 00  980156   145816  774621568   00   1260  9336   23897  69468   35  8   56  1   0   2014-06-17  10:40:14  PDT
 4  10  960672   145816  774639168   00   1516  6420   23235  38023   33  7   58  1   0   2014-06-17  10:40:15  PDT
19 00  944560   145816  774642048   00   1880  6940   23337  39658   32  6   60  2   0   2014-06-17  10:40:16  PDT
16 10  897036   145816  774661440   00   2072  6024   26300  52597   39  8   53  1   0   2014-06-17  10:40:17  PDT
16 50  880428   145816  774664192   00   1460  12652  22617  28171   32  5   62  1   0   2014-06-17  10:40:18  PDT
11 00  885412   145816  774666112   00   1840  7884   25048  45393   39  7   53  1   0   2014-06-17  10:40:19  PDT
* interrupts start climbing about here (in and cs columns)
24 10  900336   145816  774685120   00   2248  4804   26126  42707   42  7   50  1   0   2014-06-17  10:40:20  PDT
14 20  895120   145816  774687744   00   1856  22660  27721  57019   46  8   45  1   0   2014-06-17  10:40:21  PDT
27 00  881216   145816  774690496

Re: Re : [GENERAL] Global value/global variable?

2014-06-19 Thread Edson Richter
I've ended up creating a table to store database parameters, and by using a subquery 
I can achieve the global-variable effect.
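
A minimal sketch of that approach, assuming a parameter table and view along these lines (all names and the connection string are illustrative):

    CREATE TABLE app_parameter (
        name  text PRIMARY KEY,
        value text NOT NULL
    );

    INSERT INTO app_parameter VALUES ('dblink_connstr', 'host=10.0.0.5 dbname=remote user=app');

    -- any view can pick the value up with a scalar subquery:
    CREATE VIEW remote_data AS
        SELECT *
        FROM dblink((SELECT value FROM app_parameter WHERE name = 'dblink_connstr'),
                    'SELECT id, descr FROM remote_table')
             AS t(id integer, descr text);

Changing the row in app_parameter then changes the connection string for every view at once.
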
Thanks for your insight,
Edson.

From: edsonrich...@hotmail.com
To: pgsql-general@postgresql.org
Subject: Re : [GENERAL] Global value/global variable?
Date: Thu, 19 Jun 2014 14:39:45 +

Yes, it's similar, but consider that I'm using a connection pool, so I've no 
control over when the connections are established/destroyed. I think I need 
something with a scope broader than session...

Regards,
Edson Richter

-- Original message --
From: John McKown
Date: 19/06/2014 8h44
To: Edson Richter
Subject: Re: [GENERAL] Global value/global variable?

A psql session is shown below:

tsh009=# \set lpar '\'LIH1\''
tsh009=# select * from capped where lpar=:lpar limit 5;
 lpar |       started       |        ended
------+---------------------+---------------------
 LIH1 | 2014-06-09 21:57:13 | 2014-06-09 22:21:21
 LIH1 | 2014-06-09 22:42:12 | 2014-06-09 23:06:22
 LIH1 | 2014-06-09 23:22:12 | 2014-06-09 23:39:12
 LIH1 | 2014-06-09 23:52:12 | 2014-06-10 00:01:21
 LIH1 | 2014-06-10 01:07:11 | 2014-06-10 01:07:51
(5 rows)

Is this close to what you are looking for?


On Wed, Jun 18, 2014 at 9:50 PM, Edson Richter edsonrich...@hotmail.com wrote:




Is it possible to define a global value/variable in PostgreSQL in a way that I 
can use it in any query/view/function? For example, I have a connection 
string I use for dblink connections in several places (especially inside views).
Then, if I want to change the connection string, I have to change every view 
manually. If I could set a kind of global variable, I would just use it inside 
every view - then, at my application startup (or even at PostgreSQL startup, if 
I can set this string at the postgresql.conf level), I would set this global 
variable to point to the current string.

Your enlightenment will be really welcome.
Regards,
Edson
  


-- 
There is nothing more pleasant than traveling and meeting new people!
Genghis Khan

Maranatha! 

John McKown



  

Re: [GENERAL] max_connections reached in postgres 9.3.3

2014-06-19 Thread Kevin Grittner
Vasudevan, Ramya ramya.vasude...@classmates.com wrote:

 On the waiting queries - When  we reached 1500 connections, we
 had 759 connections that were in active state (116 COMMIT, 238
 INSERT, UPDATE 176, 57 AUTHENTICATION, 133 BIND. These active
 INSERTS and UPDATES also includes the 80-90 waiting sessions (We
 checked pg_stat_activity for 'waiting' state.  And pg_locks for
 granted=f). The blocking and the waiting queries were simple one
 row updates/inserts/deletes. These shouldn’t be blocking each
 other normally (unless, we think, there was a problem writing to
 the disk). Correct me if I am wrong.

You may want to consider this:

http://wiki.postgresql.org/wiki/Number_Of_Database_Connections

 [vmstat show up to 173111 context switches per second, with high
 cs rate corresponding to user CPU% between 64 and 82]

That usually means there is high contention for spinlocks,
potentially with processes getting suspended while holding
spinlocks, making things worse.

 2) Should we look into changing our I/O scheduler from CFQ to
 deadline?
 (http://www.cybertec.at/postgresql-linux-kernel-io-tuning/)

That is likely to help when disk I/O is the bottleneck, but your
problem now is you need to be using a connection pooler in
transaction mode, so that when more than some limit of transactions
are active (that limit probably being somewhere around twice the
core count on the machine), new requests to start a transaction are
queued.  You will see much better throughput and much better
latency if you can do that.
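
A minimal pgbouncer.ini sketch of that kind of setup (the numbers are illustrative, not recommendations for this particular workload):

    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    pool_mode = transaction      ; queue transactions instead of opening more backends
    default_pool_size = 32       ; roughly twice the core count, per the advice above
    max_client_conn = 2000       ; client connections pgbouncer will accept and queue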

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] max_connections reached in postgres 9.3.3

2014-06-19 Thread Merlin Moncure
On Thu, Jun 19, 2014 at 2:35 PM, Kevin Grittner kgri...@ymail.com wrote:
 Vasudevan, Ramya ramya.vasude...@classmates.com wrote:

 On the waiting queries - When  we reached 1500 connections, we
 had 759 connections that were in active state (116 COMMIT, 238
 INSERT, UPDATE 176, 57 AUTHENTICATION, 133 BIND. These active
 INSERTS and UPDATES also includes the 80-90 waiting sessions (We
 checked pg_stat_activity for 'waiting' state.  And pg_locks for
 granted=f). The blocking and the waiting queries were simple one
 row updates/inserts/deletes. These shouldn’t be blocking each
 other normally (unless, we think, there was a problem writing to
 the disk). Correct me if I am wrong.

 You may want to consider this:

 http://wiki.postgresql.org/wiki/Number_Of_Database_Connections

 [vmstat show up to 173111 context switches per second, with high
 cs rate corresponding to user CPU% between 64 and 82]

 That usually means there is high contention for spinlocks,
 potentially with processes getting suspended while holding
 spinlocks, making things worse.

Completely agree on both diagnosis and proposed solution -- load
profile (low iowait, high user%, high processes, high CS) is
symptomatic of too many processes trying to do things at once.  So
there may be some application caused driver of the problem or you are
hitting a contention point within postgres (a 'perf top' might give
clues to the latter).  Either way, once you are in this state you end
up with too many processes fighting for cpu and cache lines which
exaggerates the problem to the point you can classify it as an outage.

Be advised transaction mode pooling makes certain features of the
database difficult or impossible to use -- advisory locks (except xact
variants), server-side prepared statements, asynchronous
notification, WITH HOLD cursors and the like -- basically anything
scoped to the session.  For many workloads it is a high win though.
If for whatever reason this solution doesn't work, your other options
are to try to optimize whatever is causing the load event
(particularly if it's in your code -- careful query logging might give
some clues) or to simply upgrade hardware (more/faster cpu especially
for your case) to the point that even when highly loaded you're always
clearing queries at an acceptable rate.  The hardware approach has
some risk though -- if you have a contention problem it's not always a
given that adding cores will scale as well as you think.  Faster
core/bus is almost always a win, but obviously there's a very strict
limit to how far you can go.

merlin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] \COPY from CSV ERROR: unterminated CSV quoted field

2014-06-19 Thread Tom Lane
ogromm alex.schiller1...@web.de writes:
 I get the error unterminated CSV quoted field when I try to copy text with
 new line \. new line

 For example:

 CREATE TABLE test (text TEXT);
 \COPY test FROM 'test.csv' WITH DELIMITER ',' CSV HEADER;

 test.csv:
 Text
 some text
 \.
 more text

Yeah, psql's \copy command doesn't know anything about CSV mode, and
will take \. as an EOF marker even though it shouldn't.  If you need
to copy data like this I'd suggest using a backend-side COPY.
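
A minimal sketch of the backend-side equivalent, assuming a 9.0-or-later server (the path is an assumption; the file must live on, and be readable by, the database server):

    COPY test FROM '/path/on/server/test.csv' WITH (FORMAT csv, HEADER, DELIMITER ',');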

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] max_connections reached in postgres 9.3.3

2014-06-19 Thread Borislav Ivanov
Hi Ramya,

We experience exactly the same problem here at Bitbucket. From what I can
tell the major difference between your setup and ours is that you run 9.3.3
and we run 9.2.8. Our post for the issues is at
http://www.postgresql.org/message-id/CAJ+wzrb1qhz3xuoeSy5mo8i=E-5OO9Yvm6R+VxLBGaPB=ue...@mail.gmail.com.
We have shared collected information from the incident.

We have tried quite a few things and none of them have solved the problem
yet. Things we tried:
- Upgraded CentOS 6.4 to 6.5
- Upgrade kernel multiple times (current 2.6.32-431.17.1 )
- Changed IO scheduler from CFQ to deadline (front_merges = 0, read_expire
= 150, write_expire = 1500) - ext4 for both PGDATA and pg_xlog volumes
- Switched from ext4 to XFS for both PGDATA and pg_xlog (We thought we were
hitting the ext4 freeze bug)
- Upgraded Postgres from 9.2.5 to .6 then to .8
- Lowered max_connections from 3000 to 500
- Lowered pgbouncer default_pool_size
- Switched pgbouncer from session to transaction mode (It was suitable for
our use)
- Multiple App improvements which resulted in lowering the returned tuples
from 10M to 1.5M
- Fixed all slow queries.
- Truncated tables bigger than 25GB data, 25GB index
- Tried shared_buffers with 4GB, 6GB, 8GB (current 4GB)
- Tried work_mem with multiple values (current 384MB)
- Tried wal_buffers with 4MB and 8MB (current 8MB)
- Tried different approaches about checkpoints. None of them made a
difference for the actual problem.
- We've been always running with synchronous_commit = on
- We have 4 streaming replicas
- We do archive from the master using rsync in archive_command
- Switched the master to a different machine to triple-confirm no hardware
issues


Next we are adding a second layer of pgbouncer on the database host itself. Our
goal is to pass fewer connections to Postgres than the number of cores we
have. Many are suggesting that we lower our max_connections, and this thread
also says the same. So we are going to try that.

However, most people on our team think that the number of connections is
purely a symptom of the actual problem. We would love to be wrong about
this. But for now we feel the high number of connections contributes to
preserving the problem but is not actually what triggers it.

I am including some data from when the DB freezes at 400 connections and
another data set of the DB slowing down due to a load test with 500
connections where the performance is quite degraded but Postgres didn't
hang. I am also willing to provide more data on our side as needed.

#  vmstat -S M 1 20 from when the DB freezes. Site is down. We are maxed
out on conns. This incident is the same as what you see. In this case our
Postgres cannot recover. The only fix we have so far is to restart Postgres.

procs ---memory-- ---swap-- -io --system-- -cpu-
 r  b  swpd  free  buff  cache  si  so  bi  bo  in  cs  us  sy  id  wa  st
 51  0  0  123717152  12178500  43411  20   3  77   0  0
228  0  0  123702152  12178600  0   720   134874  295787  79  11  10  0  0
232  0  0  123700152  12178600  0  7128   138700  314263  79  12   9  0  0
274  0  0  123691152  12178600  0   980   133590  245257  81  11   8  0  0
380  0  0  123789152  12178600  0  1008   133792  258339  80  11   9  0  0
142  0  0  123839152  12178600  0  1328   139243  303489  78  12  10  0  0


#  vmstat -S M 1 20 from when the DB is stressed with real production load.
The site is slow but up. Performance is degraded but the machine and
Postgres can recover from this situation. Note that the machine was freshly
rebooted before running this test.

procs ---memory-- ---swap-- -io --system-- -cpu-
 r  b  swpd  free    buff  cache      si  so  bi     bo    in      cs      us  sy  id  wa  st
156 10  0  232743  45  167490042  2      80     80      3   0  96  1  0
260 11  0  232639  45  1682800  76704  2792  155594  223928  83  12  4  0  0
128 12  0  232552  45  1688700  59536  2896  153932  229723  82  12  6  1  0
267  6  0  232480  45  1693100  43144  2320  149501  200510  83  11  5  0  0
105  8  0  232408  45  1696500  38016  2208  148905  185142  84  11  5  0  0
112  1  0  232339  45  1700000  32640  2528  148390  183620  83  11  6  0  0


Again, if you want to see more of the data we've collected, see our post
http://www.postgresql.org/message-id/CAJ+wzrb1qhz3xuoeSy5mo8i=E-5OO9Yvm6R+VxLBGaPB=ue...@mail.gmail.com.
We have shared collected information from the incident.

I'll also keep you updated with any new things we find and also how
lowering the potential connections to our Postgres goes. We don't have
plans of testing on CentOS 5.8.


On 19 June 2014 12:56, Merlin Moncure mmonc...@gmail.com wrote:

 On Thu, Jun 19, 2014 at 2:35 PM, Kevin Grittner kgri...@ymail.com wrote:
  Vasudevan, Ramya ramya.vasude...@classmates.com wrote:
 
  On the 

Re: [GENERAL] Best backup strategy for production systems

2014-06-19 Thread Borislav Ivanov
If your database is relatively small, I would recommend
http://www.pgbarman.org/. It does binary backups and will take care of your
WAL files. The latest version of pgbarman can also take backups from a
slave using the pgespresso extension. Note that pgbarman runs over the
streaming replication protocol.

If your database is big, go for pg_basebackup and archive_command. You can
run this on a slave. pg_basebackup will give you the base backup, and during
restore you can use restore_command with recovery_target_time, for example,
to replay from the archived WAL files.
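
A hedged sketch of that approach (paths and the target time are assumptions):

    # take the base backup, e.g. from a slave, as a compressed tar
    pg_basebackup -D /backups/base_20140619 -Ft -z -P

    # recovery.conf in the restored data directory: replay archived WAL up to a point in time
    restore_command = 'cp /backups/wal_archive/%f "%p"'
    recovery_target_time = '2014-06-19 12:00:00'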


On 19 June 2014 11:28, John R Pierce pie...@hogranch.com wrote:

  On 6/19/2014 3:14 AM, Oliver wrote:

 About wal files and archiving of them, I must delete both manually, isn't
 it? There isn't any option for automatically delete wal files with a given
 age in the postgresql.conf, isn't it? (Away of archive_command). Do you use
 Linux? Could you pass me your archive_command or script that you use for
 copying/gzipping the files?
 Thanks beforehand.


 you need ALL the archived WAL files since the start of the last base
 backup, or none of them are useful.



 --
 john r pierce  37N 122W
 somewhere on the middle of the left coast




Re: [GENERAL] max_connections reached in postgres 9.3.3

2014-06-19 Thread Merlin Moncure
On Thu, Jun 19, 2014 at 5:12 PM, Borislav Ivanov biva...@atlassian.com wrote:
 However, most people on our team think that the number of connections is
 purely a symptom of the actual problem. We would love to be wrong about
 this. But for now we feel the high number of connections contributes for
 preserving the problem but it's not actually triggering the problem.

This is entirely correct.  pgbouncer is not about preventing database load
but about limiting damage when it occurs.  This is generally necessary in
environments where application servers keep piling on connections when
the database is not clearing queries fast enough.

In your case user% is dominating system load.  Along with the high cs
this is really suggesting spinlock contention.  A 'perf top' is
essential for identifying the culprit.  It's very possible that 9.4
will fix your problem...see:
http://postgresql.1045698.n5.nabble.com/Cpu-usage-100-on-slave-s-lock-problem-td5768655.html.
There was some poorly optimized code in the wal replay.

merlin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] max_connections reached in postgres 9.3.3

2014-06-19 Thread Vasudevan, Ramya
Hi Borislav – thank you for the update and all the information. It does look 
like we are in the same boat. And I feel the same way - maxing out on 
max_connections is just a symptom. pgbouncer may help alleviate the problem 
(though in your case it didn't) and is definitely good to have either way.

One thing I wanted to mention is that we upgraded from CentOS 5.6 to 6.3 in 
order to fix a kernel bug we hit on 5.6. It was only after we moved to 6.3 that 
we started seeing issues with max_connections. And it is worth mentioning that 
there was no application change when we moved from 5.6 to 6.3. We then upgraded 
to 6.5 and still have the same problem. And from your case, 6.4 also has the 
same problem. I think that is why we wanted to downgrade to 5.8 to see if it 
resolves the issue.

Thank You
Ramya


Re: [GENERAL] max_connections reached in postgres 9.3.3

2014-06-19 Thread Borislav Ivanov
We do record perf data. For each incident we've had, the data looks about
the same. Unfortunately, I can't read much into it, besides it getting
stuck on a spinlock. But why, and on what?


### from perf report

 53.28%   postmaster  postgres           [.] s_lock
  6.22%   postmaster  postgres           [.] 0x001b4306
  2.30%   postmaster  [kernel.kallsyms]  [k] _spin_lock
  2.23%   postmaster  postgres           [.] LWLockAcquire
  1.79%   postmaster  postgres           [.] LWLockRelease
  1.39%   postmaster  postgres           [.] hash_search_with_hash_value
  1.20%   postmaster  postgres           [.] SearchCatCache
  0.71%   postmaster  [kernel.kallsyms]  [k] hrtimer_interrupt
  0.56%   postmaster  [kernel.kallsyms]  [k] tick_program_event
  0.54%   postmaster  [kernel.kallsyms]  [k] schedule
  0.44%   postmaster  [kernel.kallsyms]  [k] _spin_lock_irq

### Then zooming in on s_lock

# Annotate s_lock
...
 99.04 │   test   %al,%al
...

# Zoom into postmaster(81487) thread

 55.84%  postmaster  postgres           [.] s_lock
  6.52%  postmaster  postgres           [.] 0x001b4306
  2.42%  postmaster  [kernel.kallsyms]  [k] _spin_lock
  2.34%  postmaster  postgres           [.] LWLockAcquire
  1.87%  postmaster  postgres           [.] LWLockRelease
  1.46%  postmaster  postgres           [.] hash_search_with_hash_value
  1.26%  postmaster  postgres           [.] SearchCatCache
  0.75%  postmaster  [kernel.kallsyms]  [k] hrtimer_interrupt
  0.59%  postmaster  [kernel.kallsyms]  [k] tick_program_event
  0.57%  postmaster  [kernel.kallsyms]  [k] schedule

# Zoom into postgres DSO

 65.75%  postmaster  [.] s_lock
  7.68%  postmaster  [.] 0x001b4306
  2.75%  postmaster  [.] LWLockAcquire
  2.20%  postmaster  [.] LWLockRelease
  1.72%  postmaster  [.] hash_search_with_hash_value
  1.49%  postmaster  [.] SearchCatCache
  0.54%  postmaster  [.] _bt_compare
  0.51%  postmaster  [.] _bt_checkkeys


On 19 June 2014 15:57, Merlin Moncure mmonc...@gmail.com wrote:

 On Thu, Jun 19, 2014 at 5:12 PM, Borislav Ivanov biva...@atlassian.com
 wrote:
  However, most people on our team think that the number of connections is
  purely a symptom of the actual problem. We would love to be wrong about
  this. But for now we feel the high number of connections contributes for
  preserving the problem but it's not actually triggering the problem.

 This is entirely correct.  pgbouncer is not about preventing database load
 but about limiting damage when it occurs.  This is generally necessary in
 environments where application servers keep piling on connections when
 the database is not clearing queries fast enough.

 In your case user% is dominating system load.  Along with the high cs
 this is really suggesting spinlock contention.  A 'perf top' is
 essential for identifying the culprit.  It's very possible that 9.4
 will fix your problem...see:

 http://postgresql.1045698.n5.nabble.com/Cpu-usage-100-on-slave-s-lock-problem-td5768655.html
 .
 There was some poorly optimized code in the wal replay.

 merlin



Re: [GENERAL] max_connections reached in postgres 9.3.3

2014-06-19 Thread Erik van Zijst
On Thu, Jun 19, 2014 at 3:57 PM, Merlin Moncure mmonc...@gmail.com wrote:
 In your case user% is dominating system load.  Along with the high cs
 this is really suggesting spinlock contention.  A 'perf top' is
 essential for identifying the culprit.  It's very possible that 9.4
 will fix your problem...see:
 http://postgresql.1045698.n5.nabble.com/Cpu-usage-100-on-slave-s-lock-problem-td5768655.html.
 There was some poorly optimized code in the wal replay.

Did that patch go in? The mailing list thread doesn't seem conclusive.

Cheers,
Erik


 merlin


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general