Re: [GENERAL] question about age()

2013-08-30 Thread Andreas Kretschmer
Tom Lane t...@sss.pgh.pa.us wrote:

 Andreas Kretschmer akretsch...@spamfence.net writes:
  '3 years' != '2 years 11 mons 30 days', but I got 0 rows; why?
 
 Well, actually:
 
 regression=# select '3 years'::interval = '2 years 11 mons 30 days'::interval;
  ?column? 
 ----------
  t
 (1 row)
 
 IIRC, interval comparison operators normalize the two values assuming that
 1 month = 30 days.  Which is kind of arbitrary, but without some such
 assumption there's no way to have a scalar ordering of intervals at all.

Thanks, okay, that makes sense. I'm using extract() to pull out and compare
the YEAR field of age() to spot a birthday ;-)
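
A minimal sketch of one such check, using a hypothetical people(name,
birthdate) table (this variant tests the MONTH and DAY fields instead:
on the birthday itself, both are zero in the age() result):

postgres=# SELECT name
postgres-#   FROM people
postgres-#  WHERE extract(month FROM age(birthdate)) = 0
postgres-#    AND extract(day FROM age(birthdate)) = 0;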


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
If I was god, I would recompile penguin with --enable-fly.   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°




[GENERAL] pgpool-ha not found in yum postgres repository

2013-08-30 Thread Mistina Michal
Hello to everybody.

I found that a pgpool-ha package should exist, according to the full package list:

http://yum.postgresql.org/news-packagelist.php

 

I installed the PostgreSQL yum repository by installing this RPM (Red Hat 6,
x86_64):

http://yum.postgresql.org/9.2/redhat/rhel-6-x86_64/pgdg-redhat92-9.2-7.noarch.rpm

found on the website:

http://yum.postgresql.org/repopackages.php

 

Then I issued "yum search pgpool-ha", but the package wasn't found.

The PostgreSQL yum repository is enabled and present:

cat /etc/yum.repos.d/pgdg-92-redhat.repo

[pgdg92]
name=PostgreSQL 9.2 $releasever - $basearch
baseurl=http://yum.postgresql.org/9.2/redhat/rhel-6-$basearch
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-PGDG-92

[pgdg92-source]
name=PostgreSQL 9.2 $releasever - $basearch - Source
failovermethod=priority
baseurl=http://yum.postgresql.org/srpms/9.2/redhat/rhel-$releasever-$basearch
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-PGDG-92

Why is the pgpool-ha RPM package missing from the PostgreSQL yum repository
even though it is listed on the full package list page?
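
For what it's worth, these stock yum/yum-utils probes will list what the
repository actually ships (the 'pgpool*' glob is just a guess at the
package naming):

yum --disablerepo='*' --enablerepo=pgdg92 list available 'pgpool*'
repoquery --repoid=pgdg92 'pgpool*'      # repoquery comes from yum-utils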

 

Best regards,

Michal Mistina

 





[GENERAL] My Experiment of PG crash when dealing with huge amount of data

2013-08-30 Thread 高健
Hello:

I have done the following experiment to test PG's behavior when dealing
with data that is bigger than the total memory of the OS.

The result is:
PG says:

WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the
current transaction and exit, because another server process exited
abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and
repeat your command.
The connection to the server was lost. Attempting reset: Failed.
!
-

In the log, I can see the following:
LOG:  background writer process (PID 3221) was terminated by signal 9:
Killed

But why did it need to kill the bgwriter?
And is it expected that PG will use as much memory as it can to finish a
SQL DML statement, until it has used almost all the resources and then
crashes?

I used the default checkpoint_segments (3) and shared_buffers (32MB)
settings, and my machine has only 1024MB of memory.
The procedure was as follows:
1) Create a table whose tuples are each more than 1KB:

postgres=# create table test01(id integer, val char(1024));

2) Insert 2457600 records into the table, totalling more than 2400MB:

postgres=# insert into test01 values(generate_series(1,2457600),repeat(
chr(int4(random()*26)+65),1024));
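
As a sanity check on the resulting size (standard catalog functions, shown
here only for illustration):

postgres=# SELECT pg_size_pretty(pg_total_relation_size('test01'));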

It really took a few minutes. While the SQL statement was running, I could
see that the server process was consuming 80% of the total memory of the OS.

--

[root@server ~]# ps aux | grep post

root      3180  0.0  0.0 105296   712 pts/1    S    16:31   0:00 su - postgres

postgres  3181  0.0  0.0  70304   676 pts/1    S+   16:31   0:00 -bash

postgres  3219  0.0  0.2 113644  2864 pts/1    S    16:32   0:00 /usr/local/pgsql/bin/postgres -D /gao/data

postgres  3221  0.4  3.0 113724 35252 ?        Ss   16:32   0:01 postgres: writer process

postgres  3222  0.2  0.1 113644  1616 ?        Ds   16:32   0:00 postgres: wal writer process

postgres  3223  0.0  0.0 114380  1148 ?        Ss   16:32   0:00 postgres: autovacuum launcher process

postgres  3224  0.0  0.0  73332   472 ?        Ss   16:32   0:00 postgres: stats collector process

root      3252  0.0  0.0 105296   712 pts/2    S    16:32   0:00 su - postgres

postgres  3253  0.0  0.0  70304   676 pts/2    S    16:32   0:00 -bash

postgres  3285  0.0  0.0  83488   740 pts/2    S+   16:32   0:00 ./psql

postgres  3286 14.8 80.2 2598332 924308 ?      Ds   16:32   0:35 postgres: postgres postgres [local] INSERT

root            0.0  0.0  65424   812 pts/3    S+   16:36   0:00 grep post


After a while, I found the following information in PG's log:


LOG:  autovacuum launcher started
LOG:  database system is ready to accept connections
LOG:  checkpoints are occurring too frequently (2 seconds apart)
HINT:  Consider increasing the configuration parameter
checkpoint_segments.
LOG:  checkpoints are occurring too frequently (1 second apart)
HINT:  Consider increasing the configuration parameter
checkpoint_segments.
LOG:  checkpoints are occurring too frequently (2 seconds apart)
HINT:  Consider increasing the configuration parameter
checkpoint_segments.
LOG:  checkpoints are occurring too frequently (1 second apart)
HINT:  Consider increasing the configuration parameter
checkpoint_segments.
LOG:  checkpoints are occurring too frequently (2 seconds apart)
HINT:  Consider increasing the configuration parameter
checkpoint_segments.
LOG:  checkpoints are occurring too frequently (1 second apart)
HINT:  Consider increasing the configuration parameter
checkpoint_segments.
LOG:  checkpoints are occurring too frequently (2 seconds apart)
HINT:  Consider increasing the configuration parameter
checkpoint_segments.
LOG:  checkpoints are occurring too frequently (4 seconds apart)
HINT:  Consider increasing the configuration parameter
checkpoint_segments.
LOG:  checkpoints are occurring too frequently (10 seconds apart)
HINT:  Consider increasing the configuration parameter
checkpoint_segments.
LOG:  checkpoints are occurring too frequently (9 seconds apart)
HINT:  Consider increasing the configuration parameter
checkpoint_segments.
LOG:  checkpoints are occurring too frequently (7 seconds apart)
HINT:  Consider increasing the configuration parameter
checkpoint_segments.
LOG:  checkpoints are occurring too frequently (9 seconds apart)
HINT:  Consider increasing the configuration parameter
checkpoint_segments.
LOG:  checkpoints are occurring too frequently (7 seconds apart)
HINT:  Consider increasing the configuration parameter
checkpoint_segments.
LOG:  checkpoints are occurring too frequently (5 seconds apart)
HINT:  Consider increasing the configuration parameter
checkpoint_segments.
LOG:  

Re: [GENERAL] Why is NULL = unbounded for rangetypes?

2013-08-30 Thread Andreas Joseph Krogh
On Friday, 30 August 2013 at 03:23:09, Jeff Davis pg...@j-davis.com wrote:


On Tue, 2013-07-09 at 10:45 +0200, Andreas Joseph Krogh wrote:
 I would expect the queries above to return FALSE and have to use
 INFINITY to have them return TRUE. I don't understand what you mean by
 ranges not allowing either bound to be NULL as it seems to be the case
 (as in it works).

Although passing NULL to the constructor works, it does *not* create a
range where one bound is NULL. It actually creates an unbounded range;
that is, a range where one bound is infinite.

NULL semantics are far too confusing to be useful with ranges. For
instance, if ranges did support NULLs, the queries you mention would
have to return NULL, not FALSE.
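
A quick illustration of that behavior, using the built-in int4range
constructor in psql:

postgres=# SELECT int4range(NULL, 5);
 int4range
-----------
 (,5)
(1 row)

postgres=# SELECT 1 <@ int4range(NULL, 5);
 ?column?
----------
 t
(1 row)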


 

But I agree that returning NULL would be OK; then it would be easy to catch when starting to play with range types in queries. Having NULL implicitly mean infinity comes as a surprise, to me at least.

 

But now that I know this, it's not exactly a blocker...

 

--
Andreas Joseph Krogh andr...@officenet.no      mob: +47 909 56 963
Senior Software Developer / CTO - OfficeNet AS - http://www.officenet.no
Public key: http://home.officenet.no/~andreak/public_key.asc

 

Re: [GENERAL] My Experiment of PG crash when dealing with huge amount of data

2013-08-30 Thread hxreno1

This is most likely the operating system's OOM killer killing the PG process; check syslog.
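
A quick way to confirm it (standard Linux commands; the log path varies by
distribution):

dmesg | grep -i 'killed process'
grep -i oom /var/log/messages      # /var/log/syslog on Debian-family systems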

On Fri 30 Aug 2013 05:10:42 PM CST, 高健 wrote:

Hello:

In the log, I can see the following:
LOG:  background writer process (PID 3221) was terminated by signal 9:
Killed

But why did it need to kill the bgwriter?
[snip]

Re: [GENERAL] Using of replication by initdb for both nodes?

2013-08-30 Thread Michael Paquier
On Thu, Aug 29, 2013 at 11:06 PM, Bocap kakalot...@yahoo.com wrote:
 1. Instead of using pg_basebackup for the standby DB, I ran initdb for both
 the standby and the primary.
 2. I created recovery.conf in the standby DB and started both nodes.
 Now it works fine for me, but can I be sure there will be no problem?
A standby node *needs* to be based on a base backup of its primary or
of the node it connects to (which can itself be a slave in a cascading
configuration). One of the reasons is that they need to share the
same system identifier (the "Database system identifier" shown in
pg_controldata output). Running initdb separately on multiple nodes
will simply not satisfy this condition, and your cluster setup will
fail. More details here:
http://www.postgresql.org/docs/9.2/static/continuous-archiving.html
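
For reference, a minimal standby-seeding sketch (host, user, and paths are
placeholders, not taken from your setup):

pg_basebackup -h primary_host -U repl_user -D /path/to/standby/data -X stream -P
pg_controldata /path/to/standby/data | grep 'system identifier'
# then create recovery.conf in the standby data directory and start the node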
-- 
Michael




Re: [GENERAL] My Experiment of PG crash when dealing with huge amount of data

2013-08-30 Thread Michael Paquier
On Fri, Aug 30, 2013 at 6:10 PM, 高健 luckyjack...@gmail.com wrote:
 In the log, I can see the following:
 LOG:  background writer process (PID 3221) was terminated by signal 9:
 Killed
Assuming that no user on your server manually killed this process, and
that no maintenance task you implemented did so, this looks like the
Linux OOM killer acting because of memory overcommit. Have a look here
for more details:
http://www.postgresql.org/docs/current/static/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT
So have a look at dmesg to confirm that; then you could use one of the
strategies described in the docs. Also, as you have been doing a bulk
INSERT, you should temporarily increase checkpoint_segments to reduce
the pressure on the background writer by reducing the number of
checkpoints happening. This will also make your data load faster.
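
A minimal sketch of those two knobs (the values are illustrative, not
tuned recommendations):

# /etc/sysctl.conf: forbid memory overcommit instead of letting the OOM
# killer pick a victim
vm.overcommit_memory = 2

# postgresql.conf: fewer, larger checkpoints during the bulk load
checkpoint_segments = 32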
-- 
Michael




[GENERAL] Unable to CREATE SCHEMA and INSERT data in table in that schema in same EXECUTE

2013-08-30 Thread vibhuti nataraj
Hi,

I am using server 9.0.4, and I am trying to
1. Create a new schema,
2. Create a table under that schema, and
3. Insert data into that table,
all in the same EXECUTE, and it's failing. If I try

CREATE OR REPLACE FUNCTION pg_temp.test( )
RETURNS void LANGUAGE plpgsql AS
$BODY$
BEGIN
EXECUTE 'CREATE SCHEMA test; CREATE TABLE test.t (id integer ); INSERT INTO
test.t (id) VALUES (0);';
END;
$BODY$;

select pg_temp.test( );

I get a failure with the following error.
ERROR:  schema test does not exist
LINE 1: ...t; CREATE TABLE test.t (id integer ); INSERT INTO test.t (id...
 ^
However, the same thing works if I use two EXECUTEs in the same transaction.
CREATE OR REPLACE FUNCTION pg_temp.test( )
RETURNS void LANGUAGE plpgsql AS
$BODY$
BEGIN
EXECUTE 'CREATE SCHEMA test; CREATE TABLE test.t (id integer );';
EXECUTE 'INSERT INTO test.t (id) VALUES (0);';
END;
$BODY$;

 select pg_temp.test( );


I am unable to understand the difference between the two, and would
appreciate it if someone could help me here.

Thanks.
Best regards,
Vibhuti


Re: [GENERAL] Unable to CREATE SCHEMA and INSERT data in table in that schema in same EXECUTE

2013-08-30 Thread Tom Lane
vibhuti nataraj vvnata...@gmail.com writes:
 EXECUTE 'CREATE SCHEMA test; CREATE TABLE test.t (id integer ); INSERT INTO
 test.t (id) VALUES (0);';

That's not going to work because the whole string is parsed before any of
it is executed.  In particular, parse analysis of the INSERT is going to
spit up because neither the table nor even the schema exist yet.

 EXECUTE 'CREATE SCHEMA test; CREATE TABLE test.t (id integer );';
 EXECUTE 'INSERT INTO test.t (id) VALUES (0);';

The reason this happens to work is that CREATE TABLE is a utility
statement, which doesn't do any parse analysis to speak of, in
particular it doesn't notice at parse time whether the mentioned
schema exists.  However, that's an implementation detail that
could change from release to release.  By and large, trying to
EXECUTE multiple statements in one query string is something best
avoided, especially if any of them are DDL.

regards, tom lane




Re: [GENERAL] What is the relationship between checkpoint and wal

2013-08-30 Thread Kevin Grittner
高健 luckyjack...@gmail.com wrote:

 So I think that in a mission critical environment, it is not a
 good choice to turn full_page_writes on.

If full_page_writes is off, your database can be corrupted in the
event of a crash of the hardware, OS, or VM (for example a power
failure).  The only exception is if your environment somehow
guarantees that in such failures it is not possible to write part
of a 4KB write request without the entire 4KB being written.  Such
corruption may be hidden and result in inconsistent or incorrect
results, without generating an error; so you would be well-advised
to restore from backup if there is such a crash.

full_page_writes = on is required for protection of database
integrity in most environments.
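
The current value is easy to verify from psql:

postgres=# SHOW full_page_writes;
 full_page_writes
------------------
 on
(1 row)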

In the mission critical environments I've worked with, it has
generally been desirable to preserve database integrity and to be
able to recover from an OS crash faster than can be done from
backup.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [GENERAL] virtualxid,relation lock

2013-08-30 Thread Kevin Grittner
shanmugavel muthuvel shanmugavel.mu...@gmail.com wrote:

 I have an issue with an idle transaction and one select
 statement in the backend.

 How can I fix the issue?

There wasn't enough detail in the problem statement to make a
detailed suggestion.  The general suggestion would be to make sure
that transactions are never left idle for any significant amount of
time.  (For example, applications should never wait for user input
with a transaction pending.)
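
As a starting point, such sessions can be listed from pg_stat_activity
(the state and query columns used here exist in 9.2 and later):

SELECT pid, usename, now() - xact_start AS xact_age, query
  FROM pg_stat_activity
 WHERE state = 'idle in transaction'
 ORDER BY xact_start;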

For more detailed suggestions, please review this for ideas on what
people might need to see to provide more specific suggestions:

http://wiki.postgresql.org/wiki/Guide_to_reporting_problems

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [GENERAL] What is the relationship between checkpoint and wal

2013-08-30 Thread 高健
Hello:

Thank you all.

I understand it now.

Best Regards


2013/8/31 Kevin Grittner kgri...@ymail.com

 高健 luckyjack...@gmail.com wrote:

  So I think that in a mission critical environment, it is not a
  good choice to turn full_page_writes on.

 If full_page_writes is off, your database can be corrupted in the
 event of a crash of the hardware, OS, or VM (for example a power
 failure).  [snip]

 --
 Kevin Grittner
 EDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company