Re: [GENERAL] Increase in max_connections

2014-03-10 Thread Venkata Balaji Nagothi
On Tue, Mar 11, 2014 at 12:04 PM, Anand Kumar, Karthik <
karthik.anandku...@classmates.com> wrote:

>   Hi all,
>
>  We're running postgres 9.3.2, server configuration below.
>
>  Seemingly randomly, we will see the number of active queries in postgres
> go up until we hit max_connections. The DB will recover after a few minutes.
>
>  We had the issue a couple of times in Feb 2014. We then upgraded the
> postgres server from 9.1 to 9.3.2, and the occurrence has gone up
> significantly - to several times a day.
>
>  User CPU goes up to 100% as well, with no increase in I/O or system CPU.
> We have slow query logging enabled, and there is no dramatic change in the
> slow queries either.
> There is a corresponding spike in shared locks, but that seems to be an
> effect rather than a cause - it corresponds to an increase in the number of
> running processes at the time.
>
>  We had a similar issue in the past - that was solved by disabling
> transparent_huge_pages - but the difference there was that we'd see queries
> slow down dramatically. Currently, we don't. Also, transparent_huge_pages
> is still disabled.
>
>  I do realize the issue could be caused by a spurt in incoming
> connections - we do not yet have conclusive evidence on whether that's
> happening (active queries climb up, but there is no conclusive proof on
> whether that's because of a slowdown or because of an increase in traffic).
> We are working on getting that information and will update as soon as we
> have it.
>
>  I thought I'd send a post out to the group before then, to see if anyone
> has run into anything similar.
>
>  Thanks,
> Karthik
>
>  site=# SELECT version();
> PostgreSQL 9.3.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6
> 20120305 (Red Hat 4.4.6-4), 64-bit
>
>  site=# SELECT name, current_setting(name), source
> site-#   FROM pg_settings
> site-#   WHERE source NOT IN ('default', 'override');
> application_name|psql|client
> archive_command|/usr/bin/archiver.sh %f %p|configuration file
> archive_mode|on|configuration file
> autovacuum_freeze_max_age|25000|configuration file
> autovacuum_max_workers|6|configuration file
> bgwriter_lru_maxpages|1000|configuration file
> bgwriter_lru_multiplier|4|configuration file
> checkpoint_completion_target|0.8|configuration file
> checkpoint_segments|250|configuration file
> checkpoint_timeout|15min|configuration file
> checkpoint_warning|6min|configuration file
> client_encoding|UTF8|client
> commit_siblings|25|configuration file
> cpu_tuple_cost|0.03|configuration file
> DateStyle|ISO, MDY|configuration file
> default_statistics_target|300|configuration file
> default_text_search_config|pg_catalog.english|configuration file
> effective_cache_size|568GB|configuration file
> fsync|on|configuration file
> lc_messages|en_US.UTF-8|configuration file
> lc_monetary|en_US.UTF-8|configuration file
> lc_numeric|en_US.UTF-8|configuration file
> lc_time|en_US.UTF-8|configuration file
> listen_addresses|*|configuration file
> log_autovacuum_min_duration|0|configuration file
> log_checkpoints|on|configuration file
> log_connections|on|configuration file
> log_destination|syslog|configuration file
> log_directory|pg_log|configuration file
> log_filename|postgresql-%a.log|configuration file
> log_line_prefix|user=%u,db=%d,ip=%h |configuration file
> log_min_duration_statement|100ms|configuration file
> log_min_messages|debug1|configuration file
> log_rotation_age|1d|configuration file
> log_rotation_size|0|configuration file
> log_timezone|US/Pacific|configuration file
> log_truncate_on_rotation|on|configuration file
> logging_collector|off|configuration file
> maintenance_work_mem|1GB|configuration file
> max_connections|1500|configuration file
> max_locks_per_transaction|1000|configuration file
> max_stack_depth|2MB|environment variable
> max_wal_senders|5|configuration file
> port|5432|command line
> random_page_cost|2|configuration file
> shared_buffers|8GB|configuration file
> synchronous_commit|off|configuration file
> syslog_facility|local0|configuration file
> syslog_ident|postgres|configuration file
> TimeZone|US/Pacific|configuration file
> vacuum_freeze_table_age|0|configuration file
> wal_buffers|32MB|configuration file
> wal_keep_segments|250|configuration file
> wal_level|hot_standby|configuration file
> wal_sync_method|fsync|configuration file
> work_mem|130MB|configuration file
>

Please let us know your hardware configuration: RAM, CPU (number of cores), etc.

Do you see any messages in the PostgreSQL logs indicating that processes were
terminated or killed forcibly?

Or do you see any shared-memory-related error messages?

cpu_tuple_cost = 0.03 is not the default; was there a reason for increasing it?

effective_cache_size = 568GB - please let us know whether that is realistic
for your system.
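
While you gather that evidence, a periodic snapshot of pg_stat_activity (just a
sketch below; adjust it to your monitoring setup) should show whether the
pile-up is a burst of new connections or existing sessions getting stuck:

SELECT state, waiting, count(*)
  FROM pg_stat_activity
 GROUP BY state, waiting
 ORDER BY count(*) DESC;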


Venkata Balaji N

Sr. Database Administrator
Fujitsu Australia


Re: [GENERAL] replication timeout in pg_basebackup

2014-03-10 Thread John R Pierce

On 3/9/2014 6:52 PM, Aggarwal, Ajay wrote:
Our replication timeout is the default 60 seconds. If we increase the 
replication timeout to, say, 180 seconds, we see better results, but backups 
still fail occasionally.


so increase it to 300 seconds, or whatever.   that's an upper limit; it 
needs to be big enough that you don't get into problems when doing stuff 
like base backups.
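
for example (a sketch; on 9.2 the parameter is spelled replication_timeout,
and 9.3 renamed it to wal_sender_timeout):

# postgresql.conf on the server sending the WAL stream
replication_timeout = 300s

-- then have the running server pick it up, e.g. from psql:
SELECT pg_reload_conf();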






--
john r pierce  37N 122W
somewhere on the middle of the left coast



Re: [GENERAL] replication timeout in pg_basebackup

2014-03-10 Thread Aggarwal, Ajay
If I don't include WAL files as part of my backup, I do not run into this 
issue. But a backup without WAL files is not what I want.

From: Aggarwal, Ajay
Sent: Monday, March 10, 2014 9:46 PM
To: Haribabu Kommi
Cc: pgsql-general@postgresql.org
Subject: RE: [GENERAL] replication timeout in pg_basebackup

I have already tried experimenting with the Linux dirty_ratio settings etc. You 
can only fine-tune up to a point. The backup process still fills up the buffer 
cache very quickly. Yes, my database is about 5-6 GB in size and will grow 
bigger over time.

I wish there were a way to slow down pg_basebackup or force it to use direct 
I/O.

From: Haribabu Kommi [kommi.harib...@gmail.com]
Sent: Monday, March 10, 2014 8:31 PM
To: Aggarwal, Ajay
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] replication timeout in pg_basebackup

On Tue, Mar 11, 2014 at 7:07 AM, Aggarwal, Ajay  wrote:
> Thanks Hari Babu.
>
> I think what is happening is that my dirty cache builds up quickly for the
> volume where I am backing up. This would trigger flush of these dirty pages
> to the disk. While this flush is going on pg_basebackup tries to do fsync()
> on a received WAL file and gets blocked.

But the sync is executed every time a WAL file is finished. Is your
database big in size?
Is your workload write-heavy?

On Linux, when the kernel tries to write out a bunch of buffers at once, the
fsync call might block for some time.
The following link has some "Tuning Recommendations for
write-heavy operations" which might be useful to you.

http://www.westnet.com/~gsmith/content/linux-pdflush.htm

Any other ideas to handle these kind of problems?

Regards,
Hari Babu
Fujitsu Australia


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] replication timeout in pg_basebackup

2014-03-10 Thread Aggarwal, Ajay
I have already tried experimenting with the Linux dirty_ratio settings etc. You 
can only fine-tune up to a point. The backup process still fills up the buffer 
cache very quickly. Yes, my database is about 5-6 GB in size and will grow 
bigger over time.

I wish there were a way to slow down pg_basebackup or force it to use direct 
I/O.
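
(For anyone following along: the kernel-side tuning being referred to is the
dirty-writeback sysctls. The parameter names below are real Linux settings,
but the values are only illustrative, not a recommendation.)

# check the current settings
sysctl vm.dirty_background_ratio vm.dirty_ratio

# example: start background writeback earlier, and cap dirty memory by an
# absolute size rather than a percentage of RAM
sysctl -w vm.dirty_background_bytes=67108864    # 64 MB
sysctl -w vm.dirty_bytes=536870912              # 512 MB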

From: Haribabu Kommi [kommi.harib...@gmail.com]
Sent: Monday, March 10, 2014 8:31 PM
To: Aggarwal, Ajay
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] replication timeout in pg_basebackup

On Tue, Mar 11, 2014 at 7:07 AM, Aggarwal, Ajay  wrote:
> Thanks Hari Babu.
>
> I think what is happening is that my dirty cache builds up quickly for the
> volume where I am backing up. This would trigger flush of these dirty pages
> to the disk. While this flush is going on pg_basebackup tries to do fsync()
> on a received WAL file and gets blocked.

But the sync is executed every time a WAL file is finished. Is your
database big in size?
Is your workload write-heavy?

On Linux, when the kernel tries to write out a bunch of buffers at once, the
fsync call might block for some time.
The following link has some "Tuning Recommendations for
write-heavy operations" which might be useful to you.

http://www.westnet.com/~gsmith/content/linux-pdflush.htm

Any other ideas to handle these kind of problems?

Regards,
Hari Babu
Fujitsu Australia


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Increase in max_connections

2014-03-10 Thread Anand Kumar, Karthik
Hi all,

We're running postgres 9.3.2, server configuration below.

Seemingly randomly, we will see the number of active queries in postgres go up 
until we hit max_connections. The DB will recover after a few minutes.

We had the issue a couple of times in Feb 2014. We then upgraded the postgres 
server from 9.1 to 9.3.2, and the occurrence has gone up significantly – to 
several times a day.

User CPU goes up to 100% as well, with no increase in I/O or system CPU.
We have slow query logging enabled, and there is no dramatic change in the slow 
queries either.
There is a corresponding spike in shared locks, but that seems to be an effect 
rather than a cause - it corresponds to an increase in the number of running 
processes at the time.

We had a similar issue in the past – that was solved by disabling 
transparent_huge_pages – but the difference there was that we'd see queries 
slow down dramatically. Currently, we don't. Also, transparent_huge_pages is 
still disabled.

I do realize the issue could be caused by a spurt in incoming connections - we 
do not yet have conclusive evidence on whether that's happening (active queries 
climb up, but there is no conclusive proof on whether that's because of a 
slowdown or because of an increase in traffic). We are working on getting that 
information and will update as soon as we have it.

I thought I'd send a post out to the group before then, to see if anyone has 
run into anything similar.

Thanks,
Karthik

site=# SELECT version();
PostgreSQL 9.3.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6 
20120305 (Red Hat 4.4.6-4), 64-bit

site=# SELECT name, current_setting(name), source
site-#   FROM pg_settings
site-#   WHERE source NOT IN ('default', 'override');
application_name|psql|client
archive_command|/usr/bin/archiver.sh %f %p|configuration file
archive_mode|on|configuration file
autovacuum_freeze_max_age|25000|configuration file
autovacuum_max_workers|6|configuration file
bgwriter_lru_maxpages|1000|configuration file
bgwriter_lru_multiplier|4|configuration file
checkpoint_completion_target|0.8|configuration file
checkpoint_segments|250|configuration file
checkpoint_timeout|15min|configuration file
checkpoint_warning|6min|configuration file
client_encoding|UTF8|client
commit_siblings|25|configuration file
cpu_tuple_cost|0.03|configuration file
DateStyle|ISO, MDY|configuration file
default_statistics_target|300|configuration file
default_text_search_config|pg_catalog.english|configuration file
effective_cache_size|568GB|configuration file
fsync|on|configuration file
lc_messages|en_US.UTF-8|configuration file
lc_monetary|en_US.UTF-8|configuration file
lc_numeric|en_US.UTF-8|configuration file
lc_time|en_US.UTF-8|configuration file
listen_addresses|*|configuration file
log_autovacuum_min_duration|0|configuration file
log_checkpoints|on|configuration file
log_connections|on|configuration file
log_destination|syslog|configuration file
log_directory|pg_log|configuration file
log_filename|postgresql-%a.log|configuration file
log_line_prefix|user=%u,db=%d,ip=%h |configuration file
log_min_duration_statement|100ms|configuration file
log_min_messages|debug1|configuration file
log_rotation_age|1d|configuration file
log_rotation_size|0|configuration file
log_timezone|US/Pacific|configuration file
log_truncate_on_rotation|on|configuration file
logging_collector|off|configuration file
maintenance_work_mem|1GB|configuration file
max_connections|1500|configuration file
max_locks_per_transaction|1000|configuration file
max_stack_depth|2MB|environment variable
max_wal_senders|5|configuration file
port|5432|command line
random_page_cost|2|configuration file
shared_buffers|8GB|configuration file
synchronous_commit|off|configuration file
syslog_facility|local0|configuration file
syslog_ident|postgres|configuration file
TimeZone|US/Pacific|configuration file
vacuum_freeze_table_age|0|configuration file
wal_buffers|32MB|configuration file
wal_keep_segments|250|configuration file
wal_level|hot_standby|configuration file
wal_sync_method|fsync|configuration file
work_mem|130MB|configuration file


Re: [GENERAL] replication timeout in pg_basebackup

2014-03-10 Thread Haribabu Kommi
On Tue, Mar 11, 2014 at 7:07 AM, Aggarwal, Ajay  wrote:
> Thanks Hari Babu.
>
> I think what is happening is that my dirty cache builds up quickly for the
> volume where I am backing up. This would trigger flush of these dirty pages
> to the disk. While this flush is going on pg_basebackup tries to do fsync()
> on a received WAL file and gets blocked.

But the sync is executed for every WAL file finish. Does your database
is big in size?
Does your setup is write-heavy operations?

In Linux when it tries to write a bunch of buffers at once, the fysnc
call might block for some time.
In the following link there are some "Tuning Recommendations for
write-heavy operations" which might be useful to you.

http://www.westnet.com/~gsmith/content/linux-pdflush.htm

Any other ideas to handle these kind of problems?

Regards,
Hari Babu
Fujitsu Australia


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] bg writer went away after reload

2014-03-10 Thread Venkata Balaji Nagothi
Yes, it is the "writer process". That is the background writer; it has been
displayed as "writer process" since PostgreSQL 8.0.
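
If you want to confirm it is still doing work after the reload, a quick check
(just a sketch) is to watch its counters advance in pg_stat_bgwriter:

SELECT buffers_clean, maxwritten_clean,
       buffers_checkpoint, buffers_backend, stats_reset
  FROM pg_stat_bgwriter;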

Venkata Balaji N

Sr. Database Administrator
Fujitsu Australia


On Tue, Mar 11, 2014 at 10:32 AM, Matthew Chambers
wrote:

>
> This is postgres 9.3.2.
>
> This is what the log shows.
>
> Mar 11 08:16:29 jupiter521 postgres[2026]: [8-1] 2014-03-11 08:16:29
> NZDTLOG:  received SIGHUP, reloading configuration files
> Mar 11 08:16:29 jupiter521 postgres[2026]: [9-1] 2014-03-11 08:16:29
> NZDTLOG:  parameter "bgwriter_lru_maxpages" changed to "200"
>
> Here are the processes I have running besides the connections.
>
> postgres  2028  0.0  8.3 17245532 8279356 ?Ss   Mar09   2:42 postgres:
> checkpointer process
> postgres  2029  0.0  0.1 17245272 107900 ? Ss   Mar09   0:08 postgres:
> writer process
> postgres  2030  0.2  0.0 17245272 34248 ?  Ss   Mar09   6:44 postgres:
> wal writer process
> postgres  2031  0.0  0.0 17246164 2596 ?   Ss   Mar09   0:09 postgres:
> autovacuum launcher process
> postgres  2032  0.0  0.0  18152  1244 ?Ss   Mar09   0:06 postgres:
> archiver process   last was 0001020200F8
> postgres  2033  0.0  0.0  18568  1636 ?Ss   Mar09   1:47 postgres:
> stats collector process
> postgres  3914  0.4  0.0 17246520 2844 ?   Ss   Mar09  14:04 postgres:
> wal sender process postgres 192.168.122.54(48686) streaming 202/F996C000
>
> Is it the "writer process"?  I was sure it was called the background
> writer before.
>
> -Matt
>
>
> On 11/03/14 12:03, Venkata Balaji Nagothi wrote:
>
>
>  On Tue, Mar 11, 2014 at 8:30 AM, Matthew Chambers wrote:
>
>> Hi, just wondering if this is normal, DB is operating just fine.
>>
>> I upped bgwriter_lru_maxpages to 200 and issued a reload.  Normally, I'd
>> see the bgwriter constantly churning as one of my main I/O using processes,
>> but now I have:
>>
>> postgres: wal writer process
>> postgres: checkpointer process
>>
>> The wal writer seems to have taken over.  Does this make sense?
>>
>
>  What the WAL writer does is completely different from how the bgwriter
> functions. These are two critical PostgreSQL background processes that
> perform their I/O operations independently; one cannot take over for the other.
>
>  Which version of Postgres is this?
>
>  Do you see anything in the Postgres logs? Do you see any message which
> indicates that reloading the new configuration in postgresql.conf was
> successful?
>
>  Venkata Balaji N
>
>  Sr. Database Administrator
> Fujitsu Australia
>
>
>


Re: [GENERAL] bg writer went away after reload

2014-03-10 Thread Matthew Chambers


This is postgres 9.3.2.

This is what the log shows.

Mar 11 08:16:29 jupiter521 postgres[2026]: [8-1] 2014-03-11 08:16:29 
NZDTLOG:  received SIGHUP, reloading configuration files
Mar 11 08:16:29 jupiter521 postgres[2026]: [9-1] 2014-03-11 08:16:29 
NZDTLOG:  parameter "bgwriter_lru_maxpages" changed to "200"


Here are the processes I have running besides the connections.

postgres  2028  0.0  8.3 17245532 8279356 ?Ss   Mar09   2:42 
postgres: checkpointer process
postgres  2029  0.0  0.1 17245272 107900 ? Ss   Mar09   0:08 
postgres: writer process
postgres  2030  0.2  0.0 17245272 34248 ?  Ss   Mar09   6:44 
postgres: wal writer process
postgres  2031  0.0  0.0 17246164 2596 ?   Ss   Mar09   0:09 
postgres: autovacuum launcher process
postgres  2032  0.0  0.0  18152  1244 ?Ss   Mar09   0:06 
postgres: archiver process   last was 0001020200F8
postgres  2033  0.0  0.0  18568  1636 ?Ss   Mar09   1:47 
postgres: stats collector process
postgres  3914  0.4  0.0 17246520 2844 ?   Ss   Mar09  14:04 
postgres: wal sender process postgres 192.168.122.54(48686) streaming 
202/F996C000


Is it the "writer process"?  I was sure it was called the background 
writer before.


-Matt

On 11/03/14 12:03, Venkata Balaji Nagothi wrote:


On Tue, Mar 11, 2014 at 8:30 AM, Matthew Chambers wrote:


Hi, just wondering if this is normal, DB is operating just fine.

I upped bgwriter_lru_maxpages to 200 and issued a reload.
 Normally, I'd see the bgwriter constantly churning as one of my
main I/O using processes, but now I have:

postgres: wal writer process
postgres: checkpointer process

The wal writer seems to have taken over.  Does this make sense?


What the WAL writer does is completely different from how the bgwriter 
functions. These are two critical PostgreSQL background processes that 
perform their I/O operations independently; one cannot take over for 
the other.


Which version of Postgres is this?

Do you see anything in the Postgres logs? Do you see any message 
which indicates that reloading the new configuration in 
postgresql.conf was successful?


Venkata Balaji N

Sr. Database Administrator
Fujitsu Australia





Re: [GENERAL] bg writer went away after reload

2014-03-10 Thread Venkata Balaji Nagothi
On Tue, Mar 11, 2014 at 8:30 AM, Matthew Chambers wrote:

> Hi, just wondering if this is normal, DB is operating just fine.
>
> I upped bgwriter_lru_maxpages to 200 and issued a reload.  Normally, I'd
> see the bgwriter constantly churning as one of my main I/O using processes,
> but now I have:
>
> postgres: wal writer process
> postgres: checkpointer process
>
> The wal writer seems to have taken over.  Does this make sense?
>

What the WAL writer does is completely different from how the bgwriter
functions. These are two critical PostgreSQL background processes that perform
their I/O operations independently; one cannot take over for the other.

Which version of Postgres is this?

Do you see anything in the Postgres logs? Do you see any message which
indicates that reloading the new configuration in postgresql.conf was
successful?

Venkata Balaji N

Sr. Database Administrator
Fujitsu Australia


Re: [GENERAL] Recovering from failed transaction

2014-03-10 Thread Brian Crowell
On Mon, Mar 10, 2014 at 4:16 PM, Jeff Janes  wrote:
> ...  Or maybe it is mangling the "ROLLBACK;" into some form
> the database doesn't recognize.  Look in the postgres log files to see what
> the events look like from PostgreSQL's perspective.

Well that's the clue I needed. I was misinterpreting Postgres's log
file; it was complaining about the "SET statement_timeout" statement
Npgsql was slipping ahead of my ROLLBACK. Apparently I need to do
transactions with Npgsql's transaction class.

--Brian


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] bg writer went away after reload

2014-03-10 Thread Matthew Chambers

Hi, just wondering if this is normal, DB is operating just fine.

I upped bgwriter_lru_maxpages to 200 and issued a reload.  Normally, I'd 
see the bgwriter constantly churning as one of my main I/O using 
processes, but now I have:


postgres: wal writer process
postgres: checkpointer process

The wal writer seems to have taken over.  Does this make sense?

-Matt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Recovering from failed transaction

2014-03-10 Thread Jeff Janes
On Mon, Mar 10, 2014 at 1:41 PM, Brian Crowell  wrote:

> I feel dumb asking this question, but I can't seem to find the answer
> online.
>
> I'm running serializable transactions, and so naturally, they will
> sometimes fail with the error "could not serialize access due to
> concurrent update."
>
> But then I try to issue a ROLLBACK so I can continue using the
> connection, and I get error 25P02: "current transaction is aborted,
> commands ignored until end of transaction block."
>
> ...doesn't "ROLLBACK" end a transaction block? What does Postgres want
> here? How can I retry without closing the connection altogether?
>

What tool are you using to connect to the database?  Perhaps it is
confusing the error response to some earlier statement in the stream with
the response for the rollback.  Or maybe it is mangling the "ROLLBACK;"
into some form the database doesn't recognize.  Look in the postgres log
files to see what the events look like from PostgreSQL's perspective.

Cheers,

Jeff


[GENERAL] Recovering from failed transaction

2014-03-10 Thread Brian Crowell
I feel dumb asking this question, but I can't seem to find the answer online.

I'm running serializable transactions, and so naturally, they will
sometimes fail with the error "could not serialize access due to
concurrent update."

But then I try to issue a ROLLBACK so I can continue using the
connection, and I get error 25P02: "current transaction is aborted,
commands ignored until end of transaction block."

...doesn't "ROLLBACK" end a transaction block? What does Postgres want
here? How can I retry without closing the connection altogether?

--Brian


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] replication timeout in pg_basebackup

2014-03-10 Thread Aggarwal, Ajay
Thanks Hari Babu.

I think what is happening is that my dirty cache builds up quickly for the 
volume where I am backing up. This would trigger flush of these dirty pages to 
the disk. While this flush is going on pg_basebackup tries to do fsync() on a 
received WAL file and gets blocked.

While in this state, i.e. when the dirty page count is high, the following are 
the results of pg_test_fsync:


# /usr/pgsql-9.2/bin/pg_test_fsync -f /backup/fsync_test
2 seconds per test
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
        open_datasync                 16.854 ops/sec
        fdatasync                     15.242 ops/sec
        fsync                          0.187 ops/sec
        fsync_writethrough                       n/a
        open_sync                     14.747 ops/sec

Compare file sync methods using two 8kB writes:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
        open_datasync                  6.137 ops/sec
        fdatasync                     14.899 ops/sec
        fsync                          0.007 ops/sec
        fsync_writethrough                       n/a
        open_sync                      1.450 ops/sec

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB
in different write open_sync sizes.)
         1 * 16kB open_sync write     13.486 ops/sec
         2 *  8kB open_sync writes     6.006 ops/sec
         4 *  4kB open_sync writes     3.446 ops/sec
         8 *  2kB open_sync writes     1.400 ops/sec
        16 *  1kB open_sync writes     0.859 ops/sec

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written
on a different descriptor.)
        write, fsync, close            0.009 ops/sec
        write, close, fsync            0.008 ops/sec

Non-Sync'ed 8kB writes:
        write                      99415.368 ops/sec


However, when backups are not running and the dirty page count is low, the 
results of the same test are:

# /usr/pgsql-9.2/bin/pg_test_fsync -f /backup/fsync_test
2 seconds per test
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
        open_datasync               1974.243 ops/sec
        fdatasync                   1410.804 ops/sec
        fsync                        181.129 ops/sec
        fsync_writethrough                       n/a
        open_sync                    547.389 ops/sec

Compare file sync methods using two 8kB writes:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
        open_datasync                290.109 ops/sec
        fdatasync                    962.378 ops/sec
        fsync                        158.987 ops/sec
        fsync_writethrough                       n/a
        open_sync                    642.309 ops/sec

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB
in different write open_sync sizes.)
         1 * 16kB open_sync write   1014.456 ops/sec
         2 *  8kB open_sync writes   627.964 ops/sec
         4 *  4kB open_sync writes   340.313 ops/sec
         8 *  2kB open_sync writes   173.581 ops/sec
        16 *  1kB open_sync writes   103.236 ops/sec

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written
on a different descriptor.)
        write, fsync, close          244.670 ops/sec
        write, close, fsync          207.248 ops/sec

Non-Sync'ed 8kB writes:
        write                     202216.900 ops/sec



From: Haribabu Kommi [kommi.harib...@gmail.com]
Sent: Monday, March 10, 2014 1:42 AM
To: Aggarwal, Ajay
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] replication timeout in pg_basebackup

On Mon, Mar 10, 2014 at 12:52 PM, Aggarwal, Ajay wrote:
Our environment: Postgres version 9.2.2 running on CentOS 6.4

Our backups using pg_basebackup are frequently failing with following error

"pg_basebackup: could not send feedback packet: server closed the connection 
unexpectedly
This probably means the server terminated abnormally
before or while processing the request."

We are invoking pg_basebackup with these arguments : pg_basebackup -D 
backup_dir -X stream -l backup_dir

In postgres logs we see this log message "terminating walsender process due to 
replication timeout".

Our replication timeout is the default 60 seconds. If we increase the 
replication timeout to, say, 180 seconds, we see better results, but backups 
still fail occasionally.

Re: [GENERAL] libpq - lack of support to set the fetch size

2014-03-10 Thread Daniel Verite
matshyeq wrote:

> The only solution is CURSOR-based, which I find an awkward low-level hack
> compared to the elegant option supported by a native library.

That's not the only solution, even with the current DBD::Pg
you could do:

$dbh->do("COPY (sql-squery) TO STDOUT");
my $data;
while ($dbh->pg_getcopydata($data) >= 0) {
# process $data
}

The results would be streamed as opposed to being accumulated in
memory. Also COPY is optimized for high performance.

The drawback is you'd have to parse $data according to the
specific rules of the COPY format, which may be easy or
not-so-easy depending on the actual data, numeric or text
or other, whether it has NULLs, backslashes and so on.

Best regards,
-- 
Daniel
PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Extracting data from the view to retrieve the foreign key is not declared

2014-03-10 Thread nill
Given a view, I need to extract the tables and the join columns (from the ON
clause). I need to do this analysis because, from the view definition (the join
and WHERE conditions), I can infer a foreign key that is not declared.

Example:

CREATE OR REPLACE VIEW x_customer AS 
SELECT a.asset_id, a.client_id
FROM asset_d ad JOIN asset a ON a.asset_id = ad.asset_id

Result: 

NAMEVIEW   | TABLE1  | COLUMN1  | TABLE2 | COLUMN2
x_customer | asset_d | asset_id | asset  | asset_id
x_customer | asset   | -        | -      | -

OR 

CREATE OR REPLACE VIEW x_customer AS 
 SELECT a.asset_id, a.client_id, a.org_id, a.isactive, a.created,
        a.createdby, a.updated, a.updatedby, a.value, a.name, a.description,
        a.ad_user_id,
        ( SELECT count(*) AS count
            FROM asset_d ad
           WHERE a.asset_id = ad.asset_id) AS deliverycount
   FROM asset a
  WHERE a.bpartner_id IS NOT NULL;

Result: 

NAMEVIEW   | TABLE1  | COLUMN1  | TABLE2 | COLUMN2
x_customer | asset_d | asset_id | asset  | asset_id

Can someone help me?
Thanks




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Extracting-data-from-the-view-to-retrieve-the-foreign-key-is-not-declared-tp5795408.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Playing with 9.4devel - unnest

2014-03-10 Thread bricklen
On Mon, Mar 10, 2014 at 10:16 AM, Tim Kane  wrote:

> But what I really wanted to do, is unnest multiple sets of array values as
> returned from a table/query..
>

Craig Ringer posted an interesting answer to a somewhat related question a
few months ago on Stack Overflow: http://stackoverflow.com/a/17646605
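
On 9.4devel itself, a rough sketch of what that looks like (assuming a table z
with two integer[] columns foo and bar, as in the example elsewhere in this
thread) is to move the multi-argument unnest into the FROM clause:

SELECT u.a, u.b
  FROM z, unnest(z.foo, z.bar) AS u(a, b);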


Re: [GENERAL] Playing with 9.4devel - unnest

2014-03-10 Thread Tim Kane

I think I may have misunderstood the use case of this..

I can do the following:


select * from unnest(array[1,2,3], array[1,3,4]);
 unnest | unnest
--------+--------
      1 |      1
      2 |      3
      3 |      4
(3 rows)


But what I really wanted to do, is unnest multiple sets of array values as
returned from a table/query..

Eg:


xml_test=# create temp table z (foo integer[], bar integer[]);
CREATE TABLE

xml_test=# insert into z values (array[1,2,3], array[4,5,6]);
INSERT 0 1

xml_test=# select * from z;
   foo   |   bar
---------+---------
 {1,2,3} | {4,5,6}
(1 row)

xml_test=# select * from unnest (select foo, bar from z);
ERROR:  syntax error at or near "select"
LINE 1: select * from unnest (select foo, bar from z);
  ^

xml_test=# select * from unnest (select * from z);
ERROR:  syntax error at or near "select"
LINE 1: select * from unnest (select * from z);
  ^


xml_test=#  select (array[1,2,3], array[4,5,6]);
          row          
-----------------------
 ("{1,2,3}","{4,5,6}")
(1 row)



xml_test=# select row(foo,bar) from z;
          row          
-----------------------
 ("{1,2,3}","{4,5,6}")
(1 row)



xml_test=# select * from unnest(array[1,2,3], array[1,3,4]);
 unnest | unnest
--------+--------
      1 |      1
      2 |      3
      3 |      4
(3 rows)


xml_test=# select * from unnest ( select row(foo,bar) from z );
ERROR:  syntax error at or near "select"
LINE 1: select * from unnest ( select row(foo,bar) from z );
   ^


xml_test=# select * from unnest ( (select row(foo,bar) from z) );
ERROR:  function unnest(record) does not exist
LINE 1: select * from unnest ( (select row(foo,bar) from z) );
  ^
HINT:  No function matches the given name and argument types. You might need
to add explicit type casts.


Any suggestions? Or should the parser be allowing a subquery as a parameter
to unnest?


Tim



From:  Tim Kane 
Date:  Monday, 10 March 2014 15:26
To:  Tom Lane 
Cc:  pgsql-general General 
Subject:  Re: [GENERAL] Playing with 9.4devel - unnest


Hmm. So it is. 
My bad, thanks Tom.

I hadn’t noticed the documentation where it clearly says "This is only
allowed in the FROM clause".

xml_test=# select unnest(*) from (select array[1,2],array[1,2,3]) foo;
ERROR:  function unnest() does not exist


And, yes.. I was expecting the function signature to change. Thanks for
setting me straight.

Tim



From:  Tom Lane 
Date:  Monday, 10 March 2014 15:10
To:  Tim Kane 
Cc:  pgsql-general General 
Subject:  Re: [GENERAL] Playing with 9.4devel - unnest

Tim Kane  writes:
>  I decided to have a play with postgresql-9.4devel as I wanted to explore the
>  functionality provided by
>  unnest (anyarray, anyarray [, ...])
>  I've taken the nightly snapshot, compiled, installed.. All good.  (Obtained
>  from http://ftp.postgresql.org/pub/snapshot/dev/ )
>  However it seems the expected multi-argument unnest function doesn't exist.

>  Have I missed something?

It's there:

regression=# select * from unnest(array[1,2], array[3,4]);
 unnest | unnest 
--------+--------
      1 |      3
      2 |      4
(2 rows)

If you were expecting this to change, it didn't:

regression=# \df unnest
                                 List of functions
   Schema   |  Name  | Result data type | Argument data types |  Type
------------+--------+------------------+---------------------+--------
 pg_catalog | unnest | SETOF anyelement | anyarray            | normal
(1 row)

because the new functionality arises from a parser transformation,
not from a simple function.

regards, tom lane





Re: [GENERAL] libpq - lack of support to set the fetch size

2014-03-10 Thread Jeff Janes
On Sun, Mar 9, 2014 at 6:43 AM, matshyeq  wrote:

> Hello,
>
> I've found an issue when I tried to implement fetching rows from a big table
> (2 million rows) in my app.
> Basically I can't find an elegant and easy way (other than always using
> cursors) to limit the number of rows returned.
> This causes my application to break due to excessive memory
> consumption.
>
> I'm using Perl and DBD::Pg library but contacted maintainer who actually
> pointed out this is an issue that goes much deeper (libpq):
>
> "Unfortunately, this is a limitation in the underlying driver (libpq)
> rather than DBD::Pg itself. There have been talks over the years of
> supporting this, but nothing concrete yet. Your best bet would be to ask
> about this on the Postgres lists"
>

I don't think this is correct.  First, DBD::Pg could get tricky and
automatically wrap your query in a cursor and then fetch from the cursor
behind the scenes.  I believe that this is what Python's module does for
you in some modes.  Second, the feature needed to do this without even
using a cursor was added 1.5 years ago (PQsetSingleRowMode).  DBD::Pg has
simply not been taught how to use it yet.

The first strategy could probably be done purely in Perl, the second would
require changes to the C parts of DBD::Pg.
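
For reference, using single-row mode from C looks roughly like this (a sketch
against the documented libpq API; the connection string and query are made up):

#include <libpq-fe.h>

/* Sketch: stream a large result row by row with PQsetSingleRowMode (9.2+). */
void stream_bigtable(void)
{
    PGconn *conn = PQconnectdb("dbname=test");

    if (PQsendQuery(conn, "SELECT * FROM bigtable") && PQsetSingleRowMode(conn))
    {
        PGresult *res;

        while ((res = PQgetResult(conn)) != NULL)
        {
            if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
            {
                /* exactly one row per result in this mode; e.g. read the
                 * first column with PQgetvalue(res, 0, 0) */
            }
            /* the final PGRES_TUPLES_OK result marks the end of the set */
            PQclear(res);
        }
    }
    PQfinish(conn);
}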

Of course just because it can be implemented in DBD::Pg doesn't mean anyone
has an obligation to do it.  You could speed that along by contributing the
code yourself.  But I would say the ball is firmly in DBD::Pg's court.


Cheers,

Jeff


Re: [GENERAL] 9.1.11 - many backends in "semtimedop" syscall

2014-03-10 Thread Thom Brown
On 10 March 2014 15:32, hubert depesz lubaczewski  wrote:
> On Thu, Mar 06, 2014 at 06:03:54PM +0100, hubert depesz lubaczewski wrote:
>> On Thu, Mar 06, 2014 at 12:02:50PM -0500, Tom Lane wrote:
>> > hubert depesz lubaczewski  writes:
>> > > I didn't have a chance to do it. Can try if there is a way to get trace
>> > > *without* making core (sorry, my c/gdb knowledge is very, very limited).
>> >
>> > Sure, you just attach to the process:
>> >
>> > $ gdb /path/to/postgres PID-of-process
>> > gdb> bt
>> > gdb> quit
>> >
>> > This is usually preferable to forcing a core dump.
>>
>> Thank you. If the problem will strike again, I will do it on all (or
>> most, depending how fast I can make it) backends.
>
> The problem did happen again, and we were able to find a fix (I think).
> For some reason we had a table with over 50,000 (yes, 50 thousand)
> indexes on it. This table was a bucardo internals table, so maybe it was
> something in bucardo (we are using it to migrate hundreds of tables to
> another machine, so maybe it has something to do with that).

This sort of thing is the reason why I'd want to see index maintenance
nodes in explain (analyse) plans, so that it's possible to gauge their
contribution to the overall duration of a DML statement.
-- 
Thom


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 9.1.11 - many backends in "semtimedop" syscall

2014-03-10 Thread hubert depesz lubaczewski
On Thu, Mar 06, 2014 at 06:03:54PM +0100, hubert depesz lubaczewski wrote:
> On Thu, Mar 06, 2014 at 12:02:50PM -0500, Tom Lane wrote:
> > hubert depesz lubaczewski  writes:
> > > I didn't have a chance to do it. Can try if there is a way to get trace
> > > *without* making core (sorry, my c/gdb knowledge is very, very limited).
> > 
> > Sure, you just attach to the process:
> > 
> > $ gdb /path/to/postgres PID-of-process
> > gdb> bt
> > gdb> quit
> > 
> > This is usually preferable to forcing a core dump.
> 
> Thank you. If the problem will strike again, I will do it on all (or
> most, depending how fast I can make it) backends.

The problem did happen again, and we were able to find a fix (I think).
For some reason we had a table with over 50,000 (yes, 50 thousand)
indexes on it. This table was a bucardo internals table, so maybe it was
something in bucardo (we are using it to migrate hundreds of tables to
another machine, so maybe it has something to do with that).

Anyway - after removing obsolete indexes there - the problem is gone.
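
As an aside, a quick way to spot tables that have accumulated an unusual
number of indexes is a query along these lines (just a sketch):

SELECT c.relname AS table_name, count(*) AS index_count
  FROM pg_index i
  JOIN pg_class c ON c.oid = i.indrelid
 GROUP BY c.relname
 ORDER BY count(*) DESC
 LIMIT 10;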

Best regards,

depesz





Re: [GENERAL] Playing with 9.4devel - unnest

2014-03-10 Thread Tim Kane

Hmm. So it is. 
My bad, thanks Tom.

I hadn’t noticed the documentation where it clearly says "This is only
allowed in the FROM clause".

xml_test=# select unnest(*) from (select array[1,2],array[1,2,3]) foo;
ERROR:  function unnest() does not exist


And, yes.. I was expecting the function signature to change. Thanks for
setting me straight.

Tim



From:  Tom Lane 
Date:  Monday, 10 March 2014 15:10
To:  Tim Kane 
Cc:  pgsql-general General 
Subject:  Re: [GENERAL] Playing with 9.4devel - unnest

Tim Kane  writes:
>  I decided to have a play with postgresql-9.4devel as I wanted to explore the
>  functionality provided by
>  unnest (anyarray, anyarray [, ...])
>  I've taken the nightly snapshot, compiled, installed.. All good.  (Obtained
>  from http://ftp.postgresql.org/pub/snapshot/dev/ )
>  However it seems the expected multi-argument unnest function doesn't exist.

>  Have I missed something?

It's there:

regression=# select * from unnest(array[1,2], array[3,4]);
 unnest | unnest 
--------+--------
      1 |      3
      2 |      4
(2 rows)

If you were expecting this to change, it didn't:

regression=# \df unnest
                                 List of functions
   Schema   |  Name  | Result data type | Argument data types |  Type
------------+--------+------------------+---------------------+--------
 pg_catalog | unnest | SETOF anyelement | anyarray            | normal
(1 row)

because the new functionality arises from a parser transformation,
not from a simple function.

regards, tom lane





Re: [GENERAL] libpq - lack of support to set the fetch size

2014-03-10 Thread Tom Lane
matshyeq  writes:
> If 'SingleRowMode' goes row-by-row then again it's not a solution,
> especially given that this particular issue applies to rather large row
> sets.

Perhaps you should actually experiment with that solution instead of
rejecting it out of hand.  Or at least RTFM about it.

It does have limitations: you can't interleave fetching of different
large query results.  But I don't have a problem telling people they
ought to use cursors for such cases.
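
For the archives, the cursor approach being referred to is just this (a sketch,
with a made-up query):

BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM bigtable;
FETCH FORWARD 1000 FROM big_cur;   -- repeat until it returns no rows
CLOSE big_cur;
COMMIT;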

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] libpq - lack of support to set the fetch size

2014-03-10 Thread Adrian Klaver

On 03/10/2014 04:51 AM, matshyeq wrote:

Albe Laurenz wrote:


I would believe the stackoverflow
(http://stackoverflow.com/questions/21960121/perl-script-fails-when-selecting-data-from-big-postgresql-table)
question referred to explains the issue well.


You can retrieve the full result set,

not an option because of client memory limitations (in this case it's
poor client spec but there always are some, especially when you want to
pull 1e7 rows)


you can retrieve it row by row,

not an option because of performance (db calls/network roundtrips)


you can use a LIMIT clause to retrieve it in batches.

you pointed the best why it's not a feasible option (complexity,
isolation levels, not always possible ie. when custom query and last but
not least: far from being elegant)


CURSOR option

As already explained at stackoverflow - I'm using it as a workaround. My
general point is that it forces developers to use lower-level communication
with the DB (cursors), and is therefore not as elegant as just setting the
RowCacheSize parameter as specified by DBI. According to the DBD::Pg
maintainer this hasn't been and can't be implemented for PostgreSQL due to
the lack of support in its own libpq library.
So again, I'm really surprised this functionality is not yet supported
in PostgreSQL. Does that mean everybody has been implementing this
through cursors?

To recap what's on stackoverflow - the functionality I'm talking about
would be an equivalent of the JDBC setFetchSize() function, to optimize the
load from (any) database in batches, like in the example below:

  Statement st = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
                                      java.sql.ResultSet.CONCUR_READ_ONLY);

  // Set the fetch size to 1000.
  st.setFetchSize(1000);

  // Execute the given sql query
  String sql = "select * from bigtable";
  ResultSet rs = st.executeQuery(sql);

  while (rs.next()) {
       ⋮
  }


where, underneath, ResultSet.next() doesn't actually fetch one row at a
time from the server-side result set. It returns rows from the (local)
row set and transparently fetches the next row set whenever the local
one becomes exhausted.

Actually, curious now if this functionality has been implemented in
PostgreSQL JDBC drivers...?


Yes, using a cursor.

http://jdbc.postgresql.org/documentation/92/query.html

By default the driver collects all the results for the query at once. 
This can be inconvenient for large data sets so the JDBC driver provides 
a means of basing a ResultSet on a database cursor and only fetching a 
small number of rows.
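
To make that concrete, cursor-based fetching in the JDBC driver is enabled
roughly like this (a sketch based on the driver documentation; the fetch size
of 50 is only an example, and 'conn' is assumed to be an open Connection):

conn.setAutoCommit(false);              // cursor-based fetch needs a transaction
Statement st = conn.createStatement();  // forward-only ResultSet by default
st.setFetchSize(50);                    // rows fetched per round trip
ResultSet rs = st.executeQuery("SELECT * FROM bigtable");
while (rs.next()) {
    // process one row
}
rs.close();
st.close();
conn.commit();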





Anyway, according to one of the DBD::Pg developers it's impossible to
bring this functionality as the problem lies deeper, within libpq library:

"Unfortunately, this is a limitation in the underlying driver (libpq)
rather than DBD::Pg itself. There have been talks over the years of
supporting this, but nothing concrete yet."

So probably the best is to ask Greg to speak to details if still unclear.

Kind Regards,
Maciek



On Mon, Mar 10, 2014 at 9:42 AM, Albe Laurenz wrote:

matshyeq wrote:
 > Postgresql is there for a good while perceived as one of the best
(or just simply the best!?)
 > available open source DB solution, so I'm really surprised this
functionality is not yet supported...

You can retrieve the full result set,
you can retrieve it row by row,
you can use a LIMIT clause to retrieve it in batches.

Can you explain how exactly the functionality would look that
you are missing?

Yours,
Laurenz Albe




--
Thank you,
Kind Regards
~Maciek



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Playing with 9.4devel - unnest

2014-03-10 Thread Tom Lane
Tim Kane  writes:
> I decided to have a play with postgresql-9.4devel as I wanted to explore the
> functionality provided by
> unnest (anyarray, anyarray [, …])
> I’ve taken the nightly snpahost, compiled, installed.. All good.  (Obtained
> from http://ftp.postgresql.org/pub/snapshot/dev/ )
> However it seems the expected multi-argument unnest function doesn’t exist.

> Have I missed something?

It's there:

regression=# select * from unnest(array[1,2], array[3,4]);
 unnest | unnest 
+
  1 |  3
  2 |  4
(2 rows)

If you were expecting this to change, it didn't:

regression=# \df unnest
   List of functions
   Schema   |  Name  | Result data type | Argument data types |  Type  
++--+-+
 pg_catalog | unnest | SETOF anyelement | anyarray| normal
(1 row)

because the new functionality arises from a parser transformation,
not from a simple function.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] libpq - lack of support to set the fetch size

2014-03-10 Thread matshyeq
>Albe Laurenz wrote:

I would believe the stackoverflow (
http://stackoverflow.com/questions/21960121/perl-script-fails-when-selecting-data-from-big-postgresql-table)
question referred to explains the issue well.

> You can retrieve the full result set,
not an option because of client memory limitations (in this case it's a poor
client spec, but there is always some limit, especially when you want to pull
1e7 rows)

> you can retrieve it row by row,
not an option because of performance (db calls/network roundtrips)

> you can use a LIMIT clause to retrieve it in batches.
you pointed out best why it's not a feasible option (complexity, isolation
levels, not always possible, e.g. with a custom query, and last but not least:
far from elegant)

> CURSOR option
As already explained at stackoverflow - I'm using it as a workaround. My
general point is that it forces developers to use lower-level communication
with the DB (cursors), and is therefore not as elegant as just setting the
RowCacheSize parameter as specified by DBI. According to the DBD::Pg
maintainer this hasn't been and can't be implemented for PostgreSQL due to the
lack of support in its own libpq library.
So again, I'm really surprised this functionality is not yet supported in
PostgreSQL. Does that mean everybody has been implementing this through
cursors?

To recap what's on stackoverflow - the functionality I'm talking about
would be an equivalent of the JDBC setFetchSize() function, to optimize the
load from (any) database in batches, like in the example below:

 Statement st = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
                                     java.sql.ResultSet.CONCUR_READ_ONLY);

 // Set the fetch size to 1000.
 st.setFetchSize(1000);

 // Execute the given sql query
 String sql = "select * from bigtable";
 ResultSet rs = st.executeQuery(sql);

 while (rs.next()) {
      ⋮
 }

where, underneath, ResultSet.next() doesn't actually fetch one row at a time
from the server-side result set. It returns rows from the (local) row set and
transparently fetches the next row set whenever the local one becomes exhausted.

Actually, curious now if this functionality has been implemented in
PostgreSQL JDBC drivers...?

Anyway, according to one of the DBD::Pg developers it's impossible to bring
this functionality as the problem lies deeper, within libpq library:

"Unfortunately, this is a limitation in the underlying driver (libpq)
rather than DBD::Pg itself. There have been talks over the years of
supporting this, but nothing concrete yet."

So probably the best is to ask Greg to speak to details if still unclear.

Kind Regards,
Maciek



On Mon, Mar 10, 2014 at 9:42 AM, Albe Laurenz wrote:

> matshyeq wrote:
> > Postgresql is there for a good while perceived as one of the best (or
> just simply the best!?)
> > available open source DB solution, so I'm really surprised this
> functionality is not yet supported...
>
> You can retrieve the full result set,
> you can retrieve it row by row,
> you can use a LIMIT clause to retrieve it in batches.
>
> Can you explain how exactly the functionality would look that
> you are missing?
>
> Yours,
> Laurenz Albe
>



-- 
Thank you,
Kind Regards
~Maciek


Re: [GENERAL] libpq - lack of support to set the fetch size

2014-03-10 Thread matshyeq
Fully agree with Laurenz.
LIMIT in some (limited!) cases could be seen as a workaround but it's far
from being elegant (what if your end user types the query?)
If 'SingleRowMode' goes row-by-row then again it's not a solution,
especially given that this particular issue applies to rather large row
sets.
The only solution is CURSOR-based, which I find an awkward low-level hack
compared to the elegant option supported by a native library.

PostgreSQL has for a good while been perceived as one of the best (or just
simply the best!?) available open source DB solutions, so I'm really
surprised this functionality is not yet supported...



On Mon, Mar 10, 2014 at 6:58 AM, Albe Laurenz wrote:

> Daniel Verite wrote:
> > matshyeq wrote:
>
> [ runs out of memory on the client because all results from a large query
> are retrieved at once ]
>
> >> "Unfortunately, this is a limitation in the underlying driver (libpq)
> rather
> >> than DBD::Pg itself. There have been talks over the years of supporting
> >> this, but nothing concrete yet. Your best bet would be to ask about
> this on
> >> the Postgres lists"
> >>
> >> Would you consider putting this on the roadmap, so one day it gets
> improved?
> >
> > This improvement seems to have actually been made since 9.2 with
> > the PQsetSingleRowMode() function:
> >
> > http://postgresql.org/docs/current/static/libpq-single-row-mode.html
>
> Yes, DBD::Pg could be improved to make use of that; the problem is probably
> that the code would have to differentiate between PostgreSQL versions.
>
> Your solution with using
>SELECT ... OFFSET ? LIMIT 1
> in a loop is bound to suck.
>
> First of all, there is no guarantee that the rows will be returned in
> the same order each time, see for example
>
> http://www.postgresql.org/docs/current/static/runtime-config-compatible.html#GUC-SYNCHRONIZE-SEQSCANS
> Also, unless you operate with an isolation level higher than READ
> COMMITTED,
> the various SELECTs could operate on different data sets.
>
> So you are likely to end up with incorrect results sooner or later
> if you use OFFSET and LIMIT without an ORDER BY clause.
>
> Then you will have really bad performance, especially with a large table,
> because each SELECT statement will have to start scanning the table again.
> The complexity will rise from O(n) to O(n^2).
>
> You can improve on this by using ORDER BY with an index and remembering
> the last returned row (get and read http://sql-performance-explained.com/
> ).
>
> Finally, you will have a client-server round trip for each row returned.
> This is a problem you would also have when using PQsetSingleRowMode().
>
> Yours,
> Laurenz Albe
>



-- 
Thank you,
Kind Regards
~Maciek


[GENERAL] Playing with 9.4devel - unnest

2014-03-10 Thread Tim Kane
Hi all,

I decided to have a play with postgresql-9.4devel as I wanted to explore the
functionality provided by

unnest (anyarray, anyarray [, …])


I’ve taken the nightly snpahost, compiled, installed.. All good.  (Obtained
from http://ftp.postgresql.org/pub/snapshot/dev/ )
However it seems the expected multi-argument unnest function doesn’t exist.

Have I missed something?

Cheers,

Tim




Re: [NOVICE][GENERAL] Convert Datum to an user-defined data type?

2014-03-10 Thread Amit Langote
On Mon, Mar 10, 2014 at 11:09 PM, Anh Pham  wrote:
> Hi everyone,
> I have created a new data types using SQL in a database:
>
> CREATE TYPE type1 AS (
> score integer,
> time integer[]
> );
>
> I also created a server extension module (C code), and I was able to
> retrieve a Datum value of type1 from the database (using SPI_ functions)
> My question is: how can I convert that Datum value back to type1 data
> structure (in C code), so I can easily deal with score and time values?
>

I wonder if you read section 35.9.8 about using "funcapi.h" here:
http://www.postgresql.org/docs/9.1/static/xfunc-c.html

I suggest you read more about the following functions and how they are
to be used (quoting brief descriptions from the page here):

* To get a TupleDesc based on a type OID, call:
TupleDesc TypeGetTupleDesc(Oid typeoid, List *colaliases)

* Once you have a TupleDesc, call:
TupleDesc BlessTupleDesc(TupleDesc tupdesc)

* With the blessed tupdesc, given user data in Datum form, to build a
HeapTuple, use:
HeapTuple heap_form_tuple(TupleDesc tupdesc, Datum *values, bool *isnull)
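
Going the other direction (pulling the fields out of a composite Datum you
already have), a minimal sketch could look like the following; the attribute
names and the int4 element handling come from your type definition, the rest
is standard boilerplate and only illustrative:

#include "postgres.h"
#include "fmgr.h"
#include "executor/executor.h"   /* GetAttributeByName */
#include "utils/array.h"         /* DatumGetArrayTypeP, deconstruct_array */
#include "catalog/pg_type.h"     /* INT4OID */

static void
inspect_type1(Datum composite)
{
    HeapTupleHeader t = DatumGetHeapTupleHeader(composite);
    bool            isnull;
    Datum           d;
    int32           score = 0;

    d = GetAttributeByName(t, "score", &isnull);
    if (!isnull)
        score = DatumGetInt32(d);

    d = GetAttributeByName(t, "time", &isnull);
    if (!isnull)
    {
        ArrayType *arr = DatumGetArrayTypeP(d);
        Datum     *elems;
        bool      *nulls;
        int        nelems;

        /* int4 elements: length 4, pass-by-value, 'i' alignment */
        deconstruct_array(arr, INT4OID, 4, true, 'i',
                          &elems, &nulls, &nelems);
        /* each element is then DatumGetInt32(elems[i]) */
    }

    elog(NOTICE, "score = %d", score);
}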

--
Amit


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[NOVICE][GENERAL] Convert Datum to an user-defined data type?

2014-03-10 Thread Anh Pham
Hi everyone,
I have created a new data types using SQL in a database:

CREATE TYPE type1 AS (
score integer,
time integer[]
);

I also created a server extension module (C code), and I was able to
retrieve a Datum value of type1 from the database (using SPI_ functions).
My question is: how can I convert that Datum value back to the type1 data
structure (in C code), so I can easily deal with the score and time values?

typedef struct type1 {
    int score;
    int *time;
} type1;

PS: I really don't want to have to go through cstring conversions in order to
get the score and time attributes.

Many thanks :)


Re: [GENERAL] libpq - lack of support to set the fetch size

2014-03-10 Thread Marko Kreen
On Mon, Mar 10, 2014 at 06:58:26AM +, Albe Laurenz wrote:
> Daniel Verite wrote:
> > matshyeq wrote:
> 
> [ runs out of memory on the client because all results from a large query are 
> retrieved at once ]
> 
> >> "Unfortunately, this is a limitation in the underlying driver (libpq) 
> >> rather
> >> than DBD::Pg itself. There have been talks over the years of supporting
> >> this, but nothing concrete yet. Your best bet would be to ask about this on
> >> the Postgres lists"
> >>
> >> Would you consider putting this on the roadmap, so one day it gets 
> >> improved?
> > 
> > This improvement seems to have actually been made since 9.2 with
> > the PQsetSingleRowMode() function:
> > 
> > http://postgresql.org/docs/current/static/libpq-single-row-mode.html

> Finally, you will have a client-server round trip for each row returned.
> This is a problem you would also have when using PQsetSingleRowMode().

PQsetSingleRowMode() does not do additional roudtrips, it loads rows
from libpq internal buffer.

-- 
marko



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] execute table query in backend

2014-03-10 Thread Anh Pham
Thank you for your suggestion.
I actually took a look at the SPI_ functions before. However, I believe using
them will bring quite an overhead.
I am trying to modify the sequential scan node so that each time the scan
returns a tuple, we take some information from that tuple and finally issue
another scan operation on a different table based on that information.
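
(For reference, the SPI route under discussion looks roughly like the sketch
below; the table and column names are made up for illustration.)

#include "postgres.h"
#include "executor/spi.h"

/* Sketch: run a read-only query through SPI and walk the result rows. */
static void
lookup_other_table(void)
{
    int ret;

    SPI_connect();
    ret = SPI_execute("SELECT id FROM other_table WHERE flag", true, 0);

    if (ret == SPI_OK_SELECT)
    {
        uint64 i;

        for (i = 0; i < SPI_processed; i++)
        {
            /* each row is SPI_tuptable->vals[i], described by SPI_tuptable->tupdesc */
            char *val = SPI_getvalue(SPI_tuptable->vals[i],
                                     SPI_tuptable->tupdesc, 1);

            elog(DEBUG1, "got %s", val ? val : "NULL");
        }
    }
    SPI_finish();
}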


On Thu, Mar 6, 2014 at 12:36 AM, Amit Langote wrote:

> On Tue, Mar 4, 2014 at 12:38 AM, Anh Pham  wrote:
> > Hi,
> > I am trying to extend the server backend by writing a new module.
> > Basically, it tries to retrieve tuples from a specific table using some
> > predefined qualifications (equivalent to "SELECT FROM WHERE" client sql
> > statement ).
> > Is there any quick or efficient way to do this?
>
>
> Assuming you're trying to write a server extension (and NOT a client
> application), this is where you can start:
>
> http://www.postgresql.org/docs/9.3/static/server-programming.html
>
> Specifically, an interface called "server programming interface"
> (chapter 44 on the above page) can be used to issue SQLs from a server
> extension module. Read more about it here (documentation also includes
> few examples to get started with):
>
> http://www.postgresql.org/docs/9.3/static/spi.html
>
> --
> Amit
>



-- 
Anh T Pham
Computer Science
Worcester Polytechnic Institute


Re: [GENERAL] libpq - lack of support to set the fetch size

2014-03-10 Thread John R Pierce

On 3/9/2014 6:43 AM, matshyeq wrote:

Hello,

I've found an issue when I tried to implement fetching rows from a big 
table (2 million rows) in my app.
Basically I can't find an elegant and easy way (other than always using 
cursors) to limit the number of rows returned.
This causes my application to break due to excessive memory 
consumption.


I'm using Perl and DBD::Pg library but contacted maintainer who 
actually pointed out this is an issue that goes much deeper (libpq):


"Unfortunately, this is a limitation in the underlying driver (libpq) 
rather than DBD::Pg itself. There have been talks over the years of 
supporting this, but nothing concrete yet. Your best bet would be to 
ask about this on the Postgres lists"




in addition to what the others suggested, you can use a CURSOR to read 
through the results in arbitrarily sized blocks.


--
john r pierce  37N 122W
somewhere on the middle of the left coast



Re: [GENERAL] libpq - lack of support to set the fetch size

2014-03-10 Thread Albe Laurenz
matshyeq wrote:
> Postgresql is there for a good while perceived as one of the best (or just 
> simply the best!?)
> available open source DB solution, so I'm really surprised this functionality 
> is not yet supported...

You can retrieve the full result set,
you can retrieve it row by row,
you can use a LIMIT clause to retrieve it in batches.

Can you explain how exactly the functionality would look that
you are missing?

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general