Re: [GENERAL] PostgreSQL mirroring from RPM install to RPM install-revisited

2017-02-17 Thread Richard Brosnahan
Hi again Adrian,

Facepalm...

The master server was not installed by me. I was assured by the installer guy 
that it was version 9.4.1 and 64-bit.

Facepalm... I managed to get enough access to that server to discover they had 
installed the 32-bit version of PostgreSQL. Who knows why? This explains 
everything about my issues with the 64-bit PostgreSQL on the slave. It's 
difficult to get access to our servers, so try not to blame me and think "Why 
didn't he do that first?" Still, I should have tried harder to get access.

The PostgreSQL documentation clearly states that the two servers have to be 
the same architecture (both 32-bit or both 64-bit). Further, when searching 
Google for the errors I see, I find a number of people with similar issues, 
and they were fighting with 32-bit vs. 64-bit PostgreSQL builds.

I wasted a LOT of time trying to track this down. I'm sorry I wasted other 
people's time too. 

Anyhow, I uninstalled PostgreSQL on the slave and reinstalled the 32-bit 
version. Then I followed the instructions for setting up the slave, and it all 
works.
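
For anyone who hits the same wall: a quick way to compare the architecture of 
the binaries on both servers is something like

$ file /usr/pgsql-9.4/bin/postgres
/usr/pgsql-9.4/bin/postgres: ELF 64-bit LSB executable, x86-64 ...

A 32-bit build reports "ELF 32-bit LSB executable, Intel 80386" instead. The 
path shown is the PGDG default; adjust for your install.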

Plenty left to do, including setting up proper monitoring and documentation. 
It's great that we have a hot standby, but if nobody knows how to use it when 
the master goes away, it's not so great.
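
As a first cut at the monitoring, a couple of quick checks (privileges 
permitting):

slave:  SELECT pg_is_in_recovery();         -- should return t on a standby
master: SELECT * FROM pg_stat_replication;  -- should show the standby streaming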

THANK YOU for your assistance!


> On Feb 17, 2017, at 10:43 AM, Adrian Klaver  wrote:
> 
> On 02/16/2017 04:39 PM, Richard Brosnahan wrote:
>> Hi all,
>> 
>> Way back in December I posted a question about mirroring from an RPM
>> installed PostgreSQL (binary) to a source built PostgreSQL, with the
>> same version (9.4.1 --> 9.4.1). Both servers are running OEL6.
> 
> I went back to the previous threads and I could not find if you ever said 
> whether the two systems are using the same hardware architecture or not. 
> Vincent Veyron asked but I can't find a response.
> 
>> 
>> I won't copy the entire thread from before, as the situation has changed
>> a bit. The biggest changes are that I have root on the slave,
>> temporarily, and I've installed PostgreSQL on the slave using yum (also
>> binary).
>> 
>> I've followed all the instructions found here:
>> 
>> https://www.postgresql.org/docs/9.4/static/warm-standby.html#STREAMING-REPLICATION
>> 
>> 
>> The slave is running PostgreSQL 9.4.11 and was installed using yum.
>> It runs fine after I've run initdb and set things up. The master was
>> also installed from rpm binaries, but the installers used Puppet. That
>> version is 9.4.1. Yes, I know I should be using the exact same version,
>> but I couldn't find 9.4.1 in the PostgreSQL yum repo.
>> 
>> 
>> When I replace its data directory as part of the mirroring instructions,
>> using pg_basebackup, PostgreSQL won't start.
>> 
>> 
>> I get a checksum error, from pg_ctl.
>> 
>> < 2016-12-15 08:27:14.520 PST >FATAL: incorrect checksum in control file
>> 
>> 
>> Previously, Tom Lane suggested I try this:
>> 
>> You could try using pg_controldata to compare the pg_control contents;
>> 
>> it should be willing to print field values even if it thinks the checksum
>> 
>> is bad. It would be interesting to see (a) what the master's
>> 
>> pg_controldata prints about its pg_control, (b) what the slave's
>> 
>> pg_controldata prints about pg_control from a fresh initdb there, and
>> 
>> (c) what the slave's pg_controldata prints about the copied pg_control.
>> 
>> 
>> For Tom's requests (a and b), I can provide good output from
>> pg_controldata from the master with production data, and from the slave
>> right after initdb. I'll provide that on request.
>> 
>> 
>> For Tom's request (c), I get this from the slave, after data is copied.
>> 
>> $ pg_controldata
>> 
>> WARNING: Calculated CRC checksum does not match value stored in file.
>> 
>> Either the file is corrupt, or it has a different layout than this program
>> 
>> is expecting.  The results below are untrustworthy.
>> 
>> 
>> Segmentation fault (core dumped)
>> 
>> 
>> With this new installation on the slave, same result: core dump.
>> 
>> 
>> Tom Lane then suggested:
>> 
>> $ gdb path/to/pg_controldata
>> 
>> gdb> run /apps/database/postgresql-data
>> 
>> (wait for it to report segfault)
>> 
>> gdb> bt
>> 
>> 
>> Since I now have gdb, I can do that:
>> 
>> $ gdb /usr/pgsql-9.4/bin/pg_controldata
>> 
>> -bash: gdb: command not found

[GENERAL] Re: [GENERAL] PostgreSQL mirroring from RPM install to RPM install-revisited

2017-02-17 Thread Richard Brosnahan

Thanks for the response Adrian,

Both servers are pretty much identical. 

uname -a
master
Linux devtmbm178 2.6.32-642.6.2.el6.x86_64 #1 SMP Tue Oct 25 13:37:48 PDT 2016 
x86_64 x86_64 x86_64 GNU/Linux

slave
Linux devtmbm176 2.6.32-642.11.1.el6.x86_64 #1 SMP Tue Nov 15 09:40:59 PST 2016 
x86_64 x86_64 x86_64 GNU/Linux

Since the last message, I've downgraded PostgreSQL to 9.4.1 on the slave, using 
rpm -Uvh --oldpackage [file names]

I had wisely kept copies of the rpm files for PostgreSQL 9.4.1 for OEL6 and 
used those. rpm did the downgrade without issue, and I tested the 9.4.1 
PostgreSQL installation. The minimal testing I did after the install worked 
fine: initdb, start the server, psql, etc.
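
For anyone repeating this, a version check along these lines confirms what rpm 
actually installed:

$ /usr/pgsql-9.4/bin/pg_ctl --version
pg_ctl (PostgreSQL) 9.4.1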

I then stopped the new slave PostgreSQL instance, and proceeded with the 
instructions for creating a slave. 
I again used pg_basebackup:
postgres $ pg_basebackup -D /var/lib/pgsql/9.4/data --write-recovery-conf -h 
devtmbm178.unix.gsm1900.org -U pgrepuser -p 5432 -W

NOTICE:  pg_stop_backup complete, all required WAL segments have been archived

This executed without incident. 
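
For reference, --write-recovery-conf drops a recovery.conf into the data 
directory that looks roughly like this (connection options abbreviated):

standby_mode = 'on'
primary_conninfo = 'user=pgrepuser host=devtmbm178.unix.gsm1900.org port=5432'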

After verifying and modifying postgresql.conf and recovery.conf, I attempted 
to start PostgreSQL. This was, again, not successful.

postgres $ pg_ctl start
server starting
-bash-4.1$ < 2017-02-17 12:13:53.176 PST >FATAL:  incorrect checksum in control 
file

postgres $ pg_controldata
WARNING: Calculated CRC checksum does not match value stored in file.
Either the file is corrupt, or it has a different layout than this program
is expecting.  The results below are untrustworthy.

Segmentation fault (core dumped)

Now I'm really unhappy. Same server architecture, same PostgreSQL versions. No 
joy!


--

Richard Brosnahan

On Feb 17, 2017, at 10:43 AM, Adrian Klaver  wrote:

On 02/16/2017 04:39 PM, Richard Brosnahan wrote:
Hi all,

Way back in December I posted a question about mirroring from an RPM
installed PostgreSQL (binary) to a source built PostgreSQL, with the
same version (9.4.1 --> 9.4.1). Both servers are running OEL6.

I went back to the previous threads and I could not find if you ever 
said whether the two systems are using the same hardware architecture or 
not. Vincent Veyron asked but I can't find a response.



I won't copy the entire thread from before, as the situation has changed
a bit. The biggest changes are that I have root on the slave,
temporarily, and I've installed PostgreSQL on the slave using yum (also
binary).

I've followed all the instructions found here:

https://www.postgresql.org/docs/9.4/static/warm-standby.html#STREAMING-REPLICATION


The slave is running PostgreSQL 9.4.11 and was installed using yum.
It runs fine after I've run initdb and set things up. The master was
also installed from rpm binaries, but the installers used Puppet. That
version is 9.4.1. Yes, I know I should be using the exact same version,
but I couldn't find 9.4.1 in the PostgreSQL yum repo.


When I replace its data directory as part of the mirroring instructions,
using pg_basebackup, PostgreSQL won't start.


I get a checksum error, from pg_ctl.

< 2016-12-15 08:27:14.520 PST >FATAL: incorrect checksum in control file


Previously, Tom Lane suggested I try this:

You could try using pg_controldata to compare the pg_control contents;

it should be willing to print field values even if it thinks the checksum

is bad. It would be interesting to see (a) what the master's

pg_controldata prints about its pg_control, (b) what the slave's

pg_controldata prints about pg_control from a fresh initdb there, and

(c) what the slave's pg_controldata prints about the copied pg_control.


For Tom's requests (a and b), I can provide good output from
pg_controldata from the master with production data, and from the slave
right after initdb. I'll provide that on request.


For Tom's request (c), I get this from the slave, after data is copied.

$ pg_controldata

WARNING: Calculated CRC checksum does not match value stored in file.

Either the file is corrupt, or it has a different layout than this program

is expecting. The results below are untrustworthy.


Segmentation fault (core dumped)


With this new installation on the slave, same result: core dump.


Tom Lane then suggested:

$ gdb path/to/pg_controldata

gdb> run /apps/database/postgresql-data

(wait for it to report segfault)

gdb> bt


Since I now have gdb, I can do that:

$ gdb /usr/pgsql-9.4/bin/pg_controldata

-bash: gdb: command not found

-bash-4.1$ gdb /usr/pgsql-9.4/bin/pg_controldata

GNU gdb (GDB) Red Hat Enterprise Linux (7.2-90.el6)

Copyright (C) 2010 Free Software Foundation, Inc.

License GPLv3+: GNU GPL version 3 or later
<http://gnu.org/licenses/gpl.html>

This is free software: you are free to change and redistribute it.

There is NO WARRANTY, to the extent permitted by law. Type "show copying"

and "show warranty" for details.


[GENERAL] PostgreSQL mirroring from RPM install to RPM install-revisited

2017-02-17 Thread Richard Brosnahan

Hi all,

Way back in December I posted a question about mirroring from an RPM installed 
PostgreSQL (binary) to a source built PostgreSQL, with the same version (9.4.1 
--> 9.4.1). Both servers are running OEL6. 

I won't copy the entire thread from before, as the situation has changed a bit. 
The biggest changes are that I have root on the slave, temporarily, and I've 
installed PostgreSQL on the slave using yum (also binary).

I've followed all the instructions found here:
https://www.postgresql.org/docs/9.4/static/warm-standby.html#STREAMING-REPLICATION

The slave is running PostgreSQL 9.4.11 and was installed using yum. It runs 
fine after I've run initdb and set things up. The master was also installed 
from rpm binaries, but the installers used Puppet. That version is 9.4.1. Yes, 
I know I should be using the exact same version, but I couldn't find 9.4.1 in 
the PostgreSQL yum repo. 

When I replace its data directory as part of the mirroring instructions, using 
pg_basebackup, PostgreSQL won't start. I used pg_basebackup. 

I get a checksum error, from pg_ctl.
< 2016-12-15 08:27:14.520 PST >FATAL: incorrect checksum in control file

Previously, Tom Lane suggested I try this:
You could try using pg_controldata to compare the pg_control contents;
it should be willing to print field values even if it thinks the checksum
is bad. It would be interesting to see (a) what the master's
pg_controldata prints about its pg_control, (b) what the slave's
pg_controldata prints about pg_control from a fresh initdb there, and
(c) what the slave's pg_controldata prints about the copied pg_control.

For Tom's requests (a and b), I can provide good output from pg_controldata 
from the master with production data, and from the slave right after initdb. 
I'll provide that on request.

For Tom's request (c), I get this from the slave, after data is copied.
$ pg_controldata
WARNING: Calculated CRC checksum does not match value stored in file.
Either the file is corrupt, or it has a different layout than this program
is expecting.  The results below are untrustworthy.

Segmentation fault (core dumped)

With this new installation on the slave, same result: core dump.

Tom Lane then suggested:
$ gdb path/to/pg_controldata
gdb> run /apps/database/postgresql-data
(wait for it to report segfault)
gdb> bt

Since I now have gdb, I can do that:
$ gdb /usr/pgsql-9.4/bin/pg_controldata
-bash: gdb: command not found
-bash-4.1$ gdb /usr/pgsql-9.4/bin/pg_controldata
GNU gdb (GDB) Red Hat Enterprise Linux (7.2-90.el6)
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /usr/pgsql-9.4/bin/pg_controldata...(no debugging symbols 
found)...done.
Missing separate debuginfos, use: debuginfo-install 
postgresql94-server-9.4.11-1PGDG.rhel6.x86_64
(gdb) run /var/lib/pgsql/9.4/data
Starting program: /usr/pgsql-9.4/bin/pg_controldata /var/lib/pgsql/9.4/data
WARNING: Calculated CRC checksum does not match value stored in file.
Either the file is corrupt, or it has a different layout than this program
is expecting.  The results below are untrustworthy.


Program received signal SIGSEGV, Segmentation fault.
0x0033d20a3a15 in __strftime_internal () from /lib64/libc.so.6
(gdb) bt
#0  0x0033d20a3a15 in __strftime_internal () from /lib64/libc.so.6
#1  0x0033d20a5a36 in strftime_l () from /lib64/libc.so.6
#2  0x004015c7 in ?? ()
#3  0x0033d201ed1d in __libc_start_main () from /lib64/libc.so.6
#4  0x00401349 in ?? ()
#5  0x7fffe518 in ?? ()
#6  0x001c in ?? ()
#7  0x0002 in ?? ()
#8  0x7fffe751 in ?? ()
#9  0x7fffe773 in ?? ()
#10 0x in ?? ()
(gdb)

pg_controldata shouldn't be core dumping. 

Should I give up trying to use 9.4.1 and 9.4.11 as master/slave? 

My options appear to be:
1. Upgrade the master to 9.4.11, which will be VERY DIFFICULT given its Puppet 
install, and the difficulty I have getting root access to our servers.
2. Downgrade the slave. This is easier than option 1, but I would need to find 
a yum repo that has that version.
3. Make what I have work, somehow.

Any assistance would be greatly appreciated!
--

Richard Brosnahan

[GENERAL] Re: Re: [GENERAL] PostgreSQL mirroring from RPM install to Source install

2016-12-19 Thread Richard Brosnahan


gdb is not available on this machine. Neither which nor locate could find it. 
The servers we're given are truly, shockingly, stripped down models. 

At this point, I believe my best course of action is to twist some sys admin 
arms and get a properly installed PostgreSQL on this machine. Even if I manage 
to get the mirror working with the current install, there's a real risk 
something bad will happen. I can't trust this source built version.

Sigh

Thanks again for your help.
--

Richard Brosnahan

On Dec 15, 2016, at 01:00 PM, Tom Lane  wrote:

Richard Brosnahan writes:
The slave:
$ pg_controldata --version
pg_controldata (PostgreSQL) 9.4.1
$ echo $PGDATA
/apps/database/postgresql-data
$ pg_controldata $PGDATA
WARNING: Calculated CRC checksum does not match value stored in file.
Either the file is corrupt, or it has a different layout than this program
is expecting.  The results below are untrustworthy.

Segmentation fault (core dumped)

Wow --- a core dump in pg_controldata is *really* unexpected, since it's
just printing some scalar fields. I can easily believe garbage out, but
a crash is harder. I'm beginning to think that there's something wrong
with your hand-rolled build, though I have no idea what.

Can you get a stack trace out of that? Should be enough to do

$ gdb path/to/pg_controldata
gdb> run /apps/database/postgresql-data
(wait for it to report segfault)
gdb> bt

Understood that it's way easier to do the mirroring when the directory paths 
agree. In my case, I can't line them up, due to restrictions.

That doesn't seem to be your problem, at least not yet.

                   regards, tom lane


[GENERAL] Re: [GENERAL] PostgreSQL mirroring from RPM install to Source install

2016-12-19 Thread Richard Brosnahan

The slave:
$ pg_controldata --version
pg_controldata (PostgreSQL) 9.4.1
$ echo $PGDATA
/apps/database/postgresql-data
$ pg_controldata $PGDATA
WARNING: Calculated CRC checksum does not match value stored in file.
Either the file is corrupt, or it has a different layout than this program
is expecting.  The results below are untrustworthy.

Segmentation fault (core dumped)

The master:
$ echo $PGDATA
/var/lib/pgsql/9.4/data
$ pg_controldata --version
pg_controldata (PostgreSQL) 9.4.1
$ pg_controldata $PGDATA
pg_control version number:            942
Catalog version number:               201409291
Database system identifier:           6158049305757004233
Database cluster state:               in production
pg_control last modified:             Thu 15 Dec 2016 11:19:04 AM PST
Latest checkpoint location:           44/E8FDC2F0
Prior checkpoint location:            44/E8EEE07C
Latest checkpoint's REDO location:    44/E8F72A4C
Latest checkpoint's REDO WAL file:    0001004400E8
Latest checkpoint's TimeLineID:       1
Latest checkpoint's PrevTimeLineID:   1
Latest checkpoint's full_page_writes: on
Latest checkpoint's NextXID:          0/231975962
Latest checkpoint's NextOID:          602794
Latest checkpoint's NextMultiXactId:  86
Latest checkpoint's NextMultiOffset:  171
Latest checkpoint's oldestXID:        40676549
Latest checkpoint's oldestXID's DB:   36696
Latest checkpoint's oldestActiveXID:  231975962
Latest checkpoint's oldestMultiXid:   1
Latest checkpoint's oldestMulti's DB: 421289
Time of latest checkpoint:            Thu 15 Dec 2016 11:18:39 AM PST
Fake LSN counter for unlogged rels:   0/1
Minimum recovery ending location:     0/0
Min recovery ending loc's timeline:   0
Backup start location:                0/0
Backup end location:                  0/0
End-of-backup record required:        no
Current wal_level setting:            hot_standby
Current wal_log_hints setting:        off
Current max_connections setting:      200
Current max_worker_processes setting: 8
Current max_prepared_xacts setting:   0
Current max_locks_per_xact setting:   64
Maximum data alignment:               4
Database block size:                  8192
Blocks per segment of large relation: 131072
WAL block size:                       8192
Bytes per WAL segment:                16777216
Maximum length of identifiers:        64
Maximum columns in an index:          32
Maximum size of a TOAST chunk:        2000
Size of a large-object chunk:         2048
Date/time type storage:               64-bit integers
Float4 argument passing:              by value
Float8 argument passing:              by reference
Data page checksum version:           0

The seg fault on the slave looks suspicious. 

Understood that it's way easier to do the mirroring when the directory paths agree. In my case, I can't line them up, due to restrictions. 

It's really looking like I'll need to do something more drastic, like blackmail a sys admin to coerce him to install PostgreSQL on the slave using RPM. 


Thanks for the help!
--

Richard Brosnahan

On Dec 15, 2016, at 10:01 AM, Tom Lane  wrote:

Richard Brosnahan  writes:
I've got a PostgreSQL database server version 9.4.1 installed on an OEL 6 
server. I have a second PostgreSQL database server version 9.4.1 running on an 
identical OEL 6 server. The first PostgreSQL was installed by system admins 
using RPM. The second PostgreSQL was installed from source, using --prefix to 
set the user and path to the installation. I want to run a mirror on the second 
server.

I do not have root, and cannot get root on these systems. Also, the sys admins 
now refuse to install PostgreSQL anywhere. This is really annoying!

I've followed all the instructions found here:
https://www.postgresql.org/docs/9.4/static/warm-standby.html#STREAMING-REPLICATION

The slave, running PostgreSQL 9.4.1 built from source, runs fine after I've run 
initdb and set things up.
When I replace its data directory as part of the mirroring instructions, using 
pg_basebackup, PostgreSQL won't start.

I get a checksum error, from pg_ctl.
< 2016-12-15 08:27:14.520 PST >FATAL: incorrect checksum in control file

My guess is that despite the versions being the same, the binaries are 
different.

You could try using pg_controldata to compare the pg_control contents;
it should be willing to print field values even if it thinks the checksum
is bad. It would be interesting to see (a) what the master's
pg_controldata prints about its pg_control, (b) what the slave's
pg_controldata prints about pg_control from a fresh initdb there, and
(c) what the slave's pg_controldata prints about the copied pg_control.

I am a little suspicious about whether the PG versions are really the same.
There's a bug in the 9.5.x series where it will issue a checksum complaint
rather than a version-number complaint if started against a 9.4.x pg_control.

                   regards, tom lane


[GENERAL] PostgreSQL mirroring from RPM install to Source install

2016-12-15 Thread Richard Brosnahan

Good day to you!

I've got a PostgreSQL database server version 9.4.1 installed on an OEL 6 
server. I have a second PostgreSQL database server version 9.4.1 running on an 
identical OEL 6 server. The first PostgreSQL was installed by system admins 
using RPM. The second PostgreSQL was installed from source, using --prefix to 
set the user and path to the installation. I want to run a mirror on the second 
server.

I do not have root, and cannot get root on these systems. Also, the sys admins 
now refuse to install PostgreSQL anywhere. This is really annoying!

I've followed all the instructions found here:
https://www.postgresql.org/docs/9.4/static/warm-standby.html#STREAMING-REPLICATION

The slave, running PostgreSQL 9.4.1 built from source, runs fine after I've run initdb and set things up. 
When I replace its data directory as part of the mirroring instructions, using pg_basebackup, PostgreSQL won't start.


I get a checksum error, from pg_ctl. 
< 2016-12-15 08:27:14.520 PST >FATAL: incorrect checksum in control file


My guess is that despite the versions being the same, the binaries are different. 


Is there a way to make this work, or should I give up? I wonder if the sys 
admins take bribes...

THANKS in advance!!
--

Richard Brosnahan


Re: [GENERAL] Recovering data from an old disk image

2016-07-15 Thread Richard Kuhns

On 07/15/16 12:13, Adrian Klaver wrote:

On 07/15/2016 09:06 AM, Tom Lane wrote:

Richard Kuhns  writes:

I uninstalled 9.3 & installed the most recent 9.4. When I try to start
it, it tells me:



postgres[99770]: [1-1] FATAL:  database files are incompatible with
server
postgres[99770]: [1-2] DETAIL:  The database cluster was initialized
with PG_CONTROL_VERSION 937, but the server was compiled with
PG_CONTROL_VERSION 942.



Based on a search of the mailing list archives I'm guessing that the
original postgresql server was a 9.4 beta.


[ digs in commit logs... ]  Assuming that this actually was a release of
some sort, and not just a random git snapshot, it would have to have been
9.4beta1.  Betas later than that one used the newer PG_CONTROL_VERSION
value.  The catalog_version_no would provide a finer-grain dating, but
trying beta1 seems like a good bet.

I'm not sure if there are still tarballs of 9.4beta1 on our webservers,
but in any case you could easily check out that tag from our git server
to recover the matching source code.


If you go here:

https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=e6df2e1be6330660ba4d81daa726ae4a71535aa9


would not fetching the snapshot also work?



regards, tom lane




Thank you all! Fetching the snapshot worked just fine, and I have the 
missing table.


I'm also dumping the entire database so I don't have to worry about it 
again.


Thanks again,

- Richard
--
Richard Kuhns 
Wintek Corporation
427 N 6th Street
Lafayette, IN 47901-2211

Main:   765-742-8428
Direct: 765-269-8541




[GENERAL] Recovering data from an old disk image

2016-07-15 Thread Richard Kuhns

Greetings,

I need to recover some data from a disk image that was attached to a 
virtual machine that has since been deleted.


I'm 99.9% sure that postgres was stopped normally before this image was 
saved. Everything was supposed to have been migrated as part of an 
upgrade, but it seems that at least one of the original tables didn't 
get moved. I've been asked to recover it.


It was apparently decided that there was no reason to save the output of 
the pg_dump that was used for the upgrade, since they saved the original 
pgsql data directory.


I've copied the entire pgsql directory to a new machine & installed the 
most recent 9.3 to try to read it. When I start the server it tells me 
that the database was initialized by version 9.4, so it can't handle it.


I uninstalled 9.3 & installed the most recent 9.4. When I try to start 
it, it tells me:


postgres[99770]: [1-1] FATAL:  database files are incompatible with server
postgres[99770]: [1-2] DETAIL:  The database cluster was initialized 
with PG_CONTROL_VERSION 937, but the server was compiled with 
PG_CONTROL_VERSION 942.


Based on a search of the mailing list archives I'm guessing that the 
original postgresql server was a 9.4 beta.


I'd greatly appreciate any help in recovering this database. If anyone 
can tell me how to find the postgresql source to the 9.4 version that 
used PG_CONTROL_VERSION 937 that would be wonderful. If there's a 
simpler way to get at the data that would be even better.


Any and all help would be greatly appreciated.

Thanks in advance.

--
Richard Kuhns 
Wintek Corporation
427 N 6th Street
Lafayette, IN 47901-2211

Main:   765-742-8428
Direct: 765-269-8541




Re: [GENERAL] Whither recovery.conf?

2016-06-06 Thread Richard Tisch
On Mon, Jun 6, 2016 at 10:12 PM, Vik Fearing  wrote:
> On 06/06/16 15:07, Vik Fearing wrote:
>> It seems the commitfest link in there doesn't work anymore.  I should
>> probably bring that up in a separate thread.
>
> It's in the old commitfest app.  Here's a new link for it:
> https://commitfest-old.postgresql.org/action/patch_view?id=1293

Thanks!

Richard.




[GENERAL] Whither recovery.conf?

2016-06-06 Thread Richard Tisch
Hi there,

I was just wondering about the statement below in another thread:

2016-06-04 22:58 GMT+09:00 Vik Fearing :
> There are plans to allow SQL
> access to the parameters in recovery.conf (or to merge them into
> postgresql.conf) but that's not currently possible.

Are there any active plans or patches on this front? I did a bit of searching,
I see one thread from 2013 (this one:
https://www.postgresql.org/message-id/CAJKUy5id1eyweK0W4+yyCM6+-qYs9erLidUmb=1a-qybgtw...@mail.gmail.com),
but it's not clear where things are at the moment. Mind you, I might be 
searching in the wrong places and/or for the wrong terms; it's happened 
before, so apologies if I'm missing the obvious.

Thanks!

Richard.


-- 
Those who do not understand SQL are condemned to reinvent it, poorly




Re: [GENERAL] How to get good performance for very large lists/sets?

2014-10-06 Thread Richard Frith-Macdonald
On 6 Oct 2014, at 17:54, Igor Neyman  wrote:
> 
>> -Original Message-
>> From: pgsql-general-ow...@postgresql.org 
>> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Richard 
>> Frith-Macdonald
>> Sent: Monday, October 06, 2014 4:02 AM
>> To: pgsql-general@postgresql.org
>> Subject: [GENERAL] How to get good performance for very large lists/sets?
>> 
>> I'm wondering if anyone can help with advice on how to manage large 
>> lists/sets of items in a postgresql database.
>> 
>> I have a database which uses multiple lists of items roughly like this:
>> 
>> CREATE TABLE List (
>>  ID SERIAL,
>>  Name VARCHAR 
>> );
>> 
>> and a table containing individual entries in the lists:
>> 
>> CREATE TABLE ListEntry (
>>  ListID INT, /* Reference the List table */
>>  ItemID INT /* References an Item table */
>> ) ;
>> CREATE UNIQUE INDEX ListEntryIDX ON ListEntry(ListID, ItemID);
>> 
>> Now, there are thousands of lists, many with millions of entries, and items 
>> are added to and removed from lists in an unpredictable way (in response to 
>> our customer's actions, not under our control).  Lists are also created by 
>> customer actions.
>> 
>> Finding whether a particular item is in a particular list is reasonably 
>> fast, but when we need to do things like find all the items in list A but 
>> not list B, things can get very slow (particularly when both lists contain 
>> millions of common items).
>> 
>> I think the server won't use index-only scans because, even in cases where 
>> a particular list has not had any recent changes, the ListEntry table will 
>> almost always have had some change (for one of the other lists) since its 
>> last vacuum.
>> Perhaps creating multiple ListEntry tables (one for each list) would allow 
>> better performance; but that would be thousands (possibly tens of thousands) 
>> of tables, and allowing new tables to be created by our clients might 
>> conflict with things like nightly backups.
>> 
>> Is there a better way to manage list/set membership for many thousands of 
>> sets and many millions of items?
> 
> --
> 
> You mean you are getting sequential scans?
> Index-only scans are not always quicker (you could try "turning off" seq 
> scans by setting enable_seqscan=off).
> 
> Could you show your query, corresponding plans, and what you don't like 
> about them?

I guess I didn't express myself well.

No, I'm not particularly dissatisfied with any query plan; I have tried 
enabling/disabling different scan types to experiment, and have been able to 
get better results from the query planner with such tweaks in some cases (i.e. 
with specific datasets), but not consistently. Certainly the index is used 
quite often, and when it isn't, the query planner seems to be making 
reasonable decisions.
I've tried NOT IN and NOT EXISTS for different situations ... 

My fundamental problem is huge datasets; with hundreds of gigabytes of memory, 
I can have the lists basically in memory, and these queries seem to be 
CPU-limited ... so I'm searching for a way to minimise the work the CPU has 
to do.

So what I was wondering was whether this whole approach to set/list membership 
was the correct one to use, or if there's some other approach which can simply 
avoid the CPU having to look at so much data (which was why I wondered about 
index-only scans).





[GENERAL] How to get good performance for very large lists/sets?

2014-10-06 Thread Richard Frith-Macdonald
I'm wondering if anyone can help with advice on how to manage large lists/sets 
of items in a postgresql database.

I have a database which uses multiple lists of items roughly like this:

CREATE TABLE List (
  ID SERIAL,
  Name VARCHAR 
);

and a table containing individual entries in the lists:

CREATE TABLE ListEntry (
  ListID INT, /* Reference the List table */
  ItemID INT /* References an Item table */
) ;
CREATE UNIQUE INDEX ListEntryIDX ON ListEntry(ListID, ItemID);

Now, there are thousands of lists, many with millions of entries, and items are 
added to and removed from lists in an unpredictable way (in response to our 
customer's actions, not under our control).  Lists are also created by customer 
actions.

Finding whether a particular item is in a particular list is reasonably fast, 
but when we need to do things like find all the items in list A but not list 
B, things can get very slow (particularly when both lists contain millions of 
common items).
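
For concreteness, the slow case is a set difference along these lines (list 
IDs hypothetical):

SELECT ItemID FROM ListEntry WHERE ListID = 1
EXCEPT
SELECT ItemID FROM ListEntry WHERE ListID = 2;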

I think the server won't use index-only scans because, even in cases where a 
particular list has not had any recent changes, the ListEntry table will 
almost always have had some change (for one of the other lists) since its last 
vacuum.
Perhaps creating multiple ListEntry tables (one for each list) would allow 
better performance; but that would be thousands (possibly tens of thousands) of 
tables, and allowing new tables to be created by our clients might conflict 
with things like nightly backups.

Is there a better way to manage list/set membership for many thousands of sets 
and many millions of items?





Re: [GENERAL] client that supports editing views

2013-12-10 Thread Richard Broersma
On Mon, Dec 9, 2013 at 7:16 AM, Adrian Klaver wrote:


> Is there a timestamp field in the view? This sounds like an issue Access
> has with timestamp precision, where if you supply a timestamp that is too
> precise it has problems. See here for more detail:
>

Updatable views can be a challenge due to MS-Access optimistic locking
checks.  First, for each row updated by Access, MS-Access checks that each
field comes back the same as what it issued - any change will trigger a
roll-back.  Next, if the count of changed records does not match the count
that Access expects, it will roll back the changes.


-- 
Regards,
Richard Broersma Jr.


Re: [GENERAL] Need help with upsert

2013-12-04 Thread Richard Dunks

Hello,

On Dec 4, 2013, at 12:39 PM, Eric Lamer  wrote:

> Hi,
>  
>I need some help with upsert.
>  
>Some info on what I want to do:
>  
>Each day I create a table for my firewall logs. I have one entry for each 
> combination of all the fields except sum, which is the number of occurrences 
> I have of each log matching all the fields. My table has the following 
> fields: firewall,action,src_zone,src_ip,dst_zone,dst_ip,proto,port,hex1,hex2,sum
>   
>   Each day I want to copy the last 7 days into one table so I have one table 
> with the last 7 days of logs.
>  
>   So I want to copy the data from 7 tables into 1. If the row does not exist 
> I just insert, and if the row already exists I just update the sum (existing 
> sum + new sum).
>  
>   Public.test is the table I use for the last 7 days logs.
>   daily.daily_20131202 is table for 1 day.
>   I will run this command 7 times with different daily table.
>  
> WITH upsert as
> (update public.test T set 
> firewall=S.firewall,action=S.action,src_zone=S.src_zone,src_ip=S.src_ip,dst_zone=S.dst_zone,dst_ip=S.dst_ip,proto=S.proto,port=S.port,hex1=S.hex1,hex2=S.hex2,sum=T.sum+S.sum
>  from daily.daily_20131202 S where (T.firewall=S.firewall and 
> T.action=S.action and T.src_zone=S.src_zone and T.src_ip=S.src_ip and 
> T.dst_zone=S.dst_zone and T.dst_ip=S.dst_ip and T.proto=S.proto and 
> T.port=S.port and T.hex1=S.hex1 and T.hex2=S.hex2) RETURNING * )
> insert into public.test select * from daily.daily_20131202 a WHERE NOT EXISTS 
> (SELECT * FROM upsert b WHERE a.firewall=b.firewall and a.action=b.action and 
> a.src_zone=b.src_zone and a.src_ip=b.src_ip and a.dst_zone=b.dst_zone and 
> a.dst_ip=b.dst_ip and a.proto=b.proto and a.port=b.port and a.hex1=b.hex1 and 
> a.hex2=b.hex2);
>  
> When I run the command I get an  error
> ERROR:  column reference "firewall" is ambiguous
> LINE 3: ... a WHERE NOT EXISTS (SELECT * FROM upsert b WHERE b.firewall...
>  
> Any idea what I am doing wrong?

In your UPDATE statement, I'd suggest explicitly putting the "T" table alias 
before each column you're setting. That will make the assignment more explicit 
and hopefully get around the error.

>  
> Also, is there an easier way to do that?
>  
> Thanks for the help.

Best,
Richard Dunks

Re: [GENERAL] Wrap around id failure and after effects

2013-11-26 Thread Richard Huxton

On 26/11/13 07:15, Arun P.L wrote:

Hi all,

We had a wraparound failure in the db and most of the tables and data
were missing. So we did a full vacuum in the db, and after that the
tables reappeared, but now the problem is that all the tables have
duplicates when listing tables with \dt. Also, after the vacuum we received
the following warning.

*INFO:  free space map: 48 relations, 29977 pages stored; 134880 total
pages needed*
*DETAIL:  Allocated FSM size: 1000 relations + 2 pages = 215 kB
shared memory.*
*WARNING:  some databases have not been vacuumed in over 2 billion
transactions*
*DETAIL:  You may have already suffered transaction-wraparound data loss.*
*
*

Is this an error that happened during the vacuum?  If so, what can be done
next to prevent data loss? The vacuum was not done as superuser; we are
doing a second vacuum as superuser now. And what are the further
steps to be followed now, like reindexing, etc.?



1. Did you take a full file-level backup of things before vacuuming?

2. What version?

3. How far back in the logs do the warnings go (you should have been 
receiving warnings for a long time)?


4. How/why had you disabled/altered the autovacuum daemon?

This shouldn't really be possible without disabling autovacuum or 
configuring it strangely.


http://www.postgresql.org/docs/current/static/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND
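
A quick way to see how close each database is to wraparound, for example:

SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY 2 DESC;

Anything with an age up near 2 billion is in trouble.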


--
  Richard Huxton
  Archonet Ltd




Re: [GENERAL] Failed to autoconvert '1' to text.

2013-09-06 Thread Richard Huxton

On 06/09/13 09:13, Szymon Guz wrote:

Hi,
why isn't 'aa' always treated as a string?



with x as (
   select
   '1' a,
   '2' b
)
SELECT levenshtein(a, b), length(a)
FROM x;

ERROR:  failed to find conversion function from unknown to text



Why should I cast '1' to '1'::TEXT to satisfy a function (TEXT, TEXT)?


I think it's to do with the CTE. Presumably its types get fixed 
separately from the SELECT levenshtein() call. A quoted literal is type 
"unknown" until it has a context. It could be a date, point, hstore etc.


If you use the literals directly, the context lets PostgreSQL figure it out.
   SELECT levenshtein('1','2');
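
Or keep the CTE and give the literals an explicit type there, for instance:

with x as (
   select
   '1'::text a,
   '2'::text b
)
SELECT levenshtein(a, b), length(a)
FROM x;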

--
  Richard Huxton
  Archonet Ltd




Re: [pgeu-general] [GENERAL] Call for design: PostgreSQL mugs

2013-09-05 Thread Richard Sickler
Okay, how about

PostgreSQL - The DataBase with 10,000 programmers on your side.

PostgreSQL - You wish the rest of your stuff was this good.

PostgreSQL -  apply many quotes to mug


On Thu, Sep 5, 2013 at 12:37 PM, Atri Sharma  wrote:

> On Fri, Sep 6, 2013 at 12:35 AM, Marc Balmer  wrote:
> > ACID. Actually Coffee Inside, Drinkable.
> >
> > Am 05.09.2013 um 20:49 schrieb Steve Crawford <
> scrawf...@pinpointresearch.com>:
> >
> >> org)
>
> Something along the lines of evolution?
>
> A small elephant, growing more powerful?
>
> --
> Regards,
>
> Atri
> l'apprenant
>


Re: [pgeu-general] [GENERAL] Call for design: PostgreSQL mugs

2013-09-04 Thread Richard Sickler
How about an elephant flying around Earth.
Caption:  PostgreSQL - Used all around the World


On Wed, Sep 4, 2013 at 1:58 PM, Gavin Flower
wrote:

>  On 05/09/13 08:40, patrick keshishian wrote:
>
> On 9/4/13, Andreas 'ads' Scherbaum  
>  wrote:
>
>  On 09/04/2013 10:17 PM, Stefan Kaltenbrunner wrote:
>
>  On 09/04/2013 10:13 PM, Marc Balmer wrote:
>
>  Am 04.09.13 22:02, schrieb Gavin Flower:
>
>  On 04/09/13 22:47, Eduardo Morras wrote:
>
>  On Wed, 04 Sep 2013 00:08:52 +0200
> Andreas 'ads' Scherbaum   wrote:
>
>
>  PostgreSQL folks!
>
> We are looking for the next big thing. Actually, it's a bit smaller:
> a
> new design for mugs. So far we had big blue elephants, small blue
> elephants, frosty elephants, white SQL code on black mugs ... Now
> it's
> time to design something new.
>
>
> What's in for you? Fame, of course - we will announce the designer of
> the next mug along with the mugs itself. Plus 4 mugs for you and your
> friends.
>
>
> Do you have a cool idea? Please let us know. Either reply here or
> send
> an email to pgeu-bo...@postgresql.org.
>
>  A big yellow elephant? Perhaps with a nice smile? Oh it's already
> taken ;)
>
>
>  Thanks,
>
>  ---   ---
> Eduardo Morras  
>
>  A big elephant, and 2 smaller elephants.
> A big elephant, and lots of increasingly smaller elephants.
> A Mummy & Daddy elephants, with baby elephants.
>
> To represent that PostgreSQL now replicates?
>
>  If it represents that PostgreSQL replicates, the elephants must be of
the same size.  We don't lose any data.
>
> But I like the idea.  With replication as the motto, we can sell two
> mugs instead of one...
>
>  well - these days we can actually do cascading replication. which opens
> up even more possibilities..
>
>  So it needs a design which combines more than one mug into some kind of
> artwork? That would be awesome - who can design this?
>
>
> And for the family of elephants - how can we make sure it's not similar
> to what we used before?
>
>  someone beat you to 
> it: http://img0.etsystatic.com/000/0/6099703/il_570xN.280457416.jpg
>
>  I would envision mugs all the same size.
>
> How about a stylized elephant shaped mug? I was inspired by the Toby Jug
> my maternal grandmother had in England (see
> http://www.tobyjug.collecting.org.uk)
>
>
> Cheers,
> Gavin
>


Re: [GENERAL] Why doesn't COPY support the HEADER options for tab-separated output?

2013-08-15 Thread Richard Huxton

On 12/08/13 23:18, Bruce Momjian wrote:

On Mon, Aug 12, 2013 at 03:17:00PM -0700, Jeff Janes wrote:

On Mon, Aug 12, 2013 at 2:21 PM, Bruce Momjian  wrote:

On Mon, Aug 12, 2013 at 02:15:25PM -0700, Joe Van Dyk wrote:

Mostly just curious, as this is preventing me from using tab-separated output.
I'd like there to be a header in my files. I have to use CSVs instead.


Late to the discussion, but it does work to set format=csv and delimiter 
= E'\t' to get tab-separated output. It would be nice not to have to, though.
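
Something like this, for instance (table and output path made up):

COPY mytable TO '/tmp/out.tsv' WITH (FORMAT csv, DELIMITER E'\t', HEADER);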


--
  Richard Huxton
  Archonet Ltd




Re: [GENERAL] Seemingly inconsistent ORDER BY behavior

2013-08-14 Thread Richard Hipp
On Wed, Aug 14, 2013 at 2:56 PM, Tom Lane  wrote:

>
> Our interpretation is that a bare column name ("ORDER BY foo") is resolved
> first as an output-column label, or failing that as an input-column name.
> However, as soon as you embed a name in an expression, it will be treated
> *only* as an input column name.
>
> The SQL standard is not a lot of help here.  In SQL92, the only allowed
> forms of ORDER BY arguments were an output column name or an output column
> number.  SQL99 and later dropped that definition (acknowledging that they
> were being incompatible) and substituted some fairly impenetrable verbiage
> that seems to boil down to allowing input column names that can be within
> expressions.  At least that's how we've chosen to read it.  Our current
> behavior is a compromise that tries to support both editions of the spec.
>
>
Thanks for the explanation, Tom.

Just to be clear, you intend that a COLLATE clause in the ORDER BY is
treated as an expression, right?  So that the two queries in the following
SQL output rows in the opposite order:


CREATE TABLE t1(m VARCHAR(4));
INSERT INTO t1 VALUES('az');
INSERT INTO t1 VALUES('by');
INSERT INTO t1 VALUES('cx');
SELECT '1', substr(m,2) AS m FROM t1 ORDER BY m;
SELECT '2', substr(m,2) AS m FROM t1 ORDER BY m COLLATE "POSIX";


If that is not correct, please let me know because I am about to change
SQLite to work exactly as PostgreSQL does.

-- 
D. Richard Hipp
d...@sqlite.org


Re: [GENERAL] Seemingly inconsistent ORDER BY behavior

2013-08-14 Thread Richard Hipp
On Wed, Aug 14, 2013 at 2:28 PM, Scott Marlowe wrote:

> On Wed, Aug 14, 2013 at 12:01 PM, Richard Hipp  wrote:
>
> substr(m,2) as m
>
> is bad form. Always use a new and unique alias, like m1.  How does this
> work:
>
> SELECT '2', substr(m,2) AS m1
>   FROM t1
>  ORDER BY lower(m1);
>

Tnx.  I think everybody agrees that "substr(m,2) as m" is bad form.  And
all the database engines get the same consistent answer when you avoid the
bad form and use "substr(m,2) as m1" instead.  The question is, what should
the database engine do when the programmer disregards sound advice and
uses the bad form anyhow?

-- 
D. Richard Hipp
d...@sqlite.org


[GENERAL] Seemingly inconsistent ORDER BY behavior

2013-08-14 Thread Richard Hipp
Consider the following SQL:

---
CREATE TABLE t1(m VARCHAR(4));
INSERT INTO t1 VALUES('az');
INSERT INTO t1 VALUES('by');
INSERT INTO t1 VALUES('cx');

SELECT '1', substr(m,2) AS m
  FROM t1
 ORDER BY m;

SELECT '2', substr(m,2) AS m
  FROM t1
 ORDER BY lower(m);
---

Using PostgreSQL 9.1.5, the first query returns x-y-z while the second
returns z-y-x.  Is this "correct"?  It certainly is surprising to me.

I'm asking because the same question has come up in SQLite and whenever
there is a language dispute in SQLite, our first instinct is to find out
what PostgreSQL does and try to do the same thing.  SQLite version 3.7.15
was behaving the same way as PostgreSQL 9.1.5 (unbeknownst to us at the
time).  Then a bug was written about the inconsistent behavior of ORDER
BY.  We fixed that bug so that the latest SQLite answers x-y-z in both
cases.  Now someone is complaining that the "fix" was really a "break".  Is
it?  Or is there an equivalent bug in PostgreSQL?

There are, of course, many ways to resolve the ambiguity (such as using a
unique label for the result column, or by saying "t1.m" instead of just "m"
when you mean the column of the table).  But that is not really the point
here.  The question is, how should symbolic names in the ORDER BY clause be
resolved?  Should column names in the source table take precedence over
result column name, or should it be the other way around?

Any insights are appreciated.  Please advise if a different mailing list
would be more appropriate for this question.
-- 
D. Richard Hipp
d...@sqlite.org


Re: [GENERAL] PostrgreSQL Commercial restrictions?

2013-08-07 Thread Richard Broersma
On Wed, Aug 7, 2013 at 8:43 AM, David Johnston  wrote:

> Where the PostgreSQL license comes into play is if you make alterations to
> the PostgreSQL database itself - the underlying engine implemented in C and
> to some degree the supporting utilities written in various languages.
> Anything contributed to the core PostgreSQL project becomes open-source but
> you are permitted to create a commercial port of PostgreSQL with
> proprietary
> code under terms different from those for the core PostgreSQL project.  As
> your application is most likely NOT one of these ports I'll stop here.
>

That may be true for MySQL, but I don't think that applies to PostgreSQL.
Several companies have forked PostgreSQL into their own proprietary products.

Here's a nice presentation on the subject that was put together by Josh
Berkus:
http://www.slideshare.net/pgconf/elephant-roads-a-tour-of-postgres-forks



-- 
Regards,
Richard Broersma Jr.


Re: [GENERAL] Seamless replacement to MySQL's GROUP_CONCAT function...

2013-08-05 Thread Richard Broersma
For posterity that finalized function could be posted here:

http://okbob.blogspot.com/2009/08/mysql-functions-for-postgresql.html

There's already a GROUP_CONCAT, listed there, but I guess this one was
lacking in some way.


On Mon, Aug 5, 2013 at 10:04 AM, Alvaro Herrera wrote:

> immersive.ex...@gmail.com wrote:
>
> > Note: I found some close-but-no cigar aggregates shared online, but
> > they would not accept integer arguments, nor would they handle the
> > optionally furnished delimiter. People would suggesting casting the
> > argument to the pseudo-GROUP_CONCAT. Again: Rewrite perhaps hundreds
> > of queries?
>
> I don't think you need all the variations; you should be able to make
> them all work with a single set of functions, taking ANYELEMENT instead
> of text/int8/int4 etc.
>
>
> --
> Álvaro Herrera                http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Training & Services
>
>
>



-- 
Regards,
Richard Broersma Jr.


Re: [GENERAL] Identify primary key in simple/updatable view

2013-08-02 Thread Richard Broersma
MS-Access lets the user specify which column is the Primary Key to avoid
this introspection.


On Fri, Aug 2, 2013 at 8:18 AM, Lionel Elie Mamane  wrote:

> Now that PostgreSQL has updatable views, users (of LibreOffice /
> native PostgreSQL drivers) want to use them... LibreOffice needs a
> primary key to "locate" updates (that is, construct the WHERE clause
> of an UPDATE or DELETE).
>
> How can the driver automatically identify the view columns that
> correspond to the primary key of the underlying table (and more
> generally the same for keys and indexes)? For "simple" views. Without
> parsing the SQL that defines the view (unless libpq will serve me a
> parse tree? Didn't think so.).
>
> For tables, it does that by reading from pg_constraint, but to use
> that for views, I'd need to parse the SQL, track renamed columns,
> etc.
>
> Thanks in advance for your precious help,
>
> --
> Lionel
>
>
>



-- 
Regards,
Richard Broersma Jr.


Re: [GENERAL] Add a NOT NULL column with default only during add

2013-08-01 Thread Richard Broersma
Notice:
http://www.postgresql.org/docs/9.3/static/sql-altertable.html
After you add a column to your table, you can later *alter* this column to
add, change, or remove the default expression.  There's no need to add
temporary columns to manage this kind of change.  In fact, all of the DDL
that you've described can be achieved in one SQL command.
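
For example (table and column names hypothetical), the backfilled default and 
its removal can ride in a single statement:

ALTER TABLE t
  ADD COLUMN c integer NOT NULL DEFAULT 0,
  ALTER COLUMN c DROP DEFAULT;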


On Thu, Aug 1, 2013 at 3:49 PM, BladeOfLight16 wrote:

> When I want to add a new column with a NOT NULL constraint, I need to
> specify a DEFAULT to avoid violations. However, I don't always want to keep
> that DEFAULT; going forward after the initial add, I want an error to occur
> if there are inserts where this data is missing. So I have to DROP DEFAULT
> on the column. See this SQL Fiddle for a demonstration:
> http://sqlfiddle.com/#!12/58750/1/0. I could leave off the NOT NULL
> constraint, fill the new column with an UPDATE, and then add the NOT NULL
> constraint afterwards, but that, in my opinion, seems to be a somewhat
> messier alternative.
>
> By comparison, if I change data types, I can take advantage of the very
> useful USING clause to specify how to calculate the new value. As near as I
> can tell, there is no similar functionality for ADD COLUMN to specify a
> value (or means of calculating a value) only during the execution of the
> ALTER. I can understand why that might be the case. Without USING, changing
> the data type would force the creation of a new column instead in many
> cases, which is a much bigger hardship and makes the data type changing
> command far less useful.
>
> Am I missing something, or are the ways I mentioned the only ways to
> accomplish this with ADD COLUMN? It's true that neither possibility is
> particularly difficult to implement, but it doesn't seem like I should have
> to create a constraint I don't want or leave off a constraint I do want to
> add the column. I suppose in some cases, the fact that "fully creating" the
> column is non-atomic may be a problem. If I'm correct that this feature is
> not currently present, would adding it be a reasonable feature request? How
> would I go about making a feature request? (My apologies if there is a
> how-to on feature requests somewhere; my searching didn't turn it up.)
>
> Thank you.
>



-- 
Regards,
Richard Broersma Jr.


Re: [GENERAL] How to REMOVE an "on delete cascade"?

2013-06-28 Thread Richard Broersma
You can do all that in a single SQL command:

ALTER TABLE b DROP CONSTRAINT b_a_id_fkey, ADD CONSTRAINT b_a_id FOREIGN
KEY (a_id) REFERENCES a(a_id);



On Fri, Jun 28, 2013 at 5:55 AM, Albe Laurenz wrote:

> Phoenix Kiula wrote:
> > Hi. Hard to find this command in the documentation - how should I alter
> a table to REMOVE the "on
> > delete cascade" constraint from a table? Thanks.
>
> Unless you want to mess with the catalogs directly, I believe that
> you have to create a new constraint and delete the old one, like:
>
>   Table "laurenz.b"
>  Column |  Type   | Modifiers
> +-+---
>  b_id   | integer | not null
>  a_id   | integer | not null
> Indexes:
> "b_pkey" PRIMARY KEY, btree (b_id)
> "b_a_id_ind" btree (a_id)
> Foreign-key constraints:
> "b_a_id_fkey" FOREIGN KEY (a_id) REFERENCES a(a_id) ON DELETE CASCADE
>
>
> ALTER TABLE b ADD CONSTRAINT scratch FOREIGN KEY (a_id) REFERENCES a(a_id);
>
> ALTER TABLE b DROP CONSTRAINT b_a_id_fkey;
>
> ALTER TABLE b RENAME CONSTRAINT scratch TO b_a_id_fkey;
>
> Yours,
> Laurenz Albe
>
>



-- 
Regards,
Richard Broersma Jr.


Re: [GENERAL] Postgres DB crashing

2013-06-20 Thread Richard Huxton

On 18/06/13 18:31, bhanu udaya wrote:

Hello,
Greetings.

My PostgreSQL (9.2) is crashing after certain load tests. Currently,
PostgreSQL is crashing when 800 to 1000 threads are run simultaneously
on a 10 million record schema. Not sure if we have to tweak some more
parameters of Postgres. Currently, PostgreSQL is configured as below
on 7GB RAM on an Intel Xeon CPU E5507 2.27 GHz. Is this a Postgres
limitation to support only 800 threads, or is other configuration
required? Please look at the log below with errors. Please reply.


max_connections 5000
shared_buffers  2024 MB
synchronous_commit  off
wal_buffers 100 MB
wal_writer_delay    1000ms
checkpoint_segments 512
checkpoint_timeout  5 min
checkpoint_completion_target0.5
checkpoint_warning  30s
work_mem            1G
effective_cache_size5 GB


Just to point out, your memory settings are set to allow *at least*

 shared-buffers 2GB + (5000 * 1GB) = 5TB+

You don't have that much memory. You probably don't have that much disk. 
This is never going to work.


As has been said, there's no way you can do useful work simultaneously 
with 1000 threads if you only have 4 cores - use a connection pooler. 
You'll also need to reduce work_mem to 1MB or so.
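
With pgbouncer, for instance, a minimal pgbouncer.ini sketch (names and sizes 
illustrative) lets ~1000 clients share a few dozen server connections:

[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20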


--
  Richard Huxton
  Archonet Ltd




Re: [GENERAL] INSERT RETURNING with values other than inserted ones.

2013-06-10 Thread Richard Dunks
If you're just replicating the data from table A into table B, why does it need 
its own ID number? Wouldn't the table A ID suffice?

I'd recommend using the following:

CREATE TABLE b AS ( SELECT * FROM a );

This way, you only define the columns and insert the data once, then let 
Postgres do the rest for you. Obviously if you need to have a separate table B 
ID, you can alter as necessary. 

Good luck,
Richard Dunks

On Jun 10, 2013, at 7:29 PM, Aleksandr Furmanov  
wrote:

> Hello,
> I want to insert new values into target table 'a' from source table 'b', and 
> then update table 'b' with ids from table 'a', somewhat like:
> 
> CREATE TABLE a(id SERIAL, name TEXT);
> INSERT INTO a (name) VALUES('Jason');
> INSERT INTO a (name) VALUES('Peter');
> 
> CREATE TABLE b(row_id serial, id INT, name TEXT);
> INSERT INTO b (name) VALUES('Jason');
> INSERT INTO b (name) VALUES('Peter');
> 
> 
> WITH inserted AS (INSERT INTO a (name) SELECT b.name FROM b WHERE b.name = 
> name RETURNING a.id)
>  UPDATE b SET id = inserted.id FROM inserted WHERE inserted.row_id = b.row_id;
> 
> However this would not work for obvious reason:
> 
> WHERE inserted.row_id = b.row_id is invalid because RETURNING clause cannot 
> return row_id.
> What can be returned are only columns of 'a', but they are insufficient to 
> identify matching records of 'b'.
> 
> So the question is - what to put in WHERE clause to match RETURNING with rows 
> being inserted from 'b'?
> 
> Thanks!
> 
> --
> Aleksandr Furmanov
> 
> 
> 
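
One common workaround (a sketch, assuming name values are unique in b;
not from the original thread) is to have RETURNING hand back the
inserted name and join on that instead of row_id:

WITH inserted AS (
    INSERT INTO a (name)
    SELECT name FROM b
    RETURNING id, name
)
UPDATE b SET id = inserted.id
FROM inserted
WHERE b.name = inserted.name;  -- assumes name is unique in b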


Re: [GENERAL] PostgreSQL Synchronous Replication in production

2013-06-06 Thread Richard Huxton

On 06/06/13 12:48, Colin S wrote:

Thanks for your answer. I find it very interesting that you say that
synchronous setups should always be in two geographically separate
locations. In this case they are on the same subnet. Adding the lag of
committing to two, geographically separate, databases is not feasible
for this OLTP application.


Well, if they're in the same building(s) then your transactions are all 
at the same risk from fire/earthquake/godzilla etc. Might/might not be 
important to you.



I also like your point that "mostly synchronous is just asynchronous."
So, responding by switching to asynchronous as a response to slow-down
is asynchronous anyway.


"Mostly synchronous" is like "a bit pregnant".


Any other comments, or examples, of when synchronous is worth
implementing would be greatly appreciated.


Note that PostgreSQL's synchronous replication just guarantees that the 
commit has reached the transaction log on the slave. That doesn't mean 
the slave has replayed the transaction log and a query against the slave 
will show the transaction's results. So - it doesn't in itself guarantee
that you can issue read-only queries against either server
indiscriminately.


However, if you really, really need to know that a committed transaction 
is on two physically separate sets of disks then synchronous is what you 
want. If both sets of disks are in the same building then you might be 
able to achieve the same result by other (cheaper/simpler?) means.
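
A minimal sketch of the master-side settings involved (the standby name
is an assumption; it must match the standby's application_name):

# postgresql.conf on the master
synchronous_standby_names = 'standby1'
synchronous_commit = on    # commit waits for WAL flush on the standby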


If you have a business e.g. selling books or train tickets or some such 
then you might decide it's better to have a simpler more robust setup 
from the point of view of providing continuous service to end-customers. 
In the (hopefully rare) event of a crash irreparably losing some 
transactions apologise to your customers and recompense them generously.


For a system handling multi-million pound inter-bank transfers you might 
decide it's better to have the system not working at all rather than 
have an increased risk of a lost transaction.


Of course in both cases you might well want a separate list/cache of 
pending/recently-committed transactions to check against in the event of 
a failure.


I believe what you should do from an engineering approach is to treat it 
in a similar way to security. What do you want to protect against? Make 
a list of possible failures and what they mean to the business/project 
and then decide how much time/money to spend protecting against each one.


--
  Richard Huxton
  Archonet Ltd




Re: [GENERAL] PostgreSQL Synchronous Replication in production

2013-06-06 Thread Richard Huxton

On 06/06/13 11:20, Colin Sloss wrote:


I have been testing the differences between asynchronous and synchronous
hot standby streaming replication on PostgreSQL 9.2.4. There is some
push towards synchronous replication, but I am finding some serious
problems, and wonder how other people deal with them.

[snip]

The whole idea of my solution was to have no single point of failure.
This seems to create two exclusive points of failure, each needing a
completely separate reaction.


Synchronous replication provides a higher level of guarantee for an 
individual transaction (it's safely[1] on at least two boxes now) at the 
cost of making the system as a whole more brittle.


Your uptime as a "service" will inevitably be reduced since in the event 
of problems talking to the slave the master will *have* to delay/cancel 
new transactions.


I have seen people suggest some sort of mode where the server drops back 
to asynch mode in the event of problems. I can't quite understand the 
use-case for that though - either you want synchronous replication or 
you don't. Mostly-synchronous is just asynchronous.


Here's a few questions. How you answer them will decide whether you 
really want synchronous replication or not:

1. The link between servers encounters network congestion
  a. The whole system should slow down.
 Committed transactions should ALWAYS be on
 two geographically separate machines.
  b. An alert should be sent.
 If it's not sorted in 5 mins we'll get someone to look at it.
2. Adding more servers[2] to my replication should:
  a. Make the system as a whole slower[3] and reduce uptime
 but increase the safety of committed transactions
  b. Make the system as a whole faster and increase uptime

There are cases where you want (a), but lots where you want (b) and 
monitor the replication lag.



[1] For various values of "safely" of course
[2] In the same mode - adding async slaves doesn't count
[3] Assuming a reasonable write load of course. Read-only databases 
won't care.


--
  Richard Huxton
  Archonet Ltd




Re: [GENERAL] Table Partitioning

2013-05-22 Thread Richard Onorato
So I worked around most of my errors.  I removed the bigserial and used two of 
the columns as the primary key.  I am now getting the following hibernate 
exception back:
Batch update returned unexpected row count from update [0]; actual row count: 
0; expected: 1


This appears to be caused by the fact that the function is not returning
the row count.  I did a Google search and found a few suggestions on how to
resolve this issue, but they do not seem to work well.  I tried returning NEW,
but that seems to cause the engine to insert the record into the base table
as well as a partition.  Thus I end up with 120 records when I am expecting
just 60.

Any ideas on how I can fix this issue?
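
One commonly suggested workaround (a sketch against the original
MyMappingTable schema quoted below; the function and trigger names are
made up and this is untested here): have the BEFORE trigger RETURN NEW
instead of NULL, so the client sees a row count of 1, then remove the
duplicate parent-table copy in an AFTER trigger:

CREATE OR REPLACE FUNCTION my_mapping_table_delete_parent()
RETURNS trigger AS $$
BEGIN
  -- the row now exists in both the parent and (via the BEFORE
  -- trigger) a child partition; drop the parent copy only
  DELETE FROM ONLY MyMappingTable WHERE id = NEW.id;
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER delete_my_mapping_table_parent
  AFTER INSERT ON MyMappingTable
  FOR EACH ROW EXECUTE PROCEDURE my_mapping_table_delete_parent();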
 
Regards,


Richard



 From: Richard Onorato 
To: Raghavendra  
Cc: "pgsql-general@postgresql.org"  
Sent: Wednesday, May 22, 2013 7:27 PM
Subject: Re: [GENERAL] Table Partitioning
 


Raghavendra,

I am doing my inserts via Java JPA statements embedded in my Data Access Layer. 
 I can share them if you would like to see them.
 
Regards,


Richard



 From: Raghavendra 
To: Richard Onorato  
Cc: "pgsql-general@postgresql.org"  
Sent: Wednesday, May 22, 2013 2:39 AM
Subject: Re: [GENERAL] Table Partitioning
 


On Wed, May 22, 2013 at 6:54 AM, Richard Onorato  
wrote:

Were you able to get it to insert with the bigserial being used on the table?  

Yes.
 
Every time I go to do an insert into one of the inherited tables I am now 
getting the following exception:
>org.hibernate.HibernateException: The database returned no natively generated 
>identity value
>
>

Hmm, I guess you are inserting into the parent table, not directly into the
inherited table.
Can you share the INSERT statement?
 
Is auto-increment supported on table partitioning?
> 

Yes, BIGSERIAL will create a sequence that will be shared by all child 
partitions.
Check the example below, based on your test case: the INSERT statements do
not include the BIGSERIAL column, yet it still auto-increments and populates
data in the child tables.

postgres=# insert into mymappingtable(c1,c2,c3,count,createdtime) values 
(9,20,30,1,now());
INSERT 0 0
postgres=# insert into mymappingtable(c1,c2,c3,count,createdtime) values 
(7,20,30,1,now());
INSERT 0 0
postgres=# select * from mymappingtablet5;
 id | c1 | c2 | c3 | count |           createdtime
----+----+----+----+-------+----------------------------------
  8 |  9 | 20 | 30 |     1 | 2013-05-18 02:08:33.061548+05:30 

postgres=# select * from mymappingtablet3;
 id | c1 | c2 | c3 | count |           createdtime
----+----+----+----+-------+----------------------------------
  9 |  7 | 20 | 30 |     1 | 2013-05-18 02:12:03.076529+05:30
(1 row)

(Request not to top-post please ... :)  )

---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/

Re: [GENERAL] Table Partitioning

2013-05-21 Thread Richard Onorato
Were you able to get it to insert with the bigserial being used on the table?  
Every time I go to do an insert into one of the inherited tables I am now 
getting the following exception:

org.hibernate.HibernateException: The database returned no natively generated 
identity value


Is auto-increment supported on table partitioning?
 
Regards,


Richard



 From: Raghavendra 
To: Richard Onorato  
Cc: "pgsql-general@postgresql.org"  
Sent: Tuesday, May 21, 2013 1:06 PM
Subject: Re: [GENERAL] Table Partitioning
 


On Tue, May 21, 2013 at 11:03 PM, Richard Onorato  
wrote:

I am wanting to partition my data based on a mod of one of the bigint columns, 
but when I run my insert test all the data goes into the base table and not the 
partitions.  Here is what the table looks like:
>
>
>CREATE table MyMappingTable ( id bigserial NOT NULL,
>                                     c1 bigInt NOT NULL,
>                                     c2 bigInt NOT NULL,
>                                     c3 bigint NOT NULL,
>                                     count bigint DEFAULT 1,
>                                     createdTime timestamp with time zone 
>default CURRENT_TIMESTAMP,
>                                     CONSTRAINT MyMappingTable_index PRIMARY 
>KEY (id) )
>with (OIDS=FALSE);
>
>
>CREATE TABLE MyMappingTableT1 (PRIMARY KEY (id), CHECK((c1 % 5) = 0)) INHERITS 
>(MyMappingTable);
>CREATE TABLE MyMappingTableT2 (PRIMARY KEY (id), CHECK((c1 % 5) = 1)) INHERITS 
>(MyMappingTable);
>CREATE TABLE MyMappingTableT3 (PRIMARY KEY (id), CHECK((c1 % 5) = 2)) INHERITS 
>(MyMappingTable);
>CREATE TABLE MyMappingTableT4 (PRIMARY KEY (id), CHECK((c1 % 5) = 3)) INHERITS 
>(MyMappingTable);
>CREATE TABLE MyMappingTableT5 (PRIMARY KEY (id), CHECK((c1 % 5) = 4)) INHERITS 
>(MyMappingTable);
>
>
>Here is the trigger function that I added to the database:
>
>
>CREATE OR REPLACE FUNCTION my_mapping_table_insert_trigger()
>RETURNS trigger AS $$
>BEGIN
>  IF ( (NEW.c1 % 5) = 0 ) THEN 
>    INSERT INTO MyMappingTableT1 VALUES (NEW.*); 
>  ELSIF ( (NEW.c1 % 5) = 1 ) THEN 
>    INSERT INTO MyMappingTableT2 VALUES (NEW.*);
>  ELSIF ( (NEW.c1 % 5) = 2 ) THEN 
>    INSERT INTO MyMappingTableT3 VALUES (NEW.*);
>  ELSIF ( (NEW.c1 % 5) = 3 ) THEN 
>    INSERT INTO MyMappingTableT4 VALUES (NEW.*);
>  ELSIF ( (NEW.c1 % 5) = 4 ) THEN 
>    INSERT INTO MyMappingTableT5 VALUES (NEW.*);
>  ELSE
>
>    RAISE EXCEPTION 'c1 mod out of range.  Something wrong with the 
>my_mapping_table_insert_trigger() function!';
>  END IF;
>  RETURN NULL;
>END;
>$$
>LANGUAGE plpgsql;
>
>
>Here is the Trigger that I added to the table:
>
>
>CREATE TRIGGER insert_my_mapping_table_trigger 
>  BEFORE INSERT ON MyMappingTable 
>  FOR EACH ROW EXECUTE PROCEDURE my_mapping_table_insert_trigger();
>
>
>SET constraint_exclusion = ON;
>
>
>
>Regards,
>
>
>Richard

I tried your test case; it's working fine on my end and populating data
properly to the child partitions.

insert into mymappingtable values (1,7,20,30,1,now());
insert into mymappingtable values (2,6,20,30,1,now());
insert into mymappingtable values (3,8,20,30,1,now());
insert into mymappingtable values (4,9,20,30,1,now());
insert into mymappingtable values (5,10,20,30,1,now());

postgres=# \dt+ MyMappingTable*
                            List of relations
 Schema |       Name       | Type  |  Owner   |    Size    | Description
--------+------------------+-------+----------+------------+-------------
 public | mymappingtable   | table | postgres | 0 bytes    |
 public | mymappingtablet1 | table | postgres | 8192 bytes |
 public | mymappingtablet2 | table | postgres | 8192 bytes |
 public | mymappingtablet3 | table | postgres | 8192 bytes |
 public | mymappingtablet4 | table | postgres | 8192 bytes |
 public | mymappingtablet5 | table | postgres | 8192 bytes |

---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/

Re: [GENERAL] Table Partitioning

2013-05-21 Thread Richard Onorato
Interesting.  I wonder what I am doing wrong.  I will try to set up the
database again and see if I can get it to work.

Thank you for testing it out for me.

Richard 

On May 21, 2013, at 1:06 PM, Raghavendra  
wrote:

> On Tue, May 21, 2013 at 11:03 PM, Richard Onorato  
> wrote:
> I am wanting to partition my data based on a mod of one of the bigint 
> columns, but when I run my insert test all the data goes into the base table 
> and not the partitions.  Here is what the table looks like:
> 
> CREATE table MyMappingTable ( id bigserial NOT NULL,
>  c1 bigInt NOT NULL,
>  c2 bigInt NOT NULL,
>  c3 bigint NOT NULL,
>  count bigint DEFAULT 1,
>  createdTime timestamp with time zone 
> default CURRENT_TIMESTAMP,
>  CONSTRAINT MyMappingTable_index PRIMARY 
> KEY (id) )
> with (OIDS=FALSE);
> 
> CREATE TABLE MyMappingTableT1 (PRIMARY KEY (id), CHECK((c1 % 5) = 0)) 
> INHERITS (MyMappingTable);
> CREATE TABLE MyMappingTableT2 (PRIMARY KEY (id), CHECK((c1 % 5) = 1)) 
> INHERITS (MyMappingTable);
> CREATE TABLE MyMappingTableT3 (PRIMARY KEY (id), CHECK((c1 % 5) = 2)) 
> INHERITS (MyMappingTable);
> CREATE TABLE MyMappingTableT4 (PRIMARY KEY (id), CHECK((c1 % 5) = 3)) 
> INHERITS (MyMappingTable);
> CREATE TABLE MyMappingTableT5 (PRIMARY KEY (id), CHECK((c1 % 5) = 4)) 
> INHERITS (MyMappingTable);
> 
> Here is the trigger function that I added to the database:
> 
> CREATE OR REPLACE FUNCTION my_mapping_table_insert_trigger()
> RETURNS trigger AS $$
> BEGIN
>   IF ( (NEW.c1 % 5) = 0 ) THEN 
> INSERT INTO MyMappingTableT1 VALUES (NEW.*); 
>   ELSIF ( (NEW.c1 % 5) = 1 ) THEN 
> INSERT INTO MyMappingTableT2 VALUES (NEW.*);
>   ELSIF ( (NEW.c1 % 5) = 2 ) THEN 
> INSERT INTO MyMappingTableT3 VALUES (NEW.*);
>   ELSIF ( (NEW.c1 % 5) = 3 ) THEN 
> INSERT INTO MyMappingTableT4 VALUES (NEW.*);
>   ELSIF ( (NEW.c1 % 5) = 4 ) THEN 
> INSERT INTO MyMappingTableT5 VALUES (NEW.*);
>   ELSE
> RAISE EXCEPTION 'c1 mod out of range.  Something wrong with the 
> my_mapping_table_insert_trigger() function!';
>   END IF;
>   RETURN NULL;
> END;
> $$
> LANGUAGE plpgsql;
> 
> Here is the Trigger that I added to the table:
> 
> CREATE TRIGGER insert_my_mapping_table_trigger 
>   BEFORE INSERT ON MyMappingTable 
>   FOR EACH ROW EXECUTE PROCEDURE my_mapping_table_insert_trigger();
> 
> SET constraint_exclusion = ON;
> 
> Regards,
> 
> Richard
> 
> I tried your test case; it's working fine on my end and populating data 
> properly to the child partitions.
> 
> insert into mymappingtable values (1,7,20,30,1,now());
> insert into mymappingtable values (2,6,20,30,1,now());
> insert into mymappingtable values (3,8,20,30,1,now());
> insert into mymappingtable values (4,9,20,30,1,now());
> insert into mymappingtable values (5,10,20,30,1,now());
> 
> postgres=# \dt+ MyMappingTable*
> List of relations
>  Schema |       Name       | Type  |  Owner   |    Size    | Description
> --------+------------------+-------+----------+------------+-------------
>  public | mymappingtable   | table | postgres | 0 bytes    |
>  public | mymappingtablet1 | table | postgres | 8192 bytes |
>  public | mymappingtablet2 | table | postgres | 8192 bytes |
>  public | mymappingtablet3 | table | postgres | 8192 bytes |
>  public | mymappingtablet4 | table | postgres | 8192 bytes |
>  public | mymappingtablet5 | table | postgres | 8192 bytes |
> 
> ---
> Regards,
> Raghavendra
> EnterpriseDB Corporation
> Blog: http://raghavt.blogspot.com/
> 



[GENERAL] Table Partitioning

2013-05-21 Thread Richard Onorato
I am wanting to partition my data based on a mod of one of the bigint columns, 
but when I run my insert test all the data goes into the base table and not the 
partitions.  Here is what the table looks like:

CREATE table MyMappingTable ( id bigserial NOT NULL,
                                     c1 bigInt NOT NULL,
                                     c2 bigInt NOT NULL,
                                     c3 bigint NOT NULL,
                                     count bigint DEFAULT 1,
                                     createdTime timestamp with time zone 
default CURRENT_TIMESTAMP,
                                     CONSTRAINT MyMappingTable_index PRIMARY 
KEY (id) )
with (OIDS=FALSE);

CREATE TABLE MyMappingTableT1 (PRIMARY KEY (id), CHECK((c1 % 5) = 0)) INHERITS 
(MyMappingTable);
CREATE TABLE MyMappingTableT2 (PRIMARY KEY (id), CHECK((c1 % 5) = 1)) INHERITS 
(MyMappingTable);
CREATE TABLE MyMappingTableT3 (PRIMARY KEY (id), CHECK((c1 % 5) = 2)) INHERITS 
(MyMappingTable);
CREATE TABLE MyMappingTableT4 (PRIMARY KEY (id), CHECK((c1 % 5) = 3)) INHERITS 
(MyMappingTable);
CREATE TABLE MyMappingTableT5 (PRIMARY KEY (id), CHECK((c1 % 5) = 4)) INHERITS 
(MyMappingTable);

Here is the trigger function that I added to the database:

CREATE OR REPLACE FUNCTION my_mapping_table_insert_trigger()
RETURNS trigger AS $$
BEGIN
  IF ( (NEW.c1 % 5) = 0 ) THEN 
    INSERT INTO MyMappingTableT1 VALUES (NEW.*); 
  ELSIF ( (NEW.c1 % 5) = 1 ) THEN 
    INSERT INTO MyMappingTableT2 VALUES (NEW.*);
  ELSIF ( (NEW.c1 % 5) = 2 ) THEN 
    INSERT INTO MyMappingTableT3 VALUES (NEW.*);
  ELSIF ( (NEW.c1 % 5) = 3 ) THEN 
    INSERT INTO MyMappingTableT4 VALUES (NEW.*);
  ELSIF ( (NEW.c1 % 5) = 4 ) THEN 
    INSERT INTO MyMappingTableT5 VALUES (NEW.*);
  ELSE

    RAISE EXCEPTION 'c1 mod out of range.  Something wrong with the 
my_mapping_table_insert_trigger() function!';
  END IF;
  RETURN NULL;
END;
$$
LANGUAGE plpgsql;

Here is the Trigger that I added to the table:

CREATE TRIGGER insert_my_mapping_table_trigger 
  BEFORE INSERT ON MyMappingTable 
  FOR EACH ROW EXECUTE PROCEDURE my_mapping_table_insert_trigger();

SET constraint_exclusion = ON;


Regards,


Richard

Re: [GENERAL] Shortcut evaluation in OR or IN

2013-05-06 Thread Richard Poole
On Mon, May 06, 2013 at 02:16:38PM +1200, Tim Uckun wrote:
> Say I have a select like this.
> 
> SELECT * FROM table where field = X OR field = Y limit 1
> 
> And I have two records one that matches X and one that matches Y will I
> always get X because the evaluation will stop after the first clause in the
> OR matches?
> 
> What about for IN (X, Y)

There is no short-circuiting; you'll get one record or the other but no
guarantee which. If you want to guarantee what order records come out
in you need to add an ORDER BY. In the specific case you're describing
you could do ORDER BY field = X DESC and get the order you're looking for.
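
For example (a sketch of the suggested rewrite, with placeholder table
and value names):

SELECT *
FROM mytable
WHERE field IN ('X', 'Y')
ORDER BY (field = 'X') DESC   -- rows matching X sort first
LIMIT 1;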

> how about if I am doing an update
> 
> UPDATE table1 set x=table2.y where table1.field1 = table2.field1 OR
> table1.field2=table2.field2
> 
> Will it update based on field1 if both fields match?

An update affects all rows that match the given condition so you'd get
both rows updated in this case. There's no LIMIT or ORDER BY available
in UPDATE.

Richard




Re: [GENERAL] apt.postgresql.org broken dependency?

2013-04-26 Thread Richard Huxton

On 25/04/13 18:01, Martín Marqués wrote:

Just tried upgrading and added the apt.postgresql.org repo to my
Debian server (on testing now) and I got some packages like barman
retained because some dependencies couldn't be satisfied.

The following packages have unmet dependencies:
  barman : Depends: python (< 2.7) but 2.7.3-4 is to be installed
           Depends: python-argcomplete but it is not going to be installed



Since when is 2.7.3 not larger than 2.7?


Is that not complaining that it *wants* a version of python < 2.7 and 
you have larger?


--
  Richard Huxton
  Archonet Ltd




Re: [GENERAL] Table containing only valid table names

2013-04-26 Thread Richard Huxton

On 26/04/13 16:09, Michael Graham wrote:

I'm pretty sure I can't do what I need as postgres doesn't support
triggers on DDL but maybe I'm wrong.


If you're still in development and not live, it'll be worth checking out 9.3

http://www.postgresql.org/docs/devel/static/event-triggers.html
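
A sketch of what that feature looks like (syntax as per the 9.3
development docs at the time; the names here are made up):

CREATE FUNCTION forbid_ddl() RETURNS event_trigger AS $$
BEGIN
    RAISE EXCEPTION 'DDL is not allowed here';
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER no_ddl
    ON ddl_command_start
    EXECUTE PROCEDURE forbid_ddl();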


--
  Richard Huxton
  Archonet Ltd




Re: [GENERAL] Checking for changes in other tables

2013-04-26 Thread Richard Huxton

On 26/04/13 10:01, CR Lender wrote:

I can add a trigger on eu_loans to check if Diane and Betty both live in
the EU. The problem is how to prevent one of them from moving to a
non-EU country (if they do, the loan has to be cancelled first). They
are however allowed to move to other EU countries.

At the moment, this is checked by the application, but not enforced by
the database. I could add more triggers to the persons table (and
another one on countries), but that doesn't "feel" right... countries
and persons are base data and shouldn't need to "know" about other
tables using their records.


I think this is more a problem of terminology rather than your current 
triggers. Triggers aren't really "part" of a table, but they are 
observing it, so it's a sensible place to list them when viewing a 
table-definition in psql. There's no reason the trigger function even needs
to be in the same schema as the targeted table.


How would it feel if the syntax was more like the following?

CREATE TRIGGER ... OBSERVING UPDATES ON persons ...

or even

PUBLISH UPDATE,INSERT,DELETE ON persons AS person_changes;
SUBSCRIBE TO person_changes CALLING PROCEDURE ...;

A different "feel", but no difference in behaviour.

--
  Richard Huxton
  Archonet Ltd




Re: [GENERAL] Checking for changes in other tables

2013-04-26 Thread Richard Poole
On Fri, Apr 26, 2013 at 11:01:28AM +0200, CR Lender wrote:

> Is there any way to ensure that all donors and recipients in eu_loans
> are in the EU, without altering the countries and persons tables?

One way to do this would be to add countries to the eu_loans table so
it looks like this:

create table eu_loans (
donor text not null,
donor_country char(2) not null,
recipient text not null,
recipient_country char(2) not null,
primary key(donor, recipient),
foreign key (donor, donor_country) references persons (name, country) 
on update cascade,
foreign key (recipient, recipient_country) references persons (name, 
country) on update cascade
);

then create an sql function to tell you whether a country is in the eu:

create function country_in_eu (char(2)) returns bool as $$
select count(*) > 0 from countries where code = $1 and eu = 't'
$$ language 'sql';

and add two constraints to eu_loans:

alter table eu_loans add constraint donor_in_eu 
check(country_in_eu(donor_country));
alter table eu_loans add constraint recipient_in_eu 
check(country_in_eu(recipient_country));

This will give an error if someone moves outside the EU (but not if a
country leaves the EU).

It may or may not seem elegant depending on your thinking but it does
have the effect you're looking for. Of course you could set things up
so that you could do an insert to eu_loans specifying just the donor
and recipient names and the system would populate the country fields
for you by looking up in persons, throwing an error if appropriate.

Richard




Re: [GENERAL] Selecting timestamp from Database

2013-04-08 Thread Richard Harley

That returns nothing also. But I have spied the problem now:
select  ATTENDANCE.timestamp::text from attendance order by timestamp desc
limit 1
returns the actual timestamp: 2013-04-08 12:42:40.089952
> So the theory I'm wondering about is that the stored data in fact
> contains (some values with) fractional seconds, but Richard's
> client-side software isn't bothering to show those, misleading him
> into entering values that don't actually match the stored data.
> Looking at the table directly with psql would prove it one way
> or the other.

This is it. It was the psycopg adapter. My bad!!
Thanks Adrian / Tom.
Rich
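
For anyone hitting the same thing, a sketch of matching despite hidden
fractional seconds:

select "timestamp" from attendance
where date_trunc('second', "timestamp") = '2013-04-08 12:42:40';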



On 8 Apr 2013, at 14:58, Adrian Klaver  wrote:

> On 04/08/2013 06:49 AM, Richard Harley wrote:
>> It's
>> 
>>     Column    |            Type             |                             Modifiers
>> --------------+-----------------------------+--------------------------------------------------------------------
>>  attendanceid | integer                     | not null default nextval('attendance_attendanceid_seq'::regclass)
>>  entered      | date                        | not null default ('now'::text)::date
>>  timeperiod   | character(2)                |
>>  timestamp    | timestamp without time zone | default now()
>> 
> 
> 
> Well timestamp is not time zone aware, so I have no idea where your time zone 
> offsets are coming from.
> 
> What happens if you do:
> "select timestamp from attendance where timestamp = '2012-12-14 12:02:45';
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@gmail.com
> 
> 



Re: [GENERAL] Selecting timestamp from Database

2013-04-08 Thread Richard Harley
It's

    Column    |            Type             |                             Modifiers
--------------+-----------------------------+--------------------------------------------------------------------
 attendanceid | integer                     | not null default nextval('attendance_attendanceid_seq'::regclass)
 entered      | date                        | not null default ('now'::text)::date
 timeperiod   | character(2)                |
 timestamp    | timestamp without time zone | default now()





On 8 Apr 2013, at 14:48, Adrian Klaver  wrote:

> On 04/08/2013 06:45 AM, Richard Harley wrote:
>> I am running the query straight through PSQL so there are no other programs 
>> or adapters.
>> 
>> The field definition is just 'timestamp'.
> 
> From psql what do you get if you do?:
> 
> \d attendance
> 
>> 
>> I did try that as well - no luck :)
>> 
>> Rich
> 
> -- 
> Adrian Klaver
> adrian.kla...@gmail.com



Re: [GENERAL] Selecting timestamp from Database

2013-04-08 Thread Richard Harley
I am running the query straight through PSQL so there are no other programs or 
adapters.

The field definition is just 'timestamp'.

I did try that as well - no luck :)

Rich



On 8 Apr 2013, at 14:36, Adrian Klaver  wrote:

> On 04/08/2013 06:27 AM, Richard Harley wrote:
>> Sure
>> 
>> Timestamp
>> 2013/04/08 12:42:40 GMT+1
>> 2013/04/08 12:42:33 GMT+1
>> 2013/04/07 20:25:11 GMT+1
>> 2013/04/07 20:19:52 GMT+1
>> 2013/04/07 20:19:52 GMT+1
> 
> What program are you using to get the above result?
> 
> What is the field definition for the timestamp column?
> 
> From your previous post try:
> 
> select timestamp from attendance where timestamp = '2012/12/14 12:02:45+0'
> 
>> 
>> 
>> Some are GMT, some are GMT+1 depending on when they were entered.
>> 
>> 
>> 
>> On 8 Apr 2013, at 14:25, Adrian Klaver > <mailto:adrian.kla...@gmail.com>> wrote:
>> 
>>> On 04/08/2013 06:22 AM, Richard Harley wrote:
>>>> This doesn't seem to work - take a normal GMT date for example:
>>>> 2012/12/14 12:02:45 GMT
>>>> 
>>>> select timestamp from attendance where timestamp = '2012/12/14 12:02:45'
>>>> 
>>>> ..returns nothing
>>> 
>>> Can you show the results of an unconstrained SELECT?:
>>> 
>>> select timestamp from attendance limit 5;
>>>> 
>>>> 
>>>> 
>>> 
>>> 
>>> --
>>> Adrian Klaver
>>> adrian.kla...@gmail.com <mailto:adrian.kla...@gmail.com>
>> 
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@gmail.com
> 
> 





Re: [GENERAL] Selecting timestamp from Database

2013-04-08 Thread Richard Harley
Sure

Timestamp
2013/04/08 12:42:40 GMT+1
2013/04/08 12:42:33 GMT+1
2013/04/07 20:25:11 GMT+1
2013/04/07 20:19:52 GMT+1
2013/04/07 20:19:52 GMT+1


Some are GMT, some are GMT+1 depending on when they were entered.



On 8 Apr 2013, at 14:25, Adrian Klaver  wrote:

> On 04/08/2013 06:22 AM, Richard Harley wrote:
>> This doesn't seem to work - take a normal GMT date for example:
>> 2012/12/14 12:02:45 GMT
>> 
>> select timestamp from attendance where timestamp = '2012/12/14 12:02:45'
>> 
>> ..returns nothing
> 
> Can you show the results of an unconstrained SELECT?:
> 
> select timestamp from attendance limit 5;
>> 
>> 
>> 
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@gmail.com



Re: [GENERAL] Selecting timestamp from Database

2013-04-08 Thread Richard Harley
This doesn't seem to work - take a normal GMT date for example: 2012/12/14 
12:02:45 GMT

select timestamp from attendance where timestamp = '2012/12/14 12:02:45'

..returns nothing



On 8 Apr 2013, at 14:17, Adrian Klaver  wrote:

> On 04/08/2013 06:03 AM, Richard Harley wrote:
>> Hello all
>> 
>> Pretty sure this should be simple - how can I select a timestamp from a
>> database?
>> 
>> The timestamp is stored in the db like this:
>> 
>> 2013/04/08 13:54:41 GMT+1
>> 
>> 
>> How can I select based on that timestamp?
>> 
>> At the simplest level "select timestamp from attendance where timestamp
>> = '2013/04/08 13:54:41 GMT+1'"
>> 
>> ..doesn't obviously work but I've tried all sorts of to_char and
>> to_timestamp combos to no avail..
>> 
>> Any ideas?
> 
> select timestamp from attendance where timestamp  = '2013/04/08 13:54:41+1'
> 
>> 
>> Cheers
>> Rich
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@gmail.com
> 
> 



[GENERAL] Selecting timestamp from Database

2013-04-08 Thread Richard Harley
Hello all

Pretty sure this should be simple - how can I select a timestamp from a 
database?

The timestamp is stored in the db like this:

2013/04/08 13:54:41 GMT+1

How can I select based on that timestamp?

At the simplest level "select timestamp from attendance where timestamp = 
'2013/04/08 13:54:41 GMT+1'"

..doesn't obviously work but I've tried all sorts of to_char and to_timestamp 
combos to no avail..

Any ideas?

Cheers
Rich

Re: [GENERAL] Do "after update" trigger block the current transaction?

2013-03-26 Thread Richard Huxton

On 26/03/13 13:24, Clemens Eisserer wrote:

Hi Richard,


  Will triggers (after
update specifically) cause the execution of SQL-commands to pause
until the trigger-function has returned (at statement execution time
or commit)?


The trigger will block. If it didn't then it couldn't abort the transaction
if it needed to.


Thanks for the clarification.


Why not use one of the established trigger-based replication solutions?


Because the "other" database which I would like to keep in sync is a
MySQL db. Furthermore I do not need a 1:1 replica, but instead just
update a few columns in different tables there.

My inital plan was to add a timestamp-column which is updated at every
Update and to poll for changes every 5-10s. However, the word
"polling" seems to cause an allergic reaction for some people ;)


Might be worth looking at PgQ - a queueing system underlying Londiste. 
That would handle tracking the changes in PostgreSQL leaving you to just 
handle the MySQL end. Timestamps will do the job as long as you are 
careful to allow enough slack to deal with clock updates.
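
A minimal sketch of the timestamp-column variant (table and column names
are placeholders):

ALTER TABLE mytable
  ADD COLUMN last_modified timestamptz NOT NULL DEFAULT now();

CREATE OR REPLACE FUNCTION touch_last_modified() RETURNS trigger AS $$
BEGIN
    NEW.last_modified := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mytable_touch
    BEFORE UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE touch_last_modified();

-- the poller then selects rows whose last_modified is newer than the
-- previous poll time, minus some slack for clock adjustments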



--
  Richard Huxton
  Archonet Ltd




Re: [GENERAL] PostgreSQL service terminated by query

2013-03-26 Thread Richard Huxton

On 26/03/13 05:55, adrian.kitching...@dse.vic.gov.au wrote:

I'm hoping I can get some info on a query which terminates my PostgreSQL
service.
The query is a relatively simple PostGIS query:



The log text when the service crashes is:



2013-03-26 15:49:55 EST LOG: server process (PID 3536) was terminated by
exception 0xC0000005
2013-03-26 15:49:55 EST HINT: See C include file "ntstatus.h" for a
description of the hexadecimal value.



I'm running PostgreSQL 9.1 with PostGIS 2.0 installed on an WinXP SP3:
4GB RAM machine. Shared_buffers set at 50MB. Let me know if further info
needed.


This is a Windows memory-related error. It might be due to a library 
problem, bad RAM or a corrupted pointer in the database table itself.


1. Can you reliably produce the error with this specific gid?
2. Can you dump the database (or perhaps just the tables in question)?

If we can't find any problems in the database itself and you can spare 
the downtime, it may be worth running a RAM checker overnight.



Notice:
This email and any attachments may contain information that is personal,
confidential,
legally privileged and/or copyright. No part of it should be reproduced,
adapted or communicated without the prior written consent of the
copyright owner.


Oh no, too late!

--
  Richard Huxton
  Archonet Ltd




Re: [GENERAL] Do "after update" trigger block the current transaction?

2013-03-26 Thread Richard Huxton

On 26/03/13 08:52, Clemens Eisserer wrote:

Hi,

We are currently evaluating the feasibility of executing long-running
scripts written in shell-script (plsh) called by triggers (after
update/insert) to synchronize two databases. Will triggers (after
update specifically) cause the execution of SQL-commands to pause
until the trigger-function has returned (at statement execution time
or commit)?


The trigger will block. If it didn't then it couldn't abort the 
transaction if it needed to.



The other possible approach would be to use polling on some
trigger-updated timestamp-column, which is not pretty but should be
fairly simple.


Why not use one of the established trigger-based replication solutions?

--
  Richard Huxton
  Archonet Ltd




Re: [GENERAL] Running update in chunks?

2013-01-25 Thread Richard Huxton

On 25/01/13 11:38, Tim Uckun wrote:

That suggests (to me, at least) that it is related to index updating. Again,
your GIN index seems primary candidate.

Try running iostat (I think that's available on a Mac) with/without the
fillfactor and with/without the GIN index while you do the updates. It's
possible your SSD is just behaving oddly under stress.



I dropped the index and the numbers shot up tenfold or more.  I don't
know why postgres feels the need to update the GIN index on the hstore
field when I am only updating an integer field but it looks like I
need to split the hstore into a different table.

If the row moves to a different block, then it has no choice. The old
index entry will point to an invalid block. There are some optimisations 
(HOT - http://pgsql.tapoueh.org/site/html/misc/hot.html) but that relies 
on (iirc) the update staying on the same block and also not updating any 
indexed fields (and you were, I think).


A GIN index is very expensive to update compared to btree too.
--
  Richard Huxton
  Archonet Ltd




Re: [GENERAL] Running update in chunks?

2013-01-25 Thread Richard Huxton

On 25/01/13 08:57, Tim Uckun wrote:

What if you do:
alter table cars.imports set (fillfactor=50);
Before the vacuum full, and then try the update again?


This makes a dramatic difference when combined with a vacuum.

UPDATE 98834
Time: 3408.210 ms

Ten times faster!

That suggests (to me, at least) that it is related to index updating.
Again, your GIN index seems primary candidate.


A fillfactor of 50% means row updates probably stay on the same 
disk-block as their previous version. This implies less index updates.


Try running iostat (I think that's available on a Mac) with/without the 
fillfactor and with/without the GIN index while you do the updates. It's 
possible your SSD is just behaving oddly under stress.


--
  Richard Huxton
  Archonet Ltd




Re: [GENERAL] Running update in chunks?

2013-01-21 Thread Richard Huxton

On 21/01/13 20:09, Tim Uckun wrote:

Just to close this up and give some guidance to future googlers...

Careful, future googlers.

Conclusion.  Updates on postgres are slow

Nope.


(given the default
postgresql.conf).  I presume this is due to MVCC or the WAL or
something and there are probably some things I can do to tweak the
conf file to make them go faster but out of the box running an update
on a table with lots of rows is going to cost you a lot.
Unlikely. Do you really think that a PostgreSQL installation typically
runs 100 times slower on updates than inserts and every other user has
just said "oh, that's ok then"? Or is it more likely that something
peculiar is broken on your setup?



  Removing the indexes doesn't help that much.

Suggestion for the PG team.  Deliver a more realistic postgres.conf by
default. The default one seems to be aimed at ten year old PCs with
very little RAM and disk space. At least deliver additional conf files
for small, medium, large, huge setups.

--
  Richard Huxton




Re: [GENERAL] Running update in chunks?

2013-01-21 Thread Richard Huxton

On 21/01/13 10:30, Tim Uckun wrote:

Can you try a couple of things just to check timings. Probably worth EXPLAIN
ANALYSE.

SELECT count(*) FROM cars.imports i JOIN cars.models md ON i.model_id =
md.id;


Takes about 300 ms


CREATE TEMP TABLE tt AS SELECT i.* FROM cars.imports i JOIN cars.models md
ON i.model_id = md.id;

Takes about 300 ms

OK - so writing all the data takes well under one second but updating
the same amount takes 50 seconds.


The only differences I can think of are WAL logging (transaction log) 
and index updates (the temp table has no indexes).


1. Try CREATE TABLE rather than CREATE TEMP TABLE - if that's still 
quick then it's not the time taken to write WAL.
2. Run the update query against your new tt table and see how long that 
takes.
3. Add indexes and repeat (in particular I'd be suspicious of the gin 
index on "data")


My guess is that it's the time taken to update the "data" index - gin 
indexes can be slow to rebuild (although 50 seconds seems *very* slow). 
If so there are a few options:
1. Split the table and put whatever this "data" is into an import_data 
table - assuming it doesn't change often.
2. Try a fill-factor of 50% or less - keeping the updates on the same 
data page as the original might help
3. Drop the gin index before doing your bulk update and rebuild it at 
the end. This is a common approach with bulk-loading / updates.
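
A sketch of option 3, with an assumed index name and definition:

DROP INDEX imports_data_idx;    -- whatever the gin index on "data" is called

UPDATE cars.imports i
   SET make_id = md.make_id
  FROM cars.models md
 WHERE i.model_id = md.id;

CREATE INDEX imports_data_idx ON cars.imports USING gin (data);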


Oh - I'm assuming you're only updating those rows whose id has changed - 
that seemed to be the suggestion in your first message. If not, simply 
adding "AND make_id <> md.make_id" should help. Also (and you may well 
have considered this) - for a normalised setup you'd just have the 
model-id in "imports" and look up the make-id through the "models" table.


--
  Richard Huxton
  Archonet Ltd




Re: [GENERAL] Running update in chunks?

2013-01-21 Thread Richard Huxton

On 21/01/13 08:04, Tim Uckun wrote:

This is the query I am running

update cars.imports i
 set make_id = md.make_id
 from cars.models md where i.model_id = md.id;


Here is the analyse

Looks like it's the actual update that's taking all the time.

This query takes fifty seconds on a macbook air with i7 processor and
eight gigs of RAM and SSD hard drive.  I am using postgres 9.2
installed with homebrew using the standard conf file.
Can you try a couple of things just to check timings. Probably worth 
EXPLAIN ANALYSE.


SELECT count(*) FROM cars.imports i JOIN cars.models md ON i.model_id = 
md.id;


CREATE TEMP TABLE tt AS SELECT i.* FROM cars.imports i JOIN cars.models 
md ON i.model_id = md.id;


Now the first one should take half a second judging by your previous 
explain. If the second one takes 50 seconds too then that's just the 
limit of your SSD's write. If it's much faster then something else is 
happening.



--
  Richard Huxton
  Archonet Ltd




Re: [GENERAL] Backup/Restore bytea data

2013-01-14 Thread Richard Huxton

On 14/01/13 01:13, sub3 wrote:

Hi,
I am having an issue upgrading a really old 8.2 db up to 9.2.2. One of the
tables contains a bytea field.  When I backup & restore using pgadmin from
my 9.2.2 install, it doesn't convert this field correctly.


Could this be due to your bytea_output setting?
  http://www.postgresql.org/docs/9.2/static/datatype-binary.html

Not sure how this could snag you if you are dumping using 9.2, but this:


I see it starts w/special character when selecting it from the old database;
in the new db, I see a string starting w/"\211PNG".


is clearly in "escape" rather than "hex" format.
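
If so, a quick check/workaround sketch on the 9.2 side:

SHOW bytea_output;             -- 'hex' is the default from 9.0 onwards
SET bytea_output = 'escape';   -- revert to the pre-9.0 output format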

--
  Richard Huxton
  Archonet Ltd




Re: [GENERAL] DONT_CARE Aggregate

2012-12-20 Thread Richard Broersma
On Thu, Dec 20, 2012 at 5:45 AM, Robert James wrote:

> Sergey - That's an interesting option, but I'm not sure how to use it
> as an aggregate.  Could you give an example?
>

Here is an example:

buildinghac=>
  SELECT itemnbr, buildingnbr
FROM Actionitems
ORDER BY buildingnbr
   LIMIT 10;
 itemnbr | buildingnbr
---------+-------------
1181 | B-0106
 363 | B-0106
 185 | B-0106
 483 | B-0106
  67 | B-0106
 125 | B-0106
 303 | B-0106
 245 | B-0106
  68 | B-0107
 304 | B-0107
(10 rows)


buildinghac=>

  SELECT DISTINCT ON ( buildingnbr )
itemnbr, buildingnbr
FROM Actionitems
ORDER BY buildingnbr
   LIMIT 10;
 itemnbr | buildingnbr
---------+-------------
 245 | B-0106
 364 | B-0107
1170 | B-0111
 361 | B-0112
 128 | B-0116
1013 | B-0117
 129 | B-0118
 368 | B-0300
1141 | B-0307
  74 | B-0423
(10 rows)


-- 
Regards,
Richard Broersma Jr.


Re: [GENERAL] Postgres PHP error

2012-12-03 Thread Richard Huxton

On 03/12/12 05:18, rahul143 wrote:

Hi All

I'm receiving the following error, on a PHP page, connecting to Postgres 7.4,
installed on Mandrake 10.0



Others have answered your question. However...

Please make sure you have regular scheduled backups for that database. 
That is quite an old (9 years) version of PostgreSQL and you'll be 
unlikely to find many people with a similar version who can help you 
with problems in the event of a crash.


It's probably a good idea to see if you can install the latest version 
from source on that machine and use its version of pg_dump to dump the
database regularly too. I'd expect to have to do a little work to move 
the data into an up-to-date version of PostgreSQL and it's always better 
to know what issues you'll have before doing it for real.


--
  Richard Huxton
  Archonet Ltd




Re: [GENERAL] Experiences with pl/Java

2012-11-19 Thread Welty, Richard
i think pl/java may expect the method signatures to match up precisely. not 
entirely sure, as there are no
examples published as to how pl/java expects out parameters to work.

richard


From: Thomas Hill [thomas.k.h...@t-online.de]
Sent: Monday, November 19, 2012 5:56 PM
To: Welty, Richard; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Experiences with pl/Java

thanks - not sure how plJava works here and if the implementation is
identical to Apache Derby - what I can tell however is that defining the
types the way I did (integer on one side vs an array of integers on the
other side) is exactly how Apache Derby needs this, as their out params
always need to be defined as arrays in the method declaration and are
then automatically returned as integers - I will try to use integers on
both sides for plJava tomorrow, but if this would solve the problem it
would also mean that the method declaration is different depending on
the database backend implementation - which would make code re-use impossible


Am 19.11.2012 22:58, schrieb Welty, Richard:
> i looked your stuff over. i never needed to make out params work, so i'm kind 
> of guessing here, but the first thing i'd look at is the type mismatch 
> between the args given in the pl/pgsql 'create or replace function' vs the 
> args in the java declaration, that is, the int on one side vs the array of 
> ints on the other. due to the lack of examples available that you previously 
> pointed out, i have no real idea how that should look in a working example.
>
> richard
>
> 
> From: Thomas Hill [thomas.k.h...@t-online.de]
> Sent: Monday, November 19, 2012 2:55 PM
> To: Welty, Richard; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Experiences with pl/Java
>
> Thanks to all for providing feedback and sharing opinions. Looks like
> you have gone much further on it than I thought someone would have. So I
> think I might spend some more time with it, but not plan to use it for
> my application in a production environment.
>
> My initial attempts were to try to re-use/port some simple procedures
> which are running fine on Apache Derby, but then I got stuck quite early
> in the process and could not find documentation showing how things needs
> to be done and helping me to understand what I am doing wrong.
>
> My first use case was calling a procedure which does not have any
> parameter and this I actually got to run, i.e.
>
> public static String CURRENT_CLIENTID() throws SQLException {
>
>   String vcFKClientID = "000";
>
>   return vcFKClientID;
>   }
>
> CREATE OR REPLACE FUNCTION rte."CURRENT_CLIENTID"()
> RETURNS character varying AS
> 'onlyPostgreSQLPk.Functions.CURRENT_CLIENTID'
> LANGUAGE java VOLATILE SECURITY DEFINER
> COST 100;
> ALTER FUNCTION rte."CURRENT_CLIENTID"()
> OWNER TO postgres;
>
> => select rte."CURRENT_CLIENTID"() returns '000'
>
> My second use case was to create a procedure with an out parameter, i.e.
>
> public static void SP_getNextID(int iNextVal[], String vcIDName)
>   throws SQLException {
>   Connection conn = getDefaultConnection();
>
>   Statement stmt = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
>   ResultSet.CONCUR_UPDATABLE);
>
>   String cSQL = "SELECT \"LastValue\" \n" + "FROM rte.\"TBL_IDs\"
> \n"
>   + "WHERE \"IDName\" = '" + vcIDName + "'\n";
>
>   ResultSet rs = stmt.executeQuery(cSQL);
>
>   while (rs.next()) {
>   iNextVal[0] = rs.getInt(1) + 1;
>   rs.updateInt("LastValue", iNextVal[0]);
>   rs.updateRow();
>   }
>
>   rs.close();
>   stmt.close();
>
>   return;
>
>   }
>
> CREATE OR REPLACE FUNCTION rte."SP_getNextID"(OUT "iNextID" integer, IN
> "vcIDName" character varying)
> RETURNS integer AS
> 'allDatabasesPk.Procedures.SP_getNextID(int[], java.lang.String)'
>
> The static method seems to be found. But when calling the function using
> pgadmin and issuing a 'Select rte."SP_getNextID"('xy');' I am getting error
> Too many parameters - expected 1
> which I find confusing as I am only passing one parameter!?
>
> I tried some variations I could think of, but without success.
> Unfortunately I have not found an exmaple anywhere on the web showing
> how this needs to be done.
>
> Would be great if someone could have a look at a

Re: [GENERAL] Experiences with pl/Java

2012-11-19 Thread Welty, Richard
Edson Richter [edsonrich...@hotmail.com] writes:
>Em 19/11/2012 15:26, Welty, Richard escreveu:

>> PL/Java requires that the methods being directly called from PostgreSQL are 
>> static.
>> while i don't disagree with the advice, PL/Java is limited in this respect.

>:-) as I said, I know little about pl/Java... thanks for pointing this out.
>So, after calling the static method, probably inside the static methods,
>the programmer will create his/her logic. Writing this logic considering
>GC behavior would make your apps more stable.
>I would recommend to not use any Java library that keeps "caches" (ones
>like EclipseLink or Hibernate), unless you know exactly how to configure
>caches and soft/weak references - and their limitations.

i would probably recommend severely limiting what you try to do in PL/Java.

in my particular project, the overall goal was incrementally doing push updates 
to an Apache
Solr search engine. this entailed building xml descriptions of the update that 
was required,
sending it with an http request (which required untrusted pl/java), and 
providing a recovery & retry
mechanism in case the http request failed, which limited itself to using the 
provided jdbc.

i got it working and working well, but i put a lot of care into ensuring that
the garbage collector
never got stressed very hard and i didn't try to do more than was strictly 
necessary. i'd argue
that if you find yourself wanting to use hibernate or eclipselink down in 
PL/Java, you should
rethink your application design.

richard




Re: [GENERAL] Experiences with pl/Java

2012-11-19 Thread Welty, Richard

Edson Richter [edsonrich...@hotmail.com] writes:

>I don't know much about PostgreSQL with plJava, but I can give a few
>tips about memory stability when using Java (see inline):

...
>In the past, one choice was using static classes whenever possible. This
>is not true (neither recommended) anymore, and in most of the cases
>having thousands of cicles creation-use-release makes GC more effective
>(and your VM more stable).

PL/Java requires that the methods being directly called from PostgreSQL are 
static.
while i don't disagree with the advice, PL/Java is limited in this respect.

richard




Re: [GENERAL] Experiences with pl/Java

2012-11-19 Thread Welty, Richard
i used it for a project about 6 months ago. it took a little bit of effort to 
get things going, and the documentation could use some work, but it was 
reliable once i got oriented.

the developer does read the mailing list, and responds to requests for help.

i think you could use it in production so long as you have good processes in 
place and can dedicate some developer time to learning & supporting it. but 
it's definitely not plug and play right now.

richard

From: pgsql-general-ow...@postgresql.org [pgsql-general-ow...@postgresql.org] 
on behalf of Thomas Hill [thomas.k.h...@t-online.de]
Sent: Monday, November 19, 2012 3:02 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Experiences with pl/Java

Hi,

was wondering if anyone wanted to share some experiences gained
and some knowledge on pl/Java.
now and am getting the impression it is not something ready to use in
production environment. Also have trouble sending to the developer
mailing list (the last email on the mail archive of that list is almost
three weeks old) which raises questions on how active this project is.

Have worked with PostgreSQL and pg/SQL before and also used Apache Derby
in the past. On Apache Derby I have implemented some stored procedures
using Java code and my interest now was in seeing how much of this could
be re-used with pl/Java so PostgreSQL could potentially become a second
database backend my application would run on.

But trying to port my Java procedures from Derby to PostgreSQL had a bad
start, and for me pl/Java and the support around it so far suggest not to
pursue this any further. Why?
Porting the simplest procedure was possible, but trying procedures with
out parameters was not.

Has anyone had the same experience with pl/Java, or is it just my lack
of understanding (in conjunction with a lack of documentation on pl/Java)?

Thanks a lot in advance
Thomas




Re: [GENERAL] Noticed something odd with pgbench

2012-11-16 Thread Richard Huxton

On 16/11/12 19:35, Shaun Thomas wrote:

Hey guys,

So, we have a pretty beefy system that runs dual X5675's with 72GB of 
RAM. After our recent upgrade to 9.1, things have been... odd. I 
managed to track it down to one setting:


shared_buffers = 8GB



It does the same thing at 6GB. 4GB is safe for hours on end, but 6GB 
and 8GB implode within in minutes. During this, kswapd goes crazy 
paging out the cache, at the same time it's reading from disk to put 
them back in. It's like I fed the kernel poison or something.


Has anybody else noticed something like this? I got this behavior with 
9.1.6 on a 3.2 kernel. No amount of tweaks in /proc/sys/vm changed 
anything either, so I'm not convinced it's a NUMA problem.



Does this match what you're seeing?

http://frosty-postgres.blogspot.co.uk/2012/08/postgresql-numa-and-zone-reclaim-mode.html




Re: [GENERAL] deadlock detected

2012-11-05 Thread Richard Huxton

On 05/11/12 18:39, AI Rumman wrote:

Hi all,

I am using Postrgesql 9.1

I got a message in my log:
ERROR:  deadlock detected
DETAIL:  Process 20265 waits for ShareLock on transaction 27774015; 
blocked by process 20262.
Process 20262 waits for ShareLock on transaction 27774018; 
blocked by process 20265.
Process 20265: UPDATE t1 SET product_id = 1017966 WHERE 
ticketid = '2170501'

Process 20262: UPDATE c1 SET deleted=1 WHERE id='2170501'
HINT:  See server log for query details.
STATEMENT:  UPDATE t1 SET product_id = 1017966 WHERE ticketid = '2170501'

How may I get more information about this deadlock like which queries 
created it.


The error message shows which queries - your two UPDATEs. I'm guessing 
either t1 or c1 is a view, and so both refer to the same row with id '2170501'.
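
If you want the server to tell you more next time, you can enable 
lock-wait logging and, while a wait is in progress, join pg_locks 
against pg_stat_activity. A rough sketch for 9.1 (where the activity 
view's pid column is still procpid):

  -- postgresql.conf: log a message when a lock wait exceeds deadlock_timeout
  -- log_lock_waits = on

  -- While statements are blocked, see who waits on whom:
  SELECT w.pid            AS waiting_pid,
         wa.current_query AS waiting_query,
         h.pid            AS holding_pid,
         ha.current_query AS holding_query
  FROM pg_locks w
  JOIN pg_locks h ON h.transactionid = w.transactionid
                 AND h.granted AND NOT w.granted
  JOIN pg_stat_activity wa ON wa.procpid = w.pid
  JOIN pg_stat_activity ha ON ha.procpid = h.pid
  WHERE w.locktype = 'transactionid';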


--
  Richard Huxton





Re: [GENERAL] Recover from failed files

2012-11-05 Thread Richard Huxton

On 04/11/12 23:28, Nyamul Hassan wrote:
Upon inspecting the hard drive, we were able to salvage the "data" 
folder, but when we try to use it as the data directory of our PostgreSQL 
(8.2) installation, the server will not start.  Upon closer inspection, we 
suspect the culprit to be the file 0015 in pg_clog, which is 214kB 
instead of the 256kB size of all the other files in the same folder.


Is there any way that we can salvage the data from this data folder?


1. Make sure you have a complete backup of all of the data dir (pg_xlog, 
clog - everything) before making any changes.
This one is vital. Don't skip it. Any fixes you attempt may end up 
making things worse.


2. What is the exact error-message you get when trying to start up 
PostgreSQL?
When starting the server, try doing it directly in single-user mode 
until everything is up and running

http://www.postgresql.org/docs/8.2/static/app-postgres.html

3. Have a quick read of the blog-post below as a start point explaining 
the various files.
It's a useful introduction and can give you some keywords to search 
against.

http://it.toolbox.com/blogs/database-soup/pg_log-pg_xlog-and-pg_clog-45611


4. You may end up needing the pg_resetxlog tool, but don't just blindly run it:
http://www.postgresql.org/docs/8.2/static/app-pgresetxlog.html

5. Version 8.2 has been end-of-life since 2011.
Once your database is working again, dump the data and upgrade as 
soon as possible.
Until you upgrade, make sure you are running the last release, 8.2.23.


6. Obviously, make sure backups are scheduled regularly.

It may well be that you can get things up and running by just padding 
pg_clog/0015 to 256KB with zeroes. That doesn't mean your database will 
be 100% consistent though - transaction information will have been lost.


But before you do anything drastic, do steps #1 and #2.

--
  Richard Huxton




Re: [GENERAL] table logging

2012-10-29 Thread Richard Huxton

On 28/10/12 19:25, Jeff Janes wrote:

I am looking for some very simple table logging.  I am not trying to
do auditing in a hostile environment, just simple logging.

I found two candidates, tablelog from pgfoundry, and
http://wiki.postgresql.org/wiki/Audit_trigger_91plus

The first has the advantage of being simple, but hasn't been
maintained in 5 years which is a little worrisome.


I've got tablelog in use on one of my projects. I had to make one small 
fix when I upgraded the db to 9.1 - something to do with quote escaping. 
Can't remember the details I'm afraid.


Other than that, it seems to work fine.


--
  Richard Huxton
  Archonet Ltd




Re: [GENERAL] fgets failure in Solaris after patching

2012-07-27 Thread Stephan, Richard
Sorry, that was not the verbatim message, (a cut & paste mistake).

More information (9.0.8):

$ pg_ctl start
Killed
fgets failure: Error 0
The program "postgres" is needed by pg_ctl but was not found in the
same directory as "/opt/postgres/9.0/bin/pg_ctl".
Check your installation.

$ postgres -V
ld.so.1: postgres: fatal: libc.so.1: version `SUNW_1.22.7' not found (required 
by file /opt/postgres/9.0/bin/postgres)
ld.so.1: postgres: fatal: /usr/lib/libc.so.1: wrong ELF class: ELFCLASS32
Killed

$ ldd /opt/postgres/9.0/bin/postgres
libssl.so.0.9.8 =>   /opt/postgres/externalDeps/lib/libssl.so.0.9.8
libcrypto.so.0.9.8 =>
/opt/postgres/externalDeps/lib/libcrypto.so.0.9.8
libnsl.so.1 =>   /lib/64/libnsl.so.1
libsocket.so.1 =>/lib/64/libsocket.so.1
libm.so.2 => /lib/64/libm.so.2
libldap-2.4.so.2 =>  /opt/postgres/externalDeps/lib/libldap-2.4.so.2
libc.so.1 => /lib/64/libc.so.1
libc.so.1 (SUNW_1.22.7) =>   (version not found)
libdl.so.1 =>/lib/64/libdl.so.1
libgcc_s.so.1 => /usr/sfw/lib/64/libgcc_s.so.1
libmp.so.2 =>/lib/64/libmp.so.2
libmd.so.1 =>/lib/64/libmd.so.1
libscf.so.1 =>   /lib/64/libscf.so.1
liblber-2.4.so.2 =>  /opt/postgres/externalDeps/lib/liblber-2.4.so.2
libresolv.so.2 =>/lib/64/libresolv.so.2
libgen.so.1 =>   /lib/64/libgen.so.1
libsasl.so.1 =>  /usr/lib/64/libsasl.so.1
libgss.so.1 =>   /usr/lib/64/libgss.so.1
libdoor.so.1 =>  /lib/64/libdoor.so.1
libuutil.so.1 => /lib/64/libuutil.so.1
libcmd.so.1 =>   /lib/64/libcmd.so.1
/platform/SUNW,Sun-Fire-V890/lib/sparcv9/libc_psr.so.1
/platform/SUNW,Sun-Fire-V890/lib/sparcv9/libmd_psr.so.1


***
9.0.4 for comparison:

$ postgres -V
postgres (PostgreSQL) 9.0.4

$ ldd /opt/postgres/9.0/bin/postgres
libssl.so.0.9.8 =>   /opt/postgres/externalDeps/lib/libssl.so.0.9.8
libcrypto.so.0.9.8 =>
/opt/postgres/externalDeps/lib/libcrypto.so.0.9.8
libnsl.so.1 =>   /lib/64/libnsl.so.1
librt.so.1 =>/lib/64/librt.so.1
libsocket.so.1 =>/lib/64/libsocket.so.1
libm.so.2 => /lib/64/libm.so.2
libldap-2.4.so.2 =>  /opt/postgres/externalDeps/lib/libldap-2.4.so.2
libc.so.1 => /lib/64/libc.so.1
libdl.so.1 =>/lib/64/libdl.so.1
libgcc_s.so.1 => /usr/sfw/lib/64/libgcc_s.so.1
libmp.so.2 =>/lib/64/libmp.so.2
libmd.so.1 =>/lib/64/libmd.so.1
libscf.so.1 =>   /lib/64/libscf.so.1
libaio.so.1 =>   /lib/64/libaio.so.1
liblber-2.4.so.2 =>  /opt/postgres/externalDeps/lib/liblber-2.4.so.2
libresolv.so.2 =>/lib/64/libresolv.so.2
libgen.so.1 =>   /lib/64/libgen.so.1
libsasl.so.1 =>  /usr/lib/64/libsasl.so.1
libgss.so.1 =>   /usr/lib/64/libgss.so.1
libdoor.so.1 =>  /lib/64/libdoor.so.1
libuutil.so.1 => /lib/64/libuutil.so.1
libcmd.so.1 =>   /lib/64/libcmd.so.1
/platform/SUNW,Sun-Fire-V890/lib/sparcv9/libc_psr.so.1
/platform/SUNW,Sun-Fire-V890/lib/sparcv9/libmd_psr.so.1



Thank you for helping to point out where the actual problem lies.  The ldd 
command is showing that there is a library issue with trying to use the 9.0.8 
version.

libc.so.1 => /lib/64/libc.so.1
libc.so.1 (SUNW_1.22.7) =>   (version not found)

Richard

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: Friday, July 27, 2012 1:21 PM
To: Stephan, Richard
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] fgets failure in Solaris after patching

"Stephan, Richard"  writes:
> Rebuilt the PostgreSQL server software because we were patching up from 9.0.4 
> to 9.0.8.  Deployed software and received the following error when trying to 
> restart server.
> fgets failure: Error 0
> The program postgres is needed by pg_ctl but was not found in the same 
> directory as pg_ctl

Is that a verbatim copy of the error message?  When I try intentionally 
provoking this type of failure (by renaming the postgres executable out of the 
way), 9.0 pg_ctl gives me this:

$ pg_ctl start
The program "postgres" is needed by pg_ctl but was not found in the same 
directory as "/home/tgl/version90/bin/pg_ctl".
Check your installation.
$ 

The lack of double quotes and the lack of a full path to the pg_ctl program 
make me wonder if you're running some really old copy of pg_ctl instead of the 
9.0 version as intended.

Anyway, if you didn't copy-and-paste exactly, what the error indicates is that 
pg_ctl tried to execute "postgres -V" and didn't get the expected output from it.

[GENERAL] fgets failure in Solaris after patching

2012-07-27 Thread Stephan, Richard
Rebuilt the PostgreSQL server software because we were patching up from 9.0.4 
to 9.0.8.  Deployed the software and received the following error when trying 
to restart the server.

fgets failure: Error 0
The program postgres is needed by pg_ctl but was not found in the same 
directory as pg_ctl

9.0.4 works.  9.0.8 does not.




Re: [GENERAL] Notiffy problem

2012-06-29 Thread Richard Huxton

On 29/06/12 09:01, adasko98 wrote:

Hi
First, sorry for my English :) I have a problem with notify/listen.
I have a function which returns a trigger. Everything is OK, but when I
want to send a variable as the second parameter, NOTIFY says: "syntax error"



  NOTIFY demoApp, 'some text';   -- this works

  n_user := 'sda';
  NOTIFY demoApp, n_user;        -- <-- here is the problem


Looks like a limitation of the plpgsql parser, perhaps even counts as a 
bug. You can work around it with EXECUTE though, something like:

  cmd := 'NOTIFY demoApp, ' || quote_literal(n_user);
  EXECUTE cmd;
or just
  EXECUTE 'NOTIFY demoApp, ' || quote_literal(n_user);
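
On 9.0 and later there is also the pg_notify() function, which sidesteps 
the parser limitation without building a query string - a sketch:

  -- note: NOTIFY folds the channel name to lower case; pg_notify() does not
  PERFORM pg_notify('demoapp', n_user);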

--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Migrating from 8.2 to 9.1 : invalid port number

2012-05-23 Thread Welty, Richard
port numbers are restricted to 2 octets (16 bits), so the maximum is 
2^16 - 1 = 65535. they are TCP/IP entities and their size is fixed by the 
RFCs (internet standards.)

richard

-Original Message-
From: pgsql-general-ow...@postgresql.org on behalf of Alberto Zanon
Sent: Wed 5/23/2012 10:19 AM
To: Merlin Moncure
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Migrating from 8.2 to 9.1 : invalid port number
 
Thanks Merlin for the reply, 

when you define a "Unix domain socket" you can use any number, because it's 
just used for the filename, e.g. "/var/run/.s.PGSQL.123456". 
It works in PgPool and Postgres 8.2 with no problems. 


Alberto 

- Messaggio originale -
Da: "Merlin Moncure"  
A: "Alberto Zanon"  
Cc: pgsql-general@postgresql.org 
Inviato: Mercoledì, 23 maggio 2012 15:45:26 
Oggetto: Re: [GENERAL] Migrating from 8.2 to 9.1 : invalid port number 

On Wed, May 23, 2012 at 7:56 AM, Alberto Zanon 
 wrote: 
> Hi all, 
> 
> in my production environment I have Postgresql 8.2 + PgPool. The port number 
> I specify in PgPool configuration is not a real portnumber (e.g. 123456). I 
> can define a "dblink" as : 
> 
> dblink('dbname= host=/var/run port=123456 user= 
> password='::text, 'select ... '::text) t1( ... ); 
> 
> or start "psql" as: 
> 
> psql -U  -h /var/run -p 123456  
> 
> without problems. In my development environment I upgraded to 9.1 version 
> and the result is: 
> 
> invalid port number: "123456" 
> 
> in both cases. Does Postgresql 9.1 check the "port number" value although 
> I'm connecting through a "Unix domain socket"? Is it a bug or an intended 
> behavior? 
> I noticed that it accepts values up to 65535. 

65535 is the highest port number so what's happening here is 9.1 is 
just doing error checking on the input value which is totally normal 
and expected. why are you supplying a bogus port number? 

merlin 



Re: [GENERAL] Pre-built binary packages of PostgreSQL 9.2 Beta

2012-05-16 Thread Richard Broersma
Okay, should the 9.2 beta announcement and press releases be amended
to show this link rather than the ones posted?

On Wed, May 16, 2012 at 10:40 AM, Guillaume Lelarge
 wrote:
> On Wed, 2012-05-16 at 10:28 -0700, Richard Broersma wrote:
>> I've seen the following statement made several places.
>>
>> "Pre-built binary packages of PostgreSQL 9.2 Beta are available from
>> the project's downloads page for Windows, Mac OS X, Linux, FreeBSD and
>> Solaris."
>>
>> But looking in the following links does not produce any results:
>>
>> http://www.postgresql.org/download/
>> or
>> http://www.postgresql.org/download/windows/
>> or
>> http://www.enterprisedb.com/products-services-training/pgdownload#windows
>>
>> Are we waiting for a refresh on the download page?
>>
>
> Nope, download page is for stable releases AFAIUI.
>
> Alpha/Beta packages are available on
> http://www.postgresql.org/download/snapshots/
>
>
> --
> Guillaume
> http://blog.guillaume.lelarge.info
> http://www.dalibo.com
>



-- 
Regards,
Richard Broersma Jr.



[GENERAL] Pre-built binary packages of PostgreSQL 9.2 Beta

2012-05-16 Thread Richard Broersma
I've seen the following statement made several places.

"Pre-built binary packages of PostgreSQL 9.2 Beta are available from
the project's downloads page for Windows, Mac OS X, Linux, FreeBSD and
Solaris."

But looking in the following links does not produce any results:

http://www.postgresql.org/download/
or
http://www.postgresql.org/download/windows/
or
http://www.enterprisedb.com/products-services-training/pgdownload#windows

Are we waiting for a refresh on the download page?

-- 
Regards,
Richard Broersma Jr.



Re: [GENERAL] maybe incorrect regexp_replace behavior in v8.3.4 ?

2012-05-16 Thread Richard Huxton

On 16/05/12 14:54, Gauthier, Dave wrote:

bi_hsx_a0_latest=# select regexp_replace('xxx','^xxxy$',null);
 regexp_replace
----------------
 
(1 row)
But why did it return null in this case?  I would think no match would leave it 
'xxx'.


If a function is defined as "strict" then any null parameters 
automatically result in a null result.


And indeed, this:
  SELECT * FROM pg_proc WHERE proname LIKE 'regexp_r%';
shows proisstrict is set to true, as it is for most other functions.
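
If the intent is for a non-match (or an unknown replacement) to leave the 
string alone, avoid passing the function a NULL in the first place - a sketch:

  SELECT regexp_replace('xxx', '^xxxy$', '');                 -- no match, returns 'xxx'
  SELECT regexp_replace('xxx', '^xxxy$', coalesce(null, '')); -- guard a possibly-NULL value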

--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] relpages sudden increase

2012-05-09 Thread Richard Harley

On 09/05/12 00:00, Tomas Vondra wrote:

On 8.5.2012 19:27, Richard Harley wrote:

I currently do nightly database dumps on a ton of small dbs that are
increasing around 2-3mb per day. Suddenly, in a recent backup file, one
db in particular jumped from 55mb to 122mb overnight.

Well, I wouldn't call that 'massive size' but in my experience such
sudden changes in sizes are caused by one of these two things

(a) modification patterns + slow vacuuming

(b) batch updates (e.g. update of a column in the whole table)

Given that this is a one-time issue, I'd guess it's (b). Were you doing
any such updates or something like that?


No I wasn't doing any maintenance work compared to the days and weeks 
previously when size was increasing by a few mb a day.






I did some investigation -

One table increased from 8mb to 31mb during a 24hr period. The table is
just text so this is highly unusual given that the number of rows DIDN'T
increase any more than normal.

What do you mean by 'number of rows'? Is that number of live rows, i.e.
the number you get from

   SELECT COUNT(*) FROM ...

or the number you get from pg_class as reltuples?


I mean the number of live rows. So the size on disk went up unexpectedly 
but the rows increase was normal and the data in the rows was normal - 
just like previous days.






pg_toast increased from 8mb to 27mb during the same period.

The relpages for the table in question increased from 164 to 1088 during
the 24hr period. On the live db, the relpages is back to 164 but the
size of the table remains massive.

Hmmm, I wonder how the number of pages could drop, because that does not
happen unless you run VACUUM FULL / CLUSTER or such commands. And that
does not happen regularly.

Also, how could the table size remain massive when the number of pages
dropped to 164? Did you mean a different table or the whole database?


The same table. I imported the db dump that suddenly got bigger into a 
test db and the table in question has 1088 relpages. In the live db, 
same table, we're back down to 132 and no vacuuming has taken place ..
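
Worth remembering here: relpages is only refreshed by VACUUM and ANALYZE, 
so it can lag well behind the physical file. Comparing it with the real 
on-disk size makes any staleness visible - a sketch (the table name is a 
stand-in):

  SELECT c.relname, c.relpages, c.reltuples,
         pg_size_pretty(pg_relation_size(c.oid))       AS table_size,
         pg_size_pretty(pg_total_relation_size(c.oid)) AS with_toast_and_indexes
  FROM pg_class c
  WHERE c.relname = 'mytable';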


Thanks for your help

Rich
**



[GENERAL] relpages sudden increase

2012-05-08 Thread Richard Harley
I currently do nightly database dumps on a ton of small dbs that are 
increasing around 2-3mb per day. Suddenly, in a recent backup file, one 
db in particular jumped from 55mb to 122mb overnight.


I did some investigation -

One table increased from 8mb to 31mb during a 24hr period. The table is 
just text so this is highly unusual given that the number of rows DIDN'T 
increase any more than normal.


pg_toast increased from 8mb to 27mb during the same period.

The relpages for the table in question increased from 164 to 1088 during 
the 24hr period. On the live db, the relpages is back to 164 but the 
size of the table remains massive.


Using Centos5/postgres 8.1.21 at the moment. Planning to upgrade over 
summer but not had issues like this before on 8.1.


What gives?!

Thanks
Rich


Re: [GENERAL] How do I setup this Exclusion Constraint?

2012-05-01 Thread Richard Broersma
On Tue, May 1, 2012 at 10:15 AM, bradford  wrote:
> I'm trying to used what I learned in
> http://www.depesz.com/2010/01/03/waiting-for-8-5-exclusion-constraints/,
> but I cannot figure out how to apply this exclusion constraint to col1
> (integer) + col2 (varchar).

Take a look at the btree_gist module:
http://www.postgresql.org/docs/9.1/static/btree-gist.html

I think this is the part that you're missing.
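
For the record, a minimal sketch of what that looks like on 9.1 (table and 
column names invented to match your description):

  CREATE EXTENSION btree_gist;   -- adds GiST operator classes for plain scalar types

  CREATE TABLE demo (
      col1 integer,
      col2 varchar,
      EXCLUDE USING gist (col1 WITH =, col2 WITH =)
  );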


-- 
Regards,
Richard Broersma Jr.



Re: [GENERAL] How to add "on delete cascade" constraints

2012-04-27 Thread Richard Broersma
On Fri, Apr 27, 2012 at 12:40 PM, Alexander Farber
 wrote:
> So it's not a problem to drop and recreate the FOREIGN KEYs?
>
> And can I use START TRANSACTION while doing it?

You could, but you don't need to, since you can do all of this in one statement:

ALTER TABLE public.pref_scores
DROP CONSTRAINT pref_scores_gid_fkey,
ADD CONSTRAINT pref_scores_gid_fkey
FOREIGN KEY (gid)
REFERENCES pref_games(gid)
ON DELETE CASCADE;


-- 
Regards,
Richard Broersma Jr.



Re: [GENERAL] Re: how to set up automatically startup database when the server boot or reboot.

2012-04-27 Thread Welty, Richard
on RHEL and related linux systems (Fedora, CentOS, Amazon EC2 Linux), use this
command:

chkconfig postgresql on

to set up postgresql to start at boot. it needs to be executed as root.

richard

-Original Message-
From: pgsql-general-ow...@postgresql.org on behalf of leaf_yxj
Sent: Fri 4/27/2012 1:22 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Re: how to set up automatically startup database when the 
server boot or reboot.
 
My OS is Red Hat Linux 5.5, and my database is Greenplum 4.2.1 (PostgreSQL
8.2.15).  I will take a look at the init.d directory.

Thanks, guys. Any opinion is welcome. Please help.






Re: [GENERAL] Problem with reading data from standby server ?

2012-04-20 Thread Richard Huxton

On 20/04/12 09:39, Condor wrote:

Hello,

when I read the binary replication tutorial
(http://wiki.postgresql.org/wiki/Binary_Replication_Tutorial) I see, under
Hot Standby: "Hot Standby is identical to Warm Standby, except that the
Standby is available to run read-only queries."
I set up a hot standby server as described in the tutorial and it's working
fine, no problem with that.



> I have a problem when I try to start a script that
should read a whole table; the error message from php is:



PHP Warning: pg_query(): Query failed: ERROR: canceling statement due to
conflict with recovery



When data is fetched it's saved into a file after some modifications.
This script runs for 30-40 min until all the data is parsed. Well, I think
the problem starts when the master server sends a new wal file to the
slave - but how can I resolve that problem?


Your master database is being updated all the time and your slave is 
supposed to be a perfect copy, including deleted/updated rows being no 
longer visible. So - when you run a query it might need to do one of two 
things:

  1. Pause replication
  2. Cancel the query

At some point PostgreSQL switches from doing #1 to doing #2 (otherwise 
you could get so far behind that the replica could never catch up). You can 
control how long before it switches:


http://www.postgresql.org/docs/9.1/static/hot-standby.html#HOT-STANDBY-CONFLICT
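
The relevant knobs live in the standby's postgresql.conf; a sketch (the 
values are examples to tune, not recommendations):

  # Let standby queries delay WAL replay for up to 5 minutes before cancelling:
  max_standby_streaming_delay = 300s
  max_standby_archive_delay = 300s
  # Or (9.1+) have the standby ask the master to keep rows it still needs:
  hot_standby_feedback = on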


--
  Richard Huxton
  Archonet Ltd



[GENERAL] somewhat high profile project using PostgreSQL

2012-04-18 Thread Welty, Richard
some of you may have seen this in the NYT two weeks ago:

http://bits.blogs.nytimes.com/2012/04/05/pentagon-pushes-crowdsourced-manufacturing/

just FYI, the database being used by the MIT/GE team is PostgreSQL 9.1.3

cheers,
   richard


[GENERAL] recommended schema diff tools?

2012-04-12 Thread Welty, Richard
can anyone recommend an open source tool for diffing schemas?

(it should go without saying that i'm looking for ddl to update production and 
QA DBs from development DBs, but i'll say it, just in case.)

thanks,
   richard


Re: [GENERAL] 9.1.3: launching streaming replication

2012-04-03 Thread Welty, Richard
thanks for the suggestions. the light has gone on and i have it working as of 
about 15 minutes ago. i'm going to revisit the documentation and possibly make 
suggestions about making things a little clearer, or else issue a mea culpa 
about my reading comprehension. don't know which just yet.

richard

-Original Message-
From: Michael Nolan [mailto:htf...@gmail.com]
Sent: Mon 4/2/2012 7:19 PM
To: Welty, Richard
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] 9.1.3: launching streaming replication
 
On Mon, Apr 2, 2012 at 4:21 PM, Welty, Richard  wrote:

I got similar messages the first few times I tried to start up my slave
server, I never did figure out exactly what caused it.

You can either delete all the files on the slave and try again, or do what
I did, write a script that handles transferring just the files needed to
resync the slave.

Here's the script I've been using to transfer the files between my two
servers to resync them. This is not a production-ready script.

I have a second tablespace, so there are two 'data' transfers plus the xlog
transfer.  (You may run into issues transferring the pg_tblspc directory,
as I did, hence the '--safe-links' parameter.) The '--delete' option deletes
any files on the slave that aren't on the server, unless you list them in
an '--exclude' clause.

/usr/local/pgsql/bin/psql -c "select pg_start_backup('tardir',true)"
postgres postgres

rsync -av --exclude log.out --exclude postgresql.conf \
--exclude postgresql.pid --delete --exclude pg_hba.conf \
--exclude pg_xlog --exclude server.crt --exclude server.key \
--exclude restore.conf --exclude restore.done \
--safe-links /usr/local/pgsql/data/ postgres@xxx:/usr/local/pgsql/data

rsync -av /usr/local/pgsql/data2/ postgres@xxx:/usr/local/pgsql/data2

/usr/local/pgsql/bin/psql -c "select pg_stop_backup()" postgres postgres

rsync -av /usr/local/pgsql/data/pg_xlog postgres@xxx:/usr/local/pgsql/data/

echo "ok to start standby"

--
Mike Nolan



[GENERAL] 9.1.3: launching streaming replication

2012-04-02 Thread Welty, Richard
i have a server in the ec2 cloud which in theory is set up as a master; it 
starts and runs. i've got an amazon s3 bucket mounted using s3fs on both the 
master and the standby (the standby is also set up in the ec2 cloud.)

i followed the steps here: http://wiki.postgresql.org/wiki/Streaming_Replication
and have the backup and the wal archive on the s3 bucket and they are both 
there.

when i go to start the hot standby, i pretty consistently get


LOG:  entering standby mode
cp: cannot stat `/db-backup/wal_archive/00010001': No such file 
or directory
WARNING:  WAL was generated with wal_level=minimal, data may be missing
HINT:  This happens if you temporarily set wal_level=minimal without taking a 
new base backup.
FATAL:  hot standby is not possible because wal_level was not set to 
"hot_standby" on the master server
HINT:  Either set wal_level to "hot_standby" on the master, or turn off 
hot_standby here.
LOG:  startup process (PID 29938) exited with exit code 1
LOG:  aborting startup due to startup process failure


and the startup of the hot standby fails. the Riggs admin cookbook says "you 
will need a short delay", but has no guesstimate on the length required. i 
don't even know if i'm seeing this problem or something else.

if i need to run a new backup, what cleanup do i need to do of old backups and 
wal_archives? could this be interfering with the startup of the standby? i've 
gone through several iterations and fixed some problems, and wonder if there's 
obsolete data that is messing things up?

thanks,
   richard


Re: [GENERAL] default value returned from sql stmt

2012-03-30 Thread Richard Huxton

On 30/03/12 08:46, Pavel Stehule wrote:

2012/3/30 Richard Huxton:

On 29/03/12 23:28, Pavel Stehule wrote:


select anum from t1 where anum = 4
union all select 100 limit 1;



I'm not sure the ordering here is guaranteed by the standard though, is it?
You could end up with the 4 being discarded.


Order is random only for "UNION"; "UNION ALL" should respect
order. But I didn't check that in the standard.


Let's put it this way - a quick bit of googling can't find anything that 
says the order *is* guaranteed, and (almost?) no other operations do so 
by default.
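
If you want the fallback without leaning on an undocumented ordering, you 
can make the preference explicit - a sketch:

  SELECT anum FROM (
      SELECT anum, 0 AS pref FROM t1 WHERE anum = 4
      UNION ALL
      SELECT 100, 1
  ) AS s
  ORDER BY pref
  LIMIT 1;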


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] default value returned from sql stmt

2012-03-30 Thread Richard Huxton

On 29/03/12 23:28, Pavel Stehule wrote:

select anum from t1 where anum = 4
union all select 100 limit 1;


I'm not sure the ordering here is guaranteed by the standard though, is 
it? You could end up with the 4 being discarded.


--
  Richard Huxton
  Archonet Ltd



[GENERAL] configuring RAID10 for data in Amazon EC2 cloud?

2012-03-27 Thread Welty, Richard
does anyone have any tips on this? Linux Software Raid doesn't seem to be doing 
a very good job here, but i may well have missed something.

i did a fairly naive setup using linux software raid on an amazon linux 
instance,
10 volumes (8G each), (WAL on a separate EBS volume) with the following setup:

mdadm -v --create /dev/md1 --level=raid10 --raid-devices=10 /dev/xvdg /dev/xvdh 
/dev/xvdi /dev/xvdj /dev/xvdk /dev/xvdl /dev/xvdm /dev/xvdn /dev/xvdo /dev/xvdp

pvcreate /dev/md1

vgcreate vg-pgdata /dev/md1

vgdisplay vg-pgdata

lvcreate -L39.98g -nlv-pgdata vg-pgdata


this particular instance is running about a factor of two slower than a simple 
single disk instance. both the single disk instance and the one with RAID10 for 
~postgres/data/base
started from amazon m1.xlarge instances.

postgresql version is 8.4.9, using a simple pgbench test for 600 seconds; the 
single disk instance shows this:

dbDev, single disk, shared_buffers=4GB, effective_cache_size=8GB
   disk mounted noatime, readahead 4096, other stuff default

-bash-4.1$ pgbench -T 600 bench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 1
duration: 600 s
number of transactions actually processed: 535018
tps = 891.696072 (including connections establishing)
tps = 891.704512 (excluding connections establishing)


and the RAID10 instance shows this:

dbQA, wal+raid10 setup, ext3 for WAL, ext4 for raid10,
  shared_buffers=2GB, effective_cache_size=3GB
  readahead 10240, wal&raid mount noatime, journal=ordered
  vm.swappiness=0,vm.overcommit_memory=2, dirty_ratio=2,
  dirty_background_ratio=1

starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 1
duration: 600 s
number of transactions actually processed: 261513
tps = 435.854738 (including connections establishing)
tps = 435.858853 (excluding connections establishing)



Re: [GENERAL] Desperately need a magical PG monitoring tool

2012-03-26 Thread Richard Huxton

On 26/03/12 19:58, Andreas wrote:

Hi,

is there a tool for monitoring PG servers?



How do you watch that all runs well ?


There are a number of tools. You might want to google around:
- nagios
- monit
- munin
There are plenty of others.

Nagios is aimed at multi-server service monitoring (and alerting). So 
you can keep track of 20 websites on 5 different servers etc.


Monit is more focused on monitoring/alerting/restarting on a single server.

Munin is about performance tracking and graphing. You can set it up to 
alert if parameters get outside a set range.



For your scenario, I'd consider restoring the backup to another database 
(on another server perhaps) and checking some suitable value (e.g. a max 
timestamp in a frequently updated table). You could do all this from a 
simple cron-job + perl script but you might want to consider one of the 
tools mentioned above.
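
As a concrete example of that check - after restoring last night's dump 
into a scratch database, compare a recent timestamp against now(); the 
table and column names here are stand-ins:

  SELECT CASE
             WHEN max(updated_at) > now() - interval '1 day' THEN 'OK'
             ELSE 'STALE BACKUP'
         END AS status
  FROM busy_table;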


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] current thinking on Amazon EC2?

2012-03-19 Thread Welty, Richard



On Mon 3/19/2012 4:30 PM Mike Christensen writes:

>I've been running my site on RackSpace CloudServers (similar to EC2)
>and have been getting pretty good performance, though I don't have
>huge amounts of database load.

>One advantage, though, is RackSpace allows for hybrid solutions so I
>could potentially lease a dedicated server and continue to host my web
>frontend servers on the cloud.

that's good to know, although for the project i'm working on, EC2 is
what we have to work with, good parts and bad parts and all.

richard


Re: [GENERAL] Indexing MS/Open Office and PDF documents

2012-03-15 Thread Richard Huxton

On 15/03/12 21:12, Jeff Davis wrote:

On Fri, 2012-03-16 at 01:57 +0530, alexander.bager...@cognizant.com



We have a
hard time identifying MS/Open Office and PDF parsers to index stored
documents and make them available for text searching.



The first step is to find a library that can parse such documents, or
convert them to a format that can be parsed.


I've used docx2txt and pdf2txt and friends to produce text files that I 
then index during the import process. An external script runs the whole 
process. All I cared about was extracting raw text though, this does 
nothing to identify headings etc.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Backups

2012-03-15 Thread Richard Harley
Thanks for a quick reply. The server has 6 cores, 6GB ram and top gets 
to 2.3-2.5 load average when running the dumpall. So I assume we are 
nowhere near this causing performance issues for users?

Thanks
Rich




On 15/03/12 12:21, Bèrto ëd Sèra wrote:

Hi Richard,

it's no easy answer. If your server has plenty of free resources there 
won't be trouble, but I do have customers who cannot even imagine 
launching a dump during normal traffic hours. How loaded is your box, 
currently?


Cheerio
Bèrto

On 15 March 2012 12:15, Richard Harley  wrote:


Hello all

Very simple question -  does pg_dump/dumpall hit the server in
terms of database performance? We currently do nightly backups and
I want to move to hourly backups but not at the expense of hogging
all the resources for 5 mins.

Pg_dumpall is currently producing a 1GB file - that's the combined
size of around 30 databases and it takes around 5 mins to run.

Thanks
Rich




--
==
If Pac-Man had affected us as kids, we'd all be running around in a 
darkened room munching pills and listening to repetitive music.




[GENERAL] Backups

2012-03-15 Thread Richard Harley

Hello all

Very simple question -  does pg_dump/dumpall hit the server in terms of 
database performance? We currently do nightly backups and I want to move 
to hourly backups but not at the expense of hogging all the resources 
for 5 mins.


Pg_dumpall is currently producing a 1GB file - that's the combined size 
of around 30 databases and it takes around 5 mins to run.


Thanks
Rich


Re: [GENERAL] 100% cpu usage on some postmaster processes kill the complete database

2012-03-01 Thread Richard Huxton

On 01/03/12 19:41, Paul Dunkler wrote:

I did that now - and analyzed the situation a bit. There are only queries
running which should process very fast even under high load (only index scans,
very low rates of sequential scans). I found a remarkable number of INSERT
statements...

And sometimes when that happens, the CPU utilization goes up to nearly 100%
too, and 98% of it is system usage...


You're running on a box larger than I'm used to, so this is only 
speculation. I'm wondering whether you're hitting problems with lock 
contention or some such. It looks like you've got 48 cores there, all at 
about 100%, with possibly none of them getting much chance to do any work.


Oddly, the totals you posted in your top output show 6.3% user cpu 
usage, which I can't reconcile with 50-odd processes all approaching 
100% cpu.


Perhaps have a look at vmstat output too - see if context-switches spike 
unusually high during these periods (sorry - no idea what an unusually 
high number would be on a machine like yours).


Reducing the number of concurrent backends might help, but that rather 
depends on whether my guess is right.


If no-one more experienced than me comes along shortly, try reposting to 
the performance list. There are people there who are used to machines of 
this size.


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] 100% cpu usage on some postmaster processes kill the complete database

2012-03-01 Thread Richard Huxton

On 01/03/12 16:41, Paul Dunkler wrote:

Hi List,

we are currently running a rather large PostgreSQL installation with 
approximately 4k transactions and 50k index scans per second.

In the last few days, at some times of the day (irregular - 3-4 times a day), 
some of the postmaster processes run at 100% cpu usage. That leads to a 
total breakdown of query execution. We see tons of statements which are 
correctly and automatically aborted by our statement_timeout of 15 seconds. I 
tried to search, but cannot really work out what the problem could be...

Some things I have checked:
- We are not running any bulk jobs or maintenance scripts at this time
- No system errors in any logs during these slowdowns
- I/O performance seems fine. No high IO wait... But IO writes drop off 
completely at those times, because it seems no postgres process can perform 
any update

I just installed a script which prints out the top and ps axf information 
to pin down the problem. I will post a snippet of the top output here:


Combine that with this:
  SELECT * FROM pg_stat_activity;

That will let you line up pids from top with active queries.
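
On 9.1 and earlier the interesting columns are procpid and current_query, 
so something like this lines up with the PIDs in top - a sketch:

  SELECT procpid, usename, waiting, query_start, current_query
  FROM pg_stat_activity
  WHERE current_query <> '<IDLE>'
  ORDER BY query_start;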

--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] problem trying to create a temp table

2012-02-24 Thread Richard Huxton

On 24/02/12 13:37, Andrew Gould wrote:

On Fri, Feb 24, 2012 at 7:32 AM, Richard Huxton  wrote:


Temp tables get their own schema, and each session (connection) gets
its own temp schema. So - don't qualify them by schema.


Is that to avoid naming conflicts between simultaneous users?


Yes. I believe it is also invisibly added to your search_path, the 
same as the pg_catalog schema is.
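
You can see the implicit entries by asking for the effective search path - 
a sketch:

  CREATE TEMP TABLE demo (id int);
  SELECT current_schemas(true);
  -- e.g. {pg_temp_2,pg_catalog,public}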


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] problem trying to create a temp table

2012-02-24 Thread Richard Huxton

On 24/02/12 13:36, mgo...@isstrucksoftware.net wrote:

How do I access it? I just created it, but when I try to access it I get:
ERROR: relation "sessionsetting" does not exist
LINE 1: select * from "sessionsetting"


=> CREATE SCHEMA foo;
CREATE SCHEMA
=> CREATE TABLE foo.table1 (id int);
CREATE TABLE
=> SET search_path = foo;
SET
=> INSERT INTO table1 VALUES (1),(2),(3);
INSERT 0 3
=> CREATE TEMP TABLE table1 (id int);
CREATE TABLE
=> INSERT INTO table1 VALUES (4),(5),(6);
INSERT 0 3
=> SELECT * FROM table1;
 id
----
  4
  5
  6
(3 rows)
=> DROP TABLE table1;
DROP TABLE
=> SELECT * FROM table1;
 id
----
  1
  2
  3
(3 rows)

Try "SELECT * FROM pg_namespace" to see the various temp schemas being 
created.


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] problem trying to create a temp table

2012-02-24 Thread Richard Huxton

On 24/02/12 13:26, mgo...@isstrucksoftware.net wrote:


ALL,
Using 9.1.2 on Windows 7 X64 for development.
I'm trying to create a temporary table used to store session variables
 CREATE TEMP TABLE iss.sessionsettings



When I try and run this I get the following error message.
ERROR: cannot create temporary relation in non-temporary schema

Temp tables get their own schema, and each session (connection) gets
its own temp schema. So - don't qualify them by schema.

--
  Richard Huxton
  Archonet Ltd




Re: [GENERAL] Problemas com client_encoding ?

2012-02-24 Thread Richard Huxton

On 24/02/12 02:34, Emanuel Araújo wrote:

[user@local ~]$ psql
psql: invalid connection option "client_encoding"



1st server:
OS - CentOS 5.7 (Final)
PostgreSQL 9.1.1


Apologies - my Spanish is non-existent (that's assuming your email 
wasn't in Portuguese or some such).


http://archives.postgresql.org/pgsql-admin/2011-09/msg00088.php
http://archives.postgresql.org/pgsql-admin/2011-09/msg00101.php

Do you have version 9.0 installed too?

--

  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Postgresql as main database

2012-02-23 Thread Richard Huxton

Hello all,
 Good day,
I would like to make PostgreSQL 8.4 the main database for running three 
pieces of software, possibly exchanging data between them. Is that 
possible? If yes, what are the implications in terms of performance?


It's certainly possible. Obviously it will need a bigger machine than 
you would use for just one of the databases.


Bear in mind that you can't have cross-database queries without using 
something like the dblink package. I don't know if that affects your 
"exchanging data".


Oh - and unless you really have no choice in the matter, use 9.1 rather 
than 8.4 - you will get better performance, new features and it will be 
supported for longer.


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] windows 2008 scheduled task problem

2012-02-06 Thread Richard Sickler
Ralph,

It may help to break this into a couple of parts.
Make a folder called c:\cronjobs.
In c:\cronjobs, create a bat file that contains the commands you want
executed.

Maybe something like

set logfile=shp2pgsql.log
echo Running shp2pgsql > %logfile%
date /t >> %logfile%
time /t >> %logfile%
shp2pgsql -s 900913 -I -d myshape.shp gis.myshape -u xxx -p yyy >
myshape.txt

Get everything working as expected when you invoke the bat file. Once you
have that working, then set up a scheduled task to call the bat file.

Rich

On Mon, Feb 6, 2012 at 8:42 AM, Susan Cassidy  wrote:

> See my reply below:
>
> From: pgsql-general-ow...@postgresql.org [mailto:
> pgsql-general-ow...@postgresql.org] On Behalf Of Ralph Dell
> Sent: Monday, February 06, 2012 8:26 AM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] windows 2008 scheduled task problem
>
> I am running postgresql 8.4.3 / postgis 1.5 on a windows server 2008 R2.
>
> I am unable to get the shp2pgsql command to run as a scheduled task. There
> is no problem running any of the commands from the command line or a python
> script.
>
> Some sample commands are
>
> shp2pgsql -s 900913 -I -d myshape.shp gis.myshape -u xxx -p yyy >
> myshape.txt
>
> or
>
> %POSTGIS_PATH%\shp2pgsql -s 900913 -I -d myshape.shp gis.myshape -u xxx -p
> yyy > myshape.txt
>
> or
>
> shp2pgsql -s 900913 -I -d myshape.shp gis.myshape -u xxx -p yyy | psql -d
> mydb -U xxx -q
>
> I do not get any output from the scheduled task.
>
> The history tab in the task scheduler will show Task triggered / task
> started / task completed, and report the task successfully completed.
>
> I am running the tasks under a system administrator account.
>
> Any suggestions from someone who has a similar environment?
>
> This is my first post to the list and I hope it is an appropriate place
> for the question, and that I have provided enough detail.
>
> Ralph Dell
>
> I don't know about Win 2008, but under XP, I have to use the full path of
> any programs I have in a scheduled job.
>
> Susan


Re: [GENERAL] function return update count

2012-01-06 Thread Richard Huxton

On 06/01/12 16:33, David Johnston wrote:

In 9.1 you could use an updatable CTE and in the main query perform
and return a count.  I would think plpgsql would be the better option
though.


For the SQL option, it would be this (9.1 only though - I think David's 
right there).


CREATE FUNCTION f1() RETURNS int AS $$
WITH rows AS (
UPDATE t1 SET ... WHERE ... RETURNING 1
)
SELECT count(*)::int FROM rows
$$ LANGUAGE SQL;


Unfortunately you can't do UPDATE ... RETURNING count(*) directly so you 
need to go through this business with the CTE (WITH clause).


Oh - the cast to int is because count() returns bigint.
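
For completeness, the plpgsql route David mentions uses GET DIAGNOSTICS 
and works on older releases too - a sketch, with the same elided UPDATE:

  CREATE FUNCTION f1_plpgsql() RETURNS int AS $$
  DECLARE
      n int;
  BEGIN
      UPDATE t1 SET ... WHERE ...;   -- same placeholder UPDATE as above
      GET DIAGNOSTICS n = ROW_COUNT;
      RETURN n;
  END;
  $$ LANGUAGE plpgsql;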

--
  Richard Huxton
  Archonet Ltd


