Re: [GENERAL] heavy swapping, not sure why

2011-08-30 Thread Scott Marlowe
On Tue, Aug 30, 2011 at 12:54 AM, Florian Weimer  wrote:
> * Scott Marlowe:
>
>> On a machine with lots of memory, I've run into pathological behaviour
>> with both the RHEL 5 and Ubuntu 10.04 kernels where the kswapd starts
>> eating up CPU and swap io like mad, while doing essentially nothing.
>> Setting swappiness to 0 delayed this behaviour but did not stop it.
>> Given that I'm on a machine with 128G ram, I just put "/sbin/swapoff
>> -a" in /etc/rc.local and viola, problem solved.
>
> Was this NUMA machine?  Some older kernels can only migrate pages
> between nodes through swap.

48 core AMD Magny Cours.  I also had to turn off zone_reclaim_mode on
it or it would slow to a crawl as the OS started to cache more and
more data.
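
For reference, these are roughly the knobs involved (a sketch for RHEL 5 /
Ubuntu 10.04 era kernels; check your distro before making any of it
permanent):

  # see whether zone reclaim is on, and turn it off
  cat /proc/sys/vm/zone_reclaim_mode
  sysctl -w vm.zone_reclaim_mode=0

  # discourage swapping, or drop swap entirely as described above
  sysctl -w vm.swappiness=0
  /sbin/swapoff -a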



Re: [GENERAL] heavy swapping, not sure why

2011-08-30 Thread Allan Kamau
On Tue, Aug 30, 2011 at 10:39 AM, Scott Marlowe  wrote:
> On Tue, Aug 30, 2011 at 12:54 AM, Florian Weimer  wrote:
>> * Scott Marlowe:
>>
>>> On a machine with lots of memory, I've run into pathological behaviour
>>> with both the RHEL 5 and Ubuntu 10.04 kernels where the kswapd starts
>>> eating up CPU and swap io like mad, while doing essentially nothing.
>>> Setting swappiness to 0 delayed this behaviour but did not stop it.
>>> Given that I'm on a machine with 128G ram, I just put "/sbin/swapoff
>>> -a" in /etc/rc.local and viola, problem solved.
>>
>> Was this NUMA machine?  Some older kernels can only migrate pages
>> between nodes through swap.
>
> 48 core AMD Magny Cours.  I also had to turn off  zone reclaim mode on
> it or it would slow to a crawl as the OS started to cache more and
> more data.
>


I have experienced such behaviour on Ubuntu 64bit OS but not on Fedora
64bit OS. Try installing Fedora on a test machine and see if you run
into the high memory consumption problem(s).

Allan.



Re: [GENERAL] heavy swapping, not sure why

2011-08-30 Thread Sim Zacks

> On a machine with lots of memory, I've run into pathological behaviour
> with both the RHEL 5 and Ubuntu 10.04 kernels where the kswapd starts
> eating up CPU and swap io like mad, while doing essentially nothing.
> Setting swappiness to 0 delayed this behaviour but did not stop it.
> Given that I'm on a machine with 128G ram, I just put "/sbin/swapoff
> -a" in /etc/rc.local and voilà, problem solved.

I've tried running without swap and the problem is if you actually
do run out of memory then the process killer can take out your
postgresql.

Sim



Re: [GENERAL] heavy swapping, not sure why

2011-08-30 Thread Sim Zacks

> Yes, a few hundred MB of swap, and it's definitely making a huge
> difference.  Upon restarting postgres, it's all freed up, and then perf
> is good again.  Also, this box only has 1GB of swap total, so it's
> never going to get up to a few dozen GB.
>
> Anyway, here's some of top output for systemA right now:

Instead of restarting the database try swapoff -a && swapon -a
and see if that helps performance. If it is that little swap in
use, it might be something else clogging up the works.

Sim



Re: [GENERAL] heavy swapping, not sure why

2011-08-30 Thread Scott Marlowe
On Tue, Aug 30, 2011 at 2:50 AM, Sim Zacks  wrote:
>
> On a machine with lots of memory, I've run into pathological behaviour
> with both the RHEL 5 and Ubuntu 10.04 kernels where the kswapd starts
> eating up CPU and swap io like mad, while doing essentially nothing.
> Setting swappiness to 0 delayed this behaviour but did not stop it.
> Given that I'm on a machine with 128G ram, I just put "/sbin/swapoff
> -a" in /etc/rc.local and viola, problem solved.
>
> I've tried running without swap and the problem is if you actually do run
> out of memory then the process killer can take out your postgresql.

My postgres is configured to never use more than 10 or 20 gig at max
load.  That leaves 80+ gig for the kernel to cache the database.
work_mem is pretty small (8MB) given the maximum connections (1000),
so about 8 gig if everybody sorts at once.  Even under very heavy load,
memory usage has never really gone anywhere near that high.



Re: [GENERAL] heavy swapping, not sure why

2011-08-30 Thread Scott Marlowe
On Tue, Aug 30, 2011 at 2:50 AM, Sim Zacks  wrote:
>
> Instead of restarting the database try swapoff -a && swapon -a and see if
> that helps performance. If it is that little swap in use, it might be
> something else clogging up the works.

Check to see if kswapd is going crazy or not.  If it is, that's the
pathological behavior I was talking about earlier.  It works real hard
at doing nothing and the machine grinds to a halt.  swapoff -a fixed
it.  And it was happening about every 2 weeks for me too.
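
A quick way to check (a sketch, assuming stock procps tools):

  # is kswapd burning CPU?
  top -b -n 1 | grep -i kswapd

  # watch the si/so columns for swap churn
  vmstat 5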



Re: [GENERAL] heavy swapping, not sure why

2011-08-30 Thread Boszormenyi Zoltan
Hi,

On 2011-08-29 22:36, Lonni J Friedman wrote:
> ...  I read that
> (max_connections * work_mem) should never exceed physical RAM, and if
> that's accurate, then I suspect that's the root of my problem on
> systemA (below).

work_mem is process-local memory, so you want

(max_connections * work_mem) < (physical RAM - shared_buffers)

Some queries may allocate multiples of work_mem, too.

Also, the kernel uses some memory for internal accounting and caching,
and you need to account for the process binaries in memory.

>   However, I'd like confirmation before I start
> tweaking things, as one of these servers is in production, and I can't
> easily tweak settings to experiment (plus this problem takes a few
> weeks before swapping gets bad enough to impact performance).
>
> A few examples:
>
> 0) system A: 56GB RAM, running postgresql-8.4.8 with the following parameters:
> maintenance_work_mem = 96MB
> effective_cache_size = 40GB
> work_mem = 256MB
> wal_buffers = 16MB
> shared_buffers = 13GB
> max_connections = 300

RAM (56GB) - shared_buffers (13GB) = 43GB

which is less than

work_mem * max_connections = 300 * 0.25GB = 75GB

The system would start swapping once roughly 43GB/0.25GB = 172 clients use
their full work_mem at the same time, well before max_connections.

> 1) system B: 120GB RAM, running postgresql-9.0.4 with the following 
> parameters:
> maintenance_work_mem = 1GB
> effective_cache_size = 88GB
> work_mem = 576MB
> wal_buffers = 4MB
> shared_buffers = 28GB
> max_connections = 200

Similarly:

120GB - 28GB = 92GB

is less than

work_mem * max_connections = 200 * 576MB = 112.5GB

Also, if you run anything else on the machine then the system would start
swapping much sooner than hitting max_connections number of clients.

I would never set work_mem that high by default. 8 - 16MB is usually
enough for the common case and you can set work_mem for special
queries from the client and then reset it.
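
For example (just a sketch; the table and column names are made up):

  SET work_mem = '256MB';
  SELECT * FROM big_table ORDER BY some_column;   -- the one query that needs it
  RESET work_mem;

  -- or scope it to a single transaction
  BEGIN;
  SET LOCAL work_mem = '256MB';   -- reverts automatically at COMMIT/ROLLBACK
  SELECT * FROM big_table ORDER BY some_column;
  COMMIT;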

Best regards,
Zoltán Böszörményi

-- 
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/




Re: [GENERAL] heavy swapping, not sure why

2011-08-30 Thread Venkat Balaji
It is recommended to identify the queries that actually need a high work_mem
and set the higher value only at the session level for those.

In this case, all of the connections using their maximum work_mem at once is
the potential threat. As Zoltan said, work_mem is very high, and shared_buffers
as well.

Other considerations would be the following -

 - Allocated kernel memory settings (like shmmax, shmall, etc.) -- see the
   shell sketch below
 - How much memory is used up by system-level processes (root and other
   non-pg users)
 - The database size and the amount of data being accessed across CPUs and
   memory
 - Whether unnecessary data is being read into memory (queries hitting
   bloated, non-vacuumed tables, slow-performing queries, unnecessary full
   table scans, etc.)
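
A rough shell sketch of some of those checks (assumes Linux with GNU ps):

  # kernel shared memory limits and what is actually allocated
  sysctl kernel.shmmax kernel.shmall
  ipcs -m

  # biggest resident processes, postgres or otherwise
  ps -eo pid,user,rss,comm --sort=-rss | head -20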

Regards,
Venkat

On Tue, Aug 30, 2011 at 3:30 PM, Boszormenyi Zoltan  wrote:

> Hi,
>
> On 2011-08-29 22:36, Lonni J Friedman wrote:
> > ...  I read that
> > (max_connections * work_mem) should never exceed physical RAM, and if
> > that's accurate, then I suspect that's the root of my problem on
> > systemA (below).
>
> work_mem is process-local memory so
>
> (max_connections * work_mem)  < (physical RAM - shared_buffers)
>
> Some queries may allocate multiples of work_mem, too.
>
> Also, the kernel uses some memory for internal accounting, caching
> and you need to account for the process binary in memory.
>
> >   However, I'd like confirmation before I start
> > tweaking things, as one of these servers is in production, and I can't
> > easily tweak settings to experiment (plus this problem takes a few
> > weeks before swapping gets bad enough to impact performance).
> >
> > A few examples:
> >
> > 0) system A: 56GB RAM, running postgresql-8.4.8 with the following
> parameters:
> > maintenance_work_mem = 96MB
> > effective_cache_size = 40GB
> > work_mem = 256MB
> > wal_buffers = 16MB
> > shared_buffers = 13GB
> > max_connections = 300
>
> RAM (56GB) - shared_buffers (13GB) = 43GB
>
> which is less than
>
> work_mem * max_connections = 300 * 0.25GB = 75GB
>
> The system would start swapping before 43GB/0.25GB = 172 clients.
>
> > 1) system B: 120GB RAM, running postgresql-9.0.4 with the following
> parameters:
> > maintenance_work_mem = 1GB
> > effective_cache_size = 88GB
> > work_mem = 576MB
> > wal_buffers = 4MB
> > shared_buffers = 28GB
> > max_connections = 200
>
> Similarly:
>
> 120GB - 28GB = 92GB
>
> is less than
>
> work_mem * max_connections = 200 * 576MB = 112.5GB
>
> Also, if you run anything else on the machine then the system would start
> swapping much sooner than hitting max_connections number of clients.
>
> I would never set work_mem that high by default. 8 - 16MB is usually
> enough for the common case and you can set work_mem for special
> queries from the client and then reset it.
>
> Best regards,
> Zoltán Böszörményi
>
> --
> --
> Zoltán Böszörményi
> Cybertec Schönig & Schönig GmbH
> Gröhrmühlgasse 26
> A-2700 Wiener Neustadt, Austria
> Web: http://www.postgresql-support.de
> http://www.postgresql.at/
>
>


[GENERAL] Postgresql-9.0.1 Recovery

2011-08-30 Thread Venkat Balaji
Hello Everyone,

I have a situation here -

I am trying to restore the production online backup and recover the same.

 - I had initially rsynced the data directory (excluding pg_log) and then
tarred and zipped the same
 - SCP'd the tar to a different server and untarred and unzipped the same
 - I got an error "unable to read  from pg_clog location" (the file
size is around 160K)

I have a backup as of Aug 24th, and the pg_clog file which pg is unable to
read is as of Aug 5th.

I am aware that the size of the files in pg_clog is supposed to be 256K.

What I understood is that rsync somehow missed syncing the files in
"pg_clog", so I manually copied the missing pg_clog file from production
and tried recovery.

To my surprise, PG recovery then asked for the corresponding pg_xlog (WAL
archive) file as of Aug 5th.

Is there a way to recover this?
Do I need to get that particular WAL archive file, which is from before the
online backup time?

I do have all the other files up to Aug 24th.

From this experience, what I understand is that PostgreSQL stores committed
and uncommitted transactions in pg_xlog / WAL archive files, while information
about transaction commit status (not the transaction data itself) is stored in
pg_clog. Am I correct?

I am in the process of designing a disaster recovery plan for our
production systems.

Version - PG-9.0.1
OS - RHEL 5

Please advise!

Regards,
Venkat


Re: [GENERAL] heavy swapping, not sure why

2011-08-30 Thread Daniel Verite
Lonni J Friedman wrote:

> ok, I'll do my best to capture this data, and then reply back.

If using linux, you should find interesting data on per-process swap and
memory usage in /proc/${pid}/smaps

Also consider the script here:
http://northernmost.org/blog/find-out-what-is-using-your-swap/
to group the numbers per running process.

FWIW I have found the swap column of the top utility to be useless, while the
information in /proc/${pid}/smaps looks trustworthy.
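
For example, a rough loop that totals the Swap: lines per backend (assumes
a kernel new enough to report Swap: in smaps, GNU awk/pgrep, and backends
running as the postgres OS user):

  for p in $(pgrep -u postgres); do
      kb=$(awk '/^Swap:/ {t += $2} END {print t+0}' /proc/$p/smaps 2>/dev/null)
      echo "$kb kB  pid $p"
  done | sort -rn | head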

Best regards,
-- 
Daniel
PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org



Re: [GENERAL] heavy swapping, not sure why

2011-08-30 Thread Vick Khera
On Tue, Aug 30, 2011 at 1:26 AM, Greg Smith  wrote:
> I doubt this has anything to do with your problem, just pointing this out as
> future guidance.  Until there's a breakthrough in the PostgreSQL buffer
> cache code, there really is no reason to give more than 8GB of dedicated
> memory to the database on Linux via shared_buffers.  You're better off
> letting the OS do caching with it instead.
>

Any differing advice for FreeBSD?  I'm running with 5GB on a 24GB RAM
server (about to be replaced with a server with 48GB RAM).



Re: [GENERAL] heavy swapping, not sure why

2011-08-30 Thread Lonni J Friedman
On Mon, Aug 29, 2011 at 6:54 PM, peixubin  wrote:
> You should monitor the PageTables value in /proc/meminfo. If the value is
> larger than 1GB, I suggest enabling hugepages.
>
> To monitor PageTables:
>   # cat /proc/meminfo |grep -i pagetables

$ cat /proc/meminfo |grep -i pagetables
PageTables:   608732 kB

Looks to be less than 1GB currently.



Re: [GENERAL] heavy swapping, not sure why

2011-08-30 Thread Lonni J Friedman
On Tue, Aug 30, 2011 at 3:00 AM, Boszormenyi Zoltan  wrote:
> Hi,
>
> On 2011-08-29 22:36, Lonni J Friedman wrote:
>> ...  I read that
>> (max_connections * work_mem) should never exceed physical RAM, and if
>> that's accurate, then I suspect that's the root of my problem on
>> systemA (below).
>
> work_mem is process-local memory so
>
> (max_connections * work_mem)  < (physical RAM - shared_buffers)
>
> Some queries may allocate multiples of work_mem, too.
>
> Also, the kernel uses some memory for internal accounting, caching
> and you need to account for the process binary in memory.
>
>>   However, I'd like confirmation before I start
>> tweaking things, as one of these servers is in production, and I can't
>> easily tweak settings to experiment (plus this problem takes a few
>> weeks before swapping gets bad enough to impact performance).
>>
>> A few examples:
>>
>> 0) system A: 56GB RAM, running postgresql-8.4.8 with the following 
>> parameters:
>> maintenance_work_mem = 96MB
>> effective_cache_size = 40GB
>> work_mem = 256MB
>> wal_buffers = 16MB
>> shared_buffers = 13GB
>> max_connections = 300
>
> RAM (56GB) - shared_buffers (13GB) = 43GB
>
> which is less than
>
> work_mem * max_connections = 300 * 0.25GB = 75GB
>
> The system would start swapping before 43GB/0.25GB = 172 clients.
>
>> 1) system B: 120GB RAM, running postgresql-9.0.4 with the following 
>> parameters:
>> maintenance_work_mem = 1GB
>> effective_cache_size = 88GB
>> work_mem = 576MB
>> wal_buffers = 4MB
>> shared_buffers = 28GB
>> max_connections = 200
>
> Similarly:
>
> 120GB - 28GB = 92GB
>
> is less than
>
> work_mem * max_connections = 200 * 576MB = 112.5GB
>
> Also, if you run anything else on the machine then the system would start
> swapping much sooner than hitting max_connections number of clients.
>
> I would never set work_mem that high by default. 8 - 16MB is usually
> enough for the common case and you can set work_mem for special
> queries from the client and then reset it.

Thanks for your reply.  I've reduced shared_buffers to 8GB everywhere,
which should definitely help.



Re: [GENERAL] heavy swapping, not sure why

2011-08-30 Thread Lonni J Friedman
On Mon, Aug 29, 2011 at 5:42 PM, Tom Lane  wrote:
> Lonni J Friedman  writes:
>> I have several Linux-x86_64 based dedicated PostgreSQL servers where
>> I'm experiencing significant swap usage growth over time.  All of them
>> have fairly substantial amounts of RAM (not including swap), yet the
>> amount of swap that postgres is using ramps up over time and
>> eventually hurts performance badly.  In every case, simply restarting
>> postgresql frees up all the swap in use (until it ramps up again
>> later).
>
> If you're certain that it's restarting *postgres* that does it, and not
> restarting your application or pgbouncer or some other code, then it
> seems like you must have uncovered a memory leak someplace.  We haven't
> got nearly enough info here to diagnose it though.
>
> First thing I'd want to know is which process(es) exactly are bloating.
> The top output you showed us is unhelpful for that since it just shows
> them all as "postmaster" --- you'll need to match up the problem PIDs
> with "ps auxww" output.  Keep in mind also that top is pretty awful
> about distinguishing a process's actual memory use (private memory)
> from the portion of PG's shared memory that it happens to have touched.
> What you need to pay attention to is RES minus SHR, not either number
> alone.  With shared buffers set as high as you've got it, you'll
> probably not be able to be sure that a process is bloating until it's
> eaten hundreds of megs of private space.

In the past 18 hours, swap usage has nearly doubled on systemA:
$ free -m
             total       used       free     shared    buffers     cached
Mem:         56481      56210        271          0         11      52470
-/+ buffers/cache:       3727      52753
Swap:         1099         35       1064

As a reminder, this is what it was yesterday afternoon (roughly 18
hours earlier):
            total       used       free     shared    buffers     cached
Mem:         56481      55486        995          0         15      53298
-/+ buffers/cache:       2172      54309
Swap:         1099         18       1081

Here's top sorted by memory usage:
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 2250 postgres  20   0 13.5g  10g  10g S  0.0 19.0   4:50.17 postmaster
20427 postgres  20   0 13.6g 8.6g 8.5g D  3.6 15.5   6:15.53 postmaster
 1861 postgres  20   0 13.5g 2.2g 2.2g D  7.2  3.9   0:44.34 postmaster
32275 postgres  20   0 13.5g 2.0g 2.0g S  0.0  3.7   0:32.52 postmaster
 2444 postgres  20   0 13.5g 2.0g 2.0g S  0.0  3.6   0:21.22 postmaster
21632 postgres  20   0 13.5g 2.0g 1.9g S  0.0  3.6   1:48.76 postmaster
 1870 postgres  20   0 13.5g 1.7g 1.7g S  0.0  3.1   0:18.70 postmaster
 2383 postgres  20   0 13.5g 1.7g 1.7g S  0.0  3.1   0:13.23 postmaster
32280 postgres  20   0 13.5g 1.6g 1.6g S  0.0  2.9   0:20.97 postmaster
 2378 postgres  20   0 13.5g 1.3g 1.3g S  0.0  2.4   0:15.36 postmaster
 2367 postgres  20   0 13.5g 1.3g 1.3g S  0.0  2.4   0:15.71 postmaster
 2396 postgres  20   0 13.5g 1.2g 1.2g S  0.0  2.3   0:09.33 postmaster
31794 postgres  20   0 13.5g 1.2g 1.2g S  0.0  2.2   0:14.68 postmaster
 1891 postgres  20   0 13.5g 1.2g 1.2g S  0.0  2.1   0:06.49 postmaster
 2435 postgres  20   0 13.5g 1.2g 1.1g S  0.0  2.1   0:16.38 postmaster
 2370 postgres  20   0 13.5g 1.1g 1.1g S  0.0  2.0   0:08.51 postmaster
 2411 postgres  20   0 13.5g 1.0g 1.0g S  0.0  1.9   0:09.18 postmaster
31782 postgres  20   0 13.5g 981m 967m S  0.0  1.7   0:12.33 postmaster
32314 postgres  20   0 13.5g 909m 891m S  0.0  1.6   0:09.07 postmaster
 2395 postgres  20   0 13.5g 890m 876m S  0.0  1.6   0:07.16 postmaster
 2381 postgres  20   0 13.5g 829m 815m S  0.0  1.5   0:05.30 postmaster
 2213 postgres  20   0 13.5g 829m 817m S  0.0  1.5   0:04.40 postmaster
 2421 postgres  20   0 13.5g 803m 790m S  0.0  1.4   0:05.11 postmaster
 1866 postgres  20   0 13.5g 797m 784m S  0.0  1.4   0:08.10 postmaster
 2371 postgres  20   0 13.5g 793m 781m S  0.0  1.4   0:05.88 postmaster
 3460 postgres  20   0 13.5g 786m 773m S  0.0  1.4   0:06.17 postmaster
 2418 postgres  20   0 13.5g 604m 594m S  0.0  1.1   0:01.69 postmaster
 2425 postgres  20   0 13.5g 582m 570m S  0.0  1.0   0:02.67 postmaster
 5863 postgres  20   0 13.5g 358m 347m D  0.0  0.6   0:01.90 postmaster
 5865 postgres  20   0 13.5g 293m 283m S  0.0  0.5   0:01.70 postmaster
 2243 postgres  20   0 13.5g 289m 288m S  0.0  0.5   0:07.01 postmaster
 5862 postgres  20   0 13.5g 195m 189m S  0.0  0.3   0:00.48 postmaster
 1890 postgres  20   0 13.5g 149m 145m S  0.0  0.3   0:06.02 postmaster
 5852 postgres  20   0 13.5g 132m 124m S  0.0  0.2   0:01.04 postmaster
32316 postgres  20   0 13.5g  21m  19m S  0.0  0.0   0:00.60 postmaster
 1876 postgres  20   0 13.5g  21m  19m S  0.0  0.0   0:00.16 postmaster
 5866 postgres  20   0 13.5g  17m  15m S  0.0  0.0   0:00.01 postmaster
 2251 postgres  20   0 13.5g  17m  16m S  0.0  0.0   0:57.19 postmaster
32328 postgres  20   0 13.5g  16m  13m S  0.0  0.0   0:00.20 post

[GENERAL] duplicate key violates unique on a nextval() field

2011-08-30 Thread Peter Warasin
hi guys (and hopefully also ladies)

I use postgresql as a backend for freeradius with a coova-chilli hotspot.
We have an installation with plenty of concurrent users and a lot of
traffic; however, the database is not under that heavy a load.

Normally all is working fine, but from time to time i get this error
message:

---
Tue Aug 30 13:53:18 2011 : Error: rlm_sql (sql): failed after re-connect
Tue Aug 30 13:53:18 2011 : Error: rlm_sql (sql): Couldn't insert SQL
accounting STOP record - ERROR:  duplicate key violates unique
constraint "radacct_unique"
---

which means that at the end of the day I have inconsistencies and
missing data, which makes my customers sad (and me too).


Can you imagine how this error possibly can happen?

the table looks like this:


CREATE TABLE radacct (
        RadAcctId       BIGSERIAL PRIMARY KEY,
        AcctSessionId   VARCHAR(32) NOT NULL,
        AcctUniqueId    VARCHAR(32) NOT NULL,
        UserName        VARCHAR(253),
        ...
);
SELECT pg_catalog.setval(pg_catalog.pg_get_serial_sequence('radacct',
'radacctid'), 1, false);
CREATE UNIQUE INDEX radacct_unique ON radacct USING btree (acctuniqueid)
WHERE (radacctid > 0);


After investigating the freeradius code, I found that freeradius tries to
insert the record; when that fails, the db link is closed and reconnected
and the SQL statement is retried.

The error message posted above tells me that the insert statement does not
succeed even after the reconnect ("failed after re-connect").

The message tells me furthermore that freeradius tries to insert a
record with a radacctid which already exists.

But how can that happen when it is bigserial?

Is bigserial not using a transaction in order to calculate a unique id?


postgres version is 8.1.5. I know it is a bit old, but we are stuck with
it right now.

Any idea?

Would be nice. Thank you in advance

Peter


-- 
:: e n d i a n
:: security with passion

:: peter warasin
:: http://www.endian.com   :: pe...@endian.com



Re: [GENERAL] duplicate key violates unique on a nextval() field

2011-08-30 Thread Scott Ribe
On Aug 30, 2011, at 10:19 AM, Peter Warasin wrote:

> The message tells me furthermore that freeradius tries to insert a
> record with a radacctid which already exists.
> 
> But how can that happen when it is bigserial?

Postgres only assigns the value if it is not explicitly provided. Any client, 
freeradius included, could be assigning ids and could have bugs. Allowing pg to 
assign the value is safe, using nextval is safe--I'd look for client code that 
tries to get ranges ahead of time & cache...
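
For example, an insert that simply omits the id column and lets the
sequence do its job (the values here are made up):

  INSERT INTO radacct (acctsessionid, acctuniqueid, username)
  VALUES ('sess-0001', 'uniq-0001', 'alice');
  -- radacctid is filled in by the bigserial's DEFAULT nextval(...)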

> The error message posted above tells me that the insert statement does
> even not succeed after the reconnect. (failed after reconnect)

If freeradius is trying to insert a record with an id that already exists, 
after a re-connect there's no reason at all to assume that the previously 
existing record is gone and that the insert with the same id will now succeed.

That DDL is also kind of nasty... Why the big effort to set the sequence to 1 
immediately after creating the table? Why the creation of a unique index when 
the "primary key" attribute already causes a unique index to be created on the 
id? Ugh.

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice







[GENERAL] IDLE queries taking up space

2011-08-30 Thread JD Wong
Hi,

When I run select datname, procpid, current_query from pg_stat_activity; I
get 26 rows of <IDLE> queries.  How can I set postgres to automatically
close connections that have finished their queries and now sit idle?

Thanks!
-JD


[GENERAL] out of memory - no sort

2011-08-30 Thread Don

  
  
I am trying a simple access of a table and get an out of memory
error.  How do I avoid this issue?  It seems I have some
configuration set wrong.

Our system has 24GB of memory and is dedicated to the postgres
database.

Background information:

aquarec=> explain analyze verbose select * from ens_memb;
                                 QUERY PLAN
------------------------------------------------------------------------------
 Seq Scan on ens_memb  (cost=0.00..719893.12 rows=32216212 width=62) (actual time=4.954..37513.377 rows=32216154 loops=1)
   Output: id, shefpc, bwmon, ewmon, pb, tb, cdate, vdate, source, tyr, val
 Total runtime: 39588.386 ms


#--
# RESOURCE USAGE (except WAL)
#--

# - Memory -

shared_buffers = 6144MB # min 128kB
    # (change requires
restart)
#temp_buffers = 8MB # min 800kB
max_prepared_transactions = 5   # zero disables the
feature
    # (change requires
restart)
# Note:  Increasing max_prepared_transactions costs ~600 bytes
of shared memory
# per transaction slot, plus lock space (see
max_locks_per_transaction).
# It is not advisable to set max_prepared_transactions nonzero
unless you
# actively intend to use prepared transactions.
work_mem = 48MB # min 64kB
maintenance_work_mem = 256MB    # min 1MB
#max_stack_depth = 2MB  # min 100kB

# - Kernel Resource Usage -

#max_files_per_process = 1000   # min 25
    # (change requires
restart)
#shared_preload_libraries = ''  # (change requires
restart)

# - Cost-Based Vacuum Delay -

#vacuum_cost_delay = 0ms    # 0-100 milliseconds
#vacuum_cost_page_hit = 1   # 0-1 credits
#vacuum_cost_page_miss = 10 # 0-1 credits
#vacuum_cost_page_dirty = 20    # 0-1 credits
#vacuum_cost_limit = 200    # 1-1 credits

# - Background Writer -

#bgwriter_delay = 200ms # 10-1ms between
rounds
#bgwriter_lru_maxpages = 100    # 0-1000 max buffers
written/round
#bgwriter_lru_multiplier = 2.0  # 0-10.0 multipler on
buffers scanned/round

# - Asynchronous Behavior -

#effective_io_concurrency = 1   # 1-1000. 0 disables
prefetching


#--
# WRITE AHEAD LOG
#--

# - Settings -

#fsync = on # turns forced
synchronization on or off
#synchronous_commit = on    # immediate fsync at
commit
#wal_sync_method = fsync    # the default is the
first option 
    # supported by the
operating system:
    #   open_datasync
    #   fdatasync
    #   fsync
    #   fsync_writethrough
    #   open_sync
#full_page_writes = on  # recover from partial
page writes
wal_buffers = 8MB   # min 32kB
    # (change requires
restart)
#wal_writer_delay = 200ms   # 1-1 milliseconds

#commit_delay = 0   # range 0-10, in
microseconds
#commit_siblings = 5    # range 1-1000

# - Checkpoints -

checkpoint_segments = 32    # in logfile segments,
min 1, 16MB each
#checkpoint_timeout = 5min  # range 30s-1h
#checkpoint_completion_target = 0.5 # checkpoint target
duration, 0.0 - 1.0
#checkpoint_warning = 30s   # 0 disables

# - Archiving -


[GENERAL] SELECT Query on DB table preventing inserts

2011-08-30 Thread Dan Scott
Hi,

I have a server running PostgreSQL 8.4 (Scientific Linux release 6.0).
I'm running a process which receives messages from a remote server and
logs them into a table. Here is the table structure:

CREATE TABLE messages.message_log
(
  message_id text,
  message_timestamp timestamp with time zone,
  message_type text,
  message text,
  message_pkid bigserial NOT NULL,
  CONSTRAINT hld_log_pk PRIMARY KEY (message_pkid)
)

I'm receiving a few messages per second. I have a problem: when I query
the table and try to parse the message (with a complex, read-only
function which takes quite a long time to process), the insert process
is unable to insert new rows into the database and I end up losing
some messages. Perhaps because I'm locking the table with my query?
But I wouldn't have thought that a read-only query would lock the
table against insert operations.

The insert process is run as the owner of the DB. The query is run as
me (and other users who have select permission on the table). I'd like
to be able to change the priority of the inserts (or my user's query),
or adjust the table locks so that the inserts occur in preference to
the parsing query.

I'd like to ensure that the DB stops whatever else it's doing so that
the insert can occur. What is the best way of doing this?

Thanks,

Dan Scott
http://danieljamesscott.org



Re: [GENERAL] Whether the function exists a in pgsql table or not?

2011-08-30 Thread shuaixf
My friend, thanks for your reply; however, how can I verify your view?




[GENERAL] COPY failure on directory I own

2011-08-30 Thread Rich Shepard

  I need to understand why this command fails:

nevada=# copy statdata to
'/home/rshepard/projects/nevada/queenstake/stats/chem.csv' with delimiter '|'
null as 'NA' CSV HEADER;
ERROR:  could not open file
"/home/rshepard/projects/nevada/queenstake/stats/chem.csv" for writing:
Permission denied

  The permissions on that directory are 755 and it's owned by me. Since I
have no problems writing other files to that directory I must have the
command syntax incorrect but I don't see where.

Rich



Re: [GENERAL] IDLE queries taking up space

2011-08-30 Thread Scott Ribe
On Aug 30, 2011, at 10:03 AM, JD Wong wrote:

> How can I set postgres to automatically close connections that have finished 
> their queries and now sit idle?

They haven't finished their queries. They've opened transactions, and then are 
sitting there doing nothing. In other words, this is a bug in your clients, and 
no, you really would not want PG automatically terminating connections 
mid-transaction just because it thought the client was taking too long to get 
to the next step.

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice







Re: [GENERAL] COPY failure on directory I own

2011-08-30 Thread Scott Ribe
On Aug 30, 2011, at 11:14 AM, Rich Shepard wrote:

> The permissions on that directory are 755 and it's owned by me. Since I
> have no problems writing other files to that directory I must have the
> command syntax incorrect but I don't see where.

Where is the server and where are you? You are issuing a command to the server 
to create a file at that path on the server.

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice







Re: [GENERAL] SELECT Query on DB table preventing inserts

2011-08-30 Thread Scott Ribe
On Aug 30, 2011, at 8:22 AM, Dan Scott wrote:

> Perhaps because I'm locking the table with my query?

Do you mean you're explicitly locking the table? If so, why???

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice







Re: [GENERAL] COPY failure on directory I own

2011-08-30 Thread Scott Mead
On Tue, Aug 30, 2011 at 1:20 PM, Scott Ribe wrote:

> On Aug 30, 2011, at 11:14 AM, Rich Shepard wrote:
>
> > The permissions on that directory are 755 and it's owned by me. Since I
> > have no problems writing other files to that directory I must have the
> > command syntax incorrect but I don't see where.
>

In this case, it's not about YOU and your permissions, it's about the
server.  The COPY command writes data as the 'postgres' operating system
user (or whichever user owns the postgres backend process).
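
If the goal is a file owned by you rather than by the server, psql's
client-side \copy is an alternative; roughly (same options as your COPY):

  nevada=# \copy statdata to '/home/rshepard/projects/nevada/queenstake/stats/chem.csv' with delimiter '|' null as 'NA' csv header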

--Scott



>
> Where is the server and where are you? You are issuing a command to the
> server to create a file at that path on the server.
>
> --
> Scott Ribe
> scott_r...@elevated-dev.com
> http://www.elevated-dev.com/
> (303) 722-0567 voice
>
>
>
>
>


Re: [GENERAL] out of memory - no sort

2011-08-30 Thread Pavel Stehule
Hello

If the table is large, then the client can raise this exception too.

Try setting FETCH_COUNT to 1000.

http://www.postgresql.org/docs/8.4/interactive/app-psql.html
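
For example (it's a psql variable, so it can also go in ~/.psqlrc):

  aquarec=> \set FETCH_COUNT 1000
  aquarec=> select * from ens_memb;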

Regards

Pavel Stehule

2011/8/30 Don :
> I am trying a simple access of a table and get an out of memory error.  How
> do I avoid this issue.  It seems I have some configuration set wrong.
>
> Our system has 24GB of memory and is dedicated to the postgres database.
>
> Back ground information
>
> aquarec=> explain analyze verbose select * from ens_memb;
>     QUERY
> PLAN
> --
>  Seq Scan on ens_memb  (cost=0.00..719893.12 rows=32216212 width=62) (actual
> time=4.954..37513.377 rows=32216154 loops=1)
>    Output: id, shefpc, bwmon, ewmon, pb, tb, cdate, vdate, source, tyr, val
>  Total runtime: 39588.386 ms
>
>
> #--
> # RESOURCE USAGE (except WAL)
> #--
>
> # - Memory -
>
> shared_buffers = 6144MB # min 128kB
>     # (change requires restart)
> #temp_buffers = 8MB # min 800kB
> max_prepared_transactions = 5   # zero disables the feature
>     # (change requires restart)
> # Note:  Increasing max_prepared_transactions costs ~600 bytes of shared
> memory
> # per transaction slot, plus lock space (see max_locks_per_transaction).
> # It is not advisable to set max_prepared_transactions nonzero unless you
> # actively intend to use prepared transactions.
> work_mem = 48MB # min 64kB
> maintenance_work_mem = 256MB    # min 1MB
> #max_stack_depth = 2MB  # min 100kB
>
> # - Kernel Resource Usage -
>
> #max_files_per_process = 1000   # min 25
>     # (change requires restart)
> #shared_preload_libraries = ''  # (change requires restart)
>
> # - Cost-Based Vacuum Delay -
>
> #vacuum_cost_delay = 0ms    # 0-100 milliseconds
> #vacuum_cost_page_hit = 1   # 0-1 credits
> #vacuum_cost_page_miss = 10 # 0-1 credits
> #vacuum_cost_page_dirty = 20    # 0-1 credits
> #vacuum_cost_limit = 200    # 1-1 credits
>
> # - Background Writer -
>
> #bgwriter_delay = 200ms # 10-1ms between rounds
> #bgwriter_lru_maxpages = 100    # 0-1000 max buffers written/round
> #bgwriter_lru_multiplier = 2.0  # 0-10.0 multipler on buffers
> scanned/round
>
> # - Asynchronous Behavior -
>
> #effective_io_concurrency = 1   # 1-1000. 0 disables prefetching
>
>
> #--
> # WRITE AHEAD LOG
> #--
>
> # - Settings -
>
> #fsync = on # turns forced synchronization on or
> off
> #synchronous_commit = on    # immediate fsync at commit
> #wal_sync_method = fsync    # the default is the first option
>     # supported by the operating system:
>     #   open_datasync
>     #   fdatasync
>     #   fsync
>     #   fsync_writethrough
>     #   open_sync
> #full_page_writes = on  # recover from partial page writes
> wal_buffers = 8MB   # min 32kB
>     # (change requires restart)
> #wal_writer_delay = 200ms   # 1-1 milliseconds
>
> #commit_delay = 0   # range 0-10, in microseconds
> #commit_siblings = 5    # range 1-1000
>
> # - Checkpoints -
>
> checkpoint_segments = 32    # in logfile segments, min 1, 16MB
> each
> #checkpoint_timeout = 5min  # range 30s-1h
> #checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 -
> 1.0
> #checkpoint_warning = 30s   # 0 disables
>
> # - Archiving -
>
> #archive_mode = off # allows archiving to be done
>     # (change requires restart)
> #archive_command = ''   # command to use to archive a logfile
> segment
> #archive_timeout = 0    # force a logfile segment switch after this
>     # number of seconds; 0 disables
>
> #--
> # QUERY TUNING
> #--
>
> # - Planner Method Configuration -
>
> #enable_bitmapscan

Re: [GENERAL] duplicate key violates unique on a nextval() field

2011-08-30 Thread Peter Warasin
Hi

thank you for answering!

On 30/08/11 18:56, Scott Ribe wrote:
>> But how can that happen when it is bigserial?
> 
> Postgres only assigns the value if it is not explicitly provided. Any client, 
> freeradius included, could be assigning ids and could have bugs. Allowing pg 
> to assign the value is safe, using nextval is safe--I'd look for client code 
> that tries to get ranges ahead of time & cache...

freeradius is not supplying an id; that field is left out, as it should be,
in order to use nextval.

That makes the whole thing strange, also because it happens only from time
to time. It must be related to load: when the system is under heavy load,
something goes wrong and 2 concurrent transactions get the same unique id.
But how??

peter


-- 
:: e n d i a n
:: security with passion

:: peter warasin
:: http://www.endian.com   :: pe...@endian.com



Re: [GENERAL] COPY failure on directory I own

2011-08-30 Thread Rich Shepard

On Tue, 30 Aug 2011, Scott Ribe wrote:


> Where is the server and where are you? You are issuing a command to the
> server to create a file at that path on the server.


  It's sitting right here next to my desk. That host is the network server
and my workstation. Yes, my home directory (and all other directories) are
on that host.

Rich



Re: [GENERAL] COPY failure on directory I own

2011-08-30 Thread Rich Shepard

On Tue, 30 Aug 2011, Scott Mead wrote:


> In this case, it's not about YOU and your permissions, it's about the
> server. The COPY command writes data as the 'postgres' operating system
> user (or whichever user owns the postgres backend process).


Scott,

  Ah so. User 'postgres' is in the same group ('users') as I am, so I need
to change the perms on the data directory to 775 to give postgres write
access.

Thanks,

Rich



Re: [GENERAL] out of memory - no sort

2011-08-30 Thread John R Pierce

On 08/30/11 7:28 AM, Don wrote:
> I am trying a simple access of a table and get an out of memory
> error.  How do I avoid this issue.  It seems I have some configuration
> set wrong.
>
> Our system has 24GB of memory and is dedicated to the postgres database.
>
> Back ground information
>
> aquarec=> explain analyze verbose select * from ens_memb;
>                                  QUERY PLAN
> ------------------------------------------------------------------------------
>  Seq Scan on ens_memb  (cost=0.00..719893.12 rows=32216212 width=62) (actual time=4.954..37513.377 rows=32216154 loops=1)
>    Output: id, shefpc, bwmon, ewmon, pb, tb, cdate, vdate, source, tyr, val
>
>  Total runtime: 39588.386 ms


wild guess (since you didn't show the error), your system doesn't have 
enough memory available to store all 32 million rows of your result 
set.  This could be happening on the CLIENT ('psql') side or the server 
side (and in fact, if both are on the same system, I believe that query 
as written will require two copies of the result set in memory)



--
john r pierceN 37, W 122
santa cruz ca mid-left coast




Re: [GENERAL] COPY failure on directory I own [FIXED]

2011-08-30 Thread Rich Shepard

On Tue, 30 Aug 2011, Rich Shepard wrote:


>  Ah so. User 'postgres' is in the same group ('users') as I am, so I need
> to change the perms on the data directory to 775 to give postgres write
> access.


  That did the trick. Thanks for the lesson, Scott.

Rich



Re: [GENERAL] duplicate key violates unique on a nextval() field

2011-08-30 Thread Daniel Verite
Peter Warasin wrote:

> The message tells me furthermore that freeradius tries to insert a
> record with a radacctid which already exists.

No, the message you quoted tells about the other unique constraint, the one
named radacct_unique. It's not related to the bigserial primary key.
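
To see what the STOP record is colliding with, something along these lines
(the acctuniqueid literal is a placeholder for whatever value freeradius
tried to insert):

  SELECT radacctid, acctsessionid, username
  FROM radacct
  WHERE acctuniqueid = 'value-from-the-failed-stop-record'
    AND radacctid > 0;   -- same predicate as the partial unique index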

Best regards,
-- 
Daniel
PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org



[GENERAL] pg_upgrade from 8.3.4 issue

2011-08-30 Thread Justin Arnold
Hey, I am trying to upgrade a CentOS 5.4 32bit test server running postgres
8.3.4 to postgres 9.1 RC1 and am running into an error I haven't seen
mentioned in the forums (at least dealing with the upgrade process). The
steps I ran through for the upgrade are...

>Stop postgres
>move /usr/local/pgsql to /usr/local/pgsql.8.3
>move /usr/pgdata/data to /usr/pgdata/data.8.3
>build 9.1 RC1 from source using "./configure --with-perl --with-openssl
--disable-integer-datetimes; make; make install"
>build and install pg_upgrade and pg_upgrade_support
>swap to postgres user
>run "/usr/local/pgsql/bin/initdb --lc-collate=C --lc-ctype=C
--lc-messages=C --lc-monetary=C --lc-numeric=C --lc-time=C -E SQL-ASCII -D
/usr/pgdata/data" to create the 9.1 cluster and set the settings to match
the old cluster
>/usr/local/pgsql/bin/pg_upgrade --link --old-datadir /usr/pgdata/data.8.3/
--new-datadir /usr/pgdata/data/ --old-bindir /usr/local/pgsql.8.3/bin/
--new-bindir /usr/local/pgsql/bin/

What I get is...
Performing Consistency Checks
-
Checking current, bin, and data directories ok
Checking cluster versions   ok
Checking database user is a superuser   ok
Checking for prepared transactions  DB command
failed
SELECT * FROM pg_catalog.pg_prepared_xact()
ERROR:  a column definition list is required for functions returning
"record"

Failure, exiting

The binaries for the 8.3.4 install were built from source using
the --with-perl --with-openssl options as well. Any thoughts on what I might
be able to do to fix or workaround this? Thanks!

- Justin


Re: [GENERAL] duplicate key violates unique on a nextval() field

2011-08-30 Thread Peter Warasin
hi

On 30/08/11 19:43, Daniel Verite wrote:
>> The message tells me furthermore that freeradius tries to insert a
>> record with a radacctid which already exists.
> 
> No, the message you quoted tells about the other unique constraint, the one
> named radacct_unique. It's not related to the bigserial primary key.

aah, stupid me. thank you!
ok, then back to coova-chilli :)

peter

-- 
:: e n d i a n
:: security with passion

:: peter warasin
:: http://www.endian.com   :: pe...@endian.com



Re: [GENERAL] SELECT Query on DB table preventing inserts

2011-08-30 Thread Daniel Verite
Dan Scott wrote:

> the insert process is unable to insert new rows into the database

You should probably provide the error message on insert or otherwise describe
how it's not working. Normally reading does not unintentionally prevent
writing in a concurrent session.
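
One way to check whether anything is actually waiting on a lock (8.4 column
names, i.e. procpid):

  SELECT l.pid, l.locktype, l.mode, l.granted, a.current_query
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.procpid = l.pid
  WHERE NOT l.granted;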

Best regards,
-- 
Daniel
PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org



Re: [GENERAL] IDLE queries taking up space

2011-08-30 Thread Rodrigo Gonzalez

On 08/30/2011 02:13 PM, Scott Ribe wrote:
> On Aug 30, 2011, at 10:03 AM, JD Wong wrote:
>
>> How can I set postgres to automatically close connections that have finished
>> their queries and now sit idle?

AFAIK you can't; you should check the pg_terminate_backend function and see
if it is useful for you.

> They haven't finished their queries. They've opened transactions, and then are
> sitting there doing nothing. In other words, this is a bug in your clients, and
> no, you really would not want PG automatically terminating connections
> mid-transaction just because it thought the client was taking too long to get
> to the next step.

<IDLE> in transaction is what you mean...
<IDLE> are clients connected but not running any query.
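
For example (8.4/9.0 catalogs, where the columns are procpid and
current_query; the pid below is a placeholder):

  -- find idle-in-transaction sessions
  SELECT procpid, datname, usename, current_query
  FROM pg_stat_activity
  WHERE current_query = '<IDLE> in transaction';

  -- terminate one of them (superuser only)
  SELECT pg_terminate_backend(12345);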







Re: [GENERAL] heavy swapping, not sure why

2011-08-30 Thread Merlin Moncure
On Tue, Aug 30, 2011 at 11:17 AM, Lonni J Friedman  wrote:
> On Mon, Aug 29, 2011 at 5:42 PM, Tom Lane  wrote:
>> Lonni J Friedman  writes:
>>> I have several Linux-x86_64 based dedicated PostgreSQL servers where
>>> I'm experiencing significant swap usage growth over time.  All of them
>>> have fairly substantial amounts of RAM (not including swap), yet the
>>> amount of swap that postgres is using ramps up over time and
>>> eventually hurts performance badly.  In every case, simply restarting
>>> postgresql frees up all the swap in use (until it ramps up again
>>> later).
>>
>> If you're certain that it's restarting *postgres* that does it, and not
>> restarting your application or pgbouncer or some other code, then it
>> seems like you must have uncovered a memory leak someplace.  We haven't
>> got nearly enough info here to diagnose it though.
>>
>> First thing I'd want to know is which process(es) exactly are bloating.
>> The top output you showed us is unhelpful for that since it just shows
>> them all as "postmaster" --- you'll need to match up the problem PIDs
>> with "ps auxww" output.  Keep in mind also that top is pretty awful
>> about distinguishing a process's actual memory use (private memory)
>> from the portion of PG's shared memory that it happens to have touched.
>> What you need to pay attention to is RES minus SHR, not either number
>> alone.  With shared buffers set as high as you've got it, you'll
>> probably not be able to be sure that a process is bloating until it's
>> eaten hundreds of megs of private space.
>
> In the past 18 hours, swap usage has nearly doubled on systemA:
> $ free -m
>             total       used       free     shared    buffers     cached
> Mem:         56481      56210        271          0         11      52470
> -/+ buffers/cache:       3727      52753
> Swap:         1099         35       1064
>
> As a reminder, this is what it was yesterday afternoon (roughly 18
> hours earlier):
>            total       used       free     shared    buffers     cached
> Mem:         56481      55486        995          0         15      53298
> -/+ buffers/cache:       2172      54309
> Swap:         1099         18       1081

This is totally uninteresting.  All this means is that the o/s has
found 35 mb (17 more) of memory sitting around doing nothing and
swapped it out.  That inconsequential amount of extra memory can now
be used by something else, which is expected and desired behavior.

Swap usage, in terms of the amount of memory in swap holding unused
resident memory, is a *good* thing.  Swap churn, that is memory
constantly moving in and out of the swap volume due to the o/s
managing different and competing demands is a bad thing.  FWICT, your
churn is zero -- the vast majority of your memory is allocated to
caching of filesystem pages, which is totally normal.  This is making
me wonder if your original diagnosis of swap causing your surrounding
performance issues is in fact correct.  We need to see a memory and
i/o profile (iostat/iowait) during your 'bad' times to be sure though.
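
Something as simple as this, captured during a bad spell, would show it
(standard procps/sysstat tools; a sketch):

  vmstat 5        # si/so columns = swap in/out
  iostat -x 5     # per-device utilization and await
  sar -W 5        # swap page-in/out rate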

merlin



Re: [GENERAL] IDLE queries taking up space

2011-08-30 Thread Merlin Moncure
On Tue, Aug 30, 2011 at 11:03 AM, JD Wong  wrote:
> Hi,
> When I run select datname, procpid, current_query from pg_stat_activity; I
> get 26 rows of <IDLE> queries.  How can I set postgres to automatically
> close connections that have finished their queries and now sit idle?

you don't. this should be managed from the client, or you can use a
connection pool.

merlin



Re: [GENERAL] heavy swapping, not sure why

2011-08-30 Thread Tom Lane
Merlin Moncure  writes:
> On Tue, Aug 30, 2011 at 11:17 AM, Lonni J Friedman  wrote:
>> In the past 18 hours, swap usage has nearly doubled on systemA:
>> $ free -m
>>              total       used       free     shared    buffers     cached
>> Mem:         56481      56210        271          0         11      52470
>> -/+ buffers/cache:       3727      52753
>> Swap:         1099         35       1064
>> 
>> As a reminder, this is what it was yesterday afternoon (roughly 18
>> hours earlier):
>>             total       used       free     shared    buffers     cached
>> Mem:         56481      55486        995          0         15      53298
>> -/+ buffers/cache:       2172      54309
>> Swap:         1099         18       1081

> This is totally uninteresting.

Yeah.  You're going to need a whole lot more than 17MB of bloat before
it'll be possible to tell which process is at fault, given that the
expected process sizes are up to 10GB.

regards, tom lane



Re: [GENERAL] pg_upgrade from 8.3.4 issue

2011-08-30 Thread Merlin Moncure
On Tue, Aug 30, 2011 at 12:48 PM, Justin Arnold  wrote:
> Hey, I am trying to upgrade a CentOS 5.4 32bit test server running postgres
> 8.3.4 to postgres 9.1 RC1 and am running into an error I haven't seen
> mentioned in the forums (at least dealing with the upgrade process). The
> steps I ran through for the upgrade are...
>>Stop postgres
>>move /usr/local/pgsql to /usr/local/pgsql.8.3
>>move /usr/pgdata/data to /usr/pgdata/data.8.3
>>build 9.1 RC1 from source using "./configure --with-perl --with-openssl
>> --disable-integer-datetimes; make; make install"
>>build and install pg_upgrade and pg_upgrade_support
>>swap to postgres user
>>run "/usr/local/pgsql/bin/initdb --lc-collate=C --lc-ctype=C
>> --lc-messages=C --lc-monetary=C --lc-numeric=C --lc-time=C -E SQL-ASCII -D
>> /usr/pgdata/data" to create the 9.1 cluster and set the settings to match
>> the old cluster
>>/usr/local/pgsql/bin/pg_upgrade --link --old-datadir /usr/pgdata/data.8.3/
>> --new-datadir /usr/pgdata/data/ --old-bindir /usr/local/pgsql.8.3/bin/
>> --new-bindir /usr/local/pgsql/bin/
> What I get is...
> Performing Consistency Checks
> -
> Checking current, bin, and data directories                 ok
> Checking cluster versions                                   ok
> Checking database user is a superuser                       ok
> Checking for prepared transactions                          DB command
> failed
> SELECT * FROM pg_catalog.pg_prepared_xact()
> ERROR:  a column definition list is required for functions returning
> "record"
> Failure, exiting
> The binaries for the 8.3.4 install were built from source using
> the --with-perl --with-openssl options as well. Any thoughts on what I might
> be able to do to fix or workaround this? Thanks!
> - Justin

It looks like some time after 8.3 was released that function was
changed from returning 'record'.  This is making me wonder if the
upgrade process was ever tested/verified on 8.3.  I absolutely do not
advise doing this without taking a lot of precautions, but you might
force your way past that step with:

[login as superuser]
alter function pg_prepared_xact() rename to hack;
create function pg_catalog.pg_prepared_xact(
  OUT transaction xid,
  OUT gid text,
  OUT prepared timestamptz,
  OUT ownerid oid, OUT dbid oid) returns setof record as
$$
  select * from hack()  r(transaction xid,  gid text,  prepared
timestamptz,  ownerid oid, dbid oid);
$$ language sql;

I'd like to see someone more comfortable with the upgrade process
comment before attempting that though.  Another way to do it is to hack
the column definition list, r(transaction xid, gid text, prepared
timestamptz, ownerid oid, dbid oid), onto the end of whatever query is
trying to do the select.

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade from 8.3.4 issue

2011-08-30 Thread Tom Lane
Merlin Moncure  writes:
> It looks like some time after 8.3 was released that function was
> changed from returning 'record'.  This is making me wonder if the
> upgrade process was ever tested/verified on 8.3.

Not lately, apparently :-(

> I absolutely do not
> advise doing this without taking a lot of precautions, but you might
> force your way past that step with:

I think it'd be a lot safer to modify (or just remove) the test in
pg_upgrade.  It looks like a one-liner:

prep_status("Checking for prepared transactions");

res = executeQueryOrDie(conn,
"SELECT * "
"FROM pg_catalog.pg_prepared_xact()");

if (PQntuples(res) != 0)
pg_log(PG_FATAL, "The %s cluster contains prepared transactions\n",
   CLUSTER_NAME(cluster));

There's no reason at all for this code to not use the published API,
which is the pg_prepared_xacts system view.
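
That is, the probe could simply be (sketch):

    SELECT * FROM pg_catalog.pg_prepared_xacts;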

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] FATAL: terminating connection due to conflict with recovery

2011-08-30 Thread Jeff Ross

Hi,

I have a set of servers in the rack running 9.0.3.  The production 
server is doing streaming replication and that is working fine.  I have 
some quarterly reports that are select only so I've been running them 
against the replica.


I have one part of that report that consistently dies with the error 
message


"FATAL:  terminating connection due to conflict with recovery".

Following that in the logs is:

"2011-08-30 13:33:02.336353500 <_postgresql%wykids> DETAIL:  User query 
might have needed to see row versions that must be removed.
2011-08-30 13:33:02.336359500 <_postgresql%wykids> HINT:  In a moment 
you should be able to reconnect to the database and repeat your command."


The failure point is not always at the same point of the query.  The 
query is a long cross-tab so I've posted it to


http://www.wykids.org/query.html

On my workstation using psql this query runs in about 1.5 minutes. I can 
choose the quarter the query uses and I'm virtually positive that no 
rows in that set will be updated or deleted so the error message to me 
seems wrong.


It does successfully complete when run against the master and in less 
than a minute.


Here are my server settings from the replica:

jross@dukkha:/home/jross $ cat /var/postgresql/my_postgresql.conf
listen_addresses = '*'  # what IP address(es) to listen on;
unix_socket_directory = '/var/postgresql/' # (change requires restart)
log_line_prefix = '<%u%%%d> '   # special values:
log_statement = 'all'# none, ddl, mod, all
archive_mode = on
wal_level = hot_standby   # minimal, archive, or hot_standby
archive_command = 'cd .'
default_statistics_target = 50 # pgtune wizard 2009-12-19
maintenance_work_mem = 120MB # pgtune wizard 2009-12-19
constraint_exclusion = on # pgtune wizard 2009-12-19
checkpoint_completion_target = 0.9 # pgtune wizard 2009-12-19
effective_cache_size = 1408MB # pgtune wizard 2009-12-19
work_mem = 12MB # pgtune wizard 2009-12-19
wal_buffers = 8MB # pgtune wizard 2009-12-19
checkpoint_segments = 16 # pgtune wizard 2009-12-19
shared_buffers = 480MB # pgtune wizard 2009-12-19
max_connections = 80 # pgtune wizard 2009-12-19
max_wal_senders = 3 # max number of walsender processes
wal_keep_segments = 30# in logfile segments, 16MB each; 0 disables
#recovery_connections = on
hot_standby = on  # "on" allows queries during recovery
log_timezone = 'America/Denver'
timezone = 'America/Denver'

Is there a setting in this or something else that I should tweak so this 
query can complete against the replica?  Google turned up some threads 
on the error code associated with the error but I didn't find much else 
that seems applicable.


Thanks,

Jeff Ross
Wyoming Children's Action Alliance
Cheyenne, Wyoming



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade from 8.3.4 issue

2011-08-30 Thread Justin Arnold
Thanks Tom and Merlin, I removed that logic from check.c, rebuilt, and it
worked fine.

On Tue, Aug 30, 2011 at 2:47 PM, Tom Lane  wrote:

> Merlin Moncure  writes:
> > It looks like some time after 8.3 was released that function was
> > changed from returning 'record'.  This is making me wonder if the
> > upgrade process was ever tested/verified on 8.3.
>
> Not lately, apparently :-(
>
> > I absolutely do not
> > advise doing this without taking a lot of precautions, but you might
> > force your way past that step with:
>
> I think it'd be a lot safer to modify (or just remove) the test in
> pg_upgrade.  It looks like a one-liner:
>
>prep_status("Checking for prepared transactions");
>
>res = executeQueryOrDie(conn,
>"SELECT * "
>"FROM pg_catalog.pg_prepared_xact()");
>
>if (PQntuples(res) != 0)
>pg_log(PG_FATAL, "The %s cluster contains prepared transactions\n",
>   CLUSTER_NAME(cluster));
>
> There's no reason at all for this code to not use the published API,
> which is the pg_prepared_xacts system view.
>
>regards, tom lane
>


Re: [GENERAL] pg_upgrade from 8.3.4 issue

2011-08-30 Thread Tom Lane
I wrote:
> I think it'd be a lot safer to modify (or just remove) the test in
> pg_upgrade.  It looks like a one-liner:

Specifically, the attached patch takes care of the problem.  Thanks
for reporting it!

regards, tom lane


diff --git a/contrib/pg_upgrade/check.c b/contrib/pg_upgrade/check.c
index 9690695..93b9e69 100644
*** a/contrib/pg_upgrade/check.c
--- b/contrib/pg_upgrade/check.c
*** check_for_prepared_transactions(ClusterI
*** 532,538 ****
  
  	res = executeQueryOrDie(conn,
  							"SELECT * "
! 							"FROM pg_catalog.pg_prepared_xact()");
  
  	if (PQntuples(res) != 0)
  		pg_log(PG_FATAL, "The %s cluster contains prepared transactions\n",
--- 532,538 ----
  
  	res = executeQueryOrDie(conn,
  							"SELECT * "
! 							"FROM pg_catalog.pg_prepared_xacts");
  
  	if (PQntuples(res) != 0)
  		pg_log(PG_FATAL, "The %s cluster contains prepared transactions\n",

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] heavy swapping, not sure why

2011-08-30 Thread John R Pierce

On 08/30/11 12:18 PM, Tom Lane wrote:

             total       used       free     shared    buffers     cached
>>  Mem:         56481      55486        995          0         15      53298
>>  -/+ buffers/cache:       2172      54309
>>  Swap:         1099         18       1081

>  This is totally uninteresting.

Yeah.  You're going to need a whole lot more than 17MB of bloat before
it'll be possible to tell which process is at fault, given that the
expected process sizes are up to 10GB.


indeed, it's 100% normal for the linux virtual memory manager to write 
'dirty' pages to the swapfile during idle time.



--
john r pierceN 37, W 122
santa cruz ca mid-left coast


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] heavy swapping, not sure why

2011-08-30 Thread Lonni J Friedman
On Tue, Aug 30, 2011 at 2:55 PM, John R Pierce  wrote:
> On 08/30/11 12:18 PM, Tom Lane wrote:

            total       used       free     shared    buffers     cached
 >>  Mem:         56481      55486        995          0         15      53298
 >>  -/+ buffers/cache:       2172      54309
 >>  Swap:         1099         18       1081
>>>
>>> >  This is totally uninteresting.
>>
>> Yeah.  You're going to need a whole lot more than 17MB of bloat before
>> it'll be possible to tell which process is at fault, given that the
>> expected process sizes are up to 10GB.
>
> indeed, its 100% normal for the linux virtual memory manager to write
> 'dirty' pages to the swapfile during idle time.

Sure, but it shouldn't grow indefinitely?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] regression between 8.4.8 and 8.4.2?

2011-08-30 Thread Ben Chobot
We recently took a copy of our production data (running on 8.4.2), scrubbed 
many data fields, and then loaded it onto a qa server (running 8.4.8). We're 
seeing some odd planner performance that I think might be a bug, though I'm 
hoping it's just idiocy on my part. I've analyzed things and looked into 
pg_stats and it seems as if the relevant columns have about the same 
statistics. 

I've managed to simplify the query, but if I make it any simpler, then the two 
servers end up with the same good plan. The query is down to:

SELECT machines.quota_purchased 
FROM machines
WHERE NOT deleted AND machines.user_id IN (
SELECT id FROM users WHERE user_group_id IN (
 select 607547 offset 0
) OFFSET 0
  );


(Those "offset 0" are in there to protect us from planner regressions we saw 
when moving to 8.4. When we move to 9, they can hopefully go away.)

On the production server, this returns a fairly accurate plan:

QUERY 
PLAN
--
 Nested Loop  (cost=843.59..1447.90 rows=243 width=8) (actual time=0.044..0.045 
rows=1 loops=1)
   ->  HashAggregate  (cost=843.59..845.59 rows=200 width=4) (actual 
time=0.027..0.027 rows=1 loops=1)
 ->  Limit  (cost=0.02..823.90 rows=1575 width=4) (actual 
time=0.024..0.025 rows=1 loops=1)
   ->  Nested Loop  (cost=0.02..823.90 rows=1575 width=4) (actual 
time=0.023..0.024 rows=1 loops=1)
 ->  HashAggregate  (cost=0.02..0.03 rows=1 width=4) 
(actual time=0.005..0.005 rows=1 loops=1)
   ->  Limit  (cost=0.00..0.01 rows=1 width=0) (actual 
time=0.001..0.002 rows=1 loops=1)
 ->  Result  (cost=0.00..0.01 rows=1 width=0) 
(actual time=0.000..0.000 rows=1 loops=1)
 ->  Index Scan using users_user_groups_idx on users  
(cost=0.00..804.18 rows=1575 width=8) (actual time=0.017..0.018 rows=1 loops=1)
   Index Cond: (users.user_group_id = (607547))
   ->  Index Scan using machines_sid_un on machines  (cost=0.00..3.00 rows=1 
width=12) (actual time=0.015..0.015 rows=1 loops=1)
 Index Cond: (machines.user_id = users.id)
 Total runtime: 0.121 ms
(12 rows)


On the QA server, things are not so accurate. It doesn't hurt the timing of 
this simplified query much, but when put into the actual query, the row 
estimation being off by 6 orders of magnitude really throws the planning in the 
wrong direction. The plan on the QA server is:

QUERY 
PLAN 
---
 Nested Loop  (cost=1887.16..3671.20 rows=1192462 width=8) (actual 
time=0.049..0.051 rows=1 loops=1)
   ->  HashAggregate  (cost=1887.16..1889.16 rows=200 width=4) (actual 
time=0.032..0.033 rows=1 loops=1)
 ->  Limit  (cost=0.02..1868.20 rows=1517 width=4) (actual 
time=0.027..0.029 rows=1 loops=1)
   ->  Nested Loop  (cost=0.02..1868.20 rows=1517 width=4) (actual 
time=0.027..0.028 rows=1 loops=1)
 ->  HashAggregate  (cost=0.02..0.03 rows=1 width=4) 
(actual time=0.008..0.008 rows=1 loops=1)
   ->  Limit  (cost=0.00..0.01 rows=1 width=0) (actual 
time=0.001..0.001 rows=1 loops=1)
 ->  Result  (cost=0.00..0.01 rows=1 width=0) 
(actual time=0.001..0.001 rows=1 loops=1)
 ->  Index Scan using users_user_groups_idx on users  
(cost=0.00..1849.20 rows=1517 width=8) (actual time=0.015..0.016 rows=1 loops=1)
   Index Cond: (users.user_group_id = (607547))
   ->  Index Scan using machines_sid_un on machines  (cost=0.00..8.90 rows=1 
width=12) (actual time=0.013..0.013 rows=1 loops=1)
 Index Cond: (machines.user_id = users.id)
 Total runtime: 0.148 ms
(12 rows)



The problem here (I think) seems to be that the QA server believes that running 
a nested loop over 200 users.id values and joining that against 
machines.user_id will result in >1M rows. The production server sees this more 
accurately as the nearly 1:1 relationship that it is.

The reason I wonder if this might be a bug is because if I change the obtuse 
clause "WHERE user_group_id IN (select 607547 offset 0)" to simply "where 
user_group_id in (607547)" then the plan collapses to the same plan on both 
servers:

explain analyze SELECT machines.quota_purchased
FROM machines
WHERE NOT deleted AND machines.user_id IN (
SELECT id FROM users WHE

Re: [GENERAL] heavy swapping, not sure why

2011-08-30 Thread Merlin Moncure
On Tue, Aug 30, 2011 at 5:05 PM, Lonni J Friedman  wrote:
> On Tue, Aug 30, 2011 at 2:55 PM, John R Pierce  wrote:
>> On 08/30/11 12:18 PM, Tom Lane wrote:
>
>            total       used       free     shared    buffers     cached
> >>  Mem:         56481      55486        995          0         15      53298
> >>  -/+ buffers/cache:       2172      54309
> >>  Swap:         1099         18       1081

 >  This is totally uninteresting.
>>>
>>> Yeah.  You're going to need a whole lot more than 17MB of bloat before
>>> it'll be possible to tell which process is at fault, given that the
>>> expected process sizes are up to 10GB.
>>
>> indeed, its 100% normal for the linux virtual memory manager to write
>> 'dirty' pages to the swapfile during idle time.
>
> Sure, but it shouldn't grow indefinitely?

no it should not (and it can't -- you've only reserved so much space).
  however 35mb swap is incredibly low.  on my linux workstation vm
I've got 176mb used with nary an issue -- it fluctuates a lot
depending what happens to be going on at the time (a workstation
includes a lot of high memory low activity processes like gnome-panel
which tend to swap out).  had I disabled swappiness, that would have
reduced memory available to do stuff with rather drastically.

btw symptoms of swap thrashing are a 100% full swap volume, extremely
high iowait, little/no memory used for file caching, process(es) with high
non-shared resident memory usage, and, depending on how serious your
situation is and how linux is configured, OOM errors and/or randomly
killed processes.  for systems not under extreme memory stress
(including yours, at least at the moment) swap's main purpose is to make
sure some process doesn't gobble up memory and just sit on it.
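
a quick way to keep an eye on it (a sketch):

    $ vmstat 5
    # watch the si/so columns (pages swapped in/out per interval) and wa
    # (iowait); sustained nonzero si/so together with high wa is the
    # thrashing signature, whereas a few MB parked in swap with si/so at
    # zero is harmless.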

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] FATAL: terminating connection due to conflict with recovery

2011-08-30 Thread Fujii Masao
On Wed, Aug 31, 2011 at 5:51 AM, Jeff Ross  wrote:
> Is there a setting in this or something else that I should tweak so this
> query can complete against the replica?  Google turned up some threads on
> the error code associated with the error but I didn't find much else that
> seems applicable.

Increasing max_standby_archive_delay and max_standby_streaming_delay
would be helpful to make the query complete. Please see the following manual
for details.
http://www.postgresql.org/docs/9.0/interactive/hot-standby.html#HOT-STANDBY-CONFLICT
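
For example, on the standby, something like this in postgresql.conf (just a
sketch -- the right values depend on how much replay lag you can tolerate,
and -1 means queries are never cancelled but replay can fall arbitrarily far
behind):

    max_standby_archive_delay = 5min
    max_standby_streaming_delay = 5min
    # or: max_standby_streaming_delay = -1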

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgresql-9.0.1 Recovery

2011-08-30 Thread Craig Ringer

On 30/08/2011 6:59 PM, Venkat Balaji wrote:

Hello Everyone,

I have a situation here -

I am trying to restore the production online backup and recover the same.

  - I had initially rsynced (excluded pg_log) the data directory and the
tarred and zipped the same


Did you do that after pg_start_backup() or on a stopped database server?

If you did it on a running database server without first running 
pg_start_backup(), your backup is invalid.


Personally I like to take my base backups from an LVM snapshot of the 
datadir just to be extra safe. That isn't necessary, though, and a 
regular rsync or tar or whatever of a datadir after pg_start_backup() is 
fine.


Remember to run pg_stop_backup() afterwards.
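
The whole sequence is only a few commands -- a minimal sketch, with made-up
paths and label, and assuming WAL archiving is already configured (without
the archived WAL the base backup alone is not restorable):

    psql -U postgres -c "SELECT pg_start_backup('weekly_base');"
    rsync -a --exclude=pg_xlog /var/lib/postgresql/data/ /backups/base/
    psql -U postgres -c "SELECT pg_stop_backup();"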


  - I got an error "unable to read  from pg_clog location"
(file size is around 160K)


... from PostgreSQL, when you tried to start it?

What emitted that error message?


What i understood is that, rsync some how missed out on syncing the
files in "pg_clog"  so, i had manually copied the missing pg_clog file
from production and tried recovery.


That won't work. You need a consistent snapshot of all the files in the 
data dir. You cannot just mix and match copies taken at different times.


For efficiency reasons PostgreSQL will recycle used clog files. You 
can't just copy a file over and hope that because it has the same name, 
it still contains the data you want.


Your backup *failed* at the point where you got an incomplete copy of 
the data directory.



Do i need to get that particular wal archive which is before online
backup time ?


No, you need to get the missing clog files. If you cannot do that, try 
using pg_resetxlog, but be aware that that may lose transactions and can 
potentially cause corruption of tables and indexes.



By this experience what i understand is that Postgresql stores committed
and uncommitted transactions in pg_xlog / wal archive files and
information (not the transaction data) about transaction commit status
is stored in pg_clog. Am I correct ?


That sounds right to me, but I don't know as much about how Pg stores 
things as I should.



I am in the process of designing a disaster recovery planner for our
productions systems.


Congratulations!

Be extremely glad this didn't happen in a real recovery scenario. This 
is a marvellous example of why you should always test your backups - you 
actually did, and found a problem that would've been a critical issue if 
the backup were actually needed.


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] FATAL: terminating connection due to conflict with recovery

2011-08-30 Thread Craig Ringer

On 31/08/2011 4:51 AM, Jeff Ross wrote:


On my workstation using psql this query runs in about 1.5 minutes. I can
choose the quarter the query uses and I'm virtually positive that no
rows in that set will be updated or deleted so the error message to me
seems wrong.


AFAIK: There may be other data on the same page that is being written 
to, or it might be VACUUM activity replicated from the master. Either 
way, the standby is saying it has to get rid of that page in order to 
stay within sync delay limits with the master.


See the other reply post you got for the tuning parameters you can use 
to tweak that permissible sync delay.


--
Craig Ringer


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] duplicate key violates unique on a nextval() field

2011-08-30 Thread Craig Ringer

On 31/08/2011 1:28 AM, Peter Warasin wrote:

Hi

thank you for answering!

On 30/08/11 18:56, Scott Ribe wrote:

But how can that happen when it is bigserial?

Postgres only assigns the value if it is not explicitly provided. Any client, 
freeradius included, could be assigning ids and could have bugs. Allowing pg to 
assign the value is safe, using nextval is safe--I'd look for client code that 
tries to get ranges ahead of time&  cache...

freeradius is not supplying an id.
that field is left out, as it should be, so that nextval is used

that renders the whole thing strange.. also because it happens only from
time to time.
it must be related to load. when the system is under heavy load
something goes wrong and 2 concurrent transactions get the same unique
id. but how??

That *really* should not be possible. Sequences are tested extremely 
heavily on a daily basis by almost every Pg user. It's not impossible 
there's a bug there, but it's exceedingly unlikely.


Even if the queries that are failing use nextval(), DEFAULT, or omit the 
RadAcctId field entirely, that doesn't prevent some other prior query 
from inserting a record with a RadAcctId that is above the current 
sequence value. If that happens, then everything will seem fine until 
the sequence counts up to the inserted value, at which point the 
statement that uses the sequence will fail to insert. For example:


regress=> CREATE TABLE x ( id bigserial primary key, y integer );
NOTICE:  CREATE TABLE will create implicit sequence "x_id_seq" for 
serial column "x.id"
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "x_pkey" 
for table "x"

CREATE TABLE
regress=> insert into x (y) values (1),(2),(3);
INSERT 0 3
regress=> insert into x (id, y) values (nextval('x_id_seq'), 4);
INSERT 0 1
regress=> select max(id) from x;
 max
-
   4
(1 row)

regress=> insert into x (id, y) values ( (select max(id) from x)+1, 5); 
-- Wrong

INSERT 0 1
regress=> insert into x (y) values (6);
ERROR:  duplicate key value violates unique constraint "x_pkey"


Before doing much else, I suggest you temporarily enable logging of DML 
so you can capture some of the queries freeradius is sending to the 
server. Find the ones that touch the `radacct' table and post them after 
Xing out any important usernames/passwords etc. Search the logs for any 
queries that mention the RadAcctId column (use "grep -i" or some other 
case-insensitive search) and see if any other queries might be modifying it.
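
For example, in postgresql.conf (a sketch; 'mod' captures INSERT / UPDATE /
DELETE and similar statements, 'all' captures everything -- remember to
reload, and to turn it back down once you've caught what you need):

    log_statement = 'mod'        # or 'all'

then reload with "SELECT pg_reload_conf();" or pg_ctl reload.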


Also, you mentioned the use of dblink. Where does it come into play?

--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] COPY failure on directory I own

2011-08-30 Thread Craig Ringer

On 31/08/2011 1:34 AM, Rich Shepard wrote:

On Tue, 30 Aug 2011, Scott Mead wrote:


In this case, it's not about YOU and your permissions, it's about the
server. The COPY command writes data as the 'postgres' operating system
user (or whichever user owns the postgres backend process).


Scott,

Ah so. User 'postgres' is in the same group ('users') as I am, so I need
to change the perms on the data directory to 775 to give postgres write
access.


Yeah, or use the client/server copy protocol via psql's \copy command.
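
For example (a sketch -- the table and file names are made up; \copy reads
the file with your own permissions on the client side and streams it over
the connection, so the server never needs filesystem access to it):

    \copy my_table FROM '/path/to/data.csv' WITH CSV HEADER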

--
Craig Ringer


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] COPY failure on directory I own

2011-08-30 Thread Rich Shepard

On Wed, 31 Aug 2011, Craig Ringer wrote:


Yeah, or use the client/server copy protocol via psql's \copy command.


Craig,

  I was aware there was a back-slash version but did not recall when its use
is appropriate nor just how to use it.

Thanks,

Rich

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] heavy swapping, not sure why

2011-08-30 Thread mark


> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of Scott Marlowe
> Sent: Tuesday, August 30, 2011 3:52 AM
> To: Sim Zacks
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] heavy swapping, not sure why
> 
> On Tue, Aug 30, 2011 at 2:50 AM, Sim Zacks  wrote:
> >
> > On a machine with lots of memory, I've run into pathological
> behaviour
> > with both the RHEL 5 and Ubuntu 10.04 kernels where the kswapd starts
> > eating up CPU and swap io like mad, while doing essentially nothing.
> > Setting swappiness to 0 delayed this behaviour but did not stop it.
> > Given that I'm on a machine with 128G ram, I just put "/sbin/swapoff
> > -a" in /etc/rc.local and viola, problem solved.
> >
> > I've tried running without swap and the problem is if you actually do
> run
> > out of memory then the process killer can take out your postgresql.
> 
> My postgres is configured to never use more than 10 or 20 gig at max
> load.  That leaves about 80+Gig for caching the database by the
> kernel.  work_mem is pretty small (8MB) given the maximum connections
> (1000) So 8Gig if everybody sorts at once.  Even under very heavy load
> memory usage has never really gone anywhere near that high.


Scott,
1000 max connections ? I thought that was several times more than
recommended these days, even for 24 or 48 core machines. Or am I living in
the past ? (I admit that my most recent runs of pgbench showed that best
throughput at around 250 backends from a 2 cpu VM which kind of surprised me
for a synthetic load and all that)

Thanks.



To the broader list, regarding troubles with kswapd: I am curious what
others are seeing in /proc/zoneinfo for DMA pages (not DMA32 or Normal) -
basically whether the count sits at 1 or not.  Setting swappiness to 0 did
not have any effect for us on the kswapd issues. Another thing I have not
had time and resources to dig into: I'm interested in what kernel the
affected systems are running and what storage drivers they might be using.

FWIW, vm.swappiness at 0 didn't play well for us with a PostgreSQL fork
until we had a swap partition the size of memory. We were advised to make
that setting change for the fork that we are using and didn't; we learn the
hard way sometimes. Granted, what works well there isn't always applicable
to Postgres.

-Mark






> 
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] heavy swapping, not sure why

2011-08-30 Thread Scott Marlowe
On Tue, Aug 30, 2011 at 8:36 PM, mark  wrote:
>
> Scott,
> 1000 max connections ? I thought that was several times more than
> recommended these days, even for 24 or 48 core machines. Or am I living in
> the past ? (I admit that my most recent runs of pgbench showed that best
> throughput at around 250 backends from a 2 cpu VM which kind of surprised me
> for a synthetic load and all that)
>

It's definitely sub optimal, but we haven't had the time to test
pgbouncer or pgpool in staging.  luckily about 95% of those
connections are idle at any given time, and no queries do anything
that causes a thundering herd issue.
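
When we do get to it, the pgbouncer side should be small -- roughly
something like this as a starting point (a sketch only; paths, pool sizes
and the auth setup are all assumptions):

    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    max_client_conn = 1000
    default_pool_size = 25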

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] heavy swapping, not sure why

2011-08-30 Thread Scott Marlowe
On Tue, Aug 30, 2011 at 8:36 PM, mark  wrote:
> To the broader list, regarding troubles with kswap. I am curious to what
> others seeing from /proc/zoneinfo for DMA pages (not dma32 or normal) -
> basically if it sits at 1 or not.  Setting swappiness to 0 did not have any
> affect for us on kswap issues. Another thing I have not had time and
> resources to go work on... interested in what kernel they are running and
> what storage drivers they might be using.

Well, we had zone reclaim mode autoset to 1, and we had to turn it off
to get decent performance with postgresql.  Machine was a quad
dodecacore Magny Cours, so 48 cores with 128G RAM.  RAID controller is
an Areca 1680 with BBU, 34 15kRPM 147G SAS Seagate 15k6 drives in two
16 drive external enclosures and 2 drives in the server.

The only solution we could find for kswapd going crazy was to just
turn off swap.  Pretty sure I used a large swap file to test larger
swaps, but all that did was put off the eventual kswapd storm. It took
anywhere from one to two weeks, maybe more, and then one day you check
and your server is maxed out by kswapd.
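
For anyone else chasing this, the knobs involved are just these (a sketch;
persist them in /etc/sysctl.conf or rc.local as you prefer):

    # check and disable zone reclaim
    cat /proc/sys/vm/zone_reclaim_mode
    sysctl -w vm.zone_reclaim_mode=0

    # discourage swapping, or give up on it entirely
    sysctl -w vm.swappiness=0
    swapoff -a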

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Getting Table Names in a Particular Database

2011-08-30 Thread Adarsh Sharma

Dear all,

Today I am researching how to fetch all the table names in a 
particular database.

There is the \dt command, but I need to fetch them from the metadata.
I found some queries such as the ones below:

1. SELECT table_name FROM information_schema.tables WHERE table_schema 
= 'public';


2. SELECT tablename FROM pg_tables WHERE tablename NOT LIKE 'pg%' AND 
tablename NOT LIKE 'sql%';


But I need to specify a particular database & then fetch the tables in it.


Thanks


Re: [GENERAL] Getting Table Names in a Particular Database

2011-08-30 Thread Scott Marlowe
On Tue, Aug 30, 2011 at 11:26 PM, Adarsh Sharma
 wrote:
> Dear all,
>
> Today I am researching about fetching all the table names in a particular
> database.
> There is \dt command but I need to fetch it from metadata.
> I find some commands as below :
>
> 1. SELECT table_name FROM information_schema.tables WHERE table_schema =
> 'public';
>
> 2. SELECT tablename FROM pg_tables WHERE tablename NOT LIKE ‘pg%’ AND
> tablename NOT LIKE ‘sql%’.
>
> But I need to specify a particular database & then fetch tables in that.

Try this, start psql with the -E switch, then run \d and copy and edit
the query(s) that gives you.
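
i.e. something like (a sketch):

    $ psql -E -d yourdb
    yourdb=# \dt

psql will first echo the SQL it ran against the system catalogs and then
show the result; copy that query out and adapt it as needed.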

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Getting Table Names in a Particular Database

2011-08-30 Thread Scott Marlowe
On Tue, Aug 30, 2011 at 11:30 PM, Scott Marlowe  wrote:
>> But I need to specify a particular database & then fetch tables in that.
>
> Try this, start psql with the -E switch, then run \d and copy and edit
> the query(s) that gives you.

P.s. I think you have to connect to the database you want to pull info
from / about.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Getting Table Names in a Particular Database

2011-08-30 Thread Adarsh Sharma

Below is the output of the \d command

SELECT n.nspname as "Schema",
 c.relname as "Name",
 CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'i' 
THEN 'index' WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' END as "Type",

 pg_catalog.pg_get_userbyid(c.relowner) as "Owner"
FROM pg_catalog.pg_class c
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r','v','S','')
 AND n.nspname <> 'pg_catalog'
 AND n.nspname <> 'information_schema'
 AND n.nspname !~ '^pg_toast'
 AND pg_catalog.pg_table_is_visible(c.oid)
ORDER BY 1,2;

I want to specify the database name & fetch tables from that but I think 
this query filters from schema 'public';



Thanks

Scott Marlowe wrote:

On Tue, Aug 30, 2011 at 11:26 PM, Adarsh Sharma
 wrote:
  

Dear all,

Today I am researching about fetching all the table names in a particular
database.
There is \dt command but I need to fetch it from metadata.
I find some commands as below :

1. SELECT table_name FROM information_schema.tables WHERE table_schema =
'public';

2. SELECT tablename FROM pg_tables WHERE tablename NOT LIKE ‘pg%’ AND
tablename NOT LIKE ‘sql%’.

But I need to specify a particular database & then fetch tables in that.



Try this, start psql with the -E switch, then run \d and copy and edit
the query(s) that gives you.
  




Re: [GENERAL] Getting Table Names in a Particular Database

2011-08-30 Thread Scott Marlowe
On Tue, Aug 30, 2011 at 11:38 PM, Adarsh Sharma
 wrote:
> Below is the output of the \d command
>
> SELECT n.nspname as "Schema",
>   c.relname as "Name",
>   CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'i' THEN
> 'index' WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' END as "Type",
>   pg_catalog.pg_get_userbyid(c.relowner) as "Owner"
> FROM pg_catalog.pg_class c
>  LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
> WHERE c.relkind IN ('r','v','S','')
>   AND n.nspname <> 'pg_catalog'
>   AND n.nspname <> 'information_schema'
>   AND n.nspname !~ '^pg_toast'
>   AND pg_catalog.pg_table_is_visible(c.oid)
> ORDER BY 1,2;
>
> I want to specify the database name & fetch tables from that but I think
> this query filters from schema 'public';

You HAVE to connect to the db you want to query about tables.  They
are isolated from each other.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Getting Table Names in a Particular Database

2011-08-30 Thread Scott Marlowe
On Tue, Aug 30, 2011 at 11:42 PM, Scott Marlowe  wrote:
> On Tue, Aug 30, 2011 at 11:38 PM, Adarsh Sharma
>  wrote:
>> Below is the output of the \d command
>>
>> SELECT n.nspname as "Schema",
>>   c.relname as "Name",
>>   CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'i' THEN
>> 'index' WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' END as "Type",
>>   pg_catalog.pg_get_userbyid(c.relowner) as "Owner"
>> FROM pg_catalog.pg_class c
>>  LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
>> WHERE c.relkind IN ('r','v','S','')
>>   AND n.nspname <> 'pg_catalog'
>>   AND n.nspname <> 'information_schema'
>>   AND n.nspname !~ '^pg_toast'
>>   AND pg_catalog.pg_table_is_visible(c.oid)
>> ORDER BY 1,2;
>>
>> I want to specify the database name & fetch tables from that but I think
>> this query filters from schema 'public';
>
> You HAVE to connect to the db you want to query about tables.  They
> are isolated from each other.

Also take out the AND pg_catalog.pg_table_is_visible(c.oid) bit if you
want to view all tables in all schemas, not just the ones in your
search path.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Getting Table Names in a Particular Database

2011-08-30 Thread John R Pierce

On 08/30/11 10:26 PM, Adarsh Sharma wrote:

Dear all,

Today I am researching about fetching all the table names in a 
particular database.

There is \dt command but I need to fetch it from metadata.
I find some commands as below :

1. SELECT table_name FROM information_schema.tables WHERE 
table_schema = 'public';


That should only return tables in the database you're currently 
connected to...


assuming you have multiple schemas in your database, I'd use something 
like...


select table_schema || '.' || table_name
from information_schema.tables
where table_schema not in ('pg_catalog', 'information_schema')
and table_type = 'BASE TABLE';




--
john r pierceN 37, W 122
santa cruz ca mid-left coast


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Getting Table Names in a Particular Database

2011-08-30 Thread Adarsh Sharma
I understand. So there is no way to fetch the tables in a single query. The 
only way is:


1. Connect to demo
2. Execute the query:

SELECT n.nspname as "Schema",
  c.relname as "Name",
  CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'i' THEN 'index' WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' END as "Type",
  pg_catalog.pg_get_userbyid(c.relowner) as "Owner"
FROM pg_catalog.pg_class c
  LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r','v','S','')
  AND n.nspname <> 'pg_catalog'
  AND n.nspname <> 'information_schema'
  AND n.nspname !~ '^pg_toast'
ORDER BY 1,2;

As in MySQL, where we can view all tables in a test database with the command below:

select table_name from information_schema.tables where table_schema = 'test';



Thanks

Scott Marlowe wrote:

On Tue, Aug 30, 2011 at 11:42 PM, Scott Marlowe  wrote:
  

On Tue, Aug 30, 2011 at 11:38 PM, Adarsh Sharma
 wrote:


Below is the output of the \d command

SELECT n.nspname as "Schema",
  c.relname as "Name",
  CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'i' THEN
'index' WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' END as "Type",
  pg_catalog.pg_get_userbyid(c.relowner) as "Owner"
FROM pg_catalog.pg_class c
 LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r','v','S','')
  AND n.nspname <> 'pg_catalog'
  AND n.nspname <> 'information_schema'
  AND n.nspname !~ '^pg_toast'
  AND pg_catalog.pg_table_is_visible(c.oid)
ORDER BY 1,2;

I want to specify the database name & fetch tables from that but I think
this query filters from schema 'public';
  

You HAVE to connect to the db you want to query about tables.  They
are isolated from each other.



Also take out the AND pg_catalog.pg_table_is_visible(c.oid) bit if you
want to view all tables in all schemas, not just the ones in your
search path.
  




Re: [GENERAL] Getting Table Names in a Particular Database

2011-08-30 Thread Scott Marlowe
On Tue, Aug 30, 2011 at 11:50 PM, Adarsh Sharma
 wrote:
>  I understand, So there is no way to fetch table in a single query. The only
> way is :
>
> 1. Connect demo
> 2. Execute the query  'SELECT n.nspname as "Schema",   c.relname as "Name",
> CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'i' THEN
> 'index' WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' END as "Type",
> pg_catalog.pg_get_userbyid(c.relowner) as "Owner" FROM pg_catalog.pg_class
> c    LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace WHERE
> c.relkind IN ('r','v','S','')   AND n.nspname <> 'pg_catalog'    AND
> n.nspname <> 'information_schema' AND n.nspname !~ '^pg_toast'
> ORDER BY 1,2;
>
> As in Mysql we can view all tables in a test database from below command :
>
>  select table_name from information_schema.tables where table_schema
> ='test';;

Have you tried it in pgsql, cause that works too.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Getting Table Names in a Particular Database

2011-08-30 Thread Adarsh Sharma
pdc_uima=# select table_name from information_schema.tables where table_schema='pdc_uima';
 table_name
------------
(0 rows)

But filtering on 'public', it gives these results:

pdc_uima=# select * from information_schema.tables where table_schema='public';
 table_catalog | table_schema |    table_name    | table_type | self_referencing_column_name | reference_generation | user_defined_type_catalog | user_defined_type_schema | user_defined_type_name | is_insertable_into | is_typed | commit_action
---------------+--------------+------------------+------------+------------------------------+----------------------+---------------------------+--------------------------+------------------------+--------------------+----------+---------------
 pdc_uima      | public       | spatial_ref_sys  | BASE TABLE |                              |                      |                           |                          |                        | YES                | NO       |
 pdc_uima      | public       | geometry_columns | BASE TABLE |                              |                      |                           |                          |                        | YES                | NO       |
 pdc_uima      | public       | adarsh           | BASE TABLE |                              |                      |                           |                          |                        | YES                | NO       |

(3 rows)

Coming back to the original problem: I have 10 databases with different 
names, and you have to go into each database with the \c command to fetch 
the table names.



Thanks

Scott Marlowe wrote:

On Tue, Aug 30, 2011 at 11:50 PM, Adarsh Sharma
 wrote:
  

 I understand, So there is no way to fetch table in a single query. The only
way is :

1. Connect demo
2. Execute the query  'SELECT n.nspname as "Schema",   c.relname as "Name",
CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'i' THEN
'index' WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' END as "Type",
pg_catalog.pg_get_userbyid(c.relowner) as "Owner" FROM pg_catalog.pg_class
cLEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace WHERE
c.relkind IN ('r','v','S','')   AND n.nspname <> 'pg_catalog'AND
n.nspname <> 'information_schema' AND n.nspname !~ '^pg_toast'
ORDER BY 1,2;

As in Mysql we can view all tables in a test database from below command :

 select table_name from information_schema.tables where table_schema
='test';;



Have you tried it in pgsql, cause that works too.
  




Re: [GENERAL] Getting Table Names in a Particular Database

2011-08-30 Thread Scott Marlowe
On Wed, Aug 31, 2011 at 12:10 AM, Adarsh Sharma
 wrote:
> Come back to the original problem. I have 10 databases with different names
> you have to go into the database by \c command to fetch the table names.

Again, in PostgreSQL databases are very separate objects.  In mysql
they are closer to schemas than separate entities.  If you want to
examine a database in pg, you need to connect to it.  period.
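
If you really do need to sweep all ten, the usual trick is to loop over the
databases from the shell and connect to each one in turn -- something like
this (a sketch; adjust connection options to taste):

    for db in $(psql -At -c "select datname from pg_database where not datistemplate"); do
        echo "== $db =="
        psql -At -d "$db" -c "select table_schema || '.' || table_name
                              from information_schema.tables
                              where table_schema not in ('pg_catalog','information_schema')"
    done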

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgresql-9.0.1 Recovery

2011-08-30 Thread Venkat Balaji
Thanks Craig !

Below  is what i did -

1. pg_start_backup()
2. rsync the data dir
3. pg_stop_backup()

I believe the backup is valid because i was able to bring up the cluster
without any issues (of course, with data loss).

+ve signs -

I am able to bring up the cluster with the online backup, but only with some
loss of data.

-ve signs, and things to watch for carefully when testing backups -

   - pg_clog files were not synced. I suspect they were being written at the
   time of the backup; I may have synced the data dir while the pg_clog files
   were only half written.
   - Though the WAL archives are there, Postgres does not try to recover
   beyond the timestamp at which pg_clog was missing.
   - Even if i replace the missing pg_clog files (which i did), Postgres
   still asks for the corresponding WAL archive files.

Yes. What i learnt is that we need to ensure that all the pg_clog files
are fully copied as of the backup time. We cannot afford to miss any of them.

Thanks
Venkat
On Wed, Aug 31, 2011 at 5:46 AM, Craig Ringer  wrote:

> On 30/08/2011 6:59 PM, Venkat Balaji wrote:
>
>> Hello Everyone,
>>
>> I have a situation here -
>>
>> I am trying to restore the production online backup and recover the same.
>>
>>  - I had initially rsynced (excluded pg_log) the data directory and the
>> tarred and zipped the same
>>
>
> Did you do that after pg_start_backup() or on a stopped database server?
>
> If you did it on a running database server without first running
> pg_start_backup(), your backup is invalid.
>
> Personally I like to take my base backups from an LVM snapshot of the
> datadir just to be extra safe. That isn't necessary, though, and a regular
> rsync or tar or whatever of a datadir after pg_start_backup() is fine.
>
> Remember to run pg_stop_backup() afterwards.
>
>
>   - I got an error "unable to read  from pg_clog location"
>> (file size is around 160K)
>>
>
> ... from PostgreSQL, when you tried to start it?
>
> What emitted that error message?
>
>
>  What i understood is that, rsync some how missed out on syncing the
>> files in "pg_clog"  so, i had manually copied the missing pg_clog file
>> from production and tried recovery.
>>
>
> That won't work. You need a consistent snapshot of all the files in the
> data dir. You cannot just mix and match copies taken at different times.
>
> For efficiency reasons PostgreSQL will recycle used clog files. You can't
> just copy a file over and hope that because it has the same name, it still
> contains the data you want.
>
> Your backup *failed* at the point where you got an incomplete copy of the
> data directory.
>
>
>  Do i need to get that particular wal archive which is before online
>> backup time ?
>>
>
> No, you need to get the missing clog files. If you cannot do that, try
> using pg_resetxlog, but be aware that that may lose transactions and can
> potentially cause corruption of tables and indexes.
>
>
>  By this experience what i understand is that Postgresql stores committed
>> and uncommitted transactions in pg_xlog / wal archive files and
>> information (not the transaction data) about transaction commit status
>> is stored in pg_clog. Am I correct ?
>>
>
> That sounds right to me, but I don't know as much about how Pg stores
> things as I should.
>
>
>  I am in the process of designing a disaster recovery planner for our
>> productions systems.
>>
>
> Congratulations!
>
> Be extremely glad this didn't happen in a real recovery scenario. This is a
> marvellous example of why you should always test your backups - you actually
> did, and found a problem that would've been a critical issue if the backup
> were actually needed.
>
> --
> Craig Ringer
>