Re: [GENERAL] if-clause to an existing statement

2010-12-17 Thread Rob Sargent



Jasen Betts wrote:

On 2010-12-07, Kobi Biton k...@comns.co.il wrote:
  

Hi, I am a newbie to SQL statements. I am running Postgres 8.1 with an
application called OpenNMS version 1.8.5. Due to an application bug,
queries that I execute against the DB which return row-count=0 are being
ignored and will not fire a certain trigger I need to process.



I think you want this:

 ORIGINAL QUERY
 union
   select DUMMY ROW DATA
 where 
   not exists ( ORIGINAL QUERY )



You need to return something to get a row count of 1; that is what the
dummy row data provides. The WHERE NOT EXISTS part blocks the dummy
row when the main query returns something.
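
To make that concrete, a sketch with invented table and column names (the
real OpenNMS query goes where "events" appears):

  SELECT node_id, event_count
    FROM events
   WHERE severity > 3
  UNION
  SELECT 0, 0   -- dummy row, only produced when the real query is empty
   WHERE NOT EXISTS (SELECT 1 FROM events WHERE severity > 3);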


Simple enough, but I suspect it runs the same query twice, so I hope 
it's not too expensive.  I wonder what the app is/was doing with the 
vacuous single row, or will do with the dummy data row?  Seems the 
app/trigger simply needs to know the execution of the query was 
successful irrespective of the actual row count - or is that the bug 
with 1.8.5?




[GENERAL] Recommended replication solution?

2010-12-17 Thread Marcin Krol

Hello everyone,

Yes I know, one size does not fit all; specifically, I need:

* preferably multi-master

* local read, cluster-wide write

* a solution that allows me to maintain consistency between masters in
case of single node failure

* last but not least, production-ready

Load balancing is not important in my case, neither is connection
pooling. Fault-tolerance would be nice to have, but it's not absolute
must - what concerns me most is cluster-wide data integrity, and not
necessarily HA.

I need to put the replicated database behind SQLAlchemy, and the
simplest setup would be if SA were handing queries over to a cluster as
if it were a single local db.

I'd love to use Postgres-R, as its explicit design goals seem most
sensible, but they say on their page that it's not production-ready yet.
(Having said that, has anybody used it in production of some kind?)

Londiste seems complicated to set up and I was not able to gather
specific information on whether it is able to meet the requirements above.

Pgpool-II seems viable, and slide 17 on
http://www.pgcon.org/2008/schedule/attachments/66_pgpool-II-demo.pdf
seems to suggest it may be used as multi-master, but I wasn't able to
figure out setup for this yet.

Does anybody have production experience with these? Pros? Cons?


--

Regards,
mk

--
Premature optimization is the root of all fun.



[GENERAL] Table both does not and does exist! wth?

2010-12-17 Thread Melvin Davidson
PostgreSQL 8.3.11 on i686-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 
20080704 (Red Hat 4.1.2-46)
Linux version 2.6.18-194.26.1.el5 (mockbu...@builder10.centos.org) (gcc version 
4.1.2 20080704 (Red Hat 4.1.2-48)) #1 SMP Tue Nov 9 12:54:40 EST 2010

How is this possible? I've been working as a PostgreSQL DBA for 5 years, and 
frankly I'm baffled.

I had previously created a TEMP table in a session, but later decided to 
make it a permanent table. However, when I attempted to do so, I came across a 
very weird problem. PostgreSQL first denies that the table exists, because I do a
DROP TABLE IF EXISTS.
But when I do a CREATE TABLE, it says it is already there!

At first I thought it might have to do with shared_buffers memory. But after 
shutting down both the client and server, the problem persists. In fact, I have 
even dropped and reloaded the database, and it still occurs.

Am I missing something obvious?
Or does PostgreSQL have some undocumented, hidden catalog I am not aware of 
where it tracks TEMP tables? 

Below is an output from my session showing the problem.

enf=# DROP TABLE IF EXISTS xtmp_changed_ids;
NOTICE:  table "xtmp_changed_ids" does not exist, skipping
DROP TABLE
enf=# CREATE TABLE xtmp_changed_ids
 (
  seq_all     SERIAL NOT NULL,
  new_id      VARCHAR(200),
  id          VARCHAR(200),
  pin         VARCHAR(200),
  pc          VARCHAR(200),
  site_id     INTEGER,
  status      INTEGER,
  csn         INTEGER,
  raw_seconds INTEGER,
  lastname    VARCHAR(200),
  firstname   VARCHAR(200),
  CONSTRAINT xtmp_changed_ids PRIMARY KEY (seq_all)
 ) WITH (OIDS = FALSE);
NOTICE:  CREATE TABLE will create implicit sequence 
"xtmp_changed_ids_seq_all_seq" for serial column "xtmp_changed_ids.seq_all"
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index 
"xtmp_changed_ids" for table "xtmp_changed_ids"
ERROR:  relation "xtmp_changed_ids" already exists
enf=#
enf=# SELECT * FROM pg_class WHERE relname = 'xtmp_changed_ids';
 relname | relnamespace | reltype | relowner | relam | relfilenode | 
reltablespace | relpages | reltuples | reltoastrelid | reltoastidxid | 
relhasindex | relisshared | relkind | relnatts | relchecks | reltriggers | 
relukeys | relfkeys | relrefs | relhasoids | relhaspkey | relhasrules | 
relhassubclass | relfrozenxid | relacl | reloptions
-+--+-+--+---+-+---+--+---+---+---+-+-+-+--+---+-+--+--+-+++-++--++
(0 rows)


Melvin Davidson

Re: [GENERAL] Searching array fields - or should I redesign?

2010-12-17 Thread Jim Nasby
On Dec 16, 2010, at 11:26 AM, Vincent Veyron wrote:
 table logdetail
  logid int
  attribute varchar/int
  value decimal
  textvalue varchar
 
 You can retrieve log entries for specific vehicles, timeframes and attributes 
 - and you can add more log attributes without changing the database 
 structure. I would suggest another table for the attributes where you can 
 look up whether it is a text or numeric entry.
 ..
 
 The problem with this approach is that you need to loop through your
 recordset in your code to collect all the values.
 If you only have one value per key to store per vehicle, it's much
 easier to have one big table with all the right columns, thus having
 just one line to process with all the information. So, from your
 example:
 
 create table logtable(
 id_vehicle text,
 date_purchased date,
 voltage integer,
 rpm integer);
 
 the corresponding record being
 ('vehicle123', now(), 13, 600)
 
 this will simplify your queries/code _a lot_. You can keep subclasses
 for details that have more than one value. Adding a column if you have
 to store new attributes is not a big problem.

Plus, that logdetail table will have a per-row overhead of 24+4 (or 8)+4 (or 
8)+1 bytes, assuming attribute is stored as an int (which you'd want). That's a 
minimum of 33 bytes per attribute, and you don't even have payload yet.

Entity-attribute-value (what logdetail is) is extremely expensive. You want to 
avoid it at all costs unless you have a really trivial amount of data.
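
To make the "loop through your recordset" cost concrete, here is roughly what
reassembling one row per log entry from logdetail looks like in plain SQL
(the attribute ids are invented for the example):

  SELECT logid,
         max(CASE WHEN attribute = 1 THEN value END) AS voltage,
         max(CASE WHEN attribute = 2 THEN value END) AS rpm
    FROM logdetail
   GROUP BY logid;

Every new attribute means another CASE branch (or another self-join), which
is exactly the bookkeeping the single wide table avoids.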
--
Jim C. Nasby, Database Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net





Re: [GENERAL] Table both does not and does exist! wth?

2010-12-17 Thread Adrian Klaver
On Friday 17 December 2010 7:47:44 am Melvin Davidson wrote:
 How is this possible? I've been working as a PostgreSQL DBA for 5 years,
 and frankly I'm baffled.
 [...]
 Or does PostgreSQL have some undocumented, hidden catalog I am not aware of
 where it tracks TEMP tables?


Have you tried a REINDEX on pg_class as superuser?
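
e.g. (as a superuser, connected to the affected database):

  REINDEX TABLE pg_class;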


-- 
Adrian Klaver
adrian.kla...@gmail.com



[GENERAL] Copy From suggestion

2010-12-17 Thread Mark Watson
Hello all,
Firstly, I apologise if this is not the correct list for this subject.
Lately, I've been working on a data conversion, importing into Postgres
using COPY FROM. The text file I'm copying from comes from an ancient
program and is either tab- or semicolon-delimited. One file contains about
1.8M rows and has a 'comments' column. The exporting program, which I am
forced to use, does not surround this column with quotes, and the column
contains CR/LF characters, which I must deal with (and have dealt with)
before I can import the file via COPY. Hence my suggestion:
I was envisioning a parameter DELIMITER_COUNT which, if one were 100%
confident that all columns are accounted for in the input file, could be
used to remove the need to deal with CR/LFs in varchar and text columns;
i.e., if COPY loaded a line with fewer delimiters than DELIMITER_COUNT, the
next line from the text file would be read and the assignment of columns
would continue for the current row/column.
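
In use, it might look something like this (hypothetical syntax - this option
does not exist today, and the file, table and count are invented for
illustration):

  COPY comments_import FROM '/data/export.txt'
       WITH DELIMITER E'\t' DELIMITER_COUNT 11;

i.e. COPY would keep consuming input lines until it had seen 11 delimiters
for the row being assembled.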
Just curious as to the thoughts out there.
Thanks to all for this excellent product, and a merry Christmas/holiday
period to all.

Mark Watson




[GENERAL] DB files, sizes and cleanup

2010-12-17 Thread Gauthier, Dave
Hi:

I'm trying to justify disk space for a new linux server they're going to give 
me for my Postgres instance.  When I do a du of the place where I installed the 
older instance on the system that is to be replaced, I see that the vast, vast 
majority of the space goes to the contents of the base dir.  In there are a 
bunch of directories with integers for names (OIDs?).  And some of those have 
millions of files inside.

Is this normal?  Should there be millions of files in some of these base 
directories?
Is this indicative of some sort of problem or lack of cleanup that I should 
have been doing?

The du shows that I'm using 196G (again, mostly in base) but 
pg_database_size shows something like 1/4 that amount, around 50G.  I'd like to 
know if there's something I'm supposed to be doing to clean up old (possibly 
deleted) data.

Also, I was running pg_size_pretty(pg_database_size('mydb')) on all the dbs.  
It runs very fast for most, but just hangs for two of the databases.  Is this 
indicative of some sort of problem?  (BTW, the 2 it hangs on are very much like 
others that it doesn't hang on, so I used those numbers to estimate the 50G)


Thanks in Advance.


Re: [GENERAL] Table both does not and does exist! wth?

2010-12-17 Thread Melvin Davidson
Have you tried a REINDEX on pg_class as superuser?

Yes, in fact, I have even done a vacuumdb -v -a -f.
The problem is still there.

Melvin

Re: [GENERAL] Table both does not and does exist! wth?

2010-12-17 Thread Greg Sabino Mullane



 enf=# DROP TABLE IF EXISTS xtmp_changed_ids;
 NOTICE: table "xtmp_changed_ids" does not exist, skipping
 DROP TABLE
 enf=# CREATE TABLE xtmp_changed_ids
 (seq_all SERIAL NOT NULL,
 ...
 NOTICE: CREATE TABLE will create implicit sequence...
 NOTICE: CREATE TABLE / PRIMARY KEY will create implicit 
  index "xtmp_changed_ids" for table "xtmp_changed_ids"
 ERROR: relation "xtmp_changed_ids" already exists

This seems to indicate it's the xtmp_changed_ids INDEX that 
already exists, not the xtmp_changed_ids TABLE. If it was the 
table, we'd presumably see the ERROR appear before the 
NOTICE (as the table is created first, and then the indexes).
Thus, make sure you don't have an index named xtmp_changed_ids 
somewhere already: \di xtmp_changed_ids

-- 
Greg Sabino Mullane g...@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 201012171118
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8





Re: [GENERAL] Table both does not and does exist! wth?

2010-12-17 Thread Adrian Klaver
On Friday 17 December 2010 8:16:52 am Melvin Davidson wrote:
 Have you tried a REINDEX on pg_class as superuser?

 Yes, in fact, I have even done a vacuumdb  -v -a -f
 The problem is still there

 Melvin

Yeah, I believe Greg may have the answer; he has a better eye than I do.

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Table both does not and does exist! wth?

2010-12-17 Thread hubert depesz lubaczewski
On Fri, Dec 17, 2010 at 07:47:44AM -0800, Melvin Davidson wrote:
 enf=# DROP TABLE IF EXISTS xtmp_changed_ids;
 NOTICE:  table "xtmp_changed_ids" does not exist, skipping
 DROP TABLE
 enf=# CREATE TABLE xtmp_changed_ids
 [...]
   CONSTRAINT xtmp_changed_ids PRIMARY KEY (seq_all)
   ) WITH (OIDS = FALSE);
 [...]
 ERROR:  relation "xtmp_changed_ids" already exists

please don't name your constraint using the same name as your table.

the constraint name is used to name the primary key index, and by the time
that index is created, the table already exists.

just skip the CONSTRAINT xtmp_changed_ids part, and you'll be fine.
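
for example (the _pk suffix is invented; anything that differs from the
table name works):

  CREATE TABLE xtmp_changed_ids (
    seq_all SERIAL NOT NULL,
    ... ,
    CONSTRAINT xtmp_changed_ids_pk PRIMARY KEY (seq_all)
  ) WITH (OIDS = FALSE);

if you drop the CONSTRAINT clause entirely, PostgreSQL names the index
xtmp_changed_ids_pkey on its own.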

depesz

-- 
Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007



Re: [GENERAL] Table both does not and does exist! wth?

2010-12-17 Thread Melvin Davidson
please don't name your constraint using the same name you named your
table.


DOH! and duh. :)

I can't believe I missed the obvious, but that's why it's better to have 
someone else
take a look.
Thanks for spotting that.  I've tacked on _pk to the constraint. Fixed!

Melvin

Re: [GENERAL] Table both does not and does exist! wth?

2010-12-17 Thread Tom Lane
Greg Sabino Mullane g...@turnstep.com writes:
 This seems to indicate it's the xtmp_changed_ids INDEX that 
 already exists, not the xtmp_changed_ids TABLE.

Oh, of course: he's got this:

CREATE TABLE xtmp_changed_ids
...
CONSTRAINT xtmp_changed_ids PRIMARY KEY (seq_all)

so he's trying to force the pkey index to have the same name as the
table.  Doesn't work.

regards, tom lane



Re: [GENERAL] DB files, sizes and cleanup

2010-12-17 Thread Bill Moran
In response to Gauthier, Dave dave.gauth...@intel.com:

 I'm trying to justify disk space for a new linux server they're going to give 
 me for my Postgres instance.
 [...]
 Also, I was running pg_size_pretty(pg_database_size('mydb')) on all the dbs.  
 It runs very fast for most, but just hangs for two of the databases.

1) Do you have autovacuum running, or do you have a regular vacuum
   scheduled?  Because this seems indicative of no vacuuming, or errors
   in vacuuming, or significantly insufficient vacuuming.
2) Unless your databases contain close to 100G of actual data, that size
   seems unreasonable.
3) pg_database_size() is probably not hanging, it's probably just taking
   a very long time to stat() millions of files.

Overall, I'm guessing you're not vacuuming your databases on a proper
schedule and that most of that 196G is bloat that doesn't need to be
there.  When bloat gets really bad, you're generally better off dumping
the databases and restoring them, as a VACUUM FULL might take a very,
very long time.

If you can demonstrate that the cause of this is table bloat, then I
would go through all your databases and do a vacuum full/reindex or
do a dump/restore if the problem is very bad.  Once you have done that,
your du output should be more realistic and more helpful.

Then, take some time to set up appropriate autovacuum settings so the
problem doesn't come back.
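
To demonstrate it, something like this is a reasonable first look (8.3's
pg_stat_user_tables carries these counters; run it in each database):

  SELECT relname, n_live_tup, n_dead_tup
    FROM pg_stat_user_tables
   ORDER BY n_dead_tup DESC
   LIMIT 20;

Tables where n_dead_tup dwarfs n_live_tup are your bloat candidates.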

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/



Re: [GENERAL] DB files, sizes and cleanup

2010-12-17 Thread Gauthier, Dave
When I restart the DB, it reports... LOG: autovacuum launcher started.

ps aux | grep postgres yields this...

dfg_suse ps aux | grep postgres
pgdbadm 22656  0.0  0.0  21296  2616 pts/7S+   Dec16   0:00 
/usr/intel/pkgs/postgresql/8.3.4/bin/psql -h fcadsql3.fc.intel.com hsxreuse
pgdbadm  9135  0.0  0.0  5  5924 pts/10   S12:22   0:00 
/nfs/hd/itools/em64t_linux26/pkgs/postgresql/8.3.4/bin/postgres -D /app/PG/v83
pgdbadm  9146  0.0  0.0  5  1360 ?Ss   12:22   0:00 postgres: 
writer process
pgdbadm  9147  0.0  0.0  5  1156 ?Ss   12:22   0:00 postgres: wal 
writer process
pgdbadm  9148  0.0  0.0  5  1316 ?Ss   12:22   0:00 postgres: 
autovacuum launcher process
pgdbadm  9149  0.0  0.0  18904  1308 ?Ss   12:22   0:00 postgres: stats 
collector process
pgdbadm  9354  0.0  0.0   2896   760 pts/9S+   12:27   0:00 grep postgres


So I assume it's running?

This is PG v 8.3.4 on linux.






Re: [GENERAL] DB files, sizes and cleanup

2010-12-17 Thread Merlin Moncure
On Fri, Dec 17, 2010 at 12:31 PM, Gauthier, Dave
dave.gauth...@intel.com wrote:
 When I restart the DB, it reports... LOG: autovacuum launcher started.
 [...]
 So I assume it's running?

 This is PG v 8.3.4 on linux.

Check your logs for warnings about the free space map.  what are
max_fsm_pages and max_fsm_relations set to?  how many tables and
indexes do you have, approximately?  do you truly have 'millions' of
files?
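
you can read the current values from any session:

  SHOW max_fsm_pages;
  SHOW max_fsm_relations;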

go into the base folder and do:
find | wc -l

merlin



Re: [GENERAL] DB files, sizes and cleanup

2010-12-17 Thread Gauthier, Dave
max_fsm_pages = 20
max_fsm_relations = 12000

There are 12 DBs with roughly 30 tables+indexes each.

There are apparently 2 bad DBs, both identical in terms of data model 
(clones with different data).  I've pg_dumped one of them to a file, dropped 
the DB (took a long time as millions of files were deleted) and recreated it.  
It now has 186 files.

ls -1 | wc took a while for the other bad one but eventually came up with 
exactly 7,949,911 files, so yes, millions.  The other one had millions too 
before I dropped it.  Something is clearly wrong.  But, since the DB recreate 
worked for the other one, I'll do the same thing to fix this one too.

What I will need to know then is how to prevent this in the future.  It's very 
odd because the worst of the 2 bad DBs was a sister DB to one that's no problem 
at all.  Here's the picture...

I have a DB, call it foo, that gets loaded with a ton of data at night.  The 
users query the thing readonly all day.  At midnight, an empty DB called 
foo_standby, which is identical to foo in terms of data model, is reloaded 
from scratch.  It takes hours.  But when it's done, I do a few database renames 
to swap foo with foo_standby (really just a name swap, sketched below).  
foo_standby serves as a live backup of yesterday's data.  Come the next 
midnight, I truncate all the tables and start the process all over again.
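
The swap itself is just renames, along these lines (foo_old is a scratch
name; no sessions may be connected to either DB while this runs):

  ALTER DATABASE foo RENAME TO foo_old;
  ALTER DATABASE foo_standby RENAME TO foo;
  ALTER DATABASE foo_old RENAME TO foo_standby;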

I say all this because foo is the DB with 8 million files in it but 
foo_standby has 186 files.  Looks like one of these things is getting 
vacuumed fine while the other is carrying baggage.  

I can't remember, but perhaps one of these 2 is a carry-over from an earlier 
version of PG (8.1 maybe, or maybe even 7.something).  Maybe it had, and still 
has, the millions of files and the vacuum isn't getting to them?

Anyway, your advice on what to set in postgresql.conf to make sure this is 
working would be greatly appreciated.

Thanks for the interest and advice!





Re: [GENERAL] DB files, sizes and cleanup

2010-12-17 Thread Tom Lane
Gauthier, Dave dave.gauth...@intel.com writes:
 ls -1 | wc took a while for the other bad one but eventually came up
 with exactly 7,949,911 files, so yes, millions.  The other one had
 millions too before I dropped it.  Something is clearly wrong.

No doubt, but it's impossible to tell what from just the bare statement
that you had a lot of unexpected files.

Did you happen to notice the timestamps on those files --- eg, were
there recent ones, or were they all old?  Do you use a lot of temporary
tables?  Have you had a lot of crashes in this database?

 This is PG v 8.3.4 on linux.

It wouldn't be a bad idea to update to 8.3.something-recent.

regards, tom lane



[GENERAL] Getting number of affected rows after DELETE FROM

2010-12-17 Thread Raimon Fernandez
Hi,

I'm trying to solve what I think must be a really trivial question.

When I use psql, after every DELETE FROM table WHERE id=... I get how many rows 
were affected - in this case, deleted.

Also, I've implemented the full FrontEnd/BackEnd Protocol v3, and there, after a 
CommandComplete, I also receive how many rows were affected.

But now I'm using REALstudio (www.realsoftware.com) with their plugin, and I 
can't get the rows affected.

I can send a simple DELETE FROM table WHERE id=... and all I get is nothing: 
no rows, no set, no info, even if the action didn't delete any row because 
the id was wrong.

They say that if the DELETE gives an empty string, it means that PostgreSQL 
isn't returning anything, and that I have to get those values in some special 
way, like return parameters.

In PL/pgSQL I've sometimes used GET DIAGNOSTICS variable = ROW_COUNT or 
FOUND with great success, but I really can't make them work outside their 
function.

Is there something like select lastval(); but for rows affected?
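
For reference, the pattern that does work for me inside PL/pgSQL looks like
this (table and function names invented) - I could wrap my deletes this way,
but I'd rather read the count directly:

  CREATE OR REPLACE FUNCTION delete_row(p_id integer) RETURNS integer AS $$
  DECLARE
    n integer;
  BEGIN
    DELETE FROM mytable WHERE id = p_id;
    GET DIAGNOSTICS n = ROW_COUNT;
    RETURN n;  -- the count comes back as an ordinary one-row SELECT result
  END;
  $$ LANGUAGE plpgsql;

  -- SELECT delete_row(42);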

thanks in advance,

regards,

r.



[GENERAL] Postgres 9.0 Hiding CONTEXT string in Logs

2010-12-17 Thread Alex -

Hi,
I occasionally output information from my PL/Perl functions to show progress 
or debug info, like this:

elog(NOTICE, "Table some_table analyzed");

which generates the

NOTICE:  Table some_table analyzed

however, since upgrading to version 9.0, I also get this annoying string:

CONTEXT:  PL/Perl function "func_some_fun"

Is there any way to suppress this, as I don't really need it?
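
For reference, the whole pattern is nothing more than this (body abbreviated;
the RETURNS type is assumed):

  CREATE OR REPLACE FUNCTION func_some_fun() RETURNS void AS $$
      elog(NOTICE, "Table some_table analyzed");
      # ... rest of the work ...
  $$ LANGUAGE plperl;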
 
 
Thanks
Alex