Re: [GENERAL] Help with trigger

2010-12-27 Thread Tom Lane
Michael Satterwhite mich...@weblore.com writes:
 On Monday, December 27, 2010 12:58:40 pm Guillaume Lelarge wrote:
 Le 27/12/2010 18:57, Michael Satterwhite a écrit :
 I'm obviously missing something ... and probably something obvious. Why
 is date2 still null?
 
 I'm not sure it'll help you. I copy/pasted your SQL script in my 9.0.2
 release. Worked great.

 I'm running 8.4.2.

Well, as somebody already pointed out, the example you posted works
fine.  When I try it in 8.4.6, I get

# select * from test;
        date1        |        date2
---------------------+---------------------
 2012-05-04 00:00:00 | 2012-04-27 00:00:00
(1 row)

I find it interesting that your quoted result is

# select * from test;
        date1        | date2
---------------------+-------
 2012-04-27 00:00:00 |
(1 row)

What it looks like from here is that there's a typo in the actually-executing
version of the function, such that date1, not date2, is assigned the
week-old date value.  Perhaps \df+ t_listing_startdate would get
you started towards sorting it out.
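For reference, the pattern under discussion looks roughly like this. The table and column names come from the quoted output; the trigger body itself is an assumption, since the original script was not reposted here:

```sql
-- Hypothetical reconstruction of the kind of trigger being debugged;
-- the actual function body was not shown in this thread.
CREATE OR REPLACE FUNCTION t_listing_startdate() RETURNS trigger AS $$
BEGIN
    -- set date2 to one week before date1
    NEW.date2 := NEW.date1 - interval '7 days';
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_listing_startdate
    BEFORE INSERT OR UPDATE ON test
    FOR EACH ROW EXECUTE PROCEDURE t_listing_startdate();
```

A typo such as assigning to NEW.date1 instead of NEW.date2 in a body like this would produce exactly the symptom quoted above.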

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help on explain analyze

2010-11-27 Thread Alban Hertroys
On 26 Nov 2010, at 6:04, Leif Jensen wrote:

Looking at your query, I'm kind of wondering why you're not letting the 
database do the time-zone calculations?

But, that's not what your question was about:

 I have concentrated my effort on the (double) 'Seq Scan':
 
    ->  Hash Join  (cost=16.65..282.84 rows=429 width=38) (actual time=0.078..6.587 rows=429 loops=10)
          Hash Cond: (t.id = ti.taskid)
          ->  Seq Scan on task t  (cost=0.00..260.29 rows=429 width=30) (actual time=0.022..5.089 rows=429 loops=10)
          ->  Hash  (cost=11.29..11.29 rows=429 width=12) (actual time=0.514..0.514 rows=429 loops=1)
                ->  Seq Scan on task_info ti  (cost=0.00..11.29 rows=429 width=12) (actual time=0.020..0.302 rows=429 loops=1)


These aren't your problem, they take up a whole whopping 65 ms.

Your problem is caused by the nested loops. See also: 
http://explain.depesz.com/s/z38

 Nested Loop  (cost=284.88..9767.82 rows=1 width=109) (actual time=2515.318..40073.432 rows=10 loops=1)
   ->  Nested Loop  (cost=284.88..9745.05 rows=70 width=102) (actual time=2515.184..40071.697 rows=10 loops=1)
         ->  Nested Loop  (cost=229.56..5692.38 rows=1 width=88) (actual time=2512.044..39401.729 rows=10 loops=1)
               ->  Nested Loop  (cost=229.56..5692.07 rows=1 width=80) (actual time=2511.999..39401.291 rows=10 loops=1)
                     ->  Nested Loop  (cost=229.56..5691.76 rows=1 width=77) (actual time=2511.943..39400.680 rows=10 loops=1)
                           Join Filter: (ti.id = td.taskinfoid)
                           ->  Seq Scan on task_definitions td  (cost=0.00..13.68 rows=1 width=22) (actual time=0.204..0.322 rows=10 loops=1)
                                 Filter: ((name = 'UseWSData'::text) AND (value = 401) AND (string)::boolean)
                           ->  Hash Left Join  (cost=229.56..5672.72 rows=429 width=59) (actual time=7.159..3939.536 rows=429 loops=10)
                                 Hash Cond: (d.id = cd.ctrlid)
                                 ->  Nested Loop  (cost=24.66..5442.80 rows=429 width=55) (actual time=6.797..3937.349 rows=429 loops=10)

The problem is inside this loop alright, but...

This is the patch of lines that you were looking at; it takes around 6.5 ms each
time it is looped over, which happens only 10 times = 65 ms total.

    ->  Hash Join  (cost=16.65..282.84 rows=429 width=38) (actual time=0.078..6.587 rows=429 loops=10)
          Hash Cond: (t.id = ti.taskid)
          ->  Seq Scan on task t  (cost=0.00..260.29 rows=429 width=30) (actual time=0.022..5.089 rows=429 loops=10)
          ->  Hash  (cost=11.29..11.29 rows=429 width=12) (actual time=0.514..0.514 rows=429 loops=1)
                ->  Seq Scan on task_info ti  (cost=0.00..11.29 rows=429 width=12) (actual time=0.020..0.302 rows=429 loops=1)

This bit here is taking about 9 ms each time it is looped over, which happens
over 4000 times = 39 s total.
That's the major contributor to what makes the outer nested loop jump from
6.5 ms to almost 4 s! And that loop runs another 10 times, putting its total time
at 39 s.


    ->  Bitmap Heap Scan on device d  (cost=8.01..12.02 rows=1 width=21) (actual time=9.145..9.146 rows=1 loops=4290)
          Recheck Cond: (d.id = ti.ctrlid)
          ->  Bitmap Index Scan on pk_device  (cost=0.00..8.01 rows=1 width=0) (actual time=0.463..0.463 rows=1569 loops=4290)
                Index Cond: (d.id = ti.ctrlid)

In the plan below the estimates seem to be off. Still, for only 343 rows a seq scan
may be fastest. Is there an index on that name column?

 ->  Hash  (cost=202.61..202.61 rows=183 width=8) (actual time=3.534..3.534 rows=343 loops=1)
       ->  Seq Scan on ctrl_definitions cd  (cost=0.00..202.61 rows=183 width=8) (actual time=0.034..3.298 rows=343 loops=1)
             Filter: ((name)::text = 'IrrPeriodStart'::text)

 ->  Index Scan using devtype_pkey on devtype dt  (cost=0.00..0.30 rows=1 width=11) (actual time=0.053..0.055 rows=1 loops=10)
       Index Cond: (dt.id = d.devtypeid)
 ->  Index Scan using pk_task_type on task_type tt  (cost=0.00..0.30 rows=1 width=16) (actual time=0.036..0.039 rows=1 loops=10)
       Index Cond: (tt.id = t.tasktypeid)
       Filter: ((tt.handler = 'modthcswi.so'::text) OR (tt.handler =

Re: [GENERAL] Help on explain analyze

2010-11-26 Thread Marc Mamin
Hello,

did you also try defining partial indexes?

e.g.
CREATE INDEX xx ON task_definitions (ctrlid) WHERE name::text = 'UseWSData';
CREATE INDEX yy ON ctrl_definitions (ctrlid) WHERE name::text = 'IrrPeriodStart';
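One caveat worth adding (not from Marc's mail): the planner uses a partial index only when the query's WHERE clause provably implies the index predicate, so the predicate text must match what the queries actually filter on. A sketch:

```sql
-- Partial index: smaller than a full index, but only usable when the
-- query's predicate implies the index's WHERE clause.
CREATE INDEX cd_irrperiodstart_idx
    ON ctrl_definitions (ctrlid)
    WHERE name::text = 'IrrPeriodStart';

-- This query can use the index, because its predicate matches:
SELECT value
  FROM ctrl_definitions
 WHERE ctrlid = 401
   AND name::text = 'IrrPeriodStart';
```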

HTH,

Marc Mamin

-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Leif Jensen
Sent: Freitag, 26. November 2010 06:04
To: pgsql-general
Subject: [GENERAL] Help on explain analyze

   Hi guys,

   I have a rather complex view that sometimes takes an awfully long time to 
execute. I have tried to do an 'explain analyze' on it. My intention was to try 
to optimize the tables involved by creating some indexes to help the lookups. I 
looked for the Seq Scans and created what I thought were appropriate indexes. However, 
in most cases the search got even slower. I have expanded the view as follows:

cims=# explain analyze select * from (SELECT t.id AS oid, d.id AS devid, 
d.description AS devname, cd.value AS period, upper(dt.typename::text) AS 
devtype, (date_part('epoch'::text, timezone('GMT'::text, t.firstrun))::bigint - 
(z.timezone::integer - 
CASE
    WHEN z.daylightsaving <> 'Y'::bpchar THEN 0
    ELSE 
    CASE
        WHEN cy.dl_start < now() AND now() < cy.dl_finish THEN 1
        ELSE 0
    END
END) * 3600) % 86400::bigint AS firstrun, t.interval, t.id AS tid, 
ti.id AS tiid, t.runstatus, t.last, tt.handler, td.value AS ctrlid, td.string 
AS alarm, z.timezone AS real_timezone, cy.dl_start < now() AND now() < 
cy.dl_finish AS daylight, z.timezone::integer - 
CASE
    WHEN z.daylightsaving <> 'Y'::bpchar THEN 0
    ELSE 
    CASE
        WHEN cy.dl_start < now() AND now() < cy.dl_finish THEN 1
        ELSE 0
    END
END AS timezone
   FROM device d
   LEFT JOIN task_info ti ON ti.ctrlid = d.id
   LEFT JOIN task t ON t.id = ti.taskid
   LEFT JOIN ctrl_definitions cd ON d.id = cd.ctrlid AND cd.name::text = 
'IrrPeriodStart'::text, task_type tt, task_definitions td, devtype dt, 
ctrl_definitions cd2, zip z, county cy
  WHERE td.name = 'UseWSData'::text AND ti.id = td.taskinfoid AND d.devtypeid = 
dt.id AND tt.id = t.tasktypeid AND (tt.handler = 'modthcswi.so'::text OR 
tt.handler = 'modthcswb.so'::text) AND btrim(cd2.string) = z.zip::text AND 
cd2.ctrlid = td.value AND cd2.name::text = 'ZIP'::text AND z.countyfips = 
cy.countyfips AND z.state = cy.state AND date_part('year'::text, now()) = 
date_part('year'::text, cy.dl_start)) AS wstaskdist
  WHERE wstaskdist.ctrlid = 401 AND CAST( alarm AS boolean ) = 't';

  The view is actually the sub-SELECT, which I have named 'wstaskdist', and my 
search criterion is the bottom WHERE. The result of the ANALYZE is:


QUERY PLAN  
  
--
 Nested Loop  (cost=284.88..9767.82 rows=1 width=109) (actual time=2515.318..40073.432 rows=10 loops=1)
   ->  Nested Loop  (cost=284.88..9745.05 rows=70 width=102) (actual time=2515.184..40071.697 rows=10 loops=1)
         ->  Nested Loop  (cost=229.56..5692.38 rows=1 width=88) (actual time=2512.044..39401.729 rows=10 loops=1)
               ->  Nested Loop  (cost=229.56..5692.07 rows=1 width=80) (actual time=2511.999..39401.291 rows=10 loops=1)
                     ->  Nested Loop  (cost=229.56..5691.76 rows=1 width=77) (actual time=2511.943..39400.680 rows=10 loops=1)
                           Join Filter: (ti.id = td.taskinfoid)
                           ->  Seq Scan on task_definitions td  (cost=0.00..13.68 rows=1 width=22) (actual time=0.204..0.322 rows=10 loops=1)
                                 Filter: ((name = 'UseWSData'::text) AND (value = 401) AND (string)::boolean)
                           ->  Hash Left Join  (cost=229.56..5672.72 rows=429 width=59) (actual time=7.159..3939.536 rows=429 loops=10)
                                 Hash Cond: (d.id = cd.ctrlid)
                                 ->  Nested Loop  (cost=24.66..5442.80 rows=429 width=55) (actual time=6.797..3937.349 rows=429 loops=10)
                                       ->  Hash Join  (cost=16.65..282.84 rows=429 width=38) (actual time=0.078..6.587 rows=429 loops=10)
                                             Hash Cond: (t.id = ti.taskid)
                                             ->  Seq Scan on task t  (cost=0.00..260.29 rows=429 width=30) (actual time=0.022..5.089 rows=429 loops=10)
                                             ->  Hash  (cost=11.29..11.29 rows=429 width=12) (actual time=0.514..0.514 rows=429 loops=1)
                                                   ->  Seq Scan on task_info ti  (cost=0.00..11.29 rows=429 width=12) (actual 

Re: [GENERAL] Help in Getting Started

2010-11-12 Thread Elford,Andrew [Ontario]
From the shell, I use something like this :

psql -d databasename -U username -c "COPY tablename FROM 'datafile.csv' WITH
DELIMITER ',' NULL '';"

You'll have to add the path to the psql binary and to the csv file
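For the original poster, a minimal end-to-end sketch might look like this (all table, column, and file names are illustrative, not from the thread):

```sql
-- Create a table whose columns match the spreadsheet's columns,
-- then load the exported CSV into it.
CREATE TABLE mytable (
    name   text,
    amount numeric
);

-- Client-side copy from within psql; the file path is resolved on the
-- client machine, so no server filesystem access is needed:
\copy mytable FROM 'C:/data/export.csv' WITH CSV
```

The `\copy` form runs in psql and reads the file from your own machine, which avoids the permissions issues that server-side COPY can hit on Windows.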


-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of ray
Sent: 2010 November 12 5:59 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Help in Getting Started

I just downloaded 9.0 onto my laptop and desktop Windows XP machines.
I would like to move data from an Excel spreadsheet into a database.
I understand that a method is to export the worksheet as a CSV and then
'copy' it into the database.

I have not been able to get that to work in the shell.  I have tried
creating a database in the shell but failed.  I was able to
create one in pgAdmin III.   I have read a couple of tutorials but have not
been able to find one that is a simple step-by-step guide.

I would appreciate any help in learning how to learn.

ray


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help with Stack Builder

2010-11-12 Thread Sachin Srivastava
Can you open the specified URL in the browser?

On Nov 12, 2010, at 4:24 PM, ray wrote:

 I just downloaded 9.0 onto my laptop and desktop Windows XP
 machines.  Neither one could run Stack Builder.  Windows would
 fault any time I tried to run it.  I tried inputting the proxy server
 but that did not
 help.  This was at work where there is a proxy server.  I tried this
 at home with the laptop but got the same error:
 Could not open the specified URL.
 
 ray
 

--
Regards,
Sachin Srivastava
EnterpriseDB, the Enterprise PostgreSQL company.



Re: [GENERAL] Help with Stack Builder

2010-11-12 Thread ray
On Nov 12, 10:03 am, sachin.srivast...@enterprisedb.com (Sachin
Srivastava) wrote:
 Can you open the specified URL in the browser?

 On Nov 12, 2010, at 4:24 PM, ray wrote:

  I just downloaded 9.0 onto my laptop and desktop Windows XP
  machines.  Neither one could run Stack Builder.  Windows would
  fault any time I tried to run it.  I tried inputting the proxy server
  but that did not
  help.  This was at work where there is a proxy server.  I tried this
  at home with the laptop but got the same error:
  Could not open the specified URL.

  ray


 --
 Regards,
 Sachin Srivastava
 EnterpriseDB, the Enterprise PostgreSQL company.

Yes,
I get a large XML file.

ray

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] help with design of the 'COUNT(*) in separate table schema

2010-10-20 Thread Peter Geoghegan
On 20 October 2010 23:52, Dennis Gearon gear...@sbcglobal.net wrote:
 Regarding the previously discussed triggers on tables to keep track of count:
 http://www.varlena.com/GeneralBits/120.php
 http://www.varlena.com/GeneralBits/49.php
 from article
 CREATE OR REPLACE FUNCTION count_rows()
 RETURNS TRIGGER AS
 '
   BEGIN
      IF TG_OP = ''INSERT'' THEN
         UPDATE rowcount
            SET total_rows = total_rows + 1
            WHERE table_name = TG_RELNAME;
      ELSIF TG_OP = ''DELETE'' THEN
         UPDATE rowcount
            SET total_rows = total_rows - 1
            WHERE table_name = TG_RELNAME;
      END IF;
      RETURN NULL;
   END;
 ' LANGUAGE plpgsql;
 /from article

 Wouldn't something like this need row-locking (SELECT for UPDATE) in order to 
 serialize the execution of all triggers?


The update will acquire a row level lock on rowcount for the
TG_RELNAME tuple without you doing anything else.
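For context, the function from the article is wired up per table with a row-level trigger, roughly as follows (the table name is illustrative):

```sql
-- Each counted table gets one row in rowcount plus an AFTER trigger.
-- The UPDATE inside count_rows() then serializes concurrent counters
-- via its ordinary row-level lock on that rowcount row, which is why
-- no explicit SELECT ... FOR UPDATE is needed.
INSERT INTO rowcount (table_name, total_rows)
     VALUES ('mytable', 0);

CREATE TRIGGER mytable_countrows
    AFTER INSERT OR DELETE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE count_rows();
```

The trade-off is that every insert or delete on the counted table now contends for that single rowcount row, so heavy concurrent write traffic will serialize on it.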

-- 
Regards,
Peter Geoghegan

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] help modeling a schedule system

2010-10-05 Thread Scott Ribe
Well, when I did something similar a very long time ago, I put a "next time to 
fire" column in the table. Each time an event is fired, the application updates 
that column.

Was a lot simpler than baroque searches for events that would need firing 
between the last check and now. Was a lot cleaner than creating tons of 
instances out through the future.

You could create a stored procedure, if you want, to calculate the next time 
based on all the possible criteria.

You could have a "last time fired" column that keeps track of when the event 
was most recently fired, for logging/debugging/info. You could have a trigger 
that updates the "next time to fire" column each time the "last time fired" 
column is updated.
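A minimal sketch of that design (all names are illustrative, not from the thread):

```sql
-- One row per schedulable event; the poller only compares next_fire to now().
CREATE TABLE scheduled_event (
    id          serial PRIMARY KEY,
    last_fired  timestamptz,
    next_fire   timestamptz NOT NULL
);

-- The polling query stays trivial -- no recurrence logic at read time:
SELECT id FROM scheduled_event WHERE next_fire <= now();

-- After firing, record last_fired and recompute next_fire (here a fixed
-- daily recurrence; a stored procedure could encode richer rules):
UPDATE scheduled_event
   SET last_fired = now(),
       next_fire  = now() + interval '1 day'
 WHERE id = 42;
```

An index on next_fire keeps the polling query cheap even with many events.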

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] help with error unexpected pageaddr

2010-09-15 Thread Tom Lane
Scot Kreienkamp skre...@la-z-boy.com writes:
 We have a PG 8.3.7 server that is doing WAL log shipping to 2 other
 servers that are remote mirrors.  This has been working well for almost
 two years.  Last night we did some massive data and structure changes to
 one of our databases.  Since then I get these errors on the two mirrors:

 2010-09-15 08:35:05 EDT: LOG:  restored log file
 0001030100D9 from archive

 2010-09-15 08:35:27 EDT: LOG:  restored log file
 0001030100DA from archive

 2010-09-15 08:35:40 EDT: LOG:  restored log file
 0001030100DB from archive

 2010-09-15 08:35:40 EDT: LOG:  unexpected pageaddr 301/4700 in log
 file 769, segment 219, offset 0

This appears to indicate that you archived the wrong contents of log
file 0001030100DB.  If you don't still have the correct
contents on the master, I think the only way to recover is to take a
fresh base backup so you can make the slaves roll forward from a point
later than this log segment.  There's no reason to suppose that there's
data corruption on the master, just bad data in the WAL archive.

You'd probably be well advised to look closely at your WAL archiving
script to see if it has any race conditions that might be triggered by
very fast generation of WAL.
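One common way to close such a race (illustrative paths; this is a general pattern, not from Tom's mail) is to make the archive step atomic: copy the segment under a temporary name, then rename it into place, and refuse to overwrite an existing archive file.

```
# postgresql.conf -- %p is the path of the WAL file to archive, %f its name.
# mv within one filesystem is atomic, so readers never see a partial file.
archive_command = 'test ! -f /archive/%f && cp %p /archive/%f.tmp && mv /archive/%f.tmp /archive/%f'
```

The `test ! -f` guard also makes the command fail loudly if the same segment name would be archived twice, which is exactly the kind of symptom being diagnosed here.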

 Also, one additional question.  I don't have a 1.history file which
 makes the PITRTools complain constantly.  Is there any way to regenerate
 this file?

Just ignore that, it's cosmetic (the file isn't supposed to exist).

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] help with error unexpected pageaddr

2010-09-15 Thread Tom Lane
Scot Kreienkamp skre...@la-z-boy.com writes:
 I tried to take a new base backup about 45 minutes ago.  The master has
 rolled forward a number of WAL files since I last tried, but it still
 fails.  

 LOG:  restored log file 0001030100FE from archive
 LOG:  restored log file 00010302 from archive
 LOG:  restored log file 000103020001 from archive
 LOG:  restored log file 000103020002 from archive
 LOG:  restored log file 000103020003 from archive
 LOG:  unexpected pageaddr 301/5000 in log file 770, segment 3,
 offset 0

Hmmm ... is it possible that your WAL archive contains log files
numbered higher than where your master is?

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] help with error unexpected pageaddr

2010-09-15 Thread Scot Kreienkamp

Scot Kreienkamp skre...@la-z-boy.com writes:
 We have a PG 8.3.7 server that is doing WAL log shipping to 2 other
 servers that are remote mirrors.  This has been working well for almost
 two years.  Last night we did some massive data and structure changes to
 one of our databases.  Since then I get these errors on the two mirrors:

 2010-09-15 08:35:05 EDT: LOG:  restored log file
 0001030100D9 from archive

 2010-09-15 08:35:27 EDT: LOG:  restored log file
 0001030100DA from archive

 2010-09-15 08:35:40 EDT: LOG:  restored log file
 0001030100DB from archive

 2010-09-15 08:35:40 EDT: LOG:  unexpected pageaddr 301/4700 in log
 file 769, segment 219, offset 0

This appears to indicate that you archived the wrong contents of log
file 0001030100DB.  If you don't still have the correct
contents on the master, I think the only way to recover is to take a
fresh base backup so you can make the slaves roll forward from a point
later than this log segment.  There's no reason to suppose that there's
data corruption on the master, just bad data in the WAL archive.

You'd probably be well advised to look closely at your WAL archiving
script to see if it has any race conditions that might be triggered by
very fast generation of WAL.

 Also, one additional question.  I don't have a 1.history file which
 makes the PITRTools complain constantly.  Is there any way to regenerate
 this file?

Just ignore that, it's cosmetic (the file isn't supposed to exist).

regards, tom lane


Tom,

I tried to take a new base backup about 45 minutes ago.  The master has
rolled forward a number of WAL files since I last tried, but it still
fails.  

LOG:  restored log file 0001030100FE from archive
LOG:  restored log file 00010302 from archive
LOG:  restored log file 000103020001 from archive
LOG:  restored log file 000103020002 from archive
LOG:  restored log file 000103020003 from archive
LOG:  unexpected pageaddr 301/5000 in log file 770, segment 3,
offset 0
LOG:  redo done at 302/2BCE828
LOG:  last completed transaction was at log time 2010-09-15
15:07:01.040854-04
LOG:  restored log file 000103020002 from archive
LOG:  selected new timeline ID: 2

My entire WAL archiving script is a 'cp %p %f' command.  It's so short I
don't even have a script; it's directly in the postgresql.conf archive
command.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] help with error unexpected pageaddr

2010-09-15 Thread Scot Kreienkamp
Shouldn't have, the only thing we did to the server was restart it and
run our database queries.  Clearing out all the wal files from pg_xlog
along with a new base backup did fix it though.

Thanks for the help Tom!

Scot Kreienkamp
skre...@la-z-boy.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help! pg_dump: Error message from server: ERROR: cache lookup failed for type 19

2010-09-13 Thread Craig Ringer
On 14/09/2010 11:02 AM, 夏武 wrote:
 I recovered it with the \copy command.
 Thanks very much.

Glad to help.

In future, it might be a good idea to:

- Keep regular pg_dump backups; and
- Avoid trying to alter the system catalogs

With Slony you can never completely avoid needing to mess with the
catalogs, as it seems to be able to get things into a nasty state
sometimes. However, if you do have to do catalog work it's a good idea
to ask for help here *before* doing anything, because it'll be easier to
fix if you haven't deleted catalog entries etc.

-- 
Craig Ringer

Tech-related writing at http://soapyfrogs.blogspot.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help! pg_dump: Error message from server: ERROR: cache lookup failed for type 19

2010-09-12 Thread Craig Ringer
On 11/09/10 14:21, 夏武 wrote:
 I use Slony for replication of a PostgreSQL database. It worked fine
 for some days.
 After I used the Slony command to delete the replication node, pg_dump
 stopped working; the error message is:
 *pg_dump: schema with OID 73033 does not exist*
 
 Then I deleted the Slony namespace from pg_namespace, and pg_dump still
 did not work.
 So I deleted the rows for that namespace from pg_class and pg_type; the commands were:
 *DELETE FROM pg_class where relnamespace=73033*
 *DELETE FROM pg_type where relnamespace=73033*

That might not have been a very good idea. You're really not meant to go
messing with the catalog unless you *really* know what you are doing.

It sounds like you have a very badly messed up catalog. You will need to
find a way to dump your database so you can drop it, re-create it and
reload it.

 I got the error message:
 pg_dump: Error message from server: ERROR:  cache lookup failed for type 19

 How do I fix it? How can I recover the database?

Your best chance is to do a pg_dump of each table, one by one, using the
--table option. Find out which table or tables are failing.

Once you know that, post here with the table definition, the exact
error, and the output of select * from pg_class and select * from
pg_type. Somebody might be able to help you if there's enough information.

Alternately, you might be able to extract the data from the problem
table(s) using  the \copy command from psql. Then you can re-create them
in the new database and re-load their contents with \copy .
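As concrete commands, the per-table approach might look like this (database and table names are illustrative):

```
# Dump tables one at a time to find which ones fail:
pg_dump --table=orders mydb > orders.sql

# Or pull just the data out with psql's client-side \copy:
psql mydb -c "\copy orders TO 'orders.csv' CSV"
```

Tables that dump cleanly can be restored into a freshly created database; only the failing ones need further catalog investigation.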

-- 
Craig Ringer

Tech-related writing: http://soapyfrogs.blogspot.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help with select with max and min please

2010-08-08 Thread Edoardo Panfili

On 08/08/10 20.47, Jose Maria Terry wrote:

Hello all,

I need to run a query on a table that holds logged data from several
water flow meters.

I need the first and last values (with their associated time) for every
logger in a time range.

I've tried this that returns the min and max time in the desired range
for every logger, but i don't know how to get the associated data (the
row called caudal) for min and max .

select remota_id, min(hora), max(hora) from historicos where hora >
'2010-08-07' and hora < '2010-08-08 00:03' group by remota_id order by
remota_id;

 remota_id |         min         |         max
-----------+---------------------+---------------------
 01        | 2010-08-07 00:00:30 | 2010-08-08 00:00:30
 02        | 2010-08-07 00:00:06 | 2010-08-08 00:00:06
 03        | 2010-08-07 00:00:03 | 2010-08-08 00:00:02
 04        | 2010-08-07 00:00:06 | 2010-08-08 00:00:06
 05        | 2010-08-07 00:00:01 | 2010-08-08 00:00:02
 06        | 2010-08-07 00:00:02 | 2010-08-08 00:00:02
 07        | 2010-08-07 00:00:06 | 2010-08-08 00:00:06
 09        | 2010-08-07 00:00:06 | 2010-08-08 00:00:06
(8 filas)

I need some like this:

 remota_id |         min         |         max         | min_caudal | max_caudal
-----------+---------------------+---------------------+------------+------------
 01        | 2010-08-07 00:00:30 | 2010-08-08 00:00:30 |     976.04 |     904.21
 02        | 2010-08-07 00:00:06 | 2010-08-08 00:00:06 |     829.71 |     764.42

Where min_caudal is the value of caudal in hora = min() and max_caudal
is the same for hora=max()


this can help?
select remota_id,min(hora),max(hora), (SELECT caudal FROM historicos 
WHERE remota_id=ooo.remota_id AND hora=min(ooo.hora)) as min_caudal, 
(SELECT caudal FROM historicos WHERE remota_id=ooo.remota_id AND 
hora=max(ooo.hora)) as max_caudal from historicos AS ooo group by 
remota_id order by remota_id;


Edoardo
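As an aside, on 8.4 and later DISTINCT ON offers another way to fetch each logger's first (or last) row in a single pass. A sketch against the same table, under the same range predicate:

```sql
-- First row per remota_id in the range (ORDER BY hora ascending);
-- change the secondary sort to "hora DESC" to get the last row instead.
SELECT DISTINCT ON (remota_id) remota_id, hora, caudal
  FROM historicos
 WHERE hora > '2010-08-07' AND hora < '2010-08-08 00:03'
 ORDER BY remota_id, hora;
```

Two such queries (ascending and descending) joined on remota_id give the min/max pairs without the correlated subqueries.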



The data in the table is like this:

select hora,remota_id,caudal from historicos;

        hora         | remota_id |  caudal
---------------------+-----------+----------
 2010-05-21 20:00:06 | 04        | 1201.309
 2010-05-21 20:15:08 | 04        | 1201.309
 2010-05-21 20:30:06 | 04        | 1219.803
 2010-05-21 20:45:06 | 04        | 1225.098
 2010-05-21 21:00:06 | 04        | 1238.359
 2010-05-21 21:15:06 | 04        | 1241.015
 2010-05-21 21:30:06 | 04        | 1241.015
 2010-05-21 21:45:06 | 04        |  1246.33
 2010-05-21 22:00:06 | 04        | 1248.989
 2010-05-21 22:15:06 | 04        | 1235.704
 2010-05-21 22:30:06 | 04        |  1222.45
 2010-05-21 22:45:06 | 04        | 1201.309
 2010-05-21 23:00:06 | 04        | 1203.947
 2010-05-21 23:15:06 | 04        | 1219.803
 2010-05-21 23:30:06 | 04        | 1275.649
 2010-05-21 23:45:06 | 04        | 1280.995
 2010-05-22 00:00:06 | 04        |  1294.38
 2010-05-22 00:15:06 | 04        | 1299.742
 2010-05-22 00:30:06 | 04        |  1294.38
 2010-05-22 00:45:06 | 04        |  1294.38
 2010-05-22 01:00:06 | 04        | 1299.742

Can anyone help me?

Best,


=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Scanned with Copfilter Version 0.84beta3a (ProxSMTP 1.6)
AntiVirus: ClamAV 0.95.2/11515 - Sun Aug 8 18:16:38 2010
by Markus Madlener @ http://www.copfilter.org




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help with select with max and min please

2010-08-08 Thread Jose Maria Terry

 El 08/08/10 21:49, Edoardo Panfili escribió:

On 08/08/10 20.47, Jose Maria Terry wrote:

Hello all,

I need to run a query on a table that holds logged data from several
water flow meters.

I need the first and last values (with their associated time) for every
logger in a time range.

I've tried this that returns the min and max time in the desired range
for every logger, but i don't know how to get the associated data (the
row called caudal) for min and max .

select remota_id, min(hora), max(hora) from historicos where hora >
'2010-08-07' and hora < '2010-08-08 00:03' group by remota_id order by
remota_id;

remota_id | min | max
---+-+-
01 | 2010-08-07 00:00:30 | 2010-08-08 00:00:30
02 | 2010-08-07 00:00:06 | 2010-08-08 00:00:06
03 | 2010-08-07 00:00:03 | 2010-08-08 00:00:02
04 | 2010-08-07 00:00:06 | 2010-08-08 00:00:06
05 | 2010-08-07 00:00:01 | 2010-08-08 00:00:02
06 | 2010-08-07 00:00:02 | 2010-08-08 00:00:02
07 | 2010-08-07 00:00:06 | 2010-08-08 00:00:06
09 | 2010-08-07 00:00:06 | 2010-08-08 00:00:06
(8 filas)

I need some like this:

remota_id | min | max | min_caudal | max_caudal
---+-+-++ 



01 | 2010-08-07 00:00:30 | 2010-08-08 00:00:30 | 976.04 | 904.21
02 | 2010-08-07 00:00:06 | 2010-08-08 00:00:06 | 829.71 | 764.42

Where min_caudal is the value of caudal in hora = min() and max_caudal
is the same for hora=max()


this can help?
select remota_id,min(hora),max(hora), (SELECT caudal FROM historicos 
WHERE remota_id=ooo.remota_id AND hora=min(ooo.hora)) as min_caudal, 
(SELECT caudal FROM historicos WHERE remota_id=ooo.remota_id AND 
hora=max(ooo.hora)) as max_caudal from historicos AS ooo group by 
remota_id order by remota_id;


Edoardo


Thanks, Edoardo!

Works perfectly. I've added the date (hora) range filter and the result is just 
what I expected:


select remota_id, min(hora), max(hora), (SELECT caudal FROM historicos 
WHERE remota_id=ooo.remota_id AND hora=min(ooo.hora)) as min_caudal, 
(SELECT caudal FROM historicos WHERE remota_id=ooo.remota_id AND 
hora=max(ooo.hora)) as max_caudal from historicos AS ooo where hora > 
'2010-08-07' and hora < '2010-08-08 00:03' group by remota_id order by 
remota_id;


 remota_id |         min         |         max         | min_caudal | max_caudal
-----------+---------------------+---------------------+------------+------------
 01        | 2010-08-07 00:00:30 | 2010-08-08 00:00:30 |   2785.727 |   2766.883
 02        | 2010-08-07 00:00:06 | 2010-08-08 00:00:06 |   1820.309 |   1860.785
 03        | 2010-08-07 00:00:03 | 2010-08-08 00:00:02 |   2296.633 |   2280.154
 04        | 2010-08-07 00:00:06 | 2010-08-08 00:00:06 |   1946.548 |   1898.955
 05        | 2010-08-07 00:00:01 | 2010-08-08 00:00:02 |   664.5776 |   984.9826
 06        | 2010-08-07 00:00:02 | 2010-08-08 00:00:02 |    1103.71 |    1185.17
 07        | 2010-08-07 00:00:06 | 2010-08-08 00:00:06 |   452.0654 |   410.4259
 09        | 2010-08-07 00:00:06 | 2010-08-08 00:00:06 |   766.8262 |   774.8085
(8 filas)


Best



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help with pgAndroid Contest

2010-07-02 Thread Josh Kupershmidt
I tried it on my T-Mobile G1, stock 1.6 Android firmware. It downloaded and 
installed fine, but right after installation, I clicked to open and got an 
error box saying ... has encountered an unexpected error and closed.

But pgQuilt did install successfully, and when I go to my menu of applications 
and open from there it seems to run fine. I tried connecting to an 8.4.4 
server, and got the expected quilt of database size breakdown.

Gripes/suggestions:
 * my Menu button doesn't do anything from within the application. I have to 
actually tap on the hammer-and-wrench icon in the top right to bring up the 
Demo | Connection String options.
 * I'm not crazy about editing the JDBC connection string, especially on a puny 
phone keyboard. I think it'd be easier to enter hostname, database, user, and 
password in separate fields.

Interesting little idea, though, and seems reasonably well put-together.

Josh

On Jul 1, 2010, at 7:39 PM, Stephen Denne wrote:

 Not having an Android phone, I've got no idea whether what I've
 produced for my entry to this contest works...
 
 I'd appreciate it if anybody with an Android phone could try out my
 pgQuilt application, and let me know whether it even runs!
 
 Screenshot: http://www.datacute.co.nz/pgQuilt.png
 Application: http://www.datacute.co.nz/pgQuilt.apk
 
 Cheers,
 Stephen Denne.
 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help with pgAndroid Contest

2010-07-02 Thread Stephen Denne
Many thanks for the feedback. I'll look into the problem after
installing. My first suspicion is that it is due to not having had
preferences set.

I agree about jdbc settings, I was rushing to get it out for some
testing. Likewise the menu, though that is also due to developing
using the emulator, where a mouse acts like a very small, precise
finger.

I hope to add options (after the contest) to:
1 - change what gets shown for the area weighting, and/or categories,
(rows, correlation, usage stats, bloat)
2 - filter the schema/table hierarchy,
3 - have a user, connection, usage hierarchy,
4 - weightings or categories based on delta between refresh

Thanks again,
Stephen

On 7/3/10, Josh Kupershmidt schmi...@gmail.com wrote:
 I tried it on my T-Mobile G1, stock 1.6 Android firmware. It downloaded and
 installed fine, but right after installation, I clicked to open and got an
 error box saying ... has encountered an unexpected error and closed.

 But pgQuilt did install successfully, and when I go to my menu of
 applications and open from there it seems to run fine. I tried connecting to
 an 8.4.4 server, and got the expected quilt of database size breakdown.

 Gripes/suggestions:
  * my Menu button doesn't do anything from within the application. I have
 to actually tap on the hammer-and-wrench icon in the top right to bring up
 the Demo | Connection String options.
  * I'm not crazy about editing the JDBC connection string, especially on a
 puny phone keyboard. I think it'd be easier to enter hostname, database,
 user, and password in separate fields.

 Interesting little idea, though, and seems reasonably well put-together.

 Josh

 On Jul 1, 2010, at 7:39 PM, Stephen Denne wrote:

 Not having an Android phone, I've got no idea whether what I've
 produced for my entry to this contest works...

 I'd appreciate it if anybody with an Android phone could try out my
 pgQuilt application, and let me know whether it even runs!

 Screenshot: http://www.datacute.co.nz/pgQuilt.png
 Application: http://www.datacute.co.nz/pgQuilt.apk

 Cheers,
 Stephen Denne.




-- 
Sent from my mobile device



Re: [GENERAL] Help on update.

2010-05-26 Thread Kenichiro Tanaka

Hello.

First, we cannot execute in PostgreSQL the SQL which Paulo indicated.

See this manual.
==

http://www.postgresql.org/docs/8.4/interactive/sql-update.html


Compatibility

This command conforms to the SQL standard, except that the FROM
and RETURNING clauses are PostgreSQL extensions.

According to the standard, the column-list syntax should allow a list
 of columns to be assigned from a single row-valued expression, such
as a sub-select:

UPDATE accounts SET (contact_last_name, contact_first_name) =
(SELECT last_name, first_name FROM salesmen
 WHERE salesmen.id = accounts.sales_id);

This is not currently implemented — the source must be a list of
independent expressions.

Some other database systems offer a FROM option in which the target
table is supposed to be listed again within FROM. That is not how
PostgreSQL interprets FROM. Be careful when porting applications that
 use this extension.
=


So I tried the following SQL, but I got an error:

update test t1 set t1.j= (COALESCE(Lag(t2.j) over(order by 
t2.j),null,0)  ) + t2.j   from test t2;

ERROR:  cannot use window function in UPDATE at character 36


If I use a temporary table, I can.
But I feel this way is not simple.

=
ex) PostgreSQL is 8.4.4

--drop table test;
create table test(i int , j int);
insert into test values(1,2);
insert into test values(1,3);
insert into test values(1,4);

begin;
create temporary table test_temp (i int , j int);
insert into test_temp
SELECT i,COALESCE(Lag(j) over(order by j),null,0)  + j   from test;
truncate table test;
insert into test select * from test_temp;
drop table test_temp;
commit;
=
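Alternatively, the window function can be moved into a subquery in the UPDATE's FROM clause, where it is allowed; only plain expressions then appear in SET. An untested sketch, assuming (i, j) uniquely identifies a row (the table has no key):

```sql
-- Sketch only: assumes (i, j) uniquely identifies each row.
UPDATE test t
   SET j = s.j + s.lag_j
  FROM (SELECT i, j,
               COALESCE(lag(j) OVER (ORDER BY j), 0) AS lag_j
          FROM test) s
 WHERE t.i = s.i AND t.j = s.j;
```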


Anyone have a good idea?


(2010/05/26 22:46), paulo matadr wrote:


create table test(i number , j number);
insert into test values(1,2)
insert into test values(1,3)
insert into test values(1,4)
select * from test;
  I  J
-- --
  1  2
  1  3
  1  4

My intentions:
after update
select * from test;
  I  J
-- --
  1  2
  1  3+2

 1  4+3
after
select * from test;
  I  J
-- --
  1  2
  1  5+2
  1  7+5

In Oracle, I use this:
update test x
set x.j = x.j + (select lag_j
from (select i, j, nvl(lag(j) over (order by i,j) ,0) as lag_j
from test) y
where x.i = y.i and x.j = y.j)

how can translate this for work in postgres?

Thanks,

Paulo





--

Kenichiro Tanaka
K.K.Ashisuto
http://www.ashisuto.co.jp/english/index.html





Re: [GENERAL] Help on update.

2010-05-26 Thread Jayadevan M
Hello,
While I can't answer your question, I am afraid I did not understand your 
intentions either. The update you mentioned may not return consistent 
results. You are not ordering the data in any particular manner, and there 
is no unique column either, so the result of the update will depend on the 
order in which the data is fetched, which need not be consistent.
If data is fetched as 
 I  J
-- --
 1  2
 1  3
 1  4
update will result in one set of data.
1   2
1   5
1   7
If the query returns
 I  J
-- --
 1  4
 1  2
 1  3

the result of the update will be different?
1   4
1   6
1   5
Regards,
Jayadevan














Re: [GENERAL] Help reqd on azimuth finction

2010-05-22 Thread Merlin Moncure
On Sat, May 22, 2010 at 8:02 AM, Deepa Thulasidasan
deepatulsida...@yahoo.co.in wrote:
 Dear All,

 Can some one help me understand st_azimuth() available in postgis.

did you read the documentation?
(http://postgis.refractions.net/documentation/manual-svn/ST_Azimuth.html).
 the function calculates the angle between two points on a plane.

merlin



Re: [GENERAL] Help writing a query to predict auto analyze

2010-05-19 Thread Alvaro Herrera
Excerpts from Gordon Shannon's message of mié may 19 11:49:45 -0400 2010:

 at: last analysis tuples = pg_class.reltuples 
 
 I'm the least confident about the last one -- tuples as of last analyze.
 Can anyone confirm or correct these?

In 8.4 it's the number of dead + live tuples that there were at the previous
analyze.  See pgstat_recv_analyze in src/backend/postmaster/pgstat.c.
(In 9.0 it's been reworked a bit.)
-- 



Re: [GENERAL] Help writing a query to predict auto analyze

2010-05-19 Thread Gordon Shannon



alvherre wrote:
 
 Excerpts from Gordon Shannon's message of mié may 19 11:49:45 -0400 2010:
 
 at: last analysis tuples = pg_class.reltuples 
 
 I'm the least confident about the last one -- tuples as of last analyze.
 Can anyone confirm or correct these?
 
 In 8.4 it's the number of dead + live tuples that there were at the previous
 analyze.  See pgstat_recv_analyze in src/backend/postmaster/pgstat.c.
 (In 9.0 it's been reworked a bit.)
 
 

I'm sorry, I'm not following you.  Are you saying that last analysis
tuples is the number of dead + live tuples from the previous analyze?  If so,
that would really confuse me, because X would always be 0:

X = lt + dt - at
X = pg_stat_user_tables.n_live_tup + n_dead_tup - (n_live_tup + n_dead_tup)
X = 0

or is there something else wrong with the formula?

--gordon



Re: [GENERAL] Help writing a query to predict auto analyze

2010-05-19 Thread Alvaro Herrera
Excerpts from Gordon Shannon's message of mié may 19 18:02:51 -0400 2010:

 I'm sorry, I'm not following you.  Are you saying that last analysis
 tuples is number of dead + live tuples from the previous anlyze?  If so,
 that would really confuse me because X would always be 0:
 
 X = lt + dt - at
 X = pg_stat_user_tables.n_live_tup + n_dead_tup - (n_live_tup + n_dead_tup)
 X = 0
 
 or is there something else wrong with the formula?

Hmm?  n_live_tup and n_dead_tup correspond to the current numbers,
whereas last analysis tuples are the values from back when the
previous analyze ran.  These counters keep moving with every update,
delete, and insert; they are not static.

If there are no updates/inserts/deletes then indeed the difference is 0,
which is why we choose not to analyze.
-- 



Re: [GENERAL] Help writing a query to predict auto analyze

2010-05-19 Thread Gordon Shannon



alvherre wrote:
 
 n_live_tup and n_dead_tup correspond to the current numbers,
 whereas last analysis tuples are the values from back when the
 previous analyze ran.  These counters keep moving per updates, deletes,
 inserts, they are not static.
 
 

OK.  Do you know how can I get the values from back when the previous
analyze ran?


Re: [GENERAL] Help writing a query to predict auto analyze

2010-05-19 Thread Alvaro Herrera
Excerpts from Gordon Shannon's message of mié may 19 23:32:07 -0400 2010:
 
 alvherre wrote:
  
  n_live_tup and n_dead_tup corresponds to the current numbers,
  whereas last analysis tuples are the values from back when the
  previous analyze ran.  These counters keep moving per updates, deletes,
  inserts, they are not static.
 
 OK.  Do you know how can I get the values from back when the previous
 analyze ran?

I don't think we expose those values to SQL.  I guess you could create a
C function to get it, modelled after pg_stat_get_live_tuples and friends
(see src/backend/utils/adt/pgstatfuncs.c).
-- 



Re: [GENERAL] Help with tracking!

2010-05-03 Thread Shoaib Mir
2010/4/19 Đỗ Ngọc Trí Cường semin...@gmail.com

 Dear all,

 I've a problem but I search all the help file and can't find the solution.

 I want to track all action of a specify role on all or one schema in
 database.

 Can you help me?

 Thanks a lot and Best Regard,



Set up your log_line_prefix and log_statement settings properly in
postgresql.conf so that you get the user info and other appropriate details.
After that you can tail the DB server log file and grep for that specific
user to see what sort of SQL is being executed.
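For example, a minimal sketch of the relevant settings (the exact prefix escapes are a matter of taste; %t, %p, %u, and %d are timestamp, process id, user, and database):

```ini
# postgresql.conf -- sketch only, adjust to taste
log_line_prefix = '%t [%p] %u@%d '
log_statement = 'all'
```

After reloading, grepping the server log for 'suspect_user@' (the role name here is illustrative) pulls out just that user's statements.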

-- 
Shoaib Mir
http://shoaibmir.wordpress.com/


Re: [GENERAL] Help me stop postgres from crashing.

2010-04-25 Thread Samuel
On Apr 24, 4:13 pm, t...@sss.pgh.pa.us (Tom Lane) wrote:
 Sam s...@palo-verde.us writes:
  A particular web application I am working has a staging version
  running one a vps, and a production version running on another vps.
  They both get about the same usage, but the production version keeps
  crashing and has to be re-started daily for the last couple days.  The
  log file at the time of crash looks like this:
  LOG:  could not accept new connection: Cannot allocate memory
  LOG:  select() failed in postmaster: Cannot allocate memory

 This looks like a system-level memory shortage.  You might find useful
 information in the kernel log.  I'd suggest enabling timestamps in the
 PG log (see log_line_prefix) so that you can correlate events in the
 two log files.

                         regards, tom lane


Thanks, for the responses.

I've enabled the timestamps on the log lines.



Re: [GENERAL] Help me stop postgres from crashing.

2010-04-25 Thread Samuel
On Apr 24, 3:53 pm, thombr...@gmail.com (Thom Brown) wrote:
 On 24 April 2010 18:48, Sam s...@palo-verde.us wrote:





  Hi,

  I am a web developer, I've been using postgesql for a few years but
  administratively I am a novice.

  A particular web application I am working has a staging version
  running one a vps, and a production version running on another vps.
  They both get about the same usage, but the production version keeps
  crashing and has to be re-started daily for the last couple days.  The
  log file at the time of crash looks like this:

  LOG:  could not accept new connection: Cannot allocate memory
  LOG:  select() failed in postmaster: Cannot allocate memory
  FATAL:  semctl(2457615, 0, SETVAL, 0) failed: Invalid argument
  LOG:  logger shutting down
  LOG:  database system was interrupted at 2010-04-24 09:33:39 PDT

  It ran out of memory.

  I am looking for a way to track down what is actually causing the
  memory shortage and how to prevent it or increase the memory
  available.

  The vps in question is a media temple DV running CentOS and postgres
  8.1.18

  Could you provide some more information?  What do you get if you run

 sysctl -a | grep kernel.shm and sysctl -a | grep sem? And what are you
 developing in which connects to the database?  Are you using persistent
 connections?  And how many connections to you estimate are in use?  What
 have you got max_connections and shared_buffers in your postgresql.conf
 file?  And how much memory does your VPS have?

 Thom

This application is php5/Zend Framework and using Doctrine ORM which
manages the database connections, but they aren't persistent.

max_connections is 100 and shared_buffers is 1000

What is the best way to profile the exact number of connections (for
future reference)?  Right now, there is almost no site usage, the site
is not launched yet.  The staging version of the site has been running
for months without issues.  There is a cron that runs every three
minutes and checks a users account on another web service and tracks
that activity.



Re: [GENERAL] Help me stop postgres from crashing.

2010-04-25 Thread Samuel
On Apr 24, 3:53 pm, thombr...@gmail.com (Thom Brown) wrote:
 On 24 April 2010 18:48, Sam s...@palo-verde.us wrote:





  Hi,

  I am a web developer, I've been using postgesql for a few years but
  administratively I am a novice.

  A particular web application I am working has a staging version
  running one a vps, and a production version running on another vps.
  They both get about the same usage, but the production version keeps
  crashing and has to be re-started daily for the last couple days.  The
  log file at the time of crash looks like this:

  LOG:  could not accept new connection: Cannot allocate memory
  LOG:  select() failed in postmaster: Cannot allocate memory
  FATAL:  semctl(2457615, 0, SETVAL, 0) failed: Invalid argument
  LOG:  logger shutting down
  LOG:  database system was interrupted at 2010-04-24 09:33:39 PDT

  It ran out of memory.

  I am looking for a way to track down what is actually causing the
  memory shortage and how to prevent it or increase the memory
  available.

  The vps in question is a media temple DV running CentOS and postgres
  8.1.18

  Could you provide some more information?  What do you get if you run

 sysctl -a | grep kernel.shm and sysctl -a | grep sem? And what are you
 developing in which connects to the database?  Are you using persistent
 connections?  And how many connections to you estimate are in use?  What
 have you got max_connections and shared_buffers in your postgresql.conf
 file?  And how much memory does your VPS have?

 Thom

sysctl -a | grep kernel.shm
error: Operation not permitted reading key kernel.cap-bound
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.shmmax = 33554432

sysctl -a | grep sem
error: Operation not permitted reading key kernel.cap-bound
kernel.sem = 250     32000   32      128



Re: [GENERAL] Help me stop postgres from crashing.

2010-04-24 Thread Thom Brown
On 24 April 2010 18:48, Sam s...@palo-verde.us wrote:

 Hi,

 I am a web developer; I've been using PostgreSQL for a few years, but
 administratively I am a novice.

 A particular web application I am working on has a staging version
 running on one vps, and a production version running on another vps.
 They both get about the same usage, but the production version keeps
 crashing and has to be re-started daily for the last couple days.  The
 log file at the time of crash looks like this:

 LOG:  could not accept new connection: Cannot allocate memory
 LOG:  select() failed in postmaster: Cannot allocate memory
 FATAL:  semctl(2457615, 0, SETVAL, 0) failed: Invalid argument
 LOG:  logger shutting down
 LOG:  database system was interrupted at 2010-04-24 09:33:39 PDT

 It ran out of memory.

 I am looking for a way to track down what is actually causing the
 memory shortage and how to prevent it or increase the memory
 available.

 The vps in question is a media temple DV running CentOS and postgres
 8.1.18


 Could you provide some more information?  What do you get if you run
sysctl -a | grep kernel.shm and sysctl -a | grep sem? And what are you
developing that connects to the database?  Are you using persistent
connections?  And how many connections do you estimate are in use?  What
have you got max_connections and shared_buffers set to in your
postgresql.conf file?  And how much memory does your VPS have?

Thom


Re: [GENERAL] Help me stop postgres from crashing.

2010-04-24 Thread Tom Lane
Sam s...@palo-verde.us writes:
 A particular web application I am working on has a staging version
 running on one vps, and a production version running on another vps.
 They both get about the same usage, but the production version keeps
 crashing and has to be re-started daily for the last couple days.  The
 log file at the time of crash looks like this:

 LOG:  could not accept new connection: Cannot allocate memory
 LOG:  select() failed in postmaster: Cannot allocate memory

This looks like a system-level memory shortage.  You might find useful
information in the kernel log.  I'd suggest enabling timestamps in the
PG log (see log_line_prefix) so that you can correlate events in the
two log files.
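A one-line sketch of that suggestion (%t prefixes each log line with a timestamp):

```ini
# postgresql.conf
log_line_prefix = '%t '
```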

regards, tom lane



Re: [GENERAL] Help with tracking!

2010-04-20 Thread Scott Marlowe
2010/4/18 Craig Ringer cr...@postnewspapers.com.au:
 Đỗ Ngọc Trí Cường wrote:
 Dear all,

 I've a problem but I search all the help file and can't find the solution.

 I want to track all action of a specify role on all or one schema in
 database.

 Can you help me?

 You can use statement-level logging, though there are no facilities in
 statement-level logging to restrict what is logged to only one role's
 activity.

Wouldn't this work:

alter user smarlowe set log_statement = 'all';



Re: [GENERAL] Help with tracking!

2010-04-20 Thread Devrim GÜNDÜZ
On Tue, 2010-04-20 at 06:03 -0600, Scott Marlowe wrote:
 Wouldn't this work:
 
 alter user smarlowe set log_statement = 'all'; 

IIRC it only works inside the given session (so it needs to be run each
time a query will be executed)
-- 
Devrim GÜNDÜZ
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
PostgreSQL RPM Repository: http://yum.pgrpms.org
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz




Re: [GENERAL] Help with tracking!

2010-04-20 Thread Scott Marlowe
On Tue, Apr 20, 2010 at 6:11 AM, Devrim GÜNDÜZ dev...@gunduz.org wrote:
 On Tue, 2010-04-20 at 06:03 -0600, Scott Marlowe wrote:
 Wouldn't this work:

 alter user smarlowe set log_statement = 'all';

 IIRC it only works inside the given session (so it needs to be run each
 time a query will be executed)

Not true:

psql
show log_statement;
 log_statement
---
 none
alter user smarlowe set log_statement = 'all';
show log_statement;
 log_statement
---
 all
\q
psql
show log_statement;
 log_statement
---
 all



Re: [GENERAL] Help with tracking!

2010-04-20 Thread Scott Marlowe
On Tue, Apr 20, 2010 at 8:31 AM, Scott Marlowe scott.marl...@gmail.com wrote:
 On Tue, Apr 20, 2010 at 6:11 AM, Devrim GÜNDÜZ dev...@gunduz.org wrote:
 On Tue, 2010-04-20 at 06:03 -0600, Scott Marlowe wrote:
 Wouldn't this work:

 alter user smarlowe set log_statement = 'all';

 IIRC it only works inside the given session (so it needs to be run each
 time a query will be executed)

 Not true:

 psql
 show log_statement;
  log_statement
 ---
  none
 alter user smarlowe set log_statement = 'all';
 show log_statement;
  log_statement
 ---
  all
 \q
 psql
 show log_statement;
  log_statement
 ---
  all

Note however that other sessions won't see the change.  Only
connections that come after the change will see it.



Re: [GENERAL] Help with tracking!

2010-04-20 Thread Craig Ringer

On 20/04/2010 10:33 PM, Scott Marlowe wrote:


psql
show log_statement;
  log_statement
---
  all


Note however that other sessions won't see the change.  Only
connections that come after the change will see it.


Also, as the OP wants to use it for auditing, it's worth noting that 
it's trivial for the audited user to simply disable log_statement in a 
session with a SET command.


--
Craig Ringer



Re: [GENERAL] Help with tracking!

2010-04-20 Thread Steve Crawford

Craig Ringer wrote:

On 20/04/2010 10:33 PM, Scott Marlowe wrote:


psql
show log_statement;
  log_statement
---
  all


Note however that other sessions won't see the change.  Only
connections that come after the change will see it.


Also, as the OP wants to use it for auditing, it's worth noting that 
it's trivial for the audited user to simply disable log_statement in a 
session with a SET command.


From the docs for log_statement: Only superusers can change this 
setting. 


Cheers,
Steve




Re: [GENERAL] Help with tracking!

2010-04-20 Thread Craig Ringer

On 21/04/2010 8:10 AM, Steve Crawford wrote:


 From the docs for log_statement: Only superusers can change this
setting. 


Argh. Thank you.

--
Craig Ringer



Re: [GENERAL] Help with tracking!

2010-04-19 Thread Craig Ringer
Đỗ Ngọc Trí Cường wrote:
 Dear all,
 
 I've a problem but I search all the help file and can't find the solution.
 
 I want to track all action of a specify role on all or one schema in
 database.
 
 Can you help me?

You can use statement-level logging, though there are no facilities in
statement-level logging to restrict what is logged to only one role's
activity.

You can use the usual audit triggers on database tables, which is what I
would recommend. Audit triggers in PostgreSQL cannot track reads
(SELECTs), only INSERT/UPDATE/DELETE and in 8.4 also TRUNCATE. They
cannot track ALTER/RENAME/DROP table, changes to sequences, etc. It is
trivial to write an audit trigger that only records anything when a user
is a member of a particular role.
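A minimal sketch of such a role-filtered audit trigger (all names here are illustrative and the row-to-text cast is assumed; adapt to the real schema):

```sql
-- Illustrative sketch only: table, role, and trigger names are invented
-- for the example; they do not come from the thread.
CREATE TABLE audit_log (
    logged_at  timestamptz NOT NULL DEFAULT now(),
    logged_by  text        NOT NULL DEFAULT current_user,
    table_name text        NOT NULL,
    operation  text        NOT NULL,
    row_data   text
);

CREATE OR REPLACE FUNCTION audit_if_member() RETURNS trigger AS $$
BEGIN
    -- Record the change only when the current user is a member of the
    -- audited role.
    IF pg_has_role(current_user, 'audited_role', 'member') THEN
        INSERT INTO audit_log (table_name, operation, row_data)
        VALUES (TG_TABLE_NAME, TG_OP,
                CASE WHEN TG_OP = 'DELETE'
                     THEN OLD::text ELSE NEW::text END);
    END IF;
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER audit_some_table
    AFTER INSERT OR UPDATE OR DELETE ON some_table
    FOR EACH ROW EXECUTE PROCEDURE audit_if_member();
```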

-- 
Craig Ringer

Tech-related writing: http://soapyfrogs.blogspot.com/



Re: [GENERAL] Help with tracking!

2010-04-19 Thread Yeb Havinga

Craig Ringer wrote:

Đỗ Ngọc Trí Cường wrote:
  

Dear all,

I've a problem but I search all the help file and can't find the solution.

I want to track all action of a specify role on all or one schema in
database.

Can you help me?



You can use statement-level logging, though there are no facilities in
statement-level logging to restrict what is logged to only one role's
activity.

You can use the usual audit triggers on database tables, which is what I
would recommend. Audit triggers in PostgreSQL cannot track reads
(SELECTs), only INSERT/UPDATE/DELETE and in 8.4 also TRUNCATE. They
cannot track ALTER/RENAME/DROP table, changes to sequences, etc. It is
trivial to write an audit trigger that only records anything when a user
is a member of a particular role.
  
Yes, tracking SELECTs would have to be done with a log file, since a 
DO INSTEAD rule on SELECT has to be another SELECT command and cannot, 
e.g., be an INSERT followed by a SELECT.


Something similar is mentioned in this thread: 
http://archives.postgresql.org/pgsql-performance/2008-07/msg00144.php


regards,
Yeb Havinga




Re: [GENERAL] Help with tracking!

2010-04-19 Thread Craig Ringer
Yeb Havinga wrote:
 Craig Ringer wrote:
 Đỗ Ngọc Trí Cường wrote:
  
 Dear all,

 I've a problem but I search all the help file and can't find the
 solution.

 I want to track all action of a specify role on all or one schema in
 database.

 Can you help me?
 

 You can use statement-level logging, though there are no facilities in
 statement-level logging to restrict what is logged to only one role's
 activity.

 You can use the usual audit triggers on database tables, which is what I
 would recommend. Audit triggers in PostgreSQL cannot track reads
 (SELECTs), only INSERT/UPDATE/DELETE and in 8.4 also TRUNCATE. They
 cannot track ALTER/RENAME/DROP table, changes to sequences, etc. It is
 trivial to write an audit trigger that only records anything when a user
 is a member of a particular role.
   
 Yes tracking SELECTs needs would have to go with a log file, since also
 a DO INSTEAD rule on SELECT has to be another SELECT command, and cannot
 e.g. be a INSERT followed by a SELECT.

OK, then a trigger-based audit setup is not going to work for you
because Pg doesn't support triggers on SELECT. I guess you're stuck with
statement logging and a filter on that log unless there's something else
I don't know of.

One other question, though: Does your audit just have to track the SQL
executed? Or the *data* accessed? The same SQL statement has different
results at different times. If you need to track what someone has
accessed, you're not likely to be able to do that with PostgreSQL
without some heavy use of stored procedures to wrap basically every
query. Ick.

If all you need is to log the SQL executed, then stick with log_statement.

--
Craig Ringer



Re: [GENERAL] help

2010-03-29 Thread Raymond O'Donnell
On 29/03/2010 15:43, 赤松 建司 wrote:
 help

Surely. What with? :-)

Ray.

-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie



Re: [GENERAL] Help me with this multi-table query

2010-03-24 Thread Dean Rasheed
On 24 March 2010 05:17, Nilesh Govindarajan li...@itech7.com wrote:
 On 03/24/2010 12:45 AM, Dean Rasheed wrote:

 On 23 March 2010 11:07, Nilesh Govindarajanli...@itech7.com  wrote:

 Hi,

 I want to find out the userid, nodecount and comment count of the userid.

 I'm going wrong somewhere.

 Check my SQL Code-

 select u.uid, count(n.nid) nc , count(c.cid) cc from users u left join
 node
 n on ( n.uid = u.uid ) left join comments c on ( c.uid = u.uid ) group by
 u.uid having u.uid  0 order by u.uid;


 I think you want select u.uid, count(distinct n.nid) nc ,
 count(distinct c.cid) cc from ...
 otherwise you're counting each node/comment multiple times as the rows in
 the
 join multiply up (note 85 x 174 = 14790).

 For big tables, this could start to become inefficient, and you might
 be better off
 doing your queries 1 and 2 above as sub-queries and joining them in an
 outer query.

 Regards,
 Dean

 Thanks a lot !! It worked.
 How to do it using subqueries ?


Well the problem with the original joined query is that when it is
executed there will be an intermediate step where it has to consider
many thousands of rows (one for each combination of a node and comment
for each user). You can see the number of rows processed from your
original query by adding up the counts (about 17000). This problem
would be compounded if you added more table joins and counts to the
query.

One way to re-write it using sub-queries would be something like

select v1.uid, v1.nc, v2.cc from
(select u.uid, count(n.nid) nc from users u left join node n on (
n.uid = u.uid ) group by u.uid) as v1,
(select u.uid, count(c.cid) cc from users u left join comments c on (
c.uid = u.uid ) group by u.uid) as v2
where v1.uid=v2.uid
order by v1.uid

This is the equivalent of defining a couple of views for the counts
and then selecting from those views.

Another possibility would be something like

select
  u.uid,
  (select count(n.nid) from node n where n.uid = u.uid) as nc,
  (select count(c.cid) from comments c where c.uid = u.uid) as cc
from users u order by u.uid

There are probably other ways too. Which is best probably depends on
the size and distribution of your data, and any indexes you have. You
might benefit from indexes on the uid columns of node and comments, if
you don't already have them. Try timing them in psql with \timing, and
use EXPLAIN ANALYSE to see how each is executed.
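For instance, a quick way to compare the variants (index names here are illustrative):

```sql
-- Illustrative: create the suggested indexes, then time each variant.
CREATE INDEX node_uid_idx ON node (uid);
CREATE INDEX comments_uid_idx ON comments (uid);

EXPLAIN ANALYZE
SELECT u.uid,
       (SELECT count(*) FROM node n     WHERE n.uid = u.uid) AS nc,
       (SELECT count(*) FROM comments c WHERE c.uid = u.uid) AS cc
FROM users u
ORDER BY u.uid;
```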

Regards,
Dean



Re: [GENERAL] Help me with this multi-table query

2010-03-24 Thread Nilesh Govindarajan

On 03/24/2010 01:14 PM, Dean Rasheed wrote:

On 24 March 2010 05:17, Nilesh Govindarajan li...@itech7.com  wrote:

On 03/24/2010 12:45 AM, Dean Rasheed wrote:


On 23 March 2010 11:07, Nilesh Govindarajan li...@itech7.com wrote:


Hi,

I want to find out the userid, nodecount and comment count of the userid.

I'm going wrong somewhere.

Check my SQL Code-

select u.uid, count(n.nid) nc , count(c.cid) cc from users u left join
node
n on ( n.uid = u.uid ) left join comments c on ( c.uid = u.uid ) group by
u.uid having u.uid <> 0 order by u.uid;



I think you want select u.uid, count(distinct n.nid) nc ,
count(distinct c.cid) cc from ...
otherwise you're counting each node/comment multiple times as the rows in
the
join multiply up (note 85 x 174 = 14790).

For big tables, this could start to become inefficient, and you might
be better off
doing your queries 1 and 2 above as sub-queries and joining them in an
outer query.

Regards,
Dean


Thanks a lot !! It worked.
How to do it using subqueries ?



Well the problem with the original joined query is that when it is
executed there will be an intermediate step where it has to consider
many thousands of rows (one for each combination of a node and comment
for each user). You can see the number of rows processed from your
original query by adding up the counts (about 17000). This problem
would be compounded if you added more table joins and counts to the
query.

One way to re-write it using sub-queries would be something like

select v1.uid, v1.nc, v2.cc from
(select u.uid, count(n.nid) nc from users u left join node n on (
n.uid = u.uid ) group by u.uid) as v1,
(select u.uid, count(c.nid) cc from users u left join comments c on (
c.uid = u.uid ) group by u.uid) as v2
where v1.uid=v2.uid
order by v1.uid

This is the equivalent of defining a couple of views for the counts
and then selecting from those views.

Another possibility would be something like

select
   u.uid,
   (select count(n.nid) from node n where n.uid = u.uid) as nc,
   (select count(c.nid) from comments c where c.uid = u.uid) as cc
from users u order by u.uid

There are probably other ways too. Which is best probably depends on
the size and distribution of your data, and any indexes you have. You
might benefit from indexes on the uid columns of node and comments, if
you don't already have them. Try timing them in psql with \timing, and
use EXPLAIN ANALYSE to see how each is executed.
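
For instance, the indexes and a plan check could look like this (the
index names here are made up; the columns are the ones used in the
joins above):

```sql
-- Hypothetical index names; adjust to your own naming convention.
CREATE INDEX node_uid_idx ON node (uid);
CREATE INDEX comments_uid_idx ON comments (uid);

-- Then compare plans, e.g. for the correlated-subquery variant:
EXPLAIN ANALYSE
SELECT u.uid,
       (SELECT count(n.nid) FROM node n WHERE n.uid = u.uid) AS nc,
       (SELECT count(c.nid) FROM comments c WHERE c.uid = u.uid) AS cc
FROM users u ORDER BY u.uid;
```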

Regards,
Dean


The second method is the best. It takes 3.311 ms to execute. The first 
method suggested by you takes 5.7 ms, and the worst is my method which 
takes 60ms (boo).


Thanks a lot :) :) :)

--
Nilesh Govindarajan
Site & Server Administrator
www.itech7.com

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help me with this multi-table query

2010-03-23 Thread Dean Rasheed
On 23 March 2010 11:07, Nilesh Govindarajan li...@itech7.com wrote:
 Hi,

 I want to find out the userid, nodecount and comment count of the userid.

 I'm going wrong somewhere.

 Check my SQL Code-

 select u.uid, count(n.nid) nc , count(c.cid) cc from users u left join node
 n on ( n.uid = u.uid ) left join comments c on ( c.uid = u.uid ) group by
 u.uid having u.uid <> 0 order by u.uid;


I think you want select u.uid, count(distinct n.nid) nc ,
count(distinct c.cid) cc from ...
otherwise you're counting each node/comment multiple times as the rows in the
join multiply up (note 85 x 174 = 14790).
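
Spelled out in full against the original query, that would be something
like:

```sql
select u.uid, count(distinct n.nid) nc, count(distinct c.cid) cc
from users u
left join node n on ( n.uid = u.uid )
left join comments c on ( c.uid = u.uid )
group by u.uid
having u.uid <> 0
order by u.uid;
```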

For big tables, this could start to become inefficient, and you might
be better off
doing your queries 1 and 2 above as sub-queries and joining them in an
outer query.

Regards,
Dean

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help me with this multi-table query

2010-03-23 Thread Nilesh Govindarajan

On 03/24/2010 12:45 AM, Dean Rasheed wrote:

On 23 March 2010 11:07, Nilesh Govindarajan li...@itech7.com  wrote:

Hi,

I want to find out the userid, nodecount and comment count of the userid.

I'm going wrong somewhere.

Check my SQL Code-

select u.uid, count(n.nid) nc , count(c.cid) cc from users u left join node
n on ( n.uid = u.uid ) left join comments c on ( c.uid = u.uid ) group by
u.uid having u.uid <> 0 order by u.uid;



I think you want select u.uid, count(distinct n.nid) nc ,
count(distinct c.cid) cc from ...
otherwise you're counting each node/comment multiple times as the rows in the
join multiply up (note 85 x 174 = 14790).

For big tables, this could start to become inefficient, and you might
be better off
doing your queries 1 and 2 above as sub-queries and joining them in an
outer query.

Regards,
Dean


Thanks a lot !! It worked.
How to do it using subqueries ?

--
Nilesh Govindarajan
Site & Server Administrator
www.itech7.com

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help me with this tricky join

2010-03-21 Thread A. Kretschmer
In response to Jay :
 Thanks!
 But, since the master can contain many users (user2, user3, and so on)
 I suppose this won't be a proper solution?
 Sorry if I was a bit unclear in my description.
 
 I.e., the master is of the form:
 
 user_id date
 User1 20010101
 User1 20000101
 User1 19990101
 User1 19970101
 User2 ...
 ...

That's not the problem ...
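
With several users the window function simply needs to be partitioned
by user, e.g. (a sketch using the column names from the thread):

```sql
SELECT user_id, date,
       lag(date) OVER (PARTITION BY user_id ORDER BY date) AS prev_date
FROM master;
```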

 
 Btw, I'm using PostgreSQL version 8.2 and I cannot use subqueries due to the

but this is.
lag() over () and similar windowing functions are new since 8.4.
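
On 8.2, without window functions, the previous date can still be
computed with a correlated subquery, if the GP software allows it
(a sketch using the column names from the thread):

```sql
SELECT s.user_id, s.date,
       (SELECT max(m.date)
          FROM master m
         WHERE m.user_id = s.user_id
           AND m.date < s.date) AS prev_date
FROM selection s;
```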



Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help me with this tricky join

2010-03-20 Thread Andreas Kretschmer
Jay josip.2...@gmail.com wrote:

 Hi,
 
 I'm somewhat new to SQL so I need a bit of help with this problem. So
 I have 2 tables: selection and master, both have two columns in
 each: user_id and date.
 
 The selection contains one row for each user_id and depicts _one_
 date value for each user.
 The master contains all date changes for each user_id. I.e.,
 there are many dates for each user_id. It is a history of previous
 occurrences.
 
 Now, I want to add a 3rd column to the selection table that is the
 date value from one step back for each user_id. I.e., if the
 master contains:
 
 User1   20010101
 User1   20000101
 User1   19990101
 User1   19970101
 
 for User1, and the selection is
 
 User1  19990101
 
 I want this to become:
 
 User1   20000101   19990101
 
 How do I do this? A simple join wont do it since it is dependent on
 what value date is for each user..

I think, you don't need a new column, because you can determine this
value (assuming you have 8.4)

test=*# select * from selection ;
 user_id |   date
---------+----------
 user1   | 20010101
 user1   | 20000101
 user1   | 19990101
 user1   | 19970101
(4 Zeilen)

Zeit: 0,255 ms
test=*# select *, lag(date) over (order by date) from selection order by date desc;
 user_id |   date   |   lag
---------+----------+----------
 user1   | 20010101 | 20000101
 user1   | 20000101 | 19990101
 user1   | 19990101 | 19970101
 user1   | 19970101 |
(4 Zeilen)



Regards, Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
If I was god, I would recompile penguin with --enable-fly.   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help me with this tricky join

2010-03-20 Thread Jay
Thanks!
But, since the master can contain many users (user2, user3, and so on)
I suppose this won't be a proper solution?
Sorry if I was a bit unclear in my description.

I.e., the master is of the form:

user_id date
User1 20010101
User1 20000101
User1 19990101
User1 19970101
User2 ...
...

Btw, I'm using PostgreSQL version 8.2 and I cannot use subqueries due to the
GP software.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help me with this tricky join

2010-03-19 Thread Jay
CORRECTION:

The original selection should be:

User1  20000101

NOT

User1  19990101

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] help with SQL join

2010-02-12 Thread Igor Neyman
 

 -Original Message-
 From: John R Pierce [mailto:pie...@hogranch.com] 
 Sent: Thursday, February 11, 2010 3:01 PM
 To: pgsql-general@postgresql.org
 Subject: Re: help with SQL join
 
 Neil Stlyz wrote:
  Now... here is the problem I am having... the above SQL query is 
  retrieving results from one table: sales I have another 
 table called 
  customers with a couple of fields (customerid, and customername are 
  two of the fields).
  I want to join on the customerid in both tables to retrieve the 
  customername in the query.
  So I need the results to look something like this:
   
  customerid | customername | TODAYS_USERS | LAST 7 DAYS | LAST 30 DAYS
  -----------+--------------+--------------+-------------+--------------
  bigint     | varchar      | bigint       | bigint      | bigint
        8699 | Joe Smith    |            1 |           1 |            1
        8700 | Sara Olson   |            1 |          12 |           17
        8701 | Mike Jones   |            3 |           5 |           19
   
  Can someone show me how to use a JOIN with the above SQL 
 Statement? I 
  need to bring the customername field into the query from the other 
  table and I have been having issues writting the query... can this 
  even be done?
 
 something like...
 
 SELECT results.customerid, c.customername,
        count(distinct count1) AS "TODAYS_USERS",
        count(distinct count2) AS "LAST 7 DAYS",
        count(distinct count3) AS "LAST 30 DAYS"
 FROM (SELECT distinct
         case when s.modified >= '2010-02-11' then s.modelnumber else null end as count1,
         case when s.modified >= '2010-02-04' then s.modelnumber else null end as count2,
         case when s.modified >= '2010-01-11' then s.modelnumber else null end as count3,
         s.customerid
       FROM sales as s WHERE s.modelnumber LIKE 'GH77%') AS results
 JOIN customers as c ON (results.customerid = c.customerid)
 GROUP BY results.customerid
 


One correction:  you should group on all non-aggregate columns in your
select list, i.e.:

 SELECT results.customerid, c.customername,
        count(distinct count1) AS "TODAYS_USERS",
        count(distinct count2) AS "LAST 7 DAYS",
        count(distinct count3) AS "LAST 30 DAYS"
 FROM (SELECT distinct
         case when s.modified >= '2010-02-11' then s.modelnumber else null end as count1,
         case when s.modified >= '2010-02-04' then s.modelnumber else null end as count2,
         case when s.modified >= '2010-01-11' then s.modelnumber else null end as count3,
         s.customerid
       FROM sales as s WHERE s.modelnumber LIKE 'GH77%') AS results
 JOIN customers as c ON (results.customerid = c.customerid)
 GROUP BY results.customerid, c.customername

Igor Neyman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] help with SQL join

2010-02-11 Thread John R Pierce

Neil Stlyz wrote:
Now... here is the problem I am having... the above SQL query is 
retrieving results from one table: sales
I have another table called customers with a couple of fields 
(customerid, and customername are two of the fields).
I want to join on the customerid in both tables to retrieve the 
customername in the query.

So I need the results to look something like this:
 
customerid | customername | TODAYS_USERS | LAST 7 DAYS | LAST 30 DAYS
-----------+--------------+--------------+-------------+--------------
bigint     | varchar      | bigint       | bigint      | bigint
      8699 | Joe Smith    |            1 |           1 |            1
      8700 | Sara Olson   |            1 |          12 |           17
      8701 | Mike Jones   |            3 |           5 |           19
 
Can someone show me how to use a JOIN with the above SQL Statement? I 
need to bring the customername field into the query from the other 
table and I have been having issues writting the query... can this 
even be done?


something like...

SELECT results.customerid, c.customername,
       count(distinct count1) AS "TODAYS_USERS",
       count(distinct count2) AS "LAST 7 DAYS",
       count(distinct count3) AS "LAST 30 DAYS"
FROM (SELECT distinct
        case when s.modified >= '2010-02-11' then s.modelnumber else null end as count1,
        case when s.modified >= '2010-02-04' then s.modelnumber else null end as count2,
        case when s.modified >= '2010-01-11' then s.modelnumber else null end as count3,
        s.customerid
      FROM sales as s WHERE s.modelnumber LIKE 'GH77%') AS results
JOIN customers as c ON (results.customerid = c.customerid)
GROUP BY results.customerid



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help: Postgresql on Microsoft cluster (MSCS)

2010-01-25 Thread Craig Ringer

On 25/01/2010 12:21 PM, Steeles wrote:

As title, please help.
I want to setup Postgresql HA by MSCS in VMWARE platform. (win server
2003, PG 8.3 on 32 bit)
MSCS has been set up; the problem is that the postgresql service can't start.
PGDATA is on the shared disk.


Are you trying to share a PostgreSQL data directory between more than 
one postmaster? It doesn't matter if they're on the same machine, or on 
different machines via shared storage, you *can't* *do* *that*.


Look into replication options - see the postgresql wiki.


I tried both generic service and application; neither one will bring up the
postgresql database engine service.


Error message?

Contents of Windows error log (see event viewer) ?

PostgreSQL error log entries (in `pg_log' under data directory) ?

--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help: Postgresql on Microsoft cluster (MSCS)

2010-01-25 Thread Magnus Hagander
2010/1/25 Craig Ringer cr...@postnewspapers.com.au:
 On 25/01/2010 12:21 PM, Steeles wrote:

 As title, please help.
 I want to setup Postgresql HA by MSCS in VMWARE platform. (win server
 2003, PG 8.3 on 32 bit)
 MSCS has been set up; the problem is that the postgresql service can't start.
 PGDATA is on the shared disk.

 Are you trying to share a PostgreSQL data directory between more than one 
 postmaster? It doesn't matter if they're on the same machine, or on different 
 machines via shared storage, you *can't* *do* *that*.

MSCS shared disk clusters make sure only one node has the disk mapped
at the same time, so that's not the problem.


 I tried both generic service and application; neither one will bring up the
 postgresql database engine service.

 Error message?

 Contents of Windows error log (see event viewer) ?

 PostgreSQL error log entries (in `pg_log' under data directory) ?

That's where you need to go, yes.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help me about postgreSql code

2010-01-22 Thread Mark Morgan Lloyd

John R Pierce wrote:

Elian Laura wrote:
i understand, but why my teacher wrote in his paper..Probably the 
most obvious case is a database engine where the user defines, at run 
time, if a field is integer, char, float, etc. but, it is not 
necessary to compile the program again. All this felxibility must be 
.


I am not a PostgreSQL developer but I think the thing to understand here 
is that there are two stages involved. If I say much more I think I'll 
be guilty of doing your homework for you, I suggest that you look very 
carefully at the examples John gave you earlier and consider that from 
the viewpoint of the database engine they are being entered at runtime.


   CREATE TABLE users (uid INTEGER, username TEXT, firstname TEXT,
 lastname TEXT);

   INSERT INTO users (uid, username, firstname, lastname) VALUES
 (103, 'jpierce', 'John', 'Pierce'), (104, 'elaura', 'Elian', 'Laura');

   SELECT username,firstname,lastname FROM users WHERE uid=103;

If you think about it one of those statements is giving the system 
information which it can store (I'm not sure the way it does this is 
really important) and re-use.


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help me about postgreSql code

2010-01-21 Thread John R Pierce

Elian Laura wrote:
i understand, but why my teacher wrote in his paper..Probably the 
most obvious case is a database engine where the user defines, at run 
time, if a field is integer, char, float, etc. but, it is not 
necessary to compile the program again. All this flexibility must be 
.


I have no idea why your teacher wrote that.   You should ask them.

in a relational database, all data fields are typed.   the only user 
input that Postgres processes is the SQL language, and thats a full 
blown complex language parser, the internals of which I have little 
desire to take apart.




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help on constructing a query that matches array

2010-01-20 Thread Dean Rasheed
2010/1/19 BlackMage dsd7...@uncw.edu:

 Hey all,

 I need help on constructing a query with arrays that matches the arrays up
 as foriegn keys to another table. For example, say that I have two tables,
 owners and pets.

 Owner Table
 owner_id | pet_ids
        1 | {1,2,3}
        2 | {2,3}

 Pet Table
 pet_id | pet_type
      1 | Dog
      2 | Cat
      3 | Fish
      4 | Bird

 Basically I am trying to create a SELECT query that returns the type of pets
 an owner has by matching the pet_ids up. Can anyone help me with this?

You can use the built-in unnest() array function (see
http://www.postgresql.org/docs/8.4/static/functions-array.html) to
convert the array to a set of rows which you can then join in the
standard way. For example:

select o.owner_id, o.pet_id, p.pet_type from
(select owner_id, unnest(pet_ids) as pet_id from owner) as o, pet as p
where p.pet_id = o.pet_id and owner_id=1;

Note: the unnest() function is only defined as standard in postgresql
8.4. If you have an older version, you'll need to define it yourself,
as described here:
http://wiki.postgresql.org/wiki/Array_Unnest
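
An alternative that works without unnest() is to test array membership
with ANY (column names as in the example tables, assuming the pet
table's key column is pet_id):

```sql
SELECT p.pet_type
FROM owner o
JOIN pet p ON p.pet_id = ANY (o.pet_ids)
WHERE o.owner_id = 1;
```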

Regards,
Dean

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help me about postgreSql code

2010-01-19 Thread John R Pierce

Elian Laura wrote:

Hello,
I'm an undergraduate in systems engineering in Peru.
I need to know how postgres handles the definition of 
data types at run time.
I downloaded the source code of postgres, but it is very complex; at least 
I would like to know which part of the code recognizes the 
data that the user enters. I write a value into a field and postgres 
must know what kind it is; how does it do that?


postgres is, at the core,  a relational database engine with a SQL 
language processor, it doesn't know what a 'field' is.


the fundamental storage units of a relational database are tables made 
of rows made of columns.  a column of a row of a given table is akin to 
a field, but on disk these are stored in a binary representation, the 
typing information required to decode it is part of the table definition.
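
That per-column typing information lives in the system catalogs and can 
be inspected directly, e.g. for the users table defined below:

```sql
-- List each column of "users" with its declared type.
SELECT attname, format_type(atttypid, atttypmod) AS data_type
FROM pg_attribute
WHERE attrelid = 'users'::regclass
  AND attnum > 0 AND NOT attisdropped;
```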


the only user 'input' is SQL statements, such as...

   CREATE TABLE users (uid INTEGER, username TEXT, firstname TEXT, 
lastname TEXT);


   INSERT INTO users (uid, username, firstname, lastname) VALUES
(103, 'jpierce', 'John', 'Pierce'), (104, 'elaura', 'Elian', 'Laura');


   SELECT username,firstname,lastname FROM users WHERE uid=103;

(I use all upper case letters for SQL keywords to distinguish them, in 
fact, SQL doesn't care)


Things like forms, fields, human input decoding  are the domain of the 
user application software, which will generate and execute SQL 
statements to store the data in the database.



Is this clear enough?




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help! xlog flush request is not satisfied

2009-12-07 Thread Craig Ringer

On 7/12/2009 3:00 PM, A B wrote:

Hi.
I just logged into a system and found this in the log when trying to
start postgres.


Possibly silly question: is there plenty of free disk space on the file 
system where PostgreSQL's data is stored?


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help! xlog flush request is not satisfied

2009-12-07 Thread Tom Lane
Craig Ringer cr...@postnewspapers.com.au writes:
 On 7/12/2009 3:00 PM, A B wrote:
 I just logged into a system and found this in the log when trying to
 start postgres.

 Possibly silly question: is there plenty of free disk space on the file 
 system where PostgreSQL's data is stored?

That's not what that message is about.  What it's unhappy about is that
it found a page with an LSN that's past the logical end of WAL.  This
either means that page is corrupt, or there's something wrong with WAL.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help with starting portable version of postgresql

2009-12-05 Thread John R Pierce

marco di antonio wrote:

Hi,

I downloaded the portable version of PostgreSQL from 
http://greg.webhop.net/postgresql_portable


It works but the problem is that to start the server you must load it 
from the gui. Instead since I want to start the server from my app, I 
need to understand how to load it manually. I tried starting 
postgres.exe but it's saying:

Execution of PostgreSQL by a user with administrative permissions is not
permitted.
The server must be started under an unprivileged user ID to prevent
possible system security compromises.  See the documentation for
more information on how to properly start the server.

That's strange, because the GUI is able to load it, so I'd like 
to know how I can do the same thing (and I can't find the source 
code of the UI).

I'd look in Local Users and Groups to see if this utility has created a 
special user account to run the postmaster under.  Also, after the 
GUI has startted it, use something like Process Explorer (from 
Microsoft, formerly SysInternals) to see what user the postgres process 
is running as.


my guess is you need to invoke postgres.exe with the appropriate 
arguments, such that it runs as that non-privileged user, see 
http://msdn.microsoft.com/en-us/library/ms682431(VS.85).aspx for one way 
to do this...










--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help with starting portable version of postgresql

2009-12-05 Thread Thomas Kellerer

John R Pierce wrote on 06.12.2009 00:01:
I downloaded the portable version of PostgreSQL from 
http://greg.webhop.net/postgresql_portable


It works but the problem is that to start the server you must load it 
from the gui. Instead since I want to start the server from my app, I 
need to understand how to load it manually. I tried starting 
postgres.exe but it's saying:

Execution of PostgreSQL by a user with administrative permissions is not
permitted.


You should use pg_ctl instead. That will take care of dropping any administrative rights 
the current user might have. I'm using a set of batch files to create a 
portable postgres, and I use pg_ctl for that purpose even with administrator 
account.



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help, server cannot start anymore

2009-11-29 Thread CyTG
Solved. For now.
- LOG:  could not open directory G:/mysql/pgsql/share/timezone: No
such file or directory

I copied this folder from another postgres installation i did some
months back on another system and wham, up and running again.

weird.


--
On Sun, Nov 29, 2009 at 11:27 AM, CyTG cytg@gmail.com wrote:
 Hi, i recently migrated from mysql to postgres and i might have made a
 no-no in that process.
 basically i want/need a self-contained folder \database where i can zip
 that directory, move it anywhere, unzip and carry on my business with
 no dependencies. Postgres presented me with a challenge in this regard
 as i am not allowed to run it from a admin account (yes i know i know,
 still..), alas i followed the steps here
 http://charupload.wordpress.com/2008/07/26/manual-installation-of-postgresql-under-administrator-user-in-windows/

 initdb -D c:\postgresql\data
 pg_ctl.exe start -D c:\postgresql\data

 Fantastic, i was up and running, migrated my data and continued
 development .. the server has stopped on occasion on its own for some
 reason i cannot pinpoint, but now it is totally unable to start up,
 here is the recent entries from the server log.

 LOG:  could not open directory G:/mysql/pgsql/share/timezone: No
 such file or directory
 LOG:  could not open directory G:/mysql/pgsql/share/timezone: No
 such file or directory
 FATAL:  could not select a suitable default timezone
 DETAIL:  It appears that your GMT time zone uses leap seconds.
 PostgreSQL does not support leap seconds.
 LOG:  could not open directory G:/mysql/pgsql/share/timezone: No
 such file or directory
 LOG:  could not open directory G:/mysql/pgsql/share/timezone: No
 such file or directory
 FATAL:  could not select a suitable default timezone
 DETAIL:  It appears that your GMT time zone uses leap seconds.
 PostgreSQL does not support leap seconds.
 LOG:  could not open directory G:/mysql/pgsql/share/timezone: No
 such file or directory
 LOG:  could not open directory G:/mysql/pgsql/share/timezone: No
 such file or directory
 FATAL:  could not select a suitable default timezone
 DETAIL:  It appears that your GMT time zone uses leap seconds.
 PostgreSQL does not support leap seconds.

 Don't read anything into the g:/mysql directory name; in there mysql and
 postgres live side by side...
 I understand the error, the directory is NOT there ... but this thing
 was running happily a few days ago .. what went wrong ???


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help with postgresql memory issue

2009-11-03 Thread Brooks Lyrette
I wish the solution were that simple. I rent the zone and that is my 
provider's cap.


On 2-Nov-09, at 5:21 PM, Scott Marlowe wrote:

On Mon, Nov 2, 2009 at 1:11 PM, Brooks Lyrette brooks.lyre...@gmail.com 
 wrote:

Thanks for all the help guys.
So this is what I get from all this. My solaris zone will cap me at  
around
900M-1000M RSS memory. Therefore using the math from a previous 
reply I can

only have about 23 connections to my database without maxing out the
machines memory?
This seems a little low, won't postgres start swapping to disk once  
the

available RAM is used up?


pgsql doesn't swap, the OS swaps, when it runs out of memory.  Since
pgsql is limited to 1G, the OS has no reason to swap.

Can you simply remove the cap from this instance?  It doesn't seem to
be doing anything useful.




Re: [GENERAL] Help with postgresql memory issue

2009-11-02 Thread Brooks Lyrette

Thanks for all the help guys.

So this is what I get from all this. My solaris zone will cap me at  
around 900M-1000M RSS memory. Therefore using the math from a previous 
reply I can only have about 23 connections to my database without  
maxing out the machines memory?


This seems a little low, won't postgres start swapping to disk once  
the available RAM is used up?


You'll have to excuse me if this seems like a newbie question.

Thanks again,
Brooks L.

On 28-Oct-09, at 5:56 PM, Tom Lane wrote:


Greg Smith gsm...@gregsmith.com writes:

On Wed, 28 Oct 2009, Tom Lane wrote:

What's the platform exactly?  Is it possible that the postmaster is
being launched under very restrictive ulimit settings?


Now that Brooks mentioned this being run inside of a Solaris zone,  
seems
like this might be running into some memory upper limit controlled  
by the

zone configuration.


A bit of quick googling confirmed that there is (or can be) a per-zone
memory cap.  I'll bet Greg has nailed it.  The docs I found claim that
the cap management code is smart enough to count shared memory only
once, which would eliminate the most obvious way in which the cap  
might

be way off; but it still sounds like checking into that configuration
setting is job #1.

regards, tom lane




Re: [GENERAL] Help with postgresql memory issue

2009-11-02 Thread Scott Marlowe
On Mon, Nov 2, 2009 at 1:11 PM, Brooks Lyrette brooks.lyre...@gmail.com wrote:
 Thanks for all the help guys.
 So this is what I get from all this. My solaris zone will cap me at around
 900M-1000M RSS memory. Therefore using the math from a previous reply I can
 only have about 23 connections to my database without maxing out the
 machines memory?
 This seems a little low, won't postgres start swapping to disk once the
 available RAM is used up?

pgsql doesn't swap, the OS swaps, when it runs out of memory.  Since
pgsql is limited to 1G, the OS has no reason to swap.

Can you simply remove the cap from this instance?  It doesn't seem to
be doing anything useful.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help with postgresql memory issue

2009-11-02 Thread Scott Marlowe
On Mon, Nov 2, 2009 at 3:56 PM, Brooks Lyrette brooks.lyre...@gmail.com wrote:
 I wish the solution was that simple. I rent the zone and that is my
 providers cap.

Am I misunderstanding this?  You rent an image with 32Gigs of ram.
Your provider limits you to any single process / application being 1G
total by a cap?  Then what good is the 32Gigs of ram?  It's like
seeing the promised land but never allowed to enter.  And what reason
would they have for capping a single app inside the vm?  It's already
using 32Gig, so I don't see them saving any memory.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help with postgresql memory issue

2009-10-28 Thread Thom Brown
2009/10/28 Brooks Lyrette brooks.lyre...@gmail.com:
 Hello All,

 I'm new to postgres and it seems my server is unable to fork new
 connections.

 Here is the log:

 LOG:  could not fork new process for connection: Not enough space
 LOG:  could not fork new process for connection: Not enough space
 TopMemoryContext: 84784 total in 8 blocks; 5584 free (10 chunks); 79200 used
  Prepared Queries: 8192 total in 1 blocks; 1800 free (0 chunks); 6392 used
  TopTransactionContext: 8192 total in 1 blocks; 7856 free (0 chunks); 336
 used
  Record information cache: 8192 total in 1 blocks; 1800 free (0 chunks);
 6392 used
  MessageContext: 8192 total in 1 blocks; 7128 free (0 chunks); 1064 used
  smgr relation table: 8192 total in 1 blocks; 3840 free (0 chunks); 4352
 used
  TransactionAbortContext: 32768 total in 1 blocks; 32752 free (0 chunks); 16
 used
  Portal hash: 8192 total in 1 blocks; 3912 free (0 chunks); 4280 used
  PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used
    PortalHeapMemory: 1024 total in 1 blocks; 976 free (0 chunks); 48 used
  Relcache by OID: 8192 total in 1 blocks; 3376 free (0 chunks); 4816 used
  CacheMemoryContext: 405328 total in 19 blocks; 70888 free (1 chunks);
 334440 used
    unnamed prepared statement: 24576 total in 2 blocks; 7912 free (1
 chunks); 16664 used
    CachedPlan: 1024 total in 1 blocks; 904 free (0 chunks); 120 used
    CachedPlanSource: 1024 total in 1 blocks; 840 free (0 chunks); 184 used
    pg_ts_dict_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680
 used
    pg_aggregate_fnoid_index: 1024 total in 1 blocks; 344 free (0 chunks);
 680 used
    pg_language_name_index: 1024 total in 1 blocks; 344 free (0 chunks); 680
 used
    pg_statistic_relid_att_index: 1024 total in 1 blocks; 280 free (0
 chunks); 744 used
    pg_ts_dict_dictname_index: 1024 total in 1 blocks; 280 free (0 chunks);
 744 used
    pg_namespace_nspname_index: 1024 total in 1 blocks; 304 free (0 chunks);
 720 used
    pg_opfamily_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680
 used
    pg_opclass_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680
 used
    pg_ts_parser_prsname_index: 1024 total in 1 blocks; 280 free (0 chunks);
 744 used
    pg_amop_fam_strat_index: 1024 total in 1 blocks; 128 free (0 chunks); 896
 used
    pg_opclass_am_name_nsp_index: 1024 total in 1 blocks; 192 free (0
 chunks); 832 used
    pg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 240 free (0
 chunks); 784 used
    pg_cast_source_target_index: 1024 total in 1 blocks; 280 free (0 chunks);
 744 used
    pg_auth_members_role_member_index: 1024 total in 1 blocks; 280 free (0
 chunks); 744 used
    pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 280 free (0
 chunks); 744 used
    pg_ts_config_cfgname_index: 1024 total in 1 blocks; 280 free (0 chunks);
 744 used
    pg_authid_oid_index: 1024 total in 1 blocks; 304 free (0 chunks); 720
 used
    pg_ts_config_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680
 used
    pg_conversion_default_index: 1024 total in 1 blocks; 128 free (0 chunks);
 896 used
    pg_language_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680
 used
    pg_enum_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680 used
    pg_proc_proname_args_nsp_index: 1024 total in 1 blocks; 192 free (0
 chunks); 832 used
    pg_ts_parser_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680
 used
    pg_database_oid_index: 1024 total in 1 blocks; 304 free (0 chunks); 720
 used
    pg_conversion_name_nsp_index: 1024 total in 1 blocks; 280 free (0
 chunks); 744 used
    pg_class_relname_nsp_index: 1024 total in 1 blocks; 280 free (0 chunks);
 744 used
    pg_attribute_relid_attnam_index: 1024 total in 1 blocks; 280 free (0
 chunks); 744 used
    pg_class_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680 used
    pg_amproc_fam_proc_index: 1024 total in 1 blocks; 128 free (0 chunks);
 896 used
    pg_operator_oprname_l_r_n_index: 1024 total in 1 blocks; 128 free (0
 chunks); 896 used
    pg_index_indexrelid_index: 1024 total in 1 blocks; 344 free (0 chunks);
 680 used
    pg_type_oid_index: 1024 total in 1 blocks; 304 free (0 chunks); 720 used
    pg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 280 free (0
 chunks); 744 used
    pg_authid_rolname_index: 1024 total in 1 blocks; 304 free (0 chunks); 720
 used
    pg_auth_members_member_role_index: 1024 total in 1 blocks; 280 free (0
 chunks); 744 used
    pg_enum_typid_label_index: 1024 total in 1 blocks; 280 free (0 chunks);
 744 used
    pg_constraint_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680
 used
    pg_conversion_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680
 used
    pg_ts_template_tmplname_index: 1024 total in 1 blocks; 280 free (0
 chunks); 744 used
    pg_ts_config_map_index: 1024 total in 1 blocks; 192 free (0 chunks); 832
 used
    pg_namespace_oid_index: 1024 total in 1 blocks; 304 free (0 chunks); 720
 used
    

Re: [GENERAL] Help with postgresql memory issue

2009-10-28 Thread Thom Brown
2009/10/28 Brooks Lyrette brooks.lyre...@gmail.com:
 There should be no other processes running, this system is dedicated to
 running postgresql.

 Max connections is configured to: max_connections = 400


Well it sounds like you've somehow run out of swap space.  Are you
able to run top and sort by resident memory and also swap memory to
see where it's all going?  Also use pg_top if you have it.  That will
tell you how much memory each connection is using.

Thom



Re: [GENERAL] Help with postgresql memory issue

2009-10-28 Thread Tom Lane
Brooks Lyrette brooks.lyre...@gmail.com writes:
 I'm new to postgres and it seems my server is unable to fork new  
 connections.

 LOG:  could not fork new process for connection: Not enough space

For what I suppose is a lightly loaded machine, that is just plain
weird.  What's the platform exactly?  Is it possible that the postmaster
is being launched under very restrictive ulimit settings?

If it's a Unix-ish machine, it would be useful to look at top and
vmstat output to see if the machine is under severe memory
pressure for some reason.

regards, tom lane



Re: [GENERAL] Help with postgresql memory issue

2009-10-28 Thread Brooks Lyrette
The machine is running a moderate load. This is running on a Solaris Zone.


Top is showing:

load averages:  2.49,  4.00,  3.78;  up 124+12:24:47    16:04:21

46 processes: 45 sleeping, 1 on cpu
CPU states: 76.6% idle, 14.6% user,  8.8% kernel,  0.0% iowait,  0.0% swap

Memory: 32G phys mem, 942M free mem, 76G swap, 74G free swap

   PID USERNAME LWP PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
  5069 postgres   1  52    0  167M   20M sleep    0:04 13.50% postgres
   902 postgres   1   1    0  167M   21M sleep    0:12  6.39% postgres
  5068 postgres   1  59    0  167M   21M sleep    0:01  4.92% postgres
  5070 postgres   1  59    0  166M   20M sleep    0:00  3.72% postgres
 27817 postgres   1  59    0  167M   22M sleep    0:23  1.43% postgres
   903 postgres   1  59    0  157M   11M sleep    0:02  1.14% postgres
 23594 postgres   1  59    0  148M 2096K sleep    0:10  0.11% postgres
  5510 brooks     1  59    0 5624K 2184K cpu      0:00  0.10% top
 23598 postgres   1  59    0 6404K 1680K sleep    0:11  0.10% postgres
 23595 postgres   1  59    0  148M 1852K sleep    0:01  0.01% postgres
 23597 postgres   1  59    0 6220K 1556K sleep    0:00  0.01% postgres
 24870 root      30  39    0 7060K 3332K sleep    7:01  0.00% nscd
   736 brooks     1  59    0 6292K 2060K sleep    0:00  0.00% sshd
 23596 postgres   1  59    0  148M 2024K sleep    0:00  0.00% postgres
 24828 root      13  29    0 9300K 2128K sleep    2:02  0.00% svc.st

And vmstat shows:

 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr s0 s1 s2 s3   in   sy   cs us sy id
 0 0 0 74805352 2910024 373 4154 96 7 7 0 0 -0 52 19 19 4561 230770 6889 11 13 76


On 28-Oct-09, at 4:01 PM, Tom Lane wrote:


Brooks Lyrette brooks.lyre...@gmail.com writes:

I'm new to postgres and it seems my server is unable to fork new
connections.



LOG:  could not fork new process for connection: Not enough space


For what I suppose is a lightly loaded machine, that is just plain
weird.  What's the platform exactly?  Is it possible that the
postmaster is being launched under very restrictive ulimit settings?

If it's a Unix-ish machine, it would be useful to look at top and
vmstat output to see if the machine is under severe memory
pressure for some reason.

regards, tom lane





Re: [GENERAL] Help with postgresql memory issue

2009-10-28 Thread Brooks Lyrette
There should be no other processes running, this system is dedicated  
to running postgresql.


Max connections is configured to: max_connections = 400 

Brooks L.

On 28-Oct-09, at 3:46 PM, Thom Brown wrote:


2009/10/28 Brooks Lyrette brooks.lyre...@gmail.com:

Hello All,

I'm new to postgres and it seems my server is unable to fork new
connections.

Here is the log:

LOG:  could not fork new process for connection: Not enough space
LOG:  could not fork new process for connection: Not enough space

Re: [GENERAL] Help with postgresql memory issue

2009-10-28 Thread Greg Stark
On Wed, Oct 28, 2009 at 1:05 PM, Brooks Lyrette
brooks.lyre...@gmail.com wrote:
 The machine is running a moderate load. This is running on a Solaris Zone.

 Memory: 32G phys mem, 942M free mem, 76G swap, 74G free swap

   PID USERNAME LWP PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
  5069 postgres   1  52    0  167M   20M sleep    0:04 13.50% postgres

Hm, well 400 processes if each were taking 190M would be 76G. But that
doesn't really make much sense since most of the 167M of that process
is presumably the shared buffers. What is your shared buffers set to
btw? And your work_mem and maintenance_work_mem?

FWIW, ENOMEM is documented as "There is not enough swap space."

Perhaps you have some big usage spike which uses up lots of swap and
causes postgres to start needing lots of new processes at the same
time?


-- 
greg



Re: [GENERAL] Help with postgresql memory issue

2009-10-28 Thread Greg Smith

On Wed, 28 Oct 2009, Tom Lane wrote:

What's the platform exactly?  Is it possible that the postmaster is 
being launched under very restrictive ulimit settings?


Now that Brooks mentioned this being run inside of a Solaris zone, seems 
like this might be running into some memory upper limit controlled by the 
zone configuration.


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD



Re: [GENERAL] Help with postgresql memory issue

2009-10-28 Thread Greg Smith

On Wed, 28 Oct 2009, Greg Stark wrote:


  PID USERNAME LWP PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
 5069 postgres   1  52    0  167M   20M sleep    0:04 13.50% postgres


Hm, well 400 processes if each were taking 190M would be 76G. But that
doesn't really make much sense since most of the 167M of that process
is presumably the shared buffers. What is your shared buffers set to
btw? And your work_mem and maintenance_work_mem?


Pieced together from the upthread config file posts:

shared_buffers = 128MB
work_mem = 16MB
max_connections = 400

So somewhere between 128MB and 167MB of that SIZE is shared_buffers plus
the other usual shared memory suspects.  Let's say each process is using
40MB, which is on the high side.  I'd guess this system might peak at
40MB * 400 connections + 170MB ≈ 16GB of database RAM used, which is so
much less than physical RAM that it seems more like a software limit is
being run into instead.
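
Greg's estimate is easy to re-derive; a quick sketch (all figures are the
thread's numbers, and 40MB per backend is his stated high-side guess):

```python
# Back-of-the-envelope capacity math from the thread's figures.
shared_mb = 170          # shared_buffers (128MB) plus other shared memory
per_backend_mb = 40      # high-side guess for private memory per backend
max_connections = 400

# Shared memory is counted once; each backend adds only private memory.
peak_mb = shared_mb + per_backend_mb * max_connections
print(peak_mb, "MB, roughly", peak_mb // 1024, "GB")  # prints: 16170 MB, roughly 15 GB
```

Well under the 32GB of physical RAM, which is why a software cap looks more
likely than genuine memory exhaustion.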


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD


Re: [GENERAL] Help with postgresql memory issue

2009-10-28 Thread Tom Lane
Greg Smith gsm...@gregsmith.com writes:
 On Wed, 28 Oct 2009, Tom Lane wrote:
 What's the platform exactly?  Is it possible that the postmaster is 
 being launched under very restrictive ulimit settings?

 Now that Brooks mentioned this being run inside of a Solaris zone, seems 
 like this might be running into some memory upper limit controlled by the 
 zone configuration.

A bit of quick googling confirmed that there is (or can be) a per-zone
memory cap.  I'll bet Greg has nailed it.  The docs I found claim that
the cap management code is smart enough to count shared memory only
once, which would eliminate the most obvious way in which the cap might
be way off; but it still sounds like checking into that configuration
setting is job #1.
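
For anyone hitting the same wall: on Solaris, the zone's caps can be
inspected directly. A sketch, where `myzone` is a placeholder zone name and
these commands exist only on Solaris:

```shell
prctl -n zone.max-swap -i zone myzone            # swap cap for the zone, if set
prctl -n zone.max-locked-memory -i zone myzone   # locked-memory cap
rcapstat -z 5 3                                  # physical-memory cap activity, 3 samples
```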

regards, tom lane



Re: [GENERAL] Help! Database restored with disabled triggers

2009-09-23 Thread Adrian Klaver
On Tuesday 22 September 2009 7:28:03 pm Joe Kramer wrote:
 I have database backup schema+data in text (non-compressed) format.
 Backup is created using pg_dump -i  -h ... -U ... -f dump.sql.
 I run it with psql dump.sql but after restore all triggers are disabled!

 I can't use this text dump with pg_restore because it only accept
 archived dumps. And I am not sure that using pg_restore will solve
 disabled triggers problem.
 I need to have the backup in text format so I can open and edit it.

 There was a recipe earlier in this mailing list that involves writing
 a function that will enable all triggers one-by-one. But I want to do
 it a proper way, without such hacking.

 What would be the solution for me?

 Thanks.

What version of Postgres are you dumping from, restoring to? Which version of 
pg_dump are you using?

-- 
Adrian Klaver
akla...@comcast.net



Re: [GENERAL] Help! Database restored with disabled triggers

2009-09-23 Thread Tom Lane
Joe Kramer cckra...@gmail.com writes:
 I have database backup schema+data in text (non-compressed) format.
 Backup is created using pg_dump -i  -h ... -U ... -f dump.sql.
 I run it with psql dump.sql but after restore all triggers are disabled!

You sure they weren't disabled in the source database?  AFAICS pg_dump
just duplicates the trigger state it sees in the source.

regards, tom lane



Re: [GENERAL] Help! Database restored with disabled triggers

2009-09-23 Thread Joe Kramer
On Thu, Sep 24, 2009 at 12:02 AM, Adrian Klaver akla...@comcast.net wrote:
 On Tuesday 22 September 2009 7:28:03 pm Joe Kramer wrote:
 I have database backup schema+data in text (non-compressed) format.
 Backup is created using pg_dump -i  -h ... -U ... -f dump.sql.
 I run it with psql dump.sql but after restore all triggers are disabled!

 I can't use this text dump with pg_restore because it only accept
 archived dumps. And I am not sure that using pg_restore will solve
 disabled triggers problem.
 I need to have the backup in text format so I can open and edit it.

 There was a recipe earlier in this mailing list that involves writing
 a function that will enable all triggers one-by-one. But I want to do
 it a proper way, without such hacking.

 What would be the solution for me?

 Thanks.

 What version of Postgres are you dumping from, restoring to? Which version of
 pg_dump are you using?

 --
I am using client 8.1.9 to dump from server 8.3.0  (unable to use client 8.3.x)
Importing to server 8.3.7.



Re: [GENERAL] Help! Database restored with disabled triggers

2009-09-23 Thread Joe Kramer
On Thu, Sep 24, 2009 at 12:53 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Joe Kramer cckra...@gmail.com writes:
 I have database backup schema+data in text (non-compressed) format.
 Backup is created using pg_dump -i  -h ... -U ... -f dump.sql.
 I run it with psql dump.sql but after restore all triggers are disabled!

 You sure they weren't disabled in the source database?  AFAICS pg_dump
 just duplicates the trigger state it sees in the source.

                        regards, tom lane


Yes, I'm absolutely sure they are not disabled. And in the SQL dump
file there are no commands that would disable them.
It simply goes on to creating triggers, but in the end they are all disabled.

Regards.



Re: [GENERAL] Help! Database restored with disabled triggers

2009-09-23 Thread Tom Lane
Joe Kramer cckra...@gmail.com writes:
 On Thu, Sep 24, 2009 at 12:02 AM, Adrian Klaver akla...@comcast.net wrote:
 What version of Postgres are you dumping from, restoring to? Which version of

 I am using client 8.1.9 to dump from server 8.3.0  (unable to use client 
 8.3.x)
 Importing to server 8.3.7.

You mean you are dumping from an 8.3 server with an 8.1 pg_dump?
That is pretty much guaranteed not to work; I am surprised that the
only symptom you notice is bad trigger state.  Why do you feel
you can't use an up-to-date pg_dump?

regards, tom lane



Re: [GENERAL] Help! Database restored with disabled triggers

2009-09-23 Thread Tom Lane
Joe Kramer cckra...@gmail.com writes:
 On Thu, Sep 24, 2009 at 12:53 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 You sure they weren't disabled in the source database?

 Yes, I'm absolutely sure they are not disabled. And in the SQL dump
 file there are no commands that would disable them.

Better take another look for ALTER TABLE foo DISABLE TRIGGER bar
commands.  Given the information that this is a pre-8.3 pg_dump,
that's exactly the behavior I'd expect, because it's not going to
understand the values it finds in pg_trigger.tgenabled in an 8.3
server.
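
One way to confirm the state from the catalog (a sketch; in 8.3
`pg_trigger.tgenabled` is a char, 'O' = enabled and 'D' = disabled, where
pre-8.3 it was a boolean):

```sql
-- List triggers and their enabled state, skipping FK constraint triggers
SELECT tgrelid::regclass AS table_name, tgname, tgenabled
FROM pg_trigger
WHERE tgname NOT LIKE 'RI_ConstraintTrigger%';
```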

regards, tom lane



Re: [GENERAL] Help! Database restored with disabled triggers

2009-09-23 Thread Joe Kramer
On Thu, Sep 24, 2009 at 1:33 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Joe Kramer cckra...@gmail.com writes:
 On Thu, Sep 24, 2009 at 12:53 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 You sure they weren't disabled in the source database?

 Yes, I'm absolutely sure they are not disabled. And in the SQL dump
 file there are no commands that would disable them.

 Better take another look for ALTER TABLE foo DISABLE TRIGGER bar
 commands.  Given the information that this is a pre-8.3 pg_dump,
 that's exactly the behavior I'd expect, because it's not going to
 understand the values it finds in pg_trigger.tgenabled in an 8.3
 server.


Thanks, I found DISABLE TRIGGER commands and deleted them,
but wish I could find a way to make pg_dump not to add them!
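
Instead of editing the dump, the triggers could also be re-enabled after the
restore (table and trigger names below are placeholders):

```sql
ALTER TABLE mytable ENABLE TRIGGER mytrigger;  -- a single trigger
ALTER TABLE mytable ENABLE TRIGGER USER;       -- all user triggers on the table
```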



Re: [GENERAL] Help! Database restored with disabled triggers

2009-09-23 Thread Adrian Klaver

- Joe Kramer cckra...@gmail.com wrote:

 On Thu, Sep 24, 2009 at 1:33 AM, Tom Lane t...@sss.pgh.pa.us wrote:
  Joe Kramer cckra...@gmail.com writes:
  On Thu, Sep 24, 2009 at 12:53 AM, Tom Lane t...@sss.pgh.pa.us
 wrote:
  You sure they weren't disabled in the source database?
 
  Yes, I'm absolutely sure they are not disabled. And in the SQL
 dump
  file there are no commands that would disable them.
 
  Better take another look for ALTER TABLE foo DISABLE TRIGGER bar
  commands.  Given the information that this is a pre-8.3 pg_dump,
  that's exactly the behavior I'd expect, because it's not going to
  understand the values it finds in pg_trigger.tgenabled in an 8.3
  server.
 
 
 Thanks, I found DISABLE TRIGGER commands and deleted them,
 but wish I could find a way to make pg_dump not to add them!
 

You are going to have to use the 8.3 pg_dump :)

Adrian Klaver
akla...@comcast.net




Re: [GENERAL] Help! Database restored with disabled triggers

2009-09-23 Thread Scott Marlowe
On Wed, Sep 23, 2009 at 9:12 AM, Joe Kramer cckra...@gmail.com wrote:
 I am using client 8.1.9 to dump from server 8.3.0  (unable to use client 
 8.3.x)
 Importing to server 8.3.7.

That won't work.



Re: [GENERAL] help me please with function

2009-09-09 Thread Thomas Kellerer

I cannot find the WITH ... AS command in Postgres.


You need Postgres 8.4 for that:

http://www.postgresql.org/docs/current/static/queries-with.html
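
A minimal example of the syntax, for reference (hypothetical table):

```sql
WITH recent AS (
    SELECT id FROM orders
    WHERE placed_at > now() - interval '7 days'
)
SELECT count(*) FROM recent;
```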

Thomas




Re: [GENERAL] Help interpreting pg_stat_bgwriter output

2009-08-19 Thread Jaime Casanova
On Thu, Aug 13, 2009 at 3:00 AM, Greg Smithgsm...@gregsmith.com wrote:

 buffers_backend = 740

 This number represents the behavior the background writer is trying to
 prevent--backends having to clean their own buffers up.


so what we want on busy systems is for buffers_backend to be (at least)
equal to or (better) lower than buffers_clean, right?
Or am I understanding it wrong?
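
The two counters can be compared straight from the stats view (a sketch
against the 8.3+ `pg_stat_bgwriter` view):

```sql
SELECT buffers_clean, buffers_backend,
       round(buffers_backend::numeric / NULLIF(buffers_clean, 0), 2)
         AS backend_to_clean_ratio
FROM pg_stat_bgwriter;
```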

-- 
Atentamente,
Jaime Casanova
Soporte y capacitación de PostgreSQL
Asesoría y desarrollo de sistemas
Guayaquil - Ecuador
Cel. +59387171157



Re: [GENERAL] Help interpreting pg_stat_bgwriter output

2009-08-13 Thread Greg Smith

On Wed, 12 Aug 2009, sam mulube wrote:


 is my interpreting of buffers_clean = 0 correct?


Yes.


 If so, why would the bgwriter not be writing out any buffers?


The purpose of the cleaner is to prepare buffers that we expect will be 
needed for allocations in the near future.  Let's do a little math on your 
system to guess why that's not happening.



checkpoints_timed = 333
checkpoints_req = 0


You're never triggering checkpoints from activity.  This suggests that 
your system is having a regular checkpoint every 5 minutes, and therefore 
the time your server has been up is about 1665 minutes.



bgwriter_delay = 200ms


With the background writer running 5 times per second, the data you've 
sampled involved it running 1665 * 60 * 5 = 499500 times.  During none of 
those runs did it actually write anything; why?



buffers_alloc = 19163


During those runs, 19163 buffers were allocated.  This means that during 
the average background writer delay nap, 19163 / 499500 = 0.04 buffers 
were allocated.  That's very little demand for buffers that need to be 
cleaned on average, and the evidence here suggests the system is finding 
plenty of cleaned up and ready to go buffers from the background 
checkpoint process.  It doesn't need to do any work on top of what the 
checkpoint buffer cleanup is doing.



buffers_backend = 740


This number represents the behavior the background writer is trying to 
prevent--backends having to clean their own buffers up.  Your result here 
suggests that on average, during any 5 minute period there are 740 / 333 = 
2.2 buffers being written that we might have had the background writer 
take care of instead.  Again, that's so little activity that the averages 
the background writer estimates with aren't even detecting anything worth 
doing.


In short, your system isn't nearly active enough for the background writer 
to find itself with useful work to do, and one of the design goals for it 
was to keep it from spinning around doing nothing in that situation.  If 
your system load goes up, I expect you'll discover cleaning starts 
happening too.
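
Greg's arithmetic, spelled out (inputs are the figures quoted in this
thread):

```python
checkpoints_timed = 333                # one timed checkpoint every 5 minutes
uptime_min = checkpoints_timed * 5     # ~1665 minutes of uptime
runs = uptime_min * 60 * 5             # bgwriter wakes 5x/sec (200ms delay) -> 499500
buffers_alloc = 19163
print(round(buffers_alloc / runs, 2))  # prints: 0.04  (buffers allocated per nap)
```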


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD


Re: [GENERAL] Help using SELECT INTO to make schema

2009-07-22 Thread Jeff Davis
On Wed, 2009-07-22 at 18:15 -0400, Robert James wrote:
 I'd like to SELECT INTO one table into another one. However, I'd like
 to do two things that I don't know how to do using SELECT INTO:
 
  1. Copy over the indexes and constraints of the first table into the
 second

See: http://www.postgresql.org/docs/8.4/static/sql-createtable.html
LIKE parent_table [ { INCLUDING | EXCLUDING } { DEFAULTS | CONSTRAINTS |
INDEXES } ]

 2. Do SELECT INTO even if the second table already exists.

INSERT INTO foo ... SELECT

Regards,
Jeff Davis
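
Putting the two answers together (table names hypothetical):

```sql
-- 1. Copy structure, defaults, constraints and indexes from the source
CREATE TABLE copy_t (LIKE src_t INCLUDING DEFAULTS
                     INCLUDING CONSTRAINTS INCLUDING INDEXES);

-- 2. Populate it; this form also works when the target already exists
INSERT INTO copy_t SELECT * FROM src_t;
```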




Re: [GENERAL] Help me please...

2009-07-09 Thread jacob
Do the DB folders still exist? If so, back them up, reinstall Postgres
(reinstalling XP probably wiped out either DLLs or registry entries),
and relaunch it. Don't have it initialize a DB on install.

 

From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Roseller A.
Romanos
Sent: Wednesday, July 08, 2009 9:37 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Help me please...

 

Please help me with this. I really need your advice as to how to
retrieve the data in my postgresql database.

I installed PostgreSQL on a Windows XP platform five months ago. Just
yesterday my OS broke down, saying NTLDR is missing.  What I did was
re-install my OS. When I finished the installation I found out that I
can no longer access my PostgreSQL using pgAdmin III. 

What should I do to access my database and retrieve the important
records in it?   

Unfortunately, I have not created a backup of it.  Please help me.

Thanks in advance and God bless...

 

Roseller Romanos
STI-Pagadian
Gillamac Building, Pagadian City
Office Nos: (062) 2144-785
Home Nos: (062) 2144-695
Mobile Nos: 09203502636

 

 



Re: [GENERAL] Help me please...

2009-07-09 Thread jacob
If you used the default locations, I believe it should be (this is from
memory, mind) under c:\program files\postgres\version\data\

 

data is the folder you want.

 

From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of ja...@aers.ca
Sent: Thursday, July 09, 2009 1:18 PM
To: don2_...@yahoo.com; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Help me please...

 

do the DB folders still exist? if so back them up, reinstall Postgres
(reinstalling XP probably wiped out either DLL's or registry entries)
and relaunch it. don't have it initialize a DB on install

 

From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Roseller A.
Romanos
Sent: Wednesday, July 08, 2009 9:37 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Help me please...

 

Please help me with this. I really need your advice as to how to
retrieve the data in my postgresql database.

I installed PostgreSQL on the Windows XP platform five months ago. Just
yesterday my OS broke down, saying NTLDR is missing.  What I did was
I re-installed my OS. When I finished the installation I found out that I
can no longer access my PostgreSQL using pgAdmin III. 

What should I do to access my database and retrieve the important
records in it?   

Unfortunately, I have not created a backup of it.  Please help me.

Thanks in advance and God bless...

 

Roseller Romanos
STI-Pagadian
Gillamac Building, Pagadian City
Office Nos: (062) 2144-785
Home Nos: (062) 2144-695
Mobile Nos: 09203502636

 

 



Re: [GENERAL] Help me please...

2009-07-09 Thread John R Pierce

ja...@aers.ca wrote:


if you used the default locations I believe it should be (this is from 
memory, mind) under c:\program files\postgres\version\data\


 


data is the folder you want.

 



First, verify the location of pgdata...

   sc qc pgsql-8.3

(I'm assuming this is 8.3, modify for other versions)
note the value of the -D parameter on the BINARY_PATH_NAME, like...

   BINARY_PATH_NAME   : D:\postgres\8.3\bin\pg_ctl.exe runservice 
-w -N pgsql-8.3 -D D:\postgres\8.3\data\


hence, mine is D:\postgres\8.3\data\

If you've reinstalled postgres from scratch, you'll likely have to do a 
bit of dinking around.


First,

   NET STOP pgsql-8.3

Now, MOVE the current data dir somewhere safe, and COPY /S your backup of 
the DATA directory to the active location.   Then, use the 'security' 
dialog in File Explorer, or the CACLS command line tool, to grant the 
'postgres' user full control over the data directory and all files in it.


command line version:

   cacls \path\to\data /t /e /c /g postgres:f

if this postgres user already existed from before, but the reinstalled 
postgres service is having problems starting, you may need to reset the 
service account password.  Pick a random, forgettable password.  I'm 
going to use shattUp373treHodhu (random generator output)...


   NET USER postgres shattUp373treHodhu
   SC CONFIG pgsql-8.3 password= shattUp373treHodhu

Upper case doesn't matter in the commands except for the password itself, 
but the spacing around password= is critical (none before the =, 
one space after).


then try

   net start pgsql-8.3

and with any luck, your data is all intact.

It's absolutely critical, if you've reinstalled postgres, that you install 
the same version as you used before.
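A quick way to see which major version an old data directory was created with is the PG_VERSION file at its top level. A minimal sketch (the directory below is simulated purely for illustration; on a real system, point at your actual data directory and do not write into it):

```shell
# The data directory records its major version in a plain-text PG_VERSION file.
# A real cluster writes this file at initdb time; here we simulate one.
PGDATA="${TMPDIR:-/tmp}/demo_pgdata"
mkdir -p "$PGDATA"
printf '8.3\n' > "$PGDATA/PG_VERSION"   # simulated; never write this into a real cluster
cat "$PGDATA/PG_VERSION"                # prints the major version, e.g. 8.3
```

Match the installer you download against whatever that file says before restoring.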






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help with installation please...

2009-07-02 Thread Craig Ringer
On Tue, 2009-06-30 at 17:58 +0200, Rosko C.A. wrote:
 Hi. I'm trying to install PostgreSQL 8.3 on my computer, but a window
 appears asking me for a password that I don't have... If I click the
 next button it doesn't continue... I don't know what to do. Yesterday
 I tried to install PokerTracker 3 (the latest version, because I
 already had another version and everything was working perfectly), and
 now I can't use this program because PokerTracker 3 also asks me for a
 password. Thanks.

Please read the documentation for the software you are using:

http://www.pokertracker.com/products/PT3/docs/PokerTracker3_Manual_Uninstall_Reinstall_PostgreSQL.pdf

If that doesn't help, consider contacting them for support.
 
-- 
Craig Ringer




Re: [GENERAL] Help with installation please...

2009-06-30 Thread Alan McKay
What OS are you running?
What exactly is the window saying?  If you could take a snapshot of it
and upload it to a photo site and send the URL to the list, that might
be helpful.
Most OSes allow you to snapshot the active window with ALT-PRT-SCRN.
Then you can use the paste option in your favorite photo editing
software (e.g. Gnu GIMP, which is free) to paste it, then save it to a
file.

On Tue, Jun 30, 2009 at 11:58 AM, Rosko C.A.roskoamati...@hotmail.com wrote:
 Hi. I'm trying to install PostgreSQL 8.3 on my computer, but a window appears
 asking me for a password that I don't have... If I click the next button it
 doesn't continue... I don't know what to do. Yesterday I tried to install
 PokerTracker 3 (the latest version, because I already had another version and
 everything was working perfectly), and now I can't use this program because
 PokerTracker 3 also asks me for a password. Thanks.
 



-- 
“Don't eat anything you've ever seen advertised on TV”
 - Michael Pollan, author of In Defense of Food



Re: [GENERAL] help with data recovery from injected UPDATE

2009-06-24 Thread Gus Gutoski
Success, of sorts.  I was able to retrieve 90% of the corrupted data by
dumping the heap file.  Many thanks to those who replied with helpful
suggestions.

If you're interested in detail then read on.  Otherwise, don't bother.

The data was still in the table -- I could see it using a hex editor.
This surprised me, as autovacuum was on at the time of corruption.
Perhaps vacuum didn't bother reclaiming storage space because the
database is relatively small and low-traffic.

The attempt at point-in-time-recovery via transaction logs was doomed
to failure, as I do not have a file system backup from before the
corruption.  Still, I tried Merlin's trick with pg_resetxlog to no
avail.

I tried using the pg_filedump utility to dump the heap file, but it
wasn't what I needed.  I later discovered a souped-up utility called
pg_dumpdata:
http://blogs.sun.com/avalon/entry/recovering_postgres_data
While this utility still didn't provide everything I needed, it was a
sufficient starting point.
(It's written for postgres 8.2, whereas I'm running 8.1 -- it
segfaulted when I first ran it on my heap file.)

I sifted through the postgres source tree looking for the code that
reads/writes the heap files, but I couldn't make head or tail of
anything.  In the end, it was easier to reverse engineer the format
for user data and use the pg_dumpdata source as a base to get me to
the items in the heap files.  The reason that I couldn't get 100% of
the lost data is that the heap tuple header that points to the user
data sometimes landed me at a random point in the middle of the item,
rather than at the beginning.  At this point I gave up trying to get
the last 10% of the data -- I had run out of time and patience.

Having partially learned my lesson, I've set up a utility to run
pg_dump each day.  After I've taken a break, I'll look into a
reasonable set-up for file system backups with point-in-time recovery.
But really, what are the chances anything like this will ever happen
again? ;-)
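For anyone wiring up the same kind of nightly job, a minimal sketch (database name, paths, and the one-week retention window are all assumptions, not from this thread):

```shell
# Nightly logical backup: one gzipped pg_dump per day, keep a week's worth.
BACKUP_DIR="${TMPDIR:-/tmp}/pg_backups"
mkdir -p "$BACKUP_DIR"
STAMP=$(date +%Y-%m-%d)                      # e.g. 2009-06-23
OUTFILE="$BACKUP_DIR/mydb-$STAMP.sql.gz"
echo "$OUTFILE"                              # the file tonight's dump would go to
# The cron entry itself would run something like:
#   pg_dump mydb | gzip > "$OUTFILE"
#   find "$BACKUP_DIR" -name 'mydb-*.sql.gz' -mtime +7 -delete
```

A dump taken this way restores with plain psql, which is exactly the property that was missing above.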

-Gus

On Mon, Jun 15, 2009 at 9:02 AM, Merlin Moncuremmonc...@gmail.com wrote:
 On Sun, Jun 14, 2009 at 10:32 AM, Gus
 Gutoskishared.entanglem...@gmail.com wrote:
 Merlin Moncure wrote:
 postgresql >= 8.1 supports pitr archiving.  you can
 do continuous backups and restore the database to just before the bad
 data.

 I tried using point-in-time-recovery to restore the state of the
 database immediately before the corruption.  It didn't work, but it
 was quite a show.  Here's the story.

 yes, I'm sorry... you misunderstood my suggestion.  the database
 supports continuous *archiving* from which a recovery can be made.  No
 archives, no recovery :-).  Here is what I'd do if I were in your shoes:

 From a copy of your filesystem backup, set up the database to run and
 attempt pg_resetxlog before starting it up.  Log in and see if your
 data is there...if it is, you hit the jackpot...if not...the next step
 is to determine if the data is actually _in_ the table.  There are a
 couple of ways to do this... tinkering around with transaction
 visibility is one...simply dumping the heap file for the table and
 inspecting it is another.

 merlin



Re: [GENERAL] help with data recovery from injected UPDATE

2009-06-23 Thread Merlin Moncure
On Tue, Jun 23, 2009 at 2:05 PM, Gus
Gutoskishared.entanglem...@gmail.com wrote:
 Success, of sorts.  I was able to retrieve 90% of the corrupted data by
 dumping the heap file.  Many thanks to those who replied with helpful
 suggestions.

 If you're interested in detail then read on.  Otherwise, don't bother.

 The data was still in the table -- I could see it using a hex editor.
 This surprised me, as autovacuum was on at the time of corruption.
 Perhaps vacuum didn't bother reclaiming storage space because the
 database is relatively small and low-traffic.

 The attempt at point-in-time-recovery via transaction logs was doomed
 to failure, as I do not have a file system backup from before the
 corruption.  Still, I tried Merlin's trick with pg_resetxlog to no
 avail.

 I tried using the pg_filedump utility to dump the heap file, but it
 wasn't what I needed.  I later discovered a souped-up utility called
 pg_dumpdata:
 http://blogs.sun.com/avalon/entry/recovering_postgres_data
 While this utility still didn't provide everything I needed, it was a
 sufficient starting point.
 (It's written for postgres 8.2, whereas I'm running 8.1 -- it
 segfaulted when I first ran it on my heap file.)

 I sifted through the postgres source tree looking for the code that
 reads/writes the heap files, but I couldn't make head or tail of
 anything.  In the end, it was easier to reverse engineer the format
 for user data and use the pg_dumpdata source as a base to get me to
 the items in the heap files.  The reason that I couldn't get 100% of
 the lost data is that the heap tuple header that points to the user
 data sometimes landed me at a random point in the middle of the item,
 rather than at the beginning.  At this point I gave up trying to get
 the last 10% of the data -- I had run out of time and patience.

 Having partially learned my lesson, I've set up a utility to run
 pg_dump each day.  After I've taken a break, I'll look into a
 reasonable set-up for file system backups with point-in-time recovery.
 But really, what are the chances anything like this will ever happen
 again? ;-)

Regular scheduled pg_dump is often enough :-)

merlin



Re: [GENERAL] help with data recovery from injected UPDATE

2009-06-15 Thread Merlin Moncure
On Sun, Jun 14, 2009 at 10:32 AM, Gus
Gutoskishared.entanglem...@gmail.com wrote:
 Merlin Moncure wrote:
 postgresql >= 8.1 supports pitr archiving.  you can
 do continuous backups and restore the database to just before the bad
 data.

 I tried using point-in-time-recovery to restore the state of the
 database immediately before the corruption.  It didn't work, but it
 was quite a show.  Here's the story.

yes, I'm sorry... you misunderstood my suggestion.  the database
supports continuous *archiving* from which a recovery can be made.  No
archives, no recovery :-).  Here is what I'd do if I were in your shoes:

From a copy of your filesystem backup, set up the database to run and
attempt pg_resetxlog before starting it up.  Log in and see if your
data is there...if it is, you hit the jackpot...if not...the next step
is to determine if the data is actually _in_ the table.  There are a
couple of ways to do this... tinkering around with transaction
visibility is one; simply dumping the heap file for the table and
inspecting it is another.
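The visibility-tinkering route starts from the hidden system columns, which can be selected explicitly. A sketch against the collections table that appears elsewhere in the thread (the user column list is illustrative):

```sql
-- xmin/xmax are the inserting/deleting transaction IDs for each row
-- version; ctid is its physical (block, item) address in the heap.
-- Dead row versions left behind by the bad UPDATE, if not yet vacuumed
-- away, show up with a nonzero xmax.
SELECT ctid, xmin, xmax, entry_date
FROM collections
ORDER BY ctid
LIMIT 20;
```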

merlin



Re: [GENERAL] help with data recovery from injected UPDATE

2009-06-14 Thread Gus Gutoski
Merlin Moncure wrote:
 postgresql >= 8.1 supports pitr archiving.  you can
 do continuous backups and restore the database to just before the bad
 data.

I tried using point-in-time-recovery to restore the state of the
database immediately before the corruption.  It didn't work, but it
was quite a show.  Here's the story.

After much wailing and gnashing of teeth, I got postmaster to execute
a recovery (so that recovery.conf was renamed to recovery.done).  But
the database was completely screwed after the recovery.  Here's an
example of the kind of output I saw while executing a simple SELECT
statement:

postgres=# SELECT entry_date,machine_id,coin FROM collections WHERE
entry_date::date > '2009-06-06' ORDER BY entry_date;
WARNING:  could not write block 32 of 1663/10793/2608
DETAIL:  Multiple failures --- write error may be permanent.
ERROR:  xlog flush request 0/4DC6CC88 is not satisfied --- flushed
only to 0/4DC06180
CONTEXT:  writing block 32 of relation 1663/10793/2608

Here's the recovery procedure I followed:
0. Shortly after the corruption on June 9, 2009, I shut down the
server and backed up the entire data directory.  The recovery
procedure described herein begins with this file system backup.
1. The most recent non-corrupted snapshot of the database is a pg_dump
from May 13, 2009.  (I don't have any file system backups from before
the corruption.)  I restored the database to this snapshot by
executing the commands from the May 13 pg_dump on the June 9 corrupted
data.
2.  I removed the files in the pg_xlog directory and replaced them
with the contents of pg_xlog from the corrupted file system backup
from June 9.
3.  I modified the sample recovery.conf file so as to replay all the
transactions right up until the point of corruption.  The hope was
that postmaster would somehow know to begin replaying transactions at
the appropriate point from the May 13 state.

I guess it's too much to ask postmaster to do a PITR from a pg_dump
backup, as opposed to a file system backup.  Bummer.

By the way, I can reliably get postmaster to hang during startup if I
manually create the pg_xlog\RECOVERYHISTORY and pg_xlog\RECOVERYXLOG
directories (even with correct permissions) before starting up the
server.  When I say hang, I mean that
(i) any attempt to connect is met with the response "FATAL: the
database system is starting up", and
(ii) "pg_ctl stop" cannot be used to shut down the server -- I have to
use "pg_ctl kill".

Anyway, I'm going to try implementing Tom's suggestion of writing a
program to modify the xmin/xmax values.  I expect this approach won't
work, as autovacuum was on at the time of corruption.  However, the
files in the data directory are quite large -- many times larger than
a pg_dump.  The database sees such a small amount of traffic that it's
possible that even vacuum decided not to bother reclaiming the unused
storage created by the corrupting transaction (?).  Here's hoping.

-Gus

On Thu, Jun 11, 2009 at 1:43 PM, Gus Gutoski wrote:
 Thanks for the replies.

 Tom Lane wrote:
 This being 8.1, if you haven't turned on autovacuum there is some chance
 of that.

 Unfortunately, autovacuum was on.  I don't recall ever turning it on,
 but this database is over two years old; it's possible that I blindly
 followed advice from pgAdmin or something way back when.

 Merlin Moncure wrote:
 does postgresql 8.1 have last_vacuum time in the pg_stat_all_tables
 column?

 I did not see a column called last_vacuum in the pg_stat_all_tables table.

 postgresql >= 8.1 supports pitr archiving.  you can
 do continuous backups and restore the database to just before the bad
 data.

 I'm learning about this now.  I'm cautiously optimistic, as my pg_xlog
 directory contains some files whose timestamp is near the time of the
 'incident'.

 By backup do you mean the contents of a pg_dump?  The most recent
 dump was two months ago.  I'm worried that my log files might not go
 far enough back in time to restore the table from the most recent
 dump.

 Both Tom's and Merlin's suggestions carry a significant learning
 curve.  I'll do what I can in the coming days and post to the list if
 anything noteworthy happens.

 plus, there is no way you are escaping the obligatory 'where are your
 backups?'.  :-).

 It's a classic story.  I'm volunteering about one day per month for
 this project, learning SQL as I go.  Priority was always given to the
 get it working tasks and never the make it safe tasks.  I had/have
 grandiose plans to rewrite the whole system properly after I graduate.
  Unfortunately, the inevitable corruption didn't wait that long.

 Cheers.

 -Gus
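(A footnote on the last_vacuum question quoted above: if memory serves, the per-table vacuum timestamps first appeared in the 8.2 statistics views, which would explain why 8.1 doesn't show them. On a newer server, a query along these lines lists them:)

```sql
-- last_vacuum / last_autovacuum exist in pg_stat_all_tables from 8.2 on;
-- they are absent in 8.1, which matches what was observed above.
SELECT relname, last_vacuum, last_autovacuum
FROM pg_stat_all_tables
WHERE schemaname = 'public'
ORDER BY relname;
```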




Re: [GENERAL] help with data recovery from injected UPDATE

2009-06-14 Thread Craig Ringer
On Sun, 2009-06-14 at 10:32 -0400, Gus Gutoski wrote:

 0. Shortly after the corruption on June 9, 2009, I shut down the
 server and backed up the entire data directory.  The recovery
 procedure described herein begins with this file system backup.
 1. The most recent non-corrupted snapshot of the database is a pg_dump
 from May 13, 2009.  (I don't have any file system backups from before
 the corruption.)  I restored the database to this snapshot by
 executing the commands from the May 13 pg_dump on the June 9 corrupted
 data.
 2.  I removed the files in the pg_xlog directory and replaced them
 with the contents of pg_xlog from the corrupted file system backup
 from June 9.

That really, REALLY won't work. It just doesn't work like that.

You're trying to use a block-level restore process (the transaction
logs) with a base backup that's at a much higher level, and isn't
block-for-block the same as the old database files. Additionally, you're
trying to do so over a known corrupt database.

The only thing that confuses me is how you convinced Pg to run recovery
using the xlog files you put in place. It should've refused, surely?

 I guess it's too much to ask postmaster to do a PITR from a pg_dump
 backup, as opposed to a file system backup.  Bummer.

Yep. No hope.

--
Craig Ringer



