[ADMIN] Postgres shared memory error

2012-04-28 Thread Kevin Kempter

All;

I just want to be sure that I'm not causing myself grief.  I have a KVM 
guest in the cloud that is supposed to have access to 32GB of RAM. When I do a 
top I only see 1GB of RAM; I've pinged the hosting provider. Maybe it only 
shows up as it's used?


Anyway, when I try to start Postgres I see this:

2012-04-28 12:00:33 EDT [6429]: [1-1] FATAL:  XX000: could not 
create shared memory segment: Cannot allocate memory
2012-04-28 12:00:33 EDT [6429]: [2-1] DETAIL:  Failed system call was 
shmget(key=5432001, size=7700914176, 03600).
2012-04-28 12:00:33 EDT [6429]: [3-1] HINT:  This error usually means 
that PostgreSQL's request for a shared memory segment exceeded available 
memory or swap space. To reduce the request size (currently 7700914176 
bytes), reduce PostgreSQL's shared_buffers parameter (currently 917504) 
and/or its max_connections parameter (currently 503).
The PostgreSQL documentation contains more information about 
shared memory configuration.
2012-04-28 12:00:33 EDT [6429]: [4-1] LOCATION:  
InternalIpcMemoryCreate, pg_shmem.c:178



Which means I should bump up shmmax like this:

# sysctl -w kernel.shmmax=7700914176


and add it to /etc/sysctl.conf:

# tail /etc/sysctl.conf
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296

#PostgreSQL
kernel.shmmax = 7700914176


I assume I only need to tweak kernel.shmmax; is that correct?
I'm also assuming that this is a KVM cloud host provider issue, i.e. it 
looks like I actually do not have 32GB of RAM.  Does anyone disagree with 
my conclusions?
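For reference, the relationship between the two settings can be sketched like this (the 7700914176-byte figure is taken from the FATAL hint above; kernel.shmall is counted in pages, so it is derived from the page size rather than set to a byte count):

```shell
# Shared memory request from the log, in bytes (from the HINT line).
SHMMAX=7700914176

# shmall is measured in pages; if shmall * page_size < shmmax, the
# effective limit is shmall, not shmmax.
PAGE_SIZE=$(getconf PAGE_SIZE)
SHMALL=$((SHMMAX / PAGE_SIZE))

echo "kernel.shmmax = $SHMMAX"
echo "kernel.shmall = $SHMALL"
# Apply with:  sysctl -w kernel.shmmax=$SHMMAX
```

With the stock kernel.shmall of 4294967296 pages shown above, shmall already covers the request, so raising only shmmax should be enough here.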


Thanks in advance





Re: [ADMIN] Postgres shared memory error

2012-04-28 Thread Kevin Kempter

On 04/28/2012 10:32 AM, Fernando Hevia wrote:



On Sat, Apr 28, 2012 at 13:10, Kevin Kempter 
cs_...@consistentstate.com wrote:


All;

I just want to be sure that I'm not causing myself grief.  I have
a KVM guest in the cloud that is supposed to have access to 32GB of RAM.
When I do a top I only see 1GB of RAM; I've pinged the hosting
provider. Maybe it only shows up as it's used?


What does the 'free' command show?

Anyway, when I try to start Postgres I see this:

2012-04-28 12:00:33 EDT [6429]: [1-1] FATAL:  XX000: could not
create shared memory segment: Cannot allocate memory
2012-04-28 12:00:33 EDT [6429]: [2-1] DETAIL:  Failed system call
was shmget(key=5432001, size=7700914176, 03600).
2012-04-28 12:00:33 EDT [6429]: [3-1] HINT:  This error usually
means that PostgreSQL's request for a shared memory segment
exceeded available memory or swap space. To reduce the request
size (currently 7700914176 bytes), reduce PostgreSQL's
shared_buffers parameter (currently 917504) and/or its
max_connections parameter (currently 503).
The PostgreSQL documentation contains more information
about shared memory configuration.
2012-04-28 12:00:33 EDT [6429]: [4-1] LOCATION: 
InternalIpcMemoryCreate, pg_shmem.c:178



Which means I should bump up shmmax like this:

# sysctl -w kernel.shmmax=7700914176


and add it to /etc/sysctl.conf:

# tail /etc/sysctl.conf
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296

#PostgreSQL
kernel.shmmax = 7700914176


I assume I only need to tweak kernel.shmmax, am I correct?


Correct.

I'm also assuming that this is a KVM cloud host provider issue,
i.e. it looks like I actually do not have 32GB of RAM.  Does anyone
disagree with my conclusions?


You haven't provided evidence of how much RAM your system sees. Running 
free -m will show the total memory the system has and how much is being used.
Regards.




Free command:

# free
             total       used       free     shared    buffers     cached
Mem:       1020856     946804      74052          0      61280     796004
-/+ buffers/cache:      89520     931336
Swap:            0          0          0







Re: [ADMIN] psql: could not connect to server: No route to host

2012-04-26 Thread Kevin Kempter

On 04/25/2012 10:50 PM, Tom Lane wrote:

Kevin Kempter cs_...@consistentstate.com writes:

I can scp files between the servers

Really?


$ psql -h 192.168.1.125
psql: could not connect to server: No route to host

Because that is not a Postgres problem, that is a network connectivity
problem.  I'd bet that there's something wrong with the VM
configuration, such that the VM host isn't connecting 192.168.1.* in one
VM to 192.168.1.* in the other.

regards, tom lane


Scientific Linux turns on the firewall by default. It lets me scp via 
username & passwd but wouldn't allow the port 5432 TCP connection till I 
disabled it.  I like Scientific because they seem to be a large team and, 
as a project, to have better long-term viability than other RH-based 
projects, but a couple of their defaults drive me nuts: like 
this one, and the default install of yum-autoupdate.
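For anyone hitting the same default: rather than disabling the firewall entirely, a narrower fix on RHEL-style systems is to open just the Postgres port. A sketch of the rule (the exact file layout varies by release):

```
# /etc/sysconfig/iptables -- add before the final REJECT rule
-A INPUT -m state --state NEW -m tcp -p tcp --dport 5432 -j ACCEPT
```

followed by restarting the iptables service.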





--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


[ADMIN] psql: could not connect to server: No route to host

2012-04-25 Thread Kevin Kempter

Hi all;

I've set up PostgreSQL to talk across servers thousands of times... not 
sure what I'm doing wrong; maybe I'm just over-tired.



I have 2 Scientific Linux VMs running in VMware Workstation

server 1 - 192.168.1.125
server 2 - 192.168.1.127

I've disabled selinux on both servers

I've installed PostgreSQL 9.1.3 on both servers

I have listen_addresses on server 1 set to '*'

postgres=# show listen_addresses ;
 listen_addresses
------------------
 *
(1 row)


I've added this to the pg_hba.conf file of server 1:
host    all             all             192.168.1.0/24          trust


I can scp files between the servers

Then from server 2 I run this:

$ psql -h 192.168.1.125
psql: could not connect to server: No route to host
Is the server running on host 192.168.1.125 and accepting
TCP/IP connections on port 5432?


Am I losing my mind?


thanks in advance

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


[ADMIN] dropping constraints

2010-09-01 Thread Kevin Kempter
Hi All;

we have a few NOT NULL constraints we want to drop on very large tables. 
Unfortunately the drops are taking hours. Is there a way to drop a NOT NULL 
constraint via the system catalogs?
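For context, dropping the constraint is a catalog-only change (table and column names here are hypothetical):

```sql
-- Dropping NOT NULL just clears pg_attribute.attnotnull; it does not
-- rewrite the table, so it should be near-instant once the ALTER TABLE
-- actually obtains its lock on the table.
ALTER TABLE big_table ALTER COLUMN some_col DROP NOT NULL;
```

If this still takes hours, the time is almost certainly spent waiting for the lock, not doing work.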

Thanks in advance

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] dropping constraints

2010-09-01 Thread Kevin Kempter
 Kevin Kempter cs_...@consistentstate.com writes:
  we have a few not null constraints we want to drop on very large tables.
  Unfortunately the drop's are taking hours. Is there a way to drop a not
  null constraint via the system catalogs?

 Dropping a NOT NULL constraint is a trivial operation.  The problem is
 probably that you are blocked waiting to get a lock on the table.
 Look for idle-in-transaction sessions that are holding locks on the
 target table.

   regards, tom lane

thanks
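Tom's suggestion can be turned into a quick check along these lines (a sketch; on the 8.x servers of this era the relevant pg_stat_activity columns are procpid and current_query):

```sql
-- Sessions idle inside an open transaction; any of these can hold locks
-- that block an ALTER TABLE indefinitely.
SELECT procpid, usename, query_start, current_query
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction';
```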

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


[ADMIN] Postgresql 8.3 on OSX hangs

2010-08-24 Thread Kevin Kempter
Hi all;

we have a client experiencing the following scenario:


- Postgresql 8.3.10 plus postgis 1.5.1

- Both databases stop responding to queries and several of the backend 
processes show as being in an uninterruptible state (state Us according to 
a ps aux)

- The client killed the processes, did a pg_dumpall and a restore, only to find 
the condition happen again the next day

- The client read every file reported by lsof as being accessed by a backend 
process without issues... thus it seems to NOT be a disk I/O issue.



Thoughts?



-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


[ADMIN] Error: missing chunk number for toast value

2010-08-17 Thread Kevin Kempter
Hi All;

We're seeing this error in our 8.3.9 database:

# select * from comments;
ERROR:  missing chunk number 0 for toast value 1274882 in pg_toast_49234


I tried this:

# reindex table pg_toast.pg_toast_49234;
REINDEX


but the error persists.

Thoughts?

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


[ADMIN] Partitioned table system info

2010-08-05 Thread Kevin Kempter
Hi All;

can anyone point me to a system catalog query that will show me the base table 
name and all partition table names for any table that has children 
(inheritance)?
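A sketch against the standard catalogs; pg_inherits links each child (inhrelid) to its parent (inhparent):

```sql
SELECT p.relname AS base_table, c.relname AS partition
FROM pg_inherits i
JOIN pg_class p ON p.oid = i.inhparent
JOIN pg_class c ON c.oid = i.inhrelid
ORDER BY 1, 2;
```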


Thanks in advance

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


[ADMIN] insert OS command results into a table on a remote system ?

2010-06-09 Thread Kevin Kempter
Hi All;

I want to gather OS-command-based stats such as the results of commands 
like 'df', 'iostat', 'free', etc. and put the results into a temp table that I 
can then select from. 

I need to do this all from the db since I won't have OS/shell access on the 
server, only 'psql -h' access. Is this possible?  

Can I somehow leverage the '\!' meta command in psql to pull results of OS 
commands into a table?
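One client-side sketch: \! (like \copy) executes on the machine where psql runs, so command output can be captured there and loaded over the wire. The stats table and dbhost below are hypothetical:

```shell
# Turn 'df' output into tab-separated rows suitable for COPY.
# -P forces POSIX single-line-per-filesystem output.
df -P | tail -n +2 | awk -v OFS='\t' '{print $1, $2, $3, $4}' > /tmp/df_stats.tsv
head -1 /tmp/df_stats.tsv

# Load with psql's client-side \copy, which needs only 'psql -h' access:
#   psql -h dbhost -c "\\copy stats from '/tmp/df_stats.tsv'"
```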

Thanks in advance...

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


[ADMIN] default isolation level per user?

2010-06-03 Thread Kevin Kempter
Hi all;

is it possible to specify a default isolation level for a user/role?
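One way: role-level defaults can be attached with ALTER ROLE (role name hypothetical):

```sql
-- Applies to new sessions of this role only; existing sessions keep
-- their current setting, and SET TRANSACTION can still override it.
ALTER ROLE report_user SET default_transaction_isolation = 'serializable';
```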

Thanks in advance...

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


[ADMIN] pg_dump errors

2010-06-03 Thread Kevin Kempter
Hi all;

I'm seeing these errors when running a pg_dump of the postgres database:

Running: [pg_dump --schema-only postgres > postgres.ddl]
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  could not find hash function for 
hash operator 33639
pg_dump: The command was: SELECT a.attnum, a.attname, a.atttypmod, 
a.attstattarget, a.attstorage, t.typstorage, a.attnotnull, a.atthasdef, 
a.attisdropped, a.attlen, a.attalign, a.attislocal, 
pg_catalog.format_type(t.oid,a.atttypmod) AS atttypname FROM 
pg_catalog.pg_attribute a LEFT JOIN pg_catalog.pg_type t ON a.atttypid = 
t.oid WHERE a.attrelid = '34093'::pg_catalog.oid AND a.attnum > 
0::pg_catalog.int2 ORDER BY a.attrelid, a.attnum


Running: [pg_dump --data-only postgres > postgres.dat]
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  could not find hash function for 
hash operator 33639
pg_dump: The command was: SELECT a.attnum, a.attname, a.atttypmod, 
a.attstattarget, a.attstorage, t.typstorage, a.attnotnull, a.atthasdef, 
a.attisdropped, a.attlen, a.attalign, a.attislocal, 
pg_catalog.format_type(t.oid,a.atttypmod) AS atttypname FROM 
pg_catalog.pg_attribute a LEFT JOIN pg_catalog.pg_type t ON a.atttypid = 
t.oid WHERE a.attrelid = '34093'::pg_catalog.oid AND a.attnum > 
0::pg_catalog.int2 ORDER BY a.attrelid, a.attnum


Thoughts? (I'm on version 8.4.3)


thanks in advance

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] pg_dump errors

2010-06-03 Thread Kevin Kempter
On Thursday 03 June 2010 11:18, Tom Lane wrote:
 Kevin Kempter kev...@consistentstate.com writes:
  pg_dump: Error message from server: ERROR:  could not find hash function
  for hash operator 33639

 Bizarre ... that command really oughtn't be invoking any non-builtin
 operator, but the OID is too high for a builtin.  What do you get from
 select 33639::regoperator?

   regards, tom lane


postgres=# select 33639::regoperator;
  regoperator
----------------
 abc.=(oid,oid)
(1 row)

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] pg_dump errors

2010-06-03 Thread Kevin Kempter
On Thursday 03 June 2010 11:45, Tom Lane wrote:
 Kevin Kempter kev...@consistentstate.com writes:
  On Thursday 03 June 2010 11:18, Tom Lane wrote:
  Bizarre ... that command really oughtn't be invoking any non-builtin
  operator, but the OID is too high for a builtin.  What do you get from
  select 33639::regoperator?
 
  postgres=# select 33639::regoperator;
    regoperator
  ----------------
   abc.=(oid,oid)
  (1 row)

 So where did that come from, and why is it defined incorrectly?
 (Evidently it's marked oprcanhash but there is no associated hash
 opclass.)  I can hardly see a reason to define your own oid equality
 operator, much less a wrong one ...

   regards, tom lane


I don't know where it came from. I haven't intentionally defined any classes or 
anything like this. However, this is a dev instance, so maybe something went 
wrong at some point.

 

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Using Postgresql Replication

2010-02-19 Thread Kevin Kempter
On Friday 19 February 2010 06:27:10 Brad Nicholson wrote:
 On Fri, 2010-02-19 at 05:49 -0500, Manasi Save wrote:
  Hi All,
 
 
  I am very new to Postgres in that I have to setup replication. For
  Which I am very confused between Postgres's replication types.
 
  I want a sychronized but a master-slave replication.
 
  I have read about Slony-I replication method. But I am confused if I
  have to manually setup slony-I replication for all databases. As the
  interface which I have seen for slony-I setup, it has given
  replication tab under every database.
 
 With Slony, you must set up every database, table, and sequence you want
 to replicate yourself.
 
 
 Slony is an asynchronous replication engine, btw.
 
  If I have to manually define replication master and slave node for any
  database I am creating then it is not a feasible  option for me to use
  Slony-I.
 
  SLONY-I = Is it possible that all the databases getting created on
  Master Node will be replicated to specified Slave node.

Slony is database-specific, not cluster-specific, so you'll need to create a 
Slony replication cluster (which may or may not have multiple replication 
sets) for each database that you want to replicate.

It's pretty straightforward, but it is a manual setup and there are some 
downsides depending on your use case. For example, if all your write 
connections are hitting the master and you want to do a switch-over so you can 
bring the current master node down for maintenance, things can become overly 
complicated trying to coordinate the switchover commands for each 
cluster/replication set. 

The easy fix is to stop or pause application traffic during the switch; then it 
becomes a non-issue.

However, if this is not an option you could end up with a tough issue: say one 
db switches over quickly and another takes more time, at which point you have 
the master for one db on node2 and the other still on node1. Until the other 
db finishes its switch you have a scenario where writes for db1 are only 
allowed on node2 and writes for db2 are only allowed on node1.



 
 Not possible.
 
  So, I thought of PGCluster. But it comes with multi-master
  replication.
 
  PGCluster = Is it possible that I can configure Master-Slave
  replication in PGCluster.
 
 
  Please let me know if I am unclear in any point. Any input will be a
  great help.
 
 What do you need replication for?  DR?  Query offloading?  Custom schema
 or data transformations on a replica?
 
 Can you justify the cost of warm standby hardware?
 

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


[ADMIN] effective_io_concurrency cannot be changed error when starting 8.4.2 instance

2010-02-08 Thread Kevin Kempter
Hi All;

I get the below error when trying to start an 8.4.2 instance

Anyone know what might be causing this?

FATAL:  parameter effective_io_concurrency cannot be changed


Thanks in advance

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


[ADMIN] query to list all user tables and related constraints

2010-01-21 Thread Kevin Kempter
Hi All;

anyone have a query that will list all user tables and their related 
constraints?

(i.e. for each table, list any FKs for it, 
  and any tables/columns that reference the table)
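A starting point (a sketch; add schema filters as needed):

```sql
-- contype 'f' = foreign key; conrelid is the referencing table and
-- confrelid the referenced one, so this lists both directions at once.
SELECT conrelid::regclass  AS referencing_table,
       confrelid::regclass AS referenced_table,
       conname,
       pg_get_constraintdef(oid) AS definition
FROM pg_constraint
WHERE contype = 'f'
ORDER BY 1;
```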


Thanks in advance.


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


[ADMIN] last insert/update for a table

2010-01-20 Thread Kevin Kempter
Hi all;

is there a system catalog query or function that will show me the last time an 
insert or update occurred on a specified table?
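As far as I know there is no catalog column that records the last write time, so one workaround (a sketch; the table, trigger, and function names are hypothetical) is a trigger that stamps a small audit table:

```sql
CREATE TABLE last_write (tablename text PRIMARY KEY, ts timestamptz);

CREATE OR REPLACE FUNCTION note_write() RETURNS trigger AS $$
BEGIN
  -- TG_TABLE_NAME is the table that fired the trigger.
  UPDATE last_write SET ts = now() WHERE tablename = TG_TABLE_NAME;
  IF NOT FOUND THEN
    INSERT INTO last_write VALUES (TG_TABLE_NAME, now());
  END IF;
  RETURN NULL;  -- AFTER trigger: return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER stamp_write AFTER INSERT OR UPDATE ON mytable
  FOR EACH STATEMENT EXECUTE PROCEDURE note_write();
```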


Thanks in advance

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] dupes for PK and other UNIQUE indexes

2009-11-01 Thread Kevin Kempter
On Sunday 01 November 2009 16:25:58 Tom Lane wrote:
 Kevin Kempter kev...@consistentstate.com writes:
  On Sunday 01 November 2009 15:50:20 you wrote:
  Kevin Kempter kev...@consistentstate.com writes:
  Anyone have any Idea how this could happen?
 
  Corrupt indexes.  What PG version are we talking about?
 
  version 8.3.8
 
 Well, there aren't any known causes of btree index corruption in 8.3.x,
 but that doesn't mean you haven't found something new.  Does this
 installation have any nonstandard history?  (I'm wondering about things
 like it having been a WAL standby at one time, or part of a Slony
 setup, or something like that.)  Can you provide any other data that
 might help somebody reproduce the problem?
 
 BTW, please keep your responses cc'd to the list.
 
   regards, tom lane
 

I'll do some digging and see what I can find. They are using Ruby on Rails, 
that's all I have so far.



-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


[ADMIN] select actual data size for a relation?

2009-10-22 Thread Kevin Kempter
Hi all;

Anyone know how to select / calculate the actual data size for a table or 
index?  

NOT the disk usage as in:
 pg_class.relpages -- shows disk usage including bloat
pg_relation_size() -- same as pg_class.relpages but in bytes
or pg_total_relation_size()  -- total disk usage inclusive of bloat and 
indexes

I want only the data size. I tried grabbing the sum of all the avg_width cols 
from pg_stats and multiplying by pg_class.reltuples, but this seems way 
too big.
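Another estimate (a sketch; it scans the whole table, so it can be slow on big relations) sums the size of each row value directly:

```sql
-- pg_column_size on the whole-row value measures the row's data
-- (post-compression), excluding per-row header and page overhead.
SELECT pg_size_pretty(sum(pg_column_size(t.*))::bigint) FROM mytable t;
```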

Thoughts?



Thanks in advance

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


[ADMIN] drop table error

2009-10-08 Thread Kevin Kempter
Hi all;

anyone know what would cause errors like this (postgres version 8.1.15):



drop table bigtab1; 
   
ERROR:  could not find tuple for constraint 373617



drop sequence bigtab2_id_seq;   
   
ERROR:  cache lookup failed for relation 371698  

Thanks in advance...


[ADMIN] munin plugin for bloat

2009-10-07 Thread Kevin Kempter
Hi all;

I have an SQL query I like to run that shows the top 10 bloated tables. I 
want to turn it into a munin graph; however, the issue is that the list of 
tables (the top 10) is not constant, which seems to indicate that I need 
something more dynamic than what's available as a standard munin plugin.

Has anyone out there already tackled this?

Thanks in advance...

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


[ADMIN] sql to show tablespace for indexes

2009-09-10 Thread Kevin Kempter
Hi all;

How / where can I find which tablespace the indexes live in?
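The catalogs carry this directly; a sketch (a reltablespace of 0 means the index lives in the database's default tablespace):

```sql
SELECT c.relname AS index_name,
       COALESCE(t.spcname, '(database default)') AS tablespace
FROM pg_class c
LEFT JOIN pg_tablespace t ON t.oid = c.reltablespace
WHERE c.relkind = 'i'
ORDER BY 1;
```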

Thanks in advance 

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


[ADMIN] cannot read block errors !

2009-09-09 Thread Kevin Kempter
Hi All;

we're getting these errors on our system today:

Sep  9 00:53:45 ux-rqs04-atl2 ri_load[5846]: Caught Exception! 
Shutting down: ERROR:  could not read block 703769 of relation  
37698/17627/38565: Input/output error CONTEXT:  SQL statement SELECT 
1 FROM ONLY our_schema.big_table x WHERE id OPERATOR(pg_catalog.=) $1 
FOR SHARE OF x

Sep  9 15:34:36 ux-rqs04-atl2 ri_load[23717]: Caught Exception! Shutting 
down: PANIC:  right sibling's left-link doesn't match: block 1377627 
links to 3120580 instead of expected 1736900 in index no_dups CONTEXT:  COPY 
big_table, line 1563: 157945230x/1739441607/false/p_1378267744=0 
\copy: unexpected response (7)

Anyone have any ideas what's gone wrong and how to fix it?


Thanks in advance



-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] cannot read block errors !

2009-09-09 Thread Kevin Kempter
On Wednesday 09 September 2009 12:22:29 Tom Lane wrote:
 Kevin Kempter kev...@consistentstate.com writes:
  we're getting these errors on our system today:
 
  Sep  9 00:53:45 ux-rqs04-atl2 ri_load[5846]: Caught Exception!
  Shutting down: ERROR:  could not read block 703769 of relation
  37698/17627/38565: Input/output error CONTEXT:  SQL statement SELECT
  1 FROM ONLY our_schema.big_table x WHERE id OPERATOR(pg_catalog.=)
  $1 FOR SHARE OF x
 
  Sep  9 15:34:36 ux-rqs04-atl2 ri_load[23717]: Caught Exception! Shutting
  down: PANIC:  right sibling's left-link doesn't match: block 1377627
  links to 3120580 instead of expected 1736900 in index no_dups CONTEXT: 
  COPY big_table, line 1563: 157945230   
  x/1739441607/false/p_1378267744=0 \copy: unexpected response (7)
 
  Anyone have any ideas what's gone wrong and how to fix it?

 Input/output error generally means you've got hardware problems.
 Have you looked into the kernel logs to see if there are device driver
 complaints?

 The other one indicates index corruption, which I might think suggested
 a Postgres bug if I hadn't seen the first one.  You can fix that index
 with REINDEX, but you'd better deal with the underlying issue first
 before it corrupts data that's not so easily fixed.

   regards, tom lane

Thanks; in fact it was a HW issue. We did a REINDEX and stopped the db so we 
could fix the HW issues.


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


[ADMIN] partitioning and set statistics

2009-09-08 Thread Kevin Kempter
If I want to set the statistics target for a column in a partitioned table to 
a higher value, say 250, do I need to alter each partition, or can I just 
alter the base table (and the partitions will inherit the change)?
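A hedged sketch: issue the ALTER on the base table, then check pg_attribute to see whether the children picked up the change; if not, alter each partition individually (table and column names hypothetical):

```sql
ALTER TABLE base_table ALTER COLUMN some_col SET STATISTICS 250;

-- attstattarget is -1 while a column still uses the default target.
SELECT c.relname, a.attstattarget
FROM pg_attribute a
JOIN pg_class c ON c.oid = a.attrelid
WHERE a.attname = 'some_col'
  AND c.relname LIKE 'base_table%';
```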


Thanks in advance

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] server disk space

2009-09-06 Thread Kevin Kempter
On Sunday 06 September 2009 00:05:04 Brian Modra wrote:
 Hi,
 I'm maintaining a fairly large online database, and am trying to free
 up disk space. Its got to 98% full.
 I am certain that the postgresql data files are responsible for more
 than 97% of this partition's usage.
 The WAL logs for example are stored elsewhere.

 The largest tables in this database are only inserted, not updated.
 There are about 6 inserts per second. Its all time-stamped, and I am
 deleting old rows.
 There are 5 such tables, each 3 times as large as the previous.

 On the 2 smallest tables, I have already done a create table ... (like
...), a re-insert of everything after a certain date, a vacuum
 analyze, and recreated the indexes. But they are relatively small, so
 no real gains.

 On the larger tables though, I have deleted old rows, and am now
 running a (plain) vacuum.
 The 3rd largest table's vacuum has completed. No space gain at all.

 The other two (largest) table's vacuums are still in progress (still
 running since last evening). I have shut down part of the service so
 that its no longer inserting data to the tables, but rather caching it
 for later insertion.

 I suspect I need to run vacuum full, and drop indexes. Then re-create
 the indexes...

 But is there something I'm missing, e.g. that although the database
 disk is 98% full, postgresql sees the database as having large blocks
 of free space that it can write into? A vacuum full is going to take
 an age, and I'm not sure if I can afford to have the database offline
 for that period...

 I will appreciate your help.
 Thanks
 Brian

Brian;

you may simply have too much data. Try the check_postgres script(s); you can 
get them here (http://bucardo.org/check_postgres/). Specifically, look at the 
bloat or dead space in your biggest tables.  You may need to run a 'VACUUM 
FULL' on those tables to reclaim disk space; a normal vacuum will not reclaim 
any disk space, it just makes the space in the table available for re-use by 
the database.


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] server disk space

2009-09-06 Thread Kevin Kempter
On Sunday 06 September 2009 10:28:30 you wrote:
 Thanks, I suspected that was the case. However, the plain vacuum on
 the largest table has been running for almost 24 hours now, despite
 the postgresql being idle (no rows being inserted or updated for 24
 hours). The vacuum full will probably take days... which is a problem.
 I can't really take the database offline over the whole weekend. Do
 you have any suggestions?
 I was just looking at pg_class to see how big the tables were, so I
 know which are the largest tables. This largest one has a really large
 pg_toast_4643492 index...

 (I'm using this to find out which are the largest relations:
 SELECT relname, reltuples, relpages FROM pg_class ORDER BY relpages DESC;)

 How do I force a reindex hitting just that pg_toast_oid relation?

 2009/9/6 Kevin Kempter kev...@consistentstate.com:
  On Sunday 06 September 2009 00:05:04 Brian Modra wrote:
  Hi,
  I'm maintaining a fairly large online database, and am trying to free
  up disk space. Its got to 98% full.
  I am certain that the postgresql data files are responsible for more
  than 97% of this partition's usage.
  The WAL logs for example are stored elsewhere.
 
  The largest tables in this database are only inserted, not updated.
  There are about 6 inserts per second. Its all time-stamped, and I am
  deleting old rows.
  There are 5 such tables, each 3 times as large as the previous.
 
  On the 2 smallest tables, I have already done a create table ... (like
  ...), a re-insert of everything after a certain date, a vacuum
  analyze, and recreated the indexes. But they are relatively small, so
  no real gains.
 
  On the larger tables though, I have deleted old rows, and am now
  running a (plain) vacuum.
  The 3rd largest table's vacuum has completed. No space gain at all.
 
  The other two (largest) table's vacuums are still in progress (still
  running since last evening). I have shut down part of the service so
  that its no longer inserting data to the tables, but rather caching it
  for later insertion.
 
  I suspect I need to run vacuum full, and drop indexes. Then re-create
  the indexes...
 
  But is there something I'm missing, e.g. that although the database
  disk is 98% full, postgresql sees the database as having large blocks
  of free space that it can write into? A vacuum full is going to take
  an age, and I'm not sure if I can afford to have the database offline
  for that period...
 
  I will appreciate your help.
  Thanks
  Brian
 
  Brian;
 
  you may simply have too much data, try the check-postgres script(s) you
  can get it here (http://bucardo.org/check_postgres/) , specifically look
  at the bloat or dead space in your biggest tables.  You may need to run a
  'VACUUM FULL' on those tables to reclaim disk space, a normal vacuum will
  not reclaim any disk space, just make the space in the table available
  for re-use by the database.


You could try a dump/restore of this table. However, I'm not sure this would 
actually be faster; I'd suggest you let the vacuum run as long as you can.

Likewise, if the table is that big you should probably look at partitioning it. 
This will help you not only with vacuum/space management but with performance 
as well.


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


[ADMIN] pg_tblspc links question

2009-08-29 Thread Kevin Kempter
Hi all;

I'm moving a db install from one server to another. the source box had 2 
tablespaces and I see this in the pg_tblspc dir:

lrwxrwxrwx 1 postgres postgres 20 2009-08-10 19:23 17261 -> 
/san_B/pwreport_3000
lrwxrwxrwx 1 postgres postgres 20 2009-08-11 17:20 17625 -> 
/san_A/pwreport_1000
lrwxrwxrwx 1 postgres postgres 20 2009-08-11 17:20 17626 -> 
/san_B/pwreport_3000

I wonder, why do the links point to /san_B/pwreport_3000 as opposed to being 
linked to the actual /san_B/pwreport_3000/17xxx dir?

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


[ADMIN] help tuning query

2009-08-19 Thread Kevin Kempter
Hi all;

we've been fighting this query for a few days now. We bumped up the statistics 
target for the a.id, c.url_hits_id and b.id columns below to 250 and ran 
an analyze on the relevant tables.  We killed the query after 8 hrs.

Any suggestions?



$ psql -ef expl.sql pwreport
explain
select
    a.id,
    ident_id,
    time,
    customer_name,
    extract('day' from timezone(e.name, to_timestamp(a.time))) as day,
    category_id
from
    pwreport.url_hits a left outer join
    pwreport.url_hits_category_jt c on (a.id = c.url_hits_id),
    pwreport.ident b,
    pwreport.timezone e
where
    a.ident_id = b.id
    and b.timezone_id = e.id
    and time >= extract ('epoch' from timestamp '2009-08-12')
    and time < extract ('epoch' from timestamp '2009-08-13')
    and direction = 'REQUEST'
;

                                   QUERY PLAN
--------------------------------------------------------------------------------
 Merge Right Join  (cost=47528508.61..180424544.59 rows=10409251 width=53)
   Merge Cond: (c.url_hits_id = a.id)
   ->  Index Scan using mt_url_hits_category_jt_url_hits_id_index on
       url_hits_category_jt c  (cost=0.00..122162596.63 rows=4189283233 width=8)
   ->  Sort  (cost=47528508.61..47536931.63 rows=3369210 width=49)
         Sort Key: a.id
         ->  Hash Join  (cost=2565.00..47163219.21 rows=3369210 width=49)
               Hash Cond: (b.timezone_id = e.id)
               ->  Hash Join  (cost=2553.49..47116881.07 rows=3369210 width=37)
                     Hash Cond: (a.ident_id = b.id)
                     ->  Seq Scan on url_hits a  (cost=0.00..47051154.89 rows=3369210 width=12)
                           Filter: ((direction = 'REQUEST'::proxy_direction_enum) AND ((time)::double precision >= 1250035200::double precision) AND ((time)::double precision < 1250121600::double precision))
                     ->  Hash  (cost=2020.44..2020.44 rows=42644 width=29)
                           ->  Seq Scan on ident b  (cost=0.00..2020.44 rows=42644 width=29)
               ->  Hash  (cost=6.78..6.78 rows=378 width=20)
                     ->  Seq Scan on timezone e  (cost=0.00..6.78 rows=378 width=20)
(15 rows)
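As a side note, the filter constants in the plan can be sanity-checked against the intended date range. A quick sketch, assuming the time column stores UTC epoch seconds and the boundaries are midnight UTC:

```python
import calendar

# The query filters on epoch seconds; these should match the plan's
# constants (1250035200 and 1250121600), assuming midnight-UTC boundaries.
lower = calendar.timegm((2009, 8, 12, 0, 0, 0))  # 2009-08-12 00:00:00 UTC
upper = calendar.timegm((2009, 8, 13, 0, 0, 0))  # 2009-08-13 00:00:00 UTC

print(lower, upper)            # 1250035200 1250121600
assert upper - lower == 86400  # exactly one day of seconds
```

The constants do line up, so the filter itself is not the problem; the cost is dominated by the seq scan on url_hits and the huge index scan on the join table.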




[ADMIN] vacuum question

2009-08-18 Thread Kevin Kempter
Hi all;

we have a large table that gets a lot of churn throughout the day.

performance has dropped off a cliff.  A vacuum verbose on the table showed us 
this:
  

INFO: action_rollup_notifier: found 0 removable, 34391214 nonremovable row 
versions in 152175 pages
DETAIL: 22424476 dead row versions cannot be removed yet.
There were 0 unused item pointers.
2 pages contain useful free space.
0 pages are entirely empty.
CPU 0.03s/2.58u sec elapsed 4.89 sec.
VACUUM

We're running at isolation level 1 (autocommit), using serialized transactions 
and a persistent db connection.

Anyone have any suggestions as to why these rows cannot be removed yet? As far 
as we can see these rows should be 'frozen' at this point. Can we force the 
issue somehow?

Thanks in advance




Re: [ADMIN] vacuum question

2009-08-18 Thread Kevin Kempter
On Tuesday 18 August 2009 13:37:12 Tom Lane wrote:
 Kevin Kempter kev...@consistentstate.com writes:
  INFO: action_rollup_notifier: found 0 removable, 34391214 nonremovable
  row versions in 152175 pages
  DETAIL: 22424476 dead row versions cannot be removed yet.
 
  Anyone have any suggestions per why these rows cannot be removed yet?

 You've got an open transaction that's older than the one that deleted
 those rows (or at least started before the latter committed).
 pg_stat_activity might be helpful in fingering the culprit.

   regards, tom lane

Does the use of serialized transactions affect this in any way?




[ADMIN] vacuum full questions

2009-08-18 Thread Kevin Kempter
I'm running a vacuum full on a table due to excessive updates/deletes. It's 
been running for more than an hour (the table is about 3 GB in size).

2 questions:

1) Is there any way to gain some visibility into the progress of the vacuum 
full?

2) Can I safely kill the vacuum full and do a dump, drop table, restore 
instead?





[ADMIN] replay WAL segments without a base backup ?

2009-08-11 Thread Kevin Kempter
Hi all;

Due to operator error at one of my client sites we're trying to restore a 
deleted db.

We did a restore with a pg_dump that went fine, but the dump is 3 weeks old. They 
do have WAL segments, and they claim that the WAL segments are exactly in line 
with the needed transactions from a timing perspective. However, no base 
backup with pg_start_backup() and pg_stop_backup() was ever run.


Is there any way to trick postgres into thinking that it needs to play forward 
these WALs? Would it work to simply place them in a dir, create a 
recovery.conf file pointing to that dir, and restart the cluster?


Thanks in advance...
 



Re: [ADMIN] replay WAL segments without a base backup ?

2009-08-11 Thread Kevin Kempter
On Tuesday 11 August 2009 09:49:14 Tom Lane wrote:
 Kevin Kempter kev...@consistentstate.com writes:
  due to operator error at one of my client sites we're trying to restore a
  deleted db.
 
  We did a restore with a pgdump that went fine but the dump is 3weeks old.
  they do have WAL segments and they claim that the WAL segments are
  exactly in line with the needed transactions - from a timing perspective.
   However no base backup with pg_start_backup() and pg_stop_backup() was
  ever run.
 
  Is there any way to trick postgres into thinking that it needs to play
  fwd these WAL's?  Would it work to simply place them in a dir, create a
  recovery.conf file pointing to that dir and restart the cluster?

 No, there is exactly 0 hope here.  You would need a filesystem-level
 backup to work from.  A restore from a pg_dump is not going to reproduce
 any of the low-level details (such as OID assignments or tuple
 placement) that WAL replay needs to have match.

   regards, tom lane


What if we could do a pg_start_backup on the running cluster (the one that's 3 
weeks behind), tar up the file system, do a pg_stop_backup, and walk through the 
recovery steps?

Would there be a way to force postgres to recognize the WAL segments that 
were created earlier (which according to the client include all tx from 3 
weeks ago through today)?





[ADMIN] insert a backslash into a bytea field?

2009-08-11 Thread Kevin Kempter
Hi all;

we want to insert a single backslash into a bytea column (running postgres v 
8.3.5)

this works to insert 2 backslashes (but we only want one):
insert into test_bytea_tab (id, bytea_test_col) 
values(99, 'testdata'::bytea)

However none of these work:

insert into test_bytea_tab (id, bytea_test_col) 
values(99, '\\testdata'::bytea)

insert into test_bytea_tab (id, bytea_test_col) 
values(99, E'\\testdata'::bytea)


FWIW backslash_quote is set to 'safe_encoding'

Thoughts?


Thanks in advance
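For what it's worth, on 8.3 the escaping happens in two layers - the string-literal parser, then the bytea input parser - so a single stored backslash needs four in an E'' literal. A minimal sketch of the bytea escape-format layer (my own illustration, not PostgreSQL's code):

```python
def bytea_escape(data: bytes) -> str:
    """Encode bytes in bytea escape format: backslash doubles, non-printables -> \\ooo."""
    out = []
    for b in data:
        if b == 0x5C:            # backslash
            out.append("\\\\")
        elif 0x20 <= b <= 0x7E:  # printable ASCII passes through
            out.append(chr(b))
        else:
            out.append("\\%03o" % b)
    return "".join(out)

# One stored backslash becomes two at the bytea layer...
payload = bytea_escape(b"\\testdata")
# ...and the E'' string-literal layer doubles each again, giving four:
literal = "E'" + payload.replace("\\", "\\\\") + "'::bytea"
print(literal)  # E'\\\\testdata'::bytea
```

Using a parameterized query (e.g. via a driver's bind-parameter support) sidesteps the literal-escaping layer entirely, which is usually the less error-prone route.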



[ADMIN] force varchar column to be lowercase

2009-07-27 Thread Kevin Kempter
Can I use a check constraint, or something other than a function to force a 
column to be lowercase only?


Thx in advance
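Yes - a CHECK constraint comparing the column to lower(column) does this without a custom function or trigger. A sketch of the predicate (the table and column names here are made up):

```python
# The predicate PostgreSQL would evaluate per row for DDL like:
#   ALTER TABLE my_table
#       ADD CONSTRAINT my_col_lower_chk CHECK (my_col = lower(my_col));
def passes_lowercase_check(value: str) -> bool:
    # A value passes if lowercasing it changes nothing.
    return value == value.lower()

assert passes_lowercase_check("abc123")   # lowercase letters and digits pass
assert not passes_lowercase_check("Abc")  # mixed case is rejected
```

Note this rejects offending inserts rather than silently fixing them; a trigger calling lower() would be needed to coerce values instead.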



[ADMIN] pg_stat_activity reporting of VACUUM in 8.1

2009-07-23 Thread Kevin Kempter
Hi all;

looking at a system running 8.1.15
via a select from pg_stat_activity I see a row where the vcurrent_query = 
'VACUUM'

I see also that autovacuum is on. Would autovacuum be running a vacuum on the 
entire db? I didn't think autovacuum did that. If not, how do I know in 
version 8.1 what table is being vacuumed? I thought that even in 8.1 the 
current_query column in pg_stat_activity would list the full sql being run as 
opposed to 'VACUUM'.


Thanks in advance






Re: [ADMIN] Cannot connect to postgresql

2009-07-21 Thread Kevin Kempter
On Tuesday 21 July 2009 10:35:46 Carol Walter wrote:
 Hello,

 I emailed to the group last week about being able to access one of my
 postgres databases from other machines on my network.  I thought the
 problem was a installation of 8.2.10 that I had been testing an
 upgrade on had clobbered some of the files that the system needed.  I
 had been testing implementing enabling of ssl, so I thought that I had
 gotten a partial installation that was causing my password
 authentication failure.  I did a package add on another machine of the
 same package that was used to load our original 8.2.3 with everything
 the same.  I still can't connect.  I think the pg_hba.conf file is
 fine.  Where else should I be looking?

 Carol Walter
It sounds like either a network issue or a pg_hba.conf issue.

If you send me the following, I'll have a look:

- pg_hba.conf file from the target machine (the one you're trying to connect 
to)

- the i.p.'s/hostnames of both boxes

- the command you're using to try and connect





[ADMIN] convert an Insert Into... select from... to use COPY ?

2009-07-16 Thread Kevin Kempter
Hi All;


I have an sql stmt that runs:

insert into x 
( a.col1, a.col2, a.col3,b.col1,b.col2) 
select blah,blah,blah
...
from giant_table a,
 even_bigger_table b
where.

I wonder, could I convert this to use COPY into the table (x) instead of the 
select stmt?



Thanks in advance



[ADMIN] how much memory (work_mem) is a query using?

2009-06-29 Thread Kevin Kempter
Hi all;

Is it possible to see how much work_mem memory a particular session is using?






Re: [ADMIN] vacuum tx id wraparound issues

2009-06-25 Thread Kevin Kempter
On Thursday 25 June 2009 10:08:16 Greg Stark wrote:
 On Thu, Jun 25, 2009 at 4:39 PM, Kevin Kempter kev...@consistentstate.com wrote:
  we're inserting an average of 70-100 rows per second into these tables.

 Hm. And every row is a separate transaction? That's still only a few
 hundred rows per second. About 25 million per day. You should have
 about 4 days before it hits
 autovacuum_freeze_max_age-vacuum_freeze_min_age.

 Are you using subtransactions heavily (savepoints in sql or exception
 clauses in plpgsql)? That could add a multiplier or two to the number
 of transaction ids used up.

 You can raise autovacuum_freeze_max_age to, say, 800 million to get
 four times longer before the autovacuum kicks in. You can also lower
 vacuum_freeze_min_age to maybe 25 million. That will give you about
 775 million transaction ids, almost 8x what you have now, which will
 hopefully give you about a week before autovacuum tries to freeze the
 table



 --
 greg
 http://mit.edu/~gsstark/resume.pdf


Thanks Greg;  I'll give these settings a shot.
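The arithmetic behind Greg's "almost 8x" estimate can be checked directly. A sketch, assuming the stock defaults of autovacuum_freeze_max_age = 200 million and vacuum_freeze_min_age = 100 million as the baseline:

```python
# Suggested settings from the reply above
autovacuum_freeze_max_age = 800_000_000
vacuum_freeze_min_age = 25_000_000

# Transaction-id headroom before autovacuum starts freezing the table
headroom = autovacuum_freeze_max_age - vacuum_freeze_min_age
print(headroom)  # 775000000

# Baseline headroom with assumed defaults (200M max_age, 100M min_age)
default_headroom = 200_000_000 - 100_000_000
print(headroom / default_headroom)  # 7.75, i.e. "almost 8x"
```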





[ADMIN] Connection Issue

2009-06-11 Thread Kevin Kempter
Hi all;

I cannot connect to a postgres (8.3) cluster locally (via ssh onto the 
server).

The postgresql.conf file has this:

# - Connection Settings -

listen_addresses = '*'
#listen_addresses = '0.0.0.0'   # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost', '*' = all
# (change requires restart)
port = 5432 # (change requires restart)
max_connections = 100   # (change requires restart)
# Note:  Increasing max_connections costs ~400 bytes of shared memory per
# connection slot, plus lock space (see max_locks_per_transaction).  You might
# also need to raise shared_buffers to support more connections.
#superuser_reserved_connections = 3 # (change requires restart)
unix_socket_directory = '/var/run/postgresql'   # (change requires 
restart)
#unix_socket_group = '' # (change requires restart)
#unix_socket_permissions = 0777 # begin with 0 to use octal notation
# (change requires restart)
#bonjour_name = ''  # defaults to the computer name
# (change requires restart)



The pg_hba.conf has this:
local   all postgres  trust
local   all postgres  ident sameuser

# TYPE  DATABASEUSERCIDR-ADDRESS  METHOD

# local is for Unix domain socket connections only
local   all all   ident sameuser
# IPv4 local connections:
hostall postgres 127.0.0.1/32  trust
hostall all 127.0.0.1/32  md5
hostssl all all 0.0.0.0/0  md5
host all all 0.0.0.0/0  md5
# IPv6 local connections:
hostall all ::1/128   md5



I can ssh to a different host and connect via psql -h host

but if I'm on the postgres local server I get this:

$ psql -l
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?


Thoughts?

Thanks in advance




[ADMIN] HW recommendatons

2009-06-10 Thread Kevin Kempter
So, if price were no object and you had a system that needed to average 
10K+ inserts per second, what HW would you recommend? (Obviously there are major 
architectural issues here.)

Server?
(Dell PowerEdge 905? , HP Proliant DL700?)

Direct Attached Storage:
(Dell MD3000? HP MSA70?)

RAID/Storage array cards?

Other?

Thanks in advance




[ADMIN] Security question UNIX Sockets vs CIDR style access

2009-06-01 Thread Kevin Kempter
Hi All;

I'm looking for thoughts/feedback on the use of UNIX sockets vs. standard CIDR-
style access (i.e., ident, MD5, etc.) to a Postgres cluster. What are the 
pros/cons, which is more secure and why, etc.



Thanks in advance



Re: [ADMIN] User table porting

2009-03-10 Thread Kevin Kempter
On Tuesday 10 March 2009 11:13:54 Carol Walter wrote:
 I want to recreate users with their passwords from one database
 cluster on another database cluster.  What would be the best way for
 me to do this?

 Thanks,
 Carol

from the old server run this (where new_db_cluster is the new server):

 pg_dumpall -g | psql -h new_db_cluster 


[ADMIN] triggers on system tables ?

2009-01-23 Thread Kevin Kempter
Hi All;

Is it possible to add triggers to system catalog and/or statistics collector 
tables ?


Thanks in advance 



[ADMIN] check to see when tables were last vacummed

2009-01-19 Thread Kevin Kempter
Hi all;

Anyone know where to look to see when tables were last vacuumed via 
autovacuum?
Also, can I run this check only in 8.3+, or did it also work in 8.2 and 8.1?


Thanks in advance



[ADMIN] pg_buffercache - usagecount info

2009-01-19 Thread Kevin Kempter
Hi All;

Can someone point me in the direction of more info on the usagecount value in 
the pg_buffercache view, or send me a description beyond "Page LRU count"?

There's no README in my postgresql-8.3.5/contrib/pg_buffercache dir, and I've 
found nothing more in the online manual.


Thanks in advance





[ADMIN] query to find out # of buffer hits vs. # of disk reads ?

2009-01-14 Thread Kevin Kempter
Hi all;

Anyone know what system catalogs I need to query to find out the # of disk reads 
and the # of memory / buffer cache reads?

Thanks in advance



[ADMIN] vacuum monitoring question

2008-12-24 Thread Kevin Kempter
Hi All;
I'm working on a vacuum monitor script to alert us when tables start to grow 
out of control due to dead space. Initially I excluded the system catalog 
schemas (pg_catalog, pg_toast, etc.); however, I wonder if maybe I should monitor 
these as well. Plus, I'm looking for any input as to specifically what to look 
for - I've calculated a 'fill factor' which shows the average page fill as a 
percentage, but I wonder what other metrics may help monitor possible problem 
tables efficiently.


Thoughts ?


Thanks in advance


Re: [ADMIN] vacuum monitoring question

2008-12-24 Thread Kevin Kempter
On Wednesday 24 December 2008 09:56:29 Scott Marlowe wrote:
 On Wed, Dec 24, 2008 at 9:31 AM, Kevin Kempter

 kev...@consistentstate.com wrote:
  Hi All;
 
  I'm working on a vacuum monitor script to alert us when tables start to
  grow out of control per dead space. Initially I excluded the system
  catalog schemas (pg_catalog, pg_toast, etc) however I wonder if maybe I
  should monitor these as well. PLus I'm looking for any input as to
  specifically what to look for - I've calculated a 'fill factor' which
  shows the average page fill as a percentage but I wonder what other
  metrics may help monitor possible problem tables effeciently..

 It's a good idea to keep an eye on system catalogs, especially if you
 have a fair bit of DDL going on in your usage.  I'd take a look at
 what the check_postgresql.pl script does.  No need to reproduce all
 that work on your own.

Thanks for the feedback - where can I find the check_postgresql.pl script ?




[ADMIN] Logging autovacuum

2008-12-01 Thread Kevin Kempter


Hi All;

I wonder, is there a way to force autovacuum events to be logged to the 
postgres log?


Thanks in advance



[ADMIN] 'invalid command \N' during restore from pg_dump

2008-10-17 Thread kevin kempter

Hi all;

I'm trying to run a restore from a pg_dump and I get loads of these:

invalid command \N


I see lots of \N char's in the data for some of the tables (NULL's ?)

Thoughts?

Thanks in advance



Re: [ADMIN] 'invalid command \N' during restore from pg_dump

2008-10-17 Thread kevin kempter


On Oct 17, 2008, at 3:20 PM, Jan-Peter Seifert wrote:


Hello,

kevin kempter wrote:


I'm trying to run a restore from a pg_dump and I get loads of these:

invalid command \N


I see lots of \N char's in the data for some of the tables (NULL's ?)


Are you using pgAdmin's SQL-editor for restoring by any chance? It
throws errors - especially if the \N (NULL) is at the beginning of the
line. Use psql for restoring or try with activating the option to use
INSERT instead of COPY commands for creating dumps with pg_dump.

Peter



No, I'm doing it at the command line on a Redhat box (postgres version  
8.2.5)






[ADMIN] Select all table column names for a specified tablename (per the system catalogs)

2008-09-22 Thread kevin kempter

Hi List;

I want to pull the list of column names from the system catalogs for a  
specified table. I only want column names; I want to exclude any index  
names or other non-column-name rows. I got this far (see below);  
however, this query produces additional rows with attnames like  
tableoid, cmax, xmax, ctid, etc.


select attname from pg_attribute where attrelid = (select oid from  
pg_class where relname = 'my_tablename');


Any thoughts ?

Thanks in advance...
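The extra rows are system columns, which have negative attnum values in pg_attribute; the usual fix is to also filter on attnum > 0 (and skip dropped columns via attisdropped). A sketch of that filtering logic against invented sample rows:

```python
# Mock pg_attribute rows: (attname, attnum, attisdropped) -- invented data.
# System columns carry attnum <= 0; dropped columns have attisdropped = true.
atts = [
    ("tableoid", -7, False), ("cmax", -6, False), ("xmax", -5, False),
    ("cmin", -4, False), ("xmin", -3, False), ("ctid", -1, False),
    ("id", 1, False), ("name", 2, False), ("old_col", 3, True),
]

# SQL equivalent (a sketch; 'my_tablename' is a placeholder):
#   select attname from pg_attribute
#   where attrelid = 'my_tablename'::regclass
#     and attnum > 0 and not attisdropped;
user_columns = [a for a, num, dropped in atts if num > 0 and not dropped]
print(user_columns)  # ['id', 'name']
```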





[ADMIN] Errors installing contrib

2008-09-19 Thread kevin kempter

Hi All;

I've gone into contrib/pg_buffercache and run this:

make (as postgres)
make install (as root)
psql -f pg_buffercache.sql my_db

when I run the psql command against my database I get this:

BEGIN
SET
psql:pg_buffercache.sql:9: ERROR:  could not access file "$libdir/pg_buffercache": No such file or directory
psql:pg_buffercache.sql:15: ERROR:  current transaction is aborted,  
commands ignored until end of transaction block
psql:pg_buffercache.sql:18: ERROR:  current transaction is aborted,  
commands ignored until end of transaction block
psql:pg_buffercache.sql:19: ERROR:  current transaction is aborted,  
commands ignored until end of transaction block

ROLLBACK



Any thoughts ?


Thanks in advance...




Re: [ADMIN] Errors installing contrib

2008-09-19 Thread kevin kempter

ok so I have more info.


a pg_config --libdir shows me /usr/lib64

However, the make install installs pg_buffercache in /pgmoveable/lib/postgresql.



Thoughts ?



On Sep 19, 2008, at 3:51 PM, kevin kempter wrote:


Hi All;

I've gone into contribs/pg_buffercache and run this:

make (as postgres)
make install (as root)
psql -f pg_buffercache.sql my_db

when I run the psql command against my database I get this:

BEGIN
SET
psql:pg_buffercache.sql:9: ERROR:  could not access file $libdir/ 
pg_buffercache: No such file or directory
psql:pg_buffercache.sql:15: ERROR:  current transaction is aborted,  
commands ignored until end of transaction block
psql:pg_buffercache.sql:18: ERROR:  current transaction is aborted,  
commands ignored until end of transaction block
psql:pg_buffercache.sql:19: ERROR:  current transaction is aborted,  
commands ignored until end of transaction block

ROLLBACK



Any thoughts ?


Thanks in advance...






Re: [ADMIN] Errors installing contrib

2008-09-19 Thread kevin kempter


On Sep 19, 2008, at 4:18 PM, Jeff Frost wrote:


kevin kempter wrote:


ok so I have more info.


a pg_config --libdir shows me /usr/lib64

however the make install installs the pg_buffercache in /pgmoveable/ 
lib/postgresql





How was postgresql installed on the machine in the first place?   
RPM? DEB?  Just guessing by the pg_config output, I'd say postgresql  
was installed on the server via some package management and you also  
have a separately compiled source tree?  If you installed from  
package management, there usually is a contrib package which  
contains pg_buffercache as well as the other normal contrib modules.


Looks like they have both. there are postgres binaries in /usr/bin  
tied to 8.1.4 RPM's. There are also binaries in /pg825/bin tied to a  
postgres 8.2.5 source install.


In either case I get the same error

I go to /usr/share/pgsql/contrib installed by the postgresql-contrib-8.1.4-1.1 
rpm and run /usr/bin/psql -f pg_buffercache.sql dbname


I also tried going to the src tree in the contrib/pg_buffercache dir  
and running this:

export PATH=/pg825/bin
make (as postgres)
make install (as root)
/pg825/bin/psql -f pg_buffercache.sql dbname


also
/usr/bin/pg_config --libdir shows me /usr/lib64

/pg825/bin/pg_config --libdir shows me /disk1/pg825/lib


I even tried placing the pg_buffercache.so.* files from the src build  
in the lib dirs:

/usr/lib64
/disk1/pg825/lib
/disk1/pg825/lib/postgresql

and I still got the same error


Thoughts ?





Thoughts ?



On Sep 19, 2008, at 3:51 PM, kevin kempter wrote:


Hi All;

I've gone into contribs/pg_buffercache and run this:

make (as postgres)
make install (as root)
psql -f pg_buffercache.sql my_db

when I run the psql command against my database I get this:

BEGIN
SET
psql:pg_buffercache.sql:9: ERROR:  could not access file $libdir/ 
pg_buffercache: No such file or directory
psql:pg_buffercache.sql:15: ERROR:  current transaction is  
aborted, commands ignored until end of transaction block
psql:pg_buffercache.sql:18: ERROR:  current transaction is  
aborted, commands ignored until end of transaction block
psql:pg_buffercache.sql:19: ERROR:  current transaction is  
aborted, commands ignored until end of transaction block

ROLLBACK



Any thoughts ?


Thanks in advance...






--
Jeff Frost, Owner   [EMAIL PROTECTED]
Frost Consulting, LLC   http://www.frostconsultingllc.com/
Phone: 916-647-6411 FAX: 916-405-4032




[ADMIN] Setting Effective Cache Size

2008-09-16 Thread kevin kempter

Hi All;

Is there an easy way to determine the actual OS disk cache size, or at  
least a set of guidelines based on the OS and disk subsystem type?



Thanks in advance...


/Kevin



[ADMIN] shared_buffers setting

2008-09-15 Thread kevin kempter

Hi All;

I'm unclear from the docs what the effect is of specifying a shared_buffers  
value without a kB or an MB at the end.

For example, if I specify shared_buffers as:

shared_buffers = 5000

would this equate to 5000 8K pages, 5000 kB, or 5000 bytes?



I usually specify the buffers in MB but I want to be sure I'm tuning  
an existing install properly - the previous DBA has used the  
convention above.



Thanks in advance

/Kevin




Re: [ADMIN] shared_buffers setting

2008-09-15 Thread kevin kempter

Thanks !


On Sep 15, 2008, at 7:10 PM, Tom Lane wrote:


kevin kempter [EMAIL PROTECTED] writes:

I'm unclear per the docs what the effect is of specifying a shared
buffers value without a kB or an MB at the end.
For example if I specify shared buffers as:
shared_buffers = 5000
would this equate to 5000 8K pages, or 5000K, 5000bytes?


Pages.  When in doubt, look at the units column of pg_settings.

regards, tom lane
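So with the default 8 kB block size, the bare number works out as follows (a quick sketch; BLCKSZ is 8192 unless the build changed it):

```python
BLCKSZ = 8192   # default PostgreSQL page size in bytes
pages = 5000    # shared_buffers = 5000 with no unit means 5000 pages

total_bytes = pages * BLCKSZ
print(total_bytes)                        # 40960000
print(round(total_bytes / 1024 ** 2, 1))  # roughly 39.1 MB
```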






[ADMIN] exporting/importing BLOB's (lo_* style)

2008-09-12 Thread kevin kempter

Hi All;

I want to dump a set of BLOB's from one db/server and import to another.

I assume I cannot do a pg_dump unless I also dump related  
pg_largeobject rows for the referenced OID


I assume I'll need to export via distinct lo_export commands and use  
lo_import to insert into the second database server.


Is there a way to do a bulk transfer or export/import of lo_* style  
large objects ?



Thanks in advance



[ADMIN] Triggers BLOB's

2008-09-12 Thread kevin kempter

Hi All;

Can I put an update/insert/delete trigger on a BLOB (lo_* style)  
column, It looks like the trigger should actually live on the data  
column of the pg_largeobject table to be effective. Is this correct ?


Thanks in advance...




[ADMIN] Dupes inserted that violate the PK

2008-07-08 Thread kevin kempter

Hi All;

Sorry if this is a duplicate post; my first post from yesterday never  
showed up.



we have a table that has a PK as follows:

CREATE TABLE btx (
   id integer NOT NULL,
   url character varying(256) NOT NULL,
   password character varying(50),
btx_counter  integer,
...
   size double precision
);


CREATE SEQUENCE btx_id_seq
   INCREMENT BY 1
   NO MAXVALUE
   NO MINVALUE
   CACHE 1;


ALTER TABLE btx ALTER COLUMN id SET DEFAULT  
nextval('btx_id_seq'::regclass);




ALTER TABLE ONLY btx
   ADD CONSTRAINT btx_pk PRIMARY KEY (id);



We've run into a scenario where 2 inserted rows both got the same id  
value


# select id, url, btx_counter from btx where url = 'test1' and  
btx_counter = 449;

   id   |  url  | btx_counter
--------+-------+-------------
 194848 | test1 |         449
(1 row)

# select id, url, btx_counter from btx where url = 'test1' and  
btx_counter = 450;

   id   |  url  | btx_counter
--------+-------+-------------
 194848 | test1 |         450
(1 row)

We suspected the index was corrupt but a reindex fails for obvious  
reasons.




Anyone have any Ideas why this would be possible ?  Any debugging  
advice or steps to verify that my db is not corrupted would be much  
appreciated..


/Kevin
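To scope how widespread the damage is, a full duplicate scan helps (the SQL equivalent would be a GROUP BY id HAVING count(*) > 1 over the table). A sketch of the same check; the rows here are invented for illustration:

```python
from collections import Counter

# Sample (id, url, btx_counter) rows as pulled from the table -- invented data
rows = [
    (194848, "test1", 449),
    (194848, "test1", 450),   # same id twice: violates the PK
    (194849, "test2", 451),
]

# Count occurrences of each id and keep those appearing more than once
id_counts = Counter(r[0] for r in rows)
dupes = [i for i, n in id_counts.items() if n > 1]
print(dupes)  # [194848]
```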



Re: [ADMIN] Dupes inserted that violate the PK

2008-07-08 Thread kevin kempter

8.2.9

I have a hunch what went wrong. We tested a warm standby failover  
while the 'master' was still running - our warm standby includes a  
process that grabs the final unarchived WAL segment from the master  
pg_xlog directory. I have a hunch that this is where the disconnect  
happened; I think the rule needs to be that we only execute a failover  
based on bringing the master down, as opposed to simply creating the  
trigger file. I'm trying to test this theory out now - I'll update  
the list with my findings.




/Kevin


On Jul 8, 2008, at 10:52 AM, Alvaro Herrera wrote:


kevin kempter wrote:


We suspected the index was corrupt but a reindex fails for obvious
reasons.


What version is this?  We're investigating a similar problem on  
8.1.11.




--
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support





[ADMIN] strange drop errors

2008-07-01 Thread kevin kempter

Hi List;

in trying to drop an index I get this:


# drop index sl_pf_fact_tmp11_ep_idx11;
ERROR:  could not open relation with OID 3666815


I even dropped the associated table and still get this error. Likewise  
we cannot create an index with the same name.


Thoughts ?




[ADMIN] pg_tables query issue

2008-06-30 Thread kevin kempter

Hi List;

I've found that if I run this select (below) I get a table returned  
named 'dma' even though I specified like 'dm_%':



select tablename from pg_tables where tablename like 'dm_%';


        tablename
-------------------------
 dm_service_provider_dim
 dma


Am I missing something or is this a bug ?

Thanks in advance..

/Kevin








Re: [ADMIN] pg_tables query issue

2008-06-30 Thread kevin kempter

OK. Thanks for the info.


On Jun 30, 2008, at 5:48 PM, Stephan Szabo wrote:




I've found that if I run this select (below) I get a table returned
named 'dma' een though I specified like 'dm_%' :


Underscore is a special character for LIKE, matching any single character,
so I don't think that's wrong.

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin





[ADMIN] select current UTC time ?

2008-06-05 Thread kevin kempter

Hi List;

How could I select the current UTC time? now() cast as a timestamp  
at UTC is wrong because it takes now() and says: if it were now() time in  
UTC, what would it be here (in my timezone)? - and returns that. I want to  
return 'what time is it now in UTC?'


Thanks in advance...

/Kevin




Re: [ADMIN] select current UTC time ? (SOLVED)

2008-06-05 Thread kevin kempter


On Jun 5, 2008, at 1:01 PM, kevin kempter wrote:


Hi List;

How could I select the current UTC time ?  now() cast as a timestamp  
at UTC is wrong because it takes now() and says: if it were now()  
time UTC what would it be here (timezone) and returns that. I want  
to return 'what time is it now in UTC ?'


Thanks in advance...

/Kevin





Ignore this post; the first query gives me the right answer [as does:  
select timezone('UTC', now())] - neurons firing slowly today.
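The same distinction exists outside SQL, for comparison: a reading in local wall-clock time versus the same instant expressed in UTC. A sketch:

```python
from datetime import datetime, timezone

# Equivalent of SQL timezone('UTC', now()): the current instant,
# expressed in UTC as an offset-aware value.
utc_now = datetime.now(timezone.utc)

# An aware UTC timestamp carries a zero offset, unlike a naive local reading
print(utc_now.utcoffset())  # 0:00:00
```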






[ADMIN] postgres, syslog and freeBSD

2008-05-29 Thread kevin kempter

Hi list;

I'm trying to get postgres and syslog to get along, with no luck. I'm  
running FreeBSD 7 and postgres 8.3.1.


Here's my postgres setup

log_destination = 'syslog'
#logging_collector = off
syslog_facility = 'LOCAL0'
syslog_ident = 'postgres'
silent_mode = on
log_min_duration_statement = 0


Here's my /etc/syslog.conf file:


security.*  /var/log/security
auth.info;authpriv.info /var/log/auth.log
mail.info   /var/log/maillog
lpr.info/var/log/lpd-errs
ftp.info/var/log/xferlog
cron.*  /var/log/cron
*.=debug/var/log/debug.log
*.emerg *
# uncomment this to log all writes to /dev/console to /var/log/ 
console.log

#console.info   /var/log/console.log
# uncomment this to enable logging of all log messages to /var/log/ 
all.log

# touch /var/log/all.log and chmod it to mode 600 before it will work
#*.*/var/log/all.log
# uncomment this to enable logging to a remote loghost named loghost
#*.*@loghost
# uncomment these if you're running inn
# news.crit /var/log/news/news.crit
# news.err  /var/log/news/news.err
# news.notice   /var/log/news/news.notice
!startslip
*.* /var/log/slip.log
!ppp
*.* /var/log/ppp.log
LOCAL0.*/var/log/pgsql



I dont see any files show up in /var/log

what am I doing wrong?

Thanks in advance..

/Kevin




[ADMIN] insert a -1 into a bigint ?

2008-05-20 Thread kevin kempter

Hi All:


I have this table:

# \d seg_id_tmp7
Table public.seg_id_tmp7
 Column  |Type | Modifiers
-+-+---
 customer_srcid  | bigint  |
 show_srcid  | bigint  |
 show_name   | character varying   |
 season_srcid| bigint  |
 season_name | character varying   |
 episode_srcid   | bigint  |
 episode_name| character varying   |
 segment_type_id | bigint  |
 segment_type| character varying   |
 segment_srcid   | bigint  |
 segment_name| character varying   |
 create_dt   | timestamp without time zone |



I want to insert some data along with a -1 for two of the bigint  
columns. like this:


insert into seg_id_tmp7
(   customer_srcid,
show_srcid,
show_name,
season_srcid,
season_name,
episode_srcid,
episode_name,
segment_type_id,
segment_type,
segment_srcid,
segment_name,
create_dt  )
select
t8.customer_srcid,
t8.show_srcid,
t8.show_name,
t8.season_srcid,
t8.season_name,
t8.episode_srcid,
t8.episode_name
-1::bigint ,
'none',
-1::bigint ,
'none',
(select start_dt from load_start_date)
from
seg_id_tmp8 t8
;



I've also tried without the ::bigint cast on the above query.


I always get this error:

sql:load_sl_cd_segment_dim.sql:237: ERROR:  operator does not exist:  
character varying - bigint

LINE 22:  -1::bigint ,
  ^
HINT:  No operator matches the given name and argument type(s). You  
might need to add explicit type casts.






Thanks in advance for any help, advice, etc..


/Kevin





Re: [ADMIN] insert a -1 into a bigint ?

2008-05-20 Thread kevin kempter

I forgot to mention - I'm running version 8.3.1




On May 20, 2008, at 12:14 PM, kevin kempter wrote:


Hi All:


I have this table:

# \d seg_id_tmp7
   Table public.seg_id_tmp7
Column  |Type | Modifiers
-+-+---
customer_srcid  | bigint  |
show_srcid  | bigint  |
show_name   | character varying   |
season_srcid| bigint  |
season_name | character varying   |
episode_srcid   | bigint  |
episode_name| character varying   |
segment_type_id | bigint  |
segment_type| character varying   |
segment_srcid   | bigint  |
segment_name| character varying   |
create_dt   | timestamp without time zone |



I want to insert some data along with a -1 for two of the bigint  
columns. like this:


insert into seg_id_tmp7
(   customer_srcid,
show_srcid,
show_name,
season_srcid,
season_name,
episode_srcid,
episode_name,
segment_type_id,
segment_type,
segment_srcid,
segment_name,
create_dt  )
select
t8.customer_srcid,
t8.show_srcid,
t8.show_name,
t8.season_srcid,
t8.season_name,
t8.episode_srcid,
t8.episode_name
-1::bigint ,
'none',
-1::bigint ,
'none',
(select start_dt from load_start_date)
from
seg_id_tmp8 t8
;



I've also tried without the ::bigint cast on the above query.


I always get this error:

sql:load_sl_cd_segment_dim.sql:237: ERROR:  operator does not exist:  
character varying - bigint

LINE 22:  -1::bigint ,
 ^
HINT:  No operator matches the given name and argument type(s). You  
might need to add explicit type casts.






Thanks in advance for any help, advice, etc..


/Kevin








Re: [ADMIN] insert a -1 into a bigint ?

2008-05-20 Thread kevin kempter

wow, good catch.


Thanks for taking the time to look.




On May 20, 2008, at 1:22 PM, Shane Ambler wrote:


kevin kempter wrote:

Hi All:


I want to insert some data along with a -1 for two of the bigint  
columns. like this:

insert into seg_id_tmp7
(   customer_srcid,
   show_srcid,
   show_name,
   season_srcid,
   season_name,
   episode_srcid,
   episode_name,
   segment_type_id,
   segment_type,
   segment_srcid,
   segment_name,
   create_dt  )
select
   t8.customer_srcid,
   t8.show_srcid,
   t8.show_name,
   t8.season_srcid,
   t8.season_name,
   t8.episode_srcid,
   t8.episode_name
   -1::bigint ,
   'none',
   -1::bigint ,
   'none',
   (select start_dt from load_start_date)
from
   seg_id_tmp8 t8
;
I've also tried without the ::bigint cast on the above query.
I always get this error:
sql:load_sl_cd_segment_dim.sql:237: ERROR:  operator does not  
exist: character varying - bigint


Try a comma before the -1

You are trying to enter - t8.episode_name -1::bigint into the column

hence - character varying - bigint does not work
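
A minimal contrast showing the parse (table and column names as in the original query):

```sql
-- Without the comma, these two select-list lines parse as ONE expression:
--     t8.episode_name - 1::bigint   (varchar minus bigint -> no such operator)
-- With the comma restored, they are two separate select-list items:
select
    t8.episode_name,
    -1::bigint
from seg_id_tmp8 t8;
```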



--

Shane Ambler
pgSQL (at) Sheeky (dot) Biz

Get Sheeky @ http://Sheeky.Biz






[ADMIN] Global / cluster-wide functions

2008-05-16 Thread kevin kempter

Hi List;

Can I install a PL/pgSQL function so it's global to all databases  
within a cluster or do I need to install per database ?



Thanks in advance
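
For reference: functions live inside a single database; there is no cluster-wide install. One common workaround is to create the function in template1 so that databases created afterwards inherit it (existing databases still need it installed one by one). A sketch, assuming plpgsql is already installed in template1 and using a made-up function name:

```sql
-- Run while connected to template1 (e.g. psql template1);
-- every database created after this will contain the function:
create or replace function utc_now() returns timestamp as $$
begin
    return timezone('UTC', now());
end;
$$ language plpgsql;
```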



[ADMIN] Deadlocks ?

2008-05-13 Thread kevin kempter

Hi All;

we seem to be getting deadlocks quite a lot. We have a python process  
that does the following, and we keep getting deadlocks. Is this a real  
deadlock or maybe a wait condition? Any help in debugging / resolving  
this would be much appreciated.



Thanks in advance





cursor.execute("lock player_log_uuid in share update exclusive mode")



Traceback (most recent call last):
  File "StatsParser.py", line 740, in InsertData
    cursor.execute("lock player_log_uuid in share update exclusive mode")

ProgrammingError: deadlock detected
DETAIL:  Process 23098 waits for ShareUpdateExclusiveLock on  
relation 428126 of database 427376; blocked by process 23916.
Process 23916 waits for ShareLock on transaction 46802680; blocked  
by process 23098.

 

Re: [ADMIN] Deadlocks ?

2008-05-13 Thread kevin kempter


On May 13, 2008, at 5:00 PM, Tom Lane wrote:


Tino Schwarze [EMAIL PROTECTED] writes:

On Tue, May 13, 2008 at 01:18:24PM -0600, kevin kempter wrote:
ProgrammingError: deadlock detected
DETAIL:  Process 23098 waits for ShareUpdateExclusiveLock on
relation 428126 of database 427376; blocked by process 23916.
Process 23916 waits for ShareLock on transaction 46802680; blocked
by process 23098.



I've never figured out how to resolve the lock on transaction to
something understandable...


It's presumably waiting for a row lock that the other transaction
has got.  We don't keep enough information about row locks in memory
to give a better error message (because we could run out of memory
if we tried :-()

regards, tom lane



If that's true does it make sense to play with a timeout value (I  
assume the timeout is configurable somewhere in postgresql.conf) in an  
effort to tune for this ?




/Kevin











[ADMIN] warm standby issues

2008-04-27 Thread kevin kempter

Hi List;

I'm setting up a warm standby server on version 8.1.9

I setup a recovery.sh script to keep the standby cluster in recovery  
mode waiting for the next  WAL segment. everything works fine as long  
as the standby server is in recovery mode.  I see the recovery taking  
place in the postgres log of the standby server. I've set it up to  
exit the recovery.sh script if it see's a trigger file (to bring the  
standby online).


when I create the trigger file I see this:

copy /home/postgres/healthCareCoding/WAL/00010004  
pg_xlog/RECOVERYXLOG
`/home/postgres/healthCareCoding/WAL/00010004' -  
`pg_xlog/RECOVERYXLOG'

LOG:  restored log file 00010004 from archive
LOG:  could not open file pg_xlog/00010005 (log file  
0, segment 5): No such file or directory

LOG:  redo done at 0/4FFFE90
PANIC:  could not open file pg_xlog/00010004 (log  
file 0, segment 4): No such file or directory

LOG:  startup process (PID 9348) was terminated by signal 6
LOG:  aborting startup due to startup process failure
LOG:  logger shutting down


I see that the log 00010004 was restored, but then  
when it tags the redo as done it complains that it cannot find the  
same log.


If I re-start the standby cluster and select count(*) from my test  
table (the table I'm inserting data into to fill the logs) I get this:


postgres=# select count(*) from t1 ;
ERROR:  xlog flush request 0/113730C is not satisfied --- flushed only  
to 0/1135018

CONTEXT:  writing block 1500 of relation 1663/10819/16384
postgres=#



My recovery.sh code is below, thanks in advance for any help...

/Kevin




#!/bin/bash

#DELAY=40
DELAY=100
SEG_SIZE=16777216
TRIGGERED=0
TRIGGER_FILE=/home/postgres/healthCareCoding/trigger
WAL_PATH=/home/postgres/healthCareCoding/WAL
COPY_FLAG=0
RESTORE_FROM=${WAL_PATH}/$2
RESTORE_TO=$1


echo $1 | grep -i history
if [ $? -eq 0 ]
then
if [ -f $RESTORE_FROM ]
then
echo copy $RESTORE_FROM $RESTORE_TO
cp -v -i $RESTORE_FROM $RESTORE_TO
fi
exit
fi

while [ ! -f $TRIGGER_FILE  -a  $COPY_FLAG -eq 0 ]
do
usleep $DELAY;
if [ -f $RESTORE_FROM ]
then
fs=`ls -l $RESTORE_FROM`
set - $fs
echo size= [$5]
if [ $5 == $SEG_SIZE ]
then
COPY_FLAG=1
fi
fi
done

if [ -f $TRIGGER_FILE ]
then
exit
fi

if [ ! -f $TRIGGER_FILE ]
then
echo copy $RESTORE_FROM $RESTORE_TO
cp -v -i $RESTORE_FROM $RESTORE_TO
exit
fi





[ADMIN] weird network issue

2008-03-28 Thread kevin kempter

Hi List;

I have 2 Linux servers:
192.168.111.11
192.168.111.13

Both are running postgres v 8.2.6

I can ping the .11 box from .13 and vice versa
I can connect remotely from the .11 box to the .13 box but I cannot  
connect to the .11 box from the .13 box.


I can do this:

on the 192.168.111.11 box:

-bash-3.1$   psql -h 192.168.111.13
Welcome to psql 8.2.6, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
   \h for help with SQL commands
   \? for help with psql commands
   \g or terminate with semicolon to execute query
   \q to quit

postgres=#

However if I do this it fails:

on the 192.168.111.13 box:

-bash-3.1$   psql -h 192.168.111.11 postgres
psql: could not connect to server: No route to host
Is the server running on host 192.168.111.11 and accepting
TCP/IP connections on port 5432?



Both boxes have the same copy of the postgresql.conf file and the  
pg_hba.conf file.


Here's the listen address setting (on the 192.168.111.11 box) from the  
postgresql.conf file:

listen_addresses = '*'


I also checked (after a restart) that the listen address and port was  
in fact as I thought

on 192.168.111.11 :


Welcome to psql 8.2.6, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
   \h for help with SQL commands
   \? for help with psql commands
   \g or terminate with semicolon to execute query
   \q to quit

postgres=# show listen_addresses;
 listen_addresses
--
 *
(1 row)

postgres=# show port
;
 port
--
 5432
(1 row)

postgres=#


Here's the current pg_hba.conf file on 192.168.111.11 :



# TYPE  DATABASEUSERCIDR-ADDRESS  METHOD

# local is for Unix domain socket connections only
local   all all   ident sameuser
# IPv4 local connections:
hostall all 127.0.0.1/32  ident sameuser
# IPv6 local connections:
hostall all ::1/128   ident sameuser

#DRW.  This should be tightened up once the db instances are figured out
hostall all 192.168.111.0/24  trust



I'm stumped..

Anyone have any thoughts ?

Thanks in advance.




Re: [ADMIN] weird network issue

2008-03-28 Thread kevin kempter
Not sure. I'm not so well versed in the firewall/networking areas. I  
can however do an scp pull from both machines :


scp a file from 192.168.111.13 while logged onto 192.168.111.11
and
scp a file from 192.168.111.11 while logged onto 192.168.111.13


Can you point me where to look for firewall/iptables/SE issues?


Thanks in advance





On Mar 28, 2008, at 10:45 AM, Jonathan Nalley wrote:

are you running any kind of firewall/iptables/SELinux  where the  
settings are perhaps not the same on the two machines?



From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
] On Behalf Of kevin kempter

Sent: Friday, March 28, 2008 12:31
To: pgsql-admin@postgresql.org
Subject: [ADMIN] weird network issue

Hi LIst;

I have 2 Linux servers:
192.168.111.11
192.168.111.13

Both are running postgres v 8.2.6

I can ping the .11 box from .13 and vice versa
I can connect remotely from the .11 box to the .13 box but I cannot  
connect to the .11 box from the .13 box.


I can do this:

on the 192.168.111.11 box:

-bash-3.1$   psql -h 192.168.111.13
Welcome to psql 8.2.6, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
  \h for help with SQL commands
  \? for help with psql commands
  \g or terminate with semicolon to execute query
  \q to quit

postgres=#

However if I do this it fails:

on the 192.168.111.13 box:

-bash-3.1$   psql -h 192.168.111.11 postgres
psql: could not connect to server: No route to host
   Is the server running on host 192.168.111.11 and accepting
   TCP/IP connections on port 5432?



Both boxes have the same copy of the postgresql.conf file and the  
pg_hba.conf file.


Here's the listen address setting (on the 192.168.111.11 box) from  
the postgresql.conf file:

listen_addresses = '*'


I also checked (after a restart) that the listen address and port  
was in fact as I thought

on 192.168.111.11 :


Welcome to psql 8.2.6, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
  \h for help with SQL commands
  \? for help with psql commands
  \g or terminate with semicolon to execute query
  \q to quit

postgres=# show listen_addresses;
listen_addresses
--
*
(1 row)

postgres=# show port
;
port
--
5432
(1 row)

postgres=#


Here's the current pg_hba.conf file on 192.168.111.11 :



# TYPE  DATABASEUSERCIDR-ADDRESS  METHOD

# local is for Unix domain socket connections only
local   all all   ident sameuser
# IPv4 local connections:
hostall all 127.0.0.1/32  ident sameuser
# IPv6 local connections:
hostall all ::1/128   ident sameuser

#DRW.  This should be tighted up once the db instances are figured out
hostall all 192.168.111.0/24  trust



I'm stumped..

Anyone have any thoughts ?

Thanks in advance.









Re: [ADMIN] weird network issue

2008-03-28 Thread kevin kempter
I'm not sure how to get the version; however, given the listing  
of /boot/grub/grub.conf below, it's obviously CentOS:


#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.18-8.1.6.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-8.1.6.el5 ro root=/dev/VolGroup00/ 
LogVol00 clock=

pit noapic nolapic rhgb quiet
initrd /initrd-2.6.18-8.1.6.el5.img
title CentOS (2.6.18-8.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol00  
rhgb quiet


initrd /initrd-2.6.18-8.el5.img

I also did a uname -a:

[EMAIL PROTECTED] ~]# uname -a
Linux vmsrv02.myclient.local 2.6.18-8.1.6.el5 #1 SMP Thu Jun 14 17:29:04
EDT 2007 x86_64 x86_64 x86_64 GNU/Linux



I ran the telnet command and got this:

-bash-3.1$ telnet 192.168.111.11 5432
Trying 192.168.111.11...
telnet: connect to address 192.168.111.11: No route to host
telnet: Unable to connect to remote host: No route to host


However a ping works:

-bash-3.1$ ping 192.168.111.11
PING 192.168.111.11 (192.168.111.11) 56(84) bytes of data.
64 bytes from 192.168.111.11: icmp_seq=1 ttl=64 time=0.438 ms
64 bytes from 192.168.111.11: icmp_seq=2 ttl=64 time=0.168 ms
64 bytes from 192.168.111.11: icmp_seq=3 ttl=64 time=0.232 ms



Thanks for the help...

/Kevin






[EMAIL PROTECTED] ~]#On Mar 28, 2008, at 11:03 AM, Jonathan Nalley wrote:

Could you send us the distro and version of Linux that you're using  
on these two boxes?


Just as a sanity check, and assuming you have a telnet client on the  
.13 box, can you perform the following command:


telnet 192.168.111.11 5432

and then paste the output to us. I guess conceptually there's not a  
whole lot of difference between doing that and using psql -h  
192.168.111.11 but it might eliminate some layer of issues.  If  
you're familiar with nmap or similar programs you can accomplish the  
same thing to ping .11 on port 5432



-Original Message-
From: kevin kempter [mailto:[EMAIL PROTECTED]
Sent: Friday, March 28, 2008 12:56
To: Jonathan Nalley
Cc: 'pgsql-admin@postgresql.org'
Subject: Re: [ADMIN] weird network issue

Not sure. I'm not so well versed in the firewall/networking areas. I
can however do an scp pull from both machines :

scp a file from 192.168.111.13 while logged onto 192.168.111.11
and
scp a file from 192.168.111.11 while logged onto 192.168.111.13


Can you point me where to look for firewall/iptables/SE issues?


Thanks in advance





On Mar 28, 2008, at 10:45 AM, Jonathan Nalley wrote:


are you running any kind of firewall/iptables/SELinux  where the
settings are perhaps not the same on the two machines?


From: [EMAIL PROTECTED] [mailto:pgsql-admin-

[EMAIL PROTECTED]

] On Behalf Of kevin kempter
Sent: Friday, March 28, 2008 12:31
To: pgsql-admin@postgresql.org
Subject: [ADMIN] weird network issue

Hi LIst;

I have 2 Linux servers:
192.168.111.11
192.168.111.13

Both are running postgres v 8.2.6

I can ping the .11 box from .13 and vice versa
I can connect remotely from the .11 box to the .13 box but I cannot
connect to the .11 box from the .13 box.

I can do this:

on the 192.168.111.11 box:

-bash-3.1$   psql -h 192.168.111.13
Welcome to psql 8.2.6, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
 \h for help with SQL commands
 \? for help with psql commands
 \g or terminate with semicolon to execute query
 \q to quit

postgres=#

However if I do this it fails:

on the 192.168.111.13 box:

-bash-3.1$   psql -h 192.168.111.11 postgres
psql: could not connect to server: No route to host
  Is the server running on host 192.168.111.11 and accepting
  TCP/IP connections on port 5432?



Both boxes have the same copy of the postgresql.conf file and the
pg_hba.conf file.

Here's the listen address setting (on the 192.168.111.11 box) from
the postgresql.conf file:
listen_addresses = '*'


I also checked (after a restart) that the listen address and port
was in fact as I thought
on 192.168.111.11 :


Welcome to psql 8.2.6, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
 \h for help with SQL commands
 \? for help with psql commands
 \g or terminate with semicolon to execute query
 \q to quit

postgres=# show listen_addresses;
listen_addresses
--
*
(1 row)

postgres=# show port
;
port
--
5432
(1 row)

postgres=#


Here's the current pg_hba.conf file on 192.168.111.11 :



# TYPE  DATABASEUSERCIDR-ADDRESS  METHOD

# local is for Unix domain socket connections only
local   all all   ident sameuser
# IPv4 local connections:
hostall all 127.0.0.1/32  ident sameuser
# IPv6 local connections:
hostall all ::1/128   ident sameuser

#DRW.  This should be tighted up once the db instances are figured

out

hostall all 192.168.111.0/24

Re: [ADMIN] weird network issue

2008-03-28 Thread kevin kempter
I'm not sure who all can connect to this box, I have several clients  
with pgadmin III installed that cannot get to it.



This is from the 192.168.111.11 box:

[EMAIL PROTECTED] ~]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source   destination
RH-Firewall-1-INPUT  all  --  anywhere anywhere

Chain FORWARD (policy ACCEPT)
target prot opt source   destination
RH-Firewall-1-INPUT  all  --  anywhere anywhere

Chain OUTPUT (policy ACCEPT)
target prot opt source   destination

Chain RH-Firewall-1-INPUT (2 references)
target prot opt source   destination
ACCEPT all  --  anywhere anywhere
ACCEPT icmp --  anywhere anywhereicmp any
ACCEPT esp  --  anywhere anywhere
ACCEPT ah   --  anywhere anywhere
ACCEPT udp  --  anywhere 224.0.0.251 udp  
dpt:mdns
ACCEPT udp  --  anywhere anywhereudp  
dpt:syslog

ACCEPT udp  --  anywhere anywhereudp dpt:ntp
ACCEPT udp  --  anywhere anywhereudp dpt:ipp
ACCEPT tcp  --  anywhere anywheretcp dpt:ipp
ACCEPT all  --  anywhere anywherestate  
RELATED,ESTAB

LISHED
ACCEPT tcp  --  anywhere anywherestate NEW  
tcp dpt:s

sh
ACCEPT tcp  --  anywhere anywherestate NEW  
tcp dpt:h

ttps
ACCEPT tcp  --  anywhere anywherestate NEW  
tcp dpt:h

ttp
ACCEPT tcp  --  anywhere anywherestate NEW  
tcp dpt:v

mware-authd
ACCEPT tcp  --  anywhere anywherestate NEW  
tcp dpt:a

uth
ACCEPT udp  --  anywhere anywherestate NEW  
udp dpt:a

uth
ACCEPT tcp  --  anywhere anywherestate NEW  
tcp dpt:s

vn
ACCEPT udp  --  anywhere anywherestate NEW  
udp dpt:s

vn
REJECT all  --  anywhere anywherereject- 
with icmp-ho

st-prohibited
[EMAIL PROTECTED] ~]#




And this is from the 192.168.111.13 box:

[EMAIL PROTECTED] ~]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source   destination
RH-Firewall-1-INPUT  all  --  anywhere anywhere

Chain FORWARD (policy ACCEPT)
target prot opt source   destination
RH-Firewall-1-INPUT  all  --  anywhere anywhere

Chain OUTPUT (policy ACCEPT)
target prot opt source   destination

Chain RH-Firewall-1-INPUT (2 references)
target prot opt source   destination
ACCEPT all  --  anywhere anywhere
ACCEPT icmp --  anywhere anywhereicmp any
ACCEPT esp  --  anywhere anywhere
ACCEPT ah   --  anywhere anywhere
ACCEPT udp  --  anywhere 224.0.0.251 udp  
dpt:mdns
ACCEPT udp  --  anywhere anywhereudp  
dpt:microsoft-d

s
ACCEPT tcp  --  anywhere anywheretcp  
dpt:microsoft-d

s
ACCEPT udp  --  anywhere anywhereudp  
dpts:netbios-ns

:netbios-ssn
ACCEPT tcp  --  anywhere anywheretcp  
dpts:netbios-ns

:netbios-ssn
ACCEPT udp  --  anywhere anywhereudp dpt:ipp
ACCEPT tcp  --  anywhere anywheretcp dpt:ipp
ACCEPT all  --  anywhere anywherestate  
RELATED,ESTAB

LISHED
ACCEPT tcp  --  anywhere anywherestate NEW  
tcp dpt:s

mtp
ACCEPT tcp  --  anywhere anywherestate NEW  
tcp dpt:s

sh
ACCEPT tcp  --  anywhere anywherestate NEW  
tcp dpt:h

ttps
ACCEPT tcp  --  anywhere anywherestate NEW  
tcp dpt:h

ttp
ACCEPT tcp  --  anywhere anywherestate NEW  
tcp dpt:p

ostgres
LOGall  --  anywhere anywhereLOG level  
warning
REJECT all  --  anywhere anywherereject- 
with icmp-ho

st-prohibited
[EMAIL PROTECTED] ~]#






On Mar 28, 2008, at 11:45 AM, Shane Ambler wrote:


kevin kempter wrote:



Can you point me where to look for firewall/iptables/SE issues?




Start with the output of iptables -L  on both machines (you may need  
to run that as root).



Can every other machine connect to both servers?
or is it just the one server that has a problem connecting to the  
other?





--

Shane Ambler
pgSQL (at) Sheeky (dot) Biz

Get Sheeky @ http://Sheeky.Biz





[ADMIN] Deferred Constraints Issues

2008-03-25 Thread kevin kempter

Hi List;

I've placed this sql in a script:

psql my_db <<!
begin;
SET CONSTRAINTS ALL DEFERRED ;
delete from account where 1 = 1;
delete from cust where 1 = 1;
delete from cust_type where 1 = 1;
delete from cust_orders where 1 = 1;
commit;
!

The tables have relationships, however I expect the 'SET CONSTRAINTS  
ALL DEFERRED' to defer constraint checks.  When I run the script I get  
this:



BEGIN
SET CONSTRAINTS
ERROR:  update or delete on table "account" violates foreign key  
constraint "account_rel" on table "cust_type"
DETAIL:  Key (account_id)=(1) is still referenced from table  
"cust_type".
ERROR:  current transaction is aborted, commands ignored until end of  
transaction block
ERROR:  current transaction is aborted, commands ignored until end of  
transaction block
ERROR:  current transaction is aborted, commands ignored until end of  
transaction block

ROLLBACK


Am I doing something wrong ? Or, is the 'SET CONSTRAINTS ALL DEFERRED'  
not working properly ?



Thanks in advance...
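
For what it's worth, SET CONSTRAINTS ALL DEFERRED only affects constraints that were declared DEFERRABLE, and foreign keys are NOT DEFERRABLE by default, which would explain the immediate check. A sketch using the constraint named in the error message (referenced column names guessed from the DETAIL line):

```sql
-- Recreate the FK as deferrable so SET CONSTRAINTS can actually defer it:
alter table cust_type drop constraint account_rel;
alter table cust_type
    add constraint account_rel
    foreign key (account_id) references account (account_id)
    deferrable initially immediate;
```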






[ADMIN] last analyze / vacuum

2008-02-04 Thread Kevin Kempter
Hi List;

Is there a system catalog, or other means where I can find out the last time a 
table was analyzed and the last time a table was vacuumed?


Thanks in advance,


/Kevin
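
For the archives: in 8.2 and later the statistics collector tracks this in pg_stat_all_tables / pg_stat_user_tables:

```sql
-- Last manual and automatic vacuum/analyze times, per table:
select relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
from pg_stat_user_tables
order by relname;
```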

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[ADMIN] src equivalent of postgresql-python

2008-01-22 Thread Kevin Kempter
Hi List;

What's the source equivalent of postgresql-python? Is it a client interface? 
Is it the same as ./configure --with-python ?


Thanks in advance.






[ADMIN] pg_restore errors

2008-01-21 Thread Kevin Kempter
Hi List;

Trying to restore a pg_dump I get this:

pg_restore: ERROR:  invalid byte sequence for encoding UTF8: 0xae
HINT:  This error can also happen if the byte sequence does not match the 
encoding expected by the server, which is controlled by client_encoding.
CONTEXT:  COPY imm_lookup, line 2
pg_restore: [archiver (db)] error returned by PQendcopy: ERROR:  invalid byte 
sequence for encoding UTF8: 0xae
HINT:  This error can also happen if the byte sequence does not match the 
encoding expected by the server, which is controlled by client_encoding.
CONTEXT:  COPY imm_lookup, line 2


This is a dump file that was handed to me by a client. I don't know anything 
about what the dump flags were or what the db encoding was. Is there a way to 
find out from the dump file (I'm assuming this is an encoding mismatch)?

I did do a pg_restore -l and got this:

snip
;
; Archive created at Wed Nov 28 21:04:49 2007
; dbname: practice
; TOC Entries: 867
; Compression: -1
; Dump Version: 1.10-0
; Format: CUSTOM
; Integer: 4 bytes
; Offset: 4 bytes
; Dumped from database version: 8.1.9
; Dumped by pg_dump version: 8.1.4
;
;
; Selected TOC Entries:
;
5; 2615 2200 SCHEMA - public postgres
2911; 0 0 COMMENT - SCHEMA public postgres
2912; 0 0 ACL - public postgres
589; 2612 43134 PROCEDURAL LANGUAGE - plpgsql
14; 1255 43135 FUNCTION public dtos(date) postgres
27; 1255 43136 FUNCTION public get_session_alta_user() postgres
15; 1255 43137 FUNCTION public plpgsql_call_handler() postgres
16; 1255 43138 FUNCTION public plpgsql_validator(oid) postgres
23; 1255 43139 FUNCTION public process_appointment_audit() postgres
24; 1255 43140 FUNCTION public process_billing_audit() postgres
25; 1255 43141 FUNCTION public process_billing_detail_audit() postgres
26; 1255 43142 FUNCTION public process_patient_audit() postgres
1524; 1259 43146 TABLE public address postgres
2913; 0 0 SEQUENCE SET public address_rec_no_seq postgres
/snip


Any thoughts?

Thanks in advance..

/Kevin



[ADMIN] more blob questions

2008-01-17 Thread Kevin Kempter
Hi List;

Suppose I have a table with say an integer key, a char column and a large 
object column.

can I simply delete a row from the table (delete from bigtable where...), then 
run a vacuum (or vacuum full) ? If so, in this scenario will postgres release 
the space back to the OS or must I first remove the large object with 
lo_unlink() ?

also, does the contrib vacuumlo perform tasks I cannot do via vacuum and  
lo_unlink ?

Thanks in advance..

/Kevin
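
For context: deleting a row only removes the reference; the large object itself stays in pg_largeobject until it is unlinked, and only then can vacuum reclaim the space. A sketch with hypothetical table/column names:

```sql
-- Unlink the large object before (or while) deleting the row that points at it:
select lo_unlink(blob_col) from bigtable where key_col = 42;
delete from bigtable where key_col = 42;
vacuum pg_largeobject;   -- reclaim space inside the large-object catalog
```

The contrib vacuumlo tool automates the part plain vacuum cannot do: finding and unlinking large objects that no row references any more.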



[ADMIN] large objects space usage

2008-01-13 Thread Kevin Kempter
Hi List;

I'm engaging with a client who is trying to solve what sounds like a bloat 
issue. However in further conversations they told me this:

1) we have a table that is an integer key and a blob column
2) the table was at 30Gig, we went in and deleted the last 2 months of data 
and it's still at 30Gig

At this point I thought bloat and that vacuum full or a table rebuild would 
fix the issue.  However later they told me:

3) we did a pg_dump and a reload into a database on the test box and the table 
is still 30Gig.

I haven't gotten access to their servers yet so I only have the above info 
however I wonder is there special space considerations or storage 
requirements, etc with blobs ?

Thanks in advance...

/Kevin



Re: [ADMIN] Postgres replication

2008-01-10 Thread Kevin Kempter
On Thursday 10 January 2008 00:38:38 nalini wrote:
 Dear All

 I have a application running at various locations with backend as postgres
 Slony is configured at all locations for replication at local site and a
 central server where each site has its own database instance.

 Now we wish to merge data collected from all the sites into one database
 instance to provide consolidated view

 database schema is same at all the places
 However as the application is still under development the schema may change.
 however the change will happen at all the locations.

 Pls suggest a solution

 with regards


 Nalini S. Nautiyal

Maybe you could use dblink (in contrib) or dbi-link (see pgfoundry) and 
create a db full of remote views, each one being a join that retrieves all 
data for that table from all related databases/clusters?

/Kevin
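
A sketch of the dblink approach with made-up host/table/column names (each site contributes one branch of the UNION):

```sql
-- Consolidated view over two site databases via contrib/dblink:
create view all_orders as
select * from dblink('host=site1 dbname=app',
                     'select order_id, amount from orders')
    as t(order_id bigint, amount numeric)
union all
select * from dblink('host=site2 dbname=app',
                     'select order_id, amount from orders')
    as t(order_id bigint, amount numeric);
```

One caveat: if the schema changes, the column lists in this view have to be kept in sync with every site.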




Re: [ADMIN] PITR warm-standby with 8.2 setup questions

2008-01-10 Thread Kevin Kempter
On Thursday 10 January 2008 12:27:37 David Wall wrote:
 I'm trying to get WAL file copying working across two systems.  It seems
 pretty straightforward to handle this in the archive_command of the
 primary, in which I am able to copy the files easily to a staging area
 for the backup system.

 On the backup system, I have the recovery.conf pointing to my script,
 but I have a few questions on how that program should behave, and
 whether I'd be much wiser to just use pg_standby instead of my own script.

If you're on 8.2 I'd use pg_standby, just because it's actively maintained - 
although I've written my own as well. It's pretty straightforward.


 1) I do not want to give the backup PG a file unless it's a complete
 file.  I don't want to give it one that is currently in the middle of
 being transferred from the primary to the backup system (or a backup
 copy that failed in the middle).  Most of my files are 16777216 bytes,
 so I initially checked that size.  But I found that there are files with
 a name like '0001000200DC.0020.backup' that is only 272
 bytes.  Will that file also need to be given to PG recovery, and if so,
 how are most determining if a small file is complete or just a partial
 from the copy?

All the WAL segments in the target WAL archive dir from your 'master' are 
complete; they don't get archived until the file is complete. The .backup 
file is from a pg_start_backup(), which tells Postgres where a filesystem 
backup was done.
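For reference, the standard documented pattern for archive_command on the primary copies each completed segment only if it isn't already present in the archive (the archive path below is hypothetical):

```
# postgresql.conf on the primary -- /mnt/wal_archive is a hypothetical path
archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'
```

The `test ! -f` guard makes a re-run fail rather than silently overwrite an already-archived segment.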


 2) On the recovery side, the %p and %f values are giving me %p of
 'pg_xlog/RECOVERYXLOG' and %f of '0001000300B1'.  Can I
 confirm that this means my recover program should find a file named
 '0001000300B1' in my backup system's WAL archive and copy it
 to the file named 'pg_xlog/RECOVERYXLOG', or should I put it into
 'pg_xlog/0001000300B1'?

Generally you want to push the files into a source WAL dir and specify this 
source path in the recovery.conf file. Don't copy the files into pg_xlog or 
pg_xlog/RECOVERYXLOG yourself; the engine will do it as needed.
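As a sketch, an 8.2-era recovery.conf on the standby using pg_standby would pass %f and %p through and let pg_standby wait for each segment (the archive path is hypothetical):

```
# recovery.conf on the standby -- /mnt/wal_archive is a hypothetical path
restore_command = 'pg_standby /mnt/wal_archive %f %p'
```

pg_standby blocks until the requested segment appears in the archive dir and then copies it to the %p location itself, which is exactly the behavior a hand-rolled script would otherwise have to reimplement.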


 3) On the backup PG, is it correct that I have to remove backup_label,
 postmaster.pid and pg_xlog if they are part of the tar backup from
 the primary's PGDATA area?

AFAIK you only need to remove the pid file and pg_xlog/*



 Thanks,
 David



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[ADMIN] strftime pattern list ?

2007-12-19 Thread Kevin Kempter
Anyone know where I can find a full list of the strftime patterns that can be 
used for log_filename ?

Thanks in advance..

/Kevin
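(For what it's worth, log_filename takes standard strftime(3) escapes, so `man 3 strftime` has the full list. A typical per-day setting, offered here only as an illustrative sketch, might be:)

```
# postgresql.conf -- one timestamped log file per startup/rotation
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
```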

---(end of broadcast)---
TIP 6: explain analyze is your friend


[ADMIN] pg_restore error

2007-12-13 Thread Kevin Kempter
Hi list;

We're seeing this error when trying to restore a pg_dump file:

-
pg_restore: ERROR:  invalid byte sequence for encoding UTF8: 0x9f CONTEXT:  
COPY log_customer_api, line 4551 pg_restore: [archiver (db)] error returned 
by PQendcopy: ERROR:  invalid byte sequence for encoding UTF8: 0x9f 
CONTEXT:  COPY log_customer_api, line 4551 pg_restore: *** aborted because of 
error
-


Any thoughts?


Thx in advance

/Kevin

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[ADMIN] How to correlate an index relname to it's table

2007-11-20 Thread Kevin Kempter
Hi List;

I've got an index name in pg_class and I want to figure out which table it 
belongs to, any suggestions?


Thanks in advance


/Kevin
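One way is to go through pg_index, which links the index's pg_class row to its table's pg_class row. A sketch ('my_index' is a placeholder name):

```sql
-- Map an index name to the table it belongs to, via pg_index.
SELECT t.relname AS table_name
FROM pg_class i
JOIN pg_index ix ON ix.indexrelid = i.oid
JOIN pg_class t  ON t.oid = ix.indrelid
WHERE i.relname = 'my_index';
```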


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[ADMIN] Get a list of ALL tables

2007-11-19 Thread Kevin Kempter
Hi List;

I'm working with a client with several highly active 8.1.4 servers. I want to 
run a weekly cron that will vacuum ALL db tables except those with entries in 
pg_autovacuum (where we've set up manual/cron vacuums) in order to eliminate 
transaction ID wraparound failure warnings (which come up fairly often).

My question is:
Will a select from pg_tables for each database in the cluster provide me with 
a full table listing for the purposes of ensuring that all tables in the db 
have been vacuumed, or are there others I'll need to account for as well?


Thanks in advance...

/Kevin

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [ADMIN] Get a list of ALL tables

2007-11-19 Thread Kevin Kempter
Sorry, I meant to send this to the list:


We have a handful of tables that are quite large and take several hours to 
vacuum. We're managing these large tables by using cron via a schedule that 
accounts for system load. I want to pull the list of all tables and exclude 
these large tables from the list, then run the vacuum on the remaining tables 
once a week to avoid transaction ID wraparound failures.


/Kevin




On Monday 19 November 2007 16:29:15 you wrote:
 Kevin Kempter [EMAIL PROTECTED] writes:
  I'm working with a client with several highly active 8.1.4 servers. I
  want to run a weekly cron that will vacuum ALL db tables except those
  with entries in pg_autovacuum (where we've setup manual/cron vacuums) in
  order to eliminate transaction ID wraparound failure warnings (which
  comes up fairly often).
 
  My question is:
  will a select from pg_tables for each database in the cluster provide me
  with a full table listing for the purposes of ensuring that all tables in
  the db have been vacuumed or are there others I'll need to account for as
  well ?

 Why are you insisting on inventing your own wheel for this, when
 vacuum without a parameter does that just fine?

   regards, tom lane
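If the per-table route is still wanted despite Tom's point, a catalog query along these lines (a sketch against 8.1's pg_autovacuum catalog, untested here) would list the ordinary tables in the current database that have no manual pg_autovacuum override:

```sql
-- Tables with no pg_autovacuum entry, i.e. candidates for the weekly vacuum.
SELECT n.nspname, c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
LEFT JOIN pg_autovacuum a ON a.vacrelid = c.oid
WHERE c.relkind = 'r'
  AND a.vacrelid IS NULL
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');
```

A database-wide `VACUUM;` (Tom's suggestion) also covers the system catalogs, which this query deliberately skips, so the plain command remains the safer wraparound defense.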



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[ADMIN] Understanding wal segments

2007-11-01 Thread Kevin Kempter
Hi List;

I want to better understand WAL / checkpoint segments.

I have a cluster which has checkpoint_segments set to 32 and wal_buffers set 
to 24.

I consistently see 66 files in pg_xlog, and I've recently implemented WAL 
archiving; so far there are 41 segments/files in the WAL archive dir.

What's the relationship between wal_buffers and checkpoint_segments? Even 
though the max distance between checkpoints is 32 segments, is there 
something that controls the number of files/segments in the pg_xlog dir?


Thanks in advance


/Kevin


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [ADMIN] Understanding wal segments

2007-11-01 Thread Kevin Kempter
On Thursday 01 November 2007 22:38:02 Tom Lane wrote:
 Kevin Kempter [EMAIL PROTECTED] writes:
  What's the relationship between wal_buffers and checkpoint_segments?

 None particularly.

  even
  though the max distance between checkpoints is 32 segments, is there
  something that controls the number of files/segments in the pg_xlog dir ?

 The system is designed to settle at 2*checkpoint_segments+1 files in
 pg_xlog, assuming that you have activity spikes sufficient to drive it
 up to that level at all.

   regards, tom lane

 ---(end of broadcast)---
 TIP 3: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faq

Makes sense. Our system certainly has the activity needed to push it to that 
level. I turned on WAL archiving 1 hr 10 min ago and I already have 135 WAL 
segments in my archive directory. I'm running a filesystem backup: I did a 
pg_start_backup() and now I'm running the tar. Will this affect the 
transaction activity? I thought it would not.

Also I suspect after each filesystem backup I should remove the older archived 
segments since the file system backup makes them irrelevant. Is this 
accurate?

Thx

/Kevin 
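For this cluster (checkpoint_segments = 32), Tom's steady-state figure works out as follows, which is close to the 66 files observed above:

```sql
-- Steady-state pg_xlog file count per Tom's 2*checkpoint_segments+1 rule.
SELECT 2 * 32 + 1 AS steady_state_xlog_files;  -- 65
```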


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] Understanding wal segments

2007-11-01 Thread Kevin Kempter
On Thursday 01 November 2007 23:38:48 Tom Lane wrote:
 Kevin Kempter [EMAIL PROTECTED] writes:
  Also I suspect after each filesystem backup I should remove the older
  archived segments since the file system backup makes them
  irrelevant. Is this accurate?

 Depends what your goals are.  If you only want to be able to recover
 the latest database state then you only need archived WAL segments
 back to the start of the latest filesystem backup.  If you are
 interested in the PITR aspect of it --- being able to revert back
 to last Tuesday before some bad guy injected bad data, say --- then
 you'll want to keep older filesystem backups too, and a continuous
 string of archived WAL files covering the period of interest.

   regards, tom lane

 ---(end of broadcast)---
 TIP 2: Don't 'kill -9' the postmaster

Makes sense. Thanks.

One more question. I see a fair amount of file changed warnings in the tar 
output. Like this:

tar: ./data/base/82097725/4242350944: file changed as we read it
./data/base/82097725/143763179
./data/base/82097725/3587685114
./data/base/82097725/3587684991
./data/base/82097725/3587685046
./data/base/82097725/3587685003
./data/base/82097725/3588484372
tar: ./data/base/82097725/3588484372: file changed as we read it
./data/base/82097725/3587685172
./data/base/82097725/10733
./data/base/82097725/3588482971
./data/base/82097725/4082992388
./data/base/82097725/4242349291.4
tar: ./data/base/82097725/4242349291.4: file changed as we read it


Is this an issue, or does Postgres compensate for this since it knows I'm 
running a backup (I've run a pg_start_backup)?

Thx

/Kevin


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [ADMIN] Understanding wal segments

2007-11-01 Thread Kevin Kempter
On Thursday 01 November 2007 23:52:00 Tom Lane wrote:
 Kevin Kempter [EMAIL PROTECTED] writes:
  One more question. I see a fair amount of file changed warnings in the
  tar output. Like this:
 
  tar: ./data/base/82097725/4242350944: file changed as we read it

 Yeah, GNU tar complains about that.  It's not a problem, other than the
 risk that a boatload of such warnings could obscure real problems ...

   regards, tom lane

 ---(end of broadcast)---
 TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match

great.  Thanks again.

/Kevin


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org

