Re: [ADMIN] Migration tool

2006-08-15 Thread Jim C. Nasby
On Sat, Aug 12, 2006 at 08:42:27AM -0700, RPK wrote:
> 
> Is there any "Migration Toolkit" under development by PostgreSQL development
> team? As we see in MySQL, it has a very good Migration Tool.
> 
> Can we expect the same in near future?

It's very unlikely there will ever be any migration tools as part of
the core distribution, but there are already projects for various
databases on http://pgfoundry.org.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [ADMIN] Restoring database question, part 2..

2006-08-15 Thread Jim C. Nasby
On Tue, Aug 15, 2006 at 05:25:38PM +0200, [EMAIL PROTECTED] wrote:
> 
> Hi all,
> 
> this is a follow-up question to my other question about moving the data 
> folder (that holds all databases) as a method to restore the databases 
> after my server was hacked and everything removed.
> 
> After having read the manual and received some helpful responses, I have 
> carefully stopped the database, moved the newly installed data folder 
> (under /var/lib/postgres) and copied over the old (before the hacking 
> attempt) data folder.
> 
> The database starts and I can select from the tables but I can't do \d
> <table> or dump the database. The error message is as follows:
> 
> ERROR:  could not open relation "pg_inherits": No such file or directory
> 
> 
> From what I can read about pg_inherits (and other pg_* tables), it is 
> a table that contains info about the tables in my database. What I 
> can't find is where these files should be located.

It sounds like you didn't get all the tables. If you do a SELECT
relfilenode FROM pg_class WHERE relname='pg_inherits' on another 7.4
database (I don't have one handy), you'll see what the filename should
be so you can look for it.
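
For example (a sketch; run it on any working 7.4 cluster, and look for the
resulting number as a file name under $PGDATA/base/<database oid>/):

SELECT relfilenode FROM pg_class WHERE relname = 'pg_inherits';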

If you haven't done many DDL changes since the last backup, you could
possibly restore an old copy and use that info to reconstruct
pg_inherits.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [ADMIN] dropping partitioned table waits forever

2006-08-15 Thread Jim C. Nasby
On Mon, Aug 14, 2006 at 03:32:10PM -0700, Sriram Dandapani wrote:
> Hi
> 
>  
> 
> I have a table partitioned on a daily basis. Data is constantly inserted
> in this table. When I attempt to drop a partition that is not used(i.e
> previous day's), the drop table waits for a long time. This is
> probably due to the fact that the old partition is being
> referenced in a query plan (the insert into the parent table uses
> triggers).
> 
> How can I prevent the locking out of the drop table? I tried truncate
> table but it is worse: it locks out the inserts in the parent table and
> then waits (hence causing a deadlock).

What about dropping/changing the trigger?
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [ADMIN] dropping partitioned table waits forever

2006-08-15 Thread Jim C. Nasby
On Tue, Aug 15, 2006 at 02:34:57PM -0700, Sriram Dandapani wrote:
> When I modified the trigger, I noticed that postgres restarted. Not sure
> why, but it happens whenever the trigger is executed. This behaviour is
> consistent.
 
Restarted as in the database crashed??

Is this actually being done with a trigger or with rules?
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [ADMIN] Possible bug in planner (or planner not enough wise in some cases)

2006-09-19 Thread Jim C. Nasby
Try changing it to just a join and see if it works.
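
For example, the OR of two IN subqueries below could be rephrased as a
UNION of two plain joins, something like this (a sketch only; it assumes
el_comment.id is unique, so the UNION dedup matches the IN semantics):

SELECT count(*) FROM (
    SELECT c.id FROM el_comment c
      JOIN el_comment tt ON tt.id = c.parent_id
     WHERE tt.user_id = 112 AND tt.status = 1 AND c.status = 1
    UNION
    SELECT c.id FROM el_comment c
      JOIN el_text t ON t.id = c.text_id
     WHERE t.user_id = 112 AND c.status = 1
) AS matches;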

On Mon, Sep 18, 2006 at 05:35:52PM +0400, Boguk Maxim wrote:
> postgres version 8.1
> 
> all tables fresh vacuumed/analyzed
> 
> Problem table:
> 
> somedb=# \d el_comment
>    Table "public.el_comment"
>    Column   |   Type   |                           Modifiers
> -----------+----------+--------------------------------------------------------
>  id        | integer  | not null default nextval(('public.documents_id_seq'::text)::regclass)
>  user_id   | integer  | not null
>  text_id   | integer  | not null
>  status    | smallint | not null default 0
>  parent_id | integer  |
> Indexes:
> "el_comment_pkey" PRIMARY KEY, btree (id)
> "el_comment_parent_id" btree (parent_id)
> "el_comment_text" btree (text_id)
> "el_comment_user" btree (user_id)
> Foreign-key constraints:
> "delete_el_text" FOREIGN KEY (text_id) REFERENCES el_text(id) ON DELETE 
> CASCADE
> 
> Problem query:
> 
> somedb=# EXPLAIN ANALYZE SELECT count(*) FROM el_comment WHERE ((parent_id IN 
> (SELECT tt.id FROM el_comment as tt WHERE tt.user_id = 112 AND tt.status=1)) 
> OR (text_id IN (SELECT el_text.id FROM el_text WHERE el_text.user_id = 112))) 
> AND status=1;
>                                   QUERY PLAN
> ------------------------------------------------------------------------------
>  Aggregate  (cost=80641.51..80641.52 rows=1 width=0) (actual 
> time=13528.870..13528.873 rows=1 loops=1)
>->  Seq Scan on el_comment  (cost=56.07..80352.97 rows=1154156 width=0) 
> (actual time=113.866..13528.705 rows=15 loops=1)
>  Filter: ((status = 1) AND ((hashed subplan) OR (hashed subplan)))
>  SubPlan
>->  Index Scan using el_text_user on el_text  (cost=0.00..15.92 
> rows=12 width=4) (actual time=0.992..82.397 rows=12 loops=1)
>  Index Cond: (user_id = 112)
>->  Index Scan using el_comment_user on el_comment tt  
> (cost=0.00..40.14 rows=28 width=4) (actual time=8.748..21.661 rows=14 loops=1)
>  Index Cond: (user_id = 112)
>  Filter: (status = 1)
>  Total runtime: 13529.189 ms
> (10 rows)
> 
> Now let's look at the output of both subqueries:
> SELECT tt.id FROM el_comment as tt WHERE tt.user_id = 112 AND tt.status=1:
> 
>  2766039
>  2766057
>  2244101
>  1929350
>  1929419
>  1929439
>  1490610
> 1052
>  2766033
>  2421000
>  2420878
>   611328
> 1019
> 1646
> (14 rows)
> 
> and SELECT el_text.id FROM el_text WHERE el_text.user_id = 112
> 
>  3758109
>53688
>  1947631
>  1915372
>  1224421
>  1011606
>13772
> 1017
>   463135
>   470614
>   575691
>   916229
> (12 rows)
> 
> And putting these values into the query:
> 
> planet=# EXPLAIN ANALYZE SELECT count(*) FROM el_comment WHERE ((parent_id IN 
> (2766039,2766057,2244101,1929350,1929419,1929439,1490610,1052,2766033,2421000,2420878,611328,1019,1646))
>  OR (text_id IN 
> (3758109,53688,1947631,1915372,1224421,1011606,13772,1017,463135,470614,575691,916229)))
>  AND status=1;
> 
> QUERY PLAN
> 
>  Aggregate  (cost=340.76..340.77 rows=1 width=0) (actual time=9.452..9.453 
> rows=1 loops=1)
>->  Bitmap Heap Scan on el_comment  (cost=52.24..340.71 rows=194 width=0) 
> (actual time=5.431..9.269 rows=15 loops=1)
>  Recheck Cond: ((parent_id = 2766039) OR (parent_id = 2766057) OR 
> (parent_id = 2244101) OR (parent_id = 1929350) OR (parent_id = 1929419) OR 
> (parent_id = 1929439) OR (parent_id = 1490610) OR (parent_id = 1052) OR 
> (parent_id = 2766033) OR (parent_id = 2421000) OR (parent_id = 2420878) OR 
> (parent_id = 611328) OR (parent_id = 1019) OR (parent_id = 1646) OR (text_id 
> = 3758109) OR (text_id = 53688) OR (text_id = 1947631) OR (text_id = 1915372) 
> OR (text_id = 1224421) OR (text_id = 1011606) OR (text_id = 13772) OR 
> (text_id = 1017) OR (text_id = 463135) OR (text_id = 470614) OR (text_id = 
> 575691) OR (text_id = 916229))
>  Filter: (status = 1)
>  ->  BitmapOr  (cost=52.24..52.24 rows=194 width=0) (actual 
> time=4.972..4.972 rows=0 loops=1)
>->  Bitmap Index Scan on el_comment_parent_id  
> (cost=0.00..2.00 rows=2 width=0) (actual time=0.582..0.582 rows=
> 1 loops=1)
>  Index Cond: (parent_id = 2766039)
> 
> 14 same rows
> 
> 
>->  Bitmap Index Scan on el_comment_text  (cost=0.00..2.02 
> rows=13 width=0) (actual time=0.983..0.983 rows=0 loops=1)
>  Index Cond: (text_id = 3758109)
> 
> 11 same rows
> 
> 
>  Total runtime: 10.368 ms
> (58 rows)

Re: [ADMIN] how to check the creation date of a postgres database?

2006-09-22 Thread Jim C. Nasby
Please include the list in your replies.

On Fri, Sep 22, 2006 at 09:04:08AM -0700, Jessica Richard wrote:
> Thanks for your reply.
> 
> It is for database documentation.
 
Is it really that important to know when a database was created?

I'm doubtful that anyone in the community will get excited enough to
make a patch based just on that use case; though if you were to create
one it might get accepted.

> Jim Nasby <[EMAIL PROTECTED]> wrote: On Sep 20, 2006, at 11:45 AM, Jessica 
> Richard wrote:
> > \l and  \l+ show only the database name, owner and description.
> > select * from pg_database does not have date info, either.
> >
> > I would like to know when each of my databases were created.
> 
> You're not the first person to ask for this, but it's still pretty  
> unclear what the use-case for that info is (and without a decent use- 
> case, it's pretty unlikely that this info will get stored).
> 
> So... why is it you want to know this?
-- 
Jim C. Nasby, Database Architect   [EMAIL PROTECTED]
512.569.9461 (cell) http://jim.nasby.net

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [ADMIN] how to check the creation date of a postgres database?

2006-09-22 Thread Jim C. Nasby
On Fri, Sep 22, 2006 at 10:30:55AM -0400, Tom Lane wrote:
> Jim Nasby <[EMAIL PROTECTED]> writes:
> > On Sep 20, 2006, at 11:45 AM, Jessica Richard wrote:
> >> I would like to know when each of my databases were created.
> 
> > You're not the first person to ask for this, but it's still pretty  
> > unclear what the use-case for that info is (and without a decent use- 
> > case, it's pretty unlikely that this info will get stored).
> 
> If we did store it, it would presumably reflect the time of issuance of
> CREATE DATABASE, which would mean it wouldn't survive a dump and reload
> anyway (as the OP seems to be wishing for).

Well, that case could be handled as well, it's just a SMOC. But as I
mentioned, it's unlikely an existing developer will get excited about
any of this.
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [ADMIN] how to check the creation date of a postgres database?

2006-09-22 Thread Jim C. Nasby
On Fri, Sep 22, 2006 at 11:30:55AM -0500, Scott Marlowe wrote:
> On Fri, 2006-09-22 at 11:17, Jim C. Nasby wrote:
> > On Fri, Sep 22, 2006 at 10:30:55AM -0400, Tom Lane wrote:
> > > Jim Nasby <[EMAIL PROTECTED]> writes:
> > > > On Sep 20, 2006, at 11:45 AM, Jessica Richard wrote:
> > > >> I would like to know when each of my databases were created.
> > > 
> > > > You're not the first person to ask for this, but it's still pretty  
> > > > unclear what the use-case for that info is (and without a decent use- 
> > > > case, it's pretty unlikely that this info will get stored).
> > > 
> > > If we did store it, it would presumably reflect the time of issuance of
> > > CREATE DATABASE, which would mean it wouldn't survive a dump and reload
> > > anyway (as the OP seems to be wishing for).
> > 
> > Well, that case could be handled as well, it's just a SMOC. But as I
> > mentioned, it's unlikely an existing developer will get excited about
> > any of this.
> 
> How about a default database comment on creation if the user doesn't set
> one, of "database created 2006-09-21 12:34:56"
 
If we had catalog triggers, one could do that automagically...
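
Lacking that, it can at least be done by hand at creation time, e.g. (a
sketch; psql's \l+ will then display the comment):

CREATE DATABASE mydb;
COMMENT ON DATABASE mydb IS 'created 2006-09-22';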
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [ADMIN] How can I restore from WAL log? [PG 7.3]

2006-09-28 Thread Jim C. Nasby
On Thu, Sep 28, 2006 at 10:16:19PM +0800, Chan Michael wrote:
> Hi,
> 
> I am new to PostgreSQL and now want to know how can I recover from a database 
> crash.
> 
> I know pg_dump and pg_restore but with pg_dump all transactions between every 
> pg_dump will be lost. I found WAL in the doc, and it seems with it I can achieve 
> point-in-time recovery. But I don't know the exact steps of doing this.

Point in Time Recovery wasn't added until version 8.0.
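
On 8.0 and later the basic setup is just two settings (a minimal sketch,
assuming an /archive directory that exists and is writable by the
postgres user):

# in postgresql.conf
archive_command = 'test ! -f /archive/%f && cp %p /archive/%f'

# in recovery.conf, at restore time
restore_command = 'cp /archive/%f %p'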
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] SELECT FOR UPDATE NOWAIT

2006-09-28 Thread Jim C. Nasby
On Thu, Sep 28, 2006 at 03:26:36PM +, Mathias Laurent wrote:
> If I do :
> Session 1:
> decibel=# begin;

Does decibel have some meaning in some language other than English?

> BEGIN
> decibel=# select * from i where i=1 for update nowait;
> i
> ---
> 1
> (1 row)
> 
> decibel=# begin;
> BEGIN
> decibel=# select * from i where i=2 for update nowait;
> i
> ---
> 2
> (1 row)
> 
> Session 2 :
> 
> decibel=# select * from i where i=1 for update nowait;
> ERROR:  could not obtain lock on row in relation "i"
> decibel=# select * from i where i=3 for update nowait;
> ERROR:  current transaction is aborted, commands ignored until end of 
> transaction block
> decibel=#  ==> Why ? :'(
> 
> Why can't I do any "SELECT FOR UPDATE" (current transaction is aborted) after 
> having received a could-not-obtain-lock error?
> Thank you for your answer !

Because as soon as there is an error in a transaction, the entire
transaction is void. You have to rollback the transaction (or return to
a savepoint).
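
For example (a sketch; SAVEPOINT is available in 8.0 and later):

BEGIN;
SAVEPOINT grab;
SELECT * FROM i WHERE i=1 FOR UPDATE NOWAIT;
-- on "could not obtain lock", recover and continue:
ROLLBACK TO SAVEPOINT grab;
SELECT * FROM i WHERE i=3 FOR UPDATE NOWAIT;
COMMIT;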
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [ADMIN] transaction id wraparound

2006-09-28 Thread Jim C. Nasby
Interesting... age(xid) isn't documented anywhere.

No, vacuum shouldn't be generating a lot of xid's. My guess is that your
generating process actually does 2 transactions per row.

On Thu, Sep 28, 2006 at 11:16:24AM -0700, Sriram Dandapani wrote:
> When I run this query:
> 
> fwdb01=# select current_timestamp,datname,age(datfrozenxid) from
> pg_database;
> 
>               now              |  datname  |    age
> -------------------------------+-----------+------------
>  2006-09-28 18:04:24.489935+00 | postgres  | 1087834006
>  2006-09-28 18:04:24.489935+00 | fwdb01    | 2039254861
>  2006-09-28 18:04:24.489935+00 | template1 | 2039253122
>  2006-09-28 18:04:24.489935+00 | template0 | 1542808250
> (4 rows)
> 
> fwdb01=# select current_timestamp,datname,age(datfrozenxid) from
> pg_database;
> 
>              now              |  datname  |    age
> ------------------------------+-----------+------------
>  2006-09-28 18:10:45.64452+00 | postgres  | 1088357075
>  2006-09-28 18:10:45.64452+00 | fwdb01    | 2039777930
>  2006-09-28 18:10:45.64452+00 | template1 | 2039776191
>  2006-09-28 18:10:45.64452+00 | template0 | 1543331319
> 
> In approximately 6 minutes, the fwdb01 count has gone up by about 500K.
> I am generating about 250K rows every 6 minutes. I am also running
> vacuumdb.
> 
> Does vacuumdb generate a lot of transactions that affect this counter?

-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [ADMIN] transaction id wraparound

2006-09-28 Thread Jim C. Nasby
On Thu, Sep 28, 2006 at 11:58:12AM -0700, Talha Khan wrote:
> The datfrozenxid column of a pg_database row is updated at the completion of 
> any database-wide vacuum operation. The value stored there is the cutoff xid 
> used by the vacuum operation; all xids older than this cutoff xid are 
> replaced by this xid, so I think the behaviour being shown by your database 
> is quite normal.

True, but if age(xid) is showing how many transactions have occurred
since xid, then his conclusion that he did 500k transactions between
those two snapshots is correct.
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [ADMIN] number of transactions doubling

2006-09-28 Thread Jim C. Nasby
On Thu, Sep 28, 2006 at 02:51:24PM -0700, Sriram Dandapani wrote:
> I have a strange problem with the  number of transactions generated
> within my application. I use jdbc batching to submit insert statements
> from a jboss app server to postgres 8.1.2.
> 
> A batch can have from 100 to 3000 inserts.
> 
> I noticed the following:
> 
> When I run the following query
> 
> select current_timestamp,datname,age(datfrozenxid) from pg_database;
> 
> The age columns shows say 1,500,000,000
> 
> When I run the same query after say 10 minutes, it shows 1,500,600,000
> 
> I have issued about 40,000 inserts via jdbc batching in 10 minutes.
> 
> It appears that the jdbc driver is generating twice the number of
> transactions as inserts. This behaviour appears consistent with the
> number of inserts that I generate
 
Uh... you're inserting 40k rows and getting 600k transactions... how are
you concluding that the jdbc driver is generating 80k transactions?

> The target table has triggers that route data to appropriate tables. The
> tables to which data is routed has check constraints that do further
> inserts. (All of this happens in 1 jdbc transaction)
 
Actually, no matter what JDBC is doing, all of that will happen within a
single transaction on the database (unless you're using something like
dblink from within the triggers). So even if you were issuing insert
statements with autocommit on, you'd see at most one transaction per
insert.

> I expect  JDBC Batching to generate fewer transactions depending on
> batch size.
> 
> The mystery is why am I seeing double the number of transactions being
> generated.
> 
> This is causing transaction id wraparound limits to be approached
> quicker than I would like.

As for your autocommit bit-flipping, why don't you just issue either a
rollback or a commit when you release the connection?
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [ADMIN] Addendum on stored procedure array limits

2006-09-28 Thread Jim C. Nasby
On Thu, Sep 28, 2006 at 06:13:04PM -0400, Paul B. Anderson wrote:
> I'm running RHEL 3.0 and postgreSQL 8.1.4, compiled from source.

Have you tried enforcing that there are no NULLs in the arrays? That code
is pretty new (in fact, I thought it was only in 8.2...), so it's
possible that therein lies the bug.

I'd also look into the size of the varchars... if they're big enough you
could be hitting a memory limit somewhere.

In any case, the error message could certainly use improving... :)
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [ADMIN] Addendum on stored procedure array limits

2006-09-28 Thread Jim C. Nasby
You might want to try 8.2 beta then, as support for NULLs in arrays was
just added. Testing of that magnitude would be great!

On Thu, Sep 28, 2006 at 09:08:19PM -0400, Paul B. Anderson wrote:
> There are definitely nulls in there, and in the fields where the error 
> is signaled.  I missed in my reading that they are not allowed.
> 
> Thanks.
> 
> Paul
> 
> 
> Jim C. Nasby wrote:
> >On Thu, Sep 28, 2006 at 06:13:04PM -0400, Paul B. Anderson wrote:
> >  
> >>I'm running RHEL 3.0 and postgreSQL 8.1.4, compiled from source.
> >>
> >
> >Have you tried enforcing that there's no NULLs in the arrays? That code
> >is pretty new (in fact, I thought it was only in 8.2...), so it's
> >possible that therein lies the bug.
> >
> >I'd also look into the size of the varchars... if they're big enough you
> >could be hitting a memory limit somewhere.
> >
> >In any case, the error message could certainly use improving... :)
> >  

-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] postgres in HA constellation

2006-10-10 Thread Jim C. Nasby
On Fri, Oct 06, 2006 at 06:34:25AM -, Sebastian Reitenbach wrote:
> > I think PITR would be a much better option to protect against this,  
> > since you could probably recover up to the exact point of failover.
> > 
> > When it comes to the actual failover, take a look at the HA-linux  
> > project. They've got some stuff you could probably use (such as the  
> > heartbeat program). Another really good idea is to give the backup  
> > machine to kill the power to the primary machine, and not have either  
> > machine mount the shared storage at bootup.
> As I am using carp on OpenBSD to set up HA clusters, I am very comfortable 
> with ucarp, the userland implementation for Linux. At boot the database 
> files are mounted read-only and the database is not running; then ucarp 
> starts, and only if this machine becomes master does it remount the 
> postgres data read-write and start the database.
> I also thought about killing the power of the "lost" master after a 
> takeover, to make sure the machine will not come back unconditionally 
> later.

Heh, I'd assumed you were on Linux, but of course there's no reason you
couldn't set up HA on OpenBSD. The key is just to make sure that you
never bring up two servers on the same data directory.
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] Recursive use

2006-10-10 Thread Jim C. Nasby
On Fri, Oct 06, 2006 at 10:37:26AM -0500, Jay A. Kreibich wrote:
>   These are generally referred to as "Hierarchical Queries" and center
>   around the idea of a self-referencing table (such as an employee
>   table with a "manager" field that is a FK to another row in the same
>   table).  This essentially makes a tree-like structure.

>   As pointed out by others, the most general way to deal with this in
>   PostgreSQL is to write PL/PgSQL (or some other language) functions
>   that can generate the specific queries you need.  It isn't always
>   pretty, but it can be made to work for a specific set of queries.

There are also other ways to represent this type of information without
using hierarchical queries. Joe Celko presents two methods in SQL For
Smarties.

There's also the ltree module in contrib that might be of some use.
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [ADMIN] Copying data from table to table (cloned tables)

2006-10-10 Thread Jim C. Nasby
On Mon, Oct 09, 2006 at 05:02:07PM -0700, Bricklen Anderson wrote:
> Fourat Zouari wrote:
> >Hello all,
> >Any one could suggest the best way to copy data from table to table in 
> >the same db, the reason why am seeking for this is that the first table 
> >is becoming very big, and old data has no reason why to stay there, so i 
> >created a cloned table but without indexes and constraints (lighter) and 
> >now i would like to copy 'some' data from first to second table (copied 
> >data is data older than 3 months, i have a timestamp column).
> >
> >In other way, i have a table called 'hotqueues' where i store fresh 
> >messages queued, once messages are treated, they stay in 'hotqueues' but 
> >with a flag indicating that their arent queued for treatment..
> >so in this way, data will rest there forever, slowing down any searches 
> >in that table, the solution was to copy old messages to another table 
> >called 'coldqueues' that has the same structure as 'hotqueues' but 
> >lighter (without constraints and indexes).
> >How to copy these data with 100% data-loose free.
> >
> >Thanks for any help you can provide.
> 
> If you just want to copy the data across to the other table:
> begin;
> insert into table2 select * from table1 where <condition>;
> commit;
> 
> if you also want to remove that same data from table1:
> begin;
> insert into table2 select * from table1 where <condition>;
> delete from table1 where <condition>;
> commit;

You need to be careful with this method. For what the OP wants to do it
would probably work, but not always. The problem is that in some
scenarios, <condition> won't necessarily select the same set of rows in
both statements.

Starting in 8.2 you'll be able to do something like

DELETE FROM table1 WHERE <condition> RETURNING *;

The RETURNING * will return all the data that the command deleted, so
the client can feed it straight into table2. In older versions, your
best bet is to store the data you're moving in a temporary table, and
then use that to delete the exact rows.
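
A sketch of that, reusing the OP's table names and assuming a primary key
id and a timestamp column ts (both hypothetical):

BEGIN;
CREATE TEMP TABLE moving AS
    SELECT * FROM hotqueues WHERE ts < now() - interval '3 months';
INSERT INTO coldqueues SELECT * FROM moving;
DELETE FROM hotqueues WHERE id IN (SELECT id FROM moving);
COMMIT;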
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [ADMIN] Copying data from table to table (cloned tables)

2006-10-10 Thread Jim C. Nasby
On Tue, Oct 10, 2006 at 10:24:57AM -0500, Jim C. Nasby wrote:
> > If you just want to copy the data across to the other table:
> > begin;
> > insert into table2 select * from table1 where <condition>;
> > commit;
> > 
> > if you also want to remove that same data from table1:
> > begin;
> > insert into table2 select * from table1 where <condition>;
> > delete from table1 where <condition>;
> > commit;

I forgot to mention you could also use a serializable transaction with
your method...

BEGIN ISOLATION LEVEL SERIALIZABLE;
INSERT ...
DELETE ...
COMMIT;

Just remember you'll need to deal with the possibility of a 'could not
serialize' error.
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [ADMIN] How to remove a superuser

2006-10-10 Thread Jim C. Nasby
On Tue, Oct 10, 2006 at 02:22:53PM -0500, Richard Ray wrote:
> I need to drop a superuser account. I get
> 
> dba=# drop role bram;
> ERROR:  role "bram" cannot be dropped because some objects depend on it
> DETAIL:  owner of view teacher
> owner of type teacher
> owner of view temp1
> owner of type temp1
> ...
> 
> What is the preferred method?

You need to reassign ownership of those objects to someone else.
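
In 8.1 that's one ALTER per object (views are altered via ALTER TABLE),
e.g.:

ALTER TABLE teacher OWNER TO someone_else;

Starting in 8.2 there will also be a single command that reassigns
everything a role owns:

REASSIGN OWNED BY bram TO someone_else;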
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [ADMIN] Recursive use

2006-10-10 Thread Jim C. Nasby
On Tue, Oct 10, 2006 at 04:33:46PM -0500, Jay A. Kreibich wrote:
> On Tue, Oct 10, 2006 at 10:15:42AM -0500, Jim C. Nasby scratched on the wall:
> > On Fri, Oct 06, 2006 at 10:37:26AM -0500, Jay A. Kreibich wrote:
> > >   These are generally referred to as "Hierarchical Queries" and center
> > >   around the idea of a self-referencing table (such as an employee
> > >   table with a "manager" field that is a FK to another row in the same
> > >   table).  This essentially makes a tree-like structure.
> > 
> > >   As pointed out by others, the most general way to deal with this in
> > >   PostgreSQL is to write PL/PgSQL (or some other language) functions
> > >   that can generate the specific queries you need.  It isn't always
> > >   pretty, but it can be made to work for a specific set of queries.
> > 
> > There are also other ways to represent this type of information without
> > using hierarchical queries. Joe Celko presents two methods in SQL For
> > Smarties.
> 
>   If you're referring to Joe's March 1996 DBMS article,
>   (http://www.dbmsmag.com/9603d06.html) he does demonstrate two models,

Nope... I'm talking about his book SQL For Smarties

>   For one, the table requires an extreme amount of maintenance-- something
>   as simple as inserting a single leaf node may require updating every
>   row in the whole table.  On average, more than half the nodes/rows will
>   require updating for each record insertion and removal, but it isn't clear
>   how this update process would work (since the sequences require a
>   traversal to update, but a proper traversal requires the correct
>   sequences).  There are tricks for the simple cases, but I'm not sure
>   you could do an update in-place in the general case.

The book goes into great detail about how to handle inserts and deletes.

Something he doesn't talk about that I think would be of use is putting
intentional gaps in the nested set. That would greatly reduce the
overhead of many operations (at the cost of added code complexity). For
example, if instead of incrementing each sequence by one, you
incremented by 10, you could insert 9 new values into one location in
the tree before you'd have to worry about moving other numbers around.
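
A minimal sketch of that idea (hypothetical table; lft/rgt are the nested
set numbers, spaced by 10):

CREATE TABLE org (
    node_id integer PRIMARY KEY,
    name    text NOT NULL,
    lft     integer NOT NULL,
    rgt     integer NOT NULL
);
INSERT INTO org VALUES (1, 'root',  10, 60);
INSERT INTO org VALUES (2, 'child', 20, 50);
INSERT INTO org VALUES (3, 'leaf',  30, 40);
-- all descendants of root; the gaps don't affect containment queries:
SELECT c.name
  FROM org c, org p
 WHERE p.name = 'root' AND c.lft > p.lft AND c.lft < p.rgt;

A new node can take any unused pair inside its parent's range (say 32/34
under 'leaf') without renumbering anything else.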

>   I also noticed that Joe has a book out titled "Joe Celko's Trees and
>   Hierarchies in SQL for Smarties".  I have not yet had a chance to
>   review this book (other than the on-line table of contents) but it
>   looks interesting.  While much of this is on graphs and more general
>   edge/node structures, a fair bit of the book appears to be about this
>   type of tree structure.  He goes into more detail on some of these
>   issues, such as insertion and deletion times, and tricks to play for
>   inserting whole sub-trees, and that kind of thing.  Maybe the book
>   would sell the so-called "nested-set" implementation a bit better,
>   but it still strikes me as a solution for warehouses, not OLTP style
>   stuff.  I might have to find this book and have a closer read.

I'd be curious to know if that book is just an out-take of SQL For
Smarties, or if he adds new information. I can tell you that Smarties is
a book every serious database architect should have.
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [ADMIN] How to remove a superuser

2006-10-10 Thread Jim C. Nasby
On Tue, Oct 10, 2006 at 04:07:19PM -0500, Richard Ray wrote:
> On Tue, 10 Oct 2006, Jim C. Nasby wrote:
> 
> >On Tue, Oct 10, 2006 at 02:22:53PM -0500, Richard Ray wrote:
> >>I need to drop a superuser account. I get
> >>
> >>dba=# drop role bram;
> >>ERROR:  role "bram" cannot be dropped because some objects depend on it
> >>DETAIL:  owner of view teacher
> >>owner of type teacher
> >>owner of view temp1
> >>owner of type temp1
> >>...
> >>
> >>What is the preferred method?
> >
> >You need to reassign ownership of those objects to someone else.
> >
> 
> I can reassign the view but not the type.
> 
> 
> Welcome to psql 8.1.0, the PostgreSQL interactive terminal.
> 
> Type:  \copyright for distribution terms
>\h for help with SQL commands
>\? for help with psql commands
>\g or terminate with semicolon to execute query
>\q to quit
> 
> dba=# alter table teacher owner to bjay;
> ALTER TABLE
> dba=# alter type teacher owner to bjay;
> ERROR:  "teacher" is a table's row type

Yes, unfortunately you'll have to re-create the view as another owner.
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [ADMIN] postgres in HA constellation

2006-10-11 Thread Jim C. Nasby
On Wed, Oct 11, 2006 at 10:28:44AM -0400, Andrew Sullivan wrote:
> On Thu, Oct 05, 2006 at 08:43:21PM -0500, Jim Nasby wrote:
> > Isn't it entirely possible that if the master gets trashed it would  
> > start sending garbage to the Slony slave as well?
> 
> Well, maybe, but unlikely.  What happens in a shared-disc failover is
> that the second machine re-mounts the same partition as the old
> machine had open.  The risk is the case where your to-be-removed
> machine hasn't actually stopped writing on the partition yet, but
> your failover software thinks it's dead, and can fail over.  Two
> processes have the same Postgres data and WAL files mounted at the
> same time, and blammo.  As nearly as I can tell, it takes
> approximately zero time for this arrangement to make such a mess that
> you're not committing any transactions.  Slony will only get the data
> on COMMIT, so the risk is very small.
 
Hrm... I guess it depends on how quickly the Slony master would stop
processing if it was talking to a shared-disk that had become corrupt
from another postmaster.

> > I think PITR would be a much better option to protect against this,  
> > since you could probably recover up to the exact point of failover.
> 
> That oughta work too, except that your remounted WAL gets corrupted
> under the imagined scenario, and then you copy the next updates to
> the WAL.  So you have to save all the incremental copies of the WAL
> you make, so that you don't have a garbage file to read.
> 
> As I said, I don't think that it's a bad idea to use this sort of
> trick.  I just think it's a poor single line of defence, because when
> it fails, it fails hard.

Yeah, STONITH is *critical* for shared-disk.
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [ADMIN] [PERFORM] autovacuum on a -mostly- r/o table

2006-10-18 Thread Jim C. Nasby
On Sun, Oct 15, 2006 at 04:52:12PM +0200, Tobias Brox wrote:
> Are there any logs that can help me, and eventually, are there any
> ready-made scripts for checking when autovacuum is running, and
> eventually for how long it keeps its transactions?  I'll probably write
> up something myself if not.

8.2 adds some stats on when autovac last ran, per-table. I don't
remember if it reports how long it took to vacuum the table, but that
would probably be useful info.
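
In 8.2 it's something like this (a sketch, based on the new per-table
timestamp columns in the stats views):

SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
  FROM pg_stat_user_tables;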
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [ADMIN] Rules/Triggers executio order

2006-10-18 Thread Jim C. Nasby
On Wed, Oct 18, 2006 at 06:42:21PM -0600, Benjamin Krajmalnik wrote:
> I have a partitioned table to which I route data using a trigger.
> I am changing it to use a set of rules which executes "INSTEAD" on
> insert.
> The parent table currently has a trigger.
>  
> The system is a live system.  I would prefer to not have to suspend the
> data flow.
> If I create the rules, and given the fact that they execute "INSTEAD" of
> the insertion into the parent table, will the trigger still execute?
> I will be removing the trigger immediately after the creation of the
> rules, but just want to be safe and make sure I get no duplicate
> entries.

An INSTEAD rule replaces the query that you originally issued with the
one re-written by the rule, so no, the trigger shouldn't fire. But you
should test to make sure.
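
For reference, a minimal sketch of such a rule (names hypothetical,
modeled on the partitioning docs):

CREATE RULE parent_insert_2006_11 AS
ON INSERT TO parent
WHERE NEW.logdate >= DATE '2006-11-01' AND NEW.logdate < DATE '2006-12-01'
DO INSTEAD
    INSERT INTO parent_2006_11 VALUES (NEW.*);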
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [ADMIN] CLUSTER using more memory than expected

2006-10-26 Thread Jim C. Nasby
On Wed, Oct 25, 2006 at 10:13:59AM -0600, Joshua Marsh wrote:
> My version is: 8.0.3

You should really upgrade to 8.0.9; there are crash and data loss bugs
that have been fixed since 8.0.3.
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [ADMIN] Vacuum Problems (locking)

2006-10-26 Thread Jim C. Nasby
On Wed, Oct 25, 2006 at 04:00:29PM -0400, Mike Goldner wrote:
> I just noticed that my nightly vacuum failed the previous three nights
> with the following error:
> 
> [2032-jbossdb-postgres-2006-10-24 04:12:30.019 EDT]ERROR:  failed to re-
> find parent key in "jms_messages_pkey"
> [2032-jbossdb-postgres-2006-10-24 04:12:30.019 EDT]STATEMENT:  VACUUM
> ANALYZE;
> 
> I'm not sure if it is related, but it seems pertinent.

Well, vacuuming a lot of databases once a day is bad enough. Not
vacuuming for 3 days...

IIRC you can fix that issue by reindexing jms_messages_pkey.
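
That is:

REINDEX INDEX jms_messages_pkey;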
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [ADMIN] Backing Up Partial Databases

2006-10-26 Thread Jim C. Nasby
On Thu, Oct 26, 2006 at 04:08:05PM +0930, Shane Ambler wrote:
> >I have two identical databases (primary and backup) and I need to 
> >transfer a small portion of missing data from one to the other. Is there 
> >an easier way to do this rather than SELECT-ing the interval then 
> >parsing the output to look like INSERT statements?
> >
> 
> I can think of a couple of ways -
> 
> COPY (SELECT * FROM myPrimaryTable WHERE id<200) TO '/tmp/myData';
 
Only in 8.2...

> the --inserts option of pg_dump will export as INSERT's if that is the 
> way you want to do it. You will need to do a find and replace on the 
> INSERT's to change the table name though.

Though, I don't see any reason you'd want to do that; just stick with
the pg_dump defaults.

Another option is to use dbi_link.
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [ADMIN] Copying data from table to table (cloned tables)

2006-10-30 Thread Jim C. Nasby
On Fri, Oct 27, 2006 at 12:38:51PM -, shaik mastan dilshad wrote:
> Hi all,
> 
> Can anyone tell me how to retrieve data that has been deleted, i.e. how to 
> roll back a previously committed transaction.
> 
> Look forward to hearing from all of you.

Please start a new thread instead of replying to an existing one.

You can only rollback an uncommitted transaction. Once you commit, the
changes are permanent.

Your only other option is to set up Point In Time Recovery (PITR), and
use it to restore a copy of your database to just before the commit of
the transaction you want to back out. Of course, everything that's
happened in the database since then will also be gone.
-- 
Jim C. Nasby, Database Architect[EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] Is there anyway to...

2006-11-07 Thread Jim C. Nasby
Moving to -general (and please start a new thread instead of hijacking
an existing one).

On Thu, Nov 02, 2006 at 01:14:22PM -0500, louis gonzales wrote:
> Hello all,
> Is there an existing mechanism in postgresql that can automatically 
> increment/decrement on a daily basis w/out user interaction?  The use 
> case I'm considering is where a student is in some type of contract with 
> an instructor of some sort, and that contract puts a time limit on the 
> student requiring her to pay a fee by a certain day.  IF that day comes 
> to pass - or a certain number of days elapse - and that payment 
> requirement hasn't been met, I want to trigger a function.
> 
> The one requirement I want to impose is, that no end user of the DB 
> application, needs to do anything to set the trigger, other than the 
> initialization of making the student of this type.
> 
> An example would be:
> Day1 - Application user(typically the instructor) creates a profile for 
> a new student - John Doe, which sets a 30 day time limit for John Doe to 
> pay $100.00
> Day2 -> Day31 - John Doe didn't make the payment
> Day 31 - Trigger of event occurs when the instructor logs in.
> 
> Basically on Day 1 when John Doe's profile was created, I want a 
> decrement counter to occur daily on his profile(some attribute/timer) 
> and nothing should happen until day 31 when he doesn't pay.

While you could setup a cron job to decrement some counter every day, I
think that's not the best approach. Instead, I'd run a query once a day
that finds all students that are past-due and takes some kind of action.
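
Something like this, run once a day from cron via psql (a sketch; the
schema and column names are hypothetical):

SELECT student_id
  FROM students
 WHERE paid = false
   AND contract_start <= current_date - 30;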
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [ADMIN] createlang on Mac OS X

2006-11-07 Thread Jim C. Nasby
On Sun, Nov 05, 2006 at 08:35:54AM -0800, [EMAIL PROTECTED] wrote:
> I installed the PostgreSQL tools for Mac from the sourceforge site
> (8.1.5). The server and client tools installed fine and the server is
> running and accessible. However, it doesn't appear that the language
> libraries got installed, as I can't install plpgsql. I get:
> 
> could not access file "$libdir/plpgsql": No such file or directory
> 
> Unfortunately the bug reporting system on the postgresqlformac site
> isn't working. Has anyone here gotten the language libraries to install
> for use by this package? Is there a way to build and install just those
> files? I like the convenience of the binary installer from the
> postgresqlformac folks - are there other projects that have a binary
> installer for Mac? I really don't want to recompile every time an
> update comes out.

I haven't looked at the postgresqlformac stuff (you might try emailing
them...), but you might want to look at using http://macports.org. They
have a number of PostgreSQL ports, and they build plpgsql.
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [ADMIN] Memory space

2006-11-07 Thread Jim C. Nasby
On Mon, Nov 06, 2006 at 11:16:46AM -0600, Nikola Radakovic wrote:
> Dear community,
> 
> First of all, I am completely new on this mailing list.
> I have read tons of tutorials, but I can't find how to set up the
> following. I know how to add a database for each user, but how do I
> control in Postgres 8.1 how big each assigned database may be on the
> server machine?
> 
> For instance:
> I want that database cash for user1 occupies 500Mb disk space.
> database calendar for user2  occupies 100Mb
> etc...

Unlike some other databases, PostgreSQL doesn't deal with size limits.
Unless you really want to limit the size of each database there's
nothing to worry about. If you do want to limit the size, one option
is to create a cluster for each user and let OS quotas limit the
size (though many versions of PostgreSQL don't react too kindly to
running out of space).

Another option is to use one cluster, but create each database with its
own tablespace, and only give users access to their tablespaces. It's
not quite as fool-proof as multiple clusters, but it's a lot easier on
system resources.
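
A sketch of the tablespace approach (paths and names hypothetical; the
directory must already exist and be owned by the postgres user):

CREATE TABLESPACE user1_space LOCATION '/srv/quota/user1';
CREATE DATABASE cash OWNER user1 TABLESPACE user1_space;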
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [ADMIN] Use Jobs

2006-11-07 Thread Jim C. Nasby
On Tue, Nov 07, 2006 at 03:34:36PM -0200, Alexander Burbello wrote:
> Where can I find Job contrib and how to install jobs on postgres?

Right now the job agent that comes with pgAdminIII is your only option,
though I'm hoping to get started again on the pgjobs project on
pgFoundry.
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [ADMIN] Archive WAL Logs Failed

2006-11-07 Thread Jim C. Nasby
You're not by chance doing a filesystem copy into
/b01/pgdata/archive/db1?

On Tue, Nov 07, 2006 at 10:15:01AM -0800, Subbiah, Stalin wrote:
> Hi All,
> 
> Yesterday, we had archiver problems in copying archived WAL files from
> pg_xlog to archive destination at the end of full hotbackup. Below are
> the errors from server logs.
> What makes me to wonder is that why does archiver tries to copy
> "000100080074.0066C9C0.backup" again when it's already been
> archived. As soon I removed "000100080074.0066C9C0.backup"
> from archive location, then archiver was happy to copy again and moved
> on to archiving 000100080074 etc.
> 
> 2006-11-07 00:35:23 CST  LOG:  archived transaction log file
> "000100080074.0066C9C0.backup"
> 2006-11-07 00:35:23 CST  LOG:  archive command "test ! -f
> /b01/pgdata/archive/db1/000100080074.0066C9C0.backup && cp
> pg_xlog/000100080074.0066C9C0.backup
> /b01/pgdata/archive/db1/000100080074.0066C9C0.backup"
> failed: return code 256
> 
> Any idea what may be going on here.
> 
> Thanks,
> Stalin
> 
> PG814, RH4.0
> 
> 
> 

-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [ADMIN] Problems changing column attribute

2006-11-08 Thread Jim C. Nasby
On Tue, Nov 07, 2006 at 10:36:54AM +, Kuckuck Kuckuck wrote:
> Hi folks, I have been breaking my head over this.
> I log in as testuser via pgAdmin.
> I create a table as user 'testuser' and add some columns; one of them, for 
> example, is name: 'testcolumn', type: varchar, length: 50.
> Now I'd like to change testcolumn's length to 60. As user 'testuser' I'm 
> not allowed to change it, and pgAdmin tells me that I don't have the 
> rights to change relation pg_attribute.
> 
> If I change the length of 'testcolumn' from an admin account it works. Now 
> I granted all privileges for this table to user 'testuser' via the admin 
> account, but still I cannot change the length if I log in as 'testuser'.
> 
> Is there any chance to grant some user (in my case 'testuser') the right 
> to change attributes, without making him superuser?

Can you reproduce this using psql? If you can, please provide us a
complete test case so others can look at it. If not, you'll need to post
on the pgAdmin list.
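
FWIW, the supported way to widen the column in 8.0 and later (as the
table's owner) is ALTER TABLE rather than touching pg_attribute directly,
which is apparently what pgAdmin is attempting here. A sketch, with a
hypothetical table name:

ALTER TABLE testtable ALTER COLUMN testcolumn TYPE varchar(60);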
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [ADMIN] Checkpoint Location Format

2006-11-08 Thread Jim C. Nasby
On Tue, Nov 07, 2006 at 01:12:53PM +, Andy Shellam (Mailing Lists) wrote:
> Hi,
> 
> I'm writing an automated file-system level backup application for use 
> with WAL archiving, that will issue the pg_start_backup call, tar and 
> gzip the cluster data directory, issue the pg_stop_backup call, and 
> remove all previous un-needed WAL files from the archive.
> 
> I need to write a regular expression that will search for the WAL 
> filename and checkpoint location from the backup_label file, and just 
> want to clarify that the checkpoint location will always be of the 
> format: X/XX - where X is one of 0-9, A-F?
> 
> And then the WAL .backup file that is generated in the archive, has a 
> filename of the form:
> 
> <WAL filename>.00XX.backup
> 
> where <WAL filename> is the name of the "STARTING WAL LOCATION" directive in 
> the backup_label file, and XX is the last 6 digits of the checkpoint 
> (after the / )?

I don't know the answer, but since no one's replied I suggest looking in
the code. Looking at the source of pg_start_backup would probably be a
good start, though I'm guessing the real answer is somewhere in the
backend.
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [ADMIN] Manual sys catalog constraint setup to avoid downtime?

2006-11-08 Thread Jim C. Nasby
On Tue, Nov 07, 2006 at 11:05:38AM -0500, Jerry Sievers wrote:
> Admins; this is (I assume) a not uncommon scenario.
> 
> Suppose you are managing a BIG ugly DB that's in 24x7 production and
> desperately in need of constraints of all types.
> 
> (Of course, you inherited this piece of crap and the irresponsible
> designers are long gone :-)
> 
> You have verified that the application now honors the new constraints
> because on your R&D DBs, the constraints are in place and no
> constraint violations are occuring.
> 
> You can't however get business to sign off on proper constraint
> addition via ALTER TABLE because the tables are huge and this locks
> out concurrent activity for several hours.
> 
> My question;
> 
> Have any of you setup the constraints by doing manual catalog updates
> and later (or maybe never),  gone back and cleaned up the
> non-conforming data using small batch updates or deletes?
> 
> Frobing of pg_constraint and pg_depend (at least) would be required.
> 
> I'm wondering what non-obvious pitfalls exist here?
> 
> Comments?

Use the source, Luke!

Hmmm... interesting problem. For some constraints, I think you could
probably handle things pretty well once we have non-blocking index
creation (did that get into 8.2?). I guess the real key there would be
if the constraint creation code will use indexes when checking the
constraints. This would be especially useful for generic CHECK
constraints... if you build an index that has a WHERE clause that
matches the proposed constraint, the backend could just look to see if
there are any entries in the index. If there aren't, you're good to go.

In the meantime, I suggest using triggers to enforce your constraints,
since you can create one without verifying the underlying data.
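
E.g., to get NOT NULL-style enforcement on new rows without scanning the
existing data (a sketch; all names hypothetical):

CREATE FUNCTION check_customer_id() RETURNS trigger AS $$
BEGIN
    IF NEW.customer_id IS NULL THEN
        RAISE EXCEPTION 'customer_id may not be NULL';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER enforce_customer_id BEFORE INSERT OR UPDATE ON big_table
    FOR EACH ROW EXECUTE PROCEDURE check_customer_id();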
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [ADMIN] pg_dump and foreign keys troubles

2006-11-08 Thread Jim C. Nasby
On Tue, Nov 07, 2006 at 04:42:35PM -0500, Tom Lane wrote:
> "Paolo Negri" <[EMAIL PROTECTED]> writes:
> > My problem is that I have to load the dump (from a 7.3) in a brand new
> > database (8.1) and then I need to do quite a few changes to the schema
> > dropping some old foreign constraint and adding some different ones.
> > ...
> > How can I tell to pg_dump to dump the schema with foreign keys
> > construct instead of creating constraint trigger?
> 
> You can't: the information just isn't there in 7.3.  You'll need to look
> at the trigger arguments and reconstruct the FKs.
> 
> IIRC the contrib/adddepend script might be able to help you with this.

Would a 7.4 or newer pg_dump connecting to a 7.4 database know enough to
do this for you? If so, Paolo could restore his dump to a 7.3 install
and go from there...
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [ADMIN] Use Jobs

2006-11-08 Thread Jim C. Nasby
On Wed, Nov 08, 2006 at 09:29:28PM +1300, adey wrote:
> Jim - is there a component that needs to be installed on the database server
> with Postgres that PGAdmin communicates with please, or is only PGAdmin
> required?

I know there's a server-side set of components for pgAdmin, but I don't
know if the job agent needs them or not.

BTW, my main motivation behind pgJob is to have something that has no
external dependencies, so that you can just install it in any database.
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] statistics buffer is full

2006-11-08 Thread Jim C. Nasby
On Wed, Nov 08, 2006 at 09:46:48PM -0800, Jeff Frost wrote:
> On Wed, 8 Nov 2006, Joshua D. Drake wrote:
> 
> >>hah!  Does that mean it's just the buffering for pg_stat_activity and not 
> >>for
> >>planner statistics?  That doesn't make me feel too bad in this case.
> >
> >Yeah it is nothing to worry about, however turning it off is certainly a
> >performance benefit.
> >
> 
> So, then the next question: is that error because the command string was 
> too long or because there were too many command strings in the buffer at 
> once (i.e. it got backed up a bit and filled)?

IIRC, (in 8.1) the same amount of data is shuffled through the stats
system for every command, so it'd be an issue of too many messages.

> It would be a drag to be without pg_stat_activity on this system.  The 
> performance detriment is lessened in 8.2, correct?

Correct. Performance of updating the info for ps was also improved,
iirc.
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [ADMIN] Use Jobs

2006-11-13 Thread Jim C. Nasby
Right now the only job scheduling utility for PostgreSQL is the job
agent that was written by the pgAdmin folks. Or of course you could use
your OS's built-in scheduling.

On Fri, Nov 10, 2006 at 04:57:42PM -0200, Alexander Burbello wrote:
> Sorry, but now I am in a doubt!!
> 
> pgAdmin isn't a tool???
> How can I install the feature Jobs and how can I schedule some procedure?
> 
> Jim C. Nasby escreveu:
> 
> >On Wed, Nov 08, 2006 at 09:29:28PM +1300, adey wrote:
> > 
> >
> >>Jim - is there a component that needs to be installed on the database 
> >>server
> >>with Postgres that PGAdmin communicates with please, or is only PGAdmin
> >>required?
> >>   
> >>
> >
> >I know there's a server-side set of components for pgAdmin, but I don't
> >know if the job agent needs them or not.
> >
> >BTW, my main motivation behind pgJob is to have something that has no
> >external dependencies, so that you can just install it in any database.
> > 
> >
> 

-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [ADMIN] Manual sys catalog constraint setup to avoid downtime?

2006-11-13 Thread Jim C. Nasby
On Thu, Nov 09, 2006 at 08:46:42AM -0500, Jerry Sievers wrote:
> 2. Create new NOT NULL, CHECK and/or FK constraints by inserting them
>into the catalogs directly.  NOT NULL is the simplest and requires
>only an update to pg_attribute table.  CHECK constraints a bit more
>tricky and requires figuring out what goes in the conbin field of
>pg_constraint table.  FK the most difficult as it requires adding
>recs to all of pg_constraint, pg_trigger and pg_depend.  The
>trigger I guess can be created using standard CREATE TRIGGER
>operation.
> 
> Having done this, we are NOT absolutely guaranteed that the new
> constraints are expressed in the DB due to possible client activity
> that ran during or after our bulk delete or updates in step #1.  We
> are however insured that new activity as of the constraint additions
> is conforming.
> 
> 3. Let the DB age beyond any transactions that may have been open
>during our initial cleanup pass and repeat cleanup steps again.
>Now, we should be up to snuff and fine going forward.

You should check into whether other backends will pick those catalog
changes up automagically or not... you may have to restart all
connections for those changes to take effect.

BTW, it'd probably be worthwhile looking at the code that creates the
different constraints to see exactly what it's doing.

And you're right... this is not something the community supports. :)
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [ADMIN] running initdb on running database

2006-11-14 Thread Jim C. Nasby
On Mon, Nov 13, 2006 at 03:10:14PM -0600, Ben K. wrote:
> 
> Admins,
> 
> I'm reading this post:
> 
> http://archives.postgresql.org/pgsql-novice/2005-04/msg00221.php
> 
> Is it possible and safe to run initdb again on a running database to 
> change encoding from SQL_ASCII to UTF8?
> 
> Will it affect new databases only or existing databases as well?

Pretty much guaranteed to be unsafe. It's also not possible unless you
hack initdb.
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [ADMIN] replication?

2006-11-14 Thread Jim C. Nasby
On Mon, Nov 13, 2006 at 10:21:40PM +0100, TomDB wrote:
> 
> I'd like to create a setup where there are exactly 2 masters which
> need to be synchronized against each other. The two masters should
> have read+write access, but don't have a permanent connection to each
> other. One will be on a LAN-server, the other instance will be
> installed on a Windows-client-laptop which can be taken along the
> road. When coming back to the office, we need to sync the changed
> data.
> 
> is there any tool that will allow me to do so ?

If you got *really* clever you could maybe pull it off with Slony, but
in general, no, this isn't possible.
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [ADMIN] Performance question

2006-12-04 Thread Jim C. Nasby
143))
> > > ->  Index Scan using tblkssnapshot07_idx_monthtest on
> > > tblkssnapshot07 tblkssnapshotdaily  (cost=0.00..5.94 rows=1 width=8)
> > >   Index Cond: ((monthdate = '2006-12-01'::date) AND 
> > > (kstestssysid = 3143))
> > > ->  Index Scan using tblkssnapshot08_idx_monthtest on
> > > tblkssnapshot08 tblkssnapshotdaily  (cost=0.00..4.73 rows=1 width=8)
> > >   Index Cond: ((monthdate = '2006-12-01'::date) AND 
> > > (kstestssysid = 3143))
> > > ->  Index Scan using tblkssnapshot09_idx_monthtest on
> > > tblkssnapshot09 tblkssnapshotdaily  (cost=0.00..5.56 rows=1 width=8)
> > >   Index Cond: ((monthdate = '2006-12-01'::date) AND 
> > > (kstestssysid = 3143))
> > > ->  Index Scan using tblkssnapshot10_idx_monthtest on 
> > > tblkssnapshot10 tblkssnapshotdaily  (cost=0.00..5.75 rows=1 width=8)
> > >   Index Cond: ((monthdate = '2006-12-01'::date) AND 
> > > (kstestssysid = 3143))
> > > ->  Index Scan using tblkssnapshot11_idx_monthtest on
> > > tblkssnapshot11 tblkssnapshotdaily  (cost=0.00..5.81 rows=1 width=8)
> > >   Index Cond: ((monthdate = '2006-12-01'::date) AND 
> > > (kstestssysid = 3143))
> > > ->  Index Scan using tblkssnapshot12_idx_monthtest on
> > > tblkssnapshot12 tblkssnapshotdaily  (cost=0.00..4.33 rows=1 width=8)
> > >   Index Cond: ((monthdate = '2006-12-01'::date) AND 
> > > (kstestssysid = 3143))
> > > ->  Index Scan using tblkssnapshot13_idx_monthtest on
> > > tblkssnapshot13 tblkssnapshotdaily  (cost=0.00..4.83 rows=1 width=8)
> > >   Index Cond: ((monthdate = '2006-12-01'::date) AND 
> > > (kstestssysid = 3143))
> > > ->  Index Scan using tblkssnapshot14_idx_monthtest on
> > > tblkssnapshot14 tblkssnapshotdaily  (cost=0.00..4.83 rows=1 width=8)
> > >   Index Cond: ((monthdate = '2006-12-01'::date) AND 
> > > (kstestssysid = 3143))
> > > ->  Index Scan using tblkssnapshot15_idx_monthtest on
> > > tblkssnapshot15 tblkssnapshotdaily  (cost=0.00..4.83 rows=1 width=8)
> > >   Index Cond: ((monthdate = '2006-12-01'::date) AND 
> > > (kstestssysid = 3143))
> > > ->  Index Scan using tblkssnapshot16_idx_monthtest on
> > > tblkssnapshot16 tblkssnapshotdaily  (cost=0.00..4.83 rows=1 width=8)
> > >   Index Cond: ((monthdate = '2006-12-01'::date) AND 
> > > (kstestssysid = 3143))
> > > ->  Index Scan using tblkssnapshot17_idx_monthtest on
> > > tblkssnapshot17 tblkssnapshotdaily  (cost=0.00..4.83 rows=1 width=8)
> > >   Index Cond: ((monthdate = '2006-12-01'::date) AND 
> > > (kstestssysid = 3143))
> > > ->  Index Scan using tblkssnapshot18_idx_monthtest on
> > > tblkssnapshot18 tblkssnapshotdaily  (cost=0.00..4.83 rows=1 width=8)
> > >   Index Cond: ((monthdate = '2006-12-01'::date) AND 
> > > (kstestssysid = 3143))
> > > ->  Index Scan using tblkssnapshot19_idx_monthtest on
> > > tblkssnapshot19 tblkssnapshotdaily  (cost=0.00..4.83 rows=1 width=8)
> > >   Index Cond: ((monthdate = '2006-12-01'::date) AND 
> > > (kstestssysid = 3143))
> > >     ->  Index Scan using tblkssnapshot20_idx_monthtest on 
> > > tblkssnapshot20 tblkssnapshotdaily  (cost=0.00..4.83 rows=1 width=8)
> > >   Index Cond: ((monthdate = '2006-12-01'::date) AND 
> > > (kstestssysid = 3143))
> > > ->  Index Scan using tblkssnapshot21_idx_monthtest on
> > > tblkssnapshot21 tblkssnapshotdaily  (cost=0.00..4.83 rows=1 width=8)
> > >   Index Cond: ((monthdate = '2006-12-01'::date) AND 
> > > (kstestssysid = 3143))
> > > ->  Index Scan using tblkssnapshot22_idx_monthtest on
> > > tblkssnapshot22 tblkssnapshotdaily  (cost=0.00..4.83 rows=1 width=8)
> > >   Index Cond: ((monthdate = '2006-12-01'::date) AND 
> > > (kstestssysid = 3143))
> > > ->  Index Scan using tblkssnapshot23_idx_monthtest on
> > > tblkssnapshot23 tblkssnapshotdaily  (cost=0.00..4.83 rows=1 width=8)
> > >   Index Cond: ((monthdate = '2006-12-01'::date) AND 
> > > (kstestssysid = 3143))
> > > ->  Index Scan using tblkssnapshot24_idx_monthtest on
> > > tblkssnapshot24 tblkssnapshotdaily  (cost=0.00..4.83 rows=1 width=8)
> > >   Index Cond: ((monthdate = '2006-12-01'::date) AND 
> > > (kstestssysid = 3143))
> > >
> > > If constraint exclusion is on, why are index scans taking place on all
> > > of the tables and not only on the one partitioned table for which the
> > > constraint exists?
> > >
> > > I went to a partitioned table thinking this would improve performance
> > > from the previous schema in which all of the data was in the parent
> > > table, but the end result was significantly slower performance (by
> > > orders of magnitude).
> > >
> > >
> > >
> > > ---(end of
> > > broadcast)---
> > > TIP 5: don't forget to increase your free space map settings
> > >
> > 
> > --
> > Jim Nasby[EMAIL PROTECTED]
> > EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)
> > 
> > 
> > 
> 
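
For reference, a minimal sketch of what constraint exclusion needs in order
to prune partitions (the table and column names here are made up):

    SET constraint_exclusion = on;

    -- each partition needs a CHECK constraint the planner can prove
    -- false against the query's WHERE clause
    ALTER TABLE sales_2006_12 ADD CONSTRAINT sales_2006_12_chk
        CHECK (saledate >= DATE '2006-12-01' AND saledate < DATE '2007-01-01');

A query with WHERE saledate = DATE '2006-12-15' should then scan only
sales_2006_12. Also note that exclusion only works when the WHERE clause
compares the partitioning column to constants.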

-- 
Jim C. Nasby, Database Architect[EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [ADMIN] Upgrading to 8.2, changes in user/group management scripts

2007-01-21 Thread Jim C. Nasby
On Fri, Jan 19, 2007 at 03:45:18PM -0600, Peter Koczan wrote:
> I'm upgrading our Postgres installation from 7.4 to 8.2 (I know, I know, but
> we've been busy). The biggest thing to change will be our user/group
> management scripts, as they directly modify pg_shadow and pg_group. Beyond
> being potentially stupid (but hey, I didn't write it), this just won't work
> anymore, for a few reasons, which I will get to in a bit.
> 
> What we used to do (all in a transaction block so it's nice and atomic):
> For users: Delete pg_shadow, read in logins/uids from /etc/passwd and create
> users, making sure the Unix uid matches up with the postgres sysid.
> For groups: For each group, drop the group, read in the new group from
> either another database table or some other mechanism.
> 
> Now, having the Unix uids match up with the Postgres sysids isn't such a big
> deal since we'll be using Kerberos, which can presumably match up based on
> login name. It was nice to have them match up, but it's probably not
> necessary. However, the above mechanisms won't work for the following
> reasons:
> 
> 1. pg_shadow and pg_group are now views, and you can't update views.
> 2a. Simply dropping a user is insufficient if the user owns anything in the
> database cluster.
> 2b. Deleting a user by removing the row from pg_authid works, but then the
> object owner is unknown and the object disappears unless you know where to
> look for it in the system tables. The objects are effectively orphaned.
> 3. There seems to be no way to recover the old sysid, as the "WITH SYSID
> uid" clause in CREATE USER/CREATE ROLE is now just noise, and trying to
> insert it directly into the system table proved fruitless. If you can't
> recover the old sysid, it leads to the orphaned objects problem described in
> 2b.
> 
> So, I'm wondering how I can do something roughly equivalent to this, I've
> been considering using some sort of "diff"-like mechanism, but it'd be nice
> to have something simpler. Any help would be greatly appreciated.

You already stated there's no reason to have the ID's match, so why
bother? Just check each user/group name to see if it already exists and
add it if it doesn't.
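
Roughly like this sketch (the role name is made up):

    -- does the role already exist?
    SELECT 1 FROM pg_roles WHERE rolname = 'someuser';

    -- only if that returned no row:
    CREATE ROLE someuser LOGIN;

In 8.2 users and groups are both roles, so the same check works for your
groups (just create them as NOLOGIN roles).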
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [ADMIN] Uregent- data Recovery - PG_CONTROL_VERSION mismatch

2007-01-21 Thread Jim C. Nasby
AFAIK PG_CONTROL_VERSION should never be 0, so it sounds like your
hardware corrupted something. IF you're doing all of this on the *same
hardware and OS*, I suggest the following:

1. Make a copy of the data directory (ie: cp -R).
2. With the 7.2 initdb, create a new cluster somewhere (doesn't matter
   where, we only need one file from it).
3. Copy global/pg_control from the cluster you just created into the copy
   of the cluster you made.
4. Try starting 7.2 against that copy.

pg_control is a safeguard mechanism to ensure that not only is the
version of the database compatible with the files on disk, but that the
endianness and other things match as well. That's why it's critical that
you do this on the same hardware.

If you do get the database up, I'd immediately pg_dumpall to a remote
machine, since your hardware seems untrustworthy.

On Sun, Jan 21, 2007 at 05:00:39AM -0800, ABC wrote:
> We are facing a problem and it's very urgent to sort it out. Please help.
> The database server unexpectedly shut down due to power failure. On 
> reboot, the PostgreSQL server fails to start, giving the following error:
> --
> LOG:  could not create IPv6 socket: Address family not supported by protocol
> FATAL:  database files are incompatible with server
> DETAIL:  The database cluster was initialized with PG_CONTROL_VERSION 0, but 
> the server was compiled with PG_CONTROL_VERSION 72.
> HINT:  It looks like you need to initdb.
> --
> 
> we are using Mandrake Linux 9.1, and two versions of PostgreSQL are 
> installed: 7.4.2 and 7.3.2.
> The data directory is initialised with postgresql-7.4.2.
> PostgreSQL 7.3.2 is not used; PostgreSQL 7.4.2 is used for managing the 
> Postgres server.
> 
> We have tried to initialize a new data directory and then copy the contents 
> of the previous data directory (which failed to start) into the new directory.
> After that, starting the postmaster from the new dir gives the same 
> message described above.
> 
> so, we have tried this one and after that we are able to start the database, 
> but now we are unable to use the data: 
> the select queries return 0 rows or give the error [ERROR:  invalid attribute 
> number 0 for ]
> 
> also we are not able to dump the database; the pg_dump command gives the 
> following error and then exits:
> 
> pg_dump: schema with OID 536944636 does not exist
> 
> Please suggest any solution. It's very urgent.
> Thanks And Regards
> Chandrakant Chouhan
>  
> -
> Sucker-punch spam with award-winning protection.
>  Try the free Yahoo! Mail Beta.

-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] Ditributed PosgreSQL

2007-04-20 Thread Jim C. Nasby
Bizgres/Greenplum is about as close as you'll get, unless you're talking
about replication ("distributed database" isn't exactly a standard
term).

On Mon, Apr 09, 2007 at 02:38:18AM -0700, s kh wrote:
> hi,
> I have a question about distributed databases,
> can we have a ditributed posgresql database now? 
> Does postgreSQL support distributed database system now?
> Can you send me a good reference for it?
> thanks.
>  
> -
> Sucker-punch spam with award-winning protection.
>  Try the free Yahoo! Mail Beta.

-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [ADMIN] Return data from function

2007-04-20 Thread Jim C. Nasby
The magic you're looking for is UNION ALL.

You could just union the two address tables together and join that to
the address code table, but that'll probably lead to an inefficient
plan. You'll likely need to join the address code table to each address
table separately, and then UNION ALL those together.
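
Something along these lines (a sketch; the table and column names are
invented, since you didn't post your schema):

    SELECT c.code, a.address, a.complement, a.city
      FROM address_codes c
      JOIN addresses_1 a ON a.code = c.code
    UNION ALL
    SELECT c.code, a.address, a.complement, a.city
      FROM address_codes c
      JOIN addresses_2 a ON a.code = c.code;

Wrap that in a SQL function (or just a view) and you don't need a cursor
at all.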

On Tue, Apr 17, 2007 at 04:17:36PM -0300, Alexander B. wrote:
> Hello,
> 
> I need to create a function that find data over 3 tables, merge and
> return data.
> With example can be specified more clearly:
> 
> 1.1 Find all addresses code;
> 1.2 For each addresses code, get address, complement and city on the
> first table;
> 1.3 For each addresses code, get address, complement and city on the
> second table;
> 1.4 Merge result from first and second tables and return to the function;
> 
> I would like to do not using cursor.
> Do you have any ideas how to implement?
> 
> Thanks in advance.
> 
> ---(end of broadcast)---
> TIP 6: explain analyze is your friend
> 

-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [ADMIN] User rights

2007-05-12 Thread Jim C. Nasby
On Wed, May 09, 2007 at 12:43:55PM -0700, ebmb wrote:
>  Hello all,
>  I'm a MySQL user and trying to migrate some applications to
> PostgreSQL 8.1.9. My first question is about Postgre administration...
> 
>  I would like to know if is possible to revoke users access to
> commands like "\l, \du, \dn". In MySQL, limited users cannot see other
> users (like root, for example) and other databases (like mysql, for
> example). Is it possible in Postgre?

Take a look at the setting db_user_namespace; it might help. You *might*
be able to get away with revoking select on pg_database, though you'd
have to do it in every database. Of course you'll want to fully
understand pg_hba.conf as well.
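
If you want to experiment with that (unsupported, and it has to be repeated
in every database):

    REVOKE SELECT ON pg_catalog.pg_database FROM public;

After that, \l should fail with a permission error for unprivileged users.
Be aware that some client tools expect to be able to read pg_database.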

When it comes to schemas, I think you're pretty much out of luck.
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [ADMIN] increasing of the shared memory does not solve the problem of "OUT of shared memory"

2007-05-12 Thread Jim C. Nasby
On Fri, May 11, 2007 at 04:58:28PM +0300, Sorin N. Ciolofan wrote:
> I increased significantly the number of shared buffers from 3000 to 100 000
> (80Mb)

BTW, 100,000 shared buffers is actually 800MB, not 80: each buffer is one
8kB page, so 100,000 * 8kB = 800MB.
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [ADMIN] Recovery question

2007-05-14 Thread Jim C. Nasby
On Thu, May 03, 2007 at 11:15:37AM -0400, Lorraine Mancuso wrote:
> Hi,
> 
>I am new to Postgres and had a few questions. We are testing the 
> recovery of a database cluster. We have read the documentation and have 
> been able to recover successfully but we were wondering about the 
> contents of the ARCHIVE_STATUS subdirectory. It appears that it is OK 
> for this directory to be empty during recovery. I just wanted to verify 
> that this is correct.
 
Yes, that's OK. That directory is just used as a means to tell the
archiver what logs it can archive.

>Also, is there a way to search the archives for all notes on a 
> particular subject?

http://archives.postgresql.org/

>Is this ADMIN list the appropriate forum for backup and recovery 
> questions?

Yes.
-- 
Jim Nasby  [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [ADMIN] query becomes much slower after Postgres 8.2.4 and Postgis 1.2.1 upgrade

2007-05-15 Thread Jim C. Nasby
On Tue, May 15, 2007 at 12:49:54PM -0700, Jessica Richard wrote:
> We had the following upgrades last week:
> 
> proj-4.4.9 to proj-4.5.0
> geos-2.2.1 to geos-2.2.3
> Postgis 1.1.1 to Postgis 1.2.1,
> then
> Postgres 8.1.4 to 8.2.4
> 
> 
> After the entire upgrades, a same query with geospatial functions that used 
> to come back in a few minutes, now take about 3 minutes to finish, no erors, 
> just much slower than before.
> 
> Has any one seen this performance issue after the upgrade before?

It can happen, yes. If it's a PostGIS-specific query I suggest you post
to the PostGIS list asking for help about it. If it's not
PostGIS-specific you should post the query and EXPLAIN ANALYZE output
from both versions to pgsql-performance.
-- 
Jim Nasby  [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [ADMIN] Problem Recovering from Windows PITR

2007-05-15 Thread Jim C. Nasby
On Tue, May 15, 2007 at 05:43:06PM -0400, Nykolyn, Andrew wrote:
> In testing my PITR on windows 8.1.4 I find the server does not go into
> recovery mode and will not start after restoring the base backup and
> setting up the recovery.conf file as described in the PostgreSQL
> documentation.  All permissions appear to be set correctly on the new
> data directories and files.  There are no log files generated that I can
> see anywhere to give me a clue on what is happening.  The service
> message indicates that there are no error messages but the service did
> not start.  Does anybody have any ideas on where I should look to solve
> my problem.

Can you launch PostgreSQL from the command line instead? You might get
more output that way... also try increasing the logging verbosity;
setting log_min_messages to debug might help. You could also try
bringing up a postmaster in standalone mode, though I'm not sure if it
will attempt a PITR recovery in that case...
-- 
Jim Nasby  [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [ADMIN] How far does interrupting vacuum full rollback?

2007-05-15 Thread Jim C. Nasby
On Wed, May 16, 2007 at 09:37:28AM +1000, Nick Urbanik wrote:
> Dear Folks,
> 
> I would like to interrupt this full vacuum run as:
> 
> vacuum full verbose;
> 
> which has vacuumed a number of tables.
> 
> Will interrupting it (with Control-C) cause vacuuming of the previous
> tables to be rolled back, or just the vacuuming of the current table?
 
Just the current table.

> Will any of the work done on the current table remain if the vacuuming
> process is interrupted?
 
If it's had to cycle through the heap more than once due to running out
of maintenance_work_mem (vacuum_mem in 7.3) then I believe
some of the work will be saved.

> This is PostgreSQL 7.3.8.

You should strongly consider upgrading to 8.2 or at least 8.1. If
nothing else at least get on the latest 7.3 version.
-- 
Jim Nasby  [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] How overcome wait for vacuum full?

2007-05-15 Thread Jim C. Nasby
On Wed, May 16, 2007 at 11:20:52AM +1000, Nick Urbanik wrote:
> Dear Alvaro,
> 
> On 15/05/07 21:12 -0400, Alvaro Herrera wrote:
> >Nick Urbanik wrote:
> >>On 15/05/07 18:53 -0400, Alvaro Herrera wrote:
> >>>Instead of waiting a month for the time when you can take the
> >>>application offline (thus accumulating a month's worth of dead tuples),
> >>>run a non-full vacuum more often (say, once a day or more).  It doesn't
> >>>lock the table so the app can continue to be online while it runs.
> >>
> >>Yes, it is vacuumed non-full regularly.  However, for some reason,
> >>only a full vacuum will recover the space.
> >
> >You probably need to better configure the FSM settings.  See
> >max_fsm_pages in the docs.
> 
> This is probably the critical piece of information we need to prevent
> the need for another sleepless night and an outage!  Thank you.

http://www.pervasive-postgres.com/instantkb13/article.aspx?id=10087&cNode=5K1C3W
might be of use as well.
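
As a quick check (a sketch; run as superuser, on reasonably recent
versions), a database-wide VACUUM VERBOSE prints a summary at the end
saying roughly how many FSM page slots are needed versus what's allocated:

    VACUUM VERBOSE;
    SHOW max_fsm_pages;
    SHOW max_fsm_relations;

If the pages needed exceed the allocation, raise max_fsm_pages in
postgresql.conf (that takes a restart).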
-- 
Jim Nasby  [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [ADMIN] pg_dump not enough space to create a backup FreeBSD 6.1?

2007-05-16 Thread Jim C. Nasby
On Wed, May 16, 2007 at 02:36:45PM +0200, Joost Kraaijeveld wrote:
> Hi,
> 
> Running pg_dump on a FreeBSD 6.1 machine gives me:
> 
> [EMAIL PROTECTED]:/data/even]# pg_dump -Ft -f ./my_db-2007-5-16.backup -U 
> user my_db
> 
>  /tmp: write failed, filesystem is full
> pg_dump: [tar archiver] could not write to tar member (wrote 0,
> attempted 86)
> 
> pg_dump: *** aborted because of error
> 
> [EMAIL PROTECTED]:/data/even]# df -h
> 
> Filesystem       Size    Used   Avail  Capacity  Mounted on
> /dev/aacd0s1g    125G    5.8G    109G     5%     /data
> /dev/aacd0s1e    484M     86K    445M     0%     /tmp
> 
> The database is, when dumped in text format and gzipped, 40 Mb, so I
> assume that there is enough space on the /tmp drive to store the temp
> files of the pg_dump.
> 
> Is that assumption OK? Can I somehow use another partition to store the
> temporary files without changing my /tmp mount?

tar is what's creating the temp files, not pg_dump. AFAIK it obeys the
TEMP environment variable, so changing that should make things good.
-- 
Jim Nasby  [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [ADMIN] WAL file utilization question

2007-05-16 Thread Jim C. Nasby
On Wed, May 16, 2007 at 10:28:29AM -0600, Keaton Adams wrote:
> On most database systems I am used to, there is a physical log and
> separate logical log. The physical log holds any ???before images??? of data
> pages that have been modified since the last checkpoint. The logical log
> holds the individual rows of data that have been inserted, updated,
> deleted as well as checkpoint records and the like. If the server
> crashes, fast recovery is initiated where (1) any pages in the physical
> log are put back in the data files (tablespaces/dbspaces) on disk to get
> back to a state of physical consistency and (2) individual transactions
> in the logical log since the last checkpoint are rolled forward / rolled
> back to get to a point of logical consistency.
 
PostgreSQL combines the two, though there has been recent discussion
about changing that. There may be some improvements in this regard in
8.3 (I don't remember if the patches were accepted or not).

> Even with full_page_writes set to false and checkpoints taking place
> every 60 seconds or so, the amount of WAL log data generated per minute
> seems to be significant.
 
Wow, do you really want to be checkpointing every 60 seconds? That's
going to greatly increase your WAL volume, as well as the size of WAL.

> So my question is this: If I kick off a transaction that loads records
> with a size of 100 bytes, does the insert for that record take 100 bytes
> in the WAL file, or is the data archived in the WAL log in page size
> (8k) portions? So with piggyback commits if I can only stack up 2K worth
> of data before the next LogFlush, will 2K be written to the WAL file, or
> will 8K be written each time regardless of the amount of actual
> transaction data that is available to flush?
 
(Generally) WAL only records differences. The exception is that the
first time a page is modified after a checkpoint, the entire page is
written out to WAL.
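
If you're on 8.2 you can watch this happen (a sketch; 't' is any test
table, and this assumes full_page_writes is on):

    SELECT pg_current_xlog_location();    -- note the position
    UPDATE t SET x = x + 1 WHERE id = 1;  -- first touch after a checkpoint
    SELECT pg_current_xlog_location();    -- advances by roughly a full page

Repeat the UPDATE and the second delta will be far smaller, since only the
row-level change gets logged.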

> Since there is no separate physical log to keep track of dirty/modified
> pages since the last checkpoint I would assume that the WAL log is
> serving a dual purpose of being able to get back to the point of
> physical and logical database consistency, but I need to know for
> certain that there is not a way to reduce the amount of WAL data being
> written for the amount of transaction information we are actually
> writing to the database at any given point in time.

The only way to do it right now is to reduce the frequency of your
checkpoints. IIRC you can't actually disable full page writes if you're
using PITR.
-- 
Jim Nasby  [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [ADMIN] WAL file utilization question

2007-05-16 Thread Jim C. Nasby
On Wed, May 16, 2007 at 10:49:04AM -0600, Keaton Adams wrote:
> So, if I do set the checkpoint_segments parameter to 50, 75, or even 100
> I will have a considerable number of WAL files in data/pg_xlog, but they
> will be used more efficiently, so I will be archiving less?  That's what
> I understand from the comments received so far.

Yes. Let it create 500 or 1000 WAL files if it wants... it's much more
important to limit the frequency of checkpoints than to reduce the
number of WAL files (which has virtually no impact on performance).
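
The disk math is simple enough (pre-8.3 behavior, as I recall): pg_xlog
settles at no more than about (2 * checkpoint_segments + 1) segments of
16MB each. For example:

    SHOW checkpoint_segments;
    -- 100  =>  (2 * 100 + 1) * 16MB  =  ~3.2GB in pg_xlog

And that's a one-time disk cost, not an ongoing performance cost.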
-- 
Jim Nasby  [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] plpgsql debugger

2007-07-17 Thread Jim C. Nasby
On Tue, Jul 17, 2007 at 06:45:29PM +0300, Abraham, Danny wrote:
> Can someone help? Need a plpgsql debugger.

You can get our debugger from the bottom of
http://www.enterprisedb.com/downloads.do
-- 
Jim Nasby  [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)


pgpnAFRZzVjfW.pgp
Description: PGP signature


Re: [ADMIN] update within trigger function

2007-07-23 Thread Jim C. Nasby
On Mon, Jul 23, 2007 at 02:05:20PM -0400, Mark Steben wrote:
> We have 'table a' which has defined to it an 'after insert or update'
> trigger function
> 
> That inserts to 'table b', obviously for every update or insert of table a.
> The challenge
> 
> I have now is to update a new date column on 'table a' with a value coming
> out of
> 
> Another inhouse written function within the trigger function.  But I cannot
> use the UPDATE 
> 
> statement within the update trigger function for fear of an infinite loop.  
> 
> All I need to do is to populate the date field of the row that is currently
> being processed by the trigger function.  
> 
> Is there a command within plpgsql that can help me with this?

See NEW on
http://www.postgresql.org/docs/8.2/interactive/plpgsql-trigger.html
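
Note that assigning to NEW only takes effect in a BEFORE trigger, so you'd
want something like this sketch (the function, trigger, and column names
are made up):

    CREATE OR REPLACE FUNCTION set_stamp() RETURNS trigger AS $$
    BEGIN
        NEW.stamp_date := now();  -- no UPDATE statement, so no infinite loop
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER table_a_stamp BEFORE INSERT OR UPDATE ON table_a
        FOR EACH ROW EXECUTE PROCEDURE set_stamp();

Your existing AFTER trigger that inserts into 'table b' can stay as-is.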
-- 
Jim Nasby  [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)


pgp3ePQSKi3pX.pgp
Description: PGP signature


Re: [ADMIN] hardware to get best performance

2004-11-24 Thread Jim C. Nasby
ext3 is just ext2 with journalling, and from what I've heard is rather
slow. ext2 is fast because it's async. ReiserFS is fast at losing your data.
:)

UFS is rock-solid and is still being developed. In the past couple of years
it's been changed to support some monsterous file and filesystem sizes
and has had soft-updates added. Soft-updates order metadata and data
writes in such a way that the filesystem will always come up in a
consistent state. It does what journaling does without the performance
drawback of a fixed journal space on the drive.

Another advantage FreeBSD has is that the kernel will take process
priority into account when it's scheduling disk IO. This means you can
nice long-running processes and not starve other processes for disk IO.

Some people think having a half-dozen filesystems to choose from is an
advantage. Personally, I'd much rather stick with a filesystem that's
proven itself  for the past 30 years.

On Wed, Nov 24, 2004 at 01:30:06PM +0800, Vincent Chen wrote:
> >> Hi, all
> >> 
> >> Does postgresql support threads? Or let me put it this way, will postgresql
> >> get better performance on a multiple processor host than a single processor?
> >> What's the best platform, freebsd? linux?
> >
> >If you have multiple connections to the database server making simultaneous
> >queries, multiple processors will be used. This is the normal case for
> >real database use.
> >
> >The best platform for you is probably the one you are most familiar with.
> >
> 
> About platform, Linux has much more filesystem choice than FreeBSD. If I put
> the postgresql data files on a new filesystem like xfs or reiserfs, is it
> possible that overall performance will be better than on an old filesystem
> like ufs?
> 
> 
> Thanks for your response,
> 
> 
> Vincent Chen
> 
> 
> -
> http://tw.avatar.yahoo.com/
> 
> -------(end of broadcast)---
> TIP 6: Have you searched our list archives?
> 
>http://archives.postgresql.org
> 

-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] Scheduling/Automated Jobs in Postgre

2005-03-02 Thread Jim C. Nasby
This is something that comes up every few months. Core is of the opinion
that you should just use the tools of your OS. Personally, I think some
kind of job system in the database would be very useful, so the last
time this came up I created http://pgfoundry.org/projects/pgjob/. No
work is being done on it right now, but if this is something you have an
interest in you should sign up for the mailing list.

On Wed, Mar 02, 2005 at 10:09:53AM +0530, Aravindhan G.D wrote:
> Hi All,
> 
> I need to run some user-defined jobs whenever the Postgres server gets 
> restarted. These tasks should be automated, i.e. whenever I restart the 
> server the PL/SQL functions I have defined should be executed automatically 
> without my intervention. This is to record some information like audit trails 
> and more.
> 
> Could anyone tell me how I can implement this in Postgres. Is there any 
> concept like "Scheduled Jobs" in Postgres? If so, please provide me with 
> related information or links to where I could get it.
> 
> With Regards,
> Aravindhan.

-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] VACUUM and read-mostly tables

2005-04-04 Thread Jim C. Nasby
We'll only answer if you do a write-up on your database. :P

Seriously, those are some seriously big numbers. What else is the
database doing? What hardware is it running on?
-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] VACUUM and read-mostly tables

2005-04-05 Thread Jim C. Nasby
On Tue, Apr 05, 2005 at 11:13:06AM -0400, Ian Westmacott wrote:
> On Tue, 2005-04-05 at 00:41, Jim C. Nasby wrote:
> > We'll only answer if you do a write-up on your database. :P
> > 
> > Seriously, those are some seriously big numbers. What else is the
> > database doing? What hardware is it running on?
> 
> 
> We run on a dual 3.2GHz P4 with 2GB RAM, but are still
> finalizing the storage hardware.  We've tried various
> flavors of RAID, filesystems and volume management (and
> are anxious to try out tablespaces in 8).  We've found
> fragmentation to be our largest limiting factor.  XFS
> helps with that, and seems to provide the highest 
> > sustained throughput on raw tables, but it's not the end
> of the story since fragmentation is still high.

What else is the database doing besides the inserts?

And if UFS is available for Linux you might try it.
-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [ADMIN] VACUUM and read-mostly tables

2005-04-05 Thread Jim C. Nasby
On Tue, Apr 05, 2005 at 01:56:05PM -0400, Ian Westmacott wrote:
> > And if UFS is available for linux you should might try it.
> 
> Would UFS help the fragmentation issue?  We have seen ext3
> allocating blocks in 2-4 pages, while XFS manages 8-16
> pages.

Hrm... I'm not sure what the allocation strategy is. I guess this is a
case where it would be useful to tell postgresql to allocate space for a
table in large chunks.
-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [ADMIN] VACUUM and read-mostly tables

2005-04-05 Thread Jim C. Nasby
On Tue, Apr 05, 2005 at 01:56:05PM -0400, Ian Westmacott wrote:
> > And if UFS is available for linux you should might try it.
> 
> Would UFS help the fragmentation issue?  We have seen ext3
> allocating blocks in 2-4 pages, while XFS manages 8-16
> pages.

Actually, I spoke too soon in my last reply. UFS has done work to be
more intelligent about where it allocates space from, so it's very
possible that even if it's only allocating a block at a time that it
will be smart enough not to allocate nearby space for other things. I
believe there's also some tuning parameters that affect that behavior.
Of course, other linux filesystems might be tunable in that regard as
well.
-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [ADMIN] delete to slow

2005-04-06 Thread Jim C. Nasby
On Wed, Apr 06, 2005 at 02:32:55PM -0300, Ricardo Valença de Assis wrote:
> I've tried... It takes the same time. I used explain and I saw it... Exactly
> the same time.

And what does explain show? Are statistics up to date?
-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [ADMIN] delete to slow

2005-04-06 Thread Jim C. Nasby
On Wed, Apr 06, 2005 at 02:59:18PM -0300, Ricardo Valença de Assis wrote:
> DATABASE=# explain delete from usuario where
> usua_cd_usuario=backup.usua_cd_usuario;
>  QUERY PLAN
> 
>  Hash Join  (cost=5.71..644.15 rows=298 width=6)
>Hash Cond: ("outer".usua_cd_usuario = "inner".usua_cd_usuario)
>->  Seq Scan on usuario  (cost=0.00..516.64 rows=23764 width=10)
>->  Hash  (cost=4.97..4.97 rows=297 width=4)
>  ->  Seq Scan on backup  (cost=0.00..4.97 rows=297 width=4)
> (5 rows)
> 
> DATABASE=# explain delete from usuario where usua_cd_usuario in (select
> usua_cd_usuario from backup);
>  QUERY PLAN
> 
>  Hash IN Join  (cost=5.71..644.15 rows=298 width=6)
>Hash Cond: ("outer".usua_cd_usuario = "inner".usua_cd_usuario)
>->  Seq Scan on usuario  (cost=0.00..516.64 rows=23764 width=10)
>->  Hash  (cost=4.97..4.97 rows=297 width=4)
>  ->  Seq Scan on backup  (cost=0.00..4.97 rows=297 width=4)
> (5 rows)

Well, neither of those should take very long at all. What's EXPLAIN
ANALYZE show?
-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [ADMIN] delete to slow

2005-04-06 Thread Jim C. Nasby
On Wed, Apr 06, 2005 at 02:15:39PM -0400, Tom Lane wrote:
> "Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> > Well, neither of those should take very long at all. What's EXPLAIN
> > ANALYZE show?
> 
> Indeed ... now I'm wondering about foreign key checks.  Are there any
> tables with foreign keys linking to usuario?  If so, the problem is
> likely unindexed foreign key columns, or maybe a datatype mismatch
> between foreign key and referenced column.

Hmm... it would be pretty handy if there was a way to make triggers show
up in explain somehow, maybe as a pseudo query node.
-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [ADMIN] delete to slow

2005-04-06 Thread Jim C. Nasby
Are you going to do an explain analyze at some point?

-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [ADMIN] delete to slow

2005-04-06 Thread Jim C. Nasby
On Wed, Apr 06, 2005 at 06:28:26PM -0400, Tom Lane wrote:
> It's too big a change to consider back-patching into 8.0, unfortunately.

I'll happily settle for 8.1 over nothing... :)
-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] Mysql to postgres

2005-04-11 Thread Jim C. Nasby
On Mon, Apr 11, 2005 at 03:02:01AM -0400, Tom Lane wrote:
> I think you could probably also do it by building a locale whose
> comparisons are case-insensitive, but I don't know enough about
> locales to be sure.

FWIW, this is how old versions of Sybase handled case insensitivity.
On pretty much every other database I've worked with, though, you want to
go the lower()/lcase() route.

citext is interesting; I'll have to look into that for
stats.distributed.net. Another example of how incredibly extensible
PostgreSQL is.
-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [ADMIN] cost of empty fields

2005-04-15 Thread Jim C. Nasby
If you set up rules on the view and just have the application select,
insert, update, and delete from the view instead of the raw tables you
won't need to change your application at all. Though you do need to be
aware that you can't easily enforce uniqueness across multiple tables.
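
A rough sketch of the idea (all the names are invented):

    CREATE VIEW journal AS
        SELECT id, payload FROM journal_active
        UNION ALL
        SELECT id, payload FROM journal_archive;

    CREATE RULE journal_ins AS ON INSERT TO journal
        DO INSTEAD INSERT INTO journal_active VALUES (NEW.id, NEW.payload);

    CREATE RULE journal_del AS ON DELETE TO journal DO INSTEAD (
        DELETE FROM journal_active  WHERE id = OLD.id;
        DELETE FROM journal_archive WHERE id = OLD.id;
    );

UPDATE rules work the same way; the hard part, as mentioned, is enforcing
uniqueness across the two underlying tables.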

As for clearing the fields, if by clearing you mean setting to NULL then
you're correct, it would save space. Just remember that you won't
actually be able to use the saved space until the table is vacuumed.
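
In other words, something like this (the table and column names are
placeholders):

    UPDATE journal SET big_text1 = NULL, big_text2 = NULL
     WHERE entry_date < now() - interval '90 days';

    VACUUM journal;  -- the old row versions aren't reusable until this runs

Also remember that each such UPDATE writes a new row version, so do it in
batches if the table is large.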

On Fri, Apr 15, 2005 at 04:22:07PM +0200, Enrico Weigelt wrote:
> 
> Hi folks,
> 
> 
> we've got a quite large table with 17 columns and now >15k rows.
> The table also works as a journal and will continuously grow day by day. 
> Many of the columns (mostly text) aren't needed anymore once the tuple 
> has reached a certain age. 
> So I thought about clearing the unnecessary fields of old tuples 
> to save resources. Does it bring any performance improvement?
> 
> Of I could splitt off this table an working- an archive- table 
> and map them together in some views - someday I probably *will* 
> do it - but our applications are not yet ready for this.
> 
> 
> thx
> -- 
> -
>  Enrico Weigelt==   metux IT service
> 
>   phone: +49 36207 519931 www:   http://www.metux.de/
>   fax:   +49 36207 519932 email: [EMAIL PROTECTED]
>   cellphone: +49 174 7066481
> -
>  -- DSL ab 0 Euro. -- statische IP -- UUCP -- Hosting -- Webshops --
> -
> 
> -----------(end of broadcast)---
> TIP 4: Don't 'kill -9' the postmaster
> 

-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [ADMIN] Database syncronization

2005-04-20 Thread Jim C. Nasby
http://www.google.com/search?q=postgresql+replication

And http://gborg.postgresql.org/project/slony1/projdisplay.php, which is
missing for some reason...

On Wed, Apr 20, 2005 at 06:44:26PM +0200, Shadow wrote:
> 
> Hy all !
> 
> Is there any tool, middleware, or feature to synchronize 2 databases (with the 
> same schema, of course)?
> 
> I have an application that runs from 2 points. One of these points has a 
> master 
> database and the other point needs to sync its data daily from the master.
> 
> Any suggestion ? 
> 
> Thanks !
> 
> ---(end of broadcast)---
> TIP 8: explain analyze is your friend
> 

-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [ADMIN] cost of empty fields

2005-04-20 Thread Jim C. Nasby
On Thu, Apr 21, 2005 at 04:19:02AM +0200, Enrico Weigelt wrote:
> * Jim C. Nasby <[EMAIL PROTECTED]> wrote:
> > If you setup rules on the view and just have the application select,
> > insert, update, and delete from the view instead of the raw tables you
> > won't need to change your application at all. Though you do need to be
> > aware that you can't easily enforce uniqueness across multiple tables.
> hmm. that doesn't sound stable enough for me. 
> this table is really critical and there's a lot of money at stake
> (realtime currency trading ...)

Note I didn't say you couldn't do it, I just said it wasn't easy. Easy
as in adding a normal unique constraint. In this case, you need to add
triggers to the tables to check for uniqueness. It's absolutely stable,
it's just not as nice as it could be.
-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [ADMIN] unsubscribe

2005-05-03 Thread Jim C. Nasby
Nope, so sorry, try again.

On Tue, May 03, 2005 at 10:52:25AM -0700, Arcane_Rhino wrote:
> unsubscribe -farewell general [EMAIL PROTECTED]
> 
> ---(end of broadcast)---
> TIP 2: you can get off all lists at once with the unregister command
> (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
> 

-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [ADMIN] varying pagesize in postgresql

2005-06-21 Thread Jim C. Nasby
pg_dumpall
Change the block size define (BLCKSZ, IIRC) in one of the header files
(which is documented somewhere...)
Recompile, install, initdb
restore from backup

Of course the backup/restore are optional if you don't care about your
data...

On Mon, Jun 20, 2005 at 11:09:36PM -0700, gayatri ganpaa wrote:
> Hi,
> How do I change the page size in postgresql? I have the postgresql-7.4.3 
> version. 
> I am trying to create an r*-tree index on a spatial table with around 100,000 
> rows. I just want to check if increasing the page size would make the join of 
> two tables faster. Are there any other options I could use to make this 
> faster? Would increasing the page size help?
>  
> Thanks,
> gayatri.
> 
> __
> 
> 
>   
> -
> Yahoo! Sports
>  Rekindle the Rivalries. Sign up for Fantasy Football

-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [ADMIN] real time replication of pgsql database

2005-06-26 Thread Jim C. Nasby
You should also look at http://pgfoundry.org/projects/pgcluster/

On Sat, Jun 25, 2005 at 11:44:37AM -0400, Christopher Browne wrote:
> Centuries ago, Nostradamus foresaw when [EMAIL PROTECTED] (hellz waren) would 
> write:
> > we are designing a heavy duty database in pgsql that
> > expects to grow at an average of 100 MB spread over
> > 250 tables. 
> >
> > We require always on database (24 X 7) database. And
> > if one fails, the other one needs to start up. 
> >
> > Could you someone tell me if there's any open source
> > project that enables real time replication of data
> > back up.
> > I can use heartbeat, mon etc then to awake the other
> > server if the main server is down. 
> 
> You might want to look into Slony-I; it can do this sort of thing.
> 
> Version 1.1.0 was released on Thursday...
> -- 
> wm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','gmail.com').
> http://slony.info
> Who's afraid of ARPA?
> 
> -----------(end of broadcast)---
> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
> 

-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [ADMIN] Determine PG version used for initdb

2005-07-20 Thread Jim C. Nasby
On Wed, Jul 20, 2005 at 10:05:22AM -0400, Tom Lane wrote:
> > That tells the server version running, but does it tell the initdb
> > binary version used to init the cluster?
> 
> Normally there's not any difference between dot-release initdb's.
> In this case there is; read the 8.0.3 release notes:
> http://developer.postgresql.org/docs/postgres/release.html#RELEASE-8-0-3
> You could determine which was used by looking at the declared arguments
> of the conversion functions.

Hrm. Is there no definitive way to determine what version of initdb was
used on a cluster? ISTM that would be a useful feature to have.
-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] Size of Large objects

2005-07-26 Thread Jim C. Nasby
On Tue, Jul 26, 2005 at 09:47:33AM +0200, Peter Eisentraut wrote:
> On Tuesday, 26 July 2005 08:37, Bianca Oberst wrote:
> > I'm new to postgres. I searched the archives, but didn't find an answer.
> > So can anybody tell me the exact size a clob or a blob can have
> > in postgres 8.0?
> 
> Columns can contain up to 1 GB of data.  The terms clob or blob are not used 
> in PostgreSQL; the limitation is the same on all column types.

Not entirely true...

decibel=# select version();
 PostgreSQL 8.0.3 on powerpc-apple-darwin7.9.0, compiled by GCC gcc
 (GCC) 3.3 20030304 (Apple Computer, Inc. build 1495)

 decibel=# create table t(v varchar(10));
 ERROR:  length for type varchar cannot exceed 10485760
 decibel=# 
-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [PHP] [ADMIN] Data insert

2005-08-22 Thread Jim C. Nasby
On Sun, Aug 21, 2005 at 06:35:22AM +0100, Aldor wrote:
> if you want to insert biiig data volumes, try using COPY instead 
> of INSERT - it will run much much faster

And if for some reason you have to stick with inserts, group them into
transactions; it will perform much better than individual transactions.
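
I.e., something like this sketch, rather than one transaction per row:

    BEGIN;
    INSERT INTO t VALUES (...);  -- repeat a few hundred or thousand times
    INSERT INTO t VALUES (...);
    COMMIT;

Each COMMIT forces a WAL flush to disk, so batching cuts that cost
dramatically.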
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Softwarehttp://pervasive.com512-569-9461

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [ADMIN] sleep?

2005-08-23 Thread Jim C. Nasby
On a related note, you might be interested in
http://pgfoundry.org/projects/pgjob/

On Tue, Aug 23, 2005 at 09:42:27AM -0500, Don Drake wrote:
> Sorry for the delay in replying.
> 
> The use-case that I have is the following. I'm writing a job-control 
> tracking sub-system that will store when jobs are started, finished, failed, 
> etc. I would like to have the ability to have a process that is requesting 
> to start, to actually wait a specified period time before starting. It could 
> wait for another job to finish. I'm writing this in plpgsql since I'm 
> storing status in the db.
> 
> -Don
> 
> On 8/22/05, Michael Fuhr <[EMAIL PROTECTED]> wrote:
> > 
> > On Mon, Aug 22, 2005 at 08:34:29AM -0500, Don Drake wrote:
> > > I agree that a basic function (non-CPU intensive sleep) like this should 
> > be
> > > built in.
> > 
> > It's being discussed in pgsql-hackers:
> > 
> > http://archives.postgresql.org/pgsql-hackers/2005-08/msg00633.php
> > 
> > Do you have any use cases in addition to what's already been
> > mentioned? Sleeping isn't really a database operation, so there
> > needs to be some justification for making it a standard function.
> > 
> > --
> > Michael Fuhr
> > 
> 
> 
> 
> -- 
> Donald Drake
> President
> Drake Consulting
> http://www.drakeconsult.com/
> http://www.MailLaunder.com/
> http://www.mobilemeridian.com/
> 312-560-1574

-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Softwarehttp://pervasive.com512-569-9461

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [ADMIN] Reg:sql command

2005-08-24 Thread Jim C. Nasby
You should also take a look at
http://pgfoundry.org/projects/newsysviews/; it makes getting this kind
of info out of the database much easier.

On Wed, Aug 24, 2005 at 02:43:31PM +0530, Gnanavel S wrote:
> run the psql command with -E option and do 
> \d tablename.
> 
> On 8/24/05, sandhya <[EMAIL PROTECTED]> wrote:
> > 
> > Hi,
> > Is there any equivalent command for '\d tablename'.
> > In order to get the table structure, is there any SQL query in postgres?
> >  Thank you,
> > Regards,
> > sandhya
> > 
> 
> 
> 
> -- 
> with regards,
> S.Gnanavel
> Satyam Computer Services Ltd.

-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Softwarehttp://pervasive.com512-569-9461

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [ADMIN] Fwd: Indexes (Disk space)

2005-08-24 Thread Jim C. Nasby
I think the best you could do would be to estimate it based on index key
size and knowing the overhead for the index structure. Note that it
probably wouldn't be too hard to codify this; it's just that no one has
done it.

http://www.postgresql.org/docs/8.0/interactive/storage.html would be a
good start at figuring out how much space the index will take.
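
As a very rough sketch for a b-tree (the ~12 bytes of per-item overhead and
the fill fraction are approximations, not gospel):

    -- estimated leaf pages ~= rows * (key_bytes + ~12 bytes overhead)
    --                          / (8192 bytes per page * ~0.70 fill)
    SELECT ceil(1000000 * (4 + 12) / (8192 * 0.70)) AS est_pages;  -- int4 key

That ignores the upper levels of the tree and alignment padding, but it
should get you in the ballpark.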

On Wed, Aug 24, 2005 at 02:51:30PM +0100, Aldor wrote:
> I don't think this is possible...
> 
> you could try an explain on the create
> index command but... this is like playing lotto, didn't try it and 
> don't have the time to try it ;-)
> 
> I think getting the size is only possible after creating the index, 
> because when the index is created it will gather the information about 
> the data indexed.
> 
> if this answer is not enough: why do you need to know the size of the 
> index BEFORE creating it?
> 
> jose fuenmayor wrote:
> >Ok I understand, but I need to know the size of the index before I create it.
> >I need to know how much space it will occupy (before actually creating
> >the index, having the information of the table on which I will build
> >the index). Is this possible?
> >thanks in advance to everyone that can help me.
> >
> >On 8/23/05, Aldor <[EMAIL PROTECTED]> wrote:
> >
> >>select * from pg_class;
> >>
> >>or
> >>
> >>select relname, relpages from pg_class where relname = '[index-name]';
> >>
> >>the pages give you the information about the space the index uses, a
> >>page has 8kb.
> >>
> >>[...]Every table and index is stored as an array of pages of a fixed
> >>size (usually 8Kb, although a different page size can be selected when
> >>compiling the server). In a table, all the pages are logically
> >>equivalent, so a particular item (row) can be stored in any page. In
> >>indexes, the first page is generally reserved as a metapage holding
> >>control information, and there may be different types of pages within
> >>the index, depending on the index access method. [...]
> >>
> >>Source-URL:
> >>http://www.postgresql.org/docs/8.0/interactive/storage-page-layout.html
> >>
> >>jose fuenmayor wrote:
> >>
> >>>-- Forwarded message --
> >>>From: jose fuenmayor <[EMAIL PROTECTED]>
> >>>Date: Aug 22, 2005 1:37 PM
> >>>Subject: Indexes (Disk space)
> >>>To: pgsql-admin@postgresql.org
> >>>
> >>>
> >>>Hi all,
> >>>I have the following question.
> >>>Is there any way for me to know how much space on disk an index will
> >>>occupy, created on a given column or columns of a table? Anything like
> >>>a rule, formula, or calculation that would allow me to know in advance
> >>>how much space the index will use before actually creating it?
> >>>
> >>>I appreciate all the help you can give me.
> >>>
> >>>Thanks in advance.
> >>>
> >>>---(end of broadcast)---
> >>>TIP 1: if posting/reading through Usenet, please send an appropriate
> >>>  subscribe-nomail command to [EMAIL PROTECTED] so that your
> >>>  message can get through to the mailing list cleanly
> >>>
> >>
> >
> 
> ---(end of broadcast)---
> TIP 5: don't forget to increase your free space map settings
> 

-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com      512-569-9461

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [ADMIN] dumping query results to a csv

2005-08-25 Thread Jim C. Nasby
On Thu, Aug 25, 2005 at 03:48:54PM -0700, Adi Alurkar wrote:
> Greetings,
> 
> There is no direct way to achieve what you want; the easiest hack is
> to create a temp table with your query, i.e.
> 
> create table tmp_foo as select col1, col4, col7 from table1, table2  
> where  ;
> copy table tmp_foo to [stdout|]

Can we make this a TODO? It would certainly be handy to be able to COPY
directly from a query.

In the mean time, you can also do something like

SELECT field1 || ',' || field2 || ',' || field3

and capture the output of that to a file, but that's an uglier hack than
the temptable trick.
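
If the goal is just a delimited file, psql can do the capturing itself;
a sketch (the output path is made up, and note that, like the || hack,
this does no quoting of embedded commas):

\pset format unaligned
\pset fieldsep ','
\o /tmp/result.csv
SELECT col1, col4, col7 FROM table1, table2 WHERE ...;
\o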
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com      512-569-9461

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [ADMIN] Hash index

2005-08-27 Thread Jim C. Nasby
On Sat, Aug 27, 2005 at 01:21:50AM -0500, Bruno Wolff III wrote:
> On Fri, Aug 26, 2005 at 10:59:50 -,
>   RAJU  kumar <[EMAIL PROTECTED]> wrote:
> > I want to find out the difference between the btree index and the hash
> > index, and how exactly the hash index works.
> 
> Why? You probably don't want to use hash indexes as btrees are virtually
> always better in the current implementation.

That may be true but it's still good to be able to see what they
actually do and how they work. In particular it would be good to know
what relation they have to the hash operations (HashAgg, HashJoin, etc).
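
For what it's worth, the syntax difference is trivial; it's the
behavior that differs (a sketch, with made-up table and column names):

CREATE INDEX foo_val_btree ON foo USING btree (val); -- =, <, >, ORDER BY
CREATE INDEX foo_val_hash  ON foo USING hash (val);  -- equality only; not WAL-logged in current releases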
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com      512-569-9461

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [ADMIN] Hash index

2005-08-30 Thread Jim C. Nasby
On Tue, Aug 30, 2005 at 03:32:26PM -0400, Tom Lane wrote:
> That leaves hash.  I'm hoping someone will step up and do WAL logging
> for hash in the near future.  Unlike rtree, I'm not expecting that we
> might get rid of hash indexes.  Even if the performance problems never
> get fixed, we use hash index opclasses to manage datatype-specific
> hashing for hash joins, hash aggregation, etc, so if we removed hash
> indexes we'd need to find some other representation for all that.

So does that mean a hash index could (theoretically) improve the
performance of a hash join or hash aggregation?
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com      512-569-9461

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [ADMIN] Hash index

2005-08-30 Thread Jim C. Nasby
On Tue, Aug 30, 2005 at 04:20:03PM -0400, Tom Lane wrote:
> "Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> > So does that mean a hash index could (theoretically) improve the
> > performance of a hash join or hash aggregation?
> 
> I don't think so --- the whole point of a hash is to do your matching
> in-memory, rather than jumping all over the disk for it ...

True, but if you have an index on the value you're hashing you should be
able to find out very quickly if it exists or not, no? So if you were
doing a hash join between two tables and one of them had a hash index on
the appropriate field, couldn't you check the index first to see if
there was a match? Granted, you could probably do the same thing with a
B-tree index, but this might be faster in some scenarios.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com      512-569-9461

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] [ADMIN] How to determine date / time of last postmaster restart

2005-09-06 Thread Jim C. Nasby
On Tue, Sep 06, 2005 at 08:22:34PM -0400, Tom Lane wrote:
> PG 8.1 will have a function to return postmaster start time, but not
> database reset time.  I wonder if this is misdefined --- if you are
> trying to measure database uptime, the last reset would be more
> appropriate to track.

Is it too late to add a function that returns last reset time as well?
That would cover all bases and force some less confusing naming.
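
For reference, once the 8.1 start-time function is there, uptime falls
out directly (a sketch):

SELECT now() - pg_postmaster_start_time() AS uptime;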
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] Linking Oracle DB from postgresql

2005-09-08 Thread Jim C. Nasby
DBI-Link (http://pgfoundry.org/projects/dbi-link/) is probably your best
bet. There's been some talk of expanding dblink, but nothing's happened
on that yet.

On Thu, Sep 08, 2005 at 10:37:23AM +0900, [EMAIL PROTECTED] wrote:
> Hi,
> I am new to pgsql.
> At present I have an Oracle database and am using pgsql for my new application.
> I would like to access data that exists in the Oracle database from the new pgsql database.
> 
> Is there a way to link Oracle database from pgsql?
> If not, is there any other ways to achieve it?
> 
> Regards,
> Chandra.
> 
> ---(end of broadcast)---
> TIP 2: Don't 'kill -9' the postmaster
> 

-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] [ADMIN] How to determine date / time of last postmaster restart

2005-09-08 Thread Jim C. Nasby
On Wed, Sep 07, 2005 at 09:22:55PM -0400, Tom Lane wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > On Wed, Sep 07, 2005 at 12:38:44AM -0500, Jim C. Nasby wrote:
> >> Is it too late to add a function that returns last reset time as well?
> >> That would cover all bases and force some less confusing naming.
> 
> > This would be one more vote in favour of initdb before next beta.
> > (pltemplate being the other one.)  We should set a threshold in order to
> > be able to decide ...
> 
> It looks like we'll be initdb'ing to add pltemplate, so if someone wants
> to submit a proposed patch for this ...

I might be able to do some of it; where would I need to capture reload
time?

Also, I'm thinking of leaving pg_postmaster_start_time as-is and adding
pg_postmaster_reload_time. Thoughts?

For reference, here's the original commit:
http://archives.postgresql.org/pgsql-committers/2005-06/msg00181.php
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] Relationship beween sequences (serial) and tables

2005-09-09 Thread Jim C. Nasby
On Thu, Sep 08, 2005 at 01:03:26AM -0700, Thusitha Kodikara wrote:
> Hi,
>  
> I use Postgres 7.4.5 on Linux
>  
> In many of my tables the primary key is bigserial, for which sequences are
> automatically generated. Through the pg_catalog tables, how can I find the
> relationship between each table and its corresponding sequence?

If you install newsysviews (http://pgfoundry.org/projects/newsysviews/),
the following query will give you all table columns that are using a
sequence for their default value:

select * from pg_user_table_columns where default_value like 'nextval(%';
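
Without installing anything, roughly the same information can be pulled
straight from the 7.4 catalogs; a sketch using the default expression
text stored in pg_attrdef:

SELECT c.relname AS table_name,
       a.attname AS column_name,
       d.adsrc   AS default_value
FROM pg_class c
JOIN pg_attrdef d ON d.adrelid = c.oid
JOIN pg_attribute a ON a.attrelid = c.oid AND a.attnum = d.adnum
WHERE d.adsrc LIKE 'nextval(%';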
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [ADMIN] Disabling WAL for bulk data loads

2005-09-09 Thread Jim C. Nasby
On Fri, Sep 09, 2005 at 11:02:33AM -0500, Kevin Keith wrote:
> I am coming from an Oracle background, where in the case of bulk data
> loads there were several options for disabling writes to the redo log to
> speed up the load (i.e. direct load, setting the user session to no
> archive logging, setting the affected tables to have no logging).
> 
> I know the COPY command is one option - however, it appears the data would
> need to be in a formatted file in order to use it, correct? I want to avoid
> writing a new file out for the COPY command and loading that.

Well, COPY will give you the best performance. It's also fairly flexible
when it comes to formatting. It shouldn't be difficult to get Oracle's
export tool to generate data that COPY can load.
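
For example, if the Oracle side is exported as pipe-delimited text, the
load is a one-liner (path and delimiter are made up):

COPY mytable FROM '/path/to/export.dat' WITH DELIMITER '|' NULL '';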

> What other options does Postgres 7.4 provide which would allow data loads 
> to bypass writing to the WAL? I don't need to have this enabled - because 
> in the event of a database crash, I would simply reload the data from the 
> source files as recovery.

At this point, your only option is to change fsync to off in
postgresql.conf and reload the postmaster (kill -HUP). There are plans
to improve support for non-logged tables, but I don't think any of it is
in 8.1.
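
Concretely, something like this, with the usual warning that fsync off
puts the whole cluster (not just the loaded tables) at risk on a crash:

fsync = false                      # in postgresql.conf
pg_ctl reload -D /path/to/data     # or: kill -HUP <postmaster pid>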
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [ADMIN] Disabling WAL for bulk data loads

2005-09-10 Thread Jim C. Nasby
On Sat, Sep 10, 2005 at 08:15:50PM -, Greg Sabino Mullane wrote:
> 
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
>   
> > Well, COPY will give you the best performance. It's also fairly flexable
> > when it comes to formatting. It shouldn't be difficult to get Oracle's
> > export tool to generate data that COPY can load.
> 
> With Oracle, everything is difficult. Their export utility does not have
> a way to dump things as a series of INSERT commands, nor as tab-delimited
> data. There are other tools/scripts around that can do this, but as a
> general rule, the word "difficult" is one of the more mild adjectives
> usually found in the same sentence as "Oracle utility". :)

Can't it export CSV? Copy can deal with CSV.
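
On 8.0 and up the CSV handling is built in (7.4 predates CSV mode, so
there you would fall back to plain delimited data); a sketch with a
made-up file name:

COPY mytable FROM '/tmp/oracle_export.csv' WITH CSV;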
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] Replication

2005-09-20 Thread Jim C. Nasby
There's also pgcluster, but keep in mind that these are all
statement-level replication solutions which have some rather big
gotchas.

On Mon, Sep 19, 2005 at 03:24:31PM -0500, Scott Marlowe wrote:
> On Mon, 2005-09-19 at 02:02, the vespadict wrote:
> > Hi ,
> > 
> > I want high availability but I don't need load balancing. I need sync
> > replication, if possible two-way.
> 
> If you need sync, then pgpool may be a good choice, or client side
> replication like C-JDBC or something like that.
> 
> pgpool doesn't HAVE to do load balancing; it can do simple two-way sync
> replication if that's what you need.
> 
> ---(end of broadcast)---
> TIP 1: if posting/reading through Usenet, please send an appropriate
>subscribe-nomail command to [EMAIL PROTECTED] so that your
>message can get through to the mailing list cleanly
> 

-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [ADMIN] Hi.........

2005-09-24 Thread Jim C. Nasby
Take a look at http://pgfoundry.org/projects/newsysviews/
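
For items 3 and 4 below, the catalogs already have what you need; a
sketch (the role name is a placeholder):

SELECT c.relname,
       has_table_privilege('sandhya', c.oid, 'SELECT') AS can_select,
       has_table_privilege('sandhya', c.oid, 'UPDATE') AS can_update
FROM pg_class c
WHERE c.relkind = 'r';

Item 2 (login/logout times) isn't tracked in the catalogs at all;
you'd have to enable log_connections and read the server log.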

On Fri, Sep 23, 2005 at 12:57:31PM +0530, sandhya wrote:
> Is there any way/any SQL query for getting information regarding users?
> Like 
> 1) To Display all the databases under Particular user
> 2) To Display the timings of login and logout of the databases of other users.
> 3) To Display the permissions of the users
> 4) To Display the tables and Access permissions of it under a database
> 
> Can anyone please help with the above cases?
> 
> Thank you
> -Sandhya

-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [ADMIN] vacuum message

2005-09-26 Thread Jim C. Nasby
On Mon, Sep 26, 2005 at 07:28:30PM +0530, Gourish Singbal wrote:
> That's a good idea.
>  My output after a full vacuum is:
>  INFO: free space map: 9 relations, 114736 pages stored; 76592 total pages
> needed
> DETAIL: Allocated FSM size: 1000 relations + 30 pages = 1856 kB shared
> memory.
> VACUUM
>  I just have a question:
> Is the allocated pages setting of 30 okay, or do I need to decrease it to 10?
>  regards

Downside: you're burning extra memory, to the tune of 6 bytes per page.
Downside: there's some additional overhead in maintaining a larger FSM.
Upside: you have room to grow, as well as to accommodate occasional
bursts of update activity.

Both downsides are pretty minor; I'd say just stick with what you've
got.
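
For reference, the knobs in question, sized from the numbers VACUUM
VERBOSE reports (the values here are illustrative, not recommendations):

max_fsm_relations = 1000    # must cover the "N relations" figure
max_fsm_pages = 300000      # must cover the "total pages needed" figure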
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] Need help with corrupt pg_statistic

2005-10-03 Thread Jim C. Nasby
You should probably be asking yourself why you have corruption in your
database. I suspect whatever hardware that database is on is having
problems that need to be addressed.
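
Since pg_statistic is rebuilt from scratch by ANALYZE, one last-resort
sketch, assuming 7.4's zero_damaged_pages developer option (this zeroes
the broken pages irrecoverably, so take a filesystem-level copy first):

SET zero_damaged_pages = true;
VACUUM FULL pg_statistic;
SET zero_damaged_pages = false;
ANALYZE;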

On Thu, Sep 29, 2005 at 05:42:16PM -0700, Kevin Seghetti wrote:
> I am using postgres 7.4.5, and have a corrupt pg_statistic.
> Many commands cause the following error:
> ERROR:  invalid page header in block 10 of relation "pg_statistic"
> 
> I want to preserve my data if at all possible, so I figured I would just
> dump the database and reload it. I tried
> pg_dump, but it gives me:
>  pg_dump: query to obtain list of data types failed: ERROR:  invalid page 
> header in block 11 of relation "pg_statistic"
> even if I just try to dump a single table. 
> 
> I tried postgres in single-user mode:
>  delete from pg_statistic
> gives me the same error
>  drop table pg_statistic
> gives me
>  ERROR:  permission denied: "pg_statistic" is a system catalog
> 
> Is there any way to nuke pg_statistic (since its contents can be
> recalculated anyway)?
> 
> Thanks, 
> 
> -- 
> Kevin Seghetti: E-Mail: [EMAIL PROTECTED], HTTP: www.tenetti.org
> GPG public key: http://tenetti.org/cgi-bin/twiki/view.cgi/KevinSeghetti/GPGKey
> Check out www.worldfoundry.org for my GPL'ed 3D video game engine
> Copyright is a temporary loan from the public domain, not property
> 
> 
> ---(end of broadcast)---
> TIP 1: if posting/reading through Usenet, please send an appropriate
>        subscribe-nomail command to [EMAIL PROTECTED] so that your
>message can get through to the mailing list cleanly
> 

-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] Vacuum Full Analyze Stalled

2005-10-03 Thread Jim C. Nasby
> DETAIL:  0 dead row versions cannot be removed yet.
> Nonremovable row versions range from 68 to 68 bytes long.
> There were 479137 unused item pointers.
> Total free space (including removable row versions) is 32760236 bytes.
> 4227 pages are or will become empty, including 0 at the end of the table.
> 4244 pages containing 32760236 free bytes are potential move destinations.
> CPU 0.00s/0.00u sec elapsed 0.00 sec.
> INFO:  index "DbTranImageStatusPK" now contains 72 row versions in 6654 pages
> DETAIL:  0 index row versions were removed.
> 6544 index pages have been deleted, 6544 are currently reusable.
> CPU 0.00s/0.00u sec elapsed 0.00 sec.
> INFO:  "DbTranImageStatus": moved 22 row versions, truncated 4244 to 1 pages
> DETAIL:  CPU 0.00s/0.02u sec elapsed 0.03 sec.
> INFO:  index "DbTranImageStatusPK" now contains 72 row versions in 6654 pages
> DETAIL:  22 index row versions were removed.
> 6544 index pages have been deleted, 6544 are currently reusable.
> CPU 0.00s/0.00u sec elapsed 0.00 sec.
> INFO:  vacuuming "pg_toast.pg_toast_272793"
> INFO:  "pg_toast_272793": found 0 removable, 0 nonremovable row versions in 0 
> pages
> DETAIL:  0 dead row versions cannot be removed yet.
> Nonremovable row versions range from 0 to 0 bytes long.
> There were 0 unused item pointers.
> Total free space (including removable row versions) is 0 bytes.
> 0 pages are or will become empty, including 0 at the end of the table.
> 0 pages containing 0 free bytes are potential move destinations.
> CPU 0.00s/0.00u sec elapsed 0.00 sec.
> INFO:  index "pg_toast_272793_index" now contains 0 row versions in 1 pages
> DETAIL:  0 index pages have been deleted, 0 are currently reusable.
> CPU 0.00s/0.00u sec elapsed 0.00 sec.
> INFO:  analyzing "public.DbTranImageStatus"
> INFO:  "DbTranImageStatus": scanned 1 of 1 pages, containing 72 live rows and 
> 0 dead rows; 72 rows in sample, 72 estimated total rows
> VACUUM
> dtr=# reindex table "DbTranImageStatus";
> REINDEX
> dtr=# vacuum analyze verbose "DbTranImageStatus";
> INFO:  vacuuming "public.DbTranImageStatus"
> INFO:  index "DbTranImageStatusPK" now contains 72 row versions in 2 pages
> DETAIL:  0 index pages have been deleted, 0 are currently reusable.
> CPU 0.00s/0.00u sec elapsed 0.00 sec.
> INFO:  "DbTranImageStatus": found 0 removable, 72 nonremovable row versions 
> in 1 pages
> DETAIL:  0 dead row versions cannot be removed yet.
> There were 48 unused item pointers.
> 0 pages are entirely empty.
> CPU 0.00s/0.00u sec elapsed 0.00 sec.
> INFO:  vacuuming "pg_toast.pg_toast_272793"
> INFO:  index "pg_toast_272793_index" now contains 0 row versions in 1 pages
> DETAIL:  0 index pages have been deleted, 0 are currently reusable.
> CPU 0.00s/0.00u sec elapsed 0.00 sec.
> INFO:  "pg_toast_272793": found 0 removable, 0 nonremovable row versions in 0 
> pages
> DETAIL:  0 dead row versions cannot be removed yet.
> There were 0 unused item pointers.
> 0 pages are entirely empty.
> CPU 0.00s/0.00u sec elapsed 0.00 sec.
> INFO:  analyzing "public.DbTranImageStatus"
> INFO:  "DbTranImageStatus": scanned 1 of 1 pages, containing 72 live rows and 
> 0 dead rows; 72 rows in sample, 72 estimated total rows
> VACUUM
>  
> These all ran sub-second.  We updated postgresql.conf for more
> aggressive autovacuum, and restarted postgres, then restarted the
> application.
>  
> This failure seems rather different from the one I previously posted,
> since it was blocking on the application table, rather than
> pg_constraint_contypid_index, and it did not wake up when all
> other processes were stopped.
>  
> As before, both boxes are running 8.1beta2.  Windows is not showing
> these problems with the autovacuum blocking; just Linux.
>  
> -Kevin
>  
>  
> >>> Tom Lane <[EMAIL PROTECTED]> 10/02/05 8:53 PM >>>
> "Jeff Kirby" <[EMAIL PROTECTED]> writes:
> > the Linux box however is still chugging away this morning... and
> > appears to be stuck on vacuuming "pg_constraint_contypid_index".  How
> > do I know... well I don't really... I'm inferring based on the order
> > of the log output on the Windows box.
> 
> Looking in pg_locks would give you a more reliable indicator of what the
> VACUUM is currently working on.
> 
> Is the Linux box otherwise idle?  There was another report recently of a
> vacuum hung up for a long time on pg_constraint_contypid_index, but it
> seemed to be due to extremely heavy usage of domain types ...
> 
>   regards, tom lane
> 
> ---(end of broadcast)---
> TIP 4: Have you searched our list archives?
> 
>http://archives.postgresql.org
> 
> 
> ---(end of broadcast)---
> TIP 1: if posting/reading through Usenet, please send an appropriate
>subscribe-nomail command to [EMAIL PROTECTED] so that your
>message can get through to the mailing list cleanly
> 
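
Following up on the pg_locks suggestion, a quick sketch for seeing what
a stuck VACUUM holds and is waiting on (8.1 catalogs):

SELECT l.pid, l.relation::regclass AS relation, l.mode, l.granted
FROM pg_locks l
ORDER BY l.granted, l.pid;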

-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings

