Hi,
I am setting up a PostgreSQL 7.3 database for a big client. I don't want to lose any
data in case of a database crash or hardware failure. I need help with:
1. Steps needed to set up the postgres database. Is it possible for me to write data
onto two different disks? If so, how?
2. Steps needed to recover the data in case of a database crash or hardware failure.
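On the two-disks part of question 1: PostgreSQL 7.3 predates tablespaces, so a common approach is to put the write-ahead log directory (pg_xlog) on the second disk and symlink it back into the data directory, with the server stopped. A sketch using throwaway directories as stand-ins (the real PGDATA path and second-disk mount point are assumptions to adjust to your layout):

```shell
# Stand-ins for the real locations (assumptions for this sketch):
PGDATA=$(mktemp -d)               # would be e.g. /var/lib/pgsql/data
WALDISK=$(mktemp -d)/pg_xlog      # would be a mount point on the second disk
mkdir -p "$PGDATA/pg_xlog"
touch "$PGDATA/pg_xlog/000000010000000000000000"   # pretend WAL segment

# With postgres stopped (pg_ctl stop): move the WAL dir, then symlink it back
mkdir -p "$WALDISK"
mv "$PGDATA"/pg_xlog/* "$WALDISK"/
rmdir "$PGDATA/pg_xlog"
ln -s "$WALDISK" "$PGDATA/pg_xlog"

ls -ld "$PGDATA/pg_xlog"          # now a symlink; WAL writes land on disk 2
```

This spreads I/O (data files on one disk, WAL on the other) but is not redundancy by itself; regular pg_dump backups plus RAID on the hardware cover the recovery side.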
On Mon, 6 Oct 2003, Sam Barnett-Cormack wrote:
> On Mon, 6 Oct 2003, Somasekhar Bangalore wrote:
> > I am setting up a postgres 7.3 database for a big client. I don't want to
> > lose any data in case of a database crash or hardware failure.
> For data redundancy, I recommend RAID level 0 or 5 - 5 is vastly
> superior, if you can afford it.
You meant RAID 1, not RAID 0.
regards,
Oli
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings
On Mon, 6 Oct 2003, Bruno Wolff III wrote:
> On Mon, Oct 06, 2003 at 12:28:49 +0530,
> Somasekhar Bangalore [EMAIL PROTECTED] wrote:
> > 1. Steps needed to set up the postgres database. Is it possible for me to
> > write data onto two different disks? If so, how?
> > 2. Steps needed to recover the data in case of a database crash or
> > hardware failure.
Hi all,
Only two questions:
1) Where is the documentation about the meanings of the options passed to
pg_autovacuum?
2) Is it normal that strace wakes it up during a sleep?
From the Linux docs:
sleep() makes the current process sleep until seconds seconds have elapsed
or a signal arrives which is not ignored.
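On question 2: yes, that is normal. Per the man-page text above, sleep() returns early whenever a signal is delivered, and attaching strace (which uses ptrace) can cause exactly that kind of wakeup. A minimal sketch (in Python for brevity) of a signal cutting a 10-second sleep short:

```python
import signal
import time

class Alarm(Exception):
    """Raised by the SIGALRM handler to cut the sleep short."""

def handler(signum, frame):
    raise Alarm

signal.signal(signal.SIGALRM, handler)
signal.alarm(1)                 # deliver SIGALRM after ~1 second

start = time.monotonic()
try:
    time.sleep(10)              # nominally 10 s, interrupted after ~1 s
except Alarm:
    pass
elapsed = time.monotonic() - start
print(f"slept only {elapsed:.1f}s of the requested 10s")
```

The pg_autovacuum daemon simply goes back to sleep after such a wakeup, so seeing it in strace is harmless.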
Hi,
I was trying to create cluster-wide functions, but without success.
Even creating a function in template1 did not work as I hoped: when I create
a new database it is copied from template1, so new databases get the function,
but older databases don't have it.
Is it possible to do something like that in postgresql?
[EMAIL PROTECTED] writes:
> Is it possible to do something like that in postgresql?
No.
--
Peter Eisentraut [EMAIL PROTECTED]
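For context on that "No": CREATE DATABASE copies its template (template1 by default) once, at creation time, so a function added to template1 reaches only databases created afterwards; there is no cluster-wide function store. A psql sketch (database and function names are illustrative):

```sql
-- Connect to template1 and create the function there
\c template1
CREATE FUNCTION answer() RETURNS integer AS 'SELECT 42;' LANGUAGE sql;

-- Only databases created AFTER this point inherit the function
CREATE DATABASE newdb;
\c newdb
SELECT answer();          -- works: copied from template1

-- Pre-existing databases are untouched; the function must be
-- re-created in each of them, e.g. by scripting psql over the
-- output of: SELECT datname FROM pg_database;
```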
Tom,
I found my source of the not-removing-all-objects problem. Now, however, when I
rerun my tests I am still seeing the pg_largeobject table grow even
though there are no entries in the table.
I started with an empty pg_largeobject table and then added and deleted
6 large objects of 80K each.
Database
Chris White (cjwhite) [EMAIL PROTECTED] writes:
> Why aren't there any unused tuples?
The unused number isn't especially interesting; it's just the number
of line pointer slots that were once used and aren't at the moment.
At 4 bytes apiece, they aren't costing you anything worth noticing.
Why
But as you can see from the prior query, \lo_list showed no large
objects; this was done just prior to the vacuum.
aesop=# \lo_list
  Large objects
 ID | Description
----+-------------
(0 rows)
aesop=# vacuum verbose pg_largeobject;
NOTICE: --Relation pg_largeobject--
NOTICE: Index
Okay, now I understand what is going on. I have a second thread which is
being used to read these objects out of the database to present to the
user, and because large objects can only be accessed inside a transaction,
I have not closed the transaction on this thread. Should I do a
commit or
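The situation described above - a long-lived reader transaction keeping VACUUM from removing dead pg_largeobject rows - can be sketched in psql (the large-object OID and the two-session layout are illustrative; 262144 is the INV_READ flag):

```sql
-- Session 1 (the reader thread): the transaction is opened and never closed
BEGIN;
SELECT lo_open(12345, 262144);   -- read a large object; xact stays open

-- Session 2: dead rows still visible to session 1 cannot be removed
VACUUM VERBOSE pg_largeobject;   -- reports dead tuples it cannot yet reclaim

-- Once session 1 issues COMMIT (or ROLLBACK), a fresh VACUUM in
-- session 2 can remove those tuples and the table stops growing.
```

Either COMMIT or ROLLBACK releases the snapshot; for a read-only thread it makes no difference which.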