I'm pretty sure that the answer to this question is no, but I wanted to
verify.
Given the following scenario (pg 9.0):
Server A (Master), Server B (A's streaming replication Slave), Server C
(A's streaming replication Slave)
If you lose Server A, is there any way to promote Server B to master and
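For context, a 9.0 standby is promoted by touching the trigger file named in its recovery.conf; a minimal sketch, with paths and connection details as assumptions:

```
# recovery.conf on Server B (9.0); file path and conninfo are examples
standby_mode     = 'on'
primary_conninfo = 'host=serverA port=5432 user=replicator'
trigger_file     = '/tmp/promote_standby'
# touching /tmp/promote_standby promotes B to master; note that in 9.0,
# Server C generally has to be re-based from B after the promotion
```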
We understand that. The question is why an uncompressed Fc format is so
much larger than a straight dump. Is the format so inefficient that it takes
almost 5% extra space?
On Fri, Jul 18, 2008 at 2:36 PM, Scott Mead <[EMAIL PROTECTED]> wrote:
> Chris,
>
> Sorry I'm not responding to the list, I am not allowed to
We are doing some additional testing of our backup strategy with the goal of
moving from the text dump to the custom format and have some questions about
pg_dump -Fc.
Question 1. Why would the custom format be larger? pg_dump -Fc -Z 0 this
morning was 88GB while the normal pg_dump last night was
Another question.
When autovacuum sleeps, does it release the lock it has on the table?
What we would like to have happen is for vacuum to work for a while, sleep,
and while it is sleeping run an analyze on the table. We need this due to
how quickly our data is changing. Currently we are runnin
On Mon, Jul 14, 2008 at 4:29 PM, Alvaro Herrera <[EMAIL PROTECTED]>
wrote:
> Chris Hoover escribió:
> > I just have a couple of quick questions about 8.3 and Autovacuum.
> >
> > 1. When autovacuum chooses to vacuum a table, the entire table is
> vacuumed,
> >
I just have a couple of quick questions about 8.3 and Autovacuum.
1. When autovacuum chooses to vacuum a table, the entire table is vacuumed,
and not just a specified percentage of the table. Correct?
2. There is no way, without turning autovacuum off, of excluding
vacuum/analyze during a spec
I'm doing some testing on how to decrease our database size as I work on a
partitioning scheme.
I have found that if I have the database store all empty strings as nulls, I
get a significant savings over saving them as blank strings (i.e. '').
Below is an example of savings I am seeing for the sam
Question for those of you who are going to PGCon 2008. Where do most
attendees stay? My employer is sending me up there this year, and I would
like to stay where most of the attendees stay so I can stay plugged into
what is going on. So, do most people stay on campus, or at one of the 2
hotels m
On Thu, Feb 28, 2008 at 12:17 PM, Antonio Grassi <[EMAIL PROTECTED]> wrote:
> Hi list. I've searched the archives and I've seen a lot of posts related
> to database size, vacuuming, etc., but I can't totally understand this.
>
> I've a small database of about 300Mb (when initially loaded), whose d
On Feb 7, 2008 6:48 PM, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> Bruce Momjian escribió:
> > Dimitri Fontaine wrote:
>
> > > What about Alvaro's concern about manual vacuuming policies editing
> wrt to
> > > HOT?
> >
> > I think most people will be using autovacuum, and those who aren't will
>
I need some help calculating how much activity a query causes.
If I have a table defined as:
create table test_a (
col1 integer,
col2 char(1),
col3 varchar(35),
primary key (col1,col2)
) with oids;
When I do an "insert into test_a values (1,'a','test data');", does this
generate 3 writes
On Feb 5, 2008 2:53 PM, Bruce Momjian <[EMAIL PROTECTED]> wrote:
> Chris Hoover wrote:
> > I have been searching through the 8.3 documentation trying to find
> > information on the new HOT tables that 8.3 provides and can not seem to
> find
> > any. Can someon
I have been searching through the 8.3 documentation trying to find
information on the new HOT tables that 8.3 provides and can not seem to find
any. Can someone please point me towards some documentation on this new
feature so I can better understand it, and see if I can implement it into
our curr
On Jan 24, 2008 5:38 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
>
> The .log file is to catch any non-csv-format data that comes down the
> pipe. You can't turn it off.
>
>regards, tom lane
What type of data would you expect this to be?
>
> > [...]
> > log_destination = 'syslog,csvlog' # Valid values are
> > combinations of
>
> This is kind of strange. It should be syslog or csvlog. It shouldn't be
> both of them. Or am I wrong?
>
> Regards.
>
>
> --
> Guillaume.
> http://www.postgresqlfr.org
> http://dalibo.com
>
I'm running PG 8.3rc2 to start testing it out, and really like how it is
performing for me so far. However, I have one question on the logging. I
am trying to use syslog and csvlog to log all sql. This seems to work great
except that for every log rotation postgres does, I get a .csv log file th
Just wanted to check, are there any problems/gotchas to having a cluster of
all numeric database names? We are thinking of switching our database
naming system to use our customer number (a 6-digit integer) for the
database name. Is this a negative/bad thing?
Thanks,
Chris
PG 8.1
Is there a limit to the number of partitions a table can have in PostgreSQL?
The reason I ask is we are looking again at partitioning our databases with
the possibility of doing the partitioning by year, month, or even day.
However, we are required by HIPAA to keep 7 years of data, and we are
plan
Subject pretty much says it all. When I do my nightly dumps with
pg_dump, does that copy statement on each table cause the stats
collector to increment the seq_scan column?
thanks,
Chris
pg 8.1.3
On Nov 15, 2007 11:25 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
> > Here is the table definition:
> > CREATE TABLE rev_code_desc
> > (
> > rev_code character(4) NOT NULL,
> If changing the queries seems impractical, you could add an index on
> rev_code::text, or reconsider the choice of column ty
I am working through our application looking for ways to eliminate seq
scans in our database. I am looking for some help with this query.
SELECT rev_desc FROM rev_code_desc WHERE rev_code = lpad('0300', 4, '0')
Is there any way to make postgres use an index on this query?
Here is the table defin
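Following up on the suggestion quoted above, an expression index lets the planner match the text comparison that lpad() produces; a sketch (the index name is an assumption):

```sql
-- rev_code is character(4); lpad() yields text, so index the text form
CREATE INDEX rev_code_desc_rev_code_text_idx
    ON rev_code_desc ((rev_code::text));
```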
Question,
I have an ancient database that is running Postgresql 7.2.1 on AIX. Is it
safe to directly upgrade the database to 8.1? If not, what steps do I need
to take?
I have been pushing for 3 years to upgrade this database, and with some
upcoming projects, I am going to do another push with m
>
> > 2. copy whole data directory over to new SAN
> If database is big, then I think that is much faster to copy running
> database
> to new SAN.
> Then stop postgres and rsync what was changed - but this will be probably
> only a few files.
I believe if you do this, you will not get a function
On 9/20/07, Carol Walter <[EMAIL PROTECTED]> wrote:
>
> We just did this. I did a pg_dumpall using 8.1. Then we fired up
> 8.2 while 8.1 was still running. Uploaded the data to 8.2. Then
> shut 8.1 down. This allowed us to have the database unavailable for
> a very short time.
>
> Carol
Car
DBI to insert it into the PG. Which is basically an ETL
> (Extract, Transform, Load). I know insert is slower, but the point was to
> show a general way that will always work.
>
> Cheers
> Medi Montaseri
>
> On 8/28/07, Chris Hoover <[EMAIL PROTECTED]> wrote:
> >
>
I need some help importing a csv file. I have been given a csv file full of
data that is delimited by a comma, and the strings are delimited by a single
quote. How do you write the copy statement to use a single quote
delimiter? I have tried several things, but so far, no joy.
Thanks for any he
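For what it's worth, COPY's csv mode accepts a QUOTE character, and a single quote is written by doubling it inside a SQL string; a sketch (table name and file path are assumptions):

```sql
-- '''' is a single-quote character in SQL string literal syntax;
-- the field delimiter defaults to a comma in csv mode
COPY mytable FROM '/path/to/data.csv' WITH CSV QUOTE AS '''';
```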
On 8/21/07, Scott Marlowe <[EMAIL PROTECTED]> wrote:
>
>
> Yes. you need to use the copy command inside a function and use
> security definer to say that the function runs as a superuser.
>
Ok, I thought from the documentation that copy could only extract an entire
table or a specific column.
If I
I need some guidance.
I need to have an after insert trigger execute on a specific table and
export the results of query to a file on the database file system. Is this
possible to do from a trigger? If so, how can I accomplish this? I am
drawing a blank on how to export the results of the query
I need a little bit of help. I need to use sql to pull any tables that have
a foreign key referencing a given table's primary key.
So far I have come up with the listed query. It works great for single
column primary keys, but if a table has a multi-column primary key, it is
returning too many
On 8/10/07, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
>
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Chris Hoover wrote:
> > I have accidentally created some tables in the wrong schema. Since my
> > testers have begun to use them and put data in them, I
I have accidentally created some tables in the wrong schema. Since my
testers have begun to use them and put data in them, I was hoping for an
easy way to move the existing tables.
Is there a way to move tables between schemas without having to drop and
recreate?
Thanks,
Chris
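For anyone searching the archives: on 8.1 and later this is a one-liner (schema and table names are examples):

```sql
-- moves the table, its data, and its indexes to the other schema
ALTER TABLE wrong_schema.my_table SET SCHEMA right_schema;
```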
On 8/8/07, Michael Goldner <[EMAIL PROTECTED]> wrote:
>
> Is it possible to access tables in two different databases, running on
> the
> same server, from within psql?
>
Take a look at dblink in the contrib area. I think it will give you what
you are looking for.
Chris
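A minimal dblink sketch, with the connection string, query, and column list as assumptions:

```sql
-- dblink returns SETOF record, so the column definition list is required
SELECT *
FROM   dblink('dbname=otherdb host=localhost',
              'SELECT id, name FROM remote_table')
       AS t(id integer, name text);
```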
We need some help. We have what we believe are high value unicode
characters (Unicode 0x2). How can you search and replace for these? We are
storing this data in a text field, and having the data contain this unicode
value is violating our xml rules the application uses and causing abends in
On 7/27/07, Tena Sakai <[EMAIL PROTECTED]> wrote:
>
> Hi folks,
>
> My postgres (v 7.1.4, I think) did not come up upon reboot.
> I have tried to use startup script called postgresql in
> /etc/rc.d/init.d directory, but it reports FATAL when I
> try to stop postmaster.
>
> The most recent log file
How do you correctly move a tablespace?
I am thinking this would be the process:
1. Cleanly shutdown Postgres
2. Move tablespace directory and all contents to new location
3. Create a symlink from the new location back to the old location
4. Restart Postgres
Would that be correct? Or is the
Well, the one index:
CREATE INDEX acceptedbilling_to_date_accepted_billing_dt_idx
ON acceptedbilling
USING btree
(to_date(accepted_billing_dt::text, 'mmdd'::text));
Reject:
ERROR: functions in index expression must be marked IMMUTABLE
SQL state: 42P17
Is actually a date stored in a varch
Maybe I'm blind (wouldn't be the first time), but I can't see a way to find
out how long postgres has been running. Is there a way to tell this from a
query? I am working with some of the stat views and would like to correlate
them to how long the server has been running.
Thanks,
Chris
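On 8.1 and later the server start time is exposed directly, so uptime is a simple subtraction:

```sql
-- when the postmaster was started
SELECT pg_postmaster_start_time();
-- elapsed time since then
SELECT now() - pg_postmaster_start_time() AS uptime;
```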
I am testing upgrades to 8.2.4 from 8.1.3 and am having problems with
8.2.4 balking at the functional indexes I have created. These indexes
exist and work fine in 8.1.3, so why is 8.2.4 rejecting them?
Index 1:
CREATE INDEX acceptedbilling_to_date_accepted_billing_dt_idx
ON acceptedbilling
USIN
I am doing some research into partitioning my postgres database. While
doing this, I am trying to take the opportunity to improve the overall
database design. One of the things I am thinking of implementing would be
the use of nullif/coalesce. Our application tends to send a lot of strings
tha
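The nullif/coalesce idea mentioned above might look like this on the way in and out (table and column names are assumptions):

```sql
-- store empty strings as NULL to save the per-value storage overhead
INSERT INTO claims (note) VALUES (NULLIF('', ''));   -- stores NULL
-- present NULL back to the application as an empty string
SELECT COALESCE(note, '') AS note FROM claims;
```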
Just curious, what sort of field do they use to mark the rows? I'm assuming
a timestamp since it would have to be part of the primary key, or am I way
off? This has really gotten me thinking about how I might implement this in
my database.
Thanks,
Chris
On 5/31/07, Jim Nasby <[EMAIL PROTECTED
[EMAIL PROTECTED]> wrote:
"Chris Hoover" <[EMAIL PROTECTED]> writes:
> maintenance_work_mem = 1048576 (I have 9GB ram on this server).
You might have that much RAM, but I wonder how much of it the kernel
will give to any one process. Did you check the ulimit settings the
postmaster
maintenance_work_mem = 1048576 (I have 9GB ram on this server).
Chris
On 5/30/07, Tom Lane <[EMAIL PROTECTED]> wrote:
"Chris Hoover" <[EMAIL PROTECTED]> writes:
> I am getting the following error when trying to run a reindex on one of
my
> databases.
> reindex
Any ideas?
-- Forwarded message --
From: Chris Hoover <[EMAIL PROTECTED]>
Date: May 29, 2007 11:36 AM
Subject: Out of Memory on Reindex
To: "pgsql-admin@postgresql.org"
I am getting the following error when trying to run a reindex on one of my
databases.
reind
I am getting the following error when trying to run a reindex on one of my
databases.
reindexdb: reindexing of database "xxx" failed: ERROR: out of memory
DETAIL: Failed on request of size 268435456.
Can someone advise on what memory parameter was violated? Are we looking at
work_mem, shmmax,
Ok, we are looking more into this. In the meantime, can someone please
explain to me (or point me to the correct documentation) what happens in
PostgreSQL when a commit is issued. What exactly does the database do?
Thanks,
Chris
On 5/22/07, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
I found an e-mail showing how to strace and here is what I have found. Does
it help/mean anything to those who can decode this?
We are installing PostgreSQL from the official rpms.
Chris
time strace -c -p 2690
Process 2690 attached - interrupt to quit
Process 2690 detached
% time seconds
I am trying to diagnose a slowness issue in some of my databases. I think
the issue is due to a hardware/network upgrade over the weekend, but like
all dba's, I have to prove it.
Right now, I have almost all commits being issued taking between 1 and 5
seconds to complete. They can take as long a
Hello,
I am looking for any advice about PostgreSQL training courses that might be
available. I have been a PostgreSQL DBA for the past 3+ years and have over
7 years experience as a DBA and am wanting to improve and hone my skills. I
would like to learn more about PostgreSQL and learn tips, tr
/11/07, Robert Treat <[EMAIL PROTECTED]> wrote:
On Wednesday 11 April 2007 16:14, Chris Hoover wrote:
> I'm trying to pull a listing of views with the columns and column
aliases.
> However, I must be blind, and can't find what system view has the alias
for
> the columns of
I am running my system with the logs going into a dated directory. Then
each night, I have a cron job that tars up the previous days directory.
Here is my script for the archive_command:
#!/bin/bash
echo "start" >> ~/archive_log_timings
date >> ~/archive_log_timings
source /home/postgres/etc/po
I'm trying to pull a listing of views with the columns and column aliases.
However, I must be blind, and can't find what system view has the alias for
the columns of the view. Would someone please enlighten me.
Thanks,
Chris
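In case it helps the next searcher: a view's output column names (i.e. the aliases) live in pg_attribute under the view's pg_class entry; a sketch:

```sql
SELECT c.relname AS view_name,
       a.attname AS column_alias
FROM   pg_class     c
JOIN   pg_attribute a ON a.attrelid = c.oid
WHERE  c.relkind = 'v'       -- views only
  AND  a.attnum  > 0         -- skip system columns
ORDER  BY c.relname, a.attnum;
```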
hope this makes sense.
Chris
On 4/11/07, Chris Hoover <[EMAIL PROTECTED]> wrote:
No, I need to store the entire file as received into the database.
Chris
On 4/11/07, Alvaro Herrera < [EMAIL PROTECTED]> wrote:
>
> Chris Hoover escribió:
> > Hopefully a quick question.
No, I need to store the entire file as received into the database.
Chris
On 4/11/07, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
Chris Hoover escribió:
> Hopefully a quick question.
>
> If I store a file into a bytea field, is there any way for postgres to
read
> the contents
Hopefully a quick question.
If I store a file into a bytea field, is there any way for postgres to read
the contents of the file?
Here is my scenario:
Receive text file
insert file into bytea field
insert trigger fires
- read file contents
- post contents into production tables
Is this at all
Is there any way to set the sqlstate to a custom value when inside a plpgsql
function? I am still working on converting from Sybase ASA to Postgres and
some of the triggers and stored procedures in Sybase are setting the
variable to specific custom values.
Thanks,
Chris
I am starting to have some significant problems with one of my
8.1.3 databases. I am getting more and more of these errors:
wbd,20552,dholton,2007-01-31 15:32:27.137 EST,45c0fcb4.5048,16,2007-01-31
15:31:48 EST,342126177,UPDATE,ERROR: could not access status of transaction
253762136
wbd,20552,dho
Over the weekend, I suffered from major database corruption on my
8.1.3 database. Fortunately, this was my development/non-production box
so I did
not have to spend my entire weekend getting it back online, but I did spend
a major portion of it. I ended up having to do a pitr recovery of the
syste
One other option is to shut the database down completely, and then do a copy
of the file system to the new server. I have done this when I need to move a
very large database to a new server. I can copy 500 GB in a couple of
hours, where restoring my large databases backups would take 10+ hours.
Jus
Very easy,
Install the new version of PostgreSQL on your new machine. Then use the new
machine's install to pg_dump the old machine's database and load it into the
new machine (obviously, the machines must be able to talk to each other
over the network). Very easy. That is how I upgraded all
select pg_database_size('')/1024/1024;
http://www.postgresql.org/docs/8.1/interactive/functions-admin.html
On 11/6/06, Nikola Radakovic <[EMAIL PROTECTED]> wrote:
Dear community, I would be very grateful if someone could answer me, where to find instructions how to set the size (in megabytes) for par
Has anyone ever done a conversion from Sql Anywhere to PostgreSQL? I have a task to investigate what this would take. Also, the Sql Anywhere database stores image files in the database as long binary types. What would be the best format to store these in PostgreSQL?
Also, while all of my current P
Is there an easy way to get a count on the number of transactions postgres is processing a min/hour/day/etc.? If so, can it be broken down by the individual databases in the cluster? Thanks, Chris
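One approach, assuming the cumulative counters in pg_stat_database are enough: sample them twice and subtract to get a per-minute or per-hour rate.

```sql
-- cumulative commits + rollbacks per database since stats were last reset
SELECT datname,
       xact_commit + xact_rollback AS total_transactions
FROM   pg_stat_database;
```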
What sort of lock does truncate require? If it is not an access exclusive lock, what locks would block the truncate? Thanks, Chris
I am looking to squeeze a bit more speed out of my database by clustering most of my tables. However, on tables with multiple indexes, how is the best way to choose the index. I am thinking I want to use the index with the largest pg_stat_user_indexes.idx_tup_read. Would this be correct? If not
or = 0.001
autovacuum_analyze_scale_factor = 0.0005
autovacuum_vacuum_cost_delay = -1
autovacuum_vacuum_cost_limit = -1
statement_timeout = 0
lc_messages = 'C'
lc_monetary = 'C'
lc_numeric = 'C'
lc_time = 'C'
ittle
bit of latitude on what I do with it. Chris
On 9/6/06, Tom Lane <
[EMAIL PROTECTED]> wrote:
"Chris Hoover" <
[EMAIL PROTECTED]> writes:
> Right now, I have 510 log files waiting to be archived totaling 8GB.
> ...
> Why is the server so far behind?
Good question. T
Tom, did you get my reply to this message with my data? I have not seen it come onto the list. I sent it out yesterday. Chris
On 9/1/06, Tom Lane
<[EMAIL PROTECTED]> wrote:
Alvaro Herrera <
[EMAIL PROTECTED]> writes:
> strace -s0
> That'll cut any strings though, not only for read/writes. You'll stil
t has potential personal/patient data in it.
Chris
On 8/31/06, Tom Lane <[EMAIL PROTECTED]> wrote:
"Chris Hoover" <[EMAIL PROTECTED]> writes:
> Any other ideas? I am still 508 archive log files behind.
Try "strace -f -p " on the archiver subprocess to see what it
lc_monetary = 'C' lc_numeric = 'C' lc_time = 'C' add_missing_from = on
On 8/29/06, Chris Hoover <[EMAIL PROTECTED]> wrote:
The oldest log file that is not archived is: -rw--- 1 postgres postgres 16777216 Aug 28 15:24 00010193005A
The
Sorry, this was supposed to go to the -general list.
On 8/30/06, Chris Hoover <[EMAIL PROTECTED]> wrote:
I am in need of some help. I need to use the encode function against a text string in one of my pg/plsql functions. However, the encode function expects a bytea field to be passed to it
I am in need of some help. I need to use the encode function against a text string in one of my pg/plsql functions. However, the encode function expects a bytea field to be passed to it. How do I convert my text string into a bytea?
Thanks, Chris
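One commonly suggested workaround is decode() with the 'escape' format, which maps text to bytea byte-for-byte (any literal backslashes in the string would need doubling first); a sketch with hypothetical names:

```sql
-- base64-encode a text column by first converting it to bytea
SELECT encode(decode(my_text_col, 'escape'), 'base64') FROM my_table;
```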
file.) Perhaps try running it manually with an early log file as the PostgreSQL user account? When you say you're 500 files behind, how old is the earliest log? Andy.
Chris Hoover wrote:> I am working on getting my development system working with archive> logging so I can use the pitr features of
Mr. VanIngen, there are native PostgreSQL dbsize commands in 8.1. Check out http://www.postgresql.org/docs/8.1/interactive/functions-admin.html
for more details. Hopefully these will meet your needs if you don't get a better answer. Chris
On 8/29/06, Lane Van Ingen <
[EMAIL PROTECTED]> wrote:
G
I am working on getting my development system working with archive logging so I can use the pitr features of Postgres. However, I am finding that the actual archiving is not keeping up with the generation of the logs. I am now over 500 wal archive files behind (8+ GB).
What controls how often the
I am trying to create a table in my production system that has a foreign key into an existing table. However, this action is being blocked by transactions on that table, and is blocking any additional selects on that table.
Why is this? The table is empty (being a create), so I don't understand
I have been testing pitr with my development system. After the system rolled forward, I recieve a ton of errors right after it come on line. I'm assuming these are fine and just the system cleaning up, but I wanted to make sure.
Thanks, Chris
,11962,,2006-07-13 16:11:16.281 EDT,,360 LOG: invali
I am having what appears to be a bug with unique indexes on 8.1.3. I have created a new table: create table payer_835 ( payer_id int8 not null default nextval('payer_835_payer_id_seq'::regclass) primary key,
payer_name varchar(50) not null, payer_trn03 varchar(10) not
On 7/7/06, Benjamin Krajmalnik <[EMAIL PROTECTED]> wrote:
Is there a query which can be issued which will return the size of a database? I see that in pgadmin you can get the values on a per table manner. I have looked at the pg_catalog schema, but to be truthful I cannot
figure out how to get that in
I'm not sure where to post requests for rpms so I'm posting here. Can the PostgreSQL rpm team please release rpms for 8.1.3/8.1.4 for RHEL 2.1? Thanks, Chris
:25PM -0400, Chris Hoover wrote:> Are there any plans to fix this so we can see at least the table activity (
>> Thanks,>> Chris>> On 6/26/06, Tom Lane <[EMAIL PROTECTED]> wrote:> >> >"Chris Hoover" <
[EMAIL PR
Are there any plans to fix this so we can see at least the table activity (i.e. vacuuming table x) any time soon? Thanks, Chris
On 6/26/06, Tom Lane
<[EMAIL PROTECTED]> wrote:"Chris Hoover" <
[EMAIL PROTECTED]> writes:> I turned on autovacuum on one of my database servers o
I turned on autovacuum on one of my database servers over the weekend. I have been looking at the logs trying to see what it is doing. However, all I see are lines like this: ,7674,,2006-06-26 16:31:37.650 EDT,,1 LOG: autovacuum: processing database "postgres"
Is there a way to tell if autovac
Just curious, what are most of you setting your vacuum and autovacuum parameters to for your 8.x databases? I just turned on autovacuuming on one of my db servers and went with a very conservative vacuum_cost_delay of 200 and vacuum_cost_limit of 50. I am wondering if anyone else has tested to fin
None of this will work since he is using PG 7.2. To start off with, an urgent upgrade is needed to a newer version since there are serious bugs and data loss issues in these earlier versions. As far as having the
2.2 GB of xlogs, someone more knowledgeable of this would need to answer that. On 6
Question: I am wanting to run a cluster on the main table in all of our 200+ databases. However, my manager wants to know what would happen if we suffer a major failure while the cluster is running. If I am running a "cluster chosen_index on main_table", and the server crashes, would the database
I am having some performance issues on some of my larger databases (50-100+ GB). I just ran a vacuum analyze on our key table, and saw this at the end.
DETAIL: 562499 dead row versions cannot be removed yet.
There were 15117 unused item pointers.
0 pages are entirely empty.
CPU 8.32s/2.59u sec elapse
Is there any way to query the free space map and see how many entries there are for a specific table? We are starting to have some performance issues since our upgrade to 8.1 (running 8.1.3). However I can vacuum one table and the issues are temporarily solved. I am trying to do some research an
This question would be better suited for the pgsql-odbc list. You will probably also get more answers there. However, I believe you can download the windoze ODBC files from any of the standard
postgresql.org mirror sites. Chris
On 6/12/06, rock well <[EMAIL PROTECTED]> wrote:
Hi all, i have a quest
Is there an easy way to see if this is the problem? How can I verify the size of PIPEBUF, and additionally, how do you tell the exact size of the statement being logged?
On 6/9/06,
Tom Lane <[EMAIL PROTECTED]> wrote:"Chris Hoover" <
[EMAIL PROTECTED]> writes:> ---I am ha
I'm not sure what is going on, but this is the third try to get this to the list.
---
I am having a major problem with the PostgreSQL log files. We
have sql logging turned on for recovery purposes, and have discovered a
problem. We have lines of sql being interrupted by the next log
line before the
In 8.1, what is the best way to index large text/varchar fields for wild card searches? I am getting a good number of queries being submitted for reporting purposes that are doing the following - where like '%some phrase%' and
Is there a good way to index these fields so that the db engine will
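Plain btree indexes cannot help a leading-wildcard LIKE; on later releases the usual answer is a trigram index from the pg_trgm module (table and column names are assumptions, and GIN trigram support arrived well after the 8.1 mentioned above):

```sql
CREATE EXTENSION pg_trgm;   -- 9.1+; earlier versions load contrib/pg_trgm.sql
CREATE INDEX docs_body_trgm_idx
    ON docs USING gin (body gin_trgm_ops);
-- queries like: WHERE body LIKE '%some phrase%' can now use the index
```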
Sounds like you might have a firewall blocking the new port. Make
sure you can connect locally to the database, and then get a hole
poked in your iptables firewall (assuming this is a linux server). HTH, Chris
On 6/9/06, Mohamed Fazil <[EMAIL PROTECTED]> wrote:
Hi to all, I am facing an issue wh
Autovacuum does not release space back to the OS. It only marks the tuples available for reuse. To give the space back to the OS, you would have to run a vacuum full on the table. Chris
On 6/1/06, Sriram Dandapani <[EMAIL PROTECTED]> wrote:
Hi
I have a table about 17G in size. I di
Ok, I'm still working on the possibility of turning autovacuum on. Question: I have tables that are multiple GB in size. When I run the threshold calculations using "normal" values, it seems that my large important tables would almost never be vacuumed. Here is an example: in my largest database, my most i
Thanks to those who answered my first question. I am looking at implementing autovacuuming on our new production 8.1.3 systems. I am reading the documentation on the various configuration parameters. The autovacuum_analyze_threshold and autovacuum_vacuum_threshold talk about the number of update
PostgreSQL 8.1.3. Question on autovacuum. autovacuum_naptime (integer): Specifies the delay between activity rounds for the autovacuum subprocess. In each round the subprocess examines one database and issues VACUUM and ANALYZE commands as needed for tables in that database. The delay is measured in se
Question: can you set both log_rotation_age and log_rotation_size? If so, does either one have precedence over the other? I would like to have both on and am hoping that you will tell me that they work together.
So, if log_rotation_age = 1440, and log_rotation_size = 1048576 then my log will
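For the record, the two settings do cooperate: rotation happens whenever either limit is reached, and neither takes precedence over the other. A postgresql.conf fragment matching the values above:

```
log_rotation_age  = 1440      # minutes; rotate at least once a day
log_rotation_size = 1048576   # kilobytes; also rotate when a file hits 1 GB
```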
Is there a way inside of Postgresql to automatically terminate idle
connections? I would like to terminate any connection to my database
that has not has any activity for a specified period of time. I would
like to be able to do this despite the state of the connection (the
majority of my truly i
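There was no built-in idle timeout in that era; a cron job against pg_stat_activity is the usual workaround. On a modern server (these are 9.2+ column names, an assumption relative to the version above) it might look like:

```sql
-- terminate sessions idle for more than 30 minutes, sparing ourselves
SELECT pg_terminate_backend(pid)
FROM   pg_stat_activity
WHERE  state = 'idle'
  AND  now() - state_change > interval '30 minutes'
  AND  pid <> pg_backend_pid();
```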
I know it is in the docs, but right now I can't see it. How do I pull the length of a specific varchar column? I have just found a problem with our schema being out of sync on one specific column, and need to find out how many of our 200+ databases need to be fixed.
Here is what I have so far:sele
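information_schema makes the length check straightforward (the column name is an assumption):

```sql
SELECT table_name, column_name, character_maximum_length
FROM   information_schema.columns
WHERE  column_name = 'my_column'
  AND  data_type   = 'character varying';
```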
You should use the PG 8.1 pg_dump to dump the 7.4.3 database and then import it into the new server. It is not recommended to do it the way you have done it.
On 4/25/06,
Uwe Maiwald <[EMAIL PROTECTED]> wrote:
Hi, I get the following warning from pg_restore and would like to know if I can safely igno