Thanks to all who responded - upgrade was successful!
One final note: when using pg_upgrade ... --link, at the end it recommends
running delete_old_cluster.sh to remove the old data files. I'm tempted,
but --link re-uses the old data files... bit of a contradiction there, if you
follow my meaning?
Is
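(For what it's worth, my understanding is that --link hard-links the new cluster's files to the old ones, so delete_old_cluster.sh only removes the old cluster's directory entries - the data stays reachable through the new cluster's links. A rough illustration, with made-up paths and inode numbers:
ls -i old/base/16384/16385 new/base/16384/16385   # same inode listed twice
rm old/base/16384/16385                           # drops one link; the file survives via new/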
> "C" is the official name of that locale. Not sure how you got it to say
> "POSIX" ... maybe we didn't have normalization of the locale name back
> then?
>
> Anyway, simplest fix seems to be to update the 9.0 installation's
> pg_database to say "C" in those entries.
Never ceases to amaze me wher
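For the record, the kind of catalog tweak being suggested would look roughly like this (run as superuser against the old 9.0 cluster; 9.0 stores the locale per-database in datcollate/datctype):
UPDATE pg_database SET datcollate = 'C', datctype = 'C'
WHERE datcollate = 'POSIX' OR datctype = 'POSIX';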
Hi all,
Using CentOS 5.x
I'm trying to upgrade (without having to dump/restore a 1.5TB db) from 9.0
to 9.2 using pg_upgrade, but am having a few issues.
1. I ran into the (usual?) issue with ld libraries conflicting, so
renamed /etc/ld.so.conf.d/postgresql-9.0-libs.conf to blah, and reran
ldcon
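For anyone else hitting this, the steps amount to roughly the following (the .bak name is my own choice; the .conf path is as shipped by the CentOS PGDG 9.0 packages):
mv /etc/ld.so.conf.d/postgresql-9.0-libs.conf /etc/ld.so.conf.d/postgresql-9.0-libs.conf.bak
ldconfig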
On Fri, June 3, 2011 13:57, t...@fuzzy.cz wrote:
> There's something very wrong with snames - the planner expects 22 rows but
> gets 164147851. Which probably causes a bad plan choice or something like
> that.
> Try to analyze the snames table (and maybe increase the statistics
> target on the col
On Fri, June 3, 2011 13:57, t...@fuzzy.cz wrote:
> See this http://explain.depesz.com/s/THh
>
>
> There's something very wrong with snames - the planner expects 22 rows but
> gets 164147851. Which probably causes a bad plan choice or something like
> that.
> Try to analyze the snames table (and ma
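The suggested fix boils down to something like this (the column name is a guess - use whichever column the join/filter is on):
ALTER TABLE snames ALTER COLUMN name SET STATISTICS 1000;
ANALYZE snames;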
Greets,
I'm trying to figure out why the following SELECT has become slow (hardware,
code changes, etc) and would appreciate any comments on interpreting the
EXPLAIN ANALYZE output. It *used* to take a few seconds at most, but not
anymore... In figuring out which part is taking so long, what's t
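In case it helps frame the question, the output comes from something along these lines (query elided; the BUFFERS option assumes 9.0 or later):
EXPLAIN (ANALYZE, BUFFERS) <query>;
-- compare each node's estimated rows= with its actual ... rows=; a large mismatch
-- usually points at stale or insufficient statistics on that node's table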
Hi,
Is it possible to replicate only a single or selected tables (as opposed to
the whole shebang) using PG's built-in replication?
I can't seem to find much on this topic, so I'm guessing not.
I have a feeling I'll need to return to Londiste for this particular
application.
Thanks
Greets,
I've just activated another replication slave and noticed the following in the
logs:
WARNING: xlog min recovery request 38E/E372ED60 is past current point
38E/D970
It seems to be happily restoring log files from the archive, but the warning
message above concerns me.
Googling only
Resolved the startup problem by identifying which pg_clog file it was failing
on with:
strace postgres --single -D 9.0/data
Then grabbed that file from the replication slave.
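For the archives, narrowing strace to file opens makes the offending pg_clog segment easy to spot (a sketch; data directory as in my setup above):
strace -f -e trace=open postgres --single -D 9.0/data 2>&1 | grep pg_clog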
Cheers
h
I managed to resolve this issue. Using strace
On Sat, April 23, 2011 09:56, Henry C. wrote:
> 1. how to proceed with getting db1 back up so I can run the script?
> 2. how to proceed with replicated database (db2)? (switch to standalone
> (since it's in readonly replication mode) and run upgrade fix script as per
> wik
Hi,
Lovely start to Saturday morning. My eyes are misting over with joy.
I'm trying to figure out whether I'm affected (probably) by this beauty
http://wiki.postgresql.org/wiki/20110408pg_upgrade_fix (I missed the ANNOUNCE
advisory, even more joy).
One of my DBs fails at startup:
2011-04-23 09:
> I've done some testing of PostgreSQL on different filesystems, and with
> different filesystem mount options.
Since Pg is already "journalling" (the WAL), why bother duplicating the effort (and
paying the performance penalty, whatever that penalty may be) for no real
gain (except maybe a redundant sense of s
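Just to make the comparison concrete, the sort of thing being benchmarked is metadata-only journalling for the data partition versus the default ordered/journalled modes - e.g. an fstab line like the following (device, mountpoint and filesystem are placeholders; this is the knob under discussion, not a recommendation):
/dev/sdXN  /var/lib/pgsql  ext4  noatime,data=writeback  0  2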
On Thu, April 14, 2011 20:54, Andrew Sullivan wrote:
> On Thu, Apr 14, 2011 at 12:27:34PM -0600, Scott Marlowe wrote:
>
>>> That's what a UPS and genset are for. Who writes critical stuff to
>>> *any*
>>> drive without power backup?
>>
>> Because power supply systems with UPS never fail.
>>
>
> R
On Thu, April 14, 2011 18:56, Benjamin Smith wrote:
> After a glowing review at AnandTech (including DB benchmarks!) I decided to
> spring for an OCZ Vertex 3 Pro 120 for evaluation purposes. It cost about $300
> with shipping, etc and at this point, won't be putting any
>
> Considering that I sp
> On 14/04/2011 2:15 AM, Henry C. wrote:
> Nope, it's working as designed I'm afraid.
>
> There are params you can tune to control how far slaves are allowed to
> get behind the master before cancelling queries...
Thanks Craig - this dawned on me eventually.
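For the archives, I assume Craig means the standby-side settings in postgresql.conf, roughly (9.0 names; values are only examples):
max_standby_streaming_delay = 300s   # how long a standby query may hold up replay of streamed WAL
max_standby_archive_delay = 300s     # same, for WAL restored from the archive
# plus, on the master, vacuum_defer_cleanup_age can reduce the conflicts in the first place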
On Thu, April 14, 2011 11:30, Leonardo Francalanci wrote:
> have a look at
>
> http://postgresql.1045698.n5.nabble.com/Intel-SSDs-that-may-not-suck-td4268261.html
>
>
>
> It looks like those are "safe" to use with a db, and aren't that expensive.
The new SSDs look great. From our experience,
On Thu, April 14, 2011 10:51, Craig Ringer wrote:
> On 14/04/2011 4:35 PM, Henry C. wrote:
>
>
>> There is no going back. Hint: don't use cheap SSDs - cough up and use
>> Intel.
>>
>
> The server-grade SLC stuff with a supercap, I hope, not the scary
>
On Thu, April 14, 2011 06:19, Benjamin Smith wrote:
> The speed benefits of SSDs as benchmarked would seem incredible. Can anybody
> comment on SSD benefits and problems in real life use?
>
> I maintain some 100 databases on 3 servers, with 32 GB of RAM each and an
> extremely rich, complex schema
> However, a SELECT eventually fails with "canceling statement due to conflict
> with recovery".
>
> Where else can I check, or what else can I do to determine what the problem
> is?
...or maybe there _is_ no problem.
select count(*) from big_table; -- will fail because it's long-lived and rows
a
On Wed, April 13, 2011 20:15, Henry C. wrote:
> If I try and execute a long-lived SQL query on the slave, it eventually fails
> with "canceling statement due to conflict with recovery". Replication is
> definitely working (DML actions are propagated to the slave), but somethi
Forgot to mention recovery.conf on slave:
standby_mode = 'on'
primary_conninfo = 'host..."
restore_command = 'cp /home/psql-wal-archive/%f "%p"'
archive_cleanup_command = 'pg_archivecleanup /home/psql-wal-archive %r'
The wiki states "If wal_keep_segments is a high enough number to retain the
WA
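The corresponding master-side setting lives in postgresql.conf, e.g. (value purely illustrative; each segment is 16MB):
wal_keep_segments = 256   # keep ~4GB of WAL on the master for standbys to catch up from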
Greets,
Pg 9.0.3
This must be due to my own misconfiguration, so apologies if I'm not seeing
the obvious - I've noticed that my slave seems to be stuck in a permanent
startup/recovery state. ps on the slave shows:
...
postgres: wal receiver process streaming 190/A6C384A0
postgres: startup pro
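(As I later gathered, a standby sits in that startup/recovery state for as long as it replicates; to confirm it is actually advancing, the positions can be compared with something like:)
-- on the master
SELECT pg_current_xlog_location();
-- on the standby
SELECT pg_last_xlog_receive_location(), pg_last_xlog_replay_location();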
On Wed, April 13, 2011 11:16, Simon Riggs wrote:
>> seems to be chugging along at a rather sedate pace.
>
> The replication lag is zero since the master and slave WAL locations match.
>
>
> There seems to be nothing to expedite... why do you say it is slow?
>
>
> Maybe because you see this as an "i
Greets,
My test replication seems to be proceeding normally, but the process appears
to be quite slow:
SLAVE
...postgres: startup process recovering 000101900024(1)
...postgres: wal receiver process streaming 190/244FEA80
MASTER
...postgres: wal sender process replicator 1.1.
On Wed, April 13, 2011 04:28, Fujii Masao wrote:
> When the standby fails to read the WAL file from the archive, it tries to
> read that from the master via replication connection. So the standby would not
> skip that file.
Great, thanks. It looks like it's proceeding normally (if slow) then.
Greets,
Pg 9.0.3.
I'm trying out Pg's built-in replication for the first time and noticed
something odd.
On the slave I see the following in the logs (after rsyncing all from master
to slave and firing up Pg on the slave):
...
restored log file "0001018E000E" from archive
restored l
On Sat, April 2, 2011 22:30, Tom Lane wrote:
>> Have you tried upping the aggressiveness of autovacuum?
>>
>
> I'm wondering about poor selection of the cost_delay settings in
> particular. It's quite easy to slow autovacuum to the point that it takes
> forever to do anything.
It's been on the de
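The knobs in question, for reference (9.0 names; values purely illustrative):
autovacuum_vacuum_cost_delay = 10ms   # default 20ms; lower means autovacuum sleeps less and moves faster
autovacuum_vacuum_cost_limit = 2000   # default -1, i.e. fall back to vacuum_cost_limit (200)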
On Sat, April 2, 2011 21:26, Scott Marlowe wrote:
> On Sat, Apr 2, 2011 at 11:26 AM, Henry C. wrote:
>
>> On Sat, April 2, 2011 14:17, Jens Wilke wrote:
>>
>>> Nevertheless since at least 8.4 IMO there's no need to bother with
>>> manual vacuum any mo
On Sat, April 2, 2011 14:17, Jens Wilke wrote:
> Nevertheless since at least 8.4 IMO there's no need to bother with
> manual vacuum any more.
Sadly, in my case, the db is so busy that autovac processes run for weeks and
never catch up (insufficient h/w for the app quite frankly - the addition of
s
Forgot to mention: I'm using 9.0.3
> Usually a manual vacuum cancels a running autovacuum task.
Not in my case - however, the autovac does seem to be in a waiting state.
> You should find a notice about the cancellation in the logfile.
>
> > current_query | vacuum analyze
> > age | 11:
> Anyway, is that autovac duplicating work or locked out and waiting?
Impolitely responding to my own post: a quick strace confirms the autovac
process is indeed locked out and waiting its turn to work.
Presumably when my manual vacuum finishes, it will then proceed and
*hopefully* not re-vacu
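The same conclusion can be reached without strace, straight from the stats view (9.0 column names):
SELECT procpid, waiting, current_query
FROM pg_stat_activity
WHERE current_query LIKE 'autovacuum:%';
-- waiting = t means it is blocked on a lock (here, the one my manual VACUUM holds)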
Greets,
I just noticed something odd: I'm busy with a manual vacuum on a table and an
autovacuum keeps firing up as well. Thinking this looks rather weird, I
pg_cancel_backend() the autovacuum process:
current_query | vacuum analyze page_citation_text;
age | 11:34:10.759279
...
curren
On Fri, November 5, 2010 09:52, Grzegorz Jaśkiewicz wrote:
> Timing is on.
> I would say hashtext is consistently beating md5 in terms of performance
> here.
nice concise answer, thanks Grzegorz.
G'day,
I need to do a mass update on about 550 million rows (I will be breaking it up
into chunks based on id value so I can monitor progress).
Hashing one of the columns is part of the process and I was wondering which is
more efficient/faster: md5() or hashtext()?
hashtext() produces a nice t
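For the curious, the comparison was roughly the following (table and column names changed; note hashtext() is an undocumented internal function, so its output is not guaranteed to stay stable across major versions):
\timing on
SELECT count(md5(text_col)) FROM big_table;        -- md5() returns 32 hex chars as text
SELECT count(hashtext(text_col)) FROM big_table;   -- hashtext() returns an int4
-- and the chunked update itself, driven by id ranges:
UPDATE big_table SET hash_col = hashtext(text_col) WHERE id >= 1 AND id < 1000000;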