While doing a PG dump, I seem to have a problem:
ERROR: invalid memory alloc request size 4294967293
Upon googling, this seems to be a data corruption issue!
( Came about while doing performance tuning as being discussed on the
PG-PERFORMANCE list:
note:
8.2.9 Release Date: 2008-06-12; 8.2.21 Release Date: 2011-04-18
seems like you were running almost three years without bugfixes.
aside from fixing your current problem, I would first do the upgrade
to avoid more corruption.
2011/4/18 Phoenix Kiula phoenix.ki...@gmail.com
While doing
On Mon, Apr 18, 2011 at 11:02 PM, t...@fuzzy.cz wrote:
Thanks Filip.
I know which table it is. It's my largest table with over 125 million
rows.
All the others are less than 100,000 rows. Most are in fact less than
25,000.
Now, which specific part of the table is corrupted -- if it is
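One low-tech way to find the corrupted region (a sketch; bigtable, id and the slice size are placeholders, not from the thread) is to read the table in slices until one of them reproduces the error:

```sql
-- each slice that returns cleanly is good; the first slice that raises
-- "invalid memory alloc request size" contains the bad rows
SELECT * FROM bigtable ORDER BY id LIMIT 1000000 OFFSET 0;
SELECT * FROM bigtable ORDER BY id LIMIT 1000000 OFFSET 1000000;
-- ... then binary-search inside the failing slice with smaller LIMITs
```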
On Tue, Apr 19, 2011 at 8:35 AM, Scott Marlowe scott.marl...@gmail.com wrote:
On Mon, Apr 18, 2011 at 5:44 PM, Tomas Vondra t...@fuzzy.cz wrote:
Still, do the file backup as described in the previous posts. You could
even do an online backup using pg_start_backup()/pg_stop_backup() etc.
As soon
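A minimal sketch of such an online file-level backup on an 8.x server (paths, database name and label are placeholders; a proper PITR setup also needs archive_command configured):

```shell
# 1. tell the server a base backup is starting (8.x-era function names)
psql -d mydb -c "SELECT pg_start_backup('pre-repair');"
# 2. copy the whole data directory while the server keeps running
rsync -a /var/lib/pgsql/data/ /backup/pgdata/
# 3. tell the server the backup is done
psql -d mydb -c "SELECT pg_stop_backup();"
```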
System logs maybe? Something about a process getting killed? Have
you tried turning up the verbosity of the pg logs?
Syslog has to be compiled with PG? How do I enable it? Where should I
look for it?
The documentation, whenever it mentions syslog, always just assumes
the expression If
Hi,
Need some help here. I have a large table (200 million rows already).
There are two columns.
ColA
ColB
There is an index on ColA. It's an important column.
ColB is a boolean. Either 1 or 0.
For about 10% of the data, ColB is 1. Otherwise its default is 0.
Now, my question: for that
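For a lopsided flag like this, a partial index is the usual trick (a sketch; the table and index names are placeholders, not from the thread):

```sql
-- index only the ~10% of rows where colb = 1; queries filtering on
-- colb = 1 can use it, and the index stays a tenth of the full size
CREATE INDEX idx_t_colb_set ON t (cola) WHERE colb = 1;
```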
It's far easier to adjust the logging.
psql mydb
ALTER DATABASE mydb SET log_statement = 'mod';
Thanks Scott.
Two questions:
1. Will this log_statement='mod' be a permanent thing, or when I have
done my testing and identifying the problem SQL, I can set it back to
default so not too much logging is
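For what it's worth, a setting made with ALTER DATABASE persists for new sessions until it is explicitly reset, so reverting is a single statement (a sketch):

```sql
ALTER DATABASE mydb SET log_statement = 'mod';   -- applies to new connections
-- after the problem SQL is identified, go back to the default:
ALTER DATABASE mydb RESET log_statement;
```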
Hi
I'm having some issues with a code base where several different
programs are updating one status column in the DB with their code.
Mostly this is working, but in some cases the status column in a DB is
getting updated when it shouldn't have been, and we're trying to
locate which program did
From 8.2.9 on a 32 bit Linux CentOS server to a different server
(Cpanel/WHM) which is now running 8.4.2 with 64 bit CentOS.
Truly appreciate any pointers or recommendations of good folks who can
do this, or have done this.
Thanks!
--
Sent via pgsql-general mailing list
2, 2010 at 12:06 AM, Andy Colson a...@squeakycode.net wrote:
On 01/01/2010 02:01 AM, Phoenix Kiula wrote:
From 8.2.9 on a 32 bit Linux CentOS server to a different server
(Cpanel/WHM) which is now running 8.4.2 with 64 bit CentOS.
Truly appreciate any pointers or recommendations of good folks
2009/12/23 Devrim GÜNDÜZ dev...@gunduz.org:
On Tue, 2009-12-22 at 18:34 -0500, Francisco Reyes wrote:
Which os/distro are you looking for? I hope it is not Fedora 11 -
x86_64.
CentOS x86_64
As written before:
http://yum.pgsqlrpms.org/8.4/redhat/rhel-5-x86_64/repoview/
Thanks. Just
2009/12/23 Devrim GÜNDÜZ dev...@gunduz.org:
On Wed, 2009-12-23 at 18:27 +0800, Phoenix Kiula wrote:
What's the YUM command to upgrade PG? Do I need to stop PG server? Can
I be sure that my config and other settings will not be overwritten?
If you are upgrading from 8.X.Y to 8.X.Z, you don't
Hi
Not sure what I am doing wrong.
I am not getting all the required rpms for the Postgres version 8.4.2.
=
http://rpm.pbone.net/
Your query postgresql-devel-8.4.2 did not match any entry in database.
Your query postgresql-server-8.4.2 did not match any entry in database.
On Fri, Dec 18, 2009 at 3:54 AM, Bruce Momjian br...@momjian.us wrote:
Bruce Momjian wrote:
pg_migrator (not pg_upgrade) has been used by many people to migrate
from 8.3 to 8.4. I just helped someone yesterday with a migration.
pg_migrator threw an error because they had reindexed
On Tue, Dec 15, 2009 at 9:26 PM, Howard Cole howardn...@selestial.com wrote:
Phoenix Kiula wrote:
An easy question for some I hope.
I have a DB from 8.2 days that when I now dump and try to take into
the 8.3.7, it gives me errors about utf-8 stuff.
I tried searching this list's archives
An easy question for some I hope.
I have a DB from 8.2 days that when I now dump and try to take into
the 8.3.7, it gives me errors about utf-8 stuff.
I tried searching this list's archives but could not come up with an answer.
Google returns some sites like these:
Actually the title of my email should have been how to **replace**
utf-8 values.
Thanks.
On Mon, Dec 14, 2009 at 7:03 PM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
An easy question for some I hope.
I have a DB from 8.2 days that when I now dump and try to take into
the 8.3.7, it gives
=false-geo_id=16000US3752760-states=
Which part of this is non-UTF8? Why is this going into a UTF8 table
with corrupted values? The lc_collate etc and all settings I can
imagine are already utf-8!
Thanks for any pointers.
On Mon, Dec 14, 2009 at 7:04 PM, Phoenix Kiula phoenix.ki...@gmail.com
On Fri, Jul 31, 2009 at 12:29 PM, Greg Smith gsm...@gregsmith.com wrote:
On Thu, 30 Jul 2009, Phoenix Kiula wrote:
svn: error while loading shared libraries: libpq.so.4: cannot open
shared object file: No such file or directory
There's a long guide to working around the issues you run
We installed 8.3.7 on a new server. Some of the config changes such as
track_activities etc are nice.
But big problem. When I start SVN on this machine (which we need!) I
see this error:
-
svn: error while loading shared libraries: libpq.so.4: cannot open
shared object file: No such file or
I use a Mac OSX at work. And finally have a running PG install.
So I'm thinking: can I use some mechanism to have my local PG server
(in our premises) as a slave mirror of the main live website server
(at our data center).
Would Slony be the solution to look at? Is this a dumb thought to
begin
On Thu, Jul 30, 2009 at 8:31 PM, Scott Mead scott.li...@enterprisedb.com wrote:
...snip...
It may make more sense to setup a dedicated PITR slave in your office, and
refresh it every now and then.
http://www.postgresql.org/docs/8.3/interactive/continuous-archiving.html
Thanks. This was
Although mysqldump is the official solution, there's a fabulous perl
script mysqlhotcopy that simply copies the data from MySQL tables in
folders and restoring the data is as simple as copying the files back
to their /var/lib/mysql/data location.
I know about pg_dumpall, which creates a humongous
I read here in a different context
http://www.redhat.com/docs/manuals/cms/rhea-dpg-cms-en-6.1/ch-config.html
About /etc/security/limits.conf and /etc/sysctl.conf files.
For the last year or so I've had this on a CentOS 32 bit system with
4GB of RAM and SATA II disks, only Postgres relevant
FWIW, to follow PostgreSQL defaults, I changed PGDG rpms to use it by
default. I think this will be the first time that we break compatibility
Ok this discussion became too complex for me. I am a simple guy with a
simple question: will my old data from 8.2.9, which does have some
date/time
Just looking for experiences of people. Are people already using 8.4
in serious live hosting environments? Thanks.
On Tue, Jun 30, 2009 at 11:21 PM, Scott Ribe scott_r...@killerbytes.com wrote:
Shut down the postmasters and rsync. (Assuming same architecture build
options...)
You mean rsync the data folder, or the entire PG folder?
Architecture may be the same (same processor) but the setup is a touch
On Tue, Jul 21, 2009 at 6:35 PM, Sam Mason s...@samason.me.uk wrote:
On Tue, Jul 21, 2009 at 09:37:04AM +0200, Daniel Verite wrote:
I'd love to fix them. But if I do a search for
SELECT * FROM xyz WHERE col like '%0x80%'
it doesn't work. How should I search for these characters?
In 8.2,
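One way to hunt for such bytes outside the database is to validate the dump file itself; a self-contained sketch (sample.txt stands in for a real pg_dump output file):

```shell
# build a stand-in dump file containing one invalid UTF-8 byte (0x80)
printf 'good line\n' > sample.txt
printf 'bad \x80 line\n' >> sample.txt
# iconv fails on the first invalid sequence; grep then lists the suspect
# lines with their line numbers, so you know what to fix in the table
if ! iconv -f UTF-8 -t UTF-8 sample.txt > /dev/null 2>&1; then
  LC_ALL=C grep -n $'[\x80-\xff]' sample.txt
fi
```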
On Tue, Jul 21, 2009 at 6:12 PM, Peter Eisentraut pete...@gmx.net wrote:
On Tuesday 21 July 2009 04:36:41 Phoenix Kiula wrote:
On Mon, Jul 20, 2009 at 2:04 PM, Peter Eisentraut pete...@gmx.net wrote:
On Monday 20 July 2009 06:45:40 Phoenix Kiula wrote:
explain analyze select * from sites where
On Mon, Jul 20, 2009 at 2:04 PM, Peter Eisentraut pete...@gmx.net wrote:
On Monday 20 July 2009 06:45:40 Phoenix Kiula wrote:
explain analyze select * from sites where user_id = 'phoenix' order by
id desc limit 10;
QUERY PLAN
0x80 makes me think of the following:
The data originate from a Windows system, where 0x80 is a Euro
sign. Somehow these were imported into PostgreSQL without the
appropriate translation into UTF-8 (how I do not know).
I wonder: why do you spend so much time complaining instead of
simply
On Sun, Jul 19, 2009 at 7:08 PM, Martijn van
Oosterhout klep...@svana.org wrote:
On Sun, Jul 19, 2009 at 10:16:17AM +0800, Phoenix Kiula wrote:
Look through the archives, there are scripts that will scan all your
text fields for UTF-8 problems. If you run them once you can clear out
all
Hi.
I have a query that should be very fast because it's using all
indexes, but it takes a lot of time.
explain analyze select * from sites where user_id = 'phoenix' order by
id desc limit 10;
QUERY PLAN
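When a query filters on one column and orders by another, separate single-column indexes often are not enough; a composite index matching both clauses (a sketch, not from the thread) lets the planner stop after the first ten entries:

```sql
-- (user_id, id) covers WHERE user_id = ... ORDER BY id DESC LIMIT 10:
-- the planner can walk the index backwards and stop after 10 rows
CREATE INDEX idx_sites_userid_id ON sites (user_id, id);
```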
On Tue, Jul 14, 2009 at 9:52 PM, Alvaro
Herrera alvhe...@commandprompt.com wrote:
Andres Freund wrote:
On Tuesday 14 July 2009 11:36:57 Jasen Betts wrote:
if you do an ASCII dump and the dump starts out SET client_encoding =
'UTF8'; or similar, but you still get errors.
Do you mean that a dump
Hi. I *always* get an error moving my current fully utf-8 database
data into a new DB.
My server has the version 8.3 with a five year old DB. Everything, all
collation, LC_LOCALE etc are all utf8.
When I install a new Postgresql 8.4 on my home Mac OSX machine (after
losing some hair) I set
Hi
We're trying PG on a new machine, so we copied our current (live)
database to that server. Tested the code and it's all working. Now, to
make that second server the main live server, we will need to copy the
db again including the new records since we copied for testing. Is
there any way to
Hi. I have a question for people who run high traffic websites.
We are considering a new dedicated server host for a set of 25
domains, about 5 of which are very high traffic (80 million clicks a
day each). A lot of this is VIEW content, but there may be a million
or so INSERTs and UPDATEs.
I am
Hi. I made a small alteration to a table (added a column).
Now when I do:
vacuum analyze TABLENAME
or
delete from TABLENAME where id = 99
Nothing happens! The carriage return means that my shell cursor goes to
the next line, but it just stays there. I thought something may be
happening
On Wed, Mar 4, 2009 at 12:10 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Phoenix Kiula phoenix.ki...@gmail.com writes:
Now when I do:
vacuum analyze TABLENAME
or
delete from TABLENAME where id = 99
Nothing happens! The carriage return means that my shell cursor goes to
the next line
commit the transaction where you altered the table. It has an open lock on
the table.
=# commit;
WARNING: there is no transaction in progress
COMMIT
Time: 0.282 ms
So no, there's nothing pending.
Although when I try this:
select pg_class.relname,pg_locks.* from pg_class,pg_locks where
pg_class.relfilenode=pg_locks.relation;
There are many rows!
How can I get rid of these open locks?
On Wed, Mar 4, 2009 at 12:36 AM, Adrian Klaver akla...@comcast.net wrote:
Are you connected to the right database?. I have been in that situation,
looking at the log for db A and doing things in db B.
Thanks. I only have one database, so yes I am connected to it.
I have the lock file in
I guess my question is, how should I remove all pending locks on a
table so that I can get on with the rest of the stuff?
I mean, even if I can now find an offending RULE on the table, I
cannot replace or remove it.
Thanks for any pointers!
On Wed, Mar 4, 2009 at 12:55 AM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
I guess my question is, how should I remove all pending locks on a
table so that I can get on with the rest of the stuff?
I mean, even if I can now find an offending RULE on the table, I
cannot replace or remove
On Wed, Mar 4, 2009 at 1:17 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Phoenix Kiula phoenix.ki...@gmail.com writes:
How can I get rid of these open locks?
Close the transactions that are holding them. Look into
pg_stat_activity and pg_prepared_xacts.
Thanks for this. But can I simply delete
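On an 8.2-era server, the sessions holding those locks usually show up as idle-in-transaction backends; a sketch using the column names of that era:

```sql
-- list backends sitting inside an open transaction (8.2 column names;
-- newer releases use pid, query, and state = 'idle in transaction')
SELECT procpid, usename, query_start, current_query
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction';
-- once identified, end the offending backend from the OS with
-- kill <procpid> (pg_terminate_backend() only arrived in 8.4)
```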
On Wed, Mar 4, 2009 at 1:23 AM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
On Wed, Mar 4, 2009 at 1:17 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Phoenix Kiula phoenix.ki...@gmail.com writes:
How can I get rid of these open locks?
Close the transactions that are holding them. Look
On Tue, Feb 3, 2009 at 3:33 PM, Alban Hertroys
dal...@solfertje.student.utwente.nl wrote:
On Feb 3, 2009, at 5:21 AM, Phoenix Kiula wrote:
On Mon, Jan 12, 2009 at 1:02 PM, Tom Lane t...@sss.pgh.pa.us wrote:
.snip
regression=# update pg_attribute set atttypmod = 35+4 where attrelid
I have a table with two fields:
user_id
col2
There is quite a pivotal SQL for our site that issues the query:
...WHERE user_id = 'xyz' and col2 = 'xyz'
Until now, I have had two indexes, one for user_id and one for col2.
Now, I built a compound index concurrently (user_id, col2).
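A sketch of that compound-index build and its likely follow-up (table and index names are placeholders):

```sql
-- CONCURRENTLY avoids blocking writes while the index builds
CREATE INDEX CONCURRENTLY idx_t_user_col2 ON t (user_id, col2);
-- the old single-column index on user_id is now likely redundant, since
-- the compound index also serves user_id-only lookups
DROP INDEX idx_t_user_id;
```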
In my conf_pg, the autovacuum is on, so the DB should be (or I hope is
being) regularly vacuumed.
These are my settings:
work_mem = 20MB
temp_buffers = 4096
authentication_timeout = 10s
ssl = off
checkpoint_warning =
On Tue, Feb 3, 2009 at 9:09 PM, Andreas Wenk
a.w...@netzmeister-st-pauli.de wrote:
Phoenix Kiula schrieb:
In my conf_pg, the autovacuum is on, so the DB should be (or I hope is
being) regularly vacuumed.
These are my settings:
work_mem
Thanks, Gregory and Simon, for the very useful posts.
I have increased the vacuum_cost_limit to 2000 for now, just to see if
that has an impact. Hopefully positive.
Next on my list is to be able to easily upgrade to 8.3, but Slony
seemed like a daunting task the last time I tried. I am on 8.2.9,
Thanks for the suggestions.
Following is the SQL query. Actually, this is not merely a DROP INDEX
question. I am also surprised that this straight index query takes
more time than it used to! It would be under 1 second because it's a
one-row conditional match, but now it takes anywhere between 5
On Wed, Feb 4, 2009 at 5:13 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Phoenix Kiula phoenix.ki...@gmail.com writes:
Index Scan using new_idx_testimonials_userid on testimonials
(cost=0.00..157.78 rows=1 width=9) (actual time=8809.715..8809.715
rows=0 loops=1)
Index Cond: ((user_id)::text
On Mon, Jan 12, 2009 at 1:02 PM, Tom Lane t...@sss.pgh.pa.us wrote:
.snip
regression=# update pg_attribute set atttypmod = 35+4 where attrelid =
't1'::regclass and attname = 'f1';
UPDATE 1
regression=# \d t1
Table public.t1
Column | Type | Modifiers
On Tue, Feb 3, 2009 at 1:30 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Phoenix Kiula phoenix.ki...@gmail.com writes:
Tom, this has worked, and a \d TABLENAME shows that the column is
varchar(35).
But I still have messages in my log saying:
ERROR: value too long for type character varying(20
On Wed, Jan 28, 2009 at 2:37 AM, Alban Hertroys
dal...@solfertje.student.utwente.nl wrote:
Does that query plan look any better without the select count(id) from
testimonials?
If so you may be better off keeping track of those counts in a separate
table updated by triggers on the
Hi. Further to my bafflement with the count(*) queries as described
in this thread:
http://archives.postgresql.org/pgsql-general/2009-01/msg00804.php
It seems that whenever this question has come up, Postgresql comes up
very short in terms of count(*) functions.
The performance is always slow,
On Thu, Jan 29, 2009 at 2:25 AM, Alban Hertroys
dal...@solfertje.student.utwente.nl wrote:
Ah I see, that's the original query and its plan again, not the one after
implementing those triggers! You had me scratching my head for a bit there,
wondering why the count() subquery was still there.
I wonder if this is an SQL limitation or something I'm missing in the
PG manual, but I need to run an update on my database (to replace the
value of a column to match a new design structure).
Due to the new business logic, the replaced value of a field may end
up being already present in the
On Mon, Jan 26, 2009 at 9:45 PM, Matthias Karlsson matth...@yacc.se wrote:
On Mon, Jan 26, 2009 at 2:09 PM, Phoenix Kiula phoenix.ki...@gmail.com
wrote:
I wonder if this is an SQL limitation or something I'm missing in the
PG manual, but I need to run an update on my database (to replace
Hope someone can shed light on this.
I have a well-performing query that uses the index. Without OFFSET etc
it returns about 11,000 records.
I only need about 20 of these records at any given time, which is why
my LIMIT and OFFSET try to pull only the 20 records or so.
My queries are fast in
On Mon, Jan 26, 2009 at 2:26 PM, Scott Marlowe scott.marl...@gmail.com wrote:
On Sun, Jan 25, 2009 at 8:41 PM, Phoenix Kiula phoenix.ki...@gmail.com
wrote:
My query is:
explain analyze SELECT
testimonials.url
,testimonials.alias
On Mon, Jan 26, 2009 at 3:04 PM, Scott Marlowe scott.marl...@gmail.com wrote:
On Sun, Jan 25, 2009 at 11:58 PM, Phoenix Kiula phoenix.ki...@gmail.com
wrote:
Hi Scott. Yes, there is an autovacuum on both the tables. Should i
additionally do a manual vacuum too?
Nah, just an analyze. what
2009/1/12 Thomas Markus t.mar...@proventis.net:
be sure to use correct data types. I suppose psql uses timestamps so
select id from users where modify_date = '2009-01-08' limit 1;
is converted to
select id from users where modify_date = '2009-01-08 00:00:00'::timestamp
limit 1;
try
When queries used to be fast and now are slow very often depends upon the
indexes. Less frequently upon the amount of memory available for cache and
the server configuration.
Did you use to have any index on that column?
Do you have server configuration files for comparison?
The config file
I am trying to resize a column on a large-ish database (with 5 million rows).
The column was 20 characters before, now I want to make it 35 characters.
Challenge is: this is the main indexed column in a busy database.
I tried looking at the ALTER TABLE commands available and there seems
nothing
On Mon, Jan 12, 2009 at 9:12 AM, Ian Barwick barw...@gmail.com wrote:
2009/1/12 Phoenix Kiula phoenix.ki...@gmail.com:
I am trying to resize a column on a large-ish database (with 5 million rows).
ALTER TABLE users ALTER COLUMN name TYPE VARCHAR(35)
HTH
Thanks! I guess I was missing
ALTER COLUMN TYPE is intended for cases where actual transformation of
the data is involved. Obviously varchar(20) to varchar(35) doesn't
really require any per-row effort, but there's no operation in the
system that handles that case. But if you're brave, you can do it
via manipulation of
Hi. I have a large table that has a modify_date field in it, which is
timestamp without time zone.
I used to be able to do queries like these in 8.2.3 --
select id from users where modify_date = '2009-01-08' limit 1;
select id from users where modify_date > '2009-01-08' limit 1;
Suddenly
I am in psql. I tried \dd but this lists pg_catalog and stuff like
that. Long list that I don't need.
I scoured through this page --
http://www.postgresql.org/docs/8.3/static/app-psql.html -- and still
did not find how to list only my own functions? A regular \d shows
only tables and views it
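One way to list only user-defined functions is to query pg_proc directly and exclude the system schemas (a sketch):

```sql
SELECT n.nspname AS schema, p.proname AS function
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY 1, 2;
```

In psql, \df with a pattern (e.g. \df public.*) gives much the same result without the catalog noise.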
I am looking to convert all my database access code into stored
procedures in PL/PGSQL.
I have googled but it looks like there's a whole variety of
information from 2003 (when PG must have been quite different) until
now--some people find stored functions slow for web based apps, others
find it
Hi.
Per this thread:
http://archives.postgresql.org/pgsql-general/2008-11/msg00608.php -- I
think I understood that the time had come for my db to have a VACUUM
FULL. (Regular autovacuum etc is working)
I know a full vacuum is slow. A simple google search had suggested so.
But I had no idea it
Thanks Scott. Could be my I/O as I am on 15k Raptor SATA drives with
RAID 1 only.
Anyway, reindexing happened fast (12 minutes) and things are now humming along.
How long should cluster take on a db that's about 5.5GB in size? Is it
worth doing on a production db? The db is running fast now
Thanks Tomas.
The table may still be bloated - the default autovacuum parameters may not
be aggressive enough for heavily modified tables.
My autovacuum settings:
autovacuum = on
autovacuum_vacuum_cost_delay = 20
vacuum_cost_delay= 20
autovacuum_naptime
On Fri, Nov 14, 2008 at 9:22 PM, Hoover, Jeffrey [EMAIL PROTECTED] wrote:
There were 2132065 unused item pointers.
Looks to me like a large update or insert failed on this table
Thanks. So what can I do? I have reindexed all indexes already!
Hi.
I had tweaked my PG 8.2.6 with the very kind help of this list a
couple years ago. It has been working fine, until recently. Not sure
if it is after the update to 8.3 or because my DB has been growing,
but the db is very slow now and the cache doesn't seem enough.
~ free -m
total used free
On Fri, Nov 14, 2008 at 3:10 AM, Scott Marlowe [EMAIL PROTECTED] wrote:
On Thu, Nov 13, 2008 at 11:59 AM, Phoenix Kiula [EMAIL PROTECTED] wrote:
Hi.
I had tweaked my PG 8.2.6 with the very kind help of this list a
couple years ago. It has been working fine, until recently. Not sure
Thanks Scott. Responses below.
(1) The culprit SELECT sql is (note that MYUSER in this example can
be an IP address) --
So, it can be, but might not be? Darn, If it was always an ip I'd
suggest changing types.
Yes, it can either be a registered USER ID or an IP address. I thought
of
On Fri, Nov 14, 2008 at 10:57 AM, Tom Lane [EMAIL PROTECTED] wrote:
Yeah, but note that the planner knows darn well that this will be an
expensive query --- 493427.14 cost units estimated to fetch 2 rows!
My interpretation is that the condition on user_id is horribly
nonselective (at least
My post at the bottom.
On 9/23/08, Craig Ringer [EMAIL PROTECTED] wrote:
Dodgy forum software. Lots of it uses an IP address as a fake username for
unregistered users, rather than doing the sensible thing and tracking both
IP address and (if defined) username.
How I'd want to do
Please forgive my attempt to help you based on a woefully insufficient
description of your problem and situation. I will not make any attempt to do
so again.
Actually it was not my problem, this is a thread started by some one
else. I use Gmail so I see the entire thread as a conversation
If you don't want to store IPs for registered users, I'd use:
user_id INTEGER,
ip cidr,
CONSTRAINT must_have_userstamp
CHECK ( user_id IS NOT NULL OR ip IS NOT NULL)
... and yes, I'd use a functional index to look it up, or even a
trigger-maintained cache of the text representation
...snip...
I'd try a functional index first. If that didn't do the job, I'd use a
trigger-maintained column _purely_ as an optimisation (ie I could drop
it and lose no data) that stored text representations of the data.
Honestly, though, I expect the functional index would be more than
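A minimal sketch of that functional-index idea (table and index names are placeholders; host() renders an inet/cidr value without its netmask):

```sql
CREATE INDEX idx_visitors_ip_text ON visitors ( host(ip) );
-- lookups must use the exact same expression to match the index:
SELECT * FROM visitors WHERE host(ip) = '192.0.2.1';
```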
My IP addresses are stored in a TEXT type field. That field can actually
contain usernames like 'joao' or 'scott' and it can contain IP
addresses
I think this is common DB design on many websites that have registered
user IDs.
My humble suggestion would be to make another column in
I think this is common DB design on many websites that have registered
user IDs.
Is it? Name one! Sounds like crappy design to me.
It might sound crappy design to you, but for websites that allow users
to do something while they are registered OR unregistered, will choose
to
Thanks Andrew.
On the server (the DB to be dumped) everything is UTF8.
On my home server (where I would like to mirror the DB), this is the output:
=# \l
List of databases
Name| Owner | Encoding
---+-+---
postgres | postgres
On 8/29/08, Andrew Sullivan [EMAIL PROTECTED] wrote:
Hrm. Well, what about the locale, as I suggested? I have no idea if
How should I check for the locale?
Thanks so much! I think we may be on to something:
On the pg_dump machine:
LANG=en_US.UTF-8
LC_CTYPE=en_US.UTF-8
LC_NUMERIC=en_US.UTF-8
LC_TIME=en_US.UTF-8
LC_COLLATE=en_US.UTF-8
LC_MONETARY=en_US.UTF-8
LC_MESSAGES=en_US.UTF-8
LC_PAPER=en_US.UTF-8
LC_NAME=en_US.UTF-8
LC_ADDRESS=en_US.UTF-8
On 8/30/08, Adrian Klaver [EMAIL PROTECTED] wrote:
Might also be useful to do SHOW ALL from within the database cluster you have
on your restore machine. Will show what choices initdb made.
test=# SHOW all;
Snip
lc_collate | en_US.UTF-8
lc_ctype
I hope someone can urgently help. I was running 8.2.3 with a lot of
pleasure and no-nonsense. Very fast and delightful database that had
me singing paeans.
But I upgraded to 8.2.9 this morning and have had a major slowdown of
the DB processes. How do I begin to test what is going wrong?
I
On 8/28/08, Phoenix Kiula [EMAIL PROTECTED] wrote:
I hope someone can urgently help. I was running 8.2.3 with a lot of
pleasure and no-nonsense. Very fast and delightful database that had
me singing paeans.
But I upgraded to 8.2.9 this morning and have had a major slowdown of
the DB
On 8/28/08, Andreas Kretschmer [EMAIL PROTECTED] wrote:
Please show us an EXPLAIN ANALYSE for this query. Btw, why
random_page_cost=2? (your other post)
The EXPLAIN ANALYZE shows that it's using an INDEX and getting one
row! So I know the SQL is right.
Could it be that the SQL queries become
On our database of about 5GB we vacuum all of our 12 tables (only one
is huge, all others have about 100,000 rows or so) every hour or so.
But we also have autovacuum enabled. Is this okay? Do the two vacuum
processes contradict each other, or add unnecessary load to the
system?
The reason we
On 8/29/08, Andrew Sullivan [EMAIL PROTECTED] wrote:
The latter should be in effect if your database encoding is UTF-8.
What's the database encoding?
My database encoding on BOTH the sides is UTF8. The dumped DB is UTF8.
The one I am importing into is also UTF8. So why is it showing me this
Hello. I have a database dump. With data and schema, which includes
all the constraints and rules. But it seems the pg_dumpall command
does not copy data in such a way that foreign keys are satisfied upon
restoring. Because tables are inter-related, importing them keeps
giving errors and eventually
On Wed, Aug 27, 2008 at 9:29 PM, Terry Lee Tucker [EMAIL PROTECTED] wrote:
We have all sorts of constraints and foreign keys and we have never had any
problem with pg_restore related to dumping such that foreign keys are
satisfied. You must have data already in the database that violates the
The recommended solution is to use a regular schema-and-data dump.
If you really have to separate schema and data, the --disable-triggers
switch might help, though you open yourself to the possibility of
loading inconsistent data that way.
Thanks Tom.
This is the dump command being used on
Thanks Bill for this recommendation:
*) In the early step dump the data only with pg_dumpall --disable-triggers -a
Dumb question. Will this kind of pg_dumpall lead to downtime, I mean
is there a database lock during this time?
Thanks!
On Wed, Aug 27, 2008 at 10:34 PM, Bill Moran
[EMAIL PROTECTED] wrote:
Dumb question. Will this kind of pg_dumpall lead to downtime, I mean
is there a database lock during this time?
No.
Thanks. But there seems to be a tangible slowdown of DB operations
during the time that pg_dump is
On Wed, Aug 27, 2008 at 10:42 PM, Tom Lane [EMAIL PROTECTED] wrote:
For pg_dumpall, you shouldn't even have created the
databases, just start from a virgin installation.
Should I have CREATEd the db at least without the table structure?
I dropped the database locally. Entirely. Gone.
Then
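The restore of a pg_dumpall script against a virgin cluster is a single psql run (a sketch; all.sql is a placeholder file name):

```shell
# pg_dumpall output contains the CREATE DATABASE statements itself,
# so connect to the default postgres database and let the script fan out
psql -U postgres -f all.sql postgres
```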