On 2013/9/17 0:02, Kevin Grittner wrote:
Possibly. As I said before, I think the symptoms might better fit a
situation where the table in need of VACUUM was a shared table and it
just happened to mention db1 because that was the database it was
scanning at the time. (Every database includes the sha
On 2013/9/16 1:31, Kevin Grittner wrote:
There's your problem. You left so little space between when
autovacuum would kick in for wraparound prevention (2 billion
transactions) and when the server prevents new transactions in
order to protect your data (2 ^ 31 - 100 transactions) that
autovacuum
On 2013/9/15 1:06, Kevin Grittner wrote:
Rural Hunter wrote:
Why in the world would you want to use VACUUM FULL in this circumstance?
the db name in the error message wrong?
I just googled around and found the solution. What's the other option?
There are two possibilities -- either you
Sure. Thanks anyway, and have a good night.
Let me lay out the whole scenario:
1. I was called by our application users because all updates were failing. So I went to check the db. Any update transaction, including a manual vacuum, was blocked by this error message:
ERROR: database is not acce
On 2013/9/14 10:25, David Johnston wrote:
Likely auto-vacuum kicked in and took care of the wraparound issue
while the system was handling your manual vacuum routines...but that
is just a theory
I don't think so. I had to use single-user mode to run the vacuum full on the other dbs.
I'm on 9.2.4 and I have several databases on the instance, say db1, db2, db3, etc. Today I got this error message on connecting to any of the databases:
ERROR: database is not accepting commands to avoid wraparound data loss in database "db1"
Suggestion?? Stop the postmaster and use a standalone
Yes, that's also an acceptable
solution.
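For anyone who ends up in the same spot, a minimal sketch of how to see which databases are closest to wraparound and to vacuum the worst one (once commands are refused, the VACUUM has to be run from the standalone backend rather than a normal session):

-- How close is each database to transaction ID wraparound?
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY age(datfrozenxid) DESC;

-- Then, connected to the worst database (or in the standalone backend):
VACUUM (FREEZE, VERBOSE);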
On 2013/6/20 3:48, Craig James wrote:
On Wed, Jun 19, 2013 at 2:35 AM, Rural
Hunter <ruralhun...@gmail.com>
wrote:
I really hate the error "permission den
On 2013/6/19 17:47, Szymon Guz wrote:
On 19 June 2013 11:35, Rural Hunter <ruralhun...@gmail.com>
wrote:
I really
hate the error "permission denied for sequence x" when
I grant on a tab
I really hate the error "permission denied for sequence x" when I grant on a table but forget to additionally grant on the related sequence to the users. Can the permissions of a table and its related sequences be merged?
Hi,
I encounter the same issue often: I granted update/insert to a user but forgot to grant it on the related sequence. It's hard to understand that a user has write access on a table but not on the necessary sequences. I think the grant on tables should cascade to related sequences. What do you th
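For the archives, a hedged sketch of the usual workaround (app_user and public are placeholder names):

-- Grant on the specific sequence behind a serial column:
GRANT USAGE, SELECT ON SEQUENCE mytable_id_seq TO app_user;

-- Or cover every sequence in the schema at once (9.0+):
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO app_user;

-- And have future sequences pick up the same grant automatically (9.0+):
ALTER DEFAULT PRIVILEGES IN SCHEMA public
  GRANT USAGE, SELECT ON SEQUENCES TO app_user;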
I'm doing the same thing. In my case, the vacuum part on the parent is very quick while analyzing takes a bit longer, since it runs through analyzes of all the child tables. You can see the behavior with "analyze verbose". Maybe the bigger part of your vacuum/analyze is the analyze, so that you are seeing thi
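A minimal way to watch this (table names are placeholders): the parent's inherited statistics are built by sampling the child tables, so each child shows up in the verbose output, while analyzing a single child touches only that child.

ANALYZE VERBOSE table_parent;
ANALYZE VERBOSE table_child_201201;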
On 2012/12/12 13:44, Sergey Konoplev wrote:
On Tue, Dec 11, 2012 at 9:39 PM, Rural Hunter wrote:
Great. It works now. Thanks a lot for your instant help!
You are welcome.
Thanks for your feedback and sorry for these bugs. I have noted down this issue with the password and planned to add .pg* and env
On 2012/12/12 13:31, Sergey Konoplev wrote:
Yes. It is a known bug and it is fixed in the next (not yet released) version.
See the attachment.
Great. It works now. Thanks a lot for your instant help!
On 2012/12/12 12:47, Sergey Konoplev wrote:
On Tue, Dec 11, 2012 at 8:30 PM, Rural Hunter wrote:
No. I was running it with another db superuser. Should it only be run by postgres?
$ echo 'SELECT 1;' | psql -q -A -t -X -U postgres -P null=""
Password for user postgres:
1
O
On 2012/12/12 12:24, Sergey Konoplev wrote:
On Tue, Dec 11, 2012 at 7:57 PM, Rural Hunter wrote:
I do have psql installed. I'm on the db server.
$ psql
psql.bin (9.1.0)
Type "help" for help.
postgres=# \q
$ ./pgcompactor -a -u
DatabaseChooserError Can not find an adapter. at
/l
On Tue, Dec 11, 2012 at 7:40 PM, Rural Hunter wrote:
I downloaded pgtoolkit-v1.0beta3-fatscripts.tar.gz and tested it. I got an error when trying this:
./pgcompactor -a -u
DatabaseChooserError Can not find an adapter. at
/loader/0x1c26f18/PgToolkit/DatabaseChooser.pm line 63.
./pgcompact
Hi Sergey,
I downloaded pgtoolkit-v1.0beta3-fatscripts.tar.gz and tested it. I got an error when trying this:
./pgcompactor -a -u
DatabaseChooserError Can not find an adapter. at
/loader/0x1c26f18/PgToolkit/DatabaseChooser.pm line 63.
./pgcompactor -d testdb -u
DatabaseChooserError Can not find
Got it. Thanks.
On 2012-11-08 23:21:33, Albe Laurenz wrote:
Rural Hunter wrote:
I'm on 9.1.3. I set autovacuum off for some tables. I noticed one thing: when I run a manual analyze on a parent table, it seems the child tables are also analyzed. Here is the analyze log:
INFO: anal
Hi,
I'm on 9.1.3. I set autovacuum off for some tables. I noticed one thing: when I run a manual analyze on a parent table, it seems the child tables are also analyzed. Here is the analyze log:
INFO: analyzing "public.table_parent"
INFO: "table_parent": scanned 0 of 0 pages, containing 0 li
or set the 'PGPORT' env variable.
On 2012/9/17 14:29, Devrim GÜNDÜZ wrote:
Hi,
On Mon, 2012-09-17 at 11:55 +0530, himanshu.joshi wrote:
I have changed the default port of postgres in the file "postgresql.conf" from 5432 to another four-digit port which was not assigned to anyone else. After th
On 2012/9/16 2:06, Bruce Momjian wrote:
On Sat, Sep 15, 2012 at 11:40:06AM +0800, Rural Hunter wrote:
The check is to make sure that once we have created all the user schema details in the new cluster, there are the same number of objects in the new and old databases.
Obviously there are a
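In case it helps anyone comparing clusters by hand, a hedged query (my own suggestion, not pg_upgrade's exact check) to count user objects per database; run it in each database of the old and new clusters and compare the numbers:

SELECT count(*)
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')
  AND n.nspname NOT LIKE 'pg_toast%';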
On 2012-09-14 22:26:16, Bruce Momjian wrote:
On Fri, Sep 14, 2012 at 01:43:30PM +0800, Rural Hunter wrote:
I am trying to test the upgrade from my 9.1.3 db to 9.2 on an Ubuntu 10.10 server. I got the error below when running the pg_upgrade command. What can I do about this?
$ /opt/PostgreSQL/9.2/bin/pg_upgrade -b
I am trying to test the upgrade from my 9.1.3 db to 9.2 on an Ubuntu 10.10 server. I got the error below when running the pg_upgrade command. What can I do about this?
$ /opt/PostgreSQL/9.2/bin/pg_upgrade -b /opt/PostgreSQL/9.1/bin -B
/opt/PostgreSQL/9.2/bin -d /raid/pgsql -D /raid/pg92data
Performing Con
Turn autovacuum off for those tables (a sketch of the per-table setting follows the quoted message below).
On 2012/9/11 4:30, David Morton wrote:
We have many large tables which contain static historical data. They are autovacuumed on a regular basis (sometimes to prevent wraparound), which I suspect causes a few annoying side effects:
- Additional WAL file genera
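A hedged sketch of the per-table setting (history_2011 is a placeholder name). Note that a forced anti-wraparound vacuum will still be launched eventually even with autovacuum disabled for the table; freezing a static table once means that pass only rescans and finds nothing left to rewrite:

-- Disable autovacuum for one table and its TOAST table:
ALTER TABLE history_2011 SET (autovacuum_enabled = false, toast.autovacuum_enabled = false);

-- Freeze a table that will never change again:
VACUUM (FREEZE, ANALYZE) history_2011;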
Thanks!
On 2012/9/7 20:20, Sergey Konoplev wrote:
On Fri, Sep 7, 2012 at 9:44 AM, Rural Hunter wrote:
base_20120902.tar.gz
27781958.tar.gz
27781959.tar.gz
Now I want to restore it on another server with only one disk. I'm confused about how to handle those tablespace files. Is there a guideline o
Hi,
I have a database with several tablespaces on different disks and backed it up with pg_basebackup. I have these files:
base_20120902.tar.gz
27781958.tar.gz
27781959.tar.gz
Now I want to restore it on another server with only one disk. I'm confused about how to handle those tablespace files. Is
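In case it helps others with the same layout: the numbered tar files are named after tablespace OIDs, so they can be matched up before restoring (pg_tablespace_location() is 9.2+; on 9.1 the path is in the spclocation column of pg_tablespace instead):

SELECT oid, spcname, pg_tablespace_location(oid) AS location
FROM pg_tablespace;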
v9.1.3 on an Ubuntu 10.04 server. I have one table which has frequent inserts and a weekly deletion, no updates. Recently, I found the auto vacuum on its toast table almost running all the time; each run took around 1 hour. I don't think there are so many inserts that it reaches the auto vacuu
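A hedged way to see what autovacuum is actually chewing on there ('mytable' is a placeholder); note that the weekly bulk delete also creates dead tuples in the TOAST table, which is what autovacuum then has to clean up:

SELECT s.relname, s.n_tup_ins, s.n_tup_del, s.n_dead_tup, s.last_autovacuum
FROM pg_stat_all_tables s
WHERE s.relname = 'mytable'
   OR s.relid = (SELECT reltoastrelid FROM pg_class WHERE oid = 'mytable'::regclass);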
On 2012/4/17 18:06, Albe Laurenz wrote:
Rural Hunter wrote:
That's probably the problem - it seems to emit something that is
not proper UTF-8 sometimes.
Do you get the error if you try Chinese settings without nlpbamboo?
How can I do this? The Chinese processing is provided by nlpbamboo.
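If it helps, a hedged way to narrow it down is to run the same text through a built-in configuration. If 'simple' succeeds but the Chinese configuration fails, the invalid bytes are being produced by the nlpbamboo parser/dictionary rather than coming from the stored data:

select to_tsvector('simple', content) from tmp_article;
select to_tsvector('chinesecfg', content) from tmp_article;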
On 2012/4/16 21:34, Albe Laurenz wrote:
Please
released.
On 2012/4/16 16:31, Albe Laurenz wrote:
Rural Hunter wrote:
My db is in UTF-8. I have a row in my table, say tmp_article, and I wanted to generate a tsvector from the article content:
select to_tsvector(content) from tmp_article;
But I got this error:
ERROR: invalid byte sequence for e
, Apr 14, 2012 at 9:31 AM, Rural
Hunter <ruralhun...@gmail.com>
wrote:
doesn't work either.
db=# show client_encoding;
 client_encoding
-----------------
 UTF8
(1 row)
db=# select to_tsvector(content) from tmp_article;
ERROR: invalid byte sequence for encoding "UTF8": 0xf481
On 2012/4/14 10:15, raghu ram wrote:
2012/4/14 Rural Hunter <ruralhun...@gmail.com>
My db is in utf-8, I have a
My db is in UTF-8. I have a row in my table, say tmp_article, and I wanted to generate a tsvector from the article content:
select to_tsvector(content) from tmp_article;
But I got this error:
ERROR: invalid byte sequence for encoding "UTF8": 0xf481
I am wondering how this could happen. I think if
I'm trying to set up a standby server. Both the primary and standby servers are on the latest version, 9.1.3, on Ubuntu server 10.10. So far I have tried to initialize the setup twice, but both attempts failed after the replication had been running for some time. What can I do to fix this? The log on the standby is shown below:
't mention triggers.
On 2012-03-12 22:58:06, Kevin Grittner wrote:
Rural Hunter wrote:
is it worth mentioning in the doc?
Do you have a suggestion for what language you would have found
helpful, or which section(s) of the docs should be modified?
-Kevin
Yes, I understand that. But is it worth mentioning in the doc?
On 2012-03-12 22:11:16, Kevin Grittner wrote:
Rural Hunter wrote:
"triggers are not shared between parent and child tables". is it
true?
Yes.
You can use the same trigger *function* for more than one trigger
thoug
I implemented table partitioning recently and found that the triggers on the parent table are not working any more, except the before insert trigger which redirects data to the child tables. The after insert/update triggers are not working now. I understand the after insert trigger may not be working since
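For what it's worth, a minimal sketch of the workaround: one shared trigger function, but a separate trigger created on every child table, since triggers are not inherited (all names here are placeholders):

CREATE OR REPLACE FUNCTION audit_after_change() RETURNS trigger AS $$
BEGIN
    -- the after insert/update logic goes here
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_audit AFTER INSERT OR UPDATE ON table_child_201201
    FOR EACH ROW EXECUTE PROCEDURE audit_after_change();
CREATE TRIGGER trg_audit AFTER INSERT OR UPDATE ON table_child_201202
    FOR EACH ROW EXECUTE PROCEDURE audit_after_change();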
Thanks. Temporarily not having the indexes should be fine for me.
On 2012-03-05 23:53:52, Nicholson, Brad (Toronto, ON, CA) wrote:
-Original Message-
From: pgsql-admin-ow...@postgresql.org [mailto:pgsql-admin-
ow...@postgresql.org] On Behalf Of Rural Hunter
Sent: Monday, March 05, 2012 8:28 AM
I want to separate my indexes into a tablespace which resides on a RAID0 disk for fast r/w access. Is it safe? What I mean is: if the RAID0 disk is damaged, is it easy to recover the indexes by just recreating them?
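A hedged sketch of what that looks like (paths and names are placeholders). Indexes hold no primary data, so once the tablespace location is available again after a disk replacement, they can be rebuilt:

-- The directory must exist and be owned by the postgres user:
CREATE TABLESPACE fast_idx LOCATION '/raid0/pg_indexes';
CREATE INDEX idx_article_title ON article (title) TABLESPACE fast_idx;
-- or move an existing index:
ALTER INDEX idx_article_title SET TABLESPACE fast_idx;
-- after replacing the disk and recreating the directory, rebuild:
REINDEX INDEX idx_article_title;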
My db is in UTF-8 encoding. When I tried to generate a GiST index for some rows I got this strange error:
select to_tsvector('chinesecfg',content) from article where
article_id=23834922;
ERROR: invalid byte sequence for encoding "UTF8": 0xf480
However, "select content from article where article_id
.
On 2011-12-22 0:09:17, Kevin Grittner wrote:
Rural Hunter wrote:
I still have this question:
The same statements A, B, C, D update the same row. The start order is A->B->C->D. From what I've gotten, B/C/D got the lock before A. Why did that happen?
Did you do anything to prevent it from happe
Well, thanks. I checked the application and found there was a bug causing many queries to update the same row. However, those updates are just single statements; no multi-statement transaction is involved. So I still have this question: the same statements A, B, C, D update the same row. The start order is
Yes, it's truncated. The full SQL is like this:
"update article set tm_update=$1,rply_cnt=$2,read_cnt=$3,tm_last_rply=$4 where title_hash=$5"
The title_hash is unique.
I dug into another case and found something interesting: it's actually waiting for a lock of type transactionid. I ran the qu
I'm seeing a connection hang issue these days. Many concurrent connections are hanging on the db. They basically do the same thing: update different rows in the same table. The SQL itself should run very fast as it's updating just one row based on a unique key. I thought it might b
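A hedged sketch of the kind of query that surfaces these waits (9.1-era column names; pg_stat_activity's procpid/current_query became pid/query in 9.2):

SELECT a.procpid, a.waiting, l.locktype, l.transactionid, a.current_query
FROM pg_stat_activity a
JOIN pg_locks l ON l.pid = a.procpid
WHERE NOT l.granted;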
I noticed there were some statistics for triggers in the pgAdmin III GUI, such as total execution count and execution time. But they are all empty. Is there any way to enable those statistics?
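A hedged guess at where those numbers come from: the per-function call statistics, which are only collected when track_functions is enabled, so with it off the trigger functions never show any counts:

-- superuser session, or set it in postgresql.conf and reload:
SET track_functions = 'pl';
-- afterwards, call counts and timings appear here:
SELECT funcname, calls, total_time, self_time FROM pg_stat_user_functions;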
Well, is pgsql capable of parsing shell variables in postgresql.conf? Anyway, you should check your master log. If it cannot archive the WAL, there will be errors reported in it.
On 2011-12-16 2:02:03, Khusro Jaleel wrote:
Hello, I'm trying out a simple example from the Postgresql 9
Administration Cook
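For reference, a minimal archive_command example (the archive path is a placeholder); the %p and %f placeholders are substituted by the server itself, so no shell-variable expansion inside postgresql.conf is needed:

# postgresql.conf on the primary; %p = path of the WAL segment, %f = its file name
archive_mode = on
archive_command = 'test ! -f /mnt/archive/%f && cp %p /mnt/archive/%f'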
Hi,
I'm trying to set up a hot standby server with both primary and standby at the latest version, 9.1.2. I used pg_basebackup to create the backup and there was no problem in the log:
NOTICE: pg_stop_backup complete, all required WAL segments have been
archived
-rw-r--r-- 1 postgres postgres
I have the same confusion...
On 2011/11/30 2:34, Rob Richardson wrote:
Very naïve question here: Why would you want to save the data from the first
insert?
I thought the purpose of a transaction was to make sure that all steps in the
transaction executed, or none of them executed. If Oracle sav
rg/docs/current/static/functions.html
--
Ian.
On Tue, Sep 27, 2011 at 10:44 AM, Rural Hunter wrote:
Great, thanks. Is there any other system function besides string functions?
On 2011-09-27 17:37:29, Thomas Kellerer wrote:
Rural Hunter, 27.09.2011 11:00:
I am looking for something like a string h
Great, thanks. Is there any other system function besides string functions?
On 2011-09-27 17:37:29, Thomas Kellerer wrote:
Rural Hunter, 27.09.2011 11:00:
I am looking for something like a string hash function to order a string column randomly.
It's all in the manual ;)
I am looking for something like a string hash function to order a string column randomly.
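For the archives, a couple of ways to do it with built-in functions (articles/title are placeholder names; md5() is documented, hashtext() is an internal function that may change between versions):

SELECT * FROM articles ORDER BY md5(title);
SELECT * FROM articles ORDER BY hashtext(title);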
Got it. Thanks.
On 2011-09-21 1:31:09, Kevin Grittner wrote:
Rural Hunter wrote:
I set up a warm standby server which fetches WAL logs from a remote server. It has been working very well. But today, I was editing the restore script (which is set as the restore_command) with vi. Just right after I
Hi,
I set up a warm standby server which fetches WAL logs from a remote server. It has been working very well. But today, I was editing the restore script (which is set as the restore_command) with vi. Just right after I saved the script, I noticed that the standby server suddenly terminated. I che
not needed any more in the final release. So my problem
is resolved. Thank you all.
On 2011/9/18 23:25, Peter Eisentraut wrote:
On sön, 2011-09-18 at 22:41 +0800, Rural Hunter wrote:
I didn't install anything else there:
(for a very small value of "anything else")
It looks like you
.1
drwxr-xr-x 4 root daemon 4096 2011-08-25 11:20 postgresql
On 2011-09-18 21:58:17, Peter Eisentraut wrote:
On sön, 2011-09-18 at 14:56 +0800, Rural Hunter wrote:
This is my env:
postgres@backup:~$ env
MANPATH=:/opt/PostgreSQL/9.1/share/man
SHELL=/bin/bash
TERM=linux
USER=postgres
LD_LIBRARY_PATH=/
Yes, I tried that too and the same thing happened.
On 2011-09-18 17:09:25, Richard Shaw wrote:
Tried adding the path to ldconfig rather than redefining ld_library_path?
On 18 Sep 2011, at 09:03, Scott Marlowe wrote:
Wow, you got me.
On Sun, Sep 18, 2011 at 12:56 AM, Rural Hunter wrote:
This is my env
make top/vi work, I just need to unset LD_LIBRARY_PATH.
On 2011-09-18 13:31:45, Scott Marlowe wrote:
2011/9/17 Rural Hunter:
I installed pgsql 9.1 on an Ubuntu server (10.10). Now I have a problem with the required env variable LD_LIBRARY_PATH. If I do not set this, I cannot use some pg tools such as
I installed pgsql 9.1 on an Ubuntu server (10.10). Now I have a problem with the required env variable LD_LIBRARY_PATH. If I do not set this, I cannot use some pg tools such as pg_dump, createuser, etc. If I set this variable, my terminal gets the warning "unknown terminal type" for some Linux commands s
OK, thank you.
On 2011-09-11 1:30:48, Guillaume Lelarge wrote:
On Sun, 2011-09-11 at 01:19 +0800, Rural Hunter wrote:
I'm making a base backup with 9.1rc by following 24.3.3 in manual:
http://www.postgresql.org/docs/9.1/static/continuous-archiving.html
1. SELECT pg_start_backup('label
I'm making a base backup with 9.1rc by following section 24.3.3 in the manual:
http://www.postgresql.org/docs/9.1/static/continuous-archiving.html
1. SELECT pg_start_backup('label');
2. perform file system backup with tar
3. SELECT pg_stop_backup();
But when I was performing step 2, I got a warning from tar c
byte sequence for encoding
"UTF8": 0xe68e27
2011-09-01 11:27:01 CST ERROR: invalid byte sequence for encoding
"UTF8": 0xe7272c
2011-09-01 11:27:06 CST ERROR: invalid byte sequence for encoding
"UTF8": 0xe5272c
2011-09-01 11:27:15 CST ERROR: invalid byte sequence for
Marlowe wrote:
2011/8/29 Rural Hunter:
Hi all,
I'm a newbie here. I'm trying to test pgsql with my mysql data. If the
performance is good, I will migrate from mysql to pgsql.
I installed pgsql 9.1rc on my Ubuntu server. I'm trying to import a large
sql file dumped from mysql into p
6, Kevin Grittner wrote:
Rural Hunter wrote:
2011/8/29 23:18, Kevin Grittner:
I also recommend a VACUUM FREEZE ANALYZE on the database unless
most of these rows will be deleted or updated before you run a
billion database transactions. Otherwise you will get a painful
"anti-wraparound" auto
Hi Kevin,
Thank you very much for the quick and detailed answers/suggestions. I
will check and try them.
On 2011/8/29 23:18, Kevin Grittner wrote:
Good (but don't forget to change that once the bulk load is done). You
should probably also turn off full_page_writes and synchronous_commit.
I've se
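A hedged sketch of what that advice looks like in postgresql.conf for a one-off bulk load (values are only illustrative, and everything should be reverted once the load and the final VACUUM FREEZE ANALYZE are done):

synchronous_commit = off
full_page_writes = off        # only acceptable if the whole load can simply be redone after a crash
checkpoint_segments = 64      # fewer, larger checkpoints during the load (9.1-era setting)
maintenance_work_mem = 1GB    # faster index builds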
.
On 2011/8/29 21:44, Kevin Grittner wrote:
Rural Hunter wrote:
it's a problem of migrating vast data from mysql to pgsql.
I don't know how helpful you'll find this, but when we migrated to
PostgreSQL a few years ago, we had good luck with using Java to
stream from one database t
Well, thank you for the quick reply, but actually I'm not concerned about the performance as of now. My problem is related to the bulk insert with the client-side program psql. Or rather, it's a problem of migrating vast data from mysql to pgsql.
On 2011-08-29 21:25:29, Julio Leyva wrote:
check this out
http://www.re
Hi all,
I'm a newbie here. I'm trying to test pgsql with my mysql data. If the performance is good, I will migrate from mysql to pgsql.
I installed pgsql 9.1rc on my Ubuntu server. I'm trying to import a large sql file dumped from mysql into pgsql with 'psql -f'. The file is around 30G with bu