On Thu, Oct 10, 2013 at 8:09 AM, Thara Vadakkeveedu wrote:
>
>
>
> [root@perf277 ~]# yum install postgresql92-contrib.x86_64
> Loaded plugins: product-id, security, subscription-manager
> This system is not registered to Red Hat Subscription Management. You can use
> subscription-manager to regis
-admin@postgresql.org
Subject: RE: [ADMIN] convert from latin1 to utf8
I needed both UTF8 and Latin-1. I accomplished this by initdb with the LOCALE
set to C. That lets me create dbs "with template0 encoding='Latin-1'" as well
as "encoding=UTF8," FWIW...
Hey,
There are a couple of free tools to convert MySQL dumps to PostgreSQL
dumps. These do not handle functions and stored procedures, though; those
will need to be converted manually. For the rest, you can have a look at:
https://github.com/ahammond/mysql2pgsql (Perl)
http://rubygems.org/gems/my
Original message
From: Marc Fromm
Date: 10/10/2013 5:39 PM (GMT-06:00)
To: pgsql-admin@po
Thursday, October 10, 2013 1:55 PM
To: Marc Fromm; pgsql-admin@postgresql.org
Subject: Re: [ADMIN] move dbs from 8.1 to 8.4
On 10/10/2013 01:17 PM, Marc Fromm wrote:
I built a new server running centos 6.4 and postgresql 8.4. I backed up all the
databases from the old server running fedora and
I am using 8.4 just because it's what gets installed with CentOS6.4.
From: Steve Crawford [mailto:scrawf...@pinpointresearch.com]
Sent: Thursday, October 10, 2013 1:55 PM
To: Marc Fromm; pgsql-admin@postgresql.org
Subject: Re: [ADMIN] move dbs from 8.1 to 8.4
On 10/10/2013 01:17 PM, Marc
On 10/10/2013 01:17 PM, Marc Fromm wrote:
I built a new server running centos 6.4 and postgresql 8.4. I backed
up all the databases from the old server running fedora and postgresql
8.1 using this script.
#!/bin/bash
# Backup all Postgresql databases

# Location of the backup logfi
I also realized that during the restore any database with latin1 encoding is
not created at all
psql:pgdbs:167: ERROR: encoding LATIN1 does not match locale en_US.UTF-8
DETAIL: The chosen LC_CTYPE setting requires encoding UTF8.
About half my databases are latin1.
My problem is now two fold, h
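One hedged way around this, since the cluster locale is en_US.UTF-8, is to give
the Latin-1 databases a locale that is compatible with any encoding (a sketch;
the database name is illustrative):

CREATE DATABASE mydb_latin1
  TEMPLATE template0
  ENCODING 'LATIN1'
  LC_COLLATE 'C'
  LC_CTYPE 'C';

Per-database LC_COLLATE/LC_CTYPE requires template0 and works on 8.4 and later.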
Can you use '-v' with pg_dumpall and output to the log file? That might
help. Likely something is not right with the individual pg_dump processes
that work inside the pg_dumpall.
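For example, a sketch of capturing that output (paths are illustrative);
pg_dumpall sends its verbose messages to stderr:

pg_dumpall -v > /path/to/all_dbs.sql 2> /path/to/pg_dumpall.log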
Payal Singh,
OmniTi Computer Consulting Inc.
Junior Database Architect,
Phone: 240.646.0770 x 253
On Thu, Oct 10, 201
It sounds like you are missing the contrib code.
If you built from source, you need to go to the contrib directory and run make,
then make install.
If you installed from a package, you need to install the appropriate contrib
package.
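For example (version numbers and package names depend on your installation):

# source build
cd /path/to/postgresql-source/contrib
make
make install

# packaged install on an RPM-based system
yum install postgresql92-contrib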
reiner
sent by smoke signals at great danger to my self.
On 1
Thanks for your reply.
---> You should update to a more current minor release.
I recently updated to 9.1.5. I will do that in the near future.
--> You need to provide more information on how you backed up, how you
are restoring, and what the bottleneck seems to be.
I am using pg_restore for restoring
Thanks for your reply.
---> You should update to 9.1.9.
I recently updated to 9.1.5. I will do that in the near future.
--> How are you doing this restore? Is it from a dump? Are you using or
could you use custom format?
I am using pg_restore to restore the backup, which was created using pg_dump.
-
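If the backup can be re-taken, a custom-format dump restored with parallel jobs
is usually much faster than a plain SQL restore (a sketch; names and job count
are illustrative):

pg_dump -Fc -f mydb.dump mydb
pg_restore -j 4 -d mydb mydb.dump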
Hi,
I ran the following command on my server:
yum search postgresql
this listed postgresql92-contrib.x86_64 package.
I tried to do a yum install of this package but that threw an error that it
requires libossp-uuid.so.16()(64bit).
[root@perf277 ~]# yum install postgresql92-contrib.x8
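For what it's worth, that library usually comes from the OSSP uuid package (the
exact package name can vary by repository), so something like the following may
resolve the dependency:

yum install uuid
yum install postgresql92-contrib.x86_64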
"Does the server log say anything about broken
connections or client not responding?"
Nope, no errors in server log, just high I/O and no free slots remaining.
We are thinking of adding more RAM to the server, which should speed up the
queries..
--
Best regards,
Viktor
Viktor Juhanson wrote:
> Btw we have the max pool size of web application 50 connections
> and since we have 4 instances of application running it makes
> max 200.
>
> I don't really get how the database pool gets full when
> application can use 200 connections max and postgresql config has
> se
Hello,
It's me again..
Inspecting log_connections didn't help find the cause; the connections come
from our app servers as usual.
Btw, the web application's max pool size is 50 connections, and since we have
4 instances of the application running, that makes 200 max.
I don't really g
On Tue, Oct 8, 2013 at 2:43 AM, Venakata Ramana wrote:
> Thanks for your reply.
>
>
> ---> You should update to 9.1.9.
>
> Recently I updated to 9.1.5. I will do in near future.
>
>
Recently??? 9.1.5 was released more than one year ago... =/
>
> --> How are you doing this restore? Is it from a du
m: pgsql-admin-ow...@postgresql.org
> [mailto:pgsql-admin-ow...@postgresql.org] On Behalf Of David Johnston
> Sent: 03 October 2013 19:44
> To: pgsql-admin@postgresql.org
> Subject: Re: [ADMIN] postgres connections in IDLE state..
>
> Rajagopalan, Jayashree wrote
> > I
Venakata Ramana wrote:
> I am using postgresql 9.1.5. on windows Xp.
You should update to a more current minor release.
http://www.postgresql.org/support/versioning/
> 1. Restore of DB is very slow.
> How to improve the speed of Restore?
You need to provide more information on how you bac
On Mon, Oct 7, 2013 at 9:03 AM, Venakata Ramana wrote:
> Hi,
>
> I am using postgresql 9.1.5. on windows Xp.
>
>
You should update to 9.1.9.
I am facing two problems:
>
> 1. Restore of DB is very slow.
> How to improve the speed of Restore?
>
>
How are you doing this restore? Is it from a du
We set the application_name to some unique id, such as the client process ID
or processid#threadid, when the client starts a new session.
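A minimal sketch of that approach (the id format and names are just examples):

-- in the client's connection string: host=db dbname=app application_name=12345#7
-- or immediately after connecting:
SET application_name = '12345#7';

-- then, on the server (pre-9.2 column names, matching procpid above):
SELECT procpid, application_name, client_addr, query_start
FROM pg_stat_activity;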
jov
On 2013-10-7 10:39 AM, "Rajagopalan, Jayashree" wrote:
> Hi:
>
>
> How to correlate the procpid in pg_stat activity table to any application
> process? I nee
On Sun, Oct 6, 2013 at 7:38 PM, Rajagopalan, Jayashree
wrote:
> How to correlate the procpid in pg_stat activity table to any application
> process? I need to track down some connections to the queries/application
> threads. Please help!!
Note down the client_port from pg_stat_activity and run:
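The command itself is cut off above; a sketch of one common approach (not
necessarily the original poster's exact command):

SELECT procpid, client_addr, client_port FROM pg_stat_activity;

and then, on the client host, find the process that owns that source port, e.g.:

$ netstat -ntp | grep :54321    (54321 being the client_port you noted)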
First thought is make sure you are looking in the right database.
pg_locks shows data over the entire server, not just the connected database.
pg_class exists for each database so if you are connected to 'my_database' but
the object is in 'your_database', it won't show up in pg_class on 'my_data
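For example, a sketch of resolving lock targets within the connected database:

SELECT l.pid, l.mode, l.granted, l.relation::regclass AS relname
FROM pg_locks l
WHERE l.relation IS NOT NULL;

A relation belonging to another database will not resolve to a name here (the
cast only consults the current database's pg_class), which is exactly the point
above.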
Update - someone unleashed a 'cleanup script' yesterday via puppet to
multiple hosts and greedily deleted files that had not been modified in 15
days. This is the most likely culprit so mystery basically solved.
Thankfully this is in QA, whew! It would be interesting to still know if
there are w
Alejandro Brust wrote:
> You could try something like "SET zero_damaged_pages = on" and perform a
> vacuumdb and maybe pg_dump
I don't think this is a good idea. It might cause data loss. In any
case it's unlikely to fix the reported problem.
--
Álvaro Herrera    http://www.2nd
On 04/10/2013 14:10, Mike Broers wrote:
> Strange, this is happening in a totally different environment now too.
> The only thing these two environments share is a SAN, but I wouldn't
> think something going on at the SAN level would make files disappear.
> Any suggestions are greatly apprecia
Strange, this is happening in a totally different environment now too. The
only thing these two environments share is a SAN, but I wouldn't think
something going on at the SAN level would make files disappear. Any
suggestions are greatly appreciated.
On Fri, Oct 4, 2013 at 9:40 AM, Mike Broers
On 04/10/2013 11:40, Mike Broers wrote:
> Hello, our postgresql 9.2.4 qa database (thankfully its just qa) seems
> to be hosed.
>
> Starting at around 3:39am last night I started seeing errors about
> missing files and now I cannot run a pgdump or a vacuum without it
> complaining about files
Radovan Jablonovsky wrote:
> When postgresql system tables, (tables in pg_catalog) reach size of 10mil
> rows, it will slow down some
> DDL operations, which insert, update or delete data from pg_* tables. Is it
> possible to partition the
> system tables or is there some other way to improve per
Johnston
Sent: 03 October 2013 19:44
To: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] postgres connections in IDLE state..
Rajagopalan, Jayashree wrote
> I'm seeing intermittently - the DB connections getting stale - and not
> getting returned to the Hibernate session pool.
Rajagopalan, Jayashree wrote
> I'm seeing intermittently - the DB connections getting stale - and not
> getting returned to the Hibernate session pool. Some of the connections
> are as old as 9 days.
The whole point of a connection pool is to keep open connections to the
database. These connectio
On Wed, Oct 2, 2013 at 3:20 AM, Nikolay Morozov wrote:
> Can I use database Master-Slave replication on a cluster with 50-80 nodes?
> Database is small (configuration), changed rarely. Are there any problems
> with such a node count?
> One master other slaves.
Yes, you can... just set max_wal_senders
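A sketch of the relevant primary-side settings (values are illustrative for
~80 standbys):

# postgresql.conf on the master
wal_level = hot_standby
max_wal_senders = 100        # at least one per directly attached standby
wal_keep_segments = 128      # or ship WAL with archive_command instead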
On Wed, Oct 2, 2013 at 1:20 AM, Nikolay Morozov wrote:
> Can I use database Master-Slave replication on a cluster with 50-80 nodes?
> Database is small (configuration), changed rarely. Are there any problems
> with such a node count?
> One master other slaves.
What do you need so many slaves for?
--
On Tue, 01 Oct 2013 08:17:37 -0600, Andrew Swartzendruber wrote:
> Would it be possible to prevent the system from forgetting the stored
> password when pgAdmin fails to connect in a future release or have an
> option that would prevent forgetting of passwords?
>
> I use pgAdmin 1.16 or 1.18 to c
Radovan Jablonovsky wrote
> Hello,
>
> When postgresql system tables, (tables in pg_catalog) reach size of 10mil
> rows, it will slow down some DDL operations, which insert, update or
> delete
> data from pg_* tables. Is it possible to partition the system tables or is
> there some other way to im
Laurenz [laurenz.a...@wien.gv.at]
Sent: Wednesday, October 02, 2013 3:58 AM
To: Bhanu Murthy; pgsql-admin@postgresql.org; pgsql-...@postgresql.org
Subject: Re: [ADMIN] DB link from postgres to Oracle; how to query
Dbname.tablename?
Bhanu Murthy wrote:
> Using Oracle Heterogeneous Services (Oracle
Igor Neyman wrote:
>> Our Java application uses c3p0 connection pooler and we don't
>> think that it's the issue.
>
> "Client-side" connection pooling is different from server-side
> (such as PgBouncer), and I believe is not as effective as
> PgBouncer.
In my experience a good client-side pooler
I tried it out. It did not make any difference.
On 01.10.2013 23:30, Alejandro Brust wrote:
Did you perform any vacuumdb / reindexdb before the pg_dump?
On 01/10/2013 09:49, Magnus Hagander wrote:
On Tue, Oct 1, 2013 at 11:07 AM, Sergey Klochkov wrote:
Hello All,
While trying to backup a
> -Original Message-
> From: pgsql-admin-ow...@postgresql.org [mailto:pgsql-admin-
> ow...@postgresql.org] On Behalf Of Viktor
> Sent: Wednesday, October 02, 2013 4:12 AM
> To: pgsql-admin@postgresql.org
> Subject: Re: [ADMIN] Random server overload
>
> On 10/1/2
On 10/1/2013 4:45 PM, Igor Neyman wrote:
Did you try using any kind of connection pooler, e.g. PgBouncer?
Should help.
Our Java application uses c3p0 connection pooler and we don't think that
it's the issue.
On 10/1/2013 6:15 PM, Albe Laurenz wrote:
Looks like something tries to open lots
Bhanu Murthy wrote:
> Using Oracle Heterogeneous Services (Oracle HS) I have configured/created a
> DB link from Postgres 9.3
> database into Oracle 11gR3 database (with postgres DB user credentials).
>
> SQL> create public database link pg_link connect to "postgres" identified by
> "blahblah"
preceding 3 lines from PG_LINK;*
>
> I tried dbname.tablename syntax, but it didn't work! BTW, all my tables
> belong to public schema.
>
> Does anyone with DB link expertise try to answer my question?
>
> Thanks,
> Bhanu M. Gandikota
> Mobile: (415) 420-7740
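For what it's worth, the usual Oracle-side syntax for querying through such a
link qualifies the schema rather than the database, with quoted identifiers
since PostgreSQL names are lower case (a sketch, assuming the table lives in
the public schema; not a confirmed fix):

SQL> SELECT * FROM "tablename"@pg_link;
SQL> SELECT * FROM "public"."tablename"@pg_link;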
Did you perform any vacuumdb / reindexdb before the pg_dump?
On 01/10/2013 09:49, Magnus Hagander wrote:
> On Tue, Oct 1, 2013 at 11:07 AM, Sergey Klochkov wrote:
>> Hello All,
>>
>> While trying to backup a database of relatively modest size (160 Gb) I ran
>> into the following issue:
>>
>> W
Viktor wrote:
> We are experiencing database random overloads caused by IDLE processes.
> Their count jumps from normal ~70 connections to 250-300 with high I/O
> (30-40% wa, when normal ~ 1 % wa).
>
> The overload isn't long and lasts about 5 -10 minutes just a couple of
> times during the month.
> -Original Message-
> From: pgsql-admin-ow...@postgresql.org [mailto:pgsql-admin-
> ow...@postgresql.org] On Behalf Of Viktor
> Sent: Tuesday, October 01, 2013 9:19 AM
> To: pgsql-admin@postgresql.org
> Subject: [ADMIN] Random server overload
>
> Hello,
>
> We are experiencing database r
On Tue, Oct 1, 2013 at 11:07 AM, Sergey Klochkov wrote:
> Hello All,
>
> While trying to backup a database of relatively modest size (160 Gb) I ran
> into the following issue:
>
> When I run
> $ pg_dump -f /path/to/mydb.dmp -C -Z 9 mydb
>
> File /path/to/mydb.dmp does not appear (yes, I've checked
On Tue, Oct 1, 2013 at 4:01 AM, Giuseppe Broccolo <
giuseppe.brocc...@2ndquadrant.it> wrote:
> Maybe you can improve your database performance by changing some parameters appropriately:
>
> max_connections = 500 # (change requires restart)
>>
> Set it to 100, the highest value supported by PostgreS
No, it did not make any difference. And after looking through pg_dump.c
and pg_dump_sort.c, I cannot tell how it possibly could. See the
stacktrace that I've sent to the list.
Thanks.
On 01.10.2013 15:01, Giuseppe Broccolo wrote:
Maybe you can improve your database performance by changing some parameters p
Maybe you can improve your database performance by changing some parameters appropriately:
PostgreSQL configuration:
listen_addresses = '*' # what IP address(es) to listen on;
port = 5432 # (change requires restart)
max_connections = 500 # (change requires re
Stack trace:
Thread 1 (Thread 0x7ff72c4c97c0 (LWP 13086)):
#0  removeHeapElement (objs=0x1a0c90630, numObjs=<optimized out>,
    preBoundaryId=<optimized out>, postBoundaryId=<optimized out>) at pg_dump_sort.c:502
#1  TopoSort (objs=0x1a0c90630, numObjs=<optimized out>,
    preBoundaryId=<optimized out>, postBoundaryId=<optimized out>) at pg_dump_sort.c:415
#2  sortDumpableObjects (
I've upgraded to 9.2.4. The problem still persists. It consumed 10 Gb of
RAM in 5 minutes and still grows. The dump file did not appear.
On 01.10.2013 14:04, Jov wrote:
Try updating to the latest release; I see there is a bug fix for pg_dump
running out of memory in 9.2.2, from the release notes
http://ww
Try updating to the latest release; I see there is a bug fix for pg_dump running
out of memory in 9.2.2, from the release notes
http://www.postgresql.org/docs/devel/static/release-9-2-2.html:
-
Work around unportable behavior of malloc(0) and realloc(NULL, 0) (Tom
Lane)
On platforms where these
2013/9/26 Thara Vadakkeveedu :
> Hi,
> When you say preinstalled with the system, you mean preinstalled with RedHat
> Linux?
>
> I seem to have the right version ... I had to use the full path to identify
> the version.
>
> -bash-4.1$ /usr/pgsql-9.2/bin/pg_dump --version
> pg_dump (PostgreSQL) 9.2.
older version?
Thanks!
Thara.
From: Craig James
To: Thara Vadakkeveedu
Sent: Wednesday, September 25, 2013 5:32 PM
Subject: Re: [ADMIN] pd_dump server mismatch error
On Wed, Sep 25, 2013 at 2:15 PM, Thara Vadakkeveedu wrote:
Hi
>I did not inst
Thara Vadakkeveedu wrote:
> I need to create a database and a user and make this new user the owner of
> this new database.
>
> Since I cannot access postgres db from pgadmin client on my desktop,
>
>
> I tried to do the same from the command line on the linux db server:
>
> su - postgres
> -bas
y the result like the 9.2 one
-- Original --
From: "Thara Vadakkeveedu";;
Date: Thu, Sep 26, 2013 05:15 AM
To: "alejand...@pasteleros.org.ar";
"pgsql-admin@postgresql.org";
Subject: Re: [ADMIN] pd_dump server mismatch error
?
Thanks,
TG
From: Alejandro Brust
To: pgsql-admin@postgresql.org
Sent: Wednesday, September 25, 2013 3:47 PM
Subject: Re: [ADMIN] pd_dump server mismatch error
Hello, first excuse my English.
You can't do a backup with a client version lower than the server's.
Looks like the real problem here is that you got a version of
postgresql installed with the OS, and then you also installed the 9.2
RPMs. Uninstall the postgres that came with the OS and/or specify the
full path to the 9.2 pg_dump.
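For example, using the full path shown elsewhere in this thread (the dump
options are illustrative):

/usr/pgsql-9.2/bin/pg_dump --version
/usr/pgsql-9.2/bin/pg_dump -Fc -f mydb.dump mydb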
On Wed, Sep 25, 2013 at 12:47 PM, Alejandro Brust
wrote:
> Hello
Hello, first excuse my English.
You can't do a backup with a client version lower than the server's.
You must have at least the same version to do the backup, so you need to upgrade
your client (pg_dump 8.4.13) to at least 9.2.4.
See you
On 25/09/2013 15:55, Thara Vadakkeveedu wrote:
>
> Hi
> I wanted to take a backup of
I need to create a database and a user and make this new user the owner of this
new database.
Since I cannot access postgres db from pgadmin client on my desktop,
I tried to do the same from the command line on the linux db server:
su - postgres
-bash-4.1$ psql -d postgres
postgres=# create
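The command above is cut off; a minimal sketch of the statements in question
(names and password are placeholders):

postgres=# CREATE USER appuser WITH PASSWORD 'secret';
postgres=# CREATE DATABASE appdb OWNER appuser;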
Thank you!
I deleted the data directory again and then ran initdb.
This time service started successfully.
TG
From: Albe Laurenz
To: Thara Vadakkeveedu ; "pgsql-admin@postgresql.org"
Sent: Tuesday, September 24, 2013 10:11 AM
Subject: Re: [ADMIN] ac
Thara Vadakkeveedu wrote:
> Do I need to run pg_resetxlog to fix the corrupted pg_control file issue?
No, since you don't mind starting afresh,
just delete the data directory and run initdb.
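A sketch of that sequence, using the paths and service name mentioned elsewhere
in this thread (adjust for your installation):

# as the postgres user
/usr/pgsql-9.2/bin/initdb -D /var/lib/postgresql/9.2/data
# then, as root
service postgresql9.2 start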
Yours,
Laurenz Albe
tember 24, 2013 9:53 AM
Subject: Re: [ADMIN] accidentally deleted data directory
This is what I find in pg_log/postgresql-Mon.log file (contents relate to my
attempts to start postgres yesterday)
LOG: database system was shut down at 2013-09-22 12:20:51 EDT
LOG: invalid magic number 000
_
From: Thara Vadakkeveedu
To: Albe Laurenz ; "pgsql-admin@postgresql.org"
Sent: Tuesday, September 24, 2013 9:47 AM
Subject: Re: [ADMIN] accidentally deleted data directory
Here are the settings in the postgresql.conf file:
I remember turning on logging to troubleshoot
; "pgsql-admin@postgresql.org"
Sent: Tuesday, September 24, 2013 6:41 AM
Subject: Re: [ADMIN] accidentally deleted data directory
Thara Vadakkeveedu wrote:
> ps –edf shows no postgres processes:
> -bash-4.1$ ps -edf | grep postgres
> root 6412 6249 0 18:22 pts/0 00:00:00
Thara Vadakkeveedu wrote:
> ps –edf shows no postgres processes:
> -bash-4.1$ ps -edf | grep postgres
> root      6412  6249  0 18:22 pts/0    00:00:00 su - postgres
> postgres  6413  6412  0 18:22 pts/0    00:00:00 -bash
> postgres  6465  6413  1 18:27 pts/0    00:00:00 ps -edf
> postgres 6466 6
If pgstartup.log is empty that points at an issue with the configuration
file. From memory, when you do an initdb it recreates the configuration
files in the /var/lib/pg... directory. I would look at the main control
file and the pg_hba.conf and try to start the database passing it the name
of the
Hi ,
When you don't specify a unit, it's in ms.
See the test below:
BR
Patrick KUI-LI
postgres=# SET statement_timeout TO 1000;
SET
Time: 0.214 ms
postgres=# show statement_timeout;
 statement_timeout
-------------------
 1s
(1 row)
Example with unit:
postgres=# SET statement_timeout TO '1000s'
Thara Vadakkeveedu wrote:
> I am new to postgresql. I have postgresql 9.2 installed on Red hat linux 6.4
> I accidentally deleted data directory this morning
> (/var/lib/postgresql/9.2/data)
>
> I tried to start posgresql 9.2 service
> service postgresql9.2 start
>
> I got a message to initiali
On Tue, Sep 10, 2013 at 10:13 PM, Charles Sprickman wrote:
> I just wanted to check if there are any updates on the preferred way to
> upgrade a group of servers that are using streaming replication with
> minimal downtime.
>
> * No good options to speed up the slave copy - rsync will either use
Natalie Wenz wrote:
>>> autovacuum_freeze_max_age | 8
> We talked a little bit about lowering the
> autovacuum_max_freeze_age, at least some, but there was concern
> that it would end up doing a lot more lengthy full-table scans.
> Is that a legitimate concern?
It will cause full-
On Sep 17, 2013, at 3:46 PM, Kevin Grittner wrote:
> Natalie Wenz wrote:
>
>> Sorry; my description of what is going on was a little unclear.
>> We didn't upgrade the existing database. We moved it to different
>> hardware, and just created a brand new database to accept the
>> data that had b
Roberto Grandi wrote
> Thanks Igor,
>
> this is a sufficient idea to take into account for upgrading to 9.x
> release.
> Thanks again.
There is no 9.x "release" - singular
A release designation requires both the first and second position.
8.4.x
9.0.x
9.1.x
9.2.x
9.3.x
An ".x" can be used in th
Thanks Igor,
this is a sufficient idea to take into account for upgrading to 9.x release.
Thanks again.
Roberto
- Original Message -
From: "Igor Neyman"
To: "Roberto Grandi" , pgsql-admin@postgresql.org
Sent: Wednesday, 18 September 2013 15:37:12
Subject: RE: Catch exceptions outsid
On Sep 18, 2013, at 7:23 AM, David Johnston wrote:
> If an exception gets that far your transaction
> has failed and you have to ROLLBACK.
Right, and after my prior post where I suggested rollback, I realized, it may
be the case OP doesn't even realize there's an open transaction, which must
e
> -Original Message-
> From: pgsql-admin-ow...@postgresql.org [mailto:pgsql-admin-
> ow...@postgresql.org] On Behalf Of Roberto Grandi
> Sent: Wednesday, September 18, 2013 6:17 AM
> To: pgsql-admin@postgresql.org
> Subject: [ADMIN] Catch exceptions outside function
>
>
> Dear all
>
> I
On Sep 18, 2013, at 5:53 AM, Roberto Grandi
wrote:
> Do you have any suggestion for me?
After the timeout, roll back the current transaction.
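For example, a sketch of the client-side flow, using the timeout from the
original script:

SET statement_timeout TO 1000;
BEGIN;
SELECT pg_sleep(5);   -- cancelled after 1 s: ERROR: canceling statement due to statement timeout
ROLLBACK;             -- the transaction is now aborted; roll it back before issuing anything else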
--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
Roberto Grandi wrote
> Hi
>
> this is my script in details, remember that I launch it by .Net code
> (devart connector):
>
>
> SET LOCAL statement_timeout TO 1000;
> BEGIN;
>
> SELECT pg_sleep(5); -- QUERY that is long running;
>
> -- Some exception catch such as EXCEPTION
>
> END;
>
>
> I
Roberto Grandi wrote:
> this is my script in details, remember that I launch it by .Net code (devart
> connector):
>
>
> SET LOCAL statement_timeout TO 1000;
> BEGIN;
>
> SELECT pg_sleep(5); -- QUERY that is long running;
>
> -- Some exception catch such as EXCEPTION
>
> END;
>
>
> I suppos
Hi
this is my script in detail; remember that I launch it from .NET code (devart
connector):
SET LOCAL statement_timeout TO 1000;
BEGIN;
SELECT pg_sleep(5); -- QUERY that is long running;
-- Some exception catch such as EXCEPTION
END;
I supposed my code could throw an exception for the timeout and
Roberto Grandi wrote:
> I ask for your help cause I can't point out the solution to my problem on PG
> 8.3
> I would catch an exception outside any function/procedure but directly within
> script.
>
>
> BEGIN;
>
> -- raise an exception code
>
> EXCEPTION
> WHEN 'exception_type'
> THEN ROLLBAC
On Tue, Sep 17, 2013 at 9:48 AM, Natalie Wenz wrote:
> maintenance_work_mem| 10GB
> shared_buffers | 128MB
>
maintenance_work_mem seems pretty high, and shared_buffers seems really
low. Out of curiosity, were those set as a product of internal testing
which determ
Natalie Wenz wrote:
> Sorry; my description of what is going on was a little unclear.
> We didn't upgrade the existing database. We moved it to different
> hardware, and just created a brand new database to accept the
> data that had been backing up in sqlite files while our original
> database w
This should work.
SELECT set_config('statement_timeout','1000 s',false);
The set_config function is quite flexible, as it can accept dynamic values.
More info here
http://www.postgresql.org/docs/current/static/functions-admin.html
I've used it successfully to change the script timeout for each statement
It occurs to me that asking for feedback on the tuning, I am asking about two
separate things:
Was there anything in the tuning below that contributed to the database getting
into trouble? And is there anything I should change in that tuning to make the
single-user vacuum as fast as it can be f
other query runs on the same
connection pool ?
I would limit the effect only on this specific query.
Roberto
- Original Message -
From: "Federico"
To: "Roberto Grandi"
Cc: pgsql-admin@postgresql.org
Sent: Tuesday, September 17, 2013 6:37:09 PM
Subject: Re:
No… the shared_buffers value is just a legacy value that never got changed
(the shmmax value in sysctl is still 1073741824). When I set up the new
database, I set the shared_buffers to 25% of system memory, so 12GB. (And since
the new database is on 9.3, I didn't have to adjust the sysctl valu
On Sep 17, 2013, at 7:43 AM, Kevin Grittner wrote:
> Natalie Wenz wrote:
>
>> I have a large database from our test environment that got into trouble with
>> some high volume and some long-running queries about…six weeks ago? We have a
>> buffer mechanism that has been storing the new data sinc
Benjamin Krajmalnik wrote:
> During a maintenance window, we upgraded our systems to Postgres
> 9.0.13 from 9.0.3 running on FreeBSD 8.1 amd64.
> When we restarted the postgres server, I notices, and continue to
> notice, a recurrence of messages in the log.
>
> 2013-09-16 21:15:58 MDT LOG: aut
t manually?
Thanks,
Keith
From: Matheus de Oliveira [matioli.math...@gmail.com]
Sent: Saturday, September 14, 2013 11:55 AM
To: Keith Ouellette
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Too many WAL archive files
On Sat, Sep 14, 2013 at 11:19 AM, Keith
Natalie Wenz wrote:
> I have a large database from our test environment that got into trouble with
> some high volume and some long-running queries about…six weeks ago? We have a
> buffer mechanism that has been storing the new data since the database stopped
> accepting connections, so we haven'
On 2013/9/17 0:02, Kevin Grittner wrote:
Possibly. As I said before, I think the symptoms might better fit a
situation where the table in need of VACUUM was a shared table and it
just happened to mention db1 because that was the database it was
scanning at the time. (Every database includes the sha
Thanks for the useful information.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/postgresql-patching-tp5770236p5771014.html
Sent from the PostgreSQL - admin mailing list archive at Nabble.com.
Rural Hunter wrote:
> This was changed quite long time ago when I saw too frequent auto
> vacuums to prevent the wrap-around on a very busy/large table
> which slow down the performance. I will change it back to the
> default to see how it works.
There was a long-standing bug which could cause o
On 2013/9/16 1:31, Kevin Grittner wrote:
There's your problem. You left so little space between when
autovacuum would kick in for wraparound prevention (2 billion
transactions) and when the server prevents new transactions in
order to protect your data (2 ^ 31 - 100 transactions) that
autovacuum
Rural Hunter wrote:
> I'm on Ubuntu 12.04.1 64bit with 32 cores and 377G memory. The
> data is stored on several rai10 SAS 15k disks.
With a machine that beefy I have found it necessary to make the
autovacuum settings more aggressive. Otherwise the need for
vacuuming can outpace the ability of
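For example (illustrative values only, not settings recommended in this thread):

# postgresql.conf -- more aggressive autovacuum for a large, busy server
autovacuum_max_workers = 6
autovacuum_vacuum_cost_limit = 2000
autovacuum_vacuum_cost_delay = 10ms
autovacuum_vacuum_scale_factor = 0.05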
Rural Hunter wrote:
> 2. Since db1 is a very large database(it is the main db the user is
> using) I can not afford to take long time to vacuum full on that. So
> I thought about to try on other small dbs first.
>
> 3. I stop the instance.
>
> 4. I use "echo 'vacuum full;' | postgres --single
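The command above is cut off; the general shape of a single-user-mode vacuum is
something like the following sketch (the data directory path and database name
are illustrative):

echo 'vacuum full;' | postgres --single -D /path/to/data small_db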
On Sat, Sep 14, 2013 at 6:05 PM, Rural Hunter wrote:
> On 2013/9/15 1:06, Kevin Grittner wrote:
>
>> Rural Hunter wrote:
>>
>> Why in the world would you want to use VACUUM FULL in this circumstance?
>> the db name in the error message wrong?
>>
> I just googled around and found the solution. What's
On 2013/9/15 1:06, Kevin Grittner wrote:
Rural Hunter wrote:
Why in the world would you want to use VACUUM FULL in this circumstance?
the db name in the error message wrong?
I just googled around and found the solution. What's the other option?
There are two possibilities -- either you had a long
You should consider using the cleanup command in your recovery.conf file;
this will make sure that WAL files no longer needed by the secondary server
are eliminated.
If you need the WAL files for PITR, you could also set up your archive
command to archive the WAL files to two different locations. a s
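A sketch of both options (paths are illustrative):

# recovery.conf on the standby -- remove archived WAL the standby no longer needs
archive_cleanup_command = 'pg_archivecleanup /mnt/wal_archive %r'

# postgresql.conf on the primary -- archive each segment to two locations for PITR
archive_command = 'cp %p /mnt/wal_archive/%f && cp %p /mnt/wal_archive2/%f'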