Hello, our PostgreSQL 9.2.4 QA database (thankfully it's just QA) seems to
be hosed.
At around 3:39am last night I started seeing errors about missing files,
and now I cannot run a pg_dump or a vacuum without it complaining about
files it cannot find, with errors like this: ERROR: could
Strange, this is happening in a totally different environment now too. The
only thing these two environments share is a SAN, but I wouldn't think
something going on at the SAN level would make files disappear. Any
suggestions are greatly appreciated.
On Fri, Oct 4, 2013 at 9:40 AM, Mike Broers
if
there are ways of having Postgres check and verify that the files it expects
to find are there, and to get an idea of the extent of the damage.
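For reference, one way to get that kind of listing (a sketch, not something from this thread) is to ask Postgres for the on-disk path it expects for each relation and then verify those paths under the data directory:

    -- sketch: the on-disk file Postgres expects for each table, index, and
    -- toast table in this database; paths are relative to the data directory
    -- (relations over 1GB also have .1, .2, ... segment files)
    SELECT c.oid::regclass AS relation,
           pg_relation_filepath(c.oid::regclass) AS expected_file
    FROM pg_class c
    WHERE c.relkind IN ('r', 'i', 't')
    ORDER BY 2;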
On Fri, Oct 4, 2013 at 12:10 PM, Mike Broers mbro...@gmail.com wrote:
Strange, this is happening in a totally different environment now too.
The only thing
I was recently asked how long it takes for Postgres (or in my case
pgbouncer) to create a database connection, and could not find a way within
Postgres logging or psql to report this information.
I came across depesz's great article on pgbouncer utilizing tcpdump:
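The tcpdump approach measures the handshake from the outside. A cruder in-database check (an assumption on my part, not from the article) is to connect and immediately ask how long ago the backend process started; note that through pgbouncer this reflects the pooled server connection rather than the client connection:

    -- run immediately after connecting: elapsed time since this backend
    -- process was started (column is pid on 9.2+, procpid on older releases)
    SELECT now() - backend_start AS backend_age
    FROM pg_stat_activity
    WHERE pid = pg_backend_pid();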
On further review, this particular server skipped from 9.2.2 to 9.2.4. This
is my busiest and most downtime-sensitive server, and I was waiting on a
maintenance window to patch to 9.2.3 when 9.2.4 dropped and bumped up the
urgency. However, I have 3 other less busy production servers that were
all
Looks like a psql vacuum (verbose, analyze) is also not reflected in
pg_stat_user_tables in some cases. In this scenario I run the command and it
outputs all the deleted pages etc. (unlike the vacuumdb -avz analyze that
seemed to be skipped in the log), but it does not update
pg_stat_user_tables.
After patching to 9.2.4 I am noticing some mysterious behavior in my
nightly vacuumdb cron job.
I have been running vacuumdb -avz nightly for a while now, and have a
script that tells me the next day if all the tables in pg_stat_user_tables
have been vacuumed and analyzed in the last 24 hours.
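Roughly, that kind of check boils down to a query like this (a simplified sketch, not the actual script):

    -- user tables with no vacuum and/or no analyze recorded in the last 24 hours
    SELECT schemaname, relname,
           last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
    FROM pg_stat_user_tables
    WHERE greatest(coalesce(last_vacuum, 'epoch'),
                   coalesce(last_autovacuum, 'epoch')) < now() - interval '24 hours'
       OR greatest(coalesce(last_analyze, 'epoch'),
                   coalesce(last_autoanalyze, 'epoch')) < now() - interval '24 hours'
    ORDER BY schemaname, relname;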
Wow thanks for the code!! I'll test it out and let you know if I get any
unexpected results.
On Wed, Nov 7, 2012 at 8:39 PM, Craig Ringer ring...@ringerc.id.au wrote:
On 11/08/2012 04:42 AM, Mike Broers wrote:
I would like to bump all sequences in a schema by a specified increment.
Is there a stored proc or some recommended method? Currently I have SQL that
generates scripts to do this, but it seems to be an inelegant approach, and
before I rework it from the ground up I want to see if anyone
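For context, the generate-a-script idea amounts to something like the following (a sketch, not the actual code; the schema name 'my_schema' and the increment of 1000 are placeholders):

    -- emit one setval() per sequence in the schema, bumping each by 1000
    SELECT 'SELECT setval(''' || n.nspname || '.' || c.relname
           || ''', last_value + 1000) FROM '
           || quote_ident(n.nspname) || '.' || quote_ident(c.relname) || ';'
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'S'
      AND n.nspname = 'my_schema';

The output is a second script that then gets run back through psql, which is the part that feels inelegant.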
Ultimately the hosting service restored the files that they had not brought
over during their maintenance migration and we started up ok. So that was
a relief.
We had archived log files but it did not appear that the archive
destination was caught up with the xlog the cluster was complaining
Hello,
We shut down our Postgres 8.3 server last night cleanly for some hosted
services maintenance. When we got our server back, it didn't have the
pg_xlog mount with its files, and now when we start the server, it complains:
2012-06-23 06:06:04 CDT [18612]: [1-1] user=,db= LOG: database system was
if that provides a better option.
On Sat, Jun 23, 2012 at 7:01 AM, Mike Broers mbro...@gmail.com wrote:
Hello,
We shut down our Postgres 8.3 server last night cleanly for some hosted
services maintenance. When we got our server back, it didn't have the
pg_xlog mount with files and now when we
Hello, I am setting up a new Postgres production server in a managed
hosting environment. I don't have much insight into the underlying disk
architecture, but the filesystem I have been presented with has a 4k block
size. Postgres defaults to an 8k block size; would it be beneficial to repave
the
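For reference, the page size the server was compiled with can be checked from psql; changing it means rebuilding PostgreSQL with --with-blocksize and running initdb again, it is not a runtime setting:

    -- read-only setting, 8192 bytes on a stock build
    SHOW block_size;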
Should this be posted in performance instead?
On Fri, Jun 3, 2011 at 9:46 AM, Mike Broers mbro...@gmail.com wrote:
I am in the process of implementing cascade on delete constraints
retroactively on rather large tables so I can cleanly remove deprecated
data. The problem is recreating some
I am in the process of implementing cascade on delete constraints
retroactively on rather large tables so I can cleanly remove deprecated
data. The problem is that recreating some foreign key constraints on tables
of 55+ million rows was taking much longer than the maintenance window I
had, and now I
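On newer releases (9.1+) one way to keep the long scan out of the maintenance window is to add the constraint NOT VALID and validate it later (a sketch; table and column names are placeholders):

    -- create the FK without scanning existing rows, so the ALTER itself is quick
    ALTER TABLE child_table
        ADD CONSTRAINT child_table_parent_id_fkey
        FOREIGN KEY (parent_id) REFERENCES parent_table (id)
        ON DELETE CASCADE
        NOT VALID;

    -- later, when there is time to scan the existing 55+ million rows
    ALTER TABLE child_table VALIDATE CONSTRAINT child_table_parent_id_fkey;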
Lately I have been paranoid about the possibility of transaction wraparound
failure due to a potential orphaned toast table. I have yet to prove that I
have such an object in my database, but I am running Postgres 8.3 with
autovacuum enabled and am doing nightly manual vacuums as well, and
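Two checks that seem relevant here (a sketch on my part; both should work on 8.3): how close each database is to wraparound, and whether any toast table is left unreferenced by a main relation:

    -- transaction age per database; autovacuum forces a wraparound vacuum
    -- as this approaches autovacuum_freeze_max_age (200 million by default)
    SELECT datname, age(datfrozenxid) AS xid_age
    FROM pg_database
    ORDER BY xid_age DESC;

    -- toast tables that no relation claims as its toast table (candidates
    -- for the "orphaned toast table" situation)
    SELECT t.oid::regclass AS toast_table
    FROM pg_class t
    WHERE t.relkind = 't'
      AND NOT EXISTS (SELECT 1 FROM pg_class c WHERE c.reltoastrelid = t.oid);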
Pg v8.3.8
I have a table whose column size needs to be increased:
\d dim_product
        Table "report.dim_product"
 Column | Type | Modifiers
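The straightforward form would be something along these lines (the column name and new length are placeholders); note that on 8.3 an ALTER COLUMN ... TYPE generally rewrites the table under an exclusive lock:

    -- widen the column; placeholder names and length
    ALTER TABLE report.dim_product
        ALTER COLUMN product_name TYPE varchar(255);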