Thankfully, the database is not crashing. We are waiting for the errors to
come back so we can turn up logging, in the hope of capturing a reproducible set.
I will do my best to provide a reproducible test case. Is there any more
information I can supply in the meantime that would help?
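For reference, a minimal postgresql.conf sketch of what turning up logging
might look like in this situation; the particular settings and values are
illustrative assumptions, not what was actually used:

    # postgresql.conf (takes effect on reload); values are illustrative
    log_min_messages = info              # surface more detail in the server log
    log_min_error_statement = error      # log the statement that triggered an error
    log_statement = 'mod'                # log all INSERT/UPDATE/DELETE statements
    log_line_prefix = '%t [%p] %u@%d '   # timestamp, pid, user, database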
Hello,
We have an ltree column (tree_path) that has a gist index
(index_nodes_on_tree_path). This is in a 9.3.5 database. Recently errors
started occurring in the postgres log on some updates to this table:
fixing incomplete split in index "index_nodes_on_tree_path", block 2358
STATEMENT: UPD
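For context, a minimal sketch of the schema being described; the table name
"nodes" and the columns other than tree_path are assumptions inferred from the
index name:

    -- Illustrative only: "nodes" and the id column are assumed from the index name.
    CREATE EXTENSION IF NOT EXISTS ltree;

    CREATE TABLE nodes (
        id        serial PRIMARY KEY,
        tree_path ltree NOT NULL
    );

    CREATE INDEX index_nodes_on_tree_path ON nodes USING gist (tree_path);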
see if that reveals any noticeable performance difference.
Thanks again
Mike
On Mon, Jan 13, 2014 at 7:11 PM, Michael Paquier
wrote:
> On Tue, Jan 14, 2014 at 1:50 AM, Mike Broers wrote:
> > Hello, I am in the process of planning a 9.3 migration of postgres and I
> am
> >
Hello, I am in the process of planning a 9.3 migration of postgres and I am
curious about the checksum features available. In my test 9.3 instance it
seemed like this feature provides a log entry of the exact database/oid of
the corrupt object when it is accessed, but not much else. I can't find
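For reference, a hedged sketch of how the 9.3 checksum behaviour can be
exercised; checksums can only be enabled when the cluster is initialised
(initdb --data-checksums), and the table name below is hypothetical:

    -- On a checksum failure the read errors out, naming the block and relation.
    -- For salvage work the check can be relaxed per session with a developer
    -- option (use with extreme care, it masks real corruption):
    SET ignore_checksum_failure = on;
    SELECT * FROM some_damaged_table LIMIT 100;  -- hypothetical table name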
replication at the time of the crash have prevented this
from cascading or was it already too late at that point?
Thanks again for the input, it's been very helpful!
Mike
On Mon, Nov 25, 2013 at 12:20 PM, Mike Broers wrote:
> Thanks Shaun,
>
> I'm planning to schedule a time to do the vacu
Thanks Shaun,
I'm planning to schedule a time to do the vacuum freeze suggested
previously. So far the extent of the problem seems limited to the one
session table and the one session row that was being used by a heavy bot
scan at the time of the crash. Currently I'm testing a recovery of a
produc
needs to be run in production.
On Thu, Nov 21, 2013 at 5:09 PM, Kevin Grittner wrote:
> Mike Broers wrote:
>
> > Is there anything I should look out for with vacuum freeze?
>
> Just check the logs and the vacuum output for errors and warnings.
>
> --
>
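As a concrete illustration of the planned step, a sketch assuming the affected
table is the session_session table quoted later in the thread; a database-wide
run is the same command without a table name:

    -- Freeze the affected table and report progress:
    VACUUM (FREEZE, VERBOSE) session_session;

    -- See how far each table's frozen-xid horizon lags:
    SELECT relname, age(relfrozenxid) AS xid_age
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY age(relfrozenxid) DESC
    LIMIT 10;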
Is there anything I should look out for with vacuum freeze?
Much appreciated,
Mike
On Thu, Nov 21, 2013 at 4:51 PM, Kevin Grittner wrote:
> Mike Broers wrote:
>
> > Thanks for the response. fsync and full_page_writes are both on.
>
> > [ corruption appeared following power loss on the machine hosting
> >
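A quick way to confirm those settings from psql:

    SHOW fsync;
    SHOW full_page_writes;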
I am planning on running the reindex in actual production tonight during
our maintenance window, but was hoping that if it worked we would be out of
the woods.
On Thu, Nov 21, 2013 at 3:56 PM, Kevin Grittner wrote:
> Mike Broers wrote:
>
> > Hello we are running postgres 9.2.5 on RHEL
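For concreteness, a sketch of the maintenance-window step, assuming the target
is the session table discussed earlier (the actual table name may differ):

    -- Rebuilds every index on the table; writes to the table are blocked while it runs.
    REINDEX TABLE session_session;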
see if there is a way to force vacuum to continue on error; worst case I
might have to write a table-by-table vacuum script, I guess. If anyone
has a better suggestion for determining the extent of the damage I'd
appreciate it.
On Thu, Nov 21, 2013 at 2:10 PM, Mike Broers wrote:
> Hello we
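One possible shape for that table-by-table fallback, as a sketch: generate one
VACUUM per user table, save the output to a file, and run the file with psql so
a failure on one table does not stop the rest:

    -- Generate per-table VACUUM commands (uses the 9.1+ format() function):
    SELECT format('VACUUM (VERBOSE) %I.%I;', schemaname, tablename)
    FROM pg_tables
    WHERE schemaname NOT IN ('pg_catalog', 'information_schema');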
Hello we are running postgres 9.2.5 on RHEL6, our production server crashed
hard and when it came back up our logs were flooded with:
STATEMENT: SELECT "session_session"."session_key",
"session_session"."session_data", "session_session"."expire_date",
"session_session"."nonce" FROM "session_sessi
We take nightly backups using the start backup / stop backup method: start the
backup, copy the data directory and archived logs, then stop the backup.
Today I tested the recoverability of the backup by mounting this backup
directory on a different server, copying the 3 hours of transaction logs
from after last night's backup up
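For reference, a minimal sketch of that start/stop backup routine as it works
on 9.2/9.3; the label is illustrative and the copy step happens outside the
database:

    SELECT pg_start_backup('nightly_base_backup');
    -- ... copy the data directory and archived WAL to the backup location ...
    SELECT pg_stop_backup();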