I've checked the disk with badblocks(8). The results are:

File /pgsql/9.0/data0/base/16386/11838.5 (inode #3015588, mod time Wed Mar
30 13:13:13 2011)
  has 50 multiply-claimed block(s), shared with 1 file(s):
    <The bad blocks inode> (inode #1, mod time Wed Mar 30 15:23:19 2011)

After this, I dropped the database and created a new one. The problem is
solved.
All the same, it is interesting why there was such a problem. It concerns me
because I intend to use large objects in production...
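
For what it's worth, one simple sanity check on the large object catalog
before going to production could be (a sketch; it only uses the standard
loid and pageno columns of pg_largeobject):

dmitigr=# select loid, count(*) as pages from pg_largeobject group by loid
order by pages desc limit 10;

For an imported file, each loid should show about ceil(size / 2048) pages,
since LOBLKSIZE defaults to BLCKSZ/4 = 2048 bytes; a ~1.5 GB object is
therefore roughly 786,000 pages.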

Suggestions?

2011/3/30 Dmitriy Igrishin <dmit...@gmail.com>

> Hey all,
>
> I've never experienced such problems before performing some tests
> on large objects. I am on Ubuntu and my HDD is fully encrypted
> (LVM2). I've imported a large object ~1.5 GB in size. After this, the entire
> system lost performance dramatically and the disk activity became
> anomalous.
>
> After a reboot everything is fine with the OS, but an attempt to remove all
> large objects results in an error:
>
> dmitigr=# select lo_unlink(loid) from (select distinct loid from
> pg_largeobject) as foo;
> ERROR:  could not read block 704833 in file "base/16386/11838.5": read only
> 4096 of 8192 bytes
>
> What does this mean -- a hardware, OS, or Postgres failure?
>
> Any suggestions?
>
> --
> // Dmitriy.
>
>
>
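
For completeness, a minimal round-trip to reproduce the test from the quoted
message above on the recreated database (a sketch; the file path is just a
placeholder, while \lo_import and lo_unlink are standard psql/PostgreSQL
facilities) would be:

dmitigr=# \lo_import '/tmp/some_large_file'
dmitigr=# select lo_unlink(loid) from (select distinct loid from
pg_largeobject) as foo;

lo_unlink() returns 1 per removed object, so the second query should return
one row per loid with no read errors this time.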


-- 
// Dmitriy.
