"Andrew Hammond" <[EMAIL PROTECTED]> writes:
> (I thought this line was interesting)
> Jun 27 15:54:31 qadb2 postgres[92519]: [44-1] PANIC:  could not open
> relation 1663/16386/679439393: No such file or directory

> I googled to find out what the numbers 1663/16386/679439393 from the
> PANIC message mean, but no luck.

tablespaceOID/databaseOID/relfilenode.  Looks like just some random user
table.  Not clear why this would be a crash, *especially* since WAL
recovery is generally willing to create nonexistent files.  Is this
reproducible?
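
If it helps, the first two numbers are just catalog OIDs, and 1663 is
normally the pg_default tablespace.  Something like this (untested,
just plugging in the OIDs from your log) should confirm which
tablespace and database they refer to:

    SELECT spcname FROM pg_tablespace WHERE oid = 1663;
    SELECT datname FROM pg_database WHERE oid = 16386;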

> (On Thursday night)
> vacuumdb: vacuuming of database "adecndb" failed: ERROR:  could not
> write block 209610 of relation 1663/16386/236356665: No space left on
> device
> CONTEXT:  writing block 209610 of relation 1663/16386/236356665

That's pretty frickin' odd as well, because as a rule we make sure that
backing store exists for each table page before we open it up for
normal writing.  Do you have a way to find out what relation
1663/16386/236356665 is?  What filesystem is this database sitting on?
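
One way, assuming you can connect to the affected database (presumably
adecndb, going by your vacuumdb output), would be a query along these
lines (untested) to identify the table that file belongs to:

    SELECT relname, relkind FROM pg_class WHERE relfilenode = 236356665;

Note that relfilenode values are per-database, so that has to be run
while connected to that database, not to template1 or some other one.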

> Now, the first message is very strange since we have monitoring on the
> file system used by the database and it's been hovering at about 18%
> space used for the last month. So I can't figure out why we'd get "No
> space left on device", assuming the device is actually the disk (which
> seems reasonable given the context) and not shared memory.

Yeah, this is definitely a case of ENOSPC being returned by write().
If you're sure the disk wasn't out of space, maybe some per-user quota
was getting in the way?

                        regards, tom lane
