On Mon, Jul 31, 2006 at 06:09:33PM -0400, Francisco Reyes wrote:
> Martijn van Oosterhout writes:
> > That's when you've reached the end of the table. The point is that
> > before then you'll have found the value of N that produces the error.
>
> Will be a while.. my little python script is doing under 10 selects/sec...
> and there are nearly 67 million records. :-(
On Sun, Jul 30, 2006 at 04:58:34PM -0400, Francisco Reyes wrote:
> Martijn van Oosterhout writes:
> > It's still a reasonable suggestion. The maximum offset is the number of
> > rows in the table. You'll notice when the output is empty.
>
> Once I find the point where the output is empty then what?
Martijn van Oosterhout writes:
> That's when you've reached the end of the table. The point is that
> before then you'll have found the value of N that produces the error.

Will be a while.. my little python script is doing under 10 selects/sec...
and there are nearly 67 million records. :-(
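At under 10 selects a second, a linear scan of nearly 67 million offsets would take roughly 77 days. The structure of the problem allows a much faster search: since OFFSET N has to scan past the first N rows, every probe whose offset is at or beyond the damaged tuple touches it and errors out, so the succeed/fail boundary is monotone and the bad row can be found by bisection in about 26 probes. A minimal sketch of that idea (the `probe` callable is a hypothetical stand-in for issuing the `SELECT ... OFFSET n LIMIT 1` from the script, e.g. via a database adapter, and reporting whether it succeeded):

```python
def find_bad_offset(probe, max_rows):
    """Return the smallest offset n for which probe(n) fails,
    or None if every probe succeeds (table scans cleanly).

    probe(n) -> bool: True if "SELECT ... OFFSET n LIMIT 1" succeeds,
    False if it raises the pg_clog error. Because OFFSET n scans past
    the first n rows, probes fail for every n at or beyond the bad
    row's index, which makes the predicate monotone and bisectable.
    """
    if probe(max_rows):          # scanning the whole table works:
        return None              # no corrupted row reachable this way
    lo, hi = 0, max_rows         # invariant: probe(hi) is known to fail
    while lo < hi:
        mid = (lo + hi) // 2
        if probe(mid):
            lo = mid + 1         # rows 0..mid all scan cleanly
        else:
            hi = mid             # failure: bad row index is <= mid
    return lo                    # first failing offset = bad row index
```

For a 67-million-row table this needs on the order of log2(67e6) ≈ 26 probes instead of millions, so even at 10 selects/sec it finishes in seconds.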
On Sun, Jul 30, 2006 at 01:31:14AM -0400, Francisco Reyes wrote:
> Looking at the archives seems to indicate that missing pg_clog files are
> some form of row or page corruption.
> In an old thread from back in 2003 Tom Lane recommended
> (http://tinyurl.com/jushf):
>   If you want to try to narrow down where the corruption is, you can
>   experiment with commands like
Martijn van Oosterhout writes:
> It's still a reasonable suggestion. The maximum offset is the number of
> rows in the table. You'll notice when the output is empty.

Once I find the point where the output is empty then what?

> Do you have an idea how much data it contains?

Yes. Around 87
Looking at the archives seems to indicate that missing pg_clog files are
some form of row or page corruption.
In an old thread from back in 2003 Tom Lane recommended
(http://tinyurl.com/jushf):

  If you want to try to narrow down where the corruption is, you can
  experiment with commands like

  select