[SQL] pg_clog (?) problem with VACUUM

2007-11-27 Thread Gerardo Herzig

Hi all. Well, I'm having this problem for the first time.
When executing a VACUUM, I get this message:
gse_new_version=# VACUUM ;
ERROR:  could not access status of transaction 118
DETAIL:  could not open file "pg_clog/": No existe el fichero o el
directorio
(In English: "No such file or directory".)


Ok, wtf, I'll create the file. So touch pg_clog/ and VACUUM again:
ERROR:  could not access status of transaction 118
DETAIL:  could not read from file "pg_clog/" at offset 0: Conseguido

I hope the error is still clear despite the word "Conseguido", which
would be something like 'accomplished' or 'obtained'.


The database was created using a "WITH TEMPLATE = gse" modifier.
Vacuuming the 'gse' database gives me no errors at all.


Here are the final lines from strace when trying to vacuum...
lseek(20, 0, SEEK_SET)  = 0
read(20, "", 8192)  = 0
close(20)   = 0
rt_sigprocmask(SIG_SETMASK, ~[QUIT ILL TRAP ABRT BUS FPE KILL SEGV CONT STOP SYS RTMIN RT_1], NULL, 8) = 0

stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=377, ...}) = 0
stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=377, ...}) = 0
stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=377, ...}) = 0
write(2, "<2007-11-27 16:54:23 ART, gse_ne"..., 346) = 346
sendto(8, "E\0\0\0\240SERROR\0CXX000\0Mcould not ac"..., 161, 0, NULL, 0) = 161

munmap(0x2aaabffa9000, 839680)  = 0
munmap(0x2aaabdfa8000, 33558528)= 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
sendto(7, "\4\0\0\0\310\3\0\0\1\0\0\0\2731\0\0 S\255\0\r\0\0\0+\0"..., 968, 0, NULL, 0) = 968
sendto(7, "\4\0\0\0\310\3\0\0\1\0\0\0\2731\0\0 S\255\0\r\0\0\0\0\0"..., 968, 0, NULL, 0) = 968
sendto(7, "\4\0\0\0\310\3\0\0\1\0\0\0\2731\0\0 S\255\0\r\0\0\0\0\0"..., 968, 0, NULL, 0) = 968
sendto(7, "\4\0\0\0\310\3\0\0\1\0\0\0\2731\0\0 S\255\0\r\0\0\0\0\0"..., 968, 0, NULL, 0) = 968
sendto(7, "\4\0\0\0\370\0\0\0\1\0\0\0\2731\0\0 S\255\0\3\0\0\0\0\0"..., 248, 0, NULL, 0) = 248
sendto(7, "\4\0\0\0008\3\0\0\1\0\0\0\2731\0\0\0\0\0\0\v\0\0\0\0\0"..., 824, 0, NULL, 0) = 824

sendto(7, "\3\0\0\0\27\0\0\0\1\0\0\0\2731\0\0\0", 23, 0, NULL, 0) = 23
sendto(8, "Z\0\0\0\5I", 6, 0, NULL, 0)  = 6
recvfrom(8,


I don't know what to do now.
Wait... I have some more.
Vacuuming one table at a time, I found the table that makes VACUUM crash.
It's this one:

gse_new_version=# \d words
        Table "public.words"
    Column     |       Type        | Modifiers
---------------+-------------------+-----------
 page_id       | integer           |
 word          | character varying |
 word_position | integer           |
 original_word | character varying |
Indexes:
    "words_idx" btree (upper(word::text))
    "words_page_id_idx" btree (page_id)
    "words_page_id_word_position_id" btree (page_id, word_position) CLUSTER
    "words_upper_idx" btree (upper(word::text) varchar_pattern_ops)

I'm suspicious about the CLUSTER (don't ask why :-). Meanwhile I'm
looking at "unclustering" the table and trying again; a sketch of the
command follows.
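
For reference, dropping the CLUSTER marking is only a catalog change
(the data and indexes stay as they are). A minimal sketch against the
table above:

-- Remove the CLUSTER marking from "words"; nothing is rewritten.
ALTER TABLE words SET WITHOUT CLUSTER;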


Server version: 8.1.2
On SuSE 9.1

Any other ideas would be appreciated.
Thanks!!

Gerardo.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [SQL] pg_clog (?) problem with VACUUM

2007-11-27 Thread Gregory Stark
"Gerardo Herzig" <[EMAIL PROTECTED]> writes:

> Hi all. Well, I'm having this problem for the first time.
> When executing a VACUUM, I get this message:
> gse_new_version=# VACUUM ;
> ERROR:  could not access status of transaction 118
> DETAIL:  could not open file "pg_clog/": No existe el fichero o el
> directorio

This is the typical error you see if the data is corrupted by bad memory or
disk. It's trying to check if the record was committed or aborted and finding
a bogus transaction id.

(Except in this case I'm not sure that the transaction id is necessarily
bogus. It's awfully early though. Was the table in the template database
created soon after running initdb? And then you're creating a copy of the
database much later?)

It's also possible it's one of the various bugs fixed in the bug-fix releases
8.1.3 through 8.1.10. I don't see any related to clog but there are some
related to vacuum which could cause data corruption.

But aside from upgrading immediately to 8.1.10, I would suggest you run
memtest86 (assuming it's an x86 machine) or some other memory checker. Bad
memory is quite common and could easily cause this. Also run some SMART tool
to check the disks.
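
For example (assuming smartmontools is installed and the disk is
/dev/sda -- adjust the device to match your system):

smartctl -H /dev/sda    # quick overall health verdict
smartctl -a /dev/sda    # full SMART attributes and error log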


> Ok, wtf, I'll create the file. So touch pg_clog/ and VACUUM again:
> ERROR:  could not access status of transaction 118
> DETAIL:  could not read from file "pg_clog/" at offset 0: Conseguido

That's almost the right idea for a last-ditch attempt to extract what data
you can from a corrupted table. You have to fill the file with NUL bytes
though. Something like dd if=/dev/zero of= bs=1k count=nnn where nnn is, uh,
I'm not sure how large, it won't take much to cover transaction id 118 though.
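
For what it's worth, a concrete sketch of that command. Each transaction
status takes 2 bits in the clog, so a single 8 kB page already covers
32768 transactions, and transaction 118 lives in the very first segment.
Assuming the missing file is that first segment (0000 -- an inference,
since the filename was cut off above), running from the cluster's data
directory with the server stopped:

# Last-resort surgery: back up $PGDATA first. Zeroed status bits read
# back as "no commit recorded", so rows from the affected transactions
# may come back as dead.
dd if=/dev/zero of=pg_clog/0000 bs=8k count=1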


-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's On-Demand Production Tuning

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [SQL] pg_clog (?) problem with VACUUM

2007-11-27 Thread Gerardo Herzig

Gregory Stark wrote:

> "Gerardo Herzig" <[EMAIL PROTECTED]> writes:
>
>> Hi all. Well, I'm having this problem for the first time.
>> When executing a VACUUM, I get this message:
>> gse_new_version=# VACUUM ;
>> ERROR:  could not access status of transaction 118
>> DETAIL:  could not open file "pg_clog/": No existe el fichero o el
>> directorio
>
> [...]
>
>> Ok, wtf, I'll create the file. So touch pg_clog/ and VACUUM again:
>> ERROR:  could not access status of transaction 118
>> DETAIL:  could not read from file "pg_clog/" at offset 0: Conseguido
>
> That's almost the right idea for a last-ditch attempt to extract what data
> you can from a corrupted table. You have to fill the file with NUL bytes
> though. Something like dd if=/dev/zero of= bs=1k count=nnn where nnn is, uh,
> I'm not sure how large, it won't take much to cover transaction id 118 though.

Mmmm, yeah: after dd'ing the file, VACUUM executes just fine!!
BTW, a previous attempt to build a CLUSTER gave me the same error as
with VACUUM, so I don't think it was a VACUUM issue.

Thanks Gregory for your help!!

Gerardo

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [SQL] pg_clog (?) problem with VACUUM

2007-11-27 Thread Stefan Becker
Dear Gerardo and Pgsql-List,

> When executing a VACUUM, I get this message:
> gse_new_version=# VACUUM ;
> ERROR:  could not access status of transaction 118
> DETAIL:  could not open file "pg_clog/": No existe el fichero o el

I had a similar problem.  One day I decided to "clean out"
the pg_clog/ directory of the database cluster.  The
"last modified" timestamps on many of the files were quite
old, so I deleted them.  This didn't have any consequences for
the data (dumps, indexes, etc.) or the operation of
the installation except that, as I found out MUCH later,
I could no longer vacuum the database.

I restored the missing files from a backup and everything
was okay again; after a vacuum of the database the
system cleaned out the older pg_clog/ files on its own.

I hope you get things fixed!

My best regards,

Stefan Becker
Email: [EMAIL PROTECTED]



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[SQL] Strange behaviour SELECT ... LIMIT n FOR UPDATE

2007-11-27 Thread Daniel Caune
Hi,

I'm facing a strange behaviour with a statement SELECT ... LIMIT n FOR
UPDATE in PostgreSQL 8.1.  The number of rows returned is actually (n -
1).  I'm trying to find whether this is an identified issue with
PostgreSQL 8.1 that might have been fixed in a later version such as
8.2; I don't have any problem in moving to a later version if needed.

agoratokens=>  SELECT * FROM "Tokens" INNER JOIN "Tokentypes" ON
"Tokens"."type"="Tokentypes"."type" WHERE "Tokentypes"."tokenName"
='clanName' AND "Tokens"."isLocked" = false limit 2 FOR UPDATE;

 id  | type |          value           | isLocked |         timestamp          | type | tokenName
-----+------+--------------------------+----------+----------------------------+------+-----------
 104 |    2 | RegressionTestClanName13 | f        | 2007-11-27 20:40:25.208074 |    2 | clanName
(1 row)

 
agoratokens=>  SELECT * FROM "Tokens" INNER JOIN "Tokentypes" ON
"Tokens"."type"="Tokentypes"."type" WHERE "Tokentypes"."tokenName"
='clanName' AND "Tokens"."isLocked" = false limit 3 FOR UPDATE;

 id  | type |          value           | isLocked |         timestamp          | type | tokenName
-----+------+--------------------------+----------+----------------------------+------+-----------
 104 |    2 | RegressionTestClanName13 | f        | 2007-11-27 20:40:25.208074 |    2 | clanName
 118 |    2 | RegressionTestClanName28 | f        | 2007-11-21 21:10:29.872352 |    2 | clanName
(2 rows)


If I remove the FOR UPDATE clause, the SELECT ... LIMIT n statement
returns n rows as expected:

agoratokens=>  SELECT * FROM "Tokens" INNER JOIN "Tokentypes" ON
"Tokens"."type"="Tokentypes"."type" WHERE "Tokentypes"."tokenName"
='clanName' AND "Tokens"."isLocked" = false limit 3;

 id  | type |          value           | isLocked |         timestamp          | type | tokenName
-----+------+--------------------------+----------+----------------------------+------+-----------
 104 |    2 | RegressionTestClanName13 | f        | 2007-11-27 20:40:25.208074 |    2 | clanName
  40 |    2 | RegressionTestClanName9  | f        | 2007-10-15 11:27:31.897    |    2 | clanName
 118 |    2 | RegressionTestClanName28 | f        | 2007-11-21 21:10:29.872352 |    2 | clanName
(3 rows)



--
Daniel

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[SQL] NULLIF problem

2007-11-27 Thread Gera Mel Handumon
HELLO,

I encounter an error if I use NULLIF with timestamp with time zone,
e.g. dbtime = nullif(mytime, '')

I want to null the value of the field DBTIME if the variable mytime = " "

DBTIME is of the "timestamp with time zone" datatype

error: column DBTIME is of type timestamp with time zone but expression is
of type text.

What should i do?

Thanks in advance...

-- 
Gera Mel E. Handumon
Application Programmer
PaySoft Solutions, Inc.
-
"Share your knowledge. It's a way to achieve immortality" - Dalai Lama


Re: [SQL] NULLIF problem

2007-11-27 Thread Michael Glaesemann


On Nov 27, 2007, at 21:04 , Gera Mel Handumon wrote:


> I encounter an error if I use NULLIF with timestamp with time zone,
> e.g. dbtime = nullif(mytime, '')
>
> I want to null the value of the field DBTIME if the variable mytime = " "
>
> DBTIME is of the "timestamp with time zone" datatype
>
> error: column DBTIME is of type timestamp with time zone but
> expression is of type text.


I believe the reason is that '' is not a valid timestamp value: think
of it this way:

IF mytime = '' THEN
  mytime := NULL;
END IF;

The first thing it needs to do is compare the mytime value with ''.
As '' is not a valid timestamp value, it may be casting mytime to
text. You'll run into problems if you're assigning a text value to a
timestamp field (which happens after the initial comparison -- and the
cast -- are done).

I think you may need to handle this in your middleware, or handle the
IF/THEN explicitly in a function. Maybe CASE would work:

CASE WHEN mytime = '' THEN NULL
     ELSE CAST(mytime AS TIMESTAMP)
END
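
Another option (a sketch -- the table name and the :mytime placeholder
are made up, with :mytime standing for the text value coming from the
application) is to apply NULLIF while the value is still text and cast
afterwards:

-- NULLIF compares two text values here, so the empty string is legal;
-- the cast then maps NULL (or a valid literal) to the column's type.
UPDATE mytable
   SET dbtime = CAST(NULLIF(:mytime, '') AS timestamp with time zone);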

Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [SQL] Strange behaviour SELECT ... LIMIT n FOR UPDATE

2007-11-27 Thread Tom Lane
"Daniel Caune" <[EMAIL PROTECTED]> writes:
> I'm facing a strange behaviour with a statement SELECT ... LIMIT n FOR
> UPDATE in PostgreSQL 8.1.  The number of rows returned is actually (n -
> 1).  I'm trying to find whether this is an identified issue with
> PostgreSQL 8.1 that might have been fixed in a later version such as
> 8.2; I don't have any problem in moving to a later version if needed.

There's no known issue specifically of that form (and a quick test of
8.1 doesn't reproduce any such behavior).  However, it is known and
documented that LIMIT and FOR UPDATE behave rather oddly together:
the LIMIT is applied first, which means that if FOR UPDATE rejects
any rows as being no longer up-to-date, you get fewer than the expected
number of rows out.  You did not mention any concurrent activity in
your example, but I'm betting there was some ...
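
To illustrate the sequence (the session labels are made up, but row 40
is the one missing from the LIMIT 3 FOR UPDATE result above):

-- Session A
BEGIN;
UPDATE "Tokens" SET "isLocked" = true WHERE id = 40;  -- not committed yet

-- Session B: LIMIT picks its 3 rows first, then FOR UPDATE blocks on row 40
SELECT * FROM "Tokens" INNER JOIN "Tokentypes" ON
"Tokens"."type"="Tokentypes"."type" WHERE "Tokentypes"."tokenName"
='clanName' AND "Tokens"."isLocked" = false limit 3 FOR UPDATE;

-- Session A
COMMIT;

-- Session B unblocks, re-checks row 40 against the WHERE clause, finds
-- isLocked = true now, and drops it -- returning 2 rows instead of 3.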

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [SQL] Strange behaviour SELECT ... LIMIT n FOR UPDATE

2007-11-27 Thread Bruce Momjian
Tom Lane wrote:
> "Daniel Caune" <[EMAIL PROTECTED]> writes:
> > I'm facing a strange behaviour with a statement SELECT ... LIMIT n FOR
> > UPDATE in PostgreSQL 8.1.  The number of rows returned is actually (n -
> > 1).  I'm trying to find whether this is an identified issue with
> > PostgreSQL 8.1 that might have been fixed in a later version such as
> > 8.2; I don't have any problem in moving to a later version if needed.
> 
> There's no known issue specifically of that form (and a quick test of
> 8.1 doesn't reproduce any such behavior).  However, it is known and
> documented that LIMIT and FOR UPDATE behave rather oddly together:
> the LIMIT is applied first, which means that if FOR UPDATE rejects
> any rows as being no longer up-to-date, you get fewer than the expected
> number of rows out.  You did not mention any concurrent activity in
> your example, but I'm betting there was some ...

Current documentation explains why in the SELECT manual page:

    It is possible for a SELECT command using both
    LIMIT and FOR UPDATE/SHARE clauses to return fewer rows than
    specified by LIMIT.  This is because LIMIT is applied first.  The
    command selects the specified number of rows, but might then block
    trying to obtain a lock on one or more of them.  Once the SELECT
    unblocks, the row might have been deleted or updated so that it does
    not meet the query WHERE condition anymore, in which case it will
    not be returned.

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly