Re: [GENERAL] timeofday() and CAST
Mr. Lane, I've been looking at the documentation about time zones (Appendix A, Time Zone Abbreviations) and BRT really is missing from that list. Isn't Brazil important/big enough to have at least the BRT and BRST time zones supported by PostgreSQL? IMHO it may be time for more robust time zone support, because people from other countries are probably hitting the same problem in newer PostgreSQL releases, since timestamp parsing from strings is not as flexible as it was in older releases.

Here is a list of Brazilian time zones:

# Zone  NAME            GMTOFF   RULES   FORMAT  [UNTIL]
#
# Atlantic islands: Fernando de Noronha, Trindade, Martin Vaz,
# Atol das Rocas, and Penedos de Sao Pedro e Sao Paulo
Zone America/Noronha    -2:09:40 -      LMT     1914
                        -2:00   Brazil  FN%sT   1990 Sep 17
                        -2:00   -       FNT
#
# Amapa (AP), east Para (PA)
# East Para includes Belem, Maraba, Serra Norte, and Sao Felix do Xingu.
Zone America/Belem      -3:13:56 -      LMT     1914
                        -3:00   Brazil  BR%sT   1988 Sep 12
                        -3:00   -       BRT
#
# Maranhao (MA), Piaui (PI), Ceara (CE), Rio Grande do Norte (RN),
# Paraiba (PB)
Zone America/Fortaleza  -2:34:00 -      LMT     1914
                        -3:00   Brazil  BR%sT   1990 Sep 17
                        -3:00   -       BRT     1999 Sep 30
                        -3:00   Brazil  BR%sT   2000 Oct 22
                        -3:00   -       BRT     2001 Sep 13
                        -3:00   Brazil  BR%sT
#
# Pernambuco (PE) (except Atlantic islands)
Zone America/Recife     -2:19:36 -      LMT     1914
                        -3:00   Brazil  BR%sT   1990 Sep 17
                        -3:00   -       BRT     1999 Sep 30
                        -3:00   Brazil  BR%sT   2000 Oct 15
                        -3:00   -       BRT     2001 Sep 13
                        -3:00   Brazil  BR%sT
#
# Tocantins (TO)
Zone America/Araguaina  -3:12:48 -      LMT     1914
                        -3:00   Brazil  BR%sT   1990 Sep 17
                        -3:00   -       BRT     1995 Sep 14
                        -3:00   Brazil  BR%sT
#
# Alagoas (AL), Sergipe (SE)
Zone America/Maceio     -2:22:52 -      LMT     1914
                        -3:00   Brazil  BR%sT   1990 Sep 17
                        -3:00   -       BRT     1995 Oct 13
                        -3:00   Brazil  BR%sT   1996 Sep  4
                        -3:00   -       BRT     1999 Sep 30
                        -3:00   Brazil  BR%sT   2000 Oct 22
                        -3:00   -       BRT     2001 Sep 13
                        -3:00   Brazil  BR%sT
#
# Bahia (BA), Goias (GO), Distrito Federal (DF), Minas Gerais (MG),
# Espirito Santo (ES), Rio de Janeiro (RJ), Sao Paulo (SP), Parana (PR),
# Santa Catarina (SC), Rio Grande do Sul (RS)
Zone America/Sao_Paulo  -3:06:28 -      LMT     1914
                        -3:00   Brazil  BR%sT   1963 Oct 23 00:00
                        -3:00   1:00    BRST    1964
                        -3:00   Brazil  BR%sT
#
# Mato Grosso (MT), Mato Grosso do Sul (MS)
Zone America/Cuiaba     -3:44:20 -      LMT     1914
                        -4:00   Brazil  AM%sT
#
# west Para (PA), Rondonia (RO)
# West Para includes Altamira, Oribidos, Prainha, Oriximina, and Santarem.
Zone America/Porto_Velho -4:15:36 -     LMT     1914
                        -4:00   Brazil  AM%sT   1988 Sep 12
                        -4:00   -       AMT
#
# Roraima (RR)
Zone America/Boa_Vista  -4:02:40 -      LMT     1914
                        -4:00   Brazil  AM%sT   1988 Sep 12
                        -4:00   -       AMT     1999 Sep 30
                        -4:00   Brazil  AM%sT   2000 Oct 15
                        -4:00   -       AMT
#
# east Amazonas (AM): Boca do Acre, Jutai, Manaus, Floriano Peixoto
Zone America/Manaus     -4:00:04 -      LMT     1914
                        -4:00   Brazil  AM%sT   1988 Sep 12
                        -4:00   -       AMT     1993 Sep 28
                        -4:00   Brazil  AM%sT   1994 Sep 22
                        -4:00   -       AMT
#
# west Amazonas (AM): Atalaia do Norte, Boca do Maoco, Benjamin Constant,
# Eirunepe, Envira, Ipixuna
Zone America/Eirunepe   -4:39:28 -      LMT     1914
                        -5:00   Brazil  AC%sT   1988 Sep 12
                        -5:00   -       ACT     1993 Sep 28
                        -5:00   Brazil  AC%sT   1994 Sep 22
                        -5:00   -       ACT
#
# Acre (AC)
Zone America/Rio_Branco -4:31:12 -      LMT     1914
                        -5:00   Brazil  AC%sT   1988 Sep 12
                        -5:00   -       ACT

Best Regards

----- Original Message -----
From: "Tom Lane" <[EMAIL PROTECTED]>
To: "Vilson farias" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>; "SIMONE Carla MOSENA" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Thursday, August 21, 2003 11:40 AM
Subject: Re: [GENERAL] timeofday() and CAST

> "Vilson farias" <[EMAIL PROTECTED]> writes:
> > PostgreSQL 7.3.4
>
> > bxs=# SELECT timeofday();
> >              timeofday
> > -------------------------------------
> >  Thu Aug 21 10:04:18.215420 2003 BRT
> > (1 row)
>
> > bxs=# SELECT CAST(timeofday() AS timestamp);
> > ERROR:  Bad timestamp external representation 'Thu Aug 21 10:04:42.597819
> > 2003 BRT'
> > bxs=#
>
> It's unhappy about "BRT", which is not a known timezone name.  Not sure
> why the earlier version didn't complain too.
>
> regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?
http://archives.postgresql.org
[GENERAL] Simulating sequences
Greetings,

I'm having a big performance problem and I would like to ask what the reason might be, but first I need to explain how it happens.

Let's suppose I can't use sequences (it seems impossible, but my boss doesn't like database-specific features like this one). To simulate sequences I created a table called cnfg_key_generation, where each tuple holds the information for one of my tables (table name, current sequential number). Let's check:

CREATE TABLE cnfg_key_generation (
    department   integer     NOT NULL,
    table_name   varchar(20) NOT NULL,
    current_key  integer     NOT NULL,
    CONSTRAINT XPKcnfg_key_generation PRIMARY KEY (department, table_name)
);

For example, for a table called 'my_test' I would have the following values:

department  = 1
table_name  = 'my_test'
current_key = 1432

Every time I want a new key to use as my_test's primary key, I just increment the current_key value. For this job I created a simple stored procedure called key_generation:

CREATE FUNCTION key_generation(integer, varchar(20)) RETURNS integer AS '
DECLARE
    the_department  ALIAS FOR $1;
    the_table_name  ALIAS FOR $2;
    new_key_value   integer;
    err_num         integer;
BEGIN
    new_key_value := 0;

    LOCK TABLE cnfg_key_generation IN ROW EXCLUSIVE MODE;

    SELECT current_key INTO new_key_value
      FROM cnfg_key_generation
     WHERE department = the_department
       AND table_name = the_table_name;

    IF NOT FOUND THEN
        new_key_value := 1;
        INSERT INTO cnfg_key_generation
        VALUES (the_department, the_table_name, new_key_value);
    ELSE
        new_key_value := new_key_value + 1;
        UPDATE cnfg_key_generation
           SET current_key = new_key_value
         WHERE department = the_department
           AND table_name = the_table_name;
    END IF;

    RETURN new_key_value;
END;
' LANGUAGE 'plpgsql';

Data insertion is then done this way:

INSERT INTO my_test
VALUES (key_generation(1, 'my_test'), ...other fields...);

OK, this works, but it has a huge performance problem. I read in the PostgreSQL docs that every time I run an UPDATE, a new tuple is created and the old one is marked as invalid. For a tuple that holds sequential key generation, every time a new key is generated a new tuple version is created inside cnfg_key_generation. That means after 2 million key generations for the same table, performance is completely degraded, because there are 2 million old versions of the same tuple.

For instance, I have a table called 'cham_chamada' that currently holds 1.5 million tuples. The response time of a key_generation execution for this table is more than 5 seconds. In the same situation, if I execute key_generation for a table that has just a few values (current_key = 5, for example), the response time is only a few milliseconds (30 to 50 ms).

I tried to fix this problem with a VACUUM and it was completely ineffective: after execution the problem was still there. Later, after running every kind of vacuum I know (with and without ANALYZE, specific to that table, vacuumdb from the shell, ...), I gave up and tried a COPY TO / DROP / CREATE / COPY FROM. This time the performance problem was fixed.

What can I do to solve this problem without rebuilding the table? Is this a performance degradation in PostgreSQL that can't be solved? If I have a huge table with millions of rows being updated constantly, what can I do to keep good performance if vacuum isn't 100% effective?

Do PostgreSQL sequences deal with these kinds of performance questions? Is it clean, fast and secure to use sequences? Maybe I can still change my boss's mind about this subject if I get good arguments here to justify the use of sequences.

Am I doing something stupid?

Best regards,

José Vilson de Mello de Farias
Software Engineer
Dígitro Tecnologia Ltda - www.digitro.com.br
APC - Customer Oriented Applications
E-mail: [EMAIL PROTECTED]
Tel.: +55 48 281 7158
ICQ 11866179
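For comparison, the native feature being simulated above is nearly a one-liner. A hedged sketch (the sequence name is illustrative) of what the sequence-based key generation for my_test would look like:

```sql
-- Native PostgreSQL sequences: each nextval() call bumps an internal
-- counter in place, so it never accumulates dead tuple versions the way
-- an UPDATE on a counter table does, and it needs no LOCK TABLE.
CREATE SEQUENCE my_test_seq;

-- assuming my_test's first column is the integer primary key
INSERT INTO my_test VALUES (nextval('my_test_seq') /* , ...other fields... */);

-- the value just generated, visible to the current session
SELECT currval('my_test_seq');
```

Because nextval() is non-transactional, concurrent sessions never block each other on key generation; the trade-off is that rolled-back transactions leave gaps in the key space.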
[GENERAL] Lost indexes
Greetings,

I've been fighting against a very strange behaviour found in PostgreSQL 7.1.2 on Red Hat 6.2. I have a very simple table called site_site and I lose its indexes every time I run a vacuum. Do you know why this happens? Is there a way to get around or fix this kind of problem? I've included the full sequence of steps I used to make it happen.

CREATE TABLE site_site (
    id_site  int4        NOT NULL,
    nome     varchar(30) NOT NULL,
    CONSTRAINT XPKsite_site PRIMARY KEY (id_site)
);

CREATE UNIQUE INDEX XAK1site_site ON site_site (nome);

[EMAIL PROTECTED] /home]# psql -U postgres gravador

gravador=# \d site_site
          Table "site_site"
 Attribute |         Type          | Modifier
-----------+-----------------------+----------
 id_site   | integer               | not null
 nome      | character varying(30) | not null
Indices: xak1site_site,
         xpksite_site

gravador=# select * from site_site;
 id_site | nome
---------+------
(0 rows)

gravador=# explain select * from site_site where id_site = 1;
NOTICE:  QUERY PLAN:

Index Scan using xpksite_site on site_site  (cost=0.00..8.14 rows=10 width=16)

EXPLAIN
gravador=# explain select * from site_site where nome = 'Vilson';
NOTICE:  QUERY PLAN:

Index Scan using xak1site_site on site_site  (cost=0.00..8.14 rows=10 width=16)

EXPLAIN
gravador=# VACUUM VERBOSE ANALYZE site_site;
NOTICE:  --Relation site_site--
NOTICE:  Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. CPU 0.00s/0.00u sec.
NOTICE:  Index xpksite_site: Pages 1; Tuples 0. CPU 0.00s/0.00u sec.
NOTICE:  Index xak1site_site: Pages 1; Tuples 0. CPU 0.00s/0.00u sec.
NOTICE:  Analyzing...
VACUUM
gravador=# explain select * from site_site where id_site = 1;
NOTICE:  QUERY PLAN:

Seq Scan on site_site  (cost=0.00..0.00 rows=1 width=16)

EXPLAIN
gravador=# explain select * from site_site where nome = 'Vilson';
NOTICE:  QUERY PLAN:

Seq Scan on site_site  (cost=0.00..0.00 rows=1 width=16)

EXPLAIN

Here we go again. This time there is data inside the table...

gravador=# insert into site_site values (1, 'Vilson');
INSERT 22798 1
gravador=# explain select * from site_site where id_site = 1;
NOTICE:  QUERY PLAN:

Index Scan using xpksite_site on site_site  (cost=0.00..8.14 rows=10 width=16)

EXPLAIN
gravador=# explain select * from site_site where nome = 'Vilson';
NOTICE:  QUERY PLAN:

Index Scan using xak1site_site on site_site  (cost=0.00..8.14 rows=10 width=16)

EXPLAIN
gravador=# VACUUM VERBOSE ANALYZE site_site;
NOTICE:  --Relation site_site--
NOTICE:  Pages 1: Changed 1, reaped 0, Empty 0, New 0; Tup 1: Vac 0, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 46, MaxLen 46; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. CPU 0.00s/0.00u sec.
NOTICE:  Index xpksite_site: Pages 2; Tuples 1. CPU 0.00s/0.01u sec.
NOTICE:  Index xak1site_site: Pages 2; Tuples 1. CPU 0.00s/0.00u sec.
NOTICE:  Analyzing...
VACUUM
gravador=# explain select * from site_site where id_site = 1;
NOTICE:  QUERY PLAN:

Seq Scan on site_site  (cost=0.00..1.01 rows=1 width=16)

EXPLAIN
gravador=# explain select * from site_site where nome = 'Vilson';
NOTICE:  QUERY PLAN:

Seq Scan on site_site  (cost=0.00..1.01 rows=1 width=16)

EXPLAIN

Thanks in advance!

José Vilson de Mello de Farias
Software Engineer
Dígitro Tecnologia Ltda - www.digitro.com.br
APC - Customer Oriented Applications
E-mail: [EMAIL PROTECTED]
Tel.: +55 48 281 7158
ICQ 11866179
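The indexes are not actually lost after the VACUUM; once ANALYZE has recorded that the table holds zero or one rows, the planner simply decides a sequential scan is cheaper than an index scan. A hedged sketch of how to watch the index scan come back once the table has real volume (generate_series() does not exist in 7.1; it is used here only to keep the illustration short):

```sql
-- Load enough rows that a selective lookup becomes cheaper via the index.
INSERT INTO site_site
SELECT i, 'nome_' || i FROM generate_series(1, 100000) AS g(i);

-- Refresh the planner's statistics...
VACUUM ANALYZE site_site;

-- ...and the same query should now be planned as an index scan again.
EXPLAIN SELECT * FROM site_site WHERE id_site = 1;
```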
[GENERAL] slow line insert
Greetings,

I have a table with only three tuples. I've been using it as a primary key container for other tables, like PostgreSQL's sequences. I'm not using sequences because my application was originally created for MS SQL Server 6.5.

My problem is very specific. I have a stored procedure for primary key generation; it checks whether there is a tuple for a given table and increments the count if it exists, otherwise it creates a new tuple beginning with 1.

Here is my table definition:

bxs=# \d cnfg_gerachave
        Table "cnfg_gerachave"
   Attribute   |    Type     | Modifier
---------------+-------------+----------
 cod_cad       | integer     | not null   (it's like a department - always 1 here)
 cod_gerachave | varchar(20) | not null   (table name)
 valor         | integer     | not null   (first available primary key value for the cod_gerachave table)
Index: xpkcnfg_gerachave

Here are the tuples (all of them):

bxs=# select * from cnfg_gerachave;
 cod_cad | cod_gerachave  | valor
---------+----------------+--------
       1 | rel__impressao |     10
       1 | rel__relatorio |    167
       1 | serv_acaoserv  | 154406
(4 rows)

Here is the stored procedure which updates this table:

CREATE FUNCTION gerachave(INT4, VARCHAR(20)) RETURNS INT4 AS '
DECLARE
    cad        ALIAS FOR $1;
    tabela     ALIAS FOR $2;
    novovalor  INT4;
    err_num    INT4;
BEGIN
    novovalor := 0;

    LOCK TABLE cnfg_gerachave IN ROW EXCLUSIVE MODE;

    SELECT valor INTO novovalor
      FROM cnfg_gerachave
     WHERE cod_cad = cad
       AND cod_gerachave = tabela;

    IF NOT FOUND THEN
        novovalor := 1;
        INSERT INTO cnfg_gerachave VALUES (cad, tabela, novovalor);
    ELSE
        novovalor := novovalor + 1;
        UPDATE cnfg_gerachave
           SET valor = novovalor
         WHERE cod_cad = cad
           AND cod_gerachave = tabela;
    END IF;

    RETURN novovalor;
END;
' LANGUAGE 'plpgsql';

The problem is related to the "1 | serv_acaoserv | 154406" tuple, which takes 5-10 seconds to be updated, whether it's changed from this stored procedure or from a simple UPDATE statement. All the other tuples are fine; their update time is about 10-30 ms. The first time I noticed this, updates were taking about 3-5 seconds.

I tried a lot of things to solve the problem. The last one was recreating the table. After the table was recreated, the update time for that tuple went back to 50 ms, but I noticed the time was slowly growing with each update. Now the update time is about 5-10 seconds again.

I tested a single

UPDATE cnfg_gerachave SET valor = 154406
 WHERE cod_cad = 1 AND cod_gerachave = 'serv_acaoserv';

and noticed the problem happens outside the stored procedure too. I have not tried a vacuumdb yet.

Does anyone know what could be happening here? How can a single row have performance so much worse than the others? Could LOCK TABLE cnfg_gerachave IN ROW EXCLUSIVE MODE affect performance? I would also like to know how this lock is released: is it when the stored procedure terminates, or at commit? And what if I'm not inside a transaction? I searched the documentation and could not find answers to these questions.

Best regards,

José Vilson de Mello de Farias
Dígitro Tecnologia Ltda - Brazil
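Since that one counter row is updated far more often than the others, it drags a long chain of dead tuple versions behind it, and every lookup has to walk past them. A sketch of the routine maintenance that keeps the chain short in the 7.x series (how often to run it is an assumption, not a measured value; between batches of updates is a reasonable start):

```sql
-- Reclaim the dead versions of the hot counter row.  Each UPDATE leaves one
-- dead tuple behind, so lookup cost grows with the backlog until a vacuum.
VACUUM ANALYZE cnfg_gerachave;

-- A quick way to watch the bloat: a 3-row table should stay at 1 page.
SELECT relpages, reltuples FROM pg_class WHERE relname = 'cnfg_gerachave';
```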
[GENERAL] Re: Still getting problems with -cache lookup for userid 26 failed- (PART 2)
Sorry, I do have a pg_shadow file, but it's empty. I have a pg_pwd too, with this text inside:

[postgres@bxsgalena data]$ cat pg_pwd
postgres0 x x x x 1 \N

What is it used for? I couldn't find info about this file in the HTML docs. I have another PostgreSQL installation here, and on that machine (working very well) pg_pwd is empty.

Regards.

> How do you mean, Tom?
>
> > "Vilson farias" <[EMAIL PROTECTED]> writes:
> > > Does anyone know what is this error?
> > > ERROR:  cache lookup for userid 26 failed
> >
> > Evidently pg_shadow has no entry with usesysid 26.  Add it back...
> >
> > regards, tom lane
>
> I don't have a pg_shadow file, I use a passwd file instead. Inside passwd there is only one line:
>
> postgres:LtOj6bwu7TF7k
>
> My pg_hba.conf:
>
> local all                                           trust
> host  all 127.0.0.1     255.255.255.255             trust
> host  all 192.168.160.0 255.255.255.0      password passwd
> host  all 192.168.162.0 255.255.255.0      password passwd
>
> My /etc/passwd:
>
> postgres:x:26:26:PostgreSQL Server:/home/postgres:/bin/bash
>
> My /etc/group:
>
> postgres:x:26:
>
> And if this is a passwd (pg_shadow) error, why can I access some tables and others not?
>
> persona=> select * from pg_class;
>       relname        | reltype | relowner | relam | relpages | reltuples | rellongrelid | relhasindex | relisshared | relkind | relnatts | relchecks | reltriggers | relukeys | relfkeys | relrefs | relhaspkey | relhasrules | relacl
> ---------------------+---------+----------+-------+----------+-----------+--------------+-------------+-------------+---------+----------+-----------+-------------+----------+----------+---------+------------+-------------+--------
>  pg_type             |      71 |       26 |     0 |        2 |       122 |            0 | t | f | r |      16 | 0 | 0 | 0 | 0 | 0 | f | f |
>  pg_attribute        |      75 |       26 |     0 |        6 |       461 |            0 | t | f | r |      15 | 0 | 0 | 0 | 0 | 0 | f | f |
>  pg_proc             |      81 |       26 |     0 |       26 |      1083 |            0 | t | f | r |      16 | 0 | 0 | 0 | 0 | 0 | f | f |
>  tele_fidel_hierarq  |       0 |       26 |     0 |      176 |      6592 |            0 | t | f | r |       7 | 0 | 1 | 0 | 0 | 0 | f | f |
>  tipo_atendimento    |       0 |       26 |     0 |       10 |      1000 |            0 | t | f | r |       3 | 0 | 2 | 0 | 0 | 0 | f | f |
> (69 rows)
>
> Regards,
>
> José Vilson de Mello de Farias
> Dígitro Tecnologia Ltda - Brazil
>
> ----- Original Message -----
> From: Tom Lane <[EMAIL PROTECTED]>
> To: Vilson farias <[EMAIL PROTECTED]>
> Cc: <[EMAIL PROTECTED]>
> Sent: Monday, 18 June 2001 16:57
> Subject: Re: [GENERAL] ERROR: cache lookup for userid 26 failed
[GENERAL] Still getting problems with -cache lookup for userid 26 failed-
How do you mean, Tom?

> "Vilson farias" <[EMAIL PROTECTED]> writes:
> > Does anyone know what is this error?
> > ERROR:  cache lookup for userid 26 failed
>
> Evidently pg_shadow has no entry with usesysid 26.  Add it back...
>
> regards, tom lane

I don't have a pg_shadow file, I use a passwd file instead. Inside passwd there is only one line:

postgres:LtOj6bwu7TF7k

My pg_hba.conf:

local all                                           trust
host  all 127.0.0.1     255.255.255.255             trust
host  all 192.168.160.0 255.255.255.0      password passwd
host  all 192.168.162.0 255.255.255.0      password passwd

My /etc/passwd:

postgres:x:26:26:PostgreSQL Server:/home/postgres:/bin/bash

My /etc/group:

postgres:x:26:

And if this is a passwd (pg_shadow) error, why can I access some tables and others not?

persona=> select * from pg_class;
      relname        | reltype | relowner | relam | relpages | reltuples | rellongrelid | relhasindex | relisshared | relkind | relnatts | relchecks | reltriggers | relukeys | relfkeys | relrefs | relhaspkey | relhasrules | relacl
---------------------+---------+----------+-------+----------+-----------+--------------+-------------+-------------+---------+----------+-----------+-------------+----------+----------+---------+------------+-------------+--------
 pg_type             |      71 |       26 |     0 |        2 |       122 |            0 | t | f | r |      16 | 0 | 0 | 0 | 0 | 0 | f | f |
 pg_attribute        |      75 |       26 |     0 |        6 |       461 |            0 | t | f | r |      15 | 0 | 0 | 0 | 0 | 0 | f | f |
 pg_proc             |      81 |       26 |     0 |       26 |      1083 |            0 | t | f | r |      16 | 0 | 0 | 0 | 0 | 0 | f | f |
 tele_fidel_hierarq  |       0 |       26 |     0 |      176 |      6592 |            0 | t | f | r |       7 | 0 | 1 | 0 | 0 | 0 | f | f |
 tipo_atendimento    |       0 |       26 |     0 |       10 |      1000 |            0 | t | f | r |       3 | 0 | 2 | 0 | 0 | 0 | f | f |
(69 rows)

Regards,

José Vilson de Mello de Farias
Dígitro Tecnologia Ltda - Brazil

----- Original Message -----
From: Tom Lane <[EMAIL PROTECTED]>
To: Vilson farias <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Monday, 18 June 2001 16:57
Subject: Re: [GENERAL] ERROR: cache lookup for userid 26 failed
[GENERAL] ERROR: cache lookup for userid 26 failed
Greetings,

Does anyone know what this error is?

ERROR:  cache lookup for userid 26 failed

Some system tables are inaccessible in my database, but I can access others. I tried to check pg_tables, but it's blocked:

persona=> select * from pg_tables;
ERROR:  cache lookup for userid 26 failed
persona=>

pg_class tells me that I (the postgres user) am the owner of the tables, but I always get this message. What can be done to kick it out?

persona=> select * from pg_class;
      relname        | reltype | relowner | relam | relpages | reltuples | rellongrelid | relhasindex | relisshared | relkind | relnatts | relchecks | reltriggers | relukeys | relfkeys | relrefs | relhaspkey | relhasrules | relacl
---------------------+---------+----------+-------+----------+-----------+--------------+-------------+-------------+---------+----------+-----------+-------------+----------+----------+---------+------------+-------------+--------
 pg_type             |      71 |       26 |     0 |        2 |       122 |            0 | t | f | r |      16 | 0 | 0 | 0 | 0 | 0 | f | f |
 pg_attribute        |      75 |       26 |     0 |        6 |       461 |            0 | t | f | r |      15 | 0 | 0 | 0 | 0 | 0 | f | f |
 pg_proc             |      81 |       26 |     0 |       26 |      1083 |            0 | t | f | r |      16 | 0 | 0 | 0 | 0 | 0 | f | f |
 tele_fidel_hierarq  |       0 |       26 |     0 |      176 |      6592 |            0 | t | f | r |       7 | 0 | 1 | 0 | 0 | 0 | f | f |
 tipo_atendimento    |       0 |       26 |     0 |       10 |      1000 |            0 | t | f | r |       3 | 0 | 2 | 0 | 0 | 0 | f | f |
(69 rows)

My Linux users are OK; postgres has 26 as its user id:

[postgres@bxsgalena postgres]$ cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
...
postgres:x:26:26:PostgreSQL Server:/home/postgres:/bin/bash
[postgres@bxsgalena postgres]$ cat /etc/group
root:x:0:root
...
postgres:x:26:

The database was working very well, but I don't know what was done to it, because another guy did the last maintenance.

That's all for now. Thanks in advance and my best regards!
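Following Tom's "add it back" advice in the follow-up thread, a hedged sketch of recreating the lost pg_shadow entry: the 7.x CREATE USER statement accepts an explicit SYSID, which is what lets the existing pg_class.relowner = 26 references resolve again. The password and privilege flags below are illustrative, not recovered values:

```sql
-- Recreate a user whose pg_shadow row disappeared, reusing the old sysid 26
-- so that catalog rows owned by user 26 become readable again.
CREATE USER postgres WITH SYSID 26 PASSWORD 'secret' CREATEDB CREATEUSER;
```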
Re: [GENERAL] very slow execution of stored procedures
bxs=# select count(*) from cham_chamada;
 count
--------
 145978
(1 row)

--

I found a solution that uses an index scan. I reworked some parameters; check it out:

CREATE FUNCTION finaliza_chamv2(INT4, INT4, INT4, TIMESTAMP, TIMESTAMP, CHAR(1),
                                INT4, INT4, INT4, CHAR(23), INT4, INT4,
                                CHAR(25), INT4, INT4, INT4) RETURNS int4 AS '
DECLARE
    pbxs           ALIAS FOR $1;
    pchave         ALIAS FOR $2;
    pidentificacao ALIAS FOR $3;
    pdtinicial     ALIAS FOR $4;
    pdtfinal       ALIAS FOR $5;
    pflgliber      ALIAS FOR $6;
    ptempototal    ALIAS FOR $7;
    pcodliber      ALIAS FOR $8;
    pddd           ALIAS FOR $9;
    pdtocup        ALIAS FOR $10;
    pindicadora    ALIAS FOR $11;
    pcategoria     ALIAS FOR $12;
    pidentidadea   ALIAS FOR $13;
    pfds           ALIAS FOR $14;
    presultcham    ALIAS FOR $15;
    pcifraorigem   ALIAS FOR $16;
BEGIN
    UPDATE cham_chamada
       SET dt_final        = pdtfinal,
           flg_liberacao   = pflgliber,
           temp_total      = ptempototal,
           cod_liberjuntor = pcodliber,
           ddd             = pddd,
           indicadora      = pindicadora,
           cod_categoria   = pcategoria,
           identidadea     = pidentidadea,
           cod_fds         = pfds,
           cod_resultcham  = presultcham,
           cifra_origem    = pcifraorigem
     WHERE cod_bxs       = pbxs
       AND chave         = pchave
       AND identificacao = pidentificacao
       AND dt_inicial    = pdtinicial;

    IF pdtocup <> '''' THEN
        UPDATE cham_servico
           SET dt_ocupacao = pdtocup
         WHERE cod_bxs        = pbxs
           AND chave          = pchave
           AND identificacao  = pidentificacao
           AND dt_inicial     = pdtinicial
           AND dt_finalizacao IS NULL;
    END IF;

    RETURN 0;
END;
' LANGUAGE 'plpgsql';

Now execution time is <1s. OK, but I would really like to know what's happening with the older version.

Best Regards,

José Vilson de Mello de Farias
Dígitro Tecnologia Ltda. - Brazil

----- Original Message -----
From: Richard Huxton <[EMAIL PROTECTED]>
To: Vilson farias <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Friday, 20 April 2001 06:26
Subject: Re: [GENERAL] very slow execution of stored procedures

: From: "Vilson farias" <[EMAIL PROTECTED]>
:
: > Greetings,
: >
: > I found something very weird related to stored procedure execution. I
: > have this stored procedure to finalize a phone call, writing the time of
: > call finalization and some other values to a calls table, called
: > cham_chamada. Please check this out (very simple):
:
: Nobody else seems to have answered this yet, so I'll have a stab.
:
: > --
: >
: > CREATE FUNCTION finaliza_chamv2(INT4, INT4, INT4, CHAR(23), CHAR(23),
: > CHAR(1),
: >                                 INT4, INT4, INT4, CHAR(23), INT4, INT4,
: >                                 CHAR(25),
: >                                 INT4, INT4, INT4) RETURNS int4 AS
:
: [snipped simple update function]
:
: > If I change all variables to the parameter values inside the stored
: > procedure and then execute the first script, it is very fast, check out
: >
: > execution time : <1ms
:
: > now it's time to do the same thing using the stored procedure
:
: > execution time : about 5s
: >
: > Is it supposed to execute with different speed? What can I do to fix it?
: >
: > I'm using the postgres RPM 7.0.3-2 on RedHat 6.2.
:
: Well - there are some differences - if I understand correctly, the parser is
: smarter about things when constants are explicitly specified (like in the
: quick example).
:
: I'm assuming your table is large and what is happening is that the function
: is not using indexes. The most likely reason I can see is the timestamp()
: calls in the code.
:
: If you do:
:
: select proname, proiscachable from pg_proc where proname = 'timestamp';
:
: you'll see that the conversion functions are marked not cachable, so that
: would probably discourage the use of the index on the timestamp fields.
:
: Use another variable to hold the converted timestamp value and see if that
: helps. If you still don't get an improvement, try passing in the values as
: timestamps rather than text.
:
: If that still doesn't help, try:
:
: SET ENABLE_SEQSCAN = OFF;
:
: before calling the function and see what that does.
:
: If you are still having problems, can you supply the output of EXPLAIN for
: the fast version.
:
: > ps: There are some specific procedures I needed to execute before I got
: > pl/pgsql working
: >
: > CREATE FUNCTION plpgsql_call_handler ()
: > RETURNS opaque AS '/usr/lib/pgsql/plpgsql.so'
: > LANGUAGE 'C';
: >
: > CREATE PROCEDURAL LANGUAGE 'plpgsql'
: > HANDLER plpgsql_call_handler
: > LANCOMPILER 'PL/PgSql internal';
:
: A simpler method is to do:
:
: createlang plpgsql mydatabase
:
: from the command-line.
:
: HTH
:
: - Richard Huxton
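Richard's first suggestion, converting the text argument once into a local variable so the query compares a plain timestamp instead of converting on every evaluation, could be sketched as below. The function name and the reduced argument list are illustrative only, meant to show the shape of the change against the original CHAR(23)-based signature:

```sql
-- Sketch: convert the CHAR(23) argument exactly once, then let the UPDATE's
-- WHERE clause compare dt_inicial against a plain timestamp variable.
CREATE FUNCTION finaliza_sketch(INT4, CHAR(23)) RETURNS INT4 AS '
DECLARE
    pbxs       ALIAS FOR $1;
    pdtinicial ALIAS FOR $2;
    ts_inicial TIMESTAMP;
BEGIN
    ts_inicial := pdtinicial;   -- the single conversion point

    UPDATE cham_chamada
       SET flg_liberacao = ''S''   -- illustrative column value
     WHERE cod_bxs    = pbxs
       AND dt_inicial = ts_inicial;

    RETURN 0;
END;
' LANGUAGE 'plpgsql';
```

Passing the parameters as TIMESTAMP in the first place, as the reworked finaliza_chamv2 above does, removes even that single conversion.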
Re: [GENERAL] enable pl/pgsql in postgres from postgres-server-7.0.3-2 RPM
My RPM install didn't set up pl/pgsql either... I needed to run this: execute psql template1, then type these two statements:

CREATE FUNCTION plpgsql_call_handler ()
RETURNS opaque AS '/usr/lib/pgsql/plpgsql.so'
LANGUAGE 'C';

CREATE PROCEDURAL LANGUAGE 'plpgsql'
HANDLER plpgsql_call_handler
LANCOMPILER 'PL/PgSql internal';

----- Original Message -----
From: Marc Wrubleski <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, 28 March 2001 17:55
Subject: [GENERAL] enable pl/pgsql in postgres from postgres-server-7.0.3-2 RPM

: I installed Postgres from the 7.0.3-2 RPM on the Redhat site, and it
: works well, and the plpgsql.so library exists, but how do I enable this
: procedural language?
:
: I know how to do it from source, but my customer demands I install from
: RPM.
:
: Thanks in advance.
:
: Marc Wrubleski
Re: [GENERAL] VARCHAR and TEXT
I gave up on using ODBC... too many problems with blobs and other stuff... and really very low speed under Delphi. Try the Zeos Database Components for PostgreSQL (http://www.marms.com/zeos). I've been using them for 6 months and they're great. Blobs greater than 8KB are still a big problem, but you will see this component can recognize text fields as Memo and OIDs as Binary Blobs. A really good point of these components is that you don't need the BDE or ODBC (everything is done through a single DLL). My apps are very small and installation is a piece of cake.

My config: Zeos 3.0.8 under Delphi 5 Enterprise.

Regards,

José Vilson de Mello de Farias
Dígitro Tecnologia Ltda - Brazil

----- Original Message -----
From: Paolo Sinigaglia <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, 29 March 2001 11:53
Subject: [GENERAL] VARCHAR and TEXT

: Hi everybody,
: I have a little question about string types in PostgreSQL: it seems I haven't
: understood well how string data are stored in the database. From a recent
: post by Tom Lane I feel encouraged to think that VARCHAR and TEXT are
: treated the same way by PG, i.e. they occupy only the space needed by their
: actual length, but I'm not so sure of it.
:
: I ask this because I'm trying to use PG as the database server in a project
: developed in Delphi under MS Windows, and it seems to me that varchar and text
: fields are not treated the same way by the ODBC interface and/or the BDE (Borland
: Database Engine, the database interface layer used by Delphi programs). When
: I connect to a table, the fields of type VARCHAR are seen as strings while
: the fields of type TEXT are seen as blobs, and this is a little upsetting,
: because some useful functions are not implemented for blob fields.
:
: I have several tables with fields that could contain quite lengthy strings,
: but usually don't. So I don't want to define these fields as VARCHAR(400)
: if this means a disk occupation of 400 bytes or so for each row (I have some
: 50 rows in a table and the average length of the field in question is
: about 30, but the max length is near 400).
:
: Can someone clarify this point?
:
: Thanks in advance
:
: ___PS
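On the storage side of Paolo's question, both types behave the way Tom's post suggested. A hedged sketch to make the point observable (pg_column_size() comes from much later releases and does not exist in 7.x, but the underlying storage rule is the same):

```sql
-- Both varchar(n) and text are variable-length ("varlena") types: on disk a
-- value occupies its actual length plus a small length header, so a
-- varchar(400) column holding a ~30-character string does NOT consume 400 bytes.
CREATE TABLE len_demo (
    v varchar(400),
    t text
);

INSERT INTO len_demo VALUES ('thirty characters, more or less',
                             'thirty characters, more or less');

-- On a modern server both columns report the same small on-disk size.
SELECT pg_column_size(v) AS varchar_bytes,
       pg_column_size(t) AS text_bytes
  FROM len_demo;
```

The n in varchar(n) is purely a length constraint, not a storage reservation; the blob-vs-string distinction Paolo sees comes from the ODBC/BDE layer, not from the server.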
[GENERAL] indexes not working very well
Greetings,

I have a little problem here and need some help. I created a table where indexes are not working very well. Please take a look at the code below (it's easier). Two databases have the same problem: 7.0.2 and 7.0.3, both on Linux Red Hat 6.2. Am I doing something wrong?

bxs=# CREATE TABLE hora_minuto(
bxs(#     hora        char(5),
bxs(#     hora_minuto TIME,
bxs(#     CONSTRAINT XPKhora_minuto PRIMARY KEY (hora, hora_minuto)
bxs(# );
NOTICE:  CREATE TABLE/PRIMARY KEY will create implicit index 'xpkhora_minuto' for table 'hora_minuto'
CREATE
bxs=# \d hora_minuto
     Table "hora_minuto"
  Attribute  |  Type   | Modifier
-------------+---------+----------
 hora        | char(5) | not null
 hora_minuto | time    | not null
Index: xpkhora_minuto

bxs=# INSERT INTO hora_minuto VALUES( '', '13:38:00');
ERROR:  parser: parse error at or near "]"
bxs=# INSERT INTO hora_minuto VALUES( '', '13:39:00');
INSERT 2675143 1
bxs=# INSERT INTO hora_minuto VALUES( '', '13:40:00');
INSERT 2675144 1
bxs=# INSERT INTO hora_minuto VALUES( '', '13:41:00');
INSERT 2675145 1
bxs=# INSERT INTO hora_minuto VALUES( '', '13:42:00');
INSERT 2675146 1
bxs=# INSERT INTO hora_minuto VALUES( '', '13:43:00');
INSERT 2675147 1
bxs=# INSERT INTO hora_minuto VALUES( '', '13:44:00');
INSERT 2675148 1
bxs=# INSERT INTO hora_minuto VALUES( '', '13:45:00');
INSERT 2675149 1
bxs=# INSERT INTO hora_minuto VALUES( '', '13:46:00');
INSERT 2675150 1
bxs=# EXPLAIN SELECT hora_minuto FROM hora_minuto WHERE hora_minuto > '13:43:00';
NOTICE:  QUERY PLAN:

Seq Scan on hora_minuto  (cost=0.00..22.50 rows=333 width=8)

EXPLAIN

Regards,

José Vilson de Mello de Farias
Dígitro Tecnologia Ltda.
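Two things work against an index scan in the session above: the only index is the composite primary key on (hora, hora_minuto), and a btree index like that cannot serve a query whose WHERE clause omits the leading column; on top of that, with only a handful of rows a sequential scan is the cheaper plan anyway. A hedged sketch of the usual fix (the index name is illustrative):

```sql
-- Give range predicates on hora_minuto an index of their own; the composite
-- primary key (hora, hora_minuto) only helps queries that also constrain
-- the leading column hora.
CREATE INDEX xie1hora_minuto ON hora_minuto (hora_minuto);

-- Refresh statistics; with enough rows in the table, EXPLAIN should then
-- show an index scan for the range query on hora_minuto.
VACUUM ANALYZE hora_minuto;
EXPLAIN SELECT hora_minuto FROM hora_minuto WHERE hora_minuto > '13:43:00';
```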
Re: [GENERAL] Cant connect if -B 1024 was set to postmaster
Great! Now it's OK. Thanks.

----- Original Message -----
From: Poul L. Christiansen <[EMAIL PROTECTED]>
To: Vilson farias <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>; SIMONE Carla MOSENA <[EMAIL PROTECTED]>
Sent: Monday, 19 March 2001 11:23
Subject: Re: [GENERAL] Cant connect if -B 1024 was set to postmaster

: I remember having this problem. You need to pass the '-i' switch.
: Try: '-B 1024 -i'
:
: I don't know if this behavior is an error or not.
:
: Poul L. Christiansen
:
: On Mon, 19 Mar 2001, Vilson farias wrote:
:
: > Greetings,
: >
: > There is a very weird problem here.
: >
: > I changed my /etc/rc.d/init.d/postgresql. The postmaster line is now:
: >
: > su -l postgres -c "/usr/bin/pg_ctl -o '-B 1024' -D $PGDATA -p
: > /usr/bin/postmaster start >/dev/null 2>&1" < /dev/null
: >
: > That means 8MB (1024 buffers of 8KB) are now available for postgres
: > processes. OK, but if I set this '-B 1024', all my non-local connections
: > stop working. If I simply remove the -B option, everything works
: > again. Besides, there are lots of -B settings that simply don't work here,
: > like -B 2048, -B 4096... when I try to call /etc/rc.d/init.d/postgresql
: > start, it checks the installation (ok), but the postmaster fails.
: >
: > Do you know why this is happening?
: >
: > I'm using RedHat 6.2, AMD K6 400, 256MB RAM with PostgreSQL 7.0.2,
: > and a Pentium 75, 32MB RAM, RedHat 6.2 with PostgreSQL 7.0.3, and both
: > have the same problem.
: >
: > Regards,
: >
: > José Vilson de Mello de Farias
[GENERAL] Can't connect if -B 1024 was set to postmaster
Greetings, there is a very weird problem here. I changed my /etc/rc.d/init.d/postgresql. The postmaster line now is:

su -l postgres -c "/usr/bin/pg_ctl -o '-B 1024' -D $PGDATA -p /usr/bin/postmaster start >/dev/null 2>&1" < /dev/null

That means 8MB (1024 buffers of 8KB) are now available for postgres processes. OK, but if I set this '-B 1024', all my non-local connections stop working. If I simply remove the -B option, everything works again. Besides, there are lots of -B settings that simply don't work here, like -B 2048 and -B 4096: when I call /etc/rc.d/init.d/postgresql start, it checks the installation (OK), but the postmaster fails.

Do you know why this is happening?

I'm using Red Hat 6.2, AMD K6 400, 256MB RAM with PostgreSQL 7.0.2, and a Pentium 75, 32MB RAM, Red Hat 6.2 with PostgreSQL 7.0.3; both have the same problem.

Regards,

José Vilson de Mello de Farias
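For what it's worth, postmaster failures at larger -B values are usually a kernel shared-memory limit rather than a PostgreSQL bug: the postmaster allocates roughly buffers × block size bytes of System V shared memory at startup, and Linux caps a single segment at SHMMAX. A rough back-of-the-envelope check (the proc path is Linux-specific, and 8KB is the default block size; both are assumptions about this setup):

```shell
# Sketch: how much shared memory does -B 1024 ask for, and what does
# the kernel allow for one segment?
BUFFERS=1024
BLOCK_KB=8
REQ_BYTES=$((BUFFERS * BLOCK_KB * 1024))
echo "-B $BUFFERS requests roughly $REQ_BYTES bytes of shared memory"
# Compare against the kernel limit, if the proc file exists:
cat /proc/sys/kernel/shmmax 2>/dev/null || true
```

If the requested size exceeds SHMMAX, raising the kernel limit (e.g. via sysctl) is the usual remedy; the '-i' switch is a separate matter, enabling TCP/IP connections at all.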
Re: [GENERAL] Why my vacuum fails?
----- Original Message -----
From: Tom Lane <[EMAIL PROTECTED]>
To: Vilson farias <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>; SIMONE Carla MOSENA <[EMAIL PROTECTED]>
Sent: Thursday, March 15, 2001 18:08
Subject: Re: [GENERAL] Why my vacuum fails?

: "Vilson farias" <[EMAIL PROTECTED]> writes:
: > The main problem appears to be related to the "Cannot insert a duplicate
: > key into unique index pg_attribute_relid_attnum_index" message. How can I
: > fix it?
:
: What platform are you running on, and how did you build or acquire the
: executables? Have you ever run the regression tests?

Linux Red Hat 6.0 running on a K6-2. I installed from RPM packages and it's PostgreSQL 7.0.2. Nope, I never ran a regression test. Is it necessary?

Regards,
José Vilson de Mello de Farias
Dígitro Tecnologia Ltda.

: This looks a lot like known portability problems on PPC, Alpha, etc.
: There are workarounds but you need to know about them when building...
:
: regards, tom lane
[GENERAL] Why my vacuum fails?
Greetings, I'm trying to run a vacuum and it's not working very well. Could you please tell me what I can do? The main problem appears to be related to the "Cannot insert a duplicate key into unique index pg_attribute_relid_attnum_index" message. How can I fix it? Thanks a lot.

[postgres@dgtao postgres]$ /usr/bin/vacuumdb --analyze --verbose -d bxs > /home/postgres/log4.txt 2>&1
[postgres@dgtao postgres]$ cat log4.txt
NOTICE:  --Relation pg_type--
NOTICE:  Pages 4: Changed 0, reaped 1, Empty 0, New 0; Tup 230: Vac 0, Keep/VTL 0/0, Crash 0, UnUsed 51, MinLen 105, MaxLen 109; Re-using: Free/Avail. Space 6124/0; EndEmpty/Avail. Pages 0/0. CPU 0.00s/0.02u sec.
NOTICE:  Index pg_type_typname_index: Pages 7; Tuples 212: Deleted 0. CPU 0.00s/0.00u sec.
NOTICE:  Index pg_type_typname_index: NUMBER OF INDEX' TUPLES (212) IS NOT THE SAME AS HEAP' (230). Recreate the index.
NOTICE:  Index pg_type_oid_index: Pages 2; Tuples 234: Deleted 0. CPU 0.00s/0.00u sec.
NOTICE:  Index pg_type_oid_index: NUMBER OF INDEX' TUPLES (234) IS NOT THE SAME AS HEAP' (230). Recreate the index.
NOTICE:  --Relation pg_attribute--
NOTICE:  Pages 57: Changed 1, reaped 39, Empty 3, New 0; Tup 1845: Vac 210, Keep/VTL 0/0, Crash 29, UnUsed 1848, MinLen 98, MaxLen 100; Re-using: Free/Avail. Space 251548/241720; EndEmpty/Avail. Pages 1/37. CPU 0.00s/0.14u sec.
NOTICE:  Index pg_attribute_relid_attnum_index: Pages 24; Tuples 1506: Deleted 0. CPU 0.00s/0.02u sec.
NOTICE:  Index pg_attribute_relid_attnum_index: NUMBER OF INDEX' TUPLES (1506) IS NOT THE SAME AS HEAP' (1845). Recreate the index.
NOTICE:  Index pg_attribute_relid_attnam_index: Pages 56; Tuples 1506: Deleted 0. CPU 0.00s/0.03u sec.
NOTICE:  Index pg_attribute_relid_attnam_index: NUMBER OF INDEX' TUPLES (1506) IS NOT THE SAME AS HEAP' (1845). Recreate the index.
ERROR:  Cannot insert a duplicate key into unique index pg_attribute_relid_attnum_index
vacuumdb: vacuum failed
[postgres@dgtao postgres]$
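When VACUUM reports index tuple counts that disagree with the heap, the standard remedy is to rebuild the damaged indexes. For system-catalog indexes like these, my understanding is that 7.x required running a standalone backend with system-index processing disabled (the -O -P switches) before reindexing; verify the exact procedure against the documentation for your release before trying it. A sketch of the general shape:

```sql
-- Run inside a standalone backend started with -O -P (allow system
-- table modifications, ignore system indexes), then rebuild the
-- indexes that VACUUM flagged:
REINDEX TABLE pg_type;
REINDEX TABLE pg_attribute;
```

If the corruption recurs, that points back to Tom's question about how the binaries were built, since index corruption of this kind was a known platform/build issue at the time.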
[GENERAL] postgresql-server-7.0.3-2.i386.rpm error
Hello! I've been trying to install 7.0.3, but I always get the same error, even when I try installing on another computer. What can I do?

[root@localhost /tmp]# rpm -i postgresql-7.0.3-2.i386.rpm
[root@localhost /tmp]# rpm -i postgresql-devel-7.0.3-2.i386.rpm
[root@localhost /tmp]# rpm -i postgresql-server-7.0.3-2.i386.rpm
/sbin/ldconfig: warning: can't open var/tmp/rpm-tmp.10903 (Not a directory), skipping
/sbin/ldconfig: warning: can't open 1 (No such file or directory), skipping

Thanks in advance!
José Vilson de Mello de Farias
Dígitro Tecnologia Ltda - Brasil
No Luck -> Re: [GENERAL] Sequential scan over primary keys
Tom, I've tried it like you said (::integer) but it doesn't work. I tried another way, using GROUP BY at the end, but no luck again. What now?

Table "prog_teste"
 Attribute | Type    | Modifier
-----------+---------+----------
 cod_teste | integer | not null
 ...
Index: xpkprog_teste

Index "xpkprog_teste"
 Attribute | Type
-----------+---------
 cod_teste | integer
unique btree (primary key)

sitest=# explain select * from prog_teste where cod_teste=90::integer;
NOTICE:  QUERY PLAN:

Seq Scan on prog_teste  (cost=0.00..1.21 rows=1 width=138)

EXPLAIN
sitest=# explain select * from prog_teste where cod_teste=90::integer order by cod_teste;
NOTICE:  QUERY PLAN:

Sort  (cost=1.22..1.22 rows=1 width=138)
  ->  Seq Scan on prog_teste  (cost=0.00..1.21 rows=1 width=138)

Best regards,
José Vilson de Mello de Farias
Dígitro Tecnologia - Brasil

----- Original Message -----
From: Tom Lane <[EMAIL PROTECTED]>
To: Vilson farias <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>; SIMONE Carla MOSENA <[EMAIL PROTECTED]>
Sent: Friday, November 10, 2000 13:11
Subject: Re: [GENERAL] Sequential scan over primary keys

: "Vilson farias" <[EMAIL PROTECTED]> writes:
: > sitest=# CREATE TABLE tipo_categoria (
: > sitest(#     cod_categoria smallint NOT NULL,
: > sitest(#     descricao     varchar(40),
: > sitest(#     CONSTRAINT XPKtipo_categoria PRIMARY KEY (cod_categoria)
: > sitest(# );
:
: > sitest=# explain select * from tipo_categoria where cod_categoria = 1;
: > NOTICE:  QUERY PLAN:
:
: > Seq Scan on tipo_categoria  (cost=0.00..22.50 rows=10 width=14)
:
: Try it with
:     select * from tipo_categoria where cod_categoria = 1::smallint;
:
: An unadorned literal "1" is an int, not a smallint, and the planner is
: not currently very smart about indexing cross-datatype comparisons.
:
: Alternatively, just declare your table with column type int. Because of
: alignment requirements for the varchar column, you're not actually
: saving any space by using the smallint declaration anyway.
:
: regards, tom lane
[GENERAL] Sequential scan over primary keys 2
Just another question: does a foreign key get an automatically created index, like a primary key does?

----- Original Message -----
From: Vilson farias <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: SIMONE Carla MOSENA <[EMAIL PROTECTED]>
Sent: Friday, November 10, 2000 10:16
Subject: [GENERAL] Sequential scan over primary keys

: Hello,
:
: I need help with the case below. My table tipo_categoria has a primary key
: called cod_categoria. When I use this key as a parameter in my SQL script,
: the result of execution is a sequential scan, but this is a PRIMARY KEY, so
: it does have an index. How can this be explained?
:
: sitest=# CREATE TABLE tipo_categoria (
: sitest(#     cod_categoria smallint NOT NULL,
: sitest(#     descricao     varchar(40),
: sitest(#     CONSTRAINT XPKtipo_categoria PRIMARY KEY (cod_categoria)
: sitest(# );
: NOTICE:  CREATE TABLE/PRIMARY KEY will create implicit index
: 'xpktipo_categoria' for table 'tipo_categoria'
: CREATE
: sitest=# copy tipo_categoria from '/home/postgres/categ.txt';
: COPY
: sitest=# explain select * from tipo_categoria where cod_categoria = 1;
: NOTICE:  QUERY PLAN:
:
: Seq Scan on tipo_categoria  (cost=0.00..22.50 rows=10 width=14)
:
: EXPLAIN
: sitest=# \di
: List of relations
:        Name        | Type  |  Owner
: -------------------+-------+----------
:  ...
:  xpktipo_categoria | index | postgres
:  ...
: (26 rows)
: sitest=# select * from tipo_categoria;
:  cod_categoria |            descricao
: ---------------+----------------------------------
:              0 | Categoria chamador desconhecida
:              1 | Reserva
:              2 | Reserva
:             ..
:            224 | Assinante com tarifacao especial
:            226 | Telefone publico interurbano
: (20 rows)
:
: Thanks.
:
: José Vilson de Mello de Farias
: Digitro Tecnologia Ltda - Brasil
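To answer that question directly: PostgreSQL creates an implicit index for PRIMARY KEY (and UNIQUE) constraints, but not for foreign-key columns on the referencing table. If you filter or join on the FK column, you create that index yourself. A sketch using a hypothetical referencing table (the 'pedido' names are illustrative, not from the thread):

```sql
-- 'pedido' and its columns are hypothetical names for illustration.
CREATE TABLE pedido (
    cod_pedido    integer NOT NULL PRIMARY KEY,  -- gets an implicit index
    cod_categoria smallint REFERENCES tipo_categoria (cod_categoria)
);
-- The FK column gets NO automatic index; add one if you query by it:
CREATE INDEX idx_pedido_cod_categoria ON pedido (cod_categoria);
```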
[GENERAL] Sequential scan over primary keys
Hello, I need help with the case below. My table tipo_categoria has a primary key called cod_categoria. When I use this key as a parameter in my SQL script, the result of execution is a sequential scan, but this is a PRIMARY KEY, so it does have an index. How can this be explained?

sitest=# CREATE TABLE tipo_categoria (
sitest(#     cod_categoria smallint NOT NULL,
sitest(#     descricao     varchar(40),
sitest(#     CONSTRAINT XPKtipo_categoria PRIMARY KEY (cod_categoria)
sitest(# );
NOTICE:  CREATE TABLE/PRIMARY KEY will create implicit index 'xpktipo_categoria' for table 'tipo_categoria'
CREATE
sitest=# copy tipo_categoria from '/home/postgres/categ.txt';
COPY
sitest=# explain select * from tipo_categoria where cod_categoria = 1;
NOTICE:  QUERY PLAN:

Seq Scan on tipo_categoria  (cost=0.00..22.50 rows=10 width=14)

EXPLAIN
sitest=# \di
List of relations
       Name        | Type  |  Owner
-------------------+-------+----------
 ...
 xpktipo_categoria | index | postgres
 ...
(26 rows)
sitest=# select * from tipo_categoria;
 cod_categoria |            descricao
---------------+----------------------------------
             0 | Categoria chamador desconhecida
             1 | Reserva
             2 | Reserva
            ..
           224 | Assinante com tarifacao especial
           226 | Telefone publico interurbano
(20 rows)

Thanks.

José Vilson de Mello de Farias
Digitro Tecnologia Ltda - Brasil
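The likely explanation, as discussed later in this thread: the literal 1 is typed integer while the column is smallint, and the planner of that era would not use an index for a cross-datatype comparison. Casting the literal to the column's type is the usual workaround:

```sql
-- An unadorned 1 is parsed as integer; cast it so it matches the
-- smallint index column:
EXPLAIN SELECT * FROM tipo_categoria WHERE cod_categoria = 1::smallint;
-- Alternatively, declare the column as integer; alignment padding next
-- to the varchar column means smallint saves no space here anyway.
```

Note also that on a 20-row table the planner may still prefer a sequential scan even when the index is usable, simply because it is cheaper.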
[GENERAL] CPU killer
Greetings, I've been using PostgreSQL on a Pentium 75MHz, Linux Red Hat 6.2, 32MB. Every big query I execute uses too much CPU (more than 90%). I start postgres with these parameters:

su -l postgres -c '/usr/bin/postmaster -B 2048 -i -D "/home/postgres/data"' &

What should I do to avoid postgres taking so much CPU? I know non-indexed or very large tables can be slow, but here I don't care about execution speed; I just want lower CPU usage, no matter how slow.

Regards from Brazil,
José Vilson de Mello de Farias
Dígitro Tecnologia Ltda.
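If the goal is only to keep PostgreSQL from starving other processes (a busy query will use whatever CPU it is given), one low-tech option is to start the postmaster at a lower scheduling priority with nice. This does not reduce total work, only how the kernel shares the CPU. An illustrative init-script fragment, assuming the paths from the message above:

```shell
# Hypothetical variant of the startup line: run the postmaster niced,
# so interactive processes are scheduled ahead of heavy queries.
su -l postgres -c 'nice -n 10 /usr/bin/postmaster -B 2048 -i -D "/home/postgres/data"' &
```

On a 32MB machine it may also be worth checking that -B 2048 (about 16MB of shared buffers) is not pushing the box into swap, which shows up as both high CPU and poor throughput.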
Re: [GENERAL] Delete temp tables
: In short: if you want to delete a table there is one and only one
: safe method to do it: DROP TABLE. The key difference between a temp
: table and a regular table is that the DROP will be done for you
: automatically when you disconnect.

Then why does this happen?

relatorio=# DROP TABLE "pg_temp.1823.17";
ERROR:  class "pg_temp.1823.17" is a system catalog

I really need to erase these tables, because they are not being auto-removed after my application crashes. I have more than 100 zombie temp tables in my system. Which system tables hold the information I need to remove for these temp tables?

Regards,
José Vilson de Mello de Farias
Dígitro Tecnologia Ltda.

: regards, tom lane
[GENERAL] Delete temp tables
Hi people! I'm writing a script to erase non-dropped temp tables. When I tried to delete from pg_tables, the following problem arose:

** Check it out, there is data:
relatorio=# SELECT COUNT(*) FROM pg_tables WHERE tablename like 'pg_temp.%';
 count
-------
   101
(1 row)

** There really is data:
relatorio=# SELECT * FROM pg_tables WHERE tablename like 'pg_temp.%' LIMIT 5;
    tablename    | tableowner | hasindexes | hasrules | hastriggers
-----------------+------------+------------+----------+-------------
 pg_temp.10752.1 | postgres   | f          | f        | f
 pg_temp.1085.10 | postgres   | f          | f        | f
 pg_temp.1085.11 | postgres   | f          | f        | f
 pg_temp.1085.12 | postgres   | f          | f        | f
 pg_temp.1085.13 | postgres   | f          | f        | f
(5 rows)

** But when I execute a delete, nothing happens:
relatorio=# DELETE FROM pg_tables WHERE tablename like 'pg_temp.%';
DELETE 0
** or
relatorio=# delete from pg_tables where tablename = 'pg_temp.1823.12';
DELETE 0

** I tried everything, even double quotes (which of course don't work):
relatorio=# delete from pg_tables where tablename = "pg_temp.1823.12";
ERROR:  Attribute 'pg_temp.1823.12' not found

Why DELETE 0, if a SELECT with the same condition shows data? I remember that the tables' associated files had been auto-removed: I went to $PGDATA/base/relatorio and there were no temp-table files, so I created them myself. Now, how can I remove these temp tables?

Regards from Brazil,
José Vilson de Mello de Farias
Dígitro Tecnologia Ltda.
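The DELETE 0 has a simple explanation: pg_tables is a view, and a DELETE against a view without delete rules silently affects nothing. The underlying rows live in pg_class and pg_attribute, so removing orphaned temp-table entries means editing the system catalogs directly as superuser. This is risky catalog surgery and is offered only as a hedged sketch; dump the database first and confirm each target relation really is an orphan:

```sql
-- DANGEROUS: direct system-catalog surgery. Back up first, and only
-- target relations you have verified are orphaned temp tables.
BEGIN;
DELETE FROM pg_attribute
 WHERE attrelid IN (SELECT oid FROM pg_class
                     WHERE relname LIKE 'pg_temp.%');
DELETE FROM pg_class WHERE relname LIKE 'pg_temp.%';
COMMIT;
```

A dump-and-restore of the database is the safer route if the catalog state is in any doubt.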
[GENERAL] Help needed : temp tables.
I'm having problems with temporary tables in PostgreSQL. I've been using PostgreSQL 7.0.2 on Red Hat 6.2, accessed from Delphi 5 + Winzeos components. I've been using it for some months, and now I see there are lots of temp tables in my system. These tables were created by my application, and I think every time it crashed the temp tables were not removed. How can I delete a 'zombie' temp table? I tried to drop these tables, but the following message appeared, for example:

relatorio=# \dS
List of relations
       Name       | Type  |  Owner
------------------+-------+----------
 ...
 pg_rules         | view  | postgres
 pg_shadow        | table | postgres
 pg_statistic     | table | postgres
 pg_tables        | view  | postgres
 pg_temp.10050.18 | table | postgres
 pg_temp.10050.19 | table | postgres
 pg_temp.10050.20 | table | postgres
 pg_temp.10863.0  | table | postgres
 pg_temp.10863.1  | table | postgres
 pg_temp.10863.2  | table | postgres
 pg_temp.10863.3  | table | postgres
 pg_temp.10863.4  | table | postgres
 pg_temp.10863.5  | table | postgres
 pg_temp.10863.6  | table | postgres
 ...lots and lots more

relatorio=# drop table pg_temp.10863.1;
ERROR:  parser: parse error at or near ".10863"

And if I try vacuum verbose analyze, I get this message:

NOTICE:  --Relation pg_temp.10863.0--
NOTICE:  mdopen: couldn't open pg_temp.10863.0: No such file or directory
ERROR:  cannot open relation pg_temp.10863.0

Does someone know how to solve this problem? Thanks!

José Vilson de Mello de Farias
Dígitro Tecnologia Ltda - Brazil