[GENERAL] foreign keys and memory consumption

2007-10-09 Thread Jan Poslusny
Hi all, pg 8.2.4 on Fedora Core 6 x86-64, mostly default postgresql.conf, just shared buffers increased to 256MB. 1GB RAM. I attempt to insert ~200k rows into a table in one transaction from the psql console, calling a stored function in the plperlu language, which inserts row by row via
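A minimal sketch of the row-by-row plperlu pattern described above, assuming a hypothetical table t(id, val) and a hypothetical function name; it only illustrates the shape of such a function, not the poster's actual code:

    -- hypothetical sketch: row-by-row insert from plperlu (names assumed)
    CREATE TABLE t (id int, val text);

    CREATE OR REPLACE FUNCTION bulk_insert(n int) RETURNS void AS $$
        my ($n) = @_;
        for my $i (1 .. $n) {
            # each call issues a separate INSERT inside the surrounding transaction
            spi_exec_query("INSERT INTO t (id, val) VALUES ($i, 'row $i')");
        }
        return;
    $$ LANGUAGE plperlu;

    -- called from psql, e.g.:
    -- BEGIN; SELECT bulk_insert(200000); COMMIT;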

[GENERAL] How to recognize obsolete WAL logs

2007-03-19 Thread Jan Poslusny
Hi all, I am working on setting up a warm standby with pg 8.2.3, according to the documentation. Mostly everything works fine, but one problem remains. The recovering RDBMS consumes the transferred WAL logs steadily, but I cannot see how to recognize whether a recovered WAL log file has really been processed into

[GENERAL] Howto repair template1

2005-06-21 Thread Jan Poslusny
Hi, my colleague unfortunately restored a database dump into template1 on our production server, pg 7.3.5. Can we drop template1, dump template0, and rebuild template1 from that dump? Or is there a better way to repair template1 without affecting the live database instances? Thanks,
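One commonly cited approach is to recreate template1 directly from template0 rather than from a dump; a sketch only (not verified on 7.3.5, run as superuser and take a backup first):

    -- sketch: rebuild template1 from template0 (from a psql session in another database)
    UPDATE pg_database SET datistemplate = false WHERE datname = 'template1';
    DROP DATABASE template1;
    CREATE DATABASE template1 WITH TEMPLATE template0;
    UPDATE pg_database SET datistemplate = true WHERE datname = 'template1';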

Re: [GENERAL] random record from small set

2005-02-15 Thread Jan Poslusny
And what about another data representation, like create table r1 ( i int, chance_from numeric, chance_to numeric )? You can select one random row in a single select, for instance select * from r1 where chance_from <= $rnd and chance_to > $rnd; I see these advantages: - Only one select. - Indices can
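A sketch of that range representation with hypothetical probabilities summing to 1; $rnd stands for a random value in [0,1) supplied by the client:

    -- sketch of the range representation (example values assumed)
    CREATE TABLE r1 (i int, chance_from numeric, chance_to numeric);
    INSERT INTO r1 VALUES (1, 0.0, 0.5);   -- 50% chance
    INSERT INTO r1 VALUES (2, 0.5, 0.8);   -- 30% chance
    INSERT INTO r1 VALUES (3, 0.8, 1.0);   -- 20% chance

    -- pick one random row in a single select; with $rnd = 0.42, for example:
    SELECT * FROM r1
    WHERE chance_from <= 0.42 AND chance_to > 0.42;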

Re: [GENERAL] random record from small set

2005-02-15 Thread Jan Poslusny
Or create table r1 ( i int, chance_from numeric ) and select * from r1 where chance_from <= $rnd order by chance_from desc limit 1; which can be updated more easily... Just ideas, I have never tested it... Jan Poslusny wrote: And what about another data representation like create table r1 ( i int
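A sketch of this simpler variant with the same hypothetical distribution; only the lower bound is stored, and the largest chance_from not exceeding the random value wins:

    -- sketch: lower-bound-only representation (example values assumed)
    CREATE TABLE r1 (i int, chance_from numeric);
    INSERT INTO r1 VALUES (1, 0.0);   -- 50% chance (0.0 .. 0.5)
    INSERT INTO r1 VALUES (2, 0.5);   -- 30% chance (0.5 .. 0.8)
    INSERT INTO r1 VALUES (3, 0.8);   -- 20% chance (0.8 .. 1.0)

    -- with $rnd = 0.42, for example:
    SELECT * FROM r1
    WHERE chance_from <= 0.42
    ORDER BY chance_from DESC
    LIMIT 1;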

Re: [GENERAL]

2005-02-08 Thread Jan Poslusny
You can use savepoints in pg 8.0: http://www.postgresql.org/docs/8.0/static/sql-savepoint.html Surabhi Ahuja wrote: in a transaction I try to insert into table1, followed by an insert into table2, then an insert into table3, and last an insert into table4. However, if a unique key violation occurs in
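A minimal sketch of the savepoint pattern being suggested (table names and values hypothetical), so that a unique-key violation in one insert does not abort the whole transaction:

    -- sketch: recover from a unique violation without losing the transaction
    BEGIN;
    INSERT INTO table1 VALUES (1, 'a');
    SAVEPOINT sp1;
    INSERT INTO table2 VALUES (1, 'b');   -- may hit a unique key violation
    -- on error, roll back only to the savepoint and continue:
    ROLLBACK TO SAVEPOINT sp1;
    INSERT INTO table3 VALUES (1, 'c');
    INSERT INTO table4 VALUES (1, 'd');
    COMMIT;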

Re: [GENERAL]

2005-02-08 Thread Jan Poslusny
/interactive/plpgsql-control-structures.html search for trapping errors? *From:* Jan Poslusny [mailto:[EMAIL PROTECTED] *Sent:* Tue 2/8/2005 3:30 PM *To:* Surabhi Ahuja *Cc:* pgsql-general@postgresql.org *Subject:* Re: [GENERAL
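For completeness, a sketch of the plpgsql error-trapping approach that documentation link describes (8.0+), again with hypothetical table names:

    -- sketch: trap a unique_violation inside a plpgsql function (8.0+)
    CREATE OR REPLACE FUNCTION insert_all() RETURNS void AS $$
    BEGIN
        INSERT INTO table1 VALUES (1, 'a');
        BEGIN
            INSERT INTO table2 VALUES (1, 'b');
        EXCEPTION WHEN unique_violation THEN
            -- ignore the duplicate and carry on with the remaining inserts
            NULL;
        END;
        INSERT INTO table3 VALUES (1, 'c');
        INSERT INTO table4 VALUES (1, 'd');
        RETURN;
    END;
    $$ LANGUAGE plpgsql;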

Re: [GENERAL] Functionality in database or external langauge

2004-05-13 Thread Jan Poslusny
I have some experience with the following scenario: 1. Pg or another RDBMS handles data integrity and transaction isolation only. Potentially, only _very_ well-known and old-school rules are implemented here, for instance user sessions, which are not strictly related to the real business rules.

Re: [GENERAL] Inner join question

2004-02-19 Thread Jan Poslusny
Hi, try this on the psql console: explain analyze select tq1.*, tq2.* from cal_quat_1 tq1, cal_quat_2 tq2 where tq1.timestamp = tq2.timestamp and tq1.timestamp > '2004-01-12 09:47:56. +0'::timestamp with time zone and tq1.timestamp < '2004-01-12 09:50:44.7187 +0'::timestamp with time zone order by
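If the plan from explain analyze shows sequential scans, an index on each timestamp column is the usual next step; a sketch (index names assumed):

    -- sketch: indexes to support the timestamp join and range filter
    CREATE INDEX cal_quat_1_ts_idx ON cal_quat_1 ("timestamp");
    CREATE INDEX cal_quat_2_ts_idx ON cal_quat_2 ("timestamp");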

[GENERAL] invalid command \

2003-11-05 Thread Jan Poslusny
Hello, we run PostgreSQL 7.3.4 on RedHat 9. I initialized the db cluster with --locale=cs_CZ.UTF-8 and created the database with --encoding=unicode. But the following script generates an error: -- -*- coding: iso-8859-2 -*- set client_encoding to LATIN2; create table t (txt text); begin; --inserted character

[GENERAL] Databases with different encodings

2003-11-04 Thread Jan Poslusny
RedHat 9, PostgreSQL 7.3.4. When I perform initdb --locale=cs_CZ, createdb --encoding=latin2 lat, createdb --encoding=unicode uni, the tables of lat sort fine, but the tables of uni do not. When I initdb --locale=cs_CZ.UTF-8, sorting in uni is good, but sorting in lat is not. Is there some way to create

Re: [GENERAL] Monthly table partitioning for fast purges?

2003-08-04 Thread Jan Poslusny
Did you think about CLUSTER on an index? [EMAIL PROTECTED] wrote: I am looking at ways to speed up queries; the most common way for queries to be constrained is by date range. I have indexed the date column. Queries are still slower than I would like. Would there be any performance increase
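A sketch of the CLUSTER suggestion for the date-range case (table, column, and index names assumed):

    -- sketch: physically order the table by the date index
    CREATE INDEX events_date_idx ON events (event_date);
    CLUSTER events_date_idx ON events;      -- syntax of the 7.x era
    -- newer releases use: CLUSTER events USING events_date_idx;
    ANALYZE events;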