Re: [GENERAL] dropping a master table and all of its partitions?

2007-02-27 Thread Tom Lane
George Nychis <[EMAIL PROTECTED]> writes:
> Here is an exact script which generates this every single time...
> After you're done running the ruby script:
> DROP TABLE testflows CASCADE;

I tweaked the ruby script to emit the SQL commands into a script file,
which proved to issue 1765 CREATE TABLE commands (one parent and 1764
children) and 1764 CREATE RULE commands (one per child table).  On my
test installation the creation script runs about 6m15s, while
"DROP TABLE testflows CASCADE" runs about 3m4s.  While neither number
is exactly awe-inspiring, I'm not seeing why you think the DROP is
particularly broken?

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] SQL names for C constants

2007-02-27 Thread Vegard Bønes

Hi!

I am writing a server-side function in PL/pgSQL that returns part of a 
large object.


To do this, I can write something like:

fd := lo_open( some_oid, 262144 );
PERFORM lo_lseek( fd, index, 0 );
RETURN loread( fd, read_size );

This works well enough, but I feel slightly uneasy about using raw 
numbers instead of symbolic names (262144 for INV_READ and 0 for 
SEEK_SET). I find SEEK_SET especially unsettling, since that constant 
is defined outside of PostgreSQL and may therefore have a value other 
than 0 on other platforms.


Am I missing something here? Is there a way to specify these names 
without writing the actual numbers, or should I just close my eyes and 
hope that everything will work?
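[Editor's note: one workaround, sketched below, is to wrap the magic numbers behind named SQL functions once, so every call site stays readable. This is not a built-in feature; the value 262144 is INV_READ (0x40000) from the server's libpq-fs.h, and 0 is SEEK_SET as used by lo_lseek.]

```sql
-- Give the magic numbers names once, then use the named functions
-- everywhere.  262144 is INV_READ (0x40000) per the server's
-- libpq-fs.h; 0 is the SEEK_SET value lo_lseek expects.
CREATE OR REPLACE FUNCTION inv_read() RETURNS integer AS
$$ SELECT 262144 $$ LANGUAGE sql IMMUTABLE;

CREATE OR REPLACE FUNCTION seek_set() RETURNS integer AS
$$ SELECT 0 $$ LANGUAGE sql IMMUTABLE;

-- The original snippet then becomes:
--   fd := lo_open( some_oid, inv_read() );
--   PERFORM lo_lseek( fd, index, seek_set() );
--   RETURN loread( fd, read_size );
```

If SEEK_SET ever differed on some platform, only the one wrapper function would need to change.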



thanks

VG



Re: [GENERAL] stored procedure optimisation...

2007-02-27 Thread Albe Laurenz
Anton Melser wrote:
> I need to analyse some html to get some links out, and with only 25
> lines in exports_tmp_links (and text_to_parse no more than around
> 10KB) this function has taken 10 minutes and counting. Something
> horribly wrong is going on here! Can someone give me any pointers?

I bet that you are running into an endless loop there.

Try adding a few RAISE statements to find out what is happening,
something like

RAISE NOTICE 'I am here, and myvar = % and thatvar = %', myvar, thatvar;

Yours,
Laurenz Albe



[GENERAL] Bad performance of a query

2007-02-27 Thread Rafa Comino

I have this query:

SELECT DISTINCT isbn, CURRENT_TIMESTAMP, 1
 FROM librosdisponibilidadtemp
WHERE proceso = ai_proceso
  AND gen_isbn_pais(isbn) IN (SELECT pais FROM raizpaises)
  AND NOT EXISTS
  ( SELECT isbn
  FROM libros
 WHERE isbn = librosdisponibilidadtemp.isbn)
  AND NOT EXISTS
  ( SELECT isbn
  FROM isbns_a_descubrir
 WHERE isbn = librosdisponibilidadtemp.isbn);

and the execution plan is:

Unique  (cost=133558107.45..133558128.13 rows=414 width=21) (actual time=790552.899..790553.098 rows=9 loops=1)
  ->  Sort  (cost=133558107.45..133558112.62 rows=2068 width=21) (actual time=790552.882..790552.944 rows=9 loops=1)
        Sort Key: isbn, now(), 1
        ->  Index Scan using librosdisponibilidadtemp_idx_proceso on librosdisponibilidadtemp  (cost=1.01..133557993.56 rows=2068 width=21) (actual time=5722.607..790552.588 rows=9 loops=1)
              Index Cond: (proceso = 28465)
              Filter: ((hashed subplan) AND (NOT (subplan)) AND (NOT (subplan)))
              SubPlan
                ->  Seq Scan on isbns_a_descubrir  (cost=0.00..8067.91 rows=1 width=21) (actual time=30.044..30.044 rows=1 loops=2025)
                      Filter: ((isbn)::bpchar = $1)
                ->  Index Scan using "libros_idx_ISBN" on libros  (cost=0.00..5.95 rows=1 width=21) (actual time=12.938..12.938 rows=1 loops=50512)
                      Index Cond: (isbn = $1)
                ->  Seq Scan on raizpaises  (cost=0.00..1.01 rows=1 width=10) (actual time=0.764..0.871 rows=1 loops=1)
Total runtime: 790553.561 ms

The libros table has 120 rows.
The isbns_a_descubrir table has 30 rows.
The librosdisponibilidadtemp table has 5 rows.

Can anybody explain to me why the scan using the index
librosdisponibilidadtemp_idx_proceso is so slow, while the other conditions
perform well enough?
Thanks everybody
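[Editor's note: a hedged first step, judging from the plan above, where the planner's estimates (rows=2068) are far from the actual counts (rows=9); that gap often points at stale statistics. The constant 28465 below is taken from the plan's Index Cond in place of the ai_proceso variable.]

```sql
-- Refresh planner statistics for the tables involved, then re-check
-- the plan: the estimated vs. actual row counts above suggest the
-- statistics may be stale.
ANALYZE librosdisponibilidadtemp;
ANALYZE libros;
ANALYZE isbns_a_descubrir;

-- Re-run the query under EXPLAIN ANALYZE to see whether the plan
-- (and the cost estimates) change after the statistics refresh.
EXPLAIN ANALYZE
SELECT DISTINCT isbn, CURRENT_TIMESTAMP, 1
  FROM librosdisponibilidadtemp
 WHERE proceso = 28465
   AND gen_isbn_pais(isbn) IN (SELECT pais FROM raizpaises)
   AND NOT EXISTS (SELECT 1 FROM libros
                    WHERE libros.isbn = librosdisponibilidadtemp.isbn)
   AND NOT EXISTS (SELECT 1 FROM isbns_a_descubrir
                    WHERE isbns_a_descubrir.isbn = librosdisponibilidadtemp.isbn);
```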


Re: [GENERAL] dropping a master table and all of its partitions?

2007-02-27 Thread George Nychis



Tom Lane wrote:

George Nychis <[EMAIL PROTECTED]> writes:

Here is an exact script which generates this every single time...
After you're done running the ruby script:
DROP TABLE testflows CASCADE;


I tweaked the ruby script to emit the SQL commands into a script file,
which proved to issue 1765 CREATE TABLE commands (one parent and 1764
children) and 1764 CREATE RULE commands (one per child table).  On my
test installation the creation script runs about 6m15s, while
"DROP TABLE testflows CASCADE" runs about 3m4s.  While neither number
is exactly awe-inspiring, I'm not seeing why you think the DROP is
particularly broken?

regards, tom lane
Then maybe it's a bug in my version of PostgreSQL; what version are you using? 
Because that DROP TABLE testflows CASCADE; runs for ~5 minutes on my side and 
then dies.  It never finishes.


- George








[GENERAL] Unable to restore dump due to client encoding issues -- or, when is SQL_ASCII really UTF8

2007-02-27 Thread Bill Moran

I'm probably missing something painfully obvious here, but it's not
obvious to me ...

I've pulled a dump of a production database to put it in our lab for
various types of testing ... I'm glad I did, as this testing is telling
me we'll have issues if we try to upgrade.

First off, it's my understanding that with SQL_ASCII "encoding",
PostgreSQL does no checking for valid/invalid characters, per the docs:
http://www.postgresql.org/docs/8.2/static/multibyte.html

The beginning of the dump file I am restoring has the following:
--
-- PostgreSQL database dump
--

SET client_encoding = 'SQL_ASCII';
[...]

But when I try to pull the dump in with psql, I get the following errors:
ERROR:  invalid byte sequence for encoding "UTF8": 0xa0
HINT:  This error can also happen if the byte sequence does not match the 
encoding expected by the server, which is controlled by "client_encoding".

Connecting to the database and issuing "show client_encoding" shows that
the database is indeed set to SQL_ASCII.

Now ... I'm expecting the server to accept any byte sequence, since we're
using SQL_ASCII, but that is (obviously) not the case.  Am I missing
something obvious here?  Grepping the entire dump file shows absolutely
no references to UTF8 ... so why is the server trying to validate the
byte string as UTF8?

-- 
Bill Moran
Collaborative Fusion Inc.



[GENERAL] How does filter order relate to query optimization?

2007-02-27 Thread Andrew Edson
I'm working on a php project that's supposed to draw information from the DB 
for display, and I've been requested to speed up the display as much as 
possible.  I'm drawing data from four tables, with an additional two that I 
have to 'bounce' through to match the keys together.  Also, I've got five 
direct filtering requirements, four of the 'value = X' type and a date range.
   
  My question is this: Would shuffling the placement of the filtering 
requirements (t1.some_key = t2.some_key and t1.some_other_value = X, etc.) make 
a difference in processing speed for the response time?  And if so, would the 
response be quicker with the direct (value = x) filters at the front of the 
WHERE clause, or the end?

 

[GENERAL] performance of partitioning?

2007-02-27 Thread George Nychis

Hey all,

So I have a master table called "flows" and 400 partitions in the format 
"flow_*" where * is equal to some epoch.


Each partition contains ~700,000 rows and has a check such that 1 field is equal 
to a value:
   "flows_1107246900_interval_check" CHECK ("interval" = '2005-02-01 
03:35:00'::timestamp without time zone)


Each partition has a different and unique non-overlapping check.

This query takes about 5 seconds to execute:
dp=> select count(*) from flows_1107246900;
 count

 696836
(1 row)

This query has been running for 10 minutes now and hasn't stopped:
dp=> select count(*) from flows where interval='2005-02-01 03:35:00';

Isn't partitioning supposed to make the second query almost as fast?  My WHERE 
is exactly the partitioning constraint, therefore it only needs to go to 1 
partition and execute the query.


Why would it take orders of magnitude longer to run?  Am I misunderstanding something?

Thanks!
George



Re: [GENERAL] performance of partitioning?

2007-02-27 Thread Jorge Godoy
George Nychis <[EMAIL PROTECTED]> writes:

> Hey all,
>
> So I have a master table called "flows" and 400 partitions in the format
> "flow_*" where * is equal to some epoch.
>
> Each partition contains ~700,000 rows and has a check such that 1 field is
> equal to a value:
>"flows_1107246900_interval_check" CHECK ("interval" = '2005-02-01
> 03:35:00'::timestamp without time zone)
>
> Each partition has a different and unique non-overlapping check.
>
> This query takes about 5 seconds to execute:
> dp=> select count(*) from flows_1107246900;
>  count
> 
>  696836
> (1 row)
>
> This query has been running for 10 minutes now and hasn't stopped:
> dp=> select count(*) from flows where interval='2005-02-01 03:35:00';
>
> Isn't partitioning supposed to make the second query almost as fast?  My WHERE
> is exactly the partitioning constraint, therefore it only needs to go to 1
> partition and execute the query.
>
> Why would it take magnitudes longer to run?  Am i misunderstanding something?

When you query the partition directly, the scan covers only that
partition's records.  When you query the parent table, it covers the
records of all partitions.

Also note that an index on interval wouldn't be helpful here, I believe,
because the data lives in the child tables rather than in the parent.


-- 
Jorge Godoy  <[EMAIL PROTECTED]>



Re: [GENERAL] performance of partitioning?

2007-02-27 Thread cedric
On Tuesday 27 February 2007 at 15:00, George Nychis wrote:
> Hey all,
>
> So I have a master table called "flows" and 400 partitions in the format
> "flow_*" where * is equal to some epoch.
>
> Each partition contains ~700,000 rows and has a check such that 1 field is
> equal to a value:
> "flows_1107246900_interval_check" CHECK ("interval" = '2005-02-01
> 03:35:00'::timestamp without time zone)
>
> Each partition has a different and unique non-overlapping check.
>
> This query takes about 5 seconds to execute:
> dp=> select count(*) from flows_1107246900;
>   count
> 
>   696836
> (1 row)
>
> This query has been running for 10 minutes now and hasn't stopped:
> dp=> select count(*) from flows where interval='2005-02-01 03:35:00';
>
> Isn't partitioning supposed to make the second query almost as fast?  My
> WHERE is exactly the partitioning constraint, therefore it only needs to go
> to 1 partition and execute the query.
>
> Why would it take magnitudes longer to run?  Am i misunderstanding
> something?
perhaps you should consider constraint_exclusion 
http://www.postgresql.org/docs/current/static/ddl-partitioning.html
http://www.postgresql.org/docs/current/static/runtime-config-query.html#GUC-CONSTRAINT-EXCLUSION
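[Editor's note: a minimal sketch of what those docs describe, using the partition constraint from the original post; constraint_exclusion can also be set in postgresql.conf instead of per session.]

```sql
-- Enable constraint exclusion so the planner can skip any partition
-- whose CHECK constraint contradicts the WHERE clause.
SET constraint_exclusion = on;

-- With exclusion on, the plan should show a scan of only the one
-- matching partition (flows_1107246900) rather than all 400:
EXPLAIN
SELECT count(*) FROM flows
 WHERE "interval" = '2005-02-01 03:35:00';
```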
>
> Thanks!
> George
>



Re: [GENERAL] performance of partitioning?

2007-02-27 Thread George Nychis



cedric wrote:

On Tuesday 27 February 2007 at 15:00, George Nychis wrote:

Hey all,

So I have a master table called "flows" and 400 partitions in the format
"flow_*" where * is equal to some epoch.

Each partition contains ~700,000 rows and has a check such that 1 field is
equal to a value:
"flows_1107246900_interval_check" CHECK ("interval" = '2005-02-01
03:35:00'::timestamp without time zone)

Each partition has a different and unique non-overlapping check.

This query takes about 5 seconds to execute:
dp=> select count(*) from flows_1107246900;
  count

  696836
(1 row)

This query has been running for 10 minutes now and hasn't stopped:
dp=> select count(*) from flows where interval='2005-02-01 03:35:00';

Isn't partitioning supposed to make the second query almost as fast?  My
WHERE is exactly the partitioning constraint, therefore it only needs to go
to 1 partition and execute the query.

Why would it take magnitudes longer to run?  Am i misunderstanding
something?
perhaps you should consider constraint_exclusion 
http://www.postgresql.org/docs/current/static/ddl-partitioning.html

http://www.postgresql.org/docs/current/static/runtime-config-query.html#GUC-CONSTRAINT-EXCLUSION

Thanks!
George





That sounds like what i'm looking for, thanks.  I'll give it a try and report 
back.

- George



Re: [GENERAL] Unable to restore dump due to client encoding issues -- or, when is SQL_ASCII really UTF8

2007-02-27 Thread Michael Fuhr
On Tue, Feb 27, 2007 at 08:43:27AM -0500, Bill Moran wrote:
> First off, it's my understanding that with SQL_ASCII "encoding", that
> PostgreSQL does no checking for valid/invalid characters, per the docs:
> http://www.postgresql.org/docs/8.2/static/multibyte.html

Correct.  As the documentation says, SQL_ASCII "is not so much a
declaration that a specific encoding is in use, as a declaration
of ignorance about the encoding."

> The beginning of the dump file I am restoring has the following:
> --
> -- PostgreSQL database dump
> --
> 
> SET client_encoding = 'SQL_ASCII';
> [...]
> 
> But when I try to pull the dump in with psql, I get the following errors:
> ERROR:  invalid byte sequence for encoding "UTF8": 0xa0
> HINT:  This error can also happen if the byte sequence does not match the 
> encoding expected by the server, which is controlled by "client_encoding".
> 
> Connecting to the database and issuing "show client_encoding" shows that
> the database is indeed set to SQL_ASCII.

client_encoding doesn't show the database encoding, it shows the
client encoding; execute "show server_encoding" to see the database
encoding.  You can also use "psql -l" or "\l" from within psql to
see all databases and their encodings.  The error suggests that the
database encoding is UTF8.

> Now ... I'm expecting the server to accept any byte sequence, since we're
> using SQL_ASCII, but that is (obviously) not the case.  Am I missing
> something obvious here?  Grepping the entire dump file shows absolutely
> no references to UTF8 ... so why is the server trying to validate the
> byte string as UTF8?

Probably because the database is UTF8 (see above).  Either create
the database as SQL_ASCII (see createdb's -E option) or change the
client_encoding setting in the dump to whatever the encoding really
is (probably LATIN1 or WIN1252 for Western European languages).
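[Editor's note: sketching both options; the database name is illustrative, and LATIN1 is only a guess at the dump's real encoding, as stated above.]

```sql
-- Option 1: recreate the target database with the "anything goes"
-- encoding before restoring the dump:
CREATE DATABASE labdb WITH ENCODING 'SQL_ASCII';

-- Option 2: leave the database as UTF8 and declare what the dumped
-- bytes really are, by editing the SET at the top of the dump file:
SET client_encoding = 'LATIN1';  -- instead of 'SQL_ASCII'
```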

-- 
Michael Fuhr



[GENERAL] How to debug this crash?

2007-02-27 Thread Jorge Godoy

Hi!


I've updated my system and after importing some old data I started getting
this message:


*** glibc detected *** postgres: godoy neo localhost(34476) SELECT: double free 
or corruption (out): 0x08494440 ***
=== Backtrace: =
/lib/libc.so.6[0xb7b4f6e1]
/lib/libc.so.6(cfree+0x89)[0xb7b50d79]
/usr/lib/postgresql/plpython.so[0xb7ee3944]
/usr/lib/libpython2.5.so.1.0[0xb6b8d85a]
/usr/lib/libpython2.5.so.1.0(PyEval_EvalFrameEx+0x55dc)[0xb6be80ac]
/usr/lib/libpython2.5.so.1.0(PyEval_EvalCodeEx+0x7c4)[0xb6be9734]
/usr/lib/libpython2.5.so.1.0(PyEval_EvalCode+0x63)[0xb6be97b3]
/usr/lib/postgresql/plpython.so[0xb7ee4dfe]
/usr/lib/postgresql/plpython.so[0xb7ee5c97]
/usr/lib/postgresql/plpython.so(plpython_call_handler+0xf6)[0xb7ee7016]
postgres: godoy neo localhost(34476) 
SELECT(ExecMakeFunctionResult+0xec)[0x813813c]
postgres: godoy neo localhost(34476) SELECT(ExecProject+0x1c6)[0x81364e6]
postgres: godoy neo localhost(34476) SELECT(ExecNestLoop+0x127)[0x8143f37]
postgres: godoy neo localhost(34476) SELECT(ExecProcNode+0x130)[0x8135c40]
postgres: godoy neo localhost(34476) SELECT(ExecutorRun+0x30b)[0x8134f4b]
postgres: godoy neo localhost(34476) SELECT[0x81b8e50]
postgres: godoy neo localhost(34476) SELECT(PortalRun+0x198)[0x81b9aa8]
postgres: godoy neo localhost(34476) SELECT[0x81b58cc]
postgres: godoy neo localhost(34476) SELECT(PostgresMain+0x1481)[0x81b7491]
postgres: godoy neo localhost(34476) SELECT[0x818f92a]
postgres: godoy neo localhost(34476) SELECT(PostmasterMain+0xc5b)[0x8190aeb]
postgres: godoy neo localhost(34476) SELECT(main+0x249)[0x8153f89]
/lib/libc.so.6(__libc_start_main+0xdc)[0xb7b00f9c]
postgres: godoy neo localhost(34476) SELECT[0x8078ce1]
=== Memory map: 
08048000-082d6000 r-xp  03:05 1037143/usr/bin/postgres
082d6000-082e rw-p 0028d000 03:05 1037143/usr/bin/postgres
082e-0859 rw-p 082e 00:00 0  [heap]
b6a0-b6a21000 rw-p b6a0 00:00 0 
b6a21000-b6b0 ---p b6a21000 00:00 0 
b6b4e000-b6c3e000 r-xp  03:05 945257 /usr/lib/libpython2.5.so.1.0
b6c3e000-b6c64000 rw-p 000ef000 03:05 945257 /usr/lib/libpython2.5.so.1.0
b6c64000-b6cec000 rw-p b6c64000 00:00 0 
b6cec000-b6d09000 r-xp  03:05 1404258/usr/lib/postgresql/plpgsql.so
b6d09000-b6d0b000 rw-p 0001d000 03:05 1404258/usr/lib/postgresql/plpgsql.so
b6d4c000-b6dcd000 rw-p b6d4c000 00:00 0 
b6dcd000-b6dd7000 r-xp  03:05 651576 /lib/libgcc_s.so.1
b6dd7000-b6dd9000 rw-p 9000 03:05 651576 /lib/libgcc_s.so.1
b6dd9000-b6dee000 r--p  03:05 1030633
/usr/share/locale/pt_BR/LC_MESSAGES/libc.mo
b6dee000-b6e5 rw-p b6dee000 00:00 0 
b6e5-b784c000 rw-s  00:08 9666560/SYSV0052e2c1 (deleted)
b784c000-b7892000 r--p  03:05 1037190
/usr/share/locale/pt_BR/LC_MESSAGES/postgres.mo
b7892000-b78c7000 r--s  03:05 392441 /var/run/nscd/db0rBNlN 
(deleted)
b78c7000-b7902000 r--p  03:05 1062284
/usr/lib/locale/pt_BR.utf8/LC_CTYPE
b7902000-b79d9000 r--p  03:05 1062285
/usr/lib/locale/pt_BR.utf8/LC_COLLATE
b79d9000-b7a0e000 r--s  03:05 392440 /var/run/nscd/group
b7a0e000-b7a43000 r--s  03:05 392433 /var/run/nscd/passwd
b7a43000-b7a45000 rw-p b7a43000 00:00 0 
b7a45000-b7a59000 r-xp  03:05 651558 /lib/libpthread-2.5.so
b7a59000-b7a5b000 rw-p 00013000 03:05 651558 /lib/libpthread-2.5.so
b7a5b000-b7a5d000 rw-p b7a5b000 00:00 0 
b7a5d000-b7a98000 r-xp  03:05 651584 /lib/libncurses.so.5.5
b7a98000-b7a9f000 r--p 0003a000 03:05 651584 /lib/libncurses.so.5.5
b7a9f000-b7aa4000 rw-p 00041000 03:05 651584 /lib/libncurses.so.5.5
b7aa4000-b7aab000 r-xp  03:05 1032787/usr/lib/libkrb5support.so.0.1
b7aab000-b7aad000 rw-p 6000 03:05 1032787/usr/lib/libkrb5support.so.0.1
b7aad000-b7aae000 rw-p b7aad000 00:00 0 
b7aae000-b7ad2000 r-xp  03:05 1032775/usr/lib/libk5crypto.so.3.0
b7ad2000-b7ad4000 rw-p 00023000 03:05 1032775/usr/lib/libk5crypto.so.3.0
b7ad4000-b7ae5000 r-xp  03:05 651569 /lib/libaudit.so.0.0.0
b7ae5000-b7ae7000 rw-p 0001 03


(the last line ends like that already...  I believe it is missing
 information...)



This happens when running this view:


CREATE OR REPLACE VIEW neolab.v_resultado_amostra_analise AS
SELECT vrr.amostra_analise_id AS id, vrr.id AS amostra_id, 
ar.liberado AS is_liberado, ar.alterado_em, 
an.metodologia, an.ibmp, an.valores_referencia, an.nome AS analise,
neolab.f_v_formata_valor_resultado(
(SELECT o_resultado 
FROM neolab.f_v_resultado_amostra_analise(vrr.amostra_analise_id)), 
an.id) AS resultado,
(SELECT o_is_limite_quantificacao 
 

Re: [GENERAL] How does filter order relate to query optimization?

2007-02-27 Thread Tom Lane
Andrew Edson <[EMAIL PROTECTED]> writes:
> I'm working on a php project that's supposed to draw information from the DB 
> for display, and I've been requested to speed up the display as much as 
> possible.  I'm drawing data from four tables, with an additional two that I 
> have to 'bounce' through to match the keys together.  Also, I've got five 
> direct filtering requirements, four of the 'value = X' type and a date range.
   
>   My question is this: Would shuffling the placement of the filtering
> requirements (t1.some_key = t2.some_key and t1.some_other_value = X,
> etc.) make a difference in processing speed for the response time?

No; certainly it will make no difference how you shuffle clauses that
involve different sets of tables.  If you've got clauses that wind up in
the same "Filter:" condition in the generated plan, and some of them
involve expensive functions, it might be useful to shuffle the
expensive-to-evaluate ones to the end.  But in most cases that's just
micro-optimization.  Usually what you want to think about for something
like this is plan optimization, ie, what order are the tables joined in
and with what join methods.  Have you looked at EXPLAIN ANALYZE results
for the query?
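[Editor's note: for example, with table and column names invented for illustration:]

```sql
-- Look at the join order and join methods the planner chose, and at
-- where the time actually goes, rather than at WHERE-clause ordering:
EXPLAIN ANALYZE
SELECT t1.*
  FROM t1
  JOIN t2 ON t1.some_key = t2.some_key
 WHERE t1.some_other_value = 'X'
   AND t1.created BETWEEN '2007-01-01' AND '2007-02-01';
```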

regards, tom lane



Re: [GENERAL] performance of partitioning?

2007-02-27 Thread George Nychis



George Nychis wrote:



cedric wrote:

On Tuesday 27 February 2007 at 15:00, George Nychis wrote:

Hey all,

So I have a master table called "flows" and 400 partitions in the format
"flow_*" where * is equal to some epoch.

Each partition contains ~700,000 rows and has a check such that 1 
field is

equal to a value:
"flows_1107246900_interval_check" CHECK ("interval" = '2005-02-01
03:35:00'::timestamp without time zone)

Each partition has a different and unique non-overlapping check.

This query takes about 5 seconds to execute:
dp=> select count(*) from flows_1107246900;
  count

  696836
(1 row)

This query has been running for 10 minutes now and hasn't stopped:
dp=> select count(*) from flows where interval='2005-02-01 03:35:00';

Isn't partitioning supposed to make the second query almost as fast?  My
WHERE is exactly the partitioning constraint, therefore it only needs 
to go

to 1 partition and execute the query.

Why would it take magnitudes longer to run?  Am i misunderstanding
something?
perhaps you should consider constraint_exclusion 
http://www.postgresql.org/docs/current/static/ddl-partitioning.html
http://www.postgresql.org/docs/current/static/runtime-config-query.html#GUC-CONSTRAINT-EXCLUSION 


Thanks!
George





That sounds like what i'm looking for, thanks.  I'll give it a try and 
report back.


- George




Worked perfectly; the two queries now have nearly identical execution times.  
Thank you!

- George



Re: [GENERAL] Unable to restore dump due to client encoding issues -- or, when is SQL_ASCII really UTF8

2007-02-27 Thread Bill Moran
In response to Michael Fuhr <[EMAIL PROTECTED]>:

[snip]

> > The beginning of the dump file I am restoring has the following:
> > --
> > -- PostgreSQL database dump
> > --
> > 
> > SET client_encoding = 'SQL_ASCII';
> > [...]
> > 
> > But when I try to pull the dump in with psql, I get the following errors:
> > ERROR:  invalid byte sequence for encoding "UTF8": 0xa0
> > HINT:  This error can also happen if the byte sequence does not match the 
> > encoding expected by the server, which is controlled by "client_encoding".
> > 
> > Connecting to the database and issuing "show client_encoding" shows that
> > the database is indeed set to SQL_ASCII.
> 
> client_encoding doesn't show the database encoding, it shows the
> client encoding; execute "show server_encoding" to see the database
> encoding.  You can also use "psql -l" or "\l" from within psql to
> see all databases and their encodings.  The error suggests that the
> database encoding is UTF8.

Ahh ... see, I knew there'd be something obvious I was missing, but all
the google searches I tried were focused on client encoding, so I wasn't
getting this answer.

The database was, indeed, UTF8, which is the default on newer Postgres.
I find it odd that I can't alter the encoding after database creation,
but in this case it wasn't a big deal, as it's a lab system and I can
just back up a few steps and start over.

Good to learn this change now, as opposed to during the next production
deployment :)

> > Now ... I'm expecting the server to accept any byte sequence, since we're
> > using SQL_ASCII, but that is (obviously) not the case.  Am I missing
> > something obvious here?  Grepping the entire dump file shows absolutely
> > no references to UTF8 ... so why is the server trying to validate the
> > byte string as UTF8?
> 
> Probably because the database is UTF8 (see above).  Either create
> the database as SQL_ASCII (see createdb's -E option) or change the
> client_encoding setting in the dump to whatever the encoding really
> is (probably LATIN1 or WIN1252 for Western European languages).

Yup.  Thanks for the quick response!

-- 
Bill Moran
Collaborative Fusion Inc.



[GENERAL] select ARRAY[1,4,3] @> ARRAY[3,1] gives ERROR: operator does not exist: integer[] @> integer[] ???

2007-02-27 Thread David Gagnon
Hi all,

  I tried SELECT ARRAY[1,4,3] @> ARRAY[3,1] in pgAdmin III and got an error.
This example comes directly from the PostgreSQL manual, section 9.14, Array
Functions and Operators.

Did I screw something up?  I tried on an 8.1 PostgreSQL and got the same
error.


ERROR: operator does not exist: integer[] @> integer[]
SQL state: 42883
Hint: No operator matches the given name and argument type(s). You may need
to add explicit type casts.

Thanks to point me what I'm doing wrong or how to fix this :-/

Regards

David





Re: [GENERAL] dropping a master table and all of its partitions?

2007-02-27 Thread Tom Lane
George Nychis <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> While neither number
>> is exactly awe-inspiring, I'm not seeing why you think the DROP is
>> particularly broken?
>> 
> Then maybe it's a bug in my version of postgresql, what version are you 
> using? 

I tried it on HEAD and 8.2 and didn't see a problem.  Just now I retried
on 8.1 and indeed it eats memory :-(.  It looks like the immediate
difference is the lack of this 8.2 fix:

2006-01-08 15:04  tgl

* src/backend/utils/cache/relcache.c: Avoid leaking memory while
reading toasted entries from pg_rewrite, and nail a couple more
system indexes into cache.  This doesn't make any difference in
normal system operation, but when forcing constant cache resets
it's difficult to get through the rules regression test without
these changes.

There are quite a few other changes in 8.2 that are likely to help you
if you want to work with large numbers of partitions, so rather than
worrying about whether this change would be safe to back-patch, I'd
suggest an upgrade.  Even with 8.2 though I'm not sure that you will get
decent performance with thousands of partitions.  That infrastructure
is intended for maybe 10 to 100 partitions, not thousands ...

regards, tom lane



Re: [GENERAL] Unable to restore dump due to client encoding issues -- or, when is SQL_ASCII really UTF8

2007-02-27 Thread Michael Fuhr
On Tue, Feb 27, 2007 at 10:31:47AM -0500, Bill Moran wrote:
> The database was, indeed, UTF8, which is the default on newer Postgres.

initdb determines the encoding from your environment.  If you're
on a Unix-like platform run "env | grep UTF" and you'll probably
see that LANG and/or LC_* is set to something like "en_US.UTF-8".

> I find it odd that I can't alter the encoding after database creation,
> but in this case it wasn't a big deal, as it's a lab system and I can
> just back up a few steps and start over.

Altering the database encoding would mean that all string data would
have to be checked and possibly converted.  Doing that on a large
running system would be problematic; it would probably be just as
easy to dump and restore the entire database.

-- 
Michael Fuhr



Re: [GENERAL] select ARRAY[1,4,3] @> ARRAY[3,1] gives ERROR: operator does not exist: integer[] @> integer[] ???

2007-02-27 Thread Tom Lane
"David Gagnon" <[EMAIL PROTECTED]> writes:
>   I tried select ARRAY[1,4,3] @> ARRAY[3,1] in pgAdminIII and get an error.
> ...
> Thanks to point me what I'm doing wrong or how to fix this :-/

Don't use the 8.2 manuals as a guide to an 8.1 or older PG?  That
operator is new in 8.2.
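[Editor's note: for reference, on 8.2 or later the example from the manual works as written.]

```sql
-- Array containment: true because {1,4,3} contains every element of
-- {3,1}.  Requires PostgreSQL 8.2+; the @> operator on plain arrays
-- does not exist in 8.1.
SELECT ARRAY[1,4,3] @> ARRAY[3,1];  -- returns true
```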

regards, tom lane



Re: [GENERAL] Unable to restore dump due to client encoding issues -- or, when is SQL_ASCII really UTF8

2007-02-27 Thread Tom Lane
Bill Moran <[EMAIL PROTECTED]> writes:
> In response to Michael Fuhr <[EMAIL PROTECTED]>:
>>> Connecting to the database and issuing "show client_encoding" shows that
>>> the database is indeed set to SQL_ASCII.
>> 
>> client_encoding doesn't show the database encoding, it shows the
>> client encoding; execute "show server_encoding" to see the database
>> encoding.

> The database was, indeed, UTF8, which is the default on newer Postgres.

No, it's not necessarily the default --- the default is the encoding
used by whatever locale you initdb'd in.  But what I find odd about the
above is that client_encoding should default to equal server_encoding
unless something on the client side specifically overrides it.  The
above behavior suggests that you've got a .psqlrc or PGCLIENTENCODING
environment variable or some such that is forcing client_encoding to
SQL_ASCII when the server encoding is something different.  That strikes
me as a pretty bad practice; there is use for forcing client_encoding to
something specific, but forcing it to SQL_ASCII seems useless and
possibly dangerous.

regards, tom lane



[GENERAL] Building a record in a function

2007-02-27 Thread Robert Fitzpatrick
I haven't done a whole lot of PL/pgSQL functions returning records, only ones
based on a query.  I was wondering: can I build a record from the results of
the query, using other values for some of the fields in the record?  I know
RETURN NEXT requires a record, and I want to build my own record to include
some argument values.  I have my record declared and my query results in that
record.  So, how do you construct a record in a function?
Below is my work...

  period := $1;
  rep := $2;
  FOR dateval IN SELECT (period::date + (s || ' month')::interval)::date AS "date"
FROM generate_series(0,11) AS s
  loop
SELECT INTO picks
  COUNT(public.view_pick_1_data.units)::numeric AS units
FROM
  public.view_pick_1_data
WHERE
  (public.view_pick_1_data.rep = rep) AND
  (public.view_pick_1_data.start_date = dateval."date")
GROUP BY
  public.view_pick_1_data.rep,
  public.view_pick_1_data.start_date;
return next picks;
  end loop;
  return;

I need the period in the record. My record type has three fields of
text, date and numeric where I need the word units in the first field,
the period from the loop query in the second and count from the picks
query in the last field. If you could return fields, which I see you
cannot, it would look like this:

return next 'units', dateval."date", picks.units

The final objective is a crosstab based on the return of this function.

-- 
Robert




Re: [GENERAL] How to debug this crash?

2007-02-27 Thread Tom Lane
Jorge Godoy <[EMAIL PROTECTED]> writes:
> This is with:

> OpenSuSE 10.2
> postgresql-server-8.1.5-13
> postgresql-libs-8.1.5-13
> postgresql-docs-8.1.5-13
> postgresql-devel-8.1.5-13
> postgresql-8.1.5-13
> postgresql-pl-8.1.5-15

What python version?  (Hint: pre-8.2 plpython is known not to work
with python 2.5)

regards, tom lane



Re: [GENERAL] select ARRAY[1,4,3] @> ARRAY[3,1] gives ERROR: operator does not exist: integer[] @> integer[] ???

2007-02-27 Thread David Gagnon
Dam! my computer rebooted and restarted the old 8.1 postmaster...

Thanks for your help.  Sorry for the noise.

Best Regards
David 

-----Original Message-----
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, February 27, 2007 11:02 AM
To: David Gagnon
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] select ARRAY[1,4,3] @> ARRAY[3,1] gives ERROR:
operator does not exist: integer[] @> integer[] ??? 

"David Gagnon" <[EMAIL PROTECTED]> writes:
>   I tried select ARRAY[1,4,3] @> ARRAY[3,1] in pgAdminIII and get an
error.
> ...
> Thanks to point me what I'm doing wrong or how to fix this :-/

Don't use the 8.2 manuals as a guide to an 8.1 or older PG?  That
operator is new in 8.2.

regards, tom lane





[GENERAL] pg_dump and restore problem with function as DEFAULT-Constraint

2007-02-27 Thread Markus Schulz
Hello,

i have some trouble with dumping and restoring a database with 
postgresql 7.4.7-6sarge4 (debian sarge).

Some tables have plpgsql-functions as DEFAULT contraints. But these 
functions are dumped after the CREATE TABLE command. Now these tables 
can't be created on restore.

If i'm dumping only structure and import twice in a new and empty 
database (this works), i can't restore the data-only dump. Cause in the 
data-only dump the table order was wrong in respect to FOREIGN KEY 
constraints and the constraints are now active (instead of a full-dump, 
where constraints will be created after data was inserted).

Why DEFAULT (and NOT NULL) constraints would'nt be added after data 
insertion? (like Primary-Key and Foreign-Key)

What can i do to enforce the order of plpgsql functions prior to all 
table structures?

-- 
Markus Schulz - [EMAIL PROTECTED]



Re: [GENERAL] pg_dump and restore problem with function as DEFAULT-Constraint

2007-02-27 Thread A. Kretschmer
On Tue, 27.02.2007 at 17:44:06 +0100, Markus Schulz wrote:
> Hello,
> 
> i have some trouble with dumping and restoring a database with 
> postgresql 7.4.7-6sarge4 (debian sarge).

Is the new database also 7.4? Why not 8.1 or 8.2?

> 
> Some tables have plpgsql functions as DEFAULT constraints. But these 
> functions are dumped after the CREATE TABLE command. Now these tables 
> can't be created on restore.

IIRC a known issue with 7.x, but solved (I hope...) in 8.x. If you
upgrade to such a version, you should use the pg_dump(all) from that
version against the old database.

> 
> If i'm dumping only structure and import twice in a new and empty 
> database (this works), I can't restore the data-only dump, because in the 
> data-only dump the table order was wrong with respect to FOREIGN KEY 
> constraints and the constraints are now active (instead of a full-dump, 
> where constraints will be created after data was inserted).
> 
> Why wouldn't DEFAULT (and NOT NULL) constraints be added after data 
> insertion? (like Primary-Key and Foreign-Key)
> 
> What can i do to enforce the order of plpgsql functions prior to all 
> table structures?

You can use pg_restore with -l to generate a listfile for all objects in
the database. Then you can reorder this ($EDITOR) and then use -L to use
this ordered listfile to enforce the right order of objects. I hope this
helps you.

(man pg_restore)
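
As a sketch, the listfile workflow described above looks roughly like this (the file and database names are placeholders, and it requires a custom-format dump, since plain SQL dumps carry no table of contents):

```shell
pg_dump -Fc mydb > mydb.dump          # custom-format dump
pg_restore -l mydb.dump > mydb.list   # write the table of contents to a listfile
$EDITOR mydb.list                     # reorder: move FUNCTION entries above TABLE entries
pg_restore -L mydb.list -d newdb mydb.dump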


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net



Re: [GENERAL] pg_dump and restore problem with function as DEFAULT-Constraint

2007-02-27 Thread Tom Lane
Markus Schulz <[EMAIL PROTECTED]> writes:
> i have some trouble with dumping and restoring a database with 
> postgresql 7.4.7-6sarge4 (debian sarge).
> ...
> What can i do to enforce the order of plpgsql functions prior to all 
> table structures?

Update to PG 8.x --- IIRC 8.0 was the first release in which pg_dump
really understood about dependencies properly.  7.x is dumping basically
in creation order, which falls down as soon as you start using ALTER
TABLE to create dependencies that weren't there originally.

regards, tom lane



Re: [GENERAL] Building a record in a function

2007-02-27 Thread Richard Huxton

Robert Fitzpatrick wrote:

Haven't done a whole lot of plsql returning records, only those based on
a query. I was wondering, can I build a record from the results of the
query using other values for some fields in the record? I know 'return
next' requires a record and want to build my own record to include some
argument values in the record. I have my record declared and my query
results in that record. So, how do you construct a record in a function?
Below is my work...

  period := $1;
  rep := $2;
  FOR dateval IN SELECT (period::date+(s||'month')::interval)::date from 
generate_series(0,11)s
  loop
SELECT INTO picks
  COUNT(public.view_pick_1_data.units)::numeric AS units,

CURRENT_DATE AS "date",
'x'::text AS some_text_field

That's the simplest way.
Alternatively, you can use CREATE TYPE to make the required row-type and 
fill a variable of that type.
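
A minimal sketch of that CREATE TYPE approach (the type and field names here are made up for illustration):

```sql
CREATE TYPE pick_row AS (label text, period date, units numeric);

-- then, inside the plpgsql function:
--   DECLARE
--     result pick_row;
--   ...
--   result.label  := 'units';
--   result.period := dateval."date";
--   result.units  := picks.units;
--   RETURN NEXT result;
```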

--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] How to debug this crash?

2007-02-27 Thread Jorge Godoy
Tom Lane <[EMAIL PROTECTED]> writes:

> Jorge Godoy <[EMAIL PROTECTED]> writes:
>> This is with:
>
>> OpenSuSE 10.2
>> postgresql-server-8.1.5-13
>> postgresql-libs-8.1.5-13
>> postgresql-docs-8.1.5-13
>> postgresql-devel-8.1.5-13
>> postgresql-8.1.5-13
>> postgresql-pl-8.1.5-15
>
> What python version?  (Hint: pre-8.2 plpython is known not to work
> with python 2.5)

Bingo! ;-)

I'll upgrade to 8.2.


Be seeing you,
-- 
Jorge Godoy  <[EMAIL PROTECTED]>



Re: [GENERAL] Unable to restore dump due to client encoding issues -- or, when is SQL_ASCII really UTF8

2007-02-27 Thread Bill Moran
In response to Tom Lane <[EMAIL PROTECTED]>:

> Bill Moran <[EMAIL PROTECTED]> writes:
> > In response to Michael Fuhr <[EMAIL PROTECTED]>:
> >>> Connecting to the database and issuing "show client_encoding" shows that
> >>> the database is indeed set to SQL_ASCII.
> >> 
> >> client_encoding doesn't show the database encoding, it shows the
> >> client encoding; execute "show server_encoding" to see the database
> >> encoding.
> 
> > The database was, indeed, UTF8, which is the default on newer Postgres.
> 
> No, it's not necessarily the default --- the default is the encoding
> used by whatever locale you initdb'd in.

Ok, but Postgres' idea of what locale translates to what encoding has
changed from 8.1.4 to 8.1.8 ... or FreeBSD changed something about the
way it uses locales ... or something ...

> But what I find odd about the
> above is that client_encoding should default to equal server_encoding
> unless something on the client side specifically overrides it.  The
> above behavior suggests that you've got a .psqlrc or PGCLIENTENCODING
> environment variable or some such that is forcing client_encoding to
> SQL_ASCII when the server encoding is something different.  That strikes
> me as a pretty bad practice; there is use for forcing client_encoding to
> something specific, but forcing it to SQL_ASCII seems useless and
> possibly dangerous.

Well, the explanation for that got snipped -- the file created by pg_dump
had a set client_encoding statement at the beginning of it, which seems
logical to me: the database this was dumped from was SQL_ASCII, so the
restore process should attempt to replicate that.  I'm chalking this up
to "Bill didn't notice that the new databases were getting created with a
different encoding."


-- 
Bill Moran
Collaborative Fusion Inc.



Re: [GENERAL] pg_dump and restore problem with function as DEFAULT-Constraint

2007-02-27 Thread Markus Schulz
On Tuesday, 27 February 2007 18:28, A. Kretschmer wrote:
> On Tue, 27.02.2007 at 17:44:06 +0100, Markus Schulz wrote:
> > Hello,
> >
> > i have some trouble with dumping and restoring a database with
> > postgresql 7.4.7-6sarge4 (debian sarge).
>
> Is the new database also 7.4? Why not 8.1 or 8.2?

7.4. We are using some self-written modules, and they are not currently 
ported to 8.x.

> > Some tables have plpgsql functions as DEFAULT constraints. But these
> > functions are dumped after the CREATE TABLE command. Now these
> > tables can't be created on restore.
>
> IIRC a know issue with 7.x, but solved (i hope...) in 8.x. If you
> upgrade to such a version, you should use the pg_dump(all) from this
> against the old database.
>
> > If i'm dumping only structure and import twice in a new and empty
> > database (this works), I can't restore the data-only dump, because in
> > the data-only dump the table order was wrong with respect to FOREIGN
> > KEY constraints and the constraints are now active (instead of a
> > full-dump, where constraints will be created after data was
> > inserted).
> >
> > Why wouldn't DEFAULT (and NOT NULL) constraints be added after data
> > insertion? (like Primary-Key and Foreign-Key)
> >
> > What can i do to enforce the order of plpgsql functions prior to
> > all table structures?
>
> You can use pg_restore with -l to generate a listfile for all objects
> in the database. Then you can reorder this ($EDITOR) and then use -L
> to use this ordered listfile to enforce the right order of objects. I
> hope this helps you.

thanks, nice idea. I will try it.

-- 
Markus Schulz - [EMAIL PROTECTED]



Re: [GENERAL] stored procedure optimisation...

2007-02-27 Thread Anton Melser

RAISE NOTICE 'I am here, and myvar = % and thatvar = %', myvar, thatvar;


Thanks... it is indeed a gem that little instruction!!!
Cheers
Anton



Re: [GENERAL] How does filter order relate to query optimization?

2007-02-27 Thread Andrew Edson
I have now, over two of our setups, and I'm getting another, er, interesting 
problem.  Same statement, same data, wildly different times.  One's taking 
nearly half an hour, the other's ready within a few minutes.  It's a rather 
large database, so I'm not surprised at a little delay (although shortening 
that as much as possible is the goal), but still...
   
  Statement in question.
  select substring(ttrans.tran_dt, 1, 10) as tran_dt,
         ttrans.dist_id as dist_id,
         ttrans.cntrct_id as cntrct_id,
         cntrt.cntrtyp_cd as cntrt_type,
         cntrt.actual_amt as cntrt_amt,
         acntrec.mth_reck as mth_reck,
         persn.frst_nm as fnm,
         persn.lst_nm as lnm
  from ttrans, cntrt, acntrec, persn, custm, addru
  where ttrans.tran_dt >= '2007-01-01'
    and ttrans.tran_dt < '2007-02-01'
    and ttrans.cntrct_id = cntrt.cntrct_id
    and cntrt.cntrct_seq = addru.cntrct_seq
    and addru.aunit_seq = acntrec.aunit_seq
    and cntrt.cntrtyp_cd = 260
    and cntrt.clnt_seq = custm.clnt_seq
    and custm.person_seq = persn.person_seq
    and acntrec.cd_inst = 49
    and acntrec.months = 49
    and cntrt.dow_flg1 = 'NO'
  order by ttrans.dist_id asc, cntrt.cntrct_id asc, cntrt.cntrct_id asc,
           cntrt.cntrct_id asc, cntrt.cntrct_id asc;
  
Results: Slow system
   
QUERY PLAN

 Sort  (cost=169629.77..169629.78 rows=1 width=91) (actual 
time=1262832.907..1262833.259 rows=120 loops=1)
   Sort Key: ttrans.dist_id, cntrt.cntrct_id
   ->  Nested Loop  (cost=0.00..169629.76 rows=1 width=91) (actual 
time=18755.330..1262808.593 rows=120 loops=1)
 Join Filter: (("inner".cntrct_id)::bpchar = "outer".cntrct_id)
 ->  Nested Loop  (cost=0.00..18902.45 rows=1 width=68) (actual 
time=369.208..20016.454 rows=65 loops=1)
   ->  Nested Loop  (cost=0.00..18897.73 rows=1 width=48) (actual 
time=347.437..15905.930 rows=65 loops=1)
 ->  Nested Loop  (cost=0.00..18892.32 rows=1 width=48) 
(actual time=338.897..12678.319 rows=65 loops=1)
   ->  Nested Loop  (cost=0.00..18875.15 rows=3 
width=43) (actual time=317.317..8104.591 rows=68 loops=1)
 ->  Seq Scan on cntrt  (cost=0.00..18857.61 
rows=3 width=43) (actual time=317.181..4249.752 rows=68 loops=1)
   Filter: (((cntrtyp_cd)::text = 
'260'::text) AND (dow_flg1 = 'NO'::bpchar))
 ->  Index Scan using fk_cntrct on addru  
(cost=0.00..5.83 rows=1 width=8) (actual time=56.661..56.666 rows=1 loops=68)
   Index Cond: ("outer".cntrct_seq = 
addru.cntrct_seq)
   ->  Index Scan using fk_aunit on acntrec  
(cost=0.00..5.71 rows=1 width=13) (actual time=66.415..67.243 rows=1 loops=68)
 Index Cond: ("outer".aunit_seq = 
acntrec.aunit_seq)
 Filter: ((cd_inst = 49) AND ((months)::text = 
'49'::text))
 ->  Index Scan using "pkeyCUSTM" on custm  
(cost=0.00..5.39 rows=1 width=8) (actual time=49.633..49.638 rows=1 loops=65)
   Index Cond: ("outer".clnt_seq = custm.clnt_seq)
   ->  Index Scan using "pkeyPERSN" on persn  (cost=0.00..4.70 
rows=1 width=28) (actual time=63.212..63.220 rows=1 loops=65)
 Index Cond: ("outer".person_seq = persn.person_seq)
 ->  Seq Scan on ttrans  (cost=0.00..149327.10 rows=112017 width=23) 
(actual time=163.610..18845.905 rows=86415 loops=65)
   Filter: ((tran_dt >= '2007-01-01 00:00:00-06'::timestamp with 
time zone) AND (tran_dt < '2007-02-01 00:00:00-06'::timestamp with time zone))
 Total runtime: 1262856.689 ms
(22 rows)
  
Results: 'Normal' system
   
QUERY PLAN

 Sort  (cost=177304.02..177304.03 rows=1 width=125) (actual 
time=271165.198..271165.432 rows=120 loops=1)
   Sort Key: ttrans.dist_id, cntrt.cntrct_id
   ->  Nested Loop  (cost=0.00..177304.01 rows=1 width=125) (actual 
time=21612.109..270938.877 rows=120 loops=1)
 ->  Nested Loop  (cost=0.00..177299.40 rows=1 width=106) (actual 
time=21538.207..267696.943 rows=120 loops=1)
   ->  Nested Loop  (cost=0.00..177294.64 rows=1 width=106) (actual 
time=21271.967..263316.202 rows=120 loops=1)
 Join Filter: (("inner".cntrct_id)::bpchar = 
"outer".cntrct_id)
 ->  Nested Loop  (cost=0.00..19221.50 rows=1 width=48) 
(actual time=2057.840..25089.891 rows=65 loops=1)
   ->  Nested Loop  (cost=0.00..19210.31 rows=2 
width=43) (actual time=1884.550..17108.249 rows=68 

[GENERAL] How often do I need to reindex tables?

2007-02-27 Thread Dhaval Shah

I am planning to use 8.2 and the average inserts/deletes and updates
across all tables is moderate. That is, it is a moderate sized
database with moderate usage of tables.

Given that, how often do I need to reindex the tables? Do I need to do
it everyday?

Also, with 8.2 I understand that I do not have to run vacuum manually
anymore. Does it do auto-vacuum?

Thanks in advance
Dhaval Shah



Re: [GENERAL] pg_dump and restore problem with function as DEFAULT-Constraint

2007-02-27 Thread Andreas Kretschmer
Markus Schulz <[EMAIL PROTECTED]> schrieb:
> > You can use pg_restore with -l to generate a listfile for all objects
> > in the database. Then you can reorder this ($EDITOR) and then use -L
> > to use this ordered listfile to enforce the right order of objects. I
> > hope this helps you.
> 
> thanks, nice idea. I will try it.

I have never used this; please tell me if you have success with it.
(And yes, I hope I never need it...)


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."(unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°



Re: [GENERAL] How often do I need to reindex tables?

2007-02-27 Thread Jim C. Nasby
On Tue, Feb 27, 2007 at 11:26:02AM -0800, Dhaval Shah wrote:
> I am planning to use 8.2 and the average inserts/deletes and updates
> across all tables is moderate. That is, it is a moderate sized
> database with moderate usage of tables.
> 
> Given that, how often do I need to reindex the tables? Do I need to do
> it everyday?
 
No, you should very rarely if ever need to do it.

If you're really concerned, I suggest monitoring average tuples per
index page; something like

SELECT relname, reltuples/relpages FROM pg_class WHERE relkind = 'i' AND
relpages > 1000;

That'll show tuples/page for all indexes over 8MB in size.

> Also with 8.2, I do not have to do vacuum anymore or that is what I
> understand. Does it do auto-vacuum?

You still need to enable autovacuum; see the autovacuum parameter in postgresql.conf.
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)



[GENERAL] How to use OIDs on tables......OR....a better solution?

2007-02-27 Thread Lenorovitz, Joel
Greetings,

I have seen many a warning against using the Postgres internal OIDs to
refer to DB objects, but I've got a situation that may warrant it.  In a
nutshell, I've got a table called 'Notes', which contains (you guessed
it) notes that users can create to attach to various records in the DB.
These records may well be in different tables of different sorts of
entities (e.g., an inventory item, a calendar event, a facility, etc.).
One note may be attached to many records and each record may have
multiple notes.

The notes are attached to the records via a separate associative table
that contains the 'note_id', the 'record_id' (both generated by a
sequence), and the 'table_name' in which the record resides.  It's
manageable now, but my gut tells me that the association to the table
should be handled by something besides just 'table_name' because if that
were to be changed it would break things or potentially cause a lot of
maintenance issues.  Is the OID a good bet for something to use as a
unique and reliable table identifier?

If so, is there an elegant way to dereference the OID instead of using
the alias (i.e. table name) to run a query against that table?
   I want to do this:
   > SELECT * FROM inventory_item;
   But, the following does not work (where 16675 is the OID of table
inventory_item):
   > SELECT * FROM 16675;

The one (very scary) pitfall I can see with using the OID is that if the
DB were rebuilt, there's probably no guarantee or expectation that a
table would have the same OID as before.  That's certainly a deal
breaker.

Maybe the best solution is to continue using the table name, but to
create that as a foreign key to the official table name in the
information_schema?  That way it could cascade if the name was changed,
but I'm not sure what kind of ugliness might result if you tried to drop
the table and it still had a referencing record.  Any opinions on that
or any other ways to approach this challenge?

Thanks in advance,
Joel



[GENERAL] [ANN]VTD-XML 2.0

2007-02-27 Thread Jimmy Zhang
XimpleWare is proud to announce the release of version 2.0 of 
VTD-XML, the next generation XML parser/indexer. The new 
features introduced in this version are:


* VTD+XML version 1.0: the world's first true native XML index 
that is simple, general-purpose and backward-compatible with XML. 
* NodeRecorder class that saves VTDNav's cursor location for 
later sequential access.
* Overwrite capability
* Lexical comparisons between VTD and strings

To download the software, please go to 
http://sourceforge.net/project/showfiles.php?group_id=110612


To read the latest benchmark report please go to
http://vtd-xml.sf.net/benchmark1.html

To get the latest API overview
http://www.ximpleware.com/vtd-xml_intro.pdf

- Original Message - 
From: "Jim C. Nasby" <[EMAIL PROTECTED]>

To: "Dhaval Shah" <[EMAIL PROTECTED]>
Cc: 
Sent: Tuesday, February 27, 2007 11:56 AM
Subject: Re: [GENERAL] How often do I need to reindex tables?



On Tue, Feb 27, 2007 at 11:26:02AM -0800, Dhaval Shah wrote:

I am planning to use 8.2 and the average inserts/deletes and updates
across all tables is moderate. That is, it is a moderate sized
database with moderate usage of tables.

Given that, how often do I need to reindex the tables? Do I need to do
it everyday?


No, you should very rarely if ever need to do it.

If you're really concerned, I suggest monitoring average tuples per
index page; something like

SELECT relname, reltuples/relpages FROM pg_class WHERE relkind = 'i' AND
relpages > 1000;

That'll show tuples/page for all indexes over 8MB in size.


Also with 8.2, I do not have to do vacuum anymore or that is what I
understand. Does it do auto-vacuum?


You still need to enable autovacuum; see the autovacuum parameter in postgresql.conf.
--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)







Re: [GENERAL] How to use OIDs on tables......OR....a better solution?

2007-02-27 Thread Martijn van Oosterhout
On Tue, Feb 27, 2007 at 12:57:51PM -0700, Lenorovitz, Joel wrote:
> The notes are attached to the records via a separate associative table
> that contains the 'note_id', the 'record_id' (both generated by a
> sequence), and the 'table_name' in which the record resides.  It's
> manageable now, but my gut tells me that the association to the table
> should be handled by something besides just 'table_name' because if that
> were to be changed it would break things or potentially cause a lot of
> maintenance issues.  Is the OID a good bet for something to use as a
> unique and reliable table identifier?

It's possible the "regclass" type would be better. It is an OID but it
dumps and reads as the table name. And when you rename the table, it
gives the new name.

> If so, is there an elegant way to dereference the OID instead of using
> the alias (i.e. table name) to run a query against that table?
>I want to do this:
>> SELECT * FROM inventory_item;
>But, the following does not work (where 16675 is the OID of table
> inventory_item):
>> SELECT * FROM 16675;

No. You'll have to build the query dynamically. As I said, you can use
"::regclass::text" to get the table name, but you'll have to issue the
query separately.
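
A sketch of what the regclass approach could look like for the notes association table (all names here are hypothetical):

```sql
CREATE TABLE note_link (
    note_id   integer  NOT NULL,
    record_id integer  NOT NULL,
    rel       regclass NOT NULL  -- accepts a table name, stores its OID
);

INSERT INTO note_link VALUES (1, 42, 'inventory_item');

-- rel::text yields the table's *current* name, which can then be used
-- to build the query string dynamically:
SELECT 'SELECT * FROM ' || rel::text FROM note_link WHERE note_id = 1;
```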

Hope this helps,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




Re: [GENERAL] grant on sequence and pg_restore/pg_dump problem

2007-02-27 Thread Bruce Momjian
Tony Caduto wrote:
> Hi,
> I did a quick search and didn't see anything on this, if I missed it 
> sorry in advance.
> Anyway, I was doing a restore of an 8.1 database (on an 8.1 server) using 
> the 8.2 pg_restore, and it was throwing errors when it was trying to
> restore the permissions on the sequences.
> Basically, pg_restore was using GRANT ON SEQUENCE against the 8.1 
> database, which of course 8.1 knows nothing about.
> 
> Is there a switch or something I missed that would allow this to work 
> properly on a 8.1 or lower database?
> Or do I have to now have 2 versions of dump/restore in order to do this?

You can use 8.2 to dump 8.1, but for restore, you should use the same
version as the target database.

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>  http://momjian.us
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



[GENERAL] How to Kill IDLE users

2007-02-27 Thread Goran Rakic
I have installed PostgreSQL 8.2 on W2K3 32-bit, with 100 users connecting
from desktop applications and 200 users connecting through a web service
from handheld computers.

I have a problem with the second group of users.

Often they do not disconnect from the PostgreSQL server, and as time
passes I have a lot of IDLE users and very high memory consumption. From
time to time that can crash the server, and then only restarting the
server will remove all postgres.exe processes from memory. This annoys me,
so I built a script to restart the server every night. I could not get the
programmers to change the program.

Is there a parameter which will disconnect IDLE users if they exceed some
time limit, or some program which will purge non-active postgres.exe
processes from memory?



Thanks in advance



[EMAIL PROTECTED]








[GENERAL] Recovering from a corrupt database

2007-02-27 Thread Dhaval Shah

I am testing postgres for HA and at one stage I could not start the
db. I get the following message:

LOG:  database system was shut down at 2007-02-22 16:07:51 PST
LOG:  could not open file "pg_xlog/0001000B" (log file
0, segment 11): No such file or directory
LOG:  invalid primary checkpoint record
LOG:  could not open file "pg_xlog/0001000B" (log file
0, segment 11): No such file or directory
LOG:  invalid secondary checkpoint record
PANIC:  could not locate a valid checkpoint record
LOG:  startup process (PID 21951) was terminated by signal 6
LOG:  aborting startup due to startup process failure
LOG:  logger shutting down

I think files from pg_xlog got removed. I am trying to start the db,
but it fails with the above message. How do I recover and come to a point
where I can start the database? I do have the schema dump available, so
once it is started I can restore it. I just need it to start to a
point where I can connect to it and possibly create/drop databases.

Thanks in advance!
Dhaval Shah



Re: [GENERAL] How to Kill IDLE users

2007-02-27 Thread Scott Marlowe
On Tue, 2007-02-27 at 15:23, Goran Rakic wrote:
> I have installed PostgreSQL 8.2 on W2K3 32-bit, with 100 users connecting
> from desktop applications and 200 users connecting through a web service
> from handheld computers.
> 
> I have a problem with the second group of users.
> 
> Often they do not disconnect from the PostgreSQL server, and as time
> passes I have a lot of IDLE users and very high memory consumption. From
> time to time that can crash the server, and then only restarting the
> server will remove all postgres.exe processes from memory. This annoys
> me, so I built a script to restart the server every night. I could not
> get the programmers to change the program.

There are a few issues here, and I have a few questions.

Idle users don't normally use a lot of resources, and they certainly
don't generally use increasing resources just sitting idle.  However,
idle in transaction users (a different thing) do cause problems in that
the database cannot free up deleted tuples during vacuum.  This can
result in a bloated database store.

If your server is crashing from 100 idle users, something is wrong.

Are you running 8.2.0 or 8.2.3?  If you're running a version of 8.2 from
before 8.2.3 you should upgrade as soon as possible.

Your programmers are writing broken programs if they are leaving
connections idle in transaction.  You have my permission to beat them. 
:) If they are just leaving connections idle, plain old idle, then
that's probably no big deal.

Can you run a shell script that just connects until all the connections
are used up?  Or does that crash the server?  If it crashes it, then
you've got something configured wrong, and need to either reduce the max
number of connections, or increase the resources available to the
postgresql db server.

You might wanna post a bit more info, like what things you've changed in
postgresql.conf, how much memory your machine has, and what the output
of 

select * from pg_stat_activity ;

says

> Is there parameters which will disconnect IDLE users if they excided some
> time or some program which will purge memory from non active postgres.exe

The problem is it's hard to tell the difference between someone who's
just taking a long time to commit a real transaction and someone who's
forgotten that they logged in yesterday.  The general solution here is
to have a firewall that will time out idle connections after X time. 
However, such a configuration can be overcome by judicious use of tcp
keepalives.

You can use something like:

select procpid, usename, now()-query_start from pg_stat_activity where
current_query like '%IDLE%' and now()-query_start >  interval '5
minutes';

to list all the users that have been idle over the interval in the
list.  Using some kind of scripting language, you could then issue kill
signals to those procpids.
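
As a sketch, the query can even emit the kill commands directly (this assumes an 8.x server, where an idle backend reports current_query = '&lt;IDLE&gt;'; finding a Windows equivalent of kill is left to the reader):

```sql
-- emit one "kill <pid>" line per backend idle for more than 5 minutes;
-- feed the output to a shell (or a Windows equivalent) to disconnect them
SELECT 'kill ' || procpid
FROM pg_stat_activity
WHERE current_query = '<IDLE>'
  AND now() - query_start > interval '5 minutes';
```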

Note that I'm a unix guy, so translating this to the (in my mind)
insanity that is windows is up to you. :)



Re: [GENERAL] How often do I need to reindex tables?

2007-02-27 Thread Ron Johnson

On 02/27/07 13:26, Dhaval Shah wrote:
> I am planning to use 8.2 and the average inserts/deletes and updates
> across all tables is moderate. That is, it is a moderate sized
> database with moderate usage of tables.

Moderate?







[GENERAL] Difference between UNIQUE constraint vs index

2007-02-27 Thread John Jawed

Is there any difference in when the "uniqueness" of values is
checked during DML between a unique index and a unique constraint? Or is
syntax the only difference between unique indexes and constraints in
PostgreSQL?

John



Re: [GENERAL] Slony subscription problem

2007-02-27 Thread Joseph S

How come the slony list isn't on the nntp server?

Martijn van Oosterhout wrote:


There's a mailing list for slony, you might have better luck there.

http://gborg.postgresql.org/mailman/listinfo/slony1-general

Have a nice day,




Re: [GENERAL] Difference between UNIQUE constraint vs index

2007-02-27 Thread Jim C. Nasby
On Tue, Feb 27, 2007 at 06:43:51PM -0600, John Jawed wrote:
> Is there any difference as far as when the "uniqueness" of values is
> checked in DML between a unique index vs a unique constraint? Or is
> the only difference syntax between unique indices and constraints in
> PostgreSQL?

Syntax only, AFAIK. I prefer using constraints if I actually want to
constrain the data; it makes it clear that it's a restriction.

In some databases if you know that an index just happens to be unique
you might gain some query performance by defining the index as unique,
but I don't think the PostgreSQL planner is that smart. There can also
be some additional overhead involved with a unique index (vs
non-unique), such as when two backends try and add the same key at the
same time (one of them will have to block).
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)



Re: [GENERAL] Difference between UNIQUE constraint vs index

2007-02-27 Thread Joshua D. Drake
John Jawed wrote:
> Is there any difference as far as when the "uniqueness" of values is
> checked in DML between a unique index vs a unique constraint? Or is
> the only difference syntax between unique indices and constraints in
> PostgreSQL?

They are functionally the same and unique constraint will create a
unique index.
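
A minimal sketch of the two spellings (table, column, and index names here are hypothetical):

```sql
CREATE TABLE t (a integer);

-- Spelling 1: a unique constraint; this implicitly creates a
-- unique index (named t_a_key here) to enforce it:
ALTER TABLE t ADD CONSTRAINT t_a_key UNIQUE (a);

-- Spelling 2: a unique index created directly:
-- CREATE UNIQUE INDEX t_a_idx ON t (a);

-- Either way, a duplicate key is rejected when the row is inserted:
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (1);  -- fails with a duplicate-key error
```

Dropping the constraint drops its index along with it; an index created directly is removed with DROP INDEX.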

Joshua D. Drake

> 
> John
> 


-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/




Re: [GENERAL] Slony subscription problem

2007-02-27 Thread Joshua D. Drake
Joseph S wrote:
> How come the slony list isn't on the nntp server?

Likely because it is an affiliated project.

Joshua D. Drake

> 
> Martijn van Oosterhout wrote:
> 
>> There's a mailing list for slony, you might have better luck there.
>>
>> http://gborg.postgresql.org/mailman/listinfo/slony1-general
>>
>> Have a nice day,
> 






Re: [GENERAL] Difference between UNIQUE constraint vs index

2007-02-27 Thread John Jawed

This is more or less correct; I was looking for performance gains from
the [possible] differences during DML and DDL.

If Jim is correct, is there a particular reason that PostgreSQL does
not behave like other RDBMSs without a SET ALL DEFERRED? Or is this a
discussion best left for -HACKERS?

On 2/27/07, Jim C. Nasby <[EMAIL PROTECTED]> wrote:

On Tue, Feb 27, 2007 at 06:43:51PM -0600, John Jawed wrote:
> Is there any difference as far as when the "uniqueness" of values is
> checked in DML between a unique index vs a unique constraint? Or is
> the only difference syntax between unique indices and constraints in
> PostgreSQL?

Syntax only, AFAIK. I prefer using constraints if I actually want to
constrain the data; it makes it clear that it's a restriction.

In some databases if you know that an index just happens to be unique
you might gain some query performance by defining the index as unique,
but I don't think the PostgreSQL planner is that smart. There can also
be some additional overhead involved with a unique index (vs
non-unique), such as when two backends try and add the same key at the
same time (one of them will have to block).
--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [GENERAL] How to debug this crash?

2007-02-27 Thread Jorge Godoy
Tom Lane <[EMAIL PROTECTED]> writes:

> What python version?  (Hint: pre-8.2 plpython is known not to work
> with python 2.5)

This is more to confirm what I've found in practice and couldn't find at the
online docs for 8.2: is it possible to use output variables to write stored
procedures in plpythonu or am I restricted to functions only?

For a function where I declared an output parameter I'm getting the following
message: 




neo=# select neolab.f_v_formata_valor_resultado(3.23456, 53);
ERRO:  proargnames must have the same number of elements as the function has arguments
neo=# 




The code to this function was provided on my first message, the signature of
it is:





neo=# \df neolab.f_v_formata_valor_resultado
                                          List of functions
 Schema |            Name             | Result data type |           Argument data types
--------+-----------------------------+------------------+------------------------------------------
 neolab | f_v_formata_valor_resultado | text             | p_resultado double precision, p_analise_id integer, OUT o_resultado text
(1 row)

neo=# 



This worked with Python 2.4 and PG 8.1.4 / 8.1.5. 



I'm getting this error with PostgreSQL 8.2.3 and Python 2.5.  Is it a bug?  Or
is it correct behavior?  (If this is correct behavior, it would be very nice to
get the OUT parameter working as it did before, and also to be able to reference
variables directly without resorting to the args array...)


TIA,
-- 
Jorge Godoy  <[EMAIL PROTECTED]>



Re: [GENERAL] Difference between UNIQUE constraint vs index

2007-02-27 Thread Jim C. Nasby
Adding -general back in.

On Tue, Feb 27, 2007 at 07:19:15PM -0600, John Jawed wrote:
> This is more or less correct, I was looking for performance gains on
> the [possible] differences during DML and DDL.
> 
> If Jim is correct, is there a particular reason that PostgreSQL does
> not behave like other RDBMSs without a SET ALL DEFERRED? Or is this a
> discussion best left for -HACKERS?
 
Well, currently only FK constraints can be deferred. And IIRC it's not
generally a performance gain, anyway.

What I was trying to say is that if you're running a query (generally a
SELECT) with certain conditions, the planner can make use of the
knowledge that a column or set of columns is guaranteed to be unique.
PostgreSQL currently can't do that.
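
A hypothetical illustration of the kind of plan improvement being described (this is not something PostgreSQL does as of 8.2, and the names are made up):

```sql
CREATE TABLE t (a integer);
CREATE UNIQUE INDEX t_a_idx ON t (a);

-- Since t.a is provably unique, DISTINCT adds nothing here, and a
-- planner that tracked uniqueness could drop the de-duplication step
-- entirely; PostgreSQL still plans a Unique/HashAggregate node:
EXPLAIN SELECT DISTINCT a FROM t;
```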

> John
> 
> On 2/27/07, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
> >On Tue, Feb 27, 2007 at 06:43:51PM -0600, John Jawed wrote:
> >> Is there any difference as far as when the "uniqueness" of values is
> >> checked in DML between a unique index vs a unique constraint? Or is
> >> the only difference syntax between unique indices and constraints in
> >> PostgreSQL?
> >
> >Syntax only, AFAIK. I prefer using constraints if I actually want to
> >constrain the data; it makes it clear that it's a restriction.
> >
> >In some databases if you know that an index just happens to be unique
> >you might gain some query performance by defining the index as unique,
> >but I don't think the PostgreSQL planner is that smart. There can also
> >be some additional overhead involved with a unique index (vs
> >non-unique), such as when two backends try and add the same key at the
> >same time (one of them will have to block).
> >--
> >Jim Nasby[EMAIL PROTECTED]
> >EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)
> >
> 

-- 
Jim C. Nasby, Database Architect   [EMAIL PROTECTED]
512.569.9461 (cell) http://jim.nasby.net



Re: [GENERAL] How to Kill IDLE users

2007-02-27 Thread Ang Chin Han

On 2/28/07, Scott Marlowe <[EMAIL PROTECTED]> wrote:


select procpid, usename, now()-query_start from pg_stat_activity where
current_query like '%IDLE%' and now()-query_start >  interval '5
minutes';

to list all the users that have been idle over the interval in the
list.  Using some kind of scripting language, you could then issue kill
signals to those procpids.


There's always pg_cancel_backend().
http://www.postgresql.org/docs/8.2/static/functions-admin.html

So:

select pg_cancel_backend(procpid) from pg_stat_activity where
current_query = '<IDLE>' and now() - query_start > interval '5
minutes';

should work nicely. I've not really tested that properly though.



Re: [GENERAL] Difference between UNIQUE constraint vs index

2007-02-27 Thread Tom Lane
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> In some databases if you know that an index just happens to be unique
> you might gain some query performance by defining the index as unique,
> but I don't think the PostgreSQL planner is that smart.

Actually, the planner only pays attention to whether indexes are unique;
the notion of a unique constraint is outside its bounds.  In PG a unique
constraint is implemented by creating a unique index, and so there is
really not any interesting difference.

I would imagine that other DBMSes also enforce uniqueness by means of
indexes, because it'd be awful darn expensive to enforce the constraint
without one; but I'm only guessing here, not having looked.  Can anyone
point to a real system that enforces unique constraints without an
underlying index?

regards, tom lane



Re: [GENERAL] [HACKERS] 5 Weeks till feature freeze or (do you know where your patch is?)

2007-02-27 Thread Henry B. Hotz


On Feb 23, 2007, at 1:24 PM, Joshua D. Drake wrote:


Henry Hotz: GSSAPI (with Magnus)


Progressing.  Had hoped to have alpha patches by March 1, but I just  
got handed a proposal that I have to do by then.  I trust it's OK to  
send the first version in next week?


No real issues, except I haven't disentangled the SSL buffering to  
disentangle my security layer from theirs.


Question:  are there any corresponding deadlines for the Java client  
code that I need to worry about?


The opinions expressed in this message are mine,
not those of Caltech, JPL, NASA, or the US Government.
[EMAIL PROTECTED], or [EMAIL PROTECTED]





[GENERAL] Re: [HACKERS] 5 Weeks till feature freeze or (do you know where your patch is?)

2007-02-27 Thread Joshua D. Drake
Henry B. Hotz wrote:
> 
> On Feb 23, 2007, at 1:24 PM, Joshua D. Drake wrote:
> 
>> Henry Hotz: GSSAPI (with Magnus)
> 
> Progressing.  Had hoped to have alpha patches by March 1, but I just got
> handed a proposal that I have to do by then.  I trust it's OK to send
> the first version in next week?

Feature freeze is the end of March, so I would say yes.

> 
> No real issues, except I haven't disentangled the SSL buffering to
> disentangle my security layer from theirs.
> 
> Question:  are there any corresponding deadlines for the Java client
> code that I need to worry about?

You would need to talk to the JDBC folks about that, I would guess. Their
development cycle is independent of .Org.

Joshua D. Drake

> 
> The opinions expressed in this message are mine,
> not those of Caltech, JPL, NASA, or the US Government.
> [EMAIL PROTECTED], or [EMAIL PROTECTED]
> 
> 
> 






[GENERAL] Re: [HACKERS] 5 Weeks till feature freeze or (do you know where your patch is?)

2007-02-27 Thread Kris Jurka



On Tue, 27 Feb 2007, Henry B. Hotz wrote:

Question:  are there any corresponding deadlines for the Java client code 
that I need to worry about?


The JDBC driver will release a new version at the same time as the server, 
but we don't have nearly as strict rules about feature freeze/beta.  We 
don't need multiple months of beta so as long as the patch isn't terribly 
complicated we'll accept it close to the release.


Kris Jurka
