Re: [HACKERS] [GENERAL] Undetected corruption of table files

2007-08-29 Thread Florian Weimer
* Alban Hertroys:

> If you have a "proper" production database server, your memory has
> error checking, and your RAID controller has something of the kind
> as well.

To my knowledge, no readily available controller performs validation
on reads (not even for RAID-1 or RAID-10, where it would be pretty
straightforward).

Something like an Adler32 checksum (not a full CRC) on each page might
be helpful.  However, what I'd really like to see is something that
catches missed writes, but this is very difficult to implement AFAICT.

-- 
Florian Weimer <[EMAIL PROTECTED]>
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Seeking datacenter PITR backup suggestions

2007-08-29 Thread Simon Riggs
On Tue, 2007-08-28 at 21:04 -0500, Decibel! wrote:
> On Tue, Aug 28, 2007 at 09:54:23PM -0400, Tom Lane wrote:

> > Perhaps the docs are not sufficiently clear on the point?
> 
> Yeah... I think that's a big gotcha waiting to smack someone. I'd
> actually make the mention prominent so that hopefully no one can miss
> it... or do we have an official method for putting warnings in the docs?
> 
> "Because WAL segment files are renamed and not re-created from scratch,
> it is critical that the archive command actually copy files, not move
> or hard-link them."

I'll shortly be writing a doc patch to clarify a few points and to
explain new possibilities, such as Koichi Suzuki's work.

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Install on 32 or 64 bit Linux?

2007-08-29 Thread Hannes Dorbath

On 28.08.2007 23:21, Ralph Smith wrote:

Are there any indications whether 32 or 64 bit Linux would be preferable?


If you have 64-bit hardware, use a 64-bit OS. Running IA32 on EM64T/AMD64 
is pointless and requires crappy hacks such as PAE.


For virtual machines, consider whether OS-level virtualisation is not more 
what you want. OpenVZ is a good example.



--
Regards,
Hannes Dorbath

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] How to avoid "Seq Scans"?

2007-08-29 Thread Vincenzo Romano
Hi all.

In PG 8.2.4 I have a 4+M rows table like this:

create table t (
  f1 bigint,
  f2 text,
  f3 text
);

create index i_t_1 on t( f1 );
create index i_t_2 on t( f2 );
create index i_t_3 on t( f3 );

I'd need to write a stored function that should do the
following:

for rec in select * from t order by f2,f3 loop
...
end loop;

This loop is incredibly slow. In fact the friendly explain tells me
that:

test=# explain select * from t order by f2,f3;
                                 QUERY PLAN
------------------------------------------------------------------------
 Sort  (cost=958786.20..970734.55 rows=4779338 width=28)
   Sort Key: f2,f3
   ->  Seq Scan on t  (cost=0.00..85501.38 rows=4779338 width=28)

I'd like a hint about a technique to avoid the sequential 
scan!

Thanks.

-- 
Vincenzo Romano
--
Maybe Computer will never become as intelligent as Humans.
For sure they won't ever become so stupid. [VR-1988]

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] How to avoid "Seq Scans"?

2007-08-29 Thread Richard Huxton

Vincenzo Romano wrote:

Hi all.

In PG 8.2.4 I have a 4+M rows table like this:



I'd need to write a stored function that should do the
following:

for rec in select * from t order by f2,f3 loop
...
end loop;



   ->  Seq Scan on t  (cost=0.00..85501.38 rows=4779338 width=28)

I'd like a hint about a technique to avoid the sequential 
scan!


But you're fetching all the rows - what other way would be faster?

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] How to avoid "Seq Scans"?

2007-08-29 Thread A. Kretschmer
On Wed, 29 Aug 2007 at 11:15:21 +0200, Vincenzo Romano wrote:
> This loop is incredibly slow. In fact the friendly explain tells me
> that:
> 
> test=# explain select * from t order by f2,f3;
>                                  QUERY PLAN
> ------------------------------------------------------------------------
>  Sort  (cost=958786.20..970734.55 rows=4779338 width=28)
>    Sort Key: f2,f3
>    ->  Seq Scan on t  (cost=0.00..85501.38 rows=4779338 width=28)
> 
> I'd like a hint about a technique to avoid the sequential 
> scan!

A 'select foo from bar' without a WHERE-condition forces a seq-scan
because _YOU_ want the whole table.
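
As a quick illustration against the table from the original post (a sketch;
i_t_1 is the index Vincenzo created on f1):

EXPLAIN SELECT * FROM t WHERE f1 = 42;  -- selective WHERE: i_t_1 can be used
EXPLAIN SELECT * FROM t;                -- every row is needed: a seq scan is correct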


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] How to avoid "Seq Scans"?

2007-08-29 Thread Martijn van Oosterhout
On Wed, Aug 29, 2007 at 11:15:21AM +0200, Vincenzo Romano wrote:
> This loop is increadibly slow. Infact the friendly explain tells me
> that:

Is it wrong? Have you run with enable_seqscan disabled to see if an
index scan is actually faster? If so, then perhaps your
random_page_cost needs adjusting.

Also, your query can't use an index anyway, for that you'd need an
index on (f2,f3).
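
For example, a minimal sketch against the table from the original post
(i_t_2_3 is a made-up index name):

CREATE INDEX i_t_2_3 ON t (f2, f3);
ANALYZE t;
SET enable_seqscan = off;   -- for testing only, never leave this set in production
EXPLAIN SELECT * FROM t ORDER BY f2, f3;
SET enable_seqscan = on;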

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.


signature.asc
Description: Digital signature


[GENERAL] Performance issue with nested loop

2007-08-29 Thread Jens Reufsteck
Hi all,

I'm having a strange performance issue with two almost identical queries,
one running as expected, the other taking far more time. The only
difference is that I have "uniid in (10)" in the normally running query and
"uniid in (9,10)" in the other one. The number of rows coming from the
respective table does not differ much: 406 for the first and 511 for
the second query.

This is the full query - the "uniid in (9,10)" is in the last subquery:


SELECT 'Anzahl' AS column1, count(DISTINCT sid) AS column2
FROM (
SELECT sid
FROM stud
WHERE stud.status > 0
AND length(stud.vname) > 1
AND length(stud.nname) > 1
) AS qur_filter_1 INNER JOIN (
SELECT DISTINCT sid
FROM stud_vera
INNER JOIN phon USING (sid)
WHERE veraid = 22
AND stud_vera.status > 0
AND (
(
veraid IN (2, 3, 22, 24, 36)
AND phontyp = 5
AND phon.typ = 1
AND phon.status > 0
) OR (
veraid NOT IN (2, 3, 22, 24, 36)
)
)
) AS qur_filter_2 USING (sid) INNER JOIN (
SELECT DISTINCT sid 
FROM ausb
INNER JOIN uni USING (uniid)
WHERE uni.uniort IN ('Augsburg')
AND ausb.overview = 1
AND ausb.zweitstudium != 2
AND ausb.status > 0
) AS qur_filter_3 USING (sid) INNER JOIN (
SELECT DISTINCT sid 
FROM ausb
WHERE uniid IN (9, 10)
AND ausb.overview = 1
AND ausb.zweitstudium != 2
AND ausb.status > 0
) AS qur_filter_4 USING (sid)



These are the query-plans for both queries, first the problematic one:



Aggregate  (cost=78785.78..78785.79 rows=1 width=4) (actual
time=698777.890..698777.891 rows=1 loops=1)

  ->  Nested Loop  (cost=65462.58..78785.78 rows=1 width=4) (actual
time=6743.856..698776.957 rows=250 loops=1)

Join Filter: ("outer".sid = "inner".sid)

->  Merge Join  (cost=11031.79..11883.12 rows=1 width=12) (actual
time=387.837..433.612 rows=494 loops=1)

  Merge Cond: ("outer".sid = "inner".sid)

  ->  Nested Loop  (cost=5643.11..6490.17 rows=19 width=8)
(actual time=114.323..154.043 rows=494 loops=1)

->  Unique  (cost=5643.11..5645.35 rows=180 width=4)
(actual time=114.202..116.002 rows=511 loops=1)

  ->  Sort  (cost=5643.11..5644.23 rows=448 width=4)
(actual time=114.199..114.717 rows=511 loops=1)

Sort Key: public.ausb.sid

->  Seq Scan on ausb  (cost=0.00..5623.38
rows=448 width=4) (actual time=0.351..112.459 rows=511 loops=1)

  Filter: (((uniid = 9) OR (uniid = 10))
AND (overview = 1) AND (zweitstudium <> 2) AND (status > 0))

->  Index Scan using stud_pkey on stud  (cost=0.00..4.67
rows=1 width=4) (actual time=0.062..0.067 rows=1 loops=511)

  Index Cond: (stud.sid = "outer".sid)

  Filter: ((status > 0) AND (length((vname)::text) >
1) AND (length((nname)::text) > 1))

  ->  Materialize  (cost=5388.68..5392.05 rows=337 width=4)
(actual time=273.506..276.785 rows=511 loops=1)

->  Unique  (cost=5383.29..5384.98 rows=337 width=4)
(actual time=273.501..275.421 rows=511 loops=1)

  ->  Sort  (cost=5383.29..5384.13 rows=337 width=4)
(actual time=273.499..274.091 rows=511 loops=1)

Sort Key: public.ausb.sid

->  Hash Join  (cost=17.61..5369.14 rows=337
width=4) (actual time=1.139..272.465 rows=511 loops=1)

  Hash Cond: ("outer".uniid =
"inner".uniid)

  ->  Seq Scan on ausb
(cost=0.00..4827.30 rows=104174 width=8) (actual time=0.026..200.111
rows=103593 loops=1)

Filter: ((overview = 1) AND
(zweitstudium <> 2) AND (status > 0))

  ->  Hash  (cost=17.60..17.60 rows=2
width=4) (actual time=0.435..0.435 rows=2 loops=1)

->  Seq Scan on uni
(cost=0.00..17.60 rows=2 width=4) (actual time=0.412..0.424 rows=2 loops=1)

  Filter: ((uniort)::text =
'Augsburg'::text)

->  Unique  (cost=54430.79..4.18 rows=10599 width=4) (actual
time=6.851..1374.135 rows=40230 loops=494)

  ->  Merge Join  (cost=54430.79..66319.65 rows=137811 width=4)
(actual time=6.849..1282.333 rows=40233 loops=494)

Merge Cond: ("outer".sid = "inner".sid)

Join Filter: "outer".veraid = 2) OR ("outer".veraid
= 3) OR ("outer".veraid = 22) OR ("outer".veraid = 24) OR ("outer".veraid =
36)) AND ("inner".phontyp = 5) AND ("inner".typ = 1) AND ("inner".status >
0)) OR (("outer".veraid <> 2

[GENERAL] Question regarding autovacuum in 8.1

2007-08-29 Thread Denis Gasparin
How is it possible to check if autovacuum is running in 8.1.x?

"SHOW autovacuum" gives me "on", and I also see evidence in the logs where
autovacuum writes "LOG:  autovacuum: processing database ".

However I have no idea of what tables the autovacuum daemon is
processing because there aren't autovacuum info columns on
pg_stat_all_tables (as there are for 8.2.x).

Also I'm asking this because the size of the pg_clog is >200M and I am
worried about possible transaction ID wraparound failures...

Thank you,
Denis

Tom Lane wrote:
> Karl Denninger <[EMAIL PROTECTED]> writes:
>   
>> But... shouldn't autovacuum prevent this?  Is there some way to look in 
>> a log somewhere and see if and when the autovacuum is being run - and on 
>> what?
>> 
>
> There's no log messages (at the default log verbosity anyway).  But you
> could look into the pg_stat views for the last vacuum time for each table.
>
>   regards, tom lane
>
> ---(end of broadcast)---
> TIP 6: explain analyze is your friend
>
>   


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] pg_dump.... pg_restore...how long does it take?

2007-08-29 Thread rafikoko

Hi,
I've dumped my database into an archive using the following command:
.\pg_dump -U tad -ci -F t -f openbravo.tar openbravo.
Everything seemed to be all right and the whole process was very fast.
Afterwards I got a tar file whose size is about 4MB.
I also got the following message after dumping:
pg_dump: [tar archiver] actual file length (4104361) does not match expected
(4104361)
which in fact does not make sense to me.

Then I copied the tar archive to another computer to restore the database
using the following command:
pg_restore -d openbravo ..\openbravo.tar
The process started about 18 hours ago and still runs. I've checked task
manager and noticed that pg_restore still uses 50% of CPU. 

The question is:
Have I done everything correctly? Since the tar archive is only 4MB and was
generated in a few seconds, I wonder why restoring takes so long. How long
can it take? Does anybody have experience with this?

Looking forward to any support.

BR,
Rafal
-- 
View this message in context: 
http://www.nabble.com/pg_dump-pg_restore...how-long-does-it-take--tf4347124.html#a12385020
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Etc/% timezones

2007-08-29 Thread Naz Gassiep
I just noticed that in the pg_timezone_names system view, the name and 
abbrev of the "Etc/%" timezones appear to be inverted with respect to 
their utc_offset value.


I never noticed before, as I filter these zones out (among others) and 
do not use them in my app. I was just curious as to why the suggested 
abbrev field and the utc_offset field have opposite signs. 
Is this a bug or is there a reason for this? I have included the output 
from the view with the relevant time zones below.


E.g., I am in Melbourne, Australia, which I have always known as GMT+10 
(ignoring DST). According to the below list, however, the timezone that 
has the matching utc_offset to mine is called "Etc/GMT-10", which to me 
seems wrong. Were I presented with a list of the names, I'd select 
"Etc/GMT+10" as my time zone, incorrectly telling the app that my 
timezone's utc_offset is -10:00:00.


I'm pretty certain that this is a bug. Can anyone confirm?

Regards,
- Naz.


select * from pg_timezone_names where name ilike 'etc/%' order by 
utc_offset, abbrev, name;

     name      | abbrev | utc_offset | is_dst
---------------+--------+------------+--------
 Etc/GMT+12    | GMT+12 | -12:00:00  | f
 Etc/GMT+11    | GMT+11 | -11:00:00  | f
 Etc/GMT+10    | GMT+10 | -10:00:00  | f
 Etc/GMT+9     | GMT+9  | -09:00:00  | f
 Etc/GMT+8     | GMT+8  | -08:00:00  | f
 Etc/GMT+7     | GMT+7  | -07:00:00  | f
 Etc/GMT+6     | GMT+6  | -06:00:00  | f
 Etc/GMT+5     | GMT+5  | -05:00:00  | f
 Etc/GMT+4     | GMT+4  | -04:00:00  | f
 Etc/GMT+3     | GMT+3  | -03:00:00  | f
 Etc/GMT+2     | GMT+2  | -02:00:00  | f
 Etc/GMT+1     | GMT+1  | -01:00:00  | f
 Etc/GMT       | GMT    | 00:00:00   | f
 Etc/GMT+0     | GMT    | 00:00:00   | f
 Etc/GMT-0     | GMT    | 00:00:00   | f
 Etc/GMT0      | GMT    | 00:00:00   | f
 Etc/Greenwich | GMT    | 00:00:00   | f
 Etc/UCT       | UCT    | 00:00:00   | f
 Etc/UTC       | UTC    | 00:00:00   | f
 Etc/Universal | UTC    | 00:00:00   | f
 Etc/Zulu      | UTC    | 00:00:00   | f
 Etc/GMT-1     | GMT-1  | 01:00:00   | f
 Etc/GMT-2     | GMT-2  | 02:00:00   | f
 Etc/GMT-3     | GMT-3  | 03:00:00   | f
 Etc/GMT-4     | GMT-4  | 04:00:00   | f
 Etc/GMT-5     | GMT-5  | 05:00:00   | f
 Etc/GMT-6     | GMT-6  | 06:00:00   | f
 Etc/GMT-7     | GMT-7  | 07:00:00   | f
 Etc/GMT-8     | GMT-8  | 08:00:00   | f
 Etc/GMT-9     | GMT-9  | 09:00:00   | f
 Etc/GMT-10    | GMT-10 | 10:00:00   | f
 Etc/GMT-11    | GMT-11 | 11:00:00   | f
 Etc/GMT-12    | GMT-12 | 12:00:00   | f
 Etc/GMT-13    | GMT-13 | 13:00:00   | f
 Etc/GMT-14    | GMT-14 | 14:00:00   | f
(35 rows)


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Reliable and fast money transaction design

2007-08-29 Thread cluster
OK, thanks. But what about the second question, in which the UPDATE is 
based on a SELECT max(...) statement on another table? How can I ensure 
that no other process inserts a row between my SELECT max() and my 
UPDATE, making my SELECT max() invalid?


A table lock could be an option but I am only interested in blocking for 
row insertions for this particular account_id. Insertions for other 
account_ids will not make the SELECT max() invalid and should therefore 
be allowed.
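
A sketch of the kind of per-account blocking being asked for here
(hypothetical table and column names; pg_advisory_lock is available from
8.2 on): writers for one account agree to take an advisory lock keyed on
the account_id, which leaves other accounts untouched:

BEGIN;
-- every writer for account 42 takes this lock first, so no competing
-- insert can slip in between the max() and the dependent statement
SELECT pg_advisory_lock(42);
INSERT INTO transactions (account_id, trans_id)
    SELECT 42, coalesce(max(trans_id), 0) + 1
      FROM transactions
     WHERE account_id = 42;
SELECT pg_advisory_unlock(42);
COMMIT;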


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Etc/% timezones

2007-08-29 Thread Alvaro Herrera
Naz Gassiep wrote:

> I'm pretty certain that this is a bug. Can anyone confirm?

It is a bug -- in the SQL standard definition.  The meaning of the sign
is inverted w.r.t. the relevant POSIX (?) standard, AFAIU.
Unsurprisingly, we're following the SQL standard here.

-- 
Alvaro Herrera http://www.amazon.com/gp/registry/CTMLCN8V17R4
"Ninguna manada de bestias tiene una voz tan horrible como la humana" (Orual)

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Reliable and fast money transaction design

2007-08-29 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 08/29/07 07:27, cluster wrote:
> OK, thanks. But what with the second question in which the UPDATE is
> based on a SELECT max(...) statement on another table? How can I ensure
> that no other process inserts a row between my SELECT max() and UPDATE -
> making my SELECT max() invalid?
> 
> A table lock could be an option but I am only interested in blocking for
> row insertions for this particular account_id. Insertions for other
> account_ids will not make the SELECT max() invalid and should therefore
> be allowed.

Well, concurrency and transactional consistency *allow* other
processes to update the table after you start your transaction.  You
just won't *see* their updates while you're inside of a transaction.

Of course, if you truly want exclusive access, you could LOCK the
table.  It's well explained in the documentation...
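
For example (a sketch; the table name is made up):

BEGIN;
-- blocks other writers but still allows plain SELECTs
LOCK TABLE transactions IN SHARE ROW EXCLUSIVE MODE;
-- SELECT max(...) and the dependent UPDATE now see a stable table
COMMIT;  -- the lock is released at transaction end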

- --
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFG1XaWS9HxQb37XmcRAi5hAKDff5j5KnqWdGKxHjCJuTwXxfPwjACfZuko
1Ic5Bq1tU3IlPP44VYyD74M=
=Sv0p
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Etc/% timezones

2007-08-29 Thread Naz Gassiep



I'm pretty certain that this is a bug. Can anyone confirm?



It is a bug -- in the SQL standard definition.  The meaning of the sign
is inverted w.r.t. the relevant POSIX (?) standard, AFAIU.
Unsurprisingly, we're following the SQL standard here.


Wow. Seriously, wow.
Good thing I'm filtering them out then, else confusion would ensue. Has 
anyone brought the stupidity of this to the attention of the SQL team? Is 
there any rationale behind this? I've been working with timezone stuff for 
the last few weeks and I'm really getting into it, so I'm just interested 
to know how it was concluded that this is a good idea.



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Geographic High-Availability/Replication

2007-08-29 Thread Markus Schiltknecht

Hi,

Decibel! wrote:

But is the complete transaction information safely stored on all nodes
before a commit returns?


Good question. It depends very much on the group communication system 
and the guarantees it provides for message delivery. For certain, the 
information isn't safely stored on every node before commit 
confirmation. Let me quickly explain those two points.


Lately, I've read a lot about different Group Communication Systems and 
how they handle delivery guarantees. Spread offers an 'agreed' and a 
'safe' mode; only the latter guarantees that all nodes have received the 
data. It's a rather expensive mode in terms of latency.


In our case, it would be sufficient if at least n nodes would confirm 
having correctly received the data. That would allow for (n - 1) 
simultaneously failing nodes, so that there's always at least one 
correct node which has received the data, even if the sender just failed 
after sending. This one node can redistribute the data to others which 
didn't receive the message until all nodes have received it.


No group communication system I know of offers such fine-grained levels 
of delivery guarantees. Additionally, I've figured that it would be nice 
to have subgroups and multiple orderings within a group. Thus - contrary 
to my initial intention - I've finally started to write yet another 
group communication system, providing all of these nice features. 
Anyway, that's another story.


Regarding durability: given the above assumption, that at most (n - 1) 
nodes fail, you don't have to care much about recovery, because there's 
always at least one running node which has all the data. As we know, 
reality doesn't always care about our assumptions. So, if you want to 
prevent data loss due to failures of more than (n - 1) nodes, possibly 
even all nodes, you'd have to do transaction logging, much like WAL, but 
a cluster-wide one. Having every single node write a transaction log, 
like WAL, would be rather expensive and complex during recovery, as 
you'd have to mix and match all node's WALs.


Instead, I think it's better to decouple transaction logging (backup) 
from ordinary operation. That gives you much more freedom. For example, 
you could have nodes dedicated to and optimized for logging. But most 
importantly, you have separated the problem: as long as your permanent 
storage for transaction logging is living, you can recover your data. No 
matter what's happening with the rest of the cluster. And the other way 
around: as long as your cluster is living (i.e. no more than (n - 1) 
simultaneous failures), you don't really need the transaction log.


So, before committing a transaction, a node has to wait for the delivery 
of the data through the GCS *and* for the transaction logger(s) to have 
written the data to permanent storage. Please note that those two 
operations can be done simultaneously, i.e. the latencies do not add up; 
the total is just the maximum of the two.


Regards

Markus


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] pgsql Windows installer fixed registry key

2007-08-29 Thread Dizzy
On Tuesday 28 August 2007 00:14:55 Dave Page wrote:
> > The pgsql MSI installer does register a registry key but it's random
> > everytime it installs (probably something MSI choses).
>
> No it's not random. It uses the product ID which only changes between major
> versions (ie. 8.2 - 8.3) or between distros (eg. pgInstaller vs.
> EnterpriseDB Postgres).

Yes, you are right, sorry for the spam then and thanks for the answer:)

-- 
Mihai RUSU  Email: [EMAIL PROTECTED]
"Linux is obsolete" -- AST

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] ecpg: dtime_t vs timestamp

2007-08-29 Thread Michael Meskes
On Tue, Aug 28, 2007 at 10:59:42AM -0400, Paul Tilles wrote:
> We have upgraded from Version 7.4.x to Version 8.2.4. 
> In 7.4.x, we use the Informix compatibility functionality to use legacy 
> code. 
> Our .pgc code looks as follows:
> ...

This is indeed a bug. Thanks for reporting it.

What happens is that ecpg defines the data types, but does not use those
defines while in C code. I just committed patches to CVS and also attach
them here. Could you please check whether this fixes your problem?

Thanks.

Michael
-- 
Michael Meskes
Email: Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber: [EMAIL PROTECTED]
Go SF 49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL!
--- preproc/ecpg.c	8 Feb 2006 09:10:04 -	1.94
+++ preproc/ecpg.c	29 Aug 2007 13:50:55 -
@@ -185,11 +185,6 @@
 	char		informix_path[MAXPGPATH];
 
 	compat = (strcmp(optarg, "INFORMIX") == 0) ? ECPG_COMPAT_INFORMIX : ECPG_COMPAT_INFORMIX_SE;
-	/* system_includes = true; */
-	add_preprocessor_define("dec_t=decimal");
-	add_preprocessor_define("intrvl_t=interval");
-	add_preprocessor_define("dtime_t=timestamp");
-
 	get_pkginclude_path(my_exec_path, pkginclude_path);
 	snprintf(informix_path, MAXPGPATH, "%s/informix/esql", pkginclude_path);
 	add_include_path(informix_path);
--- preproc/pgc.l	22 Sep 2006 21:39:58 -	1.150
+++ preproc/pgc.l	29 Aug 2007 13:50:55 -
@@ -48,6 +48,8 @@
 static void parse_include (void);
 static void check_escape_warning(void);
 static bool ecpg_isspace(char ch);
+static bool isdefine(void);
+static bool isinformixdefine(void);
 
 char *token_start;
 int state_before;
@@ -671,29 +673,8 @@
 	}
 {identifier}	{
 		ScanKeyword*keyword;
-		struct _defines *ptr;
-		
-		/* How about a DEFINE? */
-		for (ptr = defines; ptr; ptr = ptr->next)
-		{
-			if (strcmp(yytext, ptr->old) == 0 && ptr->used == NULL)
-			{
-struct _yy_buffer *yb;
-
-yb = mm_alloc(sizeof(struct _yy_buffer));
-
-yb->buffer =  YY_CURRENT_BUFFER;
-yb->lineno = yylineno;
-yb->filename = mm_strdup(input_filename);
-yb->next = yy_buffer;
-
-ptr->used = yy_buffer = yb;
 
-yy_scan_string(ptr->new);
-break;
-			}
-		}
-		if (ptr == NULL)
+		if (!isdefine())
 		{
 			/* Is it an SQL keyword? */
 			keyword = ScanKeywordLookup(yytext);
@@ -765,38 +746,10 @@
 	}
 {identifier} 	{
 		ScanKeyword		*keyword;
-		struct _defines *ptr;
-
-		if (INFORMIX_MODE)
-		{
-			/* Informix uses SQL defines only in SQL space */
-			ptr = NULL;
-		}
-		else
-		{
-			/* is it a define? */
-			for (ptr = defines; ptr; ptr = ptr->next)
-			{
-if (strcmp(yytext, ptr->old) == 0 && ptr->used == NULL)
-{
-	struct _yy_buffer *yb;
-
-	yb = mm_alloc(sizeof(struct _yy_buffer));
 
-	yb->buffer =  YY_CURRENT_BUFFER;
-	yb->lineno = yylineno;
-	yb->filename = mm_strdup(input_filename);
-	yb->next = yy_buffer;
-
-	ptr->used = yy_buffer = yb;
-
-	yy_scan_string(ptr->new);
-	break;
-}
-			}
-		}
-
-		if (ptr == NULL)
+		/* Informix uses SQL defines only in SQL space */
+		/* however, some defines have to be taken care of for compatibility */
+		if ((!INFORMIX_MODE || !isinformixdefine()) && !isdefine())
 		{
 			keyword = ScanCKeywordLookup(yytext);
 			if (keyword != NULL)
@@ -1347,3 +1300,61 @@
 		return true;
 	return false;
 }
+
+static bool isdefine(void)
+{
+	struct _defines *ptr;
+
+	/* is it a define? */
+	for (ptr = defines; ptr; ptr = ptr->next)
+	{
+		if (strcmp(yytext, ptr->old) == 0 && ptr->used == NULL)
+		{
+			struct _yy_buffer *yb;
+
+			yb = mm_alloc(sizeof(struct _yy_buffer));
+
+			yb->buffer =  YY_CURRENT_BUFFER;
+			yb->lineno = yylineno;
+			yb->filename = mm_strdup(input_filename);
+			yb->next = yy_buffer;
+
+			ptr->used = yy_buffer = yb;
+
+			yy_scan_string(ptr->new);
+			return true;
+		}
+	}
+
+	return false;
+}
+
+static bool isinformixdefine(void)
+{
+	const char *new = NULL;
+
+	if (strcmp(yytext, "dec_t") == 0)
+		new = "decimal";
+	else if (strcmp(yytext, "intrvl_t") == 0)
+		new = "interval";
+	else if (strcmp(yytext, "dtime_t") == 0)
+		new = "timestamp";
+
+	if (new)
+	{
+		struct _yy_buffer *yb;
+
+		yb = mm_alloc(sizeof(struct _yy_buffer));
+
+		yb->buffer =  YY_CURRENT_BUFFER;
+		yb->lineno = yylineno;
+		yb->filename = mm_strdup(input_filename);
+		yb->next = yy_buffer;
+		yy_buffer = yb;
+
+		yy_scan_string(new);
+		return true;
+	}
+
+	return false;
+}

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Reliable and fast money transaction design

2007-08-29 Thread Decibel!
On Wed, Aug 29, 2007 at 08:37:26AM -0500, Ron Johnson wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> On 08/29/07 07:27, cluster wrote:
> > OK, thanks. But what with the second question in which the UPDATE is
> > based on a SELECT max(...) statement on another table? How can I ensure
> > that no other process inserts a row between my SELECT max() and UPDATE -
> > making my SELECT max() invalid?
> > 
> > A table lock could be an option but I am only interested in blocking for
> > row insertions for this particular account_id. Insertions for other
> > account_ids will not make the SELECT max() invalid and should therefore
> > be allowed.
> 
> Well, concurrency and transactional consistency *allows* other
> processes to update the table after you start your transaction.  You
> just won't *see* their updates while you're inside of a transaction.

Just make sure to read up on transaction isolation... in the default
READ COMMITTED mode, you can sometimes see changes made by other
transactions.
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)


pgpzgn6LbJ2dr.pgp
Description: PGP signature


Re: [GENERAL] Etc/% timezones

2007-08-29 Thread Tom Lane
Naz Gassiep <[EMAIL PROTECTED]> writes:
> Good thing I'm filtering them out then, else confusion would ensue. Has 
> anyone brought the stupidity of this to the attention of the SQL team? Is 

Do you really think they're unaware of that?

Actually, the bug is in the POSIX definition, which is out of step with
the rest of the world.  The SQL spec is following ISO 8601.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Reliable and fast money transaction design

2007-08-29 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 08/29/07 09:34, Decibel! wrote:
> On Wed, Aug 29, 2007 at 08:37:26AM -0500, Ron Johnson wrote:
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA1
>>
>> On 08/29/07 07:27, cluster wrote:
>>> OK, thanks. But what with the second question in which the UPDATE is
>>> based on a SELECT max(...) statement on another table? How can I ensure
>>> that no other process inserts a row between my SELECT max() and UPDATE -
>>> making my SELECT max() invalid?
>>>
>>> A table lock could be an option but I am only interested in blocking for
>>> row insertions for this particular account_id. Insertions for other
>>> account_ids will not make the SELECT max() invalid and should therefore
>>> be allowed.
>> Well, concurrency and transactional consistency *allows* other
>> processes to update the table after you start your transaction.  You
>> just won't *see* their updates while you're inside of a transaction.
> 
> Just make sure and read up about transaction isolation... in the default
> of READ COMMITTED mode, you can sometimes see changes made by other
> transactions.

Argh!!!  The RDBMS that I typically use defaults to SERIALIZABLE.

- --
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFG1YxuS9HxQb37XmcRAlJOAKCWL+NtM95YC2bMkFjOkD2NfF/xuQCggfKO
QQC/mW+IYtlV6R9rqaSomMs=
=H3+i
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Reliable and fast money transaction design

2007-08-29 Thread Joshua D. Drake
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Ron Johnson wrote:
> On 08/29/07 09:34, Decibel! wrote:
>> On Wed, Aug 29, 2007 at 08:37:26AM -0500, Ron Johnson wrote:
>>> -BEGIN PGP SIGNED MESSAGE-
>>> Hash: SHA1
>>>
>>> On 08/29/07 07:27, cluster wrote:
 OK, thanks. But what with the second question in which the UPDATE is
 based on a SELECT max(...) statement on another table? How can I ensure
 that no other process inserts a row between my SELECT max() and UPDATE -
 making my SELECT max() invalid?

 A table lock could be an option but I am only interested in blocking for
 row insertions for this particular account_id. Insertions for other
 account_ids will not make the SELECT max() invalid and should therefore
 be allowed.
>>> Well, concurrency and transactional consistency *allows* other
>>> processes to update the table after you start your transaction.  You
>>> just won't *see* their updates while you're inside of a transaction.
>> Just make sure and read up about transaction isolation... in the default
>> of READ COMMITTED mode, you can sometimes see changes made by other
>> transactions.
> 
> Argh!!!  The RDBMS that I typically use defaults to SERIALIZABLE.

SERIALIZABLE is really slow :). You should look into SERIALIZABLE only
for those transactions that need it. There is also SELECT FOR UPDATE.
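
For example (a minimal sketch; table and column names are made up):

-- serializable only where it matters:
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
UPDATE accounts SET balance = balance - 100 WHERE account_id = 42;
COMMIT;

-- or stay in READ COMMITTED and lock just the row you depend on:
BEGIN;
SELECT balance FROM accounts WHERE account_id = 42 FOR UPDATE;
UPDATE accounts SET balance = balance - 100 WHERE account_id = 42;
COMMIT;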

Joshua D. Drake

> 

- ---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings



- --

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFG1ZOLATb/zqfZUUQRAl5UAKCf8cli24MMOjxsKlel5nEFXllGsgCeIfDn
eg5BSlRpUlTGgGA7tBbx3EM=
=ynMx
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Reliable and fast money transaction design

2007-08-29 Thread Tom Lane
Ron Johnson <[EMAIL PROTECTED]> writes:
> On 08/29/07 07:27, cluster wrote:
>> Just make sure and read up about transaction isolation... in the default
>> of READ COMMITTED mode, you can sometimes see changes made by other
>> transactions.

> Argh!!!  The RDBMS that I typically use defaults to SERIALIZABLE.

You can change default_transaction_isolation if you like.
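
For example (a sketch; "mydb" is a placeholder database name):

SET default_transaction_isolation TO 'serializable';   -- this session only
ALTER DATABASE mydb
    SET default_transaction_isolation TO 'serializable';  -- per-database default
-- or set default_transaction_isolation = 'serializable' in postgresql.conf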

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


ACID (was Re: [GENERAL] Reliable and fast ...)

2007-08-29 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 08/29/07 10:40, Joshua D. Drake wrote:
> Ron Johnson wrote:
>> On 08/29/07 09:34, Decibel! wrote:
>>> On Wed, Aug 29, 2007 at 08:37:26AM -0500, Ron Johnson wrote:

 On 08/29/07 07:27, cluster wrote:
> OK, thanks. But what with the second question in which the UPDATE is
> based on a SELECT max(...) statement on another table? How can I ensure
> that no other process inserts a row between my SELECT max() and UPDATE -
> making my SELECT max() invalid?
>
> A table lock could be an option but I am only interested in blocking for
> row insertions for this particular account_id. Insertions for other
> account_ids will not make the SELECT max() invalid and should therefore
> be allowed.
 Well, concurrency and transactional consistency *allows* other
 processes to update the table after you start your transaction.  You
 just won't *see* their updates while you're inside of a transaction.
>>> Just make sure and read up about transaction isolation... in the default
>>> of READ COMMITTED mode, you can sometimes see changes made by other
>>> transactions.
>> Argh!!!  The RDBMS that I typically use defaults to SERIALIZABLE.
> 
> SERIALIZABLE is really slow :). You should look into SERIALIZABLE only
> for those transactions that need it. There is also SELECT FOR UPDATE.

We use SERIALIZABLE (with all its locking "issues") to guarantee
the I in ACID.  ISTM that READ COMMITTED can only deliver "ACD".

- --
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFG1ZVYS9HxQb37XmcRAlopAJ9wvAovDcqvUpsj5dqSrum+/3QUbgCeODwL
a8BJm6gi7VnR6dWgtmTLkcM=
=eg1s
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] postgresql connection string to Vb.NET

2007-08-29 Thread smithveg
Hi,

I got a source of connection string at this page,
http://www.connectionstrings.com/default.aspx?carrier=postgresql
I can't test it because it seems I did not add a reference in Visual Studio
2005.

Where can I download the ODBC or .NET data provider in order to successfully
connect to postgresql from vb.net?

Thanks.

-- 
Smithveg



Re: [GENERAL] pg_dump.... pg_restore...how long does it take?

2007-08-29 Thread Tom Lane
rafikoko <[EMAIL PROTECTED]> writes:
> I've got also the following message after dumping:
> pg_dump: [tar archiver] actual file length (4104361) does not match expected
> (4104361)
> which in fact does not make sense for me.

Hmm, it looks like that code is printing the wrong variable's value.
However, the condition it's complaining of shouldn't have happened,
so you need to look more closely.  I suspect you've got a corrupt
tar archive.  Does it look sane if you do "tar tvf" on it?

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Question regarding autovacuum in 8.1

2007-08-29 Thread Alvaro Herrera
Denis Gasparin wrote:
> How is it possible to check if autovacuum is running in 8.1.x?
> 
> "Show Autovacuum" gives me "on" and also i see evidence in logs
> where,autovacuum writes "LOG:  autovacuum: processing database ".

Then it is running.

> However i have no idea of what tables the autovacuum daemon is
> processing because there aren't autovacuum info columns on
> pg_stat_all_tables (as there are for 8.2.x).

For that, you need to change log_min_messages to debug2.

Keep track of the PID of autovacuum from the "processing database"
message and see if you can spot an ERROR message from it.
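
Two quick checks (a sketch): the wraparound worry can be watched directly
on 8.1 as well, and from 8.2 on the per-table vacuum times are in the
stats views:

SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY 2 DESC;

-- 8.2 and later only (these are the columns missing on 8.1):
SELECT relname, last_vacuum, last_autovacuum
  FROM pg_stat_user_tables
 ORDER BY relname;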

-- 
Alvaro Herrera  Developer, http://www.PostgreSQL.org/
"A wizard is never late, Frodo Baggins, nor is he early.
 He arrives precisely when he means to."  (Gandalf, in LoTR FoTR)

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Reliable and fast money transaction design

2007-08-29 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 08/29/07 10:47, Tom Lane wrote:
> Ron Johnson <[EMAIL PROTECTED]> writes:
>> On 08/29/07 07:27, cluster wrote:
>>> Just make sure and read up about transaction isolation... in the default
>>> of READ COMMITTED mode, you can sometimes see changes made by other
>>> transactions.
> 
>> Argh!!!  The RDBMS that I typically use defaults to SERIALIZABLE.
> 
> You can change default_transaction_isolation if you like.

You misunderand: we do that on purpose, and I had forgotten that
most RDBMSs don't do that.

- --
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFG1Zf8S9HxQb37XmcRAjt+AKCW7I1H9ON60TEUaZMzLzcXYFb0pwCgh9qC
Q+VwNMFCHTWqq1mTL8kx13w=
=3NIY
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] postgresql connection string to Vb.NET

2007-08-29 Thread Hiroshi Saito

Hi.

Please see,
http://npgsql.projects.postgresql.org/
and
http://psqlodbc.projects.postgresql.org/

Regards,
Hiroshi Saito

- Original Message - 
From: smithveg

To: pgsql-general@postgresql.org
Sent: Thursday, August 30, 2007 12:49 AM
Subject: [GENERAL] postgresql connection string to Vb.NET


Hi,

I got a source of connection string at this page, 
http://www.connectionstrings.com/default.aspx?carrier=postgresql

I can't test it because it seems i do not add a reference in visual studio 2005.

Where can i download the ODBC or .NET data provider in order to successfully connect to 
postgresql from vb.net


Thanks.

--
Smithveg 



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Is there a better way to do this?

2007-08-29 Thread David Fetter
On Tue, Aug 28, 2007 at 04:59:46PM -0400, Wei Weng wrote:
> Hi all
> 
> I want to implement something like the following:
> 
> CREATE OR REPLACE FUNCTION AddDays
>(TIMESTAMP WITHOUT TIME ZONE
>, INT)
> RETURNS TIMESTAMP WITHOUT TIME ZONE AS '
> DECLARE
>time ALIAS FOR $1;
>days ALIAS FOR $2;
> BEGIN
>RETURN time+days*24*3600*''1 second''::INTERVAL;
> END;
> ' LANGUAGE 'plpgsql';

This seems like a lot of extra work.

SELECT now() + 5 * INTERVAL '1 day';

does a similar trick, and is quite clear as to what it does :)
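
And if a named function is still wanted, a plain SQL function is shorter
than the PL/pgSQL version (a sketch; "adddays" is just an example name):

CREATE OR REPLACE FUNCTION adddays(TIMESTAMP WITHOUT TIME ZONE, INT)
RETURNS TIMESTAMP WITHOUT TIME ZONE AS '
   SELECT $1 + $2 * INTERVAL ''1 day'';
' LANGUAGE sql IMMUTABLE STRICT;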

Cheers,
David.
-- 
David Fetter <[EMAIL PROTECTED]> http://fetter.org/
phone: +1 415 235 3778AIM: dfetter666
  Skype: davidfetter

Remember to vote!
Consider donating to PostgreSQL: http://www.postgresql.org/about/donate

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: ACID (was Re: [GENERAL] Reliable and fast ...)

2007-08-29 Thread Joshua D. Drake
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Ron Johnson wrote:
> On 08/29/07 10:40, Joshua D. Drake wrote:
>> Ron Johnson wrote:

>>> Argh!!!  The RDBMS that I typically use defaults to SERIALIZABLE.
>> SERIALIZABLE is really slow :). You should look into SERIALIZABLE only
>> for those transactions that need it. There is also SELECT FOR UPDATE.
> 
> We use SERIALIZABLE (with all its locking "issues") to guarantee
> the I in ACID.  ISTM that READ COMMITTED can only deliver "ACD".

You are using serializable for select statements?


- ---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster



- --

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFG1ZzQATb/zqfZUUQRAm+lAJ4i8s6I2MKCQGo1zD3g2w5lPRFikwCeNZML
4bV06CiM196qwC2l5MKqn10=
=ygzn
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] postgresql connection string to Vb.NET

2007-08-29 Thread Curtis Scheer
http://www.postgresql.org/download/ 
lists the db drivers for a variety of languages.

 


From: smithveg [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, August 29, 2007 10:50 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] postgresql connection string to Vb.NET

 

Hi,

I got a source of connection string at this page,
http://www.connectionstrings.com/default.aspx?carrier=postgresql
 
I can't test it because it seems i do not add a reference in visual studio
2005. 

Where can i download the ODBC or .NET data provider in order to successfully
connect to postgresql from vb.net  

Thanks.

-- 
Smithveg 



Re: ACID (was Re: [GENERAL] Reliable and fast ...)

2007-08-29 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 08/29/07 11:20, Joshua D. Drake wrote:
> Ron Johnson wrote:
>> On 08/29/07 10:40, Joshua D. Drake wrote:
>>> Ron Johnson wrote:
> 
 Argh!!!  The RDBMS that I typically use defaults to SERIALIZABLE.
>>> SERIALIZABLE is really slow :). You should look into SERIALIZABLE only
>>> for those transactions that need it. There is also SELECT FOR UPDATE.
>> We use SERIALIZABLE (with all its locking "issues") to guarantee
>> the I in ACID.  ISTM that READ COMMITTED can only deliver "ACD".
> 
> You are using serializable for select statements?

READ ONLY, which defaults to SERIALIZABLE.

(It's not PostgreSQL...)

- --
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFG1Z02S9HxQb37XmcRAnzVAKDFFqHLuMHE1q6sgvO288bzZvZa1gCfcGWM
KUyB8HyjE3s9NfWq5GeLfvQ=
=9jB2
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Reliable and fast money transaction design

2007-08-29 Thread Tom Lane
"Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> Ron Johnson wrote:
>> Argh!!!  The RDBMS that I typically use defaults to SERIALIZABLE.

> SERIALIZABLE is really slow :).

Say what?  If anything it's probably faster than READ COMMITTED, because
it doesn't take as many snapshots.  But the difference is likely down in
the noise anyway.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Reliable and fast money transaction design

2007-08-29 Thread Joshua D. Drake
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Tom Lane wrote:
> "Joshua D. Drake" <[EMAIL PROTECTED]> writes:
>> Ron Johnson wrote:
>>> Argh!!!  The RDBMS that I typically use defaults to SERIALIZABLE.
> 
>> SERIALIZABLE is really slow :).
> 
> Say what?  If anything it's probably faster than READ COMMITTED, because
> it doesn't take as many snapshots.  But the difference is likely down in
> the noise anyway.

Not in production it isn't.

Joshua D. Drake


> 
>   regards, tom lane
> 
> ---(end of broadcast)---
> TIP 9: In versions below 8.0, the planner will ignore your desire to
>choose an index scan if your joining column's datatypes do not
>match
> 


- --

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFG1aPLATb/zqfZUUQRAlWhAKCHgvvxUHRBZ5xQDmMK841U3/gglQCfdh9o
mooGYXxZ57Hla31WeqQM9jI=
=0mTL
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] postgresql connection string to Vb.NET

2007-08-29 Thread Owen Hartnett

At 8:49 AM -0700 8/29/07, smithveg wrote:

Hi,

I got a source of connection string at this page, 
http://www.connectionstrings.com/default.aspx?carrier=postgresql
I can't test it because it seems i do not add a reference in visual 
studio 2005.


Where can i download the ODBC or .NET data provider in order to 
successfully connect to postgresql from vb.net




I recommend Npgsql if you're using ADO and Datasets.  Otherwise, the 
ODBC stuff. The OLE interface seems slower than Npgsql, but I haven't 
compared with ODBC.


You can find all the interfaces at:



-Owen

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] SSL and crash woes.

2007-08-29 Thread Jeff Amiel
A couple of years back (2005) we were setting up replication for the first time 
(using slony) from
our production database server to a stand-by box  sitting next to it and a 
remote box in a DR
site.  We were running FreeBSD 5.X/6.X on all systems on Dell servers and 
postgres 7.4.X and then
8.0.X

Replication appeared to crash our production database...a lot.  After looking 
at the core
dumps/stack traces at the time, we determined that we were crashing in the SSL 
layers... so we
disabled SSL (via pg_hba.conf and the slony conn settings) and haven't had an 
issue for the last
couple of years. 

 Stable as a rock.

Well... we just upgraded our hardware (Sun X4600s), operating systems 
(Solaris 10), postgres versions (8.2.4), and slony (1.2.10).  Rock solid.  
However, our first indication of trouble was an issue with executing pg_dump 
from a remote backup
server. (see http://archives.postgresql.org/pgsql-general/2007-08/msg01347.php)
Local pg_dump's have no issue.   So we changed our backup scheme to do local 
dumps and push the
files off the server to the backup location.  Problem solved.

Then...replication woes again.  With these fresh installs, we didn't think too 
much about the SSL
settings... and bing-bang...crash.  Crash.  Crash.  Crash.  Stopped 
replication.  Problem goes
away.  Start replication...crash crash.  So we stopped replication.

We recompiled postgres with debug info on a test db box and loaded up the most 
recent database
dump.  We then attempted a remote pg_dump from another local server.  Crash.  
Took a look at the
core dump...

Core was generated by `/usr/local/pgsql/bin/postgres -D /testdb'.
Program terminated with signal 11, Segmentation fault.
#0  0xfee8ec23 in sk_value () from /usr/local/ssl/lib/libcrypto.so.0.9.8
(gdb) bt
#0  0xfee8ec23 in sk_value () from /usr/local/ssl/lib/libcrypto.so.0.9.8
#1  0xfef5b05b in ssl3_output_cert_chain () from 
/usr/local/ssl/lib/libssl.so.0.9.8
#2  0x in ?? ()


Hmmm...that looked familiar (from years ago)

So... we set up the connection to be 'hostnossl' in pg_hba.conf and tried 
again.  Success.
Changed it back to 'hostssl'... crash.  Same place.  

I am going to take the time to set up a test environment for the replication as 
well, but I assume
I will experience the same thing.  SSL means crash...no SSL means no crash.

Anyone have any thoughts?





   


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Reliable and fast money transaction design

2007-08-29 Thread Tom Lane
"Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> "Joshua D. Drake" <[EMAIL PROTECTED]> writes:
>>> SERIALIZABLE is really slow :).
>> 
>> Say what?  If anything it's probably faster than READ COMMITTED, because
>> it doesn't take as many snapshots.  But the difference is likely down in
>> the noise anyway.

> Not in production it isn't.

Well, I can believe that specific applications might be slower overall
due to having to retry transactions that get serialization failures,
or perhaps because they take more locks to prevent such failures.
But it's not slower as far as the database engine is concerned.
If you think otherwise I'd like to see a test case.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Reliable and fast money transaction design

2007-08-29 Thread Joshua D. Drake
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Tom Lane wrote:
> "Joshua D. Drake" <[EMAIL PROTECTED]> writes:
>> Tom Lane wrote:
>>> "Joshua D. Drake" <[EMAIL PROTECTED]> writes:
 SERIALIZABLE is really slow :).
>>> Say what?  If anything it's probably faster than READ COMMITTED, because
>>> it doesn't take as many snapshots.  But the difference is likely down in
>>> the noise anyway.
> 
>> Not in production it isn't.
> 
> Well, I can believe that specific applications might be slower overall
> due to having to retry transactions that get serialization failures,
> or perhaps because they take more locks to prevent such failures.
> But it's not slower as far as the database engine is concerned.

Well, I can only speak to live production loads; I have never profiled
the difference at that low a level. I can definitely say that in a
standard web app under load, serializable is a huge performance killer.

Sincerely,

Joshua D. Drake

- --

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFG1bHqATb/zqfZUUQRAvDMAJ9nEu+9cumsD+P6E7pZmdkEry6V7QCeN1Cz
nRjVC8BoFZb4b+u6ncP8UFo=
=N4gK
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] SSL and crash woes.

2007-08-29 Thread Tom Lane
Jeff Amiel <[EMAIL PROTECTED]> writes:
> [ SSL plus slony = crash ]

Interesting --- I don't recall that that's ever been reported before.
It might be best to take it up on the slony lists; I wouldn't want to
speculate whether the bug is in slony or the core backend (or openssl?)
but slony hackers would have less of a learning curve to try to
reproduce your results.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Out of Memory - 8.2.4

2007-08-29 Thread Alvaro Herrera
Tom Lane escribió:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:

> >> Given that the worst-case consequence is extra index vacuum passes,
> >> which don't hurt that much when a table is small, maybe some smaller
> >> estimate like 100 TIDs per page would be enough.  Or, instead of
> >> using a hard-wired constant, look at pg_class.reltuples/relpages
> >> to estimate the average tuple density ...
> 
> > This sounds like a reasonable compromise.
> 
> Do you want to make it happen?

I'm not having much luck really.  I think the problem is that ANALYZE
stores reltuples as the number of live tuples, so if you delete a big
portion of a big table, then ANALYZE and then VACUUM, there's a huge
misestimation and extra index cleanup passes happen, which is a bad
thing.

There seems to be no way to estimate the dead space, is there?  We could
go to pgstats but that seems backwards.

I was having a problem at first with estimating for small tables which
had no valid info in pg_class.reltuples, but I worked around that by
using MaxHeapTuplesPerPage.  (I was experimenting with the code that
estimates average tuple width in estimate_rel_size() but then figured it
was too much work.)  So this part is fine AFAICS.

I attach the patch I am playing with, and the simple test I've been
examining (on which I comment the ANALYZE on some runs, change the
conditions on the DELETE, put the CREATE INDEX before insertion instead
of after it, etc).

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
drop table if exists foo;
create table foo (a int, b varchar);
create index foo_idx on foo(a);
insert into foo select * from generate_series(1, 20);
delete from foo where a % 2 = 0; -- or a % 3 = 0;
analyze foo;
vacuum verbose foo;

Index: src/backend/commands/vacuumlazy.c
===
RCS file: /home/alvherre/Code/cvs/pgsql/src/backend/commands/vacuumlazy.c,v
retrieving revision 1.90
diff -c -p -r1.90 vacuumlazy.c
*** src/backend/commands/vacuumlazy.c	30 May 2007 20:11:57 -	1.90
--- src/backend/commands/vacuumlazy.c	29 Aug 2007 18:36:18 -
*** static int lazy_vacuum_page(Relation one
*** 120,126 
  static void lazy_truncate_heap(Relation onerel, LVRelStats *vacrelstats);
  static BlockNumber count_nondeletable_pages(Relation onerel,
  		 LVRelStats *vacrelstats);
! static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks);
  static void lazy_record_dead_tuple(LVRelStats *vacrelstats,
  	   ItemPointer itemptr);
  static void lazy_record_free_space(LVRelStats *vacrelstats,
--- 121,128 
  static void lazy_truncate_heap(Relation onerel, LVRelStats *vacrelstats);
  static BlockNumber count_nondeletable_pages(Relation onerel,
  		 LVRelStats *vacrelstats);
! static void lazy_space_alloc(Relation onerel, LVRelStats *vacrelstats,
!  BlockNumber relblocks);
  static void lazy_record_dead_tuple(LVRelStats *vacrelstats,
  	   ItemPointer itemptr);
  static void lazy_record_free_space(LVRelStats *vacrelstats,
*** lazy_scan_heap(Relation onerel, LVRelSta
*** 289,295 
  	vacrelstats->rel_pages = nblocks;
  	vacrelstats->nonempty_pages = 0;
  
! 	lazy_space_alloc(vacrelstats, nblocks);
  
  	for (blkno = 0; blkno < nblocks; blkno++)
  	{
--- 291,297 
  	vacrelstats->rel_pages = nblocks;
  	vacrelstats->nonempty_pages = 0;
  
! 	lazy_space_alloc(onerel, vacrelstats, nblocks);
  
  	for (blkno = 0; blkno < nblocks; blkno++)
  	{
*** count_nondeletable_pages(Relation onerel
*** 964,979 
   * See the comments at the head of this file for rationale.
   */
  static void
! lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
  {
  	long		maxtuples;
  	int			maxpages;
  
  	if (vacrelstats->hasindex)
  	{
! 		maxtuples = (maintenance_work_mem * 1024L) / sizeof(ItemPointerData);
  		maxtuples = Min(maxtuples, INT_MAX);
  		maxtuples = Min(maxtuples, MaxAllocSize / sizeof(ItemPointerData));
  		/* stay sane if small maintenance_work_mem */
  		maxtuples = Max(maxtuples, MaxHeapTuplesPerPage);
  	}
--- 966,999 
   * See the comments at the head of this file for rationale.
   */
  static void
! lazy_space_alloc(Relation onerel, LVRelStats *vacrelstats, BlockNumber relblocks)
  {
  	long		maxtuples;
  	int			maxpages;
  
  	if (vacrelstats->hasindex)
  	{
! 		BlockNumber	relpages;
! 		double		reltuples;
! 
! 		/* coerce values in pg_class to more desirable types */
! 		relpages = (BlockNumber) onerel->rd_rel->relpages;
! 		reltuples = (double) onerel->rd_rel->reltuples;
! 
! 		/*
! 		 * If the relation has never been vacuumed, assume worst-case
! 		 * number of tuples.  Otherwise, use the density from pg_class to
! 		 * estimate it.
! 		 */
! 		if (relpages > 0)
! 			maxtuples = (long) (reltuples / (double) relpages * (double) relblocks);
! 		else
! 			maxtuples = MaxHeapTuplesPerPage * relblocks;
! 
! 		maxtuples = Mi

Re: [GENERAL] SSL and crash woes.

2007-08-29 Thread Scott Marlowe
On 8/29/07, Jeff Amiel <[EMAIL PROTECTED]> wrote:
> A couple of years back (2005) we were setting up replication for the first 
> time (using slony) from
> our production database server to a stand-by box  sitting next to it and a 
> remote box in a DR
> site.  We were running FreeBSD 5.X/6.X on all systems on Dell servers and 
> postgres 7.4.X and then
> 8.0.X
>
> Replication appeared to crash our production database...a lot.  After looking 
> at the core
> dumps/stack traces  at the time, we determined that we were crashing in the 
> ssl layersso we
> disabled SSL (via pg_hba.conf and the slony conn settings) and haven't had an 
> issue for the last
> couple of years.

Interesting.  Is it possible that you've got 2 versions of
openssl?  Maybe slony is being compiled against one, then using the
other lib, etc.?

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] How to avoid "Seq Scans"?

2007-08-29 Thread Vincenzo Romano
On Wednesday 29 August 2007 11:20:53 Richard Huxton wrote:
> Vincenzo Romano wrote:
> > Hi all.
> >
> > In PG 8.2.4 I have a 4+M rows table like this:
> >
> > I'd need to write a stored function that should do the
> > following:
> >
> > for rec in select * from t order by f2,f2 loop
> > ...
> > end loop;
> >
> >->  Seq Scan on t  (cost=0.00..85501.38 rows=4779338 width=28)
> >
> > I'd like to know a hint about a technicque to avoid the
> > sequential scan!
>
> But you're fetching all the rows - what other way would be faster?

Definitely right. 

I'm trying to investigate the strange (to me) behaviour of a couple of 
stored procedures.
The outer one is in PL/PGSQL and has the above mentioned loop.
The inner one, called into the loop, is an "SQL stable strict"
function.
The outer "empty" loop takes less than 16 seconds.
The inner function takes between 10 and 50 msec when called by itself.
The inner+outer combination needs more than 45 minutes just to run over
the first ten thousand rows.

The inner function is actually a select over another table (16+M rows)
and always shows very good timing when executed by itself.
What I suspect now is that something goes wrong in the query 
planner when the inner function gets called by the outer one.

Is there any confirmation (and possibly workaround) for this
behaviour?
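
To make the pattern concrete, here is a minimal sketch of the two
functions (the table and column names follow my earlier example; the
bodies are illustrative only, not the real code):

-- inner function: "SQL stable strict", selecting from the 16+M row table
CREATE FUNCTION inner_lookup(bigint) RETURNS bigint AS $$
    SELECT count(*) FROM big_table WHERE big_table.f1 = $1;
$$ LANGUAGE sql STABLE STRICT;

-- outer function: PL/pgSQL loop calling the inner function once per row
CREATE FUNCTION outer_loop() RETURNS void AS $$
DECLARE
    rec record;
BEGIN
    FOR rec IN SELECT * FROM t ORDER BY f2, f3 LOOP
        PERFORM inner_lookup(rec.f1);
    END LOOP;
END;
$$ LANGUAGE plpgsql;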

-- 
Vincenzo Romano
--
Maybe Computer will never become as intelligent as Humans.
For sure they won't ever become so stupid. [VR-1988]

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Out of Memory - 8.2.4

2007-08-29 Thread Tom Lane
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> I'm not having much luck really.  I think the problem is that ANALYZE
> stores reltuples as the number of live tuples, so if you delete a big
> portion of a big table, then ANALYZE and then VACUUM, there's a huge
> misestimation and extra index cleanup passes happen, which is a bad
> thing.

Yeah ... so just go with a constant estimate of say 200 deletable tuples
per page?

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] SSL and crash woes.

2007-08-29 Thread Jeff Amiel

--- Scott Marlowe <[EMAIL PROTECTED]> wrote:

> 
> Interesting.  Is it possible that either you've got 2 versions of
> openssl?  Maybe slony is being compiled against one, then using the
> other lib, etc.?

yes...I suppose it is. Solaris came with one...we installed another.
hm...
# find /usr /lib -name libssl*
/usr/lib/mps/amd64/libssl3.so
/usr/lib/mps/secv1/amd64/libssl3.so
/usr/lib/mps/secv1/libssl3.so
/usr/lib/mps/libssl3.so
/usr/sfw/lib/amd64/libssl.so
/usr/sfw/lib/amd64/libssl.so.0.9.7
/usr/sfw/lib/libssl.so
/usr/sfw/lib/libssl.so.0.9.7
/usr/sfw/lib/mozilla/libssl3.so
/usr/apache/libexec/libssl.so
/usr/local/ssl/lib/libssl.a
/usr/local/ssl/lib/libssl.so
/usr/local/ssl/lib/libssl.so.0.9.8
/usr/local/ssl/lib/pkgconfig/libssl.pc
 



   


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] autovacuum not running

2007-08-29 Thread Ben

On Tue, 28 Aug 2007, Decibel! wrote:


On Tue, Aug 28, 2007 at 03:10:34PM -0700, Ben wrote:

Hm, I assumed it wasn't running because pg_stat_all_tables shows the last
vacuum from several weeks ago, and this is an active db. Also, I see no
vacuum activity in the logs. But "show autovacuum" does show it being
on


Last vacuum, or last autovacuum?



Doh! I was looking at last_vacuum. Unfortunately, the *last_autovacuum* 
column is completely blank on the tables I'm inserting into, so I guess 
it's never worked there.


But I do see the stats collector running, and upping the log level to 
debug2 does show that the autovacuum at least starts running, and never 
seems to have any errors.


Hmm...

After some tests, it seems that autovacuum doesn't kick in from simple 
inserts, as the manual says it will. You have to delete and/or update as 
well. Am I misreading the manual?


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Reliable and fast money transaction design

2007-08-29 Thread Gregory Stark
"Joshua D. Drake" <[EMAIL PROTECTED]> writes:

> Tom Lane wrote:
>> "Joshua D. Drake" <[EMAIL PROTECTED]> writes:
>>> Tom Lane wrote:
 "Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> SERIALIZABLE is really slow :).
 Say what?  If anything it's probably faster than READ COMMITTED, because
 it doesn't take as many snapshots.  But the difference is likely down in
 the noise anyway.
>> 
>>> Not in production it isn't.
>> 
>> Well, I can believe that specific applications might be slower overall
>> due to having to retry transactions that get serialization failures,
>> or perhaps because they take more locks to prevent such failures.
>> But it's not slower as far as the database engine is concerned.
>
> Well I can only speak to live production loads. I have never profiled
> the difference from that low of a level. I can definitely say that in a
> standard web app, under velocity, serializable is a huge performance killer.

Are you having to retry after serialization failures frequently?

There's no reason for an individual transaction to take longer in SERIALIZABLE
mode. In fact I believe SERIALIZABLE mode is actually measurably faster in
benchmarks but haven't run one in READ COMMITTED mode recently (for that
reason).
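
For the archives, the pattern under discussion as a sketch ("accounts"
is a hypothetical table):

BEGIN ISOLATION LEVEL SERIALIZABLE;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
-- any statement above may fail with SQLSTATE 40001 (serialization_failure),
-- after which the client is expected to re-issue the whole transaction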

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] show connection limit?

2007-08-29 Thread Ken . Colson
I have set the connection limit of a user with the ALTER ROLE command in
8.1.  Is there a way to see what the connection limit is set to for various
users?

 

Thanks,

Ken

 



Re: [GENERAL] autovacuum not running

2007-08-29 Thread Tom Lane
Ben <[EMAIL PROTECTED]> writes:
> After some tests, it seems that autovacuum doesn't kick in from simple 
> inserts, as the manual says it will. You have to delete and/or update as 
> well. Am I misreading the manual?

There's nothing to vacuum until you delete or update.

You should see some auto-analyze activity on an insert-only table, but
not vacuuming.
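
An easy way to watch both is to compare the timestamp columns in
pg_stat_all_tables (a sketch, using the 8.2 column names):

SELECT relname, last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze
FROM pg_stat_all_tables
WHERE schemaname = 'public';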

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] Npgsql GUI problem C#.net

2007-08-29 Thread Jessie
Hi all,

I'm using C#.net 2005 dealing with Npgsql. I have a very newbie
question about how to use the data received from the database. I've
already connected the database with the program and reading data from
database works very well in the Console application. But in a Windows
Forms application, I want to use the Form_Load function to call the reading
function, which is confusing since that function takes a
"String[ ] args" parameter. How should I combine this reading function with my
Windows Forms application? This seems like a simple question that comes up in
every GUI application that accesses the database, but I
couldn't find any source of help on this yet.

I appreciate any of your help!

Jessie


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] EXPLAIN ANALYZE not considering primary and unique indices!

2007-08-29 Thread Sanjay
Thanks a lot, everybody! Now it's clear to me. I was wrongly thinking that
PostgreSQL might not be creating the indices by default.

regards
Sanjay


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Bigtime scaling of Postgresql (cluster and stuff I suppose)

2007-08-29 Thread Trent Shipley
On Monday 2007-08-27 08:04, Andrew Sullivan wrote:
> On Sat, Aug 25, 2007 at 11:13:45AM -0400, Tom Lane wrote:
> > In case you hadn't noticed the disconnect between these statements:
> > if they have to be that close together, there *will* be a single point
> > of failure.  Fire in your data center, for instance, will take out every
> > copy of your data.  So as a "high availability" solution I don't find
> > it all that compelling.
>
> Indeed.  There's another issue, too, which you have to spend some
> time reading the manual to get.  The clustering stuff is _yet
> another_ table type, with subtly different semantics from other table
> types.  As usual, this means that you can blow off your foot by
> mixing table types in a transaction.  As near as I can tell, the
> cluster table type (I disremeber the name of it) cannot be run in
> strict mode, either.
>
> To answer the OP's question, you can do some "cluster-like" things by
> doing hardware clustering -- two machines attached to a RAID with
> some sort of hardware fail-over in place.
>
> I think that the MySQL cluster stuff wasn't intended as an HA
> feature, though (although they might well be selling it that way).
> It was a way to scale many small systems for certain kinds of
> workloads.  My impression is that in most cases, it's a SQL-ish
> solution to a problem where someone decided to use the SQL nail
> because that's the hammer they had.  I can think of ways you could
> use it, and I'm not surprised that Some Giant Corp is doing so.  But
> I'd be astonished if someone used it for truly valuable data.  I
> would think very hard about the qualifications of someone who
> proposed using it for financial data.
>
If it was developed by Ericsson for telco purposes then it would be designed to 
be wicked fast for OLTP (billing/switching is an OLTP application) with VERY 
high uptime and reliability (customers, and on occasion regulators, get angry 
when the phones don't work).  It wouldn't matter whether it could be geographically 
distributed.  If the switching center catches fire you're hosed anyway.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] psql \copy command runs as a transcation?

2007-08-29 Thread Tom Lane
Ow Mun Heng <[EMAIL PROTECTED]> writes:
> Does the psql's \copy command run as a transaction?

Certainly.

> I think it does, but
> somehow when I cancel (in a script) a running import, "seems" (I can't
> seem to duplicate it on the cli though) like a few lines/rows gets
> inserted anyway..

Hmm.  Some client-side software is written to split an import into
multiple short copy commands, but I don't believe there's any such thing
in psql.  Are you sure those rows didn't come from someplace else?
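
A quick way to convince yourself of the transactional behaviour in psql
(table and file names hypothetical):

BEGIN;
\copy foo from 'data.txt'
ROLLBACK;
-- none of the rows loaded from data.txt remain in foo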

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Creating indices on foreign keys

2007-08-29 Thread Sanjay
Hi All,

I think PostgreSQL does not create indexes on the foreign keys of a table,
and I have to do so explicitly. That means, wherever I am using master-
detail tables, I have to create explicit indexes on the foreign key in
the detail table. Am I wrong?

thanks
Sanjay


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Removing pollution from log files

2007-08-29 Thread Andrus
> That's not pollution; it's telling you you need to fix your
> application to escape the backslashes differently.

I haven't seen anything in the ODBC specification that requires escaping
strings, so this is the ODBC driver's task.

Andrus. 



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Creating indices on foreign keys

2007-08-29 Thread Sanjay
> I think PostgreSQL does not create indexes on foreign keys of a table,
> and I have to do so explicitly. That means, wherever I am using master-
> detail table, I have to create explicit indexes on the foreign key in
> the detail table. Am I wrong?

Referring to the discussion here, I assume I am correct:

http://groups.google.co.in/group/pgsql.general/browse_thread/thread/516db5b5f36c82d0
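
For the record, a minimal illustration (hypothetical tables):

CREATE TABLE master (id int PRIMARY KEY);
CREATE TABLE detail (
    id int PRIMARY KEY,
    master_id int NOT NULL REFERENCES master(id)
);
-- the PRIMARY KEY constraints get unique indexes automatically,
-- but master_id does not; create one explicitly:
CREATE INDEX detail_master_id_idx ON detail (master_id);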

thanks
Sanjay


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] ERROR: table row type and query-specified row type do not match

2007-08-29 Thread Patryk Kordylewski

Hi,

Can someone help me and explain why I get this error when I try to make an 
update on that view? It seems that something is wrong with the subselect.


http://fooby.imosnet.de/error.txt

Thanks,
Patryk

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] PGError: input out of range

2007-08-29 Thread dustov

The problem was indeed the argument to ACOS() falling outside the [-1,1]
range, and this happened because the query was calculating the distance
between the same LAT,LONG pair (the same location).

I added a WHERE L1.ID <> L2.ID to stop the reflexive calculation.
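
Clamping the ACOS() argument also guards against rounding pushing it out
of range; a sketch with hypothetical table and column names:

SELECT l1.id, l2.id,
       acos(least(1.0, greatest(-1.0,
           sin(radians(l1.lat)) * sin(radians(l2.lat))
           + cos(radians(l1.lat)) * cos(radians(l2.lat))
             * cos(radians(l1.long - l2.long))))) * 6371.0 AS km
FROM locations l1
JOIN locations l2 ON l1.id <> l2.id;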


Martijn van Oosterhout wrote:
> 
> On Sat, Aug 18, 2007 at 03:21:02PM -0700, dustov wrote:
>> 
>> My database just had this new error, and I have no idea why (because I
>> haven't intentionally made any changes to this table).   Does anyone have
>> an
>> idea which input is out of range-- or what the problem might be?
> 
> The only thing in your query that I can imagine being out of range is
> ACOS() which would need to be between -1 and 1 (otherwise the result
> would be complex).
> 
> I'd try and see what the argument to the ACOS is, but it's probably
> some corner case where the rounding is getting you.
> 
> Hope this helps,
> -- 
> Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
>> From each according to his ability. To each according to his ability to
>> litigate.
> 
>  
> 

-- 
View this message in context: 
http://www.nabble.com/PGError%3A-input-out-of-range-tf4291698.html#a12376732
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] counting columns

2007-08-29 Thread pere roca

hi! 

I want different users to insert data and perform different analyses, which
will be inserted in an interactively created new column. The name of this
new column should follow a logical scheme such as: fxanalysis_1 for the first
user that performs the analysis called fx, fxanalysis_2 for the second... 
To get it, I think I should construct a function that counts how many
columns have their name starting with fxanalysis. 
   If count >=1, then alter table adding a new column with name fxanalysis_2
   and so on...
   One of the questions: how to COUNT COLUMNS? I can rename, add, alter
columns... but count?

Thanks in advance,
Pere
-- 
View this message in context: 
http://www.nabble.com/counting-columns-tf4342021.html#a12368870
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Can this function be declared IMMUTABLE?

2007-08-29 Thread beickhof
Hello,

I have a question about whether I can safely declare a function IMMUTABLE. 
 Citing the documentation under "Function Volatility Categories" in the 
section on "Extending SQL":

It is generally unwise to select from database tables within an IMMUTABLE 
function at all, since the immutability will be broken if the table 
contents ever change.


Well, I am considering a function that does read from a table, but the 
table contents change extremely infrequently (the table is practically a 
list of constants).  Would it be safe to declare the function IMMUTABLE 
provided that the table itself is endowed with a trigger that will drop 
and recreate the function any time the table contents are modified?  In 
this way, it seems that the database would gain the performance benefit of 
an immutable function for the long stretches of time in between changes to 
the table.

I apologize that I don't have any details -- it is still very early in the 
development of the database design, and I was just hoping to get a better 
understanding of whether an immutable function would safely offer any 
benefit in this scenario.

Thanks very much,
Bobby

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] pg_dump.... pg_restore...how long does it take?

2007-08-29 Thread rafikoko

Hi,
I've dumped my database into an archive by using the following command:
.\pg_dump -U tad -ci -F t -f openbravo.tar openbravo.
Everything seemed to be all right and the whole process was very fast.
Afterwards I got a tar file whose size is about 4MB.
I also got the following message after dumping:
pg_dump: [tar archiver] actual file length (4104361) does not match expected
(4104361)
which in fact does not make sense to me.

Then I copied the tar archive to another computer to restore the database
using the following command:
pg_restore -d openbravo ..\openbravo.tar
The process started about 18 hours ago and still runs. I've checked task
manager and noticed that pg_restore still uses 50% of CPU. 

The question is:
Have I done everything correctly? Since the tar archive is only 4MB and was
generated in a few seconds, I wonder why restoring takes so long. How long
can it take? Does anybody have such an experience?

Looking forward to any support.

BR,
Rafal
-- 
View this message in context: 
http://www.nabble.com/pg_dump-pg_restore...how-long-does-it-take--tf4345965.html#a12381468
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] simple query runs 26 seconds

2007-08-29 Thread Andrus
> OK, so the info relevant to this query is
>
>> INFO:  index "rid_toode_idx" now contains 1517900 row versions in 9950 
>> pages
>> DETAIL:  7375 index row versions were removed.
>> 245 index pages have been deleted, 232 are currently reusable.
>
>> INFO:  "rid": found 7375 removable, 1517900 nonremovable row versions in
>> 82560 pages
>> DETAIL:  0 dead row versions cannot be removed yet.
>> There were 292585 unused item pointers.
>> 18375 pages contain useful free space.
>> 0 pages are entirely empty.
>
> which gives us a density of about 150 entries/page in the index and
> 18 rows/page in the heap proper.  That seems a bit low --- do you think
> your rows are several hundred bytes each?

This table has a lot of columns [1] and the database encoding is UTF-8.
Most columns in every row are empty: empty numeric columns contain the value 0, 
and empty string columns mostly contain NULL.
The item code and name are filled in most rows.

I don't know how to get the table row size on disk.
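
Perhaps something like this gives a rough estimate, assuming the
pg_relation_size() function from 8.1 onwards is available:

SELECT pg_relation_size('firma1.rid') AS table_bytes,
       reltuples,
       pg_relation_size('firma1.rid') / NULLIF(reltuples, 0) AS avg_row_bytes
FROM pg_class
WHERE oid = 'firma1.rid'::regclass;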

> If not, the best cleanup
> strategy is probably to CLUSTER the table on whichever index you use
> most (dunno if this is your most important query or not).

The RID table contains all rows from all documents in the company.
It is queried, and new rows are added to it, frequently.

It is frequently accessed by item code (the toode field) and by document id 
(the dokumnr field).
Both of those indexes are frequently used,
so clustering by the toode field may decrease access speed via the dokumnr field.

Anyway, I will run the command

CLUSTER rid_toode_idx ON firma1.rid

tonight.

I'm wondering why row fetching affects the speed so much.

set search_path to firma1,public; select count(*)
   from rid join dok using (dokumnr)
   where toode='NE TR'
 and doktyyp='U'

returns 5.
There are indexes on all fields used in the join and where clauses; see note [2].
So only 5 rows need to be fetched.
I have no idea why fetching 5 rows requires 18 seconds.

>> postgresql.conf contains
>> autovacuum = on   # enable autovacuum subprocess?
>> However, log files does not show any autovacuum messages.
>> So I expect that autovacuum in not running.
>> Any idea why autovacuum is not running ?
>
> Did you also turn on stats_row_level and stats_start_collector?

Default postgresql.conf file installed by windows installer contains

#stats_command_string = on
#update_process_title = on
stats_start_collector = on  # needed for block or row stats
#stats_block_level = off
stats_row_level = on
#stats_reset_on_server_start = off # (change requires restart)

So I expect that they are on.
I also ran ANALYZE manually before testing.

> It could also be that autovac *is* running but its efforts are wasted
> because of too small FSM settings --- what have you got max_fsm_pages
> set to?

postgresql.conf file contains

max_fsm_pages = 204800  # min max_fsm_relations*16, 6 bytes each
#max_fsm_relations = 1000  # min 100, ~70 bytes each

So it is set. I ran VACUUM and ANALYZE commands manually before testing.

Server has 1 GB RAM.
I added shared_buffers = 15000 to the postgresql.conf file but the speed did 
not change.

Other queries for this database run fast.
Maybe I should try to rewrite this query in some other way?
It seems that for some reason more than 5 rows are fetched from the table.


[1] rid table structure and indexes

CREATE TABLE firma1.rid
(
  id integer NOT NULL DEFAULT nextval('rid_id_seq'::regclass),
  reanr integer NOT NULL DEFAULT nextval('rid_reanr_seq'::regclass),
  dokumnr integer NOT NULL,
  nimetus character(50),
  hind numeric(15,5) NOT NULL DEFAULT 0,
  kogus numeric(12,4) NOT NULL DEFAULT 0,
  toode character(20),
  partii character(15),
  myygikood character(4),
  hinnak character(5),
  kaubasumma numeric(15,5) NOT NULL DEFAULT 0,
  yhik character(6),
  kulukonto character(10),
  kuluobjekt character(10),
  rid2obj character(10),
  reakuupaev date,
  kogpak numeric(9,4) NOT NULL DEFAULT 0,
  kulum numeric(15,5) NOT NULL DEFAULT 0,
  baasostu numeric(15,5),
  ostuale numeric(7,2),
  rid3obj character(10),
  rid4obj character(10),
  rid5obj character(10),
  rid6obj character(10),
  rid7obj character(10),
  rid8obj character(10),
  rid9obj character(10),
  kaskogus numeric(12,4),
  aktsiis numeric(8,2),
  kulutoode character(20),
  kulupartii character(15),
  inpdoktyyp character(7),
  inpdokumnr integer,
  rtaitkogus numeric(12,4),
  fifoexpens ebool,
  calculrow ebool,
  laosumma numeric(12,2),
  laoraha character(3),
  variant ebool,
  taitmata numeric(12,4),
  iseteha ebool,
  rtellimus character(25),
  reakaal numeric(16,5),
  paritoluri character(2),
  statvaartu numeric(10,2),
  pakendilii numeric(1),
  CONSTRAINT rid_pkey PRIMARY KEY (id),
  CONSTRAINT rid_dokumnr_fkey FOREIGN KEY (dokumnr)
  REFERENCES firma1.dok (dokumnr) MATCH SIMPLE
  ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY IMMEDIATE,
  CONSTRAINT rid_hinnak_fkey FOREIGN KEY (hinnak)
  REFERENCES firma1.hkpais (hinnak) MATCH SIMPLE
  ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,
  CONSTRAINT rid

[GENERAL] how to print a schema

2007-08-29 Thread mrodriguez
Does anybody know how to print a schema in PostgreSQL? I know you can
look at one table at a time, but is there a way to print all columns
and rows at once?

thanks.


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] show connection limit?

2007-08-29 Thread Tom Lane
[EMAIL PROTECTED] writes:
> Is there a way to see what the connection limit is set to for various
> users?

Look in pg_authid.
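
For example (rolconnlimit is -1 when no per-role limit is set; the
pg_roles view exposes the same column to non-superusers):

SELECT rolname, rolconnlimit FROM pg_authid WHERE rolconnlimit <> -1;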

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] counting columns

2007-08-29 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 08/28/07 10:46, pere roca wrote:
> hi! 
> 
> I want different users to insert data and perform different analysis which
> will be inserted in an interactively created new column. The name of this
> new column should follow a logic name such as: fxanalysis_1 for the first
> user that perform the analysis called fx, fxanalysis_2 for the second... 
> To get it, I think I should construct a function that counts how many
> columns have their name starting with fxanalysis. 
>If count >=1, then alter table adding a new column with name fxanalysis_2
>and so on...
>One of the questions: how to COUNT COLUMNS? I can rename, add, alter
> columns... but count?

You'll need to go into the catalog.
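
For example, via information_schema (the table name is hypothetical):

SELECT count(*)
FROM information_schema.columns
WHERE table_name = 'mytable'
  AND column_name LIKE 'fxanalysis%';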

Your idea, though, is poor.  You should make a second table, where
the key is the same as the master table, plus a "sequence number".

If you want to do it your way, though, PostgreSQL supports array types.

- --
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFG1gSbS9HxQb37XmcRAqw+AKC8lDaQZ6IQ2ugVZzfxuT8TlGeHvQCgsLic
esN5w79t4bGAnURm+Nulq20=
=PBHq
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] ERROR: table row type and query-specified row type do not match

2007-08-29 Thread Tom Lane
Patryk Kordylewski <[EMAIL PROTECTED]> writes:
> can someone help/explain me why i get this error when i try to make an 
> update on that view? It seems that something is wrong with the subselect.
> http://fooby.imosnet.de/error.txt

AFAICT, this works for me, so either you left something important out of
your example or you're looking at an already-fixed bug.  What PG version
is that?

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] PostgreSQL Conference Fall 2007

2007-08-29 Thread Stuart Cooper
> PostgreSQL.Org is having a one day technical conference in Portland,
> Oregon on October 20th 2007. The conference will be held at Portland
> State University, and like PDXPGDay during OSCON there will be a dinner
> and party afterward.

This is wrongly advertised (and dead-linked) on the homepage of
http://www.postgresql.org for 20th September.

Somebody should tidy that up.

Cheers,
Stuart.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Npgsql GUI problem C#.net

2007-08-29 Thread Hiroshi Saito

Hi.

Please see,
http://npgsql.projects.postgresql.org/exampleprograms.html
This is a DataGrid sample with a very simple composition.
However, operation may be somewhat difficult from the property pages 
of VS2005, as ADO.NET 2.0 and Npgsql are not entirely congenial. 


Regards,
Hiroshi Saito

- Original Message - 
From: "Jessie" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, August 28, 2007 4:04 AM
Subject: [GENERAL] Npgsql GUI problem C#.net



Hi all,

I'm using C#.net 2005 dealing with Npgsql. I have a very newbie
question about how to use the data received from the database. I've
already connected the database with the program and reading data from
database works very well in the Console application. But in a Windows
Forms application, I want to use the Form_Load function to call the reading
function, which is confusing since that function takes a
"String[ ] args" parameter. How should I combine this reading function with my
Windows Forms application? This seems like a simple question that comes up in
every GUI application that accesses the database, but I
couldn't find any source of help on this yet.

I appreciate any of your help!

Jessie


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Reliable and fast money transaction design

2007-08-29 Thread Joshua D. Drake
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Gregory Stark wrote:
> "Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> 
>> Tom Lane wrote:
>>> "Joshua D. Drake" <[EMAIL PROTECTED]> writes:
 Tom Lane wrote:
> "Joshua D. Drake" <[EMAIL PROTECTED]> writes:
>> SERIALIZABLE is really slow :).
> Say what?  If anything it's probably faster than READ COMMITTED, because
> it doesn't take as many snapshots.  But the difference is likely down in
> the noise anyway.
 Not in production it isn't.
>>> Well, I can believe that specific applications might be slower overall
>>> due to having to retry transactions that get serialization failures,
>>> or perhaps because they take more locks to prevent such failures.
>>> But it's not slower as far as the database engine is concerned.
>> Well I can only speak to live production loads. I have never profiled
>> the difference from that low of a level. I can definitely say that in a
>> standard web app, under velocity, serializable is a huge performance killer.
> 
> Are you having to retry after serialization failures frequently?
> 
> There's no reason for an individual transaction to take longer in SERIALIZABLE
> mode. In fact I believe SERIALIZABLE mode is actually measurably faster in
> benchmarks but haven't run one in READ COMMITTED mode recently (for that
> reason).

Oddly enough, I am in the exact opposite boat :). We found that READ
COMMITTED was faster a while back and haven't looked back except where
the logic requires. The only recent testing I have done is with our
PostgreSQL Analytics software. We are using Psycopg2, which defaults to
serializable. We were having serious performance problems under highly
concurrent selects. We moved to READ COMMITTED and the problem went away.

I will see if I can do some digging and get some actual numbers for us.

Joshua D. Drake





- --

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFG1ju7ATb/zqfZUUQRAlXlAJ0TWwfTpUQX++TDN0QPtYvhGGRyuwCghzRi
8mIlB2013+T4QMdjK2F3a9M=
=HGhc
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] how to print a schema

2007-08-29 Thread Rodrigo De León
On 8/27/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Does anybody know how to print a schema in PostgreSQL? I know you can
> look at one table at at time, but is there a way to print all columns
> and rows at once?

1. pg_dump / pg_dumpall
   http://www.postgresql.org/docs/8.2/static/backup-dump.html

2. PostgreSQL Autodoc
   http://www.rbt.ca/autodoc/
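
For example (database name hypothetical):

# schema plus data:
pg_dump mydb > mydb.sql
# schema only:
pg_dump --schema-only mydb > mydb-schema.sql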

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] What kind of locks does vacuum process hold on the db?

2007-08-29 Thread Nitin Verma
What kind of locks does it hold on the db? Table level / Row level /
AccessExclusiveLock ?
Is there a document that details the vacuum process and its internals?


Use case: I am doing a vacuum {all full analyze} on a postgres 8.1 hosting
two user databases. For this I am using ${pg-home}/bin/vacuumdb, not direct
SQL. In parallel with this I am running transactions on both user DBs.
Observation: 
Transactions don't pause while vacuum is running. (Odd, how come?)

http://www.postgresql.org/docs/8.1/interactive/sql-vacuum.html
"Plain VACUUM (without FULL) simply reclaims space and makes it available for
re-use. This form of the command can operate in parallel with normal reading
and writing of the table, as an exclusive lock is not obtained. VACUUM FULL
does more extensive processing, including moving of tuples across blocks to
try to compact the table to the minimum number of disk blocks. This form is
much slower and requires an exclusive lock on each table while it is being
processed."

After this I tried two vacuums in parallel, but they lock each other.
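
One way to see what a running vacuum actually holds is to join pg_locks
to pg_stat_activity (a sketch; requires stats_command_string = on so
that current_query is populated):

SELECT l.locktype, l.relation::regclass, l.mode, l.granted
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE a.current_query LIKE 'VACUUM%';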

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] What kind of locks does vacuum process hold on the db?

2007-08-29 Thread Scott Marlowe
On 8/29/07, Nitin Verma <[EMAIL PROTECTED]> wrote:
> What kind of locks does it hold on the db? Table level / Row level /
> AccessExclusiveLock ?
> Is there a document that details vacuum process, and its internals?
>
>
> Use case: I am doing a vacuum {all full analyze} on a postgres 8.1 hosting
> two user databases. For this I am using ${pg-home}/bin/vacuumdb, not direct
> SQLs. Parallel to this I am running transaction on both user-DBs.
> Observation:
> Transactions don't pause while vacuum is running. (Odd, how come?)

Why are you doing FULL vacuums?  Is there some problem that regular
vacuums aren't solving?

> After this I tried two vacuums in parallel but those lock each other.

Yes, vacuum full takes a hard lock on a table.

Vacuum full is to be avoided.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Can this function be declared IMMUTABLE?

2007-08-29 Thread Jaime Casanova
On 8/27/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Well, I am considering a function that does read from a table, but the
> table contents change extremely infrequently (the table is practically a
> list of constants).  Would it be safe to declare the function IMMUTABLE
> provided that the table itself is endowed with a trigger that will drop
> and recreate the function any time the table contents are modified?  In
> this way, it seems that the database would gain the performance benefit of
> an immutable function for the long stretches of time in between changes to
> the table.
>

make the function STABLE instead

> I apologize that I don't have any details -- it is still very early in the
> development of the database design, and I was just hoping to get a better
> understanding of whether an immutable function would safely offer any
> benefit in this scenario.
>

do you know that premature optimization is the root of all evil?

-- 
regards,
Jaime Casanova

"Programming today is a race between software engineers striving to
build bigger and better idiot-proof programs and the universe trying
to produce bigger and better idiots.
So far, the universe is winning."
   Richard Cook

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Can this function be declared IMMUTABLE?

2007-08-29 Thread Josh Tolley
On 8/27/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I have a question about whether I can safely declare a function IMMUTABLE.
>  Citing the documentation under "Function Volatility Categories" in the
> section on "Extending SQL":
> 
> It is generally unwise to select from database tables within an IMMUTABLE
> function at all, since the immutability will be broken if the table
> contents ever change.
> 
>
> Well, I am considering a function that does read from a table, but the
> table contents change extremely infrequently (the table is practically a
> list of constants).  Would it be safe to declare the function IMMUTABLE
> provided that the table itself is endowed with a trigger that will drop
> and recreate the function any time the table contents are modified?  In
> this way, it seems that the database would gain the performance benefit of
> an immutable function for the long stretches of time in between changes to
> the table.
>
> I apologize that I don't have any details -- it is still very early in the
> development of the database design, and I was just hoping to get a better
> understanding of whether an immutable function would safely offer any
> benefit in this scenario.
>

Lemme see if I can embarrass myself trying to answer something like
this. It seems like your function really ought to be declared STABLE,
because during a single transaction MVCC will make sure your function
sees the same values in its reference table each time you call it,
but between transactions the reference table might change, changing
the result of the function. The benefits of an IMMUTABLE function over
a STABLE one, as far as I know, are these:

1) The planner can take advantage of the fact that this function is
IMMUTABLE to evaluate it only once if its arguments are constant, and
keep this result throughout the life of the query plan. If you don't
cache the query plan (e.g. with PREPARE) this is identical to STABLE,
but if you PREPARE a query, for instance, involving a call to an
IMMUTABLE function with constant arguments, the system can evaluate
the function only once during planning, and never again, whereas for
STABLE you'd have to execute the function each time it was called. For
this to be a big win over STABLE, you have to both call your function
with constant arguments and cache the query plan somehow, such as by
having the query inside another pl/pgsql function or by using PREPARE.

2) You can use IMMUTABLE functions, but not STABLE ones, in
expression-based indexes. If you declared your function IMMUTABLE, you
could build an index on a bunch of data using an index expression
involving your function, have your referenced table change somehow,
and end up not being able to correctly use that index anymore.

There may well be other advantages of IMMUTABLE over STABLE that I
don't know about. Were we talking about data I was supposed to care
for, I'd make the function STABLE, not IMMUTABLE, because that's the
most appropriate for the function.
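
To illustrate point 1, a sketch with hypothetical names:

-- a lookup against a nearly-constant table: STABLE is the honest label
CREATE FUNCTION lookup_const(text) RETURNS integer AS $$
    SELECT val FROM constants WHERE name = $1;
$$ LANGUAGE sql STABLE;

PREPARE q AS SELECT * FROM orders WHERE status_code = lookup_const('open');
-- With STABLE, lookup_const() is evaluated each time q is executed.
-- Were it IMMUTABLE, the planner could fold lookup_const('open') to a
-- constant while planning q, and the cached plan would keep that value
-- even after the constants table changed.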

-Josh

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Question regarding autovacuum in 8.1

2007-08-29 Thread Denis Gasparin
Alvaro Herrera wrote:
>> However i have no idea of what tables the autovacuum daemon is
>> processing because there aren't autovacuum info columns on
>> pg_stat_all_tables (as there are for 8.2.x).
>> 
>
> For that, you need to change log_min_messages to debug2.
>
> Keep track of the PID of autovacuum from the "processing database"
> message and see if you can spot an ERROR message from it.
>
>   
Ok. Thank you.
Another question/idea: why not emit messages about which tables got
vacuumed by the autovacuum daemon as normal log messages (instead of
debug2)?
I think it could be useful because in this way you can also know which
tables are used more often than others...
If I'm not wrong, the old autovacuum process in 7.4 and 8.0 did that...

Denis