Re: [HACKERS] PG_DUMP and table locking in PG7.4

2005-11-16 Thread Yann Michel
Hi,

First of all, thanks to all who replied!

On Wed, Nov 16, 2005 at 08:28:31AM +0100, Martijn van Oosterhout wrote:
 On Wed, Nov 16, 2005 at 08:09:31AM +0100, Yann Michel wrote:
  Well, thanks for all the answers. Are the locks then released once they
  are not needed any more like in 2PC?
 
 2PC doesn't release any locks; it can't, in order to maintain integrity.

Ahem, sorry, I meant 2PL ... all these acronyms... ;-)
Normal 2PL releases locks once they are no longer needed but cannot
acquire new ones afterwards. Strict 2PL releases them all at a single point.

  That should still leave the taken snapshot of the released table in a
  consistent state but might enable other transactions to work on that one
  table once it is released. 
 
 ACCESS SHARE means what it says, it stops the table being VACUUMed and
 a few other things, but doesn't block INSERTs, UPDATEs or DELETEs.

Thanks. BTW: is there anything about locks and their meaning in the
docs? If not, wouldn't that be nice?

 pg_dump doesn't block inserts, so your problem must be somewhere
 else... Are you running VACUUM anywhere? It's possible that pg_dump is
 blocking VACUUM, which blocks your inserts...

Well, now that I'm thinking about what you've written, I think this is
exactly the point: there is a VACUUM waiting for the dump to finish,
whereas the INSERTs are waiting for the VACUUM to finish.

Thanks!

Cheers,
Yann



Re: [HACKERS] [COMMITTERS] pgsql: Translation typo fix

2005-11-16 Thread Peter Eisentraut
On Tuesday, 15 November 2005 00:01, Alvaro Herrera wrote:
 Good question.  Peter, is pgtranslation supposed to be the primary
 source of translations?

Yes, and more importantly, your changes will get overwritten by pgtranslation.



Re: [HACKERS] PG_DUMP and table locking in PG7.4

2005-11-16 Thread Martijn van Oosterhout
On Wed, Nov 16, 2005 at 09:27:55AM +0100, Yann Michel wrote:
 On Wed, Nov 16, 2005 at 08:28:31AM +0100, Martijn van Oosterhout wrote:
  On Wed, Nov 16, 2005 at 08:09:31AM +0100, Yann Michel wrote:
   Well, thanks for all the answers. Are the locks then released once they
   are not needed any more like in 2PC?
  
  2PC doesn't release any locks; it can't, in order to maintain integrity.
 
 Ahem, sorry, I meant 2PL ... all these acronyms... ;-)
 Normal 2PL releases locks once they are no longer needed but cannot
 acquire new ones afterwards. Strict 2PL releases them all at a single point.

Ah yes, PostgreSQL doesn't use 2PL, it uses MVCC. A quick overview is here:

http://linuxgazette.net/issue68/mitchell.html

 Thanks. BTW: is there anything about locks and their meaning in the
 docs? If not, wouldn't that be nice?

Check here:

http://www.postgresql.org/docs/8.0/interactive/mvcc.html

Under "Explicit Locking" it lists all the locks and what they're for.
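
For instance, this is roughly the lock pg_dump ends up holding on each
table it dumps (the table name is made up):

BEGIN;
LOCK TABLE mytable IN ACCESS SHARE MODE;  -- the same lock a plain SELECT takes
-- ACCESS SHARE only conflicts with ACCESS EXCLUSIVE, so concurrent
-- INSERT/UPDATE/DELETE keep running; ALTER TABLE, DROP TABLE and
-- VACUUM FULL have to wait until the dump's transaction commits.
COMMIT;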

Hope this helps,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [HACKERS] bind variables, soft vs hard parse

2005-11-16 Thread Marcus Engene

Douglas McNaught wrote:

Which will be the same as the second call. There is quite a big
difference in performance using bind variables.

Does Postgres work the same? Where can I go for more info?


You can do this (or close to it) but you need to explicitly PREPARE
the query (or use the protocol-level prepare, which some client
libraries will do for you).  See the SQL documentation for PREPARE.

-Doug


Hi,

But this is of no use in a web-context. According to the docs, this 
prepare is per session.
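
For reference, this is what the per-session cycle looks like (a sketch;
the statement name, table and values are made up):

-- prepared once per connection; the plan is reused by every EXECUTE
-- in that same connection and disappears when the session ends
PREPARE get_user (int) AS
    SELECT name, email FROM users WHERE id = $1;

EXECUTE get_user(42);
EXECUTE get_user(43);

DEALLOCATE get_user;  -- optional explicit cleanup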


This SQL cache seems like a really good thing to me. Is there a reason
Postgres hasn't got it? Would it be very hard to implement? From a
naive perspective it seems close to trivial: make a hash value from the
SQL string to quickly find the cached entry, keep a last-used list to
decide which entry to delete when the cache is full, and so on. Does
the architecture/internal flow make it hard to actually reuse the query
data structure?

Thanks for the answer.

Best regards,
Marcus



Re: [HACKERS] bind variables, soft vs hard parse

2005-11-16 Thread Martijn van Oosterhout
On Wed, Nov 16, 2005 at 09:56:44AM +0100, Marcus Engene wrote:
 Douglas McNaught wrote:
 You can do this (or close to it) but you need to explicitly PREPARE
 the query (or use the protocol-level prepare, which some client
 libraries will do for you).  See the SQL documentation for PREPARE.
 
 But this is of no use in a web-context. According to the docs, this 
 prepare is per session.

Unless you use something like pgpool, in which case a single session
may include multiple requests.

 This SQL cache seems like a really good thing to me. Is there a reason
 Postgres hasn't got it? Would it be very hard to implement? From a
 naive perspective it seems close to trivial: make a hash value from the
 SQL string to quickly find the cached entry, keep a last-used list to
 decide which entry to delete when the cache is full, and so on. Does
 the architecture/internal flow make it hard to actually reuse the query
 data structure?

It's hard to reuse the structure. Also, things like search_path mean
that the same query text can mean completely different things in
different backends. Most of the time it's planning that dominates, not
parsing, so storing just the parser output seems somewhat useless.

Unless you've thought of a new way to do it.
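
A quick illustration of the search_path point (schema and table names
are made up):

-- the same query text resolves to different tables depending on
-- search_path, so a cross-backend plan cache keyed only on the SQL
-- string could hand back a plan for the wrong table
CREATE SCHEMA app1;
CREATE TABLE app1.items (id int);
CREATE SCHEMA app2;
CREATE TABLE app2.items (id int);

SET search_path = app1;
SELECT count(*) FROM items;   -- reads app1.items

SET search_path = app2;
SELECT count(*) FROM items;   -- same text, now reads app2.items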

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [HACKERS] bind variables, soft vs hard parse

2005-11-16 Thread Marcus Engene

Christopher Kings-Lynne wrote:
  Oracle recently gave some money to Zend to make proper Oracle support
  for PHP. In that interface they use bind variables. Apart from greater
  speed, SQL injection becomes history as well.



I did the same for PostgreSQL for PHP 5.1.

http://au3.php.net/manual/en/function.pg-query-params.php

Chris


Brilliant! I'll upgrade to 5.1 for this reason alone!

Best regards,
Marcus




Re: [HACKERS] bind variables, soft vs hard parse

2005-11-16 Thread Marcus Engene

Martijn van Oosterhout wrote:
But this is of no use in a web-context. According to the docs, this 
prepare is per session.


Unless you use something like pgpool, in which case a single session
may include multiple requests.


ok. Good point.


This SQL cache seems like a really good thing to me. Is there a reason
Postgres hasn't got it? Would it be very hard to implement? From a
naive perspective it seems close to trivial: make a hash value from the
SQL string to quickly find the cached entry, keep a last-used list to
decide which entry to delete when the cache is full, and so on. Does
the architecture/internal flow make it hard to actually reuse the query
data structure?


It's hard to reuse the structure. Also, things like search_path mean
that the same query text can mean completely different things in
different backends. Most of the time it's planning that dominates, not
parsing, so storing just the parser output seems somewhat useless.


Of course I didn't mean that only the parse output was to be saved; the
plan goes in there too.

Thanks for the explanation.


Have a nice day,


The same!

Marcus



Re: [HACKERS] Long-time 7.4 contrib failure Mac OS X 10.3.8

2005-11-16 Thread Michael Glaesemann


On Nov 16, 2005, at 15:37 , Michael Glaesemann wrote:

I'm using per-branch configurations, but missed HEAD. I'll get that  
fixed. Thanks for the catch.


Back to normal:

http://pgbuildfarm.org/cgi-bin/show_log.pl?nm=wallaroo&dt=2005-11-16%2009:12:09



Michael Glaesemann
grzm myrealbox com






[HACKERS] Reproducible deadlock situation (possibly with foreign keys)

2005-11-16 Thread Mario Weilguni
I have a problem occurring daily: I get quite a few deadlocks every day,
and the concurrency is not that high. It happens with PostgreSQL 8.0 and
8.1 as well...

Here's a self-contained test case, which I think shows the problem I have
in our production database. While it might look like some sort of
theoretical problem, it does happen; the referenced tables are never
really updated, they are just lookup tables. In the production system
it's a lot more complicated: there are at least 10 different lookup
tables, and not every table contains references to all of the lookup
tables:

create table lookup1 (
  id int primary key,
  t text
);
 
create table lookup2 (
  id int primary key,
  t text
);
 
insert into lookup1 values (1, 'test1');
insert into lookup1 values (2, 'test2');
 
insert into lookup2 values (3, 'test3');
insert into lookup2 values (4, 'test4');
 
create table master1 (
id  int primary key,
l1_id int  references lookup1(id),
l2_id int  references lookup2(id),
t text
);
 
create table master2 (
id  int primary key,
l2_id int  references lookup2(id),
l1_id int  references lookup1(id),
t text
);
 
insert into master1 values (1000, 1, 3);
insert into master2 values (1001, 3, 1);
 

T1: BEGIN;
T2: BEGIN;
-- these are the queries similar to those from the foreign key code
T1: SELECT 1 FROM ONLY lookup1 x WHERE id = 1 FOR UPDATE OF x;
T2: SELECT 1 FROM ONLY lookup2 x WHERE id = 3 FOR UPDATE OF x;
T1: SELECT 1 FROM ONLY lookup2 x WHERE id = 3 FOR UPDATE OF x;
T2: SELECT 1 FROM ONLY lookup1 x WHERE id = 1 FOR UPDATE OF x;
-- DEADLOCK OCCURS!
T1: UPDATE master1 set t='foo' where id=1000;
T2: UPDATE master2 set t='foo' where id=1001;
 

IMO it should be possible to solve this IF the foreign key code reorders the
FOR UPDATE queries in a well-defined order, maybe ordered by the oid of the
pg_class entry.

In my case, it always happens on INSERT activity (no updates on those tables,
just inserts), but I hope that solving the above problem might fix the
insert deadlock too.

Does this sound reasonable?

Regards,
Mario Weilguni

P.S. Is it possible to modify logging so that the SELECT 1 FROM ONLY queries
are logged? Maybe this could help me find out which queries the foreign key
code really issues.





Re: [HACKERS] [COMMITTERS] pgsql: Translation typo fix

2005-11-16 Thread Alvaro Herrera
Peter Eisentraut wrote:
 On Tuesday, 15 November 2005 00:01, Alvaro Herrera wrote:
  Good question.  Peter, is pgtranslation supposed to be the primary
  source of translations?
 
 Yes, and more importantly, your changes will get overwritten by pgtranslation.

Ok, I'll update it too and make sure to use it in the future.

-- 
Alvaro Herrera   Valdivia, Chile   ICBM: S 39º 49' 17.7, W 73º 14' 26.8
Someone said that it is at least an order of magnitude more work to do
production software than a prototype. I think he is wrong by at least
an order of magnitude.  (Brian Kernighan)



Re: [HACKERS] Reproducible deadlock situation (possibly with foreign keys)

2005-11-16 Thread Alvaro Herrera
Mario Weilguni wrote:

Hi,

 T1: BEGIN;
 T2: BEGIN;
 -- these are the queries similar to those from the foreign key code
 T1: SELECT 1 FROM ONLY lookup1 x WHERE id = 1 FOR UPDATE OF x;
 T2: SELECT 1 FROM ONLY lookup2 x WHERE id = 3 FOR UPDATE OF x;
 T1: SELECT 1 FROM ONLY lookup2 x WHERE id = 3 FOR UPDATE OF x;
 T2: SELECT 1 FROM ONLY lookup1 x WHERE id = 1 FOR UPDATE OF x;
 -- DEADLOCK OCCURS!
 T1: UPDATE master1 set t='foo' where id=1000;
 T2: UPDATE master2 set t='foo' where id=1001;

Actually, in 8.1 the FK code issues queries like

T1: SELECT 1 FROM ONLY lookup1 x WHERE id = 1 FOR SHARE OF x;

which takes only a share lock on the tuple, not an exclusive lock, which
solves the blocking and deadlocking problem.  If you have a test case
where it fails on 8.1 I certainly want to see it.
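
To spell it out with the tables from the test case: share locks on the
same tuple don't conflict, so the interleaving that deadlocked above now
runs straight through (a sketch):

T1: BEGIN;
T2: BEGIN;
T1: SELECT 1 FROM ONLY lookup1 x WHERE id = 1 FOR SHARE OF x;
T2: SELECT 1 FROM ONLY lookup2 x WHERE id = 3 FOR SHARE OF x;
T1: SELECT 1 FROM ONLY lookup2 x WHERE id = 3 FOR SHARE OF x;  -- no wait
T2: SELECT 1 FROM ONLY lookup1 x WHERE id = 1 FOR SHARE OF x;  -- no wait
T1: COMMIT;
T2: COMMIT;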


 P.S. Is it possible to modify logging so that the SELECT 1 FROM ONLY queries
 are logged? Maybe this could help me find out which queries the foreign key
 code really issues.

Hmm, actually, those queries should be logged normally, because AFAIK
they are issued just like any other query, via SPI.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] Reproducible deadlock situation (possibly with foreign keys)

2005-11-16 Thread Mario Weilguni
OK, this is my fault, and you're right.

I took the query from the error messages (an 8.0.3 DB) and applied it to an
8.1 DB on a testing system without thinking too much.

Still I think reordering those queries might prevent a deadlock.

Best regards

On Wednesday, 16 November 2005 12:21, Alvaro Herrera wrote:
 Mario Weilguni wrote:

 Hi,

  T1: BEGIN;
  T2: BEGIN;
  -- these are the queries similar to those from the foreign key code
  T1: SELECT 1 FROM ONLY lookup1 x WHERE id = 1 FOR UPDATE OF x;
  T2: SELECT 1 FROM ONLY lookup2 x WHERE id = 3 FOR UPDATE OF x;
  T1: SELECT 1 FROM ONLY lookup2 x WHERE id = 3 FOR UPDATE OF x;
  T2: SELECT 1 FROM ONLY lookup1 x WHERE id = 1 FOR UPDATE OF x;
  -- DEADLOCK OCCURS!
  T1: UPDATE master1 set t='foo' where id=1000;
  T2: UPDATE master2 set t='foo' where id=1001;

 Actually, in 8.1 the FK code issues queries like

 T1: SELECT 1 FROM ONLY lookup1 x WHERE id = 1 FOR SHARE OF x;

 which takes only a share lock on the tuple, not an exclusive lock, which
 solves the blocking and deadlocking problem.  If you have a test case
 where it fails on 8.1 I certainly want to see it.

  P.S. Is it possible to modify logging so that the SELECT 1 FROM ONLY
  queries are logged? Maybe this could help me find out which queries
  the foreign key code really issues.

 Hmm, actually, those queries should be logged normally, because AFAIK
 they are issued just like any other query, via SPI.



[HACKERS] question about count(b) where b is a custom type

2005-11-16 Thread Grzegorz Jaskiewicz

Hi folks

We're developing a GiST index here (it finally works, and I do remember
that I still owe some docs about it).
I have a few fprintf(stderr, ...) calls in the function that converts the
internal rep. into a string. I was really surprised to see them on my
screen when:

select count(b) from blah where b ~ 'something';

was issued. What the hell, isn't that wrong? I don't need strings to
count results, right?

IMO this is a serious bug.
What do you think?


--
GJ

If we knew what we were doing, it wouldn't be called Research, would  
it? - AE







Re: [HACKERS] Reproducible deadlock situation (possibly with foreign keys)

2005-11-16 Thread Alvaro Herrera
Mario Weilguni wrote:
 OK, this is my fault, and you're right.
 
 I took the query from the error messages (an 8.0.3 DB) and applied it to an
 8.1 DB on a testing system without thinking too much.
 
 Still I think reordering those queries might prevent a deadlock.

Well, if we could reorder them, we could have solved the problem long
ago.  I'm not totally sure it can't be done, but we tackled the problem
in a different way so it's moot now.

-- 
Alvaro Herrera http://www.amazon.com/gp/registry/CTMLCN8V17R4
Cada quien es cada cual y baja las escaleras como quiere (JMSerrat)



Re: [HACKERS] question about count(b) where b is a custom type

2005-11-16 Thread Martijn van Oosterhout
On Wed, Nov 16, 2005 at 10:05:36AM +0100, Grzegorz Jaskiewicz wrote:
 Hi folks
 
 We're developing a GiST index here (it finally works, and I do remember
 that I still owe some docs about it).
 I have a few fprintf(stderr, ...) calls in the function that converts the
 internal rep. into a string. I was really surprised to see them on my
 screen when:
 select count(b) from blah where b ~ 'something';
 was issued. What the hell, isn't that wrong? I don't need strings to
 count results, right?
 IMO this is a serious bug.
 What do you think?

Firstly, if you just want a count, what's wrong with count(1) or
count(*)?

Secondly, if you want an aggregate to work on your new type, you should
declare it as such. This is one of the reasons why implicit casts to
text are discouraged. If it had to be explicit, the parser would have
told you that what you asked for wasn't possible directly. (There's no
count(yourtype) function defined).

See CREATE AGGREGATE.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [HACKERS] question about count(b) where b is a custom type

2005-11-16 Thread Hannu Krosing
On Wed, 2005-11-16 at 10:05 +0100, Grzegorz Jaskiewicz wrote:
 Hi folks
 
 We're developing a GiST index here (it finally works, and I do remember
 that I still owe some docs about it).
 I have a few fprintf(stderr, ...) calls in the function that converts the
 internal rep. into a string. I was really surprised to see them on my
 screen when:
 select count(b) from blah where b ~ 'something';
 was issued. What the hell, isn't that wrong? I don't need strings to
 count results, right?

I guess you need the rep in b ~ 'something'.

 IMO this is a serious bug.
 What do you think?
 
 
-- 
Hannu Krosing [EMAIL PROTECTED]




Re: [HACKERS] question about count(b) where b is a custom type

2005-11-16 Thread Pollard, Mike
 Firstly, if you just want a count, what's wrong with count(1) or
 count(*).


Because unless the column does not allow nulls, they will not necessarily
return the same value as count(col).
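
A tiny example of the difference (made-up table):

CREATE TABLE t (col int);
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (NULL);
INSERT INTO t VALUES (3);

SELECT count(*)   FROM t;   -- 3: counts rows
SELECT count(col) FROM t;   -- 2: counts only non-NULL values of col
SELECT count(1)   FROM t;   -- 3: the constant 1 is never NULL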

Mike Pollard
SUPRA Server SQL Engineering and Support
Cincom Systems, Inc.

 Better to remain silent and be thought a fool than to speak out and
remove all doubt.
 Abraham Lincoln





Re: [HACKERS] question about count(b) where b is a custom type

2005-11-16 Thread Richard Huxton

Pollard, Mike wrote:

Firstly, if you just want a count, what's wrong with count(1) or
count(*)?

Because unless the column does not allow nulls, they will not necessarily
return the same value.


Ah, but in the example given the column was being matched against a 
value, so nulls were already excluded.


--
  Richard Huxton
  Archonet Ltd



Re: [HACKERS] question about count(b) where b is a custom type

2005-11-16 Thread Pollard, Mike
Richard Huxton wrote:
 Pollard, Mike wrote:
 Firstly, if you just want a count, what's wrong with count(1) or
 count(*).
 
 
 
  Because unless the column does not allow nulls, they will not return
the
  same value.
 
 Ah, but in the example given the column was being matched against a
 value, so nulls were already excluded.
 
 --

Details, details.  But there is a valid general question here, and
changing the semantics of the query will not address it.  When doing a
count(col), why convert col into a string just so you can determine if
it is null or not?  This isn't a problem on a small amount of data, but
it seems like a waste, especially if you are counting millions of
records.  Is there some way to convert this to have the caller convert
nulls to zero and non-nulls to 1, and then just pass an int?  So
logically the backend does:

Select count(case col when null then 0 else 1) from table

And count just adds the number to the running tally.

Mike Pollard
SUPRA Server SQL Engineering and Support
Cincom Systems, Inc.

 Better to remain silent and be thought a fool than to speak out and
remove all doubt.
 Abraham Lincoln




Re: [HACKERS] question about count(b) where b is a custom type

2005-11-16 Thread Tino Wildenhain

Pollard, Mike schrieb:

Richard Huxton wrote:


Pollard, Mike wrote:


Firstly, if you just want a count, what's wrong with count(1) or
count(*).




Because unless the column does not allow nulls, they will not return


the


same value.


Ah, but in the example given the column was being matched against a
value, so nulls were already excluded.

--



Details, details.  But there is a valid general question here, and
changing the semantics of the query will not address it.  When doing a
count(col), why convert col into a string just so you can determine if
it is null or not?  This isn't a problem on a small amount of data, but


Why convert? A null is always null, no matter which datatype.


it seems like a waste, especially if you are counting millions of
records.  Is there some way to convert this to have the caller convert
nulls to zero and non-nulls to 1, and then just pass an int?  So
logically the backend does:

Select count(case col when null then 0 else 1) from table


Which would be totally silly :-) no matter if it's 0 or 1,
it counts as 1. Do you mean sum() maybe?
Even then you don't need coalesce to convert null to 0,
because sum() just ignores nulls.



And count just adds the number to the running tally.


Which number here?



Mike Pollard
SUPRA Server SQL Engineering and Support

strange...


Cincom Systems, Inc.





Re: [HACKERS] question about count(b) where b is a custom type

2005-11-16 Thread Martijn van Oosterhout
On Wed, Nov 16, 2005 at 08:28:28AM -0500, Pollard, Mike wrote:
 Details, details.  But there is a valid general question here, and
 changing the semantics of the query will not address it.  When doing a
 count(col), why convert col into a string just so you can determine if
 it is null or not?  This isn't a problem on a small amount of data, but
 it seems like a waste, especially if you are counting millions of
 records.  Is there some way to convert this to have the caller convert
 nulls to zero and non-nulls to 1, and then just pass an int?  So
 logically the backend does:
 
 Select count(case col when null then 0 else 1) from table
 
 And count just adds the number to the running tally.

Actually, something is wrong with this whole thread. count(x) is
defined to take any type, hence count(b) won't convert anything to text
or anything else.

Which seems to imply that in the original query it's the '~' operator
that has the text conversion. Can you post an EXPLAIN VERBOSE for that
query so we can see where the conversion is being called?
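
That is, something along these lines, run against the actual table:

-- the verbose output shows the full query tree, including any implicit
-- casts the parser inserted around the ~ operator
EXPLAIN VERBOSE select count(b) from blah where b ~ 'something';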

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [HACKERS] question about count(b) where b is a custom type

2005-11-16 Thread Pollard, Mike

If count(col) converts col to a string (an assumption that Martijn
has cast into doubt, or perhaps shredded), then rather than converting
all non-null values that are not strings into strings, I was proposing
converting the values into an int with the value 0 or 1 (0 means that
row was null for that column, 1 means that row was not null), since
count(col) means count the non-null rows in col.

Anyway, to make a short story long: the idea is, rather than convert the
column into a string, convert it into a value indicating whether the
column was null or not null (which is all count cares about).  In any
case, it's a moot idea since it appears Postgres already does that.

Mike Pollard
SUPRA Server SQL Engineering and Support
Cincom Systems, Inc.

 Better to remain silent and be thought a fool than to speak out and
remove all doubt.
 Abraham Lincoln

 -Original Message-
 From: Tino Wildenhain [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, November 16, 2005 8:43 AM
 To: Pollard, Mike
 Cc: Richard Huxton; pgsql-hackers@postgresql.org
 Subject: Re: [HACKERS] question about count(b) where b is a custom
type
 
 Pollard, Mike schrieb:
  Richard Huxton wrote:
 
 Pollard, Mike wrote:
 
 Firstly, if you just want a count, what's wrong with count(1) or
 count(*).
 
 
 
 Because unless the column does not allow nulls, they will not
return
 
  the
 
 same value.
 
 Ah, but in the example given the column was being matched against a
 value, so nulls were already excluded.
 
 --
 
 
  Details, details.  But there is a valid general question here, and
  changing the semantics of the query will not address it.  When doing
a
  count(col), why convert col into a string just so you can determine
if
  it is null or not?  This isn't a problem on a small amount of data,
but
 
 Why convert? A null is always null no matter in which datatype.
 
  it seems like a waste, especially if you are counting millions of
  records.  Is there some way to convert this to have the caller
convert
  nulls to zero and non-nulls to 1, and then just pass an int?  So
  logically the backend does:
 
  Select count(case col when null then 0 else 1) from table
 
 Which would be totally silly :-) no matter if its 0 or 1
 it counts as 1. Do you mean sum() maybe?
 Even then you dont need coalesce to convert null to 0
 because sum() just ignores null.
 
 
  And count just adds the number to the running tally.
 
 Which number here?
 
 
  Mike Pollard
  SUPRA Server SQL Engineering and Support
 strange...
 
  Cincom Systems, Inc.




Re: [HACKERS] bind variables, soft vs hard parse

2005-11-16 Thread Michael Alan Dorman
Martijn van Oosterhout kleptog@svana.org writes:
 Unless you use something like pgpool, in which case a single session
 may include multiple requests.

Actually, I've found pgpool to be no better when it comes to using
real prepared queries---there's no guarantee that any given request is
going to connect to the same pgpool process as before, so it won't
have the prepared request.

Mike



Re: [HACKERS] question about count(b) where b is a custom type

2005-11-16 Thread Tino Wildenhain

Pollard, Mike schrieb:

If count(col) convert col to a string (an assumption that Martijn
has cast into doubt, or perhaps shredded), then rather than convert all
non-nulls that are not a string into a string, I was proposing
converting the values into an int with the values 0 or 1 (0 means that
row was null for that column, 1 means that row was not null;, since
count(col) means count the non-null rows in col).


I don't get how you got this idea of count() doing any conversion.
It does not, and there is nothing in the docs which would lead to this.


Anyway, to make a short story long.  The idea is rather than convert the
column into a string, convert it into a value indicating whether the
column was null or not null (which is all count cares about).  In any
case, it's moot idea since it appears Postgres already does that.


No, count does not convert. It just counts all non-null values.
If you want to count rows, just use count(*).



Re: [HACKERS] PG_DUMP and table locking in PG7.4

2005-11-16 Thread Rod Taylor
 I'm asking because we have a fairly big data warehouse and dump the data
 for a backup every night. Unfortunately, the backup now takes really long.

You may want to consider upgrading and using PITR backups instead. They
can be much faster to both backup and to restore if something goes
wrong.
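
Roughly, the nightly base-backup side of PITR looks like this (8.0 or
later; a sketch only -- archive_command must already be set up in
postgresql.conf, and the file copy itself happens outside the database):

SELECT pg_start_backup('nightly');
-- copy the cluster's data directory at the filesystem level (tar, rsync, ...)
SELECT pg_stop_backup();
-- keep the archived WAL segments; they are what makes the copy restorable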

-- 




Re: [HACKERS] MERGE vs REPLACE

2005-11-16 Thread Tom Lane
Christopher Kings-Lynne [EMAIL PROTECTED] writes:
 We should probably throw a notice or warning if we go to a table lock,
 too.

 That's not very useful, because you can only do something about it AFTER
 the 1-hour exclusive-lock merge has already run :)

We shouldn't do anything remotely like that.  A statement whose locking
effects can't be predicted on sight is horrid both from the user's
viewpoint and from the implementation viewpoint.  In particular, if we
have to do planning before we can determine whether the table needs just
a SELECT lock or something stronger, then we have to take a weak lock to
do the planning and then we are faced with upgrading to the stronger
lock at runtime.  Can you say deadlock risk?

I think we should do REPLACE-like functionality that simply fails if the
match condition isn't equality on a primary key.  If we can use SQL-spec
MERGE syntax for this, that's fine, but let's not think in terms of
silently changing to a stronger table lock and a much slower
implementation when the condition isn't a primary key.  That's a whole
lot of work that isn't solving any real-world problems, and *is*
creating a foot-gun for people to cause themselves performance and
deadlock problems anytime they spell the WHERE condition slightly wrong.

regards, tom lane



Re: [HACKERS] OS X 7.4 failure

2005-11-16 Thread Andrew Dunstan



Tom Lane wrote:

Jim C. Nasby [EMAIL PROTECTED] writes:

So the recent thread about getting 7.4 compiling on OS X inspired me.
But what I can't understand is that I've yanked --with-ssl, but it's
still looking for libssl:

Tad hard to believe.  Maybe you missed a make distclean or so?

make distclean should never be necessary for a buildfarm run - we 
always build in one of these 3 ways:


. against a fresh checkout made with 'cvs export'
. against a one-off temporary copy of our local repo, made after we ran
  cvs update
. against our local repo using VPATH

The recent release of buildfarm code goes to some length to ensure that 
the repo is clean, in case somebody has mangled it by hand.


cheers

andrew



Re: [HACKERS] PG_DUMP and table locking in PG7.4

2005-11-16 Thread Tom Lane
Yann Michel [EMAIL PROTECTED] writes:
 Well, now that I'm thinking about what you've written, I think this is
 exactly the point: there is a VACUUM waiting for the dump to finish,
 whereas the INSERTs are waiting for the VACUUM to finish.

Only if it's a VACUUM FULL ... plain VACUUM neither blocks INSERT nor is
blocked by pg_dump.

The short answer may be "don't use VACUUM FULL" (at least not for
routine automatic vacuums).
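
For the routine case something like this is all that's needed (table
name made up):

VACUUM ANALYZE big_table;    -- its lock conflicts with neither pg_dump nor INSERTs
-- VACUUM FULL big_table;    -- takes ACCESS EXCLUSIVE: it queues behind pg_dump
--                              and makes the INSERTs queue behind it in turn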

regards, tom lane



Re: [HACKERS] question about count(b) where b is a custom type

2005-11-16 Thread Tom Lane
Grzegorz Jaskiewicz [EMAIL PROTECTED] writes:
 I have a few fprintf(stderr, ...) calls in the function that converts the
 internal rep. into a string. I was really surprised to see them on my
 screen when:
 select count(b) from blah where b ~ 'something';
 was issued. What the hell, isn't that wrong?

Depends ... what does the ~ operator do for your datatype?  (If you've
defined an implicit conversion to text then it's very possibly invoking
that followed by the regex match operator.)
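
For instance, a declaration along these lines (the function and type
names are hypothetical) is enough to make b ~ 'something' silently go
through text:

CREATE CAST (mytype AS text)
    WITH FUNCTION mytype_to_text(mytype)
    AS IMPLICIT;
-- with that in place the parser can resolve  b ~ 'something'  as
--   b::text ~ 'something'   (the plain text regex operator),
-- which runs the type's text conversion for every row scanned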

Have you tried getting a stack trace back from your function to see
exactly what is calling it?

regards, tom lane



Re: [HACKERS] OS X 7.4 failure

2005-11-16 Thread Tom Lane
Andrew Dunstan [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 Jim C. Nasby [EMAIL PROTECTED] writes:
 So the recent thread about getting 7.4 compiling on OS X inspired me.
 But what I can't understand is that I've yanked --with-ssl, but it's
 still looking for libssl:

 Tad hard to believe.  Maybe you missed a make distclean or so?

 make distclean should never be necessary for a buildfarm run - we 
 always build in one of these 3 ways:

He didn't say it was a buildfarm run.

regards, tom lane



Re: [HACKERS] OS X 7.4 failure

2005-11-16 Thread Andrew Dunstan



Tom Lane wrote:

Andrew Dunstan [EMAIL PROTECTED] writes:

Tom Lane wrote:

Jim C. Nasby [EMAIL PROTECTED] writes:

So the recent thread about getting 7.4 compiling on OS X inspired me.
But what I can't understand is that I've yanked --with-ssl, but it's
still looking for libssl:

Tad hard to believe.  Maybe you missed a make distclean or so?

make distclean should never be necessary for a buildfarm run - we
always build in one of these 3 ways:

He didn't say it was a buildfarm run.


Yeah he did ;-)  He said:

But what I can't understand is that I've yanked --with-ssl, but it's
still looking for libssl:
http://pgbuildfarm.org/cgi-bin/show_log.pl?nm=cuckoo&dt=2005-11-15%2023:56:22

anyway, no biggie.

cheers

andrew




Re: [HACKERS] MERGE vs REPLACE

2005-11-16 Thread Rick Gigger
I agree.  I would never ever ever want it to silently start doing  
table locks.  I would simply avoid using merge at all if that was a  
possibility.


However it seems like the idea is to eventually flesh out a full-fledged
merge.  And to do that it sounds like you would need to do one of the
following:

1) implement predicate locking beyond the simple match on a unique
index case that we have here
2) do full table locks.

It sounds like #1 isn't going to happen for a while.  So in order to
do more complicated merges you will need to do #2.  If you are going
to implement more complicated merge functionality I certainly
wouldn't want it throwing a warning telling me about a table lock if
I already knew it would get the table lock and decided I wanted
to go ahead with using merge anyway.


Could you let the user create the lock himself to handle this
situation?  For instance:

analyze the merge
if merge condition matches unique index
    merge without table locking
elseif needed table lock already exists
    merge
else
    throw an error

You could also just add something to the merge syntax like ALLOW  
TABLE LOCK or something.  The idea is just that the user can  
explicitly allow the table lock and thus the more complicated merge.


I don't really know anything about the implementation details, but
that is the behavior that I would prefer.  That way I could always do
a complicated merge if I wanted to, but there is no way it would ever
do an implicit table lock on me.  And it would never throw an
error/warning unless I actually did something questionable.

Does that make sense?

Rick Gigger

On Nov 16, 2005, at 7:49 AM, Tom Lane wrote:


Christopher Kings-Lynne [EMAIL PROTECTED] writes:

We should probably throw a notice or warning if we go to a table lock,
too.

That's not very useful, because you can only do something about it AFTER
the 1 hour exclusive lock merge has already run :)

We shouldn't do anything remotely like that.  A statement whose locking
effects can't be predicted on sight is horrid both from the user's
viewpoint and from the implementation viewpoint.  In particular, if we
have to do planning before we can determine whether the table needs just
a SELECT lock or something stronger, then we have to take a weak lock to
do the planning and then we are faced with upgrading to the stronger
lock at runtime.  Can you say deadlock risk?

I think we should do REPLACE-like functionality that simply fails if the
match condition isn't equality on a primary key.  If we can use SQL-spec
MERGE syntax for this, that's fine, but let's not think in terms of
silently changing to a stronger table lock and a much slower
implementation when the condition isn't a primary key.  That's a whole
lot of work that isn't solving any real-world problems, and *is*
creating a foot-gun for people to cause themselves performance and
deadlock problems anytime they spell the WHERE condition slightly
wrong.


regards, tom lane







Re: [HACKERS] OS X 7.4 failure

2005-11-16 Thread Jim C. Nasby
On Tue, Nov 15, 2005 at 10:27:06PM -0600, Jim C. Nasby wrote:
 On Tue, Nov 15, 2005 at 11:04:59PM -0500, Tom Lane wrote:
  Jim C. Nasby [EMAIL PROTECTED] writes:
   So the recent thread about getting 7.4 compiling on OS X inspired me.
   But what I can't understand is that I've yanked --with-ssl, but it's
   still looking for libssl:
  
  Tad hard to believe.  Maybe you missed a make distclean or so?
  (BTW, the flag is --with-openssl ... --with-ssl would do nothing :-()
 
 Hrm, I am using ccache... maybe it's got a screw loose. I'll try wiping
 the cache next if the clean checkout doesn't do it.

Well, I've tried blowing away the CVS checkout
(http://lnk.nu/pgbuildfarm.org/62o.pl) and clearing out my ccache
(http://lnk.nu/pgbuildfarm.org/62p.pl), but I'm still getting the same
failure. I do have perl, python, tcl and nls enabled, could one of them
be trying to pull libssl and libcrypto in for some reason?
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: [HACKERS] MERGE vs REPLACE

2005-11-16 Thread Jaime Casanova
 You could also just add something to the merge syntax like ALLOW
 TABLE LOCK or something.  The idea is just that the user can
 explicitly allow the table lock and thus the more complicated merge.


The problem here is that many people will see that option and think
it's safe to use it... I mean, many people will shoot themselves in the
foot, and the culprit will be PostgreSQL, because it left a loaded gun
in a visible place when there are kids around.

--
regards,
Jaime Casanova
(DBA: DataBase Aniquilator ;)



Re: [HACKERS] MERGE vs REPLACE

2005-11-16 Thread Bruce Momjian

Interesting approach.  Actually, we could tell the user they have to use
BEGIN;LOCK tab before doing MERGE, and throw an error if we don't
already have a table lock.
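
That is, roughly this usage pattern (MERGE here stands for the statement
under discussion, not something that exists yet; table and column names
are made up):

BEGIN;
-- take the table lock up front, before any planning, so no lock
-- upgrade is ever needed
LOCK TABLE target IN SHARE ROW EXCLUSIVE MODE;
-- the hypothetical MERGE would raise an error if the lock above
-- had not been taken first
MERGE INTO target USING source ON (target.id = source.id)
    WHEN MATCHED THEN UPDATE SET val = source.val
    WHEN NOT MATCHED THEN INSERT (id, val) VALUES (source.id, source.val);
COMMIT;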

---

Rick Gigger wrote:
 I agree.  I would never ever ever want it to silently start doing  
 table locks.  I would simply avoid using merge at all if that was a  
 possibility.
 
 However it seems like the idea is to eventually flesh out full  
 fledged merge.  And to do that it sounds like you would need to do  
 one of the following:
 
 1) implement predicate locking beyond the simple match on unique  
 index case that we have here
 2) do full table locks.
 
 It sounds like #1 isn't going to happen for a while.  So in order to  
 do more complicated merges you will need to do #2.  If you are going  
 to implement more complicated merge functionality I certainly  
 wouldn't want it throwing a warning telling me about a table lock if
 I already knew it would get the table lock and decided I wanted
 to go ahead with using merge anyway.
 
 Could you let the user create the lock himself to handle this  
 situation?  For instance:
 
 analyze the merge
 if merge condition matches unique index
   merge without  table locking
 elseif needed table lock already exists
   merge
 else
   throw an error
 
 You could also just add something to the merge syntax like ALLOW  
 TABLE LOCK or something.  The idea is just that the user can  
 explicitly allow the table lock and thus the more complicated merge.
 
 I don't really know anything about the implementation details but  
 that is the behavior that I would prefer.  That way I could always do  
 a complicated merge if I wanted to but there is no way it would ever  
 do an implicit table lock on me.  And it would never throw an error/ 
 warning unless I actually did something questionable.
 
 Does that make sense.
 
 Rick Gigger
 
 On Nov 16, 2005, at 7:49 AM, Tom Lane wrote:
 
  Christopher Kings-Lynne [EMAIL PROTECTED] writes:
  We should probably throw a notice or warning if we go to a table  
  lock,
  too.
 
  That's not very useful, because you can only do something about it AFTER
  the 1 hour exclusive lock merge has already run :)
 
  We shouldn't do anything remotely like that.  A statement whose  
  locking
  effects can't be predicted on sight is horrid both from the user's
  viewpoint and from the implementation viewpoint.  In particular, if we
  have to do planning before we can determine whether the table needs  
  just
  a SELECT lock or something stronger, then we have to take a weak  
  lock to
  do the planning and then we are faced with upgrading to the stronger
  lock at runtime.  Can you say deadlock risk?
 
  I think we should do REPLACE-like functionality that simply fails  
  if the
  match condition isn't equality on a primary key.  If we can use SQL- 
  spec
  MERGE syntax for this, that's fine, but let's not think in terms of
  silently changing to a stronger table lock and a much slower
  implementation when the condition isn't a primary key.  That's a whole
  lot of work that isn't solving any real-world problems, and *is*
  creating a foot-gun for people to cause themselves performance and
  deadlock problems anytime they spell the WHERE condition slightly  
  wrong.
 
  regards, tom lane
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [HACKERS] question about count(b) where b is a custom type

2005-11-16 Thread Grzegorz Jaskiewicz


Yes, sorry for the mess. The problem was somewhere else (a not quite
well-written log procedure issuing the conversion used for logging, even
though logging was off).


--
GJ

If we knew what we were doing, it wouldn't be called Research, would  
it? - AE







Re: [HACKERS] OS X 7.4 failure

2005-11-16 Thread Tom Lane
Jim C. Nasby [EMAIL PROTECTED] writes:
 I do have perl, python, tcl and nls enabled, could one of them
 be trying to pull libssl and libcrypto in for some reason?

Perhaps --- try otool -L (local equivalent of ldd) on them to find
out.

regards, tom lane



Re: [HACKERS] MERGE vs REPLACE

2005-11-16 Thread Jaime Casanova
On 11/16/05, Bruce Momjian pgman@candle.pha.pa.us wrote:

 Interesting approach.  Actually, we could tell the user they have to use
 BEGIN;LOCK tab before doing MERGE, and throw an error if we don't
 already have a table lock.


If the lock will be required, what's the problem with doing it
internally, without user interaction?

--
regards,
Jaime Casanova
(DBA: DataBase Aniquilator ;)



Re: [HACKERS] MERGE vs REPLACE

2005-11-16 Thread Tom Lane
Jaime Casanova [EMAIL PROTECTED] writes:
 If the lock will be required, what's the problem in doing it
 internally?

I already explained that: lock upgrading is deadlock-prone.
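
The classic failure mode, as a two-session sketch (made-up table, with
explicit LOCK TABLE to make the upgrade visible):

T1: BEGIN;
T2: BEGIN;
T1: LOCK TABLE tab IN ROW SHARE MODE;
T2: LOCK TABLE tab IN ROW SHARE MODE;         -- fine, ROW SHARE locks don't conflict
T1: LOCK TABLE tab IN ACCESS EXCLUSIVE MODE;  -- waits for T2's lock
T2: LOCK TABLE tab IN ACCESS EXCLUSIVE MODE;  -- waits for T1's lock: deadlock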

regards, tom lane



Re: [HACKERS] MERGE vs REPLACE

2005-11-16 Thread Bruce Momjian
Jaime Casanova wrote:
 On 11/16/05, Bruce Momjian pgman@candle.pha.pa.us wrote:
 
  Interesting approach.  Actually, we could tell the user they have to use
  BEGIN;LOCK tab before doing MERGE, and throw an error if we don't
  already have a table lock.
 
 
 If the lock will be required, what's the problem in doing it
 internally? without user interaction?

Because first, we are making it visible to the user, and second the lock
is taken out _before_ we actually execute the statement, meaning we
don't have to escalate our locks.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [HACKERS] OS X 7.4 failure

2005-11-16 Thread Jim C. Nasby
On Wed, Nov 16, 2005 at 11:50:51AM -0500, Tom Lane wrote:
 Jim C. Nasby [EMAIL PROTECTED] writes:
  I do have perl, python, tcl and nls enabled, could one of them
  be trying to pull libssl and libcrypto in for some reason?
 
 Perhaps --- try otool -L (local equivalent of ldd) on them to find
 out.

[EMAIL PROTECTED]:13]~/buildfarm/source:39%otool -L `which perl`
/opt/local/bin/perl:
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current 
version 71.1.3)
[EMAIL PROTECTED]:13]~/buildfarm/source:40%otool -L `which python`
/opt/local/bin/python:
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current 
version 71.1.1)
[EMAIL PROTECTED]:13]~/buildfarm/source:41%otool -L `which tclsh`
/opt/local/bin/tclsh:
/opt/local/lib/libtcl8.4.dylib (compatibility version 8.4.0, current 
version 8.4.11)

/System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation 
(compatibility version 150.0.0, current version 299.35.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current 
version 71.1.3)
[EMAIL PROTECTED]:14]~/buildfarm/source:42%otool -L 
/opt/local/lib/libtcl8.4.dylib
/opt/local/lib/libtcl8.4.dylib:
/opt/local/lib/libtcl8.4.dylib (compatibility version 8.4.0, current 
version 8.4.11)

/System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation 
(compatibility version 150.0.0, current version 299.35.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current 
version 71.1.3)
[EMAIL PROTECTED]:14]~/buildfarm/source:43%

I'll try yanking that stuff in any case...
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: [HACKERS] OS X 7.4 failure

2005-11-16 Thread Tom Lane
Jim C. Nasby [EMAIL PROTECTED] writes:
 [EMAIL PROTECTED]:13]~/buildfarm/source:39%otool -L `which perl`
 /opt/local/bin/perl:
 /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current 
 version 71.1.3)

These aren't particularly relevant: you need to look at the shared
libraries that are being pulled into the PG link commands, not random
standalone executables that happen to come from the same package.

regards, tom lane



Re: [HACKERS] MERGE vs REPLACE

2005-11-16 Thread Martijn van Oosterhout
On Wed, Nov 16, 2005 at 11:37:46AM -0500, Bruce Momjian wrote:
 
 Interesting approach.  Actually, we could tell the user they have to use
 BEGIN;LOCK tab before doing MERGE, and throw an error if we don't
 already have a table lock.

The bit I'm still missing is why there needs to be a lock at all. The
SQL standard doesn't say anywhere that concurrent MERGE operations
can't conflict. It seems to me that standard visibility rules apply. If
neither MERGE statement can see the results of the other, then they
will both INSERT. If you don't have a UNIQUE constraint to prevent this
then what's the problem?

It seems to me people would like, in the case of an existing UNIQUE
constraint, to be able to use it to prevent duplicate key errors.
This is nice, but the standard doesn't require that either. 

In other words, if we can use an index to avoid duplicate key errors,
fine. But if there is no index available, it is not an error to do an
INSERT because another INSERT was hidden from you.

Conceptually, a MERGE statement is just a long string of INSERTs and
UPDATEs in the same transaction and I think we should treat it as
such.
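
Per source row that is semantically nothing more than this (a sketch
with made-up tables; note that nothing here stops two concurrent runs
from both taking the INSERT branch unless a UNIQUE constraint on id
turns one of them into an error):

BEGIN;
-- try the UPDATE first ...
UPDATE target SET val = 'new' WHERE id = 42;
-- ... and INSERT only if the UPDATE matched nothing under our snapshot
INSERT INTO target (id, val)
    SELECT 42, 'new'
    WHERE NOT EXISTS (SELECT 1 FROM target WHERE id = 42);
COMMIT;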

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




[HACKERS] Some array semantics issues

2005-11-16 Thread Tom Lane
While hacking on the nulls-in-arrays addition, I noticed a couple of
behaviors that seem a bit bogus to me.

First, array slicing returns NULL any time the requested slice falls
completely outside the array bounds.  For instance

regression=# select ('{1,2,3}'::int[])[2:4];
 int4
---
 {2,3}
(1 row)

regression=# select ('{1,2,3}'::int[])[3:4];
 int4
--
 {3}
(1 row)

regression=# select ('{1,2,3}'::int[])[4:4];
 int4
--

(1 row)

I'm inclined to think that an empty array ('{}') would be a more
sensible result.

Second, array comparison compares the contained values but pays no
attention to the array dimensions.  Thus for example

regression=# select '[0:2]={1,2,3}'::int[] = '{1,2,3}'::int[];
 ?column?
--
 t
(1 row)

regression=# select '{1,2,3,4}'::int[] = '{{1,2},{3,4}}'::int[];
 ?column?
--
 t
(1 row)

This seems pretty bogus as well.  To maintain backwards compatibility as
much as possible, I'd be inclined to sort first on the contained values
as we do now, but if they are equal sort on the array dimension data.
I'm not too concerned about exactly what the sort order is for
different-shaped arrays, I just don't think the above cases should be
considered equal.

Comments?

regards, tom lane



Re: [HACKERS] [COMMITTERS] pgsql: make_restrictinfo() failed to attach the specified

2005-11-16 Thread Alvaro Herrera
Tom Lane wrote:
 Log Message:
 ---
 make_restrictinfo() failed to attach the specified required_relids to
 its result when the clause was an OR clause.  Brain fade exposed by
example from Sebastian Böck.

I wonder if there should be regression tests for all the bugs exposed
after 8.1 ... I mean, what stops anyone from introducing the same bugs
again?

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [HACKERS] MERGE vs REPLACE

2005-11-16 Thread Bruce Momjian
Martijn van Oosterhout wrote:
-- Start of PGP signed section.
 On Wed, Nov 16, 2005 at 11:37:46AM -0500, Bruce Momjian wrote:
  
  Interesting approach.  Actually, we could tell the user they have to use
  BEGIN;LOCK tab before doing MERGE, and throw an error if we don't
  already have a table lock.
 
 The bit I'm still missing is why there needs to be a lock at all. The
 SQL standard doesn't say anywhere that concurrent MERGE operations
 can't conflict. It seems to me that standard visibility rules apply. If
 neither MERGE statement can see the results of the other, then they
 will both INSERT. If you don't have a UNIQUE constraint to prevent this
 then what's the problem?

I assume they want MERGE because they don't want duplicates.  If they
didn't care, they would have used INSERT.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [HACKERS] [COMMITTERS] pgsql: make_restrictinfo() failed to attach the specified

2005-11-16 Thread Tom Lane
Alvaro Herrera [EMAIL PROTECTED] writes:
 I wonder if there should be regression tests for all the bugs exposed
 after 8.1 ... I mean, what stops anyone from introducing the same bugs
 again?

I've never been a fan of regression tests in the narrow sense of
"let's test for this specific mistake we made once".  If you can devise
a test that catches a class of errors including the one you actually
made, that's a different story, because it's much more likely to catch a
real future problem.

I was thinking about adding some regression tests to exercise OR-conditions
in OUTER JOIN ON clauses, because Sebastian's examples indicate that we
haven't tested that area nearly hard enough, but I'm not in favor of
just pushing his examples into the tests as-is.  (For one reason,
they'll soon stop being tests of OR-conditions at all, once I've got IN
reimplemented the way I want.)

If you want to spend some time consing up test cases, the areas I'd
suggest covering are:

1. OR clauses that actually reference both sides of the JOIN, plus
OR clauses that actually mention only the outer side or only the inner
side.

2. OR clauses consisting of AND sub-clauses where the sub-clauses are
different combinations of the above cases.

regards, tom lane



[HACKERS] Heading to Mexico

2005-11-16 Thread Bruce Momjian
I am leaving for Mexico in a few hours to speak at a conference.  I return
on Monday.

Is it helpful to tell hackers when I am not around?  I was in NYC last
week for four days and didn't publicize it.  Of course, I didn't finish
my backlog of email until yesterday.  I guess the question is whether my
not processing email for a few days is something I should announce here.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [HACKERS] Heading to Mexico

2005-11-16 Thread Simon Riggs
On Wed, 2005-11-16 at 13:09 -0500, Bruce Momjian wrote:
 I leaving for Mexico in a few hours to speak at a conference.  I return
 on Monday.
 
 Is it helpful to tell hackers when I am not around?  I was in NYC last
 week for four days and didn't publicize it.  Of course, I didn't finish
 my backlog of email until yesterday.  I guess the question is whether my
 not processing email for a few days is something I should announce here.

Helpful, and it brings colour to the proceedings here... and reminds
everybody it's a global project.

Best Regards, Simon Riggs


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] MERGE vs REPLACE

2005-11-16 Thread Simon Riggs
On Wed, 2005-11-16 at 18:34 +0100, Martijn van Oosterhout wrote:
 On Wed, Nov 16, 2005 at 11:37:46AM -0500, Bruce Momjian wrote:
  
  Interesting approach.  Actually, we could tell the user they have to use
  BEGIN;LOCK tab before doing MERGE, and throw an error if we don't
  already have a table lock.
 
 The bit I'm still missing is why there needs to be a lock at all. The
 SQL standard doesn't say anywhere that concurrent MERGE operations
 can't conflict. It seems to me that standard visibility rules apply. If
 neither MERGE statement can see the results of the other, then they
 will both INSERT. If you don't have a UNIQUE constraint to prevent this
 then what's the problem?
 
 It seems to me people would like, in the case of an existing UNIQUE
 constraint, to be able to use it to prevent duplicate key errors.
 This is nice, but the standard doesn't require that either. 
 
 In other words, if we can use an index to avoid duplicate key errors,
 fine. But if there is no index available, it is not an error to do an
 INSERT because another INSERT was hidden from you.
 
 Conceptually, a MERGE statement is just a long string of INSERTs and
 UPDATEs in the same transaction and I think we should treat it as
 such.

Agreed.

Best Regards, Simon Riggs


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] MERGE vs REPLACE

2005-11-16 Thread Martijn van Oosterhout
On Wed, Nov 16, 2005 at 12:59:36PM -0500, Bruce Momjian wrote:
 Martijn van Oosterhout wrote:
  The bit I'm still missing is why there needs to be a lock at all. The
  SQL standard doesn't say anywhere that concurrent MERGE operations
  can't conflict. It seems to me that standard visibility rules apply. If
  neither MERGE statement can see the results of the other, then they
  will both INSERT. If you don't have a UNIQUE constraint to prevent this
  then what's the problem?
 
 I assume they want MERGE because they don't want duplicates.  If they
 don't care, they would have used INSERT.

The whole point of MERGE was because you can do different things on
INSERT or UPDATE. If you didn't care about the UPDATE case you could
indeed do just inserts...
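
(For reference, the SQL:2003 construct under discussion looks roughly like
this -- a sketch only, since PostgreSQL does not implement MERGE at this
point, and the table and column names are invented:)

MERGE INTO target t
USING staging s ON (t.key = s.key)
WHEN MATCHED THEN UPDATE SET val = s.val
WHEN NOT MATCHED THEN INSERT (key, val) VALUES (s.key, s.val);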

I just don't think we should tie ourselves in knots over details that
the spec doesn't require. If the spec doesn't require predicate
locking then why on earth should we provide it if it's so difficult?

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [HACKERS] Some array semantics issues

2005-11-16 Thread Greg Stark

Tom Lane [EMAIL PROTECTED] writes:

 regression=# select '[0:2]={1,2,3}'::int[] = '{1,2,3}'::int[];
  ?column?
 --
  t
 (1 row)
 
 regression=# select '{1,2,3,4}'::int[] = '{{1,2},{3,4}}'::int[];
  ?column?
 --
  t
 (1 row)
 
 This seems pretty bogus as well.  

The second case seems utterly bogus. But the first case seems maybe
justifiable. maybe. 

In the past Postgres treated the array bounds as so insignificant they weren't
even worth preserving across a dump/restore. 

And changing that would make it harder to test just the contents of the array
without having to match bounds as well. That is, you couldn't say list =
'{1,2}' to test if the array contained 1,2. You would have to, well, I'm not
even sure how you would test it actually. Maybe something kludgy like
'{}'::int[] || list = '{1,2}' ?

I'm not entirely against the idea of making array bounds significant but I
guess we would need some convenient way of taking them out of the picture too.
Perhaps another equality operator.

-- 
greg


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] MERGE vs REPLACE

2005-11-16 Thread Dann Corbit
 -Original Message-
 From: [EMAIL PROTECTED] [mailto:pgsql-hackers-
 [EMAIL PROTECTED] On Behalf Of Simon Riggs
 Sent: Wednesday, November 16, 2005 10:35 AM
 To: Martijn van Oosterhout
 Cc: Bruce Momjian; Rick Gigger; Tom Lane; Christopher Kings-Lynne; Jim C.
 Nasby; josh@agliodbs.com; pgsql-hackers@postgresql.org; Jaime Casanova;
 Peter Eisentraut
 Subject: Re: [HACKERS] MERGE vs REPLACE
 
 On Wed, 2005-11-16 at 18:34 +0100, Martijn van Oosterhout wrote:
  On Wed, Nov 16, 2005 at 11:37:46AM -0500, Bruce Momjian wrote:
  
   Interesting approach.  Actually, we could tell the user they have to use
   BEGIN;LOCK tab before doing MERGE, and throw an error if we don't
   already have a table lock.
  
  The bit I'm still missing is why there needs to be a lock at all. The
  SQL standard doesn't say anywhere that concurrent MERGE operations
  can't conflict. It seems to me that standard visibility rules apply. If
  neither MERGE statement can see the results of the other, then they
  will both INSERT. If you don't have a UNIQUE constraint to prevent this
  then what's the problem?
  
  It seems to me people would like, in the case of an existing UNIQUE
  constraint, to be able to use it to prevent duplicate key errors.
  This is nice, but the standard doesn't require that either.
  
  In other words, if we can use an index to avoid duplicate key errors,
  fine. But if there is no index available, it is not an error to do an
  INSERT because another INSERT was hidden from you.
 
  Conceptually, a MERGE statement is just a long string of INSERTs and
  UPDATEs in the same transaction and I think we should treat it as
  such.

Merge could also be considered as a long string of deletes and inserts.
I guess that deleting those records that already exist and then
inserting all of the records is faster because it could be done as a
single join to perform the delete followed by a single batch insert.
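
(A rough sketch of that delete-then-insert approach -- table names invented;
DELETE ... USING is the 8.1 syntax for a join delete:)

BEGIN;
DELETE FROM target USING staging WHERE target.key = staging.key;
INSERT INTO target SELECT * FROM staging;
COMMIT;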

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] MERGE vs REPLACE

2005-11-16 Thread Martijn van Oosterhout
On Wed, Nov 16, 2005 at 11:06:15AM -0800, Dann Corbit wrote:
  -Original Message-
  From: [EMAIL PROTECTED] [mailto:pgsql-hackers-
  [EMAIL PROTECTED] On Behalf Of Simon Riggs
   Conceptually, a MERGE statement is just a long string of INSERTs and
   UPDATEs in the same transaction and I think we should treat it as
   such.
 
 Merge could also be considered as a long string of deletes and inserts.
 I guess that deleting those records that already exist and then
 inserting all of the records is faster because it could be done like a
 single join to perform the delete and then a single batch insert.

And for us it makes no difference because in MVCC, UPDATE == DELETE +
INSERT. IMHO it's just a nice construct to specify UPDATEs and INSERTs
in the same statement.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [HACKERS] Some array semantics issues

2005-11-16 Thread Tom Lane
Greg Stark [EMAIL PROTECTED] writes:
 And changing that would make it harder to test just the contents of the array
 without having to match bounds as well.

Fair point, but currently it's impossible to make a comparison that
*does* consider the bounds, which one would think would be the ordinary
meaning of equality.

 I'm not entirely against the idea of making array bounds significant
 but I guess we would need some convenient way of taking them out of
 the picture too.  Perhaps another equality operator.

I could go for a separate operator that has the current behavior
(might as well ignore number of dimensions too, if we're going to
ignore bounds).  Any thoughts about the operator name?

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Some array semantics issues

2005-11-16 Thread Greg Stark

Tom Lane [EMAIL PROTECTED] writes:

 I could go for a separate operator that has the current behavior
 (might as well ignore number of dimensions too, if we're going to
 ignore bounds).  Any thoughts about the operator name?

Well to me these are two different cases. At least the way I see it, {1,2} is a
list of two numbers, and {{1,2},{3,4}} is a list of two lists. They aren't
the same and they don't even contain the same thing.

It occurs to me that it would also make sense to have an operator that
considered the arrays in an order-insensitive comparison. 

It all depends on what you're using the arrays to represent.

If you're implementing something where each slot of the array corresponds to
some specific meaning then you need the array bounds included.

If you're representing stacks where the array bounds march up as they're used
then you don't really want to include the array bounds in your comparison.

If you're representing a denormalized one-to-many relationship (being aware of
all the associated pros and cons of denormalization of course) then you really
don't care about the order at all.

Personally I can't really think of any cases where the shape of the array
doesn't matter though.

-- 
greg


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] forcing returned values to be binary

2005-11-16 Thread Bruno Wolff III
On Tue, Nov 15, 2005 at 13:01:20 -0500,
  Dave Cramer [EMAIL PROTECTED] wrote:
 I've talked to Ken Geis via email. He suggests that there is  
 considerable overhead to be saved if we go to binary, especially in
 date and timestamp fields.
 
 One thing though: if the date is stored as a 64-bit integer instead of a
 float, what does the binary output look like? Are they different?

Yes, the 64-bit integer representation is different from the floating point
representation.

 
 If so this would seem to complicate things quite a bit.

You probably also need to worry about the floating point representation on
the server being different from that on the client.
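
(A client can at least detect which representation the server uses; a
sketch, assuming an 8.0-or-later server:)

SHOW integer_datetimes;
-- 'on' means timestamps are 64-bit integer microseconds,
-- 'off' means they are double precision seconds; the same value is also
-- reported to the client at connection time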

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Some array semantics issues

2005-11-16 Thread Tom Lane
Greg Stark [EMAIL PROTECTED] writes:
 Tom Lane [EMAIL PROTECTED] writes:
 I could go for a separate operator that has the current behavior
 (might as well ignore number of dimensions too, if we're going to
 ignore bounds).  Any thoughts about the operator name?

 Well to me these are two different cases. At least the way I see it {1,2} is a
 list of two numbers, and {{1,2,},{3,4}} is a list of two lists. They aren't
 the same and they don't even contain the same thing.

Well, in that case what do you think about
{{1,2},{3,4},{5,6},{7,8}}
vs
{{1,2,3,4},{5,6,7,8}}
These have the same physical contents and the same number of dimensions,
so unless you want to consider them equal, you have to consider the
dimension values.

I think what we may be dancing around here is that there are some cases
where it makes sense to ignore the lower bounds, as opposed to the axis
lengths.  I'm not convinced that there are any cases where it makes
sense to compare the number of dimensions without comparing the axis
lengths --- but I can see the argument that lower bounds might be
uninteresting, particularly seeing that array_push and array_cat
do some not-necessarily-always-right things with them.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] PG_DUMP and table locking in PG7.4

2005-11-16 Thread Yann Michel
Hi Martijn,

On Wed, Nov 16, 2005 at 09:47:33AM +0100, Martijn van Oosterhout wrote:
 
 Ah yes, PostgreSQL doesn't use 2PL, it uses MVCC. quick overview here:
 http://linuxgazette.net/issue68/mitchell.html

THX! That was interesting!

  Thanks. BTW: Is there anything about locks and their meaning inside of
  the Docs? If not, wouldn't that be nice?
 
 Check here:
 http://www.postgresql.org/docs/8.0/interactive/mvcc.html
 
 Under Explicit Locking it lists all the locks and what they're for.

Yes, thanks once more!

Cheers,
Yann

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] PG_DUMP and table locking in PG7.4

2005-11-16 Thread Yann Michel
Hi,

On Wed, Nov 16, 2005 at 10:07:24AM -0500, Tom Lane wrote:
 Yann Michel [EMAIL PROTECTED] writes:
  Well, now that I'm thinking about, what you've written I think this is
  exactly the point. I think, that there is a VACUUM waiting for the dump
  to finish whereas the INSERTS are waiting for the VACUUM to finish.
 
 Only if it's a VACUUM FULL ... plain VACUUM neither blocks INSERT nor is
 blocked by pg_dump.
 
 The short answer may be don't use VACUUM FULL (at least not for
 routine automatic vacuums).

... I guess that the autovacuum switch only does an automated VACUUM,
but no VACUUM FULL?


Cheers,
Yann

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Some array semantics issues

2005-11-16 Thread Greg Stark
Tom Lane [EMAIL PROTECTED] writes:

 Well, in that case what do you think about
   {{1,2},{3,4},{5,6},{7,8}}
 vs
   {{1,2,3,4},{5,6,7,8}}

In the first case the first element is {1,2} and in the second case the first
element is {1,2,3,4} so from my point of view there's no way these are the
same.

None of the three use cases I conjured would want them considered equal, which
isn't to say there isn't some data structure somewhere out there which would,
but I haven't thought of it.

-- 
greg


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Some array semantics issues

2005-11-16 Thread Tom Lane
Greg Stark [EMAIL PROTECTED] writes:
 Tom Lane [EMAIL PROTECTED] writes:
 Well, in that case what do you think about
 {{1,2},{3,4},{5,6},{7,8}}
 vs
 {{1,2,3,4},{5,6,7,8}}

 In the first case the first element is {1,2} and in the second case the first
 element is {1,2,3,4} so from my point of view there's no way these are the
 same.

Well, then I think we're converging on agreement that array comparison
should always take into account the number of dimensions and the axis
lengths.  What seems still in question is whether to compare or ignore
the axis lower bounds.

I'd argue that ordinary equality should include the lower bounds, but
I'm willing to provide a separate operator (or whole btree opclass
if people want it) that ignores the lower bounds.  We just need a name.
Maybe ~=, ~, etc?

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Some array semantics issues

2005-11-16 Thread David Fetter
On Wed, Nov 16, 2005 at 03:03:53PM -0500, Greg Stark wrote:
 
 Tom Lane [EMAIL PROTECTED] writes:
 
  I could go for a separate operator that has the current behavior
  (might as well ignore number of dimensions too, if we're going to
  ignore bounds).  Any thoughts about the operator name?
 
 Well to me these are two different cases. At least the way I see it
 {1,2} is a list of two numbers, and {{1,2},{3,4}} is a list of two
 lists. They aren't the same and they don't even contain the same
 thing.

Right.

 It occurs to me that it would also make sense to have an operator
 that considered the arrays in an order-insensitive comparison. 

That sounds more like the SQL:2003 MULTISET, which is essentially
unordered.  Any plans for these?

Cheers,
D
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
phone: +1 510 893 6100   mobile: +1 415 235 3778

Remember to vote!

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Some array semantics issues

2005-11-16 Thread Tom Lane
David Fetter [EMAIL PROTECTED] writes:
 On Wed, Nov 16, 2005 at 03:03:53PM -0500, Greg Stark wrote:
 It occurs to me that it would also make sense to have an operator
 that considered the arrays in an order-insensitive comparison. 

 That sounds more like the SQL:2003 MULTISET, which is essentially
 unordered.  Any plans for these?

Seems to me it would be really expensive to try to make such a
comparison directly with the present array representation.  The
only sensible way to do it would be to sort the elements of the
two arrays (using the comparison operator of the element data type)
and then compare the results.  So you don't actually need a variant
equality operator, you just need a generic array_sort() function,
and then go array_sort(x) = array_sort(y).  Such a function might
have other uses besides this, too.
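
(A minimal sketch of such a function for one-dimensional arrays, assuming an
8.0-or-later server for generate_series and dollar quoting:)

create or replace function array_sort(anyarray) returns anyarray as $$
    select array(select $1[i]
                 from generate_series(array_lower($1,1), array_upper($1,1)) as g(i)
                 order by 1)
$$ language sql immutable;

-- then: select array_sort('{3,1,2}'::int[]) = array_sort('{2,3,1}'::int[]);  -- t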

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


[HACKERS] PANIC: could not locate a valid checkpoint record

2005-11-16 Thread Dave Cramer

Other than backing up $PGDATA
and running pg_resetxlog

Is there any advice ?

Dave

Nov 16 18:03:30 dx1-ptr postgres[22337]: [6-2]  '2005-11-16 17:47:31'
Nov 16 18:03:32 dx1-ptr postgres[29335]: [2-1] LOG:  received fast  
shutdown request
Nov 16 18:03:32 dx1-ptr postgres[29335]: [3-1] LOG:  aborting any  
active transactions
Nov 16 18:03:32 dx1-ptr postgres[22337]: [7-1] FATAL:  terminating  
connection due to administrator command
Nov 16 18:03:32 dx1-ptr postgres[32704]: [3-1] FATAL:  terminating  
connection due to administrator command
Nov 16 18:03:32 dx1-ptr postgres[15245]: [3-1] FATAL:  terminating  
connection due to administrator command
Nov 16 18:03:32 dx1-ptr postgres[3714]: [3-1] FATAL:  terminating  
connection due to administrator command

Nov 16 18:03:32 dx1-ptr postgres[23410]: [4-1] LOG:  shutting down
Nov 16 18:03:35 dx1-ptr postgres[23410]: [5-1] LOG:  database system  
is shut down
Nov 16 18:21:18 dx1-ptr postgres[10884]: [1-1] LOG:  could not create  
IPv6 socket: Address family not supported by protocol
Nov 16 18:21:18 dx1-ptr postgres[10960]: [2-1] LOG:  database system  
was shut down at 2005-11-16 18:03:35 GMT
Nov 16 18:21:18 dx1-ptr postgres[10960]: [3-1] LOG:  could not open  
file /data/db/pgdata/pg_xlog/008200E2 (log file 130, segment 226): No such file or directory
Nov 16 18:21:18 dx1-ptr postgres[10960]: [4-1] LOG:  invalid primary  
checkpoint record
Nov 16 18:21:18 dx1-ptr postgres[10960]: [5-1] LOG:  could not open  
file /data/db/pgdata/pg_xlog/008200E2 (log file 130, segment 226): No such file or directory
Nov 16 18:21:18 dx1-ptr postgres[10960]: [6-1] LOG:  invalid  
secondary checkpoint record
Nov 16 18:21:18 dx1-ptr postgres[10960]: [7-1] PANIC:  could not  
locate a valid checkpoint record
Nov 16 18:21:18 dx1-ptr postgres[10884]: [2-1] LOG:  startup process  
(PID 10960) was terminated by signal 6
Nov 16 18:21:18 dx1-ptr postgres[10884]: [3-1] LOG:  aborting startup  
due to startup process failure
Nov 16 18:28:06 dx1-ptr postgres[15001]: [1-1] LOG:  could not create  
IPv6 socket: Address family not supported by protocol
Nov 16 18:28:06 dx1-ptr postgres[15005]: [2-1] LOG:  database system  
was shut down at 2005-11-16 18:03:35 GMT
Nov 16 18:28:06 dx1-ptr postgres[15005]: [3-1] LOG:  could not open  
file /data/db/pgdata/pg_xlog/008200E2 (log file 130, segment 226): No such file or directory


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] MERGE vs REPLACE

2005-11-16 Thread Rick Gigger



Conceptually, a MERGE statement is just a long string of INSERTs and
UPDATEs in the same transaction and I think we should treat it as
such.


I've just got one question about this.  Everyone seems to be saying
that "try to insert and if that fails update" is the same as "try to
insert and if that fails delete and then insert".


What about the following scenario:

mytable (
    id    serial primary key,
    a     int4,
    b     int4,
    data  text
)

I've got an id field on every table because it simplifies a lot of  
things (such as slony configuration for example)
But I've also got a unique key on (a, b) and if I was to do a merge I  
would most likely do it in (a, b) not id.


If merge does a delete insert then it creates new values for the id  
columns which could cause me problems.  Basically any default fields  
are going to change or for that matter any fields not specified would  
be reinitialized whereas an update would leave them in place.


It seems to me that "try to update and if that fails insert" is the best
approach for not messing with existing data.  I guess "try to insert and
if that fails update" gets you the same effect.
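
(A sketch of that against the table above, keying on (a, b) -- client-side
logic, since there is no built-in statement for it yet:)

update mytable set data = 'new data' where a = 1 and b = 2;
-- if the UPDATE reports 0 rows affected:
insert into mytable (a, b, data) values (1, 2, 'new data');
-- id and any other defaulted columns are untouched in the UPDATE case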


- Rick Gigger

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [HACKERS] MERGE vs REPLACE

2005-11-16 Thread daveg
On Wed, Nov 16, 2005 at 09:49:28AM -0500, Tom Lane wrote:
 I think we should do REPLACE-like functionality that simply fails if the
 match condition isn't equality on a primary key.  If we can use SQL-spec
 MERGE syntax for this, that's fine, but let's not think in terms of
 silently changing to a stronger table lock and a much slower
 implementation when the condition isn't a primary key.  That's a whole

I agree, but would like to relax the primary key requirement to simply
a unique index. I can see use cases for unique so long as not null keys,
so it would be nice if the MERGE operation would work for these. As nulls
are not equal anyway this doesn't seem to do too much violence to the
semantics.
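
(For instance, a sketch of the kind of key described here -- unique and not
null, but not the primary key; names are invented:)

create table accounts (
    id    serial primary key,
    code  text not null unique,   -- candidate key a MERGE/REPLACE could target
    name  text
);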

-dg

-- 
David Gould  [EMAIL PROTECTED]
If simplicity worked, the world would be overrun with insects.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] MERGE vs REPLACE

2005-11-16 Thread Tom Lane
daveg [EMAIL PROTECTED] writes:
 I agree, but would like to relax the primary key requirement to simply
 a unique index. I can see use cases for unique so long as not null keys,
 so it would be nice if the MERGE operation would work for these. As nulls
 are not equal anyway this doesn't seem to do too much violence to the
 semantics.

But a unique key doesn't guarantee that there's only one matching row,
so ISTM you're right back to needing a predicate lock if you do that.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Some array semantics issues

2005-11-16 Thread Joe Conway

Tom Lane wrote:

Greg Stark [EMAIL PROTECTED] writes:

Tom Lane [EMAIL PROTECTED] writes:


Well, in that case what do you think about
{{1,2},{3,4},{5,6},{7,8}}
vs
{{1,2,3,4},{5,6,7,8}}



In the first case the first element is {1,2} and in the second case the first
element is {1,2,3,4} so from my point of view there's no way these are the
same.


Well, then I think we're converging on agreement that array comparison
should always take into account the number of dimensions and the axis
lengths.  What seems still in question is whether to compare or ignore
the axis lower bounds.

I'd argue that ordinary equality should include the lower bounds, but
I'm willing to provide a separate operator (or whole btree opclass
if people want it) that ignores the lower bounds.  We just need a name.
Maybe ~=, ~, etc?


A couple of thoughts based on the last time I read SQL2003 WRT arrays.

First, the spec only allows arrays to have a lower bound of 1. That 
requirement simplifies a whole lot of things. I don't think that many 
people actually depend on other than 1 as a lower bound (or at least if 
they do, they weren't dumping and reloading those databases prior to 
8.0) -- maybe given other possibly non-backward compatible changes for 
NULLs, we should also change this?


Second, the spec does not really directly allow for multidimensional 
arrays. What it does allow is nesting of arrays. So as Greg states, 
{1,2} is clearly a different array than {1,2,3,4}. I had been thinking 
that when (if?) the array literal parser and related infrastructure is 
rewritten, it should be done so that arrays-as-array-elements are 
processed similar to any scalar element (and perhaps tuples as array 
elements as well). My hope was that eventually anyarray I/O functions 
could eliminate the need to create an array type for every data type you 
wanted to use as an array element.


Joe





---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Some array semantics issues

2005-11-16 Thread Tom Lane
Joe Conway [EMAIL PROTECTED] writes:
 First, the spec only allows arrays to have a lower bound of 1. That 
 requirement simplifies a whole lot of things. I don't think that many 
 people actually depend on other than 1 as a lower bound (or at least if 
 they do, they weren't dumping and reloading those databases prior to 
 8.0) -- maybe given other possibly non-backward compatible changes for 
 NULLs, we should also change this?

I don't have a lot of use for arguments that go we should remove any
functionality that's not in the spec ... ISTM that variable lower
bounds are clearly useful for some applications, and even if they had
bugs in earlier releases that's not an argument for removing them.

 ... My hope was that eventually anyarray I/O functions 
 could eliminate the need to create an array type for every data type you 
 wanted to use as an array element.

Interesting thought, but then how do you declare the type of an array
column, or the type of a function argument that's not supposed to range
over every array type?  If we can't use an OID to identify a data type
completely, we're going to have lots of problems.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] PANIC: could not locate a valid checkpoint record

2005-11-16 Thread Tom Lane
Dave Cramer [EMAIL PROTECTED] writes:
 Nov 16 18:03:32 dx1-ptr postgres[23410]: [4-1] LOG:  shutting down
 Nov 16 18:03:35 dx1-ptr postgres[23410]: [5-1] LOG:  database system  
 is shut down
 Nov 16 18:21:18 dx1-ptr postgres[10884]: [1-1] LOG:  could not create  
 IPv6 socket: Address family not supported by protocol
 Nov 16 18:21:18 dx1-ptr postgres[10960]: [2-1] LOG:  database system  
 was shut down at 2005-11-16 18:03:35 GMT
 Nov 16 18:21:18 dx1-ptr postgres[10960]: [3-1] LOG:  could not open  
 file /data/db/pgdata/pg_xlog/008200E2 (log file 130, segment 226): No such file or directory
 Nov 16 18:21:18 dx1-ptr postgres[10960]: [4-1] LOG:  invalid primary  
 checkpoint record
 Nov 16 18:21:18 dx1-ptr postgres[10960]: [5-1] LOG:  could not open  
 file /data/db/pgdata/pg_xlog/008200E2 (log file 130, segment 226): No such file or directory
 Nov 16 18:21:18 dx1-ptr postgres[10960]: [6-1] LOG:  invalid  
 secondary checkpoint record
 Nov 16 18:21:18 dx1-ptr postgres[10960]: [7-1] PANIC:  could not  
 locate a valid checkpoint record

So what happened between 18:03 and 18:21 exactly?  If you don't have 
008200E2, what files *do* you have in pg_xlog?  What does
pg_controldata print?
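
(Concretely, something along these lines -- paths taken from the log above:)

ls -l /data/db/pgdata/pg_xlog
pg_controldata /data/db/pgdata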

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] MERGE vs REPLACE

2005-11-16 Thread Martijn van Oosterhout
On Wed, Nov 16, 2005 at 04:51:07PM -0500, Tom Lane wrote:
 daveg [EMAIL PROTECTED] writes:
  I agree, but would like to relax the primary key requirement to simply
  a unique index. I can see use cases for unique so long as not null keys,
  so it would be nice if the MERGE operation would work for these. As nulls
  are not equal anyway this doesn't seem to do too much violence to the
  semantics.
 
 But a unique key doesn't guarantee that there's only one matching row,
 so ISTM you're right back to needing a predicate lock if you do that.

But there is no need to guarantee anything. As the spec says, if the
join of the table with the other clauses matches a row in the table
more than once, raise a cardinality exception. If someone creates a
join that matches more than once the whole statement fails. But you can
work that out at runtime. If the user specifies NOT NULL in the join
condition then it can work, and there is no reason to forbid that.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [HACKERS] PG_DUMP and table locking in PG7.4

2005-11-16 Thread Alvaro Herrera
Yann Michel wrote:

 ... I guiess that the AUTOVACUUM switch only does an automated VACUUM
 but no VACUUM FULL?

Certainly.
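
(For reference, a sketch of the relevant 8.1 settings as I understand them --
the integrated autovacuum only ever issues plain VACUUM and ANALYZE, while
VACUUM FULL must still be run by hand and takes an exclusive lock:)

# postgresql.conf
stats_start_collector = on
stats_row_level = on
autovacuum = on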

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Some array semantics issues

2005-11-16 Thread Greg Stark

Tom Lane [EMAIL PROTECTED] writes:

 Joe Conway [EMAIL PROTECTED] writes:
  First, the spec only allows arrays to have a lower bound of 1. That 
  requirement simplifies a whole lot of things. I don't think that many 
  people actually depend on other than 1 as a lower bound (or at least if 
  they do, they weren't dumping and reloading those databases prior to 
  8.0) -- maybe given other possibly non-backward compatible changes for 
  NULLs, we should also change this?
 
 I don't have a lot of use for arguments that go we should remove any
 functionality that's not in the spec ... ISTM that variable lower
 bounds are clearly useful for some applications, and even if they had
 bugs in earlier releases that's not an argument for removing them.

Normally I don't either. But it's not just functionality that's not in the
spec. It's functionality that creates behaviour the spec specifies otherwise.

That is, if you have an array [1,2] the spec says you can get 1 by referring
to arr[1]. On Postgres you have to take more care. There could easily be code
out there that won't work on Postgres because of this difference.
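
(For example, a sketch of the behaviour in question:)

select ('[0:2]={1,2,3}'::int[])[1];  -- returns 2, because this array starts at 0
select ('{1,2,3}'::int[])[1];        -- returns 1, the spec-conforming case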

The main reason for having non-zero lower bounds in the first place was
originally that NULLs weren't allowed in arrays. Otherwise you run into
problems when you try to set arr[5] = 1 when there isn't an arr[1]..arr[4].
But if we have NULLs in arrays then we could easily have all arrays have lower
bounds of 1. We don't even have to store the leading NULL elements.

I think having all arrays start at 1 would actually much simplify the
semantics and avoid a lot of strange corner cases and surprising behaviour
that will follow from having non-1 based arrays.

-- 
greg


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] PANIC: could not locate a valid checkpoint record

2005-11-16 Thread Dave Cramer

Apparently the machine hung in between those two times.

I don't have any specific information; I am debugging remotely over
the phone.


Dave
On 16-Nov-05, at 5:21 PM, Tom Lane wrote:


Dave Cramer [EMAIL PROTECTED] writes:

Nov 16 18:03:32 dx1-ptr postgres[23410]: [4-1] LOG:  shutting down
Nov 16 18:03:35 dx1-ptr postgres[23410]: [5-1] LOG:  database system
is shut down
Nov 16 18:21:18 dx1-ptr postgres[10884]: [1-1] LOG:  could not create
IPv6 socket: Address family not supported by protocol
Nov 16 18:21:18 dx1-ptr postgres[10960]: [2-1] LOG:  database system
was shut down at 2005-11-16 18:03:35 GMT
Nov 16 18:21:18 dx1-ptr postgres[10960]: [3-1] LOG:  could not open
file /data/db/pgdata/pg_xlog/008200E2 (log file 130, segment 226): No such file or directory
Nov 16 18:21:18 dx1-ptr postgres[10960]: [4-1] LOG:  invalid primary
checkpoint record
Nov 16 18:21:18 dx1-ptr postgres[10960]: [5-1] LOG:  could not open
file /data/db/pgdata/pg_xlog/008200E2 (log file 130, segment 226): No such file or directory
Nov 16 18:21:18 dx1-ptr postgres[10960]: [6-1] LOG:  invalid
secondary checkpoint record
Nov 16 18:21:18 dx1-ptr postgres[10960]: [7-1] PANIC:  could not
locate a valid checkpoint record


So what happened between 18:03 and 18:21 exactly?  If you don't have
008200E2, what files *do* you have in pg_xlog?  What does
pg_controldata print?

regards, tom lane

---(end of  
broadcast)---

TIP 2: Don't 'kill -9' the postmaster




---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Some array semantics issues

2005-11-16 Thread Joe Conway

Greg Stark wrote:

Tom Lane [EMAIL PROTECTED] writes:

Joe Conway [EMAIL PROTECTED] writes:
First, the spec only allows arrays to have a lower bound of 1. That 
requirement simplifies a whole lot of things. I don't think that many 
people actually depend on other than 1 as a lower bound (or at least if 
they do, they weren't dumping and reloading those databases prior to 
8.0) -- maybe given other possibly non-backward compatible changes for 
NULLs, we should also change this?


I don't have a lot of use for arguments that go we should remove any
functionality that's not in the spec ... ISTM that variable lower
bounds are clearly useful for some applications, and even if they had
bugs in earlier releases that's not an argument for removing them.


Normally I don't either. But it's not just functionality that's not in the
spec. It's functionality that creates behaviour the spec specifies otherwise.


This is an important point. SQL2003 doesn't leave this as undefined:

4.10.2 Arrays
An array is a collection A in which each element is associated with 
exactly one ordinal position in A. If n is the cardinality of A, then 
the ordinal position p of an element is an integer in the range 1 (one) 
= p = n. If EDT is the element type of A, then A can thus be 
considered as a function of the integers in the range 1 (one) to n into EDT.


Joe

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] [COMMITTERS] pgsql: make_restrictinfo() failed to attach

2005-11-16 Thread Christopher Kings-Lynne

I've never been a fan of regression tests in the narrow sense of
let's test for this specific mistake we made once.  If you can devise
a test that catches a class of errors including the one you actually
made, that's a different story, because it's much more likely to catch a
real future problem.


Heh.  See what I do is envision a future 10 years from now when the guy 
who truly understands the planner and executor (Tom) has long gone and 
the rest of us poor buggers keep on trying to change and fix things, 
thereby recreating all these 10 year old bugs :)


Chris


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] [COMMITTERS] pgsql: make_restrictinfo() failed to attach the specified

2005-11-16 Thread Alvaro Herrera
Christopher Kings-Lynne wrote:
 I've never been a fan of regression tests in the narrow sense of
 let's test for this specific mistake we made once.  If you can devise
 a test that catches a class of errors including the one you actually
 made, that's a different story, because it's much more likely to catch a
 real future problem.
 
 Heh.  See what I do is envision a future 10 years from now when the guy 
 who truly understands the planner and executor (Tom) has long gone and 
 the rest of us poor buggers keep on trying to change and fix things, 
 thereby recreating all these 10 year old bugs :)

That's why someone else should be studying the planner and executor code
right now ...  I've long wanted to start doing it but I've always been
distracted with other minutiae ...

-- 
Alvaro Herrera http://www.amazon.com/gp/registry/DXLWNGRJD34J
PHP is what I call the Dumb Monkey language. [A]ny dumb monkey can code
something in PHP. Python takes actual thought to produce something useful.
   (J. Drake)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] [COMMITTERS] pgsql: make_restrictinfo() failed to attach the specified

2005-11-16 Thread Robert Treat
On Wednesday 16 November 2005 21:05, Alvaro Herrera wrote:
 Christopher Kings-Lynne wrote:
  I've never been a fan of regression tests in the narrow sense of
  let's test for this specific mistake we made once.  If you can devise
  a test that catches a class of errors including the one you actually
  made, that's a different story, because it's much more likely to catch a
  real future problem.
 
  Heh.  See what I do is envision a future 10 years from now when the guy
  who truly understands the planner and executor (Tom) has long gone and
  the rest of us poor buggers keep on trying to change and fix things,
  thereby recreating all these 10 year old bugs :)

 That's why someone else should be studying the planner and executor code
 right now ...  I've long wanted to start doing it but I've been always
 distracted with other minutia ...

If you think shared row locks and subtransactions were minutia, I think you're 
the right person for the job!   :-)

-- 
Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] [ANNOUNCE] PostgreSQL Weekly News - November 13 2005

2005-11-16 Thread Robert Treat
On Tuesday 15 November 2005 04:03, Teodor Sigaev wrote:
 Look at HEAD branch.

 Kaare Rasmussen wrote:
  David Fetter writes in PostgreSQL Weekly News - November 13 2005:
  Teodor Sigaev has been making lots of improvements to tsearch2, a
  full-text search engine.
 
  I can't find them. Am I blind? Can someone help?
 

Hey Teodor,

one thing that has confused me lately is figuring out where the latest code 
for tsearch2 is kept.  A number of places on the web indicate you can get 
updated releases on places like the tsearch2 home page 
(http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/) but I haven't found 
anything relevant there or in the wiki or on the delta-soft web page. Is the 
current development going on in the postgresql cvs tree then?

-- 
Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[HACKERS] Call for sample databases

2005-11-16 Thread Christopher Kings-Lynne

Hi guys,

I've set up a new sample databases project:

http://pgfoundry.org/projects/dbsamples/

If any of you have sample databases (schema + data, pg_dump format) that 
you are willing to share under the BSD license, please send 'em to me so 
I can host them on the project.


You might also find interesting the new icons that Niko of the pgAdmin 
project has donated to the work.  Check the 'Icons' category here: 
http://pgfoundry.org/docman/?group_id=189  They are _awesome_.


Cheers,

Chris



---(end of broadcast)---
TIP 6: explain analyze is your friend


[HACKERS] tablespaces and non-empty directories

2005-11-16 Thread Philip Yarra
I assume CREATE TABLESPACE refuses to use a non-empty directory because of the 
risk of trashing existing files. Makes sense, but consider the following:

# mkfs -t ext2 /dev/sdc1
# mount -t ext2 /dev/sdc1 /mnt/pg_tables
# chown postgres /mnt/pg_tables
# su -c psql pyarra
pyarra=# CREATE TABLESPACE spc_tables LOCATION '/mnt/pg_tables/';
ERROR:  directory /mnt/pg_tables is not empty

This is because lost+found exists. Since lost+found would be a reasonably 
common directory to find at a mount-point on Unix-like OSs*, would it make 
sense for CREATE TABLESPACE to ignore it if present?

Of course this isn't hard to get around:
# mkdir /mnt/pg_tables/data
# chown postgres /mnt/pg_tables/data
CREATE TABLESPACE spc_tables LOCATION '/mnt/pg_tables/data/';

If consensus is that it is a bad idea to treat lost+found as a special case, 
would it be worth putting an explicit mention in the doco about the preferred 
way to set up a database with multiple disks?

Related question: are there plans afoot to allow specifying an alternate 
location for pg_xlog (or pg_delete-me-not) to save doing the shutdown-DB, mv 
directory to other disk, symlink, start-DB dance?
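
(For reference, the dance in question looks roughly like this -- a sketch
with invented paths, and only with the server stopped:)

# pg_ctl stop -D /data/pgdata
# mv /data/pgdata/pg_xlog /mnt/fast_disk/pg_xlog
# ln -s /mnt/fast_disk/pg_xlog /data/pgdata/pg_xlog
# pg_ctl start -D /data/pgdata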

Regards, Philip.

* Solaris 9 and Linux both use lost+found, Tru64 v4.0f does not seem to (and 
has extra guff for quota management too). I doubt we could cater to every 
possible Unix OS and the administrative files it creates at mount points; 
however, since lost+found is so common, if it's there, we could ignore it.

-- 

Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it. - Brian W. Kernighan



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] tablespaces and non-empty directories

2005-11-16 Thread Gavin Sherry
On Thu, 17 Nov 2005, Philip Yarra wrote:

 I assume CREATE TABLESPACE refuses to use a non-empty directory because of the
 risk of trashing existing files. Makes sense, but consider the following:

Right, that was the reasoning.


 # mkfs -t ext2 /dev/sdc1
 # mount -t ext2 /dev/sdc1 /mnt/pg_tables
 # chown postgres /mnt/pg_tables
 # su -c psql pyarra
 pyarra=# CREATE TABLESPACE spc_tables LOCATION '/mnt/pg_tables/';
 ERROR:  directory /mnt/pg_tables is not empty

 This is because lost+found exists. Since lost+found would be a reasonably
 common directory to find at a mount-point on Unix-like OSs*, would it make
 sense for CREATE TABLESPACE to ignore it if present?

This came up when tablespaces were being developed.


 Of course this isn't hard to get around:
 # mkdir /mnt/pg_tables/data
 # chown postgres /mnt/pg_tables/data
 CREATE TABLESPACE spc_tables LOCATION '/mnt/pg_tables/data/';

Right. We decided that this was easy for admins to do and also makes
things a little clearer: if /mnt/pg_tables was the data directory, you'd
have something like:

lost+found123413212223132[etc]

It might not be immediately obvious what the numeric named directories are
for.


 If consensus is that it is a bad idea to treat lost+found as a special case,
 would it be worth putting an explicit mention in the doco about the preferred
 way to set up a database with multiple disks?

Sounds like a good idea.


 Related question: are there plans afoot to allow specifying an alternate
 location for pg_xlog (or pg_delete-me-not) to save doing the shutdown-DB, mv
 directory to other disk, symlink, start-DB dance?

People have discussed it but I don't know of anyone working on it.

Gavin

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] tablespaces and non-empty directories

2005-11-16 Thread Tom Lane
Philip Yarra [EMAIL PROTECTED] writes:
 This is because lost+found exists. Since lost+found would be a reasonably 
 common directory to find at a mount-point on Unix-like OSs*, would it make 
 sense for CREATE TABLESPACE to ignore it if present?

No.  There is no reason to use a volume's root directory as a
tablespace; especially so since the root directory ought to be owned
by root and so you'd have a permissions problem anyhow.  Make a
subdirectory.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] [ANNOUNCE] PostgreSQL Weekly News - November 13 2005

2005-11-16 Thread Oleg Bartunov

On Wed, 16 Nov 2005, Robert Treat wrote:


On Tuesday 15 November 2005 04:03, Teodor Sigaev wrote:

Look at HEAD branch.

Kaare Rasmussen wrote:

David Fetter writes in PostgreSQL Weekly News - November 13 2005:

Teodor Sigaev has been making lots of improvements to tsearch2, a
full-text search engine.


I can't find them. Am I blind? Can someone help?



Hey Teodor,

one thing that has confused me lately is figuring out where the latest code
for tsearch2 is kept.  A number of places on the web indicate you can get
updated releases on places like the tsearch2 home page
(http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/) but I haven't found
anything relevent there or in the wiki or on the delta-soft web page. Is the
current development going on in the postgresql cvs tree then?


current development is in the postgresql cvs tree, but we keep backpatches 
on our page. Currently, we're working on full UTF-8 support for 8.2 and 
other improvements. This is a rather difficult task, but it is very important 
for our users, and we'll publish a patch for the 8.1 release on our page. 
We have our TODO at 
http://www.sai.msu.su/~megera/oddmuse/index.cgi/todo






Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] OS X 7.4 failure

2005-11-16 Thread Tom Lane
Jim C. Nasby [EMAIL PROTECTED] writes:
 http://pgbuildfarm.org/cgi-bin/show_log.pl?nm=cuckoodt=2005-11-15%2023:56:22

I took a closer look at this, and noticed something interesting:

ccache gcc -no-cpp-precomp -O2 -fno-strict-aliasing -g -Wall 
-Wmissing-prototypes -Wmissing-declarations  -bundle execute.o typename.o 
descriptor.o data.o error.o prepare.o memory.o connect.o misc.o path.o 
-L../pgtypeslib -L../../../../src/interfaces/libpq -L../../../../src/port 
-L/opt/local/lib -lpgtypes -lpq -lintl -lm  -o libecpg.so.4.1
ld: warning can't open dynamic library: /opt/local/lib/libssl.0.9.7.dylib 
(checking for undefined symbols may be affected) (No such file or directory, 
errno = 2)
ld: warning can't open dynamic library: /opt/local/lib/libcrypto.0.9.7.dylib 
(checking for undefined symbols may be affected) (No such file or directory, 
errno = 2)
ld: warning multiple definitions of symbol _pg_strncasecmp
/opt/local/lib/libpgtypes.dylib(pgstrcasecmp.o) definition of _pg_strncasecmp
/opt/local/lib/libpq.dylib(pgstrcasecmp.o) definition of _pg_strncasecmp

You should be asking yourself what the heck is it doing pulling in
libpgtypes and libpq from /opt/local/lib instead of the current build?
That's way down the -L search list.

I am not sure about Darwin's linker search rules, but it could easily be
that it first looks through the entire search path for a .dylib and only
upon failing looks for a .so.  If so, a .dylib lurking in /opt/local/lib
could capture the build away from the .so that the 7.4 build process
tries to make.

Solution would be to remove the PG libraries from /opt/local/lib, or
else remove /opt/local/lib from the search path for the 7.4 build
(which'd probably mean removing --with-tcl etc, but I'm not sure they
would work anyway).
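
(A sketch of what to check, using the paths from the log above:)

ls /opt/local/lib/libpq* /opt/local/lib/libpgtypes*
# if stale PG libraries turn up there, move them aside (or drop /opt/local/lib
# from the -L path) before rebuilding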

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org