Re: [HACKERS] [BUGS] BUG #3774: create table like including index doesn't update pg_constraints with primary key

2007-12-02 Thread NikhilS
Hi,


  The fundamental question though is should we allow primary, unique
  CONSTRAINTS which use the index mechanism just as an implementation to
  be created using the INCLUDING INDEXES mechanism.

 Yeah, this bizarreness was foreseen and agreed to back when we set up
 LIKE INCLUDING CONSTRAINTS the way it was defined (ie, copying only
 CHECK constraints and not other things called constraints).  I was never
 very thrilled with that definition myself, but it's a bit too late to
 revisit it.


Yeah, this is all confusing. I believe we should remove the following TODO
now that the above has been checked in:

 CREATE

   - Have WITH CONSTRAINTS also create constraint indexes

   http://archives.postgresql.org/pgsql-patches/2007-04/msg00149.php


Regards,
Nikhils
-- 
EnterpriseDB   http://www.enterprisedb.com


Re: [HACKERS] String encoding during connection handshake

2007-12-02 Thread Neil Conway
On Wed, 2007-11-28 at 09:38 -0800, Trevor Talbot wrote:
 PostgreSQL's problem is that it (and AFAICT POSIX) conflates encoding
 with locale, when the two are entirely separate concepts.

In what way does PostgreSQL conflate encoding with locale?

-Neil



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[HACKERS] There's random access and then there's random access

2007-12-02 Thread Gregory Stark

Recently there was a post on -performance about a particular case where
Postgres doesn't make very good use of the I/O system. This is when you try to
fetch many records spread throughout a table in random order.

http://archives.postgresql.org/pgsql-performance/2007-12/msg5.php

Currently Postgres reads each record as needed and processes it. This means
even if you have a large raid array you get no benefit from it since you're
limited by the latency of each request. The raid array might let you run more
queries simultaneously but not improve the response time of a single query.

But in most cases, as in the use case in the email message above, we can do
substantially better. We can arrange to issue all the read requests without
blocking, then process the blocks either as they come in or in the order we
want, blocking until each is actually satisfied. Handling them as they come in
is in theory more efficient, but either way I would expect to see a speedup
nearly equal to the number of drives in the array. Even on a single
drive it should slightly improve performance as it allows us to do some CPU
work while the I/O requests are pending.

The two interfaces I'm aware of for this are posix_fadvise() and libaio. I've
run tests with a synthetic benchmark which generates a large file then reads a
random selection of blocks from within it using either synchronous reads like
we do now or either of those interfaces. I saw impressive speed gains on a
machine with only three drives in a raid array. I did this a while ago so I
don't have the results handy. I'll rerun the tests again and post them.

I think this will be easiest to do for bitmap index scans. Since we gather up
all the pages we'll need before starting the heap scan we can easily skim
through them, issue posix_fadvises for at least a certain number ahead of the
actual read point and then proceed with the rest of the scan unchanged. 

For regular index scans I'm not sure how easy it will be to beat them into
doing this but I suspect it might not be too hard to at least prefetch the
tuples in the page-at-a-time buffer. That's probably safer too since for such
scans we're more likely to not actually read all the results anyway; there
could be a LIMIT or something else above us which will stop the scan early.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL 
training!

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [HACKERS] There's random access and then there's random access

2007-12-02 Thread Douglas McNaught
On 12/2/07, Gregory Stark [EMAIL PROTECTED] wrote:

 The two interfaces I'm aware of for this are posix_fadvise() and libaio. I've
 run tests with a synthetic benchmark which generates a large file then reads a
 random selection of blocks from within it using either synchronous reads like
 we do now or either of those interfaces. I saw impressive speed gains on a
 machine with only three drives in a raid array. I did this a while ago so I
 don't have the results handy. I'll rerun the tests again and post them.

The issue I've always seen raised with asynchronous I/O is
portability--apparently some platforms PG runs on don't support it (or
not well).  AIUI Linux actually still has a fairly crappy
implementation of AIO--glibc starts threads to do the I/O and then
tracks when they finish.  Not absolutely horrible, but a nice way to
suddenly have a threaded backend when you're not expecting one.

-Doug

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] 8.3beta3 ERROR: cached plan must not change result type

2007-12-02 Thread Alvaro Herrera
Louis-David Mitterrand wrote:

 Wow, the above email took three days to reach the list.
 
   Received: from maia-2.hub.org (maia-2.hub.org [200.46.204.187])
   by zenon.apartia.fr (Postfix) with ESMTP id 
 2B95E59C5B43C
   for [EMAIL PROTECTED]; Sat,  1 Dec 2007 02:14:00
   +0100 (CET)
   Received: from postgresql.org (postgresql.org [200.46.204.71])
   by maia-2.hub.org (Postfix) with ESMTP id C5C1F2CA2E2
   for [EMAIL PROTECTED]; Wed, 28 Nov 2007 11:51:20
   -0400 (AST)
 
 Why did maia-2.hub.org keep it so long in its belly?

I got it immediately:

Received: from maia-2.hub.org (maia-2.hub.org [200.46.204.187])
by perhan.alvh.no-ip.org (Postfix) with ESMTP id 0B76547BFF
for [EMAIL PROTECTED]; Wed, 28 Nov 2007 12:52:30 -0300 (CLST)
Received: from postgresql.org (postgresql.org [200.46.204.71])
by maia-2.hub.org (Postfix) with ESMTP id ED7032CA417
for [EMAIL PROTECTED]; Wed, 28 Nov 2007 11:52:21 -0400 (AST)

Perhaps your system refused mail from maia-2 for a while and then it got
queued.  How 3 days could have passed before it was delivered, I don't know.

-- 
Alvaro Herrera  Developer, http://www.PostgreSQL.org/
Prefiero omelette con amigos que caviar con tontos
  (Alain Nonnet)

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] There's random access and then there's random access

2007-12-02 Thread Gregory Stark
Douglas McNaught [EMAIL PROTECTED] writes:

 On 12/2/07, Gregory Stark [EMAIL PROTECTED] wrote:

 The two interfaces I'm aware of for this are posix_fadvise() and libaio. I've
 run tests with a synthetic benchmark which generates a large file then reads a
 random selection of blocks from within it using either synchronous reads like
 we do now or either of those interfaces. I saw impressive speed gains on a
 machine with only three drives in a raid array. I did this a while ago so I
 don't have the results handy. I'll rerun the tests again and post them.

 The issue I've always seen raised with asynchronous I/O is
 portability--apparently some platforms PG runs on don't support it (or
 not well).  AIUI Linux actually still has a fairly crappy
 implementation of AIO--glibc starts threads to do the I/O and then
 tracks when they finish.  Not absolutely horrible, but a nice way to
 suddenly have a threaded backend when you're not expecting one.

In the tests I ran Linux's posix_fadvise worked well and that's the simpler
interface for us to adapt to anyways. On Solaris there was no posix_fadvise
but libaio worked instead.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL 
training!

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] There's random access and then there's random access

2007-12-02 Thread Tom Lane
Gregory Stark [EMAIL PROTECTED] writes:
 Recently there was a post on -performance about a particular case where
 Postgres doesn't make very good use of the I/O system. This is when you try to
 fetch many records spread throughout a table in random order.

 http://archives.postgresql.org/pgsql-performance/2007-12/msg5.php

Since the OP in that thread has still supplied zero information
(no EXPLAIN, let alone ANALYZE, and no version info), it's pure
guesswork as to what his problem is.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] There's random access and then there's random access

2007-12-02 Thread Jens-Wolfhard Schicke
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Tom Lane wrote:
 Gregory Stark [EMAIL PROTECTED] writes:
 Recently there was a post on -performance about a particular case where
 Postgres doesn't make very good use of the I/O system. This is when you try to
 fetch many records spread throughout a table in random order.
 
 http://archives.postgresql.org/pgsql-performance/2007-12/msg5.php
 
 Since the OP in that thread has still supplied zero information
 (no EXPLAIN, let alone ANALYZE, and no version info), it's pure
 guesswork as to what his problem is.
Nonetheless, asynchronous IO will reap performance improvements.
Whether a specific case would indeed benefit from it is imho irrelevant,
if other cases can be found where performance would be
improved significantly.

I experimented with a raid of 8 solid state devices, and found that
the blocks/second for random access improved significantly with the
number of processes doing the access. I actually wanted to use said
raid as a tablespace for postgresql, but alas, the speedup did not
scale with the number of drives in the raid, which is very unfortunate.

I still got the lower solid-state latency, but the raid did not help.

Regards,
   Jens-Wolfhard Schicke
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFHUwM7zhchXT4RR5ARAsziAJ9qm/c8NuaJ+HqoJo9Ritb2t92fRwCgnF9J
r5YU/Fa0mNYG7YXed4QW7K4=
=Mvyj
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [HACKERS] Stored procedure issue

2007-12-02 Thread Usama Dar
On Dec 2, 2007 7:40 AM, Dragan Zubac [EMAIL PROTECTED] wrote:

 Hello

 I have a stored procedure which does the billing stuff
 in our system. It works ok, but if I put it in
 production, where there are some 5-10 billing events per
 second, the whole database slows down. It won't even
 drop some test table, reindex, or vacuum, things which were
 done before in the blink of an eye. If I stop the
 application which calls the procedure, all is back to
 normal.

 We didn't implement any special locking mechanism in
 the procedure; all is default. The procedure is
 updating the user's balance in table 'users'. On the other
 hand, a couple of 'heavy load' tables have foreign keys
 pointing to table 'users'.

 Is it a matter of concurrency and some locking issue,
 or maybe the existence of all those foreign keys
 pointing to table 'users', or maybe something else
 which we're not aware of at the moment?


Can you please post your procedure and the explain plan of the SQL which the
procedure uses to do the billing. There can be a zillion reasons for
the performance problems you are seeing, but the email does not provide
enough information.



 Sincerely

 Pera



  
 
 Be a better sports nut!  Let your teams follow you
 with Yahoo Mobile. Try it now.
 http://mobile.yahoo.com/sports;_ylt=At9_qDKvtAbMuh1G1SQtBI7ntAcJ

 ---(end of broadcast)---
 TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate




-- 
Usama Munir Dar http://linkedin.com/in/usamadar
Consultant Architect
Cell:+92 321 5020666
Skype: usamadar


Re: [HACKERS] There's random access and then there's random access

2007-12-02 Thread Gregory Stark
Tom Lane [EMAIL PROTECTED] writes:

 Gregory Stark [EMAIL PROTECTED] writes:
 Recently there was a post on -performance about a particular case where
 Postgres doesn't make very good use of the I/O system. This is when you try 
 to
 fetch many records spread throughout a table in random order.

 http://archives.postgresql.org/pgsql-performance/2007-12/msg5.php

 Since the OP in that thread has still supplied zero information
 (no EXPLAIN, let alone ANALYZE, and no version info), it's pure
 guesswork as to what his problem is.

Sure, consider it a hypothetical which needs further experimentation. That's
part of why I ran (and will rerun) those synthetic benchmarks to test whether
posix_fadvise() actually speeds up subsequent reads on a few operating
systems. Surely any proposed patch will have to prove itself on empirical
grounds too.

I could swear this has been discussed in the past too. I seem to recall Luke
disparaging Postgres on the same basis but proposing an immensely complicated
solution. posix_fadvise or using libaio in a simplistic fashion as a kind of
fadvise would be a fairly lightweight way to get most of the benefit of the more
complex solutions.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL 
training!

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] Stored procedure issue

2007-12-02 Thread Dragan Zubac
Hello

Please find in attachment the stored procedure
(proc_uni.txt), as well as a description of the tables
involved in its calculations.
The idea of the procedure is to find the longest prefix match
for the destination number, try to find it in table
'billing' for the particular user, find the price, and
insert the message into the history and inqueue tables, as well
as decrease the user's balance in table 'users'.
Would it help to put all the prefix and price data in some
sort of cache, and let the procedure first try to match
against the cache, only going to the table itself on
disk when it can't find a match?

I'm looking for a solution where this procedure can
operate at higher loads while leaving the other parts of the
database as operational as possible.

Sincerely

Pera

--- Usama Dar [EMAIL PROTECTED] wrote:

 On Dec 2, 2007 7:40 AM, Dragan Zubac
 [EMAIL PROTECTED] wrote:
 
  Hello
 
  I have a stored procedure which does the billing
 stuff
  in our system,it works ok,but if I put in
  production,where there is some 5-10 billing events
 per
  second,the whole database slows down. It won't
 even
  drop some test table,reindex,vacuum,things which
 were
  done before in the blink of an eye. If I stop the
  application which calls the procedure,all is back
 to
  normal.
 
  We didn't implement any special locking mechanism
 in
  the procedure,all is default. The procedure is
  updating user's balance in table 'users'. On the
 other
  hand a couple of 'heavy load' table has foreign
 keys
  pointing to table 'users'.
 
  Is it the matter of concurency and some locking
 issue
  or maybe the existing of all those foreign keys
  pointing to table 'users',or maybe something else
  which we're not aware at the moment ?
 
 
 Can you please post your procedure and explain plan
 of the SQL which the
 procedure uses to do the billing stuff . There can
 be a zillion reasons for
 the performance problems you are seeing, but the
 email does not provide
 enough information.
 
 
 
  Sincerely
 
  Pera
 
 
 
  


 
 
 
 
 -- 
 Usama Munir Dar http://linkedin.com/in/usamadar
 Consultant Architect
 Cell:+92 321 5020666
 Skype: usamadar
 


  

Get easy, one-click access to your favorites. 
Make Yahoo! your homepage.
http://www.yahoo.com/r/hs

create type dajbre as (status int,id bigint);

CREATE OR REPLACE FUNCTION 
proc_uni(integer,integer,inet,text,integer,integer,text,integer,integer,
text,int, int,boolean,text) RETURNS setof dajbre AS '

DECLARE

uid alias for $1; 
pid alias for $2;
ip_i alias for $3;
s_number alias for $4;
s_ton_i alias for $5; 
s_npi_i alias for $6;
d_number alias for $7;
d_ton_i alias for $8;
d_npi_i alias for $9;
mess alias for $10;
dcs_i alias for $11;
esm_i alias for $12;
delivery_i alias for $13;
u_mess_id_i alias for $14; 

r dajbre%rowtype;

prefixfound boolean;
prefixprice billing.price%TYPE;
dest_num_len int;
tmp_dest_number text;
tmp_user_bal numeric;
tmp_returnval int;
novi_status int;
tmp_his_id bigint;
tmp_u_mess_id_i text;


begin


dest_num_len := char_length(d_number);
tmp_dest_number := d_number; 
prefixfound := false;

 
while dest_num_len > 0 loop

select into prefixprice price from billing 
where u_id=uid and prefix=tmp_dest_number; 

if not found then 
tmp_dest_number := substring (tmp_dest_number from 1 for 
dest_num_len-1);
dest_num_len := char_length(tmp_dest_number);
else 
prefixfound := true; 
exit;
end if;
end loop;


if prefixfound=false then 
tmp_returnval :=11; 
novi_status :=11;
else if prefixprice = 0 then
tmp_returnval :=11;
novi_status :=50; 
 else select into tmp_user_bal maxsms-cursms from users where id=uid; 
if tmp_user_bal < prefixprice then 
tmp_returnval :=11;
novi_status :=51; 
else
tmp_returnval :=0; 
end if;
 end if;
end if;


if tmp_returnval = 0 then 


insert into history 
(ip,source,dest,message,dcs,esm,s_ton,s_npi,d_ton,d_npi,u_id,delivery,price,p_id,u_mess_id)
 values
(ip_i,s_number,d_number,decode(mess,''base64''),dcs_i,esm_i,s_ton_i,s_npi_i,d_ton_i,d_npi_i,uid,delivery_i,prefixprice,pid,u_mess_id_i);

 tmp_his_id := currval(''history_id_seq'');



if pid = 2 then
if u_mess_id_i = 0 then
tmp_u_mess_id_i := '''';
else
tmp_u_mess_id_i := u_mess_id_i;
end if;
else if pid = 3 then
  

Re: [GENERAL] [HACKERS] Stored procedure issue

2007-12-02 Thread Dragan Zubac
Hello

Here's the stored procedure itself, as well as the
related tables involved in its calculations.
The idea of the procedure is to find the longest prefix match
for the destination number, try to find it in table
'billing' for the particular user, find the price, and
insert the message into the history and inqueue tables, as well
as decrease the user's balance in table 'users'.
Would it help to put all the prefix and price data in some
sort of cache, and let the procedure first try to match
against the cache, only going to the table itself on
disk when it can't find a match?

I'm looking for a solution where this procedure can
operate at higher loads while leaving the other parts of the
database as operational as possible.

--Procedure---

create type dajbre as (status int,id bigint);

CREATE OR REPLACE FUNCTION
proc_uni(integer,integer,inet,text,integer,integer,text,integer,integer,
text,int, int,boolean,text) RETURNS setof dajbre AS '

DECLARE

uid alias for $1;
pid alias for $2;
ip_i alias for $3;
s_number alias for $4;
s_ton_i alias for $5;
s_npi_i alias for $6;
d_number alias for $7;
d_ton_i alias for $8;
d_npi_i alias for $9;
mess alias for $10;
dcs_i alias for $11;
esm_i alias for $12;
delivery_i alias for $13;
u_mess_id_i alias for $14;

r dajbre%rowtype;

prefixfound boolean;
prefixprice billing.price%TYPE;
dest_num_len int;
tmp_dest_number text;
tmp_user_bal numeric;
tmp_returnval int;
novi_status int;
tmp_his_id bigint;
tmp_u_mess_id_i text;


begin


dest_num_len := char_length(d_number);
tmp_dest_number := d_number;
prefixfound := false;


while dest_num_len > 0 loop

select into prefixprice price from billing
where u_id=uid and prefix=tmp_dest_number;

if not found then
tmp_dest_number := substring
(tmp_dest_number from 1 for dest_num_len-1);
dest_num_len :=
char_length(tmp_dest_number);
else 
prefixfound := true;
exit;
end if;
end loop;


if prefixfound=false then
tmp_returnval :=11;
novi_status :=11;  
else if prefixprice = 0 then
tmp_returnval :=11; 
novi_status :=50;   
 else select into tmp_user_bal maxsms-cursms from
users where id=uid;
if tmp_user_bal < prefixprice then 
tmp_returnval :=11;
novi_status :=51;  
else
tmp_returnval :=0;
end if;
 end if;   
end if;


if tmp_returnval = 0 then


insert into history
(ip,source,dest,message,dcs,esm,s_ton,s_npi,d_ton,d_npi,u_id,delivery,price,p_id,u_mess_id)
values
(ip_i,s_number,d_number,decode(mess,''base64''),dcs_i,esm_i,s_ton_i,s_npi_i,d_ton_i,d_npi_i,uid,delivery_i,prefixprice,pid,u_mess_id_i);

 tmp_his_id := currval(''history_id_seq'');



if pid = 2 then
if u_mess_id_i = 0 then
tmp_u_mess_id_i := '''';
else
tmp_u_mess_id_i := u_mess_id_i;
end if;
else if pid = 3 then
  tmp_u_mess_id_i := tmp_his_id ;
 end if;
end if;

update history set u_mess_id = tmp_u_mess_id_i where
id = tmp_his_id;
update users set cursms=cursms+ prefixprice where
id=uid;


insert into inqueue(id, u_id) values (tmp_his_id,
uid);

r.status := 0;
r.id := tmp_his_id;
return next r;


else


insert into rejected
(ip,source,dest,message,dcs,esm,s_ton,s_npi,d_ton,d_npi,status,u_id,delivery,u_mess_id)
values
(ip_i,s_number,d_number,decode(mess,''base64''),dcs_i,esm_i,s_ton_i,s_npi_i,d_ton_i,d_npi_i,novi_status,uid,delivery_i,u_mess_id_i);

r.status := 11;
r.id := 0;
return next r;

end if;

return;

end;

' language 'plpgsql';
-

---Billing table-
                            Table "public.billing"

   Column   |      Type      |                      Modifiers
------------+----------------+------------------------------------------------------
 id         | integer        | not null default nextval('billing_id_seq'::regclass)
 u_id       | integer        | not null
 prefix     | text           |
 operator   | integer        |
 price      | numeric(20,10) |
 comment    | text           |
 new_prefix | boolean        | default false
Indexes:
    "billing_pkey" PRIMARY KEY, btree (id)
    "bil_uid" btree (u_id)
Foreign-key constraints:
    "$1" FOREIGN KEY (u_id) REFERENCES users(id)
    "$2" FOREIGN KEY (operator) REFERENCES operators(id)
-
Users table--
  Column   |  Type   |                     Modifiers
-----------+---------+----------------------------------------------------
 id        | integer | not null default nextval('users_id_seq'::regclass)
 username  | text    | not null
 password  | text    | not null
 name      | text    |
 email     | text    |
 mobile    | text    |
 phone     | text    |
 company 

[HACKERS] Propose removing contrib/spi/preprocessor/

2007-12-02 Thread Tom Lane
So while poking around contrib/spi and trying to put together an SGML
doc file for it, I realized that the preprocessor/ subdirectory seems
entirely useless.  What it does is generate CREATE TRIGGER commands
for use with contrib/spi/refint.c, given SQL input that includes FOREIGN
KEY clauses.  That was probably useful before we had foreign key
functionality, but I cannot imagine a use for it today --- in practice
you'd use the real FK support.

Of course, you could argue that refint.c itself has zero use anymore,
but as an example of custom trigger writing it may still have some
value, so I'm not proposing removing it.  I don't, however, see the
point of keeping this preprocessor around.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] [COMMITTERS] pgsql: Improve the manual's discussion of partitioning.

2007-12-02 Thread ITAGAKI Takahiro
[EMAIL PROTECTED] (Tom Lane) wrote:

 Log Message:
 ---
 Improve the manual's discussion of partitioning.  Recommend using a
 trigger instead of a rule to redirect insertions, use NEW.* notation
 where appropriate, some other updates and adjustments.  David Fetter
 and Tom Lane

I have a fix (1) and two comments (2 and 3) on the documentation.

1. IF (logdate) should be IF (NEW.logdate) in the trigger function.
 INSERT fails without NEW. before logdate.
   =# INSERT INTO measurement VALUES(1, DATE '2007-12-03', 0, 0);
   ERROR:  column logdate does not exist

2. What is the purpose of "2. Next we create one partition ..."?
 We will recreate the tables in "3. We must add non-overlapping table
 constraints". Is step 2 useless? When users copy-and-paste
 the code in the topic, they run into errors at step 3.

3. Is it worth reversing the order of checking dates in the trigger function?
 I think the newer partitions are more often inserted into in this case.
 If we check the newest partition first, we can skip the remaining checks.
 This is one of the advantages of by-trigger partitioning over by-rule.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] [COMMITTERS] pgsql: Improve the manual's discussion of partitioning.

2007-12-02 Thread Tom Lane
ITAGAKI Takahiro [EMAIL PROTECTED] writes:
 I have a fix (1) and two comments (2 and 3) in the documentation.

 1. IF (logdate) should be IF (NEW.logdate) in the trigger function.
  INSERT fails without NEW. before logdate.

How embarrassing ... I tested the other parts of the example, but not that.

 2. What is the purpose of "2. Next we create one partition ..."?
  We will recreate the tables in "3. We must add non-overlapping table
  constraints". Is step 2 useless? When users copy-and-paste
  the code in the topic, they run into errors at step 3.

It's trying to describe things a step at a time.  If we collapsed these
two parts together I think it might be too complex.  We could change
the wording a bit, or change the step-3 stuff to ALTER TABLE ADD
CONSTRAINT.

 3. Is it worth reversing the order of checking dates in the trigger function?
  I think the newer partitions are more often inserted into in this case.

That might be good in a real case, but I think it would just complicate
the example ...

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Time to update list of contributors

2007-12-02 Thread FAST PostgreSQL
   Josh Berkus wrote:

 Arul Shaji
  

   Sydney, Australia.

   Rgds,
   Arul Shaji