Re: [HACKERS] deadlock while doing VACUUM and DROP

2008-05-16 Thread Gregory Stark
Pavan Deolasee <[EMAIL PROTECTED]> writes:

> Alternatively, we can just acquire AccessExclusiveLock on the main relation
> before proceeding with the recursive deletion. That would solve this case,
> but maybe there are other similar deadlocks waiting to happen.

Surely we should be locking the relation before even doing the dependency scan
or else someone else can come along and add more dependencies after we've
started the scan?

> Also I am not sure if the issue is big enough to demand the change.

I think it is; effectively, what we have now is that your DDL could fail
randomly for reasons that are out of your control :(

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's 24x7 Postgres support!

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] deadlock while doing VACUUM and DROP

2008-05-16 Thread Pavan Deolasee
On Fri, May 16, 2008 at 1:17 PM, Gregory Stark <[EMAIL PROTECTED]> wrote:


> Surely we should be locking the relation before even doing the dependency scan
> or else someone else can come along and add more dependencies after we've
> started the scan?


Yeah, that's indeed possible. I could reproduce it in the following way:

Session 1:

- CREATE TABLE test (a int);
- Attach the session to gdb
- Set a breakpoint at dependency.c:727 (just before the doDeletion() call)
- DROP TABLE test;

Session 2:
- CREATE INDEX testindx ON test(a);

The CREATE INDEX in session 2 succeeds. But DROP TABLE at this point
has already scanned all the dependencies and fails to recognize the
newly added dependency. As a result, the table gets dropped but the
index remains.

Session 1:
- continue from the breakpoint
- DROP TABLE succeeds.
- But the index remains

postgres=# SELECT relname, relfilenode from pg_class WHERE relname
like '%test%';
  relname  | relfilenode
-----------+-------------
 testindx  |       16391
(1 row)


You can't even drop the index now.

postgres=# DROP INDEX testindx;
ERROR:  could not open relation with OID 16388
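
To make the dangling reference visible (a sketch; the OIDs are the ones
from the session above), note that the orphaned index still has a
pg_index row whose indrelid points at the dropped table's OID, which no
longer has a pg_class entry:

SELECT i.indexrelid::regclass AS index_name,
       i.indrelid             AS table_oid,
       EXISTS (SELECT 1 FROM pg_class c
               WHERE c.oid = i.indrelid) AS table_exists
FROM pg_index i
WHERE i.indexrelid = 'testindx'::regclass;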

If I remember correctly, we saw a similar bug report a few days
back. Maybe we now know the cause.

>> Also I am not sure if the issue is big enough to demand the change.

> I think it is; effectively, what we have now is that your DDL could fail
> randomly for reasons that are out of your control :(


Yeah. I think we'd better fix this, especially given the scenario above.


Thanks,
Pavan

-- 
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Arbitary file size limit in twophase.c

2008-05-16 Thread Heikki Linnakangas

Tom Lane wrote:

> Heikki Linnakangas <[EMAIL PROTECTED]> writes:
>> If we're going to check for file length, we should definitely check the
>> file length when we write it, so that we fail at PREPARE time, and not
>> at COMMIT time.


> I think this is mere self-delusion, unfortunately.  You can never be
> certain at prepare time that a large alloc will succeed sometime later
> in a different process.

> Gavin's complaint is essentially that a randomly chosen hard limit is
> bad, and I agree with that.  Choosing a larger hard limit doesn't make
> it less random.

> It might be worth checking at prepare that the file size doesn't exceed
> MaxAllocSize, but any smaller limit strikes me as (a) unnecessarily
> restrictive and (b) not actually creating any useful guarantee.


Hmm, I guess you're right.

Patch attached. I can't commit it myself right now, but will do so as 
soon as I can, unless there's objections.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
Index: src/backend/access/transam/twophase.c
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/backend/access/transam/twophase.c,v
retrieving revision 1.42
diff -c -r1.42 twophase.c
*** src/backend/access/transam/twophase.c	12 May 2008 00:00:45 -	1.42
--- src/backend/access/transam/twophase.c	16 May 2008 09:56:56 -
***
*** 56,61 
--- 56,62 
  #include "storage/procarray.h"
  #include "storage/smgr.h"
  #include "utils/builtins.h"
+ #include "utils/memutils.h"
  
  
  /*
***
*** 866,871 
--- 867,881 
  	hdr->total_len = records.total_len + sizeof(pg_crc32);
  
  	/*
+ 	 * If the file size exceeds MaxAllocSize, we won't be able to read it in
+ 	 * ReadTwoPhaseFile. Check for that now, rather than fail at commit time.
+ 	 */
+ 	if (hdr->total_len > MaxAllocSize)
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+ 				 errmsg("two-phase state file maximum length exceeed")));
+ 
+ 	/*
  	 * Create the 2PC state file.
  	 *
  	 * Note: because we use BasicOpenFile(), we are responsible for ensuring
***
*** 1044,1051 
  	}
  
  	/*
! 	 * Check file length.  We can determine a lower bound pretty easily. We
! 	 * set an upper bound mainly to avoid palloc() failure on a corrupt file.
  	 */
  	if (fstat(fd, stat))
  	{
--- 1054,1060 
  	}
  
  	/*
! 	 * Check file length.  We can determine a lower bound pretty easily.
  	 */
  	if (fstat(fd, stat))
  	{
***
*** 1059,1066 
  
  	if (stat.st_size < (MAXALIGN(sizeof(TwoPhaseFileHeader)) +
  		MAXALIGN(sizeof(TwoPhaseRecordOnDisk)) +
! 		sizeof(pg_crc32)) ||
! 		stat.st_size > 10000000)
  	{
  		close(fd);
  		return NULL;
--- 1068,1074 
  
  	if (stat.st_size < (MAXALIGN(sizeof(TwoPhaseFileHeader)) +
  		MAXALIGN(sizeof(TwoPhaseRecordOnDisk)) +
! 		sizeof(pg_crc32)))
  	{
  		close(fd);
  		return NULL;

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GSoC08]some detail plan of improving hash index

2008-05-16 Thread Kenneth Marshall
Hi Xiao Meng,

I am glad that you are making some progress. I have added a
couple of comments below. Your phased approach is a good way
to get it in a position for testing. I had a very basic test
for creation time, query time for a simple lookup, and index
size that I would like to re-run when you have a proto-type
working.

Regards,
Ken

On Fri, May 16, 2008 at 10:42:05AM +0800, Xiao Meng wrote:
> Hi, hackers.
> 
> I'm reading the source code of hash and reviewing Neil's old patch for
> improving the hash index.
> Here is a somewhat detailed plan. I'm trying to adjust Neil's patch to the
> current version of PostgreSQL first. I'm not quite familiar with the code
> yet, so please comment.
> 
> * Phase 1. Just store the hash value instead of the hash keys
> 
> First, define a macro to make it optional.
 
Good.

> Second, add a new function _hash_form_item to construct an IndexTuple with
> the hash code, replacing the index_form_tuple used in the hash access
> method. It seems easy since we don't need to deal with TOAST.
> 
> Third, modify _hash_checkqual. We can first compare the hash value; if it's
> the same, we compare the real key value.
I think the changes to the system catalog cause this to happen
automatically for an access method with the re-check flag set. You
just need to return all of the tuples that satisfy the hash1 == hash2
criteria and the system will check them against the heap. This will
need to be done for support of a unique index, but that should wait
until we have demonstrated the performance of the new approach.

> Also, HashScanOpaqueData gains an element hashso_sk_hash to hold the scan
> key's hash value, to support the scan functions.
> 
> Finally, it seems the system catalog pg_amop also needs to be modified.
> In Neil's patch, he set amopreqcheck to True.
> The documentation says this means an index hit must be rechecked. But I'm
> not so clear on that. Does it just mean we need to recheck the value when
> we use _hash_checkqual?

This means that the system will perform the re-check for you so you
do not have to access the heap to check for yourself.
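
For reference, here is a sketch of how to inspect that flag, assuming
the 8.3 catalog layout, where pg_amop still carries amopreqcheck:

-- amopreqcheck = true tells the executor to recheck each index hit
-- against the heap, so the hash AM may return possibly-false matches.
SELECT ao.amopopr::regoperator AS operator, ao.amopreqcheck
FROM pg_amop ao
JOIN pg_opfamily opf ON opf.oid = ao.amopfamily
JOIN pg_am am ON am.oid = opf.opfmethod
WHERE am.amname = 'hash';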

 
 
> * Phase 2. Sort by hash value when inserting into a bucket, and use binary
> search when scanning
> Add a function _hash_binsearch to support binary search within a bucket;
> it is involved in all functions that search, insert, and delete.

I would wait on this piece, or at least make it a separate option so
we can test whether or not the overhead is a worthwhile trade-off
performance-wise. If we can make a smaller bucket size work, then for
a bucket size of a cacheline or two just reading the entire bucket and
re-writing it should be faster. It may be of value for buckets with many
items with the same hash value.

 
> * Phase 3. When it's necessary, store the real value.
> When we insert a new index tuple, for example tp, into a bucket, we can
> check whether the same hash code is already present.
> If there's already an index tuple with that hash code, we store both the
> hash code and the real key of tp.
I would always store the hash code and not the value. One of the big wins
is the reduction in index size to improve the ability to index very large
items and tables efficiently. The btree index already handles the case of
storing the actual value in the index. Since a hash code is a non-unique
mapping, you will always need to check the value in the heap. So let the
system do that and then the index does not need to carry that overhead.

> Alternatively, we could add a flag indicating whether the tuple stores the
> real value or just the hash code. It seems a little complex.
 
See above.

> Phase 1 seems extremely easy. I'm trying to do it first.
> Additionally, I need a benchmark to test the performance. There are some
> tools listed at http://wiki.postgresql.org/wiki/Performances_QA_testing .
> Any advice?
 
> -- 
> Have a good day;-)
> Best Regards,
> Xiao Meng
> 
> Data and Knowledge Engineering Research Center
> Harbin Institute of Technology, China
> Gtalk: [EMAIL PROTECTED]
> MSN: [EMAIL PROTECTED]
> http://xiaomeng.yo2.cn

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] deadlock while doing VACUUM and DROP

2008-05-16 Thread Alvaro Herrera
Pavan Deolasee wrote:

>>> Also I am not sure if the issue is big enough to demand the change.
 
>> I think it is; effectively, what we have now is that your DDL could fail
>> randomly for reasons that are out of your control :(
 
> Yeah. I think we'd better fix this, especially given the scenario above.

The pg_shdepend code grabs a lock on the object being
dropped, which is also grabbed by anyone who wants to add a dependency
on the object.  Perhaps the pg_depend code should do the same.

I don't think this closes the original report though, unless we ensure
that the lock taken by vacuum conflicts with that one.

-- 
Alvaro Herrera                          http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Arbitary file size limit in twophase.c

2008-05-16 Thread Tom Lane
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> It might be worth checking at prepare that the file size doesn't exceed
>> MaxAllocSize, but any smaller limit strikes me as (a) unnecessarily
>> restrictive and (b) not actually creating any useful guarantee.

> Patch attached. I can't commit it myself right now, but will do so as
> soon as I can, unless there's objections.

Two bugs: exceeed -> exceeded, please; and on the read side, you
should still have an upper-bound check, but it should be MaxAllocSize.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] deadlock while doing VACUUM and DROP

2008-05-16 Thread Tom Lane
Gregory Stark <[EMAIL PROTECTED]> writes:
> Pavan Deolasee <[EMAIL PROTECTED]> writes:
>> Alternatively, we can just acquire AccessExclusiveLock on the main relation
>> before proceeding with the recursive deletion. That would solve this case,
>> but maybe there are other similar deadlocks waiting to happen.

> Surely we should be locking the relation before even doing the dependency scan

Yeah.  I think this is just another manifestation of the problem I was
noodling about a few days ago:
http://archives.postgresql.org/pgsql-hackers/2008-05/msg00301.php

As I said then, I don't want to think about it until after commitfest.
I foresee an invasive and not sanely back-patchable patch.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ecpg localization

2008-05-16 Thread Peter Eisentraut
On Saturday, 10 May 2008, Euler Taveira de Oliveira wrote:
> This is a second try. It fixes some issues pointed out by Peter. It's a
> little fatter 'cause I worked on almost all of the strings. I attempted to
> mimic the PostgreSQL style, but I think those strings need more work,
> as I pointed out in the first e-mail.

Committed.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ecpg localization

2008-05-16 Thread Peter Eisentraut
On Sunday, 11 May 2008, Euler Taveira de Oliveira wrote:
> I forgot to say that this patch doesn't add NLS support to ecpg
> files automagically. If you guys think that it's a Good Thing to do,
> we need to hack preproc.y to hardcode the locale stuff; if you
> decide that it isn't necessary, we need to document that NLS support
> can be achieved by using the locale stuff. Comments?

I don't understand what you mean here.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Arbitary file size limit in twophase.c

2008-05-16 Thread Heikki Linnakangas

Tom Lane wrote:
> Heikki Linnakangas <[EMAIL PROTECTED]> writes:
>> Tom Lane wrote:
>>> It might be worth checking at prepare that the file size doesn't exceed
>>> MaxAllocSize, but any smaller limit strikes me as (a) unnecessarily
>>> restrictive and (b) not actually creating any useful guarantee.

>> Patch attached. I can't commit it myself right now, but will do so as
>> soon as I can, unless there's objections.

> Two bugs: exceeed -> exceeded, please;


Thanks.


> and on the read side, you
> should still have an upper-bound check, but it should be MaxAllocSize.


That seems like a highly unlikely failure scenario, where a two-phase
state file is corrupted in a way that makes it larger than 1GB. The
check doesn't cost anything either, though, so I'll just put it back as
you suggested.


Updated patch attached. I think it's OK now, but I'll air it as a
patch before committing, since I got it wrong before...


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

Re: [HACKERS] ecpg localization

2008-05-16 Thread Euler Taveira de Oliveira

Peter Eisentraut wrote:
>> I forgot to say that this patch doesn't add NLS support to ecpg
>> files automagically. If you guys think that it's a Good Thing to do,
>> we need to hack preproc.y to hardcode the locale stuff; if you
>> decide that it isn't necessary, we need to document that NLS support
>> can be achieved by using the locale stuff. Comments?

> I don't understand what you mean here.


I mean that you need to put locale.h and setlocale(LC_MESSAGES, "") in
the .pgc file so that you get localized messages from the ecpg program.


--
  Euler Taveira de Oliveira
  http://www.timbira.com/


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GSoC08]some detail plan of improving hash index

2008-05-16 Thread Josh Berkus
Xiao,

> Phase 1 seems extremely easy. I'm trying to do it first.
> Additionally, I need a benchmark to test the performance. There are some
> tools listed at http://wiki.postgresql.org/wiki/Performances_QA_testing .
> Any advice?

For a simple test, pgbench is actually going to be pretty good for hash 
indexes, since it's mostly primary-key access.  You also might want to write 
your own unit tests using pgunittest, because you want to test the 
following:

bulk load, both COPY and INSERT
single-row updates, inserts and deletes
batch update by key
batch update by other index
batch delete by key
batch delete by other index
concurrent index updates (64 connections insert/deleting concurrently)

You can compare all of the above against b-tree and unindexed columns.
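
A minimal sketch of one such comparison (the table, column, and data
here are illustrative, not a prescribed benchmark):

-- Index the same column both ways and compare a simple equality probe.
CREATE TABLE t (k text);
INSERT INTO t SELECT md5(g::text) FROM generate_series(1, 1000000) g;
CREATE INDEX t_hash  ON t USING hash (k);
CREATE INDEX t_btree ON t USING btree (k);
EXPLAIN ANALYZE SELECT * FROM t WHERE k = md5('42');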

For a hard-core benchmark, I'd try EAStress (SpecJAppserver Lite).

-- 
--Josh

Josh Berkus
PostgreSQL @ Sun
San Francisco

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [PATCHES] [HACKERS] TRUNCATE TABLE with IDENTITY

2008-05-16 Thread Tom Lane
Zoltan Boszormenyi <[EMAIL PROTECTED]> writes:
> Attached patch implements the extension found in the current SQL200n draft,
> implementing a stored start value and supporting ALTER SEQUENCE seq RESTART;

> Updated patch implements TRUNCATE ... RESTART IDENTITY,
> which restarts all owned sequences for the truncated table(s).

Applied with corrections.  Most notably, since ALTER SEQUENCE RESTART
is nontransactional like most other ALTER SEQUENCE operations, I
rearranged things to try to ensure that foreseeable failures like
deadlock and lack of permissions would be detected before TRUNCATE
starts to issue any RESTART commands.

One interesting point here is that the patch as submitted allowed
ALTER SEQUENCE MINVALUE/MAXVALUE to be used to set a sequence range
that the original START value was outside of.  This would result in
a failure at ALTER SEQUENCE RESTART.  Since, as stated above, we
really don't want that happening during TRUNCATE, I adjusted the
patch to make such an ALTER SEQUENCE fail.  This is at least potentially
an incompatible change: command sequences that used to be legal could
now fail.  I doubt it's very likely to bite anyone in practice, though.
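
To spell out the incompatibility, a sketch of a command sequence that
used to be accepted and now fails (names are illustrative):

CREATE SEQUENCE s START WITH 1;             -- stored start_value is 1
ALTER SEQUENCE s MINVALUE 10 MAXVALUE 100;  -- previously accepted, leaving
                                            -- RESTART to fail later; now
                                            -- rejected outright because
                                            -- start_value 1 < MINVALUE 10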

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [PATCHES] [HACKERS] TRUNCATE TABLE with IDENTITY

2008-05-16 Thread Tom Lane
I wrote:
> One interesting point here is that the patch as submitted allowed
> ALTER SEQUENCE MINVALUE/MAXVALUE to be used to set a sequence range
> that the original START value was outside of.  This would result in
> a failure at ALTER SEQUENCE RESTART.  Since, as stated above, we
> really don't want that happening during TRUNCATE, I adjusted the
> patch to make such an ALTER SEQUENCE fail.  This is at least potentially
> an incompatible change: command sequences that used to be legal could
> now fail.  I doubt it's very likely to bite anyone in practice, though.

It occurs to me that we could define
ALTER SEQUENCE s START WITH x
(which is syntactically legal, but rejected by sequence.c at the moment)
as updating the stored start_value and thus affecting what future
ALTER SEQUENCE RESTART commands will do.  Right now there is simply
no way to change start_value after sequence creation, which is pretty
strange considering we let you change every other sequence parameter.
It would also provide a way out for anyone who does want to change the
minval/maxval as sketched above.
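
A sketch of the proposed semantics (hypothetical at this point, not
committed behavior):

ALTER SEQUENCE s START WITH 50;  -- update the stored start_value only
ALTER SEQUENCE s RESTART;        -- subsequent RESTARTs begin at 50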

I think this is about a ten-line change as far as the code goes...
any objections?

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [PATCHES] [HACKERS] TRUNCATE TABLE with IDENTITY

2008-05-16 Thread Neil Conway
On Fri, 2008-05-16 at 19:41 -0400, Tom Lane wrote:
> Applied with corrections.  Most notably, since ALTER SEQUENCE RESTART
> is nontransactional like most other ALTER SEQUENCE operations, I
> rearranged things to try to ensure that foreseeable failures like
> deadlock and lack of permissions would be detected before TRUNCATE
> starts to issue any RESTART commands.

Ugh. The fact that the RESTART IDENTITY part of TRUNCATE is
non-transactional is a pretty unsightly wart. I would also quarrel with
your addition to the docs that suggests this is only an issue in
practice if TRUNCATE RESTART IDENTITY is used in a transaction block:
unpredictable failures (such as OOM or query cancellation) can certainly
occur in practice, and would be very disruptive (e.g. if the sequence
values are stored into a column with a UNIQUE constraint, it would break
all inserting transactions until the DBA intervenes).
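
To illustrate the failure mode, a sketch (table and sequence names are
illustrative):

BEGIN;
TRUNCATE tab RESTART IDENTITY;  -- the owned sequence is reset immediately,
                                -- outside transactional control
ROLLBACK;                       -- the table's rows come back ...
SELECT nextval('tab_id_seq');   -- ... but the sequence stays reset, so it
                                -- can repeat values already present in tab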

I wonder if it would be possible to make the sequence operations
performed by TRUNCATE transactional: while the TRUNCATE remains
uncommitted, it should be okay to block concurrent access to the
sequence.

-Neil



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [PATCHES] [HACKERS] TRUNCATE TABLE with IDENTITY

2008-05-16 Thread Tom Lane
Neil Conway <[EMAIL PROTECTED]> writes:
> Ugh. The fact that the RESTART IDENTITY part of TRUNCATE is
> non-transactional is a pretty unsightly wart.

Actually, I agree.  Shall we just revert that feature?  The ALTER
SEQUENCE part of this patch is clean and useful, but I'm less than
enamored of the TRUNCATE part.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Problems with CVS HEAD compile

2008-05-16 Thread Bruce Momjian
Tom Lane wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
>> Since ecpg localization was added today, I am unable to compile
>> src/interfaces/ecpg.  I get:

>> $ gmake -w clean
>> gmake: Entering directory
>> `/usr/var/local/src/gen/pgsql/CURRENT/pgsql/src/interfaces/ecpg'
>> rm -f
>> usage: rm [-dfiPRrW] file ...
>> gmake: *** [clean-po] Error 1

> Huh, seems you have a remarkably picky version of rm.  None of the
> machines I use seem to have a problem with an empty file list.

> Of course the underlying issue is that ecpg hasn't actually got any
> translations yet --- but that's unlikely to change for awhile.
> Do we need to work around this?

You are right;  my 'rm' is picky:

$ rm
usage: rm [-dfiPRrW] file ...

I can remove the file as part of my CVS update script.

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Problems with CVS HEAD compile

2008-05-16 Thread Euler Taveira de Oliveira

Tom Lane wrote:


> Huh, seems you have a remarkably picky version of rm.  None of the
> machines I use seem to have a problem with an empty file list.


I don't see this problem here either.


> Of course the underlying issue is that ecpg hasn't actually got any
> translations yet --- but that's unlikely to change for awhile.
> Do we need to work around this?

BTW, I sent a test-only pt_BR translation with the patch. Maybe
we could commit it just to have one language there.



--
  Euler Taveira de Oliveira
  http://www.timbira.com/

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers