Re: [HACKERS] serial arrays?

2008-03-22 Thread Shane Ambler

Joshua D. Drake wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Fri, 21 Mar 2008 12:55:26 -0400
Tom Lane [EMAIL PROTECTED] wrote:


regression=# create table foo (f1 serial[11]);
NOTICE:  CREATE TABLE will create implicit sequence foo_f1_seq for
serial column foo.f1 CREATE TABLE
regression=# \d foo
 Table public.foo
 Column |  Type   |Modifiers 
+-+--

 f1 | integer | not null default nextval('foo_f1_seq'::regclass)


Should we throw an error for this?  If not, what behavior would be
sane?


Interesting? It would be to create 11 sequences, one to update each
element of the array. 


Would you increment one element at a time? The first element on the 
first nextval, the second element on the next... or would it increment 
the first till it was 10, then the second till it was 10... Or would you 
increment each element by one on each nextval, so that each element is 
the same number (i.e. they share one sequence)?


I would think the most elegant solution would be to create an 
array_sequence type, which would open up a great multitude of rule 
definitions for specifying how each element is incremented. Most likely 
a simple syntax that ends up with a complex list of rules saved for the 
sequence, which could be hard to decipher later or by the next dba to 
come along.


As much as I can see at least one use for this (think number-plate 
sequences, 0-36 for each element), and it holds some curiosity as a 
challenging project, I do think this would be better handled by functions 
designed specifically for the app that wants them.



Hmmm, it could be an intriguing feature, but I'm not sure it would get 
much use.


CREATE SEQUENCE_ARRAY my_silly_seq AS
  integer[11] ON INCREMENT APPLY FROM ELEMENT 0,
  ELEMENT 0 FROM 0 TO 36 ON LIMIT INCREMENT ELEMENT 1 AND RESET TO 0,
  ELEMENT 1 FROM 0 TO 9 ON LIMIT INCREMENT ELEMENT 2 AND RESET TO 0,
  ...

Could there be a char[] array that can increment from 0-9 then a-z before 
rolling back to 0?
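A toy Python model of the idea (hypothetical class and names, not proposed syntax): element 0 advances on every nextval; when an element exhausts its alphabet it resets and carries into the next element, odometer-style, which covers both the 0-36 number-plate case and the 0-9-then-a-z char[] case above.

```python
# Toy model of an "array sequence" (illustrative only, not real syntax):
# element 0 advances on every nextval; on exhausting its alphabet it
# resets and carries into the next element, like an odometer.
class ArraySequence:
    def __init__(self, alphabets):
        # alphabets[i] is the ordered list of symbols element i cycles through
        self.alphabets = alphabets
        self.pos = [0] * len(alphabets)

    def nextval(self):
        value = [a[p] for a, p in zip(self.alphabets, self.pos)]
        for i in range(len(self.pos)):
            self.pos[i] += 1
            if self.pos[i] < len(self.alphabets[i]):
                break
            self.pos[i] = 0  # "ON LIMIT ... RESET TO 0", then carry to element i+1
        return value

# 0-9 then a-z before rolling back to 0, as in the char[] question
symbols = [str(d) for d in range(10)] + [chr(c) for c in range(ord('a'), ord('z') + 1)]
plate = ArraySequence([symbols, symbols])
```

The first 36 calls cycle element 0 through '0'..'z' while element 1 stays at '0'; the 37th call carries into element 1.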


Guess I got too much time on my hands... I'll go find something better 
to do now. ;-)



 Sane? None. We should throw an error.

+1 for the error



--

Shane Ambler
pgSQL (at) Sheeky (dot) Biz

Get Sheeky @ http://Sheeky.Biz

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Sort Refinement

2008-03-22 Thread Simon Riggs
On Thu, 2008-03-20 at 22:35 +, Gregory Stark wrote:
 Simon Riggs [EMAIL PROTECTED] writes:
 
  If we assume we use heap sort, then if we *know* that the data is
  presorted on (a) then we should be able to emit tuples directly once the
  value of (a) changes and keep emitting them until the heap is empty,
  since they will exit the heap in (a,b) order.
 
 Actually, I would think the way to do this would be to do a quicksort if you
 find you've accumulated all the records in a subgroup in memory. One easy way
 to do it would be to have nodeSort build a new tuplesort for each subgroup if
 it has a level break key parameter set (memories of RPG III are coming
 bubbling to the surface).

Yes, it's essentially the same thing as running a series of otherwise
unconnected sorts. However, doing that literally seems to introduce its
own overheads.

We needn't fix this to either a heapsort or a quicksort. We can let the
existing code decide which and let the mode change naturally from one to
the other as is needed.
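The refinement being discussed can be sketched in Python (illustrative names; a real implementation would live in tuplesort): since the input is already ordered on a, a full (a,b) ordering only requires sorting each run of equal a values on b and emitting it as soon as a changes.

```python
# Sketch of the subgroup-sort refinement: input is presorted on a_key,
# so we sort each run of equal a_key values on b_key and emit it as
# soon as a_key changes, never re-sorting the whole input.
from itertools import groupby

def sort_presorted(rows, a_key, b_key):
    out = []
    for _, run in groupby(rows, key=a_key):
        # each run is one of the "otherwise unconnected sorts"; a real
        # implementation could switch between heapsort and quicksort per run
        out.extend(sorted(run, key=b_key))
    return out

rows = [(1, 'c'), (1, 'a'), (2, 'b'), (2, 'a'), (3, 'z')]  # sorted on a only
```

The result matches a full (a,b) sort while each sort operates on only one subgroup at a time.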

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com 

  PostgreSQL UK 2008 Conference: http://www.postgresql.org.uk




Re: [HACKERS] Sort Refinement

2008-03-22 Thread Simon Riggs
On Thu, 2008-03-20 at 21:34 +, Sam Mason wrote:
 On Thu, Mar 20, 2008 at 05:17:22PM +, Simon Riggs wrote:
  Currently, our sort algorithm assumes that its input is unsorted. So if
  your data is sorted on (a) and you would like it to be sorted on (a,b)
  then we need to perform the full sort of (a,b).
  
  For small sorts this doesn't matter much. For larger sorts the heap sort
  algorithm will typically result in just a single run being written to
  disk which must then be read back in. Number of I/Os required is twice
  the total volume of data to be sorted.
  
  If we assume we use heap sort, then if we *know* that the data is
  presorted on (a) then we should be able to emit tuples directly once the
  value of (a) changes and keep emitting them until the heap is empty,
  since they will exit the heap in (a,b) order.
 
 We also have stats to help decide when this will be a win.  For example
 if a has a small range (e.g. a boolean) and b has a large range
 (e.g. some sequence) then this probably isn't going to be a win and
 you're better off using the existing infrastructure.  If it's the other
 way around then this is going to be a big win.

Yep, sounds sensible.

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com 

  PostgreSQL UK 2008 Conference: http://www.postgresql.org.uk




[HACKERS] Building PostgreSQL 8.3.1 on OpenVMS 8.3 AXP

2008-03-22 Thread Mihai Criveti
I am trying to build PostgreSQL 8.3.1 on OpenVMS 8.3 Alpha, patched to
UPDATE v6.0 ECO:
DEC AXPVMS VMS83A_UPDATE V6.0Patch   Install Val 14-MAR-2008

Using the HP C compilers:
HP C Version 7.3 for OpenVMS Alpha Systems
HP C++ Version V7.3 for OpenVMS Alpha Systems

And the GNU (GNV) POSIX userland:
DEC AXPVMS GNV V2.1-2Full LP Install Val 22-MAR-2008

$ gcc --version
GNV Dec 10 2007 16:40:09
HP C V7.3-009 on OpenVMS Alpha V8.3

$ make --version
GNU Make version 3.78.1, by Richard Stallman and Roland McGrath.
Built for VMS


Anyway, I've set up the POSIX environment and fired up ./configure (I've
tried various templates for starters, since there isn't a default one for
OpenVMS):
What I've seen is that GNU autotools will create an empty conftest.c file,
and attempt compilation. While an empty file *does* compile to a.out, it
won't return 0, but 179.
bash$ gcc conftest.c


^
%CC-W-EMPTYFILE, Source file does not contain any declarations.
at line number 1 in file SYS$SYSROOT:[SYSMGR.VIM.GNU.ALPHA.POSTGRESQL-8_3_1
]CONF
TEST.C;1
%LINK-W-WRNERS, compilation warnings
in module CONFTEST file SYS$SYSROOT:[
SYSMGR.VIM.GNU.ALPHA.POSTGRESQL-8_3
_1]CONFTEST.O;2
%LINK-W-USRTFR, image SYS$SYSROOT:[SYSMGR.VIM.GNU.ALPHA.POSTGRESQL-8_3_1
]A.OUT;3
 has no user transfer address

bash$ ./a.out
%DCL-E-NOTFR, no transfer address
bash$ echo $?
154

Of course, since configure will rewrite that file a *lot* of times, a simple
echo '#include <stdio.h> int main() { return 0; }' > conftest.c won't fix
the issue.

I've had similar issues / results on the z/OS / AS/390 platform, using c89.

Any other way around this than digging deep into autotools? Also, are there
any previous OpenVMS ports of PostgreSQL I could make use of?


BUILD LOGS:
==

bash$ ./configure --with-template=AIX
%DCL-W-PARMDEL, invalid parameter delimiter - check use of special
characters
 \.SH\
%DCL-W-IVVERB, unrecognized command verb - check validity and spelling
 \HOSTINFO\
checking build system type... alpha-dec-vms
checking host system type... alpha-dec-vms
checking which template to use... AIX
checking whether to build with 64-bit integer date/time support... no
checking whether NLS is wanted... no
checking for default port number... 5432
checking for gcc... gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works...
%DCL-E-NOTFR, no transfer address
configure: error: cannot run C compiled programs.
If you meant to cross compile, use `--host'.
See `config.log' for more details.



$ TYPE config.log
PATH: /bin
PATH: /gnu/bin
PATH: /GNU/BIN
PATH: /usr/bin
PATH: /usr/local/bin
PATH: .
configure:1414: checking build system type
configure:1432: result: alpha-dec-vms
configure:1440: checking host system type
configure:1454: result: alpha-dec-vms
configure:1464: checking which template to use
configure:1564: result: AIX
configure:1706: checking whether to build with 64-bit integer date/time
support
configure:1738: result: no
configure:1745: checking whether NLS is wanted
configure:1780: result: no
configure:1788: checking for default port number
configure:1818: result: 5432
configure:2197: checking for gcc
configure:2197: found /gnu/bin/gcc
configure:2197: result: gcc
configure:2208: checking for C compiler version
configure:2214: gcc --version </dev/null >&5
GNV Dec 10 2007 16:40:09
HP C V7.3-009 on OpenVMS Alpha V8.3
configure:2214: $? = 0
configure:2219: gcc -v </dev/null >&5
? cc: No support for switch -v
%LINK-F-NOMODS, no input modules specified (or found)
configure:2219: $? = 2
configure:2224: gcc -V </dev/null >&5
GNV Dec 10 2007 16:40:09
HP C V7.3-009 on OpenVMS Alpha V8.3
configure:2224: $? = 0
configure:2246: checking for C compiler default output file name
configure:2295: gccconftest.c  5


--
Criveti Mihai
http://unixsadm.blogspot.com/ - UNIX, OpenVMS and Windows System
Administration, Digital Forensics, High Performance Computing, Clustering
and Distributed Systems.
In girum imus nocte, ecce et consumimur igni.


Re: [HACKERS] Idea for minor tstore optimization

2008-03-22 Thread Bruce Momjian

Added to TODO:

* Avoid some tuple copying in sort routines

  http://archives.postgresql.org/pgsql-hackers/2008-02/msg01206.php


---

Tom Lane wrote:
 Neil Conway [EMAIL PROTECTED] writes:
  I notice that several of the call sites of tuplestore_puttuple() start
  with arrays of datums and nulls, call heap_form_tuple(), and then switch
  into the tstore's context and call tuplestore_puttuple(), which
  deep-copies the HeapTuple into the tstore. ISTM it would be faster and
  simpler to provide a tuplestore_putvalues(), which just takes the datum
  + nulls arrays and avoids the additional copy.
 
 Seems reasonable.  Check whether tuplesort should offer the same, while
 you are at it.
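The saving can be modeled with a toy Python tuple store (names echo, but are not, the real C API): puttuple() deep-copies an already-formed tuple, so heap_form_tuple() plus puttuple() costs two materializations, while a putvalues() taking the datum/nulls arrays directly costs one.

```python
# Toy model of the proposed tuplestore_putvalues() optimization.
class ToyTupleStore:
    def __init__(self):
        self.tuples = []
        self.copies = 0  # number of tuple materializations performed

    def puttuple(self, formed):
        self.copies += 1  # deep copy into the store's own memory context
        self.tuples.append(list(formed))

    def putvalues(self, values, nulls):
        self.copies += 1  # single copy, built straight from the arrays
        self.tuples.append([None if n else v for v, n in zip(values, nulls)])

def form_tuple(store, values, nulls):
    store.copies += 1  # models heap_form_tuple() in the caller
    return [None if n else v for v, n in zip(values, nulls)]
```

Both paths store the same tuple; the putvalues() path skips the intermediate copy.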
 
   regards, tom lane
 
 ---(end of broadcast)---
 TIP 1: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly

-- 
  Bruce Momjian  [EMAIL PROTECTED]http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Reworking WAL locking

2008-03-22 Thread Bruce Momjian

Added to TODO:

* Improve WAL concurrency by increasing lock granularity

  http://archives.postgresql.org/pgsql-hackers/2008-02/msg00556.php


---

Simon Riggs wrote:
 
 Paul van den Bogaard (Sun) suggested to me that we could use more than
 two WAL locks to improve concurrency. I think it's possible to introduce
 such a scheme with some ease. All mods are within xlog.c.
 
 The scheme below requires an extra LWlock per WAL buffer.
 
 Locking within XLogInsert() would look like this:
 
 Calculate length of data to be inserted.
 Calculate initial CRC
 
 LWLockAcquire(WALInsertLock, LW_EXCLUSIVE)
 
 Reserve space to write into. 
 LSN = current Insert pointer
 Move pointer forward by length of data to be inserted, acquiring
 WALWriteLock if required to ensure space is available.
 
 LWLockAcquire(LSNGetWALPageLockId(LSN), LW_SHARED);
 
 Note that we don't lock every page, just the first one of the set we
 want, but we hold it until all page writes are complete.
 
 LWLockRelease(WALInsertLock);
 
 finish calculating CRC
 write xlog into reserved space
   
 LWLockRelease(LSNGetWALPageLockId(LSN));
 
 XLogWrite() will then try to get a conditional LW_EXCLUSIVE lock
 sequentially on each page it plans to write. It keeps going until it
 fails to get the lock, then writes. Callers of XLogWrite will never be
 able to pass a backend currently performing the wal buffer fill.
 
 We write a whole page at a time.
 
 Next time, we do a regular lock wait on the same page, so that we always
 get a page eventually.
 
 This requires us to get 2 locks for an XLogInsert rather than just one.
 However, the second lock is always acquired with zero wait time when the
 wal_buffers are sensibly sized. Overall this should reduce wait time for
 the WALInsertLock, since each actual filling of the WAL buffers will
 likely affect different cache lines and so the fills can very likely be
 performed in parallel.
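 The locking dance above can be modeled with Python threads (threads standing in for backends, Lock for LWLock, and the shared per-page lock simplified to an exclusive one): the insert lock covers only space reservation, and the page lock, acquired before the insert lock is released, covers the actual write.

```python
# Threaded sketch of the two-lock WAL insert scheme (illustrative only).
import threading

PAGE_SIZE = 4
wal = []
insert_lock = threading.Lock()
page_locks = [threading.Lock() for _ in range(8)]  # one lock per "page"

def xlog_insert(record):
    insert_lock.acquire()    # short critical section: reserve space only
    lsn = len(wal)
    wal.append(None)
    page_lock = page_locks[(lsn // PAGE_SIZE) % len(page_locks)]
    page_lock.acquire()      # taken before the insert lock is released
    insert_lock.release()
    wal[lsn] = record        # fill reserved space; others may reserve meanwhile
    page_lock.release()
    return lsn

threads = [threading.Thread(target=xlog_insert, args=(n,)) for n in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

No holder of a page lock ever blocks while writing, so the handoff cannot deadlock, and backends filling different pages proceed in parallel.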
 
 Sounds good to me.
 
 Any objections/comments before this can be tried out? 
 
 -- 
   Simon Riggs
   2ndQuadrant  http://www.2ndQuadrant.com 
 
 
 ---(end of broadcast)---
 TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match

-- 
  Bruce Momjian  [EMAIL PROTECTED]http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] pg_dump additional options for performance

2008-03-22 Thread Bruce Momjian

Added to TODO:

o Allow pre/data/post files when dumping a single object, for
  performance reasons

  http://archives.postgresql.org/pgsql-hackers/2008-02/msg00205.php


---

Simon Riggs wrote:
 pg_dump allows you to specify -s --schema-only, or -a --data-only.
 
 The -s option creates the table, as well as creating constraints and
 indexes. These objects need to be dropped prior to loading, if we are to
 follow the performance recommendations in the docs. But the only way to
 do that is to manually edit the script to produce a cut down script.
 
 So it would be good if we could dump objects in 3 groups
 1. all commands required to re-create table
 2. data
 3. all commands required to complete table after data load
 
 My proposal is to provide two additional modes:
 --schema-pre-load corresponding to (1) above
 --schema-post-load corresponding to (3) above
 
 This would then allow this sequence of commands 
 
 pg_dump --schema-pre-load
 pg_dump --data-only
 pg_dump --schema-post-load
 
 to be logically equivalent, but faster than
 
 pg_dump --schema-only
 pg_dump --data-only
 
 both forms of which are equivalent to just
 
 pg_dump
 
 
 [Assuming data isn't changing between invocations...]
 
 -- 
   Simon Riggs
   2ndQuadrant  http://www.2ndQuadrant.com 
 
 
 ---(end of broadcast)---
 TIP 5: don't forget to increase your free space map settings

-- 
  Bruce Momjian  [EMAIL PROTECTED]http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Page-at-a-time Locking Considerations

2008-03-22 Thread Bruce Momjian

With no concrete patch or performance numbers, this thread has been
removed from the patches queue.

---

Simon Riggs wrote:
 
 In heapgetpage() we hold the buffer locked while we look for visible
 tuples. That works well in most cases since the visibility check is fast
 if we have status bits set. If we don't have visibility bits set we have
 to do things like scan the snapshot and confirm things via clog lookups.
 All of that takes time and can lead to long buffer lock times, possibly
 across multiple I/Os in the very worst cases.
 
 This doesn't just happen for old transactions. Accessing very recent
 TransactionIds is prone to rare but long waits when we ExtendClog(). 
 
 Such problems are numerically rare, but the buffers with long lock times
 are also the ones that have concurrent or at least recent write
 operations on them. So all SeqScans have the potential to induce long
 wait times for write transactions, even if they are scans on 1 block
 tables. Tables with heavy write activity on them from multiple backends
 have their work spread across multiple blocks, so a SeqScan will hit
 this issue repeatedly as it encounters each current insertion point in a
 table and so greatly increases the chances of it occurring.
 
 It seems possible to just memcpy() the whole block away and then drop
 the lock quickly. That gives a consistent lock time in all cases and
 allows us to do the visibility checks in our own time. It might seem
 that we would end up copying irrelevant data, which is true. But the
 greatest cost is memory access time. If hardware memory pre-fetch cuts
 in we will find that the memory is retrieved en masse anyway; if it
 doesn't we will have to wait for each cache line. So the best case is
 actually an en masse retrieval of cache lines, in the common case where
 blocks are fairly full (vague cutoff is determined by exact mechanism of
 hardware/compiler induced memory prefetch).
 
 The copied block would be used only for visibility checks. The main
 buffer would retain its pin and we would pass references to the block
 through the executor as normal. So this would be a change completely
 isolated to heapgetpage().
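 The copy-aside idea reduces to a small sketch (Python stand-ins for the buffer lock and visibility check): hold the lock only for the duration of one memcpy(), then run the possibly slow, clog-touching visibility checks against the private copy with the lock released.

```python
# Sketch of copy-aside page scanning: constant, short lock hold time.
import threading

def heapgetpage_copyaside(shared_page, buffer_lock, tuple_is_visible):
    with buffer_lock:
        page_copy = list(shared_page)  # the memcpy(): one bulk copy under lock
    # visibility checks done "in our own time", no lock held
    return [tup for tup in page_copy if tuple_is_visible(tup)]

page = [('t1', True), ('t2', False), ('t3', True)]  # (tuple, committed?)
lock = threading.Lock()
visible = heapgetpage_copyaside(page, lock, lambda t: t[1])
```

Writers blocked on the buffer lock now wait one copy, not a chain of clog lookups.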
 
 Was the copy-aside method considered when we introduced page at a time
 mode? Any reasons to think it would be dangerous or infeasible? If not,
 I'll give it a bash and get some test results.
 
 -- 
   Simon Riggs
   2ndQuadrant  http://www.2ndQuadrant.com 
 
 
 ---(end of broadcast)---
 TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match

-- 
  Bruce Momjian  [EMAIL PROTECTED]http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Building PostgreSQL 8.3.1 on OpenVMS 8.3 AXP

2008-03-22 Thread Tom Lane
Mihai Criveti [EMAIL PROTECTED] writes:
 I am trying to build PostgreSQL 8.3.1 on OpenVMS 8.3 Alpha, patched to
 UPDATE v6.0 ECO:

 $ gcc --version
 GNV Dec 10 2007 16:40:09
 HP C V7.3-009 on OpenVMS Alpha V8.3

Hmmm ... any chance of using a real gcc, instead of HP's compiler doing
a poor job of counterfeiting it?  It's possible that specifying CC=cc
would help by avoiding that particular issue.  However ...

 What I've seen is that GNU autotools will create an empty conftest.c file,
 and attempt compilation. While an empty file *does* compile to a.out, it
 won't return 0, but 179.

An empty file doesn't run (or even compile) on most platforms, eg

$ touch foo.c
$ gcc foo.c
/usr/ccs/bin/ld: Unsatisfied symbols:
   main
collect2: ld returned 1 exit status

If the configure script really is building an empty .c file to test
with, then you've got some low-level tools problems you need to solve
before configure will do anything very useful.  It looks to me like
that first test program is built with

cat >conftest.$ac_ext <<_ACEOF
/* confdefs.h.  */
_ACEOF
cat confdefs.h >>conftest.$ac_ext
cat >>conftest.$ac_ext <<_ACEOF
/* end confdefs.h.  */

int
main ()
{

  ;
  return 0;
}
_ACEOF

It doesn't get much simpler than that :-(  Either cat doesn't work or
you've got some shell-level incompatibilities.

regards, tom lane



Re: [HACKERS] Idea for minor tstore optimization

2008-03-22 Thread Tom Lane
Bruce Momjian [EMAIL PROTECTED] writes:
 Added to TODO:
 * Avoid some tuple copying in sort routines
   http://archives.postgresql.org/pgsql-hackers/2008-02/msg01206.php

Actually ... isn't this done already?

http://archives.postgresql.org/pgsql-patches/2008-02/msg00176.php

regards, tom lane



Re: [HACKERS] Possible future performance improvement: sort updates/deletes by ctid

2008-03-22 Thread Bruce Momjian

Added to TODO:

* Sort large UPDATE/DELETEs so it is done in heap order

  http://archives.postgresql.org/pgsql-hackers/2008-01/msg01119.php


---

Tom Lane wrote:
 We've had a couple of discussions recently revolving around the
 inefficiency of using hashjoin/hashaggregation output to update a target
 table, because of the resulting very random access pattern.  I believe
 this same mechanism is underlying the slowness of Stephen Denne's
 alternate query described here:
 http://archives.postgresql.org/pgsql-performance/2008-01/msg00227.php
 
 I made up the attached doubtless-oversimplified test case to model what
 he was seeing.  It's cut down about 4x from the table size he describes,
 but the UPDATE still takes forever --- I gave up waiting after 2 hours,
 when it had deleted about a fifth of its hashjoin temp files, suggesting
 that the total runtime would be about 10 hours.
 
 A brute force idea for fixing this is to sort the intended update or
 delete operations of an UPDATE/DELETE command according to the target
 table's ctid, which is available for free anyway since the executor top
 level must have it to perform the operation.  I made up an even more
 brute force patch (also attached) that forces that to happen for every
 UPDATE or DELETE --- obviously we'd not want that for real, it's just
 for crude performance testing.  With that patch, I got the results
 
   QUERY PLAN  
  
 ---
  Sort  (cost=6075623.03..6085623.05 rows=408 width=618) (actual 
 time=2078726.637..3371944.124 rows=400 loops=1)
Sort Key: df.ctid
Sort Method:  external merge  Disk: 2478992kB
 ->  Hash Join  (cost=123330.50..1207292.72 rows=408 width=618) (actual 
 time=20186.510..721120.455 rows=400 loops=1)
   Hash Cond: (df.document_id = d.id)
   ->  Seq Scan on document_file df  (cost=0.00..373334.08 rows=408 
 width=614) (actual time=11.775..439993.807 rows=400 loops=1)
   ->  Hash  (cost=57702.00..57702.00 rows=4000200 width=8) (actual 
 time=19575.885..19575.885 rows=400 loops=1)
 ->  Seq Scan on document d  (cost=0.00..57702.00 rows=4000200 
 width=8) (actual time=0.039..14335.615 rows=400 loops=1)
  Total runtime: 3684037.097 ms
 
 or just over an hour runtime --- still not exactly speedy, but it
 certainly compares favorably to the estimated 10 hours for unsorted
 updates.
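 The win can be illustrated with a toy Python cost model (ctids as (block, offset) pairs; block switches standing in for random page fetches): applying updates in hash-join output order bounces between heap blocks, while the same ctids sorted into heap order visit each block once.

```python
# Toy cost model for ctid-sorted updates: count heap block switches.
def block_switches(ctids):
    switches, prev = 0, None
    for block, _offset in ctids:
        if block != prev:  # moving to a different heap block ~ random fetch
            switches += 1
            prev = block
    return switches

# hash output tends to interleave blocks; sorting restores heap order
hash_order = [(blk, off) for off in range(3) for blk in range(4)]
heap_order = sorted(hash_order)
```

With 4 blocks of 3 tuples each, the interleaved order pays 12 block switches against 4 for the sorted order; the gap grows with table size.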
 
 This is with default shared_buffers (32MB) and work_mem (1MB);
 a more aggressive work_mem would have meant fewer hash batches and fewer
 sort runs and hence better performance in both cases, but with the
 majority of the runtime going into the sort step here, I think that the
 sorted update would benefit much more.
 
 Nowhere near a workable patch of course, but seems like food for
 thought.
 
   regards, tom lane
 

Content-Description: bighash.sql

 drop table if exists document;
 drop table if exists document_file ;
 
 create table document (document_type_id int, id int primary key);
 create table document_file (document_type_id int, document_id int primary key,
filler char(600));
 
 insert into document_file select x,x,'z' from generate_series(1,400) x;
 insert into document select x,x from generate_series(1,400) x;
 
 analyze document_file;
 analyze document;
 
 set enable_mergejoin = false;
 
 explain analyze UPDATE ONLY document_file AS df SET document_type_id = 
 d.document_type_id FROM document AS d WHERE d.id = document_id;

Content-Description: ctid-sort.patch

 Index: src/backend/optimizer/prep/preptlist.c
 ===
 RCS file: /cvsroot/pgsql/src/backend/optimizer/prep/preptlist.c,v
 retrieving revision 1.88
 diff -c -r1.88 preptlist.c
 *** src/backend/optimizer/prep/preptlist.c1 Jan 2008 19:45:50 -   
 1.88
 --- src/backend/optimizer/prep/preptlist.c30 Jan 2008 03:06:30 -
 ***
 *** 32,37 
 --- 32,38 
    #include "optimizer/var.h"
    #include "parser/analyze.h"
    #include "parser/parsetree.h"
  + #include "parser/parse_clause.h"
    #include "parser/parse_coerce.h"
   
   
 ***
 *** 103,108 
 --- 104,120 
   tlist = list_copy(tlist);
   
   tlist = lappend(tlist, tle);
 + 
 + /*
 +  * Force the query result to be sorted by CTID, for better update
 +  * speed.  (Note: we expect parse->sortClause to be NIL here,
 +  * but this code will do no harm if it's not.)
 +  */
 + parse->sortClause = addTargetToSortList(NULL, tle,
 +

Re: [HACKERS] Idea for minor tstore optimization

2008-03-22 Thread Bruce Momjian
Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  Added to TODO:
  * Avoid some tuple copying in sort routines
http://archives.postgresql.org/pgsql-hackers/2008-02/msg01206.php
 
 Actually ... isn't this done already?
 
 http://archives.postgresql.org/pgsql-patches/2008-02/msg00176.php

Yea, removed because I thought you just did it.

-- 
  Bruce Momjian  [EMAIL PROTECTED]http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Idea for minor tstore optimization

2008-03-22 Thread Tom Lane
Bruce Momjian [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 Actually ... isn't this done already?
 http://archives.postgresql.org/pgsql-patches/2008-02/msg00176.php

 Yea, removed because I thought you just did it.

Oh, wait, that's just a -patches entry; it doesn't look like Neil
ever committed it.  Neil, how come?

regards, tom lane



Re: [HACKERS] Idea for minor tstore optimization

2008-03-22 Thread Bruce Momjian
Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  Tom Lane wrote:
  Actually ... isn't this done already?
  http://archives.postgresql.org/pgsql-patches/2008-02/msg00176.php
 
  Yea, removed because I thought you just did it.
 
 Oh, wait, that's just a -patches entry; it doesn't look like Neil
 ever committed it.  Neil, how come?

I thought this was Neil's commit that you just did:

http://archives.postgresql.org/pgsql-committers/2008-03/msg00439.php

but I see now this was another patch queue patch.  I have re-added the
TODO item and included your URL.

-- 
  Bruce Momjian  [EMAIL PROTECTED]http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Idea for minor tstore optimization

2008-03-22 Thread Tom Lane
Bruce Momjian [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 Oh, wait, that's just a -patches entry; it doesn't look like Neil
 ever committed it.  Neil, how come?

 I thought this was Neil's commit that you just did:

No, the one I just put in was the one you have listed under "Avoid
needless copy in nodeMaterial".  That should be removed, but the
tstore optimization thread is still live.

regards, tom lane



Re: [HACKERS] Idea for minor tstore optimization

2008-03-22 Thread Bruce Momjian
Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  Tom Lane wrote:
  Oh, wait, that's just a -patches entry; it doesn't look like Neil
  ever committed it.  Neil, how come?
 
  I thought this was Neil's commit that you just did:
 
  No, the one I just put in was the one you have listed under "Avoid
  needless copy in nodeMaterial".  That should be removed, but the
  tstore optimization thread is still live.

I am thinking I need a todo queue separate from the patches queue,
except I often can't figure out which is which until I am done.

-- 
  Bruce Momjian  [EMAIL PROTECTED]http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +
